Text Generation
Transformers
Safetensors
mistral
text-generation-inference
unsloth
Mistral_Star
Mistral_Quiet
Mistral
Mixtral
Question-Answer
Token-Classification
Sequence-Classification
SpydazWeb-AI
chemistry
biology
legal
code
climate
medical
LCARS_AI_StarTrek_Computer
chain-of-thought
tree-of-knowledge
forest-of-thoughts
visual-spacial-sketchpad
alpha-mind
knowledge-graph
entity-detection
encyclopedia
wikipedia
stack-exchange
Reddit
Cyber-series
MegaMind
Cybertron
SpydazWeb
Spydaz
LCARS
star-trek
mega-transformers
Mulit-Mega-Merge
Multi-Lingual
Afro-Centric
African-Model
Ancient-One
conversational
Inference Endpoints
Update README.md
README.md CHANGED
@@ -95,7 +95,32 @@ language:
- su
---

# "Success comes from defining each task in achievable steps.
Every completed step is a success that brings you closer to your goal."

# Winners create more winners, while losers do the opposite.
Success is a game of winners.

AGI is a collection of tasks with complex responses.
But this is a deception! Your responses are defined by training sets, so where does the AGI sleep?
It sleeps in its emotive responses as well as in roleplay: not roleplay for its own sake, but trained roles.
For example, for a medical model we train for medical triage as well as counselling, medical reasoning, medical programming, medical NLP and data tasks, and image-related recognition, examination and identification.
We can also train for SMILES (molecular structure strings) and other medicine-related tasks. This gives us a medically aware model. We should also train the role as a character: medical roles such as triage, psychiatrist, occupational health and research assistant, as well as fictional characters for personality. That way we can retrain with the roleplaying character yet still obtain genuine responses, not fiction or hallucinations.

So this expert can be filed away and a LoRA extracted!
Extracting a LoRA pulls the expert out of the base model, leaving us with a transferable module. We can now train another expert from scratch.

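As a rough sketch of the extraction idea (illustrative code only, not this repo's pipeline; the function name, shapes and rank are assumptions): the expert lives in the weight delta between the fine-tuned and base checkpoints, and a truncated SVD of that delta yields LoRA factors for each linear layer.

```python
# Sketch: extract a rank-r LoRA from a fine-tuned expert by low-rank
# approximating its weight delta against the base model (per linear layer).
import torch

def extract_lora(base_w: torch.Tensor, tuned_w: torch.Tensor, rank: int = 16):
    """Return factors (A, B) with B @ A ~ tuned_w - base_w."""
    delta = (tuned_w - base_w).float()      # the "expert" lives in this delta
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    root_s = torch.sqrt(S[:rank])           # split S across both factors
    B = U[:, :rank] * root_s                # (out_features, rank)
    A = root_s[:, None] * Vh[:rank]         # (rank, in_features)
    return A, B

# Toy usage: a low-rank delta is recovered almost exactly.
base = torch.randn(512, 512)
tuned = base + torch.randn(512, 16) @ torch.randn(16, 512) * 0.01
A, B = extract_lora(base, tuned, rank=16)
print(torch.dist(B @ A, tuned - base))      # ~0: the expert transfers
```
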
These experts can be combined into sub-models or even mixtures of experts. This route is recommended because each LoRA is trained against the base model rather than as continued pretraining, so stacking LoRAs will not work: they would have to be trained stacked.

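One hedged reading of "mixture of experts" here (the class and shapes are assumptions, not this repo's architecture): keep the base layer frozen and let a small gate route between the extracted LoRA deltas at inference time, rather than stacking adapters that were each trained from the base.

```python
# Sketch: softmax-gate between extracted LoRA experts instead of stacking them.
import torch
import torch.nn as nn

class LoRAExpertMixture(nn.Module):
    """A frozen base linear layer plus a gated mixture of LoRA experts."""

    def __init__(self, base: nn.Linear, experts):
        super().__init__()
        self.base = base.requires_grad_(False)    # frozen base weights
        # experts: list of (A, B) pairs, A: (rank, in), B: (out, rank)
        self.A = nn.ParameterList(nn.Parameter(a) for a, _ in experts)
        self.B = nn.ParameterList(nn.Parameter(b) for _, b in experts)
        self.gate = nn.Linear(base.in_features, len(experts))  # the router

    def forward(self, x):
        w = self.gate(x).softmax(dim=-1)          # (..., n_experts)
        y = self.base(x)
        for i, (a, b) in enumerate(zip(self.A, self.B)):
            y = y + w[..., i:i + 1] * (x @ a.T @ b.T)  # gated expert delta
        return y
```
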
So now we can have a multi-expert model trained with roles.

But we also need to train for tasks. So we train the base model again for tasks such as NLP and story writing, using generalised data as well as synthetic data. We then merge these models together in various combinations to create more generalised models, in effect hiding the past experts in the sub-layers by merging the tensors.
This sounds confusing, but it is all about embedding experts, roles and tasks: this gives us the heart of the model.

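As a hedged sketch of the merge step, a plain linear "soup" over matching tensors (the helper is an assumption; real merges often use more careful schemes such as SLERP or TIES):

```python
# Sketch: a linear merge that averages matching tensors from several models,
# burying each past expert inside the combined weights.
import torch

def merge_state_dicts(state_dicts, weights):
    """Weighted average of matching tensors from several checkpoints."""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should sum to 1"
    return {
        name: sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

# Usage: hide two task experts inside one set of tensors.
# merged = merge_state_dicts([expert_a.state_dict(), expert_b.state_dict()],
#                            weights=[0.5, 0.5])
```
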
On top of this stack we finally train for conversations and dialogues,
as well as general tasks with chain-of-thought and ReAct, plus self-critique, model ranking, intent detection, requirements gathering and various other agent tasks such as general Q&A and instruct, along with business data.
The result is a generalised intelligence.

Now, since this is a language model, we should also add modalities and other inference heads, enabling the tensors to retask the embedding space with enhanced richness!

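As a last hedged sketch (the backbone interface and head sizes are assumptions): extra inference heads share the backbone's hidden states, so new modalities or tasks can retask the same embedding space.

```python
# Sketch: a language-model backbone with the usual LM head plus an
# extra task head reading the same hidden states.
import torch.nn as nn

class MultiHeadLM(nn.Module):
    def __init__(self, backbone: nn.Module, hidden: int, vocab: int, n_labels: int):
        super().__init__()
        self.backbone = backbone                     # yields (batch, seq, hidden)
        self.lm_head = nn.Linear(hidden, vocab)      # next-token logits
        self.cls_head = nn.Linear(hidden, n_labels)  # e.g. intent detection

    def forward(self, input_ids):
        h = self.backbone(input_ids)
        return self.lm_head(h), self.cls_head(h[:, -1])  # per-token + sequence
```
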
# Leroy Dyer (1972-Present)
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>