---
library_name: transformers
tags:
- Think
- Plan
- Reason
- Reflect
---
# Intuitive Thinker
To enhance the reasoning capabilities of smaller language models, Intuitive Thinker applies a system of thinking that incorporates mental models, structured Chain-of-Thought processes, and thoughtful reflection before responding to user queries.
***Problem:***
Smaller transformer models exhibit weaker reasoning capabilities than their larger counterparts, whose advanced reasoning stems from broader connection networks that facilitate cross-domain inference.
***Solution:***
A two-step approach:
> 1. Fine-tuning: Fine-tune Llama 3.1, a smaller transformer model with 8 billion parameters, on an enhanced reasoning dataset to bolster its cognitive capabilities.
> 2. Revealing internal processes: Guide the fine-tuned model with system-of-thinking prompts (Think, Plan, Reasoning, and Reflection) that surface the model's internal thought process and the rationale behind its answers.
***Available Mental Models***
Systems of thinking, reasoning, and reflection:
> 1. Chain-of-Thoughts
> 2. Thinking Fast and Slow
> 3. Critical Thinking
> 4. Iceberg Mental Model
> 5. Second Order Thinking
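Each mental model corresponds to a prompt template in the `intuitive_thinker` package. A minimal selection sketch: `MentalModel.CHAIN_OF_THOUGHTS` is confirmed by the usage example below, while the constant names for the other four models are assumptions inferred from this list.
```python
import json
from intuitive_thinker.mental_model import MentalModel

question = "count number of r in word strawberry?"

# CHAIN_OF_THOUGHTS appears in the HF usage example below; names such as
# MentalModel.SECOND_ORDER_THINKING are assumed -- check the package for
# the exact constants.
mental_model = MentalModel(MentalModel.CHAIN_OF_THOUGHTS)
prompt = json.loads(mental_model(question))
print(prompt["system_message"])  # system-of-thinking instructions
print(prompt["user_input"])      # the wrapped question
```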
***Models***
> - Base model: [mychen76/Llama-3.1_Intuitive-Thinker](https://huggingface.co/mychen76/Llama-3.1_Intuitive-Thinker)
> - Quantized: [mychen76/Llama-3.1_Intuitive-Thinker_8B_2309_GGUF](https://huggingface.co/mychen76/Llama-3.1_Intuitive-Thinker_8B_2309_GGUF)
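The quantized weights can be served with llama.cpp; a hedged sketch using `llama-cpp-python` (the `filename` glob is an assumption; check the repository's file list for the actual quantization names):
```python
from llama_cpp import Llama

# Assumption: the repo ships a Q5-quantized GGUF file; adjust the glob to
# match the file actually listed in the repository.
llm = Llama.from_pretrained(
    repo_id="mychen76/Llama-3.1_Intuitive-Thinker_8B_2309_GGUF",
    filename="*q5*.gguf",  # hypothetical pattern
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "count number of r in word strawberry?"}],
    max_tokens=512,
    temperature=0.1,
)
print(out["choices"][0]["message"]["content"])
```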
***HF Usage***
Notebook: https://github.com/minyang-chen/intuitive_thinker/blob/main/intuitive_thinker_demo.ipynb
```python
from intuitive_thinker.mental_model import MentalModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
import json
question="count number of r in word strawberry?"
## format question using mental model template
mental_model = MentalModel(MentalModel.CHAIN_OF_THOUGHTS)
prompt = json.loads(mental_model(question))
bnb_config = BitsAndBytesConfig(
load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype="float16", bnb_4bit_use_double_quant=True
)
# Prepare the input as before
messages = [
{"role": "system", "content": prompt['system_message']},
{"role": "user", "content": prompt['user_input'] }
]
hf_model_id = "mychen76/Llama-3.1_Intuitive-Thinker"
# 1: Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(hf_model_id, device_map="auto", quantization_config=bnb_config, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(hf_model_id)
# 2: Apply the chat template
formatted_chat = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
#print("Formatted chat:\n", formatted_chat)
# 3: Tokenize the chat (This can be combined with the previous step using tokenize=True)
inputs = tokenizer(formatted_chat, return_tensors="pt", add_special_tokens=False)
# Move the tokenized inputs to the same device the model is on (GPU/CPU)
inputs = {key: tensor.to(model.device) for key, tensor in inputs.items()}
#print("Tokenized inputs:\n", inputs)
# 4: Generate text from the model
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.1)
#print("Generated tokens:\n", outputs)
# 5: Decode the output back to a string
decoded_output = tokenizer.decode(outputs[0][inputs['input_ids'].size(1):], skip_special_tokens=True)
print("Decoded output:\n", decoded_output)
```
Result:
```bash
Decoded output:
a. Briefly analyze the question and outline our approach:
The question asks for the count of the letter 'r' in the word 'strawberry'. This is a simple text analysis task that requires examining the word for the target letter.
b. Present a clear plan of steps to solve the problem:
1. Write out the word 'strawberry' in its full form.
2. Count the occurrences of the letter 'r' in the word.
3. Provide the final count as the answer.
c. Use a "Chain of Thought" reasoning process:
1. Start by writing out the word: s-t-r-a-w-b-e-r-r-y
2. Now, count the 'r's: r (1), r (2), r (3)
3. We have found all 'r's in the word.
a. Review our reasoning:
We have broken down the task into simple steps and followed through with the counting process.
b. Check for potential errors or oversights:
- We have accounted for all letters in the word.
- We have specifically targeted the letter 'r' for counting.
- We have counted each occurrence of 'r' correctly.
c. Confirm our conclusion:
Yes, our conclusion is correct. The word 'strawberry' contains 3 occurrences of the letter 'r'.
```
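With a recent transformers version, the same chat can be run more compactly through the `pipeline` API; a minimal sketch reusing the mental-model prompt from the example above:
```python
import json
import torch
from transformers import pipeline
from intuitive_thinker.mental_model import MentalModel

# Build the Chain-of-Thought prompt exactly as in the step-by-step example.
prompt = json.loads(MentalModel(MentalModel.CHAIN_OF_THOUGHTS)("count number of r in word strawberry?"))

generator = pipeline(
    "text-generation",
    model="mychen76/Llama-3.1_Intuitive-Thinker",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": prompt["system_message"]},
    {"role": "user", "content": prompt["user_input"]},
]
out = generator(messages, max_new_tokens=512, temperature=0.1)
print(out[0]["generated_text"][-1]["content"])  # assistant turn only
```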
***Ollama.com***
https://ollama.com/mychen76/llama3.1-intuitive-thinker
For ease of use, each mental model is packaged as its own Ollama model tag:
1. Chain-of-Thoughts: [mychen76/llama3.1-intuitive-thinker:chain-of-thoughts.q5]
2. Thinking Fast and Slow: [mychen76/llama3.1-intuitive-thinker:thinking-fast-n-slow.q5]
3. Critical Thinking: [mychen76/llama3.1-intuitive-thinker:critical-thinking.q5]
4. Iceberg Mental Model: [mychen76/llama3.1-intuitive-thinker:iceberg-mental-model.q5]
5. Second Order Thinking: [mychen76/llama3.1-intuitive-thinker:second-order-thinking.q5]
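A hedged sketch of calling one of these tags from Python with the `ollama` client library (any tag above can be substituted; the mental-model system prompt is presumably baked into each tag's Modelfile, so a plain question suffices):
```python
import ollama

# Each Ollama tag packages one mental model, so no system prompt is needed here.
response = ollama.chat(
    model="mychen76/llama3.1-intuitive-thinker:chain-of-thoughts.q5",
    messages=[{"role": "user", "content": "count number of r in word strawberry?"}],
)
print(response["message"]["content"])
```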
### Samples
***Sample: Chain-of-Thoughts***
=========== user question ===================
```
The Patel family is planning a road trip from Toronto to Vancouver, covering a total distance of 4,400 km. They want to divide the drive over 5 days, with each day's drive being 15 km longer than the previous day. If they drive x km on the first day, what is the value of x that will allow them to complete the trip in 5 days?
```
=========== model response ===================
```
To find the value of x, we need to determine how much distance the family needs to cover each day and ensure it adds up to 4,400 km over 5 days.
Let's break down their driving plan:
1. They drive x km on the first day.
2. Each subsequent day's drive is 15 km longer than the previous day.
3. The trip covers a total distance of 4,400 km.
We can express the total distance covered as a sum of distances driven each day:
Day 1: x
Day 2: x + 15
Day 3: (x + 15) + 15 = x + 30
Day 4: (x + 30) + 15 = x + 45
Day 5: (x + 45) + 15 = x + 60
The sum of these distances should equal the total trip distance:
x + (x + 15) + (x + 30) + (x + 45) + (x + 60) = 4,400 km
Let's simplify this equation by combining like terms:
5x + 150 = 4,400 km
Checking our reasoning process:
- We've correctly expressed the distances driven each day.
- The equation we formed is accurate and reflects the problem statement.
- Let's solve for x to confirm our understanding.
Subtracting 150 from both sides of the equation gives us:
5x = 4,250 km
Dividing both sides by 5 yields:
x = 850 km
Our calculation appears sound. This value for x ensures the family will drive equal distances each day, with the last two days being 30 and 45 km longer than the previous ones.
```
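The sample's algebra is easy to spot-check: with x = 850, the five daily distances are 850, 865, 880, 895, and 910 km, which sum to 4,400 km.
```python
# Verify: x + (x+15) + (x+30) + (x+45) + (x+60) = 5x + 150 = 4400  =>  x = 850
x = (4400 - 150) / 5
days = [x + 15 * i for i in range(5)]
print(x, days, sum(days))  # 850.0 [850.0, 865.0, 880.0, 895.0, 910.0] 4400.0
```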
## Model Details
- **Developed by:** mychen76@gmail.com
- **Model type:** Llama
- **License:** Llama 3.1 license (follows the base model's license)
- **Finetuned from model:** Llama 3.1 8B
## Training Details
### Training Data
[More Information Needed]
### Training Procedure
Fine-tuned from Llama 3.1 8B on an enhanced reasoning dataset (a generic illustrative sketch follows).
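No training recipe is published here; purely as an illustration of this kind of setup, a generic supervised fine-tuning sketch with TRL's `SFTTrainer` (the dataset path, base model ID, and hyperparameters are placeholders, not the author's actual configuration):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder reasoning dataset in chat/completion format -- NOT the dataset
# actually used for Intuitive Thinker.
dataset = load_dataset("json", data_files="reasoning_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed base checkpoint
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama31-intuitive-thinker-sft"),
)
trainer.train()
```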