Update README.md

README.md (CHANGED)

# Model Card for ThinkLink Gemma-2-2B-IT

## Model Summary

The **ThinkLink Gemma-2-2B-IT** model helps users solve coding test problems by providing guided hints and questions, encouraging self-reflection and critical thinking rather than directly offering solutions.

### Model Description

**ThinkLink Gemma-2-2B-IT** is a fine-tuned version of the **Gemma-2-2B-IT** model, aimed specifically at coding test preparation. It prompts users with questions that guide them toward the solution, improving learning outcomes by focusing on problem-solving strategies rather than rote solution delivery.

- **Developer:** MinnieMin
- **Base Model:** [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it)
- **Model Type:** Causal Language Model (AutoModelForCausalLM)
- **Language:** English
- **License:** Gemma License
- **Fine-tuning Method:** Low-Rank Adaptation (LoRA)
- **Model Size:** 2.2B parameters

This model is intended to assist users by asking strategic questions about coding test problems, such as identifying problem types, challenging parts, and formulating structured solutions. The fine-tuning is centered on making the model act like a tutor, guiding users through the thought process rather than merely providing answers.

---

## Model Sources

- **Repository:** [ThinkLink Gemma-2-2B-IT Model](https://huggingface.co/MinnieMin/gemma-2-2b-it-ThinkLink)
- **Fine-tuned on:** [RayBernard/leetcode](https://huggingface.co/datasets/RayBernard/leetcode)

---

## Intended Use

### Direct Use

This model is intended for educational use, particularly coding test preparation: it generates hints and structured questions that help users solve problems step by step. It is tailored to users who want to improve their coding and problem-solving skills by engaging with the problem rather than seeking direct solutions.
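
For a concrete picture of this kind of interaction, here is a minimal usage sketch that loads the checkpoint listed under Model Sources with the standard `transformers` API and asks for guided hints on a LeetCode-style problem. The prompt wording and generation settings are illustrative assumptions, not values specified by this card.

```python
# Minimal usage sketch: load ThinkLink Gemma-2-2B-IT and ask for guided
# hints. The prompt text and generation settings are illustrative
# assumptions, not values from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinnieMin/gemma-2-2b-it-ThinkLink"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # fp16, matching the Training Procedure notes
    device_map="auto",
)

problem = (
    "Given an array of integers nums and an integer target, return the "
    "indices of the two numbers that add up to target."
)
messages = [
    {
        "role": "user",
        "content": (
            f"Here is a coding test problem:\n{problem}\n"
            "Guide me with questions and hints instead of giving the solution."
        ),
    }
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```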

### Downstream Use

- Can be used in coding education platforms as a guided tutor for programming interview preparation.
- Useful for platforms providing learning assistance in problem-solving, software development, or competitive programming.
- With further fine-tuning, it could be adapted to other domains that benefit from structured problem-solving guidance, such as mathematics or algorithmic reasoning.

### Out-of-Scope Use

- It is not suitable for users looking for direct and immediate solutions, as the model is designed to guide rather than answer directly.
- It is not optimized for general-purpose natural language generation or for tasks requiring complex reasoning outside coding-related problems.
- It may not perform well on advanced mathematical or scientific computations without further fine-tuning.

---

### Recommendations

Users should be aware that the model encourages self-reflection and deliberate thinking, which may not suit those seeking quick solutions. Its effectiveness depends on user interaction and the problem context, and it may not perform well in certain programming domains without further fine-tuning.

### Training Data

The model was fine-tuned on a dataset primarily composed of **LeetCode** coding problems and solutions. The dataset was processed to focus on guiding users through steps such as identifying problem types, edge cases, and key strategies rather than providing direct solutions.

- **Dataset:** [RayBernard/leetcode](https://huggingface.co/datasets/RayBernard/leetcode)
- **Size:** Approximately 10,000 structured coding problems and related explanations
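
To make the preprocessing idea above concrete, here is a rough sketch of wrapping raw LeetCode entries in tutor-style prompts with the `datasets` library. The column names (`problem`, `solution`) and the placeholder hint target are assumptions chosen for illustration; the card does not document the actual schema or pipeline.

```python
# Rough sketch of the preprocessing idea described above: wrap each raw
# LeetCode entry in a tutor-style prompt. The column names ("problem",
# "solution") and the placeholder hint target are assumptions; check the
# dataset card for the real schema.
from datasets import load_dataset

dataset = load_dataset("RayBernard/leetcode", split="train")

PROMPT_TEMPLATE = (
    "Here is a coding test problem:\n{problem}\n"
    "Guide me with questions about the problem type, edge cases, "
    "and key strategy instead of giving the full solution."
)

def to_tutor_example(example):
    # The guided-hint target would come from a separate authoring step;
    # here it is only a placeholder so the mapping stays self-contained.
    return {
        "prompt": PROMPT_TEMPLATE.format(problem=example["problem"]),
        "target": "<guided hints and questions go here>",
    }

tutor_dataset = dataset.map(to_tutor_example, remove_columns=dataset.column_names)
print(tutor_dataset[0]["prompt"][:300])
```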

### Training Procedure

- **Hardware:** The model was trained on one **L4 GPU** using mixed precision (fp16) to optimize resource usage.
- **Training Time:** Approximately 17 hours.
- **Training Method:** Fine-tuning was performed with **Low-Rank Adaptation (LoRA)**, which adapts the base model's weights through small low-rank matrices and therefore trains far fewer parameters.
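
As a rough illustration of such a setup, the sketch below attaches LoRA adapters to the base model with the `peft` library. The rank, alpha, dropout, and target modules are assumptions for illustration; the card does not list the actual hyperparameters.

```python
# Minimal LoRA sketch with peft. The rank, alpha, dropout, and target
# modules are illustrative assumptions, not the card's actual settings.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    torch_dtype=torch.float16,  # mixed precision, as noted above
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```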

---

#### Summary

The model effectively guided users through a range of coding challenges, providing structured hints and questions that promoted deeper understanding.

## Citation

**BibTeX:**