Intellect V0.2 (1.6B) is a small model that is still under development and has not been extensively tested. We do not recommend deploying it for production use, but it performs well for private applications. Feedback is welcome.

Introduction

We introduce Intellect V0.2 (1.6B), our first reasoning model. It is a full-parameter fine-tune of GPT2-XL (MIT-licensed), trained on the Pakistan-China-Alpaca dataset (also MIT-licensed).
Intellect V0.2 (1.6B) is licensed under Apache 2.0, meaning you are free to use it in personal projects. However, this fine-tune is highly experimental, and we do not recommend it for serious, production-ready deployments.
You can find the FP32 version here.

Usage

Since the training data consisted only of single-turn pairs (one message in, one message out), the model often repeats itself once the user sends a follow-up question.
The chat template is Alpaca, which looks like the following:

### Instruction:
{{{ INPUT }}}
### Response:
{{{ OUTPUT }}}
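
As a usage illustration, here is a minimal inference sketch assuming the llama-cpp-python bindings; the GGUF file name and the generation settings are placeholders, not part of this release:

from llama_cpp import Llama

# File name is hypothetical; substitute the quantization you downloaded.
llm = Llama(model_path="Intellect_V0.2-1.6B.Q8_0.gguf", n_ctx=1024)

def ask(instruction: str) -> str:
    # Wrap the single-turn message in the Alpaca template shown above.
    prompt = f"### Instruction:\n{instruction}\n### Response:\n"
    out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
    return out["choices"][0]["text"].strip()

print(ask("Summarize what a GGUF file is."))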

Training Details

We used SGD (instead of AdamW) with an initial learning rate of 1.0e-5. Because SGD keeps no per-parameter moment estimates (AdamW stores two extra tensors per parameter), this allowed us to train the model with a batch size of 1 and a maximum context length of 1K tokens (the maximum GPT2-XL supports) while staying within the 64GB of memory allocated to this project.
Training was completed in under a day, which is why PhantasiaAI was unavailable on 05/02/2025 from 00:00 to 19:00. The service is now fully operational.
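
For illustration, a rough sketch of this setup, assuming PyTorch and the Hugging Face transformers GPT-2 classes; the actual training code is not published, and data loading is omitted:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")

# Plain SGD: no per-parameter moment tensors, unlike AdamW.
optimizer = torch.optim.SGD(model.parameters(), lr=1.0e-5)

def train_step(example: str) -> float:
    # Batch size 1: one Alpaca-formatted example per step,
    # truncated to GPT2-XL's 1024-token context window.
    enc = tokenizer(example, truncation=True, max_length=1024, return_tensors="pt")
    loss = model(**enc, labels=enc["input_ids"]).loss  # standard causal-LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()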


Visit our website.
Check out our Character.AI alternative.
Support us financially.
