---
library_name: transformers
license: mit
datasets:
- MBZUAI/LaMini-instruction
---

# Saving 77% of the Parameters in Large Language Models Technical Report

This repository contains experiment results for the [Saving 77% of the Parameters in Large Language Models Technical Report (PDF)](https://www.researchgate.net/publication/388835829_SAVING_77_OF_THE_PARAMETERS_IN_LARGE_LANGUAGE_MODELS_TECHNICAL_REPORT).

## Abstract

This technical report demonstrates that large language models (LLMs) can maintain their learning capacity while reducing their non-embedding parameters by up to 77%. We achieve this by adapting a parameter reduction technique originally developed for computer vision, replacing dense layers with an optimized subnetwork that contains grouped pointwise convolutions. Using Microsoft's phi-3-mini-4k-instruct as our baseline, we show that our optimized model (kphi-3) achieves comparable validation loss while using only 15-23% of the original non-embedding parameters. All experiments were conducted on a single NVIDIA L4 GPU within a 3-day timeframe, supporting the democratization of AI research. Our findings suggest that current LLM architectures may be substantially overparameterized, opening possibilities for more efficient model training and deployment.

## Key Findings

- Achieved a 77% parameter reduction while maintaining model performance.
- Demonstrated better generalization in the optimized models.
- Improved output quality in qualitative testing.

## Implementation Details

- Base Model: [kphi3](https://github.com/joaopauloschuler/less-parameters-llm).
- Training Dataset: [LaMini](https://huggingface.co/datasets/MBZUAI/LaMini-instruction).
- Architecture: Modified transformer decoder with grouped pointwise convolutions (see the sketch after this list).
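The core change is architectural: dense feed-forward projections are replaced by a subnetwork built from grouped pointwise convolutions. The PyTorch sketch below illustrates the general idea only and is not the kphi-3 implementation; the class name `GroupedPointwiseFF`, the group count of 16, and the hidden size of 3072 are illustrative assumptions, while the intermediate dimensions (8192 for the baseline, 9216 for kphi-3) come from the results table below.

```python
import torch
import torch.nn as nn

class GroupedPointwiseFF(nn.Module):
    """Illustrative feed-forward block built from grouped pointwise
    (kernel size 1) convolutions instead of dense projections."""

    def __init__(self, hidden_dim: int, intermediate_dim: int, groups: int = 16):
        super().__init__()
        # A grouped 1x1 convolution only mixes channels inside each group,
        # so its weight count is roughly 1/groups of the equivalent dense layer.
        self.up = nn.Conv1d(hidden_dim, intermediate_dim, kernel_size=1, groups=groups)
        self.act = nn.GELU()
        self.down = nn.Conv1d(intermediate_dim, hidden_dim, kernel_size=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); Conv1d expects (batch, channels, seq_len).
        h = x.transpose(1, 2)
        h = self.down(self.act(self.up(h)))
        return h.transpose(1, 2)

def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

# Rough weight comparison against a dense feed-forward block of similar shape.
dense = nn.Sequential(nn.Linear(3072, 8192), nn.GELU(), nn.Linear(8192, 3072))
grouped = GroupedPointwiseFF(3072, 9216, groups=16)
print(f"dense FF params:   {count_params(dense):,}")
print(f"grouped FF params: {count_params(grouped):,}")
```

With these illustrative shapes, the grouped block holds under a tenth of the weights of the dense block, which is the kind of saving the report targets.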
## Results

The following table shows LaMini training results for the baseline and the optimized versions. From left to right: experiment label, model name, number of transformer decoder layers, intermediate dimensions, number of non-embedding parameters, percentage of the baseline's non-embedding parameters, training loss, and validation loss.

| label | model | layers | interm. dims. | non-emb. params. | % | Train Loss | Val. Loss |
|:-----:|:------:|:-------:|:-------------:|:----------------:|:---:|:----------:|:----------:|
| [JP47D54C](https://github.com/joaopauloschuler/less-parameters-llm/tree/main/raw/JP47D54C_Baseline_2T.ipynb) | phi-3 | 2 | 8192 | [227M](https://huggingface.co/schuler/experimental-JP47D54C) | 100% | **1.08** | 1.58 |
| [JP47D55C](https://github.com/joaopauloschuler/less-parameters-llm/tree/main/raw/JP47D55C_kphi3_2T.ipynb) | kphi-3 | 2 | 9216 | [**35M**](https://huggingface.co/schuler/experimental-JP47D55C) | **15%** | 1.26 | 1.60 |
| [JP47D56C](https://github.com/joaopauloschuler/less-parameters-llm/tree/main/raw/JP47D56C_kphi3_3T.ipynb) | kphi-3 | 3 | 9216 | [53M](https://huggingface.co/schuler/experimental-JP47D56C) | 23% | 1.21 | **1.57** |

## Quick Links

- [📄 Full Technical Report (PDF)](https://www.researchgate.net/publication/388835829_SAVING_77_OF_THE_PARAMETERS_IN_LARGE_LANGUAGE_MODELS_TECHNICAL_REPORT)
- [🤗 Model Checkpoints on HuggingFace](https://huggingface.co/schuler/)
- [📊 Raw Experiment Files](https://github.com/joaopauloschuler/less-parameters-llm/tree/main/raw)

## Usage

```
from transformers import AutoModelForCausalLM, GenerationConfig, LlamaTokenizer, pipeline
import torch

REPO_NAME = 'schuler/experimental-JP47D56C'

def load_model(local_repo_name):
    # Load the tokenizer, generation configuration and model weights from the hub.
    tokenizer = LlamaTokenizer.from_pretrained(local_repo_name, trust_remote_code=True)
    generator_conf = GenerationConfig.from_pretrained(local_repo_name)
    model = AutoModelForCausalLM.from_pretrained(
        local_repo_name,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,
        attn_implementation="eager")
    # model.to('cuda')
    return tokenizer, generator_conf, model

tokenizer, generator_conf, model = load_model(REPO_NAME)

global_error = ''
try:
    generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
except Exception as e:
    global_error = f"Failed to load model: {str(e)}"

def PrintTest(prompt):
    # Generate and print a completion for the given prompt.
    print(generator(prompt, max_new_tokens=256, do_sample=True, top_p=0.25, repetition_penalty=1.2))

PrintTest("<|user|>\nHello\n<|end|>\n<|assistant|>\n")
PrintTest("<|user|>Hello\n<|end|><|assistant|>")
PrintTest("<|user|>\nWhat is the human body?\n<|end|>\n<|assistant|>\n")
PrintTest("<|user|>What is the human body?\n<|end|><|assistant|>")
PrintTest("<|user|>What is biology?\n<|end|><|assistant|>")
PrintTest("<|user|>Can you comment about democracy?\n<|end|><|assistant|>")
PrintTest("<|user|>Can you provide detailed comments about the concept of democracy?\n<|end|><|assistant|>")
PrintTest("<|user|>Please give me a detailed description of the python computer language.\n<|end|><|assistant|>")
PrintTest("<|user|>If you had a difficult task to complete, how would you complete it?\n<|end|><|assistant|>")
PrintTest("<|user|>Before replying to my question, I would like you to provide two candidate solutions and reflect on these solutions. Then, you'll pick the best as the final reply. What is best: eating healthy or eating economically?\n<|end|><|assistant|>")
```

## Output Examples

```
```

## Citing this Model

```
@article{SchulerRojas_2025,
  title={Saving 77% of the Parameters in Large Language Models Technical Report},
  url={https://www.researchgate.net/publication/388835829_SAVING_77_OF_THE_PARAMETERS_IN_LARGE_LANGUAGE_MODELS_TECHNICAL_REPORT},
  author={Schwarz Schuler, Joao Paulo and Rojas Gómez, Alejandra},
  year={2025}}
```