Innovatronix Home Automation Language Model (Beta)
This model, created by the Innovatronix team, is a lightweight language model tailored for home automation applications. It is a fine-tuned version of Google's Flan-T5, trained on a custom dataset of home automation interactions. The model is designed for basic conversational use and is currently in beta.
Features
Conversational Control: Engage in dialogue with the model to automate smart home functions, issuing commands to and retrieving data from connected smart devices through natural language (see the sketch after this list).
Lightweight and Efficient: Optimized for low storage and compute requirements, so it can be deployed in local environments without excessive resource consumption.
Versatile Deployment: Deployable across platforms, including mobile applications and web interfaces, letting users control their smart home from their preferred device.
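As a sketch of the kind of integration the conversational-control feature enables (the device API and the wrapper function here are hypothetical, not part of this model card), a host application might route the model's reply to a smart-home backend:

# Hypothetical glue code, not part of this model card: route a generated
# reply to whatever device API your smart-home setup exposes.
def handle_command(user_prompt, model, tokenizer, send_to_device):
    # generate_response is defined in the Example usage section below
    reply = generate_response(user_prompt, model, tokenizer)
    send_to_device(reply)  # e.g. your hub's REST or MQTT client
    return reply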
Training Data
The model was fine-tuned on a handcrafted dataset covering a wide range of commands, queries, and contextual information for controlling and managing smart devices in a home setting.
You can view and download the dataset here: Dataset
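For illustration only (the actual schema and field names may differ; check the linked dataset), entries of this kind pair a natural-language prompt with a target response:

# Illustrative only: a prompt/response pair of the kind described above.
# The real dataset's format may differ; see the linked dataset for details.
example_pair = {
    "prompt": "Turn off the bedroom lights",
    "response": "Okay, turning off the bedroom lights.",
}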
Example usage
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Robin246/inxai_v1.1")
model = AutoModelForSeq2SeqLM.from_pretrained("Robin246/inxai_v1.1")
# Adjust the parameters if needed
def generate_response(input_prompt, model, tokenizer):
    input_text = f"Input prompt: {input_prompt}"
    input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=64, padding="max_length", truncation=True)
    output_ids = model.generate(input_ids,
                                max_length=256,
                                num_return_sequences=1,
                                num_beams=2,
                                early_stopping=True,
                                #do_sample=True,
                                #temperature=0.8,
                                #top_k=50
                                )  # You can vary top_k or add other generation parameters
    generated_output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return generated_output

# Simple interactive loop; type "Quit" to exit
while True:
    user_input = input("Enter prompt: ")
    if user_input == "Quit":
        break
    reply = generate_response(user_input, model, tokenizer)
    print("Generated Reply (Robin246/inxai_v1.1):", reply)

# INXAI from Hugging Face: 'Robin246/inxai_v1.1'
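If beam search produces replies that feel too deterministic, the commented-out parameters above switch generation to sampling. A minimal variant of the generate call is shown below; these are standard Hugging Face Transformers generate() arguments, not settings specific to this model:

# Sampling-based decoding: replace the generate call above with this
# to get more varied replies. Tune temperature and top_k to taste.
output_ids = model.generate(input_ids,
                            max_length=256,
                            do_sample=True,
                            temperature=0.8,
                            top_k=50)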
Citation
Developers:
- Robinkumar
- Kiransekar
- Magesh
- Lathikaa Shri
Base model credits
This model was fine-tuned from Google's Flan-T5 model using a custom dataset.