Cisco iNAM

Cisco iNAM (Intelligent Networking, Automation, and Management) is a nano-sized LLM for answering questions about Cisco Datacenter Products. It is fine-tuned from the pretrained Phi-2 model from Microsoft Research.

Model Details

Model Description

The model is quantized to 4-bit so that inference can run on physical deployments of datacenter products. The initial launch is planned for Nexus Dashboard.

  • Developed by: Cisco
  • Funded by: Cisco
  • Model type: Transformer
  • Language(s) (NLP): English
  • License: Cisco Commercial

Model Sources

  • Repository: [More Information Needed]
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Prompt Format

iNAM uses ChatML as the prompt format.

It's recommended to always prompt with a system instruction (use whatever system prompt you like):

<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
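
For local inference with the 4-bit GGUF build, a runtime that understands ChatML can apply this template automatically. The sketch below is illustrative, not part of the official card: it assumes llama-cpp-python as the runtime and uses a placeholder GGUF filename, wrapping the example prompt shown above.

```python
# Minimal sketch using llama-cpp-python (an assumption; any GGUF runtime with
# ChatML support works). The model filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./iNAM-2.7B-v1.0-beta.Q4_K_M.gguf",  # placeholder local GGUF file
    chat_format="chatml",  # matches the prompt format described above
    n_ctx=2048,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are a helpful assistant for Python which outputs in Markdown format."},
        {"role": "user",
         "content": "Write a function to calculate the Fibonacci sequence"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```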
Technical Specifications

  • Format: GGUF (4-bit quantized)
  • Model size: 2.78B params
  • Architecture: phi2
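
If the GGUF weights are hosted on the Hugging Face Hub under ndavidson/iNAM-2.7B-v1.0-beta (the repository this card belongs to), they can be fetched with huggingface_hub. This is a sketch under that assumption; the filename below is a placeholder and must match an actual file in the repository.

```python
# Sketch: download a GGUF file from the Hugging Face Hub.
# The repo ID is taken from this card; the filename is a placeholder.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="ndavidson/iNAM-2.7B-v1.0-beta",
    filename="iNAM-2.7B-v1.0-beta.Q4_K_M.gguf",  # placeholder filename
)
print(local_path)
```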