# Training details

- Dataset: Llama tokenizer, 4000
- Distribution: 85% / 15% (only for attributes)
- Number of epochs: 3
- Progressive training: no
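
Below is a minimal sketch of how this configuration might be wired up, assuming a Hugging Face `datasets`/`transformers` setup, an 85%/15% train/validation split, and a standard Llama tokenizer checkpoint. The dataset path, model checkpoint, text field name, and max length are illustrative assumptions, not taken from these notes.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments

# Hypothetical paths/names -- not specified in the notes above.
DATASET_PATH = "data/train.json"
TOKENIZER_CHECKPOINT = "meta-llama/Llama-2-7b-hf"  # assumed Llama tokenizer source

tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_CHECKPOINT)

dataset = load_dataset("json", data_files=DATASET_PATH)["train"]

# 85% / 15% split, matching the distribution noted above.
splits = dataset.train_test_split(test_size=0.15, seed=42)

def tokenize(batch):
    # "text" field and max_length are assumptions for illustration.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = splits.map(tokenize, batched=True)

# Hyperparameters from the notes: 3 epochs, no progressive schedule.
args = TrainingArguments(
    output_dir="out",
    num_train_epochs=3,
)
```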