RaushanTurganbay (HF staff) committed
Commit 0f72cba · verified · 1 Parent(s): 5cfc077

Update for chat template

Files changed (1): README.md +19 -0
README.md CHANGED
@@ -100,6 +100,25 @@ output = model.generate(**inputs, max_new_tokens=100)
  print(processor.decode(output[0], skip_special_tokens=True))
  ```
 
+ -----------
+ From transformers>=v4.48, you can also pass an image URL or local path in the conversation history and let the chat template handle the rest.
+ The chat template will load the image for you and return the inputs as `torch.Tensor`s, which you can pass directly to `model.generate()`.
+
+ ```python
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+             {"type": "text", "text": "What is shown in this image?"},
+         ],
+     },
+ ]
+
+ inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+ output = model.generate(**inputs, max_new_tokens=50)
+ ```
+
  ### Model optimization
 
  #### 4-bit quantization through `bitsandbytes` library
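
Loading a checkpoint in 4-bit with `bitsandbytes` generally comes down to passing a `BitsAndBytesConfig` to `from_pretrained`. A minimal config sketch follows; the model class and checkpoint id in the comment are placeholders, not taken from this repository:

```python
# Sketch only: a typical 4-bit NF4 quantization config for bitsandbytes.
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # store weights in 4-bit
    bnb_4bit_quant_type="nf4",                # NF4 quantization scheme
    bnb_4bit_compute_dtype=torch.float16,     # compute in fp16
)
# Placeholder usage (substitute this repo's model class and id):
# model = AutoModelForImageTextToText.from_pretrained("<model-id>", quantization_config=quantization_config)
```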