nielsr (HF staff) committed
Commit 0bbebd4 · 1 parent: bb332cc

Update README.md

Files changed (1):
1. README.md (+3, -3)
README.md CHANGED
@@ -35,8 +35,8 @@ import torch
 num_frames = 16
 video = list(np.random.randn(16, 3, 224, 224))
 
-feature_extractor = VideoMAEFeatureExtractor.from_pretrained("MCG-NJU/videomae-base")
-model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base")
+feature_extractor = VideoMAEFeatureExtractor.from_pretrained("MCG-NJU/videomae-base-short")
+model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base-short")
 
 pixel_values = feature_extractor(video, return_tensors="pt").pixel_values
 
@@ -48,7 +48,7 @@ outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
 loss = outputs.loss
 ```
 
-For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
+For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/videomae.html#).
 
 ## Training data
 
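For context, here is a minimal sketch of how the README snippet reads after this commit, assuming the VideoMAE pre-training API in Transformers. The `bool_masked_pos` construction between the two hunks is not shown in this diff and is filled in from the VideoMAE documentation, so treat those lines as an assumption rather than part of the change.

```python
# Sketch of the full snippet after the model-id update in this commit.
# The masking lines between the two diff hunks are reconstructed from the
# VideoMAE docs and are an assumption, not part of this diff.
from transformers import VideoMAEFeatureExtractor, VideoMAEForPreTraining
import numpy as np
import torch

num_frames = 16
# Dummy "video": 16 frames of random 3x224x224 pixels.
video = list(np.random.randn(16, 3, 224, 224))

feature_extractor = VideoMAEFeatureExtractor.from_pretrained("MCG-NJU/videomae-base-short")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base-short")

pixel_values = feature_extractor(video, return_tensors="pt").pixel_values

# Random boolean mask over the tube tokens, as expected by VideoMAEForPreTraining.
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
```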