Birchlabs committed · Commit 33f9a21 · verified · 1 Parent(s): 9de47b2

Update README.md

Files changed (1): README.md (+8 -1)

README.md CHANGED
@@ -34,7 +34,7 @@ Processed distribution of Google's [C4](https://www.tensorflow.org/datasets/catalog/c4)
 
 Uses the text data from [`allenai/c4`](https://huggingface.co/datasets/allenai/c4).
 
-`en` subset only.
+Includes `en` subset only.
 
 T5 tokenizer was applied to the text.
 Distributed as a ragged array.
@@ -49,6 +49,13 @@ Download size of all shards:
 | Test | 299M | 8 | 44M | 179K |
 | **Total** | **296G** | _N/A_ | _N/A_ | _N/A_ |
 
+The data is uncompressed, to preserve support for random seeking.
+`.data.npy` would probably benefit from compression, because token sequences exhibit patterns.
+
+Tokenization compresses the text to ~44% of its original size.
+Allen AI's original gzipped JSONL text data compressed it to ~39% of its original size (a ~61% reduction).
+So the tokenized data is about 13% bigger than the gzipped text (0.44 / 0.39 ≈ 1.13).
+
 Download everything via:
 
 ```bash
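
To illustrate the random-seeking rationale in the added lines: with a flat token file plus a per-document lengths array, any document can be sliced out of a memory-mapped shard without decompressing anything. A minimal sketch, assuming a hypothetical `*.len.npy` lengths file alongside the `.data.npy` named above (the shard file names, lengths file, and `uint16` dtype are assumptions, not part of this dataset's documented layout):

```python
import numpy as np

# Memory-map the flat token stream; nothing is read until we slice.
# ".data.npy" comes from the README; the shard/lengths file names and
# the uint16 dtype are assumptions for illustration.
data = np.load("c4-train.00000.data.npy", mmap_mode="r")
lengths = np.load("c4-train.00000.len.npy")  # tokens per document (assumed)

# One prefix-sum gives each document's start offset into the flat stream.
offsets = np.concatenate(([0], np.cumsum(lengths)))

def get_document(i: int) -> np.ndarray:
    """Slice out document i's token ids without reading the whole shard."""
    return np.asarray(data[offsets[i] : offsets[i + 1]])

print(get_document(12345))
```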
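
And to sanity-check the claim that `.data.npy` would benefit from compression, one could gzip a sample of the token stream and compare byte sizes; a rough sketch (file name, sample size, and compression level are arbitrary):

```python
import gzip
import numpy as np

# Rough compressibility check: gzip the first ~1M tokens of a shard's
# flat token stream and compare sizes.
data = np.load("c4-train.00000.data.npy", mmap_mode="r")
sample = np.asarray(data[: 1 << 20]).tobytes()
ratio = len(gzip.compress(sample, compresslevel=6)) / len(sample)
print(f"gzip'd sample is {ratio:.0%} of its uncompressed size")
```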