---
license: cc0-1.0
task_categories:
- text-classification
- text-generation
- text-to-speech
language:
- my
pretty_name: my_written_corpus
size_categories:
- 10M<n<100M
tags:
- nlp
- chatgpt
- tts
- corpus
- language
---

# Myanmar Written Corpus

The **Myanmar Written Corpus** is a comprehensive collection of high-quality written Myanmar text, designed to address the lack of large-scale, openly accessible resources for Myanmar Natural Language Processing (NLP). It is tailored to support tasks such as text-to-speech (TTS), automatic speech recognition (ASR), machine translation, and text generation.

This dataset serves as a critical resource for researchers and developers aiming to advance Myanmar language technologies.
## Dataset Overview

- **Language**: Myanmar
- **Format**: Parquet
- **Size**: 10 million sentences
- **License**: [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)
- **Purpose**: To support research and development in Myanmar NLP tasks.
## Features

| Field Name        | Description                                                          |
|-------------------|----------------------------------------------------------------------|
| `sentence_id`     | A unique numeric ID for each sentence.                               |
| `text`            | The original written Myanmar sentence.                               |
| `string_length`   | The number of characters in the sentence.                            |
| `length_category` | Categorization of the sentence length (e.g., `<100`, `<200`, etc.).  |
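
The sketch below is a quick, optional sanity check on these fields: it loads the train split, prints the declared schema, and spot-checks that `string_length` matches Python's character count of `text`. The equality check assumes the published lengths count Unicode code points.

```python
from datasets import load_dataset

# Minimal sketch: inspect the fields described in the table above.
ds = load_dataset("freococo/myanmar-written-corpus", split="train")

print(ds.features)  # declared schema

# Spot-check a few rows; assumes string_length counts Unicode code points,
# i.e. that it should equal Python's len(text).
for row in ds.select(range(5)):
    matches = row["string_length"] == len(row["text"])
    print(row["sentence_id"], row["length_category"], row["string_length"], matches)
```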
## Dataset Statistics

- **Total Sentences**: 10 million.
- **Character Length Ranges**:
  - `<100`: To be analyzed.
  - `<200`: To be analyzed.
  - `<300`: To be analyzed.
  - ... (Expand as needed; a sketch for computing these counts follows this list.)
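
Since the per-range counts are still marked as pending, a short script can tabulate them directly from the data. The sketch below assumes the `length_category` labels follow the `<100`, `<200`, ... scheme described in the Features table.

```python
from collections import Counter

from datasets import load_dataset

# Count how many sentences fall into each length_category bucket.
ds = load_dataset("freococo/myanmar-written-corpus", split="train")
counts = Counter(ds["length_category"])

for category, count in sorted(counts.items()):
    print(f"{category}: {count}")
```

Note that `ds["length_category"]` materializes the whole column in memory; for 10 million short labels this is usually acceptable, but the streaming pattern shown later avoids it entirely.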
## Example Usage

The dataset can be loaded using the `datasets` library from Hugging Face:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("freococo/myanmar-written-corpus")

# Access features
print(dataset["train"][0])   # Print the first example
print(dataset["train"][:5])  # Print the first 5 examples

# Access specific columns
texts = dataset["train"]["text"]             # Extract all sentences
lengths = dataset["train"]["string_length"]  # Extract string lengths
```
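
For larger experiments it may be preferable not to materialize all 10 million rows at once. The snippet below is one possible pattern (not part of the example above) that streams the dataset and keeps only the first 1,000 short sentences:

```python
from datasets import load_dataset

# Stream the dataset instead of downloading and decoding everything up front.
streamed = load_dataset("freococo/myanmar-written-corpus", split="train", streaming=True)

# Keep the first 1,000 sentences shorter than 100 characters.
short_sentences = [
    row["text"]
    for row in streamed.filter(lambda row: row["string_length"] < 100).take(1000)
]
print(len(short_sentences), short_sentences[0])
```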
## Applications

This dataset is suitable for:

- **Training NLP models**: Enables fine-tuning transformers for Myanmar language tasks (see the tokenizer sketch after this list).
- **Text-to-Speech (TTS)**: Provides diverse text for TTS synthesis models.
- **Automatic Speech Recognition (ASR)**: Aids in building language models for ASR systems.
- **Machine Translation (MT)**: Offers a resource for Myanmar-to-other-languages translation models.
- **Text Generation**: Supports training for Myanmar text generation models.
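
As one concrete starting point for the model-training use cases above, the sketch below trains a small BPE tokenizer on the `text` column with the Hugging Face `tokenizers` library. The vocabulary size, special tokens, and whitespace pre-tokenizer are illustrative choices only; Myanmar script is not whitespace-delimited, so a different pre-tokenizer (or a character-level model) may suit real projects better.

```python
from datasets import load_dataset
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

ds = load_dataset("freococo/myanmar-written-corpus", split="train")

# Illustrative settings; tune vocab_size and special tokens for your own models.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=32000, special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"])

def text_batches(batch_size=1000):
    # Yield batches of raw sentences to the trainer.
    for i in range(0, len(ds), batch_size):
        yield ds[i : i + batch_size]["text"]

tokenizer.train_from_iterator(text_batches(), trainer=trainer)
tokenizer.save("myanmar-bpe-tokenizer.json")
```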
## License

This dataset is licensed under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/). You are free to use, modify, and distribute the dataset without restrictions.

## Acknowledgments

This dataset is derived from publicly available sources, including:

- The Hugging Face FineWeb2 dataset, specifically its Myanmar subset.
- Original writers, speakers, and creators of the sentences.

The dataset has undergone extensive manual and automated processing to ensure quality and utility. These efforts included text cleaning, deduplication, and categorization.
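
The exact cleaning pipeline is not published here; the following is only a simplified sketch of the kind of exact-duplicate filtering and length categorization described above, using whitespace normalization as a stand-in for the real cleaning steps.

```python
def length_category(n, step=100):
    # Bucket a character count into labels such as "<100", "<200", ...
    return f"<{(n // step + 1) * step}"

def clean_and_deduplicate(sentences):
    seen = set()
    for raw in sentences:
        text = " ".join(raw.split())  # collapse whitespace as a minimal cleaning step
        if not text or text in seen:
            continue
        seen.add(text)
        yield {
            "text": text,
            "string_length": len(text),
            "length_category": length_category(len(text)),
        }

# Tiny illustrative run: the second sentence is a duplicate after normalization.
rows = list(clean_and_deduplicate(["ပထမ ဝါကျ။", "ပထမ  ဝါကျ။", "ဒုတိယ ဝါကျ။"]))
print(rows)
```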
## Citation