Datasets:
Tasks: Text Classification
Modalities: Text
Formats: text
Languages: Chinese
Size: 10K - 100K
License:
Upload 249 files
This view is limited to 50 files because it contains too many changes.
- .gitattributes +7 -59
- .idea/.gitignore +8 -0
- .idea/inspectionProfiles/Project_Default.xml +32 -0
- .idea/inspectionProfiles/profiles_settings.xml +6 -0
- .idea/misc.xml +4 -0
- .idea/modules.xml +8 -0
- .idea/nlp_corpus.iml +8 -0
- .idea/vcs.xml +6 -0
- README.md +10 -0
- open_ner_data/2020_ccks_ner/chip_2020_1_test1/test1.txt +0 -0
- open_ner_data/2020_ccks_ner/chip_2020_1_train/train_data.txt +0 -0
- open_ner_data/2020_ccks_ner/chip_2020_1_train/val_data.txt +0 -0
- open_ner_data/2020_ccks_ner/中文医学文本命名实体识别_test2/中文医学文本命名实体识别_test2/test2.txt +0 -0
- open_ner_data/MSRA/msra.txt +3 -0
- open_ner_data/MSRA/msra_1000.txt +0 -0
- open_ner_data/MSRA/msra_test.txt +0 -0
- open_ner_data/MSRA/msra_train.txt +3 -0
- open_ner_data/ResumeNER/dev.char.bmes +0 -0
- open_ner_data/ResumeNER/dev.txt +0 -0
- open_ner_data/ResumeNER/test.char.bmes +0 -0
- open_ner_data/ResumeNER/test.txt +0 -0
- open_ner_data/ResumeNER/train.char.bmes +0 -0
- open_ner_data/ResumeNER/train.txt +0 -0
- open_ner_data/__init__.py +0 -0
- open_ner_data/boson/boson.txt +0 -0
- open_ner_data/boson/boson_1000.txt +0 -0
- open_ner_data/cluener_public/dev.txt +0 -0
- open_ner_data/cluener_public/test.txt +0 -0
- open_ner_data/cluener_public/train.txt +0 -0
- open_ner_data/cluener_public/train_1000.txt +0 -0
- open_ner_data/data_transfer.py +331 -0
- open_ner_data/people_daily/people_daily_ner.txt +3 -0
- open_ner_data/people_daily/people_daily_ner_1000.txt +0 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1000.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1001.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1002.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1003.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1004.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1005.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1006.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1007.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1008.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1009.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1010.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1011.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1012.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1013.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1014.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1015.txt +1 -0
- open_ner_data/tianchi_yiyao/chusai_xuanshou/1016.txt +1 -0
.gitattributes
CHANGED
@@ -1,59 +1,7 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.lz4 filter=lfs diff=lfs merge=lfs -text
-*.mds filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-# Audio files - uncompressed
-*.pcm filter=lfs diff=lfs merge=lfs -text
-*.sam filter=lfs diff=lfs merge=lfs -text
-*.raw filter=lfs diff=lfs merge=lfs -text
-# Audio files - compressed
-*.aac filter=lfs diff=lfs merge=lfs -text
-*.flac filter=lfs diff=lfs merge=lfs -text
-*.mp3 filter=lfs diff=lfs merge=lfs -text
-*.ogg filter=lfs diff=lfs merge=lfs -text
-*.wav filter=lfs diff=lfs merge=lfs -text
-# Image files - uncompressed
-*.bmp filter=lfs diff=lfs merge=lfs -text
-*.gif filter=lfs diff=lfs merge=lfs -text
-*.png filter=lfs diff=lfs merge=lfs -text
-*.tiff filter=lfs diff=lfs merge=lfs -text
-# Image files - compressed
-*.jpg filter=lfs diff=lfs merge=lfs -text
-*.jpeg filter=lfs diff=lfs merge=lfs -text
-*.webp filter=lfs diff=lfs merge=lfs -text
-# Video files - compressed
-*.mp4 filter=lfs diff=lfs merge=lfs -text
-*.webm filter=lfs diff=lfs merge=lfs -text
+dialog/chitchat/douban/train.txt filter=lfs diff=lfs merge=lfs -text
+dialog/chitchat/LCCC/train_dev.txt filter=lfs diff=lfs merge=lfs -text
+dialog/chitchat/weibo/train.txt filter=lfs diff=lfs merge=lfs -text
+dialog/knowledge/tencent/train.txt.zip filter=lfs diff=lfs merge=lfs -text
+open_ner_data/MSRA/msra_train.txt filter=lfs diff=lfs merge=lfs -text
+open_ner_data/MSRA/msra.txt filter=lfs diff=lfs merge=lfs -text
+open_ner_data/people_daily/people_daily_ner.txt filter=lfs diff=lfs merge=lfs -text
.idea/.gitignore
ADDED
@@ -0,0 +1,8 @@
# Default ignored files
/shelf/
/workspace.xml
# Datasource local storage ignored files
/../../../../:\craig\nlp_corpus\.idea/dataSources/
/dataSources.local.xml
# Editor-based HTTP Client requests
/httpRequests/
.idea/inspectionProfiles/Project_Default.xml
ADDED
@@ -0,0 +1,32 @@
<component name="InspectionProjectProfileManager">
  <profile version="1.0">
    <option name="myName" value="Project Default" />
    <inspection_tool class="PyPackageRequirementsInspection" enabled="true" level="WARNING" enabled_by_default="true">
      <option name="ignoredPackages">
        <value>
          <list size="19">
            <item index="0" class="java.lang.String" itemvalue="gensim" />
            <item index="1" class="java.lang.String" itemvalue="pyplotz" />
            <item index="2" class="java.lang.String" itemvalue="jieba" />
            <item index="3" class="java.lang.String" itemvalue="fairseq" />
            <item index="4" class="java.lang.String" itemvalue="scikit_learn" />
            <item index="5" class="java.lang.String" itemvalue="torch" />
            <item index="6" class="java.lang.String" itemvalue="torchvision" />
            <item index="7" class="java.lang.String" itemvalue="pytorch_crf" />
            <item index="8" class="java.lang.String" itemvalue="redis" />
            <item index="9" class="java.lang.String" itemvalue="torchcrf" />
            <item index="10" class="java.lang.String" itemvalue="pysolr" />
            <item index="11" class="java.lang.String" itemvalue="kafka" />
            <item index="12" class="java.lang.String" itemvalue="tensorboardX" />
            <item index="13" class="java.lang.String" itemvalue="Flask_Cors" />
            <item index="14" class="java.lang.String" itemvalue="arango" />
            <item index="15" class="java.lang.String" itemvalue="pyltp" />
            <item index="16" class="java.lang.String" itemvalue="rpyc" />
            <item index="17" class="java.lang.String" itemvalue="pytorch_transformers" />
            <item index="18" class="java.lang.String" itemvalue="bayesian_optimization" />
          </list>
        </value>
      </option>
    </inspection_tool>
  </profile>
</component>
.idea/inspectionProfiles/profiles_settings.xml
ADDED
@@ -0,0 +1,6 @@
<component name="InspectionProjectProfileManager">
  <settings>
    <option name="USE_PROJECT_PROFILE" value="false" />
    <version value="1.0" />
  </settings>
</component>
.idea/misc.xml
ADDED
@@ -0,0 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ProjectRootManager" version="2" project-jdk-name="Python 2.7" project-jdk-type="Python SDK" />
</project>
.idea/modules.xml
ADDED
@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ProjectModuleManager">
    <modules>
      <module fileurl="file://$PROJECT_DIR$/.idea/nlp_corpus.iml" filepath="$PROJECT_DIR$/.idea/nlp_corpus.iml" />
    </modules>
  </component>
</project>
.idea/nlp_corpus.iml
ADDED
@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$" />
    <orderEntry type="inheritedJdk" />
    <orderEntry type="sourceFolder" forTests="false" />
  </component>
</module>
.idea/vcs.xml
ADDED
@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="VcsDirectoryMappings">
    <mapping directory="$PROJECT_DIR$" vcs="Git" />
  </component>
</project>
README.md
ADDED
@@ -0,0 +1,10 @@
# nlp_corpus
## 1 Chinese named entity recognition
- open_ner_data holds NER datasets that are openly available online; their differing source formats have been converted into one unified format, with data_transfer.py as the conversion script
### 1.1 boson dataset
### 1.2 CLUE fine-grained NER dataset
### 1.3 MSRA (Microsoft) NER dataset
### 1.4 People's Daily NER dataset (1998)
### 1.5 TCM drug-label NER dataset ("Wanchuang Cup" TCM Tianchi big-data competition)
### 1.6 Video / music / book dataset
### 1.7 Weibo dataset
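For reference, the unified format that data_transfer.py (further down in this diff) writes is one JSON object per line: a "text" string plus an "entity_list" of character-offset spans. A minimal reader sketch — the load_unified_ner helper name is ours, and the path is simply one of the converted files added in this commit:

import json

def load_unified_ner(path):
    """Yield (text, entities) pairs from a converted corpus file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # each entity: {"entity_index": {"begin": int, "end": int},
            #               "entity_type": str, "entity": str}
            yield record["text"], record["entity_list"]

for text, entities in load_unified_ner("open_ner_data/cluener_public/dev.txt"):
    for e in entities:
        # begin/end are character offsets into text, end-exclusive
        span = text[e["entity_index"]["begin"]:e["entity_index"]["end"]]
        assert span == e["entity"]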
open_ner_data/2020_ccks_ner/chip_2020_1_test1/test1.txt
ADDED
The diff for this file is too large to render.
open_ner_data/2020_ccks_ner/chip_2020_1_train/train_data.txt
ADDED
The diff for this file is too large to render.
open_ner_data/2020_ccks_ner/chip_2020_1_train/val_data.txt
ADDED
The diff for this file is too large to render.
open_ner_data/2020_ccks_ner/中文医学文本命名实体识别_test2/中文医学文本命名实体识别_test2/test2.txt
ADDED
The diff for this file is too large to render.
open_ner_data/MSRA/msra.txt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01383bca37db7e2902d98f15d6a51842e143f3ef47b33db297788a1934de9950
size 15374320
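The three lines above are a Git LFS pointer, not the corpus itself; the real msra.txt is fetched by LFS on checkout. A small sketch (the is_lfs_pointer helper name is ours) for telling pointer files apart from already-downloaded corpus files:

def is_lfs_pointer(path):
    # Git LFS pointer files start with this fixed spec line.
    with open(path, encoding="utf-8") as f:
        return f.readline().startswith("version https://git-lfs.github.com/spec/v1")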
open_ner_data/MSRA/msra_1000.txt
ADDED
The diff for this file is too large to render.
open_ner_data/MSRA/msra_test.txt
ADDED
The diff for this file is too large to render.
open_ner_data/MSRA/msra_train.txt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40734813d10dfe333055daedbeb7ce3ba2398d1cd1d640144a6a11cf194f0a89
size 13251833
open_ner_data/ResumeNER/dev.char.bmes
ADDED
The diff for this file is too large to render.
open_ner_data/ResumeNER/dev.txt
ADDED
The diff for this file is too large to render.
open_ner_data/ResumeNER/test.char.bmes
ADDED
The diff for this file is too large to render.
open_ner_data/ResumeNER/test.txt
ADDED
The diff for this file is too large to render.
open_ner_data/ResumeNER/train.char.bmes
ADDED
The diff for this file is too large to render.
open_ner_data/ResumeNER/train.txt
ADDED
The diff for this file is too large to render.
open_ner_data/__init__.py
ADDED
File without changes
open_ner_data/boson/boson.txt
ADDED
The diff for this file is too large to render.
open_ner_data/boson/boson_1000.txt
ADDED
The diff for this file is too large to render.
open_ner_data/cluener_public/dev.txt
ADDED
The diff for this file is too large to render.
open_ner_data/cluener_public/test.txt
ADDED
The diff for this file is too large to render.
open_ner_data/cluener_public/train.txt
ADDED
The diff for this file is too large to render.
open_ner_data/cluener_public/train_1000.txt
ADDED
The diff for this file is too large to render.
open_ner_data/data_transfer.py
ADDED
@@ -0,0 +1,331 @@
# coding=utf-8
import json
import os
import re
from collections import defaultdict


# Convert the People's Daily dataset
def transfer_data_0(source_file, target_file):
    '''
    People's Daily data format:
    1 迈向 vt O _
    2 充满 vt O _
    3 希望 n O _
    4 的 ud O _
    5 新 a O _
    6 世纪 n O _
    7 —— wp O _
    8 一九九八年新年 t DATE _
    9 讲话 n O _
    10 ( wkz O _
    '''
    with open(source_file, encoding="utf-8") as f, open(
            target_file, "w+", encoding="utf-8") as g:
        text = ""
        entity_list = []  # e.g. {"entity_index": {"begin": 21, "end": 25}, "entity_type": "影视作品", "entity": "喜剧之王"}
        lines = 0
        for word_line in f:
            if word_line != "\n":  # a token line inside a sentence
                word_split = word_line.strip().split("\t")
                if word_split[3] != "O":
                    entity_list.append({"entity_index": {"begin": len(text), "end": len(text + word_split[1])},
                                        "entity_type": word_split[3], "entity": word_split[1]})
                text += word_split[1]
            else:  # end of the sentence
                g.write(json.dumps({"text": text, "entity_list": entity_list}, ensure_ascii=False) + "\n")
                lines += 1
                text = ""
                entity_list = []
                if lines == 1000:
                    break
        print("{} lines in total".format(lines))


# Convert ordinary NER annotations (Weibo, MSRA, ...) into the format this project needs
def transfer_data_1(source_file, target_file):
    # Handles BIEO, BIO and BMESO tagging alike
    '''
    Converts character-level annotations such as
        男 B-PER.NOM   /// B-PER
        女 B-PER.NOM
        必 O
        看 O
        的 O
        微 O
        博 O
        花 O
        心 O

        我 O
        参 O
        与 O
        了 O
        南 B-GPE.NAM
        都 I-GPE.NAM
    into the required data format.
    '''
    with open(source_file, encoding="utf-8", errors="ignore") as f, open(target_file, "w+", encoding="utf-8") as g:
        text = ""
        entity_list = []  # e.g. {"entity_index": {"begin": 21, "end": 25}, "entity_type": "影视作品", "entity": "喜剧之王"}
        lines = 0
        words_start = 0  # start boundary of the current entity
        words_end = 0  # length of the current entity, in characters
        words_bool = None  # type of a pending, not-yet-stored entity; None when there is none
        for word_line in f:
            word_line = word_line.strip()
            word_split = word_line.strip().split(" ")
            if '' in word_split:
                word_split.remove('')
            if word_split:  # a token line inside a sentence
                if len(word_split) == 1:
                    # the character itself was whitespace and got stripped; use a placeholder
                    word_split.insert(0, "、")
                if (word_split[1].startswith("B") or word_split[1].startswith("S")) and not word_split[1].endswith("NOM"):
                    if words_bool:
                        entity_list.append({"entity_index": {"begin": words_start, "end": words_start + words_end},
                                            "entity_type": words_bool,
                                            "entity": text[words_start:words_start + words_end]})
                    words_start = len(text)
                    words_end = 1
                    if "." in word_split[1]:
                        # "B-GPE.NAM" -> "GPE"
                        words_bool = word_split[1][2:word_split[1].rfind(".")]
                    else:
                        words_bool = word_split[1][2:]
                elif (word_split[1].startswith("M") or word_split[1].startswith("I") or word_split[1].startswith("E")) and not word_split[1].endswith("NOM"):
                    words_end += 1
                elif word_split[1] == "O" and words_bool:
                    entity_list.append({"entity_index": {"begin": words_start, "end": words_start + words_end},
                                        "entity_type": words_bool,
                                        "entity": text[words_start:words_start + words_end]})
                    words_bool = None
                text += word_split[0]
            else:  # end of the sentence
                if words_bool:
                    entity_list.append({"entity_index": {"begin": words_start, "end": words_start + words_end},
                                        "entity_type": words_bool,
                                        "entity": text[words_start:words_start + words_end]})
                    words_bool = None
                g.write(json.dumps({"text": text, "entity_list": entity_list}, ensure_ascii=False) + "\n")
                lines += 1
                text = ""
                entity_list = []
                # if lines == 1000:
                #     break
        print("{} lines in total".format(lines))
#
# transfer_data_1("/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/source_data/ChineseNLPCorpus/NER/MSRA/dh_msra.txt",
#                 "/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/msra_1000.txt")
# transfer_data_1("/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/video_music_book_datasets/data/train.txt",
#                 "/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/video_music_book_datasets/train.txt")
# transfer_data_1("/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/video_music_book_datasets/data/valid.txt",
#                 "/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/video_music_book_datasets/dev.txt")
# transfer_data_1("/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/video_music_book_datasets/data/test.txt",
#                 "/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/video_music_book_datasets/test.txt")
# transfer_data_1("./ResumeNER/train.char.bmes", "./ResumeNER/train.txt")
# transfer_data_1("./ResumeNER/dev.char.bmes", "./ResumeNER/dev.txt")
# transfer_data_1("./ResumeNER/test.char.bmes", "./ResumeNER/test.txt")


# Convert the boson NER data format
def transfer_data_2(source_file, target_file):
    '''
    boson data format:
    完成!!!!!!!!!!给大家看看 {{time:今天}}{{person_name:吕小珊}}要交大家 新手也可以简单上手!!! 上学也不会觉得奇怪的妆感喔^^ 大家加油喔~~!!!!!你的喜欢
    会是{{person_name:吕小珊}} 最你的喜欢 会是{{person_name:吕小珊}} 最大的动力唷~~!!! 谢谢大家~~ 大的动力唷~~!!! 谢谢大家~~
    '''
    p = re.compile("({{.*?:.*?}})")
    p_ = re.compile("{{.*?:(.*?)}}")
    length = 0
    with open(source_file, encoding="utf-8") as f, open(target_file, "w+", encoding="utf-8") as g:
        for s in f:
            total_de = 0  # total number of markup characters removed from s so far
            entity_list = []

            for item1, item2 in zip(p.finditer(s), p_.findall(s)):
                # replace the {{type:entity}} markup with the bare entity text
                start = item1.start() - total_de
                ss = s[start:item1.end() - total_de]
                total_de += len(ss) - len(item2)
                s = s.replace(ss, item2, 1)
                entity_list.append({"entity_index": {"begin": start, "end": start + len(item2)},
                                    "entity_type": ss[2:len(ss) - 3 - len(item2)], "entity": item2})

            g.write(json.dumps({"text": s, "entity_list": entity_list}, ensure_ascii=False) + "\n")
            length += 1
            if length == 1000:
                break
        print("{} lines in total".format(length))
# transfer_data_2("/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/source_data/ChineseNLPCorpus/NER/boson/origindata.txt",
#                 "/home/liguocai/model_py36/data_diversity/product_testdata_kg/open_ner_data/boson_1000.txt")


# Convert the CLUE dataset
def transfer_data_3(source_file, target_file):
    '''
    Source data:
    {"text": "她写道:抗战胜利时我从重庆坐民联轮到南京,去中山陵瞻仰,也到秦淮河去过。然后就去北京了。", "label": {"address": {"重庆": [[11, 12]], "南京": [[18, 19]],
    "北京": [[40, 41]]}, "scene": {"中山陵": [[22, 24]], "秦淮河": [[30, 32]]}}}
    '''
    with open(source_file, encoding="utf-8") as f, open(target_file, "w+", encoding="utf-8") as g:
        length = 0
        for line in f:
            line_json = json.loads(line)
            text = line_json['text']
            entity_list = []

            if "label" in line_json.keys():
                for label, e in line_json['label'].items():
                    for e_name, e_index in e.items():
                        # CLUE offsets are inclusive, so the end offset needs +1
                        entity_list.append({"entity_index": {"begin": e_index[0][0], "end": e_index[0][1] + 1},
                                            "entity_type": label, "entity": e_name})

            g.write(json.dumps({"text": text, "entity_list": entity_list}, ensure_ascii=False) + "\n")
            length += 1
            if length == 1000:
                break

        print("{} lines in total".format(length))

# transfer_data_3('./open_ner_data/cluener_public/dev.json', './open_ner_data/cluener_public/dev.txt')
# transfer_data_3('./open_ner_data/cluener_public/train.json', './open_ner_data/cluener_public/train_1000.txt')
# transfer_data_3('./open_ner_data/cluener_public/test.json', './open_ner_data/cluener_public/test.txt')

# Convert brat-annotated files into the required format
def transfer_data_4(source_file, test=False):
    """
    T1 DRUG_EFFICACY 1 5 补肾益肺
    T2 DRUG_EFFICACY 6 10 益精助阳
    T3 DRUG_EFFICACY 11 15 益气定喘
    T4 SYMPTOM 23 27 精神倦怠
    T5 SYNDROME 35 37 阴虚
    T6 SYMPTOM 37 39 咳嗽
    T7 SYMPTOM 39 41 体弱
    """

    lines = 0

    # brat label -> Chinese entity type used in the unified output
    map_dict = {"DRUG": "药品",
                "DRUG_INGREDIENT": "药物成分",
                "DISEASE": "疾病",
                "SYMPTOM": "症状",
                "SYNDROME": "证候",
                "DISEASE_GROUP": "疾病分组",
                "FOOD": "食物",
                "FOOD_GROUP": "食物分组",
                "PERSON_GROUP": "人群",
                "DRUG_GROUP": "药品分组",
                "DRUG_DOSAGE": "药物剂型",
                "DRUG_TASTE": "药物性味",
                "DRUG_EFFICACY": "中药功效"}

    if not test:
        file_list = []
        for file_name in os.listdir(source_file):
            if file_name.endswith(".ann"):
                file_list.append(file_name[:-3])  # keep "xxx." so "ann"/"txt" can be appended
        with open(source_file[:source_file.rfind("/") + 1] + "train.txt", "w+", encoding="utf-8") as f:
            for file_name in file_list:
                with open(os.path.join(source_file, file_name + "ann"), encoding="utf-8") as w, \
                        open(os.path.join(source_file, file_name + "txt"), encoding="utf-8") as g:
                    text = g.read()
                    entity_list = []
                    for line in w:
                        _, entity_type, begin, end, entity = line.strip().split()
                        entity_type, begin, end = map_dict[entity_type], int(begin), int(end)
                        entity_list.append({"entity_index": {"begin": begin, "end": end},
                                            "entity_type": entity_type, "entity": entity})
                    f.write(json.dumps({"text": text, "entity_list": entity_list}, ensure_ascii=False) + "\n")
                    lines += 1
    else:
        with open(source_file[:source_file.rfind("/") + 1] + "test.txt", "w+", encoding="utf-8") as f:
            for file in os.listdir(source_file):
                with open(os.path.join(source_file, file), encoding="utf-8") as g:
                    text = g.read()
                    f.write(json.dumps({"text": text, "entity_list": []}, ensure_ascii=False) + "\n")
                    lines += 1

    print("{} lines of data in total".format(lines))

# transfer_data_4("./open_ner_data/tianchi_yiyao/train", test=False)
# transfer_data_4("./open_ner_data/tianchi_yiyao/chusai_xuanshou", test=True)

# Convert the yidu-s4k dataset format
def transfer_data_5(source_file, target_file):
    """
    {"originalText": ",患者7月前因“下腹腹胀伴反酸”至我院就诊,完善相关检查,诊断“胃体胃窦癌(CT4N2M0,IIIB期)”,
    建议先行化疗,患者及家属表示理解同意 ,遂于2015-5-26、2015-06-19、2015-07-13分别予XELOX
    (希罗达 1250MG BID PO D1-14+奥沙利铂150MG IVDRIP Q3W)化疗三程,过程顺利,无明显副反应,
    后于2015-08-24在全麻上行胃癌根治术(远端胃大切),术程顺利,术后预防感染支持对症等处理。,术后病理示:
    胃中至低分化管状腺癌(LAUREN,分型:肠型),浸润至胃壁浆膜上层,可见神经束侵犯,未见明确脉管内癌栓;
    肿瘤消退分级(MANDARD),:TRG4;网膜组织未见癌;LN(-);YPT3N0M0,IIA期。术后恢复可,于2015-10-10、
    开始采用XELOX化疗方案化疗(奥沙利铂150MG Q3W IVDRIP+卡培他滨1250MGBID*14天)一程,过程顺利。
    现为行上程化疗来我院就诊,拟“胃癌综合治疗后” 收入我科。自下次出院以来,患者精神可,食欲尚可,大小便正常,
    体重无明显上降。", "entities":
    [{"end_pos": 10, "label_type": "解剖部位", "overlap": 0, "start_pos": 8},
    {"end_pos": 11, "label_type": "解剖部位", "overlap": 0, "start_pos": 10},
    {"label_type": "疾病和诊断", "overlap": 0, "start_pos": 32, "end_pos": 52},
    {"end_pos": 118, "label_type": "药物", "overlap": 0, "start_pos": 115},
    {"end_pos": 143, "label_type": "药物", "overlap": 0, "start_pos": 139},
    {"label_type": "手术", "overlap": 0, "start_pos": 193, "end_pos": 206},
    {"label_type": "疾病和诊断", "overlap": 0, "start_pos": 233, "end_pos": 257},
    {"label_type": "解剖部位", "overlap": 0, "start_pos": 261, "end_pos": 262},
    {"end_pos": 374, "label_type": "药物", "overlap": 0, "start_pos": 370},
    {"end_pos": 395, "label_type": "药物", "overlap": 0, "start_pos": 391},
    {"label_type": "疾病和诊断", "overlap": 0, "start_pos": 432, "end_pos": 439}]}
    """
    with open(source_file, encoding="utf-8-sig") as f, open(target_file, "w+", encoding="utf-8") as g:
        length = 0
        error = 0
        for line in f:
            try:
                line_json = json.loads(line)
                entity_list = []
                text = line_json["originalText"]
                for entities in line_json["entities"]:
                    entity_list.append({"entity_index": {"begin": entities["start_pos"],
                                                         "end": entities["end_pos"]},
                                        "entity_type": entities["label_type"],
                                        "entity": text[entities["start_pos"]:entities["end_pos"]]})
                g.write(json.dumps({"text": text, "entity_list": entity_list}, ensure_ascii=False) + "\n")
                length += 1
            except Exception:
                error += 1
        print("{} errors".format(error))
        print("{} lines in total".format(length))


# Count entity types and their frequencies
def sta_entity(file, num=None):
    sta_dict = defaultdict(int)
    with open(file, encoding="utf-8") as f:
        data_list = list(f.readlines())

    length = len(data_list) if not num else num  # cap at num lines when given

    entity_type = []
    for line in data_list[:length]:
        text_e = json.loads(line)
        for e in text_e["entity_list"]:
            if e["entity_type"] not in entity_type:
                entity_type.append(e["entity_type"])
            sta_dict[e["entity_type"]] += 1

    entity_type.sort()
    print("Entity types:", entity_type)
    print("Entity types and counts:", sta_dict)


print("train1")
# transfer_data_5("yidu-s4k/subtask1_training_part1.txt", "yidu-s4k/train1.txt")
sta_entity("yidu-s4k/train1.txt")
print("train2")
transfer_data_5("yidu-s4k/subtask1_training_part2.txt", "yidu-s4k/train2.txt")
sta_entity("yidu-s4k/train2.txt")
print("test")
transfer_data_5("yidu-s4k/subtask1_test_set_with_answer.json", "yidu-s4k/test.txt")
sta_entity("yidu-s4k/test.txt")
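To make the conversion concrete, here is a self-contained check of what transfer_data_1 above emits for the Weibo-style sample in its docstring. This is a sketch: the temporary-file handling is purely illustrative.

import tempfile

# character-level BIO input; the trailing blank line ends the sentence
bio = "我 O\n参 O\n与 O\n了 O\n南 B-GPE.NAM\n都 I-GPE.NAM\n\n"
src = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False, encoding="utf-8")
src.write(bio)
src.close()
transfer_data_1(src.name, src.name + ".out")
with open(src.name + ".out", encoding="utf-8") as f:
    print(f.readline().strip())
# {"text": "我参与了南都", "entity_list": [{"entity_index": {"begin": 4, "end": 6}, "entity_type": "GPE", "entity": "南都"}]}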
open_ner_data/people_daily/people_daily_ner.txt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d239843ff28d0012cd53c7a6c6fd7dbdcaf1e2e407ea2e1da4cc88009b1dd6d
size 11709985
open_ner_data/people_daily/people_daily_ner_1000.txt
ADDED
The diff for this file is too large to render.
open_ner_data/tianchi_yiyao/chusai_xuanshou/1000.txt
ADDED
@@ -0,0 +1 @@
灌肠用。取本品50ml,将药液加温至38~39°C,臀部抬高10cm插管,肛管插入深度10~15cm。肛管插入后,讲管端套的熟料瓶颈部,加压挤入即可。灌入后膝胸卧位30分钟。每日一次,两周为一个疗程。月经干净后3~5天开始用药。 红虎灌肠液(50毫升装)-安徽天洋药业 清热解毒,化湿除带,祛瘀止痛,散结消癥,用于慢性盆腔炎所致小腹疼痛,腰骶酸痛,带下量多,或有发热 安徽天洋药业有限公司
open_ner_data/tianchi_yiyao/chusai_xuanshou/1001.txt
ADDED
@@ -0,0 +1 @@
云南永安制药有限公司 开水冲服。一次10克,一日3次。 孕妇禁用。糖尿病患者禁服。 每袋装12g(相当于原药材8g)。 非处方药物(甲类) 补血,活血,通络。用于月经量少、后错,血虚萎黄后错,血虚萎黄,风湿痹痛,肢体麻木糖尿病 尚不明确。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1002.txt
ADDED
@@ -0,0 +1 @@
北京同仁堂科技发展股份有限公司制药厂 1.忌食辛辣,少进油腻。 2.感冒发热病人不宜服用。 3.有高血压、心脏病、肝病、糖尿病、肾病等慢性病严重者应在医师指导下服用。 4.伴有月经紊乱者,应在医师指导下服用。 5.眩晕症状较重者,应及时去医院就诊。 6.服药2周症状无缓解,应去医院就诊。 7.对本品过敏者禁用,过敏体质者慎用。 8.本品性状发生改变时禁止使用。 9.请将本品放在儿童不能接触的地方。 10.如正在使用其他药品,使用本品前请咨询医师或药师。 本品为浅黄色至棕黄色颗粒,气微香,味微苦。 滋养肝肾、宁心安神。用于更年期综合症属阴虚肝旺症,症见烘热汗出,头晕耳鸣,失眠多梦,五心烦热,腰背酸痛,大便干燥,心烦易怒,舌红少苔,脉弦细或弦细 开水冲服。一次1袋(12g),一日3次。 如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 12g*10袋/盒 用于更年期综合症属阴虚肝旺症 铝塑复合膜包装,每袋装12克,每盒装10袋。 非处方药物(甲类),中药保护品种二级 12g*10袋/盒 用于更年期综合症属阴虚肝旺更年期综合症气微香,味微苦。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1003.txt
ADDED
@@ -0,0 +1 @@
口服。一次3粒,一日3次,3个月经周期为一疗程。 吉林省东北亚药业股份有限公司 月经期暂停服用。 用于子宫肌瘤气滞血瘀,症见经期延长,经量过多,经色紫黯有块,小腹或乳房胀痛等。 偶见服药初期胃脘不适。 如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 本品为硬胶囊,内容物为棕褐色的颗粒;气微腥,微苦。 铝塑泡罩包装,12粒*2板/盒。 每粒装0.45g 详见说明书 软坚散结?钛觯稣瘫尽S糜谧庸×觯脱?症见经期延长,经量过多,经色紫黯有块,小腹或乳房胀痛
open_ner_data/tianchi_yiyao/chusai_xuanshou/1004.txt
ADDED
@@ -0,0 +1 @@
滋阴清热,健脾养血。用于放环后引起的出血,月经提前量多或月经紊乱,腰骶酸痛,下腹坠痛,心烦易怒,手足心热 陕西步长高新制药有限公司 口服,一次5片,一日2次。 请遵医嘱。 尚不明确。 0.46g*3*15片
open_ner_data/tianchi_yiyao/chusai_xuanshou/1005.txt
ADDED
@@ -0,0 +1 @@
如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 开水冲服,一次14克,一日3次。 养血,调经,止痛。用于月经量少、后错,经期腹痛 健民集团叶开泰国药(随州)有限公司 1,忌食生冷食物。2,患有其他疾病者,应在医师指导下服用。3,平素月经正常,突然出现月经过少,或经期错后,应去医院就诊。4,治疗痛经,宜在经前3~5天开始服药,连服一周,如有生育要求应在医师指导下服用。5,服药后痛经不减轻,或重度痛经者,应到医院诊治。6,服药2周症状无缓解,应去医院就诊。7,对本品过敏者禁用,过敏体质者慎用。8,本品性状发生改变时禁止使用。9,请将本品放在儿童不能接触的地方。10,如正在使用其他药品,使用本品前请咨询医师或药师。 本品为妇科月经不调类非处方药药品。 养血,调经,止痛。用于月经量少、后错,经期腹痛。 养血,调经,止痛。用于月经量少、后错,经期腹痛 14g*5袋 非处方药物(乙类),国家医保目录(乙类) 孕妇禁用。糖尿病者禁服。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1006.txt
ADDED
@@ -0,0 +1 @@
每瓶装100ml;每瓶装200ml 尚不明确。 外用。用稀释10%溶液擦洗,重症可加大浓度;用牛尾线消毒棉球蘸取适量浓溶液置于阴道中治疗阴道炎,一日2次。 清热燥湿、止痒,广谱抗菌、抗病毒,抗炎镇痛抑制变态反应,用于各种细菌性、霉菌性、滴虫性外阴炎、阴道炎所致妇女阴部瘙痒、红肿,白带过多 陕西关爱制药有限公司 用于各种细菌性、霉菌性、滴虫性外阴炎、阴道炎所致妇女阴部瘙痒、红肿,白带过多
open_ner_data/tianchi_yiyao/chusai_xuanshou/1007.txt
ADDED
@@ -0,0 +1 @@
滋阴清热,固经止带。用于阴虚血热,月经先期,经血量多、色紫黑 口服。一次6克,一日2次。 非处方药物(甲类),国家医保目录(乙类) 上海和黄药业有限公司
open_ner_data/tianchi_yiyao/chusai_xuanshou/1008.txt
ADDED
@@ -0,0 +1 @@
如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 清热解毒、燥湿杀虫,收敛止痒。用于各种病困所致的阴道炎 尚不明确。 2.5g*5粒 阴道给药,每次1粒,一日1次。睡前将栓剂放入阴道深处。 本品如遇高温天气,可能出现软化现象,只需放入阴凉环境或冰箱冷藏室中,恢复原状即可使用,对产品疗效无影响。 用于各种病因所致的阴道炎症 PVC/LDPE药用复合硬片包装;每盒5粒。 通药制药集团股份有限公司 尚不明确。 本品为紫红色的栓剂。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1009.txt
ADDED
@@ -0,0 +1 @@
株洲千金药业股份有限公司 阴道给药。晚上临睡前将阴道给药器中的药物送入阴道深处。每次月经干净3天后开始用药,一次1支,一日一次,每个月经周期连续使用10天,持续两个月经周期为一个疗程。 本品为棕褐色凝胶;气芳香。 清热燥湿,祛瘀生肌。用于慢性宫颈炎祛瘀生肌。用于慢性宫颈炎之宫颈糜烂、中医辨证属于湿热瘀阻所致者,症见带下量多、色黄或白,腰腹坠胀色黄或白,腰腹坠胀,口苦咽干,舌红苔黄腻,脉弦或滑 偶见给药局部出现瘙痒、皮疹或疼痛,一般停药后可自行消失。 4g*3支(千金) 1.过敏体质者慎用。2.使用给药器勿用力太过,以免伤及阴道后穹窿等部位,3.本品适用范围不包括宫颈息肉,宫颈粘膜炎、宫颈糜囊肿,宫颈肥大患者,4.请将本品放在儿童不能接触的地方。 1.孕妇及月经期妇女禁用。2,对本品过敏者禁用。3.本品性状发生改变时禁用。 聚丙烯预灌封阴道用给药器包装。3支/盒。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1010.txt
ADDED
@@ -0,0 +1 @@
山西澳迩药业有限公司 活血、祛瘀、止痛。用于产后恶露不行,少腹疼痛,也可试用于上节育环后引起的阴道流血,月经过多月经过多 如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 6g*12袋 1.收缩子宫:新生化颗粒使DNA含量和子宫利用葡萄糖能力增加,促进子宫蛋白质合成及子宫增生,以促进子宫收缩,从而起到止血并排出瘀血的目的。实验室研究表明,新生化颗粒能明显增加大鼠离体子宫的收缩张力、收缩频率和收缩振幅,且呈剂量依赖性关系。冲洗药液后,子宫活动仍可恢复到正常状态。2.镇痛:实验室研究表明,新生化颗粒能明显减少大鼠扭体次数。3.抗血小板凝聚及抗血栓抗血小板凝聚镇痛:实验室研究表明,新生化颗粒能明显减少大鼠扭体次数。3.抗血小板凝聚及抗血栓作用:新生化颗粒能抑制血小板聚集促进剂(H-SHT)产生。血液流变学表明,新生化颗粒通过降低血浆纤维蛋白原浓度,增加血小板细胞表面电荷,促进细胞解聚,降低血液粘度,达到抗血栓形成的作用。从而使瘀血不易凝固而利于排出。4.造血和抗贫血作用:新生化颗粒能促进血红蛋白(Hb)和红细胞(RBC)的生成。对造血干细胞(CFU—S)增值有显著的刺激作用,并能促进红系细胞分化。粒单细胞(CFU—D)、红系(BFU—E)祖细胞的产率均有明显升高作用。新生化颗粒同时还能抑制补体(c3b)与红细胞膜结合,降低补体溶血功能。5.改善微循环:增加子宫毛细血管流量,促进子宫修复。6.抗炎:新生化颗粒有很好的抗炎抑菌作用。体外试验表明,新生化颗粒对痢疾杆菌、大肠杆菌、绿脓杆菌、变形杆菌和金黄色葡萄球菌均有很好的抑菌作用。 热水冲服,一次1袋,一日2-3次。 用于产后恶露不行,少腹疼痛,也可用于上节育环后引起的阴道流血,月经过多 国家医保目录(乙类)
open_ner_data/tianchi_yiyao/chusai_xuanshou/1011.txt
ADDED
@@ -0,0 +1 @@
尚不明确。 每盒装10袋。 孕妇慎用。 本品为棕色或棕褐色颗粒;味甜、微苦味甜、微苦。 福建省泉州罗裳山制药厂 6g*10袋 口服。每次12g(2袋),一日2次。 清热凉血,消肿止痛。用于盆腔炎、附件炎、子宫内膜炎等引起的带下、腹痛 孕妇慎用。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1012.txt
ADDED
@@ -0,0 +1 @@
口服,一次2片,一日2次。 疏肝活血,调经止痛。用于痛经、月经量少、后错属气滞血瘀疏肝活血,调经止痛 用于痛经、月经量少、后错属气滞血瘀证者 1.忌食生冷食物、不宜洗凉水澡。<br/>2.患有其他疾病者,应在医师指导下服用。<br/>3.平素月经正常,突然出现月经量少,或月经错后,或阴道不规则出血应去医院就诊。<br/>4.经期或经后小腹隐痛喜按,痛经伴月经过多者均不宜选用。<br/>5.治疗痛经,宜在经前3~5天开始服药,连服1周。如有生育要求应在医师指导下服用。<br/>6.服药后痛经不减轻,或重度痛经者,应到医院诊治。<br/>7.对本品过敏者禁用,过敏体质者慎用。<br/>8.本品性状发生改变时禁止使用。<br/>9.请将本品放在儿童不能接触的地方。<br/>10.如正在服用其他药品,使用本品前请咨询医师或药师。 合肥今越制药有限公司 铝塑包装。12片/板×2板/盒。 0.5克*24片/盒 片剂(薄膜衣) 如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 孕妇禁用。 本品为薄膜衣片,除去包衣后显棕色;气微,味微苦。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1013.txt
ADDED
@@ -0,0 +1 @@
温经散寒,活血止痛,主治寒瘀证所致或经期小腹疼痛,经血量少,经行不畅,血色紫暗有块,块下痛减,乳房胀痛,四肢不温或畏寒,小腹发冷,带下量多,舌质黯或有瘀点,苔白,脉沉紧等症.适用于原发性痛经 口服54;一次5粒54;一日3次54;月经前开始服药54;服用15天.连用3个月经周期. 尚不明确。 每粒装0.5g 陕西摩美得制药有限公司
open_ner_data/tianchi_yiyao/chusai_xuanshou/1014.txt
ADDED
@@ -0,0 +1 @@
本品为黑褐色颗粒;气香,味甜。 株洲千金药业股份有限公司 忌生冷辛辣,孕妇禁服。 开水冲服,一次12g,一日2次。 12g*10袋 补益气血,祛瘀生新。用于气血两虚兼血瘀证产后腹痛 动物试验表明,补血益母颗粒能使失血性贫血小鼠RBC、Hb恢复至正常水平;能对抗环磷酰胺损伤骨髓造血系统所致的血细胞减少,能使WBC、HB、RBC明显升高;且对环磷酰胺所致小鼠脾脏萎缩有明显的对抗作用;它对小鼠既具有活血作用又能缩短小鼠的凝血时间;提高小鼠巨噬细胞的吞噬功能和促进小鼠溶血素抗体的形成;促进小鼠腹腔绵羊红细胞的吸收;但对正常大鼠离体子宫平滑肌未显示出作用。 12g*10袋/盒。 1.忌食寒凉、生冷食物。2.感冒时不宜服用。3.平素月经正常,突然出现月经量少,或月经错后,或阴道不规则出血应去医院就诊。4.按照用法用量服用,长期服用应向医师咨询。5.服药二周症状无改善,应去医院就诊。6.对本品过敏者禁用,过敏体质者慎用。7.本品性状发生改变时禁止使用。8.请将本品放在儿童过敏体质者慎用。7.本品性状发生改变时禁止使用。8.请将本品放在儿童不能接触的地方。9.如正在使用其他药品,使用本品前请咨询医师或药师。 尚不明确。
open_ner_data/tianchi_yiyao/chusai_xuanshou/1015.txt
ADDED
@@ -0,0 +1 @@
15g*10袋 活血调经。用于月经量少,产后腹痛活血调经本品过敏者心脏病 祛瘀生新。用于月经量少、后错,经来腹痛 本品为棕黄色至棕褐色的颗粒;昧甜、微苦。 尚不明确。 孕妇禁用。 15g*10袋/盒。 四川逢春制药有限公司 1.忌食生冷食物。 2.气血两虚生冷食物。 2.气血两虚引起的月经量少,色淡质稀,伴有头晕心悸,疲乏无力等不宜选用本药。3.有高血压、心脏病、肾病、糖尿病或正在接受其他治疗的患者均应在医师指导下服用。 4.平素月经量正常,突然出现经量少,须去医院就诊。 5.青春期少女及更年期妇女应在医师指导下服药。 6.各种流产后腹痛伴有阴道出血,服药一周无效者应去医院就诊。7.按照用法用量服用,服药过程中出现不良反应应停药,并向医师咨询。 8.对本品过敏者禁用。 开水冲服,一次1袋,一日2次。 如与其他药物同时使用可能会发生药物相互作用,详情请咨询医师或药师。 非处方药物(乙类),国家基本药物目录(2012)
open_ner_data/tianchi_yiyao/chusai_xuanshou/1016.txt
ADDED
@@ -0,0 +1 @@
本品对前列腺F2a所致的大鼠子宫痉挛性收缩有一定的拮抗作用。此外,还有降低全血比粘度、血浆比粘度及红细胞压积,降低血小板释放因子等作用。 开水冲服。一次1袋(12g),一日2次。月经前3天开始服药,连服7天或遵医嘱,三个经期为一个疗程。 铝塑复合膜,每袋装12g。 国家医保目录(乙类) 12g*6袋 活血化瘀、温经通脉、理气止痛。用于气滞寒凝血瘀活血化瘀、温经通脉、理气止痛。用于气滞寒凝血瘀所致的痛经。证见行经小腹胀痛或冷痛,经行不畅冷痛,经行不畅,经血暗有血块,或乳房胀痛,或胸闷,或手足不温,舌暗或有瘀斑 尚不明确。 北京长城制药厂 内血虚内热者忌用。 本品为棕色的颗粒;气微、味甜、微苦。 月经过多,月经提前者慎用。忌生冷。