---
license: mit
datasets:
- QizhiPei/BioT5_finetune_dataset
language:
- en
---
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("QizhiPei/biot5-base-mol2text", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained("QizhiPei/biot5-base-mol2text")

task_definition = 'Definition: You are given a molecule SELFIES. Your job is to generate the molecule description in English that fits the molecule SELFIES.\n\n'
selfies_input = '[C][C][Branch1][C][O][C][C][=Branch1][C][=O][C][=Branch1][C][=O][O-1]'
task_input = f'Now complete the following example -\nInput: <bom>{selfies_input}<eom>\nOutput: '

model_input = task_definition + task_input
input_ids = tokenizer(model_input, return_tensors="pt").input_ids

generation_config = model.generation_config
generation_config.max_length = 512
generation_config.num_beams = 1

outputs = model.generate(input_ids, generation_config=generation_config)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## References
For more information, please refer to our paper and GitHub repository.

Paper: [BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations](https://arxiv.org/abs/2310.07276)

GitHub: [BioT5](https://github.com/QizhiPei/BioT5)

Authors: *Qizhi Pei, Wei Zhang, Jinhua Zhu, Kehan Wu, Kaiyuan Gao, Lijun Wu, Yingce Xia, and Rui Yan*