---
license: apache-2.0
task_categories:
- text-to-image
language:
- en
tags:
- scientific
- science
- text-to-image
- text-to-code
pretty_name: ScImage
size_categories:
- 1K<n<10K
---

Prompts for the ICLR 2025 paper

[**ScImage: How Good Are Multimodal Large Language Models at Scientific Text-to-Image Generation?**](https://arxiv.org/pdf/2412.02368)

The prompt template and the object list will be added soon.

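A minimal sketch of loading the prompts with the 🤗 `datasets` library. The repository id below is a placeholder, not the actual dataset path; replace it with the id shown on this page.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the dataset's actual Hub path.
dataset = load_dataset("ORGANIZATION/ScImage")

# Inspect the available splits and look at one example prompt.
print(dataset)
print(dataset["train"][0])
```
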
```
@inproceedings{scimage2025,
  title={ScImage: How Good Are Multimodal Large Language Models at Scientific Text-to-Image Generation?},
  author={Zhang, Leixin and Cheng, Yinjie and Zhai, Weihe and Eger, Steffen and Belouadi, Jonas and Moafian, Fahimeh and Zhao, Zhixue},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/pdf?id=ugyqNEOjoU}
}
```