Hui Sun

CocoSun

AI & ML interests

None yet

Recent Activity

Organizations

Social Post Explorers

CocoSun's activity

reacted to as-cle-bert's post with 👍 9 months ago
Hi HF Community!🤗

I'm thrilled to share the latest updates regarding the Space I built for protein 3D structure prediction ( as-cle-bert/proteinviz): thanks to @lunarflu's input, @osanseviero's precious advice, and @simonduerr's article "Visualize proteins on Hugging Face Spaces" (https://huggingface.co/blog/spaces_3dmoljs, go check it out!), I was finally able to display the 3D protein models directly in the browser, without any need to download bulky HTML files!

Take a look at the attached video, which shows how everything works, and make sure to visit the GitHub repository (https://github.com/AstraBert/proteinviz: leave a little ⭐ while you're there!)🥰

May you have fun and luck with your protein research!🧬
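For anyone curious how this kind of in-browser rendering can be wired up, here is a minimal sketch (not the Space's actual code) that feeds a PDB string to a 3Dmol.js viewer embedded in a Gradio HTML component, along the lines of the article linked above; the layout and styling choices are placeholder assumptions:

```python
# Minimal sketch: render a PDB structure client-side with 3Dmol.js
# inside a Gradio app (illustrative only, not proteinviz's actual code).
import gradio as gr

def render_pdb(pdb_text: str) -> str:
    # An iframe with srcdoc keeps the 3Dmol.js viewer self-contained,
    # so the structure renders in the browser with no file downloads.
    return f"""
    <iframe style="width:100%; height:480px; border:none;" srcdoc='
      <script src="https://3Dmol.org/build/3Dmol-min.js"></script>
      <div id="viewer" style="width:100vw; height:100vh;"></div>
      <script>
        const viewer = $3Dmol.createViewer("viewer");
        viewer.addModel(`{pdb_text}`, "pdb");  // load the structure
        viewer.setStyle({{}}, {{cartoon: {{color: "spectrum"}}}});  // cartoon rendering
        viewer.zoomTo();
        viewer.render();
      </script>'>
    </iframe>"""

demo = gr.Interface(
    fn=render_pdb,
    inputs=gr.Textbox(label="PDB file contents", lines=10),
    outputs=gr.HTML(label="3D structure"),
)
demo.launch()
```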
reacted to as-cle-bert's post with 🔥 9 months ago
Hi HF Community!🤗

If you are excited about AlphaFold3, but upset because it is not open-source, I might have a solution to cheer you up a little bit:

as-cle-bert/proteinviz

This is a Space that lets you predict the 3D structure of proteins from their amino acid sequences, with the protein folding model facebook/esmfold_v1: using this Space is the perfect quick-start to becoming a Protein Scientist! (or maybe not, who knows...🤔)

In the meantime, if you are curious about what's going on with AlphaFold3 and want something Biologist🔬/Computer Scientist💻-friendly, you can also check out the latest community blog post I wrote: https://huggingface.co/blog/as-cle-bert/what-is-going-on-with-alphafold3 🚀

Have fun and enjoy open-source science!🧬
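If you'd rather call the folding model directly, here's a minimal sketch using the transformers port of ESMFold; the sequence is an arbitrary placeholder, and it assumes a transformers version that ships EsmForProteinFolding (the weights are large, so a GPU helps):

```python
# Sketch: fold an amino acid sequence with facebook/esmfold_v1 via transformers.
import torch
from transformers import EsmForProteinFolding

model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder sequence

with torch.no_grad():
    # infer_pdb runs the folding trunk and returns the prediction as a PDB string
    pdb_string = model.infer_pdb(sequence)

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```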
reacted to their post with 🔥 9 months ago
Google LLM (Multimodal) Medical Foundation Model Summary

1. Med-PaLM: Large language models encode clinical knowledge, https://www.nature.com/articles/s41586-023-06291-2
2. Med-PaLM 2: Towards Expert-Level Medical Question Answering with Large Language Models, http://arxiv.org/abs/2305.09617
3. Med-PaLM M: Towards Generalist Biomedical AI, http://arxiv.org/abs/2307.14334
4. Med-Gemini: Capabilities of Gemini Models in Medicine, https://arxiv.org/abs/2404.18416v2; Advancing Multimodal Medical Capabilities of Gemini, https://arxiv.org/abs/2405.03162

reacted to KingNish's post with 🔥 9 months ago
Microsoft Just Launched 3 Powerful Models

1. Phi-3 Medium (4K and 128K): a 14B instruction-tuned model that outperforms bigger models like Command R+ (104B), GPT-3.5, and Gemini Pro, and is highly competitive with top models such as Mixtral 8x22B, Llama 3 70B, and GPT-4.
microsoft/Phi-3-medium-4k-instruct
DEMO: https://huggingface.co/spaces/Walmart-the-bag/Phi-3-Medium

2. Phi-3 Vision 128K: a 4.5 billion-parameter, instruction-tuned vision model that outperforms models such as LLaVA and Claude 3, and gives stiff competition to Gemini 1.0 Pro Vision.
microsoft/Phi-3-vision-128k-instruct

3. Phi-3 Small (8K and 128K): better than Llama 3 8B, Mixtral 8x7B, and GPT-3.5 Turbo.
microsoft/Phi-3-small-128k-instruct
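If you want to poke at any of these checkpoints, a minimal generation sketch with transformers might look like the following (the prompt is a placeholder, and trust_remote_code reflects that the Phi-3 checkpoints shipped custom modeling code at launch):

```python
# Sketch: chat with a Phi-3 checkpoint via transformers
# (swap in any of the model IDs above; assumes a GPU is available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-medium-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Phi-3 initially shipped custom modeling code
)

messages = [{"role": "user", "content": "Explain protein folding in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```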
posted an update 9 months ago
reacted to WizardLM's post with 🚀 10 months ago
🔥🔥🔥 Introducing WizardLM-2!

📙Release Blog: https://wizardlm.github.io/WizardLM2
✅Model Weights: microsoft/wizardlm-661d403f71e6c8257dbd598a
🐦Twitter: https://twitter.com/WizardLM_AI/status/1779899325868589372

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

WizardLM-2 8x22B is our most advanced model and the best open-source LLM in our internal evaluation on highly complex tasks. WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size. WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models 10x its size.

🤗 WizardLM-2 Capabilities:

1. MT-Bench (Figure 1)
WizardLM-2 8x22B demonstrates highly competitive performance compared to the most advanced proprietary models such as GPT-4-Turbo and Claude-3. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.

2. Human Preferences Evaluation (Figure 2)
In this human preferences evaluation, WizardLM-2's capabilities come very close to cutting-edge proprietary models such as GPT-4-1106-preview, and stay significantly ahead of all other open-source models.

🔍Method Overview:
As human-generated data from the natural world becomes increasingly exhausted through LLM training, we believe that data carefully created by AI, and models supervised step by step by AI, will be the sole path towards more powerful AI.

Over the past year, we built a fully AI-powered synthetic training system (as shown in Figure 3).
reacted to vladbogo's post with 👍 11 months ago
Synth² is a new approach that leverages large language models and text-to-image generators to create synthetic image-caption data for boosting visual-language model performance.

Key Points:
* Overcomes data limitations by generating high-quality synthetic image-caption pairs, reducing reliance on costly human annotations.
* Achieves competitive results on image captioning tasks using 40x less paired data than state-of-the-art methods.

Paper: Synth²: Boosting Visual-Language Models with Synthetic Captions and Image Embeddings (arXiv:2403.07750)
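As a rough illustration of the idea (not the paper's actual pipeline, which notably works in image-embedding space rather than rendering pixels), a synthetic caption-then-image loop could be sketched like this; both model IDs and the prompt are assumptions for the example:

```python
# Sketch of a Synth^2-style data pipeline: an LLM writes captions, a
# text-to-image model renders matching images, and the resulting pairs
# become synthetic training data for a visual-language model.
# Model IDs are illustrative assumptions, not the paper's setup.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

caption_writer = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
image_maker = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "Write one short, concrete caption describing an everyday scene."
synthetic_pairs = []
for _ in range(4):  # tiny batch, purely for illustration
    caption = caption_writer(
        prompt, max_new_tokens=30, do_sample=True, return_full_text=False
    )[0]["generated_text"].strip()
    image = image_maker(caption).images[0]
    synthetic_pairs.append((image, caption))  # one synthetic (image, caption) pair
```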

Congrats to the authors for their work!