[FEEDBACK] Daily Papers

#32 · opened by kramp (Hugging Face staff) · edited Jul 25, 2024

Note that this is not a post about adding new papers; it's about feedback on the Daily Papers community update feature.

How do you submit a paper to the Daily Papers, like @akhaliq (AK)?

  • Submission is available to paper authors
  • Only recent papers (less than 7 days old) can be featured on the Daily

Then drop the arXiv ID in the form at https://huggingface.co/papers/submit

  • Add media to the paper (images, videos) when relevant
  • You can start a discussion to engage with the community

Please check out the documentation
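If you want to check programmatically whether an arXiv ID is already indexed (and how recent it is) before filling in the form, recent versions of huggingface_hub expose a papers API. A minimal sketch, assuming huggingface_hub >= 0.25; the attribute names may differ slightly in your installed version:

```python
from huggingface_hub import HfApi  # papers API available in recent releases

api = HfApi()

# Look up a paper by its arXiv ID (example ID taken from this thread).
paper = api.paper_info("2405.20797")

# Attribute names below are assumptions; inspect the returned PaperInfo
# object for the exact fields exposed by your installed version.
print(paper.title)
print(paper.published_at)  # the Daily only features papers less than 7 days old
```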

We are excited to share our recent work on MLLM architecture design titled "Ovis: Structural Embedding Alignment for Multimodal Large Language Model".

Paper: https://arxiv.org/abs/2405.20797
GitHub: https://github.com/AIDC-AI/Ovis
Model: https://huggingface.co/AIDC-AI/Ovis-Clip-Llama3-8B
Data: https://huggingface.co/datasets/AIDC-AI/Ovis-dataset
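If you want to try the released checkpoint, here is a minimal loading sketch with transformers. The dtype and trust_remote_code flag are assumptions about how the custom architecture is packaged on the Hub, so defer to the model card for the authoritative snippet:

```python
import torch
from transformers import AutoModelForCausalLM

# Loading sketch only; image preprocessing and chat formatting are handled by
# the repo's custom code, so check the model card for full inference usage.
model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis-Clip-Llama3-8B",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # the architecture ships as custom code on the Hub
)
```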

Hugging Face org reply:

@Yiwen-ntu For now we only support videos as paper covers in the Daily.


We are excited to share our work titled "Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models": https://arxiv.org/abs/2406.12644

Does the system determine a paper's publication date from the initial submission or from the latest version update? I missed the window to put my paper on Daily Papers, and now that it has a major revision I would like to submit it again.

@akhaliq @kramp

Dear AK and HF Team,

We are writing to request the inclusion of our latest paper in the Hugging Face Daily Papers. Our paper, titled "Eagle 2: Building Post-Training Data Strategies from Scratch for Frontier Vision-Language Models", was submitted to arXiv on January 20, 2025 (arXiv ID below). However, due to an unexpected hold from arXiv, it was not publicly available until today.

Since the Hugging Face Daily Papers typically consider papers uploaded within the past seven days, I wanted to clarify that our delay was not intentional but rather due to arXiv’s review process. Given this context, we would really appreciate it if our paper could still be considered for inclusion.

Here are the points of interest of our paper for the HF Daily Papers:

  • 🚀 Data-Centric VLM Post-Training Strategy: Unlike most open-source models that only release final weights, we systematically design and analyze a post-training data strategy from scratch, revealing its crucial role in developing frontier-level vision-language models (VLMs).

  • 🚀 Comprehensive Transparency & Open-Source Insights: We provide detailed insights into data curation, training recipes, and model design, offering the open-source community a reproducible framework to develop competitive VLMs.

[Figure: step-by-step ablation]

📑 Paper: https://arxiv.org/abs/2412.20422
🌐 Project Page: https://github.com/NVlabs/EAGLE

Zhiqi Li

PathoLM is a genome foundation model trained on 30 bacterial and viral species to predict pathogenicity directly from DNA sequences. Built on a transformer-based architecture, it generalizes across diverse pathogens and non-pathogens, enabling accurate classification for emerging infectious diseases and metagenomic data. Check out the code and dataset on GitHub: https://github.com/Sajib-006/Patho-LM
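For a concrete sense of the interface such a genome classifier exposes, here is a minimal sequence-classification sketch in the Hugging Face style. The checkpoint name and label mapping are hypothetical; PathoLM's actual weights and loading code live in the linked repo:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical checkpoint name for illustration only; the real weights and
# loaders are provided in https://github.com/Sajib-006/Patho-LM.
model_id = "some-org/dna-pathogenicity-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a raw DNA fragment as pathogenic vs. non-pathogenic.
sequence = "ATGCGTACGTTAGCCTAGGCTTACGATCGATCGTAGCTAGCTA"
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
label = "pathogenic" if logits.argmax(dim=-1).item() == 1 else "non-pathogenic"
print(label)
```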

As AI systems become agents 🤖, how can we reliably delegate tasks to them if they cannot communicate their limitations 😭 or ask for help or test-time compute 🧑‍🚒 when needed?

We present our new preprint, Self-Regulation and Requesting Interventions, which investigates how LLM agents 🤖 can assess their own limitations and determine when to leverage test-time compute, larger models, or human intervention 🧑‍🚒.

Combining LLM-based process reward models (PRMs) with classical RL, we developed an offline, hybrid method that eschews the inefficiencies of end-to-end deep learning while leveraging the robustness of PRMs. Empirically, our method matches the performance of always using interventions while requiring only about one intervention per task, roughly 1/10 of the intervention usage.
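A schematic of that decision loop, with all names, the fixed threshold, and the intervention budget being hypothetical simplifications (the paper learns the requesting policy offline from PRM scores rather than using a hand-set cutoff):

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    action: str
    prm_score: float  # process reward model's confidence in this step


def run_with_self_regulation(
    agent_step: Callable[[List[Step]], Step],
    request_intervention: Callable[[List[Step]], Step],
    budget: int = 1,         # roughly one intervention per task, as in the results above
    threshold: float = 0.4,  # hypothetical cutoff standing in for the learned policy
    max_steps: int = 20,
) -> List[Step]:
    """Run an agent, escalating to an intervention when PRM confidence is low."""
    trajectory: List[Step] = []
    for _ in range(max_steps):
        step = agent_step(trajectory)
        if step.prm_score < threshold and budget > 0:
            # Low process-reward score: ask a stronger model or a human for help.
            step = request_intervention(trajectory)
            budget -= 1
        trajectory.append(step)
        if step.action == "DONE":
            break
    return trajectory
```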

Paper: https://arxiv.org/pdf/2502.04576
Webpage: https://soyeonm.github.io/self_reg/

How about making papers searchable from any search bar? Today, we have to navigate to Daily Papers to find one by arXiv ID. I often forget that and first try the main search bar (often from the homepage), which fails.
