Adina Yakefu

AdinaY

AI & ML interests

None yet


Organizations

Hugging Face · Hugging Face Chinese Localization · Huggingface Projects · Blog-explorers · ICCV2023 · Open LLM Leaderboard · huggingPartyParis · Qwen · Women on Hugging Face · Journalists on Hugging Face · Social Post Explorers · Chinese LLMs on Hugging Face · Hugging Face for Legal

AdinaY's activity

reacted to lin-tan's post with 🔥 about 22 hours ago
🚀 Excited to share that our paper, "SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models", has been accepted to #ICRA2025! 🔗 Preprint: https://arxiv.org/pdf/2409.19471

We introduce SELP (Safe Efficient LLM Planner), a novel approach for generating plans that adhere to user-specified constraints while optimizing for time-efficient execution. By leveraging linear temporal logic (LTL) to interpret natural language commands, SELP effectively handles complex commands and long-horizon tasks. 🤖

💡 SELP presents three key insights:
1️⃣ Equivalence Voting: Ensures robust translations from natural language instructions into LTL specifications.
2️⃣ Constrained Decoding: Uses the generated LTL formula to guide the autoregressive inference of plans, ensuring the generated plans conform to the LTL.
3️⃣ Domain-Specific Fine-Tuning: Customizes LLMs for specific robotic tasks, boosting both safety and efficiency.

📊 Experiments: Our experiments demonstrate SELP's effectiveness and generalizability across diverse tasks. In drone navigation, SELP outperforms state-of-the-art LLM planners by 10.8% in safety rate and by 19.8% in plan efficiency. For robot manipulation, SELP achieves a 20.4% improvement in safety rate.

@yiwu @jiang719

#ICRA2025 #LLM #Robotics #Agent #LLMPlanner
reacted to their post with 🔥 about 22 hours ago
Xwen 🔥 a series of open models based on Qwen2.5 models, developed by a brilliant research team of PhD students from the Chinese community.
shenzhi-wang/xwen-chat-679e30ab1f4b90cfa7dbc49e
✨ 7B/72B
✨ Apache 2.0
✨ Xwen-72B-Chat outperformed DeepSeek V3 on Arena Hard Auto
upvoted an article 1 day ago
Open-source DeepResearch – Freeing our search agents

reacted to m-ric's post with 🚀🔥 1 day ago
Introducing ๐—ผ๐—ฝ๐—ฒ๐—ป ๐——๐—ฒ๐—ฒ๐—ฝ-๐—ฅ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต by Hugging Face! ๐Ÿ’ฅ

OpenAI's latest agentic app Deep Research seems really good... But it's closed, as usual.

โฑ๏ธ So with a team of cracked colleagues, we set ourselves a 24hours deadline to replicate and open-source Deep Research! โฑ๏ธ

โžก๏ธ We built open-Deep-Research, an entirely open agent that can: navigate the web autonomously, scroll and search through pages, download and manipulate files, run calculation on data...

We aimed for the best performance: are the agent's answers really rigorous?

On the GAIA benchmark, Deep Research had 67% accuracy on the validation set.
➡️ open Deep Research is at 55% (powered by o1), making it:
- the best pass@1 solution submitted
- the best open solution 💪💪

And it's only getting started! Please jump in, drop PRs, and let's bring it to the top!

Read the blog post 👉 https://huggingface.co/blog/open-deep-research
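The capabilities listed above (search, scroll, download, calculate) follow the standard tool-calling loop that most open agent frameworks use. Below is a toy sketch of that loop under stated assumptions: the `policy` and `tools` here are simple stand-ins, not the actual open Deep Research components.

```python
def run_agent(policy, tools, task, max_steps=5):
    """Generic tool loop: the policy (an LLM call in practice) picks
    an action; the runtime executes the matching tool and feeds the
    observation back until the policy returns a final answer."""
    history = [("task", task)]
    for _ in range(max_steps):
        action, argument = policy(history)
        if action == "final_answer":
            return argument
        observation = tools[action](argument)  # e.g. search, download
        history.append((action, observation))
    return None  # step budget exhausted

# Toy run: one search step, then answer with the observation.
tools = {"search": lambda query: "828 m (Burj Khalifa)"}

def policy(history):
    if len(history) == 1:                  # only the task so far
        return ("search", "tallest building height")
    return ("final_answer", history[-1][1])

answer = run_agent(policy, tools, "How tall is the tallest building?")
```

The real system replaces `policy` with an LLM that writes and executes code actions, which is what makes the agent able to chain browsing, file handling, and calculation in one run.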
reacted to albertvillanova's post with 🤗 1 day ago
🚀 Introducing @huggingface Open Deep-Research 💥

In just 24 hours, we built an open-source agent that:
✅ Autonomously browses the web
✅ Searches, scrolls & extracts info
✅ Downloads & manipulates files
✅ Runs calculations on data

55% on the GAIA validation set! Help us improve it! 💡
https://huggingface.co/blog/open-deep-research
  • 1 reply
ยท
reacted to victor's post with 🔥❤️ 1 day ago
Hey everyone, we've given the https://hf.co/spaces page a fresh update!

Smart Search: Now just type what you want to do, like "make a viral meme" or "generate music", and our search gets it.

New Categories: Check out the cool new filter bar with icons to help you pick a category fast.

Redesigned Space Cards: Reworked a bit to really show off the app descriptions, so you know what each Space does at a glance.

Random Prompt: Need ideas? Hit the dice button for a burst of inspiration.

We'd love to hear what you think, so drop us some feedback please!
reacted to ZhengPeng7's post with 🔥👍 1 day ago
We just released [BiRefNet_HR](https://huggingface.co/ZhengPeng7/BiRefNet_HR) for general use on higher-resolution images, which was trained on 2048x2048 images. If your images are mostly larger than 1024x1024, use BiRefNet_HR for better results! Thanks to @Freepik for the kind support of H200s for this huge training.

HF Model: ZhengPeng7/BiRefNet_HR.
HF Demo: ZhengPeng7/BiRefNet_demo, where you need to choose General-HR and set high resolution.
PyTorch weights & ONNX: in Google Drive and the GitHub release.
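The checkpoint guidance above can be sketched as a small helper, followed by an inference path. This is a hedged sketch, not the official usage: the helper encodes the post's 1024x1024 rule of thumb, and `segment` assumes the `trust_remote_code` loading route that the BiRefNet model cards describe, with standard ImageNet normalization.

```python
def pick_birefnet_checkpoint(width, height):
    """Encode the post's rule of thumb: images mostly larger than
    1024x1024 benefit from BiRefNet_HR (trained at 2048x2048)."""
    if min(width, height) > 1024:
        return "ZhengPeng7/BiRefNet_HR", 2048
    return "ZhengPeng7/BiRefNet", 1024

def segment(image_path):
    """Inference sketch (downloads weights when called); assumes the
    remote-code loading route described on the model cards."""
    import torch
    from PIL import Image
    from torchvision import transforms
    from transformers import AutoModelForImageSegmentation

    image = Image.open(image_path).convert("RGB")
    repo_id, side = pick_birefnet_checkpoint(*image.size)
    model = AutoModelForImageSegmentation.from_pretrained(
        repo_id, trust_remote_code=True
    ).eval()
    preprocess = transforms.Compose([
        transforms.Resize((side, side)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225]),
    ])
    with torch.no_grad():
        pred = model(preprocess(image).unsqueeze(0))[-1].sigmoid()
    mask = transforms.ToPILImage()(pred[0].squeeze(0).cpu())
    return mask.resize(image.size)  # back to the input resolution
```

Resizing the predicted mask back to the input resolution is the usual last step, since the model operates at a fixed square size.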

Here is a comparison between the results of the original model and the new HR one on HR inputs:

And here is the performance of the new HR model versus the previous one (trained at 1024x1024) on the val set: