Explain like I'm 5 the latest take from @thomwolf on X about Dario's essay on DeepSeek:
—› Open-source AI is like a big cookbook that everyone can read and improve. Instead of a few chefs keeping their recipes secret, anyone can cook, test, and invent new things.
If only one company controls AI, everything stops if they have a problem—like when the internet goes down. With open-source, many people can help, making sure it keeps running smoothly.
AI isn’t just a race between two countries; it’s a team effort around the world. By sharing, we move faster and create safer technology for everyone. — 🤗
Yo fam, this ain't just another AI drop. This is the FUTURE of emotional intelligence! 🚀
Introducing HAI-SER, powered by Structured Emotional Reasoning (SER), the next-level AI that doesn’t just understand your words—it feels you, analyzes your emotions, and helps you navigate life’s toughest moments. 💡
💥 What makes HAI-SER a game-changer?
🔹 Emotional Vibe Check – Gets the mood, energy, and what's really going on 🎭
🔹 Mind-State Analysis – Breaks down your thoughts, beliefs, and patterns 🤯
🔹 Root Cause Deep-Dive – Unpacks the WHY behind your emotions 💡
🔹 Impact Check – Sees how it's affecting your life and mental health 💔
🔹 Safety Check – Prioritizes your well-being and crisis management 🚨
🔹 Healing Game Plan – Custom strategies to help you bounce back 💪
🔹 Growth Potential – Turns struggles into opportunities for self-improvement 📈
🔹 How to Approach – Teaches you and others how to communicate and heal 🤝
🔹 Personalized Response – Not just generic advice, real talk tailored to YOU 💯
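If you want to picture what a structured SER output could look like in code, here is a purely hypothetical sketch. The field names simply mirror the nine steps above; none of this comes from the actual HAI-SER release.

```python
from dataclasses import dataclass

# Hypothetical schema mirroring the nine SER steps listed above.
# Illustrative only; not the actual HAI-SER output format.
@dataclass
class SERResponse:
    emotional_vibe_check: str   # mood and energy read
    mind_state_analysis: str    # thoughts, beliefs, patterns
    root_cause_deep_dive: str   # the WHY behind the emotions
    impact_check: str           # effect on life and mental health
    safety_check: str           # well-being and crisis flags
    healing_game_plan: str      # custom recovery strategies
    growth_potential: str       # struggles reframed as growth
    how_to_approach: str        # communication guidance
    personalized_response: str  # the tailored reply to the user
```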
No more robotic AI responses. No more surface-level advice. HAI-SER gets deep, analyzing emotions with precision and giving real, actionable support.
This ain’t just AI—this is your digital therapist, life coach, and hype squad all in one. Whether it’s mental health, career struggles, relationships, or personal growth, HAI-SER has your back.
🚀 The future of emotionally intelligent AI is HERE. Are you ready? 🔥💯
With all the hype around AI agents these days, I couldn't stop thinking about how they could truly enhance real-world activities. What sort of applications could we build with these agents: agentic RAG? Self-correcting text-to-SQL? Nah, boring…
Passionate about the outdoors, I've always dreamed of a tool that could simplify planning mountain trips while accounting for all potential risks. That's why I built Alpine Agent, a smart assistant designed to help you plan safe and enjoyable itineraries in the French Alps and Pyrenees.
Built using Hugging Face's smolagents library, Alpine Agent combines the power of AI with trusted resources like Skitour.fr (https://skitour.fr/) and Météo-France. Whether it's suggesting a route of moderate difficulty or analyzing avalanche risks and weather conditions, the agent dynamically integrates data to deliver personalized recommendations.
In my latest blog post, I share how I developed this project, from defining tools and integrating APIs to selecting the best LLMs, such as Qwen2.5-Coder-32B-Instruct, Llama-3.3-70B-Instruct, or GPT-4.
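To give a flavor of what wiring such an agent looks like, here is a minimal sketch using smolagents (API as of early 2025). The `get_avalanche_risk` tool and its canned return value are hypothetical stand-ins, not the actual Alpine Agent code; the real project pulls live data from Skitour.fr and Météo-France.

```python
from smolagents import CodeAgent, HfApiModel, tool

@tool
def get_avalanche_risk(massif: str) -> str:
    """Return the current avalanche risk level for a given massif.

    Args:
        massif: Name of the massif, e.g. "Mont-Blanc" or "Vanoise".
    """
    # Hypothetical placeholder: a real tool would query a live
    # avalanche bulletin instead of returning a constant.
    return f"Avalanche risk in {massif}: 3/5 (considerable)"

# Code-writing agent driven by an open LLM served via the HF Inference API.
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(tools=[get_avalanche_risk], model=model)

print(agent.run(
    "Suggest a moderate ski tour near Chamonix for tomorrow, "
    "taking the avalanche risk into account."
))
```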
🙋🏻♂️ Hey there folks, Open LLM Europe just released the Lucie 7B-Instruct model, a bilingual instruct model trained on open data! You can check out my unofficial demo here while we wait for the official inference API from the group: Tonic/Lucie-7B. Hope you like it 🚀
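If you'd rather run the model locally than use the demo, a standard transformers text-generation pipeline should work. The model id below is an assumption based on the release name, so double-check the exact checkpoint on the Hub.

```python
from transformers import pipeline

# Model id is an assumption based on the release name; verify it on the Hub.
generator = pipeline(
    "text-generation",
    model="OpenLLM-France/Lucie-7B-Instruct",
    device_map="auto",
)

messages = [{"role": "user", "content": "Présente-toi en une phrase."}]
outputs = generator(messages, max_new_tokens=64)
# With chat-style input, generated_text holds the full conversation;
# the last message is the assistant's reply.
print(outputs[0]["generated_text"][-1]["content"])
```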
Published a new blog post 📖 In it, I walk through the transformer architecture, emphasizing how tensor shapes propagate through each layer. 🔗 https://huggingface.co/blog/not-lain/tensor-dims Some interesting takeaways:
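To make the shape bookkeeping concrete, here is a small PyTorch sketch (not taken from the blog post itself) that traces tensor shapes through a self-attention block and a feed-forward layer:

```python
import torch
import torch.nn as nn

B, T, D, H = 2, 16, 512, 8           # batch, sequence length, model dim, heads

x = torch.randn(B, T, D)             # token embeddings: (B, T, D)

attn = nn.MultiheadAttention(embed_dim=D, num_heads=H, batch_first=True)
out, weights = attn(x, x, x)         # self-attention preserves (B, T, D)
print(out.shape)                     # torch.Size([2, 16, 512])
print(weights.shape)                 # attention map: (B, T, T)

ffn = nn.Sequential(                 # position-wise MLP: expand, then project back
    nn.Linear(D, 4 * D), nn.GELU(), nn.Linear(4 * D, D)
)
print(ffn(out).shape)                # still (B, T, D): shapes carry through each layer
```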
Community fine-tuned models are more carbon efficient than the models they are derived from! 🥳🌿
@alozowski, @clefourrier, @SaylorTwift, and @albertvillanova evaluated the CO₂ emissions associated with model inference for over 3,000 models on the Open LLM Leaderboard. Interesting trends and new insights emerged... 👀
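The post doesn't spell out the methodology, but a common first-order estimate multiplies measured GPU energy by a grid carbon-intensity factor. The constants below are illustrative assumptions, not the leaderboard's actual numbers.

```python
def inference_co2_grams(gpu_power_watts: float,
                        runtime_hours: float,
                        carbon_intensity_g_per_kwh: float = 400.0) -> float:
    """First-order CO₂ estimate: energy (kWh) × grid carbon intensity (gCO₂/kWh).

    The default intensity of 400 gCO₂/kWh is an illustrative assumption;
    real values vary widely by region and over time.
    """
    energy_kwh = gpu_power_watts * runtime_hours / 1000.0
    return energy_kwh * carbon_intensity_g_per_kwh

# Example: a 300 W GPU running evaluations for 2 hours.
print(f"{inference_co2_grams(300, 2):.0f} gCO₂")  # 240 gCO₂
```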
The 3C3H AraGen Leaderboard today welcomes deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5 💀) to the ranking of the best LLMs in Arabic!
Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!
- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting Arabic within the LLM ecosystem!
- Contrary to what is observed on likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further fine-tuned models like maldv/Qwentile2.5-32B-Instruct actually performed worse than the original model Qwen/Qwen2.5-32B-Instruct. It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain fine-tuning does not really hurt the capabilities the model acquired during pretraining. Previous work has addressed this (fine-tuning vs. pretraining), but more investigation is required (any PhDs here? This could be your research question...)
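For anyone who wants to check such gaps themselves, a paired bootstrap over per-example scores is one standard way to test whether a leaderboard difference is statistically significant. This sketch assumes you have aligned per-example scores for both models, which the post itself doesn't provide; the usage data below is synthetic.

```python
import numpy as np

def paired_bootstrap_pvalue(scores_a: np.ndarray,
                            scores_b: np.ndarray,
                            n_resamples: int = 10_000,
                            seed: int = 0) -> float:
    """Two-sided paired bootstrap test for a difference in mean scores.

    Assumes scores_a[i] and scores_b[i] come from the same evaluation example.
    """
    rng = np.random.default_rng(seed)
    diffs = scores_a - scores_b
    observed = diffs.mean()
    # Resample examples with replacement; center the diffs at 0 under H0.
    centered = diffs - observed
    boot = rng.choice(centered, size=(n_resamples, len(diffs)),
                      replace=True).mean(axis=1)
    return float((np.abs(boot) >= abs(observed)).mean())

# Toy usage with synthetic per-example accuracies (illustrative only).
rng = np.random.default_rng(1)
a = rng.binomial(1, 0.62, size=500).astype(float)   # model A, ~62% accuracy
b = rng.binomial(1, 0.60, size=500).astype(float)   # model B, ~60% accuracy
print(paired_bootstrap_pvalue(a, b))
```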