
Victor Nogueira

Felladrin

AI & ML interests

Models to run in the web browser

Recent Activity

updated a Space about 3 hours ago
Felladrin/awesome-ai-web-search
liked a model about 4 hours ago
watt-ai/watt-tool-8B
updated a collection about 5 hours ago
Leaderboards

Organizations

Blog-explorers · MLX Community · Social Post Explorers · M4-ai · ONNX Community · Smol Community

Felladrin's activity

replied to victor's post 1 day ago

This update is massive!! 🙌

I’d love it if we could also filter Spaces so we could list only the ones in the Running state.

reacted to Tonic's post with 🔥 8 days ago
🙋🏻‍♂️ Hey there folks,

Our team made a game during the @mistral-game-jam, and we're trying to win the community award!

Try our game out and drop us a ❤️ like to vote for us!

Mistral-AI-Game-Jam/TextToSurvive

Hope you like it!
reacted to AdinaY's post with 🚀 10 days ago
🔥 So many exciting releases coming from the Chinese community this month!
zh-ai-community/2025-january-6786b054f492fb223591269e

LLMs:
✨ Qwen2.5-1M by Alibaba
Qwen/qwen25-1m-679325716327ec07860530ba
✨ InternLM3-8B-Instruct by Shanghai AI Lab
internlm/internlm3-8b-instruct
✨ MiniMax-Text-01 by MiniMax AI
MiniMaxAI/MiniMax-Text-01
✨ RWKV-7 by BlinkDL -- RNN + Transformer 👀
BlinkDL/rwkv-7-world
✨ DeepSeek-R1 by DeepSeek -- THE ONE 🙌
https://huggingface.co/deepseek-ai
✨ Baichuan-M1-14B by Baichuan - Medical 🩺
baichuan-inc/Baichuan-M1-14B-Base
✨ Qwen2.5-Math-PRM by Alibaba - Math 🔢
Qwen/Qwen2.5-Math-PRM-7B

Code:
✨ Trae by ByteDance
https://trae.ai

TTS:
✨ T2A-01-HD by MiniMax AI
https://hailuo.ai/audio
✨ LLaSA by HKUST Audio
HKUSTAudio/Llasa-3B

MLLM:
✨ Kimi k1.5 by Moonshot AI
https://kimi.ai
✨ MiniCPM-o-2_6 by OpenBMB
openbmb/MiniCPM-o-2_6
✨ Sa2VA-4B by ByteDance
ByteDance/Sa2VA-4B
✨ VideoLLaMA 3 by Alibaba DAMO
DAMO-NLP-SG/videollama3-678cdda9281a0e32fe79af15
✨ LLaVA-Mini by Chinese Academy of Sciences
ICTNLP/llava-mini-llama-3.1-8b
✨ Hunyuan-7B by Tencent
tencent/Hunyuan-7B-Instruct
✨ Hunyuan 3D 2.0 by Tencent
tencent/Hunyuan3D-2
✨ MiniMax-VL-01 by MiniMax AI - a non-transformer-based VLM 👀
MiniMaxAI/MiniMax-VL-01

Agent:
✨ UI-TARS by ByteDance
bytedance-research/UI-TARS-7B-SFT
✨ GLM-PC by Zhipu AI
https://cogagent.aminer.cn

Dataset:
✨ Fineweb-Edu-Chinese by Opencsg
opencsg/Fineweb-Edu-Chinese-V2.1
✨ Multimodal_textbook by Alibaba
DAMO-NLP-SG/multimodal_textbook
✨ MME-Finance by Hithink AI
reacted to ngxson's post with 🚀 28 days ago
reacted to tomaarsen's post with ❤️ about 1 month ago
That didn't take long! Nomic AI has finetuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering and more!

Details:
🤖 Based on ModernBERT-base with 149M parameters.
📊 Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
🏎️ Immediate FA2 and unpadding support for super-efficient inference.
🪆 Trained with Matryoshka support, i.e. 2 valid output dimensionalities: 768 and 256.
➡️ Maximum sequence length of 8192 tokens!
2️⃣ Trained in 2 stages: unsupervised contrastive data -> high quality labeled datasets.
➕ Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
🏛️ Apache 2.0 licensed: fully commercially permissible

Try it out here: nomic-ai/modernbert-embed-base
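The Matryoshka property mentioned above means the 768-dim embedding can be truncated to its first 256 dimensions and renormalized for cheaper storage and search. A minimal stdlib-only sketch of that idea (the vector below is made up for illustration, not a real embedding):

```python
import math

def truncate_embedding(vec, dim):
    """Matryoshka-style truncation: keep the first `dim` components,
    then L2-normalize so the result remains usable for cosine similarity."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [0.5, -0.25, 0.8, 0.1, -0.3, 0.6]   # stand-in for a 768-dim embedding
small = truncate_embedding(full, 3)         # stand-in for the 256-dim version
print(len(small))                           # truncated length
```

The point is that a Matryoshka-trained model concentrates the most useful information in the leading dimensions, so this cheap truncation loses little quality.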

Very nice work by Zach Nussbaum and colleagues at Nomic AI.
reacted to MoritzLaurer's post with 👍 about 2 months ago
Quite excited by the ModernBERT release! Small (0.15B/0.4B), 2T tokens of modern pre-training data including code, an 8k context window; a great efficient model for embeddings & classification!

This will probably be the basis for many future SOTA encoders! And I can finally stop using DeBERTav3 from 2021 :D

Congrats @answerdotai , @LightOnIO and collaborators like @tomaarsen !

Paper and models here 👇 https://huggingface.co/collections/answerdotai/modernbert-67627ad707a4acbf33c41deb
reacted to s3nh's post with 🤗 about 2 months ago
Welcome back,

Small Language Model enthusiasts and GPU-poor OSS enjoyers, let's connect.
I just created an organization whose main goal is to have fun with smaller models tunable on consumer-range GPUs. Feel free to join, and let's have some fun. Much love ;3

https://huggingface.co/SmolTuners
reacted to bartowski's post with 👍 about 2 months ago
Looks like Q4_0_N_M file types are going away

Before you panic: there's a new "preferred" method, online (I prefer the term on-the-fly) repacking. If you download Q4_0 and your setup can benefit from repacking the weights into interleaved rows (what Q4_0_4_4 was doing), it will do that automatically and give you similar performance (minor losses, I think, due to using intrinsics instead of assembly, but intrinsics are more maintainable).
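To make the "interleaved rows" idea concrete, here is a toy Python sketch of what repacking does to a weight matrix's memory layout. This only illustrates the concept; llama.cpp's actual Q4_0_4_4 layout additionally packs 4-bit quantized blocks:

```python
def interleave_rows(rows, group=4):
    """Toy row interleaving: take `group` rows at a time and emit their
    elements column-by-column, so SIMD lanes can load one element from
    each of the grouped rows with a single contiguous read."""
    packed = []
    for g in range(0, len(rows), group):
        chunk = rows[g:g + group]
        for col in range(len(chunk[0])):
            for row in chunk:
                packed.append(row[col])
    return packed

rows = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(interleave_rows(rows))  # [1, 3, 5, 7, 2, 4, 6, 8]
```

After interleaving, the elements that four dot products need at the same step sit next to each other in memory, which is the whole point of the repacked formats.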

You can see the reference PR here:

https://github.com/ggerganov/llama.cpp/pull/10446

So if you update your llama.cpp past that point, you won't be able to run Q4_0_4_4 (unless they add backwards compatibility), but Q4_0 should run at the same speed (though it may currently be bugged on some platforms).

As such, I'll stop making those newer model formats soon, probably by the end of this week unless something changes, but you should be safe to download Q4_0 quants and use those!

Also, IQ4_NL supports repacking (though not in as many shapes yet) and should get a respectable speed-up on ARM chips; the PR for that can be found here: https://github.com/ggerganov/llama.cpp/pull/10541

Remember, these are not meant for Apple silicon since those use the GPU and don't benefit from the repacking of weights
reacted to thomwolf's post with 🚀 about 2 months ago
We are proud to announce HuggingFaceFW/fineweb-2: A sparkling update to HuggingFaceFW/fineweb with 1000s of 🗣️languages.

We applied the same data-driven approach that led to SOTA English performance in 🍷 FineWeb to thousands of languages.

🥂 FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.

The dataset is released under the permissive 📜 ODC-By 1.0 license, and the 💻 code to reproduce it and our evaluations is public.

We will very soon announce a big community project, and are working on a 📝 blogpost walking you through the entire dataset creation process. Stay tuned!

In the meantime, come ask us questions in our chat space: HuggingFaceFW/discussion

H/t @guipenedo @hynky @lvwerra as well as @vsabolcec Bettina Messmer @negar-foroutan and @mjaggi
reacted to ginipick's post with 🚀 about 2 months ago
# 🎨 FLUX LLAMA: Turn Your PC into a Design Studio

Hello! Today, we're introducing FLUX LLAMA, an innovative AI image generation tool that ranked 2nd in Hugging Face's weekly downloads. Now you can create professional-grade images with clear text right from your PC, with no need for high-performance servers! 😊

## ✨ What It Can Do
- 🔍 **Crystal Clear Text**: Type "Welcome" and see it appear crystal clear in your image
- 🖥️ **Local Processing**: Run it on your PC with just an RTX 3060 (8x lighter with 4-bit quantization)
- ⚡ **Quick Generation**: Create professional marketing images in 5 minutes
- 🌏 **Multilingual Support**: Perfect results in any language
- 🎯 **Real-time Editing**: Instant image modifications and regeneration

## 🛠 Core Technology
- Double Stream + Single Stream architecture for perfect text processing
- Powerful embedding combination of T5-XXL and CLIP
- 4-bit quantization optimization (3GB → 375MB)
- Fast processing with local GPU acceleration
- Automatic language translation pipeline
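To illustrate the memory savings 4-bit quantization gives, here is a stdlib-only Python sketch of the general idea (symmetric round-to-nearest with a single scale; this is an assumption for illustration, not FLUX LLAMA's actual quantization scheme):

```python
def quantize_4bit(weights):
    """Toy symmetric 4-bit quantization: map floats to integers in
    [-8, 7] using one shared scale, so each weight needs 4 bits
    instead of 32."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]

w = [0.7, -0.3, 0.07, 0.0]
q, s = quantize_4bit(w)
print(q)  # [7, -3, 1, 0] -- each value fits in 4 bits
```

At 4 bits per weight instead of 32, a 3 GB float32 tensor shrinks to roughly 375 MB, which matches the figure quoted above.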

## 💡 Use Cases
- SNS marketing image creation
- Product promotion banner generation
- Event poster design
- Social media content creation
- Product description image generation

No more hiring designers or learning complex design tools! Simply input what you want, and AI will create professional-grade results.

Easy to start, professional results - that's the magic of FLUX LLAMA! 🌟

Start creating now! Share your experience with us 😊

#FLUXLLAMA #AIImageGeneration #MarketingTools #DesignAI #HuggingFace

PS: FLUX LLAMA is an innovative AI image generation tool developed by GiniPick, optimized especially for creating images with text. Plus, it boasts a lightweight model that runs on standard PCs!

ginipick/FLUXllama
reacted to garrethlee's post with 🧠 about 2 months ago
The latest o1 model from OpenAI still can't correctly answer whether 9.11 > 9.9 🤔

A possible explanation? Tokenization - and our latest work investigates how it affects a model's ability to do math!

In this blog post, we discuss:
🔢 The different ways numbers are tokenized in modern LLMs
🧪 Our detailed approach in comparing these various methods
🥪 How we got a free boost in arithmetic performance by adding a few lines of code to the base Llama 3 tokenizer
👑 and a definitive, best tokenization method for math in LLMs!

Check out our work here: huggingface/number-tokenization-blog
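As a toy illustration of one of the tokenization strategies such work compares, right-to-left three-digit chunking keeps place value aligned across numbers of different lengths (the ones/tens/hundreds digits always share a chunk position). A hedged sketch of that chunking, not the blog's actual tokenizer code:

```python
def chunk_number_r2l(number, size=3):
    """Split a digit string into right-to-left chunks of `size` digits,
    e.g. '1234567' -> ['1', '234', '567'], so the least-significant
    digits always line up in the final chunk."""
    digits = number
    chunks = []
    while digits:
        chunks.append(digits[-size:])
        digits = digits[:-size]
    return list(reversed(chunks))

print(chunk_number_r2l("1234567"))  # ['1', '234', '567']
print(chunk_number_r2l("987"))      # ['987']
```

Left-to-right chunking would instead produce ['123', '456', '7'], where a chunk's position no longer tells the model its magnitude.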
reacted to dylanebert's post with 🚀 2 months ago
Generate meshes with AI locally in Blender

📢 New open-source release

meshgen, a local Blender integration of LLaMA-Mesh, is open source and available now 🤗

get started here: https://github.com/huggingface/meshgen
reacted to andito's post with 🔥❤️ 2 months ago
Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput.

- SmolVLM generates tokens 7.5 to 16 times faster than Qwen2-VL! 🤯
- Other models at this size crash a laptop, but SmolVLM comfortably generates 17 tokens/sec on a MacBook! 🚀
- SmolVLM can be fine-tuned on Google Colab, or process millions of documents with a consumer GPU!
- SmolVLM even outperforms larger models on video benchmarks, despite not even being trained on videos!

Check out more!
Demo: HuggingFaceTB/SmolVLM
Blog: https://huggingface.co/blog/smolvlm
Model: HuggingFaceTB/SmolVLM-Instruct
Fine-tuning script: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
reacted to clem's post with 🚀 2 months ago
I've been in Brazil for 10 days now 🇧🇷🇧🇷🇧🇷

I've been surprised by the gap between the massive number of people interested in AI (ChatGPT adoption is crazy here) and the relatively low number of real AI builders, i.e. people and companies building their own AI models, datasets and apps.

Lots of effort is needed across the world for everyone to participate in, control, and benefit from this foundational technology, starting with open-source & multilingual AI, more access to GPUs, and AI builder training for all!
reacted to cfahlgren1's post with ❤️ 3 months ago
observers 🔭 - automatically log all OpenAI compatible requests to a dataset💽

• supports any OpenAI compatible endpoint 💪
• supports DuckDB, Hugging Face Datasets, and Argilla as stores

> pip install observers

No complex framework. Just a few lines of code to start sending your traces somewhere. Let us know what you think! @davidberenstein1957 and I will continue iterating!

Here's an example dataset that was logged to Hugging Face from Ollama: cfahlgren1/llama-3.1-awesome-chatgpt-prompts
reacted to merve's post with 🚀 3 months ago
your hugging face profile now has your recent activities 🤗
replied to cfahlgren1's post 3 months ago

That's amazing!! It makes it so much easier! Thank you for sharing!

reacted to cfahlgren1's post with ❤️ 3 months ago
You can clean and format datasets entirely in the browser with a few lines of SQL.

In this post, I replicate the process @mlabonne used to clean the new microsoft/orca-agentinstruct-1M-v1 dataset.

The cleaning process consists of:
- Joining the separate splits together and adding a split column
- Converting string messages into lists of structs
- Removing empty system prompts

https://huggingface.co/blog/cfahlgren1/the-beginners-guide-to-cleaning-a-dataset

Here's his new cleaned dataset: mlabonne/orca-agentinstruct-1M-v1-cleaned
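The three cleaning steps described above can be sketched in plain Python on toy data. The field names and sample rows here are illustrative assumptions, not the real dataset's schema, and the post itself performs the cleaning in SQL in the browser:

```python
import json

def clean_dataset(splits):
    """Sketch of the three cleaning steps on toy data:
    1) join splits and record each row's origin in a 'split' column,
    2) parse stringified messages into lists of dicts (structs),
    3) drop rows whose system prompt is empty."""
    rows = []
    for split_name, split_rows in splits.items():
        for row in split_rows:
            row = dict(row, split=split_name)              # step 1
            row["messages"] = json.loads(row["messages"])  # step 2
            system = next((m for m in row["messages"]
                           if m["role"] == "system"), None)
            if system and system["content"].strip():       # step 3
                rows.append(row)
    return rows

splits = {
    "train": [
        {"messages": '[{"role": "system", "content": "Be helpful."},'
                     ' {"role": "user", "content": "Hi"}]'},
        {"messages": '[{"role": "system", "content": ""},'
                     ' {"role": "user", "content": "Hi"}]'},
    ],
}
cleaned = clean_dataset(splits)
print(len(cleaned), cleaned[0]["split"])  # 1 train
```

The second toy row is dropped because its system prompt is empty, mirroring the last cleaning step.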
replied to victor's post 3 months ago

Hugging Face is doing great! It's building a healthy community - Kudos for that!

One thing I miss here is the ability to filter models during search by size (number of parameters). I'm particularly interested in models smaller than 2B, but currently I have to go through several pages (and other search steps) to find them.