
GO:OD:AM (tegridydev)

AI & ML interests

Mechanistic Interpretability (MI) Research & sp00ky code stuff


Organizations

None yet

tegridydev's activity

published an article 6 days ago

LLM Dataset Formats 101: A No-BS Guide for Hugging Face Devs
By tegridydev
posted an update 6 days ago
Open-MalSec v0.1 – Open-Source Cybersecurity Dataset

Evening! 🫑

📂 Just uploaded an early-stage open-source cybersecurity dataset focused on phishing, scams, and malware-related text samples.

This is the base version (v0.1): a few structured sample files. Full dataset builds will come over the next few weeks.

🔗 Dataset link:

https://huggingface.co/datasets/tegridydev/open-malsec

🔍 What's in v0.1?
A few structured scam examples (text-based)
Covers DeFi, crypto, phishing, and social engineering
Initial labelling format for scam classification

⚠️ This is not a full dataset yet (only sample files are currently available). Just establishing the structure + getting feedback.

📂 Current Schema & Labelling Approach
"instruction" → Task prompt (e.g., "Evaluate this message for scams")
"input" → Source & message details (e.g., Telegram post, Tweet)
"output" → Scam classification & risk indicators

🗂️ Current v0.1 Sample Categories
Crypto Scams → Meme token pump & dumps, fake DeFi projects
Phishing → Suspicious finance/social media messages
Social Engineering → Manipulative messages exploiting trust

🔜 Next Steps
- Expanding the dataset with more phishing & malware examples
- Refining schema & annotation quality
- Open to feedback, contributions, and suggestions

If this is something you might find useful, bookmark/follow/like the dataset repo <3

💬 Thoughts, feedback, and ideas are always welcome! Drop a comment or DMs are open 🤙
posted an update 8 days ago
So, what is #MechanisticInterpretability 🤔

Mechanistic Interpretability (MI) is the discipline of opening the black box of large language models (and other neural networks) to understand the underlying circuits, features, and mechanisms that give rise to specific behaviours.

Instead of treating a model as a monolithic function, we can:

1. Trace how input tokens propagate through attention heads & MLP layers
2. Identify localized “circuit motifs”
3. Develop methods to systematically break down or “edit” these circuits to confirm we understand the causal structure (a toy activation-patching sketch is included at the end of this post).

Mechanistic Interpretability aims to yield human-understandable explanations of how advanced models represent and manipulate concepts, which hopefully leads to:

1. Trust & Reliability
2. Safety & Alignment
3. Better Debugging / Development Insights

https://bsky.app/profile/mechanistics.bsky.social/post/3lgvvv72uls2x
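
As a companion to the steps above, here is a minimal activation-patching sketch using the TransformerLens library (a common MI toolkit; the post itself doesn't prescribe any library). The model choice, prompts, and the specific head index are purely illustrative assumptions.

```python
# Toy activation-patching sketch with TransformerLens (illustrative only).
# Assumptions: GPT-2 small, two prompts of equal token length, and an arbitrary
# attention head (layer 9, head 8) chosen for demonstration.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

clean_tokens = model.to_tokens("The capital of France is")
corrupt_tokens = model.to_tokens("The capital of Italy is")  # same token length

# Step 1 (trace): run the clean prompt and cache every intermediate activation.
_, clean_cache = model.run_with_cache(clean_tokens)

# Step 3 (edit): while running the corrupted prompt, overwrite one attention
# head's output with its cached value from the clean run (activation patching).
layer, head = 9, 8  # hypothetical head of interest

def patch_head(z, hook):
    # z has shape [batch, position, n_heads, d_head]
    z[:, :, head, :] = clean_cache[hook.name][:, :, head, :]
    return z

patched_logits = model.run_with_hooks(
    corrupt_tokens,
    fwd_hooks=[(f"blocks.{layer}.attn.hook_z", patch_head)],
)

# If patching this head pushes the final-position prediction back toward
# " Paris", that is evidence the head participates in the relevant circuit.
paris = model.to_single_token(" Paris")
print("patched logit for ' Paris':", patched_logits[0, -1, paris].item())
```

Step 2 (finding the motifs worth patching) is usually the iterative part: sweep a patch like this over layers and heads and see which ones actually move the output.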