- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
  Paper • 2411.14257 • Published • 11
- Distinguishing Ignorance from Error in LLM Hallucinations
  Paper • 2410.22071 • Published
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations
  Paper • 2410.18860 • Published • 9
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
  Paper • 2410.11779 • Published • 25
Collections including paper arxiv:2401.06855
- Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation
  Paper • 2208.05309 • Published • 1
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models
  Paper • 2305.13711 • Published • 2
- Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
  Paper • 2302.09664 • Published • 3
- BARTScore: Evaluating Generated Text as Text Generation
  Paper • 2106.11520 • Published • 2
- Partially Rewriting a Transformer in Natural Language
  Paper • 2501.18838 • Published • 1
- AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders
  Paper • 2501.17148 • Published • 1
- Sparse Autoencoders Trained on the Same Data Learn Different Features
  Paper • 2501.16615 • Published • 1
- Open Problems in Mechanistic Interpretability
  Paper • 2501.16496 • Published • 16
- vectara/hallucination_evaluation_model
  Text Classification • Updated • 127k • 245
- notrichardren/HaluEval
  Viewer • Updated • 35k • 557
- TRUE: Re-evaluating Factual Consistency Evaluation
  Paper • 2204.04991 • Published • 1
- Fine-grained Hallucination Detection and Editing for Language Models
  Paper • 2401.06855 • Published • 4