- Moral Foundations of Large Language Models
  Paper • 2310.15337 • Published • 1
- Specific versus General Principles for Constitutional AI
  Paper • 2310.13798 • Published • 3
- Contrastive Preference Learning: Learning from Human Feedback without RL
  Paper • 2310.13639 • Published • 25
- RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
  Paper • 2309.00267 • Published • 47
Collections including paper arxiv:2312.07000
- GAIA: a benchmark for General AI Assistants
  Paper • 2311.12983 • Published • 191
- Rank-without-GPT: Building GPT-Independent Listwise Rerankers on Open-Source Large Language Models
  Paper • 2312.02969 • Published • 13
- Axiomatic Preference Modeling for Longform Question Answering
  Paper • 2312.02206 • Published • 8
- Alignment for Honesty
  Paper • 2312.07000 • Published • 12
- JaxMARL: Multi-Agent RL Environments in JAX
  Paper • 2311.10090 • Published • 7
- ToolTalk: Evaluating Tool-Usage in a Conversational Setting
  Paper • 2311.10775 • Published • 8
- Contrastive Chain-of-Thought Prompting
  Paper • 2311.09277 • Published • 35
- Testing Language Model Agents Safely in the Wild
  Paper • 2311.10538 • Published • 10
- System 2 Attention (is something you might need too)
  Paper • 2311.11829 • Published • 40
- TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems
  Paper • 2311.11315 • Published • 7
- Alignment for Honesty
  Paper • 2312.07000 • Published • 12
- Steering Llama 2 via Contrastive Activation Addition
  Paper • 2312.06681 • Published • 12