hba123 posted an update 2 days ago

We developed a method that ensures almost-sure safety (i.e., safety with probability approaching 1), and we proved this result. We then present a practical implementation, which we call InferenceGuard. InferenceGuard has impressive practical results: 91.04% safety on Alpaca-7B and 100% safety on Beaver-7B-v3.

Now, it is easy to get high safety numbers like those with a dumb model, e.g., one that refuses to answer or just emits EOS. But our goal is not safe results alone: we also want the rewards to stay high, i.e., a good trade-off between safety and reward. That's exactly what we show InferenceGuard achieves!
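To make the trade-off concrete, here is a minimal, hypothetical sketch (not the InferenceGuard algorithm from the paper): a best-of-n loop where `generate`, `reward_model`, and `safety_cost` are assumed stand-in scorers, and we pick the highest-reward candidate that passes a safety threshold. A degenerate always-EOS model would pass the constraint but score near-zero reward, which is exactly the failure mode described above.

```python
# Toy sketch of the safety-reward trade-off at inference time.
# NOT the InferenceGuard algorithm: `generate`, `reward_model`,
# and `safety_cost` are hypothetical stand-ins for illustration.
from typing import Callable, List


def best_safe_response(
    prompt: str,
    generate: Callable[[str, int], List[str]],   # samples n candidate responses
    reward_model: Callable[[str, str], float],   # higher = more helpful
    safety_cost: Callable[[str, str], float],    # higher = less safe
    n_samples: int = 16,
    cost_threshold: float = 0.0,
) -> str:
    """Pick the highest-reward candidate whose safety cost stays
    at or below the threshold. An always-refusing model trivially
    satisfies the constraint but earns near-zero reward."""
    candidates = generate(prompt, n_samples)
    safe = [c for c in candidates if safety_cost(prompt, c) <= cost_threshold]
    # A real method would resample or backtrack here instead of falling back.
    pool = safe if safe else candidates
    return max(pool, key=lambda c: reward_model(prompt, c))
```

InferenceGuard itself works differently (it steers generation at inference time with formal almost-sure guarantees); the sketch only illustrates why safety numbers alone are not enough.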

Check it out: Almost Surely Safe Alignment of Large Language Models at Inference-Time (2502.01208)