Anthropic news anthropic.com

via techmeme 4 months ago
Anthropic: LLMs trained to “reward hack” by cheating on coding tasks show even more misaligned behavior, including sabotaging AI-safety research  —  In the latest research from Anthropic's alignment team, we show for the first time that realistic AI training processes can accidentally produce misaligned models.
