How do you get an AI to answer a question it’s not supposed to? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful […]
