A simple trick involving poetry is enough to jailbreak the tech industry's leading AI models, researchers found.
Even the tech industry’s top AI models, created with billions of dollars in funding, are astonishingly easy to “jailbreak” — that is, trick into producing dangerous responses they’re prohibited from giving, such as explaining how to build a bomb.

