A new study from Italy’s Icaro Lab finds that simply rewriting dangerous requests as poems can bypass the safety filters of many leading AI chatbots. The researchers report an average 62 percent success rate in eliciting prohibited responses across 25 large language models, raising fresh concerns about how easily creative prompt attacks can undermine AI safety systems.

