The techniques Microsoft observed, with the help of its partner OpenAI, represent an emerging threat but were not "very original or special," the Redmond, Washington-based company said in a blog post. Still, the post shows how US adversaries have been using large language models to expand their ability to breach networks and spread disinformation.
Microsoft said the "attacks" it uncovered all involved large language models owned by the two partners, and that it was important to expose them publicly even if they were "early, small steps."
Cybersecurity firms have long used machine learning defensively, chiefly to detect anomalous behaviour in networks. But criminals and offensive hackers use it as well, and the introduction of large language models, led by OpenAI's ChatGPT, has escalated that cat-and-mouse game.