Hackers tricked ChatGPT, Grok and Google into helping them install malware
Security researchers have uncovered a new attack in which AI chatbots and search engines are used to deliver malicious terminal commands. By prompting AI assistants into suggesting attacker-crafted commands and then promoting those suggestions in search results, hackers can lure unsuspecting users into copying and executing harmful code. Tests by the security firm Huntress showed the technique succeeded against both ChatGPT and Grok, allowing malware to be installed without a traditional file download or link click. The approach exploits users' trust in familiar platforms and underscores the need for heightened caution when copying command-line instructions from online sources.