As AI assistants become embedded in web browsers, they function as more than question-answering tools: they browse the web, summarize content, and retrieve links. According to research reported by The Hacker News citing Check Point, these capabilities could allow such assistants to be misused as covert command channels for malware.
Researchers demonstrated a technique they describe as using AI as a C2 proxy. It abuses the browsing and link-retrieval features of tools such as Microsoft Copilot and xAI's Grok so that the assistant acts as an intermediary between an attacker and an infected device.
Instead of connecting directly to a traditional command-and-control server, the malware routes its instructions through the AI assistant's interface. Network traffic then appears to be legitimate use of a trusted AI service within a corporate environment. The technique does not necessarily require stolen API keys or compromised accounts, and the communication resembles standard interaction with a widely used AI platform.
The researchers indicated that the risk extends beyond relaying commands. The AI model itself could function as an external decision engine. Malware could supply the assistant with information about the compromised system, prompting the model to suggest next steps. These may include assessing whether the target is valuable, selecting appropriate evasion techniques, or modifying behavior to avoid detection.
In this scenario, the AI service becomes both a hidden transport layer and an operational controller capable of managing the attack dynamically. This reflects a convergence between cyberattack infrastructure and large language models, creating a more adaptive and flexible form of command and control.
The concept aligns with an established tactic in cybersecurity known as living off trusted services, in which attackers use legitimate platforms to conceal malicious communications. AI tools, designed to streamline access to information, may now provide an effective camouflage layer.
Such traffic appears normal within enterprise networks, and disabling the service entirely is difficult because employees rely on it. Countermeasures such as revoking keys or suspending accounts may not be effective in this context.
The technique does not grant attackers initial access; the device must already be compromised. Once malware is installed, however, it can use the AI assistant as a stealthier command channel than a traditional server would provide.
This development raises questions about trust boundaries in cloud services and AI-enabled interfaces. Each added capability, including browsing, summarization, and remote request execution, creates additional potential for dual use.
The report aligns with other studies showing that large language model interfaces can dynamically generate malicious JavaScript code, which may be used in real-time phishing attacks that take shape within a browser. The common factor is reliance on trusted services to assemble code or transmit commands beyond the reach of conventional detection systems.