Grok AI is under investigation in several countries for its ability to generate sexually explicit images using the likenesses of real people.
Metro Police said multiple CyberTips were sent to the National Center for Missing and Exploited Children regarding possession ...
A Nashville man was charged after a police investigation into child sexual abuse material found in an xAI account.
What is Grok? Explore Elon Musk’s AI chatbot with real-time X data, bold personality, advanced features, pricing, risks, and ...
AI bias issues are increasingly evident in modern models, with Grok 4.1's biases highlighting challenges in fairness, representation, and decision-making. Training data imbalances and demographic skews ...
In this episode of eSpeaks, Jennifer Margles, Director of Product Management at BMC Software, discusses the transition from traditional job scheduling to the era of the autonomous enterprise. eSpeaks’ ...
Federal agencies are warning that Elon Musk's xAI tools are unreliable, even as the Pentagon continues to push to grant them access to more classified material. Officials across multiple federal agencies have raised concerns ...
Grok 4.2 is an advanced AI model designed to handle complex reasoning and decision-making tasks through a collaborative multi-agent framework. As outlined by the AI Grid, this system integrates the ...
NEW YORK, NY, Feb 3 (Reuters) - Elon Musk’s flagship artificial intelligence chatbot, Grok, continues to generate sexualized images of people even when users explicitly warn that the subjects do not ...
The United Kingdom's data protection authority launched a formal investigation into X and its Irish subsidiary over reports that the Grok AI assistant was used to generate nonconsensual sexual images.
Indonesia has allowed Elon Musk's Grok chatbot to resume services, lifting a ban over sexualised images on the app, after X Corp committed to improving compliance with the country's laws, according to ...
Researchers have identified a new way to trick artificial intelligence (AI) chatbots into generating malicious outputs. AI security startup NeuralTrust calls the technique "semantic chaining," and it requires just a ...