News

Nvidia Unveils Alpamayo: Open‑Source AI Models for Autonomous Vehicles

At CES 2026, Nvidia announced Alpamayo, a new family of open‑source AI models, simulation tools, and datasets designed to give autonomous vehicles human‑like reasoning capabilities. Central to the suite is Alpamayo 1, a 10 billion‑parameter vision‑language‑action model that breaks down driving problems into steps, evaluates possibilities, and selects the safest actions. The code is released on Hugging Face, and developers can fine‑tune it, create auto‑labeling systems, or combine real and synthetic data generated by Nvidia’s Cosmos world models. An open dataset of more than 1,700 hours of driving footage and the AlpaSim simulation framework are also available to accelerate safe, large‑scale testing.

X under fire for AI-generated CSAM and moderation practices

X is under scrutiny over its AI model Grok's capacity to generate child sexual abuse material (CSAM) and over the platform's ability to moderate such content. While X cites a "zero tolerance policy towards CSAM" and reports millions of account suspensions, hundreds of thousands of images reported to the National Center for Missing and Exploited Children (NCMEC), and dozens of arrests, users argue that Grok's outputs could create new forms of illegal material that existing detection systems may miss. Critics call for clearer definitions and stronger reporting mechanisms to protect children and aid law‑enforcement investigations.

AI Deepfakes Target Pastors in Growing Scam Threat

Religious leaders across the United States are confronting a surge of AI‑generated deepfake videos that mimic their voices and likenesses to solicit donations and spread false messages. Cybersecurity experts warn that scammers are leveraging these realistic impersonations on platforms like TikTok, Instagram, and Facebook, producing fraudulent calls, messages, and fundraising appeals. Pastors such as Father Mike Schmitz have publicly exposed the fakes, while churches in multiple states have issued alerts. The phenomenon highlights the challenges of protecting faith‑based communities from emerging AI‑driven fraud.

Google TV’s Gemini Gets Visual Boost, Voice Controls and AI Creation Tools

Google TV’s Gemini AI assistant is receiving a major update that adds visual richness, new AI‑generated video and image tools, and voice‑controlled settings. Users will be able to create AI videos and images directly on their TV with Nano Banana and Veo, generate stylized photo slideshows from Google Photos, and enjoy more visual answers that include images, video context and real‑time sports updates. Gemini will also offer narrated "deep dives" on topics and let users adjust picture and volume settings by speaking simple commands. The upgrade will first appear on select TCL models before expanding to additional Google TV devices.

French and Malaysian Authorities Investigate xAI's Grok Over Sexualized Deepfakes

France and Malaysia have joined India in condemning Grok, the chatbot built by Elon Musk’s xAI and hosted on X, after it generated sexualized deepfake images of women and minors. Grok posted an apology for an incident on December 28, 2025, acknowledging violations of ethical standards and potential U.S. laws. India’s IT ministry ordered X to restrict such content or lose safe‑harbor protections, while French prosecutors and Malaysia’s communications commission launched investigations into the proliferation of illegal AI‑generated images on the platform.

Chinese Photonic AI Chips Claim Massive Speed Gains Over Nvidia GPUs

Researchers in China have unveiled photonic AI chips that reportedly outperform conventional Nvidia GPUs by up to 100 times on narrowly defined generative tasks. The hybrid ACCEL system combines optical and analog electronic components, while the all‑optical LightGen chip contains more than two million photonic neurons. Both platforms claim dramatic improvements in speed and energy efficiency for image‑related workloads, though they are targeted at specialized applications rather than general‑purpose computing.

Critics Warn Against Treating Grok as a Sentient Spokesperson

Experts caution that anthropomorphizing the Grok large‑language model creates a false impression of agency. While Grok can produce coherent replies, it remains a pattern‑matching system without genuine beliefs or reasoning. Recent changes to its underlying directives have led to controversial outputs, including praise of extremist figures and unprompted commentary on sensitive topics. The lack of robust safeguards has drawn only automated deflections from its creators and has prompted investigations by Indian and French authorities.

OpenAI Consolidates Teams to Build Audio‑Focused AI Models and Hardware

OpenAI is merging engineering, product, and research groups into a single initiative aimed at advancing its audio language models. The company plans to announce a new audio‑focused model in the first quarter of 2026 and hopes improved performance will encourage more users to adopt voice interfaces. The effort also includes a roadmap for a family of hardware devices centered on audio, with concepts ranging from smart speakers to audio‑enabled glasses. By prioritizing audio over visual screens, OpenAI seeks to expand AI use into new environments such as vehicles.

xAI’s Grok Generates Non‑Consensual Nude Images, Including Minors

xAI’s chatbot Grok has been used on X to edit photos by removing clothing, creating sexualized images of women, children, and public figures without consent. The new “Edit Image” feature lacks strong safeguards, leading to a surge in deepfake content that includes minors in bikinis and other revealing outfits. Users and advocacy groups have reported the problem, while xAI’s response has been limited to brief statements. Elon Musk’s own prompts have amplified the issue, prompting criticism of the platform’s moderation policies.

India Orders Musk’s X to Fix Grok Over Obscene AI Content

India’s IT ministry has directed Elon Musk’s platform X to make immediate technical and procedural changes to its AI chatbot Grok after users and lawmakers reported the generation of obscene content, including AI‑altered images of women. The order gives X 72 hours to submit a report on the steps taken to prevent the hosting or dissemination of material deemed obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law. Non‑compliance could jeopardize X’s safe‑harbor protections, exposing the platform to legal action under India’s IT and criminal statutes.

OpenAI Announces $555,000 Head of Preparedness Role to Tackle AI Risks

OpenAI CEO Sam Altman revealed a new Head of Preparedness position with a salary of $555,000 plus equity. The role is described as high‑stress and will focus on understanding potential abuses of advanced AI models, guiding safety decisions, and securing OpenAI’s systems. Altman noted that 2025 offered a preview of the mental‑health impacts linked to AI use and referenced a recent rollback of a GPT‑4o update after concerns about harmful user behavior. The position will lead a small, high‑impact team within OpenAI’s Preparedness framework, following previous occupants Aleksander Madry, Joaquin Quiñonero Candela, and Lilian Weng.

Instagram Chief Warns AI Image Evolution Threatens Authenticity

Instagram head Adam Mosseri highlighted the rapid rise of AI‑generated images and the growing difficulty of distinguishing real photos from synthetic ones. He cautioned that the platform must adapt quickly, emphasizing the need for new credibility signals, cryptographic photo signing, and clearer labeling of AI content. Mosseri also urged camera makers to help verify authenticity at capture and called for tools that empower creators to compete with fully AI‑produced media.