News

OpenAI Launches ChatGPT Health Feature with Enhanced Safeguards

OpenAI has introduced ChatGPT Health, a dedicated health‑focused tab within the ChatGPT app that offers users a safer way to ask medical questions, review lab results, and organize health information. The feature uses the same large language model as standard ChatGPT but adds stricter limits, physician‑reviewed responses, and extra encryption to protect sensitive data. It can sync with apps such as Apple Health and accept uploaded documents, yet it does not replace professional diagnosis or treatment. OpenAI stresses that the tool is for consumer wellness and is not HIPAA‑covered, acknowledging ongoing risks such as hallucinations and urging users to exercise caution.

OpenAI Expands Into Indian Higher‑Education System Through Campus Partnerships

OpenAI announced a partnership with six public and private higher‑education institutions in India, aiming to provide campus‑wide access to its ChatGPT Edu tools, faculty training, and responsible‑use frameworks. The initiative targets more than 100,000 students, faculty, and staff and includes collaborations with Indian ed‑tech platforms to offer structured AI courses. By embedding AI into core academic workflows such as coding, research, and analytics, OpenAI seeks to accelerate AI skill development and shape how artificial intelligence is taught and governed within one of the world’s largest higher‑education systems.

AI Slop Floods the Internet, Creators Fight Back

Generative AI is producing a flood of low‑quality, repetitive content—dubbed “AI slop”—that now dominates social‑media feeds and academic publishing. Creators such as Rosanna Pansino are responding by recreating AI‑generated videos with real‑world skill, while platforms, researchers, and regulators explore labeling, watermarking, and new policies to curb the spread. The battle pits human creativity against automated content machines, highlighting concerns about misinformation, deepfakes, and the future of authentic online experiences.

Mistral AI CEO Says Enterprises Are Replatforming to AI, Predicts Over Half of SaaS Could Shift

Mistral AI chief executive Arthur Mensch says companies are "replatforming," moving from traditional software to AI-driven solutions. He warns that success depends on having the "right infrastructure"—including clean data, cloud and compute resources, security, and skilled staff. Mensch predicts that more than half of current enterprise SaaS applications could be replaced by AI tools, creating a gap between firms that adopt AI and those that do not. He sees the trend as a major growth opportunity for Mistral, noting that over 100 enterprise customers are already exploring the shift.

NotebookLM Introduces Prompt-Based Slide Editing and PPTX Export

Google's NotebookLM tool now lets users revise individual slides with natural‑language prompts and export decks as PowerPoint‑ready PPTX files. The update aims to streamline the slide‑creation workflow by allowing targeted edits without regenerating whole decks, while also preparing for future Google Slides export support. Users are cautioned that extensive revisions may affect layout consistency, requiring manual cleanup.

Perplexity AI Pulls Back From Ads, Shifts Toward Subscription Model

Perplexity, an AI search startup, is phasing out advertising and focusing on paid subscriptions for business users and high‑end professionals. Executives say ads could erode user trust, so the company will prioritize accuracy and revenue from customers like finance experts, lawyers, doctors, and CEOs. While not ruling out future ads, Perplexity aligns itself with the anti‑ad camp in the generative‑AI industry, contrasting with rivals such as OpenAI, which is testing ads, and Anthropic, which remains ad‑free.

Court Blocks OpenAI’s Use of “Cameo” in Sora Video Tool

Cameo, the platform that lets celebrities sell short personalized videos, secured a preliminary victory in its trademark lawsuit against OpenAI. A California judge ruled that OpenAI’s Sora video‑generation feature cannot use the term “Cameo” or any confusingly similar variation. The decision includes a preliminary injunction that halts the use of the name, marking another notable intellectual‑property clash as AI companies expand video‑creation capabilities.

Anthropic Unveils Claude Sonnet 4.6, Boosting Computer Interaction and Security

Anthropic announced the release of Claude Sonnet 4.6, an upgraded mid‑range AI model that can code at a level comparable to its larger Opus series and interact with computers much like a human user. The model demonstrated human‑baseline performance on the OSWorld benchmark, handling tasks such as form filling and tab switching without specialized connectors. Anthropic also highlighted improved resistance to prompt‑injection attacks and a beta‑tested 1 million‑token context window, signaling stronger safety and scalability. The launch coincides with a surge in Claude’s popularity and a high‑profile advertising campaign targeting rival OpenAI.

Court Bars OpenAI From Using Cameo Name

A federal district court in Northern California ruled in favor of the video‑message platform Cameo, ordering OpenAI to cease using the word “Cameo” for its AI‑powered video generation feature. The court found the name likely to cause user confusion and rejected OpenAI’s claim that the term was merely descriptive. OpenAI subsequently renamed the feature “Characters.” The decision marks a significant win for Cameo’s brand protection efforts amid a series of recent intellectual‑property disputes involving OpenAI.

OpenAI Partners with OpenClaw Founder to Advance Personal AI Agents

OpenAI announced a partnership with Peter Steinberger, the founder of the open-source AI assistant OpenClaw. Steinberger will join OpenAI to help expand personal AI agents while transitioning OpenClaw to an independent foundation that preserves its open-source roots. The deal provides OpenAI with credibility in the developer community and access to a viral platform known for autonomous task execution across messaging apps. Both parties view the collaboration as a catalyst for making personal AI agents a mainstream tool.

Infosys Partners with Anthropic to Build Enterprise-Grade AI Agents

Infosys announced a partnership with Anthropic to integrate the latter's Claude models into its Topaz AI platform, creating autonomous AI agents for complex enterprise workflows in sectors such as banking, telecoms and manufacturing. The deal was unveiled at India's AI Impact Summit in New Delhi amid concerns that large‑language‑model tools could disrupt the Indian IT services industry. Infosys will use Anthropic's Claude Code for software development tasks and has already begun internal deployments. The collaboration also offers Anthropic a pathway into regulated enterprise markets, leveraging Infosys' industry expertise.

AI FOMO Drives Corporate and Workforce Decisions

Fear of missing out on artificial intelligence—AI FOMO—is shaping how companies invest in technology and how employees view their jobs. Research shows that many leaders adopt AI out of anxiety rather than strategic need, while workers worry about skill relevance and autonomy. Higher AI literacy reduces the fear, but the pressure to keep pace creates a feedback loop of rushed adoption and mixed results. The trend underscores the need for intentional, purpose‑driven AI implementation rather than reactionary moves driven by fear.