News

OpenAI Unveils GPT-5.4 with Enhanced Reasoning, Coding, and Task Automation

OpenAI announced the release of GPT-5.4, the latest version of its flagship AI model. The update brings notable improvements in reasoning, coding assistance, and real‑world task automation. New capabilities allow the model to interpret screenshots, control browsers, and issue keyboard and mouse commands, enabling multi‑step workflows that previously required human input. GPT-5.4 also offers stronger research abilities, longer context retention, and a “Thinking” mode that shows its reasoning process. The model is rolling out to ChatGPT users, the API, and enterprise customers, with a Pro version for high‑performance workloads.

OpenAI Unveils GPT-5.4 with Pro and Thinking Variants

OpenAI announced the release of GPT-5.4, its newest foundation model designed for professional workloads. The model is offered in three versions—a standard release, a high‑performance Pro edition, and a reasoning‑focused Thinking edition. GPT-5.4 features a context window of up to one million tokens and delivers significant token‑efficiency gains, allowing it to solve tasks with fewer tokens than prior models. Benchmark scores show record performance across computer‑use and knowledge‑work tests, while safety updates cut hallucinations by roughly one‑third. A new tool‑calling architecture called Tool Search reduces token overhead when accessing many tools, and a safety evaluation demonstrates lower risk of deceptive chain‑of‑thought behavior in the Thinking version.

OpenAI Unveils GPT‑5.4 Thinking and Pro Models, Targeting Enterprise AI Agents

OpenAI announced two new models, GPT‑5.4 Thinking and GPT‑5.4 Pro, aimed at enterprise workloads and AI agents. The "thinking" model trades speed for higher accuracy, cutting overall errors by 18% and false claims by 33% compared with GPT‑5.2. Both models are now available to paid ChatGPT users and via API, with Thinking also integrated into Codex. OpenAI frames the release as a competitive move against Anthropic’s Claude, which currently leads mobile AI app charts. Meanwhile, the U.S. Defense Department’s AI contracts shifted from Anthropic to OpenAI after Anthropic declined to support surveillance or autonomous weapons, prompting OpenAI to promise safeguards and limited agency access.

OpenAI Unveils GPT-5.4, a Professional‑Focused AI Model

OpenAI announced GPT-5.4, its latest frontier model built for professional tasks such as coding, data analysis, and presentation creation. The model adds native computer‑use abilities, allowing smoother mouse and keyboard interaction across multiple applications. In ChatGPT, GPT-5.4 becomes the default for the Thinking mode, outlining its plan before generating responses and supporting more precise web research. OpenAI positions the model as its most factual to date, citing an 18% reduction in error likelihood versus GPT-5.2. Although API pricing is higher and access is limited to enterprise and developer customers, the release signals OpenAI’s shift toward productivity‑oriented revenue streams.

Anthropic CEO Accuses OpenAI of Lying About Pentagon Deal

Anthropic chief executive Dario Amodei sent an internal memo denouncing OpenAI's statements about its new Pentagon agreement as "straight up lies" and "mendacious." The memo follows Anthropic's withdrawal from a separate U.S. intelligence contract over concerns about AI use in mass surveillance and autonomous weapons. Amodei criticizes OpenAI for focusing on employee appeasement rather than genuine safety safeguards and questions the vague "all lawful use" language in the Pentagon deal. OpenAI’s Sam Altman later admitted the announcement was rushed, while reports suggest Anthropic may be re‑entering talks with the Pentagon.

Canadian Government Secures New Safety Commitments from OpenAI

The Canadian government announced that OpenAI CEO Sam Altman has agreed to implement additional safety measures for the company's AI services. The move follows a high‑school shooting in which OpenAI flagged the suspect but did not alert authorities. New protocols will focus on law‑enforcement notifications, retroactive review of suspicious activity, and collaboration with Canadian privacy, mental‑health, and law‑enforcement experts. OpenAI has pledged to provide a report outlining these changes, building on earlier efforts to tighten detection systems and prevent banned users from returning to the platform.

Anthropic Reopens Pentagon Negotiations After Contract Collapse

Anthropic's $200 million Department of Defense contract fell apart over a clause allowing unrestricted military use of its AI. After the Pentagon turned to OpenAI, Anthropic CEO Dario Amodei resumed talks with Pentagon official Emil Michael to seek a compromise that would limit uses such as domestic surveillance and autonomous weapons. Both sides have exchanged sharp criticism, and Defense Secretary Pete Hegseth has threatened to label Anthropic a supply‑chain risk, a move that could bar the company from future military‑related work.

AI’s 2026 Capabilities Meet Their Limits

In 2026, artificial intelligence can draft emails, summarize meetings, write code, and create caricatures, yet it still falls short in several key areas. Large language models often hallucinate, presenting fabricated facts with confidence. They struggle with simple counting tasks, lack the lived experience needed for therapy, cannot update knowledge in real time, and remain unable to truly understand human nuance. Recognizing these boundaries helps users apply AI tools responsibly and avoid costly mistakes.

AI System Shows Ability to Reidentify Anonymous Online Accounts

Researchers from ETH Zurich, Anthropic and the Machine Learning Alignment and Theory Scholars program have built an automated AI system that can link pseudonymous online profiles to real identities. Using large language models to analyze writing style, posting patterns and other clues, the system correctly matched up to 68 percent of accounts with 90 percent precision, far outpacing traditional methods. The experiment cost only a few dollars per profile, highlighting a low‑cost barrier for large‑scale deanonymization. The study warns that online anonymity may be less secure than many assume, especially as AI capabilities continue to improve.

Anthropic Resumes Negotiations with U.S. Defense Department Over AI Contract

Anthropic CEO Dario Amodei has re‑opened talks with the U.S. Defense Department after a dispute over contract language concerning the use of the company’s AI models for bulk data analysis. The disagreement stemmed from a clause the Pentagon wanted removed, which Anthropic feared could enable mass surveillance. The department had threatened to label Anthropic a supply‑chain risk and cancel its existing agreement, a move that previously led to a presidential directive to halt the use of its technology. Both parties are now working to resolve the language issue and preserve the partnership.

Anthropic CEO Dario Amodei Returns to Pentagon Negotiations to Preserve Defense Deal

Anthropic chief executive Dario Amodei is back at the negotiating table with the U.S. Department of Defense after talks collapsed over the Pentagon’s demand for unrestricted access to the company’s Claude AI models. The renewed discussions aim to prevent a supply‑chain‑risk designation that could bar Anthropic from future defense work. The dispute centers on the department’s push for open‑use language and Anthropic’s refusal to compromise on two red lines: prohibiting mass surveillance of Americans and banning lethal autonomous weapons without human oversight.

OpenAI Brings Codex Native App to Windows

OpenAI has launched a native Codex application for Windows, giving developers a dedicated AI coding companion that runs directly on the operating system. The app offers project management, skill integration, background automation, and support for multiple worktrees, all built on PowerShell within a Windows sandbox. Developers can also switch the coding agent and terminal to Windows Subsystem for Linux (WSL) or use a WinUI skill from the skill gallery. The Codex app is available for download from the Microsoft Store or OpenAI’s website, and users can sign in with an existing ChatGPT subscription or an API key.