News

Supreme Court Declines to Hear AI-Generated Art Copyright Case

The U.S. Supreme Court declined to review a lawsuit brought by computer scientist Stephen Thaler, who sought copyright protection for an artwork produced by his artificial‑intelligence system. The denial leaves in place lower‑court rulings that rejected Thaler’s claim, reinforcing the Copyright Office’s position that a work must have a human author to qualify for protection. The decision follows similar rejections of Thaler’s AI‑generated patent and trademark applications, underscoring the ongoing legal hurdles facing creators who use machine‑learning tools.

OpenAI Secures Pentagon Contract While Anthropic Rejects Terms

OpenAI announced a new agreement with the Pentagon that it says respects its safety principles on domestic mass surveillance and autonomous weapon systems. Critics note that the deal hinges on the phrase “any lawful use,” which they argue could permit broad government applications of the technology. Anthropic refused a similar contract, was subsequently labeled a supply‑chain risk, and has since drawn industry support. The dispute highlights differing approaches to AI safety, legal compliance, and the role of technical safeguards in military applications.

Anthropic’s Claude Services Experience Widespread Outage Amid Growing User Demand

Anthropic faced a large‑scale disruption affecting Claude.ai and Claude Code, with thousands of users unable to log in. The company said the issue was tied to login/logout paths and that the Claude API remained functional while a fix was being implemented. The outage occurred as the Claude app surged to the top of the App Store, overtaking ChatGPT, and as the company faced scrutiny from the U.S. government over its AI safeguards for defense use.

Anthropic Introduces Memory Import Tool for Claude, Enabling Seamless Switch from Other AI Chatbots

Anthropic has launched a new memory import feature for its Claude AI chatbot that lets users transfer conversation context from competing chatbots such as ChatGPT, Gemini, and Copilot. Users export their memories from the other chatbot as a text prompt and feed it to Claude, which assimilates the information within roughly 24 hours. Users can view and adjust what Claude remembers through the app’s settings, with a focus on work‑related topics. The rollout comes as Claude recently surged to the top of the free App Store charts, overtaking ChatGPT after Anthropic’s stance on Department of Defense AI guardrails sparked user interest and controversy.

U.S. Government Blacklists Anthropic After Pentagon Contract Refusal

The Trump administration halted all federal use of Anthropic's artificial‑intelligence technology after the company declined to allow its tools to be used for mass surveillance or autonomous weapons. Defense Secretary Pete Hegseth invoked a national‑security law to blacklist Anthropic, jeopardizing a contract worth up to $200 million and potentially barring the firm from future defense work. The move has sparked debate over AI safety commitments, industry self‑regulation, and the need for binding government oversight.

Anthropic’s Claude Climbs to No. 2 in Apple’s U.S. App Store Amid Pentagon Dispute

Anthropic’s chatbot Claude has risen to the second spot among free apps in Apple’s U.S. App Store, trailing only OpenAI’s ChatGPT and ahead of Google’s Gemini. The surge follows a high‑profile clash with the Pentagon, where Anthropic sought safeguards against the Department of Defense using its models for mass domestic surveillance or fully autonomous weapons. President Donald Trump ordered federal agencies to cease using Anthropic products, and the Defense Secretary labeled the company a supply‑chain threat. OpenAI later announced its own agreement with the Pentagon that includes similar safeguards.

Trump Moves to Ban Anthropic from the US Government

A dispute between the Department of Defense and AI company Anthropic has intensified, with officials exchanging criticisms publicly. Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and gave the firm a deadline to revise its contract to permit “all lawful use” of its models. Experts suggest the conflict stems more from differing attitudes than concrete policy disagreements, noting that Anthropic has so far supported the Pentagon’s proposed uses. The company, founded on AI safety principles, has warned about the risks of fully autonomous weapons while acknowledging their potential defensive value.

OpenAI Secures Deal with U.S. Defense Department to Deploy Its AI Models

OpenAI announced a contract with the U.S. Defense Department to place its artificial‑intelligence models within the agency’s network. The agreement includes two core safety principles—prohibitions on domestic mass surveillance and a requirement for human responsibility over the use of force, including autonomous weapon systems. OpenAI will provide technical safeguards, assign engineers to work with the department, and run the models on cloud infrastructure, with a pending partnership to use Amazon Web Services for enterprise customers. The deal comes as rival Anthropic declined a similar government offer, citing concerns over surveillance and weaponization.

Pentagon Labels Anthropic a Supply‑Chain Risk, Sparking Industry Backlash

The U.S. secretary of defense announced that Anthropic, a leading AI startup, is now designated as a supply‑chain risk for any contractor, supplier, or partner doing business with the military. The move has sent shockwaves through the tech sector, prompting Anthropic to vow legal action and raising concerns about the impact on existing defense contracts and broader AI collaborations. Industry leaders, legal experts, and policy analysts are debating the legality and potential precedent of the designation, while companies that work with both the Pentagon and Anthropic are left uncertain about their future relationships.

OpenAI Terminates Employee Over Use of Confidential Data on Prediction Markets

OpenAI has dismissed an unnamed employee after discovering that the worker used confidential company information to trade on prediction market platforms such as Polymarket. The company said the conduct violated its internal policy barring the use of inside information for personal gain. Prediction markets, which let users wager on real‑world outcomes, argue that they are financial platforms rather than gambling sites. OpenAI declined to comment further. The incident highlights growing scrutiny of insider activity on emerging financial exchanges.

OpenAI Reports ChatGPT Surpasses 900 Million Weekly Users and 50 Million Subscribers

OpenAI announced that its chatbot ChatGPT now has more than 900 million weekly active users, up from a previously reported 700 million. The service also counts more than 50 million consumer subscribers and a fourfold increase in business subscribers, reaching 9 million. The user base has more than doubled since February 2025, when it stood at 400 million. Competitor Anthropic reported a 60% rise in free users and a doubling of paid subscribers. OpenAI also disclosed a scaled‑back investment from Nvidia and new contributions from SoftBank and Amazon.