Latest AI News

Google Launches Nano Banana 2 AI Image Generator in Gemini

Google has introduced Nano Banana 2, an upgraded AI image generator that blends text rendering and web‑search capabilities with faster image creation. Integrated as the default model in the Gemini chatbot, Nano Banana 2 can pull real‑time information from the web to produce infographics and edit existing photos with photorealistic results. The tool is freely accessible through the Gemini app, Google Search, AI Studio, Cloud and other services, and includes watermarks to signal AI‑generated content.

Tech Giants Unveil Major Product Updates: Google’s AI Image Upgrade, Lenovo’s Foldable Handheld, and Apple’s Upcoming Launch Week

Google announced a new version of its Nano Banana AI image service that promises better text rendering, real‑time web knowledge, and higher visual fidelity. At the same time, the mechanical‑keyboard community is shifting from loud clicky switches to quieter “thock” sounds achieved with damping foams and lubricated linear switches. Lenovo is reportedly planning a Legion Go handheld that can transform into a Windows tablet with a foldable screen, while Apple has sent invitations for a multi‑day launch event that may showcase new MacBooks and an iPhone 17e. These developments highlight a wave of innovation across hardware and AI software.

Anthropic Rejects Pentagon’s Demand for Unrestricted AI Access

Anthropic has turned down a Pentagon request for unrestricted use of its AI models, citing concerns over mass surveillance of Americans and fully autonomous lethal weapons. The company’s CEO, Dario Amodei, emphasized a commitment to democratic values and offered to transition the military to alternative providers if required. The standoff follows a broader push by the Department of Defense to renegotiate AI contracts with multiple vendors, with some firms reportedly agreeing to the new terms while Anthropic remains firm on its red lines.

Perplexity Launches “Computer” AI Agent Platform with Cloud‑Based, Curated Integrations

Perplexity introduced Computer, an AI agent that can delegate tasks to other AI agents. Operating primarily in the cloud, the service runs within a controlled environment that limits integrations to vetted plugins. Users can supply context through files such as USER.MD, MEMORY.MD, SOUL.MD, and HEARTBEAT.MD, and the agent can create, modify, or delete files on the user’s system. While the design aims to rein in the unconstrained behavior seen in tools like OpenClaw, Perplexity acknowledges that large‑language‑model errors and security concerns remain, especially when the agent works with data that has not been backed up.
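As a rough illustration of the context-file mechanism described above, the sketch below shows one plausible way an agent could assemble user-supplied markdown files into a prompt prefix before acting. The file names come from the article; the loading logic itself is purely an assumption, not Perplexity's actual implementation.

```python
from pathlib import Path

# File names per the article; the loading scheme is a hypothetical sketch,
# not Perplexity's real implementation.
CONTEXT_FILES = ["USER.MD", "MEMORY.MD", "SOUL.MD", "HEARTBEAT.MD"]

def load_agent_context(workdir: str) -> str:
    """Concatenate whichever context files exist into one prompt prefix."""
    sections = []
    for name in CONTEXT_FILES:
        path = Path(workdir) / name
        if path.exists():
            # Label each section so the model can tell the sources apart.
            sections.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)
```

Missing files are simply skipped here, so a user can opt in to as few or as many context files as they like.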

Chinese AI Chatbots Exhibit Higher Self‑Censorship Than Western Counterparts

Researchers from Stanford and Princeton compared the responses of several Chinese and American large language models to politically sensitive questions. The study found that Chinese models refuse to answer a significantly larger share of these queries, provide shorter replies, and sometimes deliver inaccurate information. The authors suggest that manual fine‑tuning, rather than censored training data, drives much of this behavior. Additional work shows that extracting hidden instructions from Chinese models is difficult, highlighting the challenges of studying AI‑driven censorship in real time.

IronCurtain: Open‑Source Framework to Constrain AI Assistants

IronCurtain is an open‑source project that isolates AI assistants in a virtual machine and enforces user policies written in plain English. By using a large language model to convert natural‑language rules into enforceable security constraints, the system adds a layer of control that prevents rogue actions such as unwanted deletions or phishing. The prototype is model‑independent, logs policy decisions, and is positioned as a research tool for the community rather than a consumer product. Its creators emphasize the need for structured guardrails to keep agentic AI useful yet safe.
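The two-step idea, translating an English rule into a structured constraint and then gating each proposed action against it, can be sketched in miniature. Here the LLM translation step is stubbed with a trivial keyword check; all names and structures are illustrative assumptions, not IronCurtain's actual API.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str   # e.g. "delete_file"
    allowed: bool

def compile_policy(text: str) -> list[Rule]:
    """Stand-in for the LLM step that turns an English policy into rules.
    A real system would prompt a model; this stub just keyword-matches."""
    rules = []
    if "never delete" in text.lower():
        rules.append(Rule(action="delete_file", allowed=False))
    return rules

def check_action(action: str, rules: list[Rule]) -> bool:
    """Default-allow gate: a denied action is blocked and logged."""
    for rule in rules:
        if rule.action == action and not rule.allowed:
            print(f"BLOCKED: {action}")  # mirrors the logged policy decisions
            return False
    return True
```

The default-allow design keeps the assistant useful, while every blocked action leaves a log entry, matching the project's stated emphasis on auditable guardrails.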