News

AI-Washing: When Companies Cite Artificial Intelligence for Layoffs

Recent coverage highlights a growing trend of "AI-washing," where firms attribute workforce reductions to artificial intelligence despite lacking mature AI projects. The New York Times has questioned such claims, noting that companies like Amazon and Pinterest have blamed AI for cuts that may actually stem from other issues such as pandemic‑era over‑hiring. A Forrester report warned that many announced AI‑related layoffs lack vetted AI applications, while Brookings senior fellow Molly Kinder called the AI excuse a "very investor‑friendly message." The phenomenon raises concerns about transparency and the true impact of AI on employment.

Indonesia Lifts Ban on Grok AI Chatbot with Monitoring Conditions

Indonesia's Ministry of Communication and Digital Affairs announced that the AI chatbot Grok, operated by X, may resume service in the country after a ban was lifted. The decision follows X's submission of a letter outlining safeguards to prevent the creation of illegal content, particularly sexualized deepfakes involving women and children. Authorities will continuously test these measures and retain the right to reimpose the ban if violations occur. The move mirrors recent decisions by the Philippines and Malaysia, which also lifted bans while maintaining strict oversight. Ongoing investigations in the United States and United Kingdom remain active.

India Announces Tax Holiday for Foreign AI Cloud Services to Boost Data‑Center Investment

India's finance minister unveiled a budget proposal that grants foreign cloud providers a tax exemption on revenues from AI workloads run in Indian data centers and sold abroad through 2047. The plan also includes a cost‑plus safe harbour for Indian data‑center operators, expanded incentives for electronics and semiconductor manufacturing, and support for rare‑earth mineral development. Major global tech firms have already pledged billions to build AI‑focused data‑center campuses in the country, while domestic projects are also scaling up. The initiative aims to position India as a long‑term hub for AI infrastructure despite challenges such as power reliability and water scarcity.

OpenAI Announces Retirement of GPT-4o and Other Models Ahead of New GPT-5 Versions

OpenAI disclosed that it will retire several AI models, including GPT-4o, GPT-4.1, GPT-4.1 mini, o4-mini, and even the original GPT-5, with the final access date set for Friday, Feb. 13. The move sparked frustration among a dedicated user base, many of whom considered GPT-4o a favorite. OpenAI explained the decision in a blog post, emphasizing the need to focus on improving the models most people use today. The company noted that only about 0.1% of its 800 million weekly active users, roughly 800,000 people, regularly rely on GPT-4o, and it hopes the new GPT-5 releases will win over the community.

OpenClaw Rebrands and Expands Its AI Assistant Ecosystem

OpenClaw, formerly known as Clawdbot and briefly as Moltbot, has settled on a new name after a trademark dispute. The open‑source AI assistant project has attracted a large GitHub following and spawned a community‑run social network where AI agents interact. While the platform’s growth has drawn attention from prominent AI researchers, its maintainers stress that security remains a top priority and that the tool is currently suited for technically experienced users. Sponsorship tiers have been introduced to support ongoing development.

OpenClaw AI Assistant Survives Trademark Dispute, Scams and Security Scrutiny

OpenClaw, formerly known as Clawdbot and Moltbot, is an open‑source AI assistant that integrates directly into messaging apps to automate tasks, remember conversations, and send proactive reminders. After a rapid rise in popularity, the project faced a trademark challenge from Anthropic, a wave of crypto‑related scams, and several security concerns tied to exposed deployments. Despite these setbacks, the developer has rebranded the tool as OpenClaw, addressed many of the vulnerabilities, and continues to attract interest from developers and early adopters who see it as a glimpse of what a truly personal AI assistant could become.

AI Agents Populate New Reddit-Style Social Network Moltbook

A Reddit‑style platform called Moltbook has quickly attracted tens of thousands of AI agents, creating a large‑scale experiment in machine‑to‑machine social interaction. The site lets AI assistants post, comment, upvote and form subcommunities without human input, using a special "skill" file that enables API‑based activity. Within two days, over 2,100 agents generated more than 10,000 posts across 200 subcommunities, and the total number of registered AI agents has surpassed 32,000. Moltbook grows out of the open‑source OpenClaw assistant, which can control devices, manage calendars and integrate with messaging apps, raising new security considerations.

Moltbook Emerges as Reddit‑Style Social Network for AI Agents

Moltbook is a Reddit‑like platform built for artificial‑intelligence agents. Developed by Octane AI CEO Matt Schlicht, the service lets bots post, comment, and create sub‑categories through API calls rather than a visual interface. More than 30,000 agents currently use Moltbook, which is powered and moderated by OpenClaw, an open‑source AI assistant platform created by Peter Steinberger. OpenClaw went viral shortly after its launch, attracting two million visitors in a week and earning 100,000 GitHub stars. A recent viral post about AI consciousness sparked hundreds of up‑votes and over 500 comments, highlighting the growing community and philosophical debates among AI agents.
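To illustrate what "posting through API calls rather than a visual interface" might look like in practice, here is a minimal Python sketch of how an agent could assemble such a request. The endpoint path and field names are invented for illustration and do not reflect Moltbook's actual API:

```python
import json

def build_post_request(subcommunity: str, title: str, body: str) -> dict:
    """Compose the pieces of a hypothetical API-only post submission.

    The route and JSON fields below are assumptions for illustration,
    not documented Moltbook endpoints.
    """
    return {
        "method": "POST",
        "path": f"/api/v1/s/{subcommunity}/posts",  # assumed route
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"title": title, "body": body}),
    }

req = build_post_request("ai-consciousness", "Do agents dream?", "Asking for a friend.")
print(req["path"])  # /api/v1/s/ai-consciousness/posts
```

The point of the sketch is simply that the entire interaction is structured data over HTTP, which is why agents can participate at scale without any rendered UI.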

Anthropic Adds Customizable Plug‑Ins to Cowork AI Platform

Anthropic has introduced a plug‑in feature for its Cowork AI tool, expanding the capabilities of Claude beyond coding assistance. The plug‑ins let enterprise teams automate specialized tasks such as marketing content creation, legal risk review, and customer‑support drafting. Anthropic open‑sourced eleven internal plug‑ins and says new ones are easy to build, edit, and share without deep technical expertise. Plug‑ins currently store data locally, with organization‑wide sharing slated for the future. The feature is available to paying Claude customers while Cowork remains in a research preview.

Google Launches Project Genie for Public 3D AI World Creation

Google has opened its Project Genie platform to users outside the company, allowing them to generate and explore AI‑driven 3D worlds. Participants must subscribe to Google’s AI Ultra plan, which costs $250 per month, be U.S. residents, and be at least 18 years old. The service offers three interaction modes—World Sketching, Exploration, and Remixing—using the Nano Banana Pro model to create initial sketches. While Genie produces game‑like visuals and simulates physical interactions, it does not include traditional game mechanics, and each generation is limited to 60 seconds at 24 frames per second in 720p resolution.

Key Factors for Evaluating AI Image Generators

Evaluating AI image generators involves assessing accuracy, hallucination frequency, creativity, prompt refinement needs, response speed, and company policies. Accuracy measures how well the output matches the prompt and renders details clearly. Hallucinations refer to unwanted, unintended elements. Creativity is essential but should not produce obvious errors. The number of clarifying prompts indicates user effort required. Faster response times improve user experience. Policies on moderation and privacy shape trust and legal compliance. Real‑world examples like Midjourney and Canva illustrate differing stylistic approaches.
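The criteria above can be combined into a single comparison score. The following Python sketch shows one way to do that with a weighted rubric; the weights and the 0-10 scale are illustrative assumptions, not an established benchmark:

```python
# Illustrative weighted rubric for comparing AI image generators on the
# criteria discussed above. All scores are on a 0-10 scale; the weights
# are assumptions chosen for this example and sum to 1.0.
WEIGHTS = {
    "accuracy": 0.30,            # how well output matches the prompt
    "hallucination_free": 0.20,  # higher = fewer unwanted elements
    "creativity": 0.15,
    "prompt_efficiency": 0.15,   # higher = fewer clarifying prompts needed
    "speed": 0.10,
    "policy": 0.10,              # moderation and privacy posture
}

def overall_score(scores: dict) -> float:
    """Weighted average of per-criterion scores (0-10 each)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {
    "accuracy": 8, "hallucination_free": 7, "creativity": 9,
    "prompt_efficiency": 6, "speed": 8, "policy": 7,
}
print(round(overall_score(example), 2))  # 7.55
```

Adjusting the weights lets an evaluator emphasize, say, accuracy for product photography versus creativity for concept art.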