Australia Considers Requiring App Stores to Block AI Services Lacking Age Verification

Engadget

Key Points

  • Australian regulators may require app stores to block AI services lacking age verification.
  • Compliance deadline could be as early as March 9.
  • Only nine of fifty surveyed AI chat services have age‑assurance measures in place.
  • Non‑compliant services risk fines up to A$49.5 million.
  • The proposal reflects a global debate on responsibility for protecting children online.
  • In the United States, app store operators Apple and Google have lobbied against store‑level restrictions.
  • Australia previously enacted a ban on certain social media for users under 16.

Australian regulators are weighing a proposal that would compel app stores to block AI chat services that do not implement age‑verification systems. The move aims to protect younger users from mature content and could be enforced as early as March 9. Non‑compliant services may face fines of up to A$49.5 million. The policy reflects a broader global debate over who should bear responsibility for safeguarding children online; in the United States, app store operators Apple and Google have lobbied against store‑level restrictions.

Regulatory Proposal

Australia's government is considering a strict approach to limit younger users' access to AI chatbots. According to Reuters, regulators may require app storefronts to block AI services that fail to implement age‑verification measures restricting minors' access to mature content. The deadline for compliance could be as soon as March 9.

Enforcement Powers

The eSafety commissioner indicated that the agency will use its full range of powers where there is non‑compliance. Potential actions could target gatekeeper services such as search engines and app stores that serve as key access points to AI applications.

Industry Landscape

A Reuters review of fifty leading text‑based AI chat services available in Australia found that only nine had introduced, or announced plans for, age‑assurance measures. Eleven said they relied on blanket content filters or intended to block Australian users entirely. The majority had taken no public action, leaving a substantial compliance gap ahead of the proposed deadline.

Potential Penalties

Failure to comply with the new requirements could result in fines of up to A$49.5 million (approximately US$35 million).

Global Context

The question of who is responsible for keeping children away from potentially harmful content is being debated worldwide. In the United States, Apple and Google have lobbied to shift that responsibility to the platforms themselves rather than to app store operators. Australia's language on whether the requirement would apply to all stores is not yet definitive, but it aligns with the government's broader agenda, which last year included a sweeping ban on certain social media and digital platforms for users under age 16.

Implications for the Tech Sector

If adopted, the policy would pressure AI developers to integrate robust age‑verification mechanisms or risk being barred from major app marketplaces. This could reshape how AI services are distributed and accessed in Australia, prompting companies to prioritize child safety features to maintain market presence.

