Google Gemini Now Generates Custom Images from Your Google Photos

The Verge

Key Points

  • Gemini’s Personal Intelligence now accesses Google Photos to generate custom images.
  • Available to Gemini AI Plus, Pro, and Ultra subscribers in the United States.
  • Uses the Nano Banana 2 model to create visuals that reflect users’ tastes and lifestyle.
  • Google says it won’t directly train core AI models on private photo libraries.
  • Rollout begins within days on Chrome desktop, with broader availability planned.
  • Privacy controls let users opt out or disconnect Google Photos at any time.

Google has expanded Gemini’s Personal Intelligence feature to let the AI draw on users’ Google Photos libraries when creating images. Subscribers to Gemini AI Plus, Pro or Ultra in the United States can prompt the system with requests like “Design my dream house” and receive visuals that reflect their personal tastes and lifestyle. The capability, powered by the Nano Banana 2 model, identifies people and objects in a user’s photo collection to tailor the output, while Google says it will not train its core models directly on private images. The rollout begins in the next few days on Chrome desktop and will broaden to more users soon.

Google’s Gemini AI suite took a step toward truly personal visual creation on Tuesday, announcing that its Personal Intelligence feature can now pull data from a user’s Google Photos library to craft custom images. The update, described in a company blog post, lets subscribers to Gemini AI Plus, Pro or Ultra in the United States type prompts such as “Design my dream house” or “Show my desert island essentials,” and receive pictures that echo their individual preferences, décor choices and even family members.

Behind the scenes, Gemini scans the labels and metadata that Google Photos automatically assigns to images – recognizing faces, objects and locations – to build a contextual picture of the user’s life. That information feeds the Nano Banana 2 image model, which then generates a visual that mirrors the user’s style. Elijah Lawal, a spokesperson for Google, explained that the AI does not simply mash together random stock photos; it tailors the composition based on the specific cues gleaned from the connected apps.

The move reflects Google’s broader push to blend generative AI with personal data while maintaining a clear privacy line. The company emphasized that opting into Personal Intelligence does not mean Google will “directly train” its foundational models on a subscriber’s private photo archive. Instead, only limited information – such as the text of a user’s prompt and the model’s response – may be used to improve the feature’s performance. Google says this approach keeps the core AI training data separate from individual user content.

Google plans to roll out the new capability over the next few days to eligible Gemini subscribers on Chrome desktop, with “more users” slated to receive access shortly thereafter. The rollout will initially be limited to the United States, but the company hinted at a broader international expansion once the feature proves stable.

Rollout to Subscribers

Eligible users will see a new toggle in the Gemini settings that enables Personal Intelligence for Google Photos. Once activated, the AI can reference the user’s labeled images whenever a visual request is made. Google notes that the feature works best when the photo library contains a rich set of labeled content, as the AI relies on those tags to infer taste and context. Early testers reported that the generated images felt surprisingly accurate, capturing details like favorite color palettes, preferred architectural styles and even recurring vacation spots.

Industry observers see the update as part of a larger trend where AI platforms aim to deliver hyper‑personalized experiences. By blending generative models with personal data, companies hope to differentiate their services in a crowded market. For Google, the integration also serves as a showcase for the capabilities of its Nano Banana 2 model, a newer iteration designed for higher fidelity and faster rendering.

Privacy advocates will likely keep a close eye on how the feature evolves. While Google assures users that their private photos will not be used to train the base model, the system does process metadata in real time to produce the images. Users who are uncomfortable with that level of data use can simply opt out of Personal Intelligence or disconnect their Google Photos account.

Overall, the enhancement positions Gemini as one of the few consumer AI tools that can produce truly individualized visuals without requiring users to upload separate reference files. As the technology matures, we may see similar integrations across other Google services, further blurring the line between personal data and generative creativity.

Tags: Google, Gemini, Artificial Intelligence, Personal Intelligence, Google Photos, Nano Banana 2, AI-generated images, AI personalization, Tech, Consumer AI
Generated with News Factory - Source: The Verge