Recall - AI Memory Assistant
Designing a new way to search personal photo libraries using memory fragments instead of keywords.
Role
Product designer
Scope
Feature highlight
Timeline
48 hours

The Problem
Modern photo libraries often contain thousands of images, yet finding a specific memory remains frustrating. Most photo apps rely on dates, locations, albums, or manual tagging. However, people rarely remember photos this way. Instead, they remember fragments of moments.
For example:
“The gelato I had somewhere in Portofino.”
This reveals a gap between how systems organize photos and how humans recall experiences.
How might we design a photo search experience that works even when users remember very little?
Constraints
The solution needed to work within realistic product constraints:
• Mobile-first interaction
• Photo libraries containing 5,000+ images
• Must work for both tech-savvy and non-tech-savvy users
• AI can understand context ("photos from my trip to Italy," "pictures with my mom," "that sunset photo")
Insights

Memory is reconstructed, not retrieved
People recall fragments and build clarity over time, so the system should help piece memories together instead of requiring precise input.

Recognition is easier than recall
Users may not be able to describe what they want, but they can identify it when they see it, so visual results should guide refinement.

If search could adapt to how memory works, users could rediscover moments more naturally.
Design approach
The assistant enables memory reconstruction in three steps: users describe what they remember, the system extracts key context like people or place, and users can refine results with a reference image if needed.
• Conversational recall
• Context extraction
• Reference refinement
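The case study doesn't specify how context extraction is implemented; as a rough illustration under that caveat, the step could be sketched as matching a free-form memory fragment against things the app already knows about the library. The name lists and stopwords below are invented for the example, not part of the actual product.

```python
# Hypothetical sketch of the "context extraction" step: pull people,
# places, and subject keywords out of a free-form memory fragment.
# A real system would use an ML model; this toy version matches words
# against lists the app could already hold (contacts, photo geotags).

KNOWN_PEOPLE = {"mom", "dad", "alex"}           # e.g. from contacts
KNOWN_PLACES = {"portofino", "italy", "rome"}   # e.g. from geotags
STOPWORDS = {"the", "i", "had", "somewhere", "in", "with", "my", "a", "that"}

def extract_context(fragment: str) -> dict:
    """Split a memory fragment into people / places / other keywords."""
    words = [w.strip(".,!?\"'").lower() for w in fragment.split()]
    context = {"people": [], "places": [], "keywords": []}
    for w in words:
        if w in KNOWN_PEOPLE:
            context["people"].append(w)
        elif w in KNOWN_PLACES:
            context["places"].append(w)
        elif w and w not in STOPWORDS:
            context["keywords"].append(w)
    return context

print(extract_context("The gelato I had somewhere in Portofino"))
# {'people': [], 'places': ['portofino'], 'keywords': ['gelato']}
```

Even this crude split is enough to turn the Portofino example into a structured query the library can actually answer.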
Introducing Recall
Find any memory. Miss nothing.

The photo assistant is designed around how people actually remember. Instead of relying on precise search, it lets users describe moments naturally, even if their memory is incomplete. The system interprets these fragments, surfaces relevant visuals, and continuously refines results through interaction. By combining conversational input with visual feedback, it turns photo search into a fluid process of rediscovering memories.
Start with a keyword

People, places, and everything related to the word appear, helping you enhance your search and find the memory.

Chat as if you're recollecting the memory; this time, we help with the prompts.

The best part?
Upload reference images of poses, colours, shapes, compositions or even textures.
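The case study doesn't describe the mechanics of reference refinement; one plausible sketch, assuming each photo has a precomputed feature embedding from some vision model, is to re-rank candidates by similarity to the uploaded reference image. The photo IDs and vectors below are made up for illustration.

```python
# Hypothetical sketch of "reference refinement": re-rank candidate photos
# by cosine similarity between their embeddings and the embedding of an
# uploaded reference image. Embeddings here are invented toy vectors.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rerank(candidates, reference):
    """Sort (photo_id, embedding) pairs, most similar to reference first."""
    return sorted(candidates, key=lambda c: cosine(c[1], reference), reverse=True)

library = [
    ("sunset_01", [0.9, 0.1, 0.0]),
    ("gelato_07", [0.1, 0.8, 0.3]),
    ("beach_12", [0.7, 0.2, 0.1]),
]
reference = [0.85, 0.15, 0.05]  # embedding of the uploaded reference image
print([photo_id for photo_id, _ in rerank(library, reference)])
# ['sunset_01', 'beach_12', 'gelato_07']
```

The same re-ranking works whether the reference captures a pose, a colour palette, or a texture, since all of those end up expressed in the embedding.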


Don't celebrate alone; share the moment!


In case of a failed attempt, restart with better prompting; this time, guided.

