2024
Doodle Collection is a web-based interactive experience that invites users to draw simple shapes on a digital canvas. Once a doodle is submitted, an AI-powered system analyzes the form and retrieves visually similar artifacts, currently from the Cooper Hewitt, Smithsonian Design Museum's collection. By turning playful drawing into a path for discovery, the project reimagines how we engage with design history, making the archive feel more intuitive, relevant, and personal.
FOCUS
Digital Experience
TIMELINE
2 Weeks
TOOL
HTML, CSS, JavaScript, OpenAI API
BACKGROUND
Many digital museum experiences rely on keyword searches or structured filters, which can be limiting, especially for children or casual visitors who may not know how to begin. Behind this challenge is a broader issue in how cultural institutions present their archives. Museum collections are vast and rich, but often feel locked behind systems built for specialists, organized by taxonomy, title, date, or medium. While these structures support curatorial work and scholarly research, they can make the experience feel dry or inaccessible for casual visitors. So this project starts with the question: How can we help people explore the world of great art in a more personal and relevant way?
PROTOTYPE
HOW IT WORKS
Users start by submitting a sketch or photo, which is encoded and sent to the back end. The back end uses OpenAI Vision to analyze the image and generate a textual description. This description is then used to query the Cooper Hewitt API for visually or thematically related artifacts. Once the data is retrieved, the system filters and formats only the necessary metadata before sending it back to the front end. The results are then displayed in a visual gallery for the user to explore.
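Below is a minimal sketch of what that back-end route could look like, assuming an Express server and the official openai Node SDK. The Cooper Hewitt endpoint, parameters, and response field names are best-effort assumptions for illustration, not the project's exact code.

```js
// server.js — a hedged sketch of the back end described above.
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json({ limit: "5mb" })); // doodles arrive as base64 data URLs

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post("/api/match", async (req, res) => {
  try {
    const { imageDataUrl } = req.body; // e.g. "data:image/png;base64,..."

    // 1. Ask OpenAI Vision for a short textual description of the doodle.
    const vision = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe this sketch in a short phrase focused on its shapes and forms." },
            { type: "image_url", image_url: { url: imageDataUrl } },
          ],
        },
      ],
    });
    const description = vision.choices[0].message.content.trim();

    // 2. Use the description to query the Cooper Hewitt collection API
    //    (endpoint and parameters assumed from its public REST search).
    const params = new URLSearchParams({
      method: "cooperhewitt.search.objects",
      access_token: process.env.COOPERHEWITT_TOKEN,
      query: description,
      has_images: "1",
      per_page: "12",
    });
    const chResponse = await fetch(`https://api.collection.cooperhewitt.org/rest/?${params}`);
    const data = await chResponse.json();

    // 3. Keep only the metadata the front end needs (field names assumed).
    const results = (data.objects || []).map((obj) => ({
      id: obj.id,
      title: obj.title,
      description: obj.description,
      image: obj.images?.[0]?.b?.url ?? null,
    }));

    res.json({ description, results });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: "Something went wrong while matching the sketch." });
  }
});

app.listen(3000);
```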
INSPIRATION
Every piece of art starts with a simple sketch. The idea came from a visit to the Cooper Hewitt museum, where a group of kids were asked to draw objects from the collection. They weren't trying to replicate the artifacts perfectly. They were just responding, drawing shapes, lines, and moments that caught their attention. It was playful, spontaneous, and personal.
Instead of relying on keywords or categories, the experience starts with a sketch. That drawing is interpreted by ChatGPT, which turns it into a descriptive phrase. The phrase is then used to query the museum's collection and return related artifacts. The process is not about precision. It is about making loose connections between personal gestures and the history of art and design.
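On the front end, the capture-and-submit step can be as simple as encoding the canvas and posting it to the back end. The element IDs and the /api/match path below are hypothetical, used only to illustrate the flow.

```js
// A minimal front-end sketch, assuming the drawing lives on a <canvas id="doodle">
// and the back end exposes the hypothetical /api/match route shown earlier.
const canvas = document.getElementById("doodle");

async function submitDoodle() {
  // Encode the canvas contents as a base64 PNG data URL.
  const imageDataUrl = canvas.toDataURL("image/png");

  const response = await fetch("/api/match", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageDataUrl }),
  });
  const { description, results } = await response.json();

  // Show the interpreted phrase and render a simple gallery of matches.
  document.getElementById("interpretation").textContent = description;
  const gallery = document.getElementById("gallery");
  gallery.innerHTML = "";
  for (const item of results) {
    const figure = document.createElement("figure");
    figure.innerHTML = `
      <img src="${item.image ?? ""}" alt="${item.title ?? "Untitled"}">
      <figcaption>${item.title ?? "Untitled"}</figcaption>
    `;
    gallery.appendChild(figure);
  }
}

document.getElementById("submit").addEventListener("click", submitDoodle);
```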
PERFORMANCE ANALYSIS
The current system performs relatively well in producing relevant and often surprising matches between user drawings and artifacts from the Cooper Hewitt collection.
However, there are limitations to the current approach. The system queries the Cooper Hewitt API, which doesn’t provide structured metadata specifically for visual form or shape. As a workaround, the query relies on the “description” field, which sometimes includes notes about form or design features, but it’s inconsistent and not intended for visual classification. As a result, some matches can feel off, or too general.
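To make the limitation concrete, the current matching behaves roughly like a keyword-overlap ranking against that free-text field. This is an illustrative sketch of the workaround, not the project's exact implementation.

```js
// Rank retrieved objects by naive word overlap between the sketch description
// and each object's free-text "description" field — a stand-in for structured
// shape metadata, which the API does not provide.
function rankByDescriptionOverlap(sketchDescription, objects) {
  const sketchWords = new Set(
    sketchDescription.toLowerCase().split(/\W+/).filter((w) => w.length > 3)
  );
  return objects
    .map((obj) => {
      const text = (obj.description || "").toLowerCase();
      let overlap = 0;
      for (const word of sketchWords) {
        if (text.includes(word)) overlap += 1;
      }
      return { ...obj, score: overlap };
    })
    .sort((a, b) => b.score - a.score);
}
```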
A RAG-based model would address this by embedding the entire collection into a vector database using shape-aware or language-based embeddings. Rather than depending on keyword matching within a fixed API field, the system could retrieve results based on semantic similarity between the sketch interpretation and pre-processed collection data. This would allow for more accurate retrieval, especially for abstract or non-standard inputs.
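A minimal sketch of that direction, assuming the collection records have already been fetched, using OpenAI's text-embedding-3-small model, and substituting an in-memory array for a real vector database:

```js
// Hedged sketch of RAG-style retrieval over the collection's descriptions.
import OpenAI from "openai";

const openai = new OpenAI();

// Pre-processing step: embed every collection record once and store the vectors.
async function embedCollection(records) {
  const embedded = [];
  for (const record of records) {
    const res = await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: `${record.title}. ${record.description}`,
    });
    embedded.push({ ...record, vector: res.data[0].embedding });
  }
  return embedded;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Retrieval step: embed the sketch interpretation and return the closest
// records by semantic similarity instead of keyword matching.
async function retrieveSimilar(sketchDescription, embeddedCollection, topK = 12) {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: sketchDescription,
  });
  const queryVector = res.data[0].embedding;
  return embeddedCollection
    .map((record) => ({ ...record, score: cosineSimilarity(queryVector, record.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

Because retrieval happens in embedding space rather than through exact word matches, abstract or loosely drawn doodles can still land near related artifacts.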
UI IMPROVEMENT
These improvements were made to support better visual continuity, interpretability, and engagement. Seeing the sketch and result side by side helps users understand how their drawing was interpreted. Presenting more context about each object encourages further exploration and gives the interaction more depth. The addition of a match confidence score also gives users a sense of how well the system understood their intent.