Making music discovery effortless, wherever life takes you.
Add a Feature
Role
UX / UI Designer • Researcher
Timeline
12 weeks
Tools
Figma • Participate
Background
Listening to music on the go should feel effortless, but moments like driving, working out, or multitasking can make it harder to interact with apps safely and intuitively.
While Spotify delivers a polished experience, it doesn’t always support hands-free discovery or real-time song identification.
This project set out to enhance Spotify’s experience—adding an AI voice assistant and native song ID to make music discovery and interaction even more seamless, intuitive, and accessible.
Double Diamond Framework
I used the Double Diamond framework to guide this project—starting with broad research, then defining key user needs before developing and refining solutions.
This approach helped me stay focused on real user pain points early, and flexible later as I iterated based on usability feedback.
The Double Diamond kept the process user-centered from start to finish, ensuring every design decision tied back to the real-world needs of Spotify listeners.
Discover
Primary Research: User Interviews
To better understand how users interact with music apps on the go, I interviewed five frequent music streamers with a range of listening habits and discovery preferences.
Afterwards, I created an Affinity Map to synthesize the data:
💡 Key Themes & Insights
🎧 Hands-Free Interaction
Users wanted easier ways to control music while driving, cooking, or multitasking.
🔍 Song Identification Struggles
Several users relied on third-party apps like Shazam or Siri to identify music, breaking the flow of their listening experience.
🗣️ Frustration with Voice Tools
Participants described current voice assistants as robotic, rigid, and lacking natural language understanding.
📚 Desire for Integration
Users wished for a more seamless way to discover and save songs without leaving Spotify.
🎯 Feature Preferences
Voice commands that understand natural, conversational prompts
Built-in song ID feature that saves identified tracks to playlists
Quick access to features from the Home screen without digging through Settings
These interviews validated the need for smarter, more integrated features within Spotify—and highlighted opportunities to remove friction from spontaneous music discovery and hands-free control.
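To ground what "natural, conversational prompts" could mean in practice, here is a minimal TypeScript sketch of keyword-based intent matching; the intent names and phrase patterns are hypothetical illustrations, not Spotify's actual voice stack.

```ts
// Hypothetical intent matcher for conversational voice commands.
// Intent names and phrase patterns are illustrative only.
type Intent =
  | { kind: "play_mood"; mood: string }
  | { kind: "identify_song" }
  | { kind: "save_current" }
  | { kind: "unknown" };

const rules: Array<[RegExp, (m: RegExpMatchArray) => Intent]> = [
  // "play something chill", "put on upbeat music"
  [/(?:play|put on)\s+(?:something\s+)?(\w+)/i,
    (m) => ({ kind: "play_mood", mood: m[1].toLowerCase() })],
  // "what song is this?", "identify this song"
  [/what(?:'s| is) (?:this|that) song|identify this/i,
    () => ({ kind: "identify_song" })],
  // "save this", "add it to my playlist"
  [/save this|add (?:it|this) to my (?:playlist|library)/i,
    () => ({ kind: "save_current" })],
];

function parseCommand(utterance: string): Intent {
  for (const [pattern, toIntent] of rules) {
    const match = utterance.match(pattern);
    if (match) return toIntent(match);
  }
  return { kind: "unknown" }; // fall back to a clarifying voice response
}

console.log(parseCommand("Hey Spotify, play something chill"));
// { kind: "play_mood", mood: "chill" }
```

Real natural-language understanding would replace the regexes with a trained model, but the contract users were asking for stays the same: a free-form utterance in, a structured music action out.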
Primary Research: Competitive Analyses
After uncovering user needs, I conducted two competitive analyses—one for voice assistants and one for audio fingerprinting apps—to evaluate where current solutions succeed and where they fall short.
The analysis revealed a key gap:
Voice assistants like Siri, Alexa, and Google Assistant offer hands-free controls but often feel rigid or disconnected. Song identification apps like Shazam and SoundHound help recognize music but require users to leave their current experience.
No platform delivers real-time song discovery and intuitive voice interaction fully integrated into one streaming app—leaving clear space for Spotify to innovate.
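For readers unfamiliar with the underlying technique: audio fingerprinting systems in the Shazam family typically hash pairs of spectrogram peaks and look them up against a reference index. The TypeScript sketch below is a deliberately simplified, hypothetical illustration of that matching step; it assumes peak extraction has already happened and is not any vendor's actual algorithm.

```ts
// Simplified fingerprint matching: hash pairs of spectrogram peaks
// (two frequencies plus their time gap), then count matching hashes
// per reference track. Real systems add robust peak picking,
// quantization, and time-offset consistency checks.
type Peak = { freq: number; time: number };

function hashPair(a: Peak, b: Peak): string {
  return `${a.freq}:${b.freq}:${Math.round(b.time - a.time)}`;
}

// Build the reference index: fingerprint hash -> track IDs containing it.
function buildIndex(tracks: Map<string, Peak[]>): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const [trackId, peaks] of tracks) {
    for (let i = 0; i < peaks.length - 1; i++) {
      const h = hashPair(peaks[i], peaks[i + 1]);
      const entries = index.get(h) ?? [];
      entries.push(trackId);
      index.set(h, entries);
    }
  }
  return index;
}

// Identify a short clip by voting: the track sharing the most hashes wins.
function identify(clip: Peak[], index: Map<string, string[]>): string | null {
  const votes = new Map<string, number>();
  for (let i = 0; i < clip.length - 1; i++) {
    for (const trackId of index.get(hashPair(clip[i], clip[i + 1])) ?? []) {
      votes.set(trackId, (votes.get(trackId) ?? 0) + 1);
    }
  }
  let best: string | null = null;
  let bestVotes = 0;
  for (const [trackId, v] of votes) {
    if (v > bestVotes) { best = trackId; bestVotes = v; }
  }
  return best;
}
```

The relevant point for this project is not the matching math but the hand-off cost: even when a third-party app identifies the song, the result lives outside the listening session, which is exactly the friction Spotify could remove.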
Define
Research Synthesis
100% of participants stream music daily, often while multitasking (driving, cooking, working out)
80% struggled to identify songs without switching apps, relying on tools like Shazam or Siri
60% expressed frustration with current voice assistants feeling robotic or limited
60% wanted faster, more intuitive access to music controls without digging through menus
Saving identified songs automatically into playlists or libraries was a frequently requested feature
Research validated early assumptions around hands-free interaction and uncovered a stronger need for real-time, integrated music discovery without breaking the listening flow.
Persona
Based on my research, I created a persona to represent key user needs around hands-free interaction and spontaneous music discovery.
POVs & HMWs
POV:
Spotify users are eager to explore new music through channels beyond curated playlists, signaling a need for more diverse and personalized discovery methods.
HMW:
How might we expand music discovery options for users who want more than just curated playlists?
POV:
Some users struggle to accurately identify songs they hear, as current audio fingerprinting tools fall short in both reliability and integration with Spotify.
HMW:
How might we enable users to instantly recognize songs and seamlessly add them to their library using a more accurate and integrated solution?
POV:
Users interested in hands-free control often avoid using Spotify’s voice commands due to poor natural language comprehension and limited functionality across tasks and devices.
HMW:
How might we redesign voice interactions so users can intuitively request curated music or identify songs on the go?
Develop
Ideation
With clearly defined problem spaces, I moved into the Develop phase—generating ideas that could address users’ needs for seamless, hands-free interaction and real-time music discovery.
Initial concepts included:
AI Voice Assistant for natural, hands-free music control
Native Audio Fingerprinting button for instant song identification
Offline functionality for Audio Fingerprinting
Voice commands that support personalized, mood-based music discovery
Interactive voice responses to adapt playback based on user requests
Quick-save actions for identified songs into playlists or libraries
Deep integration with Spotify’s existing ecosystem to avoid third-party disruptions
These ideas were then evaluated to identify the highest-value features to include in the first prototype.
Prioritization
I used an Impact–Effort Matrix to prioritize solutions based on user needs, feasibility, and potential impact on hands-free interaction and music discovery.
High-impact, low-effort features—such as a native Audio Fingerprinting button, voice commands for discovery, and offline song identification—were prioritized for early implementation.
Higher-effort ideas, like a fully interactive voice assistant that could dynamically adapt to user requests, were thoughtfully scoped to lightweight versions to fit within project constraints.
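As a concrete illustration of that triage, the sketch below buckets the candidate features into the four Impact–Effort quadrants; the 1–5 scores are hypothetical placeholders, not the actual values from my matrix.

```ts
// Bucket feature ideas into Impact–Effort quadrants.
// The 1–5 scores are hypothetical placeholders for illustration.
type Feature = { name: string; impact: number; effort: number };

const ideas: Feature[] = [
  { name: "Native Song ID button", impact: 5, effort: 2 },
  { name: "Voice commands for discovery", impact: 5, effort: 3 },
  { name: "Offline song identification", impact: 4, effort: 3 },
  { name: "Fully adaptive voice assistant", impact: 5, effort: 5 },
];

function quadrant({ impact, effort }: Feature): string {
  if (impact >= 4 && effort <= 3) return "Quick win: build first";
  if (impact >= 4) return "Big bet: scope down";
  if (effort <= 3) return "Fill-in: revisit later";
  return "Time sink: avoid";
}

for (const f of ideas) {
  console.log(`${f.name} -> ${quadrant(f)}`);
}
```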
After prioritizing key features, I mapped out Spotify’s new user flows to guide the structure of the added experiences.
I organized essential actions like activating voice control, identifying songs, and saving discovered music into a simplified sitemap—ensuring the experience stayed lightweight, intuitive, and easy to access on the go.
User Flow
After mapping the structure and user flows, I moved into low-fidelity wireframes to visualize core interactions.
Low Fidelity Wireframes
User testing confirmed that the core structure and flows were intuitive, with participants completing key tasks successfully.
After participants initially struggled to locate "Hey Spotify," I added Home screen buttons for easier access.
A second round of testing validated the update, with all users completing tasks without issue.
Lo-Fi Usability Testing
Validate task flows:
Enable ‘Hey Spotify’ Voice Assistant
Enable Audio Fingerprinting
Use Voice Commands for ‘Hey Spotify’
Identify a song using Audio Fingerprinting
👥 Participants
5 testers
🗂️ Format
Moderated, remote (~15 min)
Hi-Fi Usability Testing
Core Flows:
Access ‘Hey Spotify’ feature
Access Audio Fingerprinting feature
Enable ‘Hey Spotify’ in Settings
Enable Audio Fingerprinting in Settings
👥 Participants
5 testers who stream music daily, like discovering new music, and rarely use voice assistants
🗂️ Format
Moderated remote sessions through Participate (~30 min)
✅ What Worked
5/5 successfully completed all tasks with high confidence
4/5 found the features easy to access and intuitive
Users felt the experience aligned well with Spotify’s existing design system
⚠️ Pain Points
3/5 were confused by the Terms & Conditions flow and requested a button or confirmation modal
2/5 made errors trying to locate feature settings
2/5 suggested feature icons should look more like buttons or cards
1/5 found the "Audio Fingerprinting" terminology unclear
User testing on the high-fidelity prototype surfaced areas where the experience still needed refinement.
While the overall structure felt intuitive and aligned with Spotify’s design system, feedback revealed opportunities to improve clarity, streamline interactions, and make key features easier to find and use.
💡 What to Improve
Add clear confirmation after accepting Terms & Conditions
Simplify confusing labels in Settings (e.g., clarify “Voice Interactions”)
Improve discoverability with better onboarding and clearer home screen access
Redesign icons to feel more interactive and recognizable
Use more intuitive language for features (e.g., rename "Audio Fingerprinting")
Iterations
Hey Spotify flow
Added a launch modal to onboard users to “Hey Spotify” and “Song ID,” improving feature awareness and discoverability.
Redesigned feature icons and layout using colorful, tappable cards with “New!” labels to enhance visual clarity and interaction.
Renamed “Audio Fingerprinting” to “Song ID” to make the feature’s purpose more understandable to users.
Added a confirmation button that only becomes active after users accept Terms & Conditions, creating a smoother and clearer opt-in flow (sketched below).
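A minimal sketch of that gating logic, assuming a simple view-model shape (the names here are hypothetical): the confirm action stays disabled until the user explicitly accepts the terms.

```ts
// Opt-in view model: the confirm button only becomes enabled after
// the user accepts Terms & Conditions. Names are hypothetical.
class OptInFlow {
  private accepted = false;
  private confirmed = false;

  acceptTerms(): void {
    this.accepted = true;
  }

  // Bind this to the button's disabled state in the UI.
  get confirmEnabled(): boolean {
    return this.accepted && !this.confirmed;
  }

  confirm(): void {
    if (!this.confirmEnabled) throw new Error("Accept the terms first");
    this.confirmed = true; // feature is now opted in
  }
}

const flow = new OptInFlow();
console.log(flow.confirmEnabled); // false: button rendered disabled
flow.acceptTerms();
console.log(flow.confirmEnabled); // true: button becomes tappable
flow.confirm();
```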
Song ID flow
Relabeled “Voice Interactions” to “Hey Spotify & Song ID” in Settings to make the feature entry point clearer and more direct for users.
UI Kit
Interactive Prototype
What’s Next
Next, I want to keep improving the onboarding experience and test how users respond over time. I’m also curious about responsibly using listening behavior data to personalize feature discovery and refine future updates based on real user needs.
Reflections
This project challenged me to think critically about how users discover and interact with new features — especially when those features are meant to improve safety and convenience. Through research, I realized that Spotify’s decision to discontinue "Car Thing" validated the need for better hands-free options like "Hey Spotify" and "Song ID." It confirmed that I was solving a real gap users were facing.
One of the biggest lessons I took away was not to overlook user pain points, even when they seem small. After early usability testing, I already had enough feedback to know that "Audio Fingerprinting" needed a more intuitive name and that users needed better onboarding. At first, I didn't act on those insights, but addressing them in the high-fidelity iterations showed me how dramatically even small changes can improve user confidence and experience.
I also strengthened my technical skills, especially in building more realistic, conditional prototypes that better mirror real-world behavior.
Overall, this project reinforced that good design is often invisible — it quietly clears the way so users can focus on what they actually want to do. Moving forward, I want to keep sharpening that mindset: staying responsive to user feedback, trusting the data, and designing experiences that feel natural, helpful, and human.