Co-Parenting made simple.
0 to 1 MVP Sprint
Role
Product Designer (solo)
Collaborator
Adrian Cole (Product Manager)
Sprint Timeline
June 25 – July 3, 2025 (8 days)
Tools
Figma • Stitch • Mobbin
Project Background
Adrian Cole, a Product Manager I’d worked with before, brought me into this sprint project. The end client is a healthcare executive who needed a mobile app that would help divorced or separated parents navigate co-parenting in a structured, emotionally safe, and empowering way.
The project came with a detailed PRD and design brief — but no user research, no existing brand, and just one week to deliver.
The Challenge
This was not a bootcamp-style project. There was:
❌ No user research
❌ No testing or iteration
❌ No dev handoff
Instead, I had:
✅ A structured PRD and Design Brief with lots of features
✅ Emotionally complex content to design for
✅ A fixed 1-week deadline
✅ A mandate to create something that felt calm, private, and supportive
The core challenge? How do you bring structure and sensitivity to a product focused on post-divorce parenting — without user validation, and in a sprint?
My Process
1. Interpreting the PRD
The PRD covered:
AI-powered decision-making tools
A 12-month emotional healing LMS
Structured onboarding
Emotional safety guidelines
Topic categories (custody, new partners, sex/drugs, etc.)
I created screen maps and UI structures based on this content, always referencing the emotional tone.
2. Experimenting with Stitch
Instead of starting with hand sketches (my usual workflow), I tested Stitch, Google’s AI design tool, to generate wireframes quickly.
Pros:
Fast generation of basic layouts
Aesthetically clean outputs
Cons:
Missed key prompt elements
Not scalable for real product systems
I made the call to rebuild all screens from scratch in Figma, using Stitch only as rough inspiration.
3. Collaboration with the PM
Adrian and I aligned on:
MVP scope
Feature prioritization
Emotional boundaries (e.g., no co-parent chat in MVP)
I checked in frequently to make sure design decisions matched the client’s therapeutic intent.
4. Design Execution in Figma
I built out:
Splash, login, onboarding
Dashboard with Healing Path entry
AI Decision Companion chat
Support request flow
Emotional check-ins and progress tracking
POVs & HMWs
POV:
Spotify users are eager to explore new music through channels beyond curated playlists, signaling a need for more diverse and personalized discovery methods.
HMW:
How might we expand music discovery options for users who want more than just curated playlists?
POV:
Some users struggle to accurately identify songs they hear, as current audio fingerprinting tools fall short in both reliability and integration with Spotify.
HMW:
How might we enable users to instantly recognize songs and seamlessly add them to their library using a more accurate and integrated solution?
POV:
Users interested in hands-free control often avoid using Spotify’s voice commands due to poor natural language comprehension and limited functionality across tasks and devices.
HMW:
How might we redesign voice interactions so users can intuitively request curated music or identify songs on the go?
Prioritization
With clearly defined problem spaces, I moved into the Develop phase—generating ideas that could address users’ needs for seamless, hands-free interaction and real-time music discovery.
Initial concepts included:
AI Voice Assistant for natural, hands-free music control
Native Audio Fingerprinting button for instant song identification
Offline functionality for Audio Fingerprinting
Voice commands that support personalized, mood-based music discovery
Interactive voice responses to adapt playback based on user requests
Quick-save actions for identified songs into playlists or libraries
Deep integration with Spotify’s existing ecosystem to avoid third-party disruptions
I evaluated these concepts with an Impact–Effort Matrix, prioritizing solutions based on user needs, feasibility, and potential impact on hands-free interaction and music discovery.
High-impact, low-effort features—such as a native Audio Fingerprinting button, voice commands for discovery, and offline song identification—were prioritized for early implementation.
Higher-effort ideas, like a fully interactive voice assistant that could dynamically adapt to user requests, were scoped down to lightweight versions to fit the project constraints.
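The prioritization step can be sketched as a simple scoring pass. The impact and effort scores below are illustrative placeholders I've assigned for the sketch, not data from the project, where the matrix was a collaborative mapping exercise rather than a scored dataset:

```python
# Hypothetical impact/effort scores (1-5) for the feature concepts;
# illustrative values only, not measurements from the project.
features = {
    "Native Audio Fingerprinting button": (5, 2),
    "Voice commands for discovery": (4, 2),
    "Offline song identification": (4, 2),
    "Fully interactive voice assistant": (5, 5),
    "Quick-save identified songs": (3, 1),
}

def quadrant(impact, effort, threshold=3):
    """Map an (impact, effort) pair to its Impact-Effort Matrix quadrant."""
    if impact >= threshold:
        return "quick win" if effort < threshold else "big bet"
    return "fill-in" if effort < threshold else "money pit"

# List features best-ratio first: high impact relative to effort floats to the top.
for name, (impact, effort) in sorted(
    features.items(), key=lambda kv: kv[1][1] - kv[1][0]
):
    print(f"{quadrant(impact, effort):>9}: {name}")
```

In this framing, "quick wins" (high impact, low effort) map to the features shipped in the first prototype, while "big bets" like the full voice assistant get scoped to lightweight versions.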
After prioritizing key features, I mapped out Spotify’s new user flows to guide the structure of the added experiences.
I organized essential actions like activating voice control, identifying songs, and saving discovered music into a simplified sitemap—ensuring the experience stayed lightweight, intuitive, and easy to access on the go.
User Flow
After mapping the structure and user flows, I moved into low-fidelity wireframes to visualize core interactions.
Low Fidelity Wireframes
User testing confirmed that the core structure and flows were intuitive, with participants completing key tasks successfully.
After participants initially struggled to locate "Hey Spotify," I added Home screen buttons for easier access.
A second round of testing validated the update, with all users completing tasks without issue.
Lo-Fi Usability Testing
Validate task flows:
Enable ‘Hey Spotify’ Voice Assistant
Enable Audio Fingerprinting
Use Voice Commands for ‘Hey Spotify’
Identify a song using Audio Fingerprinting
👥 Participants
5 testers
🗂️ Format
Moderated, remote (~15 min)
Hi-Fi Usability Testing
Core Flows:
Access ‘Hey Spotify’ feature
Access Audio Fingerprinting feature
Enable ‘Hey Spotify’ in Settings
Enable Audio Fingerprinting in Settings
👥 Participants
5 testers who stream music daily, like discovering new music, and rarely use voice assistants
🗂️ Format
Moderated remote sessions through Participate
(~30 min)
✅
What Worked
5/5 successfully completed all tasks with high confidence
4/5 found the features easy to access and intuitive
Users felt the experience aligned well with Spotify’s existing design system
⚠️
Pain Points
3/5 were confused by the Terms & Conditions flow and requested a confirmation button or modal
2/5 made errors trying to locate feature settings
2/5 suggested feature icons should look more like buttons or cards
1/5 found the "Audio Fingerprinting" terminology unclear
User testing on the high-fidelity prototype surfaced areas where the experience still needed refinement.
While the overall structure felt intuitive and aligned with Spotify’s design system, feedback revealed opportunities to improve clarity, streamline interactions, and make key features easier to find and use.
💡
What to Improve
Add clear confirmation after accepting Terms & Conditions
Simplify confusing labels in Settings (e.g., clarify “Voice Interactions”)
Improve discoverability with better onboarding and clearer home screen access
Redesign icons to feel more interactive and recognizable
Use more intuitive language for features (e.g., rename "Audio Fingerprinting")
Iterations
Hey Spotify flow
Added a launch modal to onboard users to “Hey Spotify” and “Song ID,” improving feature awareness and discoverability.
Redesigned feature icons and layout using colorful, tappable cards with “New!” labels to enhance visual clarity and interaction.
Renamed “Audio Fingerprinting” to “Song ID” to make the feature’s purpose more understandable to users.
Added a confirmation button that only becomes active after users accept Terms & Conditions, creating a smoother and clearer opt-in flow.
Song ID flow
Relabeled “Voice Interactions” to “Hey Spotify & Song ID” in Settings to make the feature entry point clearer and more direct for users.
UI Kit
Interactive Prototype
What’s Next
Next, I want to keep improving the onboarding experience and test how users respond over time. I’m also curious about responsibly using listening behavior data to personalize feature discovery and refine future updates based on real user needs.
Reflections
This project challenged me to think critically about how users discover and interact with new features — especially when those features are meant to improve safety and convenience. Through research, I realized that Spotify’s decision to discontinue "Car Thing" validated the need for better hands-free options like "Hey Spotify" and "Song ID." It confirmed that I was solving a real gap users were facing.
One of the biggest lessons I took away was not to overlook user pain points, even when they seem small. Early usability testing gave me enough feedback to see that "Audio Fingerprinting" needed a more intuitive name and that users needed better onboarding. I didn't act on those insights at first, but addressing them in the high-fidelity iterations showed me how much even small changes can improve user confidence and experience.
I also strengthened my technical skills, especially in building more realistic, conditional prototypes that better mirror real-world behavior.
Overall, this project reinforced that good design is often invisible — it quietly clears the way so users can focus on what they actually want to do. Moving forward, I want to keep sharpening that mindset: staying responsive to user feedback, trusting the data, and designing experiences that feel natural, helpful, and human.