Bhava is my nights-and-weekends AI tool, built with one part-time engineer. I lead everything from product discovery to shipped experiments, driving activation to 60% within weeks of launch.
Founder & Designer
4 Weeks
60% Activation
Mapped workflows, ran 8 interviews, and sized activation gaps before touching UI.
Turned the prompt into a guided demo, added progress states, and rebuilt onboarding.
Removed free mode, shipped usage-based pricing, and tracked retention + MRR weekly.
Logged 100+ failed diagrams, clustered errors, and guided sub-agent strategy to lift accuracy.
Activation lift
Early users
Paying teams
Renewals sealed
I turned Bhava from a promising hack into a trust-first workspace where the AI shows its work, asks for the right context, and outputs clean, editable diagrams.
From hacked-together prototypes to a polished product.
The first build was a blank prompt. Users froze because the AI gave no hints or previews, so we added suggested prompts, live thinking states, and “show your work” logging.
We mapped out credits per plan to cap abuse and force value alignment. This later informed the usage-based billing that filtered out low-intent signups.
Built an onboarding path that asks for system type, generates a diagram with context, then shows a repair checklist if quality is low, lifting activation to 60% (flow sketched below).
What failed: We launched with a "Free Forever" tier that limited features. It attracted spam and low-quality usage. Switching to a "Free Trial + Paid Only" model filtered for serious users and improved feedback quality.
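Here's a minimal sketch of that onboarding path. The type names, quality checks, and threshold are illustrative assumptions, not Bhava's actual internals:

```ts
// Hypothetical onboarding flow. Names, thresholds, and checks are
// illustrative; they are not Bhava's real implementation.
type SystemType = "microservices" | "data-pipeline" | "auth-flow" | "other";

interface Diagram {
  xml: string; // Draw.io-compatible XML
  nodeCount: number;
}

interface QualityReport {
  score: number;    // 0..1 from cheap structural checks
  issues: string[]; // human-readable repair checklist items
}

// Stub generator: a real implementation would call the model with the
// chosen system type as extra context.
async function generateDiagram(systemType: SystemType, prompt: string): Promise<Diagram> {
  return { xml: `<mxGraphModel><!-- ${systemType}: ${prompt} --></mxGraphModel>`, nodeCount: 6 };
}

// Structural checks that decide whether to surface the repair checklist.
function scoreQuality(d: Diagram): QualityReport {
  const issues: string[] = [];
  if (d.nodeCount < 3) issues.push("Too few nodes: name the missing components.");
  if (!d.xml.includes("mxGraphModel")) issues.push("Output is not valid Draw.io XML.");
  return { score: 1 - issues.length * 0.5, issues };
}

async function onboardFirstDiagram(systemType: SystemType, prompt: string): Promise<Diagram> {
  const diagram = await generateDiagram(systemType, prompt);
  const report = scoreQuality(diagram);
  if (report.score < 0.7) {
    // Show a repair checklist instead of failing silently.
    console.log("Here's what to fix:", report.issues);
  }
  return diagram;
}
```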
Every week, I watched designers redraw the same diagram, like déjà vu with arrows.
I'm a product designer at an ad-tech startup. By day, I'm deep in B2B dashboards. By night, I watch my team waste hours redrawing the same system diagram in Figma, Draw.io, Excalidraw, and Miro.
Same workflow. Four different tools. Different versions. Complete chaos.
So I started building Bhava, an AI tool that generates diagrams instantly. But more importantly, one that doesn't feel like a black box.
This is early stage. We're 4 weeks post-launch with ~2,500 users and $250–300 MRR. I work on this part-time alongside my full-time job. One engineer friend helps part-time. Between us, I handle design, product, evals, UI fixes, pricing experiments, and customer interviews. He handles optimization and infrastructure.
This is the story of how we went from a fuzzy idea to 60% activation, and what I learned about building AI products people actually trust.
"AI doesn't need to be perfect-it just needs to show it's trying."
Our bet: Build on top of Draw.io (largest user base) and make AI feel reliable, not random.
Every design decision mapped back to a trust framework drawn from AI research:
Can the AI actually do the task?
Does it feel like it's helping me?
Is it honest about what it can and can't do?
Does it work consistently?
Before redesigning anything, I spent 2 weeks analyzing user behavior: watching session recordings, tracking prompts, and interviewing people who churned.
Deep dives with designers and PMs who churned within 48 hours
Analyzed recordings to identify drop-off patterns and friction points
Live observation of first-time users attempting diagram creation
Lands on homepage via Product Hunt or Twitter → curious, skeptical
Creates account, sees empty editor → confused, uncertain
Stares at text box with no guidance → paralyzed, frustrated
Hits generate, sees spinner for 7 seconds → anxious, doubting
Gets poor result or gives up → disappointed, churns
Users landed on an empty editor with no guidance, no examples. They froze.
"Intelligent" vs "Basic" results varied wildly. Trust eroded fast.
3–8 seconds of spinner. No updates. Pure anxiety.
Only 15% exported their first diagram. The happy path was invisible.
I don't know what to type, so I just close the tab.
Why does intelligent mode give me different results?
It's just spinning... is it even working?
Where do I export this diagram?
"The activation gap is usually a clarity gap-not a capability gap."
Understanding how Bhava stacks up against existing diagramming solutions
AI-first approach cuts diagram creation time from 30 minutes to 30 seconds
Shows reasoning and progress states, and allows editing; not a black box
Export to editable Draw.io, PNG, or SVG; works with existing tools
Presets and examples eliminate blank canvas paralysis
Each redesign tackled a specific trust or activation gap. Here's what worked.
First we fixed clarity, then trust, then monetisation.
Getting users to understand what to do and how to start
Problem: Vague CTAs meant visitors signed up without understanding what to type.
Solution: Elevated a giant prompt box with example chips and a mini walkthrough so users preview the experience before creating an account.
Problem: New users froze on an empty chat and churned without generating anything.
Solution: Added diagram-type cards, contextual hints, and a three-step progress indicator that nudges people into action.
"Clarity unlocks activation. But trust keeps users coming back."
Building reliability into the product experience
Problem: The legacy "Basic" mode produced low-quality diagrams that tanked perceived reliability.
Solution: Sunset the free mode, offered one premium try, and introduced usage-gated access to keep output quality consistent.
"Monetisation isn't just about pricing-it's about signalling reliability."
Aligning value with sustainable monetisation
Problem: Unlimited $10/month plans were unprofitable and encouraged abuse.
Solution: Swapped to a $10 base plan with transparent credit packs and real-time usage tracking.
Problem: Pricing changes created confusion; users couldn't tell where credits went.
Solution: Built an always-available tutorial and a usage dashboard detailing credits, modes, and expiry.
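As a rough illustration of the metering mechanics behind that dashboard, here's a minimal credit-ledger sketch; the mode names, per-mode costs, and expiry handling are assumptions for the example, not our real billing code:

```ts
// Illustrative credit ledger; mode names, costs, and expiry are assumptions.
type Mode = "standard" | "intelligent";

const CREDIT_COST: Record<Mode, number> = { standard: 1, intelligent: 4 };

interface Account {
  credits: number;
  expiresAt: Date; // credit packs expire, so the dashboard surfaces this
  usage: { mode: Mode; credits: number; at: Date }[];
}

// Returns false when the account can't cover the generation, so the UI
// can show an upsell instead of a silent failure.
function chargeGeneration(account: Account, mode: Mode): boolean {
  const cost = CREDIT_COST[mode];
  if (account.credits < cost || account.expiresAt < new Date()) return false;
  account.credits -= cost;
  // Every deduction is logged so the usage dashboard can explain
  // exactly where credits went.
  account.usage.push({ mode, credits: cost, at: new Date() });
  return true;
}
```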
Problem: Diagram quality varied by type and we lacked clarity on failure patterns.
Solution: Logged ~100 failed diagrams, clustered errors, and routed high-volume types through specialized sub-agents.
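A minimal sketch of that routing idea, with hypothetical diagram types and system prompts standing in for the taxonomy we derived from the clustered failure log:

```ts
// Hypothetical sub-agent router; types and prompts are illustrative.
type DiagramType = "sequence" | "architecture" | "er" | "flowchart";

interface SubAgent {
  systemPrompt: string; // specialized instructions per diagram family
}

// High-volume, high-failure types get a dedicated agent...
const SUB_AGENTS: Partial<Record<DiagramType, SubAgent>> = {
  sequence: { systemPrompt: "Draw UML sequence diagrams. Order lifelines left to right." },
  architecture: { systemPrompt: "Draw architecture diagrams. Group services by boundary." },
};

// ...while long-tail types fall back to a generalist prompt.
const GENERALIST: SubAgent = { systemPrompt: "Draw the requested diagram as Draw.io XML." };

function routeRequest(type: DiagramType): SubAgent {
  return SUB_AGENTS[type] ?? GENERALIST;
}
```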
A snapshot of where things stand after the first month of shipping.
Users who signed up and created their first diagram. Guided onboarding was the key driver.
Homepage prompt demo let visitors understand the product before signing up.
Prompt caching and optimization reduced median diagram generation time.
Per-diagram cost after implementing prompt caching on the Claude API (sketched below).
MRR from ~30 paying customers after switching to usage-based pricing.
Users who return and create another diagram within the first week.
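The caching win above comes down to one trick: mark the long, stable system prompt as cacheable so repeat requests pay full input rates only for the short user prompt. A minimal sketch using the Anthropic SDK; the model alias and prompt text are placeholders:

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Long, rarely-changing instructions (diagram rules, Draw.io XML notes).
// Placeholder text; caching only kicks in above a minimum prefix length.
const DIAGRAM_SYSTEM_PROMPT = "You generate Draw.io-compatible XML diagrams...";

export async function generate(userPrompt: string) {
  return client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 4096,
    system: [
      {
        type: "text",
        text: DIAGRAM_SYSTEM_PROMPT,
        // Cache the stable prefix; requests within the cache TTL read it
        // back at a reduced per-token price.
        cache_control: { type: "ephemeral" },
      },
    ],
    messages: [{ role: "user", content: userPrompt }],
  });
}
```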
"I learned that AI trust is built in microseconds, even a7-second latency feels fine only when users see progress."
Trust compounds, but so does mistrust. One broken diagram erodes more trust than five perfect ones build. That's why removing the "Basic" mode, even though it cut our free tier, was the right call. Quality consistency matters more than feature breadth in AI products.
Monetisation isn't just about pricing; it's about signalling reliability. When we introduced usage-based pricing, we weren't just managing costs; we were telling users "this output is valuable enough to meter." Paradoxically, charging more increased trust because it signalled we stood behind the quality.
Progress indicators are trust multipliers. The same 7-second generation time feels entirely different when users see "Analyzing structure... Generating nodes... Optimizing layout" versus a blank spinner. Transparency about what's happening builds confidence even when things take time.
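A minimal sketch of those staged progress events; the stage names match the UI copy above, while the planning, generation, and layout steps are stubbed stand-ins for the real model calls:

```ts
// Stage names match the UI copy; the steps below are stubs, not real internals.
type Stage = "Analyzing structure" | "Generating nodes" | "Optimizing layout";

const planDiagram = async (prompt: string) => ({ prompt });
const generateNodes = async (_plan: { prompt: string }) => ["api", "db", "queue"];
const layoutDiagram = async (nodes: string[]) =>
  `<mxGraphModel><!-- ${nodes.join(", ")} --></mxGraphModel>`;

// Emitting a stage before each slow step turns a 7-second blank spinner
// into three visible, narrated chunks of work.
async function generateWithProgress(
  prompt: string,
  onProgress: (stage: Stage) => void
): Promise<string> {
  onProgress("Analyzing structure");
  const plan = await planDiagram(prompt);
  onProgress("Generating nodes");
  const nodes = await generateNodes(plan);
  onProgress("Optimizing layout");
  return layoutDiagram(nodes);
}
```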
The activation gap is usually a clarity gap. Our jump from 38% to 60% activation wasn't about making the product better; it was about making it clearer. Users didn't know what to type, so they didn't try. The moment we showed examples, they understood the possibility space.