PM Case Study: Oneshot.ai | From Data to Conversations: What 50 User Interviews Actually Revealed | Part 2: User Interviews
- Gokul Rangarajan
- Apr 1
- 8 min read
How We Got Users to Talk (And Why That Mattered)
In the previous analysis, we discussed how the data already showed something was off. Users were onboarding, but few were generating messages; they were even exploring the product without acting. Onboarding and activation were weak, conversion was low, and usage wasn't translating into behavior.
This blog breaks down Gokul's 9–12 months of working closely with the founder of Oneshot.ai, which helped them raise $3M+ from 42CAP, eventually reach $2M ARR, and run the user interviews described here.
This three-part case study covers:
How a promising AI product with users and revenue was not delivering real workflow value
How user research and product analysis revealed the wrong problem being solved
How we rethought the product from a personalization tool → autonomous outreach platform
When we started working with Oneshot.ai in November 2021, alongside Venki Pola, the product already looked like what most people would call "working." In the last blog, we discussed the False Aha Moment, cohort segmentation, the free → paid conversion breakdown, and why things didn't add up: there was a gap between Time-to-Value and Time-to-Trust. In this blog, we go deeper into the insights from the user interviews we conducted.

At this point, we knew analytics alone wouldn't give us the answer. The drop-offs were clear, but the reasons behind them weren't. So we structured the next phase deliberately across three parallel tracks:
1. Onboarding experiments
2. User interviews
3. Growth experiments
Onboarding experiments
We introduced an in-product trigger: a simple in-app popup inviting users to participate in a short discussion about their outbound workflow. To make it worth their while, we offered a small Amazon voucher as compensation for their time. This helped us capture users who were already engaging with the product.
The goal was to understand behavior from multiple angles: what users do when guided, what they say when asked, and how they respond when pushed to act.
In parallel, we ran direct outreach. We reached out to SDRs and sales teams through existing networks and cold connections, offering a similar incentive, either a voucher or a small gift, for a one-hour conversation. This wasn't framed as research alone, but as a working session to understand how outbound actually happens.
We also ran lighter workflow-based sessions on UserTesting, where users walked us through their actual process in a recorded video, showing how they used our product and giving direct feedback.
Through this combination, we were able to reach around 50 users. The majority were SDRs, supported by SDR managers and sales leaders, giving us visibility across execution, oversight, and strategy layers.
How We Approached the Conversations
We didn’t treat these as traditional interviews. Instead of running through a fixed questionnaire, we focused on reconstructing real workflows. Every conversation was anchored around understanding what the user actually does, not what they think they do. We asked them to walk us through their outbound process, step by step. Not in theory, but using their current tools, recent prospects, and actual messages.
We avoided hypothetical questions.
We didn’t ask, “Do you personalize?” We asked, “Show us your last few messages.”
We didn’t ask, “Would you use this?” We asked, “What did you do the last time you had to send outreach?”
Every conversation revolved around a few core themes:
How outbound actually happens in practice
Where time is spent
What decisions are made and when
What users claim vs what they actually do
But the real depth didn’t come from asking more questions. It came from how we validated the answers.
We first gathered 200+ initial responses through a short survey, which helped us identify relevant SDR profiles. From there, we manually shortlisted around 50 SDRs by reviewing their LinkedIn profiles to ensure they were actively involved in outbound workflows. We then reached out to them directly, asking them to complete the survey and inviting them to a short meeting. This approach ensured we spoke with high-intent, qualified users rather than random participants.

Across these roughly 50 interviews, the users were primarily SDRs, supported by managers and sales leaders, spanning industries like fintech, SaaS, marketplaces, travel, and health tech. Despite this diversity, their workflows looked remarkably similar: most operated inside outbound tools like Apollo, Outreach, and Salesloft, focusing heavily on list building, sequencing, and execution. The majority of their time was spent on prospecting, managing volume, and cold-calling.


While personalization existed across all segments, it varied widely by individual style, industry, and ICP, and was rarely the central focus of their workflow.
Claimed vs Actual Gap
Almost all SDRs claim they personalize, but actual behavior shows only 0–4 out of 10 messages are personalized. 100% claim to send personalized emails and invites; they are sincere, they love the idea, but fewer than 20% do it consistently.
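The claimed-vs-actual gap above can be tallied with a simple script over coded interview notes. The records and threshold below are hypothetical, purely to illustrate the calculation, not the actual study data.

```python
# Hypothetical coded interview records: each SDR's claim vs. observed behavior
# in their last 10 messages. Numbers are illustrative only.
interviews = [
    {"sdr": "A", "claims_personalization": True, "personalized_msgs": 3, "total_msgs": 10},
    {"sdr": "B", "claims_personalization": True, "personalized_msgs": 1, "total_msgs": 10},
    {"sdr": "C", "claims_personalization": True, "personalized_msgs": 8, "total_msgs": 10},
    {"sdr": "D", "claims_personalization": True, "personalized_msgs": 0, "total_msgs": 10},
]

# Share of SDRs who *say* they personalize.
claim_rate = sum(i["claims_personalization"] for i in interviews) / len(interviews)

# "Consistent" here is an assumed cutoff: at least 7 of 10 messages personalized.
consistent = [i for i in interviews if i["personalized_msgs"] / i["total_msgs"] >= 0.7]
consistency_rate = len(consistent) / len(interviews)

print(f"claim personalization: {claim_rate:.0%}")           # 100%
print(f"personalize consistently: {consistency_rate:.0%}")  # 25%
```

Counting behavior against a fixed cutoff, rather than trusting the claim, is what makes the gap visible in the data.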

Personalization is Fragmented
“Good personalization” has no standard; it varies from funding mention → role → name → activity.

Across the interviews, personalization did not emerge as a consistent behavior but as a highly fragmented one. Nearly all SDRs described “good personalization” differently: some focused on funding events, others on job roles, while a few looked at LinkedIn activity or simply added a name-level tweak. In the dataset, this spread was almost evenly distributed, with no single approach dominating more than ~25–30% of users. This lack of standardization meant personalization was not a repeatable system but an individual preference, making it difficult to scale or productize.
Time Allocation Mismatch Insight
Most time is spent on prospecting, sequencing, and execution, not personalization.

At the same time, how SDRs spent their time revealed a clear mismatch between perceived importance and actual effort. Roughly 65–70% of time went into prospecting, list building, and managing sequences, while less than 10% was spent on personalization. Even among those who claimed personalization was critical, the time investment remained minimal. This indicated that personalization was not a core driver of productivity but a secondary layer applied when time allowed.
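One way to surface this mismatch is to aggregate self-reported hours per activity and convert them to shares. The sketch below uses made-up hour logs, for illustration only.

```python
from collections import Counter

# Hypothetical daily hour logs per activity from two interviewed SDRs
# (illustrative numbers, not the actual study data).
hours = Counter()
for log in [
    {"prospecting": 3.0, "sequencing": 2.0, "execution": 1.5, "personalization": 0.5},
    {"prospecting": 2.5, "sequencing": 2.5, "execution": 2.0, "personalization": 0.4},
]:
    hours.update(log)  # Counter sums values per key across logs

total = sum(hours.values())
shares = {activity: h / total for activity, h in hours.items()}

# Print activities ranked by share of total time.
for activity, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{activity:15s} {share:.0%}")
```

Even with invented numbers, the shape matches the finding: prospecting and sequencing dominate, and personalization lands in the single digits.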
Workflow is Tool-Centric
Most workflows run through Apollo / Outreach / Salesloft, not LinkedIn-first. Oneshot was a standalone Chrome extension, which is why most users didn't come back to it.
Most SDR workflows were not LinkedIn-first but driven by outbound platforms like Apollo, Outreach, and Salesloft, where tasks such as sending emails, LinkedIn invites, and follow-ups were already structured and executed at scale. In many cases, these tools even supported lightweight personalization within the flow itself, allowing SDRs to complete outreach without leaving the system.

In contrast, Oneshot operated as a standalone Chrome extension outside this core workflow, requiring users to step out of their execution environment. This additional step created friction, and since it didn't integrate directly into where actions were happening, most users didn't return to use it consistently. We didn't know this until they showed us their workflows. LinkedIn was used occasionally for context or manual touches, but it was not the primary execution environment. This meant the product was positioned outside the main workflow, reducing its chances of becoming a habitual tool.
Value ≠ Action
Insight: Even when users find personalization useful, they don’t act on it consistently.
Another key pattern was the disconnect between value perception and actual behavior. While almost all SDRs acknowledged that personalization improves message quality or open rates, this belief did not translate into consistent action. Only about 15–20% of users showed repeated personalization behavior across messages, with most relying on templates for speed and efficiency. The value was recognized, but it was not strong enough to influence daily workflow decisions.

While we anchored interviews around workflow and personalization, we kept 10–15 open-ended questions focused on the person, not just the process. Questions like “What part of your day feels the most repetitive?”, “When does outbound feel hardest?”, and “What makes you hesitate before sending?” helped surface the real pressure behind the role. These weren’t about tools or features—they were about how SDRs experience their work day-to-day.
Some of their Quotes
“By the time I reach the 40th or 50th message in a day, I’m not really thinking about each person anymore. I’m just trying to keep the pace going. You start the day wanting to personalize properly, but by the middle of it, it becomes about finishing the list.”
“Everyone talks about personalization like it’s the key, but when you actually sit and do the job, you realize you just don’t have that kind of time. You have targets, meetings, follow-ups, and reporting. Something has to give, and usually it’s the depth of personalization.”
“At the start, I used to spend more time researching each prospect. Over time, I realized it wasn’t sustainable. Now I just do quick scans and move on, otherwise I wouldn’t get through my day.”
“You open profiles, you read a bit, you try to connect something, but after doing that repeatedly, it starts feeling mechanical. It doesn’t feel like real personalization anymore.”

The role is highly repetitive and driven by volume, where sending hundreds of messages becomes routine and mentally draining. SDRs spoke about balancing speed with quality, but in reality, targets push them toward efficiency over thoughtfulness. Rejection rates are high, responses are low, and most messages go unanswered, which creates a constant sense of uncertainty and fatigue. Personalization, in this context, becomes an added effort rather than a relief.
SDRs operate under constant pressure to maintain volume. As workload increases, personalization drops sharply because it slows them down. Even users who believe in personalization abandon it under time constraints. The real bottleneck is not capability, but time allocation under pressure. SDRs described their work as repetitive and system-driven. Most of their day is spent managing sequences, lists, and tasks, not thinking deeply about each message. This means any product that requires extra cognitive effort or breaks the flow will fail. The opportunity is to automate decisions inside the workflow, not add steps outside it.
Personalization was not just weak as a feature; it was weak as a foundation. The value was too small, too fragmented, and too dependent on individual behavior to build a large, scalable product around. Even if we improved it, reduced time-to-value, or optimized onboarding further, it would still remain a marginal gain in a workflow dominated by speed, volume, and execution. The ceiling was visible. At best, this could become a small, sustainable micro-SaaS, but not the kind of product that could scale into a $100M outcome, which was the vision we were building toward.
We had already explored improving the existing product: making onboarding faster, reducing friction, increasing activation. But those were optimizations on top of a weak core. The interviews made it clear that the problem wasn't how we built personalization, but that we were solving the wrong layer entirely. The real bottleneck wasn't message quality. It was decision-making, workflow integration, and outcome certainty.
This is where the shift happened.
We moved from thinking about features to rethinking the system. Instead of asking how to improve personalization, we started asking what actually drives action inside outbound workflows. This led us to design a new set of growth experiments—focused not on messaging, but on signals, targeting, timing, and automation. Multiple A/B tests, workflow integrations, and dynamic experiments were run over the next few months to validate this new direction.
The transition wasn’t immediate, but it was decisive.
From a personalization tool, the product began evolving into something closer to an autonomous outreach system: one that doesn't just generate messages, but helps decide who to reach out to, when, and why. That shift in thinking is what eventually unlocked growth, leading to meaningful traction, scaling to multi-million ARR, and setting the foundation for the next stage of the company.
We’ll break down how these growth experiments were designed, what worked, what failed, and how this transition led to real outcomes in the next part.