I'm Jesse — a senior UX researcher with 11+ years at LinkedIn, Microsoft, ServiceNow, and Netflix. I specialize in emerging technology and go beyond the deck: I create interactive prototypes that bring findings to life so your team can react, test, and iterate.
I'm drawn to the research nobody has a playbook for. AI features where users are still forming mental models. New product categories where the competitive landscape is blank. Emerging technology where the experience layer is the difference between adoption and abandonment.
My foundational research for LinkedIn Talent Insights produced the product principles that guided what became the fastest product to hit $50M in revenue in LinkedIn's history. At Microsoft, I shaped foundational insights on Copilot in PowerPoint that directly influenced shipped features — in just 3 weeks.
My background is unusual: I trained as an orchestral conductor. Conducting taught me to synthesize complexity into a single coherent experience — to hear every instrument and shape them into something the audience can feel. That's what I do with research: translate messy, ambiguous data into clear stories that make teams act.
An insight that doesn't change behavior is just trivia. I treat every readout like a performance — distilled, dramatic, targeted, and timely. Your stakeholders don't just hear findings. They feel the urgency to act.
Distilled: Cut through noise to what matters.
Dramatic: Stories that make people care.
Targeted: Tailored to each audience.
Timely: Delivered when decisions are made.
I specialize in emerging technology research — where the problem space is ambiguous, mental models are still forming, and the stakes of getting it wrong are high.
I research how people actually experience new technology. I've shaped AI features at Microsoft and ServiceNow.
When you need to understand who your users are, what they need, and where the opportunity lives.
After research, I create interactive prototypes informed by findings — a tangible artifact to react to and test.
Deep 1-on-1 conversations to understand needs, motivations, and mental models.
Moderated sessions where users interact with your product in real time.
Evaluate early-stage ideas or prototypes before you commit to building.
Map the full experience across touchpoints. Reveals pain points and moments of truth.
Observe users in their real environment. Gold standard for workflows.
Longitudinal self-reporting. Captures real-world evolving behavior over weeks.
Moderated group discussions exploring attitudes and social dynamics.
Information architecture research. How users categorize and find content.
Expert review against usability principles. Fast, no recruitment.
Full-cycle UX research for fast-paced early and mid-stage startups.
Research communication coaching, training, and documentation for a 140-person UX research team.
Full-cycle UX research for the Copilot AI experience in PowerPoint.
Digital strategy and client relationships at a boutique web agency.
Full-cycle UX research for safety and security teams.
Full-cycle UX research for Talent Solutions, Careers, and Growth business units.
UX workshops, wireframing, and research at a 40-person design agency.
Front-end engineering, product design, and user research for a 4-person HealthTech startup.
My work samples contain confidential research from real engagements. Enter the password to view, or reach out and I’ll share it.


A side project exploring how technology can improve research communication. This is a fully interactive prototype — the same kind of artifact I create for clients after a study.
I'll get back to you within 24 hours.
LinkedIn had a wealth of workforce data — the kind talent leaders at Fortune 500 companies would pay dearly for. The company had been producing custom insight reports through a small analyst team, helping clients answer questions like "Where are the AI researchers located across the country?" or "How competitive is the engineering talent market in Austin?"
But clients kept asking for more. They wanted a self-service platform — and it made business sense, since a small analyst team couldn't scale. That became LinkedIn Talent Insights. The problem: the team didn't know who the day-to-day users would be, what they needed, or what the product should look like.
I was the sole researcher, but this was never a solo effort. PMs, designers, and product leaders were involved at every step — observing interviews, participating in synthesis workshops, and co-creating artifacts like the journey map and proto-personas. I ran debriefs after every research round so the final readout was never a surprise. By the time I presented, the team had already been building the insights alongside me for months.
Over 4–6 months, I ran 3–4 rounds of research. Each round consisted of 8–10 in-depth interviews (90 minutes each) with talent executives, analysts, data scientists, recruiters, and sourcers at large enterprises. I combined foundational and evaluative approaches — some sessions focused on understanding their workflows and pain points, others evaluated early design concepts and prototypes.
Key outputs included proto-personas mapping the different roles involved in talent strategy, a journey map tracing the full talent forecasting workflow, and a co-creative workshop where heads of talent from client companies validated our personas and maps against their own organizations.
Partway through, the team got stuck in a black hole of feature debates: Should we build dashboards? A presentation builder? Notification systems? Each PM had a different priority, and the team was going in circles.
Instead of scoping individual studies for each question, I decided to go deeper. I spent several weeks re-reading every set of notes and re-watching interview videos from all our prior studies, asking a single question:
"What are participants organically asking us — unprompted, without our prodding?"
Two powerful themes emerged. First, participants across every role and company were intensely focused on data — not collaboration features, not dashboards. They wanted to know: Where does your data come from? Can I trust it? How was this metric calculated?
Second, this anxiety wasn't abstract. It was driven by their need to share data with highly analytical hiring managers and executives. If a recruiter couldn't answer basic questions about the data's provenance, it would undermine their credibility in front of the people they needed to influence.
Core problem: Talent professionals are worried that their stakeholders will misinterpret or undermine their data, creating misalignment or hurting their credibility.
I translated this core problem into a "true north" for the team: How might we empower talent professionals with robust, relevant data that earns them trust and credibility with their hiring managers?
Then I broke that into four actionable product principles:
Transparency: Show users exactly how every data point is calculated. No black boxes.
Relevance: Help users narrow down to the specific data that answers their question.
Accuracy: If the data isn't trustworthy, nothing else matters. This is table stakes.
Control: Let users curate, filter, and export data on their own terms.
LinkedIn Talent Insights became the fastest product to hit $50M in revenue in LinkedIn's history.
The team built detailed explanations of data formulas directly into the product interface (Transparency).
3–4 additional data scientists were hired specifically to clean and validate data sets (Accuracy).
More robust search filters and data export capabilities were added (Relevance + Control).
The T.R.A.C. framework became the team's decision-making lens for every subsequent feature debate.
Microsoft was building the Copilot AI experience for PowerPoint — imagining how AI could help people create presentations. This was early days for generative AI in productivity tools, and the team was moving fast. PMs needed to make critical feature decisions within 3–4 weeks, and design mockups were already in progress. I was brought in as a contract researcher and needed to generate impact quickly.
I embedded directly with the PM and design team. PMs observed every single interview session, and we ran real-time debriefs immediately after. This meant insights were landing with decision-makers in hours, not weeks — close enough to active development to actually shape it.
Within the first three weeks, I completed a foundational study: 6–8 one-hour in-depth interviews with current PowerPoint users who regularly create presentations for work. The participant mix was deliberate — I included people who were already using AI tools for presentations alongside people who had never tried, across a range of demographics and company sizes.
We'd been thinking of AI like a bowling ball — send it down the lane and get a strike. But people needed checkpoints, not one-shot prompts. They needed ways to guide the AI at every step.
Context is everything. We had been imagining prompts like "Create a presentation about the evolution of AI." But that's not what people create at work. They make team updates, progress reports, and quarterly reviews. Users couldn't even imagine how Copilot could help with those — and if they couldn't imagine it, they wouldn't try it at all.
Documents as starting points. A key discovery: many participants started their presentations in Word — gathering updates from teammates before building slides. This insight led directly to the "reference file" feature, where users can start a presentation from an existing document.
Outlines first, not slides. Nobody dove straight into slide creation. They outlined first — and AI-generated outlines rarely got it right on the first try. Users always needed to tweak. This led to the editable outline step with the ability to add, reorder, and remove sections before generating slides.
Controls beyond the prompt. Not every interaction with AI should be a text prompt. Length controls, reference file selection, outline editing — these traditional UI elements gave users structured, familiar ways to guide the AI output without crafting perfect prompts.
If they can't imagine it, they won't try it. This was a meta-insight I flagged for broader AI work: the way you describe an AI capability shapes whether people believe in it enough to try. Language is a design decision.
Synthesizing the key findings, I developed a framework for how AI tools should be designed to earn user trust and adoption:
Users need checkpoints, not one-shot prompts. Let them steer at every step — outline first, then refine, then generate. (A code sketch of this checkpoint model follows the shipped outcomes below.)
Not every interaction should require crafting a prompt. Length controls, file selectors, editable outlines — structured UI gives users familiar handles on AI output.
How an AI capability is named and described determines whether users believe in it enough to try. Framing is a design decision, not a copy decision.
Reference file feature shipped — users can now start presentations from Word docs.
Length controls added to Copilot's creation flow.
Editable outline step built into the presentation creation pipeline.
Inline prompt editing and side-panel Copilot chat shipped for ongoing refinement.
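To make the "checkpoints, not one-shot prompts" principle concrete, here is a minimal sketch of how an editable outline step might be modeled. The types and function names are hypothetical illustrations, not Microsoft's implementation.

```typescript
// Hypothetical model of the "editable outline" checkpoint: the user
// reviews and reshapes an AI-drafted outline before any slides exist.
interface OutlineSection {
  id: string;
  title: string;
}

type Outline = OutlineSection[];

// Each edit returns a new outline, so every user action is an
// explicit checkpoint the UI can render, undo, or confirm.
const addSection = (outline: Outline, title: string): Outline => [
  ...outline,
  { id: crypto.randomUUID(), title },
];

const removeSection = (outline: Outline, id: string): Outline =>
  outline.filter((section) => section.id !== id);

const reorderSection = (outline: Outline, from: number, to: number): Outline => {
  const next = [...outline];
  const [moved] = next.splice(from, 1);
  next.splice(to, 0, moved);
  return next;
};

// Slide generation is gated behind an explicit confirmation rather
// than firing the moment the AI produces a draft.
async function onConfirmOutline(
  outline: Outline,
  generateSlides: (outline: Outline) => Promise<void>
): Promise<void> {
  await generateSlides(outline);
}
```

Returning a new outline from every operation keeps each user edit explicit and reversible, which is the essence of a checkpoint: the AI drafts, the person steers, and generation runs only on confirmation.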
Treehouse was an early-stage startup with a bold vision: replace the car dealership entirely with a transparent, online concierge model for electric vehicles. The founding team — CEO, Head of Design, and Marketing Lead — had hypotheses cobbled together from personal experience and conversations with friends, but they had no real data from actual EV buyers.
The founding team was involved at every stage — reviewing the research brief, watching every interview, and discussing findings as they emerged. I ran debriefs after each session and the team participated in analysis. I don't disappear into a tower and come back with insights. The best insights are the ones stakeholders arrive at themselves by participating in the process.
The original plan called for 15 IDIs, an experience sampling study (n=1,000), and a follow-up quantitative survey (n=400). I adapted as we went, guided by one question: When does extra effort and cost stop paying off?
Seven in-depth interviews proved sufficient for thematic saturation. An open-ended question in the screener replaced the expensive experience sampling study. And the survey was scaled to 250 respondents — still large enough to validate patterns while respecting the startup's budget.
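To make that stopping rule tangible, here is a minimal sketch of a saturation check, assuming each interview has been coded into a list of theme labels. This is a simplified heuristic for illustration, not the formal analysis I ran.

```typescript
// Simplified saturation heuristic: after each coded interview, count
// how many themes are new relative to everything coded so far.
function newThemesPerInterview(codedInterviews: string[][]): number[] {
  const seen = new Set<string>();
  return codedInterviews.map((themes) => {
    const fresh = themes.filter((theme) => !seen.has(theme));
    fresh.forEach((theme) => seen.add(theme));
    return fresh.length;
  });
}

// If the last few interviews surfaced nothing new, additional
// sessions are unlikely to pay off.
function hasSaturated(codedInterviews: string[][], window = 2): boolean {
  const counts = newThemesPerInterview(codedInterviews);
  return counts.length >= window && counts.slice(-window).every((n) => n === 0);
}

// Example: new-theme counts of [6, 4, 2, 1, 1, 0, 0] across seven
// interviews would flag saturation after interview seven.
```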
Trust is the new currency. Treehouse's biggest challenge wasn't pricing, convenience, or vehicle selection — it was earning trust from customers who had been burned by dealerships.
Nearly every participant described dealership experiences characterized by dishonesty in marketing, pressure to buy unwanted vehicles, misleading pricing, deceptive negotiation tactics, and failure to find the best price. Customers coming to Treehouse would arrive damaged — primed to be skeptical that anyone selling cars had their best interests in mind.
Two distinct user types emerged from the research, each needing a fundamentally different experience:
The Expert: High EV knowledge, a skilled researcher, confident. Needs detailed technical data and access to specialists who can answer complex questions. Turned off by superficial or untrustworthy information.
The Amateur: Overwhelmed by the amount of information, uncertain where to start. Needs noise reduction, guided walkthroughs, and a pre-purchase "wizard." Turned off by too much information upfront.
The design implication: Treehouse needed to serve both simultaneously. Layers of content so Experts can drill down, but a guided entry point so Amateurs aren't overwhelmed.
Shifted Treehouse's entire brand focus from "efficient car buying" to "trusted car buying."
Research directly informed the homepage redesign, marketing strategy, pricing transparency, and website copy.
Built a research-backed brand narrative that strengthened investor conversations and helped secure additional funding.
Months later, when Treehouse hit regulatory friction with dealerships, the team returned to our survey data and discovered that home charging was the top-rated value proposition — guiding a critical business pivot to home electrification services.
If I could do it over, I would have pushed harder to explore alternative business directions earlier. Given how early-stage they were, I took their marketplace direction as more settled than it was. The data that eventually guided their pivot was already in our survey — I just hadn't asked the question broadly enough the first time.
Dwarven Forge is the premier maker of hand-painted 3D tabletop gaming terrain, with a devoted community of collectors. As the product catalog grew, so did confusion. New customers struggled to find products, returning customers found navigation less intuitive than the physical sets they loved, and the team was concerned about low mobile conversion rates.
The product and photography teams watched every session live. We ran debriefs together after each session and co-prioritized what to fix. By the time I wrote up findings, the team already had ownership — they'd seen the problems firsthand and were ready to act.
I ran six remote moderated sessions — three on desktop, three on mobile. Mobile was particularly important because the team knew little about how customers used their site on phones. Each 45-minute session followed a structured flow: warm-up exploring brand familiarity and past purchasing, a task-based product search (e.g., "Find Wilderness terrain"), free exploration, visual evaluation of the site, and a wrap-up rating.
Visual-first discovery. Participants rarely used text search. They scrolled and clicked through images. Google ads without product images were skipped entirely, even when top-ranked. This made default product sorting critically important — the most popular items needed to appear first.
Filter/sort confusion. Filters reloaded the page or applied prematurely, interrupting the browsing flow, and users confused "Sort" and "Filter" because the two controls were styled identically. The fix: clear visual separation, plus deferred filter application so nothing changes until the user confirms their choices (sketched in code after these findings).
Mobile isn't for buying — it's for browsing. Customers browse on their phones, then switch to desktop when they're ready to purchase. Dwarven Forge's "low" mobile conversion rate wasn't a problem to solve — it was a misread of cross-device purchasing behavior.
Inconsistent category ("biome") pages. Some product category pages featured vivid photography, curated highlights, and auto-playing background videos — these were highly effective and drew users in. But the pattern wasn't consistent. Other category pages were bare-bones product grids that fell flat by comparison.
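Here is a minimal sketch of the deferred filter application recommended above, assuming a typical storefront filter panel. The FilterState fields and FilterPanel class are hypothetical, not Dwarven Forge's actual code.

```typescript
// Deferred filter application: selections accumulate in a draft and
// only hit the product list when the shopper explicitly applies them.
interface FilterState {
  biome?: string;
  priceMax?: number;
}

class FilterPanel {
  private draft: FilterState = {};

  constructor(private onApply: (filters: FilterState) => void) {}

  // Choosing a filter only updates the draft: no reload, no
  // premature narrowing of the product grid mid-decision.
  select(change: Partial<FilterState>): void {
    this.draft = { ...this.draft, ...change };
  }

  // The product list refreshes once, when the shopper confirms.
  apply(): void {
    this.onApply({ ...this.draft });
  }

  reset(): void {
    this.draft = {};
  }
}
```

Buffering selections in a draft means the product grid updates exactly once, when the shopper confirms, rather than on every click.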
Biome/category pages were restructured for consistency, giving every product category the same compelling visual treatment.
Photography teams adopted new standards for visual naming and product imagery coherence.
Multiple points of design friction were fixed across both desktop and mobile experiences.
Cross-device tracking was recommended to get an accurate picture of mobile's real contribution to conversion, rather than treating it as an isolated funnel.
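To show what that recommendation could look like in practice, here is a rough sketch of cross-device purchase attribution, assuming events can be stitched together by a shared customer ID. The event shape and function name are hypothetical, not Dwarven Forge's analytics setup.

```typescript
// Hypothetical stitched event log: browse and purchase events share
// a customer ID across devices.
interface TrackedEvent {
  customerId: string;
  device: "mobile" | "desktop";
  type: "browse" | "purchase";
  timestamp: number;
}

// Counts desktop purchases preceded by mobile browsing for the same
// customer, crediting mobile as an assist rather than a dead funnel.
function mobileAssistedPurchases(events: TrackedEvent[]): number {
  const byCustomer = new Map<string, TrackedEvent[]>();
  for (const event of events) {
    const history = byCustomer.get(event.customerId) ?? [];
    history.push(event);
    byCustomer.set(event.customerId, history);
  }

  let assisted = 0;
  for (const history of byCustomer.values()) {
    for (const event of history) {
      if (event.type === "purchase" && event.device === "desktop") {
        const browsedOnMobileFirst = history.some(
          (prior) =>
            prior.type === "browse" &&
            prior.device === "mobile" &&
            prior.timestamp < event.timestamp
        );
        if (browsedOnMobileFirst) assisted += 1;
      }
    }
  }
  return assisted;
}
```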