
Defining the Right Role for an AI Pricing Agent
Most pricing software is built for people whose full-time job is pricing. This research started with a simpler question: what does pricing look like when it’s just one of many things you’re doing that day?
As part of a pre-GTM effort, I studied how small retailers actually interact with pricing decisions—and why “enterprise tools, simplified” often misses the mark.
Product & Strategy
Research
AI Agent
At a Glance
Decision: What role an AI-powered pricing agent should play for small, independent retailers with limited time and resources
Context: Pre-GTM exploration for a lightweight pricing solution targeting mom-and-pop brick-and-mortar stores dependent on wholesalers
My role: Senior UX Researcher leading problem framing, generative research, and cross-functional alignment across GTM, Product, and Design
Outcome: Defined the agent as a focused situational support role rather than an “enterprise-lite pricing system”, shaping how the product should be positioned, scoped, and designed
Overview
This work focused on understanding how small, independent retailers manage pricing-related decisions in practice, and how an AI-powered solution could support them without introducing enterprise-level complexity.
Most pricing tools in the industry are designed for large retailers with dedicated pricing teams, structured roles, and time allocated for analysis.
Our GTM strategy targeted a very different audience: store owners and managers who handle pricing alongside many other responsibilities.
The goal was not to scale down existing tools, but to determine what job was truly worth solving for this audience.
Context & Framing
Research Focus
The research focused on how pricing decisions surface in real life, rather than how pricing systems are traditionally designed. Through JTBD-driven interviews and workflow walkthroughs, the key questions included:
When do pricing-related questions arise during the day?
What level of detail is actually useful in those moments?
What does “enough information” look like when time and attention are limited?
What triggered action, and what led to deferral?
Key Insight: Pricing as Attention Management
One consistent pattern was that pricing decisions were rarely made in a single, focused session.
Instead, store owners often:
noticed issues while walking the store
flagged products that felt “off” based on sales or stock movement
needed quick context to decide whether something required follow-up
At that stage, they were not trying to optimize pricing.
They were deciding whether action was needed at all. Pricing decisions were rarely linear; instead, they were distributed across moments, some suitable for deeper analysis, many not.
This reframed pricing from a data analysis problem into an attention and prioritization problem.
Using Personas as a Coordination Tool
Personas were used selectively to align decisions across GTM, Product, and Design — not as exhaustive representations of users.
Two roles were intentionally defined:
End User (Store Owner / Manager)
Manages multiple responsibilities throughout the day
Has limited time for deep analysis
Needs quick context to assess whether an issue is worth attention
AI Agent Role (AI Support)
Acts as delegated support for pricing-related questions
Provides fast context and recent history, not full optimization
Helps prioritize what needs follow-up rather than replacing decision-making
Defining both roles helped clarify how responsibility should be shared between the human and the system, and prevented the product from drifting toward enterprise-style workflows that did not fit the target audience.
Defined Where an AI Agent Adds Value — and Where It Doesn’t
Rather than designing features, the research focused on opportunity boundaries.
The findings suggested that:
Full data ingestion and continuous interaction were unnecessary for early-stage value
Lightweight, question-based interactions worked best in interrupt-driven contexts
The agent’s primary role was orientation, not optimization
This reframed the agent as a situational companion: one that provides clarity and direction in the moment, while deferring deeper analysis to existing tools when needed.
Impact
This work influenced how the team approached the product in several ways:
Shifted the GTM narrative away from “enterprise pricing, simplified” toward lightweight decision support
Helped constrain AI scope to avoid building capabilities users would not realistically engage with
Provided a clear definition of what the agent should and should not be responsible for
Rather than treating AI as a general-purpose tool, the research positioned it in a focused support role designed around real working conditions.
Why This Case Matters
This case shows how early-stage research can shape product direction by clarifying what not to build, especially when introducing AI into complex, real-world workflows.
By grounding decisions in how people actually work, the research reduced the risk of overbuilding and helped ensure the product served a real, defensible need.
Research
GTM
Product Strategy
0→1 Exploration
Jobs to Be Done
Small Retail
Decision Contexts
January 6th, 2026