April 14, 2026
How I Would Train a New PM for an AI-Native Discovery Team

A ninety-day approach to product management in search, personalization, experimentation, and learning.
The job is changing, but the standard is rising
Every generation of product managers inherits a different set of tools. Some learned in the era of waterfall specifications. Others came up with agile backlogs, experimentation platforms, and self-service analytics. The next generation of discovery PMs will learn in a world where LLMs can help synthesize research, draft specs, generate prototypes, create evaluation cases, and compress the cycle between idea and shipped experience.
That changes the job, but not in the way many people assume.
The point is not to turn PMs into lightweight engineers or prompt specialists who happen to manage a roadmap. The point is to develop product leaders who can operate effectively at the intersection of user intent, machine intelligence, experimentation, and business judgment.
In discovery, that bar is even higher.
Search, recommendations, personalization, navigation, editorial surfaces, and conversation increasingly blur together from the user's perspective. A viewer does not care which internal team produced the result. They care whether the product understood what they wanted and helped them decide quickly. That means a strong discovery PM has to understand the full loop from intent to satisfaction.
If I were onboarding a new PM to an AI-native discovery team today, I would train them in that full loop from day one.
Month one: learn the system
The first thing I would teach is that discovery is not a feature. It is a system.
A user may start on a home page, reformulate through search, respond to a recommendation, open a details page, get nudged by an explanation, and decide within seconds whether the product was useful. Search, browse, ranking, metadata, copy, and experimentation all contribute to that outcome.
A PM who only understands one surface will miss the real product.
So the first month would be about building system awareness.
I would want the PM to use the product constantly, across devices and contexts, not as a casual viewer but as an investigator. Search for known titles. Search with vague intent. Browse anonymously. Browse with history. Test family contexts, dead ends, misspellings, long-tail queries, and frustrating edge cases. The goal would not be to produce a deck. It would be to develop instincts about where discovery succeeds, where it leaks, and where user effort accumulates.
At the same time, I would want them deep in the data. Not just dashboards, but the anatomy of discovery metrics. What counts as search success? How do we define abandonment? How do we distinguish curiosity from dissatisfaction? What does reformulation tell us? Where do downstream starts and completions fit? Which metrics are leading indicators and which are lagging?
Early-career PMs often think data fluency means knowing how to read a chart. In discovery, it means knowing how behavior maps to intent, friction, and satisfaction.
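To make "how behavior maps to intent" concrete, it helps to see how loosely defined session labels become code. The sketch below classifies search sessions under hypothetical definitions (success requires a click plus a downstream playback start; a single query with no click counts as abandonment). The field names and thresholds are illustrative, not any team's real ones:

```python
from dataclasses import dataclass

@dataclass
class SearchSession:
    queries: list           # queries issued, in order
    clicked: bool           # did the user open a result?
    started_playback: bool  # downstream start (a lagging signal)

def classify(session: SearchSession) -> str:
    """Illustrative session labels; real definitions vary by team."""
    if session.clicked and session.started_playback:
        return "satisfied"                   # leading and lagging signals agree
    if session.clicked:
        return "clicked-no-start"            # ambiguous: curiosity vs. dissatisfaction
    if len(session.queries) > 1:
        return "reformulated-then-abandoned" # user tried to rephrase, then gave up
    return "abandoned"                       # one query, no engagement

# Example: a misspelling corrected by the user, ending in playback
s = SearchSession(queries=["resttaurant show", "restaurant show"],
                  clicked=True, started_playback=True)
print(classify(s))  # satisfied
```

Even a toy version like this forces the question the paragraph raises: which behaviors are signals of intent, which are friction, and which are genuinely ambiguous.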
I would also want them to sit with the people who understand the machinery underneath the product. That means engineers, data scientists, ML practitioners, analysts, designers, researchers, and whoever owns metadata quality or editorial context in the organization. A discovery PM does not need to do every technical job personally. But they do need to understand enough to ask good questions. How are candidates retrieved? What features influence ranking? Where does metadata come from? What are the latency constraints? What breaks when the catalog changes? How do we evaluate new models or prompts safely?
That is the foundation.
Month two: build and evaluate
The second month would shift from observation to building.
I would want the PM to use LLMs and modern AI tools in ways that make the work more concrete. Not because AI tooling is trendy, but because it can accelerate understanding. Take a vague discovery problem and turn it into a prototype. Draft a conversational flow for a new search experience. Create a small internal tool that clusters bad queries, summarizes user complaints, or helps annotate search failures. Write an evaluation rubric for an LLM-powered recommendation surface. Use an assistant to generate candidate test cases, then manually refine them until they reflect real product nuance.
The goal here is not to create a polished production system in a week. It is to teach the PM that modern tools can help them reason faster, visualize ideas sooner, and engage more deeply with the product before formal implementation begins.
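As one example of the kind of lightweight internal artifact described above, a PM could cluster failed queries by lexical similarity before reaching for an LLM at all. This is a hypothetical sketch using only the standard library; the similarity threshold and sample queries are made up, and a real tool might use embeddings or an assistant instead:

```python
from difflib import SequenceMatcher

def cluster_queries(queries, threshold=0.7):
    """Greedy single-pass clustering of similar failed queries.
    Purely illustrative; the 0.7 threshold is an arbitrary starting point."""
    clusters = []  # each cluster is a list of queries
    for q in queries:
        for cluster in clusters:
            # Compare against the cluster's first member only (cheap heuristic)
            if SequenceMatcher(None, q.lower(), cluster[0].lower()).ratio() >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])
    return clusters

failed = ["batmann", "bat man", "the batmann",
          "cooking show kids", "kids cooking shows"]
for c in cluster_queries(failed):
    print(c)
```

Ten minutes with something like this turns a pile of raw failure logs into a handful of named problems, which is exactly the "reason faster, engage more deeply" payoff the tooling is for.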
This is also where I would teach one of the most important lessons in AI-native product work: the tool does not own the judgment.
LLMs can help generate options, but they do not decide what matters. They do not choose the success metric. They do not determine whether a recommendation experience is trustworthy. They do not understand organizational tradeoffs unless a human does. The PM's role is not reduced by AI. It becomes more exposed. Weak thinking shows up faster. Strong thinking compounds faster.
Month three: own something real
By the third month, I would expect the PM to own something real, but scoped.
Not a huge platform change. Not a vague strategic exploration. A well-bounded discovery improvement with a measurable objective. Maybe it is improving the experience for zero-result queries. Maybe it is refining a conversational search entry point. Maybe it is improving the logic for a specific recommendation module. Maybe it is building a better evaluation set for family co-viewing or mood-based search. The exact surface matters less than the discipline.
Can they frame the problem clearly? Can they write down the expected user value? Can they define success and guardrails? Can they partner with engineering, design, and data science without hiding behind abstraction? Can they propose an experiment? Can they interpret the result honestly, including when the idea did not work?
That is when I would know the onboarding is working.
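The "interpret the result honestly" step can itself be made concrete. Below is a minimal two-proportion z-test a PM might run on a zero-result-query fix; every number is hypothetical, and real experimentation platforms handle this (plus guardrails and multiple-testing corrections) for you:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in success rates (illustrative)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 5,200/40,000 successful sessions in control vs. 5,600/40,000 in treatment
z, p = two_proportion_z(5200, 40000, 5600, 40000)
print(f"z={z:.2f}, p={p:.5f}")
```

The honest part is not the arithmetic. It is reporting the result the same way whether p comes back at 0.0001 or 0.4, and resisting the urge to slice until something looks significant.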
The skills I would prioritize
There are a few skills I would prioritize above all others.
The first is problem framing. Discovery teams are flooded with ideas because user intent is endless and AI lowers the cost of implementation. The PM who stands out is not the one with the longest backlog. It is the one who can separate interesting from important.

The second is decomposition. Good discovery products often look simple on the surface because the complexity was broken down well underneath. A PM has to take ambiguous goals like "help people find something better" and turn them into tractable questions about retrieval, ranking, explanation, interface, and measurement.

The third is model fluency. I do not mean the PM needs to become a research scientist. I mean they should understand what models are good at, where they are brittle, how prompts and evaluation interact, and why grounding, latency, and cost all matter in real products. In an AI-native discovery team, this is no longer optional literacy.

The fourth is taste and restraint. When building gets easier, overbuilding gets easier too. Discovery products can become noisy very quickly. Too many rows, too many explanations, too much conversation, too much personalization theater. Good PMs know when not to add one more thing.

The fifth is written clarity. In complex product environments, the ability to articulate a decision cleanly remains one of the highest-leverage skills in the company. Clear writing creates alignment. It also forces clearer thinking.
Learning velocity is the real advantage
And finally, I would actively train the learning habit.
I do not think AI-native PM development should rely on passive exposure. I would make learning explicit. Each week, I would want the PM to do four things:
- investigate one part of the live experience
- build one small artifact or prototype
- read one technical or product concept more deeply
- reflect in writing on one insight or failure pattern
The point is to create a cadence where growth is continuous, not occasional.
This matters because discovery is one of the fastest-changing areas in product today. Search is evolving. Personalization is evolving. Evaluation is evolving. LLMs are changing interface expectations. The PM who treats their onboarding as a ninety-day event will plateau quickly.
The PM who learns how to keep rewriting their own operating system will compound.
That may be the most important trait I would hire for. Not just intelligence. Not just technical fluency. Not just taste.
Learning velocity.
In a world where AI makes software easier to produce, judgment is what matters.
Judgment is built through repeated cycles of observation, construction, measurement, and reflection.
That is how I would train a new PM for an AI-native discovery team.
Not to manage the backlog from a safe distance.
To get close enough to the system that they can genuinely improve it.
This article reflects my personal perspectives on product management, AI, and content discovery. It does not represent the official position of my employer or any affiliated organization.