
March 11, 2026

LinkedIn

How AI Tools Are Raising the Bar for PMs


What Building Software With My Son Taught Me About the Future of Product Management

Over the past few months, my son Marcelo and I have been coding together. We built a few software projects using frontier AI coding tools: Claude Code, OpenAI Codex, and Cursor. These were not startup ideas. They were learning vehicles. We wanted to understand one question: what happens to product management when the gap between idea and working software collapses?

When software gets dramatically easier to produce, engineering capacity is no longer the bottleneck. Judgment is: knowing what to build, how to sequence it, when to refine, when to stop, and how to distinguish between a feature and actual user value.

Two Projects, Two Approaches

We chose two deliberately different builds. The first was a personal portfolio site for Marcelo, a place to showcase his projects and work. The second was a web-based 3D Connect-K game that grew far more elaborate than either of us planned.

For Marcelo, a computer science and information science student, this was a chance to develop product instincts that are hard to build in a classroom. It meant shaping ambiguous ideas into real products, making repeated decisions about experience and priority, and watching those decisions play out in something people could actually use.

For me, it was a way to answer a question that many product leaders are asking: with AI as part of the workflow, what matters more in the PM role? And what matters less?

The PRD Started to Fade

We began the portfolio site the traditional way. We wrote a proper PRD, registered a domain through Squarespace, set up GitHub for source control, chose Netlify for hosting, and picked OpenAI Codex for implementation.

At first the workflow was familiar: define requirements, review output, iterate. But it became clear very quickly that the PRD was not going to stay at the center. It helped us get started, but it did not remain the best instrument for steering the product. Instead, the work became the interaction itself. We stopped refining the PRD and started refining the prompts.

In an AI-native workflow, product clarity still matters enormously, but it shows up differently. It lives in how you frame context, break down tasks, specify constraints, and give feedback in rapid cycles. The work changes to steering an intelligent implementation workflow.

Building Without a Spec

For the game, we changed the process. No full PRD up front. Instead, we started with context: a web app that works on desktop and mobile, a 3D 4x4x4 cube in space, the ability to rotate and zoom, clean piece placement, and an experience that is easy to use. We used Cursor as the orchestration environment and Claude Opus as the coding partner.

From there, the work looked remarkably similar to strong product management at its best. We set context, prioritized, decomposed, and sequenced one capability at a time. First we focused on the world model: could we build a 4x4x4 cube that looked correct, behaved correctly, and gave players enough spatial clarity to interact with it confidently? Once that worked, we added the core game loop: movement, placement, alternating turns, win and draw logic, replay, and undo.
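To make the "win and draw logic" step concrete, here is a minimal sketch of the kind of check involved. This is illustrative only, not the code from our game: the names (`Board`, `hasWin`, `SIZE`, `K`) and the representation (a 4x4x4 array of 0/1/2 cell values) are assumptions for the example.

```typescript
// Illustrative sketch: win detection for a 4x4x4 Connect-K board.
// board[x][y][z] holds 0 (empty), 1, or 2. All names here are hypothetical.

type Board = number[][][];

const SIZE = 4; // board edge length
const K = 4;    // pieces in a row needed to win

// The 13 unique direction vectors in 3D (the other 13 are their mirrors).
const DIRS: [number, number, number][] = [];
for (let dx = -1; dx <= 1; dx++)
  for (let dy = -1; dy <= 1; dy++)
    for (let dz = -1; dz <= 1; dz++) {
      if (dx === 0 && dy === 0 && dz === 0) continue;
      // Keep one of each +/- pair: first nonzero component must be positive.
      if (dx > 0 || (dx === 0 && (dy > 0 || (dy === 0 && dz > 0)))) {
        DIRS.push([dx, dy, dz]);
      }
    }

function hasWin(board: Board, player: number): boolean {
  for (let x = 0; x < SIZE; x++)
    for (let y = 0; y < SIZE; y++)
      for (let z = 0; z < SIZE; z++) {
        if (board[x][y][z] !== player) continue;
        // Walk each kept direction and count the run starting at this cell.
        for (const [dx, dy, dz] of DIRS) {
          let run = 1;
          let nx = x + dx, ny = y + dy, nz = z + dz;
          while (
            nx >= 0 && nx < SIZE &&
            ny >= 0 && ny < SIZE &&
            nz >= 0 && nz < SIZE &&
            board[nx][ny][nz] === player
          ) {
            run++;
            if (run >= K) return true;
            nx += dx; ny += dy; nz += dz;
          }
        }
      }
  return false;
}
```

Even a small routine like this is a product decision in disguise: scanning all 76 winning lines every turn is trivially fast at this board size, so simplicity wins over cleverness.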

Then the pattern that many PMs will recognize appeared: once the core works, the backlog expands on its own. We added a lobby, then player-versus-computer, then AI difficulty settings. Later we added a layer slider, a layer spread control, and eventually a full 2D view after enough play sessions made it clear that the 3D board by itself might be hard to read.

Each addition was compelling, which is exactly why frontier coding tools are both so powerful and so risky.

The Speed Is a Gift and a Trap

We often used different models for different jobs. We would use Claude or ChatGPT to think through the product problem, clarify the UX, or draft a better prompt. Then we would take that refined prompt into Cursor and execute.

That interaction pattern revealed a broader shift in product development. The classic sequence of research, define, align, hand off, build, review is compressing into something faster: frame context, prompt precisely, inspect, refine, re-prompt, ship.

The PM is still defining the problem, sequencing value, protecting the experience from accidental complexity, and making tradeoffs. But the workflow moves in hours instead of weeks. Good decomposition gets rewarded immediately. Weak framing gets exposed just as fast.

That speed is exciting, even intoxicating, but it is also dangerous. In a traditional environment, implementation cost acts as a natural brake on scope. In AI-native workflows, that brake weakens. Once the game was working, we did what many teams are tempted to do when feature cost drops: we kept going. We expanded board sizes, made win conditions configurable, built AI-versus-AI mode, introduced gravity, and added more customization options.

From a builder's standpoint, it was exhilarating. From a PM standpoint, it was a warning. When the cost of building goes down, the cost of weak prioritization goes up. AI does not eliminate the need for discipline. It increases the premium on it.

The New Strategic Constraint: AI Operating Costs

There was another practical lesson we could not ignore: the cost of context. Almost nobody thinks naturally in tokens, yet frontier coding tools ultimately charge, meter, or constrain usage through token consumption. That makes context management one of the hidden mechanics of AI-native building, and it turns disciplined product thinking into a direct lever for operational efficiency.

A few heuristics helped us. Treat context like a budget. Narrow the task whenever possible. Assume discovery is more expensive than execution. And pay attention when the workflow starts to feel bloated, because that usually means the context window is too large or the task is not scoped tightly enough.
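The "treat context like a budget" heuristic can be made tangible with back-of-the-envelope math. The sketch below is a crude illustration only, assuming the common rule of thumb of roughly four characters per English token; real tokenizers vary by model, and the function names here are invented for the example.

```typescript
// Crude illustration: budget-style thinking about prompt context.
// Assumes the rough "~4 characters per token" rule of thumb for English text.
// Real tokenizers (model-specific) will give different counts.

function roughTokenEstimate(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check whether a set of context pieces (spec excerpts, file snippets,
// prior conversation) fits within a self-imposed token budget.
function fitsBudget(contextPieces: string[], budget: number): boolean {
  const total = contextPieces.reduce(
    (sum, piece) => sum + roughTokenEstimate(piece), 0);
  return total <= budget;
}
```

The point is not the arithmetic. It is the habit: deciding in advance how much context a task deserves, and trimming the prompt when it does not fit, is the same discipline as scoping a feature.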

Disciplined product thinking turns out to be good not just for product quality. It is good for AI operating efficiency, too.

Playing Each Other From 2,000 Miles Away

The final big feature was online play. Marcelo was in Madison, Wisconsin. I was in Cupertino, California. We had been testing through screen shares and GitHub issues, but we wanted to actually play against each other remotely.

So we built it. We added Firebase for real-time state synchronization, room codes, and in-game messaging. The amazing part was not just that we added these features. It was how quickly it happened. We introduced messaging in under an hour.

This speed is the shift product leaders need to understand. Features are no longer gated by whether something is buildable and how long it will take. With AI, PMs need to decide: should it exist? In what order? For which users? And how will we know it is actually better?

This acceleration fundamentally changes the product management role and its required skill set.

What Changes for PMs

The biggest takeaway from this experiment is that we made substantial product progress without working line by line through source code. That does not mean technical fluency stops mattering. It means the nature of the work is changing.

A strong PM in this environment still needs systems thinking, user empathy, prioritization, decomposition, and enough technical understanding to reason about architecture and constraints. But the PM also becomes more operational inside the build loop itself. Prompting becomes a real product skill, because good prompts are structured thinking. The PRD becomes more fluid, with living specification moving into the interaction loop. And judgment matters more, not less, because easier implementation raises the need for taste, restraint, and validation.

A few practices consistently made a difference:

  • Start with context, not just tasks
  • Break work into thin slices
  • Use different tools for different jobs
  • Review aggressively
  • Never confuse implementation velocity with product progress

More features is not automatically more value. Faster output is not the same as better judgment.

Why This Matters for the Next Generation

One reason I wanted to write this is that I believe this style of building will matter enormously for early-career product talent. The established signals are not disappearing. Strong internships, communication skills, analytical rigor, and product instincts all still matter. But AI-native building introduces another meaningful signal: can someone take an ambiguous idea, give it structure, work effectively with frontier tools, refine the experience through iteration, and turn that into something real?

That signal is increasingly valuable. Working on these projects with Marcelo reinforced that for me. What stood out was not just willingness to experiment with the tools. It was the instinct to keep improving the experience, to notice where usability was breaking down, to push for clarity, and to keep iterating until the product felt right. In an AI-native product environment, the builders who stand out will not just be the ones who can generate output. They will be the ones who can direct, evaluate, refine, and decide.

What We Built

By the end of this experiment, we had a web-based 3D game with player-versus-player, player-versus-computer, AI-versus-AI, online play, configurable board sizes, configurable win conditions, gravity settings, multiple piece styles, 2D and 3D views, and in-game messaging. It lives at uiralabs.com, Marcelo's idea incubator.

But the most valuable output was not the site or the game. It was a clearer picture of what product management looks like when the cost of turning intent into software drops by an order of magnitude.

AI is not making product management less important. It is making strong product management more visible.

A Challenge for PMs

If you are a PM, one of the most useful things you can do right now is build something small with frontier tools. Not because every PM needs to become an engineer, but because there is no substitute for experiencing firsthand how fast the loop between idea and implementation is changing.

That has been the real lesson for us. And I suspect it is one many product teams, and many future PMs, are about to learn.


Marcelo and I built these projects as learning exercises to understand how AI-native development is reshaping the craft of building software, and what it means for the next generation of product builders.


This article reflects my personal perspectives on product management and AI. It does not represent the official position of my employer or any affiliated organization.