May 7, 2026
What Is AI? How Does It Impact Us?

What is AI?
- AI is software that can infer from data and context to produce predictions, recommendations, content, decisions, or actions that previously required human judgment. It can make decisions and take actions with varying levels of autonomy and adaptiveness.
- AI is collapsing the distance between idea, prototype, evaluation, and launch. That does not make judgment less important. It makes judgment more important.
- The risk is not that everyone can generate more ideas. The risk is that we confuse speed with strategy, output with quality, and demos with durable product value.
- AI should help us build better customer experiences, make smarter decisions faster, and give our teams more leverage. But the scarce assets are still taste, context, trust, distribution, and judgment.
I think the biggest mistake would be treating AI as either magic or a threat. It is neither. It is a new capability layer. It can make weak thinking faster, or it can make strong teams more powerful.
Our job as leaders is to make sure it does the second.
The smart companies will not look at AI and ask, “Can we do the same with less?” They will ask, “How much more can we do with the same?”
As a product leader, I have never been more excited about our future. AI is going to unlock a new level of productivity, creativity, and effectiveness that we have not seen before.
What is the impact of AI in our jobs?
AI changes the altitude at which product, design, and engineering operate.
Historically, a lot of our time was spent translating ideas from one format to another: strategy into PRDs, PRDs into designs, designs into tickets, tickets into implementation, implementation into analysis. AI compresses that translation layer.
A PM can now prototype. A designer can explore interaction models faster. An engineer can move through boilerplate and debugging faster. A data scientist can turn analysis into product-facing insight faster.
But the real impact is not just speed. The real impact is that we can make more of our thinking executable earlier. We do not have to debate every idea abstractly. We can prototype, test, inspect, and evaluate earlier in the process.
That should make us better, but only if we pair AI with stronger judgment. AI will generate more options. It will not automatically tell us which options are strategically right, brand-appropriate, customer-worthy, or operationally safe.
How should PMs and designers be thinking about AI?
We should think about AI in three layers.
- AI as a productivity tool. It helps with synthesis, prototyping, research summaries, competitive analysis, design variations, copy, test plans, and debugging.
- AI as a product capability. It can power better search, personalization, content understanding, recommendations, conversational interfaces, creative tooling, and operational workflows.
- AI as a change in user expectations. Customers are going to expect products to understand more context, require less friction, feel more adaptive, and help them get to the right outcome faster.
The questions we should be asking are:
- What customer decision, workflow, or pain point can now be rethought because AI exists?
- How do we make these capabilities feel useful, trustworthy, explainable, and human, instead of magical, confusing, or intrusive?
- Where can intelligence reduce friction, improve relevance, or create a better experience?
Where are product, design, and engineering getting closer, and where are they pulling apart?
They are getting closer around prototyping, experimentation, and product intuition. A PM can now build a rough version of an idea. A designer can create interactive flows faster. An engineer can use AI to explore implementation paths or review alternatives. That shared ability to make ideas tangible brings the disciplines closer together.
They may pull apart if we blur the difference between prototype and production. Just because a PM can build a demo does not mean we can skip engineering discipline. Just because design can generate many variations does not mean we have a coherent experience. Just because engineering can move faster does not mean the product problem was worth solving.
So I think the disciplines are getting closer at the exploration layer, but we still need clear ownership at the production layer. Product owns the customer and business problem. Design owns the experience quality and interaction model. Engineering owns the system integrity, scalability, reliability, and maintainability.
AI should reduce unnecessary handoffs, not erase accountability.
Where does AI fit in our customer lifecycle?
AI fits across the full customer lifecycle, but it has to show up differently at each stage.
At acquisition, AI can help us understand audiences, position content, personalize messaging, and connect the right customer to the right promise.
At onboarding, it can reduce cold start friction. Instead of asking users to do a lot of work upfront, we can infer preferences, ask lighter questions, and adapt quickly.
In discovery, AI is central. Search, personalization, recommendations, artwork, rails, summaries, trailers, metadata, and conversational discovery are all places where AI can help customers decide what to watch.
During playback and engagement, AI can help with recaps, next-best-watch decisions, end-card recommendations, short-form discovery, sports moments, live events, and contextual experiences around franchises.
For retention, AI can help us understand churn signals, content affinity, household behavior, and the moments where a customer is losing value.
But the big point is this: AI should not be a separate surface sitting off to the side. It should become part of the product’s decisioning layer. It helps the product become more adaptive, more contextual, and more useful throughout the customer journey.
Now that ideation can be faster, how are you determining what should go out into the world? How are you ensuring quality of thinking?
This is one of the most important leadership questions.
AI makes ideation cheaper. That means the bottleneck moves from idea generation to idea selection.
My bar is not “Can we make a demo?” The bar is: does this solve a real customer problem, does it improve a measurable business outcome, does it fit the product strategy, can we operate it responsibly, and can we evaluate whether it worked?
I also think we need to separate three things: prototype quality, product quality, and strategic quality.
A prototype can prove that something is possible. Product quality asks whether it is reliable, usable, fast, safe, and coherent with the experience. Strategic quality asks whether this is the right thing to spend organizational energy on.
AI helps us generate more paths. It does not remove the need for taste, prioritization, experimentation, and leadership judgment.
So the quality system has to include customer insight, product review, design critique, engineering review, data evaluation, legal and policy review when needed, and A/B testing where appropriate. The faster ideation gets, the more disciplined the release gate has to become.
How do we create AI policies that are clear enough to protect the company, but practical enough to help teams move?
I think the honest answer is that we need both: protection and practicality.
Good policy protects the company, our customers, our partners, our IP, and our employees. That matters a lot in a company like ours, where we have premium content, talent relationships, customer data, contractual constraints, and brand trust to protect.
But policy can become limiting if it is written only as a list of things people cannot do. If the rules are unclear, people either avoid using the tools entirely or they use them in inconsistent ways.
The best policy should create safe lanes, not just red lights. It should tell teams: here is what you can do, here is what requires review, here is what is prohibited, here are the approved tools, here is how to handle data, here is how to handle copyrighted material, here is how to evaluate outputs, and here is where to go when you are unsure.
In other words, policy should enable responsible adoption. If we only focus on risk prevention, we will move too slowly. If we only focus on speed, we will create avoidable risk. The goal is speed with guardrails.
What do you think AI means for our jobs, our functions, and our size?
I do not think the right frame is simply “AI replaces jobs.” I think the more immediate and practical frame is that AI changes the leverage of each function.
The PM role becomes less about writing documents and more about judgment, problem selection, product strategy, experimentation, and driving clarity.
The design role becomes less about producing static artifacts and more about shaping intelligent, adaptive, trustworthy experiences.
The engineering role becomes less about manually producing every line of code and more about architecture, systems thinking, reliability, integration, review, security, and production quality.
Analytics and data science become more important, not less, because we need to know whether AI-driven experiences are actually improving outcomes.
Over time, team size and structure may change. Smaller teams may be able to do more. Some work that required large coordination layers may become lighter. But I would be careful about jumping immediately to headcount conclusions. The first-order effect should be higher leverage. The second-order effect may be different org design.
The companies that win will not just cut costs. They will increase the ambition level of what their teams can build.
How are you forecasting what you and your teams are doing?
I see AI impact across three horizons.
Near term, AI improves productivity. Faster synthesis, faster prototyping, faster analysis, faster code generation, faster test creation, faster documentation.
Medium term, AI changes workflows. Product reviews become more prototype-driven. Research synthesis becomes more continuous. Experiment setup and analysis become faster. Content operations and metadata workflows become more intelligent. PMs and designers get closer to executable product thinking.
Long term, AI changes the product itself. Discovery becomes more conversational, more contextual, more personalized, and more adaptive. The experience can move from static surfaces to intelligent systems that understand customer intent and help people find value faster.
For my teams, I would forecast not only output metrics, but learning velocity. Are we testing more meaningful ideas? Are we reducing time from insight to prototype? Are we improving ranking, search, personalization, and discovery quality? Are we making the customer experience better in measurable ways?
In your future vision, what would you consider successful use of AI from your teams?
Success would mean AI is not a novelty anymore. It is embedded into how we work and how the product creates value.
Internally, success means our teams use AI to move faster, think more clearly, prototype earlier, evaluate more rigorously, and reduce repetitive work. PMs can test ideas earlier. Designers can explore richer interaction models. Engineers can build and validate faster. Analysts can get to insight faster. But quality and accountability remain high.
In the product, success means customers find something they love more quickly and more often. Search works better. Recommendations feel more relevant. The homepage feels more adaptive. The product understands context better. We use content metadata, behavioral signals, creative assets, and editorial judgment in smarter ways. AI helps us turn our catalog into a more personal and valuable experience.
At the company level, success means we build an AI operating model that is fast, safe, measurable, and differentiated. Not just lots of pilots. Not just demos. Real customer impact, real team leverage, and real business outcomes.
This writing reflects my personal perspectives on product management, AI, and content discovery. It does not represent the official position of my employer or any affiliated organization.