March 17, 2026
Lessons from Building with AI: Developing the Connect-K Game

From a 3D Cube in Space to a Published Multiplayer Experience
By Cris Pierry and Marcelo Pierry
Marcelo and I set out to understand what it actually feels like to build real software with AI coding tools. Not a demo. Not a tutorial exercise. A real product, with multiplayer, online play, configurable rules, and a design language that holds together. We chose a 3D Connect-K board game, and we built it entirely through natural-language interaction with tools like Cursor and Claude.
What we discovered is that the tools are remarkably capable, but that capability creates its own category of challenges. Speed amplifies good decisions and bad ones alike. The discipline required to ship a coherent product does not diminish when the cost of writing code drops. If anything, it increases.
What follows are the key lessons we learned about working with AI.
The Game, Briefly
The project started with a simple goal: render a 3D cube in a browser and let a user place game pieces inside it. From there, the feature set grew in deliberate stages. We added turn-based play, win detection across all lines of a 3D grid, a game lobby, and AI opponents with configurable difficulty. Then came online multiplayer with Firebase, which evolved to include in-game messaging after we realized that playing someone remotely without being able to talk to them felt incomplete.
Once features were stable, we refined the UI, established a design language, and published version one. Because the engine was configurable rather than hardcoded, we were able to add classic board games like Connect 4 and tic-tac-toe as alternate configurations of the same system. We then rearchitected the codebase to prepare for a native iOS app.
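To make "configurable rather than hardcoded" concrete, here is a minimal TypeScript sketch of the idea. The names and shapes here (GameConfig, isWin, the string-keyed board) are our illustration, not the project's actual code: each classic game is just a different parameterization of one generic 3D engine, and win detection walks the 13 line directions of a 3D grid.

```typescript
// Hypothetical sketch: a game variant is just a configuration
// of one generic engine, not a separate codebase.
interface GameConfig {
  dims: [number, number, number]; // board size in x, y, z
  k: number;                      // pieces in a row needed to win
  gravity: boolean;               // do pieces fall to the lowest empty cell?
}

const connectK: GameConfig  = { dims: [4, 4, 4], k: 4, gravity: false };
const connect4: GameConfig  = { dims: [7, 6, 1], k: 4, gravity: true };
const ticTacToe: GameConfig = { dims: [3, 3, 1], k: 3, gravity: false };

// The 13 distinct line directions in a 3D grid (26 neighbors / 2):
// keep only the lexicographically positive half of each direction pair.
const DIRS: [number, number, number][] = [];
for (let dx = -1; dx <= 1; dx++)
  for (let dy = -1; dy <= 1; dy++)
    for (let dz = -1; dz <= 1; dz++)
      if (dx > 0 || (dx === 0 && (dy > 0 || (dy === 0 && dz > 0))))
        DIRS.push([dx, dy, dz]);

type Board = Map<string, number>; // "x,y,z" -> player id

// Does the piece just placed at (x, y, z) complete a line of length k?
function isWin(cfg: GameConfig, board: Board, player: number,
               x: number, y: number, z: number): boolean {
  const at = (px: number, py: number, pz: number) =>
    board.get(`${px},${py},${pz}`) === player;
  for (const [dx, dy, dz] of DIRS) {
    let count = 1; // the piece just placed
    for (const s of [1, -1]) { // walk outward in both directions
      let px = x + s * dx, py = y + s * dy, pz = z + s * dz;
      while (at(px, py, pz)) {
        count++;
        px += s * dx; py += s * dy; pz += s * dz;
      }
    }
    if (count >= cfg.k) return true;
  }
  return false;
}
```

With a structure like this, adding Connect 4 or tic-tac-toe costs one configuration object rather than a new engine, which is what made those variants cheap to ship.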
But the product itself is not the point of this article. The point is what the process revealed.
Lessons from the Build
The Connect-K project taught us a set of lessons that we believe will be broadly applicable as AI-assisted development becomes the norm and hand-coding becomes an art form, like calligraphy.
Initial Framing: Set the Context Before You Start Building
The single most important thing we did at the start of the game project was not write a feature list. It was to set context. We told the AI what kind of product this was (a web-based game for desktop and mobile), what the core experience needed to feel like (a 3D cube that was spatially understandable), and what the constraints were (browser-based, no backend initially, responsive). That framing shaped every subsequent interaction. When context is clear, AI tools produce dramatically better output. When context is vague, the tools default to generic solutions that require extensive rework. Investing time in a strong initial framing is the highest-leverage activity in AI-assisted development.
Token Usage: The Hidden Cost You Need to Manage
Tokens are the currency of AI-assisted development, and they are surprisingly difficult to understand or estimate. Most frontier coding tools allocate tokens in cycles. In our workflow, tokens were allocated in five-hour sessions. Once consumed, you wait for the next cycle. This creates a practical constraint that mirrors budget management in traditional engineering: you have a finite resource, and how you spend it determines what you can accomplish.
One of the most useful optimizations we discovered was scheduling. We automated session starts so that a new token allocation would begin at 6:00 AM, meaning that when we sat down to work at 8:00 AM, we had a full budget of tokens available. That small operational adjustment eliminated the frustration of running out of tokens mid-task. It also forced us to think about token consumption as a resource to plan around, not just react to.
Model Selection: Use the Best Models and Let the Tool Decide
We learned early that model selection matters enormously. When building something complex, every cycle spent on a weaker model is a cycle that produces lower-quality output, which then requires additional cycles to fix. Our approach was simple: focus on the best available models only. There is no value in saving tokens on a cheaper model if the result requires three rounds of correction. Better to spend more on a single high-quality generation than to iterate repeatedly on mediocre output.
Ideally, model selection should be automated by the tool itself. The developer should not need to manually pick which model handles a given task. The orchestration layer should route requests to the most capable model based on the nature and complexity of the work. This is an area where the tooling is still maturing, but it is clearly the direction things are headed.
Source Control: Discipline Is Non-Negotiable
AI-assisted development can feel so fast that source control discipline seems like unnecessary friction. It is not. We maintained a rigorous workflow throughout the project: local commits after every meaningful change, a new branch for every new feature, code review before merging, and a clean deployment pipeline. That discipline saved us repeatedly. When an AI-generated change broke something subtle, we could revert cleanly. When a feature branch went sideways, we could abandon it without contaminating the main codebase. The speed of AI development makes source control more important, not less, because changes happen so fast that without discipline, the codebase can drift into an inconsistent state within a single session.
Prioritization: Resist the Temptation of One More Thing
When building with AI is this fast, the temptation to add just one more feature is constant and powerful. Every idea feels achievable because, in most cases, it actually is achievable. The implementation cost is low. But the complexity cost is real. Every feature adds surface area for bugs, increases the cognitive load on users, and makes the product harder to reason about. Be disciplined about what to build. Resist the pull of "one more thing." The question is never "can we build this?" It is always "should we build this now, and does it make the product meaningfully better?"
Incremental Feature Development: Thin Slices, Tested Thoroughly
The most reliable pattern we found was building in thin, testable slices. Rather than asking the AI to build an entire feature end to end, we would break it into discrete steps, implement each step, verify it worked, and then move on. This approach has several advantages. It keeps the context window manageable. It produces code that is easier to review. It catches problems early, before they propagate through multiple layers. And it matches how AI tools actually work best: they perform better on focused, well-scoped tasks than on broad, ambiguous ones.
Multiple Foundational Models: Use Different Tools for Different Jobs
One of the strongest patterns that emerged was using different AI models for different types of work. We would use Claude or ChatGPT to think through a product problem, clarify the user experience, research technical options, or draft a detailed implementation prompt. Then we would take that refined prompt into Cursor and execute. This division of labor was not a workaround. It was a genuine productivity multiplier. Some models are better at reasoning and planning. Others are better at code generation within an integrated development environment. Matching the model to the task consistently produced better results than using a single model for everything.
Different Team Members: Wear Multiple Hats Deliberately
Throughout the project, we found ourselves switching between distinct roles: product manager, designer, principal engineer, and QA tester. The key insight was to be deliberate about which hat we were wearing at any given moment. When we were in product mode, we focused on what to build and why. When we were in design mode, we focused on how it should look and feel. When we were in engineering mode, we focused on implementation quality. And when we were in QA mode, we tried to break everything.
Conflating these roles, trying to design and build and test simultaneously, consistently led to worse outcomes. The discipline of wearing one hat at a time, even when working solo with AI tools, kept the quality of each type of work higher.
The Three-Phase Loop: Features, Then Design, Then Optimization
We converged on a three-phase development rhythm. First, we focused on features: getting the core functionality working and verified. Second, we refined the design: applying visual coherence, improving usability, and polishing the experience. Third, we optimized: improving performance, cleaning up the codebase, and preparing for scale or portability. Attempting to do all three simultaneously was counterproductive. Polishing the design of a feature whose logic is still unstable wastes effort. Optimizing code before the feature set is settled means reworking optimizations repeatedly. The sequential approach was slower in any given moment but faster overall.
Short, Focused Threads: Respect the Context Window
This may be the single most practical lesson from the entire project: keep your conversation threads short and focused. The context window is a finite resource. As a thread grows longer, the AI's ability to maintain coherence with earlier parts of the conversation degrades. More importantly, mixing multiple unrelated features in the same thread leads to confused outputs and tangled code.
Our rule was simple. One feature per thread. When a feature was complete, we started a new thread for the next one. We never, and this is worth emphasizing, never developed different features in the same conversation. The discipline of starting fresh for each feature kept the AI focused, kept the output clean, and made it far easier to review and understand what was generated.
Knowing When to Stop the Agent
AI coding agents sometimes go off the rails. They start refactoring code you did not ask them to touch. They introduce dependencies you did not want. They solve a problem you did not have. When this happens, the instinct is to try to steer the agent back on course. That instinct is usually wrong.
The better approach is to stop immediately. Throw away the generated code. Revert to your last clean commit. Start a new thread with a clearer, more constrained prompt. The cost of a fresh start is almost always lower than the cost of trying to salvage a divergent output. This requires a kind of emotional discipline, letting go of code that the AI spent time generating, but it consistently produces better results.
What We Took Away
The Connect-K game started as a weekend experiment and grew into a fully featured, published product: player-versus-player, player-versus-computer, AI-versus-AI, and online multiplayer with in-game messaging. It supports configurable board sizes, configurable win conditions, gravity settings, multiple game piece styles, and both 2D and 3D visualization. It was built entirely through natural-language interaction with AI coding tools.
But the product itself was never the point. The point was learning. AI-assisted development does not eliminate the need for engineering discipline, product judgment, or design taste. It concentrates the demand for all three into a tighter loop. The developers and product builders who will thrive in this environment are not the ones who can generate the most code. They are the ones who can frame problems clearly, decompose work intelligently, evaluate output honestly, and maintain the discipline to ship something coherent rather than something merely large.
We built this game to learn. The most important thing we learned is that the tools are extraordinary, but the craft still matters. Perhaps more than ever.
This article reflects personal perspectives on AI-assisted software development. It does not represent the official position of any employer or affiliated organization.