December 30, 2025

Execution & Implementation

Building Your Second Feature With AI Tools: When Foundations Get Tested

Proper foundation work on Feature 1 meant building Feature 2 felt like snapping Lego bricks together instead of starting from scratch. How I built the Player Development feature in 3 days using Lovable.

TL;DR

When you start your second feature, it quickly becomes apparent whether you set yourself up for success while building your foundation and first feature. During the development of AssMan.ai, I first built the Football Manager Tactics Analyzer as my core feature, then expanded to develop my second feature, Player Development.

Player Development took 3 days, whereas the Tactics Analyzer took multiple weeks. I built it faster by planning around common components and standardizing flows for reuse. That reusability let me spend most of my time on the prompt and the flow's entry point.

To compare features fairly, I looked at equal timeframes starting from Player Development's release. During that period, users uploaded 1,883 tactics and 456 players. Player development averaged 1.82 chats per upload, while tactics averaged 2.59. 1,302 users completed a tactic analysis, and 187 completed a player development analysis. Most users who tried player development also used tactics, showing tactics remained the primary driver.


Why Feature 2 Is Different From Feature 1

Many projects struggle to move from greenfield building to their second feature because the foundation laid at the start of the project comes back to haunt them. Whether it's database structure, UI flows, or architectural assumptions, foundation issues that seemed fine for Feature 1 can turn Feature 2 into a nightmare rebuild.

When you build your foundation with more than just your first feature in mind, you can expand functionality more easily. The work shifts from exploration to extension. The questions change: What changed from my original assumptions? What can I reuse? What needs building from scratch?

For AssMan.ai's player development feature, this meant starting with working components, functional flows, and protections. The image upload handler existed. The chat system existed. Image validation and rate limiting were in place. The foundation I established let me snap player development together like Lego bricks, rather than starting from scratch.


What You Built Once That You Get To Reuse

Here's what that looked like in practice. Three components I built for the tactics analyzer became the foundation for player development (a sketch of the reuse pattern follows the component list):

Reusable Component #1: Screenshot Upload Component

  • Built as a "Lego brick" during the tactics analyzer. Reused for player development with different state. Changed text/prompts, but the core concepts and flow are identical.

Reusable Component #2: Chat Functionality

  • Entire chat handler reused with minor changes. Same throttling, same message functionality. Only difference: context (tactics vs. player).

Reusable Component #3: Security & Rate Limiting

  • The pipeline for safely and securely interacting with OpenAI was already established, allowing me to focus solely on the prompting.
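
To make the "Lego brick" idea concrete, here's a minimal sketch of the reuse pattern, assuming a React/TypeScript stack like the one Lovable generates. The component name, props, and endpoint are illustrative, not the actual AssMan.ai code.

```tsx
// Illustrative sketch only -- not the actual AssMan.ai component.
// The idea: one upload "Lego brick" parameterized by context, so tactics
// and player development share the same flow with different copy and prompts.

import { useState } from "react";

type AnalysisContext = "tactics" | "player";

interface ScreenshotUploadProps {
  context: AnalysisContext;                 // which feature is using the brick
  heading: string;                          // feature-specific copy
  onAnalyzed: (analysisId: string) => void; // navigate to the analysis/chat page
}

export function ScreenshotUpload({ context, heading, onAnalyzed }: ScreenshotUploadProps) {
  const [status, setStatus] = useState<"idle" | "uploading" | "error">("idle");

  async function handleFile(file: File) {
    setStatus("uploading");
    const body = new FormData();
    body.append("screenshot", file);
    body.append("context", context); // same hypothetical endpoint, different context

    const res = await fetch("/api/analyze-screenshot", { method: "POST", body });
    if (!res.ok) {
      setStatus("error");
      return;
    }
    const { analysisId } = await res.json();
    setStatus("idle");
    onAnalyzed(analysisId);
  }

  return (
    <div>
      <h2>{heading}</h2>
      <input
        type="file"
        accept="image/*"
        disabled={status === "uploading"}
        onChange={(e) => {
          if (e.target.files?.[0]) handleFile(e.target.files[0]);
        }}
      />
      {status === "error" && <p>Upload failed. Please try again.</p>}
    </div>
  );
}
```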

The tactics analyzer took two weeks and change to build. Player development took three days. That's what a strong foundation with reusable components actually enables.


What Changed: Database Decisions

For tactics, I used two tables: uploads and analysis. I knew I'd iterate heavily on the feedback structure, changing what data to extract, how to organize it, and which fields mattered. Keeping analysis separate meant I could modify that schema without touching upload tracking. I covered this decision in depth in my previous article on building the tactic analyzer.

For players, I combined uploads and analysis into one table. The relationship was simpler (one upload, one analysis), and I didn't expect to iterate on the structure as much. The single table kept things straightforward.
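
Roughly, the difference between the two approaches looks like this. The shapes and field names below are assumptions for illustration, not the actual schema.

```ts
// Illustrative table shapes only -- names and fields are assumptions.

// Tactics: two tables, so the analysis schema can change
// without touching upload tracking.
interface TacticUpload {
  id: string;
  userId: string;
  screenshotUrl: string;
  createdAt: string;
}

interface TacticAnalysis {
  id: string;
  uploadId: string;   // points back to TacticUpload
  formation: string;  // fields like these were expected to churn
  strengths: string[];
  weaknesses: string[];
}

// Player development: one upload, one analysis,
// so a single table keeps it straightforward.
interface PlayerUpload {
  id: string;
  userId: string;
  screenshotUrl: string;
  analysis: {
    position: string;
    recommendedFocus: string[];
  };
  createdAt: string;
}
```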


What Changed: Development Approach

One of the biggest changes in my process was using GitHub and Lovable's branching functionality instead of building a throwaway prototype. For tactics, I built throwaway prototypes because I was exploring from scratch. For player development, the foundation was solid. I just needed to extend it safely.

Branching let me develop the player feature without introducing potential side effects into the production code. I could test the upload flow, iterate on the prompt output, and validate the single-table approach. Once it worked, I merged it in.


Home Page Hierarchy Decisions

One challenge was surfacing player development functionality on the home page without hurting tactics conversions. I kept tactics as the primary hero and only added player development below the fold with a link to a dedicated player development page. This created intentional friction to protect the core feature.

I made this assumption based on my earlier Reddit and ChatGPT research, but in hindsight, this would have been a perfect opportunity for A/B testing with PostHog. I could have split traffic between player development on the home page versus behind a separate page and measured the actual impact on both features. Without that data, I don't know if the friction helped tactics or just hurt player development adoption.
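
For what that test could have looked like, here's a hypothetical sketch using posthog-js feature flags. The flag key and event names are made up for illustration.

```ts
// Hypothetical A/B test sketch -- flag key and event names are made up.
import posthog from "posthog-js";

posthog.init("<project-api-key>", { api_host: "https://us.i.posthog.com" });

// An experiment flag splits traffic between the two placements.
const variant = posthog.getFeatureFlag("player-dev-placement"); // "home-page" | "separate-page"

if (variant === "home-page") {
  // Render the player development upload directly on the home page.
} else {
  // Keep it behind the dedicated player development page link.
}

// Capture the events needed to measure impact on both features.
posthog.capture("tactic_upload_completed");
posthog.capture("player_dev_upload_completed");
```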


Implementation Walkthrough

Before building anything new, I verified that my reusable components were actually ready for reuse. Working with Lovable’s chat to align on the component context meant I could reuse them without breaking the original tactics flow.

Once aligned on the components, I followed a similar approach to the one I used in my initial project's foundation prompt. In this case, I structured my prompt around the following (a sketch of the assembled prompt follows the list):

  • What the feature is and what problem it solves: A second feature enabling Football Manager players to receive instant, iterative feedback on their players using AI to analyze uploaded screenshots.
  • The core flow: Referenced the original tactics upload flow. Same upload button, loading state, and analysis page with chatbot. The difference: this flow starts from the Players List page, not the home page.
  • Reusable components: Called out the upload handler, chat functionality, and rate limiting from the "What You Built Once That You Get To Reuse" section.
  • Database structure: Defined the single-table approach from the "What Changed: Database Decisions" section.
  • Technical requirements: Reuse the OpenAI function from tactics with manual prompt tweaks.
  • User tracking: Same pattern as tactics, tracking submissions and interactions.
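
Put together, the prompt read roughly like the sketch below. The wording and the table name are reconstructions for illustration, not the actual prompt I gave Lovable.

```ts
// Illustrative reconstruction of the feature prompt's structure.
const playerDevelopmentPrompt = `
Feature: Player Development (second feature for AssMan.ai).
Problem: Football Manager players want instant, iterative AI feedback on
their players from uploaded screenshots.

Core flow: mirror the existing tactics upload flow (upload button, loading
state, analysis page with chatbot), but start it from the Players List page
instead of the home page.

Reuse: the existing screenshot upload handler, chat functionality, and
rate limiting. Do not modify the tactics flow.

Database: a single player_uploads table holding both the upload record and
its analysis (one upload, one analysis).

Technical: reuse the existing OpenAI function; I will update the prompt text
manually.

Tracking: follow the tactics pattern -- track submissions and chat interactions.
`;
```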

By reusing the OpenAI interaction patterns, I didn't have to rebuild rate limiting or image validation. The main change was the prompt itself, which I updated manually to prevent any unintended modifications. I had to tinker for a bit to get everything working exactly as I wanted, but most of the core structure worked from the initial prompt.
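
A rough sketch of that shared call, assuming the official OpenAI Node SDK; the function name, prompt text, and model choice are placeholders, not the production code. Rate limiting and image validation run before this point, so the only thing that differs per feature is the system prompt.

```ts
// Sketch of a shared OpenAI call -- names, prompts, and model are illustrative.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const SYSTEM_PROMPTS = {
  tactics: "You analyze Football Manager tactic screenshots...", // placeholder
  player: "You analyze Football Manager player screenshots...",  // placeholder
};

export async function analyzeScreenshot(context: "tactics" | "player", imageUrl: string) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      { role: "system", content: SYSTEM_PROMPTS[context] },
      {
        role: "user",
        content: [
          { type: "text", text: "Analyze this screenshot." },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
  });

  return response.choices[0].message.content;
}
```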


Results & What They Mean

Before building, my Reddit and ChatGPT validation research showed that tactics dominated at 38% of Reddit posts, compared with player development at 7%. That split predicted tactics would be used more often.

Production data confirmed that prediction. To compare features fairly, I looked at equal timeframes starting from Player Development's release. During that period, users uploaded 1,883 tactics and 456 players. Player development averaged 1.82 chats per upload, while tactics averaged 2.59 per upload.
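
Multiplying uploads by the per-upload averages gives a rough sense of absolute chat volume (my arithmetic on the numbers above, not a separately logged metric):

```ts
// Rough implied chat volume from the reported averages.
const tacticChats = 1883 * 2.59; // ≈ 4,877 chat messages
const playerChats = 456 * 1.82;  // ≈ 830 chat messages
```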

Looking at unique users, 1,302 completed a tactic analysis and 187 completed a player development analysis. Most users who tried player development also used tactics, showing tactics remained the primary driver.

The validation method worked. Research predicted the usage split, and production results matched.


Closing Thoughts

Building player development proved that the foundation work paid off. As the adage goes, proper planning prevents poor performance.

The groundwork you lay in your foundation and first feature should not hurt your second feature. It should mitigate pain points and facilitate reusability where it makes sense.

You can brute-force your way through Feature 1. Feature 2 will have major side effects if you don't put intention and proper thought into its execution, especially when setting up backend functionality.

When you do it right, your second feature takes less time than building from scratch, and you have Lego bricks of functionality that you can snap together for speed and efficiency.
