December 22, 2025
How I Built an AI Tactic Analyzer Using Lovable, Supabase, and OpenAI
A step-by-step breakdown of building AssMan.ai’s AI tactic analyzer using Lovable, Supabase, and OpenAI. Covers prompt design, backend schema decisions, API integration, validation, and cost control for non-technical founders.
TL;DR
This article shows non-technical founders how I implemented AssMan.ai’s tactic analyzer from a clean foundation to a working production feature using Lovable, Supabase, and OpenAI. I start by defining a simple, high-level initial prompt that establishes the core flow in Lovable. Then, in Supabase, I set up the backend early with clear user roles, database tables, and safeguards like RLS and API throttling for my OpenAI integration. The main feature is a straightforward pipeline: upload a tactics screenshot, confirm it's a tactic, analyze it with a tightly constrained prompt, and return structured, evidence-based feedback. Key decisions include separating uploads from analysis data, using JSONB for early flexibility, enforcing “null over guess” behavior, and grounding the AI in Football Manager–specific mechanics. The takeaway is simple: build a strong foundation, work in small slices, test constantly.
The Foundation
The Initial Prompt
In my previous article, I talked about building small throwaway prototypes before you start your actual build. Using Lovable's chat, I would summarize the learnings from each prototype, then feed them to ChatGPT to meld and further refine my initial prompt and immediate next steps.
The first prompt is intentionally skeletal rather than overly complex. The goal is to let the AI focus on establishing the foundation while leaving openings for functionality later.
Here is what I defined in my initial prompt:
- What the product is and what problem it solves. For AssMan.ai, that was a tool for Football Manager players to get instant, iterative feedback on their tactics using AI to analyze an uploaded screenshot.
- The core flow, its goal, and the expected pages. I knew the landing page with the upload button would go to a loading state while the AI processed, then bring the user to the analysis page with a chatbot at the bottom.
- Which components would be reusable. While exploring the prototypes, I noticed a handful of components I wanted to turn into reusable “Lego bricks”, such as the screenshot upload component, the spinner page, and the chatbot. I built them to address today's needs while preparing for the future.
- High-level account management and database needs. I stated that I knew I needed a tiered user account system and a database.
- Technical stack and requirements. If you are comfortable with your tool’s default technical stack, you can often skip this step. Just be aware that every setup comes with tradeoffs. In Lovable’s case, the app is client-rendered using React and Vite by default, meaning pages are rendered in the browser rather than pre-rendered for search engines. This setup can limit SEO reliability unless you add pre-rendering or server-side rendering. For some products, this is perfectly acceptable; for others, it is a non-starter.
- Style guide and design system. I gave high-level directions about the theme and general color palette, and let the AI handle the design details.
- User tracking and consent management. If you are using any user-tracking software, establishing the required consent and approvals up front helps reduce the risk of noncompliance with regulations such as GDPR.
This data provides the AI builder with the core information it needs to build a strong initial foundation.
Establishing the Backend
Right after your initial prompt, you'll want to start setting up your database. If you are using Lovable, you will likely use either its built-in cloud or Supabase. I recommend Supabase because it provides a layer of abstraction if you ever want to move away from Lovable. Setting it up is not difficult, and Lovable makes it very easy to accomplish.
During this time, you will also want to establish your user types. In my case, I had three: a general anonymous user, a signed-in user, and an admin user for myself. Establishing these basics early made it easier to build out the related functionality later.
If you are building an application where data privacy, roles, or permissions are important, you should learn about Row Level Security (RLS) before fully launching your product.
To prevent bad actors from draining my OpenAI account, I created another table to throttle API calls to OpenAI. Since I paid for every request, whether a photo upload or a chat message, I set separate limits for each call type as well as a global limit.
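As a rough illustration, the decision such a throttle table backs can be sketched as a pure function. The endpoint names and limit values below are assumptions for the sketch, not my actual configuration:

```typescript
// Sketch of the throttle check behind a usage-tracking table.
// Endpoint names and limits are illustrative, not the real schema.
type UsageCounts = Record<string, number>; // calls made in the current window

const PER_ENDPOINT_LIMITS: UsageCounts = {
  analyze_tactic: 5,  // image analyses per user per window (assumed value)
  chat_message: 30,   // chat turns per user per window (assumed value)
};
const GLOBAL_LIMIT = 500; // total OpenAI calls per window across all users

function isAllowed(
  endpoint: string,
  userCounts: UsageCounts,
  globalCount: number
): boolean {
  const limit: number | undefined = PER_ENDPOINT_LIMITS[endpoint];
  if (limit === undefined) return false;         // unknown endpoint: deny
  if (globalCount >= GLOBAL_LIMIT) return false; // protect the whole account
  return (userCounts[endpoint] ?? 0) < limit;    // per-user, per-endpoint cap
}
```

The key point is the two layers: a per-user cap stops one account from spamming, and the global cap stops a coordinated drain of the OpenAI budget.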
With all of this established, I had a frontend and backend pipeline, a foundation for user types, and protection for my external API calls.
Test and Save Frequently
Testing after each slice of work is an important step in the process to ensure you never start building on unintended structure that creates refactor work for later. At this point, click through the application and make sure the smoke-and-mirrors flow works.
While I would not be overly concerned about polish yet, verify that common components, such as buttons and text inputs, are to your expectations. Designing these “Lego bricks” properly ensures you can use them easily and at scale throughout your project.
The final step to take before you really start investing time in your project is to connect it to GitHub. GitHub is the repository where your codebase lives. When using Lovable, GitHub tracks your code, providing a safe place to store it along with other sanity-saving features like reverting and branching.
At this point, you have confirmed your core flow and set up protection for your codebase.
Building the Feature
The tactics flow is straightforward, with clear stages and progression points. Users upload an image, it is processed, and the analysis results are displayed on the screen. To support this flow, the system requires a database schema, OpenAI API integrations for image-based feedback, and chat handling logic. The backend is typically the hardest part for non-technical founders, but approached intentionally, it is achievable.
Supabase and Database Schema Decisions
For the tactic analysis, I created two tables: one to track uploads and who uploaded them, and another for the analysis output from OpenAI. A foreign key from the tactics upload table to the user table identifies who posted each image, and that connection will let me show users a list of their previous uploads when I implement that functionality. I could have used a single table, but I wanted to keep the large feedback object separate from the initial upload record.
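To make the split concrete, here is a hypothetical TypeScript sketch of the two row shapes. The column names are illustrative, not my actual schema:

```typescript
// Illustrative row shapes for the two tables (names are assumptions).
interface TacticUpload {
  id: string;
  user_id: string;    // foreign key to the user who uploaded
  image_path: string;
  created_at: string;
}

interface AnalysisRow {
  id: string;
  upload_id: string;  // foreign key to TacticUpload.id
  analysis: unknown;  // the large JSONB feedback object from OpenAI
}

// With the analysis in its own table, fetching feedback for an upload is a
// simple join; shown here as an in-memory filter purely for illustration.
function analysesForUpload(rows: AnalysisRow[], uploadId: string): AnalysisRow[] {
  return rows.filter((r) => r.upload_id === uploadId);
}
```

The design choice is that the small, frequently listed upload rows never have to carry the heavy feedback payload.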
I planned to use a JSONB (binary JSON) column and later migrate each important data point into a more stable, typed column. I chose JSONB because I was still tinkering with exactly what I wanted the output to look like and expected a few rounds of iteration after release. That flexibility comes with tradeoffs: less oversight means a higher risk of data quality issues, querying the data is harder, and JSONB consumes more storage, making it heavier and more costly to query.
To keep the scope contained, I did not save any chat messages. I did not flesh out the accounts beyond enabling my Admin user account to handle some admin functionality, such as hiding uploaded tactics. The effort put into prototyping allowed me to implement the schema and backend with both confidence and intention.
OpenAI Integration
Setup
The API setup was mostly straightforward; the only part I struggled with was restricting API calls to specific URLs. Once I had confirmed the connection worked, I began building the individual functions. I started with the function to analyze the image, spit out the analysis, and save it. Once that flow was working, I knew I could take the output and funnel it into the chat functionality.
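As a sketch of that flow, the function below wires the verify, analyze, and save steps together with injected dependencies, so the logic can be exercised without real OpenAI or Supabase calls. All names are illustrative:

```typescript
// Hypothetical shape of the analyze-and-save function. The three
// dependencies stand in for the real OpenAI and Supabase calls.
type Deps = {
  verifyIsTactic: (image: string) => Promise<boolean>;
  analyzeTactic: (image: string) => Promise<object>;
  saveAnalysis: (userId: string, analysis: object) => Promise<void>;
};

async function handleUpload(
  userId: string,
  image: string,
  deps: Deps
): Promise<{ ok: boolean; error?: string }> {
  // Cheap verification runs first, so a non-tactic image exits early
  // and never incurs the more expensive analysis call.
  if (!(await deps.verifyIsTactic(image))) {
    return { ok: false, error: "Image does not look like an FM tactics screen." };
  }
  const analysis = await deps.analyzeTactic(image);
  await deps.saveAnalysis(userId, analysis);
  return { ok: true };
}
```

Structuring the function this way also makes the early-exit behavior easy to confirm with stubbed dependencies before spending real API credits.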
Setting up this API was easier than expected, and Lovable has recently made it even easier with new functionality to set up integrations with OpenAI. I haven't tried it myself, but I look forward to seeing if it makes a noticeable impact.
Prompting
I talked about the ChatGPT Playground in my previous article. That tool is where I spent most of my time iterating on the prompt.
The Goal
Build an AI system that can "read" a screenshot of a Football Manager tactics screen and provide meaningful, actionable tactical feedback, essentially replicating what an experienced FM player would notice and suggest.
Key Design Decisions & Iterations
1. Verification Step
Rather than blindly analyzing any image, the system first validates that the uploaded image is actually an FM tactics screen. This validation prevents wasted API calls on irrelevant images and provides clear user feedback when something's wrong.
Why: Early iterations accepted any image, leading to nonsensical outputs. Adding validation improved both reliability and user experience.
2. Structured Grid Coordinate System
Instead of letting the AI loosely describe player positions ("somewhere in midfield"), the prompt defines a precise 6-row-by-5-column grid. Each player gets mapped to specific coordinates.
Why: This creates consistency and allows the AI to reference specific tactical relationships (e.g., "the player at D3 is isolated from support at C2").
Training: Capturing the model’s reading of the screenshot in this structured form also helps us eventually quantify the error rate.
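For illustration, the grid mapping might look like the following. The row and column orientation here is an assumption on my part, not necessarily the one the real prompt uses:

```typescript
// Illustrative mapping from the prompt's 6-row-by-5-column pitch grid to
// the labels the analysis references (e.g. "D3"). Assumed orientation:
// rows A (own goal line) to F (opponent's), columns 1 (left) to 5 (right).
const ROWS = ["A", "B", "C", "D", "E", "F"];

function gridLabel(row: number, col: number): string | null {
  // "Null over guess": out-of-range coordinates return null instead of
  // being clamped to the nearest valid cell.
  if (row < 0 || row >= 6 || col < 1 || col > 5) return null;
  return `${ROWS[row]}${col}`;
}
```

A fixed label space like this is what lets two analyses of the same tactic be compared cell by cell, which is also the basis for measuring the error rate mentioned above.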
3. Exhaustive Role/Duty Dictionary
The prompt includes a complete reference to recent FM roles and duties, organized by position type.
Why: This grounds the AI in actual game mechanics rather than generic football knowledge. It prevents hallucinated roles that don't exist in the game.
4. Evidence-Based Insights
Every piece of tactical feedback must cite specific evidence based on role, duty, and position.
Why: This forces precision and makes feedback actionable. Users can trace exactly which player/instruction the AI is referencing.
5. "Null Over Guess" Philosophy
When data is unclear or unreadable, the system returns null rather than guessing.
Why: False confidence is worse than admitted uncertainty. Better to say "I can't read this" than provide wrong information.
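A minimal sketch of how that rule can look when parsing the model’s JSON; the field handling and sentinel values are illustrative:

```typescript
// "Null over guess" applied at parse time: fields that are missing,
// empty, or flagged as unreadable become null, never a fabricated value.
function readField(raw: Record<string, unknown>, key: string): string | null {
  const value = raw[key];
  if (typeof value !== "string") return null; // missing or wrong type
  const trimmed = value.trim();
  // Treat empty strings and an assumed "unknown" sentinel as unreadable.
  if (trimmed === "" || trimmed.toLowerCase() === "unknown") return null;
  return trimmed;
}
```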
6. Feedback Density Requirements
Minimum thresholds for insights (e.g., "target three strengths, three conflicts, three suggestions").
Why: Ensures users always receive substantive feedback, not a single bullet point and "looks good!"
7. Instruction Alignment
Suggestions must work with existing team instructions, not contradict them.
Why: Early outputs might suggest "play wider" when the user has explicitly set narrow play. The prompt now enforces coherence.
Output
The output schema below is intentionally verbose to show how structured the AI response became over time and why starting with JSONB enabled easier iteration. That said, with the structure reasonably stable, it makes sense to transition this to a more standard and safer format, as the risk no longer outweighs the benefits.
| Top-Level Key | Type | Purpose | Example Output |
|---|---|---|---|
| formation | string | The detected formation | "4-3-3" |
| overviewSummary | string | High-level tactical philosophy description | "Counter-attacking 4-3-3 with Balanced mentality and Structured fluidity. Utilizes width from wingers and overlapping runs from wing-backs, with a focus on quick transitions." |
| possessionStrategy | object | How the team builds and maintains possession | { approach: "Run At Defence, Play For Set Pieces", description: "Direct approach focused on quick forward progression" } |
| transitionStrategy | object | Attacking and defensive transition patterns | { attacking: "Counter", defensive: "Counter-Press", description: "Quickly transitions to high pressing when possession is lost" } |
| defensiveStrategy | object | Defensive shape, pressing, and marking approach | { line: "Higher Defensive Line", pressing: "High Press", additional: "Prevent Short GK Distribution" } |
| keyStrengths | array | Primary tactical advantages of the setup | ["Fluid Attack: W-SU at [5,1] and [5,5] combined with WBs at [2,1] and [2,5]", "Central Control: BBM-S supports DLP-D and CM-A triangle", "Defensive Solidarity: CD roles at [2,2] and [2,3]"] |
| conflictingTactics | array | Instruction combinations that may clash | ["Balance vs High Press: Balanced mentality may conflict with aggressive pressing", "Counter vs Structured: Structured fluidity may limit tempo for counters", "Wingers' Support vs Central Threat: W-SU may dilute offensive punch"] |
| suggestions | array | Recommended improvements or adjustments | ["Consider Higher Mentality to complement pressing", "Utilize Advanced Playmaker instead of one winger", "Switch WB-D to WB-A for better counter support"] |
| riskExposures | array | Tactical vulnerabilities to watch for | ["Defensive Transition: High line leaves gaps for through balls", "Wide Spaces: W-SU may not pressure flanks effectively", "Central Overload: Midfield can be outnumbered by 4-2-3-1"] |
| roleSynergies | array | Player role combinations that work well together | ["DLP-D + BBM-S: Rico and Harding create balanced midfield triangle", "Wing-Backs + Wingers: Overlapping creates flank overloads", "CM-A + DLF-A: Diogo creates chances for Monday"] |
| teamShape | object | Shape in/out of possession with width/depth settings | { defence: { shape: "Compact", description: "High line to compress space" }, attack: { shape: "Fluid", description: "Width via wingers and overlapping wing-backs" } } |
| suggested_questions | array | Context-aware follow-up questions for chat | ["How can I further optimize my pressing strategy?", "What substitutes would complement my current setup?", "Should I change roles based on my opponents?"] |
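To show what migrating away from raw JSONB could look like, here is a hypothetical typed parser for a few of the keys above. Only a subset of the schema is shown, and the coercion rules are my own sketch:

```typescript
// Typed sketch of part of the analysis payload: the kind of interface the
// JSONB column can eventually be migrated toward.
interface TacticAnalysis {
  formation: string | null;
  overviewSummary: string | null;
  keyStrengths: string[];
  conflictingTactics: string[];
  suggestions: string[];
}

function parseAnalysis(raw: unknown): TacticAnalysis | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  // Arrays keep only string entries; anything else degrades to empty.
  const arr = (v: unknown): string[] =>
    Array.isArray(v) ? v.filter((x): x is string => typeof x === "string") : [];
  return {
    formation: typeof r.formation === "string" ? r.formation : null,
    overviewSummary: typeof r.overviewSummary === "string" ? r.overviewSummary : null,
    keyStrengths: arr(r.keyStrengths),
    conflictingTactics: arr(r.conflictingTactics),
    suggestions: arr(r.suggestions),
  };
}
```

Guarding the payload at the boundary like this is what lets the rest of the app treat the JSONB column as if it were typed, even before the migration happens.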
What the Prototype Did Not Catch
In my previous article, I talked about the value of throwaway prototypes. They are great and mitigate many pain points, but they do not eliminate all of them. You will still run into things that did not surface during that process.
I forgot about pagination. If you make list pages, paginate them. Otherwise, you will cause issues for yourself with massive queries rather than fetching 10 rows at a time. I ended up paying extra Supabase fees because I forgot this and was pulling every row as the data grew.
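For reference, Supabase’s `.range(from, to)` is inclusive on both ends, so the bounds for 10-at-a-time paging work out like this (a sketch; the table name in the usage note is illustrative):

```typescript
// Page bounds for supabase-js .range(from, to), which is inclusive on
// both ends. Pages are zero-based here.
const PAGE_SIZE = 10;

function pageRange(page: number): { from: number; to: number } {
  const from = page * PAGE_SIZE;
  return { from, to: from + PAGE_SIZE - 1 };
}
```

A query would then look something like `supabase.from("tactic_uploads").select("*").range(from, to)`, where `tactic_uploads` stands in for the real table name.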
As I mentioned in the prompting section, verification is the first step in the flow, but it was not in my original prompt. I quickly learned that people could upload images unrelated to football tactics, so I needed to add verification. I made it the initial step so the flow would exit early and not incur the cost of generating feedback. I implemented the same validation on the chat functionality to ensure questions stayed Football Manager-related.
Closing Thoughts
Build your skeleton first. Work in small slices. Test constantly. Use branching to explore without risk. That is the whole method.
The goal is not to become a developer. The goal is to get the product to a point where it works, where users get value, and where a developer can take over if you need to scale. Everything else is just getting in your own way.
Related Posts
How I Built an AI Product With Lovable.dev (No Code): Full Breakdown, Costs, and Insights
Breaking down how I built my first AI product using Lovable.dev, including architecture, costs, decisions, analytics, and early lessons.
November 25, 2025
How to Validate Product Ideas Using Reddit and ChatGPT Before Building
How I used Reddit and ChatGPT to validate product features before building with no-code tools, including research process, actual data, and prioritization decisions non-technical founders can replicate.
December 6, 2025
Build a Throwaway Prototype First: How I Use Vibe Coding to Avoid Tech Debt
Why I build a throwaway prototype in Lovable before creating the real product. A rapid iteration process that reveals what NOT to build, mitigates tech debt, and lets you explore fast without committing to bad architecture.
December 10, 2025