I was at Web Summit Vancouver last week, a tech conference where the only topic of every conversation was, surprise surprise, AI! As someone who has been in the space for years, well before the ChatGPT boom, I was excited to talk to my fellow nerds about the latest tools and tech.
And I was shocked to find that many attendees, including product managers and developers, hadn’t even heard of the AI tools I used most, like Claude and Cursor.
I’ve already written guides on Claude so I figured I’d do one for Cursor. This guide is for you if you’re:
- A complete coding beginner who’s heard the vibe coding hype and wants to skip the “learn syntax for six months” phase
- A seasoned developer curious about AI coding tools but tired of switching between ChatGPT tabs and your IDE
- Someone who tried Cursor once, got confused by all the modes and features, and gave up
By the end, you’ll know exactly how to use Cursor’s three main modes, avoid the common pitfalls that trip up beginners, and build real projects.
Installation and First Contact
Time for the least exciting part of this guide: getting Cursor on your machine. Head to cursor.com and download the application (revolutionary, I know). The installation is standard “next, next, finish” territory, so I won’t insult your intelligence with screenshots.
If you’re familiar with other IDEs, like VS Code, then Cursor won’t look too different. In fact, it’s literally a fork of VS Code. Your muscle memory, keyboard shortcuts, and extensions all work exactly the same. You can install Cursor and use it as a drop-in VS Code replacement without touching a single AI feature.
But why would you want to do that when you could have a coding superpower instead?
Open one of your existing projects in Cursor and hit `Cmd+L` (Mac) or `Ctrl+L` (Windows/Linux). That’s your AI sidebar. Type something like “explain what this file does” and watch as Cursor not only explains your code but suggests improvements you hadn’t even thought of.
This is your first taste of what makes Cursor different. It’s not pulling generic answers from the internet, or generating something irrelevant. It’s analyzing your actual project and giving you contextual, relevant help. Let’s explore the different ways it can do this.
If you don’t have an existing project, ask Cursor to create one! Just type in “Generate a simple HTML file about pizza toppings” or whatever strikes your fancy, and watch the magic.
The Three Modes of Cursor
Cursor has three main ways to interact with AI, and knowing when to use each one is like knowing when to use a scalpel versus a sledgehammer. Both are tools, but context matters.

Ask Mode: Your Coding Sherpa
Think of Ask mode as your personal Stack Overflow that actually knows your project. Hit `Cmd+L` (or `Ctrl+L`) to open the sidebar, make sure “Ask” is selected in the dropdown, and start asking questions.
I often use this if I’m returning to a project I haven’t looked at in a couple of days, or if I’m trying to understand why Cursor generated code in a certain way. It’s also a great way to learn how to code if you’re not a professional.
You can ask it something specific, like what does this function do, all the way to asking it how an entire codebase works. I encourage you to also ask it to explain itself and some of the architectural decisions it makes.
Examples:
- “What does this function do and why might it be slow?”
- “What are other ways to implement this functionality”
- “How would you approach adding authentication to this app?”
- “What are the potential security issues in this code?”
Ask mode is read-only so it won’t change your code. It’s purely for exploration, explanation, and planning. Treat it like Google, but Google that knows your specific codebase inside and out.
Pro Tip: Ask follow-up questions to deepen your understanding, request alternative approaches to problems, and use it to understand AI-generated code before implementing it.
Agent Mode: The Code Wizard
This is where the magic happens. Agent mode (formerly called “Composer”) can actually make changes to your code, create new files, and work across your entire project.
You tell it to do something, and it just does it, from adding new text to a page, all the way to creating an entire new feature with multiple pages, functions, and components.
It can even run commands in the terminal, like installing a new package or committing changes to Git.
Examples:
- “Build a login form with validation”
- “Create a new branch for the onboarding feature”
- “Create a REST API for managing user profiles”
- “Refactor this component to use TypeScript”
Agent mode takes your entire codebase into context to understand the relationships between different parts, and it can create or modify multiple files. If you ask it to make wholesale changes, it will literally go off and generate tons of code across multiple files.
Pro Tip: Start with clear, specific requirements and review changes before accepting them. Use version control (like Git) at every step.
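To make the first example above concrete, here’s the shape of validation logic Agent mode typically generates alongside a login form. This is a minimal sketch; the function name, fields, and messages are my own illustration, not a fixed Cursor output:

```typescript
// Sketch of the validation helper Agent mode might generate
// next to a login form component. All names are illustrative.
interface ValidationResult {
  valid: boolean;
  errors: string[];
}

function validateLogin(email: string, password: string): ValidationResult {
  const errors: string[] = [];

  // Basic shape check, not full RFC 5322 email validation
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push("Enter a valid email address.");
  }
  if (password.length < 8) {
    errors.push("Password must be at least 8 characters.");
  }

  return { valid: errors.length === 0, errors };
}
```

The value of Agent mode is that it would also wire this helper into the form component and its submit handler, not just produce the function in isolation.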
Edit Mode: The Precision Tool
Edit mode is for making smaller, more precise edits. To use this, you need to select some code in the editor and you’ll get a little menu with options to add to chat or edit.
Selecting edit opens up edit mode where you can ask the AI to make changes to that piece of code. You might want to use this when making small tweaks to existing code, refactoring a single function, or a quick bug fix.
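As a concrete illustration of a single-function refactor, an Edit-mode request like “simplify this function” on a selected helper might turn the first version below into the second. Both versions are invented for illustration; Edit mode rewrites the selection in place rather than keeping both:

```typescript
// Before: the selected code, a manual accumulation loop
function totalPriceOld(prices: number[]): number {
  let total = 0;
  for (let i = 0; i < prices.length; i++) {
    total = total + prices[i];
  }
  return total;
}

// After: the kind of in-place rewrite Edit mode returns
function totalPrice(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price, 0);
}
```

The behavior is identical; the point of the edit is readability, which is exactly the scale of change Edit mode is built for.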

YOLO Mode
There’s a secret fourth mode in Cursor called YOLO mode. OK, it used to be called YOLO mode, but it’s since been renamed to the less scary “auto-run mode”.
This mode lets the AI run terminal commands automatically. You may have noticed in your tests so far, especially in Agent mode, that it pauses and asks if it can install a package or spin up a dev server.
If you enable auto-run mode, it executes these commands without asking first. This is obviously risky, so I suggest you limit it to certain commands, like running tests. That way, when you ask Agent to build a new feature and test it, it does so automatically without your active involvement.

Choosing Your Mode
“I want to understand something” → Ask mode
“I want to build/change something” → Agent mode
“I want a tiny, precise change” → Edit mode (or just use Agent)
Here’s a practical exercise to try all three:
- Ask mode practice: Open your HTML file and ask “What would make this webpage more accessible?”
- Agent mode practice: Tell Agent “Add a CSS file that makes this webpage look modern with a color scheme and better typography”
- Edit mode practice: Select the page title and ask Edit to “Change this to something more creative”
Context is king
Cursor is only as good as the context you give it. The AI can only work with what it can see, so learning to manage context effectively is the difference between getting decent results and getting mind-blowing ones.
When you open the AI sidebar, look at the bottom and you’ll see an option to “@add context”. This is where you add files, folders, or specific functions to the conversation.

The @ symbol: Click the @ symbol or type it into the chat to see what files Cursor suggests. This tells the AI “pay attention to this specific file.”
You can reference specific files, folders, or even certain functions:
- `@docs` can pull in documentation if available
- `@components/` includes your entire components folder
- `@package.json` includes just that file
The # symbol: Use this to focus on specific files.
The / symbol: Before starting a complex task, open the files you think are relevant to that task, then use the “/” command in Agent mode to “Add Open Files to Context.” This automatically adds them all to context.
The .cursorignore File
Create a `.cursorignore` file in your project root to exclude directories the AI doesn’t need to see:

```
node_modules/
dist/
.env
*.log
build/
```
This keeps the AI focused on your actual code instead of getting distracted by dependencies and build artifacts.
Context Management Strategy
Think of context like a conversation. If you were explaining a coding problem to a colleague, you’d show them the relevant files, not your entire codebase. Same principle applies here.
Good context: Relevant files, error messages, specific functions you’re working on
Bad context: Your entire project, unrelated files, yesterday’s lunch order
Similarly, when you have long conversations, the context (which is now your entire conversation history) gets too long and the AI tends to lose track of your requirements and previous decisions. You’ll notice this when the AI suggests patterns inconsistent with your existing code or forgets constraints you mentioned earlier.
To avoid this, make it a habit to start new conversations for different features or fixes. This is especially important if you’re moving on to a new task where the context changes.
Beyond giving it the right context, you can also be explicit about what not to touch: “Don’t modify the existing API calls”. This is a form of negative context, telling the AI to work in a certain space but avoid that one spot.
Documentation as context
One of the most powerful but underutilized techniques for improving Cursor’s effectiveness is creating a `/docs` folder in your project root and populating it with comprehensive markdown documentation.
I store markdown documents of the project plan, feature requirements, database schema, and so on. That way, Cursor can understand not just what my code does, but why it exists and where it’s heading. It can then suggest implementations that align with my broader vision, catch inconsistencies with my planned architecture, and make decisions that fit my project’s specific constraints and goals.
This approach transforms your documentation from static reference material into active guidance that keeps your entire development process aligned with your original vision.
Cursor Rules
Imagine having to explain your coding preferences to a new team member every single time you work together. Cursor Rules solve this problem by letting you establish guidelines that the AI follows automatically, without you having to repeat yourself in every conversation.
Think of rules as a mini-prompt that runs behind the scenes every time you interact with the AI. Instead of saying “use TypeScript” and “add error handling” in every prompt, you can set these as rules once and the AI will remember them forever.
Global Rules vs. Project Rules
User Rules: Apply to every project you work on. Think of these as your personal preferences you bring to any codebase.
Project Rules: Specific to each codebase. These are the rules your team agrees on and ensure consistency across all contributors.
Examples That Work in Practice
For TypeScript projects:
- Always use TypeScript strict mode
- Prefer function declarations over arrow functions for top-level functions
- Use meaningful variable names, no single letters except for loops
- Add JSDoc comments for complex functions
- Handle errors explicitly, don't ignore them
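To show what rules like these produce in practice, here’s a sketch of code that satisfies the TypeScript rules above: a function declaration, descriptive names, a JSDoc comment, and explicit error handling. The example itself is mine, not generated output:

```typescript
/**
 * Parses a price string like "$19.99" into an integer number of cents.
 * Throws on unrecognized input instead of silently returning NaN,
 * per the "handle errors explicitly" rule.
 */
function parsePriceToCents(priceText: string): number {
  const match = priceText.match(/^\$?(\d+)\.(\d{2})$/);
  if (match === null) {
    throw new Error(`Unrecognized price format: ${priceText}`);
  }
  const dollars = Number(match[1]);
  const cents = Number(match[2]);
  return dollars * 100 + cents;
}
```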
For Python projects:
- Use type hints for all function parameters and return values
- Follow PEP 8 style guidelines and prefer f-strings for formatting
- Handle errors with specific exception types, avoid bare except clauses
- Write pytest tests for all business logic with descriptive test names
- Use Pydantic for data validation and structured models
- Include docstrings for public functions using Google style format
- Prefer pathlib over os.path and use context managers for resources
For any project:
- Write tests for all business logic
- Use descriptive commit messages
- Add comments for complex algorithms
- Handle edge cases and error states
- Performance matters: avoid unnecessary re-renders and API calls
Use Cursor itself to write your rules. Seriously. Ask it to “Generate a Project Rules file for a TypeScript project that emphasizes clean code, accessibility, and performance.”
The AI knows how to write content that other AIs understand.
Pro Tip: Create different `.cursorrules` files for different types of projects. Keep a `frontend-rules.md`, `backend-rules.md`, and `fullstack-rules.md` that you can quickly copy into projects.
Communicating With Cursor
Here’s the thing about AI: it’s incredibly smart and surprisingly literal. The difference between getting decent results and getting “how did you do that?!” results often comes down to how you communicate.
Be Specific
As with any AI, the more specific you are, the better the output. Don’t just say, “fix the styling.” Say “Add responsive breakpoints for mobile (320px), tablet (768px), and desktop (1024px+) with proper spacing and typography scaling”.
You don’t need to know the technical details to be specific about the outcome you want. Saying “Optimize this React component by memoizing expensive calculations and reducing re-renders when props haven’t changed” works better than just “Optimize this component” even though you’re not giving it detailed instructions.
Take an Iterative Approach
Start broad, then narrow down:
- “Build a todo app with React”
- “Add user authentication to this todo app”
- “Make the todo items draggable for reordering”
- “Add due dates and priority levels”
Each step builds on the previous work. The AI maintains context and creates consistent patterns across features.
Use Screenshots
Take screenshots of:
- UIs you want to replicate
- Error messages you’re getting
- Design mockups from Figma
- Code that’s confusing you
Paste them directly into the chat. The AI can read and understand visual information surprisingly well.
Treat it like a coworker
Explain your problem like you’re talking to a colleague:
“I have this React component that’s supposed to update when props change, but it’s not re-rendering. The props are coming from a parent component that fetches data from an API. I think it might be a dependency issue, but I’m not sure.”
This gives the AI context about what you’re trying to do, what’s happening instead, and your initial hypothesis.
The Context Sandwich
Structure complex requests like this:
- Context: “I’m building a shopping cart component”
- Current state: “It currently shows items and quantities”
- Desired outcome: “I want to add coupon code functionality”
- Constraints: “It should validate codes against an API and show error messages”
This format gives the AI everything it needs to provide accurate, relevant solutions.
Common Prompting Mistakes
Making Assumptions: Don’t assume the AI knows what “correct” means in your context. Spell it out by describing expected outcomes. “This function should calculate tax but it’s returning undefined. Here’s the expected behavior…”
Trying to do everything at once: When you tell the AI to “Build a complete e-commerce site with authentication, payment processing, inventory management, and admin dashboard” it is definitely going to go off the rails at some point.
Start small and build incrementally. The AI works better with focused requests.
Describing solutions: Describe the problem, not the solution. The AI might suggest better approaches than you initially considered. Instead of “Use Redux to manage this state”, say “I need to share user data between multiple components”
Overloading context: Adding every file in your project to context doesn’t help, it hurts. The AI gets overwhelmed and loses focus. Be selective about what’s actually relevant.
Debugging Your Prompts
Good prompting is a bit of an art. A small change in a prompt can lead to massive changes in the output, so Cursor may often go off-script.
And that’s totally fine. If you catch it doing that, just hit the Stop button and say “Wait, you’re going in the wrong direction. Let me clarify…”
Sometimes it’s better to start a new conversation with a refined prompt than to keep correcting course. When you do this, add constraints like “keep the current component structure” to stop it from going down the same path.
Good prompting is iterative:
- Initial prompt: Get something working
- Refinement: “This is close, but change X to Y”
- Polish: “Add error handling and improve the user experience”
- Test: “Write tests for this functionality”
The Psychology of AI Collaboration
The AI is incredibly capable but not infallible. There’s a narrow band between treating it like a tool and constraining it too much, and treating it like a coworker and letting it run free. That’s where you want to operate.
Always review the code it generates, especially for:
- Security-sensitive operations
- Performance-critical sections
- Business logic validation
- Error handling
Don’t just copy-paste the code. Read the AI’s explanations, understand the patterns it uses, and notice the techniques it applies. You’ll gradually internalize better coding practices.
If the AI suggests something that doesn’t feel right, question it. Ask “Why did you choose this approach over alternatives?” or “What are the trade-offs here?”
The AI can explain its reasoning and might reveal considerations you hadn’t thought of. Or it could be flawed because it doesn’t have all the necessary context, and you may be able to correct it.
Putting it all together
Here’s a complete example of effective AI communication:
Context: “I’m building a React app that displays real-time stock prices”
Current state: “I have a component that fetches data every 5 seconds, but it’s causing performance issues”
Specific request: “Optimize this for better performance. I want to update only when prices actually change, handle connection errors gracefully, and allow users to pause/resume updates”
Constraints: “Don’t change the existing API structure, and make sure it works on mobile devices”
This prompt gives the AI everything it needs: context, current state, desired outcome, and constraints. The response will be focused, relevant, and actionable.
Common Pitfalls
Every Cursor user goes through the same learning curve. You start optimistic, hit some walls, wonder if AI coding is overhyped, then suddenly everything clicks. Let’s skip the frustrating middle part by learning from everyone else’s mistakes.
The “Build Everything at Once” Trap
The mistake: Asking for a complete e-commerce platform with authentication, payment processing, inventory management, admin dashboard, and mobile app in a single prompt.
Why it fails: Even the smartest AI gets overwhelmed by massive requests. You’ll get generic, incomplete code that barely works and is impossible to debug.
The fix: Start with the smallest possible version. Build a product catalog first, then add search, then user accounts, then payment processing. Each step builds on solid foundations.
Good progression:
- “Create a simple product listing page”
- “Add search functionality to filter products”
- “Create a shopping cart that stores items”
- “Add user registration and login”
- “Integrate payment processing”
The Context Chaos Problem
The mistake: Adding every file in your project to the AI’s context because “more information is better.”
Why it fails: Information overload makes the AI lose focus. It’s like trying to have a conversation in a crowded restaurant: too much noise drowns out the important signals.
The fix: Be surgical with context. Only include files that are directly relevant to your current task.
Bad context: Your entire components folder, all utilities, config files, and documentation
Good context: The specific component you’re modifying and its immediate dependencies
The “AI Will Figure It Out” Assumption
The mistake: Giving vague instructions and expecting the AI to read your mind about requirements, constraints, and preferences.
Why it fails: The AI is smart, not psychic. “Make this better” could mean anything from performance optimization to visual redesign to code refactoring.
The fix: Be specific about what “better” means in your context.
Vague: “Fix this component”
Specific: “This React component re-renders too often when props change. Optimize it using React.memo and useMemo to prevent unnecessary renders.”
The Copy-Paste Syndrome
The mistake: Blindly copying AI-generated code without understanding what it does.
Why it fails: When (not if) something breaks, you’ll have no idea how to fix it. Plus, you miss learning opportunities that make you a better developer.
The fix: Always ask for explanations. “Explain what this code does and why you chose this approach.”
What to do when shit inevitably hits the fan
You may avoid all the pitfalls above and still see the AI go off track. It starts modifying files you didn’t want changed, adds unnecessary complexity, or ignores your constraints.
The first thing you should do is hit the stop button. You can then let it know it’s going in the wrong direction. Even better, start a new conversation with clearer instructions and additional constraints.
Another common pattern is when the AI makes a change, sees an error, tries to fix it, creates a new error, and gets stuck in a cycle of “fixes” that make things worse.
If you see the same type of error being “fixed” multiple times, stop the process and revert to the last working state.
Here are some other warning signs that things are going off track:
- It keeps apologizing and starting over
- Solutions get more complex instead of simpler
- It suggests completely different approaches in each attempt
- Error messages persist despite multiple “fixes”
Then use one of the following debugging methods.
The Logging Strategy
When things aren’t working and you can’t figure out why:
- Ask the AI to add detailed logging
- Run the code and collect the output
- Paste the logs back to the AI
- Let it analyze what’s actually happening vs. what should happen
Example prompt: “Add console.log statements to track the data flow through this function. I’ll run it and share the output so we can debug together.”
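Here’s a minimal sketch of what that instrumented code looks like once the AI has added logging. The function, labels, and discount logic are all invented for illustration:

```typescript
// Sketch: the kind of logging the AI adds so you can trace data flow.
// Run it, copy the console output, and paste it back into the chat.
function applyDiscount(price: number, discountPercent: number): number {
  console.log("[applyDiscount] input:", { price, discountPercent });

  const multiplier = 1 - discountPercent / 100;
  console.log("[applyDiscount] multiplier:", multiplier);

  // Round to two decimal places for currency
  const discounted = Math.round(price * multiplier * 100) / 100;
  console.log("[applyDiscount] output:", discounted);
  return discounted;
}
```

The labels make it easy for the AI to match each log line back to a specific step when you paste the output into the conversation.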
The Rollback and Retry Method
When the AI made changes that broke more than they fixed:
- Use Cursor’s built-in history to revert changes
- Identify what went wrong in your original prompt
- Start a new conversation with better context
- Be more specific about constraints and requirements
The “Explain Your Thinking” Technique
When the AI gives you code that seems wrong or overly complex:
“Explain why you chose this approach. What are the trade-offs compared to [simpler alternative]?”
Often the AI has good reasons you didn’t consider. Sometimes it reveals that there’s indeed a simpler way.
The Test-Driven AI Approach
TDD (Test-Driven Development) is a standard practice in software development. With vibe coding, though, it seems like people have forgotten about it.
But, as the saying goes, prevention is better than cure. Following tried and tested practices like TDD will save you a ton of headache and rework.
In fact, with AI, it becomes a superpower. AI can write tests faster than you can think of edge cases, and those tests become a quality guarantee for the generated code.
This single prompt pattern will revolutionize how you build features:
“Write comprehensive tests for [feature] first, then implement the code, then run the tests and iterate until all tests pass.”
Here’s an example prompt for building a new React component:
"Write tests that verify this component:
1. Renders correctly with different props
2. Handles user interactions properly
3. Manages state changes
4. Calls callbacks at the right times
5. Handles error states gracefully
Then implement the component to pass all tests."
Watch this workflow in action:
- AI writes tests based on your requirements
- AI implements code to satisfy the tests
- Tests run automatically (with YOLO mode enabled)
- AI sees failures and fixes them iteratively
- You get working, tested code without writing a single test yourself
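If you don’t have a test runner wired up yet, the tests-first idea still works with plain assertions. Here’s a sketch for a simple pure function; the names are invented, and a real project would use a framework like Jest or Vitest instead:

```typescript
// Step 1: tests written before any implementation exists.
function testSlugify(slugify: (title: string) => string): void {
  if (slugify("Hello World") !== "hello-world") throw new Error("basic case failed");
  if (slugify("  Trim Me  ") !== "trim-me") throw new Error("trimming failed");
  if (slugify("Already-Slugged") !== "already-slugged") throw new Error("hyphen case failed");
}

// Step 2: an implementation the AI iterates on until the tests pass.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")  // collapse non-alphanumeric runs into hyphens
    .replace(/^-+|-+$/g, "");     // strip leading/trailing hyphens
}

testSlugify(slugify); // throws if any expectation fails
```

With auto-run enabled, the AI runs these checks itself and keeps revising the implementation until they pass.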
Advanced Tips and Tricks
The Bug Finder
Hit `Cmd+Shift+P` (or `Ctrl+Shift+P`) and type “bug finder.” This feature compares your changes to the main branch and identifies potential issues you might have introduced.
It’s not perfect, but it catches things like:
- Forgot to handle null values
- Missing error handling
- Inconsistent variable usage
- Logic errors in conditional statements
Image Imports
This one sounds fake until you try it. You can literally paste screenshots into Cursor’s chat and it will understand them. Take a screenshot of:
- A UI mockup you want to build
- An error message you’re getting
- A design you want to replicate
Paste it in the chat with your prompt and watch the AI work with visual information. It’s genuinely impressive.
Tab Tab Tab
Cursor’s tab completion doesn’t just complete your current line, it can suggest entire functions, predict what you’re about to write next, and even jump you to related code that needs updating.
The AI analyzes your recent changes and predicts your next move. When it’s right (which is surprisingly often), it feels like magic.
AI Models and Selection Strategy in Cursor
Cursor offers access to the latest generation of AI models, each with distinct strengths and cost profiles that suit different development scenarios.
Claude Sonnet 4 is my current go-to choice for most development tasks. It significantly improves on Sonnet 3.7’s capabilities, achieving a state-of-the-art 72.7% on SWE-bench. Use this for routine development tasks like building React components, writing API endpoints, or implementing standard features.
Claude Opus 4 represents the premium tier for the most challenging problems. It is expensive but pays for itself in time saved when you’re tackling architectural decisions, complex refactoring across multiple files, or debugging particularly stubborn issues.
OpenAI’s o3 is a good premium alternative and particularly strong in coding benchmarks, with the high-effort version achieving 49.3% on SWE-bench and excelling in competitive programming scenarios.
GPT-4o remains a solid and cheaper alternative, especially for multilingual projects or when you need consistent performance across diverse tasks. While it tends to feel more generic compared to Claude’s natural style, it offers reliability and broad capability coverage.
Gemini 2.5 Pro is also one of my favorites as it combines reasoning with coding, leading to much better performance. It is also the cheapest and fastest of models, though I use it primarily for planning out an app.
In most cases, you’ll probably just use one model for the bulk of your work, like Sonnet 4 or GPT-4o, and upgrade to a more expensive model like o3 or Opus 4 for complex tasks.

MCP and Integrations
MCP (Model Context Protocol) connects Cursor to external tools and data sources, turning it into a universal development assistant. Need to debug an issue? Your AI can read browser console logs, take screenshots, and run tests automatically. Want to manage your project? It can create GitHub issues, update Slack channels, and query your database, all through natural conversation.
What MCP is and how it works is out of scope of this already long article, so read my guide here. In this section I’ll explain how to set it up and which servers to use.
Setting Up MCP in Cursor
Getting started with MCP in Cursor involves creating configuration files that tell Cursor which MCP servers to connect to and how to authenticate with them.
For project-specific tools, create a `.cursor/mcp.json` file in your project directory. This makes MCP servers available only within that specific project (perfect for database connections or project-specific APIs). For tools you want across all projects, add them in your settings.

The configuration uses a simple JSON format. Here’s how to set up the GitHub MCP server:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token_here"
      }
    }
  }
}
```
Essential MCP Servers
The MCP ecosystem has exploded with hundreds of available servers, but several have emerged as must-haves for serious development work.
GitHub MCP Server lets your AI create issues, manage pull requests, search repositories, and analyze code changes directly within your coding conversation. When debugging, you can ask “what changed in the authentication module recently?” and get immediate insights without leaving your editor.
Slack MCP Server lets your AI read channel discussions, post updates about builds or deployments, and even summarize daily standups. This becomes particularly powerful for debugging when team members report issues in Slack. Your AI can read the problem descriptions and immediately start investigating.
PostgreSQL MCP Server gives your AI the ability to inspect schemas and execute read-only queries. You can ask “show me all users who logged in yesterday” or “analyze the performance of this query” and get immediate, accurate results.
Puppeteer MCP Server gives your AI browser automation superpowers. When building web applications, your AI can take screenshots, fill forms, test user flows, and capture console errors automatically. This creates a debugging workflow where you describe a problem and watch your AI reproduce, diagnose, and fix it in real-time.
File System MCP Server seems basic but proves incredibly useful for project management. Your AI can organize files, search across codebases, and manage project structures intelligently. Combined with other servers, it enables workflows like “analyze our React components for unused props and move them to an archive folder.”
Advanced MCP Workflows in Practice
The real power of MCP emerges when multiple servers work together to create sophisticated development workflows. Consider this scenario: you’re building a web application and users report a bug through Slack. Here’s how an MCP-enhanced Cursor session might handle it:
First, the Slack MCP reads the bug report and extracts key details. Then, the GitHub MCP searches for related issues or recent changes that might be relevant. The File System MCP locates the relevant code files, while the PostgreSQL MCP checks if there are database-related aspects to investigate.
Your AI can then use the Puppeteer MCP to reproduce the bug in a browser, capture screenshots showing the problem, examine console errors, and test potential fixes. Finally, it can create a detailed GitHub issue with reproduction steps, propose code changes, and post a summary back to Slack, all through natural conversation with you.
This level of integration transforms debugging from a manual, time-consuming process into an assisted workflow where your AI handles the tedious investigation while you focus on architectural decisions and creative problem-solving.
Custom MCP Server Creation
While the existing ecosystem covers many common needs, building custom MCP servers for company-specific tools often provides the highest value. The process is straightforward enough that a developer can create a basic server in under an hour.
Custom servers excel for internal APIs, proprietary databases, and specialized workflows. For example, a deployment pipeline MCP server could let your AI check build status, trigger deployments, and analyze performance metrics. A customer support MCP server might connect to your ticketing system, allowing AI to help triage issues or generate response templates.
A Real-World Workflow
Building real applications with Cursor requires a different mindset than traditional development. Instead of diving straight into code, you start by having conversations with your AI assistant about what you want to build.
Let’s say we want to build a project management tool where teams can create projects, assign tasks, and track progress. It’s the kind of application that traditionally takes weeks, maybe months, to develop, but with Cursor’s AI-assisted approach, we can have a production-ready version in days.
Foundation
Traditional projects start with wireframes and technical specifications. With Cursor, you’d start with Agent mode and a conversation about what you’re trying to build. You describe the basic concept and use the context sandwich method we covered earlier:
Context: “Building a team project management tool”
Current state: “Just an idea, need MVP definition”
Goal: “Users can create projects, assign tasks, track progress”
Constraints: “3-week timeline, needs to scale later”
The AI would break this down into clear MVP features and suggest a technology stack that balances rapid development with future scalability. More importantly, it would design a clean database schema with proper relationships.
Save all of these documents in a folder in your project for the AI to reference later.
Core Features
Start building each feature one by one. Use the test-driven development approach I mentioned earlier, and start small with very specific context.
Connect GitHub and Database MCP servers to let the AI commit code and inspect the database in real-time.
You can even set up a Slack MCP for the AI to update you or read new tickets.
Follow the same pattern for every feature – task tracking, user permissions, etc.
Don’t forget to keep testing the product locally. Even with the test-driven approach, the AI might miss things, so ask it to use the logging technique described earlier to help debug potential issues.
Productionizing
As your app gets ready, you may want to start thinking about performance and production-readiness.
Ask the AI to proactively analyze your app for potential failure points and implement comprehensive error handling.
I also often ask it to find areas for refactoring and removing unnecessary code.
For performance optimization, ask the AI to implement lazy loading, database indexing, and caching strategies while explaining the reasoning behind each decision.
Launch and iterate
The monitoring and debugging workflows we covered earlier would prove essential during launch week. The AI would have generated comprehensive logging and performance tracking, so when real users start using your app, you’d have visibility into response times, error rates, and user behavior patterns from day one.
When users request features you hadn’t planned (keyboard shortcuts, bulk operations, calendar integration, etc) the iterative refinement approach combined with MCP would make these additions straightforward.
Each new feature would build naturally on the existing patterns because the AI maintains architectural consistency while MCP servers provide the external context needed for complex integrations.
Your Turn
Hopefully this article demonstrates a fundamentally different approach to software development. Instead of fighting with tools and configurations, you’re collaborating with an AI partner that understands your goals and helps implement them efficiently.
The skills you develop transfer to any technology stack: thinking architecturally, communicating requirements clearly, and iterating based on feedback. Most importantly, you gain confidence to tackle ambitious projects. When implementation details are handled by AI, you can focus on solving interesting problems and building things that matter.
I’d love to support you as you continue on your journey. My blog is filled with detailed guides like this, so sign up below if you want the latest deep dives on AI.
Get more deep dives on AI
Like this post? Sign up for my newsletter and get notified every time I do a deep dive like this one.