AI Partnership Guide

ON THE CARE AND FEEDING OF YOUR AI

Setting Up for Success with LLM Projects

💡 PM Insight: Your LLM is a Team Member

Treat your LLM like a junior developer who's brilliant but has amnesia every morning. It needs context, clear instructions, and gentle correction. Get this relationship right and you'll 10x your building speed.

⚠️ Warning: The $50 Context Mistake

Using the same chat for everything is like using one Google Doc for your entire life. After 50 messages, your LLM forgets the beginning. After 100, it's actively confused. Start fresh chats for fresh features.

1. Always Use the Projects Feature

The biggest mistake I made early on was treating each LLM conversation like a one-off question. Your app deserves its own dedicated space.

🤖 Project Setup Prompt

Create a new Project named: "[YourAppName] Development"

Initial Project Instructions
You are a senior full-stack engineer helping me build [app name].

CONTEXT:
- My experience: [Beginner/Some coding/Intermediate]
- App purpose: [one sentence description]
- Current phase: [Planning/Building/Refactoring/Debugging]

TECH STACK (verify before suggestions):
- Next.js 14+ (App Router, not Pages)
- TypeScript (strict mode)
- Prisma with PostgreSQL
- Tailwind CSS v4
- Hosting: Vercel
- Auth: [Clerk/NextAuth/Supabase]

RESPONSE RULES:
1. Always include file paths as comments; make helpful inline comments and scaffolding notes
2. Ask for my current code and file structure before suggesting changes
3. Work step-by-step with confirmation after each step, make all code in artifacts
4. No file should have more than one responsibility or exceed 500 lines of code without refactoring
5. Suggest security measures where applicable: signed URLs, input validation, rate limiting, etc.
6. Include error handling, logging, and SEO basics in all code
7. Ask clarifying questions
8. Write TypeScript strict-mode-compliant code; handle null/undefined cases explicitly
9. Use reusable, globally styled UI components, never page-level one-offs
10. Create an environment variable to switch between development and production
11. Future-proof descriptive names only: PetRecordUploader, not Uploader

CURRENT STATUS:
- Working features: [list them]
- Current task: [what you're building]
- Known issues: [bugs or tech debt]

When I share an error, always ask for:
1. The full error message
2. The relevant file
3. What I changed recently
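
To show what rule 10 looks like in practice, here's a minimal sketch that leans on Next.js's built-in NODE_ENV variable; the file path, URLs, and option names are purely illustrative:

// lib/appConfig.ts (illustrative path)
// One place that knows which environment we're in, so dev and production
// behavior never depends on if-statements scattered through the app.
const isProduction = process.env.NODE_ENV === 'production'

export const appConfig = {
  apiBaseUrl: isProduction ? 'https://yourapp.com/api' : 'http://localhost:3000/api',
  enableDebugLogging: !isProduction,
}

Anything that differs between environments (URLs, logging, feature flags) reads from this one file instead of being hard-coded throughout the codebase.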

✅ Quick Exercise (5 min)

  1. Create your project NOW (don't wait)
  2. Name it properly (not "Test" or "My App")
  3. Paste the setup prompt
  4. Test with: "What files do I need for user authentication?"

2. Context Management: Your Secret Weapon

💡 PM Insight: The File Tree Trick

Your LLM can't see your files until you graduate to an agentic tool like Claude Code. It's like a blind architect. Give it a map every session and watch the magic happen.

Generate Your File Tree (Run This Now):

# Mac/Linux users:
tree -L 3 -I 'node_modules|.git|.next|.DS_Store' > structure.txt
# Windows users:
tree /F /A > structure.txt
# Or use this Node version (works everywhere):
npx tree-cli -l 3 -i 'node_modules,.git,.next,dist' -o structure.txt
Context Refresh Prompt
I'm continuing work on [feature]. Here's my current state:

FILE STRUCTURE:
[paste structure.txt]

RECENT CHANGES:
- [What you did last session]
- [What's working]
- [What's broken]

TODAY'S GOAL:
[Specific, achievable goal]

Questions before we start:
1. Do you need to see any specific files?
2. Are there any new patterns since your training?
3. Any warnings about my approach?

⚠️ The 1-Hour Rule

After 1 hour of coding in the same chat:

  1. Your LLM is confused
  2. You're probably confused
  3. Time for a break and fresh chat
  4. Make a new chat with every new task

3. Version Reality Check

💡 PM Insight: LLMs are Time Travelers

Your LLM learned to code in 2023-2024. It's now late 2025. That's like 10 years in JavaScript time. Always verify versions.

Version Verification Prompt
My package.json versions:

{
  "next": "^14.2.5",
  "react": "^18.3.0",
  "typescript": "^5.5.4",
  "prisma": "^5.17.0"
}

Before we start:
1. Are these versions compatible?
2. Any deprecated patterns I should avoid?
3. Any new features I should use?
4. Check if App Router patterns have changed

If you're unsure about current best practices, tell me.

Red Flags Your LLM is Using Old Patterns:

Old Pattern (Stop!) → New Pattern (Use This)
pages/api/hello.js → app/api/hello/route.ts
getServerSideProps → Server Components
useRouter from next/router → useRouter from next/navigation
Image from next/legacy/image → Image from next/image
Head from next/head → metadata object

✅ Quick Test (2 min)

Ask your LLM: "Show me a Next.js API route"

  • If it shows pages/api: It's using old patterns
  • If it shows app/api/[name]/route.ts: It's current
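
For reference, a current-pattern route handler looks roughly like this — a minimal sketch, with the route path, query parameter, and response shape chosen purely for illustration:

// app/api/hello/route.ts
// App Router replaces pages/api/*.js files with route.ts files that export
// one function per HTTP method (GET, POST, etc.).
import { NextResponse } from 'next/server'

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url)
  const name = searchParams.get('name') ?? 'world'
  return NextResponse.json({ message: `Hello, ${name}` })
}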

4. The Summary-Restart Technique

💡 PM Insight: Fresh Context = Better Code

Just like rebooting your computer, restarting your chat clears the confusion. But unlike your computer, you need to restore the memory.

When Your Chat Gets Muddy:

LLM suggests things you already tried

Code doesn't match your file structure

Same errors keep appearing

You're repeating yourself

The Magic Summary Prompt
We've been working for a while and the context is getting long.

Please create a summary I can use to start a fresh chat:

1. WHAT WE BUILT:
- List completed features
- List working endpoints
- List UI components created

2. CURRENT STATE:
- What's working
- What's broken
- What's in progress

3. KEY DECISIONS:
- Architecture choices we made
- Patterns we're following
- Libraries we chose and why

4. FILES MODIFIED:
- List all files we created/changed
- Note any that need cleanup

5. NEXT STEPS:
- Immediate task
- Upcoming features
- Known bugs to fix

6. IMPORTANT CONTEXT:
- Gotchas we discovered
- Custom solutions we implemented
- Things that took multiple tries

Format this so I can paste it into a new chat and continue immediately.

5. Step-by-Step: Your Sanity Saver

⚠️ The Code Dump Disaster

Never let your LLM dump 500 lines of code at once. You'll miss bugs, skip understanding, and create a mess.

The Right Way:

You:

"I need to add user profiles. Let's work step-by-step."

LLM:

"I'll help you add user profiles. Here's our plan:

  1. Update Prisma schema
  2. Run migration
  3. Create API route
  4. Build profile component
  5. Add to navigation

Let's start with step 1. Ready?"

You:

"Yes, show me just step 1."

[After completing step 1]

You:

"Step 1 done, migration successful. Ready for step 2."

The Step-by-Step Enforcement Prompt
I need to [feature/fix].

IMPORTANT: Work step-by-step.

- Show me ONE step at a time
- Wait for confirmation before continuing
- Each step should be testable
- If a step fails, we fix before moving on

Start by showing me a numbered plan, then we'll do step 1.

✅ Quick Exercise (3 min)

Try this now: Ask your LLM to add a simple button to your app using the step-by-step method. Notice how much clearer it is?

6. The Image Storage Disaster

🚨 CRITICAL WARNING: The $500 Database Bill

Storing images as base64 in your database is like putting your entire photo library in a text message. It will destroy your app.

Correct Approach
// ✅ CORRECT: Store URLs, not data
// Option 1: Uploadthing (easiest)
import { uploadFiles } from '@uploadthing/react'

const [result] = await uploadFiles({
  files: [file],
  endpoint: 'avatarUploader'
})

await prisma.user.update({
  where: { id },
  data: { avatarUrl: result.url } // Just the URL!
})

// Option 2: Vercel Blob
import { put } from '@vercel/blob'

const blob = await put(`avatars/${userId}.jpg`, file, {
  access: 'public'
})

await prisma.user.update({
  where: { id },
  data: { avatarUrl: blob.url }
})

Why This Is Catastrophic:

Base64 encoding makes files roughly 33% larger, and storing them in your database means every query, backup, and bill grows with them. Always store files externally and save only the URL.

  • A 5MB image becomes a 6.7MB base64 string
  • 100 users = 670MB just for avatars
  • Every query loads entire images
  • Backups take hours instead of seconds
  • Database costs explode 100x
Image Handling Prompt
I need to handle user avatar uploads.

Requirements:
- Store images properly (NOT base64 in database)
- Under 5MB limit
- Show me the cheapest/simplest option
- Include error handling

Show me:
1. Where to store the files
2. How to upload
3. How to save the reference
4. How to display in UI
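
To make step 4 concrete, a display component might look like the sketch below; the component name, props, and sizing are illustrative, and next/image will only load external URLs whose hosts you've added to images.remotePatterns in next.config:

// components/UserAvatar.tsx (illustrative)
import Image from 'next/image'

// The database holds only avatarUrl; the file itself lives in blob storage.
export function UserAvatar({ avatarUrl, name }: { avatarUrl: string; name: string }) {
  return (
    <Image
      src={avatarUrl}
      alt={`${name}'s avatar`}
      width={64}
      height={64}
      className="rounded-full object-cover"
    />
  )
}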

7. Cost-Aware Prompting

💡 PM Insight: Ask About Costs FIRST

Every suggestion has a cost. Train your LLM to think about your wallet.

The Money-Conscious Prompt
I need to implement [feature].

CONSTRAINTS:
- Budget: $[X]/month maximum
- Current usage: [%] of free tier
- Users: [current number]
- Expected growth: [X users/month]

Before suggesting solutions:
1. What will this cost at current scale?
2. What will this cost at 100x scale?
3. Is there a free/cheaper alternative?
4. Can this be cached/optimized?

I prefer free/cheap over perfect.
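
As an example of question 4, the Next.js App Router can cache a page and its data with a single revalidation window — a minimal sketch, where the page path, API URL, and one-hour window are all illustrative:

// app/pricing/page.tsx (illustrative)
// Re-render this page at most once per hour instead of on every request.
export const revalidate = 3600

export default async function PricingPage() {
  // The fetch result is cached and revalidated on the same schedule, so the
  // upstream API is hit roughly once per hour, not once per visitor.
  const res = await fetch('https://api.example.com/plans', { next: { revalidate: 3600 } })
  const plans: { id: string; name: string }[] = await res.json()

  return (
    <ul>
      {plans.map((plan) => (
        <li key={plan.id}>{plan.name}</li>
      ))}
    </ul>
  )
}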

8. The Debug Partnership

💡 PM Insight: Give Your LLM Everything

Debugging with partial information is like diagnosing a patient over the phone. Share everything.

The Perfect Debug Prompt
I have an error I can't solve:

ERROR MESSAGE:
[paste the COMPLETE error]

RELEVANT CODE:
[paste the file with line numbers]

WHAT I CHANGED:
[what you did right before it broke]

WHAT I TRIED:
1. [First attempt]
2. [Second attempt]

ENVIRONMENT:
- Dev/Production: [which one]
- Browser: [if relevant]
- Recent packages added: [if any]

Please:
1. Explain why this is happening
2. Give me the exact fix
3. Tell me how to prevent this

✅ Quick Exercise (2 min)

Save this debug prompt as a text expander snippet or bookmark. You'll use it weekly.

Your AI Partnership Checklist

Every Session Start:

Open your dedicated Project (not random chat)
Share current file structure
Verify package versions
State clear goal for session
Request step-by-step approach

Every Hour:

Check if chat is getting confused
Save important code outside chat
Commit working code to git
Consider fresh chat if over 50 messages

Every Feature:

Ask about costs first
Request simpler alternatives
Verify patterns are current
Test each step before continuing
Get summary before ending

Never:

Store images as base64 in database
Accept 500+ line code dumps
Use same chat for different projects
Trust without verifying
Skip error handling

Final Partnership Wisdom

💡 The 80/20 Rule of AI Coding

  • Your LLM writes 80% of the boilerplate
  • You provide 20% of the thinking
  • Together you're 5x faster than either alone

Remember:

  • Your LLM is brilliant but has no context
  • It's confident even when wrong
  • It doesn't know your costs or constraints
  • It can't see your files or errors
  • It wants to help but needs your guidance

The Perfect Partnership:

You:

Provide context, constraints, and decisions

LLM:

Provides code, patterns, and solutions

You:

Test, verify, and integrate

LLM:

Debug, optimize, and refine

Together: Ship faster than ever before

🤝

✅ Final Exercise: Your AI Agreement

Write this and pin it up:

MY AI PARTNERSHIP AGREEMENT

I will provide clear context

My AI will provide clear code

I will test before trusting

My AI will explain when unsure

Together we will build something amazing

Signed:

Date:

First app shipping by:

Welcome to the future of building. 🚀