"AI keeps forgetting what we discussed."
"It gave me code that contradicts itself."
"I spend more time fixing AI outputs than writing it myself."
I hear these complaints constantly from developers trying to use AI tools. And I get it - if you're asking one AI to simultaneously architect, plan, code and debug, you're going to have a bad time.
That's not how these tools do their best work.
I've been using what I call the Four Elements framework: treating AI as a team of specialized agents rather than one generalist assistant. Each agent has one job. Each focuses on what it does best.
The results speak for themselves. The difference wasn't the AI - it was the approach.
Here's how it works in practice.
The problem everyone's having (but nobody's talking about)
Let's be honest about what actually happens when most developers use AI:
The "traditional" AI loop:
1. "Hey AI, help me build this feature"
2. Gets some code
3. "Actually, can you change this part?"
4. Gets completely different code that breaks the first part
5. "No wait, that's not what I meant"
6. AI apologizes, starts over, context is lost
7. Repeat until you give up and write it yourself
The symptoms are everywhere:
- Lost context between conversations
- Mixing concerns (design + implementation + debugging all at once)
- "AI doesn't understand what I want"
- Inconsistent outputs
- No project memory
- Having to re-explain everything constantly
The problem isn't the AI.
The problem is that we're asking it to wear too many hats at the same time.
Imagine hiring someone and saying: "You're the architect who designs our system, AND the project manager who plans our work, AND the developer who codes it, AND the QA lead who reviews it. Oh, and switch between these roles instantly based on whatever I ask you."
That person would fail. Not because they're incompetent, but because role confusion kills effectiveness.
AI has the same problem.
The Four Elements framework: your AI development team
Here's the better way. Think of AI as four specialized teammates, each with a distinct role:
🌪️ The Architect (Air)
Role: Plans and designs structure, not implementation
The Architect is your systems thinker. It sees the big picture, defines boundaries and maps out data flows. It doesn't write code - it designs the what and the why.
When to use Architect mode:
- Starting a new project or feature
- Defining system architecture
- Making design decisions
- Creating the blueprint everyone else will follow
Example prompt:
As an Architect, design a Course Progress Tracker that:
- Tracks which courses users complete
- Calculates completion percentage
- Unlocks chapters sequentially
- Provides a progress summary API
Output a markdown file with data models, API endpoints,
business logic, and implementation TODOs.
What you get:
A clean architecture.md file that becomes your single source of truth. No code yet - just the blueprint.
🌊 The Orchestrator (Water)
Role: Delegates and organizes work
The Orchestrator is your AI project manager. It reads the architecture and breaks it down into actionable tasks with clear priorities and dependencies.
When to use Orchestrator mode:
- Translating designs into implementation plans
- Breaking complex work into subtasks
- Determining what needs to be built in what order
- Creating a roadmap for your Coder
Example prompt:
As an Orchestrator, analyze architecture.md and create
a prioritized implementation plan with specific subtasks
for the Coder and validation points for the Debugger.
What you get:
A structured implementation-plan.md with phases, dependencies, and clear handoffs. The Orchestrator tells you: "Build the data models first, THEN the business logic, THEN the API layer."
🔥 The Coder (Fire)
Role: Builds with speed and precision
The Coder is your implementation specialist. Fast, focused and literal. Give it a specific task from the plan, and it'll deliver exactly that - nothing more, nothing less.
When to use Coder mode:
- Implementing specific, well-scoped tasks
- Following established patterns
- Generating code based on clear specifications
- Building what's already been designed and planned
Example prompt:
As a Coder, implement Task #3 from the implementation plan:
"Create database migration for Course and Chapter models."
Follow the data model specs in architecture.md.
Use Rails conventions. Include timestamps and indexes.
What you get:
Clean, focused code that solves ONE problem well. The Coder doesn't second-guess the architecture or plan - it just builds.
🪨 The Debugger (Earth)
Role: Tests, reviews and perfects
The Debugger is your critical reviewer. It examines code with a QA mindset, looking for edge cases, security issues, performance problems and opportunities for improvement.
When to use Debugger mode:
- Code review after implementation
- Finding bugs and edge cases
- Suggesting tests and improvements
- Validating against original requirements
Example prompt:
As a Debugger, review this code for:
- Edge cases and potential bugs
- Security vulnerabilities
- Performance concerns
- Missing validations
Suggest unit tests that should be written.
What you get:
A detailed review pointing out what you missed, what could break, and what tests you need. The Debugger catches what everyone else overlooked.
The Fifth Element: YOU
Here's the thing about all four of these specialized AI agents: They're powerful, but they're still just tools.
- The Architect can design brilliant systems... but can't decide what problems are worth solving.
- The Orchestrator can plan perfectly... but can't know your team's constraints or priorities.
- The Coder can implement flawlessly... but can't judge if the feature actually makes sense.
- The Debugger can find every bug... but can't decide which ones matter most.
You are the Fifth Element. You bring:
- Judgment: Should this feature exist at all?
- Context: What are our business constraints?
- Intuition: Does this feel right?
- Purpose: Why are we building this?
- Decision-making: Is it done, or do we iterate?
AI brings the power of four specialized elements. But only you bring direction and meaning.
Your role isn't to do everything yourself. Your role is to lead this AI team effectively.
And here's the uncomfortable truth (hehe):
AI won't replace developers. But developers who learn to lead AI will replace those who don't.
This is your opportunity to get ahead of that curve.
The Secret Sauce: Context Persistence with Markdown Files
Now here's the technique that makes this framework actually work in practice.
The problem AI has always had: It forgets. Conversations have token limits. Context gets lost. You end up re-explaining the same thing over and over.
The solution: Markdown files as persistent memory.
Think of architecture.md or project-plan.md as your team's shared memory - like a living document that all your AI agents can read and update.
Here's how it works:
architecture.md
├── Design decisions
├── Data models
├── API endpoints
├── Business logic
└── Implementation TODOs
The magic:
1. The Architect writes the initial design into architecture.md
2. The Orchestrator reads architecture.md and creates implementation-plan.md
3. The Coder reads both files to implement features correctly
4. The Debugger reads everything to understand what SHOULD happen
Each agent has access to the same source of truth. They're all on the same page. Literally.
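To make this concrete, here's the shape such a file can take. The fragment below is illustrative - the model fields, endpoint, and TODO items are placeholder examples, not the actual generated design:

```markdown
# Course Progress Tracker - Architecture

## Data Models
- **Progress**: belongs to User and Chapter; stores `completed_at`
  - One record per (user, chapter) pair

## API Endpoints
- `GET /api/courses/:id/progress` - completion percentage for the current user

## Business Rules
- A chapter unlocks only when all previous chapters are completed

## Implementation TODOs
- [ ] Migration for the Progress table
- [ ] Sequential unlock logic
```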
Benefits:
- ✅ Shared team memory - No more "AI forgot what we discussed"
- ✅ No hallucination - AI references actual project documents, not imagination
- ✅ Consistent output - Every agent works from the same facts
- ✅ Git-friendly - These are just markdown files; version control them naturally
- ✅ Human-readable - You can review and edit the "team memory" anytime
This is what finally solved the context problem for me. AI doesn't remember conversations well, but it's excellent at reading files.
The workflow in practice: a real example
Let me show you how this actually works with a concrete example.
My setup: I use RooCode in VS Code for this workflow, but the beauty of this framework is that it's tool-agnostic. Whether you're using KiloCode in VS Code, Cursor, GitHub Copilot, Claude Code directly, or any other AI coding assistant - the principles are the same. Pick whatever tool fits your workflow.
Project: Course Progress Tracker for an e-learning platform
Requirements:
- Track which courses users complete
- Calculate completion percentage
- Unlock next chapter sequentially
- Provide progress summary API
Step 1: Architect Mode
I open my AI tool and say:
As an Architect, design a Course Progress Tracker system for an e-learning platform.
It should track user progress through courses, calculate completion percentages,
unlock chapters sequentially, and provide a summary API.
Output your design as architecture.md with data models, endpoints,
business logic, and TODOs.
What I get: A complete architecture.md file with:
- Data models (User, Course, Chapter, Progress)
- Relationships and validations
- API endpoint specifications
- Business rules clearly documented
- A TODO list for implementation
The Architect didn't write any code. It designed the structure.
I review it, make a few edits (because I'm still in charge), and save it.
Step 2: Orchestrator Mode
Now I switch modes:
As an Orchestrator, read architecture.md and create
a prioritized implementation plan.
Break it into phases with specific subtasks for the Coder
and validation checkpoints for the Debugger.
What I get: An implementation-plan.md with:
Phase 1: Foundation
- Task 1: Database schema setup
- Task 2: Create base models
- Task 3: Add validations
Phase 2: Core Logic
- Task 4: Progress tracking service
- Task 5: Completion calculation
- Task 6: Chapter unlocking logic
Phase 3: API Layer
- Task 7: Progress endpoints
- Task 8: Summary endpoints
Phase 4: Quality
- Task 9: Unit tests
- Task 10: Integration tests
The Orchestrator organized dependencies: "You can't build the API before the models exist." That's project management.
Small note: I prefer to run all of the following steps within the same Orchestrator session - it keeps the full context visible and progress traceable.
Step 3: Coder Mode
Now I can actually build. I pick, for example, Task 2 from the plan (assuming that Task 1 is already completed):
As a Coder, implement Task 2: Create base models
for User, Course, Chapter, and Progress
according to architecture.md. Use Rails conventions.
Include all specified relationships and validations.
What I get: Clean, focused implementation:
- Four model files with correct associations
- Validations matching the architecture
- Follows the established patterns
- Implements EXACTLY what was specified
The Coder doesn't question the design. It doesn't add features I didn't ask for. It just builds what the plan says, referencing the architecture for details.
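To give a feel for the core logic involved, here's a minimal plain-Ruby sketch of the progress rules from the architecture - no Rails, and `Course` and `ProgressTracker` are illustrative stand-ins for the generated models, not the Coder's actual output:

```ruby
# Plain-Ruby sketch of the business rules: sequential unlocking
# and completion percentage. Names are hypothetical stand-ins.
Course = Struct.new(:chapter_ids) # chapter ids in their unlock order

class ProgressTracker
  def initialize(course)
    @course = course
    @completed = [] # chapter ids the user has finished
  end

  # Sequential unlocking: a chapter is available only when
  # every chapter before it has been completed.
  def unlocked?(chapter_id)
    idx = @course.chapter_ids.index(chapter_id)
    return false if idx.nil?
    @course.chapter_ids.take(idx).all? { |id| @completed.include?(id) }
  end

  def complete(chapter_id)
    raise ArgumentError, "chapter is locked" unless unlocked?(chapter_id)
    @completed << chapter_id unless @completed.include?(chapter_id)
  end

  def completion_percentage
    return 0.0 if @course.chapter_ids.empty?
    (@completed.size * 100.0 / @course.chapter_ids.size).round(1)
  end
end
```

For a three-chapter course with the first two chapters done, `completion_percentage` returns 66.7 and the third chapter reports as unlocked.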
Step 4: Debugger Mode
Finally, quality assurance:
As a Debugger, review the model code I just generated.
Check for edge cases, security issues, performance concerns,
and missing validations.
Suggest specific unit tests.
What I get: A critical code review:
- "Missing index on user_id in progress table - will cause performance issues"
- "Chapter unlocking logic doesn't handle edge case where course gets updated"
- "Consider adding unique constraint on (user_id, chapter_id) to prevent duplicate progress records"
- Plus 8 specific unit test suggestions
The Debugger caught things the Coder missed. That's the value of specialized review.
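Review notes like these translate directly into tests. Here's a sketch of the duplicate-progress edge case in Minitest (bundled with Ruby) - the inline `ProgressRecorder` is a hypothetical stand-in for the real model, just enough to make the tests runnable:

```ruby
require "minitest/autorun"

# Hypothetical stand-in for the Progress model, mirroring the
# reviewer's unique (user_id, chapter_id) suggestion in plain Ruby.
class ProgressRecorder
  def initialize
    @records = {} # keyed by [user_id, chapter_id]
  end

  # Returns false instead of storing a duplicate completion record.
  def record(user_id, chapter_id)
    key = [user_id, chapter_id]
    return false if @records.key?(key)
    @records[key] = Time.now
    true
  end

  def count
    @records.size
  end
end

class ProgressRecorderTest < Minitest::Test
  def test_records_first_completion
    assert ProgressRecorder.new.record(1, 10)
  end

  def test_rejects_duplicate_completion
    recorder = ProgressRecorder.new
    recorder.record(1, 10)
    refute recorder.record(1, 10) # edge case: same user, same chapter
    assert_equal 1, recorder.count
  end
end
```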
The complete cycle
- Architect breathes life into ideas
- Orchestrator makes them flow like water
- Coder ignites them with fire
- Debugger grounds them in earth
But I - the developer - give them purpose and decide when it's done.
Best Practices: How to Actually Make This Work
After months of using this framework, here's what actually matters:
✅ DO: Keep one markdown file = team memory
Your architecture.md and implementation-plan.md files are sacred.
Update them as the project evolves.
They're your continuity.
✅ DO: Be role-specific in prompts
Say "As an Architect..." not "Can you help me..."
This focuses the AI's thinking. Role clarity = output quality.
✅ DO: Iterate in small cycles
Design → Plan → Build → Test → Reflect
Don't try to build everything at once. One feature at a time.
✅ DO: Review everything critically
You're the QA for your AI team. Trust, but verify. AI will make mistakesβyou catch them.
✅ DO: Version your prompts
Save what works. Build a personal library. Successful prompts from today help tomorrow.
❌ DON'T: Ask Architect to write code
That's the Coder's job. Role confusion kills quality.
❌ DON'T: Ask Coder for architectural opinions
The Coder will guess and mislead you. Architecture decisions belong with the Architect.
❌ DON'T: Skip the Orchestrator
"But I can just jump to coding!"
You can, but you'll end up with disconnected pieces that don't fit together. The Orchestrator prevents chaos.
❌ DON'T: Forget to update context files
If your markdown files are stale, your agents are working with wrong information.
❌ DON'T: Treat AI output as gospel
AI is helpful, not infallible. You're the final reviewer.
❌ DON'T: Mix multiple roles in one prompt
"Design AND implement this" confuses the AI. Pick one role per interaction.
And the most important one!
❌ DON'T: Give up after the first attempt
Prompt engineering is a skill. It gets better with practice.
Troubleshooting: When Your AI Isn't Giving Good Outputs
"My AI isn't giving me what I want!"
Here's your debugging checklist:
Check: Are you being role-specific?
- Bad: "Help me build a user system"
- Good: "As an Architect, design a user authentication system with..."
Check: Is your context file up to date?
- If architecture.md is wrong, every agent will be wrong
Check: Is the task scoped small enough?
- "Build the entire feature" β too broad
- "Implement the User model with validations per architecture.md" β perfect scope
Try: Rephrasing with more constraints
- Add specific requirements
- Reference sections of your markdown files
- Provide examples of what you want
Try: Switching modes
- Maybe you're in Coder mode when you need Orchestrator
- Maybe you need Debugger to review before continuing
The real talk: what this actually means for developers
Let me be straight with you.
AI is not going to replace developers. That's not what this is about.
But here's what IS happening: The developers who learn to lead AI effectively will become exponentially more productive than those who don't.
This isn't about survival. This is about opportunity.
Five years ago, a senior developer could build X amount of value per sprint. Today, that same developer - armed with this framework - can build 3X or 5X the value.
The developers who figure this out? They become force multipliers. They become the people who ship faster, prototype better and solve harder problems.
The shift isn't AI vs. Developers. It's AI-augmented developers vs. traditional developers.
And right now, you're early to this party.
The Bottom Line
Stop asking AI to do everything - start leading your AI team.
You have four specialized agents at your disposal:
- 🌪️ Architect: designs what should exist
- 🌊 Orchestrator: organizes how it's built
- 🔥 Coder: implements with precision
- 🪨 Debugger: perfects through review
And you - the developer - are the fifth element that brings purpose, judgment and direction.
AI won't replace you. But you can use AI to become the developer you've always wanted to be: faster, more effective, more creative and way less frustrated at 2 AM.
Go forth and lead your AI team.