| title | date | author | excerpt |
|---|---|---|---|
| Building DearDiary with AI - Lessons from Human-AI Collaboration | 2026-03-27 | Konrad Lother | What I learned from building a full-stack app using an AI coding assistant |
# Building DearDiary with AI

*A Tale of Miscommunication and Debugging*
I built DearDiary using an AI coding assistant. It was enlightening, frustrating, sometimes hilarious, and ultimately successful. Here's what I learned about human-AI collaboration.
## The Setup
DearDiary is a full-stack journaling app: Bun + Hono backend, React + Vite frontend, SQLite database, Docker deployment. Not trivial, but not rocket science either.
I gave the AI context about the project structure and my preferences, then let it work.
## The Problems We Hit

### 1. "It Should Work, But..."
The first major issue was the most classic: the AI made changes that should have worked according to its understanding, but didn't.
We consolidated environment variables into a single `.env` file with prefixes. The AI renamed most references from `DATABASE_URL` to `BACKEND_DATABASE_URL`, but missed several:
- The Prisma schema
- The test helpers
- A variable in the healthcheck config
The app failed to start. The error messages? Cryptic Prisma errors that took time to trace back to a simple env var mismatch.

**Lesson:** AI is great at systematic changes, but when it misses something, the gap is invisible to it. Always verify systematically.
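One way to make those invisible gaps visible is to fail fast at startup instead of letting Prisma complain later. A minimal sketch, assuming a hypothetical `requireEnv` helper (the variable names match this project, the helper is not its actual code):

```typescript
// Fail fast on missing env vars instead of letting the database
// layer surface a cryptic error much later in the startup path.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined>,
): string[] {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return names.map((name) => env[name] as string);
}

// Called once at backend startup, e.g.:
// requireEnv(["BACKEND_DATABASE_URL", "BACKEND_PORT"], process.env);
```

Had something like this run at boot, the mismatch would have been a one-line error naming the exact variable, not a Prisma stack trace.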
### 2. The Disappearing Routes

The AI moved the API routes into a separate file (`events.ts`) and mounted them at `/api/v1`. Simple, clean.

Except the routes now lived at `/api/v1/events` while the frontend was still calling `/events`. The AI didn't catch that the mounting path becomes part of the full route.
**Lesson:** AI understands code structure well, but context about how pieces connect across files is easily lost.
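The mismatch is easy to model: a mount prefix is prepended to every route's public path. A toy sketch in plain TypeScript (this is the path arithmetic, not Hono's actual API):

```typescript
type Route = { method: string; path: string };

// Mounting prepends the prefix to every route path,
// collapsing any doubled slashes at the join.
function mount(prefix: string, routes: Route[]): Route[] {
  return routes.map((r) => ({
    ...r,
    path: `${prefix}/${r.path}`.replace(/\/+/g, "/"),
  }));
}

// Routes as defined in a hypothetical events.ts
const eventRoutes: Route[] = [{ method: "GET", path: "/events" }];

// Mounted at /api/v1, the public path is /api/v1/events,
// not the /events the frontend was calling.
const mounted = mount("/api/v1", eventRoutes);
console.log(mounted[0].path); // "/api/v1/events"
```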
### 3. "I Fixed That"
Multiple times, the AI would say "Fixed!" and show the corrected code, but the actual file hadn't been changed. Or it would describe a solution that wasn't implemented.
This is the most dangerous mode of failure - confidence without execution.
**Lesson:** Never trust "fixed" without verification. Make it show you the actual changes.
### 4. Permission Denied

Docker entrypoint scripts kept failing with "permission denied". The AI knew about `chmod +x`, but the order of operations was wrong: the file was copied after the `chmod` ran, or the Docker build cache served a stale layer.
**Lesson:** AI knows facts, but execution order matters. Sometimes you need to walk through the sequence step by step.
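The shape of the bug and the fix, as a hedged Dockerfile sketch (file names are illustrative, not this project's actual Dockerfile):

```dockerfile
# Broken ordering: chmod modifies a file from an earlier layer,
# then a later COPY overwrites it with the unmodified copy
RUN chmod +x /entrypoint.sh
COPY docker-entrypoint.sh /entrypoint.sh

# Safer: set the mode at copy time (requires BuildKit),
# so there is no ordering or cache hazard at all
COPY --chmod=0755 docker-entrypoint.sh /entrypoint.sh
```

Setting the mode in the `COPY` itself removes the sequencing question entirely, which is exactly the class of problem the AI kept tripping over.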
### 5. The 404 Debugging Journey

Events returned 404. We checked:

- Routes: correct
- Mounting: fixed
- Auth middleware: fixed

The actual problem was the nginx port mapping: port 3000 on the host was mapped directly to the backend, not through nginx, so the frontend (served by nginx) couldn't reach the API.
**Lesson:** The AI focused on the obvious layers. The problem was in the infrastructure/configuration layer. AI needs explicit context about the full stack.
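For reference, this is the general shape of an nginx block that keeps API traffic on the same origin as the frontend (paths and the upstream name `backend` are illustrative, not this project's actual config):

```nginx
server {
    listen 80;

    # Serve the built frontend
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Proxy API calls to the backend container instead of
    # exposing the backend port directly on the host
    location /api/ {
        proxy_pass http://backend:3000;
        proxy_set_header Host $host;
    }
}
```

With everything behind one origin, there is no host-port mapping for the frontend to miss.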
## What Went Well
Despite these issues, things also went surprisingly well:
- **Feature implementation**: the core features (event capture, AI generation, search) worked on the first try
- **Consistency**: once a pattern was established, the AI maintained it consistently
- **Refactoring**: moving from multiple `.env` files to one was smooth after the initial issues
- **Documentation**: README updates, code comments, and AGENTS.md were accurate
## The Communication Patterns That Worked

### Be Specific About Failures
Instead of "it doesn't work", I'd say:

> "Events endpoint returns 404; I checked the docker logs and the route is registered."
The more context, the better the fix.
### Ask for Verification

> "Show me the exact changes you're making before committing."
This caught the "I said I fixed it" problem.
### Break Down Complex Changes
Instead of "consolidate all env vars", we did it in stages:
1. List all current env vars
2. Decide on a naming convention
3. Update the backend
4. Update the frontend
5. Update docker-compose
6. Verify
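Step 1 alone is worth automating. A toy TypeScript sketch that pulls env var names out of source text (in a real run you'd feed it file contents from disk; the `findEnvVars` helper is hypothetical, not this project's code):

```typescript
// Collect the distinct env var names a piece of source code reads.
function findEnvVars(source: string): string[] {
  const matches = source.matchAll(/process\.env\.([A-Z_][A-Z0-9_]*)/g);
  return [...new Set([...matches].map((m) => m[1]))].sort();
}

const sample = `
  const db = process.env.DATABASE_URL;
  const port = process.env.PORT ?? "3000";
  prisma.connect(process.env.DATABASE_URL);
`;
console.log(findEnvVars(sample)); // sorted, deduplicated: ["DATABASE_URL", "PORT"]
```

A mechanical inventory like this is exactly the systematic verification the renaming step was missing.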
### State What You Know Works

> "Previous similar changes worked with `docker compose build && docker compose up -d`."
Context about what has worked before helps the AI avoid untested approaches.
## The Meta-Lesson
Building with AI is like working with a very knowledgeable junior developer who:
- Has read every Stack Overflow post
- Can write code faster than you can type
- Sometimes confidently does the wrong thing
- Needs supervision, especially for changes spanning multiple files
- Gets better with clearer instructions
**The key insight:** your job becomes managing the AI, not just writing code. You need to:
- Provide good context
- Verify systematically
- Catch the invisible gaps
- Maintain the mental model of the system
## What I'd Do Differently

- **Track changes more carefully**: keep a changelog when the AI makes changes, not just the git diff
- **Test incrementally**: don't let the AI make 20 changes before testing
- **Be clearer about expectations**: "this should work out of the box" is vaguer than explicit test criteria
- **Document the debugging journey**: the process of finding issues is valuable context for future fixes
## Conclusion
DearDiary is live. The AI and I built it together, argued about typos in environment variables, debugged at 2am, and shipped something I'm proud of.
Human-AI collaboration isn't about replacing programmers. It's about amplifying what humans do well (context, judgment, verification) with what AI does well (speed, consistency, pattern matching).
The future is not "AI replaces developers." It's "developers who use AI replace developers who don't."
Now go build something with AI. Just keep an eye on those env vars.
— Konrad