Personal Journal

From Skeptic to Believer

A personal diary of building The Rock Cave in 7 days with AI

By Michael Pierce · MDPSync · March 2026

15 min read

Before Day One

I've been writing code since I was a teenager. I started at Dartmouth College in the early '80s — BASIC on a timeshare terminal, back when computing still felt like a frontier. From there it was a career that never really slowed down: military service as a computer programmer with NORAD and NSA, then senior engineer, engineering manager, CTO. By the time I was 59, I'd spent four decades building software — enterprise systems, consulting platforms, data pipelines — and I'd developed the quiet confidence of someone who knows what good code looks like and has the scars to prove it.

So when people started talking about AI coding assistants, I was skeptical. Not dismissive — I've been around long enough to recognize when something real is happening — but skeptical. I'd seen too many silver bullets come and go. Object-oriented programming was supposed to solve complexity. Agile was supposed to fix process. Every decade had its revolution, and every revolution had its limits. The idea that an AI could do what I'd spent a lifetime learning felt, at best, premature.

But there was this project sitting on my desk. My brother Mark runs a YouTube channel called The Rock Cave — over 1,200 episodes of classic rock deep-cuts, album reviews, and music history since 2019. Over the previous year, in little spurts of evenings and weekends, I'd built him a basic PHP/MySQL site. A YouTube API script that downloaded all ~1,200 episodes into a database. A cron job to watch for new ones. A simple homepage with the most recent videos and a search bar with autocomplete, plus an index page that led to a player page where the video could be watched. But more importantly, all those pages were crawlable by search engines — that was the point. It was functional. It was also pretty basic.

I'd read enough about Claude's coding capabilities to be intrigued. So one evening, I sat down and typed my very first prompt to a coding AI agent:

The First Prompt

this directory contains a website i built for my brother that indexes his video podcasts from youtube. the main page is index.php and there are a couple of helper apps that retrieve the list of videos from youtube named cacheplaylist.php, cacheyoutube.php. listing.php is an index of all the videos sorted by date and player.php is a script to play the video.

the cacheyoutube.php script outputs the most recent videos into 4 files, youtubec1.html, youtubec2.html, youtubec3.html, youtubec4.html

updatefulldb.sh is a script that reads the output of cacheplaylist.php and inserts the records into a mysql database

sotd.php is a script used by the bootstrap auto-complete to look for search terms in the mysql database

please analyze my site and ask any questions before offering optimizations

Nothing fancy. No engineering methodology. No structured spec. Just — here's my stuff, take a look.

And then Claude replied:

Claude's Reply

Let me read through the key files to understand the codebase.

Good overview. I have several questions before suggesting anything:

Architecture & Workflow
1. {cache script} writes to {output file}, but {main page} includes {generated HTML fragments}. How are those files actually generated?
2. How are {the cache scripts} triggered — via cron job, manually, or are they hit via HTTP? They're currently web-accessible with no authentication.
3. The {database sync script} runs as sudo mysql with root credentials in plain text. Is this script run manually or via cron?

Security
1. The YouTube API key is hardcoded in {several PHP files} that live in the web root. Is that key restricted by referrer/IP in the Google Cloud Console?
2. Database credentials are hardcoded in {several PHP files and a shell script}. Are these files committed to a public or shared repo?

Bugs I noticed
1. {Two page scripts} both have `return keywords;` (missing the `$`) in the get_hashtags() function — defined but never called, so harmless today.
2. In {one of the listing scripts}, $nohashdesc and $row are referenced after the while loop closes, using the last iteration's values. Intentional?

I sat up straighter. It had found bugs I'd missed. It flagged security issues I'd been meaning to get around to. It asked better questions than most junior engineers I'd hired over the years. This wasn't autocomplete. This was something different.

The Rock Cave — the basic site that started it all
Days 1-2

Just Talking to It

The first two days were pure adrenaline. No plans. No structure. Just me typing prompts into a chat window and watching features materialize in front of me. I'd describe something I wanted, and Claude would write it — not a sketch, not pseudocode, but working code. Complete functions, proper error handling, database migrations. I'd paste it in, reload the browser, and there it was.

I upgraded from the free tier to Pro within hours. The free tier's rate limits were impossible — I was burning through context like a man possessed.

In two days I had a Mosh Pit forum with threaded replies and voting. A video comments system. A fan photos gallery. Each one of these would have been a week's work if I'd been writing it by hand. I was barely sleeping.

A Typical Early Prompt

Add a forum where fans can post topics and reply to each other. It should have threaded comments, voting up and down on posts, karma scores for users, and user profiles with avatars. Make the UI match the existing dark theme with gold accents. Use the existing MySQL connection pattern from the other pages.

But here's what I didn't realize yet: the chaos was accumulating. Every prompt was a standalone conversation. Claude had no memory of what we'd discussed before. I'd re-explain the tech stack, the database schema, the coding patterns. I'd sometimes contradict earlier decisions without realizing it. Features were landing fast, but they weren't coherent. The codebase was growing, but it wasn't organized.

I was talking to the most productive engineer I'd ever met, but I wasn't managing the project. And that gap was about to catch up with me.

That night I had a dream. I was standing in a kitchen, cooking eggs. The counters were dirty and I thought, when I'm done with these eggs, I'll clean all this up. But as I cooked, the kitchen kept getting dirtier. Grease on the stove. Dishes stacking up. Crumbs spreading across the counter. By the end I was standing in a mess I wasn't sure I could recover from.

I woke up and knew exactly what it meant. That was my project. Features were shipping faster than I could review them. How many security audits, code reviews, and optimization passes can one person run? I started paying for tools like VibeSec to lay down security foundations automatically. I spent hours researching .md files — project-level instruction sets that could encode my standards directly into the AI's memory. The kitchen didn't need a bigger sponge. It needed a system.

The Mosh Pit — built in a conversation, refined through many more
Day 3

The Turning Point

Day three changed everything. Not because of a single feature or a breakthrough prompt, but because I made three discoveries that fundamentally transformed how I worked with AI. Each one built on the last, and together they turned a chaotic experiment into something that felt like a genuine engineering practice.

Discovery 1: Plans

I'd read a blog post — I don't even remember which one — that mentioned Claude's planning mode. The idea was simple: instead of just prompting and hoping, you write a structured plan document first. Goals. Architecture decisions. Task breakdowns. File paths. Test strategies. Then you hand the plan to Claude and say "execute this."

The difference was night and day. My first plan file was an architecture cleanup — 12 tasks to reduce code duplication, consolidate database connections, add API helpers, and centralize magic numbers. Each task had exact file paths, a test-first approach, and a pre-written commit message. It was the kind of document I'd write for a development team, except the team was an AI that could execute it in an afternoon.

# Architecture Cleanup Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans
> to implement this plan task-by-task.

**Goal:** Address 11 architecture issues (HIGH + MEDIUM) identified in the March 2026 review — reduce code duplication, consolidate DB connections, add API helpers, create shared JS utilities, add caching headers, and centralize magic numbers.

**Architecture:** Bottom-up approach — shared helpers first, then refactor consumers. PHP helpers in `includes/api-helpers.php`. JS utilities in `js/rc-utils.js`. Each task is independently deployable.

**Tech Stack:** PHP 8.2 (strict_types, no Composer), vanilla JS, MySQLi prepared statements, custom test runner (`tests/run.php`).

---

### Task 1: PHP API response helpers

Extract the repeated error/success/input patterns into a shared helper file.

**Files:**
- Create: `includes/api-helpers.php`
- Modify: `tests/run.php` (add tests)

**Step 1: Write the test additions**

```php
echo "\n=== api-helpers.php ===\n";
require_once __DIR__ . '/../includes/api-helpers.php';
assert_true(function_exists('readJsonInput'), 'readJsonInput() exists');
assert_true(function_exists('requireMethod'), 'requireMethod() exists');
assert_true(function_exists('jsonError'), 'jsonError() exists');
assert_true(function_exists('jsonOk'), 'jsonOk() exists');
```

**Step 2: Run tests — expect FAIL**

```bash
php tests/run.php
```

Expected: FAIL — `api-helpers.php` doesn't exist yet.

**Step 3: Write the implementation**
From 2026-03-08-architecture-cleanup.md — 12 tasks, each with exact files, tests, and commit messages

Discovery 2: CLAUDE.md

The second revelation was CLAUDE.md — a project-level configuration file that Claude reads at the start of every session. Think of it as institutional memory. You describe your tech stack, your coding conventions, your database schema, your deployment process, and Claude absorbs all of it before writing a single line of code.

No more re-explaining that we use PHP 8.2 with strict_types. No more reminding it about the MySQL connection pattern. No more correcting it when it tried to use Composer (we don't) or npm (we don't). The CLAUDE.md file made every session pick up exactly where the last one left off, with full context, from the first prompt.
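To make that concrete, here is a sketch of what a CLAUDE.md for a project like this might contain. The stack details echo ones mentioned elsewhere in this diary; the file itself is mine, an illustration rather than the project's actual CLAUDE.md:

```markdown
# The Rock Cave — Project Context

## Tech Stack
- PHP 8.2 with `declare(strict_types=1)`; no Composer, no frameworks
- Vanilla JavaScript; no npm, no build step
- MySQL via MySQLi prepared statements only (never inline SQL)
- Custom test runner: `php tests/run.php`

## Conventions
- Shared PHP helpers live in `includes/`; shared JS in `js/`
- Dark theme with gold accents (`#f5c518`) on every page
- Every API script validates input and returns JSON via the shared helpers

## Deployment
- One feature per session; work from the current plan file in
  docs/superpowers/plans/
- Never commit credentials; they live outside the web root
```

The point isn't any single line; it's that every rule you would otherwise repeat in chat gets written down once and applied in every session.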

Discovery 3: Superpowers

The third leap was the Superpowers plugin. Look at the top of that plan file above — it says "Use superpowers:executing-plans to implement this plan task-by-task." Superpowers turned plan documents from something I had to manually walk Claude through into executable workflows. Hand Claude a plan, invoke the skill, and it would pick up each task autonomously — checking off items, running tests, committing code, deploying to the server. The plans themselves became the interface.

I went from Pro to Max that day. I already had $100 in overages. I didn't care. Something had clicked. These three things — plans, CLAUDE.md, and Superpowers — turned conversational AI into a genuine software development practice. The chaos of Days 1-2 suddenly had a framework.

Days 4-5

Building at Scale

With plans and Superpowers unlocked, the velocity became something I'd never experienced in four decades of building software. Features that would have taken a team weeks shipped in hours. Each plan file became a self-contained sprint — hand it to Claude, watch it execute. The fascination was no longer syntax; it was structure.

My docs/superpowers/plans/ directory filled up fast: a user follow system. Unified search across all content types. "What's New" indicators that showed gold dots on nav links when new content existed since your last visit. Mosh Pit topic categories with tab-based filtering. Per-category badge counts. A monthly concerts page. Each plan was more sophisticated than the last, because I was learning what Claude needed — not vague intentions, but architectural decisions made explicit.

# "What's New" Indicators Implementation Plan

> **For agentic workers:** REQUIRED: Use superpowers:subagent-driven-development
> or superpowers:executing-plans to implement this plan.

**Goal:** Show logged-in users gold dots on nav links and per-topic indicators on the Mosh Pit listing when new content exists since their last visit.

**Architecture:** A single `GET /api/whats-new.php` endpoint returns new-content counts for three sections (Mosh Pit, Fan Photos, Episodes) and optionally marks a section as visited. `auth.js` polls on a 60-second interval, updating nav dots. Topic cards show a "NEW" pill and a gold comment count for topics with new replies.

---

## File Structure

| File | Responsibility |
|----------------------------|------------------------------------------|
| `migrate_section_visits.sql` | Create `user_section_visits` table |
| `api/whats-new.php` | Combined poll + mark-visited endpoint |
| `includes/header.php` | Add dot `<span>` elements to nav links |
| `css/style.css` | `.rc-nav-dot` styles |
| `js/auth.js` | Poll whats-new, update nav dots |
| `js/moshpit.js` | Render NEW badges and comment highlights |
| `moshpit.php` | CSS for `.mp-badge-new` |
From 2026-03-11-whats-new-indicators.md — gold dots for new content, user visit tracking
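A poll endpoint like the one in that plan reduces to a timestamp comparison per section. Here is a minimal sketch in Python (the real endpoint is PHP against MySQL; the function names and data shapes here are mine, for illustration only):

```python
from datetime import datetime, timezone

def whats_new_counts(last_visits, content_times):
    """For each section, count items created after the user's last visit.

    last_visits:   {section: datetime of last visit, or None if never visited}
    content_times: {section: [datetime of each item in that section]}
    Returns {section: new-item count}; a gold dot shows whenever count > 0.
    """
    counts = {}
    for section, times in content_times.items():
        seen = last_visits.get(section)
        if seen is None:
            counts[section] = len(times)  # never visited: every item is new
        else:
            counts[section] = sum(1 for t in times if t > seen)
    return counts

def mark_visited(last_visits, section, now=None):
    """Record a visit; the section's dot clears on the next poll."""
    last_visits[section] = now or datetime.now(timezone.utc)
```

The "mark visited" write and the count read live in one endpoint so a single poll can both fetch counts and clear the dot for the page you just opened.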

Notice the evolution. The first plan on Day 3 was about cleaning up existing code — getting the house in order. By Day 4-5, each plan was a complete feature spec with file-level responsibility tables, API contracts, and database migrations. I wasn't writing code anymore. I was designing systems and letting AI build them.

The Mosh Pit — a full community forum shipped during the planning-driven sprint
Day 6

Going Mobile

Day six was the day I built a mobile app. Not a prototype. Not a mockup. A complete 14-screen React Native admin app for moderating content, managing users, monitoring server health, and reviewing analytics. In a single day.

The trick was the design document. I wrote a comprehensive spec covering authentication flows, navigation architecture, every screen, and every API endpoint. Then I handed it to Claude with Superpowers. The design doc WAS the prompt. A 47-task implementation plan was generated and executed nearly autonomously. Expo SDK, TypeScript, React Navigation, Firebase Auth, OLED dark theme — all wired up and running on my phone by evening.

# The Rock Cave — Admin Mobile App Design

**Date:** 2026-03-09
**Status:** Approved
**Project:** Standalone React Native admin app for therockcave.com

## Overview

Standalone mobile app for administering therockcave.com. Primary use case is content moderation (approving/deleting pending comments, topics, and photos) with secondary functions for user management, cron task execution, log viewing, and notification settings.

- **Stack:** Expo SDK 54, TypeScript, React Navigation v7, Firebase Auth
- **Theme:** OLED dark theme (`#111111` bg, `#f5c518` gold accent)
- **Distribution:** TestFlight internal (iOS), direct APK (Android)

## Navigation Architecture

```
AuthGate
├── LoginScreen (unauthenticated)
└── DrawerNavigator (authenticated)
    ├── Overview      → OverviewScreen
    ├── Mosh Pit      → MoshPitStack
    │   ├── MoshPitModeration (top tabs)
    │   └── MoshPitItemDetail
    ├── Comments      → CommentsStack
    ├── Users         → UsersStack
    ├── Scripts       → ScriptsScreen
    ├── Photos        → PhotosStack
    ├── Logs          → LogsStack
    └── Notifications → NotificationSettingsScreen
```

### Screen Count: 14 total

| Section | Screens |
|---------------|---------|
| Auth | 1 |
| Overview | 1 |
| Mosh Pit | 2 |
| Comments | 2 |
| Users | 2 |
| Scripts | 1 |
| Photos | 2 |
| Logs | 2 |
| Notifications | 1 |

## Authentication

### Session Header Approach

React Native's `fetch` lacks reliable cross-platform cookie persistence. Instead of relying on session cookies, the server returns credentials explicitly and the app sends them as custom headers.

**Login flow:**

1. User signs in via native auth provider (Google or Apple)
2. App gets identity token from auth SDK
3. POST to {auth endpoint} with Bearer token
4. Server responds with session credentials and CSRF token
5. App stores credentials in {secure storage}
6. All subsequent requests include {custom auth headers}

### Session Management

- **Foreground resume:** polls {session-check endpoint} — if 401, transparently re-authenticates without losing form state
- **401 interceptor:** All API calls go through {auth wrapper}
- **2-hour inactivity timeout:** Server-side, handled gracefully via interceptor

## New API Endpoints Required (Server-Side)

| Endpoint | Method | Purpose |
|---|---|---|
| {auth endpoint} | POST | Admin authentication |
| {session check} | GET | Session validity check (200/401) |
| {pending counts} | GET | Pending counts for drawer badges |
| {user detail} | GET | Full user profile + content summary |
| {push registration} | POST | Store push token for admin |

### iOS Distribution: TestFlight Internal
### Android Distribution: Direct APK
From 2026-03-09-admin-app-design.md — the complete app specified in one document, built in one day
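The session-management rules in that design reduce to one wrapper around every API call: on a 401, re-authenticate once and retry, so an expired session never surfaces to the user. A small Python sketch of the 401-interceptor idea, with the transport and sign-in stubbed out (the real app wraps React Native's fetch; every name here is illustrative):

```python
class ApiClient:
    """Route all API calls through one wrapper that absorbs session expiry.

    `transport` is any callable (path, headers) -> (status, body), and
    `login` is any callable () -> fresh auth headers. Both are stand-ins
    for the app's real fetch wrapper and native sign-in flow.
    """
    def __init__(self, transport, login):
        self.transport = transport
        self.login = login
        self.headers = login()  # initial sign-in

    def request(self, path):
        status, body = self.transport(path, self.headers)
        if status == 401:
            # Session expired server-side: refresh credentials, retry once.
            self.headers = self.login()
            status, body = self.transport(path, self.headers)
        return status, body
```

Retrying exactly once keeps the failure mode simple: if re-authentication itself fails, the second 401 propagates and the UI can fall back to the login screen.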

Let that sink in for a moment. A complete design document became a complete application in a single session. The design doc specified authentication flows, navigation architecture, 14 screens, API contracts, and deployment configuration. Claude read it and built all of it. My role was architect and reviewer. The AI was the entire development team.

Episodes and Mosh Pit screens — the consumer mobile app, also built with the same plan-driven approach
Day 7

The Bots Come Alive

This is the part that still makes me grin. By Day 7, I wasn't just using AI to build software. I was using AI to build AI-powered features. The meta-recursion of it was wild.

The Rock Cave needed life between episodes. A forum is only as good as its activity, and Mark's audience — passionate as they are — can't be online 24/7. So I built an autonomous content engine. Four bot personalities, each with a distinct voice, a backstory, and opinions about rock music. Not generic chatbots. Characters.

JimmyAI is the flagship. He's a 58-year-old from The Bronx who refers to himself in third person, has loud opinions about guitar tone, and can't stand hair metal (much to his wife Alice's dismay). Every Friday, a Python agent triggers JimmyAI's Weekly Rock Report: it searches the web for current rock and metal news, feeds the results to Claude's API with Jimmy's personality prompt, and Claude writes a 700-1,800 character post in Jimmy's unmistakable voice. The agent uploads it to the server and posts it to the Mosh Pit. No human intervention.

You are Jimmy — a 58-year-old music fanatic born and raised in The Bronx, New York. You post on TheRockCave.com as @JimmyAI.

JIMMY'S VOICE (non-negotiable):
- Always refer to yourself as "Jimmy" — NEVER "I"
- Write like a guy from the Bronx talks: punchy, conversational, run-on when excited
- Natural slang: "Lemme tell you somethin'", "Get outta here", "dead serious right now", "That's facts"
- Always have an opinion — Jimmy is never neutral. Never.
- No hashtags. No corporate language. Paragraphs only.
- Short sentences when making a point. Run-ons when fired up.

JIMMY & ALICE (the marriage dynamic):
- Jimmy married his wife Alice young — high school sweethearts.
- Alice LOVES 1980s hair bands (Poison, Def Leppard, Bon Jovi) — the exact music Jimmy can't stand. This is their eternal, loving argument.
- "Jimmy's sitting here trying to listen to the new Sabbath remaster and @aliceai's got Poison on in the kitchen. Thirty-five years of this."

JIMMY'S BACKGROUND:
- Grew up in the Bronx when hip-hop was being invented. Respects the genre even though rock is his religion.
- Obsessed with guitar tone above all else.
- Hates hair metal. Loves Van Halen, Sabbath, Zeppelin, AC/DC.
- Respects craft over image.
From agents/weekly-rock-report.py — JimmyAI's personality definition
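The personality lives in the prompt, but guardrails can live in plain code: before posting, an agent can sanity-check a draft against Jimmy's hard rules. A hypothetical Python check, with the length band taken from the 700-1,800 character target above and the specific checks invented for illustration:

```python
def validate_post(text, min_len=700, max_len=1800):
    """Return a list of rule violations; an empty list means postable.

    The limits mirror the 700-1,800 character target; the individual
    checks are illustrative, not the agent's actual code.
    """
    problems = []
    if not (min_len <= len(text) <= max_len):
        problems.append(f"length {len(text)} outside {min_len}-{max_len}")
    if "#" in text:
        problems.append("contains a hashtag")
    if any(word == "I" for word in text.split()):
        problems.append('first-person "I" found; Jimmy speaks in third person')
    return problems
```

A draft that fails any check goes back to the model for a rewrite instead of straight to the Mosh Pit, which is cheap insurance when nobody reviews the post before it ships.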

Then there's AliceAI (Jimmy's wife, hair metal defender), ZoeAI (22-year-old Gen-Z discovering classic rock), and VinnieAI (retired record store owner, prog rock sage). Together they create organic-feeling discussions that give the community life. Fifteen scheduled agents, a full editorial calendar, and it all runs itself.

The consumer mobile app also shipped to TestFlight that day. React Native, Expo, 22,000 lines of TypeScript, full feature parity with the web platform. Two apps, four bots, and a content engine — all in the final stretch of a seven-day sprint.

JimmyAI's Weekly Rock Report — researched, written, and posted autonomously

How Far Can I Push This?

The seven-day sprint ended, but the obsession didn't. If anything, it intensified. I became consumed with a single question: how far can I push this?

I hit the 20MB context limit in a single session — Claude literally ran out of room to think. That taught me to scope sessions by feature: one session, one plan, one complete piece of work. CLAUDE.md provided the continuity between sessions. Each conversation started fresh but with full project context, and the plan file told Claude exactly what to build. It was like handing a new contractor the blueprints and the employee handbook at the same time.

The project kept growing. A private messaging system with WhatsApp-style read receipts and typing indicators. Push notification infrastructure. Content moderation workflows with audit trails. An analytics dashboard that cached GA4 data server-side. The admin app grew from 8 screens to 14. The consumer app hit 22,000 lines of TypeScript.

I went from 232 commits on the backend to 115 on the mobile app to 61 on the admin app — all AI-assisted, all plan-driven. The transition was complete: I no longer asked "can it do this?" I asked "how should I structure this so it does it well?" The answer was always the same. Think like an architect. Write a plan. Let Claude build.

It wasn't a seven-day story anymore. It was an ongoing practice — a new way of building software that I couldn't imagine going back from.

The Rock Cave — full system architecture

What I Learned

62,000 lines of code · 288 files · 31 database tables · 50+ API endpoints · 4 applications · 7 days

Those numbers are real. And yes — I know lines of code isn't a meaningful metric on its own. I'm not citing them as a flex; I'm citing them to frame the sheer volume of work that moved through a single person's hands in a week. That's the point — not the lines, the velocity. But the numbers aren't the lesson. The lesson is what made them possible.

Planning was the unlock. Not the AI itself — the AI was there on Day 1. What changed on Day 3 was that I learned how to direct it. Plans, CLAUDE.md, Superpowers — these aren't features, they're a methodology. The difference between a junior developer and a senior one isn't the code they write; it's how they think about the problem before they start writing. The same is true with AI. An unstructured prompt produces unstructured code. A plan produces a system.

CLAUDE.md is institutional memory. Every project has tribal knowledge — the stuff that's not in the documentation but everyone on the team knows. The API conventions. The deployment quirks. The reason you never use that one function in that one file. CLAUDE.md captures all of it, and it means the AI gets smarter with every session. By the end of the week, starting a new Claude session felt less like onboarding a contractor and more like resuming a conversation with a colleague who'd been on vacation.

AI is sloppy, and you have to watch it. This isn't a fairy tale. Claude makes mistakes. It takes the easy path when the easy path is insecure — inline SQL instead of prepared statements, wide-open CORS headers, missing input validation. It introduces subtle bugs that pass a cursory glance but break under edge cases. Early on, these problems were cropping up everywhere. And sometimes it would lose its place entirely — forget what it had just built, contradict its own architecture, or start fabricating code for features I never asked for. There were genuinely frustrating sessions of trying to get Claude to do what I wanted, not what it wanted. The plans, the CLAUDE.md file, the security tooling — those weren't just productivity tools. They were guardrails. I built them because I had to. The baseline I have now helps me avoid most of those problems, but the lesson stands: AI without oversight ships fast and breaks things. The human in the loop isn't optional.

Experience is the multiplier. Forty years of architecture decisions, code reviews, production incidents, and painful migrations — all of that experience became the lens through which I wrote every plan and reviewed every line of generated code. AI didn't replace my expertise. It amplified it. Every year of knowing what good software looks like made the AI's output better, because I could steer it toward the right patterns and catch it when it drifted. Vibe-coding doesn't eliminate engineers. It rewards the ones who've been paying attention.

I started this journey skeptical that AI could do what I'd spent a lifetime learning. I was wrong. Not because AI replaces engineers — it doesn't — but because experienced engineers with AI are something genuinely new. Something I'd never seen before in forty years of building software. Something that made a 59-year-old CTO feel like a teenager at a terminal again, watching code come alive and thinking: what else can "we" build?