Lessons from Vibecoding
You ask an AI to write code.
Then you ask it to explain what it just wrote.
Then you realize it hallucinated half the logic.
Then you fix it.
Then it breaks something else.
Then you repeat the cycle until your sense of time dissolves.
It’s not elegant. It’s not clean. It is stubborn, iterative, mildly deranged, and fueled by caffeine and irritation.
It is also shockingly effective.
Not because it’s good engineering — it often isn’t — but because it keeps momentum alive. You are never blocked. You are never staring at a blank editor wondering where to start. You are always reacting, correcting, steering.
It feels less like programming and more like arguing with a very talented, very forgetful collaborator.
[source]. I love this description of the vibecoding process because it accurately describes my last week. I am the furthest thing from a software developer, but the following excerpt describes me perfectly too:
when you don’t actually know how to build something (software-wise), but you know what the outcome should look like, and you refuse to stop asking questions until the shape emerges.
I've thought so many times that I know how to articulate my needs and describe my vision for how something should look, act, and respond, but I don't have the coding chops to build it. I could learn, but I would have to wait until I retire from my full-time job to devote enough time to do it the way I want to. In the meantime, if I had someone to bounce ideas off, who would run the code for me without mocking me while letting me fail spectacularly, I could perhaps build something. That's why vibecoding is so exciting to those of us who are like The Little Engine That Could.
I can empathize with the hardcore developers who are seeing their godlike status diminish and watching the mysterious allure of their work slowly stripped away. I'm sure car mechanics felt the same way when cars were gradually computerized and suddenly a diagnostic tool and an error code could tell you what was wrong.
More importantly, it accelerates experimentation, which is limited only by your curiosity and the quality of the questions you can ask. You can go back to basics or offer your own hunch about what the problem might be. The AI may or may not find your input useful, and if it does not, it will not summarily dismiss you or call you stupid; instead it will explain why your suspect wouldn't cause the problem and how a certain bit of code has already accounted for that possibility. That's where the real learning occurs. It is an assistant and a patient teacher at the same time. Sometimes it's also best to do nothing even after you have identified an issue and worked out a fix; the solution is often not worth the risk.
For example, mostly to satisfy my anal-retentive self, I was curious whether any images on my blog's backend were no longer linked to any posts or pages. I had spent a ton of time searching, replacing, and editing images, so it was natural that plenty of superseded files were left behind. I wanted to audit my Ghost blog to identify such images. ChatGPT told me I needed direct access to the live /content folder; however, my Ghost blog is on managed hosting, so I don't have that access.
But I thought I could always download a backup copy and run the script against the local copy to find such images. I was able to download the SQL database file for this comparison. Using a full content backup and the database dump, I compared (with an AI-generated script, of course) every image file on disk against every image reference found in the database. This let me separate images that are actively referenced from those that likely aren't used anywhere on the site.
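The actual AI-generated script isn't reproduced here, but the core idea can be sketched in a few lines of Python, assuming the content backup is unzipped locally and the database export is a plain-text SQL dump (the paths and the regex below are placeholders for illustration, not my real setup):

```python
import re
from pathlib import Path

# Placeholder paths: the unzipped content backup and the SQL export
IMAGES_DIR = Path("backup/content/images")
SQL_DUMP = Path("backup/ghost-db.sql")

# Every image file in the backup, keyed by its path relative to /content/images
disk_files = {
    p.relative_to(IMAGES_DIR).as_posix(): p
    for p in IMAGES_DIR.rglob("*")
    if p.is_file()
}

# Every /content/images/... reference that appears anywhere in the database dump
# (post HTML, feature images, settings, etc. all end up as text in the dump)
dump_text = SQL_DUMP.read_text(encoding="utf-8", errors="ignore")
referenced = set(re.findall(r"""/content/images/([^\s"'\\)<>]+)""", dump_text))

# Anything on disk that the database never mentions is a candidate orphan
orphans = sorted(rel for rel in disk_files if rel not in referenced)

print(f"{len(disk_files)} files on disk, {len(referenced)} referenced, {len(orphans)} orphans")
for rel in orphans:
    print(f"{disk_files[rel].stat().st_size / 1_048_576:8.2f} MB  {rel}")
```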
The analysis showed that most “orphan” images were original uploads marked with a _o suffix. In nearly all cases, a non-_o version of the same image existed and was the one actually used by Ghost. Deleting these originals would have reclaimed about 2.5 GB of disk space without breaking any content. However, because I don't have direct access to the live /content folder, and because, as the person who manages my blog's hosting pointed out, these originals serve as a useful safety net for future edits or reprocessing, I decided not to delete anything. To its credit, ChatGPT also strongly encouraged me not to delete them, even if they took up space.
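The _o pattern itself is easy to confirm with a small extension of the sketch above: for each orphan, strip Ghost's original-upload suffix and check whether the resulting filename is one the database does reference (again, an illustration rather than the exact script I ran):

```python
from pathlib import PurePosixPath

def non_o_variant(rel: str) -> str:
    """Map Ghost's original upload back to its processed name: photo_o.jpg -> photo.jpg."""
    p = PurePosixPath(rel)
    if p.stem.endswith("_o"):
        return str(p.with_name(p.stem[:-2] + p.suffix))
    return rel

# Orphans that are "_o" originals whose non-_o counterpart is actually referenced
o_originals = [
    rel for rel in orphans
    if rel != non_o_variant(rel) and non_o_variant(rel) in referenced
]

reclaimable_gb = sum(disk_files[rel].stat().st_size for rel in o_originals) / 1_073_741_824
print(f"{len(o_originals)} '_o' originals with a referenced counterpart, "
      f"{reclaimable_gb:.2f} GB reclaimable")
```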
I've said this before and will say it again: use AI to assist you, not to generate content from scratch. Let it accelerate and guide your process. Let it help you explore fields and domains you have only cursory knowledge of. Trust, but verify. I've noticed that if you ask in an inquisitive tone why something did not work, it will provide a critical, thought-out response. If you state emphatically that a particular thing was the reason something went wrong, it will simply take your word for it and agree with you. Approach it with a curious mind and engage in a dialogue, instead of going in assuming you know best and, after all, what does a machine know anyway.