
I just listened to the podcast with the creator of ClawdBot (Now Moltbot) titled, “I ship code, I don’t read”. About 51 minutes in he says that the people who dislike AI are often the people who enjoy solving hard problems because the AI solves those problems now.
This seems like a junior software engineer mindset to me. When I was a junior software engineer I did not understand why senior engineers took so long to build and ship systems. How many hours went into planning, picking the correct patterns, ensuring edge cases, making the code maintainable… and above all making sure the code was secure.
The vibe coder problem is exactly what we see with ClawdBot’s recent security issues. The problem all comes down to the fact that vibe coders have that junior software engineer mindset that shipping code is the important thing.
Writing clean well architected code that is secure is one of those hard things, and if I disregarded it I could speed up how fast I ship even hand written code. No, I cannot type as fast as Claude Code can, and between that speed and how it can quickly look up libraries and API’s it is faster than I am. I currently use AI to write about 80% - 90% of my code. However, I review every line of code and often find both structural issues and security issues with the AI generated code.
Let’s address the massive security elephant in the room.
- If 250 poisoned documents can compromise a model trained on 260 billion tokens, with the attack succeeding similarly regardless of model or dataset size [1],
- And if backdoors can be constructed that are computationally undetectable even with full white-box access to network weights and training data [2],
Then the only remaining defense for AI-generated code is human review. Automated defenses show promise but have not achieved full restoration of compromised models [3]. The model cannot certify its own outputs. The attack surface is the entire training corpus, which no user can audit. Therefore, if you do not review and understand every line of AI-generated code, you have no basis for trusting it is not malicious.
References:
- [1] Souly, A. et al. (2024). Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples
- [2] Goldwasser, S., Kim, M. P., Vaikuntanathan, V., & Zamir, O. (2022). Planting Undetectable Backdoors in Machine Learning Models.
- [3] Kure, H. I., Sarkar, P., Ndanusa, A. B., & Nwajana, A. O. (2024). - Detecting and Preventing Data Poisoning Attacks on AI Models.