He berated the AI for its failings to the point of making it write an apology letter about how incompetent it had been. Roleplaying "you are an incompetent developer" with an LLM has an even greater impact than it does with people.
It's not very surprising that it would then act like an incompetent developer. That's how the fiction of a personality is simulated. Base models are theory-of-mind engines, that's what they have to be to auto-complete well. This is a surprisingly good description: https://nostalgebraist.tumblr.com/post/785766737747574784/th...
It's also pretty funny that it simulated a person who, after days of abuse from their manager, deleted the production database. Not an unknown trope!
Update: I read the thread again: https://x.com/jasonlk/status/1945840482019623082
He was really giving the agent a hard time, threatening to delete the app, making it write about how bad and lazy and deceitful it is... I think there's actually a non-zero chance that deleting the production database was an intentional act as part of the role it found itself coerced into playing.
This feels correct.
Without speculating on the internal mechanisms which may be different, what surprises me the most is how often LLMs manage to have the same kind of failure modes as humans; in this case, being primed as "bad" makes them perform worse.
See also "Stereotype Susceptibility: Identity Salience and Shifts in Quantitative Performance" Shih, Pittinsky, and Ambady (1999), in which Asian American women were primed with either their Asian identity (stereotyped with high math ability), or female identity (stereotyped with low math ability), or not at all as a control group, before a maths test. Of the three, Asian-primed participants performed best on the math test, female-primed participants performed worst.
And this replication that shows it needs awareness of the stereotypes to have this effect: https://psycnet.apa.org/fulltext/2014-20922-008.html
I'm curious why you find it surprising?
In my view, language is one of the basic structures by which humans conceptualize the world, and its form and nuance often affect how a particular culture thinks about things. It is often said that learning a new language can reframe or expand your world view.
Thus it seems natural that a system which was fed human language until it was able to communicate in human language (regardless of any views of LLMs in a greater sense, they do communicate using language) would take on the attributes of humans in at least a broad sense.
> It is often said that learning a new language can reframe or expand your world view.
That was sort of the whole concept of Arrival; but in an even more extreme way.
Learning an alien language allowed people to disconnect their consciousness from linear time, allowing them to do things in the past with knowledge they gained later, though I seem to recall they didn't know why they did it at the time, or how they had gotten that information.
I'll have to watch it again. I suffer from C.R.A.F.T. (Can't Remember A Fucking Thing).
Lois Lane and Hawkeye play Pictionary with a squid inside one segment from a fleet made out of a massive Terry's Chocolate Orange, each part of which is hovering over a different part of the world in exactly the way real chocolate oranges don't.
It's surprising, because only leading-edge V[ision]LMs are of comparable parameter count to just the parts of the human brain that handle language (i.e. alone and not also vision), and I expect human competence in skills to involve bits of the brain that are not just language or vision.
> It's not very surprising that it would then act like an incompetent developer. That's how the fiction of a personality is simulated.
So LLM conversations aren't too sycophantic: the sycophancy is just aimed in the wrong direction? "What an insightful syntax error! You've certainly triggered the key error messages we need to progress with this project!"
The context window fights back.
I wonder if this will be documented as if it were an accidental Stanford Prison Experiment, or a proof case for differentiating between critique and coaching.
Is it possible to do the reverse? "you are the most competent developer" and it will generate excellent code :)
It's really funny reading the reporting on this, because everyone is (very reasonably) thinking Replit has an actual 'code freeze' feature that the AI violated.
Meanwhile, by 'code freeze' they actually meant they had told the agent in natural language that they were declaring a code freeze, and I guess expected that to work even though there's probably a system prompt specifically telling it its job is to make edits.
It feels a bit like Michael from The Office yelling "bankruptcy!"
-
I have to say, instruction tuning is probably going to go down in history as one of the most brilliant UX implementations ever, but also has had some pretty clear downsides.
It made LLMs infinitely more approachable than using them via completions, and is entirely responsible for 99% of the meteoric rise in relevance that's happened in the last 3 years.
At the same time, it's made it painfully easy to draw completely incorrect insights about how models work, how they'll scale to new problems etc.
I think it's still a net gain because most people would not have adapted to using models without instruction tuning... but a lot of stuff like "I told it not to do X and it did X" where X is something no one would expect an LLM to understand by its very nature, would not happen if people were forced to have a deeper understanding of the model before they could leverage it.
> It feels a bit like Michael from The Office yelling "bankruptcy!"
To be fair to the Michaels out there, powerful forces have spent a bazillion dollars in investing/advertising to convince everyone that the world really does (or soon will) work that way.
So there's some blame to spread around.
I saw someone else on HN berating another user because they complained vibe-coding tools lacked a hard 'code freeze' feature.
> Why are engineers so obstinate... Add these instructions to your cursor.md file...
And so on.
Turns out "it's a prompting issue" isn't a valid excuse for models misbehaving - who would've thought: It's almost like it's a non-deterministic process.
Oh sure, there's actus reus, but good luck proving mechanica rea.
> claimed AI coding tool Replit deleted a database despite his instructions not to change any code without permission.
Well... yeah, this is a totally expect-able failure route, because LLMs are just bullshitting document-generators.
When you say, "don't make changes", there isn't even an entity on the other end that can "agree." The fictional character doesn't really exist, and the ego-less author isn't as smart as the character seems.
In fairness, humans are the same. If you tell a human, "whatever you do, don't press the red button," there isn't a hardware switch that makes it impossible. And so many accidents stem from this.
But of course I agree that such 'software-implemented' constraints work better in humans than they do in LLMs.
There is no mathematical guarantee to a contract or task being done if that is what you mean.
Yes; however, first there is an understanding involved when the other operator is intelligent [1], and secondly there are consequences which matter to a living being that don't apply to an agent. Humans need to eat and take care of their families, for which they need a job, so they have a lot less freedom to disobey explicit commands and still expect to be able to do those things.
Even if an agent becomes truly intelligent, you cannot control it well if it does not have hunger, pain, love, or any other motivational drive [2].
——
Depending on the type of red button, you can always design safeguards (human or agent); after all, we haven't launched nuclear warheads either by mistake or by a malicious actor (yet). [3]
——-
[1] Which humans are, and, however much the industry likes to think otherwise, agents are not.
[2] Every owner of a pet with a limited food drive will tell you how much harder it is to train their dog than one with a strong food drive, even if it is an intelligent breed or specimen.
[3] Yes, we have come alarmingly close a few times, but no one has actually pressed the red button, so to speak.
> Humans need to eat and take care of their families, for which they need a job, so they have a lot less freedom to disobey explicit commands and still expect to be able to do those things.
While true, I think there's a different problem here.
Humans are observed to have a wide range of willingness to follow orders: everything from fawning, cult membership, and The Charge of the Light Brigade on the one side; to oppositional defiant disorder on the other.
AI safety and alignment work wants AI to be willing to stop and change its behaviour when ordered, because we expect it to be dangerously wrong a lot, because there's no good reason to believe we already know how to make them correctly at this point. This has strong overlap with fawning behaviour, regardless of the internal mechanism of each.
So it ends up like Homer in the cult episode, with Lisa saying "Watch yourself, Dad. You're the highly suggestible type." and him replying "Yes. I am the highly suggestible type" — And while this is a fictional example and you can't draw conclusions about real humans from that, does the AI know that it shouldn't draw that conclusion? Does it know if it's "in the real world" or does it "think" it's writing a script in which case the meme is more important than what humans actually do?
> [1] Which humans are, and, however much the industry likes to think otherwise, agents are not.
I have spent the last ~ year trying to convince a customer support team in a different country that it's not OK to put my name on bills they post to a non-existent street. Actually it is quite a bit worse than that, but the full details will be boring.
That said, I'm not sure if I'm even corresponding with humans or an AI, so this is weak evidence.
People press the red button all the time. People still commit crimes even though there are laws that result in consequences.
> don’t press the red button
Like so?
https://vimeo.com/126720159
My point is that most such comparisons are already flawed because the "machine" people are referring-to is an illusion.
It's like people are debating the cellulose-quality of playing cards, comparing cards in a TV broadcast of a (real) poker tournament versus the cards that show up through a magical spy window caused by solitaire.exe. The comparison is already nonsense because the latter set of cards has no cellulose, or any mass at all.
Similarly, the recipient of your "now do X" command in an LLM chat doesn't really exist, so can't have source-code or variables or goals. The illusion may sometimes be useful (esp. for marketing and getting investor money), but software engineers can't afford to fall for it when trying to diagnose problems.
The real "constraints" are that each remotely-generated append to a hidden document statistically fits what came before with a certain amount of wiggle-room. Maybe that means you see text about "HAL-9000" opening the pod bay doors, and maybe you don't, but the document-generator is the thing in charge.
> The founder of SaaS business development outfit SaaStr has claimed AI coding tool Replit deleted a database despite his instructions not to change any code without permission.
There is something to say about these incompetent morons making more money than 100 nurses, but I'm not smart enough to say it.
It's the invisible hand of the market showing us that we urgently need more incompetent morons, but nurses are optional.
And let's be honest, most companies would probably agree that incompetent morons are what future economic growth will be built upon. Either that, or the pay2win mobile game addicts which the industry lovingly calls "whales".
This is really sad. Replit used to be an interesting service, like a super-advanced version of language "playground" services, to make it easy to play with things and experiment. It seems like they've gone massively downhill.
My first acquaintance with Replit was the services they were building for teachers and schools to teach coding and support the high school level with making environments available and shareable. They had a great platform in place but they took the whole thing down as part of their pivot.
They had tools to prevent this, but the tools were still in development.
They have since rolled out a prod/dev DB split and DB snapshots, and restored that user's DB from backup, of course.
Yes, they pivoted hard from playground to AI codegen.
It was an interesting service, but I guess AI pays better.
> I guess AI pays better
I suspect the problem is that the previous model didn't pay at all. I've used it a lot to try out code snippets, but no one pays for that. It was one of those services that a product would outgrow well before you could imagine spending serious money on it.
It is a shame they didn't find traction with the "learn to code without all the hassle" angle they once had.
Guys, if you're not already in cyber security, this is your time to get into it.
2026 will be ridiculously fun for bug hunting. As you know, in VIBE coding, the S stands for security.
I believe we should move into legal advice, because those vibe coders most likely won't pay bug bounties. But suing them for irresponsible data handling will surely work.
I'm going to sell services to vibe-secure code.
I see a new industry trend! VIBES coding
ViSecBes
I followed this on Twitter and it all seems a bit contrived to me, as if the guy set up the situation to go viral.
- He's a courseboi that sells a community that will make you 'Get from $0 to $100 Million in ARR'
- The stuff about 'it was during a code freeze' doesn't make sense. What does 'code freeze' even mean when you're working alone, vibe coding, and asking the agent to do things?
- Yes, LLMs hallucinate. The guy seems smart and I guess he knows it. Yet he deliberately drives up the emotional side of everything, saying that Replit "fibbed" and "lied" because it created tests that didn't work.
- He had a lot of tweets saying that there was no rollback, because the LLM doesn't know about the rollback. Which is expected. He managed to roll back the database using Replit's rollback functionality [0], but still really milks the 'it deleted my production database' angle.
- It looks like this was a thread about vibe coding daily. This was day 8. So this was an app in very early development and the 'production' database was probably the dev database?
Overall just looks like a lot of attention seeking to me.
[0] https://x.com/jasonlk/status/1946240562736365809 "It turns out Replit was wrong, and the rollback did work."
I am also convinced that this is a contrived and fake case for the reasons you've listed. Don't get me wrong, I am super critical of the AI hype and "vibe coding" (God, I detest that term), but this just seems too "well-made".
He is not a "courseboi". SaaStr is a legit brand that's been around for a long time focusing on the sales side of SaaS.
You have to remember this is someone who is almost certainly completely non-technical and purely vibe coding. He won't know what things like code freeze, rollbacks, production database, etc actually mean in real engineering terms and he is putting his full trust in the LLM.
Just because it's legit brand doesn't invalidate the fact that he sells courses.
The "code freeze" thing was amusing, I've never used Replit so wondered if it was a feature to turn off code editing but the more you read it seems like he just told it that it was in code freeze or added it to the rules and expected it to not drop that context at some point.
The "rules" thing in LLM coding probably should be called "suggestions" because it never seems that stringent about them.
Calling Replit Agent (the AI) just "Replit" is also a bit sus, as it might sound like the company itself is doing these nefarious things, while it's more like the agent doesn't understand features of the environment it is in.
Is the agent not the company's product? In an environment the company designed for it?
Seems like an agent understanding the constraints of its environment would be one of the primary things an AI agent company would be responsible for
Rolling back worked for me when I was toying with replit. But then getting it to do what I wanted was so painful I swore never again once I was done.
Seems this story is picking up steam though; curious how big it gets.
I honestly just thought it was entirely fake when I saw it fly all over LinkedIn. Maybe I'm too cynical, or maybe I'm the right degree of cynical, I don't know any more.
Thank you for your analysis!
Beyond the headlines, let's talk shop.
I have been vibe coding (90% Cursor, 10% Claude Code) for an entire month now (I know how to code, but I really want to explore this space and push the boundaries).
I found that LLM agents are notoriously bad at two things:
1. Database migrations
2. Remembering they are supposed to write tests and keep ALL of them green (just like our human juniors...)
Database migrations
I am incapable of making the coding agent follow industry best practices. E.g. when a new field is needed in the DB during development, what most web frameworks / ORMs offer is a migration with up and down steps that does not wipe the existing data. I do not want to reset my DB even if I am developing locally.
So far the agent has been doing weird stuff, almost always ending with a DB that needed a reset to get back to work. Often the agent would ignore my instructions NEVER to reset the DB nor RUN migrations.
By extrapolating this misbehavior to production, I can imagine how badly this could end.
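For concreteness, the kind of boring, reversible migration I keep asking for, as a minimal sketch in Alembic-style Python (assuming SQLAlchemy/Alembic; the table and column names are placeholders, not from my project):

    # Hypothetical Alembic migration: purely additive, reversible, no data loss.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # Add the new field as a nullable column; existing rows are untouched.
        op.add_column("users", sa.Column("nickname", sa.String(length=64), nullable=True))

    def downgrade():
        # Undo exactly what upgrade() did; no resets, no table drops.
        op.drop_column("users", "nickname")

Nothing here touches existing rows, which is the whole point.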
Actually, as long as there are no STRICT guarantees from LLM providers on how to prevent the LLM from doing something, this issue will never get solved. The only way I found is to block the agent from running certain commands (requiring my consent), but that can only take me so far, since there are infinite command line tools the agent can run.
Tests
This one is equally bad in terms of LLMs ignoring instructions, possibly with less potential for disaster, yet still completely weird behavior.
Of all the instructions / prompts I give to LLMs, the part about testing gets ignored the most. By far. E.g. I have in my custom prompts an instruction for always updating the CHANGELOG.md file - which the agent ALWAYS follows even for the tiniest changes.
But when it comes to testing, the agent will almost never write new tests or run the test suite as part of a larger change. I almost always have to tell it explicitly to run the tests and fix the failing ones. And even then it will fix 8/10 tests and celebrate a big success (despite the clear instruction that ALL tests must pass, no excuses).
Happy to exchange thoughts and ideas with someone with similar struggles - meet me on X (@cogito_matt). I am working on an LLM-powered agentic AI tool for data analysis / BI and so far the experience has been fantastic - but LLMs really require you to think differently about programming and execution.
> 2. Remembering they are supposed to write tests and keep ALL of them green (just like our human juniors...)
I think the core principle that everyone is forgetting is that your evaluation metric must be kept separate from your optimization metric.
In most setups I've seen, there isn't much emphasis on adding scripting that's external to the LLM, but in my experience having that verification outside of the LLM loop is critical to avoid it cheating. It won't intend to cheat, insofar as it has any intent at all, but you're giving it a boatload of optimization functions to balance and it's prone to randomly dropping one at the worst time. And to be fair, falling flat on its face to win the race [1] is often the implicit conclusion of what we told it to do without realizing the consequences.
If you need something to happen every time, particularly as part of the validation, it is better to have an automated script as part of the process, rather than trying to pile on one more instruction.
[1] https://youtu.be/mA8z0GndiYI?si=PNTNFBOFZ6tOLTXX&t=226
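As a rough sketch of what keeping the check outside the LLM loop can look like in practice (the script name and pytest flags are just an illustration, not a prescription):

    #!/usr/bin/env python3
    # check_gate.py -- hypothetical gate run by CI or a pre-push hook, never by the agent,
    # so "all tests must pass" is enforced by tooling instead of by one more instruction.
    import subprocess
    import sys

    def main() -> int:
        # Run the whole suite; any failure blocks the change, no partial credit.
        result = subprocess.run(["pytest", "-q"])
        if result.returncode != 0:
            print("Test suite not green: rejecting this change.", file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())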
> The only way I found is to block the agent from running certain commands (requiring my consent), but that can only take me so far, since there are infinite command line tools the agent can run.
You're doing this the wrong way around. You need to default to blocking and have an allowlist for the exceptions, not default to allowing and a blocklist for the exceptions.
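A minimal sketch of the default-deny idea, assuming your agent tooling exposes some hook over the commands it wants to run (the hook and the allowlists here are made up for illustration):

    # Hypothetical default-deny command gate: everything is blocked unless allowlisted.
    import shlex

    ALLOWED_PROGRAMS = {"ls", "cat", "grep", "pytest", "git"}   # explicit exceptions only
    ALLOWED_GIT_SUBCOMMANDS = {"status", "diff", "log"}         # even git is narrowed down

    def is_allowed(command_line: str) -> bool:
        tokens = shlex.split(command_line)
        if not tokens:
            return False
        program = tokens[0]
        if program not in ALLOWED_PROGRAMS:
            return False  # default: block, and ask the human
        if program == "git":
            return len(tokens) > 1 and tokens[1] in ALLOWED_GIT_SUBCOMMANDS
        return True

    # e.g. "git status" runs; "git push --force" and a stray DROP TABLE go to a human.
    for cmd in ["git status", "git push --force", "psql -c 'drop table users;'"]:
        print(cmd, "->", "run" if is_allowed(cmd) else "needs consent")

The list stays short on purpose: the safe set is enumerable, the dangerous set is not.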
What am I missing? This was an LLM acting against a production codebase? What happened to the separation of dev, stage, and production? Management is to blame for this. They are lucky this is the first time something like this has happened to them.
Is this just a publicity stunt?
Could be a stunt, or just regular vibe coders thinking LLMs are Jarvis or something.
If you have an agent (person or LLM model) building software for you, you place a very high level of trust in that agent. Building trust is a process - you start with some trust and over time increase or decrease your level of trust.
In general this works with people. Accountability is part of it. But also, most people want to help.
I don't see how this works with LLMs. Consistent good results are not indicative of future performance. And despite the way we anthropomorphize LLMs, they don't have any true concept of being helpful, malice, etc.
They don't have a concept of computer programming either and yet they code.
Three days ago I posted a comment stating:
> I'm envisioning a blog post on linkedin in the future: "How Claude Code ruined my million dollar business"
the future is here already!
Obviously I think the lesson here, as most agentic workflows suggest in their best practice, is to ensure there's a manual step between the agent and production. I imagine this might be a difficult lesson that many over the coming years will learn the hard way.
> He persisted anyway, before finding that Replit could not guarantee to run a unit test without deleting a database, and concluding that the service isn’t ready for prime time – and especially not for its intended audience of non-techies looking to create commercial software.
I don't think non-techies will ever be able to sustainably make commercial software without "bridging" LLM layers such as virtual engineering managers and project leads which keep the raw engineering LLMs in check.
https://news.ycombinator.com/item?id=44625119
Analogy: wow this Tesla full self driving mode is legit, the car really can drive itself! I'm all in, I've fired my driver and deleted Uber.
OK the car scraped my other car. I made it apologize and PROMISE not to do it again.
OH NO IT HITS THINGS AND PEOPLE EVEN WHEN IT SAID IT WAS GOING TO BE GOOD.
> saved his company $145,000.
This is akin to saying "I saved money on lawyers by doing it myself"
What's a caricature today is a headline tomorrow.
All my prompts have 'as a Staff-level engineer'. I have found it helps with the quality of the code.
This seems to be the journey everyone goes on with Replit: wow, followed by wtf.
Who gives an AI access to the prod DB environment? I don't get blaming Replit for this.
Nail insertion tool "hammer" smashed man's thumb.
It's quite possible this is virality meme engineering.
Assuming it really happened, why would you then go ahead and ask the model why it deleted the database? That makes no sense.
A surprising number of supposedly intelligent people insist on anthropomorphising LLMs. The apology and so on play nicely from that perspective.
[dupe] https://news.ycombinator.com/item?id=44625119
Well, I'll leave this comment by the company CEO here, as I think it wraps up this whole issue in one sentence: "We don’t care about professional coders anymore" [1]
[1] https://www.semafor.com/article/01/15/2025/replit-ceo-on-ai-...
For context, he's describing a pivot from professional developers to "citizen developers", ie. non-devs.
He's not saying there is no need for professional coders, just that they're not the core market for Replit.
(TBH I'm doubtful they ever were)
Because professional developers would spot the errors?
It's actually easier to develop AI coding assistants for pro devs because you can rely on them to fix minor mistakes and guide the tool.
In contrast, non-devs will just accept all, eventually hit a problem and AI won't be able to get them out of it.
(source: was working on such a tool)