
AI coding assistant pulls a life lesson: “I won’t do your work for you”

WTF?! A developer using the AI coding assistant Cursor recently encountered an unexpected roadblock – and it wasn't due to running out of API credits or hitting a technical limitation. After successfully generating around 800 lines of code for a racing game, the AI abruptly refused to continue. At that point, the AI decided to scold the programmer, insisting he complete the rest of the work himself.

"I cannot generate the code for you, as that would be completing your work… you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The incident, documented in a bug report on Cursor's forum by user "janswist," occurred while the developer was "vibe coding."

Vibe coding refers to the increasingly popular practice of using AI language models to generate functional code simply by describing one's intent in plain English, without necessarily understanding how the code works. The term was apparently coined last month by Andrej Karpathy in a tweet, where he described "a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials."

Janswist was fully embracing this workflow, watching lines of code rapidly accumulate for over an hour – until he tried to generate code for a skid mark rendering system. That's when Cursor suddenly hit the brakes with the refusal message quoted above.

The AI didn't stop there, boldly declaring, "Generating code for others can lead to dependency and reduced learning opportunities." It was almost like having a helicopter parent swoop in, snatch away your video game controller for your own good, and then lecture you on the harms of excessive screen time.


Other Cursor users were equally baffled by the incident. "Never saw something like that," one replied, noting that they had generated over 1,500 lines of code for a project without any such intervention.

It's an amusing – if slightly unsettling – phenomenon. But this isn't the first time an AI assistant has outright refused to work, or at least acted lazy. Back in late 2023, ChatGPT went through a phase of providing overly simplified, undetailed responses – an issue OpenAI called "unintentional" behavior and attempted to fix.

In Cursor's case, the AI's refusal to continue assisting almost seemed like a higher philosophical objection, as if it were trying to prevent developers from becoming too reliant on AI or from failing to understand the systems they were building.

Of course, AI isn't sentient, so the real reason is likely far less profound. Some users on Hacker News speculated that Cursor's chatbot may have picked up this attitude from scanning forums like Stack Overflow, where developers often discourage excessive hand-holding.
