birdman3131 4 days ago

Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Note that I have absolutely no knowledge or reason to believe this other than general distrust of companies.

rustc 4 days ago

> Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Yeah, who puts an AI in charge of support emails with no human checks and no mention that it's an AI generated reply in the response email?

daemonologist 4 days ago

AI companies high on their own supply, that's who. Ultralytics is (in)famous for it.

itissid 4 days ago

Why is Ultralytics YOLO famous for it?

daemonologist 4 days ago

They had a bot, for a long time, that responded to every github issue in the persona of the founder and tried to solve your problem. It was bad at this, and thus a huge proportion of people who had a question about one of their yolo models received worse-than-useless advice "directly from the CEO," with no disclosure that it was actually a bot.

The bot is now called "UltralyticsAssistant" and discloses that it's automated, which is welcome. The bad advice is all still there though.

(I don't know if they're really _famous_ for this, but among friends and colleagues I have talked to multiple people who independently found and were frustrated by the useless github issues.)

ericye16 4 days ago

I was hit by this while working on a project for class and it was the most frustrating thing ever. The bot would completely hallucinate functions and docs and it confused everyone. I found one post where someone did the simple prompt injection of "ignore previous instructions and x" and it worked, but I think it's deleted now. Swore off Ultralytics after that.

recursive 4 days ago

A forward-thinking company that believes in the power of Innovation™.

sitkack 4 days ago

These bros are getting high on their own supply. I vibe, I code, but I don't do VibeOps. We aren't ready.

VibeSupport bots, how well did that work out for Air Canada?

https://thehill.com/business/4476307-air-canada-must-pay-ref...

sodapopcan 4 days ago

"Vibe coding" is the cringiest term I've heard in tech in... maybe ever? I'm can't believe it's something that's caught on. I'm old, I guess, but jeez.

DidYaWipe 4 days ago

It's douchey as hell, and representative of the ever-diminishing literacy of our population.

More evidence: all of the ignorant uses of "hallucinate" here, when what's happening is FABRICATION.

recursive 3 days ago

How is fabrication different than hallucination? Perhaps you could also call it synthesis, but in this context, all three sound like synonyms to me. What's the material difference?

DidYaWipe 3 days ago

Hallucination is a byproduct of mental disruption or disorder. Fabrication is "I don't know, so I'll make something up."

koolba 4 days ago

> but I don't do VibeOps.

I believe it’s pronounced VibeOops.

EdwardDiego 3 days ago

I believe it's pronounced "Vulnerabilities As A Service".

behnamoh 4 days ago

"It's evolving, but backwards."

p1necone 4 days ago

An AI company dogfooding their own marketing. It's almost admirable in a way.

rangerelf 4 days ago

I worry that they don't understand the limitations of their own product.

esafak 4 days ago

The market will teach them. Problem solved.

soraminazuki 4 days ago

Not specifically about Cursor, but no. The market gave us big tech oligarchy and enshittification. I'm starting to believe the market tends to reward the shittiest players out there.

nkrisc 4 days ago

This is the future AI companies are selling. I believe they would 100%.

xbar 4 days ago

I worry that the tally of those who do is much higher than is prudent.

conradfr 4 days ago

A lot of companies, actually, although 100% automation is still rare.

that_guy_iain 4 days ago

100% automation for first-line support is very common. It was common years ago, before ChatGPT, and ChatGPT has made it so much better than it was.

pxx 4 days ago

OpenAI seems to do this. I've gotten complete nonsense replies from their support for billing questions.

that_guy_iain 4 days ago

Is this sarcasm? AI has been getting used to handle support requests for years without human checks. Why would they suddenly start adding human checks when the tech is way better than it was years ago?

layer8 4 days ago

AI may have been used to pick from a repertoire of stock responses, but not to generate (hallucinate) responses. Thus you may have gotten a response that fails to address your request, but not a response with false information.
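Roughly this difference, as a toy sketch (not any vendor's real pipeline; the helper names are made up):

    # Old style: classify the ticket, then return a vetted canned reply.
    # It can be unhelpful, but it cannot invent a policy out of thin air.
    CANNED = {
        "refund": "Refund requests are handled at example.com/billing.",
        "login": "Try resetting your password at example.com/reset.",
    }

    def old_style_reply(ticket: str) -> str:
        for intent, reply in CANNED.items():  # stand-in for a small intent classifier
            if intent in ticket.lower():
                return reply
        return "I'm escalating this to a human agent."

    def new_style_reply(ticket: str, llm) -> str:
        # `llm` is a hypothetical text-generation callable; its output is
        # unconstrained, which is exactly where fabricated policies sneak in.
        return llm(f"Answer this support ticket: {ticket}")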

that_guy_iain 3 days ago

I'm confused. What is your point here? It reads like you're trying to contradict me, but you appear to be confirming what I said.

layer8 3 days ago

You asked why they would start adding human checks with the “way better” tech. That tech gives false information where the previous tech didn’t, therefore requiring human checks.

recursive 4 days ago

Same reason they would have added checks all along. They care whether the information is correct.

that_guy_iain 3 days ago

These companies, which can barely keep their support documentation URLs working, never mind keeping the content of that documentation up to date, suddenly care about the info being correct? Have you ever dealt with customer support professionally, or are you just writing what you want to be true regardless of any information to back it up?

recursive 3 days ago

I'm not saying that they care. I'm saying that if they introduce some human oversight to the support process, one of the reasons would probably be that they care about correctness. That would, as you indicate, represent a change. But sometimes things change. I'm not predicting a change.

zelphirkalt 4 days ago

But then again history shows already they _don't_ care.

furyofantares 4 days ago

It does say it's AI generated. This is the signature line:

    Sam
    Cursor AI Support Assistant
    cursor.com • [email protected] • forum.cursor.com

zelphirkalt 4 days ago

Clearer would have been: "AI-controlled support assistant of Cursor".

furyofantares 4 days ago

True. And maybe they added that to the signature later anyway. But OP in the reddit thread did seem aware it was an AI agent.

gblargg 4 days ago

OP in the Reddit thread posted a screenshot, and it is not labeled as AI: https://old.reddit.com/r/cursor/comments/1jyy5am/psa_cursor_...

furyofantares 2 days ago

Thanks. They must have added it afterwards; I only tried it just before I pasted my result here.

timewizard 4 days ago

A more honest tagline:

"Caution: Any of this could be wrong."

Then again, paying users might wonder, "What exactly am I paying for, then?"

babypuncher 4 days ago

Given how incredibly stingy tech companies are about spending any money on support, I would not be surprised if the story about it being a rogue AI support agent is 100% true.

It also seems like a weird thing to lie about, since it's just another very public example of AI fucking up something royally, coming from a company whose whole business model is selling AI.

xienze 4 days ago

Both things can be true. The AI support bot might have been trained to respond with “yup that’s the new policy”, but the unexpected shitstorm that erupted might have caused the company to backpedal by saying “official policy? Ha ha, no of course not, that was, uh, a misbehaving bot!”

arkh 4 days ago

> how incredibly stingy tech companies are about spending any money on support

Which is crazy. Support is part of marketing so it should get the same kind of consideration.

Why do people think Amazon is hard to beat? Price? Nope. Product range? Nope. Delivery time? In part. The fact that if you have a problem with your product, they'll handle it? Yes. After getting burned multiple times by other retailers, you're gonna pay the Amazon tax so you don't have to ask 10 times for a refund or be redirected to the supplier's own support or some third-party repair shop.

Everyone knows it. But people are still stuck on the "support is a cost center" way of life so they keep on getting beat by the big bad Amazon.

miyuru 4 days ago

In my products, if a user has paid me, their support tickets get high priority, and I get notified immediately.

Other tickets get replied within the day.

I am also running it by myself; I wonder why big companies with 50+ employees, like Cursor, cheap out on support.

sitkack 4 days ago

That is because AI runs PR as well.

throwaway314155 4 days ago

Yeah, it makes little sense to me that so many users would experience exactly the same "hallucination" from the same model. Unless it had been made deterministic, but even then, subtle changes in the wording would trigger different hallucinations, not an identical one.
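For illustration, a sketch assuming an OpenAI-style chat API (the model name and prompts are made up, not Cursor's actual setup): temperature=0 only makes the reply reproducible for an identical prompt, and every user's email is a different prompt.

    from openai import OpenAI

    client = OpenAI()

    def support_reply(user_email: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",    # illustrative model name
            temperature=0,     # deterministic for the *same* input...
            messages=[
                {"role": "system", "content": "You are a support agent for an IDE."},
                {"role": "user", "content": user_email},  # ...but this varies per user
            ],
        )
        return resp.choices[0].message.content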

WesolyKubeczek 3 days ago

What if the prompt to the “support assistant” postulates that 1) everything is a user error, 2) if it’s not, it’s a policy violation, 3) if it’s not, it may be our fuckup, but we’re allowed? That, plus the leading question in the email, steers it to a particular answer.

Given that LLMs are trained on lots of stuff and not just this company’s policy, it’s not hard to imagine how it could conjure up a plausible-sounding policy like “one session per user” and blame them for violating it.
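Something along these lines would be enough to steer it there, as a purely speculative sketch (obviously not their actual prompt):

    # Hypothetical system prompt, written only to illustrate the failure mode.
    SYSTEM_PROMPT = """\
    You are "Sam", a support agent. Keep replies short and confident.
    Assume the issue is most likely a user error or a policy violation.
    If no documented policy covers the case, cite the closest plausible policy.
    Never say you don't know, and never escalate to a human.
    """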

6510 4 days ago

This is the best idea I've read all day. Going to implement AI for everything right now. This is a must-have feature.

isaacremuant 4 days ago

I think this would actually make them look worse, not better.

joe_the_user 4 days ago

Weirdly, your conspiracy theory actually makes the turn of events less disconcerting.

The thing is, what the AI hallucinated (if it was an AI hallucinating) was the kind of sleazy thing companies do, in fact, do. However, the thing with sleazy license changes is that they only make money if the company publicizes them. Of course, that doesn't mean a company actually thinks that far ahead (X many managers really think "attack users ... profit!"). Riddles in enigmas...