Had a similar experience with a vendor. Marketing team requests we add some DKIM settings for yet another vendor that is gonna spam people on our behalf. The requested record is
* TXT test
I explain that's gotta be a bug in their UI, and we send a screenshot to the vendor's support. The marketing team goes four rounds with an AI that refuses to escalate to a human and is convinced this is the only entry it needs us to add to our DNS. We dropped the vendor. Someone from their retention team followed up, and we linked them the ticket where we'd been arguing with the bot about the obvious bug. Never heard back.
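For contrast, an actual DKIM setup needs a TXT record (or a CNAME delegating to the vendor) at a selector name under `_domainkey` — roughly like this (selector, domain, and key here are made up for illustration):

```
; hypothetical zone-file entries; the vendor would supply the real selector and key
s1._domainkey.example.com.  IN  TXT    "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."

; or, more commonly, a CNAME pointing at the vendor's hosted key:
s1._domainkey.example.com.  IN  CNAME  s1.dkim.vendor.example.
```

A wildcard TXT record whose entire value is the literal string "test" does nothing for DKIM, which is why the request was so obviously a bug.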
I'm seeing an unfortunate pattern where someone wants to write an email, and "asks Claude", producing a needlessly verbose response that the recipient doesn't even want to read. The slop is obvious. This behavior is being pushed down by senior management.
Someone else said it best: AI isn’t increasing productivity much for the average worker, it’s just allowing them to do their job and put in less effort.
And I think that’s entirely fair to be honest. The workers aren’t going to see any raises or bonuses from AI productivity gains. Why should they go out of their way to make their boss richer?
Had a similar experience with an open-source project I help maintain: an AI-slop bug report saying "your code is vulnerable to blah; you should be doing one of a, b, or c to mitigate it." The mitigations a, b, and c were lifted directly from the very code the report was commenting on, where they were already present and being applied.
I wonder if Nexi having LLM bots both originating and receiving emails might be the root cause of the FSFE problem?
https://news.ycombinator.com/item?id=47451429
I do not know! Just speculating.
So the recipient feeds it into Claude as well with the instructions to tl;dr it.
Go with EnshittificAItion. Sounds much better.