The great thing a site like this can offer is helping to discover and refine anti-patterns [1]. I am a huge fan of showing valid paths to take to speed up learning, but showing dead ends, or at least sub-optimal paths, is often very helpful as well.
One criticism, though: documenting a failure doesn't mean you actually understood what truly went wrong, and even an accurate description of what went wrong doesn't mean you have a real solution to the problem. Often the reasons things went wrong are far more nuanced, and the fix is not obvious. Adding an untested lesson at the end of each failure is premature. I'd call them, at best, observations and next steps to try. They only become lessons after they've been tested and shown to steer successfully around the original failure.
#2 is crucial, and it's something I've learned has to be well defined. Just this past week I heard a task was done. I checked it; only half was done. They hadn't checked the issue ticket before asserting completion. Not a big deal, but it does mean I should walk through that step when we start tickets next time.
I was excited about this idea, but based on the writing patterns and vague stories I'm pretty sure these writeups are mostly AI slop. For example, this is classic ChatGPT phrasing:
> The growth I’d been celebrating wasn’t real growth—it was just a spike of first-time buyers who never came back.
If I'm wrong, and these were actually written by a human, I'd love a chance to stand corrected and apologize.
[1] https://en.wikipedia.org/wiki/Anti-pattern
I suspect they were all developed (perhaps AI-written) by a human, but the same human.
There's little variety in how the stories are told.
I suspect that it will be tough to get folks to add to it, but it's not a bad idea.
One site I regularly visit is Not Always Right[0]. I suspect that many of the stories are apocryphal, but it is entertaining.
I really miss the US Navy Safety Photo of the Day. That was a riot.
[0] https://notalwaysright.com/newest/
Whether or not they're AI-written, these very vague stories are not useful.
To find slop in this submission, you don't have to look any further than the very first picture.
Some more great engineering failures are here:
https://stream.engineered.network:8002/stream