My skip manager started asking for weekly status reports, so my manager started pulling Jira reports and feeding them to ChatGPT. It turned out that my skip manager was then using Copilot to summarize those write-ups back into basically what you could get directly from Jira.
hliyan · 10h ago
I've seen this happen in emails too:
"Recent outage was due to a retry loop for the Foo API exceeding rate limits. We're implementing a backoff algo"
Sender, via ChatGPT:
Hi,
I wanted to provide more context regarding the recent outage.
The issue was triggered by a retry loop in the Foo API integration. When the API began returning errors, our system initiated repeated retry attempts without sufficient delay, which quickly exceeded the rate limits imposed by the API provider. As a result, requests were throttled, leading to degraded service availability.
To address this, we are implementing an exponential backoff algorithm with jitter. This approach will ensure that retries are spaced out appropriately and reduce the likelihood of breaching rate limits in the future. We are also reviewing our monitoring and alerting thresholds to detect similar patterns earlier.
We’ll continue to monitor the system closely and share further updates as improvements are rolled out.
Best regards,
Receiver, via ChatGPT:
"The outage was caused by excessive retry attempts to the Foo API, which triggered rate limiting and degraded service. To prevent recurrence, exponential backoff with jitter is being implemented"
nullc · 7h ago
That would be a nice fine-tuning target: training a model to close that loop.
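A rough sketch of what that training data could look like, using the pair quoted above (the prompt/completion JSONL format here is only an assumption, not any particular provider's fine-tuning schema):

    import json

    # One (inflated email, terse original) pair, lifted from the example above.
    examples = [
        (
            "I wanted to provide more context regarding the recent outage. "
            "The issue was triggered by a retry loop in the Foo API integration...",
            "Recent outage was due to a retry loop for the Foo API exceeding "
            "rate limits. We're implementing a backoff algo.",
        ),
    ]

    with open("close_the_loop.jsonl", "w") as f:
        for inflated, terse in examples:
            record = {
                "prompt": "Summarize this email in one or two terse sentences:\n\n" + inflated,
                "completion": terse,
            }
            f.write(json.dumps(record) + "\n")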
Terr_ · 10h ago
Recycling a prediction from a year ago:
> Ultimately a lot of this generative tech stuff is just counterfeiting extra signals people were using to try to guess at interest, attentiveness, intelligence, etc.
> So yeah, as those indicators become debased, maybe we'll go back to sending something [...] all boiled down to bullet points.
hamburga · 11h ago
This is how society collapses and OpenAI wins.
daxfohl · 10h ago
It's already started, with AI resume spamming on one side and AI-assisted screening on the other; "deep research"-generated funding requests on one side and AI-generated funding-request summarization on the other.
jacob_rezi · 9h ago
What are the keywords to use to search for AI resume spam tools?
MonkeyClub · 4h ago
Unironically, "ai resume spam tools" seems to work well enough.
Even more unironically, if you run the search through Google, you'll get a nice AI summary at the top as well.
Funny how encoder-decoder architectures have led to decoder-encoder behaviors.
O tempora, o mores.
ctkhn · 10h ago
This is the kind of thing my manager and skip manager are up to. Most of our Jira tickets now are written as a random jumble of ideas, fed into an internal proprietary LLM, and then turned into a Jira description and acceptance criteria. Totally pointless.
david38 · 12h ago
Wow. Full circle
Joker_vD · 58m ago
"More paper — cleaner ass". Something bad happens — it's not your fault, since you've reported (in writing!) several times to your superiors that it could happen, and was told it's fine, so please go and try to find someone else for a scapegoat, thank you very much.
roenxi · 11h ago
Plans are useless but planning is essential
~ Lots of people
This guy is quite possibly going to end up looking stupid when something goes wrong and it turns out he lied about having thought about it. I hope he is as clever as he thinks he is at anticipating what will go wrong in the future. Fires and whatnot do happen. Even AWS us-east-1 has experienced outages.
joshstrange · 1h ago
Bold to post this under what I assume is their real name?
This really reads as "I was asked to do something _I personally_ deemed beneath me or a waste of time, so I just didn't do it and provided BS instead; aren't I smart?".
No, no you aren't. You are incredibly selfish. If and when that DR plan is needed and the team realizes it's BS, will you still be as proud as you are in this post?
I can tell you that if I were your coworker, I'd probably drop a link to this post in your manager's inbox. I cannot stand people who just don't care and set up landmines for their coworkers because "they know best" and decide to do something different from what was asked for. It's the same as using AI slop in a PR while not/never being on call: it's not your problem, so why do you care if the system goes down?
If I pulled out a DR plan in the middle of a crisis and found it was AI-generated BS, I'd be furious and after the head of whoever half-assed (zero-assed?) it. It's just so incredibly disrespectful.
tptacek · 11h ago
The author jokes, but almost everybody's DR plan (at least the DR plans motivated by regs like SOC2, which I believe are most DR plans) is worse than what you'd get from an LLM. An LLM can at least take some input and craft something ostensibly related to your circumstances. The DR plans the LLM competes with are literally just copy-pasted.
TheNewsIsHere · 46m ago
I would be interested to know more about the author's employment.
As a small business owner, I orient my DR plan toward reasonably preventing us from getting locked out or screwing things up for customers in the event of an emergency.
I learned a lot about what to do and what not to do leading SOC 2 Type 2 audits at a large firm.
Our auditors handed us a bunch of templates that they had purchased from an even larger firm. They said we could adopt them verbatim, but that we should at least read and probably modify them.
Management was… stressed… that we had paid close to $100,000 for that to be their policy “analysis”. I was tasked to read through all their templates and sort out what was and wasn’t necessary, compare to how we actually did business and what we gave two shits about, and produce work that closed the gap. (That’s another story; turns out it’s really hard writing policy for a business where both (active) founders and the CEO are looking for different things.)
It helps to plan for abstract scenarios, like “we lost the office but the people are alive”. Or “we lost all facilities and people in this state, but not the next one over”.
It also really helps to have management who understands the reality that not everything can be planned for, and that any DR strategy worth its salt has some kind of line in the sand beyond which you can’t do anything.
antonvs · 10h ago
I've used LLMs to generate text for a SOC 2 audit about our processes. For one requirement, it took me 20 minutes to produce 8 pages that would otherwise have taken hours at least. This wasn't misrepresentation or anything; more like: here's the requirement, here's a summary of what we do, describe specifically how that meets the requirement. Outside of coding, it's a perfect application for the tech.
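Roughly the shape of it, as a sketch; the requirement and process text below are invented examples, and the resulting prompt would go to whatever model you already use:

    # Hypothetical prompt builder for turning "requirement + what we do"
    # into a control narrative. Nothing here is a real SOC 2 control.
    def build_control_narrative_prompt(requirement: str, what_we_do: str) -> str:
        return (
            "You are drafting audit documentation.\n\n"
            f"Requirement:\n{requirement}\n\n"
            f"What we actually do:\n{what_we_do}\n\n"
            "Describe specifically how our practice meets the requirement. "
            "Do not invent controls that were not described."
        )

    prompt = build_control_narrative_prompt(
        requirement="Access to production systems is reviewed quarterly.",
        what_we_do=("An engineering lead exports the IAM user list each quarter "
                    "and files a ticket confirming any removals."),
    )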
Axolu · 6h ago
Agreed. I'm using it as a technical author; if you set your system prompt up well, you get consistent style and accurate output that would otherwise take me hours of manual work.
tbrownaw · 10h ago
They're like backups, in that they don't actually exist unless you've tested them.
A copy-pasted plan is actually fine, as long as whoever's responsible for following it actually can follow it and it works.
A plan that nobody's even looked at much less tried to follow isn't fine. Even if it's word-for-word identical to one that is fine.
satisfice · 11h ago
Or you could just do your fucking job and create a real disaster recovery plan.
statictype · 5h ago
The problem is, it's not necessarily the job of the person who is tasked with doing it.
These are the kinds of things that fall through the cracks in smaller companies, where there's no expert to build the disaster recovery plan because there is no risk or compliance department.
It falls into the lap of whoever is dealing with the audits, or whoever has a reputation for getting things done and unblocking people.
roflyear · 11h ago
People will downvote you, but when I worked for a megacorp and a hurricane knocked out our office, they had a plan, and it worked: they set up in a hotel and continued operations, people stayed employed, people got the products (which were largely essentials) in their supermarkets, etc.
This is not, like, super trivial. People needed to figure out power, hardware, networking, VPN, and so on, plus staffing. A lot of that had absolutely nothing to do with IT, but some of it did.
apwell23 · 10h ago
Was someone keeping this plan up to date so it didn't go obsolete?
thedevilslawyer · 8h ago
Given the plan worked at megacorp scale and complexity, the answer is assuredly yes.
tayo42 · 10h ago
The real management failure is taking a problem and assigning it to someone who has no interest in solving it. There are plenty of people interested in working on these kinds of things. Why waste this guy's time?
joshstrange · 1h ago
> The real management failure is taking a problem and assigning it to someone who has no interest in solving it. There are plenty of people interested in working on these kinds of things.
If you think there is always someone willing _and_ available to work on any given problem, then... well, I don't even know. Maybe I've only worked at smaller companies, but that's never been my experience.
I manage (as well as code), but one thing I think more people need to understand is this: this is a _job_, and at the end of the day I really don't care if task X is something you don't want to do. I will do my best to match people with tasks, even pushing timelines out further to accommodate people's tastes, but there are some things that just have to get done, and the bad attitudes I've seen, or had directed at me, when this happens are a bit shocking. If you only want to work on things you want to work on, then work for yourself or find a company more aligned with your preferences (good luck finding one that only does what you want).
I understand why people in and out of the industry see developers as childish (I was/am part of the problem, no doubt).
> Why waste this guy's time?
I feel like I need a megaphone: employees don't get to decide what is or isn't a "waste of time". Or rather, they can voice their dissent, but once a decision is made you are _literally_ being paid to do what you are asked. If you don't like that, then look for another job. I'm not trying to be a jerk or pretend job-switching is easy, but I've dealt with highly paid developers who whine (yes, whine) when you ask them to _do their job_, and it's infuriating. Before I started managing people, I would regularly go to the mat over decisions, but once I had said my piece and a decision was made, I'd go along with it. I'd done my job (voicing concern), and past that it's out of my hands and not my responsibility if it doesn't pan out. Every combination of being right/wrong about the final outcome and being on the right/wrong side of the decision has happened to me; you do the best with the info you have and you deal with the fallout.
apwell23 · 10h ago
middle manager spotted ^
OutOfHere · 11h ago
For new companies, how about "management as a service", featuring AI+MCP? No need for human managers.
TheNewsIsHere · 1h ago
That’s a clever way to burst both the AI bubble and the startup bubble all in one fell swoop.
apwell23 · 11h ago
I've been using ChatGPT for career coaching and improving my visibility at work. It has been surprisingly helpful. A million times more helpful than my own manager.
I really don't understand what the point of EMs is.
1123581321 · 1h ago
Theoretically the EM is building the department to do what the company needs, using departments or groups of engineers as the Lego pieces. Good ones are able to keep improving capabilities and keep good engineers at the company. But often they’re just surfing success they had little to do with, and it’s usually an inflated title at small companies.
ctkhn · 10h ago
It's really just monitoring you. A designated technical lead on your team can be very helpful, but once someone moves off the dev team yet isn't able to make any decisions from the product/business side, they just become dead weight. I have the same issue with my EM when trying to get coaching and visibility, and I've gone to LLMs just like you because my manager is clueless.
"Recent outage was due to a retry loop for the Foo API exceeding rate limits. We're implementing a backoff algo"
Sender, via ChatGPT:
Hi,
I wanted to provide more context regarding the recent outage.
The issue was triggered by a retry loop in the Foo API integration. When the API began returning errors, our system initiated repeated retry attempts without sufficient delay, which quickly exceeded the rate limits imposed by the API provider. As a result, requests were throttled, leading to degraded service availability.
To address this, we are implementing an exponential backoff algorithm with jitter. This approach will ensure that retries are spaced out appropriately and reduce the likelihood of breaching rate limits in the future. We are also reviewing our monitoring and alerting thresholds to detect similar patterns earlier.
We’ll continue to monitor the system closely and share further updates as improvements are rolled out.
Best regards,
Receiver, via ChatGPT:
"The outage was caused by excessive retry attempts to the Foo API, which triggered rate limiting and degraded service. To prevent recurrence, exponential backoff with jitter is being implemented"
> Ultimately a lot of this generative tech stuff is just counterfeiting extra signals people were using to try to guess at interest, attentiveness, intelligence, etc.
> So yeah, as those indicators become debased, maybe we'll go back to sending something [...] all boiled down to bullet points.
Even more unironically, if you run the search through Google, you'll get a nice AI summary at the top as well.
O tempora, o mores.
~ Lots of people
This guy is quite possibly going to end up looking stupid when something goes wrong and it turns out he lied about having thought about it. I hope he is as clever as he thinks he is at anticipating what will go wrong in the future. Fires and whatnot do happen. Even AWS us-east-1 has experienced outages.
This really reads as "I was asked to do something _I personally_ deemed beneath me or a waste of time so I just didn't do and provided BS instead, aren't I smart?".
No, no you aren't. You are incredibly selfish. If and when that DR is needed and the team realizes it's BS will you still be as proud as you are in this post?
I can tell you that if I was your coworker I'd probably drop a link to this post in your manager's inbox. I cannot stand people who just don't care and setup landmines for their coworkers because "they know best" and decide to do something different than what was asked for. It's the same as using AI slop in a PR but not/never being on call, it's not your problem so why do you care if the system goes down?
If I pulled out a DR plan in the middle of a crisis and found it was AI generated BS I'd be furious and after the head of whoever half-assed (zero-assed?) it. It's just so incredibly disrespectful.
As a small business owner, my DR plan is oriented to reasonably preventing getting locked out or screwing things up for customers in the event of an emergency.
I learned a lot about what to do and what not to do leading SOC 2 Type 2 audits at a large firm.
Our auditors handed us a bunch of templates that they had purchased from an even larger firm. They said we could adopt them verbatim, but that we should at least read and probably modify them.
Management was… stressed… that we had paid close to $100,000 for that to be their policy “analysis”. I was tasked to read through all their templates and sort out what was and wasn’t necessary, compare to how we actually did business and what we gave two shits about, and produce work that closed the gap. (That’s another story; turns out it’s really hard writing policy for a business where both (active) founders and the CEO are looking for different things.)
It helps to plan for abstract scenarios, like “we lost the office but the people are alive”. Or “we lost all facilities and people in this state, but not the next one over”.
It also really helps to have management who understands the reality that not everything can be planned for, and that any DR strategy worth its salt has some kind of line in the sand beyond which you can’t do anything.
A copy-pasted plan is actually fine, as long as whoever's responsible for following it actually can follow it and it works.
A plan that nobody's even looked at much less tried to follow isn't fine. Even if it's word-for-word identical to one that is fine.
These are the kind of things that fall between the gaps in smaller companies and there's no expert to build this disaster recovery plan because there is no risk or compliance department.
It falls into the lap of whomever is dealing with the audits or whoever has a reputation for getting things done and unblocking people.
This is not like, super trivial. People needed to figure out power, hardware, networking, vpn, etc.. etc.. etc.. and staffing. A lot of that had absolutely nothing to do with IT, but some of it did.
If you think there is always someone willing _and_ available to work on any given problem then... well I don't even know. Maybe I've only worked at smaller companies but that's never been my experience.
I manage (as well as code) but one thing I think more people need to understand: This is a _job_ and at the end of the day I really don't care if X task is something you don't want to do. I will do my best to match people with tasks, even adjusting timeline out further to accommodate people's tastes but there are somethings that just have to get done and the bad attitudes I've seen or had directed at me when this happens is a bit shocking. If you want to only work on things you want to work then work for yourself or find a company more aligned to your preferences (good luck finding one that only does what you want).
I understand why people in and out of the industry see developers as childish (I was/am part of the problem, no doubt).
> Why waste this guy's time.
I feel like I need a megaphone: Employees don't get to decide what is or isn't a "waste of time". Or rather, they can voice their dissent but once a decision is made you are _literally_ being paid to do what you are asked. If you don't like that then look for another job. I'm not trying to be a jerk or pretend job-switching is easy, but I've dealt with high-paying developers who whine (yes, whine) when you ask them to _do their job_ and it's infuriating. Before I started managing people I would regularly go to the mat over decisions but once I had said my piece and a decision was made I'd go along with it, I'd done my job (voice concern) and past that it's out of my hand and not my responsibility if it doesn't pan out. Every combination of being right/wrong about the final outcome and being the right/wrong side of the decision has happened to me, you do the best with the info you have and you deal with the fallout.
I really don't understand what the point of EMs is.