I find it so hard to reconcile my actual experience with what I read in news articles and my LinkedIn feed about how programmers will be out of a job in six months. I ask ChatGPT to write some Terraform code; it hallucinates how the timer provider works, tells me it's right because it "simulated" the rest on an AWS staging environment that it has access to (can I see it? No.), and then, when confronted in a GitHub issue with evidence that this doesn't work, tells me I've found the "smoking gun".
I feel like I'm taking crazy pills here, is anyone else getting this?
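For context, the provider in question is presumably HashiCorp's time provider (hashicorp/time), and the correct usage is nothing exotic. A minimal sketch of what the code could have looked like; the 30s duration and the version pin are just illustrative:

```hcl
terraform {
  required_providers {
    time = {
      source  = "hashicorp/time"
      version = "~> 0.9" # illustrative pin
    }
  }
}

# time_sleep pauses between resource lifecycle steps, e.g. to let an
# upstream service finish propagating before dependents are created.
resource "time_sleep" "wait_30_seconds" {
  create_duration = "30s"
}
```

Nothing here needs to be "simulated" on a staging environment; after `terraform init`, a plain `terraform validate` checks attributes against the published provider schema and catches a hallucinated one.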
One thing AI won’t be able to do is take the blame for big decisions.
I’ve realized that companies like McKinsey also provide a scapegoat for a manager’s decision gone wrong. They can just blame McKinsey for advising them to do it. But they can’t as easily blame an AI. And McKinsey is happy to be the bad guy.
I think this is an area where we will still keep humans: someone to blame when something goes wrong.
Unfortunately for McKinsey, that’s not going to be enough to prop up their revenues.
nwmcsween · 2h ago
This is more than enough to prop them up. More often than not, the consultee already has a plan; even if the plan is horrendous, the consulting company is there to insulate management from any fallout (read: incompetence).
rvz · 7h ago
This is the true definition of "AGI".
Rzor · 7h ago
In this case, I'm willing to entertain that this is more of an indictment of McKinsey.