Cory Doctorow writes that AI companies will fail. We can salvage something from the wreckage:
The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire nine out of 10 of your radiologists, saving $20m a year. You give us $10m a year, and you net $10m a year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed – and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop’. It’s their signature on the diagnosis.”
It is what Dan Davies calls an “accountability sink”. The radiologist’s job is not really to oversee the AI’s work, it is to take the blame for the AI’s mistakes.
Interesting take, which rests on the premise that AI is a bubble waiting to burst. I’m not sure whether it’s a bubble or a CapEx cycle, but many of the “follow the money” concerns raised here are entirely valid.
I do, however, disagree with one core premise: that AI is incapable of providing an enormous amount of value across a wide array of roles and sectors. Case in point: the example he gives of ‘hallucinating libraries’ in AI-assisted software engineering does not strengthen his argument, because that failure mode can be almost entirely mitigated by static verification.
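To make that claim concrete, here is a minimal sketch of what such static verification could look like in Python: parse the generated code and check that every imported module actually resolves in the current environment. The function name and the fake package in the example are illustrative assumptions, not anything from Doctorow's post.

```python
import ast
import importlib.util


def find_unresolvable_imports(source: str) -> list[str]:
    """Return imported module names in `source` that cannot be resolved
    in the current environment -- a common symptom of an AI-hallucinated
    dependency."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue
        for name in names:
            # Check only the top-level package; find_spec returns None
            # when no such module is installed or in the stdlib.
            if importlib.util.find_spec(name.split(".")[0]) is None:
                missing.append(name)
    return missing


generated = "import json\nimport totally_made_up_pkg\n"
print(find_unresolvable_imports(generated))  # ['totally_made_up_pkg']
```

A check like this runs before the code ever executes, which is why hallucinated dependencies are one of the easier LLM failure modes to catch automatically; compilers and linters do the equivalent for statically typed languages.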
Regardless, the broader concerns he raises are definitely worth understanding and considering. Whether you’re an AI denier, doomer, or accelerationist, you know the impact will be felt. The disagreements are mostly around who will feel it the most, and who will gain or lose in the transition.