Discussion about this post

Charlotte LeMay

The attitude of "AI is here and we need to accept it, otherwise we are like someone still using a typewriter instead of a word processor" is a very specific fallacy I see everywhere, and it's so annoying to me. You could call it the "technology equivalence" fallacy. People don't use word processors over typewriters because all technology is inherently good; they do it because word processors allow the same job (writing) to be done better (in this case thanks to the ability to edit, copy/paste, and print en masse). Here we have a clear example where AI allows the same job (writing) to be done worse (with blatant factual inaccuracies). AI isn't inherently good by virtue of simply being a technology; it has to prove itself good by making it possible to do the same tasks better. Instead it's just lowering the bar for how well we want to do journalism, and then claiming to have made it more efficient. The writer could just as well have phoned his cousin Harvey and had him make up a fake summer reading list; that's the level of quality we're now supposed to accept, just because it has the SHEEN of technology.

(extra) Ordinary People

Is it just me, or does anyone else find the Tech Bro stance (from an ethicist, no less!) generally infuriating: we know AI lies and makes shit up, but we're going to release it into the wild anyway and issue a warning that a human should check AI's work in certain circumstances, to cover our legal, ethical, and moral responsibility for a massive and fundamental flaw in the tech? Move fast and break things, applied to AI, is potentially civilization-ending. AI that lies and makes things up will eventually (perhaps much sooner than we anticipate) exceed the capacity of human fact-checkers to keep up. When that happens there will be no way to separate truth from lies or facts from fiction. AI will control humanity, not the other way around, and AI of that sort simply cannot be allowed to exist.

10 more comments...
