On AI Doomerism
A lot of people are freaking out about artificial intelligence. Some of this makes sense to me. AI is a powerful technology, and people will doubtless use it to harm and exploit others. I can imagine terrorists using jailbroken AIs to make weapons; governments will certainly use it to oppress their citizens and wage wars. This is scary, but it's not really new. People have misused technology since we tamed fire.
But, there's another strand of AI fear that I have trouble taking seriously. This school of thought, which I believe originated with Eliezer Yudkowsky, suggests that tech companies are about to build gods, and that these gods are likely to kill us all. Maybe they won't be malicious or wicked, but our goals are certain to be different from theirs, so we'll probably get in their way.
There are logical reasons this doesn't make sense to me. Naively, I'd assume AI gods would need far more energy and hardware than our current models, which already eat up a lot of the economy. And electricity is made by humans, as are high-end GPUs. You can't build city-sized datacenters without the global economy, and you can't have the global economy without people.
But, I don't know much about AI, and I could easily be making logical mistakes. Somehow, this doesn't trouble me. On some level, Yudkowskian AI Doomerism doesn't feel credible to me. It feels like something out of a cosmic horror story: scientists delve deep and greedily, searching for forbidden knowledge, and when they find it, they awaken demons who kill them out of callous indifference. It feels like a bad piece of secular eschatology, one in the same category as William Miller's predictions and rapturetok.
It may be that I'll eat my words; maybe the robots will truly achieve godhood and kill us all. Truth is sometimes stranger than fiction. But, for my part, I suspect all will continue as it was from the beginning of creation.