Sam Harris spends 30 minutes talking about the dangers of AI.
He makes assumptions about the future, and I think he underestimates the difficulty of building a general AI.
I think that ChatGPT is overhyped. It is like a Wikipedia that can talk. It has no real understanding; it just predicts which words should follow other words based on statistical information. This is why it gets so much wrong: I asked it to write some computer code and the answer wasn't even remotely correct.
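To make that idea concrete, here is a minimal sketch in Python of "predicting the next word from statistics." It uses a tiny bigram table rather than anything like ChatGPT's huge neural network, and the corpus is a made-up example, but it shows how plausible-sounding text can come out of nothing but counts:

from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Pick the next word in proportion to how often it followed `word`."""
    counts = following.get(word)
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a few words starting from "the" -- the output looks plausible,
# but there is no understanding anywhere, only statistics.
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word) or "the"
print()

The output reads like fragments of the original sentences, which is exactly the point: fluency without comprehension.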
We will inevitably develop general AI, but AI is a tool to solve specific problems. We don't have to make an AI that matches human intelligence when it is more efficient to build problem-specific AI. Calculators can do math far better than I can, and even the best 8-bit chess computers can outplay me at chess. Insisting on human-level general AI would be like saying that when we developed mechanical locomotion, we needed a machine that functioned exactly like a horse. Instead, we found better ways to do locomotion.
This means that AI will be solving problems long before we have a general AI, but more importantly, we will be treating it as a tool, just like any other tool. For example, twenty years ago I was annoyed when Microsoft Word automatically corrected my spelling without asking me. It felt like the machines were already becoming smarter than us. That was a novel experience then; today we wouldn't think twice about it.