The closer AI's capabilities come to those of humans, the more they highlight the value of humanity.
This post was automatically translated from Chinese by an LLM. If you find any translation errors, please leave a comment to help me improve it. Thanks!
In a past interview, He Kaiming once mentioned an example:
"Why do you trust human drivers? When you take a taxi, the driver is essentially a stranger—you don't know them at all. You only know that the driver is human. So why do you still trust that driver? Is it because you think their brain is interpretable, or because you understand that, generally speaking, a well-trained and experienced human driver is highly likely to perform well based on empirical evidence?"
Evaluating driving ability this way is reasonable, and it might lead people to trust that an AI has reached a certain level of autonomous driving capability. Yet this logic still doesn't convince me to choose an AI-driven car over a taxi driven by a stranger.
Recently, while discussing human cognition and decision-making with Teacher PDD, I realized that when people make decisions, they don't actually deliberate over which option is better. Instead, they often rely on quick thinking and "intuition" derived from perceptions across various dimensions. Even when some thought is involved, it may still be grounded in such intuition or merely serve to rationalize it with a plausible process.
So why do I still instinctively prefer a human-driven car over an AI-driven one? I thought of an explanation:
I know and believe that humans are afraid of death and cling to life, but I don’t know if AI is the same.
Therefore, even if AI can far surpass human drivers in raw capability, existing AI still falls short in earning my trust. I know that, subjectively, a human does not want to get into an accident while driving. They might therefore drive slower and more cautiously, and in an emergency the survival instinct hardwired into their genes drives their body to avoid danger quickly. AI is different: I have no way of knowing its subjective intentions while driving. Is it optimizing for speed, for safety, for comfort, or for appearing intelligent to others?
This leads me to reflect on the past, when AI was less advanced (say, the pre-AlphaGo era) and we wondered when it would reach or surpass human capability and free up human productivity. Now, in certain fields, AI has far exceeded the average human level, and in some areas it can even match top-tier human performance.
In the past, we were more concerned with what AI could do. But as AI begins to accomplish certain tasks, I’ve noticed the focus of the question shifting: it’s now about what AI can do well for us.
Just because AI can do something doesn't mean it can do it well for us. We trust strangers to handle certain tasks not because we believe in their abilities (sometimes ability and task performance may even be negatively correlated 😀), but because we trust human nature: the tendency to seek advantage and avoid harm, the fear of death, and the desire for a peaceful life.
Now, I believe this question applies to AI as well. AI is highly capable—it can do many things—but how can we build trust in AI, especially when it comes to matters involving our lives, freedom, and equality?
So, how can I trust that AI won’t take me down with it?