Yann LeCun, Meta's chief AI scientist, recently said it is premature to worry about AI systems with human-level intelligence, commonly known as artificial general intelligence (AGI). He argued that no design for such a system currently exists, making it too early either to worry about AGI or to regulate it to prevent existential risks.
While many experts predict that AGI is still decades or even centuries away, governments have voiced concerns about its potential threat to humanity. LeCun counters that AI is not a natural phenomenon that will spontaneously become dangerous; because humans are the ones building these systems, humans can ensure they are safe. Drawing a parallel to turbojets, he points out that despite the many ways they could fail, engineers made them highly reliable before deploying them widely. The same principle, he suggests, applies to AI.
Beyond AGI, LeCun also weighed in on large language models (LLMs) such as ChatGPT. He stated that these models cannot reach human intelligence because they have a limited grasp of logic and can perform only as well as the data they are trained on. He considers LLMs inherently unsafe and suggests that researchers pursuing human-level AI explore alternative model types.
Both OpenAI and Meta, the parent company of Facebook, have confirmed their interest in developing AGI. OpenAI co-founder Sam Altman has previously said the company is committed to creating AGI regardless of the cost. Similarly, Meta CEO Mark Zuckerberg said in January that the company's long-term vision is to build general intelligence, open-source it responsibly, and make it widely accessible for the benefit of all.
The information in this article is based on a report by Ibiam Wayas for Cryptopolitan.