Earlier this month, Chinese artificial intelligence (A.I.) researchers at the Beijing Academy of Artificial Intelligence (BAAI) unveiled Wu Dao 2.0, the world’s biggest natural language processing (NLP) model. And it’s a big deal.
NLP is a branch of A.I. research that aims to give computers the ability to understand text and spoken words and respond to them in much the same way human beings can.
Last year, the San Francisco–based nonprofit A.I. research laboratory OpenAI wowed the world when it released its GPT-3 (Generative Pre-trained Transformer 3) language model. GPT-3 is a 175 billion–parameter deep learning model trained on text datasets containing hundreds of billions of words. A parameter is a value inside a neural network, adjusted during training, that weights each piece of input data more or less heavily; collectively, these learned weights give the network its perspective on the data.
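To make that concrete, here is a minimal, illustrative sketch (not from the article; written in Python with NumPy, and using made-up toy numbers) of what "parameters" are: the weights and biases of even a single tiny layer, which large models like GPT-3 multiply into the hundreds of billions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one dense layer mapping a 4-number input to a 3-number output.
# Its parameters are the 4x3 weight matrix plus 3 biases: 15 values in total,
# all of which would be adjusted during training.
weights = rng.normal(size=(4, 3))   # learned weights
biases = np.zeros(3)                # learned biases

def forward(x):
    """Apply the layer: each output is a weighted sum of the inputs plus a bias."""
    return x @ weights + biases

num_parameters = weights.size + biases.size
print(f"This toy model has {num_parameters} parameters.")
print("GPT-3 has roughly 175,000,000,000; Wu Dao 2.0 reportedly has 1,750,000,000,000.")

# Example: run one input through the layer.
x = np.array([1.0, 0.5, -0.2, 2.0])
print("Output:", forward(x))
```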
Back in November, The New York Times reported that GPT-3 “generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs, all with very little prompting.” Move over, GPT-3. Wu Dao 2.0 is here.
Wu Dao 2.0 (“Wu Dao” is Chinese for “enlightenment”) is ten times larger than GPT-3, using 1.75 trillion parameters to simulate conversational speech, write poems, understand pictures, and even generate recipes. In addition, as the South China Morning Post reports, Wu Dao 2.0 is multimodal, covering both Chinese and English with skills acquired by studying 4.9 terabytes of images and texts, including 1.2 terabytes each of Chinese and English texts.
“Wu Dao 2.0’s multimodal design affords it a range of skills, including the ability to perform natural language processing, text generation, image recognition, and image generation tasks,” reports VentureBeat. “It can write essays, poems, and couplets in traditional Chinese, as well as captioning images and creating nearly photorealistic artwork, given natural language descriptions.” In addition, Wu Dao 2.0 can predict the 3D structures of proteins, like DeepMind’s AlphaFold, and can also power “virtual idols.” Just recently, BAAI researchers unveiled Hua Zhibing, China’s first A.I.-powered virtual student.
In 1950, British mathematician and computer scientist Alan Turing proposed a test of machine intelligence: a computer passes if a human interrogator, judging only by the replies to questions posed to both, cannot reliably tell the machine from another human being.
Would Wu Dao 2.0 pass this test? Perhaps. “Wu Dao 2.0 aims to enable machines to think like humans and achieve cognitive abilities beyond the Turing test,” said Jie Tang, the lead researcher behind Wu Dao 2.0. Even if Wu Dao 2.0 does not, the rapid progress in language models makes it clear that one soon will.
“The way to artificial general intelligence is big models and big computer [sic],” BAAI chair Dr. Hongjiang Zhang added in a statement. “What we are building is a power plant for the future of AI. With mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”
“The technology could be used as an intelligent assistant — less Siri, more Janet from The Good Place — to help people navigate life admin tasks, such as applying for a new driver’s license,” notes Politico Europe, adding, however, that there are darker possibilities. “Critics also say that these language models could be used as a surveillance tool, for example, to analyze social media for critical comments, or as powerful disinformation and propaganda machines.”