Tech Giants Race Toward “Safe” Superintelligence - But Experts Warn It May Be Impossible to Control


More tech firms are targeting safe and responsible superintelligence. But is that even possible? 


Last week, Microsoft announced a new unit focused on superintelligence, led by the company’s AI chief, Mustafa Suleyman. The goal, Suleyman said in a blog post, is to create superintelligent AI that keeps humans in the driver's seat and to harness the technology in the service of humanity. 


In the announcement, Suleyman noted that the unit will work towards “Humanist Superintelligence,” or “systems that are problem-oriented and tend towards the domain specific.” 


“We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity,” Suleyman wrote. 


Microsoft isn’t the first organization claiming to eye superintelligence for good. 


Ilya Sutskever, OpenAI’s cofounder, launched a lab called Safe Superintelligence last September and has since nabbed $2 billion in funding at a valuation of $32 billion without a product to show for it. 


SoftBank’s vision for the future involves “Artificial Super Intelligence” that’s 10,000 times smarter than human wisdom, with CEO Masayoshi Son claiming it’s pivotal for “the evolution of humanity.” 


The problem, however, is that there is no way to know whether superintelligence can be controlled, Bob Rogers, chief product and technology officer of Oii.ai and co-founder of BeeKeeper AI, told The Deep View. 


Though these companies paint a rosy picture of superintelligence that we can pilot and use however we wish, viewing it this way may be “idealistic and naive,” Rogers said, especially given that evidence of behavior such as introspection is cropping up in existing models, raising questions about self-awareness. Machines smarter than humans could, for example, work around kill switches or other defense mechanisms if they were determined to, he noted. 


“Experts are kind of thinking that there's emergent behavior in these things because they are so complex,” said Rogers. 


Microsoft’s vision differs slightly from those of its competitors in its focus on “domain-specific” solutions, Rogers noted. However, domain-specific superintelligence may be an oxymoron, he said: the technology is either superintelligence, which by definition isn't niche (and potentially not containable), or it's simply good, purpose-built AI, which isn't superintelligence.



As it stands, superintelligence doesn’t have a stellar reputation. In recent weeks, a petition by the Future of Life Institute has even called for a ban on the development of superintelligence, receiving nearly 90,000 signatures from names spanning AI, politics, business and media. Promising safety, even if these firms can’t live up to those promises, could be a marketing technique. Otherwise, Rogers noted, “The villagers would come up the hill with pitchforks and torches.”
