Can humans decide the values and culture of AI?

“We’re currently in the midst of the first of three AI revolutions,” said Ben Goertzel (CEO at SingularityNET) at #LEAP22. “And it’s just going to get more exciting from here on out.”

That first revolution – the one we’re in right now – is the narrow AI revolution. We have AI models that can do a really good job of carrying out particular, pre-specified tasks. Sometimes they perform those tasks better than a human could, and the speed at which AI can organise and analyse data is already proving invaluable in areas like business decision-making.

The second revolution, Goertzel said, will be in artificial general intelligence (AGI). We’re not far off it – Ray Kurzweil (Director of Engineering at Google) has predicted that AGI will be operational by 2029. It’ll involve AIs that can carry out a wider variety of tasks, including tasks they haven’t been specifically programmed to do; they’ll be able to imagine solutions to problems, and learn independently enough to outsmart human beings. 

Then from AGI, we’ll move on to ASI: artificial superintelligence. This is when the world of Blade Runner (the film adapted from Philip K. Dick’s Do Androids Dream of Electric Sheep?) becomes reality and machines go far beyond the realm of human intelligence – and, possibly, beyond the limits of human control.

When AI reaches this level, the questions we need to ask are less about tech and more about fundamental values. What kind of culture will AGI and ASI develop for itself? How much (or little) involvement will humans have in AI values and culture? How can we make sure we don’t end up in a dystopian sci-fi thriller and perish at the hands of robots who don’t value us any more than we value the insects that crawl, unwittingly, over our picnic tables? 

Acknowledging diversity in artificial intelligence

“We can’t even fully plan for the future of the technology business or the cryptocurrency business,” Goertzel said, “let alone for the workings of superhuman AGIs. Nevertheless, we can prepare ourselves in some ways. We can collect as much data as possible, and we can broaden our minds so as to be able to adapt as well as possible to the unknown future.” 

This broadening of minds is crucial. And it’s crucial that we start doing it now, while AI tech is still in research and development – because by the time AI is capable of learning and expanding independently of us, it could all happen too quickly for us to catch up. 

And one of the first things we need to get our human heads around is that AIs will be diverse. When ASIs are among us, there won’t just be one system that works in one way. Already, with narrow AI, the systems that exist and interact with our data and with each other are numerous and varied. 

And diversity in AI also means there’ll be a wide range of motivations behind AI behaviour. 

There will be AGIs that are embodied in humanoid forms – like Sophia, the Hanson Robotics humanoid developed by Goertzel and colleagues, who became the world’s first robot citizen when Saudi Arabia granted her citizenship in 2017. Goertzel plans to develop her further with AGI capabilities: “We’re looking at using these robots as vehicles for AGI research,” he said, “But because they’re in pretty human-like bodies and their goal is to communicate with people and help people, you would expect that the motivation, the consciousness, the mindset of these robots and the AGI systems associated with these robots vaguely approximates human emotions.”

But there will also be AGIs that are very different from human beings. “I’m leading the SingularityNET project, which is a decentralised blockchain-based AI network – a heterogeneous pool of AI agents that anyone in the world can put online,” Goertzel explained. “And they can communicate with each other by APIs, and aggregate together to help solve problems and help each other solve problems.”

“This is a sort of AGI primordial soup if you will, out of which different AIs may self-organise, conditioned by providing services to humans.” 
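
Goertzel’s “primordial soup” is easier to picture with a toy model: independent agents advertise a service, and a request gets solved by chaining several of them together. The Python sketch below is purely illustrative – the AgentPool registry, the service names, and the handlers are all invented for this example, and they are not the real SingularityNET interfaces.

```python
# Toy model of a heterogeneous agent pool in the spirit of the network
# Goertzel describes: independent agents register the services they offer
# and aggregate to solve a request no single agent handles alone.
# Purely illustrative -- not the actual SingularityNET API.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Agent:
    name: str
    service: str                   # the capability this agent advertises
    handler: Callable[[str], str]  # how it transforms an input


@dataclass
class AgentPool:
    """A simple registry standing in for the decentralised network."""
    agents: Dict[str, List[Agent]] = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        # Anyone can put an agent online; the pool just indexes it by service.
        self.agents.setdefault(agent.service, []).append(agent)

    def solve(self, pipeline: List[str], payload: str) -> str:
        # Aggregate agents: route the payload through one agent per requested
        # service, each building on the previous result. A real network would
        # discover and select agents; here we naively take the first match.
        for service in pipeline:
            candidates = self.agents.get(service)
            if not candidates:
                raise LookupError(f"no agent offers '{service}'")
            payload = candidates[0].handler(payload)
        return payload


pool = AgentPool()
pool.register(Agent("summariser-1", "summarise", lambda t: t[:40] + "..."))
pool.register(Agent("translator-1", "translate", lambda t: f"[fr] {t}"))

# Two independent agents cooperate on a task neither solves alone.
print(pool.solve(["summarise", "translate"], "A long document about AGI " * 5))
```

The point of the sketch is the open-ended structure: nothing fixes in advance which agents exist, what they do, or how they combine – which is exactly why Goertzel argues the resulting “organism” needs a different motivational theory than a human one.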

But the body of this network is many different computers and phones connected via the internet. There’s no human-like physical structure here. So then, what’s the goal? What motivations will a network like this be guided by? 

“There’s no single goal,” Goertzel said. “There’s intrinsic motivations of solving problems and emerging in complexity and seeking novelty, but it’s a quite different cognitive organism than a human being, and you would need a different sort of motivational theory.”

It’s hard to imagine: cognitive, intelligent beings that were developed by us but have since developed themselves beyond us, that function with completely different sets of motivations, goals, and desires from us. But imagine it we must – because that’s what we’re creating. 

How can we influence the future of AI culture?

It’s inevitable that as we develop AI models and release them into the world to work on our behalf, we’ll influence the values and culture they go on to develop. A key challenge is how we can help to shape that culture without restricting it to the current limitations of human culture. “We don’t want to fix the AGI to forever have the values and culture that we have right now,” Goertzel noted, “but nor do we want it to go in a totally different direction and, say, treat us like we treat the ants in the ground when we mow down the landscape to build a new house.”

We want AI values and culture to develop in a way that complements our own, and that can exist in peaceful, supportive harmony with humans.

For Goertzel, the most rational way to try to achieve that is to raise AGIs like children; to set them to tasks that are most beneficial and meaningful from a human perspective; and to immerse them in the richness of human life: “the more of the diversity of human culture and values you get into the emerging AGI mind, the more likely it’s going to evolve together with us.” 

Can we decide what values and culture future AI will have?

Nope. But we can play a role in encouraging it to go in a direction that works for us. No matter what, we will influence AI culture – but our influence will be as implicit (in the way we use and interact with AI) as it will be intentional. Or perhaps more implicit than intentional.

So let’s agree to use AI for good, positive, caring purposes. To focus it on purposes that serve humanity and facilitate collaboration, respect, tolerance, supportiveness, and human wellbeing. That way, the implicit messages we’ll be giving AI – the messages that will influence its future culture, whether we mean them to or not – will work for the good of our species, not towards our demise.
