A gaming platform lives or dies by the number and quality of the games available on it. That, in turn, depends on attracting developers and publishers willing to bet on the platform. Although Stadia has been around for more than a year now, its long-term survival has always been in question because of uneven developer and publisher support. Now developers are once again questioning Google’s commitment to the cloud-based game streaming platform, thanks to an important change to Stadia Pro’s revenue model.
At first glance, the changes Google announced last week seem to favor all developers. New games launching from October 1st, 2021, until some time in 2023 will see an 85/15 split between developer and Stadia, at least on the first $3 million in sales. The arrangement won’t last forever, though; sooner or later it reverts to the “standard” 70/30 revenue share.
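As a rough illustration of the tiered split, the following sketch computes a developer’s cut under the simplifying assumption that the 85/15 rate applies to the first $3 million in sales and the standard 70/30 rate to everything beyond it (real payouts would also involve taxes, refunds, and regional pricing):

```python
def developer_cut(gross_sales: float) -> float:
    """Developer's share under the announced tiered split:
    85% on the first $3 million in sales, 70% on everything beyond.
    A simplified sketch; actual payout mechanics are more involved."""
    THRESHOLD = 3_000_000
    promo_portion = min(gross_sales, THRESHOLD)
    standard_portion = max(gross_sales - THRESHOLD, 0)
    return 0.85 * promo_portion + 0.70 * standard_portion

# A game grossing $5M: 0.85 * $3M + 0.70 * $2M = $3.95M for the developer
print(developer_cut(5_000_000))
```

The promotional tier matters most for small and mid-sized titles, which may never sell past the $3 million threshold and so keep the 85% rate on everything.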
The problem, however, is the change Google is making for games offered for free on the Stadia Pro tier. Instead of the upfront payments Google promised in its previous terms, developers will instead get 70% of revenue apportioned by player engagement. In other words, the more days a Stadia Pro game is played in a month, the higher that revenue and the larger the cut its developer will make.
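Google hasn’t published the exact formula, but one plausible reading is that 70% of a monthly Pro revenue pool gets divided among games pro rata by days played. The pool size, the engagement metric, and the pro-rata division in this sketch are all assumptions, not Google’s confirmed mechanics:

```python
def pro_payouts(monthly_pool: float, days_played: dict[str, int]) -> dict[str, float]:
    """Hypothetical model of the engagement-based Stadia Pro payout:
    70% of a monthly revenue pool divided among games in proportion
    to the number of days each was played. The pool and the pro-rata
    formula are assumptions; Google has not published the mechanics."""
    developer_share = 0.70 * monthly_pool
    total_days = sum(days_played.values())
    return {game: developer_share * days / total_days
            for game, days in days_played.items()}

payouts = pro_payouts(1_000_000, {"live_service_game": 900,
                                  "finished_single_player": 100})
# Under this model the always-online title earns nine times as much
# as the game players finished and set aside.
```

Even this toy model shows why developers of finite, single-player games are nervous: once engagement drops, so does the payout, regardless of how well the game was received.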
The catch is that this new revenue-sharing model may be skewed to favor only certain types of games. Online service games like PUBG have a clear advantage: there is no definitive end, and content keeps being added. Engagement with single-player titles, or even co-op games with finite content, will most likely taper off at some point.
It remains to be seen whether this new business model will end up being more profitable for developers, especially once they factor in direct sales outside of Stadia Pro. The changes, however, are already adding to developers’ worries, on top of fears that Google might pull the plug at any given moment, as Google so often does.
Some weeks ago, a nine-year-old macaque monkey called Pager successfully played a game of Pong with his mind.
While it may sound like science fiction, the demonstration by Elon Musk’s neurotechnology company Neuralink is an example of a brain-machine interface in action (and has been done before).
A coin-sized disc called a “Link” was implanted by a precision surgical robot into Pager’s brain, connecting thousands of micro threads from the chip to neurons responsible for controlling motion.
Brain-machine interfaces could bring tremendous benefit to humanity. But to enjoy the benefits, we’ll need to manage the risks down to an acceptable level.
A perplexing game of Pong
Pager was first shown how to play Pong in the conventional way, using a joystick. When he made a correct move, he’d receive a sip of banana smoothie. As he played, the Neuralink implant recorded the patterns of electrical activity in his brain. This identified which neurons controlled which movements.
The joystick could then be disconnected, after which Pager played the game using only his mind — doing so like a boss.
This Neuralink demo built on an earlier one from 2020 involving Gertrude the Pig. Gertrude had a Link installed and its output recorded, but was not assessed on any specific task.
According to Neuralink, its technology could help people who are paralysed with spinal or brain injuries, by giving them the ability to control computerized devices with their minds. This would provide paraplegics, quadriplegics and stroke victims the liberating experience of doing things by themselves again.
Prosthetic limbs might also be controlled by signals from the Link chip. And the technology would be able to send signals back, making a prosthetic limb feel real.
Cochlear implants already do this, converting external acoustic signals into neuronal information, which the brain translates into sound for the wearer to “hear”.
Neuralink has also claimed its technology could remedy depression, addiction, blindness, deafness and a range of other neurological disorders. This would be done by using the implant to stimulate areas of the brain associated with these conditions.
Brain-machine interfaces could also have applications beyond the therapeutic. For a start, they could offer a much faster way of interacting with computers, compared to methods that involve using hands or voice.
A user could type a message at the speed of thought and not be limited by thumb dexterity. They’d only have to think the message and the implant could convert it to text. The text could then be played through software that converts it to speech.
Perhaps more exciting is a brain-machine interface’s ability to connect brains to the cloud and all its resources. In theory, a person’s own “native” intelligence could then be augmented on demand by accessing cloud-based artificial intelligence (AI).
Human intelligence could be greatly multiplied by this. Consider for a moment if two or more people wirelessly connected their implants. This would facilitate a high-bandwidth exchange of images and ideas from one to the other.
In doing so they could potentially exchange more information in a few seconds than would take minutes, or hours, to convey verbally.
But some experts remain skeptical about how well the technology will work, once it’s applied to humans for more complex tasks than a game of Pong. Regarding Neuralink, Anna Wexler, a professor of medical ethics and health policy at the University of Pennsylvania, said:
neuroscience is far from understanding how the mind works, much less having the ability to decode it.
Can Neuralink be hacked?
At the same time, concerns about such technology’s potential harm continue to occupy brain-machine interface researchers.
Without bulletproof security, it’s possible hackers could access implanted chips and cause a malfunction or misdirection of its actions. The consequences could be fatal for the victim.
Some may worry a powerful AI working through a brain-machine interface could overwhelm and take control of the host brain.
The AI could then impose a master-slave relationship and, the next thing you know, humans could become an army of drones. Elon Musk himself is on record saying artificial intelligence poses an existential threat to humanity.
He says humans will need to eventually merge with AI, to remove the “existential threat” advanced AI could present:
My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false.
“If you can’t beat em, join em” (Neuralink mission statement)
Musk has famously compared AI research and development with “summoning the demon”. But what can we reasonably make of this statement? It could be interpreted as an attempt to scare the public and, in so doing, pressure governments to legislate strict controls over AI development.
Musk himself has had to negotiate government regulations governing the operations of autonomous and aerial vehicles such as his SpaceX rockets.
The crucial challenge with any potentially volatile technology is to devote enough time and effort into building safeguards. We’ve managed to do this for a range of pioneering technologies, including atomic energy and genetic engineering.
Autonomous vehicles are a more recent example. While research has shown the vast majority of road accidents are attributed to driver behaviour, there are still situations in which AI controlling a car won’t know what to do and could cause an accident.
Years of effort and billions of dollars have gone into making autonomous vehicles safe, but we’re still not quite there. And the travelling public won’t be using autonomous cars until the desired safety levels have been reached. The same standards must apply to brain-machine interface technology.
It is possible to devise reliable security to prevent implants from being hacked. Neuralink (and similar companies such as NextMind and Kernel) have every reason to put in this effort. Public perception aside, they would be unlikely to get government approval without it.
Last year the US Food and Drug Administration granted Neuralink approval for “breakthrough device” testing, in recognition of the technology’s therapeutic potential.
Moving forward, Neuralink’s implants must be easy to repair, replace and remove in the event of malfunction, or if the wearer wants it removed for any reason. There must also be no harm caused, at any point, to the brain.
While brain surgery sounds scary, it has been around for several decades and can be done safely.
When will human trials start?
According to Musk, Neuralink’s human trials are set to begin towards the end of this year. Although details haven’t been released, one would imagine these trials will build on previous progress. Perhaps they will aim to help someone with spinal injuries walk again.
The neuroscience research needed for such a brain-machine interface has been advancing for several decades. What was lacking was an engineering solution that solved some persistent limitations, such as having a wireless connection to the implant, rather than physically connecting with wires.
The prospect has already drawn hopeful volunteers. One person tweeted at Musk:
“Hi @elonmusk, I’ve been thinking for a long time how to write this to you – but I’ll keep it very simple: I was in a car accident 20 years ago and have been paralyzed from the shoulders ever since. I’m always available for clinical studies at @Neuralink.”
A few days ago I was having network problems with the WiFi in my home office — my connection was very slow and video conferences were freezing. After fussing with the mesh network extenders with no result, I called the cable provider. Normally, this involves having to navigate several automated voice response menus before a call center representative comes on the line to help. I explain the problem and they then run a remote diagnostic and usually suggest a cable modem reset. Often, that solves the problem.
But in my latest attempt to get back online, the menus had changed and speaking to a representative was no longer an option. Just, “press 1 to reset the modem.” And just like that, it worked.
Something is lost in this process, but something is gained. The human was removed, and the problem was resolved — probably in less time. Explaining the problem to a human had often been a source of frustration, due to language challenges or possibly my poor descriptive abilities. In fact, some of my worst customer service experiences have been with this cable company. But with this change it was hard to miss the advance in automation, and I had to acknowledge the role of AI as a key enabler. This story is, in fact, an example of what is now often called intelligent automation: the combination of artificial intelligence and automation that synthesizes vast amounts of information to automate entire processes or workflows.
I also had to wonder what happened to the call center representative. Did they go on to one of those positions we hear about that entail higher strategy tasks? Or perhaps they received a layoff notice. I tried not to think of their personal circumstances, about whether they could readily find other work or would face real hardships. Then again, maybe this will free this worker to pursue a more interesting opportunity.
This dual nature of automation — the gains in efficiency and productivity alongside the potential human impacts — is the stuff of anxious dreams, because we keep hearing the same two stories: many jobs will disappear, while new professions will emerge to replace them. The anxiety lives in the gap between the two, in wondering and worrying about what this new reality will bring.
What if new jobs do not materialize?
Even if these new professions do not materialize, the argument goes, there is no need to worry, because AI and automation will generate so much wealth that every adult will receive a monthly stipend, much as Alaska residents receive royalties from oil. At least that is the point of view of OpenAI co-founder and CEO Sam Altman, expressed in a recent blog post, where he writes that we are witnessing a “recursive loop of innovation” that is both accelerating and unstoppable. Altman goes on to argue that the AI revolution will generate enough wealth for everyone to have what they need, spinning off dividends of $13,500 a year.
It could be that his view is inspired and filled with a generosity of spirit, or it could be disingenuous. The Universal Basic Income Altman envisions would be great as a bonus but a poor Faustian bargain if in the process many join the ranks of the long-term unemployed. Several other people have pointed to the flaws in his proposal. For example, Matt Prewitt, president of non-profit RadicalxChange, commented: “The [Altman] piece sells a vision of the future that lets our future overlords off way too easy, and would likely create a sort of peasant class encompassing most of society.”
The prospect of a permanent underclass brought about by AI and automation is increasingly portrayed in fiction looking out on the next 20 to 50 years. In The Resisters, a novel by Gish Jen, unemployed people are deemed “Surplus,” meaning there is no work for them. Instead, they are issued a Universal Basic Income at levels just above subsistence. In the new novel Klara and the Sun from Nobel Prize-winning author Kazuo Ishiguro, large swaths of the population have been “substituted” by automation. The novel describes how a growing income disparity between those with jobs and those without leads to a fracturing of society, with increasing tribalism and fascist ideology. Burn-In, a novel from P. W. Singer and August Cole, describes growing automation that has taken millions of jobs and left many people fearful that the future is leaving them behind. In their extensively researched novel, which references only technology that already exists or is far along in development, AI has advanced so far that once-safe fields such as law and finance have been taken over by algorithms, leading to political backlash as large numbers of people become radicalized in extreme virtual communities.
Taken together, these portrayals of the not-too-distant future are a long way from Altman’s utopia.
It remains to be seen where automation will lead us. Perhaps what will determine the true tipping point is how our institutions respond to this new reality as it accelerates and evolves. Altman warns that if in response to these changes “public policy doesn’t adapt accordingly, most people will end up worse off than they are today.”
Are employment worries justified?
Not everyone is concerned about AI and automation. On the one hand, it is broadly granted that the COVID-19 pandemic accelerated automation and reduced employment, what the World Economic Forum describes as a “double-disruption” scenario for workers leading to growing inequality. On the other hand, some argue there will be changes in the types of available work — and some people will be displaced (like my call center representative) — but overall employment will not be greatly impacted. After all, as these arguments often go, this is what has happened in prior technology revolutions. According to Richard Cooper, the Maurits C. Boas Professor of International Economics at Harvard University, “new technology often destroys existing jobs, but it also creates many new possibilities through several different channels.” Cooper says those new opportunities can take decades to emerge, though, which doesn’t square with the pace of post-COVID job losses. Others argue that dystopian predictions about automation are fraught with exaggerated timelines and that the feared robot apocalypse is still far away.
Most likely, the full impact of automation will not be seen until some years into the future. That is the conclusion of a PwC study from a couple of years ago that described several waves of automation. During the first wave, the authors expect relatively low displacement, “perhaps only around 3% by the early 2020s,” with far more substantial impacts coming over the next 10 to 15 years. This could explain why the debate about the impact still seems more theoretical than pressing. During the first and second waves, women could be at greater risk of automation due to their higher representation in clerical and other administrative functions, but later automation will put more men at risk. It’s worth underscoring that PwC conducted this analysis pre-COVID, so its conclusions don’t account for the rapid uptake of automation over the past year and how this could further accelerate the waves of automation going forward.
Source: PwC estimates by gender based on OECD PIAAC data (median values for 29 countries)
Nevertheless, automation’s impact can already be seen and felt, and anxiety about it is permeating society. It is not only those doing routine work who are at risk, but increasingly those in white-collar professions. A recent PwC survey of employees worldwide revealed that “60% are worried that automation is putting many jobs at risk; 48% believe ‘traditional employment won’t be around in the future,’ and 39% think it is likely that their job will be obsolete within five years.”
The longer-term impact of AI and automation on work is not really in doubt. Many positions will be disrupted and people replaced, even as other employment opportunities are created. The net effect is likely to be positive for the economy, which is good news for economists and corporate shareholders. But whether it is positive for a large percentage of the population, or instead produces a sizable permanent underclass, is very much to be determined. Automation will not likely bring about either utopia or dystopia. Instead, it will lead to both, with different groups experiencing these very different realities.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.