It’s been a weird 24 hours for those who purchased Grand Theft Auto: The Trilogy – The Definitive Edition on PC. The game launched yesterday on PS4, PS5, Xbox One, Xbox Series X|S, PC, and Nintendo Switch, but it wasn’t long before Rockstar pulled the game from sale on PC and disabled the Rockstar Games Launcher on the platform. With long periods of silence from Rockstar, it’s been difficult to figure out just what is happening – or when the trilogy will return.
The case of the vanishing GTA games
If you head over to the Rockstar website at the moment, you’ll see that the PC version of Grand Theft Auto: The Trilogy – The Definitive Edition is no longer up for sale. It was removed entirely from Rockstar’s store shortly after it launched yesterday, and it’s been unobtainable since then. Rockstar so far has given no reason for pulling the PC version from sale, and we have no idea when it might be available again.
The Rockstar Games Launcher has been down for much of the last day as well. Since the Launcher is the only way to play Grand Theft Auto: The Trilogy – The Definitive Edition on PC, even those who managed to purchase the PC version before it was pulled from sale can’t play it.
Services for the Rockstar Games Launcher and supported titles are temporarily offline for maintenance. Services will return as soon as maintenance is completed.
The Rockstar Support Twitter account first notified users that it was taking the Rockstar Games Launcher offline for “maintenance” around 20 hours ago. In the time since then, it has only posted one update thanking fans for their patience as Rockstar works to restore service. For now, we have no idea when the Rockstar Games Launcher and the PC titles that need it will be functional again.
A far-reaching problem
More titles beyond Grand Theft Auto: The Trilogy – The Definitive Edition are impacted by the Rockstar Games Launcher being taken offline. Obviously, players can’t access any PC games they purchased directly through Rockstar while the Launcher is offline, but it gets even worse than that. As Kotaku points out, certain Rockstar games available through Steam require the Launcher as well – notably Red Dead Redemption 2 and GTA Online – meaning those have been inaccessible all this time, too.
It’s hard to get a handle on what’s going on here simply because Rockstar has been so quiet. Even if the Rockstar Games Launcher requires a full day of maintenance – which is unusual but not unheard of – why was the PC version of the GTA Trilogy delisted from Rockstar’s website? Will the game be relisted once the Launcher maintenance is over?
So far, it seems the fan reactions to the GTA trilogy have been mixed at best. The compilation offers remasters of three PS2-era Grand Theft Auto titles: GTA III, GTA: Vice City, and GTA: San Andreas. Some fans, however, have taken to social media to express their frustration with the apparent quality of the remasters. Perhaps the PC version was delisted in response to those criticisms? We’ll have to wait for Rockstar to provide an update and clarify the matter. Assuming it does so, we’ll let you know what the company says.
VCs have a detailed playbook for investing in software-as-a-service (SaaS) companies that has served them well in recent years. Successful SaaS businesses provide predictable, recurring revenue that can be grown by acquiring more subscriptions at little additional cost, making them an attractive investment.
But the lessons that VCs have learned from their SaaS investments turn out not to be applicable to the world of artificial intelligence. AI companies follow a very different trajectory from SaaS providers, and the old rules simply aren’t valid.
Here are four things VCs get wrong about AI because of their past success investing in SaaS:
1. ARR growth is not the best indicator of long-term success in AI
Venture capitalists continue to pour money into AI companies at an astonishing — some might say ridiculous — rate. Databricks has raised a staggering $3.5 billion in funding, including a $1 billion Series G in February, followed six months later by a $1.6 billion Series H in August at a $38 billion valuation. DataRobot recently announced a $300 million Series G financing round, bringing its valuation to $6.3 billion.
While the private market is crazy for AI, the public market is showing signs of more rational behavior. Publicly traded C3.ai has lost 70% of its value relative to the all-time high it notched immediately after its IPO in December 2020. In early September 2021, the company released fiscal Q1 results that disappointed investors and sent the stock down nearly another 10%.
So what’s going on? The private markets — funded by VCs — fundamentally do not understand AI. The fact is, AI is not hard to sell. But AI is quite hard to implement in a way that actually delivers value.
Ordinarily in SaaS, the real peril is market risk — will customers buy? That’s why private markets have always been organized around looking at annual recurring revenue (ARR) growth. If you can show fast ARR growth, then clearly customers want to buy your product and therefore your product must be good.
But the AI market doesn’t work like that. In the AI market, many customers are willing to buy because they’re desperate for a solution to their pressing business problems and the promise of AI is so big. So VCs keep pouring money into the likes of Databricks and DataRobot, driving them to absurd valuations, without stopping to consider that billions of dollars are going into these companies to create, at best, hundreds of millions in ARR. It’s brute-force funding of an already over-hyped market. And the fact remains that these companies have failed to produce results for their customers on a systematic basis.
A report from Forrester sheds some interesting light on what’s really happening behind the numbers claimed by some AI companies with these huge valuations. Databricks reported that four customers had a three-year net positive ROI of 417%. DataRobot had four customers that generated a 514% return over three years. The problem is that out of the hundreds of customers these companies have, they almost certainly cherry-picked some of their very best customers for these analyses, and even those returns are not that impressive. Their best customers are barely doubling their money each year, hardly an ideal scenario for a transformative technology that should deliver at least 10x back on your investment.
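To make the arithmetic concrete, here is a rough back-of-the-envelope annualization of those three-year ROI figures. It assumes (my assumption, not something the report spells out) that an X% net ROI over three years means the investment returned 1 + X/100 times its cost in total, compounded evenly across the three years.

```python
# Rough annualization of the three-year ROI figures cited above. Assumes an
# "X% net ROI over three years" means the investment returned (1 + X/100) times
# its cost in total, compounded evenly across the three years.

def annualized_multiple(net_roi_pct: float, years: int = 3) -> float:
    total_multiple = 1 + net_roi_pct / 100   # e.g. 417% -> 5.17x over three years
    return total_multiple ** (1 / years)     # per-year multiple

for vendor, roi in [("Databricks", 417), ("DataRobot", 514)]:
    print(f"{vendor}: about {annualized_multiple(roi):.2f}x per year")

# Prints roughly 1.73x and 1.83x per year: not even a doubling annually,
# let alone the 10x you'd hope for from a transformative technology.
```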
Rather than focusing on the most important factor — whether customers are getting tangible value out of AI — VCs are obsessing over ARR growth. The fastest way to get to ARR expansion is brute-force sales, selling services to cover the gaps because you don’t have the time to build the right product. That is why you see so many consulting toolkits masquerading as products in the data science and machine learning market.
2. A minimum viable product isn’t the way to test the market
From the world of SaaS, VCs learned to value the minimum viable product (MVP), an early version of a software product with just enough features to be usable so that potential customers can provide feedback for future product development. VCs have come to expect that if customers would buy the MVP, they will buy the full-version product. Building an MVP has become standard operating procedure in the world of SaaS because it shows VCs that customers would pay money for a product that addressed a specific problem.
But that approach doesn’t work with AI. With AI, it’s not a question of building an MVP to find out whether people will pay. It’s really a question of finding out where AI can create value. Put another way, it’s not about testing product-market fit; it’s about testing product-value delivery. Those are two very different concepts.
3. Successful AI pilots don’t always mean successful real-world outcomes
Another rule that VCs have adopted from the world of SaaS is the notion that a successful pilot means a successful outcome. It’s true that if you have successfully piloted a SaaS product like Salesforce with a small group of salespeople under controlled conditions, you can reasonably extrapolate from the pilot and get a clear view of how the software will perform in widespread production.
But that doesn’t work with AI. The way AI performs in the lab is fundamentally different from what it does in the wild. You can run an AI pilot based on cleaned-up data and find that if you follow the AI predictions and recommendations, your company will theoretically make $100 million. But by the time you put the AI into production, the data has changed. Business conditions have changed. Your end users may not accept the recommendations of the AI. Instead of making $100 million, you may actually lose money, because the AI leads to bad business decisions.
You can’t extrapolate from an AI pilot in the way that you can with SaaS.
4. Signing up customers for long-term contracts isn’t a good indicator the vendor’s AI works
VCs like it when customers sign up for long-term contracts with a vendor; they see that as a strong indicator of long-term success and revenue. But that’s not necessarily true with AI. The value created by AI grows so fast and is potentially so transformative that any vendor who truly believes in their technology isn’t trying to sell a three-year contract. A confident AI vendor wants to sell a short contract, show the value created by the AI, and then negotiate price.
The AI vendors that put a lot of effort into locking up customers to long-term contracts are the ones who are afraid that their products won’t create value in the near term. What they’re trying to do is lock in a three-year contract and then hope that somewhere down the line the product will become good enough that value will finally be created before renewal discussions happen. And often, that never happens. According to a study by MIT/BCG, only 10% of enterprises get any value from AI projects.
VCs have been trained to think that any vendor that signs lots of long-term contracts must have a better product, when in the world of AI, the opposite is true.
Getting smart about AI
VCs need to get smart about AI and not rely on their old SaaS playbooks. AI is a rapidly developing transformative technology, every bit as much as the Internet was in the 1990s. When the Internet was emerging, one of the lucky breaks we got was that VCs did not obsess over the profitability or revenues of Internet companies in order to invest in them. They basically said, “Let’s look at whether people are getting value from the technology.” If people adopt the technology and get value from it, you don’t have to worry a lot about revenue or profitability at the beginning. If you create value, you will make money.
Maybe it’s time to bring that early Internet mindset to AI and start evaluating emerging technologies based on whether customers are getting value rather than relying on brute-forced ARR figures. AI is destined to be a game-changing technology, every bit as much as the Internet. As long as businesses get sustained value from AI, it will be successful — and very profitable for investors. Smart VCs understand this and will reap the rewards.
The cybersecurity industry is rapidly embracing the notion of “zero trust”, where architectures, policies, and processes are guided by the principle that no one and nothing should be trusted.
However, in the same breath, the cybersecurity industry is incorporating a growing number of AI-driven security solutions that rely on some type of trusted “ground truth” as a reference point.
How can these two seemingly diametrically opposing philosophies coexist?
This is not a hypothetical discussion. Organizations are introducing AI models into their security practices that impact almost every aspect of their business, and one of the most urgent questions remains whether regulators, compliance officers, security professionals, and employees will be able to trust these security models at all.
Because AI models are sophisticated, obscure, automated, and oftentimes evolving, it is difficult to establish trust in an AI-dominant environment. Yet without trust and accountability, some of these models might be deemed too risky to use and could eventually be under-utilized, marginalized, or banned altogether.
One of the main stumbling blocks associated with AI trustworthiness revolves around data, and more specifically, ensuring data quality and integrity. After all, AI models are only as good as the data they consume.
And yet, these obstacles haven’t discouraged cybersecurity vendors, which have shown unwavering zeal to base their solutions on AI models. By doing so, vendors are taking a leap of faith, assuming that the datasets (whether public or proprietary) their models ingest adequately represent the real-life scenarios those models will encounter in the future.
The data used to power AI-based cybersecurity systems faces a number of further problems:
Data poisoning: Bad actors can “poison” training data by manipulating the datasets (and even the pre-trained models) that AI models rely on. This could allow them to circumvent cybersecurity controls while the organization at risk remains oblivious to the fact that the ground truth it relies on to secure its infrastructure has been compromised. Such manipulations could lead to subtle deviations, such as security controls labeling malicious activity as benign, or have a more profound impact by disrupting or disabling the security controls altogether.
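As a purely illustrative sketch (synthetic data and hypothetical feature names, not any vendor’s pipeline), here is roughly what a label-flipping poisoning attack on a training set looks like in code:

```python
import numpy as np

# Toy illustration (not a real attack): an adversary who can tamper with the
# training set flips a small fraction of "malicious" labels (1) to "benign" (0),
# so a model trained on the poisoned data learns to wave those patterns through.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))           # hypothetical feature vectors (e.g. flow stats)
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # 1 = malicious, 0 = benign (synthetic rule)

poison_rate = 0.05
malicious_idx = np.flatnonzero(y == 1)
flipped = rng.choice(malicious_idx, size=int(poison_rate * len(malicious_idx)), replace=False)

y_poisoned = y.copy()
y_poisoned[flipped] = 0                  # these malicious samples now carry a "benign" ground truth

print(f"{len(flipped)} of {len(malicious_idx)} malicious samples relabeled as benign")
# Any model fit on (X, y_poisoned) inherits this corrupted ground truth silently.
```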
Data dynamism: AI models are built to address “noise,” but in cyberspace, malicious errors are not random. Security professionals are faced with dynamic and sophisticated adversaries that learn and adapt over time. Accumulating more security-related data might well improve AI-powered security models, but at the same time, it could lead adversaries to change their modus operandi, diminishing the efficacy of existing data and AI models. Data, in this case, is actively shaping the observed reality rather than statically representing it as a snapshot.
For example, while additional data points might render a traditional malware detection mechanism more capable of identifying common threats, it might, theoretically, degrade the AI model’s ability to identify novel malware that considerably diverges from known malicious patterns. This is analogous to how mutated viral variants evade an immune system that was trained to identify the original viral strain.
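One way teams try to catch this kind of shift is a simple distribution-drift check between training-time data and live traffic. The sketch below is a minimal, assumed example using a population-stability-index-style comparison on synthetic data; real monitoring is considerably more involved.

```python
import numpy as np

# Minimal sketch of a drift check: compare a feature's distribution at training
# time against what the model sees in production. A large divergence is a hint
# that adversaries (or the environment) have shifted and the model may be stale.

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, 5000)   # distribution the model learned from
live_feature = rng.normal(0.8, 1.3, 5000)    # attackers changed their modus operandi

score = psi(train_feature, live_feature)
print(f"PSI = {score:.2f}")   # common rule of thumb: above ~0.25 means significant drift
```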
Unknown unknowns: Unknown unknowns are so prevalent in cyberspace that many service providers preach to their customers to build their security strategy on the assumption that they’ve already been breached. The challenge for AI models emanates from the fact that these unknown unknowns, or blind spots, are seamlessly incorporated into the models’ training datasets and therefore attain a stamp of approval and might not raise any alarms from AI-based security controls.
For example, some security vendors combine a slate of user attributes to create a personalized baseline of a user’s behavior and determine the expected permissible deviations from this baseline. The premise is that these vendors can identify an existing norm that should serve as a reference point for their security models. However, this assumption might not hold water: an undiscovered malware may already reside in the customer’s system, existing security controls may suffer from coverage gaps, or an unsuspecting user may already be the victim of an ongoing account takeover.
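To see why a polluted baseline is so dangerous, consider a toy version of such a behavioral model: a per-user mean and spread with a z-score threshold. The numbers and threshold below are hypothetical, but they show how compromise that is present during the baseline window simply becomes the “norm.”

```python
import numpy as np

# Toy per-user behavioral baseline: learn the mean and spread of a metric
# (say, daily logins) over a trailing window, then flag days that deviate too
# far. The catch raised above: if an account takeover is already under way
# during the baseline window, the "norm" itself is compromised.

def build_baseline(history: np.ndarray) -> tuple[float, float]:
    return float(history.mean()), float(history.std() + 1e-9)

def is_anomalous(value: float, mean: float, std: float, threshold: float = 3.0) -> bool:
    return abs(value - mean) / std > threshold

daily_logins = np.array([4, 5, 3, 6, 4, 5, 4, 30, 28, 31])  # last 3 days: takeover traffic
mean, std = build_baseline(daily_logins)                    # baseline already polluted

print(is_anomalous(29, mean, std))  # False: the compromised behavior now looks "normal"
```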
Errors: It is not a stretch to assume that even staple security-related training datasets are laced with inaccuracies and misrepresentations. After all, some of the benchmark datasets behind many leading AI algorithms and exploratory data science research have proven to be rife with serious labeling flaws.
Additionally, enterprise datasets can become obsolete, misleading, and erroneous over time unless the relevant data, and details of its lineage, are kept up-to-date and tied to relevant context.
Privacy-preserving omission: In an effort to render sensitive datasets accessible to security professionals within and across organizations, privacy-preserving and privacy-enhancing technologies, from deidentification to the creation of synthetic data, are gaining traction. The whole rationale behind these technologies is to omit, alter, or mask sensitive information, such as personally identifiable information (PII). But as a result, the inherent qualities and statistically significant attributes of the datasets might be lost along the way. Moreover, what might seem like negligible “noise” could prove significant for some security models, impacting their outputs in unpredictable ways.
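As a minimal, assumed sketch (the field names are hypothetical), here is what de-identifying a single security event might look like, and what a downstream detection model loses in the process:

```python
import hashlib

# Toy illustration: de-identifying a login event for sharing strips or coarsens
# fields. Field names are hypothetical; the point is that detail useful to a
# security model (exact source IP, precise timestamp) can vanish along with the PII.

event = {
    "user_email": "jane.doe@example.com",
    "source_ip": "203.0.113.42",
    "timestamp": "2021-06-03T02:14:07Z",
    "action": "password_reset",
}

def deidentify(e: dict) -> dict:
    return {
        # pseudonymize the direct identifier
        "user_id": hashlib.sha256(e["user_email"].encode()).hexdigest()[:12],
        # generalize quasi-identifiers: /24 subnet instead of exact IP, hour instead of second
        "source_subnet": ".".join(e["source_ip"].split(".")[:3]) + ".0/24",
        "hour": e["timestamp"][:13],
        "action": e["action"],
    }

print(deidentify(event))
# A model that keyed on exact-IP reuse or second-level timing bursts has lost that signal.
```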
The road ahead
All of these challenges are detrimental to the ongoing effort to fortify islands of trust in an AI-dominated cybersecurity industry. This is especially true in the current environment, where we lack widely accepted standards and frameworks for AI explainability, accountability, and robustness.
While efforts have begun to root out biases from datasets, enable privacy-preserving AI training, and reduce the amount of data required for AI training, it will prove much harder to fully and continuously inoculate security-related datasets against inaccuracies, unknown unknowns, and manipulations, which are intrinsic to the nature of cyberspace. Maintaining AI hygiene and data quality in ever-morphing, data-hungry digital enterprises might prove equally difficult.
Thus, it is up to the data science and cybersecurity communities to design, incorporate, and advocate for robust risk assessments and stress tests, enhanced visibility and validation, hard-coded guardrails, and offsetting mechanisms that can ensure trust and stability in our digital ecosystem in the age of AI.
Eyal Balicer is Senior Vice President for Global Cyber Partnership and Product Innovation at Citi.
A recent survey conducted by researchers at the IE Center for the Governance of Change indicates that a majority of people would support replacing members of their respective parliaments with AI systems.
Yikes. The majority might have this one wrong. But we’ll get into why in a moment.
The survey
Researchers interviewed 2,769 Europeans representing varying demographics. Questions ranged from whether they’d prefer to vote via smartphone all the way to whether they’d replace existing politicians with algorithms given the chance.
Per the survey:
51% of Europeans support reducing the number of national parliamentarians and giving those seats to an algorithm. Over 60% of Europeans aged 25-34 and 56% of those aged 35-44 are excited about this idea.
On the surface, this makes perfect sense – younger people are more likely to embrace a new technology, no matter how radical.
But it gets even more interesting when you drill down a bit.
The study found the idea was particularly popular in Spain, where 66% of people surveyed supported it. Elsewhere, 59% of respondents in Italy were in favor, as were 56% of those in Estonia.
In the U.K., 69% of people surveyed were against the idea, while 56% were against it in the Netherlands and 54% in Germany.
Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.
It’s difficult to draw insight from these numbers without resorting to speculation – when you consider the political divide in the UK and the US, for example, it’s interesting to note that people in both nations still seem to prefer the status quo over an AI system.
Here’s the problem
All those people in favor of an AI parliament are wrong.
The idea here, according to the CNBC report, is that this survey captures the “general zeitgeist” when it comes to public perception of their current human representatives.
This seems to indicate that the survey tells us more about how people feel about their politicians than it does about how people feel about AI.
But we really need to consider what an AI parliamentarian would actually mean before we start throwing our support behind the idea.
Governments may not operate the same in every country, but if enough people support an idea – no matter how bad it is – there’s always a chance the people will get what they want.
Why an AI parliamentarian is a terrible idea
Here’s the conclusion right up front: It would not only be filled with baked-in bias, but trained with the biases of the government implementing it. Furthermore, any applicable AI technology in this domain would be considered “black box” AI, and thus it would be even worse at explaining its decisions than contemporary human politicians.
And, finally, if we hand over our constituent data to a centralized government system that has parliamentarian rights, we’d essentially be allowing our respective governments to use digital gerrymandering to conduct mass-scale social engineering.
Here’s how
When people imagine a robot politician they often conceptualize a being that cannot be corrupted. Robots don’t lie, they don’t have agendas, they’re not xenophobic or bigoted, and they can’t be bought off. Right?
The short version of why that isn’t the case goes like this: think about the 2,769-person survey mentioned above. How many of those people are Black? How many are queer? How many are Jewish? How many are conservative? Are 2,769 people really enough to represent the entirety of Europe?
Probably not; at best it’s an educated approximation. When researchers conduct these surveys, they’re trying to get a general idea of how people feel. The result is an estimate, not a precise measurement, because we simply have no way of forcing every single person on the continent to answer these questions.
AI works the same way. When we train an AI to do work – for example, to take data related to voter sentiment and determine whether to vote yea or nay on a particular motion – we train it on data that was generated, curated, interpreted, transcribed, and implemented by humans.
At every step of the AI training process, whatever bias has crept in becomes exacerbated. If you train an AI on data in which some groups are underrepresented, the AI will develop and amplify bias against those underrepresented groups. That’s how algorithms work inside a black box.
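A toy, synthetic example makes the mechanism concrete. Here a “model” that simply learns the most common outcome in its training data looks reasonable on the majority group and fails badly on the group it rarely saw; real models are subtler, but the direction of the effect is the same.

```python
import numpy as np

# Toy illustration of the representation problem: a "model" that just learns
# the most common outcome in its training data looks accurate overall, but its
# errors land disproportionately on the group it rarely saw. Data is synthetic.

rng = np.random.default_rng(2)
n_major, n_minor = 950, 50                 # 95% / 5% representation in the training data

# Suppose the correct label differs by group (majority mostly 0, minority mostly 1).
y_major = rng.binomial(1, 0.1, n_major)
y_minor = rng.binomial(1, 0.9, n_minor)

majority_vote = int(np.concatenate([y_major, y_minor]).mean() > 0.5)  # predicted for everyone

acc_major = np.mean(y_major == majority_vote)
acc_minor = np.mean(y_minor == majority_vote)
print(f"prediction={majority_vote}, accuracy on majority group={acc_major:.0%}, "
      f"on underrepresented group={acc_minor:.0%}")
# Typical output: ~90% on the majority group, ~10% on the underrepresented group.
```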
And therein lies our second problem: the black box. If a politician makes a decision that results in a negative consequence we can ask that politician to explain the motive behind that decision.
As a hypothetical example, if a politician successfully lobbied to abolish all traffic lights in their district and that action resulted in an increase in accidents, we could find out why they voted that way and demand they never do it again.
You can’t do that with most AI systems. Simple automation systems can be looked at in reverse if something goes wrong, but AI paradigms that involve deep learning and surfacing insights – the very kind you’d need to use in order to replace members of parliament with AI-powered representation – cannot generally be understood in reverse.
AI developers essentially dial in a system’s output like they’re tuning in a radio signal from static. They keep playing with the parameters until the AI starts making decisions they like. This process cannot be run in reverse: you can’t turn the dial backwards until the signal is noisy again to see how it became clear.
Here’s the scary part
AI systems are goal-based. When we imagine the worst things that could possibly go wrong with artificial intelligence, we might picture killer robots, but the experts tend to think misaligned objectives are the more likely danger.
Basically, think about AI developers like Mickey Mouse in Disney’s “The Sorcerer’s Apprentice.” If big government tells Silicon Valley to create an AI parliamentarian, it’s going to come up with the best leader it can possibly create.
Unfortunately, the goal of government isn’t to produce or collect the best leaders. It’s to serve society. Those are two entirely different goals.
The bottom line is that AI developers and politicians can train an AI system to surface any results they want.
If you can imagine gerrymandering as it happens in the US, but applied to whose “constituent data” gets weighted more heavily in a machine’s parameters, then you can imagine how politicians could use AI systems to automate partisanship.
The last thing we need to do, as a global community, is use AI to supercharge the worst parts of our respective political systems.
The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment. Today’s AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the early goals of AI like housekeeper robots and self-driving cars continue to recede as we approach them.
Part of the continued cycle of missing these goals is due to incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.
In a new paper titled “Why AI is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence, AI systems that can match the cognitive and general problem-solving skills of humans.
Narrow AI and general AI are not on the same scale
The kind of AI that we have today can be very good at solving narrowly defined problems. Such systems can outmatch humans at Go and chess, find cancerous patterns in x-ray images with remarkable accuracy, and convert audio data to text. But designing systems that can solve single problems does not necessarily get us closer to solving more complicated problems. Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”
“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.
For instance, today’s natural language processing systems have come a long way toward solving many different problems, such as translation, text generation, and question-answering on specific problems. At the same time, we have deep learning systems that can convert voice data to text in real time. Behind each of these achievements are thousands of hours of research and development (and millions of dollars spent on computing and data). But the AI community still hasn’t solved the problem of creating agents that can engage in open-ended conversations without losing coherence over long stretches. Such a system requires more than just solving smaller problems; it requires common sense, one of the key unsolved challenges of AI.
The easy things are hard to automate
Vision, one of the problems every living being solves without effort, remains a challenge for computers. (Image credit: Ben Dickson)
When it comes to humans, we would expect an intelligent person to do hard things that take years of study and practice. Examples might include tasks such as solving calculus and physics problems, playing chess at grandmaster level, or memorizing a lot of poems.
But decades of AI research have proven that the hard tasks, those that require conscious attention, are easier to automate. It is the easy tasks, the things that we take for granted, that are hard to automate. Mitchell describes the second fallacy as “Easy things are easy and hard things are hard.”
“The things that we humans do without much thought—looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone—turn out to be the hardest challenges for machines,” Mitchell writes. “Conversely, it’s often easier to get machines to do things that are very hard for humans; for example, solving complex mathematical problems, mastering games like chess and Go, and translating sentences between hundreds of languages have all turned out to be relatively easier for machines.”
Consider vision, for example. Over billions of years, organisms have developed complex apparatuses for processing light signals. Animals use their eyes to take stock of the objects surrounding them, navigate their surroundings, find food, detect threats, and accomplish many other tasks that are vital to their survival. We humans have inherited all those capabilities from our ancestors and use them without conscious thought. But the underlying mechanism is indeed more complicated than large mathematical formulas that frustrate us through high school and college.
Case in point: We still don’t have computer vision systems that are nearly as versatile as human vision. We have managed to create artificial neural networks that roughly mimic parts of the animal and human vision system, such as detecting objects and segmenting images. But they are brittle, sensitive to many different kinds of perturbations, and they can’t mimic the full scope of tasks that biological vision can accomplish. That’s why, for instance, the computer vision systems used in self-driving cars need to be complemented with advanced technology such as lidars and mapping data.
Another area that has proven to be very difficult is sensorimotor skills that humans master without explicit training. Think of how you handle objects, walk, run, and jump. These are tasks that you can do without conscious thought. In fact, while walking, you can do other things, such as listen to a podcast or talk on the phone. But these kinds of skills remain a large and expensive challenge for current AI systems.
“AI is harder than we think, because we are largely unconscious of the complexity of our own thought processes,” Mitchell writes.
Anthropomorphizing AI doesn’t help
Comparing contemporary AI systems with human intelligence creates an erroneous image of the current state of artificial intelligence. (Image credit: Icons8)
The field of AI is replete with vocabulary that puts software on the same level as human intelligence. We use terms such as “learn,” “understand,” “read,” and “think” to describe how AI algorithms work. While such anthropomorphic terms often serve as shorthand to help convey complex software mechanisms, they can mislead us to think that current AI systems work like the human mind.
Mitchell calls this fallacy “the lure of wishful mnemonics” and writes, “Such shorthand can be misleading to the public trying to understand these results (and to the media reporting on them), and can also unconsciously shape the way even AI experts think about their systems and how closely these systems resemble human intelligence.”
The wishful mnemonics fallacy has also led the AI community to name algorithm-evaluation benchmarks in ways that are misleading. Consider, for example, the General Language Understanding Evaluation (GLUE) benchmark, developed by some of the most esteemed organizations and academic institutions in AI. GLUE provides a set of tasks that help evaluate how a language model can generalize its capabilities beyond the task it has been trained for. But contrary to what the media portray, if an AI agent gets a higher GLUE score than a human, it doesn’t mean that it is better at language understanding than humans.
“While machines can outperform humans on these particular benchmarks, AI systems are still far from matching the more general human abilities we associate with the benchmarks’ names,” Mitchell writes.
A stark example of wishful mnemonics is a 2017 project at Facebook Artificial Intelligence Research, in which scientists trained two AI agents to negotiate on tasks based on human conversations. In their blog post, the researchers noted that “updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating [emphasis mine].”
This led to a stream of clickbait articles that warned about AI systems that were becoming smarter than humans and were communicating in secret dialects. Four years later, the most advanced language models still struggle with understanding basic concepts that most humans learn at a very young age without being instructed.
AI without a body
Can intelligence exist in isolation from a rich physical experience of the world? This is a question that scientists and philosophers have puzzled over for centuries.
One school of thought believes that intelligence is all in the brain and can be separated from the body, also known as the “brain in a vat” theory. Mitchell calls it the “Intelligence is all in the brain” fallacy. With the right algorithms and data, the thinking goes, we can create AI that lives in servers and matches human intelligence. For the proponents of this way of thinking, especially those who support pure deep learning–based approaches, reaching general AI hinges on gathering the right amount of data and creating larger and larger neural networks.
Meanwhile, there’s growing evidence that this approach is doomed to fail. “A growing cadre of researchers is questioning the basis of the ‘all in the brain’ information processing model for understanding intelligence and for creating AI,” she writes.
Human and animal brains have evolved along with all other body organs with the ultimate goal of improving chances of survival. Our intelligence is tightly linked to the limits and capabilities of our bodies. And there is an expanding field of embodied AI that aims to create agents that develop intelligent skills by interacting with their environment through different sensory stimuli.
Mitchell notes that neuroscience research suggests that “neural structures controlling cognition are richly linked to those controlling sensory and motor systems, and that abstract thinking exploits body-based neural ‘maps.’” And in fact, there’s growing evidence and research that proves feedback from different sensory areas of the brain affects both our conscious and unconscious thoughts.
Mitchell supports the idea that emotions, feelings, subconscious biases, and physical experience are inseparable from intelligence. “Nothing in our knowledge of psychology or neuroscience supports the possibility that ‘pure rationality’ is separable from the emotions and cultural biases that shape our cognition and our objectives,” she writes. “Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.”
“It’s clear that to make and assess progress in AI more effectively, we will need to develop a better vocabulary for talking about what machines can do,” Mitchell writes. “And more generally, we will need a better scientific understanding of intelligence as it manifests in different systems in nature.”
Another challenge that Mitchell discusses in her paper is that of common sense, which she describes as “a kind of umbrella for what’s missing from today’s state-of-the-art AI systems.”
Common sense includes the knowledge that we acquire about the world and apply every day without much effort. We learn a lot without being explicitly instructed, by exploring the world when we are children. This includes concepts such as space, time, gravity, and the physical properties of objects. For example, a child learns at a very young age that when an object becomes occluded behind another, it has not disappeared and continues to exist, or that when a ball rolls across a table and reaches the edge, it will fall off. We use this knowledge to build mental models of the world, make causal inferences, and predict future states with decent accuracy.
This kind of knowledge is missing in today’s AI systems, which makes them unpredictable and data-hungry. In fact, housekeeping and driving, the two AI applications mentioned at the beginning of this article, are things that most humans learn through common sense and a little bit of practice.
Common sense also includes basic facts about human nature and life, things that we omit in our conversations and writing because we know our readers and listeners know them. For example, we know that if two people are “talking on the phone,” it means that they aren’t in the same room. We also know that if “John reached for the sugar,” it means that there was a container with sugar inside it somewhere near John. This kind of knowledge is crucial to areas such as natural language processing.
“No one yet knows how to capture such knowledge or abilities in machines. This is the current frontier of AI research, and one encouraging way forward is to tap into what’s known about the development of these abilities in young children,” Mitchell writes.
While we still don’t know the answers to many of these questions, a first step toward finding solutions is being aware of our own erroneous thoughts. “Understanding these fallacies and their subtle influences can point to directions for creating more robust, trustworthy, and perhaps actually intelligent AI systems,” Mitchell writes.
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
For the past 20 years UK Post Office employees have been dealing with a piece of software called Horizon, which had a fatal flaw: bugs that made it look like employees stole tens of thousands of British pounds. This led to some local postmasters being convicted of crimes, even being sent to prison, because the Post Office doggedly insisted the software could be trusted. After fighting for decades, 39 people are finally having their convictions overturned, after what is reportedly the largest miscarriage of justice that the UK has ever seen.
The impact on these employees has been vast: according to the BBC, some have lost marriages or time with their children. Talking to the BBC, Janet Skinner said that she was taken away from her two kids for nine months when she was imprisoned, after the software showed a £59,000 shortfall. She also says she lost a job offer because of her criminal conviction. The time she and others like her spent in jail can’t be bought back, and it happened because software was taken at its word.
According to the BBC, another woman, who swore she was innocent, was sent to prison for theft while she was pregnant. One man reportedly died by suicide after the computer system showed that he had lost almost £100,000. Within a few months, his replacement also faced losses due to discrepancies from the software.
Horizon was made by Japanese company Fujitsu, and information from it was used to prosecute 736 Post Office employees between 2000 and 2014, some of whom ended up going to jail. Bugs in the system would cause it to report that accounts that were under the employees’ control were short — the BBC has reported that some employees even tried to close the gap by remortgaging their homes, or using their own money.
It does seem like the nightmare for the employees may be coming to an end. The 39 who had their convictions overturned are following another six who were cleared of wrongdoing back in December. The Post Office has also been working on financially compensating other employees who were caught up by the software.
In 2019 the Post Office settled with 555 claimants and paid damages to them, and it’s also set up a system to repay other affected employees. So far, according to the BBC, more than 2,400 claims have been made.
Earlier this month the chief executive of the Post Office said that Horizon would be replaced with a new, cloud-based solution. In the same speech, he said that the Post Office would work with the government to compensate the employees who were affected by Horizon’s inaccuracies.
The UK’s prime minister Boris Johnson also weighed in today, calling the original convictions “an appalling injustice.”
I welcome the Court of Appeal’s decision to overturn the convictions of 39 former sub-postmasters in the Horizon dispute, an appalling injustice which has had a devastating impact on these families for years.
Lessons should and will be learnt to ensure this never happens again.
Some employees seem happy with just a monetary settlement and their names being cleared. But there is also now a campaign group calling for a full public inquiry, and some of the people whose names were cleared today have called for those in charge to be held responsible.
The BBC reported that the Post Office argued the errors couldn’t have been the fault of the computer system — despite knowing that wasn’t true. There is evidence that the Post Office’s legal department was aware that the software could produce inaccurate results, even before some of the convictions were secured. According to the BBC, one of the representatives for the Post Office workers said that the Post Office “readily accepted the loss of life, liberty and sanity for many ordinary people” in its “pursuit of reputation and profit.”
LG has introduced its first 5G phone and, well, in terms of upgrades it’s a far cry from the Samsung S10 5G. Basically, the LG V50 ThinQ is a V40 with a 5G chip in it, and that’s of dubious benefit to consumers because 5G networks won’t materialize in any significant way until 2020.
OK, there are a couple of other changes: The V50 has a Snapdragon 855 processor, which is a prerequisite for the X50 modem. And at 8.3mm, it’s a little thicker than the V40 to accommodate the modem and the vapor-cooling chamber that’s needed to keep heat down. The V50 also has a larger 4,000mAh battery (versus 3,300mAh on the V40) to offset what LG estimates is a 20 percent hit in battery life when using 5G compared to LTE.
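As a quick sanity check of that trade-off, using only the figures LG quotes, a 20 percent hit on a 4,000mAh cell leaves roughly the effective capacity of the V40’s 3,300mAh battery:

```python
# Back-of-the-envelope check of the battery offset claim above.
v50_capacity_mah = 4000
effective_on_5g = v50_capacity_mah * (1 - 0.20)   # LG's estimated 20% hit on 5G
print(effective_on_5g)   # 3200.0 mAh-equivalent, close to the V40's 3,300mAh
```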
Oh, one more thing: The V50’s camera is entirely under glass now, like LG’s G8, which is a nice touch. And of course it’ll be more expensive than the V40. Otherwise, it’s the exact same phone.
With the G8’s time-of-flight sensor, the V50 could have been a fun phone with AirMotion and Hand ID, and at least give people a reason to buy it other than the dubious promise of 5G. As it stands, LG will market the V50 as an entirely new phone, but really the only thing new about it is the modem.
LG let me play with the V50 for a few minutes, but I wasn’t allowed to take pictures. Nor was I able to test 5G speeds, since LG wasn’t able to set up a micro cell. So what was the point of this demo, you ask? For LG to be among the first smartphone companies to roll out support for 5G. Samsung announced the S10 5G last week, and Motorola beat everyone with the Verizon 5G mod for the Moto Z last year. So LG had to get on board, lest anyone think it wasn’t on the cutting edge.
But aside from making 5G headlines, the V50 is kind of pointless. Like the rest of the 5G handsets, it uses Qualcomm’s X50 modem, which has already been overshadowed by its successor, the X55. The newer modem promises faster speeds, integrates a 4G LTE modem in a thinner package, and supports T-Mobile’s Frequency Division Duplex (FDD) network.
But modem generations aside, there are no 5G networks to take advantage of yet. Sprint was the first to announce support for the LG V50, with the promise of “consistent connection and lower latency with virtually buffer-less streaming, which is ideal for videos, music and gaming.” When you’ll be able to get that, I don’t know, but Sprint has assured us that nine cities will be getting 5G in the first half of 2019. If that’s anything like Verizon’s rollout, it won’t be a seamless transition.
So if you plan on buying the V50, proceed with caution. You’re essentially getting an old phone, and more than likely won’t be able to take advantage of its headline feature this year.
And even if you happen to live in one of the first 5G cities, you won’t be getting anything near the super-fast, everywhere speeds that 5G promises, X50 modem or not. 5G is in the very early stages of its rollout, and it’s going to take years before it’s anywhere near as ubiquitous as 4G.
I don’t blame LG for the V50. 5G is the buzzword of the moment, and phone makers want, nay need, to get on board as soon as possible. Carrier hype has turned 5G from a next-gen evolution into a life-saving proposition, and phone makers have fallen for it, hook, line and sinker. Frankly, this phone should have been named the V40 ThinQ 5G, which would have given customers a proper understanding of what it is.
So buy the V50 at your own risk. Or just buy a V40 and write 5G on the back.
If you don’t want to spend $1,300 or $1,450 on the Galaxy Note 20 Ultra, Samsung has an affordable option for you: the Galaxy Note 20. Like last year’s Note 10, the Note 20 is a less-loaded handset, meant to bring the Note experience to a less-demanding crowd that still wants all the productivity benefits provided by the S Pen.
It could have been one of the best phones of the year. Samsung has made all the right moves with the Note 20, prioritizing a big screen, top-of-the-line processor, 5G modem, and excellent camera. Looking at the spec sheet, I’d expect the Note 20 to cost about $799, maybe even $750 like the S10e. Either price would make the Note 20 one of the best premium Android values this side of the OnePlus 7T.
The Galaxy Note 20 (right) might look like a smaller version of the Note 20 Ultra, but it’s made of plastic. (Image credit: Samsung)
The only problem is it costs $200 more than that—a full $1,000—and it’s incredibly hard to justify the price. Unlike the Galaxy S20, which brings the same premium performance and speedy display as the higher-priced S20 Ultra, the Note 20 cuts more corners than a kindergartener with a fresh pair of safety scissors.
Take the display. While it might seem like a slightly smaller version of the Note 20 Ultra, the screen specs are far inferior to those of the flagship Note.
So while the Note 20 offers an extra half-inch of diagonal screen size over the S20, you’re losing a lot in resolution and refresh rate. You’re also giving up the curved edge, though depending on your preference, that might be a benefit. So why would anyone choose this phone over an S20 for the same price?
The Note 20 looks like a glass phone, but it’s made of plastic. (Image credit: Alex Todd/IDG)
The deficiencies continue. You’ll also get 4GB less RAM (8GB vs. 12GB), no expandable memory slot, a heavier weight (194 grams vs. 163 grams), the same camera, and only a slightly bigger battery (4,300mAh vs. 4,000mAh) for the same $1,000 as Samsung charges for the S20. All that, and the back is made of “reinforced polycarbonate,” instead of the glass that every other flagship phone has.
Lite but luxury
It didn’t have to be this way. Earlier this year, Samsung introduced the Note 10 Lite, with many of the same specs as the Note 20, for around $500. It has the same 6.7-inch display, 8GB of RAM, and 128GB of storage, along with a larger battery (4,500mAh) and a higher-resolution front camera. And of course, because it’s a Note, it comes with the S Pen on board.
The Note 10 Lite has a lot in common with the Note 20—except it costs hundreds less. (Image credit: Samsung)
Samsung fans will point out that the Note 10 Lite doesn’t have the Note 20’s 5G or Snapdragon 865+. But those two components should make the Note 20 about $250 more expensive, not $500. At $1,000, the Note 20’s just not worth it, especially following the launch of the laudable (if late) Google Pixel 4a earlier this week.
It’s a shame, because there’s nothing wrong with the Note 20. The move to a plastic back, a flat screen, and an even lower resolution are all acceptable trade-offs to bring down the price—if Samsung had actually brought down the price.
Instead, it’s hard to see who would buy the Note 20. Hardcore Note fans will surely gravitate toward the Note 20 Ultra, Samsung fans will likely opt for the S20+, and budget-minded users will look to the A51 or A71, all of which come with 5G modems. That leaves the thousand-dollar Note 20 without an audience other than the uninformed buyer who wanders into a carrier store with a pocketful of cash.
One day we’re all going to die. Science and technology can put it off for a while, but the march of time stops for no human. Sadly, most of us will be forgotten. It’s a bleak prognosis, but that’s how things have always been. And that’s unlikely to change, despite the best efforts of the AI community.
There’s a new tech trend (that’s actually a dumb old trope) sweeping through big tech, little tech, and South Korean TV stations: digital resurrection.
The premise is simple. A person living in the modern world leaves tiny traces of who they are in everything they do. Our ‘digital footprint,’ if you will, has become so massive that we produce enough data for a clever AI to mimic us.
We’ve seen similar AI systems such as this one that, given a few articles from a specific author, can imitate their style. If you crank that up to 11 and imagine an AI that’s been trained on thousands of personal texts, emails, and transcribed voice messages, it becomes easy to see how such a paradigm could be used to create a bot that imitates a person… living or dead.
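To be clear about what “mimic” means here, the sketch below is a deliberately tiny stand-in: a word-level Markov chain built from a handful of made-up messages. Real systems fine-tune large language models on far more data, but the principle (mimicking surface patterns, not the person) is the same.

```python
import random
from collections import defaultdict

# A tiny stand-in for the idea above: build a word-level Markov chain from
# someone's messages and sample "new" text in their style. The corpus is
# invented; the output only recombines phrases the person already produced.

corpus = [
    "good night snuggle bunny see you tomorrow",
    "running late again see you at dinner",
    "love you snuggle bunny good night",
]

transitions = defaultdict(list)
for message in corpus:
    words = message.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def imitate(start: str, length: int = 8, seed: int = 3) -> str:
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(imitate("good"))   # stitches together familiar phrases; it predicts nothing new
```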
Popular British TV series Black Mirror aired an episode in 2013 called “Be Right Back.” The show took the idea of a chatbot trained on the deceased’s data and added in what was basically a futuristic 3-D printed android that became a real-world living embodiment of that person. The big idea is that such a robot could help people find closure, especially if they lost a loved one unexpectedly.
But, as is often the case, the reality is quite different. In the real world we’ve almost exclusively seen digital resurrection used as a marketing tool or a gimmick. And that’s because AI, no matter how much data it has, cannot actually “recreate” a person in any meaningful way.
Recall the holograms of Tupac Shakur and Whitney Houston or the extra cringey insertion of Kurt Cobain into Guitar Hero 5 as a playable character who could be forced to perform any song in the game.
More recently, Korean TV station SBS has unveiled plans to create a game show that features humans singing duets with an AI recreation of pop superstar Kim Kwang-seok, a performer who died in 1996. And a Spanish beer company called Cruzcampo recently used a deepfake-generated version of the late, legendary singer Lola Flores, who passed in 1995, as part of an ad campaign.
There’s a reason why you tend to see singers recreated as opposed to someone more known for their speeches and ideology, such as Winston Churchill or Dr Martin Luther King Jr: because AI can’t really predict what a person would say or do in any given situation no matter how much data it has.
The problem with digital resurrection is simple: AI can’t do anything a rational, average person couldn’t do given enough time. People can imitate other humans by dressing up as them, copying their mannerisms, and aping their voices. But we can’t read each other’s minds. We can only guess what someone else is thinking. And AI is no different. No amount of data in the world can predict what a person will do next (or would have done were they still alive).
At best, we can preserve and animate a specific developer’s vision of what a sentiment analysis of your dead loved one or a celebrity might look like.
If your spouse who passed was fond of saying “I love you snuggle bunny,” at the end of every email, it stands to reason an AI could learn to imitate their sign off. But, if your spouse hid some gold somewhere and never wrote or spoke about it to anyone, no AI can tell you where the money is. Human psychics can’t predict the winning lottery numbers for the same reason.
And that’s where Black Mirror and the real-world companies trying to sell the idea of a “digital you” get things wrong. It’s easy to polish up a two- or three-minute clip of a famous performer doing what they’re famous for. It’s another thing altogether to make even a slightly convincing human replicant.
Humans, by instinct, seek out imperfections in other humans because doing so has historically been intrinsic to our survival. We’re easily fooled when we’re being entertained and have temporarily suspended our disbelief. But, as Hollywood and the computer-generated-imagery world have learned over the past few decades, when it comes to imitating life it’s easy to convince lots of people from a distance but almost impossible to convince an individual up close.
Whether you believe humans have a soul that drives and motivates them or you understand that scientists know very little about how the human brain actually manifests consciousness, there’s currently no conceivable way for a machine to be literally imbued with whatever it is that makes each of us unique.
Just like a vinyl record can’t convey the gravitas of seeing a live performance, no matter how well it’s recorded, a digital record cannot take the place of a living person. If you never got to see Michael Jackson live, you can only get a taste of what it was like by watching footage or hearing a recording.
No matter how powerful AI becomes, it won’t be able to tell us what the late singer would have thought up next. I submit that no human or AI could have predicted “Thriller” would be the follow-up to “Beat It.”
And the same goes for your loved ones when they pass. An AI that imitates them is no more accurate or powerful than just asking someone to do an impersonation: it’s not the real thing no matter how skilled the impersonator is.
Like most feats involving predictive artificial intelligence, digital resurrection is little more than prestidigitation.
It’s now fairly common for cities to install surveillance cameras with facial recognition capabilities to help catch criminals — Beijing and Moscow use them extensively. However, a city in northern India is taking a different approach: it wants to detect distress on women’s faces, so it can assist them when they’re attacked or threatened.
Cops in Lucknow, the capital of the state of Uttar Pradesh (UP), aim to install an AI-based camera system at 200 crime hotspots that will alert the police force’s control room if the system detects distress on a woman’s face.
Not only is the premise of this solution deeply problematic, but there are also numerous concerns and reasons why this is basically the worst crime-fighting idea ever. Let’s get into it.
The state has a history of high rates of crime against women, with 162 cases of offenses against women registered every day in 2018 — and that’s just officially recorded data. A report from the National Crime Records Bureau (NCRB) published last year suggested that more than 3,000 rape cases were filed in UP in 2019. So, it’s not entirely surprising that cops want a system to bring these numbers down.
However, facial recognition systems haven’t really been an effective way to stop crime. In the US last year, a Black man was wrongfully arrested for shoplifting after being misidentified by a facial recognition system. In 2019, Delhi police, which serves India’s capital city, said that the success rate of its facial recognition system was under 1% — the system sometimes misidentified gender as well.
Then there’s the issue of detecting emotions. Data suggests that AI systems have hugely inconsistent track records when it comes to identifying the emotions behind a facial expression. Plus, most algorithms concentrate on a limited range of emotions. Last year, researchers from the University of Cambridge and Middle East Technical University found that AI systems detecting emotions might have inherited bias against minorities because of their training data.
Even if a system successfully detects someone’s facial expression, it might get the emotion behind it horribly wrong. Rana El Kaliouby, co-founder and CEO of Affectiva, an AI company working on human emotion and cognition, said in a conversation with MIT that “there is no one-to-one mapping between a facial expression and an emotion.”
Currently, without any test data, Lucknow’s facial recognition system looks like a bad idea. Plus, there’s no information as to how cops are planning to store and process this data. It would also invade the privacy of women in the city and could lead to wrongful charges and investigations. It’s time to shelve this idea.