
Rockstar offline: What’s wrong with the GTA Trilogy on PC?

It’s been a weird 24 hours for those who purchased Grand Theft Auto: The Trilogy – The Definitive Edition on PC. The game launched yesterday on PS4, PS5, Xbox One, Xbox Series X|S, PC, and Nintendo Switch, but it wasn’t long before Rockstar pulled the game from sale on PC and disabled the Rockstar Games Launcher on the platform. With long periods of silence from Rockstar, it’s been difficult to figure out just what is happening – or when the trilogy will return.

The case of the vanishing GTA games

If you head over to the Rockstar website at the moment, you’ll see that the PC version of Grand Theft Auto: The Trilogy – The Definitive Edition is no longer up for sale. It was removed entirely from Rockstar’s store shortly after it launched yesterday, and it’s been unobtainable since then. Rockstar so far has given no reason for pulling the PC version from sale, and we have no idea when it might be available again.

The Rockstar Games Launcher has been down for much of the last day as well. Since the Launcher is the only way to play Grand Theft Auto: The Trilogy – The Definitive Edition on PC, even those who managed to purchase the PC version before it was removed from sale can’t play it.

The Rockstar Support Twitter account first notified users that it was taking the Rockstar Games Launcher offline for “maintenance” around 20 hours ago. In the time since then, it has only posted one update thanking fans for their patience as Rockstar works to restore service. For now, we have no idea when the Rockstar Games Launcher and the PC titles that need it will be functional again.

A far-reaching problem

More titles beyond Grand Theft Auto: The Trilogy – The Definitive Edition are impacted by the Rockstar Games Launcher being taken offline. Obviously, players can’t access any PC games they purchased directly through Rockstar while the Launcher is offline, but it gets even worse than that. As Kotaku points out, certain Rockstar games available through Steam require the Launcher as well – notably Red Dead Redemption 2 and GTA Online – meaning those have been inaccessible all this time, too.

It’s hard to get a handle on what’s going on here simply because Rockstar has been so quiet. Even if the Rockstar Games Launcher requires a full day of maintenance – which is strange but not unheard of – why was the PC version of the GTA Trilogy delisted from Rockstar’s website? Is the game going to be relisted once this maintenance with the Rockstar Games Launcher is over?

So far, fan reactions to the GTA Trilogy have been mixed at best. The compilation offers remasters of three PS2-era Grand Theft Auto titles: GTA III, GTA: Vice City, and GTA: San Andreas, but some fans have taken to social media to express their frustration with the apparent quality of the remasters. Perhaps the PC version was delisted in response to those criticisms? We’ll have to wait for Rockstar to provide an update and clarify the matter. Assuming it does so, we’ll let you know what the company says.




4 things VCs get wrong about AI

VCs have a detailed playbook for investing in software-as-a-service (SaaS) companies that has served them well in recent years. Successful SaaS businesses provide predictable, recurring revenue that can be grown by acquiring more subscriptions at little additional cost, making them an attractive investment.

But the lessons that VCs have learned from their SaaS investments turn out not to be applicable to the world of artificial intelligence. AI companies follow a very different trajectory from SaaS providers, and the old rules simply aren’t valid.

Here are four things VCs get wrong about AI because of their past success investing in SaaS:

1. ARR growth is not the best indicator of long-term success in AI

Venture capitalists continue to pour money into AI companies at an astonishing — some might say ridiculous — rate. Databricks has raised a staggering $3.5 billion in funding, including a $1 billion Series G in February, followed six months later by a $1.6 billion Series H in August at a $38 billion valuation. DataRobot recently announced a $300 million Series G financing round, bringing its valuation to $6.3 billion.

While the private market is crazy for AI, the public market is showing signs of more rational behavior. Publicly traded C3.ai has lost 70% of its value relative to the all-time high it notched immediately after its IPO in December 2020. In early September 2021, the company released fiscal Q1 results that disappointed investors, sending the stock down nearly another 10%.

So what’s going on? The private markets, funded by VCs, fundamentally do not understand AI. The fact is, AI is not hard to sell. But it is quite hard to implement in a way that actually delivers value.

Ordinarily in SaaS, the real peril is market risk — will customers buy? That’s why private markets have always been organized around looking at annual recurring revenue (ARR) growth. If you can show fast ARR growth, then clearly customers want to buy your product and therefore your product must be good.

But the AI market doesn’t work like that. In the AI market, many customers are willing to buy because they’re desperate for a solution to pressing business problems and the promise of AI is so big. So VCs keep pouring money into the likes of Databricks and DataRobot, driving them to absurd valuations without stopping to consider that billions of dollars are going into these companies to create, at best, hundreds of millions in ARR. It’s brute-force funding of an already over-hyped market. But the fact remains that these companies have failed to produce results for their customers on a systematic basis.

A report from Forrester sheds some interesting light on what’s really happening behind the numbers being claimed by some AI companies with these huge valuations. Databricks reported that four customers had a three-year net positive ROI of 417%. DataRobot had four customers that over three years created a 514% return. The problem is that out of the hundreds of customers these companies have, they must have cherry-picked some of their very best customers for these analyses, and their returns are still not that impressive. Their best customers are barely doubling their annual return — hardly an ideal scenario for a transformative technology that should deliver at least 10x back from your investment.
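To put those numbers in perspective, here’s a quick back-of-envelope calculation. This is a sketch only; treating “ROI” as the net gain on the investment is an assumption, not something the report spells out.

```python
# Back-of-envelope: what a 417% net ROI over three years works out to per year,
# assuming "ROI" means net gain on the investment (an interpretation, not a
# figure taken from the report itself).
three_year_net_roi = 4.17
compounded_annual = (1 + three_year_net_roi) ** (1 / 3) - 1   # ~0.73 -> ~73% per year
simple_average_annual = three_year_net_roi / 3                # ~1.39 -> ~139% per year
print(f"compounded: {compounded_annual:.0%}, simple average: {simple_average_annual:.0%}")
```

Either reading puts the best customers’ annual return somewhere between roughly 70% and 140%, an order of magnitude short of a 10x payoff.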

Rather than focusing on the most important factor — whether customers are getting tangible value out of AI — VCs are obsessing over ARR growth. The fastest way to get to ARR expansion is brute-force sales, selling services to cover the gaps because you don’t have the time to build the right product. That is why you see so many consulting toolkits masquerading as products in the data science and machine learning market.

2. A minimum viable product isn’t the way to test the market

From the world of SaaS, VCs learned to value the minimum viable product (MVP), an early version of a software product with just enough features to be usable so that potential customers can provide feedback for future product development. VCs have come to expect that if customers buy the MVP, they will buy the full version of the product. Building an MVP has become standard operating procedure in the world of SaaS because it shows VCs that customers will pay money for a product that addresses a specific problem.

But that approach doesn’t work with AI. With AI, it’s not a question of building an MVP to find out whether people will pay. It’s really a question of finding out where AI can create value. Put another way, it’s not about testing product-market fit; it’s about testing product-value delivery. Those are two very different concepts.

3. Successful AI pilots don’t always mean successful real-world outcomes

Another rule that VCs have adopted from the world of SaaS is the notion that successful pilots mean successful outcomes. It’s true that if you have successfully piloted a SaaS product like Salesforce with a small group of salespeople under controlled conditions, you can reasonably extrapolate from the pilot and have a clear view of how the software will perform in widespread production.

But that doesn’t work with AI. The way AI performs in the lab is fundamentally different from what it does in the wild. You can run an AI pilot based on cleaned-up data and find that if you follow the AI predictions and recommendations, your company will theoretically make $100 million. But by the time you put the AI into production, the data has changed. Business conditions have changed. Your end users may not accept the recommendations of the AI. Instead of making $100 million, you may actually lose money, because the AI leads to bad business decisions.

You can’t extrapolate from an AI pilot in the way that you can with SaaS.

4. Signing up customers for long-term contracts isn’t a good indicator that the vendor’s AI works

VCs like it when customers sign up for long-term contracts with a vendor; they see that as a strong indicator of long-term success and revenue. But that’s not necessarily true with AI. The value created by AI grows so fast and is potentially so transformative that any vendor who truly believes in their technology isn’t trying to sell a three-year contract. A confident AI vendor wants to sell a short contract, show the value created by the AI, and then negotiate price.

The AI vendors that put a lot of effort into locking up customers to long-term contracts are the ones who are afraid that their products won’t create value in the near term. What they’re trying to do is lock in a three-year contract and then hope that somewhere down the line the product will become good enough that value will finally be created before renewal discussions happen. And often, that never happens. According to a study by MIT/BCG, only 10% of enterprises get any value from AI projects.

VCs have been trained to think that any vendor that signs lots of long-term contracts must have a better product, when in the world of AI, the opposite is true.

Getting smart about AI

VCs need to get smart about AI and not rely on their old SaaS playbooks. AI is a rapidly developing transformative technology, every bit as much as the Internet was in the 1990s. When the Internet was emerging, one of the lucky breaks we got was that VCs did not obsess over the profitability or revenues of Internet companies in order to invest in them. They basically said, “Let’s look at whether people are getting value from the technology.” If people adopt the technology and get value from it, you don’t have to worry a lot about revenue or profitability at the beginning. If you create value, you will make money.

Maybe it’s time to bring that early Internet mindset to AI and start evaluating emerging technologies based on whether customers are getting value rather than relying on brute-forced ARR figures. AI is destined to be a game-changing technology, every bit as much as the Internet. As long as businesses get sustained value from AI, it will be successful — and very profitable for investors. Smart VCs understand this and will reap the rewards.

Arijit Sengupta is CEO and Founder of Aible.


How cybersecurity is getting AI wrong

The cybersecurity industry is rapidly embracing the notion of “zero trust”, where architectures, policies, and processes are guided by the principle that no one and nothing should be trusted.

However, in the same breath, the cybersecurity industry is incorporating a growing number of AI-driven security solutions that rely on some type of trusted “ground truth” as a reference point.

How can these two seemingly diametrically opposing philosophies coexist?

This is not a hypothetical discussion. Organizations are introducing AI models into their security practices that impact almost every aspect of their business, and one of the most urgent questions remains whether regulators, compliance officers, security professionals, and employees will be able to trust these security models at all.

Because AI models are sophisticated, opaque, automated, and oftentimes evolving, it is difficult to establish trust in an AI-dominant environment. Yet without trust and accountability, some of these models might be considered prohibitively risky and so could eventually be under-utilized, marginalized, or banned altogether.

One of the main stumbling blocks for AI trustworthiness revolves around data, and more specifically, ensuring data quality and integrity. After all, AI models are only as good as the data they consume.

And yet, these obstacles haven’t discouraged cybersecurity vendors, which have shown unwavering zeal to base their solutions on AI models. In doing so, vendors are taking a leap of faith, assuming that the datasets (whether public or proprietary) their models ingest adequately represent the real-life scenarios those models will encounter in the future.

The data used to power AI-based cybersecurity systems faces a number of further problems:

Data poisoning: Bad actors can “poison” training data by manipulating the datasets (and even the pre-trained models) that AI models rely upon. This could allow them to circumvent cybersecurity controls while the organization at risk remains oblivious to the fact that the ground truth it relies on to secure its infrastructure has been compromised. Such manipulations could lead to subtle deviations, such as security controls labeling malicious activity as benign, or have a more profound impact by disrupting or disabling the security controls.
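To make the mechanism concrete, here’s a minimal, hypothetical sketch using synthetic data and scikit-learn. It stands in for no particular vendor’s pipeline; it simply shows how flipping a fraction of “malicious” training labels to “benign” quietly lowers a detector’s catch rate.

```python
# Minimal sketch of label poisoning against a toy detector. Synthetic data and
# a plain logistic regression stand in for a real security model; this is an
# illustration, not a production pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "events": class 1 = malicious (rare), class 0 = benign.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips half of the malicious training labels to "benign".
rng = np.random.default_rng(0)
malicious_idx = np.where(y_train == 1)[0]
flipped = rng.choice(malicious_idx, size=len(malicious_idx) // 2, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

for name, labels in [("clean", y_train), ("poisoned", y_poisoned)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    # Share of truly malicious test events the model still flags.
    detection_rate = model.predict(X_test)[y_test == 1].mean()
    print(f"{name} labels -> detection rate on malicious events: {detection_rate:.2f}")
```

Real poisoning attacks are far subtler than wholesale label flipping, but the end result is the same: the ground truth the model learns from no longer reflects reality.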

Data dynamism: AI models are built to address “noise,” but in cyberspace, malicious errors are not random. Security professionals are faced with dynamic and sophisticated adversaries that learn and adapt over time. Accumulating more security-related data might well improve AI-powered security models, but at the same time, it could lead adversaries to change their modus operandi, diminishing the efficacy of existing data and AI models. Data, in this case, is actively shaping the observed reality rather than statically representing it as a snapshot.

For example, while additional data points might render a traditional malware detection mechanism more capable of identifying common threats, it might, theoretically, degrade the AI model’s ability to identify novel malware that considerably diverges from known malicious patterns. This is analogous to how mutated viral variants evade an immune system that was trained to identify the original viral strain.

Unknown unknowns: Unknown unknowns are so prevalent in cyberspace that many service providers preach to their customers to build their security strategy on the assumption that they’ve already been breached. The challenge for AI models is that these unknown unknowns, or blind spots, are seamlessly incorporated into the models’ training datasets, where they effectively receive a stamp of approval and may never raise an alarm from AI-based security controls.

For example, some security vendors combine a slate of user attributes to create a personalized baseline of a user’s behavior and determine the expected permissible deviations from this baseline. The premise is that these vendors can identify an existing norm that should serve as a reference point for their security models. However, this assumption might not hold water. For example, undiscovered malware may already reside in the customer’s system, existing security controls may suffer from coverage gaps, or unsuspecting users may already be suffering from an ongoing account takeover.
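As a deliberately simplified illustration of why a compromised baseline matters, here’s a hypothetical sketch built around an invented per-user metric (say, megabytes downloaded per day) and a plain standard-deviation threshold. Real products use far richer models, but the failure mode is the same.

```python
# Sketch: a behavioral baseline learned from history that already includes the
# compromise will treat the attacker's activity as "normal". The metric and
# thresholds here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

clean_history = rng.normal(loc=50, scale=10, size=90)  # 90 days of normal activity
compromised_history = np.concatenate([
    clean_history[:60],
    rng.normal(loc=200, scale=20, size=30),            # last 30 days include exfiltration
])

def is_anomalous(value, history, z_threshold=3.0):
    """Flag an observation that sits more than z_threshold std devs from the baseline."""
    mu, sigma = history.mean(), history.std()
    return abs(value - mu) > z_threshold * sigma

todays_activity = 220.0  # the attacker keeps doing the same thing
print(is_anomalous(todays_activity, clean_history))        # True: a clean baseline flags it
print(is_anomalous(todays_activity, compromised_history))  # False: the poisoned baseline absorbs it
```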

Errors: It is not a stretch to assume that even staple security-related training datasets are laced with inaccuracies and misrepresentations. After all, some of the benchmark datasets behind many leading AI algorithms and exploratory data science research have proven to be rife with serious labeling flaws.

Additionally, enterprise datasets can become obsolete, misleading, and erroneous over time unless the relevant data, and details of its lineage, are kept up-to-date and tied to relevant context.

Privacy-preserving omission: In an effort to make sensitive datasets accessible to security professionals within and across organizations, privacy-preserving and privacy-enhancing technologies, from de-identification to the creation of synthetic data, are gaining traction. The whole rationale behind these technologies is to omit, alter, or mask sensitive information, such as personally identifiable information (PII). But as a result, the inherent qualities and statistically significant attributes of the datasets might be lost along the way. Moreover, what might seem like negligible “noise” could prove significant for some security models, impacting outputs in unpredictable ways.

The road ahead

All of these challenges undermine the ongoing effort to fortify islands of trust in an AI-dominated cybersecurity industry. This is especially true in the current environment, where we lack widely accepted standards and frameworks for AI explainability, accountability, and robustness.

While efforts have begun to root out biases from datasets, enable privacy-preserving AI training, and reduce the amount of data required for AI training, it will prove much harder to fully and continuously inoculate security-related datasets against inaccuracies, unknown unknowns, and manipulations, which are intrinsic to the nature of cyberspace. Maintaining AI hygiene and data quality in ever-morphing, data-hungry digital enterprises might prove equally difficult.

Thus, it is up to the data science and cybersecurity communities to design, incorporate, and advocate for robust risk assessments and stress tests, enhanced visibility and validation, hard-coded guardrails, and offsetting mechanisms that can ensure trust and stability in our digital ecosystem in the age of AI.

Eyal Balicer is Senior Vice President for Global Cyber Partnership and Product Innovation at Citi.


Majority of Europeans would replace government with AI — oof, they’re so wrong

A recent survey conducted by researchers at the IE Center for the Governance of Change indicates that a majority of people would support replacing members of their respective parliaments with AI systems.

Yikes. The majority might have this one wrong. But we’ll get into why in a moment.

The survey

Researchers interviewed 2,769 Europeans representing varying demographics. Questions ranged from whether they’d prefer to vote via smartphone all the way to whether they’d replace existing politicians with algorithms given the chance.

Per the survey:

51% of Europeans support reducing the number of national parliamentarians and giving those seats to an algorithm. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 are excited about this idea.

On the surface, this makes perfect sense – younger people are more likely to embrace a new technology, no matter how radical.

But it gets even more interesting when you drill things down a bit.

According to a report from CNBC:

The study found the idea was particularly popular in Spain, where 66% of people surveyed supported it. Elsewhere, 59% of the respondents in Italy were in favor and 56% of people in Estonia.

In the U.K., 69% of people surveyed were against the idea, while 56% were against it in the Netherlands and 54% in Germany.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

It’s difficult to draw insight from these numbers without resorting to speculation – when you consider the political divide in the UK and the US, for example, it’s interesting to note that people in both nations still seem to prefer the status quo over an AI system.

Here’s the problem

All those people in favor of an AI parliament are wrong.

The idea here, according to the CNBC report, is that this survey captures the “general zeitgeist” when it comes to public perception of their current human representatives.

This seems to indicate that the survey tells us more about how people feel about their politicians than it does about how people feel about AI.

But we really need to consider what an AI parliamentarian would actually mean before we start throwing our support behind the idea.

Governments may not operate the same in every country, but if enough people support an idea – no matter how bad it is – there’s always a chance the people will get what they want.

Why an AI parliamentarian is a terrible idea

Here’s the conclusion right up front: An AI parliamentarian would not only be filled with baked-in bias, it would be trained on the biases of whichever government implemented it. Furthermore, any applicable AI technology in this domain would be considered “black box” AI, and thus it would be even worse at explaining its decisions than contemporary human politicians.

And, finally, if we hand over our constituent data to a centralized government system that has parliamentarian rights, we’d essentially be allowing our respective governments to use digital gerrymandering to conduct mass-scale social engineering.

Here’s how

When people imagine a robot politician they often conceptualize a being that cannot be corrupted. Robots don’t lie, they don’t have agendas, they’re not xenophobic or bigoted, and they can’t be bought off. Right?

Wrong.

AI is inherently biased. Any system designed to surface insights based on data that applies to people will automatically have bias built into its very core.

The short version of why this is true goes like this: think about the 2,769-person survey mentioned above. How many of those people are Black? How many are queer? How many are Jewish? How many are conservative? Are 2,769 people really enough to represent the entirety of Europe?

Probably not. At best, it’s a rough approximation. When researchers conduct these surveys, they’re trying to get a general idea of how people feel; the results are an estimate, not a precise measurement. We simply have no way of forcing every single person on the continent to answer these questions.

That’s how AI works, too. When we train an AI to do work – for example, to take data related to voter sentiment and determine whether to vote yea or nay on a particular motion – we train it on data that was generated, curated, interpreted, transcribed, and implemented by humans.

At every step of the AI training process, every bias that’s crept in becomes exacerbated. If you train an AI on data featuring a disproportionate amount of representation between groups, the AI will develop and amplify bias against those groups with less representation. That’s how algorithms work inside of a black box.
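A tiny, hypothetical experiment makes the effect visible. Assume synthetic data in which a minority group follows a different underlying pattern than the majority and makes up only 5% of the training set; a single model fit to the pooled data serves the majority well and the minority poorly.

```python
# Sketch of representation bias: one model trained on pooled data where 95% of
# examples come from group A and 5% from group B, and the two groups follow
# different underlying rules. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Generate samples whose label depends on a group-specific rule vector."""
    X = rng.normal(size=(n, 5))
    y = (X @ rule > 0).astype(int)
    return X, y

rule_a = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # majority group's pattern
rule_b = np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # minority group's pattern

X_a, y_a = make_group(4750, rule_a)  # 95% of the training data
X_b, y_b = make_group(250, rule_b)   # 5% of the training data

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

print("accuracy on majority group:", model.score(*make_group(2000, rule_a)))
print("accuracy on minority group:", model.score(*make_group(2000, rule_b)))
```

The model never sees enough of the smaller group to learn its pattern, so its errors concentrate there.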

And therein lies our second problem: the black box. If a politician makes a decision that results in a negative consequence we can ask that politician to explain the motive behind that decision.

As a hypothetical example, if a politician successfully lobbied to abolish all traffic lights in their district and that action resulted in an increase in accidents, we could find out why they voted that way and demand they never do it again.

You can’t do that with most AI systems. Simple automation systems can be looked at in reverse if something goes wrong, but AI paradigms that involve deep learning and surfacing insights – the very kind you’d need to use in order to replace members of parliament with AI-powered representation – cannot generally be understood in reverse.

AI developers essentially dial in a system’s output like they’re tuning in a radio signal from static. They keep playing with the parameters until the AI starts making decisions they like. This process cannot be repeated in reverse: you can’t turn the dial backwards until the signal is noisy again to see how it became clear.

Here’s the scary part

AI systems are goal-based. When we imagine the worst things that could go wrong with artificial intelligence, we might picture killer robots, but experts tend to think misaligned objectives are the more likely evil.

Basically, think about AI developers like Mickey Mouse in Disney’s “The Sorcerer’s Apprentice.” If big government tells Silicon Valley to create an AI parliamentarian, it’s going to come up with the best leader it can possibly create.

Unfortunately, the goal of government isn’t to produce or collect the best leaders. It’s to serve society. Those are two entirely different goals.

The bottom line is that AI developers and politicians can train an AI system to surface any results they want.

If you can imagine gerrymandering as it happens in the US, but applied to which “constituent data” gets weighted more heavily in a machine’s parameters, then you can imagine how politicians could use AI systems to automate partisanship.

The last thing we need to do, as a global community, is use AI to supercharge the worst parts of our respective political systems.


4 ideas about AI that even ‘experts’ get wrong

The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment. Today’s AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the early goals of AI, like housekeeper robots and self-driving cars, continue to recede as we approach them.

Part of this continued cycle of missed goals stems from incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide For Thinking Humans.

In a new paper titled “Why AI is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence, AI systems that can match the cognitive and general problem-solving skills of humans.

Narrow AI and general AI are not on the same scale

The kinds of AI we have today can be very good at solving narrowly defined problems. They can outmatch humans at Go and chess, find cancerous patterns in X-ray images with remarkable accuracy, and convert audio data to text. But designing systems that can solve single problems does not necessarily get us closer to solving more complicated problems. Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”

“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.

For instance, today’s natural language processing systems have come a long way toward solving many different problems, such as translation, text generation, and question-answering on specific problems. At the same time, we have deep learning systems that can convert voice data to text in real time. Behind each of these achievements are thousands of hours of research and development (and millions of dollars spent on computing and data). But the AI community still hasn’t solved the problem of creating agents that can engage in open-ended conversations without losing coherence over long stretches. Such a system requires more than just solving smaller problems; it requires common sense, one of the key unsolved challenges of AI.

The easy things are hard to automate
