Categories
Game

Skullcandy’s first gaming headsets in years include Tile tracking and a wireless model

Skullcandy hasn’t offered gaming headsets for the better part of a decade, but it’s willing to give them another go — and it’s eager to catch up in some respects. The brand has introduced revamped PLYR, SLYR and SLYR Pro headsets that promise budget-friendly game audio on console, mobile and PC with a few perks. The flagship PLYR (shown above) includes Bluetooth 5.2 wireless audio, while it and the wired SLYR Pro offer Tile tracking to help you find your headset (or the device it’s connected to).

Both the PLYR and SLYR Pro (at middle) also use a hearing test to create a personalized sound profile, and offer background audio reduction whether you use the boom or integrated microphones. They can plug in through 3.5mm and USB, and an optional wireless transmitter for the PLYR promises low lag (down to 20ms) for PC- and PlayStation-based gamers. You can expect up to 24 hours of battery life in either model when you aren’t connected through USB. The base SLYR is a no-frills wired design that drops the audio processing features and USB support.

Skullcandy SLYR Pro gaming headset
Image: Skullcandy

As with the old headsets, Skullcandy is counting on price as the main draw. The SLYR starts the line at $60, while the SLYR Pro and PLYR are relatively affordable at $100 and $130 respectively. The caveat, as you might guess, is that the gaming headset business hasn’t been standing still. The Astro A10 offers a more flexible (and arguably more visually appealing) design for the same $60 as the SLYR, while brands like Razer and SteelSeries offer both price-competitive headsets and premium models with extras like spatial audio and RGB lighting. Your choice might come down to sale pricing and personal preferences.



Categories
Security

New attack can unlock and start a Tesla Model Y in seconds, say researchers

Tesla prides itself on its cybersecurity protections, particularly the elaborate challenge system that protects its cars from conventional methods for attacking the remote unlock system. But now, one researcher has discovered a sophisticated relay attack that would allow someone with physical access to a Tesla Model Y to unlock and steal it in a matter of seconds.

The vulnerability — discovered by Josep Pi Rodriguez, principal security consultant for IOActive — involves what’s called an NFC relay attack and requires two thieves working in tandem. One thief needs to be near the car and the other near the car owner, who has an NFC keycard or mobile phone with a Tesla virtual key in their pocket or purse.

Near-field communication keycards allow Tesla owners to unlock their vehicles and start the engine by tapping the card against an NFC reader embedded in the driver’s side body of the car. Owners can also use a key fob or a virtual key on their mobile phone to unlock their car, but the car manual advises them to always carry the NFC keycard as a backup in case they lose the key fob or phone or their phone’s battery dies.

In Rodriguez’s scenario, attackers can steal a Tesla Model Y as long as they can position themselves within about two inches of the owner’s NFC card or mobile phone with a Tesla virtual key on it — for example, while it’s in the owner’s pocket or purse as they walk down the street, stand in line at Starbucks, or sit at a restaurant.

The first hacker uses a Proxmark RDV4.0 device to initiate communication with the NFC reader in the driver’s side door pillar. The car responds by transmitting a challenge that the owner’s NFC card is meant to answer. But in the hack scenario, the Proxmark device transmits the challenge via Wi-Fi or Bluetooth to the mobile phone held by the accomplice, who places it near the owner’s pocket or purse to communicate with the keycard. The keycard’s response is then transmitted back to the Proxmark device, which relays it to the car. The car accepts the relayed response as if it came from the owner’s card and unlocks the vehicle.
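To see why a challenge-response check alone doesn’t stop this, here is a minimal, self-contained Python simulation of the relay idea. It is not Tesla’s actual protocol (the real attack works at the NFC radio layer with hardware like the Proxmark); it only illustrates that a relay forwards bytes unchanged, so the car cannot tell the card isn’t physically present.

```python
import hmac, hashlib, os

SECRET = os.urandom(16)                      # toy stand-in for the key shared by "car" and "keycard"

def car_issue_challenge():
    return os.urandom(16)                    # random challenge from the door-pillar reader

def keycard_answer(challenge):
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def car_verify(challenge, response):
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# Relay: the attacker at the car reads the challenge, ships it to an accomplice near
# the owner, the real card answers, and the answer is shipped back unchanged.
challenge = car_issue_challenge()
relayed_answer = keycard_answer(challenge)   # produced by the victim's card, far from the car
print(car_verify(challenge, relayed_answer)) # True: the car would unlock
```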

Although the attack via Wi-Fi and Bluetooth limits the distance the two accomplices can be from one another, Rodriguez says it’s possible to pull off the attack via Bluetooth from several feet away from each other or even farther away with Wi-Fi, using a Raspberry Pi to relay the signals. He believes it may also be possible to conduct the attack over the internet, allowing even greater distance between the two accomplices.

If it takes time for the second accomplice to get near the owner, the car will keep sending a challenge until it gets a response. Or the Proxmark can send a message to the car saying it needs more time to produce the challenge response.

Until last year, drivers who used the NFC card to unlock their Tesla had to place the NFC card on the console between the front seats in order to shift it into gear and drive. But a software update last year eliminated that additional step. Now, drivers can operate the car just by stepping on the brake pedal within two minutes after unlocking the car.

The attack Rodriguez devised can be prevented if car owners enable the PIN-to-drive function in their Tesla vehicle, requiring them to enter a PIN before they can operate the car. But Rodriguez expects that many owners don’t enable this feature and may not even be aware it exists. And even with this enabled, thieves could still unlock the car to steal valuables.

There is one hitch to the operation: once the thieves power the car down, they won’t be able to restart it with that original NFC keycard. Rodriguez says they can add a new NFC keycard to the vehicle that would let them operate the car at will, but doing so requires a second relay attack. Once the first accomplice is inside the car, the second accomplice must get near the owner’s keycard again so that the first can authenticate to the vehicle and enroll the new keycard.

If the attackers aren’t interested in continuing to drive the vehicle, they could also just strip the car for parts, as has occurred in Europe. Rodriguez says that eliminating the relay problem he found wouldn’t be a simple task for Tesla.

“To fix this issue is really hard without changing the hardware of the car — in this case the NFC reader and software that’s in the vehicle,” he says.

But he says the company could implement some changes to mitigate it — such as reducing the amount of time the NFC card can take to respond to the NFC reader in the car.

“The communication between the first attacker and the second attacker takes only two seconds [right now], but that’s a lot of time,” he notes. “If you have only half a second or less to do this, then it would be really hard.”
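A rough sketch of what such a timing bound could look like on the reader side follows, assuming hypothetical issue_challenge/get_response/verify hooks; the 0.5-second figure is only illustrative, taken from Rodriguez’s comment above, and is not a Tesla specification.

```python
import time

MAX_ROUND_TRIP = 0.5   # seconds; illustrative threshold, not a real-world value

def challenge_with_deadline(issue_challenge, get_response, verify):
    """Reject responses that arrive too slowly to have come from a card at the reader."""
    start = time.monotonic()
    challenge = issue_challenge()
    response = get_response(challenge)            # in a relay, this crosses a network twice
    if time.monotonic() - start > MAX_ROUND_TRIP:
        return False                              # too slow: treat the card as not present
    return verify(challenge, response)
```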

Rodriguez, however, says the company downplayed the problem to him when he contacted them, indicating that the PIN-to-drive function would mitigate it. This requires a driver to type a four-digit PIN into the car’s touchscreen in order to operate the vehicle. It’s not clear if a thief could simply try to guess the PIN. Tesla’s user manual doesn’t indicate if the car will lock out a driver after a certain number of failed PIN attempts.

Tesla did not respond to a request for comment from The Verge.

It’s not the first time that researchers have found ways to unlock and steal Tesla vehicles. Earlier this year, another researcher found a way to start a car with an unauthorized virtual key, but the attack requires the attacker to be in the vicinity while an owner unlocks the car. Other researchers showed an attack against Tesla vehicles involving a key fob relay attack that intercepts and then replays the communication between an owner’s key fob and vehicle.

Rodriguez says that, despite vulnerabilities discovered with Tesla vehicles, he thinks the company has a better security track record than other automakers.

“Tesla takes security seriously, but because their cars are much more technological than other manufacturers, this makes their attack surface bigger and opens windows for attackers to find vulnerabilities,” he notes. “That being said, to me, Tesla vehicles have a good security level compared to other manufacturers that are even less technological.”

He adds that the NFC relay attack is also possible in vehicles made by other manufacturers, but “those vehicles have no PIN-to-drive mitigation.”


Categories
AI

3 model monitoring tips for reliable results when deploying AI



Artificial Intelligence (AI) promises to transform almost every business on the planet. That’s why most business leaders are asking themselves what they need to do to successfully deploy AI into production. 

Many get stuck deciphering which applications are realistic for the business; which will hold up over time as the business changes; and which will put the least strain on their teams. But once a model is in production, one of the leading indicators of an AI project’s success is the set of ongoing model monitoring practices put in place around it. 

The best teams employ three key strategies for AI model monitoring:

1. Performance shift monitoring

Measuring shifts in AI model performance requires two layers of metric analysis: health and business metrics. Most Machine Learning (ML) teams focus solely on model health metrics. These include metrics used during training — like precision and recall — as well as operational metrics — like CPU usage, memory, and network I/O. While these metrics are necessary, they’re insufficient on their own. To ensure AI models are impactful in the real world, ML teams should also monitor trends and fluctuations in product and business metrics that are directly impacted by AI. 


For example, YouTube uses AI to recommend a personalized set of videos to every user based on several factors: watch history, number of sessions, user engagement, and more. And when these models don’t perform well, users spend less time on the app watching videos. 

To increase visibility into performance, teams should build a single, unified dashboard that highlights model health metrics alongside key product and business metrics. This visibility also helps ML Ops teams debug issues effectively as they arise. 
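As a minimal sketch of that idea, the snippet below logs illustrative model-health and business metrics into one table so shifts in either can be read side by side; the metric names and values are invented for the example, not taken from any real dashboard.

```python
import pandas as pd

# One row per reporting period: model-health metrics next to product/business metrics.
snapshot = pd.DataFrame([
    {"week": "2022-09-05", "precision": 0.91, "recall": 0.88, "p95_latency_ms": 140,
     "avg_watch_time_min": 46.2, "sessions_per_user": 3.1},
    {"week": "2022-09-12", "precision": 0.90, "recall": 0.87, "p95_latency_ms": 145,
     "avg_watch_time_min": 41.7, "sessions_per_user": 2.8},
])
print(snapshot.set_index("week"))   # the same frame can back a shared team dashboard
```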

2. Outlier detection

Models can sometimes produce an outcome that is significantly outside of the normal range of results  — we call this an outlier. Outliers can be disruptive to business outcomes and often have major negative consequences if they go unnoticed.

For example, Uber uses AI to dynamically determine the price of every ride, including surge pricing. This is based on a variety of factors — like rider demand or the availability of drivers in an area. Consider a scenario where a concert concludes and attendees simultaneously request rides. Due to the spike in demand, the model might push the price of a ride to 100 times the normal rate. Riders never want to pay 100 times the usual price to hail a ride, and an error like this can significantly damage consumer trust.

Monitoring can help businesses balance the benefits of AI predictions with their need for predictable outcomes. Automated alerts can help ML operations teams detect outliers in real time, giving them a chance to respond before any harm occurs. Additionally, ML Ops teams should invest in tooling to manually override the output of the model.

In our example above, detecting the outlier in the pricing model can alert the team and help them take corrective action — like disabling the surge before riders notice. Furthermore, it can help the ML team collect valuable data to retrain the model to prevent this from occurring in the future. 
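A hedged sketch of how such an outlier gate might look in code follows, using a z-score against recent values plus a hard cap; the thresholds and the surge-pricing framing are illustrative, not any company’s actual system.

```python
import numpy as np

def surge_is_outlier(multiplier, recent_multipliers, z_threshold=4.0, hard_cap=10.0):
    """Flag a pricing multiplier far outside the recent distribution so a human can review it."""
    mean = np.mean(recent_multipliers)
    std = max(np.std(recent_multipliers), 1e-9)   # avoid division by zero
    z_score = (multiplier - mean) / std
    return multiplier > hard_cap or z_score > z_threshold

# A 100x surge against a history that rarely exceeds 3x triggers the alert.
print(surge_is_outlier(100.0, [1.0, 1.2, 2.5, 3.0, 1.8]))   # True: alert and allow manual override
```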

3. Data drift tracking 

Drift refers to a model’s performance degrading over time once it’s in production. Because AI models are often trained on a small set of data, they initially perform well, since the real-world production data is very similar to the training data. But with time, actual production data changes due to a variety of factors, like user behavior, geographies and time of year. 

Consider a conversational AI bot that solves customer support issues. As we launch this bot for various customers, we might notice that users can request support in vastly different ways. For example, a user requesting support from a bank might speak more formally, whereas a user on a shopping website might speak more casually. This change in language patterns compared to the training data can result in bot performance getting worse with time. 

To ensure models remain effective, the best ML teams track drift in the distributions of features and embeddings between training data and production data. A large change in distribution indicates the need to retrain the model to restore optimal performance. Ideally, data drift should be monitored at least every six months, and for high-volume applications as often as every few weeks. Failing to do so can introduce significant inaccuracies and erode trust in the model. 
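One simple way to quantify that drift, sketched below under the assumption that you keep samples of each numeric feature from training and production, is a two-sample Kolmogorov-Smirnov test; the threshold and cadence are choices for the team to tune, not fixed rules.

```python
from scipy.stats import ks_2samp

def feature_drifted(train_values, prod_values, p_threshold=0.01):
    """Return True when the production distribution likely differs from the training one."""
    statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < p_threshold

# Run per feature (or per embedding dimension) on a schedule; retrain when many features drift at once.
```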

A structured approach to success 

AI is neither a magic bullet for business transformation nor a false promise of improvement. Like any other technology, it has tremendous promise given the right strategy. 

No matter how well it is developed, AI cannot be deployed and then left to run on its own without proper attention. Truly transformative AI deployments adopt a structured approach that involves careful monitoring, testing, and continual improvement over time. Businesses that have neither the time nor the resources to take this approach will find themselves caught in a perpetual game of catch-up. 

Rahul Kayala is principal product manager at Moveworks.



Categories
AI

Why Meta’s large language model does not work for researchers



When Alan Turing came up with the Turing Test in 1950, it was a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. Turing proposed that a computer can be said to possess artificial intelligence (AI) if it can create human-like responses to questions.

Thanks to large language models, we’re now at the point where computers can write text on just about any subject we give them — and for the most part, it’s very convincing and human-like.

Tell it to write a sentence on “Why does Elon Musk like to knit?” and what it outputs is arguably as good as what any human could write:

Some possible reasons why Elon Musk might enjoy knitting could include the fact that it is a relaxing and meditative activity that can help to clear one's mind, and it also allows for a great deal of creativity and self-expression.
Additionally, knitting can be a very social activity, and Elon Musk may enjoy the opportunity to chat and connect with other knitters.

[Source: OpenAI Playground using text-davinci-002 model]

Summarizing complex text

Examples like this are fun, but the bigger value proposition of using large language models is less about writing wacky prose and more about the summarization of complex text. These use cases are exciting across industries. For instance, AI can distill information about potential prospects for sales intelligence purposes, or it can summarize investment documents in finance.
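For readers who want to try this workflow, here is a minimal sketch using the open-source Hugging Face transformers library with a small, publicly available summarization checkpoint; it is not the IPG or OPT models discussed below, just an illustration of the basic summarization call.

```python
from transformers import pipeline

# Small abstractive summarizer; swap in any summarization checkpoint you prefer.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = open("paper_abstract.txt").read()   # hypothetical input file
result = summarizer(document, max_length=120, min_length=40, do_sample=False)
print(result[0]["summary_text"])
```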


However, what I’m particularly interested in is summarizing scientific papers for researchers.

The problem is there is an overload of research being published around the world. More than 4.2 million academic research papers were published in 2020. Even within specialized fields of research, there are hundreds of papers published every year — how can a researcher keep on top of it all while pursuing their own research? A paper’s abstract only hints at the research detail within.

When Meta recently open-sourced its language model, OPT-175B, it sounded promising for academic researchers. It’s said to offer better performance than OpenAI’s GPT-3 while requiring just 15% of GPT-3’s compute resources to train.

Putting models to the test

So how does Meta’s model perform on scientific research?

We tested it first on a technical press release, and this was the summary we received:

​​Forward Looking Statements, which speak only as of the date of this press release. Artelo undertakes no obligation to publicly update any forward-looking statement, whether as a result of new information, future events or otherwise.
Investor Relations Contact:
Westwicke Partners on behalf of Artelo Biosciences
Mike Piccinino, CFA

It’s not a great start. The model focuses on the investor legalese at the end of the press release, treating it as the most important information readers need to know. You might argue that it is important, but it’s not what we’re looking for. The investor contact isn’t even correct; it was fabricated by the model!

Next, we tried it on a paper from the Journal of Electronic Materials:

Journal of ELECTRONIC MATERIALS, Vol. 38, No. 7, 2009 DOI: 10.1007/s11664-009-0701-y (cid:1) 2009 TMS Special Issue Paper thermally. The samples were then pulverized and analyzed by XRD, TGA, and XPS. The XRD patterns showed that CaCo2O4+d crystallizes as a calcium-ferrite-type structure, which consists of a corner- and edge- shared CoO6 octahedron network including one-dimensional double chains. The CaCo2O4+d phase has a finite density of states at the Fermi level. The XPS results showed that CaCo2O4+d has a composition of CaCo2O4+d = (Ca2CoO3)0.62CoO2. The XPS results also showed that CaCo2O4+d has

[Source: Meta OPT-175B]

At first glance, the output appears to contain important information, but it also includes superfluous details, such as the paper’s publication metadata, that don’t belong in a summary. I’d describe the result as paraphrasing one part of the text rather than summarizing all aspects of it.

Small-scale models outperform

So how does a smaller language model compare? Through experience in the field, we designed our Iris.ai IPG model to have just 25 million parameters — as opposed to 175 billion from Meta and OpenAI — but trained it on millions of domain-specific scientific articles. Our research has shown that this model performs very differently on the same paper:

Metallic temperature dependence of the seebeck coefficient s with a large thermoelectric power (s=151lv/kat387k) has a finite density of states at the fermi level. In this paper, we report the crystal structure and physical properties of caco2o4+d. We find a new compound caco2o4+d, which exhibits a large thermoelectric power, even though it has a finite density of states at the fermi level. Motivated by the simple guiding principle mentioned previously, we searched for new phases thermoelectric properties related as a thermoelectric material applicable to high-temperature use.

[Source: Iris.ai IPG]

You can see the sentence structure is slightly simpler than a large language model’s, but the information is much more relevant. What’s more, the computational cost to generate that summary is less than $0.23. To do the same on OPT-175B would cost about $180.

The container ships of AI models

You’d assume that large language models backed by enormous computational power, such as OPT-175B, would be able to process the same information faster and to a higher quality. But where the model falls down is in specific domain knowledge. It doesn’t understand the structure of a research paper, it doesn’t know what information is important, and it doesn’t understand chemical formulas. It’s not the model’s fault — it simply hasn’t been trained on this information.

The solution, therefore, is to just train the GPT model on materials papers, right?

To some extent, yes. If we can train a GPT model on materials papers, then it’ll do a good job of summarizing them, but large language models are — by their nature — large. They are the proverbial container ships of AI models — it’s very difficult to change their direction. This means that evolving the model with reinforcement learning requires hundreds of thousands of materials papers, and that’s a problem: this volume of papers simply doesn’t exist to train the model. Yes, data can be fabricated (as it often is in AI), but this reduces the quality of the outputs — GPT’s strength comes from the variety of data it’s trained on.

Revolutionizing the ‘how’

This is why smaller language models work better. Natural language processing (NLP) has been around for years, and although GPT models have hit the headlines, the sophistication of smaller NLP models is improving all the time.

After all, a model trained on 175 billion parameters is always going to be difficult to handle, but a model using 30 to 40 million parameters is much more maneuverable for domain-specific text. The additional benefit is that it will use less computational power, so it costs a lot less to run, too.

From a scientific research point of view, which is what interests me most, AI is going to accelerate the potential for researchers — both in academia and in industry. The current pace of publishing produces an inaccessible amount of research, which drains academics’ time and companies’ resources.

The way we designed Iris.ai’s IPG model reflects my belief that certain models provide the opportunity not just to revolutionize what we study or how quickly we study it, but also how we approach different disciplines of scientific research as a whole. They give talented minds significantly more time and resources to collaborate and generate value.

This potential for every researcher to harness the world’s research drives me forward.

Victor Botev is the CTO at Iris AI.



Categories
Game

SteelSeries’ first desktop speakers include a 5.1-channel USB model

The gaming audio market has focused on headphones for years, leaving you to rely on familiar brands like Logitech and Klipsch if you prefer speakers. SteelSeries thinks it can shake things up, though. It’s introducing its first desktop speaker line, Arena, and promising a few standout features aimed at gamers. The flagship Arena 9 (pictured at middle) is billed as the first gaming speaker setup to deliver 5.1-channel surround sound through USB. There’s still a 3.5mm jack if you like, but you won’t need a nest of wires to immerse yourself in games on a PC, Mac or PlayStation. You can expect synced RGB lighting, too.

All models also tout what SteelSeries claims is the first “pro-grade” parametric EQ aimed at gamers, delivered through the Sonar Audio Software Suite. While we’d remain cautious about the company’s boasts, this might prove useful if you’re more interested in accurate-sounding explosions than nuanced music.

The other two Arena models aren’t quite as elaborate, but could make more sense if price or desk space are concerns. The entry Arena 3 (shown at right) is a simple two-speaker offering with four-inch fiber cone drivers and Bluetooth audio support. Buy the 2.1-channel Arena 7 (left) and you’ll get a subwoofer, optional USB audio and RGB lighting.

SteelSeries’ speakers are available now with prices of $150 for the Arena 3, $330 for the Arena 7 and $600 for the Arena 9. You can also buy a $100 Arena Wireless Mic with a cardioid, noise-cancelling microphone to provide voice chat without donning a headset. None of these prices are trivial, but SteelSeries is clearly betting that you’ll pay extra for speakers built more for Call of Duty than Claude Debussy.



Categories
AI

Nvidia shows off AI model that turns a few dozen snapshots into a 3D-rendered scene

Nvidia’s latest AI demo is pretty impressive: a tool that quickly turns a “few dozen” 2D snapshots into a 3D-rendered scene. In the video below you can see the method in action, with a model dressed like Andy Warhol holding an old-fashioned Polaroid camera. (Don’t overthink the Warhol connection: it’s just a bit of PR scene dressing.)

The tool is called Instant NeRF, referring to “neural radiance fields” — a technique developed by researchers from UC Berkeley, Google Research, and UC San Diego in 2020. If you want a detailed explainer of neural radiance fields, you can read one here, but in short, the method maps the color and light intensity of different 2D shots, then generates data to connect these images from different vantage points and render a finished 3D scene. In addition to images, the system requires data about the position of the camera.
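For readers curious about the mechanics, the core of the original NeRF formulation is a field that maps a 3D point and viewing direction to a color and a density, which are then composited along each camera ray. The NumPy sketch below shows that volume-rendering step only; it assumes a hypothetical `nerf_field` callable and ignores the hash-grid encodings and CUDA kernels that make Instant NeRF fast.

```python
import numpy as np

def render_ray(nerf_field, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Composite color along one camera ray (classic NeRF volume rendering)."""
    t = np.linspace(near, far, n_samples)                    # sample depths along the ray
    points = origin + t[:, None] * direction                 # (n_samples, 3) query positions
    rgb, sigma = nerf_field(points, direction)               # assumed: colors in [0,1], densities >= 0

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))       # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                     # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # accumulated transmittance
    weights = alpha * trans                                  # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)              # final pixel color
```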

Researchers have been improving this sort of 2D-to-3D model for a couple of years now, adding more detail to finished renders and increasing rendering speed. Nvidia says its new Instant NeRF model is one of the fastest yet developed and reduces rendering time from a few minutes to a process that is finished “almost instantly.”

As the technique becomes quicker and easier to implement, it could be used for all sorts of tasks, says Nvidia in a blog post describing the work.

“Instant NeRF could be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps,” writes Nvidia’s Isha Salian. “The technology could be used to train robots and self-driving cars to understand the size and shape of real-world objects by capturing 2D images or video footage of them. It could also be used in architecture and entertainment to rapidly generate digital representations of real environments that creators can modify and build on.” (Sounds like the metaverse is calling.)

In a paper describing the work, Nvidia’s researchers said they were able to export scenes at a resolution of 1920 × 1080 “in tens of milliseconds.” The researchers also shared source code for the project, allowing others to implement their methods. It seems NeRF renders are progressing quickly, and could start having a real-world impact in the years to come.

Update March 25th, 3:50PM ET: Updated story with link to research paper and source code.


Categories
AI

DeepMind’s new AI model helps decipher, date, and locate ancient inscriptions

Machine learning techniques are providing new tools that could help archaeologists understand the past — particularly when it comes to deciphering ancient texts. The latest example is an AI model created by Alphabet subsidiary DeepMind that not only helps restore text that is missing from ancient Greek inscriptions but also offers suggestions for when the text was written (within a 30-year period) and its possible geographic origins.

“Inscriptions are really important because they are direct sources of evidence … written directly by ancient people themselves,” Thea Sommerschield, a historian and machine learning expert who helped create the model, told journalists in a press briefing.

Due to their age, these texts are often damaged, making restoration a rewarding challenge. And because they are often inscribed on inorganic material like stone or metal, it means methods like radiocarbon dating can’t be used to find out when they were written. “To solve these tasks, epigraphers look for textual and contextual parallels in similar inscriptions,” said Sommerschield, who was co-lead on the work alongside DeepMind staff research scientist Yannis Assael. “However, it’s really difficult for a human to harness all existing, relevant data and to discover underlying patterns.”

That’s where machine learning can help.

Ancient Greek inscriptions are often fragmented. The software Ithaca can suggest what letters are missing.
Image: DeepMind

The new software, named Ithaca, is trained on a dataset of some 78,608 ancient Greek inscriptions, each labeled with metadata describing where and when it was written (to the best of historians’ knowledge). Like all machine learning systems, Ithaca looks for patterns in this information, encodes them in complex mathematical models, and uses those inferences to suggest missing text, a date, and a place of origin.
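To make those three outputs concrete, here is a hypothetical data structure for what an Ithaca-style prediction contains; this is not DeepMind’s actual interface, and the example values are invented.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class InscriptionPrediction:
    restored_text: str           # damaged characters filled in
    region: str                  # one of 84 ancient-world regions
    region_confidence: float
    date_range: Tuple[int, int]  # negative years for BCE, e.g. (-440, -410)
    date_confidence: float

# Purely illustrative example values.
pred = InscriptionPrediction("...εδοξεν τηι βουληι...", "Attica", 0.7, (-440, -410), 0.6)
```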

In a paper published in Nature that describes Ithaca, the scientists who created the model say it is 62 percent accurate when restoring letters in damaged texts. It can attribute an inscription’s geographic origins to one of 84 regions of the ancient world with 71 percent accuracy and can date a text to within, on average, 30 years of its known year of writing.

These are promising statistics, but it’s important to remember that Ithaca is not capable of operating independently of human expertise. Its suggestions are ultimately based on data collected by traditional archaeological methods, and its creators are positioning it as simply another tool in a wider set of forensic methods, rather than a fully-automated AI historian. “Ithaca was designed as a complementary tool to aid historians,” said Sommerschield.

Ithaca is the first model to combine geographical and chronological attribution with textual restoration.
Image: DeepMind

Eleanor Dickey, a professor of classics at the University of Reading who specializes in ancient Greek and Latin sociolinguistics, told The Verge that Ithaca was an “exciting development that may improve our knowledge of the ancient world.” But she added that a 62 percent accuracy for restoring lost text was not reassuringly high — “when people rely on it they will need to keep in mind that it is wrong about one third of the time” — and that she was not sure how the software would fit into existing academic methodologies.

For example, DeepMind highlighted tests that showed the model helped improve the accuracy of historians restoring missing text in ancient inscriptions from 25 percent to 72 percent. But Dickey notes that those being tested were students, not professional epigraphers. She says that AI models may be broadly accessible, but that doesn’t mean they can or should replace the small cadre of specialized academics who decipher texts.

“It is not yet clear to what extent use of this tool by genuinely qualified editors would result in an improvement in the editions generally available — but it will be interesting to find out,” said Dickey. She added that she was looking forward to trying the Ithaca model out for herself. The software, along with its open-source code, is available online for anyone to test.

Ithaca and its predecessor (named Pythia and released in 2019) have already been used to inform recent archaeological debates — including helping to date inscriptions discovered in the Acropolis of Athens. However, the true potential of the software has yet to be seen.

Sommerschield stresses that the real value of Ithaca may be in its flexibility. Although it was trained on ancient Greek inscriptions, it could be easily configured to work with other ancient scripts. “Ithaca’s architecture makes it really applicable to any ancient language, not just Latin, but Mayan, cuneiform; really any written medium — papyri, manuscripts,” she said. “There’s a lot of opportunities.”


Categories
AI

DeepMind tests the limits of large AI language systems with 280-billion-parameter model

Language generation is the hottest thing in AI right now, with a class of systems known as “large language models” (or LLMs) being used for everything from improving Google’s search engine to creating text-based fantasy games. But these programs also have serious problems, including regurgitating sexist and racist language and failing tests of logical reasoning. One big question is: can these weaknesses be improved by simply adding more data and computing power, or are we reaching the limits of this technological paradigm?

This is one of the topics that Alphabet’s AI lab DeepMind is tackling in a trio of research papers published today. The company’s conclusion is that scaling up these systems further should deliver plenty of improvements. “One key finding of the paper is that the progress and capabilities of large language models is still increasing. This is not an area that has plateaued,” DeepMind research scientist Jack Rae told reporters in a briefing call.

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters).

It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix.

“I think right now it really looks like the model can fail in variety of ways,” said Rae. “Some subset of those ways are because the model just doesn’t have sufficiently good comprehension of what it’s reading, and I feel like, for those class of problems, we are just going to see improved performance with more data and scale.”

But, he added, there are “other categories of problems, like the model perpetuating stereotypical biases or the model being coaxed into giving mistruths, that […] no one at DeepMind thinks scale will be the solution [to].” In these cases, language models will need “additional training routines” like feedback from human users, he noted.

To come to these conclusions, DeepMind’s researchers evaluated a range of different-sized language models on 152 language tasks or benchmarks. They found that larger models generally delivered improved results, with Gopher itself offering state-of-the-art performance on roughly 80 percent of the tests selected by the scientists.

In another paper, the company also surveyed the wide range of potential harms involved with deploying LLMs. These include the systems’ use of toxic language, their capacity to share misinformation, and their potential to be used for malicious purposes, like sharing spam or propaganda. All these issues will become increasingly important as AI language models become more widely deployed — as chatbots and sales agents, for example.

However, it’s worth remembering that performance on benchmarks is not the be-all and end-all in evaluating machine learning systems. In a recent paper, a number of AI researchers (including two from Google) explored the limitations of benchmarks, noting that these datasets will always be limited in scope and unable to match the complexity of the real world. As is often the case with new technology, the only reliable way to test these systems is to see how they perform in reality. With large language models, we will be seeing more of these applications very soon.


Categories
AI

Microsoft is giving businesses access to OpenAI’s powerful AI language model GPT-3

It’s the AI system once deemed too dangerous to release to the public by its creators. Now, Microsoft is making an upgraded version of the program, OpenAI’s autocomplete software GPT-3, available to business customers as part of its suite of Azure cloud tools.

GPT-3 is the best known example of a new generation of AI language models. These systems primarily work as autocomplete tools: feed them a snippet of text, whether an email or a poem, and the AI will do its best to continue what’s been written. Their ability to parse language, however, also allows them to take on other tasks like summarizing documents, analyzing the sentiment of text, and generating ideas for projects and stories — jobs with which Microsoft says its new Azure OpenAI Service will help customers.

Here’s an example scenario from Microsoft:

“A sports franchise could build an app for fans that offers reasoning of commentary and a summary of game highlights, lowlights and analysis in real time. Their marketing team could then use GPT-3’s capability to produce original content and help them brainstorm ideas for social media or blog posts and engage with fans more quickly.”

GPT-3 is already being used for this sort of work via an API sold by OpenAI. Startups like Copy.ai promise that their GPT-derived tools will help users spruce up work emails and pitch decks, while more exotic applications include using GPT-3 to power a choose-your-own-adventure text game and chatbots pretending to be fictional TikTok influencers.
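For context, OpenAI’s API exposes this autocomplete behavior through a simple completion call; the snippet below is a generic illustration using OpenAI’s Python client of that era, with a placeholder key and prompt, and is not Microsoft’s Azure-specific wrapper.

```python
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

prompt = "Summarize the following support ticket in one sentence:\nCustomer cannot reset their password."
response = openai.Completion.create(
    engine="text-davinci-002",    # engine name is illustrative
    prompt=prompt,
    max_tokens=60,
    temperature=0.2,
)
print(response["choices"][0]["text"].strip())
```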

While OpenAI will continue selling its own API for GPT-3 to provide customers with the latest upgrades, Microsoft’s repackaging of the system will be aimed at larger businesses that want more support and safety. That means their service will offer tools like “access management, private networking, data handling protections [and] scaling capacity.”

It’s not clear how much this might cannibalize OpenAI’s business, but the two companies already have a tight partnership. In 2019, Microsoft invested $1 billion in OpenAI and became its sole cloud provider (a vital relationship in the compute-intensive world of AI research). Then, in September 2020, Microsoft bought an exclusive license to directly integrate GPT-3 into its own products. So far, these efforts have focused on GPT-3’s code-generating capacities, with Microsoft using the system to build autocomplete features into its suite of PowerApps applications and its Visual Studio Code editor.

These limited applications make sense given the huge problems associated with large AI language models like GPT-3. First: a lot of what these systems generate is rubbish, and requires human curation and oversight to sort the good from the bad. Second: these models have also been shown time and time again to incorporate biases found in their training data, from sexism to Islamophobia. They are more likely to associate Muslims with violence, for example, and hew to outdated gender stereotypes. In other words: if you start playing around with these models in an unfiltered format, they’ll soon say something nasty.

Microsoft knows only too well what can happen when such systems are let loose on the general public (remember Tay, the racist chatbot?). So, it’s trying to avoid these problems with GPT-3 by introducing various safeguards. These include granting access to use the tool by invitation only; vetting customers’ use cases; and providing “filtering and monitoring tools to help prevent inappropriate outputs or unintended uses of the service.”

However, it’s not clear if these restrictions will be enough. For example, when asked by The Verge how exactly the company’s filtering tools work, or whether there was any proof that they could reduce inappropriate outputs from GPT-3, the company dodged the question.

Emily Bender, a professor of computational linguistics at the University of Washington who’s written extensively on large language models, says Microsoft’s reassurances are lacking in substance. “As noted in [Microsoft’s] press release, GPT-3’s training data potentially includes ‘everything from vulgar language to racial stereotypes to personally identifying information,’” Bender told The Verge over email. “I would not want to be the person or company accountable for what it might say based on that training data.”

Bender notes that Microsoft’s introduction of GPT-3 fails to meet the company’s own AI ethics guidelines, which include a principle of transparency — meaning AI systems should be accountable and understandable. Despite this, says Bender, the exact composition of GPT-3’s training data is a mystery and Microsoft is claiming that the system “understands” language — a framing that is strongly disputed by many experts. “It is concerning to me that Microsoft is leaning in to this kind of AI hype in order to sell this product,” said Bender.

But although Microsoft’s GPT-3 filters may be unproven, it can avoid a lot of trouble by simply selecting its customers carefully. Large language models are certainly useful as long as their output is checked by humans (though this requirement does negate some of the promised gains in efficiency). As Bender notes, if Azure OpenAI Service is just helping to write “communication aimed at business executives,” it’s not too problematic.

“I would honestly be more concerned about language generated for a video game character,” she says, as this implementation would likely run without human oversight. “I would strongly recommend that anyone using this service avoid ever using it in public-facing ways without extensive testing ahead of time and humans in the loop.”


Categories
AI

For AI model success, utilize MLops and get the data right



It’s critical to adopt a data-centric mindset and support it with ML operations 

Artificial intelligence (AI) in the lab is one thing; in the real world, it’s another. Many AI models fail to yield reliable results when deployed. Others start well, but then results erode, leaving their owners frustrated. Many businesses do not get the return on AI they expect. Why do AI models fail and what is the remedy? 

As companies have experimented with AI models more, there have been some successes, but numerous disappointments. Dimensional Research reports that 96% of AI projects encounter problems with data quality, data labeling and building model confidence.

AI researchers and developers for business often use the traditional academic method of boosting accuracy. That is, hold the model’s data constant while tinkering with model architectures and fine-tuning algorithms. That’s akin to mending the sails when the boat has a leak — it is an improvement, but the wrong one. Why? Good code cannot overcome bad data.

Instead, they should ensure the datasets are suited to the application. Traditional software is powered by code, whereas AI systems are built using both code (models + algorithms) and data. Take facial recognition, for instance, in which AI-driven apps were trained on mostly Caucasian faces, instead of ethnically diverse faces. Not surprisingly, results were less accurate for non-Caucasian users. 

Good training data is only the starting point. In the real world, AI applications are often initially accurate, but then deteriorate. When accuracy degrades, many teams respond by tuning the software code. That doesn’t work because the underlying problem was changing real-world conditions. The answer: to increase reliability, improve the data rather than the algorithms. 

Since AI failures are usually related to data quality and data drifts, practitioners can use a data-centric approach to keep AI applications healthy. Data is like food for AI. In your application, data should be a first-class citizen. Endorsing this idea isn’t sufficient; organizations need an “infrastructure” to keep the right data coming. 

MLops: The “how” of data-centric AI

A continuous supply of good data requires ongoing processes and practices known as MLops, for machine learning (ML) operations. The key mission of MLops: make high-quality data available, because it’s essential to a data-centric AI approach.

MLops works by tackling the specific challenges of data-centric AI, which are complicated enough to ensure steady employment for data scientists. Here is a sampling: 

  • The wrong amount of data: Noisy data can distort smaller datasets, while larger volumes of data can make labeling difficult. Both issues throw models off. The right size of dataset for your AI model depends on the problem you are addressing. 
  • Outliers in the data: A common shortcoming in data used to train AI applications, outliers can skew results. 
  • Insufficient data range: This can cause an inability to properly handle outliers in the real world. 
  • Data drift: This often degrades model accuracy over time. (A minimal sketch of checks for the first three issues follows this list; a drift score is sketched after the questions list further down.)
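As referenced in the list above, here is a minimal sketch of checks for dataset size, missing values, and outliers using pandas; the IQR rule and thresholds are illustrative defaults to adapt per problem.

```python
import pandas as pd

def basic_data_checks(df: pd.DataFrame, numeric_col: str) -> dict:
    """Quick sanity checks on one numeric feature before (re)training."""
    q1, q3 = df[numeric_col].quantile([0.25, 0.75])
    iqr = q3 - q1
    outliers = df[(df[numeric_col] < q1 - 1.5 * iqr) | (df[numeric_col] > q3 + 1.5 * iqr)]
    return {
        "rows": len(df),
        "missing_ratio": float(df[numeric_col].isna().mean()),
        "outlier_ratio": len(outliers) / max(len(df), 1),
    }
```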

These issues are serious. A Google survey of 53 AI practitioners found that “data cascades—compounding events causing negative, downstream effects from data issues — triggered by conventional AI/ML practices that undervalue data quality… are pervasive (92% prevalence), invisible, delayed, but often avoidable.”

How does MLops work?

Before deploying an AI model, researchers need to plan to maintain its accuracy with new data. Key steps: 

  • Audit and monitor model predictions to continuously ensure that the outcomes are accurate
  • Monitor the health of data powering the model; make sure there are no surges, missing values, duplicates, or anomalies in distributions.
  • Confirm the system complies with privacy and consent regulations
  • When the model’s accuracy drops, figure out why

To practice good MLops and responsibly develop AI, here are several questions to address: 

  • How do you catch data drifts in your pipeline? Data drift can be more difficult to catch than data quality shortcomings. Data changes that appear subtle may have an outsized impact on particular model predictions and particular customers. (A simple drift score with alerting is sketched after this list.)
  • Does your system reliably move data from point A to point B without jeopardizing data quality? Thankfully, moving data in bulk from one system to another has become much easier as tools for ML improve.
  • Can you track and analyze data automatically, with alerts when data quality issues arise? 
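As a sketch of the drift question above, the snippet below computes a Population Stability Index between training and production samples of one feature and fires an alert callback when it crosses a threshold; the 0.2 cutoff is a common rule of thumb, not a universal standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training (expected) and production (actual) sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def alert_on_drift(expected, actual, notify, threshold=0.2):
    psi = population_stability_index(expected, actual)
    if psi > threshold:
        notify(f"Data drift suspected: PSI={psi:.2f}")   # hook into your paging/alerting system
    return psi
```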

MLops: How to start now

You may be thinking: how do we gear up to address these problems? Building an MLops capability can begin modestly, with a data expert and your AI developer. As an early-stage discipline, MLops is still evolving. There is no gold standard or approved framework yet to define a good MLops system or organization, but here are a few fundamentals:

  • In developing models, AI researchers need to consider data at each step, from product development through deployment and post-deployment. The ML community needs mature MLops tools that help make high-quality, reliable and representative datasets to power AI systems.
  • Post-deployment maintenance of the AI application cannot be an afterthought. Production systems should implement ML-equivalents of devops best practices including logging, monitoring and CI/CD pipelines which account for data lineage, data drifts and data quality. 
  • Structure ongoing collaboration across stakeholders, from executive leadership, to subject-matter experts, to ML/Data Scientists, to ML Engineers, and SREs.

Sustained success for AI/ML applications demands a shift from “get the code right and you’re done” to an ongoing focus on data. Systematically improving data quality for a basic model is better than chasing state-of-the-art models with low-quality data.

Not yet a defined science, MLops encompasses practices that make data-centric AI workable. We will learn much in the upcoming years about what works most effectively. Meanwhile, you and your AI team can proactively – and creatively – devise an MLops framework and tune it to your models and applications. 

Alessya Visnjic is the CEO of WhyLabs.

