
What Nvidia’s new MLPerf AI benchmark results really mean



Nvidia today released results against the new MLPerf industry-standard artificial intelligence (AI) benchmarks for its AI-targeted processors. While the results look impressive, it is important to note that some of the comparisons Nvidia makes with other systems are not really apples-to-apples. For instance, the Qualcomm systems run at a much smaller power footprint than the H100 and are targeted at market segments closer to the A100, where the test comparisons are much more equitable.

Nvidia tested its top-of-the-line H100 system, based on its latest Hopper architecture; its now mid-range A100 system, targeted at edge compute; and its smaller Jetson system, aimed at individual and/or edge workloads. This is the first H100 submission, and it shows up to 4.5 times higher performance than the A100. As the chart below shows, Nvidia posted some impressive results for the top-of-the-line H100 platform.

Image source: Nvidia.

AI inference workloads

Nvidia used the MLPerf Inference v2.1 benchmark to assess its capabilities across various AI inference workload scenarios. Inference is different from machine learning (ML) training, where models are created and systems “learn.”

Inference is used to run the learned models on a series of data points and obtain results. Based on conversations with companies and vendors, we at J. Gold Associates, LLC, estimate that the AI inference market is many times larger in volume than the ML training market, so showing good inference benchmarks is critical to success.
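To make the distinction concrete, here is a minimal sketch of the two phases in generic PyTorch; the tiny model and random data are hypothetical placeholders, not anything Nvidia or MLPerf actually runs.

```python
# Minimal sketch: training updates weights, inference only runs the learned model.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                      # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: weights are adjusted from labeled examples.
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference: the learned weights are frozen; the model only produces predictions.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
```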


Why Nvidia would run MLPerf

MLPerf is an industry standard benchmark series that has broad inputs from a variety of companies, and models a variety of workloads. Included are items such as natural language processing, speech recognition, image classification, medical imaging and object detection. 

The benchmark is useful in that it can work across machines from high-end data centers and cloud, down to smaller-scale edge computing systems, and can offer a consistent benchmark across various vendors’ products, even though not all of the subtests in the benchmark are run by all testers. 

It can also create scenarios for running offline, single stream or multistream tests that create a series of AI functions to simulate a real-world example of a complete workflow pipeline (e.g., speech recognition, natural language processing, search and recommendations, text-to-speech, etc.). 
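As a rough illustration of how those scenarios differ (this is a dummy workload, not the actual MLPerf harness), the sketch below measures a single-stream run, which reports per-query latency, against an offline run, which reports batch throughput.

```python
# Illustrative timing of "single-stream" vs. "offline" style measurements.
import time

def run_query(batch):
    # Placeholder for a real inference call; cost scales with batch size.
    time.sleep(0.001 * len(batch))
    return [0] * len(batch)

samples = list(range(1000))

# Single-stream: one query at a time, report average latency per query.
start = time.perf_counter()
for s in samples:
    run_query([s])
single_stream_latency = (time.perf_counter() - start) / len(samples)

# Offline: all samples submitted at once, report overall throughput.
start = time.perf_counter()
run_query(samples)
offline_throughput = len(samples) / (time.perf_counter() - start)

print(f"single-stream latency: {single_stream_latency * 1000:.2f} ms/query")
print(f"offline throughput:    {offline_throughput:.0f} samples/sec")
```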

While MLPerf is broadly accepted, many players feel that running only portions of the test (ResNet is the most common) is a valid indicator of their performance, and those partial results are more widely available than full MLPerf submissions. Indeed, the chart shows that many of the comparison chips have no results for the other components of MLPerf to compare against the Nvidia systems, because those vendors chose not to create them.

Is Nvidia ahead of the market?

The real advantage Nvidia has over many of its competitors is in its platform approach. 

While other players offer chips and/or systems, Nvidia has built a strong ecosystem that includes the chips, associated hardware, and a full stable of software and development tools optimized for its chips and systems. For instance, Nvidia’s Transformer Engine can select the level of floating-point precision (FP8, FP16, etc.) best suited to each point in the workflow, which can accelerate the calculations, sometimes by orders of magnitude. This gives Nvidia a strong position in the market, as it lets developers focus on solutions rather than on low-level hardware and code optimizations for systems that lack a corresponding platform.
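As an illustration of the general idea of reduced-precision inference — this is plain PyTorch autocast, not Nvidia’s Transformer Engine API — a minimal sketch might look like this:

```python
# Sketch of running inference in lower precision; model and inputs are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
x = torch.randn(32, 1024, device=device)

# FP16 on GPU, BF16 on CPU; matmuls run in reduced precision where supported.
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.no_grad(), torch.autocast(device_type=device, dtype=amp_dtype):
    out = model(x)
```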

Indeed, competitors Intel and, to a lesser extent, Qualcomm have also emphasized a platform approach, but the startups generally support only open-source options that may not match the capabilities the major vendors provide. Further, Nvidia has optimized frameworks for specific market segments that give solution providers a valuable starting point for faster time-to-market with less effort. Startup AI chip vendors can’t offer this level of resources.

Image source: Nvidia.

The power factor

One area that fewer companies test is the amount of power required to run these AI systems. High-end systems like the H100 can require 500-600 watts of power to run, and most large training systems use many H100 components, potentially thousands, within their complete system. The operating cost of such large systems is extremely high as a result.

The lower-end Jetson consumes only about 50-60 watts, which is still too much for many edge computing applications. Indeed, the major hyperscalers (AWS, Microsoft, Google) all see this as an issue and are building their own power-efficient AI accelerator chips. Nvidia is working on lower-power chips, particularly since Moore’s Law provides power reduction capability as the process nodes get smaller. 

However, Nvidia needs products in the 10-watt-and-below range if it wants to fully compete with newer optimized edge processors coming to market and with companies that have stronger low-power credentials, like Qualcomm (and Arm-based vendors generally). There will be many low-power uses for AI inference in which Nvidia currently cannot compete.
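A back-of-the-envelope performance-per-watt calculation shows why the power budget matters so much. The throughput figures below are purely hypothetical placeholders; only the wattage ranges roughly echo the ones discussed above.

```python
# Hypothetical performance-per-watt comparison; none of these numbers are measured.
accelerators = {
    "data-center GPU": {"throughput_inf_per_s": 50_000, "power_w": 550},
    "edge module":     {"throughput_inf_per_s": 2_000,  "power_w": 55},
    "low-power NPU":   {"throughput_inf_per_s": 400,    "power_w": 8},
}

for name, spec in accelerators.items():
    efficiency = spec["throughput_inf_per_s"] / spec["power_w"]
    print(f"{name:16s} {efficiency:8.1f} inferences/sec per watt")
```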

Nvidia’s benchmark bottom line

Nvidia has shown some impressive benchmarks for its latest hardware, and the test results show that companies need to take Nvidia’s AI leadership seriously. But it’s also important to note that the potential AI market is vast and Nvidia may not be a leader in all segments, particularly in the low-power segment where companies like Qualcomm may have an advantage. 

While Nvidia shows a comparison of its chips to standard Intel x86 processors, it does not have a comparison to Intel’s new Habana Gaudi 2 chips, which are likely to show a high level of AI compute capability that could approach or exceed some Nvidia products. 

Despite these caveats, Nvidia still offers the broadest product family, and its emphasis on complete platform ecosystems puts it ahead in the AI race in a way that will be hard for competitors to match.




3 model monitoring tips for reliable results when deploying AI



Artificial Intelligence (AI) promises to transform almost every business on the planet. That’s why most business leaders are asking themselves what they need to do to successfully deploy AI into production. 

Many get stuck deciphering which applications are realistic for the business; which will hold up over time as the business changes; and which will put the least strain on their teams. But during production, one of the leading indicators of an AI project’s success is the ongoing model monitoring practices put into place around it. 

The best teams employ three key strategies for AI model monitoring:

1. Performance shift monitoring

Measuring shifts in AI model performance requires two layers of metric analysis: health and business metrics. Most machine learning (ML) teams focus solely on model health metrics. These include metrics used during training — like precision and recall — as well as operational metrics — like CPU usage, memory, and network I/O. While these metrics are necessary, they’re insufficient on their own. To ensure AI models are impactful in the real world, ML teams should also monitor trends and fluctuations in product and business metrics that are directly impacted by AI.


For example, YouTube uses AI to recommend a personalized set of videos to every user based on several factors: watch history, number of sessions, user engagement, and more. And when these models don’t perform well, users spend less time on the app watching videos. 

To increase visibility into performance, teams should build a single, unified dashboard that highlights model health metrics alongside key product and business metrics. This visibility also helps ML Ops teams debug issues effectively as they arise. 
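A minimal sketch of such a unified snapshot follows, assuming you already collect labels, predictions, and one product metric; the metric names and the `watch_minutes` column are illustrative assumptions, not anything prescribed by the article.

```python
# Sketch: log model-health metrics next to a business metric in one snapshot.
from sklearn.metrics import precision_score, recall_score

def dashboard_snapshot(y_true, y_pred, watch_minutes):
    return {
        # Model health metrics (the same ones used during training/evaluation)
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        # Business metric directly influenced by the model's recommendations
        "avg_watch_minutes": sum(watch_minutes) / len(watch_minutes),
    }

snapshot = dashboard_snapshot(
    y_true=[1, 0, 1, 1, 0],
    y_pred=[1, 0, 0, 1, 0],
    watch_minutes=[34, 12, 8, 41, 15],
)
print(snapshot)
```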

2. Outlier detection

Models can sometimes produce an outcome that is significantly outside of the normal range of results  — we call this an outlier. Outliers can be disruptive to business outcomes and often have major negative consequences if they go unnoticed.

For example, Uber uses AI to dynamically determine the price of every ride, including surge pricing. This is based on a variety of factors — like rider demand or availability of drivers in an area. Consider a scenario where a concert concludes and attendees simultaneously request rides. Due to an increase in demand, the model might surge the price of a ride by 100 times the normal range. Riders never want to pay 100 times the price to hail a ride, and this can have a significant impact on consumer trust.

Monitoring can help businesses balance the benefits of AI predictions with their need for predictable outcomes. Automated alerts can help ML operations teams detect outliers in real time by giving them a chance to respond before any harm occurs. Additionally, ML Ops teams should invest in tooling to override the output of the model manually.  

In our example above, detecting the outlier in the pricing model can alert the team and help them take corrective action — like disabling the surge before riders notice. Furthermore, it can help the ML team collect valuable data to retrain the model to prevent this from occurring in the future. 
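A simple version of this kind of guardrail, sketched below with an illustrative z-score threshold and hard cap (not Uber’s actual system), combines an automated alert with an override of the model’s output.

```python
# Sketch: flag an out-of-range surge multiplier and clamp it until reviewed.
import statistics

def check_surge(multiplier, history, z_threshold=4.0, hard_cap=5.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9
    z = (multiplier - mean) / stdev

    if z > z_threshold:
        # In production this would page the ML ops team rather than print.
        print(f"ALERT: surge {multiplier:.1f}x is {z:.1f} std devs above normal")
        # Manual-style override: cap the model output until a human reviews it.
        return min(multiplier, hard_cap)
    return multiplier

recent_multipliers = [1.0, 1.2, 1.1, 1.4, 1.3, 1.0, 1.5, 1.2]
price_multiplier = check_surge(100.0, recent_multipliers)
```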

3. Data drift tracking 

Drift refers to a model’s performance degrading over time once it’s in production. Because AI models are often trained on a small set of data, they initially perform well, since the real-world production data is very similar to the training data. But with time, actual production data changes due to a variety of factors, like user behavior, geographies and time of year. 

Consider a conversational AI bot that solves customer support issues. As we launch this bot for various customers, we might notice that users can request support in vastly different ways. For example, a user requesting support from a bank might speak more formally, whereas a user on a shopping website might speak more casually. This change in language patterns compared to the training data can result in bot performance getting worse with time. 

To ensure models remain effective, the best ML teams track drift in the distribution of features — that is, of the embeddings — between training data and production data. A large change in distribution indicates the need to retrain the models to achieve optimal performance. Ideally, data drift should be monitored at least every six months, and for high-volume applications it can occur as frequently as every few weeks. Failing to do so could cause significant inaccuracies and hinder the model’s overall trustworthiness.
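One common way to quantify such drift, sketched below on synthetic data with an illustrative 0.05 cutoff, is a two-sample Kolmogorov-Smirnov test between a training-time feature distribution and its production counterpart.

```python
# Sketch: detect distribution drift for one feature with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
prod_feature = rng.normal(loc=0.4, scale=1.2, size=10_000)  # shifted in production

result = ks_2samp(train_feature, prod_feature)
if result.pvalue < 0.05:
    print(f"Drift detected (KS statistic={result.statistic:.3f}); consider retraining.")
else:
    print("No significant drift detected.")
```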

A structured approach to success 

AI is neither a magic bullet for business transformation nor a false promise of improvement. Like any other technology, it has tremendous promise given the right strategy. 

Even when developed from scratch, AI cannot be deployed and then left to run on its own without proper attention. Truly transformative AI deployments adopt a structured approach that involves careful monitoring, testing, and continual improvement over time. Businesses that do not have the time or the resources to take this approach will find themselves caught in a perpetual game of catch-up.

Rahul Kayala is principal product manager at Moveworks.




Google is using a new way to measure skin tones to make search results more inclusive

Google is partnering with a Harvard professor to promote a new scale for measuring skin tones with the hope of fixing problems of bias and diversity in the company’s products.

The tech giant is working with Ellis Monk, an assistant professor of sociology at Harvard and the creator of the Monk Skin Tone Scale, or MST. The MST Scale is designed to replace outdated skin tone scales that are biased towards lighter skin. When these older scales are used by tech companies to categorize skin color, it can lead to products that perform worse for people with darker coloring, says Monk.

“Unless we have an adequate measure of differences in skin tone, we can’t really integrate that into products to make sure they’re more inclusive,” Monk tells The Verge. “The Monk Skin Tone Scale is a 10-point skin tone scale that was deliberately designed to be much more representative and inclusive of a wider range of different skin tones, especially for people [with] darker skin tones.”

There are numerous examples of tech products, particularly those that use AI, that perform worse with darker skin tones. These include apps designed to detect skin cancer, facial recognition software, and even machine vision systems used by self-driving cars.

Although there are lots of ways this sort of bias is programmed into these systems, one common factor is the use of outdated skin tone scales when collecting training data. The most popular skin tone scale is the Fitzpatrick scale, which is widely used in both academia and AI. This scale was originally designed in the ’70s to classify how people with paler skin burn or tan in the sun and was only later expanded to include darker skin.

This has led to some criticism that the Fitzpatrick scale fails to capture a full range of skin tones and may mean that when machine vision software is trained on Fitzpatrick data, it, too, is biased towards lighter skin types.

The 10-point Monk Skin Tone Scale.
Image: Ellis Monk / Google

The Fitzpatrick scale comprises six categories, but the MST Scale expands this to 10 different skin tones. Monk says this number was chosen based on his own research to balance diversity and ease of use. Some skin tone scales offer more than a hundred different categories, he says, but too much choice can lead to inconsistent results.

“Usually, if you got past 10 or 12 points on these types of scales [and] ask the same person to repeatedly pick out the same tones, the more you increase that scale, the less people are able to do that,” says Monk. “Cognitively speaking, it just becomes really hard to accurately and reliably differentiate.” A choice of 10 skin tones is much more manageable, he says.

Creating a new skin tone scale is only a first step, though, and the real challenge is integrating this work into real-world applications. In order to promote the MST Scale, Google has created a new website, skintone.google, dedicated to explaining the research and best practices for its use in AI. The company says it’s also working to apply the MST Scale to a number of its own products. These include its “Real Tone” photo filters, which are designed to work better with darker skin tones, and its image search results.

Google will let users refine certain search results using skin tones selected from the MST Scale.
Image: Google

Google says it’s introducing a new feature to image search that will let users refine searches based on skin tones classified by the MST Scale. So, for example, if you search for “eye makeup” or “bridal makeup looks,” you can then filter results by skin tone. In the future, the company also plans to use the MST Scale to check the diversity of its results so that if you search for images of “cute babies” or “doctors,” you won’t be shown only white faces.

“One of the things we’re doing is taking a set of [image] results, understanding when those results are particularly homogenous across a few set of tones, and improving the diversity of the results,” Google’s head of product for responsible AI, Tulsee Doshi, told The Verge. Doshi stressed, though, that these updates were in a “very early” stage of development and hadn’t yet been rolled out across the company’s services.

This should strike a note of caution, not just for this specific change but also for Google’s approach to fixing problems of bias in its products more generally. The company has a patchy history when it comes to these issues, and the AI industry as a whole has a tendency to promise ethical guidelines and guardrails and then fail on the follow-through.

Take, for example, the infamous Google Photos error that led to its search algorithm tagging photos of Black people as “gorillas” and “chimpanzees.” This mistake was first noticed in 2015, yet Google confirmed to The Verge this week that it has still not fixed the problem but simply removed these search terms altogether. “While we’ve significantly improved our models based on feedback, they still aren’t perfect,” Google Photos spokesperson Michael Marconi told The Verge. “In order to prevent this type of mistake and potentially causing additional harm, the search terms remain disabled.”

Introducing these sorts of changes can also be culturally and politically challenging, reflecting broader difficulties in how we integrate this sort of tech into society. In the case of filtering image search results, for example, Doshi notes that “diversity” may look different in different countries, and if Google adjusts image results based on skin tone, it may have to change these results based on geography.

“What diversity means, for example, when we’re surfacing results in India [or] when we’re surfacing results in different parts of the world, is going to be inherently different,” says Doshi. “It’s hard to necessarily say, ‘oh, this is the exact set of good results we want,’ because that will differ per user, per region, per query.”

Introducing a new and more inclusive scale for measuring skin tones is a step forward, but much thornier issues involving AI and bias remain.





Google is testing a way to start streaming games from search results

One thing that will help bolster adoption of cloud gaming is making it as easy as possible to fire up a game. To that end, Google is testing a way to start playing something with a single click from search results, even if it’s not on the company’s own platform.

The test, which was spotted by Bryant Chappel of The Nerf Report, not only enables folks to directly launch a game on Stadia, but it works with other cloud gaming services as well. If you’re enrolled in the test and search for a supported game on one of those platforms, you may see a Play button in the information panel. Clicking that will either start up the game or take you to a landing page on the respective streaming platform.

Other outlets saw the feature in action too, noting that the search results can show if a game has a timed trial on Stadia or if it’s available for free or as part of a premium subscription.

It’s not incredibly surprising to see Google testing out such functionality. For several years, it has shown folks where they can stream movies and TV shows. For instance, if you have a Netflix subscription and search for Stranger Things on Google, you’ll be able to start watching the show with a single click.

In hindsight, it’s a little odd that Google hasn’t offered this feature for Stadia from the jump in order to promote its cloud gaming service. On the other hand, Stadia’s store didn’t have a search function for a year and a half, which offered further evidence that the platform isn’t exactly one of Google’s highest priorities. However, Stadia is not shutting down and Google is slowly adding more features to it.






New Intel Raptor Lake benchmarks deliver some weird results

With Intel Raptor Lake now on the horizon, the first benchmarks of the upcoming processors are starting to appear. Today, the mid-range Core i5-13600K was put to the test and compared to its successful Alder Lake predecessor, the Core i5-12600K.

The benchmarks returned … weird results. The Core i5-13600K blows its predecessor out of the water with an 80% improvement, but it also falls flat in another test with a 26% downgrade. What does this mean for the Raptor Lake lineup?

Enthusiastic Citizen (ECSM_Official)

A known leaker, Enthusiastic Citizen (ECSM_Official) on Bilibili, was able to get their hands on an engineering sample of the Intel Core i5-13600K. This pre-production model is labeled as “ES3,” meaning it’s the third engineering sample (ES) of the Raptor Lake CPU. Enthusiastic Citizen simulated the processor to run at the same frequency as a qualification sample (QS) and then proceeded to run tests on CPU-Z and Cinebench R23.

In order to accurately simulate a QS model, the tester had to bring up the clock speeds a notch. The performance (P) cores were tuned to a frequency of 5.1GHz while the efficiency (E) cores were brought up to 4.0GHz. Although Intel has yet to release the official specifications of the Core i5-13600K, ECSM says that the CPU comes with six P-cores and eight E-cores, adding up to a total of 14 cores and 20 threads. This is an upgrade over the Intel Alder Lake offering with 10 cores and 16 threads.

As far as the benchmark results go, there is certainly a bit of a discrepancy here. In the CPU-Z benchmark, the Intel Core i5-13600K did a great job across the board, hitting 830 points in single-core and 10,031 points in multi-core versus the 768 and 5,590 points achieved by the Core i5-12600K. As noted by VideoCardz, this comes to an 8% upgrade in single-core and a massive 79% upgrade in multi-core operations. Of course, an improvement is to be expected, but the 79% gain almost seems too optimistic to be real.
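For reference, those percentages fall straight out of the quoted CPU-Z scores:

```python
# Percentage gains computed from the CPU-Z scores quoted above.
def pct_change(new, old):
    return (new - old) / old * 100

print(f"single-core: {pct_change(830, 768):+.0f}%")       # roughly +8%
print(f"multi-core:  {pct_change(10_031, 5_590):+.0f}%")   # roughly +79%
```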

The Cinebench R23 test is where things get even more confusing. The Core i5-13600K scored 1,387 points in single-core and 24,420 in multi-core tests. While the multi-core improvement is still here, albeit less massive (40% versus 79%), the single-core test was definitely a letdown, resulting in a 26% downgrade. The tester didn’t explain this massive discrepancy between the benchmarks.

Intel Raptor Lake chip shown in a rendered image.
Wccftech

Seeing as the benchmark results are all over the place, what does that mean for the Intel Core i5-13600K? Is it going to be a multi-tasking beast that falls flat on single-core tasks? It seems unlikely. It’s still early days and we’re dealing with engineering samples, so there’s no telling whether these test results are accurate. If they are, it means that Intel needs to go back and try to address that problem.

Such a weird discrepancy has no place going into mass production, so if it does keep showing up in further benchmarks, let’s hope that Intel gets it sorted out before the CPUs hit the shelves. The latest rumored release date for Intel Raptor Lake is in October, so time grows short.



Google can now remove search results that dox you without second-guessing intent

Google says it’s expanding the types of personal information that it’ll remove from search results to cover things like your physical address, phone number, and passwords. Before now, the feature mostly covered info that would let someone steal your identity or money — now, you can ask Google to stop showing certain URLs that point to info that could lead someone to your house or give them access to your accounts.

According to a blog post, Google’s giving people the new options because “the internet is always evolving” and if its search engine gives out your phone number or home address, that could be both jarring and dangerous. Here’s a list of what kinds of info Google may remove, with the new additions in bold (h/t to the Wayback Machine for making the old list accessible):

  • Confidential government identification (ID) numbers like U.S. Social Security Number, Argentine Single Tax Identification Number, etc.
  • Bank account numbers
  • Credit card numbers
  • Images of handwritten signatures
  • Images of ID docs
  • Highly personal, restricted, and official records, like medical records (used to read “Confidential personal medical records”)
  • Personal contact info (physical addresses, phone numbers, and email addresses)
  • Confidential login credentials

According to a support page, Google will also remove things like “non-consensual explicit or intimate personal images,” pornographic deepfakes or Photoshops featuring your likeness, or links to sites with “exploitative removal practices.”

Google already had policies in place that let people remove personal information that had been shared maliciously, an act commonly known as doxxing. This policy change, however, requires less of a judgment call — instead of a Google employee having to look at the links submitted and somehow determine whether they’d cause harm, now Google will just have to decide whether the info is of public interest. According to its FAQ, that person will determine whether the info in the link is “newsworthy,” “professionally-relevant,” or whether it came from a government — all things that could be far easier to decide than trying to figure out why a phone number was posted.

The company does still have a process for dealing with actual malicious doxxing too, if, say, your phone number was shared “with malicious, threatening, or harassing intent.” When you’re filling out the personal information removal request, there’s a question on whether you think that’s the case. The form also asks you to give Google a list of URLs that link to the personal information, as well as the search pages that surface those links. After you submit a request, Google will evaluate the links and determine whether they’re relevant to the public. If Google does decide that the links should be removed, it says they’ll either not show up for any search query or that they won’t be surfaced for searches that include your name.

Google seems to be applying a relatively high bar for what counts as personally identifying information, which makes it a bit different from the systems it’s had to implement in places like the EU to comply with so-called right to be forgotten rules. Those laws let people request that links they deem unflattering or irrelevant be taken down, which isn’t the case here — the rules Google added today only cover links to very sensitive info.

If you’ve ever searched for someone’s phone number, you may have ended up at a site that exists explicitly to sell people’s information, promising to give it to you if you subscribe. When asked if the new policy would apply to these types of sites, Google spokesperson Ned Adriance told The Verge that it would: “If we can verify that such links contain personally identifiable information, there is not other content on the webpage that may be of public interest, and we receive a request to remove those URLs, we will do so, assuming they meet our requirements outlined in the help page — whether or not the information is behind a paywall,” he said in an email.

This page was easily accessible from a Google link and promises to give out my phone number and address. If it meets Google’s requirements, it counts.

Importantly, as Google notes on its support page and in its blog post, getting the information taken off of Google Search doesn’t wipe it from the internet. If, for example, you ask Google to delist a forum post with your address in it, anyone who goes to that forum will be able to see it; the post just shouldn’t pop up if someone searches “[your name] home address.”

Update April 27th, 5:05PM ET: Added statement from Google on paywalled info sites.

Correction, April 28th 4:21PM ET: A previous version of this article didn’t properly address the difference between Google’s new policies for removing personally identifying information, and removing links meant to maliciously dox a person. The story has been updated to provide additional context, and we regret the error.



Epic challenged Fortnite players to make short films: Here are the results

Epic Games announced a twist when it revealed its last Fortography challenge: it wasn’t about images at all this time around, but rather videos and how creative players could make them using the game and its various assets. Fast-forward a few weeks and the company is back with the results, showcasing the player-made videos it liked the most.

Epic’s battle royale game Fortnite features a huge number of characters, outfits, and emotes, among other things, enabling players to wear unique looks and perform a variety of actions. Those outfits and emotes are the central tools used by the players behind a new series of short films made in the game, all of them revolving around a Halloween theme.

Epic has shared the videos it liked the most in a new blog post, noting that it selected videos that followed the rules it established for its first Video Fortography challenge. The videos use various locations in Fortnite, as well as a mix of classic and more recent emotes and costumes to tell stories using emotion and music.

Some of the short videos are creepy, like a film from “GranbeFN” that was clearly inspired by slasher movies; others are more comical, like “Always Check Your Back” from “SteamyDucks”; and there’s a Japanese-narrated video about monsters returning on Halloween to eat unsuspecting players’ candy and treats.

Though anyone can participate in these challenges (for which the ‘rewards’ are merely bragging rights), players on PC, Xbox, and PlayStation are at the greatest advantage due to one important feature: replay mode. This game mode automatically saves recent matches, allowing players to revisit them using advanced controls.

The replay mode playback functionality includes the ability to move around the game during the match, view yourself in third person, speed up and slow down the playback speed, and more. This is the tool behind some of the shots featured in these showcased videos, including scenes that wouldn’t be possible by using regular gameplay footage.

Check out the rest of the Fortnite Halloween short films on Epic’s game blog.



Google will now warn you if your search results are probably crap

Your Google searches for breaking news stories may now produce a surprising outcome: a warning that your results could be unreliable.

The company has started showing notifications for searches on emerging topics, which suggest that users return later when more information is available.

The notice is Google’s latest effort to mitigate misinformation in search results for breaking news. In a blog post, Danny Sullivan, public liaison for search at Google, said that sometimes reliable information isn’t online at the time that users search:

To help with this, we’ve trained our systems to detect when a topic is rapidly evolving and a range of sources hasn’t yet weighed in. We’ll now show a notice indicating that it may be best to check back later when more information from a wider range of sources might be available.

The feature was first spotted by Stanford Internet Observatory researcher Renee DiResta, who described it as a “positive step.”

Google has long been criticized for letting unreliable sources and conspiracy theories reach the top of search results for rapidly evolving stories.


Twitter and Facebook have faced similar accusations. Karen North, an expert in social media at the University of Southern California, told the New York Times in 2018 that users can game ranking algorithms in these situations:

Before reliable sources put up stories, it’s a bit of a free-for-all. People who are in the business of posting sensationalized opinions about the news have learned that the sooner they put up their materials, the more likely their content will be found by an audience.

The warnings may help stem the tide of misinformation, but they could also exacerbate concerns about Google censoring alternative media outlets.



Analytics shop Sisu’s new tool automates search results visualizations



Data analytics shop Sisu today unveiled a Smart Waterfall Charts tool that automatically visualizes search results for users of its platform.

The Sisu platform scans schemas, data types, cardinality, and other related attributes of data sets regardless of the physical location. It then automatically transforms the information into an index that can be queried using natural language. The challenge has been that, before now, someone still needed to manually massage the results of those queries to visualize them for users via a dashboard.

The Smart Waterfall Charts tool now automates the visualization of queries launched via its search engine, said Sisu CEO Peter Bailis. Previously, analysts would have spent hours creating visualizations of the results of data queries, he noted. “It’s the last mile of the process,” he said.

That approach makes it possible to visualize queries of massive amounts of data via a search engine that employs keyword analysis and other data science techniques, using machine learning algorithms and statistical analysis to surface attributes and relationships, Bailis said.

Sisu is not the first platform to employ search engines to query data, but Bailis said it is uniquely able to analyze massive amounts of varied types of data at scale because of the way it indexes data. Rather than requiring all data to be moved into a single data lake or warehouse, Bailis said it’s often more practical to index data wherever it resides rather than going to the trouble and expense of moving it.

Business analysts can also now, for example, not only more easily surface what is occurring, but also discern why it is happening because the platform automatically highlights recent changes to related data sets in real time, Bailis noted. That’s critical because analysts are not always close enough to the business to know which queries should be launched across sets of data that at first glance might not appear to be related.
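For readers unfamiliar with the format, the generic matplotlib sketch below shows the kind of stepwise breakdown a waterfall chart conveys; it is not Sisu’s API, and the segment labels and values are hypothetical.

```python
# Generic waterfall chart: a starting total, signed contributions, and an ending total.
import matplotlib.pyplot as plt

labels = ["Last quarter", "New users", "Churn", "Price change", "This quarter"]
steps = [100.0, 25.0, -15.0, 8.0]            # starting total plus contributions
total = sum(steps)

# Compute where each bar starts and how tall it is.
bottoms, heights, running = [0.0], [steps[0]], steps[0]
for delta in steps[1:]:
    bottoms.append(running if delta >= 0 else running + delta)
    heights.append(abs(delta))
    running += delta
bottoms.append(0.0)                           # ending total bar starts at zero
heights.append(total)

plt.bar(labels, heights, bottom=bottoms)
plt.ylabel("Metric value")
plt.title("Waterfall breakdown of a metric change")
plt.show()
```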

An upgrade from spreadsheets

There is, of course, no shortage of tools for analyzing data. The issue has always been the ability to surface actionable intelligence versus a report that identifies trends long after it’s too late to have an impact on the outcome. In the absence of analytics that are immediately relevant, business leaders will simply continue to rely on gut instinct versus actual facts.

Providers of analytics applications have been trying to convince businesses to give up relying on spreadsheets to analyze data for the better part of two decades now, with limited success. However, as analytics applications start to leverage machine learning algorithms and other forms of statistical analysis, the insights a modern analytics platform can surface from massive amounts of data either can’t be surfaced in a spreadsheet application at all or can’t be easily replicated there.

Worse yet, the typical spreadsheet application is not vetted for accuracy, and if the person who created it leaves the company, there is often no documentation describing how that application works, let alone documentation sufficient to pass a compliance audit.

It’s now only a matter of time before organizations that employ advanced analytics applications are consistently making better business decisions faster. In contrast, organizations that rely primarily on a mix of gut instinct and spreadsheet data that’s tough to visualize or comprehend will find themselves falling further behind.

We don’t yet know which analytics platforms will ultimately prevail in this crowded space. Many line-of-business units today employ multiple analytics applications with little or no support from central teams. However, in the longer term, businesses will eventually standardize on a few analytics platforms, if for no other reason than to reduce licensing costs.




SpaceX Starlink Speedtest results are very inconsistent

Elon Musk envisioned that SpaceX’s Starlink satellite constellation would be able to deliver Internet to areas underserved by most carriers. The service, however, would have to cater to a broad range of users and locations within the narrow band of areas those satellites can actually reach. Starlink promised speeds faster than your average DSL or fiber Internet, and Speedtest creator Ookla discovered that to be true — but only in certain areas. In others, it actually did worse than what consumers already had.

Always take benchmarks with a grain of salt, of course, and Speedtest has been called out more than once for some of its methods or results. That said, if used across a series of tests, it does serve as a metric and starting point for discussion. And there will probably be a lot of that discussion around these results from Starlink.

Given Musk’s goals and boasts about Starlink speeds, it’s easy enough to be disappointed with Speedtest’s results in areas covered by the service in the US and Canada. Median download speeds ranged from 40 Mbps to 93 Mbps, a far cry from the above 100 Mbps figures that were initially reported. What makes the situation worse is that the median may actually be worse than fixed broadband in some areas.

That said, these results were somewhat expected if you examine the locations that produced the best and worst speeds. The worst ones were in dense cities with tall buildings that would naturally block satellite signals from getting through at full strength. Satellite Internet, after all, works better in wide, open spaces.

That does raise questions about whether Starlink will be a viable business in the long run. The people who would benefit the most from such an Internet connection might not be in a position to afford such a service. Then again, the constellation is far from complete, and the service could improve once more satellites litter our skies.
