Categories
AI

Google announces new search tools for online shopping

Shopping online isn’t always convenient. If you enjoy window shopping or browsing curated collections at a brick-and-mortar store for inspiration, finding something online that you don’t yet know you want is tricky when you start with a text search. Google is announcing new shopping search tools to alleviate this, with features that use Google Lens to find products to buy from pictures online, broader search terms to help you browse clothing, and the ability to check in-store inventory from home. It claims the new tools will help shoppers “find what they’re looking for in a more visual way.” This comes after Google allowed all businesses to create listings on Google Shopping for free last year. Now, it wants more window shopping to be done right from Google search.

Google Lens has been around since 2017, replacing Google Goggles that came before it, with the ability to use a smartphone camera to conduct visual searches based on the identification of objects found in the real world. Those image searches have allowed users to learn more about the things around them, even finding the same or similar item to buy without looking for a label or barcode to scan. Now, Google wants to make it possible to shop for any product you see in an image or video on the web with nothing more than the picture itself. Soon, iOS users will have a new dedicated button in the Google app, allowing a Google Lens search of any image on a page to bring up Google Shopping listings for purchase through a visual match. The feature will also be coming to Chrome on the desktop.

Google did not give specific dates for the feature’s launch on iOS or desktop, saying only that it hopes to roll both out by the end of the year. There was no initial mention of if or when the feature might come to Android, but Google has since clarified that it plans to extend the functionality to Android at a later date, after the iOS and desktop versions.

Google Lens search in the Google app on iOS.
GIF: Google

Desktop Google Lens search on Chrome.
GIF: Google

Another shopping-focused feature coming from Google, which has surely been spurred on by the boom in e-commerce since the beginning of the pandemic, is easier browsing of clothing, accessories, and shoes via search results based on general terms. Google says that if you search for a generic article on mobile, for example, “cropped jackets,” you will see a visual feed of that type of clothing in a variety of colors and designs. These visual results will be accompanied by relevant videos, style guides, or local shops that carry those styles. From there, you can filter your search further according to brand, style, or department; check ratings and reviews; or compare prices on the results that appeal to you most.

Google calls this window shopping, an experience that is otherwise hard to replicate online compared to going to a physical store to see what’s on display. The company says the results are pulled from a dataset of over 24 billion product listings. The new feature is available only on mobile and is usable right from a Google search beginning today.

Shoppable search within the Google app.
GIF: Google

The third and final Google Shopping update allows users to remotely check in-store inventory directly within a Google search. Shoppers searching for a product are able to filter by “in stock.” This selection should show nearby stores that have the item available. Google claims the new feature can help a small business attract new customers, though it remains to be seen how accurate it might be across a variety of retailers and how one might ensure a product is there for them once they arrive — particularly at small businesses that do not have curbside or in-store pickup.

Google has indicated that it relies on data from the retailer to determine stock status and says it will only mark an item as in stock when it has high confidence; otherwise, it may show limited stock.

The new “in stock” filter is available today across mobile web browsers and the Google app on both iOS and Android.

Google’s in-stock filtering when shopping nearby locations for an item.
GIF: Google

Repost: Original Source and Author Link

Categories
Security

Microsoft’s Chromium-based Edge browser has tools to protect your privacy

One of the things that many people look for in a browser is how it protects their privacy against all the various trackers that are hidden in many of the sites out there. Microsoft Edge, the Chromium-based browser that is built into current versions of Windows, has its share of protections as well — it’s even adding its own VPN to the mix. Edge includes tools to block both first-party cookies (used to keep you logged in or remember the items in your shopping cart) and third-party tracking cookies (used to keep track of your browsing activity).

Here are instructions on how to change your settings, see what trackers are stored on your browser, and delete any cookies. We also address how Edge deals with fingerprinting, another method of tracking that identifies users by collecting details about their system configuration.

Deal with trackers

Edge blocks trackers by default using one of three different levels of protection. Balanced, which is active upon installation, blocks some third-party trackers along with any trackers designated as “malicious.” This mode takes into account sites you visit frequently and the fact that an organization may own several sites; it lowers tracking prevention for organizations you engage with regularly. Basic offers more relaxed control; it still blocks trackers but only those Microsoft describes as “malicious.” You can also switch to Strict, which blocks most third-party trackers across sites.

To change your level of protection:

  • Click on the three dots in the top-right corner of your browser window and go to Settings. Select Privacy, search, and services from the left-hand menu.
  • Make sure Tracking prevention is switched on, and then select which level you want.

Showing Edge’s three levels of privacy protection.

Edge blocks trackers by default using one of three different levels of protection.

Adjust your tracking settings

While Edge provides you with the three easy-to-choose tracking modes, you can also dive deeper to see which trackers are blocked and make exceptions for specific sites.

  • On the Privacy, search, and services page, look for the Blocked trackers link just beneath the three tracking prevention modes. Click on that to see all of the trackers Edge has blocked.
  • Beneath the Blocked trackers link is the Exceptions link, where you can specify any sites where you want tracking prevention turned off.

The Blocked tracker page shows all of the trackers Edge has blocked.

When you’re at a site, you can see how effective your tracking prevention is by clicking on the lock symbol on the left side of the top address field. The drop-down box allows you to view the associated cookies and site permissions, allow or disable pop-ups, tweak the tracking permissions for that site, and see what trackers have been blocked.

A drop down menu listing various privacy features.

Click on the lock symbol to see a count of your blocked trackers.

Clean up your cookies

Conveniently, Edge can delete several types of data each time you close it, including browsing history, passwords, and cookies.

  • Go back to Settings > Privacy, search, and services and scroll down to Clear browsing data.
  • Click the arrow next to Choose what to clear every time you close the browser.
  • Toggle on any of the data categories you’d like to be cleared each time you exit Edge. If you select Cookies and other site data, you can also choose any sites whose cookies you want to retain by clicking on the Add button.

Choose what data you want deleted when you close the browser.

You can also manually clear your cookies and other data at any point:

  • On the Privacy, search, and services page, look for Clear browsing data now, and click on the button labeled Choose what to clear. This will open up a smaller window with several options.
  • Select the type of data you want to delete.
  • You can also select a time range within which to delete that data: the last hour; the last 24 hours; the last seven days; the last four weeks; or all time.
  • There is also a link to clear your data if you’ve been using legacy websites in Internet Explorer mode. You are also warned that clearing your data will clear it across all synced devices. (But you can sign out of your Microsoft account to clear it only on that specific computer.)
  • Ready? Click Clear now.

Menu for manually deleting data.

You can also manually delete data.

There are other privacy features on the Privacy, search, and services page, including the option to send “Do Not Track” requests (although the usefulness of such requests is questionable).

If you scroll down to the Security section of that page, you will find a number of features you can toggle on or off. They include Microsoft Defender SmartScreen, which helps protect against malicious sites and, when enabled, blocks downloads of potentially dangerous apps. There is also a feature that stops you from accidentally landing on a problematic site when you mistype an address.

Fingerprinting and ad blocking

According to Microsoft, the three tracking prevention modes will help protect against the type of personalization that leads to fingerprinting.

Edge does not block ads natively, but you can download ad-blocking extensions. Because the browser is now based on Chromium, many Chrome extensions (as well as extensions from the Microsoft Store) will work with this latest version of Edge, a distinct advantage.

Update May 10th, 2022, 10:30AM ET: This article was originally published on February 13th, 2020, and has been updated to reflect changes in the OS and the Edge app.

Repost: Original Source and Author Link

Categories
AI

Iterate integrates more visual tools with Interplay 7 low-code platform update for AI

Multiple companies are vying for a share of the competitive and growing market for low-code AI and ML tools. The basic concept behind low code is to make it easier for users to build applications without needing to dig into source code development tools. Building out AI and ML integrations is a growing area, and it is the space where San Jose, California-based Iterate is playing.

The company, which is releasing its Interplay 7 platform today, is firmly focused on AI. The release integrates a new workflow engine that enables users to work with multiple AI nodes to load data for building applications and machine learning models. It also provides streamlined integration with Google Vertex AI and AWS Sagemaker for users that want to send trained data models to those services for additional processing. The overall goal is to help larger companies with a modular approach to building AI/ML powered applications.

“We have realized that a lot of large companies we work with have many web engineers, but they actually don’t have machine learning backend skills,” Brian Sathianathan, CTO at Iterate, told VentureBeat.

Competitive marketplace for low code shows promise for AI

The overall market for low-code development tools is forecast to be worth $16.8 billion in 2022, according to Gartner research. Gartner expects the total market for low code to reach $20.4 billion by 2023.

Looking out even further, Gartner forecasts that a whopping 70% of all new applications developed by companies will use some form of low-code approach by 2025. In contrast, the analyst firm reported that in 2020 fewer than 25% of organizations were using low code to build new applications.

The vendor marketplace for low-code technologies is crowded: Startup Sway.ai launched its no-code platform for helping users build AI-powered applications in February. In the same month, startup Mage launched its low-code AI development tool, which helps organizations generate models. Cogniteam announced its latest low-code AI updates on May 9, with a focus on robotics. Other established vendors in the low-code space include Appian and Mendix, both of which have varying degrees of AI enablement capabilities.

“Low-code represents the primary growth area for AI-driven application development atop a complicated web of services and data sources,” Jason English, principal analyst at Intellyx LLC told VentureBeat. “Bringing down the technical barrier to entry allows business expertise to be applied for ML training, automation and inference tasks.”

How Iterate aims to differentiate

Sathianathan told VentureBeat that one way his company’s platform differentiates itself from other low-code technologies in the marketplace is with specific industry use case templates.

For example, rather than just providing a generic tool to help users connect to data sources and build out an AI-enabled application, Interplay provides templates for industry verticals including retail, healthcare, automotive, and oil and gas. Sathianathan said an organization will typically engage with Iterate to solve a specific use case, though the platform can also be customized to build applications beyond the available templates.

In the Interplay 7 release, Iterate is also improving its data preparation capabilities with a visual tool for cleaning data so that it is useful for machine learning training operations. In prior releases, Sathianathan said, data cleaning was not a visual process.

A primary theme of the Interplay update is improving the front end that users actually see. The platform now integrates with Figma, the popular tool used to mock up interface designs. Sathianathan said that at large organizations, customer-facing interfaces are often designed by agencies that, more often than not, use Figma. The integration enables Figma designs to be imported into Interplay 7 and then connected to the AI backend to build applications.

Looking forward to future releases, Sathianathan said he’s looking to expand the AI capabilities his platform can support for the development of AI techniques such as digital twins.

“Today we support the big four use cases of AI: regression, classification, clustering, and image recognition,” he said. “In the future we’re going to add additional capabilities.”

Repost: Original Source and Author Link

Categories
Game

Discord gives server owners the tools to put channels behind a paywall

Discord has started testing a feature called Premium Memberships with a small group of users. The tool allows community owners to gate access to part or all of their server behind a monthly subscription fee. It’s something the company’s growing number of users, particularly admins and mods, have been asking it to implement for a while. Before today, those individuals had to turn to third-party services like Patreon to monetize access to their servers.

By contrast, the Premium Memberships tool creates a streamlined interface for that same purpose. A new tab under the “Community” heading in the app’s settings menu allows server owners to do things like set price tiers and view related analytics. The feature similarly streamlines the process of signing up for paid channels. If you want to support a community, you don’t need to leave Discord to do so. When you tap on a premium channel, indicated by the new “sparkle” icon in the sidebar, the client will tell you how much you need to pay for access, as well as what perks you’ll gain for doing so.

Discord Premium Memberships

Discord

As for pricing, Discord says it will encourage server owners to experiment. As a baseline, the company will recommend a monthly fee of $2.99 as a minimum and $99.99 as a maximum. For most channels, you probably won’t pay more than $5 or $10 a month for access. The top end of the price range reflects how much Discord has changed since the start of the pandemic to accommodate a more diverse set of communities. Discord isn’t exclusively a place for gaming anymore.

“Access to a channel can seem simplistic, but it’s a foundational piece that someone can build a lot of flexibility on top of,” said Jesse Wofford, group product marketing manager at Discord, in an interview with Engadget. One of the server owners the company is working with on this week’s soft launch is someone who runs gaming bootcamps. According to Wofford, they’ve built an entire business around private lessons. The company wants to give those people the opportunity to create a sustainable business on its platform. 

Whatever someone decides to charge for access to their channels, the company plans to take a 10 percent cut of the subscription. “It’s an important stake in the ground for us that this product, like a lot of other ones, is built around creator success,” Wofford said. Additionally, for people who want to continue to use services like Patreon to monetize their channels, Discord won’t stop them from doing so. “One of the important things here is that we’re still investing in those relationships,” said Wofford. “We want to create an ecosystem that gives creators as many options to succeed as possible.”

Discord Premium Memberships

Discord

Premium Memberships is something Discord has been working on for a while. “There’s probably a world where we could have released this a while ago,” Wofford told me. “We wanted to make sure we took in the right feedback from a diverse set of creators and built it in a way that we felt confident would deliver value.” To that end, the company plans to take its time testing the feature before it rolls it out more widely.

Repost: Original Source and Author Link

Categories
AI

Deep tech, no-code tools will help future artists make better visual content

This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing in physical goods (pottery, jewelry, paintings, books, photos, videos, etc.). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation, where everyone and their mother is now creating, sharing, and participating online.

The vast majority of the content created and consumed on the internet is visual. In our recent Insights report at LDV Capital, we found that by 2027 there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that automate aspects of content creation and remove the technical skill from digital creation. This article discusses the findings of that report.

Group of superheroes on a dark background. Image Credit: ©LDV CAPITAL INSIGHTS 2021

We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content.

Currently, it takes time, often years of prior training, to produce a single piece of quality, contextually relevant visual content. Typically, it has also required deep technical expertise to produce content at the speed and in the quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm.

Computer vision will aid livestreaming

Livestreaming is video recorded and broadcast in real time over the internet, and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreaming content daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers’ feeds to be analyzed in real time so that production elements can be added automatically, improving quality while cutting back the time and technical skill required of streamers today.

Synthetic visual content will be ubiquitous

A lot of the visual content we view today is already computer-generated imagery (CGI), special effects (VFX), or imagery altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project synthetic visual content will be nearly 95% of the content we view. Synthetic media uses generative adversarial networks (GANs) to write text, make photos, create game scenarios, and more using simple prompts from humans such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.
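
To make the adversarial mechanics concrete, here is a minimal sketch of the training loop at the heart of GANs: a generator and a discriminator pitted against each other. It uses PyTorch on toy one-dimensional data; real synthetic-media systems are vastly larger and typically condition on prompts, and every architecture and number below is invented for illustration.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from a target distribution
# (here N(4, 1.25)) while the discriminator learns to tell real from fake.
torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 4.0 + 1.25 * torch.randn(64, 1)  # samples from the target distribution
    fake = G(torch.randn(64, 8))            # generated samples

    # Discriminator step: push real toward label 1, fake toward label 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labeling fakes as 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the target mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```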

Above: L: Remedial drawing created; R: Landscape image built by NVIDIA’s GauGAN from the drawing. Image Credit: ©LDV CAPITAL INSIGHTS 2021

In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations and do a full photo or video shoot. Moreover, it will enable video to be programmable – as simple as making a slide deck.

Synthetic media that leverages GANs can also personalize content nearly instantly, enabling any video to speak directly to the viewer using their name, or a video game to be written in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise, along with even more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). These formats will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content.

Graph displaying the motion capture landscape. Image Credit: ©LDV CAPITAL INSIGHTS 2021

The greatest hurdle to generating animated content today is the skill, and the resulting time and budget, needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to digitally record the pattern of an individual’s movements for the purpose of animating them. An example would be recording Steph Curry’s jump shot for NBA 2K.

Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap – with little to no suits, sensors, or hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. As capabilities advance to full body capture, it will make MoCap easier, faster, budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character live streaming, gaming, and more.

Nearly all content will be gamified

Gaming is a massive industry set to hit nearly $236 billion globally by 2027. It will expand further as more content introduces gamification to encourage interactivity. Gamification applies typical elements of game playing, such as point scoring, interactivity, and competition, to encourage engagement.

Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. Growth in the number of players, their diversity, and the hours they spend playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a major role in aiding game developers to build tons of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We will have digital identities through which we produce, consume, and interact with content. In our physical lives, people have many facets to their personality and represent themselves differently in different circumstances: the boardroom vs. the bar, in groups vs. alone, and so on. Online, the old-school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least three digital versions of themselves, both photorealistic and fantastical, with which to participate online.

Five examples of digital identities. Image Credit: ©LDV CAPITAL INSIGHTS 2021

Digital identities (or avatars) require visual tech. Some will enable public anonymity of the individual, some will be pseudonyms and others will be directly tied to physical identity. A growing number of them will be powered by AI.

These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.

Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.

Machine learning will curate, authenticate, and moderate content

For creators to continuously produce the volume of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content: long-form into short-form, videos into blog posts and social posts, and vice versa. These systems will select content and format based on the performance of past publications, using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.

In order to filter through this massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through it and present content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replacing text-heavy newsletters, media, and emails.

Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to its creator, both for rights management and for managing deepfakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.

It is also deeply important to detect disturbing and dangerous content, which is increasingly hard to do given the vast quantities published. AI and computer vision algorithms are necessary to automate the detection of hate speech, graphic pornography, and violent attacks, because doing so manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition as well as voice and text recognition will be required.

Visual content tools are the greatest opportunity in the creator economy

Over the next five years, individual creators who leverage visual tech tools will rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the creator economy are the visual tech platforms and tools that will enable those creators to focus on the content and not on the technical creation.

Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has more than a ten-year track record of leading strategy, ops, and investments in companies across four continents and rarely says no to soft-serve ice cream.

Repost: Original Source and Author Link

Categories
AI

Symbl.ai, provider of conversational intelligence APIs and tools, gets $17M

Symbl.ai, the Seattle-based company that provides APIs for developers to build apps capable of understanding natural human conversations, has raised $17 million in series A funding.

Symbl.ai says its APIs unlock machine learning algorithms from a variety of industry partners and sources. The company helps developers ingest conversation data from voice, email, chat, and social channels, and it provides understanding and context around the language without the need for upfront training data, wake words, or custom classifiers.

VentureBeat connected with cofounder and CEO Surbhi Rathore for a broader perspective on what this funding means for Symbl.ai and the emerging industry that research firm Gartner has called communication platform-as-a-service (CPaaS).

Conversational intelligence APIs

While a ton of vendors have jumped in to serve developers with APIs for conversational AI, Symbl.ai says it seeks to differentiate itself by providing an end-to-end conversation analytics platform.

Rathore told VentureBeat, “There are other API-first products that provide NLP services, custom classifiers, speech recognition — all of which are parts of the wider conversation intelligence capabilities. Symbl.ai stitches these pieces and more with connected context without dependency on huge data sets and machine learning knowledge.”

Many conversational AI vendors focus on specific industries. In the sales industry, for example, companies like Gong.io and Chorus.ai are gaining momentum. But more end-user customers are seeking to build conversational AI experiences across various functional areas, including webinars, customer experience, team collaboration, and more.

Symbl.ai says it enables businesses to compete with these brands and build differentiated, AI-driven experiences into their products cost-effectively and with a shorter time from lab to market. Symbl.ai’s offerings include Transcription Plus, Conversation Analytics, Conversation Topics, Contextual Insights, Customer Tracker, and Summarization.

Turning conversation into data

Recent projections from Mordor Intelligence show the CPaaS industry growing from $4.54 billion in 2020 to $26.03 billion by 2026, a compound annual growth rate (CAGR) of 34.3%. And 90% of global enterprises, up from 20% in 2020, are expected to leverage API-enabled CPaaS offerings as a strategic IT skill set to bolster their digital competitiveness by 2023. These figures offer critical insight into today’s unanticipated adoption rate of digital communications. However, even with this recent growth, Symbl.ai claims current solutions do not enable businesses to unlock the potential of communication content and analyze conversation data at scale.

“Businesses are looking to differentiate beyond ‘just’ enabling communication so as to meet their customer expectations,” the company’s press release stated. The company says it will continue to ease developers’ access to communication data and help product managers iterate early on new user experiences for their products, eliminating the need to build ML models from the ground up.

“We are bullish on empowering builders with the right toolkit to unlock the value in conversation data across all digital channels. Conversations are by far the most unused data source, and there is so much potential in this data. Tools need to ease the pain point of capturing, managing, understanding, and acting on this data — and that’s exactly what we are focused on. With this financing, we will further speed up our product roadmap to scale awareness, adoption, and growth of conversation intelligence driven experiences for developers,” Rathore said.

Speaking on what this means for the CPaaS industry, she added, “CPaaS platforms must enable developers to get access to the raw streams easily — so that experiences built go beyond just enabling voice and video. Symbl.ai is excited to partner with leading CPaaS platforms to collectively enable their adopters to extend communication experiences and use machine learning to understand the content of that communication, take actions on it, and learn from the best via plug-and-play APIs.”

Building the technology

Symbl.ai says a major trend in the industry is transcription, as it has become a mainstream use case for live captioning. “This is critical for products to build accessible and inclusive experiences — voice analytics that were primarily restricted to the contact center industry are finding ways into other verticals like recruitment, edtech, marketing, and more. Products and businesses are starting to leverage insights from text, chat, email, calls, and video from customer interactions to personalize user experience,” Rathore said.

“The trend of digital conversations, which has increased over the past year, is here to stay. Surbhi, [CTO] Toshish [Jawale], and their team have built a special engine to deliver this data in a developer-friendly way, and we are excited to partner with them in their next leg of growth,” said Ray Lane, managing partner at Great Point Ventures.

According to Rathore, “Traditional NLP and NLU techniques fall short on conversations.” Symbl.ai’s technology combines several deep learning and machine learning models to convert unstructured conversations from raw speech or text into a structured format that can be programmatically accessed and leveraged to build intelligent experiences into applications in a plug-and-play manner.

“Our system contains more than 50 different models built in-house working together to solve three layers of problems. The first layer models the foundational characteristics of conversations, language, and speech, leveraging various existing languages and speech modeling techniques. The second layer is for contextual correlation modeling between different concepts and their relationships. The third layer performs reasoning tasks to draw conclusions or eliminate the improbabilities. These layers are specifically required for understanding human conversations, enabling our system to model several dimensions of human conversation that help us to draw inferences and conclusions for actionable insights, irrespective of the domain of conversation,” Rathore said.
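
As a purely illustrative skeleton of that three-layer decomposition (every class, rule, and name below is hypothetical and far simpler than the 50-plus models Rathore describes; this is not Symbl.ai’s code), a toy pipeline might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical three-layer conversation pipeline, for illustration only.

@dataclass
class Utterance:
    speaker: str
    text: str
    features: dict = field(default_factory=dict)

def foundational_layer(utterances):
    """Layer 1: model basic language/speech characteristics per utterance."""
    for u in utterances:
        u.features["is_question"] = u.text.rstrip().endswith("?")
        u.features["mentions_time"] = any(
            w in u.text.lower() for w in ("today", "tomorrow", "noon", "monday")
        )
    return utterances

def contextual_layer(utterances):
    """Layer 2: correlate concepts across utterances (who asked, who answered)."""
    pairs = []
    for prev, cur in zip(utterances, utterances[1:]):
        if prev.features["is_question"] and prev.speaker != cur.speaker:
            pairs.append((prev, cur))
    return pairs

def reasoning_layer(pairs):
    """Layer 3: draw actionable conclusions, e.g., flag likely follow-up items."""
    return [
        f"Follow-up: {q.speaker} asked '{q.text}' -> {a.speaker} replied '{a.text}'"
        for q, a in pairs
        if q.features["mentions_time"] or a.features["mentions_time"]
    ]

convo = [
    Utterance("Ana", "Can you send the report tomorrow?"),
    Utterance("Ben", "Sure, I'll send it by noon."),
]
print(reasoning_layer(contextual_layer(foundational_layer(convo))))
```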

Funding expectations

This investment round was led by Great Point Ventures, with additional participation from Gutbrain Ventures, PBJ Capital, Crosscut Ventures, and Flying Fish Ventures.

Rathore said this additional capital is expected to accelerate the development of its end-to-end conversational intelligence platform, foster team expansion at the company, and meet the growing demand for the technology.

The company says its customers include Rev.ai, Airmeet, Intermedia, Remo, SpectrumVoip, Intuit, Bandwidth, and Hubilo. “Symbl.ai enables developers to not spend months but just days to integrate, saving time and money with the most accurate and scalable conversation intelligence stack,” the press release stated.

Against the backdrop of this funding, Lane of Great Point Ventures will join Symbl.ai’s board of directors. The round will also go toward recruiting top talent.

Repost: Original Source and Author Link

Categories
AI

Tools to scale robotic process automation enhanced by Microsoft

Microsoft has announced significant advances to its Power Automate platform to help scale robotic process automation (RPA) infrastructure. Key advances include new capabilities for understanding business processes, collaborative bot development, and scaling RPA software bots with virtual desktops. Microsoft is a relative latecomer to the RPA playing field, but is growing this capability quickly — thanks to its existing strengths in office productivity apps, Windows integration, and Azure cloud infrastructure.

The field of RPA started as a way to program sophisticated macros for automating repetitive tasks, like copying and pasting data between two business apps. Gartner has suggested that the future of this field, known as hyperautomation, includes finding ways to identify automation opportunities, program the automations, and then scale their deployment more efficiently. Microsoft’s newest updates show significant progress on all three of these fronts.

Microsoft’s Power Automate general manager Stephen Siciliano told VentureBeat, “Understanding up-front which processes have the most wasted time and the highest potential for automation will be extremely valuable for them.” His team aims to provide a single end-to-end product that improves the complete hyperautomation lifecycle and is deeply integrated into Microsoft 365 and built into Windows OS.

New process mining improvements

Many enterprises have hundreds or even thousands of processes that could be automated. The initial step in identifying automation opportunities at scale lies in automating the discovery of which procedures enterprises commonly repeat. One approach, called task mining, watches over a user’s shoulder, using a sophisticated macro recorder to see how someone clicks and types their way through a process. Earlier this year, Microsoft added the ability to bootstrap an automated flow from a task mining diagram, making it easier to go from seeing a process to optimizing it.

The second approach, known as process mining, analyzes enterprise application event logs to reverse engineer a process diagram. This is important especially when a process spans many users and enterprise applications. Microsoft’s new process mining capability takes advantage of existing Power Query connectors for Power BI and Azure Synapse. So although the specific process mining aspect is new, Power Query already supports data ingestion from hundreds of enterprise applications. In addition, Microsoft acquired Clear Software last week, which will improve connectivity with enterprise applications like SAP and Oracle.

Microsoft’s process mining and task mining capabilities approach the visualization and understanding of a process from different angles, and the two are slowly becoming more connected. Process mining uses event data from systems of record to understand and analyze a company’s processes. Task mining fills in the gaps that event data cannot see (what the human does in the middle) using recorders and RPA techniques.

“In the future, we anticipate that the lines between those two will blur more, and the focus will be on the full end-to-end process, no matter how many tasks or systems it touches,” Siciliano explained.
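
For a concrete sense of the underlying technique, a classic first step in process mining is reconstructing a directly-follows graph from an event log: group events by case, order them by timestamp, and count how often one activity directly follows another. The sketch below uses a made-up log and plain Python; it illustrates the general approach, not Power Automate’s implementation.

```python
from collections import Counter

# Minimal process mining sketch: build a directly-follows graph from an
# event log of (case id, timestamp, activity). The log below is invented.
event_log = [
    ("case-1", "2021-11-01T09:00", "create invoice"),
    ("case-1", "2021-11-01T09:05", "approve invoice"),
    ("case-1", "2021-11-01T09:30", "post payment"),
    ("case-2", "2021-11-01T10:00", "create invoice"),
    ("case-2", "2021-11-01T10:02", "reject invoice"),
    ("case-2", "2021-11-01T10:20", "create invoice"),
    ("case-2", "2021-11-01T10:25", "approve invoice"),
]

# Group activities per case in time order, then count A -> B transitions.
cases = {}
for case_id, ts, activity in sorted(event_log, key=lambda e: (e[0], e[1])):
    cases.setdefault(case_id, []).append(activity)

edges = Counter()
for trace in cases.values():
    for a, b in zip(trace, trace[1:]):
        edges[(a, b)] += 1

# The counted edges are the raw material for a process diagram.
for (a, b), n in edges.most_common():
    print(f"{a} -> {b}: {n}")
```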

Collaborative bot development

A second significant advancement improves collaborative bot development. The process and task mining capabilities help generate a template for how a bot is supposed to click and type its way through a particular workflow. But humans must then look over these and identify how to design a more efficient process or respond to common problems.

Microsoft has added collaborative development workflows that allow different experts, including bot developers, business process analysts, subject-matter experts, front-line users, and compliance teams, to collaboratively make comments, recommend changes, and revert or accept the recommended changes. This takes advantage of the same commenting infrastructure used in Word.

One concern is that RPA bots could copy data to applications lacking appropriate protections, or to physical locations where storage would violate regulations like GDPR. New data loss prevention capabilities extend Microsoft’s existing capabilities for labeling and tracking sensitive data to RPA automations.

Scaling deployment

Historically, there have been two sets of machines that enterprises have to manage for RPA: the servers that orchestrate the work that needs to happen and the machines the bots run on. Power Automate automatically manages the machines for RPA orchestration in the cloud, but enterprises have still had to manage the machines that run the bots. Microsoft is previewing the Azure Desktop Starter Kit, which can automate this second aspect as well.

Siciliano said, “This will make it possible for IT to focus less on the raw tasks of setting up and scaling machines, and more on enabling more users in their organization to build out bots.” This could also simplify governance since the IT team can set the exact policies on its use and who uses it.

Better integration between RPA and Azure desktops promises to simplify the processes of setting up and scaling the appropriate machine configurations for RPA deployments.

“We’ve heard from customers today it can take days to weeks to bootstrap and scale infrastructure,” Siciliano added. “This means not only are citizen developers blocked (and thus not being productive) during that time, but also, there’s a lag time of machines sitting idle. All of that goes away once you can scale automatically.”

Repost: Original Source and Author Link

Categories
AI

Multiverse Computing utilizes quantum tools for finance apps

Despite great efforts to unseat it, Microsoft Excel remains the go-to analytics interface in most industries — even in the relatively tech-advanced area of finance.

Could this familiar spreadsheet be the portal to futuristic quantum computing in finance? The answer is “yes,” according to the principals at Multiverse Computing.

This San Sebastián, Spain-based quantum software startup is dedicated to forging forward with finance applications of the quantum kind, and its leadership sees the Excel spreadsheet as a logical means to begin to make this happen.

“In finance, everybody uses Excel; even Bloomberg has connections for Excel tools,” said Enrique Lizaso Olmos, CEO of Multiverse Computing, which recently raised $11.5 million in a funding round led by JME Ventures.

Excel is a key entry point for emerging quantum methods, Lizaso Olmos said, as he described how users can drag and drop data sets from Excel columns and rows into Multiverse’s Singularity SDK, which then launches quantum computing jobs on available hardware.

From their desks, for example, Excel-oriented quants can analyze portfolio positions of growing complexity. The Singularity SDK can assign their calculations to the best quantum structure, whether it’s based on ion traps, superconducting circuits, tensor networks, or something else. Jobs can run on dedicated classical high-performance computers as well.
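
To illustrate the shape of such a job, portfolio selection is commonly posed to quantum and quantum-inspired backends as a QUBO: a quadratic objective over binary buy-or-skip decisions. The sketch below is a toy version with invented numbers, and the brute-force loop merely stands in for whatever quantum or classical solver an SDK would dispatch to; it is not Multiverse’s Singularity code.

```python
import itertools
import numpy as np

# Toy QUBO-style portfolio selection: choose binary holdings x to maximize
# expected return minus a risk penalty. All numbers are invented.
returns = np.array([0.08, 0.12, 0.05, 0.09])   # expected returns per asset
cov = np.array([[0.10, 0.02, 0.01, 0.03],      # covariance matrix (risk)
                [0.02, 0.12, 0.02, 0.04],
                [0.01, 0.02, 0.05, 0.01],
                [0.03, 0.04, 0.01, 0.11]])
risk_aversion = 0.5

best_x, best_val = None, -np.inf
for bits in itertools.product([0, 1], repeat=len(returns)):
    x = np.array(bits)
    value = returns @ x - risk_aversion * (x @ cov @ x)  # QUBO objective
    if value > best_val:
        best_x, best_val = x, value

print("select assets:", best_x, "objective:", round(best_val, 4))
```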

Quantum computing for finance

Multiverse’s recently closed seed funding round, led by JME, also included Quantonation, EASO Ventures, CLAVE Capital, and others. Multiverse principals have backgrounds in quantum physics, computational physics, mechatronics engineering, and related fields. On the business side, Lizaso Olmos can point to more than 20 years in banking and finance.

The push to let clients start work on quantum applications immediately is a differentiator for Multiverse, Lizaso Olmos claims. The focus is on working with available quantum devices that can solve today’s problems in the financial sector.

Observers see quantum computing as developing slowly overall, but the finance sector shows specific early promise, just as it has with a host of emerging technologies in the past. Finance apps drive investments like JME’s in Multiverse.

In a recent report, “What Happens When ‘If’ Turns to ‘When’ in Quantum Computing,” Boston Consulting Group (BCG) estimated equity investments in quantum computing nearly tripled in 2020, with further uptick seen for 2021. BCG states “a rapid rise in practical financial-services applications is well within reason.”

It’s not surprising, then, that Multiverse worked with BBVA (Banco Bilbao Vizcaya Argentaria) to showcase quantum computing in finance and Singularity’s potential to optimize investment portfolio management, as well as with Crédit Agricole CIB to implement algorithms for risk management.

“We have been working on real problems, using real quantum computers, not just theoretical things,” Lizaso Olmos said.

Why quantum-inspired work matters

Multiverse pursues both quantum and quantum-inspired solutions for open problems in finance, according to Román Orús, cofounder and chief scientific officer at the company. Such efforts create algorithms that mimic techniques used in quantum physics and can run on classical computers.

“It’s important to support quantum-inspired algorithm development because it can be deployed right away, and it’s educating clients about the formalism that they need for moving to the purely quantum,” Orús said.

The quantum-inspired work is finding some footing in quantum machine learning applications, he explained. There, financial applications that could benefit include credit scoring in lending, credit card fraud, and instant transfer fraud detection.

“These methods come from physics, and they can be applied to speed up and improve machine learning algorithms, and also optimization techniques,” Orús said. “The first ones to plug into finance are super successful.”

Whether the tools are quantum or quantum-inspired, both Orús and Lizaso Olmos emphasize that the applications users in finance pursue must be selected wisely. In other words, this is not your parents’ general-purpose computing.

Repost: Original Source and Author Link

Categories
Computing

Best Tools to Stress Test Your CPU

If you’ve recently upgraded your processor, or are overclocking it, it can be a good idea to know the best tools to stress test your CPU to check how stable it is. There are a number of CPU stress tests out there, but we have a few favorites you should check out.

The goal of stress testing is to push the computer to failure. You want to see how long it takes before it becomes unstable. It’s usually a good idea to run tests for at least an hour or two, though some can take longer.

Before starting these tests, we’d highly recommend tools like HWMonitor, HWiNFO64, or Core Temp for keeping track of CPU temperature, clock speeds, and power. These can be a valuable resource for making sure your cooling solution is doing its job as these stress tests push your CPU quite literally to the limit. It’s so important that we have an entire guide on how to check your CPU’s temperature.
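
For a sense of what these tools do at their core, the sketch below is a crude do-it-yourself stress loop in Python that pins every core with floating-point work. It is only an illustration; the dedicated tools listed next use far more punishing, carefully tuned workloads.

```python
import multiprocessing as mp
import time

# Crude CPU burner: one process per core running a tight floating-point loop.
# Real stress tools (Prime95, OCCT, etc.) use much harsher workloads; this
# only illustrates the basic idea. Watch temperatures while it runs.
def burn(seconds: float) -> None:
    deadline = time.monotonic() + seconds
    x = 0.0
    while time.monotonic() < deadline:
        for _ in range(10_000):
            x = (x + 1.1) ** 0.5  # value converges, but the math keeps executing

if __name__ == "__main__":
    duration = 30.0
    workers = [mp.Process(target=burn, args=(duration,)) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"Stressed {mp.cpu_count()} cores for {duration:.0f} seconds")
```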

Here’s our list of seven favorite CPU stress tests.

Prime95

Prime95 is one of the most well-known free CPU stress tests out there. It was developed as part of the Great Internet Mersenne Prime Search (GIMPS), which uses processors to find large prime numbers. Though Prime95 was not originally made to stress test CPUs, the strain it puts on the processor’s floating-point and integer capabilities makes it an excellent way to see what your CPU is capable of.

You can run different “torture tests” depending on what you’re trying to stress. The small fast Fourier transform (FFT) tests can be a good way to see if there are any issues. The large FFTs really punish your CPU, while the blended tests push RAM usage. A word of caution with Prime95: It has a somewhat negative reputation for putting unnecessary stress on the CPU.

Download Now at Prime95

AIDA64

AIDA64 CPU stress test.

Unlike the other tools on our list, AIDA64 is not free to use. The cheapest version is AIDA64 Extreme, which will run you about $50 for three PCs while the Business and Engineer versions go for $200. This tool is geared more toward engineers, IT professionals, and enthusiasts (as indicated by the various download options). Instead of purely stressing the CPU like Prime95, it simulates a more realistic workload that a CPU is likely to have. This is excellent for gauging workstations or servers that are meant for sustained, high-performance workloads.

AIDA64 is an all-in-one diagnostic tool that can be used to look at details of your particular system. In the System Stability Test, you can choose which component (CPU, memory, local disks, GPU, etc.) you want to stress. While the test is running, there’s a Sensor tab that lets you view the temperature of each CPU core and fan speeds. This can be invaluable to see if your system is being properly cooled and stressed.

Download Now at AIDA64

Cinebench R23

Cinebench stress test.

Cinebench is another well-known free benchmark utility that you may have seen in various reviews. It was created by Maxon, the developer behind 3D modeling application Cinema 4D. Cinebench simulates common tasks within Cinema 4D to measure system performance. Specifically, the primary test renders a photorealistic 3D scene and uses algorithms to stress all CPU cores. The render is about 2,000 objects comprised of over 300,000 polygons.

The most recent version, R23, is able to run a 10-minute thermal throttling test instead of doing just one single run. This can be useful in seeing how much you can push a particular system before it gets too hot. The single run is still available in the advanced options. The newest version also adds support for Apple’s M1 silicon.

Download now at Maxon

CPU-Z

CPU-Z stress and monitoring tool.

This is a great all-around stress test tool that’s easy to use and free. Like AIDA64, CPU-Z can also gather detailed information on your system, including the CPU’s name, cache levels, and even the process node it was manufactured on. You can also get real-time measurements of each core’s frequency. The primary drawback is that it doesn’t stress GPUs, though it can stress RAM. Its focus is CPU stress testing, and it’s a very useful tool in that respect.

Download Now at CPUID

HeavyLoad

HeavyLoad CPU stress test.

HeavyLoad is a stress tool developed by JAM Software that features a handy graphical user interface (GUI) to visualize the tests being run. The software allows you to test the entire processor or just a specific number of cores. One useful feature of HeavyLoad is that you can install it on a USB drive and use it on multiple computers, avoiding the need to install it on every single machine. That makes it useful for IT professionals who need to ensure numerous servers can handle heavy processor loads. HeavyLoad can also stress other components such as the GPU, RAM, or storage.

Download Now at Jam Software

IntelBurn Test

IntelBurn CPU stress test.

Despite its name, the IntelBurn Test isn’t made by Intel. However, it uses Intel’s Linpack benchmark to measure the amount of time it takes to solve a system of linear equations and then converts that into a performance rate. This is generally the same engine Intel uses to stress its own CPUs before shipping them out. The interface is pretty simple and easy to use. You can set the level at which the CPU is stressed, how many times to run the test, and how many threads to run.
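
As a rough, miniature illustration of the Linpack idea (not Intel’s benchmark itself), you can time a dense linear solve with NumPy and convert the elapsed time into a rate using the conventional flop count of roughly (2/3)n^3 for an LU-based solve of an n x n system:

```python
import time
import numpy as np

# Linpack-style measurement in miniature: time a dense solve and turn it
# into a rate using the conventional ~(2/3) * n^3 flop count for LU.
n = 4000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

gflops = (2 / 3) * n**3 / elapsed / 1e9
residual = np.linalg.norm(A @ x - b)  # sanity check on the solution
print(f"{elapsed:.2f}s, ~{gflops:.1f} GFLOPS, residual {residual:.2e}")
```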

While this could actually be more accurate than Prime95, it also has the same reputation for pushing a CPU way past its normal limits, perhaps even more than Prime95. Keep tabs on the heat output while running tests. Also, while you can technically use it with AMD CPUs, it may be better to use other benchmark tools instead.

Download Now

OCCT

OCCT stress test software.

You can’t talk about CPU stress testing without including the OverClock Checking Tool (OCCT). It may be the most popular stability-checking tool out there. This is an all-in-one tool that includes four tests for gauging performance: two for the CPU, one for the GPU, and one for the power supply. It also embeds the HWiNFO monitoring engine that we mentioned earlier, and it includes a temperature fail-safe that immediately stops the test should a certain component reach an unsafe temperature.

There is a free version of OCCT, but you’re limited to a one-hour test. The Personal edition removes that limitation and also includes the ability to save a full graphical report of the test. The Pro edition adds the ability to run on domain-joined computers and generate CSV files to build customized graphs. Finally, the Enterprise edition allows you to build your own test suite using a drag-and-drop system for easier configuration. Even better, those who prefer a command line will be able to use that in the Enterprise version as well.

Download Now at OCBase

Repost: Original Source and Author Link

Categories
AI

Neuron7 employs open source AI tools for field service across devices

Neuron7.ai emerged from stealth this week to reveal its platform that combines various open source AI technologies to automate field service across many types of devices. The product’s promise earned the company $4.2 million in seed funding from Nexus Venture Partners and Battery Ventures.

Naturally, there’s already a fair number of organizations attempting to apply AI to a wide range of field service issues, from optimizing traffic routes to encouraging customers to engage bots rather than humans to resolve an issue.

It’s not likely AI platforms are going to replace the need for field technicians anytime soon, given all the issues that might be encountered once a device is deployed. However, AI will clearly play a significant role in enabling a limited number of field service technicians to support a much wider range of devices deployed anywhere in the world.

Neuron7.ai is building a platform that consumes the recommendations created by open source AI engines and models. The aim is to make AI technologies accessible to organizations that typically don’t have the resources required to build AI models that specifically address the unique needs of a field service team, said Neuron7.ai CEO Niken Patel.

Open source AI tools

The Neuron7 platform ingests structured and unstructured data from a wide range of sources, including product and service manuals, knowledge bases, technician notes, customer relationship management (CRM) systems, and messaging systems such as Slack. It then applies various open source AI engines based on frameworks such as TensorFlow to determine how to best remediate a performance issue or an outright device failure, said Patel.
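
As an illustration of the kind of model such a platform might apply (the notes, labels, and remediation categories below are invented, and this sketch is not Neuron7’s pipeline), a small TensorFlow text classifier can map free-text technician notes to likely fixes:

```python
import tensorflow as tf

# Hypothetical example: classify technician notes into remediation
# categories. Data and categories are made up purely for illustration.
notes = tf.constant([
    "device overheats and fan rattles under load",
    "fan noise loud, vents clogged with dust",
    "screen flickers after firmware update",
    "display artifacts since last firmware push",
])
labels = tf.constant([0, 0, 1, 1])  # 0 = service the fan, 1 = roll back firmware

vectorize = tf.keras.layers.TextVectorization(output_mode="multi_hot")
vectorize.adapt(notes)

model = tf.keras.Sequential([
    vectorize,                                    # raw strings -> bag-of-words
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(notes, labels, epochs=100, verbose=0)

probs = model(tf.constant(["loud fan and very hot chassis"]))[0]
print("suggested fix:", ["service the fan", "roll back firmware"][int(tf.argmax(probs))])
```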

Designed as a software-as-a-service (SaaS) application, Neuron7’s goal is to make AI accessible to organizations that need to optimize field service across an increasing array of devices that require remote support by technicians, Patel said. Technicians can’t be expected to be experts on every potential issue or parameter for all those different devices — “No one can be an expert on every device,” he said.

In addition to aggregating all the data that technicians require to resolve an issue as soon as possible, Patel said, Neuron7 captures the unique knowledge and expertise of the technicians that service the devices to ultimately make the AI platform more accurate. That capability mitigates turnover issues that occur when experienced technicians leave an organization and new ones are onboarded.

Investing in service

Pricing for the Neuron7 platform is based on a subscription model, with tiers that depend on the number of data sets that need to be trained. However, Patel said the company is hoping to shift to a pricing model that is based more on the outcomes enabled by the platform.

Angel investors, early backers, and advisors of the company include Akash Palkhiwala, CFO at Qualcomm; Ashish Agarwal, CEO of Neudesic Global Services; Kintan Brahmbhatt, general manager for Amazon Podcasts; and Anand Chandrasekaran, executive vice president for Five9.

In the age of COVID-19, organizations are looking for ways to automate service management as much as possible to reduce the number of technicians they need to dispatch. Achieving that goal requires organizations to provide customer support technicians with as much relevant data as possible so they can resolve any issues remotely. The challenge is that the devices being deployed in B2C and B2B environments are becoming more complex, Patel said. As more complex devices are connected within an internet of things (IoT) application environment, the need to augment technicians with an AI platform becomes more pressing, he added.

Repost: Original Source and Author Link