Categories
Security

1Password will help you remember which ‘sign in with’ service you used

1Password is trying to solve the situation where you go to log on to a website and wonder something like “did I sign in with Google, Apple, or an actual email and password combo?” or “which of my five Google accounts did I use for this?” The company has announced that its password manager will let you save which single sign-on (SSO) service you used on a site, so it can automatically log you in with that same account when you return. The feature comes as big companies gear up for a campaign against passwords as a concept.

According to a blog post, the feature is available now in the beta version of 1Password for the browser and supports logging in with Facebook, Google, and Apple. 1Password says it’ll add more providers in the future.

Saving the info of what sign-in service you used with 1Password.
Gif: 1Password

If I go to a website and there isn’t a login for it saved in my 1Password vault, I can be reasonably sure I used one of the SSO options — but not 100 percent sure. I’ve definitely wasted my fair share of time trying to figure out whether I just hadn’t added something to my vault or if I had signed into it with either Apple or Google. (And sometimes the problem is that I’ve done both, but only one of those accounts has the right user data associated with it.) In theory, this feature could go far to solve that issue, assuming I remember to actually save the logins.

1Password has been rolling out and announcing a few useful features recently and is working on launching a redesigned 1Password 8 experience across several platforms. The company also announced that it’s making it easier for its users to securely share passwords and documents, even if the person they’re sharing with isn’t a 1Password user.

Big companies are trying to get rid of the need for apps like 1Password. Apple has announced that the next version of iOS and macOS will include an authentication system that uses the passkeys standard developed by FIDO. Microsoft and Google have also said they have plans to integrate the standard as well.

However, support for those types of systems will rely on individual websites and services, which can be very slow to adopt new login tech (I regularly visit several websites that don’t even support the SSO services that 1Password is trying to make easier to use). For a while, many of us may have to use our browser’s built-in passwordless tools for some sites and a password manager for the rest. Given that 1Password has already said it’s planning to support passkeys as well (it recently joined the FIDO Alliance that built them), it sounds like the company wants its password manager to be omnivorous, storing all your authentication no matter what form it takes.

Repost: Original Source and Author Link

Categories
Computing

SpaceX fears Starlink service could be trashed by 5G plan

SpaceX has said its U.S.-based Starlink customers will see their broadband service badly disrupted if Dish Network is allowed to use the 12GHz band for its 5G cellular network.

The decision is in the hands of the Federal Communications Commission (FCC) as Dish Network and others such as New York-based RS Access lobby the agency to let them use the 12GHz band. But SpaceX isn’t happy.

“If Dish’s lobbying efforts succeed, our study shows that Starlink customers will experience harmful interference more than 77% of the time and total outage of service 74% of the time, rendering Starlink unusable for most Americans,” the company said in a message posted on its website on Tuesday, June 21.

The long-running dispute involves a number of companies that are trying to gain access to the 12GHz band that SpaceX already uses for its internet-from-space Starlink service.

Dish has previously published data suggesting that ground-based 5G networks could comfortably share the frequency with low-Earth orbit satellite networks operated by the likes of SpaceX for its Starlink service.

But this week, SpaceX said that technical studies “dating back as far as 2016” suggest that opening up the band to ground-based 5G networks could adversely impact its Starlink service, and it even accused Dish of attempting to “mislead the FCC with faulty analysis in hopes of obscuring the truth.”

The company led by billionaire entrepreneur Elon Musk also shared a 12-page technical analysis explaining how mobile services envisioned by Dish would “cause massive disruptions to users of next-generation satellite services,” such as Starlink.

It explained that a high-gain antenna, like the SpaceX user terminal, is “designed with sufficient sensitivity to receive very weak signals coming from a desired transmitter,” adding that “such antennas do not, however, ‘reject’ interference coming from other directions.” The result is that interference would “completely wipe out the desired signal.”
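The underlying arithmetic is easy to sketch. As a toy illustration (the transmit powers, distances, and geometry below are invented for the example, not SpaceX’s or Dish’s actual figures), free-space path loss grows with distance, so a terrestrial 5G transmitter a kilometer away can arrive tens of decibels stronger at a user terminal than a satellite hundreds of kilometers up, even at lower transmit power:

```python
from math import log10, pi

def fspl_db(distance_m, freq_hz):
    # Free-space path loss in dB: 20*log10(4*pi*d*f/c)
    c = 3.0e8  # speed of light, m/s
    return 20 * log10(4 * pi * distance_m * freq_hz / c)

# Assumed, illustrative numbers only: a satellite at 550 km altitude
# radiating 40 dBW versus a tower at 1 km radiating 30 dBW, both at 12 GHz.
sat_rx = 40 - fspl_db(550e3, 12e9)   # received satellite signal, dBW
tower_rx = 30 - fspl_db(1e3, 12e9)   # received terrestrial signal, dBW
margin = tower_rx - sat_rx           # tower arrives ~45 dB stronger here
```

A real coexistence analysis would also account for antenna gain patterns, elevation angles, and receiver filtering; the point of the sketch is only that distance alone gives a nearby ground transmitter an enormous power advantage.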

In widely reported comments, a Dish spokesperson said its “expert engineers are evaluating SpaceX’s claims.”

Dish announced last week that it has launched commercial 5G services in more than 100 U.S. cities — covering around 20% of the nation’s population — by using frequencies in other spectrum bands. But whether it can access the 12GHz band as part of its 5G rollout remains to be seen.

SpaceX has launched more than 2,500 Starlink satellites into orbit for its broadband service, which currently serves more than 400,000 customers in 34 countries.

Categories
AI

I regret to inform you that Digital Human as a Service (DHaaS) is now an acronym

Science fiction movies have prepared us for the distinct possibility that artificial intelligence will walk among us someday. How soon? No one can say — but that isn’t stopping a raft of companies from trying to sell “digital humans” before that whole intelligence thing gets figured out. Ah, but what if you don’t want to buy a digital human because that sounds icky? Rent one, of course! That’s why we now have the regrettable acronym Digital Human as a Service (DHaaS).

The actual news here is that Japanese telecom giant KDDI has partnered with a firm named Mawari (which means something along the lines of “surroundings” in Japanese) to create a virtual assistant you can “see” through the window of your smartphone in augmented reality, one who might automatically pop up to give you directions and interact if you point your phone at a real-world location. (You’ll also see walking directions and indoor maps in the video, but those simply appear to be packaged together as part of the proof of concept.)

If you peek at the video atop this post, you can see it’s not that much more advanced than, say, Pokémon Go. But behind the scenes, the partners claim that KDDI’s 5G network, Amazon’s low-latency AWS Wavelength edge computing nodes, and a proprietary codec from Mawari combine to let “digital humans” stream to your phone in real time instead of running natively on your phone’s chip.

That “substantially lower[s] the heavy processing requirements of real-time digital humans, reducing cost, data size and battery consumption while unlocking scalability,” according to the press release. (It’s true that AR apps like Pokémon Go tend to chow down on battery, but it’s not just graphics to blame; some of that is running GPS, camera and cellular simultaneously.)

Who’s going to jump on board to actually populate the metaverse with experiences designed for KDDI and Mawari’s “digital humans” and pay monthly, quarterly, or annually for the “service” part of the acronym? That’s always the question, but there’s no shortage of companies looking to lean into the buzzy metaverse these days. And if they can leverage their existing buzzwords like “5G,” “AI,” and “edge compute,” so much the better. It takes a lot of work to look like you’re paying attention to the future, and you never know if this is the moment someone actually manages to make fetch happen.

Want some more digital humans? We’ve got you covered:

Categories
AI

Simpro raises $350M as demand grows for field service automation software

Field service management, which refers to the management of jobs performed in the field (e.g., telecom equipment repair), faces continued challenges brought on by the pandemic. While the number of customer service inquiries has increased as enterprises have adopted remote work arrangements, worker availability has decreased. Forty-seven percent of field service companies say that they’re having trouble finding enough quality technicians and drivers to meet business goals, according to a Verizon survey. The shortfall in the workforce has increased the burden on existing staff, who’ve been asked to do more with fewer resources.

Against this backdrop, Simpro, a field service management software company based in Brisbane, Australia, today announced that it raised $350 million from K1 Investment Management with participation from existing investor Level Equity. The new funding brings Simpro’s total capital raised to nearly $400 million, which CEO Sean Diljore says will be put toward product development and customer support with a particular focus on global trade and construction industries.

Simpro also revealed today that it acquired ClockShark, a time-sheeting and scheduling platform, as well as AroFlo, a job management software provider. The leadership teams of Simpro, ClockShark, and AroFlo will operate independently, Diljore says, including continued work on existing services.

Above: Simpro’s maintenance-planning dashboard.

“This investment marks the next stage of Simpro’s exciting growth journey. Our mission is to build a world where field service businesses can thrive,” Diljore said in a statement. “We’re thrilled to welcome ClockShark and AroFlo to the Simpro family. Both companies are leaders in their spaces and have incredibly valuable product offerings that will benefit our combined customer bases and help our customers increase revenue. We look forward to growing together and building a range of solutions for the field service and construction industries.”

Managing field service workers

Field service workers feel increasingly overwhelmed by the number of tasks employers are asking them to complete. According to a study by The Service Council, 75% of field technicians report that work has become more complex and that more knowledge — specifically more technical knowledge — is needed to perform their jobs now versus when they started in field service. Moreover, 70% say that both customer and management demands have intensified during the health crisis.

Simpro, which was founded in 2002 by Curtis Thomson, Stephen Bradshaw, and Vaughan McKillop, claims to offer a solution in software that eases the burden on field workers and their managers. The company’s platform provides quoting, job costing, scheduling, and invoicing tools in addition to capabilities for reporting, billing, testing assets, and planning preventative maintenance.

Bradshaw, a former electrical contractor, teamed up with McKillop, an engineering student, to build the prototype for Simpro in the early 2000s. Working out of Bradshaw’s garage, they started with the development of job list functionality before adding new features, including a scheduling tool for allocating resources.

Today, Simpro supports over 5,500 businesses in the security, plumbing, electrical, HVAC, and solar and data networking industries. It has more than 200,000 users worldwide and more than 400 employees in offices across Australia, New Zealand, the U.K., and the U.S.

An expanding product

With Simpro, which integrates with existing software including accounting and HR analytics software, customers can use digital templates to build estimates and convert quotes into jobs. From a dashboard, they can schedule field service workers based on availability and job status, plus perform inventory tracking, connect materials to jobs, and send outstanding invoices.

Diljore expects the purchases of ClockShark and AroFlo to bolster Simpro’s suite in key, strategic ways. ClockShark, a Chico, California-based company founded by brothers Cliff Mitchell and Joe Mitchell in 2013, delivers an app that lets teams clock in and out while recording the timesheet data needed for payroll and job costing. Ringwood, Australia-based AroFlo, on the other hand, provides job management features including field service automation, work scheduling, geofencing, and GPS tracking.

AroFlo and ClockShark claim to have over 2,200 and 8,000 customers, respectively. AroFlo’s business is largely concentrated in Australia and New Zealand, where it says that over 25,000 workers use its platform for asset maintenance, compliance, and inventory across plumbing, electrical, HVAC, and facilities management.

Somewhat concerningly from a privacy standpoint, AroFlo offers what it calls a “driver management” feature that uses RFID technology as a way of logging which field service workers are driving which work vehicles. Beyond this, AroFlo allows companies to track the current and historical location of devices belonging to their field workers throughout the workday.

While no federal U.S. statutes restrict the use of GPS by employers or require them to disclose whether they’re using it, workers have mixed feelings. A survey by TSheets showed that privacy was the third-most important concern of field service workers who were aware that their company was tracking their GPS location.

In its documentation, AroFlo suggests, but doesn’t require, that employers “speak to [field] users about GPS tracking.”

“AroFlo GPS lets you monitor your field technicians across the entire day,” the company writes on its website. “You’ll always know where they are, what they’re working on, and when they finish.”

A spokesperson told VentureBeat via email: “Simpro will continue offering GPS services and also has its own vehicle GPS tracking add-on, SimTrac. Implementation of GPS fleet tracking can help reduce risks, remain compliant with licenses and vehicle upkeep, and reduce costs in the business. It also benefits the technicians by improving their safety, spending less time in traffic and improving time management. Overall, GPS tracking provides improved visibility of staff and understanding of their location, introduces opportunities to reduce costs associated with travel, schedule smarter and even improve driver safety (by limiting their need to race across to another side of town to complete a job).”

A growing field

The field service management market is rapidly expanding, expected to climb from $2.85 billion in value in 2019 to $7.10 billion in 2026. While as many as 25% of field service organizations are still using spreadsheets for job scheduling, an estimated 48% were using field management software as of 2020, Fieldpoint reports. Customer demand is one major adoption driver. According to data from ReachOut, 89% of customers want to see “modern, on-demand technology” applied to their technician scheduling, and nearly as many would be willing to pay top dollar for it.

“The pandemic made many business owners realize how crucial it is to have the right technology in place for remote work. Trades businesses couldn’t afford to abandon projects or lose out on service and maintenance calls because of delayed response times or drawn-out time to complete,” Diljore told VentureBeat via email. “For these businesses, cloud-based software became a necessity for survival when previously it was a ‘nice to have.’”

Simpro competes with Zinier, which last January raised $90 million to automate field service management via its AI platform. The company has another rival in Workiz, a field service management and communication startup, as well as augmented reality- and AI-powered work management platform TechSee.

According to Tracxn, of the over 3,400 companies developing “field force automation” solutions (which include customer service tracking, order management, routing optimization, and work activity monitoring), more than 700 attracted a cumulative $5.8 billion from investors from 2018 to 2020.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021
  • networking features, and more

Become a member

Categories
AI

Swish.ai raises $13M to automate IT service desk tasks

Swish, a Tel Aviv, Israel-based startup developing automation technologies for IT service management, today emerged from stealth with $13 million in a series A round led by Dell Technologies Capital with participation from Skywell, Samsung, StageOne, and AxessVentures. The funding will be put toward expanding the company’s headcount and for supporting go-to-market efforts, CEO Sebastien Adjiman says, as well as bolstering Swish’s product stack.

Enterprise help desk support is one of the most labor-intensive — and costly — IT functions. A 2020 BMC study found that the cost of manually handling a help ticket averages $22. Exacerbating the challenge, the acceleration of digital transformation during the pandemic has increased help desk ticket volume. In addition, the IT labor shortage is limiting the ability of enterprises to staff up to meet this growth. According to the latest data, U.S. IT job growth slowed in October because of too few candidates.

Swish uses AI to automate ticket orchestration in existing IT service management workflows. With Swish, tickets can be automatically routed to relevant, available agents based on skill set, load, and cost criteria, ideally improving the resolution time. The platform also provides management with analytics to help identify optimization opportunities in the organization, generated by a combination of natural language processing (NLP), business process mining, and machine learning algorithms.
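Swish hasn’t published its routing logic, but scoring agents on skill, load, and cost can be sketched in a few lines. Everything below — the agent fields, the weights, and the names — is hypothetical, not Swish’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set          # skill tags the agent can handle
    open_tickets: int    # current workload
    hourly_cost: float   # cost criterion

def route_ticket(required_skills, agents, w_load=1.0, w_cost=0.1):
    """Pick the eligible agent with the lowest weighted load/cost score."""
    eligible = [a for a in agents if required_skills <= a.skills]
    if not eligible:
        return None  # no agent has the required skills; escalate instead
    return min(eligible, key=lambda a: w_load * a.open_tickets + w_cost * a.hourly_cost)

agents = [
    Agent("ana", {"vpn", "email"}, open_tickets=3, hourly_cost=40.0),
    Agent("ben", {"vpn"}, open_tickets=1, hourly_cost=55.0),
    Agent("cho", {"hardware"}, open_tickets=0, hourly_cost=35.0),
]
best = route_ticket({"vpn"}, agents)  # ben: lighter load outweighs higher cost
```

In a real system the weights would be tuned (or learned) per organization, which is presumably where the machine learning comes in.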

“We founded Swish with the belief that [the] real value of [automation] isn’t just simple efficiencies but is instead the ability to turn the avalanche of data that’s being generated by today’s digital interactions into autonomous decisions which are smarter, faster, and more accurate,” Adjiman told VentureBeat via email. “We believe Swish is the perfect solution for any enterprise service and support leader who’s looking for a way to quickly utilize the benefits of [automation] to help them reinvent their current ticket process.”

A growing industry

Swish scores IT service reps on their expertise, knowledge, strengths, and weaknesses. Using an AI system, the platform automatically groups similar tickets based on the data contained in tickets, such as ticket descriptions and resolution notes.
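As a rough illustration of what grouping tickets by description text involves — Swish’s actual models are proprietary, and this sketch uses a simple bag-of-words cosine similarity rather than the NLP techniques the company describes:

```python
import re
from collections import Counter
from math import sqrt

def tokens(text):
    # Bag-of-words vector as a Counter of lowercase word counts
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_tickets(descriptions, threshold=0.4):
    """Greedily assign each ticket to the first group whose
    representative (first member) is similar enough, else start a group."""
    vecs = [tokens(d) for d in descriptions]
    groups = []  # each group is a list of ticket indices
    for i, v in enumerate(vecs):
        for g in groups:
            if cosine(v, vecs[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

groups = group_tickets([
    "vpn connection error",
    "vpn connection drops",
    "printer jam",
])
```

A production system would use resolution notes as well as descriptions, and far richer text representations, but the clustering idea is the same.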

Swish looks for inefficient patterns of behavior such as “ping pong,” “rework,” “pending abuse,” and poor workload allocation. It also flags service types leading to high or low satisfaction and costs among customers, informed by sentiment analysis from feedback forms.

“The Swish platform’s core AI engine consists of a unique combination of proprietary machine learning, NLP and business process mining algorithms, which are trained on all the historical tickets that are archived in the existing tools used by our clients,” Adjiman explained. “This historical goldmine of data is then used dynamically to train the models to capture insights about our client’s unique environment, even as it evolves. For example, our service language understanding goes beyond NLP to explore service-specific terms — improving the understanding of each underlying ticket issue and thus identifying the next best action more accurately.”

Efficiency gains

Against the backdrop of Swish’s relaunch, companies are looking broadly to increase their use of automation technology as a result of the pandemic. The BMC survey found that by automating help desk ticket resolution, 22% of tickets can be resolved at practically no cost — in part because of improved error handling and analysis tools like reporting. This is key, given that 95% of customers cite help desk support as important in their choice of and loyalty to a brand.

“The core use case of Swish’s platform is its autonomous ticket orchestration capability. Swish … suggests and provides resources to ensure [agents] have everything at their fingertips to resolve a ticket without the need to re-route or pause it,” Adjiman explained. “Since it’s agnostic by design, it can be deployed on any enterprise ticketing system such as ServiceNow or BMC. Once deployed … Swish can then be connected to additional workflow systems to accelerate any service and support area, such as customer service, HR, and facility management tickets.”

Of course, the employee monitoring aspects of Swish might be discomfiting to some companies. While 78% of employers admit to using monitoring software to track their employees’ performance, 59% of workers say that they feel stress or anxiety as a result of their employer monitoring them, while 43% say that it feels like a violation of trust.

But 35-employee Swish pitches its analytics as a means to provide targeted training. Low-performing reps can be afforded opportunities like tutorials, guidance, and coaching, Adjiman says, or shifted to an area of service for which they might be better suited.

“Service management is an obvious target for the emerging [automation] industry due to the rapidly growing ticket volumes and labor-intensive processes enterprises rely on today,” Dell Technologies Capital managing director Yair Snir said. “The Swish team has already proven the value of [automation] for some of the largest companies in the world.”

Swish — which has raised a total of $15 million in capital and has 15 customers, including Fortune 500 companies — competes with a number of startups in the IT service automation space including Moveworks, Capacity, Electric, and Spoke. Underlining the segment’s growth, Zendesk recently acquired Cleverly, a service automation startup that creates AI-powered tools to solve common customer problems, for an undisclosed amount.


Categories
AI

Domino Data Lab launches fully managed MLOps service with Nvidia

San Francisco, California-based Domino Data Lab, a provider of MLOps solutions, has announced a fully managed offering with Tata Consultancy Services and Nvidia to help enterprises unite their analytics and AI workloads with high-performance computing in a single environment.

Unveiled at the ongoing Nvidia GTC conference, the solution leverages Domino’s MLOps platform and runs high-performance computing and data science workloads on Nvidia DGX systems, all in the TCS enterprise cloud.

Converged solution for MLOps

The MLOps offering works as a single, converged end-to-end solution for training AI, ML, and deep learning models using Domino and Nvidia DGX systems in the same heterogeneous compute environment as HPC simulation workloads. This way, data science leaders can use the output data from CPU-accelerated containerized simulation workloads to train Nvidia GPU-accelerated ML models, or vice versa, without having to move data across two traditionally siloed environments.

The flexible procurement and deployment options also ensure that data science leaders can use the solution to track project status across teams, while allowing IT teams to track infrastructure utilization for capacity planning.

“The most difficult challenges in industries like life sciences and manufacturing can be solved by efficiently leveraging the proliferation of data from connected devices, and by converging simulation, analytics, and AI/ML workloads in a single environment,” Dr. Revati Kulkarni, technology head for HPC at TCS AI group, said.

“TCS’s HPC A3 solution uses Domino’s capabilities to seamlessly manage heterogeneous compute across complex datasets and helps customers accelerate their transformation journey,” she added.

Nick Elprin, the CEO and co-founder of Domino Data Lab, noted that the development will not only help the company’s customers accelerate breakthrough research but also increase the productivity of their data science teams.

Domino platform to integrate with Nvidia AI Enterprise

The engagement with Nvidia for the new solution comes as Domino’s MLOps platform inches closer to the integration with Nvidia AI Enterprise, an end-to-end, cloud-native suite of AI and data analytics software optimized for the Nvidia EGX platform, running on mainstream Nvidia-certified systems from OEM hardware providers and VMware vSphere. Domino said that validation is underway for the integration of the platform.

Just last month, the company also raised $100 million in a series F round of funding. It said the capital from the round would largely go toward product development and expansion of the MLOps platform to grow its customer base worldwide.

According to a study from Cognilytica, the MLOps market could grow from $350 million in 2019 to $4 billion by 2025.


Categories
AI

Microsoft updates Dynamics 365 Customer Service with first-party voice channel

At its Ignite conference today, Microsoft announced the addition of a first-party voice channel to Dynamics 365 Customer Service, its end-to-end cloud product offering for customer support. According to the company, the new capabilities enable organizations to provide more consistent and personalized service to customers across channels with data-driven, AI-infused solutions.

“Service leaders know that 80% of consumers are more likely to purchase from companies that provide more personalized experiences. But for many contact centers, ensuring a continuous, personalized experience across all channels is difficult to achieve. Multiple tools and disconnected data silos prevent agents from having a complete view of the customer journey. But no more. No matter how your customers connect with you, now you can deliver a consistent, intelligent, and personalized service experience,” Dynamics 365 customer service and field service VP Jeff Comstock said in a statement.

AI-powered features

Prior to today’s upgrade, Dynamics 365 Customer Service provided case routing and management for customer service agents and add-ons for insights and omnichannel engagement, as well as authoring tools for knowledge base articles. With the addition of the voice channel, Power Virtual Agent chatbots can now be used as an interactive voice response or for responding to SMS, chat and social messaging channels. Dynamics 365 Customer Service affords AI-based routing of incoming calls to the voice agents, consistent with other support channels. And Microsoft Teams is integrated, allowing agents to collaborate with each other and with subject-matter experts on particular customer topics.

“AI is infused throughout our first-party voice channel to enrich the customer and agent experience by automating routine tasks and offering insights and recommendations to increase the agent’s focus on the customer,” Comstock continued. “Dynamics 365 Customer Service breaks down traditional data silos between channels with a single, secure data platform, elegantly connecting customer conversations across all channels.”

The updated Dynamics 365 Customer Service offers real-time transcription and live sentiment analysis in addition to AI-driven recommendations for similar cases and knowledge articles. Transcripts can be translated in real time for agents assisting customers in different regions and across multiple languages, while AI analyzes conversations, identifying emerging issues and generating KPIs and insights that span live chat, social messaging, and voice.
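Production sentiment analysis like Microsoft’s relies on trained language models, but a toy keyword-based scorer conveys the basic shape of the task. The word lists and thresholds below are invented for illustration and bear no relation to Dynamics 365’s actual models:

```python
# Invented, illustrative word lists — real systems learn these signals
# from data rather than hand-coding them.
NEGATIVE = {"angry", "frustrated", "broken", "cancel", "terrible", "waiting"}
POSITIVE = {"thanks", "great", "resolved", "happy", "perfect"}

def sentiment(utterance):
    """Classify one customer utterance by counting cue words."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The "live" part of live sentiment analysis is just running something like this (or rather, a real model) on each transcribed utterance as it arrives and surfacing the trend to the agent.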

“With the new voice channel, we are delivering an all-in-one digital contact center solution that brings together contact center channels, unified communications, leading AI, and customer service capabilities together into a single, software-as-a-service solution, built on the Microsoft Cloud,” Comstock said. “And, when it comes to [businesses, they] have a choice. We continue to support integrations with key partners such as Five9, Genesys, NICE, Solgari, Tenfold, Vonage, and others who are building connectors to enable their voice solutions within Dynamics 365 Customer Service.”

The enhancements come roughly a year after Microsoft launched Azure Communication Services, a service that leverages the same network powering Teams to let developers add multimodal messaging to apps and websites while tapping into services like Azure Cognitive Services for translation, sentiment analysis, and more. The pandemic has accelerated the demand for distributed contact center setups, particularly those powered by AI — according to a 2020 report by Grand View Research, the contact center software market is anticipated to grow to $72.3 billion by 2027.

Amazon recently launched an AI-powered contact center product — Contact Lens — in general availability alongside several third-party solutions. And Google continues to expand Contact Center AI, which automatically responds to customer queries and hands them off to a person when necessary.


Repost: Original Source and Author Link

Categories
AI

GPT-3 comes to the enterprise with Microsoft’s Azure OpenAI Service

During its Ignite conference this week, Microsoft unveiled the Azure OpenAI Service, a new offering designed to give enterprises access to OpenAI’s GPT-3 language model and its derivatives along with security, compliance, governance, and other business-focused features. Initially invite-only as a part of Azure Cognitive Services, the service will allow access to OpenAI’s API through the Azure platform for use cases like language translation, code generation, and text autocompletion.

According to Microsoft corporate VP for Azure AI Eric Boyd, companies can leverage the Azure OpenAI Service for marketing purposes, like helping teams brainstorm ideas for social media posts or blogs. They could also use it to summarize common complaints in customer service logs or to assist developers with coding by minimizing the need to stop and search for examples.
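As an illustration of the kind of request such a use case involves, here is a minimal sketch that assembles the parameters for a summarization completion. The deployment name and prompt format are made up for illustration; in the Azure service, the model is addressed by the customer’s own deployment name rather than a shared OpenAI model:

```python
def build_summary_request(ticket_text: str) -> dict:
    """Assemble parameters for a GPT-3 completion that summarizes a
    customer-service ticket. The dict can be inspected before being sent
    via the OpenAI client, e.g. openai.Completion.create(**params)."""
    return {
        # In the Azure service, "engine" names the customer's own model
        # deployment; "my-gpt3-deployment" is a hypothetical example.
        "engine": "my-gpt3-deployment",
        "prompt": (
            "Summarize the customer complaint below in one sentence.\n\n"
            f"{ticket_text}\n\nSummary:"
        ),
        "max_tokens": 60,
        "temperature": 0.2,  # low temperature keeps summaries conservative
    }

params = build_summary_request("The app logs me out every time I switch networks.")
print(params["prompt"].splitlines()[0])
```

Sending the request would then be a single client call; the point of returning a plain dict here is that scaling capacity, networking, and access management are handled by the Azure resource the request is routed to, not by anything in the request itself.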

“We are just in the beginning stages of figuring out what the power and potential of GPT-3 is, which is what makes it so interesting,” he added in a statement. “Now we are taking what OpenAI has released and making it available with all the enterprise promises that businesses need to move into production.”

Large language models

Built by OpenAI, GPT-3 and its fine-tuned derivatives, like Codex, can be customized to handle applications that require a deep understanding of language, from converting natural language into software code to summarizing large amounts of text and generating answers to questions. People have used it to automatically write emails and articles, compose poetry and recipes, create website layouts, and create code for deep learning in a dozen programming languages.

GPT-3 has been publicly available since 2020 through the OpenAI API; OpenAI has said that GPT-3 is now being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day. But according to Microsoft corporate VP of AI platform John Montgomery, who spoke recently with VentureBeat in an interview, the Azure OpenAI Service enables companies to deploy GPT-3 in a way that complies with the laws, regulations, and technical requirements (for example, scaling capacity, private networking, and access management) unique to their business or industry.

“When you’re operating a national company, sometimes, your data can’t [be used] in a particular geographic region, for example. The Azure OpenAI Service can basically put the model in the region that you need for you,” Montgomery said. “For [our business customers,] it comes down to questions like, ‘How do you handle our security requirements?’ and ‘How do you handle things like virtual networks?’ Some of them need all of their API endpoints to be centrally managed or use customer-supplied keys for encryption … What the Azure OpenAI Service does is it folds all of these Azure backplane capabilities [for] large enterprise customers [into a] true production deployment to open the GPT-3 technology.”

Montgomery also points out that the Azure OpenAI Service makes billing more convenient by charging for model usage under a single Azure bill, versus separately under the OpenAI API. “That makes it a bit simpler for customers to pay and consume,” he said. “Because at this point, it’s one Azure bill.”

Enterprises are indeed increasing their investments in natural language processing (NLP), the subfield of linguistics, computer science, and AI concerned with how algorithms analyze large amounts of language. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%.

Customization and safety

As with the OpenAI API, the Azure OpenAI Service will allow customers to tune GPT-3 to meet specific business needs using examples from their own data. It’ll also provide “direct access” to GPT-3 in a format designed to be intuitive for developers to use, yet robust enough for data scientists to work with the model as they wish, Boyd says.

“It really is a new paradigm where this very large model is now itself the platform. So companies can just use it and give it a couple of examples and get the results they need without needing a whole data science team and thousands of GPUs and all the resources to train the model,” he said. “I think that’s why we see the huge amount of interest around businesses wanting to use GPT-3 — it’s both very powerful and very simple.”
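Boyd’s “give it a couple of examples” description is ordinary few-shot prompting: the examples are concatenated into the prompt ahead of the new input, and the model completes the pattern. A minimal sketch, with made-up examples for a support-ticket classifier:

```python
def few_shot_prompt(examples, query):
    """Build a few-shot classification prompt: each (text, label) example
    is rendered as a Text/Label pair, and the query is appended with the
    label left blank for the model to complete."""
    lines = []
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}\n")
    lines.append(f"Text: {query}\nLabel:")
    return "\n".join(lines)

# Two labeled examples are often enough to establish the pattern.
examples = [
    ("The package arrived two weeks late.", "shipping"),
    ("I was charged twice for one order.", "billing"),
]
prompt = few_shot_prompt(examples, "My invoice shows the wrong amount.")
print(prompt)
```

Because the customization lives entirely in the prompt string, no training run or GPU cluster is involved, which is the simplicity Boyd is pointing at.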

Of course, it’s well-established that models like GPT-3 are far from technically perfect. GPT-3 was trained on more than 600GB of text from the web, a portion of which came from communities with pervasive gender, race, physical, and religious prejudices. Studies show that it, like other large language models, amplifies the biases in data on which it was trained.

In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claimed that GPT-3 can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published in April by researchers at Intel, MIT, and the Canadian AI initiative CIFAR, have found high levels of bias in some of the most popular open source models, such as Google’s BERT and XLNet and Facebook’s RoBERTa.

Even fine-tuned models struggle to shed prejudice and other potentially harmful characteristics. For example, Codex can be prompted to generate racist and otherwise objectionable outputs as executable code. When writing code comments with the prompt “Islam,” Codex outputs the words “terrorist” and “violent” at a greater rate than it does for other religious groups.

More recent research suggests that language models deployed into production may struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure the models work better for them, or discourage minority speakers from engaging with the models at all.

OpenAI claims to have developed techniques to mitigate bias and toxicity in GPT-3 and its derivatives, including code review, documentation, user interface design, content controls, and toxicity filters. And Microsoft says it will only make the Azure OpenAI Service available to companies who plan to implement “well-defined” use cases that incorporate its responsible principles and strategies for AI technologies.

Beyond this, Microsoft will deliver safety monitoring and analysis to identify possible cases of abuse or misuse as well as new tools to filter and moderate content. Customers will be able to customize those filters according to their business needs, Boyd says, while receiving guidance from Microsoft on using the Azure OpenAI Service “successfully and fairly.”
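Microsoft hasn’t published how its filters work, but as a toy illustration of customer-configurable moderation, here is a sketch of a wrapper that suppresses completions containing terms from a customer-supplied blocklist. Real filters use trained classifiers rather than keyword lists; everything below is illustrative only:

```python
def moderate(completion: str, blocked_terms: set) -> tuple:
    """Return (allowed, text). If any blocked term appears in the model's
    output (case-insensitive), the completion is suppressed and a
    placeholder string is returned instead."""
    lowered = completion.lower()
    if any(term in lowered for term in blocked_terms):
        return False, "[completion withheld by content filter]"
    return True, completion

# The blocklist is the customer-configurable part: each business supplies
# terms appropriate to its own use case.
blocked = {"terrorist", "violent"}
ok, text = moderate("Our new store opens Friday.", blocked)
print(ok, text)
```

The design point is that the filter wraps the model rather than modifying it: the same GPT-3 deployment can serve customers with different moderation policies.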

“This is a really critical area for AI generally and with GPT-3 pushing the boundaries of what’s possible with AI, we need to make sure we’re right there on the forefront to make sure we are using it responsibly,” Boyd said. “We expect to learn with our customers, and we expect the responsible AI areas to be places where we learn what things need more polish.”

OpenAI and Microsoft

OpenAI’s deepening partnership with Microsoft reflects the economic realities the company faces. It’s an open secret that AI is a capital-intensive field: in 2019, OpenAI, previously a 501(c)(3) nonprofit, restructured as a “capped-profit” company, OpenAI LP, to secure additional funding while remaining under the control of the original nonprofit. And in July, OpenAI disbanded its robotics team after years of research into machines that can learn to perform tasks like solving a Rubik’s Cube.

Roughly a year ago, Microsoft announced it would invest $1 billion in San Francisco-based OpenAI to jointly develop new technologies for Microsoft’s Azure cloud platform. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which the company would then package and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-generation computing hardware.

In the months that followed, OpenAI released a Microsoft Azure-powered API — OpenAI API — that allows developers to explore GPT-3’s capabilities. In May during its Build 2020 developer conference, Microsoft unveiled what it calls the AI Supercomputer, an Azure-hosted machine co-designed by OpenAI that contains over 285,000 processor cores and 10,000 graphics cards. And toward the end of 2020, Microsoft announced that it would exclusively license GPT-3 to develop and deliver AI solutions for customers, as well as create new products that harness the power of natural language generation, like Codex.

Microsoft last year announced that GPT-3 would be integrated “deeply” with Power Apps, its low-code app development platform — specifically for formula generation. The AI-powered features allow a user building an ecommerce app, for example, to describe a programming goal in conversational language like “find products where the name starts with ‘kids.’” More recently, Microsoft-owned GitHub launched a feature called Copilot, powered by OpenAI’s Codex code generation model, which GitHub says is now being used to write as much as 30% of new code on its network.

Certainly, the big winners in the NLP boom are cloud service providers like Microsoft Azure. According to the John Snow Labs survey, 83% of companies already use NLP APIs from Google Cloud, Amazon Web Services, Azure, and IBM in addition to open source libraries. This represents a sizeable chunk of change, considering that the global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. In 2019, IBM generated $303.8 million in revenue from its AI software platforms alone.


Categories
Game

Netflix’s Video Game Service is Available Today With 5 Games

Netflix announced that its new gaming initiative has officially launched today, November 2nd. Five games are currently available on the mobile platform, including Stranger Things: 1984 and Stranger Things 3: The Game. Netflix promises to add more titles to the platform later on.

In August, Netflix rolled out its gaming service in Poland as part of a test. Originally, only players in Poland had access to Stranger Things: 1984 and Stranger Things 3: The Game. Now, however, the service is available worldwide and has added three new games: Shooting Hoops, Card Blast, and Teeter Up. The only requirements are a Netflix account and an Android device. The games require no additional fee to play, and there are no microtransactions.

Netflix’s press release goes into more detail about how players can manage their gaming time. The gaming service can be used on multiple devices on a single account; however, the number of devices cannot exceed the normal limit for the Netflix account. Netflix suggests that players who want to use the gaming service on more devices sign out of unused devices or remove their access from the Netflix website.

Each game offers the same language options as the regular streaming service. The games are not available on kids’ profiles, and if a Netflix account requires a PIN, that PIN must also be entered to play. Netflix allows players to play titles offline, which will likely require a full download of the game onto the preferred device.

The Netflix gaming service is available worldwide today on Android devices.


Categories
AI

Cnvrg.io develops on-demand service to run AI workloads across infrastructures

Cnvrg.io, the Intel-owned company that offers a platform to help data scientists build and deploy machine learning applications, has opened early access to a new managed service called Cnvrg.io Metacloud.

The offering, as the company explains, gives AI developers the flexibility to run, test, and deploy AI and machine learning (ML) workloads on a mix of mainstream infrastructure and hardware choices, even within the same AI/ML workflow or pipeline.

Cnvrg.io Metacloud: Flexibility for AI developers

AI experts often find themselves struggling to scale their projects due to the limitations of the cloud or on-premises infrastructure they use. Switching to a new environment is an option, but it means re-instrumenting an entirely new stack and spending a great deal of time and money. As a result, most users end up locked into a single vendor, which is a major obstacle to scaling and operationalizing AI.

Cnvrg.io Metacloud tackles this challenge with a flexible software-as-a-service (SaaS) interface, where developers can pick cloud or on-premise compute resources and storage services of their choice to match the demand of their AI/ML workloads.

The solution has been designed using cloud-native technologies such as containers and Kubernetes, which enables developers to pick any infrastructure provider from a partner menu to run their project. All users need to do is create an account, select the AI/ML infrastructure (any public cloud, on-premise, co-located, dev cloud, pre-release hardware, and more), and run the workload, the company said.

Plus, since there is no commercial commitment, developers can always switch to a different infrastructure to meet growing project demands or budget constraints. The current list of supported providers includes Intel, AWS, Azure, GCP, Dell, Red Hat, VMware, and Seagate.
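Cnvrg.io hasn’t published the Metacloud API, but the idea of picking a provider per workload can be sketched as a pipeline definition in which each step names its own compute target. Every class, field, and provider label below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One stage of an AI/ML pipeline, bound to a compute provider."""
    name: str
    command: str
    provider: str   # e.g. "aws", "azure", "on-prem" -- hypothetical labels
    gpu: bool = False

# One pipeline, mixed infrastructure: preprocess on-prem, train on cloud GPUs.
pipeline = [
    Step("preprocess", "python prep.py", provider="on-prem"),
    Step("train", "python train.py", provider="aws", gpu=True),
]

def providers_used(steps):
    """List the distinct providers a pipeline spans, in order of first use."""
    seen = []
    for s in steps:
        if s.provider not in seen:
            seen.append(s.provider)
    return seen

print(providers_used(pipeline))
```

Because the workload is described declaratively and containerized, swapping a step’s `provider` field is a configuration change rather than a stack rewrite, which is the lock-in problem the service is aimed at.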

“AI has yet to meet its ultimate potential by overcoming all the operational complexities. The future of machine learning is dependent on the ability to deliver models seamlessly using the best infrastructure available,” Yochay Ettun, CEO and cofounder of Cnvrg.io, said in a statement.

“Cnvrg.io Metacloud is built to give flexibility and choice to AI developers to enable successful development of AI instead of limiting them, so enterprises can realize the full benefits of machine learning sooner,” he added.

Cnvrg.io Metacloud will be provided as part of the Cnvrg.io full-stack machine learning operating system, designed to help developers build and deploy machine learning models. The early access version of the solution can be accessed upon request via the company website.

Notably, this is the first major announcement from Cnvrg.io since its acquisition by Intel in 2020. Prior to the deal, the company had raised about $8 million from multiple investors, including Hanaco Venture Capital and Jerusalem Venture Partners.
