Categories
Security

Microsoft Defender launches on Windows, macOS, iOS, and Android

Microsoft is launching a new Defender cybersecurity app across Windows, macOS, iOS, and Android today. While the software giant has used the Defender moniker for its antivirus protection for years, this new cross-platform Microsoft Defender app is designed for individuals as more of a simplified dashboard that taps into existing antivirus software or offers additional device protections.

Microsoft Defender will be available for Microsoft 365 Personal and Family subscribers today, and the features will vary by platform. On iOS and iPadOS, for example, there’s no antivirus protection, and the app offers some web phishing protections instead alongside a dashboard that includes alerts for other devices.

Over on Android, Microsoft Defender includes antivirus protection and the ability to scan for malicious apps. The app will also scan links to offer web phishing protection. Microsoft Defender on Windows acts as a dashboard rather than attempting to replace the built-in Windows Security app: you can view your existing antivirus protection from Norton, McAfee, or other vendors, and manage and view security protections across devices.

Microsoft Defender on iOS.
Image: Microsoft

Microsoft Defender also includes security alerts and tips across multiple devices, although the tips are only available on Windows and macOS.

The app feels like it will be superfluous for many, but it will be useful for those wanting to protect family members and multiple devices in a simple dashboard. Microsoft is promising that more features are on the way, too.

“The expansion of our security portfolio with Microsoft Defender for individuals is the natural and exciting progression in our journey as a security company,” says Vasu Jakkal, corporate vice president of Microsoft security. “This is just the start. As we look forward, we will continue to bring more protections together under a single dashboard, including features like identity theft protection and secure online connection.”

Categories
AI

AI Dungeon’s creator Latitude launches new Voyage game platform

Latitude, the startup behind text game AI Dungeon, is expanding into a new artificial intelligence-powered game platform called Voyage. The company announced the closed beta on Friday, opening a waitlist for current AI Dungeon users. It’s the next step for a company that began with a university hackathon project, but that ultimately hopes to help other people create their own games using trained AI models.

AI Dungeon, which launched as AI Dungeon 2 in 2019, is powered by OpenAI’s GPT-2 and GPT-3 text generation algorithms. To start, you generate some introductory text or write your own adventure setup. Then you can enter any command you want and a Dungeons & Dragons-style virtual game master will improvise some text describing the outcome. It’s very weird and a lot of fun, but it’s light on traditional game mechanics — more like an interactive fiction engine.
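
For the curious, the core loop is simple enough to sketch. The snippet below uses OpenAI’s completions API as it existed in the GPT-3 era; the engine name, prompt framing, and sampling settings are illustrative guesses, not Latitude’s actual setup.

```python
# Minimal sketch of an AI Dungeon-style loop: the model acts as game master,
# improvising an outcome for whatever the player types. Model name, prompt
# framing, and sampling settings here are illustrative, not Latitude's.
import openai

openai.api_key = "YOUR_API_KEY"

history = "You are a knight standing before a ruined tower. A crow watches you.\n"

while True:
    action = input("> ")
    if action.lower() in {"quit", "exit"}:
        break
    history += f"\n> {action}\n"
    response = openai.Completion.create(
        engine="davinci",     # GPT-3-era engine name; an assumption
        prompt=history,
        max_tokens=120,
        temperature=0.8,
        stop=["\n>"],         # stop before inventing the player's next turn
    )
    outcome = response.choices[0].text.strip()
    history += outcome + "\n"
    print(outcome)
```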

Voyage features more structured games. There’s a Reigns-inspired experiment called Medieval Problems, where you’re the ruler of a kingdom and enter freeform text commands for your advisors, then see the outcome reflected in success ratings. It’s still a lot like AI Dungeon, but with a clearer framing for what you’re supposed to do and a system for evaluating success — although after playing with the game, that system seems pretty forgiving and more than a bit random.

An image from the party game Pixel This

Pixel This, meanwhile, is a party game where one person enters a phrase, the AI generates a pixelated picture of it, and that image slowly increases in resolution until another player guesses it. It’s a bit like the art app Dream paired with a Pictionary-style mechanic.
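
The reveal mechanic itself is straightforward to approximate. Here is a minimal sketch using Pillow, with the AI-generated source image stubbed out as a file on disk; the resolution schedule is an assumption.

```python
# Sketch of Pixel This's reveal mechanic: show an image at increasing
# resolution until someone guesses the phrase. The step schedule is a guess.
from PIL import Image

def pixelated_steps(path, steps=(4, 8, 16, 32, 64)):
    """Yield progressively less pixelated versions of the image."""
    original = Image.open(path)
    for size in steps:
        # Downscale to a tiny grid, then blow it back up with hard edges.
        small = original.resize((size, size), Image.NEAREST)
        yield small.resize(original.size, Image.NEAREST)

for i, frame in enumerate(pixelated_steps("generated.png")):
    frame.save(f"reveal_{i}.png")  # in the game these would be shown in turn
```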

Latitude CEO Nick Walton describes Voyage as a natural evolution for Latitude. For the company, “AI games are kind of restarting at the beginning” — with text adventures reminiscent of Zork or Colossal Cave Adventure. “Now we’re moving into 2D images where you’ve got some level of visuals in.” AI Dungeon, which is included in Voyage, recently added AI-produced pictures created with the Pixray image generator.

The eventual goal is to add game creation tools, not just games, to Voyage. “Our long-term vision is enabling creators to make things that are dynamic and alive in a way that existing experiences aren’t, and also be able to create things that would have taken studios of a hundred people in the past,” says Walton. There’s no precise roadmap, but Latitude plans to spend the first half of next year working on the system.

Creative tools could help Voyage find a long-term business plan. AI Dungeon is currently free for a set of features powered by GPT-2 and subscription-based for access to the higher-quality GPT-3 algorithm. Following the Voyage beta, Latitude plans to introduce a subscription for it as well.

But Voyage’s new games don’t yet have the versatility or replayability of AI Dungeon — they’re still clearly the products of a company trying to crack games based on machine learning. “This approach is one of the things that I think is going to be really beneficial in terms of being able to iterate and find out what experiences people enjoy,” Walton says. “With traditional games, you can kind of take existing models and create a game that you’re pretty sure people will enjoy. But this space is so different, and it’s hard to necessarily know.” The question is how much people will want to pay to be part of that process.

As Latitude’s mission expands, it will likely need to exercise caution with OpenAI’s application programming interface (API). The organization approves GPT-3 projects on an individual basis, and projects must adhere to content guidelines intended to prevent misuse. Latitude has struggled with these restrictions in the past, since AI Dungeon gives users a lot of freedom to shape their own stories — resulting in some users creating disturbing sexual scenarios that alarmed OpenAI. (It’s also dealt with security issues around user commands.) The startup spent months working on filter systems that accidentally blocked more innocuous fictional content before striking a deal where some user commands would be sent to a non-OpenAI algorithm.

Pixel This and Medieval Problems are more closed systems with fewer obvious moderation risks, but introducing creative tools risks chipping away at OpenAI’s control of GPT-3, which may pose its own set of issues. Walton says that over time, Latitude hopes to shift more of its games onto other algorithms. “We will have some more structure and systems so that it’s not just directly consuming the [OpenAI] API in the same way. And at the same time, I think most of our models will probably be ones that we host ourselves,” he says. That includes models based on emerging open source projects — which have had trouble competing with OpenAI’s work but have advanced since their early days. “I don’t think that gap will be around that long,” says Walton.

Lots of games use procedural generation that remixes developer-created building blocks to create huge quantities of content, and most video game “AI” is a comparatively simple set of instructions. A company like Latitude, by contrast, uses algorithms that are trained to produce text or images fitting a pattern from a data set. (Think of them as super-advanced autocomplete systems.) Right now that can make the resulting experiences highly unpredictable, and their absurdity is often part of their charm — outside gaming, other companies like NovelAI have also harnessed text generation for creative work.
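
The autocomplete analogy can be made concrete with a toy example. A bigram Markov chain is the simplest version of “predict the next word from what came before”; models like GPT-3 do the same kind of next-token prediction at vastly greater scale.

```python
# The "autocomplete" analogy in miniature: a bigram Markov chain trained on
# a tiny corpus predicts each next word from the current one.
import random
from collections import defaultdict

corpus = "the knight drew his sword and the dragon drew a deep breath".split()

chain = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    chain[current].append(nxt)

word, generated = "the", ["the"]
for _ in range(8):
    if word not in chain:
        break
    word = random.choice(chain[word])
    generated.append(word)
print(" ".join(generated))
```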

But Latitude is still figuring out how to make systems where players can expect fair and consistent outcomes. Text generation algorithms don’t have any built-in sense of whether an action succeeds or fails, for instance, and systems for making those judgments may not agree with normal human intuition. Image generation algorithms are great for producing weird art, but in a game like Pixel This, players can’t necessarily predict how recognizable a given picture will be.

For now, Latitude’s solution is to lean into the chaos. “If you try and make something super serious with AI where people are going to expect to have a high level of coherence, it’s going to have a hard time, at least until the technology gets better,” Walton says. “But if you kind of embrace that aspect of it and kind of let it be crazier and wacky, then I think you make a fun experience and people get delighted by those surprises.”

Repost: Original Source and Author Link

Categories
AI

Imec launches sustainable semiconductor program

Imec, a semiconductor research accelerator in Leuven, Belgium, announced the first fruits of the Sustainable Semiconductor Technologies and Systems (SSTS) research program aimed at reducing the semiconductor industry’s carbon footprint. Early participants include large system companies such as Apple and Microsoft and semiconductor suppliers such as ASM, ASML, Kurita, Screen and Tokyo Electron. 

Today, the semiconductor industry plays a small role in global carbon emissions. But the industry is facing unprecedented demand for new chips. A big concern is that growing demand could increase the semiconductor industry’s relative proportion of carbon emissions, particularly as other industries reduce their carbon impact. 

Semiconductor manufacturing requires large amounts of energy and water and can create hazardous waste. In the short run, the SSTS effort will help companies understand and optimize their systems to minimize carbon emissions. Eventually, better metrics could also help mitigate other problems caused by semiconductor production.

A growing concern in the semiconductor industry

Researchers have shown that nearly 75% of a mobile device’s carbon footprint comes from its fabrication and half of this comes from the chips. The SSTS team has already built a virtual fab to simulate the environmental impact of future logic and memory devices. 

The SSTS team is also starting to enrich the data, run simulations with it, and check the representativeness of the models used. Lars-Åke Ragnarsson, program director at SSTS, told VentureBeat that the early efforts are prototypes of simulated fabs rather than true digital twins. However, the hope is that future efforts will allow enterprises to weave these models into digital twins to help optimize ongoing manufacturing operations.

He stressed the importance of sharing emissions data across the entire manufacturing lifecycle and all steps in the supply chain. With shared metrics, everyone will get a better idea of which actions improve sustainability and which undermine it. For example, today some companies quantify carbon footprint on a per-chip basis, while others do it per dollar of product. Unified standards will provide an apples-to-apples comparison of the merits of various adjustments.
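
A toy calculation shows why the denominator matters. In the sketch below, with invented numbers, the same two fabs rank differently depending on whether emissions are normalized per chip or per dollar of revenue.

```python
# Why unified metrics matter: the same fab looks "greener" or "dirtier"
# depending on the denominator. All figures below are made up for illustration.
fabs = {
    "fab_a": {"kg_co2e": 1_000_000, "chips": 2_000_000, "revenue_usd": 50_000_000},
    "fab_b": {"kg_co2e": 1_500_000, "chips": 4_000_000, "revenue_usd": 60_000_000},
}

for name, f in fabs.items():
    per_chip = f["kg_co2e"] / f["chips"]
    per_dollar = f["kg_co2e"] / f["revenue_usd"]
    print(f"{name}: {per_chip:.2f} kgCO2e/chip, {per_dollar:.3f} kgCO2e/$")

# fab_b wins per chip (0.38 vs 0.50) but loses per dollar (0.025 vs 0.020):
# without an agreed standard, both can claim to be the more sustainable fab.
```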

Sources of carbon emissions include electricity consumption, direct emissions, and water usage. The group is already showing the potential impact of higher-efficiency pumps and hydrogen recovery. Down the road, it hopes to identify the high-impact processes in logic and memory technologies where manufacturers should focus their efforts. It also wants to develop guidelines for manufacturers, fabless companies, equipment providers, and material providers that could help minimize carbon impact.

Early start

The project is still in its infancy. Currently, chip manufacturing accounts for only about 0.1% of global carbon emissions. But the concern is that, without a new approach, those emissions could triple by 2030 and have an outsized impact as other industries reduce their carbon footprints. “The goal should be to stay at 0.1%,” Ragnarsson said.

They plan to add other environmental aspects, such as material costs, down the road. Other tools will make it easier to explore how new manufacturing and computing techniques such as molecular computing might impact the climate footprint. They also plan to develop tools to help share these findings and broader lifecycle data with other stakeholders such as politicians and citizens. 

“We need to work together as a team,” Ragnarsson said. “This is not competitive. We need to work together to make this happen.”

Categories
AI

Headroom launches to combat Zoom fatigue with AI

To say that the global pandemic has transformed the workforce would be something of an understatement, with remote work still a reality for millions of people globally. To help with the rapid transition from centralized working environments to a more distributed network of home-working hubs, cloud-based software-as-a-service (SaaS) tools such as Zoom have shown their worth. But at what cost?

In a world where major businesses, from Salesforce and Microsoft to VMware and Dropbox, have confirmed a permanent shift to a remote-first or hybrid working policy, people are spending less time face-to-face and more time face-to-screen. As such, better and more adaptive virtual collaboration tools will prove to be vital — Zoom fatigue, after all, is very real.

That is precisely what Headroom is setting out to solve, with an AI-powered platform that automates many of the laborious tasks involved in virtual video meetings, and removes some of the stress entailed in catching up with key discussion points.

Headroom was formed in 2020 by a triumvirate of founders with experience in building products at companies such as Google and Magic Leap. CEO Julian Green previously founded an AI-powered photo startup called Jetpac, which Google acquired in 2014. He then went on to become a group product manager at Google, where he launched the Google Cloud Vision API and Android Vision on-device API, as well as overseeing various augmented reality visual search products. More recently, Green served as a director at Alphabet’s “moonshot factory” X.

Headroom raised $5 million in seed funding last October, with big-name backers, including Google’s AI-focused fund Gradient Ventures and Yahoo cofounder Jerry Yang. As of this week, Headroom is officially available for anyone to sign up for after an extended beta period.

Zoom on steroids

While Headroom is a video communication platform in a similar vein to Zoom, it packs a number of notable tools on top of its core video chat functionality — it has been built with productivity, inclusiveness, and organization at its core.

Headroom automatically transcribes a meeting, enabling users to search the transcript for keywords that are relevant to them, rather than having to sit through an entire two-hour replay to get to the stuff that matters for their role.

Above: Headroom: Search

With this transcription in place, Headroom can also conduct textual analysis using natural language processing (NLP) and apply computer vision to body language, including gaze tracking and facial expressions, to generate highlights based on the perceived energy levels of the participants.

“Together, these signals allow us to automatically produce a highlight version of the meeting that is ten times shorter — you can skip between highlights and get a good sense of an hour-long meeting in minutes,” Green told VentureBeat.
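
A rough sketch of that idea: score transcript segments with some energy signal and keep the highest-scoring ones within a time budget. Headroom’s actual fusion of NLP and computer vision isn’t public, so the scores below are stand-ins.

```python
# Pick highlights by "energy" score until a compressed-duration budget is
# spent. Segment scores here are stubs for Headroom's multimodal signals.
def top_highlights(segments, compression=10):
    """segments: list of (start_sec, end_sec, text, energy_score)."""
    total = sum(end - start for start, end, _, _ in segments)
    budget = total / compression
    kept, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s[3], reverse=True):
        start, end, _, _ = seg
        if used + (end - start) > budget:
            continue
        kept.append(seg)
        used += end - start
    return sorted(kept)  # chronological order for playback

segments = [
    (0, 60, "status updates", 0.2),
    (60, 90, "heated debate about the launch date", 0.9),
    (90, 150, "screen-share fumbling", 0.1),
    (150, 170, "decision: ship on the 10th", 0.95),
]
for s in top_highlights(segments, compression=3):
    print(s[2])  # -> the debate and the decision, skipping the filler
```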

Moreover, built-in gesture recognition allows meeting participants to quickly show their (dis)approval without unmuting their microphone, either by giving a thumbs up/down, or raising their hand to indicate that they have a question without verbally interrupting the current speaker.

Above: Headroom: Replaying a meeting, with context

On a technical level, Headroom uses what is known as “real-time super-resolution,” which essentially transforms low-definition video into high-definition using lower bandwidth. And with its server-based architecture, Headroom doesn’t consume users’ local computing power — everything takes place in the cloud.

“We can provide all these AI features without taking up all your CPU cycles, and having your computer fan running when you have a meeting,” Green explained. “Our differentiation from the many other efforts in this area stem from having developed a next-generation video-conferencing platform that enables low latency real-time AI to be run on real-time communication — older architectures can’t do this. Incumbents are in the pixel delivery business, we are in the intelligence delivery business.”

Meetings of the future

In the future, Headroom sees itself serving as a sort of centralized knowledge base that allows users to ask questions, such as “what was project XYZ’s outcome?”, and receive answers. This means those who couldn’t attend a meeting can easily discover any agreements or conclusions that were reached.

“Meetings will become a more flexible way to drive a business forward, from rich in-person communication all the way to rapid information-retrieval and value-added analytics,” Green said.

Other notable features include “wordshare,” which purportedly promotes “inclusivity” in situations where one or two people are doing most of the talking. Headroom counts the number of words each person has said, and displays a corresponding graph — in reality, though, this could shine a spotlight on less-vocal introverted participants instead.
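
The mechanics of wordshare are simple enough to sketch: count words per speaker and render the share. The transcript below is invented.

```python
# A minimal version of "wordshare": count words spoken per participant and
# show each share, which is all the feature's graph is visualizing.
from collections import Counter

transcript = [
    ("Ana", "I think we should delay the launch until the tests pass"),
    ("Ben", "Agreed"),
    ("Ana", "Great, I will update the timeline and notify the client today"),
]

words = Counter()
for speaker, utterance in transcript:
    words[speaker] += len(utterance.split())

total = sum(words.values())
for speaker, count in words.most_common():
    print(f"{speaker}: {'#' * count} {count / total:.0%}")
```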

While it remains early days for Headroom, the San Francisco-headquartered company is initially focusing on startups “since they tend to be early adopters and are less committed to existing tools,” according to Green. But he said that the company is seeing some early traction in the enterprise space.

“We have had inbound interest from consultancies, banks, healthcare, and educational companies,” Green said. “So there is value there for enterprises, which are very interested in the things we do well — coaching, improving inclusion, and transparency in meetings. Having a searchable store of meeting information, reducing duplicated meetings, and improving efficiency is attractive to companies of all sizes.”

Categories
Game

Paper Mario launches N64 era on Switch Online but doesn’t nail the landing

Earlier this year, Nintendo announced the Switch Online Expansion Pack. While the Expansion Pack doesn’t offer any improvements to the online service at the center of the subscription, it does offer various extra perks for those willing to shoulder the additional cost. Those perks include access to libraries of Nintendo 64 and Sega Genesis games, along with ongoing access to the Happy Home Paradise DLC for Animal Crossing: New Horizons. Nintendo has now confirmed that the first post-launch addition to the N64 library will be Paper Mario.

A classic Nintendo 64 game comes to Switch this month

As far as Nintendo 64 games are concerned, Paper Mario is right up there as one of the classics. A spiritual successor to Super Mario RPG on the Super NES, Paper Mario blazed a new trail for Mario role-playing games when it was originally released. Paper Mario is a fantastic game, and if you’re subscribed to Nintendo Switch Online with the Expansion Pack, it’s well worth checking out when it arrives on the service later this month.

When will it arrive? Nintendo has announced a December 10th release date for Paper Mario, so release is right around the corner. Sadly, Paper Mario is the sole addition to Nintendo Switch Online for the 10th. There are no new NES, SNES, or Sega Genesis games heading to Switch Online alongside it, so hopefully, it won’t be long before we see more additions to some or all of those libraries.

Does Paper Mario make NSO’s Expansion Pack worth it?

It’s no secret that the price of Nintendo Switch Online’s Expansion Pack is a tough pill to swallow. A standard Nintendo Switch Online subscription costs $20 a year and covers online multiplayer, access to libraries of NES and SNES games, and a few freebies like Tetris 99 and Pac-Man 99. Nintendo’s online service is not great – something we’ve mentioned in reviews for games like Super Smash Bros. Ultimate and Super Mario Maker 2 – but $20 is still a reasonable price considering what a subscription includes.

The Expansion Pack, on the other hand, adds collections of Nintendo 64 and Sega Genesis games along with access to the Happy Home Paradise DLC for Animal Crossing: New Horizons, a DLC that costs $25 to buy outright. The Expansion Pack adds another $30 per year to the Nintendo Switch Online subscription cost, bringing the total to $50 a year. Is the Expansion Pack worth subscribing to?

Simply put: probably not for most. If you’re only interested in playing Nintendo 64 or Sega Genesis games, the Expansion Pack definitely isn’t worth it. Admittedly, most of the N64 games included in the Switch Online Expansion Pack are impossible to play on modern platforms otherwise, but there are plenty of options for revisiting many of the Sega Genesis games that are included, because Sega loves its compilations.

The sole addition of Paper Mario doesn’t really improve the Expansion Pack’s value proposition. Perhaps there will be a point in the future where there are enough Nintendo 64 and Sega Genesis games to warrant the extra cost. Still, for now, the Expansion Pack should only really be a consideration for Animal Crossing fans who want the Happy Home Paradise expansion in addition to the N64 and Genesis titles.

Categories
AI

Amazon launches AWS RoboRunner to support robotics apps

At a keynote during its Amazon Web Services (AWS) re:Invent 2021 conference today, Amazon launched AWS IoT RoboRunner, a new robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together. Alongside IoT RoboRunner, Amazon announced the AWS Robotics Startup Accelerator, an incubator program in collaboration with nonprofit MassRobotics to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies.

The adoption of robotics — and automation more broadly — in enterprises has accelerated as the pandemic prompts digital transformations. A recent report from Automation World found that the bulk of companies that embraced robotics in the past year did so to decrease labor costs, increase capacity, and navigate a lack of available workers. The same survey found that 44.9% of companies now consider the robots in their assembly and manufacturing facilities to be an integral part of daily operations.

Amazon — a heavy investor in robotics itself — hasn’t been shy about its intent to capture a larger part of a robotics software market that is anticipated to be worth over $7.52 billion by 2022. In 2018, the company unveiled AWS RoboMaker, a product to assist developers with deploying robotics applications with AI and machine learning capabilities. And Amazon earlier this year rolled out SageMaker Reinforcement Learning Kubeflow Components, a toolkit supporting the RoboMaker service for orchestrating robotics workflows.

IoT RoboRunner

IoT RoboRunner, currently in preview, builds on the technology already in use at Amazon warehouses for robotics management. It allows AWS customers to connect robots and existing automation software to orchestrate work across operations, combining data from each type of robot in a fleet and standardizing data types like facility, location, and robotic task data in a central repository.

The goal of IoT RoboRunner is to simplify the process of building management apps for fleets of robots, according to Amazon. As enterprises increasingly rely on robotics to automate their operations, they’re choosing different types of robots, making it more difficult to organize their robots efficiently. Each robot vendor and work management system has its own, often incompatible control software, data format, and data repository. And when a new robot is added to a fleet, programming is required to connect the control software to work management systems and program the logic for management apps.

Developers can use IoT RoboRunner to access the data required to build robotics management apps and leverage prebuilt software libraries to create apps for tasks like work allocation. Beyond this, IoT RoboRunner can be used to deliver metrics and KPIs via APIs to administrative dashboards.
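
Amazon hasn’t published code samples alongside the preview, but the work-allocation pattern those prebuilt libraries target looks roughly like this. Every name and field below is hypothetical; this is the shape of the problem, not the AWS API.

```python
# Illustration of the work-allocation problem RoboRunner targets: one shared
# registry of heterogeneous robots and tasks instead of a silo per vendor.
# Names and fields are hypothetical, not the AWS API.
from dataclasses import dataclass

@dataclass
class Robot:
    robot_id: str
    vendor: str
    location: tuple  # (x, y) within the facility
    busy: bool = False

def allocate(task_location, fleet):
    """Assign the task to the nearest idle robot, regardless of vendor."""
    idle = [r for r in fleet if not r.busy]
    if not idle:
        return None
    def dist_sq(r):
        dx = r.location[0] - task_location[0]
        dy = r.location[1] - task_location[1]
        return dx * dx + dy * dy
    chosen = min(idle, key=dist_sq)
    chosen.busy = True
    return chosen

fleet = [
    Robot("amr-1", "vendor_a", (0, 0)),
    Robot("arm-7", "vendor_b", (5, 2), busy=True),
    Robot("amr-2", "vendor_a", (9, 9)),
]
print(allocate((4, 3), fleet))  # -> amr-1, the closest idle unit
```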

IoT RoboRunner competes with robotics management systems from Freedom Robotics, Exotec, and others. But Amazon makes the case that IoT RoboRunner’s integration with AWS — including services like SageMaker, Greengrass, and SiteWise — gives it an advantage over rivals on the market.

“Using AWS IoT RoboRunner, robotics developers no longer need to manage robots in silos and can more effectively automate tasks across a facility with centralized control,” Amazon wrote in a blog post. “As we look to the future, we see more companies adding more robots of more types. Harnessing the power of all those robots is complex, but we are dedicated to helping enterprises get the full value of their automation by making it easier to optimize robots through a single system view.”

AWS Robotics Startup Accelerator

Amazon also announced the Robotics Startup Accelerator, which the company says will foster robotics companies by providing them with resources to develop, prototype, test, and commercialize their products and services. “Combined with the technical resources and network that AWS provides, the strategic collaboration will help robotics startups and the industry overall to experiment and innovate, while connecting startups and their technologies with the AWS customer base,” Amazon wrote in a blog post.

Startups accepted into the Robotics Startup Accelerator program will consult with AWS and MassRobotics experts on business models and with AWS robotics engineers for technical assistance. Benefits include hands-on training on AWS robotics solutions and up to $10,000 in promotional credits to use AWS IoT, robotics, and machine learning services. Startups will also receive business development and investment guidance from MassRobotics and co-marketing opportunities with AWS via blogs and case studies.

Robotics startups — particularly in industrial robotics — have attracted the eye of venture capitalists as the trend toward automation continues. From March 2020 to March 2021, venture firms poured $6.3 billion into robotics companies, up nearly 50% from March 2019 to March 2020, according to data from PitchBook. Over the longer term, robotics investments have climbed more than fivefold throughout the past five years, to $5.4 billion in 2020 from $1 billion in 2015.

“Looking ahead, the expectations of robotics suppliers are bullish, with many believing that with the elections over and increased availability of COVID-19 vaccines on the horizon, much demand will return in industries where market skittishness has slowed robotic adoption,” Automation World wrote in its report. “Meanwhile, those industries already seeing an uptick are expected to plough ahead at an even faster pace.”

Categories
Security

Signal launches in-app sustainer program to accept donations from users

Secure messaging app Signal has launched a new feature that allows users to make donations within the app, the company announced. As a nonprofit, Signal doesn’t receive financial support from any advertisers or shareholders, a new blog post notes, and relies on users for donations.

Users can choose to make monthly donations or one-off contributions within the app using Apple Pay or Google Pay, and their payment information won’t be associated with their Signal account, according to the blog post, as the company will use the same anonymous credential scheme it uses for private Signal groups:

Clients make payments and then associate a badge to their profile such that the server can authenticate the client is in the set of people who made a payment, but doesn’t know specifically which payment it corresponds to.
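
Signal’s scheme is more sophisticated than this, but a textbook blind signature illustrates the core property: the server can confirm a client is in the set of payers without linking a badge to any specific payment. The toy below uses a tiny hardcoded RSA key and no padding or hashing, so it is for illustration only.

```python
# Toy RSA blind signature illustrating unlinkable payment badges.
# Tiny hardcoded key, no padding: NOT real cryptography. Requires Python 3.8+.
n, e, d = 3233, 17, 2753   # textbook RSA key (p=61, q=53); server holds d

# 1. Client blinds the badge token with a random factor r before sending it.
token = 1234
r = 71                                   # must satisfy gcd(r, n) == 1
blinded = (token * pow(r, e, n)) % n

# 2. Server signs the blinded token at payment time; it never sees `token`.
blind_sig = pow(blinded, d, n)

# 3. Client unblinds. `sig` is a valid signature on `token`, but the server
#    cannot connect it to the value it signed in step 2.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Later, when the badge is attached, the server checks sig^e == token.
assert pow(sig, e, n) == token % n
print("badge verified, payment unlinkable")
```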

Signal has long been a popular messaging app for people who want to secure their text messages. Earlier this year, it began adding some of the more consumer-friendly features that other messaging apps have had for a while, like stickers and wallpapers.

To donate to Signal, users can choose from three different sustainer levels, at $5, $10, or $20, each with its own badge. Sustainer subscriptions will only renew if a customer uses Signal during the course of a month; if you uninstall or stop using the app, the payments will be canceled before the next cycle to “eliminate the ‘dark pattern’ of subscriptions you’ve forgotten about,” the company said. Signal plans to add support for additional payment methods in the future.

Categories
AI

Meta launches PyTorch Live to build AI-powered mobile experiences

During its PyTorch Developer Day conference, Meta (formerly Facebook) announced PyTorch Live, a set of tools designed to make it easier to build AI-powered experiences for mobile devices. PyTorch Live offers a single programming language — JavaScript — to build apps for Android and iOS, as well as a process for preparing custom machine learning models to be used by the broader PyTorch community.

“PyTorch’s mission is to accelerate the path from research prototyping to production deployment. With the growing mobile machine learning ecosystem, this has never been more important than before,” a spokesperson told VentureBeat via email. “With the aim of helping reduce the friction for mobile developers to create novel machine learning-based solutions, we introduce PyTorch Live: a tool to build, test, and (in the future) share on-device AI demos built on PyTorch.”

PyTorch Live

PyTorch, which Meta publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see a rapid uptake in the data science and developer community. It claimed one of the top spots for fast-growing open source projects last year, according to GitHub’s 2018 Octoverse report, and Meta recently revealed that in 2019 the number of contributors on the platform grew more than 50% year-over-year to nearly 1,200.

PyTorch Live builds on PyTorch Mobile, a runtime that allows developers to go from training a model to deploying it while staying within the PyTorch ecosystem, and the React Native library for creating visual user interfaces. PyTorch Mobile powers the on-device inference for PyTorch Live.

PyTorch Mobile launched in October 2019, following the earlier release of Caffe2go, a mobile CPU- and GPU-optimized version of Meta’s Caffe2 machine learning framework. PyTorch Mobile can launch with its own runtime and was created with the assumption that anything a developer wants to do on a mobile or edge device, the developer might also want to do on a server.

“For example, if you want to showcase a mobile app model that runs on Android and iOS, it would have taken days to configure the project and build the user interface. With PyTorch Live, it cuts the cost in half, and you don’t need to have Android and iOS developer experience,” Meta AI software engineer Roman Radle said in a prerecorded video shared with VentureBeat ahead of today’s announcement.

Built-in tools

PyTorch Live ships with a command-line interface (CLI) and a data processing API. The CLI enables developers to set up a mobile development environment and bootstrap mobile app projects. As for the data processing API, it prepares and integrates custom models to be used with the PyTorch Live API, which can then be built into mobile AI-powered apps for Android and iOS.
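
The Live CLI and data processing API sit on the JavaScript side, but the model preparation they feed on follows standard PyTorch Mobile steps: trace, optimize, and save for the lite interpreter. A minimal sketch, with MobileNetV2 as an arbitrary example model:

```python
# The model-preparation step PyTorch Live builds on: trace a model, optimize
# it for mobile, and save it for the lite interpreter that runs on device.
# These are standard PyTorch Mobile APIs; the model choice is illustrative.
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

example = torch.rand(1, 3, 224, 224)   # dummy input for tracing
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)

# The .ptl file is what the mobile runtime (and a PyTorch Live app) loads.
optimized._save_for_lite_interpreter("mobilenet_v2.ptl")
```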

In the future, Meta plans to enable the community to discover and share PyTorch models and demos through PyTorch Live, as well as provide a more customizable data processing API and support machine learning domains that work with audio and video data.

“This is our initial approach of making it easier for [developers] to build mobile apps and showcase machine learning models to the community,” Radle continued. “It’s also an opportunity to take this a step further by building a thriving community [of] researchers and mobile developers [who] share and utilize [PyTorch] mobile models and engage in conversations with each other.”

Categories
AI

Amazon launches SageMaker Canvas for no-code AI model development

During a keynote address today at its re:Invent 2021 conference, Amazon announced SageMaker Canvas, which enables users to create machine learning models without having to write any code. Using SageMaker Canvas, Amazon Web Services (AWS) customers can run a machine learning workflow with a point-and-click user interface to generate predictions and publish the results.

Low- and no-code platforms allow developers and non-developers alike to create software through visual dashboards instead of traditional programming. Adoption is on the rise, with a recent OutSystems report showing that 41% of organizations were using a low- or no-code tool in 2019/2020, up from 34% in 2018/2019.

“Now, business users and analysts can use Canvas to generate highly accurate predictions using an intuitive, easy-to-use interface,” AWS CEO Adam Selipsky said onstage. “Canvas uses terminology and visualizations already familiar to [users] and complements the data analysis tools that [people are] already using.”

AI without code

With Canvas, Selipsky says that customers can browse and access petabytes of data from both cloud and on-premises data sources, such as Amazon S3 and Redshift databases, as well as local files. Canvas uses automated machine learning technology to create models, and once the models are created, users can explain and interpret them and share them with each other to collaborate and enrich insights.

“With Canvas, we’re making it even easier to prepare and gather data for machine learning to train models faster and expand machine learning to an even broader audience,” Selipsky added. “It’s really going to enable a whole new group of users to leverage their data and to use machine learning to create new business insights.”

Canvas follows on the heels of SageMaker improvements released earlier in the year, including Data Wrangler, Feature Store, and Pipelines. Data Wrangler recommends transformations based on data in a target dataset and applies these transformations to features. Feature Store acts as a storage component for features and can access features in either batches or subsets. As for Pipelines, it allows users to define, share, and reuse each step of an end-to-end machine learning workflow with preconfigured customizable workflow templates while logging each step in SageMaker Experiments.
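
As a flavor of the SDK these services expose, here is a minimal Feature Store sketch using the SageMaker Python SDK: register a feature group from a DataFrame and ingest rows. The role ARN, bucket, and schema are placeholders.

```python
# Sketch of the Feature Store piece: infer a schema from a pandas DataFrame,
# create a feature group, and ingest rows. Standard SageMaker Python SDK
# calls; the bucket, role, and columns are placeholders.
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder

df = pd.DataFrame({
    "customer_id": ["c1", "c2"],
    "lifetime_value": [120.0, 87.5],
    "event_time": [time.time()] * 2,
})

group = FeatureGroup(name="customers", sagemaker_session=session)
group.load_feature_definitions(data_frame=df)   # infer schema from the frame
group.create(
    s3_uri="s3://my-bucket/feature-store",      # placeholder offline store
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,
)
# Creation is asynchronous; in practice, poll describe() until the group
# reports Created before ingesting.
group.ingest(data_frame=df, max_workers=2, wait=True)
```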

With upwards of 82% of firms saying that custom app development outside of IT is important, Gartner predicts that 65% of all apps will be created using low- and no-code platforms like Canvas by 2024. Another study reports that 85% of 500 engineering leaders think that low- and no-code will be commonplace within their organizations as soon as 2021.

If the current trend holds, the market for low- and no-code could climb to between $13.3 billion and $17.7 billion in 2021 and between $58.8 billion and $125.4 billion in 2027.

Categories
AI

SkyPoint Cloud launches predictive insights product

Customer data platform SkyPoint Cloud today announced the launch of SkyPoint Predict, predictive customer insights powered by artificial intelligence (AI). SkyPoint unifies fragmented customer data from multiple systems and applications into comprehensive “customer 360” profiles. SkyPoint Predict creates an enhanced customer experience while providing business and technology users with the tools to secure, govern, and access that data.

With today’s launch of SkyPoint Predict, the platform automatically enriches customer profiles with behavioral insights. Equipped with predictive analytics on each customer, brands can deliver personalized experiences, build granular segments, support advanced analytics, and do more to keep customers engaged.

For consumer brands, this translates to more loyal customers and higher lifetime value. For health care brands, this means timelier and more personalized interventions to drive patient empowerment.

Now users can identify a customer’s predicted customer lifetime value (CLV), churn propensity — the rate at which customers stop doing business with an entity — and product preferences. No model building or manual tuning is required on the part of the user.
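
To make “churn propensity” concrete, the sketch below trains a toy classifier of the kind such a system might automate behind the scenes. The features, data, and model choice are invented for illustration; SkyPoint’s actual pipeline isn’t public.

```python
# What "churn propensity, no model building required" automates behind the
# scenes, in miniature: a classifier over profile features that emits a
# churn probability per customer. All features and data here are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Columns: days_since_last_order, orders_per_month, support_tickets
X = rng.random((500, 3)) * [90, 10, 5]
# Toy label: customers who go quiet and complain a lot tend to churn.
y = ((X[:, 0] > 45) & (X[:, 2] > 2)).astype(int)

model = GradientBoostingClassifier().fit(X, y)
new_customer = [[60, 1.5, 4]]
print(f"churn propensity: {model.predict_proba(new_customer)[0, 1]:.0%}")
```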

Tisson Mathew, SkyPoint founder and CEO, said, “Predictive insights enable businesses to anticipate customers’ needs, delivering experiences that drive engagement, improve health, and build long-term loyalty.”

Predictive insights drive engagement

In an effort to create a database with a brain, this latest launch from SkyPoint makes AI a core part of profile enrichment. Predictive insights transform data about an individual customer into assets that can then be used for population-level analytics, segmentation, and personalization of the customer experience.

SkyPoint has used machine learning in other parts of its platform, most notably in its proprietary identity resolution technology, which relies on algorithmic linking of customer records with incomplete or inaccurate information. SkyPoint unifies fragmented customer data with customer profiles, then provides users with secure access to that data.

Two core infrastructure investments enable SkyPoint to deliver adaptive predictive insights. The first is SkyPoint’s use of open data standards, including the Common Data Model (CDM) and Fast Health Interoperability Resources (FHIR), which enforce well-defined semantics around business concepts. The second is an autonomous AI pipeline, supported through a partnership with Databricks.

“AI enables brands to see the people in their data and connect with them on a human level, even at massive scale,” said Joelle Poe, chief product officer at SkyPoint.

AI-powered profile enrichment

Consumer rights are expanding, and the data privacy legislation list is growing longer every year, from opt-out models like CCPA and CPRA to opt-in models like GDPR. Through a self-service privacy center, DSR automation, real-time data maps, and a zero-trust data vault, SkyPoint asserts that it will make privacy compliance easier by making consumer data and preferences accessible, centralized, and actionable.

SkyPoint’s clients include health care, ecommerce and retail, hospitality, and sports and media brands including Wyld, Vivante Health, the Ace Hotel, and the Portland Timbers.

Chris Thompson, director of CRM and analytics at Portland Timbers, said, “SkyPoint brings connectivity to all of our data and technology. I can stitch together different audiences and insights to understand what our fans want and need. By having information in one easily accessible place, SkyPoint is helping our team work effectively to overcome problems and produce results.”
