Categories
Game

New ‘FIFA Mobile’ mode puts the focus on strategy, not action

Would you rather oversee your FIFA Mobile team than control your players’ every last step? You now have your chance. EA has introduced a Manager Mode to the Android and iOS title that has you focusing on strategy and tactics rather than action. You choose the starting lineup, set tactics in real time (such as attacking or countering), and let your team play. You can even queue multiple matches as you climb the division ranks.

The corresponding game update also improves goalkeepers, adds player switching options and offers kits for 30 national teams. The upgrade is available now.

This doesn’t turn FIFA Mobile into a management sim like Football Manager. You aren’t scouting talent, shaping training programs or wrestling with the team’s board. Think of this more as the soccer equivalent of an auto battler like Auto Chess or Teamfight Tactics — it’s a slightly more relaxed experience that rewards situational awareness more than fast reflexes.

Categories
AI

NATO launches AI strategy and $1B fund as defense race heats up

The North Atlantic Treaty Organization (NATO), the military alliance of 30 North American and European countries, this week announced that it would adopt its first AI strategy and launch a “future-proofing” fund with the goal of investing around $1 billion. Military.com reports that U.S. Defense Secretary Lloyd Austin will join other NATO members in Brussels, Belgium, the alliance’s headquarters, to formally approve the plans over two days of talks.

Speaking at a news conference, Secretary-General Jens Stoltenberg said that the effort was in response to “authoritarian regimes racing to develop new technologies.” NATO’s AI strategy will cover areas including data analysis, imagery, and cyberdefense, he added.

NATO said in a July press release that it was “currently finalizing” its strategy on AI and that principles of responsible use of AI in defense will be “at the core” of the strategy. Speaking to Politico in March, NATO assistant secretary general for emerging security challenges David van Weel said that the strategy would identify ways to operate AI systems ethically, pinpoint military applications for the technology, and provide a “platform for allies to test their AI to see whether it’s up to NATO standards.”

“Future conflicts will be fought not just with bullets and bombs, but also with bytes and big data,” Stoltenberg said. “We must keep our technological edge.”

NATO’s overtures come after a senior cybersecurity official at the Pentagon resigned in protest because of the slow pace of technological development at the department. Speaking to the press last week, Nicolas Chaillan, former chief software officer at the Air Force, said that the U.S. has “no competing fighting chance against China” in 15 to 20 years, characterizing the AI and cyber defenses in some government agencies as being at “kindergarten level.”

In 2020, the U.S. Department of Defense (DoD) launched the AI Partnership for Defense, which brings together 13 countries from Europe and Asia to collaborate on the use of AI in a military context. More recently, the department announced that it plans to invest $874 million next year in AI-related technologies as part of the Army’s $2.3 billion science and technology research budget.

Much of the DoD’s spending originates from the Joint Artificial Intelligence Center (JAIC) in Washington, D.C., a government organization exploring the use and applications of AI in combat. (In news related to today’s NATO announcement, JAIC is expected to finalize its AI ethics guidelines by the end of this month.) According to an analysis by Deltek, the DoD set aside $550 million in AI obligations awarded to the top ten contractors, and defense accounted for 37% of total AI spending by the U.S. government, with contractors receiving the windfall.

Fearmongering

While U.S. — and now NATO — officials grow more vocal about China’s supposed dominance in military and defense AI, research suggests that their claims somewhat exaggerate the threat. A 2019 report from the Center for Security and Emerging Technology (CSET) shows that China is likely spending far less on AI than previously assumed, between $2 billion and $8 billion. That contrasts with the $70 billion figure originally shared in a speech by a top U.S. Air Force general in 2018.

While Baidu, Tencent, SenseTime, Alibaba, iFlytek, and some of China’s other largest companies collaborate with the government to develop AI for national defense, MIT Tech Review points out that Western nations’ attitudes could ultimately hurt U.S. AI development by focusing too much on military AI and too little on fundamental research. A recent OneZero report highlighted the way the Pentagon uses adversaries’ reported progress to scare tech companies into working with the military, framing government contracting as an ideological choice to support the U.S. in a battle against China, Russia, and other competing states.

Speaking at the Center for Strategic and International Studies Global Security Forum in January 2020, then-Secretary of Defense Mark Esper said that DoD partnerships with the private sector are vital to the Pentagon’s aim of remaining a leader in emerging technologies like AI. Among others, former Google CEO Eric Schmidt — a member of the DoD’s Defense Innovation Board — has urged lawmakers to bolster funding in the AI space while incentivizing public-private partnerships to develop AI applications across government agencies, including military agencies.

Contractors have benefited enormously from the push — Lockheed Martin alone netted $106 million in 2020 for an AI-powered “cyber radar” initiative. Tech companies including Concur, Microsoft, and Dell have contracts with U.S. Immigration and Customs Enforcement, and Microsoft has pledged — over employee protests — to build versions of its HoloLens headsets for the U.S. Army. (Microsoft this month agreed to commission an independent human rights review of some of its deals with government agencies and law enforcement.)

Amazon and Microsoft fiercely competed for — and launched a legal battle over — the DoD’s $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract, which was canceled in July after the Pentagon launched a new multivendor project. Machine learning, computer vision, and facial recognition vendors including TrueFace, Clearview AI, TwoSense, and AI.Reverie also have contracts with various branches of the U.S. military.

For some AI and data analytics companies, like Oculus cofounder Palmer Luckey’s Anduril and Palantir, military contracts have become a top source of revenue. In October, Palantir won most of an $823 million contract to provide data and big data analytics software to the U.S. Army. And in July, Anduril said that it received a contract worth up to $99 million to supply the U.S. military with drones aimed at countering hostile or unauthorized drones.

While suppliers are likely to remain in abundance, the challenge for NATO will be aligning its members on AI in defense. The U.S. and others, including France and the U.K., have developed autonomous weapons technologies, but members like Belgium and Germany have expressed concerns about the implications of the technologies.

Categories
AI

Cloud-native RPA architecture drives Automation Anywhere’s strategy

Robotic process automation (RPA) leader Automation Anywhere Inc. (AAI) has grown to become the top public cloud RPA platform, with 54% market share according to IDC — more than all the other players combined. This seems to indicate that the company’s major cloud engineering push and marketing partnership with Google have paid off.

Cloud RPA is the fastest growing sector of the hot RPA market, and AAI’s recent cloud gains indicate the company has an edge against other RPA leaders, including UiPath and Blue Prism. These results also show that the company is poised to stay competitive even as Microsoft begins moving into the cloud RPA market.

However, the cloud market is still only a tiny sliver of the overall RPA market, and the other players have the resources to keep pace, IDC’s intelligent process automation lead Maureen Fleming told VentureBeat. The RPA market grew 37.1% to $1.7 billion in 2020, while cloud RPA grew 387.8% and still accounted for only a 3.3% share of the total.

Strong cloud growth

Fleming points to AAI’s technology overhaul, improved partnerships, and business incentives as the drivers of its cloud RPA story.

“AAI went all-in on cloud-native RPA architecture, with no easy path back for clients who were running Enterprise in their data centers and on worker desktops,” Fleming said. They focused heavily on making migration to their new architecture as straightforward as possible while sending a clear signal to customers that cloud architecture is the path forward.

As a result, new customers and new automations from existing customers adopted AAI’s cloud SaaS or a private version of Automation 360. The IDC model behind AAI’s share figure looked only at public cloud services adoption, but hosting RPA software on public cloud infrastructure also grew in 2020. “Its strong growth in cloud — along with the partnership with GCP [Google Cloud Platform] — are cases in point where the decision to rearchitect paid off,” Fleming said.

Cloud-native RPA race still young

It is too early to tell the extent to which cloud RPA leadership will translate into overall market leadership.

Both AAI and Blue Prism, whose offerings were aligned with the flexibility enterprises needed to support work at home, saw impressive growth during the pandemic. UiPath also offers an RPA public cloud service (RPA SaaS) and is gradually shifting its platform technologies to become more cloud-native. Architecturally, however, its platform is not as portable as AAI’s, which can be deployed anywhere Kubernetes runs. But UiPath does have the resources to offer RPA SaaS across clouds as it sees the opportunity.

All the RPA vendors are offering some version of RPA in the cloud. These capabilities run the gamut from simply making it easy to run RPA applications on virtual machines all the way to completely refactoring their RPA stack to run on a cloud-native architecture. AAI, for example, made a significant effort to refactor its RPA stack to run natively on the cloud.

Appian and IBM also have cloud-native RPA capabilities. Others are treating this as an evolution in which the refactoring is done progressively. Fleming said that approach can make sense because it accounts for the need for backward compatibility, but it can be expensive to maintain and move forward in the long run.

When characterizing market position, it is essential to consider how the different vendors recognize revenues. UiPath sells software subscriptions rather than SaaS, which translates to faster upfront growth. Most of the booking is recognized as revenue upfront if a customer purchases a software subscription and deploys it on public cloud infrastructure. As a result, a company like UiPath will show much higher growth by selling software subscriptions than it would selling public cloud services. In contrast, AAI’s approach recognizes SaaS revenue over the life of a booking, which softens growth in the early part of the booking cycle but strengthens the later stages of a booking relative to software recognition.
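To make the accounting difference concrete, here is a minimal, hypothetical sketch (the booking size, term, and simplified recognition rules are invented for illustration, not figures from IDC or the vendors) of how the same booking shows up under upfront software-subscription recognition versus ratable SaaS recognition:

```python
# Hypothetical illustration: the same $120,000, 12-month booking reported under
# two simplified revenue-recognition models (real accounting is more nuanced).

def upfront_recognition(booking: float, months: int) -> list:
    """Software subscription: most of the booking is recognized as revenue upfront."""
    return [booking] + [0.0] * (months - 1)

def ratable_recognition(booking: float, months: int) -> list:
    """SaaS: revenue is recognized evenly over the life of the booking."""
    return [booking / months] * months

booking, months = 120_000.0, 12
print(upfront_recognition(booking, months)[:3])  # [120000.0, 0.0, 0.0] -> growth looks fast early
print(ratable_recognition(booking, months)[:3])  # [10000.0, 10000.0, 10000.0] -> growth is spread out
```

In the first case most of the booking lands in the period it is sold, which is why a subscription seller can show faster upfront growth than a SaaS seller with an identical booking.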

Follow the cloud-native RPA UI

At a meta level, RPA competes with low-code development in lowering the cost of building automations, but the comparison is a complicated one. Fleming said, “No one should assume that low code is easy to learn and adopt by non-professional developers. In fact, the opposite is true.” Business users can employ a growing number of RPA features to build an automation, and some RPA products are readily usable by trained business users. But sophisticated automation capabilities often require trained professional developers.

That said, AAI re-engineered its studio to support line-of-business developers as well as professional developers. UiPath has a separate, narrower studio environment for line-of-business users. Nintex RPA also has a business-friendly developer experience.

Improving the development UI of cloud-native RPA could play as big a role in RPA leadership as better cloud support. Fleming said, “It isn’t cloud that creates the designation of ‘usable by line of business’ but more the studio environment, training, examples, documentation, and methodologies implemented to support the acquisition of skills.”

Categories
Tech News

Researchers say they’ve found the ideal strategy to pay off student loans

When many people near college graduation, they begin to contemplate how they’ll deal with the student loans they’ve racked up over the past few years. The burden — which grows more substantial with every generation — can result in stress and, if not managed properly, may throw one’s life plans off track for several years. Mathematicians at the University of Colorado Boulder may have a solution: they developed a mathematical model to explore the ideal repayment strategy.

Generally speaking, college graduates get a brief grace period during which they aren’t required to make payments on their loans. Once payments start, two different options are available: an income-based repayment strategy that involves paying a certain amount monthly based on one’s salary, or simply throwing as much money at the loan as possible to pay it off in a shorter period of time.

In many cases, graduates are advised to pay the loans off as quickly as possible if the funding amount is on the smaller side. On the flip side, graduates are typically told to take the income-based repayment option if they’ve taken out a substantial amount in student loans. The new study suggests a hybrid approach may be preferable.

The mathematical model takes into account things like compounding interest rates, the income tax that may need to be paid, and more. The findings indicate that some graduates may benefit from a hybrid-style repayment approach that involves paying off as much as possible for the first several years, then switching over to an income-based repayment plan for the remainder of the balance.
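The researchers haven’t released their model as code, so the sketch below is only a simplified, hypothetical illustration of the kind of comparison such a model makes; the loan balance, interest rate, income, forgiveness horizon, and tax treatment are all made-up inputs, and the real model handles these factors far more carefully:

```python
# Simplified, hypothetical sketch of comparing repayment strategies; the actual
# CU Boulder model is more sophisticated (taxes, salary growth, forgiveness rules).

def total_cost(balance, rate, years_aggressive, aggressive_payment,
               income, ibr_fraction=0.10, horizon_years=25, tax_rate=0.25):
    """Total money paid (plus tax on any forgiven balance) for a hybrid plan:
    pay `aggressive_payment` per month for `years_aggressive` years, then switch
    to an income-based plan paying `ibr_fraction` of income until forgiveness."""
    monthly_rate = rate / 12
    paid = 0.0
    for month in range(horizon_years * 12):
        if balance <= 0:
            break
        payment = aggressive_payment if month < years_aggressive * 12 else ibr_fraction * income / 12
        payment = min(payment, balance * (1 + monthly_rate))  # never overpay the balance
        balance = balance * (1 + monthly_rate) - payment
        paid += payment
    # Any balance left at the forgiveness horizon may be taxed as income.
    return paid + max(balance, 0.0) * tax_rate

# Compare: pure income-based plan (no aggressive years) vs. a 5-year aggressive hybrid.
print(total_cost(60_000, 0.06, 0, 0, income=50_000))
print(total_cost(60_000, 0.06, 5, 1_000, income=50_000))
```

Running the two calls compares a pure income-based plan against a hybrid plan that front-loads payments for five years before switching; which one comes out cheaper depends entirely on the inputs, which is exactly the point the researchers make about personal factors.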

The researchers haven’t released their work as a public calculator, but they plan to refine the model and potentially make it available so existing repayment calculators can integrate it. The ideal repayment method will ultimately depend on personal factors, such as anticipated salary.

Categories
AI

Salesforce boosts customer data platform strategy as rivals circle

Salesforce today announced AI enhancements, enabled by its Einstein platform, to its customer data platform (CDP), just as rivals large and small are making similar investments. Salesforce is also tightening integration between its ecommerce cloud platform and the CDP, as well as making it possible to segment audiences in real time based on factors like membership status, loyalty tier, and points balance.

Announced during Salesforce Connections 2021, these offerings headline a wider series of updates Salesforce is making across its Digital 360 portfolio of applications and services as part of its overarching Customer 360 strategy. Other new offerings range from reports that analyze customer journeys by channel to features that make it simpler to engage customers via the Snapchat and WhatsApp platforms.

There’s also now a Progressive Web Application (PWA) Kit and Managed Runtime. Enabled by headless services provided by Salesforce, they let developers more easily decouple front-end and back-end technologies to create customized application experiences. This capability should allow organizations to accelerate digital business transformation initiatives by using Salesforce application programming interfaces (APIs) to drive faster development of applications while retaining control over the front-end application experience.

Building a ‘single source of truth’

In terms of strategic initiatives, the CDP is a crucial battle for Salesforce. Rather than housing their customer data in a traditional customer relationship management (CRM) application, which can be more challenging to access, organizations have started to employ CDPs as a way to make that data more accessible to the range of omnichannel applications that drive digital business transformation initiatives. In effect, the CDP becomes the hub around which customer engagements — whether occurring in real time over email, phone, social media platforms, or mobile applications — are all tracked.

“It’s a powerful single source of truth,” said Lidiane Jones, executive vice president and general manager for Commerce Cloud at Salesforce.

That single source of truth is now at the core of many digital business transformation strategies, which require companies to finally unify customer data in a way that makes business insights actionable in near real time, rather than generating yet another business intelligence report long after it’s too late to have any meaningful impact on the outcome.

Taking on disparate rivals

The challenge for Salesforce is that rivals large and small are all making similar CDP investments. While a CDP doesn’t replace the need for a CRM application for a sales team, it does play a more strategic role by enabling organizations to engage customers in a much more consistent fashion. Engagements that occur across social media networks and mobile applications can be more easily personalized, monitored, and analyzed. IT vendors, ranging from makers of marketing automation platforms to providers of enterprise resource planning (ERP) applications, are all vying to become the provider of the CDP any given organization standardizes on.

Salesforce is clearly betting that much of the data organizations are looking to shift into a CDP already resides in its CRM and marketing applications. In its most recent quarter, the company reported revenue of $5.96 billion, a 23% increase over the same quarter a year ago. Salesforce also revealed it expects revenue for the second quarter to exceed $6.22 billion. Overall, the company expects revenue for the full 2022 fiscal year to range between $25.9 billion and $26.0 billion, representing a 22% growth rate.

Competing against rivals in the CRM space is one thing. Battling for market share against everyone from Adobe and Microsoft to Oracle and SAP for dominance of an emerging CDP market may be quite another undertaking, especially for a company that still reported a slight loss for its most recent quarter. Regardless of the outcome, the contest for CDP dominance will most certainly be nothing less than brutal in the months ahead.

Categories
AI

How Walmart adapted its IoT strategy to the pandemic

Walmart made $559 billion in total revenue during the COVID-19 pandemic’s first fiscal year, up from $514.4 billion in fiscal 2019, thanks in part to newly integrated internet of things (IoT) capabilities to improve food quality and lower energy consumption. Walmart claims its systems for IoT deployments are built at a scale unmatched across the retail industry: The company reports that, every day, it takes in approximately 1.5 billion messages and analyzes over one terabyte of data. This proprietary software includes a cloud-based dashboard application to manage volume and detect anomalous events, such as refrigeration failures, so they can ostensibly be fixed more quickly, saving ice cream from melting while driving corporate profit.

VP of technology Sanjay Radhakrishnan oversees Walmart’s IoT platforms and applications. Radhakrishnan sat down with VentureBeat to describe the giant retail chain’s long-term data strategy and how it’s changed since last March to accommodate changing store ecosystems across the U.S.

This interview has been edited for clarity and brevity.

VentureBeat: How would you describe Walmart’s approach to IoT at a high level?

Sanjay Radhakrishnan: When we started on this journey, we had three key objectives. One was to address this at Walmart scale — to ensure Walmart can actually leverage the impact of IoT at Walmart scale. The second objective was to ensure that we are the control plane for our data. So, we control where our data lands, and we have the ability to convert it into business insights. And then the last objective was really maintaining that connection to our end customer experience, and ensuring that we are being good corporate citizens with respect to our sustainability initiatives. So I just want to set the stage that when we started on this IoT journey, those were the three main drivers that we were looking to solve.

VentureBeat: I’m really interested in that IoT journey. Could you tell me more about how Walmart has evolved its tech platforms over the past couple of years? And what has that progression looked like, maybe in the past 5, 10, even 15 years?

Radhakrishnan: You know, with those three objectives in the background, we have always had all kinds of devices in our stores. And these devices typically come from vendors or original equipment manufacturers (OEMs) that actually manufactured these devices. Typically this equipment comes with some kind of an HMI, a human-machine interface, that’s on the machine, so you can actually go connect to it and collect data out of these devices in a one-off fashion.

And we’ve always done that. But with this IoT journey, what we really wanted to do was move into the driver’s seat, where we can actually normalize these datasets coming from all these different machines, different devices, and different OEMs. We normalize that data, and we control our data using IoT from these devices and provide those data sets to our business in a way where we can actually convert them into useful information and useful insights and really improve that end customer experience. So our journey really has been about moving from individual point-to-point access to these individual machines to growing this at scale by being the control plane: getting all this data from equipment, normalizing it, and simplifying it into our language so that we can do intelligent things with it, right? And so that focus has really shifted inside Walmart to building our own software that we are using to form that control.

VentureBeat: That’s fascinating. And, in building this proprietary software at such a great scale, did Walmart run into any particular kinds of challenges or problems that it then worked to overcome?

Radhakrishnan: The biggest challenge we have is just the variety of devices that we have in our ecosystem. They come from different OEMs, they are across different generations of these devices, and they all speak different languages. And what this means to us is, in our world, we are dealing with a wide mix of sensors, a range of protocols, and really a myriad of information models. So our approach has been to look at how we build our software and where we are talking to all these devices. You know, talking to the different protocols. But we have an ability to kind of normalize all of that data into one consistent IoT specification. That’s a Walmart IoT specification. And then we apply the right kind of data quality checks, so that we can certify the data and drop it into our control plane. And then we take it from there.

So once we are able to connect the devices to our control plane, then we can land the data, either at the edge or in the cloud. And afterward our software engineers can build all kinds of applications for our business customers. And we really looked at this in a cloud-agnostic fashion. So we ensure that we have a dual-pronged strategy with our infrastructure: we leverage infrastructure in our own datacenters, and we also leverage infrastructure at a top cloud provider. The focus really has been to ensure that our IoT pipeline software can access the right infrastructure at scale, considering things like latency and connectivity concerns.
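Walmart hasn’t published its IoT specification, so the sketch below is only a rough, hypothetical illustration of the normalization idea Radhakrishnan describes; the vendor message formats, field names, units, and quality check are all invented for this example:

```python
# Hypothetical sketch: normalizing heterogeneous device telemetry into one common
# record before a data-quality gate, in the spirit of a company-wide IoT spec.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    store_id: str
    device_id: str
    metric: str          # e.g. "case_temperature"
    value: float         # normalized unit (Celsius)
    unit: str
    observed_at: datetime

def from_vendor_a(msg: dict) -> Reading:
    """Vendor A reports Fahrenheit using its own JSON field names."""
    return Reading(
        store_id=msg["site"], device_id=msg["ctrl"],
        metric="case_temperature",
        value=(msg["tempF"] - 32) * 5 / 9, unit="C",
        observed_at=datetime.fromtimestamp(msg["ts"], tz=timezone.utc),
    )

def from_vendor_b(msg: dict) -> Reading:
    """Vendor B already reports Celsius but uses a different schema."""
    return Reading(
        store_id=msg["storeNumber"], device_id=msg["deviceId"],
        metric="case_temperature",
        value=msg["celsius"], unit="C",
        observed_at=datetime.fromisoformat(msg["timestamp"]),
    )

def quality_check(r: Reading) -> bool:
    """Simple data-quality gate before a reading enters the control plane."""
    return -40.0 <= r.value <= 60.0

raw_a = {"site": "100", "ctrl": "rack-7", "tempF": 10.4, "ts": 1_614_556_800}
raw_b = {"storeNumber": "100", "deviceId": "rack-8", "celsius": -11.5,
         "timestamp": "2021-03-01T00:00:00+00:00"}
normalized = [r for r in (from_vendor_a(raw_a), from_vendor_b(raw_b)) if quality_check(r)]
```

The point is simply that once every vendor’s message is mapped onto one record shape and passes a quality gate, downstream applications can be written against a single specification rather than against each device’s protocol.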

VentureBeat: Within this group of devices you mentioned, are you including in-store ones like refrigeration systems from the ice cream case study?

Radhakrishnan: Yeah, that’s right. So you walk into the store and you see a lot of refrigeration cases. We are talking about sensors that are actually inside these refrigeration cases. And they are connected to what we call controllers in the store. We are actually connecting into those controllers and pulling device telemetry signals. It’s a lot of operating functions that you’re getting out of the equipment, and we are getting it in a consistent manner, in a continuous stream, to do intelligent things.

VentureBeat: For the refrigeration IoT tech, could I hear more about how that’s architected in the cloud? Are there any specific foods, like ice cream or frozen pizzas for example, that are easier or more difficult to maintain with the technology?

Radhakrishnan: We stream from edge to the cloud, and we have different pathways in the cloud based on data usage patterns. Our IoT applications can access data across the edge and cloud to solve business problems. We are cloud agnostic and leverage a dual-pronged strategy that includes access to infrastructure in our own datacenters and top cloud providers. And our focus has been to ensure that our IoT pipeline software can access the right infrastructure at scale, considering connectivity and latency constraints. The type of food in the refrigeration cases does not change the complexity of our system.

VentureBeat: Do you have any statistics on whether Walmart food quality has been more consistent since IoT tech was implemented? I’m curious if there are any specific stores or products that have seen a particularly measurable difference.

Radhakrishnan: What I’ll say is, our focus has been on how you drive operational efficiencies in the store. For example, when things go wrong in the store, technicians actually fix problems with this equipment that is in the store. So the focus has been on how you get the right technician to the right place at the right time so that we can proactively address issues. Because if you don’t, it could impact product quality. Since we have started this journey, just by looking at reference duration, we have been able to improve our refrigeration equipment health by an average of 30%.

VentureBeat: On a related note, I remember reading about Walmart’s intention to limit energy consumption. Could you tell me more about how that energy approach is architected in the cloud? Are there any specific frameworks or data strategies that Walmart is using to accomplish that?

Radhakrishnan: If you look at our architecture and our frameworks, I would say it’s everything from connecting to the devices to using sophisticated infrastructure and software that runs on the edge and actually knows how to connect to these devices while holding telemetry data. Now, it depends on our use cases. If they’re kind of low latency use cases, then we store data at the edge, and we have logic at the edge to fulfill those use cases.

Otherwise, we are streaming data to the cloud. And in the cloud, we have multiple kinds of patterns depending on data usage. We might send the data into kind of a cold path or a warm path, and our IoT applications have the ability to access the data, either at the edge or in the cloud. They can basically build business applications and solve business problems. So, if you’re talking about frameworks and the technology stack, it’s a mix.

We use Walmart homegrown and open standard frameworks like Spring and .NET Core. As for our device protocols, we can connect to devices all the way from BACnet to Modbus to serial communications to some of the more recent protocols like HTTP and Simple Mail Transfer Protocol (SMTP). If you look at the tech stack itself, typically our device drivers are written in Java, and the applications themselves are ReactJS and Node JS applications that use Linux-type operating systems.

VentureBeat: I’m interested in hearing more about how individual elements of Walmart’s tech stack — choices like Spring tools, for example — specifically help with IoT deployments. How and why do specific tools work well for Walmart’s use cases, like scaling large volumes of data?

Radhakrishnan: Messages are generated by the equipment (such as HVAC and refrigeration controllers) in the stores and processed by software on edge infrastructure. From the IoT edge infrastructure, messages are then sent to our cloud storage to be processed and consumed by software applications. We use a hybrid approach of edge and cloud computing depending on the type of data. The data is sent over our secured network to our proprietary solution, which has multiple architectural components and microservices. We use a mix of Walmart internally developed and open standard frameworks like Spring Boot and .NET Core. Our strategy is to build our software to be cloud agnostic, so we use common frameworks and languages such as Java, Embedded C, React, Node JS, and Linux technologies.

Our focus is really trying to make sure that we map the right technology to solve the right business problem. We always start with the customer in mind. What is the use case? What’s the business problem? How do our internal customers solve it with the end customer in mind? Then we work our way back to what that means for tech and what the right tech stack is to actually fulfill that. So, I mean, we are pretty open, and the focus really is on understanding the customer problem and then marrying it to the right tech stack to solve that problem.

VentureBeat: Could you tell me a little bit more about Walmart’s IoT developments in the last year, and how they’ve helped the chain adjust to the COVID-19 pandemic’s challenges?

Radhakrishnan: The pandemic has definitely opened up new business problems and use cases for us where IoT is extremely useful to leverage. For example, when the pandemic hit, we reduced hours in our stores so our associates could restock inventory and sanitize stores for our customers. We have this system called Demand Response, which is one of the IoT applications that we have built in-house. And we were able to leverage that working model to control the temperature settings in the stores to adjust to these new hours, and that brought a lot of productivity to our associates. Instead of using more constrained and manual approaches, now they have an actual system where they can do remote deployment of capabilities and really control our HVAC systems in a remote manner at scale. From a productivity angle, it helped the business, and also from a sustainability angle, we were able to reduce the energy consumption on the grid. So to give you an example, our system was able to execute shedding events. We did it for about 200 sites, and we were able to save enough electricity to roughly power 20-plus U.S. households for a year. That gives you a sense of the scale of how we are giving back, both in terms of productivity for our associates and also in terms of sustainability.

VentureBeat: How has Walmart’s existing IoT and tech infrastructure allowed for its engineers to create new capabilities, like the COVID-19 responses, so quickly?

Radhakrishnan: Over the last few years, we have moved to the driver’s seat, where we built software that will normalize and control our data using IoT from these devices, converting the data to insights that the business can use in decision-making. We apply the right data quality checks to certify the data and bring the data into our control plane. We were able to incorporate real-time data streaming and improve the speed at which issues are identified and resolved in a highly accurate manner. Having this foundation in place has allowed us to quickly respond to external factors, like adjusting store hours overnight during the early response to COVID-19. Another recent example of the IoT technology allowing us to respond quickly would be in February, when the extreme cold weather impacted energy grids in numerous communities. We had the necessary controls in place for demand shedding already, so we were able to apply the tool in a new way that controlled HVAC heating set points and reduced our energy consumption. In less than two days, we used the technology to successfully reduce the HVAC energy consumption in almost 500 stores.

VentureBeat: Are there any other digital platform technologies related to ML, blockchain, IoT, or ERP Walmart is deploying? And are there any in particular that Walmart wants to research next?

Radhakrishnan: For our IoT use cases, we are looking at ways we can further improve the customer experience and our impact on the communities we serve. On the algorithm side, we will continue to update our algorithms as we identify trends between what the data is telling us and how we should respond. On the equipment side, we will identify other equipment that we can connect to that would benefit our customers through remote diagnostics and proactive maintenance.

Categories
AI

Only 13% of organizations are delivering on their data strategy, survey finds

A new survey of C-suite data, IT, and senior tech executives finds that just 13% of organizations are delivering on their data strategy. The report, which was based on a survey of 351 respondents at organizations earning $1 billion or more in annual revenue, found that machine learning’s business impact is limited largely by challenges in managing its end-to-end lifecycle.

MIT Technology Review Insights and Databricks conducted the survey, which canvassed companies including Total, the Estée Lauder companies, McDonald’s, L’Oréal, CVS Health, and Northwestern Mutual. Among the findings was that only a select group of “high achievers” — the aforementioned 13% — delivered measurable business results across the enterprise. This group succeeded by paying attention to the foundations of sound data management and architecture, which enabled them to “democratize” data and derive value from AI and machine learning technologies, according to the report’s authors.

“Managing data is highly complex and can be a real challenge for organizations. But creating the right architecture is the first step in a huge business transformation,” report editor Francesca Fanshawe said in a press release.

Democratization of data

Every chief data officer interviewed for the study ascribed importance to democratizing analytics and machine learning capabilities. This, they said, will help end users make more informed business decisions — the hallmarks of a strong data culture.

The respondents also advocated embracing open source standards and data formats. But the most significant challenge remains the lack of a central place to store and discover machine learning models, 55% of executives said. That’s perhaps why 50% are currently evaluating or actively implementing new, potentially cloud-based data platforms.

As Broadridge VP of innovation and growth Neha Singh noted in a recent piece, many firms try to develop AI solutions without having clean, centralized data pools or a strategy for actively managing them. Without this critical building block for training AI solutions, the reliability, validity, and business value of any AI solution is likely to be limited.

Organizations’ top data priorities over the next two years fall into three areas, all supported by wider adoption of cloud platforms, according to the report. These are: improving data management; enhancing data analytics and machine learning; and expanding the use of all types of enterprise data, including streaming and unstructured data. “There are many models an enterprise can adopt, but ultimately the aim should be to create a data architecture that’s simple, flexible, and well-governed,” Fanshawe continued.

General agreement

The MIT and Databricks findings come after Alation’s latest quarterly State of Data Culture Report, which similarly discovered that only a small percentage of professionals believe AI is being used effectively across their organizations. A lack of executive buy-in was a top reason, Alation reported, with 55% of respondents to the company’s survey citing this as more important than a lack of employees with data science skills.

The findings agree with other surveys showing that, despite enthusiasm around AI, enterprises struggle to deploy AI-powered services in production. Business use of AI grew a whopping 270% over the past several years, according to Gartner, while Deloitte says 62% of respondents to its corporate October 2018 report adopted some form of AI, up from 53% in 2019. But adoption doesn’t always meet with success, as the roughly 25% of companies that have seen half their AI projects fail will tell you.

Categories
Game

Outbreak Zombies Strategy Guide – Black Ops: Cold War

Season 2 of Call of Duty Black Ops: Cold War might be the most extensive free-content download since Warzone itself dropped in March of 2020. Season 2 introduced brand-new operators, weapons, and maps into the game. A cargo ship full of zombies crash-landed on the Prison shore in Warzone, and exposed nuclear missile silos currently cast an ominous aura over the future of Verdansk. However, the most widely anticipated feature of Season 2 is the release of Outbreak Zombies.

Outbreak Zombies is a semi-open world zombie experience set across Black Ops: Cold War‘s three Fireteam maps: Alpine, Ruka, and Golova. Players will warp between each area to complete objectives and slay the undead. Each location is full of buildings to loot, secrets to find, and dragons to feed. Yes, we said dragons to feed. This guide will walk you through everything you need to know about Outbreak Zombies, Black Ops: Cold War’s latest undead experience.

Outbreak at a glance

Your objective is to explore all three maps searching for top-tier weapons, loot, and gear. Each map will feature one main objective that will rotate between five different options. You’ll also be presented with three side objectives and a bonus objective should you be so lucky. Now, each map isn’t guaranteed to be fully loaded with missions. The best thing you can do for yourself is get acquainted with the different map markers and understand what each one means.

After completing the main objective, you’ll be prompted to activate a beacon that will warp you to the next map. When you warp, the beacon will take you to a different Fireteam map, and the difficulty will be ramped up to the next level. You can also choose to Exfil at the beacon. If you choose to Exfil, you’ll have to run toward a marked area on the map and eliminate a certain number of zombies to clear the landing zone. The same Exfil strategies from Firebase Z and Die Maschine apply to Outbreak. Chopper Gunners and Monkey Bombs will be your best friends.

To put the difficulty of each wave in perspective:

Wave 1 — Round 1 zombies

Wave 2 — Round 5 zombies

Wave 3 — Round 10 zombies

Use this five-round scale to determine when it’s best to upgrade your weapons at the Pack-a-Punch and mannequin. Each Outbreak map will come fully equipped with everything you’ll need to upgrade. However, you’ll have to do some exploring to find each station. Alternatively, you can rush the main objective and activate the beacon. Doing so will spawn the Pack-a-Punch machine, a crafting table, the Wonderfizz Machine, and the upgrade mannequin in the immediate vicinity.

You’ll earn significantly fewer points than in typical games of Die Maschine or Firebase Z. Killing zombies will award you 25 points instead of 100, and critical kills will reward you with 35 points. While there are hundreds of infinitely spawning zombies to kill on each map, players will receive a quarter of the usual points for each one.

The main objectives 

There are five main objectives to complete in Outbreak Zombies. Players are free to loot and explore as much as they want before they start the objectives. However, once they do, there’s no stopping it until the mission is complete. Upon completion, players will be able to activate the beacon. Again, they do not have to go through the beacon right away and are free to explore, loot, and upgrade. Here are the five main objectives:

Holdout 

In Holdout, players are transported through the Dark Aether to a close-quarters building that they cannot escape. The windows are boarded up in classic zombie fashion, and players will have to stay alive for 3 1/2 minutes. Holdout is very easy in the early stages, but once players get up to round 5 or 6, they’ll be in for a real challenge.

The most effective way to complete a Holdout is to hunker down in a corridor and defend the doorway. As long as one teammate keeps an eye on the window, the rest of the team can bottleneck zombies in the door and kill them with ease. One Holdout building will transport players to a two-story cabin. Head up the stairs and down the hallway. There will be a boarded-up door to your back. Zombies will ignore the door and only come up the stairs. All players can focus their fire on the stairs and doorway to the left.

Escort 

Escort will have players following an armored rover with a caged monkey on the back. It will travel to three different portals before finally passing through into the Dark Aether. Defend the rover from incoming zombies and don’t let them damage it. The rover will not move if there are zombies in front of it. Keep the path cleared, and don’t lose track of the rover.

While Escort will prove easy at first, don’t underestimate how much damage elite enemies can do. A team of four should keep two people in front and two people in the back. Each time the rover gets spat out of an aether portal, its cargo (the monkey) will transform. If the monkey transforms into a pick-up like Max Ammo or Double Points, approach the cage and melee it to grab the upgrade.

Retrieve 

Retrieve is the most difficult of all five objectives. Players are tasked with picking up and transporting two aether canisters to a rocket. While holding the canister, your movement will be restricted. You cannot sprint, and you cannot jump. You can still shoot to defend yourself. You’re also free to drop the aether canister if you get overrun.

You’ll also get a special field upgrade that releases a powerful pulse wave in every direction. Use the pulse wave when you make it back to the rocket to load the aether. It will clear out nearby zombies and leave a window of opportunity. It takes a few seconds to load the canister in, so make sure your teammates are nearby to provide cover. 

Players can drive cars while holding the aether canister. However, the vehicle will take damage as they’re driving, so keep an eye on its health and bail out when it’s at critical damage. It is wise to drive two cars up to the rocket before starting the objective, especially on higher rounds. Just be careful of the Tempest’s EMP blasts. 

Defend 

This objective is straightforward. Players will load a severed head into a machine and defend the machine from incoming zombies. This objective is broken up into three waves of increasing difficulty. Don’t let the ease of the first wave fool you. Sentry turrets will come in handy when defending the church building. Chopper Gunners are better suited for outside areas.

Eliminate

This objective will spawn a powerful elite nearby. This mini-boss fight will have three stages as the elite jumps around to different nearby POIs. Track it down and kill it before it kills the Requiem team. 

Side objectives

While completing the main objective is the only way to activate the beacon, completing side objectives is equally important. This is the best way to earn more points to spend at the Pack-a-Punch and Wonderfizz Machines.

Three objectives will randomly spawn on each map. Sometimes you’ll get all three; sometimes you’ll only get one. Check the map as soon as you spawn and formulate a plan to hit all objectives available. These side objectives are the best way to find upgraded weapons. However, the best part about completing these is the random perk reward. Whether it drops as a purple pick-up or a green Max Ammo-like bonus, random perks will save you some serious cash in the early rounds.

Feed the dragon

Marked by an encircled dragon head on the map, players will find what looks like a rocket ship with two large canisters on each side. They’ll be prompted to “begin feeding time” at the machine. Activating the machine will spawn a purple Ring of Fire-like circle in front of the dragon. An actual aether dragon head will pop out of the machine as well.

Zombies start spawning in every direction, but players will have to wait for them to enter the circle to feed the beast. Wait for the zombies to glow purple before killing them. Keep track of your progress by checking the canisters on the side of the dragon. They’ll be filling with pink liquid. You’ll know you’ve succeeded if the horizontal bar at the top of each canister is full. After a short amount of time, the dragon will rocket into space and leave behind a loot box relative to how many zombies you killed in the circle.

Eliminate the HVT

Marked by a cluster of skulls on the map, this objective has players find a deceased Requiem team. Interact with a radio on the corpse to spawn a group of elite enemies nearby. You’ll see large beams of blue light descending from the clouds, and the same skull cluster will mark them. Multiple zombies, Plaguehounds, and Hellhounds will guard the HVT (high-value target). The target itself will differ depending on the wave. Early-round HVTs will be an average Mangler or Megaton. However, higher rounds will spawn mini-bosses with massive pools of health dealing absurd amounts of damage. Tread lightly.

The HVTs will drop upgraded weapons, perks, equipment, and killstreaks when defeated. These are the most fruitful of the three side objectives and should be done before the main objective.

Golden Loot Box

A lightning bolt on your map marks the Golden Loot Box. As you near it, you’ll see a blue light shoot down from the sky and spawn the loot box out in the open. Interact with it to “confront the Dark Aether.” A circle of zombies will spawn, ranging from average undead to high-level elites. You must eliminate all of them to unlock the loot box and claim your reward. Again, this is the best way to get free perks and upgraded weapons.

The Screaming Aether Orb

The Screaming Aether Orb is not marked on your map and can only be found through exploration. Once you’re close enough, you will see it pop up as a ridged circle on your mini-map. Unload on the orb to spawn multiple Zombie Essence canisters. Once you shoot it enough, the Orb will let out a child-like scream and start moving. Thankfully, it will only move in a straight line, so following it is easy.

However, do not lose track of the Orb, as it’s not easy to locate once lost. After you’ve shot it and chased it down three times, the Orb will explode with valuable loot. You can also take advantage of a Double Points upgrade when collecting the essence dropped from the Orb. Instead of 100, it’ll be 200.

Elite enemies (new to Outbreak)

Elites will be scattered all over each map and are easily identified by the skull symbol on the mini-map. Most of the roaming elites will be Manglers and Tempests, with a few Megatons mixed in. Krasny Soldats will make appearances on high difficulties, and Mimics will be disguised as they always are. You already know all about Mimics, Megatons, and Manglers. So, let’s dive into Outbreak Zombie’s latest elite enemies.

Tempests

Tempests are easy to kill but annoying to encounter. Their sole purpose is to disable your vehicles with their EMP blasts. These same blasts will also slow your movement and blur your vision if they connect. They’ll warp around the vicinity when shot. However, you can get an idea of where they’ll warp to by the blue wisp they give off.

If you can bunny hop, wait for the Tempest to shoot at you, then jump left or right. Don’t jump too soon, though. We all know how enemy projectiles track in Cold War Zombies. The same trick applies to Manglers. However, you’ll want to concentrate your fire on the Manglers’ arm cannons to kill them quickly. Tempests are small-framed, so you’ll need to be accurate with your shots.

Krasny Soldats

These are going to be the most challenging enemies you’ll encounter when playing Outbreak Zombies. They are heavily armored, with only a small window of opportunity for critical headshots. They’ll shoot an arm-mounted flame thrower when at close range. Get too far away, and Krasny Soldats will shoot an incendiary grenade at you. Whenever one of these bad boys appears, the entire team should focus on killing it. They’ll jump great distances and close gaps in the blink of an eye. Don’t be afraid to burn a killstreak to kill Krasnys, especially on later rounds.

Mini-bosses

When completing objectives like Eliminate, Golden Loot Chests, or HVTs, you’ll encounter upgraded mini-boss versions of these elite enemies. They’ll have unique names and an upgraded health pool. You’ll easily recognize mini-boss Megatons by their red features.

Outbreak Zombies: Best strategies

Now that you know how all the inner mechanics of Outbreak work, let’s talk about the best way to complete objectives, upgrade weapons, and survive high rounds.

Outbreak is a time investment. It’ll take two to three hours to reach round 8 or 9 with everything fully upgraded. The bulk of the time will be spent between rounds 1 and 3, as you’ll be exploring all three maps to complete objectives and earn points while the game is still relatively easy.

Ideally, you should earn enough points to Pack-a-Punch your primary weapon before completing the first main objective. You’ll have already gotten a random perk from side missions and hopefully have a decent secondary to back you up. Of course, Outbreak still operates on a random number generator. There is no guarantee you’ll get every side objective in the first area. If you do, consider yourself lucky.

By the end of round 3, you should have most of the perks (ideally Juggernog, Stamin-Up, Quick Revive, and Speed Cola), two high-tier, double-Pack-a-Punched weapons, and a killstreak to use in case of emergency. Use rounds 4 and 5 to triple-Pack-a-Punch at least one of those weapons. By rounds 6 and 7, you’ll start getting Wonder Weapons to drop upon completing objectives. You can find the RAI K-84, Ray Gun, and D.I.E. Shockwave as challenge rewards and elite drops.

Ideal loadouts

When it comes to building your Outbreak Zombie loadout, you’ll want to be using automatic weapons with large clips. LMGs take too long to reload, even when equipped with the fast mags. Your best bet is any Assault Rifle with the largest clip possible. Make sure to equip any attachments that increase your salvage or equipment drop rates as well.

A team of four will want to strategize when it comes to field upgrades. There’s no reason all four players should be running Ring of Fire. The new Frenzied Guard field upgrade will keep players armored and proves useful in Outbreak. One team member should also be running Healing Aura to revive downed teammates from a distance. Healing Aura will also help if the team gets overrun during a Holdout or Defend mission.

Keep your vehicles at a safe distance

There’s a lot of splash damage and ranged fire in Outbreak Zombies. Vehicles are necessary for traversing the map on higher rounds so as not to waste ammo and armor on roaming zombies. However, once you arrive at your destination, leave your vehicle away from the action. One accidental blast from the RAI K-84 or Ray Gun can blow it sky-high.

Split up on early rounds

Rounds 1, 2, and 3 should be manageable for any experienced Zombies player. Everyone should be able to split off on their own to expedite the looting process. Four-player squads can split up into teams of two to be safe. Players can reconvene to complete challenges and primary objectives. They should also be activating reward challenges whenever they pass by the station. These challenges are a great way to earn bonus essence, perks, and possibly a Wonder Weapon.

Stay together on higher rounds

Once players have completely upgraded, there’s no reason to split off from the pack. You’ll be hard-pressed to find anything more valuable than your upgraded Wonder Weapon or legendary primary.

On a related note, don’t challenge every group of zombies you see once you’ve made it to the higher rounds. Again, there’s nothing else you’ll need at this point, and your main focus should be completing the main objective and making it as far as possible. Don’t even worry about side objectives anymore.

Final thoughts

Outbreak Zombies Retrieve

The only thing Outbreak Zombies currently lacks is a useful map. If Outbreak adopted the same interactive map style as Warzone, marking objectives and waypoints would be much easier; as it stands, it’s easy to get lost and sidetracked on the way to side objectives and upgrade stations. Otherwise, Outbreak might be the best Zombies mode Call of Duty has ever released. Players willing to put in the time can expect a fun and challenging experience.

Repost: Original Source and Author Link

Categories
Tech News

Twitter Super Follow borrows OnlyFans strategy to charge for tweets

Twitter is preparing to launch paid tweets with a new Super Follow system that will work a little like Patreon or OnlyFans. Announced during the company’s investor presentation, Super Follow will offer a new way for those with followings on Twitter to monetize that audience, with everything from exclusive content to special badging.

Twitter has long talked about – and, according to rumors and leaks, been working on internally – a way to squeeze more profit out of its service than through advertising alone. One of the most common expectations has been a monthly or annual subscription, which would remove ads from users’ timelines, among other potential perks.

This Twitter Super Follow system, however, takes a different approach. In effect, it would allow users of the service to individually monetize their own shared content, much in the way that services like Patreon and OnlyFans do today. Exactly what could be offered seems to be down to the individual user’s preferences.

In an example shared by Twitter, for instance, that could be anything from a badge showing that you’re a supporter of a certain tweeter to subscriber-only newsletters. It might include exclusive content that wouldn’t be available to non-Super Followers, or deals and discounts on certain products and services.

Individual tweets shared with Super Followers would only support viewing and replying by those subscribers, according to screenshots posted by The Verge.

Finally, there’s also “Community access,” a reference to another new feature that was revealed today. Twitter Communities are effectively closed groups, built around individual topics: that could be gardening, exercise, or even hashtags such as #SocialJustice, Twitter suggested. Communities could seemingly be open to any Twitter user wanting to join, or closed and require invitation – potentially after signing up as a Super Follower first – to take part.

Twitter is presumably envisaging following the strategy of other sites and taking a cut of Super Follow fees. Exactly how much it’ll cost will seemingly depend on the individual creator: Twitter’s example is $4.99 per month with the ability to cancel at any time. However, it’s likely that users would be able to set their own amount based on what they believe their community will pay.

There’s no indication as to when the new features will launch.

Repost: Original Source and Author Link

Categories
AI

How to know if federated learning should be part of your data strategy

AI researchers and practitioners are developing and releasing new systems and methods at an extremely fast pace, and it can be difficult for enterprises to gauge which particular AI technologies are most likely to help their businesses. This article — the first part of a two-part series — will try to help you determine if federated learning (FL), a fairly new piece of privacy-preserving AI technology, is appropriate for a use case you have in mind.

FL’s core purpose is to enable use of AI in situations where data privacy or confidentiality concerns currently block adoption. Let’s unpack this a bit. The purpose of AI systems/methods/algorithms is to take data and autonomously create pieces of software, AI models, that transform data into actionable insight. Modern AI methods typically require a lot of data: all the necessary data is gathered at some central location, and a learning algorithm is run on it to learn/create the model.

However, when that data is confidential, siloed, and owned by different entities, gathering the data at a central location is not possible. Federated learning very cleverly gets around this problem by moving the learning algorithm or code to the data, rather than bringing the data to the code.

As an example, consider the problem of creating an AI model to predict whether patients have COVID-19 from their lung CT scans. The data for this problem, CT scans, is obviously confidential and owned by many different entities: hospitals, medical facilities, and research facilities. It is also stored all across the world, under different jurisdictions. You want to combine all of that data because you want your AI model to detect all the forms in which the disease manifests in CT scans. However, combining all this data in one location is not possible, for obvious confidentiality and jurisdictional reasons.

Here’s how federated learning bypasses the confidentiality issue: A worker node, a computing system capable of machine learning, is deployed at each hospital or facility. This worker node has full access to the confidential data at that location. During federated learning, each worker node trains a local AI model on the data at its facility and sends only the model, never the data, to a central FL server; the confidential data never leaves the site. The central server combines the insights from all of the local models into a single global model, which is then sent back to the worker nodes, so each node gains insights from all the others without ever having seen their data. Iterate on these steps many times and you end up with a model that is, in many cases, equivalent to a model that would have been built had you trained it on all the data in one place.
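To make that loop concrete, here is a minimal sketch of federated averaging in Python. It is illustrative only: the local_train and fed_avg helpers, the simple linear model, and the synthetic per-site datasets standing in for the hospitals’ data are assumptions made for this example rather than part of any particular FL framework (production systems typically use a framework such as TensorFlow Federated or Flower, and real models). The flow, however, is the one described above: each site trains on its own data, only model weights travel to the server, and the server averages them into a new global model.

```python
import numpy as np

def local_train(global_weights, local_data, lr=0.1, epochs=5):
    # Worker node: fit a simple linear model on its own confidential data.
    X, y = local_data
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w, len(y)                        # only the weights leave the site

def fed_avg(local_results):
    # Central server: average the local models, weighted by dataset size.
    total = sum(n for _, n in local_results)
    return sum(w * (n / total) for w, n in local_results)

# Three synthetic "hospitals", each holding data the others never see.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                         # federated rounds
    local_results = [local_train(global_w, site) for site in sites]
    global_w = fed_avg(local_results)       # aggregate without pooling data

print(global_w)                             # approaches [2.0, -1.0]
```

Weighting each local model by its dataset size is the standard federated averaging heuristic: sites with more data pull the global model harder, roughly mimicking what training on the pooled data would do.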

Federated learning was initially developed by Google as a way to train the Android keyboard app (Gboard) to predict what the user will type next; here, the confidential data is the text the user is typing. However, it turns out that the data confidentiality issue appears in many guises across industries. Indeed, you will face this problem if you are

* a maker of autonomous vehicles and you want to combine confidential video and image data from across your vehicles to build a better vision system,

* a bank and you want to build a machine learning-based anti-money laundering system by combining data from various jurisdictions, possibly even from other banks,

* a supply chain platform provider and you want to build a better risk assessment or route optimization system by combining confidential sales data from multiple businesses,

* a cell service provider and you want to build a machine learning model to optimize routes by combining confidential data from cell towers,

* a consortium of farmers and you want to build a model to detect crop disease using confidential disease data from the members of your consortium,

* a consortium of additive manufacturers and you want to build AI-based process controllers and quality assurance systems using confidential build data from members of your consortium.

An important point to note about the above examples is that the owners of the data can be different units of the same organization in different jurisdictions (as in the bank and cell service provider examples), clients of the same organization (as in the autonomous vehicle maker and supply chain platform provider examples), or completely independent entities (as in the farming and additive manufacturing consortium examples, and the hospitals/COVID-19 example described earlier).

You can use the following checklist to see if federated learning makes sense for you:

I will end by emphasizing that, if you are able to combine your data into a central location, you should probably go for that option (barring cases where you want to, for instance, future-proof your solution). Centralizing your data may make for better final model performance than you’ll be able to achieve with federated learning, and the effort required to deploy an AI solution will be significantly lower. However, if it turns out that FL is just what you need to remove your barrier to AI adoption, part two of this series will provide you with an overview of how to do that.

M M Hassan Mahmud is a Senior AI and Machine Learning Technologist at Digital Catapult, with a background in machine learning within academia and industry.


Repost: Original Source and Author Link