
How to leverage AI to boost care management success

Sixty percent of American adults live with at least one chronic condition, and 12% with five or more. They spend far more on healthcare than those without any chronic conditions. For instance, 32% of adults with five or more chronic conditions make at least one ER visit each year. On top of that, 24% have at least one inpatient stay, in addition to an average of 20 outpatient visits — up to 10 times more than those without chronic conditions. In fact, 90% of America’s $4 trillion in healthcare expenditures are for people with chronic and mental health conditions, according to the Centers for Disease Control and Prevention (CDC).

The fundamental way healthcare organizations reduce these costs, improve patient experience and ensure better population health is through care management. 

In short, care management refers to the collection of services and activities that help patients with chronic conditions manage their health. Care managers proactively reach out to patients under their care and offer preventative interventions to reduce ER visits and hospital admissions. Despite their best efforts, many of these initiatives deliver suboptimal outcomes.

Why current care management initiatives are ineffective

Much of care management today is based on past data

For instance, care managers identify patients with the highest costs over the previous year and begin their outreach programs with them. The biggest challenge with this approach, according to our internal research, is that nearly 50-60% of this year's high-cost patients were low-cost the year before. With this reactive approach, a large number of at-risk patients are left without any outreach at all.


The risk stratification that care management teams use today is based on national models

These models are not localized, so they do not account for the social determinants of health specific to each community.

Care management teams focus chiefly on transitions of care and avoiding readmissions

Our experience working with different clients also shows that readmissions account for only 10-15% of total admissions. What is lacking is a focus on proactive care management that prevents avoidable emergency room visits and hospital admissions before they happen, and that focus is key to success in value-based care models.

In any given year, high-cost patients can become low-cost

Without a granular understanding of how patients move between these cost tiers, outreach efforts can be ineffective in curbing the cost of care.

How AI can boost care management success

Advanced analytics and artificial intelligence (AI) open up a significant opportunity for care management. Health risks are complex, driven by a wide range of factors well beyond just one’s physical or mental health. For example, a person with diabetes is at higher risk if they also have a low income and limited access to medical services. Therefore, identifying at-risk patients needs to take these additional factors into account so that outreach reaches those most in need of care.

Machine learning (ML) algorithms can evaluate a complex range of variables, such as patient history, past hospital/ER admissions, medications, social determinants of health and external data, to identify at-risk patients accurately. These models can then stratify and prioritize patients based on their risk scores, enabling care managers to design outreach that is effective for those who need it most.
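
To make that concrete, here is a minimal sketch of such a risk-stratification step, written in Python with scikit-learn. The feature names, the synthetic data and the outcome label are hypothetical assumptions, not details of any particular care management platform.

```python
# Illustrative sketch only: train a simple risk model on a patient panel
# and rank patients by predicted risk so outreach can start with the
# highest-risk members. All features and data here are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 1_000
panel = pd.DataFrame({
    "age": rng.integers(30, 90, n),
    "er_visits_12m": rng.poisson(0.5, n),
    "inpatient_stays_12m": rng.poisson(0.2, n),
    "active_medications": rng.integers(0, 12, n),
    "sdoh_index": rng.uniform(0, 1, n),          # social determinants proxy
})
# Synthetic label: admission in the next six months (illustration only).
risk = 0.05 + 0.15 * panel["er_visits_12m"] + 0.3 * panel["sdoh_index"]
panel["admitted_next_6m"] = rng.uniform(0, 1, n) < risk

features = ["age", "er_visits_12m", "inpatient_stays_12m",
            "active_medications", "sdoh_index"]
model = GradientBoostingClassifier().fit(panel[features],
                                         panel["admitted_next_6m"])

# Rank the panel so care managers can prioritize outreach.
panel["risk_score"] = model.predict_proba(panel[features])[:, 1]
outreach_list = panel.sort_values("risk_score", ascending=False).head(100)
print(outreach_list[["risk_score"] + features].head())
```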

At an individual level, an AI-enabled care management platform can offer a holistic view of each patient, including their past care, current medications, risks and accurate recommendations for their future course of action. For the patient in the example above, AI can equip care managers with HbA1c readings, the medication possession ratio and predictive risk scores to deliver the right care at the right time. It can also guide the care manager on how many times they should reach out to each patient for maximum impact.

Unlike traditional risk stratification mechanisms, modern AI-enabled care management systems are self-learning. When care managers enter new information about a patient, such as a recent hospital visit, a change in medication or new habits, the AI adapts its risk stratification and recommendation engine for more effective outcomes. This means that the ongoing care for every patient improves over time.
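
A minimal sketch of that self-learning loop, assuming an incrementally trainable model (nothing here comes from an actual product), might update the risk model each time a care manager logs a new record:

```python
# Sketch of incremental ("self-learning") updates: each newly logged
# patient record updates the risk model in place via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
CLASSES = np.array([0, 1])  # 0 = no admission, 1 = admission

def log_patient_update(features: np.ndarray, outcome: int) -> None:
    """Fold a newly observed visit and its outcome into the risk model."""
    model.partial_fit(features.reshape(1, -1), [outcome], classes=CLASSES)

# Hypothetical feature vector: [age, ER visits, inpatient stays,
# active medications, HbA1c, SDOH index]; outcome 1 = was admitted.
log_patient_update(np.array([67.0, 3, 1, 9, 8.2, 0.7]), outcome=1)
log_patient_update(np.array([45.0, 0, 0, 2, 5.6, 0.2]), outcome=0)
```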

Why payers and providers are reluctant to embrace AI in care management

In theory, the impact of AI in care management is significant — both governments and the private sector are bullish on the possibilities. Yet, in practice, especially among those who use the technology every day, i.e., care managers, there appears to be reluctance. With good reason.

Lack of localized models

For starters, many of today’s AI-based care management solutions aren’t patient-centric. Models trained on national data are ineffective for most local populations, throwing predictions off by a considerable margin. Without accurate predictions, care managers lack reliable tools, which creates further skepticism. Carefully designed localized models are fundamental to the success of any AI-based care management solution.

Not driven by the care manager’s needs

On the other hand, AI today is not ‘care manager-driven’ either. A bare ‘risk score’, a single number indicating a patient’s risk, gives the care manager little to act on. AI solutions need to speak the user’s language so that care managers become comfortable with the suggestions.

Healthcare delivery is too complex and critical to be left to the black box of an ML algorithm. The system needs to be transparent about why each decision was made; there must be explainability that is accessible to the end user.

Inability to demonstrate ROI

At the healthcare organizational level, AI solutions must also demonstrate ROI. They must impact the business by moving the needle on its key performance indicators (KPIs). This could include reducing the cost of care, easing the care manager’s burden, minimizing ER visits, and other benefits. These solutions must provide healthcare leaders with the visibility they need into hospital operations as well as delivery metrics.

What is the future of AI in care management?

Despite current challenges and failures in some early AI projects, what the industry is experiencing is merely teething troubles. As a rapidly evolving technology, AI is adapting to the needs of the healthcare industry at an unprecedented pace. With ongoing innovation and receptiveness to feedback, AI can become a superpower in the arsenal of healthcare organizations.

Especially in proactive care management, AI can play a significant role. It can help identify at-risk patients and offer care that prevents complications or emergencies. It can enable care managers to monitor progress and give ongoing support without patients ever visiting a hospital to receive it. This will, in turn, significantly reduce the cost of care for providers. It will empower patients to lead healthy lives over the long term and promote overall population health.

Pradeep Kumar Jain is the chief product officer at HealthEM AI.



Artificial intelligence (AI) engineer: Learn about the role and skills needed for success

The engineers who build and manage AI systems are increasingly valuable to companies across industry sectors. Unsurprisingly, the demand for their services outstrips the supply. 

But what is the role of an AI engineer? What are the key qualifications for the role? What really makes a good one? And how can they be developed — maybe even from current developers on staff — if they can’t be found? Alternatively, how else can the role be filled?

What is the role of an artificial intelligence (AI) engineer?

An AI engineer develops, programs, trains and deploys AI models. With 86% of companies in a recent survey reporting that AI is becoming mainstream in their businesses, the AI engineer has become a central figure.

While a data scientist focuses on finding and extracting business insights and applicable data from large datasets, an AI engineer comes from an IT infrastructure background and is charged with developing the algorithms for an AI application and integrating the application into a company’s broader tech environment. An engineer focused on algorithms may also be known as a machine learning (ML) engineer. Someone who specializes in integrating AI applications with an organization’s other technology may be known as an AI architect. Additionally, a professional specifically focused on writing code might have the title of AI developer.



Because an important part of an AI engineer’s job is applying AI to real-world use cases, these workers must understand the problems their companies face and find ways that AI can help solve them. That often includes collaborating with other departments and teaching others about AI’s potential.

AI engineer salary and benefits

AI engineers are highly skilled. They face a wide-open job market and are well compensated. ZipRecruiter reports that the average salary of an AI engineer is upwards of $158,000 a year, with top earners offered as much as $288,000 annually. Many companies employing these professionals also offer attractive benefits. AI engineering is also a reasonably future-proof career, as AI is becoming increasingly important to everyday life.

Education, experience and soft skills are needed for the role

AI engineers typically require expertise in three broad areas:

  1. Relevant, formal education through at least the level of a bachelor’s degree.
  2. Extensive experience in tech and/or data. 
  3. The soft skills to collaborate productively on projects with colleagues.

Although more AI-specific courses are being added to undergrad and graduate programs all the time, many AI engineers have honed their specialties with certifications or a few courses to augment their foundational degrees.

AI engineers generally need at least a bachelor’s degree in a field such as computer science, IT, data science or statistics. Some positions may even require a master’s degree. 

An advanced degree in a related area will qualify applicants for more positions. However, it may become less of a necessity over time. As the need for these workers rises, more companies are seeking experience over education.

Relevant certifications may be more useful. Taking extra AI engineer courses and exams can earn job-seekers AI-specific certifications that ensure they have the needed skills. On top of grabbing employers’ attention, these certifications will indicate an applicant has some helpful real-world experience with the day-to-day work of AI engineering.

This means a relatively broad pool of tech professionals may be candidates for a mid-career specialty in AI. Such workers, of course, must be able and motivated, and they may be found within or beyond an employer’s organization.


10 key skills needed to succeed as an AI engineer

Let’s look at some of the more specific skills required of an AI engineer:

1. Programming language proficiency

One of the most important skills to have as an AI engineer is proficiency in at least one programming language. Ideally, applicants should have experience working with multiple languages, as some companies may prefer working in one language over another. The more diverse experience, the better.

The top programming languages in the field include:

  • Python
  • C++
  • JavaScript
  • Java
  • C#
  • Julia
  • Shell
  • R
  • TypeScript
  • Scala

Python is the most popular language for machine learning applications and the third most popular overall, so it is often considered a default requirement for the role. Students should work with at least a few languages in their AI engineering courses, but many professionals are self-taught to at least some degree and have likely demonstrated proficiency with personal projects.

2. Experience with AI models

While general programming knowledge is important, engineers also need to accrue AI-specific experience. Building and training AI models is a unique practice, and those interested in an AI career should seek varied opportunities to build this expertise.

Cultivating this experience is a lot like gaining proficiency in programming languages, and it is best done by working with various types of AI models, including linear regression, classification algorithms, decision trees and deep neural networks. Experimenting with different models can also help AI engineers discover what they enjoy working with the most.
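
As a small, self-contained illustration of that kind of hands-on comparison, the sketch below evaluates a linear model, a decision tree and a small neural network on the same public dataset; the dataset and model settings are arbitrary practice choices, not a prescribed curriculum.

```python
# Compare a few model families on the same classification task,
# the kind of side-by-side experiment the text recommends.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```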

Learning to work with models also resembles learning programming languages. Students in AI engineering courses will build and test a few models in their studies, but personal research is valuable too. Communities and code-sharing platforms like GitHub are good places to find support for AI projects.

3. Linear algebra and statistics

AI engineers need a strong grasp of applied mathematics fields such as linear algebra and statistics. Different models require an understanding of different mathematical concepts. Engineers must know how to apply derivatives and integrals to tackle gradient descent algorithms, while probability theory and Gaussian distributions are important for Hidden Markov models. A college-level mathematics education will often provide the skills necessary. 
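
To connect the math to practice, here is a tiny, purely illustrative gradient descent loop that uses a hand-derived derivative of a squared-error loss to recover the slope of a noisy line:

```python
# Gradient descent on a one-variable least-squares problem,
# using the derivative of the mean squared error taken by hand.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 3.0 * x + rng.normal(0, 0.1, 200)   # true slope = 3

w = 0.0               # parameter to learn
lr = 0.1              # learning rate
for _ in range(500):
    grad = np.mean(2 * (w * x - y) * x)   # d/dw of mean((w*x - y)^2)
    w -= lr * grad

print(f"estimated slope: {w:.3f}")  # should approach 3.0
```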

4. Data literacy

An AI engineer’s work revolves around data, and data literacy is one of the most important skills to have when entering this field. AI engineers should be able to read, understand, analyze and apply data to various use cases.

Formal data science and statistics classes are useful, but the best practice is engaging with data projects first-hand, which is another reason why experienced tech workers may be good candidates to develop for the role. 

5. Critical thinking

“Soft” skills are also important in this field, although they are often harder to gauge. One of the most important soft skills in AI engineering is critical thinking.

AI models can be complicated, and the solution to a problem is rarely immediately evident. As a result, delivering timely and accurate results with these technologies requires a fast, creative approach to problem-solving. 

AI engineers must be able to think through multiple solutions and determine the best course of action.

6. Business acumen

A skill that is sometimes overlooked — but useful for AI engineers to have — is a strong grasp of business concepts. Operations optimization and product enhancement are the most common AI use cases for businesses, so AI engineers should understand how these processes work. Effective AI application requires an understanding of how the company operates.

AI is only as effective as its users’ ability to apply it to their end goals. Top-performing AI engineers know not just how to build functioning AI models, but also how these models can help businesses serve their unique needs. That means understanding general business concepts and company-specific considerations.

Engineers can develop their business acumen in formal courses and/or by working with colleagues in other departments.

7. Communication skills

Another crucial soft skill to have is communication. AI engineers must be able to explain to their non-technical colleagues how different AI solutions might help teams reach their goals.

A lack of understanding of how AI can benefit businesses is the second-largest barrier to adoption, according to Gartner, with 42% of chief information officers (CIOs) citing it as a problem. Knowing how to explain AI concepts will improve cooperation. 

As the technology becomes more important to a wider variety of business functions, AI engineers will work with more departments. They must be able to communicate with other workers effectively for these relationships to work. Presentation and summary skills are particularly critical.

8. Collaboration

Along those same lines, AI engineers must have excellent teamwork skills to thrive in the current market. This goes beyond telling other departments how to use AI models effectively. AI engineers must be open to feedback and cooperate with other workers to understand the challenges they face.

Many AI engineers also work in groups, even within their own departments and projects. If they are unable to work well with others, they will struggle to excel in the industry. Conversely, strong collaborative skills will help them find effective solutions faster.

Experience working in groups helps naturally build these skills, too, so prospective engineers should seek collaborative projects to improve in this area. The better they can work as part of a team, the more success they’ll have in the field.

9. Time management

Building, testing and deploying AI models is often a time-consuming process, and time management is vitally important.

 A recent study found that 83% of developers suffer from workplace burnout, with high workloads being the leading cause. While AI engineers may have little control over their workloads, they can adapt their habits to make the most of them. Of course, company culture and strong management are important for keeping such valuable professionals in peak form.

10. Experience with related technologies

Artificial intelligence engineers should also gain experience in related technologies. Gathering relevant data and deploying AI models will likely involve working with technologies like internet of things (IoT) devices, robotics and cloud computing. Most AI projects fail, and the lack of an integrated environment is one of the most common reasons. If AI engineers hope to deploy their models effectively, they must be able to work across a company’s unique IT environment. That means understanding the various technologies they may use.

More staffing options

The AI engineer’s role is essential and in-demand, but the AI industry is developing tools and options to enable less-specialized workers to build out applications as well:

  • Low-code and no-code options enable less-skilled staff to develop use cases. 
  • AI vendors increasingly offer prepackaged vertical and horizontal market solutions.
  • AI vendors are also cultivating business partners to offer still more prepackaged implementations.
  • Using various visual and dashboard interfaces, AI vendors are enabling non-technical business analysts to craft simple applications.
  • Consulting services are expanding, especially to meet project-specific needs.

Organizations will adapt their approach to their size and resources, the strategic importance of their implementations, and their staffing markets and philosophies. AI skills will continue to be diffused across the wider tech environment, but the role of the AI engineer is still gaining importance and will be key to many companies’ adoption of the technology.




For AI model success, utilize MLops and get the data right

It’s critical to adopt a data-centric mindset and support it with ML operations 

Artificial intelligence (AI) in the lab is one thing; in the real world, it’s another. Many AI models fail to yield reliable results when deployed. Others start well, but then results erode, leaving their owners frustrated. Many businesses do not get the return on AI they expect. Why do AI models fail and what is the remedy? 

As companies have experimented with AI models more, there have been some successes, but numerous disappointments. Dimensional Research reports that 96% of AI projects encounter problems with data quality, data labeling and building model confidence.

AI researchers and developers for business often use the traditional academic method of boosting accuracy. That is, hold the model’s data constant while tinkering with model architectures and fine-tuning algorithms. That’s akin to mending the sails when the boat has a leak — it is an improvement, but the wrong one. Why? Good code cannot overcome bad data.

Instead, they should ensure the datasets are suited to the application. Traditional software is powered by code, whereas AI systems are built using both code (models and algorithms) and data. Take facial recognition, for instance, where AI-driven apps were trained mostly on Caucasian faces rather than ethnically diverse ones. Not surprisingly, results were less accurate for non-Caucasian users.

Good training data is only the starting point. In the real world, AI applications are often initially accurate, but then deteriorate. When accuracy degrades, many teams respond by tuning the software code. That doesn’t work because the underlying problem was changing real-world conditions. The answer: to increase reliability, improve the data rather than the algorithms. 

Since AI failures are usually related to data quality and data drifts, practitioners can use a data-centric approach to keep AI applications healthy. Data is like food for AI. In your application, data should be a first-class citizen. Endorsing this idea isn’t sufficient; organizations need an “infrastructure” to keep the right data coming. 

MLops: The “how” of data-centric AI

Continuous good data requires ongoing processes and practices known as MLops, for machine learning (ML) operations. The key mission of MLops: make high-quality data available because it’s essential to a data-centric AI approach.

MLops works by tackling the specific challenges of data-centric AI, which are complicated enough to ensure steady employment for data scientists. Here is a sampling: 

  • The wrong amount of data: Noisy data can distort smaller datasets, while larger volumes of data can make labeling difficult. Both issues throw models off. The right size of dataset for your AI model depends on the problem you are addressing. 
  • Outliers in the data: A common shortcoming in data used to train AI applications, outliers can skew results. 
  • Insufficient data range: This can cause an inability to properly handle outliers in the real world. 
  • Data drift: Gradual changes in real-world data that often degrade model accuracy over time (a minimal detection sketch follows this list).
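
As promised above, here is one minimal, illustrative way to flag drift in a single numeric feature: compare its live distribution against the training distribution with a two-sample Kolmogorov-Smirnov test. The test choice, threshold and synthetic data are assumptions, not an MLops standard.

```python
# Flag drift in one feature by comparing training vs. live distributions.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, live_values, alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < alpha

rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature
live = rng.normal(loc=0.4, scale=1.0, size=1_000)    # serving-time feature
print(has_drifted(train, live))  # True: the mean has shifted
```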

These issues are serious. A Google survey of 53 AI practitioners found that “data cascades—compounding events causing negative, downstream effects from data issues — triggered by conventional AI/ML practices that undervalue data quality… are pervasive (92% prevalence), invisible, delayed, but often avoidable.”

How does MLops work?

Before deploying an AI model, researchers need to plan to maintain its accuracy with new data. Key steps: 

  • Audit and monitor model predictions to continuously ensure that the outcomes are accurate.
  • Monitor the health of the data powering the model; make sure there are no surges, missing values, duplicates or anomalies in distributions (a minimal check is sketched after this list).
  • Confirm the system complies with privacy and consent regulations.
  • When the model’s accuracy drops, figure out why.
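
Here is a rough sketch of the data health check referenced in the list above; the column names, the example batch and the surge rule are illustrative assumptions.

```python
# Basic data-quality checks on an incoming batch before it reaches the model.
import pandas as pd

def health_report(batch: pd.DataFrame, expected_rows: int) -> dict:
    """Summarize basic data-quality signals for one batch of records."""
    return {
        "missing_values": int(batch.isna().sum().sum()),
        "duplicate_rows": int(batch.duplicated().sum()),
        "volume_surge": len(batch) > 2 * expected_rows,  # crude surge rule
    }

batch = pd.DataFrame({
    "user_id": [1, 2, 2, 4],
    "amount": [10.0, 25.0, 25.0, None],
})
print(health_report(batch, expected_rows=3))
# {'missing_values': 1, 'duplicate_rows': 1, 'volume_surge': False}
```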

To practice good MLops and responsibly develop AI, here are several questions to address: 

  • How do you catch data drifts in your pipeline? Data drift can be more difficult to catch than data quality shortcomings. Data changes that appear subtle may have an outsized impact on particular model predictions and particular customers.
  • Does your system reliably move data from point A to point B without jeopardizing data quality? Thankfully, moving data in bulk from one system to another has become much easier as tools for ML improve.
  • Can you track and analyze data automatically, with alerts when data quality issues arise? 

MLops: How to start now

You may be thinking: How do we gear up to address these problems? Building an MLops capability can begin modestly, with a data expert and your AI developer. As a young discipline, MLops is still evolving. There is no gold standard or approved framework yet to define a good MLops system or organization, but here are a few fundamentals:

  • In developing models, AI researchers need to consider data at each step, from product development through deployment and post-deployment. The ML community needs mature MLops tools that help make high-quality, reliable and representative datasets to power AI systems.
  • Post-deployment maintenance of the AI application cannot be an afterthought. Production systems should implement ML equivalents of devops best practices, including logging, monitoring and CI/CD pipelines that account for data lineage, data drift and data quality.
  • Structure ongoing collaboration across stakeholders, from executive leadership to subject-matter experts, ML and data scientists, ML engineers, and SREs.

Sustained success for AI/ML applications demands a shift from “get the code right and you’re done” to an ongoing focus on data. Systematically improving data quality for a basic model is better than chasing state-of-the-art models with low-quality data.

Not yet a defined science, MLops encompasses practices that make data-centric AI workable. We will learn much in the upcoming years about what works most effectively. Meanwhile, you and your AI team can proactively – and creatively – devise an MLops framework and tune it to your models and applications. 

Alessya Visnjic is the CEO of WhyLabs.



How to turn AI failure into AI success

The enterprise is rushing headfirst into AI-driven analytics and processes. However, based on the success rate so far, it appears there will be a steep learning curve before AI starts to make noticeable contributions to most data operations.

While positive stories are starting to emerge, the fact remains that most AI projects fail. The reasons vary, but in the end, it comes down to a lack of experience with the technology, which will most certainly improve over time. In the meantime, it might help to examine some of the pain points that lead to AI failure to hopefully flatten out the learning curve and shorten its duration.

AI’s hidden functions

On a fundamental level, says researcher Dan Hendrycks of UC Berkeley, a key problem is that data scientists still lack a clear understanding of how AI works. Speaking to IEEE Spectrum, he notes that much of the decision-making process is still a mystery, so when things don’t work out, it’s difficult to ascertain what went wrong. In general, however, he and other experts note that only a handful of AI limitations are driving many failures.

One of these is brittleness — the tendency for AI to function well when a set pattern is observed, but then fail when the pattern is altered. For instance, most models can identify a school bus pretty well, but not when it is flipped on its side after an accident. At the same time, AIs can quickly “forget” older patterns once they have been trained to spot new ones. Things can also go south when AI’s use of raw logic and number-crunching leads it to conclusions that defy common sense.

Another contributing factor to AI failure is that it represents such a massive shift in the way data is used that most organizations have yet to adapt to it on a cultural level. Mark Montgomery, founder and CEO of AI platform developer KYield, Inc., notes that few organizations have a strong AI champion at the executive level, which allows failure to trickle up from the bottom organically. This, in turn, leads to poor data management at the outset, as well as ill-defined projects that become difficult to operationalize, particularly at scale. Maybe some of the projects that emerge in this fashion will prove successful, but there will be a lot of failure along the way.

Clear goals

To help minimize these issues, enterprises should avoid three key pitfalls, says Bob Friday, vice president and CTO of Juniper’s AI-Driven Enterprise Business. First, don’t go into it with vague ideas about ROI and other key metrics. At the outset of each project, leaders should clearly define both the costs and benefits. Otherwise, you are not developing AI but just playing with a shiny new toy. At the same time, there should be a concerted effort to develop the necessary AI and data management skills to produce successful outcomes. And finally, don’t try to build AI environments in-house. The faster, more reliable way to get up and running is to implement an expertly designed, integrated solution that is both flexible and scalable.

But perhaps the most important thing to keep in mind, says Emerj’s head of research, Daniel Faggella, is that AI is not IT. Instead, it represents a new way of working in the digital sphere, with all-new processes and expectations. A key difference is that while IT is deterministic, AI is probabilistic. This means actions taken in an IT environment are largely predictable, while those in AI aren’t. Consequently, AI requires a lot more care and feeding upfront in the data conditioning phase, and then serious follow-through from qualified teams and leaders to ensure that projects do not go off the rails or can be put back on track quickly if they do.

The enterprise might also benefit from a reassessment of what failure means and how it affects the overall value of its AI deployments. As Dale Carnegie once said, “Discouragement and failure are two of the surest stepping stones to success.”

In other words, the only way to truly fail with AI is to not learn from your mistakes and try, try again.



This masterclass explains the path to success as an Amazon FBA seller for under $20

TLDR: In The 2021 Amazon FBA Master Class Bundle, entrepreneur Ryan Ford walks new and experienced sellers alike through the steps of starting a successful Amazon-based business.

In 2017, Amazon’s share of the ecommerce market was 37 percent. If you think the introduction of new retail players or a push from existing competitors took a bite out of that market dominance over the past four years, you’re very mistaken.

In fact, Amazon’s market share is now expected to pass 50 percent this year. And the increased visibility and clout that Amazon brings to its FBA (Fulfillment by Amazon) third-party sellers is enormous if you know how to take advantage of those benefits. With the training in The 2021 Amazon FBA Master Class Bundle ($19.99, over 90 percent off, from TNW Deals), entrepreneurs will learn the inside tricks and tips that help savvy retailers make a killing selling virtually anything on the Amazon platform.

This five-course collection is led by business success story Ryan Ford, who studied the potential of Amazon FBA selling, then used the platform to launch several successful products and brands on his way to 7-figure sales. These courses take learners inside Ford’s method for spotting Amazon FBA opportunities and the steps he followed to business success.

It starts with the Amazon Limitless Course, a session designed to teach anyone, regardless of experience, how to sell on Amazon successfully. Ford walks newcomers through all the basics, from the best methods for listing items and establishing a brand to growing your customer base and maximizing your exposure. Packed with professional insider strategies, this course covers the real ins and outs of Amazon FBA selling to help you avoid mistakes and build a healthy business on your first try.

Meanwhile, the remaining courses will bolster your growing Amazon knowledge. Product Research Challenge: Find Your 1st Amazon Product will help entrepreneurs zero in on the key decision of their new business: what to sell. And in Amazon FBA Suspension Prevention Course, Ford helps new ventures avoid the mistakes that can lead to an Amazon account suspension if you don’t pay attention.

There’s also an Amazon PPC Masterclass, showing users exactly how to take advantage of an Amazon advertising model where your business is only charged if someone actually clicks on your ad.

Users can start crafting their own careers and futures with the training in The 2021 Amazon FBA Master Class Bundle, a collection that usually costs almost $1,000, but is on sale now for just $19.99.

Prices are subject to change.



The success of self-driving vehicles will depend on teleoperation

There is a lot of chatter right now about self-driving vehicles, which is understandable, as we stand on the threshold of an entirely new world of driving. It’s not just the dream of self-driven vehicles, but also of green cars and trucks that glide silently without emitting any climate-threatening pollutants.

The cutting-edge electric vehicle (EV) and autonomous vehicle (AV) technologies are virtually symbiotic, with a great deal of collaboration between the two emerging sectors. Yet EVs are actually way ahead of autonomous driving in terms of fulfilling their vision in the real world. And it’s not just because people are gun-shy when it comes to tooling around in a driverless car. An empty driver’s seat is perhaps all well and good on a straight road with almost no traffic and no sudden surprises. But when it comes to real-life traffic, there’s still no substitute for a hands-on driver. A human behind the wheel can see and properly respond to unexpected problems; autonomous technology has a long way to go before it can match wits with a human in such situations.

For the time being, self-driving technology falls short of the vision. Among the stumbling blocks are imperfect sensors, the lack of mathematical models that can successfully predict the behavior of every road user, the high cost of collecting enough data to train algorithms, and, of course, passenger inhibition. (When elevators were introduced in the 1850s, it took years for most people to overcome their fear of the new contraptions.)

Today’s self-driving vehicles operate on a complex marriage of camera, lidar, radar, GPS and direction sensors. Combined, these are expected to deliver an all-situation, all-weather answer empowering an AV to see all, anticipate everything, and guarantee safe delivery to one’s destination.

Only they don’t. And they can’t.

Poor visibility, temporary detours and lane changes are challenges the current technology cannot yet handle well enough to provide an all-weather, all-situation solution. Onboard computers, algorithms and sensors are just not good enough today to deliver instant, and possibly life-saving, reactions.

While AVs have learned to recognize a fixed traffic light and a moving pedestrian, they cannot yet adapt to all new situations, especially at high speeds and in complex urban environments.

It’s no wonder that government regulators are reluctant to clear AVs for prime time. There is just not enough data to ensure confidence in an AV’s ability to prevent damage, injury and death. Understandably, consumer confidence is also just not there yet.

These are just some of the reasons why teleoperation is necessary to make the era of AVs practical. Simply put, teleoperation is the technology that allows one to remotely monitor an AV, take control as needed, and solve problems quickly and remotely.

With teleoperation, a single controller positioned at a distance from, say, a fleet of robo-taxis can observe each vehicle in real time and override their autonomy as necessary or provide much-needed input. When the problem is solved, the AV continues on its autonomous way. In effect, the remote “driver” takes over or issues commands only when human intervention is needed. And he or she can monitor and handle a number of vehicles simultaneously. The teleoperator is also there to speak with passengers who might be concerned about why the AV is taking a few extra seconds to cross an intersection.

Problem solved? Not so fast.

Because not all teleoperation technology is created equal. Guiding a vehicle remotely requires the ability to transfer information between the vehicle and the teleoperation center with as close to zero delay as possible. Continuous and reliable two-way data streaming, regardless of changing network conditions, is absolutely critical, and a big challenge of its own. Yet neither 4G LTE nor WiFi is equipped to support such high-bandwidth, low-latency communication, especially from a vehicle in motion. It will be a while before 5G becomes the universal standard, and even with that network upgrade the challenges will remain.

Another obstacle is one that’s measured in milliseconds. Even with the strongest data connection, there is still a split-second difference between what is happening on the road and what the teleoperator can see. This drastically affects their ability to react.

The human factor (or UX, for user experience) is also an issue. The way different teleoperators perceive driving environments varies, and is different from that of an in-vehicle operator. It is insufficient to merely receive a video feed and follow through with commands. Tools that help create situational awareness are necessary — for example, an overlay to employ sensor systems and translate their information into recommended decisions.

The final bogeyman is hackers, the devils who delight in throwing in a virtual spanner that can ruin one’s day. When a bank account is hacked, money is lost. When a teleoperation post is hacked, lives can be lost, as safety and ingenuity are compromised.

Clearly, all of these issues require solutions that depend on deep innovation in multiple technologies. Details on the teleoperation solution will be explored in succeeding articles. Stay tuned.



Monster Hunter Rise is already a surprising success

The most recent game in the Monster Hunter franchise – Monster Hunter Rise – launched as a Switch exclusive last Friday, but it seems that exclusivity deal Capcom worked out with Nintendo didn’t have a major impact on sales. Capcom today is celebrating a major milestone for Monster Hunter Rise, announcing millions of shipments in its first weekend of availability.

Specifically, Capcom revealed today that Monster Hunter Rise has shipped 4 million units worldwide. That figure, of course, represents not only the copies that were sold before release, but also probably some portion of the copies that were shipped over the weekend. Keep in mind that this is units shipped, not units sold, so the total number of sold-through copies might be lower than 4 million.

We’re assuming that the quantity of copies sold is pretty close to that 4 million figure, though. Monster Hunter Rise has received some pretty stellar reviews, as it currently sits at an 87 on Metacritic. In addition to positive reviews fueling hype for the game, Capcom also published two demos before Monster Hunter Rise released, giving players a chance to test out some of Rise‘s additions before purchasing the full title.

For the time being, Monster Hunter Rise is a Switch-exclusive title, but it won’t be that way forever. Capcom has also confirmed that the game will be coming to PC in early 2022, but beyond that, we don’t have any details about when the PC version will launch. That vague release window is enough to tell us that Monster Hunter Rise will be exclusive to Switch for around a year, so we’re guessing that many of the Monster Hunter faithful won’t wait for the PC version to launch before diving in.

So, with one weekend in the books, it seems that Monster Hunter Rise is off to a very strong start. We’ll undoubtedly hear more about the game’s successes from Capcom in the future, so stay tuned for more.



Why your SaaS success is all about ‘Net Dollar Retention’

If you’ve ever taken a marketing 101 course, you’ve learned that keeping a customer is much more profitable than acquiring new customers. It’s an old piece of wisdom but still valid today. 

Unfortunately, many SaaS companies forget about it and concentrate their efforts on generating new leads. The costs of acquiring new customers get very high, very quickly. And if you’re unable to bind those users to your product, that spend is completely wasted.

Most people judge the performance of a SaaS company on its MRR (monthly recurring revenue), while judging it on its NDR might give much more valuable insight.

Why? Well, it’s not uncommon for a company’s MRR to increase while its NDR deteriorates. In other words, the company is bleeding money.

One could argue that NDR is much more than a number; it quantifies how much your clients value or love your product and how much of an impact you’re making on their lives. Alongside customer lifetime value (LTV), I’d say it is one of the most critical metrics for SaaS companies.

With that in mind, upgrades or downgrades and churns must be considered while judging a company’s performance. 

What is Net Dollar Retention?

NDR is a metric expressed as a percentage. It shows how much revenue from existing customers a company retains compared with a previous period, taking upgrades, downgrades and churn into account.

NDR, or Net Dollar Retention, is not as well known as MRR, but it is a very valuable (if not more valuable) metric, especially for SaaS entrepreneurs. Why is NDR so important? Well, first and foremost for me is the fact that it’s very much possible to grow your MRR while losing money.

This scenario occurs when your marketing department is on fire. The acquisition of new users creates a revenue stream that exceeds the net reduction in revenue from your existing user base.  

Let’s say your company starts this month with €100,000 in MRR. It books €50,000 in new subscriptions, zero in expansion revenue, suffers €20,000 in downgrades, and €5,000 in churn.

In this example, new bookings added the equivalent of 50% of your starting MRR, and your overall MRR still grew by 25%. And yes, you should bring out the champagne for that. But your NDR is only 75%: you lost 25% of the MRR from your existing user base. Also, what was your marketing budget? How much did you spend to acquire these new users? Money is leaking from your business…

To understand Net Dollar Retention, you must consider expansion (upgrades within your existing customer base), downgrades (of packages) and, most of all, churn, and their effect on monthly recurring revenue (MRR).

Let’s dig a bit deeper into what this means before we address the essentials of NDR.

Increases in MRR

MRR can increase via two broad mechanisms:

  • Through newly onboarded customers
  • Through an increase in usage or upgrades within your existing customer base

Simply put, MRR increases when your existing customers start spending more on your product. By upgrading a standard subscription to a premium subscription, for example. Or when your marketing department does a bang-up job. 

An example of such an upgrade would be a user moving from a €10 basic subscription to a €50 premium subscription. In this case, the expansion revenue would be €40, the net increase resulting from the upgrade.

€50-€10 = €40 increase in MRR

Decreases in MRR

Life can be hard, especially when you’re a SaaS entrepreneur. Sometimes you lose clients, and this decreases your MRR. This can happen in two ways:

  • Through downgrades in usage within your existing customer base
  • Through churn, where customers cease to do business with you

Downgrades are any decrease in revenue caused by downgrading in use, which is basically the opposite of our previous example.

A downgrade would be a user moving from a €50 premium subscription to a €10 basic subscription. The downgrade in revenue, in this case, would be €40, which is the net decrease resulting from the downgrade.

€10-€50 = (€40)

Churn

I’d define churn as a disaster. Or, more mildly put, as losing users.

An example of churn is a user leaving the platform from a €50 subscription. The churn would be €50, the net loss resulting from the user leaving your platform. 

But with that out of the way, let’s get to the meat of this article.

Calculating Net Dollar Retention

NDR accounts for the changes in MRR caused by expansion, upgrades, downgrades and churn within an existing customer base.

It is expressed as a percentage and calculated using the following equation.

NDR = (Starting MRR + Expansion − Downgrades − Churn) / Starting MRR × 100

For example, a company starts the month with €10,000 in recurring revenue. Over the month, it adds €2,500 in expansion revenue, has €1,000 in downgrades and another €500 in churn.

(€10,000 + €2,500 − €1,000 − €500) / €10,000 × 100 = 110% NDR

The company now ends the month with €11,000 in MRR, an NDR of 110%.
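
For readers who prefer code, here is a small helper (not from the article itself) that reproduces the formula above and checks both of this section's examples:

```python
def net_dollar_retention(starting_mrr: float, expansion: float,
                         downgrades: float, churn: float) -> float:
    """Return NDR as a percentage for one period."""
    return (starting_mrr + expansion - downgrades - churn) / starting_mrr * 100

# Worked example from this section: 110% NDR.
print(net_dollar_retention(10_000, 2_500, 1_000, 500))   # 110.0

# Earlier example: strong new bookings, but only 75% NDR,
# since new-customer revenue is excluded from the calculation.
print(net_dollar_retention(100_000, 0, 20_000, 5_000))   # 75.0
```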

Making use of NDR

A Net Dollar Retention below 100% means churn and downgrades were bigger than the growth you realized with your existing customers. You are losing users, or they’re spending less on your product. The role that your product plays in their life is not significant enough. They can do without you. Or, with less of you. Both are a little painful.  

It’s like being dumped. 

This analogy may seem a little far-fetched, but I tend to disagree. People don’t leave something that they really love. Or, in this case, a product they really love. Maybe, just maybe, you’re not lovable enough, and it is time to invest in the relationship.

How to improve the NDR?

If your NDR is bad, don’t worry. My team and I have worked with many SaaS companies, and we’ve seen that it’s more than possible to beat churn and turn a poor NDR into a healthy one.

Of course, there are plenty of ways to turn NDR around, but the one I have the most experience with is through human-centered design. A human connection built straight into your application will make your users fall in love and make sure they’ll stay with you as long as possible. 

Design helps you get your product from liked to loved. What does this mean? Well, your marketing efforts become more fruitful, your clients more positive, and VCs start banging on your door. To make sure your users never leave you, invest in upgrading the design of your SaaS product.

So calculate your NDR and find the best way to get it to where you want it to be.



