Lumigo joins race to fill serverless observability gap

Lumigo, a company that aims to plug a gap in serverless monitoring, has announced new features to do just that, along with its latest capital injection: $29 million in a Series A round of funding.

Serverless computing has exploded in popularity of late because it lets IT departments run code without thinking about servers. Cloud providers run the servers responsible for the application, and IT pays the provider only for the compute time it consumes.

But how does IT monitor the performance of those applications? In serverless environments, the infrastructure consists of transient functions confined within the provider’s proprietary boundaries, so installing monitoring agents to perform logging and tracing analysis is challenging.

These challenges have given rise to serverless monitoring tools such as Lumigo, Epsagon, and Splunk’s Observability Suite.

Lumigo’s visual map

Lumigo says it can now monitor containers, Kubernetes, and virtual machines, bringing both containers and full-fledged virtual machines into the hybrid distributed applications it covers. Lumigo can track requests service by service inside these apps, surface latency issues, and help locate and fix hard-to-reproduce bugs.

Lumigo’s distributed tracing is a one-click solution that lets developers seamlessly find and fix issues in serverless and microservices environments. Developers at a host of companies, including Medtronic, Fortinet, Berlitz, Optibus, Symantec, and Allianz, use Lumigo’s services.

Lumigo builds a virtual stack trace of all the services participating in a transaction and displays everything in a visual map, with no manual code changes needed to visualize the entire environment. Alongside the end-to-end execution duration of each service, Lumigo identifies your worst latency offenders. By leveraging machine learning, Lumigo can preempt issues and raise alerts before their cost implications grow.
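Lumigo’s internals aren’t public, but the core idea behind a virtual stack trace is straightforward: every service’s work within a transaction is recorded as a timed span under a shared trace ID, and the slowest span is the latency offender. A minimal sketch, with hypothetical service names:

```python
import time
import uuid

# Collected spans, keyed by trace ID: each span records a service name and
# its execution duration, forming a virtual stack trace of the transaction.
SPANS = {}

def traced(service_name, trace_id, fn):
    """Run one service's work and record its duration under a shared trace ID."""
    start = time.perf_counter()
    result = fn()
    SPANS.setdefault(trace_id, []).append(
        {"service": service_name, "duration": time.perf_counter() - start}
    )
    return result

def worst_latency_offender(trace_id):
    """Return the service with the longest recorded duration in a trace."""
    return max(SPANS[trace_id], key=lambda s: s["duration"])["service"]

# One transaction touching two hypothetical services.
trace_id = str(uuid.uuid4())
traced("auth-service", trace_id, lambda: time.sleep(0.01))
traced("db-service", trace_id, lambda: time.sleep(0.05))
print(worst_latency_offender(trace_id))  # db-service
```

Real tracing products do this by instrumenting the runtime automatically and propagating the trace ID across process boundaries in request headers, which is why no manual code changes are required.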

Several of Lumigo’s competitors match its capabilities. Epsagon, for instance, builds its services on the notion of distributed tracing. Similar to Lumigo, its AI-powered methods can preempt and neutralize issues before they occur by raising relevant alerts. Likewise, Splunk’s Observability Suite offers end-to-end observability for serverless applications through tracing and automated incident response, making real-time visibility and performance monitoring of microservices architectures a reality.

Overall, these tools go a long way toward closing the serverless observability gap by combining manual and automated observation techniques. Developers can spend more time writing functional code without having to worry about writing instrumentation. These benefits are what is motivating investors to get increasingly involved in companies like Lumigo.

This is Lumigo’s second round of funding, following its initial $8 million seed round two years ago.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021
  • networking features, and more

Become a member



AI Weekly: China’s massive multimodal model highlights AI research gap

This week, researchers at the Beijing Academy of Artificial Intelligence (BAAI) announced the release of Wu Dao 2.0, a multimodal AI model capable of generating text indiscernible from human-crafted prose — and more. Containing 1.75 trillion parameters, the parts of the machine learning model learned from historical training data, Wu Dao 2.0 is 10 times larger than OpenAI’s 175-billion-parameter GPT-3.

Wu Dao 2.0 is the latest example of what OpenAI policy director Jack Clark calls model diffusion, or multiple state and private actors developing GPT-3-style AI models. For example, Russia and France are training smaller-scale systems via Sberbank and LightOn’s PAGnol, while Korea’s Naver Labs is investing in the recently created HyperCLOVA. Clark notes that because these models reflect and magnify the data they’re trained on, different countries care about how their own cultures are represented in the models. The Wu Dao 2.0 announcement, then, is part of a general trend of nations asserting their own AI capabilities via training frontier models like GPT-3.

Wu Dao 2.0, which arrived three months after version 1.0’s March debut, is built on an open source system akin to Google’s Mixture of Experts, dubbed FastMoE. Mixture of Experts, a paradigm first proposed in the ’90s, keeps smaller models, each specialized in a different task, inside a larger model and routes inputs among them using a “gating network.” BAAI says Wu Dao 2.0 was trained with 4.9 terabytes of Chinese and English images and text both on clusters of supercomputers and conventional GPUs, giving it more flexibility than Google’s system because FastMoE doesn’t require proprietary hardware.
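The gating idea can be sketched in a few lines. This is a toy illustration of the Mixture of Experts paradigm, not FastMoE or Wu Dao’s actual architecture: a gating network scores the experts for a given input, only the top-scoring experts run, and their outputs are blended by the gate’s weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MixtureOfExperts:
    """Toy MoE: a gating network weights the outputs of several experts
    (here, plain linear maps standing in for specialized sub-models)."""
    def __init__(self, n_experts, d_in, d_out):
        self.experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
        self.gate = rng.normal(size=(d_in, n_experts))

    def forward(self, x, top_k=2):
        scores = softmax(x @ self.gate)       # gating weights, one per expert
        top = np.argsort(scores)[-top_k:]     # route input to top-k experts only
        out = sum(scores[i] * (x @ self.experts[i]) for i in top)
        return out / scores[top].sum()        # renormalize over chosen experts

moe = MixtureOfExperts(n_experts=4, d_in=8, d_out=3)
y = moe.forward(rng.normal(size=8))
print(y.shape)  # (3,)
```

Because only `top_k` experts execute per input, parameter count can grow far faster than per-example compute — which is how trillion-parameter models of this style stay trainable.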

Wu Dao 2.0’s multimodal design affords it a range of skills, including the ability to perform natural language processing, text generation, image recognition, and image generation tasks. It can write essays, poems, and couplets in traditional Chinese, as well as caption images and create nearly photorealistic artwork from natural language descriptions. According to Engadget, Wu Dao 2.0 can also power “virtual idols” and predict the 3D structures of proteins, like DeepMind’s AlphaFold.

“The way to artificial general intelligence is big models and big computer,” BAAI chair Dr. Zhang Hongjiang said in a statement. “What we are building is a power plant for the future of AI. With mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”

AI nationalism

Wu Dao 2.0’s release comes during a surge in tech nationalism globally, particularly in China and parts of the Eurozone. Last November, China imposed new rules around tech exports, with the country’s Ministry of Commerce adding 23 items to its restricted list. Following Nvidia’s announcement that it intends to acquire U.K.-based chipmaker Arm, the majority of U.K.-area IT experts said the government should intervene to protect the country’s tech sector, according to a survey from the industry’s professional body (The Chartered Institute for IT).

Former U.S. chief technology officer Michael Kratsios, among others, has suggested state adversaries are pursuing uses of AI technologies that “aren’t in alignment with American values.” In February, the White House said it would bump non-defense-related AI investment to $2 billion annually by 2022, while U.S. President Joe Biden has proposed an increase in the amount of federal R&D spending to $300 billion over four years. And a U.S. Senate panel last month approved the Endless Frontier Act, pending legislation that would authorize more than $110 billion for basic and advanced technological research over five years.

But U.S. superiority in AI is an increasingly dim prospect. France recently took the wraps off a $1.69 billion (€1.5 billion) initiative aimed at transforming the country into a “global leader” in AI research and training. In 2018, South Korea unveiled a multiyear, $1.95 billion (KRW 2.2 trillion) effort to strengthen its R&D in AI, with the goal of establishing six AI-focused graduate schools by 2022 and training 5,000 AI specialists. And China, whose AI Innovation Action Plan for Colleges and Universities called for the establishment of 50 new AI institutions in 2020, is expected to leapfrog past the European Union within the next several years if current trends continue.

BAAI is funded by the Beijing government, which put 340 million yuan ($53.3 million) into the academy in 2018 and 2019 alone. A Beijing official pledged to continue support in a 2019 speech.

In March, former Google CEO Eric Schmidt urged lawmakers to ramp up funding in the AI space to prevent China from becoming the biggest player in the global AI market. Schmidt suggested doubling the nation’s budget for R&D in AI each year until it hits $32 billion in 2026. Citing the U.S. National Security Commission on Artificial Intelligence, Schmidt also said lawmakers need to incentivize public-private partnerships to develop AI applications across government agencies.

“The government is not today prepared for this new technology,” Schmidt told CNN’s Fareed Zakaria, noting that the use of AI to produce and spread harmful information poses a “threat to democracy” and could ultimately be used as a weapon of war. “We believe this is a national emergency and a threat to our nation unless we get our act together with respect to focusing on AI in the federal government and international security.”

So how might the U.S. make up for lost ground, despite the many challenges ahead? Last July, the President’s Council of Advisors on Science and Technology (PCAST) released a report outlining what it believes must happen for the U.S. to advance “industries of the future,” including AI. PCAST recommended driving opportunities for AI education and training, in part by securing pledges to scale investments for training and education of the U.S. workforce in AI; developing AI curricula and performance metrics at K-12 through postgraduate levels and for certificate and professional programs; creating incentives, recruitment, and retention programs for AI faculty at universities; and increasing National Science Foundation and Department of Education investments in AI educators, scientists, and technologists at all levels.

Last August, in a step toward these goals, the White House established 12 new research institutes focused on AI and quantum information science. But the release of Wu Dao 2.0 highlights the work that must be done before the U.S. can close the AI gap with other world superpowers.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer



Survey finds talent gap is slowing enterprise AI adoption

AI’s popularity in the enterprise continues to grow, but practices and maturity remain stagnant as organizations run into obstacles while deploying AI systems. O’Reilly’s 2021 AI Adoption in the Enterprise report, which surveyed more than 3,500 business leaders, found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing it as a “significant” barrier — revealing how persistent the talent gap might be.

The findings agree with a recent KPMG survey, which revealed that a large number of organizations have increased their investments in AI to the point that executives are now concerned about moving too quickly. Indeed, Deloitte’s corporate survey found that 62% of respondents had adopted some form of AI, up from 53% in 2019. But adoption doesn’t always meet with success, as the roughly 25% of companies that have seen half their AI projects fail will tell you.

The O’Reilly report suggests that the second-most significant barrier to AI adoption is a lack of quality data, with 18% of respondents saying their organization is only beginning to realize the importance of high-quality data. Interestingly, participants in Alation’s State of the Data Culture Report said the same, with a clear majority of employees (87%) pegging data quality issues as the reason their organizations failed to successfully implement AI.

The percentage of respondents to O’Reilly’s survey who reported mature practices (26%) — that is, ones with revenue-bearing AI products — was roughly the same as in the last few years. The industry sector with the highest percentage of mature practices was retail, while education had the lowest percentage. Impediments to maturity ran the gamut but largely centered around a lack of institutional knowledge about machine learning modeling and data science (52%), understanding business use cases (49%), and data engineering (42%).

Talent gap

Laments over the AI talent shortage in the U.S. have become a familiar refrain from private industry. According to a report by Chinese technology company Tencent, there are about 300,000 AI professionals worldwide but “millions” of roles available. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers globally working on AI development and research, only 25% are “well-versed enough in the technology to work with teams to take it from research to application.” And a 2019 Gartner survey found that 54% of chief information officers view this skills gap as the biggest challenge facing their organization.

While higher education enrollment in AI-relevant fields like computer science has risen rapidly in recent years, few colleges have been able to meet student demand because of a lack of staffing. There’s evidence to suggest the number of instructors is failing to keep pace due to private sector poaching: from 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the growing movement of researchers from academia to corporations.

One curious trend highlighted in the survey was the share of organizations that say they’ve adopted supervised learning (82%) versus more cutting-edge techniques like self-supervised learning. Supervised learning entails training an AI model on a labeled dataset. By contrast, self-supervised learning generates labels from data by exposing relationships between the data’s parts, a step believed to be critical to achieving human-level intelligence.
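The distinction between the two is where the labels come from. A minimal illustration, with made-up data: in supervised learning the labels are supplied by humans, while in self-supervised learning they are derived from the data itself, as in next-word prediction.

```python
# Supervised learning: labels are provided externally (here, by a human).
texts  = ["great product", "terrible service"]
labels = [1, 0]  # human-assigned sentiment labels
supervised_pairs = list(zip(texts, labels))

# Self-supervised learning: labels are generated from the data's own
# structure, e.g. predicting each word from the words before it.
sentence = "the model predicts the next word".split()
self_supervised_pairs = [
    (sentence[:i], sentence[i]) for i in range(1, len(sentence))
]
print(self_supervised_pairs[0])  # (['the'], 'model')
```

Every sentence thus yields many training pairs for free, which is why self-supervision scales to the web-sized corpora behind models like GPT-3.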

Spotlight on supervised learning

According to Gartner, supervised learning will remain the type of machine learning organizations leverage most through 2022. That’s because it’s effective in a number of business scenarios, including fraud detection, sales forecasting, and inventory optimization. For example, a model could be fed data from thousands of bank transactions, with each transaction labeled as fraudulent or not, and learn to identify patterns that led to a “fraudulent” or “not fraudulent” output.
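The fraud scenario above can be sketched with invented data. This toy classifier uses a nearest-centroid rule as a stand-in for a real trained model; the feature choices (amount, hour of day) and all values are illustrative.

```python
# Toy supervised fraud detection: each transaction is (amount, hour-of-day),
# labeled 1 = fraudulent, 0 = legitimate.
train = [
    ((12.0, 14), 0), ((30.0, 10), 0), ((25.0, 16), 0),   # legitimate
    ((900.0, 3), 1), ((750.0, 2), 1), ((980.0, 4), 1),   # fraudulent
]

def centroid(points):
    """Mean feature vector of a set of transactions."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

legit = centroid([f for f, y in train if y == 0])
fraud = centroid([f for f, y in train if y == 1])

def predict(tx):
    """Label a new transaction by its closer class centroid."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return 1 if dist(tx, fraud) < dist(tx, legit) else 0

print(predict((850.0, 3)))   # 1 — large late-night transaction, flagged
print(predict((20.0, 12)))   # 0 — small daytime transaction
```

The pattern generalizes: any model that learns a mapping from labeled inputs to outputs, whether a centroid rule or a deep network, is doing supervised learning.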

“In the past two years, the audience for AI has grown but hasn’t changed much: Roughly the same percentage consider themselves to be part of a ‘mature’ practice; the same industries are represented, and at roughly the same levels; and the geographical distribution of our respondents has changed little,” wrote Mike Loukides, O’Reilly VP of content strategy and the report’s author. “[For example,] relatively few respondents are using version control for data and models … Enterprise AI won’t really have matured until development and operations groups can engage in practices like continuous deployment; until results are repeatable (at least in a statistical sense); and until ethics, safety, privacy, and security are primary rather than secondary concerns.”
