
Naver’s large language model is powering shopping recommendations

Hear from CIOs, CTOs, and other C-level and senior execs on data and AI strategies at the Future of Work Summit this January 12, 2022. Learn more


In June, Naver, the Seongnam, South Korea-based company behind the eponymous search engine, announced that it had trained one of the largest AI language models of its kind, called HyperCLOVA. Naver claimed that the system was trained on 6,500 times more Korean data than OpenAI’s GPT-3 and contained 204 billion parameters, the parts of the machine learning model learned from historical training data. (GPT-3 has 175 billion parameters.)

HyperCLOVA was seen as a notable achievement because of the scale of the model and because it fits into the trend of generative model “diffusion,” with multiple actors developing GPT-3-style models, like Huawei’s PanGu-Alpha (stylized PanGu-α). The benefits of large language models — including the ability to generate human-like text for marketing and customer support purposes — were previously limited to English because companies lacked the resources to train these models in other languages.

In the months since HyperCLOVA was developed, Naver has begun using it to personalize search results on the Naver platform, Naver executive officer Nako Sung told VentureBeat in an interview. It’ll also soon become available in private beta through HyperCLOVA Studio, a no-code tool that’ll allow developers to access the model for text generation and classification tasks.
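Naver hasn’t publicly documented HyperCLOVA Studio’s API, but a GPT-3-style text-generation call generally boils down to a prompt plus a handful of sampling parameters. The sketch below is purely illustrative — the function and field names are assumptions, not Naver’s actual interface:

```python
import json

# Hypothetical sketch only: HyperCLOVA Studio is in private beta, and the
# field names below are illustrative, not Naver's actual API.
def build_generation_request(prompt: str, max_tokens: int = 64,
                             temperature: float = 0.7) -> dict:
    """Assemble a JSON-style payload for a GPT-3-style completion call."""
    return {
        "prompt": prompt,           # the text the model should continue
        "max_tokens": max_tokens,   # cap on the length of the completion
        "temperature": temperature, # higher values -> more varied output
    }

payload = build_generation_request("Summarize these reviews in one line: ...")
body = json.dumps(payload, ensure_ascii=False)
```

Classification tasks fit the same shape: the label set is folded into the prompt and the model completes with the chosen label.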

“Initially used to correct typos in search queries on Naver Search, [HyperCLOVA] is now enabling many new features on our ecommerce platform, Naver Shopping, such as summarizing multiple consumer reviews into one line, recommending and curating products based on users’ shopping preferences, or generating trendy marketing phrases for featured shopping collections,” Sung said. “We also launched CLOVA CareCall, a … conversational agent for elderly citizens who live alone. The service is based on HyperCLOVA’s natural conversation generation capabilities, allowing it to have human-like conversations.”

Large language models

Training HyperCLOVA, which can understand English and Japanese in addition to Korean, required large-scale datacenter infrastructure, according to Sung. Naver leveraged a server cluster of 140 Nvidia DGX A100 nodes configured as a SuperPod, which the company claims can deliver up to 700 petaflops of compute power.

It took months to train HyperCLOVA on 2TB of Korean text data, much of which came from user-generated content on Naver’s platforms. For example, one source was Knowledge iN, a Quora-like, Korean-language community where users can ask questions on topics to receive answers from experts. Another was public blog posts from people who use free web hosting services provided through Naver.


Sung says that this differentiates HyperCLOVA from previous large language models like GPT-3, which have a limited ability to understand the nuances of languages besides English. He claims that by having the model draw on the “collective intelligence of Korean culture and society,” it can better serve Korean users — and at the same time reduce Naver’s dependence on other, less Asia Pacific-centric AI services.

In a recent issue of his Import AI newsletter, former OpenAI policy director Jack Clark asserted that because generative models ultimately reflect and magnify the data they’re trained on, different nations care a lot about how their own culture is represented in these models. “[HyperCLOVA] is part of a general trend of different nations asserting their own AI capacity [and] capability via training frontier models like GPT-3,” he continued. “[We’ll] await more technical details to see if [it’s] truly comparable to GPT-3.”

Some experts have argued that because the companies developing influential AI systems are predominantly located in the U.S., China, and the E.U., a disproportionate share of economic benefit will fall inside these regions — potentially exacerbating inequality. In an analysis of publications at two major machine learning conferences, NeurIPS 2020 and ICML 2020, none of the top 10 countries in terms of publication index were located in Latin America, Africa, or Southeast Asia. Moreover, a recent report from Georgetown University’s Center for Security and Emerging Technology found that while 42 of the 62 major AI labs are located outside of the U.S., 68% of the staff are located within the United States.

“These large amounts of collective intelligence are continuously enriching and fortifying HyperCLOVA,” Sung said. “The most well-known hyperscale language model is GPT-3, and it is trained mainly with English data, and is only taught 0.016% of Korean data out of the total input … [C]onsidering the impact of hyperscale AI on industries and economies in the near future, we are confident that building a Korean language-based AI is very important for Korea’s AI sovereignty.”
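Sung’s 0.016% figure can be sanity-checked against the “6,500 times more Korean data” claim with back-of-envelope arithmetic, assuming the roughly 300 billion training tokens reported in the GPT-3 paper (that token count and the resulting estimates are approximations, not numbers Naver has published):

```python
# Back-of-envelope consistency check on the two claims in this article.
gpt3_total_tokens = 300e9   # approximate, from the GPT-3 paper
korean_share = 0.00016      # the 0.016% figure quoted above

gpt3_korean_tokens = gpt3_total_tokens * korean_share   # ~48 million
hyperclova_korean_tokens = gpt3_korean_tokens * 6500    # ~312 billion

# The two claims are mutually consistent: they imply HyperCLOVA saw
# hundreds of billions of Korean tokens, versus tens of millions for GPT-3.
```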

Challenges in developing models

Among others, leading AI researcher Timnit Gebru has questioned the wisdom of building large language models, examining who benefits from them and who is harmed. It’s well-established that models can amplify the biases in data on which they were trained, and the effects of model training on the environment have been raised as serious concerns.

To address the issues around bias, Sung says that Naver is in discussions with “external experts” including researchers at Seoul National University’s AI Policy Initiative and plans to form an advisory committee on AI ethics in Korea this year. The company also released a benchmark — Korean Language Understanding Evaluation (KLUE) — to evaluate the natural language understanding capabilities of Korean language models including HyperCLOVA.

“We recognize that while AI can make our lives convenient, it is also not infallible like all other technologies used today,” he added. “While pursuing convenience in the service we provide, Naver will also endeavor to explain our AI service in a manner that users can easily understand upon their request or when necessary … We will pay attention to safety during all stages of designing and testing our services, including after the service is deployed, to prevent a situation where AI as a daily tool threatens life or causes physical harm to people.”

Real-world applications

Currently, Naver says that HyperCLOVA is being tapped for various Naver services including Naver Smart Stores, the company’s ecommerce marketplace, where it’s “correcting” the names of products by generating “more attractive” names versus the original search-engine-optimized SKUs. In another ecommerce use case, Naver is applying HyperCLOVA to create product recommendation systems tailored to shoppers’ individual preferences.


“While HyperCLOVA doesn’t specifically learn users’ purchase logs, we discovered that it was able to recommend products on our marketplace to some extent. So, we fine-tuned this capability and introduced it as one of our ecommerce features. Unlike the existing recommendation algorithms, this model shows the ‘generalized’ ability to perform well on cold items, cold users and cold services,” Sung said. “Recommending a certain gift to someone is not a suitable problem for traditional machine learning to solve. That’s because there is no information about the recipient of the gift … [But] with HyperCLOVA, we were able to make this experience possible.”
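Naver hasn’t published how it prompts HyperCLOVA for recommendations, but the cold-start idea Sung describes can be sketched generically: instead of looking up a purchase history, the task is phrased in plain text and handed to the language model. The prompt format and the `generate` stub below are illustrative assumptions:

```python
# Illustrative sketch of prompt-based, cold-start recommendation.
# `generate` stands in for a large language model's completion call.
def generate(prompt: str) -> str:
    return "a box of premium green tea"  # stubbed model output

def recommend_gift(recipient_description: str, occasion: str) -> str:
    # No purchase logs required: everything the model conditions on
    # is contained in the prompt itself, which is why this works for
    # cold users and cold items.
    prompt = (
        f"Recipient: {recipient_description}\n"
        f"Occasion: {occasion}\n"
        "A thoughtful gift suggestion:"
    )
    return generate(prompt)

recommend_gift("a 70-year-old who enjoys hiking and tea", "birthday")
```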

HyperCLOVA is also powering an AI-driven call service for senior citizens who live alone, which Naver says it plans to refine to provide more personalized conversations in the future. Beyond this, Naver says it’s developing a multilingual version of HyperCLOVA that can understand two or more languages at the same time and an API that will allow developers to build apps and services on top of the model.

The pandemic has accelerated the world’s digital transformation, pushing businesses to become more reliant on software to streamline their processes. As a result, the demand for natural language technology is now higher than ever — particularly in the enterprise. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their natural language processing budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%.

The global NLP market is expected to climb in value to $35.1 billion by 2026.

“The most interesting thing about HyperCLOVA is that its usability is not limited only to AI experts, such as engineers and researchers, but it has also been used by service planners and business managers within our organization. Most of the winners [in a recent HyperCLOVA hackathon] were from non-AI developer positions, which I believe proves that HyperCLOVA’s no-code AI platform will empower everyone with AI capabilities, significantly accelerating the speed of AI transformation and changing its scope in the future,” Sung said.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Repost: Original Source and Author Link


AI Weekly: UN recommendations point to need for AI ethics guidelines



The U.N.’s Educational, Scientific, and Cultural Organization (UNESCO) this week approved a series of recommendations for AI ethics, which aim to recognize that AI can “be of great service” but also raise “fundamental … concerns.” UNESCO’s 193 member countries, including Russia and China, agreed to conduct AI impact assessments and place “strong enforcement mechanisms and remedial actions” to protect human rights.

“The world needs rules for artificial intelligence to benefit humanity. The recommendation[s] on the ethics of AI is a major answer,” UNESCO chief Audrey Azoulay said in a press release. “It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its … member states in its implementation and ask them to report regularly on their progress and practices.”

UNESCO’s policy document highlights the advantages of AI while seeking to reduce the risks it entails. Toward this end, the recommendations address issues around transparency, accountability, and privacy, in addition to data governance, education, culture, labor, health care, and the economy.

“Decisions impacting millions of people should be fair, transparent, and contestable,” UNESCO assistant director-general for social and human sciences Gabriela Ramos said in a statement. “These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepening them.”

The recommendations follow on the heels of the European Union’s proposed regulations to govern the use of AI across the bloc’s 27 member states. The EU proposal would ban the use of biometric identification systems in public, like facial recognition — with some exceptions — and would prohibit AI in social credit scoring, the infliction of harm (such as in weapons), and subliminal behavior manipulation.

The UNESCO recommendations also explicitly ban the use of AI for social scoring and mass surveillance, and they call for stronger data protections to provide stakeholders with transparency, agency, and control over their personal data. Beyond this, they stress that AI adopters should favor data, energy, and resource-efficient methods to help fight against climate change and tackle environmental issues.

Growing calls for regulation

While the policy is nonbinding, China’s support is significant because of the country’s historical — and current — stance on the use of AI surveillance technologies. According to the New York Times, the Chinese government — which has installed hundreds of millions of cameras across the country’s mainland — has piloted the use of predictive technology to sweep a person’s transaction data, location history, and social connections to determine whether they’re violent. Chinese companies such as Dahua and Huawei have developed facial recognition technologies, including several designed to target Uighurs, an ethnic minority widely persecuted in China’s Xinjiang province.

Underlining the point, contracts from the city of Zhoukou show that officials spend as much on surveillance as they do on education — and more than twice as much as on environmental protection programs.

Given China’s expressed intent to surveil 100% of public spaces within its borders, it seems unlikely to reverse course — UNESCO policy or not. But according to Ramos, the hope is that the recommendations, particularly the emphasis on addressing climate change, have an impact on the types of AI technologies that corporations, as well as governments, pursue.

“[UNESCO’s recommendations are] the code to change the [AI sector’s] business model, more than anything,” Ramos told Politico in an interview.

The U.S. isn’t a part of UNESCO and isn’t a signatory of the new recommendations. But bans on technologies like facial recognition have picked up steam across the U.S. at the local level. Facial recognition bans had been introduced in at least 16 states including Washington, Massachusetts, and New Jersey as of July. California lawmakers recently passed a law that will require warehouses to disclose the algorithms and metrics they use to track workers. A New York City bill bans employers from using AI hiring tools unless a bias audit can show that they won’t discriminate. And in Illinois, the state’s biometric information privacy act bans companies from obtaining and storing a person’s biometrics without their consent.

Regardless of their impact, the UNESCO recommendations signal growing recognition on the part of policymakers of the need for AI ethics guidelines. The U.S. Department of Defense earlier this month published a whitepaper — circulated among the National Oceanic and Atmospheric Administration, the Department of Transportation, ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service — outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. NATO recently released an AI strategy listing the organization’s principles for “responsible use [of] AI.” And the U.S. National Institute of Standards and Technology is working with academia and the private sector to develop AI standards.

Regulation with an emphasis on accountability and transparency could go a long way toward restoring trust in AI systems. According to a survey conducted by KPMG, across five countries — the U.S., the U.K., Germany, Canada, and Australia — over a third of the general public says that they’re unwilling to trust AI systems in general. That’s not surprising, given that biases in unfettered AI systems have yielded wrongful arrests, racist recidivism scores, sexist recruitment, erroneous high school grades, offensive and exclusionary language generators, and underperforming speech recognition systems, to name a few injustices.

“It is time for the governments to reassert their role to have good quality regulations, and incentivize the good use of AI and diminish the bad use,” Ramos continued.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat



Lucidworks: Chatbots and recommendations boost online brand loyalty



Pandemic-related shutdowns led consumers to divert the bulk of their shopping online — and many of those shoppers are now hesitant to return to stores as businesses begin to open back up. A recent survey of 800 consumers conducted by cloud company Lucidworks found that 59% of shoppers plan either to avoid in-person shopping as much as possible or to visit in-person stores less often than before the pandemic.

Who is loyal

Above: Shoppers across the U.S. and U.K. agree that high-quality products, personalized recommendations, and excellent customer service are the top three reasons they’re brand-loyal.

Image Credit: Lucidworks

As the world stabilizes, shoppers want brands to provide a multi-faceted shopping experience — expanded chatbot capabilities, diverse recommendations, and personalized experiences that take into account personal preferences and history, Lucidworks found in its study. More than half of the shoppers surveyed, 55%, said they use a site’s chatbot on every visit, and U.S. shoppers use chatbots more than their U.K. counterparts, at 70%.

The majority of shoppers, 70%, use chatbots for customer service, and 53% said they want a chatbot to help them find specific products or check product compatibility. A little less than half, or 48%, said they use chatbots to find more information about a product, and 42% use chatbots to find policies such as shipping information and how to get refunds.

A quarter of shoppers will leave the website to seek information elsewhere if the chatbot doesn’t give them the answer. Brands that deploy chatbots capable of going beyond basic FAQs and performing product and content discovery will provide the well-rounded chatbot experience shoppers expect, Lucidworks said.

Respondents also pointed to the importance of content recommendations. The survey found that almost a third of shoppers said they find recommendations for “suggested content” useful, and 61% of shoppers like to do research via reviews on the brand’s website where they’ll be purchasing from. A little over a third — 37% — of shoppers use marketplaces such as Amazon, Google Shopping, and eBay for their research.

Brands should try to offer something for every step in the shopping journey, from research to purchase to support, to keep shoppers on their sites longer. How online shopping will look in coming years is being defined at this very moment as the world reopens. Brands that are able to understand a shopper’s goal in the moment and deliver a connected experience that understands who shoppers are and what they like are well-positioned for the future, Lucidworks said.

Lucidworks used a self-serve survey tool, Pollfish, in late May 2021 to survey 800 consumers over the age of 18—400 in the U.K. and 400 in the U.S.—to understand how shoppers interact with chatbots, product and content recommendations, where they prefer to do research, and plans for future in-store shopping.

Read the full U.S./U.K. Consumer Survey Report from Lucidworks.



YouTube test detects products in videos to make recommendations

Reports last year suggested Google was looking to make YouTube into more of a shopping hub, but we’ve seen little progress in that arena since. That’s changing a bit today, as Google announced it’s testing an automated list of products detected in videos.

Here’s the short bit of text directly from YouTube’s test feature support page:

[March 22, 2021] Testing automated list of products detected in videos: We are experimenting with a new feature that displays a list of products detected in some videos, as well as related products. The feature will appear in between the recommended videos, to viewers scrolling below the video player. The goal is to help people explore more videos and information about those products on YouTube. This feature will be visible to people watching videos in the US.

And that’s about it. While the document doesn’t include any images of what the feature might look like, it appears YouTube will add another section to its player page, in which you’ll see recommended products based on what its vision AI thinks is in a video.

The key word in the above quote is ‘detected,’ as that’s what suggests Google isn’t just looking at the video title and tags to figure out what products are in the video. Instead, it suggests Google is analyzing the video and/or audio content itself to figure out what to recommend. We’ve reached out to Google for clarification and will update this post if we hear back.

In that sense implementing vision AI may seem a little redundant, but the tool could presumably detect other products that may not be the main subject of the video as well. The company could also use the knowledge to help make more useful video suggestions, as its never-ending tweaking of the recommendation algorithm continues.
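YouTube hasn’t described its pipeline, but the detect-then-aggregate idea reads roughly like the sketch below, in which the per-frame detector is a stub and the minimum-frame threshold is an invented detail for suppressing one-off misdetections:

```python
from collections import Counter

# Illustrative sketch only: YouTube has not published its system. A
# frame-level object detector (stubbed here) emits product labels, and
# aggregating across frames yields a ranked product list for the shelf.
def detect_products(frame):
    # stand-in for a real vision model's per-frame predictions
    return frame["labels"]

def rank_products(frames, min_frames=2):
    counts = Counter(label for f in frames for label in set(detect_products(f)))
    # keep labels seen in at least `min_frames` frames to suppress flicker
    return [label for label, n in counts.most_common() if n >= min_frames]

frames = [{"labels": ["headphones", "laptop"]},
          {"labels": ["headphones"]},
          {"labels": ["mug"]}]
rank_products(frames)  # ['headphones']
```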

While such a change won’t suddenly turn YouTube into Amazon, it’s clear Google is experimenting with ways to make more money off its viewership. Hopefully, the creators end up seeing a return from those profits too.

Via 9to5Google


Published March 23, 2021 — 00:37 UTC





Bearing.ai emerges from stealth to power recommendations for shipping boat captains

Bearing.ai emerged from stealth today to launch AI-powered software that provides predictions to the maritime shipping industry and tanker boat captains. The idea is to optimize shipping route navigation based on fuel efficiency, profit, and safety. Since the company was founded in 2019, Bearing.ai has raised $3 million from the AI Fund, a $175 million endeavor led by former Google Brain cofounder Andrew Ng, as well as Japanese shipping company Mitsui and Co.

Bearing.ai CEO Dylan Kiel told VentureBeat the startup was able to train its first models on historical data from 2,500 ships, provided by investor Mitsui and Co. As part of the arrangement, Bearing.ai announced deals to provide services to 300 K Line vessels, as well as to shipping companies MOL and ZeroNorth. Fuel consumption is Bearing.ai’s primary focus, Kiel said, because it’s the biggest single driver of operating costs for shipping companies. When making predictions, Bearing.ai takes in sensor data and considers factors like ship dimensions, location, and weather conditions such as wind speed and wave size.

“Weather is one of the single biggest drivers of the variance that occurs with fuel consumption for a given voyage. I can have the same ship going on the same route, let’s say Tokyo to San Diego, and [carrying] the same cargo. And the consumption I have from voyage A to voyage B could be different by 30-40% based upon the weather,” Kiel said.

Bearing.ai claims its models are capable of predicting the fuel consumption of a container or hull ship with 98% accuracy, a feat made possible by fuel sensors and speed sensors collecting data on a minute-by-minute basis.
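Bearing.ai’s models are proprietary, so the toy formula below only illustrates why speed and weather dominate the prediction; the cube-law base rate and the penalty coefficients are invented for illustration:

```python
def estimate_fuel_tons_per_day(speed_knots: float, wave_height_m: float,
                               headwind_knots: float,
                               base_tons_at_14kn: float = 30.0) -> float:
    # Toy model: hull resistance grows roughly with the cube of speed,
    # and adverse weather adds a percentage penalty on top. Under these
    # (invented) coefficients, a few meters of waves plus a strong
    # headwind produces the 30-40% swing Kiel describes.
    speed_term = base_tons_at_14kn * (speed_knots / 14.0) ** 3
    weather_penalty = 1.0 + 0.05 * wave_height_m + 0.01 * max(headwind_knots, 0.0)
    return speed_term * weather_penalty

calm = estimate_fuel_tons_per_day(14.0, 0.0, 0.0)    # 30.0 tons/day
rough = estimate_fuel_tons_per_day(14.0, 4.0, 20.0)  # 42.0 tons/day, +40%
```

A production model would learn such relationships from the minute-by-minute sensor data rather than hard-coding them.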

Container ships enable a vast amount of global trade but saw a sharp decline in 2020 due to the COVID-19 pandemic. Like other industries, shipping faces pressure to automate, and Kiel said Bearing.ai wants to help companies consider a range of options to save money.

“It’s not just choosing the right route for one ship. If you choose that right route for that ship that impacts what the other ships need to do and that impacts what you’re going to do with your contract and when you’re going to clean that ship and so on … there’s a lot of decision points that ultimately are all interconnected if you’re trying to optimize the whole system,” he said. “Pretty much every decision you can make as a [shipping] company — whether it’s fuel to use, the ship to use, the route to take, how you position your fleet — all of that impacts your ultimate operational efficiency.”

Other examples of automation startups entering the maritime space include Sea Machines, which is working on autonomous shipping navigation, and Orca AI, which makes systems to help ships avoid collisions. Also of note is recent work by AI researchers to create amphibious robots capable of movement on sea and land.

Bearing.ai was founded in June 2019 and is based in Palo Alto, California. The company has 10 employees.



Best USB-C power adapters for iPhone 12: Buying tips, recommendations

If you’ve ordered a new iPhone 12, you’ll notice that the box is a whole lot slimmer than in previous years. That’s because the bulkiest accessory is gone: the power adapter. In fact, Apple has removed the charger from all iPhones it sells, so whether you’re spending $399 on an iPhone SE or $1,399 on a maxed-out iPhone 12 Pro Max, you need to bring your own charger.

Any old charger and Lightning cable you have lying around will work, of course. But if you’ve been using Apple’s old 5W adapter, it’s a perfect time to upgrade. For the first time, Apple is supplying a USB-C-to-Lightning cable in all iPhone boxes to allow for fast charging, so all you need is the right charger.

Watch the wattage


You probably have Apple’s 5-watt charger. It works with the iPhone 12, but settle in, because the charging will be slow. 

The most important thing to consider when buying a new charger is the amount of wattage it will provide to your device. For years, Apple supplied “good-enough” 5-watt chargers in the iPhone box, which take about 2.5 hours to fill up your iPhone. That was fine for the iPhone 7 and earlier, which didn’t support fast charging, but the newest iPhone 12 models can work with chargers that handle up to 20 watts. You can fill up about 50 percent of an iPhone 12’s battery in about 30 minutes with the right adapter.

So you should get a USB-C charger that’s capable of delivering a 20-watt charge. Quite frankly, it’s harder to find one that doesn’t than one that does, but you’ll want to make sure you’re at least getting the bare minimum to allow for maximum fast charging. You’ll also want to make sure the charger supports USB Power Delivery, which any third-party charger almost certainly will do.
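The 5-watt-versus-20-watt difference is easy to put in numbers. Using the iPhone 12’s roughly 10.8 Wh battery (a teardown figure, not an Apple spec) and an assumed 80% conversion efficiency:

```python
def charge_time_hours(battery_wh: float, charger_watts: float,
                      efficiency: float = 0.8) -> float:
    # Rough estimate: usable power = charger watts x conversion efficiency.
    # Real charging tapers as the battery fills, so treat this as a floor.
    return battery_wh / (charger_watts * efficiency)

IPHONE_12_WH = 10.78  # ~2,815 mAh at 3.83 V, per teardown figures

slow = charge_time_hours(IPHONE_12_WH, 5)   # ~2.7 hours, near the ~2.5 quoted
fast = charge_time_hours(IPHONE_12_WH, 20)  # ~0.67 hours, before the taper
```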

Check the size and the specs

Apple’s chargers have always been light, small, and portable, but some third-party adapters make them seem downright bulky. That’s due to the newest charging tech, gallium nitride (GaN), which allows for adapters that are significantly smaller and more power-efficient.

Charger makers have already begun replacing the silicon inside power adapters with gallium nitride, and the size difference is significant. For example, the Anker PowerPort Atom III is 35 percent smaller than the adapter Apple supplies with the 13-inch MacBook Pro, despite delivering the same 60-watt charge. Unless you’re buying one of the models here—which are all GaN except for Apple’s adapter—be sure to check out the dimensions in the technical specs.

Count the ports

Just because Apple only allows you to charge one device per plug doesn’t mean they all have to be that way. Many third-party adapters offer multiple ports on a single wall charger. If you’re going to be regularly charging more than one device at a time, buy an adapter with at least two ports—some have as many as four ports. You can even get a mix of USB-C and USB-A, depending on your needs.

Prongs: To fold or not to fold

After you decide how much power and how many ports you need, just one question remains: Do you want the prongs to fold or not? Some third-party chargers have folding prongs to protect the adapter as well as other items if you toss it in a bag, but Apple’s 20W charger and a few others have protruding prongs. It’s a small thing, but it could make a big difference in your travel bag.
