Categories: Security

Australian PM proposes defamation laws forcing social platforms to unmask trolls

Australian Prime Minister Scott Morrison is introducing new defamation laws that would force online platforms to reveal the identities of trolls or else face defamation liability themselves. As ABC News Australia explains, the laws would hold social platforms, like Facebook or Twitter, accountable for defamatory comments made against users.

Platforms will also have to create a complaint system that people can use if they feel that they’re a victim of defamation. As a part of this process, the person who posted the potentially defamatory content will be asked to take it down. But if they refuse, or if the victim is interested in pursuing legal action, the platform can then legally ask the poster for permission to reveal their contact information.

And if the platform can’t get the poster’s consent? The laws would introduce an “end-user information disclosure order,” allowing platforms to reveal a user’s identity without that permission. If a platform can’t identify the troll for any reason, or simply refuses to, it will be held liable for the troll’s defamatory comments itself. Since the law is specific to Australia, it appears that social networks wouldn’t have to identify trolls located in other countries.

“The online world should not be a wild west where bots and bigots and trolls and others are anonymously going around and can harm people,” Morrison said during a press conference. “That is not what can happen in the real world, and there is no case for it to be able to be happening in the digital world.”

As noted by ABC News Australia, a draft of the “anti-troll” legislation is expected this week, and it likely won’t reach Parliament until the beginning of next year. It remains unclear which specific details the platforms would be asked to collect and disclose. Even more concerning, we still don’t know how severe a case of defamation would have to be to warrant revealing someone’s identity. A loose definition of defamation could pose serious threats to privacy.

The proposed legislation is part of a larger effort to overhaul Australia’s defamation laws. In September, Australia’s High Court ruled that news sites are considered “publishers” of defamatory comments made by the public on their social media pages and should be held liable for them. That ruling has prompted outlets like CNN to block Australians from accessing their Facebook pages altogether. It also has implications for individuals who run social media pages, since it implies they, too, can be held responsible for defamatory comments left there.


Categories: AI

AI Weekly: Defense Department proposes new guidelines for developing AI technologies

This week, the Defense Innovation Unit (DIU), the division of the U.S. Department of Defense (DoD) that awards emerging technology prototype contracts, published a first draft of a whitepaper outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. The paper, which includes worksheets for system planning, development, and deployment, is based on DoD ethics principles adopted by the Secretary of Defense and was written in collaboration with researchers at Carnegie Mellon University’s Software Engineering Institute, according to the DIU.

“Unlike most ethics guidelines, [the guidelines] are highly prescriptive and rooted in action,” a DIU spokesperson told VentureBeat via email. “Given DIU’s relationship with private sector companies, the ethics will help shape the behavior of private companies and trickle down the thinking.”

Launched in March 2020, the DIU’s effort comes as corporate defense contracts, particularly those involving AI technologies, have come under increased scrutiny. When news emerged in 2018 that Google had contributed to Project Maven, a military AI project to develop surveillance systems, thousands of employees at the company protested.

For some AI and data analytics companies, like Oculus cofounder Palmer Luckey’s Anduril and Peter Thiel’s Palantir, military contracts have become a top source of revenue. In October, Palantir won most of an $823 million contract to provide data and analytics software to the U.S. Army. And in July, Anduril said that it received a contract worth up to $99 million to supply the U.S. military with drones aimed at countering hostile or unauthorized drones.

Machine learning, computer vision, and facial recognition vendors including TrueFace, Clearview AI, TwoSense, and AI.Reverie also have contracts with various U.S. Army branches. And in the case of Maven, Microsoft and Amazon, among others, have taken Google’s place.

AI development guidance

The DIU guidelines recommend that companies start by defining tasks, success metrics, and baselines “appropriately,” identifying stakeholders and conducting harms modeling. They also require that developers address the effects of flawed data, establish plans for system auditing, and “confirm that new data doesn’t degrade system performance,” primarily through “harms assessment[s]” and quality control steps designed to mitigate negative impacts.
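
To make the “confirm that new data doesn’t degrade system performance” requirement concrete, here is a minimal sketch of the kind of automated gate it implies. It is not taken from the DIU whitepaper; the function names and tolerance are hypothetical, and a real harms assessment would track far more than accuracy.

```python
# Hypothetical no-degradation gate: a retrained model is compared against the
# deployed baseline on a fixed evaluation set before rollout. Names and the
# tolerance threshold are illustrative assumptions, not DIU specifications.
from typing import Callable, Sequence, Tuple


def accuracy(predict: Callable, examples: Sequence[Tuple]) -> float:
    """Fraction of (input, label) pairs the model gets right."""
    return sum(1 for x, y in examples if predict(x) == y) / len(examples)


def passes_no_degradation_gate(candidate: Callable, baseline: Callable,
                               eval_set: Sequence[Tuple],
                               tolerance: float = 0.01) -> bool:
    """Block rollout if the candidate's accuracy on the fixed evaluation set
    falls more than `tolerance` below the deployed baseline's."""
    return accuracy(candidate, eval_set) >= accuracy(baseline, eval_set) - tolerance
```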

The guidelines aren’t likely to satisfy critics who argue that any guidance the DoD offers is paradoxical. As MIT Tech Review points out, the DIU says nothing about the use of autonomous weapons, which some ethicists and researchers as well as regulators in countries including Belgium and Germany have opposed.

But Bryce Goodman at the DIU, who coauthored the whitepaper, told MIT Tech Review that the guidelines aren’t meant to be a cure-all. For example, they can’t offer universally reliable ways to “fix” shortcomings such as biased data or inappropriately selected algorithms, and they might not apply to systems proposed for national security use cases that have no route to responsible deployment.

Studies indeed show that bias mitigation practices like those the whitepaper recommends aren’t a panacea when it comes to ensuring fair predictions from AI models. Bias in AI also doesn’t arise from datasets alone. Problem formulation, or the way researchers fit tasks to AI techniques, can also contribute. So can other human-led steps throughout the AI deployment pipeline, like dataset selection and preparation and architectural differences between models.

Regardless, the work could change how AI is developed by the government if the DoD’s guidelines are adopted by other departments. While NATO recently released an AI strategy and the U.S. National Institute of Standards and Technology is working with academia and the private sector to develop AI standards, Goodman told MIT Tech Review that he and his colleagues have already given the whitepaper to the National Oceanic and Atmospheric Administration, the Department of Transportation, and ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service.

The DIU says that it’s already deploying the guidelines on a range of projects covering applications including predictive health, underwater autonomy, predictive maintenance, and supply chain analysis. “There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail,” Goodman told MIT Tech Review.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

Categories: AI

DeepMind proposes new benchmark to improve robots’ object-stacking abilities

Stacking an object on top of another object is a straightforward task for most people. But even the most complex robots struggle to handle more than one such task at a time. Stacking requires a range of different motor, perception, and analytics skills, including the ability to interact with different kinds of objects. The level of sophistication involved has elevated this simple human task to a “grand challenge” in robotics and spawned a cottage industry dedicated to developing new techniques and approaches.

A team of researchers at DeepMind believes that advancing the state of the art in robotic stacking will require a new benchmark. In a paper to be presented at the Conference on Robot Learning (CoRL 2021), they introduce RGB-Stacking, which tasks a robot with learning how to grasp different objects and balance them on top of one another. While benchmarks for stacking tasks already exist in the literature, the researchers assert that what sets their work apart is the diversity of objects used and the evaluations performed to validate their findings. The results demonstrate that a combination of simulation and real-world data can be used to learn “multi-object manipulation,” providing a strong baseline for the problem of generalizing to novel objects, the researchers wrote in the paper.

“To support other researchers, we’re open-sourcing a version of our simulated environment, and releasing the designs for building our real-robot RGB-stacking environment, along with the RGB-object models and information for 3D printing them,” the researchers said. “We are also open-sourcing a collection of libraries and tools used in our robotics research more broadly.”

RGB-Stacking

With RGB-Stacking, the goal is to train a robotic arm via reinforcement learning to stack objects of different shapes. Reinforcement learning is a type of machine learning technique that enables a system — in this case a robot — to learn by trial and error using feedback from its actions and experiences.

RGB-Stacking places a gripper attached to a robot arm above a basket containing three objects: one red, one green, and one blue (hence the name RGB). The robot must stack the red object on top of the blue object within 20 seconds, while the green object serves as both an obstacle and a distraction.
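
The task’s reward structure can be pictured with a toy sketch like the one below. This is not DeepMind’s released environment; the state representation, tolerances, and episode logic are assumptions made purely for illustration of a sparse-reward setup: the agent is rewarded only if the red object ends up resting on the blue one before the 20-second limit.

```python
# Toy sketch of a sparse "red on blue within 20 seconds" reward. Positions are
# assumed to be (x, y, z) coordinates; all thresholds are made-up values.
import numpy as np

EPISODE_SECONDS = 20.0


def stacked(red_pos, blue_pos, xy_tol=0.03, z_gap=(0.02, 0.06)):
    """Very rough 'red resting on blue' test: aligned in x/y, sitting just above."""
    red, blue = np.asarray(red_pos), np.asarray(blue_pos)
    aligned = np.linalg.norm(red[:2] - blue[:2]) < xy_tol
    above = z_gap[0] < (red[2] - blue[2]) < z_gap[1]
    return aligned and above


def sparse_reward(red_pos, blue_pos, elapsed_s):
    """Return (reward, done) for the current timestep."""
    if elapsed_s > EPISODE_SECONDS:
        return 0.0, True          # time limit reached, no reward
    if stacked(red_pos, blue_pos):
        return 1.0, True          # success terminates the episode
    return 0.0, False
```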

According to DeepMind researchers, the learning process ensures that a robot acquires generalized skills through training on multiple object sets. RGB-Stacking intentionally varies the grasp and stack qualities that define how a robot can grasp and stack each object, which forces the robot to exhibit behaviors that go beyond a simple pick-and-place strategy.

“Our RGB-Stacking benchmark includes two task versions with different levels of difficulty,” the researchers explain. “In ‘Skill Mastery,’ our goal is to train a single agent that’s skilled in stacking a predefined set of five triplets. In ‘Skill Generalization,’ we use the same triplets for evaluation, but train the agent on a large set of training objects — totaling more than a million possible triplets. To test for generalization, these training objects exclude the family of objects from which the test triplets were chosen. In both versions, we decouple our learning pipeline into three stages.”

The researchers claim that their methods in RGB-Stacking result in “surprising” stacking strategies and “mastery” of stacking a subset of objects. Still, they concede that they only scratch the surface of what’s possible and that the generalization challenge remains unsolved.

“As researchers keep working to solve the open challenge of true generalization in robotics, we hope this new benchmark, along with the environment, designs, and tools we have released, contribute to new ideas and methods that can make manipulation even easier and robots more capable,” the researchers added.

As robots become more adept at stacking and grasping objects, some experts believe that this type of automation could drive the next U.S. manufacturing boom. In a recent study from Google Cloud and The Harris Poll, two-thirds of manufacturers said that the use of AI in their day-to-day operations is increasing, with 74% claiming that they align with the changing work landscape. Companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion by 2025.

Categories: AI

AI Weekly: NIST proposes ways to identify and address AI bias

The National Institute of Standards and Technology (NIST), the U.S. agency responsible for developing technical metrics to promote “innovation and industrial competitiveness,” this week published a document outlining feedback and recommendations for mitigating the risk of bias in AI. The paper, about which NIST is accepting comments until August, proposes an approach for identifying and managing “pernicious” biases that can damage public trust in AI.

As NIST scientist Reva Schwartz, who coauthored the paper, points out, AI is transformative in its ability to make sense of data more quickly than humans can. But as AI pervades the world, it’s becoming clear that its predictions can be skewed by algorithmic and data biases. Making matters worse, some AI systems are built to model complex concepts that can’t be directly measured by data in the first place. Hiring algorithms, for example, rely on proxies, some of them dangerously imprecise, such as “area of residence” or “education level,” for the concepts they attempt to capture.

The effects are often catastrophic. Biases in AI have yielded wrongful arrests, racist recidivism scores, sexist recruitment, erroneous high school grades, offensive and exclusionary language generators, and underperforming speech recognition systems, to name a few injustices. Unsurprisingly, trust in AI systems is eroding. According to a KPMG survey conducted across five countries (the U.S., the U.K., Germany, Canada, and Australia), over a third of the general public says they’re unwilling to trust AI systems in general.

Proposed framework

The NIST document lays out a framework to spot and address AI biases at different points in a system’s lifecycle, from conception, iteration, and debugging to release. It starts at the pre-design or ideation stage before moving on to design and development and, finally, deployment.

At the pre-design phase, there’s a lot of pressure to “get things right,” the NIST coauthors note, since many of the downstream processes hinge on decisions made here. Central to these decisions is who makes them and which people or teams have the most power or control over them; a narrow set of decision-makers can reflect limited points of view, shape later stages and decisions, and lead to biased outcomes.

For example, it’s an obvious risk to build predictive models for scenarios already known to be discriminatory, like hiring. Yet developers often don’t address the possibility of inflated expectations related to AI. Indeed, current assumptions in development often revolve around the idea of technological solutionism, the perception that technology will lead to only positive solutions.

The design and development phases present other, related sets of challenges. Here, data scientists are often singularly focused on performance and optimization, which can be sources of bias in their own right. For instance, modelers will almost always select the most accurate machine learning models. But not taking context into consideration can lead to biased results for certain populations, as can the use of aggregated data about groups to make predictions about individual behavior. This latter type of bias, known as an “ecological fallacy,” unintentionally weights certain factors such that societal inequities are exacerbated.
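
A simple way to see how a single aggregate accuracy number can hide this kind of bias is to disaggregate the metric by group. The sketch below is illustrative only and not drawn from the NIST document; the group labels and the disparity threshold are hypothetical.

```python
# Hypothetical per-group evaluation: one overall accuracy score can mask large
# gaps between populations, so compute and compare group-level scores instead.
from collections import defaultdict


def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by a parallel sequence of group labels."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}


def accuracy_gap_exceeds(y_true, y_pred, groups, max_gap=0.05):
    """Flag the model if the best- and worst-served groups differ by more than
    `max_gap` (a threshold chosen arbitrarily for illustration)."""
    scores = per_group_accuracy(y_true, y_pred, groups)
    return max(scores.values()) - min(scores.values()) > max_gap
```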

The ecological fallacy is widespread in health care modeling, where much of the data used to train algorithms for diagnosing and treating diseases has been shown to perpetuate inequalities. Recently, a team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI are sourced from New York, California, and Massachusetts.

When AI systems reach the deployment phase — i.e., where people start interacting with them — poor decisions in the earlier phases start to have an impact, typically unbeknownst to the affected people. For example, by not designing to compensate for activity biases, algorithmic models may be built on data only from the most active users. The NIST coauthors peg the problem on the fact that groups who invent the algorithms are unlikely to be aware — sometimes willfully — of all the potentially problematic ways they’ll be repurposed. Beyond this, there are individual differences in how people interpret AI models’ predictions, which could cause the “offloading” of decisions to coarse, imprecise automated tools.

This is particularly evident in the language domain, where model behavior can’t be reduced to universal standards because “desirable” behavior differs by application and social context. A study by researchers at the University of California, Berkeley, and the University of Washington illustrates the point, showing that language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure that the models work better for them, for instance, which could discourage minority speakers from engaging with the models to begin with.

Tackling bias in AI

What’s to be done about the pitfalls? The NIST coauthors recommend pinpointing biases early in the AI development process by maintaining diversity, including racial, gender, and age diversity, along the social lines where bias is a concern. While they acknowledge that identifying impacts may take time and require the involvement of end users, practitioners, subject matter experts, and professionals from law and the social sciences, the coauthors say that these stakeholders can bring experience to bear on the challenge of considering all possible outcomes.

The suggestions align with a paper published last June by a group of researchers at Microsoft, which advocated for a closer examination of the relationships between language, power, and prejudice in machine learning work. That paper concluded that the field generally lacks clear descriptions of bias and fails to explain how, why, and to whom that bias is harmful.

“Technology or datasets that seem non-problematic to one group may be deemed disastrous by others. The manner in which different user groups can game certain applications or tools may also not be so obvious to the teams charged with bringing an AI-based technology to market,” the NIST paper reads. “These kinds of impacts can sometimes be identified in early testing stages, but are usually very specific to the contextual end-use and will change over time.”

Beyond this, the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help root out biases. Requiring AI practitioners to defend their techniques, the coauthors posit, can incentivize new ways of thinking and help create change in approaches by organizations and industries.

Many organizations fall short of the mark. After a 2019 research paper demonstrated that commercially available facial analysis tools fail to work for women with dark skin, Amazon Web Services executives attempted to discredit study coauthors Joy Buolamwini and Deb Raji in multiple blog posts. More recently, Google fired leading AI researcher Timnit Gebru from her position on an AI ethics team in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices.

But others, particularly in academia, have taken preliminary steps. For instance, a new program at Stanford, the Ethics and Society Review (ESR), requires AI researchers to evaluate their proposals for any potential negative impact on society before they can be green-lighted for funding. Starting in 2020, Stanford ran the ESR across 41 proposals seeking Stanford HAI grant funding. The panel most commonly identified issues of harm to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in data. One research team that examined the use of ambient AI for in-home care of elderly adults wrote an ESR statement that considered privacy ethics in its research, outlining recommendations for face blurring, body masking, and other methods to ensure participants were protected.

Finally, at the deployment phase, the coauthors make the case that monitoring and auditing are key ways to manage bias risks. There’s a limit to what this can accomplish — for example, it’s not clear whether “detoxification” methods can thoroughly debias language models of a certain size. However, techniques like counterfactual fairness, which uses causal methods to produce “fair” algorithms, can perhaps begin to bridge gaps between lab and real-world environments.

Comments on NIST’s proposed approach can be submitted by August 5, 2021, by downloading and completing a template form and sending it to NIST’s dedicated email account. The coauthors say that they’ll use the responses to help shape the agenda of virtual events NIST will hold in coming months, a part of the agency’s broader effort to support the development of trustworthy and responsible AI.

“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear. We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause,” Schwartz said in a statement. “An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.”

Categories: AI

Facebook proposes NetHack as a grand challenge in AI research

Facebook today proposed NetHack as a grand challenge for AI research, for which the company is launching a competition at the NeurIPS 2021 AI conference in Sydney, Australia. It’s Facebook’s assertion that NetHack, an ’80s video game with simple visuals that’s considered among the hardest in the world, can enable data scientists to benchmark state-of-the-art AI methods in a complex environment without the need to run experiments on a powerful computer.

Games have served as benchmarks for AI for decades, but things really kicked into gear in 2013 — the year Google’s DeepMind demonstrated a system that could play Pong, Breakout, Space Invaders, Seaquest, Beamrider, Enduro, and Q*bert at superhuman levels. The advancements aren’t merely improving game design, according to experts like DeepMind cofounder Demis Hassabis. Rather, they’re informing the development of systems that might one day diagnose illnesses, predict complicated protein structures, and segment CT scans.

In particular, reinforcement learning — a type of AI that can learn strategies to orchestrate large systems like manufacturing plants, traffic control systems, financial portfolios, and robots — is transitioning from research labs to highly impactful, real-world applications. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars. And via Microsoft’s Bonsai, Siemens is employing reinforcement learning to calibrate its CNC machines.

“Recent advances in reinforcement learning have been fueled by simulation environments such as games like StarCraft II, Dota 2, or Minecraft. However, this progress came at substantial computational costs, often requiring running thousands of GPUs in parallel for a single experiment, while also falling short of leading to … methods that can be transferred to more real-world problems outside of these games,” Facebook AI researchers Edward Grefenstette, Tim Rocktäschel, and Eric Hambro wrote in a blog post. “We need environments that are complex, highlighting shortcomings of RL, while also allowing extremely fast simulation at low computation costs.”

NetHack

Facebook’s proposal follows the release of the company’s NetHack Learning Environment (NHLE), a research tool based on the original NetHack. (The NetHack Challenge is in turn based on the NHLE.) NetHack, which was first released in 1987, tasks players with descending more than 50 dungeon levels to retrieve a magical amulet, during which they must use wands, weapons, armor, potions, spellbooks, and other items and fight monsters. Levels in NetHack are procedurally generated and every game is different, which the Facebook researchers note tests the generalization limits of leading AI.

“Winning a game of NetHack requires long term planning in an incredibly unforgiving environment. Once a player’s character dies … the game starts from scratch in an entirely new dungeon,” Grefenstette, Rocktäschel, and Hambro continued. “Successfully completing the game as an expert player takes on average 25 to 50 times more steps than an average StarCraft II game, and players’ interactions with objects and the environment are extremely complex, so success often hinges on calling upon imagination to solve problems in creative or surprising ways as well as consulting external knowledge sources [such as] the official NetHack Guidebook, the NetHack Wiki, and online videos and forum discussions.”

Partial observation makes exploration in NetHack essential, and procedural generation and “permadeath” make the cost of failure significant. And AI can’t reset or interfere with the environment, making the methods that underpin systems like DeepMind’s AlphaStar for StarCraft II or Uber’s Go-Explore for Montezuma’s Revenge impossible.

“[The challenges in NetHack] range from randomized mazes to more structured challenges, like large rooms full of monsters and traps, towns and forts, and hazards such as kraken-infested waters,” Grefenstette, Rocktäschel, and Hambro said. “New ways of dealing with the ever changing observations in a stochastic and rich game world calls for the development of techniques that have a better chance of scaling to real-world settings with high degrees of variability.”

Lightweight

NetHack has another advantage: its lightweight architecture. A turn-based, ASCII-art world and a game engine written primarily in C capture its complexity. Importantly, NetHack forgoes all but the simplest physics and renders symbols instead of pixels, allowing AI to learn quickly without wasting computational resources on simulating dynamics or rendering observations.

Indeed, training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington’s Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

By contrast, a single high-end graphics card is sufficient to train AI-driven NetHack agents for hundreds of millions of steps a day using the TorchBeast framework, which supports further scaling by adding more graphics cards or machines. Agents can experience billions of steps in the environment in a reasonable time frame while still challenging the limits of what current techniques can achieve.

“[The NHLE] can train reinforcement learning agents … 15 times faster than even decade-old Atari benchmark[s]. Furthermore, NetHack can be used to test the limits of even more recent state-of-the-art deep reinforcement learning methods while running 50 to 100 times faster than challenges of comparable difficulty while providing a higher degree of complexity.”

Challenge

The NHLE consists of three components: a Python interface to NetHack using the popular OpenAI Gym API, a suite of benchmark tasks, and a baseline machine learning agent. To beat the NetHack Challenge, entrants must develop AI that can reliably either win at NetHack or achieve as high a score as possible. In doing so, the competition aims to yield a head-to-head comparison of different methods and new benchmarks for future research, while at the same time showcasing the suitability of the NHLE as a setting for research.
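
For a sense of what that Gym interface looks like in practice, here is a minimal sketch of running a random-action baseline in the NHLE. Treat the environment ID and the loop details as assumptions; the available IDs vary by release, and this is not the official challenge starter code.

```python
# Minimal random-agent loop against the NetHack Learning Environment's Gym
# interface. The environment ID is an assumption; check the NHLE docs for the
# IDs available in your installed release.
import gym
import nle  # noqa: F401  (importing registers the NetHack environments with Gym)

env = gym.make("NetHackScore-v0")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()          # random baseline policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
env.close()
```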

There won’t be restrictions on how the systems can be trained for the NetHack Challenge, Facebook says — participants are welcome to use techniques besides machine learning if they choose. Awards will be given for (1) the best overall AI system, (2) the best AI system not using a neural network, and (3) the best AI system from an academic or independent team.

Grefenstette, Rocktäschel, and Hambro say that achieving these objectives will lay the groundwork for follow-up competitions focused on specific aspects of AI. Moreover, the NetHack Challenge might help shed light on classes of training methods and modeling approaches capable of dealing with highly varied environments and a high cost of errors, like having to restart from scratch if a character is killed by a creature.

“Many real-world and industrial problems — navigation, for example — share these characteristics. Consequently, making progress in NetHack is making progress toward reinforcement learning in a wider range of applications,” Grefenstette, Rocktäschel, and Hambro said.

Facebook’s NeurIPS 2021 NetHack Challenge will be conducted in partnership with co-organizer AIcrowd, and it’ll run from early June through October. The winners will be announced at NeurIPS in December.

Categories: AI

EU proposes strict AI rules, with fines up to 6% for violations

(Reuters) — The European Commission on Wednesday announced tough draft rules on the use of artificial intelligence, including a ban on most surveillance, as part of an attempt to set global standards for a technology seen as crucial to future economic growth.

The rules, which envisage hefty fines for violations and set strict safeguards for high-risk applications, could help the EU take the lead in regulating AI, which critics say has harmful social effects and can be exploited by repressive governments.

The move comes as China moves ahead in the AI race, while the COVID-19 pandemic has underlined the importance of algorithms and internet-connected gadgets in daily life.

“On artificial intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” European tech chief Margrethe Vestager said in a statement.

The Commission said AI applications that allow governments to do social scoring or exploit children will be banned.

High-risk AI applications used in recruitment, critical infrastructure, credit scoring, migration, and law enforcement will be subject to strict safeguards.

Companies breaching the rules face fines of up to 6% of their global turnover or 30 million euros ($36 million), whichever is higher.
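
As an illustration of how that cap works (the turnover figures below are made up for the example), the penalty ceiling is simply the larger of the two amounts:

```python
# Illustrative arithmetic only: the draft caps fines at the higher of
# 6% of global turnover or 30 million euros.
def max_fine_eur(global_turnover_eur: float) -> float:
    return max(0.06 * global_turnover_eur, 30_000_000)


# A hypothetical company with 10 billion euros in annual turnover:
# 6% is 600 million euros, well above the 30 million euro floor.
print(max_fine_eur(10e9))   # 600000000.0
print(max_fine_eur(2e8))    # 30000000 -> the floor applies below 500M turnover
```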

European industrial chief Thierry Breton said the rules would help the 27-nation European Union reap the benefits of the technology across the board.

“This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security,” he said.

However, civil and digital rights activists want a blanket ban on biometric mass surveillance tools such as facial recognition systems, due to concerns about risks to privacy and fundamental rights and the possible abuse of AI by authoritarian regimes.

The Commission will have to thrash out the details with EU national governments and the European Parliament before the rules can come into force, in a process that can take more than a year.

($1 = 0.8333 euros)

Categories: AI

IBM proposes AI chip with benchmark-beating power efficiency

IBM claims to have developed one of the world’s first energy-efficient chips for AI inferencing and training built with 7-nanometer technology. In a paper presented at the 2021 International Solid-State Circuits Conference (ISSCC), held virtually in early February, a team of researchers at the company detailed a hardware accelerator that supports a range of model types while achieving “leading” power efficiency on all of them.

AI accelerators are a type of specialized hardware designed to speed up AI applications, particularly neural networks, deep learning, and machine learning. They’re multicore in design and focus on low-precision arithmetic or in-memory computing, both of which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains.

IBM says its four-core chip, which remains in the research stages, is optimized for low-precision workloads with a number of different AI and machine learning models. Low-precision techniques require less silicon area and power compared with their high-precision counterparts, enabling better cache usage and reducing memory bottlenecks. This often leads to a decrease in the time and energy cost of training AI models.
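
Hybrid FP8 training of the kind IBM describes isn’t exposed in mainstream frameworks, so as a rough analogue the sketch below uses PyTorch’s automatic mixed precision, where most compute runs in float16 while master weights stay in float32. The model, data, and hyperparameters are placeholders, not IBM’s setup.

```python
# Mixed-precision training sketch (an analogue of low-precision training, not
# IBM's hybrid FP8). Requires a CUDA GPU; model and data are placeholders.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()          # scales the loss to avoid fp16 gradient underflow
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 512, device="cuda")                 # fake batch
    y = torch.randint(0, 10, (64,), device="cuda")           # fake labels
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():           # run forward pass in low precision
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```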

[Image: Schematics of IBM’s proposed AI chip. Credit: IBM]

IBM’s AI accelerator chip is among the few to incorporate ultra-low precision “hybrid FP8” formats for training deep learning models in an extreme ultraviolet lithography-based package. It’s also one of the first to feature power management, with the ability to maximize performance by slowing down during computation phases with high power consumption. And it offers high sustained utilization that ostensibly translates to superior real application performance.

In experiments, IBM says its AI chip routinely achieved more than 80% utilization for training and more than 60% utilization for inference. Moreover, the chip’s performance and power efficiency exceeded that of other dedicated inference and training chips.

[Image: Benchmark results from IBM’s study. Credit: IBM]

IBM’s goal in the next 2-3 years is to apply the novel AI chip design commercially to a range of applications, including large-scale training in the cloud, privacy, security, and autonomous vehicles. “Our new AI core and chip can be used for many new cloud to edge applications across multiple industries,” IBM researchers Ankur Agrawal and Kailash Gopalakrishnan wrote in a blog post. “For instance, they can be used for cloud training of large-scale deep learning models in vision, speech and natural language processing using 8-bit formats (versus the 16- and 32-bit formats currently used in the industry). They can also be used for cloud inference applications, such as for speech to text AI services, text to speech AI services, natural language processing services, financial transaction fraud detection and broader deployment of AI models in financial services.”

Categories: Security

Sen. Ron Wyden proposes $500 million to fix US unemployment systems

Sen. Ron Wyden (D-OR) put out a new bill Wednesday that would overhaul the US’s crumbling unemployment benefits systems.

Over the last year, outdated state unemployment systems have kept jobless Americans from accessing the increased benefits approved by the federal government to curb the economic effects of the coronavirus pandemic. Wyden’s new $500 million plan would create standardized tools for states that choose to adopt them. The Department of Labor would be in charge of creating these tools with the help of outside tech experts.

“While enhanced jobless benefits have enabled millions and millions of families to pay the rent and buy groceries, many states have been unable to get benefits out the door in a timely manner,” Wyden said in a statement Wednesday. “I have heard story after story from Oregonians who have spent months trying to get their jobless benefits. That’s completely unacceptable when families are depending on these benefits to keep a roof over their heads.”

Wyden’s bill is co-sponsored by Sens. Sherrod Brown (D-OH), Mark Warner (D-VA), and Catherine Cortez Masto (D-NV).

State unemployment offices are in charge of maintaining their own benefits systems. Some states choose to build their own while others, like California, have contracts with outside vendors. Many of these unemployment systems were built decades ago in antiquated coding languages that aren’t frequently taught in school and are known mainly by programmers who are quickly aging out of the workforce.

Last summer, a Verge investigation found that at least 12 state systems, including those in Alaska, Colorado, Iowa, and Kansas, were partially coded in COBOL, a programming language dating to 1959 that very few programmers learn anymore. This makes these systems difficult to fix when they melt down under the surge of claims brought on by the pandemic. Only one COBOL programmer maintained Colorado’s system before the coronavirus outbreak, according to The Verge.

States like Colorado continue to face mounting problems with their unemployment systems as some unemployed Coloradans have gone weeks and months without payments. Earlier this week, protestors marched on Colorado’s Department of Labor due to the faulty system, according to The Denver Post. To fix its issues, Colorado launched a new system last month called “MyUI+,” which had a rough rollout after some people said they were unable to create accounts for the site.

Congress continues to negotiate its next economic stimulus package in response to the pandemic. Democrats are looking to approve this latest package by mid-March to ensure that heightened unemployment benefits continue through the summer.
