
These are the AI risks we should be focusing on

Since the dawn of the computer age, humans have viewed the approach of artificial intelligence (AI) with some degree of apprehension. Popular AI depictions often involve killer robots or all-knowing, all-seeing systems bent on destroying the human race. These sentiments have similarly pervaded the news media, which tends to greet breakthroughs in AI with more alarm or hype than measured analysis. In reality, the true concern should be whether these overly dramatized, dystopian visions pull our attention away from the more nuanced — yet equally dangerous — risks posed by the misuse of AI applications that are already available or being developed today.

AI permeates our everyday lives, influencing which media we consume, what we buy, where and how we work, and more. AI technologies are sure to continue disrupting our world, from automating routine office tasks to solving urgent challenges like climate change and hunger. But as incidents such as wrongful arrests in the U.S. and the mass surveillance of China’s Uighur population demonstrate, we are also already seeing some negative impacts stemming from AI. Focused on pushing the boundaries of what’s possible, companies, governments, AI practitioners, and data scientists sometimes fail to see how their breakthroughs could cause social problems until it’s too late.

Therefore, the time to be more intentional about how we use and develop AI is now. We need to integrate ethical and social impact considerations into the development process from the beginning, rather than grappling with these concerns after the fact. And most importantly, we need to recognize that even seemingly benign algorithms and models can be used in negative ways. We’re a long way from Terminator-like AI threats — and that day may never come — but there is work happening today that merits equally serious consideration.

How deepfakes can sow doubt and discord

Deepfakes are realistic-appearing artificial images, audio, and videos, typically created using machine learning methods. The technology to produce such “synthetic” media is advancing at breakneck speed, with sophisticated tools now freely and readily accessible, even to non-experts. Malicious actors already deploy such content to ruin reputations and commit fraud-based crimes, and it’s not difficult to imagine other injurious use cases.

Deepfakes create a twofold danger: that the fake content will fool viewers into believing fabricated statements or events are real, and that their rising prevalence will undermine the public’s confidence in trusted sources of information. And while detection tools exist today, deepfake creators have shown they can learn from these defenses and quickly adapt. There are no easy solutions in this high-stakes game of cat and mouse. Even unsophisticated fake content can cause substantial damage, given the psychological power of confirmation bias and social media’s ability to rapidly disseminate fraudulent information.

Deepfakes are just one example of AI technology that can have subtly insidious impacts on society. They showcase how important it is to think through potential consequences and harm-mitigation strategies from the outset of AI development.

Large language models as disinformation force multipliers

Large language models are another example of AI technology developed without ill intent that still merits careful consideration from a social impact perspective. These models learn to write humanlike text using deep learning techniques trained on patterns in datasets, often scraped from the internet. Leading AI research company OpenAI’s latest model, GPT-3, boasts 175 billion parameters, more than 100 times as many as its predecessor, GPT-2. That scale allows GPT-3 to generate almost any kind of text with minimal human input, including short stories, email replies, and technical documents. In fact, the statistical and probabilistic techniques that power these models are improving so quickly that many of their use cases remain unknown. For example, early users discovered only by accident that the model could also write code.
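
To make the “minimal human input” point concrete, here is a small sketch using the open source Hugging Face transformers library, with the publicly available GPT-2 standing in for GPT-3 (which is reachable only through OpenAI’s gated API). The prompt and sampling settings are purely illustrative:

```python
# Minimal sketch of prompt-based text generation. GPT-2 stands in for GPT-3,
# which is only accessible through OpenAI's restricted API; the prompt and
# generation settings here are illustrative, not prescriptive.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Dear customer, thank you for reaching out about your order."
result = generator(prompt, max_length=60, num_return_sequences=1,
                   do_sample=True, temperature=0.8)

print(result[0]["generated_text"])
```

A few words of prompt are enough to produce fluent continuations, which is precisely what makes these models useful for drafting email replies and, in the wrong hands, for mass-producing persuasive misinformation.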

However, the potential downsides are readily apparent. Like its predecessors, GPT-3 can produce sexist, racist, and discriminatory text because it learns from the internet content it was trained on. Furthermore, in a world where trolls already impact public opinion, large language models like GPT-3 could plague online conversations with divisive rhetoric and misinformation. Aware of the potential for misuse, OpenAI restricted access to GPT-3, first to select researchers and later as an exclusive license to Microsoft. But the genie is out of the bottle: Google unveiled a trillion-parameter model earlier this year, and OpenAI concedes that open source projects are on track to recreate GPT-3 soon. It appears our window to collectively address concerns around the design and use of this technology is quickly closing.

The path to ethical, socially beneficial AI

AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from facing the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without overly hampering innovation. After all, AI itself is neither inherently good nor bad. There are many real potential benefits that it can unlock for society — we just need to be thoughtful and responsible in how we develop and deploy it.

For example, we should strive for greater diversity within the data science and AI professions, including taking steps to consult with domain experts from relevant fields like social science and economics when developing certain technologies. The potential risks of AI extend beyond the purely technical; so too must the efforts to mitigate those risks. We must also collaborate to establish norms and shared practices around AI like GPT-3 and deepfake models, such as standardized impact assessments or external review periods. The industry can likewise ramp up efforts around countermeasures, such as the detection tools developed through Facebook’s Deepfake Detection Challenge or Microsoft’s Video Authenticator. Finally, it will be necessary to continually engage the general public through educational campaigns around AI so that people are aware of and can identify its misuses more easily. If as many people knew about GPT-3’s capabilities as know about The Terminator, we’d be better equipped to combat disinformation or other malicious use cases.

We have the opportunity now to set incentives, rules, and limits on who has access to these technologies, their development, and in which settings and circumstances they are deployed. We must use this power wisely — before it slips out of our hands.

Peter Wang is CEO and Co-founder of data science platform Anaconda. He’s also the creator of the PyData community and conferences and a member of the board at the Center for Humane Technology.



Gryps, an RPA platform focusing on construction, raises $1.5M

Gryps, a robotic process automation (RPA) startup focused on the construction industry, today announced it has raised $1.5 million. The company says it will use the funding to support product R&D and hire new employees, particularly engineers.

RPA — technology that automates monotonous, repetitive chores traditionally performed by human workers — is big business. Forrester estimates that RPA and other AI subfields created jobs for 40% of companies in 2019 and that a tenth of startups now employ more digital workers than human ones. According to a McKinsey survey, at least a third of activities could be automated in about 60% of occupations. And in its recent Trends in Workflow Automation report, Salesforce found that 95% of IT leaders are prioritizing workflow automation, with 70% seeing the equivalent of more than four hours of savings per employee each week.

Gryps, which was founded in 2020, connects to multiple email, project management, database, and other systems to automatically scrape and organize documents during the construction process. The platform ingests things like manuals, warranty certificates, contracts, change orders, invoices, lien waivers, and other close-out documents from different sources and then applies machine learning to categorize the files and label them correctly so that they can be shared with various stakeholders.
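
Gryps has not published the details of its pipeline, but the categorization step described above is, at its core, a supervised text classification problem. A minimal, hypothetical sketch with scikit-learn, using made-up labels and document snippets, might look like this:

```python
# Hypothetical sketch of close-out document categorization. The labels,
# sample text, and model choice are illustrative; Gryps's actual pipeline
# has not been published.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled examples of text extracted from documents (OCR happens upstream).
train_texts = [
    "This warranty certificate covers the HVAC units installed on floors 1-3.",
    "Change order #42: add two fire dampers to the east wing mechanical room.",
    "Operation and maintenance manual for the chilled water pumps.",
    "Invoice for electrical rough-in work completed in June.",
]
train_labels = ["warranty", "change_order", "manual", "invoice"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_texts, train_labels)

print(classifier.predict(["Certificate of warranty for the roofing membrane, 20-year term."]))
```

A production system would train on far more documents and route each predicted category to the right stakeholder, but the underlying idea of mapping document text to close-out categories is the same.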

“We met in 2013 working on the same team at a construction management firm in New York City,” Gryps founders Dareen Salama and Amir Tasbihi told VentureBeat. “One thing was always top of mind: The industry is ready for and needs AI that helps process the volume of data generated, and it must be easy to adopt for users. We started Gryps to provide an amazing product with the best user experience and to create an environment where young professionals in construction can grow and find a fulfilling career.”

Gryps employs APIs and digital robots to process the documents it collects. Leveraging a combination of natural language processing, machine learning, computer vision, and document understanding, the platform canvasses, ingests, and transforms construction project data.

“We use cutting-edge transfer learning techniques to transfer AI knowledge from best-in-class models to boost our accuracy. We also use layout-based, Transformer-based deep learning models to extract information from documents,” Salama and Tasbihi explained. “Our RPA agents are rule-based software robots performing actions to get our information ingestion jobs done more accurately and faster than people can … For example, [we apply] computer vision and machine learning to extract and analyze trends such as companies, products used, services rendered, and costs involved from documents.”
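
The founders’ models are proprietary, but the transfer learning idea they describe (reusing a model pretrained on general text to pull entities such as company names out of construction documents) can be illustrated with an off-the-shelf named entity recognition model. The model choice and sample sentence below are assumptions made for illustration only:

```python
# Illustrative sketch of entity extraction via transfer learning: a publicly
# available pretrained NER model pulls organization names out of document
# text. This is not Gryps's model; the model ID and sentence are assumptions.
from transformers import pipeline

extractor = pipeline("ner", model="dslim/bert-base-NER",
                     aggregation_strategy="simple")

text = ("Change order issued to Acme Mechanical Corp for additional ductwork "
        "at the Javits Center, approved by the facilities team.")

for entity in extractor(text):
    print(entity["entity_group"], entity["word"], f"{entity['score']:.2f}")
```

A layout-aware model of the kind the founders mention would additionally consume the position of each word on the page, which matters for invoices, forms, and tables where meaning depends on where a value sits.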

Gryps has a number of competitors in a global intelligent process automation sector that’s estimated to be worth $15.8 billion by 2025, according to KBV Research. Automation Anywhere most recently secured a $290 million investment from SoftBank at a $6.8 billion valuation. Within a span of months, Blue Prism raised over $120 million, Kryon $40 million, and FortressIQ $30 million. Tech giants have also made forays into the field, including Microsoft, which acquired RPA startup Softomotive, and IBM, which purchased WDG Automation.

But Gryps, which has three paying customers and several more in pilots, asserts that specializing in construction gives it a leg up over rivals focused on the broader market. Case in point: one of the company’s first clients was the Javits Convention Center in New York. Gryps claims its software automatically ingested over 20,000 documents and 100,000 data points, collated them, and handed them over to the Javits team, with estimates putting the savings at hundreds of hours of staff time.

Salama and Tasbihi say the pandemic made the need for faster digitization apparent, with construction teams wanting quicker access to information whether remote, in the office, or on-site. “With teams working from home, they needed more robust tools than what exists today to access project data such as contracts, financials, and other documentation as fast as possible,” they continued. “Project managers are typically extremely busy responding to project needs, which typically slows down technology adoption because they have no time to spend on digitizing their processes. The pandemic paused a lot of projects and provided executives and teams time to rethink their policies, procedures, and ways of doing business. Now these projects are coming back and the teams are determined to increase efficiency, and integrating new technologies is key to that goal.”

LDV Capital led Gryps’ seed round with participation from Pear VC and Harvard Business School Graduate Syndicate. A group of angel investors also contributed.
