
EU AI Act’s possible open-source regulation sparks Twitter debate

Alex Engler, research fellow at the Brookings Institution, never expected his recent article, “The EU’s attempt to regulate open source AI is counterproductive,” to stir up a Twitter debate. 

According to Engler, as the European Union continues to debate its development of the Artificial Intelligence Act (AI Act), one step it has considered would be to regulate open-source general-purpose AI (GPAI). The EU AI Act defines GPAI as “AI systems that have a wide range of possible uses, both intended and unintended by the developers … these systems are sometimes referred to as ‘foundation models’ and are characterized by their widespread use as pre-trained models for other, more specialized AI systems.”  

In Engler’s piece, he said that while the proposal is meant to enable safer use of these artificial intelligence tools, it “would create legal liability for open-source GPAI models, undermining their development.” The result, he maintained, would “further concentrate power over the future of AI in large technology companies” and prevent critical research. 

“It’s an interesting issue that I did not expect to get any attention,” he told VentureBeat. 

“I’m always surprised to get press calls, frankly.” 

But after Emily Bender, professor of linguistics at the University of Washington and a regular critic of how AI is covered on social media and in mainstream media, wrote a thread about a piece that quoted from Engler’s article, a spirited Twitter back-and-forth began. 

EU AI Act’s open-source discussion in the hot seat

“I have not studied the AI Act, and I am not a lawyer, so I can’t really comment on whether it will work well as regulation,” Bender tweeted, pointing out later in her thread: “How do people get away with pretending, in 2022, that regulation isn’t needed to direct the innovation away from exploitative, harmful, unsustainable, etc practices?” 

Engler responded to Bender’s thread with his own. Broadly, he said, “I am in favor of AI regulation … still I don’t think regulating models at the point of open-sourcing helps at all. Instead, what is better and what the original EU Commission proposal did, is to regulate whenever a model is used for something dangerous or harmful, regardless of whether it is open-source.” 

He also maintained that he does not want to exempt open-source models from the EU AI Act, but wants to exempt the act of open sourcing AI. If it is harder to release open-source AI models, he argued, these same requirements will not prevent commercialization of these models behind APIs. “We end up with more OpenAIs, and fewer OS alternatives — not my favorite outcome,” he tweeted. 

Bender responded to Engler’s thread by emphasizing that if part of the purpose of the regulation is to require documentation, “the only people in a position to actually thoroughly document training data are those who collect it.” 

Perhaps this could be handled by disallowing any commercial products based on under-documented models, leaving the liability with the corporate interests doing the commercializing, she wrote, but added, “What about when HF [Hugging Face] or similar hosts GPT-4chan or Stable Diffusion and private individuals download copies & then maliciously use them to flood various online spaces with toxic content?” 

Obviously, she continued, the “Googles and Metas of the world should also be subject to strict regulation around the ways in which data can be amassed and deployed. But I think there’s enough danger in creating collections of data/models trained on those that OSS devs shouldn’t have free rein.” 

Engler, who studies the implications of AI and emerging data technologies on society, admitted to VentureBeat that “this issue is pretty complicated, even for people who broadly share fairly similar perspectives.” He and Bender, he said, “share a concern about where regulatory responsibility and commercialization should fall … it’s interesting that people with relatively similar perspectives land in a somewhat different place.”

The impact of open-source AI regulation

Engler made several points to VentureBeat about his views on the EU regulating open-source AI. First of all, he said the limited scope is a practical concern. “The EU passing requirements doesn’t affect the rest of the world, so you can still release this somewhere else and the EU requirements will have a very minimal impact,” he said. 

In addition, “the idea that a well-built, well-trained model that meets these regulatory requirements somehow wouldn’t be applicable for harmful uses just isn’t true,” he said. “I think we haven’t clearly shown that regulatory requirements and making good models will necessarily make them safe in malicious hands,” he added, pointing out that there is a lot of other software that people use for malicious purposes that would be hard to start regulating. 

“Even the software that automates how you interact with a browser has the same problem,” he said. “So if I’m trying to make lots of fake accounts to spam social media, the software that lets me do that has been public for 20 years, at least. So [the open-source issue] is a bit of a departure.” 

Finally, he said, the vast majority of open-source software is created without a goal of selling the software. “So you’re taking an already uphill battle, which is that they’re trying to build these large, expensive models that can even get close to competing with the big firms and you’re adding a legal and regulatory barrier as well,” he said. 

What the EU AI Act will and won’t do

Engler emphasized that the EU AI Act won’t be a cure-all for AI ills. What the EU AI Act will broadly help with, he said, is “preventing kind of fly-by-night AI applications for things it can’t really do or is doing very poorly.” 

In addition, Engler thinks the EU is doing a fairly good job attempting to “meaningfully solve a pretty hard problem about the proliferation of AI into dangerous and risky areas,” adding that he wishes the U.S. would take a more proactive regulatory role in the space (though he credits the Equal Employment Opportunity Commission’s work on bias and AI hiring systems). 

What the EU AI Act will not really address is the creation and public availability of models that people simply use nefariously. 

“I think that is a different question that the EU AI Act doesn’t really address,” he said. “I’m not sure we’ve seen anything that prevents them from being out there, in a way that is actually going to work,” he added, saying the open-source discussion feels a bit “tacked on.”

“If there was a part of the EU AI Act that said, hey, the proliferation of these large models is dangerous and we want to slow them down, that would be one thing, but it doesn’t say that,” he said. 

Debate will surely continue

Clearly, the Twitter debate around the EU AI Act and other AI regulations will continue, as stakeholders from across the AI research and industry spectrum weigh in on dozens of recommendations on a comprehensive AI regulatory framework that could potentially be a model for a global standard. 

And the debate continues offline, as well: Engler said that one of the European Parliamentary committees, advised by digital policy advisor Kai Zenner, plans to introduce a change to the EU AI Act that would take up the issue surrounding open-source AI. 



Why AI regulation will resemble privacy regulation

You are a walking data repository. While outside your residence or vehicle, walking down a street, shopping in a store, or visiting any type of public event or meeting — you potentially lose your personal privacy and cross the boundary from being a private individual to a virtual public figure. You can be filmed or photographed, your image can be transported to a storage silo anywhere in the world, your voice can be recorded, and your time in public view can be noted. This is the world in which we live in 2022. 

When you go online to make a purchase, a whole new door opens through which others can reach your personally identifiable information (PII). You invariably will be voluntarily offering strangers your name, address, phone number, email address and possibly more extensive information about yourself. Ostensibly, this data remains private between you and the vendor. “Ostensibly” is the key word here, however; one never really knows how much of your PII stays legitimately private.

Everything cited above can become data and go on your record somewhere in the world, whether you like it or not. Over-the-top severe assessment? Possibly, but it’s up to you to know this and act accordingly. 

What information qualifies as personally identifiable information?

According to the U.S. Department of Labor (DoL), companies may maintain PII on their employees, customers, clients, students, patients, or other individuals, depending on the industry. PII is defined as information that directly identifies an individual (e.g., name, address, social security number or other identifying number or code, telephone number, email address, etc.). It can also mean information by which an agency intends to identify specific individuals in conjunction with other data elements, such as a combination of gender, race, birthdate, geographic indicator and other descriptors.
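
To illustrate why such combinations matter, here is a small, hypothetical Python sketch: none of the fields below names anyone on its own, yet filtering on just three of them can isolate a single record. The data and field names are invented for the example.

```python
# Hypothetical records: each field looks harmless on its own,
# but a combination of quasi-identifiers can single out one person.
records = [
    {"gender": "F", "birth_year": 1987, "zip3": "021"},
    {"gender": "F", "birth_year": 1987, "zip3": "606"},
    {"gender": "M", "birth_year": 1987, "zip3": "021"},
]

def matching(records, **attrs):
    """Return the records that match every given attribute."""
    return [r for r in records if all(r.get(k) == v for k, v in attrs.items())]

# Three attributes are enough to narrow this toy dataset to one individual.
print(len(matching(records, gender="F", birth_year=1987, zip3="021")))  # 1
```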

Whether you want this PII to be in the hands (or databases) of numerous outsiders is largely, but not totally, your own decision. The DoL says specifically: “It is the responsibility of the individual user to protect data to which they have access.”

People have long been uncomfortable with the way companies can track their movements online, often gathering credit card numbers, addresses and other critical information. They found it creepy to be followed around the web by ads that had clearly been triggered by their online searches, which led them to worry constantly about identity theft and fraud. This is a direct result of putting PII in the hands of companies who want to profit from your movements on the web.

Those concerns have led to the passage of regulations in the United States and Europe guaranteeing internet users some level of control over their personal data and images — most importantly, the European Union’s 2018 General Data Protection Regulation (GDPR). Of course, those measures didn’t end the debate around companies’ use of personal data; they are merely a starting point for deeper and more specific laws.

The California Consumer Privacy Act is a prime example, a data privacy law (enacted in 2020) that provides privacy rights to California residents, giving them options as to how their PII can be used. There’s also California’s Automated Decisions Systems Accountability Act (still in the legislative process), which aims to end algorithmic bias against groups protected by federal and state anti-discrimination laws.

Privacy, AI regulations moving in parallel fashion

Data privacy laws and regulation of data gathered for the use of artificial intelligence are progressing in parallel paths through government agencies because they are so intertwined.

Anytime a human is involved in an analytics project, bias can be introduced. In fact, AI systems that produce biased results have been making headlines. One highly publicized example is Apple’s credit card algorithm, which has been accused of discriminating against women and prompted an investigation by New York’s Department of Financial Services. Another is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm used in U.S. court systems to predict the likelihood that a defendant would become a repeat offender. This one in particular has been wrong numerous times.

As a result of all this PII collection, the rapid rise of the use of analytics and machine learning in online applications, and the constant threat of bias in AI algorithms, law enforcement agencies are chasing down an increasing number of complaints from citizens regarding online fraud.

Governments too are trying to get their arms around appropriate legislation in statewide efforts to curb this criminal activity.

The state of AI regulations

Are there regulations for artificial intelligence? Not yet, but they are coming. States can move quicker on this than the federal government, which is not a surprise. For two years, the California legislature has been debating and modifying the Automated Decision Systems Accountability Act, which stipulates that state agencies use an acquisition method that minimizes the risk of adverse and discriminatory impacts resulting from the design and application of automated decision systems. There’s a possibility it will become law later this year or early next year.

These are just the first wave of a phalanx of new laws and regulations that will be impacting online companies and their customers during the next several years. There’s plenty of evidence that tighter regulations are needed to contain deep-pocket companies such as Google and Amazon, which are becoming virtual monopolies due to the continued use of their users’ PII.

There’s no question that the ocean of PII is the fuel that analytics uses to produce data that can lead to business value. Analytics is the basis for artificial intelligence that can suggest a strategy correction for a business, warn of an impending problem in the supply chain, or make a prediction about where any market is headed over months or years. This is all bottom line-important to an enterprise and its investors, not to mention all the employees, partners, contractors, and customers that rely on the business itself.

Bobby Napiltonia is the president of Okera.



AI regulation: A state-by-state roundup of AI bills

Wondering where AI regulation stands in your state? Today, the Electronic Privacy Information Center (EPIC) released The State of State AI Policy, a roundup of AI-related bills at the state and local level that were passed, introduced or failed in the 2021-2022 legislative session (EPIC gave VentureBeat permission to reprint the full roundup below).

Within the past year, according to the document (which was compiled by summer clerk Caroline Kraczon), states and localities have passed or introduced bills “regulating artificial intelligence or establishing commissions or task forces to seek transparency about the use of AI in their state or locality.” 

For example, Alabama, Colorado, Illinois and Vermont have passed bills creating a commission, task force or oversight position to evaluate the use of AI in their states and make recommendations regarding its use. Alabama, Colorado, Illinois and Mississippi have passed bills that limit the use of AI in their states. And Baltimore and New York City have passed local bills that would prohibit the use of algorithmic decision-making in a discriminatory manner. 

Ben Winters, EPIC’s counsel and leader of EPIC’s AI and Human Rights Project, said the information was something he had wanted to get in one single document for a long time. 

“State policy in general is really hard to follow, so the idea was to get a sort of zoomed-out picture of what has been introduced and what has passed, so that at the next session everyone is prepared to move the good bills along,” he said. 

Fragmented state and local AI legislation

The list of varied laws makes clear the fragmentation of legislation around the use of AI in the US – as opposed to the broad mandate of a proposed regulatory framework around the use of AI in the European Union. 

But Winters said that while state laws can be confusing or frustrating (for example, if vendors have to deal with different state laws regulating AI in government contracting), the advantage is that comprehensive bills tend to get watered down. 

“Also, when bills are passed affecting businesses in huge states such as, say, California, it basically creates a standard,” he said. “We’d love to see strong legislation passed nationwide, but from my perspective right now state-level policy could yield stronger outcomes.” 

Enterprise businesses, he added, should be aware of proposed AI regulation and that there is a rising standard overall around the transparency and explainability of AI. “To get ahead of that rising tide, I think they should try to take it upon themselves to do more testing, more documentation and do this in a way that’s understandable to consumers,” he said. “That’s what more and more places are going to require.” 

AI regulation focused on specific issues

Some limited issues, he pointed out, will see more accelerated growth in legislative focus, including facial recognition and the use of AI in hiring. “Those are sort of the ‘buckle your seat belt’ issues,” he said. 

Other issues will see a slow growth in the number of proposed bills, although Winters said that there is a lot of action around state procurement and automated decision making systems. “The Vermont bill passed last year and both California and Washington state were really close,” he said. “So I think there’s going to be more of those next year in the next legislative session.” 

In addition, he said, there might be some movement on specific bills codifying concepts around AI discrimination.  For example, Washington DC’s Stop Discrimination by Algorithms Act of 2021 “would prohibit the use of algorithmic decision-making in a discriminatory manner and would require notices to individuals whose personal information is used in certain algorithms to determine employment, housing, healthcare and financial lending.”  

“The DC bill hasn’t passed yet, but there’s a lot of interest,” he explained, adding that similar ideas are in the pending American Data Privacy and Protection Act. “I don’t think there will be a federal law or any huge avalanche of generally-applicable AI laws in commerce in the next little bit, but current state bills have passed that have requirements around certain discrimination aspects and opting out – that’s going to require more transparency.” 

State 

Passed 

  • Alabama Act No. 2021-344 – To establish the Alabama Council on Advanced Technology and Artificial Intelligence to review and advise the Governor, the Legislature, and other interested parties on the use and development of advanced technology and artificial intelligence in this state. 
    • Established the Alabama Council on Advanced Technology and Artificial Intelligence, specified the makeup of the council, and set qualification requirements for council members. 
    • Introduced: 2/2/21; Passed 4/27/21 
  • Alabama Act No. 2022-420 – Artificial intelligence, limit the use of facial recognition, to ensure artificial intelligence is not the only basis for arrest 
    • Prohibits state and local law enforcement agencies (LEAs) from using facial recognition technology (FRT) match results to establish probable cause in a criminal investigation or to make an arrest.  
    • When LEAs seek to establish probable cause, the bill only permits LEAs to use FRT match results in conjunction with other lawfully obtained information and evidence.  
    • Introduced: 1/11/22; Passed: 4/5/22; Signed: 4/6/22 
  • Colorado SB 22-113 – Concerning the use of personal identifying data, and, in connection therewith, creating a task force for the consideration of facial recognition services, restricting the use of facial recognition services by state and local government agencies, temporarily prohibiting public schools from executing new contracts for facial recognition services, and making an appropriation. 
    • Created a task force to study issues relating to the use of artificial intelligence in Colorado.  
    • Requires state and local agencies that use or intend to use a facial recognition service (FRS) to file a notice of intent and produce an accountability report. Agencies using FRS will be required to subject decisions that produce legal effects to meaningful human review. Agencies using FRS must conduct periodic training of individuals who operate the FRS. Agencies must maintain records sufficient to facilitate public reporting and auditing of compliance with FRS policies.  
    • Restricted LEA’s use of FRS. It prohibits LEAs from using FRS to conduct ongoing surveillance, real-time identification, or persistent tracking unless the LEA obtains a warrant, and LEAs may not apply FRS to an individual based on protected characteristics.  
    • Agencies must disclose their use of an FRS on a criminal defendant to that defendant in a timely manner prior to trial.  
    • Prohibits the use of facial recognition services by any public school, charter school, or institute charter school. 
    • Introduced: 2/3/22; Signed: 6/8/22; Effective: 8/10/22. 
  • Illinois Public Act 102-0047 – Artificial Intelligence Video Interview Act 
    • Requires employers who solely rely on AI analysis of video interviews to determine whether an applicant will be selected for an in-person interview to collect and report demographic data about the race and ethnicity of applicants who are not selected for in-person interviews and of those who are hired. Employers must report this data to the Department of Commerce and Economic Opportunity. 
    • Introduced: 1/14/21; Passed: 5/25/21; Approved: 7/9/21; Effective: 1/1/22. 
  • Illinois Public Act 102-0407 – Illinois Future of Work Act 
    • Creates the Illinois Future of Work Task Force to identify and assess the new and emerging technologies, including artificial intelligence, that impact employment, wages, and skill requirements. The bill describes the task force’s responsibilities and specifies the makeup of the task force. 
    • Introduced: 2/8/21; Passed: 5/31/21; Signed: 8/19/21. 
  • Mississippi HB 633 – Mississippi Computer Science and Cyber Education Equality Act. 
    • The State Department of Education is directed to implement a K-12 computer science curriculum including instruction in artificial intelligence and machine learning. 
    • Introduced: 1/18/22; Passed: 3/17/21; Approved by the Governor: 3/24/21; Effective 7/1/21. 
  • Vermont H 410 – An act relating to the use and oversight of artificial intelligence in State government 
    • Creates the Division of Artificial Intelligence within the Agency of Digital Services to review all aspects of artificial intelligence developed, employed, or procured by State government.  
    • Creates the position of the Director of Artificial Intelligence to administer the Division and the Artificial Intelligence Advisory Council to provide advice and counsel to the Director. Requires the Division of Artificial Intelligence to, among other things, propose a State code of ethics on the use of artificial intelligence in State government and make recommendations to the General Assembly on policies, laws, and regulations of artificial intelligence in State government. The Division is also responsible for making various annual recommendations and reporting requirements to the General Assembly on the use of artificial intelligence in State government.  
    • Requires the Agency of Digital Services to conduct an inventory of all the automated decision systems developed, employed, or procured by State government. 
    • Introduced: 3/9/21; Passed: 5/9/22; Approved by the Governor: 5/24/22. 

Pending 

  • California SB 1216 – An act to add and repeal Section 11547.5 of the Government Code, relating to technology. 
    • Amends an existing CA law that prohibits businesses from making false and misleading advertising claims.  
    • Would establish a Deepfake Working Group to evaluate how deepfakes pose a risk to businesses and residents of CA. It defines deepfakes as “audio or visual content that has been generated or manipulated by artificial intelligence which would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do without their consent.”  
    • The working group would be directed to develop mechanisms to reduce and identify deepfakes and report on current uses and risks of deepfakes. 
    • Introduced: 2/17/22; Passed CA Senate: 5/25/22. It is currently in the House, and it was re-referred to committee on 6/29/22. 
  • California AB 13 – An act to add Chapter 3.3 (commencing with Section 12114) to Part 2 of Division 2 of, and to add and repeal Section 12115.4 of, the Public Contract Code, relating to automatic decision systems. 
    • Enacts the Automated Decision Systems Accountability Act and states the intent of the Legislature that state agencies use an acquisition method that minimizes the risk of adverse and discriminatory impacts resulting from the design and application of automated decision systems (ADS).  
    • Requires the Department of Technology to conduct an inventory of all high-risk ADS that have been proposed for, or are being used, developed, or procured by state agencies, and to submit a report to the Legislature.  
    • Requires state agencies that are seeking to award contracts for goods or services that include the use of ADS to encourage the contractors to include ADS impact assessment reports in their bids. 
    • Introduced: 12/7/20, Passed the CA Assembly: 6/1/21; Placed on suspense file in CA Senate: 8/16/21. 
  • California AB 1826 – An act to add Chapter 5.9 (commencing with 11549.75) to Part 1 of Division 3 of Title 2 of the Government Code, relating to technology. 
    • Establishes the Department of Technology within the Government Operations Agency. The department would be required to establish a process to conduct research projects related to technology, and online platforms would be required to share certain data with researchers conducting the projects. Among the data that platforms would be required to share is a semiannual report including a summary of data-driven models, including those based on machine learning or other artificial intelligence techniques, that the platforms use to predict user behavior or engagement, and statistics regarding the content the platforms have removed using artificial intelligence review processes. 
    • Introduced: 2/18/22. 
  • Georgia HB 1651 – Transparency and Fairness in Automated Decision-Making Commission 
    • Creates the Transparency and Fairness in Automated Decision-Making Commission, which would review and publicly report on the state’s use of artificial intelligence and other automated decision systems and develop recommendations for the use of these systems by state agencies. It specifies the makeup of the commission, sets qualification requirements, describes how the commission should operate, and requires the commission to report its findings to the legislature and the public. 
    • Introduced: 4/4/22. 
  • Hawaii HB 454 – Cybersecurity and artificial intelligence business investment tax credit. 
    • Establishes an income tax credit for investment in qualified businesses that develop cybersecurity and artificial intelligence. 
    • Introduced: 1/25/21; Carried over to the 2022 regular session on 12/10/21. 
  • Maryland HB 1359 – Technology and Science Advisory Commission 
    • Establishes the Technology and Science Advisory Commission to advise state agencies on technology and science developments, make recommendations on the use of developing technologies, review and make recommendations on algorithmic decision systems employed by state agencies, and create a framework to address the ethics of emerging technologies to avoid systemic harm and bias.  
    • Introduced: 2/11/22. 
  • Massachusetts S 2688 – An Act establishing a commission on automated decision-making by government in the commonwealth 
    • Establishes a commission to study and make recommendations related to the Massachusetts’ use of automated decision systems that may affect human welfare and legal rights and privileges.  
    • Describes the specific responsibilities of the commission, composition of the commission, and reporting requirements. 
    • Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity on 2/14/22; Referred to the committee on Senate Ways and Means on 7/11/22. 
  • Massachusetts H 4512 – An Act establishing a commission on automated decision-making by government in the commonwealth 
    • Same bill text as MA S2688. 
    • Reported from the Committee on Advanced Information Technology, the Internet and Cybersecurity on 3/3/22; Referred to the Committee on House Ways and Means on 4/14/22. 
  • New Jersey A195 – Requires Commissioner of Labor and Workforce Development to conduct a study and issue report on impact of artificial intelligence on growth of State’s economy 
    • Requires the Commissioner of Labor and Workforce Development to study the impact of AI-powered technology on the growth of the state’s economy and prepare a report describing their findings. 
    • Introduced: 1/11/21. 
  • New York AB 2414 – Establishes the Commission on the Future of Work 
    • Establishes the Commission on the Future of Work within the Department of Labor to research and understand the impact of technology on workers, employers, and the economy of the state. Requires the Commission to submit a report along with any recommendations for legislative action to the governor and the legislature. 
    • Introduced: 1/19/21; Reintroduced: 1/5/22. 
  • Rhode Island H 7223 – Commission to Monitor the Use of Artificial Intelligence in State Government  
    • Establishes a commission to monitor the use of AI in state government and to make recommendations related to the state’s use of AI systems that could affect human welfare, including legal rights and privileges. Specifies the composition of the commission and sets reporting requirements. 
    • Introduced: 1/26/22. 
  • Washington SB 5116 – Establishing guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability. 
    • Directs the Washington state chief information officer to adopt rules related to the development, procurement, and use of AI systems by public agencies. The officer is required to consult with communities whose rights are disproportionately impacted by automated decision systems. 
    • Introduced: 1/8/21; Reintroduced: 1/10/22. 

Failed 

  • Maryland HB 1323 – Algorithmic Decision Systems – Procurement and Discriminatory Acts 
    • Would require state units purchasing products that contain algorithmic decision systems to only purchase products or services that adhere to responsible artificial intelligence standards. These standards include the avoidance of physical and mental harm, the unjustified deletion or disclosure of information, unwarranted damage to property, reputation, or environment, a commitment to transparency, giving primacy to fairness, and conducting a comprehensive evaluation of risks of the system. 
    • Introduced: 2/8/21. 
  • Michigan HB 4439 – Michigan employment security act 
    • Directs an independent computer expert to audit the algorithm used by the unemployment agency computer system to evaluate claims for unemployment benefits and prepare a report regarding the system, the number of claims and denials, and an analysis of the fairness of the algorithm. 
    • Introduced: 3/4/21. 
  • Missouri HB 1254 – Establishes the Missouri Technology Task Force 
    • Would establish a task force to evaluate Missouri’s technology platforms, the use of cloud computing and artificial intelligence in the state, state certification programs and workforce developments, and state technology initiatives. The task force was also directed to make recommendations regarding the use of technology and artificial intelligence to improve state record management.  
    • Set reporting requirements and specified the makeup of the task force. 
    • Introduced: 2/23/21. 
  • Nevada SB 110 – Revises provisions relating to businesses engaged in the development of emerging technologies. 
    • Would create the Emerging Technologies Task Force to administer and coordinate programs, provide information to the public, and assist small businesses and government entities in preparing for and responding to emerging technological developments, including AI. 
    • Introduced: 2/10/21. 

Local 

Passed 

  • Baltimore, Maryland Act 21-038 – Surveillance Technology in Baltimore 
    • Prohibits Baltimore city government from obtaining or contracting with another entity that provides certain face surveillance technology, prohibits any person in Baltimore City from obtaining or using face surveillance technology, and requires the Director of Baltimore City Information and Technology to submit an annual report to the Mayor and City Council regarding the use of surveillance by the Mayor and City Council. 
    • Introduced: 1/11/21; Approved: 6/14/21; Signed: 8/16/21. 
  • Bellingham, Washington Ballot Initiative #2 – Ban on Advanced Policing Technologies
    • Prohibits government use of facial recognition and predictive policing technologies. Bellingham residents voted to prohibit the city from acquiring or using facial recognition technology or contracting with a third party to use facial recognition technology on the city’s behalf. The measure also restricts use of illegally obtained data in policing or trials.
    • Passed 11/10/21
  • New York, New York Int. 1894-2020 – A Local Law to amend the administrative code of the city of New York, in relation to automated employment decision tools 
    • Requires employers to conduct bias audits on automated decision tools before using them and to notify candidates and employees about the use of the tools in assessments or evaluations for hire or promotion. 
    • Introduced: 2/27/20; Passed: 11/10/21. 

Pending 

  • Washington, DC B24-0558 – Stop Discrimination by Algorithms Act of 2021 
    • Would prohibit the use of algorithmic decision-making in a discriminatory manner and would require notices to individuals whose personal information is used in certain algorithms to determine employment, housing, healthcare and financial lending. 
    • Introduced: 12/9/21. 



DataRobot exec talks ‘humble’ AI, regulation

Organizations of all sizes have accelerated the rate at which they employ AI models to advance digital business transformation initiatives. But in the absence of any clear-cut regulations, many of these organizations don’t know with any certainty whether those AI models will one day run afoul of new AI regulations.

Ted Kwartler, vice president of Trusted AI at DataRobot, talked with VentureBeat about why it’s critical for AI models to make predictions “humbly” to make sure they don’t drift or one day run afoul of government regulations.

This interview has been edited for brevity and clarity.

VentureBeat: Why do we need AI to be humble?

Ted Kwartler: An algorithm needs to demonstrate humility when it’s making a prediction. If I’m classifying an ad banner at 50% probability or 99% probability, that’s kind of that middle range. You have one single cutoff threshold above this line and you have one outcome. Below this line, you have another outcome. In reality, we’re saying there’s a space in between where you can apply some caveats, so a human has to go review it. We call that humble AI in the sense that the algorithm is demonstrating humility when it’s making that prediction.
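
To make the idea concrete, here is a minimal, hypothetical sketch of the three-way decision Kwartler describes; the threshold values and function names are illustrative assumptions, not DataRobot’s implementation.

```python
def humble_decision(probability, lower=0.40, upper=0.90):
    """Route a model prediction based on how confident it is.

    Instead of a single cutoff, predictions whose probability falls
    inside an "uncertainty band" are sent to a human reviewer.
    The band edges (0.40 / 0.90) are illustrative only.
    """
    if probability >= upper:
        return "accept"        # model is confident: act on the prediction
    if probability <= lower:
        return "reject"        # model is confident the outcome is negative
    return "human_review"      # middle range: defer to a person


# Example: three ad-banner classifications at different confidence levels
for p in (0.99, 0.55, 0.10):
    print(p, "->", humble_decision(p))
```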

VentureBeat: Do you think organizations appreciate the need for humble AI?

Kwartler: I think organizations are waking up. They’re becoming much more sophisticated in their forethought around brand and reputational risk. These tools have an ability to amplify. The team that I help lead is really focused on what we call applied AI ethics, where we help educate our clients to this kind of phenomenon of thinking about the impacts; not just the math of it. Senior leaders maybe don’t understand the math. Maybe they don’t understand the implementation. But they definitely understand the implications at the strategic level, so I do think it’s an emerging field. I think senior leaders are starting to recognize that there’s more reputational and brand risk.

VentureBeat: Do you think government regulatory agencies are starting to figure this out as well? And if so, what are the concerns?

Kwartler: It’s interesting. If you’ve read the AI Algorithmic Accountability Act, it’s written very broadly. That’s tough because you have an evolving technological landscape. And there are thresholds in that bill around $50 million in revenue that require an impact assessment if your algorithm is going to impact people. I like the idea of the high-risk use cases that were clearly defined. That’s a little prescriptive, but in a good way. I also like that it’s collaborative, because this is an evolving space. You want this stuff to be aligned to your societal values, not build tools of oppression. At the same time, you can’t just clamp it all down because it has shown economic progress. We all benefit from AI technology. It’s a balancing act.

VentureBeat: Do business executives have a vested interest in encouraging governments to define the AI rules sooner than later?

Kwartler: Ambiguity is the tough spot. Organizations are willing to write impact assessments. They’re willing to get third-party audits of their models that are in production. They’re willing to have different monitoring tools in place. A lot of monitoring and model risk management already exists but not for AI, so there are mechanisms by which this can happen. As the technology and use cases improve, how do you then adjust what counts as or constitutes high risk? There is a need to balance economic prosperity with the guardrails under which it can operate.

VentureBeat: What do make of the European Union efforts to regulate AI?

Kwartler: I think next-generation technologists welcome that collaboration. It gives us a path forward. The one thing I really liked about it is that it didn’t seem overreaching. It seemed like it was balancing prosperity with security. It seemed like it was trying to be prescriptive enough about high-risk use cases. It seemed like a very reasoned approach. It wasn’t slamming the door and saying “no more AI.” What that does is it leaves AI development to maybe governments and organizations that operate in the dark. You don’t want that either.

VentureBeat: Do you think the U.S. will move in a similar direction?

Kwartler: We will interpret it for our own needs. That’s what we’ve done in the past. In the end, we will have some form of regulation. I think that we can envision a world where some sort of model auditing is a real feature.

VentureBeat: That would be a preferable alternative to 50 different states attempting to regulate AI?

Kwartler: Yes. There are even regulations coming out in New York City itself. There are regulations in California and Washington that by themselves can dictate it for the whole country. I would be in favor of anything that helps clear up ambiguity so that the whole industry can move forward.

VentureBeat: Do you think there’s going to be an entire school of law built around AI regulations and enforcement?

Kwartler: I suspect that there’s an opportunity for good regulation to really help as a protective measure. I’m certainly no legal expert, so I wouldn’t know if there’s going to be ambulance chasers or not. I do think that there is an existing precedent for good regulation for protecting companies. Once that regulation is in place, you remove the ambiguity. That’s a safer space for organizations that want to do good in the world using this technology.

VentureBeat: Do we have the tools needed to monitor AI?

Kwartler: I would say the technology for monitoring technology and mathematical equations for algorithmic bias exists. You can also apply algorithms to identify the characteristics of data and data quality checks. You can apply methods. You can also apply some heuristics, before or after the model is in production and making predictions, to mitigate biases or risks. Algorithms, heuristics, and mathematical equations can be used throughout that kind of workflow.

VentureBeat: Bias may not be a one-time event. Do we need to continuously evaluate the AI model for bias as new data becomes available? Do we need some sort of set of best practices for evaluating these AI models?

Kwartler: As soon as you build a model, no matter what, it’s going to be wrong. The pandemic has also shown us the input data that you use to train the model does not always equate to the real world. The truth of the matter is that data actually drifts. And once it’s in production, you have data drift, or in the case of language, you have what’s called a concept drift. I do think that there’s a real gap right now. Our AI executive survey showed a very small number of organizations were actually monitoring models in production. I think that is a huge opportunity to help inform these guardrails to get the right behavior. I think the community’s focused a lot on the model behavior, when I think we need to migrate to monitoring and MLOps (machine learning operations) to engender trust way downstream and be less technical.
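
As a rough illustration of the production monitoring Kwartler describes, the sketch below compares a feature’s training-time distribution against recent live data with a two-sample Kolmogorov-Smirnov test from SciPy. The synthetic data, feature, and alert threshold are assumptions for the example, not any particular vendor’s implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, live_values, alpha=0.01):
    """Flag drift when live data no longer looks like the training data.

    Uses a two-sample Kolmogorov-Smirnov test; a small p-value means the
    two samples are unlikely to come from the same distribution.
    """
    statistic, p_value = ks_2samp(train_values, live_values)
    return {"statistic": statistic, "p_value": p_value, "drift": p_value < alpha}

# Synthetic example: live inputs have shifted relative to the training data
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)   # distribution seen at training time
live = rng.normal(loc=0.4, scale=1.0, size=5_000)    # recent production inputs, shifted
print(check_feature_drift(train, live))
```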

VentureBeat: Do you think there is a danger a business will evaluate AI models based on their optimal result and then simply work backward to force that outcome?

Kwartler: In terms of model evaluation, I think that’s where a good collaboration for AI regulation can come in and say, for instance, if working in hiring you need to use statistical parity to make sure that you have equal representation by protected classes. That’s a very specific targeted metric. I think that’s where we need to go. The mandated organizations should have a common benchmark. We want this type of speed and this type of accuracy, but how does it deal with outliers? How does it deal with a set of design choices left to the data scientist as the expert? Let’s bring more people to the table.
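
Statistical parity, the hiring metric Kwartler mentions, can be checked in a few lines of code. The sketch below computes per-group selection rates and the gap between two groups; the group labels and decisions are hypothetical.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, selected being True/False."""
    counts = defaultdict(lambda: [0, 0])            # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def statistical_parity_difference(records, group_a, group_b):
    """Difference in selection rates between two groups (0.0 means parity)."""
    rates = selection_rates(records)
    return rates[group_a] - rates[group_b]

# Hypothetical hiring decisions: (protected-class group, was the candidate advanced?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(decisions))
print(statistical_parity_difference(decisions, "A", "B"))
```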

VentureBeat: We hear a lot about the AutoML frameworks being employed to make AI more accessible to end users. What is the role of data scientists in an organization that adopts AutoML?

Kwartler: I’m very biased in the sense that I have an MBA and learned data science. I believe that data scientists that are operating in a silo don’t deliver the value because their speed to market with the model is much slower than if you do it with AutoML. Scientists don’t always see the real desire of the business person trying to sponsor the project. They’ll want to build the model and optimize to six decimal points, when in reality it makes no difference unless you’re at some massive scale. I’m a firm believer in AutoML because that allows the data scientists doing a forecast for call centers to go sit with a call center agent and learn from them. I tell data scientists to go see where the data is actually being made. You’ll see all sorts of data integrity issues that will inform your model. That’s harder to do when it takes six months to build a bespoke model. If I can use AutoML to speed up velocity to value then I have this luxury to go deeper into the weeds.

VentureBeat: AI adoption is relatively slow still. Is AutoML about to speed things up?

Kwartler: I’ve worked in large, glacial companies. They move slower. The models themselves took maybe a year plus to move into production. I would say there is going to be data drift if it takes six months to build the model and six months to implement the model. The model is not going to be as accurate as I think it is. We need to increase that velocity to democratize it for people that are closer to the business problem.



Why the United Nations urgently needs its own regulation for AI

The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence.

“The sun is starting to set on the Wild West days of artificial intelligence,” writes Jeremy Kahn. He may have a point.

When this regulation comes into effect, it will change the way that we conduct AI research and development. In the last few years of AI, there were few rules or regulations: if you could think it, you could build it. That is no longer the case, at least in the European Union.

There is, however, a notable exception in the regulation, which is that it does not apply to international organizations like the United Nations.

Naturally, the European Union does not have jurisdiction over the United Nations, which is governed by international law. The exclusion, therefore, does not come as a surprise, but it does point to a gap in AI regulation. The United Nations needs its own regulation for artificial intelligence, and urgently so.

AI in the United Nations

Artificial intelligence technologies have been used increasingly by the United Nations. Several research and development labs, including the Global Pulse Lab, the Jetson initiative by the UN High Commissioner for Refugees, UNICEF’s Innovation Labs, and the Centre for Humanitarian Data, have focused their work on developing artificial intelligence solutions that would support the UN’s mission, notably in terms of anticipating and responding to humanitarian crises.

United Nations agencies have also used biometric identification to manage humanitarian logistics and refugee claims. The UNHCR developed a biometrics database that contained the information of 7.1 million refugees. The World Food Program has also used biometric identification in aid distribution to refugees, coming under some criticism in 2019 for its use of this technology in Yemen.

In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modeling.

A UNESCO video on its applications of AI.

No oversight, regulation

In 2014, the United States Bureau of Immigration and Customs Enforcement (ICE) awarded a US$20 billion contract to Palantir to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. Several human rights watchdogs, including Amnesty International, have raised concerns about Palantir for human rights violations.

Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight. There have been many attempts to set up ethical modes of operation, such as the Office for the Co-ordination of Humanitarian Affairs’ Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models.

In the absence of regulation, however, tools such as these, without legal backing, are merely best practices with no means of enforcement.

In the European Commission’s AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just like a new drug or car. They are required to put together a detailed package before the AI is available for use, involving a description of the models and data used, along with an explanation of how accuracy, privacy and discriminatory impacts will be addressed.

The AI applications in question include biometric identification, categorization and evaluation of the eligibility of people for public assistance benefits and services. They may also be used to dispatch emergency first response services — all of these are current uses of AI by the United Nations.


EU’s proposed AI regulation has some major loopholes

The European Commission plans to ban the use of AI for social credit scoring, exploitation of the vulnerable, and subliminal manipulation, as part of new proposals for regulating artificial intelligence in the EU.

Police use of real-time biometric identification systems — such as facial recognition — will also be “prohibited in principle,” albeit with a rather vague list of exemptions.

Judges can approve their use for finding victims of specific crimes, preventing terrorist attacks, and locating suspects or perpetrators of certain offenses.

The rules follow “a risk-based approach” that categorizes AI systems as unacceptable, high, limited, and minimal risk. Companies that break the rules will face fines of up to 6% of their global revenue or €30 million ($36 million), whichever figure is higher.
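
The penalty cap is a simple “whichever is higher” formula; a short sketch with two hypothetical revenue figures shows how it plays out.

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Upper bound on a fine under the proposal: the greater of
    6% of global revenue or a €30 million floor."""
    return max(0.06 * global_revenue_eur, 30_000_000)

# Hypothetical examples: a large firm is bound by the 6% figure,
# a smaller one by the €30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
print(max_fine_eur(100_000_000))    # 30000000.0
```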

The Commission aims to create a “human-centric” form of regulation that provides a competitive advantage over the US and China.

But the rules have been criticized, both by those who think they go too far, and those who think they don’t go far enough.

Advocates for stronger protections note that only four applications have been banned, while the prohibition of biometric systems comes with a range of exemptions. Furthermore, the “high risk” category isn’t clearly defined.

Daniel Leufer, Europe policy analyst at digital rights group Access Now, said there are major loopholes in the rules.

Representatives of the US tech sector are also unhappy with the proposals.

Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, a think tank that gets funding from Google, Apple, Microsoft, and Facebook, was among the most vociferous critics. He said the rules “will kneecap the EU’s nascent AI industry before it can learn to walk.”

Rather than supercharge the European digital economy, he argued, this law will cause it to fall even further behind the United States and China.

Mueller argued that the definition of AI is too expansive and that the requirements for high-risk systems would restrict many socially beneficial applications of the tech. Unsurprisingly, he wants the EU to adopt a light-touch framework that’s limited in scope.

In its efforts to balance safety and innovation, the EU was always likely to displease both sides of the debate. However, the rules could still change and won’t be implemented in the immediate future.

The European Parliament and EU member states both need to approve the proposals before they become law, a process that can take years. Lobbyists will also be generously providing their own suggested tweaks to the rules.

When the regulation finally becomes official, it might already be outdated.
