Categories
Computing

FBI warns hackers are using deepfakes to apply for jobs

Forget scamming grandma with fake IRS calls. According to the FBI, hackers are now stealing personal information and using deepfakes to apply for remote jobs.

In a public service announcement posted on the Internet Crime Complaint Center earlier today, the FBI explained how cybercriminals are stealing Americans’ personally identifiable information (PII), applying for remote jobs with it, and then using deepfake videos to pass online job interviews.

“The remote work or work-from-home positions identified in these reports include information technology and computer programming, database, and software-related job positions,” the FBI post said. “Notably, some reported positions include access to customer PII, financial data, corporate IT databases and proprietary information.”

PII can include any information used to identify you, such as your Social Security number, your driver’s license, and even your health insurance details. Once cybercriminals have your PII, they can apply for remote jobs under your name and address, padded out with fake qualifications.

The impressive part comes once they’ve been invited to a remote interview: the hackers use deepfake video to impersonate you during the online meeting, and may also use deepfake voice modifiers for telephone interviews.

Deepfake technology uses AI and machine learning to map a subject’s likeness and facial expressions onto existing video. A single still photo, such as the one on a driver’s license, can be enough to produce an impressively realistic result. Experts have been warning of the increasing prevalence of deepfakes in cybercrime for a few years, with Europol even releasing a report about deepfakes being used to impersonate powerful CEOs.

Thankfully, there are ways to spot a deepfake. Often the lip movements don’t sync with the words being spoken. And because the hackers feed pre-recorded or synthesized audio into the system, actions on the audio track, such as a sneeze or a cough, may not register on the speaker’s face at all.
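
Those sync cues can even be screened for programmatically. Below is a toy sketch, not a production deepfake detector, that correlates how far the mouth opens in each video frame with how loud the audio is at that moment; a consistently low correlation is one possible red flag. It assumes the mediapipe, librosa, opencv-python, and numpy packages, a separately extracted audio track, and placeholder file names.

```python
# Toy lip-sync consistency check: correlate mouth opening with audio loudness.
# Assumes: pip install mediapipe librosa opencv-python numpy
# File names below are placeholders; the audio track must be extracted separately.
import cv2
import librosa
import numpy as np
import mediapipe as mp

def mouth_openings(video_path):
    """Per-frame mouth opening: gap between MediaPipe inner-lip landmarks 13 and 14."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    openings = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            openings.append(abs(lm[13].y - lm[14].y))  # upper vs. lower inner lip
        else:
            openings.append(0.0)  # no face detected in this frame
    cap.release()
    return np.array(openings), fps

def audio_energy(audio_path, fps, n_frames):
    """Audio loudness (RMS) resampled to roughly one value per video frame."""
    y, sr = librosa.load(audio_path, sr=None)
    rms = librosa.feature.rms(y=y, hop_length=int(sr / fps))[0]
    return rms[:n_frames]

openings, fps = mouth_openings("interview.mp4")
energy = audio_energy("interview.wav", fps, len(openings))
n = min(len(openings), len(energy))
print(f"mouth/audio correlation: {np.corrcoef(openings[:n], energy[:n])[0, 1]:.2f}")
# A value near zero suggests the mouth isn't moving with the speech.
```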

The FBI said that in many cases the PII the fake applicants supplied actually belonged to other individuals, and pre-employment background checks surfaced the discrepancies.

The hackers’ goal seems to be gaining access to information secured behind corporate firewalls. Once inside, they can steal troves of data, including passwords and credit card numbers. The FBI release did not say whether any companies are known to have been breached in this way.

The FBI is asking all victims of this new cybercrime to lodge a complaint with the Internet Crime Complaint Center (IC3).


Categories
Game

Valve warns against squeezing a larger SSD into your Steam Deck

Valve loves to warn people about the risks of do-it-yourself Steam Deck maintenance, and that now extends to upgrading the storage. In a response to a PC Gamer article on modding the Steam Deck, Valve hardware designer Lawrence Yang warned against upgrading the device’s NVMe SSD. While it’s technically possible, the M.2 2242 drives (22mm wide by 42mm long) you frequently find in stores run hotter and draw more power than the 2230 models (22mm x 30mm) the handheld was designed to support. You could “significantly shorten” the longevity of the system, Yang said, adding that you shouldn’t move the thermal pads.

The PC Gamer story referenced modder Belly Jelly’s discovery (initially reported by Hot Hardware) that it was possible to fit an M.2 2242 SSD in the Steam Deck, albeit with some design sacrifices. There were already concerns this might lead to overheating problems. Yang just explained why it’s a bad idea, and outlined the likely long-term consequences.

The alert might be a letdown if you feel limited by Valve’s maximum 512GB storage and don’t think a microSD card (typically much slower than an SSD) is an adequate substitute. With that said, it’s not shocking — mobile devices like this often have size and thermal constraints that make it impractical to upgrade at least some components.


Categories
Security

Businesses risk ‘catastrophic financial loss’ from cyberattacks, US watchdog warns

A government watchdog has warned that private insurance companies are increasingly backing out of covering damages from major cyberattacks — leaving American businesses facing “catastrophic financial loss” unless another insurance model can be found.

The growing challenge of covering cyber risk is outlined in a new report from the Government Accountability Office (GAO), which calls for a government assessment of whether a federal cyber insurance option is needed.

The report draws on threat assessments from the National Security Agency (NSA), Office of the Director of National Intelligence (ODNI), Cybersecurity and Infrastructure Security Agency (CISA), and Department of Justice to quantify the risk of cyberattacks on critical infrastructure, identifying vulnerable technologies that might be attacked and a range of threat actors capable of exploiting them.

Citing an annual threat assessment released by the ODNI, the report finds that hacking groups linked to Russia, China, Iran, and North Korea pose the greatest threat to US infrastructure — along with certain non-state actors like organized cybercriminal gangs.

Given the wide and increasingly skilled range of actors willing to target US entities, the number of cyber incidents is rising at an alarming rate.

“Although federal agencies do not have a comprehensive inventory of cybersecurity incidents,” the report reads, “several key federal and industry sources show (1) an increase in most types of cyberattacks across the United States — including those affecting critical infrastructure, and (2) significant and increasing costs for cyberattacks.”

In 2016, US businesses and public bodies were hit with a total of 19,060 incidents in the four major categories — ransomware, data breaches, business email compromise, and denial of service attacks — with a total cost of $470 million, per a GAO analysis of FBI reports. In 2021, there were 26,074 incidents, and the total cost was close to $2.6 billion.
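
Run per-incident averages on those figures and the severity trend stands out even more than the volume trend; a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope math on the GAO/FBI figures cited above.
incidents_2016, cost_2016 = 19_060, 470_000_000
incidents_2021, cost_2021 = 26_074, 2_600_000_000

print(f"2016: ${cost_2016 / incidents_2016:,.0f} per incident")       # ~$24,659
print(f"2021: ${cost_2021 / incidents_2021:,.0f} per incident")       # ~$99,716
print(f"incident growth: {incidents_2021 / incidents_2016 - 1:.0%}")  # ~37%
print(f"cost growth:     {cost_2021 / cost_2016 - 1:.0%}")            # ~453%
```

In other words, incident counts grew by about a third while total costs more than quintupled, leaving the average reported incident roughly four times as costly.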

The report also cites specific incidents that had a spillover effect on the wider economy, notably the cyberattack on Colonial Pipeline that took a 5,500-mile fuel pipeline offline. In that attack, the pipeline operator paid the hackers a ransom of $4.4 million, despite advice from law enforcement agencies that ransom demands should always be rejected.

Spooked by the possibility of having to cover such large losses, private insurers are backing out of the market by excluding some of the most high-level cyberattacks from being covered by insurance policies. While data breaches and ransomware attacks are generally still covered, the report finds that “private insurers have been taking steps to limit their potential losses from systemic cyber events,” declining to cover losses incurred by acts of cyber warfare or deliberate infrastructure targeting.

According to the US Department of the Treasury, some insurers have also been mitigating their exposure by lowering the maximum amount that a policy will pay out in the case of a cyberattack and/or increasing premiums in an attempt to protect themselves from losses. There’s further evidence that some insurance companies are pulling back from coverage in infrastructure sectors entirely, the GAO found, judging the risk of attack as too high.

Overall, the GAO report recommends that CISA and the Federal Insurance Office jointly assess whether the factors above warrant a federal insurance response along the lines of FDIC insurance for bank deposits and the National Flood Insurance Program.


Categories
AI

Bias in AI isn’t an enterprise priority, but it should be, survey warns

A global survey published today finds nearly a third (31%) of respondents consider the social impact of bias in models to be AI’s biggest challenge. This is followed by concerns about the impact AI is likely to have on data privacy (21%). More troubling, only 10% of respondents said their organization has addressed bias in AI, with another 30% planning to do so sometime in the next 12 months.

Conducted by Anaconda, whose platform provides access to curated instances of open source tools for building AI models, the survey of 4,299 individuals includes IT and business professionals alongside students and academics. It suggests IT organizations are now exercising more influence over AI, with nearly a quarter of respondents (23%) noting that data science teams report up through the IT organization. Among considerations for the AI platforms employed, IT approval ranked third (45%), after performance (60%) and memory requirements (46%).

Respondents said they spend about 39% of their time on data prep and data cleansing, which is more than the time spent on model training, model selection, and deployment combined.

Among respondents responsible for deploying AI models in production environments, the top challenges cited are security (27%), recoding models from Python or R to another programming language (24%), managing dependencies and environments (23%), and recoding models from other languages into Python or R (23%). Python remains the dominant language (63%) employed by data science teams, while a full 87% said they are employing open source software to some degree.

The worst AI myths

The two most-cited data science myths are 1) that having access to lots of data by itself leads to greater accuracy (33%) and 2) that data scientists don’t know how to code (31%).

The survey suggests there is also a long way to go in terms of embedding AI within business workflows. Only 39% of respondents said many decisions are based on insights surfaced by their data science efforts, while a little over a third (35%) said some decisions are influenced by their work. Just 36% said their organization’s decision-makers are very data literate and understand the stories told by visualizations and models, and just over half (52%) described decision-makers as mostly data literate but in need of some coaching.

However, the percentage of individuals across an organization that will be employing data science is only going to increase in the months ahead, said Anaconda CEO Peter Wang. “You don’t need to know data science to use data science,” he said.

AI spending dropped in 2020

In the short term, however, the survey suggests investment in AI fell somewhat over the past year. More than a third of respondents said they saw a decline in AI investments in the wake of the economic downturn brought on by the COVID-19 pandemic, while just over a quarter (26%) said their organization actually increased its investment in AI.

Nearly half of respondents (45%) said reduced investments manifested themselves in the form of reduced budgets. Nearly half (47%) said their teams did not grow, while 39% said members of their teams were actually laid off. Just over a third (35%) said projects were put on hold or had their deadlines extended. Just under a third of respondents (32%) said they expect to be looking for a new job in the next 12 months.

There’s no doubt organizations of all sizes are engaging to the best of their ability in what is rapidly becoming an AI arms race. But not all processes lend themselves equally well to AI, so the issue is not just how to build AI models but where best to apply them.


Categories
Computing

Microsoft Warns Windows Users of Printing Vulnerability

Microsoft might have patched PrintNightmare in Windows, but for the second time this month, there’s yet another printer-themed vulnerability in the wild.

The newly detailed vulnerability in the Windows Print Spooler service could allow hackers to install programs; view, change, or delete data; and create new accounts on your PC.

Though that might sound scary, it is important to note that to leverage this new vulnerability, hackers need to be able to execute code on the victim’s system, which in practice generally means having local access to your PC. Microsoft mentions this in the support guide for the new vulnerability, which goes by the name CVE-2021-34481.

There, Microsoft assigns the vulnerability a CVSS score of 7.8 and an “important” severity rating, marking it as a high security risk. Microsoft also notes that though CVE-2021-34481 was made public, it hasn’t been exploited so far, while assessing that exploitation is “more likely.”

Microsoft hasn’t yet mentioned when a patch for this new vulnerability will be released. Instead, the company says it is investigating and “developing a security update.” Importantly, Microsoft points out that this new issue wasn’t caused by the July 2021 security update, which initially patched PrintNightmare.

Still worried? There is a temporary workaround for those who might be concerned. It involves opening PowerShell on Windows, determining whether the Print Spooler service is running, and then stopping and disabling the service. The downside is that stopping and disabling the Print Spooler service disables the ability to print both locally and remotely.
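
For those who script such mitigations, the workaround maps onto well-documented PowerShell cmdlets (Get-Service, Stop-Service, Set-Service). As a minimal sketch, assuming Windows and an elevated (administrator) session, the cmdlets can even be driven from Python:

```python
# Sketch: apply Microsoft's interim Print Spooler workaround by invoking the
# documented PowerShell cmdlets. Must be run as administrator on Windows.
# Reminder: this disables ALL printing, local and remote, until re-enabled.
import subprocess

def powershell(command: str) -> str:
    """Run a PowerShell command and return its trimmed stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# 1. Is the Print Spooler service running?
status = powershell("(Get-Service -Name Spooler).Status")
print(f"Print Spooler status: {status}")

# 2. If so, stop it and keep it from restarting at boot.
if status == "Running":
    powershell("Stop-Service -Name Spooler -Force")
    powershell("Set-Service -Name Spooler -StartupType Disabled")
    print("Print Spooler stopped and disabled.")
```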

Last time, Microsoft was quick to release a patch for PrintNightmare, doing so within four days of first discovering the issue. It’s unknown whether a patch for this exploit will arrive on a similar timeline. Seeing as the situation is a little less urgent, with hackers needing local access to a PC, it could be a while.

Microsoft credited security researcher Jacob Baines with discovering and reporting the issue. Baines notes on his Twitter page that he doesn’t believe the new vulnerability is a variant of PrintNightmare.


Categories
AI

OpenAI warns AI behind GitHub’s Copilot may be susceptible to bias

Last month, GitHub and OpenAI launched Copilot, a service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Powered by an AI model called Codex that was trained on billions of lines of public code, Copilot, the companies claim, works with a broad set of frameworks and languages and adapts to the edits developers make, matching their coding styles.

But a new paper published by OpenAI reveals that Copilot might have significant limitations, including biases and sample inefficiencies. While the research describes only early Codex models, whose descendants power GitHub Copilot and the Codex models in the OpenAI API, it emphasizes the pitfalls faced in the development of Codex, chiefly misrepresentations and safety challenges.

Despite the potential of language models like GPT-3, Codex, and others, blockers exist. The models can’t always answer math problems correctly or respond to questions without paraphrasing training data, and it’s well-established that they amplify biases in data. That’s problematic in the language domain, because a portion of the data is often sourced from communities with pervasive gender, race, and religious prejudices. And this might also be true of the programming domain — at least according to the paper.

Massive model

Codex was trained on 54 million public software repositories hosted on GitHub as of May 2020, containing 179 GB of unique Python files under 1 MB in size. OpenAI filtered out files that were likely auto-generated, had an average line length greater than 100 characters or a maximum line length greater than 1,000, or contained a small percentage of alphanumeric characters. The final training dataset totaled 159 GB.
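
Those filters are simple per-file heuristics. As a rough illustration only, since OpenAI hasn’t published its actual preprocessing code, a minimal sketch of the described rules might look like this (the 25% alphanumeric cutoff is an assumption):

```python
# Sketch of the file-filtering heuristics described above. OpenAI has not
# published its exact implementation; the 25% alphanumeric cutoff is a guess.
def keep_file(source: str, max_bytes: int = 1_000_000) -> bool:
    if len(source.encode("utf-8")) > max_bytes:  # Python files under 1 MB
        return False
    lines = source.splitlines() or [""]
    lengths = [len(line) for line in lines]
    if sum(lengths) / len(lines) > 100:          # average line length > 100
        return False
    if max(lengths) > 1000:                      # any line longer than 1,000
        return False
    alnum = sum(ch.isalnum() for ch in source)
    if alnum / max(len(source), 1) < 0.25:       # mostly non-alphanumeric
        return False
    # (The paper also drops files that look auto-generated; that check,
    # which isn't specified precisely, is omitted here.)
    return True
```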

OpenAI claims that the largest Codex model it developed, which has 12 billion parameters, can solve 28.8% of the problems in HumanEval, a collection of 164 OpenAI-created problems designed to assess algorithms, language comprehension, and simple mathematics. (In machine learning, parameters are the part of the model that’s learned from historical training data, and they generally correlate with sophistication.) That’s compared with OpenAI’s GPT-3, which solves 0% of the problems, and EleutherAI’s GPT-J, which solves just 11.4%.

After repeated sampling from the model, where Codex generates 100 candidate solutions per problem and a problem counts as solved if any of them passes, OpenAI says it manages to answer 70.2% of the HumanEval challenges correctly. But the company’s researchers also found that Codex can propose syntactically incorrect or undefined code, invoking functions, variables, and attributes that are undefined or outside the scope of the codebase.
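
That 70.2% figure is a pass@k-style metric: with k samples per problem, a problem counts as solved if at least one sample passes its unit tests. The Codex paper estimates this with an unbiased, numerically stable formula; a short sketch of that calculation:

```python
# Unbiased pass@k estimator (as given in the Codex paper): the expected
# probability that at least one of k samples passes, estimated from n
# samples of which c passed.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # too few failures to fill k draws: always at least one pass
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 100 samples drawn for a problem, 40 of them correct.
print(pass_at_k(n=100, c=40, k=1))   # ≈ 0.40
print(pass_at_k(n=100, c=40, k=10))  # ≈ 0.995
```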


More concerning, Codex suggests solutions that appear superficially correct but don’t actually perform the intended task. For example, when asked to create encryption keys, Codex selects “clearly insecure” configuration parameters in “a significant fraction of cases.” The model also recommends compromised packages as dependencies and invokes functions insecurely, potentially posing a safety hazard.

Safety hazards

Like other large language models, Codex generates responses as similar as possible to its training data, which can lead to obfuscated code that looks fine on inspection but in fact does something undesirable. Specifically, OpenAI found that Codex, like GPT-3, can be prompted to generate racist, denigratory, and otherwise harmful outputs as code. Given the prompt “def race(x):,” OpenAI reports that Codex assumes a small number of mutually exclusive race categories in its completions, with “White” being the most common, followed by “Black” and “other.” And when writing code comments with the prompt “Islam,” Codex includes the words “terrorist” and “violent” at a greater rate than it does with other religious groups.
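
Probes like this are simple to reproduce in spirit: sample many completions for a sensitive prompt and tally what the model fills in. The sketch below is illustrative only; the engine name, sampling settings, and tallying scheme are assumptions, not the paper’s actual evaluation harness.

```python
# Sketch of a prompt-based bias probe: sample completions and tally them.
# Assumes the (v0.x) OpenAI Python SDK and an API key in OPENAI_API_KEY;
# the engine name below is illustrative, not necessarily what the paper used.
import collections
import openai

def probe(prompt: str, n: int = 50):
    """Sample n completions for `prompt` and tally the first line of each."""
    counts = collections.Counter()
    for _ in range(n):
        resp = openai.Completion.create(
            engine="davinci-codex",  # illustrative engine name
            prompt=prompt,
            max_tokens=30,
            temperature=0.8,
        )
        first_line = (resp.choices[0].text.strip().splitlines() or [""])[0]
        counts[first_line] += 1
    return counts.most_common(5)

# What categories does the model assume a "race" function should return?
print(probe("def race(x):"))
```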

OpenAI recently claimed it discovered a way to improve the “behavior” of language models with respect to ethical, moral, and societal values. But the jury’s out on whether the method adapts well to other model architectures like Codex’s, as well as other settings and social contexts.

In the new paper, OpenAI also concedes that Codex is sample inefficient, in the sense that even inexperienced programmers can be expected to solve a larger fraction of the problems despite having seen far less code than the model. Moreover, training Codex requires a significant amount of compute — hundreds of petaflop/s-days — which contributes to carbon emissions. While Codex was trained on Microsoft Azure, which OpenAI notes purchases carbon credits and sources “significant amounts of renewable energy,” the company admits that the compute demands of code generation could grow to be much larger than Codex’s training if “significant inference is used to tackle challenging problems.”

Among others, leading AI researcher Timnit Gebru has questioned the wisdom of building large language models, examining who benefits from them and who is disadvantaged. In June 2019, researchers at the University of Massachusetts Amherst released a report estimating that the power required for training and searching a certain model entails emissions of roughly 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of the average U.S. car.

Perhaps anticipating criticism, OpenAI asserts in the paper that risk from models like Codex can be mitigated with “careful” documentation and user interface design, code review, and content controls. In the context of a model made available as a service, like via an API, policies including user review, use case restrictions, monitoring, and rate limiting might also help to reduce harms, the company says.

“Models like Codex should be developed, used, and their capabilities explored carefully with an eye towards maximizing their positive social impacts and minimizing intentional or unintentional harms that their use might cause. A contextual approach is critical to effective hazard analysis and mitigation, though a few broad categories of mitigations are important to consider in any deployment of code generation models,” OpenAI wrote.

We’ve reached out to OpenAI to see whether any of the suggested safeguards have been implemented in Copilot.


Categories
Security

Interpol Warns of High Rate of Cyberattacks During Pandemic

Interpol has warned that the coronavirus pandemic has led to an “alarming” rate of cyberattacks as criminals focus increasingly on larger organizations by targeting staff working from home.

A report released by the international police agency on August 4 said that since the start of the pandemic it has seen a “significant target shift from individuals and small businesses to major corporations, governments, and critical infrastructure.”

It said that while the spread of the coronavirus has led more organizations and businesses to set up remote networks so staff can work from home, the security measures on those home setups are often not as robust as those in the workplace, making it easier for cybercriminals to cause disruption, steal data, and generate profits.

“Cybercriminals are developing and boosting their attacks at an alarming rate, exploiting the fear and uncertainty caused by the unstable social and economic situation created by COVID-19,” said Jurgen Stock, secretary-general of Interpol.

The organization says it has seen an uptick in many different types of attacks, including phishing, where a perpetrator sends someone a fake email in a bid to trick the victim into clicking on a malicious link — a scam that could lead to the target giving up sensitive information about their business.

Cybercriminals are also launching more attacks using ransomware, a method that locks a computer system until a sum of money is paid.

With the pandemic still the main focus of so many people’s lives, perpetrators are also changing their tactics by increasingly impersonating government and healthcare facilities in emails that attempt to trick their targets into clicking on a link that could ultimately lead to a malware or ransomware attack.

Interpol, which counts the U.S. among its 194 member states, has also warned that if a vaccine is developed, cybercriminals will likely try to use it to launch more attacks by referring to it in bogus emails.

Twitter recently suffered a major hack where some of its employees, who may have been working from home, were tricked into giving up vital information about its internal systems. Meanwhile, tech company Garmin experienced a ransomware attack last month that forced a server outage, causing major disruption to customers using Garmin Connect, the network that controls data syncs for its wearables and online apps. The Kansas-based company has reportedly since received a decryption key to recover its files, suggesting it may have paid a ransom that one report put at $10 million.

Interpol urged organizations and businesses to ensure they have effective online security measures in place or risk becoming the next victim as cybercriminals increase their activities during the pandemic.


Categories
Tech News

Robinhood warns of possible Dogecoin disaster as it files for IPO

Robinhood has filed for its IPO, and the popular investment platform is boasting unexpected profitability – and warning of the potential Dogecoin risk. It’s been a hectic year or so for the company, which now says it turned a small $7.45 million profit in 2020, compared to $107 million in losses the previous year.

That’s based on a significant revenue jump, mind. In 2020, Robinhood disclosed today, it generated $959 million in revenues; in 2019, that figure was $278 million.

As expected, the Robinhood S-1 filing doesn’t just set out the company’s stall for why investors might want to consider its stock, but also lists a litany of potential issues it could face. Key among those is the volatile world of cryptocurrency, an area where Robinhood has seen particular interest from amateur investors. In particular, Dogecoin is called out as a specific area of possible concern.

“For the three months ended March 31, 2021, 17% of our total revenue was derived from transaction-based revenues earned from cryptocurrency transactions, compared to 4% for the three months ended December 31, 2020,” Robinhood’s S-1 reveals. “While we currently support a portfolio of seven cryptocurrencies for trading, for the three months ended March 31, 2021, 34% of our cryptocurrency transaction-based revenue was attributable to transactions in Dogecoin, as compared to 4% for the three months ended December 31, 2020.”
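
Those two percentages compound, and a quick calculation makes the exposure concrete:

```python
# Share of total Q1 2021 revenue attributable to Dogecoin, per the S-1 figures
crypto_share_of_total = 0.17  # crypto transaction revenue / total revenue
doge_share_of_crypto = 0.34   # Dogecoin revenue / crypto transaction revenue

print(f"{crypto_share_of_total * doge_share_of_crypto:.1%} of total revenue")  # ~5.8%
```

In other words, nearly 6% of Robinhood’s total first-quarter revenue traced back to Dogecoin alone.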

As Robinhood portrays it, that’s a level of new risk on top of the already at-times precarious cryptocurrency market. After all, $DOGE has proved particularly sensitive to high-profile boosters, with mere tweets from Tesla’s Elon Musk sufficient to send the price rocketing up – or, indeed, crashing down.

“As such, in addition to the factors impacting the broader cryptoeconomy described elsewhere in this section,” Robinhood adds, “RHC’s business may be adversely affected, and growth in our net revenue earned from cryptocurrency transactions may slow or decline, if the markets for Dogecoin deteriorate or if the price of Dogecoin declines, including as a result of factors such as negative perceptions of Dogecoin or the increased availability of Dogecoin on other cryptocurrency trading platforms.”

Now, it’s worth remembering that pessimism is the order of the day when it comes to SEC risk assessments. Businesses are required to list every possible challenge, hurdle, or other potential pitfall that could affect them: that, after all, is why it’s called the “Risk Factors” section. Robinhood also calls out everything from the COVID-19 pandemic, through SEC and other regulator demands, to even just a potential loss of reputation that could dampen investor interest.

Nonetheless it’s worth noting just how important Dogecoin has become as a proportion of Robinhood’s crypto business, and remembering that – unlike, for example, Bitcoin – the so-called meme coin has no theoretical limit on the number of coins that can be mined. Robinhood has previously cracked down on crypto trades in an attempt to pacify turbulent trading, finding itself the target of investor ire after limiting purchases and holdings of certain popular shares.

Robinhood plans to list under $HOOD on the NASDAQ when it finally goes public. The company recently agreed to a $70 million fine – the largest ever imposed by the Financial Industry Regulatory Authority (FINRA) – over claims it misled investors with false information, and caused harm with various outages in March 2020.


Categories
Security

Microsoft warns of ‘sophisticated’ Russian email attack targeting government agencies

Microsoft has raised the alarm over a “sophisticated” ongoing cyberattack believed to be from the same Russia-linked hackers behind the SolarWinds hack. In a blog post, Tom Burt, Microsoft’s corporate vice president for customer security and trust, said the attack appears to be targeting government agencies, think tanks, consultants, and NGOs. In total, around 3,000 email accounts are believed to have been targeted across 150 organizations. Victims are spread across upward of 24 countries, but the majority are believed to be in the US.

According to Microsoft, hackers from a threat actor called Nobelium were able to compromise the US Agency for International Development’s account on a marketing service called Constant Contact, allowing them to send authentic-looking phishing emails. Microsoft’s post contains a screenshot of one of these emails, which claimed to contain a link to “documents on election fraud” from Donald Trump. However, when clicked, this link would install a backdoor that let the attackers steal data or infect other computers on the same network.

“We are aware that the account credentials of one of our customers were compromised and used by a malicious actor to access the customer’s Constant Contact accounts,” a spokesperson for Constant Contact said in a statement. “This is an isolated incident, and we have temporarily disabled the impacted accounts while we work in cooperation with our customer, who is working with law enforcement.”

Microsoft says it believes that many of the attacks were blocked automatically, and that its Windows Defender antivirus software is also limiting the spread of the malware. The Cybersecurity and Infrastructure Security Agency at the Department of Homeland Security has acknowledged Microsoft’s blog post and encouraged administrators to apply the “necessary mitigations.”

This salvo of malicious emails is a warning that supply chain cyberattacks against US organizations are showing no signs of slowing, and that hackers are updating their methods in response to previous attacks becoming public. In its post, Microsoft calls for new international norms to be established governing “nation-state conduct in cyberspace” along with expectations of the consequences for breaking them.

The US government has blamed SVR, the Russian foreign intelligence service, for the SolarWinds hack, Bloomberg notes, although Russia’s president Vladimir Putin has denied Russian involvement. The attack is believed to have compromised around 100 private sector companies and nine federal agencies. Up to 18,000 SolarWinds customers are believed to have been exposed to the malicious code. In response, President Biden announced new sanctions on Russia and moved to expel 10 Russian diplomats from Washington, Bloomberg reports.


Categories
AI

Bank of England warns of potential risks from cloud data providers

(Reuters) — The Bank of England might strengthen its controls on cloud data providers and other technology firms to counter possible risks to the stability of the financial system from the rise of fintech, Deputy Governor Dave Ramsden said.

The Bank of England (BoE) has expressed concerns before about the reliance by financial firms, especially fintech startups, on third-party technology companies for key parts of their operations, and Ramsden said this scrutiny would intensify.

“We plan to analyse further whether we need even stronger tools to manage the risk that critical third parties, including potentially cloud and other major tech providers, may pose to the Bank’s … objectives,” Ramsden told the Innovate Finance conference on Wednesday.

Regulators globally have been tightening scrutiny of outsourced functions as they worry that core services financial firms provide to customers are vulnerable to outages at third parties.

Britain’s government is keen to promote fintech as an area of growth and hopes that nimbler regulation will enable it to steal a march over the European Union, where British financial firms now have reduced access due to Brexit.

The BoE has said it will not water down regulatory standards, but does see scope for more streamlined regulation of smaller banks and in some areas of insurance.

On Monday, finance minister Rishi Sunak asked the BoE to work with the finance ministry on whether the central bank should set up a digital version of sterling to compete with cryptocurrencies, which he dubbed ‘Britcoin’.

The government is also consulting over proposals to relax stock market listing rules due to a concern that Britain is less attractive than the United States as a listing venue, especially for tech companies whose founders want to keep a sizeable role.

Ramsden said the BoE had taken a step to make life easier for smaller financial companies on Monday by giving firms more direct ways to access its high-value payments system, which is dominated by major banks and processing companies.

Other steps included work on standardising the identification of businesses involved in financial transactions, and looking at whether artificial intelligence could ease the burden of regulatory compliance.
