Categories
Computing

IBM Reveals Quantum Computing Leap With 127-Qubit Processor

IBM has revealed its latest and most powerful quantum processor, and it represents a key breakthrough in the quantum computing industry.

Dubbed Eagle, the 127-qubit processor is the first of its kind to deliver more than 100 qubits. To illustrate just how demanding quantum computing systems are, their qubits have, until recently, had to be cooled to temperatures as cold as outer space.

To demonstrate the power of Eagle, IBM highlighted that simulating the processor on a classical computer would typically require more bits than there are atoms in every human being on the planet.

Thanks to that substantial qubit count, Eagle is claimed to be the first quantum processor ever created that cannot be simulated on a classical supercomputer.
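That claim can be sanity-checked with rough, back-of-the-envelope figures; the numbers below are common approximations (roughly 7 × 10^27 atoms per person and about 7.9 billion people), not figures supplied by IBM:

# Rough check of the "more bits than atoms in every human" comparison.
# Assumed figures (approximations, not IBM's): ~7e27 atoms per person,
# ~7.9 billion people on Earth.
amplitudes = 2 ** 127                  # complex amplitudes describing a 127-qubit state
atoms_in_all_humans = 7e27 * 7.9e9     # atoms per person * world population

print(f"2^127 amplitudes:    {amplitudes:.2e}")
print(f"Atoms in all humans: {atoms_in_all_humans:.2e}")
print("Claim holds:", amplitudes > atoms_in_all_humans)   # True: ~1.7e38 vs ~5.5e37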

Increasing the number of qubits in a quantum computer allows it to run highly sophisticated programs that a standard supercomputer would not be capable of handling.

“The arrival of the Eagle processor is a major step toward the day when quantum computers can outperform classical computers at meaningful levels,” said Dr. Darío Gil, senior vice president and director of research at IBM. “Quantum computing has the power to transform nearly every sector and help us tackle the biggest problems of our time.”


The processor architecture of Eagle features new methods that place control components on several physical levels, while qubits have been incorporated into their own level. 

IBM also noted how it had to combine and improve techniques that originated in previous generations of IBM quantum processors so it could develop an architecture that includes advanced 3D packaging techniques. Ultimately, such functionality will allow the company to provide a strong foundation for its processors, which includes the forthcoming 1000-plus-qubit Condor chip.

As for plans pertaining to future IBM quantum systems, Condor in particular is expected to contain 1,121 qubits in total upon its planned launch in 2023. Eagle’s immediate successor, the 433-qubit Osprey, is scheduled for a 2022 release.

The company also aims to achieve “quantum advantage” by 2023. The term refers to the point at which quantum computers can solve problems that are impossible to crack with a classical computer. Condor’s capabilities will help in accomplishing this goal.

Eagle will become available to select members of IBM’s Quantum Network, who have remote access to IBM’s quantum computers, in December. IBM will divulge more information about its quantum systems at its annual Quantum Summit event taking place tomorrow, November 16.


Categories
AI

NeuReality and IBM team up to develop AI inference platforms

[Updated 5:44am PST]

NeuReality, an Israel-based semiconductor company developing high-performance AI inference technology, has signed an agreement with IBM to further develop that technology.

The technology aims to deliver cost and power consumption improvements for deep learning inference use cases, the companies said. This development follows NeuReality’s emergence from stealth earlier this year, in February, with an $8 million seed round to accelerate AI workloads at scale.

AI inference is a growing area of focus for enterprises because it is the stage at which trained neural networks are actually applied in real applications and yield results. IBM and NeuReality claim their partnership will allow the deployment of computer vision, recommendation systems, natural language processing, and other AI use cases in critical sectors like finance, insurance, healthcare, manufacturing, and smart cities. They also claim the agreement will accelerate deployments of today’s ever-growing AI use cases, which are already running in public and private cloud datacenters.

NeuReality has competition in Cast AI, a technology company offering a platform that “allows developers to deploy, manage, and cost-optimize applications in multiple clouds simultaneously.” Some other competitors include Comet.ml, Upright Project, OctoML, Deci, and DeepCube. However, this partnership with IBM will see NeuReality become the first start-up semiconductor product member of the IBM Research AI Hardware Center and a licensee of the Center’s low-precision high performance Digital AI Cores.

VentureBeat connected via email with Moshe Tanach, CEO and co-founder of NeuReality, to get a broader view on the direction of this partnership.

Delivering a new reality to datacenters and near edge compute solutions

NeuReality’s agreement with IBM includes cooperation around NR1, NeuReality’s first Server-on-a-Chip ASIC implementation of its AI-centric architecture. The NR1 is a high performance, fully linear, scalable, network-attached device that provides services of AI workload processing, NeuReality says. In simpler terms, the NR1 offering targets cloud and enterprise datacenters, alongside carriers, telecom operators, and other near edge compute solutions—enabling them to deploy AI use cases more efficiently. The NR1 is based on NeuReality’s first generation FPGA-based NR1-P prototype platform introduced earlier this year.
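NeuReality has not published a public programming interface for the NR1, so the following is only a conceptual sketch of what “network-attached” inference means in practice: the host sends a request over the network and the appliance returns predictions. The host name, route, and payload format here are hypothetical placeholders, not NeuReality’s API.

import requests

# Hypothetical network-attached inference appliance; the real NR1 interface
# is not public, so every name below is an illustrative placeholder.
INFERENCE_HOST = "http://nr1-appliance.local:8080"

def classify(image_bytes: bytes) -> dict:
    """Ship raw input to the remote inference service and return its prediction."""
    resp = requests.post(
        f"{INFERENCE_HOST}/v1/models/resnet50:predict",   # made-up route
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

# The calling CPU only orchestrates I/O; the heavy lifting happens on the
# network-attached device, which is the efficiency argument NeuReality makes.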

In line with NeuReality’s vision to make AI accessible to all, this technology will remove the system bottlenecks of today’s solutions and provide disruptive cost and power consumption benefits for inference systems and services, the company said. The collaboration with IBM will ensure NeuReality’s already available FPGA-based NR1-P platform supports software integration and system-level validation prior to the availability of the NR1 production platform next year, the companies said.

“Having the NR1-P FPGA platform available today allows us to develop IBM’s requirements and test them before the NR1 Server-on-a-Chip’s tapeout. Being able to develop, test and optimize complex datacenter distributed features, such as Kubernetes, networking, and security before production is the only way to deliver high quality to our customers. I am extremely proud of our engineering team who will deliver a new reality to datacenters and near edge solutions. This new reality will allow many new sectors to deploy AI use cases more efficiently than ever before,” Tanach added.

A marker of NeuReality’s continued momentum

According to Dr. Mukesh Khare, Vice President of Hybrid Cloud research at IBM Research, “In light of IBM’s vision to deliver the most advanced Hybrid Cloud and AI systems and services to our clients, teaming up with NeuReality, which brings a disruptive AI-centric approach to the table, is the type of industry collaboration we are looking for. The partnership with NeuReality is expected to drive a more streamlined and accessible AI infrastructure, which has the potential to enhance people’s lives.”

As part of the agreement, IBM becomes a design partner of NeuReality and will work on the product requirements for the NR1 chip, system, and SDK that will be implemented in the next revision of the architecture. Together the two companies will evaluate NeuReality’s products for use in IBM’s Hybrid Cloud, including AI use cases, system flows, virtualization, networking, security, and more.

Following NeuReality’s announcement of its first-of-a-kind AI-centric architecture back in February and its collaboration with Xilinx to deliver their new AI-centric FPGA-based NR1-P platforms to the market in September, this agreement with IBM marks the company’s upward trajectory and continued momentum.

 


Categories
AI

IBM launches AI service to assist companies with climate change analysis

IBM today launched the Environmental Intelligence Suite, a set of AI-powered software that customers can use to prepare for climate risks that could disrupt operations. By combining AI, weather data, climate risk analytics, and carbon accounting capabilities, the Environmental Intelligence Suite can be used to help organizations assess their impact on the planet while reducing the complexity of regulatory compliance, IBM says.

Companies are facing climate-related damage to their assets, as well as increasing expectations from consumers to perform as environmental leaders. McKinsey predicts that climate change could mean more disruptions in global supply chains, interrupting production and raising costs and prices. At the same time, shoppers are seeking out — and are willing to pay a premium for — environmentally-friendly products, according to recent studies from GreenPrint and others.

The Environmental Intelligence Suite leverages existing weather data from IBM, along with technologies developed by IBM Research. Via APIs, dashboards, maps, and alerts, the platform delivers recommendations aimed at addressing both immediate challenges and long-term planning and strategies. With the platform, climate and data scientists can analyze environmental datasets and use a new climate risk modeling framework to generate data on future wildfire and flooding risks. They can also tap natural language processing and automation features designed to help estimate carbon emissions and identify opportunities for reduction.
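IBM does not detail the suite’s data formats here, but the workflow the paragraph describes (alerts in, operational decisions out) can be sketched generically. The alert records, region names, and facility IDs below are invented purely for illustration and do not reflect Environmental Intelligence Suite payloads.

# Generic sketch: turn severe-weather alerts into operational flags.
# All records below are invented for illustration only.
alerts = [
    {"region": "gulf-coast", "hazard": "flood", "severity": "severe"},
    {"region": "midwest",    "hazard": "wind",  "severity": "moderate"},
]
facilities = [
    {"name": "DC-04", "region": "gulf-coast"},
    {"name": "DC-11", "region": "midwest"},
    {"name": "DC-17", "region": "northeast"},
]

at_risk_regions = {a["region"] for a in alerts if a["severity"] == "severe"}
for site in facilities:
    if site["region"] in at_risk_regions:
        print(f"Reroute shipments away from {site['name']} ({site['region']})")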

“The future of business and the environment are deeply intertwined. Not only are companies coping with the effects of extreme weather disruptions on their operations, they’re also being held increasingly accountable by shareholders and regulators for how their operations impact the planet,” IBM’s general manager of AI applications and blockchain Kareem Yusuf said in a statement. “IBM is bringing together the power of AI and hybrid cloud to provide businesses with environmental intelligence designed to help them improve environmental performance and reporting, create more efficient business operations to reduce resource consumption, and plan for resiliency in the face of climate disruptions.”

AI for climate change

IBM is pitching the Environmental Intelligence Suite as a way to monitor severe weather, wildfires, flooding, and air quality, as well as to prioritize mitigation efforts and measure environmental initiatives. For example, the company says, retailers could use the platform to prepare for severe weather-related shipping and inventory disruptions, while energy and utility companies could deploy it to determine where to trim vegetation around power lines.

While studies suggest that some forms of machine learning do contribute significantly to greenhouse gas emissions, the technology has also been proposed as a tool to combat climate change. Researchers are using AI-generated images to help visualize climate change and estimate corporate carbon emissions. And nonprofits like WattTime are working to reduce households’ carbon footprint by automating when electric vehicles, thermostats, and appliances are active based on where renewable energy is available.

Beyond AI-powered services, tech giants are releasing tools to corner a global emission management software market expected to be worth $43.6 billion by 2030. Recently, Microsoft announced Cloud for Sustainability, a service designed to help companies measure and manage their carbon emissions by setting sustainability goals. It was released on the heels of Salesforce’s Sustainability Cloud, an enterprise carbon accounting product designed to drive climate action, and apps from Google Cloud to help businesses choose cleaner regions to locate their resources.


Categories
AI

IBM acquires Bluetab to expand hybrid cloud service offerings

A week after snatching up startup BoxBoat, IBM today announced another acquisition to expand its portfolio of data, cloud, and analytics services. With Bluetab, IBM aims to further advance its hybrid cloud and AI strategy across Europe, Latin America, and North America.

Organizations have turned to the cloud for flexibility as they look to support digital transformation. IDG reports that the average cloud budget is up from $1.62 million in 2016 to a whopping $2.2 million today. Moreover, despite the pandemic, the market for data services continues to grow both in revenue and size. Gartner forecasted the worldwide data and analytics services market at a five-year compound annual growth rate of 14.6%, reaching $232 billion by 2024.

An enterprise software and services company with offices in the U.K., Mexico, and Spain, Bluetab, which was founded in 2005, offers products designed to help enterprises adopt hybrid multicloud data platforms. The startup’s Truedat open source data governance tool ostensibly lets enterprises become more data-driven, while its Fastcapture service uses AI to automatically classify documents and extract relevant information from them.


“The key to solving data challenges for our clients has been the exceptionally talented and experienced team we have been able to build as well as the value-added accelerators we have developed,” Bluetab cofounder José Luis López said in a press release. “We could not be more excited by the opportunity that IBM offers us to continue to grow our team, to build on our accelerators, and to help more clients achieve leadership positions by leveraging their data.”

IBM SVP Mark Foster says that Bluetab’s over 700 data experts will eventually join IBM’s Global Business Services (GBS) unit, where they’ll leverage relationships in the banking, telecom, energy, and utilities industries. It’s a part of IBM’s wider effort to capitalize on a hybrid cloud market opportunity that some analysts say represents $1 trillion in value.

“The outside-in digital transformation of the past is giving way to the inside-out potential of using company-owned data with AI and automation to generate business value and create intelligent workflows,” Foster said in a statement. “Our acquisition of Bluetab will fuel migration to the cloud and help our clients to realize even more value from their mission-critical data.”

Continued growth

Since Arvind Krishna took over as IBM chairman last year, he’s spearheaded a remaking of the company, focusing on revenue growth and investing in cloud and AI technologies. Over the past five quarters, IBM spent more than $1.7 billion on 12 acquisitions — a strategy that’s paid dividends. For every dollar of platform spend, clients spend $3 to $5 in software and $6 to $8 in services, according to IBM.

IBM’s GBS is a profitable enterprise in its own right, with approximately $6 billion in revenue from the cloud consulting services market in 2020. In the first quarter of 2021, GBS doubled the number of Red Hat client engagements from the prior year to over 150. And to date, IBM says it has signed $2 billion of business from its Red Hat practice.

IBM declined to disclose financial details of the Bluetab acquisition. The transaction is subject to customary closing conditions, but the companies expect it to close in Q3 2021.


Categories
AI

IBM chief data scientist makes the case for building AI factories

IBM distinguished engineer and chief data scientist John Thomas contends that for organizations to really embrace AI, they need to adopt a factory model that automates as much of the model building process as possible.

Just like a traditional factory would create physical products reliably at scale and at speed, an AI factory would enable businesses to quickly build and scale trustworthy AI models.

VentureBeat caught up with Thomas to gain a better understanding of how an AI factory would actually work.

VentureBeat: What are the biggest challenges organizations encounter today with AI?

John Thomas: There are a few recurring themes we have been running into. Pretty much every large customer we work with has a data science team. They already have some kind of a data science project going on. But many of those projects are just experiments. They don’t make it into production, and even if they do, it just takes forever to get things from concept into production.

When we start digging into why that’s happening, there are a number of different things going on. Sometimes it is a misalignment between what the business expects and what the data science team is building. Sometimes it’s about the model — it’s great in development, but organizations have a hard time getting it through the model validation and risk management processes and getting it approved for deployment in production. Sometimes it’s about what happens after it goes into production. All of these are challenges outside the actual model building piece itself.

Not enough attention has been given to those different stages of the lifecycle. That’s what we keep seeing again and again, even with some of the most advanced data science teams. They’re super talented in terms of using the algorithms and the libraries and the frameworks to do the model building, but when it comes to deployment, management, monitoring, and aligning that to ongoing business impact, it seems to be problematic.

VentureBeat: How does this get fixed?

Thomas: Software development went through this phase a long time ago. Application developers were just writing code, and it was just difficult to get it all in production. You needed a structured approach, and DevOps came around. It’s the same kind of mindset, but now in the world of AI and machine learning. Just like a physical factory has got a set of processes, a set of best practices, and people with certain skills to produce some goods at scale and speed. You need a similar construct. You need people, process, and technology.

If you look at the different stages of the lifecycle, the first part of planning and scoping is a major step. IBM uses design thinking to tease out all of the aspects of the project in a very structured way. The next stage is data exploration, and the third stage is the model building. That’s when you start to look at trustworthiness and if the data is biased. All of the challenges around trustworthiness should be part of the model build stage itself. Then the next stage is the validation and deployment stage, where we set best practices. A validation team, which is separate from the model development team, has to come in to run the validation performance metrics, check for fairness, check for providing the explanations of the model, produce reports, and make sure certain criteria or thresholds the business has defined are met. The final stage is ongoing monitoring and management. This is where you have guardrails in place for checking the ongoing performance of the model. Once you set this up, it’s just like a physical factory.
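Thomas doesn’t prescribe specific tooling, but the validation stage he describes maps naturally onto a promotion “gate” that a candidate model must pass before deployment. The metric names and thresholds below are illustrative placeholders, not IBM’s actual criteria.

from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float            # overall performance metric from the validation team
    disparate_impact: float    # fairness ratio between groups (1.0 = parity)
    explanations_attached: bool

# Business-defined thresholds -- placeholders, not IBM's actual criteria.
THRESHOLDS = {"accuracy": 0.85, "di_min": 0.8, "di_max": 1.25}

def validation_gate(report: ValidationReport) -> bool:
    """Return True only if the candidate model may be promoted to production."""
    return (
        report.accuracy >= THRESHOLDS["accuracy"]
        and THRESHOLDS["di_min"] <= report.disparate_impact <= THRESHOLDS["di_max"]
        and report.explanations_attached
    )

# The validation team runs the gate; the monitoring stage re-runs the same
# checks as guardrails once the model is live.
candidate = ValidationReport(accuracy=0.91, disparate_impact=0.97, explanations_attached=True)
print("Promote to production" if validation_gate(candidate) else "Send back to the build stage")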

VentureBeat: Whose job is it to build this AI factory?

Thomas: It’s usually not the data science team, because they don’t want to be in the middle of any of this. What we have seen is that it’s the stakeholders. Each line of business has its own data science team chugging away at a bunch of models as part of a hub-and-spoke construct. The person who cares about consistency and scale across these lines of business is the one who champions setting up the factory. They will have people from different departments participate in the factory. IBM helps them set up the factory.

It’s not as if everything has to flow through it. That’s not what we’re saying. We are saying the spokes have the freedom to innovate, but they follow the same guidelines. They follow the same design thinking process for scoping and creating the action plan. They follow the same model governance. They have complete freedom in what algorithms and frameworks they use.

VentureBeat: Where do machine learning operations (MLOps) and DevOps fit within that factory?

Thomas: I didn’t use the term MLOps because there’s so much more that’s needed beyond MLOps. Understanding the trustworthiness, bias, fairness, explanations, and so on. The very nature of AI and ML is it’s a probabilistic, not deterministic, paradigm. It’s not something that a typical application development paradigm has to deal with.

VentureBeat: Do the AI factory and the software development factory need to merge?

Thomas: At a very high level there are similar constructs, but there are unique challenges to be addressed in the world of AI. I don’t think AI factories and software development factories will all become just one thing. There will be similar constructs and similar paradigms, but unique challenges need to be addressed uniquely.

VentureBeat: A factory implies automation. How will the data engineering process be automated?

Thomas: I don’t think we are at the point of automating everything. We want to automate the highly labor-intensive, manual, boring tasks as much as possible. If you’re working with a very large data set with hundreds or thousands of features, that’s pretty boring, manual, labor-intensive work. You want to rely on automation as much as possible. Creating a pipeline for model deployment should be automated, but with a human in the loop. It is about making sure the domain experts are used in the right way along the different stages of the lifecycle while automating some of the more mundane tasks. That’s the reality of where we are.

VentureBeat: We hear about the democratization of AI all the time, where end users are going to be building their own little AI frameworks. How does that fit within a factory model?

Thomas: We look at the different stages all the way from the beginning. Even before a single line of Python code is written, you need the business owner to be part of the scoping and planning stage itself. A lot of times the data science team is running after data science metrics. ‘My model is great because look at the precision.’ But how that translates into the actual business KPI (key performance indicator) is often not very clear. Being able to understand how your model relates to the business KPI upfront, before a single line of code is written, is important. You need the business to be part of this life cycle.

VentureBeat: A lot of end users are being told they don’t need a data science team to build an AI model as part of the democratization argument. Where is the line between the two?

Thomas: There are tools that lower the barrier of entry for sure, but at some point, you need the domain expert and the data science person. Unless the businessperson and the data scientists are working hand in hand, you cannot get that last mile. You cannot get to something that will go into production. You can’t just have data thrown into a magical box to produce AI. It’s not real.


Categories
AI

IBM working on AI tools to spot, cut bias in online ad targeting

(Reuters) — IBM is developing tools that would ensure online advertising algorithms do not unfairly show ads to only specific groups such as mostly men or wealthy people, aiming to address discrimination concerns that have drawn industrywide scrutiny.

The company told Reuters on Thursday that a team of 14 will research “fairness” in ads over the next six months, exploring ways to spot and mitigate unintended bias, including in audiences and the messages themselves.

Academic researchers and civil rights groups have found over the past decade that some audiences, including Black people and women, can be excluded from seeing job, housing, and other ads because of potentially unlawful choices made by advertisers or the automated systems they use.

Following complaints by U.S. anti-discrimination regulators and activists, Facebook and Alphabet’s Google, which are the world’s biggest digital ads sellers, enacted some changes.

But problems remain, while greater concern about data privacy has already begun reshaping internet marketing.

“The foundation of advertising is crumbling and we have to rebuild the house,” IBM Senior Vice President Bob Lord said. “While we’re at it, let’s ensure fairness is in the blueprint.”

IBM’s Watson Advertising unit sells ads on its The Weather Company properties and also provides ad software to other businesses. Clients eventually could be pitched on anti-bias offerings.

IBM’s initial analysis of its own ad purchases suggests ads can be shown fairly equally to all groups without affecting measures such as the percentage of users who click an ad, said Robert Redmond, head of AI ad product design.

Redmond offered as a hypothetical example a system told by an advertiser to direct an ad to men across the United States. It may start showing the ad to only those who are in big cities and white because that group clicks more. IBM is researching whether artificial intelligence programs can spot that and then redirect the ads more equally.
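IBM hasn’t published how its research tooling works, but the kind of check Redmond describes can be sketched as a comparison of impression share and click-through rate across audience segments. The numbers and the disparity threshold below are synthetic placeholders.

# Illustrative delivery-parity check across audience segments (synthetic numbers).
impressions = {"urban": 90_000, "rural": 10_000}
clicks      = {"urban": 1_800,  "rural": 210}

total = sum(impressions.values())
for segment in impressions:
    share = impressions[segment] / total
    ctr = clicks[segment] / impressions[segment]
    print(f"{segment:>5}: {share:.0%} of impressions, CTR {ctr:.2%}")

# Flag the campaign if any segment gets far less than an equal share of
# impressions; the 50%-of-parity threshold is an arbitrary placeholder.
MIN_SHARE = 0.5 / len(impressions)
skewed = [s for s in impressions if impressions[s] / total < MIN_SHARE]
print("Rebalance delivery for:", skewed or "none")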

The company next plans to analyze data from the nonprofit Ad Council’s public service announcements about COVID-19 vaccines.

“Systemic bias is unfortunately baked into nearly every corner of the advertising industry,” said Lisa Sherman, the council’s president and chief executive.

“We hope this research undertaking will help validate that the steps we take to mitigate unintended bias (and) identify areas for us and others to continue refining.”


Categories
AI

IBM releases AI model toolkit to help developers measure uncertainty

At its Digital Developer Conference today, IBM open-sourced Uncertainty Quantification 360 (UQ360), a new toolkit focused on enabling AI to understand and communicate its uncertainty. Following in the footsteps of IBM’s AI Fairness 360 and AI Explainability 360, the goal of UQ360 is to foster community practices across researchers, data scientists, developers, and others that might lead to better understanding and communication around the limitations of AI.

It’s commonly understood that deep learning models are overconfident — even when they make mistakes. Epistemic uncertainty describes what a model doesn’t know because the training data wasn’t appropriate. On the other hand, aleatoric uncertainty is the uncertainty arising from the natural randomness of observations. Given enough training samples, epistemic uncertainty will decrease, but aleatoric uncertainty can’t be reduced even when more data is provided.
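One common way to separate the two kinds of uncertainty (though not the only one, and not specific to UQ360) is an ensemble in which each model predicts a mean and a variance: the spread of the means approximates epistemic uncertainty, while the average predicted variance approximates aleatoric uncertainty. A minimal NumPy sketch:

import numpy as np

# Toy setup: five ensemble members each predict a mean and a variance for one input.
member_means = np.array([2.9, 3.1, 3.0, 3.4, 2.8])       # per-model predicted means
member_vars  = np.array([0.20, 0.25, 0.22, 0.30, 0.21])  # per-model predicted variances

aleatoric = member_vars.mean()     # noise the models agree is inherent in the data
epistemic = member_means.var()     # disagreement between models; shrinks with more data
total_var = aleatoric + epistemic  # standard decomposition of predictive variance

print(f"aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}  total={total_var:.3f}")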

UQ360 offers a set of algorithms and a taxonomy to quantify uncertainty, as well as capabilities to measure and improve uncertainty quantification (UQ). For every UQ algorithm provided in the UQ360 Python package, a user can make a choice of an appropriate style of communication by following IBM’s guidance on communicating UQ estimates, from descriptions to visualizations. UQ360 also includes an interactive experience that provides an introduction to producing UQ and ways to use UQ in a house price prediction application. Moreover, UQ360 includes a number of in-depth tutorials to demonstrate how to use UQ across the AI lifecycle.

The importance of uncertainty

Uncertainty is a major barrier standing in the way of self-supervised learning’s success, Facebook chief AI scientist Yann LeCun said at the International Conference on Learning Representation (ICLR) last year. Distributions are tables of values that link every possible value of a variable to the probability the value could occur. They represent uncertainty perfectly well where the variables are discrete, which is why architectures like Google’s BERT are so successful. But researchers haven’t yet discovered a way to usefully represent distributions where the variables are continuous — i.e., where they can be obtained only by measuring.

As IBM research staff members Prasanna Sattigeri and Q. Vera Liao note in a blog post, the choice of UQ method depends on a number of factors, including the underlying model, the type of machine learning task, characteristics of the data, and the user’s goal. Sometimes a chosen UQ method might not produce high-quality uncertainty estimates and could mislead users, so it’s crucial for developers to evaluate the quality of UQ and improve the quantification quality if necessary before deploying an AI system.
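Evaluating UQ quality is itself measurable. One simple diagnostic is prediction interval coverage: the fraction of held-out observations that actually land inside the model’s intervals, compared against the nominal level. This is a generic sketch, not UQ360 code.

import numpy as np

def interval_coverage(y_true, y_lower, y_upper):
    """Fraction of observations that fall inside their predicted intervals."""
    y_true, y_lower, y_upper = map(np.asarray, (y_true, y_lower, y_upper))
    return float(np.mean((y_true >= y_lower) & (y_true <= y_upper)))

# Synthetic check: a well-calibrated 90% interval should cover roughly 90% of points.
rng = np.random.default_rng(0)
y = rng.normal(size=1_000)
print(f"Observed coverage: {interval_coverage(y, -1.645, 1.645):.1%} (nominal 90%)")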

In a recent study conducted by Himabindu Lakkaraju, an assistant professor at Harvard University, showing uncertainty metrics to both people with a background in machine learning and non-experts had an equalizing effect on their resilience to AI predictions. While fostering trust in AI may never be as simple as providing metrics, awareness of the pitfalls could go some way toward protecting people from machine learning’s limitations.

“Common explainability techniques shed light on how AI works, but UQ exposes limits and potential failure points,” Sattigeri and Liao wrote. “Users of a house price prediction model would like to know the margin of error of the model predictions to estimate their gains or losses. Similarly, a product manager may notice that an AI model predicts a new feature A will perform better than a new feature B on average, but to see its worst-case effects on KPIs, the manager would also need to know the margin of error in the predictions.”


Categories
AI

IBM partners with U.K. on $300M quantum computing research initiative

The U.K. government and IBM this week announced a five-year £210 million ($297.5 million) artificial intelligence (AI) and quantum computing collaboration, in the hopes of making new discoveries and developing sustainable technologies in fields ranging from life sciences to manufacturing.

The program will hire 60 scientists and bring in interns and students to work under the auspices of IBM Research and the U.K.’s Science and Technology Facilities Council (STFC) at the Hartree Centre in Daresbury, Cheshire. The newly formed Hartree National Centre for Digital Innovation (HNCDI) will “apply AI, high performance computing (HPC) and data analytics, quantum computing, and cloud technologies” to advance research in areas like materials development and environmental sustainability, IBM said in a statement.

“Artificial intelligence and quantum computing have the potential to revolutionize everything from the way we travel to the way we shop. They are exactly the kind of fields I want the U.K. to be leading in,” U.K. Science Minister Amanda Solloway said.

The Hartree Centre was opened in 2012 by UK Research and Innovation’s STFC as an HPC, data analytics, and AI research facility. It’s housed within Sci-Tech Daresbury’s laboratory for research in accelerator science, biomedicine, physics, chemistry, materials, engineering, computational science, and more.

The program is part of IBM’s Discovery Accelerator initiative to “accelerate discovery and innovation based on a convergence of advanced technologies” at research centers like HNCDI, the company said. This will be IBM’s first Discovery Accelerator research center in Europe.

Fostering innovation around the globe

As part of the HNCDI program, the STFC Hartree Center is joining over 150 global organizations, ranging from Fortune 500 companies to startups, with an IBM Hybrid Cloud-accessible connection to the IBM Quantum Network. The Quantum Network is the computing giant’s assembly of premium quantum computers and development tools. IBM will also provide access to its commercial and experimental AI products and tools for work in areas like material design, scaling and automation, supply chain logistics, and trusted AI applications, the company said.

IBM has been busy inking Discovery Accelerator deals with partners this year. The company last month made a $200 million investment in a 10-year joint project with the Grainger College of Engineering at the University of Illinois Urbana-Champaign (UIUC). As with the HNCDI in the U.K., the planned IBM-Illinois Discovery Accelerator Institute at UIUC will build out new research facilities and hire faculty and technicians.

Earlier this year, IBM announced a 10-year quantum computing collaboration with the Cleveland Clinic to build the computational foundation of the future Cleveland Clinic Global Center for Pathogen Research & Human Health. That project will see the installation of the first U.S.-based on-premises, private sector IBM Quantum System One, the company said. In the coming years, IBM also plans to install one of its first next-generation 1,000+ qubit quantum systems at another Cleveland client site.

The pandemic added urgency to the task of harnessing quantum computing, AI, and other cutting-edge technologies to help solve medicine’s most pressing problems, IBM chair and CEO Arvind Krishna said in March at the time of the Cleveland Clinic announcement.

“The COVID-19 pandemic has spawned one of the greatest races in the history of scientific discovery — one that demands unprecedented agility and speed,” Krishna said in a statement.

“At the same time, science is experiencing a change of its own — with high-performance computing, hybrid cloud, data, AI, and quantum computing being used in new ways to break through long-standing bottlenecks in scientific discovery. Our new collaboration with Cleveland Clinic will combine their world-renowned expertise in health care and life sciences with IBM’s next-generation technologies to make scientific discovery faster and the scope of that discovery larger than ever,” Krishna said.


Categories
Tech News

Grillo, IBM, and the Clinton Foundation expand low-cost earthquake detection to the Caribbean

The Clinton Foundation today announced a commitment to action around deploying a low-cost, open-source-based novel earthquake detection system in the Caribbean, starting with Puerto Rico.

The system is being developed by Grillo in tandem with the long-running Call for Code challenge.

We’ve been covering David Clarke Causes and IBM’s Call for Code for years here at TNW, and Grillo’s work has been one of many success stories to come out of the coding event.

Grillo worked with IBM to develop open-source kits for the contest last year and, with the help of local scientists and government officials, began testing its earthquake-sensing technology in Puerto Rico.

Today, the company is gearing up to deploy 90 sensors across the island, developed through what’s called “OpenEEW,” or open-source earthquake early warning.

These low-cost sensors can provide robust seismic activity detection when combined with Grillo’s machine learning solutions.

Per a Clinton Global Initiative press release:

The Caribbean is a highly seismic region due to its location at the convergence zone between major tectonic plates and communities across the region are frequently impacted by seismic events. In January 2020, southern Puerto Rico was impacted by a series of earthquakes over several weeks that damaged homes and infrastructure and caused displacement.

Earthquakes can be difficult to detect. While it’s usually impossible to miss the big ones as they’re happening, many smaller and medium-sized seismic events go undetected by modern sensors.

In fact, when Grillo developed its system in Mexico and Puerto Rico, the team found that its inexpensive open-source sensors and detection tech outperformed costly state systems by a significant margin.

Here’s the best part: Grillo and IBM are committed to developing these systems in tandem with local and global developers. In other words: you can help.

Per the press release:

In another important step in helping prepare and alert citizens ahead of earthquakes, they’ll be hosting deployments in Puerto Rico, leveraging the 90+ sensors located across the island and calling on the open source community to help introduce affordable, community-driven detection solutions to the region.

Grillo would like the open-source community’s help on the following actions:

  • Developers can help improve and test the sensor firmware so that it is more reliable and easier to provision.
  • The MQTT backend needs to be ported to additional open source platforms to allow for a scalable global hosted solution that will ingest data from citizen scientists everywhere.
  • The detection code, which is being developed in Python and deployed on Kubernetes, can use help; see the sketch after this list. This will allow for rapid and precise detection of earthquakes using many more sensors and distributed systems.
  • The team is experimenting with machine learning for improved accuracy using the latest seismological algorithms.
  • There is also work on the mobile and wearable apps both for provisioning of a sensor device, as well as receiving alerts from the cloud.
  • Work is underway on the public Carbon/React dashboard which allows users to see and interact with devices, as well as view recent earthquake events.
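The project’s real detection pipeline lives in the OpenEEW codebase, but the core idea of triggering on ground-motion data can be illustrated with a classic STA/LTA (short-term average over long-term average) check on an acceleration stream. The window lengths and threshold below are illustrative, not Grillo’s production values.

import numpy as np

def sta_lta_trigger(accel, sta_len=50, lta_len=500, threshold=4.0):
    """Return sample indices where the short-term/long-term energy ratio spikes."""
    energy = np.asarray(accel, dtype=float) ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)   # avoid division by zero on quiet streams
    return np.flatnonzero(ratio > threshold)

# Synthetic stream: background noise with a burst of shaking in the middle.
rng = np.random.default_rng(1)
stream = rng.normal(0, 0.01, 5000)
stream[2500:2600] += rng.normal(0, 0.5, 100)
picks = sta_lta_trigger(stream)
print("Trigger around sample:", picks[0] if picks.size else "no event detected")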

If you think you have what it takes to contribute, or you’d like to learn how to use and develop these solutions as part of the open-source community, you can find out more here.


Categories
Computing

IBM President Says Chip Shortage Will Last a Few Years


In an interview with the BBC, IBM president Jim Whitehurst warned that the chip shortage could last “a few years” longer. The quote echoes similar claims made by Nvidia and Intel, which have seen firsthand the disruption of supply chains brought on by COVID-19.

Even with an uptick in vaccinations and a sense of normalcy returning to parts of the world, things will remain in flux for the semiconductor industry for at least a few years. “There’s just a big lag between when a technology is developed and when [a fabrication plant] goes into construction and when chips come out,” Whitehurst explained.

IBM just recently unveiled the world’s first 2nm chip, setting a new bar for the semiconductor industry. It doesn’t look like manufacturers will be using that technology soon, however. Whitehurst said that the industry will need to look at ways of reusing and extending the life of computing technologies to overcome the shortage.

Interrupted supply chains are one cause of the semiconductor shortage, but they’re not the only one. A surge in demand has also contributed. The Semiconductor Industry Association continues to see market growth in the semiconductor industry, with revenue in the first quarter of 2021 exceeding that of the same quarter in 2020 by 17.8%.

The biggest driver of demand is semiconductors in computing systems, according to IDC, a market research company, followed closely by smartphones. This has caused shortages of graphics cards, game consoles, laptops, and even cars.

Apple, which has long been able to secure chips for its products, has felt the squeeze, too. A report from April claims that Apple was delaying manufacturing of the MacBook Pro and the iPad due to a lack of components. Samsung is experiencing similar problems, citing “a serious imbalance in supply and demand.”

It may take a few years, but the semiconductor industry is building back up. Last month, President Joe Biden announced a $50 billion investment in the industry as part of the American Jobs Plan, and semiconductor giants like TSMC are already in talks to take advantage of the funding to bolster domestic production.
