
AI accelerates problem-solving in complex scenarios

By Adam Zewe

December 5, 2023 | MIT News


While Santa Claus may have a magical sleigh and nine plucky reindeer to help him deliver presents, for companies like FedEx, the optimization problem of efficiently routing holiday packages is so complicated that they often employ specialized software to find a solution.

This software, called a mixed-integer linear programming (MILP) solver, splits a massive optimization problem into smaller pieces and uses generic algorithms to try to find the best solution. However, the solver could take hours — or even days — to arrive at a solution.

The process is so onerous that a company often must stop the software partway through, accepting a solution that is not ideal but the best that could be generated in a set amount of time.

Researchers from MIT and ETH Zurich used machine learning to speed things up.

They identified a key intermediate step in MILP solvers that has so many potential solutions it takes an enormous amount of time to unravel, which slows the entire process. The researchers employed a filtering technique to simplify this step, then used machine learning to find the optimal solution for a specific type of problem.

Their data-driven approach enables a company to use its own data to tailor a general-purpose MILP solver to the problem at hand.

This new technique sped up MILP solvers between 30 and 70 percent, without any drop in accuracy. One could use this method to obtain an optimal solution more quickly or, for especially complex problems, a better solution in a tractable amount of time.

This approach could be used wherever MILP solvers are employed, such as by ride-hailing services, electric grid operators, vaccination distributors, or any entity faced with a thorny resource-allocation problem.

“Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

Wu wrote the paper with co-lead authors Sirui Li, an IDSS graduate student, and Wenbin Ouyang, a CEE graduate student; as well as Max Paulus, a graduate student at ETH Zurich. The research will be presented at the Conference on Neural Information Processing Systems.

Tough to solve

MILP problems have an exponential number of potential solutions. For instance, say a traveling salesperson wants to find the shortest path to visit several cities and then return to their city of origin. If there are many cities that could be visited in any order, the number of potential solutions might be greater than the number of atoms in the universe.
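
That claim is easy to sanity-check: with n cities there are (n - 1)!/2 distinct round trips, and against the common order-of-magnitude estimate of 10^80 atoms in the observable universe, the tour count wins at roughly 61 cities. A quick sketch:

```python
import math

ATOMS_IN_UNIVERSE = 10 ** 80   # common order-of-magnitude estimate

def tour_count(n_cities: int) -> int:
    """Distinct round trips through n cities: fix the start, halve for direction."""
    return math.factorial(n_cities - 1) // 2

for n in (10, 30, 61):
    print(n, tour_count(n), tour_count(n) > ATOMS_IN_UNIVERSE)
```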

“These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” Wu explains.

An MILP solver employs an array of techniques and practical tricks that can achieve reasonable solutions in a tractable amount of time.

A typical solver uses a divide-and-conquer approach, first splitting the space of potential solutions into smaller pieces with a technique called branching. Then, the solver employs a technique called cutting to tighten up these smaller pieces so they can be searched faster.
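
The article doesn't show the researchers' solver internals, but the branch-and-bound skeleton underneath any MILP solver fits in a few lines. A minimal sketch, assuming SciPy's linprog for the LP relaxation (real solvers add the cutting planes discussed next, plus presolve and heuristics):

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Minimize c @ x s.t. A_ub @ x <= b_ub with every x[i] an integer.

    bounds: one integer (lo, hi) pair per variable.
    """
    best_val, best_x = math.inf, None
    stack = [bounds]                     # each node is a set of variable bounds
    while stack:
        node = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node)  # LP relaxation
        if not res.success or res.fun >= best_val:
            continue                     # infeasible, or pruned by the incumbent
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                     # integral solution: new incumbent
            best_val, best_x = res.fun, [round(v) for v in res.x]
            continue
        i, v = frac[0], res.x[frac[0]]   # branch on the first fractional variable
        lo, hi = node[i]
        left, right = list(node), list(node)
        left[i], right[i] = (lo, math.floor(v)), (math.ceil(v), hi)
        stack += [left, right]
    return best_val, best_x

# Toy example: maximize 5a + 4b (minimize the negative) subject to
# 6a + 4b <= 22 and a + 2b <= 6, with a, b nonnegative integers.
print(branch_and_bound([-5, -4], [[6, 4], [1, 2]], [22, 6], [(0, 10), (0, 10)]))
# -> (-19.0, [3, 1]): the integer optimum 5*3 + 4*1 = 19
```

The incumbent's objective value is what does the bounding: any node whose relaxation is already worse gets discarded without further branching.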

Cutting uses a set of rules that tighten the search space without removing any feasible solutions. These rules are generated by a few dozen algorithms, known as separators, that have been created for different kinds of MILP problems. 

Wu and her team found that the process of identifying the ideal combination of separator algorithms to use is, in itself, a problem with an exponential number of solutions.

“Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” she says.

Shrinking the solution space

She and her collaborators devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to around 20 options. This filtering mechanism draws on the principle of diminishing marginal returns, which says that the most benefit would come from a small set of algorithms, and adding additional algorithms won’t bring much extra improvement.
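
The paper's exact filtering procedure isn't described here, but diminishing marginal returns is the same property that makes greedy subset selection work well. A hypothetical sketch, where `measure_speedup` stands in for benchmarking the solver on training instances with a candidate subset of separators enabled:

```python
def greedy_filter(separators, measure_speedup, k=3):
    """Greedily keep the k separators with the largest marginal benefit.

    measure_speedup(subset) -> float is a hypothetical stand-in for
    benchmarking the solver with `subset` of separators enabled.
    """
    chosen = []
    for _ in range(k):
        base = measure_speedup(chosen)
        best, best_gain = None, 0.0
        for s in separators:
            if s not in chosen:
                gain = measure_speedup(chosen + [s]) - base
                if gain > best_gain:
                    best, best_gain = s, gain
        if best is None:          # diminishing returns: nothing left worth adding
            break
        chosen.append(best)
    return chosen

# Toy benefit: a separator set is worth the number of distinct cut "families" it covers.
covers = {"gomory": {1, 2}, "clique": {2, 3}, "knapsack-cover": {3}}
value = lambda subset: len(set().union(*[covers[s] for s in subset], set()))
print(greedy_filter(list(covers), value, k=2))   # -> ['gomory', 'clique']
```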

Then they use a machine-learning model to pick the best combination of algorithms from among the 20 remaining options.

This model is trained with a dataset specific to the user’s optimization problem, so it learns to choose algorithms that best suit the user’s particular task. Since a company like FedEx has solved routing problems many times before, using real data gleaned from past experience should lead to better solutions than starting from scratch each time.

The model learns through an iterative process known as contextual bandits, a form of reinforcement learning that involves picking a potential solution, getting feedback on how good it was, and then trying again to find a better one.
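
In skeleton form, that loop is an epsilon-greedy contextual bandit. The sketch below is generic rather than the paper's method; `solve_time` and the instance features are hypothetical placeholders:

```python
import random
from collections import defaultdict

def bandit_tuner(instances, configs, solve_time, rounds=500, eps=0.1):
    """Epsilon-greedy contextual bandit over separator configurations.

    instances: (context, problem) pairs, where context is a hashable feature
        summary of the MILP instance (a placeholder for real features).
    solve_time(problem, config) -> float stands in for running the solver.
    The reward is negative solve time, so faster runs score higher.
    """
    total = defaultdict(float)    # (context, config) -> cumulative reward
    pulls = defaultdict(int)      # (context, config) -> times tried
    for _ in range(rounds):
        context, problem = random.choice(instances)
        if random.random() < eps:               # explore a random configuration
            config = random.choice(configs)
        else:                                   # exploit the best average so far
            config = max(configs, key=lambda a:
                         total[(context, a)] / max(pulls[(context, a)], 1))
        reward = -solve_time(problem, config)   # feedback on how good the pick was
        total[(context, config)] += reward
        pulls[(context, config)] += 1
    return {key: total[key] / pulls[key] for key in pulls}
```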

This data-driven approach accelerated MILP solvers between 30 and 70 percent without any drop in accuracy. Moreover, the speedup was similar when they applied it to a simpler, open-source solver and a more powerful, commercial solver.

In the future, Wu and her collaborators want to apply this approach to even more complex MILP problems, where gathering labeled data to train the model could be especially challenging. Perhaps they can train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem, she says. The researchers are also interested in interpreting the learned model to better understand the effectiveness of different separator algorithms.

This research is supported, in part, by MathWorks, the National Science Foundation (NSF), the MIT Amazon Science Hub, and MIT’s Research Support Committee.


What is artificial intelligence (AI)?

Updated: 16 August 2024 | Contributors: Cole Stryker, Eda Kavlakoglu

Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.

Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. They can make detailed recommendations to users and experts. They can act independently, replacing the need for human intelligence or intervention (a classic example being a self-driving car). 

But in 2024, most AI researchers and practitioners—and most AI-related headlines—are focused on breakthroughs in generative AI (gen AI), a technology that can create original text, images, video and other content. To fully understand generative AI, it’s important to first understand the technologies on which generative AI tools are built: machine learning (ML) and deep learning.


A simple way to think about AI is as a series of nested or derivative concepts that have emerged over more than 70 years:  

Directly underneath AI, we have machine learning, which involves creating models by training an algorithm to make predictions or decisions based on data. It encompasses a broad range of techniques that enable computers to learn from and make inferences based on data without being explicitly programmed for specific tasks. 

There are many types of machine learning techniques or algorithms, including linear regression, logistic regression, decision trees, random forest, support vector machines (SVMs), k-nearest neighbor (KNN), clustering and more. Each of these approaches is suited to different kinds of problems and data.

But one of the most popular types of machine learning algorithm is called a neural network (or artificial neural network). Neural networks are modeled after the human brain's structure and function. A neural network consists of interconnected layers of nodes (analogous to neurons) that work together to process and analyze complex data. Neural networks are well suited to tasks that involve identifying complex patterns and relationships in large amounts of data.

The simplest form of machine learning is called supervised learning, which involves the use of labeled data sets to train algorithms to classify data or predict outcomes accurately. In supervised learning, humans pair each training example with an output label. The goal is for the model to learn the mapping between inputs and outputs in the training data, so it can predict the labels of new, unseen data.
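
A minimal supervised-learning example in that spirit, using scikit-learn's stock iris dataset (a generic illustration, unrelated to any system named in this article):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled data set: flower measurements paired with species labels.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the mapping from inputs to output labels...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then predict labels for new, unseen data.
print("held-out accuracy:", model.score(X_test, y_test))
```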


Deep learning is a subset of machine learning that uses multilayered neural networks, called deep neural networks, that more closely simulate the complex decision-making power of the human brain.

Deep neural networks include an input layer, at least three but usually hundreds of hidden layers, and an output layer, unlike neural networks used in classic machine learning models, which usually have only one or two hidden layers.

These multiple layers enable unsupervised learning: they can automate the extraction of features from large, unlabeled and unstructured data sets, and make their own predictions about what the data represents.

Because deep learning doesn’t require human intervention, it enables machine learning at a tremendous scale. It is well suited to natural language processing (NLP), computer vision, and other tasks that involve the fast, accurate identification of complex patterns and relationships in large amounts of data. Some form of deep learning powers most of the artificial intelligence (AI) applications in our lives today.
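
A bare-bones numpy sketch of the layered structure described above (an input layer, several hidden layers, and an output layer), with random weights standing in where training would normally set them:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 16, 3]   # input layer, three hidden layers, output layer

# One weight matrix and bias vector per connection between adjacent layers.
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Feed an input through each layer in turn (ReLU on the hidden layers)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)        # hidden layers extract features
    return x @ weights[-1] + biases[-1]       # output layer: raw scores

print(forward(rng.normal(size=8)))            # three output scores
```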

Deep learning also enables:

  • Semi-supervised learning, which combines supervised and unsupervised learning by using both labeled and unlabeled data to train AI models for classification and regression tasks.
  • Self-supervised learning, which generates implicit labels from unstructured data, rather than relying on labeled data sets for supervisory signals.
  • Reinforcement learning, which learns by trial-and-error and reward functions rather than by extracting information from hidden patterns.
  • Transfer learning, in which knowledge gained through one task or data set is used to improve model performance on another related task or different data set.

Generative AI, sometimes called "gen AI," refers to deep learning models that can create complex original content—such as long-form text, high-quality images, realistic video or audio and more—in response to a user’s prompt or request.

At a high level, generative models encode a simplified representation of their training data, and then draw from that representation to create new work that’s similar, but not identical, to the original data.

Generative models have been used for years in statistics to analyze numerical data. But over the last decade, they evolved to analyze and generate more complex data types. This evolution coincided with the emergence of three sophisticated deep learning model types:

  • Variational autoencoders (VAEs), which were introduced in 2013 and enabled models that could generate multiple variations of content in response to a prompt or instruction.
  • Diffusion models, first seen in 2014, which add "noise" to images until they are unrecognizable, and then remove the noise to generate original images in response to prompts.
  • Transformers (also called transformer models), which are trained on sequenced data to generate extended sequences of content (such as words in sentences, shapes in an image, frames of a video or commands in software code). Transformers are at the core of most of today’s headline-making generative AI tools, including ChatGPT and GPT-4, Copilot, BERT, Bard and Midjourney. A sketch of the attention operation at their core follows this list.
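
As promised above, here is a single-head numpy sketch of scaled dot-product attention, in which every position in a sequence scores its relevance to every other position (random matrices stand in for the learned projections that produce queries Q, keys K and values V):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d = 5, 8                                    # 5 tokens, 8-dim vectors
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (5, 8): one vector per token
```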

In general, generative AI operates in three phases:

  • Training, to create a foundation model.
  • Tuning, to adapt the model to a specific application.
  • Generation, evaluation and more tuning, to improve accuracy.

Generative AI begins with a "foundation model": a deep learning model that serves as the basis for multiple different types of generative AI applications.

The most common foundation models today are large language models (LLMs), created for text generation applications. But there are also foundation models for image, video, sound or music generation, and multimodal foundation models that support several kinds of content.

To create a foundation model, practitioners train a deep learning algorithm on huge volumes of relevant raw, unstructured, unlabeled data, such as terabytes or petabytes of text, image or video data from the internet. The training yields a neural network of billions of parameters—encoded representations of the entities, patterns and relationships in the data—that can generate content autonomously in response to prompts. This is the foundation model.

This training process is compute-intensive, time-consuming and expensive. It requires thousands of clustered graphics processing units (GPUs) and weeks of processing, all of which typically costs millions of dollars. Open source foundation model projects, such as Meta's Llama-2, enable gen AI developers to avoid this step and its costs.

Next, the model must be tuned to a specific content generation task. This can be done in various ways, including:

  • Fine-tuning, which involves feeding the model application-specific labeled data—questions or prompts the application is likely to receive, and corresponding correct answers in the desired format.
  • Reinforcement learning with human feedback (RLHF), in which human users evaluate the accuracy or relevance of model outputs so that the model can improve itself. This can be as simple as having people type or talk back corrections to a chatbot or virtual assistant.

Generation, evaluation and more tuning  

Developers and users regularly assess the outputs of their generative AI apps, and further tune the model—even as often as once a week—for greater accuracy or relevance. In contrast, the foundation model itself is updated much less frequently, perhaps every year or 18 months.

Another option for improving a gen AI app’s performance is retrieval augmented generation (RAG), a technique that extends the foundation model with relevant sources outside of its training data, supplying additional context at generation time for greater accuracy or relevance.
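
Stripped to its skeleton, RAG is retrieval plus prompt assembly. A hedged sketch, where `embed` and `generate` are hypothetical stand-ins for an embedding model and a foundation model:

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Return the k documents whose embeddings are most similar to the query."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(-sims)[:k]]

def rag_answer(question, docs, embed, generate):
    """Retrieval augmented generation: ground the model in retrieved sources.

    embed(text) -> vector and generate(prompt) -> text are placeholders
    for whatever embedding model and foundation model are actually used.
    """
    doc_vecs = np.stack([embed(d) for d in docs])
    context = "\n".join(retrieve(embed(question), doc_vecs, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```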

AI offers numerous benefits across various industries and applications. Some of the most commonly cited benefits include:

  • Automation of repetitive tasks.
  • More and faster insight from data.
  • Enhanced decision-making.
  • Fewer human errors.
  • 24x7 availability.
  • Reduced physical risks.

Automation of repetitive tasks  

AI can automate routine, repetitive and often tedious tasks—including digital tasks such as data collection, entering and preprocessing, and physical tasks such as warehouse stock-picking and manufacturing processes. This automation frees people to work on higher-value, more creative work.

Enhanced decision-making  

Whether used for decision support or for fully automated decision-making, AI enables faster, more accurate predictions and reliable, data-driven decisions . Combined with automation, AI enables businesses to act on opportunities and respond to crises as they emerge, in real time and without human intervention.

Fewer human errors  

AI can reduce human errors in various ways, from guiding people through the proper steps of a process, to flagging potential errors before they occur, and fully automating processes without human intervention. This is especially important in industries such as healthcare where, for example, AI-guided surgical robotics enable consistent precision.

Machine learning algorithms can continually improve their accuracy and further reduce errors as they're exposed to more data and "learn" from experience.

Round-the-clock availability and consistency  

AI is always on, available around the clock, and delivers consistent performance every time. Tools such as AI chatbots or virtual assistants can lighten staffing demands for customer service or support. In other applications—such as materials processing or production lines—AI can help maintain consistent work quality and output levels when used to complete repetitive or tedious tasks.

Reduced physical risk  

By automating dangerous work—such as animal control, handling explosives, performing tasks in deep ocean water, high altitudes or in outer space—AI can eliminate the need to put human workers at risk of injury or worse. While they have yet to be perfected, self-driving cars and other vehicles offer the potential to reduce the risk of injury to passengers.

The real-world applications of AI are many. Here is just a small sampling of use cases across various industries to illustrate its potential:

Customer experience, service and support  

Companies can implement AI-powered chatbots and virtual assistants to handle customer inquiries, support tickets and more. These tools use natural language processing (NLP) and generative AI capabilities to understand and respond to customer questions about order status, product details and return policies.

Chatbots and virtual assistants enable always-on support, provide faster answers to frequently asked questions (FAQs), free human agents to focus on higher-level tasks, and give customers faster, more consistent service.

Fraud detection  

Machine learning and deep learning algorithms can analyze transaction patterns and flag anomalies, such as unusual spending or login locations, that indicate fraudulent transactions. This enables organizations to respond more quickly to potential fraud and limit its impact, giving themselves and customers greater peace of mind.
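
As a generic illustration of anomaly flagging (an isolation forest over made-up transaction features, not any particular production system):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Made-up transaction features: [amount in dollars, hour of day, km from home].
normal_history = np.column_stack([rng.normal(60, 20, 500),
                                  rng.normal(14, 3, 500),
                                  rng.normal(5, 2, 500)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# Transactions far outside the learned pattern should be flagged as -1.
print(detector.predict([[4800, 3, 900],    # huge amount, 3 a.m., far from home
                        [55, 13, 4]]))     # expected: [-1  1]
```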

Personalized marketing  

Retailers, banks and other customer-facing companies can use AI to create personalized customer experiences and marketing campaigns that delight customers, improve sales and prevent churn. Based on data from customer purchase history and behaviors, deep learning algorithms can recommend products and services customers are likely to want, and even generate personalized copy and special offers for individual customers in real time.

Human resources and recruitment  

AI-driven recruitment platforms can streamline hiring by screening resumes, matching candidates with job descriptions, and even conducting preliminary interviews using video analysis. These and other tools can dramatically reduce the mountain of administrative paperwork associated with fielding a large volume of candidates. They can also reduce response times and time-to-hire, improving the experience for candidates whether they get the job or not.

Application development and modernization  

Generative AI code generation tools and automation tools can streamline repetitive coding tasks associated with application development, and accelerate the migration and modernization (reformatting and replatforming) of legacy applications at scale. These tools can speed up tasks, help ensure code consistency and reduce errors.

Predictive maintenance  

Machine learning models can analyze data from sensors, Internet of Things (IoT) devices and operational technology (OT) to forecast when maintenance will be required and predict equipment failures before they occur. AI-powered preventive maintenance helps prevent downtime and enables you to stay ahead of supply chain issues before they affect the bottom line.
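
A minimal sketch of the forecasting step, with synthetic sensor readings standing in for real IoT data (a generic regression, not a specific product):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic history: vibration and temperature readings vs. hours until failure.
X = np.column_stack([rng.uniform(0, 10, 400),       # vibration level
                     rng.uniform(40, 100, 400)])    # bearing temperature
y = 500 - 30 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 10, 400)

model = RandomForestRegressor(random_state=0).fit(X, y)
hours_left = model.predict([[8.5, 95]])[0]          # a hot, vibrating machine
print(f"schedule maintenance within ~{hours_left:.0f} hours")
```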

Organizations are scrambling to take advantage of the latest AI technologies and capitalize on AI's many benefits. This rapid adoption is necessary, but adopting and maintaining AI workflows comes with challenges and risks. 

Data risks  

AI systems rely on data sets that might be vulnerable to data poisoning, data tampering, data bias or cyberattacks that can lead to data breaches. Organizations can mitigate these risks by protecting data integrity and implementing security and availability throughout the entire AI lifecycle, from development to training and deployment and postdeployment.

Model risks  

Threat actors can target AI models for theft, reverse engineering or unauthorized manipulation. Attackers might compromise a model’s integrity by tampering with its architecture, weights or parameters; the core components that determine a model’s behavior, accuracy and performance.

Operational risks  

Like all technologies, models are susceptible to operational risks such as model drift, bias and breakdowns in the governance structure. Left unaddressed, these risks can lead to system failures and cybersecurity vulnerabilities that threat actors can exploit.

Ethics and legal risks  

If organizations don’t prioritize safety and ethics when developing and deploying AI systems, they risk committing privacy violations and producing biased outcomes. For example, biased training data used for hiring decisions might reinforce gender or racial stereotypes and create AI models that favor certain demographic groups over others.  

AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes. Principles of AI ethics are applied through a system of AI governance consisting of guardrails that help ensure that AI tools and systems remain safe and ethical.

AI governance encompasses oversight mechanisms that address risks. An ethical approach to AI governance requires the involvement of a wide range of stakeholders, including developers, users, policymakers and ethicists, helping to ensure that AI-related systems are developed and used in a way that aligns with society's values.

Here are common values associated with AI ethics and responsible AI:

As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result. Explainable AI is a set of processes and methods that enables human users to interpret, comprehend and trust the results and output created by algorithms.

Although machine learning, by its very nature, is a form of statistical discrimination, the discrimination becomes objectionable when it places privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage, potentially causing varied harms. To encourage fairness, practitioners can try to minimize algorithmic bias across data collection and model design, and to build more diverse and inclusive teams.

Robust AI effectively handles exceptional conditions, such as abnormalities in input or malicious attacks, without causing unintentional harm. It is also built to withstand intentional and unintentional interference by protecting against exposed vulnerabilities.

Organizations should implement clear responsibilities and governance structures for the development, deployment and outcomes of AI systems. In addition, users should be able to see how an AI service works, evaluate its functionality, and comprehend its strengths and limitations. Increased transparency provides information for AI consumers to better understand how the AI model or service was created.

Many regulatory frameworks, including GDPR, mandate that organizations abide by certain privacy principles when processing personal information. It is crucial to be able to protect AI models that might contain personal information, control what data goes into the model in the first place, and to build adaptable systems that can adjust to changes in regulation and attitudes around AI ethics.

To contextualize the use of AI at various levels of complexity and sophistication, researchers have defined several types of AI according to their level of sophistication:

Weak AI: Also known as “narrow AI,” describes AI systems designed to perform a specific task or a set of tasks. Examples might include “smart” voice assistant apps, such as Amazon’s Alexa, Apple’s Siri, a social media chatbot or the autonomous vehicles promised by Tesla.

Strong AI: Also known as “artificial general intelligence” (AGI) or “general AI,” would possess the ability to understand, learn and apply knowledge across a wide range of tasks at a level equal to or surpassing human intelligence. This level of AI is currently theoretical, and no known AI systems approach this level of sophistication. Researchers argue that if AGI is even possible, it would require major increases in computing power. Despite recent advances in AI development, the self-aware AI systems of science fiction remain firmly in that realm.

The idea of "a machine that thinks" dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of AI include the following:

1950 Alan Turing publishes Computing Machinery and Intelligence (link resides outside ibm.com). In this paper, Turing—famous for breaking the German ENIGMA code during WWII and often referred to as the "father of computer science"—asks the following question: "Can machines think?" 

From there, he offers a test, now famously known as the "Turing Test," where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI and an ongoing concept within philosophy, as it draws on ideas about linguistics.

1956 John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at Dartmouth College. (McCarthy went on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.

1958 Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that "learned" through trial and error. In 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research initiatives.

1980s Neural networks that use backpropagation algorithms to train themselves become widely used in AI applications.

1995 Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach (link resides outside ibm.com), which becomes one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiates computer systems based on rationality and thinking versus acting. 

1997 IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).

2004 John McCarthy writes a paper, What Is Artificial Intelligence? (link resides outside ibm.com), and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models. 

2011 IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Also, around this time, data science begins to emerge as a popular discipline.

2015 Baidu's Minwa supercomputer uses a special deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human. 

2016 DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had acquired DeepMind in 2014 for a reported USD 400 million.

2022 A rise in large language models or LLMs, such as OpenAI’s ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pretrained on large amounts of data.

2024 The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input are providing richer, more robust experiences. These models bring together computer vision image recognition and NLP speech recognition capabilities. Smaller models are also making strides in an era of diminishing returns for massive models with large parameter counts.



The Intersection of Math and AI: A New Era in Problem-Solving

Connecting Math and Machine Learning

A conference explores the burgeoning connections between the two fields.

Traditionally, mathematicians jot down their formulas using paper and pencil, seeking out what they call pure and elegant solutions. In the 1970s, they hesitantly began turning to computers to assist with some of their problems. Decades later, computers are often used to crack the hardest math puzzles. Now, in a similar vein, some mathematicians are turning to machine learning tools to aid in their numerical pursuits.

Embracing Machine Learning in Mathematics

“Mathematicians are beginning to embrace machine learning,” says Sergei Gukov, the John D. MacArthur Professor of Theoretical Physics and Mathematics at Caltech, who put together the Mathematics and Machine Learning 2023 conference, which is taking place at Caltech December 10–13.

“There are some mathematicians who may still be skeptical about using the tools,” Gukov says. “The tools are mischievous and not as pure as using paper and pencil, but they work.”

Machine Learning: A New Era in Mathematical Problem Solving

Machine learning is a subfield of AI, or artificial intelligence, in which a computer program is trained on large datasets and learns to find new patterns and make predictions. The conference, the first put on by the new Richard N. Merkin Center for Pure and Applied Mathematics, will help bridge the gap between developers of machine learning tools (the data scientists) and the mathematicians. The goal is to discuss ways in which the two fields can complement each other.

Mathematics and Machine Learning: A Two-Way Street

“It’s a two-way street,” says Gukov, who is the director of the new Merkin Center, which was established by Caltech Trustee Richard Merkin.

“Mathematicians can help come up with clever new algorithms for machine learning tools like the ones used in generative AI programs like ChatGPT, while machine learning can help us crack difficult math problems.”

Yi Ni, a professor of mathematics at Caltech, plans to attend the conference, though he says he does not use machine learning in his own research, which involves the field of topology and, specifically, the study of mathematical knots in lower dimensions. “Some mathematicians are more familiar with these advanced tools than others,” Ni says. “You need to know somebody who is an expert in machine learning and willing to help. Ultimately, I think AI for math will become a subfield of math.”

The Riemann Hypothesis and Machine Learning

One tough problem that may unravel with the help of machine learning, according to Gukov, is known as the Riemann hypothesis. Named after the 19th-century mathematician Bernhard Riemann, this problem is one of seven Millennium Problems selected by the Clay Mathematics Institute; a $1 million prize will be awarded for the solution to each problem.

The Riemann hypothesis centers around a formula known as the Riemann zeta function, which packages information about prime numbers. If proved true, the hypothesis would provide a new understanding of how prime numbers are distributed. Machine learning tools could help crack the problem by providing a new way to run through more possible iterations of the problem.
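
For reference, the function and the conjecture can be stated compactly:

```latex
% The Riemann zeta function for Re(s) > 1 (extended to the rest of the
% complex plane by analytic continuation), with Euler's product linking
% it to the primes:
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}
         = \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}

% Riemann hypothesis: every nontrivial zero of \zeta lies on the
% critical line \operatorname{Re}(s) = \tfrac{1}{2}.
```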

Mathematicians and Machine Learning: A Synergistic Relationship

“Machine learning tools are very good at recognizing patterns and analyzing very complex problems,” Gukov says.

Ni agrees that machine learning can serve as a helpful assistant. “Machine learning solutions may not be as beautiful, but they can find new connections,” he says. “But you still need a mathematician to turn the questions into something computers can solve.”

Knot Theory and Machine Learning

Gukov has used machine learning himself to untangle problems in knot theory. Knot theory is the study of abstract knots, which are similar to the knots you might find on a shoestring, but the ends of the strings are closed into loops. These mathematical knots can be entwined in various ways, and mathematicians like Gukov want to understand their structures and how they relate to each other. The work has relationships to other fields of mathematics such as representation theory and quantum algebra, and even quantum physics.

In particular, Gukov and his colleagues are working to solve what is called the smooth Poincaré conjecture in four dimensions. The original Poincaré conjecture, which is also a Millennium Problem, was proposed by mathematician Henri Poincaré early in the 20th century. It was ultimately solved from 2002 to 2003 by Grigori Perelman (who famously turned down his prize of $1 million). The problem involves comparing spheres to certain types of manifolds that look like spheres; manifolds are shapes that are projections of higher-dimensional objects onto lower dimensions. Gukov says the problem is like asking, “Are objects that look like spheres really spheres?”

The four-dimensional smooth Poincaré conjecture holds that, in four dimensions, all manifolds that look like spheres are indeed actually spheres. In an attempt to solve this conjecture, Gukov and his team are developing a machine learning approach to evaluate so-called ribbon knots.

“Our brain cannot handle four dimensions, so we package shapes into knots,” Gukov says. “A ribbon is where the string in a knot pierces through a different part of the string in three dimensions but doesn’t pierce through anything in four dimensions. Machine learning lets us analyze the ‘ribboness’ of knots, a yes-or-no property of knots that has applications to the smooth Poincaré conjecture.”

“This is where machine learning comes to the rescue,” write Gukov and his team in a preprint paper titled “Searching for Ribbons with Machine Learning.” “It has the ability to quickly search through many potential solutions and, more importantly, to improve the search based on the successful ‘games’ it plays. We use the word ‘games’ since the same types of algorithms and architectures can be employed to play complex board games, such as Go or chess, where the goals and winning strategies are similar to those in math problems.”

The Interplay of Mathematics and Machine Learning Algorithms

On the flip side, math can help in developing machine learning algorithms, Gukov explains. A mathematical mindset, he says, can bring fresh ideas to the development of the algorithms behind AI tools. He cites Peter Shor as an example of a mathematician who brought insight to computer science problems. Shor, who graduated from Caltech with a bachelor’s degree in mathematics in 1981, famously came up with what is known as Shor’s algorithm, a set of rules that could allow quantum computers of the future to factor integers faster than typical computers, thereby breaking digital encryption codes.

Today’s machine learning algorithms are trained on large sets of data. They churn through mountains of data on language, images, and more to recognize patterns and come up with new connections. However, data scientists don’t always know how the programs reach their conclusions. The inner workings are hidden in a so-called “black box.” A mathematical approach to developing the algorithms would reveal what’s happening “under the hood,” as Gukov says, leading to a deeper understanding of how the algorithms work and thus can be improved.

“Math,” says Gukov, “is fertile ground for new ideas.”

The conference will take place at the Merkin Center on the eighth floor of Caltech Hall.


When Should You Use AI to Solve Problems?


Summary.

AI is increasingly informing business decisions but can be misused if executives stick with old decision-making styles. A key to effective collaboration is to recognize which parts of a problem to hand off to the AI and which the managerial mind will be better at solving. While AI is superior at data-intensive prediction problems, humans are uniquely suited to the creative thought experiments that underpin the best decisions.

Business leaders often pride themselves on their intuitive decision-making. They didn’t get to be division heads and CEOs by robotically following some leadership checklist. Of course, intuition and instinct can be important leadership tools, but not if they’re indiscriminately applied.


What is AI (artificial intelligence)?


Humans and machines: a match made in productivity heaven. Our species wouldn’t have gotten very far without our mechanized workhorses. From the wheel that revolutionized agriculture to the screw that held together increasingly complex construction projects to the robot-enabled assembly lines of today, machines have made life as we know it possible. And yet, despite their seemingly endless utility, humans have long feared machines—more specifically, the possibility that machines might someday acquire human intelligence and strike out on their own.


But we tend to view the possibility of sentient machines with fascination as well as fear. This curiosity has helped turn science fiction into actual science. Twentieth-century theoreticians, like computer scientist and mathematician Alan Turing, envisioned a future where machines could perform functions faster than humans. The work of Turing and others soon made this a reality. Personal calculators became widely available in the 1970s, and by 2016, the US census showed that 89 percent of American households had a computer. Machines—smart machines at that—are now just an ordinary part of our lives and culture.

Those smart machines are also getting faster and more complex. Some computers have now crossed the exascale threshold, meaning they can perform as many calculations in a single second as an individual could in 31,688,765,000 years. And beyond computation, which machines have long been faster at than we have, computers and other devices are now acquiring skills and perception that were once unique to humans and a few other species.
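
That comparison is straightforward arithmetic: crossing the exascale threshold means on the order of 10^18 calculations per second, so one calculation per second for the same workload takes about 10^18 seconds:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

exascale_ops = 1e18                      # calculations per second at exascale
years_for_a_human = exascale_ops / SECONDS_PER_YEAR   # at one calculation per second

print(f"{years_for_a_human:.3e} years")  # ~3.169e+10, about 31.7 billion years
```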


AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem-solving, and even exercising creativity. You’ve probably interacted with AI even if you don’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are some customer service chatbots that pop up to help you navigate websites.

Applied AI—simply, artificial intelligence applied to real-world problems—has serious implications for the business world. By using artificial intelligence, companies have the potential to make business more efficient and profitable. But ultimately, the value of AI isn’t in the systems themselves. Rather, it’s in how companies use these systems to assist humans—and their ability to explain to shareholders and the public what these systems do—in a way that builds trust and confidence.

For more about AI, its history, its future, and how to apply it in business, read on.



What is machine learning?

Machine learning is a form of artificial intelligence that can adapt to a wide range of inputs, including large sets of historical data, synthesized data, or human inputs. (Some machine learning algorithms are specialized in training themselves to detect patterns; this is called deep learning.) These algorithms can detect patterns and learn how to make predictions and recommendations by processing data, rather than by receiving explicit programming instruction. Some algorithms can also adapt in response to new data and experiences to improve over time.

The volume and complexity of data that is now being generated, too vast for humans to process and apply efficiently, has increased the potential of machine learning, as well as the need for it. In the years since its widespread deployment, which began in the 1970s, machine learning has had an impact on a number of industries, with achievements including medical-imaging analysis and high-resolution weather forecasting.


What is deep learning?

Deep learning is a more advanced version of machine learning that is particularly adept at processing a wider range of data resources (text as well as unstructured data including images), requires even less human intervention, and can often produce more accurate results than traditional machine learning. Deep learning uses neural networks—based on the ways neurons interact in the human brain—to ingest data and process it through multiple neuron layers that recognize increasingly complex features of the data. For example, an early layer might recognize something as being in a specific shape; building on this knowledge, a later layer might be able to identify the shape as a stop sign. Similar to machine learning, deep learning uses iteration to self-correct and improve its prediction capabilities. For example, once it “learns” what a stop sign looks like, it can recognize a stop sign in a new image.
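To make the layered idea concrete, here is a minimal sketch (in Python, with NumPy) of data flowing through three stacked layers. The layer sizes and random weights are purely illustrative; a real vision model would learn its weights from labeled images.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Illustrative only: a tiny three-layer network. Each layer's output becomes
# the next layer's input, so later layers can represent more abstract
# combinations of the raw pixels (edges -> shapes -> "stop sign").
rng = np.random.default_rng(0)
pixels = rng.random(64)                 # a toy 8x8 image, flattened

w1 = rng.standard_normal((32, 64))      # layer 1: low-level features (e.g., edges)
w2 = rng.standard_normal((16, 32))      # layer 2: mid-level features (e.g., shapes)
w3 = rng.standard_normal((1, 16))       # layer 3: a single "stop sign?" score

h1 = relu(w1 @ pixels)
h2 = relu(w2 @ h1)
score = w3 @ h2
print(f"stop-sign score (untrained, illustrative): {score[0]:.3f}")
```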

Case study: Vistra and the Martin Lake Power Plant

Vistra is a large power producer in the United States, operating plants in 12 states with a capacity to power nearly 20 million homes. Vistra has committed to achieving net-zero emissions by 2050. In support of this goal, as well as to improve overall efficiency, QuantumBlack, AI by McKinsey worked with Vistra to build and deploy an AI-powered heat rate optimizer (HRO) at one of its plants.

“Heat rate” is a measure of the thermal efficiency of the plant; in other words, it’s the amount of fuel required to produce each unit of electricity. To reach the optimal heat rate, plant operators continuously monitor and tune hundreds of variables, such as steam temperatures, pressures, oxygen levels, and fan speeds.

Vistra and a McKinsey team, including data scientists and machine learning engineers, built a multilayered neural network model. The model combed through two years’ worth of data at the plant and learned which combination of factors would attain the most efficient heat rate at any point in time. Once the models were accurate to 99 percent or higher and had been run through a rigorous set of real-world tests, the team converted them into an AI-powered engine that generates recommendations every 30 minutes for operators to improve the plant’s heat rate efficiency. One seasoned operations manager at the company’s plant in Odessa, Texas, said, “There are things that took me 20 years to learn about these power plants. This model learned them in an afternoon.”

Overall, the AI-powered HRO helped Vistra achieve the following:

  • approximately 1.6 million metric tons of carbon abated annually
  • 67 power generators optimized
  • $60 million saved in about a year

Read more about the Vistra story here.

What is generative AI?

Generative AI (gen AI) is an AI model that generates content in response to a prompt. It’s clear that generative AI tools like ChatGPT and DALL-E (a tool for AI-generated art) have the potential to change how a range of jobs are performed. Much is still unknown about gen AI’s potential, but there are some questions we can answer—like how gen AI models are built, what kinds of problems they are best suited to solve, and how they fit into the broader category of AI and machine learning.

For more on generative AI and how it stands to affect business and society, check out our Explainer “What is generative AI?”

What is the history of AI?

The term “artificial intelligence” was coined in 1956 by computer scientist John McCarthy for a workshop at Dartmouth. But he wasn’t the first to write about the concepts we now describe as AI. Alan Turing introduced the concept of the “imitation game” in a 1950 paper. That’s the test of a machine’s ability to exhibit intelligent behavior, now known as the “Turing test.” He believed researchers should focus on areas that don’t require too much sensing and action, things like games and language translation. Research communities dedicated to concepts like computer vision, natural language understanding, and neural networks are, in many cases, several decades old.

MIT roboticist Rodney Brooks shared details on the four previous stages of AI:

Symbolic AI (1956). Symbolic AI is also known as classical AI, or even GOFAI (good old-fashioned AI). The key concept here is the use of symbols and logical reasoning to solve problems. For example, we know a German shepherd is a dog, which is a mammal; all mammals are warm-blooded; therefore, a German shepherd should be warm-blooded.
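As a rough illustration, here is a minimal forward-chaining sketch in Python of that same syllogism. Note that every fact and rule is hand-written, which is exactly the limitation described below.

```python
# A hand-encoded knowledge base: (subject, relation, object) triples.
facts = {("german_shepherd", "is_a", "dog")}
rules = [
    # if X is_a dog  -> X is_a mammal
    (("is_a", "dog"), ("is_a", "mammal")),
    # if X is_a mammal -> X is warm_blooded
    (("is_a", "mammal"), ("is", "warm_blooded")),
]

# Forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for subj, rel, obj in list(facts):
            if (rel, obj) == premise and (subj, *conclusion) not in facts:
                facts.add((subj, *conclusion))
                changed = True

print(("german_shepherd", "is", "warm_blooded") in facts)  # True
```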

The main problem with symbolic AI is that humans still need to manually encode their knowledge of the world into the symbolic AI system, rather than allowing it to observe and encode relationships on its own. As a result, symbolic AI systems struggle with situations involving real-world complexity. They also lack the ability to learn from large amounts of data.

Symbolic AI was the dominant paradigm of AI research until the late 1980s.

Neural networks (1954, 1969, 1986, 2012). Neural networks are the technology behind the recent explosive growth of gen AI. Loosely modeling the ways neurons interact in the human brain, neural networks ingest data and process it through multiple iterations that learn increasingly complex features of the data. The neural network can then make determinations about the data, learn whether a determination is correct, and use what it has learned to make determinations about new data. For example, once it “learns” what an object looks like, it can recognize the object in a new image.

Neural networks were first proposed in 1943 in an academic paper by neurophysiologist Warren McCulloch and logician Walter Pitts. Decades later, in 1969, two MIT researchers, Marvin Minsky and Seymour Papert, mathematically demonstrated that single-layer neural networks could perform only very basic tasks. In 1986, there was another reversal, when computer scientist and cognitive psychologist Geoffrey Hinton and colleagues solved the neural network problem presented by the MIT researchers. In the 1990s, computer scientist Yann LeCun made major advancements in neural networks’ use in computer vision, while Jürgen Schmidhuber advanced the application of recurrent neural networks as used in language processing.

In 2012, Hinton and two of his students highlighted the power of deep learning. They applied Hinton’s algorithm to neural networks with many more layers than was typical, sparking a new focus on deep neural networks. These have been the main AI approaches of recent years.

Traditional robotics (1968). During the first few decades of AI, researchers built robots to advance research. Some robots were mobile, moving around on wheels, while others were fixed, with articulated arms. Robots used the earliest attempts at computer vision to identify and navigate through their environments or to understand the geometry of objects and maneuver them. This could include moving around blocks of various shapes and colors. Most of these robots, just like the ones that have been used in factories for decades, rely on highly controlled environments with thoroughly scripted behaviors that they perform repeatedly. They have not contributed significantly to the advancement of AI itself.

But traditional robotics did have significant impact in one area, through a process called “simultaneous localization and mapping” (SLAM). SLAM algorithms helped contribute to self-driving cars and are used in consumer products like vacuum cleaning robots and quadcopter drones. Today, this work has evolved into behavior-based robotics.

  • Behavior-based robotics (1985). In the real world, there aren’t always clear instructions for navigation, decision making, or problem-solving. Insects, researchers observed, navigate very well (and are evolutionarily very successful) with few neurons. Behavior-based robotics researchers took inspiration from this, looking for ways robots could solve problems with partial knowledge and conflicting instructions. Some of these behavior-based robots incorporate neural networks.

Learn more about QuantumBlack, AI by McKinsey.

What is artificial general intelligence?

The term “artificial general intelligence” (AGI) was coined to describe AI systems that possess capabilities comparable to those of a human. In theory, AGI could someday replicate human-like cognitive abilities including reasoning, problem-solving, perception, learning, and language comprehension. But let’s not get ahead of ourselves: the key word here is “someday.” Most researchers and academics believe we are decades away from realizing AGI; some even predict we won’t see AGI this century, or ever. Rodney Brooks, an MIT roboticist and cofounder of iRobot, doesn’t believe AGI will arrive until the year 2300.

The timing of AGI’s emergence may be uncertain. But when it does emerge—and it likely will—it’s going to be a very big deal, in every aspect of our lives. Executives should begin working now to understand the path to machines achieving human-level intelligence and to prepare for the transition to a more automated world.

For more on AGI, including the four previous attempts at AGI, read our Explainer.

What is narrow AI?

Narrow AI is the application of AI techniques to a specific and well-defined problem, such as chatbots like ChatGPT, algorithms that spot fraud in credit card transactions, and natural-language-processing engines that quickly process thousands of legal documents. Most current AI applications fall into the category of narrow AI. AGI is, by contrast, AI that’s intelligent enough to perform a broad range of tasks.

How is the use of AI expanding?

AI is a big story for all kinds of businesses, but some companies are clearly moving ahead of the pack. Our state of AI in 2022 survey showed that adoption of AI models has more than doubled since 2017—and investment has increased apace. What’s more, the specific areas in which companies see value from AI have evolved, from manufacturing and risk to the following:

  • marketing and sales
  • product and service development
  • strategy and corporate finance

One group of companies is pulling ahead of its competitors. Leaders of these organizations consistently make larger investments in AI, level up their practices to scale faster, and hire and upskill the best AI talent. More specifically, they link AI strategy to business outcomes and “industrialize” AI operations by designing modular data architecture that can quickly accommodate new applications.

What are the limitations of AI models? How can these potentially be overcome?

We have yet to see the long-tail effects of gen AI models. This means there are some inherent risks involved in using them—both known and unknown.

The outputs gen AI models produce may often sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, sometimes it’s biased (because it’s built on the gender, racial, and other biases of the internet and society more generally).

It can also be manipulated to enable unethical or criminal activity. Since gen AI models burst onto the scene, organizations have become aware of users trying to “jailbreak” the models—that means trying to get them to break their own rules and deliver biased, harmful, misleading, or even illegal content. Gen AI organizations are responding to this threat in two ways: for one thing, they’re collecting feedback from users on inappropriate content. They’re also combing through their databases, identifying prompts that led to inappropriate content, and training the model against these types of generations.

But awareness and even action don’t guarantee that harmful content won’t slip the dragnet. Organizations that rely on gen AI models should be aware of the reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.

These risks can be mitigated, however, in a few ways. “Whenever you use a model,” says McKinsey partner Marie El Hoyek, “you need to be able to counter biases and instruct it not to use inappropriate or flawed sources, or things you don’t trust.” How? For one thing, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf gen AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases.

It’s also important to keep a human in the loop (that is, to make sure a real human checks the output of a gen AI model before it is published or used) and avoid using gen AI models for critical decisions, such as those involving significant resources or human welfare.

It can’t be emphasized enough that this is a new field. The landscape of risks and opportunities is likely to continue to change rapidly in the coming years. As gen AI becomes increasingly incorporated into business, society, and our personal lives, we can also expect a new regulatory climate to take shape. As organizations experiment—and create value—with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.

What is the AI Bill of Rights?

The Blueprint for an AI Bill of Rights, prepared by the US government in 2022, provides a framework for how government, technology companies, and citizens can collectively ensure more accountable AI. As AI has become more ubiquitous, concerns have surfaced about a potential lack of transparency surrounding the functioning of gen AI systems, the data used to train them, issues of bias and fairness, potential intellectual property infringements, privacy violations, and more. The Blueprint comprises five principles that the White House says should “guide the design, use, and deployment of automated systems to protect [users] in the age of artificial intelligence.” They are as follows:

  • The right to safe and effective systems. Systems should undergo predeployment testing, risk identification and mitigation, and ongoing monitoring to demonstrate that they are adhering to their intended use.
  • Protections against discrimination by algorithms. Algorithmic discrimination is when automated systems contribute to unjustified different treatment of people based on their race, color, ethnicity, sex, religion, age, and more.
  • Protections against abusive data practices, via built-in safeguards. Users should also have agency over how their data is used.
  • The right to know that an automated system is being used, and a clear explanation of how and why it contributes to outcomes that affect the user.
  • The right to opt out, and access to a human who can quickly consider and fix problems.

At present, more than 60 countries or blocs have national strategies governing the responsible use of AI (Exhibit 2). These include Brazil, China, the European Union, Singapore, South Korea, and the United States. The approaches taken vary from guidelines-based approaches, such as the Blueprint for an AI Bill of Rights in the United States, to comprehensive AI regulations that align with existing data protection and cybersecurity regulations, such as the EU’s AI Act, due in 2024.

There are also collaborative efforts between countries to set out standards for AI use. The US–EU Trade and Technology Council is working toward greater alignment between Europe and the United States. The Global Partnership on Artificial Intelligence, formed in 2020, has 29 members including Brazil, Canada, Japan, the United States, and several European countries.

Even though AI regulations are still being developed, organizations should act now to avoid legal, reputational, organizational, and financial risks. In an environment of public concern, a misstep could be costly. Here are six no-regrets, preemptive actions organizations can implement today:

  • Transparency. Create an inventory of models, classifying them in accordance with regulation, and record all usage across the organization in a way that is clear to those inside and outside the organization.
  • Governance. Implement a governance structure for AI and gen AI that ensures sufficient oversight, authority, and accountability both within the organization and with third parties and regulators.
  • Data management. Proper data management includes awareness of data sources, data classification, data quality and lineage, intellectual property, and privacy management.
  • Model management. Organizations should establish principles and guardrails for AI development and use them to ensure all AI models uphold fairness and bias controls.
  • Cybersecurity and technology management. Establish strong cybersecurity and technology to ensure a secure environment where unauthorized access or misuse is prevented.
  • Individual rights. Make users aware when they are interacting with an AI system, and provide clear instructions for use.

How can organizations scale up their AI efforts from ad hoc projects to full integration?

Most organizations are dipping a toe into the AI pool—not cannonballing. Slow progress toward widespread adoption is likely due to cultural and organizational barriers. But leaders who effectively break down these barriers will be best placed to capture the opportunities of the AI era. And—crucially—companies that can’t take full advantage of AI are already being sidelined by those that can, in industries like auto manufacturing and financial services.

To scale up AI, organizations can make three major shifts :

  • Move from siloed work to interdisciplinary collaboration. AI projects shouldn’t be limited to discrete pockets of organizations. Rather, AI has the biggest impact when it’s employed by cross-functional teams with a mix of skills and perspectives, enabling AI to address broad business priorities.
  • Empower frontline data-based decision making . AI has the potential to enable faster, better decisions at all levels of an organization. But for this to work, people at all levels need to trust the algorithms’ suggestions and feel empowered to make decisions. (Equally, people should be able to override the algorithm or make suggestions for improvement when necessary.)
  • Adopt and bolster an agile mindset. The agile test-and-learn mindset will help reframe mistakes as sources of discovery, allaying the fear of failure and speeding up development.

Learn more about QuantumBlack, AI by McKinsey, and check out AI-related job opportunities if you’re interested in working at McKinsey.

Articles referenced:

  • “As gen AI advances, regulators—and risk functions—rush to keep pace,” December 21, 2023, Andreas Kremer, Angela Luget, Daniel Mikkelsen, Henning Soller, Malin Strandell-Jansson, and Sheila Zingg
  • “What is generative AI?,” January 19, 2023
  • “Tech highlights from 2022—in eight charts,” December 22, 2022
  • “Generative AI is here: How tools like ChatGPT could change your business,” December 20, 2022, Michael Chui, Roger Roberts, and Lareina Yee
  • “The state of AI in 2022—and a half decade in review,” December 6, 2022, Michael Chui, Bryce Hall, Helen Mayhew, Alex Singla, and Alex Sukharevsky
  • “Why businesses need explainable AI—and how to deliver it,” September 29, 2022, Liz Grennan, Andreas Kremer, Alex Singla, and Peter Zipparo
  • “Why digital trust truly matters,” September 12, 2022, Jim Boehm, Liz Grennan, Alex Singla, and Kate Smaje
  • “McKinsey Technology Trends Outlook 2023,” July 20, 2023, Michael Chui, Mena Issler, Roger Roberts, and Lareina Yee
  • “An AI power play: Fueling the next wave of innovation in the energy sector,” May 12, 2022, Barry Boswell, Sean Buckley, Ben Elliott, Matias Melero, and Micah Smith
  • “Scaling AI like a tech native: The CEO’s role,” October 13, 2021, Jacomo Corbo, David Harvey, Nicolas Hohn, Kia Javanmardian, and Nayur Khan
  • “What the draft European Union AI regulations mean for business,” August 10, 2021, Misha Benjamin, Kevin Buehler, Rachel Dooley, and Peter Zipparo
  • “Winning with AI is a state of mind,” April 30, 2021, Thomas Meakin, Jeremy Palmer, Valentina Sartori, and Jamie Vickers
  • “Breaking through data-architecture gridlock to scale AI,” January 26, 2021, Sven Blumberg, Jorge Machado, Henning Soller, and Asin Tavakoli
  • “An executive’s guide to AI,” November 17, 2020, Michael Chui, Brian McCarthy, and Vishnu Kamalnath
  • “Executive’s guide to developing AI at scale,” October 28, 2020, Nayur Khan, Brian McCarthy, and Adi Pradhan
  • “An executive primer on artificial general intelligence,” April 29, 2020, Federico Berruti, Pieter Nel, and Rob Whiteman
  • “The analytics academy: Bridging the gap between human and artificial intelligence,” McKinsey Quarterly, September 25, 2019, Solly Brown, Darshit Gandhi, Louise Herring, and Ankur Puri

This article was updated in April 2024; it was originally published in April 2023.


Problem Space Search in Artificial Intelligence – Techniques, Challenges, and Applications

Artificial intelligence (AI) has revolutionized the way we tackle complex problems. By leveraging algorithms and computational power, AI seeks to find solutions in problem spaces that were once considered impossible to navigate. One of the key components of AI’s problem-solving capability is its ability to explore these problem spaces, uncovering hidden patterns and insights.

In order to effectively explore problem spaces, AI relies on intelligent search algorithms. These algorithms are designed to traverse the vast expanse of possibilities, evaluating different scenarios and deciding on the best course of action. By systematically searching through the problem space, AI can identify optimal solutions and make informed decisions based on the available data.

Exploring problem spaces is not a straightforward task. The problem space can be incredibly complex, with multiple dimensions and countless variables. AI must navigate through this complexity, making adjustments and adaptations along the way. This requires a combination of computational power, sophisticated algorithms, and strategic thinking.

Artificial intelligence is continuously advancing its ability to explore problem spaces. As researchers and developers continue to push the boundaries of AI technology, new techniques and approaches are being discovered. AI is becoming smarter, faster, and more efficient in its search for solutions to complex problems.

In conclusion, exploring problem spaces is an essential aspect of artificial intelligence’s search for solutions. By leveraging intelligent search algorithms, AI is able to traverse complex problem spaces, uncovering hidden insights and identifying optimal solutions. As AI technology continues to evolve, we can expect even greater advancements in the field of problem-solving, and the potential for AI to tackle increasingly challenging problems.

Understanding the Importance of Problem Spaces in AI

In the field of artificial intelligence, exploring problem spaces is a crucial aspect of finding effective solutions. Problem spaces refer to the set of all possible states, actions, and outcomes that an AI system can take into account when searching for a solution. By understanding the problem space, AI can effectively explore and evaluate various paths to solve a given problem.
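One common way to make a problem space concrete is to specify four things: the initial state, the actions available in each state, a transition model, and a goal test. The sketch below (in Python; the route-finding domain and all names are hypothetical) shows one such formulation that any search algorithm could work against.

```python
# A minimal, hypothetical problem-space formulation: states are towns,
# actions are roads out of a town, and the goal test checks for arrival.
ROADS = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

class RouteProblem:
    def __init__(self, initial, goal, roads=ROADS):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):
        return self.roads[state]        # which moves are possible here?

    def result(self, state, action):
        return action                   # taking a road lands in that town

    def is_goal(self, state):
        return state == self.goal

problem = RouteProblem(initial="A", goal="E")
print(problem.actions("A"))  # ['B', 'C']
```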

The Role of Problem Spaces in AI Search

Problem spaces provide a structured framework for AI algorithms to search for solutions. These algorithms can be designed to traverse through the problem space, considering different combinations of actions and evaluating their potential outcomes. By exploring various paths and evaluating their success, AI systems can identify the most promising solutions.

Without a clear understanding of the problem space, AI algorithms may struggle to effectively search for solutions. This is because they may overlook crucial actions or fail to consider all possible states and outcomes. By comprehensively exploring the problem space, AI systems can avoid such limitations and increase their chances of finding optimal solutions.

The Challenge of Problem Space Exploration

Exploring problem spaces can be challenging due to their complexity and size. Problem spaces can vary greatly depending on the nature of the problem and the specific AI algorithm being used. Some problem spaces may be relatively small and straightforward, while others can be vast and intricate.

To effectively explore problem spaces, AI algorithms must strike a balance between exhaustively evaluating all possible paths and avoiding unnecessary computations. This requires intelligent search strategies and heuristics that guide the exploration process towards the most promising areas of the problem space.

The importance of problem spaces in AI cannot be overstated. Understanding the problem space allows AI systems to navigate through the vast search space and find optimal solutions to complex problems. By developing intelligent search algorithms and strategies, AI researchers can continue to push the boundaries of problem-solving in artificial intelligence.

Exploring Different Problem Spaces in AI

Artificial intelligence (AI) is constantly evolving and improving, as it strives to find solutions for a wide range of problems. A key aspect of AI’s capabilities lies in its ability to explore different problem spaces.

In order to solve a problem, AI must first understand the problem space. The problem space is the set of all possible states that a problem can have. By exploring different problem spaces, AI can gain a deep understanding of the problem at hand and develop effective solutions.

The Role of Search in Exploring Problem Spaces

Search algorithms play a crucial role in AI’s exploration of problem spaces. These algorithms traverse through different states in the problem space, evaluating each state and determining the best path to reach the desired solution.

AI’s search for solutions involves evaluating various possibilities and making informed decisions about the best course of action. By systematically exploring the problem space, AI can effectively navigate through the complexity of different problem spaces to find optimal solutions.

Exploring Different Spaces for Problem Solving

AI has the ability to explore problem spaces across various domains. Whether it’s in healthcare, finance, transportation, or any other industry, AI can adapt to different problem spaces and find innovative solutions.

For example, in healthcare, AI can explore problem spaces related to disease diagnosis, treatment planning, and drug discovery. By analyzing vast amounts of medical data and exploring different problem spaces, AI can provide accurate diagnoses and personalized treatment plans.

In finance, AI can explore problem spaces related to fraud detection, risk assessment, and investment strategies. By analyzing financial data and exploring different problem spaces, AI can identify anomalies, mitigate risks, and optimize investment decisions.

| Domain | Problem space |
| --- | --- |
| Healthcare | Disease diagnosis, treatment planning, drug discovery |
| Finance | Fraud detection, risk assessment, investment strategies |
| Transportation | Route optimization, traffic management, autonomous vehicles |
By exploring different problem spaces, AI can drive innovation and bring about transformative change in various industries. The unlimited potential of AI’s problem-solving abilities makes it an invaluable tool in today’s rapidly advancing world.

The Role of Artificial Intelligence in Problem Solving

Artificial Intelligence (AI) plays a crucial role in problem solving by using various search algorithms to explore problem spaces and find optimal solutions. AI algorithms are designed to mimic human intelligence and solve complex problems by implementing computational processes.

Problem spaces refer to the set of all possible states or configurations that a problem can have. AI algorithms traverse these spaces using a combination of heuristics, knowledge representation, and search strategies to find the most favorable solution.

Exploring Problem Spaces

AI employs search algorithms to explore problem spaces in order to find solutions. By iteratively examining different states or configurations, AI algorithms can navigate through large and complex problem spaces to find the best possible outcome.

Search algorithms in AI include techniques such as depth-first search, breadth-first search, and heuristic search. These algorithms evaluate and compare different paths or options based on predefined criteria to determine the most promising direction.

The Role of Artificial Intelligence

The role of artificial intelligence in problem solving is to assist humans and automate tedious processes. AI can analyze vast amounts of data and provide insights that aid in decision-making. It can also optimize processes and improve resource allocation by finding the most efficient solutions to complex problems.

Furthermore, AI algorithms can learn from previous problem-solving experiences and continuously improve their performance. Machine learning techniques enable AI systems to adapt and optimize their strategies based on feedback and new information.

In summary, artificial intelligence plays a vital role in problem solving by exploring problem spaces and finding optimal solutions. Through various search algorithms and computational processes, AI can assist humans, automate processes, and provide valuable insights for decision-making.

Search Methods Used in Artificial Intelligence

Artificial intelligence (AI) is constantly exploring problem spaces in search of solutions. In order to solve complex problems, AI uses a variety of search methods to navigate through the problem space.

Depth-First Search

One common search method used in AI is depth-first search. This method explores a problem by continuously moving forward along a single path until it reaches a dead end, at which point it backtracks and explores other paths. Because it keeps only the current path in memory, depth-first search remains economical even in very large problem spaces, though the first solution it finds is not necessarily the shortest.
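A minimal sketch of depth-first search over a toy graph (the graph data here is hypothetical) might look like this:

```python
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

def depth_first_search(graph, start, goal):
    frontier = [(start, [start])]      # a stack of (state, path so far)
    visited = set()
    while frontier:
        state, path = frontier.pop()   # LIFO: expand the deepest node first
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for neighbor in graph[state]:
            frontier.append((neighbor, path + [neighbor]))
    return None                        # every path was a dead end

print(depth_first_search(GRAPH, "A", "E"))  # e.g. ['A', 'C', 'D', 'E']
```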

Breadth-First Search

Another search method used in AI is breadth-first search. This method explores a problem level by level, systematically examining every state at one depth of the problem space before moving deeper. Because of this, breadth-first search is guaranteed to find the path with the fewest steps between two points whenever every action has the same cost.
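A minimal breadth-first sketch over the same kind of toy graph shows the difference: swapping the stack for a queue changes the exploration order.

```python
from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

def breadth_first_search(graph, start, goal):
    frontier = deque([(start, [start])])   # a FIFO queue of (state, path)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()   # expand the shallowest node first
        if state == goal:
            return path
        for neighbor in graph[state]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, path + [neighbor]))
    return None

print(breadth_first_search(GRAPH, "A", "E"))  # ['A', 'B', 'D', 'E'], the fewest steps
```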

In addition to depth-first search and breadth-first search, AI also uses other search methods such as heuristic search, informed search, and local search. These methods use various algorithms and heuristics to guide the search towards the most optimal solution.

Overall, the search methods used in artificial intelligence play a crucial role in exploring problem spaces and finding solutions. By leveraging these search methods, AI can efficiently navigate through complex problem spaces and provide intelligent solutions to a wide range of problems.

Exploring Problem Spaces with Machine Learning

Machine learning has revolutionized the field of artificial intelligence (AI) by providing powerful tools and techniques for solving complex problems. One of the key aspects of AI is the ability to explore and search through problem spaces in order to find optimal solutions.

Problem spaces are the set of all possible states and actions that an AI system can encounter while attempting to solve a problem. By exploring these problem spaces, AI systems can navigate through different potential solutions and determine the most promising ones.

Incorporating Intelligence into Problem Exploration

In order to effectively explore problem spaces, AI systems utilize various machine learning algorithms and techniques. These algorithms analyze and process large amounts of data to identify patterns, make predictions, and optimize decision-making.

For example, reinforcement learning is a type of machine learning that enables AI systems to learn through trial and error. By constantly exploring different actions and evaluating their outcomes, AI systems can gradually improve their decision-making and problem-solving abilities.
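A minimal tabular Q-learning sketch makes this trial-and-error loop concrete. The five-cell corridor environment and every constant below are illustrative choices, not tuned values: the agent starts at the left end, is rewarded only at the right end, and gradually learns that moving right is the better action everywhere.

```python
import random

N_STATES, ACTIONS = 5, [-1, +1]            # cells 0..4; move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1      # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # explore occasionally; otherwise exploit the current value estimates
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # update the estimate from the observed outcome (trial and error)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# the learned policy: the preferred action in each non-terminal cell
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])  # [1, 1, 1, 1]
```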

Artificial intelligence plays a crucial role in solving complex problems by using its ability to explore problem spaces. It can quickly analyze vast amounts of data, uncover hidden patterns and relationships, and generate insights that human analysts may overlook.

Through machine learning algorithms, artificial intelligence can also adapt and optimize its strategies based on new information and changing problem conditions. This adaptive capability allows AI systems to continuously improve their performance and find innovative solutions to challenging problems.

In conclusion, exploring problem spaces with machine learning is a powerful approach that allows artificial intelligence to efficiently analyze and solve complex problems. By leveraging its ability to search through problem spaces, AI systems can uncover optimal solutions and drive advancements in various fields.

Cognitive Approaches to Problem Solving in AI

In the field of artificial intelligence (AI), problem solving plays a crucial role in exploring the problem space and finding solutions. Cognitive approaches to problem solving involve mimicking the human thought process to tackle complex problems.

Understanding Problem Spaces

Problem spaces in AI refer to the set of possible states, actions, and constraints that define a particular problem. Exploring these problem spaces is a fundamental task for AI systems as they search for solutions.

By applying cognitive approaches to problem solving, AI systems can navigate and map problem spaces more efficiently. This involves using cognitive algorithms that simulate human reasoning, learning, and decision-making processes.

Searching for Solutions

In AI, search algorithms are employed to explore problem spaces and find optimal or satisfactory solutions. Cognitive approaches to search involve reasoning, learning, and adapting strategies based on observed and learned information.

These approaches aim to optimize the search process by leveraging knowledge representation, semantic understanding, and pattern recognition. AI systems can use various search algorithms, such as depth-first search or breadth-first search, depending on the characteristics of the problem space and the specific requirements of the task at hand.

Furthermore, cognitive approaches to problem solving in AI also involve the use of heuristics. Heuristics act as shortcuts or rules of thumb that guide the search process towards promising areas of the problem space.

By combining cognitive approaches with advanced algorithms and computational power, AI systems can efficiently explore problem spaces and find solutions to complex problems. These cognitive approaches not only enhance the problem-solving capabilities of AI but also pave the way for more human-like intelligence in artificial systems.

Deep Learning Techniques for Exploring Problem Spaces

In the field of artificial intelligence (AI), problem solving and search algorithms are critical for exploring the vast problem spaces that exist. Deep learning techniques have emerged as a powerful tool for tackling these complex problem spaces and finding innovative solutions.

Artificial intelligence (AI) is the study and development of computer systems that can perform tasks that would typically require human intelligence. One of the key challenges in AI is solving problems in a wide variety of domains, ranging from natural language processing to computer vision.

Exploring a problem space involves systematically searching through the various possible solutions to find an optimal solution. The problem space is defined by the set of possible states and actions that can be taken to move from one state to another. For example, in a game of chess, the problem space consists of all possible board configurations and possible moves that can be made.

Deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have revolutionized the field of AI by enabling computers to learn from large amounts of data. These techniques have been successfully applied to a wide range of problem spaces, including image recognition, natural language processing, and speech recognition.

By training deep neural networks on large datasets, researchers can teach computers to recognize patterns and make predictions based on the data they have seen. This allows AI systems to explore problem spaces more effectively, as they can quickly evaluate and prioritize potential solutions based on learned patterns.

In addition to deep learning techniques, other AI algorithms, such as reinforcement learning and genetic algorithms, can also be used to explore problem spaces. Reinforcement learning involves training an AI agent to interact with an environment and learn through trial and error. Genetic algorithms use principles inspired by natural evolution to search for solutions by iteratively generating and evaluating candidate solutions.

Overall, deep learning techniques have revolutionized the field of AI by enabling computers to explore and solve complex problem spaces. By training neural networks on large datasets, AI systems can learn to recognize patterns and make accurate predictions. This allows them to effectively search and prioritize potential solutions, leading to innovative breakthroughs in a wide range of domains.

The Impact of Natural Language Processing on Problem Solving in AI

Artificial intelligence (AI) is continually exploring new problem spaces and striving to find innovative solutions. One of the key areas where AI has made significant advancements is in natural language processing (NLP). NLP is the field of AI that focuses on the interaction between computers and human language.

NLP is instrumental in problem solving because it enables AI systems to understand and interpret human language, allowing them to effectively analyze and process textual data. This has opened up numerous possibilities for AI in problem solving and decision making.

With NLP, AI systems can analyze large volumes of text data, such as scientific papers, news articles, or social media posts, to extract relevant information and insights. This information can then be used to identify patterns, trends, or anomalies, which can aid in problem solving and decision making.

NLP has also greatly improved the interaction between humans and AI systems. Through the use of techniques such as sentiment analysis and text summarization, AI systems can understand the emotions behind text or condense large amounts of information into concise summaries. This enhances communication and collaboration between humans and AI, enabling more effective problem solving.

Furthermore, NLP has played a crucial role in bridging the language gap between humans and machines. Translation algorithms powered by NLP have made it possible for AI systems to understand and generate text in different languages. This has broadened the reach of AI and made it accessible to a global audience, fostering cross-cultural problem solving and collaboration.

In conclusion, natural language processing has had a profound impact on problem solving in the field of artificial intelligence. By enabling AI systems to understand and process human language, NLP has revolutionized the way AI interacts with text data and humans. This has led to advancements in problem solving capabilities and has opened up new avenues for AI research and applications. As NLP continues to evolve, we can expect further enhancements to AI’s problem solving abilities and the exploration of even more problem spaces.

Using Heuristics to Navigate Problem Spaces in AI

In the field of artificial intelligence, search is a crucial component in solving complex problems. Problem spaces refer to the set of all possible states and actions that can be explored in order to reach a solution. With the vastness of these spaces, it can be challenging for AI to efficiently find a solution. This is where heuristics come into play.

Heuristics are problem-solving techniques that provide AI with a guide or rule of thumb to explore the problem space. They help narrow down the search and focus AI’s efforts on the most promising paths. These heuristics are often formulated based on previous knowledge and understanding of the problem, allowing AI to make informed decisions.

Using heuristics, AI can prioritize certain actions or states that have a higher likelihood of leading to a solution. This helps AI avoid wasting time and resources on less fruitful paths. Heuristics can take various forms, such as rules, patterns, or mathematical formulas, depending on the problem at hand.
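For instance, on a grid, Manhattan distance is a frequently used heuristic: it does not give the exact remaining cost, but it cheaply ranks candidate states so the search tries the most promising one first. A minimal sketch:

```python
# Manhattan distance: a cheap estimate of how far a grid state is from the goal.
def manhattan(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

candidates = [(0, 4), (3, 3), (5, 1)]
goal = (5, 5)
# Explore whichever candidate the heuristic says looks closest to the goal.
print(sorted(candidates, key=lambda s: manhattan(s, goal)))  # [(3, 3), (5, 1), (0, 4)]
```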

A key advantage of using heuristics is their ability to allow AI to make intelligent choices when faced with uncertainty. They can help AI estimate the potential value of different actions or states, even if the outcome is not known with certainty. By using heuristics, AI can focus its search on the most promising areas, increasing the efficiency and effectiveness of problem solving.

However, it’s important to note that heuristics are not foolproof. They are based on assumptions and simplifications of the problem, which may not always hold true. Additionally, heuristics can introduce biases or limitations in AI’s search. Therefore, it is crucial to carefully design and validate heuristics to ensure their effectiveness and reliability.

In conclusion, heuristics play a vital role in helping AI navigate problem spaces. They provide AI with a guide to explore the problem space efficiently, prioritize promising paths, and make intelligent choices. However, their design and implementation require careful consideration to ensure their effectiveness and reliability in artificial intelligence systems.

Exploring Problem Spaces in Autonomous Systems

As artificial intelligence continues to advance, one of the key challenges is how to effectively explore problem spaces. In the context of autonomous systems, exploring problem spaces refers to the process by which AI searches for solutions to complex problems. This exploration is crucial for enabling the AI system to make informed decisions and take appropriate actions.

Exploring problem spaces requires the AI system to have a comprehensive understanding of the problem at hand, as well as the potential solutions that may exist within the problem space. The AI system uses various search techniques to navigate through the problem space and evaluate different options. These techniques include heuristics, optimization algorithms, and machine learning models.

The Role of Artificial Intelligence in Exploring Problem Spaces

Artificial intelligence plays a vital role in exploring problem spaces within autonomous systems. By leveraging its ability to process vast amounts of data and analyze complex patterns, AI can effectively navigate through problem spaces and identify optimal solutions. AI algorithms can take into account various factors, such as cost, time constraints, and resource availability, to identify the most efficient and effective solution.

Furthermore, AI can adapt and learn from its previous experiences in exploring problem spaces. Through a process called reinforcement learning, autonomous systems can improve their decision-making capabilities over time. This allows the AI system to continuously refine its exploration techniques and achieve better performance in solving complex problems.

The Challenges of Exploring Problem Spaces

While exploring problem spaces is crucial, it also presents several challenges. One of the main challenges is the vastness of problem spaces. Some problem spaces can be exponentially large, making it difficult for AI systems to explore all possible solutions. In such cases, AI algorithms must prioritize and focus on the most relevant and promising areas within the problem space.

Another challenge is the presence of uncertainty and ambiguity within problem spaces. The AI system must be able to handle incomplete information and make decisions based on partial knowledge. This requires the AI system to employ techniques such as probabilistic reasoning and uncertainty estimation to effectively explore problem spaces.

In conclusion, exploring problem spaces is a fundamental aspect of artificial intelligence in autonomous systems. By employing advanced search techniques and leveraging its processing capabilities, AI can effectively navigate through problem spaces and identify optimal solutions. However, challenges such as the vastness and uncertainty of problem spaces must be overcome to ensure the success of AI systems in solving complex problems.

Advancements in Search Algorithms for Problem Solving

In the field of artificial intelligence (AI), one of the most important tasks is solving problems. Whether it’s finding the shortest path between two points, optimizing a complex system, or making predictions based on data, AI relies heavily on search algorithms to explore the problem space and find solutions.

A problem space is the set of all possible states and actions that can be taken to solve a given problem. In the context of AI, problem spaces can be incredibly large and complex, requiring efficient algorithms to search through them effectively. Over the years, there have been significant advancements in search algorithms that have greatly improved problem-solving capabilities.

Advancements in Search Algorithms

One major advancement in search algorithms is the development of heuristic-based approaches. These algorithms use heuristics, which are rules or guidelines that help in estimating the quality of a potential solution without exhaustively exploring all possibilities. By incorporating heuristics, search algorithms can make more informed decisions on which paths to explore, leading to faster and more efficient problem-solving.

Another significant advancement is the introduction of metaheuristic algorithms. These algorithms are designed to solve optimization problems by incorporating techniques inspired by natural phenomena such as genetic algorithms, simulated annealing, and particle swarm optimization. Metaheuristic algorithms have the advantage of being able to find near-optimal solutions in large problem spaces with complex constraints.
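As one concrete example of this family, here is a minimal simulated-annealing sketch on a toy one-dimensional objective. Every constant is an illustrative choice: the slowly dropping “temperature” lets the search occasionally accept worse moves early on, which is what helps it escape local optima in harder landscapes.

```python
import math
import random

def f(x):
    return (x - 3) ** 2        # toy objective to minimize; optimum at x = 3

random.seed(1)
x, temperature = 0.0, 10.0
while temperature > 1e-3:
    candidate = x + random.uniform(-1, 1)      # a small random perturbation
    delta = f(candidate) - f(x)
    # always accept improvements; accept worse moves with decaying probability
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99                        # cool down gradually

print(f"found x = {x:.2f} (true optimum is 3)")
```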

Furthermore, machine learning techniques have been used to train search algorithms to make smarter decisions. By analyzing large amounts of data and learning patterns, these algorithms can adapt and improve their search strategies over time. This allows them to become more effective in finding solutions to complex problems, especially when faced with uncertain or incomplete information.

  • One popular machine learning technique used in search algorithms is reinforcement learning. In reinforcement learning, an agent learns to navigate a problem space by interacting with it and receiving feedback in the form of rewards or penalties. The agent’s objective is to maximize its cumulative reward, which motivates it to explore and find optimal solutions.
  • Another technique is evolutionary algorithms, where a population of potential solutions evolves over time through selection, replication, and variation. With each generation, the algorithms improve the fitness of the population, gradually converging towards better solutions.

Overall, advancements in search algorithms for problem solving have significantly enhanced the capabilities of artificial intelligence. From heuristic-based approaches to metaheuristics and machine learning techniques, these algorithms have allowed AI systems to tackle increasingly complex problems and find optimal or near-optimal solutions in large problem spaces. As technology continues to evolve, it is likely that we will see further advancements in search algorithms, pushing the boundaries of problem-solving capabilities in artificial intelligence.

Understanding the Limitations of Problem Solving in AI

In the field of artificial intelligence (AI), problem solving is a crucial area of research. AI aims to develop systems that can solve complex problems by searching through problem spaces. However, despite the advancements in AI, there are still limitations to problem solving in this field.

One major limitation is the vastness of problem spaces. Problem spaces can be incredibly large and complex, making it difficult for AI systems to search through all possible solutions. This is particularly challenging when faced with problems that have a high number of variables or dependencies.

Another limitation is the reliance on predefined search algorithms. AI systems utilize various search algorithms to explore problem spaces and find potential solutions. However, these algorithms are often limited in their ability to handle certain types of problems. For example, some algorithms may struggle with problems that involve uncertain or incomplete information.

The lack of creativity is another significant limitation in AI problem solving. While AI systems can excel at finding optimal solutions within a given problem space, they often lack the ability to think outside the box and come up with innovative solutions. This limits their effectiveness in solving problems that require novel approaches or unconventional thinking.

Furthermore, the computational requirements for solving complex problems can be a significant limitation. Problem solving in AI often requires substantial computational resources, including processing power and memory. As problems become more complex, the computational demands increase, making it challenging to solve problems efficiently.

In conclusion, while artificial intelligence has made significant progress in problem solving, there are still limitations that need to be addressed. The vastness of problem spaces, reliance on predefined algorithms, lack of creativity, and computational requirements all pose challenges for AI systems. Overcoming these limitations will be crucial for advancing the field of artificial intelligence and improving problem-solving capabilities.

The Future of Exploring Problem Spaces in AI

As artificial intelligence (AI) continues to advance and evolve, so too does its ability to search and explore problem spaces. In the future, we can expect AI to become even more adept at searching for solutions to complex problems.

One area where AI is already making significant advancements is in the search for optimal solutions. By using advanced algorithms and processing power, AI is able to analyze vast amounts of data and quickly identify the most efficient paths to a solution. This ability to explore problem spaces in a systematic and efficient manner has the potential to revolutionize a wide range of industries.

Another area where the future of AI exploration is promising is in the ability to search for solutions in complex and unstructured problem spaces. Traditionally, AI has excelled at solving well-defined problems with clear rules and parameters. However, as AI continues to develop, it is becoming increasingly capable of handling more ambiguous and open-ended problems. This opens up a whole new realm of possibilities for AI to explore and find innovative solutions.

Furthermore, AI is also poised to become a powerful tool for exploring problem spaces in collaborative environments. As AI becomes more intelligent and capable of understanding human language and context, it will be able to work seamlessly alongside humans to help explore problem spaces and find solutions. This collaborative approach has the potential to greatly enhance creativity and problem-solving capabilities.

Overall, the future of exploring problem spaces in AI is filled with exciting possibilities. As AI continues to advance and evolve, we can expect it to become even more skilled at searching for solutions in a wide range of problem spaces. Whether it be optimizing complex systems, finding innovative solutions to unstructured problems, or working collaboratively with humans, AI will undoubtedly play a vital role in shaping the future of problem-solving.

Exploring Problem Spaces in Robotics and AI

Artificial intelligence and robotics are transforming the world around us, solving complex problems and automating tasks that were once thought to be impossible. The key to their success lies in their ability to search and explore problem spaces, finding optimal solutions to a wide range of challenges.

Problem spaces refer to the set of possible states or configurations a system can have, along with the actions that can be taken to transition between these states. In the context of robotics and AI, problem spaces can be vast and multidimensional, making the search for solutions a challenging task.

Artificial intelligence algorithms utilize various search techniques to navigate problem spaces efficiently. These techniques can be categorized into uninformed search and informed search. Uninformed search algorithms, like breadth-first search and depth-first search, explore the problem space without any prior knowledge or heuristics. In contrast, informed search algorithms, such as A* search and heuristic search, leverage heuristics and domain-specific knowledge to guide the search towards more promising areas of the problem space.
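A minimal A* sketch makes the contrast concrete: each node’s priority combines the cost paid so far with a heuristic estimate of the cost remaining, so the search is pulled toward the goal. The weighted graph and heuristic values below are hypothetical; because the heuristic never overestimates, the first path found to the goal is a cheapest one.

```python
import heapq

GRAPH = {"A": [("B", 1), ("C", 4)], "B": [("D", 5)], "C": [("D", 1)], "D": []}
H = {"A": 3, "B": 4, "C": 1, "D": 0}   # admissible estimates of cost to reach D

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (g + h, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbor, cost in graph[state]:
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbor], new_g, neighbor, path + [neighbor]),
                )
    return None, float("inf")

print(a_star(GRAPH, H, "A", "D"))  # (['A', 'C', 'D'], 5): cheaper than A -> B -> D
```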

Exploring problem spaces in robotics and AI involves a combination of algorithmic design, computational efficiency, and problem modeling. Researchers and practitioners must carefully define the problem space and develop appropriate search algorithms that can efficiently find solutions. This process often requires a deep understanding of the domain and the ability to balance exploration and exploitation in search strategies.

Moreover, the exploration of problem spaces in robotics and AI is not limited to a single search algorithm. Instead, it is a continual process of refining and improving the search techniques to adapt to new challenges and domains. As problem spaces evolve, new algorithms and approaches are developed to address the unique complexities and constraints of each problem.


In conclusion, exploring problem spaces in robotics and AI is a crucial component in solving complex challenges. It requires a combination of AI algorithms, computational efficiency, and problem modeling to navigate vast and multidimensional problem spaces. By continuously refining and improving search techniques, researchers and practitioners can push the boundaries of what is possible and drive innovation in artificial intelligence and robotics.

The Role of Genetic Algorithms in Exploring Problem Spaces

In the field of artificial intelligence (AI), exploring problem spaces is a crucial aspect of finding solutions. AI systems are designed to exhibit intelligence and solve complex problems. One way AI achieves this is through the use of genetic algorithms (GA).

A genetic algorithm is a search and optimization technique inspired by the process of natural selection. It mimics the principles of genetics and evolutionary biology to find the best solution to a given problem. Genetic algorithms start with an initial population of candidate solutions and iteratively improve them through genetic operations such as selection, crossover, and mutation.

The role of genetic algorithms in exploring problem spaces is to efficiently navigate the vast solution space and converge towards optimal or near-optimal solutions. They are particularly useful in situations where the problem is too complex or lacks a clear mathematical representation. Genetic algorithms can handle problems with multiple variables, constraints, and objectives, making them suitable for a wide range of applications.

In the search for solutions, genetic algorithms explore the problem space by generating new candidate solutions through genetic operations. These operations simulate the natural processes of reproduction, recombination, and mutation, allowing the algorithm to explore different regions of the solution space. By evaluating the fitness of these candidate solutions, genetic algorithms can identify promising areas and focus their search accordingly.

Genetic algorithms excel at solving optimization problems that involve combinatorial or continuous variables. They have been successfully applied in various fields such as engineering design, scheduling, finance, and artificial intelligence itself. By exploring problem spaces, genetic algorithms enable AI systems to find innovative solutions and make informed decisions.


In conclusion, genetic algorithms play a crucial role in exploring problem spaces in artificial intelligence. They offer a powerful approach to search and optimization, enabling AI systems to find innovative solutions to complex problems. By simulating the principles of natural selection, genetic algorithms effectively navigate the solution space and converge towards optimal or near-optimal solutions. Through their ability to handle multiple variables, constraints, and objectives, genetic algorithms have become a vital tool in solving real-world problems across various domains.

Using Neural Networks for Problem Solving in AI

Artificial intelligence (AI) has revolutionized the way we approach problem solving. By leveraging advanced algorithms and deep learning techniques, AI systems are now capable of solving complex problems in various domains.

One of the key components of AI problem solving is the use of neural networks. Neural networks are computing systems inspired by the human brain, composed of interconnected nodes or “neurons”. These networks are capable of learning from data and making predictions or decisions based on that learning.

Neural networks are particularly useful in the search space of problem solving. They excel at processing large amounts of data and identifying patterns and relationships within that data. By training a neural network on a specific problem domain, AI systems can effectively search through the problem space and find optimal solutions.
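
The sketch below, a deliberately tiny NumPy network trained on XOR, shows the core learning loop (forward pass, error gradient, weight update) that underlies far larger systems; the architecture and learning rate are arbitrary demonstration values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```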

For example, in a game-playing AI, a neural network can be trained on a dataset of previous gameplay and outcomes. The network learns the rules and strategies of the game and can then use that knowledge to search for the best moves in a given game state. The neural network’s ability to process and analyze a vast amount of data allows it to make informed decisions and find the optimal solution.

Neural networks can also be used in AI systems that aim to solve real-world problems. For instance, in image recognition tasks, a neural network can be trained on a large dataset of labeled images. The network learns to identify and categorize objects in images, enabling it to efficiently search for and recognize objects in new, unseen images.

In conclusion, neural networks are a powerful tool in the problem-solving capabilities of artificial intelligence. Through their ability to process and analyze data, these networks can effectively search through problem spaces, finding optimal solutions in various domains. As AI continues to evolve, neural networks will play an increasingly important role in enabling AI systems to solve complex problems and enhance human decision-making.

Exploring Problem Spaces in Computer Vision and AI

Artificial intelligence (AI) is revolutionizing the way we approach problem solving across various domains. One area where AI is making significant strides is computer vision. Computer vision involves the development of algorithms that enable computers to interpret and understand visual information, allowing them to analyze images and videos.

AI in computer vision is all about searching for solutions to complex problems related to visual data. The problem space in computer vision includes tasks such as object recognition, image classification, and image segmentation. These tasks involve the exploration of different approaches and techniques to achieve accurate and efficient results.

Exploring problem spaces in computer vision and AI involves experimenting with different algorithms, models, and data processing techniques. This exploration helps to identify the most effective methods for solving specific problems. Researchers and engineers approach problem spaces with the goal of finding innovative solutions that can push the boundaries of what is currently possible in computer vision and AI.

By exploring problem spaces, researchers and engineers can uncover new insights and approaches that can lead to breakthroughs in computer vision. This exploration allows them to discover hidden patterns, optimize algorithms, and develop more efficient and accurate models.

Furthermore, exploring problem spaces in computer vision and AI enables the development of robust and generalized solutions. By examining a wide range of scenarios and variations within the problem space, researchers can create models and algorithms that can handle diverse and challenging real-world situations.

Overall, exploring problem spaces in computer vision and AI is essential for advancing the field and unlocking the full potential of artificial intelligence. It allows researchers and engineers to continuously improve existing solutions and develop new ones, ultimately driving progress in computer vision and its applications.

Quantum Computing and its Potential for Problem Solving in AI

Exploring the problem space is a crucial part of developing AI intelligence. Artificial intelligence is constantly searching for new solutions to the problems it encounters. One area that holds significant promise for problem solving in AI is quantum computing.

Quantum computing, a field that combines principles from physics and computer science, has the potential to revolutionize the way we solve complex problems. Unlike classical computers that use binary bits to represent information, quantum computers use quantum bits, or qubits, which can exist in a superposition of states. For certain classes of problems, this lets quantum computers carry out computations that would be intractable for classical machines.
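
The superposition idea can be illustrated with a few lines of NumPy that simulate a single qubit's state vector. This is ordinary linear algebra on a classical machine, not real quantum hardware, but the arithmetic is the same:

```python
import numpy as np

# One qubit starts in |0>; a Hadamard gate puts it in an equal superposition.
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = H @ ket0             # amplitudes (1/sqrt(2), 1/sqrt(2))
probs = np.abs(state) ** 2   # Born rule: measurement probabilities
print(probs)                 # [0.5 0.5] -- both outcomes equally likely

# Two Hadamards in a row interfere and restore |0> with certainty,
# something no classical coin-flip model can reproduce.
print(np.abs(H @ state) ** 2)  # [1. 0.]
```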

In the context of AI, quantum computing could greatly enhance the capabilities of machine learning algorithms. Machine learning algorithms typically rely on large quantities of data and complex computations to train models and make predictions. Quantum computers could drastically improve the speed and efficiency of these computations, allowing AI systems to process and analyze data more effectively.

Furthermore, quantum computing could enable AI systems to explore larger problem spaces and find optimal solutions more efficiently. Many real-world problems, such as optimization and pattern recognition, have a vast number of possible solutions that need to be explored. Classical computers often struggle to examine all possible solutions due to limitations in processing power. Quantum computers, which can represent many candidate states at once in superposition, could ease these limitations and offer new insights into complex problem spaces.

However, there are still significant challenges to overcome in the development of quantum computing for AI. Quantum computers are extremely sensitive to noise and errors, and maintaining the delicate quantum state of qubits is a major technical hurdle. Additionally, designing algorithms that can effectively leverage the parallel processing capabilities of quantum computers is an ongoing area of research.

Despite these challenges, the potential of quantum computing for problem solving in AI is immense. As research and development in quantum computing continue to advance, we can expect to see exciting advancements in the field of artificial intelligence. With its ability to explore problem spaces more efficiently and find optimal solutions, quantum computing holds the key to unlocking new possibilities in AI.

Exploring Problem Spaces in Natural Language Understanding

In the field of artificial intelligence (AI), exploring problem spaces is essential for developing effective solutions. One area where this exploration is crucial is in natural language understanding (NLU). NLU involves the ability of AI systems to comprehend and interpret human language, enabling communication and interaction between humans and machines.

The Challenge of NLU

Natural language is incredibly diverse and complex, with many nuances, idioms, and variations. This poses a significant challenge for AI systems, as they need to understand the meaning and intent behind human speech or written text. The problem space of NLU encompasses a wide range of linguistic elements, such as syntax, semantics, pragmatics, and discourse analysis.

AI Search in NLU

To overcome the challenge of natural language understanding, AI systems utilize various search algorithms to explore the problem space. These algorithms allow the AI system to search through a vast number of possible solutions, evaluating each one based on predefined criteria. Through this search process, the AI system can identify the most suitable solution that accurately captures the meaning and intent of the input text.

AI search in NLU involves analyzing the structure of language, extracting relevant information, and applying advanced techniques such as machine learning and deep learning. By exploring different problem spaces and iterating through the search process, AI systems can continually improve their understanding and interpretation of natural language.
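
As a toy illustration of searching candidate interpretations, the sketch below scores a user utterance against a few hypothetical intents using bag-of-words cosine similarity. Production NLU relies on learned models rather than hand-written keyword lists, but the search-and-score shape is similar:

```python
from collections import Counter
from math import sqrt

# Hypothetical intents for a toy assistant; the keyword lists are invented.
INTENTS = {
    "weather":  "what is the weather forecast today rain sunny temperature",
    "booking":  "book reserve a table flight hotel room for tonight",
    "greeting": "hello hi hey good morning evening",
}

def cosine(a, b):
    # Similarity between two word-count vectors
    overlap = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

def classify(utterance):
    words = Counter(utterance.lower().split())
    # Search the space of candidate interpretations, scoring each one
    return max(INTENTS, key=lambda i: cosine(words, Counter(INTENTS[i].split())))

print(classify("hi there good evening"))  # greeting
print(classify("will it rain today"))     # weather
```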

Overall, exploring problem spaces in natural language understanding is a fundamental aspect of AI research. By refining the search algorithms and advancing the techniques used in NLU, AI systems can achieve higher levels of accuracy and effectiveness in understanding and processing human language.

The Role of Reinforcement Learning in Problem Solving

Reinforcement learning is a fundamental concept in the field of artificial intelligence (AI) that plays a crucial role in problem solving. It is a type of machine learning in which an agent explores a problem space, learning from rewards and penalties to find optimal solutions.

Exploring problem spaces is a common task in AI, where the goal is to find the best solution among many possible options. Reinforcement learning provides a method for agents to continuously learn from their environment and make decisions based on the rewards or punishments they receive.

How Does Reinforcement Learning Work?

In reinforcement learning, an agent interacts with its environment and takes actions based on its current state. The environment then provides feedback in the form of rewards or penalties, depending on the outcome of the agent’s action. The agent’s objective is to maximize the accumulated rewards over time.

Through exploration and exploitation, the agent learns which actions lead to higher rewards and gradually improves its decision-making process. This learning process involves trial and error, as the agent tries different actions and observes the resulting rewards.
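
A minimal tabular Q-learning sketch makes this loop concrete: an agent in a hypothetical six-state corridor learns, purely from rewards, to walk toward the goal. All constants (learning rate, discount, exploration rate) are illustrative:

```python
import random

random.seed(1)
N_STATES = 6        # states 0..5 in a corridor; the reward waits at state 5
ACTIONS = (-1, +1)  # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = nxt

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # learns to always step right: [1, 1, 1, 1, 1]
```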

One of the key advantages of reinforcement learning is its ability to handle complex and uncertain problem spaces. Instead of relying on a predefined set of rules or heuristics, the agent learns from experience and adapts its behavior accordingly. This flexibility allows reinforcement learning algorithms to tackle a wide range of problems, from game playing to robotics to business optimization.

Applications of Reinforcement Learning

Reinforcement learning has demonstrated success in various domains. In game playing, it has achieved remarkable results, such as AlphaGo, which defeated world champions in the complex game of Go. In robotics, reinforcement learning has been used to train autonomous agents to perform tasks like grasping objects or navigating unfamiliar environments.

Reinforcement learning is also being applied to business problems, such as optimizing pricing strategies or supply chain management. By continuously learning and adapting to changing market conditions, reinforcement learning algorithms can help businesses make better decisions and improve their overall performance.

Exploring Spaces in AI Problem Solving

  • Reinforcement learning plays a crucial role in exploring different spaces to find optimal solutions.
  • Reinforcement learning offers flexibility in handling complex problem spaces through trial and error.
  • Applications of reinforcement learning include game playing and business optimization.

Exploring Problem Spaces in Data Science and AI

The field of artificial intelligence (AI) has greatly expanded in recent years, offering solutions to a wide range of problems across various industries. One key aspect of AI is its ability to search for solutions in problem spaces.

AI is designed to mimic human intelligence and learning, providing computational models and algorithms that can process and analyze large amounts of data. By exploring problem spaces, AI can identify patterns, make predictions, and solve complex problems.

Data science plays a crucial role in AI by providing the necessary tools and techniques to explore problem spaces. Data scientists collect, clean, and analyze data, identifying relevant features and variables to build accurate models.

Exploring problem spaces involves defining the scope and constraints of a given problem. This includes understanding the desired outcome, the available data, and any limitations or challenges that may arise. By thoroughly exploring the problem space, data scientists and AI systems can develop effective strategies for solving complex problems.

AI algorithms use various search techniques to navigate problem spaces. These techniques include breadth-first search, depth-first search, heuristic search, and genetic algorithms. Each search method has its own strengths and weaknesses, making it suitable for different types of problems.

Exploring problem spaces in data science and AI involves iterative processes, as the initial analysis and models may need to be refined and updated based on new data or insights. This continuous cycle of exploration and refinement helps improve the accuracy and efficiency of the AI system.

In conclusion, exploring problem spaces is a vital aspect of data science and AI. By using artificial intelligence to search for solutions, we can tackle complex problems and make informed decisions based on data-driven insights. As technology continues to advance, the exploration of problem spaces will play a crucial role in further enhancing AI capabilities and driving innovation across industries.

Using Bayesian Networks for Problem Solving in AI

Problem solving is a fundamental task for artificial intelligence (AI). The exploration of problem spaces is an essential process in AI, where algorithms search for solutions to complex problems. One powerful tool in this exploration is the use of Bayesian networks.

What is a Bayesian Network?

A Bayesian network is a graphical model that represents the probabilistic relationships between different variables. It is composed of nodes, which represent variables, and edges, which represent probabilistic dependencies between the variables. By using Bayesian networks, AI systems can model and reason about uncertain knowledge and make decisions based on available evidence.

The Role of Bayesian Networks in Problem Solving

In problem solving, Bayesian networks can be used to represent the problem space and help AI algorithms find solutions. By representing variables and their dependencies, Bayesian networks provide a structured way to explore different possibilities and evaluate their likelihood based on observed evidence.

When searching for solutions in the problem space, AI algorithms can utilize the probabilistic reasoning capabilities of Bayesian networks. They can update the probabilities of different variables based on new evidence and make informed decisions about the best course of action.
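
The classic rain/sprinkler/wet-grass example shows this updating in a few lines of Python; the probability tables below are invented for illustration:

```python
from itertools import product

# A three-node network: Rain -> WetGrass <- Sprinkler (hypothetical numbers).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    # Chain rule over the network structure
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# Inference by enumeration: P(Rain = True | WetGrass = True)
evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
posterior = sum(joint(True, s, True) for s in (True, False)) / evidence
print(round(posterior, 3))  # observing wet grass raises P(rain) from 0.2 to about 0.74
```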

Benefits of Using Bayesian Networks

There are several benefits to using Bayesian networks for problem solving in AI:

  • Efficient exploration: Bayesian networks provide a structured representation of the problem space, allowing AI algorithms to efficiently explore different possibilities and narrow down the search for solutions.
  • Handling uncertainty: Bayesian networks allow AI systems to incorporate and reason about uncertain knowledge. This is particularly useful in real-world scenarios where incomplete or noisy data is available.
  • Flexibility: Bayesian networks can be easily updated with new evidence, allowing AI systems to adapt and refine their solutions as additional data becomes available.
  • Interpretability: The graphical nature of Bayesian networks makes it easier for humans to understand and interpret the reasoning process of AI systems, promoting transparency and trust.

In conclusion, Bayesian networks play a crucial role in problem solving in AI. By representing the problem space and utilizing probabilistic reasoning, AI algorithms can effectively search for solutions and make informed decisions. The benefits of using Bayesian networks include efficient exploration, handling uncertainty, flexibility, and interpretability.

Exploring Problem Spaces in Expert Systems and AI

In the field of artificial intelligence (AI), problem spaces play a crucial role in solving complex tasks. A problem space refers to the set of all possible states or configurations that a problem can have. Exploring these problem spaces is essential for AI systems to find optimal solutions.

Expert systems, a branch of AI, are designed to mimic the decision-making process of human experts in a specific domain. These systems have a defined problem space, which includes all the possible inputs, outputs, and rules for solving a particular problem.

The search for solutions within problem spaces is a fundamental aspect of AI. AI algorithms employ various search techniques to navigate through the problem space and reach the desired outcome. For example, depth-first search and breadth-first search are commonly used algorithms for exploring problem spaces.

Exploring problem spaces involves analyzing different states or configurations and evaluating their potential for achieving the desired goal. The AI system iteratively searches through the problem space, considering different paths and making decisions based on the current state. This exploration continues until an optimal solution is found.

One challenge in exploring problem spaces is the vastness and complexity of the search space. AI systems must efficiently navigate through a large number of possible states and configurations to find an optimal solution. This requires intelligent search algorithms and heuristics to guide the exploration process.

In conclusion, exploring problem spaces is a critical aspect of AI and expert systems. It involves searching through all possible states or configurations to find the optimal solution. By employing intelligent search algorithms, AI systems can efficiently navigate these problem spaces and solve complex tasks.

The Ethical Implications of Problem Solving in AI

As artificial intelligence (AI) continues to advance, its ability to search and explore problem spaces for solutions has become increasingly sophisticated. However, this progress in AI raises important ethical questions that need to be addressed.

The exploration of problem spaces in AI involves creating algorithms and systems that can search through vast amounts of data, identifying patterns, and generating possible solutions. This capability has led to significant advancements in various fields, including medicine, finance, and transportation. However, with this power comes the potential for misuse and harm.

One ethical concern is the use of AI in making decisions that affect individuals and societies.

As AI systems become more integrated into our daily lives, they have the potential to make decisions that impact various aspects of our well-being, such as healthcare, employment, and criminal justice. The ethical implications arise when these decisions are based solely on algorithmic calculations, without the consideration of subjective factors, biases, or human input. This can lead to unfair outcomes and perpetuate existing social inequalities.

Another ethical concern involves the responsibility for the consequences of AI-driven solutions.

When AI algorithms search problem spaces for solutions, they often rely on large datasets to learn and make decisions. The quality and biases within these datasets can heavily influence the outcomes generated by AI systems. If these biases are not identified and corrected, AI-driven solutions can perpetuate existing biases and discrimination, leading to real-world harm and injustices.

Furthermore, the automation of problem-solving processes through AI can also have profound social and economic implications. While AI has the potential to streamline processes, reduce costs, and increase efficiency, it can also result in job displacement and exacerbate wealth inequalities. The ethical implications arise when the benefits of AI-driven problem-solving are not equitably distributed and when vulnerable populations are disproportionately affected.

In conclusion, the exploration of problem spaces in artificial intelligence has significant ethical implications. It is crucial for developers, policymakers, and society as a whole to actively engage in discussions and debates to ensure that AI is developed and used in a responsible and ethical manner. Only by addressing these ethical concerns can we truly harness the power of AI problem-solving for the benefit of all.

Exploring Problem Spaces in Virtual Assistants and AI

Artificial intelligence (AI) is constantly evolving and expanding, finding new ways of solving problems and improving our daily lives. One area where AI has had a major impact is in the development of virtual assistants, such as Siri, Alexa, and Google Assistant. These virtual assistants use AI algorithms to understand and interpret natural language, providing users with helpful information and completing tasks on their behalf.

In order for virtual assistants to be effective problem solvers, they must first understand the problem space they are working in. Problem spaces refer to the set of possible problems that an AI system can encounter and the methods it can use to solve them. By exploring the problem space, AI systems can identify the best approach to solve a particular problem and provide accurate and relevant solutions to users.

Artificial Intelligence’s Search for Solutions

AI systems search problem spaces by employing various techniques, such as search algorithms and machine learning. Search algorithms enable AI to navigate through different possible solutions, evaluating each one to determine its effectiveness. Machine learning helps AI systems learn from past interactions and improve their problem-solving capabilities over time.

Exploring problem spaces involves analyzing and categorizing different types of problems, understanding the relationships between them, and determining the best strategies for solving them. AI systems may use techniques like natural language processing, data mining, and predictive modeling to gather information about the problem space and build a knowledge base that can be accessed during problem-solving tasks.

The Role of Virtual Assistants in Problem Solving

Virtual assistants play a vital role in problem-solving by providing users with quick and accurate solutions to their inquiries or tasks. By exploring the problem space, virtual assistants can understand user intent, identify relevant information, and generate appropriate responses. They can also adapt to new problems by continuously learning from user interactions and improving their problem-solving techniques.

In addition to providing solutions, virtual assistants can also act as problem solvers by guiding users through complex tasks or processes. They can break down a problem into smaller sub-problems, provide step-by-step instructions, and offer suggestions or recommendations based on user preferences and historical data. Through this interactive problem-solving approach, virtual assistants can assist users in achieving their goals more efficiently and effectively.

In conclusion, exploring problem spaces is crucial for virtual assistants and AI systems to effectively solve problems and provide useful solutions to users. By employing various search algorithms, machine learning techniques, and data analysis methods, AI systems can navigate through problem spaces, understand user intent, and generate accurate and relevant responses. As AI continues to advance, virtual assistants will become even more sophisticated in their problem-solving abilities, enhancing our daily lives and transforming the way we interact with technology.

Artificial General Intelligence and its Approach to Problem Solving

Artificial General Intelligence (AGI) refers to the ability of an AI system to understand and perform any intellectual task that a human being can do. AGI aims to go beyond the narrow applications of AI and develop a more comprehensive and adaptable problem-solving capability.

AGI approaches problem solving by utilizing advanced search algorithms to explore problem spaces. The problem space refers to the set of all possible states and actions that can be taken to solve a specific problem. By exploring different paths and evaluating their outcomes, AGI can converge on an optimal solution.

AGI uses techniques like heuristic search algorithms, evolutionary algorithms, and reinforcement learning to explore problem spaces. These algorithms enable AGI to intelligently navigate through a vast number of possible solutions and select the most promising ones for further exploration.

One common approach is the use of search algorithms like depth-first search, breadth-first search, and A* search. These algorithms traverse the problem space by systematically exploring all possible solutions, evaluating their potential and making decisions based on the accumulated knowledge. This allows AGI to identify patterns and generalize solutions to similar problems.

AI in Problem-Solving

Artificial intelligence plays a crucial role in enhancing problem-solving capabilities. AGI systems can analyze and process large amounts of data, understand complex patterns, and learn from past experiences. This enables them to tackle a wide range of problems in various domains, from scientific research to business optimization.

In problem-solving, AI can assist humans by providing insights, generating solutions, and automating repetitive tasks. AGI systems can analyze vast amounts of data from diverse sources and identify underlying patterns, making them invaluable in fields like healthcare, finance, and environmental research.

  • AGI can analyze patient data to assist doctors in diagnosing diseases and recommending treatment plans.
  • AGI can process financial data and market trends to suggest optimal investment strategies.
  • AGI can analyze climate data to predict weather patterns and assist in climate change mitigation.

Overall, AGI’s approach to problem solving combines advanced search techniques and AI capabilities to explore problem spaces, analyze data, and generate optimal solutions. With continuous advancements in AI research, AGI holds the potential to revolutionize problem-solving across various industries and domains.

Exploring Problem Spaces in Autonomous Vehicles and AI

In the search for solutions to complex problems, artificial intelligence plays a crucial role in exploring problem spaces. This is especially true in the field of autonomous vehicles, where AI is being used to solve a variety of challenges.

Autonomous vehicles must be able to navigate their environment, make decisions in real-time, and respond to changing conditions. These tasks require AI algorithms to understand and interpret a vast amount of data, including sensor input, road conditions, and traffic patterns.

By exploring problem spaces, AI is able to break down complex tasks into manageable subproblems. For example, when an autonomous vehicle encounters an obstacle, AI algorithms can analyze the problem space and consider different options for navigation. This could include mapping alternative routes or determining the optimal speed to safely navigate around the obstacle.

AI’s ability to explore problem spaces also extends beyond just navigation. It can be used to optimize energy consumption, enhance safety features, and improve overall efficiency in autonomous vehicles. By analyzing data and simulating different scenarios, AI algorithms can identify the best available solutions to these challenges.

Furthermore, AI’s exploration of problem spaces is not limited to autonomous vehicles alone. It is also applicable in a wide range of other domains, such as healthcare, finance, and manufacturing. In each of these areas, AI algorithms can assist in solving complex problems by analyzing data, identifying patterns, and generating insights.

In conclusion, artificial intelligence is a powerful tool for searching problem spaces and finding innovative solutions. Whether it is in the field of autonomous vehicles or other domains, AI’s ability to explore and solve complex problems is revolutionizing industries and driving us towards a more efficient and advanced future.

The Role of Fuzzy Logic in Exploring Problem Spaces

In the field of artificial intelligence (AI), problem solving and exploration of problem spaces are crucial components. This is where fuzzy logic plays a key role, providing a powerful tool for AI systems to navigate and make sense of complex problem spaces.

Fuzzy Logic and Problem Spaces

When exploring problem spaces, AI systems encounter a multitude of variables and uncertainties. Fuzzy logic provides a way to represent and reason with imprecise and uncertain information. Unlike traditional logic, which operates in a binary true/false framework, fuzzy logic deals with degrees of truth. It allows AI systems to handle incomplete or ambiguous data by quantifying degrees of membership to different categories.

By using fuzzy logic, AI systems can effectively explore problem spaces with imperfect, incomplete, or uncertain information. They can reason about and evaluate potential solutions based on their degree of relevance or suitability, rather than relying on strict binary rules.
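
A small sketch shows the idea: temperature readings receive graded degrees of membership in “warm” and “hot” sets, and a fan speed is defuzzified from two hypothetical rules. The membership shapes and rule outputs are arbitrary demonstration values:

```python
def triangular(x, lo, peak, hi):
    """Degree of membership in a fuzzy set shaped as a triangle."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def fan_speed(temp_c):
    # Fuzzify: how 'warm' and how 'hot' is this temperature?
    warm = triangular(temp_c, 15, 25, 35)
    hot = triangular(temp_c, 25, 40, 55)
    # Rules: IF warm THEN speed 40; IF hot THEN speed 90.
    # Defuzzify by taking the membership-weighted average.
    if warm + hot == 0:
        return 0.0
    return (warm * 40 + hot * 90) / (warm + hot)

for t in (18, 26, 33, 45):
    print(t, "°C ->", round(fan_speed(t), 1))  # speeds blend smoothly between rules
```

Note how 26 °C is mostly “warm” but slightly “hot”, so the output lands near 40 rather than snapping between discrete rules, which is exactly the graded behavior binary logic cannot express.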

Search and Optimization in Problem Spaces

Exploring problem spaces often involves search and optimization, with the aim of finding the best solution or set of solutions. Fuzzy logic can help guide AI systems in these tasks by providing a flexible and adaptive approach.

AI systems can use fuzzy logic to define and evaluate objective functions that assess the quality of potential solutions. These objective functions can take into account multiple criteria and preferences, allowing AI systems to explore a broader range of potential solutions.

In addition, fuzzy logic can be used to define search heuristics that guide the exploration of problem spaces. These heuristics can incorporate fuzzy rules to adaptively guide the search and prioritize promising regions of the problem space.

Overall, fuzzy logic offers a valuable framework for AI systems to explore and solve problems in complex problem spaces. By embracing uncertainty and imprecision, AI systems can navigate the intricacies of real-world problems and find robust and effective solutions.

What is meant by problem space in the context of artificial intelligence?

In the context of artificial intelligence, problem space refers to the set of all possible states and actions that can be taken to solve a given problem.

How does artificial intelligence explore problem spaces?

Artificial intelligence explores problem spaces by searching through different states and actions, evaluating their potential for solving the problem and finding an optimal solution.

Can you provide an example of problem space exploration in artificial intelligence?

Sure! Let’s say we want to build an AI system that plays chess. The problem space would consist of all possible chess positions and moves. The AI would explore this problem space by searching through different moves and evaluating their outcomes to find the best move.

What are some techniques used in problem-solving search in artificial intelligence?

There are various techniques used in problem-solving search in artificial intelligence, such as breadth-first search, depth-first search, heuristic search, and genetic algorithms.

Why is exploring problem spaces important in artificial intelligence?

Exploring problem spaces is important in artificial intelligence because it allows AI systems to find optimal solutions to complex problems by searching through all possible states and actions. This helps in decision-making and problem-solving tasks.

What is the problem space in artificial intelligence?

The problem space in artificial intelligence refers to the set of possible states and actions that can be taken to solve a specific problem using AI techniques. It represents all the possible paths and actions that an AI system can explore to find a solution.

How does artificial intelligence navigate this problem space?

Artificial intelligence explores problem spaces through various search algorithms. These algorithms navigate through the problem space by considering different paths and evaluating their potential to reach a solution. The goal is to find the optimal path or combination of actions that leads to a desired outcome.

What are the challenges in searching for problem spaces in artificial intelligence?

Searching for problem spaces in artificial intelligence involves dealing with challenges such as the curse of dimensionality, where the number of possible states and actions becomes exponentially large. Additionally, there may be constraints, obstacles, and uncertain information that further complicate the search process. AI systems need to efficiently explore the problem space while considering these challenges to find effective solutions.

What are some common search algorithms used in artificial intelligence problem solving?

There are several commonly used search algorithms in artificial intelligence problem solving, including depth-first search, breadth-first search, A* search, and heuristic search. Each algorithm has its own strengths and weaknesses and is suited for different types of problems and problem spaces. These algorithms help AI systems navigate through the problem space and find solutions through systematic exploration.


King’s Business School: How AI is transforming problem-solving

Statue of a man thinking illustrating a report into the benefits of generative AI for global problem-solving.


A new study by researchers at King’s Business School and Wazoku has revealed that AI is transforming global problem-solving.

The report found that nearly half (46%) of Wazoku’s 700,000-strong network of problem solvers had utilised generative AI (GenAI) to work on innovative ideas over the past year. This network – known as the Wazoku Crowd – comprises a diverse group of professionals including scientists, pharmacists, engineers, PhD students, CEOs, start-ups, and business leaders.

Perhaps more strikingly, almost a quarter (22%) of respondents reported using GenAI or LLM tools such as ChatGPT and Claude for at least half of their idea submissions, with 8% employing these technologies for every single submission. Of those using GenAI, 47% are leveraging it specifically for idea generation.

The Wazoku Crowd’s collective intelligence is harnessed to solve ‘challenges’ – requests for ideas submitted by enterprises – with an impressive success rate of over 80%.

Simon Hill, CEO of Wazoku, commented on the findings: “There’s an incredible amount of hype with GenAI, but alongside that there is enormous curiosity. Getting immersed in something and being curious is an innovator’s dream, so there is rich potential with GenAI.”

However, Hill also urged caution: “A note of caution, though – it is best used to generate interest, not solutions. Human ingenuity and creativity are still best, although using GenAI can undoubtedly make that process more effective.”

The study revealed that the most common application of GenAI was in research and learning, with 85% of respondents using it for this purpose. Additionally, around one-third of the Wazoku Crowd employed GenAI for report structuring, writing, and data analysis and insight.

The research was conducted in partnership with Oguz A. Acar, Professor of Marketing and Innovation at King’s Business School, King’s College London. Professor Acar viewed the study as a crucial first step towards understanding AI’s potential and limitations in tackling complex innovation challenges.

“Everyone’s trying to figure out what AI can and can’t do, and this survey is a step forward in understanding that,” Professor Acar stated. “It reveals that some crowd members view GenAI as a valuable ally, using it to research, create, and communicate more effectively.”

“While perhaps it’s no surprise that those open to innovation are curious about new tools, the survey also shows mixed opinions. Most people haven’t used GenAI tools yet, highlighting that we’re only beginning to uncover AI’s potential in innovative problem-solving.”

Wazoku collaborates with a range of customers, including Sanofi, A2A, Bill & Melinda Gates Foundation, and numerous global enterprise businesses, government departments, and not-for-profits, to crowdsource ideas and innovation.

Recently, Wazoku launched its own conversational AI to aid innovation. Dubbed Jen AI, this digital innovation assistant has access to Wazoku’s connected innovation management suite, aimed at accelerating decision-making around innovation and enhancing productivity to deliver consistent, scalable results.

“The solutions to the world’s problems are complex, and the support of AI brings vast benefits in terms of efficiency, creativity, and insight generation,” explained Hill.

As the adoption of AI in innovation processes continues to grow, it’s clear that – while these tools offer significant potential – they are best used to augment rather than replace human creativity and problem-solving skills.

(Photo by Ally Griffin)



Problem Solving in Artificial Intelligence

Problem solving is a core aspect of artificial intelligence (AI) that mimics human cognitive processes. It involves identifying challenges, analyzing situations, and applying strategies to find effective solutions.

This article explores the various dimensions of problem solving in AI, the types of problem-solving agents, the steps involved, and the components that formulate associated problems.

Table of Contents

  • Understanding Problem-Solving Agents
  • Types of Problems in AI
  • Steps in Problem Solving in Artificial Intelligence (AI)
  • Components of Problem Formulation in AI
  • Techniques for Problem Solving in AI
  • Challenges in Problem Solving with AI

Understanding Problem-Solving Agents

In artificial intelligence (AI), agents are entities that perceive their environment and take actions to achieve specific goals. Problem-solving agents stand out due to their focus on identifying and resolving issues systematically. Unlike reflex agents, which react to stimuli based on predefined mappings, problem-solving agents analyze situations and employ various techniques to achieve desired outcomes.

Types of Problems in AI

1. Ignorable Problems

These are problems or errors that have minimal or no impact on the overall performance of the AI system. They are minor and can be safely ignored without significantly affecting the outcome.

  • Slight inaccuracies in predictions that do not affect the larger goal (e.g., small variance in image pixel values during image classification).
  • Minor data preprocessing errors that don’t alter the results significantly.

Handling: These problems often don’t require intervention and can be overlooked in real-time systems without adverse effects.

2. Recoverable Problems

Recoverable problems are those where the AI system encounters an issue, but it can recover from the error, either through manual intervention or built-in mechanisms, such as error-handling functions.

  • Missing data that can be imputed or filled in by statistical methods.
  • Incorrect or biased training data that can be retrained or corrected during the process.
  • System crashes that can be recovered through checkpoints or retraining.

Handling: These problems require some action—either automated or manual recovery. Systems can be designed with fault tolerance or error-correcting mechanisms to handle these.

3. Irrecoverable Problems

These are critical problems that lead to permanent failure or incorrect outcomes in AI systems. Once encountered, the system cannot recover, and these problems can cause significant damage or misperformance.

  • Complete corruption of the training dataset leading to irreversible bias or poor performance.
  • Security vulnerabilities in AI models that allow for adversarial attacks, rendering the system untrustworthy.
  • Overfitting to the extent that the model cannot generalize to new data.

Handling: These problems often require a complete overhaul or redesign of the system, including retraining the model, rebuilding the dataset, or addressing fundamental issues in the AI architecture.

Steps in Problem Solving in Artificial Intelligence (AI)

The process of problem solving in AI consists of several finite steps that parallel human cognitive processes. These steps include:

  • Problem Definition: This initial step involves clearly specifying the inputs and acceptable solutions for the system. A well-defined problem lays the groundwork for effective analysis and resolution.
  • Problem Analysis: In this step, the problem is thoroughly examined to understand its components, constraints, and implications. This analysis is crucial for identifying viable solutions.
  • Knowledge Representation: This involves gathering detailed information about the problem and defining all potential techniques that can be applied. Knowledge representation is essential for understanding the problem’s context and available resources.
  • Problem Solving: The selection of the best techniques to address the problem is made in this step. It often involves comparing various algorithms and approaches to determine the most effective method.

Components of Problem Formulation in AI

Effective problem-solving in AI depends on several critical components; a short code sketch after the list shows how they fit together:

  • Initial State: This represents the starting point for the AI agent, establishing the context in which the problem is addressed. The initial state may also involve initializing methods for problem-solving.
  • Action: This stage involves selecting functions associated with the initial state and identifying all possible actions. Each action influences the progression toward the desired goal.
  • Transition: This component integrates the actions from the previous stage, leading to the next state in the problem-solving process. Transition modeling helps visualize how actions affect outcomes.
  • Goal Test: This stage verifies whether the specified goal has been achieved through the integrated transition model. If the goal is met, the action ceases, and the focus shifts to evaluating the cost of achieving that goal.
  • Path Costing: This component assigns a numerical value representing the cost of achieving the goal. It considers all associated hardware, software, and human resource expenses, helping to optimize the problem-solving strategy.
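
A minimal sketch of how these five components might be packaged, with a uniform-cost search consuming them. The toy “reach 37” problem and its costs are invented purely for illustration:

```python
from heapq import heappop, heappush

class Problem:
    """Hypothetical container for the five components listed above."""
    def __init__(self, initial, actions, transition, goal_test, step_cost):
        self.initial = initial        # Initial State
        self.actions = actions        # Action: state -> available actions
        self.transition = transition  # Transition: (state, action) -> next state
        self.goal_test = goal_test    # Goal Test: state -> bool
        self.step_cost = step_cost    # Path Costing: (state, action) -> cost

def cheapest_cost(problem):
    """Uniform-cost search: always expand the cheapest frontier state first."""
    frontier, seen = [(0, problem.initial)], set()
    while frontier:
        cost, state = heappop(frontier)
        if problem.goal_test(state):
            return cost
        if state in seen:
            continue
        seen.add(state)
        for action in problem.actions(state):
            nxt = problem.transition(state, action)
            heappush(frontier, (cost + problem.step_cost(state, action), nxt))
    return None

# Toy instance: reach 37 from 1 using 'inc' (+1, cost 1) or 'dbl' (*2, cost 2).
toy = Problem(
    initial=1,
    actions=lambda s: ("inc", "dbl"),
    transition=lambda s, a: s + 1 if a == "inc" else s * 2,
    goal_test=lambda s: s == 37,
    step_cost=lambda s, a: 1 if a == "inc" else 2,
)
print(cheapest_cost(toy))  # 11, e.g. 1 -> 2 -> 4 -> 8 -> 9 -> 18 -> 36 -> 37
```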

Techniques for Problem Solving in AI

Several techniques are prevalent in AI for effective problem-solving:

1. Search Algorithms

Search algorithms are foundational in AI, used to explore possible solutions in a structured manner. Common types include:

  • Uninformed Search: Such as breadth-first and depth-first search, which do not use problem-specific information.
  • Informed Search: Algorithms like A* that use heuristics to find solutions more efficiently.

2. Constraint Satisfaction Problems (CSP)

CSPs involve finding solutions that satisfy specific constraints. AI uses techniques like backtracking, constraint propagation, and local search to solve these problems effectively.
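
For instance, a few lines of backtracking solve a hypothetical four-region map-coloring CSP (the region adjacencies below are invented for illustration):

```python
# Map coloring as a CSP: variables are regions, domains are colors,
# and constraints require neighbouring regions to differ.
NEIGHBOURS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"],
}
COLORS = ("red", "green", "blue")

def backtrack(assignment):
    if len(assignment) == len(NEIGHBOURS):
        return assignment  # every variable consistently assigned
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for color in COLORS:
        # Constraint check: no already-colored neighbour may share this color
        if all(assignment.get(n) != color for n in NEIGHBOURS[var]):
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None  # dead end: undo and try another color higher up

print(backtrack({}))
```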

3. Optimization Techniques

AI often tackles optimization problems, where the goal is to find the best solution from a set of feasible solutions. Techniques such as linear programming, dynamic programming, and evolutionary algorithms are commonly employed.
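
As a concrete example, the classic 0/1 knapsack problem yields to a short dynamic program; the item values, weights, and capacity below are arbitrary:

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack solved by dynamic programming.

    best[w] holds the highest value achievable with total weight <= w
    using the items considered so far.
    """
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate weights downwards so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Hypothetical instance: four items and a weight budget of 8
print(knapsack(values=[15, 10, 9, 5], weights=[1, 5, 3, 4], capacity=8))  # 29
```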

4. Machine Learning

Machine learning techniques allow AI systems to learn from data and improve their problem-solving abilities over time. Supervised, unsupervised, and reinforcement learning paradigms offer various approaches to adapt and enhance performance.

5. Natural Language Processing (NLP)

NLP enables AI to understand and process human language, making it invaluable for solving problems related to text analysis, sentiment analysis, and language translation. Techniques like tokenization, sentiment analysis, and named entity recognition play crucial roles in this domain.

Challenges in Problem Solving with AI

Despite its advancements, AI problem-solving faces several challenges:

  • Complexity: Some problems are inherently complex and require significant computational resources and time to solve.
  • Data Quality: AI systems are only as good as the data they are trained on. Poor quality data can lead to inaccurate solutions.
  • Interpretability: Many AI models, especially deep learning, act as black boxes, making it challenging to understand their decision-making processes.
  • Ethics and Bias: AI systems can inadvertently reinforce biases present in the training data, leading to unfair or unethical outcomes.

Problem solving is a fundamental element of artificial intelligence, encompassing various techniques and strategies. By understanding the nature of problems, employing structured approaches, and utilizing effective agents, AI can navigate complex challenges and deliver optimal solutions. As AI continues to evolve, enhancing problem-solving capabilities will remain essential for advancing technology and improving human experiences.

Artificial Intelligence and Problem Solving

  • Authors: Danny Kopec, Christopher Pileggi, David Ungar, and Shweta Shetty
  • Language: English
  • Publisher: Mercury Learning and Information
  • Copyright year: 2016
  • Main content: 350 pages
  • Keywords: Artificial Intelligence
  • Published: June 29, 2016
  • ISBN: 9781683922414

How Leaders Are Using AI as a Problem-Solving Tool


Leaders face more complex decisions than ever before. For example, many must deliver new and better services for their communities while meeting sustainability and equity goals. At the same time, many need to find ways to operate and manage their budgets more efficiently. So how can these leaders make complex decisions and get them right in an increasingly tricky business landscape? The answer lies in harnessing technological tools like Artificial Intelligence (AI).

CHONGQING, CHINA - AUGUST 22: A visitor interacts with a NewGo AI robot during the Smart China Expo 2022 on August 22, 2022 in Chongqing, China. The expo, held annually in Chongqing since 2018, is a platform to promote global exchanges of smart technologies and international cooperation in the smart industry. (Photo by Chen Chao/China News Service via Getty Images)

What is AI?

AI can help leaders in several different ways. It can be used to process and make decisions on large amounts of data more quickly and accurately. AI can also help identify patterns and trends that would otherwise be undetectable. This information can then be used to inform strategic decision-making, which is why AI is becoming an increasingly important tool for businesses and governments. A recent study by PwC found that 52% of companies accelerated their AI adoption plans in the last year. In addition, 86% of companies believe that AI will become a mainstream technology at their company imminently. As AI becomes more central in the business world, leaders need to understand how this technology works and how they can best integrate it into their operations.

At its simplest, AI is a computer system that can learn and work independently without human intervention. This ability makes AI a powerful tool. With AI, businesses and public agencies can automate tasks, get insights from data, and make decisions with little or no human input. Consequently, AI can be a valuable problem-solving tool for leaders across the private and public sectors, primarily through three methods.

1) Automation

One of the most beneficial ways AI can help leaders is by automating tasks. This can free up time to focus on other essential things. For example, AI can help a city save valuable human resources by automating parking enforcement. In addition, this will help improve the accuracy of detecting violations and prevent costly mistakes. Automation can also help with things like appointment scheduling and fraud detection.

2) Insights from data

Another way AI can help leaders solve problems is by providing insights from data. With AI, businesses can gather large amounts of data and then use that data to make better decisions. For example, suppose a company is trying to decide which products to sell. In that case, AI can be used to gather data about customer buying habits and then use that data to make recommendations about which products to market.


3) Simulations

Finally, AI can help leaders solve problems by allowing them to create simulations. With AI, organizations can test out different decision scenarios and see what the potential outcomes could be. This can help leaders make better decisions by examining the consequences of their choices. For example, a city might use AI to simulate different traffic patterns to see how a new road layout would impact congestion.

Choosing the Right Tools

“Artificial intelligence and machine learning technologies can revolutionize how governments and businesses solve real-world problems,” said Chris Carson, CEO of Hayden AI, a global leader in intelligent enforcement technologies powered by artificial intelligence. His company addresses a problem once thought unsolvable in the transit world: managing illegal parking in bus lanes in a cost-effective, scalable way.

Illegal parking in bus lanes is a major problem for cities and their transit agencies. Cars and trucks illegally parked in bus lanes force buses to merge into general traffic lanes, significantly slowing down transit service and making riders’ trips longer. That’s where a company like Hayden AI comes in. “Hayden AI uses artificial intelligence and machine learning algorithms to detect and process illegal parking in bus lanes in real-time so that cities can take proactive measures to address the problem,” Carson observes.

Hayden AI works with transit agencies to address illegal parking in bus lanes by installing its AI-powered camera systems on buses to conduct automated enforcement of parking violations.

In this case, an AI-powered camera system is installed on each bus. The camera system uses computer vision to “watch” the street for illegal parking in the bus lane. When it detects a traffic violation, it sends the data back to the parking authority. This allows the parking authority to take action, such as sending a ticket to the offending vehicle’s owner.
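The flow from detection to enforcement can be pictured with a short sketch. Everything here — the Detection type, the confidence threshold, the triage function — is a hypothetical simplification of the kind of pipeline described, not Hayden AI's actual code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One suspected bus-lane violation captured by an onboard camera."""
    plate: str
    confidence: float  # vision model's confidence that a violation occurred
    frame_id: int

REVIEW_THRESHOLD = 0.90  # hypothetical cutoff; real systems tune this carefully

def triage(detections: list[Detection]) -> list[Detection]:
    """Forward only high-confidence detections to the parking authority,
    where a human reviewer makes the final call."""
    return [d for d in detections if d.confidence >= REVIEW_THRESHOLD]

feed = [
    Detection(plate="ABC123", confidence=0.97, frame_id=101),
    Detection(plate="XYZ789", confidence=0.62, frame_id=102),  # too uncertain; dropped
]
for package in triage(feed):
    print(f"Send frame {package.frame_id} (plate {package.plate}) for human review")
```

Note how the final decision is deliberately left to a person; the “Split the Responsibility” section below returns to this point.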

The effectiveness of AI is entirely dependent on how you use it. As former Accenture chief technology strategist Bob Suh notes in the Harvard Business Review, problem-solving works best when AI is combined with human ingenuity. “In other words, it’s not about the technology itself; it’s about how you use the technology that matters. AI is not a panacea for all ills. Still, when incorporated into a company’s problem-solving repertoire, it can be an enormously powerful tool,” concludes Terence Mauri, founder of Hack Future Lab, a global think tank.

Split the Responsibility

Huda Khan, an academic researcher from the University of Aberdeen, believes that AI is critical for international companies’ success, especially in an era of disruption. Together with international marketing academics Michael Christofi from the Cyprus University of Technology, Richard Lee from the University of South Australia, Viswanathan Kumar from St. John’s University, and Kelly Hewett from the University of Tennessee, Khan is calling for research attention on how such transformative approaches inform competitive business practices. “AI is very good at automating repetitive tasks, such as customer service or data entry. But it’s not so good at creative tasks, such as developing new products,” Khan says. “So, businesses need to think about what tasks they want to automate and what tasks they want to keep for humans.”

Khan believes that businesses need to split the responsibility between AI and humans. For example, Hayden AI’s system is highly accurate and only sends evidence packages of potential violations for human review. Once the data is sent, human analysis is still needed to make the final decision. But with much less work to do, government agencies can devote their employees to tasks that can’t be automated.

Backed by efficient, effective data analysis, human problem-solving can be more innovative than ever. Like all business transitions, developing the best system for combining human and AI work may take some experimentation, but it can significantly impact future success. For example, a company trying to improve its customer service can use AI startup Satisfi’s natural language processing technology, which understands a customer’s question and finds the best answer in the company’s knowledge base. Likewise, a company trying to increase sales can use AI startup Persado’s marketing language generation technology, which analyzes what motivates customers and then generates language that is more likely to persuade them to make a purchase.
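To make the knowledge-base idea concrete, here is a minimal, dependency-free sketch of retrieval-style question answering. It illustrates the general technique (naive bag-of-words matching), not Satisfi's actual technology, and the knowledge-base entries are made up.

```python
import math
from collections import Counter

def tokenize(text: str) -> Counter:
    # Deliberately naive tokenization; real systems normalize punctuation, etc.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical knowledge-base entries.
kb = {
    "What are your opening hours?": "We are open 9am-6pm, Monday through Saturday.",
    "How do I return an item?": "Returns are accepted within 30 days with a receipt.",
}

def answer(question: str) -> str:
    """Return the answer whose stored question best matches the query."""
    q = tokenize(question)
    best = max(kb, key=lambda k: cosine(q, tokenize(k)))
    return kb[best]

print(answer("when are you open"))  # -> the opening-hours answer
```

Production systems replace the bag-of-words similarity with learned language models, but the retrieve-the-best-matching-answer structure is the same.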

Look at the Big Picture

A technological solution can frequently improve performance in multiple areas simultaneously. For instance, Hayden AI’s automated enforcement system doesn’t just help speed up transit by keeping bus lanes clear for buses; it also increases data security by limiting how much data is kept for parking enforcement, which allows a city to increase the efficiency of its transportation while also protecting civil liberties.

This is the case with many technological solutions. For example, an e-commerce business might adopt a better data architecture to power a personalized recommendation option and also benefit from improved SEO. As a leader, you can use your big-picture view of your company to identify critical secondary benefits of technologies. Once the technologies are in use, you can also fine-tune your system to target your most important priorities.

In summary, AI technology is constantly evolving, becoming more accessible and affordable for businesses of all sizes. By harnessing the power of AI, leaders can make better decisions, improve efficiency, and drive innovation. However, it’s important to remember that AI is not a silver bullet. Therefore, organizations must use AI and humans to get the best results.

Benjamin Laker



Perspective | Open access | Published: 13 January 2020

The role of artificial intelligence in achieving the Sustainable Development Goals

Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark & Francesco Fuso Nerini

Nature Communications, volume 11, Article number: 233 (2020)


Subjects: Computational science, Developing world, Energy efficiency

The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.


Introduction

The emergence of artificial intelligence (AI) is shaping an increasing range of sectors. For instance, AI is expected to affect global productivity 1 , equality and inclusion 2 , environmental outcomes 3 , and several other areas, both in the short and long term 4 . Reported potential impacts of AI indicate both positive 5 and negative 6 impacts on sustainable development. However, to date, there is no published study systematically assessing the extent to which AI might impact all aspects of sustainable development—defined in this study as the 17 Sustainable Development Goals (SDGs) and 169 targets internationally agreed in the 2030 Agenda for Sustainable Development 7 . This is a critical research gap, as we find that AI may influence the ability to meet all SDGs.

Here we present and discuss implications of how AI can either enable or inhibit the delivery of all 17 goals and 169 targets recognized in the 2030 Agenda for Sustainable Development. Relationships were characterized by the methods reported at the end of this study, which can be summarized as a consensus-based expert elicitation process, informed by previous studies aimed at mapping SDGs interlinkages 8 , 9 , 10 . A summary of the results is given in Fig.  1 and the Supplementary Data  1 provides a complete list of all the SDGs and targets, together with the detailed results from this work. Although there is no internationally agreed definition of AI, for this study we considered as AI any software technology with at least one of the following capabilities: perception—including audio, visual, textual, and tactile (e.g., face recognition), decision-making (e.g., medical diagnosis systems), prediction (e.g., weather forecast), automatic knowledge extraction and pattern recognition from data (e.g., discovery of fake news circles in social media), interactive communication (e.g., social robots or chat bots), and logical reasoning (e.g., theory development from premises). This view encompasses a large variety of subfields, including machine learning.

Figure 1

Documented evidence of the potential of AI acting as ( a ) an enabler or ( b ) an inhibitor on each of the SDGs. The numbers inside the colored squares represent each of the SDGs (see the Supplementary Data  1 ). The percentages on the top indicate the proportion of all targets potentially affected by AI and the ones in the inner circle of the figure correspond to proportions within each SDG. The results corresponding to the three main groups, namely Society, Economy, and Environment, are also shown in the outer circle of the figure. The results obtained when the type of evidence is taken into account are shown by the inner shaded area and the values in brackets.

Documented connections between AI and the SDGs

Our review of relevant evidence shows that AI may act as an enabler on 134 targets (79%) across all SDGs, generally through technological improvements that may help overcome certain present limitations. However, 59 targets (35%, also across all SDGs) may experience a negative impact from the development of AI. For the purpose of this study, we divide the SDGs into three categories, according to the three pillars of sustainable development, namely Society, Economy, and Environment 11 , 12 (see the Methods section). This classification allows us to provide an overview of the general areas of influence of AI. In Fig. 1, we also provide the results obtained when the appropriateness of the evidence presented in each reference is weighted into the percentage of targets assessed, as discussed in the Methods section and below. A detailed assessment of the Society, Economy, and Environment groups, together with illustrative examples, is presented next.

AI and societal outcomes

Sixty-seven targets (82%) within the Society group could potentially benefit from AI-based technologies (Fig. 2). For instance, in SDG 1 on no poverty, SDG 4 on quality education, SDG 6 on clean water and sanitation, SDG 7 on affordable and clean energy, and SDG 11 on sustainable cities, AI may act as an enabler for all the targets by supporting the provision of food, health, water, and energy services to the population. It can also underpin low-carbon systems, for instance, by supporting the creation of circular economies and smart cities that efficiently use their resources 13 , 14 . For example, AI can enable smart and low-carbon cities encompassing a range of interconnected technologies, such as electrical autonomous vehicles and smart appliances, that enable demand response in the electricity sector 13 , 14 (with benefits across SDGs 7, 11, and 13 on climate action). AI can also help to integrate variable renewables by enabling smart grids that partially match electrical demand to times when the sun is shining and the wind is blowing 13 (a toy sketch of this idea appears below).

Fewer targets in the Society group (31 targets, 38%) may be negatively impacted by AI than positively impacted, but their consideration is crucial. Many of these relate to how the technological improvements enabled by AI may be implemented in countries with different cultural values and wealth. Advanced AI technology, research, and product design may require massive computational resources that are only available through large computing centers. These facilities have a very high energy requirement and carbon footprint 15 . For instance, cryptocurrency applications such as Bitcoin globally use as much electricity as some nations’ entire electrical demand 16 , compromising outcomes in the SDG 7 sphere, but also on SDG 13 on climate action. Some estimates suggest that the total electricity demand of information and communications technologies (ICTs) could require up to 20% of the global electricity demand by 2030, up from around 1% today 15 . Green growth of ICT technology is therefore essential 17 . More efficient cooling systems for data centers, broader energy efficiency, and renewable-energy usage in ICTs will all play a role in containing electricity demand growth 15 . In addition to more efficient and renewable-energy-based data centers, it is essential to embed human knowledge in the development of AI models. Besides the fact that the human brain consumes much less energy than what is used to train AI models, the available knowledge introduced into the model (see, for instance, physics-informed deep learning 18 ) does not need to be learnt through data-intensive training, which may significantly reduce the associated energy consumption.

Although AI-enabled technology can act as a catalyst to achieve the 2030 Agenda, it may also trigger inequalities that may act as inhibitors on SDGs 1, 4, and 5. This duality is reflected in target 1.1: AI can help to identify areas of poverty and foster international action using satellite images 5 , but it may also lead to additional qualification requirements for any job, consequently increasing the inherent inequalities 19 and acting as an inhibitor towards the achievement of this target.
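The smart-grid sketch below is purely illustrative: a greedy scheduler that places a deferrable load (say, fleet charging) into the hours with the highest forecast renewable output. The forecast numbers are invented, and real demand-response systems solve far more constrained optimization problems.

```python
# Hypothetical hourly renewable generation forecast (MW) for one day.
renewables = [5, 4, 3, 3, 4, 6, 9, 14, 18, 21, 23, 24,
              24, 23, 21, 18, 14, 10, 8, 7, 6, 6, 5, 5]

def schedule_flexible_load(hours_needed: int) -> list[int]:
    """Greedy demand response: run a deferrable load in the greenest hours."""
    ranked = sorted(range(24), key=lambda h: renewables[h], reverse=True)
    return sorted(ranked[:hours_needed])

# Charge an electric-vehicle fleet for four hours around the renewable peak.
print(schedule_flexible_load(hours_needed=4))  # -> [10, 11, 12, 13]
```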

Figure 2

Documented evidence of positive or negative impact of AI on the achievement of each of the targets from SDGs 1, 2, 3, 4, 5, 6, 7, 11, and 16 ( https://www.un.org/sustainabledevelopment/ ). Each block in the diagram represents a target (see the Supplementary Data 1 for additional details on the targets). For targets highlighted in green or orange, we found published evidence that AI could potentially enable or inhibit such target, respectively. The absence of highlighting indicates the absence of identified evidence; it is noteworthy that this does not necessarily imply the absence of a relationship. (The content of this figure has not been reviewed by the United Nations and does not reflect its views.)

Another important drawback of AI-based developments is that they are traditionally based on the needs and values of the nations in which the AI is being developed. If AI technology and big data are used in regions where ethical scrutiny, transparency, and democratic control are lacking, AI might enable nationalism, hate towards minorities, and biased election outcomes 20 . The term “big nudging” has emerged to describe the use of big data and AI to exploit psychological weaknesses to steer decisions—creating problems such as damage to social cohesion, democratic principles, and even human rights 21 . AI has recently been utilized to develop citizen scores, which are used to control social behavior 22 . Such scores are a clear example of a threat to human rights arising from AI misuse; one of the biggest problems is the lack of information given to citizens about which data are analyzed and what consequences this may have for their lives.

It is also important to note that AI technology is unevenly distributed: for instance, complex AI-enhanced agricultural equipment may not be accessible to small farmers and may thus widen the gap with respect to larger producers in more developed economies 23 , consequently inhibiting the achievement of some targets of SDG 2 on zero hunger. There is another important shortcoming of AI in the context of SDG 5 on gender equality: there is insufficient research assessing the potential impact of technologies such as smart algorithms, image recognition, or reinforcement learning on discrimination against women and minorities. For instance, machine-learning algorithms uncritically trained on regular news articles will inadvertently learn and reproduce the societal biases against women and girls that are embedded in current languages. Word embeddings, a popular technique in natural language processing, have been found to exacerbate existing gender stereotypes 2 (a simple probe of this effect is sketched below). In addition to the lack of diversity in datasets, another main issue is the lack of gender, racial, and ethnic diversity in the AI workforce 24 . Diversity is one of the main principles supporting innovation and societal resilience, which will become essential in a society exposed to the changes associated with AI development 25 . Societal resilience is also promoted by decentralization, i.e., by the implementation of AI technologies adapted to the cultural background and the particular needs of different regions.
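The sketch below shows the projection test behind such findings in miniature. The 3-d vectors are hand-made toys so the example stays self-contained; studies such as Bolukbasi et al. (ref. 2) run this kind of probe on pretrained embeddings like word2vec.

```python
import numpy as np

# Toy 3-d "embeddings" chosen by hand to illustrate the probe; real studies
# use pretrained word vectors.
emb = {
    "he":        np.array([ 1.0, 0.0, 0.1]),
    "she":       np.array([-1.0, 0.0, 0.1]),
    "engineer":  np.array([ 0.6, 0.8, 0.0]),
    "homemaker": np.array([-0.7, 0.7, 0.0]),
}

# The "gender direction" is estimated from a definitional pair.
gender_direction = emb["he"] - emb["she"]
gender_direction /= np.linalg.norm(gender_direction)

for word in ("engineer", "homemaker"):
    v = emb[word] / np.linalg.norm(emb[word])
    proj = float(v @ gender_direction)
    # Positive -> leans "he", negative -> leans "she"; markedly nonzero values
    # on occupation words are the stereotype signal the text describes.
    print(f"{word:10s} gender projection: {proj:+.2f}")
```

A markedly nonzero projection for an occupation word is exactly the bias signal: the word sits closer to one end of the he–she axis than the other.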

AI and economic outcomes

The technological advantages provided by AI may also have a positive impact on the achievement of a number of SDGs within the Economy group. We have identified benefits from AI on 42 targets (70%) from these SDGs, whereas negative impacts are reported for 20 targets (33%), as shown in Fig. 1. Although Acemoglu and Restrepo 1 report a net positive impact of AI-enabled technologies associated with increased productivity, the literature also reflects potential negative impacts, mainly related to increased inequalities 26 , 27 , 28 , 29 . In the context of the Economy group of SDGs, if future markets rely heavily on data analysis and these resources are not equally available in low- and middle-income countries, the economic gap may widen significantly due to these newly introduced inequalities 30 , 31 , significantly impacting SDGs 8 (decent work and economic growth), 9 (industry, innovation and infrastructure), and 10 (reduced inequalities). Brynjolfsson and McAfee 31 argue that AI can also exacerbate inequality within nations. By replacing old jobs with ones requiring more skills, technology disproportionately rewards the educated: since the mid-1970s, salaries in the United States (US) rose about 25% for those with graduate degrees, while the average high-school dropout took a 30% pay cut. Moreover, automation shifts corporate income from those who work at companies to those who own them. Such transfer of revenue from workers to investors helps explain why, even though the combined revenues of Detroit's “Big 3” (GM, Ford, and Chrysler) in 1990 were almost identical to those of Silicon Valley's “Big 3” (Google, Apple, and Facebook) in 2014, the latter had 9 times fewer employees and were worth 30 times more on the stock market 32 . Figure 3 shows an assessment of the documented positive and negative effects on the various targets within the SDGs in the Economy group.

Figure 3

Documented evidence of positive or negative impact of AI on the achievement of each of the targets from SDGs 8, 9, 10, 12, and 17 ( https://www.un.org/sustainabledevelopment/ ). The interpretation of the blocks and colors is as in Fig. 2. (The content of this figure has not been reviewed by the United Nations and does not reflect its views.)

Although the identified linkages in the Economy group are mainly positive, trade-offs cannot be neglected. For instance, AI can have a negative effect on social media usage by showing users content specifically suited to their preconceived ideas. This may lead to political polarization 33 and affect social cohesion 21 , with consequences in the context of SDG 10 on reduced inequalities. On the other hand, AI can help identify sources of inequality and conflict 34 , 35 , and therewith potentially reduce inequalities, for instance, by using simulations to assess how virtual societies may respond to changes. However, there is an underlying risk when using AI to evaluate and predict human behavior: the inherent bias in the data. A number of discriminatory challenges have been reported in the automated targeting of online job advertising using AI 35 , essentially related to previous biases in selection processes conducted by human recruiters. The work by Dalenberg 35 highlights the need to modify the data preparation process and to explicitly adapt the AI-based algorithms used for selection processes in order to avoid such biases.

AI and environmental outcomes

The last group of SDGs, i.e., the one related to Environment, is analyzed in Fig. 4. The three SDGs in this group are related to climate action, life below water, and life on land (SDGs 13, 14, and 15). For the Environment group, we identified 25 targets (93%) for which AI could act as an enabler. Benefits from AI could derive from the possibility of analyzing large-scale interconnected databases to develop joint actions aimed at preserving the environment. Looking at SDG 13 on climate action, there is evidence that AI advances will support the understanding of climate change and the modeling of its possible impacts. Furthermore, AI will support low-carbon energy systems with high integration of renewable energy and energy efficiency, which are all needed to address climate change 13 , 36 , 37 . AI can also be used to help improve the health of ecosystems. The achievement of target 14.1, calling to prevent and significantly reduce marine pollution of all kinds, can benefit from AI through algorithms for automatic identification of possible oil spills 38 . Another example is target 15.3, which calls for combating desertification and restoring degraded land and soil. According to Mohamadi et al. 39 , neural networks and object-oriented techniques can be used to improve the classification of vegetation cover types based on satellite images, with the possibility of processing large amounts of images in a relatively short time (a toy version of the underlying per-pixel signal is sketched below). These AI techniques can help to identify desertification trends over large areas, information that is relevant for environmental planning, decision-making, and management to avoid further desertification, or to help reverse trends by identifying the major drivers. However, as pointed out above, efforts to achieve SDG 13 on climate action could be undermined by the high energy needs of AI applications, especially if non-carbon-neutral energy sources are used. Furthermore, despite the many examples of how AI is increasingly applied to improve biodiversity monitoring and conservation 40 , it can be conjectured that increased access to AI-derived information about ecosystems may drive over-exploitation of resources, although such misuse has so far not been sufficiently documented. This aspect is further discussed below, where currently identified gaps in AI research are considered.
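For readers unfamiliar with how classifying vegetation cover from satellite images works at the pixel level, the sketch below computes the standard Normalized Difference Vegetation Index (NDVI) and applies a crude threshold. It is a rule-based stand-in for the neural classifiers cited above, with invented reflectance values.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index for each pixel."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical 2x3 patch of reflectance values from two satellite bands.
nir = np.array([[0.50, 0.45, 0.10], [0.48, 0.12, 0.08]])
red = np.array([[0.10, 0.12, 0.09], [0.11, 0.10, 0.07]])

index = ndvi(nir, red)
# Crude threshold rule: persistently low NDVI over time is one signal of
# vegetation loss and hence of desertification risk.
labels = np.where(index > 0.3, "vegetated", "bare")
print(index.round(2))
print(labels)
```

Neural classifiers learn far richer combinations of such spectral signals, but this is the kind of per-pixel information they start from.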

Figure 4

Documented evidence of positive or negative impact of AI on the achievement of each of the targets from SDGs 13, 14, and 15 ( https://www.un.org/sustainabledevelopment/ ). The interpretation of the blocks and colors is as in Fig. 2. (The content of this figure has not been reviewed by the United Nations and does not reflect its views.)

An assessment of the collected evidence on the interlinkages

A deeper analysis of the gathered evidence was undertaken, as shown in Fig. 1 (and explained in the Methods section). In practice, each interlinkage was weighted based on the applicability and appropriateness of each of the references used to assess it—and possibly to identify research gaps. Although accounting for the type of evidence has a relatively small effect on the positive impacts (we see a reduction of positively affected targets from 79% to 71%), we observe a more significant reduction (from 35% to 23%) in the targets with negative impacts of AI. This can be partly due to the fact that AI research typically involves quantitative methods, which would bias the results towards the positive effects. However, there are some differences across the Society, Economy, and Environment spheres. In the Society sphere, when weighting the appropriateness of evidence, positively affected targets diminish by 5 percentage points (p.p.) and negatively affected targets by 13 p.p. In particular, weighting the appropriateness of evidence on negative impacts on SDG 1 (no poverty) and SDG 6 (clean water and sanitation) reduces the fraction of affected targets by 43 p.p. and 35 p.p., respectively. In the Economy group, instead, positive impacts are reduced more (15 p.p.) than negative ones (10 p.p.) when the appropriateness of the available evidence is taken into account. This can be related to the extensive literature assessing the displacement of jobs by AI (because of clear policy and societal concerns), whereas the longer-term benefits of AI for the economy are perhaps not so extensively characterized by currently available methods. Finally, although the weighting of evidence decreases the positive impacts of AI on the Environment group by only 8 p.p., the negative impacts see the largest average reduction (18 p.p.). This is explained by the fact that, although there are some indications of the potential negative impact of AI on this group, there is no strong evidence (for any of the targets) supporting this claim, and therefore this is a relevant area for future research.

In general, the fact that the evidence on interlinkages between AI and the large majority of targets is not based on analyses and tools tailored to each particular issue provides a strong rationale for addressing a number of research gaps, which are identified and listed in the section below.

Research gaps on the role of AI in sustainable development

The more we enable SDGs by deploying AI applications, from autonomous vehicles 41 to AI-powered healthcare solutions 42 and smart electrical grids 13 , the more important it becomes to invest in the AI safety research needed to keep these systems robust and beneficial, so as to prevent them from malfunctioning or from getting hacked 43 . A crucial research avenue for the safe integration of AI is understanding catastrophes that could be triggered by a systemic fault in AI technology. For instance, a recent World Economic Forum (WEF) report raises such a concern due to the integration of AI in the financial sector 44 . It is therefore very important to raise awareness of the risks associated with possible failures of AI systems in a society progressively more dependent on this technology. Furthermore, although we were able to find numerous studies suggesting that AI can potentially serve as an enabler for many SDG targets and indicators, a significant fraction of these studies have been conducted in controlled laboratory environments, based on limited datasets or using prototypes 45 , 46 , 47 . Hence, extrapolating this information to evaluate real-world effects often remains a challenge. This is particularly true when measuring the impact of AI across broader scales, both temporally and spatially. We acknowledge that conducting controlled experimental trials for evaluating real-world impacts of AI can result in a snapshot of the situation, in which AI tools are tailored towards that specific environment. However, as society is constantly changing (also due to factors including non-AI-based technological advances), the requirements set for AI are changing as well, resulting in a feedback loop with interactions between society and AI. Another underemphasized aspect in the existing literature is the resilience of society towards AI-enabled changes. Therefore, novel methodologies are required to ensure that the impact of new technologies is assessed from the points of view of efficiency, ethics, and sustainability prior to launching large-scale AI deployments. In this sense, research aimed at obtaining insight into the reasons for failure of AI systems, for example by introducing combined human–machine analysis tools 48 , is an essential step towards accountable AI technology, given the large risk associated with such failures.

Although we found more published evidence of AI serving as an enabler than as an inhibitor on the SDGs, there are at least two important aspects that should be considered. First, self-interest can be expected to bias the AI research community and industry towards publishing positive results. Second, discovering detrimental aspects of AI may require longer-term studies and, as mentioned above, there are few established evaluation methodologies available to do so. Bias towards publishing positive results is particularly apparent in the SDGs corresponding to the Environment group. A good example of this bias is target 14.5 on conserving coastal and marine areas, where machine-learning algorithms can provide optimal solutions, given a wide range of parameters, regarding the best choice of areas to include in conservation networks 49 . However, even if the solutions are optimal from a mathematical point of view (given a certain range of selected parameters), additional research would be needed to assess the long-term impact of such algorithms on equity and fairness 6 , precisely because of the unknown factors that may come into play. Regarding the second point, it is likely that the AI projects with the highest potential to maximize profit will get funded. Without intervention, research on AI can be expected to be directed towards applications where funding and commercial interests lie. This may result in increased inequality 50 . Consequently, there is a risk that AI-based technologies with the potential to achieve certain SDGs may not be prioritized if their expected economic impact is not high. Furthermore, it is essential to promote the development of initiatives to assess the societal, ethical, legal, and environmental implications of new AI technologies.

Substantive research on the application of AI technologies to the SDGs is concerned with the development of better data-mining and machine-learning techniques for the prediction of certain events. This is the case for applications such as forecasting extreme weather conditions or predicting recidivist offender behavior. The expectation of this research is to enable preparation for, and response to, a wide range of events. However, there is a research gap in real-world applications of such systems, e.g., by governments (as discussed above). Institutions face a number of barriers to the adoption of AI systems as part of their decision-making processes, including the need to set up measures for cybersecurity and to protect the privacy of citizens and their data. Both aspects have implications for human rights regarding the issues of surveillance, tracking, communication, and data storage, as well as the automation of processes without rigorous ethical standards 21 . Targeting these gaps would be essential to ensure the usability and practicality of AI technologies for governments. It would also be a prerequisite for understanding the long-term impacts of AI and its potential, while regulating its use to reduce the possible bias that can be inherent to AI 6 .

Furthermore, our research suggests that AI applications are currently biased towards SDG issues that are mainly relevant to those nations where most AI researchers live and work. For instance, many systems applying AI technologies to agriculture, e.g., to automate harvesting or optimize its timing, are located within wealthy nations. Our literature search resulted in only a handful of examples where AI technologies are applied to SDG-related issues in nations without strong AI research. Moreover, if AI technologies are designed and developed for technologically advanced environments, they have the potential to exacerbate problems in less wealthy nations (e.g., when it comes to food production). This finding leads to a substantial concern that developments in AI technologies could increase inequalities both between and within countries, in ways which counteract the overall purpose of the SDGs. We encourage researchers and funders to focus more on designing and developing AI solutions, which respond to localized problems in less wealthy nations and regions. Projects undertaking such work should ensure that solutions are not simply transferred from technology-intensive nations. Instead, they should be developed based on a deep understanding of the respective region or culture to increase the likelihood of adoption and success.

Towards sustainable AI

The great wealth that AI-powered technology has the potential to create may go mainly to those already well-off and educated, while job displacement leaves others worse off. Globally, the growing economic importance of AI may result in increased inequalities due to the unevenly distributed educational and computing resources throughout the world. Furthermore, the existing biases in the data used to train AI algorithms may result in the exacerbation of those biases, eventually leading to increased discrimination. Another related problem is the usage of AI to produce computational (commercial, political) propaganda based on big data (also defined as “big nudging”), which is spread through social media by independent AI agents with the goals of manipulating public opinion and producing political polarization 51 . Although current scientific evidence refutes the technological determinism of such fake news 51 , long-term impacts of AI remain possible (although unstudied) owing to the lack of robust research methods. A change of paradigm is therefore needed to promote cooperation and to limit the possibilities for control of citizen behavior through AI. The concept of Finance 4.0 has been proposed 52 as a multi-currency financial system promoting a circular economy that is aligned with societal goals and values. Informational self-determination (in which the individual takes an active role in how their data are handled by AI systems) would be an essential aspect of such a paradigm 52 . The data intensiveness of AI applications creates another problem: the need for ever more detailed information to improve AI algorithms, which conflicts with the need for more transparent handling and protection of personal data 53 . One area where this conflict is particularly important is healthcare: Panch et al. 54 argue that although the vast amount of personal healthcare data could lead to the development of very powerful tools for diagnosis and treatment, the numerous problems associated with data ownership and privacy call for careful policy intervention. This is also an area where more research is needed to assess the possible long-term negative consequences. All the challenges mentioned above culminate in the academic discourse about the legal personality of robots 55 , which may lead to alarming narratives of technological totalitarianism.

Many of these aspects result from the interplay between technological developments on one side and requests from individuals, response from governments, as well as environmental resources and dynamics on the other. Figure  5 shows a schematic representation of these dynamics, with emphasis on the role of technology. Based on the evidence discussed above, these interactions are not currently balanced and the advent of AI has exacerbated the process. A wide range of new technologies are being developed very fast, significantly affecting the way individuals live as well as the impacts on the environment, requiring new piloting procedures from governments. The problem is that neither individuals nor governments seem to be able to follow the pace of these technological developments. This fact is illustrated by the lack of appropriate legislation to ensure the long-term viability of these new technologies. We argue that it is essential to reverse this trend. A first step in this direction is to establish adequate policy and legislation frameworks, to help direct the vast potential of AI towards the highest benefit for individuals and the environment, as well as towards the achievement of the SDGs. Regulatory oversight should be preceded by regulatory insight, where policymakers have sufficient understanding of AI challenges to be able to formulate sound policy. Developing such insight is even more urgent than oversight, as policy formulated without understanding is likely to be ineffective at best and counterproductive at worst.

Figure 5

Schematic representation showing the identified agents and their roles towards the development of AI. Thicker arrows indicate faster change. In this representation, technology affects individuals through technical developments, which change the way people work and interact with each other and with the environment, whereas individuals would interact with technology through new needs to be satisfied. Technology (including technology itself and its developers) affects governments through new developments that need appropriate piloting and testing. Also, technology developers affect government through lobbying and influencing decision makers. Governments provide legislation and standards to technology. The governments affect individuals through policy and legislation, and individuals would require new legislation consistent with the changing circumstances from the governments. The environment interacts with technology by providing the resources needed for technological development and is affected by the environmental impact of technology. Furthermore, the environment is affected either negatively or positively by the needs, impacts, and choices of individuals and governments, which in turn require environmental resources. Finally, the environment is also an underlying layer that provides the “planetary boundaries” to the mentioned interactions.

Although strong and connected institutions (covered by SDG 16) are needed to regulate the future of AI, we find that there is limited understanding of the potential impact of AI on institutions. Examples of positive impacts include AI algorithms aimed at improving fraud detection 56 , 57 or at assessing the possible effects of certain legislation 58 , 59 . Another concern is that data-driven approaches to policing may hinder equal access to justice because of algorithmic bias, particularly towards minorities 60 . Consequently, we believe that it is imperative to develop legislation regarding the transparency and accountability of AI, as well as to decide on the ethical standards to which AI-based technology should be subject. This debate is being pushed forward by initiatives such as the IEEE (Institute of Electrical and Electronics Engineers) Ethically Aligned Design 60 and the new EU (European Union) ethics guidelines for trustworthy AI 61 . It is noteworthy that, despite the importance of an ethical, responsible, and trustworthy approach to AI development and use, this issue is, in a sense, independent of the aims of this article: one can envision AI applications that improve SDG outcomes while not being fully aligned with AI ethics guidelines. We therefore recommend that AI applications targeting the SDGs be open and explicit about their guiding ethical principles, including by indicating explicitly how they align with existing guidelines. On the other hand, the lack of interpretability of AI, currently one of the challenges of AI research, adds a further complication to the enforcement of such regulatory actions 62 . Note that AI algorithms trained on data consisting of previous regulations and decisions may act as a “mirror” reflecting biases and unfair policy; this presents an opportunity to identify and correct certain errors in existing procedures. The friction between the uptake of data-driven AI applications and the need to protect the privacy and security of individuals is stark. When not properly regulated, the vast amount of data produced by citizens might be used to influence consumer opinion towards a certain product or political cause 51 .

AI applications that have positive societal welfare implications may not always benefit each individual separately 41 . This inherent dilemma of collective vs. individual benefit is relevant in the scope of AI applications but is not one that should be solved by the application of AI itself. This has always been an issue affecting humankind and it cannot be solved in a simple way, since such a solution requires participation of all involved stakeholders. The dynamicity of context and the level of abstraction at which human values are described imply that there is not a single ethical theory that holds all the time in all situations 63 . Consequently, a single set of utilitarian ethical principles with AI would not be recommendable due to the high complexity of our societies 52 . It is also essential to be aware of the potential complexity in the interaction between human and AI agents, and of the increasing need for ethics-driven legislation and certification mechanisms for AI systems. This is true for all AI applications, but especially those that, if they became uncontrolled, could have even catastrophic effects on humanity, such as autonomous weapons. Regarding the latter, associations of AI and robotics experts are already getting together to call for legislation and limitations of their use 64 . Furthermore, associations such as the Future of Life Institute are reviewing and collecting policy actions and shared principles around the world to monitor progress towards sustainable-development-friendly AI 65 . To deal with the ethical dilemmas raised above, it is important that all applications provide openness about the choices and decisions made during design, development, and use, including information about the provenance and governance of the data used for training algorithms, and about whether and how they align with existing AI guidelines. It is therefore important to adopt decentralized AI approaches for a more equitable development of AI 66 .

We are at a critical turning point for the future of AI. A global and science-driven debate to develop shared principles and legislation among nations and cultures is necessary to shape a future in which AI positively contributes to the achievement of all the SDGs. The choices made now to develop sustainable-development-friendly AI by 2030 have the potential to unlock benefits that could go far beyond the SDGs within our century. All actors in all nations should be represented in this dialogue, to ensure that no one is left behind. On the other hand, postponing or not having such a conversation could result in an unequal and unsustainable AI-fueled future.

Methods

In this section we describe the process employed to obtain the results described in the present study and shown in the Supplementary Data 1. The goal was to answer the question “Is there published evidence of AI acting as an enabler or an inhibitor for this particular target?” for each of the 169 targets within the 17 SDGs. To this end, we conducted a consensus-based expert elicitation process, informed by previous studies on mapping SDGs interlinkages 8 , 9 and following Butler et al. 67 and Morgan 68 . The authors of this study are academics spanning a wide range of disciplines, including engineering, natural and social sciences, and acted as experts for the elicitation process. The authors performed an expert-driven literature search to support the identified connections between AI and the various targets, where the following sources of information were considered acceptable evidence: published work on real-world applications and published evidence from controlled/laboratory scenarios (in both cases, given the quality variation depending on the venue, we ensured that the publications considered in the analysis were of sufficient quality); reports from accredited organizations (for instance, UN or government bodies); and documented commercial-stage applications. The following sources of information were not considered acceptable evidence: educated conjectures; real-world applications without peer-reviewed research; and media, public beliefs, or other sources of information.

The expert elicitation process was conducted as follows: each of the SDGs was assigned to one or more main contributors, and in some cases to several additional contributors as summarized in the Supplementary Data  1 (here the initials correspond to the author names). The main contributors carried out a first literature search for that SDG and then the additional contributors completed the main analysis. One published study on a synergy or a trade-off between a target and AI was considered enough for mapping the interlinkage. However, for nearly all targets several references are provided. After the analysis of a certain SDG was concluded by the contributors, a reviewer was assigned to evaluate the connections and reasoning presented by the contributors. The reviewer was not part of the first analysis and we tried to assign the roles of the main contributor and reviewer to experts with complementary competences for each of the SDGs. The role of the reviewer was to bring up additional points of view and considerations, while critically assessing the analysis. Then, the main contributors and reviewers iteratively discussed to improve the results presented for each of the SDGs until the analysis for all the SDGs was sufficiently refined.

After reaching consensus regarding the assessment shown in the Supplementary Data  1 , we analyzed the results by evaluating the number of targets for which AI may act as an enabler or an inhibitor, and calculated the percentage of targets with positive and negative impact of AI for each of the 17 goals, as shown in Fig.  1 . In addition, we divided the SDGs into the three following categories: Society, Economy, and Environment, consistent with the classification discussed by Refs. 11 , 12 . The SDGs assigned to each of the categories are shown in Fig.  6 and the individual results from each of these groups can be observed in Figs.  2 – 4 . These figures indicate, for each target within each SDG, whether any published evidence of positive or negative impact was found.

Figure 6

(The content of this figure has not been reviewed by the United Nations and does not reflect its views).

Taking into account the types of evidence

In the methodology described above, a connection between AI and a certain target is established if at least one reference documenting such a link was found. As the analyzed studies rely on very different types of evidence, it is important to classify the references based on the methods employed to support their conclusions. Therefore, all the references in the Supplementary Data  1 include a classification from (A) to (D) according to the following criteria:

References using sophisticated tools and data to refer to this particular issue and with the possibility to be generalized are of type (A).

Studies based on data to refer to this particular issue, but with limited generalizability, are of type (B).

Anecdotal qualitative studies and methods are of type (C).

Purely theoretical or speculative references are of type (D).

The various classes were assigned following the same expert elicitation process described above. The contribution of each reference towards a linkage is then weighted: categories (A), (B), (C), and (D) are assigned relative weights of 1, 0.75, 0.5, and 0.25, respectively. It is noteworthy that, given the vast range of studies across all the SDG areas, the literature search was not exhaustive and, therefore, certain targets are supported by more references than others in our study. To avoid any bias associated with the different numbers of references across targets, we considered the largest positive and the largest negative weight to establish the connection with each target. Consider the following example: for a certain target, one reference of type (B) documents a positive connection and two references of types (A) and (D) document a negative connection with AI. In this case, the potential positive impact of AI on that target is assessed as 0.75, while the potential negative impact is 1.
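This scoring rule is simple enough to state in a few lines of code. The sketch below follows the evidence weights and the max-rule exactly as described; only the function and variable names are our own.

```python
# Relative weights for evidence types (A)-(D), as defined above.
WEIGHTS = {"A": 1.0, "B": 0.75, "C": 0.5, "D": 0.25}

def target_impact(references: list[tuple[str, str]]) -> tuple[float, float]:
    """Largest positive and negative evidence weight for one target.

    Each reference is (evidence_type, direction), with direction
    '+' for enabler and '-' for inhibitor.
    """
    positive = max((WEIGHTS[t] for t, d in references if d == "+"), default=0.0)
    negative = max((WEIGHTS[t] for t, d in references if d == "-"), default=0.0)
    return positive, negative

# The worked example from the text: one type-(B) positive reference,
# plus type-(A) and type-(D) negative references.
print(target_impact([("B", "+"), ("A", "-"), ("D", "-")]))  # -> (0.75, 1.0)
```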

Limitations of the research

The analysis presented here reflects the perspective of the authors. Some literature on how AI might affect certain SDGs could have been missed by the authors, or there might not yet be published evidence on such an interlinkage. Nevertheless, the employed methods tried to minimize the subjectivity of the assessment. How AI might affect the delivery of each SDG was assessed and reviewed by several authors, and a number of studies were reviewed for each interlinkage. Furthermore, as discussed in the Methods section, each interlinkage was discussed among a subset of authors until consensus was reached on its nature.

Finally, this study relies on the analysis of the SDGs. The SDGs provide a powerful lens for looking at internationally agreed goals on sustainable development and represent a leap forward compared with the Millennium Development Goals in the representation of all spheres of sustainable development, encompassing human rights 69 , social sustainability, environmental outcomes, and economic development. However, the SDGs are a political compromise and might be limited in their representation of some of the complex dynamics and cross-interactions among targets. Therefore, the SDGs have to be considered in conjunction with previous, current, and other international agreements 9 . For instance, as pointed out in recent work by UN Human Rights 69 , human rights considerations are deeply embedded in the SDGs. Nevertheless, the SDGs should be considered a complement to, rather than a replacement of, the United Nations Universal Human Rights Charter 70 .

Data availability

The authors declare that all the data supporting the findings of this study are available within the paper and its Supplementary Data 1 file.

Acemoglu, D. & Restrepo, P. Artificial Intelligence, Automation, and Work. NBER Working Paper No. 24196 (National Bureau of Economic Research, 2018).

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V. & Kalai, A. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Adv. Neural Inf. Process. Syst. 29 , 4349–4357 (2016).


Norouzzadeh, M. S. et al. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proc. Natl Acad. Sci. USA 115 , E5716–E5725 (2018).


Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence (Random House Audio Publishing Group, 2017).

Jean, N. et al. Combining satellite imagery and machine learning to predict poverty. Science 353, 790–794 (2016).


Courtland, R. Bias detectives: the researchers striving to make algorithms fair. Nature 558 , 357–360 (2018).

UN General Assembly (UNGA). A/RES/70/1 Transforming our world: the 2030 Agenda for Sustainable Development. Resolution 25, 1–35 (2015).

Fuso Nerini, F. et al. Mapping synergies and trade-offs between energy and the Sustainable Development Goals. Nat. Energy 3 , 10–15 https://doi.org/10.1038/s41560-017-0036-5 (2017).


Fuso Nerini, F. et al. Connecting climate action with other Sustainable Development Goals. Nat. Sustain . 1 , 674–680 (2019). https://doi.org/10.1038/s41893-019-0334-y


Fuso Nerini, F. et al. Use SDGs to guide climate action. Nature 557 , https://doi.org/10.1038/d41586-018-05007-1 (2018).

United Nations Economic and Social Council. Sustainable Development (United Nations Economic and Social Council, 2019).

Stockholm Resilience Centre’s (SRC) contribution to the 2016 Swedish 2030 Agenda HLPF report (Stockholm University, 2017).

International Energy Agency. Digitalization & Energy (International Energy Agency, 2017).

Fuso Nerini, F. et al. A research and innovation agenda for zero-emission European cities. Sustainability 11 , 1692 https://doi.org/10.3390/su11061692 (2019).

Jones, N. How to stop data centres from gobbling up the world’s electricity. Nature 561 , 163–166 (2018).

Truby, J. Decarbonizing Bitcoin: law and policy choices for reducing the energy consumption of Blockchain technologies and digital currencies. Energy Res. Soc. Sci. 44 , 399–410 (2018).

Karnama, A., Bitaraf Haghighi, E. & Vinuesa, R. Organic data centers: a sustainable solution for computing facilities. Results in Engineering 4, 100063 (2019).

Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics informed deep learning (part I): data-driven solutions of nonlinear partial differential equations. arXiv:1711.10561 (2017).

Nagano, A. Economic growth and automation risks in developing countries due to the transition toward digital modernity. Proc. 11th International Conference on Theory and Practice of Electronic Governance—ICEGOV ’18 (2018). https://doi.org/10.1145/3209415.3209442

Helbing, D. & Pournaras, E. Society: build digital democracy. Nature 527 , 33–34 (2015).

Helbing, D. et al. in Towards Digital Enlightenment 73–98 (Springer International Publishing, 2019). https://doi.org/10.1007/978-3-319-90869-4_7

Nagler, J., van den Hoven, J. & Helbing, D. in Towards Digital Enlightenment 41–46 (Springer International Publishing, 2019). https://doi.org/10.1007/978-3-319-90869-4_5

Wegren, S. K. The “left behind”: smallholders in contemporary Russian agriculture. J. Agrar. Chang. 18 , 913–925 (2018).

NSF - National Science Foundation. Women and Minorities in the S&E Workforce (NSF - National Science Foundation, 2018).

Helbing, D. The automation of society is next how to survive the digital revolution; version 1.0 (Createspace, 2015).

Cockburn, I., Henderson, R. & Stern, S. The Impact of Artificial Intelligence on Innovation (NBER, 2018). https://doi.org/10.3386/w24449

Seo, Y., Kim, S., Kisi, O. & Singh, V. P. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques. J. Hydrol. 520 , 224–243 (2015).

Adeli, H. & Jiang, X. Intelligent Infrastructure: Neural Networks, Wavelets, and Chaos Theory for Intelligent Transportation Systems and Smart Structures (CRC Press, 2008).

Nunes, I. & Jannach, D. A systematic review and taxonomy of explanations in decision support and recommender systems. Use. Model Use. Adapt Interact. 27 , 393–444 (2017).

Bissio, R. Vector of hope, source of fear. Spotlight Sustain. Dev . 77–86 (2018).

Brynjolfsson, E. & McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W. W. Norton & Company, 2014).

Dobbs, R. et al. Poorer Than Their Parents? Flat or Falling Incomes in Advanced Economies (McKinsey Global Institute, 2016).

Francescato, D. Globalization, artificial intelligence, social networks and political polarization: new challenges for community psychologists. Commun. Psychol. Glob. Perspect. 4 , 20–41 (2018).

Saam, N. J. & Harrer, A. Simulating norms, social inequality, and functional change in artificial societies. J. Artificial Soc.Social Simul . 2 (1999).

Dalenberg, D. J. Preventing discrimination in the automated targeting of job advertisements. Comput. Law Secur. Rev. 34 , 615–627 (2018).

World Economic Forum (WEF). Fourth Industrial Revolution for the Earth Series Harnessing Artificial Intelligence for the Earth (World Economic Forum, 2018).

Vinuesa, R., Fdez. De Arévalo, L., Luna, M. & Cachafeiro, H. Simulations and experiments of heat loss from a parabolic trough absorber tube over a range of pressures and gas compositions in the vacuum chamber. J. Renew. Sustain. Energy 8 (2016).

Keramitsoglou, I., Cartalis, C. & Kiranoudis, C. T. Automatic identification of oil spills on satellite images. Environ. Model. Softw. 21 , 640–652 (2006).

Mohamadi, A., Heidarizadi, Z. & Nourollahi, H. Assessing the desertification trend using neural network classification and object-oriented techniques. J. Fac. Istanb. Univ. 66 , 683–690 (2016).

Kwok, R. AI empowers conservation biology. Nature 567 , 133–134 (2019).

Bonnefon, J.-F., Shariff, A. & Rahwan, I. The social dilemma of autonomous vehicles. Science 352 , 1573–1576 (2016).

De Fauw, J. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med 24 , 1342–1350 (2018).

Russell, S., Dewey, D. & Tegmark, M. Research priorities for robust and beneficial artificial intelligence. AI Mag. 34 , 105–114 (2015).

World Economic Forum (WEF). The New Physics of Financial Services – How Artificial Intelligence is Transforming the Financial Ecosystem (World Economic Forum, 2018).

Gandhi, N., Armstrong, L. J. & Nandawadekar, M. Application of data mining techniques for predicting rice crop yield in semi-arid climatic zone of India. 2017 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR) (2017). https://doi.org/10.1109/tiar.2017.8273697

Esteva, A. et al. Corrigendum: dermatologist-level classification of skin cancer with deep neural networks. Nature 546 , 686 (2017).

Cao, Y., Li, Y., Coleman, S., Belatreche, A. & McGinnity, T. M. Detecting price manipulation in the financial market. 2014 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr) (2014). https://doi.org/10.1109/cifer.2014.6924057

Nushi, B., Kamar, E. & Horvitz, E. Towards accountable AI: hybrid human-machine analyses for characterizing system failure. arXiv:1809.07424 (2018).

Beyer, H. L., Dujardin, Y., Watts, M. E. & Possingham, H. P. Solving conservation planning problems with integer linear programming. Ecol. Model. 328 , 14–22 (2016).

Whittaker, M. et al. AI Now Report 2018 (AI Now Institute, 2018).

Petit, M. Towards a critique of algorithmic reason. A state-of-the-art review of artificial intelligence, its influence on politics and its regulation. Quad. del CAC 44 (2018).

Scholz, R. et al. Unintended side effects of the digital transition: European scientists’ messages from a proposition-based expert round table. Sustainability 10 , 2001 (2018).

Ramirez, E., Brill, J., Maureen, K., Wright, J. D. & McSweeny, T. Data Brokers: A Call for Transparency and Accountability (Federal Trade Commission, 2014).

Panch, T., Mattie, H. & Celi, L. A. The “inconvenient truth” about AI in healthcare. npj Digit. Med 2 , 77 (2019).

Solaiman, S. M. Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif. Intell. Law 25 , 155–179 (2017).

West, J. & Bhattacharya, M. Intelligent financial fraud detection: a comprehensive review. Comput. Secur 57 , 47–66 (2016).

Hajek, P. & Henriques, R. Mining corporate annual reports for intelligent detection of financial statement fraud – A comparative study of machine learning methods. Knowl.-Based Syst. 128 , 139–152 (2017).

Perry, W. L., McInnis, B., Price, C. C., Smith, S. C. & Hollywood, J. S. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations (RAND Corporation, 2013).

Gorr, W. & Neill, D. B. Detecting and preventing emerging epidemics of crime. Adv. Dis. Surveillance 4 , 13 (2007).

IEEE. Ethically Aligned Design - Version II overview (2018). https://doi.org/10.1109/MCS.2018.2810458

European Commission. Draft Ethics Guidelines for Trustworthy AI (Digital Single Market, 2018).

Lipton, Z. C. The mythos of model interpretability. Commun. ACM 61 , 36–43 (2018).

Dignum, V. Responsible Artificial Intelligence (Springer International Publishing, 2019).

Future of Life Institute. Open Letter on Autonomous Weapons (Future of Life Institute, 2015).

Future of Life Institute. Annual Report 2018. https://futureoflife.org/wp-content/uploads/2019/02/2018-Annual-Report.pdf?x51579

Montes, G. A. & Goertzel, B. Distributed, decentralized, and democratized artificial intelligence. Technol. Forecast. Soc. Change 141 , 354–358 (2019).

Butler, A. J., Thomas, M. K. & Pintar, K. D. M. Systematic review of expert elicitation methods as a tool for source attribution of enteric illness. Foodborne Pathog. Dis. 12 , 367–382 (2015).

Morgan, M. G. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc. Natl Acad. Sci. USA 111 , 7176–7184 (2014).

United Nations Human Rights. Sustainable Development Goals Related Human Rights (United Nations Human Rights, 2016).

Draft Committee. Universal Declaration of Human Rights (United Nations, 1948).

Download references

Acknowledgements

R.V. acknowledges funding provided by KTH Sustainability Office. I.L. acknowledges the Swedish Research Council (registration number 2017-05189) and funding through an Early Career Research Fellowship granted by the Jacobs Foundation. M.B. acknowledges Implicit SSF: Swedish Foundation for Strategic Research project RIT15-0046. V.D. acknowledges the support of the Wallenberg AI, Autonomous Systems, and Software Program (WASP) program funded by the Knut and Alice Wallenberg Foundation. S.D. acknowledges funding from the Leibniz Competition (J45/2018). S.L. acknowledges funding from the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska–Curie grant agreement number 748625. M.T. was supported by the Ethics and Governance of AI Fund. F.F.N. acknowledges funding from the Formas grant number 2018-01253.

Author information

Authors and affiliations

Linné FLOW Centre, KTH Mechanics, SE-100 44, Stockholm, Sweden

Ricardo Vinuesa

Division of Robotics, Perception, and Learning, School of EECS, KTH Royal Institute Of Technology, Stockholm, Sweden

Hossein Azizpour & Iolanda Leite

Division of Media Technology and Interaction Design, KTH Royal Institute of Technology, Lindstedtsvägen 3, Stockholm, Sweden

Madeline Balaam

Responsible AI Group, Department of Computing Sciences, Umeå University, SE-90358, Umeå, Sweden

Virginia Dignum

Leibniz-Institute of Freshwater Ecology and Inland Fisheries, Müggelseedamm 310, 12587, Berlin, Germany

Sami Domisch

AI Sustainability Center, SE-114 34, Stockholm, Sweden

Anna Felländer

Basque Centre for Climate Change (BC3), 48940, Leioa, Spain

Simone Daniela Langhans

Department of Zoology, University of Otago, 340 Great King Street, 9016, Dunedin, New Zealand

Simone Daniela Langhans

Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA

Max Tegmark

Unit of Energy Systems Analysis (dESA), KTH Royal Institute of Technology, Brinellvagen 68, SE-100 44, Stockholm, Sweden

Francesco Fuso Nerini


Contributions

R.V. and F.F.N. conceived, designed, and wrote the paper; they also coordinated inputs from the other authors and assessed and reviewed the SDG evaluations reported in Supplementary Data 1. H.A. and I.L. supported the design, wrote and reviewed sections of the paper, and assessed and reviewed the SDG evaluations in Supplementary Data 1. M.B., V.D., S.D., A.F. and S.L. wrote and reviewed sections of the paper and assessed and reviewed the SDG evaluations in Supplementary Data 1. M.T. reviewed the paper and acted as final editor.

Corresponding authors

Correspondence to Ricardo Vinuesa or Francesco Fuso Nerini.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Communications thanks Dirk Helbing and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Description of additional supplementary files

Supplementary Data 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article

Cite this article

Vinuesa, R., Azizpour, H., Leite, I. et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat. Commun. 11, 233 (2020). https://doi.org/10.1038/s41467-019-14108-y


Received: 03 May 2019

Accepted: 16 December 2019

Published: 13 January 2020

DOI: https://doi.org/10.1038/s41467-019-14108-y


This article is cited by

Assessing the current landscape of AI and sustainability literature: identifying key trends, addressing gaps and challenges

  • Shailesh Tripathi
  • Nadine Bachmann
  • Herbert Jodlbauer

Journal of Big Data (2024)

Navigating the digital world: development of an evidence-based digital literacy program and assessment tool for youth

  • M. Claire Buchan
  • Jasmin Bhawra
  • Tarun Reddy Katapally

Smart Learning Environments (2024)

Green and sustainable AI research: an integrated thematic and topic modeling analysis

  • Raghu Raman
  • Debidutta Pattnaik
  • Prema Nedungadi

Artificial Intelligence can help Loss and Damage only if it is inclusive and accessible

  • Francesca Larosa
  • Adam Wickberg

npj Climate Action (2024)

Rethinking digitalization and climate: don’t predict, mitigate

  • Daria Gritsenko
  • Bent Flyvbjerg


artificial intelligence

What is artificial intelligence?


Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason. Although there are as yet no AIs that match full human flexibility over wider domains or in tasks requiring much everyday knowledge, some AIs perform specific tasks as well as humans.

Are artificial intelligence and machine learning the same?

No, artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is a method of training a computer to learn from its inputs without explicit programming for every circumstance; it is one way of achieving artificial intelligence.


artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since their development in the 1940s, digital computers have been programmed to carry out very complex tasks—such as discovering proofs for mathematical theorems or playing chess—with great proficiency. Despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in executing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, voice or handwriting recognition, and chatbots.

What is intelligence?


All but the simplest human behavior is ascribed to intelligence, while even the most complicated insect behavior is usually not taken as an indication of intelligence. What is the difference? Consider the behavior of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behavior is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of the wasp—must include the ability to adapt to new circumstances.


Psychologists generally characterize human intelligence not by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.


There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that, the next time the computer encountered the same position, it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless the program was previously presented with jumped, whereas a program that is able to generalize can learn the “add -ed” rule for regular verbs ending in a consonant and so form the past tense of jump on the basis of experience with similar verbs.
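To make the contrast concrete, here is a minimal Python sketch, not from the original article, of the difference between rote memorization and rule-based generalization for regular past tenses; the stored verb forms are invented for illustration:

```python
# Rote learning vs. generalization (illustrative sketch).

# A rote learner can only recall forms it has explicitly stored.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb):
    # Returns None for anything outside its stored experience.
    return rote_memory.get(verb)

# A generalizing learner induces the "add -ed" rule from examples
# and applies it to novel regular verbs.
def generalized_past_tense(verb):
    if verb in rote_memory:              # prefer remembered forms
        return rote_memory[verb]
    return verb + "ed"                   # apply the induced rule

print(rote_past_tense("jump"))           # None: rote learning cannot generalize
print(generalized_past_tense("jump"))    # "jumped": the rule covers unseen verbs
```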


What are the Key Components of Artificial Intelligence (AI)?

Artificial intelligence (AI) consists of multiple important elements that help machines imitate human-like mental processes. These components allow AI to tackle problems, make choices, understand speech, and adjust to new data. Let’s go over the five key elements of AI: learning, reasoning, problem-solving, perception, and language processing.

1. Learning

Learning is the core of any AI system. It refers to AI’s capability to get better over time by collecting experience from data. Unlike traditional programming, which requires each task to be specifically coded, AI uses different learning methods to adjust and become smarter. Here are the three main types of learning used in AI:

  • Supervised learning involves training the AI with labeled data, so it knows what to look for. This allows the system to make predictions or sort new data based on examples provided. For instance, an AI trained to recognize handwritten numbers learns by examining thousands of labeled images.
  • Unsupervised learning doesn’t rely on labeled information. Instead, it identifies patterns and relationships within the data on its own. This method is commonly used in customer segmentation, where AI identifies customer groups with similar buying habits.
  • Reinforcement learning mirrors how people learn through trial and error. The AI interacts with its surroundings, getting feedback like rewards or penalties, and uses this to improve its long-term performance. A clear example of reinforcement learning can be found in gaming or robotics, where AI agents develop strategies through repeated attempts.

Example: Voice assistants like Siri or Alexa use machine learning to better understand and respond to users. Over time, they improve by recognizing speech patterns and fine-tuning their answers.
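As a concrete illustration of supervised learning on the handwritten-digit task mentioned above, here is a minimal sketch. It assumes the scikit-learn library and its small bundled digits dataset, neither of which is named in the original text:

```python
# Supervised learning: classify small images of handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Labeled examples: 8x8 pixel images paired with the digit they show.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" here means storing labeled examples to compare against.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# Accuracy on images the model has never seen.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```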

2. Reasoning

Reasoning enables AI to make choices or reach conclusions based on available facts. This involves using logic or probability models to replicate human thinking. There are two types of reasoning:

  • Deductive reasoning takes a general rule and applies it to specific situations. For example, in legal systems, AI can analyze many cases to predict the results of new cases.
  • Inductive reasoning generalizes from specific examples. This is especially useful in healthcare, where AI can identify health patterns from specific patient data to help doctors make decisions.

Example: Writing tools like Grammarly use reasoning to offer corrections, recognizing when a comma should be added or when a sentence needs improvement, and so help users write better without needing to give direct commands.
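The deductive step, applying general rules to specific facts, can be illustrated with a toy forward-chaining loop; the rules and facts below are invented for the example:

```python
# Toy deductive reasoner: apply general if-then rules to specific facts
# until no new conclusions follow (forward chaining).
rules = [
    ({"is_bird"}, "can_fly"),                  # general rule: birds fly
    ({"can_fly", "is_heavy"}, "needs_runway"), # rules can chain off conclusions
]
facts = {"is_bird", "is_heavy"}                # specific observations

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)              # deduce a new fact
            changed = True

print(facts)  # now also contains 'can_fly' and 'needs_runway'
```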

3. Problem-Solving

AI is great at problem-solving, which involves finding answers to difficult challenges. This part allows machines to break complex tasks into simpler steps. AI problem-solving isn’t limited to routine issues; it can also tackle unique, complicated situations. For example, in chess, AI analyzes possible moves and predicts outcomes based on its knowledge of the game’s rules.

Example: In a chess match, AI can scan millions of potential moves to pick the best one using its knowledge of the game’s state and strategies. Similarly, in medical diagnosis, AI reviews symptoms and data to suggest treatments.
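A minimal sketch of the kind of game-tree search behind the chess example: minimax assumes the opponent also plays optimally and picks the move with the best guaranteed payoff. The tiny hand-built tree and its payoff values are invented for illustration:

```python
# Minimax over a tiny two-ply game tree.
game_tree = {"root": ["A", "B"], "A": ["A1", "A2"], "B": ["B1", "B2"]}
leaf_values = {"A1": 3, "A2": 5, "B1": 2, "B2": 9}  # payoffs for the maximizer

def minimax(node, maximizing):
    if node in leaf_values:                       # terminal position
        return leaf_values[node]
    scores = [minimax(child, not maximizing) for child in game_tree[node]]
    return max(scores) if maximizing else min(scores)

# After our move the opponent moves, so children are minimizing nodes.
best_move = max(game_tree["root"], key=lambda m: minimax(m, maximizing=False))
print(best_move)  # "A": guarantees 3, while "B" yields only 2 against best play
```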

4. Perception

Perception involves AI’s ability to handle and make sense of sensory data from its environment. This could include recognizing images, sounds, or other forms of information. AI systems frequently use perception when they need to interact with the physical world. For example, self-driving cars use cameras, radar, and lidar to “see” the road, spot obstacles, and make driving decisions.

A common real-world example of AI perception is facial recognition, where AI looks at visual data to detect and recognize faces. Likewise, AI-powered security systems can detect suspicious activity and alert authorities to potential risks.

Perception plays a significant role in how AI interacts with the world and is widely used in fields like healthcare (for example, AI-assisted medical imaging diagnostics) and retail (for example, checkout-free shopping).

AI systems often rely on sensors such as cameras and microphones to understand their surroundings; typical perception tasks include recognizing images, detecting objects, and processing speech.

Example: A smart home system uses perception to detect movements indoors and outside. It processes data from sensors to make decisions in real-time.
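One very simple form of machine perception is frame differencing, which a motion detector like the smart-home sensor above might use. The sketch below stands in for camera frames with small NumPy arrays, and the threshold is an arbitrary assumption:

```python
# Crude motion detection by differencing two "camera" frames.
import numpy as np

frame_before = np.zeros((8, 8))            # empty scene
frame_after = frame_before.copy()
frame_after[3:5, 3:5] = 1.0                # something moved into view

diff = np.abs(frame_after - frame_before)  # pixel-wise change
motion_detected = diff.mean() > 0.01       # threshold the amount of change

print(motion_detected)  # True: enough pixels changed to signal movement
```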

5. Language Understanding

Natural Language Processing (NLP) is a field within AI that allows machines to grasp, interpret, and react to human language. It’s a widely applied feature in today’s consumer technologies. NLP is commonly used in virtual assistants like Siri and Alexa, which process spoken commands and respond appropriately.

NLP also plays a big role in sentiment analysis, which helps businesses examine customer feedback by figuring out the emotions or attitudes behind written text, such as social media posts or product reviews.

Example: Translation services like Google Translate process text or speech in one language and convert it into another, helping bridge communication gaps across different languages.
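The simplest form of the sentiment analysis described above is a lexicon-based scorer that counts positive and negative words. The tiny word lists here are illustrative only; production systems rely on learned models instead:

```python
# Lexicon-based sentiment: count positive minus negative words.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this phone, the camera is excellent"))  # positive
print(sentiment("terrible support and poor battery life"))      # negative
```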

AI’s five key parts—learning, reasoning, problem-solving, perception, and language processing—are what enable these systems to handle tasks that once required human intelligence. Each component is crucial in building AI’s abilities, from processing data to interacting with the world around it. As AI keeps advancing, its ability to solve complicated issues and boost efficiency across various sectors will expand even further.


Problem Solving Techniques in AI

Problem-solving is the process of working toward an objective or resolving a particular situation. In computer science, problem-solving refers to artificial intelligence methods such as formulating a problem precisely, applying suitable algorithms, and conducting root-cause analyses to identify workable solutions. AI problem-solving typically involves exploring candidate solutions through reasoning techniques and modelling frameworks. The same problem may admit several solutions, each reached by a different algorithm, while some problems have a unique solution; much depends on how the particular problem is framed.

Programmers around the world use artificial intelligence to automate systems for efficient management of both resources and time. Games and puzzles pose some of the most familiar problems in daily life, and AI algorithms can tackle them effectively. Various problem-solving methods are used to build solutions for a range of complex puzzles, including mathematical challenges such as crypto-arithmetic and magic squares, logical puzzles such as Boolean formulae and N-Queens, and well-known games such as Sudoku and chess. These are among the most common problem classes that artificial intelligence has addressed.

Five main types of artificial intelligence agents, distinguished by how they perceive their environment and act on it, are commonly deployed today: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.

These agents make it easier to map states to actions. Because agents can make mistakes when moving on to the next phase of a complicated problem, problem-solving methods provide standardised criteria for handling such cases. Agents that employ artificial intelligence can tackle problems using methods such as search trees and heuristic algorithms.

These effective approaches make artificial intelligence useful for resolving complicated problems. The fundamental problem-solving methods used throughout AI are outlined below.

Heuristics

The heuristic approach relies on experimentation and trial-and-error to understand a problem and construct a solution. Heuristics do not always yield the optimal answer to a given problem, but they reliably provide effective means of reaching short-term objectives. Developers therefore turn to them when conventional techniques cannot solve a problem efficiently. Because heuristics offer quick but approximate alternatives, trading precision for speed, they are often employed in conjunction with optimization algorithms to increase efficiency.
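As a concrete example of a heuristic method, the sketch below uses hill climbing: repeatedly move to the best neighbouring candidate and stop when nothing improves. The objective function and step size are arbitrary choices for the illustration, and the method can get stuck in local optima, which is exactly the precision trade-off noted above:

```python
# Hill climbing: a greedy heuristic search over a one-dimensional landscape.
def score(x):
    return -(x - 3) ** 2 + 9           # single peak at x = 3

x, step = 0.0, 0.5
while True:
    best_neighbour = max((x - step, x + step), key=score)
    if score(best_neighbour) <= score(x):
        break                          # no neighbour improves: a (local) optimum
    x = best_neighbour

print(x, score(x))  # 3.0 9.0
```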

Searching algorithms

Searching is one of the fundamental ways in which AI solves problems. Rational agents, or problem-solving agents, use searching algorithms to select the most appropriate actions. Such agents typically work with atomic representations of states, and finding a goal state is their main objective. Depending on the quality of the solutions they produce, searching algorithms are characterised by completeness, optimality, time complexity, and space complexity.
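As one concrete instance of such a search algorithm, here is a minimal breadth-first search, which is complete and, when every step costs the same, optimal; the small state graph is invented for illustration:

```python
# Breadth-first search over an explicit state graph.
from collections import deque

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"], "G": []}

def bfs(start, goal):
    frontier = deque([[start]])            # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                    # shallowest goal is found first
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                            # goal unreachable

print(bfs("S", "G"))  # ['S', 'B', 'G']
```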

Evolutionary computation

This approach to problem-solving draws on the well-established theory of evolution, which rests on the idea of the survival of the fittest. When a creature reproduces successfully in a harsh or changing environment, its coping mechanisms are passed down to later generations. Because offspring combine several traits suited to that severe environment, they are not mere clones of their predecessors. Humanity, shaped by the accumulation of advantageous mutations over countless generations, is the most notable example of how evolution changes and extends a species.

Genetic algorithms

Genetic algorithms are built on this evolutionary theory. They perform a form of directed random search: the developers compute a fitness factor in order to combine the two fittest candidates and produce desirable offspring. The fitness of each individual is determined by first gathering the population and then evaluating every member, with a score computed according to how well each member matches the intended requirement. The fittest members are then retained using a variety of selection methods, as sketched below.
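A minimal genetic-algorithm sketch of that selection-crossover-mutation loop, on the toy task of evolving bit strings toward all ones; the population size, mutation rate, and fitness function are arbitrary illustrative choices:

```python
# Minimal genetic algorithm: evolve random bit strings toward all ones.
import random

random.seed(0)
LENGTH, POP_SIZE, GENERATIONS = 12, 20, 40

def fitness(individual):
    return sum(individual)               # count of 1s

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]        # selection: keep the fittest half
    children = []
    while len(children) < POP_SIZE - len(parents):
        mum, dad = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)        # crossover: splice two parents
        child = mum[:cut] + dad[cut:]
        if random.random() < 0.2:                # mutation: flip one random bit
            i = random.randrange(LENGTH)
            child[i] ^= 1
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))   # typically converges to (nearly) all ones
```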






The role of AI in improving health care: artificial intelligence and appreciative inquiry

Kim Downey:

Part of my work supporting physicians involves facilitating connections, both among physicians and with others who assist them. In this vein, I introduced Dr. Wael Saasouh to Lisa Scardina. Our discussion touched on AI, with Dr. Saasouh sharing positive ways we can use technology and Lisa offering her perspective on AI, which also stands for Appreciative Inquiry. How we connect and have conversations matters! We talked about hope and finding common understanding. Dr. Saasouh and Lisa shared some excellent takeaways, including the importance of not feeling like you have to learn everything, which is impossible, and how essential it is to “find your people.”

Here’s to the benefits of Artificial Intelligence and Appreciative Inquiry!

Wael Saasouh, MD:

The utility of artificial intelligence is rising exponentially. Use cases seem to be at least doubling by the day, and some of these have direct applicability to the health care system. Some ways AI can be leveraged for the benefit of clinicians include:

Direct recognition.  Gamified reward systems and hyper-realistic virtual events to foster motivation and a sense of community.

Employee education opportunities.  Personalized skill advancement based on the identification of particular skills and targeted educational opportunities.

Fostering collaboration.  Recognizing areas of low morale, establishing interest-based connections among clinicians, and utilizing methods from high-functioning collaborators.

Research and academic advancement.  Automating routine tasks, accelerating data collection and analysis, and analyzing large datasets to uncover patterns, refine research questions, and improve clinical outcomes.

However, clinicians should not be required to produce more just because technology is available. AI is intended to enhance the work environment, recognize efforts, and provide tools for more effective job performance, creating a supportive and collaborative health care system.

We have to figure this out, and we can! An inspiring example is how our podcast episode came to be. Lisa and I didn’t know each other before, but through the efforts of one person, Kim, we connected and now we are inspired and have the potential to inspire others to make both small and large changes.

Wisdom I’ve learned over the years

Healthy pride. It’s OK to be proud of achievements, but never at the expense of morals and decency.

Purpose.  Everyone needs something that gives them a sense of purpose.

Material possessions.  Neither lack nor abundance will significantly contribute to a sense of pride and purpose.

Self-improvement. It’s beneficial to pursue self-improvement and look up to successful people, but the noise from social media can make this process seem daunting and overwhelming. Content creators often add to this noise in their quest for more content.

Balanced altruism. Altruism is essential in medicine, but too much of it can lead to neglecting oneself and one’s personal life.

Challenging the “tough it out and keep going” mentality. It’s important to realize that it’s never over, and we can always seek improvement and support.

We can combat burnout and disconnection through thoughtful initiatives and personal wisdom. Let’s continue to inspire and support each other in both small and significant ways.

Parting thoughts

Simple gestures matter.  A kind word and a smile can make all the difference. This rings true in physician-patient interactions, from both sides, and among team members at all levels.

Avoid creating unnecessary conflict.  There is room for good clinical care, compassion, fair compensation, career satisfaction, and accountability.

Unified effort.  We need to collaborate toward common goals, rather than isolating like-minded individuals and pitting them against each other.

Lisa Scardina:

Several years ago, inspired by a leadership program provided by a mentor, I completed the Appreciative Inquiry in Positive Business and Social Change program at Case Western Reserve University. The original premise was enticing and based on primary research at the Cleveland Clinic: teams that ask positive, strength-based questions achieve higher performance. Appreciative Inquiry focuses on positive idea generation over negative problem identification.

The program was enlightening and radically changed my approach and perspective, both professionally and personally. Appreciative Inquiry, as a methodology, is a hybrid approach bringing together the best of organizational management, development, leadership, and positive psychology. Inspiring high performance starts with how we think. Patterns of thinking can often be stuck in habitual patterns that focus on scarcity, fear, and negativity. Change the questions, and different answers appear.

As we explore the impact of technology and artificial intelligence in health care, we need to keep the human experience of these transformational tools and approaches in focus. While automation increases, how teams adopt new processes significantly affects whether investments in new technology and tools will yield positive results.

With my current team, when we look at the organization-wide performance scorecard, we intentionally ask questions like: What’s going well? What are the bright spots in this week’s or month’s performance? How might we enhance our culture and improve employee experience? What are we learning from some of our data-driven experiments that can enhance the experience of the physicians and organizations we work with?

Conversations like this create space for employees to lean in, engage, aspire, and dream. Appreciative Inquiry creates the structure and space to ask ourselves to dream about what is possible. If we never take the time to dream, then those aspirations will surely never happen!

On a personal note, as a wife and mother of three adult sons, I have worked to shift my mindset from worry and fear to one of possibility based on the strengths that are present and have already been demonstrated. Getting into a mindset of possibility begets more possibility.

I am so grateful for Professor David Cooperrider and the faculty at Case Western for their work and dedication to creating positive social change and giving us the tools to make those dreams a reality. And to my cohort from the program, whom I am still in touch with, thank you for continuing to inspire me with the ways you make a positive difference and share those experiences through authentic relationships across our group.

Artificial intelligence needs appreciative inquiry!

Wael Saasouh is an anesthesiologist. Lisa Scardina is a health care executive. Kim Downey is a physician advocate and physical therapist.


  • Open access
  • Published: 07 October 2024

Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students

  • F. Funer 1,2,
  • S. Tinnemeyer 1,
  • W. Liedtke 3 &
  • S. Salloch 1

BMC Medical Ethics volume 25, Article number: 107 (2024)


Background

Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed for ensuring responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders’ viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements for understanding and explicability in depth with regard to the rationale behind them. On the other hand, it surveys medical students at the end of their studies as stakeholders, of whom little data is available so far, but for whom AI-CDSS will be an important part of their medical practice.

Methods

Fifteen semi-structured qualitative interviews (each lasting an average of 56 min) were conducted with German medical students to investigate their perspectives and attitudes on the use of AI-CDSS. The problem-centred interviews draw on two hypothetical case vignettes of AI-CDSS employed in nephrology and surgery. Interviewees’ perceptions and convictions regarding their own clinical role and responsibilities in dealing with AI-CDSS were elicited, as were their viewpoints on explicability and on the necessary level of understanding and competencies needed on the clinicians’ side. The qualitative data were analysed according to key principles of qualitative content analysis (Kuckartz).

Results

In response to the central question about the necessary understanding of AI-CDSS tools and the emergence of their outputs, as well as the reasons for the requirements placed on them, two types of argumentation could be differentiated inductively from the interviewees’ statements. The first type, the clinician as a systemic trustee (or “the one relying”), highlights that there needs to be empirical evidence and adequate approval processes that guarantee minimised harm and a clinical benefit from the employment of an AI-CDSS. Based on proof of these requirements, the use of an AI-CDSS would be appropriate, as according to “the one relying”, clinicians should choose those measures that statistically cause the least harm. The second type, the clinician as an individual expert (or “the one controlling”), sets higher prerequisites that go beyond ensuring empirical evidence and adequate approval processes. These higher prerequisites relate to the clinician’s necessary level of competence and understanding of how a specific AI-CDSS works and how to use it properly in order to evaluate its outputs and to mitigate potential risks for the individual patient. Both types are unified in their high esteem of evidence-based clinical practice and the need to communicate with the patient on the use of medical AI. However, the interviewees’ different conceptions of the clinician’s role and responsibilities cause them to have different requirements regarding the clinician’s understanding and the explicability of an AI-CDSS beyond the proof of benefit.

Conclusions

The study results highlight two different types among (future) clinicians regarding their view of the necessary levels of understanding and competence. These findings should inform the debate on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. While current approaches search for appropriate minimum requirements of the necessary understanding and competence, the differences between (future) clinicians in terms of their information and understanding needs described here can lead to more differentiated approaches to solutions.


Background

Clinical Decision Support Systems (CDSS) are being increasingly introduced in various domains of health care for diagnostic, prognostic, therapeutic and other purposes. Clinical decision support as such has been discussed for decades [1]; however, innovations in artificial intelligence (AI) and Machine Learning (ML) have intensified the debate on the chances and pitfalls of clinicians relying on computerised input. Potential benefits of the introduction of AI-CDSS in clinical workflows arise from the high accuracy of predictions, in which AI-CDSS, even today, outperform human specialists in some tasks [2]. In addition, the efficiency of health care might be enhanced by employing computerised support, especially for simple or repetitive tasks that can be meaningfully entrusted to machines. On the other hand, risks are identified in such different fields as patient safety (e.g. alert fatigue), interoperability, user acceptance, users’ computer literacy, and disrupted and fragmented workflows [3]. Attempts to increase reliability and trustworthiness as well as technological harmonisation are considered to be key for the future success of AI-CDSS in health care.

From an ethical perspective, issues pertaining to the interpretation of established bioethical principles, such as justice or autonomy, concerning AI-CDSS have been discussed intensively in recent years. Regarding justice, for example, the right to equal access to health care plays a key role in enabling all patients to profit from newly introduced health care technologies as soon as their clinical benefit has been proven. More intricate ethical issues relate to testimonial (in)justice if doctors have to decide whether to trust a patient’s testimony or the outputs generated by a machine from big socio-demographic or clinical data [4]. Patient autonomy can be challenged in various ways by the introduction of AI-CDSS, for example, given the lack of clarity as to whether information needs to be provided about the use of an AI-CDSS in clinical care [5] and how much and what information needs to be provided to the patient to enable him or her to give informed consent on this basis. Additionally, the use of AI-CDSS can compromise the physician’s autonomy by making it difficult or even impossible for the clinician to assess recommendations, by unclear arrangements for integrating the AI-CDSS into shared decision-making, or by removing the clinician’s control over when and how the AI-CDSS is being used [6].

Other ethical issues relating to the introduction of AI-CDSS are closely linked to epistemological questions about the degree to which health care professionals are able to reproduce the outputs of computerised decision support. Some authors argue that “black-box medicine” “conflicts with core ideals of patient-centered medicine” and “is not conducive for supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient” [7]. The question of the need to understand an AI tool has become an important part of the discussion on the use of AI in health care, as it often seems to be closely linked to normative concepts such as trustworthiness, accountability and agency of health care professionals. A vivid discussion has emerged around the question of whether “explicability” needs to be introduced as a further principle in the canon of bioethics when it comes to the evaluation of AI-driven tools [8, 9]. The term “explicability” is far from unambiguous, and other terms, such as interpretability or transparency, are often used interchangeably [10]. The trade-offs that often need to be made between explicability and other goals in health care, such as accuracy, are also ethically meaningful [11]: increased explicability in some applications comes at the cost of the accuracy of the predictions. In this respect, approaches to the necessary level of explicability of AI-CDSS are highly case-dependent and constrained not only by the clinicians’ skills and their own understanding but also by technical limitations that depend on the computational methods chosen in the development of medical AI [12].

Empirical research on stakeholder perceptions of the (ethical) chances and challenges of AI-CDSS is greatly needed to inform the debate and further guide technological development and policy-making. On the one hand, this can provide a reality check for the primarily conceptual-normative discourse and thereby test the validity of arguments in practice. On the other hand, explorative empirical research in particular can help to generate questions that may not have been asked so far. Initial qualitative empirical evidence exists on the perceptions, expectations and attitudes of clinical users of CDSS, as well as on barriers to and facilitators of its use [13, 14, 15, 16]. It generally appears that health care professionals working in hospitals are especially afraid of a loss of professional autonomy and of the difficulties of integrating the systems into their clinical workflows [17]. Previous presentations of qualitative empirical results have succeeded in particular in collecting and mapping the breadth of ethical aspects that are considered relevant by health care professionals as a collective. However, greater differentiation is now required in order to achieve a greater depth of understanding of the underlying reasons for the ethical aspects that are considered important.

Against the background of the current state of research, this article reports on a qualitative interview study exploring German medical students’ perceptions of ethical issues related to exemplary AI-CDSS. Medical students were selected as interviewees because they are future health care professionals who will most likely be dealing with AI-driven support throughout their professional careers. Given their age, their views on digital technologies might differ considerably from those of the (senior) physicians who have been the participants of previous studies. Although their limited clinical experience can, on the one hand, lead to attitudes different from those of experienced physicians, medical students at the end of their studies do, on the other hand, have initial insights and experiences in various clinical fields. They also have ideals and expectations regarding their own clinical role and responsibilities within health care. In this respect, we argue that medical students exhibit a characteristic that distinguishes them from experienced physicians, namely that their ideals of the professional role and its associated responsibilities are not immediately relativised or compromised by everyday practical constraints. The study presented here thus complements the empirical research with the views and attitudes of a further group of key protagonists of our future health care.

The qualitative interview study generated findings on various aspects of the use of AI-CDSS that have already been published elsewhere, such as on questions of the (final) responsibility of health care professionals [18] or on the necessity and scope of information and communication about the use of AI-CDSS [19]. However, a central theme of the interviews was the question of the clinician’s need for understanding and the competencies required to be able to use an AI-CDSS in a responsible manner. Here and in the following, “understanding” refers to the extent to which the interviewees consider it necessary to understand how a specific AI-CDSS works in general and how an individual recommendation comes about. The need to “understand” the AI-CDSS and its recommendations in this way results in requirements for information about the AI-CDSS (cf. “explainability”/“explicability”) on the one hand and for the competencies required for its use on the part of the clinician on the other. The goal of the exploration was to understand the medical students’ attitudes, and subsequently their reasons, regarding the understanding necessary to employ AI-CDSS. In contrast to previous studies, our approach enabled us not only to present the attitudes of future health care professionals but also to situate them within their epistemic-ethical context of justification, that is, to illustrate the connection between the necessary understanding and the self-assigned clinical role as a doctor with its associated responsibilities. In this way, the different professional attitudes can be traced back to their premises, and existing differences can be explained more comprehensively. On this basis, two types of medical students with different rationales can be differentiated. To the best of our knowledge, this study is the first to present such contrasting positions within the debate on understanding requirements for medical AI in their context of justification, elicited by qualitative research.

Methods

A qualitative interview study was conducted to arrive at an in-depth understanding of medical students’ perceptions of ethical issues surrounding AI-CDSS, with a special focus on the knowledge and competencies needed to use such systems in clinical practice. The interview guide and other key findings from the study, which also included nursing trainees and patients as interviewees, have already been reported elsewhere [18, 19]. Semi-structured interviews were conducted with advanced medical students at a German maximum-care hospital. Ethical approval was obtained from the local Research Ethics Committee prior to conducting the study (Reg. No. 9805_BO_K_2021).

Data collection

Interview partners were included in the convenience sample if they met the following inclusion criteria: enrolment in the fourth or fifth year of medical study, age ≥ 18 years and sufficient proficiency in German. No relationship was established between the participants and the interviewer prior to the study; they met for the first time in the interview situation. Participants received some general information about the interview topic before the interview. Due to the COVID-19 pandemic, all interviews were conducted via video call. They were conducted in German between June and July 2021. Most participants were at home and alone during the interviews. A customary expense allowance was paid for participation.

The interview guide for the semi-structured interviews included two case vignettes that were presented during the interviews in written form and with pictures (see “Medical Students’ Interview guide” within Supplement 1, published in [18]): the first vignette introduced an AI-CDSS to support doctors in the surgical setting (intra-abdominal surgical navigation), and the second presented an app for prognosis and therapy planning in chronic kidney disease. The AI-CDSS were selected to vary in terms of the clinical field of application (surgery vs. nephrology), acute vs. long-term care, and the degree of support (manual guidance, e.g. for incision lines, vs. prognosis estimation and therapy planning). The interviewees had the opportunity to discuss digitisation in health care in general, express their spontaneous reactions to the vignettes and then answer questions, for example, on patient information or on the competencies that must be expected from future clinicians. The interviews therefore combine characteristics of both theory-generating expert interviews and problem-centred interviews [20]: the interview guide was structured on the basis of the debate in the literature, so that it addresses typical topics such as the question of the understanding of AI-CDSS, but it remains open to the interview situation and the interviewees with regard to the scope and content discussed, thus allowing the interviewees to decide on the relevance and further exploration of topics in the interview. The semi-structured interviews thus combine deductive and inductive methods. The interviews were audio-recorded and field notes were taken. We stopped conducting interviews when saturation was reached, that is, at the point where additional interviews no longer generated new information relevant to the research question, based on an iterative process of data collection and data analysis. Saturation here refers to the characterisation of the two argumentative types identified in this study.

Data analysis

Interviews were anonymised with regard to persons, places and institutions, and fully transcribed. The data analysis followed key principles of qualitative content analysis according to Kuckartz [21]. In this multistage procedure, inductive category building from the data is combined with theoretically derived categories that are defined prior to the start of the inductive analysis. In order to develop the deductive categories, topics related to the research question were extracted from the literature and subsequently interpreted in light of what emerged from the interviews. We documented coding rules for the deductive categories and selected exemplary passages (see Supplement 2, published in [18]). The data analysis was conducted by FF, ST and SS, researchers with interdisciplinary backgrounds in medical ethics, medicine, philosophy and pedagogics. MAXQDA (2020) software was used to support the data analysis. The coding system was constantly revised and considerably expanded during the analysis. Ambiguities and disagreements were discussed critically among the authors and resolved by consensus.

Results

The interviews with 15 medical students (self-reported gender: 8 ♀ / 7 ♂; average age 25.5 years, range: 23–36 years) lasted an average of 55:49 min (range: 46:55 to 75:37 min). The interviewees had already finished all pre-clinical subjects, all clinical-theoretical subjects (e.g. pharmacology, pathology) and major clinical subjects such as surgery, internal medicine or emergency care. At this point in their studies, the students had been in full-time practice in hospitals for at least five months.

The results presented in this article are drawn from the overarching categories “Reliability of the technology”, “Traceability/Comprehensibility of decisions” and “Competencies” (see Supplement 2, published in [18]). The reporting of methods and results was guided by the Consolidated Criteria for Reporting Qualitative Research [22]. Exemplary passages supporting the main findings were translated from German into English by the authors for inclusion in this article. Each interview was analysed in its overall epistemic-ethical context of justification and the explanation of its premises. This made it possible to differentiate between different types of attitudes and their justifications among the future clinicians.

The focus of the results reported in this article lies on self-perceived clinical roles and necessary levels of understanding when using AI-CDSS. Different patterns of justification for the interviewees’ convictions regarding these topics were identified. Based on the interviewees’ statements, we have inductively reconstructed two major types to illustrate the most important alternative justification patterns: on the one hand, the clinician as a systemic trustee (“the one relying”) and, on the other hand, the clinician as an individual expert (“the one controlling”).

We first introduce the common starting points of these two argumentative types in the results section. Subsequently, the two types are reconstructed and their different patterns of justification are elaborated in parallel using interviewees’ statements. A tabular overview then compiles the key characteristics and elements of the alternative justification patterns (see Table 1). Finally, we illustrate argumentative challenges that emerged in some interviews, with which the interviewees said they would be confronted when dealing with medical AI decision support.

The interviewees’ statements represent requirements they placed primarily on themselves but claimed to be generalisable to “clinicians” or “doctors” (cf. e.g. SI-5, SI-11). While some of the interviewees could be assigned quite easily to one of the two argumentative types (see Footnote 1), there were some interviews in which parts of both reasoning patterns were combined. The two types thus mark the ends of a continuum, with individuals sometimes leaning more towards one type and sometimes more towards the other.

Starting point of both types: scientific proof of benefit

Both argumentative types indicate that they need scientific evidence of the clinical validity of AI-CDSS outcomes. Specifically, interviewees believe that a positive effect of using AI-CDSS compared to not using it must first be demonstrated. In this respect, clinical decisions made by clinicians with an AI-CDSS should be proven to be correct at least significantly more frequently than comparable decisions made by average specialists without an AI-CDSS (cf. e.g. interviews SI-1, SI-4, SI-6, SI-7, SI-13 and SI-14):

And, yeah, otherwise maybe like validation studies, to what extent the things that the device predicted or recommended were actually good compared to more traditional methods or something like that. (Stud_Interview_10, Position: 70)

According to one interviewee, other criteria that would have to be evaluated are benefits such as the following:

[T]hat would be, for example, fewer complication rates after surgery, shorter surgery duration, in other words, all kinds of things that would be beneficial to the patient. And, of course, also for the surgeon. (Stud_Interview_12, Position: 29)

In addition, regular re-evaluations should detect long-term changes in human-machine interaction and make them assessable regarding their outcomes (cf. SI-4, SI-6 and SI-7):

There has to be a superiority that if you work with this support system now, that it really brings the advantages that you expect. […] But you can really only find that out over time by comparing it with each other, whether it really brings advantages and fewer complications occur, for example, and the duration of surgery is shortened and so on. (Stud_Interview_7, Position: 33–35)

According to some of the interviewees, the existing evidence on AI-CDSS should be reviewed, assessed and approved by appropriate expert bodies, such as governmental authorities or medical societies in the relevant field (cf. e.g. SI-6), before clinical deployment.

Based on this common starting point of sufficient evidence and suitable bodies for evaluating it, the interviews reveal considerably different positions on how clinicians should deal with the scientific evidence of a positive proof of benefit. Two main argumentative types will now be reconstructed.

Reconstruction Type I (“the one relying”)

For “the one relying”, errors and the causing of harm are an inevitable part of medical practice (cf. e.g. SI-3, SI-6 and SI-12):

You simply have to say goodbye to that [= the idea of not making mistakes; authors]. There are always mistakes somewhere and hopefully they will be fewer with this programme, but mistakes and misjudgements do happen. (Stud_Interview_12, Position: 85)

Empirical evidence of better outcomes and lower error rates with the help of AI-CDSS is, therefore, decisive for the clinician’s decision on the use of this technology:

[…] that I personally, as soon as it was empirically shown that this surgical assistant works well and brings better results, that I would trust it very well, probably also more than people who operate without this assistance. (Stud_Interview_6, Position: 47)

It’s just a question of who is better or who makes fewer mistakes, whether you can rely on it more or not. (Stud_Interview_14, Position: 41)

Even in cases where harm was caused in association with the use of an AI-CDSS, the use could, in retrospect, be better justified than non-use:

But, nevertheless, it would have been the most rational thing to do, even if the end result is a worse outcome. In my view, it would still have been the most rational thing to do, or the most appropriate thing to do. Basically, to consider that it is more likely that this outcome will not occur. (Stud_Interview_6, Position: 115)

The goal of “the one relying” is, thus, to cause as little harm as possible. Protagonists of this type consider it necessary that a higher benefit with the help of AI-CDSS has been empirically demonstrated. In this respect, the use of AI-CDSS is understood to be the evidence-based best available remedy for achieving the desired benefit in most cases. Accordingly, the clinician is also not responsible for harm resulting from the AI-CDSS recommendation because, based on the empirical evidence, its use was indicated (cf. e.g. SI-1 and SI-6). The prerequisite, however, is that the clinician correctly informed the patient about the potential harms beforehand (cf. e.g. SI-3).

For “the one relying”, this position is reflected in a necessary level of understanding that essentially consists in knowing that appropriate regulatory authorities and processes exist that have verified the scientific evidence of benefit, for example, as the result of a certification process or a recommendation by medical societies (cf. e.g. SI-3 and SI-6). This knowledge of appropriate processes is also framed as trust in the existing system:

[…] but at a certain point, there’s just a certain amount of trust that’s necessary, and I just have that trust in the people who programmed this system. (Stud_Interview_6, Position: 153)

However, an understanding by the clinician of how the AI-CDSS works and how it arrives at its outcomes is held to be unnecessary by “the one relying”:

I didn’t mean that I have to understand the system. I don’t really have the […] major interest in that. So, as long as I’m told that it’s been empirically shown that this system works, I’m not so incredibly interested in how this system comes to the benefit, if I’m honest. (Stud_Interview_6, Position: 71–73)

According to this understanding of the clinician’s role in dealing with an AI-CDSS, the clinician is responsible for ensuring that information about the advantages and disadvantages or risks of its use is correctly conveyed to the patient (cf. e.g. SI-1 and SI-3). The evidence that decisions with AI support are generally better than those without it justifies the acceptance of potential errors that are caused by the AI-CDSS or occur during its use:

But I still think that if it was really shown that mine [= my decision; authors] is usually worse than the AI’s and then I end up accepting fewer mistakes and preventing many mistakes on my part in return, then it was still the right decision to follow. In my opinion, it would be a bad decision not to trust the AI just because it might sometimes make different mistakes than I do. (Stud_Interview_6, Position: 85)

Reconstruction Type II (“the one controlling”)

The second argumentative type identified from the interviews shows, in some respects, a pattern similar to the first; in other respects, major differences emerge. Like the first type, “the one controlling” acknowledges the occurrence of errors and harm in the context of medical practice (cf. e.g. SI-2, SI-5 and SI-12) and likewise aims for the lowest possible number of errors and harms. However, the task of avoiding harm is seen as anchored individually in the role of the clinician: “the one controlling” tries to compensate for or reduce sources of error for the individual patient as best as possible (cf. e.g. SI-5 and SI-8). The clinician is therefore in the role of always questioning the outcome of an AI-CDSS and judging whether it is correct for the patient’s unique situation (cf. e.g. SI-1, SI-2, SI-5, SI-7, SI-8, SI-9, SI-10 and SI-13):

Then, of course, the doctor really has to check whether this app or, yes, this support has then also decided correctly for him, so to speak. (Stud_Interview_2, Position: 87)

The clinician is in a position to consider the context, neglected aspects or the entirety of the patient’s situation more comprehensively than the AI-CDSS ever can (cf. e.g. SI-4 and SI-9):

And that’s also interesting, for example, […] sometimes things are a bit trickier than you can type them in [= in the input data set of the AI; authors], I’d say, when someone describes them to you. (Stud_Interview_4, Position: 59)

[A]s a clinician, you could almost just rely on all sorts of computer systems and then you wouldn’t need people at all. […] But I think it always needs that one person who can somehow connect everything together a bit and who then also takes responsibility for interpreting something out of it. (Stud_Interview_9, Position: 29)

If, despite this critical handling of AI-driven recommendations, errors occur because the clinician has inadequately checked an outcome, then, according to “the one controlling”, it is the clinician who has failed:

And, accordingly, that is then ultimately medical malpractice, if he then blindly trusts the machine. (Stud_Interview_12, Position: 37)

In this respect, the recommendation of an AI-CDSS is only one further element that can assist in identifying a correct decision; it must be evaluated in the context of clinical guidelines, empirical data and consensus. Basing a decision solely on the information provided by an AI-CDSS does not constitute sufficient justification:

We always have to justify what we do. And we do so on the basis of guidelines that rest on data, facts and consensus. And if this app plays a role, then that’s part of it. If I relied on the app only without checking the scientific basis for it, then it’s my fault. (Stud_Interview_13, Position: 87)

In summary, “the one controlling” argues that harm is to be reduced, and it is welcome if AI-CDSS advance this goal in an evidence-based way. However, the clinician has to complementarily consider the limitations of the AI-CDSS and prevent potential harms that may be caused by its use. According to “the one controlling”, the clinician is not only in the role but also has the responsibility to control and judge whether the AI-CDSS’s recommendation is appropriate for the case at hand (cf. e.g. SI-2, SI-4 and SI-9):

I would never say that the system should be allowed to take the decision away from me, honestly. So, I think the system can support me in that, yes, but ultimately, I still have the responsibility. (Stud_Interview_2, Position: 109)

A sufficient level of understanding is required to enable the clinician to consider the system’s limitations (cf. e.g. SI-2, SI-5, SI-11 and SI-14):

If you don’t understand that [= how the CDSS comes from its input to its output; authors] or you don’t understand the basic idea behind it, I would be afraid that you’re relying way too much on systems like that way too quickly. And if you don’t understand what’s happening in the meantime, what’s happening inside the device or inside the system, I would also think that you yourself can’t control what comes out of it anymore. And if you use a system like that, I think you should also control yourself what’s happening and not rely on it blindly. (Stud_Interview_11, Position: 35)

“The one controlling” knows the limits of his/her own understanding, given his/her qualification in medicine (cf. e.g. SI-2 and SI-5), but demands at least enough understanding to be able to use the system competently in the context of his/her own medical practice:

So that I can use it optimally, honestly. Because, of course, I’m not a physicist and not a mathematician. […] But I should definitely have a basic knowledge of how this comes about. (Stud_Interview_2, Position: 111)

Some interviewees of this second type consider it necessary to know the advantages and disadvantages and the specific risks of AI-CDSS use (cf. e.g. SI-2 and SI-10). They want to know about the regulatory review procedures and certifications by experts (cf. e.g. SI-8), to have a basic knowledge of how ML and neural networks function (cf. e.g. SI-2, SI-5 and SI-8), and to understand how the system arrives at a specific recommendation (cf. e.g. SI-2, SI-5, SI-8 and SI-9). They also want to know about the data basis and the origin and context of the data (cf. e.g. SI-5, SI-7, SI-8, SI-10 and SI-13). Sufficient clinical experience with the treatment in question, prior to using an AI-CDSS, is also seen as necessary to adequately assess the quality of a recommendation (cf. e.g. SI-7).

That means, from my point of view, either the basic data collection or the way to get there would have to be somehow transparent, so that I as an end user of this AI can somehow assure myself that this algorithm has drawn the right conclusions from right data and not, from wrong data, conclusions that are 99% right while at 1% the error keeps recurring, and I nevertheless rely 100% on this AI. (Stud_Interview_8, Position: 31)

And I think there should be a certain transparency in it or a certain explanation. So, if I can’t understand how this support system comes to this cut or to this position, then I would have to be able to understand, okay, how do you analyse the other structures around it that you come to the conclusion that that’s exactly where the cut should be. (Stud_Interview_8, Position: 25)

Attaining such a level of understanding requires, on the one hand, the aforementioned competencies on the part of the professionals and, on the other hand, an appropriate presentation of the information by the AI-CDSS:

So, of course, I would prefer to inquire, […] so, in the best case, the system could somehow explain to me how it came to this decision, so I know that it first explains or first marks which structures it has recognised and then next makes the cut, so that I can just reassure myself: “aha, maybe the programme has recognised a structure incorrectly and has come to a wrong cut.” Then I could follow up on this error and say, okay, there’s a mistake here, that’s why I don’t take over this cutting direction. (Stud_Interview_8, Position: 45)

Understanding is seen as particularly relevant for informing patients adequately (cf. e.g. SI-2 and SI-8) and for being empowered as a clinician to “intervene” in the use of the AI-CDSS when needed (cf. e.g. SI-5 and SI-13):

I know what features there are, but I also know how to turn those off and I know my fallback level. How much the system can interfere with me, I’d say, and then how I could bypass that. (Stud_Interview_13, Position: 41)

Only a comprehensive understanding would allow the clinician an informed assessment of the system’s limitations and prevent an overestimation of its performance:

[t]o make sure that you don’t hopelessly overestimate it. It’s not like some God-given thing that suddenly knows everything. It also has its limits, and one should be clear about that. (Stud_Interview_10, Position: 116)

Discussion

Expectations of and requirements for the design of human-AI collaboration in health care contexts have been a focus of philosophical and ethical publications for some years now [3, 6, 23, 24, 25, 26, 27, 28]. Questions about the epistemological quality and limitations of AI-generated recommendations, and the resulting ethical questions about the morally legitimate way of dealing with these chances and limitations, have attracted particular attention. At the core of many analyses was the question of whether a highly reliable or accurate AI recommendation is sufficient, or whether and to what extent it must be explainable in order to justify a diagnostic or treatment decision based on it from an epistemic and ethical point of view [7, 10, 29, 30, 31]. This is discussed mostly in the context of a potential loss or diffusion of responsibility and accountability [32, 33, 34, 35, 36, 37, 38]. Our results show that this complex question also concerns the interviewees, who see it as relevant to their own future clinical practice, including, for instance, the question of whether alternative subjects of responsibility could be assigned [18]. Many of the arguments found in the literature could also be found, similarly or even identically, among the interviewees.

All interviewees consider themselves representatives of evidence-based medicine. Scientific proof of benefit (or clinical validation) was seen as the most important starting point for the use of applications such as AI-CDSS in health care (cf. similarly [14]). The interviewees only considered the use of AI-CDSS worthy of discussion if it was proven to achieve at least as good a performance and outcome as clinicians achieve without AI-CDSS (cf. also [16]). The more reliable the evidence, the more obvious or even imperative the use of the application would be. The rationale for this imperative is the recognised goal of medical practice: to maximise patient benefit, or, more precisely, to serve the well-being and will of the patient (cf. [6]). From the evidence-based positive proof of a benefit for patients therefore follows the necessity to pursue the potential of AI-CDSS wherever this is feasible (cf. [10]).

Decisive, however, is the image the interviewees hold of the clinician’s role: it determines the argumentative justification for one or the other answer to the question of how to deal properly with this scientific proof of benefit. From this “professional role”, they derive which tasks they have to fulfil, which accountability for the clinical decision-making process this entails, and which competencies they need to guarantee this accountability – in other words, which moral obligations go hand in hand with it.

The students interviewed anticipate that their future role as clinicians will entail the moral obligation, in the context of the respective health system, of selecting and suggesting to patients those diagnostic and treatment options that, based on evidence, cause the least harm and the most benefit. However, while the interviewees of the type “the one relying” see this goal as being best pursued by using an evidence-based AI-CDSS to statistically benefit the most patients – an AI-CDSS held to be based on a broad database and trained neither to underfit nor to overfit – the interviewees of the type “the one controlling” add to this requirement the need to check the validity of the specific recommendation of the AI-CDSS for the individual patient in the given situation. Thus, while some rely on the evidence-based validity of the positive proof of benefit of AI-CDSS for a collective and consider that sufficient to identify the greatest possible patient benefit, others focus on the nonetheless possible limitations of AI-CDSS statements that may limit their evidence-based validity for the individual patient (even if, statistically, such an approach might result in more frequent errors, cf. [10]).

This is a well-known epistemic-ethical conflict about how to achieve the greatest possible benefit: either by striving for the greatest possible benefit for the entire group (and thus indirectly for each individual on average) or by striving directly for the greatest possible benefit for the individual patient. This trade-off is not specific to AI applications. It is rather a generic problem of applying generally functional measures or tools with existing limitations to individual cases. However, this challenge is made all the more apparent by the knowledge about the limitations and biases of data and of the AI applications built on them (cf. [15], also for doubts about the robustness of data). The use of AI would be most widely accepted [15, 17] and unobjectionable from an ethical point of view if its users could be sure that the AI-CDSS could not make any mistakes. However, there will probably never be error-free datasets (e.g. due to noise or recording errors and biases) [10], which always implies false-positive and false-negative AI-CDSS predictions. This means that compromises will always have to be made. The ethical question thus arises of what minimum data quality is required and how recommendations derived from such data should be handled, in view of their limitations, when making decisions about the quality and length of an individual patient’s life. As Amann et al. argue, in the context of AI use, the principle of non-maleficence urges clinicians not to harm their patients “either intentionally or through excessive or inappropriate use of medical means” [10] and, furthermore: “This is why, from a medical point-of-view, not only clinical validation but also explainability plays an instrumental role in the clinical setting” [10]. Similarly, the obligation to benefit and not to harm the individual patient urges future clinicians of the type “the one controlling” to avoid, if possible, patient injury due to inappropriate care – which, for them, could only be achieved through sufficient scrutiny of the appropriateness of the decision in question (cf. [10, 16]). Accordingly, our interview results shed light on the debate on the importance of explicability in medical AI and on the trade-offs that sometimes need to be made with other goals in health care. From the point of view of “the one controlling”, explicability serves as a means of preventing patients from being harmed by the use of medical AI. In general, this argumentative type strives for a sufficient understanding of AI-CDSS and their outputs in order to provide optimal care and to address patients’ information needs. “The one relying”, the second argumentative type we identified, also upholds minimising harm when using AI-CDSS but does not strive for an understanding of the machine outputs to the same degree. While interviewees of the one argumentation type call for explicability, under the assumption that this will allow them to better prevent harm to individual patients and to better inform them (and thus better benefit the patient), interviewees of the other argumentation type call for less explicability, under the assumption that this will statistically allow more decisions to be made that benefit patients. Hence, our results might enrich the – so far predominantly theoretical – debate on the explicability of medical AI by highlighting and discussing different needs as perceived by future health care professionals.
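The underlying trade-off can be given a minimal decision-theoretic sketch (the notation is ours and purely illustrative; it is not drawn from the interview data). Let $\varepsilon_{\mathrm{AI}}$ and $\varepsilon_{\mathrm{doc}}$ denote the average error rates of AI-supported and unaided clinical decisions. “The one relying” reasons at the population level: if $\varepsilon_{\mathrm{AI}} < \varepsilon_{\mathrm{doc}}$, then across $n$ comparable patients the expected number of harmful errors satisfies $n\,\varepsilon_{\mathrm{AI}} < n\,\varepsilon_{\mathrm{doc}}$, so consistently following a validated system minimises expected collective harm. “The one controlling” reasons at the case level: for a patient with characteristics $x$, the conditional error rates $\varepsilon_{\mathrm{AI}}(x)$ and $\varepsilon_{\mathrm{doc}}(x)$ may be ordered differently from the averages, for instance when $x$ is underrepresented in the training data, so case-by-case scrutiny can reduce harm for that individual even if it increases the error rate on average.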

Another aspect, raised especially by interviewees of the type “the one controlling”, concerns the limitation that AI-CDSS can only take into account factors that can be operationalised. Clinicians would have to take into account those aspects associated with the patient’s personality, values, life situation and socio-cultural background (cf. similarly [29, 39]), as these realise relevant aspects of patient autonomy. This point is not addressed by interviewees of the type “the one relying”; whether this is because they consider these aspects to be operationalisable or because they consider them to be of secondary importance cannot be determined on the basis of the data. However, attending to this aspect seems all the more necessary the more routinised the use of AI-CDSS in clinical practice becomes [39], in order to continue to meet the needs, wishes and preferences of individual patients in the future.

In summary, the future clinicians interviewed read the evidence of a positive proof of benefit, with its existing limitations, against the background of their respective conceptions of the clinician’s role and its moral obligations.

Correspondingly, the accountability or responsibility for harm prevention is also considered to be realised in different ways when AI-CDSS recommendations are passed on: for some, through the evidence-based, indicated use of the AI-CDSS and transparent information about its limitations; for others, only through the critical review and validation of the respective AI-CDSS recommendation with regard to the individual case. There is agreement among the interviewees, as in other empirical studies [14, 15, 18], about the importance of the clinician’s responsibility when using AI support; however, the ways in which this responsibility can be executed vary greatly among the interviewees, as our results show.

According to the interviewees of the type “the one relying”, in order to fulfil their role and moral obligations in dealing with AI-CDSS, clinicians need a sufficient understanding of the advantages, disadvantages and existing risks of the use of a certain AI-CDSS to be able to communicate them to patients for the latter’s informed consent. Knowledge of rigorous validation processes for assessing the evidence of benefit and of regulatory standards, for example, through governmental authorities, medical societies and/or certification according to medical device regulations, is sufficient reassurance for them to use an AI-CDSS (cf. similarly [10, 16]).

By contrast, interviewees of the type “the one controlling” demand a more comprehensive understanding from clinicians, one that enables them to critically review and interpret AI-CDSS, their individual recommendations and underlying assumptions (cf. [40]). Clinical decision-making has so far been carried out by clinical experts primarily on the basis of medical reasons [29, 35], not solely on the basis of data. Explanations enable clinicians to interpret AI-driven recommendations in light of the respective situation and the individual patient [29] and to align them with their own medical clinical judgement [7]. Both the interviewees and the literature concede that different levels of understanding and explanation need to be achieved for different decision-making scenarios in everyday clinical practice, depending on the different risks and impacts on the patient’s life [10, 30]. More extensive competencies are required to fulfil this kind of clinician role, and higher demands need to be placed on the explicability of the AI-CDSS itself. Future clinicians of the type “the one controlling” see the inability to fulfil their role and meet its moral obligations as a normative barrier to the use of AI-CDSS (cf. similarly the “distancing” of clinicians when the rationale for an AI-CDSS recommendation could no longer be understood [14]).

The need for competencies and knowledge as expressed by professionals is already known [10, 15, 16, 18], which is why the discussion on tailored training and professional development regarding the use of AI in clinical practice has recently gained momentum. Initial consensus studies are attempting to identify the skills and learning objectives that clinicians need in order to use AI tools (see, e.g., [41]), and national and international initiatives to integrate these into curricular structures have been launched; however, standardised training and study programmes that are available everywhere are still often lacking (cf. [42]). A study at two German medical schools has also shown a positive correlation between AI literacy and students’ positive attitudes towards AI (cf. [43]). Our study adds to this existing knowledge by differentiating between two types of students whose demands for education and competencies differ insofar as they perceive different levels of understanding to be necessary for using AI-CDSS in practice. Both types, however, share the view that future clinicians must be equipped with the appropriate skills to be able to meet the normative demands that stem from their professional role. This includes, for example, knowledge about the advantages and disadvantages/risks of specific AI-CDSS, regulatory processes for reviewing clinical validity, the basics of information technology, and the competencies to assess the underlying dataset with its limitations and the reasonableness of a recommendation. Approaches such as that proposed by Sand et al. [36], based on “Entrustable Professional Activities”, appear particularly constructive for this purpose. With the help of such frameworks, the necessary competencies can be identified in order to be able to ascribe certain responsibilities. For both types, it can be said: “Being a competent operator of such systems […] demands more from physicians than becoming information specialists. It requires a more general awareness of the fallibility of these systems and the various ways in which their utilization might fail” [36]. The answer to the question of the appropriate scope of clinicians’ competencies will, nevertheless, be measured by the extent to which they should be able to safeguard control over the clinical decision-making process: clinicians of the type “the one relying” will be able to get by with significantly fewer competencies than those of the type “the one controlling”. However, this is due solely to the different role expectations and responsibilities assigned to clinicians.

The results of our study underline that the reference to proof of high accuracy and the need for explicability or understanding are by no means contradictory. In this respect, our study adds an empirical perspective to debates on explainable AI, which so far have had a predominantly technical or theoretical character. While some of the future clinicians interviewed can be linked to one argumentative type and others can be categorised as belonging to the other, only a small number of those interviewed use both references to realise their conception of the clinical role along with its moral obligations. For the future clinicians, both represent approaches by which they deal hermeneutically with existing theory and evidence in order to best fulfil their idea of the clinician’s role and its responsibilities – in each case, with the goal of serving the well-being and will of the patient. As clear as the normative preference for AI-CDSS use may be (provided scientific proof of benefit is given), it becomes clear that the epistemological requirements for ensuring the benefit promised for the individual patient follow different rationales. The future clinicians interviewed assess differently the trade-off between a normatively imperative maximum benefit and the (also normatively imperative) epistemic certainty of achieving this benefit (cf. similarly the conceptual analysis in [31]).

Limitations that need to be considered in the interpretation of this study’s results arise from the sample and the recruitment process. The study mirrors the perceptions and attitudes of German medical students from one university and cannot be generalised unconditionally. Further argumentative types might have been identified had a different (and broader) sample of study participants been drawn upon. Furthermore, each interviewee’s clinical experience is so far very limited, and they have minimal or no personal experience in dealing with AI-CDSS in clinical practice. Their answers therefore have a hypothetical character, insofar as in practice they might act differently and formulate different claims than in the interview situation based on the case vignettes. Although this limitation must be taken into account in the interpretation, we believe that the limited practical experience also has the advantage that positions are developed on the basis of personal convictions and are not relativised too quickly against the background of practical feasibility. Nevertheless, clinically experienced practitioners could possibly contribute to the identification of further argumentative types. Finally, the study results reported in this article do not represent an encompassing analysis of AI-CDSS but are limited to certain aspects related to the necessary levels of understanding as perceived by the stakeholders. It should therefore not be inferred from the results that other aspects were irrelevant to the interviewees.

Conclusions

The ethical debate on the employment of AI-CDSS and its impact on physicians’ practice and professional role is already in full swing. Empirical evidence on stakeholders’ own viewpoints, however, is so far limited. This study generated insights into prospective German clinicians’ perspectives on their professional role and on the levels of understanding and explicability needed as a basis for responsible clinical decision-making. In particular, two contrasting types of clinicians were identified who differ, for example, in the level of understanding they perceive as necessary for AI-supported clinical decision-making.

The study results open up the debate on the levels of competencies needed and on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. Future initiatives in this direction need to be aware that clinicians are by no means a homogeneous group, either in their AI-related competencies or in their appreciation of which levels of understanding and explicability they consider necessary to undergird their professional judgement. Consensus-seeking processes might thus be necessary within the medical profession to ensure consistent standards that will enhance the trustworthiness of AI-supported health care.

From a research perspective, our hypothesis-generating study could be taken as groundwork for a more in-depth or quantitative exploration of the different types of professional users of AI-CDSS. In addition, more empirical studies in various national contexts are needed, because expectations of technological progress and the understanding of human-machine interaction differ greatly depending on the cultural context. Such research should not only elicit health care professionals’ perspectives but also generate evidence on patients’ viewpoints regarding the levels of explicability and transparency needed when AI is integrated into clinical decision-making. It is generally desirable that open and informed communication about the use of medical AI find its place in patient-physician communication and shared decision-making so that patients’ information needs and treatment preferences can be adequately addressed. As a prerequisite, however, more work is needed to enhance the explicability of AI-CDSS (e.g. through visualisation) and to increase physicians’ competencies in dealing with medical AI.

Data availability

The datasets generated and/or analysed during the current study are not publicly available as they might contain information that could compromise research participant privacy and consent.

Notes

Footnote 1: It was possible, for example, to assign interviewees No. 3 and 6 quite clearly to Type I and interviewees No. 2, 5, 7, 8, 9 and 10 to Type II.

Abbreviations

AI: Artificial Intelligence

CDSS: Clinical Decision Support Systems

ML: Machine Learning

References

1. Middleton B, Sittig DF, Wright A. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb Med Inf. 2016;(Suppl.1):S103–16. https://doi.org/10.15265/IYS-2016-s034.

2. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271–97. https://doi.org/10.1016/S2589-7500(19)30123-2.

3. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17. https://doi.org/10.1038/s41746-020-0221-y.

4. Pozzi G. Testimonial injustice in medical machine learning. J Med Ethics. 2023;49(8):536–40. https://doi.org/10.1136/jme-2022-108630.

5. Ploug T, Holm S. The right to refuse diagnostics and treatment planning by artificial intelligence. Med Health Care Philos. 2020;23(1):107–14. https://doi.org/10.1007/s11019-019-09912-8.

6. Funer F, Wiesing U. Physician’s autonomy in the face of AI support: walking the ethical tightrope. Front Med. 2024;11. https://doi.org/10.3389/fmed.2024.1324963.

7. Bjerring JC, Busch J. Artificial intelligence and patient-centered decision-making. Philos Technol. 2021;34(2):349–71. https://doi.org/10.1007/s13347-019-00391-6.

8. Ursin F, Timmermann C, Steger F. Explicability of artificial intelligence in radiology: is a fifth bioethical principle conceptually necessary? Bioethics. 2022;36(2):143–53. https://doi.org/10.1111/bioe.12918.

9. Adams J. Defending explicability as a principle for the ethics of artificial intelligence in medicine. Med Health Care Philos. 2023. https://doi.org/10.1007/s11019-023-10175-7.

10. Amann J, Blasimme A, Vayena E, Frey D, Madai VI, the Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inf Decis Mak. 2020;20(1):310. https://doi.org/10.1186/s12911-020-01332-6.

11. London AJ. Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep. 2019;49(1):15–21. https://doi.org/10.1002/hast.973.

12. Ursin F, Lindner F, Ropinski T, Salloch S, Timmermann C. Levels of explicability for medical artificial intelligence: what do we normatively need and what can we technically reach? Ethik Med. 2023;35(2):173–99. https://doi.org/10.1007/s00481-023-00761-x.

13. Ford E, Edelman N, Somers L, Shrewsbury D, Lopez Levy M, van Marwijk H, et al. Barriers and facilitators to the adoption of electronic clinical decision support systems: a qualitative interview study with UK general practitioners. BMC Med Inf Decis Mak. 2021;21(1):193. https://doi.org/10.1186/s12911-021-01557-z.

14. Samhammer D, Roller R, Hummel P, Osmanodja B, Burchardt A, Mayrdorfer M, et al. Nothing works without the doctor: physicians’ perception of clinical decision-making and artificial intelligence. Front Med (Lausanne). 2022;9:1016366. https://doi.org/10.3389/fmed.2022.1016366.

15. Van Cauwenberge D, Van Biesen W, Decruyenaere J, Leune T, Sterckx S. Many roads lead to Rome and the artificial intelligence only shows me one road: an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. BMC Med Ethics. 2022;23(1):50. https://doi.org/10.1186/s12910-022-00787-8.

16. Frisinger A, Papachristou P. The voice of healthcare: introducing digital decision support systems into clinical practice – a qualitative study. BMC Prim Care. 2023;24(1):67. https://doi.org/10.1186/s12875-023-02024-6.

17. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello CP, Stephan A. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. 2023;6(1):111. https://doi.org/10.1038/s41746-023-00852-5.

18. Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, et al. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns. J Med Ethics. 2023. https://doi.org/10.1136/jme-2022-108814.

19. Funer F, Schneider D, Heyen NB, Aichinger H, Klausen AD, Tinnemeyer S, Liedtke W, Salloch S, Bratan T. Impacts of clinical decision support systems on the relationship, communication and shared decision-making between healthcare professionals and patients: a multi-stakeholder interview study. J Med Internet Res. 2024;26:e55717. https://doi.org/10.2196/55717.

20. Döringer S. “The problem-centred expert interview”: combining qualitative interviewing approaches for investigating implicit expert knowledge. Int J Soc Res Methodol. 2020;24(3):265–78. https://doi.org/10.1080/13645579.2020.1766777.

21. Kuckartz U. Qualitative Inhaltsanalyse. Methoden, Praxis, Computerunterstützung. Weinheim: Beltz; 2016.

22. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042.

23. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, Floridi L. The ethics of AI in health care: a mapping review. Soc Sci Med. 2020;260:113172. https://doi.org/10.1016/j.socscimed.2020.113172.

24. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):14. https://doi.org/10.1186/s12910-021-00577-8.

25. Cartolovni A, Tomicic A, Lazic Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: a scoping review. Int J Med Inf. 2022;161:104738. https://doi.org/10.1016/j.ijmedinf.2022.104738.

26. Hagendorff T. The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 2020;30(1):99–120. https://doi.org/10.1007/s11023-020-09517-8.

27. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99. https://doi.org/10.1038/s42256-019-0088-2.

28. Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: mapping the debate. Big Data Soc. 2016;3(2):2053951716679679. https://doi.org/10.1177/2053951716679679.

29. Funer F. The deception of certainty: how non-interpretable machine learning outcomes challenge the epistemic authority of physicians. A deliberative-relational approach. Med Health Care Philos. 2022;25(2):167–78. https://doi.org/10.1007/s11019-022-10076-1.

30. Funer F. Accuracy and interpretability: struggling with the epistemic foundations of machine learning-generated medical information and their practical implications for the doctor-patient relationship. Philos Technol. 2022;35(1):5. https://doi.org/10.1007/s13347-022-00505-7.

31. Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. J Med Ethics. 2020;46(3):205–11. https://doi.org/10.1136/medethics-2019-105586.

32. Bleher H, Braun M. Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems. AI Ethics. 2022;2(4):747–61. https://doi.org/10.1007/s43681-022-00135-x.

33. Coeckelbergh M. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Sci Eng Ethics. 2020;26(4):2051–68. https://doi.org/10.1007/s11948-019-00146-8.

34. Grote T, Di Nucci E. Algorithmic decision-making and the problem of control. In: Beck B, Kühler M, editors. Technology, anthropology, and dimensions of responsibility. Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie. Stuttgart: J.B. Metzler; 2020. pp. 97–113.

35. Kempt H, Nagel SK. Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. J Med Ethics. 2022;48(4):222–9. https://doi.org/10.1136/medethics-2021-107440.

36. Sand M, Duran JM, Jongsma KR. Responsibility beyond design: physicians’ requirements for ethical medical AI. Bioethics. 2022;36(2):162–9. https://doi.org/10.1111/bioe.12887.

37. Santoni de Sio F, Mecacci G. Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos Technol. 2021;34(4):1057–84. https://doi.org/10.1007/s13347-021-00450-x.

38. Tigard DW. Artificial moral responsibility: how we can and cannot hold machines responsible. Camb Q Healthc Ethics. 2021;30(3):435–47. https://doi.org/10.1017/S0963180120000985.

39. Heyen NB, Salloch S. The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. BMC Med Ethics. 2021;22(1):112. https://doi.org/10.1186/s12910-021-00679-3.

40. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353–4. https://doi.org/10.1001/jama.2016.17438.

41. Çalışkan SA, Demir K, Karaca O. Artificial intelligence in medical education curriculum: an e-Delphi study for competencies. PLoS ONE. 2022;17(7):e0271872. https://doi.org/10.1371/journal.pone.0271872.

42. Foadi N, Varghese J. Digital competence – a key competence for today’s and future physicians. J Eur CME. 2022;11(1):2015200. https://doi.org/10.1080/21614083.2021.2015200.

43. Laupichler MC, Aster A, Meyerheim M, Raupach T, Mergen M. Medical students’ AI literacy and attitudes towards AI: a cross-sectional two-center study using pre-validated assessment instruments. BMC Med Educ. 2024;24(1):401. https://doi.org/10.1186/s12909-024-05400-7.


Acknowledgements

We would like to thank all interview partners participating in this study. We would also like to thank all other members of our DESIREE research group for their support. Finally, we would like to thank the developers of the clinical decision support systems used for our study who provided advice on the case studies. We thank the reviewers of this paper for their constructive and attentive comments.

Funding

Open Access funding enabled and organized by Projekt DEAL. The project “DESIREE – Decision Support in Routine and Emergency Health Care – Ethical and Social Implications” was funded by the German Federal Ministry of Education and Research (Grant ID 01GP1911A-D). F.F. was also supported by the VolkswagenStiftung (Digital Medical Ethics Network, Grant ID 9B 233). The funders had no involvement in the design of the study, the collection, analysis or interpretation of data, or the writing of the manuscript.

Author information

Authors and affiliations

Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany

F. Funer, S. Tinnemeyer & S. Salloch

Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstr. 47, 72074, Tübingen, Germany

F. Funer

Faculty of Theology, University of Greifswald, Am Rubenowplatz 2/3, 17489, Greifswald, Germany

W. Liedtke


Contributions

S.T. and S.S. developed the interview guide. S.T. conducted the interviews. F.F. and S.S. performed the data analysis and interpretation and drafted the manuscript. F.F., S.S., S.T. and W.L. contributed to the conceptual background and discussion. All authors reviewed and approved the final version of this manuscript.

Corresponding author

Correspondence to S. Salloch.

Ethics declarations

Ethics approval and consent to participate

The authors confirm that the study was performed in accordance with relevant guidelines and regulations (such as the Declaration of Helsinki). This study was approved by the Research Ethics Committee of Hannover Medical School, Germany (Reg. No. 9805_BO_K_2021). All participants provided written informed consent to participate in this study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Cite this article

Funer, F., Tinnemeyer, S., Liedtke, W. et al. Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. BMC Med Ethics 25, 107 (2024). https://doi.org/10.1186/s12910-024-01109-w

Received: 20 October 2023

Accepted: 26 September 2024

Published: 07 October 2024

DOI: https://doi.org/10.1186/s12910-024-01109-w


Keywords

  • Clinical decision support systems (CDSS)
  • Artificial intelligence (AI)
  • Future health care professionals
  • Level of understanding
  • Explicability
