
Problem-Solving Agents in Artificial Intelligence

In artificial intelligence, a problem-solving agent refers to a type of intelligent agent designed to address and solve complex problems or tasks in its environment. These agents are a fundamental concept in AI and are used in various applications, from game-playing algorithms to robotics and decision-making systems. Here are some key characteristics and components of a problem-solving agent:

  • Perception: Problem-solving agents typically have the ability to perceive or sense their environment. They can gather information about the current state of the world, often through sensors, cameras, or other data sources.
  • Knowledge Base: These agents often possess some form of knowledge or representation of the problem domain. This knowledge can be encoded in various ways, such as rules, facts, or models, depending on the specific problem.
  • Reasoning: Problem-solving agents employ reasoning mechanisms to make decisions and select actions based on their perception and knowledge. This involves processing information, making inferences, and selecting the best course of action.
  • Planning: For many complex problems, problem-solving agents engage in planning. They consider different sequences of actions to achieve their goals and decide on the most suitable action plan.
  • Actuation: After determining the best course of action, problem-solving agents take actions to interact with their environment. This can involve physical actions in the case of robotics or making decisions in more abstract problem-solving domains.
  • Feedback: Problem-solving agents often receive feedback from their environment, which they use to adjust their actions and refine their problem-solving strategies. This feedback loop helps them adapt to changing conditions and improve their performance.
  • Learning: Some problem-solving agents incorporate machine learning techniques to improve their performance over time. They can learn from experience, adapt their strategies, and become more efficient at solving similar problems in the future.

Problem-solving agents can vary greatly in complexity, from simple algorithms that solve straightforward puzzles to highly sophisticated AI systems that tackle complex, real-world problems. The design and implementation of problem-solving agents depend on the specific problem domain and the goals of the AI application.
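To make these characteristics concrete, here is a minimal Python sketch of the perceive-reason-act loop they describe. It is illustrative only: the `environment` and `knowledge_base` objects and their methods are assumed interfaces, not part of any particular library.

```python
class ProblemSolvingAgent:
    """Minimal sketch of a perceive-reason-act loop (illustrative only)."""

    def __init__(self, knowledge_base):
        self.kb = knowledge_base      # domain knowledge (rules, facts, models)
        self.last_action = None

    def perceive(self, environment):
        # Gather the current state of the world (stand-in for real sensors).
        return environment.current_state()

    def decide(self, percept):
        # Reasoning step: score the applicable actions against the knowledge
        # base and pick the one with the highest estimated value.
        candidates = self.kb.applicable_actions(percept)
        return max(candidates, key=lambda a: self.kb.estimated_value(percept, a))

    def act(self, environment, action):
        feedback = environment.execute(action)   # actuation
        self.kb.update(feedback)                 # learn from feedback
        self.last_action = action

    def step(self, environment):
        percept = self.perceive(environment)
        action = self.decide(percept)
        self.act(environment, action)
```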


Problem Solving Agents in Artificial Intelligence

In this post, we will talk about problem-solving agents in Artificial Intelligence, which are a kind of goal-based agent. Because the direct mapping from states to actions used by a simple reflex agent is far too large to store for a complex environment, we use goal-based agents that can consider future actions and the desirability of their outcomes.


Problem Solving Agents

Problem Solving Agents decide what to do by finding a sequence of actions that leads to a desirable state or solution.

An agent may need to plan ahead when the best course of action is not immediately obvious: it may need to think through a sequence of actions that leads to its goal state. Such an agent is known as a problem-solving agent, and the computation it carries out is known as search.

A problem-solving agent follows this four-phase problem-solving process (a code sketch of the full loop follows the list):

  • Goal Formulation: This is the first and simplest phase of problem solving. The agent adopts a goal, a desired state of the world that requires some sequence of actions to reach, based on its current situation and performance measure.
  • Problem Formulation: In this step the agent decides which states and actions are relevant, producing a description of the problem that determines what actions can be taken to reach the goal.
  • Search: After goal and problem formulation, the agent simulates sequences of actions and looks for a sequence that reaches the goal. This process is called search, and the sequence is called a solution. The agent may have to simulate many sequences that do not reach the goal, but eventually it will either find a solution or discover that none exists. A search algorithm takes a problem as input and returns a sequence of actions as output.
  • Execution: After the search phase, the agent executes the actions recommended by the search algorithm, one at a time. This final stage is known as the execution phase.
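As a rough illustration of how these four phases fit together, here is a minimal Python sketch of one step of the loop. The `formulate_goal`, `formulate_problem`, and `search` callables are hypothetical placeholders for whatever goal selection, problem description, and search algorithm an agent actually uses.

```python
def problem_solving_agent(percept, state, plan, formulate_goal, formulate_problem, search):
    """One step of the goal-formulate / problem-formulate / search / execute cycle.

    The three callables are placeholders supplied by the caller (illustrative only).
    """
    state = dict(state, latest_percept=percept)   # interpret the latest percept
    if not plan:                                  # nothing left to execute
        goal = formulate_goal(state)              # 1. goal formulation
        problem = formulate_problem(state, goal)  # 2. problem formulation
        plan = search(problem) or []              # 3. search for a solution
    action = plan.pop(0) if plan else None        # 4. execute one action at a time
    return action, state, plan
```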

Problems and Solutions

Before we move into the problem formulation phase, we must first define a problem in terms of problem solving agents.

A formal definition of a problem consists of five components:

Initial State: The agent's starting state, its first step toward the goal. For example, if a taxi agent needs to travel to location B but the taxi is currently at location A, the problem's initial state is location A.

Actions: A description of the possible actions available to the agent. Given a state s, Actions(s) returns the set of actions that can be executed in s; each of these actions is said to be applicable in s.

Transition Model: A description of what each action does, specified by a function Result(s, a) that returns the state that results from doing action a in state s.

Together, the initial state, actions, and transition model define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. The state space forms a graph in which the nodes are states and the links between nodes are actions.

Goal Test: It determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them; sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states.

Path Cost: It assigns a numerical cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. Remember that the optimal solution has the lowest path cost of all solutions.
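These five components map naturally onto a small data structure. The sketch below is one hypothetical way to represent them in Python; the field and method names are illustrative choices, not a standard API.

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple


@dataclass
class Problem:
    """A search problem defined by its five components (illustrative sketch)."""
    initial_state: Any
    actions: Callable[[Any], List[Any]]            # Actions(s) -> applicable actions
    result: Callable[[Any, Any], Any]              # Result(s, a) -> next state
    goal_test: Callable[[Any], bool]               # is s a goal state?
    action_cost: Callable[[Any, Any, Any], float]  # cost of doing a in s, reaching s'

    def path_cost(self, path: List[Tuple[Any, Any]]) -> float:
        """Sum of action costs along a path of (state, action) pairs."""
        total = 0.0
        for state, action in path:
            total += self.action_cost(state, action, self.result(state, action))
        return total
```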

Example Problems

The problem-solving approach has been applied to a vast array of task environments. Problems fall into two broad kinds:

  • Standardized/Toy Problems: Their purpose is to illustrate or exercise various problem-solving techniques. A toy problem can be described concisely and precisely, making it suitable as a benchmark for researchers to compare the performance of algorithms.
  • Real-world Problems: These are problems whose solutions people actually care about. Unlike a toy problem, a real-world problem does not have a single agreed-upon description, although we can give a general formulation of the issue.

Some Standardized/Toy Problems

Vacuum World Problem

Consider a vacuum-cleaner agent: it can move left or right, and its job is to suck up the dirt from the floor.

The state space graph for the two-cell vacuum world.

The vacuum world’s problem can be stated as follows:

States: A world state specifies which objects are in which cells. The objects in the vacuum world are the agent and any dirt. In the simple two-cell version, the agent can be in either of the two cells, and each cell may or may not contain dirt, so there are 2 × 2 × 2 = 8 states. In general, a vacuum environment with n cells has n × 2^n states.

Initial State: Any state can be specified as the starting point.

Actions: In the two-cell world we defined three actions: sucking, moving left, and moving right. More movement actions are needed in a two-dimensional multi-cell world.

Transition Model: Suck removes any dirt from the agent's cell. In the two-cell world, Left and Right move the agent between the cells. In a two-dimensional multi-cell world, Forward moves the agent one cell forward in the direction it is facing unless it hits a wall, in which case the action has no effect; Backward moves the agent in the opposite direction, while TurnRight and TurnLeft rotate it by 90°.

Goal States: The states in which every cell is clean.

Action Cost: Each action costs 1.
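As a concrete illustration, the simple two-cell version (actions Suck, Left and Right) can be written out directly. The state encoding used here, the agent's location plus one dirt flag per cell, is just one convenient choice for a sketch.

```python
from itertools import product

# State: (agent_location, dirt_in_A, dirt_in_B); 2 x 2 x 2 = 8 states in total.
STATES = list(product(["A", "B"], [True, False], [True, False]))
ACTIONS = ["Suck", "Left", "Right"]


def result(state, action):
    """Transition model for the two-cell vacuum world (illustrative)."""
    loc, dirt_a, dirt_b = state
    if action == "Suck":
        return (loc, False if loc == "A" else dirt_a, False if loc == "B" else dirt_b)
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    return state


def is_goal(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b   # every cell is clean


def action_cost(state, action, next_state):
    return 1                           # each action costs 1


# Example: both cells dirty, agent starting in cell A.
print(len(STATES))                     # 8
s = ("A", True, True)
for a in ["Suck", "Right", "Suck"]:
    s = result(s, a)
print(is_goal(s))                      # True
```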

8-Puzzle Problem

In a sliding-tile puzzle, a number of tiles (sometimes called blocks or pieces) are arranged in a grid with one or more blank spaces so that some of the tiles can slide into a blank space. One variant is the Rush Hour puzzle, in which cars and trucks slide around a 6 × 6 grid in an attempt to free a car from the traffic jam. Perhaps the best-known variant is the 8-puzzle (see the figure below), which consists of a 3 × 3 grid with eight numbered tiles and one blank space; the 15-puzzle is played on a 4 × 4 grid. The object is to reach a specified goal state, such as the one shown on the right of the figure. The standard formulation of the 8-puzzle is as follows:

STATES : A state description specifies the location of each of the tiles.

INITIAL STATE : Any state can be designated as the initial state. (Note that a parity property partitions the state space—any given goal can be reached from exactly half of the possible initial states.)

ACTIONS : While in the physical world it is a tile that slides, the simplest way of describing an action is to think of the blank space moving Left, Right, Up, or Down. If the blank is at an edge or a corner, then not all actions will be applicable.

TRANSITION MODEL : Maps a state and action to a resulting state; for example, if we apply Left to the start state in the Figure below, the resulting state has the 5 and the blank switched.

A typical instance of the 8-puzzle

GOAL STATE :  It identifies whether we have reached the correct goal state. Although any state could be the goal, we typically specify a state with the numbers in order, as in the Figure above.

ACTION COST : Each action costs 1.
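The formulation above translates almost line for line into code. In the sketch below, the state is a tuple of nine entries in row-major order with 0 standing for the blank; this encoding, and the particular goal ordering, are illustrative assumptions rather than part of the standard definition.

```python
# State: a tuple of 9 tiles in row-major order, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}


def actions(state):
    """Blank moves that are applicable in this state."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    acts = []
    if row > 0: acts.append("Up")
    if row < 2: acts.append("Down")
    if col > 0: acts.append("Left")
    if col < 2: acts.append("Right")
    return acts


def result(state, action):
    """Transition model: slide the blank by swapping it with the neighbouring tile."""
    blank = state.index(0)
    target = blank + MOVES[action]
    tiles = list(state)
    tiles[blank], tiles[target] = tiles[target], tiles[blank]
    return tuple(tiles)


def goal_test(state):
    return state == GOAL


# Each action costs 1, so path cost is simply the number of moves.
start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(actions(start))                      # ['Down', 'Left', 'Right']
print(goal_test(result(start, "Left")))    # True: goal reached in one move
```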

Perspective | Published: 25 January 2022

Intelligent problem-solving as integrated hierarchical reinforcement learning

Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz & Stefan Wermter

Nature Machine Intelligence, volume 4, pages 11–20 (2022)


Subjects: Cognitive control, Computational models, Computer science, Learning algorithms, Problem solving

According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms. Hierarchical reinforcement learning is a promising computational approach that may eventually yield comparable problem-solving behaviour in artificial agents and robots. However, so far, the problem-solving abilities of many human and non-human animals are clearly superior to those of artificial systems. Here we propose steps to integrate biologically inspired hierarchical mechanisms to enable advanced problem-solving skills in artificial agents. We first review the literature in cognitive psychology to highlight the importance of compositional abstraction and predictive processing. Then we relate the gained insights with contemporary hierarchical reinforcement learning methods. Interestingly, our results suggest that all identified cognitive mechanisms have been implemented individually in isolated computational architectures, raising the question of why there exists no single unifying architecture that integrates them. As our final contribution, we address this question by providing an integrative perspective on the computational challenges to develop such a unifying architecture. We expect our results to guide the development of more sophisticated cognitively inspired hierarchical machine learning architectures.



Cite this article.

Eppe, M., Gumbsch, C., Kerzel, M. et al. Intelligent problem-solving as integrated hierarchical reinforcement learning. Nat Mach Intell 4 , 11–20 (2022). https://doi.org/10.1038/s42256-021-00433-9


Intelligent Agent in AI

In the realm of AI, Intelligent Agents stand as pivotal entities, driving automation and decision-making with cognitive abilities. This article explores the concept, architecture, functionalities, and real-world applications of these agents, shaping the modern AI landscape.

Table of Contents

  • Understanding Intelligent Agents
  • Rational Agents and Rationality in Decision-Making
  • How Intelligent Agents Work Inside
  • PEAS Representation of an AI Agent
  • Applications of Intelligent Agents
  • Challenges for Intelligent Agents

Understanding Intelligent Agents

Intelligent agents are a subset of AI systems that demonstrate intelligent behaviour, including adaptive learning, planning, and problem-solving. They operate in dynamic environments, making decisions based on the information available to them. These agents dynamically adjust their behaviour, learning from past experience to improve their approach and converge on accurate solutions. The design of an intelligent agent typically involves four key components:

  • Perception: Agents have sensors or mechanisms to observe and perceive aspects of their environment. This may involve collecting data from the physical world, accessing databases, or receiving input from other software components.
  • Reasoning: Agents possess computational or cognitive capabilities to process the information they perceive. They use algorithms, logic, or machine learning techniques to analyze data, make inferences, and derive insights from the available information.
  • Decision-Making: Based on their perception and reasoning, agents make decisions about the actions they should take to achieve their goals. These decisions are guided by predefined objectives, which may include optimizing certain criteria or satisfying specific constraints.
  • Action: Agents execute actions in their environment to affect change and progress towards their goals. These actions can range from simple operations, such as sending a message or adjusting parameters, to more complex tasks, such as navigating a virtual world or controlling physical devices.

Examples of Intelligent Agents include self-driving cars, recommendation systems, virtual assistants, and game-playing AI.

Rational Agents and Rationality in Decision-Making

Intelligent agents are characterized by rationality in decision-making: they aim to attain optimal outcomes or, in uncertain scenarios, the best expected outcome.

A rational agent is one that does the right thing. It is an autonomous entity designed to perceive its environment, process information, and act in a way that maximizes the achievement of its predefined goals or objectives. Rational agents always aim to produce an optimal solution.

Rationality in AI refers to the principle that such agents should consistently choose actions that are expected to lead to the best possible outcomes, given their current knowledge and the uncertainties present in the environment. This principle of rationality guides the behavior of intelligent agents in the following ways:

  • Perception and Information Processing: Rational agents strive to perceive and process information efficiently to gain the most accurate understanding of their environment.
  • Reasoning and Inference: They employ logical reasoning and probabilistic inference to make informed decisions based on available evidence and prior knowledge.
  • Decision-Making Under Uncertainty: When faced with uncertainty, rational agents weigh the probabilities of different outcomes and choose actions that maximize their expected utility or achieve the best possible outcome given the available information.
  • Adaptation and Learning: Rational agents adapt their behavior over time based on feedback and experience, continuously refining their decision-making strategies to improve performance and achieve their goals more effectively.

An example of a rational agent is a chess-playing AI, which selects the moves with the highest likelihood of winning; a minimal sketch of expected-utility action selection follows below.
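Decision-making under uncertainty, as described above, amounts to choosing the action with the highest expected utility. A minimal sketch, with made-up probabilities and utilities purely for illustration:

```python
def expected_utility(action, outcomes):
    """Sum of utility weighted by probability over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])


def rational_choice(outcomes):
    """Pick the action with the highest expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))


# Hypothetical example: each action maps to a list of (probability, utility) pairs.
outcomes = {
    "aggressive_move": [(0.4, 100), (0.6, -50)],   # expected utility = 10
    "safe_move":       [(0.9, 30), (0.1, -10)],    # expected utility = 26
}
print(rational_choice(outcomes))   # "safe_move"
```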

How Intelligent Agents Work Inside

An agent's internal workings involve an agent program that runs on a computing device and processes the data coming from the environment through the agent's architecture. Let's discuss how an agent works from the inside, in terms of its architecture and its program.

1. Agent architecture


  • Environment: The environment is the area around the agent that it interacts with. It can be a physical space, such as a room, or a virtual space, such as a game world or the internet.
  • Sensors: Sensors are the tools an AI agent uses to perceive its environment. They can be physical, such as cameras, microphones, or temperature sensors, or software sensors that read data from files.
  • Actuators: Actuators are the tools an AI agent uses to act on its environment. They can be physical, such as wheels, motors, robotic hands, or computer screens, or software actuators that send messages.
  • Effectors: Effectors take instructions from the decision-making mechanism and translate them into actions, which are carried out through the actuators.

2. Program or Decision-making mechanism:

This is the brain of the AI agent: the mechanism that processes the information received through the sensors and makes decisions based on that data using its program. Let's look at how the agent's program executes its operations.

  • The decision-making mechanism, often referred to as the agent’s program, processes information from sensors and makes decisions based on that data.
  • The program takes current percepts as input and generates actions for the actuators.
  • It embodies the agent function, which maps percepts to actions based on the agent’s goals and objectives.
  • Various types of agent programs exist, such as simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents.
  • These programs differ in how they process percepts and generate actions, depending on the agent’s design and task requirements.

For example, in a two-cell vacuum environment, a simple reflex agent may have a program that directly maps percept states to actions, without considering past or future percepts; the chosen action is then carried out through the effectors. A minimal sketch of such a program follows.
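A minimal sketch of such a reflex program, assuming the percept is a (location, status) pair for a two-cell world:

```python
def simple_reflex_vacuum_agent(percept):
    """Map the current percept directly to an action; no memory of past percepts."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"


print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # Left
```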

PEAS Representation of an AI Agent

PEAS stands for Performance measure, Environment, Actuators and Sensors. It is a framework used to describe an AI agent and gives a structured approach to designing and understanding AI systems.

  • Performance measure: The performance measure is the criterion by which the success of the agent is judged; it is used to evaluate how well the agent is achieving its goal. For example, in a spam-filter system, the performance measure could be minimizing the number of spam emails reaching the inbox.
  • Environment: The environment represents the domain or context in which the agent operates and interacts. This can range from physical spaces like rooms to virtual environments such as game worlds or online platforms like the internet.
  • Actuators: Actuators are the mechanisms through which the AI agent performs actions or interacts with its environment to achieve its goals. These can include physical actuators like motors and robotic hands, as well as digital actuators like computer screens and text-to-speech converters.
  • Sensors: Sensors enable the AI agent to gather information from its environment, providing data that informs its decision-making process and actions. These sensors can capture various environmental parameters such as temperature, sound, movement, or visual input. Examples include cameras, microphones, temperature sensors, and motion sensors.
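A PEAS description is essentially a four-part checklist, so it can be captured as plain data. The entries below describe a hypothetical spam-filtering agent and are illustrative only:

```python
# Hypothetical PEAS description of a spam-filtering agent.
spam_filter_peas = {
    "performance_measure": ["minimise spam reaching the inbox",
                            "minimise legitimate mail marked as spam"],
    "environment": ["incoming mail stream", "user mailbox", "sender reputation data"],
    "actuators": ["move message to spam folder", "deliver to inbox", "flag for review"],
    "sensors": ["message headers", "message body", "attachment metadata"],
}

for component, entries in spam_filter_peas.items():
    print(f"{component}: {', '.join(entries)}")
```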

Applications of Intelligent Agents

Intelligent agents find applications across a wide range of domains, revolutionizing industries and enhancing human capabilities. Some notable applications include:

  • Autonomous Systems: Intelligent agents power autonomous vehicles, drones, and robots, enabling them to perceive their surroundings, navigate complex environments, and make decisions in real-time.
  • Personal Assistants: Virtual personal assistants like Siri, Alexa, and Google Assistant employ intelligent agents to understand user queries, retrieve relevant information, and perform tasks such as scheduling appointments, setting reminders, and controlling smart home devices.
  • Recommendation Systems: E-commerce platforms, streaming services, and social media platforms utilize intelligent agents to analyze user preferences and behavior, providing personalized recommendations for products, movies, music, and content.
  • Financial Trading: Intelligent agents are employed in algorithmic trading systems to analyze market data, identify trading opportunities, and execute trades autonomously, maximizing returns and minimizing risks.

Challenges for Intelligent Agents

Despite their immense potential, intelligent agents also pose several challenges and considerations:

  • Ethical and Legal Implications: Intelligent agents raise ethical concerns regarding privacy, bias, transparency, and accountability. Developers must ensure that agents behave ethically and comply with legal regulations and societal norms.
  • Robustness and Reliability: Agents must be robust and reliable in dynamic and uncertain environments. They should be capable of handling unexpected situations, adversarial attacks, and noisy or incomplete data.
  • Interpretability: Understanding and interpreting the decisions made by intelligent agents is crucial for building trust and transparency. Explainable AI techniques are essential for providing insights into the reasoning process and decision-making of agents.
  • Scalability and Efficiency: As AI systems become increasingly complex and data-intensive, scalability and efficiency become critical considerations. Designing agents that can scale to large-scale deployments and operate efficiently with limited computational resources is essential.

Intelligent Agents are essential components driving automation and decision-making in AI. These agents, equipped with adaptive learning, planning, and problem-solving capabilities, dynamically adjust their behavior to achieve accurate solutions. Examples such as self-driving cars, recommendation systems, virtual assistants, and game-playing AI illustrate the diverse applications of intelligent agents in shaping the modern AI landscape. As AI advances, Intelligent Agents will continue to lead innovation and shape the future of technology.


Computer Science > Computation and Language

Title: Self-Reflection in LLM Agents: Effects on Problem-Solving Performance

Abstract: In this study, we investigated the effects of self-reflection in large language models (LLMs) on problem-solving performance. We instructed nine popular LLMs to answer a series of multiple-choice questions to provide a performance baseline. For each incorrectly answered question, we instructed eight types of self-reflecting LLM agents to reflect on their mistakes and provide themselves with guidance to improve problem-solving. Then, using this guidance, each self-reflecting agent attempted to re-answer the same questions. Our results indicate that LLM agents are able to significantly improve their problem-solving performance through self-reflection ($p < 0.001$). In addition, we compared the various types of self-reflection to determine their individual contribution to performance. All code and data are available on GitHub at this https URL
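The protocol described in the abstract can be paraphrased as an answer-reflect-re-answer loop. The sketch below is a loose illustration of that idea, not the authors' code; `ask_llm` is a hypothetical stand-in for whatever model API is used.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model API."""
    raise NotImplementedError


def solve_with_self_reflection(question: str, correct_answer: str) -> bool:
    """Answer once; if wrong, reflect on the mistake and retry with the guidance."""
    first_try = ask_llm(f"Answer this multiple-choice question:\n{question}")
    if first_try.strip() == correct_answer:
        return True

    reflection = ask_llm(
        "You answered the following question incorrectly.\n"
        f"Question: {question}\nYour answer: {first_try}\n"
        "Reflect on the mistake and write guidance for answering it correctly."
    )
    second_try = ask_llm(
        f"Answer this multiple-choice question:\n{question}\n"
        f"Keep this guidance in mind:\n{reflection}"
    )
    return second_try.strip() == correct_answer
```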


Ravi Sawhney

How to enhance generative AI's problem-solving capabilities and boost workplace productivity

May 24th, 2024


In 2024, enterprise software companies are betting on generative AI, in a quest to enhance productivity. OpenAI has recently released GPT-4o, which includes interpretation and generation of voice and vision. Ravi Sawhney discusses how to incorporate this technology into end-user workplace technology and introduces the concept of multi-agent workflows, an idea that allows organisations to imitate entire knowledge teams.

Back in 2021 I wrote a piece for LSE Business Review in which I demonstrated the power of OpenAI's GPT-3 in interpreting human language by converting it into code. At the time, the technology was in its infancy and didn't generate the spark ChatGPT did when it was released to the public in November 2022; that moment truly ignited the generative AI (GenAI) boom. Here I provide some personal thoughts on why GenAI matters and the challenges in using it for work. I also introduce the concept of multi-agent workflows as a method to boost the potential of where we can go.

It’s all about productivity

In 2024, nearly all enterprise software companies are making bets on GenAI, which has perhaps taken away some of the limelight from existing machine learning approaches such as supervised and unsupervised learning, which are still crucial parts of any complete AI framework. The premise of why organisations are doing this all ties back to what got me interested in this technology in the first place: productivity.

In its most basic form, GenAI can be thought of as the most powerful autocomplete technology we have ever seen. The ability of large language models (LLMs) to predict the next word is so good that they can step in and perform knowledge-worker tasks such as classification, editing, summarising, question answering and content creation.

Additionally, variations of this technology can operate across modalities, much like human senses, to include interpretation and generation across voice and vision. In fact, in 2024 the nomenclature is shifting from LLMs to large multimodal models (LMMs), and the recent release of GPT-4o from OpenAI is evidence of this. Whether the step-in process is advisory, with a human in the loop, or full-blown automated decision-making, it is not hard to see how GenAI has the potential to deliver a transformational boost to labour productivity across the knowledge-working sector. A recent paper on this very topic estimated that, when used to drive task automation, GenAI could boost labour productivity by 3.3 percentage points annually, creating $4.4 trillion of value to global GDP.

The productivity benefits perhaps take us closer to the aspiration Keynes had when he wrote Economic Possibilities for our Grandchildren in 1930, in which he forecast that in a hundred years, thanks to technological advancements improving the standard of living, we could all be working 15-hour weeks. This sentiment was echoed by Nobel Prize winner in economics Sir Christopher Pissarides, who said ChatGPT could herald a four-day work week.

So, if the potential to meaningfully transform how we work is right in front of us and is being developed at breakneck speed, then how do we bridge the gap to make this possibility a reality?

Trust and tooling

Two typical challenges need to be considered when incorporating this technology into end-user workplace technology. The largest, arguably, is managing the trust issue. By default, LLMs do not have access to your own private information, so asking one about a very specific support issue on your own product will typically produce a confident but inaccurate response, commonly referred to as a "hallucination". Fine-tuning LLMs on your own data is one option, albeit an expensive one given the hardware requirements. A much more approachable method that has become commonplace in the community is retrieval-augmented generation (RAG). Here your private data is brought into the query prompt by using embeddings to perform lookups for a given query. The response is then synthesised from this data along with the LLM's existing knowledge, resulting in something that could be considered useful, albeit with some appropriate user guidance.
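In outline, RAG embeds the query, retrieves the most similar private documents, and prepends them to the prompt. The sketch below is a bare-bones illustration of that flow; `embed` and `generate` are hypothetical stand-ins for an embedding model and an LLM, not any specific vendor's API.

```python
def embed(text: str) -> list[float]:
    """Hypothetical stand-in for an embedding-model call."""
    raise NotImplementedError


def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError


def dot(u: list[float], v: list[float]) -> float:
    # Dot-product similarity; assumes the embedding vectors are normalised.
    return sum(a * b for a, b in zip(u, v))


def answer_with_rag(query: str, documents: list[str], top_k: int = 3) -> str:
    """Retrieve the documents most similar to the query and ground the answer in them."""
    query_vec = embed(query)
    ranked = sorted(documents, key=lambda doc: dot(embed(doc), query_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```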

The second challenge is maths. While LLMs, with some careful prompting, can create a unique, compelling and (importantly) convincing story from scratch, they will struggle with basic to intermediate maths, depending on the foundation model you are using. Here the community has introduced the concept of tooling, sometimes referred to as agents. In this paradigm, the LLM categorises the query being asked and, rather than trying to answer it directly, calls the appropriate 'tool' for the job. For example, if asked about the weather outside, it might call a weather API service. If asked to perform maths, it would route the query to a calculator API. And if it needs to retrieve information from a database, it might convert the request to SQL or Pandas, execute the resulting code in a sandbox environment and return the result to the user, who might be none the wiser about what is going on under the hood.
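A tool-using setup can be sketched as a simple router: classify the query, then dispatch it to the right function. The keyword-based classifier below is a crude stand-in for the LLM-based routing described above, and the tools shown are illustrative placeholders.

```python
import ast
import operator


def safe_eval(expression: str) -> float:
    """Tiny calculator tool: evaluates +, -, *, / on numeric literals only."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")

    return walk(ast.parse(expression, mode="eval").body)


def classify(query: str) -> str:
    """Crude keyword router standing in for an LLM-based classifier."""
    if any(ch.isdigit() for ch in query):
        return "math"
    if "weather" in query.lower():
        return "weather"
    return "chat"


def route(query: str) -> str:
    kind = classify(query)
    if kind == "math":
        return str(safe_eval(query))
    if kind == "weather":
        return "call_weather_api(query)"   # placeholder for a real weather API call
    return "call_llm(query)"               # fall back to the model's own answer


print(route("12 * (3 + 4)"))   # 84
```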

The potential of multi-agent workflows

Agent frameworks with tooling are expanding the possibilities of how LLMs can be used to solve real-world problems today. However, they still largely fall short of being able to perform complex knowledge-work tasks due to limitations such as lack of memory, planning and reasoning capabilities. Multi-agent frameworks present an opportunity to tackle some of these challenges. A good way to understand how they could work is to draw a contrast with System 1 and System 2 thinking , popularised by Daniel Kahneman.

Think of System 1 as your gut instinct: fast, automatic and intuitive. In the world of LLMs, that’s like the model’s ability to generate human-like responses based on its vast training data. In contrast, System 2 thinking is slower, more deliberate, and logical, representing the model’s capacity for structured, step-by-step problem-solving and reasoning.

To fully unleash the potential of LLMs, we need to develop techniques that leverage both System 1 and System 2 capabilities. By breaking down complex tasks into smaller, manageable steps, we can guide LLMs to perform more structured and reliable problem-solving, akin to how humans would solve challenges.

Consider a team of agents, each assigned a specific role through prompt engineering, working together to tackle a single goal. That’s essentially what agent workflows, or sometimes referred to as agentic workflows, do for LLMs. Each agent is responsible for a specific subtask, and they communicate with each other, passing information and results back and forth until the overall task is complete. By designing prompts that encourage logical reasoning, step-by-step problem-solving, and collaboration with other agents, we can create a system that mimics the deliberate and rational thinking associated with System 2.
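
A minimal, illustrative version of such a workflow is sketched below. Each ‘agent’ is nothing more than a role prompt wrapped around the same completion call; the complete stub stands in for a real LLM client, and the planner/researcher/writer/reviewer roles are examples rather than a prescribed team structure.

```python
# A minimal sketch of an agentic workflow: each "agent" is a role prompt
# wrapped around the same completion call.
def complete(prompt: str) -> str:
    return f"[draft produced for: {prompt[:60]}...]"  # stand-in for a real LLM client

class Agent:
    def __init__(self, role: str):
        self.role = role

    def run(self, task: str, context: str = "") -> str:
        prompt = f"You are the {self.role}.\nContext so far:\n{context}\nTask: {task}"
        return complete(prompt)

def run_workflow(goal: str) -> str:
    planner = Agent("planner who breaks the goal into steps")
    researcher = Agent("researcher who gathers supporting facts")
    writer = Agent("writer who drafts the final answer")
    reviewer = Agent("reviewer who checks the draft step by step")

    plan = planner.run(goal)
    facts = researcher.run(goal, context=plan)
    draft = writer.run(goal, context=plan + "\n" + facts)
    # Results are passed back and forth until the overall task is complete.
    return reviewer.run(goal, context=draft)

print(run_workflow("Summarise last quarter's support tickets and propose fixes"))
```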

Here is where this gets exciting: agent workflows could allow us to imitate entire knowledge teams. Imagine a virtual team of AI agents, each with its own specialism, collaborating to solve problems and make decisions just like a human team would. This could revolutionise the way we work, allowing us to tackle more complex challenges with zero or minimal human-in-the-loop supervision. It also opens up the possibility of simulating how teams will react to events in a sandbox environment, where every team member is modelled as an agent in the workflow. The conversational outputs could even be saved for retrieval later, serving as long-term memory.

By combining the raw power of System 1 thinking with the structured reasoning of System 2, we can create AI systems that not only generate human-like responses but can also tackle more complex tasks and move towards solving problems. The future of work is here, and it’s powered by the symbiosis of human ingenuity and artificial intelligence.

  • Author’s disclaimer: All views expressed are my own.
  • This blog post represents the views of the author(s), not the position of LSE Business Review or the London School of Economics and Political Science.

About the author


Ravi Sawhney is Head of Buyside Execution at Bloomberg LP. He has wide experience in the financial markets. Ravi holds a BSc (Hons) in Economics from LSE.


The promise and the reality of gen AI agents in the enterprise

The evolution of generative AI (gen AI) has opened the door to great opportunities across organizations, particularly regarding gen AI agents—AI-powered software entities that plan and perform tasks or aid humans by delivering specific services on their behalf. So far, adoption at scale across businesses has faced difficulties because of data quality, employee distrust, and cost of implementation. In addition, capabilities have raced ahead of leaders’ capacity to imagine how these agents could be used to transform work.

However, as gen AI technologies progress and the next-generation agents emerge, we expect more use cases to be unlocked, deployment costs to decrease, long-tail use cases to become economically viable, and more at-scale automation to take place across a wider range of enterprise processes, employee experiences, and customer interfaces. This evolution will demand investing in strong AI trust and risk management practices and policies as well as platforms for managing and monitoring agent-based systems.

In this interview, McKinsey Digital’s Barr Seitz speaks with senior partners Jorge Amar and Lari Hämäläinen and partner Nicolai von Bismarck to explore the evolution of gen AI agents, how companies can and should implement the technology, and where the pools of value lie for the enterprise as a whole. They particularly explore what these developments mean for customer service. An edited transcript of the conversation follows.

Barr Seitz: What exactly is a gen AI agent?


Lari Hämäläinen: When we talk about gen AI agents, we mean software entities that can orchestrate complex workflows, coordinate activities among multiple agents, apply logic, and evaluate answers. These agents can help automate processes in organizations or augment workers and customers as they perform processes. This is valuable because it will not only help humans do their jobs better but also fully digitalize underlying processes and services.

For example, in customer services, recent developments in short- and long-term memory structures enable these agents to personalize interactions with external customers and internal users, and help human agents learn. All of this means that gen AI agents are getting much closer to becoming true virtual workers that can both augment and automate enterprise services in all areas of the business, from HR to finance to customer service. That means we’re well on our way to automating a wide range of tasks in many service functions while also improving service quality.

Barr Seitz: Where do you see the greatest value from gen AI agents?


Jorge Amar: We have estimated that gen AI enterprise use cases could yield $2.6 trillion to $4.4 trillion annually in value across more than 60 use cases. 1 “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023. But how much of this value is realized as business growth and productivity will depend on how quickly enterprises can reimagine and truly transform work in priority domains—that is, user journeys, processes across an entire chain of activities, or a function.

Gen-AI-enabled agents hold the promise of accelerating the automation of a very long tail of workflows that would otherwise require inordinate amounts of resources to implement. And the potential extends even beyond these use cases: 60 to 70 percent of the work hours in today’s global economy could theoretically be automated by applying a wide variety of existing technology capabilities, including generative AI, but doing so will require a lot in terms of solutions development and enterprise adoption.

Consider customer service. Currently, the value of gen AI agents in the customer service environment is going to come either from a volume reduction or a reduction in average handling times. For example, in work we published earlier this year, we looked at 5,000 customer service agents using gen AI and found that issue resolution increased by 14 percent an hour, while time spent handling issues went down 9 percent. 2 “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023.

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

The other area for value is agent training. Typically, we see that it takes somewhere between six and nine months for a new agent to perform on par with more tenured peers. With this technology, we see that time come down to three months, in some cases, because new agents have at their disposal a vast library of interventions and scripts that have worked in other situations.

Over time, as gen AI agents become more proficient, I expect to see them improve customer satisfaction and generate revenue. By supporting human agents and working autonomously, for example, gen AI agents will be critical not just in helping customers with their immediate questions but also beyond, be that selling new services or addressing broader needs. As companies add more gen AI agents, costs are likely to come down, and this will open up a wider array of customer experience options for companies, such as offering more high-touch interactions with human agents as a premium service.

Barr Seitz: What are the opportunities you are already seeing with gen AI agents?

Jorge Amar: Customer care will be one of the first but definitely not the only function with at-scale AI agents. Over the past year, we have seen a lot of successful pilots with gen AI agents helping to improve customer service functions. For example, you could have a customer service agent who is on the phone with a customer and receives help in real time from a dedicated gen AI agent that is, for instance, recommending the best knowledge article to refer to or what the best next steps are for the conversation. The gen AI agent can also give coaching on behavioral elements, such as tone, empathy, and courtesy.

It used to be the case that dedicating an agent to an individual customer at each point of their sales journey was cost-prohibitive. But, as Lari noted, with the latest developments in gen AI agents, now you can do it.


Nicolai von Bismarck: It’s worth emphasizing that gen AI agents not only automate processes but also support human agents. One thing that gen AI agents are so good at, for example, is helping customer service representatives get personalized coaching, not only from a hard-skill perspective but also in soft skills like understanding the context of what is being said. We estimate that applying generative AI to customer care functions could increase productivity by 30 to 45 percent. 3 “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023.

Jorge Amar: Yes, and in other cases, gen AI agents assist the customer directly. A digital sales assistant can assist the customer at every point in their decision journey by, for example, retrieving information or providing product specs or cost comparisons—and then remembering the context if the customer visits, leaves, and returns. As those capabilities grow, we can expect these gen AI agents to generate revenue through upselling.

[For more on how companies are using gen AI agents, see the sidebar, “A closer look at gen AI agents: The Lenovo experience.”]

Barr Seitz: Can you clarify why people should believe that gen AI agents are a real opportunity and not just another false technology promise?

A closer look at gen AI agents: The Lenovo experience

Three leaders at Lenovo —Solutions and Services Group chief technology officer Arthur Hu, COO and head of strategy Linda Yao, and Digital Workplace Solutions general manager Raghav Raghunathan—discuss with McKinsey senior partner Lari Hämäläinen and McKinsey Digital’s Barr Seitz how the company uses generative AI (gen AI) agents.

Barr Seitz: What existing gen AI agent applications has Lenovo been running and what sort of impact have you seen from them?


Arthur Hu: We’ve focused on two main areas. One is software engineering. It’s the low-hanging fruit to help our people enhance speed and quality of code production. Our people are already getting 10 percent improvements, and we’re seeing that increase to 15 percent as teams get better at using gen AI agents.

The second one is about support. We have hundreds of millions of interactions with our customers across online, chat, voice, and email. We’re applying LLM [large language model]-enhanced bots to address customer issues across the entire customer journey and are seeing some great improvements already. We believe it’s possible to address as much as 70 to 80 percent of all customer interactions without needing to pull in a human.


Linda Yao: With our gen AI agents helping support customer service, we’re seeing double-digit productivity gains on call handling time. And we’re seeing incredible gains in other places too. We’re finding that marketing teams, for example, are cutting the time it takes to create a great pitch book by 90 percent and also saving on agency fees.

Barr Seitz: How are you getting ready for a world of gen AI agents?

Linda Yao: I was working with our marketing and sales training teams just this morning as part of a program to develop a learning curriculum for our organization, our partners, and our key customers. We’re figuring out what learning should be at all levels of the business and for different roles.

Arthur Hu: On the tech side, employees need to understand what gen AI agents are and how they can help. It’s critical to be able to build trust or they’ll resist adopting it. In many ways, this is a demystification exercise.


Raghav Raghunathan: We see gen AI as a way to level the playing field in new areas. You don’t need a huge talent base now to compete. We’re investing in tools and workflows to allow us to deliver services with much lower labor intensity and better outcomes.

Barr Seitz: What sort of learning programs are you developing to upskill your people?

Linda Yao: The learning paths for managers, for example, focus on building up their technical acumen and understanding how to change their KPIs, because team outputs are changing quickly. At the executive level, it’s about helping leaders develop a strong understanding of the tech so they can determine what’s a good use case to invest in and which one isn’t.

Arthur Hu: We’ve found that as our software engineers learn how to work with gen AI agents, they go from basically just chatting with them for code snippets to developing much broader thinking and focus. They start to think about changing the software workflow, such as working with gen AI agents on ideation and other parts of the value chain.

Raghav Raghunathan: Gen AI provides an experiential learning capability that’s much more effective. Gen AI agents can prepare salespeople for customer interactions or guide them during sales calls. This approach is having a much greater impact than previous learning approaches. It gives them a safe space to learn. They can practice their pitches ahead of time and learn through feedback in live situations.

Barr Seitz: How do you see the future of gen AI agents evolving?

Linda Yao: In our use cases to date, we’ve refined gen AI agents so they act as a good assistant. As we start improving the technology, gen AI agents will become more like deputies that human agents can deploy to do tasks. We’re hoping to see productivity improvements, but we expect this to be a big improvement for the employee experience. These are tasks people don’t want to do.

Arthur Hu: There are lots of opportunities, but one area we’re exploring is how to use gen AI to capture discussions and interactions, and feed the insights and outputs into our development pipeline. There are dozens of points in the customer interaction journey, which means we have tons of data to mine to understand complex intent and even autogenerate new knowledge to address issues.

Jorge Amar: These are still early days, of course, but the kinds of capabilities we’re seeing from gen AI agents are simply unprecedented. Unlike past technologies, for example, gen AI not only can theoretically handle the hundreds of millions of interactions between employees and customers across various channels but also can generate much higher-quality interactions, such as delivering personalized content. And we know that personalized service is a key driver of better customer service. There is a big opportunity here because we found in a survey of customer care executives we ran that less than 10 percent of respondents in North America reported greater-than-expected satisfaction with their customer service performance. 4 “Where is customer care in 2024?,” McKinsey, March 12, 2024.

Lari Hämäläinen: Let me take the technology view. This is the first time where we have a technology that is fitted to the way humans interact and can be deployed at enterprise scale. Take, for example, the IVR [interactive voice response] experiences we’ve all suffered through on calls. That’s not how humans interact. Humans interact in an unstructured way, often with unspoken intent. And if you think about LLMs [large language models], they were basically created from their inception to handle unstructured data and interactions. In a sense, all the technologies we applied so far to places like customer service worked on the premise that the customer is calling with a very structured set of thoughts that fit predefined conceptions.

Barr Seitz: How has the gen AI agent landscape changed in the past 12 months?

Lari Hämäläinen: The development of gen AI has been extremely fast. In the early days of LLMs, some of their shortcomings, like hallucinations and relatively high processing costs, meant that models were used to generate pretty basic outputs, like providing expertise to humans or generating images. More complex options weren’t viable. For example, consider that in the case of an LLM with just 80 percent accuracy applied to a task with ten related steps, the cumulative accuracy rate would be just 11 percent.
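
(That figure is just compounding arithmetic: if each of ten chained steps independently succeeds 80 percent of the time, the end-to-end success rate is 0.8 raised to the power of 10, roughly 11 percent, as the short check below illustrates.)

```python
per_step_accuracy = 0.80
steps = 10
# Assumes errors compound independently across the chained steps.
end_to_end = per_step_accuracy ** steps
print(f"{end_to_end:.1%}")  # 10.7%, i.e. roughly 11 percent
```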

Today, LLMs can be applied to a wider variety of use cases and more complex workflows because of multiple recent innovations. These include advances in the LLMs themselves in terms of their accuracy and capabilities, innovations in short- and long-term memory structures, developments in logic structures and answer evaluation, and frameworks to apply agents and models to complex workflows. LLMs can evaluate and correct “wrong” answers so that you can have much higher accuracy. With an experienced human in the loop to handle cases that are identified as tricky, then the joint human-plus-machine outcome can generate great quality and great productivity.

Finally, it’s worth mentioning that a lot of gen AI applications beyond chat have been custom-built in the past year by bringing different components together. What we are now seeing is the standardization and industrialization of frameworks to become closer to “packaged software.” This will speed up implementation and improve cost efficiency, making real-world applications even more viable, including addressing the long-tail use cases in enterprises.

Barr Seitz: What sorts of hurdles are you seeing in adopting the gen AI agent technology for customer service?

Nicolai von Bismarck: One big hurdle we’re seeing is building trust across the organization in gen AI agents. At one bank, for example, they knew they needed to cut down on wrong answers to build trust. So they created an architecture that checks for hallucinations. Only when the check confirms that the answer is correct is it released. And if the answer isn’t right, the chatbot says that it cannot answer the question and asks the customer to rephrase it. The customer is then able to either get an answer to their question quickly or decide that they want to talk to a live agent. That’s really valuable, as we find that customers across all age groups — even Gen Z — still prefer live phone conversations for customer help and support.
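
(The architecture described here follows a common generate-then-verify pattern. The sketch below is illustrative only: generate_answer and is_supported are hypothetical stubs standing in for the bank’s production model and its grounding check, which in practice would be another model, or the same model prompted as a judge, verifying each claim against approved sources.)

```python
# Hypothetical stubs for the production LLM and the grounding check.
def generate_answer(question: str, sources: list[str]) -> str:
    return "You can close your account online under Settings > Account."

def is_supported(answer: str, sources: list[str]) -> bool:
    # Crude containment check; a real verifier would test every claim
    # in the answer against the approved knowledge base.
    return any(answer.lower() in s.lower() or s.lower() in answer.lower()
               for s in sources)

def guarded_reply(question: str, sources: list[str]) -> str:
    answer = generate_answer(question, sources)
    if is_supported(answer, sources):
        return answer  # released only when the check passes
    return ("I can't answer that reliably. Could you rephrase the question, "
            "or would you like to speak with a live agent?")

print(guarded_reply("How do I close my account?",
                    ["You can close your account online under Settings > Account."]))
```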

Jorge Amar: We are seeing very promising results, but these are in controlled environments with a small group of customers or agents. To scale these results, change management will be critical. That’s a big hurdle for organizations. It’s much broader than simply rolling out a new set of tools. Companies are going to need to rewire how functions work so they can get the full value from gen AI agents.

Take data, which needs to be in the right format and place for gen AI technologies to use it effectively. In fact, almost 20 percent of organizations see data as the biggest challenge to capturing value with gen AI. 5 “The state of AI in 2023: Generative AI’s breakout year,” McKinsey, August 1, 2023. One example of this kind of issue could be a chatbot sourcing outdated information, like a policy that was used during COVID-19, in delivering an answer. The content might be right, but it’s hopelessly out of date. Companies are going to need to invest in cleaning and organizing their data.

In addition, companies need a real commitment to building AI trust and governance capabilities. These are the principles, policies, processes, and platforms that assure companies are not just compliant with fast-evolving regulations—as seen in the recent EU AI law and similar actions in many countries—but also able to keep the kinds of commitments that they make to customers and employees in terms of fairness and lack of bias. This will also require new learning, new levels of collaboration with legal and risk teams, and new technology to manage and monitor systems at scale.

Change needs to happen in other areas as well. Businesses will need to build extensive and tailored learning curricula for all levels of the customer service function—from managers who will need to create new KPIs and performance management protocols to frontline agents who will need to understand different ways to engage with both customers and gen AI agents.

The technology will need to evolve to be more flexible and develop a stronger life cycle capability to support gen AI tools, what we’d call MLOps [machine learning operations] or, increasingly, gen AI Ops [gen AI operations]. The operating model will need to support small teams working iteratively on new service capabilities. And adoption will require sustained effort and new incentives so that people learn to trust the tools and realize the benefits. This is particularly true with more tenured agents, who believe their own skills cannot be augmented or improved on with gen AI agents. For customer operations alone, we’re talking about a broad effort here, but with more than $400 billion of potential value from gen AI at stake, it’s worth it. 6 “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023.

Barr Seitz: Staying with customer service, how will gen AI agents help enterprises?

Jorge Amar: This is a great question, because we believe the immediate impact comes from augmenting the work that humans do even as broader automation happens. My belief is that gen AI agents can and will transform various corporate services and workflows. They will help us automate a lot of tasks that were not adding value while creating a better experience for both employees and customers. For example, corporate service centers will become more productive, achieve better outcomes, and deliver better experiences.

In fact, we’re seeing this new technology help reduce employee attrition. As gen AI becomes more pervasive, we may see an emergence of more specialization in service work. Some companies and functions will lead adoption and become fully automated, and some may differentiate by building more high-touch interactions.

Nicolai von Bismarck: As an example, we’re seeing this idea in practice at one German company, which is implementing an AI-based learning and coaching engine. And it’s already seeing a significant improvement in the employee experience as measured while it’s rolling this out, both from a supervisor and employee perspective, because the employees feel that they’re finally getting feedback that is relevant to them. They’re feeling valued, they’re progressing in their careers, and they’re also learning new skills. For instance, instead of taking just retention calls, they can now take sales calls. This experience is providing more variety in the work that people do and less dull repetition.

Lari Hämäläinen: Let me take a broader view. We had earlier modeled a midpoint scenario in which 50 percent of today’s work activities would be automated by around 2055. But the technology is evolving so much more quickly than anyone had expected—just look at the capabilities of some LLMs that are approaching, and even surpassing, in certain cases, average human levels of proficiency. The innovations in gen AI have helped accelerate that midpoint scenario by about a decade. And it’s going to keep getting faster, so we can expect the adoption timeline to shrink even further. That’s a crucial development that every executive needs to understand.

Jorge Amar is a senior partner in McKinsey’s Miami office, Lari Hämäläinen is a senior partner in the Seattle office, and Nicolai von Bismarck is a partner in the Boston office. Barr Seitz is director of global publishing for McKinsey Digital and is based in the New York office.



The Best Sex Advice Might Also Be the Hardest to Follow

Some couples would rather get divorced than talk openly about their intimate lives.


By Catherine Pearson

As a reporter who covers sex and intimacy, I spend a lot of time listening to experts extol the virtues of open, honest communication. To have good sex — and to keep having good sex over time — couples must be willing to talk about it, they say.

But some people would rather leave their relationships than have those conversations, said Jeffrey Chernin, a marriage and family therapist and the author of “Achieving Intimacy: How to Have a Loving Relationship That Lasts” — especially if things in the bedroom aren’t going particularly well.

“One of the things I often say to couples who are having trouble is: ‘I wish there was another way through this,’” he said. “But the only way I know to have a better sex life, or to resume your sex life, is to discuss it.”

Dr. Chernin acknowledged how stressful those conversations can be, sometimes deteriorating into finger-pointing, belittling or stonewalling. That said, these suggestions may help.

Embrace the awkwardness.

It’s common for partners to have trouble talking about intimacy and desire. Research suggests that even in long-term relationships, people know only about 60 percent of what their partner likes sexually, and only about 25 percent of what they don’t like.

Cyndi Darnell, a sex and relationships therapist in New York City, said her patients frequently tell her that talking about sex is “awkward” — which is especially true “if you’ve spent months or years avoiding it,” she said.

“We’ve been tricked into believing sex is natural,” she added. “But, if it were easy and natural, people wouldn’t struggle with it as much as they do.”

She mentioned one couple she worked with, both in their 50s, who hadn’t had sex in years. Every time they talked about it, they fought. So they sought outside help to get past their embarrassment and anger.

In therapy, they realized that they had only been focused on penetration, but the husband was really longing for closeness and tenderness. And once the wife realized that her husband was not going to “pounce on her” whenever she cuddled with him, they were able to be more sensual with each other — and to talk about what they like to do and why, Ms. Darnell said. But it took a spirit of willingness, curiosity and acceptance.

Death to ‘We need to talk.’

It may be possible to temper the dread that often accompanies these conversations, if you approach them sensitively. “When a partner says, ‘We need to talk,’” Dr. Chernin said, “the other person feels like, ‘I’m going to the principal’s office.’”

Instead, try to:

Focus on problem-solving together

That means saying something like: “On the one hand, I know how difficult this is for us to talk about,” Dr. Chernin said. “On the other hand, I think it’s important for our marriage or for our relationship to be able to have some discussions about our sex life.”

Then ask: “What can we do about it?”

Prepare questions ahead of time

A script offers scaffolding, Ms. Darnell said. She suggested prompts like: “Our relationship is really important to me, and I’d like for sex to be part of it (again). I was curious if that is something you’d be into also?”

Bring in some positives

Maggie Bennett-Brown, a research fellow at the Kinsey Institute and an assistant professor at Texas Tech University, said “it doesn’t have to be explicit.” Maybe you tell your partner that you like it when he hugs you or plans a romantic night on the town.

If it has been a while since you were intimate, it can help to reminisce — and that can segue into a deeper question. “If people have never had a conversation about: ‘What do you enjoy?’ that’s a good first step,” Dr. Bennett-Brown said.

Be mindful of your timing

Be careful about initiating a discussion about sex while in bed, Dr. Chernin said, particularly if you are being critical. (Though some couples may find it easier to talk about sex when they are basking in the afterglow, he said.)

“Think about a conversation as a series of discussions,” Dr. Chernin said. “That way, you’re not putting too much pressure on yourself or your partner.”

Know when to talk to a professional.

If your partner is unwilling to talk — or if the conversation feels painful, not just uncomfortable, Ms. Darnell said — a sex therapist or couples counselor may be able to help mediate.

She did not downplay how high-stakes these conversations can be. But she added that sex may not always be a necessary component of a satisfying romantic relationship.

“One of the questions I often ask my couples for whom sex is a tenuous and difficult issue is: Does this relationship have to be sexual?” she said. She worked with one couple in their 30s and 40s who realized they liked engaging in flirty banter, but did not want to move beyond that. “Permission to not have sex at this phase of their relationship was huge — and a relief,” she said.

“Sex is about so much more than just what we do when our pants are off,” she said.

Catherine Pearson is a Times reporter who writes about families and relationships. More about Catherine Pearson


Julie Radico Psy.D. ABPP

Self-Esteem

It’s OK you can’t solve every problem. Trying to “fix” everything can leave you feeling like a failure.

Updated May 10, 2024 | Reviewed by Ray Parker

  • Your intrinsic value is more than what you can do for other people.
  • You are still worthwhile and can be successful, even if you don’t have all the solutions.
  • Consider which decision will make you feel you’ve stayed true to your values.

In coaching others, I often discuss problem-solving strategies to help individuals think creatively and consider many options when they are faced with challenging situations.

Problem solving [1, 2] includes the following:

  • Define the problem, identify obstacles, and set realistic goals .
  • Generate a variety of alternative solutions to overcome obstacles identified.
  • Choose which idea has the highest likelihood to achieve the goal.
  • Try out the solution in real-life and see if it worked or not.

Problem-solving strategies can be helpful in many situations. Thinking creatively and testing out different potential solutions can help you come up with alternative ways of solving your problems.

While many problems can be solved, there are also situations in which there is no “perfect” solution or in which what seems to be the best solution still leaves you feeling unsatisfied or like you’re not doing enough.

I encourage you to increase your comfort around the following three truths:

1. You can’t always solve everyone else’s problems.

2. You can’t always solve all of your own problems.

3. You are not a failure if you can’t solve every problem.


You can’t always solve everyone else’s problems.

When someone around you needs help, do you feel compelled to find solutions to their problem?

Are you seen as the problem solver at your job or in your close relationships?

Does it feel uncomfortable for you to listen to someone tell you about a problem and not offer solutions?

There are times when others come to you because they know you can help them solve a problem. There are also times when the other person is coming to you not for a solution to their problem, but for support, empathy, and a listening ear.

Your relationships may be negatively impacted if others feel that you don’t fully listen and only try to “fix” everything for them. While this may feel like a noble act, it may lead the other person to feel like they have failed or that you think they are unable to solve their own problems.

Consider approaching such situations with curiosity by saying to the other person:

  • As you share this information with me, tell me how I can best support you.
  • What would be most helpful right now? Are you looking for an empathetic ear or want to brainstorm potential next steps?
  • I want to be sure I am as helpful as I can be right now; what are you hoping to get out of our conversation?

You can’t always solve all of your own problems.

We are taught from a young age that problems have a solution. For example, while solving word problems in math class may not have been your favorite thing to do, you knew there was ultimately a “right” answer. Many times, the real world is much more complex, and many of the problems that you face do not have clear or “right” answers.

You may often be faced with finding solutions that do the most good for the most amount of people, but you know that others may still be left out or feel unsatisfied with the result.

Your beliefs about yourself, other people, and the world can sometimes help you make decisions in such circumstances. You may ask for help from others; some people look to their faith or spirituality for guidance, while others draw on philosophical theories.

Knowing that there often isn’t a “perfect” solution, you may consider asking yourself some of the following questions:

  • What’s the healthiest decision I can make? The healthiest decision for yourself and for those who will be impacted.
  • Imagine yourself 10 years in the future, looking back on the situation: What do you think the future-you would encourage you to do?
  • What would a wise person do?
  • What decision will allow you to feel like you’ve stayed true to your values?

You are not a failure if you can’t solve all of the problems.

If you have internalized feeling like you need to be able to solve every problem that comes across your path, you may feel like a failure each time you don’t.

It’s impossible to solve every problem.


Your intrinsic value is more than what you can do for other people. You have value because you are you.

Consider creating more realistic and adaptive thoughts around your ability to help others and solve problems.

Some examples include:

  • I am capable, even without solving all of the problems.
  • I am worthwhile, even if I’m not perfect.
  • What I do for others does not define my worth.
  • In living my values, I know I’ve done my best.

I hope you utilize the information above to consider how you can coach yourself the next time you:

  • Start to solve someone else’s problem without being asked.
  • Feel stuck in deciding the best next steps.
  • Judge yourself negatively.

1. D’Zurilla, T. J., & Goldfried, M. R. (1971). Problem solving and behavior modification. Journal of Abnormal Psychology, 78(1), 107.

2. D’Zurilla, T. J., & Nezu, A. M. (2010). Problem-solving therapy. Handbook of Cognitive-Behavioral Therapies, 3(1), 197-225.

Julie Radico Psy.D. ABPP

Julie Radico, Psy.D. ABPP, is a board-certified clinical psychologist and coauthor of You Will Get Through This: A Mental Health First-Aid Kit.
