Principles of Task Analysis and Modeling: Understanding Activity, Modeling Tasks, and Analyzing Models

Célia Martinie, Philippe Palanque, and Eric Barboni

Task analysis identifies user goals and tasks when using an interactive system. When users perform real-life work, task analysis can be a cumbersome process that gathers a huge amount of unorganized information. Task models provide a means for analysts to organize the information gathered during task analysis in an abstract way and to detail it further if needed. This chapter presents the benefits of using task models for task analysis, with a practical view on the process of building task models. As task models can be large, it is important to provide the analyst with computer-based tools for editing and simulating them. In this chapter, we illustrate the presented concepts with the HAMSTERS notation and its associated eponymous tool.


References

https://www.irit.fr/ICS/tools/

Anderson R, Carroll J, Grudin J, McGrew L, Scapin D (1990) Task analysis: the oft missing step in the development of computer-human interfaces; its desirable nature, value, and role. INTERACT:1051–1054


Annett J (2004) Hierarchical task analysis. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 67–82

Annett J, Duncan K (1967) Task analysis and training design. Occup Psychol 41:211–221

Bernhaupt R, Palanque P, Drouet D, Martinie C (2018) Enriching task models with usability and user experience evaluation data. In: Bogdan C, Kuusinen K, Lárusdóttir M, Palanque P, Winckler M (eds) Human-centered software engineering. HCSE 2018, Lecture Notes in Computer Science, vol 11262. Springer, Cham

Bouzekri E, Martinie C, Palanque P, Atwood K, Gris C (2021) Should I add recommendations to my warning system? The RCRAFT framework can answer this and other questions about supporting the assessment of automation designs. In: Ardito C et al (eds) Human-computer interaction – INTERACT 2021. INTERACT 2021, Lecture Notes in Computer Science, vol 12935. Springer, Cham. https://doi.org/10.1007/978-3-030-85610-6_24


Broders N, Martinie C, Palanque P, Winckler M, Halunen K (2020) A generic multimodels-based approach for the analysis of usability and security of authentication mechanisms. In: Bernhaupt R, Ardito C, Sauer S (eds) Human-centered software engineering. HCSE 2020, Lecture Notes in Computer Science, vol 12481. Springer, Cham. https://doi.org/10.1007/978-3-030-64266-2_4

Calvary G, Coutaz J, Nigay L (1997) From single-user architectural design to PAC*: a generic software architecture model for CSCW. In Proc. of CHI '97. ACM, 242–249

Campos JC, Fayollas C, Gonçalves M, Martinie C, Navarre D, Palanque P, Pinto M (2017) A more intelligent test case generation approach through task models manipulation. Proc ACM Hum-Comput Interact. 1, EICS, Article 9, 20 p

Card S, Moran T, Newell A (1983) The psychology of human-computer interaction. Erlbaum, ISBN 0898598591, pp. I-XIII, 1–469

Cockton G, Woolrych A (2001) Understanding inspection methods: lessons from an assessment of heuristic evaluation. Springer, People and Computers, pp 171–192

Diaper D (2004) Understanding task analysis for human-computer interaction. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates

Dix A, Ramduny-Ellis D, Wilkinson J (2004) Chapter 19: Trigger analysis – understanding broken tasks. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 381–400

Ellis CA, Gibbs SJ, Rein G (1991) Groupware: some issues and experiences. Comm ACM 34(1):39–58


Fahssi R, Martinie C, Palanque P (2015) Enhanced task modelling for systematic identification and explicit representation of human errors. IFIP TC 13 INTERACT conference, LNCS 9299, part IV, Springer Verlag

Forbrig P, Martinie C, Palanque P, Winckler M, Fahssi R (2014) Rapid task-models development using sub-models, sub-routines and generic components. In: IFIP conference on human-centric software engineering, HCSE, pp 144–163

Friedenthal S, Moore A, Steiner R (2011) A practical guide to SysML: the systems modeling language, 2nd edn. The MK/OMG Press

Gong R, Elkerton J (1990) Designing minimal documentation using the GOMS model: a usability evaluation of an engineering approach. CHI 90 Proc ACM DL

Greenberg S (2004) Working through task-centered system design. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 49–66

Gribova V (2008) A method of context-sensitive help generation using a task project. Int J Info Theories Appl 15:391–395

Guerrero J, Vanderdonckt J, Gonzalez Calleros J (2008) FlowiXML: a step towards designing workflow management systems. J Web Eng:163–182

Heer J, Agrawala M (2008) Design considerations for collaborative visual analytics. Info Visualiz 7(1):49–62

International Organization for Standardization (2018) ISO 9241-11:2018 Ergonomics of human-system interaction – Part 11: Usability: definitions and concepts. ISO

John B, Kieras DE (1996) The GOMS family of user interface analysis techniques: comparison and contrast. ACM Trans Comput-Hum Interact 3(4):320–351

Johnson P (1992) Human-computer interaction: psychology, task analysis and software engineering. McGraw Hill, Maidenhead

Johnson P, Johnson H, Hamilton F (2000) Getting the knowledge into HCI: theoretical and practical aspects of task knowledge structures. In: Schraagen J, Chipman S, Shalin V (eds) Cognitive task analysis. LEA

Kieras D (2004) GOMS models for task analysis. In: Diaper D, Stanton N (eds) The handbook of task analysis for human-computer interaction. Lawrence Erlbaum Associates, pp 83–116

Lallai G, Loi ZG, Martinie C, Palanque P, Pisano M, Spano LD (2021) Engineering task-based augmented reality guidance: application to the training of aircraft flight procedures. Interact Comput 33(1):17–39

Martinie C, Palanque P, Navarre D, Winckler M, Poupart E (2011a) Model-based training: an approach supporting operability of critical interactive systems: application to satellite ground segments, EICS 2011, ACM DL. pp. 141–151

Martinie C, Palanque P, Barboni E, Ragosta M (2011b) Task-model based assessment of automation levels: application to space ground segments. Proc of the IEEE SMC, Anchorage

Martinie C, Palanque P, Winckler M (2011c) Structuring and composition mechanisms to address scalability issues in task models. In: IFIP TC 13 INTERACT conference. Springer Verlag, pp 589–609

Martinie C, Palanque P, Ragosta M, Fahssi R (2013) Extending procedural task models by systematic explicit integration of objects, knowledge and information. Europ Conf Cognitive Ergonomics: 23-34, ACM DL

Martinie C, Barboni E, Navarre D, Palanque P, Fahssi R, Poupart E, Cubero-Castan E (2014) Multi-models-based engineering of collaborative systems: application to collision avoidance operations for spacecraft. Proc. of the 2014 ACM SIGCHI symposium on engineering interactive computing systems (EICS '14). ACM, New York, pp 85–94

Martinie C, Palanque P, Bouzekri E, Cockburn A, Canny A, Barboni E (2019) Analysing and demonstrating tool-supported customizable task notations. PACM on human-computer interaction, Vol. 3, EICS, Article 12, 26 p

McGrath JE (1984) Groups: interaction and performance. Prentice Hall, Inc., Englewood Cliffs


Mori G, Paternò F, Santoro C (2002) CTTE: support for developing and analyzing task models for interactive system design. IEEE Trans Softw Eng 28(8):797–813

Navarre D, Palanque P, Bastide R, Paternò F, Santoro C (2001) A tool suite for integrating task and system models through scenarios. In: DSV-IS 2001, LNCS 2220. Springer

O’Donnell RD, Eggemeier FT (1986) Workload assessment methodology. In: Handbook of perception and human performance, vol II: cognitive processes and performance. Wiley, pp 42–49

Palanque P, Martinie C (2011) Contextual help for supporting critical systems' operators: application to space ground segments. In: Activity in Context workshop, AAAI conference on Artificial Intelligence

Palanque P, Bastide R, Dourte L (1993) Contextual help for free with formal dialogue design. Proc HCI Int 1993:615–620

Pangoli S, Paternò F (1995) Automatic generation of task-oriented help. ACM Symp UIST:181–187

Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. Syst Man Cybernetics Part A: Syst Humans IEEE Trans 30(3):286–297

Paternò F (1999) Model-based design and evaluation of interactive applications. Springer. ISBN 1-85233-155-0


Paternò F (2002) Task models in interactive software systems. In: Handbook of software engineering and knowledge engineering, vol 1. World Scientific, pp 1–19

Paternò F, Mancini C (1999) Developing task models from informal scenarios. CHI Extended Abstracts pp 228–229

Paternò F, Santoro C (2002) Preventing user errors by systematic analysis of deviations from the system task model. Int J Hum Comput Stud 56(2):225–245

Paternò F, Zini E (2004) Applying information visualization techniques to visual representations of task models. In Proceedings of the 3rd annual conference on Task models and diagrams (TAMODIA '04). ACM, New York, pp 105–111

Pinelle D, Gutwin C, Greenberg S (2003) Task analysis for groupware usability evaluation: modeling shared-workspace tasks with the mechanics of collaboration. ToCHI 10(4):281–311

Roschelle J, Teasley SD (1995) The construction of shared knowledge in collaborative problem solving. In: O'Malley CE (ed) Computer-supported collaborative learning. Springer, pp 69–197

Rosson MB, Carroll JM (2002) Chapter 53: Scenario-based design. In: Jacko J, Sears A (eds) The human-computer interaction handbook: fundamentals, evolving technologies and emerging applications. Lawrence Erlbaum Associates, pp 1032–1050

Rumbaugh J, Jacobson I, Booch G (2004) Unified modeling language reference manual. Pearson Higher Education

Sinnig D, Chalin P, Khendek F (2013) Use case and task models: an integrated development methodology and its formal foundation. ACM TSEM 22(3):27

Stapleton J (ed) (2003) DSDM: business focused development. Pearson Education

van der Veer GC, Lenting VF, Bergevoet BA (1996) GTA: groupware task analysis - modeling complexity. Acta Psychol 91:297–322

van Welie M, van der Veer GC (2003) Groupware task analysis. In: Handbook of cognitive task design. LEA, NJ, pp 447–476

Winckler M, Palanque P, Freitas C (2004) Tasks and scenario-based evaluation of information visualization techniques. In Proceedings of the 3rd annual conference on Task models and diagrams (TAMODIA '04). ACM, New York, NY, USA, pp 165–172


Author information

Authors and affiliations

Institute of Research in Informatics of Toulouse (IRIT), Université Paul Sabatier – Toulouse III, Toulouse, France

Célia Martinie, Philippe Palanque & Eric Barboni


Corresponding author

Correspondence to Célia Martinie.

Editor information

Editors and affiliations

Université catholique de Louvain, Louvain-la-Neuve, Belgium

Jean Vanderdonckt

IRIT - Interactive Critical Sys Group, Paul Sabatier University, Toulouse, France

Philippe Palanque

I3S, INRIA wimmics/SPARKS team, Université Nice Sophia Antipolis, Sophia Antipolis Cedex, France

Marco Winckler


Copyright information

© 2022 Springer Nature Switzerland AG

About this entry

Cite this entry: Martinie, C., Palanque, P., Barboni, E. (2022). Principles of Task Analysis and Modeling: Understanding Activity, Modeling Tasks, and Analyzing Models. In: Vanderdonckt, J., Palanque, P., Winckler, M. (eds) Handbook of Human Computer Interaction. Springer, Cham.

DOI: https://doi.org/10.1007/978-3-319-27648-9_57-1

Received: 17 December 2021; Accepted: 17 May 2022; Published: 17 November 2022

Publisher: Springer, Cham

ISBN: 978-3-319-27648-9


Skill components of task analysis

Some task analysis methods break down a task into a hierarchy of subgoals. Although an important tool in many fields of study, learning to create such a hierarchy (redescription) is not trivial. To further the understanding of what makes task analysis a skill, the present research examined novices' problems with learning Hierarchical Task Analysis and captured practitioners' performance. All participants received a task description and analyzed three cooking and three communication tasks by drawing on their knowledge of those tasks. Thirty-six younger adults (18–28 years) in Study 1 analyzed one task before training and five afterwards. Training consisted of a general handout that all participants received and an additional handout that differed between three conditions: a list of steps, a flow diagram, and a concept map. In Study 2, eight experienced task analysts received the same task descriptions as in Study 1 and demonstrated their understanding of task analysis while thinking aloud. Novices' initial task analyses scored low on all coding criteria. Performance improved on some criteria but remained well below 100 % on others. Practitioners' task analyses were 2–3 levels deep but also scored low on some criteria. A task analyst's purpose of analysis may be the reason for the higher specificity of analysis. This research furthers the understanding of Hierarchical Task Analysis and provides insights into the varying nature of task analyses as a function of experience. The derived skill components can inform training objectives.

Introduction

Some task analysis (TA) methods are used to understand, discover, and represent a task in terms of goals and subgoals, for example, Hierarchical Task Analysis (HTA; Annett and Duncan 1967) and Goal-Directed Task Analysis (GDTA; Endsley et al. 2003). Although widely described in terms of procedure and underlying skills (e.g., Crandall et al. 2006; Kirwan and Ainsworth 1992), there is still much to be learned. The few existing training studies of HTA we found indicated that learning HTA, for example, is not trivial (e.g., Patrick et al. 2000; Stanton and Young 1999). The present research approached TA as a skill acquisition problem to be understood through scientific inquiry. Two studies were designed to characterize novice and experienced TA performance and identify skill components.

Task analysis (TA)

TA refers to a collection of methods used in a wide variety of areas. In general, a task analyst aims to understand a task or work, its context, and its performance for the purpose of improving or supporting effectiveness, efficiency, and safety via design or training (Annett 2004; Diaper 2004; Hoffman and Militello 2009; Jonassen et al. 1989; Kirwan and Ainsworth 1992; Redish and Wixon 2003). To illustrate, TA is used in the nuclear and defense industries (Ainsworth and Marshall 1998), has helped evaluate food menu systems (Crawford et al. 2001), identify errors in administering medication (Lane et al. 2006), assess shopping and phone skills in children with intellectual disabilities (Drysdale et al. 2008), and further the understanding of troubleshooting (Schaafstal et al. 2000).

The general process of TA is iterative and involves planning and preparing for the TA, gathering data, and organizing and analyzing them. It involves reporting (i.e., presenting findings and recommending solutions) and verifying the outcome (e.g., Ainsworth 2001; Clark et al. 2008; Crandall et al. 2006). TA methods differ depending on which phase(s) of TA they focus on, what aspects of a task they target (e.g., behaviors, knowledge, reasoning), how results are presented, and what type of recommendations result.

HTA and hierarchical organization

HTA, used for over 40 years, is admittedly difficult to learn (e.g., Diaper 2004; Stanton 2006). Data organization in HTA occurs through two processes: redescription and decomposition. During redescription the analyst defines the main goal (task description) and breaks it down ("re"-describes it) into lower-level subgoals, recursively, until reaching a predetermined stopping criterion. This results in a hierarchy (HTA diagram), as shown in Fig. 1, that provides a task overview and serves as input and framework for further analysis. During decomposition, the analyst inspects each lower-level subgoal with respect to categories such as input required, resulting action, feedback received, time to completion, or errors observed (Ainsworth 2001; Shepherd 1998), the result of which is better represented in a tabular format (Shepherd 1976).

Fig. 1 Example task redescription (breakdown of goals into subgoals, including plan). This example has a depth of two and a breadth of five

Redescription is not trivial (Stanton 2006). The Sub-Goal Template method (SGT; Ormerod and Shepherd 2004; Shepherd 1993) provides templates to solve this problem but refers the analyst back to redescription when no template is available. Few studies have specifically investigated novices' problems when learning how to apply HTA. They found, for example, that novices who had just received training tended to redescribe a task in terms of specific actions rather than subgoals (Patrick et al. 2000; Stanton and Young 1999), a challenge that novices learning GDTA encounter as well (Endsley et al. 2003). Also difficult are redescribing a higher-level subgoal equivalently into lower-level subgoals (i.e., into more than just one subgoal), knowing which subgoals to include or exclude, and specifying plans at all levels of analysis (Patrick et al. 2000). Plans (Fig. 1) specify the sequence and conditions of accomplishing subgoals (Ainsworth 2001).
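To make redescription concrete, here is a minimal sketch in Python (our illustration; the class name, fields, and example goals are assumptions for this sketch, not notation prescribed by HTA or by the studies discussed). Each node of the hierarchy carries a goal, an optional plan governing its children, and the child subgoals produced by redescription:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Subgoal:
    """One node of an HTA redescription: a goal, a plan, and child subgoals."""
    goal: str                           # verb-noun description, e.g. "obtain bread"
    plan: Optional[str] = None          # sequence/conditions for accomplishing children
    children: List["Subgoal"] = field(default_factory=list)

# A hypothetical (illustrative) redescription of "make sandwich":
hta = Subgoal(
    "make sandwich",
    plan="do 1, then 2, then 3",
    children=[
        Subgoal("obtain ingredients"),
        Subgoal("assemble sandwich",
                plan="do 1, then 2",
                children=[Subgoal("spread condiments"),
                          Subgoal("add fillings")]),
        Subgoal("serve sandwich"),
    ],
)
```

Stopping the recursion corresponds to the predetermined stopping criterion mentioned above; decomposition would then attach the tabular categories (input, action, feedback, and so on) to the leaf nodes.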

Addressing the training need

Studies of training HTA have pointed out general training needs, but the question remains how, specifically, to train redescription and what performance looks like across a range of tasks and levels of proficiency. TA is a complex cognitive skill (e.g., Patrick et al. 2000), and the 4C/ID-Model of instructional design suggests creating a hierarchy of constituent skills as the first of four activities in a principled skill decomposition (van Merriënboer 1997). Therefore, our overarching question was: what component skills underlie TA, and particularly hierarchical redescription?

Understanding skill components will help determine training objectives, assess performance, and tailor learner feedback. This is important as the need for training in TA and specific methods is likely to increase, given the call for more (and more competent) practitioners and new TA methods (Crandall et al. 2006). Training is also essential because a hierarchical redescription of subgoals feeds into subsequent TA methods (e.g., SHERPA, TAFEI, GOMS). A better understanding of the skill will also advance the discussion on how to assess the quality of a TA.

To accomplish our research goal, we chose two levels of TA experience, six tasks for repeated measurement, and one TA method to train novices (HTA), focusing on understanding redescription. Data from novices identify barriers to initial progress. Data from practitioners using TA in their jobs inform goals for skill development. Comparing the two delineates skill components through their absence from novice performance (Seamster et al. 2000).

Tasks to be analyzed

The wide usage of TA makes it challenging to select tasks that are pertinent to all areas. Previous HTA training tasks included painting a door, making a cup of tea (Patrick et al. 2000), evaluating a radio-cassette machine (Stanton and Young 1999), as well as making a piece of toast, a cup of coffee, a phone call, and a South African main dish (Felipe et al. 2010). We reviewed tasks used in the literature (Craik and Bialystok 2006; Davis and Rebelsky 2007; Felipe et al. 2010; Patrick et al. 2000; Shepherd 2001) and developed a set of six tasks (see Table 1). We chose two familiar domains (cooking, communication) to allow novices to focus on redescription. Not having to learn about a new domain and extract knowledge from a subject-matter expert at the same time should reduce novices' intrinsic cognitive load (high degree of complexity; cf. Carlson et al. 2003). Moreover, being familiar with a task procedure could influence the availability of task-related information (Patrick et al. 2000); thus, each domain had one task for which the procedure was specific, one for which it was general, and one for which it was unknown (unfamiliar tasks).

Table 1. Overview of the analyzed tasks

Overview of research

To determine skill components in redescription we conducted two studies. Novices participated in Study 1 and practitioners in Study 2. Years of experience using TA served as a proxy for practitioners' proficiency. The goals of Study 1 were to characterize novices' HTA (product and process) and determine the effectiveness of three types of training. The goals of Study 2 were to characterize practitioners' TA (product and process). All participants analyzed six tasks. Novices analyzed one task before and five after training, whereas practitioners analyzed the tasks while thinking aloud. Questionnaires assessed declarative knowledge (Study 1) and strategic knowledge (Studies 1 and 2).

Study 1: novices

As background to characterizing novices' performance and determining the effectiveness of three types of training, we considered the scenario in which a novice reads an overview of HTA and applies that knowledge shortly thereafter. Although there is the occasional course devoted to task analysis (Crandall et al. 2006) and some novices receive initial training followed by months of expert mentoring (Stanton 2006; Sullivan et al. 2007), current methods of learning often include using books, web resources, or brief workshops (Crandall et al. 2006). Therefore, it was worthwhile to assess the limits of short declarative training procedures.

Declarative training

This study builds on Felipe et al. (2010), who employed two sets of instructions: one for all participants and one that differed across experimental conditions. Previous literature on training HTA used custom-made instructions (Patrick et al. 2000; Stanton and Young 1999). This study used declarative training in the form of an introduction to HTA, outlining the main concepts, that was available from the literature (Shepherd 2001). Given novices' problems adhering to the HTA format (Patrick et al. 2000), we removed references to the HTA format (hierarchy and tabular format) to learn what formats novices naturally choose to represent their analysis, rather than assessing how well novices adhered to the specific HTA format.

Instructions on HTA usually include a flowchart or a list of steps (e.g., Shepherd 2001). We compared the relative benefits of three types of additional instructions that emphasize different aspects of conducting HTA (procedures, decisions, and goals), illustrated by three types of spatial diagrams: matrix, network, and hierarchy. Figure 2 shows a list of steps exemplifying a matrix, which statically relates element pairs in rows and columns. The decision-action diagram is a network example, depicting dynamic information as a graph or path diagram. A concept map represents a hierarchy, that is, a tree diagram with information rigidly organized in nodes and links at different levels (Novick 2006; Novick and Hurley 2001).

Fig. 2 Additional instructions from Study 1

We expected participants' HTA to reflect the instructions' emphasis on procedures, decisions, or goals. Novices who receive step-by-step (procedural) instructions on how to conduct HTA do not need to generate their own. Given novices' focus on lower levels of analysis (Patrick et al. 2000), these HTA were expected to do exactly that: specify one task procedure. Participants receiving information about the goals of HTA (concept map) had to generate their own procedures for conducting HTA, and thus we expected their HTA to contain more (higher-level) subgoals (as found by Felipe et al. 2010) and be more general. The decision-action diagram condition should then fall in between these two conditions.

Research on procedural training (how to do something) versus conceptual training (drawing attention to concepts and why) suggests an immediate benefit of procedural training on trained tasks but a benefit of conceptual training for novel tasks (e.g., Dattel et al. 2009; Hickman et al. 2007; Olfman and Mandviwalla 1994). In this study, analyzing an unfamiliar task constitutes a novel task. Given the absence of specific task information, we expected that all participants would produce general HTA; possibly more so participants receiving the concept map.

Evaluating novices’ HTA

Novices' HTA were characterized on seven categories: format; breadth and depth; subgoals; plan; main goal; criteria; and versatility. First, we expected participants to use lists and flowcharts in the absence of the HTA format, given novices' tendency to use lists and flowcharts even when instructed to use a hierarchy (Patrick et al. 2000). Second, we quantified the dimensions of HTA through depth (the number of levels) and breadth (the number of subgoals making up the highest level), because "hierarchical" means that an HTA is at least two levels deep, and the literature provides guidance regarding HTA breadth. Rules of thumb regarding breadth include four to five subgoals (Stanton and Young 1999), no more than seven (Ainsworth 2001), between three and ten (Stanton 2006), or between four and eight (Patrick et al. 1986, as cited by Stanton 2006). We chose a breadth between three and eight subgoals as most consistent with all recommendations.

The third question was whether participants recognized the importance of subgoals to HTA and what subgoals were identified. A subgoal was a verb-noun pair (e.g., "obtain bread") and was compared to a master HTA created for each task. Fourth, we were interested in whether participants would recognize the importance of a plan. Fifth, we assessed whether the participant stated the main goal (e.g., making a phone call), which provides important context for the HTA. Sixth was whether participants included satisfaction criteria, which define the conditions for determining whether a task was completed satisfactorily. Last, we determined whether the HTA was versatile (general) and accounted for at least three task variations, for example, using different types of cell phones or a rotary phone.
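To make the measurable criteria above concrete, the following sketch (reusing the hypothetical Subgoal class and hta example from the earlier sketch; the master list is likewise our invention, not the study's actual coding material) shows how depth, breadth, and subgoal matching could be computed:

```python
def depth(node: Subgoal) -> int:
    """Levels of redescription below a goal; 'hierarchical' implies depth >= 2."""
    return 0 if not node.children else 1 + max(depth(c) for c in node.children)

def breadth(node: Subgoal) -> int:
    """Number of subgoals making up the highest level of the hierarchy."""
    return len(node.children)

# Hypothetical master subgoals (verb-noun pairs) for "making sandwich":
MASTER = {"obtain ingredients", "assemble sandwich", "serve sandwich", "clean up"}

def match_subgoals(identified):
    """Split a participant's verb-noun pairs into matched and extra subgoals."""
    return identified & MASTER, identified - MASTER

assert depth(hta) == 2         # at least two levels deep, hence hierarchical
assert 3 <= breadth(hta) <= 8  # within the recommended breadth range
matched, extra = match_subgoals({"obtain ingredients", "toast bread"})
print(matched, extra)          # {'obtain ingredients'} {'toast bread'}
```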

Overview of Study 1

The goals of this study were to characterize novices' HTA (products and process) and determine the effectiveness of three types of training. Novices analyzed one task before and five tasks after training, which allowed assessment of naïve understanding of TA as well as trained performance. Training consisted of an introduction to HTA and a handout that differed between training conditions. Procedural knowledge (HTA products) was assessed on seven characteristics. Declarative knowledge was assessed via a recall test, and strategic knowledge was elicited via a questionnaire.

Method for novices

Participants

We report data collected from 11 male and 25 female undergraduate students. Participants ranged in age from 18 to 24 years (M = 20.6 years, SD = 1.5), with the majority being Caucasian (75 %). Participants' majors reflected the variety of majors offered at a large research university. Table 2 shows descriptive data and that participants did not significantly differ in their general abilities (measures of perceptual speed, working memory, and vocabulary). The experiment lasted approximately 2 h, for which participants received two extra course credits.

Table 2. Characteristics of participants in Study 1 (N = 12 per condition)

Alpha level was set at .05; none of the group differences were significant

Participants had to fulfill two criteria to be considered a novice. First, their initial TA was not rendered in the hierarchical or tabular format prescribed by HTA. Second, participants had to report having no experience conducting a TA outside of class, as assessed by three questions in the Demographics and Experience Questionnaire. Although 25 % of the participants had heard about TA in a class, their initial TA was not in HTA format, so they were included.

Table 1 shows the six tasks to be analyzed and their range of expected degree of familiarity (low, high) and procedural specificity (specific, general). Tasks were simple enough for a draft to be completed within a short period (15 min).

Training materials

All participants received a three-page handout providing a general Introduction to Hierarchical Task Analysis adapted from Shepherd (2001). It included a brief overview of the history and goals of HTA and main concepts such as its hierarchical nature, goals, subgoals, constraints, and plans for accomplishing the goal.

Participants in each training condition received an additional one-page handout with Condition-Specific Instructions, shown in Fig. 2. These additional instructions focused on redescribing a higher-level goal into lower-level subgoals but also included initial TA activities such as defining the purpose of the analysis and gathering data. In the Steps Condition, the additional information was presented as a bulleted list and focused on the sequence of steps (adapted from Stanton 2006, p. 62ff). The Decision-Action Diagram Condition provided a diagram illustrating the flow of decisions and actions (taken from Shepherd 1985). The Concept Map Condition contained the information rendered as a concept map, including high-level goals of HTA (based on Shepherd 2001). To ensure that all participants were exposed to the same topics, information about determining whether a redescription was equivalent was added to the Steps Condition, and the Decision-Action Diagram was amended with information about defining the purpose of the analysis and gathering data.

Questionnaires

Participants completed three questionnaires. The Demographics and Experience Questionnaire collected data on age, gender, education, and TA experience. The Task Questionnaire assessed familiarity with each task analyzed in the study (1 = not very familiar, 5 = very familiar) and how often those tasks were performed in everyday life (1 = never, 5 = daily). The Task Analysis Questionnaire assessed the declarative and strategic knowledge that participants gained about HTA. Declarative knowledge was assessed by prompting participants to list and briefly describe the main features of HTA. Strategic knowledge was elicited by seven open-ended questions about how participants had identified goals and subgoals, indicated order, decided on the breadth and depth of the analysis, and chosen which elements to analyze further. The Task Analysis Questionnaire also asked participants to rate the difficulty of, and their confidence in, each task analysis; however, those data are not presented here.

Procedure

Participants read the informed consent and completed the ability tests listed in Table 2. To obtain a baseline measure for comparison after training, participants received a written task description (as listed in Table 1) and were asked to perform a TA of either making sandwich or making phone call. Participants were free to use 11 × 17 in. paper as needed. Participants were not given a specific purpose for the TA. If participants had questions, they were directed to work to the best of their knowledge and understanding of what it means to perform a TA. The experimenter collected the paper (HTA product) when participants put down their pen or pencil to indicate that they were done or when 15 min had passed. After the initial TA, participants received the Introduction to Hierarchical Task Analysis and had 10–15 min to familiarize themselves with it. Then participants received the Condition-Specific Instructions for their training condition and were required to spend at least 5 min with this extra material but had up to 15 min available. Participants were allowed to make notes on the instructions.

After the instruction phase, participants analyzed two more tasks of the same domain following the same procedure as described above. Participants then completed the Demographics and Experience Questionnaire and contact information sheet before analyzing the three tasks of the second domain. Participants had 15 min for each analysis and could refer to the instructions throughout. The experimenter collected all instructions after the last HTA, and participants completed the Task Questionnaire and Task Analysis Questionnaire before being debriefed.

This experiment was a between-participants design with three training conditions: Steps, Decision-Action Diagram, and Concept Map. Task was a repeated measure (participants analyzed six tasks). Domain order (cooking, communication) was counterbalanced. Within a domain, task order was fixed as listed in Table 1; half the participants analyzed making sandwich (making phone call) as Task 1 (before training) and the other half as Task 4 (after training). Participants were tested individually and randomly assigned to one training condition and counterbalance version. Procedural knowledge was assessed by coding participants' HTA. Declarative knowledge was determined via the first question of the Task Analysis Questionnaire (list five main features of HTA). The HTA process (strategic knowledge) was assessed via answers to the decision-factor questions in the Task Analysis Questionnaire.

Results for novices

Data analysis addressed three questions: What are the HTA product characteristics before and after training on the seven criteria? Was there a beneficial effect of training? What strategies characterize the HTA process?

Task familiarity

A repeated-measures ANOVA (task by condition by version) confirmed that familiarity ratings differed between tasks (main effect of task, F = 711.79, df = 1.7, p < .01, ηp² = .96) but not between training conditions (p = .14) or counterbalance versions (p = .89). The reported F value and degrees of freedom are Greenhouse-Geisser corrected. As expected, participants were very familiar with making sandwich and making phone call (high familiarity, Median = 5, range = 0) and unfamiliar with making Vetkoek and sharing pictures using Adgers (low familiarity, Median = 1, range = 0). Intermediate to high familiarity ratings emerged for making breakfast (Median = 5, range = 1) and arranging meeting (Median = 4, range = 4). Frequency ratings were in line with familiarity ratings: high for the high-familiarity tasks and low (never) for the low-familiarity tasks. Thus, the task manipulation was successful.

Coding scheme

Table 3 shows the coding scheme for assessing novices' procedural and declarative knowledge. The categories were derived from Patrick et al. (2000) and the Introduction to Hierarchical Task Analysis that participants received. Two coders coded all material to ensure consistency of coding. Disagreements were resolved through discussion. Overall coder agreement for TA products was 79 % (range: 74–85 %; mean Cohen's kappa = .73, range .68–.81). The total number of HTA features listed in the Task Analysis Questionnaire and included in data analysis was 169 (180 total minus 5 blanks and 6 duplicates). Overall coder agreement for declarative knowledge was 85 % (kappa = .81).
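As a reminder of what these agreement statistics mean: Cohen's kappa corrects raw percent agreement for agreement expected by chance, via κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the proportion expected by chance. As a worked illustration (our arithmetic, not figures reported in the study), an observed agreement of p_o = .79 together with κ = .73 implies a chance-expected agreement of roughly p_e ≈ .22, since (.79 − .22) / (1 − .22) ≈ .73.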

Table 3. Study 1 coding scheme for task analyses and questionnaires

Format of HTA

When examining what format participants would naturally choose to render their TA, we expected to find lists and flowcharts. Data showed that the most common format was a list (58.3 %), usually a numbered list. Participants also used a flowchart (8.3 %), pictures such as shown in the right panel of Fig. 3 (11.1 %), or a combination of the above (22.2 %). One participant even initially acted out the task. Thus, participants did prefer lists but also drew on other formats.

Fig. 3 Example task analyses from Study 1 of novice participants before they read instructions (a) and after they read the instructions (b)

After training, participants still preferred a list format (59.4 % of all HTA produced after training), not counting participants who combined a list with another format. The second most frequent choice was a combination of formats (21.1 %), most often a list combined with a flowchart or another list. Two new formats emerged: a hierarchy and a narrative (paragraphs of text). Flowcharts and hierarchies accounted for 12.8 % of all trained HTA, whereas narratives and other formats made up the remaining 6.7 %. Thus, participants showed a strong preference for a list format both before and after training.

Training conditions differed in the extent to which participants used lists, flowcharts, or combinations after training (χ² = 42.23, df = 6, p < .01). The majority of participants in the Steps Condition preferred either lists (45 %) or flowcharts (31.7 %), the latter of which included the two participants who used a hierarchy. In contrast, participants in the Decision-Action Diagram and Concept Map Conditions had a strong preference for lists (61.7 and 71.7 %) or combined formats (30 and 23.3 %).

Depth and breadth of HTA

The second goal was to describe HTA depth and breadth. If participants created the appropriate procedural knowledge from the training material, then HTA conducted after training should be deeper than before. HTA depth was determined at the deepest level by counting how often a participant created subdivisions. Before training, making sandwich had an average depth of 1.3 levels (SD = .5) and making phone call a depth of 1.1 (SD = .3). Figure 3a shows two analyses with a depth of one, representative of the initial, untrained HTA (Task 1).

To understand the effect of training, HTA for making sandwich and making phone call were compared between participants who analyzed these tasks as Task 1 (before training) and participants who analyzed them as Task 4 (after training). Figure 4 shows that HTA for Task 4 (after training) were significantly deeper than for the initial Task 1 (making sandwich: F(1,36) = 15.85, p < .01, ηp² = .35; making phone call: F(1,36) = 16.81, p < .01, ηp² = .36). Figure 3b shows an HTA produced after training with a depth of three: the first level is labeled "goals", the second level is labeled "subgoals", and the third level is the numbered list. These data show that training was successful in deepening the HTA.

Fig. 4 Study 1: Depth and breadth of task analyses for making sandwich and making phone call before and after training

To explore the nature of HTA breadth, we determined whether participants naturally rendered the highest level of their HTA with between three and eight subgoals, as recommended by the literature. For example, Fig. 3a shows a breadth of five (left) and three (right), and Fig. 3b shows a breadth of three. Before training, the average breadth at the highest level was 5.5 subgoals (SD = 2.4) for making sandwich and 4.2 subgoals (SD = 1.7) for making phone call. HTA breadth did not significantly differ between Task 1 (untrained) and Task 4 (after training) (making sandwich: p = .64; making phone call: p = .81) and was within the desired range of three to eight subgoals, albeit at the narrow end.

A repeated-measures ANOVA for the breadth and depth of the five trained HTA (by task order) showed that depth and breadth remained the same across the five trained tasks (depth: p = .12; breadth: p = .50), training conditions (depth: p = .21; breadth: p = .19), and counterbalance versions (depth: p = .59; breadth: p = .55). None of the interactions was significant. This analysis of novices' procedural knowledge showed that HTA depth improved after training and that breadth was within the limits of the breadth recommendations. However, as Table 4 shows, some participants continued to create TA that were shallow and/or too narrow or too broad.

Table 4. Breadth and depth of task analyses

Participants’ answers to the declarative knowledge test complete this assessment. If participants recognized that a hierarchy was important to HTA (after all, it is part of the name), they should have listed it as one of the five main features of HTA. However, no participant listed “hierarchical” as a main feature of HTA. This may indicate a lack of awareness of the hierarchical nature of HTA, which is consistent with some participants continuing to produce a depth of one.

Subgoals

To assess what subgoals participants identified, two coders coded 2,417 verb-noun pairs with respect to the master HTA. Novices sometimes specified just nouns without a verb (e.g., "phone"), which were not coded. We also noticed, but did not quantify, that novices tended to chunk subgoals. For example, one bullet point would have three subgoals listed in one sentence rather than each as a sub-bullet.

The first question was whether participants understood the importance of subgoals to HTA. Participants illustrated this in three ways. First, "subgoal" was one of the top three recalled features in the declarative knowledge test (75 % of participants). Second, participants included the label "subgoal" in 33.9 % of the five TA completed after training, as seen in Fig. 3b. Third, the total number of subgoals listed for making sandwich and making phone call doubled after training: overall, participants identified 233 subgoals when analyzing these as Task 1 (untrained) compared to 473 subgoals noted by participants who analyzed the two tasks after training (Task 4). This represents an increase from an average of nine subgoals per participant (SD = 4, range: 4–20) to 16 subgoals (SD = 6, range: 5–25) for making sandwich and from four subgoals (SD = 3, range: 0–9) to 10 subgoals (SD = 6, range: 1–24) for making phone call. Participants thus indicated on a number of measures that they understood the importance of subgoals to HTA.

If participants generated the required procedural knowledge for redescription, they should include main level subgoals. Participants were expected to identify more main level subgoals for unfamiliar tasks and if they were in the Concept Map Condition. Although participants identified more subgoals after training, their focus of analysis remained on lower level subgoals. As Fig. 5 illustrates, participants identified the same proportion of main level to lower level subgoals before and after training for making sandwich (p = .72) and for making phone call (p = .62). Participants also identified the same proportion of main level subgoals (15.9 %) to lower level subgoals (84.1 %) for general and unfamiliar tasks (p = .82), irrespective of training condition (p = .43). Thus, participants chose a low level of analysis even for tasks for which they did not have specific details.

Fig. 5 Study 1: average number and standard error for main level goals, lower level subgoals, and those not in the master task analysis (extra)

What subgoals did novices identify? Most subgoals for the cooking tasks were in the "follow recipe" category, which was the focus of making sandwich (89 % of subgoals). For making breakfast, participants also emphasized "follow recipe" subgoals (56 %) but devoted some attention to "get recipe" (16 %) and "serve food" (16 %). This stands in contrast to making Vetkoek, for which participants went into depth on "get recipe" (33 %) compared to making breakfast (16 %) and making sandwich (4 %). Hardly any subgoals (7 %) were devoted to "enjoy food" and "wrap-up" activities for any of the three tasks.

Verb-noun pairs for tasks in the communication domain were distributed more equally across the categories of the master HTA than for the cooking tasks. One notable exception was the low number of subgoals pertaining to wrap-up activities such as “end call” (9 %), “end meeting” (0 %), and “end sharing” (1 %). Sharing pictures had a large number of extra subgoals (29 %) that specified downloading, installing, and learning how to use Adgers.

To summarize, novices focused their cooking HTA on food preparation, and rarely included wrap-up activities for either domain. In contrast to familiar tasks, participants devoted a third of subgoals to preparing for and learning about unfamiliar tasks.

Plans

As plans are an important component of HTA, we assessed whether participants understood this. Participants illustrated in two ways that they recognized the importance of plans to HTA. "Plan" was one of the top three recalled features in the declarative knowledge test (72 % of participants). Participants also used the label "plan" in 36 % of TA completed after training, typically attaching this label to the lowest level of analysis. Although participants recognized the importance of plans to HTA, few participants explicitly devoted space in their TA to them. In addition, participants specified only one plan and implied a hierarchical order of goal, then subgoal, then plan.

Main goal, criteria, and versatility

Because mentioning the main goal, mentioning satisfaction criteria, and the versatility of a TA were dichotomous codes (yes/no, general/specific), each TA received a composite score. A score of three reflects a "good" TA: it contains the main goal and satisfaction criteria, and it is general (i.e., includes at least three variations).

First, we assessed whether participants' naïve understanding of TA included mentioning the main goal and satisfaction criteria. Based on Patrick et al. (2000) we expected the untrained TA to be specific. Novices' initial TA were not of good quality. The majority of participants (69 %) scored zero, and as a group participants reached an average of only 12 % (SD = 20) on the composite score. Only one participant mentioned the main goal, two mentioned satisfaction criteria, and only 10 of the initial TA were general. Thus, novices were not inclined to include the main goal and criteria without instructions and, as expected, preferred creating specific TA.

If participants generated the required procedural knowledge from the training material, then their composite scores should be higher after training. As expected, novices who analyzed making sandwich and making phone call after training (Task 4) created significantly better TA (M = 56 %, SD = 31) than novices who analyzed those two tasks before training (F(1,33) = 48.70, p < .01). The median and mode of the composite score increased from zero (Task 1) to 67 % (Task 4), that is, two of three features. The quality of TA remained at an average of two features over all trained tasks (M = 65 %, SD = 28) and did not significantly differ across trained tasks (p = .09) or training conditions (p = .22). Although overall performance improved after training, only 28 % of TA had a perfect score, and another 28 % of TA had no or only one feature. Thus, training was successful in improving the quality of TA on the three features, but not for everybody.

What were some typical errors? Participants were least successful at creating general TA. Versatility of the TA did not significantly differ between the untrained Task 1 (28 %) and the trained Task 4 (42 %, p = .08). Of all trained TA, 59 % were general. We expected participants to create more general TA for unfamiliar tasks and if they were in the Concept Map Condition. However, a chi-square analysis showed that participants created as many general (or specific) TA for unfamiliar tasks as for familiar tasks (p = .27), and the pattern did not significantly differ across training conditions (p = .49). Thus, unfamiliarity with a task did not prevent participants from producing a specific TA, which shows that creating a general TA was not easy and that participants needed further instruction.

Although participants were most successful at mentioning the main goal, an error emerged here as well. Seventy-one percent of all trained HTA included the main goal as given to participants (e.g., making a phone call), which is in line with 75 % of participants listing the main goal as one of the top three HTA features in the declarative knowledge test. However, about half of the participants "adjusted" the main goal at least once (17 % of all TA), for example by abbreviating it to "a good sandwich", or by changing "sharing pictures using Adgers" to "allow others to see pictures which have been shared with Adgers" and "arranging a meeting" to "have a meeting". Novices' tendency to adjust the wording of the main goal is worth noting because it may lead to an analysis different from the one requested.

Decision factors

To gauge strategies, three questions of the Task Analysis Questionnaire at the end of the experiment prompted novices to share their approach to TA: “How did you decide on the depth of the analysis, that is, to which level to analyze”, “How did you decide on the breadth of the analysis, that is, where to start and where to end the task?”, and “How did you identify the goals and subgoals?”. Two coders segmented and coded participants’ responses. Coder agreement was 94 % for depth (Kappa = .92), 90 % for breadth (Kappa = .85), and 97 % for goals/subgoals (Kappa = .96).

Two main strategies emerged, accounting for 92 % of all comments: using a process and using a definition. Process factors included those that referenced a person (e.g., prior knowledge, task familiarity, fatigue), a task (e.g., task complexity), and other factors, such as asking questions, determining logical order, being specific, being shallow, considering problems, eliminating ambiguities, thinking of the simplest way to do it, or being detailed. Some of the same factors were mentioned as reasons both to increase and to decrease analysis depth.

Definitions mostly pertained to that of goals and subgoals and breadth of the analysis. Participants defined a goal as “basically the task”, “pretty much given”, “the big picture”, “the main part”, and “final product”. Subgoals then “were the things needed to meet those goals”, “each step was a subgoal”, and “the elements which were necessary to get the goal, however not broken down into steps like the plan” and “open to my interpretation”.

Breadth-related definitions focused on specifying the starting and the ending point. A starting point was “the first step”, “whatever step would begin the actual process”, “gathering of all relevant information”, “the biggest question”, and “whatever seemed logically correct as to a beginning”. The ending point was “when the tasks were completed”, “when the goal was met”, or participants “decided not to make it too long” and “stopped before another task would have occurred (prompt was: making breakfast not making and eating)”, “to arrange the meeting. So it was arranged. Not in participating in it”.

Discussion for novices

This study investigated novices' redescriptions before and after receiving one of three types of instructions to inform training of HTA. Table 5 summarizes the desirable outcomes of this training, areas of concern, and recommendations.

Table 5. Overview of findings from Study 1: training implications

Baseline performance: what to expect from a naïve learner

Data on baseline performance are important for assessing the effectiveness of instructions. Where do novices start? Participants preferred to render their analysis in a list-style format and, less often, as flowcharts. This explains why the participants of Patrick et al. (2000) chose these formats even when instructed to use the HTA format. However, novices also explored formats such as pictures and motions. Without instructions, TA were shallow, only one (sometimes two) levels deep, and rarely contained the main goal. TA were specific to a procedure or technology used and focused on lower-level subgoals, which is consistent with the errors that novices make after instruction (Patrick et al. 2000). This suggests that for the naïve learner, analyzing a task means unpacking it in some fashion, with one level already providing plenty of detail.

What novices learned

Novices' performance improved on a number of measures after the brief training and practice on five tasks. Participants' declarative knowledge test results showed that the majority recognized that the main goal, subgoals, and plans are important to HTA, which is consistent with Felipe et al. (2010). The HTA themselves were significantly deeper after training, contained about twice as many subgoals, and were of better quality (as defined by mentioning the main goal and satisfaction criteria, and being general). The increased depth, higher number of subgoals, and mentioning of the main goal are consistent with findings by Felipe et al. (2010) and show that novices extracted important aspects of HTA and successfully translated them into procedural knowledge.

Much left to be learned

After training, only a few participants did not mention the main goal at all. However, some participants adjusted the main goal as given to them. Such adjustment is not wrong per se and in fact may be a by-product of the overall TA process (Kirwan and Ainsworth 1992). However, it is also important not to change the main goal once agreement has been achieved. Training could address this topic and increase the quality of HTA. Another error that novices make is not specifying plans on every level of analysis (Patrick et al. 2000; Shepherd 1976). Data from this study suggest that this may be because novices tend to think of a plan in terms of one specific way to complete the overall task, associated with the lowest level of analysis rather than with every level of analysis.

Despite the success of deeper HTA, the present data show that the idea of a hierarchy, what it is and what it looks like, needs further instruction. We suspected that the word "hierarchical" in HTA could be a giveaway for the declarative knowledge test. However, similar to Felipe et al. (2010), no participant noted that this was a main feature of HTA. Furthermore, some participants continued creating HTA with a depth of one, which is problematic given that HTA depth is a prerequisite for other HTA concepts, such as the equivalence mentioned earlier. Few participants spontaneously used a hierarchy (tree diagram), which is consistent with Felipe et al. (2010). Thus, the HTA diagram itself requires targeted instruction. Given participants' strong preference for a list format, training could start with a list, introduce the tree diagram later, and address other formats such as narratives. The reason for the differences we found in format choice across training conditions is unclear, and they may be spurious, given that Felipe et al. (2010) did not report such findings.

Participants indicated that they recognized the importance of subgoals. However, consistent with previous research ( Patrick et al. 2000 ; Stanton and Young 1999 ), most identified subgoals fell at a lower level. Even without procedural details (unfamiliar tasks), participants preferred to focus on whatever details they knew (e.g., how to obtain a recipe for Vetkoek) rather than outlining higher level subgoals which they should have known. Thus, an unfamiliar task is insufficient to refocus novices’ attention to a higher level of analysis.

Somewhat related, novices' HTA tended to be specific, which is consistent with previous findings (Felipe et al. 2010; Patrick et al. 2000). Although the number of general HTA was higher on the trained task (28 vs. 42 %), this difference was not significant, which can be viewed as supporting criticism of using simple tasks for teaching. This study suggests that simple tasks have merit, but there are limits to what concepts can be taught with them. An alternative explanation is that the idea of a general HTA develops slowly; general HTA continued to increase to 59 % across the trained tasks.

The differences between training conditions were minimal and provided little support for a differential effect of spatial diagrams. This may be due to the brevity of the training in duration and content, the absence of feedback, or the selection of tasks. Felipe et al. (2010) found that participants in the Concept Map condition identified significantly more subgoals than those in the Steps and Decision-Action diagram conditions when analyzing four specific tasks and one unfamiliar task (making Vetkoek). In contrast, participants in this study analyzed tasks that varied from specific to general to unfamiliar. The choice of tasks to use for training may be more important than how the information is rendered, at least initially.

To include or not to include

The majority of HTAs were within the recommended breadth range. Yet, participants also created HTAs after training that were too narrow or too broad. This is consistent with a finding by Patrick et al. (2000) indicating that novices have problems determining the correct task boundaries. More specifically, we found that novices tended to forget task completion subgoals in both domains. This blind spot is not trivial and may influence the overall outcome of the TA because potentially important sources of errors are overlooked or a new design may lack required functionality.

Strategies for HTA

Patrick et al. (2000) found that participants used a sequencing or a breakdown strategy. Participants in the present study are best described as having used a process or a definition strategy. The latter is especially interesting for training purposes because it points out the necessity of providing clear definitions, for example, for subgoals. However, it also points to the problem that current definitions and the differentiation of goals from other concepts (actions, functions) are unclear and a general source of confusion in TA (e.g., Diaper 2004), which has led some authors to suggest abandoning these concepts altogether (e.g., Diaper and Stanton 2004).

Study 2: Practitioners

Although understanding novices’ performance and errors is important for training, studying experienced performers can provide valuable insights. In fact, many TA methods use subject matter experts to understand the knowledge and strategies involved in task performance and to inform the design of training (Hoffman and Militello 2009). Just to name a few examples, curricula informed by (cognitive) TA methods have improved learning outcomes in medicine (Luker et al. 2008; Sullivan et al. 2007), mathematical problem-solving skills (Scheiter et al. 2010), as well as biology lab reports and lab attendance rates (Feldon et al. 2010).

Studying experienced performers provides information about the goals of skill development, and knowing those goals has been shown to be an important factor in training (e.g., Adams 1987). To understand a skill, it is critical to obtain a picture of what experienced performers are actually superior at and of the stimuli and circumstances to which the skill applies (Ericsson and Smith 1991). This study focused on describing what practitioners would do given constraints similar to those of novices in Study 1, both to place novices’ performance into perspective and to gather (presumably) superior performance. We will refer to this as TA rather than HTA, given that participants may not have intended to use HTA. Our focus remains on subgoal redescription.

Two possible approaches to redescribing subgoals can be differentiated based on whether a task analyst chooses to analyze the breadth or the depth of a task first (Jonassen et al. 1999). A breadth-first approach means redescribing all subgoals on one level before moving on to the next level. Using numbers to indicate levels, such as those shown in Fig. 1, the sequence of subgoals identified might look like this: 1.0, 2.0, 3.0, 4.0, 5.0, 1.1, 1.2, 1.3, 1.4, 2.1, 2.2, 2.3, and so forth. Visually, this might look like a line that undulates horizontally. Conversely, an analyst using a depth-first approach will start by redescribing the main goal into the first subgoal and continue to move down and up in depth, in effect producing a sequence such as 1.0, 1.1, 1.1.1, 1.1.2, 1.1.3, 1.2, 1.3, 1.4, 2.0, 2.1, and so forth. Visually, this approach might look like a line that undulates vertically.
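
To make the two orders concrete, the following minimal Python sketch (ours, not part of either study) enumerates a toy goal hierarchy in both orders; the hierarchy and its labels are illustrative only and follow the numbering convention above.

```python
from collections import deque

# Hypothetical goal hierarchy: each key is a subgoal label, values are children.
hierarchy = {
    "1.0": {"1.1": {}, "1.2": {}, "1.3": {}},
    "2.0": {"2.1": {}, "2.2": {}},
    "3.0": {},
}

def breadth_first(nodes):
    """List all subgoals on one level before redescribing the next level."""
    order, queue = [], deque(nodes.items())
    while queue:
        label, children = queue.popleft()
        order.append(label)
        queue.extend(children.items())
    return order

def depth_first(nodes):
    """Fully redescribe each subgoal before moving on to its sibling."""
    order = []
    for label, children in nodes.items():
        order.append(label)
        order.extend(depth_first(children))
    return order

print(breadth_first(hierarchy))  # ['1.0', '2.0', '3.0', '1.1', '1.2', '1.3', '2.1', '2.2']
print(depth_first(hierarchy))    # ['1.0', '1.1', '1.2', '1.3', '2.0', '2.1', '2.2', '3.0']
```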

Another strategy is to ask questions, often in the context of eliciting knowledge (e.g., Stanton 2006). Two general questions guide the instructional designer during the principled skill decomposition phase. “Which skills are necessary in order to be able to perform the skill under investigation” (van Merriënboer 1997, p. 86) is meant to elicit elements on a lower level of the hierarchy, and “Are there any other skills necessary to be able to perform the skill under consideration” (p. 87) helps elicit elements on the same level of the hierarchy. Stanton (2006) compared different lists of specific questions that varied with the problem domain a task analyst is working in. We chose six general questions (what, when, where, who, why, how) to investigate in more depth how questions guide a practitioner during redescription.

Assumptions can be viewed as the flip side of questions, arising when the analysis has to progress but there is nobody available to answer questions. Furthermore, stating assumptions is an important part of the analysis because it helps in understanding the limitations and applicability of the analysis (Kieras 2004). Thus, we assessed whether experienced task analysts did indeed make assumptions and, if so, what those were.

Overview of Study 2

To summarize, the goals of this study were to determine the characteristics of the TA products of experienced practitioners along some of the same criteria as in Study 1. In addition, we wanted to characterize practitioners’ approach. To gather information about the characteristics of experienced task analysts’ products and process, participants in Study 2 analyzed six tasks while thinking aloud, completed questionnaires, and participated in a semi-structured interview (the data for which are not presented here).

Method for practitioners

Four of the eight practitioners participated in Atlanta (GA) and four in Raleigh (NC). All participants (2 male, 6 female) spoke English as their native language (see Table 6 for participant characteristics). Most participants were Caucasian (5). Six participants indicated a master’s degree as their highest level of education; one had a doctorate. The majors were Industrial Engineering, Biomedical Engineering, Industrial Engineering, Instructional Design, Rehabilitation Counseling, Occupational Ergonomics, and Psychology. Licenses included Certified Professional Ergonomist, Industrial-Professional Engineer, and Occupational Therapist. The study lasted approximately 3 h, for which participants received a $50.00 honorarium.

Table 6 Study 2 participant characteristics (N = 8)

Recruitment

Participants were recruited via professional organizations and companies whose members were known or likely to use TA: Human Factors and Ergonomics Society, Special Interest Group on Computer–Human Interaction, Instructional Technology Forum, and the Board of Certification in Professional Ergonomics.

Task analysis experience and self-rated proficiency

To be included in the study, participants needed to be native English speakers, use TA in their job, have at least 2 years’ experience conducting TA, and have worked on at least one TA in the past year. Two years’ experience should ensure that participants had experienced some breadth in their TA work without having advanced to a managerial position.

Participants expressed a range of experience with TA, as assessed by the Demographics and Experience Questionnaire and shown in Table 6. In the past year, six participants had conducted relatively few (2–5) TA, whereas two had conducted many (30–50). Over the course of their professional lives, two participants had conducted fewer than 5 TA, one had conducted between 6 and 12, and the remaining five indicated that they had conducted more than 50. The TA methods that participants reported using reflected the variety of existing methods. One participant stated using every type depending on the circumstances, whereas another did not know the formal names of the methods used. Participants reported learning the methods on the job (46 %), in school (43 %), or in a course (11 %). As for the specific TA methods we queried participants about: five had heard of CTA, three of HTA, and two of SGT.

TA are undertaken for a particular purpose and have specific, measurable goals. Participants’ top three purposes for conducting TA were designing tasks; designing equipment and products; and training individuals. Less frequently mentioned was environmental design. No participant used TA to select individuals, but participants did use it to identify barriers to person-environment fit and to select jobs for individuals with disabilities. The top two goals for conducting TA were to enhance performance and to increase safety. Increasing comfort and user satisfaction was also a goal for half of the participants, but mentioned less frequently. Only one participant used TA to find an assistive technology fit for a person.

The tasks that participants analyzed in their work were diverse, spanning from household work to repairing an airplane. Tasks were those found in military, repair and vehicle manufacturing, factory, office, and service industry environments. More specific descriptions included graph construction, software installation, and authentication. Participants also listed complex performance (equipment diagnostics, equipment operation), cognitive tasks (decision-making, critical thinking), aircraft maintenance, as well as various airport and airline tasks. Participants moreover reported analyzing how a person works at a desk, performs various household activities (e.g., cooking or cleaning), completes specific computer tasks, uses a telephone, or checks in at a hotel.

Instructions for task analysis

Participants received a scenario that described them joining a new team. The new team members had asked the participant to create common ground by illustrating her/his understanding of TA on a number of example tasks. To capture participants’ approach, we neither provided a purpose for conducting their TA nor instructed participants to focus on a specific TA phase/method.

Practitioners analyzed the same tasks as novices in Study 1.

Over the course of the study, participants completed the same three questionnaires as novices had in Study 1. The Demographics and Experience Questionnaire also probed for information about certifications, experience with TA, the purposes and goals for which participants used TA, and the aspects of a task participants emphasized in their analyses. Participants listed the TA methods they used and indicated how often they used them, when and how they learned them, and rated their own proficiency. Questions then specifically targeted experience with five TA methods, including HTA and CTA. In the Task Analysis Questionnaire, participants also rated how representative their TA was in comparison to the ones in their job.

Equipment and set-up

An Olympus DM-10 voice recorder taped all interviews. Participants conducted their TA on 11 × 17 in. paper, placed in landscape format in front of them. Two QuickCam web cameras (Logitech 2007) and Morae Recorder software (TechSmith 2009) captured participants’ hands and workspace from two different angles while participants completed the TA.

Design and procedure

As in Study 1, this study incorporated repeated measures, as participants analyzed six tasks arranged in two counterbalanced orders. Participants read and signed the informed consent form. Then the experimenter collected the Demographics and Experience Questionnaire that had been mailed to participants prior to the study. Participants were oriented to what the video cameras captured before the video recording began. Familiarization with thinking aloud and being recorded occurred by playing tic-tac-toe with the experimenter. Then, participants read the scenario that asked them to illustrate their understanding of TA on a number of example tasks for the new team they had joined. For each task, participants received a written task to be analyzed (as shown in Table 1) and were instructed to perform the TA while thinking aloud. The experimenter collected the TA and provided the next task when participants indicated that the TA was complete (putting down the pencil), at the latest after 15 min. After three TA, participants took a 5-min break. Once all tasks were analyzed, participants completed the Task Questionnaire and the Task Analysis Questionnaire. Participants took a 10-min break before beginning the semi-structured interview (data are not presented here), after which they were debriefed.

Results for practitioners

Data analysis focused on two areas. First, participants’ TA products were examined to determine product characteristics. The TA were coded on the same dimensions as used for novices in Study 1: format of the TA, dimensions of the hierarchy, subgoals, and quality (main goal, satisfaction criteria, and versatility). Two coders coded all TA with respect to the master TA, and disagreements were resolved through discussion. Mean overall coder agreement was 82 % (range 75–86 %, Kappa = .69–.84). Second, participants’ think-aloud protocols were analyzed for process characteristics (breadth- or depth-first, questions, and assumptions). Coder agreement was 86 % (Kappa = .83).
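
For readers unfamiliar with these two reliability figures, the following minimal Python sketch (our illustration with made-up codes, not the authors’ analysis scripts) computes raw percent agreement and Cohen’s kappa for two coders.

```python
from collections import Counter

def agreement_and_kappa(coder_a, coder_b):
    """Return (percent agreement, Cohen's kappa) for two coders' labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_exp = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes for ten subgoals (labels are made up):
a = ["main", "lower", "lower", "extra", "lower", "main", "lower", "lower", "extra", "lower"]
b = ["main", "lower", "lower", "lower", "lower", "main", "lower", "extra", "extra", "lower"]
print(agreement_and_kappa(a, b))  # (0.8, ~0.64)
```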

Format of task analyses

The first question was what format practitioners would choose to render their TA and how prominently a hierarchy featured. We expected the TA to reflect the diversity of formats reported by Ainsworth and Marshall (1998): decomposition tables, subjective reports, flow charts, timelines, verbal transcripts, workload assessment graphs, HTA diagrams, and decision-action diagrams. The majority of practitioners used a list format, such as a numbered or bulleted list with indents to indicate different levels of analysis (83 % of TA). Participants also used a flowchart (9 %) or combined formats (4 %). No participant used a hierarchy to illustrate the TA, and two TA (4 %) showed a loose collection of subgoals. Irrespective of format, each verb–noun pair was visually separated from other verb–noun pairs in some fashion.

Figure 6a shows two TA examples from an instructional design perspective. Notable here is the decomposition (classification) of subgoals into knowledge, motor skills, and attitude (KSA). This participant was the only one who used a similar task structure for all three cooking tasks. Other participants created similar TA but without the KSA classification. A TA from a system design perspective is shown in Fig. 6b. Although not formally a hierarchy (tree diagram), the levels of analysis of goal, subgoals, and actions are clearly labeled. Also documented are the assumptions on the right.

Fig. 6 Example task analyses from Study 2 practitioners

Figure 6c illustrates a TA focused on assessing the task performance of a patient with brain injury. This practitioner first documented assumptions about the patient (usually a given) and then outlined what to assess (e.g., cueing, sequencing, problem solving, judgment). This practitioner would then ask the patient to go through the task steps while checking/evaluating whether the performance was adequate. Thus, our sample did not reflect the diversity of formats found by Ainsworth and Marshall (1998). Although some participants indicated different levels of analysis or named HTA and GDTA as methods they used in their work, nobody used a hierarchy during the 15 min of illustration.

Depth and breadth of task analysis

Practitioners’ TA were expected to be at least two levels deep, reflecting different levels of analysis, and they were on average 2.3 levels deep (SD = .95), ranging in depth from one to six levels. A non-parametric Friedman test showed that the TA did not differ significantly in depth across tasks (p = .88). Thus, practitioners created TA of more than one level for specific tasks such as making sandwich or making phone call, as well as when no specific details were available (unfamiliar tasks). However, as shown in Table 4, some participants created TA that were only one level deep. As expected, most, but not all, practitioners redescribed their TA.

Breadth was expected to be within the suggested boundaries of three to eight subgoals derived from the literature, as outlined earlier. TA breadth was on average 6.1 subgoals (SD = 4.23), ranging from 2 to 21 subgoals. A Friedman test showed that tasks differed significantly in their breadth (χ2 = 11.67, df = 5, p = .04); however, follow-up multiple comparisons using the Wilcoxon test and a Bonferroni-adjusted alpha level did not indicate significant differences between any pair. Although the average breadth of practitioners’ TA was within the suggested boundaries of three to eight subgoals, Fig. 7 shows that practitioners in this study also created TA beyond those boundaries. The broadest and shallowest analyses were created for making sandwich, with two participants creating the broadest TA of 19 and 21 elements. This illustrates practitioners’ individual differences and that they do not necessarily adhere to the breadth standards suggested in the literature.
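
The following sketch (ours) reproduces this test sequence with SciPy on fabricated breadth scores; friedmanchisquare and wilcoxon are the actual SciPy calls, while the data and task labels are invented for illustration and will not reproduce the statistics reported above.

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

breadth = {  # task -> breadth of each practitioner's TA (made-up data, N = 8)
    "sandwich":  [19, 21, 6, 5, 8, 4, 7, 3],
    "breakfast": [9, 8, 7, 6, 5, 8, 6, 4],
    "vetkoek":   [6, 5, 7, 4, 8, 5, 6, 3],
    "phone":     [4, 3, 5, 2, 4, 3, 5, 2],
    "meeting":   [5, 6, 4, 3, 5, 4, 6, 3],
    "adgers":    [4, 5, 3, 2, 4, 3, 4, 2],
}

# Omnibus test across the six related samples.
stat, p = friedmanchisquare(*breadth.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

# Pairwise follow-ups with a Bonferroni-adjusted alpha over all 15 pairs.
pairs = list(combinations(breadth, 2))
alpha = 0.05 / len(pairs)
for t1, t2 in pairs:
    w, p_pair = wilcoxon(breadth[t1], breadth[t2])
    if p_pair < alpha:
        print(f"{t1} vs {t2}: W = {w}, p = {p_pair:.4f} (significant)")
```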

Fig. 7 Breadth and depth of task analyses for all six tasks of Study 2 (N = 8 per task)

Of particular interest was what subgoals practitioners included in and excluded from the TA. In general, practitioners were rather specific in their analysis, with 5 % of all identified subgoals matching a main-level subgoal of our master TA, 90 % focusing on lower-level subgoals, and 5 % being extra. As Fig. 8 shows, practitioners mentioned on average only one main-level subgoal from our master TA and identified almost three times as many subgoals for cooking tasks as for communication tasks. Because novices presumably have inappropriate task boundaries (Patrick et al. 2000), we will now describe for each task what subgoals practitioners included.
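
As a concrete reading of this coding, the minimal sketch below (ours; all subgoal names are hypothetical) classifies each identified subgoal as a main-level match, a lower-level match, or extra relative to a master TA.

```python
# Hypothetical master TA: main-level subgoals mapped to their lower-level subgoals.
MASTER = {
    "get recipe":     ["determine what to make", "find recipe"],
    "follow recipe":  ["gather ingredients", "assemble sandwich"],
    "enjoy sandwich": ["eat sandwich"],
    "wrap up":        ["clean surfaces", "store items"],
}
LOWER = {s for subs in MASTER.values() for s in subs}

def classify(subgoal):
    """Code a subgoal against the master TA."""
    if subgoal in MASTER:
        return "main-level"
    if subgoal in LOWER:
        return "lower-level"
    return "extra"  # outside the boundaries of the master TA

identified = ["get recipe", "gather ingredients", "turn off lights"]
print([(s, classify(s)) for s in identified])
# [('get recipe', 'main-level'), ('gather ingredients', 'lower-level'),
#  ('turn off lights', 'extra')]
```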

Fig. 8 Study 2: average number and standard error for main level goals, lower level subgoals, and those not in the master task analysis (extra)

Subgoals for cooking tasks

More specifically, for the task of making sandwich, participants concentrated on describing the procedure (80 % of 176 subgoals), rarely mentioning determining what to make (part of get recipe) or serving the sandwich. However, five of the eight participants included the main-level goal of enjoying the sandwich. A notable number of the verb–noun pairs (13 %) were devoted to wrapping up, that is, cleaning.

For the task of making breakfast, participants also paid most attention to the preparation of breakfast items (64 % of 241 subgoals), focusing on food and rarely mentioning beverages. However, some verb–noun pairs were devoted to determining what to make (13 %), which is not that surprising, given that the task of making breakfast includes choices that are already made for the task of making sandwich. Another 10 % of subgoals were devoted to serving breakfast. And again, participants noted wrap-up activities such as cleaning the dishes (7 %). Two participants included subgoals such as leaving the room and turning off the lights, which were outside our master task list and coded as extra.

When practitioners analyzed making Vetkoek, they spent much of their focus on learning what Vetkoek is (39 % of 146 subgoals). These subgoals (get recipe) included determining what Vetkoek is, where it comes from, what ingredients it uses, how to make it, and whether they had the equipment and knew the techniques involved in making the dish. Only 43 % of subgoals related to following the recipe, and only two participants noted enjoying the dish. Again, the TA included some wrap-up activities (9 %), which suggests that participants perceived cleaning and storing items as part of the general cooking task structure.

Subgoals for communication tasks

For the task of making a phone call, participants focused on subgoals related to determining the receiver (38 % of 65 subgoals), connecting (40 %), and somewhat on communication (15 %). Little emphasis was placed on obtaining a phone (2 %) or ending the call (5 %).

When analyzing arranging meeting, participants emphasized determining date and time (28 % of 89 subgoals), determining attendees (17 %), and determining location (17 %). Less focus was placed on determining the reason for the meeting (8 %) or confirming the meeting details (7 %). Practitioners invested 17 % of their subgoals in preparing for the meeting, but only 5 % in the meeting itself, and none in ending and wrapping up the meeting. One could argue that the task of arranging a meeting does not include the meeting itself and that this finding should not be surprising. However, one may counter that making a phone call does not include the conversation either; yet, participants included it in their TA.

Last, for the unfamiliar task of sharing pictures using Adgers, participants mainly analyzed the exchange aspect of making the picture available (39 % of 51 subgoals), followed by connecting using Adgers (22 %), obtaining the picture (16 %), and determining which picture to share (16 %). Only a few subgoals pertained to determining receiver information (8 %), and no participant mentioned subgoals related to ending the sharing. However, participants identified an additional 24 subgoals covering efforts to obtain a copy of the software, install it, use a tutorial, and explore the software to become familiar with it, thus including tasks in the TA that they would be doing themselves because of their unfamiliarity with Adgers. As with the task of arranging meeting, participants did not address the end of sharing pictures as a closing symmetry.

Only one participant pondered the task boundaries and decided not to include learning about the unfamiliar task in the task analysis itself: “I’m trying to decide where I would start since I don’t have a clue what Adgers is. So I’m trying to decide if I would include something like learn what Adgers is, is part of the task analysis. Presumably if I’m doing a task analysis though, I wouldn’t, normally I wouldn’t include something like that, as part of the task of actually sharing the pictures”. This suggests that practitioners conducting a TA can use their inexperience with a task as a guide.

Subgoal symmetry

As mentioned above, participants’ TA in the cooking domain included symmetrical wrap-up activities such as cleaning and storing away items. There was a noticeable symmetry even on lower levels of analysis. For example, “open jar” was followed by “close jar”, “open the fridge” was followed by “close the fridge”, and “open the sandwich” was followed by “close the sandwich”. Cleaning can be viewed as symmetrical to the whole sandwich-making activity. Participants also included wrap-up activities for making phone call (end call), but not for the other communication tasks. Thus, practitioners’ TA contained symmetry, but it was not pervasive across all tasks (as we defined it).

Qualities of a good task analysis

Mentioning the main goal, stating satisfaction criteria, and versatility were assessed for practitioners in the same fashion as for novices in Study 1; that is, they sum to three for each TA and person and indicate a “good” TA. Overall, the quality of practitioners’ TA on these three categories was 28 %. Practitioners rarely stated satisfaction criteria, but that is not too surprising, given that these criteria are not necessarily part of all TA methods. Surprising, however, was that only 27 % of TA contained the main goal (as given or adjusted), with one participant (instructional design) accounting for half of those.

Only 56 % of participants’ TA were general, with one participant creating specific TA for all tasks and another creating general TA for all tasks. Think-aloud data may explain how and why participants created specific TA for unfamiliar tasks (making Vetkoek, sharing pictures using Adgers). One reason was that participants constrained the problem space very tightly. For example, one participant constrained the TA of making Vetkoek so that it only included finding a recipe for Vetkoek in a cookbook. Another reason was being guided by existing technology. For example, some participants thought of Facebook when analyzing sharing pictures and let this knowledge and experience be their guide.

Another explanation for why participants created specific TA relates to the purpose of conducting a TA. To illustrate, one participant who used TA to evaluate the capabilities of a specific person to perform a certain job was thus working with clearly defined parameters. The person whose performance is assessed has very specific capabilities and limitations (e.g., due to injury), and performance was evaluated in a very specific environment (e.g., a kitchen) and tied to very specific objects (e.g., a phone model). Thus, the resulting TA (assessment) was specific. This stands in contrast to another participant who started out with a particular scenario and then tested how the TA held up when expanding the assumptions to different scenarios, thus creating the general TA needed for system or training design. There are a number of reasons why a TA might be specific, whether by design or inadvertently.

Task analysis process

The think-aloud data provided the basis for analyzing the TA process as an account of how participants conducted the TA: (1) Do participants first determine the breadth of the analysis, or do they analyze subgoals in depth before determining the next subgoal? and (2) What questions do participants ask, and what assumptions do they make? Overall agreement between the two coders was 86 % (Kappa = .83).

Breadth-first versus depth-first

TA were coded as to whether a participant approached them breadth-first or depth-first. If a participant outlined all subgoals on the highest level first before redescribing them into lower-level subgoals, this was coded as breadth-first, even if the TA consisted of only one level. If a participant redescribed subgoals before having outlined all high-level subgoals, this was coded as depth-first.
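
One way to state this coding rule operationally is the following minimal Python sketch (our interpretation, not the coders’ actual procedure), which takes the order in which subgoal labels were written down:

```python
def coding(sequence, top_level):
    """Code a redescription order as breadth-first or depth-first.

    Breadth-first: every top-level subgoal is listed before any lower-level
    subgoal (this includes a one-level TA). Depth-first: the analyst went
    into depth while top-level subgoals were still unlisted.
    """
    remaining_top = set(top_level)
    for label in sequence:
        if label in remaining_top:
            remaining_top.discard(label)
        elif remaining_top:
            return "depth-first"
    return "breadth-first"

top = ["1.0", "2.0", "3.0"]
print(coding(["1.0", "2.0", "3.0", "1.1", "1.2", "2.1"], top))  # breadth-first
print(coding(["1.0", "1.1", "1.1.1", "1.2", "2.0"], top))       # depth-first
```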

One participant’s comments shed light on the benefits of a breadth-first approach: “I would start with the breadth-first analysis, ‘cause […] what I want to understand is, do I understand the end problem? You know, are there any big gaps in my knowledge about where the user is going to start and where the user is gonna end up?”. Besides determining the boundaries of the task, a breadth-first approach also prevents the team from wasting time outlining the details of a branch that may be cut from the project later. Furthermore, having specific details may be counterproductive to creating a shared understanding because software developers may be inclined to start coding too early in the process.

Twenty-one TA (44 %) were created with a breadth-first approach and 27 (56 %) with a depth-first approach. Cooking tasks were more likely to be conducted depth-first and communication tasks breadth-first (χ2 = 5.76, df = 1, p < .05). One participant changed from a breadth-first to a depth-first approach when moving from the communication to the cooking tasks. This participant explicitly noted the change in approach while analyzing making breakfast: “I just realized that I rushed right into the making the peanut butter jelly sandwich without clarifying the assumptions that I had there, which was that the sandwich was for me”. This suggests that a breadth-first approach has practical benefits but that procedural/sequential aspects of a task (domain) may influence redescription.

Practitioners’ questions during a task analysis

The next goal was to understand what questions practitioners used during their task analyses and what these accomplished. Think-aloud data were coded for whether participants mentioned the questions “what, when, where, who, why, and how” during their TA. A segment was defined as an idea unit containing a question that furthered the TA (i.e., not including questions to the experimenter). The think-aloud protocols were coded conservatively, that is, excluding questions that were phrased as statements. One coder selected the 226 segments and two coders coded them. The coding scheme included an “other” category for questions other than the ones previously mentioned.
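
A simplified, illustrative version of this coding might look like the Python sketch below (ours; it tags segments by surface keyword only and, unlike the human coders, cannot exclude questions phrased as statements or judge whether a question furthered the TA):

```python
import re

QUESTION_WORDS = ["what", "when", "where", "who", "why", "how"]

def code_segment(segment):
    """Tag an idea unit with the first question word it contains, else 'other'."""
    tokens = re.findall(r"[a-z']+", segment.lower())
    for word in QUESTION_WORDS:
        if word in tokens:
            return word
    return "other"

segments = [
    "What type of jelly?",
    "How do I dial?",
    "Is there anything I need to know?",  # yes/no question -> "other"
]
print([code_segment(s) for s in segments])  # ['what', 'how', 'other']
```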

All participants asked questions at some point during their TA and varied in the number of questions they asked, from none to 16 for one task and from one to 51 across all six tasks. Most questions in this phase of analysis pertained to what (43 % of 226), followed by questions about how (16 %). The remaining four question types together accounted for only 16 % of the segments, whereas 25 % of the questions were not captured by the six questions in the coding scheme. We will now show some examples in more detail.

What?: Common themes of what questions emerged. One category of questions was about understanding the task space and specifying its objects, such as What is it?, What type of jelly?, What type of phone?, What type of materials?, and What would you use?. This was seen especially, but not only, with the two unfamiliar tasks (e.g., What are the system requirements?, What are the capabilities?). Another category of what questions related to the procedure of the task, such as What is next?, What is the process?, What are the steps?, What is the first step?, and What do I do?.

A third category of what questions related to searching for specific requirements (e.g., What would I need?, What utensils do I need?). Other what questions included checking specific aspects of a task: What will I need to know?, What will I need to be able to do?, What would the knowledge behavior be?, What behavior would I use?, and What kind of motor skills are involved?. Participants occasionally asked What if? questions to understand alternative paths and asked What else? to search the task space for potentially undiscovered task elements.

Questions were also phrased so that they fell into a different question category. For example, a who question was phrased as What’s the audience? and a How long? question was phrased as What takes the longest?. This suggests that participants would rephrase open-ended questions into a specific question that would guide them to the next step, in this case, to start with the item that takes the longest. Thus, what questions aimed to elicit and specify task objects, information requirements, and procedural details.

How?: Questions that contained how were the second most frequently mentioned category (16 % of the segments). These questions mostly related to “how to” followed by a verb, for instance, how to share, how to have, how to use, how to dial, how to open, or how to choose. Such questions suggest the search for a procedure. There were also questions related to number (how many), time (how far back), assessment of ability (e.g., how able is he to maneuver?), and even looking for answers (e.g., how can he cue himself?), which suggests that how questions can also be used for quantification and evaluation.

When, where, who, and why?: Only 16 % of segments fell into the remaining four question categories. Where questions could refer to the location of an object (e.g., Where is the peanut butter?) or to a starting point of the TA (Where do I start?). Who questions focused on defining an audience, both generally, as in Who do you want to share the pictures with?, and more specifically, Who in the family?. One participant contributed seven of the nine why questions by questioning the main goal at the beginning of the TA: Why are we sharing pictures? or Why are we using Adgers?

Other questions: One quarter of the segments included other questions. Participants asked focused questions that required a yes/no answer, for example, when assessing behavior (e.g., Is he able to do x? or Is he doing y?), determining timing (Are there things that are going to be done in parallel?), and checking aspects (e.g., Is there anything I need to know?), or asked “Do I…” questions (e.g., Do I create the agenda before? or Do we have everything necessary?). Furthermore, participants searched for the right word (stove … use it? Employ it?). These questions show that, while analyzing a task, participants also checked for the presence of specific task aspects and tried to determine the right labels for the subgoals they had in mind.

Practitioners’ assumptions during a task analysis

We expected participants to make assumptions when not enough information was available, especially at the beginning of the analysis, given our general instructions. Think-aloud protocols were inspected for whether participants mentioned assumptions, which were conservatively defined as an idea unit containing the words “assume”, “assuming”, or “assumption”. Overall, participants stated 69 assumptions, 80 % of which came from two participants. The following categories of assumptions emerged: assumptions about the user (e.g., who am I making this for), experience (e.g., I have used/never used this before), ability (assume that he can/cannot reach), location and prerequisites (e.g., “I assume I have a kitchen and the ingredients are already there so I don’t have to go out and buy them”), and the particular makeup of an object (e.g., assume a jar, as opposed to other peanut butter containers). The data showed that participants also rejected assumptions once they were made. As one participant pointed out, “So end user…is me. And there’s assumptions embedded in what me means”. Participants made (and rejected) assumptions at the beginning, but the TA itself also brought about decision points where assumptions were necessary to continue.

Discussion for practitioners

The goal of Study 2 was to capture characteristics of the products and processes of experienced task analysts. Collecting information about skill expression is one step in studying expertise (Ericsson and Smith 1991) and provides information about the goals of skill development as well as a basis against which to adjust current performance.

General characteristics of practitioners’ task analysis product

In 15 min, practitioners created TA that were most often rendered in a list format with each individual subgoal placed separately. Most participants focused on redescription, with only one practitioner illustrating how to evaluate performance based on subgoals. The TA had an average breadth of six subgoals on the highest level and an average depth of two to three levels. Depth was independent of task, which suggests that participants have a certain depth in mind for an initial draft, even for unfamiliar tasks, that is, when specific details are unknown. These data provide ballpark numbers against which novice products can be gauged.

Stating the main goal provides important context for an analysis, and we expected participants to do this irrespective of the purpose of the analysis. We were surprised that one participant (instructional design) accounted for half of the TA that contained the main goal. Thus, the presence of a main goal is not a good predictor of redescription quality, at least not in a first draft, and given the limitation that some participants were less experienced than others.

Deviation, draft, or inferior product?

The majority of the TA (60 %) fell within the suggested range of three to eight subgoals; however, some TA were as narrow as two and as broad as 21 subgoals on the highest level. Breadth varied between tasks, but not as a function of familiarity. This illustrates that although practitioners’ TA were mostly within the suggested breadth boundaries, they did not necessarily adhere to the standards suggested in the literature, at least not for an initial draft created within 15 min. Participants also created TA of depth one, which either illustrates suboptimal performance or suggests that this level of analysis is all that is required for some practitioners (e.g., to assess performance).

The importance of defining the purpose of conducting a TA is mentioned throughout the literature, along with the emphasis that it influences the TA (e.g., Kirwan and Ainsworth 1992). Participants in this sample did not receive a specific purpose yet focused on redescription, except for one practitioner who did not create but rather used a redescription for assessment. Furthermore, half of the TA were considered specific to a person, technology, or procedure. One advantage of a goals hierarchy is that it is generalizable and technology independent, at least on the higher levels of analysis (e.g., Annett 2004; Endsley et al. 2003). Think-aloud data suggested that versatility depended on the purpose of the TA, along with how tightly participants constrained the task space and whether they used another technology as a guide.

Figure 9 illustrates how a system designer may be concerned with how a variety of people will make a phone call using a range of phones (e.g., cell phone or landline). Hence, the TA needs to consider different person and phone variables, which means accommodating a number of scenarios. In contrast, an Occupational Therapist’s or Ergonomist’s concern may be one particular person with a unique combination of abilities or injuries. The focus is then on whether this person can accomplish, for example, the goal of dialing the number on one particular phone, thus considering one point in the task space at any given time. This is important to keep in mind when assessing novice performance because the same outcome can have different causes: a practitioner may explicitly choose a level of versatility, whereas a novice may arrive at one by default.

Fig. 9 Possible task space of a system designer (broad) and an occupational therapist (narrow)

Subgoals & task boundaries

Compared to our master TA, practitioners focused on identifying lower-level subgoals. Only 5 % of the subgoals were identified at the highest level of the master TA (on average, one subgoal), and another 5 % were extra, that is, outside the boundaries of the master TA. These patterns are helpful for understanding novice performance given the same task constraints. Participants also included subgoals such as learning about unfamiliar tasks; however, one participant reflected on this notion and decided to exclude such subgoals. Unfamiliarity with a task is said to be an advantage for the task analyst (Shepherd 2001); yet these data suggest that task unfamiliarity may also be a pitfall when drafting a TA before meeting with subject matter experts.

Participants included subgoals for tasks in the cooking domain and for making phone call that can be described as symmetrical or complementary. For example, if there was a subgoal that specified opening a jar, drawer, or fridge, then there was another subgoal that specified closing it. The importance of considering symmetrical subgoals becomes evident when considering a storage system (a warehouse, a data disk, or working memory) that only allows one to “add items” but not to “delete items”. In the case of sharing pictures using Adgers, participants could have used this as an important cue to check whether the analysis was complete and to notice that they had uploaded a picture but that, according to their draft, it lived on the web forever.
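
The following minimal sketch (ours; the verb pairs are illustrative, not an inventory from the data) shows how such a symmetry check could flag incomplete analyses such as the Adgers example:

```python
# Illustrative "opening" verbs mapped to their symmetric counterparts.
SYMMETRIC = {"open": "close", "start": "end", "upload": "remove"}

def missing_closures(subgoals):
    """Return opening subgoals whose symmetric counterpart never appears."""
    missing = []
    for verb, counterpart in SYMMETRIC.items():
        for goal in subgoals:
            if goal.startswith(verb + " "):
                obj = goal[len(verb) + 1:]
                if f"{counterpart} {obj}" not in subgoals:
                    missing.append(goal)
    return missing

ta = ["open jar", "spread peanut butter", "close jar",
      "upload picture"]  # no "remove picture" in the draft
print(missing_closures(ta))  # ['upload picture']
```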

A dynamic process

When reading introductions to TA, the iterative nature of the process is emphasized, yet the instructions appear static (one-directional), and intermediary products are not displayed (e.g., Shepherd 2001). Flowcharts illustrating how redescription occurs make it appear as if a hierarchy is created top-down, with questions designed to guide how each level is fleshed out before moving on to the next level. This reflects less the act of “creating” a goals hierarchy and resembles more the act of retrieving a well-formed representation, such as one an analyst may acquire through repeated analyses of the same domain (Shepherd 2001).

Participants in this study illustrated that creating the redescription is a dynamic process. Many of the questions participants asked are similar to those compared by Stanton (2006). However, participants used these questions to understand the task space, narrow or broaden it, search it for objects and requirements, and elicit information about the procedure. They negotiated which subgoals to include, how to name them, and where they should be placed, even switching them around. This resembles the qualitative data analysis approach that (cognitive) TA uses to extract meaningful findings (Crandall et al. 2006), except that the unit of analysis in redescription is a verb–noun pair rather than cognition.

In GDTA, the practitioner can expect the initial draft of a goals hierarchy to change when reviewing the draft with the subject matter expert by adding, deleting, or rearranging goals (Endsley et al. 2003). Our data show that this can also happen while constructing the initial draft. Endsley et al. (2003) differentiate goals (e.g., provide effective communication) from tasks (e.g., make radio communication; technology dependent and to be physically accomplished) and information requirements. The latter two provide important cues, as goals can be derived from them by asking “why” and “what for”. This may feed into efforts to automate task analysis (e.g., Asimakopoulos et al. 2011) that parse specific actions from procedural scenarios in an attempt to infer task structure.

It appears, then, that creating a goals hierarchy can be described as a three-stage process: first, “what” is done is outlined; second, goals are revised by asking “why”; and third, subgoals are positioned in the goals hierarchy, for which Endsley et al. (2003) also recommended asking “why”. Asking “why” is likewise a major component of the approach to extracting subgoals described by Catrambone (2011). “Why” questions were rarely asked by our participants, which could suggest that they were still in the first phase and that some revisions could be expected.

Overall discussion

Although the process and methods of TA have been described and skills identified (e.g., Hoffman and Militello 2009), expertise in TA has yet to be defined. It has been suggested that TA is the process of understanding a task and its associated knowledge or problems, and that its value lies in creating a shared understanding within the TA team (e.g., Shepherd 1976). However, a product such as a redescription is just as valuable, given that it feeds into subsequent analyses (e.g., SHERPA, GOMS) and could affect their results. We therefore operationalized expertise in (H)TA as consisting of both the process and the product (HTA diagram) and compared novice and experienced performance on the same set of tasks using the same master TA (coding scheme).

The goal of this research was to determine skill components in redescription by characterizing novices’ and practitioners’ performance on the same set of tasks. These studies captured a first draft of a redescription, completed within 15 min, based on a main goal given to participants. Table 7 shows a summary of the skill components and guidelines for training. This table is by no means complete, given that (H)TA is an iterative process and includes many other activities and skills, such as interviewing (Ainsworth 2001) and bootstrapping (quickly learning about a new domain; Crandall et al. 2006). Thus, it remains unanswered how this initial draft might change given more information, thought, and analysis.

Table 7 Skill components and guidelines for evaluating task analysis

What to expect from a novice

Novices (after training) and practitioners performed similarly on some measures. For example, their redescriptions had similar breadth and depth dimensions, and both groups created HTA with a depth of one and with breadths outside the recommendations. This suggests (a) that the dimensions of novices’ HTA reflect what one can reasonably expect from a first draft, and (b) that a depth of one may be useful enough for some practitioners but indicates suboptimal performance for novices. Knowing what to expect from a novice is important for a teacher to gauge performance, especially a teacher who has little practical HTA experience.

On average, novices and practitioners both identified more subgoals for the cooking domain than for the communication domain; however, for practitioners the difference was threefold. An obvious explanation is that practitioners tended to be older and thus probably had more cooking experience. However, given that both groups have experience in the communication domain, we interpret this difference as reflecting practitioners’ focus on and appreciation of task details (verb–noun pairs) that are more easily accessible and manifold when analyzing some tasks. Tasks such as making a peanut butter and jelly sandwich may be perceived as too simple for learning TA. Yet, our data suggest that cooking tasks are well suited to conveying the idea of attention to detail, leaving complex tasks for the training of more advanced concepts.

Unit of analysis

Table 7 is a summary of components based on findings from both studies. To illustrate how we arrived at them, we will consider an area in which practitioners and novices differed: subgoals. Practitioners clearly delineated verb–noun pairs by placing individual pairs on separate lines or bullets, which indicates a small chunk size. Novices, however, often clumped three or four verb–noun pairs into one bullet without any subdivisions. One difference between novices and experts (or highly skilled performers) is how they represent a problem, in this case a task (e.g., Ericsson and Smith 1991). Our data suggest that for redescription, this means that the problem (the task) is represented in small units (one verb–noun pair) that are chunked together. This also means that the size of the unit of analysis might predict TA experience.

Using this and other findings, we can unpack instructions on redescription that say “state subordinate goals” (e.g., Stanton 2006). To state subgoals, one has to define what a subgoal is, identify the subgoal, delineate it from other subgoals, determine how to label it, and notice whether it is outside the boundaries. We suggest, then, that training for novices start with recognizing and separating the unit of analysis (verb–noun pairs) and how these units are or can be arranged in the hierarchy, before advancing to other concepts such as determining whether levels of analysis are equivalent to each other.

Measures of quality

The quest for how to determine the reliability, validity, efficiency, and effectiveness of a TA is ongoing, and it has been suggested that we should instead understand the qualities of poor and good TA (Hoffman and Militello 2009). Some have argued that a good TA is one that produces useful results (e.g., Shepherd 1976), captures the task aspects one intends to measure (for example, cognition), and is grounded in theory (Crandall et al. 2006). However, these definitions may lead to devaluing a good analysis when there is just not much to be found (Ainsworth and Marshall 1998).

Measures to determine the quality of an HTA diagram include being hierarchical, equivalence between levels of analysis, logical decomposition, and versatility of the analysis (Patrick et al. 2000). Some of these qualities depend on the existence of others. For example, a hierarchy is a prerequisite for determining whether different levels of analysis are equivalent. We combined recommendations from the literature to create operational definitions and assess task redescriptions. The goal was to show that these measures are useful in describing novice and practitioner performance. However, they also have limitations. For example, whether a redescription is versatile appears to depend on the purpose of the TA and should thus be considered during assessment of a redescription.

Novices desire validation of their redescriptions, but this request has been countered with the argument that there are many valid ways to redescribe a task (Shepherd 1976). Thus, describing a hierarchy in terms of breadth and depth may come with the caveat that it provides only a relative assessment rather than an absolute one. Nevertheless, recommendations for breadth and depth are informed by the boundaries of what analysts can conceptualize (Ainsworth 2001), and experience teaches that some redescriptions are more useful than others (Shepherd 2001). Thus, quantifying the dimensions of a redescription may be beneficial after all for an initial, visual assessment.

This research suggests in more detail how redescription occurs, based on novice and practitioner data derived from the same six tasks and coding scheme. Crandall et al.’s (2006) call for new methods and for more competent practitioners indicates an increased training need, not only in how to analyze qualitative data but also in how to derive a task structure from those data. These skill components address some of the related training questions.

Acknowledgments

This research was supported in part by contributions from Deere & Company. We thank Jerry Duncan for his support and advice on this research. This research was submitted in partial fulfillment of the Doctor of Philosophy degree at the Georgia Institute of Technology (Adams, 2010). We thank the many people who helped connect us with practitioners for Study 2, and the members of the Human Factors and Aging Laboratory who helped with data collection and analysis. We especially appreciate the many hours of task analysis coding by Sarah (Felipe) Gobrogge and the space provided by Dr. Anne McLaughlin to meet with participants. Lastly, we thank the reviewers for their time and feedback: Drs. R. R. Hoffman and T. Ormerod and one anonymous reviewer.

Electronic supplementary material The online version of this article (doi:10.1007/s11251-013-9270-9) contains supplementary material, which is available to authorized users.

  • Adams JA. Historical review and appraisal of research on the learning, retention, and transfer of human motor skills. Psychological Bulletin. 1987;101(1):41–74.
  • Ainsworth L. Task analysis. In: Noyes J, Bransby M, editors. People in control. London, UK: Institution of Electrical Engineers; 2001. pp. 117–132.
  • Ainsworth L, Marshall E. Issues of quality and practicability in task analysis: Preliminary results from two surveys. Ergonomics. 1998;41(11):1607–1617. doi:10.1080/001401398186090.
  • Annett J. Hierarchical task analysis. In: Diaper D, Stanton NA, editors. The handbook of task analysis for human-computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 67–82.
  • Annett J, Duncan KD. Task analysis and training design. Occupational Psychology. 1967;41(4):211–221.
  • Asimakopoulos S, Dix A, Fildes R. Using hierarchical task decomposition as a grammar to map actions in context: Application to forecasting systems in supply chain planning. International Journal of Human-Computer Studies. 2011;69:234–250.
  • Carlson R, Chandler P, Sweller J. Learning and understanding science instructional material. Journal of Educational Psychology. 2003;95(3):629–640. doi:10.1037/0022-0663.95.3.629.
  • Catrambone R. Task analysis by problem solving (TAPS): Uncovering expert knowledge to develop high-quality instructional materials and training. Paper presented at the 2011 Learning and Technology Symposium, Columbus, GA; 2011.
  • Clark RE, Feldon DF, van Merriënboer JJG, Yates KE, Early S. Cognitive task analysis. In: Spector JM, Merrill MD, van Merriënboer JJG, Driscoll MP, editors. Handbook of research on educational communications and technology. 3rd ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2008.
  • Craik FIM, Bialystok E. Planning and task management in older adults: Cooking breakfast. Memory & Cognition. 2006;34(6):1236–1249.
  • Crandall B, Klein GA, Hoffman RR. Working minds: A practitioner’s guide to cognitive task analysis. Cambridge, MA: MIT Press; 2006.
  • Crawford JO, Taylor C, Po NLW. A case study of on-screen prototypes and usability evaluation of electronic timers and food menu systems. International Journal of Human-Computer Interaction. 2001;13(2):187–201. doi:10.1207/S15327590IJHC1302_6.
  • Dattel AR, Durso FT, Bédard R. Procedural or conceptual training: Which is better for teaching novice pilots landings and traffic patterns? Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2009;53(26):1964–1968. doi:10.1177/154193120905302618.
  • Davis J, Rebelsky SA. Food-first computer science: Starting the first course right with PB&J. In: Proceedings of the 38th SIGCSE Technical Symposium on Computer Science Education; 2007. pp. 372–376.
  • Diaper D. Understanding task analysis for human-computer interaction. In: Diaper D, Stanton NA, editors. The handbook of task analysis for human-computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 5–47.
  • Diaper D, Stanton NA. Wishing on a sTAr: The future of task analysis. In: Diaper D, Stanton NA, editors. The handbook of task analysis for human-computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 603–619.
  • Drysdale J, Casey J, Porter-Armstrong A. Effectiveness of training on community skills of children with intellectual disabilities. Scandinavian Journal of Occupational Therapy. 2008;15:247–255. doi:10.1080/11038120802456136.
  • Endsley MR, Bolté B, Jones DG. Designing for situation awareness: An approach to user-centered design. Boca Raton, FL: CRC/Taylor & Francis; 2003.
  • Ericsson KA, Smith J. Prospects and limits of the empirical study of expertise: An introduction. In: Ericsson KA, Smith J, editors. Toward a general theory of expertise: Prospects and limits. New York, NY: Cambridge University Press; 1991. pp. 1–38.
  • Feldon DF, Timmerman BC, Stowe KA, Showman R. Translating expertise into effective instruction: The impacts of cognitive task analysis (CTA) on lab report quality and student retention in the biological sciences. Journal of Research in Science Teaching. 2010;47(10):1165–1185.
  • Felipe SK, Adams AE, Rogers WA, Fisk AD. Training novices on hierarchical task analysis. In: Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting. Santa Monica, CA: Human Factors and Ergonomics Society; 2010.
  • Hickman JM, Rogers WA, Fisk AD. Training older adults to use new technology. Journals of Gerontology. 2007;62B:77–84.
  • Hoffman RR, Militello LG. Perspectives on cognitive task analysis. New York, NY: Psychology Press; 2009.
  • Jonassen DH, Hannum WH, Tessmer M. Handbook of task analysis procedures. Westport, CT: Praeger; 1989.
  • Jonassen DH, Tessmer M, Hannum WH. Task analysis methods for instructional design; 1999. Retrieved from the netLibrary database.
  • Kieras D. GOMS models for task analysis. In: Diaper D, Stanton NA, editors. The handbook of task analysis for human-computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 83–116.
  • Kirwan B, Ainsworth LK. A guide to task analysis. London: Taylor & Francis; 1992.
  • Lane R, Stanton NA, Harrison D. Applying hierarchical task analysis to medication administration errors. Applied Ergonomics. 2006;37:669–679. doi:10.1016/j.apergo.2005.08.001.
  • Luker KR, Sullivan ME, Peyre SE, Sherman R, Grundwald T. The use of a cognitive task analysis-based multimedia program to teach surgical decision making in flexor tendon repair. The American Journal of Surgery. 2008;195:11–15.
  • Novick LR. Understanding spatial diagram structure: An analysis of hierarchies, matrices, and networks. The Quarterly Journal of Experimental Psychology. 2006;59(10):1826–1856.
  • Novick LR, Hurley SM. To matrix, network, or hierarchy: That is the question. Cognitive Psychology. 2001;42(2):158–216.
  • Olfman L, Mandviwalla M. Conceptual versus procedural software training for graphical user interfaces: A longitudinal field experiment. MIS Quarterly. 1994;18(4):405–426.
  • Ormerod TC, Shepherd A. Using task analysis for information requirements specification: The sub-goal template (SGT) method. In: Diaper D, Stanton NA, editors. The handbook of task analysis for human-computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 347–365.
  • Patrick J, Gregov A, Halliday P. Analysing and training task analysis. Instructional Science. 2000;28(1):51–79.
  • Redish JC, Wixon D. Task analysis. In: Jacko J, Sears A, editors. The human-computer interaction handbook. Mahwah, NJ: Lawrence Erlbaum Associates; 2003. pp. 922–940.
  • Schaafstal A, Schraagen JM, van Berlo M. Cognitive task analysis and innovation of training: The case of structured troubleshooting. Human Factors. 2000;42(1):75–86.
  • Scheiter K, Gerjets P, Schuh J. The acquisition of problem-solving skills in mathematics: How animations can aid understanding of structural problem features and solution procedures. Instructional Science. 2010;38:487–502. doi:10.1007/s11251-009-9114-9.
  • Seamster TL, Redding RE, Kaempf GL. A skill-based cognitive task analysis framework. In: Schraagen JM, Chipman SF, Shalin VL, editors. Cognitive task analysis. Mahwah, NJ: Lawrence Erlbaum Associates; 2000. pp. 135–146.
  • Shepherd A. An improved tabular format for task analysis. Journal of Occupational Psychology. 1976;49:93–104.
  • Shepherd A. Hierarchical task analysis and training decisions. Programmed Learning and Educational Technology. 1985;22:162–176.
  • Shepherd A. An approach to information requirements specification for process control tasks. Ergonomics. 1993;36(11):1425–1437. doi:10.1080/00140139308968010.
  • Shepherd A. HTA as a framework for task analysis. Ergonomics. 1998;41(11):1537–1552.
  • Shepherd A. Hierarchical task analysis. London: Taylor & Francis; 2001.
  • Shipley W, Zachary R. Shipley Institute of Living Scale. Los Angeles: Western Psychological Services; 1939.
  • Stanton NA. Hierarchical task analysis: Developments, applications, and extensions. Applied Ergonomics. 2006;37:55–79.
  • Stanton NA, Young MS. A guide to methodology in ergonomics: Designing for human use. London, UK: Taylor & Francis; 1999.
  • Sullivan ME, Brown CVR, Peyre SE, Salim A, Martin M, Towfigh S, et al. The use of cognitive task analysis to improve the learning of percutaneous tracheostomy placement. The American Journal of Surgery. 2007;193(1):96–99.
  • van Merriënboer JJG. Training complex cognitive skills. Englewood Cliffs, NJ: Educational Technology Publications; 1997.
  • Wechsler D. Wechsler Memory Scale III. 3rd ed. San Antonio, TX: The Psychological Corporation; 1997.


12.14: Sample Student Literary Analysis Essays


Heather Ringo & Athena Kashyap, City College of San Francisco via ASCCC Open Educational Resources Initiative

The following examples are essays where student writers focused on close-reading a literary work.

While reading these examples, ask yourself the following questions:

  • What is the essay's thesis statement, and how do you know it is the thesis statement?
  • What is the main idea or topic sentence of each body paragraph, and how does it relate back to the thesis statement?
  • Where and how does each essay use evidence (quotes or paraphrase from the literature)?
  • What are some of the literary devices or structures the essays analyze or discuss?
  • How does each author structure their conclusion, and how does their conclusion differ from their introduction?

Example 1: Poetry

Victoria Morillo

Instructor Heather Ringo

3 August 2022

How Nguyen’s Structure Solidifies the Impact of Sexual Violence in “The Study”

Stripped of innocence, your body taken from you. No matter how much you try to block out the instance in which these two things occurred, memories surface and come back to haunt you. How does a person, a young boy, cope with an event that forever changes his life? Hieu Minh Nguyen deconstructs the very way in which an act of sexual violence affects a survivor. In his poem, “The Study,” the poem's speaker recounts the year in which his molestation took place, describing how his memory filters in and out. Throughout the poem, Nguyen writes in free verse, permitting a structural liberation to become the foundation for his message to shine through. While he moves the readers with this poignant narrative, Nguyen effectively conveys the resulting internal struggles of feeling alone and unseen.

The speaker recalls such a painful memory through specific punctuation choices. Just by looking at the poem, we see that the first period doesn’t appear until line 14. It finally comes after the speaker reveals to his readers the possible central purpose for writing this poem: the speaker's molestation. In the first half, the poem makes use of commas, em dashes, and colons, which lends itself to the idea of the speaker stringing along all of these details to make sense of this time in his life. If one reads the poem following the conventions of punctuation, a sense of urgency is present here as well, exemplified by the lack of periods to finalize a thought; instead, Nguyen uses other punctuation marks to connect thoughts. Serving as another connector of thoughts, the two em dashes give emphasis to the role memory plays when the speaker discusses how “no one [had] a face” during that time (Nguyen 9-11). He speaks in this urgent manner until the 14th line, and when he finally gets it off his chest, the pace of the poem changes, as does the more frequent use of the period. This stream-of-consciousness-like section, when juxtaposed with the latter half of the poem, causes readers to slow down and pay attention to the details. It also splits the poem in two: a section that speaks of the fogginess of memory, then transitions into one that remembers it all.

In tandem with the fluctuating nature of memory, the utilization of line breaks and word choice helps reflect the damage the molestation has done. Within the first couple of lines, the poem demands the readers’ attention when the line breaks from “floating” to “dead” as the speaker describes his memory of Little Billy (Nguyen 1-4). This line break averts the readers’ expectation of the direction of the narrative and immediately shifts the tone of the poem. The break also speaks to the effect his trauma has ingrained in him and how “[f]or the longest time,” his only memory of that year revolves around an image of a boy’s death. In a way, the speaker sees himself in Little Billy; or perhaps, he’s representative of the tragic death of his boyhood, how the speaker felt so “dead” after enduring such a traumatic experience, even referring to himself as a “ghost” that he tries to evict from his conscience (Nguyen 24). The feeling that a part of him has died is solidified at the very end of the poem when the speaker describes himself as a nine-year-old boy who’s been “fossilized,” forever changed by this act (Nguyen 29). By choosing words associated with permanence and death, the speaker tries to recreate the atmosphere (in which he felt trapped) in order for readers to understand the loneliness that came as a result of his trauma. With the assistance of line breaks, more attention is drawn to the speaker's words, intensifying their importance, and demanding to be felt by the readers.

Most importantly, the speaker speaks eloquently, and so heartbreakingly, about the effect sexual violence has on a person. Perhaps most frustrating are the people who fail to believe survivors of these types of crimes. This is evident when he describes “how angry” the tenants were when they filled the pool with cement (Nguyen 4). They seem to represent how people in the speaker's life were dismissive of his assault and viewed his tragedy as a nuisance of sorts. This sentiment is bookended when he says, “They say, give us details, so I give them my body. / They say, give us proof, so I give them my body” (Nguyen 25-26). The repetition of these two lines reinforces the feeling many survivors feel in these scenarios, as they’re often left to deal with trying to make people believe them, or even to see them.

It’s important to recognize how the structure of this poem gives the speaker space to express the pain he’s had to carry for so long. As a characteristic of free verse, the poem doesn’t follow any structured rhyme scheme or meter, which in turn allows him to tell his story the way he wants to, without constraints. The speaker has the freedom to display his experience in a way that evades predictability and engenders the authenticity of a story very personal to him. As readers, we abandon anticipating the next rhyme and instead focus our attention on the other ways, like his punctuation or word choice, in which he effectively tells his story. The speaker recognizes that some part of him no longer belongs to himself, but by writing “The Study,” he shows other survivors that they’re not alone and encourages hope that eventually, they will be freed from the shackles of sexual violence.

Works Cited

Nguyen, Hieu Minh. “The Study.” Poets.org, Academy of American Poets, Coffee House Press, 2018, https://poets.org/poem/study-0.

Example 2: Fiction

Todd Goodwin

Professor Stan Matyshak

Advanced Expository Writing

Sept. 17, 20—

Poe’s “Usher”: A Mirror of the Fall of the House of Humanity

Right from the outset of the grim story, “The Fall of the House of Usher,” Edgar Allan Poe enmeshes us in a dark, gloomy, hopeless world, alienating his characters and the reader from any sort of physical or psychological norm where such values as hope and happiness could possibly exist. He fatalistically tells the story of how a man (the narrator) comes from the outside world of hope, religion, and everyday society and tries to bring some kind of redeeming happiness to his boyhood friend, Roderick Usher, who not only has physically and psychologically wasted away but is entrapped in a dilapidated house of ever-looming terror with an emaciated and deranged twin sister. Roderick Usher embodies the wasting away of what once was vibrant and alive, and his house of “insufferable gloom” (273), which contains his morbid sister, seems to mirror or reflect this fear of death and annihilation that he most horribly endures. A close reading of the story reveals that Poe uses mirror images, or reflections, to contribute to the fatalistic theme of “Usher”: each reflection serves to intensify an already prevalent tone of hopelessness, darkness, and fatalism.

It could be argued that the house of Roderick Usher is a “house of mirrors,” whose unpleasant and grim reflections create a dark and hopeless setting. For example, the narrator first approaches “the melancholy house of Usher on a dark and soundless day,” and finds a building which causes him a “sense of insufferable gloom,” which “pervades his spirit and causes an iciness, a sinking, a sickening of the heart, an undiscerned dreariness of thought” (273). The narrator then optimistically states: “I reflected that a mere different arrangement of the scene, of the details of the picture, would be sufficient to modify, or perhaps annihilate its capacity for sorrowful impression” (274). But the narrator then sees the reflection of the house in the tarn and experiences a “shudder even more thrilling than before” (274). Thus the reader begins to realize that the narrator cannot change or stop the impending doom that will befall the house of Usher, and maybe humanity. The story cleverly plays with the word reflection: the narrator sees a physical reflection that leads him to a mental reflection about Usher’s surroundings.

The narrator’s disillusionment with such grim reflections continues throughout the story. For example, he describes Roderick Usher’s face as distinct with signs of old strength but lost vigor: the remains of what used to be. He describes the house as a once happy and vibrant place, which, like Roderick, lost its vitality. Also, the narrator describes Usher’s hair as growing wild on his rather obtrusive head, which directly mirrors the eerie moss and straw covering the outside of the house. The narrator continually longs to see these bleak reflections as a dream, for he states: “Shaking off from my spirit what must have been a dream, I scanned more narrowly the real aspect of the building” (276). He does not want to face the reality that Usher and his home are doomed to fall, regardless of what he does.

Although there are almost countless examples of these mirror images, two others stand out as important. First, Roderick and his sister, Madeline, are twins. The narrator aptly states just as he and Roderick are entombing Madeline that there is “a striking similitude between brother and sister” (288). Indeed, they are mirror images of each other. Madeline is fading away psychologically and physically, and Roderick is not too far behind! The reflection of “doom” that these two share helps intensify and symbolize the hopelessness of the entire situation; thus, they further develop the fatalistic theme. Second, in the climactic scene where Madeline has been mistakenly entombed alive, there is a pairing of images and sounds as the narrator tries to calm Roderick by reading him a romance story. Events in the romance unfold simultaneously with the sister’s escape from her tomb: the hero breaks out of the coffin; the dragon’s shriek as it is slain parallels Madeline’s shriek; and the clangor of a shield is matched by the sister’s clanging along a metal passageway. As the suspense reaches its climax, Roderick shrieks his last words to his “friend,” the narrator: “Madman! I tell you that she now stands without the door” (296).

Roderick, who slowly falls into insanity, ironically calls the narrator the “Madman.” We are left to reflect on what Poe means by this ironic twist. Poe’s bleak and dark imagery, and his use of mirror reflections, seem only to intensify the hopelessness of “Usher.” We can plausibly conclude that, indeed, the narrator is the “Madman,” for he comes from everyday society, which is a place where hope and faith exist. Poe would probably argue that such a place is opposite to the world of Usher because a world where death is inevitable could not possibly hold such positive values. Therefore, just as Roderick mirrors his sister, the reflection in the tarn mirrors the dilapidation of the house, and the story mirrors the final actions before the death of Usher. “The Fall of the House of Usher” reflects Poe’s view that humanity is hopelessly doomed.

Poe, Edgar Allan. “The Fall of the House of Usher.” 1839. Electronic Text Center, University of Virginia Library. 1995. Web. 1 July 2012. <http://etext.virginia.edu/toc/modeng/public/PoeFall.html>.

Example 3: Poetry

Amy Chisnell

Professor Laura Neary

Writing and Literature

April 17, 20—

Don’t Listen to the Egg!: A Close Reading of Lewis Carroll’s “Jabberwocky”

“You seem very clever at explaining words, Sir,” said Alice. “Would you kindly tell me the meaning of the poem called ‘Jabberwocky’?”

“Let’s hear it,” said Humpty Dumpty. “I can explain all the poems that ever were invented—and a good many that haven’t been invented just yet.” (Carroll 164)

In Lewis Carroll’s Through the Looking-Glass, Humpty Dumpty confidently translates (to a not-so-confident Alice) the complicated language of the poem “Jabberwocky.” The words of the poem, though nonsense, aptly tell the story of the slaying of the Jabberwock. Upon finding “Jabberwocky” on a table in the looking-glass room, Alice is confused by the strange words. She is quite certain that “somebody killed something,” but she does not understand much more than that. When later she encounters Humpty Dumpty, she seizes the opportunity to have the knowledgeable egg interpret—or translate—the poem. Since Humpty Dumpty professes to be able to “make a word work” for him, he is quick to agree. Thus he acts like a New Critic who interprets the poem by performing a close reading of it. Through Humpty’s interpretation of the first stanza, however, we see the poem’s deeper comment concerning the practice of interpreting poetry and literature in general—that strict analytical translation destroys the beauty of a poem. In fact, Humpty Dumpty commits the “heresy of paraphrase,” for he fails to understand that meaning cannot be separated from the form or structure of the literary work.

Of the 71 words found in “Jabberwocky,” 43 have no known meaning. They are simply nonsense. Yet through this nonsensical language, the poem manages not only to tell a story but also to give the reader a sense of setting and characterization. One feels, rather than concretely knows, that the setting is dark, wooded, and frightening. The characters, such as the Jubjub bird, the Bandersnatch, and the doomed Jabberwock, also appear in the reader’s head, even though they will not be found in the local zoo. Even though most of the words are not real, the reader is able to understand what goes on because he or she is given free license to imagine what the words denote and connote. Simply, the poem’s nonsense words are the meaning.

Therefore, when Humpty interprets “Jabberwocky” for Alice, he is not doing her any favors, for he actually misreads the poem. Although the poem in its original form is constructed from nonsense words, by the time Humpty is done interpreting it, it truly does not make any sense. The first stanza of the original poem is as follows:

’Twas brillig, and the slithy toves

Did gyre and gimble in the wabe;

All mimsy were the borogoves,

And the mome raths outgrabe. (Carroll 164)

If we replace, however, the nonsense words of “Jabberwocky” with Humpty’s translated words, the effect would be something like this:

’Twas four o’clock in the afternoon, and the lithe and slimy badger-lizard-corkscrew creatures

Did go round and round and make holes in the grass-plot round the sun-dial:

All flimsy and miserable were the shabby-looking birds

with mop feathers,

And the lost green pigs bellowed-sneezed-whistled.

By translating the poem in such a way, Humpty removes the charm or essence—and the beauty, grace, and rhythm—from the poem. The poetry is sacrificed for meaning. Humpty Dumpty commits the heresy of paraphrase. As Cleanth Brooks argues, “The structure of a poem resembles that of a ballet or musical composition. It is a pattern of resolutions and balances and harmonizations” (203). When the poem is left as nonsense, the reader can easily imagine what a “slithy tove” might be, but when Humpty tells us what it is, he takes that imaginative license away from the reader. The beauty (if that is the proper word) of “Jabberwocky” is in not knowing what the words mean, and yet understanding. By translating the poem, Humpty takes that privilege from the reader. In addition, Humpty fails to recognize that meaning cannot be separated from the structure itself: the nonsense poem reflects this literally—it means “nothing” and achieves this meaning by using “nonsense” words.

Furthermore, the nonsense words Carroll chooses to use in “Jabberwocky” have a magical effect upon the reader; the shadowy sound of the words creates the atmosphere, which may be described as a trance-like mood. When Alice first reads the poem, she says it seems to fill her head “with ideas.” The strange-sounding words in the original poem do give one ideas. Why is this? Even though the reader has never heard these words before, he or she is instantly aware of the murky, mysterious mood they set. In other words, diction operates not on the denotative level (the dictionary meaning) but on the connotative level (the emotions it evokes). Thus “Jabberwocky” creates a shadowy mood, and the nonsense words are instrumental in creating this mood. Carroll could not have simply used any nonsense words.

For example, let us change the “dark,” “ominous” words of the first stanza to “lighter,” more “comic” words:

’Twas mearly, and the churly pells

Did bimble and ringle in the tink;

All timpy were the brimbledimps,

And the bip plips outlink.

Shifting the sounds of the words from dark to light merely takes a shift in thought. To create a specific mood using nonsense words, one must create new words from old words that convey the desired mood. In “Jabberwocky,” Carroll mixes “slimy,” a grim idea, and “lithe,” a pliable image, to get a new adjective: “slithy” (a portmanteau word). In this translation, brighter words were used to get a lighter effect. “Mearly” is a combination of “morning” and “early,” and “ringle” is a blend of “ring” and “dingle.” The point is that “Jabberwocky’s” nonsense words are created specifically to convey this shadowy or mysterious mood and are integral to the “meaning.”

Consequently, Humpty’s rendering of the poem leaves the reader with a completely different feeling than does the original poem, which provided us with a sense of ethereal mystery, of a dark and foreign land with exotic creatures and fantastic settings. The mysteriousness is destroyed by Humpty’s literal paraphrase of the creatures and the setting; by doing so, he has taken the beauty away from the poem in his attempt to understand it. He has committed the heresy of paraphrase: “If we allow ourselves to be misled by it [this heresy], we distort the relation of the poem to its ‘truth’… we split the poem between its ‘form’ and its ‘content’” (Brooks 201). Humpty Dumpty’s ultimate demise might be seen to symbolize the heretical split between form and content: as a literary creation, Humpty Dumpty is an egg, a well-wrought urn of nonsense. His fall from the wall cracks him and separates the contents from the container, and not even all the King’s men can put the scrambled egg back together again!

Through the odd characters of a little girl and a foolish egg, “Jabberwocky” suggests a bit of sage advice about reading poetry, advice that the New Critics built their theories on. The importance lies not solely within strict analytical translation or interpretation, but in the overall effect of the imagery and word choice that evokes a meaning inseparable from those literary devices. As Archibald MacLeish so aptly writes: “A poem should not mean / But be.” Sometimes it takes a little nonsense to show us the sense in something.

Brooks, Cleanth. The Well-Wrought Urn: Studies in the Structure of Poetry. 1942. San Diego: Harcourt Brace, 1956. Print.

Carroll, Lewis. Through the Looking-Glass. Alice in Wonderland. 2nd ed. Ed. Donald J. Gray. New York: Norton, 1992. Print.

MacLeish, Archibald. “Ars Poetica.” The Oxford Book of American Poetry. Ed. David Lehman. Oxford: Oxford UP, 2006. 385–86. Print.

Attribution

  • Sample Essay 1 is published with permission from Victoria Morillo, licensed Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
  • Sample Essays 2 and 3 adapted from Cordell, Ryan and John Pennington. "2.5: Student Sample Papers" from Creating Literary Analysis. 2012. Licensed Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)

Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts

Organizing Your Analysis


This resource covers how to write a rhetorical analysis essay of primarily visual texts with a focus on demonstrating the author’s understanding of the rhetorical situation and design principles.

There is no one perfect way to organize a rhetorical analysis essay. In fact, writers should always be a bit leery of plug-in formulas that offer a perfect essay format. Remember, organization itself is not the enemy, only organization without considering the specific demands of your particular writing task. That said, here are some general tips for plotting out the overall form of your essay.

Introduction

Like any rhetorical analysis essay, an essay analyzing a visual document should quickly set the stage for what you’re doing. Try to cover the following concerns in the initial paragraphs:

  • Make sure to let the reader know you’re performing a rhetorical analysis. Otherwise, they may expect you to take positions or make an evaluative argument that may not be coming.
  • Clearly state what the document under consideration is and possibly give some pertinent background information about its history or development. The intro can be a good place for a quick, narrative summary of the document. The key word here is “quick,” for you may be dealing with something large (for example, an entire episode of a cartoon like The Simpsons). Save more in-depth descriptions for your body paragraph analysis.
  • If you’re dealing with a smaller document (like a photograph or an advertisement), and copyright allows, the introduction or first page is a good place to integrate it into your page.
  • Give a basic rundown of the rhetorical situation surrounding the document: the author, the audience, the purpose, the context, etc.

Thesis Statements and Focus

Many authors struggle with thesis statements or controlling ideas in regard to rhetorical analysis essays. There may be a temptation to think that merely announcing the text as a rhetorical analysis is purpose enough. However, especially depending on your essay’s length, your reader may need a more direct and clear statement of your intentions. Below are a few examples.

1. Clearly narrow the focus of what your essay will cover. Ask yourself if one or two design aspects of the document are interesting and complex enough to warrant a full analytical treatment.

The website for Amazon.com provides an excellent example of alignment and proximity to assist its visitors in navigating a potentially large and confusing amount of information.

2. Since visual documents often seek to move people towards a certain action (buying a product, attending an event, expressing a sentiment), an essay may analyze the rhetorical techniques used to accomplish this purpose. The thesis statement should reflect this goal.

The call-out flyer for the Purdue Rowing Team uses a mixture of dynamic imagery and tantalizing promises to create interest in potential new members.

3. Rhetorical analysis can also easily lead to making original arguments. Performing the analysis may lead you to an argument; or vice versa, you may start with an argument and search for proof that supports it.

A close analysis of the female body images in the July 2007 issue of Cosmopolitan magazine reveals contradictions between the articles’ calls for self-esteem and the advertisements’ unrealistic beauty demands.

These are merely suggestions. The best measure for what your focus and thesis statement should be is the document itself and the demands of your writing situation. Remember that the main thrust of your thesis statement should be on how the document creates meaning and accomplishes its purposes. The OWL has additional information on writing thesis statements.

Analysis Order (Body Paragraphs)

Depending on the genre and size of the document under analysis, there are a number of logical ways to organize your body paragraphs. Below are a few possible options. Whichever you choose, the goal of your body paragraphs is to present parts of the document, give an extended analysis of how that part functions, and suggest how the part ties into a larger point (your thesis statement or goal).

Chronological

This is the most straightforward approach, but it can also be effective if done for a reason (as opposed to not being able to think of another way). For example, if you are analyzing a photo essay on the web or in a booklet, a chronological treatment allows you to present your insights in the same order that a viewer of the document experiences those images. It is likely that the images have been put in that order and juxtaposed for a reason, so this line of analysis can be easily integrated into the essay.

Be careful using chronological ordering when dealing with a document that contains a narrative (e.g., a television show or music video). Focusing on the chronological could easily lead you to plot summary, which is not the point of a rhetorical analysis.

Spatial

A spatial ordering covers the parts of a document in the order the eye is likely to scan them. This differs from chronological order, which is dictated by pages or screens; spatial order concerns the arrangement of elements within a single page or plane. There are no unwavering guidelines for this, but you can use the following general guidelines.

  • Left to right and top to bottom is still the normal reading and scanning pattern for English-speaking countries.
  • The eye will naturally look for centers. This may be the technical center of the page or the center of the largest item on the page.
  • Lines are often used to provide directions and paths for the eye to follow.
  • Research has shown that on web pages, the eye tends to linger in the top left quadrant before moving left to right. Only after spending a considerable amount of time on the top, visible portion of the page will users scroll down.

Persuasive Appeals

The classic rhetorical appeals are logos, pathos, and ethos. These concepts roughly correspond to the logic, emotion, and character of the document’s attempt to persuade. You can find more information on these concepts elsewhere on the OWL. Once you understand these devices, you could potentially order your essay by analyzing the document’s use of logos, ethos, and pathos in different sections.

Conclusion

The conclusion of a rhetorical analysis essay may not operate too differently from the conclusion of any other kind of essay. Still, many writers struggle with what a conclusion should or should not do. You can find tips elsewhere on the OWL on writing conclusions. In short, however, you should restate your main ideas and explain why they are important; restate your thesis; and outline further research or work you believe should be completed to further your efforts.


How to Write an Argumentative Essay | Examples & Tips

Published on July 24, 2020 by Jack Caulfield. Revised on July 23, 2023.

An argumentative essay expresses an extended argument for a particular thesis statement. The author takes a clearly defined stance on their subject and builds up an evidence-based case for it.


Table of contents

  • When do you write an argumentative essay?
  • Approaches to argumentative essays
  • Introducing your argument
  • The body: developing your argument
  • Concluding your argument
  • Other interesting articles
  • Frequently asked questions about argumentative essays

When do you write an argumentative essay?

You might be assigned an argumentative essay as a writing exercise in high school or in a composition class. The prompt will often ask you to argue for one of two positions, and may include terms like “argue” or “argument.” It will frequently take the form of a question.

The prompt may also be more open-ended in terms of the possible arguments you could make.

Argumentative writing at college level

At university, the vast majority of essays or papers you write will involve some form of argumentation. For example, both rhetorical analysis and literary analysis essays involve making arguments about texts.

In this context, you won’t necessarily be told to write an argumentative essay—but making an evidence-based argument is an essential goal of most academic writing, and this should be your default approach unless you’re told otherwise.

Examples of argumentative essay prompts

At a university level, all the prompts below imply an argumentative essay as the appropriate response.

Your research should lead you to develop a specific position on the topic. The essay then argues for that position and aims to convince the reader by presenting your evidence, evaluation and analysis.

  • Don’t just list all the effects you can think of.
  • Do develop a focused argument about the overall effect and why it matters, backed up by evidence from sources.
  • Don’t just provide a selection of data on the measures’ effectiveness.
  • Do build up your own argument about which kinds of measures have been most or least effective, and why.
  • Don’t just analyze a random selection of doppelgänger characters.
  • Do form an argument about specific texts, comparing and contrasting how they express their thematic concerns through doppelgänger characters.


Approaches to argumentative essays

An argumentative essay should be objective in its approach; your arguments should rely on logic and evidence, not on exaggeration or appeals to emotion.

There are many possible approaches to argumentative essays, but there are two common models that can help you start outlining your arguments: the Toulmin model and the Rogerian model.

Toulmin arguments

The Toulmin model consists of four steps, which may be repeated as many times as necessary for the argument:

  • Make a claim
  • Provide the grounds (evidence) for the claim
  • Explain the warrant (how the grounds support the claim)
  • Discuss possible rebuttals to the claim, identifying the limits of the argument and showing that you have considered alternative perspectives

The Toulmin model is a common approach in academic essays. You don’t have to use these specific terms (grounds, warrants, rebuttals), but establishing a clear connection between your claims and the evidence supporting them is crucial in an argumentative essay.

Say you’re making an argument about the effectiveness of workplace anti-discrimination measures. You might:

  • Claim that unconscious bias training does not have the desired results, and resources would be better spent on other approaches
  • Cite data to support your claim
  • Explain how the data indicates that the method is ineffective
  • Anticipate objections to your claim based on other data, indicating whether these objections are valid, and if not, why not.

Rogerian arguments

The Rogerian model also consists of four steps you might repeat throughout your essay:

  • Discuss what the opposing position gets right and why people might hold this position
  • Highlight the problems with this position
  • Present your own position, showing how it addresses these problems
  • Suggest a possible compromise—what elements of your position would proponents of the opposing position benefit from adopting?

This model builds up a clear picture of both sides of an argument and seeks a compromise. It is particularly useful when people tend to disagree strongly on the issue discussed, allowing you to approach opposing arguments in good faith.

Say you want to argue that the internet has had a positive impact on education. You might:

  • Acknowledge that students rely too much on websites like Wikipedia
  • Argue that teachers view Wikipedia as more unreliable than it really is
  • Suggest that Wikipedia’s system of citations can actually teach students about referencing
  • Suggest critical engagement with Wikipedia as a possible assignment for teachers who are skeptical of its usefulness.

You don’t necessarily have to pick one of these models—you may even use elements of both in different parts of your essay—but it’s worth considering them if you struggle to structure your arguments.

Regardless of which approach you take, your essay should always be structured using an introduction, a body, and a conclusion.

Introducing your argument

Like other academic essays, an argumentative essay begins with an introduction. The introduction serves to capture the reader’s interest, provide background information, present your thesis statement, and (in longer essays) to summarize the structure of the body.

The example below shows how a typical introduction works.

The spread of the internet has had a world-changing effect, not least on the world of education. The use of the internet in academic contexts is on the rise, and its role in learning is hotly debated. For many teachers who did not grow up with this technology, its effects seem alarming and potentially harmful. This concern, while understandable, is misguided. The negatives of internet use are outweighed by its critical benefits for students and educators—as a uniquely comprehensive and accessible information source; a means of exposure to and engagement with different perspectives; and a highly flexible learning environment.

The body: developing your argument

The body of an argumentative essay is where you develop your arguments in detail. Here you’ll present evidence, analysis, and reasoning to convince the reader that your thesis statement is true.

In the standard five-paragraph format for short essays, the body takes up three of your five paragraphs. In longer essays, it will be more paragraphs, and might be divided into sections with headings.

Each paragraph covers its own topic, introduced with a topic sentence. Each of these topics must contribute to your overall argument; don’t include irrelevant information.

This example paragraph takes a Rogerian approach: It first acknowledges the merits of the opposing position and then highlights problems with that position.

The example below shows how a body paragraph is constructed.

A common frustration for teachers is students’ use of Wikipedia as a source in their writing. Its prevalence among students is not exaggerated; a survey found that the vast majority of the students surveyed used Wikipedia (Head & Eisenberg, 2010). An article in The Guardian stresses a common objection to its use: “a reliance on Wikipedia can discourage students from engaging with genuine academic writing” (Coomer, 2013). Teachers are clearly not mistaken in viewing Wikipedia usage as ubiquitous among their students; but the claim that it discourages engagement with academic sources requires further investigation. This point is treated as self-evident by many teachers, but Wikipedia itself explicitly encourages students to look into other sources. Its articles often provide references to academic publications and include warning notes where citations are missing; the site’s own guidelines for research make clear that it should be used as a starting point, emphasizing that users should always “read the references and check whether they really do support what the article says” (“Wikipedia:Researching with Wikipedia,” 2020). Indeed, for many students, Wikipedia is their first encounter with the concepts of citation and referencing. The use of Wikipedia therefore has a positive side that merits deeper consideration than it often receives.

Concluding your argument

An argumentative essay ends with a conclusion that summarizes and reflects on the arguments made in the body.

No new arguments or evidence appear here, but in longer essays you may discuss the strengths and weaknesses of your argument and suggest topics for future research. In all conclusions, you should stress the relevance and importance of your argument.

The following example shows the typical elements of a conclusion.

The internet has had a major positive impact on the world of education; occasional pitfalls aside, its value is evident in numerous applications. The future of teaching lies in the possibilities the internet opens up for communication, research, and interactivity. As the popularity of distance learning shows, students value the flexibility and accessibility offered by digital education, and educators should fully embrace these advantages. The internet’s dangers, real and imaginary, have been documented exhaustively by skeptics, but the internet is here to stay; it is time to focus seriously on its potential for good.

Other interesting articles

If you want to know more about AI tools, college essays, or fallacies, make sure to check out some of our other articles with explanations and examples, or go directly to our tools!

Fallacies

  • Ad hominem fallacy
  • Post hoc fallacy
  • Appeal to authority fallacy
  • False cause fallacy
  • Sunk cost fallacy

College essays

  • Choosing Essay Topic
  • Write a College Essay
  • Write a Diversity Essay
  • College Essay Format & Structure
  • Comparing and Contrasting in an Essay

AI Tools

  • Grammar Checker
  • Paraphrasing Tool
  • Text Summarizer
  • AI Detector
  • Plagiarism Checker
  • Citation Generator

Frequently asked questions about argumentative essays

An argumentative essay tends to be a longer essay involving independent research, and aims to make an original argument about a topic. Its thesis statement makes a contentious claim that must be supported in an objective, evidence-based way.

An expository essay also aims to be objective, but it doesn’t have to make an original argument. Rather, it aims to explain something (e.g., a process or idea) in a clear, concise way. Expository essays are often shorter assignments and rely less on research.

At college level, you must properly cite your sources in all essays, research papers, and other academic texts (except exams and in-class exercises).

Add a citation whenever you quote, paraphrase, or summarize information or ideas from a source. You should also give full source details in a bibliography or reference list at the end of your text.

The exact format of your citations depends on which citation style you are instructed to use. The most common styles are APA, MLA, and Chicago.

The majority of the essays written at university are some sort of argumentative essay. Unless otherwise specified, you can assume that the goal of any essay you’re asked to write is argumentative: to convince the reader of your position using evidence and reasoning.

In composition classes you might be given assignments that specifically test your ability to write an argumentative essay. Look out for prompts including instructions like “argue,” “assess,” or “discuss” to see if this is the goal.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

Caulfield, J. (2023, July 23). How to Write an Argumentative Essay | Examples & Tips. Scribbr. Retrieved April 11, 2024, from https://www.scribbr.com/academic-essay/argumentative-essay/


What is task analysis?

Last updated: 28 February 2023. Reviewed by Miroslav Damyanov.

Every business and organization should understand the needs and challenges of its customers, members, or users. Task analysis allows you to learn about users by observing their behavior. The process can be applied to many types of actions, such as tracking visitor behavior on websites, using a smartphone app, or completing a specific action such as filling out a form or survey.

In this article, we'll look at exactly what task analysis is and why it's so valuable, and we'll provide some examples of how it is used.


Task analysis is learning about users by observing their actions. It entails breaking larger tasks into smaller ones so you can track the specific steps users take to complete a task.

Task analysis can be useful in areas such as the following:

  • Website users signing up for a mailing list or free trial. Track what steps visitors typically take, such as where they find your site and how many pages they visit before taking action. You'd also track the behavior of visitors who leave without completing the task.

  • Teaching children to read. For example, a task analysis for second-graders may identify steps such as matching letters to sounds, breaking longer words into smaller chunks, and teaching common suffixes such as "ing" and "ies."

Benefits of task analysis

There are several benefits to using task analysis for understanding user behavior:

  • Simplifies long and complex tasks
  • Allows for the introduction of new tasks
  • Reduces mistakes and improves efficiency
  • Develops a customized approach

Types of task analysis

There are two main categories of task analysis: cognitive and hierarchical.

Cognitive task analysis

Cognitive task analysis, also known as procedural task analysis, is concerned with understanding the steps needed to complete a task or solve a problem. It is visualized as a linear diagram, such as a flowchart. This is used for fairly simple tasks that can be performed sequentially.
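To make this concrete, here is a minimal Python sketch showing how a procedural task analysis can be recorded as an ordered sequence of steps and printed as a simple linear flow. The checkout task and its step names are invented for illustration:

```python
# Minimal sketch: a procedural (cognitive) task analysis recorded as an
# ordered sequence of steps, mirroring a linear flowchart.
# The task and step names are hypothetical examples.

checkout_task = [
    "Open the shopping cart",
    "Review items and quantities",
    "Enter shipping address",
    "Enter payment details",
    "Confirm the order",
]

def print_linear_flow(steps):
    """Print the steps top to bottom, with arrows between them."""
    for i, step in enumerate(steps, start=1):
        print(f"Step {i}: {step}")
        if i < len(steps):
            print("   |")
            print("   v")

print_linear_flow(checkout_task)
```

A plain list is enough here because, by definition, a procedural task has exactly one step following each step.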

Hierarchical task analysis

Hierarchical task analysis identifies a hierarchy of goals or processes. It is visualized as a top-to-bottom process, where the user needs top-level knowledge to proceed to subsequent tasks, as in Google's example following the user journey of a student completing a class assignment.
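As a rough sketch of the same idea in code, a hierarchical task analysis can be represented as a tree in which a top-level goal decomposes into subtasks. The goal below echoes the student-assignment example, but the specific decomposition is invented, not Google's actual breakdown:

```python
# Sketch: a hierarchical task analysis as a tree of goals and subtasks.
# The decomposition below is invented for illustration.

task_tree = {
    "goal": "Complete a class assignment",
    "subtasks": [
        {"goal": "Understand the assignment brief", "subtasks": []},
        {"goal": "Research the topic", "subtasks": [
            {"goal": "Search for sources", "subtasks": []},
            {"goal": "Take notes", "subtasks": []},
        ]},
        {"goal": "Write and submit the assignment", "subtasks": []},
    ],
}

def print_hierarchy(node, depth=0):
    """Print the tree top to bottom, with indentation showing levels."""
    print("  " * depth + "- " + node["goal"])
    for child in node["subtasks"]:
        print_hierarchy(child, depth + 1)

print_hierarchy(task_tree)
```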

What is the difference between cognitive and hierarchical task analysis?

There are a few differences between cognitive and hierarchical task analysis. While cognitive task analysis is concerned with the user experience when performing tasks, hierarchical task analysis looks at how each part of a system relates to the whole.

When to use task analysis

A task analysis is useful for any project where you need to know as much as possible about the user experience. To be helpful, it needs to be performed early in the process, before you invest too much time or money in features or processes you'll need to change later.

You can take what you learn from task analysis and apply it to other user design processes such as website design , prototyping , wireframing , and usability testing .

How to conduct a task analysis

There are several steps involved in conducting a task analysis.

1. Identify one major goal (the task) you want to learn about. One challenge is knowing what steps to include. If you are studying users performing a task on your website, do you want to start the analysis when they actually land on your site, or earlier? You may also want to know how they got there, such as by searching on Google.

2. Break the main task into smaller subtasks. "Going to the store" might be separated into getting dressed, getting your wallet, leaving the house, and walking or driving to the store. You can decide which subtasks are meaningful enough to include.

3. Draw a diagram to visualize the process; a diagram makes the process easier to understand (see the sketch after these steps).

4. Write down a list of the steps to accompany the diagram, making it more useful to those who are not familiar with the tasks you analyzed.

5. Share and validate the results with your team to get feedback on whether your description of the tasks and subtasks, as well as the diagram, is clear and consistent.
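The following Python sketch walks through steps 2 to 4 for the "going to the store" example: it records the subtasks, renders a simple text diagram, and writes out the accompanying step list. The rendering style is illustrative, not a standard task analysis notation:

```python
# Illustrative sketch of steps 2-4: break the main task into subtasks,
# draw a simple text diagram, and list the steps alongside it.

main_task = "Going to the store"
subtasks = [
    "Get dressed",
    "Get your wallet",
    "Leave the house",
    "Walk or drive to the store",
]

# Step 3: a simple text diagram of the decomposition.
print(main_task)
for sub in subtasks:
    print(f"  └─ {sub}")

# Step 4: a numbered step list to accompany the diagram.
for i, sub in enumerate(subtasks, start=1):
    print(f"{i}. {sub}")
```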

Task analysis in UX

One of the most valuable uses of task analysis is for improving user experience (UX). The entire goal of UX is to identify and overcome user problems and challenges. Task analysis can be helpful in a number of ways:

  • Identifying the steps users take when using a product. Can some of the steps be simplified or eliminated?

  • Finding areas in the process that users find difficult or frustrating. For example, if many users abandon a task at a certain stage, you'll want to introduce changes that improve the completion rate (see the sketch after this list).

  • Revealing, through hierarchical analysis, what users need to know to get from one step to the next. If there are gaps (i.e., not all users have the expertise to complete the steps), they should be filled.
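As a concrete illustration of spotting where users abandon a task, the sketch below computes per-step completion rates for a sign-up funnel. All step names and counts are hypothetical:

```python
# Hypothetical funnel: how many users reached each step of a sign-up
# task. A sharp drop between adjacent steps flags a problem area.

funnel = [
    ("Landed on sign-up page", 1000),
    ("Started the form", 620),
    ("Entered payment details", 240),
    ("Completed sign-up", 210),
]

for (step, count), (_, prev_count) in zip(funnel[1:], funnel[:-1]):
    rate = count / prev_count * 100
    print(f"{step}: {count} users ({rate:.0f}% of previous step)")

overall = funnel[-1][1] / funnel[0][1] * 100
print(f"Overall completion rate: {overall:.0f}%")
```

In this invented data, the drop from 620 to 240 users at the payment step would be the obvious place to focus redesign effort.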

Task analysis is a valuable tool for developers and project managers

Task analysis is a process that can improve the quality of training, software, product prototypes, website design, and many other areas. By helping you understand the user experience, it enables you to make improvements and solve problems. It's a tool that you can continually refine as you observe results.

By consistently applying the most appropriate kind of task analysis (e.g., cognitive or hierarchical), you can make steady improvements to your products and processes. Task analysis is valuable for the entire product team, including product managers, UX designers, and developers.



Doustmohammadi, Ali. "Modeling and analysis of production systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15776.

Ipperciel, David. "The performance of some new technical signals for investment timing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0028/NQ50190.pdf.

Zhang, Dongxiao. "Conditional stochastic analysis of solute transport in heterogeneous geologic media." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186553.

Chowdhury, Mohammed. "A Bayesian analysis of a conception model." Virtual Press, 2008. http://liblink.bsu.edu/uhtbin/catkey/1398705.

Tabb, Jeremiah R. "Using wavelets and principle components analysis to model data from simulated sheet forming processes." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/10146.

Myers, Cliff. "A fractal analysis of diffusion limited aggregation." PDXScholar, 1988. https://pdxscholar.library.pdx.edu/open_access_etds/4047.

Runa, Eris [Verfasser]. "Mathematical Analysis of Lattice gradient models & Nonlinear Elasticity / Eris Runa." Bonn : Universitäts- und Landesbibliothek Bonn, 2015. http://d-nb.info/1079273298/34.

Task Analysis Examples for Use in the Classroom to Help Students Reach Goals

  • Carol Lee McCulloch

Task Analysis Examples

Task analysis, in simple terms, is a process that breaks down an activity into smaller parts. By using task analysis in the classroom, teachers find that goals are more easily reached and that students are more likely to recall material at a later date. Sequences or steps are followed and practiced, making complex goals more attainable and hazy directions clearer!

Make It Simple

Classrooms from preschool to high school can use the task analysis process with routine rules and learning skills. For example, in kindergarten and lower elementary settings, the daily routine laid out for students to follow provides natural opportunities for sub-tasking. If a teacher posts rules of conduct or expectations in a given subject area, a checklist can be provided to monitor behavioral and academic progress. If rules or procedures are too general for young children to grasp completely, a list of "how-to's" can be charted for clarity. Here is a simple task analysis example: if the general rule or procedure is "Be Respectful to Your Fellow Classmates," it may be more helpful to list, step by step, the ways this can be accomplished:

  • Ask different classmates to play with you on the playground.
  • Speak kindly to each classmate.
  • Do not make fun of anyone.
  • Be a helper, not a troublemaker.

The young student can then check off each step he or she has accomplished; as a result, good classroom habits develop and the general concept is fully understood.
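For readers who want to see this breakdown in a concrete, checkable form, here is a minimal sketch in Python of the rule-to-checklist idea described above. The article prescribes no software at all, so every name here (Step, TaskChecklist, check_off) is a hypothetical illustration, not an established tool:

```python
# A minimal sketch of a task-analysis checklist: one general rule
# broken into small, observable steps a student can check off.

from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    done: bool = False

@dataclass
class TaskChecklist:
    rule: str                       # the general rule or procedure
    steps: list[Step] = field(default_factory=list)

    def check_off(self, index: int) -> None:
        """Mark one sub-task as accomplished."""
        self.steps[index].done = True

    def progress(self) -> str:
        completed = sum(step.done for step in self.steps)
        return f"{completed}/{len(self.steps)} steps accomplished"

# The example from the text: a rule that is too general on its own,
# made concrete as step-by-step sub-tasks.
respect = TaskChecklist(
    rule="Be Respectful to Your Fellow Classmates",
    steps=[
        Step("Ask different classmates to play with you on the playground"),
        Step("Speak kindly to each classmate"),
        Step("Do not make fun of anyone"),
        Step("Be a helper, not a troublemaker"),
    ],
)

respect.check_off(0)
print(respect.progress())  # -> "1/4 steps accomplished"
```

The design point is simply that the general rule sits at the top while only the small, observable steps are tracked, mirroring the posted checklist a teacher would use.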

Strategies and Skills

For high school and college instructors, task analysis may be best applied by charting the strategies and skills required to accomplish a task. In other words, the instructor needs to know whether the student's prerequisite skills are in place before designing the course of study. In an English class, for example, a task analysis of how to write a simple research paper can prove very useful. The procedures-and-strategies approach is highly effective for teaching a how-to lesson: STRATEGIES are listed on one side of the chart, with the SKILLS REQUIRED directly across from each. Each section is subdivided to explain what is expected and what a student should already know in order to accomplish the goal. Another analysis approach lists sequential (boxed) steps that must be followed to complete a specific task. Long division in upper elementary, as well as organizing thoughts and processes in science and social studies classes, has proven much easier to digest using this method of task analysis.
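Purely as a hedged illustration (the article describes a paper chart, and the example strategies and skill names below are invented for the sketch), the same STRATEGIES / SKILLS REQUIRED pairing could be written as a small Python mapping, which makes the prerequisite check explicit:

```python
# A sketch of a strategies-and-skills chart for a how-to lesson
# (writing a simple research paper). Each strategy is paired with
# the prerequisite skills it assumes, so an instructor can confirm
# those skills are in place before designing the course of study.

research_paper_chart = {
    "Choose and narrow a topic": {"brainstorming", "forming questions"},
    "Gather and evaluate sources": {"library search", "note-taking"},
    "Outline the paper": {"organizing ideas", "sequencing"},
    "Draft, cite, and revise": {"paragraph writing", "citation format"},
}

def missing_skills(chart, student_skills):
    """Return prerequisite skills the student has not yet demonstrated."""
    needed = set().union(*chart.values())
    return sorted(needed - set(student_skills))

# Example: a student who has shown only two of the prerequisites so far.
print(missing_skills(research_paper_chart, {"brainstorming", "note-taking"}))
```

A sequential (boxed) steps version of the same idea would simply be an ordered list of entries checked off in order rather than an unordered chart.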

Positive Benefits

According to the article "Linking Task Analysis with Student Learning," from the Education Resources Information Center (ERIC), there are many perspectives on and approaches to task analysis. But the one point all theorists agree on is that "task analysis, at a minimum, assists the instructor or designer to understand the content to be taught. This alone is sufficient reason for recommending it." Task analysis activities have proven useful in helping teachers, students, employers, and employees stay on track throughout a specific learning process. Goals are more easily understood and accomplished when the expected outcome is presented in pieces. Let us know in the comments if you have any task analysis examples you wish to share!

Sources: "What's the Purpose of Task Analysis?"; ERIC (Education Resources Information Center), "Linking Task Analysis with Student Learning." Image by Aline Dassel from Pixabay.

This post is part of the series: Special Education Activities

With many innovative approaches to teaching children with disabilities, educators, coaches, and volunteers alike can find exciting, rewarding ways to share expertise with the special needs population!

  • Task Analysis Activities: Teaching Students to Complete Tasks
  • Incorporating Music Into Teaching Students With Special Needs
  • Sports Activities for the Disabled
  • How the School Based Support Team Works
