Cost-Minimization Analysis

  • Reference work entry
  • First Online: 20 October 2020


  • Alejandra Duenas




Author information

Authors and Affiliations

School of Management, IESEG, Paris, France

Alejandra Duenas


Corresponding author

Correspondence to Alejandra Duenas.

Editor information

Editors and Affiliations

Behavioral Medicine Research Center, Department of Psychology, University of Miami, Miami, FL, USA

Marc D. Gellman


Copyright information

© 2020 Springer Nature Switzerland AG

About this entry

Cite this entry

Duenas, A. (2020). Cost-Minimization Analysis. In: Gellman, M.D. (eds) Encyclopedia of Behavioral Medicine. Springer, Cham. https://doi.org/10.1007/978-3-030-39903-0_1376


DOI: https://doi.org/10.1007/978-3-030-39903-0_1376

Published: 20 October 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-39901-6

Online ISBN: 978-3-030-39903-0


Cost-minimization analysis of escitalopram, fluoxetine, and amitriptyline in the treatment of depression.

Author information and affiliations

Authors: Harshit Hemant Salian, M. V. Raghav, Vikram Singh Rawat, Divakar A

Affiliations: Department of Psychiatry, AIIMS, Vijaypur, Jammu, India; Department of Pharmacology, AIIMS, Jodhpur, Rajasthan, India; Department of Psychiatry, AIIMS, Rishikesh, Uttarakhand, India; Department of Pharmacology, Seth GS Medical College and KEM Hospital, Mumbai, Maharashtra, India

Indian Journal of Pharmacology, 01 Sep 2023, 55(5): 293-298. https://doi.org/10.4103/ijp.ijp_854_22. PMID: 37929407. PMCID: PMC10751525

Abstract

INTRODUCTION:

Escitalopram, fluoxetine, and amitriptyline are the drugs commonly used in the treatment of depression. The pharmacoeconomic evaluation of these drugs becomes relevant as they are prescribed for a long period of time, and depression causes a significant economic burden. The cost-minimization study would contribute to bringing down the annual treatment costs, leading to better medication adherence and ultimately better patient outcomes.

MATERIALS AND METHODS:

All drug prices are mentioned in Indian National Rupee (INR). All expenses are based on 2022 pricing. No cost discounting was used because all expenditures were calculated over a year. We considered hypothetical scenarios where the patient was prescribed the lowest possible dose for depression, an equivalent antidepressant dose, a defined daily dose, and the maximum acceptable therapeutic dose for depression.

RESULTS:

Annual average treatment costs of amitriptyline, escitalopram, and fluoxetine in patients with depression at baseline with equivalent dosing as mono-drug therapy were 2765.53, 2914.78, and 1422.72 rupees (INR), respectively. Savings were high when the patient was shifted to fluoxetine from either escitalopram or amitriptyline. The savings from switching to fluoxetine were 50.66% and 56.42% from escitalopram and amitriptyline, respectively.

CONCLUSION:

The choice of an antidepressant depends on multiple aspects, among which the cost of treatment plays a crucial role. Among the drugs compared, fluoxetine seems to offer greater value for money. The study emphasizes that selective serotonin reuptake inhibitors are the most commonly prescribed antidepressants not only because of their favorable pharmacological profile but also because of their affordability.

Depression is a common chronic medical condition that impacts one's thoughts, mood, and physical well-being. Low mood, low energy, pessimism, sleeplessness, and an inability to appreciate life are some of its symptoms.[ 1 ] The three main types of treatment for managing depression are antidepressants, electroconvulsive therapy, and psychosocial therapies. In general, selective serotonin reuptake inhibitors (SSRIs) are regarded as first-line antidepressants due to their favorable efficacy and safety profiles. Tricyclic antidepressants (TCAs) such as amitriptyline and atypical antidepressants such as mirtazapine, bupropion, and venlafaxine are alternate therapeutic options.[ 2 ]

According to a cohort study in Canada by Tanner et al ., depression can make the patient and the family suffer mentally, physically, and economically. Among individuals who were depressed, comorbidities were 43% more common. Patients with depression had greater rates of deliberate self-harm, overall death from all causes, and suicide mortality than the nondepressive population. The hospitalization rate, doctor visits, doctor-assisted psychotherapy, and prescription medication use were all greater in the depressive cohort than in the nondepressive group. Patients with depression had average health-care expenses that were 3.5 times greater than those without depression.[ 3 ] By ensuring optimal costs for prompt and continued treatment, the economic burden caused by depression can be overcome.

Pharmacoeconomics is defined as the application of economics to optimize benefits for patients, health-care payers, and society through data-driven decision-making by balancing the costs and benefits of interventions toward the utilization of scarce resources.[ 4 ] Cost-minimization analysis, cost–benefit analysis, cost-effectiveness analysis, and cost–utility analysis are the four major types of pharmacoeconomic approaches.[ 5 ] A cost-minimization analysis is performed to choose the least expensive alternative therapeutic option with almost equal health-care outcomes.[ 6 ]

Escitalopram, fluoxetine, and amitriptyline are drugs commonly used to treat depression. All three are listed as medicines used in depressive disorders under section 23.2.1 of the National List of Essential Medicines (NLEM) 2022, Government of India. Fluoxetine and escitalopram belong to the SSRI class, the approved first-line drugs for the treatment of depression. Amitriptyline, a TCA, is an equally efficacious alternative but is not used as a first-line medication.[ 7 , 8 , 9 ]

The pharmacoeconomic evaluation of these drugs becomes relevant as they are commonly prescribed for a long time, and depression causes a significant economic burden. Although cost-variation studies exist, no cost-minimization study of these drugs is currently available. A cost-minimization study would help reduce annual treatment costs, leading to better medication adherence and, ultimately, better patient outcomes.

The primary objective was to compare the costs of the three antidepressants (escitalopram, fluoxetine, and amitriptyline), assuming they have similar efficacy. The secondary objective was to measure the percentage of cost saved after switching from one drug to another.

  • Materials and Methods

The current study is a pharmacoeconomic evaluation of antidepressant therapy in International Classification of Diseases-10-diagnosed cases of depression; three antidepressants were included: escitalopram, fluoxetine, and amitriptyline. All data used in the calculation of costs were taken from publicly available databases.

The study follows a model-based pharmacoeconomic evaluation design using the cost-minimization principle across four hypothetical scenarios. Cost-minimization analysis compares the costs of alternative strategies under the assumption that the counterparts are equally effective, allowing just the cost differences to be considered; the intervention with the lower cost is then preferred.[ 10 ]

We limited our study to the dosage forms mentioned in the NLEM 2022, as these were the most common formulations to be used.[ 11 ] All drug prices are mentioned in Indian Rupee (INR). Only the direct costs of the drugs are considered. The maximum price was obtained from the National Pharmaceutical Pricing Authority (NPPA) pricing list of April 18, 2022.[ 12 ] The NPPA mentions the ceiling price at which drugs can be sold, which we assumed was the maximum price. The minimum price was obtained from Jan Aushadhi, a government of India-sponsored generic drug program.[ 13 ] The lowest and highest costs were used to calculate the average yearly cost of therapy. All the prices used for calculation are summarized in Table 1 . We only included preparations with escitalopram, fluoxetine, and amitriptyline as a single active ingredient, and combination preparations were excluded.

Table 1: Ceiling, minimum, and average prices of escitalopram, fluoxetine, and amitriptyline in INR

*Formulation was not available on Jan Aushadhi; value obtained from CIMS 2022. ^Formulation was not available on Jan Aushadhi; value was calculated as two and three tablets of 20 mg, respectively. CIMS = Current Index of Medical Specialities
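To make the costing step concrete, here is a minimal sketch (in Python) of how an annual average treatment cost can be derived from a ceiling price and a minimum price per tablet. The prices and tablet counts below are placeholders for illustration only; they are not the study's actual Table 1 values.

```python
# Illustrative sketch (not the study's actual Table 1 figures): annual average
# treatment cost from a ceiling (NPPA) and a minimum (Jan Aushadhi) price per tablet.
def annual_average_cost(ceiling_price, minimum_price, tablets_per_day, days=365):
    """Average the highest and lowest per-tablet prices, then annualize."""
    average_price_per_tablet = (ceiling_price + minimum_price) / 2
    return average_price_per_tablet * tablets_per_day * days

# Hypothetical prices in INR per tablet, for illustration only.
print(round(annual_average_cost(ceiling_price=8.0, minimum_price=2.0, tablets_per_day=1), 2))
```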

There were three significant stages in the model:

  • Computing the cost: In this study, the payer's point of view was adopted on the presumption that the patient would directly pay for the medication. We assume that patients taking any of the three drugs have a similar frequency of doctor visits, because all three medications are common oral antidepressants that are well tolerated and have similar treatment efficacy and gastrointestinal adverse reactions, which can be alleviated by starting at a low dose and gradually increasing it. The other expenses associated with treating depression, such as doctor visits, diagnostic tests, inspections, and hospitalization, are therefore assumed to be equal across groups and excluded from our analysis. All expenses are based on 2022 pricing. No cost discounting was used because all expenditures were calculated over a year.

  • Base-case identification: The treatment of depression does not include an exhaustive dosage regimen; hence, a scenario-based approach was used.

  • Sensitivity analysis: Because physicians' compliance with recommendations on initiating and monitoring drug dosage in treating depression was unknown, a number of distinct clinical scenarios were put together, after discussion with psychiatrists treating patients with depression, to represent potential clinical situations and to analyze the differences in annual average treatment costs.[ 14 ]

Titration of the doses according to the patient's response is assumed to be the same for all three drugs, as is the onset of response. There are no specific dosing guidelines for the treatment of depression stating how dose escalation is to be done and when to switch drugs; rather, the course of treatment is determined by the patient's clinical state and medication response as assessed by the treating psychiatrist. We looked at model cases which included the following:[ 15 , 16 , 17 , 18 , 19 , 20 ]

  • The patient was given the lowest antidepressant dose that could be prescribed for depression
  • An equivalent antidepressant dose, as reported in a 2015 Japanese study
  • A defined daily dose (DDD) of these medications, as described in the World Health Organization (WHO) anatomic and therapeutic classification of drugs
  • The maximum acceptable therapeutic dose for depression

Table 2: Hypothetical case scenarios used for cost-minimization analysis between escitalopram, fluoxetine, and amitriptyline

*Actual equivalent doses were 18 mg and 122.3 mg but were rounded off to the nearest whole number, as fractional dose formulations are not available. WHO = World Health Organization; DDD = defined daily dose

The doses used for calculation are mentioned in Table 2 .

The calculations were based on the following assumptions:

SSRIs are usually prescribed as a single dose in the morning to avoid sleep disturbances. TCAs can be given as a single dose at night or twice daily, with a higher dose at night.

We calculated the prices of drugs for 1 year of antidepressant therapy.

  • Results

Annual average treatment costs of escitalopram, fluoxetine, and amitriptyline in patients with depression with equivalent dosing as mono-drug therapy were 2765.53, 2914.79, and 1422.72 rupees (INR), respectively. For the DDD of escitalopram, fluoxetine, and amitriptyline in patients with depression, the prices were 1935.77, 1823.18, and 899.73 rupees (INR), respectively. Further drug prices are summarized in Table 3.

Table 3: Prices in INR of annual treatment with escitalopram, fluoxetine, and amitriptyline as mono-drug therapy in four hypothetical scenarios

DDD=Defined daily dose

The annual treatment cost, percentage cost difference, and percentage savings in the annual cost of escitalopram, fluoxetine, and amitriptyline in patients with depression are summarized in Table 4 and Figures 1-3. Compared to escitalopram, amitriptyline had a lower annual average treatment cost in scenarios 1 and 3, whereas amitriptyline and escitalopram were almost similar in price in scenarios 2 and 4. When fluoxetine was compared with either of the two drugs, it was the cheaper alternative in all four scenarios. A patient who switched to fluoxetine from escitalopram would save 50.66% on treatment annually; similarly, a patient who switched to fluoxetine from amitriptyline could cut annual costs by 37.99%. Hence, savings were high when the patient was shifted to fluoxetine from either escitalopram or amitriptyline.
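As an illustration of the percentage-saving calculation, the sketch below uses the equivalent-dose annual costs quoted above. The paper's headline figures of 50.66% and 37.99% come from its own tables and scenario set, so this simplified example is not expected to reproduce them exactly.

```python
# Minimal sketch of the percentage-saving calculation used when switching drugs.
# Annual costs (INR) are the equivalent-dose figures quoted in the text; the paper's
# headline percentages are computed from its own tables, so results here may differ.
annual_cost = {"escitalopram": 2914.79, "fluoxetine": 1422.72, "amitriptyline": 2765.53}

def percent_saving(from_drug, to_drug, costs=annual_cost):
    """Percentage of the annual cost saved by switching from one drug to another."""
    return 100 * (costs[from_drug] - costs[to_drug]) / costs[from_drug]

for source in ("escitalopram", "amitriptyline"):
    print(f"{source} -> fluoxetine: {percent_saving(source, 'fluoxetine'):.2f}% saved")
```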

Table 4: Comparison of annual treatment cost in INR (₹) of escitalopram, fluoxetine, and amitriptyline in patients with depression

*One-way ANOVA for independent variables was performed (P < 0.05 is considered statistically significant). Post hoc Tukey's HSD test revealed the following P values between the groups: amitriptyline vs. fluoxetine (0.39), amitriptyline vs. escitalopram (0.98), fluoxetine vs. escitalopram (0.32). HSD = honestly significant difference

Figure 1: The annual treatment cost, percentage cost difference, and percentage savings in the annual cost of escitalopram and amitriptyline in patients with depression

Figure 2: The annual treatment cost, percentage cost difference, and percentage savings in the annual cost of escitalopram and fluoxetine in patients with depression

Figure 3: The annual treatment cost, percentage cost difference, and percentage savings in the annual cost of amitriptyline and fluoxetine in patients with depression

  • Discussion

Economic evaluation is the comparative analysis of the costs and effects of various projects using economic theories and methods. In terms of health policy, there is an increasing need to identify medical treatment options that are effective but less expensive as more governments try to contain the growth in health expenditure.

To obtain good value for money, a pharmacoeconomic method is frequently employed to assess the health benefits of medication therapies. Table 5 summarizes the various pharmacoeconomic models available. In a nation like India, where the inclusion of a drug in the NLEM, the inclusion and exclusion of a drug from NPPA price control, and the pricing of new drugs, patented medicines, and other drugs are all consequential decisions, the economic appraisal of medical products is crucial.[ 14 ] Drugs in the NLEM are readily available across the country and at most government health facilities.

Table 5: Methods of evaluating subgroups using various pharmacoeconomic designs

QALYs=Quality adjusted life years

The selection of an antidepressant is primarily governed by the indication. Not every illness responds to antidepressants in a similar manner, and it is hard to prove that one antidepressant consistently outperforms another in treating major depressive disorder (MDD). As a result, selecting an antidepressant for the treatment of depression mostly depends on pragmatic factors such as cost, availability, side effects, potential drug interactions, the patient's treatment response history (or lack thereof), and patient preference. The choice of an antidepressant may also be influenced by additional elements such as the patient's age, gender, and health.[ 21 ]

We employed the cost-minimization analysis approach on the assumption that the major clinical outcomes and side effects of the three drugs are practically equal.[ 22 ] Since it can be challenging to adequately evaluate research variables in economic evaluation and since each pharmacological therapy may have a varied cost of care depending on the population or medical institution, it is crucial to consider how fundamental assumptions may affect study outcomes. As a result, we created four scenarios to reflect the real-life cost profile.

The indirect cost associated with the antidepressant therapy was not considered because this study was done from the payer's point of view. The payments for doctor visits, medications, diagnostics, checkups, hospitalizations, transportation, and other expenses are considered direct medical costs. Because we expected that other expenses would be the same in the three treatment groups, we only calculated the medication cost in this study and no other expenditures.

This study also only considers a single monotherapy for a year; however, in the real-world scenario, due to the complexity of mental health disorders (comorbid anxiety, sleep disturbances, obsessive–compulsive disorder, or switch in bipolar disorder), patients frequently switch medications, which can have an impact on the cost. Minimum and ceiling prices were considered based on prices available on government portals. However, the prices of these drugs may not reflect the real-world scenario. Further research is required to comprehend the annual expenditures on antidepressant therapy and support resource allocation decisions.[ 14 , 23 ]

Our study has a notable strength in that it is the first cost-minimization evaluation comparing various antidepressant therapies undertaken based on findings from previous studies and data available in the public domain. Our study included antidepressants commonly used for the treatment of depression, which makes it relevant in a real-world scenario.

The choice of an antidepressant depends on aspects such as efficacy, patient illness profile, adverse drug reactions, availability, drug interactions, and cost of treatment; the cost of treatment plays a crucial role in a developing economy like India. The study emphasizes that SSRIs are the most commonly prescribed antidepressants not only because of their favorable pharmacological profile but also because of their affordability. In the future, more research can be taken up, including sensitivity analyses, for better interpretation of the results.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.



Cost minimization analysis of laparoscopic surgery for colorectal cancer within the enhanced recovery after surgery (ERAS) protocol: a single-centre, case-matched study

Affiliations

  • 1 2nd Department of General Surgery, Jagiellonian University Medical College, Krakow, Poland.
  • 2 Department of Medical Education, Jagiellonian University Medical College, Krakow, Poland.
  • 3 Stanley Dudrick Memorial Hospital, Skawina, Poland.
  • PMID: 28133495
  • PMCID: PMC4840186
  • DOI: 10.5114/wiitm.2016.58617

Introduction: The goal of modern medical treatment is to provide high quality medical care in a cost-effective environment.

Aim: To assess the cost-effectiveness of laparoscopic colorectal surgery combined with the enhanced recovery after surgery protocol (ERP) in Poland.

Material and methods: We designed a single-centre, case-matched study. Economic and clinical data were collected in 3 groups of patients (33 patients in each group): group 1 - patients undergoing laparoscopy with ERP; group 2 - laparoscopy without ERP; group 3 - open resection without ERP. An independent administrative officer, not involved in the treatment process, matched patients for age, sex and type of resection. Primary outcome was cost analysis. It was carried out incorporating institutional costs: hospital bed stay, anaesthesia, surgical procedure and equipment, drugs and complications. Secondary outcomes were length of stay (LOS), readmission and complication rate.

Results: The cost of the laparoscopic procedure alone was significantly higher than that of open resection. However, implementation of the ERAS protocol reduced additional costs. Total cost per patient in group 1 was significantly lower than in groups 2 and 3 (EUR 1826 vs. EUR 2355.3 vs. EUR 2459.5, p < 0.0001). Median LOS was 3, 6 and 9 days in groups 1, 2 and 3 respectively (p < 0.001). Postoperative complications were noted in 5 (15.2%), 6 (18.2%) and 13 (39.4%) patients in groups 1, 2 and 3 respectively (p = 0.0435).

Conclusions: In a low medical care expenditure country, minimally invasive surgery combined with ERP can be a safe and a cost-effective alternative to open surgery with traditional perioperative care.

Keywords: colorectal cancer; enhanced recovery; fast-track; laparoscopy; perioperative management; postoperative complications.


Cost Minimization Analysis, Formula & Graphs

By Steve Bain ©

Cost minimization analysis, in microeconomics, is focused on finding the most efficient combination of inputs that a firm can use to produce a given output. By ‘inputs’ we mean all the resources that are used to create goods and services. Profit maximization and cost minimization go hand-in-hand, and we also assume that firms desire both.

Since we are looking at how to minimize the cost of those inputs, it stands to reason that we are assuming that firms can adjust the amounts used. The point here is that we are focusing on the long run. Short-run microeconomic models hold capital fixed, with labor as the only variable input.

My article about productive efficiency refers to cost minimization in the context of the entire economy, where all goods and services are produced at the lowest cost. In this article I will focus on efficiency at the level of the firm, rather than the entire economy, and on the production of a given amount of output for one particular good.

Cost Minimization Graphs

The cost minimization graph below uses two concepts that I have discussed in separate articles. To understand the analysis here you will need to understand these concepts first; don’t worry, they are quite simple. For details, see:

  • The Isocost Line
  • The Isoquant Curve

In the graphs below, we simplify the cost minimization analysis by restricting the inputs under inspection to capital and labor. More complicated models do exist that use a more comprehensive array of inputs, but it is not possible to illustrate these with 2-dimensional graphs.

Restricting the analysis of cost minimization to labor and capital is standard practice in undergraduate microeconomics, because it helps us to grasp the basic concepts.

Cost Minimization Graph

In the cost minimization graph above, capital and labor inputs are illustrated on the vertical and horizontal axes respectively. These inputs should be measured on a units-per-year basis. We also assume that the firm under inspection is operating in a competitive market and that, consequently, it has no market power to influence the prices of capital or labor.

We take the price of labor as the wage rate (w) and the price of capital as the rental rate (r). We use the rental rate for simplicity, nothing fundamental changes if the firm purchases capital rather than rents it. See my article about the User Cost of Capital for more details.

Starting with the green isoquant curve (q) in the graph above, this curve plots all possible combinations of inputs that can be used to create a desired amount of output. The three blue isocost lines represent three alternative amounts of expenditure. Only isocost line C represents a cost minimization point, at point X in the graph.

Isocost line C’ does not intersect the isoquant at any point, which means that this low level of expenditure (costs) on inputs is insufficient to produce the desired output. Isocost line C’’ intersects the isoquant curve at points Y and Z, meaning that the desired output can be produced with the input combinations shown (L2, K3 and L3, K2 respectively), but the costs of doing so are higher than at point X.

Only where the isoquant curve just touches the lowest possible isocost line do we have a cost minimization point.

Input Substitution Graph

As explained in my article about the isocost line, the slope of the line depends on the relative prices of labor and capital i.e., its slope is equal to –(w/r). In the graph above, when the wage rate increases, the slope of the isocost line gets steeper, as illustrated.

Starting with isocost line C, the cost minimizing point is given at point X. However, if the wage rate were to rise relative to the rental cost of capital, a steeper isocost line (the dashed line C’) will result. This line still allows as much capital input to be used as before, but the higher cost of labor means that less labor can be purchased at the original expenditure/cost level.

In order to maintain production of the desired quantity of output, the firm must increase its costs until a new, steeper, isocost line just touches the isoquant curve. This is shown above by isocost C’’, which just touches the isoquant at point Y, the new cost minimization point.

Cost Minimization Formula

Intuitively we can understand that when costs are minimized, the relative amount of output gained from the last unit of labor must be equal to the relative amount of output gained from the last unit of capital, taking the costs of those inputs into account.

The cost minimization formula occurs where:

MPL/MPK = w/r

In words this can be stated as the cost minimization rule occurring where the marginal product of labor (MPL), divided by the marginal product of capital (MPK), is equal to the wage rate (w) divided by the rental cost of capital (r).

Cost Minimization Example

For example, imagine that one extra unit of labor costs $10, and that it can create 10 extra units of output. If the last unit of capital costs $40, then it must create 40 units of extra output. Any less than that would mean that the firm could reduce costs by using more labor units instead of capital units, and vice versa. For this reason, the cost minimization rule states that the formula above must be true.
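A minimal numeric check of this rule, using the figures from the example above (labor: $10 and 10 extra units of output; capital: $40 and 40 extra units), might look like the following sketch:

```python
# Minimal check of the cost-minimization rule MPL/MPK = w/r, i.e. equal
# marginal product per dollar across inputs.
def at_cost_minimum(mp_labor, wage, mp_capital, rental_rate, tol=1e-9):
    """True when the marginal product per dollar is equal across inputs."""
    return abs(mp_labor / wage - mp_capital / rental_rate) < tol

print(at_cost_minimum(mp_labor=10, wage=10, mp_capital=40, rental_rate=40))  # True: rule holds
print(at_cost_minimum(mp_labor=10, wage=10, mp_capital=30, rental_rate=40))  # False: shift toward labor
```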

Cost minimization analysis in economics is a strategic process employed by businesses and organizations to produce a desired level of output while keeping costs as low as possible. This involves a close examination of input costs, production technology, and the potential for input substitution.

Firms assess the costs associated with various inputs, primarily labor and capital, and study their relationship in order to identify the most cost-effective production methods.

Throughout this process, businesses aim to optimize resource utilization, enhance competitiveness, and maximize profits. The analysis extends to the production function, where the marginal and average productivities of inputs are studied to identify optimal combinations. It's about achieving efficiency and effectiveness in production by finding the right balance in input usage for a given level of output.

Related Pages:

  • The Costs of Production
  • The Expansion Path
  • Marginal Rate of Technical Substitution MRTS
  • Production Possibilities Curve



7 Minimizing Costs

The Policy Question: Will an Increase in the Minimum Wage Decrease Employment?

Recently, a much-discussed policy topic in the United States is the idea of increasing minimum wages as a response to poverty and growing income inequality. There are many questions of debate about minimum wages as an effective policy tool to tackle these problems, but one common criticism of minimum wages is that they increase unemployment. In this chapter, we will study how firms decide how much of each input to employ in their production of a good or service. This knowledge will allow us to address the question of whether firms are likely to reduce the amount of labor they employ if the minimum wage is increased.

In order to provide goods and services to the marketplace, firms use inputs. These inputs are costly, so firms must be smart about how they use labor, capital, and other inputs to achieve a certain level of output. The goal of any profit maximizing firm is to produce any level of output at the minimum cost. Doing anything else cannot be a profit maximizing strategy. This chapter studies the cost minimization problem for firms: how to most efficiently use inputs to produce output.

Exploring the Policy Question

Read the article "Labor Day Turns Attention Back to Minimum-Wage Debate" and answer the following questions:

  • What potential benefits to raising the minimum wage are identified in the article?
  • What potential disadvantages to raising the minimum wage are identified?

Learning Objectives

7.1 The Economic Concept of Cost

Learning Objective 7.1 : Explain fixed and variable costs, opportunity cost, sunk cost, and depreciation.

7.2 Short-Run Cost Minimization

Learning Objective 7.2 : Describe the solution to the cost minimization problem in the short run.

7.3 Long-Run Cost Minimization

Learning Objective 7.3 : Describe the solution to the cost minimization problem in the long run.

7.4 When Input Costs and Output Change

Learning Objective 7.4 : Analyze the effect of changes in prices or output on total cost.

7.5 Policy Example: Will an Increase in the Minimum Wage Decrease Employment?

Learning Objective 7.5 : Apply the concept of cost minimization to a minimum-wage policy.

From the isoquants described in chapter 6 , we know that firms have many choices of input combinations to produce the same amount of output. Each point on an isoquant represents a different combination of inputs that produces the same amount of output.

Of course, inputs are not free: the firm must pay workers for their labor, buy raw materials, and buy or rent machines, all of which are costly. So the key question for firms is, Which point on an isoquant is the best choice? The answer is the point that represents the lowest cost. The topics of this chapter will help us locate that point.

This chapter studies production costs—that is, how costs are related to output. In order to draw a cost curve that shows a single cost for each output amount, we have to understand how firms make the decision about which set of possible inputs to use to be as efficient as possible. To be as efficient as possible means that the firm wants to produce output at the lowest possible cost.

For now, we assume that firms want to produce as efficiently as possible—in other words, minimize costs. Later, in chapter 9 , “Profit Maximization and Supply,” we will see that producing at the lowest cost is what profit maximizing firms must do (otherwise, they cannot possibly be maximizing profit!).

A good way to think about the cost side of the firm is to consider a manager who is in charge of running a factory for a large company. She is responsible for producing a specific amount of output at the lowest possible cost. She must choose the mix of inputs the factory will use to achieve the production target. Her task, in other words, is to run her factory as efficiently as possible. She does not want to use any extra inputs, and she does not want to pick a mix of inputs that costs more than another mix of inputs that produces the same amount of output.

To make efficient or cost-minimizing decisions, it is important to understand some basic cost concepts, starting with fixed and variable costs as well as opportunity costs, sunk costs, and depreciation.

Fixed and Variable Costs

A fixed cost is a cost that does not change as output changes. For example, a firm might need to pay for the lights to be on in order for the workers to see what they are doing and for production to happen. But the lights are simply on or off, and the cost of powering them does not change when output changes.

A variable cost is a cost that changes as output changes. For example, a firm that wishes to produce more output might need to employ more labor hours by either hiring more workers or having existing workers work more hours. The cost of this labor is therefore a variable cost, as it changes as the output level changes.

Opportunity Costs

As we learned in chapter 3 , the opportunity cost of something is the value of the next best alternative given up in order to get it.

Suppose a firm has access to an input that it can use in production without paying a price for it. A simple example is a family farm. The farm uses land, water, seeds, fertilizer, labor, and farm machinery to produce a crop—let’s say corn—which it then sells in the marketplace. If the farm owns the land it uses to produce the corn, do we then say that the land component is not part of the firm’s costs? The answer, of course, is no. When the farm uses the land to produce corn, it forgoes any other use of the land; that is, it gives up the opportunity to use the resource for another purpose. In many cases, the opportunity cost is the market value of the input.

For example, suppose an alternative use for the land is to rent it to another farmer. The forgone rent from the decision to use the land to produce its own corn is the farm’s opportunity cost and should be factored into the production decision. If the opportunity cost, which in this case is the rental fee, is higher than the profit the farm will earn from producing corn, the most efficient economic decision is to rent out the land instead.

Now consider the more complex example of a farm manager who is told to produce a certain amount of corn. Suppose that the manager figures out that she can produce exactly that amount using a low-fertilizer variety of corn and all the available land. She also knows that another way to produce the same amount of corn is with a higher-yielding variety that requires a lot more fertilizer but uses only 75 percent of the land. The additional fertilizer for this higher-yielding corn will cost an extra $50,000. Which option should the farm manager choose?

Without considering the opportunity cost, she would use the low-fertilizer variety of corn and all of the land because it costs $50,000 less than the alternative method. But what if, under the alternative method, she could rent out for $60,000 the 25 percent of the land that would not be planted? In that case, the cost-minimizing decision is actually to use the higher-yielding corn variety and rent out the unused land.

Another classic example is that of a small business owner who runs, say, a coffee shop. The inputs into the coffee shop are the labor, the coffee, the electricity, the machines, and so on. But suppose the owner also works a lot in the shop. He does not pay himself a salary but simply pays himself from the shop’s excess revenues, or revenues in excess of the cost of the other inputs. The cost of his labor for the shop is not $0 but the amount he could earn working elsewhere instead. If, for example, he could work in the local bank for $4,000 a month, then the opportunity cost of his working at the coffee shop is $4,000, and if the excess revenues are less than $4,000, he should close the shop and work at the bank instead, assuming he likes both jobs equally well.

Some costs are recoverable, and some are not. An example of a recoverable cost is the money a farmer spends on a new tractor knowing that she can turn around and re-sell it for the same amount she paid.

A sunk cost is an expenditure that is not recoverable. An example of a sunk cost is the cost of the paint a business owner uses to paint the leased storefront of his coffee shop. Once the paint is on the wall, it has no resale value. Many inputs reflect both recoverable and sunk costs. A business that buys a car for an employee’s use for $30,000 and can resell it for $20,000 should consider $10,000 of its expenditure a sunk cost and $20,000 a recoverable cost.

Why do sunk costs matter in choosing inputs? Because after incurring sunk costs, a manager should not consider them in making subsequent decisions. Sunk costs have no bearing on such decisions. To see this, suppose you buy a $500 non-refundable and non-transferrable airline ticket to go to Florida during spring break. However, as spring break approaches, you are invited by friends to spend the break at a mountain cabin they have rented to which they will give you a ride in their car at no cost to you. You prefer to spend the break with your friends in the cabin, but you have already spent the $500 on the ticket, and you feel compelled to get your money’s worth by using it to go to Florida. Doing so would be the wrong decision. At the time of your decision, the $500 spent on the ticket is non-recoverable and therefore a sunk cost. Whether you go to Florida or not, you cannot get the $500 back, so you should do what makes you most happy, which is going to the cabin.

Depreciation

Depreciation is the loss of value of a durable good or asset over time. A durable good is a good that has a long usable life. Durable goods are things like vehicles, factory machines, or appliances that generally last many years. The difference between the beginning value of a durable good and its value sometime later is called depreciation. Most durable goods depreciate: machines wear out; newer, more advanced ones are produced, thus reducing the value of current ones; and so on.

Suppose you are a manager who runs a pencil-making factory that uses a large machine. How does the machine’s depreciation factor into the cost of using it?

The appropriate way to think about these costs is through the lens of opportunity cost. If your factory didn’t use the machine, you could rent it to another firm or sell it. Let’s consider the selling of the machine. Suppose it costs $100,000 to purchase the machine new, and it depreciates at a rate of $1,000 per month, meaning that each month, its resale value drops by $1,000. Note that the purchase cost of the machine is not sunk because it is recoverable—but each month, the recoverable amount diminishes with the rate of depreciation. So, for example, exactly one year after purchase, the machine is worth $88,000. At this point in time, the $12,000 depreciation has become a sunk cost. However, at the current point in time, you have a choice: you can sell the machine for $88,000 or use it for a month and sell it at the end of the month for $87,000. The opportunity cost of using the machine for this month is exactly $1,000 in depreciation.

Why does depreciation matter in choosing inputs? Well, suppose workers can do the same job as the machine, or in economics parlance, suppose you can substitute workers for capital. To produce the same amount of pencils as the machine, you need to add labor hours at a rate of $900 a month. A manager without good economics training might think that since the firm purchased the machine, the machine is free, and since the labor costs $900 a month, use the machine. But you—a manager well trained in economics—know that the monthly cost of using the machine is the $1,000 drop in resale value (the depreciation cost), and the monthly cost of the labor is $900. You will save the company $100 a month by using the labor and selling the machine.
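A tiny sketch of that comparison, using the depreciation and labor figures from the pencil-factory example above:

```python
# Rough sketch of the pencil-factory decision: the relevant monthly cost of the
# machine is its depreciation (the drop in resale value), not its purchase price.
machine_depreciation_per_month = 1_000   # resale value falls by $1,000 per month
labor_cost_per_month = 900               # workers who can replace the machine

if labor_cost_per_month < machine_depreciation_per_month:
    saving = machine_depreciation_per_month - labor_cost_per_month
    print(f"Use labor and sell the machine; save ${saving} per month.")
else:
    print("Keep using the machine.")
```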

In order to maximize profits, firms must minimize costs. Cost minimization simply implies that firms are maximizing their productivity or using the lowest cost amount of inputs to produce a specific output. In the short run, firms have fixed inputs, like capital, giving them less flexibility than in the long run. This lack of flexibility in the choice of inputs tends to result in higher costs.

In chapter 6 , we studied the short-run production function:

[latex]Q=f(L,\overline{K})[/latex],

where [latex]Q[/latex] is output, [latex]L[/latex] is the labor input, and [latex]\overline{K}[/latex] is capital. The bar over [latex]K[/latex] indicates that it is a fixed input. The short-run cost minimization problem is straightforward: since the only adjustable input is labor, the solution to the problem is to employ just enough labor to produce a given level of output.

Figure 7.1 illustrates the solution to the short-run cost minimization problem. Since capital is fixed, the decision about labor is to choose the amount that, combined with the available capital, enables the firm to produce the desired level of output given by the isoquant.

From figure 7.1 , it is clear that the only cost-minimizing level of the variable input, labor, is at [latex]L^*[/latex]. To see this, note that any level of labor below [latex]L^*[/latex] would yield a lower level of output, and any level of labor above [latex]L^*[/latex] would yield the desired level of output but would be more costly than [latex]L^*[/latex] because each additional unit of labor employed must be paid for.

Mathematically, this problem requires only that we solve the following production function for [latex]L[/latex]:

[latex]Q=f(\overline{K},L)[/latex]

Solving the production function requires that we invert it, which we can do only if the function is monotonic. This requirement is satisfied for our production functions because we assume that output always increases when inputs increase.

[latex]L^*=f^{-1}(\overline{K},Q)[/latex]

Let’s consider a specific example of a Cobb-Douglas production function:

[latex]Q=10\overline{K}^{\frac{1}{2}}L^{\frac{1}{2}}[/latex]

To find the cost-minimizing level of labor input, [latex]L^*[/latex], we need to solve this equation for [latex]L^*[/latex]:

[latex]L^{\frac{1}{2}}=\frac{Q}{10\overline{K}^{\frac{1}{2}}}[/latex]

This simplifies to

[latex]L^*=\frac{Q^2}{100\overline{K}}[/latex]

Note that this equation does not require a specific output target but rather gives us the cost-minimizing level of labor for every level of output. We call this an input-demand function : a function that describes the optimal factor input level for every possible level of output.
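As a quick sketch, the input-demand function just derived can be written as a one-line function (assuming the same Cobb-Douglas form with the coefficient of 10):

```python
# Sketch of the short-run input-demand function derived above for
# Q = 10 * Kbar**0.5 * L**0.5, namely L* = Q**2 / (100 * Kbar).
def short_run_labor_demand(output, capital_fixed):
    """Cost-minimizing labor input when capital is fixed in the short run."""
    return output ** 2 / (100 * capital_fixed)

# Example: to produce Q = 1,000 with 100 units of fixed capital, L* = 100.
print(short_run_labor_demand(output=1_000, capital_fixed=100))
```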

The long run, by definition, is a period of time when all inputs are variable. This gives the firm much more flexibility to adjust inputs to find the optimal mix based on their relative prices and relative marginal productivities. This means that the cost can be made as low as possible, and generally lower than in the short run.

Total Cost in the Long Run and the Isocost Line

For a long-run, two-input production function, [latex]Q=f(L,K)[/latex], the total cost of production is the sum of the cost of the labor input, [latex]L[/latex], and the capital input, [latex]K[/latex].

  • The cost of labor is called the wage rate, [latex]w[/latex].
  • The cost of capital is called the rental rate, [latex]r[/latex].
  • The cost of the labor input is the wage rate multiplied by the amount of labor employed, [latex]wL[/latex].
  • The cost of capital is the rental rate multiplied by the amount of capital, [latex]rK[/latex].

The total cost ([latex]C[/latex]), therefore, is

[latex]C(Q)=wL+rK[/latex]

If we hold total cost [latex]C(Q)[/latex] constant, we can use this equation to find isocost lines. An isocost line is a graph of every possible combination of inputs that yields the same cost of production. By picking a cost, and given wage rates, [latex]w[/latex], and rental rates, [latex]r[/latex], we can find all the combinations of [latex]L[/latex] and [latex]K[/latex] that solve the equation and graph the isocost line.

Consider the example of a pencil-making factory, where both capital in the form of pencil-making machines and labor to run those machines are utilized. Suppose the wage rate of labor for the pencil maker is $20 per hour and the rental rate of capital is $10 per hour. If the total cost of production is $200, the firm could be employing ten hours of labor and no capital, twenty hours of capital and no labor, five hours of labor and ten hours of capital, or any other combination of capital and labor for which the total cost is $200. Figure 7.2 illustrates this particular isocost line.

This figure represents the isocost line where total cost equals $200. But we can draw an isocost line that is associated with any total cost level. Notice that any combination of labor hours and capital that is less expensive than this particular isocost line will end up on a lower isocost line. For example, two hours of labor and five hours of capital will cost $90. Any combination of hours of labor and capital that are more expensive than this particular isocost line will end up on a higher isocost line. For example, twenty hours of labor and thirty hours of capital will cost $700.

Note that the slope of the isocost line is the ratio of the input prices, [latex]-w/r[/latex]. This tells us how much of one input (capital) we have to give up to get one more unit of the other input (labor) and maintain the same level of total cost. For example, if both labor and capital cost $10 an hour, the ratio would be −10/10 or −1. This is intuitive—if they cost the same amount, to get one more hour of labor, you need to give up one hour of capital. In our pencil-maker example, labor is $20 per hour, and capital is $10 per hour, so the ratio is −2: to get one more hour of labor input, you must give up two hours of capital in order to maintain the same total cost or remain on the same isocost line.
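A short sketch of the pencil-factory isocost line (w = $20, r = $10, C = $200) reproduces the same input combinations and the slope of −w/r = −2:

```python
# Sketch of the isocost line C = w*L + r*K from the pencil-factory example.
# Solving for K gives K = (C - w*L) / r.
w, r, C = 20, 10, 200

def capital_on_isocost(labor_hours, wage=w, rental=r, total_cost=C):
    """Capital hours that keep total cost on the isocost line for given labor hours."""
    return (total_cost - wage * labor_hours) / rental

for L in (0, 5, 10):
    print(f"L = {L:>2} hours -> K = {capital_on_isocost(L):.0f} hours, slope = {-w/r}")
```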

The solution to the long-run cost minimization problem is illustrated in figure 7.3 . The plant manager’s problem is to produce a given level of output at the lowest cost possible. A given level of output corresponds to a particular isoquant, so the cost minimization problem is to pick the point on the isoquant that is the lowest cost of production. This is the same as saying the point that places the firm on the lowest isocost line. We can see this by examining figure 7.3 and noting that the point on the isoquant that corresponds to the lowest isocost line is the one where the isocost is tangent to the isoquant.

From figure 7.3 , we can see that the optimal solution to the cost minimization problem is where the isocost and isoquant are tangent: the point at which they have the same slope. We just learned that the slope of the isocost is [latex]-w/r[/latex], and in chapter 6 , we learned that the slope of the isoquant is the marginal rate of technical substitution (MRTS), which is the ratio of the marginal product of labor and capital:

[latex]MRTS=-\frac{MP_L}{MP_K}[/latex]

So the solution to the long-run cost minimization problem is

[latex]MRTS=-\frac{w}{r}[/latex],

[latex]\frac{MP_L}{MP_K}=\frac{w}{r}[/latex] (7.1)

This can be rearranged to help with intuition:

[latex]\frac{MP_L}{w}=\frac{MP_K}{r}[/latex] (7.2)

Equation (7.2) says that at the cost-minimizing mix of inputs, the marginal products per dollar must be equal. This conclusion makes sense if you think about what would happen if equation (7.2) did not hold. Suppose instead that the marginal product of capital per dollar was more than the marginal product of labor per dollar:

[latex]\frac{MP_L}{w}< \frac{MP_K}{r}[/latex]

This inequality tells us that this current use of labor and capital cannot be an optimal solution to the cost minimization problem. To understand why, consider the effect of taking a dollar spent on labor input away, thereby lowering the amount of labor input ( raising the [latex]MP_L[/latex]—remember the law of diminishing marginal returns), and spending that dollar instead on capital and increasing the capital input ( lowering the [latex]MP_K[/latex]). We know from the inequality that if we do this, overall output must increase because the additional output from the extra dollar spent on capital has to be greater than the lost output from the diminished labor. Therefore, the net effect is an increase in overall output. The same argument applies if the inequality were reversed.

An Example of Minimizing Costs in the Long Run

Calculus: The Long-Run Cost Minimization Problem

Mathematically, we express the long-run cost minimization problem in the following way; we want to minimize total cost subject to an output target:

[latex]\min_{L,K}\ wL+rK[/latex] (7.1C)

subject to [latex]Q=f(L,K)[/latex] (7.2C)

We can proceed by defining a Lagrangian function:

(7.3C) [latex]\Lambda(L,K,\lambda) = wL+rK-\lambda (f(L,K)-Q)[/latex]

where [latex]\lambda[/latex] is the Lagrange multiplier. The first-order conditions for an interior solution (i.e., [latex]L \gt 0[/latex] and [latex]K \gt 0[/latex]) are as follows:

(7.4C) [latex]\frac{\partial \Lambda}{\partial L}=0\Rightarrow w=\lambda \frac{\partial f(L,K)}{\partial L}[/latex]

(7.5C) [latex]\frac{\partial \Lambda}{\partial K}=0\Rightarrow r=\lambda \frac{\partial f(L,K)}{\partial K}[/latex]

(7.6C) [latex]\frac{\partial \Lambda}{\partial \lambda}=0\Rightarrow Q=f(L,K)[/latex]

From chapter 6 , we know that [latex]MP_L=\frac{\partial f(L,K)}{\partial L}[/latex] and [latex]MP_K=\frac{\partial f(L,K)}{\partial K}[/latex].

Substituting these in and combining 7.4C and 7.5C to get rid of the Lagrange multiplier yields expression (7.1):

(7.7C) [latex]\frac{MP_L}{MP_K}=\frac{w}{r}[/latex]

And 7.6C is the constraint:

(7.8C) [latex]Q=f(L,K)[/latex]

Equations (7.7C) and (7.8C) are two equations in two unknowns, [latex]L[/latex] and [latex]K[/latex], and can be solved by repeated substitution. Note that these are exactly the conditions that describe figure 7.3 . Equation (7.7C) is the mathematical expression of [latex]MRTS = −w/r[/latex], and equation (7.8C) pins us down to a specific isoquant, as [latex]MRTS = −w/r[/latex] holds for a potentially infinite number of isoquants and isocost lines depending on the [latex]Q[/latex] chosen.

Consider a specific example of a gourmet root beer producer whose labor cost is $20 an hour and whose capital cost is $5 an hour. Suppose the production function for a barrel of root beer, [latex]Q[/latex], is [latex]Q=10L^{\frac{1}{2}}K^{\frac{1}{2}}[/latex]. If the output target is one thousand barrels of root beer, they could, for example, utilize one hundred hours of labor, [latex]L[/latex], and one hundred hours of capital, [latex]K[/latex], to yield [latex]10(10)(10)=1,000[/latex] barrels of root beer. But is this the most cost-efficient way to do it? More generally, what is the most cost-effective mix of labor and capital to produce one thousand barrels of root beer?

To determine this, we must start with the marginal products of labor and capital, which for this production function are the following:

(7.3) [latex]MP_L=5L^{-\frac{1}{2}}K^{\frac{1}{2}}[/latex]

(7.4) [latex]MP_K=5L^{\frac{1}{2}}K^{-\frac{1}{2}}[/latex]

And thus the [latex]MRTS,-\frac{MP_L}{MP_K}[/latex] is [latex]-\frac{K}{L}[/latex].

The ratio [latex]-\frac{w}{r}[/latex] in this case is [latex]-\frac{20}{5}[/latex], or −4.

So the condition that characterizes the cost-minimizing level of input utilization is [latex]\frac{K}{L}=4[/latex], or [latex]K=4L[/latex]. That is, for every hour of labor employed, [latex]L[/latex], the firm should utilize four hours of capital. This makes sense when you think about the fact that labor is four times as expensive as capital. Now, what are the specific amounts? To find them, we substitute our ratio into the production function set at one thousand barrels:

(7.5) [latex]1,000=10L^{\frac{1}{2}}K^{\frac{1}{2}}[/latex]

If [latex]K = 4L[/latex], then [latex]1,000=10L^{\frac{1}{2}}(4L)^{\frac{1}{2}}[/latex], or [latex]L=\frac{100}{\sqrt{4}}[/latex], which equals fifty.

If [latex]L = 50[/latex], then [latex]K = 200[/latex]. So using fifty hours of labor and two hundred hours of capital is the most cost-effective way to produce one thousand barrels of root beer for this firm.
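For readers who want to verify this numerically, here is a sketch of the same problem posed as a constrained cost minimization, assuming NumPy and SciPy are available; it should recover L = 50, K = 200, and a total cost of $2,000.

```python
# Numerical check of the root beer example: minimize 20L + 5K subject to
# 10 * sqrt(L * K) = 1,000. The analytic answer above is L = 50, K = 200.
import numpy as np
from scipy.optimize import minimize

w, r, target_q = 20.0, 5.0, 1_000.0

def total_cost(x):
    # x = [L, K]: hours of labor and capital
    return w * x[0] + r * x[1]

def on_isoquant(x):
    # equals zero when output hits the 1,000-barrel target
    return 10.0 * np.sqrt(x[0] * x[1]) - target_q

result = minimize(total_cost, x0=[100.0, 100.0], method="SLSQP",
                  bounds=[(1e-6, None), (1e-6, None)],
                  constraints=[{"type": "eq", "fun": on_isoquant}])

L_star, K_star = result.x
print(f"L* = {L_star:.1f}, K* = {K_star:.1f}, cost = ${total_cost(result.x):,.0f}")
```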

  • Find the marginal product of labor.
  • Find the marginal product of capital.
  • Find the MRTS.
  • Find the optimal amount of labor and capital inputs.
  • For [latex]Q=5L^{\frac{1}{3}}K^{\frac{2}{3}}[/latex] and [latex]w[/latex] and [latex]r[/latex], solve for the input-demand functions using the Lagrangian method.

In the previous section, we determined the cost-minimizing combination of labor and capital to produce one thousand barrels of root beer. As long as the prices of labor and capital remain constant, this producer will continue to make the same choice for every one thousand barrels of root beer produced. But what happens when input prices change?

Suppose, for example, an increasing demand for the capital equipment used to make root beer drives the rental price up to $10 an hour. This means capital is more expensive than before not only in absolute terms but in relative terms as well. In other words, the opportunity cost of capital has increased. Before the price increase, for every extra hour of capital utilized, the root beer firm had to give up [latex]\frac{1}{4}[/latex] of an hour of labor. After the rental rate increase, the opportunity cost has increased to [latex]\frac{1}{2}[/latex] an hour of labor. A cost-minimizing firm should therefore adjust by utilizing less of the relatively more expensive input and more of the relatively less expensive input.

In this case, the ratio [latex]-\frac{w}{r}[/latex] is now [latex]-\frac{20}{10}[/latex], or −2. So the new condition that characterizes the cost-minimizing level of input utilization after the price change is [latex]\frac{K}{L}=2[/latex], or [latex]K=\,2\,L[/latex].

The production function for one thousand barrels has not changed:

[latex]1,000=10L^{\frac{1}{2}}K^{\frac{1}{2}}[/latex]

So if [latex]K=2L[/latex], then [latex]1,000=10L^\frac{1}{2}(2L)^\frac{1}{2}[/latex], or [latex]L=\frac{100}{\sqrt{2}}[/latex], which equals roughly 71. If [latex]L = 71[/latex], then [latex]K = 2L = 142[/latex]. As expected, the firm now uses more labor than it did prior to the price change and less capital.

We can also calculate and compare the total cost before and after the increase in the rental rate for capital. Total cost is [latex]C(Q)=wL+rK[/latex], so in the first case where [latex]w[/latex] is $20 and [latex]r[/latex] is $5, the total cost is

[latex]C(Q) = 20(L) + 5(K) = $20(50) + $5(200) = $1,000 + $1,000 = $2,000[/latex].

Now when capital rental rates increase to $10, total cost becomes

[latex]C(Q) = 20(L) + 10(K) = $20(71) + $10(142) = $1,420 + $1,420 = $2,840[/latex].

This new higher cost makes sense: the production function, and hence the firm’s efficiency, did not change, and wages remained constant, but the rental rate of capital increased. Facing a higher input price with no offsetting gain in productivity, the firm’s cost of producing one thousand barrels necessarily rises.

Expansion Path

A firm’s expansion path is a curve that shows the cost-minimizing amount of each input for every level of output. Let’s look at an example to see how the expansion path is derived.

Equation (7.5) describes the production function set to the specific production target of one thousand barrels of root beer. If we replace one thousand with the output level [latex]Q[/latex], we get the following expression:

(7.6) [latex]Q=10L^{\frac{1}{2}}K^{\frac{1}{2}}[/latex]

We use the [latex]K=4L[/latex] ratio of capital to labor that characterizes the cost-minimizing ratio when the wage rate for labor is $20 an hour and the rental rate for capital is $5 an hour:

  • If [latex]K=4L[/latex], then [latex]Q=10L^{\frac{1}{2}}(4L)^{\frac{1}{2}}[/latex], or [latex]Q=20L[/latex], or [latex]L(Q)=\frac{Q}{20}[/latex].
  • If [latex]L(Q)=\frac{Q}{20}[/latex], then [latex]K(Q)=\frac{4Q}{20}=\frac{Q}{5}[/latex].


Note that [latex]L(Q)=\frac{Q}{20}[/latex] and [latex]K(Q)=\frac{4Q}{20}=\frac{Q}{5}[/latex] are both functions of output [latex]Q[/latex]. These are the input-demand functions.

Input-demand functions describe the optimal, or cost-minimizing, amount of a specific production input for every level of output. Note that when the output [latex]Q = 1,000[/latex], [latex]L(Q) = 50[/latex] and [latex]K(Q) = 200[/latex], just as we found before. But from these factor demands, we can immediately find the optimal amount of labor and capital for any output target at the given input prices. For example, suppose the factory wanted to increase output to two thousand or three thousand barrels of root beer:

  • At [latex]Q=2,000[/latex]
  • [latex]L(2,000)=\frac{2,000}{20}=100[/latex] and [latex]K(2,000)=\frac{2,000}{5}=400[/latex].
  • At [latex]Q=3,000[/latex]
  • [latex]L(3,000)=\frac{3,000}{20}=150[/latex] and [latex]K(3,000)=\frac{3,000}{5}=600[/latex].

We can graph this firm’s expansion path ( figure 7.4 ) from the input demands when [latex]Q[/latex] equals one thousand, two thousand, and three thousand. We can also immediately derive the long-run total cost curve from these factor demands by putting them into the long-run cost function, [latex]C(Q)=wL+rK[/latex]:

[latex]C(Q)=wL+rK=wL(Q)+rK(Q)=w\frac{Q}{20}+r\frac{Q}{5}[/latex]

At input prices [latex]w[/latex] = $20 and [latex]r[/latex] = $5, the function becomes [latex]C(Q)=20\frac{Q}{20}+5\frac{Q}{5}=2Q[/latex].

The long-run total cost curve shows us the specific total cost for each output amount when the firm is minimizing input costs.

Graphically, the expansion path and associated long-run total cost curve look like figure 7.4 .

Figure 7.4 illustrates how the solution to the cost minimization problem translates into factor demands and long-run total cost. We can solve for the factor demands and the total cost function more generally by replacing our specific input prices with w and r in the following way. The solution to the cost minimization problem is characterized by the MRTS equaling the input price ratio: the [latex]MRTS=\frac{K}{L}[/latex] and the [latex]\text{input price ratio}=\frac{w}{r}[/latex], so [latex]\frac{K}{L}=\frac{w}{r}[/latex], or [latex]rK=wL[/latex], or [latex]L=\frac{rK}{w}[/latex]. We can plug this into the production function to get

[latex]Q=10L^{\frac{1}{2}}K^{\frac{1}{2}}=10(\frac{rK}{w})^{\frac{1}{2}}\,K^{\frac{1}{2}}=10(\frac{r}{w})^{\frac{1}{2}}\,K[/latex].

Solving for the input demand for capital yields

[latex]K^*=\frac{Q}{10}(\frac{w}{r})^{\frac{1}{2}}[/latex].

Since [latex]L=\frac{rK}{w}[/latex], we can find the input demand for labor:

[latex]L^*=\frac{Q}{10}(\frac{r}{w})^{\frac{1}{2}}[/latex]

Now we have input-demand functions that are functions of both output, [latex]Q[/latex], and the input prices, [latex]w[/latex] and [latex]r[/latex]. Note that when [latex]Q[/latex] rises, the inputs of both capital and labor rise as well. Also note that when the price of labor, [latex]w[/latex], rises relative to the price of capital, [latex]r[/latex], or when [latex]\frac{w}{r}[/latex] rises, the use of the capital input rises and the use of the labor input falls. And when the price of capital rises relative to the price of labor, the use of labor rises, and the use of capital falls. So from these functions, we can see the firm’s optimal adjustment to changing input costs in the form of substituting the relatively cheaper input for the relatively more expensive input.
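As an illustration (added here, not in the original text), the input-demand functions and the implied long-run total cost can be wrapped in a few lines of Python; the function names are arbitrary.

```python
# Input-demand functions for the chapter's technology Q = 10 * L**0.5 * K**0.5:
# L*(Q, w, r) = (Q/10) * (r/w)**0.5 and K*(Q, w, r) = (Q/10) * (w/r)**0.5.

def input_demands(Q, w, r):
    """Cost-minimizing labor and capital for output Q at input prices w and r."""
    L_star = (Q / 10.0) * (r / w) ** 0.5
    K_star = (Q / 10.0) * (w / r) ** 0.5
    return L_star, K_star

def long_run_cost(Q, w, r):
    """Total cost of the cost-minimizing bundle."""
    L_star, K_star = input_demands(Q, w, r)
    return w * L_star + r * K_star

# The expansion path at w = $20 and r = $5, where C(Q) = 2Q:
for Q in (1000, 2000, 3000):
    print(Q, input_demands(Q, 20, 5), long_run_cost(Q, 20, 5))
# -> (50, 200) and $2,000; (100, 400) and $4,000; (150, 600) and $6,000

# After the rental rate rises to r = $10, the firm substitutes toward labor:
print(input_demands(1000, 20, 10))   # -> approximately (70.7, 141.4)
```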

Perfect Complement and Perfect Substitute Production Functions

Perfect complements and perfect substitutes in production are not uncommon. Suppose our pencil-making firm needs exactly one operator (labor) to operate one pencil-making machine. A second worker per machine adds nothing to output, and a second machine per worker also adds nothing to output. In this case, the pencil-making firm would have a perfect complement production function. Alternatively, suppose our root beer producer could either use two workers (labor) to measure and mix up the ingredients or employ one machine to do the same job. Either combination yields the same output. In this case, the root beer producer would have a perfect substitute production function.

Similar to the consumer choice problem, for production functions where inputs are perfect complements or substitutes, the condition that MRTS equals the price ratio will no longer hold. To see this, consider figure 7.5 .

In panel (a), a perfect complement isoquant just intersects the isocost line at the corner of the isoquant. (Take a moment and confirm to yourself that any other combination of labor and capital on the isoquant would be more expensive.) However, at the corner of the isoquant, the slope is undefined, so there is no MRTS. For perfect complements, using inputs in any combination other than the optimal ratio is not cost minimizing. So we can immediately express the optimal ratio as a condition of cost. If the production function is of the perfect complement type, [latex]Q=\min[\alpha L,\beta K][/latex], the optimal input ratio is [latex]\alpha L=\beta K[/latex]. And since output is equal to the minimum of the two arguments of the function, that means [latex]Q=\alpha L=\beta K[/latex]. So the optimal amount of inputs for any output level [latex]Q[/latex] is [latex]L^*=\frac{Q}{\alpha}[/latex] and [latex]K^*=\frac{Q}{\beta}[/latex].

Panels (b) and (c) of figure 7.5 show the optimal solution to the long-run cost minimization problem when the production function is a perfect substitute type. The solution is on one corner or the other of the isocost line, depending on the marginal productivities of the inputs and their costs. In (b), the MRTS or the slope of the isoquant is lower (less steep) than the slope of the isocost line or the ratio of the input prices. Since this is the case, it is much less costly to employ only capital to produce [latex]Q[/latex]. In (c), the MRTS or the slope of the isoquant is higher (steeper) than the slope of the isocost line or the ratio of the input prices. Since this is the case, it is much less costly to employ only labor to produce [latex]Q[/latex].

Recall that a perfect substitute production function is of the additive type:

[latex]Q=\alpha L+\beta K[/latex]

The marginal product of labor is [latex]\alpha[/latex], and the marginal product of capital is [latex]\beta[/latex].

Since the MRTS is the ratio of the marginal products, the MRTS is [latex]\frac{\alpha}{\beta}[/latex], which is also the slope of the isoquant.

The ratio of input prices is [latex]\frac{w}{r}[/latex]. This price ratio is the slope of the isocost.

From the graphs we can see that if [latex]\frac{\alpha }{\beta } \lt \frac{w}{r}[/latex], or the isoquant is less steep than the isocost, only capital is used. No labor will be employed, so [latex]L^*= 0[/latex], and output must equal [latex]\beta K[/latex], or [latex]Q=\beta K[/latex]. Solving this for [latex]K[/latex] gives us [latex]K^*=\frac{Q}{\beta}[/latex]. Alternatively, if [latex]\frac{\alpha }{\beta } \gt \frac{w}{r}[/latex], only labor is used, so [latex]K^*=0[/latex], [latex]Q=\alpha L[/latex], and [latex]L^*=\frac{Q}{\alpha}[/latex].
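A short sketch (illustrative, not from the original text) of the corner-solution logic for both special cases; the parameter values in the calls are hypothetical.

```python
# Cost-minimizing input choices for the two special-case technologies discussed above.

def perfect_complements(Q, alpha, beta):
    """Q = min(alpha*L, beta*K): inputs are used in the fixed ratio alpha*L = beta*K."""
    return Q / alpha, Q / beta            # (L*, K*)

def perfect_substitutes(Q, alpha, beta, w, r):
    """Q = alpha*L + beta*K: use only the input with the higher marginal product per dollar."""
    if alpha / w > beta / r:              # equivalently alpha/beta > w/r: labor only
        return Q / alpha, 0.0
    else:                                 # otherwise capital only
        return 0.0, Q / beta

print(perfect_complements(Q=100, alpha=1, beta=1))             # one worker per machine
print(perfect_substitutes(Q=100, alpha=2, beta=1, w=20, r=5))  # capital is cheaper per unit of output
```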

On May 15, 2014, the city of Seattle, Washington, passed an ordinance that established a minimum wage of $15 an hour, almost $5 more than the statewide minimum wage and more than double the federal minimum of $7.25 an hour.

A minimum wage increase brings up many issues about its impact, particularly for a city surrounded by suburbs that allow much lower rates of pay. One question we can answer with our current tools is how Seattle-based businesses affected by the increased minimum wage are likely to react to the higher cost of labor.

Most businesses that employ labor have many other inputs as well, some of which can be substituted for labor. Consider a janitorial firm that sells floor-cleaning services to office buildings, restaurants, and industrial plants. The janitorial firm can choose to clean floors using a small amount of capital and a large amount of labor: they can employ many cleaners and equip them with a simple mop.

“Cleaning13” by Nick Youngson, Alpha Stock Images, is licensed under CC BY-SA.

Or they could choose to employ more capital in the form of a modern floor-cleaning machine and employ fewer cleaners.

Our theory of cost minimization can help us understand and predict the consequences of making the labor input for cleaning the floors 50 percent more expensive. Figure 7.6 shows a typical firm’s long-run cost minimization problem. It is reasonable to consider the long run in this case because it would not take the firm very long to lease or purchase and have delivered a floor-cleaning machine. It is also reasonable to assume that floor-cleaning machines and workers are substitutes but not perfect ones—meaning that machines can be used to replace some labor hours, but some machine operators are still needed. The opposite is also true: the firm can replace machines with labor, but labor needs some capital (a simple mop) to clean a floor.

In figure 7.6 , the isoquant, [latex]\overline{Q}[/latex], represents the fixed amount of floor the firm needs to clean each day and the different combinations of capital and labor it can use to achieve that output target. When the cost of labor, [latex]w[/latex], increases and the cost of capital, [latex]r[/latex], stays the same, the isocost line gets steeper as w/r increases. We can see in the figure that when this happens, the firm will naturally shift away from using the relatively more expensive input and toward the relatively cheaper input. The firm will decrease the amount of labor it employs and increase the amount of capital it uses.

From this specific firm, we can generalize that a dramatic permanent increase in the minimum wage will cause affected firms to employ fewer hours of labor and that employment overall will fall in the affected area. The magnitude of employment change caused by such a policy depends on the production technology of all affected firms—that is, how easy it is for them to substitute more capital. All we can predict with our model currently is the fact that such shifts away from labor will likely occur. Whether the cost of this decrease in employment is outweighed by the benefit of such a policy is beyond the scope of the current analysis, but our model of cost minimization has provided useful insight into the decisions firms will make in reaction to the increase in minimum wage.
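As a purely stylized illustration (the janitorial firm's actual production function is not specified in the text), suppose the technology were Cobb-Douglas with equal shares; then the input-demand functions derived earlier in this chapter show the substitution away from labor directly. The wage and output numbers below are hypothetical round figures, not Seattle's actual data.

```python
# Stylized stand-in technology: Q = A * L**0.5 * K**0.5, so the Cobb-Douglas
# input demands derived earlier apply. A wage increase raises w/r and shifts
# the cost-minimizing bundle away from labor and toward capital.

def input_demands(Q, A, w, r):
    L_star = (Q / A) * (r / w) ** 0.5
    K_star = (Q / A) * (w / r) ** 0.5
    return L_star, K_star

A, Q, r = 10.0, 1000.0, 5.0
before = input_demands(Q, A, w=10.0, r=r)   # hypothetical wage near the old minimum
after = input_demands(Q, A, w=15.0, r=r)    # wage at the new $15 minimum
print(before)   # -> roughly 70.7 hours of labor, 141.4 hours of capital
print(after)    # -> roughly 57.7 hours of labor, 173.2 hours of capital
```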

  • Do you support a national minimum wage increase? Why or why not?
  • Do you think the benefits of a minimum wage increase outweigh the costs? Explain your answer.
  • What do you predict would happen if, instead of a minimum wage, a tax on the purchase or rental of capital equipment was imposed?

Review: Topics and learning outcomes

Learn: key topics.

Fixed cost: a cost that does not change as output changes.

Variable cost: a cost that changes as output changes.

Sunk cost: an expenditure that is not recoverable, e.g., the cost of the paint a business owner uses to paint the leased storefront of his coffee shop.

Depreciation: the loss of value of a durable good or asset over time.

Durable good: a good that has a long usable life.

Input-demand function: a function that describes the optimal factor input level for every possible level of output, e.g., [latex]L(Q)=\frac{Q}{20}[/latex] from the expansion-path example above.

Isocost line: a graph of every possible combination of inputs that yields the same cost of production.

Marginal rate of technical substitution (MRTS): [latex]-\frac{MP_L}{MP_K}[/latex]; the MRTS is also the slope of the isoquant.

Long-run cost minimization problem: choose the combination of labor and capital that produces the output target at the lowest cost. The slope of the isoquant is the MRTS, and the slope of the isocost line is [latex]-w/r[/latex], so the cost-minimizing condition is

[latex]MRTS=-\frac{w}{r}[/latex]

or, equivalently,

[latex]\frac{MP_L}{w}=\frac{MP_K}{r}[/latex]

That is, at the cost-minimizing bundle the marginal product per dollar spent is equalized across inputs. The calculus behind this condition is worth reviewing (see the Lagrangian first-order conditions below).

Media Attributions

  • 721Artboard-1 © Patrick M. Emerson is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 731Artboard-1 © Patrick M. Emerson is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 732Artboard-1 © Patrick M. Emerson is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 741Artboard-1 © Patrick M. Emerson is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • 742Artboard-1-1 © Patrick M. Emerson is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • cleaning13 © Nick Youngson is licensed under a CC BY-SA (Attribution ShareAlike) license
  • 751Artboard-1 © Patrick M. Emerson is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license

These are the first-order conditions of the Lagrangian for the long-run cost-minimization problem, [latex]\Lambda =wL+rK+\lambda (Q-f(L,K))[/latex]:

[latex]\frac{\partial \Lambda}{\partial L}=0\Rightarrow w=\lambda \frac{\partial f(L,K)}{\partial L}[/latex]

[latex]\frac{\partial \Lambda }{\partial K}=0\Rightarrow r=\lambda \frac{\partial f(L,K)}{\partial K}[/latex]

[latex]\frac{\partial \Lambda }{\partial \lambda }=0\Rightarrow Q=f(L,K)[/latex]

Dividing the first condition by the second eliminates [latex]\lambda[/latex]:

[latex]\frac{MP_L}{MP_K}=\frac{w}{r}[/latex]

while the third condition, [latex]Q=f(L,K)[/latex], keeps the firm on the target isoquant.
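As a sketch (assuming the sympy library is available; not part of the original text), the conditions above can be applied symbolically to the chapter's root beer technology to recover the input-demand functions derived earlier.

```python
# Apply MP_L / MP_K = w / r and the production constraint to Q = 10*sqrt(L)*sqrt(K).

import sympy as sp

L, K, w, r, Q = sp.symbols("L K w r Q", positive=True)
f = 10 * sp.sqrt(L) * sp.sqrt(K)                     # production function

# The ratio of the first two first-order conditions pins down the capital-labor ratio...
K_of_L = sp.solve(sp.Eq(sp.diff(f, L) / sp.diff(f, K), w / r), K)[0]   # K = (w/r) * L

# ...and the production constraint Q = f(L, K) pins down the scale.
L_star = sp.solve(sp.Eq(Q, f.subs(K, K_of_L)), L)[0]
K_star = sp.simplify(K_of_L.subs(L, L_star))

print(sp.simplify(L_star))   # equivalent to (Q/10) * sqrt(r/w)
print(sp.simplify(K_star))   # equivalent to (Q/10) * sqrt(w/r)
```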

Intermediate Microeconomics Copyright © 2019 by Patrick M. Emerson is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Kidney International Reports. 2020 Jun;5(6).

A Cost-Minimization Analysis of Nurse-Led Virtual Case Management in Late-Stage CKD

Thomas W. Ferguson

1 Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada

2 Seven Oaks Hospital Chronic Disease Innovation Centre, Winnipeg, Manitoba, Canada

Reid H. Whitlock

Michelle Di Nella, Navdeep Tangri, Paul Komenda, Claudio Rigatto

Abstract

Introduction

Interventions are needed to improve early detection of indications for dialysis before development of severe symptoms or complications. This may reduce suboptimal dialysis starts, prevent hospitalizations, and decrease costs. Our objectives were to explore assumptions around a nurse-led virtual case management intervention for patients with late-stage chronic kidney disease (CKD) with a 2-year Kidney Failure Risk Equation (KFRE) estimated risk of kidney failure ≥80% and to estimate how these assumptions affect potential cost savings.

Methods

We performed a cost-minimization analysis by developing a decision analytic microsimulation model constructed from the perspective of the health payer. Our primary outcome was the break-even point, defined as the maximum amount a health payer could spend on the intervention without incurring any net financial loss or gain. The intervention group received remote telemonitoring, including daily measurement of several health metrics (blood pressure, oxygen saturation, and weight), and a validated symptom questionnaire accompanied by nurse-led case management, whereas the comparator group received usual care. We assumed patients received the intervention for a maximum of 2 years.

Results

The break-even point was $7339 per late-stage CKD patient enrolled in the intervention. Based on the distribution of time receiving the intervention, we determined a maximum monthly intervention cost of $703.37. In probabilistic sensitivity analyses, we found that 75% of simulations produced break-even points between $3929 and $9460.

Conclusion

Nurse-led virtual home monitoring interventions in patients with CKD at high risk of kidney failure have the potential for significant cost savings from the perspective of the health payer.


CKD is a growing epidemic affecting more than 1 in 10 individuals in North America. 1 , 2 CKD is a potent risk factor for death, hospitalization, and reduced quality of life. 3 , 4 Patients progressing to kidney failure require life-sustaining therapy in the form of dialysis or a kidney transplant to survive. Kidney transplantation is the optimal form of renal replacement therapy from a cost, quality of life, and health outcomes perspective. 5 A limited supply of organs and an aging, frail, and highly comorbid CKD population preclude this option for most patients. 6 Most patients with kidney failure are thus treated with facility-based hemodialysis, a burdensome and expensive therapy with poor health outcomes. 7

The transition from late-stage nondialysis CKD to dialysis is challenging for both patients and health care teams. The optimal initiation of dialysis is defined as elective, outpatient implementation of a patient’s chosen modality (e.g., home hemodialysis, home peritoneal dialysis, or facility-based hemodialysis) with the most suitable dialysis access in place. 8 Unfortunately, even among patients followed by a nephrologist and multidisciplinary care team, 50% or more experience a suboptimal initiation of dialysis. 9 , 10 Moreover, more than half of dialysis initiations involve a hospitalization or emergency department visit due to severe uremic symptoms, volume overload, or hyperkalemia. 7 Initiating dialysis earlier at a higher kidney function before patients are symptomatic and when there is less risk of suboptimal starts is not an ideal solution, as it increases health care costs without a clinical benefit. 11 , 12

Interventions are needed to improve early detection of indications for dialysis before the development of severe symptoms and complications. This could decrease the number of suboptimal starts and may substantially decrease costs and poor outcomes associated with acute inpatient dialysis initiation. In this regard, telemonitoring and virtual ward technologies (virtual case management) have shown benefit in high-risk, specific disease states such as heart failure. 13 It is tempting to hypothesize that enhanced monitoring of patients with late-stage CKD using this technology may reduce the rate of suboptimal dialysis starts; however, the costs of any such intervention must be weighed against the potential benefits.

As a prelude to developing and testing such an intervention in CKD, and to inform decisions regarding patient selection and outcomes measures in future trials, we wished to better understand how assumptions about patient risk of progression to kidney failure, intervention effectiveness, and cost might influence the final cost-effectiveness of a putative nurse-led virtual case management intervention and in doing so, define a break-even point for such a strategy.

The primary objective of the present study, therefore, was to explore assumptions around a nurse-led virtual case management intervention for patients with late-stage CKD and to estimate how these assumptions might affect potential cost savings using a cost-minimization approach.

We defined our hypothetical study population as adult patients with late-stage CKD receiving care from a primary care provider or nephrologist and having a KFRE estimated risk of kidney failure at 2 years of 80% or greater in the baseline model. The KFRE has been internationally validated in nearly 700,000 patients across more than 30 countries and has demonstrated excellent discrimination (C statistic > 0.90) for the prediction of kidney failure in patients with CKD stages 3 to 5. 14 , 15

We constructed a cost-minimization model from the perspective of the health care payer. A decision analytic Markov model using microsimulation (N = 100,000) was created using TreeAge Pro 2018 (Williamstown, MA) in accordance with guidelines for economic evaluations of health interventions. 16 The primary output of the model was the break-even point for the nurse-led virtual case management intervention. The break-even point is defined as the maximum amount a health payer could spend on the intervention without increasing total net costs in comparison with the usual care scenario, in which there would be no net financial loss or gain from adopting the intervention. This is equal to the cost savings calculated by assigning the intervention an incremental cost of $0 in a cost-minimization model. For example, if an intervention were expected to save the health care system $1000, an intervention cost of $1000 over the entire time horizon the patient is followed in the model would be permissible to achieve a cost-neutral intervention (i.e., without incurring a net increase in total health care costs). In addition, a threshold analysis was performed in which we estimated the potential monthly cost of the intervention based on the distribution of time spent receiving the intervention observed in the model. Secondary outcomes included the number of hospitalizations and suboptimal dialysis initiations in both the intervention and comparator arms. Our model used monthly cycles, followed patients until kidney failure or death, and assumed that no patient would receive the intervention (i.e., the virtual home monitoring platform) for longer than a time horizon of 24 months, at which point we expect approximately 85% of patients to have died or progressed to kidney failure (Supplementary Figure S1). We presumed no differences in survival, time to dialysis initiation, or quality of life between the intervention and comparator in the baseline model. A half-cycle correction was applied in the model to account for the overestimation of state membership in TreeAge Pro. 17 All costs used in the model were inflated to 2017 values and then exchanged to U.S. dollars using purchasing power parities. 18 , 19 , 20 Costs were discounted at an annual rate of 5%. As the study contains only aggregate, previously published, or publicly available data, we did not seek approval from an institutional research ethics board. An overview of the model structure is provided in Figure 1.

Figure 1.

Overview of microsimulation model. Blue squares represent the decision node where both alternative treatments branch from and model results are calculated at its branches for each alternative. The purple circle represents a Markov process node, which runs each cycle of the model until the terminal condition is met (24 cycles or months in the baseline scenario). The green nodes represent chance nodes, where a probability event occurs. Red triangles are terminal nodes where patients are absorbed and exit the Markov process and the model (death and kidney failure). CKD, chronic kidney disease.

The hypothetical intervention considered in our analysis was composed of remote telemonitoring, including daily measurement of several health metrics that could potentially signal signs of volume overload or uremic symptoms (blood pressure, oxygen saturation, and weight), and a validated symptom questionnaire, 21 accompanied by nurse-led case management. The comparator group was assumed to receive usual care for patients with late-stage CKD with rates of hospitalization and mortality modeled after the Kaiser Permanente Renal Registry. 4

We estimated mortality and hospitalization rates from a previously published study of patients receiving usual medical care ( n > 1.1 million), using CKD stage 5 as a proxy for ≥80% 2-year KFRE risk. 4 Rates of kidney failure were assumed to be 80% over 2 years based on calculated KFRE risk, with a uniform distribution (i.e., equal probability of kidney failure each month during the 2-year period) in the baseline scenario. 14 Compliance with the intervention was assumed to be 83.5% based on personal communication from virtual application developers. The baseline proportion of patients who experience a suboptimal dialysis initiation was assumed to be 0.62 (e.g., initiation on a nonpreferred modality, with a nonpreferred access, or in hospital). 9 Cost estimates for optimal and suboptimal initiation of dialysis were taken from a multicenter retrospective study, and totaled $54,679 for patients who experience suboptimal dialysis initiations and $33,953 for optimally initiated patients. 9 The cost of a hospitalization event was assumed to be $11,640 based on estimates from the Agency for Healthcare Research and Quality. 22 We assumed that the intervention would afford a relative risk of 0.66 for hospitalization events and a relative risk of 0.5474 for the probability of experiencing a suboptimal dialysis initiation. 10
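To illustrate the structure of this calculation, the following deliberately simplified per-patient Monte Carlo sketch in Python is added here; it is not the authors' TreeAge model and will not reproduce the published figures. The monthly kidney-failure and hospitalization hazards, the folding of compliance into the treatment effect, and the omission of the half-cycle correction are all simplifying assumptions; the cost and relative-risk inputs are the ones quoted above.

```python
import math
import random

random.seed(1)

MONTHS = 24
P_FAIL_MONTH = 1 - (1 - 0.80) ** (1 / MONTHS)   # constant hazard giving ~80% kidney failure over 2 years
P_DEATH_MONTH = 1 - math.exp(-0.1414 / 12)      # mortality rate of 0.1414 per person-year
P_HOSP_MONTH = 1 - math.exp(-1.3 / 12)          # hospitalization rate: illustrative placeholder only
DISCOUNT = (1 + 0.05) ** (-1 / 12)              # 5% annual discounting, applied monthly

COST_HOSP = 11640
COST_SUBOPTIMAL_START = 54679
COST_OPTIMAL_START = 33953
P_SUBOPTIMAL = 0.62
COMPLIANCE = 0.835
RR_HOSP, RR_SUBOPTIMAL = 0.66, 0.5474

def mean_cost(intervention, n=100_000):
    """Mean discounted cost of hospitalizations and dialysis starts per patient."""
    rr_h = (COMPLIANCE * RR_HOSP + (1 - COMPLIANCE)) if intervention else 1.0
    rr_s = (COMPLIANCE * RR_SUBOPTIMAL + (1 - COMPLIANCE)) if intervention else 1.0
    total = 0.0
    for _ in range(n):
        for month in range(MONTHS):
            discount = DISCOUNT ** month
            if random.random() < P_DEATH_MONTH:         # death is an absorbing state
                break
            if random.random() < P_HOSP_MONTH * rr_h:   # hospitalization event this month
                total += COST_HOSP * discount
            if random.random() < P_FAIL_MONTH:          # kidney failure: dialysis initiation
                suboptimal = random.random() < P_SUBOPTIMAL * rr_s
                total += (COST_SUBOPTIMAL_START if suboptimal else COST_OPTIMAL_START) * discount
                break
    return total / n

break_even = mean_cost(intervention=False) - mean_cost(intervention=True)
print(f"Break-even spending per enrolled patient: ${break_even:,.0f}")
```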

Univariate sensitivity analysis was performed by varying model inputs ±50% from baseline, or until a theoretical maximum or minimum (e.g., compliance cannot exceed 100%). The annual discount rate was varied between 0% and 5%. We considered a lifetime horizon analysis in which patients could remain in the intervention for longer than 2 years. Second-order Monte Carlo simulation (probabilistic sensitivity analysis) was performed to evaluate parameter uncertainty by varying model inputs over assumed distributions across 1000 simulations. We conducted a 2-way sensitivity analysis on the primary effectiveness estimates (reduction in hospitalizations and suboptimal dialysis initiations) versus hypothetical costs of delivering the case management intervention at $400, $700, and $1000 per month. Last, we conducted a scenario analysis treating the proportion of patients compliant with the intervention as a time-dependent variable wherein compliance declined by an absolute reduction of 3% per month (e.g., month 1 compliance is the baseline value of 83.5%, month 2 compliance would have a value of 80.5%, and month 24 compliance would be 11.5%).

An overview of model inputs, data sources, ranges for sensitivity analysis, and assumed distributions is provided in Table 1 . 4 , 9 , 10 , 14 , 22

Table 1

Model inputs, data sources, ranges for sensitivity analyses, and assumed distributions

CKD, chronic kidney disease; NA, not applicable; KFRE, Kidney Failure Risk Equation.

Internal model validity was evaluated by comparing output from the microsimulation to inputs used in the model. In the status quo comparator scenario, 72,656 patients in a hypothetical cohort of 100,000 are expected to initiate dialysis. In the microsimulation, we found that the status quo arm had 44,832 suboptimal dialysis initiations (61.7% vs. the assumed proportion of 62%). Mortality for patients with CKD was assumed to be a rate per person-year of 0.1414. In the microsimulation, a total of 12,271 patients were expected to die over a 2-year period. With a mean expected follow-up of 11.4 months, a total of approximately 94,604 patient-years are observed in a hypothetical cohort of 100,000 patients, with an annual rate per person-year of 0.1297; however, kidney failure is treated as an absorbing state in our model and any mortality occurring in the same cycle as a kidney failure event would not be represented in the simulation, and as such we would expect a slightly lower mortality rate in the model.

In our hypothetical cohort of 100,000 patients with CKD with a KFRE risk ≥80%, 44,934 would have a suboptimal dialysis initiation in the status quo comparator, reduced to 27,858 with the virtual case management intervention. In our baseline model, we would expect the intervention to prevent approximately 36,000 hospitalization events across 100,000 patients (127,367 hospitalizations in the comparator arm versus 91,095 hospitalizations in the treatment arm) (Table 2). The median cost of hospitalizations and suboptimal dialysis initiations was $14,405.86 (interquartile range, $4780.86–$23,571.63) in the intervention arm in comparison with $23,505.62 (interquartile range, $11,780.80–$31,199.66) in the comparator arm. The expected value associated with the cost of hospitalizations and suboptimal dialysis initiations was $15,411.68 in the intervention arm versus $22,751.16 among patients receiving usual care, providing a break-even cost of $7339 per patient with late-stage CKD enrolled in the intervention (the amount that the health payer could spend on the intervention per patient to result in no net financial loss or gain). Patients were expected to remain in the CKD state receiving the intervention for a median time of 9 months (interquartile range, 4 to 18 months), with an expected value of 11.4 months, following a bimodal distribution (Supplementary Figure S5), providing a potential monthly intervention cost of $703.37 in threshold analyses.

Table 2

Results of microsimulation analysis ( n  = 100,000)

In univariate sensitivity analysis, we found that the maximum break-even point for the virtual case management intervention ranged from approximately $3500 to $11,500 per patient with late-stage CKD enrolled. Evaluating the model with a lifetime horizon did not have an impact on our results, producing a total break-even spending of $8481.95, and based on the distributions of time spent receiving the intervention produced a potential monthly intervention cost that was approximately the same as the base model ($702.72). The model was most sensitive to assumptions surrounding the effectiveness of the intervention (e.g., the relative risk of hospitalization and relative risk of suboptimal dialysis initiation) and patient compliance ( Figure 2 ). In the probabilistic sensitivity analysis, we found that more than 99% of simulations produced break-even points greater than $0 for the virtual case management intervention, with a median break-even point of $6124 per patient with CKD enrolled in the intervention (interquartile range, $3930–$9013) ( Figure 3 ). Our 2-way sensitivity analysis of both effectiveness estimates (reduction in all-cause hospitalizations and reduction in suboptimal dialysis initiations) found that at the baseline assumed relative risk of suboptimal dialysis initiation (0.5474) and a putative monthly intervention cost of $400, the relative risk of hospitalization would need to be 0.93 to reach a break-even point, decreasing to 0.39 at a monthly intervention cost of $1000. At the baseline assumed relative risk of all-cause hospitalization (0.66) and a monthly intervention cost of $400, the relative risk of suboptimal dialysis initiation would need to be 0.97 to reach a break-even point, decreasing to 0.14 at a monthly intervention cost of $1000 ( Figure 4 ). In our scenario analysis considering a declining compliance over time of 3% absolutely per month, we arrived at a monthly intervention cost of $499.43 to reach a break-even point.

Figure 2.

Univariate sensitivity analysis of break-even points associated with the virtual case management intervention. CKD, chronic kidney disease.

Figure 3.

Distribution of break-even points based on 1000 second-order Monte Carlo simulations (probabilistic sensitivity analysis).

Figure 4.

Scenario analyses of intervention effectiveness estimates by putative monthly cost of intervention. RR, relative risk.

In our cost-minimization analysis we found that a nurse-led home virtual monitoring intervention for patients with late-stage CKD at high risk of progression to kidney failure could cost up to $703.37 per month and still break even in comparison with usual care. Based on the distribution of time spent receiving the intervention, this permits the health care payer to spend up to $7339 per patient with high-risk CKD enrolled in the intervention without experiencing a net financial loss or gain. In our multivariate sensitivity analyses, we found that more than 75% of simulations produced break-even points above $3930 per patient.

As expected, our Markov simulation was influenced by changes to the input assumptions. Our model was most sensitive to assumptions about the efficacy of the intervention, specifically the presumed relative risk reductions in hospitalizations and suboptimal dialysis initiations. This is not surprising, as both these events are extremely costly when they occur. We drew our estimates from the only published data directly in the CKD population. 10 Although additional data in CKD would be ideal to strengthen the evidence, it is important to note that similar estimates have been observed in home-monitoring interventions in heart failure populations. Drawing a parallel between CKD at high risk of kidney failure and heart failure is not unreasonable: both conditions, for example, are single-organ diseases with systemic consequences, both share a propensity to fluid retention and pulmonary edema, and as a consequence volume management and monitoring is a large component of ongoing care in both. In addition, drawing an analogy between telemonitoring and nurse-led case management is also reasonable, because most telemonitoring interventions apply principles or components of case management delivered remotely. 13

Our model has several important implications for the future development of home monitoring technologies in CKD. First, the economics appear to be very favorable: even a small treatment effect could allow a reasonably priced intervention to generate substantial cost savings for the health care payer. Innovation in this area should be prioritized by health care providers and insurers, particularly those who assume global risk for their subscribers, such as Accountable Care Organizations. 23 Second, we have defined reasonable minimum efficacy thresholds for a monitoring intervention to result in cost savings. Interventions must offer a combined reduction in hospitalizations and suboptimal dialysis initiations large enough to permit cost savings at a reasonable putative intervention cost, or offer additional benefits such as increasing the time it takes to reach dialysis, reducing emergency department utilization, or increasing home modality uptake. These data can inform the design of implementation randomized controlled trials of home monitoring in CKD. Understandably, integration of these types of interventions into existing patient management may impact the burden of workflow on staff or may require hiring new staff to accommodate these additional tasks. Augmented care interventions have been shown to be feasible with nursing ratios of 100 patients:1 nurse, 10 and it is possible that avoided hospitalizations or suboptimal dialysis initiations may lessen the burden on hospital nursing resources. Further research evaluating the human resources implications of these interventions is warranted.

There are some additional considerations that need to be taken into account with home monitoring interventions in the CKD population. First, interventions similar to that proposed in this study have not been shown to be efficacious in patients with CKD who are at low risk of progression to kidney failure, whereas we hypothesize that targeting patients with a substantial risk of progression would be ideal. A randomized controlled trial of patients with an estimated glomerular filtration rate <60 ml/min per 1.73 m² found no improvement in mortality, hospitalization, or emergency department visits. The population in this study, however, had a mean estimated glomerular filtration rate of 37 ml/min per 1.73 m² and a mean urine albumin-to-creatinine ratio of 296 mg/g. 24 This would, on average, produce a 2-year KFRE risk of only approximately 3%, and as such, the risk of hospitalization or transition to dialysis would likely be very low even over an extended period. Second, there is uncertainty with regard to the timing of dialysis initiation in the context of the proposed intervention, where it may be possible that patients initiate dialysis earlier if indicated by interaction with the health care team or by metrics measured with home monitoring devices; conversely, there also may be a delay in the time to dialysis initiation if timely care can be offered in a situation that may have been overlooked in the usual care setting. Last, the effects of increased home modality uptake resulting from fewer suboptimal dialysis initiations may also be worth considering. The clinical trial used to inform effectiveness estimates in our study found that 23% of patients initiated on home peritoneal dialysis in the intervention arm versus only 3% in the control arm, although this difference was not statistically significant given the size of their study. 10 In the context of kidney failure populations where home modalities are currently prescribed in greater numbers than in the United States (e.g., Canada, Australia, and New Zealand), 25 , 26 there may be an additional hypothetical benefit and subsequent cost savings.

Our model has several limitations. First, our model used rates of mortality and hospitalization taken from a study of patients with CKD stage 5 as a proxy for patients with a KFRE risk ≥80%, and as such, these are likely an underestimation of the true rate of hospitalization and an overestimation of the true rate of mortality in our population; however, this difference is unlikely to alter the conclusion of our model, as it was robust to changes in the rate of mortality, and the estimated maximum break-even point would increase with a higher baseline rate of hospitalization. Second, data on the effectiveness of a virtual case management intervention with the outcome of an optimal start (elective outpatient initiation on hemodialysis with an arteriovenous access, on home dialysis, or with a preemptive transplant) were unavailable, and as such we took a conservative approach and estimated the effectiveness of the intervention based solely on a reduction in hospitalization at dialysis initiation, therefore possibly underestimating the true effectiveness of the intervention. Third, our analysis took the perspective of the health payer, and we did not take into account other costs and benefits that may be associated with the intervention, such as changes to productivity, caregiver burden, or disability and social security payments. A final caveat of our model is that it applied average estimates for many variables (e.g., compliance, hospitalization risk), which may not be independent of a patient’s history. For example, patients with low compliance may have a higher risk of kidney failure, death, or hospitalization, and conversely, patients with high compliance may have a higher than average treatment effectiveness. Further research to explore these relationships is warranted.

In conclusion, nurse-led virtual home monitoring interventions in patients with CKD at high risk of kidney failure have the potential for significant cost savings from the perspective of the health payer. We believe we have defined the necessary factors required for a successful virtual monitoring program trial and encourage innovators throughout the world to design, develop, and implement programs that meet these specifications to reduce suboptimal dialysis initiations and related hospital admissions, evaluating them with formally conducted randomized controlled trials to develop stronger evidence to support these interventions.

All the authors declared no competing interests.

Author Contributions

DH, PK, CR, TWF, and NT researched the idea and designed the study; PK, CR, and TWF acquired data; DH, MD, PK, CR, TWF, RHW, and NT analyzed and interpreted data; TWF and RHW analyzed statistics; and PK, CR, and NT supervised and mentored. Each author contributed important intellectual content during manuscript drafting or revision, accepts personal accountability for the author’s own contributions, and agrees to ensure that questions pertaining to the accuracy or integrity of any portion of the work are appropriately investigated and resolved.

Supplementary File (PDF)

Figure S1. Proportion of patients remaining in CKD state over time.

Figure S2. Overview of methods used to convert rates to monthly transition probabilities.

Figure S3. Overview of methods used to convert costs to 2017 U.S. dollars.

Figure S4. Overview of methods used to determine distributional parameters for the probabilistic sensitivity analysis.

Figure S5. Distribution of time spent receiving the intervention in months.

Supplementary Material

  • Open access
  • Published: 17 April 2024

The economic commitment of climate change

  • Maximilian Kotz   ORCID: orcid.org/0000-0003-2564-5043 1 , 2 ,
  • Anders Levermann   ORCID: orcid.org/0000-0003-4432-4704 1 , 2 &
  • Leonie Wenz   ORCID: orcid.org/0000-0002-8500-1568 1 , 3  

Nature volume 628, pages 551–557 (2024)


  • Environmental economics
  • Environmental health
  • Interdisciplinary studies
  • Projection and prediction

Global projections of macroeconomic climate-change damages typically consider impacts from average annual and national temperatures over long time horizons 1 , 2 , 3 , 4 , 5 , 6 . Here we use recent empirical findings from more than 1,600 regions worldwide over the past 40 years to project sub-national damages from temperature and precipitation, including daily variability and extremes 7 , 8 . Using an empirical approach that provides a robust lower bound on the persistence of impacts on economic growth, we find that the world economy is committed to an income reduction of 19% within the next 26 years independent of future emission choices (relative to a baseline without climate impacts, likely range of 11–29% accounting for physical climate and empirical uncertainty). These damages already outweigh the mitigation costs required to limit global warming to 2 °C by sixfold over this near-term time frame and thereafter diverge strongly dependent on emission choices. Committed damages arise predominantly through changes in average temperature, but accounting for further climatic components raises estimates by approximately 50% and leads to stronger regional heterogeneity. Committed losses are projected for all regions except those at very high latitudes, at which reductions in temperature variability bring benefits. The largest losses are committed at lower latitudes in regions with lower cumulative historical emissions and lower present-day income.


Projections of the macroeconomic damage caused by future climate change are crucial to informing public and policy debates about adaptation, mitigation and climate justice. On the one hand, adaptation against climate impacts must be justified and planned on the basis of an understanding of their future magnitude and spatial distribution 9 . This is also of importance in the context of climate justice 10 , as well as to key societal actors, including governments, central banks and private businesses, which increasingly require the inclusion of climate risks in their macroeconomic forecasts to aid adaptive decision-making 11 , 12 . On the other hand, climate mitigation policy such as the Paris Climate Agreement is often evaluated by balancing the costs of its implementation against the benefits of avoiding projected physical damages. This evaluation occurs both formally through cost–benefit analyses 1 , 4 , 5 , 6 , as well as informally through public perception of mitigation and damage costs 13 .

Projections of future damages meet challenges when informing these debates, in particular the human biases relating to uncertainty and remoteness that are raised by long-term perspectives 14 . Here we aim to overcome such challenges by assessing the extent of economic damages from climate change to which the world is already committed by historical emissions and socio-economic inertia (the range of future emission scenarios that are considered socio-economically plausible 15 ). Such a focus on the near term limits the large uncertainties about diverging future emission trajectories, the resulting long-term climate response and the validity of applying historically observed climate–economic relations over long timescales during which socio-technical conditions may change considerably. As such, this focus aims to simplify the communication and maximize the credibility of projected economic damages from future climate change.

In projecting the future economic damages from climate change, we make use of recent advances in climate econometrics that provide evidence for impacts on sub-national economic growth from numerous components of the distribution of daily temperature and precipitation 3 , 7 , 8 . Using fixed-effects panel regression models to control for potential confounders, these studies exploit within-region variation in local temperature and precipitation in a panel of more than 1,600 regions worldwide, comprising climate and income data over the past 40 years, to identify the plausibly causal effects of changes in several climate variables on economic productivity 16 , 17 . Specifically, macroeconomic impacts have been identified from changing daily temperature variability, total annual precipitation, the annual number of wet days and extreme daily rainfall that occur in addition to those already identified from changing average temperature 2 , 3 , 18 . Moreover, regional heterogeneity in these effects based on the prevailing local climatic conditions has been found using interaction terms. The selection of these climate variables follows micro-level evidence for mechanisms related to the impacts of average temperatures on labour and agricultural productivity 2 , of temperature variability on agricultural productivity and health 7 , as well as of precipitation on agricultural productivity, labour outcomes and flood damages 8 (see Extended Data Table 1 for an overview, including more detailed references). References  7 , 8 contain a more detailed motivation for the use of these particular climate variables and provide extensive empirical tests about the robustness and nature of their effects on economic output, which are summarized in Methods . By accounting for these extra climatic variables at the sub-national level, we aim for a more comprehensive description of climate impacts with greater detail across both time and space.

Constraining the persistence of impacts

A key determinant and source of discrepancy in estimates of the magnitude of future climate damages is the extent to which the impact of a climate variable on economic growth rates persists. The two extreme cases in which these impacts persist indefinitely or only instantaneously are commonly referred to as growth or level effects 19 , 20 (see Methods section ‘Empirical model specification: fixed-effects distributed lag models’ for mathematical definitions). Recent work shows that future damages from climate change depend strongly on whether growth or level effects are assumed 20 . Following refs.  2 , 18 , we provide constraints on this persistence by using distributed lag models to test the significance of delayed effects separately for each climate variable. Notably, and in contrast to refs.  2 , 18 , we use climate variables in their first-differenced form following ref.  3 , implying a dependence of the growth rate on a change in climate variables. This choice means that a baseline specification without any lags constitutes a model prior of purely level effects, in which a permanent change in the climate has only an instantaneous effect on the growth rate 3 , 19 , 21 . By including lags, one can then test whether any effects may persist further. This is in contrast to the specification used by refs.  2 , 18 , in which climate variables are used without taking the first difference, implying a dependence of the growth rate on the level of climate variables. In this alternative case, the baseline specification without any lags constitutes a model prior of pure growth effects, in which a change in climate has an infinitely persistent effect on the growth rate. Consequently, including further lags in this alternative case tests whether the initial growth impact is recovered 18 , 19 , 21 . Both of these specifications suffer from the limiting possibility that, if too few lags are included, one might falsely accept the model prior. The limitations of including a very large number of lags, including loss of data and increasing statistical uncertainty with an increasing number of parameters, mean that such a possibility is likely. By choosing a specification in which the model prior is one of level effects, our approach is therefore conservative by design, avoiding assumptions of infinite persistence of climate impacts on growth and instead providing a lower bound on this persistence based on what is observable empirically (see Methods section ‘Empirical model specification: fixed-effects distributed lag models’ for further exposition of this framework). The conservative nature of such a choice is probably the reason that ref.  19 finds much greater consistency between the impacts projected by models that use the first difference of climate variables, as opposed to their levels.

We begin our empirical analysis of the persistence of climate impacts on growth using ten lags of the first-differenced climate variables in fixed-effects distributed lag models. We detect substantial effects on economic growth at time lags of up to approximately 8–10 years for the temperature terms and up to approximately 4 years for the precipitation terms (Extended Data Fig. 1 and Extended Data Table 2 ). Furthermore, evaluation by means of information criteria indicates that the inclusion of all five climate variables and the use of these numbers of lags provide a preferable trade-off between best-fitting the data and including further terms that could cause overfitting, in comparison with model specifications excluding climate variables or including more or fewer lags (Extended Data Fig. 3 , Supplementary Methods Section  1 and Supplementary Table 1 ). We therefore remove statistically insignificant terms at later lags (Supplementary Figs. 1 – 3 and Supplementary Tables 2 – 4 ). Further tests using Monte Carlo simulations demonstrate that the empirical models are robust to autocorrelation in the lagged climate variables (Supplementary Methods Section  2 and Supplementary Figs. 4 and 5 ), that information criteria provide an effective indicator for lag selection (Supplementary Methods Section  2 and Supplementary Fig. 6 ), that the results are robust to concerns of imperfect multicollinearity between climate variables and that including several climate variables is actually necessary to isolate their separate effects (Supplementary Methods Section  3 and Supplementary Fig. 7 ). We provide a further robustness check using a restricted distributed lag model to limit oscillations in the lagged parameter estimates that may result from autocorrelation, finding that it provides similar estimates of cumulative marginal effects to the unrestricted model (Supplementary Methods Section 4 and Supplementary Figs. 8 and 9 ). Finally, to explicitly account for any outstanding uncertainty arising from the precise choice of the number of lags, we include empirical models with marginally different numbers of lags in the error-sampling procedure of our projection of future damages. On the basis of the lag-selection procedure (the significance of lagged terms in Extended Data Fig. 1 and Extended Data Table 2 , as well as information criteria in Extended Data Fig. 3 ), we sample from models with eight to ten lags for temperature and four for precipitation (models shown in Supplementary Figs. 1 – 3 and Supplementary Tables 2 – 4 ). In summary, this empirical approach to constrain the persistence of climate impacts on economic growth rates is conservative by design in avoiding assumptions of infinite persistence, but nevertheless provides a lower bound on the extent of impact persistence that is robust to the numerous tests outlined above.
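For readers unfamiliar with the specification, the following Python sketch (illustrative only, using synthetic data rather than the study's panel) shows the general shape of a fixed-effects distributed lag regression of growth rates on first-differenced climate variables, and the cumulative marginal effect used to gauge persistence. All names and parameters are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions, years, n_lags = 50, 40, 4

rows = []
for r in range(regions):
    temp = 15 + rng.normal(0, 1, years).cumsum() * 0.05          # slowly drifting regional temperature
    d_temp = np.diff(temp, prepend=temp[0])                      # first-differenced climate variable
    growth = 0.02 - 0.01 * d_temp + rng.normal(0, 0.01, years)   # synthetic growth with a level effect
    for t in range(years):
        rows.append({"region": r, "year": t, "growth": growth[t], "d_temp": d_temp[t]})

panel = pd.DataFrame(rows).sort_values(["region", "year"])
for lag in range(1, n_lags + 1):                                 # lagged first differences
    panel[f"d_temp_l{lag}"] = panel.groupby("region")["d_temp"].shift(lag)
panel = panel.dropna()

terms = ["d_temp"] + [f"d_temp_l{k}" for k in range(1, n_lags + 1)]
model = smf.ols(f"growth ~ {' + '.join(terms)} + C(region) + C(year)", data=panel).fit()

# The cumulative marginal effect (sum of lag coefficients) is the persistence measure
# discussed in the text; here only the contemporaneous effect is non-zero by construction.
print(round(sum(model.params[t] for t in terms), 4))
```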

Committed damages until mid-century

We combine these empirical economic response functions (Supplementary Figs. 1 – 3 and Supplementary Tables 2 – 4 ) with an ensemble of 21 climate models (see Supplementary Table 5 ) from the Coupled Model Intercomparison Project Phase 6 (CMIP-6) 22 to project the macroeconomic damages from these components of physical climate change (see Methods for further details). Bias-adjusted climate models that provide a highly accurate reproduction of observed climatological patterns with limited uncertainty (Supplementary Table 6 ) are used to avoid introducing biases in the projections. Following a well-developed literature 2 , 3 , 19 , these projections do not aim to provide a prediction of future economic growth. Instead, they are a projection of the exogenous impact of future climate conditions on the economy relative to the baselines specified by socio-economic projections, based on the plausibly causal relationships inferred by the empirical models and assuming ceteris paribus. Other exogenous factors relevant for the prediction of economic output are purposefully assumed constant.

A Monte Carlo procedure that samples from climate model projections, empirical models with different numbers of lags and model parameter estimates (obtained by 1,000 block-bootstrap resamples of each of the regressions in Supplementary Figs. 1 – 3 and Supplementary Tables 2 – 4 ) is used to estimate the combined uncertainty from these sources. Given these uncertainty distributions, we find that projected global damages are statistically indistinguishable across the two most extreme emission scenarios until 2049 (at the 5% significance level; Fig. 1 ). As such, the climate damages occurring before this time constitute those to which the world is already committed owing to the combination of past emissions and the range of future emission scenarios that are considered socio-economically plausible 15 . These committed damages comprise a permanent income reduction of 19% on average globally (population-weighted average) in comparison with a baseline without climate-change impacts (with a likely range of 11–29%, following the likelihood classification adopted by the Intergovernmental Panel on Climate Change (IPCC); see caption of Fig. 1 ). Even though levels of income per capita generally still increase relative to those of today, this constitutes a permanent income reduction for most regions, including North America and Europe (each with median income reductions of approximately 11%) and with South Asia and Africa being the most strongly affected (each with median income reductions of approximately 22%; Fig. 1 ). Under a middle-of-the-road scenario of future income development (SSP2, in which SSP stands for Shared Socio-economic Pathway), this corresponds to global annual damages in 2049 of 38 trillion in 2005 international dollars (likely range of 19–59 trillion 2005 international dollars). Compared with empirical specifications that assume pure growth or pure level effects, our preferred specification that provides a robust lower bound on the extent of climate impact persistence produces damages between these two extreme assumptions (Extended Data Fig. 3 ).
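Schematically, the uncertainty propagation can be pictured as follows (a toy sketch added for illustration; every number and identifier in it is a placeholder rather than a value from the study):

```python
import random

random.seed(0)

# Placeholder "damage" (fractional income reduction) per hypothetical climate model.
climate_models = {f"cmip6_model_{i}": random.gauss(0.19, 0.02) for i in range(21)}
lag_adjustment = {8: -0.01, 9: 0.0, 10: 0.01}          # toy effect of the lag choice

def project_damage(model_damage, n_lags, bootstrap_shock):
    """Toy stand-in for applying one set of regression coefficients to one model's fields."""
    return model_damage + lag_adjustment[n_lags] + bootstrap_shock

draws = []
for _ in range(10_000):
    model_damage = climate_models[random.choice(list(climate_models))]
    n_lags = random.choice([8, 9, 10])
    bootstrap_shock = random.gauss(0, 0.03)              # stands in for a block-bootstrap resample
    draws.append(project_damage(model_damage, n_lags, bootstrap_shock))

draws.sort()
median = draws[len(draws) // 2]
likely_range = (draws[int(0.17 * len(draws))], draws[int(0.83 * len(draws))])  # central 66% range
print(round(median, 3), [round(x, 3) for x in likely_range])
```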

Figure 1

Estimates of the projected reduction in income per capita from changes in all climate variables based on empirical models of climate impacts on economic output with a robust lower bound on their persistence (Extended Data Fig. 1 ) under a low-emission scenario compatible with the 2 °C warming target and a high-emission scenario (SSP2-RCP2.6 and SSP5-RCP8.5, respectively) are shown in purple and orange, respectively. Shading represents the 34% and 10% confidence intervals reflecting the likely and very likely ranges, respectively (following the likelihood classification adopted by the IPCC), having estimated uncertainty from a Monte Carlo procedure, which samples the uncertainty from the choice of physical climate models, empirical models with different numbers of lags and bootstrapped estimates of the regression parameters shown in Supplementary Figs. 1 – 3 . Vertical dashed lines show the time at which the climate damages of the two emission scenarios diverge at the 5% and 1% significance levels based on the distribution of differences between emission scenarios arising from the uncertainty sampling discussed above. Note that uncertainty in the difference of the two scenarios is smaller than the combined uncertainty of the two respective scenarios because samples of the uncertainty (climate model and empirical model choice, as well as model parameter bootstrap) are consistent across the two emission scenarios, hence the divergence of damages occurs while the uncertainty bounds of the two separate damage scenarios still overlap. Estimates of global mitigation costs from the three IAMs that provide results for the SSP2 baseline and SSP2-RCP2.6 scenario are shown in light green in the top panel, with the median of these estimates shown in bold.

Damages already outweigh mitigation costs

We compare the damages to which the world is committed over the next 25 years to estimates of the mitigation costs required to achieve the Paris Climate Agreement. Taking estimates of mitigation costs from the three integrated assessment models (IAMs) in the IPCC AR6 database 23 that provide results under comparable scenarios (SSP2 baseline and SSP2-RCP2.6, in which RCP stands for Representative Concentration Pathway), we find that the median committed climate damages are larger than the median mitigation costs in 2050 (six trillion in 2005 international dollars) by a factor of approximately six (note that estimates of mitigation costs are only provided every 10 years by the IAMs and so a comparison in 2049 is not possible). This comparison simply aims to compare the magnitude of future damages against mitigation costs, rather than to conduct a formal cost–benefit analysis of transitioning from one emission path to another. Formal cost–benefit analyses typically find that the net benefits of mitigation only emerge after 2050 (ref.  5 ), which may lead some to conclude that physical damages from climate change are simply not large enough to outweigh mitigation costs until the second half of the century. Our simple comparison of their magnitudes makes clear that damages are actually already considerably larger than mitigation costs and the delayed emergence of net mitigation benefits results primarily from the fact that damages across different emission paths are indistinguishable until mid-century (Fig. 1 ).

Although these near-term damages constitute those to which the world is already committed, we note that damage estimates diverge strongly across emission scenarios after 2049, conveying the clear benefits of mitigation from a purely economic point of view that have been emphasized in previous studies 4 , 24 . As well as the uncertainties assessed in Fig. 1 , these conclusions are robust to structural choices, such as the timescale with which changes in the moderating variables of the empirical models are estimated (Supplementary Figs. 10 and 11 ), as well as the order in which one accounts for the intertemporal and international components of currency comparison (Supplementary Fig. 12 ; see Methods for further details).

Damages from variability and extremes

Committed damages primarily arise through changes in average temperature (Fig. 2 ). This reflects the fact that projected changes in average temperature are larger than those in other climate variables when expressed as a function of their historical interannual variability (Extended Data Fig. 4 ). Because the historical variability is that on which the empirical models are estimated, larger projected changes in comparison with this variability probably lead to larger future impacts in a purely statistical sense. From a mechanistic perspective, one may plausibly interpret this result as implying that future changes in average temperature are the most unprecedented from the perspective of the historical fluctuations to which the economy is accustomed and therefore will cause the most damage. This insight may prove useful in terms of guiding adaptation measures to the sources of greatest damage.

Figure 2

Estimates of the median projected reduction in sub-national income per capita across emission scenarios (SSP2-RCP2.6 and SSP2-RCP8.5) as well as climate model, empirical model and model parameter uncertainty in the year in which climate damages diverge at the 5% level (2049, as identified in Fig. 1 ). a , Impacts arising from all climate variables. b – f , Impacts arising separately from changes in annual mean temperature ( b ), daily temperature variability ( c ), total annual precipitation ( d ), the annual number of wet days (>1 mm) ( e ) and extreme daily rainfall ( f ) (see Methods for further definitions). Data on national administrative boundaries are obtained from the GADM database version 3.6 and are freely available for academic use ( https://gadm.org/ ).

Nevertheless, future damages based on empirical models that consider changes in annual average temperature only and exclude the other climate variables constitute income reductions of only 13% in 2049 (Extended Data Fig. 5a , likely range 5–21%). This suggests that accounting for the other components of the distribution of temperature and precipitation raises net damages by nearly 50%. This increase arises through the further damages that these climatic components cause, but also because their inclusion reveals a stronger negative economic response to average temperatures (Extended Data Fig. 5b ). The latter finding is consistent with our Monte Carlo simulations, which suggest that the magnitude of the effect of average temperature on economic growth is underestimated unless accounting for the impacts of other correlated climate variables (Supplementary Fig. 7 ).

In terms of the relative contributions of the different climatic components to overall damages, we find that accounting for daily temperature variability causes the largest increase in overall damages relative to empirical frameworks that only consider changes in annual average temperature (4.9 percentage points, likely range 2.4–8.7 percentage points, equivalent to approximately 10 trillion international dollars). Accounting for precipitation causes smaller increases in overall damages, which are nevertheless equivalent to approximately 1.2 trillion international dollars: 0.01 percentage points (−0.37 to 0.33 percentage points), 0.34 percentage points (0.07 to 0.90 percentage points) and 0.36 percentage points (0.13 to 0.65 percentage points) from total annual precipitation, the number of wet days and extreme daily precipitation, respectively. Moreover, climate models seem to underestimate future changes in temperature variability 25 and extreme precipitation 26, 27 in response to anthropogenic forcing as compared with that observed historically, suggesting that the true impacts from these variables may be larger.

The distribution of committed damages

The spatial distribution of committed damages (Fig. 2a ) reflects a complex interplay between the patterns of future change in several climatic components and those of historical economic vulnerability to changes in those variables. Damages resulting from increasing annual mean temperature (Fig. 2b ) are negative almost everywhere globally, and larger at lower latitudes in regions in which temperatures are already higher and economic vulnerability to temperature increases is greatest (see the response heterogeneity to mean temperature embodied in Extended Data Fig. 1a ). This occurs despite the amplified warming projected at higher latitudes 28 , suggesting that regional heterogeneity in economic vulnerability to temperature changes outweighs heterogeneity in the magnitude of future warming (Supplementary Fig. 13a ). Economic damages owing to daily temperature variability (Fig. 2c ) exhibit a strong latitudinal polarisation, primarily reflecting the physical response of daily variability to greenhouse forcing in which increases in variability across lower latitudes (and Europe) contrast decreases at high latitudes 25 (Supplementary Fig. 13b ). These two temperature terms are the dominant determinants of the pattern of overall damages (Fig. 2a ), which exhibits a strong polarity with damages across most of the globe except at the highest northern latitudes. Future changes in total annual precipitation mainly bring economic benefits except in regions of drying, such as the Mediterranean and central South America (Fig. 2d and Supplementary Fig. 13c ), but these benefits are opposed by changes in the number of wet days, which produce damages with a similar pattern of opposite sign (Fig. 2e and Supplementary Fig. 13d ). By contrast, changes in extreme daily rainfall produce damages in all regions, reflecting the intensification of daily rainfall extremes over global land areas 29 , 30 (Fig. 2f and Supplementary Fig. 13e ).

The spatial distribution of committed damages implies considerable injustice along two dimensions: culpability for the historical emissions that have caused climate change and pre-existing levels of socio-economic welfare. Spearman’s rank correlations indicate that committed damages are significantly larger in countries with smaller historical cumulative emissions, as well as in regions with lower current income per capita (Fig. 3 ). This implies that those countries that will suffer the most from the damages already committed are those that are least responsible for climate change and which also have the least resources to adapt to it.

Figure 3

Estimates of the median projected change in national income per capita across emission scenarios (RCP2.6 and RCP8.5) as well as climate model, empirical model and model parameter uncertainty in the year in which climate damages diverge at the 5% level (2049, as identified in Fig. 1 ) are plotted against cumulative national emissions per capita in 2020 (from the Global Carbon Project) and coloured by national income per capita in 2020 (from the World Bank) in a and vice versa in b . In each panel, the size of each scatter point is weighted by the national population in 2020 (from the World Bank). Inset numbers indicate the Spearman’s rank correlation ρ and P -values for a hypothesis test whose null hypothesis is of no correlation, as well as the Spearman’s rank correlation weighted by national population.

To further quantify this heterogeneity, we assess the difference in committed damages between the upper and lower quartiles of regions when ranked by present income levels and historical cumulative emissions (using a population weighting to both define the quartiles and estimate the group averages). On average, the quartile of countries with lower income are committed to an income loss that is 8.9 percentage points (or 61%) greater than the upper quartile (Extended Data Fig. 6 ), with a likely range of 3.8–14.7 percentage points across the uncertainty sampling of our damage projections (following the likelihood classification adopted by the IPCC). Similarly, the quartile of countries with lower historical cumulative emissions are committed to an income loss that is 6.9 percentage points (or 40%) greater than the upper quartile, with a likely range of 0.27–12 percentage points. These patterns reemphasize the prevalence of injustice in climate impacts 31 , 32 , 33 in the context of the damages to which the world is already committed by historical emissions and socio-economic inertia.

Contextualizing the magnitude of damages

The magnitude of projected economic damages exceeds previous literature estimates 2 , 3 , arising from several developments made on previous approaches. Our estimates are larger than those of ref.  2 (see first row of Extended Data Table 3 ), primarily because of the facts that sub-national estimates typically show a steeper temperature response (see also refs.  3 , 34 ) and that accounting for other climatic components raises damage estimates (Extended Data Fig. 5 ). However, we note that our empirical approach using first-differenced climate variables is conservative compared with that of ref.  2 in regard to the persistence of climate impacts on growth (see introduction and Methods section ‘Empirical model specification: fixed-effects distributed lag models’), an important determinant of the magnitude of long-term damages 19 , 21 . Using a similar empirical specification to ref.  2 , which assumes infinite persistence while maintaining the rest of our approach (sub-national data and further climate variables), produces considerably larger damages (purple curve of Extended Data Fig. 3 ). Compared with studies that do take the first difference of climate variables 3 , 35 , our estimates are also larger (see second and third rows of Extended Data Table 3 ). The inclusion of further climate variables (Extended Data Fig. 5 ) and a sufficient number of lags to more adequately capture the extent of impact persistence (Extended Data Figs. 1 and 2 ) are the main sources of this difference, as is the use of specifications that capture nonlinearities in the temperature response when compared with ref.  35 . In summary, our estimates develop on previous studies by incorporating the latest data and empirical insights 7 , 8 , as well as in providing a robust empirical lower bound on the persistence of impacts on economic growth, which constitutes a middle ground between the extremes of the growth-versus-levels debate 19 , 21 (Extended Data Fig. 3 ).

Compared with the fraction of variance explained by the empirical models historically (<5%), the projection of reductions in income of 19% may seem large. This arises owing to the fact that projected changes in climatic conditions are much larger than those that were experienced historically, particularly for changes in average temperature (Extended Data Fig. 4 ). As such, any assessment of future climate-change impacts necessarily requires an extrapolation outside the range of the historical data on which the empirical impact models were evaluated. Nevertheless, these models constitute the most state-of-the-art methods for inference of plausibly causal climate impacts based on observed data. Moreover, we take explicit steps to limit out-of-sample extrapolation by capping the moderating variables of the interaction terms at the 95th percentile of the historical distribution (see Methods ). This avoids extrapolating the marginal effects outside what was observed historically. Given the nonlinear response of economic output to annual mean temperature (Extended Data Fig. 1 and Extended Data Table 2 ), this is a conservative choice that limits the magnitude of damages that we project. Furthermore, back-of-the-envelope calculations indicate that the projected damages are consistent with the magnitude and patterns of historical economic development (see Supplementary Discussion Section  5 ).

Missing impacts and spatial spillovers

Despite assessing several climatic components from which economic impacts have recently been identified 3 , 7 , 8 , this assessment of aggregate climate damages should not be considered comprehensive. Important channels such as impacts from heatwaves 31 , sea-level rise 36 , tropical cyclones 37 and tipping points 38 , 39 , as well as non-market damages such as those to ecosystems 40 and human health 41 , are not considered in these estimates. Sea-level rise is unlikely to be feasibly incorporated into empirical assessments such as this because historical sea-level variability is mostly small. Non-market damages are inherently intractable within our estimates of impacts on aggregate monetary output and estimates of these impacts could arguably be considered as extra to those identified here. Recent empirical work suggests that accounting for these channels would probably raise estimates of these committed damages, with larger damages continuing to arise in the global south 31 , 36 , 37 , 38 , 39 , 40 , 41 , 42 .

Moreover, our main empirical analysis does not explicitly evaluate the potential for impacts in local regions to produce effects that ‘spill over’ into other regions. Such effects may further mitigate or amplify the impacts we estimate, for example, if companies relocate production from one affected region to another or if impacts propagate along supply chains. The current literature indicates that trade plays a substantial role in propagating spillover effects 43 , 44 , making their assessment at the sub-national level challenging without available data on sub-national trade dependencies. Studies accounting for only spatially adjacent neighbours indicate that negative impacts in one region induce further negative impacts in neighbouring regions 45 , 46 , 47 , 48 , suggesting that our projected damages are probably conservative by excluding these effects. In Supplementary Fig. 14 , we assess spillovers from neighbouring regions using a spatial-lag model. For simplicity, this analysis excludes temporal lags, focusing only on contemporaneous effects. The results show that accounting for spatial spillovers can amplify the overall magnitude, and also the heterogeneity, of impacts. Consistent with previous literature, this indicates that the overall magnitude (Fig. 1 ) and heterogeneity (Fig. 3 ) of damages that we project in our main specification may be conservative without explicitly accounting for spillovers. We note that further analysis that addresses both spatially and trade-connected spillovers, while also accounting for delayed impacts using temporal lags, would be necessary to adequately address this question fully. These approaches offer fruitful avenues for further research but are beyond the scope of this manuscript, which primarily aims to explore the impacts of different climate conditions and their persistence.

Policy implications

We find that the economic damages resulting from climate change until 2049 are those to which the world economy is already committed and that these greatly outweigh the costs required to mitigate emissions in line with the 2 °C target of the Paris Climate Agreement (Fig. 1 ). This assessment is complementary to formal analyses of the net costs and benefits associated with moving from one emission path to another, which typically find that net benefits of mitigation only emerge in the second half of the century 5 . Our simple comparison of the magnitude of damages and mitigation costs makes clear that this is primarily because damages are indistinguishable across emissions scenarios—that is, committed—until mid-century (Fig. 1 ) and that they are actually already much larger than mitigation costs. For simplicity, and owing to the availability of data, we compare damages to mitigation costs at the global level. Regional estimates of mitigation costs may shed further light on the national incentives for mitigation to which our results already hint, of relevance for international climate policy. Although these damages are committed from a mitigation perspective, adaptation may provide an opportunity to reduce them. Moreover, the strong divergence of damages after mid-century reemphasizes the clear benefits of mitigation from a purely economic perspective, as highlighted in previous studies 1 , 4 , 6 , 24 .

Historical climate data

Historical daily 2-m temperature and precipitation totals (in mm) are obtained for the period 1979–2019 from the W5E5 database. The W5E5 dataset comes from ERA-5, a state-of-the-art reanalysis of historical observations, but has been bias-adjusted using version 2.0 of the WATCH Forcing Data methodology and precipitation data from version 2.3 of the Global Precipitation Climatology Project to better reflect ground-based measurements 49, 50, 51. We obtain these data on a 0.5° × 0.5° grid from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) database. Notably, these historical data have been used to bias-adjust future climate projections from CMIP-6 (see the following section), ensuring consistency between the distribution of historical daily weather on which our empirical models were estimated and the climate projections used to estimate future damages. These data are publicly available from the ISIMIP database. See refs. 7, 8 for robustness tests of the empirical models to the choice of climate data reanalysis products.

Future climate data

Daily 2-m temperature and precipitation totals (in mm) are taken from 21 climate models participating in CMIP-6 under a high (RCP8.5) and a low (RCP2.6) greenhouse gas emission scenario from 2015 to 2100. The data have been bias-adjusted and statistically downscaled to a common half-degree grid to reflect the historical distribution of daily temperature and precipitation of the W5E5 dataset using the trend-preserving method developed by the ISIMIP 50 , 52 . As such, the climate model data reproduce observed climatological patterns exceptionally well (Supplementary Table 5 ). Gridded data are publicly available from the ISIMIP database.

Historical economic data

Historical economic data come from the DOSE database of sub-national economic output 53. We use a recent revision to the DOSE dataset that provides data for 1,660 sub-national regions across 83 countries, with varying temporal coverage from 1960 to 2019. Sub-national units constitute the first administrative division below national, for example, states for the USA and provinces for China. Data come from measures of gross regional product per capita (GRPpc) or income per capita in local currencies, reflecting the values reported in national statistical agencies, yearbooks and, in some cases, academic literature. We follow previous literature 3, 7, 8, 54 and assess real sub-national output per capita by first converting values from local currencies to US dollars to account for diverging national inflationary tendencies and then accounting for US inflation using a US deflator. Alternatively, one might first account for national inflation and then convert between currencies. Supplementary Fig. 12 demonstrates that our conclusions are consistent when accounting for price changes in the reversed order, although the magnitude of estimated damages varies. See the documentation of the DOSE dataset for further discussion of these choices. Conversions between currencies are conducted using exchange rates from the FRED database of the Federal Reserve Bank of St. Louis 55 and the national deflators from the World Bank 56.

Future socio-economic data

Baseline gridded gross domestic product (GDP) and population data for the period 2015–2100 are taken from the middle-of-the-road scenario SSP2 (ref.  15 ). Population data have been downscaled to a half-degree grid by the ISIMIP following the methodologies of refs.  57 , 58 , which we then aggregate to the sub-national level of our economic data using the spatial aggregation procedure described below. Because current methodologies for downscaling the GDP of the SSPs use downscaled population to do so, per-capita estimates of GDP with a realistic distribution at the sub-national level are not readily available for the SSPs. We therefore use national-level GDP per capita (GDPpc) projections for all sub-national regions of a given country, assuming homogeneity within countries in terms of baseline GDPpc. Here we use projections that have been updated to account for the impact of the COVID-19 pandemic on the trajectory of future income, while remaining consistent with the long-term development of the SSPs 59 . The choice of baseline SSP alters the magnitude of projected climate damages in monetary terms, but when assessed in terms of percentage change from the baseline, the choice of socio-economic scenario is inconsequential. Gridded SSP population data and national-level GDPpc data are publicly available from the ISIMIP database. Sub-national estimates as used in this study are available in the code and data replication files.

Climate variables

Following recent literature 3 , 7 , 8 , we calculate an array of climate variables for which substantial impacts on macroeconomic output have been identified empirically, supported by further evidence at the micro level for plausible underlying mechanisms. See refs.  7 , 8 for an extensive motivation for the use of these particular climate variables and for detailed empirical tests on the nature and robustness of their effects on economic output. To summarize, these studies have found evidence for independent impacts on economic growth rates from annual average temperature, daily temperature variability, total annual precipitation, the annual number of wet days and extreme daily rainfall. Assessments of daily temperature variability were motivated by evidence of impacts on agricultural output and human health, as well as macroeconomic literature on the impacts of volatility on growth when manifest in different dimensions, such as government spending, exchange rates and even output itself 7 . Assessments of precipitation impacts were motivated by evidence of impacts on agricultural productivity, metropolitan labour outcomes and conflict, as well as damages caused by flash flooding 8 . See Extended Data Table 1 for detailed references to empirical studies of these physical mechanisms. Marked impacts of daily temperature variability, total annual precipitation, the number of wet days and extreme daily rainfall on macroeconomic output were identified robustly across different climate datasets, spatial aggregation schemes, specifications of regional time trends and error-clustering approaches. They were also found to be robust to the consideration of temperature extremes 7 , 8 . Furthermore, these climate variables were identified as having independent effects on economic output 7 , 8 , which we further explain here using Monte Carlo simulations to demonstrate the robustness of the results to concerns of imperfect multicollinearity between climate variables (Supplementary Methods Section  2 ), as well as by using information criteria (Supplementary Table 1 ) to demonstrate that including several lagged climate variables provides a preferable trade-off between optimally describing the data and limiting the possibility of overfitting.

We calculate these variables from the distribution of daily, d, temperature, T_{x,d}, and precipitation, P_{x,d}, at the grid-cell, x, level for both the historical and future climate data. As well as annual mean temperature, \(\bar{T}_{x,y}\), and annual total precipitation, P_{x,y}, we calculate annual, y, measures of daily temperature variability, \(\widetilde{T}_{x,y}\):

$$\widetilde{T}_{x,y}=\frac{1}{12}\sum_{m=1}^{12}\sqrt{\frac{1}{D_{m}}\sum_{d=1}^{D_{m}}\left(T_{x,d,m,y}-\bar{T}_{x,m,y}\right)^{2}},$$

the number of wet days, \(\mathrm{Pwd}_{x,y}\):

$$\mathrm{Pwd}_{x,y}=\sum_{d=1}^{D_{y}}H\left(P_{x,d}-1\,\mathrm{mm}\right),$$

and extreme daily rainfall, \(\mathrm{Pext}_{x,y}\):

$$\mathrm{Pext}_{x,y}=\sum_{d=1}^{D_{y}}P_{x,d}\,H\left(P_{x,d}-P99.9_{x}\right),$$

in which T_{x,d,m,y} is the grid-cell-specific daily temperature in month m and year y, \(\bar{T}_{x,m,y}\) is the year and grid-cell-specific monthly, m, mean temperature, D_m and D_y the number of days in a given month m or year y, respectively, H the Heaviside step function, 1 mm the threshold used to define wet days and P99.9_x is the 99.9th percentile of historical (1979–2019) daily precipitation at the grid-cell level. Units of the climate measures are degrees Celsius for annual mean temperature and daily temperature variability, millimetres for total annual precipitation and extreme daily precipitation, and simply the number of days for the annual number of wet days.
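As a concrete illustration, a minimal R sketch of these annual measures for a single grid cell and year follows. The inputs t_daily and p_daily (daily temperature in °C and precipitation in mm), month (the month label of each day) and p999_hist (the historical 99.9th percentile of daily precipitation for that cell) are hypothetical placeholders; this is not the authors' code.

# Daily temperature variability: mean over calendar months of the within-month
# standard deviation of daily temperature (population form, dividing by the
# number of days in the month).
pop_sd <- function(x) sqrt(mean((x - mean(x))^2))
t_var  <- mean(tapply(t_daily, month, pop_sd))

# Annual mean temperature and total annual precipitation.
t_mean <- mean(t_daily)
p_tot  <- sum(p_daily)

# Number of wet days: days with more than 1 mm of precipitation.
p_wd <- sum(p_daily > 1)

# Extreme daily rainfall: precipitation falling on days that exceed the
# historical 99.9th percentile for this grid cell.
p_ext <- sum(p_daily[p_daily > p999_hist])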

We also calculated weighted standard deviations of monthly rainfall totals as also used in ref.  8 but do not include them in our projections as we find that, when accounting for delayed effects, their effect becomes statistically indistinct and is better captured by changes in total annual rainfall.

Spatial aggregation

We aggregate grid-cell-level historical and future climate measures, as well as grid-cell-level future GDPpc and population, to the level of the first administrative unit below national level of the GADM database, using an area-weighting algorithm that estimates the portion of each grid cell falling within an administrative boundary. We use this as our baseline specification following previous findings that the effect of area or population weighting at the sub-national level is negligible 7 , 8 .
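For illustration, a minimal sketch of such an area-weighted aggregation in R follows, assuming a precomputed overlap table w with columns cell, region and weight (the fraction of each grid cell's area lying inside each region) and a table cell_values with columns cell and value; the names are hypothetical placeholders and this is not the authors' implementation.

library(data.table)

aggregate_to_regions <- function(cell_values, w) {
  # Join grid-cell values to the overlap weights and take the area-weighted mean
  # within each administrative region.
  dt <- merge(as.data.table(w), as.data.table(cell_values), by = "cell")
  dt[, .(value = sum(weight * value) / sum(weight)), by = region]
}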

Empirical model specification: fixed-effects distributed lag models

Following a wide range of climate econometric literature 16 , 60 , we use panel regression models with a selection of fixed effects and time trends to isolate plausibly exogenous variation with which to maximize confidence in a causal interpretation of the effects of climate on economic growth rates. The use of region fixed effects, μ r , accounts for unobserved time-invariant differences between regions, such as prevailing climatic norms and growth rates owing to historical and geopolitical factors. The use of yearly fixed effects, η y , accounts for regionally invariant annual shocks to the global climate or economy such as the El Niño–Southern Oscillation or global recessions. In our baseline specification, we also include region-specific linear time trends, k r y , to exclude the possibility of spurious correlations resulting from common slow-moving trends in climate and growth.

The persistence of climate impacts on economic growth rates is a key determinant of the long-term magnitude of damages. Methods for inferring the extent of persistence in impacts on growth rates have typically used lagged climate variables to evaluate the presence of delayed effects or catch-up dynamics 2 , 18 . For example, consider starting from a model in which a climate condition, C r , y , (for example, annual mean temperature) affects the growth rate, Δlgrp r , y (the first difference of the logarithm of gross regional product) of region r in year y :
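A minimal sketch of such a model, using the region and year fixed effects μ_r and η_y introduced above and an error term ε_{r,y}, is

$$\Delta \mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+\alpha \,C_{r,y}+\varepsilon_{r,y},$$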

which we refer to as a ‘pure growth effects’ model in the main text. Typically, further lags are included,
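for example (a sketch, with NL denoting the number of included lags),

$$\Delta \mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+\sum_{L=0}^{NL}\alpha_{L}\,C_{r,y-L}+\varepsilon_{r,y},$$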

and the cumulative effect of all lagged terms is evaluated to assess the extent to which climate impacts on growth rates persist. Following ref.  18 , in the case that,
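the lagged coefficients sum to a non-zero value (sketching the condition in the notation above),

$$\sum_{L=0}^{NL}\alpha_{L}\ne 0,$$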

the implication is that impacts on the growth rate persist up to NL years after the initial shock (possibly to a weaker or a stronger extent), whereas if
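the lagged coefficients instead sum to zero (again a sketch in the same notation),

$$\sum_{L=0}^{NL}\alpha_{L}=0,$$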

then the initial impact on the growth rate is recovered after NL years and the effect is only one on the level of output. However, we note that such approaches are limited by the fact that, when including an insufficient number of lags to detect a recovery of the growth rates, one may find equation ( 6 ) to be satisfied and incorrectly assume that a change in climatic conditions affects the growth rate indefinitely. In practice, given a limited record of historical data, including too few lags to confidently conclude in an infinitely persistent impact on the growth rate is likely, particularly over the long timescales over which future climate damages are often projected 2 , 24 . To avoid this issue, we instead begin our analysis with a model for which the level of output, lgrp r , y , depends on the level of a climate variable, C r , y :
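A sketch of this level specification, under the same fixed-effects notation, is

$$\mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+\alpha \,C_{r,y}+\varepsilon_{r,y}.$$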

Given the non-stationarity of the level of output, we follow the literature 19 and estimate such an equation in first-differenced form as,
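a form that can be sketched (with the fixed effects retained in estimation) as

$$\Delta \mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+\alpha \,\Delta C_{r,y}+\varepsilon_{r,y},$$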

which we refer to as a model of ‘pure level effects’ in the main text. This model constitutes a baseline specification in which a permanent change in the climate variable produces an instantaneous impact on the growth rate and a permanent effect only on the level of output. By including lagged variables in this specification,
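for example (a sketch with NL lags of the first-differenced climate variable),

$$\Delta \mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+\sum_{L=0}^{NL}\alpha_{L}\,\Delta C_{r,y-L}+\varepsilon_{r,y},$$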

we are able to test whether the impacts on the growth rate persist any further than instantaneously by evaluating whether the coefficients α_L at lags L > 0 are statistically significantly different from zero. Even though this framework is also limited by the possibility of including too few lags, the choice of a baseline model specification in which impacts on the growth rate do not persist means that, in the case of including too few lags, the framework reverts to the baseline specification of level effects. As such, this framework is conservative with respect to the persistence of impacts and the magnitude of future damages. It naturally avoids assumptions of infinite persistence and we are able to interpret any persistence that we identify with equation ( 9 ) as a lower bound on the extent of climate impact persistence on growth rates. See the main text for further discussion of this specification choice, in particular about its conservative nature compared with previous literature estimates, such as refs. 2, 18.

We allow the response to climatic changes to vary across regions, using interactions of the climate variables with historical average (1979–2019) climatic conditions reflecting heterogenous effects identified in previous work 7 , 8 . Following this previous work, the moderating variables of these interaction terms constitute the historical average of either the variable itself or of the seasonal temperature difference, \({\hat{T}}_{r}\) , or annual mean temperature, \({\bar{T}}_{r}\) , in the case of daily temperature variability 7 and extreme daily rainfall, respectively 8 .

The resulting regression equation with N and M lagged variables, respectively, reads:
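A sketch consistent with this description, in which \(\bar{C}_{i,r}\) denotes the historical (1979–2019) moderating variable associated with climate variable i, β_{i,L} the interaction coefficients, and N_i equals N for the temperature terms and M for the precipitation terms, is

$$\Delta \mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+k_{r}\,y+\sum_{i}\sum_{L=0}^{N_{i}}\left(\alpha_{i,L}+\beta_{i,L}\,\bar{C}_{i,r}\right)\Delta C_{i,r,y-L}+\varepsilon_{r,y},$$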

in which Δlgrp r , y is the annual, regional GRPpc growth rate, measured as the first difference of the logarithm of real GRPpc, following previous work 2 , 3 , 7 , 8 , 18 , 19 . Fixed-effects regressions were run using the fixest package in R (ref.  61 ).
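As an illustration only, a hedged sketch follows of how one element of such a regression could be set up with fixest; the data frame panel, its column names and the restriction to a single climate variable (annual mean temperature with eight lags) are hypothetical simplifications rather than the authors' code.

library(fixest)
library(data.table)

# Hypothetical panel with one row per region-year:
#   dlgrp  first difference of log GRPpc
#   dT     first difference of annual mean temperature
#   Tbar   historical (1979-2019) mean temperature (moderating variable)
panel <- as.data.table(panel)
setorder(panel, region, year)
panel[, paste0("dT_l", 1:8) := shift(dT, 1:8), by = region]  # lagged climate shocks

# Region and year fixed effects, region-specific linear time trends and standard
# errors clustered by region, as described above. The full model adds the other
# four climate variables and their interaction terms analogously.
lag_terms <- paste0("dT_l", 1:8, " + dT_l", 1:8, ":Tbar", collapse = " + ")
fml <- as.formula(paste("dlgrp ~ dT + dT:Tbar +", lag_terms,
                        "| region + year + region[year]"))
est <- feols(fml, data = panel, cluster = ~region)
summary(est)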

Estimates of the coefficients of interest α i , L are shown in Extended Data Fig. 1 for N  =  M  = 10 lags and for our preferred choice of the number of lags in Supplementary Figs. 1 – 3 . In Extended Data Fig. 1 , errors are shown clustered at the regional level, but for the construction of damage projections, we block-bootstrap the regressions by region 1,000 times to provide a range of parameter estimates with which to sample the projection uncertainty (following refs.  2 , 31 ).

Spatial-lag model

In Supplementary Fig. 14 , we present the results from a spatial-lag model that explores the potential for climate impacts to ‘spill over’ into spatially neighbouring regions. We measure the distance between centroids of each pair of sub-national regions and construct spatial lags that take the average of the first-differenced climate variables and their interaction terms over neighbouring regions that are at distances of 0–500, 500–1,000, 1,000–1,500 and 1,500–2000 km (spatial lags, ‘SL’, 1 to 4). For simplicity, we then assess a spatial-lag model without temporal lags to assess spatial spillovers of contemporaneous climate impacts. This model takes the form:
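One way to sketch such a specification (illustrative only; the published form may group the spatially lagged interaction terms differently, and SL_0 here denotes the region's own value) is

$$\Delta \mathrm{lgrp}_{r,y}=\mu_{r}+\eta_{y}+k_{r}\,y+\sum_{i}\sum_{k=0}^{4}\left(\alpha_{i,k}+\beta_{i,k}\,\bar{C}_{i,r}\right)\,\mathrm{SL}_{k}\!\left(\Delta C_{i}\right)_{r,y}+\varepsilon_{r,y},$$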

in which SL indicates the spatial lag of each climate variable and interaction term. In Supplementary Fig. 14 , we plot the cumulative marginal effect of each climate variable at different baseline climate conditions by summing the coefficients for each climate variable and interaction term, for example, for average temperature impacts as:
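a sketch of which, at a given historical mean temperature \(\bar{T}_{r}\) and with the coefficients indexed as in the spatial-lag sketch above, is

$$\alpha_{T,0}+\beta_{T,0}\,\bar{T}_{r}+\sum_{k=1}^{4}\left(\alpha_{T,k}+\beta_{T,k}\,\bar{T}_{r}\right).$$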

These cumulative marginal effects can be regarded as the overall spatially dependent impact to an individual region given a one-unit shock to a climate variable in that region and all neighbouring regions at a given value of the moderating variable of the interaction term.

Constructing projections of economic damage from future climate change

We construct projections of future climate damages by applying the coefficients estimated in equation ( 10 ) and shown in Supplementary Tables 2 – 4 (when including only lags with statistically significant effects in specifications that limit overfitting; see Supplementary Methods Section  1 ) to projections of future climate change from the CMIP-6 models. Year-on-year changes in each primary climate variable of interest are calculated to reflect the year-to-year variations used in the empirical models. 30-year moving averages of the moderating variables of the interaction terms are calculated to reflect the long-term average of climatic conditions that were used for the moderating variables in the empirical models. By using moving averages in the projections, we account for the changing vulnerability to climate shocks based on the evolving long-term conditions (Supplementary Figs. 10 and 11 show that the results are robust to the precise choice of the window of this moving average). Although these climate variables are not differenced, the fact that the bias-adjusted climate models reproduce observed climatological patterns across regions for these moderating variables very accurately (Supplementary Table 6 ) with limited spread across models (<3%) precludes the possibility that any considerable bias or uncertainty is introduced by this methodological choice. However, we impose caps on these moderating variables at the 95th percentile at which they were observed in the historical data to prevent extrapolation of the marginal effects outside the range in which the regressions were estimated. This is a conservative choice that limits the magnitude of our damage projections.

Time series of primary climate variables and moderating climate variables are then combined with estimates of the empirical model parameters to evaluate the regression coefficients in equation ( 10 ), producing a time series of annual GRPpc growth-rate reductions for a given emission scenario, climate model and set of empirical model parameters. The resulting time series of growth-rate impacts reflects those occurring owing to future climate change. By contrast, a future scenario with no climate change would be one in which climate variables do not change (other than with random year-to-year fluctuations) and hence the time-averaged evaluation of equation ( 10 ) would be zero. Our approach therefore implicitly compares the future climate-change scenario to this no-climate-change baseline scenario.

The time series of growth-rate impacts owing to future climate change in region r and year y , δ r , y , are then added to the future baseline growth rates, π r , y (in log-diff form), obtained from the SSP2 scenario to yield trajectories of damaged GRPpc growth rates, ρ r , y . These trajectories are aggregated over time to estimate the future trajectory of GRPpc with future climate impacts:
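a sketch of this accumulation, working in log levels with ρ_{r,y} = π_{r,y} + δ_{r,y}, is

$$\mathrm{GRPpc}_{r,y}=\mathrm{GRPpc}_{r,y=2020}+\sum_{y'=2021}^{y}\rho_{r,y'},$$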

in which GRPpc r , y =2020 is the initial log level of GRPpc. We begin damage estimates in 2020 to reflect the damages occurring since the end of the period for which we estimate the empirical models (1979–2019) and to match the timing of mitigation-cost estimates from most IAMs (see below).

For each emission scenario, this procedure is repeated 1,000 times while randomly sampling from the selection of climate models, the selection of empirical models with different numbers of lags (shown in Supplementary Figs. 1 – 3 and Supplementary Tables 2 – 4 ) and bootstrapped estimates of the regression parameters. The result is an ensemble of future GRPpc trajectories that reflect uncertainty from both physical climate change and the structural and sampling uncertainty of the empirical models.
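A schematic R sketch of this sampling loop follows; gcms, lag_specs, boot_coefs, baseline_growth, grppc_2020 and project_growth_impacts() are hypothetical placeholders standing in for the CMIP-6 ensemble, the empirical specifications with different lag numbers, the block-bootstrapped coefficient sets, the SSP2 baseline growth rates, the initial log GRPpc levels and the evaluation of the regression terms on future climate data. It sketches the procedure described above and is not the authors' code.

set.seed(42)
n_samples <- 1000
trajectories <- vector("list", n_samples)

for (s in seq_len(n_samples)) {
  gcm_i  <- sample(length(gcms), 1)         # climate-model uncertainty
  spec_i <- sample(length(lag_specs), 1)    # empirical-model (lag-number) uncertainty
  draw_i <- sample(1000, 1)                 # block-bootstrap parameter uncertainty
  coefs  <- boot_coefs[[spec_i]][draw_i, ]

  # Growth-rate impacts of future climate change, delta[region, year], obtained by
  # evaluating the regression terms on the projected (first-differenced) climate data.
  delta <- project_growth_impacts(gcms[[gcm_i]], lag_specs[[spec_i]], coefs)

  # Damaged growth rates and cumulative log GRPpc trajectories from 2020 onwards.
  rho <- baseline_growth + delta
  trajectories[[s]] <- grppc_2020 + t(apply(rho, 1, cumsum))
}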

Estimates of mitigation costs

We obtain IPCC estimates of the aggregate costs of emission mitigation from the AR6 Scenario Explorer and Database hosted by IIASA 23. Specifically, we search the AR6 Scenarios Database World v1.1 for IAMs that provided estimates of global GDP and population under both an SSP2 baseline and an SSP2-RCP2.6 scenario to maintain consistency with the socio-economic and emission scenarios of the climate damage projections. We find five IAMs that provide data for these scenarios, namely, MESSAGE-GLOBIOM 1.0, REMIND-MAgPIE 1.5, AIM/CGE 2.0, GCAM 4.2 and WITCH-GLOBIOM 3.1. Of these five IAMs, we use the results only from the first three that passed the IPCC vetting procedure for reproducing historical emission and climate trajectories. We then estimate global mitigation costs as the percentage difference in global per capita GDP between the SSP2 baseline and the SSP2-RCP2.6 emission scenario. In the case of one of these IAMs, estimates of mitigation costs begin in 2020, whereas in the case of two others, mitigation costs begin in 2010. The mitigation cost estimates before 2020 in these two IAMs are mostly negligible, and our choice to begin comparison with damage estimates in 2020 is conservative with respect to the relative weight of climate damages compared with mitigation costs for these two IAMs.

Data availability

Data on economic production and ERA-5 climate data are publicly available at https://doi.org/10.5281/zenodo.4681306 (ref. 62 ) and https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5 , respectively. Data on mitigation costs are publicly available at https://data.ene.iiasa.ac.at/ar6/#/downloads . Processed climate and economic data, as well as all other necessary data for reproduction of the results, are available at the public repository https://doi.org/10.5281/zenodo.10562951  (ref. 63 ).

Code availability

All code necessary for reproduction of the results is available at the public repository https://doi.org/10.5281/zenodo.10562951  (ref. 63 ).

Glanemann, N., Willner, S. N. & Levermann, A. Paris Climate Agreement passes the cost-benefit test. Nat. Commun. 11 , 110 (2020).

Burke, M., Hsiang, S. M. & Miguel, E. Global non-linear effect of temperature on economic production. Nature 527 , 235–239 (2015).

Kalkuhl, M. & Wenz, L. The impact of climate conditions on economic production. Evidence from a global panel of regions. J. Environ. Econ. Manag. 103 , 102360 (2020).

Moore, F. C. & Diaz, D. B. Temperature impacts on economic growth warrant stringent mitigation policy. Nat. Clim. Change 5 , 127–131 (2015).

Drouet, L., Bosetti, V. & Tavoni, M. Net economic benefits of well-below 2°C scenarios and associated uncertainties. Oxf. Open Clim. Change 2 , kgac003 (2022).

Ueckerdt, F. et al. The economically optimal warming limit of the planet. Earth Syst. Dyn. 10 , 741–763 (2019).

Kotz, M., Wenz, L., Stechemesser, A., Kalkuhl, M. & Levermann, A. Day-to-day temperature variability reduces economic growth. Nat. Clim. Change 11 , 319–325 (2021).

Kotz, M., Levermann, A. & Wenz, L. The effect of rainfall changes on economic production. Nature 601 , 223–227 (2022).

Kousky, C. Informing climate adaptation: a review of the economic costs of natural disasters. Energy Econ. 46 , 576–592 (2014).

Harlan, S. L. et al. in Climate Change and Society: Sociological Perspectives (eds Dunlap, R. E. & Brulle, R. J.) 127–163 (Oxford Univ. Press, 2015).

Bolton, P. et al. The Green Swan (BIS Books, 2020).

Alogoskoufis, S. et al. ECB Economy-wide Climate Stress Test: Methodology and Results (European Central Bank, 2021).

Weber, E. U. What shapes perceptions of climate change? Wiley Interdiscip. Rev. Clim. Change 1 , 332–342 (2010).

Markowitz, E. M. & Shariff, A. F. Climate change and moral judgement. Nat. Clim. Change 2 , 243–247 (2012).

Riahi, K. et al. The shared socioeconomic pathways and their energy, land use, and greenhouse gas emissions implications: an overview. Glob. Environ. Change 42 , 153–168 (2017).

Auffhammer, M., Hsiang, S. M., Schlenker, W. & Sobel, A. Using weather data and climate model output in economic analyses of climate change. Rev. Environ. Econ. Policy 7 , 181–198 (2013).

Kolstad, C. D. & Moore, F. C. Estimating the economic impacts of climate change using weather observations. Rev. Environ. Econ. Policy 14 , 1–24 (2020).

Dell, M., Jones, B. F. & Olken, B. A. Temperature shocks and economic growth: evidence from the last half century. Am. Econ. J. Macroecon. 4 , 66–95 (2012).

Newell, R. G., Prest, B. C. & Sexton, S. E. The GDP-temperature relationship: implications for climate change damages. J. Environ. Econ. Manag. 108 , 102445 (2021).

Kikstra, J. S. et al. The social cost of carbon dioxide under climate-economy feedbacks and temperature variability. Environ. Res. Lett. 16 , 094037 (2021).

Bastien-Olvera, B. & Moore, F. Persistent effect of temperature on GDP identified from lower frequency temperature variability. Environ. Res. Lett. 17 , 084038 (2022).

Eyring, V. et al. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev. 9 , 1937–1958 (2016).

Byers, E. et al. AR6 scenarios database. Zenodo https://zenodo.org/records/7197970 (2022).

Burke, M., Davis, W. M. & Diffenbaugh, N. S. Large potential reduction in economic damages under UN mitigation targets. Nature 557 , 549–553 (2018).

Kotz, M., Wenz, L. & Levermann, A. Footprint of greenhouse forcing in daily temperature variability. Proc. Natl Acad. Sci. 118 , e2103294118 (2021).

Myhre, G. et al. Frequency of extreme precipitation increases extensively with event rareness under global warming. Sci. Rep. 9 , 16063 (2019).

Min, S.-K., Zhang, X., Zwiers, F. W. & Hegerl, G. C. Human contribution to more-intense precipitation extremes. Nature 470 , 378–381 (2011).

England, M. R., Eisenman, I., Lutsko, N. J. & Wagner, T. J. The recent emergence of Arctic Amplification. Geophys. Res. Lett. 48 , e2021GL094086 (2021).

Fischer, E. M. & Knutti, R. Anthropogenic contribution to global occurrence of heavy-precipitation and high-temperature extremes. Nat. Clim. Change 5 , 560–564 (2015).

Pfahl, S., O’Gorman, P. A. & Fischer, E. M. Understanding the regional pattern of projected future changes in extreme precipitation. Nat. Clim. Change 7 , 423–427 (2017).

Callahan, C. W. & Mankin, J. S. Globally unequal effect of extreme heat on economic growth. Sci. Adv. 8 , eadd3726 (2022).

Diffenbaugh, N. S. & Burke, M. Global warming has increased global economic inequality. Proc. Natl Acad. Sci. 116 , 9808–9813 (2019).

Callahan, C. W. & Mankin, J. S. National attribution of historical climate damages. Clim. Change 172 , 40 (2022).

Burke, M. & Tanutama, V. Climatic constraints on aggregate economic output. National Bureau of Economic Research, Working Paper 25779. https://doi.org/10.3386/w25779 (2019).

Kahn, M. E. et al. Long-term macroeconomic effects of climate change: a cross-country analysis. Energy Econ. 104 , 105624 (2021).

Desmet, K. et al. Evaluating the economic cost of coastal flooding. National Bureau of Economic Research, Working Paper 24918. https://doi.org/10.3386/w24918 (2018).

Hsiang, S. M. & Jina, A. S. The causal effect of environmental catastrophe on long-run economic growth: evidence from 6,700 cyclones. National Bureau of Economic Research, Working Paper 20352. https://doi.org/10.3386/w20352 (2014).

Ritchie, P. D. et al. Shifts in national land use and food production in Great Britain after a climate tipping point. Nat. Food 1 , 76–83 (2020).

Dietz, S., Rising, J., Stoerk, T. & Wagner, G. Economic impacts of tipping points in the climate system. Proc. Natl Acad. Sci. 118 , e2103081118 (2021).

Bastien-Olvera, B. A. & Moore, F. C. Use and non-use value of nature and the social cost of carbon. Nat. Sustain. 4 , 101–108 (2021).

Carleton, T. et al. Valuing the global mortality consequences of climate change accounting for adaptation costs and benefits. Q. J. Econ. 137 , 2037–2105 (2022).

Bastien-Olvera, B. A. et al. Unequal climate impacts on global values of natural capital. Nature 625 , 722–727 (2024).

Malik, A. et al. Impacts of climate change and extreme weather on food supply chains cascade across sectors and regions in Australia. Nat. Food 3 , 631–643 (2022).

Kuhla, K., Willner, S. N., Otto, C., Geiger, T. & Levermann, A. Ripple resonance amplifies economic welfare loss from weather extremes. Environ. Res. Lett. 16 , 114010 (2021).

Schleypen, J. R., Mistry, M. N., Saeed, F. & Dasgupta, S. Sharing the burden: quantifying climate change spillovers in the European Union under the Paris Agreement. Spat. Econ. Anal. 17 , 67–82 (2022).

Dasgupta, S., Bosello, F., De Cian, E. & Mistry, M. Global temperature effects on economic activity and equity: a spatial analysis. European Institute on Economics and the Environment, Working Paper 22-1 (2022).

Neal, T. The importance of external weather effects in projecting the macroeconomic impacts of climate change. UNSW Economics Working Paper 2023-09 (2023).

Deryugina, T. & Hsiang, S. M. Does the environment still matter? Daily temperature and income in the United States. National Bureau of Economic Research, Working Paper 20750. https://doi.org/10.3386/w20750 (2014).

Hersbach, H. et al. The ERA5 global reanalysis. Q. J. R. Meteorol. Soc. 146 , 1999–2049 (2020).

Cucchi, M. et al. WFDE5: bias-adjusted ERA5 reanalysis data for impact studies. Earth Syst. Sci. Data 12 , 2097–2120 (2020).

Adler, R. et al. The New Version 2.3 of the Global Precipitation Climatology Project (GPCP) Monthly Analysis Product 1072–1084 (University of Maryland, 2016).

Lange, S. Trend-preserving bias adjustment and statistical downscaling with ISIMIP3BASD (v1.0). Geosci. Model Dev. 12 , 3055–3070 (2019).

Wenz, L., Carr, R. D., Kögel, N., Kotz, M. & Kalkuhl, M. DOSE – global data set of reported sub-national economic output. Sci. Data 10 , 425 (2023).

Gennaioli, N., La Porta, R., Lopez De Silanes, F. & Shleifer, A. Growth in regions. J. Econ. Growth 19 , 259–309 (2014).

Board of Governors of the Federal Reserve System (US). U.S. dollars to euro spot exchange rate. https://fred.stlouisfed.org/series/AEXUSEU (2022).

World Bank. GDP deflator. https://data.worldbank.org/indicator/NY.GDP.DEFL.ZS (2022).

Jones, B. & O’Neill, B. C. Spatially explicit global population scenarios consistent with the Shared Socioeconomic Pathways. Environ. Res. Lett. 11 , 084003 (2016).

Murakami, D. & Yamagata, Y. Estimation of gridded population and GDP scenarios with spatially explicit statistical downscaling. Sustainability 11 , 2106 (2019).

Koch, J. & Leimbach, M. Update of SSP GDP projections: capturing recent changes in national accounting, PPP conversion and Covid 19 impacts. Ecol. Econ. 206 (2023).

Carleton, T. A. & Hsiang, S. M. Social and economic impacts of climate. Science 353 , aad9837 (2016).

Bergé, L. Efficient estimation of maximum likelihood models with multiple fixed-effects: the R package FENmlm. DEM Discussion Paper Series 18-13 (2018).

Kalkuhl, M., Kotz, M. & Wenz, L. DOSE - The MCC-PIK Database Of Subnational Economic output. Zenodo https://zenodo.org/doi/10.5281/zenodo.4681305 (2021).

Kotz, M., Wenz, L. & Levermann, A. Data and code for “The economic commitment of climate change”. Zenodo https://zenodo.org/doi/10.5281/zenodo.10562951 (2024).

Dasgupta, S. et al. Effects of climate change on combined labour productivity and supply: an empirical, multi-model study. Lancet Planet. Health 5 , e455–e465 (2021).

Lobell, D. B. et al. The critical role of extreme heat for maize production in the United States. Nat. Clim. Change 3 , 497–501 (2013).

Zhao, C. et al. Temperature increase reduces global yields of major crops in four independent estimates. Proc. Natl Acad. Sci. 114 , 9326–9331 (2017).

Wheeler, T. R., Craufurd, P. Q., Ellis, R. H., Porter, J. R. & Prasad, P. V. Temperature variability and the yield of annual crops. Agric. Ecosyst. Environ. 82 , 159–167 (2000).

Rowhani, P., Lobell, D. B., Linderman, M. & Ramankutty, N. Climate variability and crop production in Tanzania. Agric. For. Meteorol. 151 , 449–460 (2011).

Ceglar, A., Toreti, A., Lecerf, R., Van der Velde, M. & Dentener, F. Impact of meteorological drivers on regional inter-annual crop yield variability in France. Agric. For. Meteorol. 216 , 58–67 (2016).

Shi, L., Kloog, I., Zanobetti, A., Liu, P. & Schwartz, J. D. Impacts of temperature and its variability on mortality in New England. Nat. Clim. Change 5 , 988–991 (2015).

Xue, T., Zhu, T., Zheng, Y. & Zhang, Q. Declines in mental health associated with air pollution and temperature variability in China. Nat. Commun. 10 , 2165 (2019).

Liang, X.-Z. et al. Determining climate effects on US total agricultural productivity. Proc. Natl Acad. Sci. 114 , E2285–E2292 (2017).

Desbureaux, S. & Rodella, A.-S. Drought in the city: the economic impact of water scarcity in Latin American metropolitan areas. World Dev. 114 , 13–27 (2019).

Damania, R. The economics of water scarcity and variability. Oxf. Rev. Econ. Policy 36 , 24–44 (2020).

Davenport, F. V., Burke, M. & Diffenbaugh, N. S. Contribution of historical precipitation change to US flood damages. Proc. Natl Acad. Sci. 118 , e2017524118 (2021).

Dave, R., Subramanian, S. S. & Bhatia, U. Extreme precipitation induced concurrent events trigger prolonged disruptions in regional road networks. Environ. Res. Lett. 16 , 104050 (2021).

Acknowledgements

We gratefully acknowledge financing from the Volkswagen Foundation and the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH on behalf of the Government of the Federal Republic of Germany and Federal Ministry for Economic Cooperation and Development (BMZ).

Open access funding provided by Potsdam-Institut für Klimafolgenforschung (PIK) e.V.

Author information

Authors and Affiliations

Research Domain IV, Potsdam Institute for Climate Impact Research, Potsdam, Germany

Maximilian Kotz, Anders Levermann & Leonie Wenz

Institute of Physics, Potsdam University, Potsdam, Germany

Maximilian Kotz & Anders Levermann

Mercator Research Institute on Global Commons and Climate Change, Berlin, Germany

Leonie Wenz

Contributions

All authors contributed to the design of the analysis. M.K. conducted the analysis and produced the figures. All authors contributed to the interpretation and presentation of the results. M.K. and L.W. wrote the manuscript.

Corresponding author

Correspondence to Leonie Wenz .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Xin-Zhong Liang, Chad Thackeray and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Constraining the persistence of historical climate impacts on economic growth rates.

The results of a panel-based fixed-effects distributed lag model for the effects of annual mean temperature ( a ), daily temperature variability ( b ), total annual precipitation ( c ), the number of wet days ( d ) and extreme daily precipitation ( e ) on sub-national economic growth rates. Point estimates show the effects of a 1 °C or one standard deviation increase (for temperature and precipitation variables, respectively) at the lower quartile, median and upper quartile of the relevant moderating variable (green, orange and purple, respectively) at different lagged periods after the initial shock (note that these are not cumulative effects). Climate variables are used in their first-differenced form (see main text for discussion) and the moderating climate variables are the annual mean temperature, seasonal temperature difference, total annual precipitation, number of wet days and annual mean temperature, respectively, in panels a – e (see Methods for further discussion). Error bars show the 95% confidence intervals having clustered standard errors by region. The within-region R 2 , Bayesian and Akaike information criteria for the model are shown at the top of the figure. This figure shows results with ten lags for each variable to demonstrate the observed levels of persistence, but our preferred specifications remove later lags based on the statistical significance of terms shown above and the information criteria shown in Extended Data Fig. 2 . The resulting models without later lags are shown in Supplementary Figs. 1 – 3 .

Extended Data Fig. 2 Incremental lag-selection procedure using information criteria and within-region R².

Starting from a panel-based fixed-effects distributed lag model estimating the effects of climate on economic growth using the real historical data (as in equation (4)) with ten lags for all climate variables (as shown in Extended Data Fig. 1), lags are incrementally removed for one climate variable at a time. The resulting Bayesian and Akaike information criteria are shown in a–e and f–j, respectively, and the within-region R² and number of observations in k–o and p–t, respectively. Different rows show the results when removing lags from different climate variables, ordered from top to bottom as annual mean temperature, daily temperature variability, total annual precipitation, the number of wet days and extreme annual precipitation. Information criteria show minima at approximately four lags for precipitation variables and eight to ten for temperature variables, indicating that including these numbers of lags does not lead to overfitting. See Supplementary Table 1 for an assessment using information criteria to determine whether including further climate variables causes overfitting.
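
The incremental lag-selection step described here amounts to refitting the growth regression with successively fewer lags of one climate variable and recording the information criteria. A minimal sketch under stated assumptions (hypothetical column names; plain OLS with region and year dummies stands in for the full panel specification, and the lags of the other climate variables are held fixed):

```python
import statsmodels.formula.api as smf


def lag_selection_scores(df, var="d_temp", max_lags=10):
    """Record BIC, AIC and R^2 as lags of one climate variable are removed."""
    scores = []
    for n_lags in range(max_lags, -1, -1):
        lags = " + ".join(f"{var}_l{k}" for k in range(n_lags + 1))
        # region and year dummies stand in for the fixed effects of the panel model
        res = smf.ols(f"growth ~ {lags} + C(region) + C(year)", data=df).fit()
        scores.append({"n_lags": n_lags, "bic": res.bic,
                       "aic": res.aic, "r2": res.rsquared})
    # retain the lag length at which the information criterion is minimised
    return min(scores, key=lambda s: s["bic"])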

Extended Data Fig. 3 Damages in our preferred specification that provides a robust lower bound on the persistence of climate impacts on economic growth versus damages in specifications of pure growth or pure level effects.

Estimates of future damages as shown in Fig. 1 but under the emission scenario RCP8.5 for three separate empirical specifications: in orange, our preferred specification, which provides an empirical lower bound on the persistence of climate impacts on economic growth rates while avoiding assumptions of infinite persistence (see main text for further discussion); in purple, a specification of ‘pure growth effects’ in which the first difference of climate variables is not taken and no lagged climate variables are included (the baseline specification of ref. 2); and in pink, a specification of ‘pure level effects’ in which the first difference of climate variables is taken but no lagged terms are included.
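
The three specifications differ only in how the climate term enters the growth regression. The following minimal sketch (temperature only; column names hypothetical, not the authors' code) makes that difference explicit:

```python
import pandas as pd


def temperature_regressors(df: pd.DataFrame, spec: str, n_lags: int = 8) -> pd.DataFrame:
    """Return the temperature terms used under each specification."""
    by_region = df.groupby("region")["temp"]
    if spec == "pure_growth":
        # levels, no lags: a shock is assumed to alter the growth rate permanently
        return df[["temp"]]
    if spec == "pure_levels":
        # first difference, no lags: a shock affects growth only in the year it occurs
        return pd.DataFrame({"d_temp": by_region.diff()})
    # preferred: first difference plus a finite set of lags, so the data determine
    # how long the growth effect persists (a lower bound on persistence)
    d = by_region.diff()
    return pd.DataFrame({f"d_temp_l{k}": d.groupby(df["region"]).shift(k)
                         for k in range(n_lags + 1)})
```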

Extended Data Fig. 4 Climate changes in different variables as a function of historical interannual variability.

Changes in each climate variable of interest from 1979–2019 to 2035–2065 under the high-emission scenario SSP5-RCP8.5, expressed as a percentage of the historical variability of each measure. Historical variability is estimated as the standard deviation of each detrended climate variable over the period 1979–2019 during which the empirical models were identified (detrending is appropriate because of the inclusion of region-specific linear time trends in the empirical models). See Supplementary Fig. 13 for changes expressed in standard units. Data on national administrative boundaries are obtained from the GADM database version 3.6 and are freely available for academic use (https://gadm.org/).
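
The normalization used in this figure can be written compactly. A minimal sketch, assuming one region's annual series are available as NumPy arrays:

```python
import numpy as np


def change_vs_historical_variability(hist: np.ndarray, future: np.ndarray) -> float:
    """hist: annual values for 1979-2019; future: annual values for 2035-2065.
    Returns the projected change as a percentage of historical variability."""
    years = np.arange(hist.size)
    trend = np.polyval(np.polyfit(years, hist, 1), years)  # linear trend
    sigma = (hist - trend).std(ddof=1)  # s.d. of the detrended historical series
    return 100.0 * (future.mean() - hist.mean()) / sigma
```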

Extended Data Fig. 5 Contribution of different climate variables to overall committed damages.

a, Climate damages in 2049 when using empirical models that account for all climate variables, changes in annual mean temperature only or changes in both annual mean temperature and one other climate variable (daily temperature variability, total annual precipitation, the number of wet days and extreme daily precipitation, respectively). b, The cumulative marginal effects of an increase in annual mean temperature of 1 °C, at different baseline temperatures, estimated from empirical models including all climate variables or annual mean temperature only. Estimates and uncertainty bars represent the median and 95% confidence intervals obtained from 1,000 block-bootstrap resamples from each of three different empirical models using eight, nine or ten lags of temperature terms.
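
The block-bootstrap procedure referred to here resamples whole regions with replacement before refitting the empirical model. A minimal sketch, in which fit_fn is a hypothetical wrapper that refits the regression on a resampled panel and returns the quantity of interest (for example, a cumulative marginal effect):

```python
import numpy as np
import pandas as pd


def block_bootstrap(df: pd.DataFrame, fit_fn, n_boot: int = 1000, seed: int = 0):
    """Median and 95% interval of fit_fn over region-level bootstrap resamples."""
    rng = np.random.default_rng(seed)
    regions = df["region"].unique()
    draws = []
    for _ in range(n_boot):
        picked = rng.choice(regions, size=regions.size, replace=True)
        sample = pd.concat([df[df["region"] == r] for r in picked],
                           ignore_index=True)
        draws.append(fit_fn(sample))
    lo, med, hi = np.percentile(draws, [2.5, 50.0, 97.5])
    return med, (lo, hi)
```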

Extended Data Fig. 6 The difference in committed damages between the upper and lower quartiles of countries when ranked by GDP and cumulative historical emissions.

Quartiles are defined using a population weighting, as are the average committed damages across each quartile group. The violin plots indicate the distribution of differences between quartiles across the two extreme emission scenarios (RCP2.6 and RCP8.5) and the uncertainty sampling procedure outlined in Methods, which accounts for uncertainty arising from the choice of lags in the empirical models, uncertainty in the empirical model parameter estimates, as well as the climate model projections. Bars indicate the median, as well as the 10th and 90th percentiles and upper and lower sixths of the distribution, reflecting the very likely and likely ranges following the likelihood classification adopted by the IPCC.
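
The population-weighted quartile comparison can be sketched as follows; this is an illustration under assumed column names (damage, population and a ranking column such as GDP per capita or cumulative emissions), not the authors' code:

```python
import numpy as np
import pandas as pd


def quartile_damage_gap(df: pd.DataFrame, rank_col: str) -> float:
    """Difference in population-weighted mean damages between the lowest and
    highest population-weighted quartile of countries ranked by rank_col."""
    df = df.sort_values(rank_col).copy()
    cum_share = df["population"].cumsum() / df["population"].sum()
    df["quartile"] = np.minimum((cum_share * 4).astype(int), 3)  # 0 = lowest quartile
    means = df.groupby("quartile").apply(
        lambda g: np.average(g["damage"], weights=g["population"]))
    return means.loc[0] - means.loc[3]
```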

Supplementary information

Supplementary Information

Peer Review File

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Kotz, M., Levermann, A. & Wenz, L. The economic commitment of climate change. Nature 628, 551–557 (2024). https://doi.org/10.1038/s41586-024-07219-0

Download citation

Received : 25 January 2023

Accepted : 21 February 2024

Published : 17 April 2024

Issue Date : 18 April 2024

DOI : https://doi.org/10.1038/s41586-024-07219-0


