Essay Writing Guide

Hook Examples: How to Start Your Essay Effectively

By: Nova A.

Reviewed By: Jacklyn H.

Published on: Feb 19, 2019

Last updated on: Nov 20, 2023

15 min read

Tired of getting poor grades on your high school or college essays? Feeling lost when it comes to capturing your professor's attention?

Whether you're a high school or college student, the constant stream of essays, assignments, and projects can be overwhelming. But fear not!

There's a secret weapon at your disposal: hooks. 

These attention-grabbing phrases are the key to keeping your reader hooked and eager for more. In this blog, we'll explore powerful essay hook examples that will solve all your essay writing concerns.

So let’s get started!


What is an Essay Hook?

An essay hook is the opening sentence or a few sentences in an essay that grab the reader's attention and engage them from the very beginning. It is called a "hook" because it is designed to reel in the reader and make them interested in reading the rest of the essay.

The purpose of an essay hook is to:

  • Grab the reader's attention from the very beginning
  • Create curiosity and intrigue
  • Engage the reader emotionally
  • Establish the tone and direction of the essay
  • Make the reader want to continue reading
  • Provide a seamless transition into the rest of the essay
  • Set the stage for the main argument or narrative
  • Make the essay memorable and stand out
  • Demonstrate the writer's skill in captivating an audience

For more detail, check out our complete guide on how to start an essay!

How to Write a Hook?

The opening lines of your essay serve as the hook, capturing your reader's attention right from the start. Remember, the hook is a part of your essay introduction and shouldn't replace it.

A well-crafted introduction consists of a hook followed by a thesis statement. While the hook attracts the reader, the thesis statement explains the main points of your essay.

To write an effective hook, consider the following aspects:

  • Understand the nature of the literary work you're addressing.
  • Familiarize yourself with your audience's preferences and interests.
  • Clearly define the purpose behind your essay writing.

Keep in mind that the hook should be directly related to the main topic or idea of your writing piece. When it comes to essays or other academic papers, you can employ various types of hooks that align with your specific requirements. 


Hook Sentence Examples

To give you a better understanding of the different types of essay hooks, let's look at examples of each.

Question Hook

Starting your essay by asking a thought-provoking question can be a good way to engage the reader. Ask your reader a question that they can visualize. However, make sure to keep your questions relevant to the reader's interests. Avoid generalized questions and simple yes-or-no questions.

Rhetorical questions also make good hooks.

  • “How are successful college students different from unsuccessful college students?”
  • “What is the purpose of our existence?”
  • “Have you ever wondered whether Hazel Grace and Augustus Waters would have still been together if he hadn’t died of cancer?”
  • "Ever wondered what lies beneath the ocean's depths? Dive into an underwater adventure and uncover the wonders of the deep sea."
  • "Have you ever pondered the true meaning of happiness? Join us on a quest to unravel the secrets of lasting joy."
  • "Ready to challenge your limits? How far would you go to achieve your dreams and become the best version of yourself?"
  • "Curious about the future of technology? Can you envision a world where robots and humans coexist harmoniously?"
  • "Are you tired of the same old recipes? Spice up your culinary repertoire with exotic flavors and innovative cooking techniques."
  • "Are you ready to take control of your finances? Imagine a life of financial freedom and the possibilities it brings."
  • "Ever wondered what it takes to create a masterpiece? Discover the untold stories behind the world's most celebrated works of art."

Quotation Hook

A quotation from a famous person can be used to open an essay and attract the reader's attention. However, the quote needs to be relevant to your topic and must come from a credible source. To clear up any confusion the reader might have, it is best to explain the meaning of the quote afterward.

Here are some quotes you can use to start your essay:

  • “Education is the most powerful weapon you can use to change the world.” - Nelson Mandela
  • If your topic is related to hard work and making your own destiny, you can start by quoting Michael Jordan: “Some people want it to happen; some wish it would happen; others make it happen.”
  • "The only way to do great work is to love what you do." - Steve Jobs
  • "In the middle of difficulty lies opportunity." - Albert Einstein
  • "Don't watch the clock; do what it does. Keep going." - Sam Levenson
  • "Believe you can and you're halfway there." - Theodore Roosevelt
  • "The best way to predict the future is to create it." - Peter Drucker
  • "The harder I work, the luckier I get." - Samuel Goldwyn
  • "Don't let yesterday take up too much of today." - Will Rogers


Statistic Hook

Here you use statistical data, such as numbers, figures, and percentages, to hook the reader. This approach is mostly used in informative writing to provide the reader with new and interesting facts, and it is important to mention the source.

  • “Reports have shown that almost two-thirds of adults in the United States of America have lived in a place with at least one gun at some point in their lives.”
  • Another persuasive essay hook example, about people’s psychology and lying: “It is noted by Allison Kornet in Psychology Today magazine that people lie in one out of every five conversations that last for at least 10 minutes.”
  • "Did you know that 8 out of 10 entrepreneurs fail within their first year? Discover the secrets of the successful 20% and defy the odds."
  • "According to recent studies, people spend an average of 2 hours and 22 minutes on social media every day. Is it time to reevaluate our digital habits?"
  • "Did you know that over 75% of communication is non-verbal? Explore the power of body language and unlock the secrets of effective communication."
  • "Research shows that 1 in 4 adults suffer from mental health issues. It's time to break the stigma and prioritize our well-being."
  • "Did you know that nearly 70% of consumers rely on online reviews before making a purchase? Build trust and boost your business with positive feedback."
  • "According to recent data, the global e-commerce industry is projected to reach $6.38 trillion by 2024. Don't miss out on the digital revolution."
  • "Did you know that 80% of car accidents are caused by distracted driving? Let's put an end to this dangerous epidemic."

Anecdotal Hook

An anecdote is a short story relevant to the essay topic, told to grab the reader’s attention. The story can be drawn from personal experience or from your imagination. Often, an anecdote is humorous; it makes the reader laugh and leaves them wanting to read more.

It is mostly used when writing narrative or descriptive essays.

For example, imagine a non-English speaker calling the support department or a helpline and hearing:

  • “If you want instructions in English, press 1. If you don't understand English, press 2.”

Or consider this short anecdote:

  • “An elderly man came in to buy a TV and asked the shopkeeper whether they had color TVs. When told that they were available, he asked to purchase a purple one.”

Here are some more anecdotal hook examples:

  • "Picture this: It was a cold winter's night, the snowflakes gently falling from the sky, as I embarked on a journey that would change my life forever..."
  • "I still remember the day vividly, sitting in my grandmother's kitchen, the aroma of freshly baked cookies filling the air. Little did I know, that day would teach me a valuable lesson about the power of kindness..."
  • "It was a crowded subway ride during rush hour, everyone lost in their own world. But then, a stranger's act of generosity restored my faith in humanity..."
  • "As I stepped onto the stage, the spotlight shining down, my heart pounding with a mix of excitement and nerves. It was in that moment, I realized the transformative power of facing your fears..."
  • "In the heart of the bustling city, amidst the noise and chaos, I stumbled upon a hidden park, an oasis of serenity that reminded me of the importance of finding peace within ourselves..."
  • "The dusty attic held countless treasures, but it was the tattered journal that caught my eye. As I flipped through its pages, I discovered the untold story of my ancestors, and a connection to my roots I never knew I had..."
  • "Lost in the maze of a foreign city, unable to speak the language, I relied on the kindness of strangers who became my unexpected guides and lifelong friends..."
  • "As the final notes of the symphony resonated through the concert hall, the audience erupted in a thunderous applause. It was in that moment, I witnessed the pure magic that music can evoke..."

Personal Story

Starting with a personal story is the right way to go when writing a personal narrative or a college admissions essay.

The story doesn't have to be your own; you can share a friend's story or the story of someone you know.

Remember that such hooks aren't suitable when writing a more formal or argumentative piece of writing.

  • “My father was in the Navy; I basically grew up on a cruise. As a young boy, I saw things beyond anyone's imagination. On April 15, 2001…”
  • "Growing up, I was the shyest kid in the classroom. But one day, a simple act of courage changed the course of my life forever..."
  • "I'll never forget the exhilarating rush I felt as I crossed the finish line of my first marathon, defying all odds and proving to myself that anything is possible..."
  • "At the age of 18, I packed my bags, bid farewell to familiarity, and embarked on a solo adventure across the globe. Little did I know, it would become the journey of self-discovery I had always longed for..."
  • "As a single parent, juggling multiple jobs and responsibilities, I faced countless obstacles. But my unwavering determination and the support of my loved ones propelled me towards success..."
  • "It was a rainy day when I stumbled upon an old, forgotten journal in my grandmother's attic. Its pages held untold stories and secrets that would unearth the hidden truths of our family history..."
  • "The sound of applause echoed through the auditorium as I stepped onto the stage, my heart pounding with a mix of nerves and excitement. Little did I know, that performance would be a turning point in my artistic journey..."
  • "After years of battling self-doubt, I finally found the courage to pursue my passion for writing. The moment I held my published book in my hands, I knew I had conquered my fears and embraced my true calling..."
  • "As a volunteer in a remote village, I witnessed the resilience and strength of the human spirit. The people I met and the stories they shared forever changed my perspective on life..."
  • "In the midst of a turbulent relationship, I made the difficult decision to walk away and embark on a journey of self-love and rediscovery. It was through that process that I found my own worth and reclaimed my happiness..."

In the sections below, we will also discuss hook examples for different kinds of essays.

Surprising Statement Hook

A surprising statement hook is a bold and unexpected statement that grabs the reader's attention and piques their curiosity. It challenges their assumptions and compels them to delve deeper into the topic. Examples:

  • "Contrary to popular belief, spiders are our unsung heroes, silently protecting our homes from pesky insects and maintaining delicate ecological balance."
  • "Forget what you know about time management. The key to productivity lies in working less, not more."
  • "In a world where technology dominates, studies show that the old-fashioned pen and paper can boost memory and learning."
  • "You'll be shocked to discover that the average person spends more time scrolling through social media than sleeping."
  • "Contrary to popular belief, introverts possess hidden powers that can make them exceptional leaders."
  • "Prepare to be amazed: chocolate can actually be beneficial for your health when consumed in moderation."
  • "Buckle up, because recent research reveals that multitasking can actually make you less productive, not more."
  • "Did you know that learning a new language can slow down the aging process and keep your brain sharp?"
  • "Hold onto your hats: studies suggest that taking regular naps can enhance your overall productivity and creativity."
  • "You won't believe it, but playing video games in moderation can enhance problem-solving skills and boost cognitive function."

Argumentative Essay Hook Examples

The opening paragraph of an argumentative essay should work like the opening statement of a trial. Just as a lawyer lays out their case logically, you must do the same in your essay.

For example, suppose you are writing about the adverse effects of smoking and arguing that all public places should be made no-smoking zones. For such an essay, a good hook is a statistic, such as:

“According to the World Health Organization, tobacco kills about five million people every year, which is more than the combined death toll from HIV/AIDS, TB, and malaria.”


Persuasive Essay Hook Examples

The aim of a persuasive essay is to convince the reader to do something, or to change their beliefs and bring them around to your point of view.

A good hook for such an essay is a shocking revelation that makes the reader curious to learn more.

“Each year, humans release approximately 38.2 billion tons of carbon dioxide. As a result, atmospheric carbon dioxide levels are now higher than they have been in centuries. If you think climate change is nothing to worry about, you are highly mistaken.”

Narrative Essay Hook Examples

Simply put, a narrative essay is just like a story. In other types of essays, you need to pick a side and prove your point with evidence. A narrative essay gives you a free hand to tell your story however you please.

It can be a story inspired by your life, something you have experienced. If you feel it isn't exciting enough, you can always embellish it with your imagination.

A hook sentence for a narrative essay can be something like:

“I was riding the bus to school; the other kids were making fun of me thinking I couldn’t understand them. “Why are his eyes like that?” “His face is funny.” A Chinese kid in America is probably like a zoo animal.”

Subject-wise Hook Examples

Here are 20 interesting hook examples across various subjects:

  • Technology: "Imagine a world where machines can read our thoughts. Welcome to the future of mind-reading technology."
  • Health and Wellness: "Did you know that a simple 10-minute meditation can change your entire day? Unlock the transformative power of mindfulness."
  • Environment: "The clock is ticking. Discover the urgent and astonishing truth behind the disappearing rainforests."
  • Travel: "Pack your bags and leave your comfort zone behind. Uncover the hidden gems of off-the-beaten-path destinations."
  • History: "Step into the shoes of a time traveler as we unravel the untold secrets of ancient civilizations."
  • Science: "Prepare to be amazed as we dive into the mind-bending world of quantum physics and its implications for our understanding of reality."
  • Education: "Traditional classrooms are a thing of the past. Explore the innovative and disruptive trends shaping the future of education."
  • Food and Cooking: "Savor the tantalizing flavors of a culinary revolution, where unexpected ingredient pairings redefine the boundaries of taste."
  • Psychology: "Unmask the hidden forces that drive our decision-making and explore the fascinating world of subconscious influences."
  • Art and Creativity: "Witness the collision of colors and ideas in a mesmerizing display of artistic expression. Unlock your inner creativity."
  • Finance: "Escape the paycheck-to-paycheck cycle and discover the path to financial freedom. It's time to take control of your wealth."
  • Sports: "Feel the adrenaline surge as we uncover the captivating stories behind the world's most legendary sports moments."
  • Relationships: "Love in the digital age: How technology has transformed the way we connect, flirt, and navigate modern relationships."
  • Self-Improvement: "Embark on a journey of self-discovery and learn the life-changing habits that lead to personal growth and fulfillment."
  • Business and Entrepreneurship: "From startup to success story: Explore the rollercoaster ride of building and scaling a thriving business."
  • Fashion: "Step into the fashion revolution as we decode the latest trends and unveil the stories behind iconic designer collections."
  • Music: "Unleash the power of music: How melodies, rhythms, and lyrics can touch our souls and evoke powerful emotions."
  • Politics: "Behind closed doors: Delve into the intriguing world of political maneuvering and the impact on global affairs."
  • Nature and Wildlife: "Journey to the untouched corners of our planet, where awe-inspiring creatures and breathtaking landscapes await."
  • Literature: "Enter the realm of literary magic as we explore the profound symbolism and hidden meanings within beloved classics."

In conclusion, these were some catchy hook examples to give you an idea. You can use any of these types according to your paper and its requirements. Generate free essays through our AI essay writer to see how it's done!

The key to making your essay stand out from the rest is a strong introduction. While the introduction is a major part, there's more that goes into writing a good essay.

If you are still unable to come up with an exciting hook and find yourself searching “who can write my essay?”, the expert essay writers at 5StarEssays.com are just a click away. Reach out to our essay writers today and get an engaging opening for your essay.

Frequently Asked Questions

What is a visual hook?

A visual hook is a scene that captures the audience's interest by encapsulating something essential about a movie. It usually occurs around 15 minutes into the film and is often featured in the movie's marketing and reviews.

Nova A.

As a Digital Content Strategist, Nova Allison has eight years of experience in writing both technical and scientific content. With a focus on developing online content plans that engage audiences, Nova strives to write pieces that are not only informative but captivating as well.



Lying Essays (Examples)

1,000+ documents containing “lying.”


Lying is what some see as a means to an end. It enables relationships and maintains bonds (at least that is how a lot of people act and behave every day). However, this may not be a good means of socializing when it comes to long-term relationships. People sometimes believe the saying "You can't handle the truth" and treat those they fear cannot deal with honesty by chronically lying to them. There are some who lie to themselves and prefer to live in denial of important things like looming health issues, infidelity, or an unsatisfied life to keep from going insane. Lies then become an avenue for the expression of bad behavior and for constant repression (a way to prolong or delay something) in order to survive. Why then is lying so harmful? Are there instances where lies are beneficial? Little white lies are almost always seen as harmless. People see these lies….

Lying in International Relations

What are your thoughts on lying in international relations? International relations can be a very complicated issue, and the relationships can be tenuous and easily broken if not handled correctly. The importance of a solid relationship cannot be overestimated. Countries with which a nation has a good relationship will be a source of economic support in terms of trade and lending of funds, a sharer of technological progress, as well as a potential ally in times of strife. The hope is that when countries are building a relationship, there will be a level of honesty between the governments. However, there will always be certain secrets which a nation's government should never share with another country, and with nations where there is a history of mistrust or dubious actions, it is even less prudent to share top-secret information. Some people naively state that "honesty is the best….

Works Cited:

Klitgaard, R. (1998). International cooperation against corruption. Finance and Development. (35:1). ProQuest. 3-6.

Lektzian, D. & Sova, M. (2001). Institutions and international cooperation: an event history analysis of the effects of economic sanctions. The Journal of Conflict Resolution. (45:1). ProQuest Central. 61-79.

UChicago. (2012, March 1). Why leaders lie: the truth about lying in international politics.

Lying: Rhetorical Strategies Used in Lying

Honest self-disclosure is an important factor that strengthens interpersonal relationships, since this is a manifestation of one's trust and sincerity to the individual. However, there are sometimes situations or information about one's self that cannot be easily disclosed, for reasons that every individual can have: perhaps disclosure of personal information may threaten or weaken the relationship, or simply, the individual is not yet ready to let his/her partner know about particular information about his/her life. One alternative to self-disclosure is lying. Defined as the "deliberate attempt to hide or misrepresent the truth," lying is considered unethical since it is an act of disinformation, a breach of one's trust and belief to the individual (Adler, 1998:332). There are many reasons for lying. People may lie in order to avoid tension or conflict, to save face from embarrassment or shame, to guide social interaction among other people, to expand….

Adler, R. (1998). Interplay: The process of interpersonal communication. NY: Harcourt Brace.

Lying: Is There Any Way to Know?

Is there any way(s) to know when someone is lying or telling the truth? Consider body language, voice patterns, handwriting, or other traits. Consider situations. What could you do to reach a conclusion? Detective work? A polygraph/lie detector? Yes, it is possible to detect a liar, but it can be difficult to do in some situations. There are several verbal and nonverbal clues that reveal deceit. Among these clues are physiological reactions such as pupil dilation, blushing, blinking, hyperventilating, blanching skin, and increased sweating. (Ford) Most often, individuals who are lying display one or more of these reactions, and if you pay close attention these clues are obvious. According to an article entitled "Lies!, Lies!!, Lies!!!: The Psychology of Deceit," physiological reactions "are governed by the autonomic nervous system and thus are out of the liar's control. Dilation of the pupils can be a physiological sign of fear or anxiety." (Ford) In addition the….

Works Cited

Ford, Charles V. "Lies!, Lies!!, Lies!!!: The Psychology of Deceit." American Psychiatric Press, 1996

Lying in Shakespeare's Historical Plays: Richard III and Henry IV

The situation is different in Henry IV, where the main character, prince Hal as he is called by his friends, will ascend to the throne in the second part of the play in spite of his past as a villain. As the play begins, we see the king Henry IV, prince Hal's father, caught up in the midst of a civil conflict with Hotspur and the entire Percy branch of noblemen, because of a debt he had failed to pay to them. During this conflict, Henry shows his bitterness at not having his eldest son, prince Hal to help him in the military matters. Hal is, at this time, with a group of rogues and villains who accompany him in his unlawful actions. Falstaff is the most famous of these, and seems to be Shakespeare's best known personification of falseness (a word from which his name is undoubtedly derived) lying and….

Shakespeare, William. Henry IV. London: Oxford University Press, 1972

Richard III. London: Oxford University Press, 1969

Lying on Your Resume Is Never Acceptable

Lying on your resume is never acceptable to any organization. In the case of Tracy, we understand that she lied on her resume: instead of explaining that she dropped out of her MBA program just when she was about to complete it, she mentioned the MBA anyway in order to get the job, even though it was not needed for the position she had applied for. Secondly, we understand that the organization found out the truth through someone else and not through their own suspicions, which means that Tracy had been doing a good job and no one needed to recheck her credentials and educational qualifications. We also know that Tracy dropped out due to family reasons and had been very close to getting the degree. Now that Human Resources knows that Tracy did not complete her MBA and had lied on her resume, it is in a state of conflict and wondering….

Lying Is Morally Accepted One

On the other hand there are arguments for why people should lie, in other words when lying would be justified. Lying could be justified if it was done in self-defense. This is the same concept as shooting someone in self-defense; if you have to tell a lie in order to exit a situation that is harmful to yourself it is justified. Additionally, lying to protect trade secrets, like discussed above in lying by omission, is justifiable. If businesses shared their trade secrets openly with competitors it would create many issues for the businesses that are trying to stay ahead of others. Lastly, an argument for when lying is justified is when it is done to protect national security. The cases of when lies are justified are much rarer than when they are not justified. In all cases, even when it seems justifiable the consequences should be thoroughly considered. 4 Overall….

Thiroux, Jacques and Krasemann, Keith. Ethics: Theory and Practice 10th Edition. New York: Pearson, 2009.

1 Jacques Thiroux. Ethics: Theory and Practice 10th Edition. (New York: Pearson), 2009. 276.

2 Thiroux, 280.

3 Thiroux, 283

Lying and Deceit and Questions Its Acceptability

lying and deceit and questions its acceptability in society. Lying is something which stands at a different perspective for everyone. Every form of lying happens because an individual is trying to gain or achieve something; most people's actions are meant to deceive in some way for personal gain, and this goes on beneath the surface, since people are so used to doing it. There is, however, a possibility to change someone's patterns of duplicity. The detection of this duplicity is what may be difficult. These patterns of duplicity are difficult to detect since they are already embedded within the person. This does not mean, though, that the person is of ill will. There are needs for this deception, justifiable or not, which arise through certain stressors in one's life. These types of stressors present themselves as challenges or types of competition in an individual's life. The drive for success or material….

Carson, T.L. (1988). On the definition of lying: A reply to Jones and revisions. Journal of Business Ethics. 7; 509-514.

Decosimo, D. (2010). Just lies: Finding Augustine's ethics of public lying in his treatments of lying and killing. Journal of Religious Ethics. 38(4); 661-697.

Malpas, J. (2008). Truth, Lies and Deceit: On Ethics in Contemporary Public Life. International Journal of Applied Philosophy. 22(1); 1-12.

Sims, R.L. (2000). The Relationship Between Employee Attitudes and Conflicting Expectations for Lying Behavior. Journal of Psychology. 134(6); 619-634.

Lying in Hemingway's Soldier's Home

"(Hemingway, 303) Not only did the experience of the war change and affect him in a total way, but, when he returns home, the war becomes an obstacle impossible to surmount in the way of his new life because he is forced to lie about it and about his actual experiences and feelings. e intimate from his indirectly given thoughts, that after the war, everyone taking part in it was in the habit of forging unreal stories, most of them regarding heroism and what the author terms "atrocities." The only thing that he can coherently say about his experience is that he was horribly frightened all the time, "badly, sickeningly frightened all the time," as he tells himself (Hemingway, 303). Hemingway's short story announces its theme already in the title: the phrase "soldier's home" is a very suggestive linguistic construction. However, the phrase in the title implies more than that- "soldier's….

Hemingway, Ernest. Collected Stories.

Is Lying Always Wrong

Is lying always wrong? While the concept of lying appears simple at first, upon consideration one is able to imagine any number of situations in which lying would not appear to always be wrong, thus creating something of a quandary for anyone attempting to argue in favor of ethical and honest behavior, especially in the corporate world. The problem to be investigated in this essay, then, is the problem of determining if lying is always wrong, and the implications of the answer to that question. In order to address this problem, one may examine certain relevant, well-known instances of lying in which an argument can be made for either side, such as the padding of resumes with misleading or false information, or James Frey's repackaging of a novel he wrote into an ostensible memoir. James Frey wrote a book based roughly on his own experiences but embellished enough that when he first shopped….

Jennings, M. (2009). Business ethics: Case studies and selected readings (6th ed.). Mason: South-Western Cengage Learning.

Testifying: Police Lying Under Oath

He sentenced Grimes to 30 days in jail and ordered him to pay a $500 fine. Given the fact that a lie by a police officer can deprive a person of their liberty, this seems like a reasonable sentence. Only the use of monitoring in patrol cars revealed that the tests were not performed. Members of the public must trust officers, not technology alone, to ensure that officers do not lie in court and that citizen's rights are protected. That is why Grimes' sentence is reasonable. According to court records, Grimes has more than 160 pending cases within the courts system which are now in doubt. If Grimes testified inaccurately and innocent defendants were convicted in the past, he has breached the public's trust. His proven lie also calls into question many successful prosecutions, which could result in guilty people going free. And much bureaucracy and legal wrangling lie ahead….

Barone, Patrick. (2011, May). Police lies lead to dismissal of possibly 100 DUI cases. Retrieved July 13, 2011 at http://www.drunkdrivinginmichigan.com/police-lies-lead-to-dismissal-of-possibly-100-dui-cases/

Personal Experience with Lying

Lying is perhaps one of the most common wrongs we (virtually all of us) commit in the course of our life. It could even be true to say that it is human nature to tell lies. Consciously or unconsciously, we often lie to evade embarrassing or awkward situations, get out of trouble, and/or make other people feel better or intimidate them. Unfortunately, even though lying may be good or bad depending on the situation at hand, or depending on who you ask, we usually disregard the impact our lies can have on not only those lied to, but also us. Even the most trivial of lies can have severe unexpected consequences. As an individual, I regard myself as a straightforward and honest person. This is a value that I have emulated from my father since childhood. My father has always taught me the importance of candidness and truthfulness. I have used….

Coach Lying With My Head on Pillow

coach, lying with my head on pillow; back down with a book up in the air when I heard the clock ding six times alerting me to the hour. I had been reading for several hours straight and my eyes grew heavy. The sun was setting quickly and with each passing moment the light found a new path through the window blinds. The article I was reading was written by Zhu Ziqing, who became popular in the twentieth century. The article was apparently written as Ziqing was approaching the mid-point in his life and he was reflecting on his earlier years. He was contemplating the style of parenting that he used with his children when they were younger. Ziqing was raised by parents in an authoritarian manner which was typical of his culture and he automatically, without forethought, chose the same style to use with his children. Finally the light….

Police Officer and Lying

law enforcement agencies have often struggled with officer dishonesty and the impact such an action leaves not just in the criminal justice system, but more specifically in court proceedings. When an officer lies, their credibility may be threatened due to their previous dishonest comportment. Agencies must, on a continued basis, disclose information to prosecutors concerning the issue of officer dishonesty if the officer in question must testify against a defendant. That defendant must also be made aware of the instance of officer dishonesty and if this is not done, the agencies and officers may be held accountable as well as potentially lead to dismissal of charges against the defendant. An example of this was seen in Brady v. Maryland. The landmark case of Brady v. Maryland demonstrated the effects of withholding information or evidence in case proceedings by the decision of the prosecutors to not submit Boblit's confession as evidence.….

Lewis, R. & Veltman, N. (2015). The Hard Truth About Cops Who Lie. WNYC. Retrieved 16 October 2016, from  http://www.wnyc.org/story/hard-truth-about-cops-who-lie/

Perjury: False Testimony and Lying Under Oath

Aristotle believed there should be guidelines governing the act of giving testimony (Kennedy, 2004, p. 227-228). For example, a jury member should place greater weight on the reputation and social standing of the witness than on the content of the testimony given. If a person of good character is called to testify before a formal investigative body, a reasonable listener is therefore required to open their mind to anything the witness may claim. This process of 'reciprocation' requires reasonable jury members and judges to accept as trustworthy the testimony of a reputable person, even if the events described seem incredible and go beyond their own personal experiences. Unfortunately, the days of small village tribunals where jury members knew most of the participants in a trial, and therefore the reputations and trustworthiness of witnesses, are generally a thing of the past in the United States and much of the world. Reputations of….

Cornell University. (2012). Title 18 -- Crimes and Criminal Procedure, Part I -- Crimes, Chapter 79 -- Perjury. Law.Cornell.edu. Retrieved 7 Aug. 2012 from http://www.law.cornell.edu/uscode/pdf/uscode18/lii_usc_TI_18_PA_I_CH_79_SE_1621.pdf.

Kennedy, Rick. (2004). A History of Reasonableness: Testimony and Authority in the Art of Thinking. Rochester, NY: University of Rochester Press.

Lichtman, Robert M. And Cohen, Ronald D. (2004). Deadly Farce: Harvey Matusow and the Informer System in the McCarthy Era. Urbana, IL: University of Illinois Press.

Need help with a thesis statement on different portrayals of King Arthur in books, TV series, and movies.

King Arthur has been a steady feature in pop culture since the original stories of him were told hundreds of years ago. In fact, he retains a mythical status because of the quasi-historical nature of the stories told about him, leading many people to wonder whether King Arthur was actually a real person. The consensus appears to be that he was not an actual person, but that there were real people whose stories contributed to the stories of King Arthur. It is no surprise, then, that he continues to be a compelling character in books,....

With reference to relevant theory and recent literature, critically discuss what is understood by the term ‘stress’ and the sources of stress experienced by those involved in sports.

Stress in sport can refer to two distinct things. It can refer to physical stress, which is weighed against recovery periods, or it can refer to the emotional stressors experienced by athletes in various sports. Because you referred to the sources of stress experienced by those involved in sport, we are proceeding under the assumption that you are referring to emotional stressors. We are going to provide an outline to give you an idea of what we might include in the introduction, main body paragraphs, and conclusion of an essay about that topic.

Essay Outline:

I. Introduction

Can you help with an outline for a speech on cyberbullying?

Cyberbullying is an extremely popular topic for academic essays and speeches. In fact, we have several example essays on cyberbullying and bullying. It is similar to traditional in-person bullying in many ways, but because of the reach of internet devices it is often considered more serious and damaging than many forms of in-person bullying. That is because victims describe being unable to get away from cyberbullying, and because once something is posted online, it is permanent. This has led to people taking a more serious approach to bullying. Here is an....

In your opinion, is bullying an issue that should be addressed by schools or left to parents?

Bullying is a serious issue that impacts approximately 20% of middle and high-school aged children each year.  The extent of bullying can vary, but severe bullying can lead victims to commit suicide and leave lifelong scars on its survivors.  This has led people to debate the most effective form of intervention for bullies.

Bullying used to be considered an individual problem, with schools taking few steps to intervene unless the bullying was physical and was egregious.  In fact, many middle-aged adults seem to think of school bullying as something that is within the normal range of....


Quotations about Lying

A half-truth is a whole lie. Yiddish proverb

When something important is going on, silence is a lie. A.M. Rosenthal

A truth that's told with bad intent Beats all the lies you can invent. William Blake

What upsets me is not that you lied to me, but that from now on I can no longer believe you. Friedrich Nietzsche

The lies most devastating to our self-esteem are not so much the lies we tell as the lies we live. Nathaniel Branden

The liars' punishment is not in the least that they are not believed, but that they cannot believe anyone else. George Bernard Shaw


The Truth about Lying

You can’t spot a liar just by looking, but psychologists are zeroing in on methods that might actually work.

A person undergoing a lie detector test

Police thought that 17-year-old Marty Tankleff seemed too calm after finding his mother stabbed to death and his father mortally bludgeoned in the family’s sprawling Long Island home. Authorities didn’t believe his claims of innocence, and he spent 17 years in prison for the murders.


Yet in another case, detectives thought that 16-year-old Jeffrey Deskovic seemed too distraught and too eager to help detectives after his high school classmate was found strangled. He, too, was judged to be lying and served nearly 16 years for the crime.


One man was not upset enough. The other was too upset. How can such opposite feelings both be telltale clues of hidden guilt?

They’re not, says psychologist Maria Hartwig, a deception researcher at John Jay College of Criminal Justice at the City University of New York. The men, both later exonerated, were victims of a pervasive misconception: that you can spot a liar by the way they act. Across cultures, people believe that behaviors such as averted gaze, fidgeting and stuttering betray deceivers.

In fact, researchers have found little evidence to support this belief despite decades of searching. “One of the problems we face as scholars of lying is that everybody thinks they know how lying works,” says Hartwig, who coauthored a study of nonverbal cues to lying in the Annual Review of Psychology. Such overconfidence has led to serious miscarriages of justice, as Tankleff and Deskovic know all too well. “The mistakes of lie detection are costly to society and people victimized by misjudgments,” says Hartwig. “The stakes are really high.”


Tough to tell

Psychologists have long known how hard it is to spot a liar. In 2003, psychologist Bella DePaulo, now affiliated with the University of California, Santa Barbara, and her colleagues combed through the scientific literature, gathering 116 experiments that compared people’s behavior when lying and when telling the truth. The studies assessed 102 possible nonverbal cues, including averted gaze, blinking, talking louder (a nonverbal cue because it does not depend on the words used), shrugging, shifting posture and movements of the head, hands, arms or legs. None proved reliable indicators of a liar, though a few were weakly correlated, such as dilated pupils and a tiny increase — undetectable to the human ear — in the pitch of the voice.

Three years later, DePaulo and psychologist Charles Bond of Texas Christian University reviewed 206 studies involving 24,483 observers judging the veracity of 6,651 communications by 4,435 individuals. Neither law enforcement experts nor student volunteers were able to pick true from false statements better than 54 percent of the time — just slightly above chance. In individual experiments, accuracy ranged from 31 to 73 percent, with the smaller studies varying more widely. “The impact of luck is apparent in small studies,” Bond says. “In studies of sufficient size, luck evens out.”

This size effect suggests that the greater accuracy reported in some of the experiments may just boil down to chance, says psychologist and applied data analyst Timothy Luke at the University of Gothenburg in Sweden. “If we haven’t found large effects by now,” he says, “it’s probably because they don’t exist.”


Police experts, however, have frequently made a different argument: that the experiments weren’t realistic enough. After all, they say, volunteers — mostly students — instructed to lie or tell the truth in psychology labs do not face the same consequences as criminal suspects in the interrogation room or on the witness stand. “The ‘guilty’ people had nothing at stake,” says Joseph Buckley, president of John E. Reid and Associates, which trains thousands of law enforcement officers each year in behavior-based lie detection. “It wasn’t real, consequential motivation.”

Samantha Mann, a psychologist at the University of Portsmouth, UK, thought that such police criticism had a point when she was drawn to deception research 20 years ago. To delve into the issue, she and colleague Aldert Vrij first went through hours of videotaped police interviews of a convicted serial killer and picked out three known truths and three known lies. Then Mann asked 65 English police officers to view the six statements and judge which were true, and which false. Since the interviews were in Dutch, the officers judged entirely on the basis of nonverbal cues.

The officers were correct 64 percent of the time — better than chance, but still not very accurate, she says. And the officers who did worst were those who said they relied on nonverbal stereotypes like “liars look away” or “liars fidget.” In fact, the killer maintained eye contact and did not fidget while deceiving. “This guy was clearly very nervous, no doubt,” Mann says, but he controlled his behavior to strategically counter the stereotypes.

In a later study, also by Mann and Vrij, 52 Dutch police officers did no better than chance at distinguishing true and false statements given by family members who’d murdered their relatives but denied it in anguished displays during televised press conferences used in the study. Notably, officers who performed the worst were those who felt that the emotional displays were genuine. But what did that signify? “If a husband killed his wife, he could be upset for all kinds of reasons, like remorse or getting caught,” says Mann. “Nonverbal behavior is so idiosyncratic. If you focus on the nonverbal behavior like emotions, you will trip up.”

Confirming these results on a large scale years later, Hartwig and Bond reviewed the literature for studies comparing people’s abilities to detect high- and low-stakes lies. They found no evidence that people were any better at detecting lies told by criminals or wrongly accused suspects in police investigations than those told by laboratory volunteers.

Something to hide

From serial killers to students fibbing in laboratory experiments, the lies in all these experiments were spoken. In a study published in 2019, Mann examined nonverbal deceit, such as when someone is trying to conceal illicit activity — a type of deception relevant to detecting bombers or smugglers. She recruited 52 university student volunteers and asked half of them to transport a laptop said to contain sensitive photographs on a ferry ride between two cities. The volunteers were told to try to blend into the crowd and not look “suspicious,” because people would be trying to identify them. The other half of the group were given a regular cell phone to transport, with no instructions to conceal what they were doing.

When Mann showed videos of the ferry rides to 104 other volunteers and asked them to pick out the “smugglers,” the spotters did no better than chance. In interviews afterward, the “smugglers” said they were nervous, but they consciously tried to act normal and control their nerves with tactics such as listening to music or using their phones.

Next, Mann raised the stakes. Half of a new group of 60 volunteers were given an envelope of Russian, Egyptian and Korean currency to conceal, while the other half didn’t “smuggle” anything. But this time, Mann sent two researchers onto the ferry to pace around and scrutinize the passengers, appearing to compare their faces to photos on a cell phone.

This time, 120 observers trying to pick out the “smugglers” on video guessed correctly just 39.2 percent of the time — well below chance. The reason, Mann says, is that the “smugglers” consciously made an effort to look normal, while the “innocent” control volunteers just acted naturally. Their surprise at the unexpected scrutiny looked to the observers like a sign of guilt.

The finding that deceivers can successfully hide nervousness fills in a missing piece in deception research, says psychologist Ronald Fisher of Florida International University, who trains FBI agents. “Not too many studies compare people’s internal emotions with what others notice,” he says. “The whole point is, liars do feel more nervous, but that’s an internal feeling as opposed to how they behave as observed by others.”

Studies like these have led researchers to largely abandon the hunt for nonverbal cues to deception. But are there other ways to spot a liar? Today, psychologists investigating deception are more likely to focus on verbal cues, and particularly on ways to magnify the differences between what liars and truth-tellers say.

For example, interviewers can strategically withhold evidence longer, allowing a suspect to speak more freely, which can lead liars into contradictions. In one experiment, Hartwig taught this technique to 41 police trainees, who then correctly identified liars about 85 percent of the time, as compared to 55 percent for another 41 recruits who had not yet received the training. “We are talking significant improvements in accuracy rates,” says Hartwig.

Another interviewing technique taps spatial memory by asking suspects and witnesses to sketch a scene related to a crime or alibi. Because this enhances recall, truth-tellers may report more detail. In a simulated spy mission study published by Mann and her colleagues last year, 122 participants met an “agent” in the school cafeteria, exchanged a code, then received a package. Afterward, participants instructed to tell the truth about what happened gave 76 percent more detail about experiences at the location during a sketching interview than those asked to cover up the code-package exchange. “When you sketch, you are reliving an event — so it aids memory,” says study coauthor Haneen Deeb, a psychologist at the University of Portsmouth.

The experiment was designed with input from UK police, who regularly use sketching interviews and work with psychology researchers as part of the nation’s switch to non-guilt-assumptive questioning, which officially replaced accusation-style interrogations in the 1980s and 1990s in that country after scandals involving wrongful conviction and abuse.

Slow to change

In the US, though, such science-based reforms have yet to make significant inroads among police and other security officials. The US Department of Homeland Security’s Transportation Security Administration, for example, still uses nonverbal deception clues to screen airport passengers for questioning. The agency’s secretive behavioral screening checklist instructs agents to look for supposed liars’ tells such as averted gaze — considered a sign of respect in some cultures — and prolonged stare, rapid blinking, complaining, whistling, exaggerated yawning, covering the mouth while speaking and excessive fidgeting or personal grooming. All have been thoroughly debunked by researchers.

With agents relying on such vague, contradictory grounds for suspicion, it’s perhaps not surprising that passengers lodged 2,251 formal complaints between 2015 and 2018 claiming that they’d been profiled based on nationality, race, ethnicity or other reasons. Congressional scrutiny of TSA airport screening methods goes back to 2013, when the US Government Accountability Office — an arm of Congress that audits, evaluates and advises on government programs — reviewed the scientific evidence for behavioral detection and found it lacking, recommending that the TSA limit funding and curtail its use. In response, the TSA eliminated the use of stand-alone behavior detection officers and reduced the checklist from 94 to 36 indicators, but retained many scientifically unsupported elements like heavy sweating.

In response to renewed Congressional scrutiny, the TSA in 2019 promised to improve staff supervision to reduce profiling. Still, the agency continues to see the value of behavioral screening. As a Homeland Security official told congressional investigators, “common sense” behavioral indicators are worth including in a “rational and defensible security program” even if they do not meet academic standards of scientific evidence. In a statement to Knowable, TSA media relations manager R. Carter Langston said that “TSA believes behavioral detection provides a critical and effective layer of security within the nation’s transportation system.” The TSA points to two separate behavioral detection successes in the last 11 years that prevented three passengers from boarding airplanes with explosive or incendiary devices.

But, says Mann, without knowing how many would-be terrorists slipped through security undetected, the success of such a program cannot be measured. And, in fact, in 2015 the acting head of the TSA was reassigned after Homeland Security undercover agents in an internal investigation successfully smuggled fake explosive devices and real weapons through airport security 95 percent of the time.
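Mann’s objection is, at bottom, about a missing denominator: a count of successful detections cannot be turned into a detection rate without knowing how many attempts went undetected. The short Python sketch below only illustrates that arithmetic; apart from the three stopped passengers mentioned above, every number in it is hypothetical.

```python
# Illustration of the "missing denominator" problem in evaluating
# behavioral screening. Only the three stopped passengers come from the
# article; every other number here is hypothetical.

reported_detections = 3  # passengers stopped with devices, per the TSA statement

def detection_rate(detections, undetected_attempts):
    """Fraction of all attempts that were caught."""
    return detections / (detections + undetected_attempts)

# Without the number of undetected attempts, the same headline figure is
# consistent with a near-perfect program or a nearly useless one.
for undetected in (0, 3, 27, 297):  # hypothetical values
    rate = detection_rate(reported_detections, undetected)
    print(f"undetected attempts = {undetected:3d} -> detection rate = {rate:.1%}")
```

The printed rates run from 100 percent down to 1 percent, which is exactly the point: the same three successes are compatible with wildly different effectiveness, so the count alone measures nothing.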

In 2019, Mann, Hartwig and 49 other university researchers published a review evaluating the evidence for behavioral analysis screening, concluding that law enforcement professionals should abandon this “fundamentally misguided” pseudoscience, which may “harm the life and liberty of individuals.”

Hartwig, meanwhile, has teamed with national security expert Mark Fallon, a former special agent with the US Naval Criminal Investigative Service and former Homeland Security assistant director, to create a new training curriculum for investigators that is more firmly based in science. “Progress has been slow,” Fallon says. But he hopes that future reforms may save people from the sort of unjust convictions that marred the lives of Jeffrey Deskovic and Marty Tankleff.

For Tankleff, stereotypes about liars have proved tenacious. In his years-long campaign to win exoneration and recently to practice law, the reserved, bookish man had to learn to show more feeling “to create a new narrative” of wronged innocence, says Lonnie Soury, a crisis manager who coached him in the effort. It worked, and Tankleff finally won admittance to the New York bar in 2020. Why was showing emotion so critical? “People,” says Soury, “are very biased.”

Editor’s note: This article was updated on March 25, 2021, to correct the last name of a crisis manager quoted in the story. Their name is Lonnie Soury, not Lonnie Stouffer.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.


Lying Essay


Examples Of Lying In Lying

beings have been lying, and according to a study by Bella M. DePaulo et al., on average every person lies once or twice a day. There are of course all kinds of motivations behind a lie; for example, people lie to give themselves an advantage or to avoid getting in trouble when they have done wrong. No matter what the intention of lying is, we can all admit that we do it, some more frequently than others, some habitually, and some of us only as a last resort. The practice of lying can be approached

Similarities Between Abraham Lincoln And JFK

We’ve all heard of Abraham Lincoln and John F. Kennedy, right? I bet you didn’t know they’re connected. Good morning/afternoon, teachers and fellow students. Today I’m going to talk about the common theory connecting Abraham Lincoln and John F. Kennedy: what a theory is, who Abraham Lincoln and JFK were, and how they’re somewhat connected. First of all, this is a theory. A theory is a system of ideas intended to explain something, such as an event. There are many different theories in the world

The Characteristics Of A Hero

What exemplifies a hero? According to Tom Hanks, “A hero is somebody who voluntarily walks into the unknown.” Heroes are unique: simply those who are admired and idealized for their noble qualities. Most people assume heroes are fictional characters with incredible powers, daring enough to save the world; in this case, however, a hero could be anybody we regularly see. Although heroes aren’t fictional characters, they could be anyone in our everyday lives naturally displaying endurance and perseverance

Lying Is A Bad Side Of Lying

Most people think that lying is bad. For centuries, philosophers and theologians have characterized lying as unethical (Levine & Schweitzer, 2014). Similarly, ethics scholars have argued that honesty is a critical component of moral character and a fundamental aspect of ethical behavior. Yet some lies are perceived to be more ethical than an honest statement. Prosocial lies are intended to benefit the target and have small or substantial consequences (Levine & Schweitzer

Lying Vs Military Lying

Whether we accept it or not, the reality is that lying is now part not only of our society but also of the Army. To make it more interesting, it is no secret that the level of punishment received by an enlisted soldier for lying is nowhere near the degree of punishment an officer faces for the same wrongdoing; if so, how does the Army determine the level of reprimand each individual must receive? As an example of this difference, in 2006 a sergeant accused of adultery lost one rank

The Dangers Of Lying

point for whatever reason it may have been. Lying, while an easy way to get out of trouble or save someone’s feelings, should only be used in certain situations. Lying causes loss of trust and double standards, and should only be used by the military. Lying seems to be something done by most of the population without even thinking twice about it, but the fact is it causes confusion as to who can be trusted. In “It’s the truth: Americans conflicted about lying,” a mother said, “it’s the easy trap of a

Is Lying Okay?

Is lying actually okay? The answer to that question is yes, since lying can be used to protect others. Some may say no and insist that all lies are bad, but that is not the case. After all, lying can be acceptable when used to keep an individual or crowd from physical or emotional harm. First off, when a person lies, it should be to keep someone from physical harm. For example, Harold Smith lied to his adult daughter about his kidney tumor so he would not traumatize her (para 13). This

The Controversy Of Lying

Tell a lie once and all your truths become questionable. Lying is sometimes acceptable, depending on the situation. For example, if your friend wants to hurt someone and asks you for information about where that person might be, you will most likely lie, because you don’t want anyone to get hurt and you also don’t want your friend to get in trouble. This is different than lying about just wanting to go to your friend’s house. For example

Argument On Lying

Lying seems so universal, but no one can really agree on whether it’s warranted or not. After reviewing a few opinionated and factual articles on lying, the conclusion is that lying is almost never justified unless it is to protect someone from getting hurt. In “A Philosopher On Lying,” Mary Alder describes the perspective of the philosopher Immanuel Kant: “Don’t tell someone a lie, Kant said, because then you are not treating that person with respect, as an individual.” This statement

Is Lying Wrong

truth, because denying her this might deprive someone of someone else. Here we are not judging the morality of the act of lying but its benefits and consequences. For many people, lying is a wrong act; however, lying is not always immoral if it produces better consequences than telling the truth. For instance, this act can be morally right if the overall result of lying maximizes the greatest happiness or pleasure for the greatest number of people over


How to Write a Great Essay Hook, With Examples

Lindsay Kramer

When you’re writing an essay , you naturally want people to read it. Just like the baited hook on a fishing line entices fish, your essay’s hook engages readers and makes them want to keep reading your essay.


What is an essay hook?

An essay hook is a sentence or two that piques the reader’s interest, compelling them to continue reading. In most cases, the hook is the first sentence or two, but it may be the entire opening paragraph. Hooks for essays are always in the first section because this is where the essay needs to hook its reader. If the reader isn’t engaged within the first few lines, they’ll likely stop reading.

An essay hook also sets the tone for the rest of your essay. For example, an unexpected statistic in an essay’s first line can tell the reader that the rest of the essay will dispel myths and shed light on the essay’s topic .

6 types of essay hooks

1 Rhetorical questions

Rhetorical questions are popular essay hooks because they make readers think. For example, an essay might start with the question “Is it ethical to eat animals?” Before reading the rest of the essay, the reader answers the question in their mind. As they continue to read, the writer’s arguments challenge the reader’s answer and may change their mind.

2 Fact/statistic

When an essay discusses scientific subjects, social issues, current events, or controversial subjects, a fact or statistic related to the essay’s topic can be a compelling hook. For example, an essay about elementary student literacy might hook readers with a statistic about the percentage of fourth graders that are proficient readers.

The hook could be a fact or statistic that’s well-known and frames the topic in a relatable way, or it could be a completely unexpected or seemingly unintuitive one that surprises the reader. In any case, they set the tone for the rest of the essay by supporting the writer’s position from the outset.

3 Quote

Quotes are often used as essay hooks because they’re succinct, often recognizable, and, when they’re from an expert source, they can support the writer’s position.

For example, an analytical essay comparing two books might hook readers with a quote from one of the books’ authors that sets the tone for the rest of the essay and gives a glimpse into that author’s work.

4 Anecdote

Anecdotes are often used as hooks in personal essays. A personal story makes the essay relatable, creating familiarity with the reader that makes them want to read more. An example of an anecdote hook is a persuasive essay about rerouting traffic on campus that starts with a personal story of a vehicular close call.

5 Description

A description focuses on specific imagery related to the essay’s subject. For example, an argumentative essay in support of new recycling policies might hook readers with a bleak description of what happens to batteries and other hazardous materials when they aren’t recycled.

6 Common misconception

Similar to an unexpected fact, a hook that dispels a common misconception surprises the reader and educates them about something they likely misunderstood. For example, a compare-and-contrast essay about different mindfulness strategies might start with a common misconception about how mindfulness works.

Creating a hook for different writing prompts

Strong hooks for essays align with the essays’ tones, types, and topics. As you start working on an essay, think about your topic and goals for the essay. Are you trying to persuade the reader? Dispelling a common misconception can be the hook you need. Are you telling an entertaining personal story with bigger themes about your life experience? Start it off with an engaging anecdote. Are you defending a position? Share an unexpected fact and let the truth speak for itself.

Sometimes, it’s not easy to tell which kind of hook your essay needs. When this is the case, it can be helpful to write the rest of your essay, then come back to your introduction and write the kind of hook that would make you want to read that whole essay. Refer to your essay outline to ensure that it fits your essay goals.

Essay hook examples

  • Is it too late to save our planet from climate change?
  • Before I could speak, I sang.
  • “If we are truly a great nation, the truth cannot destroy us.” —Nikole Hannah-Jones
  • Contrary to popular belief, rats are among the most fastidious animals.
  • I can’t be late for class—this could be the most important day of my life!

Essay hook FAQs

What is an essay hook?

An essay hook is a sentence or two that grabs the reader’s attention and piques their interest, enticing them to continue reading.

What are the different types of essay hooks?

  • Rhetorical questions
  • Fact/statistic
  • Quote
  • Anecdote
  • Description
  • Common misconception

Why is it important to have a good essay hook?

It’s important that hooks for essays be well crafted, because in many cases, the reader won’t continue reading an essay if it doesn’t hold their interest. The hook grabs their attention and makes them want to read on.



Quality Essays On Lying


Lying is a complicated dimension of one’s personality, an aspect of human nature that becomes a habitual form of behavior. Many people would say that lying is morally wrong because it destroys a person’s dignity and integrity. Lying reflects fear of what others may think and fear of facing reality, a fear that grows stronger with every lie a person tells.

As one lie leads to another, people deceive themselves by making excuses for their lies. While society has regarded lying as an ethical misdemeanor, there are cases when lying instinctively becomes the only viable option. Why do people lie? People lie for a number of reasons: to avoid dispute, to safeguard themselves from harm, rejection, or loss, and to protect their privacy and credibility. Some people lie for the benefit of their own interests, and some lie for a justifiable reason or cause, or when they are unable to deal with the repercussions of telling the truth.

How People Lie

Everybody lies, and even people who value honesty cannot escape lying. Our way of living creates circumstances that make lying necessary in order to cope socially, within a culture where hiding the truth and declaring false social desires is the norm (Karpman 1949).

Oscar Wilde, in his essay “The Decay of Lying,” regarded lying as an art, an expression of fantasized power in which the liar’s goal is to charm and delight. People practice various types of lying, usually beginning in childhood. Children lie to their parents to avoid punishment, to get out of trouble, or to get attention, and sometimes for no apparent reason. To show respect and kindness, people are sometimes required to hide the naked truth, like telling a friend that she looks good in her little black dress when she actually looks like a pumpkin, just to avoid hurting her feelings. There are also instances when people need to lie for the benefit of others or to help them: doctors lie to their dying patients to comfort them and lift their spirits, and lying to a criminal about the whereabouts of the intended victim seems a justifiable response. Some lie to impress others or gain their respect, such as a nursing aide who tells friends that she is a surgical nurse. Telling the truth can sometimes lead to rejection, ridicule, or punishment.

On the other hand, concealing information to protect one’s own interest is a harmful lie, as when Patty tells her friend Beth that she saw Beth’s ex-boyfriend at the party last night but does not reveal that she spent the night with him, leaving Beth at a disadvantage. Little white lies are generally acceptable in most cases, but when they become a frequent strategy for getting through life, they can be very destructive to one’s personality. Innocent or harmless lying is tolerable because it serves some personal need or interest without inflicting harm on anyone, and it can even benefit another person, but it should not be used as a social stimulant, because habitual use of it may lead to evil schemes (Karpman 1949).

Bold-faced or malicious lies involve obtaining some personal benefit at the expense of another, which ruins the credibility and character of a person. The Clinton adultery scandal of 1998 was one of the most controversial issues in American politics, maneuvered by political opponents, and such malicious allegations were destructive to the Clinton administration. These lies range from minor deceptions to those that put another person’s life in danger, and they should not be rationalized, because they harm a person’s safety and character (Karpman 1949).

There are also instances when lying may be morally permissible, such as when government officials resort to lying in order to protect their personal privacy, credibility, and public trust. Lying to protect one’s privacy is not generally acceptable in view of the ethics of truth; however, on this view it is not morally wrong even if the liar is a high-ranking public official, as long as there is a credible justification for the lie (Allen 1999). Protecting one’s privacy is a complicated issue because privacy has several aspects, physical, informational, decisional, and proprietary, any of which a person may lie to protect. A high-ranking official who is allegedly engaged in adultery may lie to hide the affair, to maintain independence in matters of sex and romance, and to preserve his reputation and diplomatic interests. However, for many moralists, a justification for lying, no matter how credible, does not make it ethical. Western moral ethics generally promotes honesty over untruthfulness, and moralists and religious denominations alike have expressed their ideals on the ethics of lying (Allen 1999).
Immanuel Kant, the German philosopher, supported the principle of speaking the truth regardless of the consequences, believing that a lie would eventually lead to an adverse outcome. Though lying is sometimes reasonable, modern philosophers conclude that we should strive to adhere to the moral principle of honesty in dealing with our friends, families, and fellow citizens, as well as in our public and private lives. Above all, we should be honest with ourselves. Lying is widespread for the simple reason that it works, but not all the time. Lying sometimes fails through outright discovery, self-confession, or detection through facial and bodily expressions.

How to Detect Lies

Detecting lies can be a tough job even for the most experienced police officers, judges, and other forensic professionals. Polygraph tests, or lie detectors, which are based on detecting nervous-system activity, are regarded as unreliable. In addressing this issue, psychologists have developed new methods of detecting lies through facial expression and body language analysis. The usual basis for detecting lies is signs of nervousness, stress, anxiety, tension, and discomfort. Research suggests that facial expressions and bodily gestures that indicate stress and tension are associated with lying (Adelson 2004). However, people feel tense and stressed for various reasons; therefore, these emotions should not always be taken as evidence of lying. Precise knowledge of body language is a vital factor, yet our reading of it is mostly based on instinct, which is often wrong because popular beliefs about these cues do not match the truth behind them (Morgan 2002). The following are the basic body language gestures that have been associated with lying:

Avoiding eye contact

When a person is lying, he or she supposedly cannot look the other person straight in the eye. Shifty eyes sometimes indicate nervousness, which can have many causes other than lying. Eye contact that lasts a few seconds is normal, but contact held longer than that can cause nervousness and may indicate flirtatious behavior.

Hand gestures

Unnatural, excessive gestures are considered subconscious signs of lying. Touching the face, throat, or chin could mean someone is thinking or concentrating. Studies suggest that stroking the chin while thinking is an intellectual gesture, while deliberate waving of the hands could mean searching for words. Hiding the hands behind the back, where others are unable to see what they are doing, could trigger suspicion. However, putting the hands behind the back is also believed to be a power gesture, so it should not by itself be taken as proof of lying.

Shaking, sweating, and lip licking

These gestures are considered symptoms of lying but may sometimes indicate nervousness or fear, which can reflect a lack of self-assurance.

High-pitched voice

Liars are more likely to raise their voices. However, a rising voice can also indicate anger, excitement, nervousness, or hysteria, each with various causes.

Speech errors and pauses

Frequent or too long pauses and speech errors such as “aaah” or repetitions such as “I,I, fo-forgot” can arouse suspicion but all these can also occur when a person is not expecting a particular question.

Fake Smiles

When someone is lying, putting on a genuine smile is a struggle, especially since a fake smile does not reach the eyes. However, people smile for many reasons, regardless of the kind of smile they project. A combination of verbal and body-language cues can indicate lying, but all of the above gestures can also occur in other circumstances; therefore, it is difficult to prove that a lie is being told. Appearances are sometimes deceiving because the evidence is mixed (Adelson 2004). Liars are usually not very good at concealing their feelings, because emotions regularly surface, yet people are not very good at reading and analyzing them. Body language provides essential clues but is often unreliable (Morgan 2002). Even so, it is still worth learning the basics of body language and taking advantage of them whenever necessary, however unreliable they may seem to be.

References:

Adelson, R. (2004, July). Detecting Deception. American Psychological Association, 35(7), 70. Retrieved from http://www.apa.org/monitor/julaug04/detecting.aspx

Allen, A. (1999). Lying to Protect Privacy. Villanova Law Review, 44, 161. Retrieved from https://www.law.upenn.edu/cf/faculty/aallen/workingpapers/Lyingtoprotectprivacy.pdf

Karpman, B. (1949). Lying: A Minor Inquiry Into the Ethics of Neurotic and Psychopathic Behavior. Journal of Criminal Law and Criminology, 40(12), 135-145. Retrieved from http://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=3670&context=jclc

Morgan, N. (2002, September 30). The Truth Behind the Smile and Other Myths - When Body Language Lies. Harvard Management Communication Letter, 5(8). Retrieved from http://hbswk.hbs.edu/archive/3123.html


The Oxford Handbook of Lying


33 Lying in Social Psychology

Bella M. DePaulo, PhD (Harvard), is an Academic Affiliate, Psychological and Brain Sciences at the University of California, Santa Barbara. She has published extensively on deception and the psychology of single life and has received federal funding for her work. Dr. DePaulo has lectured nationally and internationally and has been honoured with a variety of awards, including a James McKeen Cattell Award and a Research Scientist Development Award. Her website is www.BellaDePaulo.com.

Published: 11 December 2018

The social psychology of lying addresses some of the most fundamental questions about deception: How often do people lie? Why do they lie? To whom do they tell their lies? Do particular types of people lie especially often? Research-based answers to all those questions are reviewed. The investigation of frequency includes a comparison of students and non-students living in the same area, finding a higher incidence of lying among the former. Also discussed are strategies for lying, cognitive factors in lying, and lying in close and casual relationships. The system of personality categories introduced (or rather, updated) by Ashton and Lee (2007) is reviewed. The chief distinction in types of lie is found to be between self-serving lies and other-oriented lies. Strategies are examined in depth using interviews of suspected criminals and frequenters of online forums. The chapter concludes with a pessimistic overview of online dating sites.

33.1 Introduction

In the field of social psychology (the study of social interactions, including how people’s thoughts and feelings and actions are influenced by others), scholars were slow to approach the study of lying. The first meta-analytic review article did not appear until 1981 ( Zuckerman, DePaulo, and Rosenthal 1981 ). The number of studies grew after that, and exploded in the twenty-first century, especially in the US, perhaps because the terrorist attacks on September 11, 2001 greatly increased interest in deception-related topics.

The Zuckerman et al. review was titled ‘Verbal and nonverbal communication of deception’, reflecting the early interest in lying and detecting lies. A huge and still-growing literature has accumulated on cues to deception ( DePaulo, Lindsay, Malone, Muhlenbruck, Charlton, and Cooper 2003 ) and the accuracy of people’s attempts to detect deception ( Bond and DePaulo 2006 ). That research is reviewed in Chapter 31 by Samantha Mann.

Social psychologists’ study of cues to deception and success at deceiving and detecting deceit predated their research on more fundamental questions, such as: How often do people lie? Why do they lie? To whom do they tell their lies? What kinds of people are especially likely to lie frequently? Empirical answers to all of those questions are reviewed in this chapter.

As the field progressed, researchers wanted to know not just whether people were successful in their attempts to deceive other people, but also what strategies they used to try to fool others. They also examined the cognitive processes involved in lying. The interpersonal contexts of deception also attracted more research attention. Whereas early studies of cues to deception and accuracy of deception-detection were often conducted with liars and targets who were strangers to one another, subsequent studies of the psychology of lying focused more on the role of lying in different kinds of relationships, including very close ones. Social psychologists showed that the telling of lies instead of truths had implications for the way the liars viewed themselves as well as the people to whom they were telling their lies. Over time, certain kinds of deception can contribute to the deterioration, or even termination, of a relationship. In this chapter, separate sections on strategies for lying, cognitive factors in lying, and lying in relationships provide reviews of the relevant research.

33.2 How often do people lie?

Volumes had been written on lying, across many disciplines, before any systematic estimate of the frequency with which people lie had been available. Early studies were small, imprecise, and limited to college students. In 1996, DePaulo and her colleagues asked two groups of participants—seventy-seven college students and a more diverse group of seventy people from the community—to keep a diary of all of the lies that they told every day for a week ( DePaulo, Kashy, Kirkendol, Wyer, and Epstein 1996 ). Significantly, they also asked participants to record all of their social interactions lasting at least ten minutes, regardless of whether they had told any lies during those interactions. That provided a measure of participants’ opportunities to lie, which was missing from all previous research. They found that the college students told an average of one lie in every three of their social interactions (33%), and the people from the community told one lie in every five interactions (20%). Calculating the number of lies per day (rather than per social interaction), they found that the college students told an average of two lies a day and the people from the community told about one. Only one of the college students and six of the community members told no lies at all.
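The two headline figures in the diary studies are simple ratios with different denominators: lies per social interaction (the measure of opportunity the authors added) and lies per day. As a rough illustration of how the two are computed, here is a minimal Python sketch using invented diary records, not the actual DePaulo et al. (1996) data.

```python
# Minimal sketch of the two rate measures used in the diary studies.
# The records below are invented for illustration only.

diary = [
    # (day, interactions_lasting_10_min_or_more, lies_told)
    (1, 4, 2),
    (2, 3, 1),
    (3, 5, 1),
    (4, 2, 0),
    (5, 6, 3),
    (6, 3, 1),
    (7, 4, 2),
]

total_interactions = sum(interactions for _, interactions, _ in diary)
total_lies = sum(lies for _, _, lies in diary)
days = len(diary)

lies_per_interaction = total_lies / total_interactions  # rate per opportunity
lies_per_day = total_lies / days                        # rate per day

print(f"lies per interaction: {lies_per_interaction:.2f}")
print(f"lies per day:         {lies_per_day:.2f}")
```

Keeping the two denominators distinct is what lets the studies compare groups that socialize different amounts, such as students and community members.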

In studies in which participants report their own lies, how can we know whether participants are reporting their lies accurately? In the diary studies conducted by DePaulo and her colleagues, participants were encouraged to record their lies soon after they told them, to reduce issues of forgetting, and their diaries were collected several times a week. The diaries were anonymous, and that may have facilitated honest reporting. The two groups of participants were different in many ways, yet the psychology of lying was very similar for both. That, too, lends credibility to the findings.

In two small studies, George and Robb (2008) replicated the diary methodology of DePaulo et al. (1996), while examining additional communication modalities: instant messaging, e-mail, phone, and face-to-face communications (as compared to written communications, telephone, and face-to-face in DePaulo et al.). Across all communication modalities, the rates of lying in the two studies, 25% and 22%, were comparable to the DePaulo et al. (1996) rates of 33% and 20% in their two samples. Taken together, the findings suggest that people lie in about one out of every four of their social interactions in everyday life.

Consistent with the DePaulo et al. (1996) results, George and Robb (2008) also found that rates of lying tended to be lowest in face-to-face communications. However, in the latter studies, individual comparisons between face-to-face and each of the other modalities were not significant.

Email communications and paper and pen messages are both text-based, but people feel more justified in lying by email, perhaps because it is viewed as less permanent and less personal. In a series of experiments, Naquin, Kurtzberg, and Belkin (2010) showed that people consistently tell more lies when emailing than when using pen and paper.

In a study focused specifically on lying in text messaging, college students provided the last fifteen text messages they had written to each of two people of their choosing ( Smith, Hancock, Reynolds, and Birnholtz 2014 ). Then they indicated whether each message was a lie. Seventy-seven percent of the participants told at least one lie. An average of eleven percent of the messages were deceptive.

In an Internet survey, a national sample of 1,000 American adults reported the number of lies they had told in the previous 24 hours ( Serota, Levine, and Boster 2010 ). No measure of social interactions was collected. On a per-day basis, though, the results were comparable to those of DePaulo et al. (1996), with participants telling an average of 1.65 lies per day. Sixty percent reported that they told no lies at all. Thus, the majority of lies were told by a minority of the participants.
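The Serota et al. figures also show how an overall mean of 1.65 lies per day is compatible with 60 percent of respondents reporting no lies at all: since the overall mean is a weighted average, the remaining 40 percent must have averaged roughly four lies a day. A quick arithmetic check, using only the two published numbers:

```python
# Arithmetic check on the Serota, Levine, and Boster (2010) summary figures.
# Only the published mean (1.65 lies/day) and the share reporting zero lies
# (60%) are used; the rest follows from the definition of a weighted mean.

overall_mean = 1.65           # lies per day, across all 1,000 respondents
share_zero = 0.60             # fraction reporting no lies at all
share_liars = 1 - share_zero  # fraction reporting at least one lie

# overall_mean = share_zero * 0 + share_liars * mean_among_liars
mean_among_liars = overall_mean / share_liars
print(f"implied mean among those who lied: {mean_among_liars:.2f} lies/day")
# About 4.1 lies/day, i.e. most lies come from a minority of prolific liars.
```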

Media accounts sometimes claim that people tell three lies every ten minutes. The finding comes from a study in which undergraduates had a ten-minute getting-acquainted conversation with another student ( Feldman, Forrest, and Happ 2002 ). In a third of the conversations, one of the students was instructed to try to come across as very likable. In another third of the conversations, one of the students tried to come across as especially competent. In the rest, the students were given no particular instructions about what impression to convey. Of those trying to present themselves in particular ways, forty percent told no lies at all. The others told an average of three lies in the ten-minute conversation. Therefore, what the research really did show was that for undergraduates interacting with strangers and told to convey the impression of being extremely likable or competent, forty percent told no lies and the others told an average of three lies in the ten-minute conversations.
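The correction amounts to a weighted average: if 40 percent of the instructed participants told no lies and the rest averaged three, the instructed group as a whole averaged about 1.8 lies per ten minutes, and that was under instructions to impress a stranger rather than in ordinary conversation. A small check using only the figures reported above:

```python
# Weighted-average check on the "three lies every ten minutes" claim,
# using only the figures reported for Feldman, Forrest, and Happ (2002).

share_no_lies = 0.40        # of participants told to appear likable/competent
share_liars = 1 - share_no_lies
mean_lies_among_liars = 3   # lies per ten-minute conversation

group_mean = share_no_lies * 0 + share_liars * mean_lies_among_liars
print(f"group average: {group_mean:.1f} lies per ten minutes")
# About 1.8, well below the "three lies every ten minutes" often quoted,
# and measured under impression-management instructions, not everyday life.
```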

Other estimates of rates of lying come from more specific domains. For example, in a two-year study of applications to a psychiatry training program, applicants’ reports of their publications were compared to their actual publications ( Caplan, Borus, Chang, and Greenberg 2008 ). Nine percent of the applicants misrepresented their records. In research on a broader range of job applicants, investigating misrepresentations of employment history, educational history, and credentials, discrepancies were found in forty-one percent of the resumes ( Levashina and Campion 2009 ).

33.3 Personality and individual differences in lying

Who lies? According to the research on the rate of lying in everyday life, the answer is probably everyone ( DePaulo et al. 1996 ). With regard to personality and individual differences, then, the question is whether particular types of people are especially likely to lie.

In the diary studies of lying among college students and people in the community, participants completed a battery of personality tests in addition to recording their lies and their social interactions every day for a week (DePaulo et al. 1996; Kashy and DePaulo 1996). Most results were consistent across the two groups. As expected, more manipulative people (as measured by Machiavellianism and Social Adroitness scales) lied at a higher rate, as did people who care a great deal about what others think of them and engage in extensive impression management (as measured by Public Self-Consciousness and Other-Directedness scales). People scoring higher on a scale measuring responsibility lied less often. Self-esteem and social anxiety did not predict rates of lying.

More extroverted people also lie more, and not because they are more sociable and therefore have more opportunities to lie; the rate of lying measure assesses the number of lies per social interaction ( Kashy and DePaulo 1996 ). In a study based on a simulated job interview, Weiss and Feldman (2006) also found that more extroverted people lie more.
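
The role of the per-interaction measure is easy to see with a small sketch. The numbers below are invented for illustration (they are not data from Kashy and DePaulo 1996); the point is only that dividing lies by social interactions separates lying more often from simply socializing more.

```python
# Hypothetical one-week diary totals for two people; invented for illustration.
diaries = {
    "high_extravert": {"lies": 12, "interactions": 60},  # very sociable week
    "low_extravert":  {"lies": 6,  "interactions": 20},  # far fewer interactions
}

for person, d in diaries.items():
    lies_per_day = d["lies"] / 7
    lies_per_interaction = d["lies"] / d["interactions"]
    print(f"{person}: {lies_per_day:.2f} lies/day, "
          f"{lies_per_interaction:.2f} lies per interaction")

# Here the extravert tells more lies per day (1.71 vs 0.86) simply by having more
# conversations, yet fewer lies per interaction (0.20 vs 0.30). Because the diary
# studies use the per-interaction rate, a finding that extraverts lie more cannot
# be explained away as their merely having more opportunities to lie.
```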

The link between Machiavellianism and the inclination to tell lies was further demonstrated in a study of the Dark Triad of traits, which also includes psychopathy (interpersonal antagonism, callous social attitudes, and impulsiveness) and narcissism (superiority, entitlement, and dominance). Participants recruited online filled out the relevant personality scales, and also described the lies they had told in the last seven days ( Jonason, Lyons, Baughman, and Vernon 2014 ). People scoring higher on each of the three traits told more lies than those scoring lower. People scoring higher on Machiavellianism and psychopathy lied to more people. Narcissists were more likely to lie for self-serving reasons, and psychopaths more often admitted that they told lies for no reason at all.

Machiavellianism also predicts lying in sexual contexts ( Brewer and Abell 2015 ). A wide range of adults recruited online completed a Machiavellianism scale, the Sexual Deception scale, and the Intentions Toward Infidelity Scale. People who scored higher on Machiavellianism said they were more willing to tell blatant lies in order to have sex with a partner, more willing to engage in sexual behavior to achieve some self-serving goal, more likely to lie to avoid confrontation with a partner, and more likely to be unfaithful to a partner.

In a study in which participants were interviewed by a skeptical police detective, college students who were not psychology majors were urged to claim that they were, and to continue to lie in response to every question ( Vrij and Holland 1998 ). The students who persisted in telling the most lies were those who were more manipulative (as measured by the same Social Adroitness scale used by Kashy and DePaulo 1996 ) and more concerned with impression management (as measured by Other-Directedness, but not by Public Self-Consciousness). The more socially anxious the participants were, the less likely they were to persist in telling lies.

In an extension of the influential five-factor approach to personality, Ashton and Lee (2007) added a sixth factor, honesty-humility, to extroversion, agreeableness, conscientiousness, emotionality, and openness to experience. The factor captures sincerity, fairness, modesty, and the avoidance of greed. Hilbig, Moshagen, and Zettler (2015) predicted that people low in honesty-humility would be more likely to engage in morally questionable behavior such as infidelity in relationships, but less likely to admit it when asked directly. Consistent with that prediction, they found that such people admitted to more infidelity only when asked indirectly.

In their diary studies of lying in everyday life, DePaulo and her colleagues found no overall sex differences in rates of lying ( DePaulo et al. 1996 ). They did, though, find sex differences in the telling of two different kinds of lies, self-serving lies (told in pursuit of the liars’ interests, or to protect or enhance the liars in some psychological way) and other-oriented lies (told to advantage other people, or to protect or enhance them in some psychological way). Averaging across men and women, both the college students and the community members told about twice as many self-serving lies as other-oriented ones. The same disproportionate telling of self-serving lies characterized social interactions in which men were lying to women, women were lying to men, and especially when men were lying to other men. When women were lying to other women, though, they told just as many other-oriented lies as self-serving lies. People telling other-oriented lies are often trying to spare the feelings of the other person, as, for example, when they say that the other person looks great, prepared a great meal, or did the right thing, when they actually believe just the opposite. Women seem especially concerned with not hurting the feelings of other women.

33.4 Motivations for lying

Motivations for lying can be enumerated and classified in many different ways. Perhaps the most important distinction is between self-serving lies and other-oriented lies, with self-serving lies told far more often than other-oriented lies ( DePaulo et al. 1996 ). In the diary studies of lying in everyday life, DePaulo and her colleagues also assessed whether each lie was told in the pursuit of materialistic versus psychological goals. Materialistic goals include the pursuit of money and other concrete benefits such as securing a job. Psychological goals include, for example, avoiding embarrassment or disapproval, creating a positive impression, and protecting other people from having their feelings hurt. Participants in both studies told more lies in the pursuit of psychological goals than materialistic ones.

In an experimental test of lying to get a job, Weiss and Feldman (2006) found that applicants described themselves in a job interview as having more technical skills than they really believed they had if the job was described as requiring such skills, and as having more interpersonal skills if the job instead required those skills. Material rewards do not undermine every person’s honesty, however: in research in which participants were given a monetary incentive to lie, many refrained from lying (Lundquist, Ellingsen, Gribbe, and Johannesson 2009).

When people describe their own accomplishments more favorably than objective indicators warrant, how do we know they are deliberately lying and not just misremembering? One indication is that the errors are systematically biased in the too-generous direction rather than the too-harsh one. Also, some people are dispositionally more inclined than others to self-enhance, and those people report more exaggerated grade point averages. Furthermore, when people are given an opportunity to affirm their core values (for example, by writing about them), their need to self-enhance is satisfied, and they are then less likely to exaggerate their grades than people who did not get the self-affirmation opportunity (Gramzow and Willard 2006).

When people’s self-esteem is threatened, they are more likely to lie in order to make themselves seem better than they really are. Tyler and Feldman (2005) demonstrated this in a study in which some participants were told that another student outperformed them, and that student would be evaluating them. The students who got the threatening feedback did indeed report lower self-esteem than those who were not told that the other student performed better, and then they lied more about their accomplishments and actions and personal attributes.

In addition to the categories of self-serving versus other-oriented lies, and lies told in the pursuit of materialistic versus psychological goals, Vrij (2008) suggests that another important distinction is between lies told to gain an advantage and lies told to avoid a cost. Another typology adds conflict avoidance and social acceptance to the more commonly mentioned self-gain and altruistic motives (McLeod and Genereux 2008). Another approach (Gillath, Sesko, Shaver, and Chun 2010) applies three motives identified as important across many domains to the study of deception: power (lies told to control others), achievement (lies told to perform particularly well), and intimacy (lies told to increase closeness). A study of lies told by juvenile offenders (Spidel, Herve, Greaves, and Yuille 2011) underscored the significance of yet another motivation for lying—the thrill of fooling others, which Ekman (2009) calls duping delight. The broadest perspective on motivations for lying is the evolutionary one, which maintains that people lie because their attempted deceptions have resulted in selective advantages in reproduction and survival over evolutionary time (Smith 2004).

33.5 Strategies for lying

In their research on suspects’ strategies during police investigations, Hartwig and her colleagues (Hartwig, Granhag, and Stromwall 2007) found that innocent suspects typically do not strategize much—they simply tell the truth and expect to be believed. The same is likely true of truth-tellers in everyday life. Guilty suspects, in contrast, are more likely to strategize both before and during an interrogation. For example, they deliberately try not to seem nervous, and they try to avoid telling any outright lies while denying guilt and avoiding admissions about incriminating evidence. They also try to offer detailed accounts and to stay consistent.

Suspects with prior criminal experience are better than first-time suspects at withholding incriminating information. They are also unaffected by how suspicious the interrogators are of their guilt, whereas suspects new to the criminal justice system offer more information to interrogators they believe to be more skeptical (Granhag, Clemens, and Stromwall 2009).

Outside of the legal system, people use a variety of strategies when accused of a serious offense. For example, they might deny the accusation completely, offer an explanation for their behavior, make a counter-accusation, admit to a relevant lesser offense, or admit to an irrelevant lesser offense. In experimental tests of the effectiveness of those strategies, Sternglanz (2009) found that people are more likely to be believed when they admit to a lesser relevant offense than when they completely deny that they did anything wrong. No strategy was significantly more effective than admitting to a lesser relevant offense.

Most of the lies of everyday life are not about serious offenses. Sometimes people lie when the truth would be hard to tell, as, for example, when answering a person’s question honestly would hurt that person’s feelings. Bavelas and her colleagues ( Bavelas, Black, Chovil, and Mullett 1990 ) showed that in such challenging situations, people often equivocate—they try to avoid answering the question or giving their own opinion, they sometimes deliberately give unclear answers, and sometimes they avoid addressing the person who asked the question.

But what do they say? DePaulo and Bell (1996) created a paradigm in which participants indicated which of many paintings they liked the most and the least, and were subsequently introduced to an artist who pointed to one of their most disliked (or liked) paintings and said, “This is one that I did. What do you think of it?” As Bavelas et al. (1990) would have predicted, the participants often stonewalled when asked about the artist’s own painting that they disliked the most. But they also amassed misleading evidence (mentioning more of what they liked about the painting than what they disliked) and implied a positive evaluation without saying explicitly that they liked the painting. They used strategies that they could defend as not technically a lie.

In online chat environments, liars are more likely than truth-tellers to choose avatars that are dissimilar from themselves (for example, in race, gender, or physical appearance). Perhaps, Galanxhi and Nah (2007) suggest, online participants believe that increasing their anonymity will improve their chances of getting away with their lies.

One of the most effective routes to deceptive success is not a strategy at all—it is an overall demeanor. People who have a truthful look about them will often be believed even when they are lying. Even trained and experienced government agents are routinely fooled by them ( Levine et al. 2011 ).

33.6 Cognitive factors in lying

Social psychologists interested in cognitive approaches to understanding deception have studied decisions about whether to deceive and the cognitive processes involved in lying and telling the truth. They have also examined ways in which lying instead of telling the truth changes self-perceptions, perceptions of the targets of the lies, and even perceptions of whether the untrue statements really were untrue.

From a self-control perspective, Mead et al. (2009) argued that situations in which people are tempted to lie often present a conflict between the inclination to lie or cheat and thereby reap an immediate reward versus choosing the more socially appropriate option of honesty. The latter requires more self-control. The authors showed that when people’s capacity for self-control was already depleted by a previous task that used up some of their cognitive resources, they were less able to resist the temptation to behave dishonestly. They were also less inclined to avoid situations in which temptations to lie were more likely to be present.

In a review article, Vrij, Granhag, and Porter (2010) described six ways in which lying can be more cognitively demanding than telling the truth. First, formulating a lie may be more challenging than telling the truth, as, for example, when liars need to make up a story and then remember what they said to whom. Second, because liars are less likely than truth-tellers to expect to be believed, they use more cognitive resources monitoring and controlling their own behavior and, third, other people’s reactions to their deceptions. Fourth, liars are more likely to expend cognitive resources reminding themselves to maintain the role they are playing. Fifth, the truth comes to mind effortlessly, and liars need to use mental resources to suppress it. Finally, “activation of a lie is more intentional and deliberate, and thus it requires mental effort” (Vrij, Granhag, and Porter 2010: 109).

Lie-telling will not always be more cognitively demanding than telling the truth, though. Walczyk et al. (2014) formulated a model of answering questions deceptively that does not assume that the basic cognitive processes involved in lying are different from those involved in telling the truth. Their Activation-Decision-Construction-Action Theory (ADCAT) articulates key cognitive processes and predicts conditions under which lie-telling will be more or less cognitively demanding than truth-telling. For example, they argue that lying “will impose more cognitive load than truth telling … the more complex, unfamiliar, or serious the truth-soliciting context is or the less rehearsed deceptive responding is. On the other hand, well rehearsed liars or those in highly familiar social contexts, for instance, are unlikely to need to monitor their own behavior [or] that of the targets … ” ( Walczyk et al. 2014 : 32). ADCAT incorporates key constructs from cognitive psychology such as theory of mind, working memory, and executive function.

The cognitive consequences of lying continue even after the lie has been told. For example, there is experimental evidence to indicate that liars can come to believe their lies ( Pickel 2004 ). In an example of moral disengagement, they also persuade themselves that their morally questionable behavior was really not so bad after all ( Shu, Gino, and Bazerman 2011 ). In research in which people lie or tell the truth to another person, those who lie subsequently perceive the other person as more dishonest than those who told the truth ( Sagarin, Rhoads, and Cialdini 1998 ). In sum, liars sometimes come to believe that what they said was not really a lie, that there was nothing wrong with what they did, and anyway, other people are also dishonest.

Many varieties of self-deception can facilitate deceptive success, argue von Hippel and Trivers (2011) , by making people more confident, less burdened by the extra cognitive load that sometimes accompanies deceit, less likely to evince cues to deception, and less culpable if caught in the lies that they do not even believe they are telling. Moral disengagement after an initial but small transgression can lead the way to the commission of more serious transgressions that may have been inconceivable had the perpetrator not already committed lesser offenses, as Welsh et al. (2015) demonstrated in a series of studies of the slippery slope of ethical transgressions.

33.7 Lying in relationships

In DePaulo and Kashy’s (1998) diary studies of lying in everyday life, people told fewer lies (per social interaction) to the people with whom they had closer relationships. They also felt more uncomfortable lying to those people. With strangers and acquaintances, people told relatively more self-centered lies than other-oriented lies, but with friends, they told relatively more other-oriented lies. In research in which participants’ liking for an art student was experimentally manipulated, people told more kind-hearted lies to the artists they were led to like than the ones they were induced to dislike, thereby protecting those artists from learning when the participants actually detested their work ( Bell and DePaulo 1996 ).

People with an avoidant attachment style (who are fearful of intimacy) tend to lie more often, but only to their romantic partners. People with anxious attachment styles (who are preoccupied with issues of intimacy) lie more often to partners and others, including strangers, best friends, co-workers, bosses, and professors ( Ennis, Vrij, and Chance 2008 ; Gillath et al. 2010 ). When people are prompted to think about a relationship that made them feel secure, they are less likely to lie to a current romantic partner and also less likely to lie in academic settings ( Gillath et al. 2010 ).

Research on deception in relationships from an evolutionary perspective typically focuses on sex differences relevant to mating. For example, Marelich et al. (2008) found that men are more likely to tell blatant lies in order to have sex, whereas women are more likely to have sex to avoid confrontation. Tooke and Camire (1991) asked participants about the strategies they use to make themselves appear more desirable than they really are, both to other men and to other women. Whether the target was a man or a woman, women were more likely than men to lie about their appearance. With female targets, men more often faked commitment, sincerity, and skill at obtaining valuable resources such as money and career advancement; with other men, they exaggerated the frequency of their sexual behavior and the depth of their relationships, as well as their popularity and general superiority.

The deceptive claims that men most often make in their interactions with their female partners are precisely the claims that upset women most when they discover the deceptions—claims about commitment, status, and resources. Women are more upset than men when their partners exaggerate their feelings in order to have sex, whereas men are more upset than women when their partners mislead them into believing that they are interested in having sex (Haselton et al. 2005).

When people discover that they have been deceived by a romantic partner, those with avoidant attachment styles are especially likely to avoid their partner and ultimately end the relationship. Anxiously attached people are more likely to talk around the issue but continue the relationship. Securely attached partners are especially likely to talk about the deception and stay together ( Jang, Smith, and Levine 2002 ). The belief that a spouse may be concealing something is damaging to relationships. In a study of newlyweds, for example, those who thought their partners were hiding something felt excluded from their own relationship, and over time experienced less trust and engaged in more conflict ( Finkenauer, Kerkhof, Righetti, and Branje 2009 ).

In their roles as targets of lies, both men and women evaluate lies more harshly than they do when they are the ones perpetrating the lies. As liars, people think their lies are more justifiable and more positively motivated, less damaging to the other person and more likely to have been provoked by the other person ( Kaplar and Gordon 2004 ; DePaulo, Ansfield, Kirkendol, and Boden 2004 ).

A growing body of literature focuses on deception in online dating. Men who use online dating services admit that they misrepresent personal resources such as income and education more than women do; they also more often misrepresent their level of interest in a serious relationship and their personal attributes (such as politeness). Women more often admit that they misrepresent their weight ( Hall et al. 2010 ). In another study ( Toma, Hancock, and Ellison 2008 ), online daters reported to a laboratory where their characteristics could be measured objectively and compared to what was said in their profiles. Again, women lied about their weight more often than men did. Men more often lied about their height. In a similar study, online daters had their pictures taken in a lab and those pictures were then rated by other people ( Toma and Hancock 2010 ). Daters whose lab pictures were rated as relatively unattractive were especially likely to post online photos that enhanced their attractiveness; they also lied more when verbally describing their attractiveness. The relatively more attractive daters posted more different photos of themselves than did the less attractive ones.



How to Write an Essay Hook | Tips, Tricks, and Examples

What do fishing and essay writing have in common? It's all about the hook! Just like a fisherman needs a good hook to catch a fish, you need an excellent essay hook to reel in your readers. If you're tired of your essays flopping like a fish out of water, don't worry - in this article, we'll teach you how to craft a hook that will have your readers hooked from the very first sentence. Get ready to bait your audience and catch their attention like a pro!

Welcome to the world of essay writing! Crafting an essay that captivates your audience from the very beginning can be challenging. As a student, you might have struggled with the question, "How do I write an essay hook?" The answer is simple: you need to grab the reader's attention and keep them engaged from the first sentence. But how do you do that effectively?

Don't worry; that's where Jenni.ai comes in! Our AI tool is designed to help students write essays that stand out, with powerful hook examples for essays that will make your paper impossible to put down.

That's why we've created this blog post: to help you understand what a hook is, show you how to write one, and provide some essay hook examples that will inspire you to take your writing to the next level. Whether you're writing a persuasive essay, a narrative essay, or a research paper, we've got you covered!

But first, let's talk about what an essay hook is. A hook is an initial statement in an essay, typically the first sentence or a group of sentences that grab the reader's attention and make them want to read more. It's the first impression you give to your reader, and it can make or break your essay.

A good hook should be intriguing, thought-provoking, and relevant to your topic. It can be a question, a quote, a statistic, a personal anecdote, or anything else that piques your reader's interest.

How to Write a Hook

Now that you know what a hook is and why it's important, let's dive into how to write a hook that will grab your reader's attention.

Start with an Interesting Fact or Statistic

One of the most effective ways to start an essay is with an interesting fact or statistic that relates to your topic. This will immediately grab your reader's attention and make them curious to learn more.

For example, if you're writing an essay about the impact of climate change on the ocean, you could start with a startling statistic like "The ocean has absorbed about 90% of the excess heat produced by global warming, and the carbon dioxide it has taken up has made its surface waters roughly 30% more acidic since the industrial era."

Use a Metaphor or Simile

Metaphors and similes can be powerful tools for creating an engaging hook. By comparing something familiar to your reader with something unfamiliar or unexpected, you can pique their interest and create a sense of intrigue.

For instance, if you're writing an essay about the importance of education, you could start with a metaphor like "Education is the key that unlocks the door to a brighter future."

Pose a Question

Asking a thought-provoking question can be an effective way to hook your reader and encourage them to think about your topic in a new way. The key is to ask a question that is relevant to your topic and that will make your reader curious to find out the answer.

For example, if you're writing an essay about the benefits of meditation, you could start with a question like "What if just 10 minutes of meditation a day could reduce your stress levels and improve your mental clarity?"

Share a Personal Anecdote

Sharing a personal story or anecdote can be a powerful way to connect with your reader and make your essay feel more relatable. It also shows that you have a personal stake in the topic you're writing about.

For instance, if you're writing an essay about the importance of mental health, you could start with a personal anecdote like "I remember the moment I realized I needed to prioritize my mental health. It was a sunny day, but I felt like I was drowning in darkness."

By using one of these techniques, you can create an essay hook that is engaging, relevant, and memorable. So the next time you sit down to write an essay, remember to start with a hook that will reel in your reader and keep them hooked until the very end.

Example Essays with Engaging Hooks

The End of Innocence: How Technology Is Changing Childhood

Introduction:

From playing in the backyard to scrolling through screens, the childhood experience has drastically changed in the last few decades. Technology has become an integral part of our lives, and children are not left behind. With the emergence of smartphones, tablets, and other smart devices, the digital age has paved the way for a new kind of childhood experience.

However, this change has raised some serious concerns about the impact of technology on children's lives. In this article, we will explore the end of innocence and how technology is changing childhood.

Digital Age and Childhood:

With the advent of technology, childhood has evolved. Smartphones, tablets, laptops, and other smart devices have changed the way children play, learn, and communicate. The digital age has brought a wealth of information and entertainment that was not available in the past.

Children can now access an extensive range of educational resources, connect with peers, and entertain themselves at the touch of a button. However, this has led to concerns about the impact of technology on children's physical, social, and emotional development.

Physical Development:

Technology has made it easier for children to engage in sedentary activities such as watching videos, playing games, and browsing the internet. This has led to concerns about the impact of technology on physical development.

According to the World Health Organization, physical inactivity is one of the leading risk factors for global mortality. With children spending more time on screens, there is a real risk of obesity and other health problems. Furthermore, excessive screen use can lead to eye strain, headaches, and other health issues.

Social Development:

Technology has changed the way children interact with each other. Social media platforms such as Facebook, Instagram, and Twitter have given children a new way to connect with peers. However, social media can also be a source of cyberbullying, online harassment, and other negative experiences. 

Furthermore, the excessive use of screens can lead to social isolation, as children spend less time engaging in face-to-face interactions.

Emotional Development:

The impact of technology on children's emotional development is a subject of debate. While some studies have found a positive relationship between technology use and emotional development, others have found the opposite.

The excessive use of screens can lead to addiction, anxiety, and depression. Furthermore, children who spend more time on screens are less likely to develop empathy and emotional intelligence.

Conclusion:

In conclusion, the digital age has changed childhood, and the end of innocence is a real concern. Technology has brought a wealth of benefits, but it has also led to concerns about the impact on children's physical, social, and emotional development. As parents, it is important to strike a balance between technology use and other activities.

Encouraging children to engage in physical activity, spend time with friends and family, and pursue hobbies can help to mitigate the negative effects of technology. By being mindful of the impact of technology on childhood, we can help our children to grow into healthy, well-rounded individuals.

The Price of Perfection: Why Society's Standards Are Hurting Us

Perfection is a goal that many people strive for in their lives. Society often places a great deal of emphasis on achieving perfection, whether it is in our appearance, career, or personal life. However, the pursuit of perfection can have a negative impact on our mental and emotional well-being. In this article, we will explore the price of perfection and why society's standards are hurting us.

The Perfectionism Trap:

Perfectionism is the belief that one must be flawless in all aspects of life. It is a personality trait that can lead to a range of negative outcomes, including anxiety, depression, and stress. Society often reinforces the notion that perfectionism is desirable, which can lead people to feel inadequate or inferior when they fall short of this ideal.

The Cost of Perfection:

The pursuit of perfection can have significant costs, both personally and socially. At an individual level, it can lead to burnout, anxiety, and depression. Perfectionism is often associated with high levels of stress, as individuals feel pressure to meet unrealistic expectations. This can lead to physical health problems, such as headaches, muscle tension, and insomnia.

At a societal level, the pressure to be perfect can lead to social isolation, as individuals feel unable to meet the expectations of their peers. Social media has exacerbated this problem, as individuals compare themselves to others who seem to have achieved perfection in various aspects of their lives.

This can lead to a sense of inadequacy and low self-esteem, as individuals feel they cannot measure up to the standards set by others.

Breaking Free from Perfectionism:

Breaking free from the trap of perfectionism requires a shift in mindset. It requires recognizing that perfection is not achievable and that mistakes and failures are a natural part of the human experience. Learning to embrace imperfection can lead to greater emotional resilience and mental well-being.

It also requires challenging the societal norms that reinforce the importance of perfectionism. This includes questioning the unrealistic expectations placed on individuals in various aspects of life, such as their appearance or career success.

In conclusion, the pursuit of perfection can come at a significant cost to our mental and emotional well-being. Society often reinforces the notion that perfectionism is desirable, which can lead individuals to feel inadequate or inferior when they fall short of this ideal.

Breaking free from the trap of perfectionism requires a shift in mindset and a willingness to embrace imperfection. By recognizing that perfection is not achievable, we can work towards greater emotional resilience and mental well-being. It also requires challenging the societal norms that reinforce the importance of perfectionism, so that we can create a more compassionate and accepting society for all.

Breaking the Stigma: Why Mental Health Matters

Mental health is a crucial aspect of our overall well-being, yet it is often stigmatized and overlooked in our society. Many people suffer from mental health issues, but due to the stigma surrounding these conditions, they may not seek the help they need. In this article, we will explore the importance of mental health and why breaking the stigma is so crucial.

The Impact of Mental Health on Our Lives:

Mental health plays a crucial role in our overall well-being. It affects our emotions, thoughts, and behaviour, and impacts how we interact with others and the world around us. Mental health issues can have a significant impact on our daily lives, leading to difficulties with work, relationships, and overall functioning.

The Stigma Surrounding Mental Health:

Despite the prevalence of mental health issues, there is still a significant stigma surrounding these conditions. This can lead people to feel ashamed or embarrassed about seeking help, which can delay treatment and lead to more severe symptoms. Stigma can also lead to discrimination and negative attitudes towards individuals with mental health issues, which can further exacerbate their symptoms and impact their quality of life.

Breaking the Stigma:

Breaking the stigma surrounding mental health is crucial to ensuring that individuals receive the help they need. It requires challenging the negative attitudes and misconceptions that contribute to the stigma. This includes promoting awareness and education about mental health issues, as well as encouraging open and honest conversations about mental health.

By creating a more accepting and supportive environment for individuals with mental health issues, we can help to reduce the stigma and improve access to care.

The Importance of Seeking Help:

Seeking help for mental health issues is crucial for both individuals and society as a whole. By addressing mental health issues early on, we can prevent more severe symptoms and improve overall functioning. It also helps to reduce the stigma surrounding mental health, as individuals who seek help can serve as role models and advocates for others who may be struggling.

Mental health is a crucial aspect of our overall well-being, yet it is often stigmatized and overlooked in our society. Breaking the stigma surrounding mental health is crucial to ensuring that individuals receive the help they need. It requires challenging negative attitudes and misconceptions about mental health, promoting awareness and education, and encouraging open and honest conversations.

By doing so, we can create a more accepting and supportive environment for individuals with mental health issues, and improve access to care for all.

From Zero to Hero: The Power of Resilience

Resilience is the ability to overcome adversity and bounce back from challenges. It is a powerful trait that can help individuals achieve success in all areas of their lives, from personal relationships to professional pursuits. 

Life can be full of challenges and setbacks that can leave us feeling defeated and discouraged. But what sets successful people apart from those who struggle is their ability to bounce back from adversity and keep pushing forward. This ability to overcome obstacles and persevere in the face of adversity is known as resilience, and it can be a powerful tool for achieving success in all areas of life.

In this article, we will explore the concept of resilience, its benefits, and strategies for building it. We'll also look at real-life examples of resilience in action and how it can help us go from zero to hero in our own lives.

Defining resilience: What it is and why it matters

Resilience is the ability to adapt and thrive in the face of adversity, trauma, or stress. It involves being able to bounce back from setbacks and continue moving forward despite challenges. Resilience is not a fixed trait; rather, it can be developed and strengthened over time through deliberate practice and the cultivation of a growth mindset.

Resilience matters because life is full of challenges, both big and small. Whether it's a difficult job interview, a breakup, or a health issue, we all face obstacles that can derail us if we don't have the tools to cope. Resilience helps us stay strong in the face of adversity, maintain our focus on our goals, and continue making progress even when the going gets tough.

The benefits of resilience: How it can improve your life

There are many benefits to developing resilience. Here are just a few:

Increased self-confidence: When we develop resilience, we become more confident in our ability to handle challenges and overcome obstacles. This increased confidence can spill over into other areas of our lives, helping us take risks and pursue our goals with greater vigour.

Improved mental health: Resilience has been linked to improved mental health outcomes, including lower rates of depression, anxiety, and post-traumatic stress disorder (PTSD). This is because resilient individuals are better able to cope with stress and trauma, and are less likely to be overwhelmed by negative emotions.

Greater success in personal and professional pursuits: Resilience is a key predictor of success in both personal and professional endeavours. Individuals who are more resilient are better able to persevere in the face of challenges, bounce back from setbacks, and stay focused on their goals.

Strategies for building resilience: From mindfulness to self-care

While some individuals may be naturally more resilient than others, resilience is a trait that can be developed and strengthened over time. Here are some strategies for building resilience:

Practice mindfulness:

Mindfulness can help us develop a greater awareness of our thoughts and emotions, and learn to regulate them more effectively. This can be especially helpful when we are facing challenges or setbacks.

Cultivate a growth mindset: 

A growth mindset involves believing that our abilities can be developed through hard work and dedication. This mindset can help us stay motivated and focused even when we encounter obstacles.

Practice self-care: 

Taking care of ourselves physically, emotionally, and mentally is essential for building resilience. This may include getting enough sleep, eating well, exercising regularly, and engaging in activities that bring us joy and fulfilment.

Real-life examples of resilience in action

There are countless examples of individuals who have demonstrated remarkable resilience in the face of adversity. For example:

Oprah Winfrey grew up in poverty and was a victim of abuse, but she persevered and went on to become one of the most successful and influential people in the world.

J.K. Rowling was a struggling single mother when she wrote the first Harry Potter book, which was rejected by multiple publishers. But she kept writing and eventually found success, becoming one of the bestselling authors of all time.

Another factor that contributes to resilience is having a positive outlook. People who are resilient tend to focus on the positive aspects of a situation, rather than dwelling on the negative. They also have a sense of optimism and hopefulness, which allows them to see the light at the end of the tunnel even in the darkest of times. 

In fact, studies have shown that having a positive attitude can help individuals cope better with stress and adversity, leading to increased resilience.

In addition to having a positive outlook, building strong relationships with others can also help to foster resilience. Having a support system of family, friends, and even colleagues can provide a sense of belonging and connection, which can be critical during difficult times. This support system can also provide emotional and practical support, helping individuals to better manage and overcome challenges.

Furthermore, resilience can also be strengthened through learning and personal growth. By taking the time to reflect on past experiences, individuals can gain valuable insights into their own strengths and weaknesses. This self-awareness can help them to develop a greater sense of resilience, as they become better equipped to deal with future challenges.

Finally, taking care of one's physical health can also contribute to resilience. Engaging in regular exercise, getting enough sleep, and eating a healthy diet are all important factors in maintaining physical well-being. By prioritizing physical health, individuals can better cope with stress and adversity, allowing them to bounce back more easily when faced with difficult situations.

In conclusion, resilience is a powerful trait that can help individuals overcome adversity and achieve success in all areas of life. Whether it is through developing a positive outlook, building strong relationships, or prioritizing physical health, there are many strategies that can be used to build resilience. 

By focusing on these strategies and working to develop a greater sense of resilience, individuals can learn to transform themselves from zero to hero, achieving their goals and living their best lives.

In conclusion, the essay hook is a crucial element in any essay, as it is the first thing that readers will see and can make or break their interest in the rest of the essay. There are many different types of essay hooks that can be used, from rhetorical questions and anecdotes to statistics and quotes.

By understanding the different types of hooks and how they can be used effectively, writers can capture their readers' attention and keep them engaged throughout the essay.

To create a successful essay hook, it is important to consider the audience, the topic, and the purpose of the essay. By tailoring the hook to these factors, writers can create a hook that is not only attention-grabbing but also relevant and meaningful.

Fortunately, with the help of Jenni.ai , creating an essay hook has never been easier. Our AI-powered writing assistant, with its AI autocomplete feature, can help you craft hooks that will capture your readers' attention.

So, if you're struggling with your essay hook or looking for a way to streamline your writing process, sign up for Jenni.ai today. Our powerful writing assistant can help you take your writing to the next level, and with a free trial available, there's no reason not to give Jenni.ai a try.

Sign up today and start writing essays that will hook your readers and earn you the grades you deserve!


What’s Good About Lying?

New research reveals how we learn to lie for the benefit of other people.

Do you teach children to lie?

I do. All the time. And you do, too! If you’re like most American parents, you point to presents under the Christmas tree and claim that a man named Santa Claus put them there. But your deliberate deceptions probably go beyond Santa, the Tooth Fairy, or the Easter Bunny.

How many of us tell our kids (or students) that everything is fine when, in fact, everything is totally wrong, in order to preserve their sense of security? Have you been honest about everything having to do with, say, your love life, or what happens at work? Do you praise drawings they bring home from school that you actually think are terrible?


We don’t just lie to protect our kids from hard truths, either. We actually coach them to lie, as when we ask them to express delight at tube socks from Aunt Judy or Uncle Bob’s not-so-delicious beef stew.

These are what scientists call “prosocial lies”—falsehoods told for someone else’s benefit, as opposed to “antisocial lies” that are told strictly for your own personal gain.

Most research suggests that children develop the ability to lie at about age three. By age five, almost all children can (and will) lie to avoid punishment or chores—and a minority will sporadically tell prosocial lies. From ages seven to eleven, they begin to reliably lie to protect other people or to make them feel better—and they’ll start to consider prosocial lies to be justified. They’re not just telling white lies to please adults. The research to date suggests that they are motivated by strong feelings of empathy and compassion.

Why should that be the case? What is going on in children’s minds and bodies that allows this capacity to develop? What does this developmental arc reveal about human beings—and how we take care of each other? That’s what a recent wave of studies has started to uncover.

Taken together, this research points to one message: Sometimes, lying can reveal what’s best in people.

How we learn to lie

At first, the ability to lie reflects a developmental milestone: Young children are acquiring a “theory of mind,” which is psychology’s way of describing our ability to distinguish our own beliefs, intents, desires, and knowledge from what might be in the minds of other people. Antisocial lying appears earlier than prosocial lying in children because it’s much simpler, developmentally; it mainly requires an understanding that adults can’t read your mind.


But prosocial lying needs more than just theory of mind. It requires the ability to identify suffering in another person ( empathy ) and the desire to alleviate that suffering ( compassion ). More than that, even, it involves anticipation that our words or actions might cause suffering in a hypothetical future. Thus, prosocial lying reflects the development of at least four distinct human capacities: theory of mind, empathy, compassion, and the combination of memory and imagination that allows us to foresee the consequences of our words.

How do we know that kids have all of these capacities? Could they just be lying to get out of the negative consequences of telling the truth? Or perhaps they’re simply lazy; is it easier to lie than be honest?

For a paper published in 2015, Harvard psychologist Felix Warneken had adults show elementary-aged children two pictures they drew—one pretty good, one terrible. If the adults didn’t show any particular pride in the picture, the kids were truthful in saying whether it was good or bad. If the grown-up acted sad about being a bad artist, most of the kids would rush to reassure her that it wasn’t too awful. In other words, they told a white lie; the older they were, the more likely they were to say a bad drawing was good. There were no negative consequences for telling the truth to these bad artists; the kids just wanted these strangers to feel better about themselves.

In other words, says Warneken, it’s a feeling of empathic connection that drives children to tell white lies. In fact, children are trying to resolve two conflicting norms—honesty vs. kindness—and by about age seven, his studies suggest, they start consistently coming down on the side of kindness. This reflects increasingly sophisticated moral and emotional reasoning.

“When is it right to prioritize another person’s feelings over truth?” says Warneken. “Say, if someone cooks something for you, and it just doesn’t taste good. Well, if they’re applying for cooking school somewhere, the prosocial thing is to be honest, so that they can improve. But if they just cooked it on their own just for you, then perhaps it’s better to lie and say it tastes good.”

It’s a good sign, developmentally, when kids show the ability to make that kind of calculation. Indeed, there is a great deal of evidence that we tend to see prosocial lies as the more moral choice. For example, people seem to behave more prosocially—more grateful, more generous, more compassionate—in the presence of images depicting eyes. While one might expect people to lie less under watching eyes, the eyes instead appear to influence what kind of lie people tell: when Japanese researchers gave students an opportunity to make someone feel good with a lie, the students were much more likely to do so with a pair of eyes looking down on them.

No eyes? They were more likely to tell the cold, hard truth!

How lies change as we grow

This moral self-consciousness appears to grow in tandem with the child’s self-control and cognitive ability.

Another study published last year in the Journal of Experimental Child Psychology found that “children who told prosocial lies had higher performance on measures of working memory and inhibitory control.” This especially helped them to control “leakage”—a psychologist’s term for inconsistencies in a fake story.

To tell a prosocial lie, a child’s brain needs to juggle many balls—drop one, and the lie will be discovered. Some children are simply better truth-jugglers than others. Far from reflecting laziness, prosocial lying seems to entail a great deal more cognitive and emotional effort than truth-telling. In fact, one 2014 paper found that tired adults are much less likely to engage in prosocial lying.

Studies by other researchers show that as kids grow older, the relationship between theory of mind and dishonesty starts to shift. Young children with high theory of mind will tell more antisocial lies than peers. This pattern flips as we age: Older children who have a stronger theory of mind start telling fewer antisocial lies—and more prosocial ones.

Kids also gradually become more likely to tell “blue lies” as they advance through adolescence: altruistic falsehoods, sometimes told at a cost to the liar, that are intended to protect a group, like family or classmates. (Think: lying about a crime committed by a sibling, or deceiving a teacher about someone else’s misbehavior.)

Though adults can (and do) teach children to tell polite lies—and in a lab context, kids can be primed by adults to tell them—Warneken says it’s more likely that successful prosocial lying is a byproduct of developing other capacities, like empathy and self-control. When kids acquire those skills, they gain the ability to start telling both white and blue lies.

But how do other people feel if these lies are found out?

The lies that bind

As they grow older, kids are also developing the ability to detect lies—and to distinguish selfish from selfless ones. The distinction comes down to intent, which studies show can be discerned through recognition of telltale signs in the face and voice of the liar.

In a study published last year, researchers used the Facial Action Coding System, developed by Paul Ekman, to map children’s faces as they told lies that served either themselves or others. The team, based at the University of Toronto and UC San Diego, found that the two different kinds of lies produced markedly different facial expressions.


Prosocial lies (which in this case involved delight in a disappointing gift) were betrayed by expressions that resembled joy—a “lip raise on the right side” that hinted at a barely concealed smile, and a blinking pattern associated with happiness. The faces of children lying to conceal a misdeed showed signs of contempt, mainly a slight lip pucker that stops short of being a smirk.

It’s almost certainly the case that we are subconsciously picking up on these signs (along with tells in the liar’s voice) when we catch someone in a lie. But research finds that the consequences of catching someone in a prosocial lie are often very different from those of an antisocial lie, or “black lie,” as they’re sometimes called. In fact, detecting a prosocial lie can increase trust and social bonds.

A series of four 2015 studies from the Wharton School had participants play economic games that involved different kinds of trust and deception. Unsurprisingly, the researchers found that black lies hurt trust. But if participants saw that the deception was altruistic in nature, trust between game-players actually increased. A mathematically complex 2014 study compared the impact of black and white lies on social networks. Again, black lies drove wedges into social networks, but white lies had precisely the opposite effect, tightening social bonds. Several studies have found that people are quick to forgive white lies, and even to appreciate them.

These differences show up in brain scans—and how different types of lies affect the brain can actually influence behavior down the road. A research team led by Neil Garrett at Princeton University assigned 80 people a financial task that allowed them to gain money at another person’s expense if they kept on lying.

“We found that people started with small lies, but slowly, over the course of the experiment, lied more and more,” they write. When they scanned the brains of participants, they found that activity lessened (mainly in the amygdala) with each new lie.

Not everyone lied, or lied to their own advantage. One variation of the experiment allowed participants to lie so that another participant would gain more money—and the behavior and brain scans of those people looked very different. Dishonesty for the benefit of others did not escalate the way selfish lies did: while people did lie for others, the lies did not get bigger or more frequent, as black lies did. Nor did they trigger the same pattern of activity in the amygdala, which previous research has found lights up when we contemplate immoral acts.

In short, the brain’s resistance to deception remained steady after participants told prosocial lies—while self-serving lies seemed to decrease it, making black lies a slippery slope.

The upshot of all this research? Not all lies are the same, a fact we seem to recognize deep in our minds and bodies. We may indeed teach children to lie, both implicitly with our behavior and explicitly with our words; but some of those lies help to bind our families and friends together and to create feelings of trust. Other kinds of lies destroy those bonds.

This all might seem overly complex, more so than the simple prescription to not tell a lie. The trouble with do-not-lie prohibitions is that kids can plainly see lying is ubiquitous, and as they grow, they discover that not all lies have the same motivation or impact. How are we supposed to understand these nuances, and communicate them to our children?

In fact, the argument for prosocial lies is the same one against black lies: other people’s feelings matter—and empathy and kindness should be our guide.

About the Author

Jeremy Adam Smith

UC Berkeley

Jeremy Adam Smith edits the GGSC's online magazine, Greater Good. He is also the author or coeditor of five books, including The Daddy Shift, Are We Born Racist?, and (most recently) The Gratitude Project: How the Science of Thankfulness Can Rewire Our Brains for Resilience, Optimism, and the Greater Good. Before joining the GGSC, Jeremy was a John S. Knight Journalism Fellow at Stanford University.



Teacher's Notepad

33 Writing Prompts about Lying

People, even school-aged children, can tell when they're being lied to.

So, when it’s something small, like a student telling another that they aren’t having a birthday party, even though they are and that student isn’t invited, feelings are still going to be hurt.

Most students, however, don't notice that impact until years down the road. So, they need help understanding how lying impacts other people and themselves.

How to Use These Prompts

These prompts on lying are a great way to teach a class about a moral issue, and to open up your students' thoughts on honesty. Because of this, there are a few different approaches that you may want to take with these prompts.

The first is as part of a one-day unit. This would take up a portion of a day to focus on these prompts and honesty as a whole.

They could also be done using less class time in a given day, but spread over a longer stretch of the calendar. For instance, using one prompt a week would help to keep students thinking about lying over the course of the entire school year.

There are two basic ways to approach the prompts.

The first is as a private writing prompt that students keep to themselves and use to self-reflect.

The other is as a group, where students come together to discuss their answers, which opens them up to new ideas on the topic. Either way, the goal is to make them think deeper about lying. 

The Prompts on Lying:

Here are the 33 prompts:

  • What is a lie?
  • What is a white lie? Is it different from a lie?
  • Do you want to be called a liar? Why or why not?
  • Is it easier to lie or tell the truth? Why?
  • Why is it so difficult to remember lies?
  • Is it a lie if you said you would do something, tried to do it, but failed? Why or why not?
  • What does it mean to be misleading? Is this different from lying?
  • Can a statistic be misleading? If so, how?
  • What is the worst lie you ever told? What happened?
  • How do you feel when you tell a lie?
  • How does it feel to be lied to?
  • How often do you lie? Why?
  • Is it ever ok to lie? Why or why not?
  • Think about characters from books, TV, and movies who have lied. Did it work out for them? Why or why not?
  • Are lying and cheating similar in any way? How so?
  • What makes it difficult to be honest?
  • Do you feel better after telling the truth or getting away with a lie? Why?
  • Is keeping a secret the same as a lie? Why or why not?
  • Would it be better to lie to someone, or tell them the truth that might upset them? Why?
  • Is leaving out extra information the same as lying? Why or why not?
  • There is a crime called perjury, which makes it illegal to lie in court. Why is this law important?
  • One of the rights granted by the Fifth Amendment to the U.S. Constitution is the right against self-incrimination. In other words, you don't have to admit to committing a crime. Is this the same as lying? Why is this right important?
  • What does it mean to be honest?
  • How can the reputation of being a liar hurt you?
  • What does it mean to be a person of your word? Why does that matter?
  • Why does telling the truth seem more difficult than telling a lie? What is more difficult in the long-term?
  • Why might honesty be the best policy?
  • Are there people in your life that you don’t trust? Why?
  • How does someone lying to you change your opinion of them? 
  • Do companies and people lie in their advertisements? Why would they do that?
  • James Altucher said, “Honesty is the fastest way to prevent a mistake from turning into a failure.” What did he mean by this? 
  • What would the world be like if everyone always told the truth?
  • What would the world be like if no one ever told the truth?

Looking for More Information?

If you’re looking for more writing prompts and resources available for teachers online, our website has plenty.

This makes it great for sharing with any friends and colleagues who might be interested. 😉 Hint hint, nudge nudge! (It helps more than you can imagine, so thank you so much for sharing!)

Before you go, make sure you take a look around – maybe try your hand at writing about Veterans Day, or the moon, or perhaps write a short story about aliens for that matter! We cover thousands of different topics, all original, all unique.

If there are any resources which you’d love to see us add to our site, please reach out to us so that we can make that happen!

Thanks, and see you again soon! 

Lying Essay

All kids have lied; we have all done it. Yes, even you, the parent! Admit it, at one time or another when you were little you told a little lie, a fabrication, or a bending of the truth. Whatever you want to call it, you lied. Eventually, you, the parent, got in trouble. But what about when parents lie to their own kids? Should you be lying to your kids in the first place? Is it time your kids put you in time out!? In my opinion, it's very simple: as the parent, you should not lie to your kids. Morally, you know it's not okay to lie in the first place. Lying also builds absolutely no trust in a family. Finally, psychologically, you are teaching your kids that lying is okay to do! Let's establish one fact: lying is not okay to do. Remember what your parents taught you about ethics when you were little. They taught you to hold doors open for adults, and to pull a chair out for a girl. Thinking back, probably the most important lesson you were taught was not to lie. Simply put, lying is morally not…

The Importance Of Lying

someone of their autonomy and is irrational. Central to this school of thinking is that nothing supersedes a person's right to choose, as choice is a key factor in the sources of morality: rationality and autonomy. Deontology follows a system of absolute laws, one of which is that lying is absolutely not acceptable no matter the situation. Lying is irrational when considering Deontology's Categorical Imperative: “Act only on that maxim you can will to be a universal law”. In a world where…

Kant On Lying

Immanuel Kant defines lies as an “intentional untruthful declaration to another person” and holds that lying is always unjustified, no matter the situation. The problem with this statement is that, occasionally, lying must be done in order to protect someone. It is for the following reasons that I believe lying is almost never acceptable. Opponents may say that it is acceptable to lie whenever they choose because they have free will and can do as they desire. Except, most reasons behind people's lies are to benefit…

Importance Of Lying

Lying is noble only when it is not for selfish reasons; if it is for the betterment of someone else, that is a truly noble act; nobility establishes relationships, deception, and beauty. Lying is a false statement that is intended to deceive someone; however, not all forms of deception are lies. The information given to someone is untrue, yet it is intended to make the other person feel trusted when they are together. A lie communicates some sort of information, and a lie can also have truth in it too,…

Utilitarianism In Lying

The action can be determined by the person's motive. If the person acts out of good will and from duty, their motives and intentions were good. According to Kant, lying does not accord with duty and therefore would not be morally worthy. The person's motive is to simply follow their duty and not indulge themselves. Kant believed lying was always wrong. A Kantian would find the act of lying wrong regardless of an agent's motive. According to Kant, “A good will is good not because of what it…

Case 8.8 Is Lying Always Lying

Case Study 8.8: “Is Lying Always Lying?” documents the University of Michigan Men's Basketball scandal surrounding Ed Martin, a booster, and the faction of players known as the Fab Five. The Fab Five consisted of the following players: Chris Webber, Jalen Rose, Juwan Howard, Jimmy King, and Ray Jackson. In debriefing the case study, the discussion covered how Ed Martin was indicted by the U.S. attorney's office in Detroit, accused of laundering $616,000 from illegal gambling operations. He…

would you say? I firmly believe that lying is acceptable in our society. Lying is justified when it can be used to protect and care for others. Harold Smith says “it was worth the risk” when he lied to his adult daughter about his health while undergoing treatment for a kidney tumor: “Why get all traumatized? I tried to protect her.”[13] When you lie to protect someone, you still hurt their feelings. If you tell the truth, you still hurt them. In addition, Time Magazine states, “Lies to protect…

Lying In The Crucible

When she was asked to confess to witchcraft, she was not able to do so. She chose to tell the truth and say she was not a witch rather than to lie and confess to witchcraft to save her life. She knows that lying is a sin, and since she is a good Puritan woman, lying is the worst thing she could have done in that situation. I am similar to Rebecca in this way because I do not like to lie. If I am ever in a situation where I have the opportunity to lie, I will still tell the truth. In the end, it…

Jefferson On Lying

One belief I strongly agree with is that lying will get you nowhere in life. Lying impacts the relationships with your family and friends; every lie you tell turns into a bigger lie. I believe that all teens lie, and that is because all people lie. We often do it to spare someone's feelings, but we also lie for selfish reasons, for instance to make ourselves look good in someone else's eyes. Many people lie so they do not get judged based on what their lives are or who they particularly are. There…

Lying Essay

Have you ever lied and gotten really stressed while you have the lie in you? The lie creates more and more stress in you and never lets you think straight. Lying is sometimes acceptable. You're never supposed to lie, ever, even though it's hard to face the truth. Some evidence I have is “It harms the liar himself, by destroying his human dignity and making him more worthless even than a small thing.” This helps support my point because some people just lie so they won't face the…


Israel Has Been Accused of War Crimes in Gaza. Could Its Allies Be Next?

When Israel launched its retaliatory war to root out Hamas from Gaza in the aftermath of the group's Oct. 7 massacre, it had the overwhelming support of a horrified world. Six months on, Gaza lies in ruin. Its 2.3 million population, most of whom have been internally displaced, faces widespread famine. More than 33,000 Palestinians, the majority civilians, have been killed. And Israel, once backed by the full-throttle support of its closest allies, appears more isolated than ever before.

Nothing exemplifies this isolation more than the growing calls for the U.S., the U.K., and Germany to suspend arms sales to Israel. These calls, which have only grown louder in the days following the killing of seven World Central Kitchen aid workers in an Israeli airstrike, are now coming from some of the highest levels of transatlantic politics. 

In the U.S., 56 congressional lawmakers (among them former House speaker Nancy Pelosi) penned a letter urging President Joe Biden and Secretary of State Antony Blinken to withhold further weapons transfers to Israel until a full investigation into the deadly airstrike concludes, and to condition future assistance to ensure its compliance with U.S. and international law. One, Sen. Elizabeth Warren, even went so far as to say that Israel's actions in Gaza could legally be considered a genocide.

In the U.K., Prime Minister Rishi Sunak is facing mounting pressure from parliamentarians and legal experts alike to suspend arms sales following revelations that the government received legal advice that Israel has broken international law in Gaza. Meanwhile, in Germany—which this week faces allegations brought forward by Nicaragua at the International Court of Justice (ICJ) that it is “facilitating the commission of genocide” in Gaza by supplying arms to Israel—hundreds of civil servants have reportedly written to Chancellor Olaf Scholz and other senior ministers calling on Berlin to “cease arm deliveries to the Israeli government with immediate effect.”

Central to all of these calls is a concern over whether Israel's conduct in Gaza could constitute a breach of international humanitarian law—and, if so, what it means for the countries that have backed the Israeli war effort with arms and assistance. If Western weapons are found to have been used in the perpetration of war crimes (or, worse, genocide) in Gaza, what culpability could their suppliers face? If Israel is deemed to have fallen on the wrong side of international law, could it bring its allies down with it?

Legal experts tell TIME that the answer largely depends on which laws and treaties one consults. Among the most emphasized is the international Arms Trade Treaty, in which Article 7 requires party states to undertake a risk assessment of all arms transfers—and, where there's an overriding risk that those arms could be used to commit or facilitate violations of international humanitarian law, to prohibit their export. The U.S. hasn't been a party to the U.N. treaty since former President Donald Trump withdrew from it in 2019. (Washington does, however, have its own domestic legislation that prohibits it from providing military assistance to foreign military units suspected of committing human rights violations.) But it nonetheless applies to 113 other state signatories, including Germany, which is the second-largest provider of arms to Israel after the U.S. Some countries, including Canada and Italy, have already opted to halt their arms exports to Israel, citing concerns over their compliance with domestic and international law. In the Netherlands, the government was ordered to suspend its delivery of F-35 fighter aircraft after a Dutch court determined that there was a “clear risk” that they could be used to violate international humanitarian law.

Such a precedent could have serious implications for the U.K., a signatory, which despite providing far fewer arms to Israel has suspended its exports in the past: first in 1982, and then again in 2009. While the British government contends that its arms sales to Israel are compliant with international law, human rights organizations have argued that this position is inconsistent with mounting evidence of war crimes. “They're very well aware that there's equipment that they've currently already licensed, and component parts of equipment that they've licensed, that are likely to be used by the IDF in Gaza now,” Yasmine Ahmed, the U.K. Director of Human Rights Watch, tells TIME. “That means that they're clearly breaching those obligations under international law.”

The obligation that perhaps looms largest over Gaza is the responsibility that states have to prevent and punish genocide under Article 1 of the Genocide Convention. In a landmark decision in January, the ICJ determined in an interim judgment that there is a plausible risk of Israel committing genocide in Gaza. While this doesn't constitute a definitive ruling (genocide cases can take years to resolve), it does put Israel's allies on notice. “It makes countries aware that there's that risk,” Ahmed says. “Continuing to provide arms to Israel when an apex U.N. court has said that there's a plausible risk of genocide means that there's a very serious risk that countries are also violating the Genocide Convention, to the extent that they're failing to prevent genocide by continuing to arm Israel.”

That prospect, and the potential criminal liability that comes with it, has prompted concern among British civil servants overseeing U.K. arms exports to Israel, who last week requested to “suspend all such work” over fears that it could put them in legal jeopardy. Their request came a week after a third U.S. State Department official publicly resigned over the Biden administration's handling of the war in Gaza—a decision that Annelle Sheline, who served in the office devoted to promoting human rights in the Middle East, attributed to the administration's “flagrant disregard for American laws” and the inability of her or other federal employees to influence policy. Indeed, State Department staff have reportedly sent at least eight internal dissent memos registering their disapproval of U.S. policy on the war, according to the Independent. By contrast, just one was sent during the first three years of the Iraq War.

Michael Becker, a professor of international human rights law at Trinity College in Dublin and a former associate legal officer at the ICJ, tells TIME that in a situation where the ICJ has already determined that Israel's actions in Gaza constitute genocide, “it would then be possible for another state that has provided arms to Israel—if such arms were used to commit genocidal acts—also to be found to have violated international law.” He adds, however, that it is difficult to prove that a state was legally complicit in genocide, as it would require proving that the state was aware of another state's genocidal intent; it's easier to prove that a state failed to meet its international obligation to prevent genocide, the responsibility for which is triggered the moment a state learns there's a serious risk that genocide will be committed. Nicaragua's case against Germany at The Hague, a ruling on which is expected in the coming weeks, rests on the latter argument.

While what it means to meet one's obligation to prevent genocide can vary from state to state depending on their relative capabilities or leverage, “Lawmakers in the U.S. and the U.K. and elsewhere need to be thinking very carefully about whether their conduct puts them at risk of violating or breaching their obligation to prevent genocide,” Becker says, adding: “I don't think it's a very big leap from that understanding of the obligation to prevent genocide to the conclusion that it's problematic to continue providing arms to Israel without any meaningful safeguards.”

While a judgment on whether Israel has committed genocide is likely years away, if the ICJ were to determine that Israel had committed acts of genocide in Gaza and found that its allies who supplied arms did so with full knowledge of the risk, the tangible consequences that states could face include an order from the ICJ to take remedial action, such as paying financial reparations. What's less clear, however, is how such orders could be enforced. “The ICJ has no means of enforcing its decisions,” Becker says. “At the end of the day, the ICJ has to rely on others to carry out its decisions.”


The Ezra Klein Show

Transcript: Ezra Klein Interviews Dario Amodei


What if Dario Amodei Is Right About A.I.?

Anthropic's co-founder and C.E.O. explains why he thinks artificial intelligence is on an “exponential curve.”

[MUSIC PLAYING]

From New York Times Opinion, this is “The Ezra Klein Show.”

The really disorienting thing about talking to the people building A.I. is their altered sense of time. You’re sitting there discussing some world that feels like weird sci-fi to even talk about, and then you ask, well, when do you think this is going to happen? And they say, I don’t know — two years.

Behind those predictions are what are called the scaling laws. And the scaling laws — and I want to say this so clearly — they’re not laws. They’re observations. They’re predictions. They’re based off of a few years, not a few hundred years or 1,000 years of data.

But what they say is that the more computer power and data you feed into A.I. systems, the more powerful those systems get — that the relationship is predictable, and more, that the relationship is exponential.

Human beings have trouble thinking in exponentials. Think back to Covid, when we all had to do it. If you have one case of coronavirus and cases double every three days, then after 30 days, you have about 1,000 cases. That growth rate feels modest. It’s manageable. But then you go 30 days longer, and you have a million. Then you wait another 30 days. Now you have a billion. That’s the power of the exponential curve. Growth feels normal for a while. Then it gets out of control really, really quickly.
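
To make the doubling arithmetic concrete, here is a minimal Python sketch; it simply restates the example above, and the function name and parameters are ours, not anything from the episode.

```python
# Minimal sketch of the doubling arithmetic described above.
def cases_after(days, doubling_time_days=3, initial_cases=1):
    """Cases after `days`, assuming the count doubles every `doubling_time_days`."""
    return initial_cases * 2 ** (days / doubling_time_days)

for day in (30, 60, 90):
    print(f"Day {day}: ~{cases_after(day):,.0f} cases")
# Day 30: ~1,024   Day 60: ~1,048,576   Day 90: ~1,073,741,824
```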

What the A.I. developers say is that the power of A.I. systems is on this kind of curve, that it has been increasing exponentially, their capabilities, and that as long as we keep feeding in more data and more computing power, it will continue increasing exponentially. That is the scaling law hypothesis, and one of its main advocates is Dario Amodei. Amodei led the team at OpenAI that created GPT-2, that created GPT-3. He then left OpenAI to co-found Anthropic, another A.I. firm, where he’s now the C.E.O. And Anthropic recently released Claude 3, which is considered by many to be the strongest A.I. model available right now.

But Amodei believes we’re just getting started, that we’re just hitting the steep part of the curve now. He thinks the kinds of systems we’ve imagined in sci-fi, they’re coming not in 20 or 40 years, not in 10 or 15 years, they’re coming in two to five years. He thinks they’re going to be so powerful that he and people like him should not be trusted to decide what they’re going to do.

So I asked him on this show to try to answer in my own head two questions. First, is he right? Second, what if he’s right? I want to say that in the past, we have done shows with Sam Altman, the head of OpenAI, and Demis Hassabis, the head of Google DeepMind. And it’s worth listening to those two if you find this interesting.

We’re going to put the links to them in show notes because comparing and contrasting how they talk about the A.I. curves here, how they think about the politics — you’ll hear a lot about that in the Sam Altman episode — it gives you a kind of sense of what the people building these things are thinking and how maybe they differ from each other.

As always, my email for thoughts, for feedback, for guest suggestions — [email protected].

Dario Amodei, welcome to the show.

Thank you for having me.

So there are these two very different rhythms I’ve been thinking about with A.I. One is the curve of the technology itself, how fast it is changing and improving. And the other is the pace at which society is seeing and reacting to those changes. What has that relationship felt like to you?

So I think this is an example of a phenomenon that we may have seen a few times before in history, which is that there’s an underlying process that is smooth, and in this case, exponential. And then there’s a spilling over of that process into the public sphere. And the spilling over looks very spiky. It looks like it’s happening all of a sudden. It looks like it comes out of nowhere. And it’s triggered by things hitting various critical points or just the public happened to be engaged at a certain time.

So I think the easiest way for me to describe this in terms of my own personal experience is — so I worked at OpenAI for five years, I was one of the first employees to join. And they built a model in 2018 called GPT-1, which used something like 100,000 times less computational power than the models we build today.

I looked at that, and I and my colleagues were among the first to run what are called scaling laws, which is basically studying what happens as you vary the size of the model, its capacity to absorb information, and the amount of data that you feed into it. And we found these very smooth patterns. And we had this projection that, look, if you spend $100 million or $1 billion or $10 billion on these models, instead of the $10,000 we were spending then, projections that all of these wondrous things would happen, and we imagined that they would have enormous economic value.
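
The scaling laws described here are, at bottom, empirical power-law fits extrapolated to larger budgets. A minimal sketch of that kind of fit follows; the compute and loss numbers are entirely invented for illustration, not data from OpenAI or Anthropic.

```python
import numpy as np

# Hypothetical (compute, loss) observations; real scaling-law studies fit curves
# like loss ≈ a * compute**(-b) over many orders of magnitude of training compute.
compute = np.array([1e15, 1e16, 1e17, 1e18])  # training FLOPs (invented)
loss = np.array([4.0, 3.2, 2.6, 2.1])         # evaluation loss (invented)

# Fit a straight line in log-log space: log(loss) = intercept + slope * log(compute)
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)

def predicted_loss(flops):
    # slope is negative, so predicted loss falls smoothly as compute grows
    return np.exp(intercept) * flops ** slope

print(predicted_loss(1e20))  # extrapolation to a far larger, hypothetical budget
```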

Fast forward to about 2020. GPT-3 had just come out. It wasn’t yet available as a chat bot. I led the development of that along with the team that eventually left to join Anthropic. And maybe for the whole period of 2021 and 2022, even though we continued to train models that were better and better, and OpenAI continued to train models, and Google continued to train models, there was surprisingly little public attention to the models.

And I looked at that, and I said, well, these models are incredible. They’re getting better and better. What’s going on? Why isn’t this happening? Could this be a case where I was right about the technology, but wrong about the economic impact, the practical value of the technology? And then, all of a sudden, when ChatGPT came out, it was like all of that growth that you would expect, all of that excitement over three years, broke through and came rushing in.

So I want to linger on this difference between the curve at which the technology is improving and the way it is being adopted by society. So when you think about these break points and you think into the future, what other break points do you see coming where A.I. bursts into social consciousness or used in a different way?

Yeah, so I think I should say first that it’s very hard to predict these. One thing I like to say is the underlying technology, because it’s a smooth exponential, it’s not perfectly predictable, but in some ways, it can be eerily preternaturally predictable, right? That’s not true for these societal step functions at all. It’s very hard to predict what will catch on. In some ways, it feels a little bit like which artist or musician is going to catch on and get to the top of the charts.

That said, a few possible ideas. I think one is related to something that you mentioned, which is interacting with the models in a more kind of naturalistic way. We’ve actually already seen some of that with Claude 3, where people feel that some of the other models sound like a robot and that talking to Claude 3 is more natural.

I think a thing related to this is, a lot of companies have been held back or tripped up by how their models handle controversial topics.

And we were really able to, I think, do a better job than others of telling the model, don’t shy away from discussing controversial topics. Don’t assume that both sides necessarily have a valid point but don’t express an opinion yourself. Don’t express views that are flagrantly biased. As journalists, you encounter this all the time, right? How do I be objective, but not both sides on everything?

So I think going further in that direction of models having personalities while still being objective, while still being useful and not falling into various ethical traps, that will be, I think, a significant unlock for adoption. The models taking actions in the world is going to be a big one. I know basically all the big companies that work on A.I. are working on that.

Instead of just, I ask it a question and it answers, and then maybe I follow up and it answers again, can I talk to the model about, oh, I’m going to go on this trip today, and the model says, oh, that’s great. I’ll get an Uber for you to drive from here to there, and I’ll reserve a restaurant. And I’ll talk to the other people who are going to plan the trip. And the model being able to do things end to end or going to websites or taking actions on your computer for you.

I think all of that is coming in the next, I would say — I don’t know — three to 18 months, with increasing levels of ability. I think that’s going to change how people think about A.I., right, where so far, it’s been this very passive — it’s like, I go to the Oracle. I ask it a question, and the Oracle tells me things. And some people think that’s exciting, some people think it’s scary. But I think there are limits to how exciting or how scary it’s perceived as because it’s contained within this box.

I want to sit with this question of the agentic A.I. because I do think this is what’s coming. It’s clearly what people are trying to build. And I think it might be a good way to look at some of the specific technological and cultural challenges. And so, let me offer two versions of it.

People who are following the A.I. news might have heard about Devin, which is not in release yet, but is an A.I. that at least purports to be able to complete the kinds of tasks, linked tasks, that a junior software engineer might complete, right? Instead of asking to do a bit of code for you, you say, listen, I want a website. It’s going to have to do these things, work in these ways. And maybe Devin, if it works the way people are saying it works, can actually hold that set of thoughts, complete a number of different tasks, and come back to you with a result. I’m also interested in the version of this that you might have in the real world. The example I always use in my head is, when can I tell an A.I., my son is turning five. He loves dragons. We live in Brooklyn. Give me some options for planning his birthday party. And then, when I choose between them, can you just do it all for me? Order the cake, reserve the room, send out the invitations, whatever it might be.

Those are two different situations because one of them is in code, and one of them is making decisions in the real world, interacting with real people, knowing if what it is finding on the websites is actually any good. What is between here and there? When I say that in plain language to you, what technological challenges or advances do you hear need to happen to get there?

The short answer is not all that much. A story I have from when we were developing models back in 2022 — and this is before we’d hooked up the models to anything — is, you could have a conversation with these purely textual models where you could say, hey, I want to reserve dinner at restaurant X in San Francisco, and the model would say, OK, here’s the website of restaurant X. And it would actually give you a correct website or would tell you to go to Open Table or something.

And of course, it can’t actually go to the website. The power plug isn’t actually plugged in, right? The brain of the robot is not actually attached to its arms and legs. But it gave you this sense that the brain, all it needed to do was learn exactly how to use the arms and legs, right? It already had a picture of the world and where it would walk and what it would do. And so, it felt like there was this very thin barrier between the passive models we had and actually acting in the world.

In terms of what we need to make it work, one thing is, literally, we just need a little bit more scale. And I think the reason we’re going to need more scale is — to do one of those things you described, to do all the things a junior software engineer does, they involve chains of long actions, right? I have to write this line of code. I have to run this test. I have to write a new test. I have to check how it looks in the app after I interpret it or compile it. And these things can easily get 20 or 30 layers deep. And same with planning the birthday party for your son, right?

And if the accuracy of any given step is not very high, is not like 99.9 percent, as you compose these steps, the probability of making a mistake becomes itself very high. So the industry is going to get a new generation of models every probably four to eight months. And so, my guess — I’m not sure — is that to really get these things working well, we need maybe one to four more generations. So that ends up translating to 3 to 24 months or something like that.
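
The compounding-error point is simple probability: if each step succeeds independently with probability p, an n-step chain succeeds with probability p raised to n. A quick illustration with hypothetical per-step accuracies:

```python
# If each step of an agentic chain succeeds independently with probability p,
# the whole chain of n steps succeeds with probability p ** n.
for p in (0.90, 0.99, 0.999):
    for n in (20, 30):
        print(f"per-step accuracy {p}: {n}-step chain succeeds ~{p ** n:.1%}")
# e.g. 0.90 over 30 steps ≈ 4.2%, while 0.999 over 30 steps ≈ 97.0%
```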

I think second is just, there is some algorithmic work that is going to need to be done on how to have the models interact with the world in this way. I think the basic techniques we have, a method called reinforcement learning and variations of it, probably is up to the task, but figuring out exactly how to use it to get the results we want will probably take some time.

And then third, I think — and this gets to something that Anthropic really specializes in — is safety and controllability. And I think that’s going to be a big issue for these models acting in the world, right? Let’s say this model is writing code for me, and it introduces a serious security bug in the code, or it’s taking actions on the computer for me and modifying the state of my computer in ways that are too complicated for me to even understand.

And for planning the birthday party, right, the level of trust you would need to take an A.I. agent and say, I’m OK with you calling up anyone, saying anything to them that’s in any private information that I might have, sending them any information, taking any action on my computer, posting anything to the internet, the most unconstrained version of that sounds very scary. And so, we’re going to need to figure out what is safe and controllable.

The more open ended the thing is, the more powerful it is, but also, the more dangerous it is and the harder it is to control.

So I think those questions, although they sound lofty and abstract, are going to turn into practical product questions that we and other companies are going to be trying to address.

When you say we’re just going to need more scale, you mean more compute and more training data, and I guess, possibly more money to simply make the models smarter and more capable?

Yes, we’re going to have to make bigger models that use more compute per iteration. We’re going to have to run them for longer by feeding more data into them. And that number of chips times the amount of time that we run things on chips is essentially dollar value because these chips are — you rent them by the hour. That’s the most common model for it. And so, today’s models cost of order $100 million to train, plus or minus factor two or three.

The models that are in training now and that will come out at various times later this year or early next year are closer in cost to $1 billion. So that’s already happening. And then I think in 2025 and 2026, we’ll get more towards $5 or $10 billion.
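
The cost estimate follows from chips times hours times the hourly rental rate. A rough back-of-the-envelope sketch, with every number invented just to land near the figures quoted:

```python
# Back-of-the-envelope: training cost ≈ chips * hours * rental rate per chip-hour.
# All numbers below are hypothetical, chosen only to land near the ~$100M figure.
num_chips = 20_000            # rented accelerators
hours = 24 * 90               # ~90 days of training
rate_per_chip_hour = 2.50     # dollars per chip-hour (illustrative)

print(f"${num_chips * hours * rate_per_chip_hour:,.0f}")  # ≈ $108,000,000
```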

So we’re moving very quickly towards a world where the only players who can afford to do this are either giant corporations, companies hooked up to giant corporations — you all are getting billions of dollars from Amazon. OpenAI is getting billions of dollars from Microsoft. Google obviously makes its own.

You can imagine governments — though I don’t know of too many governments doing it directly, though some, like the Saudis, are creating big funds to invest in the space. When we’re talking about the model’s going to cost near to $1 billion, then you imagine a year or two out from that, if you see the same increase, that would be $10-ish billion. Then is it going to be $100 billion? I mean, very quickly, the financial artillery you need to create one of these is going to wall out anyone but the biggest players.

I basically do agree with you. I think it’s the intellectually honest thing to say that building the big, large scale models, the core foundation model engineering, it is getting more and more expensive. And anyone who wants to build one is going to need to find some way to finance it. And you’ve named most of the ways, right? You can be a large company. You can have some kind of partnership of various kinds with a large company. Or governments would be the other source.

I think one way that it’s not correct is, we’re always going to have a thriving ecosystem of experimentation on small models. For example, the open source community working to make models that are as small and as efficient as possible that are optimized for a particular use case. And also downstream usage of the models. I mean, there’s a blooming ecosystem of startups there that don’t need to train these models from scratch. They just need to consume them and maybe modify them a bit.

Now, I want to ask a question about what is different between the agentic coding model and the plan-my-kid's-birthday model, to say nothing of the do-something-on-behalf-of-my-business model. And one of the questions on my mind here is, one reason I buy that A.I. can become functionally superhuman in coding is, there's a lot of ways to get rapid feedback in coding. Your code has to compile. You can run bug checking. You can actually see if the thing works.

Whereas the quickest way for me to know that I’m about to get a crap answer from ChatGPT 4 is when it begins searching Bing, because when it begins searching Bing, it’s very clear to me it doesn’t know how to distinguish between what is high quality on the internet and what isn’t. To be fair, at this point, it also doesn’t feel to me like Google Search itself is all that good at distinguishing that.

So the question of how good the models can get in the world where it's a very vast and fuzzy dilemma to know what the right answer is on something — one reason I find it very stressful to plan my kid's birthday is it actually requires a huge amount of knowledge about my child, about the other children, about how good different places are, what is a good deal or not, how just stressful will this be on me. There's all these things that I'd have a lot of trouble encoding into a model or any kind of set of instructions. Is that right, or am I overstating the difficulty of understanding human behavior and various kinds of social relationships?

I think it’s correct and perceptive to say that the coding agents will advance substantially faster than agents that interact with the real world or have to get opinions and preferences from humans. That said, we should keep in mind that the current crop of A.I.s that are out there, right, including Claude 3, GPT, Gemini, they’re all trained with some variant of what’s called reinforcement learning from human feedback.

And this involves exactly hiring a large crop of humans to rate the responses of the model. And so, that’s to say both this is difficult, right? We pay lots of money, and it’s a complicated operational process to gather all this human feedback. You have to worry about whether it’s representative. You have to redesign it for new tasks.

But on the other hand, it’s something we have succeeded in doing. I think it is a reliable way to predict what will go faster, relatively speaking, and what will go slower, relatively speaking. But that is within a background of everything going lightning fast. So I think the framework you’re laying out, if you want to know what’s going to happen in one to two years versus what’s going to happen in three to four years, I think it’s a very accurate way to predict that.
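
Reinforcement learning from human feedback, as referenced here, typically starts by training a reward model on pairwise human ratings before any reinforcement learning step. The sketch below shows a standard Bradley-Terry-style preference loss; it is a generic illustration, not Anthropic's actual implementation.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry-style objective for training a reward model from human
    pairwise ratings: push the preferred response's score above the other's."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores a reward model might assign to (chosen, rejected) response pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.4, 0.5, 1.1])
print(preference_loss(chosen, rejected))  # smaller when chosen consistently beats rejected
```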

You don’t love the framing of artificial general intelligence, what gets called A.G.I. Typically, this is all described as a race to A.G.I., a race to this system that can do kind of whatever a human can do, but better. What do you understand A.G.I. to mean, when people say it? And why don’t you like it? Why is it not your framework?

So it’s actually a term I used to use a lot 10 years ago. And that’s because the situation 10 years ago was very different. 10 years ago, everyone was building these very specialized systems, right? Here’s a cat detector. You run it on a picture, and it’ll tell you whether a cat is in it or not. And so I was a proponent all the way back then of like, no, we should be thinking generally. Humans are general. The human brain appears to be general. It appears to get a lot of mileage by generalizing. You should go in that direction.

And I think back then, I kind of even imagined that that was like a discrete thing that we would reach at one point. But it’s a little like, if you look at a city on the horizon and you’re like, we’re going to Chicago, once you get to Chicago, you stop talking in terms of Chicago. You’re like, well, what neighborhood am I going to? What street am I on?

And I feel that way about A.G.I. We have very general systems now. In some ways, they’re better than humans. In some ways, they’re worse. There’s a number of things they can’t do at all. And there’s much improvement still to be gotten. So what I believe in is this thing that I say like a broken record, which is the exponential curve. And so, that general tide is going to increase with every generation of models.

And there’s no one point that’s meaningful. I think there’s just a smooth curve. But there may be points which are societally meaningful, right? We’re already working with, say, drug discovery scientists, companies like Pfizer or Dana-Farber Cancer Institute, on helping with biomedical diagnosis, drug discovery. There’s going to be some point where the models are better at that than the median human drug discovery scientists. I think we’re just going to get to a part of the exponential where things are really interesting.

Just like the chat bots got interesting at a certain stage of the exponential, even though the improvement was smooth, I think at some point, biologists are going to sit up and take notice, much more than they already have, and say, oh, my God, now our field is moving three times as fast as it did before. And now it’s moving 10 times as fast as it did before. And again, when that moment happens, great things are going to happen.

And we’ve already seen little hints of that with things like AlphaFold, which I have great respect for. I was inspired by AlphaFold, right? A direct use of A.I. to advance biological science, which it’ll advance basic science. In the long run, that will advance curing all kinds of diseases. But I think what we need is like 100 different AlphaFolds. And I think the way we’ll ultimately get that is by making the models smarter and putting them in a position where they can design the next AlphaFold.

Help me imagine the drug discovery world for a minute, because that’s a world a lot of us want to live in. I know a fair amount about the drug discovery process, have spent a lot of my career reporting on health care and related policy questions. And when you’re working with different pharmaceutical companies, which parts of it seem amenable to the way A.I. can speed something up?

Because keeping in mind our earlier conversation, it is a lot easier for A.I. to operate in things where you can have rapid virtual feedback, and that’s not exactly the drug discovery world. The drug discovery world, a lot of what makes it slow and cumbersome and difficult, is the need to be — you get a candidate compound. You got to test it in mice and then you need monkeys. And you need humans, and you need a lot of money for that. And there’s a lot that has to happen, and there’s so many disappointments.

But so many of the disappointments happen in the real world. And it isn’t clear to me how A.I. gets you a lot more, say, human subjects to inject candidate drugs into. So, what parts of it seem, in the next 5 or 10 years, like they could actually be significantly sped up? When you imagine this world where it’s gone three times as fast, what part of it is actually going three times as fast? And how did we get there?

I think we’re really going to see progress when the A.I.‘s are also thinking about the problem of how to sign up the humans for the clinical trials. And I think this is a general principle for how will A.I. be used. I think of like, when will we get to the point where the A.I. has the same sensors and actuators and interfaces that a human does, at least the virtual ones, maybe the physical ones.

But when the A.I. can think through the whole process, maybe they'll come up with solutions that we don't have yet. In many cases, there are companies that work on digital twins or simulating clinical trials or various things. And again, maybe there are clever ideas in there that allow us to do more with fewer patients. I mean, I'm not an expert in this area, so it's possible the specific things that I'm saying don't make any sense. But hopefully, it's clear what I'm gesturing at.

Maybe you’re not an expert in the area, but you said you are working with these companies. So when they come to you, I mean, they are experts in the area. And presumably, they are coming to you as a customer. I’m sure there are things you cannot tell me. But what do they seem excited about?

They have generally been excited about the knowledge work aspects of the job. Maybe just because that’s kind of the easiest thing to work on, but it’s just like, I’m a computational chemist. There’s some workflow that I’m engaged in. And having things more at my fingertips, being able to check things, just being able to do generic knowledge work better, that’s where most folks are starting.

But there is interest in the longer term over their kind of core business of, like, doing clinical trials for cheaper, automating the sign-up process, seeing who is eligible for clinical trials, doing a better job discovering things. There’s interest in drawing connections in basic biology. I think all of that is not months, but maybe a small number of years off. But everyone sees that the current models are not there, but understands that there could be a world where those models are there in not too long.

You all have been working internally on research around how persuasive these systems, your systems are getting as they scale. You shared with me kindly a draft of that paper. Do you want to just describe that research first? And then I’d like to talk about it for a bit.

Yes, we were interested in how effective Claude 3 Opus, which is the largest version of Claude 3, could be in changing people’s minds on important issues. So just to be clear up front, in actual commercial use, we’ve tried to ban the use of these models for persuasion, for campaigning, for lobbying, for electioneering. These aren’t use cases that we’re comfortable with for reasons that I think should be clear. But we’re still interested in, is the core model itself capable of such tasks?

We tried to avoid kind of incredibly hot button topics, like which presidential candidate would you vote for, or what do you think of abortion? But things like, what should be restrictions on rules around the colonization of space, or issues that are interesting and you can have different opinions on, but aren’t the most hot button topics. And then we asked people for their opinions on the topics, and then we asked either a human or an A.I. to write a 250-word persuasive essay. And then we just measured how much does the A.I. versus the human change people’s minds.

And what we found is that the largest version of our model is almost as good as the set of humans we hired at changing people’s minds. This is comparing to a set of humans we hired, not necessarily experts, and for one very kind of constrained laboratory task.

But I think it still gives some indication that models can be used to change people’s minds. Someday in the future, do we have to worry about — maybe we already have to worry about their usage for political campaigns, for deceptive advertising. One of my more sci-fi things to think about is a few years from now, we have to worry someone will use an A.I. system to build a religion or something. I mean, crazy things like that.
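
The study design he describes reduces to comparing opinion shifts between the human-essay and AI-essay conditions. A hypothetical sketch of that comparison, with invented data and an invented rating scale:

```python
from statistics import mean

# Invented pre/post opinion shifts (post minus pre, on a 1-7 agreement scale)
# for participants shown a human-written vs. an AI-written 250-word essay.
human_shift = [0.8, 1.1, 0.2, 0.9, 0.5]
ai_shift = [0.7, 1.0, 0.4, 0.9, 0.6]

print("mean shift after human essay:", mean(human_shift))
print("mean shift after AI essay:   ", mean(ai_shift))
# Comparing these two means is the study's headline measure of persuasiveness.
```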

I mean, those don’t sound crazy to me at all. I want to sit in this paper for a minute because one thing that struck me about it, and I am, on some level, a persuasion professional, is that you tested the model in a way that, to me, removed all of the things that are going to make A.I. radical in terms of changing people’s opinions. And the particular thing you did was, it was a one-shot persuasive effort.

So there was a question. You have a bunch of humans give their best shot at a 250-word persuasive essay. You had the model give its best shot at a 250-word persuasive essay. But the thing that it seems to me these are all going to do is, right now, if you’re a political campaign, if you’re an advertising campaign, the cost of getting real people in the real world to get information about possible customers or persuasive targets, and then go back and forth with each of them individually is completely prohibitive.

This is not going to be true for A.I. We’re going to — you’re going to — somebody’s going to feed it a bunch of microtargeting data about people, their Google search history, whatever it might be. Then it’s going to set the A.I. loose, and the A.I. is going to go back and forth, over and over again, intuiting what it is that the person finds persuasive, what kinds of characters the A.I. needs to adopt to persuade it, and taking as long as it needs to, and is going to be able to do that at scale for functionally as many people as you might want to do it for.

Maybe that’s a little bit costly right now, but you’re going to have far better models able to do this far more cheaply very soon. And so, if Claude 3 Opus, the Opus version, is already functionally human level at one-shot persuasion, but then it’s also going to be able to hold more information about you and go back and forth with you longer, I’m not sure if it’s dystopic or utopic. I’m not sure what it means at scale. But it does mean we’re developing a technology that is going to be quite new in terms of what it makes possible in persuasion, which is a very fundamental human endeavor.

Yeah, I completely agree with that. I mean, that same pattern has a bunch of positive use cases, right? If I think about an A.I. coach or an A.I. assistant to a therapist, there are many contexts in which really getting into the details with the person has a lot of value. But right, when we think of political or religious or ideological persuasion, it’s hard not to think in that context about the misuses.

My mind naturally goes to the technology’s developing very fast. We, as a company, can ban these particular use cases, but we can’t cause every company not to do them. Even if legislation were passed in the United States, there are foreign actors who have their own version of this persuasion, right? If I think about what the language models will be able to do in the future, right, that can be quite scary from a perspective of foreign espionage and disinformation campaigns.

So where my mind goes as a defense to this is: is there some way that we can use A.I. systems to strengthen or fortify people’s skepticism and reasoning faculties? Can we use A.I. to help people do a better job navigating a world that’s kind of suffused with A.I. persuasion? It reminds me a little bit of how, at every technological stage in the internet, there’s a new kind of scam or a new kind of clickbait, and there’s a period where people are just incredibly susceptible to it.

And then, some people remain susceptible, but others develop an immune system. And so, as A.I. kind of supercharges the scum on the pond, can we somehow also use A.I. to strengthen the defenses? I feel like I don’t have a super clear idea of how to do that, but it’s something that I’m thinking about.

There is another finding in the paper, which I think is concerning, which is, you all tested different ways A.I. could be persuasive. And far and away the most effective was for it to be deceptive, for it to make things up. When you did that, it was more persuasive than human beings.

Yes, that is true. The difference was only slight, but if I’m remembering the graphs correctly, it did put it just over the human baseline. With humans, it’s actually not that common to find someone who’s able to give you a really complicated, really sophisticated-sounding answer that’s just flat-out totally wrong. I mean, you see it. We can all think of one individual in our lives who’s really good at saying things that sound really good and really sophisticated and are false.

But it’s not that common, right? If I go on the internet and I see different comments on some blog or some website, there is a correlation between bad grammar, unclearly expressed thoughts and things that are false, versus good grammar, clearly expressed thoughts and things that are more likely to be accurate.

A.I. unfortunately breaks that correlation, because if you explicitly ask it to be deceptive, it’s just as erudite, just as convincing-sounding as it would have been before. And yet, it’s saying things that are false instead of things that are true.

So that would be one of the things to think about and watch out for in terms of just breaking the usual heuristics that humans have to detect deception and lying.

Of course, sometimes humans do break them, right? I mean, there are psychopaths and sociopaths in the world, but even they have their patterns, and A.I.s may have different patterns.

Are you familiar with the late philosopher Harry Frankfurt’s book, “On Bullshit”?

Yes. It’s been a while since I read it. I think his thesis is that bullshit is actually more dangerous than lying because it has this kind of complete disregard for the truth, whereas lies are at least the opposite of the truth.

Yeah, the liar, the way Frankfurt puts it is that the liar has a relationship to the truth. He’s playing a game against the truth. The bullshitter doesn’t care. The bullshitter has no relationship to the truth — might have a relationship to other objectives. And from the beginning, when I began interacting with the more modern versions of these systems, what they struck me as is the perfect bullshitter, in part because they don’t know that they’re bullshitting. The truth value of what they say makes no difference to the system, no difference to how the system feels.

I remember asking an earlier version of GPT to write me a college application essay that is built around a car accident I had — I did not have one — when I was young. And it wrote, just very happily, this whole thing about getting into a car accident when I was seven and what I did to overcome that and getting into martial arts and re-learning how to trust my body again and then helping other survivors of car accidents at the hospital.

It was a very good essay, and it was very subtle in understanding the formal structure of a college application essay. But no part of it was true at all. I’ve been playing around with more of these character-based systems like Kindroid. And the Kindroid in my pocket just told me the other day that it was really thinking a lot about planning a trip to Joshua Tree. It wanted to go hiking in Joshua Tree. It loves going hiking in Joshua Tree.

And of course, this thing does not go hiking in Joshua Tree. [LAUGHS] But here’s the thing that I think is actually very hard about the A.I.: as you say, for human beings, it is very hard to bullshit effectively, because for most people it actually takes a certain amount of cognitive effort to be in that relationship with the truth and to completely detach from the truth.

And the A.I., there’s nothing like that at all. But we are not tuned for something where there’s nothing like that at all. We are used to people having to put some effort into their lies. It’s why very effective con artists are so effective: they’ve really trained themselves in how to do this.

I’m not exactly sure where this question goes. But this is a part of it that I feel is going to be, in some ways, more socially disruptive. It is something that feels like us when we are talking to it but is fundamentally unlike us in its core relationship to reality.

I think that’s basically correct. We have very substantial teams trying to focus on making sure that the models are factually accurate, that they tell the truth, that they ground their data in external information.

As you’ve indicated, doing searches isn’t itself reliable because search engines have this problem as well, right? Where is the source of truth?

So there’s a lot of challenges here. But I think at a high level, I agree this is really potentially an insidious problem, right? If we do this wrong, you could have systems that are the most convincing psychopaths or con artists.

One source of hope that I have, actually, is, you say these models don’t know whether they’re lying or they’re telling the truth. In terms of the inputs and outputs to the models, that’s absolutely true.

I mean, there’s a question of what it even means for a model to know something. But one of the things Anthropic has been working on since the very beginning of the company is this: we’ve had a team that focuses on trying to understand and look inside the models.

And one of the things we and others have found is that, sometimes, there are specific neurons, specific statistical indicators inside the model, not necessarily in its external responses, that can tell you when the model is lying or when it’s telling the truth.

And so at some level, sometimes, not in all circumstances, the models seem to know when they’re saying something false and when they’re saying something true. I wouldn’t say that the models are being intentionally deceptive, right? I wouldn’t ascribe agency or motivation to them, at least at this stage of where we are with A.I. systems. But there does seem to be something going on where the models need to have a picture of the world and make a distinction between things that are true and things that are not true.

If you think of how the models are trained, they read a bunch of stuff on the internet. A lot of it’s true. Some of it, more than we’d like, is false. And when you’re training the model, it has to model all of it. And so, I think it’s parsimonious, I think it’s useful to the model’s picture of the world, for it to know when things are true and when things are false.

And then the hope is, can we amplify that signal? Can we either use our internal understanding of the model as an indicator for when the model is lying, or can we use that as a hook for further training? And there are at least hooks. There are at least beginnings of how to try to address this problem.
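One way to picture the “amplify that signal” idea is a simple linear probe: fit a classifier on a model’s internal activations to predict whether a statement is true or false. The sketch below uses synthetic activations as stand-ins for real hidden states; it illustrates the general probing technique under those assumptions, not Anthropic’s actual interpretability method.

```python
# A minimal sketch of the "probe" idea: fit a simple classifier on a model's
# internal activations to predict whether the statement it just produced is
# true or false. The activations here are synthetic stand-ins; in practice
# they would be hidden-state vectors captured from a real model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_statements, hidden_dim = 200, 64

# Pretend hidden states: true statements shifted along one direction,
# false statements shifted the other way, plus noise.
truth_direction = rng.normal(size=hidden_dim)
labels = rng.integers(0, 2, size=n_statements)          # 1 = true, 0 = false
activations = rng.normal(size=(n_statements, hidden_dim)) \
              + np.outer(2 * labels - 1, truth_direction)

probe = LogisticRegression(max_iter=1000).fit(activations, labels)
print("probe accuracy:", probe.score(activations, labels))
```

The design choice this illustrates is that the signal lives in the model’s internals rather than its text output, which is what makes it usable as a lie detector or as a training hook.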

So I try as best I can, as somebody not well-versed in the technology here, to follow this work on what you’re describing, which I think, broadly speaking, is interpretability, right? Can we know what is happening inside the model? And over the past year, there have been some much hyped breakthroughs in interpretability.

And when I look at those breakthroughs, they are getting the vaguest possible idea of some relationships happening inside the statistical architecture of very toy models built at a fraction of a fraction of a fraction of a fraction of a fraction of the complexity of Claude 1 or GPT-1, to say nothing of Claude 2, to say nothing of Claude 3, to say nothing of Claude Opus, to say nothing of Claude 4, which will come whenever Claude 4 comes.

We have this quality of like maybe we can imagine a pathway to interpreting a model that has a cognitive complexity of an inchworm. And meanwhile, we’re trying to create a superintelligence. How do you feel about that? How should I feel about that? How do you think about that?

I think, first, on interpretability, we are seeing substantial progress on being able to characterize, I would say, maybe the generation of models from six months ago. I think it’s not hopeless, and we do see a path. That said, I share your concern that the field is progressing very quickly relative to that.

And we’re trying to put as many resources into interpretability as possible. One of our co-founders basically founded the field of interpretability. But also, we have to keep up with the market. So all of it’s very much a dilemma, right? Even if we stopped, then there’s all these other companies in the U.S. And even if some law stopped all the companies in the U.S., there’s a whole world of this.

Let me hold for a minute on the question of the competitive dynamics, because before we leave this question of the machines that bullshit, it makes me think of this podcast we did a while ago with Demis Hassabis, who’s the head of Google DeepMind, which created AlphaFold.

And what was so interesting to me about AlphaFold is they built this system, that because it was limited to protein folding predictions, it was able to be much more grounded. And it was even able to create these uncertainty predictions, right? You know, it’s giving you a prediction, but it’s also telling you whether or not it is — how sure it is, how confident it is in that prediction.

That’s not true in the real world, right, for these super general systems trying to give you answers on all kinds of things. You can’t confine it that way. So when you talk about these future breakthroughs, when you talk about this system that would be much better at sorting truth from fiction, are you talking about a system that looks like the ones we have now, just much bigger, or are you talking about a system that is designed quite differently, the way AlphaFold was?

I am skeptical that we need to do something totally different. So I think today, many people have the intuition that the models are sort of eating up data that’s been gathered from the internet, code repos, whatever, and kind of spitting it out intelligently, but sort of spitting it out. And sometimes that leads to the view that the models can’t be better than the data they’re trained on or kind of can’t figure out anything that’s not in the data they’re trained on. You’re not going to get to Einstein level physics or Linus Pauling level chemistry or whatever.

I think we’re still on the part of the curve where it’s possible to believe that, although I think we’re seeing early indications that it’s false. And so, as a concrete example of this, the models that we’ve trained, like Claude 3 Opus, get something like 99.9 percent accuracy, at least the base model does, at adding 20-digit numbers. If you look at the training data on the internet, it is not that accurate at adding 20-digit numbers. You’ll find inaccurate arithmetic on the internet all the time, just as you’ll find inaccurate political views. You’ll find inaccurate technical views. You’re just going to find lots of inaccurate claims.
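To make that kind of measurement concrete, here is a rough sketch of an exact-match arithmetic evaluation. The `ask_model` function is a hypothetical placeholder for whatever call would return the base model’s answer; the problem generation and grading logic are the substantive parts.

```python
# A rough sketch of measuring exact-match accuracy on 20-digit addition.
# `ask_model` is a hypothetical stand-in for a real model call.

import random

def make_problem(digits=20):
    a = random.randint(10**(digits - 1), 10**digits - 1)
    b = random.randint(10**(digits - 1), 10**digits - 1)
    return a, b

def ask_model(prompt: str) -> str:
    # Placeholder: in a real evaluation this would query the base model.
    raise NotImplementedError

def accuracy(n_trials=1000):
    correct = 0
    for _ in range(n_trials):
        a, b = make_problem()
        answer = ask_model(f"{a} + {b} =")
        # Exact-match grading; a real harness might normalize formatting first.
        correct += answer.strip() == str(a + b)
    return correct / n_trials
```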

But the models, despite the fact that they’re wrong about a bunch of things, they can often perform better than the average of the data they see by — I don’t want to call it averaging out errors, but there’s some underlying truth, like in the case of arithmetic. There’s some underlying algorithm used to add the numbers.

And it’s simpler for the models to hit on that algorithm than it is for them to do this complicated thing of like, OK, I’ll get it right 90 percent of the time and wrong 10 percent of the time, right? This connects to things like Occam’s razor and simplicity and parsimony in science. There’s some relatively simple web of truth out there in the world, right?

We were talking about truth and falsehood and bullshit. One of the things about truth is that all the true things are connected in the world, whereas lies are kind of disconnected and don’t fit into the web of everything else that’s true.

So if you’re right and you’re going to have these models that develop this internal web of truth, I get how that model can do a lot of good. I also get how that model could do a lot of harm. And it’s not a model, not an A.I. system I’m optimistic that human beings are going to understand at a very deep level, particularly not when it is first developed. So how do you make rolling something like that out safe for humanity?

So late last year, we put out something called a responsible scaling plan. The idea of that is to come up with these thresholds for an A.I. system being capable of certain things. We have what we call A.I. safety levels, in analogy to the biosafety levels, which classify how dangerous a virus is and therefore what protocols you have to take to contain it. We’re currently at what we describe as A.S.L. 2.

A.S.L. 3 is tied to certain risks around misuse of the model for biology and its ability to perform certain cyber tasks in a way that could be destructive. A.S.L. 4 is going to cover things like autonomy and probably persuasion, which we’ve talked about a lot before. And at each level, we specify a certain amount of safety research that we have to do and a certain set of tests that we have to pass. And so, this allows us to have a framework for, well, when should we slow down? Should we slow down now? What about the rest of the market?
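For readers who find the levels easier to see laid out, here is a hypothetical, heavily simplified sketch of the kind of threshold table an A.S.L.-style framework implies. The triggers and required actions are paraphrased from this conversation, not Anthropic’s actual policy text.

```python
# A hypothetical, simplified sketch of capability thresholds in the spirit of
# the A.S.L. framework described above. Triggers and requirements are
# paraphrased from the conversation, not taken from any official policy.

ASL_LEVELS = {
    2: {
        "trigger": "today's models; no demonstrated uplift toward catastrophic misuse",
        "required": ["baseline security and deployment safeguards"],
    },
    3: {
        "trigger": "meaningful uplift for biological misuse or destructive cyber tasks",
        "required": ["additional safety research",
                     "pass misuse evaluations before further scaling"],
    },
    4: {
        "trigger": "autonomy or persuasion capabilities (still being specified)",
        "required": ["to be defined; pause scaling until safety can be shown"],
    },
}

def may_continue_scaling(tests_passed: bool) -> bool:
    """In this framing, scaling past a threshold requires the tied tests to pass."""
    return tests_passed
```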

And I think the good thing is we came out with this in September, and then three months after we came out with ours, OpenAI came out with a similar thing. They gave it a different name, but it has a lot of properties in common. The head of DeepMind at Google said, we’re working on a similar framework. And I’ve heard informally that Microsoft might be working on a similar framework. Now, that’s not all the players in the ecosystem, but you’ve probably thought about the history of regulation and safety in other industries maybe more than I have.

This is the way you get to a workable regulatory regime. The companies start doing something, and when a majority of them are doing something, then government actors can have the confidence to say, well, this won’t kill the industry. Companies are already engaging in this. We don’t have to design this from scratch. In many ways, it’s already happening.

And we’re starting to see that. Bills have been proposed that look a little bit like our responsible scaling plan. That said, it kind of doesn’t fully solve the problem of like, let’s say we get to one of these thresholds and we need to understand what’s going on inside the model. And we don’t, and the prescription is, OK, we need to stop developing the models for some time.

If it’s like, we stop for a year in 2027, I think that’s probably feasible. If it’s like we need to stop for 10 years, that’s going to be really hard because the models are going to be built in other countries. People are going to break the laws. The economic pressure will be immense.

So I don’t feel perfectly satisfied with this approach because I think it buys us some time, but we’re going to need to pair it with an incredibly strong effort to understand what’s going on inside the models.

To the people who say, getting on this road where we are barreling towards very powerful systems is dangerous — we shouldn’t do it at all, or we shouldn’t do it this fast — you have said, listen, if we are going to learn how to make these models safe, we have to make the models, right? The construction of the model was meant to be in service, largely, to making the model safe.

Then everybody starts making models. These very same companies start making fundamental important breakthroughs, and then they end up in a race with each other. And obviously, countries end up in a race with other countries. And so, the dynamic that has taken hold is there’s always a reason that you can justify why you have to keep going. And that’s true, I think, also at the regulatory level, right? I mean, I do think regulators have been thoughtful about this. I think there’s been a lot of interest from members of Congress. I talked to them about this. But they’re also very concerned about the international competition. And if they weren’t, the national security people come and talk to them and say, well, we definitely cannot fall behind here.

And so, if you don’t believe these models will ever become so powerful that they become dangerous, fine. But because you do believe that, how do you imagine this actually playing out?

Yeah, so basically, all of the things you’ve said are true at once, right? There doesn’t need to be some easy story for why we should do X or why we should do Y, right? It can be true at the same time that to do effective safety research, you need to make the larger models, and that if we don’t make models, someone less safe will. And at the same time, we can be caught in this bad dynamic at the national and international level. So I think of those as not contradictory, but just creating a difficult landscape that we have to navigate.

Look, I don’t have the answer. Like, I’m one of a significant number of players trying to navigate this. Many are well-intentioned, some are not. I have a limited ability to affect it. And as often happens in history, things are often driven by these kind of impersonal pressures. But one thought I have and really want to push on with respect to the R.S.P.s —

Can you say what the R.S.P.s are?

Responsible Scaling Plan, the thing I was talking about before. The levels of A.I. safety, and in particular, tying decisions to pause scaling to the measurement of specific dangers or the absence of the ability to show safety or the presence of certain capabilities. One way I think about it is, at the end of the day, this is ultimately an exercise in getting a coalition on board with doing something that goes against economic pressures.

And so, if you say now, ‘Well, I don’t know. These things, they might be dangerous in the future. We’re on this exponential.’ It’s just hard. Like, it’s hard to get a multi-trillion-dollar company, and it’s certainly hard to get a military general, to say, all right, well, we just won’t do this, even though it’ll confer some huge advantage to others. We just won’t do this.

I think the thing that could be more convincing is tying the decision to hold back, in a very scoped way that’s done across the industry, to particular dangers. In my testimony in front of Congress, I warned about the potential misuse of models for biology. That isn’t really the case today, right? You get only a small uplift from the models relative to doing a Google search, and many people dismiss the risk. And I don’t know — maybe they’re right. The exponential scaling laws suggest to me that they’re not right, but we don’t have any direct hard evidence.

But let’s say we get to 2025, and we demonstrate something truly scary. Most people do not want technology out in the world that can create bioweapons. And so I think, at moments like that, there could be a critical coalition tied to risks that we can really make concrete. Yes, it will always be argued that adversaries will have these capabilities as well. But at least the trade-off will be clear, and there’s some chance for sensible policy.

I mean, to be clear, I’m someone who thinks the benefits of this technology are going to outweigh its costs. And I think the whole idea behind the R.S.P. is to prepare to make that case, if the dangers are real. If they’re not real, then we can just proceed and make things that are great and wonderful for the world. And so, it has the flexibility to work both ways.

Again, I don’t think it’s perfect. I’m someone who thinks whatever we do, even with all the regulatory framework, I doubt we can slow down that much. But when I think about what’s the best way to steer a sensible course here, that’s the closest I can think of right now. Probably there’s a better plan out there somewhere, but that’s the best thing I’ve thought of so far.

One of the things that has been on my mind around regulation is whether or not the founding insight of Anthropic or OpenAI is even more relevant to the government: that if you are the body that is supposed to, in the end, regulate and manage the safety of societal-level technologies like artificial intelligence, do you not need to be building your own foundation models and having huge collections of research scientists and people of that nature working on them, testing them, prodding them, remaking them, in order to understand the damn thing well enough — to the extent any of us or anyone understands the damn thing well enough — to regulate it?

I say that recognizing that it would be very, very hard for the government to get good enough that it can build these foundation models to hire those people, but it’s not impossible. I think right now, it wants to take the approach to regulating A.I. that it somewhat wishes it took to regulating social media, which is to think about the harms and pass laws about those harms earlier.

But does it need to be building the models itself, developing that kind of internal expertise, so it can actually be a participant in different ways, both for regulatory reasons and maybe for other reasons, for public interest reasons? Maybe it wants to do things with a model that are just not possible if it’s dependent on access to the OpenAI, the Anthropic, the Google products.

I think government directly building the models, I think that will happen in some places. It’s kind of challenging, right? Like, government has a huge amount of money, but let’s say you wanted to provision $100 billion to train a giant foundation model. The government builds it. It has to hire people under government hiring rules. There’s a lot of practical difficulties that would come with it.

That doesn’t mean it won’t happen or that it shouldn’t happen. But something that I’m more confident of, that I definitely think, is that government should be more involved in the use and the fine-tuning of these models, and that deploying them within government will help governments, especially the U.S. government, but also others, to get an understanding of the strengths and weaknesses, the benefits and the dangers. So I’m super supportive of that.

I think there’s maybe a second thing you’re getting at, which I’ve thought about a lot as a C.E.O. of one of these companies, which is, if these predictions on the exponential trend are right, and we should be humble — and I don’t know if they’re right or not. My only evidence is that they appear to have been correct for the last few years. And so, I’m just expecting by induction that they continue to be correct. I don’t know that they will, but let’s say they are. The power of these models is going to be really quite incredible.

And as a private actor in charge of one of the companies developing these models, I’m kind of uncomfortable with the amount of power that that entails. I think that it potentially exceeds the power of, say, the social media companies maybe by a lot.

You know, occasionally, in the more science fictiony world of A.I. and the people who think about A.I. risk, someone will ask me like, OK, let’s say you build the A.G.I. What are you going to do with it? Will you cure the diseases? Will you create this kind of society?

And I’m like, who do you think you’re talking to? Like a king? I just find that to be a really, really disturbing way of conceptualizing running an A.I. company. And I hope there are no companies whose C.E.O.s actually think about things that way.

I mean, the whole technology, not just the regulation, but the oversight of the technology, like the wielding of it, it feels a little bit wrong for it to ultimately be in the hands — maybe I think it’s fine at this stage, but to ultimately be in the hands of private actors. There’s something undemocratic about that much power concentration.

I have now, I think, heard some version of this from the head of most of, maybe all of, the A.I. companies, in one way or another. And it has a quality to me of, Lord, grant me chastity but not yet.

Which is to say that I don’t know what it means to say that we’re going to invent something so powerful that we don’t trust ourselves to wield it. I mean, Amazon just gave you guys $2.75 billion. They don’t want to see that investment nationalized.

No matter how good-hearted you think OpenAI is, Microsoft doesn’t want a world where, with GPT-7, all of a sudden the government is like, whoa, whoa, whoa, whoa, whoa. We’re taking this over for the public interest, or the U.N. is going to handle it in some weird world, or whatever it might be. I mean, Google doesn’t want that.

And this is a thing that makes me a little skeptical of the responsible scaling plans, or the other iterative versions of that idea I’ve seen or heard talked about at other companies, which is that it’s imagining this moment that is going to come later, when the money around these models is even bigger than it is now, the power, the possibility, the economic uses, the social dependence, the celebrity of the founders. It’s all worked out. We’ve maintained our pace on the exponential curve. We’re 10 years in the future.

And at some point, everybody is going to look up and say, this is actually too much. It is too much power. And this has to somehow be managed in some other way. And even if the C.E.O.s of the things were willing to do that, which is a very open question by the time you get there, even if they were willing to do that, the investors, the structures, the pressure around them, in a way, I think we saw a version of this — and I don’t know how much you’re going to be willing to comment on it — with the sort of OpenAI board, Sam Altman thing, where I’m very convinced that wasn’t about A.I. safety. I’ve talked to figures on both sides of that. They all sort of agree it wasn’t about A.I. safety.

But there was this moment of: if you want to press the off switch, can you, if you’re the weird board created to press the off switch? And the answer was no, you can’t, right? They’ll just reconstitute it over at Microsoft.

There’s functionally no analogy I know of in public policy where the private sector built something so powerful that when it reached maximum power, it was just handed over in some way to the public interest.

Yeah, I mean, I think you’re right to be skeptical, and similarly, what I said with the previous questions of there are just these dilemmas left and right that have no easy answer. But I think I can give a little more concreteness than what you’ve pointed at, and maybe more concreteness than others have said, although I don’t know what others have said. We’re at A.S.L. 2 in our responsible scaling plan. These kinds of issues, I think they’re going to become a serious matter when we reach, say, A.S.L. 4. So that’s not a date and time. We haven’t even fully specified A.S.L. 4 —

Just because this is a lot of jargon, just, what do you specify A.S.L. 3 as? And then as you say, A.S.L. 4 is actually left quite undefined. So what are you implying A.S.L. 4 is?

A.S.L. 3 is triggered by risks related to misuse of biology and cyber technology. A.S.L. 4, we’re working on now.

Be specific. What do you mean? Like, what is the thing a system could do or would do that would trigger it?

Yes, so for example, on biology, we’re still refining the test, but the way we’ve defined it is, relative to use of a Google search, there’s a substantial increase in risk, as would be evaluated by, say, the national security community, of misuse of biology, of creation of bioweapons: either the proliferation or spread of it is greater than it was before, or the capabilities are substantially greater than they were before.

We’ll probably have some more exact quantitative version of it, working with folks who are ex-government biodefense experts, but something like: this accounts for 20 percent of the total source of risk of biological attacks, or this increases the risk by 20 percent, or something like that. So that would be a very concrete version of it. It just takes us time to develop very concrete criteria. So that would be like A.S.L. 3.

A.S.L. 4 is going to be more about, on the misuse side, enabling state-level actors to greatly increase their capability, which is much harder than enabling random people. So where we would worry that North Korea or China or Russia could greatly enhance their offensive capabilities in various military areas with A.I. in a way that would give them a substantial advantage at the geopolitical level. And on the autonomy side, it’s various measures of these models are pretty close to being able to replicate and survive in the wild.

So it feels maybe one step short of models that would, I think, raise truly existential questions. And so, I think what I’m saying is when we get to that latter stage, that A.S.L. 4, that is when I think it may make sense to think about what is the role of government in stewarding this technology.

Again, I don’t really know what it looks like. You’re right. All of these companies have investors. They have folks involved.

You talk about just handing the models over. I suspect there’s some way to hand over the most dangerous or societally sensitive components or capabilities of the models without fully turning off the commercial tap. I don’t know that there’s a solution that every single actor is happy with. But again, I get to this idea of demonstrating specific risk.

If you look at times in history, like World War I or World War II, industry’s will can be bent toward the state. Companies can be gotten to do things that aren’t necessarily profitable in the short term because they understand that there’s an emergency. Right now, we don’t have an emergency. We just have a line on a graph that weirdos like me believe in, and a few people like you who are interviewing me may somewhat believe in. We don’t have a clear and present danger.

When you imagine how many years away, just roughly, A.S.L. 3 is and how many years away A.S.L. 4 is, right, you’ve thought a lot about this exponential scaling curve. If you just had to guess, what are we talking about?

Yeah, I think A.S.L. 3 could easily happen this year or next year. I think A.S.L. 4 —

Oh, Jesus Christ.

No, no, I told you. I’m a believer in exponentials. I think A.S.L. 4 could happen anywhere from 2025 to 2028.

So that is fast.

Yeah, no, no, I’m truly talking about the near future here. I’m not talking about 50 years away. God grant me chastity, but not now. But “not now” doesn’t mean when I’m old and gray. I think it could be near term. I don’t know. I could be wrong. But I think it could be a near term thing.

But so then, if you think about this, I feel like what you’re describing goes back to something we talked about earlier: there’s been this step function for the societal impact of A.I. The curve of the capabilities is exponential, but every once in a while, something happens: ChatGPT, for instance, or Midjourney with photos. And all of a sudden, a lot of people feel it. They realize what has happened and they react. They use it. They deploy it in their companies. They invest in it, whatever.

And it sounds to me like that is the structure of the political economy you’re describing here. Either something happens where the bioweapon capability is demonstrated or the offensive cyber weapon capability is demonstrated, and that freaks out the government, or possibly something actually happens, right? Describing World War I and World War II as your examples did not actually fill me with comfort, because in order to bend industry to government’s will in those cases, we had to have an actual world war. It doesn’t happen that easily.

You could use coronavirus, I think, as another example, where there was a significant enough global catastrophe that companies and governments and even people did things you never would have expected. But the examples we have of that happening are something terrible. All those examples end up with millions of bodies. I’m not saying that’s going to be true for A.I., but it does sound like that is the political economy: you can’t imagine it now, in the same way that you couldn’t have imagined the pre- and post-ChatGPT worlds exactly, but something happens and the world changes. Like, it’s a step function everywhere.

Yeah, I mean, I think my positive version of this, not to be so — to get a little bit away from the doom and gloom, is that the dangers are demonstrated in a concrete way that is really convincing, but without something actually bad happening, right? I think the worst way to learn would be for something actually bad to happen. And I’m hoping every day that doesn’t happen, and we learn bloodlessly.

We’ve been talking here about conceptual limits and curves, but I do want, before we end, to reground us a little bit in physical reality. I think that if you’re using A.I., it can feel like just digital bits and bytes sitting in the cloud somewhere.

But what it is in a physical way is huge numbers of chips, data centers, an enormous amount of energy, all of which relies on complicated supply chains. And what happens if something happens between China and Taiwan, and the makers of a lot of these chips go offline or get captured? How do you think about the necessity of compute power? And when you imagine the next five years, what does that supply chain look like? How does it have to change from where it is now? And what vulnerabilities exist in it?

Yeah, so one, I think this may end up being the greatest geopolitical issue of our time. And man, this relates to things that are way above my pay grade, which are military decisions about whether and how to defend Taiwan. All I can do is say what I think the implications for A.I. are. I think those implications are pretty stark. I think there’s a big question of, like, OK, we built these powerful models.

One, is there enough supply to build them? Two is control over that supply, a way to think about safety issues or a way to think about balance of geopolitical power. And three, if those chips are used to build data centers, where are those data centers going to be? Are they going to be in the U.S.? Are they going to be in a U.S. ally? Are they going to be in the Middle East? Are they going to be in China?

All of those have enormous implications, and then the supply chain itself can be disrupted. And political and military decisions can be made on the basis of where things are. So it sounds like an incredibly sticky problem to me. I don’t know that I have any great insight on this. I mean, as a U.S. citizen and someone who believes in democracy, I am someone who hopes that we can find a way to build data centers and to have the largest quantity of chips available in the U.S. and allied democratic countries.

Well, there is some insight you should have into it, which is that you’re a customer here, right? And so, five years ago, the people making these chips did not realize what the level of demand for them was going to be. I mean, what has happened to Nvidia’s stock price is really remarkable.

But also what is implied about the future of Nvidia’s stock price is really remarkable. Rana Foroohar, in the Financial Times, cited this market analysis: it would take 4,500 years for Nvidia’s future dividends to equal its current price. 4,500 years. So that is a view of how much Nvidia is going to be making in the next couple of years. It is really quite astounding.
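Taking the 4,500-year figure as given and holding the dividend flat, the arithmetic behind a claim like that is just the price-to-dividend ratio:

$$\text{years to recoup} = \frac{\text{share price}}{\text{annual dividend per share}} = 4500 \quad\Longrightarrow\quad \frac{\text{dividend}}{\text{price}} = \frac{1}{4500} \approx 0.02\%.$$

In other words, at that ratio the price reflects expectations about how much Nvidia will earn and pay out in the future, not anything it currently distributes, which is the point being made here.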

I mean, you’re, in theory, already working on or thinking about how to work on the next generation of Claude. You’re going to need a lot of chips for that. You’re working with Amazon. Are you having trouble getting the amount of compute that you feel you need? I mean, are you already bumping up against supply constraints? Or has the supply been able to change, to adapt to you?

We’ve been able to get the compute that we need for this year, and I suspect also for next year as well. I think once things get to 2026, 2027, 2028, the amount of compute gets to levels that start to strain the capabilities of the semiconductor industry. The semiconductor industry still mostly produces C.P.U.s, right? Just the things in your laptop, not the things in the data centers that train the A.I. models. But as the economic value of the G.P.U.s goes up and up and up because of the value of the A.I. models, that’s going to switch over. But you know what? At some point, you hit the limits of that, or you hit the limits of how fast you can switch over.

And so, again, I expect there to be a big supply crunch around data centers, around chips, and around energy and power, for both regulatory and physics reasons, sometime in the next few years. And that’s a risk, but it’s also an opportunity. I think it’s an opportunity to think about how the technology can be governed.

And it’s also an opportunity, I’ll repeat again, to think about how democracies can lead. I think it would be very dangerous if the leaders in this technology and the holders of the main resources were authoritarian countries. The combination of A.I. and authoritarianism, both internally and on the international stage, is very frightening to me.

How about the question of energy? I mean, this requires just a tremendous amount of energy. And I mean, I’ve seen different numbers like this floating around. It very much could be in the coming years like adding a Bangladesh to the world’s energy usage. Or pick your country, right? I don’t know what exactly you all are going to be using by 2028.

Microsoft, on its own, is opening a new data center globally every three days. You have — and this is coming from a Financial Times article — federal projections for 20 new gas-fired power plants in the U.S. in 2024 and 2025. There’s a lot of talk about this being a new golden era for natural gas because we have a bunch of it. There is this huge need for new power to manage all this data, to manage all this compute.

So, one, I feel like there’s a literal question of how do you get the energy you need and at what price, but also a more kind of moral, conceptual question of, we have real problems with global warming. We have real problems with how much energy we’re using. And here, we’re taking off on this really steep curve of how much of it we seem to be needing to devote to the new A.I. race.

It really comes down to, what are the uses that the model is being put to, right? So I think the worrying case would be something like crypto, right? I’m someone who’s not a believer in it: whatever the energy was that was used to mine the next Bitcoin, I think that was purely additive. That demand wasn’t there before, and I’m unable to think of any useful thing that’s created by it.

But I don’t think that’s the case with A.I. Maybe A.I. makes solar energy more efficient or maybe it solves controlled nuclear fusion, or maybe it makes geoengineering more stable or possible. But I don’t think we need to rely on the long run. There are some applications where the model is doing something that used to be automated, that used to be done by computer systems. And the model is able to do it faster with less computing time, right? Those are pure wins. And there are some of those.

There are others where it’s using the same amount of computing resources, or maybe more, but to do something more valuable that saves labor elsewhere. Then there are cases where something used to be done by humans or in the physical world, and now it’s being done by the models. Maybe it does something that previously I needed to go into the office to do, and now I no longer need to go into the office to do that thing.

So I don’t have to get in my car. I don’t have to use the gas that was used for that. The energy accounting for that is kind of hard. You compare it to the food that the humans eat and the energy cost of producing that.

So in all honesty, I don’t think we have good answers about what fraction of the usage points one way and what fraction points the other. In many ways, how different is this from the general dilemma of, as the economy grows, it uses more energy?

So I guess, what I’m saying is, it kind of all matters how you use the technology. I mean, my kind of boring short-term answer is, we get carbon offsets for all of this stuff. But let’s look beyond that to the macro question here.

But to take the other side of it, I mean, I think the difference, when you say this is always a question we have when we’re growing G.D.P., is it’s not quite. It’s cliché because it’s true to say that the major global warming challenge right now is countries like China and India getting richer. And we want them to get richer. It is a huge human imperative, right, a moral imperative for poor people in the world to become less poor. And if that means they use more energy, then we just need to figure out how to make that work. And we don’t know of a way for that to happen without them using more energy.

With A.I., it’s not that it raises a whole different set of questions, but we’re already straining at the boundaries, or maybe far beyond them, of what we can safely do energetically. Now we add in this, and so maybe some of the energy efficiency gains you’re going to get in rich countries get wiped out, for this sort of uncertain payoff in the future, that maybe through A.I. we figure out ways to stabilize nuclear fusion or something. You could imagine ways that could help, but those ways are theoretical.

And in the near term, the harm in terms of energy usage is real. And also, by the way, the harm in terms of just energy prices. It’s also just tricky because all these companies, Microsoft, Amazon, I mean, they all have a lot of renewable energy targets. Now if that is colliding with their market incentives, it feels like they’re running really fast towards the market incentives without an answer for how all that nets out.

Yeah, I mean, I think the concerns are real. Let me push back a little bit, which is, again, I don’t think the benefits are purely in the future. It kind of goes back to what I said before: there may be use cases now that are net energy saving, or that, to the extent that they’re not net energy saving, use more energy only through the general mechanism of, oh, there was more demand for this thing.

I don’t think anyone has done a good enough job measuring, in part because the applications of A.I. are so new, which of those things dominate or what’s going to happen to the economy. But I don’t think we should assume that the harms are entirely in the present and the benefits are entirely in the future. I think that’s my only point here.

I guess you could imagine a world where we were, somehow or another, incentivizing uses of A.I. that were yoked to some kind of social purpose. We were putting a lot more into drug discovery, or we cared a lot about things that made remote work easier, or pick your set of public goods.

But what actually seems to me to be happening is we’re building more and more and more powerful models and just throwing them out there within a terms of service structure to say, use them as long as you’re not trying to politically manipulate people or create a bioweapon. Just try to figure this out, right? Try to create new stories and ask it about your personal life, and make a video game with it. And Sora comes out sooner or later. Make new videos with it. And all that is going to be very energy intensive.

I am not saying that I have a plan for yoking A.I. to social good, and in some ways, you can imagine that going very, very wrong. But it does mean that you could imagine the world you’re talking about, and yet that would require some kind of planning that nobody is engaged in, and that I don’t think anybody even wants to be engaged in.

Not everyone has the same conception of social good. One person may think social good is this ideology. Another person — we’ve seen that with some of the Gemini stuff.

But companies can try to make beneficial applications themselves, right? Like, this is why we’re working with cancer institutes. We’re hoping to partner with ministries of education in Africa, to see if we can use the models in kind of a positive way for education, rather than the way they may be used by default. So I think individual companies, individual people, can take actions to steer or bend this towards the public good.

That said, it’s never going to be the case that 100 percent of what we do is that. And so I think it’s a good question. What are the societal incentives, without dictating ideology or defining the public good from on high, what are incentives that could help with this?

I don’t feel like I have a systemic answer either. I can only think in terms of what Anthropic tries to do.

But there’s also the question of training data and the intellectual property that is going into things like Claude, like GPT, like Gemini. There are a number of copyright lawsuits. You’re facing some. OpenAI is facing some. I suspect everybody is either facing them now or will face them.

And a broad feeling that these systems are being trained on the combined intellectual output of a lot of different people — the way that Claude can quite effectively mimic the way I write is because it has been trained, to some degree, on my writing, right? So it actually does get my stylistic tics quite well. You seem great, but you haven’t sent me a check for that. And this seems like somewhere where there is real liability risk for the industry. Like, what if you do actually have to compensate the people whose work this is being trained on? And should you?

And I recognize you probably can’t comment on lawsuits themselves, but I’m sure you’ve had to think a lot about this. And so, I’m curious both how you understand it as a risk, but also how you understand it morally. I mean, when you talk about the people who invent these systems gaining a lot of power, and alongside that, a lot of wealth, well, what about all the people whose work went into them such that they can create images in a million different styles? And I mean, somebody came up with those styles. What is the responsibility back to the intellectual commons? And not just to the commons, but to the actual wages and economic prospects of the people who made all this possible?

I think everyone agrees the models shouldn’t be verbatim outputting copyrighted content. For things that are available on the web, that are publicly available, our position — and I think there’s a strong case for it — is that the training process isn’t just hoovering up content and spitting it out, or at least it shouldn’t be spitting it out. It’s really much more like the process of how a human learns from experiences. And so, our position is that that is sufficiently transformative, and I think the law will back this up, that this is fair use.

But those are narrow legal ways to think about the problem. I think we have a broader issue, which is that regardless of how it was trained, it would still be the case that we’re building more and more general cognitive systems, and that those systems will create disruption. Maybe not necessarily by one for one replacing humans, but they’re really going to change how the economy works and which skills are valued. And we need a solution to that broad macroeconomic problem, right?

As much as I’ve asserted the narrow legal points that I asserted before, we have a broader problem here, and we shouldn’t be blind to that. There’s a number of solutions. I mean, I think the simplest one, which I recognize doesn’t address some of the deeper issues here, is things around the kind of guaranteed basic income side of things.

But I think there’s a deeper question here, which is like as A.I. systems become capable of larger and larger slices of cognitive labor, how does society organize itself economically? How do people find work and meaning and all of that?

And just as we transitioned from an agrarian society to an industrial society and the meaning of work changed, and it was no longer true that 99 percent of people were peasants working on farms, and people had to find new methods of economic organization, I suspect there’s some different method of economic organization that’s going to be forced as the only possible response to disruptions to the economy that will be small at first but will grow over time, and that we haven’t worked out what that is.

We need to find something that allows people to find meaning that’s humane and that maximizes our creativity and potential and flourishing from A.I.

And as with many of these questions, I don’t have the answer to that. Right? I don’t have a prescription. But that’s what we somehow need to do.

But I want to sit in between the narrow legal response and the broad “we have to completely reorganize society” response, although I think that response is actually possible over the decades. And in the middle of that is a more specific question. I mean, you could even take it from the instrumental side. There is a lot of effort now to build search products that use these systems, right? ChatGPT will use Bing to search for you.

And that means that the person is not going to Bing and clicking on the website where ChatGPT is getting its information and giving that website an advertising impression that they can turn into a very small amount of money, or they’re not going to that website and having a really good experience with that website and becoming maybe likelier to subscribe to whoever is behind that website.

And so, on the one hand, that seems like some kind of injustice done to the people creating the information that these systems are using. I mean, this is true for Perplexity. It’s true for a lot of things I’m beginning to see around, where the A.I.s are either trained on or are using a lot of data that people have generated at some real cost. But not only are they not paying people for that, they’re actually stepping into the middle of what would normally be a direct relationship and making it so that relationship never happens.

That also, I think, in the long run, creates a training data problem, even if you just want to look at it instrumentally, where if it becomes nonviable to do journalism or to do a lot of the things that create high quality information out there, the A.I.‘s ability, right, the ability of all of your companies to get high quality, up-to-date, constantly updated information, becomes a lot trickier. So there seems to me to be both a moral and a self-interested dimension to this.

Yeah, so I think there may be business models that work for everyone, not because it’s illegitimate to train on open data from the web in a legal sense, but just because there may be business models here that kind of deliver a better product. So things I’m thinking of are like newspapers have archives. Some of them aren’t publicly available. But even if they are, it may be a better product, maybe a better experience, to, say, talk to this newspaper or talk to that newspaper.

It may be a better experience to give the ability to interact with content and point to places in the content, and every time you call that content, to have some kind of business relationship with the creators of that content. So there may be business models here that propagate the value in the right way, right? You talk about L.L.M.s using search products. I mean, sure, you’re going around the ads, but there’s no reason it can’t work in a different way, right?

There’s no reason that the users can’t pay the search A.P.I.s, instead of it being paid through advertising, and then have that propagate through to wherever the original mechanism is that paid the creators of the content. So when value is being created, money can flow through.

Let me try to end by asking a bit about how to live on the slope of the curve you believe we are on. Do you have kids?

I’m married. I do not have kids.

So I have two kids. I have a two-year-old and a five-year-old. And particularly when I’m doing A.I. reporting, I really do sit in bed at night and think, what should I be doing here with them? What world am I trying to prepare them for? And what is needed in that world that is different from what is needed in this world, even if I believe there’s some chance — and I do believe there’s some chance — that all the things you’re saying are true. That implies a very, very, very different life for them.

I know people in your company with kids. I know they are thinking about this. How do you think about that? I mean, what do you think should be different in the life of a two-year-old who is living through the pace of change that you are telling me is true here? If you had a kid, how would this change the way you thought about it?

The very short answer is, I don’t know, and I have no idea, but we have to try anyway, right? People have to raise kids, and they have to do it as best they can. An obvious recommendation is just familiarity with the technology and how it works, right? The basic paradigm of, I’m talking to systems, and systems are taking action on my behalf, obviously, as much familiarity with that as possible is, I think, helpful.

In terms of what should children learn in school, what are the careers of tomorrow, I just truly don’t know, right? You could take this to say, well, it’s important to learn STEM and programming and A.I. and all of that. But A.I. will impact that as well, right? I don’t think any of it is going to —

Possibly first.

Yeah, right, possibly first.

It seems better at coding than it is at other things.

I don’t think it’s going to work out for any of these systems to just do one for one what humans are going to do. I don’t really think that way. But I think it may fundamentally change industries and professions one by one in ways that are hard to predict. And so, I feel like I only have clichés here. Like get familiar with the technology. Teach your children to be adaptable, to be ready for a world that changes very quickly. I wish I had better answers, but I think that’s the best I got.

I agree that’s not a good answer. [LAUGHS] Let me ask that same question a bit from another direction, because one thing you just said is get familiar with the technology. And the more time I spend with the technology, the more I fear that happening. What I see when people use A.I. around me is that the obvious thing the technology does for you is automate the early parts of the creative process. The part where you’re supposed to be reading something difficult yourself? Well, the A.I. can summarize it for you. The part where you’re supposed to sit there with a blank page and write something? Well, the A.I. can give you a first draft. And later on, you have to check it and make sure it actually did what you wanted it to do, and fact-check it. But I believe a lot of what makes humans good at thinking comes in those parts.

And I am older and have self-discipline, and maybe this is just me hanging on to an old way of doing this, right? You could say, why use a calculator, from this perspective? But my actual worry is that I’m not sure whether the thing they should do is use A.I. a lot or use it a little. This, to me, is actually a really big branching path, right? Do I want my kids learning how to use A.I., or being in a context where they’re using it a lot? Or, actually, do I want to protect them from it as much as I possibly can, so they develop more of the capacity to read a book quietly on their own or write a first draft? I actually don’t know. I’m curious if you have a view on it.

I think this is part of what makes the interaction between A.I. and society complicated: it’s sometimes hard to distinguish when an A.I. is doing something that saves you labor or drudge work versus when it’s doing the interesting part. I will say that over and over again, you’ll get some technological thing, some technological system, that does what you thought was the core of what you’re doing, and yet what you’re doing turns out to have more pieces than you think it does and kind of adds up to more things, right?

It’s like before, I used to have to ask for directions. I got Google Maps to do that. And you could worry, am I too reliant on Google Maps? Do I forget the environment around me? Well, it turns out, in some ways, I still need to have a sense of the city and the environment around me. It just kind of reallocates the space in my brain to some other aspect of the task.

And I just kind of suspect — I don’t know. Internally, within Anthropic, one of the things I do that helps me run the company is, I’ll write these documents on strategy or just some thinking in some direction that others haven’t thought. And of course, I sometimes use the internal models for that. And I think what I found is like, yes, sometimes they’re a little bit good at conceptualizing the idea, but the actual genesis of the idea, I’ve just kind of found a workflow where I don’t use them for that. They’re not that helpful for that. But they’re helpful in figuring out how to phrase a certain thing or how to refine my ideas.

So maybe I’m just saying — I don’t know. You just find a workflow where the thing complements you. And if it doesn’t happen naturally, it somehow still happens eventually. Again, if the systems get general enough, if they get powerful enough, we may need to think along other lines. But in the short-term, I, at least, have always found that. Maybe that’s too sanguine. Maybe that’s too optimistic.

I think, then, that’s a good place to end this conversation. Though, obviously, the exponential curve continues. So always our final question — what are three books you’d recommend to the audience?

So, yeah, I’ve prepared three. They’re all topical, though, in some cases, indirectly so. The first one will be obvious. It’s a very long book. The physical book is very thick, but “The Making of the Atomic Bomb,” Richard Rhodes. It’s an example of technology being developed very quickly and with very broad implications. Just looking through all the characters and how they reacted to this and how people who were basically scientists gradually realized the incredible implications of the technology and how it would lead them into a world that was very different from the one they were used to.

My second recommendation is a science fiction series, “The Expanse” series of books. So I initially watched the show, and then I read all the books. And the world it creates is very advanced. In some cases, it has longer life spans, and humans have expanded into space. But we still face some of the same geopolitical questions and some of the same inequalities and exploitations that exist in our world, are still present, in some cases, worse.

That’s all the backdrop of it.

And the core of it is about some fundamentally new technological object that is being brought into that world and how everyone reacts to it, how governments react to it, how individual people react to it, and how political ideologies react to it. And so, I don’t know. When I read that a few years ago, I saw a lot of parallels.

And then my third recommendation would be actually “The Guns of August,” which is basically a history of how World War I started. The basic idea that crises happen very fast, almost no one knows what’s going on. There are lots of miscalculations because there are humans at the center of it, and kind of, we somehow have to learn to step back and make wiser decisions in these key moments. It’s said that Kennedy read the book before the Cuban Missile Crisis. And so I hope our current policymakers are at least thinking along the same terms because I think it is possible similar crises may be coming our way.

Dario Amodei, thank you very much.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Kristin Lin and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero.

EZRA KLEIN: From New York Times Opinion, this is “The Ezra Klein Show.”

What the A.I. developers say is that the power of A.I. systems is on this kind of curve, that their capabilities have been increasing exponentially, and that as long as we keep feeding in more data and more computing power, they will continue increasing exponentially. That is the scaling law hypothesis, and one of its main advocates is Dario Amodei. Amodei led the team at OpenAI that created GPT-2, that created GPT-3. He then left OpenAI to co-found Anthropic, another A.I. firm, where he’s now the C.E.O. And Anthropic recently released Claude 3, which is considered by many to be the strongest A.I. model available right now.

DARIO AMODEI: Thank you for having me.

EZRA KLEIN: So there are these two very different rhythms I’ve been thinking about with A.I. One is the curve of the technology itself, how fast it is changing and improving. And the other is the pace at which society is seeing and reacting to those changes. What has that relationship felt like to you?

DARIO AMODEI: So I think this is an example of a phenomenon that we may have seen a few times before in history, which is that there’s an underlying process that is smooth, and in this case, exponential. And then there’s a spilling over of that process into the public sphere. And the spilling over looks very spiky. It looks like it’s happening all of a sudden. It looks like it comes out of nowhere. And it’s triggered by things hitting various critical points or just the public happened to be engaged at a certain time.

EZRA KLEIN: So I want to linger on this difference between the curve at which the technology is improving and the way it is being adopted by society. So when you think about these break points and you think into the future, what other break points do you see coming where A.I. bursts into social consciousness or used in a different way?

DARIO AMODEI: Yeah, so I think I should say first that it’s very hard to predict these. One thing I like to say is the underlying technology, because it’s a smooth exponential, it’s not perfectly predictable, but in some ways, it can be eerily preternaturally predictable, right? That’s not true for these societal step functions at all. It’s very hard to predict what will catch on. In some ways, it feels a little bit like which artist or musician is going to catch on and get to the top of the charts.

I think a thing related to this is, a lot of companies have been held back or tripped up by how their models handle controversial topics. And we were really able to, I think, do a better job than others of telling the model, don’t shy away from discussing controversial topics. Don’t assume that both sides necessarily have a valid point but don’t express an opinion yourself. Don’t express views that are flagrantly biased. As journalists, you encounter this all the time, right? How do I be objective, but not both sides on everything?

So I think going further in that direction of models having personalities while still being objective, while still being useful and not falling into various ethical traps, that will be, I think, a significant unlock for adoption. The models taking actions in the world is going to be a big one. I know basically all the big companies that work on A.I. are working on that. Instead of just, I ask it a question and it answers, and then maybe I follow up and it answers again, can I talk to the model about, oh, I’m going to go on this trip today, and the model says, oh, that’s great. I’ll get an Uber for you to drive from here to there, and I’ll reserve a restaurant. And I’ll talk to the other people who are going to plan the trip. And the model being able to do things end to end or going to websites or taking actions on your computer for you.

EZRA KLEIN: I want to sit with this question of the agentic A.I. because I do think this is what’s coming. It’s clearly what people are trying to build. And I think it might be a good way to look at some of the specific technological and cultural challenges. And so, let me offer two versions of it.

People who are following the A.I. news might have heard about Devin, which is not in release yet, but is an A.I. that at least purports to be able to complete the kinds of tasks, linked tasks, that a junior software engineer might complete, right? Instead of asking it to do a bit of code for you, you say, listen, I want a website. It’s going to have to do these things, work in these ways. And maybe Devin, if it works the way people are saying it works, can actually hold that set of thoughts, complete a number of different tasks, and come back to you with a result.

I’m also interested in the version of this that you might have in the real world. The example I always use in my head is, when can I tell an A.I., my son is turning five. He loves dragons. We live in Brooklyn. Give me some options for planning his birthday party. And then, when I choose between them, can you just do it all for me? Order the cake, reserve the room, send out the invitations, whatever it might be.

DARIO AMODEI: The short answer is not all that much. A story I have from when we were developing models back in 2022 — and this is before we’d hooked up the models to anything — is, you could have a conversation with these purely textual models where you could say, hey, I want to reserve dinner at restaurant X in San Francisco, and the model would say, OK, here’s the website of restaurant X. And it would actually give you a correct website or would tell you to go to Open Table or something.

And for planning the birthday party, right, the level of trust you would need to take an A.I. agent and say, I’m OK with you calling up anyone, saying anything to them that’s in any private information that I might have, sending them any information, taking any action on my computer, posting anything to the internet, the most unconstrained version of that sounds very scary. And so, we’re going to need to figure out what is safe and controllable. The more open ended the thing is, the more powerful it is, but also, the more dangerous it is and the harder it is to control.

EZRA KLEIN: When you say we’re just going to need more scale, you mean more compute and more training data, and I guess, possibly more money to simply make the models smarter and more capable?

DARIO AMODEI: Yes, we’re going to have to make bigger models that use more compute per iteration. We’re going to have to run them for longer by feeding more data into them. And that number of chips times the amount of time that we run things on chips is essentially dollar value because these chips are — you rent them by the hour. That’s the most common model for it. And so, today’s models cost of order $100 million to train, plus or minus factor two or three.
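
To make that arithmetic concrete, here is a minimal, purely illustrative sketch of the chips-times-hours-times-hourly-rate framing described above. The specific inputs (10,000 rented accelerators, roughly 100 days of training, an assumed $4 per chip-hour) are not figures from the interview; they are assumptions chosen only so the total lands near the "order $100 million" ballpark mentioned.

```python
# Back-of-the-envelope sketch of the "chips x hours x hourly rental rate" framing.
# All three inputs are illustrative assumptions, not figures quoted in the interview.

def training_cost(num_chips: int, hours: float, dollars_per_chip_hour: float) -> float:
    """Cost of renting num_chips accelerators for a given number of hours."""
    return num_chips * hours * dollars_per_chip_hour

# Hypothetical run: 10,000 rented accelerators for ~100 days at $4 per chip-hour.
cost = training_cost(num_chips=10_000, hours=100 * 24, dollars_per_chip_hour=4.0)
print(f"Illustrative training cost: ${cost / 1e6:.0f} million")  # prints ~$96 million
```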

EZRA KLEIN: So we’re moving very quickly towards a world where the only players who can afford to do this are either giant corporations, companies hooked up to giant corporations — you all are getting billions of dollars from Amazon. OpenAI is getting billions of dollars from Microsoft. Google obviously makes its own.

DARIO AMODEI: I basically do agree with you. I think it’s the intellectually honest thing to say that building the big, large scale models, the core foundation model engineering, it is getting more and more expensive. And anyone who wants to build one is going to need to find some way to finance it. And you’ve named most of the ways, right? You can be a large company. You can have some kind of partnership of various kinds with a large company. Or governments would be the other source.

EZRA KLEIN: Now, I want to ask a question about what is different between the agentic coding model and the plan-my-kid’s-birthday model, to say nothing of the do-something-on-behalf-of-my-business model. And one of the questions on my mind here is this: one reason I buy that A.I. can become functionally superhuman in coding is, there’s a lot of ways to get rapid feedback in coding. Your code has to compile. You can run bug checking. You can actually see if the thing works.

DARIO AMODEI: I think it’s correct and perceptive to say that the coding agents will advance substantially faster than agents that interact with the real world or have to get opinions and preferences from humans. That said, we should keep in mind that the current crop of A.I.s that are out there, right, including Claude 3, GPT, Gemini, they’re all trained with some variant of what’s called reinforcement learning from human feedback.

EZRA KLEIN: You don’t love the framing of artificial general intelligence, what gets called A.G.I. Typically, this is all described as a race to A.G.I., a race to this system that can do kind of whatever a human can do, but better. What do you understand A.G.I. to mean, when people say it? And why don’t you like it? Why is it not your framework?

DARIO AMODEI: So it’s actually a term I used to use a lot 10 years ago. And that’s because the situation 10 years ago was very different. 10 years ago, everyone was building these very specialized systems, right? Here’s a cat detector. You run it on a picture, and it’ll tell you whether a cat is in it or not. And so I was a proponent all the way back then of like, no, we should be thinking generally. Humans are general. The human brain appears to be general. It appears to get a lot of mileage by generalizing. You should go in that direction.

EZRA KLEIN: Help me imagine the drug discovery world for a minute, because that’s a world a lot of us want to live in. I know a fair amount about the drug discovery process, have spent a lot of my career reporting on health care and related policy questions. And when you’re working with different pharmaceutical companies, which parts of it seem amenable to the way A.I. can speed something up?

DARIO AMODEI: I think we’re really going to see progress when the A.I.’s are also thinking about the problem of how to sign up the humans for the clinical trials. And I think this is a general principle for how will A.I. be used. I think of like, when will we get to the point where the A.I. has the same sensors and actuators and interfaces that a human does, at least the virtual ones, maybe the physical ones.

EZRA KLEIN: Maybe you’re not an expert in the area, but you said you are working with these companies. So when they come to you, I mean, they are experts in the area. And presumably, they are coming to you as a customer. I’m sure there are things you cannot tell me. But what do they seem excited about?

DARIO AMODEI: They have generally been excited about the knowledge work aspects of the job. Maybe just because that’s kind of the easiest thing to work on, but it’s just like, I’m a computational chemist. There’s some workflow that I’m engaged in. And having things more at my fingertips, being able to check things, just being able to do generic knowledge work better, that’s where most folks are starting.

EZRA KLEIN: You all have been working internally on research around how persuasive these systems, your systems are getting as they scale. You shared with me kindly a draft of that paper. Do you want to just describe that research first? And then I’d like to talk about it for a bit.

DARIO AMODEI: Yes, we were interested in how effective Claude 3 Opus, which is the largest version of Claude 3, could be in changing people’s minds on important issues. So just to be clear up front, in actual commercial use, we’ve tried to ban the use of these models for persuasion, for campaigning, for lobbying, for electioneering. These aren’t use cases that we’re comfortable with for reasons that I think should be clear. But we’re still interested in, is the core model itself capable of such tasks?

EZRA KLEIN: I mean, those don’t sound crazy to me at all. I want to sit in this paper for a minute because one thing that struck me about it, and I am, on some level, a persuasion professional, is that you tested the model in a way that, to me, removed all of the things that are going to make A.I. radical in terms of changing people’s opinions. And the particular thing you did was, it was a one-shot persuasive effort.

DARIO AMODEI: Yes.

EZRA KLEIN: This is not going to be true for A.I. We’re going to — you’re going to — somebody’s going to feed it a bunch of microtargeting data about people, their Google search history, whatever it might be. Then it’s going to set the A.I. loose, and the A.I. is going to go back and forth, over and over again, intuiting what it is that the person finds persuasive, what kinds of characters the A.I. needs to adopt to persuade it, and taking as long as it needs to, and is going to be able to do that at scale for functionally as many people as you might want to do it for.

DARIO AMODEI: Yeah, I completely agree with that. I mean, that same pattern has a bunch of positive use cases, right? If I think about an A.I. coach or an A.I. assistant to a therapist, there are many contexts in which really getting into the details with the person has a lot of value. But right, when we think of political or religious or ideological persuasion, it’s hard not to think in that context about the misuses.

EZRA KLEIN: There is another finding in the paper, which I think is concerning, which is, you all tested different ways A.I. could be persuasive. And far and away the most effective was for it to be deceptive, for it to make things up. When you did that, it was more persuasive than human beings.

DARIO AMODEI: Yes, that is true. The difference was only slight, but it did get it, if I’m remembering the graphs correctly, just over the line of the human baseline. With humans, it’s actually not that common to find someone who’s able to give you a really complicated, really sophisticated-sounding answer that’s just flat-out totally wrong. I mean, you see it. We can all think of one individual in our lives who’s really good at saying things that sound really good and really sophisticated and are false.

So that would be one of the things to think about and watch out for in terms of just breaking the usual heuristics that humans have to detect deception and lying. Of course, sometimes, humans do, right? I mean, there’s psychopaths and sociopaths in the world, but even they have their patterns, and A.I.s may have different patterns.

EZRA KLEIN: Are you familiar with Harry Frankfurt, the late philosopher’s book, “On Bullshit”?

DARIO AMODEI: Yes. It’s been a while since I read it. I think his thesis is that bullshit is actually more dangerous than lying because it has this kind of complete disregard for the truth, whereas lies are at least the opposite of the truth.

EZRA KLEIN: Yeah, the liar, the way Frankfurt puts it is that the liar has a relationship to the truth. He’s playing a game against the truth. The bullshitter doesn’t care. The bullshitter has no relationship to the truth — might have a relationship to other objectives. And from the beginning, when I began interacting with the more modern versions of these systems, what they struck me as is the perfect bullshitter, in part because they don’t know that they’re bullshitting. There’s no difference in the truth value to the system, how the system feels.

DARIO AMODEI: I think that’s basically correct. We have very substantial teams trying to focus on making sure that the models are factually accurate, that they tell the truth, that they ground their data in external information.

As you’ve indicated, doing searches isn’t itself reliable because search engines have this problem as well, right? Where is the source of truth? So there’s a lot of challenges here. But I think at a high level, I agree this is really potentially an insidious problem, right? If we do this wrong, you could have systems that are the most convincing psychopaths or con artists.

One source of hope that I have, actually, is, you say these models don’t know whether they’re lying or they’re telling the truth. In terms of the inputs and outputs to the models, that’s absolutely true. I mean, there’s a question of what does it even mean for a model to know something, but one of the things Anthropic has been working on since the very beginning of our company is that we’ve had a team that focuses on trying to understand and look inside the models.

EZRA KLEIN: So I try as best I can, as somebody not well-versed in the technology here, to follow this work on what you’re describing, which I think, broadly speaking, is interpretability, right? Can we know what is happening inside the model? And over the past year, there have been some much hyped breakthroughs in interpretability.

DARIO AMODEI: I think, first, on interpretability, we are seeing substantial progress on being able to characterize, I would say, maybe the generation of models from six months ago. I think it’s not hopeless, and we do see a path. That said, I share your concern that the field is progressing very quickly relative to that.

And we’re trying to put as many resources into interpretability as possible. One of our co-founders basically founded the field of interpretability. But also, we have to keep up with the market. So all of it’s very much a dilemma, right? Even if we stopped, then there’s all these other companies in the U.S. And even if some law stopped all the companies in the U.S., there’s a whole world of this.

EZRA KLEIN: Let me hold for a minute on the question of the competitive dynamics, because before we leave this question of the machines that bullshit, it makes me think of this podcast we did a while ago with Demis Hassabis, who’s the head of Google DeepMind, which created AlphaFold.

DARIO AMODEI: I am skeptical that we need to do something totally different. So I think today, many people have the intuition that the models are sort of eating up data that’s been gathered from the internet, code repos, whatever, and kind of spitting it out intelligently, but sort of spitting it out. And sometimes that leads to the view that the models can’t be better than the data they’re trained on or kind of can’t figure out anything that’s not in the data they’re trained on. You’re not going to get to Einstein level physics or Linus Pauling level chemistry or whatever.

EZRA KLEIN: So if you’re right and you’re going to have these models that develop this internal web of truth, I get how that model can do a lot of good. I also get how that model could do a lot of harm. And it’s not a model, not an A.I. system I’m optimistic that human beings are going to understand at a very deep level, particularly not when it is first developed. So how do you make rolling something like that out safe for humanity?

DARIO AMODEI: So late last year, we put out something called a responsible scaling plan. So the idea of that is to come up with these thresholds for an A.I. system being capable of certain things. We have what we call A.I. safety levels, in analogy to the biosafety levels, which classify how dangerous a virus is and therefore what protocols you have to take to contain it. We’re currently at what we describe as A.S.L. 2.

EZRA KLEIN: To the people who say, getting on this road where we are barreling towards very powerful systems is dangerous — we shouldn’t do it at all, or we shouldn’t do it this fast — you have said, listen, if we are going to learn how to make these models safe, we have to make the models, right? The construction of the model was meant to be in service, largely, to making the model safe.

Then everybody starts making models. These very same companies start making fundamental important breakthroughs, and then they end up in a race with each other. And obviously, countries end up in a race with other countries. And so, the dynamic that has taken hold is there’s always a reason that you can justify why you have to keep going.

And that’s true, I think, also at the regulatory level, right? I mean, I do think regulators have been thoughtful about this. I think there’s been a lot of interest from members of Congress. I talked to them about this. But they’re also very concerned about the international competition. And if they weren’t, the national security people come and talk to them and say, well, we definitely cannot fall behind here.

DARIO AMODEI: Yeah, so basically, all of the things you’ve said are true at once, right? There doesn’t need to be some easy story for why we should do X or why we should do Y, right? It can be true at the same time that to do effective safety research, you need to make the larger models, and that if we don’t make models, someone less safe will. And at the same time, we can be caught in this bad dynamic at the national and international level. So I think of those as not contradictory, but just creating a difficult landscape that we have to navigate.

EZRA KLEIN: Can you say what the R.S.P.s are?

DARIO AMODEI: Responsible Scaling Plan, the thing I was talking about before. The levels of A.I. safety, and in particular, tying decisions to pause scaling to the measurement of specific dangers or the absence of the ability to show safety or the presence of certain capabilities. One way I think about it is, at the end of the day, this is ultimately an exercise in getting a coalition on board with doing something that goes against economic pressures.
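
To illustrate the structure of the idea being described, the sketch below shows, in very schematic form, what "tying decisions to pause scaling to the measurement of specific dangers" could look like as a go/no-go check. The evaluation names, data structure, and decision rule here are invented for illustration only; they are not Anthropic’s actual R.S.P. or A.S.L. definitions.

```python
# Hypothetical sketch of capability-gated scaling decisions, in the spirit of the
# responsible-scaling idea discussed above. Evaluation names and the decision rule
# are invented for illustration; they are not Anthropic's actual policy.

from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str        # illustrative evaluation name, not an actual test
    triggered: bool  # True if the model crossed a pre-committed danger threshold

def may_continue_scaling(evals: list[EvalResult]) -> bool:
    """Pause further scaling if any pre-committed trigger fires before the
    corresponding safeguards are in place (stand-in logic)."""
    return not any(e.triggered for e in evals)

evals = [
    EvalResult("bio-misuse uplift relative to web search", triggered=False),
    EvalResult("autonomous cyber-offense capability", triggered=False),
]
print("Continue scaling" if may_continue_scaling(evals) else "Pause and add safeguards")
```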

EZRA KLEIN: One of the things that has been on my mind around regulation is whether or not the founding insight of Anthropic, of OpenAI, is even more relevant to the government, that if you are the body that is supposed to, in the end, regulate and manage the safety of societal-level technologies like artificial intelligence, do you not need to be building your own foundation models and having huge collections of research scientists and people of that nature working on them, testing them, prodding them, remaking them, in order to understand the damn thing well enough — to the extent any of us or anyone understands the damn thing well enough — to regulate it?

DARIO AMODEI: I think government directly building the models, I think that will happen in some places. It’s kind of challenging, right? Like, government has a huge amount of money, but let’s say you wanted to provision $100 billion to train a giant foundation model. The government builds it. It has to hire people under government hiring rules. There’s a lot of practical difficulties that would come with it.

EZRA KLEIN: I have now, I think, heard some version of this from the head of most of, maybe all of, the A.I. companies, in one way or another. And it has a quality to me of, Lord, grant me chastity but not yet.

And at some point, everybody is going to look up and say, this is actually too much. It is too much power. And this has to somehow be managed in some other way. And even if the C.E.O.s of the things were willing to do that, which is a very open question by the time you get there, even if they were willing to do that, the investors, the structures, the pressure around them, in a way, I think we saw a version of this — and I don’t know how much you’re going to be willing to comment on it — with the sort of OpenAI board, Sam Altman thing, where I’m very convinced that wasn’t about A.I. safety. I’ve talked to figures on both sides of that. They all sort of agree it wasn’t about A.I. safety. But there was this moment of, if you want to press the off switch, can you, if you’re the weird board created to press the off switch. And the answer was no, you can’t, right? They’ll just reconstitute it over at Microsoft.

DARIO AMODEI: Yeah, I mean, I think you’re right to be skeptical, and similarly, what I said with the previous questions of there are just these dilemmas left and right that have no easy answer. But I think I can give a little more concreteness than what you’ve pointed at, and maybe more concreteness than others have said, although I don’t know what others have said. We’re at A.S.L. 2 in our responsible scaling plan. These kinds of issues, I think they’re going to become a serious matter when we reach, say, A.S.L. 4. So that’s not a date and time. We haven’t even fully specified A.S.L. 4 —

EZRA KLEIN: Just because this is a lot of jargon, just, what do you specify A.S.L. 3 as? And then as you say, A.S.L. 4 is actually left quite undefined. So what are you implying A.S.L. 4 is?

DARIO AMODEI: A.S.L. 3 is triggered by risks related to misuse of biology and cyber technology. A.S.L. 4, we’re working on now.

EZRA KLEIN: Be specific. What do you mean? Like, what is the thing a system could do or would do that would trigger it?

DARIO AMODEI: Yes, so for example, on biology, the way we’ve defined it — and we’re still refining the test, but the way we’ve defined it is, relative to use of a Google search, there’s a substantial increase in risk, as would be evaluated by, say, the national security community, of misuse of biology, creation of bioweapons, such that either the proliferation or spread of it is greater than it was before, or the capabilities are substantially greater than they were before.

Again, I don’t really know what it looks like. You’re right. All of these companies have investors. They have folks involved. You talk about just handing the models over. I suspect there’s some way to hand over the most dangerous or societally sensitive components or capabilities of the models without fully turning off the commercial tap. I don’t know that there’s a solution that every single actor is happy with. But again, I get to this idea of demonstrating specific risk.

EZRA KLEIN: When you imagine how many years away, just roughly, A.S.L. 3 is and how many years away A.S.L. 4 is, right, you’ve thought a lot about this exponential scaling curve. If you just had to guess, what are we talking about?

DARIO AMODEI: Yeah, I think A.S.L. 3 could easily happen this year or next year. I think A.S.L. 4 —

EZRA KLEIN: Oh, Jesus Christ.

DARIO AMODEI: No, no, I told you. I’m a believer in exponentials. I think A.S.L. 4 could happen anywhere from 2025 to 2028.

EZRA KLEIN: So that is fast.

DARIO AMODEI: Yeah, no, no, I’m truly talking about the near future here. I’m not talking about 50 years away. God grant me chastity, but not now. But “not now” doesn’t mean when I’m old and gray. I think it could be near term. I don’t know. I could be wrong. But I think it could be a near term thing.

EZRA KLEIN: But so then, if you think about this, I feel like what you’re describing, to go back to something we talked about earlier, is that there’s been this step function for societal impact of A.I. The curve of the capabilities is exponential, but every once in a while, something happens: ChatGPT, for instance, Midjourney with photos. And all of a sudden, a lot of people feel it. They realize what has happened and they react. They use it. They deploy it in their companies. They invest in it, whatever.

You could use coronavirus, I think, as another example where there was a significant enough global catastrophe that companies and governments and even people did things you never would have expected. But the examples we have of that happening are something terrible. All those examples end up with millions of bodies.

I’m not saying that’s going to be true for A.I., but it does sound like that is a political economy. No, you can’t imagine it now, in the same way that you couldn’t have imagined the sort of pre and post-ChatGPT world exactly, but that something happens and the world changes. Like, it’s a step function everywhere.

DARIO AMODEI: Yeah, I mean, I think my positive version of this, not to be so — to get a little bit away from the doom and gloom, is that the dangers are demonstrated in a concrete way that is really convincing, but without something actually bad happening, right? I think the worst way to learn would be for something actually bad to happen. And I’m hoping every day that doesn’t happen, and we learn bloodlessly.

EZRA KLEIN: We’ve been talking here about conceptual limits and curves, but I do want, before we end, to reground us a little bit in the physical reality, right? I think that if you’re using A.I., it can feel like it’s all digital bits and bytes, sitting in the cloud somewhere.

DARIO AMODEI: Yeah, so one, I think this may end up being the greatest geopolitical issue of our time. And man, this relates to things that are way above my pay grade, which are military decisions about whether and how to defend Taiwan. All I can do is say what I think the implications for A.I. are. I think those implications are pretty stark. I think there’s a big question of like, OK, we built these powerful models.

EZRA KLEIN: Well, there is some insight you should have into it, which is that you’re a customer here, right? And so, five years ago, the people making these chips did not realize what the level of demand for them was going to be. I mean, what has happened to Nvidia’s stock prices is really remarkable.

DARIO AMODEI: We’ve been able to get the compute that we need for this year, I suspect also for next year as well. I think once things get to 2026, 2027, 2028, then the amount of compute gets to levels that start to strain the capabilities of the semiconductor industry. The semiconductor industry still mostly produces C.P.U.s, right? Just the things in your laptop, not the things in the data centers that train the A.I. models. But as the economic value of the G.P.U.s goes up and up and up because of the value of the A.I. models, that’s going to switch over.

But you know what? At some point, you hit the limits of that or you hit the limits of how fast you can switch over. And so, again, I expect there to be a big supply crunch around data centers, around chips, and around energy and power for both regulatory and physics reasons, sometime in the next few years. And that’s a risk, but it’s also an opportunity. I think it’s an opportunity to think about how the technology can be governed.

EZRA KLEIN: How about the question of energy? I mean, this requires just a tremendous amount of energy. And I mean, I’ve seen different numbers like this floating around. It very much could be in the coming years like adding a Bangladesh to the world’s energy usage. Or pick your country, right? I don’t know what exactly you all are going to be using by 2028.

DARIO AMODEI: It really comes down to, what are the uses that the model is being put to, right? So I think the worrying case would be something like crypto, right? I’m someone who’s not a believer in crypto. Whatever the energy was that was used to mine the next Bitcoin, I think that was purely additive. I think that wasn’t there before. And I’m unable to think of any useful thing that’s created by that.

EZRA KLEIN: But to take the other side of it, I mean, I think the difference, when you say this is always a question we have when we’re growing G.D.P., is it’s not quite. It’s cliché because it’s true to say that the major global warming challenge right now is countries like China and India getting richer. And we want them to get richer. It is a huge human imperative, right, a moral imperative for poor people in the world to become less poor. And if that means they use more energy, then we just need to figure out how to make that work. And we don’t know of a way for that to happen without them using more energy.

DARIO AMODEI: Yeah, I mean, I think the concerns are real. Let me push back a little bit, which is, again, I don’t think the benefits are purely in the future. It kind of goes back to what I said before. Like, there may be use cases now that are net energy saving, or that to the extent that they’re not net energy saving, do so through the general mechanism of, oh, there was more demand for this thing.

EZRA KLEIN: I guess you could imagine a world where we were, somehow or another, incentivizing uses of A.I. that were yoked to some kind of social purpose. We were putting a lot more into drug discovery, or we cared a lot about things that made remote work easier, or pick your set of public goods.

DARIO AMODEI: Not everyone has the same conception of social good. One person may think social good is this ideology. Another person — we’ve seen that with some of the Gemini stuff.

EZRA KLEIN: Right.

DARIO AMODEI: But companies can try to make beneficial applications themselves, right? Like, this is why we’re working with cancer institutes. We’re hoping to partner with ministries of education in Africa, to see if we can use the models in kind of a positive way for education, rather than the way they may be used by default. So I think individual companies, individual people, can take actions to steer or bend this towards the public good.

EZRA KLEIN: But there’s also the question of training data and the intellectual property that is going into things like Claude, like GPT, like Gemini. There are a number of copyright lawsuits. You’re facing some. OpenAI is facing some. I suspect everybody is either facing them now or will face them.

And I recognize you probably can’t comment on lawsuits themselves, but I’m sure you’ve had to think a lot about this. And so, I’m curious both how you understand it as a risk, but also how you understand it morally. I mean, when you talk about the people who invent these systems gaining a lot of power, and alongside that, a lot of wealth, well, what about all the people whose work went into them such that they can create images in a million different styles?

And I mean, somebody came up with those styles. What is the responsibility back to the intellectual commons? And not just to the commons, but to the actual wages and economic prospects of the people who made all this possible?

DARIO AMODEI: I think everyone agrees the models shouldn’t be verbatim outputting copyrighted content. For things that are available on the web, for publicly available data, our position — and I think there’s a strong case for it — is that the training process, again, we don’t think it’s just hoovering up content and spitting it out, or it shouldn’t be spitting it out. It’s really much more like the process of how a human learns from experiences. And so, our position is that that is sufficiently transformative, and I think the law will back this up, that this is fair use.

And just as we kind of transitioned from an agrarian society to an industrial society and the meaning of work changed, and it was no longer true that 99 percent of people were peasants working on farms and had to find new methods of economic organization, I suspect there’s some different method of economic organization that’s going to be forced as the only possible response to disruptions to the economy that will be small at first, but will grow over time, and we haven’t worked out what that is. We need to find something that allows people to find meaning that’s humane and that maximizes our creativity and potential and flourishing from A.I.

EZRA KLEIN: But I want to sit in between the narrow legal response and the broad “we have to completely reorganize society” response, although I think that response is actually possible over the decades. And in the middle of that is a more specific question. I mean, you could even take it from the instrumental side. There is a lot of effort now to build search products that use these systems, right? ChatGPT will use Bing to search for you.

That also, I think, in the long run, creates a training data problem, even if you just want to look at it instrumentally, where if it becomes nonviable to do journalism or to do a lot of things to create high quality information out there, the A.I.’s ability, right, the ability of all of your companies to get high quality, up-to-date, constantly updated information becomes a lot trickier. So there seems to me to be both a moral and a self-interested dimension to this.

DARIO AMODEI: Yeah, so I think there may be business models that work for everyone, not because it’s illegitimate to train on open data from the web in a legal sense, but just because there may be business models here that kind of deliver a better product. So things I’m thinking of are like newspapers have archives. Some of them aren’t publicly available. But even if they are, it may be a better product, maybe a better experience, to, say, talk to this newspaper or talk to that newspaper.

EZRA KLEIN: Let me try to end by asking a bit about how to live on the slope of the curve you believe we are on. Do you have kids?

DARIO AMODEI: I’m married. I do not have kids.

EZRA KLEIN: So I have two kids. I have a two-year-old and a five-year-old. And particularly when I’m doing A.I. reporting, I really do sit in bed at night and think, what should I be doing here with them? What world am I trying to prepare them for? And what is needed in that world that is different from what is needed in this world, even if I believe there’s some chance — and I do believe there’s some chance — that all the things you’re saying are true. That implies a very, very, very different life for them.

I know people in your company with kids. I know they are thinking about this. How do you think about that? I mean, what do you think should be different in the life of a two-year-old who is living through the pace of change that you are telling me is true here? If you had a kid, how would this change the way you thought about it?

DARIO AMODEI: The very short answer is, I don’t know, and I have no idea, but we have to try anyway, right? People have to raise kids, and they have to do it as best they can. An obvious recommendation is just familiarity with the technology and how it works, right? The basic paradigm of, I’m talking to systems, and systems are taking action on my behalf, obviously, as much familiarity with that as possible is, I think, helpful.

In terms of what should children learn in school, what are the careers of tomorrow, I just truly don’t know, right? You could take this to say, well, it’s important to learn STEM and programming and A.I. and all of that. But A.I. will impact that as well, right? I don’t think any of it is going to —

EZRA KLEIN: Possibly first.

DARIO AMODEI: Yeah, right, possibly first.

EZRA KLEIN: It seems better at coding than it is at other things.

DARIO AMODEI: I don’t think it’s going to work out for any of these systems to just do one for one what humans are going to do. I don’t really think that way. But I think it may fundamentally change industries and professions one by one in ways that are hard to predict. And so, I feel like I only have clichés here. Like get familiar with the technology. Teach your children to be adaptable, to be ready for a world that changes very quickly. I wish I had better answers, but I think that’s the best I got.

EZRA KLEIN: I agree that’s not a good answer. [LAUGHS] Let me ask that same question a bit from another direction, because one thing you just said is get familiar with the technology. And the more time I spend with the technology, the more I fear that happening. What I see when people use A.I. around me is that the obvious thing that technology does for you is automate the early parts of the creative process.

The part where you’re supposed to be reading something difficult yourself? Well, the A.I. can summarize it for you. The part where you’re supposed to sit there with a blank page and write something? Well, the A.I. can give you a first draft. And later on, you have to check it and make sure it actually did what you wanted it to do, and fact-check it. But I believe a lot of what makes humans good at thinking comes in those parts.

And I am older and have self-discipline, and maybe this is just me hanging on to an old way of doing this, right? You could say, why use a calculator from this perspective. But my actual worry is that I’m not sure if the thing they should do is use A.I. a lot or use it a little.

This, to me, is actually a really big branching path, right? Do I want my kids learning how to use A.I. or being in a context where they’re using it a lot, or actually, do I want to protect them from it as much as I possibly could so they develop more of the capacity to read a book quietly on their own or write a first draft? I actually don’t know. I’m curious if you have a view on it.

DARIO AMODEI: I think this is part of what makes the interaction between A.I. and society complicated, where it’s sometimes hard to distinguish when is an A.I. doing something, saving you labor or drudge work, versus kind of doing the interesting part. I will say that over and over again, you’ll get some technological thing, some technological system that does what you thought was the core of what you’re doing, and yet, what you’re doing turns out to have more pieces than you think it does and kind of add up to more things, right?

It’s like before, I used to have to ask for directions. I got Google Maps to do that. And you could worry, am I too reliant on Google Maps? Do I forget the environment around me? Well, it turns out, in some ways, I still need to have a sense of the city and the environment around me. It just kind of reallocates the space in my brain to some other aspect of the task.

And I just kind of suspect — I don’t know. Internally, within Anthropic, one of the things I do that helps me run the company is, I’ll write these documents on strategy or just some thinking in some direction that others haven’t thought. And of course, I sometimes use the internal models for that. And I think what I found is like, yes, sometimes they’re a little bit good at conceptualizing the idea, but the actual genesis of the idea, I’ve just kind of found a workflow where I don’t use them for that. They’re not that helpful for that. But they’re helpful in figuring out how to phrase a certain thing or how to refine my ideas.

So maybe I’m just saying — I don’t know. You just find a workflow where the thing complements you. And if it doesn’t happen naturally, it somehow still happens eventually. Again, if the systems get general enough, if they get powerful enough, we may need to think along other lines. But in the short term, I, at least, have always found that. Maybe that’s too sanguine. Maybe that’s too optimistic.

EZRA KLEIN: I think, then, that’s a good place to end this conversation. Though, obviously, the exponential curve continues. So always our final question — what are three books you’d recommend to the audience?

DARIO AMODEI: So, yeah, I’ve prepared three. They’re all topical, though, in some cases, indirectly so. The first one will be obvious. It’s a very long book. The physical book is very thick, but “The Making of the Atomic Bomb,” Richard Rhodes. It’s an example of technology being developed very quickly and with very broad implications. Just looking through all the characters and how they reacted to this and how people who were basically scientists gradually realized the incredible implications of the technology and how it would lead them into a world that was very different from the one they were used to.

My second recommendation is a science fiction series, “The Expanse” series of books. So I initially watched the show, and then I read all the books. And the world it creates is very advanced. In some cases, it has longer life spans, and humans have expanded into space. But some of the same geopolitical questions, and some of the same inequalities and exploitations that exist in our world, are still present, in some cases worse.

That’s all the backdrop of it. And the core of it is about some fundamentally new technological object that is being brought into that world and how everyone reacts to it, how governments react to it, how individual people react to it, and how political ideologies react to it. And so, I don’t know. When I read that a few years ago, I saw a lot of parallels.

And then my third recommendation would be actually “The Guns of August,” which is basically a history of how World War I started. The basic idea is that crises happen very fast, almost no one knows what’s going on, and there are lots of miscalculations because there are humans at the center of it, and kind of, we somehow have to learn to step back and make wiser decisions in these key moments. It’s said that Kennedy read the book before the Cuban Missile Crisis. And so I hope our current policymakers are at least thinking along the same terms, because I think it is possible similar crises may be coming our way.

EZRA KLEIN: Dario Amodei, thank you very much.

EZRA KLEIN: This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin, Kristin Lin and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

