
How to Write the Perfect Essay

06 Feb, 2024


You can keep adding to this plan, crossing bits out and linking the different bubbles when you spot connections between them. Even though you won’t have time to make a detailed plan under exam conditions, it can be helpful to draft a brief one, including a few key words, so that you don’t panic and go off topic when writing your essay.

If you don’t like the mind map format, there are plenty of others to choose from: you could make a table, a flowchart, or simply a list of bullet points.

Step 2: Have a clear structure

Think about this while you’re planning: your essay is like an argument or a speech. It needs to have a logical structure, with all your points coming together to answer the question.

Start with the basics! It’s best to choose a few major points which will become your main paragraphs. Three main paragraphs is a good number for an exam essay, since you’ll be under time pressure. 

If you agree with the question overall, it can be helpful to organise your points in the following pattern:

  • YES (agreement with the question)
  • AND (another YES point)
  • BUT (disagreement or complication)

If you disagree with the question overall, try:

  • BUT (disagreement with the question)
  • AND (another BUT point)
  • YES (a point of agreement or concession)

For example, you could structure the Of Mice and Men sample question, “To what extent is Curley’s wife portrayed as a victim in Of Mice and Men?”, as follows:

  • YES (descriptions of her appearance)
  • AND (other people’s attitudes towards her)
  • BUT (her position as the only woman on the ranch gives her power as she uses her femininity to her advantage)

If you wanted to write a longer essay, you could include additional paragraphs under the YES/AND categories, perhaps discussing the ways in which Curley’s wife reveals her vulnerability and insecurities, and shares her dreams with the other characters. Alternatively, you could lengthen your essay by including another BUT paragraph about her cruel and manipulative streak.

Of course, this is not necessarily the only right way to answer this essay question – as long as you back up your points with evidence from the text, you can take any standpoint that makes sense.


Step 3: Back up your points with well-analysed quotations

You wouldn’t write a scientific report without including evidence to support your findings, so why should it be any different with an essay? Even though you aren’t strictly required to substantiate every single point you make with a quotation, there’s no harm in trying.

A close reading of your quotations can enrich your appreciation of the question and will be sure to impress examiners. When selecting the best quotations to use in your essay, keep an eye out for specific literary techniques. For example, you could highlight Curley’s wife’s use of a rhetorical question when she says, “an’ what am I doin’? Standin’ here talking to a bunch of bindle stiffs.” This might look like:

The rhetorical question “an’ what am I doin’?” signifies that Curley’s wife is very insecure; she seems to be questioning her own life choices. Moreover, she does not expect anyone to respond to her question, highlighting her loneliness and isolation on the ranch.

Other literary techniques to look out for include:

  • Tricolon – a group of three words or phrases placed close together for emphasis
  • Tautology – using different words that mean the same thing: e.g. “frightening” and “terrifying”
  • Parallelism – ABAB structure, often signifying movement from one concept to another
  • Chiasmus – ABBA structure, drawing attention to a phrase
  • Polysyndeton – many conjunctions in a sentence
  • Asyndeton – lack of conjunctions, which can speed up the pace of a sentence
  • Polyptoton – using the same word in different forms for emphasis: e.g. “done” and “doing”
  • Alliteration – repetition of the same sound, including assonance (similar vowel sounds), plosive alliteration (“b”, “d” and “p” sounds) and sibilance (“s” sounds)
  • Anaphora – repetition of words, often used to emphasise a particular point

Don’t worry if you can’t locate all of these literary devices in the work you’re analysing. You can also discuss more obvious techniques, like metaphor, simile and onomatopoeia. It’s not a problem if you can’t remember all the long names; it’s far more important to be able to confidently explain the effects of each technique and highlight its relevance to the question.


Step 4: Be creative and original throughout

Anyone can write an essay using the tips above, but the thing that really makes it “perfect” is your own unique take on the topic. If you’ve noticed something intriguing or unusual in your reading, point it out – if you find it interesting, chances are the examiner will too!

Creative writing and essay writing are more closely linked than you might imagine. Keep the idea that you’re writing a speech or argument in mind, and you’re guaranteed to grab your reader’s attention.

It’s important to set out your line of argument in your introduction, introducing your main points and the general direction your essay will take, but don’t forget to keep something back for the conclusion, too. Yes, you need to summarise your main points, but if you’re just repeating the things you said in your introduction, the body of the essay is rendered pointless.

Think of your conclusion as the climax of your speech, the bit everything else has been leading up to, rather than the boring plenary at the end of the interesting stuff.

To return to Of Mice and Men once more, here’s an example of the ideal difference between an introduction and a conclusion:

Introduction

In John Steinbeck’s Of Mice and Men , Curley’s wife is portrayed as an ambiguous character. She could be viewed either as a cruel, seductive temptress or a lonely woman who is a victim of her society’s attitudes. Though she does seem to wield a form of sexual power, it is clear that Curley’s wife is largely a victim. This interpretation is supported by Steinbeck’s description of her appearance, other people’s attitudes, her dreams, and her evident loneliness and insecurity.

Conclusion

Overall, it is clear that Curley’s wife is a victim and is portrayed as such throughout the novel in the descriptions of her appearance, her dreams, other people’s judgemental attitudes, and her loneliness and insecurities. However, a character who was a victim and nothing else would be one-dimensional and Curley’s wife is not. Although she suffers in many ways, she is shown to assert herself through the manipulation of her femininity – a small rebellion against the victimisation she experiences.

Both refer back consistently to the question and summarise the essay’s main points. However, the conclusion adds a new point, established in the main body of the essay, which complicates the simple summary found in the introduction.

Hannah

Hannah is an undergraduate English student at Somerville College, University of Oxford, and has a particular interest in postcolonial literature and the Gothic. She thinks literature is a crucial way of developing empathy and learning about the wider world. When she isn’t writing about 17th-century court masques, she enjoys acting, travelling and creative writing. 



A (Very) Simple Way to Improve Your Writing

By Mark Rennella


It’s called the “one-idea rule” — and any level of writer can use it.

The “one idea” rule is a simple concept that can help you sharpen your writing and persuade others by presenting your argument in a clear, concise, and engaging way. What exactly does the rule say?

  • Every component of a successful piece of writing should express only one idea.
  • In persuasive writing, your “one idea” is often the argument or belief you are presenting to the reader. Once you identify what that argument is, the “one-idea rule” can help you develop, revise, and connect the various components of your writing.
  • For instance, let’s say you’re writing an essay. There are three components you will be working with throughout your piece: the title, the paragraphs, and the sentences.
  • Each of these parts should be dedicated to just one idea. The ideas are not identical, of course, but they’re all related. If done correctly, the smaller ideas (in sentences) all build (in paragraphs) to support the main point (suggested in the title).


Most advice about writing looks like a long laundry list of “do’s and don’ts.” These lists can be helpful from time to time, but they’re hard to remember … and, therefore, hard to depend on when you’re having trouble putting your thoughts to paper. During my time in academia, teaching composition at the undergraduate and graduate levels, I saw many people struggle with this.


Mark Rennella is Associate Editor at HBP and has published two books, Entrepreneurs, Managers, and Leaders and The Boston Cosmopolitans.


How to Do Research for an Excellent Essay: The Complete Guide


One of the biggest secrets to writing a good essay is the Boy Scouts’ motto: ‘be prepared’. Preparing for an essay – by conducting effective research – lays the foundations for a brilliant piece of writing, and it’s every bit as important as the actual writing part. Many students skimp on this crucial stage, or sit in the library not really sure where to start; and it shows in the quality of their essays. This just makes it easier for you to get ahead of your peers, and we’re going to show you how. In this article, we take you through what you need to do in order to conduct effective research and use your research time to best effect.

Allow enough time

First and foremost, it’s vital to allow enough time for your research. For this reason, don’t leave your essay until the last minute. If you start writing without having done adequate research, it will almost certainly show in your essay’s lack of quality. The amount of research time needed will vary according to whether you’re at Sixth Form or university, and according to how well you know the topic and what teaching you’ve had on it, but make sure you factor in more time than you think you’ll need. You may come across a concept that takes you longer to understand than you’d expected, so it’s better to allow too much time than too little.

Read the essay question and thoroughly understand it

If you don’t have a thorough understanding of what the essay question is asking you to do, you put yourself at risk of going in the wrong direction with your research. So take the question, read it several times and pull out the key things it’s asking you to do. The instructions in the question are likely to have some bearing on the nature of your research. If the question says “Compare”, for example, this will set you up for a particular kind of research, during which you’ll be looking specifically for points of comparison; if the question asks you to “Discuss”, your research focus may be more on finding different points of view and formulating your own.

Begin with a brainstorm

Start your research time by brainstorming what you already know. Doing this means that you can be clear about exactly what you’re already aware of, and you can identify the gaps in your knowledge so that you don’t end up wasting time by reading books that will tell you what you already know. This gives your research more of a direction and allows you to be more specific in your efforts to find out certain things. It’s also a gentle way of introducing yourself to the task and putting yourself in the right frame of mind for learning about the topic at hand.

Achieve a basic understanding before delving deeper

If the topic is new to you and your brainstorm has yielded few ideas, you’ll need to acquire a basic understanding of the topic before you begin delving deeper into your research. If you don’t, and you start your research by jumping straight in at the deep end, as it were, you’ll struggle to grasp the topic. This also means that you may end up being too swayed by a certain source, as you haven’t the knowledge to question it properly. You need sufficient background knowledge to be able to take a critical approach to each of the sources you read. So, start from the very beginning. It’s ok to use Wikipedia or other online resources to give you an introduction to a topic, though bear in mind that these can’t be wholly relied upon. If you’ve covered the topic in class already, re-read the notes you made so that you can refresh your mind before you start further investigation.

Working through your reading list

If you’ve been given a reading list to work from, be organised in how you work through each of the items on it. Try to get hold of as many of the books on it as you can before you start, so that you have them all easily to hand, and can refer back to things you’ve read and compare them with other perspectives. Plan the order in which you’re going to work through them and try to allocate a specific amount of time to each; this ensures that you allow enough time to do each of them justice and that you focus on making the most of your time with each one. It’s a good idea to go for the more general resources before homing in on the finer points mentioned in more specialised literature. Think of an upside-down pyramid and how it starts off wide at the top and becomes gradually narrower; this is the sort of framework you should apply to your research.

Ask a librarian

Library computer databases can be confusing things, and can add an extra layer of stress and complexity to your research if you’re not used to using them. The librarian is there for a reason, so don’t be afraid to go and ask if you’re not sure where to find a particular book on your reading list. If you’re in need of somewhere to start, they should be able to point you in the direction of the relevant section of the library so that you can also browse for books that may yield useful information.

Use the index

If you haven’t been given specific pages to read in the books on your reading list, make use of the index (and/or table of contents) of each book to help you find relevant material. It sounds obvious, but some students don’t think to do this and battle their way through heaps of irrelevant chapters before finding something that will be useful for their essay.

Taking notes

As you work through your reading, take notes as you go along rather than hoping you’ll remember everything you’ve read. Don’t indiscriminately write down everything – only the bits that will be useful in answering the essay question you’ve been set. If you write down too much, you risk writing an essay that’s full of irrelevant material and getting lower grades as a result. Be concise, and summarise arguments in your own words when you make notes (this helps you learn the material better, too, because you actually have to think about how best to summarise it). You may want to make use of small index cards to force you to be brief with what you write about each point or topic. We’ve covered effective note-taking extensively in another article. Note-taking is a major part of the research process, so don’t neglect it. Your notes don’t just come in useful in the short term, for completing your essay; they should also be helpful when it comes to revision time, so try to keep them organised.

Research every side of the argument

Never rely too heavily on one resource without referring to other possible opinions; it’s bad academic practice. You need to be able to give a balanced argument in an essay, and that means researching a range of perspectives on whatever problem you’re tackling. Keep a note of the different arguments, along with the evidence in support of or against each one, ready to be deployed into an essay structure that works logically through each one. If you see a scholar’s name cropping up again and again in what you read, it’s worth investigating more about them even if you haven’t specifically been told to do so. Context is vital in academia at any level, so influential figures are always worth knowing about.

Keep a dictionary by your side

You could completely misunderstand a point you read if you don’t know what one important word in the sentence means. For that reason, it’s a good idea to keep a dictionary by your side at all times as you conduct your research. Not only does this help you fully understand what you’re reading, but you also learn new words that you might be able to use in your forthcoming essay or a future one. Growing your vocabulary is never a waste of time!

Start formulating your own opinion

As you work through reading these different points of view, think carefully about what you’ve read and note your own response to different opinions. Get into the habit of questioning sources and make sure you’re not just repeating someone else’s opinion without challenging it. Does an opinion make sense? Does it have plenty of evidence to back it up? What are the counter-arguments, and on balance, which sways you more? Demonstrating your own intelligent thinking will set your essay apart from those of your peers, so think about these things as you conduct your research.

Be careful with web-based research

Although, as we’ve said already, it’s fine to use Wikipedia and other online resources to give you a bit of an introduction to a topic you haven’t covered before, be very careful when using the internet for researching an essay. Don’t take Wikipedia as gospel; don’t forget, anybody can edit it! We wouldn’t advise using the internet as the basis of your essay research – it’s simply not academically rigorous enough, and you don’t know how out of date a particular resource might be. Even if your Sixth Form teachers may not question where you picked up an idea you’ve discussed in your essays, it’s still not a good habit to get into and you’re unlikely to get away with it at a good university. That said, there are still reliable academic resources available via the internet; these can be found in dedicated sites that are essentially online libraries, such as JSTOR. These are likely to be a little too advanced if you’re still in Sixth Form, but you’ll almost certainly come across them once you get to university.

Look out for footnotes

In an academic publication, whether that’s a book or a journal article, footnotes are a great place to look for further ideas for publications that might yield useful information. Plenty can be hidden away in footnotes, and if a writer is disparaging or supporting the ideas of another academic, you could look up the text in question so that you can include their opinion too, and whether or not you agree with them, for extra brownie points.

Don’t save doing all your own references until last

If you’re still in Sixth Form, you might not yet be required to include academic references in your essays, but for the sake of a thorough guide to essay research that will be useful to you in the future, we’re going to include this point anyway (it will definitely come in useful when you get to university, so you may as well start thinking about it now!). As you read through various books and find points you think you’re going to want to make in your essays, make sure you note down where you found these points as you go along (author’s first and last name, the publication title, publisher, publication date and page number). When you get to university you will be expected to identify your sources very precisely, so it’s a good habit to get into. Unfortunately, many students forget to do this and then have a difficult time of going back through their essay adding footnotes and trying to remember where they found a particular point. You’ll save yourself a great deal of time and effort if you simply note down your academic references as you go along. If you are including footnotes, don’t forget to add each publication to a main bibliography, to be included at the end of your essay, at the same time.

Putting in the background work required to write a good essay can seem an arduous task at times, but it’s a fundamental step that can’t simply be skipped. The more effort you put in at this stage, the better your essay will be and the easier it will be to write. Use the tips in this article and you’ll be well on your way to an essay that impresses!

To get even more prepared for essay writing you might also want to consider attending an Oxford Summer School .


Stanford University


How to Write a Quality College Essay

By Steve Aedy

Essay writing is an essential part of college life. Some students will be lucky enough to have professors who will give them guidance on what makes a good essay. Others will be left to their own devices to figure it out as they fumble along. Learning to write good essays means learning how to research a subject and craft an argument. These are skills that will serve you well after college is over.

But quality essay writing has other elements too, such as making sure your essay “flows”, is free of grammar and spelling errors and has a tightly woven argument. Here are some tips on how you can improve your essay writing:

Read a lot of essays. Reading essays other people have written is a great way to study essay writing. Don’t just read for fun, read critically. Look at the author’s writing style: how do they introduce their topic, what tools do they use to formulate their argument? Is it effective? Could it be done better? If so, how? Did they leave anything important out? What would you include that they didn’t? The more you read essays, the more familiar you’ll become with different writing styles and the better your essays will become.

Do a lot of research. While you may have a strong opinion about a topic, it’s best to look to the experts in the subject to find out what they have to say. That’s basically the definition of research. Different scholars may have opposing views on the subject. You can explore these arguments in your essay to present the reader with a more complete view of the topic. An example is this article in which various experts express their arguments on whether or not Shakespeare was a Catholic. You may notice that the author does not express his personal opinion, but rather presents the arguments of both sides of the issue using quotes from authorities on the subject.

Use a thesaurus. Oftentimes, students get caught using the same word over and over again. This can become boring for the reader and sets a monotonous tone for your essay. In the above section on research, I used three different terms for the same idea: experts, scholars and authorities. A thesaurus is a great tool for helping you find new ways to express the same idea. Merriam-Webster has a combined dictionary-thesaurus resource and thesaurus.com has the largest word bank on the web.

Use transition words. Transition words help your essay flow. The cadence and rhythm of transition words are what make your essay enjoyable to read. While the quality of your research and your information are important, it’s also important how you present them. Transition words add finesse to your essay and help guide the reader through your argument, allowing them to follow along. Here’s a great list of 100 transition words to use in your essays.

Leave time to edit. Editing takes time. Literally. It’s like baking a cake. You mix all the ingredients and put it in the oven, you let it rise, then you let it cool. Then , you eat it. You need to leave some time for your thoughts to cool so you can have some perspective on what you wrote. This is essential to the editing process. Leave at least a few hours between when you wrote your last sentence to when you go over it for an edit. During that time, your brain will have a chance to refresh itself, making it easier to spot holes in your logic, spelling and punctuation errors and other issues. You can also use these tips for editing.

Proofread. Make sure your essay contains correct spelling, punctuation and grammar. If you’re not confident in your own proofreading skills, have a friend look it over for you. One thing that helps you spot errors is reading your essay out loud. The eye often autocorrects when you’re reading to yourself, but reading out loud is a way to turn off the autocorrect and allow you to see what’s actually on the page. It’s a good practice to cultivate. Want to brush up on your grammar skills? Check out this list of common grammar mistakes.

Good luck crafting A+ essays and happy writing!

Steve Aedy is a professional writer, editor and passionate blogger. He provides essay writing assistance at Fresh Essays and covers academic writing and education in his articles. Feel free to circle him on Google+ .



5 Steps To Quality Essay Writing (With Examples)

Knowing how to write a good-quality essay is a skill that will benefit you throughout your life, not just at college or university. Writing essays on different topics helps develop critical thinking and enhances your ability to observe and understand different perspectives and views. The ability to organize your ideas and thoughts can help you write not just essays but also research papers, blog posts, business letters, company memos, and more.

However, writing an essay is a strenuous and challenging task. It takes more than just time to come up with a good-quality essay before the submission deadline. Even if you are a great writer, you still need to learn a few things before you start the writing process for your essay. So, if you want to learn how to write a perfect essay in 5 easy steps, this guide is for you.


The 5 Easy Steps to Writing a Perfect Essay


  • Determine the title and research your audience
  • Brainstorm ideas and make an outline
  • Identify your thesis statement and organize your thoughts
  • Start the writing process (introduction, body paragraphs, and conclusion)
  • Edit and proofread

Step 1: Determine the Title and Research Your Audience

The first step to writing a good essay is to come up with a good essay title. If the title is pre-assigned, you can skip this step. If you are free to choose a title, pick the one that interests you the most. Choose any topic you find yourself arguing for or against, or any topic you have a passion for. You don’t need to know everything about the topic before you start writing; if the topic interests you, you can research it without it feeling like a burden. If you are writing an argumentative essay, it’s best to present arguments in favor of your thesis statement rather than against it.

Before choosing a topic you should also research your target audience and their intent on the topic. Knowing about what interests your audience and fulfilling their intent will surely help you come up with a better essay.

For example, if you are writing an essay about nature for a school assignment, you should use easy-to-understand words and avoid complex scientific calculations. However, if you are presenting your essay to an audience of Ph.D. researchers, you must support your findings with references. For them, you can also use all kinds of complex calculations to support your argument.

Step 2: Brainstorm Ideas and Make an Outline

The second step is to make an outline by brainstorming about the main idea and all the supporting arguments. The best way to organize your thoughts is through mind mapping and noting down your ideas. Making an outline is simple: you just have to think about your main idea, identify the key supporting ideas, and start building the structure of your essay.

Many students skip the outline because they are eager to get straight to the writing process, but we never recommend this. Outlines are a great way to organize and detail your essay in a logical way and create a flow of information. An essay with a good outline will never bore your reader, which helps guarantee a good grade on your academic assignment.

For instance, if you don’t have relevant information about a topic sentence, you can gather it from different sources. Find supporting ideas, research papers, relevant quotations, and statistics about your topic sentence and note them down in your own words. Use all this information to form an outline and to develop your thesis statement.

Step 3: Identify Your Thesis Statement and Organize Your Thoughts

The third step is the most important step towards writing a perfect essay. This is to identify your main idea and then put it into a single sentence. Remember that a good thesis statement is not too long but still contains all the necessary information about what you are going to talk about in your essay.

The thesis statement is usually added at the end of the introductory paragraph. This thesis statement signals to the readers about what they should expect from your essay, so it’s important that you be very clear about your main idea. This will also help you organize your thoughts about what supporting arguments you should add in the body paragraph to support your main argument.

One trick for testing your thesis statement is to read it to a friend or relative and ask them what your essay or research paper is about. If they can’t give the expected answer, the statement needs further refinement.


Step 4: Start the Writing Process

Step 4 is where you actually begin writing your essay and bring your ideas to life. Starting this late in the process may seem strange, but all the previous steps have made the writing itself far easier. Simply follow the outline you made earlier, along with your instructor’s guidelines, and use the basic essay pattern of introduction, body, and conclusion to persuade the reader of your argument.

1. Introduction

Your introductory paragraph should introduce and define the topic in the first two or three lines. Next, provide some background information and explain the context for why you are writing the essay. After the title, this is your best chance to hook your reader, so use the space wisely. End the paragraph with your thesis statement.

2. Body paragraphs

The body of your essay is where you present the arguments that support your main point. Use this section to elaborate on all of the key points you highlighted while making the outline, and pay close attention to sentence structure as you write.

3. Conclusion

In the concluding paragraph, sum up your ideas and arguments. The conclusion must tie back to the introduction: summarize how the body of the essay has provided evidence for the thesis statement you presented there. Never introduce new information in the conclusion, as this section exists only to summarize your essay.

Step 5: Edit and Proofread

Once you have finished the first draft, it’s time to edit and proofread. First, look for spelling and grammatical mistakes. Second, read the text out loud and make sure the essay flows smoothly and stays relevant to your main subject throughout.

Many writers struggle to spot their own mistakes, so if you are still unsure how your text reads, get a second opinion. Ask a friend to read your essay and tell you whether your arguments support your thesis statement; they can also point out mistakes and suggest improvements to your structure. Once the final draft is complete, submit it to your instructor.



7 Steps to Writing a Quality Essay


by Isaac Clary on October 16, 2023

Writing an essay can be a daunting task for many college students, but it doesn't have to be. The following 7 steps provide guidance for writing a quality college essay with ease.

1. Understand the assignment details: The first step to writing a quality essay is comprehending the specifics of the assignment. The worst mistake you can make at this point is skimming over the details and writing a lengthy essay, only to discover that it does not adequately address all the key points of the assignment (I would know).

2. Select a topic with existing research: Now that you understand the assignment, it's time to choose a topic. When brainstorming possible topics, it's crucial to consider the amount of existing research available. If your topic has not been extensively researched, writing an essay will be more challenging. Generally, professors require 10 to 15 sources for longer essays. If you can easily locate 10 or more sources to use in your essay, you're on the right track.

3. Outline your section headings: The third step to writing a quality essay is outlining the structure. Building on step one, you might notice that your professor has described certain key sections that are expected in the essay. Use these as section headings! This is one of the simplest ways to organize the content of your essay and also demonstrates that you understand the assignment.

4. Begin writing a draft: Now that you have identified the necessary sections to be covered, it's time to start writing. Initially, focus on developing one section at a time and make use of at least two of your sources. This demonstrates to your professor that you know how to properly engage with multiple academic sources. Additionally, avoid writing the introduction, thesis statement(s), and conclusion at this point.

5. Fill in the gaps: With the majority of the content written, it's time to fill in the gaps. In the previous step, you were advised to refrain from writing the introduction, thesis statement(s), and conclusion. Now it's time to write these sections since you have a better understanding of the outline and content covered in the essay. Ensure that your introduction addresses the topic of the essay and includes a thesis statement about what will be covered in the essay. For the conclusion, start by restating the thesis statement and then summarize the overall findings – avoid presenting new information here.

6. Review: You're almost there. The next step is to review everything you've written. Start by reading over your essay (multiple times if needed) to ensure that you have covered everything detailed in the assignment. Additionally, make sure that the content flows smoothly from one section to another, that you've followed writing conventions (APA, MLA, etc.), and that there are no grammatical or spelling errors (use Word or any other grammar and spelling checker). Finally, wait for one or two days before reviewing your essay for the final time, so you can have fresh eyes to spot anything you might have missed.

7. Submit: You've done it! Your essay is complete, and now it's time to submit.

And that's all there is to it! By breaking down the process into these steps, you should now be able to more comfortably write a quality essay for any of your college classes. Moreover, you can use many of the same steps to write non-academic papers, journals, or blogs like this one.


Top 10 Qualities of a Good Essay


Why are students so often asked to write essays? Because the essay is a form of writing in which students learn to express their thoughts on different topics. The reasoning should be backed up by facts and statements about a particular topic, but the author’s opinion remains central, which develops students’ creative and scientific thinking at the same time.

So what makes a good essay? You will find the answer in this article. Only by finding and studying additional information will you be able to develop your critical thinking.

Here are some basic points and qualities of good writing that will make your essay successful:


Small Volume

Of course, when creativity is in full flow, you want to write without stopping, but the first hallmark of a good essay is precisely its compact length.

Here you need to explain your opinion on the subject clearly, concisely, and with specific arguments that support it. Avoid wandering away from your statements, because it will confuse both you and the reader and undermine all the work your arguments have done.

Specific Topic and Your Subjective Opinion

The subject of an essay is always specific: an essay never covers several different ideas and topics at once. This is what distinguishes it from other types of written work.

Your arguments and your subjective opinion should be aligned. The purpose of your work is to convince the reader that your opinion is true, that it is backed by strong facts, and that it is consistent throughout the piece.

Free Composition

What does this mean? The essay has no rigid writing rules. Yes, a certain structure distinguishes this type of work from others, but the composition itself is free.

If you are unsure where to start or how to write an essay, try reading sample essays online or consulting a professional.

Do not use overly long sentences or try to impress the reader with difficult terms and obscure words, especially if you are not sure of their meaning. On the contrary, your main task is to engage the reader and make contact with them through the text.

As we have already noted, the author needs to establish contact with the reader.

The sentences and arguments you use should be specific, but not written in a rigid form. Choose the anchor point of your arguments and your main opinion, and stick to it throughout the work.

Sometimes arguments need to be stated explicitly; at other times they are better woven unobtrusively into the body of the essay.

Try to offer information as if in passing, so that readers feel you are addressing them directly and explaining your impressions specifically to them. Contact with the reader is very important in this form of writing.

The Paradox

One of the most important qualities of an essay is its ability to surprise the reader. Use strong phrases or quotations that support your argument throughout the work.

You will have the advantage if you use paradoxical definitions or phenomena: statements that appear at once indisputable and mutually exclusive. Feel free to use abstract ideas and aphorisms.

Meaningful Unity

This is perhaps the genre’s one true paradox (aside from the point above). Why? Because the essay must be subjective and freely composed, yet at the same time it must have internal semantic unity.

Your personal opinion should be expressed and supported throughout. In practice, this means defending a single opinion with different theses and statements, all of which should lead the essay to one specific conclusion: your opinion.

Use of Simple Language

Your essay can be written in any form, but you should not use slang, shortened words, an overly light-hearted tone, odd abbreviations, or formulaic phrases. Remember that this is a serious piece of work with specific arguments, not a casual letter to a friend.

Authoritativeness

Present information in a way that persuades readers to adopt the position you defend in your work. Avoid aggressive language, but make your arguments specific enough to come across as indisputable.


Use an Element of Surprise

To make your essay memorable, do not be afraid to use catchy phrases, bold quotations, and unexpected arguments. These stir the reader’s emotions and make your essay stick in the memory. Just ensure that the arguments you use are not aggressive, do not contradict your central opinion, and are not written in coarse language.

The Logic of Presentation

Again, despite its free composition, the essay must have internal unity, with the author’s statements cohering to express a single opinion.

Because the rules of essay writing are rather loose, the author has the opportunity to indulge their creative potential fully and use a variety of interesting techniques within the genre.

The main thing to remember is that your thoughts and arguments should be unified and should interest the reader enough to read your work and take your side.

7 Qualities of a Successful College Essay

Bonus Material:  30 College Essays That Worked

The college essay is one of the most important aspects of a student’s application.

It gives applicants an opportunity to articulate their personal values, character traits, and perspectives. It’s also a chance to add more value to your application, simply by demonstrating who you are outside of your resume and transcript.

A “successful” college essay is one that makes the most of these opportunities and, in many cases, earns an acceptance.

We’ve demystified what most admissions officers look for in college applications . But what are these officers looking for in the college essay itself? What are the top qualities of a successful application essay?

In analyzing various essays of admitted applicants, we’ve come up with a list of the characteristics that most of these pieces have in common. We’ll be referring to some of these pieces throughout the post.

Plus, we give you access to 30 college essays that earned their writers acceptance into Ivy League schools. Grab these below.

Download 30 College Essays That Worked

Here’s what we cover:

  • What is The College Application Essay (in a nutshell)?
  • 7 Qualities of a Successful Essay
  • Bonus: 30 College Essays That Worked

The College Application Essay In a Nutshell

Most students applying to a college or university in the U.S. must submit an application essay (or “personal statement”) with their application.

Depending on the application platform the college uses (typically either Coalition or the Common App ), students have 500-650 words to craft a response. While each of these platforms has college essay prompts, it’s helpful to view these prompts as general guidelines as to what colleges are looking for in a response.

Based on these prompts and our own experience coaching college essay students , the application essay is:

  • the chance to say what the rest of your application doesn’t say
  • a demonstration of your character, values, and/or voice
  • the platform to show who you are outside of a resume/transcript
  • an introspective personal essay

The college essay is NOT :

  • a rehashing of your resume
  • an excuse or explanation of other components of your application
  • a formal, five-paragraph essay
  • what you think “colleges want to hear”

A standard college application includes an academic transcript, recommendation letters, extracurricular / activities section, an optional resume, and standardized test scores. The essay is an addition  to these 4 general components, so it makes sense that it should complement them by saying something new.

That’s why we like to define the essay as a “demonstration of character, values, and/or voice.” True, these elements can be inferred from other components of the application. But the essay is your opportunity to clearly and personally demonstrate what matters to you, who you are at the core, and/or your essential perspectives of the world.

For this reason, the college essay is introspective and personal. Colleges want to hear that “I” voice in the application essay, loud and clear, and they want active, intelligent reflection.

You can see this in action in the 30 college essays that worked, which you can download below.

( Note: Some colleges might require applicants to submit supplemental essays in addition to their personal statement. These often have very specific prompts and different word lengths. Here are 8 great tips for approaching supplemental essays . )

 7 Qualities of a Successful College Essay

We’ve assessed several college essays of applicants admitted to a wide range of schools, including Ivy League institutions. While extremely diverse, these pieces generally had the following characteristics in common.

1. Introspective and reflective

Many English teachers tell their students not to use the first-person “I” in their essays. While this might be the standard for some academic essays, the college essay  should  include that “I.” What’s more, it should include a  lot  of that “I”!

This can be understandably uncomfortable for students, many of whom may simply not be used to talking about themselves openly and declaratively on a page. It can also feel awkward from a stylistic point of view for students who are not used to writing in the first-person.

Yet colleges want to hear your words in your own voice, and they are especially interested in learning more about your perspectives on the world and insights gleaned from your various life experiences. That’s why many successful college essays are highly introspective, full of the writer’s active reflections on what they’ve learned, how they view the world, and who they are.

We typically see the bulk of such introspection at the  end  of an essay, where the writer summarizes these reflections (although this is by no means standard), as we can see in the conclusion to Erica’s essay here, which describes her earlier attempt to write and publish a novel:

Sometimes, when I’m feeling insecure about my ability as a novelist I open up my first draft again, turn to a random chapter, and read it aloud. Publishing that first draft would have been a horrible embarrassment that would have haunted me for the rest of my life. Over the past half-decade, I’ve been able to explore my own literary voice, and develop a truly original work that I will be proud to display. This experience taught me that “following your dreams” requires more than just wishing upon a star. It takes sacrifice, persistence, and grueling work to turn fantasy into reality.

In her personal statement, Aja reflects deeply on what she specifically learned from an experience described earlier on in the piece:

I found from my experiment and questioning within my mind that my practices distinguished me from others, thereby allowing me to form relationships on the basis of common interest or personality, rather than cultural similarities, that summer. I valued the relationships more, and formed a deep connection with my lab partner, whom I had found was similar to me in many ways. 

Notice how both of these selections contain a lot of that first-person voice, which is critical to elaborating perspectives, learning points, and introspective thoughts. And did we mention that admissions officers are  looking for  those specific perspectives, learning points, and thoughts that compose who you are?

2. Full of a student’s voice

An academic transcript can be revealing to admissions officers. The same goes for recommendation letters and resumes. But it’s hard to convey an individual voice in these application components. The college essay is your prime vehicle for speaking directly to colleges in your own words  about what matters to you.

Successful college essays thus veer away from the formal voice many students employ when writing academic essays. Rather, they showcase a student’s unique way of expressing themselves on a page, which can be, for example, humorous, informal, intimate, lyrical, and/or speculative.

Voice is at the forefront of Elizabeth’s essay about her love for “all that is spicy:”

I am an aspiring hot sauce sommelier. Ever since I was a child, I have been in search for all that is spicy. I began by dabbling in peppers of the jarred variety. Pepperoncini, giardiniera, sports peppers, and jalapeños became not only toppings, but appetizers, complete entrées, and desserts. As my palate matured, I delved into a more aggressive assortment of spicy fare. I’m not referring to Flamin’ Hot Cheetos, the crunchy snack devoured by dilettantes. No, it was bottles of infernal magma that came next in my tasting curriculum.

Notice how Elizabeth’s descriptions of her passion for spice are rich with her voice: playful, intelligent, and humorous. This also gives us insight into a specific aspect of her character–that’s the power of voice when it comes to personal essay writing, and college admissions officers are very interested in applicants’ characters.

3. Descriptive and engaging

You don’t have to be a natural creative writer to compose a successful college essay. Yet competitive essays aren’t afraid to dive deeply into a subject and describe it, whether that description relates to imagery, emotions, perspectives, or insights. A college essay shouldn’t leave the reader guessing in any way–it should be highly specific and it should tell your story in an engaging fashion.

Harry’s more intellectual essay presents his views on common values in society. He is careful to be very specific and descriptive in these views, incorporating both a relevant incident from history and his own direct relationship to the issue:

Admittedly, the problem of social integration is one I feel can be widely overstated – for example, when I was looking into some research for a similar topic a couple of years ago, I found numerous surveys indicating that ethnic minorities (especially Islam) identify much more closely with Britain than do the population at large. Still though, I, like many others, find myself constantly troubled by the prospect of the war from within that seems to be developing. This fear is fuelled by events such as the brutal killing of the soldier Lee Rigby at the hands of two British Muslims a couple of years ago.

In her essay, Amanda is extremely detailed in describing her experience as a caretaker for a difficult child. The result is a clear portrait of the challenge itself and Amanda’s relationship to this challenge, told from the perspective of an engaging storyteller:

Then I met Robyn, and I realized how wrong I was. Prone to anger, aggressive, sometimes violent (I have the scar to prove it). Every Sunday with Robyn was a challenge. Yoga, dancing, cooking, art, tennis – none of these activities held her interest for long before she would inevitably throw a tantrum or stalk over to a corner to sulk or fight with the other children. She alternated between wrapping her arms around my neck, declaring to anyone who passed by that she loved me, and clawing at my arms, screaming at me to leave her alone.

4. Honest

The successful college essays we see always emerge from a place of honesty. Writing with honesty is also more likely to accurately convey a student’s unique voice, inspire reflection and introspection, and result in a descriptive, meaningful piece (all of the qualities listed in this post!).

Sometimes this means adopting a candid or direct voice on the page. James starts his essay frankly in this singular statement:

Simply put, my place of inner peace is the seat of that 50 foot sliver of carbon and kevlar called a rowing shell, cutting through the water in the middle of a race.

Or it might mean describing a challenge, vulnerability, or perspective truthfully, as Martin does in his essay about the experiences that have molded his character over the years:

Looking back, I have never been the “masculine boy” as society says my role to be. I have always thought I do not fit the social definition of a male as one who is “manly” and “sporty” and this alienating feeling of being different still persists today at times. However, I also have become more comfortable with myself, and I see my growth firsthand throughout high school.

Given that many universities value “truth” in their own mission statements and mottos, admissions officers will prioritize those essays that ring with a student’s honest voice.

5. Unconventional & distinct

This is by no means a requirement of a successful college essay. But many of the essays that earn students acceptance at their dream schools veer away from the predictable or expected, as we saw in Elizabeth’s essay above (“I am an aspiring hot sauce sommelier”). They are, in a nutshell, 100% unique.

We’ve seen some essays, for example, that follow more radical structures, such as list formats or experimental narratives. Others focus on unexpected subjects, like Shanaz’s piece on the relevance of Game of Thrones in her life and trajectory of learning.

And, time and again, successful college essays step away from what admissions officers already see in applications–academics, standardized tests, extracurricular activities, and classes. They may focus on something very specific (hot sauce or Game of Thrones ), seemingly ordinary (eating a kosher meal in public or working on a problem set), or personally interesting (a historic murder or wrestling game).

Regardless, the essays that “work” emphasize the unexpected, as opposed to the expected. Distinct essays will also feel as if they could not have been written by anyone else .

6. Well-written

This might also sound like an obvious quality of a successful essay, but it’s still worth mentioning. The most competitive application essays showcase strong writing skills, providing evidence of a student’s ability to tell a specific story artfully and well. 

Essays should also be error-free, grammatically precise, and stylistically on point. Successful pieces also might demonstrate versatility through varied sentence structure, word choice, and rhetorical or literary devices. Lastly, well-written essays typically adhere to a specific storytelling structure.

This excerpt from Justin’s essay about his experience in the California Cadet Corps, for example, displays a high command of language, word choice, and sentence structure:

Through Survival, I learned many things about myself and the way I approach the world. I realized that I take for granted innumerable small privileges and conveniences and that I undervalue what I do have. Now that I had experienced true and sustained hunger, I felt regret for times when I threw away food and behaved with unconscious waste. 

7. Meaningful

Above all, a successful college essay adds value to a student’s holistic college application. It is full of  meaning , in that it

  • showcases a student’s unique voice
  • elucidates an applicant’s particular perspective(s), character trait(s), and/or belief(s) and
  • honestly conveys a significant component of who a student is

It might be difficult to compress the entirety of who you are into 650 words. Yet it is most certainly possible to craft 650 words that add significant meaning to an overall application in terms of a student’s  personal potential for the future. This is exactly what admissions officers are looking for . 

What can you do to ensure that your college essay aligns with these successful qualities? You can check out examples of essays that do!

You can download 30 actual college essays that earned their writers acceptance into Ivy League schools, right now, for free.


Kate is a graduate of Princeton University. Over the last decade, Kate has successfully mentored hundreds of students in all aspects of the college admissions process, including the SAT, ACT, and college application essay. 


A Student’s Guide to Finding Quality Sources for Essays


  • 9-minute read
  • 1st August 2023

So, you’ve been assigned your first college essay. You need to write at least a thousand words but have one issue: you must include quality sources, which will go in the reference list. Your professor has only told you, “Utilize academic databases and scholarly journals.”

Okay, so how exactly do you find credible sources for your essay ? Well, we’ll guide you through that in today’s post. We’ll explore finding quality sources and why you need them. By the time you finish reading, you’ll be ready to find sources for your essay.

Why Do I Need Sources?

You likely learned the importance of sources in high school. You need them to show that you are well-read on your chosen topic. You can’t ignore the importance of conducting academic research , as it will be part of your daily college grind.

Submitting an essay without sources would be like serving a hamburger with just the bun and beef patty. Think of sources as toppings on a burger. An essay lacking sources will undermine your credibility, leaving your professor wondering, “How do you know that?”

You also need to include citations to support your claims, which come from the sources you choose.

Finding Sources

Finding sources will depend on whether you want primary or secondary sources . Primary sources provide first-hand facts about your topic. For example, if your topic is related to literature, you would seek novels or poems as your primary sources. Secondary sources contain information from primary sources, such as journal articles.

Whether they’re primary or secondary sources, here are our suggestions for finding them.

1.   Consult the Textbook

Your course textbook is a great starting point, as it will likely contain valuable and relevant information about your topic. Many students believe the textbook won’t be accepted as a source for an essay, but this is false. Your professor will welcome citations from the textbook.

2.   Head to Your School’s Library

No, we’re not suggesting heading to the library’s on-site Starbucks, hoping for source-searching inspiration as you sip that frothy latte! Your school’s library contains numerous print sources, such as books, magazines, and newspapers. College libraries also subscribe to databases containing journal articles.

Journal articles are highly valued in academic research; every professor will expect at least a few of them in a reference list. Articles published in academic journals are among the most current sources in a field, written by established scholars. They contribute new findings, summarize the current state of the field, and add insight to its theory. They are also credible, as field experts review them before publication (peer review).

You don’t have to leave your dorm and head to the library. You can access various sources from your school’s library database online. Here’s an example of a student accessing the University of South Florida’s library database from their favorite coffee shop.

[Screenshot: a student accessing the University of South Florida’s library database from off campus]

Navigating your library’s database can seem daunting; however, the library staff will be more than happy to help you, so don’t be afraid to ask.

Finally, your institution’s library uses an inter-library loan system, allowing students to request out-of-stock print or online sources. If the library doesn’t have a specific item you need, there’s a good chance they can get it from another library.

3.   Research Databases

You can use online research databases to find journal articles, other scholarly sources, and specific books. Research databases, which feature various search functions, can help you find the most current and relevant sources.


These research databases are available through your school library, giving you access to popular subject-specific databases such as JSTOR, Project Muse, and PubMed. You can download and save relevant articles from such databases; however, you must be logged into your student account to access and download full-version articles.

Knowing the essay’s scope and relevant keywords is essential for an optimal experience with databases. Once you become familiar with databases, they’ll be your best friends when conducting academic research.

4. Google Scholar

If Google is your tool of choice, we suggest using Google Scholar. It’s Google’s academic search engine, which works like an ordinary Google search except that it finds relevant academic print and online sources. Take this example of a student using Google Scholar to search for sources related to cyberbullying in schools.

[Screenshot: a Google Scholar search for sources on cyberbullying in schools]

Google Scholar presents various journal articles for the student. You can refine your search to find articles published within the last year. One distinguishing Google Scholar feature is its Cited by function, which shows the number of times a source has been cited. This can inform you about a source’s credibility and importance to your topic.

[Screenshot: Google Scholar results showing the Cited by count beneath each result]

5.   Boolean Operators

We suggest using Boolean operators if your essay topic contains multiple search terms. Boolean operators expand or narrow your search parameters when using research databases. They use AND, OR, and NOT to include or exclude keywords from your search, letting you connect various keywords and optimize your search results.

Let’s say you’re searching for an article on cyberbullying written by an author named Bales in 2003. You can use AND to combine the keywords: cyberbullying AND Bales AND 2003. This tells the database that all three terms must be present in every search result.

You can use OR to connect two or more similar concepts and broaden your search. This means any search terms you input can appear in the search results.

You can use NOT to exclude words or terms from your search by typing the excluded word after NOT: for example, soccer NOT football.

The search results will include soccer and omit football. This is very useful here because football is the UK word for soccer, but it means American football in US English. Because the student only wants soccer results, excluding football avoids pulling up results related to American football.

Boolean operators are helpful if you clearly understand the scope of the assignment and know relevant keywords.
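To make the mechanics concrete, here is a minimal Python sketch of how such queries can be assembled from keyword lists. The function name and the exact conventions (uppercase operators, quoting multi-word phrases) are our own illustration; individual databases vary, so check each database’s search help.

```python
def boolean_query(all_of=(), any_of=(), none_of=()):
    """Assemble a simple Boolean search string.

    all_of  -> terms joined with AND (every term must appear)
    any_of  -> terms grouped with OR (at least one must appear)
    none_of -> terms excluded with NOT
    Multi-word terms are quoted so they are searched as phrases.
    """
    def fmt(term):
        return f'"{term}"' if " " in term else term

    clauses = [fmt(t) for t in all_of]
    if any_of:
        clauses.append("(" + " OR ".join(fmt(t) for t in any_of) + ")")
    query = " AND ".join(clauses)
    for term in none_of:
        query += f" NOT {fmt(term)}"
    return query

# The examples from this section:
print(boolean_query(all_of=["cyberbullying", "Bales", "2003"]))
# cyberbullying AND Bales AND 2003
print(boolean_query(all_of=["soccer"], none_of=["football"]))
# soccer NOT football
```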


6.   Additional Online Sources

Searching for general online sources is another way to go. You can find potential sources from websites and blogs. We suggest consulting popular news websites such as BBC News and the New York Times, as they often have current and relevant articles related to the topic.

We encourage you to err on the side of caution when using non-academic online sources. You need to ensure that online sources are credible. We recommend looking for sites with trusted domain extensions, such as .edu, .org, and .gov. URLs ending in .edu are educational resources, .org endings are resources from organizations, and .gov endings are government-related resources.
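If you want to screen a list of links quickly, a domain-extension check is easy to script. This is only a rough first filter (a .org site is not automatically credible), and the helper name below is our own:

```python
from urllib.parse import urlparse

# Domain endings suggested above as generally trustworthy starting points.
TRUSTED_SUFFIXES = (".edu", ".org", ".gov")

def has_trusted_domain(url):
    """Return True if the URL's host ends in a trusted extension."""
    host = urlparse(url).hostname or ""
    return host.endswith(TRUSTED_SUFFIXES)

print(has_trusted_domain("https://www.lib.usf.edu/"))   # True
print(has_trusted_domain("https://example.com/blog"))   # False
```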

It’s also a good idea to look for sources that have a Digital Object Identifier ( DOI ). A DOI is a permanent string attached to online journal articles and books, making them simple to retrieve. An article with a DOI indicates that it has been published in a peer-reviewed journal.
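Because a DOI follows a predictable shape (the prefix 10., a registrant code, a slash, and a suffix), you can pull DOIs out of citation text with a regular expression. The pattern below is a simplified sketch and will not catch every valid DOI:

```python
import re

# Simplified DOI shape: "10." + 4-9 digit registrant code + "/" + suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def find_dois(text):
    """Return all DOI-like strings found in a block of citation text."""
    return DOI_PATTERN.findall(text)

print(find_dois("See https://doi.org/10.3389/fpsyg.2020.562462 for details."))
# ['10.3389/fpsyg.2020.562462']
```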

How Many Sources Should I Have?

The essay rubric will probably specify the number of sources required. However, this is not always the case, so you need to use some judgment. The basic rule is to gather sources until you have enough information to support your claims. If you’re writing an essay of 2,000 words, you should have at least six sources. Remember that your professor expects variety. Try this approach:

–    One book (if possible)

–    Two to three journal articles

–    One additional online source (preferably with a trusted domain extension)

Depending on the field of study, you may find that most of your sources come from journal articles.

 Here’s a recap of finding quality sources for your essay:

●  Professors want you to find a variety of sources (print and online)

●  Your school’s library has access to thousands of highly-valued journal articles from its database

●  Have a solid understanding of the topic and relevant keywords when using Boolean operators to narrow your search results

●  Evaluate the credibility of additional online sources

●  Look for websites with trusted domain extensions

●  As a rule, use at least six sources for an essay of 2,000 words

By following our suggestions, you can get your search off to a flying start. We also recommend keeping track of your sources as you conduct your research. This will make it easier to correctly format citations from your sources.

Finally, we urge you to search for sources right after your professor assigns the essay. Waiting until a few days before the essay is due to start searching is a bad idea.

Frequently Asked Questions

1. What Types of Sources Are Recommended?

We recommend credible websites, books, journal articles, and newspapers.

2. How Do I Know if a Source Is Credible?

 A source is credible if:

●  The author is an expert in the field or is a well-respected publisher (New York Times)

●  It contains citations for sources used

●  The website has a trusted domain extension

●  It has current information on your topic

3. How Can I Get the Most Out of Research Databases?

Brainstorm specific keywords related to your topic. This will help you use Boolean operators efficiently. You should also have a clear understanding of the scope of your essay. Finally, use databases that are related to your topic. For instance, if your topic is literature, then JSTOR is a good option.

4. Is Writing the Reference List Difficult?

This will depend on the required referencing style, such as APA, MLA, and Chicago. Remember to list the sources alphabetically in the reference list.

Once you’ve written the list, we recommend proofreading it. Your professor will be checking that your reference list meets the referencing style guidelines. A second pair of eyes always helps, so consider asking our proofreading experts to review your list . They can check that your sources are listed alphabetically and that your list meets the guidelines of popular referencing styles such as MLA and APA. Consider submitting a 500-word document for free!



What are the Qualities of the Best Essay

From time to time, students are given the task of writing quality essays, whether as college or university assignments. First, why are students so often asked to write essays? Essay assignments teach students to express their own thoughts about different topics and, most importantly, develop their creativity and thinking power. If you have been assigned an essay and, despite having good ideas about the title, you don’t know how an essay is structured or what makes one good, don’t worry: this article will walk you through seven important qualities of a good essay.


Everyone tries to write a quality essay in order to gain good marks. A good essay has many qualities: it must be free of grammatical and spelling errors, follow a proper structure, and support its arguments with strong ideas and evidence. Check out the seven important qualities below.

Also Read: How to Write an Essay- The Best Step by Step Guide

1. Your Essay Must Be Clear and Concise

When you have to show your creativity, the number of pages you write doesn’t matter at all. Keep it simple and make your words clear for your readers; page count adds nothing to your creativity. A lengthy essay that isn’t clear and related to the main topic is useless. So the first quality is to keep your essay clear and concise.

2. Write Relevant to the Topic

An essay cannot discuss every aspect of a topic; that is what distinguishes it from other types of papers. Your points and pieces of evidence should be specific and relevant to the topic. To prove your point of view, find evidence that convinces the reader your points are true and on topic.

3. Maintain Unity Between Paragraphs

What does unity mean here? In essay writing, the unity of the paragraphs is really important: all the paragraphs must connect with one another so that the main point comes across clearly. That connection depends on the thesis statement of the essay. A strong thesis statement holds the paragraphs together until the end, controls the whole essay, and plays a leading role in writing an effective one.

4. Use Transitions

Transition words play an important role in writing a good-quality essay because they connect words, sentences, and even paragraphs with each other. As we discussed above regarding unity and relevance, transitions are the words that preserve the connection between paragraphs and the relevance of the essay. They also improve its flow. It’s recommended to read Essential Transition Words and Phrases for Writing .

5. Don’t Use Unnecessary or Informal Words or Expressions

Unnecessary words and sentences have no place in a good-quality essay. If you are aiming for a perfect essay, avoid words and phrases that add nothing. For instance, say “I have found an amazing job for myself” instead of “I have found an amazing and great job for myself.” In the second sentence, “amazing” and “great” mean the same thing, so there is no need to use both at once. Also avoid informal short forms such as “gonna”; use “going to” instead. There are many such short forms you should avoid.

6. Avoid Grammatical Mistakes

Obviously, an essay must be free of grammatical errors, spelling errors, run-ons, and fragments. All the basic grammar rules should be applied so that the essay is of good quality. Many students wonder why they receive lower grades on their essay-writing tests; the main reason is that they don’t follow the basic grammar rules, which makes their writing less effective. Follow this step carefully, because it is the pillar of a good-quality essay.

7. Proofread Your Essay Carefully

A good-quality essay is revised several times so that it doesn’t contain any errors. Reviewing is necessary to rectify mistakes you might have missed and to include important points you left out. Revision ensures your writing is solid and free of spelling and grammatical errors. Indeed, revision is a must for writing a perfect essay and is one of the qualities of the best essay. Recommended reading: 5 Tips to Help You Proofread Your Writing Like a Pro

The qualities mentioned above are highly recommended by great content writers who write good, effective essays. What do you think about this topic? Were these tips helpful? Please feel free to share your feedback and ideas in the comments section below, and let us know what you want to read next. For more articles and updates, follow us and stay tuned.



Front Psychol

Is a Long Essay Always a Good Essay? The Effect of Text Length on Writing Assessment

Johanna Fleckenstein

1 Department of Educational Research and Educational Psychology, Leibniz Institute for Science and Mathematics Education, Kiel, Germany

Jennifer Meyer

Thorben Jansen

2 Institute for Psychology of Learning and Instruction, Kiel University, Kiel, Germany

Stefan Keller

3 School of Education, Institute of Secondary Education, University of Applied Sciences and Arts Northwestern Switzerland, Brugg, Switzerland

Olaf Köller

Associated Data

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.

The assessment of text quality is a transdisciplinary issue concerning the research areas of educational assessment, language technology, and classroom instruction. Text length has been found to strongly influence human judgment of text quality. The question of whether text length is a construct-relevant aspect of writing competence or a source of judgment bias has been discussed controversially. This paper used both a correlational and an experimental approach to investigate this question. Secondary analyses were performed on a large-scale dataset with highly trained raters, showing an effect of text length beyond language proficiency. Furthermore, an experimental study found that pre-service teachers tended to undervalue text length when compared to professional ratings. The findings are discussed with respect to the role of training and context in writing assessment.

Introduction

Judgments of students’ writing are influenced by a variety of text characteristics, including text length. The relationship between such (superficial) aspects of written responses and the assessment of text quality has been a controversial issue in different areas of educational research. Both in the area of educational measurement and of language technology, text length has been shown to strongly influence text ratings by trained human raters as well as computer algorithms used to score texts automatically ( Chodorow and Burstein, 2004 ; Powers, 2005 ; Kobrin et al., 2011 ; Guo et al., 2013 ). In the context of classroom language learning and instruction, studies have found effects of text length on teachers’ diagnostic judgments (e.g., grades; Marshall, 1967 ; Osnes, 1995 ; Birkel and Birkel, 2002 ; Pohlmann-Rother et al., 2016 ). In all these contexts, the underlying question is a similar one: Should text length be considered when judging students’ writing – or is it a source of judgment bias? The objective of this paper is to investigate to what degree text length is a construct-relevant aspect of writing competence, or to what extent it erroneously influences judgments.

Powers (2005) recommends both correlational and experimental approaches for establishing the relevance of response length in the evaluation of written responses: “the former for ruling out response length (and various other factors) as causes of response quality (by virtue of their lack of relationship) and the latter for establishing more definitive causal links” (p. 7). This paper draws on data from both recommended approaches: a correlational analysis of a large-scale dataset [MEWS; funded by the German Research Foundation (Grant Nr. CO 1513/12-1) and the Swiss National Science Foundation (Grant Nr. 100019L_162675)] based on expert text quality ratings on the one hand, and an experimental study with untrained pre-service teachers on the other. It thereby combines the measurement perspective with the classroom perspective. In the past, (language) assessment research has been conducted within different disciplines that rarely acknowledged each other. While some assessment issues are relevant for standardized testing in large-scale contexts only, others pertain to research on teaching and classroom instruction as well. Even though their assessments may serve different functions (e.g., formative vs. summative or low vs. high stakes), teachers need to be able to assess students’ performance accurately, just as well as professional raters in standardized tests. Thus, combining these different disciplinary angles and looking at the issue of text length from a transdisciplinary perspective can be an advantage for all the disciplines involved. Overall, this paper aims to present a comprehensive picture of the role of essay length in human and automated essay scoring, which ultimately amounts to a discussion of the elusive “gold standard” in writing assessment.

Theoretical Background

Writing assessment is about identifying and evaluating features of a written response that indicate writing quality. Overall, previous research has demonstrated clear and consistent associations between linguistic features on the one hand, and writing quality and development on the other. In a recent literature review, Crossley (2020) showed that higher rated essays typically include more sophisticated lexical items, more complex syntactic features, and greater cohesion. Developing writers also show movements toward using more sophisticated words and more complex syntactic structures. The studies presented by Crossley (2020) provide strong indications that linguistic features in texts can afford important insights into writing quality and development. Whereas linguistic features are generally considered to be construct-relevant when it comes to assessing writing quality, there are other textual features whose relevance to the construct is debatable. The validity of the assessment of students’ competences is negatively affected by construct-irrelevant factors that influence judgments ( Rezaei and Lovorn, 2010 ). This holds true for professional raters in the context of large-scale standardized writing assessment as well as for teacher judgments in classroom writing assessment (both formative or summative). Assigning scores to students’ written responses is a challenging task as different text-inherent factors influence the accuracy of the raters’ or teachers’ judgments (e.g., handwriting, spelling: Graham et al., 2011 ; length, lexical diversity: Wolfe et al., 2016 ). Depending on the construct to be assessed, the influence of these aspects can be considered judgment bias. One of the most relevant and well-researched text-inherent factors influencing human judgments is text length. 
Crossley (2020) points out that his review does “not consider text length as a linguistic feature while acknowledging that text length is likely the strongest predictor of writing development and quality.” Multiple studies have found a positive relationship between text length and human ratings of text quality, even when controlling for language proficiency ( Chenoweth and Hayes, 2001 ; McCutchen et al., 2008 ; McNamara et al., 2015 ). It is still unclear, however, whether the relation between text length and human scores reflects a true relation between text length and text quality (appropriate heuristic assumption) or whether it stems from a bias in human judgments (judgment bias assumption). The former suggests that text length is a construct-relevant factor and that a certain length is needed to effectively develop a point of view on the issue presented in the essay prompt, and this is one of the aspects taken into account in the scoring ( Kobrin et al., 2007 ; Quinlan et al., 2009 ). The latter claims that text length is either completely or partly irrelevant to the construct of writing proficiency and that the strong effect it has on human judgment can be considered a bias ( Powers, 2005 ). In the context of large-scale writing assessment, prompt-based essay tasks are often used to measure students’ writing competence ( Guo et al., 2013 ). These essays are typically scored by professionally trained raters. These human ratings have been shown to be strongly correlated with essay length, even if this criterion is not represented in the assessment rubric ( Chodorow and Burstein, 2004 ; Kobrin et al., 2011 ). In a review of selected studies addressing the relation between length and quality of constructed responses, Powers (2005) showed that most studies found correlations within the range of r = 0.50 to r = 0.70. For example, he criticized the SAT essay for encouraging wordiness as longer essays tend to score higher. Kobrin et al. (2007) found the number of words to explain 39% of the variance in the SAT essay score. The authors argue that essay length is one of the aspects taken into account in the scoring as it takes a certain length to develop an argument. Similarly, Deane (2013) argues in favor of regarding writing fluency as a construct-relevant factor (also see Shermis, 2014 ; McNamara et al., 2015 ). In an analytical rating of text quality, Hachmeister (2019) showed that longer texts typically contain more cohesive devices, which has a positive impact on ratings of text quality. In the context of writing assessment in primary school, Pohlmann-Rother et al. (2016) found strong correlations between text length and holistic ratings of text quality ( r = 0.62) as well as the semantic-pragmatic analytical dimension ( r = 0.62). However, they found no meaningful relationship between text length and language mechanics (i.e., grammatical and orthographical correctness; r = 0.09).

Text length may be considered especially construct-relevant when it comes to writing in a foreign language. Because of the constraints of limited language knowledge, writing in a foreign language may be hampered because of the need to focus on language rather than content ( Weigle, 2003 ). Silva (1993) , in a review of differences between writing in a first and second language, found that writing in a second language tends to be “more constrained, more difficult, and less effective” (p. 668) than writing in a first language. The necessity of devoting cognitive resources to issues of language may mean that not as much attention can be given to higher order issues such as content or organization (for details of this debate, see Weigle, 2003 , p. 36 f.). In that context, the ability of writing longer texts may be legitimately considered as indicative of higher competence in a foreign language, making text length a viable factor of assessment. For example, Ruegg and Sugiyama (2010) showed that the main predictors of the content score in English foreign language essays were first, organization and second, essay length.

The relevance of this issue has further increased as systems of automated essay scoring (AES) have become more widely used in writing assessment. These systems offer a promising way to complement human ratings in judging text quality ( Deane, 2013 ). However, as the automated scoring algorithms are typically modeled after human ratings, they are also affected by human judgment bias. Moreover, it has been criticized that, at this point, automated scoring systems mainly count words when computing writing scores ( Perelman, 2014 ). Chodorow and Burstein (2004) , for example, showed that 53% of the variance in human ratings can be explained by automated scoring models that use only the number of words and the number of words squared as predictors. Ben-Simon and Bennett (2007) provided evidence from National Assessment of Educational Progress (NAEP) writing test data that standard, statistically created e-rater models weighed essay length even more strongly than human raters (also see Perelman, 2014 ).
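To illustrate what such a length-only model looks like, the sketch below fits a score from word count and word count squared, in the spirit of the model Chodorow and Burstein (2004) describe. The numbers are synthetic, invented purely for illustration; they are not data from any of the cited studies.

```python
import numpy as np

# Synthetic (invented) essay lengths and holistic scores for illustration.
word_counts = np.array([120, 180, 240, 300, 360, 420, 480], dtype=float)
scores = np.array([2.0, 2.6, 3.1, 3.5, 3.8, 4.0, 4.1])

# Length-only model: score ~ b0 + b1 * n + b2 * n^2
X = np.column_stack([np.ones_like(word_counts), word_counts, word_counts**2])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

def predict(n):
    """Predicted score for an essay of n words under the fitted model."""
    return coef[0] + coef[1] * n + coef[2] * n**2
```

On data like these, the fitted quadratic coefficient is negative, capturing diminishing returns of extra words; the point is that a model seeing nothing but length can track holistic scores closely, which is exactly the validity concern raised above.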

Bejar (2011) suggests that a possible tendency to reward longer texts could be minimized through the training of raters with responses at each score level that vary in length. However, Barkaoui (2010) and Attali (2016) both compared the holistic scoring of experienced vs. novice raters and – contrary to expectations – found that the correlation between essay length and scores was slightly stronger for the experienced group. Thus, the question of whether professional experience and training counteract or even reinforce the tendency to overvalue text length in scoring remains open.

Compared to the amount of research on the role of essay length in human and automated scoring in large-scale high-stakes contexts, little attention has been paid to the relation of text length and quality in formative or summative assessment by teachers. This is surprising considering the relevance of the issue for teachers’ professional competence: In order to assess the quality of students’ writing, teachers must either configure various aspects of text quality in a holistic assessment or hold them apart in an analytic assessment. Thus, they need to have a concept of writing quality appropriate for the task and they need to be aware of the construct-relevant and -irrelevant criteria (cf. the lens model; Brunswik, 1955 ). To our knowledge, only two studies have investigated the effect of text length on holistic teacher judgments, both of which found that longer texts receive higher grades. Birkel and Birkel (2002) found significant main effects of text length (long, medium, short) and spelling errors (many, few) on holistic teacher judgments. Osnes (1995) reported effects of handwriting quality and text length on grades.

Whereas research on the text length effect on classroom writing assessment is scarce, a considerable body of research has investigated how other text characteristics influence teachers’ assessment of student texts. It is well-demonstrated, for example, that pre-service and experienced teachers assign lower grades to essays containing mechanical errors ( Scannell and Marshall, 1966 ; Marshall, 1967 ; Cumming et al., 2002 ; Rezaei and Lovorn, 2010 ). Scannell and Marshall (1966) found that pre-service teachers’ judgments were affected by errors in punctuation, grammar and spelling, even though they were explicitly instructed to grade on content alone. More recently, Rezaei and Lovorn (2010) showed that high quality essays containing more structural, mechanical, spelling, and grammatical errors were assigned lower scores than texts without errors even in criteria relating solely to content. Teachers failed to distinguish between formal errors and the independent quality of content in a student essay. Similarly, Vögelin et al. (2018 , 2019) found that lexical features and spelling influenced not only holistic teacher judgments of students’ writing in English as a second or foreign language, but also their assessment of other analytical criteria (e.g., grammar). Even though these studies do not consider text length as a potential source of bias, they do show that construct-irrelevant aspects influence judgments of teachers.

This Research

Against this research background, it remains essential to investigate whether the relation between essay length and text quality represents a true relationship or a bias on the part of the rater or teacher ( Wolfe et al., 2016 ). First, findings of correlational studies can give us an indication of the effect of text length on human ratings above and beyond language proficiency variables. Second, going beyond correlational findings, there is a need for experimental research that examines essay responses on the same topic differing only in length in order to establish causal relationships ( Kobrin et al., 2007 ). The present research brings together both of these approaches.

This paper comprises two studies investigating the role of essay length in foreign language assessment from an interdisciplinary perspective, spanning the fields of foreign language education, computational linguistics, educational research, and psychometrics. Study 1 presents a secondary analysis of a large-scale dataset with N = 2,722 upper secondary school students in Germany and Switzerland who wrote essays in response to "independent writing" prompts of the internet-based Test of English as a Foreign Language (TOEFL iBT). It investigates how several indicators of students' English proficiency (English grade, reading and listening comprehension, self-concept) are related to the length of their essays (word count). It further investigates whether essay length accounts for variance in text quality scores (expert ratings) even when controlling for English language proficiency and other variables (e.g., country, gender, cognitive ability). A weak relationship between proficiency and length, together with a large proportion of variance in text quality explained by length beyond proficiency, would favor the judgment bias assumption.

Study 2 focused on possible essay length bias in an experimental setting, investigating the effect of essay length on text quality ratings when there was (per design) no relation between essay length and text quality score. Essays from Study 1 were rated by N = 84 untrained pre-service teachers, using the same TOEFL iBT rubric as the expert raters. As text quality scores were held constant within all essay length conditions, any significant effect of essay length would indicate a judgment bias. Both studies are described in more detail in the following sections.

This study investigates the judgment bias assumption vs. the appropriate heuristic assumption in a large-scale context with professional human raters. A weak relationship between text length and language proficiency would be indicative of the former assumption, whereas a strong relationship would support the latter. Moreover, if the impact of text length on human ratings were significant and substantial beyond language proficiency, this would indicate a bias on the part of the rater rather than an appropriate heuristic. Thus, Study 1 aims to answer the following research questions:

  • (1) How is essay length related to language proficiency?
  • (2) Does text length still account for variance in text quality when English language proficiency is statistically controlled for?

Materials and Methods

Sample and procedure

The sample consisted of N = 2,722 upper secondary students (11th grade; 58.1% female) in Germany ( n = 894) and Switzerland ( n = 1,828) from the interdisciplinary and international research project Measuring English Writing at Secondary Level (MEWS; for an overview see Keller et al., 2020). The target population was students attending the academic track of general education grammar schools (ISCED level 3a) in the German federal state of Schleswig-Holstein and in seven Swiss cantons (Aargau, Basel-Stadt, Basel-Land, Luzern, St. Gallen, Schwyz, Zurich). In a repeated-measures design, students were assessed at the beginning (T1: August/September 2016; M age = 17.34; SD age = 0.87) and at the end of the school year (T2: May/June 2017; M age = 18.04; SD age = 0.87). The students completed computer-based tests of writing, reading, and listening skills, as well as general cognitive ability. Furthermore, they completed a questionnaire measuring background variables and individual characteristics.

Writing prompt

All students answered two independent and two integrated essay writing prompts of the internet-based Test of English as a Foreign Language (TOEFL iBT ® ) that is administered by the Educational Testing Service (ETS) in Princeton. The task instruction was as follows: “In the writing task below you will find a question on a controversial topic. Answer the question in an essay in English. List arguments and counter-arguments, explain them and finally make it clear what your own opinion on the topic is. Your text will be judged on different qualities. These include the presentation of your ideas, the organization of the essay and the linguistic quality and accuracy. You have 30 min to do this. Try to use all of this time as much as possible.” This task instruction was followed by the essay prompt. The maximum writing time was 30 min according to the official TOEFL iBT ® assessment procedure. The essays were scored by trained human raters on the TOEFL 6-point rating scale at ETS. In addition to two human ratings per essay, ETS also provided scores from their automated essay scoring system (e-rater ® ; Burstein et al., 2013 ). For a more detailed description of the scoring procedure and the writing prompts see Rupp et al. (2019) and Keller et al. (2020) . For the purpose of this study, we selected the student responses to the TOEFL iBT independent writing prompt “Teachers,” which showed good measurement qualities (see Rupp et al., 2019 ). Taken together, data collections at T1 and T2 yielded N = 2,389 valid written responses to the following prompt: “A teacher’s ability to relate well with students is more important than excellent knowledge of the subject being taught.”

Text quality and length

The rating of text quality via human and machine scoring was done by ETS. All essays were scored by highly experienced human raters on the operational holistic TOEFL iBT rubric from 0 to 5 ( Chodorow and Burstein, 2004 ). Essays were scored high if they were well-organized and individual ideas were well-developed, if they used specific examples and support to express learners’ opinion on the subject, and if the English language was used accurately to express learners’ ideas. Essays were assigned a score of 0 if they were written in another language, were generally incomprehensible, or if no text was entered.

Each essay received independent ratings from two trained human raters. If the two ratings deviated by no more than 1 point, their mean was used; if they deviated by 2 or more points, a third rater (adjudicator) was consulted. Inter-rater agreement, as measured by quadratic weighted kappa (QWK), was satisfactory for the prompt "Teachers" at both time points (QWK = 0.67; Hayes and Hatch, 1999; see Rupp et al., 2019 for further details). The mean text quality score was M = 3.35 (SD = 0.72).
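This two-rater adjudication rule can be sketched as follows (a minimal illustration; `adjudicate` is our own hypothetical helper, not part of the ETS scoring pipeline, and how the adjudicator's score enters the final score is one possible resolution rule):

```python
def adjudicate(score_a, score_b, adjudicator=None):
    """Combine two holistic ratings on the 0-5 TOEFL scale.

    A deviation of at most 1 point -> mean of the two scores;
    a deviation of 2 or more points -> an adjudicator score is required.
    """
    if abs(score_a - score_b) <= 1:
        return (score_a + score_b) / 2
    if adjudicator is None:
        raise ValueError("deviation of 2 or more: adjudicator score required")
    return adjudicator  # one possible way to resolve large disagreements
```

For example, `adjudicate(3, 4)` yields 3.5, while `adjudicate(2, 4, adjudicator=3)` falls back to the third rater's score.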

Word count was used to measure the length of the essays; the number of words was calculated by the e-rater® scoring engine. The mean word count was M = 311.19 (SD = 81.91), with a range from 41 to 727 words. We used the number of words rather than other measures of text length (e.g., number of letters) because it is the measure most frequently used in the literature: 9 out of 10 studies in the research review by Powers (2005) used word count as the criterion (also see Kobrin et al., 2007, 2011; Crossley and McNamara, 2009; Barkaoui, 2010; Attali, 2016; Wolfe et al., 2016; Wind et al., 2017). This ensures that our analyses can be compared with previous research.

English language proficiency and control variables

Proficiency was operationalized by a combination of different variables: English grade, English writing self-concept, and reading and listening comprehension in English. The listening and reading skills were measured with a subset of items from the German National Assessment (Köller et al., 2010). The tasks require a detailed understanding of long, complex reading and listening texts including idiomatic expressions and different linguistic registers. The tests consisted of a total of 133 items for reading and 118 items for listening, administered in a multi-matrix design. Each student was assessed with two rotated 15-min blocks per domain. Item parameters were estimated using longitudinal multidimensional two-parameter item response models in Mplus version 8 (Muthén and Muthén, 1998–2012). Student abilities were estimated using 15 plausible values (PVs) per person. The PV reliabilities were 0.92 (T1) and 0.76 (T2) for reading comprehension, and 0.85 (T1) and 0.72 (T2) for listening comprehension. For a more detailed description of the scaling procedure see Köller et al. (2019).

General cognitive ability was assessed at T1 using the subtests on figural reasoning (N2; 25 items) and verbal reasoning (V3; 20 items) of the Cognitive Ability Test (KFT 4–12+R; Heller and Perleth, 2000). For each scale, 15 PVs were drawn in a two-dimensional item response model. For the purpose of this study, the PVs of the two scales were combined into 15 overall PV scores with a reliability of 0.86.

The English writing self-concept was measured with a scale consisting of five items (e.g., “I have always been good at writing in English”; Eccles and Wigfield, 2002 ; Trautwein et al., 2012 ; α = 0.90). Furthermore, country (Germany = 0/Switzerland = 1), gender (male = 0/female = 1) and time of measurement (T1 = 0; T2 = 1) were used as control variables.

Statistical Analyses

All analyses were conducted in Mplus version 8 (Muthén and Muthén, 1998–2012) based on the 15 PV data sets, using robust maximum likelihood estimation to account for the hierarchical data structure (i.e., students clustered in classes; type = complex). Full-information maximum likelihood was used to estimate missing values in background variables. Due to the use of 15 PVs, all analyses were run 15 times and the results were averaged (see Rubin, 1987).
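Pooling the results across the 15 PV data sets follows Rubin's (1987) rules: the pooled point estimate is the mean of the per-data-set estimates, and the total variance combines within- and between-imputation variance. A minimal sketch (function name and toy data are ours):

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Pool results from m analyses of m plausible-value data sets.

    Returns the pooled point estimate and its total variance
    T = W + (1 + 1/m) * B (Rubin, 1987).
    """
    m = len(estimates)
    q_bar = mean(estimates)   # pooled point estimate
    w = mean(variances)       # average within-imputation variance
    b = variance(estimates)   # between-imputation variance (sample variance)
    return q_bar, w + (1 + 1 / m) * b
```

For instance, pooling two toy estimates with `pool_rubin([0.3, 0.4], [0.01, 0.01])` returns approximately (0.35, 0.0175).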

Confirmatory factor analysis was used to specify a latent proficiency factor. All four proficiency variables showed substantial loadings in a single-factor measurement model (English grade: 0.67; writing self-concept: 0.73; reading comprehension: 0.42; listening comprehension: 0.51). As reading and listening comprehension were measured within the same assessment framework and could thus be expected to share mutual variance beyond the latent factor, their residuals were allowed to correlate. The analyses yielded an acceptable model fit: χ²(1) = 3.65, p = 0.06; CFI = 0.998, RMSEA = 0.031, SRMR = 0.006.
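In equation form, this single-factor measurement model can be written as follows (our notation, not taken from the original analysis):

```latex
x_i = \lambda_i \eta + \varepsilon_i, \qquad
i \in \{\text{grade},\ \text{self-concept},\ \text{reading},\ \text{listening}\},
```

where \(\eta\) is the latent proficiency factor, \(\lambda_i\) the loading of indicator \(x_i\), and all residual covariances are fixed to zero except \(\mathrm{Cov}(\varepsilon_{\text{reading}}, \varepsilon_{\text{listening}})\), which is freely estimated.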

The relationship between text length and other independent variables was explored with correlational analysis. Multiple regression analysis with latent and manifest predictors was used to investigate the relations between text length, proficiency, and text quality.

The correlation of the latent proficiency factor and text length (word count) was moderately positive: r = 0.36, p < 0.01. This indicates that more proficient students tended to write longer texts. Significant correlations with other variables showed that students tended to write longer texts at T1 ( r = -0.08, p < 0.01), that girls wrote longer texts than boys ( r = 0.11, p < 0.01), and that higher cognitive ability was associated with longer texts ( r = 0.07, p < 0.01). However, all of these correlations were very weak. The association of country and text length was not statistically significant ( r = -0.06, p = 0.10).

Table 1 presents the results of the multiple linear regression of text quality on text length, proficiency, and control variables. Proficiency and the covariates alone explained 38 percent of the variance in text quality ratings, with the latent proficiency factor being by far the strongest predictor (Model 1). The effect of text length on the text quality score was similarly strong when the model included the control variables but not proficiency (Model 2). When both the latent proficiency factor and text length were entered into the regression model (Model 3), the coefficient of text length was reduced but remained significant and substantial, explaining an additional 24% of the variance (ΔR² = 0.24 from Model 1 to Model 3). Thus, text length had an incremental effect on text quality beyond a latent English language proficiency factor.

Linear regression of text quality on text length, English language proficiency, and control variables: standardized regression coefficients (β) and standard errors (SE).
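The incremental-variance logic of Models 1 and 3 can be illustrated with a hierarchical regression on synthetic data (a numpy sketch; variable names and effect sizes are invented for illustration, not taken from the MEWS data):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 1000
proficiency = rng.normal(size=n)
# Text length is only moderately related to proficiency, as in Study 1.
length = 0.36 * proficiency + rng.normal(size=n)
# Text quality depends on both proficiency and length.
quality = 0.5 * proficiency + 0.5 * length + rng.normal(size=n)

r2_model1 = r_squared(proficiency[:, None], quality)                    # proficiency only
r2_model3 = r_squared(np.column_stack([proficiency, length]), quality)  # + text length
delta_r2 = r2_model3 - r2_model1  # variance explained by length beyond proficiency
```

With these synthetic effects, `delta_r2` comes out clearly positive, mirroring the pattern (though not the exact value) of the ΔR² = 0.24 reported above.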

Study 1 approached the issue of text length by operationalizing the construct of English language proficiency and investigating how it affects the relationship between text length and text quality. This can give us an idea of how text length may influence human judgments even though it is not considered relevant to the construct of writing competence. These secondary analyses of an existing large-scale dataset yielded two central findings: First, text length was only moderately associated with language proficiency. Second, text length strongly predicted text quality ratings beyond proficiency; it thus had an impact on the assigned score that was not captured by the construct of proficiency. These findings can be interpreted in favor of the judgment bias assumption, as text length may carry both construct-irrelevant and construct-relevant information.

The strengths of this study were the large sample of essays on the same topic and the vast amount of background information that was collected on the student writers (proficiency and control variables). However, there were three major limitations: First, the proficiency construct captured different aspects of English language competence (reading and listening comprehension, writing self-concept, grade), but that operationalization was not comprehensive. Thus, the additional variance explained by text length may still have been due to other aspects that could not be included in the analyses as they were not in the data. Further research with a similar design (primary or secondary analyses) should use additional variables such as grammar/vocabulary knowledge or writing performance in the first language.

The second limitation was the correlational design, which does not allow a causal investigation of the effect of text length on text quality ratings. Drawing causal inferences would require an experimental environment in which, for example, text quality is kept constant across texts of different lengths. For that reason, Study 2 employed exactly such a design.

Last but not least, the question of the transferability of these findings remains open. Going beyond standardized large-scale assessment, interdisciplinary research requires us to look at the issue from different perspectives. Findings pertaining to professional raters may not transfer to teachers, who are required to assess students' writing in a classroom context. Thus, Study 2 drew on a sample of pre-service English teachers and took a closer look at how their ratings were affected by text length.

Research Questions

In Study 2, we investigated the judgment bias assumption vs. the appropriate heuristic assumption for pre-service teachers. As recommended by Powers (2005), we conducted an experimental study in addition to the correlational design used in Study 1. As text quality scores were held constant within all essay length conditions, any significant effect of essay length would be in favor of the judgment bias assumption. The objective of this study was to answer the following research questions:

  • (1) How do ratings of pre-service teachers correspond to expert ratings?
  • (2) Is there an effect of text length on the text quality ratings of pre-service English teachers when there is (per design) no relation between text length and text quality (main effect)?
  • (3) Does the effect differ for different levels of writing performance (interaction effect)?

Participants and Procedure

The experiment was conducted with N = 84 pre-service teachers ( M age = 23 years; 80% female) enrolled in a higher education teacher training program at a university in Northern Germany. They had no prior experience rating this type of learner text. The experiment was administered with the Student Inventory ASSET ( Jansen et al., 2019 ), an online tool for assessing students' texts within an experimental environment. Participants were asked to rate essays from the MEWS project (see Study 1) on the holistic rubric used by the human raters at ETS (0–5; https://www.ets.org/s/toefl/pdf/toefl_writing_rubrics.pdf ). Every participant had to rate 9 out of 45 essays in randomized order, representing all possible combinations of text quality and text length. Before the rating process began, participants were given information about essay writing in the context of the MEWS study (school type; school year; students' average age; instructional text) and were presented with the TOEFL writing rubric as the basis for their judgments. They had 15 min to get an overview of all nine texts before they were asked to rate each text on the rubric. Throughout the rating process, they were allowed to highlight parts of the texts.

The operationalization of text quality and text length as categorical variables, as well as the procedure for selecting an appropriate essay sample, is explained in the following.

Text Length and Text Quality

The essays used in the experiment were selected using the following procedure, which took both text quality and text length into account as independent variables. The first independent variable, overall text quality, was operationalized via scores assigned by two trained human raters from ETS on a holistic six-point scale (0–5; see Study 1 and Appendix A). In order to measure the variable as precisely as possible, we only included essays for which both human raters had assigned the same score, resulting in a sample of N = 1,333 essays. The corpus included only a few texts (10.4%) with the extreme scores of 0, 1, and 5; these were therefore excluded from the essay pool. As a result, three gradations of text quality were considered in the current study: lower quality (score 2), medium quality (score 3), and higher quality (score 4). The second independent variable, text length, was measured via the word count of the essays, calculated by the e-rater® scoring engine. As with text quality, this variable was subdivided into three levels: rather short texts (s), medium-length texts (m), and long texts (l). All available texts were analyzed regarding their word count distribution, and severe outliers were excluded. The remaining N = 1,308 essays were split into three equally sized groups: the lower (≤261 words), middle (262–318 words), and upper (≥319 words) third. We thus realized a 3 × 3 factorial within-subjects design. Table 2 shows the distribution of essays for the resulting combinations of text length and text score.

Distribution of essays in the sample contingent on text quality and text length groupings.
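The resulting 3 × 3 within-subjects design, in which every participant rates nine essays covering all quality × length combinations, can be sketched as follows (a simplified simulation with a hypothetical essay pool of five essays per cell):

```python
import random

QUALITY = (2, 3, 4)       # lower, medium, higher text quality scores
LENGTH = ("s", "m", "l")  # short, medium-length, long texts

# Hypothetical pool: five essay IDs per (quality, length) cell, as in the study.
pool = {(q, le): [f"essay_q{q}_{le}{i}" for i in range(5)]
        for q in QUALITY for le in LENGTH}

def draw_rating_set(pool, rng=random):
    """Draw one essay per cell (nine essays) and randomize their order."""
    essays = [rng.choice(cell_essays) for cell_essays in pool.values()]
    rng.shuffle(essays)
    return essays

rating_set = draw_rating_set(pool)  # nine distinct essays, one per design cell
```

Each participant's rating set thus contains exactly one essay from each of the nine design cells, in randomized order.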

Selection of Essays

For each text length group (s, m, and l), the mean word count across all three score groups was calculated. Then, the score group (2, 3, or 4) with the smallest number of essays in a text length group was taken as reference (e.g., n = 22 short texts of high quality or n = 15 long texts of low quality). Within each text length group, the five essays whose word counts were closest to the mean of the reference group were chosen for the study. In most cases this was possible with no or only minor deviations. In case of multiple possible matches, the essay was selected at random. This selection procedure resulted in a total sample of 45 essays, with five essays for each combination of score group (2, 3, 4) and length group (s, m, l).
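This selection rule can be sketched as follows (hypothetical data structures; as in the study, ties in distance are broken at random):

```python
import random

def select_closest(essays, reference_mean, k=5, rng=random):
    """Pick the k essays whose word count is closest to the reference mean.

    `essays` is a list of (essay_id, word_count) pairs from one
    text length group.
    """
    shuffled = essays[:]
    rng.shuffle(shuffled)  # random tie-breaking before the stable sort
    return sorted(shuffled, key=lambda e: abs(e[1] - reference_mean))[:k]
```

For example, with a reference mean of 240 words, `select_closest([("a", 200), ("b", 238), ("c", 241), ("d", 300), ("e", 239)], 240, k=3)` picks essays "b", "c", and "e".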

A repeated-measures ANOVA with two independent variables (text quality and text length) was conducted to test the two main effects and their interaction on participants' ratings (see Table 3). Essay ratings were treated as a within-subject factor, accounting for dependencies of the ratings nested within raters. The main effect of text quality on participants' ratings was significant, with differences between the three text quality conditions (low, medium, high) corresponding to the expert ratings; F(2, 82) = 209.04, p < 0.001, d = 4.52. There was also a significant main effect of the three essay length conditions (short, medium, long); F(2, 82) = 9.14, p < 0.001, d = 0.94. Contrary to expectations, essay length was negatively related to participants' ratings: shorter texts received higher scores than longer texts. The interaction of text quality and text length was also significant; F(4, 80) = 3.93, p < 0.01, d = 0.89. Post hoc tests revealed that texts of low quality were particularly negatively affected by greater length (see Figure 1).

Participants’ ratings of text quality: means (M) and standard deviations (SD).


Visualization of the interaction between text length and text quality.

The experiment conducted in Study 2 found a very strong, significant main effect of text quality, indicating a high correspondence between pre-service teachers' ratings and the expert ratings of text quality. The main effect of text length was also significant, but it was qualified by a significant text quality × text length interaction, indicating that low-quality texts were rated even more negatively the longer they were. This negative effect of text length was contrary to expectations: the pre-service teachers generally tended to assign higher scores to shorter texts, and thus seemed to value shorter texts over longer ones. However, this was mainly true for texts of low quality.

These findings were surprising given the research background, which suggests that longer texts are typically associated with higher text quality scores, particularly in the context of second language writing. It is therefore all the more important to discuss the limitations of the design before interpreting the results. First, the sample consisted of relatively inexperienced pre-service teachers. Further research is needed to show whether these findings are transferable to in-service teachers with substantial experience in judging students' writing. Moreover, further studies could use assessment rubrics that teachers are more familiar with, such as the CEFR (Council of Europe, 2001; also see Fleckenstein et al., 2020). Second, the selection process of essays may have reduced the ecological validity of the experiment. As there were only few long texts of low quality and few short texts of high quality in the actual sample (see Table 2), the selection of texts in the experimental design was, to some degree, artificial. This could also have influenced the frame of reference for the pre-service teachers, as the distribution of the nine texts was different from what one would find naturally in an EFL classroom. Third, the most important limitation of this study is the question of the reference norm, a point which applies to studies of writing assessment in general. In our study, writing quality was operationalized using expert ratings, which have been shown to be influenced by text length in many investigations as well as in Study 1. If the expert ratings are themselves biased, the findings of this study may also be interpreted as pre-service teachers (unlike expert raters) not showing a text length bias at all: shorter texts should indeed receive higher scores than longer ones if the quality assigned by the expert raters is held constant. We discuss these issues concerning the reference norm in more detail in the next section.

All three limitations may have affected ratings in a way that could have reinforced a negative effect of text length on text quality ratings. However, as research on the effect of text length on teachers’ judgments is scarce, we should consider the possibility that the effect is actually different from the (positive) one typically found for professional human raters. There are a number of reasons to assume differences in the rating processes that are discussed in more detail in the following section. Furthermore, we will discuss what this means in terms of the validity of the gold standard in writing assessment.

General Discussion

Combining the results of both studies, we have reason to assume that (a) text length induces judgment bias and (b) the effect of text length largely depends on the rater and/or the rating context. More specifically, the findings of the two studies can be summarized as follows: professional human raters tend to reward longer texts beyond the relationship of text length and proficiency. Compared to this standard, inexperienced EFL teachers tend to undervalue text length, meaning that they sanction longer texts especially when text quality is low. This in turn may be based on an implicit expectation deeply ingrained in the minds of many EFL teachers: that writing in a foreign language is primarily about avoiding mistakes, and that longer texts typically contain more of them than shorter ones (Keller, 2016). Pre-service teachers might be particularly prone to this view of writing, having experienced it up close as learners themselves not long ago. Both findings point toward the judgment bias assumption, but in opposite directions. These seemingly contradictory findings lead to interesting and novel research questions, both in the field of standardized writing assessment and in the field of teachers' diagnostic competence.

Only if we take professional human ratings as reliable benchmark scores can we infer that teachers’ ratings are biased (in a negative way). If we consider professional human ratings to be biased themselves (in a positive way), then the preservice teachers’ judgments might appear to be unbiased. However, it would be implausible to assume that inexperienced teachers’ judgments are less biased than those of highly trained expert raters. Even if professional human ratings are flawed themselves, they are the best possible measure of writing quality, serving as a reference even for NLP tools ( Crossley, 2020 ). It thus makes much more sense to consider the positive impact of text length on professional human ratings – at least to a degree – an appropriate heuristic. This means that teachers’ judgments would generally benefit from applying the same heuristic when assessing students’ writing, as long as it does not become a bias.

In his literature review, Crossley (2020) identifies the nature of the writing task as one of the central limitations when it comes to generalizing findings in the context of writing assessment. Written responses to standardized tests (such as the TOEFL) may produce linguistic features that differ from writing samples produced in the classroom or in other, more authentic writing environments. Moreover, linguistic differences may also occur depending on whether a writing sample is timed or untimed. Timed samples provide fewer opportunities for planning, revising, and developing ideas than untimed samples, where students are more likely to plan, reflect, and revise their writing. These differences may surface in timed writing in such a way that it is less cohesive and less complex, both lexically and syntactically.

In the present research, such differences may account for the finding that pre-service teachers undervalue text length compared to professional raters. Even though the participants in Study 2 were informed about the context in which the writing samples were collected, they may have underestimated the challenges of a timed writing task in an unfamiliar format. In the context of their own classrooms, students rarely have strict time limitations when working on complex writing tasks. If they do, in an exam consisting of an argumentative essay, for example, it is usually closer to 90 min than to 30 min (at least in the case of the German pre-service teachers who participated in this study). Thus, text length may not be a good indicator of writing quality in the classroom. On the contrary, professional raters may value length as a construct-relevant feature of writing quality in a timed task, for example as an indicator of writing fluency (see Peng et al., 2020 ).

Furthermore, text length as a criterion of quality cannot be generalized across different text types indiscriminately. The genres which are taught in EFL courses, or assessed in EFL exams, differ considerably with respect to expected length. In five-paragraph essays, for example, developing an argument requires a certain scope and attention to detail, so that text length is a highly salient feature of overall text quality. The same might not be true for e-mail writing, a genre frequently taught in EFL classrooms (Fleckenstein et al., in press). E-mails are usually expected to be concise and to the point, so that longer texts might seem prolix or rambling. Such task-specific demands need to be taken into account when interpreting our findings. The professional raters employed in our study were trained extensively in rating five-paragraph essays, which included a keen appreciation of text length as a salient criterion of text quality. The same cannot be said of classroom teachers, who encounter a much wider range of genres in their everyday teaching and might therefore be less inclined to consider text length a relevant feature. Further research should consider different writing tasks in order to investigate whether text length is particularly important to the genre of the argumentative essay.

Our results underscore the importance of considering whether or not text length should be taken into account for different contexts of writing assessment. This holds true for classroom assessment, where teachers should make their expectations regarding text length explicit, as well as future studies with professional raters. Crossley (2020) draws attention to the transdisciplinary perspective of the field as a source for complications: “The complications arise from the interdisciplinary nature of this type of research which often combines writing, linguistics, statistics, and computer science fields. With so many fields involved, it is often easy to overlook confounding factors” (p. 428). The present research shows how the answer to one and the same research question – How does text length influence human judgment? – can be very different from different perspectives and within different areas of educational research. Depending on the population (professional raters vs. pre-service teachers) and the methodology (correlational analysis vs. experimental design), our findings illustrate a broad range of possible investigations and outcomes. Thus, it is a paramount example of why interdisciplinary research in education is not only desirable but imperative. Without an interdisciplinary approach, our view of the text length effect would be uni-dimensional and fragmentary. Only the combination of different perspectives and methods can live up to the demands of a complex issue such as writing assessment, identify research gaps, and challenge research traditions. Further research is needed to investigate the determinants of the strength and the direction of the bias. It is necessary to take a closer look at the rating processes of (untrained) teachers and (trained) raters, respectively, in order to investigate similarities and differences. Research pertaining to judgment heuristics/biases can be relevant for both teacher and rater training. 
However, the individual concerns and characteristics of the two groups need to be taken into account. This could be done, for example, by directly comparing the two groups in an experimental study. Both in teacher education and in text assessment studies, we should have a vigorous discussion about how the appropriate heuristics of expert raters can find their way into the training of novice teachers and inexperienced raters, in an effort to reduce judgment bias.

Data Availability Statement

Ethics Statement

The studies involving human participants were reviewed and approved by the Ministry of Education, Science and Cultural Affairs of the German federal state Schleswig-Holstein. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author Contributions

JF analyzed the data and wrote the manuscript. TJ and JM collected the experimental data for Study 2 and supported the data analysis. SK and OK provided the dataset for Study 1. TJ, JM, SK, and OK provided feedback on the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • Attali Y. (2016). A comparison of newly-trained and experienced raters on a standardized writing assessment. Lang. Test. 33, 99–115. doi: 10.1177/0265532215582283
  • Barkaoui K. (2010). Explaining ESL essay holistic scores: a multilevel modeling approach. Lang. Test. 27, 515–535. doi: 10.1177/0265532210368717
  • Bejar I. I. (2011). A validity-based approach to quality control and assurance of automated scoring. Assess. Educ. 18, 319–341. doi: 10.1080/0969594x.2011.555329
  • Ben-Simon A., Bennett R. E. (2007). Toward more substantively meaningful automated essay scoring. J. Technol. Learn. Assess. 6 [Epub ahead of print].
  • Birkel P., Birkel C. (2002). Wie einig sind sich Lehrer bei der Aufsatzbeurteilung? Eine Replikationsstudie zur Untersuchung von Rudolf Weiss. Psychol. Erzieh. Unterr. 49, 219–224.
  • Brunswik E. (1955). Representative design and probabilistic theory in a functional psychology. Psychol. Rev. 62, 193–217. doi: 10.1037/h0047470
  • Burstein J., Tetreault J., Madnani N. (2013). “The e-rater® automated essay scoring system,” in Handbook of Automated Essay Evaluation, eds Shermis M. D., Burstein J. (Abingdon: Routledge), 77–89.
  • Chenoweth N. A., Hayes J. R. (2001). Fluency in writing: generating text in L1 and L2. Written Commun. 18, 80–98. doi: 10.1177/0741088301018001004
  • Chodorow M., Burstein J. (2004). Beyond essay length: evaluating e-rater®’s performance on TOEFL® essays. ETS Res. Rep. 2004, i–38. doi: 10.1002/j.2333-8504.2004.tb01931.x
  • Council of Europe (2001). Common European Framework of Reference for Languages: Learning, Teaching and Assessment. Cambridge: Cambridge University Press.
  • Crossley S. (2020). Linguistic features in writing quality and development: an overview. J. Writ. Res. 11, 415–443. doi: 10.17239/jowr-2020.11.03.01
  • Crossley S. A., McNamara D. S. (2009). Computational assessment of lexical differences in L1 and L2 writing. J. Second Lang. Writ. 18, 119–135. doi: 10.1016/j.jslw.2009.02.002
  • Cumming A., Kantor R., Powers D. E. (2002). Decision making while rating ESL/EFL writing tasks: a descriptive framework. Modern Lang. J. 86, 67–96. doi: 10.1111/1540-4781.00137
  • Deane P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assess. Writ. 18, 7–24. doi: 10.1016/j.asw.2012.10.002
  • Eccles J. S., Wigfield A. (2002). Motivational beliefs, values, and goals. Annu. Rev. Psychol. 53, 109–132. doi: 10.1146/annurev.psych.53.100901.135153
  • Fleckenstein J., Keller S., Krüger M., Tannenbaum R. J., Köller O. (2020). Linking TOEFL iBT® writing scores and validity evidence from a standard setting study. Assess. Writ. 43:100420. doi: 10.1016/j.asw.2019.100420
  • Fleckenstein J., Meyer J., Jansen T., Reble R., Krüger M., Raubach E., et al. (in press). “Was macht Feedback effektiv? Computerbasierte Leistungsrückmeldung anhand eines Rubrics beim Schreiben in der Fremdsprache Englisch,” in Tagungsband Bildung, Schule und Digitalisierung, eds Kaspar K., Becker-Mrotzek M., Hofhues S., König J., Schmeinck D. (Münster: Waxmann).
  • Graham S., Harris K. R., Hebert M. (2011). It is more than just the message: presentation effects in scoring writing. Focus Except. Child. 44, 1–12.
  • Guo L., Crossley S. A., McNamara D. S. (2013). Predicting human judgments of essay quality in both integrated and independent second language writing samples: a comparison study. Assess. Writ. 18, 218–238. doi: 10.1016/j.asw.2013.05.002
  • Hachmeister S. (2019). “Messung von Textqualität in Ereignisberichten,” in Schreibkompetenzen Messen, Beurteilen und Fördern (6. Aufl.), eds Kaplan I., Petersen I. (Münster: Waxmann Verlag), 79–99.
  • Hayes J. R., Hatch J. A. (1999). Issues in measuring reliability: correlation versus percentage of agreement. Written Commun. 16, 354–367. doi: 10.1177/0741088399016003004
  • Heller K. A., Perleth C. (2000). KFT 4-12+ R Kognitiver Fähigkeitstest für 4. bis 12. Klassen, Revision. Göttingen: Beltz Test.
  • Jansen T., Vögelin C., Machts N., Keller S. D., Möller J. (2019). Das Schülerinventar ASSET zur Beurteilung von Schülerarbeiten im Fach Englisch: Drei experimentelle Studien zu Effekten der Textqualität und der Schülernamen. Psychol. Erzieh. Unterr. 66, 303–315. doi: 10.2378/peu2019.art21d
  • Keller S. (2016). Measuring Writing at Secondary Level (MEWS). Eine binationale Studie. Babylonia 3, 46–48.
  • Keller S. D., Fleckenstein J., Krüger M., Köller O., Rupp A. A. (2020). English writing skills of students in upper secondary education: results from an empirical study in Switzerland and Germany. J. Second Lang. Writ. 48:100700. doi: 10.1016/j.jslw.2019.100700
  • Kobrin J. L., Deng H., Shaw E. J. (2007). Does quantity equal quality? The relationship between length of response and scores on the SAT essay. J. Appl. Test. Technol. 8, 1–15.
  • Kobrin J. L., Deng H., Shaw E. J. (2011). The association between SAT prompt characteristics, response features, and essay scores. Assess. Writ. 16, 154–169. doi: 10.1016/j.asw.2011.01.001
  • Köller O., Fleckenstein J., Meyer J., Paeske A. L., Krüger M., Rupp A. A., et al. (2019). Schreibkompetenzen im Fach Englisch in der gymnasialen Oberstufe. Z. Erziehungswiss. 22, 1281–1312. doi: 10.1007/s11618-019-00910-3
  • Köller O., Knigge M., Tesch B. (eds) (2010). Sprachliche Kompetenzen im Ländervergleich. Münster: Waxmann.
  • Marshall J. C. (1967). Composition errors and essay examination grades re-examined. Am. Educ. Res. J. 4, 375–385. doi: 10.3102/00028312004004375
  • McCutchen D., Teske P., Bankston C. (2008). “Writing and cognition: implications of the cognitive architecture for learning to write and writing to learn,” in Handbook of Research on Writing: History, Society, School, Individual, Text, ed. Bazerman C. (Milton Park: Taylor & Francis Group), 451–470.
  • McNamara D. S., Crossley S. A., Roscoe R. D., Allen L. K., Dai J. (2015). A hierarchical classification approach to automated essay scoring. Assess. Writ. 23, 35–59. doi: 10.1016/j.asw.2014.09.002
  • Muthén L. K., Muthén B. O. (1998–2012). Mplus User’s Guide. Los Angeles: Muthén & Muthén.
  • Osnes J. (1995). “Der Einfluss von Handschrift und Fehlern auf die Aufsatzbeurteilung,” in Die Fragwürdigkeit der Zensurengebung (9. Aufl.), ed. Ingenkamp K. (Göttingen: Beltz), 131–147.
  • Peng J., Wang C., Lu X. (2020). Effect of the linguistic complexity of the input text on alignment, writing fluency, and writing accuracy in the continuation task. Lang. Teach. Res. 24, 364–381. doi: 10.1177/1362168818783341
  • Perelman L. (2014). When “the state of the art” is counting words. Assess. Writ. 21, 104–111. doi: 10.1016/j.asw.2014.05.001
  • Pohlmann-Rother S., Schoreit E., Kürzinger A. (2016). Schreibkompetenzen von Erstklässlern quantitativ-empirisch erfassen – Herausforderungen und Zugewinn eines analytisch-kriterialen Vorgehens gegenüber einer holistischen Bewertung. J. Educ. Res. Online 8, 107–135.
  • Powers D. E. (2005). “Wordiness”: a selective review of its influence, and suggestions for investigating its relevance in tests requiring extended written responses. ETS Res. Rep., i–14.
  • Quinlan T., Higgins D., Wolff S. (2009). Evaluating the construct-coverage of the e-rater® scoring engine. ETS Res. Rep. 2009, i–35. doi: 10.1002/j.2333-8504.2009.tb02158.x
  • Rezaei A. R., Lovorn M. (2010). Reliability and validity of rubrics for assessment through writing. Assess. Writ. 15, 18–39. doi: 10.1016/j.asw.2010.01.003
  • Rubin D. B. (1987). The calculation of posterior distributions by data augmentation: comment: a noniterative sampling/importance resampling alternative to the data augmentation algorithm for creating a few imputations when fractions of missing information are modest: the SIR algorithm. J. Am. Stat. Assoc. 82, 543–546. doi: 10.2307/2289460
  • Ruegg R., Sugiyama Y. (2010). Do analytic measures of content predict scores assigned for content in timed writing? Melbourne Papers in Language Testing 15, 70–91.
  • Rupp A. A., Casabianca J. M., Krüger M., Keller S., Köller O. (2019). Automated essay scoring at scale: a case study in Switzerland and Germany. ETS Res. Rep. Ser. 2019, 1–23. doi: 10.1002/ets2.12249
  • Scannell D. P., Marshall J. C. (1966). The effect of selected composition errors on grades assigned to essay examinations. Am. Educ. Res. J. 3, 125–130. doi: 10.3102/00028312003002125
  • Shermis M. D. (2014). The challenges of emulating human behavior in writing assessment. Assess. Writ. 22, 91–99. doi: 10.1016/j.asw.2014.07.002
  • Silva T. (1993). Toward an understanding of the distinct nature of L2 writing: the ESL research and its implications. TESOL Q. 27, 657–677. doi: 10.2307/3587400
  • Trautwein U., Marsh H. W., Nagengast B., Lüdtke O., Nagy G., Jonkmann K. (2012). Probing for the multiplicative term in modern expectancy–value theory: a latent interaction modeling study. J. Educ. Psychol. 104, 763–777. doi: 10.1037/a0027470
  • Vögelin C., Jansen T., Keller S. D., Machts N., Möller J. (2019). The influence of lexical features on teacher judgements of ESL argumentative essays. Assess. Writ. 39, 50–63. doi: 10.1016/j.asw.2018.12.003
  • Vögelin C., Jansen T., Keller S. D., Möller J. (2018). The impact of vocabulary and spelling on judgments of ESL essays: an analysis of teacher comments. Lang. Learn. J., 1–17. doi: 10.1080/09571736.2018.1522662
  • Weigle S. C. (2003). Assessing Writing. Cambridge: Cambridge University Press.
  • Wind S. A., Stager C., Patil Y. J. (2017). Exploring the relationship between textual characteristics and rating quality in rater-mediated writing assessments: an illustration with L1 and L2 writing assessments. Assess. Writ. 34, 1–15. doi: 10.1016/j.asw.2017.08.003
  • Wolfe E. W., Song T., Jiao H. (2016). Features of difficult-to-score essays. Assess. Writ. 27, 1–10. doi: 10.1016/j.asw.2015.06.002

'Quality' an Essay by John Galsworthy

Portrait of a shoemaker as an artist (Historical/Contributor/Getty Images)


Best known today as the author of "The Forsyte Saga," John Galsworthy (1867-1933) was a popular and prolific English novelist and playwright in the early decades of the 20th century. Educated at New College, Oxford, where he specialized in marine law, Galsworthy had a lifelong interest in social and moral issues, in particular, the dire effects of poverty. He eventually chose to write instead of pursuing law and was awarded the Nobel Prize in Literature in 1932.

In the narrative essay "Quality," published in 1912, Galsworthy depicts a German craftsman's efforts to survive in an era when success is determined "by adverdisement, nod by work." He portrays shoemakers striving to stay true to their craft in a world driven by money and immediate gratification rather than by quality, true art, or craftsmanship.

"Quality" first appeared in "The Inn of Tranquility: Studies and Essays" (Heinemann, 1912). A portion of the essay appears below.

by John Galsworthy

1 I knew him from the days of my extreme youth because he made my father's boots; inhabiting with his elder brother two little shops let into one, in a small by-street — now no more, but then most fashionably placed in the West End.

2 That tenement had a certain quiet distinction; there was no sign upon its face that he made for any of the Royal Family — merely his own German name of Gessler Brothers; and in the window a few pairs of boots. I remember that it always troubled me to account for those unvarying boots in the window, for he made only what was ordered, reaching nothing down, and it seemed so inconceivable that what he made could ever have failed to fit. Had he bought them to put there? That, too, seemed inconceivable. He would never have tolerated in his house leather on which he had not worked himself. Besides, they were too beautiful — the pair of pumps, so inexpressibly slim, the patent leathers with cloth tops, making water come into one's mouth, the tall brown riding boots with marvelous sooty glow, as if, though new, they had been worn a hundred years. Those pairs could only have been made by one who saw before him the Soul of Boot — so truly were they prototypes incarnating the very spirit of all foot-gear. These thoughts, of course, came to me later, though even when I was promoted to him, at the age of perhaps fourteen, some inkling haunted me of the dignity of himself and brother. For to make boots — such boots as he made — seemed to me then, and still seems to me, mysterious and wonderful.

3 I remember well my shy remark, one day while stretching out to him my youthful foot:

4 "Isn't it awfully hard to do, Mr. Gessler?"

5 And his answer, given with a sudden smile from out of the sardonic redness of his beard: "Id is an Ardt!"

6 Himself, he was a little as if made from leather, with his yellow crinkly face, and crinkly reddish hair and beard; and neat folds slanting down his cheeks to the corners of his mouth, and his guttural and one-toned voice; for leather is a sardonic substance, and stiff and slow of purpose. And that was the character of his face, save that his eyes, which were gray-blue, had in them the simple gravity of one secretly possessed by the Ideal. His elder brother was so very like him — though watery, paler in every way, with a great industry — that sometimes in early days I was not quite sure of him until the interview was over. Then I knew that it was he, if the words, "I will ask my brudder," had not been spoken; and, that, if they had, it was his elder brother.

7 When one grew old and wild and ran up bills, one somehow never ran them up with Gessler Brothers. It would not have seemed becoming to go in there and stretch out one's foot to that blue iron-spectacled glance, owing him for more than — say — two pairs, just the comfortable reassurance that one was still his client.

8 For it was not possible to go to him very often — his boots lasted terribly, having something beyond the temporary — some, as it were, essence of boot stitched into them.

9 One went in, not as into most shops, in the mood of: "Please serve me, and let me go!" but restfully, as one enters a church; and, sitting on the single wooden chair, waited — for there was never anybody there. Soon, over the top edge of that sort of well — rather dark, and smelling soothingly of leather — which formed the shop, there would be seen his face, or that of his elder brother, peering down. A guttural sound, and the tip-tap of bast slippers beating the narrow wooden stairs, and he would stand before one without coat, a little bent, in leather apron, with sleeves turned back, blinking — as if awakened from some dream of boots, or like an owl surprised in daylight and annoyed at this interruption.

10 And I would say: "How do you do, Mr. Gessler? Could you make me a pair of Russia leather boots?"

11 Without a word he would leave me, retiring whence he came, or into the other portion of the shop, and I would continue to rest in the wooden chair, inhaling the incense of his trade. Soon he would come back, holding in his thin, veined hand a piece of gold-brown leather. With eyes fixed on it, he would remark: "What a beaudiful biece!" When I, too, had admired it, he would speak again. "When do you wand dem?" And I would answer: "Oh! As soon as you conveniently can." And he would say: "To-morrow ford-nighd?" Or if he were his elder brother: "I will ask my brudder!"

12 Then I would murmur: "Thank you! Good-morning, Mr. Gessler." "Goot-morning!" he would reply, still looking at the leather in his hand. And as I moved to the door, I would hear the tip-tap of his bast slippers restoring him, up the stairs, to his dream of boots. But if it were some new kind of foot-gear that he had not yet made me, then indeed he would observe ceremony — divesting me of my boot and holding it long in his hand, looking at it with eyes at once critical and loving, as if recalling the glow with which he had created it, and rebuking the way in which one had disorganized this masterpiece. Then, placing my foot on a piece of paper, he would two or three times tickle the outer edges with a pencil and pass his nervous fingers over my toes, feeling himself into the heart of my requirements.




NPR defends its journalism after senior editor says it has lost the public's trust


David Folkenflik


NPR is defending its journalism and integrity after a senior editor wrote an essay accusing it of losing the public's trust. (Saul Loeb/AFP via Getty Images)


NPR's top news executive defended its journalism and its commitment to reflecting a diverse array of views on Tuesday after a senior NPR editor wrote a broad critique of how the network has covered some of the most important stories of the age.

"An open-minded spirit no longer exists within NPR, and now, predictably, we don't have an audience that reflects America," writes Uri Berliner.

A strategic emphasis on diversity and inclusion on the basis of race, ethnicity and sexual orientation, promoted by NPR's former CEO, John Lansing, has fed "the absence of viewpoint diversity," Berliner writes.

NPR's chief news executive, Edith Chapin, wrote in a memo to staff Tuesday afternoon that she and the news leadership team strongly reject Berliner's assessment.

"We're proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories," she wrote. "We believe that inclusion — among our staff, with our sourcing, and in our overall coverage — is critical to telling the nuanced stories of this country and our world."

NPR names tech executive Katherine Maher to lead in turbulent era


She added, "None of our work is above scrutiny or critique. We must have vigorous discussions in the newsroom about how we serve the public as a whole."

A spokesperson for NPR said Chapin, who also serves as the network's chief content officer, would have no further comment.

Praised by NPR's critics

Berliner is a senior editor on NPR's Business Desk. (Disclosure: I, too, am part of the Business Desk, and Berliner has edited many of my past stories. He did not see any version of this article or participate in its preparation before it was posted publicly.)

Berliner's essay, titled "I've Been at NPR for 25 years. Here's How We Lost America's Trust," was published by The Free Press, a website that has welcomed journalists who have concluded that mainstream news outlets have become reflexively liberal.

Berliner writes that as a Subaru-driving, Sarah Lawrence College graduate who "was raised by a lesbian peace activist mother," he fits the mold of a loyal NPR fan.

Yet Berliner says NPR's news coverage has fallen short on some of the most controversial stories of recent years, from the question of whether former President Donald Trump colluded with Russia in the 2016 election, to the origins of the virus that causes COVID-19, to the significance and provenance of emails leaked from a laptop owned by Hunter Biden weeks before the 2020 election. In addition, he blasted NPR's coverage of the Israel-Hamas conflict.

On each of these stories, Berliner asserts, NPR has suffered from groupthink due to too little diversity of viewpoints in the newsroom.

The essay ricocheted Tuesday around conservative media, with some labeling Berliner a whistleblower. Others picked it up on social media, including Elon Musk, who has lambasted NPR for leaving his social media site, X. (Musk emailed another NPR reporter a link to Berliner's article with a gibe that the reporter was a "quisling" — a World War II reference to someone who collaborates with the enemy.)

When asked for further comment late Tuesday, Berliner declined, saying the essay spoke for itself.

The arguments he raises — and counters — have percolated across U.S. newsrooms in recent years. The #MeToo sexual harassment scandals of 2016 and 2017 forced newsrooms to listen to and heed more junior colleagues. The social justice movement prompted by the killing of George Floyd in 2020 inspired a reckoning in many places. Newsroom leaders often appeared to stand on shaky ground.

Leaders at many newsrooms, including top editors at The New York Times and the Los Angeles Times, lost their jobs. Legendary Washington Post Executive Editor Martin Baron wrote in his memoir that he feared his bonds with the staff were "frayed beyond repair," especially over the degree of self-expression his journalists expected to exert on social media, before he decided to step down in early 2021.

Since then, Baron and others — including leaders of some of these newsrooms — have suggested that the pendulum has swung too far.

Legendary editor Marty Baron describes his 'Collision of Power' with Trump and Bezos


New York Times publisher A.G. Sulzberger warned last year against journalists embracing a stance of what he calls "one-side-ism": "where journalists are demonstrating that they're on the side of the righteous."

"I really think that that can create blind spots and echo chambers," he said.

Internal arguments at The Times over the strength of its reporting on accusations that Hamas engaged in sexual assaults as part of a strategy for its Oct. 7 attack on Israel erupted publicly. The paper conducted an investigation to determine the source of a leak over a planned episode of the paper's podcast The Daily on the subject, which months later has not been released. The newsroom guild accused the paper of "targeted interrogation" of journalists of Middle Eastern descent.

Heated pushback in NPR's newsroom

Given Berliner's account of private conversations, several NPR journalists question whether they can now trust him with unguarded assessments about stories in real time. Others express frustration that he had not sought out comment in advance of publication. Berliner acknowledged to me that for this story, he did not seek NPR's approval to publish the piece, nor did he give the network advance notice.

Some of Berliner's NPR colleagues are responding heatedly. Fernando Alfonso, a senior supervising editor for digital news, wrote that he wholeheartedly rejected Berliner's critique of the coverage of the Israel-Hamas conflict, for which NPR's journalists, like their peers, periodically put themselves at risk.

Alfonso also took issue with Berliner's concern over the focus on diversity at NPR.

"As a person of color who has often worked in newsrooms with little to no people who look like me, the efforts NPR has made to diversify its workforce and its sources are unique and appropriate given the news industry's long-standing lack of diversity," Alfonso says. "These efforts should be celebrated and not denigrated as Uri has done."

After this story was first published, Berliner contested Alfonso's characterization, saying his criticism of NPR is about the lack of diversity of viewpoints, not its diversity itself.

"I never criticized NPR's priority of achieving a more diverse workforce in terms of race, ethnicity and sexual orientation. I have not 'denigrated' NPR's newsroom diversity goals," Berliner said. "That's wrong."

Questions of diversity

Under former CEO John Lansing, NPR made increasing diversity, both of its staff and its audience, its "North Star" mission. Berliner says in the essay that NPR failed to consider broader diversity of viewpoint, noting, "In D.C., where NPR is headquartered and many of us live, I found 87 registered Democrats working in editorial positions and zero Republicans."

Berliner cited audience estimates that suggested a concurrent falloff in listening by Republicans. (The number of people listening to NPR broadcasts and terrestrial radio broadly has declined since the start of the pandemic.)

Former NPR vice president for news and ombudsman Jeffrey Dvorkin tweeted, "I know Uri. He's not wrong."

Others questioned Berliner's logic. "This probably gets causality somewhat backward," tweeted Semafor Washington editor Jordan Weissmann. "I'd guess that a lot of NPR listeners who voted for [Mitt] Romney have changed how they identify politically."

Similarly, Nieman Lab founder Joshua Benton suggested the rise of Trump alienated many NPR-appreciating Republicans from the GOP.

In recent years, NPR has greatly enhanced the percentage of people of color in its workforce and its executive ranks. Four out of 10 staffers are people of color; nearly half of NPR's leadership team identifies as Black, Asian or Latino.

"The philosophy is: Do you want to serve all of America and make sure it sounds like all of America, or not?" Lansing, who stepped down last month, says in response to Berliner's piece. "I'd welcome the argument against that."

"On radio, we were really lagging in our representation of an audience that makes us look like what America looks like today," Lansing says. The U.S. looks and sounds a lot different than it did in 1971, when NPR's first show was broadcast, Lansing says.

A network spokesperson says new NPR CEO Katherine Maher supports Chapin and her response to Berliner's critique.

The spokesperson says that Maher "believes that it's a healthy thing for a public service newsroom to engage in rigorous consideration of the needs of our audiences, including where we serve our mission well and where we can serve it better."

Disclosure: This story was reported and written by NPR Media Correspondent David Folkenflik and edited by Deputy Business Editor Emily Kopp and Managing Editor Gerry Holmes. Under NPR's protocol for reporting on itself, no NPR corporate official or news executive reviewed this story before it was posted publicly.

ORIGINAL RESEARCH article

Is a Long Essay Always a Good Essay? The Effect of Text Length on Writing Assessment

Johanna Fleckenstein*

  • 1 Department of Educational Research and Educational Psychology, Leibniz Institute for Science and Mathematics Education, Kiel, Germany
  • 2 Institute for Psychology of Learning and Instruction, Kiel University, Kiel, Germany
  • 3 School of Education, Institute of Secondary Education, University of Applied Sciences and Arts Northwestern Switzerland, Brugg, Switzerland

The assessment of text quality is a transdisciplinary issue concerning the research areas of educational assessment, language technology, and classroom instruction. Text length has been found to strongly influence human judgment of text quality. The question of whether text length is a construct-relevant aspect of writing competence or a source of judgment bias has been discussed controversially. This paper used both a correlational and an experimental approach to investigate this question. Secondary analyses were performed on a large-scale dataset with highly trained raters, showing an effect of text length beyond language proficiency. Furthermore, an experimental study found that pre-service teachers tended to undervalue text length when compared to professional ratings. The findings are discussed with respect to the role of training and context in writing assessment.

Introduction

Judgments of students’ writing are influenced by a variety of text characteristics, including text length. The relationship between such (superficial) aspects of written responses and the assessment of text quality has been a controversial issue in different areas of educational research. Both in the area of educational measurement and of language technology, text length has been shown to strongly influence text ratings by trained human raters as well as computer algorithms used to score texts automatically ( Chodorow and Burstein, 2004 ; Powers, 2005 ; Kobrin et al., 2011 ; Guo et al., 2013 ). In the context of classroom language learning and instruction, studies have found effects of text length on teachers’ diagnostic judgments (e.g., grades; Marshall, 1967 ; Osnes, 1995 ; Birkel and Birkel, 2002 ; Pohlmann-Rother et al., 2016 ). In all these contexts, the underlying question is a similar one: Should text length be considered when judging students’ writing – or is it a source of judgment bias? The objective of this paper is to investigate to what degree text length is a construct-relevant aspect of writing competence, or to what extent it erroneously influences judgments.

Powers (2005) recommends both correlational and experimental approaches for establishing the relevance of response length in the evaluation of written responses: “the former for ruling out response length (and various other factors) as causes of response quality (by virtue of their lack of relationship) and the latter for establishing more definitive causal links” (p. 7). This paper draws on data from both recommended approaches: A correlational analysis of a large-scale dataset [MEWS; funded by the German Research Foundation (Grant Nr. CO 1513/12-1) and the Swiss National Science Foundation (Grant Nr. 100019L_162675)] based on expert text quality ratings on the one hand, and an experimental study with untrained pre-service teachers on the other. It thereby combines the measurement perspective with the classroom perspective. In the past, (language) assessment research has been conducted within different disciplines that rarely acknowledged each other. While some assessment issues are relevant for standardized testing in large-scale contexts only, others pertain to research on teaching and classroom instruction as well. Even though their assessments may serve different functions (e.g., formative vs. summative or low vs. high stakes), teachers need to be able to assess students’ performance accurately, just as professional raters do in standardized tests. Thus, combining these different disciplinary angles and looking at the issue of text length from a transdisciplinary perspective can be an advantage for all the disciplines involved. Overall, this paper aims to present a comprehensive picture of the role of essay length in human and automated essay scoring, which ultimately amounts to a discussion of the elusive “gold standard” in writing assessment.

Theoretical Background

Writing assessment is about identifying and evaluating features of a written response that indicate writing quality. Overall, previous research has demonstrated clear and consistent associations between linguistic features on the one hand, and writing quality and development on the other. In a recent literature review, Crossley (2020) showed that higher rated essays typically include more sophisticated lexical items, more complex syntactic features, and greater cohesion. Developing writers also show movements toward using more sophisticated words and more complex syntactic structures. The studies presented by Crossley (2020) provide strong indications that linguistic features in texts can afford important insights into writing quality and development. Whereas linguistic features are generally considered to be construct-relevant when it comes to assessing writing quality, there are other textual features whose relevance to the construct is debatable. The validity of the assessment of students’ competences is negatively affected by construct-irrelevant factors that influence judgments ( Rezaei and Lovorn, 2010 ). This holds true for professional raters in the context of large-scale standardized writing assessment as well as for teacher judgments in classroom writing assessment (both formative or summative). Assigning scores to students’ written responses is a challenging task as different text-inherent factors influence the accuracy of the raters’ or teachers’ judgments (e.g., handwriting, spelling: Graham et al., 2011 ; length, lexical diversity: Wolfe et al., 2016 ). Depending on the construct to be assessed, the influence of these aspects can be considered judgment bias. One of the most relevant and well-researched text-inherent factors influencing human judgments is text length. 
Crossley (2020) points out that his review does “not consider text length as a linguistic feature while acknowledging that text length is likely the strongest predictor of writing development and quality.” Multiple studies have found a positive relationship between text length and human ratings of text quality, even when controlling for language proficiency ( Chenoweth and Hayes, 2001 ; McCutchen et al., 2008 ; McNamara et al., 2015 ). It is still unclear, however, whether the relation between text length and human scores reflects a true relation between text length and text quality (appropriate heuristic assumption) or whether it stems from a bias in human judgments (judgment bias assumption). The former suggests that text length is a construct-relevant factor and that a certain length is needed to effectively develop a point of view on the issue presented in the essay prompt, and this is one of the aspects taken into account in the scoring ( Kobrin et al., 2007 ; Quinlan et al., 2009 ). The latter claims that text length is either completely or partly irrelevant to the construct of writing proficiency and that the strong effect it has on human judgment can be considered a bias ( Powers, 2005 ). In the context of large-scale writing assessment, prompt-based essay tasks are often used to measure students’ writing competence ( Guo et al., 2013 ). These essays are typically scored by professionally trained raters. These human ratings have been shown to be strongly correlated with essay length, even if this criterion is not represented in the assessment rubric ( Chodorow and Burstein, 2004 ; Kobrin et al., 2011 ). In a review of selected studies addressing the relation between length and quality of constructed responses, Powers (2005) showed that most studies found correlations within the range of r = 0.50 to r = 0.70. He also criticized the SAT essay for encouraging wordiness, as longer essays tend to score higher. Kobrin et al. (2007) found the number of words to explain 39% of the variance in the SAT essay score. The authors argue that essay length is one of the aspects taken into account in the scoring as it takes a certain length to develop an argument. Similarly, Deane (2013) argues in favor of regarding writing fluency a construct-relevant factor (also see Shermis, 2014 ; McNamara et al., 2015 ). In an analytical rating of text quality, Hachmeister (2019) showed that longer texts typically contain more cohesive devices, which has a positive impact on ratings of text quality. In the context of writing assessment in primary school, Pohlmann-Rother et al. (2016) found strong correlations between text length and holistic ratings of text quality ( r = 0.62) as well as the semantic-pragmatic analytical dimension ( r = 0.62). However, they found no meaningful relationship between text length and language mechanics (i.e., grammatical and orthographical correctness; r = 0.09).

Text length may be considered especially construct-relevant when it comes to writing in a foreign language. Given the constraints of limited language knowledge, writing in a foreign language may be hampered by the need to focus on language rather than content ( Weigle, 2003 ). Silva (1993) , in a review of differences between writing in a first and second language, found that writing in a second language tends to be “more constrained, more difficult, and less effective” (p. 668) than writing in a first language. The necessity of devoting cognitive resources to issues of language may mean that not as much attention can be given to higher-order issues such as content or organization (for details of this debate, see Weigle, 2003 , p. 36 f.). In that context, the ability to write longer texts may legitimately be considered indicative of higher competence in a foreign language, making text length a viable factor of assessment. For example, Ruegg and Sugiyama (2010) showed that the main predictors of the content score in English foreign language essays were first, organization and second, essay length.

The relevance of this issue has further increased as systems of automated essay scoring (AES) have become more widely used in writing assessment. These systems offer a promising way to complement human ratings in judging text quality ( Deane, 2013 ). However, as the automated scoring algorithms are typically modeled after human ratings, they are also affected by human judgment bias. Moreover, it has been criticized that, at this point, automated scoring systems mainly count words when computing writing scores ( Perelman, 2014 ). Chodorow and Burstein (2004) , for example, showed that 53% of the variance in human ratings can be explained by automated scoring models that use only the number of words and the number of words squared as predictors. Ben-Simon and Bennett (2007) provided evidence from National Assessment of Educational Progress (NAEP) writing test data that standard, statistically created e-rater models weighed essay length even more strongly than human raters (also see Perelman, 2014 ).
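A length-only scoring model of the kind Chodorow and Burstein (2004) describe (score predicted from the number of words and the number of words squared) can be illustrated on synthetic data. This is a minimal sketch under stated assumptions: the numbers below are invented for illustration and this is not the e-rater model or the ETS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic essays whose human scores are loosely driven by length
# (made-up relationship, for illustration only).
n_words = rng.integers(50, 700, size=500)
true_score = 1.0 + 0.012 * n_words - 1.1e-5 * n_words**2
human_score = np.clip(true_score + rng.normal(0, 0.4, size=500), 0, 5)

# Length-only model: score ~ b0 + b1*words + b2*words^2
X = np.column_stack([np.ones_like(n_words, dtype=float), n_words, n_words**2])
beta, *_ = np.linalg.lstsq(X, human_score, rcond=None)
pred = X @ beta

# Proportion of variance in the "human" scores captured by length alone
r2 = 1 - np.sum((human_score - pred) ** 2) / np.sum((human_score - human_score.mean()) ** 2)
print(f"R^2 of the length-only model: {r2:.2f}")
```

On data generated this way, the two length terms alone already capture a large share of the score variance, which is the pattern the studies above report for real ratings.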

Bejar (2011) suggests that a possible tendency to reward longer texts could be minimized through the training of raters with responses at each score level that vary in length. However, Barkaoui (2010) and Attali (2016) both compared the holistic scoring of experienced vs. novice raters and – contrary to expectations – found that the correlation between essay length and scores was slightly stronger for the experienced group. Thus, the question of whether professional experience and training counteract or even reinforce the tendency to overvalue text length in scoring remains open.

Compared to the amount of research on the role of essay length in human and automated scoring in large-scale high-stakes contexts, little attention has been paid to the relation of text length and quality in formative or summative assessment by teachers. This is surprising considering the relevance of the issue for teachers’ professional competence: In order to assess the quality of students’ writing, teachers must either configure various aspects of text quality in a holistic assessment or hold them apart in an analytic assessment. Thus, they need to have a concept of writing quality appropriate for the task and they need to be aware of the construct-relevant and -irrelevant criteria (cf. the lens model; Brunswik, 1955 ). To our knowledge, only two studies have investigated the effect of text length on holistic teacher judgments, both of which found that longer texts receive higher grades. Birkel and Birkel (2002) found significant main effects of text length (long, medium, short) and spelling errors (many, few) on holistic teacher judgments. Osnes (1995) reported effects of handwriting quality and text length on grades.

Whereas research on the text length effect on classroom writing assessment is scarce, a considerable body of research has investigated how other text characteristics influence teachers’ assessment of student texts. It is well-demonstrated, for example, that pre-service and experienced teachers assign lower grades to essays containing mechanical errors ( Scannell and Marshall, 1966 ; Marshall, 1967 ; Cumming et al., 2002 ; Rezaei and Lovorn, 2010 ). Scannell and Marshall (1966) found that pre-service teachers’ judgments were affected by errors in punctuation, grammar and spelling, even though they were explicitly instructed to grade on content alone. More recently, Rezaei and Lovorn (2010) showed that high quality essays containing more structural, mechanical, spelling, and grammatical errors were assigned lower scores than texts without errors even in criteria relating solely to content. Teachers failed to distinguish between formal errors and the independent quality of content in a student essay. Similarly, Vögelin et al. (2018 , 2019) found that lexical features and spelling influenced not only holistic teacher judgments of students’ writing in English as a second or foreign language, but also their assessment of other analytical criteria (e.g., grammar). Even though these studies do not consider text length as a potential source of bias, they do show that construct-irrelevant aspects influence judgments of teachers.

This Research

Against this research background, it remains essential to investigate whether the relation between essay length and text quality represents a true relationship or a bias on the part of the rater or teacher ( Wolfe et al., 2016 ). First, findings of correlational studies can give us an indication of the effect of text length on human ratings above and beyond language proficiency variables. Second, going beyond correlational findings, there is a need for experimental research that examines essay responses on the same topic differing only in length in order to establish causal relationships ( Kobrin et al., 2007 ). The present research brings together both of these approaches.

This paper comprises two studies investigating the role of essay length in foreign language assessment using an interdisciplinary perspective including the fields of foreign language education, computer linguistics, educational research, and psychometrics. Study 1 presents a secondary analysis of a large-scale dataset with N = 2,722 upper secondary school students in Germany and Switzerland who wrote essays in response to “independent writing” prompts of the internet-based Test of English as a Foreign Language (TOEFL iBT). It investigates the question of how several indicators of students’ English proficiency (English grade, reading and listening comprehension, self-concept) are related to the length of their essays (word count). It further investigates whether or not essay length accounts for variance in text quality scores (expert ratings) even when controlling for English language proficiency and other variables (e.g., country, gender, cognitive ability). A weak relationship of proficiency and length as well as a large proportion of variance in text quality explained by length beyond proficiency would be in favor of the judgment bias assumption.

Study 2 focused on possible essay length bias in an experimental setting, investigating the effect of essay length on text quality ratings when there was (per design) no relation between essay length and text quality score. Essays from Study 1 were rated by N = 84 untrained pre-service teachers, using the same TOEFL iBT rubric as the expert raters. As text quality scores were held constant within all essay length conditions, any significant effect of essay length would indicate a judgment bias. Both studies are described in more detail in the following sections.

This study investigates the question of judgment bias assumption vs. appropriate heuristic assumption in a large-scale context with professional human raters. A weak relationship between text length and language proficiency would be indicative of the former assumption, whereas a strong relationship would support the latter. Moreover, if the impact of text length on human ratings was significant and substantial beyond language proficiency, this might indicate a bias on the part of the rater rather than an appropriate heuristic. Thus, Study 1 aims to answer the following research questions:

(1) How is essay length related to language proficiency?

(2) Does text length still account for variance in text quality when English language proficiency is statistically controlled for?

Materials and Methods

Sample and procedure.

The sample consisted of N = 2,722 upper secondary students (11th grade; 58.1% female) in Germany ( n = 894) and Switzerland ( n = 1828) from the interdisciplinary and international research project Measuring English Writing at Secondary Level (MEWS; for an overview see Keller et al., 2020 ). The target population were students attending the academic track of general education grammar schools (ISCED level 3a) in the German federal state Schleswig-Holstein as well as in seven Swiss cantons (Aargau, Basel Stadt, Basel Land, Luzern, St. Gallen, Schwyz, Zurich). In a repeated-measures design, students were assessed at the beginning (T1: August/September 2016; M age = 17.34; SD age = 0.87) and at the end of the school year (T2: May/June 2017; M age = 18.04; SD age = 0.87). The students completed computer-based tests on writing, reading and listening skills, as well as general cognitive ability. Furthermore, they completed a questionnaire measuring background variables and individual characteristics.

Writing prompt

All students answered two independent and two integrated essay writing prompts of the internet-based Test of English as a Foreign Language (TOEFL iBT®) that is administered by the Educational Testing Service (ETS) in Princeton. The task instruction was as follows: “In the writing task below you will find a question on a controversial topic. Answer the question in an essay in English. List arguments and counter-arguments, explain them and finally make it clear what your own opinion on the topic is. Your text will be judged on different qualities. These include the presentation of your ideas, the organization of the essay and the linguistic quality and accuracy. You have 30 min to do this. Try to use all of this time as much as possible.” This task instruction was followed by the essay prompt. The maximum writing time was 30 min according to the official TOEFL iBT® assessment procedure. The essays were scored by trained human raters on the TOEFL 6-point rating scale at ETS. In addition to two human ratings per essay, ETS also provided scores from their automated essay scoring system (e-rater®; Burstein et al., 2013 ). For a more detailed description of the scoring procedure and the writing prompts see Rupp et al. (2019) and Keller et al. (2020) . For the purpose of this study, we selected the student responses to the TOEFL iBT independent writing prompt “Teachers,” which showed good measurement qualities (see Rupp et al., 2019 ). Taken together, data collections at T1 and T2 yielded N = 2,389 valid written responses to the following prompt: “A teacher’s ability to relate well with students is more important than excellent knowledge of the subject being taught.”

Text quality and length

The rating of text quality via human and machine scoring was done by ETS. All essays were scored by highly experienced human raters on the operational holistic TOEFL iBT rubric from 0 to 5 ( Chodorow and Burstein, 2004 ). Essays were scored high if they were well-organized and individual ideas were well-developed, if they used specific examples and support to express learners’ opinion on the subject, and if the English language was used accurately to express learners’ ideas. Essays were assigned a score of 0 if they were written in another language, were generally incomprehensible, or if no text was entered.

Each essay received independent ratings by two trained human raters. If the two ratings deviated by no more than 1 point, the mean of the two scores was used; if they deviated by 2 or more points, a third rater (adjudicator) was consulted. Inter-rater agreement, as measured by quadratic weighted kappa (QWK), was satisfactory for the prompt “Teachers” at both time points (QWK = 0.67; Hayes and Hatch, 1999 ; see Rupp et al., 2019 for further details). The mean text quality score was M = 3.35 ( SD = 0.72).
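The score-combination rule and the agreement statistic used here can be sketched as follows. This is a minimal illustration, not ETS's implementation; the `adjudicator` callback is an assumption of the sketch.

```python
import numpy as np

def combine_ratings(r1: int, r2: int, adjudicator=None):
    """Combine two holistic ratings: identical or adjacent scores are averaged;
    a gap of 2+ points is referred to a third rater (a caller-supplied function
    here, which is an assumption of this sketch)."""
    if abs(r1 - r2) <= 1:
        return (r1 + r2) / 2
    return adjudicator(r1, r2)

def quadratic_weighted_kappa(a, b, n_levels=6):
    """Quadratic weighted kappa for two raters on a 0..n_levels-1 scale."""
    a, b = np.asarray(a), np.asarray(b)
    # Observed joint distribution of the two raters' scores
    observed = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    # Expected joint distribution under independence (outer product of marginals)
    expected = np.outer(np.bincount(a, minlength=n_levels),
                        np.bincount(b, minlength=n_levels)) / len(a) ** 2
    # Quadratic disagreement weights
    idx = np.arange(n_levels)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1 - (w * observed).sum() / (w * expected).sum()

print(combine_ratings(3, 4))  # adjacent scores are averaged
r1 = [3, 4, 2, 5, 3, 4]       # toy ratings, for illustration only
r2 = [3, 3, 2, 4, 3, 5]
print(round(quadratic_weighted_kappa(r1, r2), 2))
```

The quadratic weighting penalizes large disagreements more heavily than adjacent ones, which matches the adjudication rule: adjacent scores are treated as agreement, wider gaps are not.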

Word count was used to measure the length of the essays. The number of words was calculated by the e-Rater scoring engine. The mean word count was M = 311.19 ( SD = 81.91) and the number of words ranged from 41 to 727. We used the number of words rather than other measures of text length (e.g., number of letters) as it is the measure which is most frequently used in the literature: 9 out of 10 studies in the research review by Powers (2005) used word count as the criterion (also see Kobrin et al., 2007 , 2011 ; Crossley and McNamara, 2009 ; Barkaoui, 2010 ; Attali, 2016 ; Wolfe et al., 2016 ; Wind et al., 2017 ). This approach ensures that our analyses can be compared with previous research.

English language proficiency and control variables

Proficiency was operationalized by a combination of different variables: English grade, English writing self-concept, reading and listening comprehension in English. The listening and reading skills were measured with a subset of items from the German National Assessment ( Köller et al., 2010 ). The tasks require a detailed understanding of long, complex reading and listening texts including idiomatic expressions and different linguistic registers. The tests consisted of a total of 133 items for reading, and 118 items for listening, administered in a multi-matrix design. Each student was assessed with two rotated 15-min blocks per domain. Item parameters were estimated using longitudinal multidimensional two-parameter item response models in Mplus version 8 ( Muthén and Muthén, 1998–2012 ). Student abilities were estimated using 15 plausible values (PVs) per person. The PV reliabilities were 0.92 (T1) and 0.76 (T2) for reading comprehension, and 0.85 (T1) and 0.72 (T2) for listening comprehension. For a more detailed description of the scaling procedure see Köller et al. (2019) .

General cognitive ability was assessed at T1 using the subtests on figural reasoning (N2; 25 items) and on verbal reasoning (V3; 20 items) of the Cognitive Ability Test (KFT 4–12 + R; Heller and Perleth, 2000 ). For each scale 15 PVs were drawn in a two-dimensional item response model. For the purpose of this study, the two PVs were combined to 15 overall PV scores with a reliability of 0.86.

The English writing self-concept was measured with a scale consisting of five items (e.g., “I have always been good at writing in English”; Eccles and Wigfield, 2002 ; Trautwein et al., 2012 ; α = 0.90). Furthermore, country (Germany = 0/Switzerland = 1), gender (male = 0/female = 1) and time of measurement (T1 = 0; T2 = 1) were used as control variables.

Statistical Analyses

All analyses were conducted in Mplus version 8 ( Muthén and Muthén, 1998–2012 ) based on the 15 PV data sets, using robust maximum likelihood estimation to account for the hierarchical data structure (i.e., students clustered in classes; type = complex). Full-information maximum likelihood was used to estimate missing values in background variables. Due to the use of 15 PVs, all analyses were run 15 times and then averaged (see Rubin, 1987 ).
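The pooling step (run each analysis once per plausible-value dataset, then combine following Rubin, 1987) can be sketched as follows. The `pool_rubin` helper and the numbers are illustrative assumptions, not code or values from the MEWS analyses.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m analysis results (one per plausible-value dataset) with Rubin's rules.
    estimates: per-dataset point estimates; variances: their squared standard errors."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate: simple average
    w = variances.mean()                # within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    total_var = w + (1 + 1 / m) * b     # Rubin's total variance
    return q_bar, np.sqrt(total_var)

# Illustrative numbers only (five PV datasets instead of fifteen):
est = [0.35, 0.37, 0.33, 0.36, 0.34]
se2 = [0.004, 0.004, 0.005, 0.004, 0.004]
q, se = pool_rubin(est, se2)
print(f"pooled estimate = {q:.3f}, pooled SE = {se:.3f}")
```

The pooled standard error is larger than any single-run standard error because the between-dataset spread of the estimates is added to the within-run uncertainty.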

Confirmatory factor analysis was used to specify a latent proficiency factor. All four proficiency variables showed substantial loadings in a single-factor measurement model (English grade: 0.67; writing self-concept: 0.73; reading comprehension: 0.42; listening comprehension: 0.51). As reading and listening comprehension were measured within the same assessment framework and could thus be expected to share mutual variance beyond the latent factor, their residuals were allowed to correlate. The analyses yielded an acceptable model fit: χ²(1) = 3.65, p = 0.06; CFI = 0.998, RMSEA = 0.031, SRMR = 0.006.

The relationship between text length and other independent variables was explored with correlational analysis. Multiple regression analysis with latent and manifest predictors was used to investigate the relations between text length, proficiency, and text quality.

The correlation of the latent proficiency factor and text length (word count) was moderately positive: r = 0.36, p < 0.01. This indicates that more proficient students tended to write longer texts. Significant correlations with other variables showed that students tended to write longer texts at T1 ( r = -0.08, p < 0.01), that girls wrote longer texts than boys ( r = 0.11, p < 0.01), and that higher cognitive ability was associated with longer texts ( r = 0.07, p < 0.01). However, all of these correlations were very weak. The association of country and text length was not statistically significant ( r = -0.06, p = 0.10).

Table 1 presents the results of the multiple linear regression of text quality on text length, proficiency and control variables. The analysis showed that proficiency and the covariates alone explained 38% of the variance in text quality ratings, with the latent proficiency factor being by far the strongest predictor (Model 1). The effect of text length on the text quality score was equally strong when including the control variables but not proficiency in the model (Model 2). When both the latent proficiency factor and text length were entered into the regression model (Model 3), the coefficient of text length was reduced but remained significant and substantial, explaining an additional 24% of the variance (ΔR² = 0.24 from Model 1 to Model 3). Thus, text length had an incremental effect on text quality beyond a latent English language proficiency factor.


Table 1. Linear regression of text quality on text length, English language proficiency, and control variables: standardized regression coefficients (β) and standard errors (SE).
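The model comparison behind Table 1 amounts to comparing the R² of nested OLS regressions. Below is a minimal sketch on synthetic data; the variable structure and effect sizes are assumptions for illustration (loosely mirroring r = 0.36 between proficiency and length), not the MEWS data or the latent-variable models actually estimated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Made-up data mimicking the design of the analysis
proficiency = rng.normal(size=n)
# length correlates moderately with proficiency (cf. r = 0.36 in Study 1)
length = 0.36 * proficiency + np.sqrt(1 - 0.36**2) * rng.normal(size=n)
# quality depends on both, plus noise (coefficients are invented)
quality = 0.5 * proficiency + 0.5 * length + rng.normal(scale=0.7, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_m1 = r_squared(proficiency[:, None], quality)                    # proficiency only
r2_m3 = r_squared(np.column_stack([proficiency, length]), quality)  # + text length
print(f"R2 Model 1 = {r2_m1:.2f}, Model 3 = {r2_m3:.2f}, delta = {r2_m3 - r2_m1:.2f}")
```

Even with a moderate proficiency-length correlation, adding length to the model raises R² noticeably, which is the incremental-variance pattern the paper reports.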

Study 1 approached the issue of text length by operationalizing the construct of English language proficiency and investigating how it affects the relationship of text length and text quality. This can give us an idea of how text length may influence human judgments even though it is not considered relevant to the construct of writing competence. These secondary analyses of an existing large-scale dataset yielded two central findings: First, text length was only moderately associated with language proficiency. Second, text length strongly predicted text quality ratings beyond proficiency. Thus, it had an impact on the assigned score that was not captured by the construct of proficiency. These findings could be interpreted in favor of the judgment bias assumption, as text length may include both construct-irrelevant and construct-relevant information.

The strengths of this study were the large sample of essays on the same topic and the vast amount of background information that was collected on the student writers (proficiency and control variables). However, there were three major limitations: First, the proficiency construct captured different aspects of English language competence (reading and listening comprehension, writing self-concept, grade), but that operationalization was not comprehensive. Thus, the additional variance explained by text length may still have been due to other aspects that could not be included in the analyses as they were not in the data. Further research with a similar design (primary or secondary analyses) should use additional variables such as grammar/vocabulary knowledge or writing performance in the first language.

The second limitation was the correlational design, which does not allow a causal interpretation of the effect of text length on text quality ratings. Drawing causal inferences would require an experimental environment in which, for example, text quality is kept constant across texts of different lengths. Study 2 was therefore conducted in exactly such a design.

Last but not least, the question of transferability of these findings remains open. Going beyond standardized large-scale assessment, interdisciplinary research requires us to look at the issue from different perspectives. Findings pertaining to professional raters may not be transferable to teachers, who are required to assess students’ writing in a classroom context. Thus, Study 2 drew on a sample of pre-service English teachers and took a closer look at how their ratings were impacted by text length.

Research Questions

In Study 2, we tested the judgment bias assumption against the appropriate heuristic assumption for pre-service teachers. As recommended by Powers (2005), we conducted an experimental study in addition to the correlational design used in Study 1. As text quality scores were held constant within all essay length conditions, any significant effect of essay length would favor the judgment bias assumption. The objective of this study was to answer the following research questions:

(1) How do ratings of pre-service teachers correspond to expert ratings?

(2) Is there an effect of text length on the text quality ratings of pre-service English teachers when there is (by design) no relation between text length and text quality (main effect)?

(3) Does the effect differ for different levels of writing performance (interaction effect)?

Participants and Procedure

The experiment was conducted with N = 84 pre-service teachers ( M Age = 23 years; 80% female), currently enrolled in a higher education teacher training program at a university in Northern Germany. They had no prior experience rating this type of learner text. The experiment was administered with the Student Inventory ASSET ( Jansen et al., 2019 ), an online tool for assessing students’ texts within an experimental environment. Participants were asked to rate essays from the MEWS project (see Study 1) on the holistic rubric used by the human raters at ETS (0–5; https://www.ets.org/s/toefl/pdf/toefl_writing_rubrics.pdf ). Each participant rated 9 of the 45 essays in randomized order, representing all possible combinations of text quality and text length. Before the rating process began, participants were given information about the essay-writing context of the MEWS study (school type; school year; students’ average age; instructional text), and they were presented with the TOEFL writing rubric as the basis for their judgments. They had 15 min to get an overview of all nine texts before they were asked to rate each text on the rubric. Throughout the rating process, they were allowed to highlight parts of the texts.
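One plausible way to realize this assignment is to draw one essay per quality × length cell and randomize the order; the paper does not spell out the exact algorithm, so `assign_booklet` and its sampling details are illustrative assumptions:

```python
import random

def assign_booklet(pool, seed=None):
    """Draw one essay per (quality, length) cell and randomize the order.

    `pool` maps (quality, length) -> list of candidate essay ids
    (five per cell in the study); the returned list contains the
    nine essays one participant rates, covering every combination.
    """
    rng = random.Random(seed)
    booklet = [rng.choice(candidates) for candidates in pool.values()]
    rng.shuffle(booklet)
    return booklet
```

Drawing with a per-participant seed would make each participant's booklet reproducible while still varying across participants.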

The operationalization of text quality and text length as categorical variables, as well as the procedure for selecting an appropriate essay sample for the study, are explained in the following.

Text Length and Text Quality

The essays used in the experiment were selected on the basis of the following procedure, which treated both text quality and text length as independent variables. The first independent variable (overall text quality) was operationalized via scores assigned by two trained human raters from ETS on a holistic six-point scale (0–5; see Study 1 and Appendix A). In order to measure the variable as precisely as possible, we only included essays for which both human raters had assigned the same score, resulting in a sample of N = 1,333 essays. The corpus included only a few texts (10.4%) with the extreme scores of 0, 1, and 5; these were therefore excluded from the essay pool. As a result, three gradations of text quality were considered in the current study: lower quality (score 2), medium quality (score 3), and higher quality (score 4). We thus realized a 3 × 3 factorial within-subjects design. The second independent variable, text length, was measured via the word count of the essays, calculated by the e-rater® scoring engine. As with text quality, this variable was subdivided into three levels: rather short texts (s), medium-length texts (m), and long texts (l). All available texts were analyzed with regard to their word count distribution, and severe outliers were excluded. The remaining N = 1,308 essays were split into three even groups: the lower third (≤261 words), the middle third (262–318 words), and the upper third (≥319 words). Table 2 shows the distribution of essays for the resulting combinations of text length and text score.


Table 2. Distribution of essays in the sample contingent on text quality and text length groupings.
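The tertile split of the word-count distribution can be sketched as follows; the function and the example counts are illustrative, not the actual MEWS data:

```python
def split_into_thirds(word_counts):
    """Label essays as rather short (s), medium (m), or long (l) by word count.

    Cut-offs are the tertile boundaries of the supplied distribution
    (severe outliers are assumed to have been removed beforehand).
    """
    ranked = sorted(word_counts)
    n = len(ranked)
    lower_cut = ranked[n // 3 - 1]   # upper bound of the shortest third
    upper_cut = ranked[2 * n // 3]   # lower bound of the longest third
    labels = []
    for wc in word_counts:
        if wc <= lower_cut:
            labels.append("s")
        elif wc >= upper_cut:
            labels.append("l")
        else:
            labels.append("m")
    return labels
```

With ties at a boundary, the groups would not be perfectly even; the study reports an even three-way split after outlier removal.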

Selection of Essays

For each text length group (s, m, and l), the mean word count across all three score groups was calculated. Then, the score group (2, 3, or 4) with the smallest number of essays in a text length group was taken as the reference (e.g., n = 22 short texts of high quality or n = 15 long texts of low quality). Within each text length group, the five essays closest in word count to the mean of the reference group were chosen for the study. This was possible with no or only minor deviations in most cases. In the case of multiple possible matches, the essay was selected at random. This selection procedure resulted in a total sample of 45 essays, with five essays for each combination of score group (2, 3, 4) and length group (s, m, l).
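A minimal sketch of this matching step, under the assumption that closeness is measured by absolute distance to the reference mean (function name and data are hypothetical):

```python
import random

def select_closest(essays, reference_mean, k=5, seed=0):
    """Select the k essays whose word counts are closest to a reference mean.

    `essays` is a list of (essay_id, word_count) pairs; ties are broken
    at random, as in the study's selection procedure.
    """
    rng = random.Random(seed)
    pool = essays[:]
    rng.shuffle(pool)  # random order so the stable sort breaks ties arbitrarily
    pool.sort(key=lambda e: abs(e[1] - reference_mean))
    return pool[:k]
```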

Results

A repeated-measures ANOVA with two independent variables (text quality and text length) was conducted to test the two main effects and their interaction on participants’ ratings (see Table 3 ). Essay ratings were treated as a within-subject factor, accounting for the dependencies of ratings nested within raters. The main effect of text quality on participants’ ratings showed significant differences between the three text quality conditions ( low , medium , high ) that corresponded to the expert ratings; F (2, 82) = 209.04, p < 0.001, d = 4.52. There was also a significant main effect of the three essay length conditions ( short , medium , long ); F (2, 82) = 9.14, p < 0.001, d = 0.94. Contrary to expectations, essay length was negatively related to participants’ ratings, meaning that shorter texts received higher scores than longer texts. The interaction of text quality and text length also had a significant effect; F (4, 80) = 3.93, p < 0.01, d = 0.89. Post-hoc tests revealed that ratings of low-quality texts were particularly negatively affected by essay length (see Figure 1 ).


Table 3. Participants’ ratings of text quality: means (M) and standard deviations (SD).


Figure 1. Visualization of the interaction between text length and text quality.
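As a descriptive precursor to a repeated-measures ANOVA of this kind, one can compute the mean rating per cell of the 3 × 3 design; this sketch uses invented ratings rather than the study data:

```python
from collections import defaultdict

def cell_means(ratings):
    """Mean rating per (quality, length) cell of a 3 x 3 within-subjects design.

    `ratings` is an iterable of (quality, length, rating) tuples,
    e.g. ("low", "s", 2.0). Returns {(quality, length): mean_rating}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for quality, length, rating in ratings:
        sums[(quality, length)] += rating
        counts[(quality, length)] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}
```

An interaction like the one in Figure 1 would show up here as length-related differences in cell means that are larger within the low-quality row than within the others.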

Discussion

The experiment conducted in Study 2 found a very strong significant main effect of text quality, indicating a high correspondence between pre-service teachers’ ratings and the expert ratings of text quality. The main effect of text length was also significant, but was qualified by a significant text quality × text length interaction, indicating that low-quality texts were rated even more negatively the longer they were. This negative effect of text length was contrary to expectations: The pre-service teachers generally tended to assign higher scores to shorter texts. Thus, they seemed to value shorter texts over longer texts. However, this was mainly true for texts of low quality.

These findings were surprising given the research background, which suggests that longer texts are typically associated with higher text quality scores, particularly in the context of second language writing. It is therefore all the more important to discuss the limitations of the design before interpreting the results: First, the sample consisted of relatively inexperienced pre-service teachers. Further research is needed to show whether these findings are transferable to in-service teachers with substantial experience in judging students’ writing. Moreover, further studies could use assessment rubrics that teachers are more familiar with, such as the CEFR ( Council of Europe, 2001 ; also see Fleckenstein et al., 2020 ). Second, the selection process for the essays may have reduced the ecological validity of the experiment. As there were only a few long texts of low quality and few short texts of high quality in the actual sample (see Table 2 ), the selection of texts in the experimental design was, to some degree, artificial. This could also have influenced the frame of reference for the pre-service teachers, as the distribution of the nine texts differed from what one would find naturally in an EFL classroom. Third, the most important limitation of this study is the question of the reference norm, a point which applies to studies of writing assessment in general. In our study, writing quality was operationalized via expert ratings, which have been shown to be influenced by text length in many investigations as well as in Study 1. If the expert ratings are themselves biased, the findings of this study may also be interpreted as pre-service teachers (unlike expert raters) not showing a text length bias at all: shorter texts should then indeed receive higher scores than longer ones when the quality assigned by the expert raters is held constant. We discuss these issues concerning the reference norm in more detail in the next section.

All three limitations may have affected ratings in a way that could have reinforced a negative effect of text length on text quality ratings. However, as research on the effect of text length on teachers’ judgments is scarce, we should consider the possibility that the effect is actually different from the (positive) one typically found for professional human raters. There are a number of reasons to assume differences in the rating processes that are discussed in more detail in the following section. Furthermore, we will discuss what this means in terms of the validity of the gold standard in writing assessment.

General Discussion

Combining the results of both studies, we have reason to assume that (a) text length induces judgment bias and (b) the effect of text length largely depends on the rater and/or the rating context. More specifically, the findings of the two studies can be summarized as follows: Professional human raters tend to reward longer texts beyond the relationship of text length and proficiency. Compared to this standard, inexperienced EFL teachers tend to undervalue text length, meaning that they sanction longer texts especially when text quality is low. This in turn may be based on an implicit expectation deeply ingrained in the minds of many EFL teachers: that writing in a foreign language is primarily about avoiding mistakes, and that longer texts typically contain more of them than shorter ones ( Keller, 2016 ). Pre-service teachers might be particularly prone to this view of writing, having experienced it as learners up close and not too long ago. Both findings point toward the judgment bias assumption, but in opposite directions. These seemingly contradictory findings lead to interesting and novel research questions, both in the field of standardized writing assessment and in the field of teachers’ diagnostic competence.

Only if we take professional human ratings as reliable benchmark scores can we infer that teachers’ ratings are biased (in a negative way). If we consider professional human ratings to be biased themselves (in a positive way), then the preservice teachers’ judgments might appear to be unbiased. However, it would be implausible to assume that inexperienced teachers’ judgments are less biased than those of highly trained expert raters. Even if professional human ratings are flawed themselves, they are the best possible measure of writing quality, serving as a reference even for NLP tools ( Crossley, 2020 ). It thus makes much more sense to consider the positive impact of text length on professional human ratings – at least to a degree – an appropriate heuristic. This means that teachers’ judgments would generally benefit from applying the same heuristic when assessing students’ writing, as long as it does not become a bias.

In his literature review, Crossley (2020) identifies the nature of the writing task as among the central limitations when it comes to generalizing findings in the context of writing assessment. Written responses to standardized tests (such as the TOEFL) may exhibit linguistic features that differ from writing samples produced in the classroom or in other, more authentic writing environments. Moreover, linguistic differences may also occur depending on whether a writing sample is timed or untimed. Timed samples provide fewer opportunities for planning, revising, and developing ideas than untimed samples, where students are more likely to plan, reflect on, and revise their writing. These differences may surface in timed writing being less cohesive and less complex, both lexically and syntactically.

In the present research, such differences may account for the finding that pre-service teachers undervalue text length compared to professional raters. Even though the participants in Study 2 were informed about the context in which the writing samples were collected, they may have underestimated the challenges of a timed writing task in an unfamiliar format. In the context of their own classrooms, students rarely have strict time limitations when working on complex writing tasks. If they do, in an exam consisting of an argumentative essay, for example, it is usually closer to 90 min than to 30 min (at least in the case of the German pre-service teachers who participated in this study). Thus, text length may not be a good indicator of writing quality in the classroom. On the contrary, professional raters may value length as a construct-relevant feature of writing quality in a timed task, for example as an indicator of writing fluency (see Peng et al., 2020 ).

Furthermore, text length as a criterion of quality cannot be generalized across different text types indiscriminately. The genres that are taught in EFL courses, or assessed in EFL exams, differ considerably with respect to expected length. In five-paragraph essays, for example, developing an argument requires a certain scope and attention to detail, so that text length is a highly salient feature of overall text quality. The same might not be true for e-mail writing, a genre frequently taught in EFL classrooms ( Fleckenstein et al., in press ). E-mails are usually expected to be concise and to the point, so that longer texts might seem prolix or rambling. Such task-specific demands need to be taken into account when interpreting our findings. The professional raters employed in our study were trained extensively for rating five-paragraph essays, which included a keen appreciation of text length as a salient criterion of text quality. The same might not be said of classroom teachers, who encounter a much wider range of genres in their everyday teaching and might therefore be less inclined to consider text length a relevant feature. Further research should consider different writing tasks in order to investigate whether text length is particularly important to the genre of the argumentative essay.

Our results underscore the importance of considering whether or not text length should be taken into account in different contexts of writing assessment. This holds true for classroom assessment, where teachers should make their expectations regarding text length explicit, as well as for future studies with professional raters. Crossley (2020) draws attention to the transdisciplinary perspective of the field as a source of complications: “The complications arise from the interdisciplinary nature of this type of research which often combines writing, linguistics, statistics, and computer science fields. With so many fields involved, it is often easy to overlook confounding factors” (p. 428). The present research shows how the answer to one and the same research question (How does text length influence human judgment?) can be very different from different perspectives and within different areas of educational research. Depending on the population (professional raters vs. pre-service teachers) and the methodology (correlational analysis vs. experimental design), our findings illustrate a broad range of possible investigations and outcomes. This is a prime example of why interdisciplinary research in education is not only desirable but imperative. Without an interdisciplinary approach, our view of the text length effect would be one-dimensional and fragmentary. Only the combination of different perspectives and methods can live up to the demands of a complex issue such as writing assessment, identify research gaps, and challenge research traditions.

Further research is needed to investigate the determinants of the strength and the direction of the bias. It is necessary to take a closer look at the rating processes of (untrained) teachers and (trained) raters, respectively, in order to investigate similarities and differences. Research pertaining to judgment heuristics/biases can be relevant for both teacher and rater training.
However, the individual concerns and characteristics of the two groups need to be taken into account. This could be done, for example, by directly comparing the two groups in an experimental study. Both in teacher education and in text assessment studies, we should have a vigorous discussion about how the appropriate heuristics of expert raters can find their way into the training of novice teachers and inexperienced raters in an effort to reduce judgment bias.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.

Ethics Statement

The studies involving human participants were reviewed and approved by the Ministry of Education, Science and Cultural Affairs of the German federal state Schleswig-Holstein. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author Contributions

JF analyzed the data and wrote the manuscript. TJ and JM collected the experimental data for Study 2 and supported the data analysis. SK and OK provided the dataset for Study 1. TJ, JM, SK, and OK provided feedback on the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Attali, Y. (2016). A comparison of newly-trained and experienced raters on a standardized writing assessment. Lang. Test. 33, 99–115. doi: 10.1177/0265532215582283


Barkaoui, K. (2010). Explaining ESL essay holistic scores: a multilevel modeling approach. Lang. Test. 27, 515–535. doi: 10.1177/0265532210368717

Bejar, I. I. (2011). A validity-based approach to quality control and assurance of automated scoring. Assess. Educ. 18, 319–341. doi: 10.1080/0969594x.2011.555329

Ben-Simon, A., and Bennett, R. E. (2007). Toward more substantively meaningful automated essay scoring. J. Technol. Learn. Assess. 6, [Epub ahead of print].


Birkel, P., and Birkel, C. (2002). Wie einig sind sich Lehrer bei der Aufsatzbeurteilung? Eine Replikationsstudie zur Untersuchung von Rudolf Weiss. Psychol. Erzieh. Unterr. 49, 219–224.

Brunswik, E. (1955). Representative design and probabilistic theory in a functional psychology. Psychol. Rev. 62, 193–217. doi: 10.1037/h0047470


Burstein, J., Tetreault, J., and Madnani, N. (2013). “The E-rater ® automated essay scoring system,” in Handbook of Automated Essay Evaluation , eds M. D. Shermis and J. Burstein (Abingdon: Routledge), 77–89.

Chenoweth, N. A., and Hayes, J. R. (2001). Fluency in writing: generating text in L1 and L2. Written Commun. 18, 80–98. doi: 10.1177/0741088301018001004

Chodorow, M., and Burstein, J. (2004). Beyond essay length: evaluating e-rater ® ’s performance on toefl ® essays. ETS Res. Rep. 2004, i–38. doi: 10.1002/j.2333-8504.2004.tb01931.x

Council of Europe (2001). Common European Framework of Reference for Languages: Learning, Teaching and Assessment. Cambridge, MA: Cambridge University Press.

Crossley, S. (2020). Linguistic features in writing quality and development: an overview. J. Writ. Res. 11, 415–443. doi: 10.17239/jowr-2020.11.03.01

Crossley, S. A., and McNamara, D. S. (2009). Computational assessment of lexical differences in L1 and L2 writing. J. Second. Lang. Writ. 18, 119–135. doi: 10.1016/j.jslw.2009.02.002

Cumming, A., Kantor, R., and Powers, D. E. (2002). Decision making while rating ESL/EFL writing tasks: a descriptive framework. Modern Lang. J. 86, 67–96. doi: 10.1111/1540-4781.00137

Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assess. Writ. 18, 7–24. doi: 10.1016/j.asw.2012.10.002

Eccles, J. S., and Wigfield, A. (2002). Motivational beliefs, values, and goals. Annu. Rev.Psychol. 53, 109–132. doi: 10.1146/annurev.psych.53.100901.135153

Fleckenstein, J., Keller, S., Krüger, M., Tannenbaum, R. J., and Köller, O. (2020). Linking TOEFL iBT ® writing scores and validity evidence from a standard setting study. Assess. Writ. 43:100420. doi: 10.1016/j.asw.2019.100420

Fleckenstein, J., Meyer, J., Jansen, T., Reble, R., Krüger, M., Raubach, E., et al. (in press). “Was macht Feedback effektiv? Computerbasierte Leistungsrückmeldung anhand eines Rubrics beim Schreiben in der Fremdsprache Englisch,” in Tagungsband Bildung, Schule und Digitalisierung , eds K. Kaspar, M. Becker-Mrotzek, S. Hofhues, J. König, and D. Schmeinck (Münster: Waxmann).

Graham, S., Harris, K. R., and Hebert, M. (2011). It is more than just the message: presentation effects in scoring writing. Focus Except. Child. 44, 1–12.

Guo, L., Crossley, S. A., and McNamara, D. S. (2013). Predicting human judgments of essay quality in both integrated and independent second language writing samples: a comparison study. Assess. Writ. 18, 218–238. doi: 10.1016/j.asw.2013.05.002

Hachmeister, S. (2019). “Messung von Textqualität in Ereignisberichten,” in Schreibkompetenzen Messen, Beurteilen und Fördern (6. Aufl) , eds I. Kaplan and I. Petersen (Münster: Waxmann Verlag), 79–99.

Hayes, J. R., and Hatch, J. A. (1999). Issues in measuring reliability: Correlation versus percentage of agreement. Writt. Commun. 16, 354–367. doi: 10.1177/0741088399016003004

Heller, K. A., and Perleth, C. (2000). KFT 4-12+ R Kognitiver Fähigkeitstest für 4. Bis 12. Klassen, Revision. Göttingen: Beltz Test.

Jansen, T., Vögelin, C., Machts, N., Keller, S. D., and Möller, J. (2019). Das Schülerinventar ASSET zur Beurteilung von Schülerarbeiten im Fach Englisch: Drei experimentelle Studien zu Effekten der Textqualität und der Schülernamen. Psychologie in Erziehung und Unterricht 66, 303–315. doi: 10.2378/peu2019.art21d

Keller, S. (2016). Measuring Writing at Secondary Level (MEWS). Eine binationale Studie. Babylonia 3, 46–48.

Keller, S. D., Fleckenstein, J., Krüger, M., Köller, O., and Rupp, A. A. (2020). English writing skills of students in upper secondary education: results from an empirical study in Switzerland and Germany. J. Second Lang. Writ. 48:100700. doi: 10.1016/j.jslw.2019.100700

Kobrin, J. L., Deng, H., and Shaw, E. J. (2007). Does quantity equal quality? The relationship between length of response and scores on the SAT essay. J. Appl. Test. Technol. 8, 1–15. doi: 10.1097/nne.0b013e318276dee0

Kobrin, J. L., Deng, H., and Shaw, E. J. (2011). The association between SAT prompt characteristics, response features, and essay scores. Assess. Writ. 16, 154–169. doi: 10.1016/j.asw.2011.01.001

Köller, O., Fleckenstein, J., Meyer, J., Paeske, A. L., Krüger, M., Rupp, A. A., et al. (2019). Schreibkompetenzen im Fach Englisch in der gymnasialen Oberstufe. Z. Erziehungswiss. 22, 1281–1312. doi: 10.1007/s11618-019-00910-3

Köller, O., Knigge, M., and Tesch, B. (eds.) (2010). Sprachliche Kompetenzen im Ländervergleich. Germany: Waxmann.

Marshall, J. C. (1967). Composition errors and essay examination grades re-examined. Am. Educ. Res. J. 4, 375–385. doi: 10.3102/00028312004004375

McCutchen, D., Teske, P., and Bankston, C. (2008). “Writing and cognition: implications of the cognitive architecture for learning to write and writing to learn,” in Handbook of research on Writing: History, Society, School, Individual, Text , ed. C. Bazerman (Milton Park: Taylor & Francis Group), 451–470.

McNamara, D. S., Crossley, S. A., Roscoe, R. D., Allen, L. K., and Dai, J. (2015). A hierarchical classification approach to automated essay scoring. Assess. Writ. 23, 35–59. doi: 10.1016/j.asw.2014.09.002

Muthén, L. K., and Muthén, B. O. (1998–2012). Mplus user’s Guide. Los Angeles: Muthén & Muthén.

Osnes, J. (1995). “Der Einfluss von Handschrift und Fehlern auf die Aufsatzbeurteilung,” in Die Fragwürdigkeit der Zensurengebung (9. Aufl., S) , ed. K. Ingenkamp (Göttingen: Beltz), 131–147.

Peng, J., Wang, C., and Lu, X. (2020). Effect of the linguistic complexity of the input text on alignment, writing fluency, and writing accuracy in the continuation task. Langu. Teach. Res. 24, 364–381. doi: 10.1177/1362168818783341

Perelman, L. (2014). When “the state of the art” is counting words. Assess. Writ. 21, 104–111. doi: 10.1016/j.asw.2014.05.001

Pohlmann-Rother, S., Schoreit, E., and Kürzinger, A. (2016). Schreibkompetenzen von Erstklässlern quantitativ-empirisch erfassen-Herausforderungen und Zugewinn eines analytisch-kriterialen Vorgehens gegenüber einer holistischen Bewertung. J. Educ. Res. Online 8, 107–135.

Powers, D. E. (2005). “Wordiness”: a selective review of its influence, and suggestions for investigating its relevance in tests requiring extended written responses. ETS Res. Rep. i–14.

Quinlan, T., Higgins, D., and Wolff, S. (2009). Evaluating the construct-coverage of the e-rater ® scoring engine. ETS Res. Rep. 2009, i–35. doi: 10.1002/j.2333-8504.2009.tb02158.x

Rezaei, A. R., and Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assess. Writ. 15, 18–39. doi: 10.1016/j.asw.2010.01.003

Rubin, D. B. (1987). The calculation of posterior distributions by data augmentation: comment: a noniterative sampling/importance resampling alternative to the data augmentation algorithm for creating a few imputations when fractions of missing information are modest: the SIR algorithm. J. Am. Stat. Assoc. 82, 543–546. doi: 10.2307/2289460

Ruegg, R., and Sugiyama, Y. (2010). Do analytic measures of content predict scores assigned for content in timed writing? Melbourne Papers in Language Testing 15, 70–91.

Rupp, A. A., Casabianca, J. M., Krüger, M., Keller, S., and Köller, O. (2019). Automated essay scoring at scale: a case study in Switzerland and Germany. ETS Res. Rep. Ser. 2019, 1–23. doi: 10.1002/ets2.12249

Scannell, D. P., and Marshall, J. C. (1966). The effect of selected composition errors on grades assigned to essay examinations. Am. Educ. Res. J. 3, 125–130. doi: 10.3102/00028312003002125

Shermis, M. D. (2014). The challenges of emulating human behavior in writing assessment. Assess. Writ. 22, 91–99. doi: 10.1016/j.asw.2014.07.002

Silva, T. (1993). Toward an understanding of the distinct nature of L2 writing: the ESL research and its implications. TESOL Q. 27, 657–77. doi: 10.2307/3587400

Trautwein, U., Marsh, H. W., Nagengast, B., Lüdtke, O., Nagy, G., and Jonkmann, K. (2012). Probing for the multiplicative term in modern expectancy–value theory: a latent interaction modeling study. J. Educ. Psychol. 104, 763–777. doi: 10.1037/a0027470

Vögelin, C., Jansen, T., Keller, S. D., Machts, N., and Möller, J. (2019). The influence of lexical features on teacher judgements of ESL argumentative essays. Assess. Writ. 39, 50–63. doi: 10.1016/j.asw.2018.12.003

Vögelin, C., Jansen, T., Keller, S. D., and Möller, J. (2018). The impact of vocabulary and spelling on judgments of ESL essays: an analysis of teacher comments. Lang. Learn. J. 1–17. doi: 10.1080/09571736.2018.1522662

Weigle, S. C. (2003). Assessing Writing. Cambridge: Cambridge University Press.

Wind, S. A., Stager, C., and Patil, Y. J. (2017). Exploring the relationship between textual characteristics and rating quality in rater-mediated writing assessments: an illustration with L1 and L2 writing assessments. Assess. Writ. 34, 1–15. doi: 10.1016/j.asw.2017.08.003

Wolfe, E. W., Song, T., and Jiao, H. (2016). Features of difficult-to-score essays. Assess. Writ. 27, 1–10. doi: 10.1016/j.asw.2015.06.002

Keywords : text length, writing assessment, text quality, judgment bias, English as a foreign language, human raters, pre-service teachers

Citation: Fleckenstein J, Meyer J, Jansen T, Keller S and Köller O (2020) Is a Long Essay Always a Good Essay? The Effect of Text Length on Writing Assessment. Front. Psychol. 11:562462. doi: 10.3389/fpsyg.2020.562462

Received: 15 May 2020; Accepted: 31 August 2020; Published: 25 September 2020.


Copyright © 2020 Fleckenstein, Meyer, Jansen, Keller and Köller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Johanna Fleckenstein, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Frequently asked questions

What is an essay?

An essay is a focused piece of writing that explains, argues, describes, or narrates.

In high school, you may have to write many different types of essays to develop your writing skills.

Academic essays at college level are usually argumentative : you develop a clear thesis about your topic and make a case for your position using evidence, analysis, and interpretation.

Frequently asked questions: Writing an essay

For a stronger conclusion paragraph, avoid including:

  • Important evidence or analysis that wasn’t mentioned in the main body
  • Generic concluding phrases (e.g. “In conclusion…”)
  • Weak statements that undermine your argument (e.g. “There are good points on both sides of this issue.”)

Your conclusion should leave the reader with a strong, decisive impression of your work.

Your essay’s conclusion should contain:

  • A rephrased version of your overall thesis
  • A brief review of the key points you made in the main body
  • An indication of why your argument matters

The conclusion may also reflect on the broader implications of your argument, showing how your ideas could be applied to other contexts or debates.

The conclusion paragraph of an essay is usually shorter than the introduction . As a rule, it shouldn’t take up more than 10–15% of the text.

The “hook” is the first sentence of your essay introduction . It should lead the reader into your essay, giving a sense of why it’s interesting.

To write a good hook, avoid overly broad statements or long, dense sentences. Try to start with something clear, concise and catchy that will spark your reader’s curiosity.

Your essay introduction should include three main things, in this order:

  • An opening hook to catch the reader’s attention.
  • Relevant background information that the reader needs to know.
  • A thesis statement that presents your main point or argument.

The length of each part depends on the length and complexity of your essay .

Let’s say you’re writing a five-paragraph essay about the environmental impacts of dietary choices. Here are three examples of topic sentences you could use for each of the three body paragraphs :

  • Research has shown that the meat industry has severe environmental impacts.
  • However, many plant-based foods are also produced in environmentally damaging ways.
  • It’s important to consider not only what type of diet we eat, but where our food comes from and how it is produced.

Each of these sentences expresses one main idea – by listing them in order, we can see the overall structure of the essay at a glance. Each paragraph will expand on the topic sentence with relevant detail, evidence, and arguments.

The topic sentence usually comes at the very start of the paragraph.

However, sometimes you might start with a transition sentence to summarize what was discussed in previous paragraphs, followed by the topic sentence that expresses the focus of the current paragraph.

Topic sentences help keep your writing focused and guide the reader through your argument.

In an essay or paper, each paragraph should focus on a single idea. By stating the main idea in the topic sentence, you clarify what the paragraph is about for both yourself and your reader.

A topic sentence is a sentence that expresses the main point of a paragraph. Everything else in the paragraph should relate to the topic sentence.

The thesis statement is essential in any academic essay or research paper for two main reasons:

  • It gives your writing direction and focus.
  • It gives the reader a concise summary of your main point.

Without a clear thesis statement, an essay can end up rambling and unfocused, leaving your reader unsure of exactly what you want to say.

The thesis statement should be placed at the end of your essay introduction.

Follow these four steps to come up with a thesis statement:

  • Ask a question about your topic.
  • Write your initial answer.
  • Develop your answer by including reasons.
  • Refine your answer, adding more detail and nuance.

A thesis statement is a sentence that sums up the central point of your paper or essay. Everything else you write should relate to this key idea.

An essay isn’t just a loose collection of facts and ideas. Instead, it should be centered on an overarching argument (summarized in your thesis statement) that every part of the essay relates to.

The way you structure your essay is crucial to presenting your argument coherently. A well-structured essay helps your reader follow the logic of your ideas and understand your overall point.

The structure of an essay is divided into an introduction that presents your topic and thesis statement, a body containing your in-depth analysis and arguments, and a conclusion wrapping up your ideas.

The structure of the body is flexible, but you should always spend some time thinking about how you can organize your essay to best serve your ideas.

The vast majority of essays written at university are some sort of argumentative essay. Almost all academic writing involves building up an argument, though other types of essay might be assigned in composition classes.

Essays can present arguments about all kinds of different topics. For example:

  • In a literary analysis essay, you might make an argument for a specific interpretation of a text
  • In a history essay, you might present an argument for the importance of a particular event
  • In a politics essay, you might argue for the validity of a certain political theory

At high school and in composition classes at university, you’ll often be told to write a specific type of essay, but you might also just be given prompts.

Look for keywords in these prompts that suggest a certain approach: the word “explain” suggests you should write an expository essay, while the word “describe” implies a descriptive essay. An argumentative essay might be prompted with the word “assess” or “argue.”

In rhetorical analysis, a claim is something the author wants the audience to believe. A support is the evidence or appeal they use to convince the reader to believe the claim. A warrant is the (often implicit) assumption that links the support with the claim.

Logos appeals to the audience’s reason, building up logical arguments. Ethos appeals to the speaker’s status or authority, making the audience more likely to trust them. Pathos appeals to the emotions, trying to make the audience feel angry or sympathetic, for example.

Collectively, these three appeals are sometimes called the rhetorical triangle. They are central to rhetorical analysis, though a piece of rhetoric might not necessarily use all of them.

The term “text” in a rhetorical analysis essay refers to whatever object you’re analyzing. It’s frequently a piece of writing or a speech, but it doesn’t have to be. For example, you could also treat an advertisement or political cartoon as a text.

The goal of a rhetorical analysis is to explain the effect a piece of writing or oratory has on its audience, how successful it is, and the devices and appeals it uses to achieve its goals.

Unlike a standard argumentative essay, it’s less about taking a position on the arguments presented, and more about exploring how they are constructed.

You should try to follow your outline as you write your essay. However, if your ideas change or it becomes clear that your structure could be better, it’s okay to depart from your essay outline. Just make sure you know why you’re doing so.

If you have to hand in your essay outline, you may be given specific guidelines stating whether you have to use full sentences. If you’re not sure, ask your supervisor.

When writing an essay outline for yourself, the choice is yours. Some students find it helpful to write out their ideas in full sentences, while others prefer to summarize them in short phrases.

You will sometimes be asked to hand in an essay outline before you start writing your essay. Your supervisor wants to see that you have a clear idea of your structure so that writing will go smoothly.

Even when you do not have to hand it in, writing an essay outline is an important part of the writing process. It’s a good idea to write one (as informally as you like) to clarify your structure for yourself whenever you are working on an essay.

Comparisons in essays are generally structured in one of two ways:

  • The alternating method, where you compare your subjects side by side according to one specific aspect at a time.
  • The block method, where you cover each subject separately in its entirety.

It’s also possible to combine both methods, for example by writing a full paragraph on each of your topics and then a final paragraph contrasting the two according to a specific metric.

Your subjects might be very different or quite similar, but it’s important that there be meaningful grounds for comparison. You can probably describe many differences between a cat and a bicycle, but there isn’t really any connection between them to justify the comparison.

You’ll have to write a thesis statement explaining the central point you want to make in your essay, so be sure to know in advance what connects your subjects and makes them worth comparing.

Some essay prompts include the keywords “compare” and/or “contrast.” In these cases, an essay structured around comparing and contrasting is the appropriate response.

Comparing and contrasting is also a useful approach in all kinds of academic writing: you might compare different studies in a literature review, weigh up different arguments in an argumentative essay, or consider different theoretical approaches in a theoretical framework.

The key difference is that a narrative essay is designed to tell a complete story, while a descriptive essay is meant to convey an intense description of a particular place, object, or concept.

Narrative and descriptive essays both allow you to write more personally and creatively than other kinds of essays, and similar writing skills can apply to both.

If you’re not given a specific prompt for your descriptive essay , think about places and objects you know well, that you can think of interesting ways to describe, or that have strong personal significance for you.

The best kind of object for a descriptive essay is one specific enough that you can describe its particular features in detail—don’t choose something too vague or general.

If you’re not given much guidance on what your narrative essay should be about, consider the context and scope of the assignment. What kind of story is relevant, interesting, and possible to tell within the word count?

The best kind of story for a narrative essay is one you can use to reflect on a particular theme or lesson, or that takes a surprising turn somewhere along the way.

Don’t worry too much if your topic seems unoriginal. The point of a narrative essay is how you tell the story and the point you make with it, not the subject of the story itself.

Narrative essays are usually assigned as writing exercises at high school or in university composition classes. They may also form part of a university application.

When you are prompted to tell a story about your own life or experiences, a narrative essay is usually the right response.

The majority of the essays written at university are some sort of argumentative essay. Unless otherwise specified, you can assume that the goal of any essay you’re asked to write is argumentative: to convince the reader of your position using evidence and reasoning.

In composition classes you might be given assignments that specifically test your ability to write an argumentative essay. Look out for prompts including instructions like “argue,” “assess,” or “discuss” to see if this is the goal.

At college level, you must properly cite your sources in all essays, research papers, and other academic texts (except exams and in-class exercises).

Add a citation whenever you quote, paraphrase, or summarize information or ideas from a source. You should also give full source details in a bibliography or reference list at the end of your text.

The exact format of your citations depends on which citation style you are instructed to use. The most common styles are APA, MLA, and Chicago.

An argumentative essay tends to be a longer essay involving independent research, and aims to make an original argument about a topic. Its thesis statement makes a contentious claim that must be supported in an objective, evidence-based way.

An expository essay also aims to be objective, but it doesn’t have to make an original argument. Rather, it aims to explain something (e.g., a process or idea) in a clear, concise way. Expository essays are often shorter assignments and rely less on research.

An expository essay is a common assignment in high-school and university composition classes. It might be assigned as coursework, in class, or as part of an exam.

Sometimes you might not be told explicitly to write an expository essay. Look out for prompts containing keywords like “explain” and “define.” An expository essay is usually the right response to these prompts.

An expository essay is a broad form that varies in length according to the scope of the assignment.

Expository essays are often assigned as a writing exercise or as part of an exam, in which case a five-paragraph essay of around 800 words may be appropriate.

You’ll usually be given guidelines regarding length; if you’re not sure, ask.







  20. The Concept of Quality: [Essay Example], 907 words GradesFixer

    The concept of quality has existed for many years, though its meaning has changed and evolved over time. In the early twentieth century, quality management meant inspecting products to ensure that they met specifications. In the 1940s, during World War II, quality became more statistical in nature.

  21. NPR responds after editor says it has 'lost America's trust' : NPR

    Berliner says in the essay that NPR failed to consider broader diversity of viewpoint, noting, "In D.C., where NPR is headquartered and many of us live, I found 87 registered Democrats working in ...

  22. Frontiers

    Selection of Essays. For each text length group (s, m, and l), the mean word count across all three score groups was calculated. Then, the score group (2, 3, or 4) with the smallest number of essays in a text length group was taken as reference (e.g., n = 22 short texts of high quality or n = 15 long texts of low quality). Within each text length group, the five essays being - word count ...

  23. What is an essay?

    An essay is a focused piece of writing that explains, argues, describes, or narrates. In high school, you may have to write many different types of essays to develop your writing skills. Academic essays at college level are usually argumentative: you develop a clear thesis about your topic and make a case for your position using evidence ...

  24. NPR in Turmoil After It Is Accused of Liberal Bias

    An essay from an editor at the broadcaster has generated a firestorm of criticism about the network on social media, especially among conservatives. By Benjamin Mullin and Katie Robertson NPR is ...

  25. Best Quality Essay Writers in the US

    Here, at Quality-Essay.com, we offer a wide variety of services for students who need to buy term papers or any other type of academic writing. Our professional writers are experts at custom writing and can give our customers whatever they require. Each of our quality custom essays is accurately written, perfect in form and interesting to read.