Describe an object that is very special in your life. Explain why this object is important to you. Include details and examples in your explanation.


Most of the people have some object that has an importance on their lives. It can be anything! A ring, a car, a book, and etc. I fell that a mint lace dress in an important object in my life for two reasons. First, it was a present from someone special. Fallowing I will be more detailed in my topics. Second, I felt fabulous wearing it in a special day.

First, this was a present from my grandmother. It is not uncommon to fell attached in a present given by a family member or a close friend. That object is able to bring memories and comfort. For example, my grandma knew that I needed a dress which I could wear during my graduation. Then, she started to search for a perfect dress to me because she knew that we had similar tastes even being born in different generations. One day, I that I was not busy I decided to visit her. When I entered the house, she seemed excited and told that had a surprise. Following, she took me to her bedroom where I found that amazing dress layed on the bed. She made the best choice, the dress was perfect. We spent an amazing evening together talking about my graduation, my dress, and our lives. That was one of the last visitations I did in the house of that sweet old lady house before she deceased. Nowadays, when I look to this dress, I remember about that day.

Second, I wore this beautiful dress on my university graduation celebration. In my home country, Brazil, the students take very seriously the clothing that they will wear in the party manny people complimented and told how I looked beautiful. That night was unforgettable, and this dress makes the memories more vivid. graduation night. All the family, friends, and professors are going to be there, so the students wish to look fabulous. For instance, while I was getting ready for the party, I could see in the mirror how the lace dress that my grandmother gave to me fitted perfectly on my body. The dress also had deep mint color and delicate lace valorizing my skin color. To complete the look, I wore golden jewelry and black high hill shoes. When I arrived at the

Concluding, often a person poses important objects in their lives. I consider my mint lace dress an important object that I have. It not only bring good memories from my prom night but also of my sweet grandmother.


Full essay evaluations

some object that has an importance → some objects that have an importance

Sentence: It is not uncommon to fell attached in a present given by a family member or a close friend. Description: The word fell is not usually used as a verb, base: uninflected present, imperative or infinitive Suggestion: Refer to fell

Sentence: One day, I that I was not busy I decided to visit her. Description: The tag a pronoun, personal, nominative, not 3rd person singular is not usually followed by that Suggestion: Refer to I and that

Sentence: That was one of the last visitations I did in the house of that sweet old lady house before she deceased. Description: A pronoun, personal, nominative, 3rd person singular is not usually followed by an adjective Suggestion: Refer to she and deceased

Sentence: Following, she took me to her bedroom where I found that amazing dress layed on the bed. Error: layed Suggestion: No alternate word

Sentence: In my home country, Brazil, the students take very seriously the clothing that they will wear in the party manny people complimented and told how I looked beautiful. Error: manny Suggestion: many

Sentence: The dress also had deep mint color and delicate lace valorizing my skin color. Error: valorizing Suggestion: No alternate word

Flaws: More sentence variety wanted. Try to use fewer pronouns, or avoid using pronouns (like 'It, I, They, We, You...') as the subject of a sentence.

Attribute: Value (Ideal)

  • Score: 21 in 30
  • Category: Good (Ideal: Excellent)
  • No. of Grammatical Errors: 4 (Ideal: 2)
  • No. of Spelling Errors: 3 (Ideal: 2)
  • No. of Sentences: 29 (Ideal: 15)
  • No. of Words: 421 (Ideal: 350)
  • No. of Characters: 1816 (Ideal: 1500)
  • No. of Different Words: 211 (Ideal: 200)
  • Fourth Root of Number of Words: 4.53 (Ideal: 4.7)
  • Average Word Length: 4.314 (Ideal: 4.6)
  • Word Length SD: 2.387 (Ideal: 2.4)
  • No. of Words greater than 5 chars: 104 (Ideal: 100)
  • No. of Words greater than 6 chars: 78 (Ideal: 80)
  • No. of Words greater than 7 chars: 48 (Ideal: 40)
  • No. of Words greater than 8 chars: 27 (Ideal: 20)
  • Use of Passive Voice (%): 0 (Ideal: 0)
  • Avg. Sentence Length: 14.517 (Ideal: 21.0)
  • Sentence Length SD: 6.463 (Ideal: 7.5)
  • Use of Discourse Markers (%): 0.448 (Ideal: 0.12)
  • Sentence-Text Coherence: 0.311 (Ideal: 0.35)
  • Sentence-Para Coherence: 0.424 (Ideal: 0.50)
  • Sentence-Sentence Coherence: 0.111 (Ideal: 0.07)
  • Number of Paragraphs: 4 (Ideal: 5)


Is it okay if you check my corrected sentences?

Sentence: It is not uncommon to fell attached in a present given by a family member or a close friend.
Correction: It is not uncommon being attached with a present from a family member or a close friend.

Sentence: One day, I that I was not busy I decided to visit her.
Correction: One day I decided to visit my grandmother.

Sentence: That was one of the last visitations I did in the house of that sweet old lady house before she deceased.
Correction: That was the last time I visited my grandmother's house before her death.

Correction: Following, she took me to her bedroom where I found that amazing dress on the bed.

Correction: In my home country, Brazil, the students take very seriously the clothing that they will wear in the party.

Correction: The dress also had deep mint color and delicate lace that looks nice with my skin color.

What can I do to use fewer pronouns? I am a little confused about that.


You corrected them correctly.

There is no problem with using pronouns. But if you want to get higher marks, you need to learn advanced essay writing. Read some TOEFL essays by this user and get some ideas: http://www.testbig.com/users/ftn


Why did you not correct my other essays?


http://www.testbig.com/question/my-essays-0



Descriptive Essay: Your Guide to Writing an Effective One


A descriptive essay is one of the four main types of essays, alongside narrative, argumentative, and expository essays. Among these, descriptive essays can be particularly challenging because they demand a keen eye for detail and an appreciation for aesthetics. By vividly describing scenes and details, you engage your reader’s senses, making your essay memorable and engaging. In this guide, our essay writers will break down the writing process for you, offering step-by-step instructions, practical examples, and clear definitions to help you excel in your next assignment.

What is a Descriptive Essay?

Descriptive writing aims to vividly portray something through essays, helping readers visualize and feel the scene or object being described. Such essays draw on detailed descriptions to create a clear and impactful image that not only presents the subject but also evokes emotions and memories.

There are three main techniques used in descriptive writing: naming, detailing, and comparing.

Naming identifies the subject and its characteristics, answering questions like 'What is it?' and 'What features does it have?'

Detailing elaborates on these features, providing answers to detailed questions such as 'How many are there?' and 'What is its value?' Techniques like synesthesia and comparisons enhance these descriptions.

Comparing uses similes and metaphors to make descriptions more vivid, linking the subject to familiar concepts.

Description vs. Descriptive Essay

What Is the Purpose of a Descriptive Essay?

The purpose of a descriptive essay is multifaceted. Primarily, it allows writers to give readers a vivid impression of a person, place, or event, making the subject come alive through words. By using detailed descriptions, writers can help readers visualize settings and characters as if they were seeing them firsthand.

Additionally, descriptive essays can serve to clarify abstract ideas. By describing these concepts with concrete images and examples, writers make complex ideas easier to understand and more relatable to the reader.

Descriptive essays also aim to make information more memorable. When details are vivid, they are more likely to stick in the reader's mind, enhancing recall and engagement with the text.

Lastly, it can bolster an argument by providing concrete, detailed evidence that supports a point of view. This helps persuade the reader by making the argument more tangible and credible.


Descriptive Essay Topics

When you're tasked with writing a descriptive essay, you'll usually get a prompt that asks you to describe something. These descriptive essay prompts allow you to explore different settings, time periods, and imaginative scenarios in your essays. 

Personal Prompts:

  • Describe a favorite childhood memory.
  • Describe a treasured family heirloom.

Imaginative Prompts:

  • Describe a day in the life of a pirate.
  • Describe what it would be like to explore an underwater city.

Historical Prompts:

  • Describe the atmosphere of a bustling ancient marketplace.
  • Describe the experience of witnessing a significant moment in history, like the moon landing or the fall of the Berlin Wall.

Nature Prompts:

  • Describe the sights and sounds of a peaceful forest at dawn.
  • Describe the feeling of standing at the edge of a majestic waterfall.

Everyday Prompts:

  • Describe the chaos of a busy morning commute in a big city.
  • Describe the tranquility of a sunset picnic in the countryside.

If you need topic ideas for other essay genres, consult our guide on narrative essay topics.

How to Write a Descriptive Essay in 8 Steps

Now that you understand the essence and purpose of this type of essay, let's explore some fundamental yet valuable tips for writing a descriptive essay.


Step 1: Select Your Topic

The first step in creating a captivating descriptive essay is choosing the right topic. Start by paying close attention to your surroundings. 

  • Consider describing a person you know well in your life, like a sibling, a close friend, or a teacher who has made a significant impact on you.
  • Alternatively, you could focus on a specific place or object that holds sentimental value to you, such as a favorite vacation spot, a cherished childhood toy, or a meaningful piece of jewelry.
  • Another option is to explore a strong emotion that you have experienced, like excitement, nostalgia, or determination. 

Avoid using overly technical or jargon-filled language in your topic selection. Instead, aim for simplicity and clarity to ensure that your chosen topic resonates with your audience and allows you to convey your unique perspective effectively.

Step 2: Gather Details

Once you've selected your topic for your descriptive essay, the next step is to gather details that will bring your chosen subject to life on the page. Start by closely observing your subject, whether it's a person, place, object, or emotion. Pay attention to its appearance, characteristics, and any unique features that stand out to you.

For example, if you've chosen to describe your childhood home, take note of its architectural style, color scheme, and any distinctive elements like a front porch or a cozy fireplace. Recall memories associated with the home, such as family gatherings or quiet moments spent reading in your favorite spot.

If your topic is a person, like a close friend or family member, observe their physical appearance, mannerisms, and personality traits. Consider the ways in which they interact with others and the impact they have on your life.

Step 3: Draft an Outline

When structuring your essay, you can organize your paragraphs from top to bottom or near to far, chronologically, or from general to specific. Here's a simple descriptive essay outline from our custom writers to guide you.

Step 4: Develop a Thesis Statement

When developing your thesis statement, consider the main points or aspects of your subject that you want to highlight in your essay. Think about the emotions or impressions you want to evoke in the reader and tailor your thesis statement accordingly.

For example, if you're writing about your favorite childhood memory, your thesis statement could be: 'My summers spent at my grandparents' farm were filled with laughter, adventure, and a sense of belonging.'

Or, if you're describing a beautiful sunset, your thesis statement might be: 'The breathtaking colors and serene atmosphere of the sunset over the ocean evoke a sense of peace and wonder.'

Step 5: Craft the Introduction

Start your descriptive essay introduction by hooking the reader with an engaging opening sentence or anecdote related to your topic. This could be a vivid description, a thought-provoking question, or a surprising fact. For example:

  • Growing up on my grandparents' farm, each summer brought new adventures and unforgettable memories that still warm my heart to this day.

After hooking the reader, provide some background information or context for your topic. This could include brief details about the setting, time period, or significance of your subject. For instance:

  • Nestled in the rolling hills of the countryside, my grandparents' farm was a sanctuary of simple pleasures and cherished traditions.

Finally, end your introduction with your thesis statement, clearly stating the main point of your essay. This ties everything together and gives the reader a roadmap for what to expect in the rest of your essay. 

Step 6: Compose the Body Paragraphs

Once you've crafted your introduction, it's time to compose the body paragraphs, where you delve into the details and descriptions that bring your topic to life.

Each body paragraph should focus on a specific aspect or detail of your topic, expanding upon the ideas presented in your thesis statement. Use vivid language, sensory details, and descriptive devices to paint a clear picture for the reader.

For example, if you're writing about summers spent at your grandparents' farm, you could dedicate one body paragraph to describing the sights and sounds of the farm:

  • The rolling fields stretched out before me, golden waves of wheat swaying gently in the breeze. The air was filled with the sweet scent of wildflowers, mingling with the earthy aroma of freshly turned soil.

In another body paragraph, you might explore the adventures and activities that filled your days:

  • From sunrise to sunset, there was never a dull moment on the farm. Whether we were exploring the woods, splashing in the creek, or helping with chores, each day brought new excitement and adventure.

Continue with additional body paragraphs, each focusing on a different aspect of your topic and providing rich, detailed descriptions. Be sure to vary your language and sentence structure to keep the reader engaged and interested.

Step 7: Conclude the Essay

The conclusion should bring together all the ideas presented in your essay. Avoid introducing any new information in the conclusion. Instead, focus on evaluating your thoughts and reflections on the topic. End with a strong final sentence that leaves a lasting impression on the reader.

For example, if you were writing about summers spent at your grandparents' farm, your conclusion might reflect on the significance of those memories:

  • 'As I reminisce about the summers spent amid the rustic charm of my grandparents' farm, I am filled with a profound sense of gratitude for the simple pleasures and cherished moments that shaped my childhood. The laughter echoing through the fields, the adventures awaiting around every corner, and the sense of belonging that enveloped me there will forever hold a special place in my heart.'

Step 8: Refine Your Essay

Once you've finished writing your essay, it's time to refine it for clarity and impact. Start by reading your essay aloud to yourself. Listen for any sentences that sound awkward or unclear. Mark these sentences so you can revise them later.

You can also read your essay aloud to others and ask for their feedback. Invite friends, family members, teachers, or mentors to listen to your essay and share their thoughts. Ask them if there are any parts that are difficult to understand or if they have trouble picturing the subject you're describing.

Be receptive to constructive criticism and feedback. Use it as an opportunity to improve your essay and make it stronger. And if it sounds too demanding right now, you can buy a cheap essay to sidestep the hassle and reclaim some much-needed free time.

Descriptive Essay Format

The standard format for a descriptive essay typically includes five paragraphs: an introduction, three body paragraphs, and a conclusion. However, you can also organize your essay into sections, allowing for flexibility in the length of the body paragraphs.

Introductory Paragraph: This paragraph sets the scene by describing where, when, and to whom the experience occurred. It should include descriptive words to capture the reader's attention.

First Body Paragraph: Here, the writer provides details that allow the reader to visualize the situation. Descriptive language is key in painting a clear picture for the reader.

Second Body Paragraph: More details are provided, with a focus on using descriptive adjectives. Figurative language, such as metaphor (e.g., describing the city as a 'jungle of concrete'), can enhance the imagery.

Third Body Paragraph: The writer continues to appeal to the reader's senses with visually descriptive words. Figurative language, like personification (e.g., describing the wind as a playful dancer), adds depth to the description.

Conclusion: The conclusion alludes to another sense, such as touch or sound, and uses strong words to signify closure. It ends with a powerful concluding sentence to leave a lasting impression on the reader.

Descriptive Essay Examples

In this section, you'll discover essay examples that demonstrate how to captivate your readers' attention effectively. After exploring these examples, you might find yourself tempted to ask, 'Can someone do my homework for me?' - and that's completely understandable! We're here to help you become more confident and articulate communicators through your writing!

3 Additional Tips for Writing

While writing a descriptive essay, your goal is to make your subject come alive for the reader. Unlike more formal essays, you have the freedom to be creative with your descriptions, using figurative language, sensory details, and precise word choices to make your writing memorable.


Use Figurative Language: Figurative language, like metaphors and similes, adds flair to your descriptions. Instead of sticking to literal descriptions, use comparisons to create unique and memorable imagery. 

  • For instance, describing a city as 'a bustling beehive of activity' or a forest as 'a blanket of whispers' adds an unexpected twist that captures the reader's attention.

Engage Your Senses: In a descriptive essay, don't just focus on what something looks like; appeal to all the senses. Describe how things smell, sound, feel, and even taste, if applicable. This adds depth and richness to your descriptions, making them more immersive. 

  • For example, instead of just describing a beach visually, include sensory details like feeling the warm sand between your toes, hearing the rhythmic crash of waves, and tasting the salty sea breeze.

Choose Your Words Carefully: Use effective adjectives, verbs, and nouns to convey your impressions vividly. Avoid clichés and opt for original, precise language that reflects your unique perspective. Take the time to review your sentences and consider if there are better word choices that could enhance your description.

In Wrapping Up

To sum it up, descriptive essays are all about encouraging students like you to explore your surroundings and unleash your creativity by describing scenes in detail with words. When you carefully select and organize these descriptive details, it not only enhances your writing but also sharpens your critical thinking skills. Plus, diving into this expressive writing style allows you to appreciate the beauty of language and feel more connected to written communication. And remember, if you ever need a little boost in your writing journey, our descriptive essay writing service is here to help!



Daniel Parker is a seasoned educational writer focusing on scholarship guidance, research papers, and various forms of academic essays, including reflective and narrative essays. His expertise also extends to detailed case studies. A scholar with a background in English Literature and Education, Daniel's work on the EssayPro blog aims to support students in achieving academic excellence and securing scholarships. His hobbies include reading classic literature and participating in academic forums.

Adam is an expert in nursing and healthcare, with a strong background in history, law, and literature. He holds advanced degrees in nursing and public health, and his analytical approach and comprehensive knowledge help students navigate complex topics. On the EssayPro blog, Adam provides insightful articles on everything from historical analysis to the intricacies of healthcare policies. In his downtime, he enjoys historical documentaries and volunteering at local clinics.




Descriptive Writing Task: How To Describe an Object

In this mini-lesson, I’ll go through the basics of how to describe an object in detail – and not just in a boring way! Many of the creative writing and descriptive writing questions in exams require you to focus on an object, person, place or thing.


Lots of my students get stuck on description; they tell me that they can't think of anything to write. Or they feel like they can only list details or features of the object without being 'creative'. So, if this is relatable for you, keep reading as we'll be breaking down how to go beyond basic descriptions and transform them into something personal, powerful and meaningful.


Descriptive Writing Task:

  • Look around you and choose an object . This should be something interesting – it might have a history to it or a memory attached. It might be complex or beautiful in some way. It might mean something important to you.
  • Once you have picked the object, sketch it out using notes. What details do you notice there? Think about colour, shape, size, texture, patterns. How old or new is it? Has it been changed by humans at any point – maybe it’s broken or worn down? How does it make you think or feel? What memories or stories could you attach to it (these can be invented if you don’t have a real story or memory)?
  • Write 2 paragraphs describing the object. Try to use as many techniques as possible. Write as quickly as you can. Each paragraph should be on a different topic. For instance, you might choose to structure this as the item now vs it in the past, physical description vs memory, or zooming in on different details.
  • Edit your work. Could your descriptions be clearer in any way? Is there something you want to add or delete? Have you used enough techniques and a good range of different techniques rather than just repeating the same one? Have you used interesting punctuation, a variety of sentence lengths, a range of complex and specific vocabulary?

When doing this task with a student, I chose a candlestick that was on my desk. I decided to structure my paragraphs so that one was about the physical description of the candlestick, and the other was about its history and how it might have been made. Here’s my piece below:

A wax cylinder emerges from this candlestick, grey and ghostly – a strange, implacable colour; neither brown, nor black, nor white. Its burnt black wick curves slightly to the right, emerging out of a smoky pool of wax that has softened, melted and solidified again, into a curve – warped by the concentrated heat of a long snuffed-out flame. The monochrome of the candle is directly opposed to the candlestick itself, which is awash with swirling patterns and colours, hand-painted onto a dark background and finished with a reflective gloss. Dust, however, has settled over its base and dulled the sheen to a milky pall over the bright colours below.

Far off, in an ancient land, I can imagine an old woman, sitting at her table in a village somewhere – perhaps in Siberia, Sweden or Finland. Wrapped in a patterned shawl to shelter herself from the biting Winter cold, she has only her fingertips exposed, for dexterity, and in them, she holds an elegantly small paintbrush. The candlestick has been carved, turned and polished by her neighbour, and it is her task to transform it from a dull wooden object into something magical: a work of art. She paints luxurious crimson plums, bulbous yellow grapes (all perfectly circular) and intricate, swirling foliage that finishes off the folk art design. She will likely sell this object to a trader for a few pennies, perhaps enough to afford a measly potato soup for supper. Years later, I will purchase it as a fine antique in a high-end gallery, in an upmarket quarter of Harrogate, North Yorkshire.

descriptive essay about an object that is special to you

Final Task: Review your work

Take a look back over your written piece, and do a short analysis of it – you can either write this out or just think about it. How successful do you think the piece was? Are you happy with it? Why/why not?

Find three details that you are most proud of in your piece. Why are they great?

Find three details that could be improved. How would you improve them?

Thanks for reading! If you found this useful, take a look at our full Basic Descriptive Writing and Advanced Descriptive Writing courses, as well as other English Language and Literature courses.


© Copyright Scrbbly 2022



What is a Descriptive Essay? How to Write It (with Examples)


A descriptive essay is a type of creative writing that uses specific language to depict a person, object, experience, or event. The idea is to use illustrative language to show readers what the writer wants to convey – it could be as simple as a peaceful view from the top of a hill or as horrific as living in a war zone. By using descriptive language, authors can evoke a mental image in the readers’ minds, engaging readers and leaving a lasting impression, instead of just providing a play-by-play narrative.

Note that a description and descriptive essay are not the same thing. A descriptive essay typically consists of five or more well-written paragraphs with vivid imagery that can help readers visualize the content, as opposed to a description, which is typically one or more plain paragraphs with no particular structure or appeal. If you are still unsure about how to write a compelling descriptive essay, continue reading!

Table of Contents

  • What is a descriptive essay?
  • Types of descriptive essay topics
  • Characteristics of descriptive essays
  • How to write a descriptive essay using a structured outline
  • Frequently asked questions

A simple descriptive essay definition is that it is a piece of writing that gives a thorough and vivid description of an object, person, experience, or situation. It is sometimes focused more on the emotional aspect of the topic rather than the specifics. The author’s intention when writing a descriptive essay is to help readers visualize the subject at hand. Generally, students are asked to write a descriptive essay to test their ability to recreate a rich experience with artistic flair. Here are a few key points to consider when you begin writing these.

  • Look for a fascinating subject

You might be assigned a topic for your descriptive essay, but if not, you must think of a subject that interests you and about which you know enough facts. It might be about an emotion, place, event, or situation that you might have experienced.

descriptive essay about an object that is special to you

  • Acquire specific details about the topic

The next task is to collect relevant information about the topic of your choice. You should focus on including details that make the descriptive essay stand out and have a long-lasting impression on the readers. To put it simply, your aim is to make the reader feel as though they were a part of the experience in the first place, rather than merely describing the subject.

  • Be playful with your writing

To make the descriptive essay memorable, use figurative writing and imagery to emphasize specific aspects of the topic. The goal is to make sure that the reader experiences the content visually, so it must be captivating and colorful. Generally speaking, “show, don’t tell”! This can be accomplished by choosing phrases that evoke strong emotions and engage a variety of senses. Making use of metaphors and similes will enable you to compare different things. We will learn about them in the upcoming sections.

  • Capture all the different senses

Unlike other academic articles, descriptive essay writing uses sensory elements in addition to the main idea. In this type of essay writing, the topic is described using sensory details such as smell, taste, sound, and touch. Example: “Mahira feels most at home when the lavender scent fills her senses as she lays on her bed after a long, tiring day at work. As the candle melts, so do her worries.” It is crucial to provide sensory details to make the character more nuanced and build intrigue to keep the reader hooked. Metaphors can also be employed to explain abstract concepts; for instance, “A small act of kindness creates ripples that transcend oceans.” Here the writer used a metaphor to convey that even the smallest act of kindness can have a larger impact.

  • Maintain harmony between flavor and flow

The descriptive essay format is one that can be customized according to the topic. However, like other types of essays, it must have an introduction, body paragraphs, and a conclusion. The number of body paragraphs can vary depending on the topic and available information.

Types of Descriptive Essay Topics

It is crucial to remember that a descriptive essay should have a specific topic and goal, such as sharing personal experiences or expressing emotions like the satisfaction of a good meal. This is accomplished by employing exact language, imagery, and figurative language to illustrate concrete features. These language devices allow the writer to craft a descriptive essay that effectively transmits a particular mood, feeling, or incident to readers while also conjuring up strong mental imagery. A descriptive essay may be creative, or it may be based on the author’s own experiences. Below is a description of a few descriptive essay examples that fit into these categories.

  • Personal descriptive essay example

A personal essay can look like a descriptive account of your favorite activity, a place in your neighborhood, or an object that you value. Example: “As I step out of the front door, the crisp morning air greets me with a gentle embrace; the big chestnut tree in front sways in the wind as if saying hello to me. The world unfolds in a symphony of awakening colors, promising a day filled with untold possibilities that make me feel alive and grateful to be born again.”

  • Imaginative descriptive essay example

You may occasionally be required to write descriptive essays based on your imagination or on subjects unrelated to your own experiences. The prompts for these kinds of creative essays could be to describe the experience of someone going through heartbreak or to write about a day in the life of a barista. Imaginative descriptive essays also allow you to describe different emotions; for example, the feelings a parent experiences on holding their child for the first time.

Characteristics of descriptive essays

The aim of a descriptive essay is to provide a detailed and vivid description of a person, place, object, event, or experience. The main goal is to create a sensory experience for the reader. Through a descriptive essay, the reader may be able to experience foods, locations, activities, or feelings that they might not otherwise be able to. Additionally, it gives the writer a way to relate to the readers by sharing a personal story. The following is a list of the essential elements of a descriptive essay:

  • Sensory details
  • Clear, succinct language
  • Organized structure
  • Thesis statement
  • Appeal to emotion


How to write a descriptive essay, with examples

Writing an engaging descriptive essay is all about bringing the subject matter to life for the reader so they can experience it with their senses: smells, tastes, and textures. The upside of writing a descriptive essay is that you don’t have to stick to the confines of formal essay writing; you are free to use figurative language, sensory details, and clever word choices that can breathe life into your essay. Let’s take a closer look at how you can use these components to develop a descriptive essay that stands out, using examples.

  • Figurative language

Have you ever heard the expression “shooting for the stars”? It refers to pushing someone to strive higher or set lofty goals; it does not literally mean shooting at the stars. This is an example of using figurative language to convey strong motivational emotions. In a descriptive essay, figurative language is employed to grab attention and emphasize points by creatively drawing comparisons and exaggerations. But why should descriptive essays use figurative language? One, it adds to the topic’s interest and humor; two, it helps the reader connect more closely with the subject.

These are the five most often used figurative language techniques: personification, metaphor, simile, hyperbole, and allusion.

  • Simile: A simile is a figure of speech used to compare two things while emphasizing and enhancing the description using terms such as “like” or “as.”

Example: Life is like riding a bicycle. To keep your balance, you must keep moving – Albert Einstein

  • Metaphor: A metaphor is also used to draw similarities, but without the direct or literal comparisons used in similes.

Example: Books are the mirrors of the soul – Virginia Woolf, Between the Acts

  • Personification: This is the process of giving nonhuman or abstract objects human traits. Any human quality, including an emotional component, a physical attribute, or an action, can be personified.

Example: Science knows no country, because knowledge belongs to humanity, and is the torch which illuminates the world – Louis Pasteur

  • Hyperbole: This is an extreme form of exaggeration, frequently impractical, and usually employed to emphasize a point or idea. It gives the character more nuance and complexity.

Example: The force will be with you, always – Star Wars

  • Allusion: This is when you reference a person, work, or event without specifically mentioning them; this leaves room for the reader’s creativity.  

Example: In the text below, Robert Frost uses the biblical Garden of Eden as an example to highlight the idea that nothing, not even paradise, endures forever.

Then leaf subsides to leaf.

So Eden sank to grief,

So dawn goes down to day.

Nothing gold can stay

– Nothing Gold Can Stay by Robert Frost (1923)

Descriptive essays need a combination of figurative language and strong sensory details to make the essay more memorable. This is when authors describe the subject matter employing senses like smell, sound, touch, and taste so that the reader can relate to it better.

Example of a sensory-based descriptive essay: The earthy fragrance of freshly roasted chestnuts and the sight of bright pink, red, orange fallen leaves on the street reminded her that winter was around the corner.

  • Word choice

Word choice is everything in a descriptive essay. For the description to be enchanting, it is essential to utilize the right adjectives and to carefully consider the verbs, nouns, and adverbs. Use unusual terms and phrases that offer a new viewpoint on your topic matter instead of overusing clichés like “fast as the wind” or “lost track of time,” which can make your descriptive essay seem uninteresting and unoriginal.

See the following examples:

Bad word choice: I was so happy because the sunset was really cool.

Good word choice: I experienced immense joy as the sunset captivated me with its remarkable colors and breathtaking beauty.

  • Descriptive essay format and outline

Descriptive essay writing does not have to be disorganized. It is advisable to use a structured format to organize your thoughts and ensure a coherent flow in your writing. Here is a list of components that should be part of your descriptive essay outline:

  • Introduction
  • Opening/hook sentence
  • Topic sentence
  • Body paragraphs
  • Concrete details
  • Clincher statement


Introduction:

  • Hook: An opening statement that captures attention while introducing the subject.
  • Background: Includes a brief overview of the topic the descriptive essay is based on.
  • Thesis statement: Clearly states the main point or purpose of the descriptive essay.

Body paragraphs: Each paragraph should have

  • Topic sentence: Introduce the first aspect or feature you will describe. It informs the reader about what is coming next.
  • Sensory details: Use emphatic language to appeal to the reader’s senses (sight, sound, touch, taste, and smell).
  • Concrete details: These are actual details needed to understand the context of the descriptive essay.
  • Supporting details: Include relevant information or examples to improve the description.

Conclusion:

  • Summarize key points: Here you revisit the main features or aspects of the subject.
  • Restate thesis statement: Reinforce the central impression or emotion.
  • Clincher statement: Conclude with a statement that summarizes the entire essay and serves as the last word with a powerful message.

Revision and editing:

  • Go over your essay to make sure it is coherent, clear, and consistent.
  • Check for logical paragraph transitions by proofreading the content.
  • Examine text to ensure correct grammar, punctuation, and style.
  • Use the thesaurus or AI paraphrasing tools to find the right words.

Frequently Asked Questions

How many paragraphs does a descriptive essay have?

A descriptive essay often consists of three body paragraphs or more, an introduction that concludes with a thesis statement, and a conclusion that summarizes the subject and leaves a lasting impression on readers.

What is the primary goal of a descriptive essay?

A descriptive essay’s primary goal is to captivate the reader with a thorough and vivid description of the subject matter, while appealing to their various senses. Additional goals include:

  • Spark feeling and imagination
  • Create a vivid experience
  • Paint a mental picture
  • Pique curiosity
  • Convey a mood or atmosphere
  • Highlight specific details

How does a descriptive essay differ from a narrative essay?

Although they both fall within the creative writing category, narrative essays and descriptive essays have different storytelling focuses. While the main goal of a narrative essay is to tell a story based on a real-life experience or a made-up event, the main goal of a descriptive essay is to vividly describe a person, location, event, or emotion.



Writing a Descriptive Essay


Your first task in writing a descriptive essay is to choose a topic that has many interesting parts or qualities to talk about. Unless you have a really vivid imagination, you'll find it difficult to write much about a simple object like a comb, for example. It's best to compare a few topics first to make sure they'll work.

The next challenge is to figure out the best way to describe your chosen subject in such a way as to relay a complete experience to the reader, so that he or she is able to see, hear, and feel through your words.

Organize Thoughts Before Drafting

As in any writing, the drafting stage is key to writing a successful descriptive essay. Since the purpose of the essay is to paint a mental image of a specific subject, it helps to make a list of all the things you associate with your topic.

For example, if your subject is the farm where you visited your grandparents as a child, you would list all the things you associate with that place. Your list should include both general attributes associated with a farm and the more personal and specific things that make it special to you and the reader.

Start with general details, then add the unique details:

  • That spot by the pig barn where you fell in the manure.
  • Playing hide and seek in the cornfields.
  • Picking wild greens for dinner with your grandmother.
  • The stray dogs that always wandered onto the farm.
  • Scary coyotes howling in the night.

By tying these details together you can make the essay more relatable to the reader. Making these lists will allow you to see how you can tie things from each list together.

Describing Descriptions 

At this stage, you should determine a good order for the objects you'll describe. For example, if you are describing an object, you should determine whether you want to describe its appearance from top to bottom or side to side.

Remember that it is important to begin your essay on a general level and work your way down to specifics. Start by outlining a simple five-paragraph essay with three main topics. Then you may expand on this basic outline.

Next, you will begin to construct a thesis statement and a trial topic sentence for each main paragraph.

  • The thesis sentence should convey your overall impression of your subject. Does it make you happy? Is it attractive or ugly? Is your object useful?
  • Each topic sentence should introduce a new part or stage of your chosen topic.

Don't worry, you can change these sentences later. It's time to start writing paragraphs!

Beginning to Draft

As you build your paragraphs, you should avoid confusing the reader by bombarding them with unfamiliar information immediately; you must ease your way into your topic in your introductory paragraph . For example, instead of saying,

The farm was where I spent most summer holidays. During the summer we played hide and seek in the cornfields and walked through the cow pastures to pick wild greens for supper. Nana always carried a gun for snakes.

Instead, give the reader a broad view of your subject and work your way into the details. A better example would be:

In a small rural town in central Ohio was a farm surrounded by miles of cornfields. In this place, on many warm summer days, my cousins and I would run through the cornfields playing hide and seek or making our own crop circles as clubhouses. My grandparents, whom I called Nana and Papa, lived on this farm for many years. The old farmhouse was large and always full of people, and it was surrounded by wild animals. I spent many of my childhood summers and holidays here. It was the family gathering place.

Another simple rule of thumb to remember is "show, don't tell." If you want to describe a feeling or action you should reinvent it through the senses rather than just state it. For example, instead of:

I got excited every time we pulled into the driveway of my grandparents' house.

Try to elaborate on what was really going on in your head:

After sitting for several hours in the back seat of the car, I found the slow crawl up the driveway to be absolute torture. I just knew Nana was inside waiting with freshly baked pies and treats for me. Papa would have some toy or trinket hidden somewhere but he would pretend not to recognize me for a few minutes just to tease me before he gave it to me. As my parents would struggle to pry the suitcases out of the trunk, I would bounce all the way up the porch and rattle the door until someone finally let me in.

The second version paints a picture and puts the reader in the scene. Anyone can be excited; what your reader needs and wants to know is what made it exciting.

Keep It Specific

Finally, don't try to cram too much into one paragraph. Use each paragraph to describe a different aspect of your subject. Check to make sure that your essay flows from one paragraph to the next with good transition statements.

The conclusion of your essay is where you can tie everything together and restate your thesis. Take all the details and summarize what they mean to you and why they are important.


How To Write An Impactful Descriptive Essay?

By: Cathy A. | Reviewed by: Melisa C.

Published on: Dec 17, 2019 | Last updated on: Feb 9, 2023

Wondering how to write an impressive descriptive essay? Writing a descriptive essay is both fun and challenging. You need to describe the main topic in detail, engaging the five senses of the readers.

Students usually get this type of essay in high school and college. Writing a descriptive essay is different from other essays.

You need to focus on describing a certain person, place, or event.

Luckily for you, the following blog post will provide some helpful tips on how to create an engaging essay.

Continue reading to learn how to write an A-worthy descriptive essay.


What is a Descriptive Essay?

A descriptive essay is a detailed paper that describes a place, person, situation, object, or emotion. Different people have different points of view and your job is to explain yours in detail.

You may be asked to write a descriptive essay about the beach or forest, or about a person or situation. The purpose of this essay is to test the writer’s ability to express and explain their experiences.

Descriptive writing should create a picture in the reader’s mind. You may be required to write a descriptive essay as a high school or college essay assignment.

For a compelling essay, using adjectives and adverbs, details, and figurative language is fundamental. Without proper usage of words, you will not be able to evoke the readers' emotions.

What is the Purpose of a Descriptive Essay?

The purpose of a descriptive essay is to describe a person, place, or personal experience in vivid detail so that the reader can create a picture in his mind.

The descriptive essay is written to get the reader to understand by using descriptive language. It is different from narrative essays, where the writer tells the story about someone else. Usually, it starts with a real-life event and then the content follows the author's imagination.

Descriptive essays are not intended to persuade the reader or show facts and figures to prove something. Descriptive essays are like word paintings that contain personal and descriptive details and these are mostly assigned to students of creative writing.

How to Start a Descriptive Essay

A strong start for your descriptive essay is essential. Analyze your topic from every angle and document the following details:

Analyze the main subjects in detail and observe minute things.

  • Start with observing all the possible aspects of the subject.
  • Don't just observe the object but also its surroundings.
  • Focus on details and features of the subject and develop opinions about them.
  • Be thoughtful; this first step will be the basis for the essay.

Physical Settings

Describing the physical settings is a must in a descriptive essay. When describing, keep the following points in mind.

  • Focus on the subject's position and observe nearby objects
  • Note the time of day and kind of lighting: natural or artificial
  • Physical settings: all the basic and decorative elements
  • The position and shape of the objects
  • Alignment and any other observable information

Physical Features

When describing the physical features of the subject, living or nonliving, consider the following points.

  • Living or nonliving; describe the features in detail
  • The subject's skin color, texture, smoothness, expression, and age
  • The features of inanimate objects in the picture, color, surface, and texture

Create Drama

Storytelling and drama are the lifeblood of a good descriptive essay. They turn your essay into an exciting and interesting piece of writing. However, be subtle about adding drama to your sentence structure and add it only to complement your story.

Focus On Your Feelings

Focus on how you feel about the particular topic or person and stick to it. It is easy to get sidetracked when working on the essay, so keep your focus on your own feelings and write the essay based on them.

Use Of Specific Vocabulary

Vocabulary is important. Select the best words for describing an action or object. Don't always use the first word that comes to mind.

Write slowly and thoughtfully, and use specific words to convey your thoughts.

Psychological Aspects

When writing about a certain situation or a person's behavior, focus on the mental aspects and emotions involved.

For example, describe your emotions when your friend misplaced your notes right before the exam.

You may have had several emotions in that incident. Maybe you were prepared for exams, but this situation put you under pressure and made you feel frustrated and hurt.

Explore those emotions and describe the feelings they aroused. Describe the body language also, if relevant.

Ask Yourself, WHY?

This is the most valuable tip for students. When you are looking at a particular subject, and having difficulty analyzing its aspects, ask yourself "WHY".

  • Why is the subject the way it is?
  • Why does the person you are describing have such deep-set, cold eyes?
  • Why is the animal so wounded and terrified?
  • Why is this particular place famous?

It is a good practice and after some time you will do it naturally. Knowing the why is important if you want to describe your topic properly.


How To Write A Descriptive Essay?

When you write a descriptive essay, you help your readers visualize an event, a person, or a story. It is written to make your readers feel what you feel about the respective subject.

A descriptive essay seeks to appeal to some or all of the audience’s five senses. Some key things to consider are:

  • Discussing your subject thoroughly
  • Focusing on details and adding them in your essay
  • Sharing your personal feelings and experience about the subject
  • Observing and describing all sensory details of your subject

Here are the steps to write a descriptive essay easily.

1- Choose an Engaging and Focused Essay Topic

An important step that all strong descriptive essays share is having a focused topic. Before you make the outline, identify the purpose of your essay and use it to create an appropriate thesis statement. This type of paper does not require much personal opinion from you; its main goal should be presenting information that creates a dominant impression in readers' minds.

2- Research and Gather Important Details

When writing a descriptive essay, it is important to include as many details and as much sensory information as possible. This helps your reader fully form the images being presented in their mind's eye. You can organize these ideas into categories so they are easy to access when needed.

3- Create an Outline of Your Essay

Your essay must be organized with clear and concise subheadings. Group your main points into individual body paragraphs, each of which should cover only one idea or topic at a time.

4- Write your Essay’s Introduction

A good introductory paragraph is much like a road map because it provides direction to your readers.

It provides relevant background information before diving into more specific details related to how something works or why something happens. These could include statistics or stories from real-life scenarios.

5- Write the Main Body Section of Your Essay

Each body paragraph should start with a topic sentence that keeps the reader hooked on what you are saying. Use specific details instead of making generalized statements, and make sure to give examples if necessary.

6- End with a Strong Conclusion

The conclusion of an essay is the final paragraph, and it should summarize all that you have said throughout. It's a good idea to restate the main points and key details from the essay in this section, so the reader leaves with a complete understanding of the subject.

Be sure not to introduce any new information in the conclusion, to avoid confusion.

7- Proofread and Revise the Essay Carefully

Once you are done writing the essay, proofread and revise it carefully. Make sure that it is free from all kinds of errors.

Descriptive Essay Outline

Like all other essays, a descriptive essay follows the usual 5-paragraph essay structure and format. Before starting, it is important to create an outline. The following are the fundamental elements of your descriptive essay outline:

Descriptive Essay Introduction

The introduction sets the foundation for the entire essay. Before heading to the body section, the reader will come across the introduction, so it forms the first impression of your work. It is very important to write an engaging introduction so that readers read the essay till the end.

Start the essay in easy-to-understand language. Provide background information on your topic so readers can understand it and its importance.

To make sure the reader feels your emotions and decides to continue reading, incorporate the following points in your introduction.

  • Attract the reader's attention with an interesting fact, phrase, or quote
  • Don't bombard them with information
  • Go straight to the main points
  • Include enough information to introduce the topic and its significance
  • Summarize the argument and the main topic, and craft your thesis statement

Descriptive Essay Thesis Statement

A thesis statement is an integral part of your essay. It focuses on the argument and the writer’s main idea, which is to be discussed in the essay.

This statement also gives the writer a chance to explain the purpose and scope of the topic. It should be intriguing and engaging.

A thesis statement is written at the end of the introduction. It is mainly a single sentence that describes the essay's objective and acts as a guide to the reader on what to expect in the essay body.

It is like the table of contents of a book: by reading it, you get an idea of what the book is all about.

A good thesis should contain the following things:

  • Define the essay scope - it should narrow down all the points to clarify its purpose.
  • Avoid using common words - you should be creative with your choice of words.
  • Create suspense - it should attract the reader to the body paragraphs of the essay.

For further information on how to write a thesis for a descriptive essay, check out the following examples.

  • Descriptive essay example about a Place

“Even though monarchy is long gone, Buckingham Palace is here to remind us of the aesthetic beauty of that era.”

  • Descriptive essay example about a Person

“One of the characteristics of Spider-Man is his youthfulness, and the fact that he talks to himself more than Hamlet.”

  • Descriptive essay example about an Emotion

“For numerous reasons, the dark forest is my greatest fear, though not a fear which is necessarily smart to face.”

Descriptive Essay Body Paragraphs

Body paragraphs come next, after the introduction and thesis statement. They form the main part of your essay.

Usually, an essay consists of three body paragraphs but you can add more if needed.

Don't add more than one central idea in one paragraph. Fusing different ideas will confuse the reader.

Build your paragraphs according to the thesis and introduction.

  • Start each body paragraph with a topic sentence
  • Use transitions to move between paragraphs smoothly
  • Each paragraph should be five to six sentences long

Descriptive Essay Conclusion

The concluding paragraph is the last part of an essay, and probably your last chance to impress your reader.

Since it is the part the reader is most likely to remember, it is as important as the rest of the essay.

To make it interesting and thought-provoking, include the following points:

  • Restate the thesis statement
  • Summarize the main points
  • Add an intriguing closing statement

After writing the conclusion, review your essay, correct any mistakes, and make sure you have maintained a consistent tone throughout.

Descriptive Essay Format Sample

Here is the descriptive essay format to help you understand how you can write a winning descriptive essay.

DESCRIPTIVE ESSAY FORMAT (PDF)


Descriptive Essay Topics Ideas

Descriptive essay topics are often related to physical settings, locations, living beings, and objects.

Make sure that your essay engages the five senses (touch, taste, smell, sight, and hearing), or at least one of them. It depends on the topic and the kind of feeling that you want to arouse.

Below are some descriptive essay ideas and ways to achieve them.

Living Beings

When you want to write about a person like a family member, consider the following elements:

  • Gender, age, complexion, and expressions
  • Physical features
  • Height, body type, and approximate weight
  • Kind of clothes

These details will add depth to the description and your readers will actually see your narrative.

When animals are the subject, you can add the above points plus the following details:

  • Species or breed
  • Size, weight, color
  • Behavior patterns
  • Temperament
  • Trained or wild?
  • Real or fictional?

Inanimate Subjects

Geographic locations and structures.

When your subject is a place or a building, add the following points:

  • Research about the place and its historical background
  • The color and the building's type
  • A famous place or landmark to draw a comparison and inspire interest

Human behavior and psychology make a compelling descriptive essay subject. When writing about it:

  • Describe the consequences of a particular behavior
  • Discuss the emotional dimension of the topic and how you perceive it personally

Event Or Travel Experience

A travel experience makes a good descriptive essay since you have experienced the event firsthand.

Give a detailed description of the place, people at the venue, and the atmosphere of the location.

Idea, Concept, or Occupation

When writing on such topics, focus on how an idea or concept affects society and its different aspects.

Example Descriptive Essay Topics for Students

Choosing a topic for your descriptive essay is quite interesting. You get to choose something that you have an emotional connection with.

When writing a descriptive essay about a person or place, adding their personal traits will be helpful.

Some examples of descriptive essay topics include:

  • Compose a detailed descriptive essay about your best friend.
  • Describe a fancy place that you have created.
  • Describe your dream vacation destination.
  • Describe your favorite mall or store.
  • Describe your childhood home.
  • Descriptive essay about nature.
  • Descriptive essay about a place you visited.
  • Describe the personality of your Maths teacher.
  • Discuss the main characters of your favorite movie.
  • Descriptive essay about chocolate.
  • Write an essay using unique words to describe yourself.
  • What makes me unique?
  • My first love.

Descriptive Essay Examples

Study these descriptive essay examples and sample papers to understand the main idea, structure, and purpose of descriptive essays.

DESCRIPTIVE ESSAY ON MARKET (PDF)

DESCRIPTIVE ESSAY EXAMPLE PERSON (PDF)

To help you understand how to write a great descriptive essay, we have a whole blog post dedicated to it. We know that talking about something is one thing and demonstrating it is completely different.

Frequently Asked Questions

What are the features of a descriptive essay?

A descriptive essay provides a perfect opportunity for writers to express their feelings on any subject. Descriptive writing has rich sensory details which appeal to all of your senses.

How do you start a descriptive essay introduction?

The introduction to the descriptive essay should set the scene and introduce the main topic. Use sensory details here to give the reader a sense of what the essay is all about.

What are the two types of descriptive essays?

There are two types of descriptive essays. The first type deals with people, and the second one is about objects.

What are the elements of a descriptive essay?

Here are the key elements of a descriptive essay.

  • Sensory details
  • Figurative language
  • Central and main theme
  • Precise and clear language
  • Proper organization of ideas

What makes good descriptive writing?

Good and effective descriptive writing consists of vivid sensory details that appeal to all senses including the sense of sight, smell, touch, hearing, and taste. Moreover, these essays also explain people’s feelings in writing.

Cathy A.

Finance Essay, Literature

Cathy has been working as an author on our platform for over five years now. She has a Master's degree in mass communication and is well-versed in the art of writing. Cathy is a professional who takes her work seriously and is widely appreciated by clients for her excellent writing skills.

Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Descriptive Essays

Welcome to the Purdue OWL

This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

What is a descriptive essay?

The descriptive essay is a genre of essay that asks the student to describe something—object, person, place, experience, emotion, situation, etc. This genre encourages the student’s ability to create a written account of a particular experience. What is more, this genre allows for a great deal of artistic freedom (the goal of which is to paint an image that is vivid and moving in the mind of the reader).

One might benefit from keeping in mind this simple maxim: If the reader is unable to clearly form an impression of the thing that you are describing, try, try again!

Here are some guidelines for writing a descriptive essay.

  • Take time to brainstorm

If your instructor asks you to describe your favorite food, make sure that you jot down some ideas before you begin describing it. For instance, if you choose pizza, you might start by writing down a few words: sauce, cheese, crust, pepperoni, sausage, spices, hot, melted, etc. Once you have written down some words, you can begin by compiling descriptive lists for each one.

  • Use clear and concise language.

This means that words are chosen carefully, particularly for their relevancy in relation to that which you are intending to describe.

  • Choose vivid language.

Why use horse when you can choose stallion? Why not use tempestuous instead of violent? Or why not miserly in place of cheap? Such choices form a firmer image in the mind of the reader and oftentimes offer nuanced meanings that better serve one’s purpose.

  • Use your senses!

Remember, if you are describing something, you need to be appealing to the senses of the reader. Explain how the thing smelled, felt, sounded, tasted, or looked. Embellish the moment with senses.

  • What were you thinking?!

If you can describe emotions or feelings related to your topic, you will connect with the reader on a deeper level. Many have felt crushing loss in their lives, or ecstatic joy, or mild complacency. Tap into this emotional reservoir in order to achieve your full descriptive potential.

  • Leave the reader with a clear impression.

One of your goals is to evoke a strong sense of familiarity and appreciation in the reader. If your reader can walk away from the essay craving the very pizza you just described, you are on your way to writing effective descriptive essays.

  • Be organized!

It is easy to fall into an incoherent rambling of emotions and senses when writing a descriptive essay. However, you must strive to present an organized and logical description if the reader is to come away from the essay with a cogent sense of what it is you are attempting to describe.

How to Write a Descriptive Essay

  • 26th February 2020

A descriptive essay, as the name may suggest, is an essay in which you describe something. The idea is to create a vivid picture of something – a person, object, place or experience – for your reader.

But how do you write a descriptive essay? We have a few helpful tips to share.

1. Brainstorming and Organizing Your Ideas

Your first step should be to brainstorm ideas . Think about the qualities of what you’re describing. As well as physical qualities, make notes about any thoughts, memories, and emotions you associate with your subject matter.

This brainstorming will give you the raw material for your descriptive essay. The next step is to create an essay outline. Typically, this will include:

  • An Introduction – An outline of what you will describe and the “thesis” for your essay (i.e., a key theme that will run through your essay and guide your description). For instance, if writing about an inspirational teacher, you could mention the importance of education in the introduction.
  • Main Body – A series of paragraphs in which you describe your subject. Each paragraph should cover a single main point, then lead neatly on to the next one, adding to the overall picture you’re creating for the reader.
  • Conclusion – A final paragraph where you summarize your overall essay. This is also a good place to reaffirm your essay thesis, emphasizing how your description reflects this.

Before you start writing, then, make some notes about what each paragraph in your essay will include. This will then guide the drafting process, making sure your essay has a clear structure.

2. Use Vivid, Sensory Language

A descriptive essay should paint a picture for your reader. And this means you need to use vivid, exciting language rather than a formal, academic tone. Ideas for making your essay more linguistically engaging include:

  • Using sensory language to evoke how something looked, smelled, etc.
  • Writing in the present tense to make the situation feel immediate.
  • Describing feelings and thoughts elicited by the subject of your essay.
  • Looking for dynamic adjectives and adverbs to use (e.g., you could say something made you “happy,” but “elated” or “delighted” may be stronger).
  • Using metaphors, similes, and other literary techniques .

Keep your introduction in mind while writing. The language you use should serve the “thesis” you set out there, drawing the reader’s attention to specific aspects of the thing you’re describing.

3. Show, Don’t Tell

“Show, don’t tell” refers to a technique used by authors to make their writing more engaging. Essentially, all this means is using action, description, and dialogue to paint a picture for the reader rather than simply stating something in plain language. We can see the difference below:

Telling: Miss Hardy was an engaging speaker.

Showing: When Miss Hardy spoke, everyone listened. Her voice bubbled with enthusiasm, bringing even the most mundane subjects to life.

In the first sentence, we simply tell the reader that Miss Hardy was an engaging speaker. But in the second, we try to help the reader picture being in her class, listening to her speak. And by engaging the reader’s imagination like this, we can make our description more memorable.

4. Editing and Proofreading Your Descriptive Essay

Once you have a first draft, you’ll be ready to start editing. The idea here is to go back over your essay – at least once, but possibly multiple times – to look for ways you could improve it. This drafting process may involve:

  • Making sure your writing is clear, well structured, and impactful.
  • Rewriting passages that feel clichéd or that could be stronger.
  • Reading your essay out loud to see how well it flows.
  • Ensuring that the central theme of your essay is present throughout.

And when you’ve finished redrafting, go through the essay one more time to remove any typos that remain. Alternatively, you can submit your descriptive essay for proofreading. With the expert eye of a professional editor on your side, you can be confident your writing is the best it can be.

Reading Worksheets, Spelling, Grammar, Comprehension, Lesson Plans

50 Descriptive Essay Topics

Make your reader see, smell, hear and feel with these inspirational descriptive essay topics ! We’ve collected 50 descriptive essay topics to sprout some flowery language. Our descriptive essay topics are designed to spark creative thinking and can be modified for students in elementary, middle and high school. They are grouped by topic for easy student and teacher reference. Feel free to print the entire list for plenty of inspiration for your next descriptive essay assignment!

Descriptive Essay Topics: Place

  • Describe your favorite place.
  • Describe your ideal bedroom.
  • Describe the house in which you grew up.
  • Describe what the first house on the moon would look like.
  • Describe some of your favorite places in your hometown.
  • Describe a peaceful place that you’ve visited.
  • Describe a place that exists only in your imagination.
  • Describe a friend’s or family member’s house where you enjoy spending time.
  • Describe your perfect fantasy vacation destination.
  • Describe your favorite store.
  • Describe your favorite teacher’s classroom.
  • Describe a museum that you’ve visited recently.
  • Describe a place you have dreamed about that doesn’t exist in real life.
  • Describe a place where your pet likes spending time.
  • Describe an outdoor place that you know well.

Descriptive Essay Topics: People

  • Describe your favorite person.
  • Describe each of your family members.
  • Describe a famous person that you would like to meet.
  • Describe one of your friends.
  • Describe one aspect of someone that you like (for example: laugh, style of dress, words that the person likes to use, etc.)
  • Describe yourself to someone who has never met you.
  • Describe the average human to an alien who has never before seen a person.
  • Describe your pet.
  • Look at some old family photos and describe an older family member as he or she was when at your age.
  • Describe someone whom you miss.

Descriptive Essay Topics: Objects

  • Describe an object that is special to you.
  • Give a tour of one room in your house by describing the most important objects in that room.
  • Describe one of your favorite outfits.
  • Describe your favorite toy as a child.
  • Describe how you get around (for example: a bicycle, skateboard, sneakers, your parents’ car, the school bus).
  • Describe your favorite piece of furniture where you like to spend time and relax.
  • Describe something that you would bury in a time capsule to tell people about what life is like today.
  • Describe an object that has been in your family for a long time.
  • Choose a piece of food to eat; then, write a description of it that includes the way it looks, smells and tastes.
  • Describe a smartphone to a time traveler from the 1900s.

Descriptive Essay Topics: Memories

  • Describe your oldest memory.
  • Describe your best summer vacation.
  • Describe a memorable concert you attended.
  • Describe a memorable trip you took.
  • Describe a special time that you and your family had together.
  • Describe the first time you met one of your friends.
  • Describe a time you met someone famous.
  • Describe one of your happiest memories.
  • Describe one of your saddest memories.
  • Describe a time that you felt scared.
  • Describe a time that you felt excited.
  • Describe a time that something totally unexpected happened.
  • Describe a memory of someone whom you miss.
  • Describe one of your most memorable first days of school.
  • Describe one of your most embarrassing moments.

Looking for more essay topics? See also: Compare and Contrast Essay Topics, Cause and Effect Essay Topics, Narrative Essay Topics, and Persuasive Essay and Speech Topics.

Descriptive Essay Writing

Descriptive Essay Examples

Barbara P

Amazing Descriptive Essay Examples for Your Help

Published on: Jun 21, 2023

Last updated on: Mar 1, 2024

Descriptive essays are among the most commonly assigned essays. This type of essay enhances students' writing skills and encourages them to think critically.

A descriptive essay is often referred to as the parent essay type: other essays, such as argumentative, narrative, and expository essays, draw on its descriptive techniques. Writing descriptively also trains students to imagine a whole scene in the mind by appealing to the senses.

It is assigned to high school students and to students at all other academic levels. Students make use of the human senses, such as touch and smell, to make the descriptive essay more engaging for readers.

Examples make it easier for readers to understand concepts. A descriptive essay can also include several different types of description.

Here are some amazing examples of a descriptive essay to make the concept easier for you. 

Descriptive Essay Example 5 Paragraph

The five-paragraph format is the most common method of composing an essay. It has five paragraphs in total, in the following sequence:

  • Introduction
  • Body Paragraph 1
  • Body Paragraph 2 
  • Body Paragraph 3
  • Conclusion 

Following is an example of a descriptive essay written using the famous 5 paragraph method. 

5 Paragraph Descriptive Essay

Descriptive Essay Example About A Person

Descriptive essays are the best option when it comes to writing about a person. A descriptive essay is written using the five human senses, which helps create a vivid image in the reader’s mind and convey what the writer is trying to say.

Here is one of the best descriptive essay examples about a person. Read it thoroughly and try to understand how a good descriptive essay is written on someone’s personality.

Descriptive Essay Example About a Person

Descriptive Essay Example About A Place

If you have visited a good holiday spot, or any other place, and want to let your friends know about it, a descriptive essay can help you explain every detail and moment you had at that place.

Here is one of the good descriptive essay examples about a place. Use it as a sample and learn how you can write such an essay. 

Descriptive Essay Example for Grade 6

Descriptive essays are frequently assigned to school students. This type of essay helps students enhance their writing skills and see things in a more analytical way.

If you are a sixth grader looking for a good descriptive essay example, you are in the right place.

Descriptive Essay Example for Grade 7

Here is one of the best descriptive essay examples for grade 7. 

Descriptive Essay Example for Grade 8

If you are looking for some amazing descriptive essay examples for grade 8, you have already found one. Look at the given example and see what a well-written descriptive essay looks like. 

Descriptive Essay Example for Grade 10

Essay writing is an inevitable part of a student's academic life . No matter your grade, you will get to write some sort of essay at least once. 

Here is an example of descriptive essay writing for grade 10. If you are a student of this grade, this example might help you complete your assignment.

Descriptive Essay Example for Grade 12

If you are a senior student and looking for some essay examples, you are exactly where you should be. 

Use the below-mentioned example and learn how to write a good essay according to the instructions given to you. 

Descriptive Essay Example College

Descriptive essays are a great way to teach students how they can become better writers. Writing a descriptive essay encourages them to see the world more analytically.

Below is an example that will help you and make your writing process easy.

College Descriptive Essay Example

Descriptive Essay Example for University

Descriptive essays are assigned to students at all academic levels. University students are also given descriptive essay writing assignments. Since they study at a higher educational level, they are often given more difficult and more detailed topics.

See the example below and know what a descriptive essay at the university level looks like. 

Short Descriptive Essay Example

Not every descriptive essay is written in great detail; the topic determines how long the essay will be.

For instance, look at one of the short descriptive essay examples given below. See how the writer has conveyed the concept in a composed way. 

Objective Descriptive Essay Example

When writing an objective description essay, you focus on describing the object without conveying your emotions, feelings, or personal reactions. The writer relies on sight, sound, or touch to bring the picture painted by words to life in the reader's mind.

Here is an example that you can use for your help. 

Narrative and Descriptive Essay Example

A narrative descriptive essay can be a great way to share your experiences with others. It is a story that teaches a lesson you have learned. The following is an example of a perfect narrative descriptive essay to help you get started.

How to Start a Descriptive Essay? - Example

If you don't know how to start your descriptive essay, check this example and create a perfect one. 

How to Start a Descriptive Essay - Example

Subjective Descriptive Essay Example

It is a common concept that a descriptive essay revolves around one subject. Be it a place, person, event, or any other object you can think of. 

Following is one of the subjective descriptive essay examples. Use it as a guide to writing an effective descriptive essay yourself.

Writing a descriptive essay is a time-consuming and tricky task. It calls for strong writing, analytical, and critical thinking skills. It is also a type of essay that a student cannot avoid or bypass.

But if you think wisely, work smart, and stay calm, you can get through it easily. Learn how to write a descriptive essay from the short guide given below.

How to Write a Descriptive Essay?

A writer composes a descriptive essay from knowledge and imagination, describing what he or she has seen, experienced, or heard from someone. For a descriptive essay, it is important to stay focused on one point. The writer should also use figurative language so that the reader can picture the situation.

The following are some very basic yet important steps that can help you write an amazing descriptive essay easily. 

  • Choose a Topic

For a descriptive essay, choose a broad topic that allows you to express yourself freely. Also, make sure that the topic you choose is not overdone; an overdone topic will not grab the attention of your intended audience. Check out our descriptive essay topics blog for a variety of intriguing topic suggestions.

  • Create a Strong Thesis Statement

A thesis statement is the essence of any academic writing. Once you have selected your descriptive essay topic, create a strong thesis statement for your essay.

A thesis statement is a sentence or two that explains the whole idea of your essay to the reader. It is stated in the introductory paragraph of the essay. The word choice for creating the thesis statement must be very expressive, composed, and meaningful. Also, use vivid language for the thesis statement.  

  • Collect the Necessary Information

Once you have created the thesis statement and written your essay introduction, it's time to move on to the body paragraphs.

Collect all necessary information related to your topic. You would be adding this information to your essay to support your thesis statement. Make sure that you collect information from authentic sources. 

To enhance your essay, make use of some adjectives and adverbs. To make your descriptive essay more vivid, try to incorporate sensory details like touch, taste, sight, and smell.

  • Create a Descriptive Essay Outline

An outline is yet another necessary element of your college essay. The descriptive essay outline gives the reader a sense of logic and serves as a guide for the essay.

In the outline, you need to write an introduction, thesis statement, body paragraphs and end up with a formal conclusion.

  • Proofread Your Essay

Proofreading is a simple procedure in which the writer revises the written essay to rectify any spelling or grammatical mistakes. Proofreading produces high-quality content and gives it a professional touch.

You might be uncertain about whether you can write a descriptive essay good enough to impress your teacher. This worry is very common, so there is no need to stress.

Barbara P (Literature)

Barbara is a highly educated and qualified author with a Ph.D. in public health from an Ivy League university. She has spent a significant amount of time working in the medical field, conducting a thorough study on a variety of health issues. Her work has been published in several major publications.


Humanities LibreTexts

3.5: Descriptive Essays

  • Page ID 107758

  • Kathryn Crowther et al.
  • Georgia Perimeter College via GALILEO Open Learning Materials

Writing a Description Essay

Choosing a subject is the first step in writing a description essay. Once you have chosen the person, place, or object you want to describe, your challenge is to write an effective thesis statement to guide your essay. The remainder of your essay describes your subject in a way that best expresses your thesis. Remember, you should have a strong sense of how you will organize your essay. Choose a strategy and stick to it. Every part of your essay should use vivid sensory details. The more you can appeal to your readers’ senses, the more they will be engaged in your essay. You can read two sample essays at the end of this section.

Sample Thesis Statement

Although Minnesota may seem drab and cold to outsiders, natives of the state find it a wonderful place to live.

We can see in this thesis statement that the writer will attempt to show the aspects of Minnesota that make it a great place to live. After drafting a thesis statement, you should come up with a list of sensory words that provide vivid detail and support the thesis. You may start by thinking about the five senses. What does your particular place look, smell, feel, taste, and sound like? How can you best describe these senses so the reader feels what you feel? By organizing the elements of descriptive language into easier-to-handle sections, like the five senses, you can engage more specifically with the elements of the description that are most useful.

Order of Presentation

The writer in this case could choose to present the positive aspects of Minnesota in terms of the seasons and weather changes. The details could be presented linearly, starting with spring and going through the winter, highlighting the aspects of each season that most closely support the thesis, that Minnesota is a great place to live.

Prior to starting the essay, give some thought to the audience of your piece. Who is going to read the essay, and what effect would you like it to have upon the readers? An awareness of audience is important in choosing the level of formality you take with your writing. Knowing your audience will also help you distinguish which details to include throughout your essay. Assume that your audience knows very little or nothing about your subject matter, and include details that may seem obvious to you.

Example Audience: In this particular essay, the writer wants to show an outsider to the state why Minnesota natives are so happy to live there. The essay should help break down stereotypes for those outsiders about Minnesota’s cold weather and apparent drabness. Because the essay is designed for those who do not live in Minnesota, and maybe have never been there, it is important to include details about the state that may seem obvious to a native.

With the preparatory work complete, it is time now to begin writing your essay. Use your thesis statement to begin to construct an introductory paragraph. The introduction should set up the basis for your essay, and the thesis statement should state its purpose.

Example Introduction

Many who have not traveled to the state of Minnesota only hear of its cold weather and boring reputation. They are sure missing out on the great opportunities that Minnesota affords. Each season offers different senses that native Minnesotans and tourists know and love. Although Minnesota may seem drab and cold to outsiders, natives of the state find it a wonderful place to live.

With the introduction complete, it is time to start constructing the body paragraphs of your essay. Each body paragraph should have a central theme in itself, and that theme should be represented in a topic sentence. Consequently, each sentence of the paragraph should relate to and support the topic sentence. The body paragraphs are where the majority of the details should be given. When writing the first draft of your descriptive essay, include as many details as is reasonably possible. You can always eliminate the ones that do not serve the essay as well when you are revising your draft. In the case of the Minnesota nature essay, we have decided to set up the body paragraphs in terms of season, starting with spring.

Example Body Paragraph

Spring in Minnesota brings new life to the state after the long winter season. The rain washes the landscape clean, leaving its fresh aroma for all to enjoy. The flowers soak up the golden sun’s rays and begin to show their vibrant colors. The first birds can be seen and heard throughout the woods and fields, telling their stories in beautiful songs. The lakes begin to show their glossy finish as the ice melts away slowly under the heat of the season.

With the body paragraphs complete, it is time to bring the essay to a close with the conclusion. The conclusion should draw a conclusion based on what has been presented throughout the body of the essay. It needs to return to the thesis, but not in an overt way. The conclusion should give the reader a final sense of what the essay was meant to portray. Remember that there should not be any new material introduced in the conclusion, and the way it is worded should give the reader a sense of finality.

Example Conclusion

The variety of activities and distinct seasons found in Minnesota reveal the diverse beauty of this state. As one considers the benefits of each season, it becomes clearer why so many native Minnesotans are content with their home state. Minnesota is truly a wonderful place to live.

With the essay complete, it is time to reread and revise your essay (also see revision sections of this textbook). Read your first draft and pinpoint all of the descriptor words you used. If possible, go back and add more after the ones you already used in the essay. If you can, read your essay aloud to a friend and have him/her tell you what images are vivid and what images need more development. Rework any images that are cloudy with more descriptions. Also, check to see if your descriptions have made use of all of the five senses: sound, smell, texture, sight, and taste. Repeat these steps as many times as necessary until you are happy with your product.

Sample Descriptive Essays

America's Pastime

As the sun hits my face and I breathe in the fresh air, I temporarily forget that I am at a sporting event. But, when I open my eyes and look around, I am reminded of all things American. From the national anthem to the international players on the field, all the sights and sounds of a baseball game come together like a slice of Americana pie.

First, the entrance turnstiles click and clank, and then a hallway of noise bombards me. All the fans’ voices coalesce in a chorus of sound, rising to a humming clamor. The occasional, “Programs, get your programs, here!” jumps out through the hum to get my attention. Navigating my way through the crowded walkways of the stadium, moving to the right of some people and to the left of others, I eventually find the section number where my seat is located. As I approach my seat, I hear the announcer’s voice echo around the ball park, “Attention fans. In honor of our country, please remove your caps for the singing of the national anthem.” His deep voice echoes around each angle of the park, and every word is heard again and again. The crowd sings and hums “The Star-Spangled Banner,” and I feel a surprising amount of national pride through the voices. I take my seat as the umpire shouts, “Play ball!” and the game begins.

In the fifth inning of the game, I decide to find a concessions stand. Few tastes are as American as hot dogs and soda pop, and they cannot be missed at a ball game. The smell of hot dogs carries through the park, down every aisle, and inside every concourse. They are always as unhealthy as possible, dripping in grease, while the buns are soft and always too small for the dog. The best way to wash down the Ball Park Frank is with a large soda pop, so I order both. Doing my best to balance the cold pop in one hand and the wrapped-up dog in the other, I find the nearest condiments stand to load up my hot dog. A dollop of bright green relish and chopped onions, along with two squirts of the ketchup and mustard complete the dog. As I continue the balancing act between the loaded hot dog and pop back to my seat, a cheering fan bumps into my pop hand. The pop splashes out of the cup and all over my shirt, leaving me drenched. I make direct eye contact with the man who bumped into me. He looks me in the eye, looks at my shirt, and tells me how sorry he is. I just shake my head and keep walking. “It’s all just part of the experience,” I tell myself.

Before I am able to get back to my seat, I hear the crack of a bat, followed by an uproar from the crowd. Everyone is standing, clapping, and cheering. I missed a home run. I find my aisle and ask everyone to excuse me as I slip past them to my seat. “Excuse me. Excuse me. Thank you. Thank you. Sorry,” is all I can say as I inch past each fan. Halfway to my seat I can hear discarded peanut shells crunch beneath my feet, and each step is marked with a pronounced crunch.

When I finally get to my seat I realize it is the start of the seventh inning stretch. I quickly eat my hot dog and wash it down with what is left of my soda pop. The organ starts playing and everyone begins to sing “Take Me Out to the Ball Game.” While singing the song, putting my arms around friends and family with me, I watch all the players taking the field. It is wonderful to see the overwhelming number of players on one team from around the world: Japan, the Dominican Republic, the United States, Canada, and Venezuela. I cannot help but feel a bit of national pride at this realization. Seeing the international representation on the field reminds me of the ways that Americans, though from many different backgrounds and places, still come together under common ideals. For these reasons and for the whole experience in general, going to a Major League Baseball game is the perfect way to glimpse a slice of Americana.

Student Essay

In the following student essay, notice how the writer uses sensory details to describe not only the visual appearance of Dr. Martin Luther King, Jr.’s tomb, but also the experience of visiting such a historically significant and emotionally moving monument. Pay particular attention to the organization of the description; how does the author move us around the monument and describe its characteristics? Is it effective?

Professor Smith

English 1101

11 June 2014

The King’s Tomb

The water is always so beautiful, a hypnotic shade of baby blue, with a few autumn colored leaves floating in the ripples made by the wind. This isn’t a natural body of water. No wildlife swim in the shallow waves, but this water is as full of life as any ocean. In the middle of what is fittingly called the Reflecting Pool lies the closest thing African Americans have ever had to royalty. Here lie the remains of Dr. Martin Luther King and Coretta Scott King.

Nestled between the King Center for Nonviolent Social Change and the original Ebenezer Baptist Church is a beautiful white marble monument, warmly bathed in the lights circling the tomb of our late civil rights leaders. Following Dr. King’s assassination in April 1968, he was first interred at South View Cemetery, a final resting place largely reserved for African Americans during that period. It took nearly a decade before he was exhumed and placed in the beautiful ivory stone structure that he now shares with his beloved wife Coretta. The tomb, erected in 1977, sits within the south end of the Reflecting Pool. Seemingly suspended on the bright blue water, the tomb displays scriptures that only capture a small portion of the legacy left by these great leaders. Engraved on Dr. King’s portion are the words, “Free at last, Free at last, Thank God Almighty I’m free at last!” from his pivotal “I Have a Dream” speech given during the March on Washington in 1963. I can’t help but attempt to recite the mantra in my head with the same bravado and conviction as Dr. King had when he gave the speech over fifty years ago. While the saying is a beautiful incantation, fit for a King, the cost at which that freedom was attained is still heartbreaking.

In a scene reminiscent of Romeo and Juliet, Mrs. Coretta Scott King, who passed away in January 2006 after a prolonged illness, lies next to her slain husband. For a short period following her death Mrs. King was interred in a smaller yet equally beautiful tomb directly across from her late husband. Spectacular floral arrangements surrounded her tomb as scores of mourners came from afar to pay their respects to the First Lady of the Civil Rights Movement. In November 2006, she was laid to rest in a beautiful new tomb beside her husband. The words “And now abide Faith, Hope, Love, These Three, but the greatest of these is Love,” emblazon her final resting place. No truer words could describe her legacy.

Auburn Avenue, shrouded in darkness, is void of people aside from the few vagrants that aimlessly roam the streets. Heat from the Eternal Flame warms my back as I stare off into space. The brilliant glow of the LED lights strategically placed around the tomb and the amber flicker of the Eternal Flame are the only lights that seem to suit this moment. Kneeling as if I’m preparing to pray, I take a moment to reflect. Through my clenched eyes I can hear the soft splashes of the water, the gas fueled roar of the Eternal Flame. The ambient noise of car horns, traffic and construction fade to nearly a whisper. I envision the March on Washington. I can feel the sting of water hoses pelting my black skin. I can hear the sharp sonics of police dogs barking. The feeling is overwhelming. My eyelashes clump together from the tears winning their battle against my eyelids. Nearby is a place of worship, a place where anyone can still feel the spirit of past congregations, a place where the walls hold almost as much history as any Smithsonian exhibit.

Just a few feet away sits the original Ebenezer Baptist Church, a beautiful, rustic old building left largely intact from the days of Atlanta’s past. Walking inside is like stepping into a time warp, instantly sending you to the heart of the Civil Rights Movement. With the exception of a few strategically placed speakers, the church is left in its pure form. Dr. King’s voice echoes through wooden pews playing his famous “Drum Major” speech, given during his final sermon at Ebenezer on February 4, 1968. With closed eyes, I have difficulty telling what era I am in. Given with almost Machiavellian prediction and passion, ten minutes engulfed with his powerful words makes me feel as though I’ve been baptized, born again.

Surrounded by reminders of one of our history’s darkest times, this place brings me peace. There’s an aura in this place, a powerful spirit that infiltrates my conscience with thoughts of struggle, loss, and freedom. The reality of this place forces my mind to reevaluate my own mortality. Even with the knowledge of how Dr. King was vilified, degraded, and executed, his death serves as a shining beacon of light, a lone ray of sun through the seemingly endless cloud of racism and intolerance. Coretta’s grace, beauty, and resilience in the face of unspeakable tragedy and injustice are incomparable. Her social work and philanthropy should be an inspiration to women from all walks of life.

The legacy that Dr. and Mrs. King leave behind is an unfulfilled one. Equality in America has improved since Dr. King’s assassination but his dream is still unrealized. There is turmoil within the King family regarding funding and management of the King Memorial, leaving the future of this serene place uncertain. Engraved on the Stone of Hope, a newly completed Martin Luther King Jr. Memorial in Washington, D.C., reads, “Out of the Mountain of Despair, a Stone of Hope.” Although we still have a mountain to climb, The King’s Tomb is surely my Stone of Hope.

External Links

Checklist of Things to Consider ( https://tinyurl.com/y7zegezs ) when writing a description.

Suzanne Berne visits New York and describes her impressions in "Where Nothing Says Everything" ( https://tinyurl.com/yboc9m9s ), also called "Ground Zero." Another link to the story is here ( https://tinyurl.com/y99fchlw ).

Contributors and Attributions

Adapted from  Successful College Composition (Crowther et al.) . Sourced from  LibreTexts , licensed under  CC BY-NC-SA  .

Adapted from  Let's Get Writing (Browning, DeVries, Boylan, Kurtz and Burton) . Sourced from  LibreTexts , licensed under  CC BY-NC-SA  .


How to Start a Descriptive Essay

Last Updated: May 14, 2024 Fact Checked

This article was co-authored by Jake Adams . Jake Adams is an academic tutor and the owner of Simplifi EDU, a Santa Monica, California based online tutoring business offering learning resources and online tutors for academic subjects K-College, SAT & ACT prep, and college admissions applications. With over 14 years of professional tutoring experience, Jake is dedicated to providing his clients the very best online tutoring experience and access to a network of excellent undergraduate and graduate-level tutors from top colleges all over the nation. Jake holds a BS in International Business and Marketing from Pepperdine University. This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources. This article has been viewed 113,923 times.


Brainstorming Topics for the Essay

Step 1 Choose a person to describe.

  • If you are writing the descriptive essay for a college application, you may choose a person who is a role model or a mentor to you. Describing this person in the essay will give you the chance to discuss why this person is important to you and the lessons you have learned from this person.

Step 2 Describe an object.

  • For example, you may choose your favorite childhood toy as the topic for the essay. You could then describe the toy and what it meant to you growing up.

Step 3 Select a place to describe.

  • For example, you may choose the most beautiful place you have ever been to. You can then describe the experience of the place and how it made you feel.

Step 4 Pick an event or memory to describe.

  • For example, you may choose the first time you got your period or the first time you visited a relative in the hospital.

Outlining the Essay

Step 1 Go for a chronological pattern.

  • Paragraph 1: Introduction
  • Paragraph 2: Scene 1
  • Paragraph 3: Scene 2
  • Paragraph 4: Scene 3
  • Paragraph 5: Conclusion
  • You can use five paragraphs for this outline or have more than one paragraph for each scene.

Step 2 Use a spatial pattern.

  • Paragraph 2: Location 1
  • Paragraph 3: Location 2
  • Paragraph 4: Location 3

Step 3 Try a climatic pattern.

  • Paragraph 2: Least important point or detail
  • Paragraph 3: Second least important point or detail
  • Paragraph 4: Key point or detail

Step 4 Create a thesis...

  • For example, if you are writing about a person who is your role model in the essay, your thesis statement may be, “Based on her actions that day in my 6th grade classroom, she taught me how to rise above negativity and be confident in my abilities as an artist.”

Creating a Strong Opening for the Essay

Step 1 Begin with a hook first line.

  • For example, you may describe the first time you held an important object, “The first time I held the All American Girl doll in my hands, with its porcelain skin and glassy blue eyes, I swore to protect it with my life.”

Step 2 Provide context and background.

  • For example, you may briefly explain why the object was so significant to you based on your experience or knowledge at the time. You may write, “Up to this point, I had never owned a doll before and while other little girls waved around their dolls in the playground, I had to wait until my fifth birthday to get my own.”

Step 3 Use sensory details.

  • For example, rather than write “The doll was pretty,” you may write with sensory detail. “The doll felt soft and cold in my hands. It smelled like flowers and baby powder. It sounded hollow when I pressed it to my chest.”

Step 4 Show, rather than tell.

  • For example, you may describe how it feels to be in your childhood home by writing, “The best memories in my childhood home appear on the walls, dents, scratches, and markings made by my siblings and I when we wrestled or ran around inside.”
  • If you are writing about a person, use examples of their behavior to show the reader their character, rather than simply tell the reader what to think.
  • For example, you may write, “Mrs. Sands showed me compassion by always taking the time to work with me after class. I would sit on the small wooden chair by her desk, pencil in hand, while she explained how to conjugate a verb. 'To be,' she said, her voice patient but firm.”




Caleb S.

Descriptive Essay - A Complete Guide



Have you ever found yourself struggling to paint a vivid picture with your words, to capture the essence of a scene, person, or experience in your writing?

Don’t worry, you’re not alone! Many writers face this challenge when tasked with crafting descriptive essays.

For that, MyPerfectWords.com has come up with a solution!

In this blog, you’ll get easy steps to write good descriptive essays. Along with a step-by-step guide, you’ll also get impressive example essays to learn from.

With expert examples and helpful tips, you'll discover the secrets to crafting captivating descriptive essays. 

So let’s get into it!


  • 1. What Is a Descriptive Essay?
  • 2. Elements of a Descriptive Essay
  • 3. How to Write a Descriptive Essay? 6 Steps
  • 4. Descriptive Essay Topics
  • 5. Descriptive Essay Examples
  • 6. Tips for Writing an Effective Descriptive Essay

What Is a Descriptive Essay?

A descriptive essay is defined as follows:

“It is a type of essay that is used to describe an event, a place, a person, or anything in detail.”

In a descriptive essay, you're not merely telling the reader about something; you're showing it to them. You're using your powers of observation and imagination to transport your audience to the scene you're describing. 

Whether it's a bustling city street, a serene natural landscape, a beloved childhood memory, or a complex character in a novel, a well-crafted descriptive essay can make the subject come alive in the reader's mind.

Purpose of a Descriptive Essay

The purpose of a descriptive essay is to evoke a strong, sensory experience in the reader's mind. 

Unlike other forms of writing that may aim to inform, persuade, or argue, the primary objective of a descriptive essay is to create a detailed and vivid portrayal of a subject. 

Whether you're describing a person, place, object, or experience, the goal is to transport your audience to that specific moment or location. 

This allows them to feel as if they are seeing, hearing, touching, tasting, and smelling what you're describing.


Types of Descriptive Essay

Descriptive essays come in various forms, each serving a unique purpose and style of writing. 

Here are some common types of descriptive essays:

  • Spatial Descriptive Essays

These essays focus on describing a specific location or setting. Whether it's a serene beach or a bustling city street, spatial descriptive essays transport the reader to a particular place, allowing them to visualize it vividly.

  • Personal Descriptive Essays

In these essays, writers delve into their personal experiences, memories, and emotions to create a connection with the reader. They often describe a significant moment in their life, a cherished memory, or a transformative event.

  • Object Descriptive Essays

These essays revolve around the detailed description of a particular object. It could be a family heirloom, a work of art, a unique gadget, or any item that holds personal or historical significance.

  • Character Descriptive Essays

These essays offer a comprehensive portrayal of a character's physical appearance, personality, motivations, and development within the narrative.

  • Process Descriptive Essays

These essays break down a complex process into a step-by-step description. Whether it's a cooking recipe, a scientific experiment, or an artistic technique, process descriptive essays help readers understand how something is done.


Elements of a Descriptive Essay

There are five basic features of a descriptive essay:

  • Sensory Details

A descriptive essay involves arousing the emotions of the readers and creating an association with them. Sensory details paint a picture of the subject for the reader and engage their senses like sight, touch, smell, and taste.

  • Figurative Language

Using figurative language is one of the main elements of a descriptive essay. The use of metaphors, similes, adjectives, and adverbs creates a character sketch of the subject.

This sketch helps the readers feel what the writer felt about the subject and helps them visualize it.

  • Central Theme

The central theme shapes and directs the essay’s content and helps organize the details. It should be well defined and focused on a single point.

  • Precise Language

The effect of your essay depends on the type of language that you have used in it. The language should emphasize the main theme and aim of the essay. Therefore, avoid using vague and ambiguous words. 

  • Organized Ideas    

An organized structure is an essential element of this essay. Also, the chronology, spatial location, and order play an important role.

How to Write a Descriptive Essay? 6 Steps

Writing an effective descriptive essay involves selecting a topic, creating an outline of the parts of the descriptive essay, organizing your ideas, and adding relevant information to the essay.

The following is the process of descriptive writing.

Step# 1. Choose an Engaging Topic

Selecting the right topic is the crucial first step in writing a descriptive essay. Your topic should be captivating, drawing the reader in and keeping them engaged throughout the essay. 

A well-chosen topic sets the stage for an immersive and memorable descriptive experience.

Step# 2. Craft a Detailed Outline

Crafting an outline is essential to ensure your descriptive essay flows cohesively. It serves as a roadmap, helping you organize your thoughts and sensory details in a logical sequence. 

An effective outline keeps you on track to include all the necessary elements that make your description come alive.

Here's the typical descriptive essay structure to follow:

  • Introduction: a hook, background context, and a thesis statement
  • Body paragraphs: sensory details organized around topic sentences
  • Conclusion: closing thoughts that tie the description together

Explore this blog about creating a structured descriptive essay outline for organized essay writing.

Step# 3. Begin with a Compelling Introduction

The essay introduction sets the tone for your descriptive essay. It not only introduces the central theme but also incorporates a strong, captivating opinion that makes an initial impact on the reader.

In this section, you provide a concise preview of what the essay will explore, leaving your readers eager to delve further into your descriptive narrative.

Step# 4. Craft an Informative Thesis Statement

A thesis statement defines the scope and purpose of the essay. It is a narrow subject line, which should be clear and precise. Write the statement in a creative way and choose descriptive words for it. 

Creating mystery in your thesis statement attracts the reader to the body of your essay.


Step# 5. Writing the Body Paragraphs

To create good body paragraphs for your essay, start each one with a topic sentence that relates to your thesis statement. 

Then, use evidence to support your point and explain how it backs up your argument. Make sure your paragraphs are well-organized, especially if you're talking about personal experiences or memories. 

Finally, summarize the main points in each paragraph to keep your essay easy to follow and well-structured. This will help your essay flow smoothly and support your main idea.

Step# 6. Ending with a Strong Descriptive Essay Conclusion

Crafting a strong essay conclusion is your final opportunity to make a lasting impression on your reader. 

This section should effectively tie together the key elements of your essay. Begin by using appropriate transition words like "to finish with," "in conclusion," or "lastly" to signal the end of your essay. 

Moreover, offer insightful closing thoughts that resonate with the reader, whether it's a thought-provoking idea or a call to action.

Descriptive Essay Topics

Whether you are writing about a person or a place, your topic should have good supporting points that explain the topic. 

Choosing an engaging topic will develop curiosity and hook the reader to the last bit of the essay. Here we have prepared a list of amazing descriptive essay topics for you.

  • A Place of Childhood Memories: Describe your favorite childhood location.
  • The Perfect Sunset: Depict a mesmerizing evening sky.
  • A Walk in the Enchanted Forest: Explore the depths of a magical forest.
  • A Day at the Beach: Capture the sights, sounds, and sensations of a beach day.
  • An Abandoned House: Describe the mysterious allure of an abandoned building.
  • The Art of Street Photography: Portray the life and characters of a city street.
  • A Significant Family Heirloom: Tell the story of a cherished family keepsake.
  • A Visit to a Cultural Festival: Share the experience of a vibrant cultural event.
  • A Place of Solitude: Describe a location where you find peace and tranquility.
  • A Family Reunion: Capture the joy of a memorable gathering with family members.
  • My High School Cafeteria: Recount the bustling atmosphere and diverse interactions in the high school cafeteria.

Descriptive Essay Examples

You should read some good essay examples before writing your own. A well-written example shows you how to compile and organize your essay in a structured form.

Below we have provided some amazing examples to help you know the process.

  • A School Lunch Hall Descriptive Essay Example
  • The Weekend Market Descriptive Essay Sample
  • Descriptive Essay on a Historical Place
  • Descriptive Essay on a Teacher That I Remember
  • Descriptive Essay on My Village
  • My Favorite Place Descriptive Essay
  • 5 Paragraph Essay - Descriptive Essay PDF
  • Descriptive Essay About a Person
  • Descriptive Essay Example About a Place

The ultimate aim of this practice is to identify and learn different techniques for writing an impressive descriptive essay. Find more descriptive essay examples here to read and learn from.

Tips for Writing an Effective Descriptive Essay

Writing a compelling descriptive essay requires more than just describing a subject; it demands the skill to make your readers truly see, feel, and experience what you're portraying. Here are some valuable tips to help you craft an effective descriptive essay:

  • Choose an Engaging Topic: Start with a captivating subject that resonates with you and your audience. The more connected you are to the topic, the more vividly you can describe it.
  • Create a Detailed Outline: Plan the structure of your essay. Identify the key elements and sensory details you want to include in your description. A well-organized outline will keep your essay coherent.
  • Use Vivid Language: Your words are the paintbrush for your reader's imagination. Employ descriptive adjectives, strong verbs, and figurative language to create a vivid picture. Paint with words.
  • Engage the Senses: Appeal to all five senses – sight, sound, touch, taste, and smell. This immersive approach helps readers connect with your narrative on a deeper level.
  • Show, Don't Tell: Rather than telling your readers about a subject, show it to them through sensory descriptions and tangible experiences. Let them draw their own conclusions.
  • Use Metaphors and Similes: Comparing your subject to something familiar can enhance the reader's understanding. Metaphors and similes create memorable images.
  • Organize Your Description: Present your sensory details logically. Consider the order in which you introduce them, ensuring a smooth flow that makes sense to the reader.
  • Engage Emotions: Your description should evoke emotions in the reader. Describe not only what is visible but also the feelings and atmosphere surrounding the subject.

Summing It Up

Descriptive essay writing is a skill that requires thorough practice. It involves crafting an engaging story with vivid descriptions that sound as realistic as possible. 

The above-mentioned steps and examples are a great way for students to learn how to write a descriptive essay. 

However, if you still need expert help to write a flawless essay, we’ve got your back.

You can hire an expert descriptive essay writer at MyPerfectWords.com. Our custom essay service is your go-to choice for all types of essay writing help. 

Moreover, we provide non-plagiarized essays and high-quality papers based on your custom requirements. So contact our descriptive essay writing service now to get the best essay help at an affordable price.

Caleb S.

Caleb S. has been providing writing services for over five years and holds a Master's degree from Oxford University. He is an expert in his craft and takes great pride in helping students achieve their academic goals. Caleb is a dedicated professional who always puts his clients first.


IGCSE English Language: Writing Techniques for Descriptive Essays

Descriptive essays are an essential part of the IGCSE English Language exam. They require you to vividly describe a person, place, object, or experience using sensory details and figurative language. Here are some techniques to help you write effective descriptive essays for the IGCSE exam:

 1. Utilize Sensory Language

- Appeal to the Senses: Use vivid and descriptive language that appeals to the five senses (sight, sound, smell, taste, and touch) to create a sensory-rich experience for the reader.

- Create Vivid Imagery: Use specific and concrete nouns, strong adjectives, and active verbs to paint a clear picture in the reader's mind. 

- Use Figurative Language: Incorporate similes, metaphors, and personification to add depth and creativity to your descriptions.

 2. Create a Clear Structure

- Introduction: Begin with a compelling opening sentence or hook that sets the scene and captures the reader's attention. Provide a brief overview of what you will be describing.

- Body Paragraphs: Organize your essay into paragraphs, each focusing on a different aspect of your subject. Use descriptive details and sensory language to describe each aspect in detail.

- Conclusion: Summarize your main points and leave the reader with a lasting impression of your subject.

 3. Use a Variety of Sentence Types

- Simple Sentences: Use simple sentences to convey straightforward information.

- Compound Sentences: Use compound sentences to link related ideas or actions.

- Complex Sentences: Use complex sentences to show causation, contrast, or conditions.

 4. Focus on Detail

- Be Specific: Use specific and detailed descriptions to convey a clear and vivid picture of your subject.

- Avoid Generalizations: Avoid using vague or general language. Instead, provide concrete and specific details that paint a clear picture for the reader.

 5. Include Dialogue and Conversation

- Add Dialogue: Use dialogue to bring your subject to life. Use quotation marks to indicate when someone is speaking.

- Use Conversations: Include conversations or interactions between characters to add depth and realism to your descriptions.

 6. Employ Punctuation for Effect

- Emphasize with Punctuation: Use punctuation marks such as exclamation points, ellipses, and dashes to add emphasis and drama to your descriptions.

- Control the Pace: Use commas and periods to control the pace of your writing and guide the reader through your descriptions.

 7. Show, Don't Tell

- Use Action Verbs: Use action verbs to show the subject in motion and convey a sense of movement.

- Avoid Passive Voice: Avoid using the passive voice, as it can make your writing less dynamic and engaging.

- Be Descriptive: Use adjectives and adverbs to describe the subject in detail.

 8. Revise and Edit

- Review Your Work: Take the time to review and revise your essay. Look for areas where you can add more detail or improve the flow of your descriptions.

- Edit for Grammar and Punctuation: Check your essay for grammar and punctuation errors. Make sure your sentences are clear and concise.

- Seek Feedback: Ask a teacher, friend, or family member to read your essay and provide feedback. Use their suggestions to improve your writing.

By incorporating these techniques into your descriptive essays, you can create vivid and engaging descriptions that will captivate your reader and earn you high marks in the IGCSE English Language exam. Remember to practice regularly and seek feedback to improve your writing skills. Good luck!

AP Guru has been helping students since 2010 gain admissions to their dream universities by helping them in their college admissions and SAT and ACT Prep

Descriptive Essay

Essays are written for various reasons and purposes. Some authors want to inform, some want to expose, while others want to persuade. In descriptive essay writing, however, the essayist composes for the sake of painting a picture with words. It may sound easy and simple, but don't be deceived; there is still more to learn. Read through this article to pick up significant and useful new knowledge.

What Is a Descriptive Essay?

A descriptive essay is a type of writing that aims to vividly describe a person, place, object, or event. In this type of essay, the writer uses sensory details such as sight, sound, smell, taste, and touch to create a clear and vivid image in the reader’s mind. The goal of a descriptive essay is to evoke a strong emotional response or create a vivid impression of the subject being described.

Descriptive Essay Format

Introduction

  • Hook: Start with a sentence that captures the reader’s attention. This could be a striking fact, a question, or a vivid description.
  • Context: Provide some background information to set the scene. Describe the setting, the situation, or the object of the essay.
  • Thesis Statement: End the introduction with a clear thesis statement that outlines the main aspects or the overall impression of your subject.

Body Paragraphs

Each body paragraph should focus on a specific aspect or a detail that contributes to the overall picture you are trying to paint. Use the “show, don’t tell” technique by employing vivid imagery and sensory details.

Paragraph 1: Sight

  • Topic Sentence: Introduce the aspect of sight.
  • Details: Describe what you see in vivid detail. Use adjectives and adverbs to bring the scene to life.
  • Closing Sentence: Wrap up the paragraph by summarizing the importance of the visual details.

Paragraph 2: Sound

  • Topic Sentence: Focus on the sounds related to your topic.
  • Details: Describe what can be heard, whether it’s the background noise, a specific sound related to the subject, or the absence of sound.
  • Closing Sentence: Conclude by explaining how the sounds contribute to the overall impression.

Paragraph 3: Smell

  • Topic Sentence: Highlight the aspect of smell.
  • Details: Describe the aromas and scents. Whether pleasant or pungent, detail how the smell impacts the scene or the subject.
  • Closing Sentence: Summarize how the smell adds to the depth of your description.

Paragraph 4: Touch

  • Topic Sentence: Discuss the sense of touch.
  • Details: Describe the textures and temperatures. Explain how something feels to the touch and why it’s important to your description.
  • Closing Sentence: Link the tactile details to the overall experience.

Paragraph 5: Taste (if applicable)

  • Topic Sentence: Introduce the sense of taste, if relevant.
  • Details: Describe the flavors and the experience of tasting something related to your subject.
  • Closing Sentence: Reflect on how taste enhances the description.

Conclusion

  • Summary: Briefly restate your thesis and summarize the main points of your essay.
  • Significance: Explain the significance of the subject and the impact it has made on you or the impression it leaves.
  • Closing Thought: End with a final thought or reflection, leaving the reader with something to ponder.

Example of Descriptive Essay

“The Sunset at the Beach”

As I walked down the sandy path towards the ocean, the first thing that struck me was the vast expanse of the sea, stretching endlessly towards the horizon. The sun was beginning to set, painting the sky in shades of orange, pink, and purple. The beauty of the sunset at the beach was a breathtaking spectacle that I had come to witness.

Introduction

The beach has always been a place of serenity for me, especially during the sunset. The way the sun dipped below the horizon, leaving behind a tapestry of colors, always seemed magical. On this particular evening, the scene was set for a perfect display of nature’s artistry.

Body Paragraphs

The Vision of the Sunset

As I stepped onto the soft, warm sand, my eyes were immediately drawn to the horizon. The sun, a fiery orb, was slowly descending, casting its golden glow across the sky. The clouds, mere wisps earlier in the day, now looked like cotton candy, stained with hues of pink and lavender. The reflection of the sunset on the water added a layer of brilliance to the scene, with the light dancing on the waves as they gently lapped against the shore.

The Symphony of the Waves

The sound of the waves provided a soothing background melody to the visual spectacle. Each wave crashed against the shore with a rhythm that was both calming and invigorating. In the distance, seagulls called to one another, their cries adding to the orchestral performance of nature. The rustling of the palm leaves in the gentle breeze played a soft, whispering harmony, creating a symphony that only the beach at sunset could offer.

The Aromatic Breeze

With every breath, the salty tang of the sea air filled my lungs, a distinctive aroma that immediately relaxed my body and mind. There was a freshness to it, a reminder of the vast, untamed ocean before me. Mixed with the faint scent of sunscreen and the earthiness of wet sand, the beach’s aroma was invigorating, grounding me in the moment.

The Touch of Nature

As I walked along the water’s edge, the cool water washed over my feet, providing relief from the day’s residual heat. The sand, now cooler than the afternoon sun, felt soft and comforting beneath my toes. Occasionally, a stronger wave would rush further up the beach, encouraging me to dig my feet into the sand, feeling the grains shift against my skin.

Conclusion

The sunset at the beach was not just a visual masterpiece; it was an experience that engaged all the senses. As the sun finally disappeared, leaving behind a sky painted in dark blues and purples, I felt a sense of peace and contentment. The beach at sunset had offered me a moment of beauty, tranquility, and a deep connection with nature. It was an unforgettable scene, etched in my memory, reminding me of the simple, yet profound joys of life.

Descriptive essays generally focus on visualizing a specific topic of interest. With that in mind, showing you what one looks like may be helpful as well. We have carefully gathered the best samples and templates of descriptive essays for you to rely on; here they are:

Bright Topic Ideas for Your Descriptive Essay

The list of possible topic ideas for your descriptive essay is limitless. There are a lot of choices, and sometimes it is really difficult to pick one. If you are undecided about your topic, here are some smart concepts to help you select one.

Descriptive Essay Ideas About People

  • A description of your favorite music genre
  • Treating a popular villain as a good protagonist
  • The right words to compliment your singing idol
  • Why your squad is the best
  • What qualities your future spouse should possess
  • Why your aunt is the best

Descriptive Essay Ideas About Places

  • Why Manila Bay has the best sunset
  • The perfect adjective to describe your hometown
  • Details of your recent vacation destination
  • Why your favorite coffee shop is worth the visit
  • What makes Paris unique
  • The best description of your workplace

Descriptive Essay Ideas About Things

  • Why your wedding ring is the most luxurious
  • A description of your favorite blanket
  • What makes your research paper great
  • A description of your proposed food product
  • Perfume: more than just the bottle
  • Why your bag is great

Descriptive Essay Examples & Templates

  • Descriptive Narrative Essay Example
  • Descriptive Essay Outline Example
  • Short Essay Plan Example
  • Biographical Narrative Essay Example
  • College Narrative Essay Example
  • Personal Narrative Essay Example
  • Short Narrative Essay Example
  • High School Descriptive Essay Example
  • Free Simple Descriptive Essay Plan
  • Basic Descriptive Essay Writing Example (latterdaylearning.org)
  • Short Descriptive Essay Example (trudyamiller.wikispaces.com)
  • Descriptive Essay Structuring Example (colegiobennett.org)
  • Simple Descriptive Essay Example (essssay.com)
  • Narrative Descriptive Essay Example (preservearticles.com)
  • Descriptive Essay Prewriting Example (fileserver.net-texts.com)
  • Personal Descriptive Essay Example (indiacelebrating.com)
  • Descriptive Essay Characteristics Example
  • Descriptive Essay Description Guide Example (ortbinyaminaenglish.yolasite.com)
  • Descriptive Essays about Places Example
  • Excellent Descriptive Essay Example (hoddereducation.co.uk)
  • Descriptive Essay Writing Exercise Example
  • Educational Descriptive Essay Example (owll.massey.ac.nz)
  • Spring Break Descriptive Essay Example (cheylin.com)
  • Descriptive Essay Sentence Writing Example
  • Descriptive Essay Paragraph Guidelines Example
  • Stylish Descriptive Essay Rubric Example
  • Descriptive Essay Writing Techniques Example (multifangled.com.au)
  • Free Descriptive Essay Example (asc.weebly.com)
  • Basic Descriptive Essay Example (hortonskids.org)
  • Sample Descriptive Essay Example (essaytigers.com)
  • Descriptive Essay in PDF Example
  • Printable Descriptive Essay Example
  • Direction Descriptive Essay Example (wba.aplusanywhere.com)
  • Descriptive Essay Scoring Guide (washoeschools.net)
  • Professional Descriptive Essay Example
  • Descriptive Essay Format Example (staff.kings.edu)
  • Assignment Descriptive Essay Example (fd.valenciacollege.edu)

What are the 4 types of essays?

An essay is an extended composition that presents and supports a thesis or proposition. Essays allow authors to express their ideas in various ways. Before composing your own essay, it is important to identify its purpose first, and distinguishing its type is a great place to begin. Correspondingly, here are the four different types of essays:

Narrative Essays: to tell

In its most basic sense, a narrative essay is used when the author wants to tell a story about a real-life experience. This type of essay is expressed from a particular point of view, commonly the author's. In writing your own short narrative essay, apply realistic emotions and appropriate sensory details to give your readers the full flavor of your story. By doing this, you are not simply telling them the story but also engaging them in its sequence and elements. It is also advisable to use verbs that are as vivid and precise as possible. The thesis statement of a narrative essay is commonly found in the opening sentence or the last sentence of the introductory paragraph.

Descriptive Essays: to describe

You may confuse narrative and descriptive essays; however, differentiating between them is easy. Rather than telling a story, a descriptive essay illustrates a specific topic such as a person, place, experience, emotion, or event through words. You don't simply state your experience in this type of essay; you let your reader experience the same thing through your descriptions. In writing your own short descriptive essay, remember that you are writing not to tell but to show. Using sensory and vivid words is also recommended.

Expository Essays: to uncover and clarify

As its name suggests, an expository essay is used to expose or shed light on a subject. This genre of composition aims to explain, illustrate, clarify, or explicate a certain subject for the readers. Thus, an expository essay may include the investigation and evaluation of ideas, derived through comparison and contrast, definition, examples, assessment of cause and effect, and so on. In composing an expository essay, the author sets his or her emotions aside, for this type of essay is based on facts alone. The first-person point of view is generally not used either.

Persuasive Essays: to convince

If expository essays deal in facts, then persuasive essays deal in arguments. The main purpose of a persuasive essay is to win the reader's trust and have them accept your viewpoint, opinion, or proposition as the author. In writing a persuasive essay, your opinions should be supported by relevant facts and sound, logical reasoning. Though the essayist should lay out the necessary details from both sides of the argument, he or she must explain clearly why one side is correct or more favorable than the other.

Despite essays being categorized into four types, it is important to know that an essay is not limited to one type only. In some cases, a narrative essay can be mixed with a short descriptive essay, or a short persuasive essay combined with an expository one. Nevertheless, identifying the purpose of your essay before writing is vital, and knowing these types makes that much easier.

What Is the Purpose of a Descriptive Essay?

Some people would rather watch movies than read books, because an actual image is easier to absorb than one conveyed in writing. This is why it's important for a writer to pay close attention to detail. A descriptive essay should provide the reader with a mental picture of a given subject.

This is especially essential when writing pieces meant for a younger audience, as they have more imaginative minds than the average adult. A writer must use imaginative language creatively so the reader can properly comprehend what is being portrayed. To do so, the writer should also be knowledgeable about the topic. After all, you don't want to give your readers the wrong interpretation.

How to Write a Descriptive Essay

A good descriptive essay comes from a knowledgeable and imaginative mind. Thus, in descriptive writing, it's important to be specific about details. Having seen the samples shown earlier, here is a step-by-step guideline to help you compose a descriptive essay worth reading.

1. Choose a topic.

If there is no given topic, it is best to select one that you are knowledgeable about and familiar with. Considering that your whole descriptive essay will revolve around this subject, choosing a topic you recognize keeps everything simpler. You can freely decide which words are most appropriate, and as a result, it will be easier to describe your topic. Furthermore, your readers may be meticulous and well informed about your subject, so knowing your own topic well is a wise safeguard against making a bad impression.

2. Construct your thesis statement.

Now that you have your topic, it is important to know what specific message you want your reader to focus on while reading your essay. Thus, always provide a thesis statement, the umbrella sentence for all your ideas. Write it in one concise sentence in your introduction and echo it in your conclusion. Often, the thesis statement appears in the last sentence of the introductory paragraph.

3. Gather the necessary information and ideas.

Though you may already be proficient in your topic, it is still advisable to research your specific subject. This way, you are not just gaining new information but also checking the accuracy of your existing knowledge. It also helps to expand your vocabulary, especially your adjectives and adverbs, since descriptive writing involves a great deal of describing. Moreover, focus on the sensory words that correspond to the sight, smell, taste, sound, and touch of your subject.

4. Create an outline.

Once you have gathered all of the significant details, crafting an essay outline will allow you to arrange your content in a logical and chronological order. Being familiar with the different formats for writing an essay will also make a great difference in your composition.

5. Proofread.

After writing your descriptive essay, it might feel perfect already, but most of the time it is not. Read through your entire work and check for any errors in grammar and spelling. Furthermore, asking a well-versed friend to peer-review your work can be extremely useful.

6. Finalize your composition.

The next step after editing is to polish your descriptive essay into its finest version. Make sure that your essay follows a specific format and contains the proper parts of an essay.

Smart Tips for Writing a Descriptive Essay

You now have the fundamentals of the descriptive writing process; nevertheless, it is always worth aiming for something better. Here are some smart tips that will make your essay even more compelling.

Establish a connection with your writing.

The key to writing a good, effective essay is having the passion to write it; thus, when choosing your topic, pick a familiar subject or one that truly makes you curious. Let your interest be the seed of a fruitful composition.

Spend time thinking.

In writing your descriptive essay, let your brain do its job. Do not rush; give yourself an adequate amount of time to ponder the details you should include and the approach you should take. Give yourself a clear plan for your descriptive essay. Moreover, look at your topic from different angles; this will allow you to take a closer look at every detail of your subject.

Apply the word-vomit technique.

The word-vomit technique, also called "free writing," is the spontaneous use of words without considering any rules. It is a good technique for drafting when starting an essay: it allows your ideas to keep flowing without much effort. Once this is done, you can pick out the points that go well with your essay.

Take a break before finalizing it.

Right after writing your composition, your mind is still too familiar with your own wording; it does not really notice the errors and automatically treats them as correct. Allowing your mind to clear for a while will make it easier for you to critique your own work. Furthermore, using grammar-checking software is also a splendid move.


Examples of Language Devices

Hey everyone! I'm working on my college essays and want to use some language devices for better effect. Can you give me examples or ideas of devices I could use to enhance my writing style? Just looking for inspiration!

Hi there! You've got a great approach by seeking to incorporate language devices in your college essays to make your writing more engaging. Here are some examples of devices that could help elevate your work:

1. Imagery - Use vivid and descriptive language that helps paint a picture for your reader. This can include sensory details, such as sight, sound, taste, touch, and smell.

2. Simile - Create comparisons using "like" or "as" to link two different objects or ideas (e.g., "Her laugh was as contagious as a catchy song").

3. Metaphor - Make a direct comparison between two unrelated objects or ideas (e.g., "The world is a stage").

4. Alliteration - Repeat the same consonant sound at the beginning of adjacent or closely connected words (e.g., "Peter Piper picked a peck of pickled peppers").

5. Personification - Give human-like qualities to inanimate objects or abstract concepts (e.g., "The wind whispered softly through the trees").

6. Hyperbole - Use exaggeration or overstatement for emphasis or effect (e.g., "I'm so hungry I could eat a horse").

7. Onomatopoeia - Use words that imitate the sound they describe (e.g., "buzz," "hiss," "clang").

8. Anaphora - Repeat a word or phrase at the beginning of successive clauses to emphasize a point (e.g., "I have a dream...").

9. Oxymoron - Combine two seemingly contradictory words to create a new meaning (e.g., "bittersweet," "deafening silence").

10. Parallelism - Use similar grammatical structures or patterns to create a sense of balance and rhythm (e.g., "I came, I saw, I conquered").

Remember to use these devices sparingly and purposefully, as too much can make your writing come across as forced or overly complex. Make sure they add value to your essay and enhance the reader's understanding of your message. Good luck with your college essays!

About CollegeVine’s Expert FAQ

CollegeVine’s Q&A seeks to offer informed perspectives on commonly asked admissions questions. Every answer is refined and validated by our team of admissions experts to ensure it resonates with trusted knowledge in the field.


All About Adjectives: Examples, Types and Uses


There are countless adjective examples in everyday language, but for simplicity's sake, most adjectives are one-word components that enhance or modify nouns. Any sentence may contain two or more adjectives, but the function of each adjective remains the same.

What Is a Possessive Adjective?

Possessive adjectives are among the most common components of the English language. These adjectives tie a noun or object to a person. While other adjectives describe a noun, possessive adjectives assign it an owner, such as his, her or their.

Although adjectives perform the same function, there are a few different types of specialized adjectives that will take your reading and writing knowledge to the next level. The following categories are some of the most common adjective examples you'll need to describe size, color and other attributes.

1. Absolute Adjectives

Also known as "incomparable" or "ultimate," an absolute adjective describes a quality that cannot be graded or compared: for example, an empty glass or an impossible mission. These specific adjectives produce a sense of finality.

2. Attributive Adjectives

These are often appearance adjectives that describe people, places and things, but they don't always have to be. Attributive adjectives are commonly found right alongside the nouns and pronouns they modify. A beautiful hat or red car both have attributive adjectives that help paint the picture.

3. Comparative Adjectives

As the name implies, a comparative adjective is used to compare and contrast two nouns. Common comparative adjectives include better, worse, larger and smaller.

4. Compound Adjectives

A compound adjective uses two or more words to describe the same noun. Something rural could be "blue-collar," while a poor decision may become "ill-minded." Generally, you can pinpoint compound adjectives by the hyphen, but a hyphen is not a strict requirement.
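
As a rough illustration of the hyphen heuristic, a few lines of code can flag hyphenated word pairs as compound-adjective candidates. The function name is made up for this sketch, and the pattern is only a starting point: it will also match hyphenated nouns, so it is no substitute for a real part-of-speech tagger.

```python
import re

# Heuristic from the text above: many (not all) compound adjectives are
# hyphenated, e.g. "blue-collar" or "ill-minded". This regex finds
# hyphenated word pairs; it cannot tell adjectives from hyphenated nouns.
COMPOUND_PATTERN = re.compile(r"\b[a-z]+-[a-z]+\b", re.IGNORECASE)

def find_hyphenated_candidates(text):
    """Return hyphenated word pairs that may be compound adjectives."""
    return COMPOUND_PATTERN.findall(text)

print(find_hyphenated_candidates(
    "A blue-collar worker made an ill-minded, well-known decision."))
# → ['blue-collar', 'ill-minded', 'well-known']
```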

5. Condition Adjectives

These descriptive adjectives are used to describe the condition of a noun. For instance, a messy desk is in a state of disarray, but it can be cleaned and organized. Therefore, condition adjectives differ from absolute adjectives because they have flux and the ability to change their current state.

6. Demonstrative Adjectives

These adjectives include everyday words like this, that, these and those. A demonstrative adjective adds direct specificity when pointing out a particular noun.

7. Descriptive Adjectives

All adjectives are descriptive words, but not all adjectives fall under the descriptive adjective umbrella. People use these adjectives to describe characteristics and conditions, so they encompass many of the subcategories in this list.

8. Predicate Adjectives

A predicate adjective connects to the noun it modifies through a linking verb, adding a further condition to the clause. Common linking verbs include any form of "be," such as is, was and were, as well as sense verbs like smell, appear and feel.

9. Proper Adjectives

Proper adjectives are derived from proper nouns and convey both a simple description and a strong association. Spanish, Christian and Victorian are all adjectives that describe a noun or pronoun by placing it squarely in a time, region or realm of cultural significance.

10. Qualitative Adjectives

A phrase using qualitative adjectives describes known attributes. For example, in the phrase "He petted the fluffy cat," the word "fluffy" gives you a clear idea of the cat's appearance.

11. Quantitative Adjectives

Quantitative adjectives describe the objective characteristics of a noun. An adjective phrase is likely quantitative if it includes a countable or uncountable factor. Several, few and infinite are all ways to describe nouns with cumulative amounts. An indefinite adjective may also be quantitative.

12. Superlative Adjectives

A superlative adjective describes the supreme characteristic (or superlative form) of a noun. Superlative adjectives include tallest, fattest, fastest and smallest. This common type of adjective describing pronounced characteristics will almost always follow a linking verb.

Adjectives vs. Adverbs

The difference between these two parts of speech is simple: Adjectives describe nouns, whereas adverbs typically describe adjectives and verbs. An adverb will often end in "ly" — curiously, stubbornly, quickly — but not always. The words "always" and "very," for instance, are also adverbs.

Anyone familiar with the classic Disney film Mary Poppins will know the catchy tune describing the absurdly long adjective, "Supercalifragilisticexpialidocious." However, that isn't the longest adjective in the English language. In a sick twist of dark humor, someone who is afraid of long words could be hippopotomonstrosesquippedaliophobic. Good luck saying this 36-letter monster of a word on the first try.

Please copy/paste the following text to properly cite this HowStuffWorks.com article:

The use of cameras and computational algorithms for noninvasive, low-cost and scalable measurement of physiological (e.g., cardiac and pulmonary) vital signs is very attractive. However, obtaining diverse data representing a range of environments, body motions, illumination conditions and physiological states is laborious, time-consuming and expensive. Synthetic data have proven a valuable tool in several areas of machine learning, yet are not widely available for camera measurement of physiological states. Synthetic data offer "perfect" labels (e.g., without noise and with precise synchronization), labels that may not be possible to obtain otherwise (e.g., precise pixel level segmentation maps) and provide a high degree of control over variation and diversity in the dataset. We present SCAMPS, a dataset of synthetics containing 2,800 videos (1.68M frames) with aligned cardiac and respiratory signals and facial action intensities. The RGB frames are provided alongside segmentation maps and precise descriptive statistics about the underlying waveforms, including inter-beat interval, heart rate variability, and pulse arrival time. Finally, we present baseline results training on these synthetic data and testing on real-world datasets to illustrate generalizability.

In reasoning about sequential events it is natural to pose probabilistic queries such as “when will event A occur next” or “what is the probability of A occurring before B”, with applications in areas such as user modeling, language models, medicine, and finance. These types of queries are complex to answer compared to next-event prediction, particularly for neural autoregressive models such as recurrent neural networks and transformers. This is in part due to the fact that future querying involves marginalization over large path spaces, which is not straightforward to do efficiently in such models. In this paper we introduce a general typology for predictive queries in neural autoregressive sequence models and show that such queries can be systematically represented by sets of elementary building blocks. We leverage this typology to develop new query estimation methods based on beam search, importance sampling, and hybrids. Across four large-scale sequence datasets from different application domains, as well as for the GPT-2 language model, we demonstrate the ability to make query answering tractable for arbitrary queries in exponentially-large predictive path-spaces, and find clear differences in cost-accuracy tradeoffs between search and sampling methods.
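
The "probability of A occurring before B" query can always be approximated by plain Monte Carlo sampling from an autoregressive model; the paper's contribution is making such queries tractable and efficient via beam search and importance sampling. A naive baseline, using a hypothetical toy model in place of a real RNN or transformer, looks like this:

```python
import random

# Naive Monte Carlo estimate of the query P(event A occurs before event B)
# under an autoregressive sequence model. The model here is a hypothetical
# stand-in: next_token() emits 'A', 'B', or 'x' (neither) with fixed
# probabilities; a real RNN or transformer would condition on the history.
def next_token(history, rng):
    return rng.choices(["A", "B", "x"], weights=[0.2, 0.1, 0.7])[0]

def prob_a_before_b(n_samples=5000, max_len=200, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        history = []
        for _ in range(max_len):
            tok = next_token(history, rng)
            history.append(tok)
            if tok == "A":
                hits += 1
                break
            if tok == "B":
                break
    return hits / n_samples

# For these toy weights the exact answer is 0.2 / (0.2 + 0.1) ≈ 0.667.
print(prob_a_before_b())
```

The cost of this baseline grows with the path-space being marginalized over, which is exactly why the paper develops search- and sampling-based estimators with better cost-accuracy tradeoffs.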

Multi-modal learning is essential for understanding information in the real world. Jointly learning from multi-modal data enables global integration of both shared and modality-specific information, but current strategies often fail when observations from certain modalities are incomplete or missing for part of the subjects. To learn comprehensive representations based on such modality-incomplete data, we present a semi-supervised neural network model called CLUE (Cross-Linked Unified Embedding). Extending from multi-modal VAEs, CLUE introduces the use of cross-encoders to construct latent representations from modality-incomplete observations. Representation learning for modality-incomplete observations is common in genomics. For example, human cells are tightly regulated across multiple related but distinct modalities such as DNA, RNA, and protein, jointly defining a cell’s function. We benchmark CLUE on multi-modal data from single cell measurements, illustrating CLUE’s superior performance in all assessed categories of the NeurIPS 2021 Multimodal Single-cell Data Integration Competition. While we focus on analysis of single cell genomic datasets, we note that the proposed cross-linked embedding strategy could be readily applied to other cross-modality representation learning problems.

The incorporation of cutting planes within the branch-and-bound algorithm, known as branch-and-cut, forms the backbone of modern integer programming solvers. These solvers are the foremost method for solving discrete optimization problems and thus have a vast array of applications in machine learning, operations research, and many other fields. Choosing cutting planes effectively is a major research topic in the theory and practice of integer programming. We conduct a novel structural analysis of branch-and-cut that pins down how every step of the algorithm is affected by changes in the parameters defining the cutting planes added to the input integer program. Our main application of this analysis is to derive sample complexity guarantees for using machine learning to determine which cutting planes to apply during branch-and-cut. These guarantees apply to infinite families of cutting planes, such as the family of Gomory mixed integer cuts, which are responsible for the main breakthrough speedups of integer programming solvers. We exploit geometric and combinatorial structure of branch-and-cut in our analysis, which provides a key missing piece for the recent generalization theory of branch-and-cut.

We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36.

We study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market. At each step, the agents are presented with a dynamical context, where the contexts determine the utilities. The planner controls the transition of the contexts to maximize the cumulative social welfare, while the agents aim to find a myopic stable matching at each step. Such a setting captures a range of applications including ridesharing platforms. We formalize the problem by proposing a reinforcement learning framework that integrates optimistic value iteration with maximum weight matching. The proposed algorithm addresses the coupled challenges of sequential exploration, matching stability, and function approximation. We prove that the algorithm achieves sublinear regret.

We study a constructive procedure that approximates Gateaux derivatives for statistical functionals by finite-differencing, with attention to causal inference functionals. We focus on the case where probability distributions are not known a priori but also need to be estimated from data, leading to empirical Gateaux derivatives, and study relationships between empirical, numerical, and analytical Gateaux derivatives. Starting with a case study of counterfactual mean estimation, we verify the exact relationship between finite-differences and the analytical Gateaux derivative. We then derive requirements on the rates of numerical approximation in perturbation and smoothing that preserve statistical benefits. We study more complicated functionals such as dynamic treatment regimes and the linear-programming formulation for policy optimization in infinite-horizon Markov decision processes. In the case of the latter, this approach can be used to approximate bias adjustments in the presence of arbitrary constraints, illustrating the usefulness of constructive approaches for Gateaux derivatives. We find that, omitting unfavorable dimension dependence of smoothing, although rate-double robustness permits coarser rates of perturbation size than implied by generic approximation analysis of finite-differences for the case of the counterfactual mean, this is not the case for the infinite-horizon MDP policy value.

In this paper, we study the gyrovector space structure (gyro-structure) of matrix manifolds. Our work is motivated by the success of hyperbolic neural networks (HNNs) that have demonstrated impressive performance in a variety of applications. At the heart of HNNs is the theory of gyrovector spaces that provides a powerful tool for studying hyperbolic geometry. Here we focus on two matrix manifolds, i.e., Symmetric Positive Definite (SPD) and Grassmann manifolds, and consider connecting the Riemannian geometry of these manifolds with the basic operations, i.e., the binary operation and scalar multiplication on gyrovector spaces. Our work reveals some interesting facts about SPD and Grassmann manifolds. First, SPD matrices with an Affine-Invariant (AI) or a Log-Euclidean (LE) geometry have rich structure with strong connection to hyperbolic geometry. Second, linear subspaces, when equipped with our proposed basic operations, form what we call gyrocommutative and gyrononreductive gyrogroups. Furthermore, they share remarkable analogies with gyrovector spaces. We demonstrate the applicability of our approach for human activity understanding and question answering.

We present a novel method for guaranteeing linear momentum in learned physics simulations. Unlike existing methods, we enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers. We combine these strict constraints with a hierarchical network architecture, a carefully constructed resampling scheme, and a training approach for temporal coherence. In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially. In addition, the induced physical bias leads to significantly better generalization performance and makes our method more reliable in unseen test cases. We evaluate our method on a range of different, challenging fluid scenarios. Among others, we demonstrate that our approach generalizes to new scenarios with up to one million particles. Our results show that the proposed algorithm can learn complex dynamics while outperforming existing approaches in generalization and training performance. An implementation of our approach is available at https://github.com/tum-pbs/DMCF.
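
The paper realizes the hard momentum constraint through antisymmetric continuous convolutional layers; the underlying principle is simpler and worth seeing in isolation. If every pairwise update is antisymmetric, i.e. f(i, j) = -f(j, i), the updates cancel in pairs and total momentum is conserved by construction. A minimal 1D sketch (the pairwise force below is a made-up stand-in, not the paper's learned kernel):

```python
# Why antisymmetry conserves linear momentum: equal-and-opposite pairwise
# contributions cancel when summed, so the total momentum never changes.
def pairwise_force(pos_i, pos_j):
    # Antisymmetric by construction: swapping i and j flips the sign.
    return (pos_j - pos_i) * 0.05

def step(positions, velocities, dt=0.1):
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            f = pairwise_force(positions[i], positions[j])
            forces[i] += f   # force on i from j
            forces[j] -= f   # equal and opposite force on j
    new_v = [v + f * dt for v, f in zip(velocities, forces)]
    new_p = [p + v * dt for p, v in zip(positions, new_v)]
    return new_p, new_v

pos, vel = [0.0, 1.0, 3.0], [0.5, -0.2, 0.1]
total_before = sum(vel)
for _ in range(100):
    pos, vel = step(pos, vel)
# Conserved to floating-point precision regardless of the force function,
# as long as it is applied antisymmetrically:
print(abs(sum(vel) - total_before) < 1e-9)  # → True
```

A learned simulator that parameterizes only the antisymmetric part of its convolution kernels inherits this guarantee for free, which is the structural idea behind the paper's hard constraint.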

This paper introduces a general multi-agent bandit model in which each agent is facing a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This makes communication between agents often necessary. This general setting allows one to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [Shi et al., 2021]. In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.

Consider the task of learning an unknown concept from a given concept class; to what extent does interacting with a domain expert accelerate the learning process? It is common to measure the effectiveness of learning algorithms by plotting the "learning curve", that is, the decay of the error rate as a function of the algorithm's resources (examples, queries, etc). Thus, the overarching question in this work is whether (and which kind of) interaction accelerates the learning curve. Previous work in interactive learning focused on uniform bounds on the learning rates which only capture the upper envelope of the learning curves over families of data distributions. We thus formalize our overarching question within the distribution dependent framework of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring a single upper bound which applies uniformly to all distributions. Our main result reveals a fundamental trichotomy of interactive learning rates, thus providing a complete characterization of universal interactive learning. As a corollary we deduce a strong affirmative answer to our overarching question, showing that interaction is beneficial. Remarkably, we show that in important cases such benefits are realized with label queries, that is, by …

We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set. We present both a microscopic analysis anchored by an effective theory and a macroscopic analysis of phase diagrams describing learning performance across hyperparameters. We find that generalization originates from structured representations, whose training dynamics and dependence on training set size can be predicted by our effective theory (in a toy setting). We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a "Goldilocks zone" (including comprehension and grokking) between memorization and confusion. Compared to the comprehension phase, the grokking phase stays closer to the memorization phase, leading to delayed generalization. The Goldilocks phase is reminiscent of "intelligence from starvation" in Darwinian evolution, where resource limitations drive discovery of more efficient solutions. This study not only provides intuitive explanations of the origin of grokking, but also highlights the usefulness of physics-inspired tools, e.g., effective theories and phase diagrams, for understanding deep learning.

Similarity metrics such as representational similarity analysis (RSA) and centered kernel alignment (CKA) have been used to understand neural networks by comparing their layer-wise representations. However, these metrics are confounded by the population structure of data items in the input space, leading to inconsistent conclusions about the \emph{functional} similarity between neural networks, such as spuriously high similarity of completely random neural networks and inconsistent domain relations in transfer learning. We introduce a simple and generally applicable fix to adjust for the confounder with covariate adjustment regression, which improves the ability of CKA and RSA to reveal functional similarity and also retains the intuitive invariance properties of the original similarity measures. We show that deconfounding the similarity metrics increases the resolution of detecting functionally similar neural networks across domains. Moreover, in real-world applications, deconfounding improves the consistency between CKA and domain similarity in transfer learning, and increases the correlation between CKA and model out-of-distribution accuracy similarity.
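
For reference, the linear variant of CKA that the paper builds on is easy to compute directly; the sketch below is a plain-Python version of standard centered linear CKA (rows are the same n data items, columns are features). The deconfounding step proposed in the paper, which regresses out input-space structure first, is not shown.

```python
# Standard centered linear CKA between two representation matrices.
def center(X):
    n = len(X)
    means = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
    return [[x - m for x, m in zip(row, means)] for row in X]

def frob2_cross(X, Y):
    # ||X^T Y||_F^2, computed without materializing the n x n Gram matrix.
    return sum(
        sum(X[i][a] * Y[i][b] for i in range(len(X))) ** 2
        for a in range(len(X[0])) for b in range(len(Y[0])))

def linear_cka(X, Y):
    X, Y = center(X), center(Y)
    return frob2_cross(X, Y) / (frob2_cross(X, X) * frob2_cross(Y, Y)) ** 0.5

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
B = [[2.1, 0.1], [0.1, 2.1], [2.1, 2.1], [0.1, 0.1]]  # exactly 2*A + 0.1
print(round(linear_cka(A, A), 3))  # → 1.0 (identical representations)
print(round(linear_cka(A, B), 3))  # → 1.0 (invariant to scaling and shift)
```

The second example shows the invariance property that makes CKA attractive, and also hints at the confounding problem: CKA responds to the population structure shared by the inputs, not only to functional similarity.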

Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications on privacy, as it can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) (Kothari and Klivans 2014, Goel et al. 2020a, Diakonikolas et al. 2020a) or restricted models such as correlational SQ (Goel et al. 2020b, Diakonikolas et al. 2020b). Prior work hinted at the impossibility of our result: Vempala and Wilmes (2019) showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi (2021) that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from membership queries.

We provide the first complete continuous time framework for denoising diffusion models of discrete data. This is achieved by formulating the forward noising process and corresponding reverse time generative process as Continuous Time Markov Chains (CTMCs). The model can be efficiently trained using a continuous time version of the ELBO. We simulate the high dimensional CTMC using techniques developed in chemical physics and exploit our continuous time framework to derive high performance samplers that we show can outperform discrete time methods for discrete data. The continuous time treatment also enables us to derive a novel theoretical result bounding the error between the generated sample distribution and the true data distribution.

A central problem in online learning and decision making---from bandits to reinforcement learning---is to understand what modeling assumptions lead to sample-efficient learning guarantees. We consider a general adversarial decision making framework that encompasses (structured) bandit problems with adversarial rewards and reinforcement learning problems with adversarial dynamics. Our main result is to show---via new upper and lower bounds---that the Decision-Estimation Coefficient, a complexity measure introduced by Foster et al. in the stochastic counterpart to our setting, is necessary and sufficient to obtain low regret for adversarial decision making. However, compared to the stochastic setting, one must apply the Decision-Estimation Coefficient to the convex hull of the class of models (or, hypotheses) under consideration. This establishes that the price of accommodating adversarial rewards or dynamics is governed by the behavior of the model class under convexification, and recovers a number of existing results, both positive and negative. En route to obtaining these guarantees, we provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures, including the Information Ratio of Russo and Van Roy and the Exploration-by-Optimization objective of Lattimore and György.

Coupling-based normalizing flows (e.g. RealNVP) are a popular family of normalizing flow architectures that work surprisingly well in practice. This calls for theoretical understanding. Existing work shows that such flows weakly converge to arbitrary data distributions. However, they make no statement about the stricter convergence criterion used in practice, the maximum likelihood loss. For the first time, we make a quantitative statement about this kind of convergence: We prove that all coupling-based normalizing flows perform whitening of the data distribution (i.e. diagonalize the covariance matrix) and derive corresponding convergence bounds that show a linear convergence rate in the depth of the flow. Numerical experiments demonstrate the implications of our theory and point at open questions.
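
To make the object of study concrete: a single affine coupling layer passes the first half of its input through unchanged and uses it to condition a scale and shift applied to the second half, which makes the transform exactly invertible. The sketch below uses tiny hand-written functions s and t in place of learned networks; it is an illustration of the RealNVP-style layer, not of the paper's convergence analysis.

```python
import math

# One RealNVP-style affine coupling layer on a 2D input (x1, x2).
# s and t are hypothetical stand-ins for learned neural networks.
def s(x1):  # scale network
    return 0.5 * x1

def t(x1):  # shift network
    return x1 - 1.0

def forward(x1, x2):
    y1 = x1                                # identity on the first half
    y2 = x2 * math.exp(s(x1)) + t(x1)      # conditioned scale-and-shift
    return y1, y2

def inverse(y1, y2):
    x1 = y1
    x2 = (y2 - t(x1)) * math.exp(-s(x1))   # exact analytic inverse
    return x1, x2

x = (0.8, -1.3)
x_back = inverse(*forward(*x))
print(all(abs(a - b) < 1e-10 for a, b in zip(x, x_back)))  # → True
```

Because y2 depends on x2 only through an invertible affine map, the log-determinant of the Jacobian is simply s(x1), which is what makes maximum likelihood training of stacked coupling layers tractable.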

In many practical applications of AI, an AI model is used as a decision aid for human users. The AI provides advice that a human (sometimes) incorporates into their decision-making process. The AI advice is often presented with some measure of "confidence" that the human can use to calibrate how much they depend on or trust the advice. In this paper, we present an initial exploration that suggests showing AI models as more confident than they actually are, even when the original AI is well-calibrated, can improve human-AI performance (measured as the accuracy and confidence of the human's final prediction after seeing the AI advice). We first train a model to predict human incorporation of AI advice using data from thousands of human-AI interactions. This enables us to explicitly estimate how to transform the AI's prediction confidence, making the AI uncalibrated, in order to improve the final human prediction. We empirically validate our results across four different tasks---dealing with images, text and tabular data---involving hundreds of human participants. We further support our findings with simulation analysis. Our findings suggest the importance of jointly optimizing the human-AI system as opposed to the standard paradigm of optimizing the AI model alone.

We show that label noise exists in adversarial training. Such label noise is due to the mismatch between the true label distribution of adversarial examples and the label inherited from clean examples – the true label distribution is distorted by the adversarial perturbation, but is neglected by the common practice that inherits labels from clean examples. Recognizing label noise sheds insights on the prevalence of robust overfitting in adversarial training, and explains its intriguing dependence on perturbation radius and data quality. Also, our label noise perspective aligns well with our observations of the epoch-wise double descent in adversarial training. Guided by our analyses, we proposed a method to automatically calibrate the label to address the label noise and robust overfitting. Our method achieves consistent performance improvements across various models and datasets without introducing new hyper-parameters or additional tuning.

Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science. However, teaching and giving feedback on such software is time-consuming — standard approaches require instructors to manually grade student-implemented interactive programs. As a result, online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs, which critically hinders students’ ability to learn. One approach toward automatic grading is to learn an agent that interacts with a student’s program and explores states indicative of errors via reinforcement learning. However, existing work on this approach only provides binary feedback of whether a program is correct or not, while students require finer-grained feedback on the specific errors in their programs to understand their mistakes. In this work, we show that exploring to discover errors can be cast as a meta-exploration problem. This enables us to construct a principled objective for discovering errors and an algorithm for optimizing this objective, which provides fine-grained feedback. We evaluate our approach on a set of over 700K real anonymized student programs from a Code.org interactive assignment. Our approach provides feedback with 94.3% accuracy, improving over existing approaches by 17.7% and coming within …

Understanding the consequences of mutation for molecular fitness and function is a fundamental problem in biology. Recently, generative probabilistic models have emerged as a powerful tool for estimating fitness from evolutionary sequence data, with accuracy sufficient to predict both laboratory measurements of function and disease risk in humans, and to design novel functional proteins. Existing techniques rest on an assumed relationship between density estimation and fitness estimation, a relationship that we interrogate in this article. We prove that fitness is not identifiable from observational sequence data alone, placing fundamental limits on our ability to disentangle fitness landscapes from phylogenetic history. We show on real datasets that perfect density estimation in the limit of infinite data would, with high confidence, result in poor fitness estimation; current models perform accurate fitness estimation because of, not despite, misspecification. Our results challenge the conventional wisdom that bigger models trained on bigger datasets will inevitably lead to better fitness estimation, and suggest novel estimation strategies going forward.

Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation. While recent works have demonstrated successes on controlling simple sentence attributes (e.g., sentiment), there has been little progress on complex, fine-grained controls (e.g., syntactic structure). To address this challenge, we develop a new non-autoregressive language model based on continuous diffusions that we call Diffusion-LM. Building upon the recent successes of diffusion models in continuous domains, Diffusion-LM iteratively denoises a sequence of Gaussian vectors into word vectors, yielding a sequence of intermediate latent variables. The continuous, hierarchical nature of these intermediate variables enables a simple gradient-based algorithm to perform complex, controllable generation tasks. We demonstrate successful control of Diffusion-LM for six challenging fine-grained control tasks, significantly outperforming prior work.

Mean-Field Game (MFG) serves as a crucial mathematical framework in modeling the collective behavior of individual agents interacting stochastically with a large population. In this work, we aim at solving a challenging class of MFGs in which the differentiability of these interacting preferences may not be available to the solver, and the population is urged to converge exactly to some desired distribution. These setups are, despite being well-motivated for practical purposes, complicated enough to paralyze most (deep) numerical solvers. Nevertheless, we show that Schrödinger Bridge — as an entropy-regularized optimal transport model — can be generalized to accepting mean-field structures, hence solving these MFGs. This is achieved via the application of Forward-Backward Stochastic Differential Equations theory, which, intriguingly, leads to a computational framework with a similar structure to Temporal Difference learning. As such, it opens up novel algorithmic connections to Deep Reinforcement Learning that we leverage to facilitate practical training. We show that our proposed objective function provides necessary and sufficient conditions to the mean-field problem. Our method, named Deep Generalized Schrödinger Bridge (DeepGSB), not only outperforms prior methods in solving classical population navigation MFGs, but is also capable of solving 1000-dimensional opinion depolarization, setting a new state-of-the-art numerical solver …

Conditional inference on arbitrary subsets of variables is a core problem in probabilistic inference with important applications such as masked language modeling and image inpainting. In recent years, the family of Any-Order Autoregressive Models (AO-ARMs) -- closely related to popular models such as BERT and XLNet -- has shown breakthrough performance in arbitrary conditional tasks across a sweeping range of domains. But, in spite of their success, in this paper we identify significant improvements to be made to previous formulations of AO-ARMs. First, we show that AO-ARMs suffer from redundancy in their probabilistic model, i.e., they define the same distribution in multiple different ways. We alleviate this redundancy by training on a smaller set of univariate conditionals that still maintains support for efficient arbitrary conditional inference. Second, we upweight the training loss for univariate conditionals that are evaluated more frequently during inference. Our method leads to improved performance with no compromises on tractability, giving state-of-the-art likelihoods in arbitrary conditional modeling on text (Text8), image (CIFAR10, ImageNet32), and continuous tabular data domains.

Distributional shifts in photometry and texture have been extensively studied for unsupervised domain adaptation, but their counterparts in optical distortion have been largely neglected. In this work, we tackle the task of unsupervised domain adaptation for semantic image segmentation where unknown optical distortion exists between source and target images. To this end, we propose a distortion-aware domain adaptation (DaDA) framework that boosts the unsupervised segmentation performance. We first present a relative distortion learning (RDL) approach that is capable of modeling domain shifts in fine-grained geometric deformation based on diffeomorphic transformation. Then, we demonstrate that applying additional global affine transformations to the diffeomorphically transformed source images can further improve the segmentation adaptation. Besides, we find that our distortion-aware adaptation method helps to enhance self-supervised learning by providing higher-quality initial models and pseudo labels. To evaluate, we propose new distortion adaptation benchmarks, where rectilinear source images and fisheye target images are used for unsupervised domain adaptation. Extensive experimental results highlight the effectiveness of our approach over state-of-the-art methods under unknown relative distortion across domains. Datasets and more information are available at https://sait-fdd.github.io/.

Among the most striking features of retinal organization is the grouping of its output neurons, the retinal ganglion cells (RGCs), into a diversity of functional types. Each of these types exhibits a mosaic-like organization of receptive fields (RFs) that tiles the retina and visual space. Previous work has shown that many features of RGC organization, including the existence of ON and OFF cell types, the structure of spatial RFs, and their relative arrangement, can be predicted on the basis of efficient coding theory. This theory posits that the nervous system is organized to maximize information in its encoding of stimuli while minimizing metabolic costs. Here, we use efficient coding theory to present a comprehensive account of mosaic organization in the case of natural videos as the retinal channel capacity---the number of simulated RGCs available for encoding---is varied. We show that mosaic density increases with channel capacity up to a series of critical points at which, surprisingly, new cell types emerge. Each successive cell type focuses on increasingly high temporal frequencies and integrates signals over larger spatial areas. In addition, we show theoretically and in simulation that a transition from mosaic alignment to anti-alignment across pairs of cell types is observed …

We introduce a multi-mode tensor clustering method that implements a fused version of the alternating least squares algorithm (Fused-Orth-ALS) for simultaneous tensor factorization and clustering. The statistical convergence rates of recovery and clustering are established when the data are a noise-contaminated tensor with a latent low-rank CP decomposition structure. Furthermore, we show that a modified alternating least squares algorithm can provably recover the true latent low-rank factorization structure when the data form an asymmetric tensor with perturbation. Clustering consistency is also established. Finally, we illustrate the accuracy and computationally efficient implementation of the Fused-Orth-ALS algorithm using both simulations and real datasets.
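The alternating least squares idea underlying this abstract can be illustrated on the simplest case. The sketch below is not the paper's Fused-Orth-ALS; it is a plain rank-1 CP fit in NumPy (function name and defaults are assumptions), where each factor has a closed-form update while the other two are held fixed:

```python
import numpy as np

def rank1_als(T, n_iters=25, seed=0):
    """Fit a rank-1 CP model T ≈ a ∘ b ∘ c by alternating least squares."""
    rng = np.random.default_rng(seed)
    _, J, K = T.shape
    b = rng.normal(size=J)
    c = rng.normal(size=K)
    for _ in range(n_iters):
        # Each line minimizes ||T - a∘b∘c||_F^2 in one factor,
        # holding the other two fixed (a closed-form least squares step).
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c
```

For a noiseless rank-1 tensor these sweeps recover the factors up to rescaling; the paper's algorithm builds on updates of this kind, adding orthogonalization and fusion for simultaneous clustering (details in the paper).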

In recent years, deep neural networks have demonstrated increasingly strong abilities to recognize objects and activities in videos. However, as video understanding becomes widely used in real-world applications, a key consideration is developing human-centric systems that understand not only the content of the video but also how it would affect the wellbeing and emotional state of viewers. To facilitate research in this setting, we introduce two large-scale datasets with over 60,000 videos manually annotated for emotional response and subjective wellbeing. The Video Cognitive Empathy (VCE) dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states. The Video to Valence (V2V) dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing. In experiments, we show how video models that are primarily trained to recognize actions and find contours of objects can be repurposed to understand human preferences and the emotional content of videos. Although there is room for improvement, predicting wellbeing and emotional response is on the horizon for state-of-the-art models. We hope our datasets can help foster further advances at the intersection of commonsense video understanding and human preference learning.

Current Graph Neural Network (GNN) architectures generally rely on two important components: node feature embedding through message passing, and aggregation with a specialized form of pooling. The structural (or topological) information is implicitly taken into account in these two steps. We propose in this work a novel point of view, which places distances to some learnable graph templates at the core of the graph representation. This distance embedding is constructed thanks to an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance, which encodes feature and structure dissimilarities simultaneously by solving a soft graph-matching problem. We postulate that the vector of FGW distances to a set of template graphs has strong discriminative power, and it is then fed to a non-linear classifier for final predictions. Distance embedding can be seen as a new layer, and can leverage existing message-passing techniques to promote sensible feature representations. Interestingly enough, in our work the optimal set of template graphs is also learnt in an end-to-end fashion by differentiating through this layer. After describing the corresponding learning procedure, we empirically validate our claim on several synthetic and real-life graph classification datasets, where our method is competitive with or surpasses kernel and GNN state-of-the-art …

Recently, Miller et al. showed that a model's in-distribution (ID) accuracy has a strong linear correlation with its out-of-distribution (OOD) accuracy on several OOD benchmarks, a phenomenon they dubbed ``accuracy-on-the-line''. While a useful tool for model selection (i.e., the model most likely to perform best OOD is the one with the highest ID accuracy), this fact does not help to estimate the actual OOD performance of models without access to a labeled OOD validation set. In this paper, we show that a similar surprising phenomenon also holds for the agreement between pairs of neural network classifiers: whenever accuracy-on-the-line holds, we observe that the OOD agreement between the predictions of any pair of neural networks (with potentially different architectures) also exhibits a strong linear correlation with their ID agreement. Furthermore, we observe that the slope and bias of OOD vs ID agreement closely match those of OOD vs ID accuracy. This phenomenon, which we call agreement-on-the-line, has important practical applications: without any labeled data, we can predict the OOD accuracy of classifiers, since OOD agreement can be estimated with just unlabeled data. Our prediction algorithm outperforms previous methods both in shifts where agreement-on-the-line holds and, surprisingly, when accuracy is not on …
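As an illustration of how agreement can substitute for accuracy, here is a deliberately simplified sketch (pure Python; all names are hypothetical, and the paper's actual estimator works with probit-transformed quantities rather than raw fractions): fit a line from pairwise agreement to accuracy on labeled ID data, then apply it to agreements measured on unlabeled OOD predictions.

```python
def agreement(preds_a, preds_b):
    """Fraction of inputs on which two classifiers agree (no labels needed)."""
    return sum(a == b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(preds)

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + bias."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def predict_ood_accuracy(id_preds, id_labels, ood_preds):
    """id_preds / ood_preds: dict model_name -> list of predictions."""
    names = list(id_preds)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    # ID data: both agreement and accuracy are observable.
    id_agree = [agreement(id_preds[a], id_preds[b]) for a, b in pairs]
    id_acc = [(accuracy(id_preds[a], id_labels) +
               accuracy(id_preds[b], id_labels)) / 2 for a, b in pairs]
    slope, bias = fit_line(id_agree, id_acc)
    # OOD data: only agreement is observable; the fitted line supplies accuracy.
    ood_agree = [agreement(ood_preds[a], ood_preds[b]) for a, b in pairs]
    return {pair: slope * g + bias for pair, g in zip(pairs, ood_agree)}
```

The key property the abstract exploits is that `agreement` never touches labels, so the OOD side of the estimate needs only unlabeled data.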

Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we show that this behavior is driven by the distributions of the training data itself. In-context learning emerges when the training data exhibits particular distributional properties such as burstiness (items appear in clusters rather than being uniformly distributed over time) and having a large number of rarely occurring classes. In-context learning also emerges more strongly when item meanings or interpretations are dynamic rather than fixed. These properties are exemplified by natural language, but are also inherent to naturalistic data in a wide range of other domains. They also depart significantly from the uniform, i.i.d. training distributions typically used for standard supervised learning. In our initial experiments, we found that in-context learning traded off against more conventional weight-based learning, and models were unable to achieve both simultaneously. However, our later experiments uncovered that the two modes of learning could co-exist in a single model when it was trained on data following a skewed Zipfian distribution -- another common property of naturalistic data, including language. In further experiments, we …

Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters. By integrating predictive modeling with an implicitly differentiable optimization layer, DFL has shown superior performance to the standard two-stage predict-then-optimize pipeline. However, most existing DFL methods are only applicable to convex problems or a subset of nonconvex problems that can be easily relaxed to convex ones. Further, they can be inefficient in training due to the requirement of solving and differentiating through the optimization problem in every training iteration. We propose SO-EBM, a general and efficient DFL method for stochastic optimization using energy-based models. Instead of relying on KKT conditions to induce an implicit optimization layer, SO-EBM explicitly parameterizes the original optimization problem using a differentiable optimization layer based on energy functions. To better approximate the optimization landscape, we propose a coupled training objective that uses a maximum likelihood loss to capture the optimum location and a distribution-based regularizer to capture the overall energy landscape. Finally, we propose an efficient training procedure for SO-EBM with a self-normalized importance sampler based on a Gaussian mixture proposal. We evaluate SO-EBM in three applications: power scheduling, COVID-19 resource allocation, and non-convex adversarial security game, demonstrating the effectiveness and efficiency …

Equilibrium systems are a powerful way to express neural computations. As special cases, they include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, or meta-learning. Here, we present a new principle for learning such systems with a temporally- and spatially-local rule. Our principle casts learning as a \emph{least-control} problem, where we first introduce an optimal controller to lead the system towards a solution state, and then define learning as reducing the amount of control needed to reach such a state. We show that incorporating learning signals within a dynamics as an optimal control enables transmitting activity-dependent credit assignment information, avoids storing intermediate states in memory, and does not rely on infinitesimal learning signals. In practice, our principle leads to strong performance matching that of leading gradient-based learning methods when applied to an array of problems involving recurrent neural networks and meta-learning. Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.

Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral graph convolutions using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in learning the spectral graph convolutions. Such conclusions are counter-intuitive from the standpoint of approximation theory, where it is established that Chebyshev polynomials achieve the optimal convergence rate for approximating a function. In this paper, we revisit the problem of approximating the spectral graph convolutions with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet when approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph convolutions and achieve superior performance in both full- and semi-supervised node classification tasks. Most notably, we scale ChebNetII to the billion-scale graph ogbn-papers100M, showing that spectral-based GNNs have superior performance. …
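The Runge phenomenon the abstract refers to is easy to reproduce numerically. The sketch below (illustrative only, unrelated to the ChebNetII code) interpolates Runge's function 1/(1+25x^2) with a degree-14 polynomial: equispaced nodes produce large oscillations near the interval ends, while Chebyshev nodes keep the error small.

```python
import math

def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolating polynomial at x (naive form)."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        term = yj
        for k, xk in enumerate(nodes):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def max_interp_error(nodes, n_test=1001):
    """Max |interpolant - runge| over a fine grid on [-1, 1]."""
    values = [runge(x) for x in nodes]
    grid = [-1.0 + 2.0 * i / (n_test - 1) for i in range(n_test)]
    return max(abs(lagrange_eval(nodes, values, x) - runge(x)) for x in grid)

n = 15  # 15 nodes => polynomial degree 14
equispaced = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
chebyshev = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]

err_equi = max_interp_error(equispaced)   # blows up near the endpoints
err_cheb = max_interp_error(chebyshev)    # stays small
```

For this degree, the equispaced error already exceeds 1, while the Chebyshev-node error is well under 0.3; the gap widens as the degree grows.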

Autonomous agents have made great strides in specialist domains like Atari games and Go. However, they typically learn tabula rasa in isolated environments with limited and manually conceived objectives, thus failing to generalize across a wide spectrum of tasks and capabilities. Inspired by how humans continually learn and adapt in the open world, we advocate a trinity of ingredients for building generalist agents: 1) an environment that supports a multitude of tasks and goals, 2) a large-scale database of multimodal knowledge, and 3) a flexible and scalable agent architecture. We introduce MineDojo, a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse open-ended tasks and an internet-scale knowledge base with Minecraft videos, tutorials, wiki pages, and forum discussions. Using MineDojo's data, we propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function. Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward. We open-source the simulation suite, knowledge bases, algorithm implementation, and pretrained models (https://minedojo.org) to promote research towards the goal of generally capable embodied agents.

Supervised learning aims to train a classifier under the assumption that training and test data are from the same distribution. To ease the above assumption, researchers have studied a more realistic setting: out-of-distribution (OOD) detection, where test data may come from classes that are unknown during training (i.e., OOD data). Due to the unavailability and diversity of OOD data, good generalization ability is crucial for effective OOD detection algorithms. To study the generalization of OOD detection, in this paper, we investigate the probably approximately correct (PAC) learning theory of OOD detection, which has been posed by researchers as an open problem. First, we find a necessary condition for the learnability of OOD detection. Then, using this condition, we prove several impossibility theorems for the learnability of OOD detection under some scenarios. Although the impossibility theorems are frustrating, we find that some conditions of these impossibility theorems may not hold in some practical scenarios. Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios. Lastly, we also offer theoretical support for several representative OOD detection works based on our OOD theory.

Recent studies show that graph convolutional networks (GCNs) often perform worse for low-degree nodes, exhibiting so-called structural unfairness for graphs with the long-tailed degree distributions prevalent in the real world. Graph contrastive learning (GCL), which marries the power of GCN and contrastive learning, has emerged as a promising self-supervised approach for learning node representations. How does GCL behave in terms of structural fairness? Surprisingly, we find that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN. We theoretically show that this fairness stems from intra-community concentration and inter-community scatter properties of GCL, resulting in a much clearer community structure that drives low-degree nodes away from the community boundary. Based on our theoretical analysis, we further devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes. Extensive experiments on various benchmarks and evaluation protocols validate the effectiveness of the proposed method.

We consider experiments in dynamical systems where interventions on some experimental units impact other units through a limiting constraint (such as a limited supply of products). Despite outsize practical importance, the best estimators for this `Markovian' interference problem are largely heuristic in nature, and their bias is not well understood. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, apparently incur a large penalty in variance relative to state-of-the-art heuristics. We introduce an on-policy estimator: the Differences-In-Q's (DQ) estimator. We show that the DQ estimator can in general have exponentially smaller variance than off-policy evaluation. At the same time, its bias is second order in the impact of the intervention. This yields a striking bias-variance tradeoff so that the DQ estimator effectively dominates state-of-the-art alternatives. From a theoretical perspective, we introduce three separate novel techniques that are of independent interest in the theory of Reinforcement Learning (RL). Our empirical evaluation includes a set of experiments on a city-scale ride-hailing simulator.

Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de-facto standard for generating answers, with newer and more powerful systems emerging at an astonishing pace. In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifier. In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams, that are then mapped to full passages through an efficient data structure. Empirically, we show this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems. Code available …
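The ngram-as-identifier idea can be made concrete with a toy stand-in for the efficient data structure the abstract mentions (the real system would use something far more compact): a plain dictionary from every word n-gram to the passages containing it, with hypothetical autoregressive-model scores aggregated per passage.

```python
from collections import defaultdict

def build_ngram_index(passages, max_n=3):
    """Toy inverted index: every word n-gram (n <= max_n) occurring in a
    passage acts as a possible identifier for that passage."""
    index = defaultdict(set)
    for pid, text in passages.items():
        words = text.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(words) - n + 1):
                index[" ".join(words[i:i + n])].add(pid)
    return index

def retrieve(index, scored_ngrams, top_k=2):
    """Map generated n-grams (with hypothetical model scores) back to
    passages, summing scores per passage, and return the best passages."""
    passage_scores = defaultdict(float)
    for ngram, score in scored_ngrams:
        for pid in index.get(ngram.lower(), ()):
            passage_scores[pid] += score
    ranked = sorted(passage_scores, key=passage_scores.get, reverse=True)
    return ranked[:top_k]
```

A distinctive multi-word n-gram narrows the candidate set sharply, which is why scoring n-grams rather than whole documents can still rank passages well.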

Partially monotone regression is a regression analysis in which the target values are monotonically increasing with respect to a subset of input features. The TensorFlow Lattice library is one of the standard machine learning libraries for partially monotone regression. It consists of several neural network layers, and its core component is the lattice layer. One problem with the lattice layer is that training it requires the projected gradient descent algorithm with many constraints. Another problem is that it cannot receive a high-dimensional input vector due to its memory consumption. We propose a novel neural network layer, the hierarchical lattice layer (HLL), as an extension of the lattice layer so that a standard stochastic gradient descent algorithm can train HLL while satisfying monotonicity constraints, and so that it can receive a high-dimensional input vector. Our experiments demonstrate that HLL does not sacrifice prediction performance on real datasets compared with the lattice layer.

We derive a novel approximation error bound with explicit prefactor for Sobolev-regular functions using deep convolutional neural networks (CNNs). The bound is non-asymptotic in terms of the network depth and filter lengths, in a rather flexible way. For Sobolev-regular functions which can be embedded into the H\"older space, the prefactor of our error bound depends on the ambient dimension polynomially instead of exponentially as in most existing results, which is of independent interest. We also establish a new approximation result when the target function is supported on an approximate lower-dimensional manifold. We apply our results to establish non-asymptotic excess risk bounds for classification using CNNs with convex surrogate losses, including the cross-entropy loss, the hinge loss (SVM), the logistic loss, the exponential loss and the least squares loss. We show that the classification methods with CNNs can circumvent the curse of dimensionality if input data is supported on a neighborhood of a low-dimensional manifold.

Molecule generation is central to a variety of applications. Recent work has approached the generation task as subgraph prediction and assembly. Nevertheless, these methods usually rely on hand-crafted or external subgraph construction, and the subgraph assembly depends solely on local arrangement. In this paper, we define a novel notion, the principal subgraph, that is closely related to the informative patterns within molecules. Interestingly, our proposed merge-and-update subgraph extraction method can automatically discover frequent principal subgraphs from the dataset, which previous methods are incapable of doing. Moreover, we develop a two-step subgraph assembling strategy, which first predicts a set of subgraphs in a sequence-wise manner and then assembles all generated subgraphs globally as the final output molecule. Built upon a graph variational auto-encoder, our model is demonstrated to be effective in terms of several evaluation metrics and efficiency, compared with state-of-the-art methods on distribution learning and (constrained) property optimization tasks.

A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm's privacy loss remain open---even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant. This result reveals that all previous analyses for this setting have the wrong qualitative behavior. Specifically, while previous privacy analyses increase ad infinitum in the number of iterations, we show that after a small burn-in period, running SGD longer leaks no further privacy. Our analysis departs from previous approaches based on fast mixing, instead using techniques based on optimal transport (namely, Privacy Amplification by Iteration) and the Sampled Gaussian Mechanism (namely, Privacy Amplification by Sampling). Our techniques readily extend to other settings.

The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether the brain could adopt a similar strategy to correctly modify its synapses. Recent models have attempted to bridge this gap while being consistent with a range of experimental observations. However, these models are either unable to effectively backpropagate error signals across multiple layers or require a multi-phase learning process, neither of which is reminiscent of learning in the brain. Here, we introduce a new model, Bursting Cortico-Cortical Networks (BurstCCN), which solves these issues by integrating known properties of cortical networks, namely bursting activity, short-term plasticity (STP), and dendrite-targeting interneurons. BurstCCN relies on burst multiplexing via connection-type-specific STP to propagate backprop-like error signals within deep cortical networks. These error signals are encoded at distal dendrites and induce burst-dependent plasticity as a result of excitatory-inhibitory top-down inputs. First, we demonstrate that our model can effectively backpropagate errors through multiple layers using a single-phase learning process. Next, we show both empirically and analytically that learning in our model approximates backprop-derived gradients. Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). Overall, …

Spiking neural networks (SNNs) are shown to be more biologically plausible and energy-efficient than their predecessors. However, there is a lack of an efficient and generalized training method for deep SNNs, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity. Our experimental results have also shown that the SNNs thus trained can achieve comparable accuracies to their teacher ANNs on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly. It can be easily implemented on-chip to perform fast parameter calibration and provide robustness against the notorious device non-ideality issues. It, therefore, opens up a myriad of opportunities for training and deployment of SNNs on ultra-low-power mixed-signal neuromorphic computing chips.

Transformation invariances are present in many real-world problems. For example, image classification is usually invariant to rotation and color transformation: a rotated car in a different color is still identified as a car. Data augmentation, which adds the transformed data into the training set and trains a model on the augmented data, is one commonly used technique to build these invariances into the learning process. However, it is unclear how data augmentation performs theoretically and what the optimal algorithm is in the presence of transformation invariances. In this paper, we study PAC learnability under transformation invariances in three settings according to different levels of realizability: (i) A hypothesis fits the augmented data; (ii) A hypothesis fits only the original data and the transformed data lying in the support of the data distribution; (iii) Agnostic case. One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii), which implies that any algorithm not differentiating between the original and transformed data (including data augmentation) is not optimal. Furthermore, algorithms of this type can even ``harm'' the accuracy. In setting (i), although it is unnecessary to distinguish between the two …

We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks. In contrast to previous models, UViM has the same functional form for all tasks; it requires no task-specific modifications that demand extensive human expertise. The approach involves two components: (I) a base model (feed-forward) which is trained to directly predict raw vision outputs, guided by a learned discrete code, and (II) a language model (autoregressive) that is trained to generate the guiding code. These components complement each other: the language model is well-suited to modeling structured interdependent data, while the base model is efficient at dealing with high-dimensional outputs. We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks: panoptic segmentation, depth prediction and image colorization, where we achieve competitive and near state-of-the-art results. Our experimental results suggest that UViM is a promising candidate for a unified modeling approach in computer vision.

We introduce a simple but general online learning framework in which a learner plays against an adversary in a vector-valued game that changes every round. Even though the learner's objective is not convex-concave (and so the minimax theorem does not apply), we give a simple algorithm that can compete with the setting in which the adversary must announce their action first, with optimally diminishing regret. We demonstrate the power of our framework by using it to (re)derive optimal bounds and efficient algorithms across a variety of domains, ranging from multicalibration to a large set of no-regret algorithms, to a variant of Blackwell's approachability theorem for polytopes with fast convergence rates. As a new application, we show how to ``(multi)calibeat'' an arbitrary collection of forecasters --- achieving an exponentially improved dependence on the number of models we are competing against, compared to prior work.

Many reinforcement learning (RL) applications have combinatorial action spaces, where each action is a composition of sub-actions. A standard RL approach ignores this inherent factorization structure, resulting in a potential failure to make meaningful inferences about rarely observed sub-action combinations; this is particularly problematic for offline settings, where data may be limited. In this work, we propose a form of linear Q-function decomposition induced by factored action spaces. We study the theoretical properties of our approach, identifying scenarios where it is guaranteed to lead to zero bias when used to approximate the Q-function. Outside the regimes with theoretical guarantees, we show that our approach can still be useful because it leads to better sample efficiency without necessarily sacrificing policy optimality, allowing us to achieve a better bias-variance trade-off. Across several offline RL problems using simulators and real-world datasets motivated by healthcare, we demonstrate that incorporating factored action spaces into value-based RL can result in better-performing policies. Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space when applying RL to observational datasets.
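The linear decomposition the abstract describes can be sketched in a minimal tabular form, Q(s, a) = sum_i Q_i(s, a_i), so that each sub-action's contribution is learned separately (this is an illustrative toy, not the paper's implementation; the class name and update rule are assumptions):

```python
from collections import defaultdict

class FactoredQ:
    """Tabular sketch of a linearly decomposed Q-function over a
    combinatorial action space: Q(s, a) = sum_i Q_i(s, a_i)."""

    def __init__(self, n_subactions, lr=0.1):
        self.q = [defaultdict(float) for _ in range(n_subactions)]
        self.lr = lr

    def value(self, state, action):
        # `action` is a tuple of sub-actions, one per factor.
        return sum(qi[(state, ai)] for qi, ai in zip(self.q, action))

    def update(self, state, action, target):
        # One TD-style step toward `target`; the error is shared across
        # factors, so rarely observed sub-action combinations still
        # benefit from data about their individual sub-actions.
        err = target - self.value(state, action)
        for qi, ai in zip(self.q, action):
            qi[(state, ai)] += self.lr * err
```

After updating on (s, (a1, b1)) and (s, (a2, b2)), the table already produces a nonzero estimate for the unseen combination (a1, b2), which is exactly the kind of generalization the abstract targets for limited offline data.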

We study the problem of learning generalized linear models under adversarial corruptions. We analyze a classical heuristic called the \textit{iterative trimmed maximum likelihood estimator} which is known to be effective against \textit{label corruptions} in practice. Under label corruptions, we prove that this simple estimator achieves minimax near-optimal risk on a wide range of generalized linear models, including Gaussian regression, Poisson regression and Binomial regression. Finally, we extend the estimator to the much more challenging setting of \textit{label and covariate corruptions} and demonstrate its robustness and optimality in that setting as well.

Despite impressive successes, deep reinforcement learning (RL) systems still fall short of human performance on generalization to new tasks and environments that differ from their training. As a benchmark tailored for studying RL generalization, we introduce Avalon, a set of tasks in which embodied agents in highly diverse procedural 3D worlds must survive by navigating terrain, hunting or gathering food, and avoiding hazards. Avalon is unique among existing RL benchmarks in that the reward function, world dynamics, and action space are the same for every task, with tasks differentiated solely by altering the environment; its 20 tasks, ranging in complexity from eat and throw to hunt and navigate, each create worlds in which the agent must perform specific skills in order to survive. This setup enables investigations of generalization within tasks, between tasks, and to compositional tasks that require combining skills learned from previous tasks. Avalon includes a highly efficient simulator, a library of baselines, and a benchmark with scoring metrics evaluated against hundreds of hours of human performance, all of which are open-source and publicly available. We find that standard RL baselines make progress on most tasks but are still far from human performance, suggesting Avalon is challenging enough …

We initiate a formal study of reproducibility in optimization. We define a quantitative measure of reproducibility of optimization procedures in the face of noisy or error-prone operations such as inexact or stochastic gradient computations or inexact initialization. We then analyze several convex optimization settings of interest such as smooth, non-smooth, and strongly-convex objective functions and establish tight bounds on the limits of reproducibility in each setting. Our analysis reveals a fundamental trade-off between computation and reproducibility: more computation is necessary (and sufficient) for better reproducibility.
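
The reproducibility question above can be made concrete with a toy experiment: run the same gradient procedure twice with independent gradient noise and measure how far apart the two outputs end up. The noise model and names here are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def noisy_gd(grad, x0, step, n_steps, sigma, rng):
    """Gradient descent with inexact (noisy) gradient oracles."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * (grad(x) + sigma * rng.normal(size=x.shape))
    return x

grad = lambda x: x                       # f(x) = 0.5 * ||x||^2, minimizer at 0
rng = np.random.default_rng(0)
runs = [noisy_gd(grad, np.ones(5), step=0.1, n_steps=200, sigma=0.1, rng=rng)
        for _ in range(2)]
# (ir)reproducibility: squared deviation between two independent runs
dev = np.sum((runs[0] - runs[1]) ** 2)
print(dev)
```

Shrinking the step size (i.e., spending more iterations for the same progress) reduces this deviation, matching the computation/reproducibility trade-off the abstract describes.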

The fundamental problem of causal inference -- that we never observe counterfactuals -- prevents us from identifying how many might be negatively affected by a proposed intervention. If, in an A/B test, half of users click (or buy, or watch, or renew, etc.), whether exposed to the standard experience A or a new one B, hypothetically it could be because the change affects no one, because the change positively affects half the user population to go from no-click to click while negatively affecting the other half, or something in between. While unknowable, this impact is clearly of material importance to the decision to implement a change or not, whether due to fairness, long-term, systemic, or operational considerations. We therefore derive the tightest-possible (i.e., sharp) bounds on the fraction negatively affected (and other related estimands) given data with only factual observations, whether experimental or observational. Naturally, the more we can stratify individuals by observable covariates, the tighter the sharp bounds. Since these bounds involve unknown functions that must be learned from data, we develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned, remains consistent when some are mislearned, and still gives …

We consider the classic facility location problem in fully dynamic data streams, where elements can be both inserted and deleted. In this problem, one is interested in maintaining a stable and high quality solution throughout the data stream while spending only a small amount of time per update (insertion or deletion). We study the problem and provide the first algorithm that simultaneously maintains a constant approximation and incurs polylogarithmic amortized recourse per update. We complement our theoretical results with an experimental analysis showing the practical efficiency of our method.

AI systems are becoming increasingly intertwined with human life. In order to effectively collaborate with humans and ensure safety, AI systems need to be able to understand, interpret and predict human moral judgments and decisions. Human moral judgments are often guided by rules, but not always. A central challenge for AI safety is capturing the flexibility of the human moral mind — the ability to determine when a rule should be broken, especially in novel or unusual situations. In this paper, we present a novel challenge set consisting of moral exception question answering (MoralExceptQA) of cases that involve potentially permissible moral exceptions – inspired by recent moral psychology studies. Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MoralCoT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments. MoralCoT outperforms seven existing LLMs by 6.2% F1, suggesting that modeling human reasoning might be necessary to capture the flexibility of the human moral mind. We also conduct a detailed error analysis to suggest directions for future work to improve AI safety using MoralExceptQA. Our data is open-sourced at https://huggingface.co/datasets/feradauto/MoralExceptQA …

Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness. Such models typically require joint reasoning in the image and text representation spaces for downstream inference tasks. Contrary to prior beliefs, we demonstrate that the image and text representations learned via a standard contrastive objective are not interchangeable and can lead to inconsistent downstream predictions. To mitigate this issue, we formalize consistency and propose CyCLIP, a framework for contrastive representation learning that explicitly optimizes for the learned representations to be geometrically consistent in the image and text space. In particular, we show that consistent representations can be learned by explicitly symmetrizing (a) the similarity between the two mismatched image-text pairs (cross-modal consistency); and (b) the similarity between the image-image pair and the text-text pair (in-modal consistency). Empirically, we show that the improved consistency in CyCLIP translates to significant gains over CLIP, with gains ranging from 10%-24% for zero-shot classification on standard benchmarks (CIFAR-10, CIFAR-100, ImageNet1K) and 10%-27% for robustness to various natural distribution shifts.
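
The two symmetrization penalties described above can be sketched with NumPy on normalized embeddings; the function name and squared-error form are illustrative assumptions, not CyCLIP's exact losses:

```python
import numpy as np

def cyclic_consistency_losses(img, txt):
    """Sketch of CyCLIP-style consistency penalties on L2-normalized
    image/text embedding matrices img, txt of shape [n, d]."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    s_it = img @ txt.T           # cross-modal similarities
    s_ii = img @ img.T           # image-image similarities
    s_tt = txt @ txt.T           # text-text similarities
    # (a) cross-modal consistency: sim(I_i, T_j) should match sim(I_j, T_i)
    cross = np.mean((s_it - s_it.T) ** 2)
    # (b) in-modal consistency: sim(I_i, I_j) should match sim(T_i, T_j)
    in_modal = np.mean((s_ii - s_tt) ** 2)
    return cross, in_modal

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 16))
# identical embeddings in both modalities incur zero penalty
cross, in_modal = cyclic_consistency_losses(img, img.copy())
print(cross, in_modal)
```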

Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. Yet these datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo labels to unlabeled samples. Despite its popularity, self-training is widely believed to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises from both the problem itself and inappropriate training with potentially incorrect pseudo labels, which accumulates error in the iterative self-training process. To reduce this bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of self-training bias, where the pseudo labeling function is accurate on labeled samples, yet makes as many mistakes as possible on unlabeled samples. We then adversarially optimize the representations to improve the quality of pseudo labels by avoiding the worst case. Extensive experiments justify that DST achieves an average improvement of 6.3% against state-of-the-art methods on standard semi-supervised learning benchmark datasets …

Comparing the representations learned by different neural networks has recently emerged as a key tool to understand various architectures and ultimately optimize them. In this work, we introduce GULP, a family of distance measures between representations that is explicitly motivated by downstream predictive tasks. By construction, GULP provides uniform control over the difference in prediction performance between two representations, with respect to regularized linear prediction tasks. Moreover, it satisfies several desirable structural properties, such as the triangle inequality and invariance under orthogonal transformations, and thus lends itself to data embedding and visualization. We extensively evaluate GULP relative to other methods, and demonstrate that it correctly differentiates between architecture families, converges over the course of training, and captures generalization performance on downstream linear tasks.

Extreme compression, particularly ultra-low bit precision (binary/ternary) quantization, has been proposed to fit large NLP models on resource-constrained devices. However, to preserve the accuracy for such aggressive compression schemes, cutting-edge methods usually introduce complicated compression pipelines, e.g., multi-stage expensive knowledge distillation with extensive hyperparameter tuning. Also, they oftentimes focus less on smaller transformer models that have already been heavily compressed via knowledge distillation and lack a systematic study to show the effectiveness of their methods. In this paper, we perform a very comprehensive systematic study to measure the impact of many key hyperparameters and training strategies from previous works. As a result, we find that previous baselines for ultra-low bit precision quantization are significantly under-trained. Based on our study, we propose a simple yet effective compression pipeline for extreme compression. Our simplified pipeline demonstrates that (1) we can skip the pre-training knowledge distillation to obtain a 5-layer \bert while achieving better performance than previous state-of-the-art methods, like TinyBERT; (2) extreme quantization plus layer reduction is able to reduce the model size by 50x, resulting in new state-of-the-art results on GLUE tasks.

Subgraph GNNs are a recent class of expressive Graph Neural Networks (GNNs) which model graphs as collections of subgraphs. So far, the design space of possible Subgraph GNN architectures as well as their basic theoretical properties are still largely unexplored. In this paper, we study the most prominent form of subgraph methods, which employs node-based subgraph selection policies such as ego-networks or node marking and deletion. We address two central questions: (1) What is the upper bound of the expressive power of these methods? and (2) What is the family of equivariant message passing layers on these sets of subgraphs? Our first step in answering these questions is a novel symmetry analysis which shows that modelling the symmetries of node-based subgraph collections requires a significantly smaller symmetry group than the one adopted in previous works. This analysis is then used to establish a link between Subgraph GNNs and Invariant Graph Networks (IGNs). We answer the questions above by first bounding the expressive power of subgraph methods by 3-WL, and then proposing a general family of message-passing layers for subgraph methods that generalises all previous node-based Subgraph GNNs. Finally, we design a novel Subgraph GNN dubbed SUN, which theoretically unifies previous architectures …

Most knowledge graphs (KGs) are incomplete, which motivates one important research topic on automatically complementing knowledge graphs. However, evaluation of knowledge graph completion (KGC) models often ignores the incompleteness---facts in the test set are ranked against all unknown triplets which may contain a large number of missing facts not included in the KG yet. Treating all unknown triplets as false is called the closed-world assumption. This closed-world assumption might negatively affect the fairness and consistency of the evaluation metrics. In this paper, we study KGC evaluation under a more realistic setting, namely the open-world assumption, where unknown triplets are considered to include many missing facts not included in the training or test sets. For the currently most used metrics such as mean reciprocal rank (MRR) and Hits@K, we point out that their behavior may be unexpected under the open-world assumption. Specifically, with not many missing facts, their numbers show a logarithmic trend with respect to the true strength of the model, and thus, the metric increase could be insignificant in terms of reflecting the true model improvement. Further, considering the variance, we show that the degradation in the reported numbers may result in incorrect comparisons between different models, where stronger …
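
The closed-world metrics discussed above, MRR and Hits@K, rank the true entity's score against all candidate scores. This is a generic sketch of those metrics, not the paper's evaluation code:

```python
def mrr_and_hits(scores, true_idx, k=10):
    """Closed-world ranking metrics. scores[q] is a list of candidate
    scores for query q; true_idx[q] indexes the true entity."""
    ranks = []
    for s, t in zip(scores, true_idx):
        # 1-indexed rank of the true triplet among all candidates
        rank = 1 + sum(1 for v in s if v > s[t])
        ranks.append(rank)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits

scores = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.7]]
true_idx = [0, 2]            # true entity ranks 1st and 2nd respectively
mrr, hits1 = mrr_and_hits(scores, true_idx, k=1)
print(mrr, hits1)            # MRR = (1/1 + 1/2) / 2 = 0.75, Hits@1 = 0.5
```

Under the open-world assumption, some of the "negatives" outranking the true entity may in fact be missing true facts, which is exactly why these numbers can understate model quality.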

Equipping artificial agents with useful exploration mechanisms remains a challenge to this day. Humans, on the other hand, seem to manage the trade-off between exploration and exploitation effortlessly. In the present article, we put forward the hypothesis that they accomplish this by making optimal use of limited computational resources. We study this hypothesis by meta-learning reinforcement learning algorithms that sacrifice performance for a shorter description length (defined as the number of bits required to implement the given algorithm). The emerging class of models captures human exploration behavior better than previously considered approaches, such as Boltzmann exploration, upper confidence bound algorithms, and Thompson sampling. We additionally demonstrate that changing the description length in our class of models produces the intended effects: reducing description length captures the behavior of brain-lesioned patients while increasing it mirrors cognitive development during adolescence.

Phylogenetics is a classical methodology in computational biology that today has become highly relevant for medical investigation of single-cell data, e.g., in the context of development of cancer. The exponential size of the tree space is unfortunately a formidable obstacle for current Bayesian phylogenetic inference using Markov chain Monte Carlo based methods since these rely on local operations. And although more recent variational inference (VI) based methods offer speed improvements, they rely on expensive auto-differentiation operations for learning the variational parameters. We propose VaiPhy, a remarkably fast VI based algorithm for approximate posterior inference in an \textit{augmented tree space}. VaiPhy produces marginal log-likelihood estimates on par with the state-of-the-art methods on real data, and is considerably faster since it does not require auto-differentiation. Instead, VaiPhy combines coordinate ascent update equations with two novel sampling schemes: (i) \textit{SLANTIS}, a proposal distribution for tree topologies in the augmented tree space, and (ii) the \textit{JC sampler}, which is, to the best of our knowledge, the first scheme for sampling branch lengths directly from the popular Jukes-Cantor model. We compare VaiPhy in terms of density estimation and runtime. Additionally, we evaluate the reproducibility of the baselines. We provide our code on GitHub: \url{https://github.com/Lagergren-Lab/VaiPhy}.

We propose Active Surrogate Estimators (ASEs), a new method for label-efficient model evaluation. Evaluating model performance is a challenging and important problem when labels are expensive. ASEs address this active testing problem using a surrogate-based estimation approach that interpolates the errors of points with unknown labels, rather than forming a Monte Carlo estimator. ASEs actively learn the underlying surrogate, and we propose a novel acquisition strategy, XWED, that tailors this learning to the final estimation task. We find that ASEs offer greater label-efficiency than the current state-of-the-art when applied to challenging model evaluation problems for deep neural networks.

Recent self-supervised advances in medical computer vision exploit the global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity and do so via a loss applied only at a single image-scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject image features for pretraining and develops several feature-wise regularizations that avoid degenerate representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency.

Offline reinforcement learning (RL) methods can generally be categorized into two types: RL-based and Imitation-based. RL-based methods could in principle enjoy out-of-distribution generalization but suffer from erroneous off-policy evaluation. Imitation-based methods avoid off-policy evaluation but are too conservative to surpass the dataset. In this study, we propose an alternative approach, inheriting the training stability of imitation-style methods while still allowing logical out-of-distribution generalization. We decompose the conventional reward-maximizing policy in offline RL into a guide-policy and an execute-policy. During training, the guide-policy and execute-policy are learned using only data from the dataset, in a supervised and decoupled manner. During evaluation, the guide-policy guides the execute-policy by telling it where it should go so that the reward can be maximized, serving as the \textit{Prophet}. By doing so, our algorithm allows \textit{state-compositionality} from the dataset, rather than the \textit{action-compositionality} conducted in prior imitation-style methods. We dub this new approach Policy-guided Offline RL (\texttt{POR}). \texttt{POR} demonstrates state-of-the-art performance on D4RL, a standard benchmark for offline RL. We also highlight the benefits of \texttt{POR} in terms of improving with supplementary suboptimal data and easily adapting to new tasks by only changing the guide-policy.

In fair rent division, the problem is to assign rooms to roommates and fairly split the rent based on roommates' reported valuations for the rooms. Envy-free rent division is the most popular application on the fair division website Spliddit. The standard model assumes that agents can correctly report their valuations for each room. In practice, agents may be unsure about their valuations, for example because they have had only limited time to inspect the rooms. Our goal is to find a robust rent division that remains fair even if agent valuations are slightly different from the reported ones. We introduce the lexislack solution, which selects a rent division that remains envy-free for valuations within as large a radius as possible of the reported valuations. We also consider robustness notions for valuations that come from a probability distribution, and use results from learning theory to show how we can find rent divisions that (almost) maximize the probability of being envy-free, or that minimize the expected envy. We show that an almost optimal allocation can be identified based on polynomially many samples from the valuation distribution. Finding the best allocation given these samples is NP-hard, but in practice such an allocation can …
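
The envy-freeness condition underlying the abstract above is simple to state in code: no agent should prefer another room's value-minus-rent to their own. This is a sketch of the check only, not of the lexislack solution itself:

```python
def is_envy_free(valuations, assignment, prices):
    """Check envy-freeness of a rent division.
    valuations[i][r]: agent i's value for room r.
    assignment[i]: room assigned to agent i. prices[r]: rent for room r."""
    n = len(valuations)
    for i in range(n):
        own = valuations[i][assignment[i]] - prices[assignment[i]]
        for r in range(n):
            if valuations[i][r] - prices[r] > own + 1e-9:
                return False   # agent i envies the holder of room r
    return True

# two agents, two rooms, total rent 100
vals = [[70, 30], [40, 60]]
assert is_envy_free(vals, assignment=[0, 1], prices=[60, 40])
assert not is_envy_free(vals, assignment=[1, 0], prices=[60, 40])
```

The lexislack idea then asks how much the reported `vals` can be perturbed before such a check first fails, and picks the division maximizing that slack.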

Modern neural networks often have great expressive power and can be trained to overfit the training data, while still achieving good test performance. This phenomenon is referred to as “benign overfitting”. Recently, a line of work has emerged studying “benign overfitting” from the theoretical perspective. However, these works are limited to linear models or kernel/random feature models, and there is still a lack of theoretical understanding about when and how benign overfitting occurs in neural networks. In this paper, we study the benign overfitting phenomenon in training a two-layer convolutional neural network (CNN). We show that when the signal-to-noise ratio satisfies a certain condition, a two-layer CNN trained by gradient descent can achieve arbitrarily small training and test loss. On the other hand, when this condition does not hold, overfitting becomes harmful and the obtained CNN can only achieve a constant level test loss. Together these results demonstrate a sharp phase transition between benign overfitting and harmful overfitting, driven by the signal-to-noise ratio. To the best of our knowledge, this is the first work that precisely characterizes the conditions under which benign overfitting can occur in training convolutional neural networks.

We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in the high-dimensional regime. We prove limit theorems for the trajectories of summary statistics (i.e., finite-dimensional functions) of SGD as the dimension goes to infinity. Our approach allows one to choose the summary statistics that are tracked, the initialization, and the step-size. It yields both ballistic (ODE) and diffusive (SDE) limits, with the limit depending dramatically on the former choices. We find a critical scaling regime for the step-size below which this ``effective dynamics'' matches gradient flow for the population loss, but at which a new correction term appears and changes the phase diagram. About the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate. We demonstrate our approach on popular examples including estimation for spiked matrix and tensor models and classification via two-layer networks for binary and XOR-type Gaussian mixture models. These examples exhibit surprising phenomena including multimodal timescales to convergence as well as convergence to sub-optimal solutions with probability bounded away from zero from random (e.g., Gaussian) initializations.

We study sequential general online regression, known also as sequential probability assignments, under logarithmic loss when compared against a broad class of experts. We obtain tight, often matching, lower and upper bounds for sequential minimax regret, which is defined as the excess loss incurred by the predictor over the best expert in the class. After proving a general upper bound we consider some specific classes of experts from Lipschitz class to bounded Hessian class and derive matching lower and upper bounds with provably optimal constants. Our bounds work for a wide range of values of the data dimension and the number of rounds. To derive lower bounds, we use tools from information theory (e.g., Shtarkov sum) and for upper bounds, we resort to new "smooth truncated covering" of the class of experts. This allows us to find constructive proofs by applying a simple and novel truncated Bayesian algorithm. Our proofs are substantially simpler than the existing ones and yet provide tighter (and often optimal) bounds.

A longstanding goal of the field of AI is a method for learning a highly capable, generalist agent from diverse experience. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model – with a single set of weights – trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction.

Social media has become the fulcrum of all forms of communication. Classifying social texts such as fake news, rumour, sarcasm, etc. has gained significant attention. The surface-level signals expressed by a social-text itself may not be adequate for such tasks; therefore, recent methods attempted to incorporate other intrinsic signals such as user behavior and the underlying graph structure. Oftentimes, the public wisdom expressed through the comments/replies to a social-text acts as a surrogate of crowd-sourced view and may provide us with complementary signals. State-of-the-art methods on social-text classification tend to ignore such a rich hierarchical signal. Here, we propose Hyphen, a discourse-aware hyperbolic spectral co-attention network. Hyphen is a fusion of hyperbolic graph representation learning with a novel Fourier co-attention mechanism in an attempt to generalise the social-text classification tasks by incorporating public discourse. We parse public discourse as an Abstract Meaning Representation (AMR) graph and use the powerful hyperbolic geometric representation to model graphs with hierarchical structure. Finally, we equip it with a novel Fourier co-attention mechanism to capture the correlation between the source post and public discourse. Extensive experiments on four different social-text classification tasks, namely detecting fake news, hate speech, rumour, and sarcasm, show that Hyphen generalises …

With the advent of large language models, methods for abstractive summarization have made great strides, creating potential for use in applications to aid knowledge workers processing unwieldy document collections. One such setting is the Civil Rights Litigation Clearinghouse (CRLC, https://clearinghouse.net), which posts information about large-scale civil rights lawsuits, serving lawyers, scholars, and the general public. Today, summarization in the CRLC requires extensive training of lawyers and law students who spend hours per case understanding multiple relevant documents in order to produce high-quality summaries of key events and outcomes. Motivated by this ongoing real-world summarization effort, we introduce Multi-LexSum, a collection of 9,280 expert-authored summaries drawn from ongoing CRLC writing. Multi-LexSum presents a challenging multi-document summarization task given the length of the source documents, often exceeding two hundred pages per case. Furthermore, Multi-LexSum is distinct from other datasets in its multiple target summaries, each at a different granularity (ranging from one-sentence "extreme" summaries to multi-paragraph narrations of over five hundred words). We present extensive analysis demonstrating that despite the high-quality summaries in the training data (adhering to strict content and style guidelines), state-of-the-art summarization models perform poorly on this task. We release Multi-LexSum for further summarization research and to facilitate the …

Probabilistic circuits (PCs) are a tractable representation of probability distributions allowing for exact and efficient computation of likelihoods and marginals. There has been significant recent progress on improving the scale and expressiveness of PCs. However, PC training performance plateaus as model size increases. We discover that most capacity in existing large PC structures is wasted: fully-connected parameter layers are only sparsely used. We propose two operations: pruning and growing, that exploit the sparsity of PC structures. Specifically, the pruning operation removes unimportant sub-networks of the PC for model compression and comes with theoretical guarantees. The growing operation increases model capacity by increasing the dimensions of latent states. By alternately applying pruning and growing, we increase the capacity that is meaningfully used, allowing us to significantly scale up PC learning. Empirically, our learner achieves state-of-the-art likelihoods on MNIST-family image datasets and the Penn Treebank language dataset compared to other PC learners and less tractable deep generative models such as flow-based models and variational autoencoders (VAEs).

Many high-dimensional statistical inference problems are believed to possess inherent computational hardness. Various frameworks have been proposed to give rigorous evidence for such hardness, including lower bounds against restricted models of computation (such as low-degree functions), as well as methods rooted in statistical physics that are based on free energy landscapes. This paper aims to make a rigorous connection between the seemingly different low-degree and free-energy based approaches. We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal. By leveraging these rigorous connections we are able to: establish that for Gaussian additive models the "algebraic" notion of low-degree hardness implies failure of "geometric" local MCMC algorithms, and provide new low-degree lower bounds for sparse linear regression which seem difficult to prove directly. These results provide both conceptual insights into the connections between different notions of hardness, as well as concrete technical tools such as new methods for proving low-degree lower bounds.

Many applications of text generation require incorporating different constraints to control the semantics or style of generated text. These constraints can be hard (e.g., ensuring certain keywords are included in the output) and soft (e.g., contextualizing the output with the left- or right-hand context). In this paper, we present Energy-based Constrained Decoding with Langevin Dynamics (COLD), a decoding framework which unifies constrained generation as specifying constraints through an energy function, then performing efficient differentiable reasoning over the constraints through gradient-based sampling. COLD decoding is a flexible framework that can be applied directly to off-the-shelf left-to-right language models without the need for any task-specific fine-tuning, as demonstrated through three challenging text generation applications: lexically-constrained generation, abductive reasoning, and counterfactual reasoning. Our experiments on these constrained generation tasks point to the effectiveness of our approach, both in terms of automatic and human evaluation.
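
The gradient-based sampling at the heart of COLD is (unadjusted) Langevin dynamics: take gradient steps on the energy and add Gaussian noise. The toy below samples from a quadratic energy in continuous space; the paper applies this idea to soft token representations of text, which this sketch does not attempt:

```python
import numpy as np

def langevin_sample(grad_energy, y0, step=0.05, n_steps=2000, rng=None):
    """Unadjusted Langevin dynamics: gradient descent on the energy
    plus injected noise, approximately sampling p(y) ∝ exp(-E(y))."""
    rng = rng or np.random.default_rng(0)
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        noise = rng.normal(size=y.shape)
        y = y - step * grad_energy(y) + np.sqrt(2 * step) * noise
    return y

mu = np.array([3.0, -1.0])
grad_E = lambda y: y - mu                 # E(y) = 0.5 * ||y - mu||^2
samples = np.stack([langevin_sample(grad_E, np.zeros(2),
                                    rng=np.random.default_rng(s))
                    for s in range(200)])
print(samples.mean(axis=0))               # concentrates near mu
```

In COLD, the energy is a weighted sum of differentiable constraint terms (fluency, keyword coverage, etc.), so the same update nudges a candidate sequence toward satisfying all constraints at once.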

Decision trees are well-known for their ease of interpretability. To improve accuracy, we need to grow deep trees or ensembles of trees. These are hard to interpret, offsetting their original benefits. Shapley values have recently become a popular way to explain the predictions of tree-based machine learning models. They provide a linear weighting of features independent of the tree structure. The rise in popularity is mainly due to TreeShap, which solves a generally exponential-complexity problem in polynomial time. Following extensive adoption in industry, more efficient algorithms are required. This paper presents a more efficient and straightforward algorithm: Linear TreeShap. Like TreeShap, Linear TreeShap is exact and requires the same amount of memory.
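
For context on what TreeShap-family algorithms compute, here is the brute-force Shapley value on a toy cooperative game, averaging each feature's marginal contribution over all coalitions. This exponential enumeration is exactly what TreeShap and Linear TreeShap avoid for trees; the additive value function below is an illustrative assumption:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by brute-force enumeration of coalitions
    (exponential in n_features; fine only for toy examples)."""
    phi = [0.0] * n_features
    feats = range(n_features)
    for i in feats:
        others = [j for j in feats if j != i]
        for size in range(n_features):
            for S in combinations(others, size):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# toy additive game: a linear model f(x) = 2*x0 + 3*x1 explained at
# x = (1, 1) with a zero-mean background; v(S) sums contributions in S
w, x = [2.0, 3.0], [1.0, 1.0]
v = lambda S: sum(w[j] * x[j] for j in S)
phi = shapley_values(v, 2)
print(phi)   # additive game, so phi = [2.0, 3.0]
```

For an additive game the Shapley value simply returns each feature's own contribution, a useful sanity check for any faster exact algorithm.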

While vision-and-language models perform well on tasks such as visual question answering, they struggle when it comes to basic human commonsense reasoning skills. In this work, we introduce WinoGAViL: an online game of vision-and-language associations (e.g., between werewolves and a full moon), used as a dynamic evaluation benchmark. Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player tries to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We use the game to collect 3.5K instances, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. We release the dataset, the code and the interactive game, allowing future data collection that can be used to develop models with better association abilities.
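
The >90% Jaccard agreement reported for human players is the standard set-overlap measure; a minimal sketch (the function name and example sets are illustrative):

```python
def jaccard(selected, reference):
    """Jaccard index |A ∩ B| / |A ∪ B| between two sets of chosen candidates."""
    a, b = set(selected), set(reference)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Two players agreeing on 2 of 4 distinct candidates overall:
score = jaccard({"moon", "wolf", "night"}, {"moon", "wolf", "forest"})  # 2/4 = 0.5
```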

Large-scale vision-language pre-training has achieved promising results on downstream tasks. Existing methods highly rely on the assumption that the image-text pairs crawled from the Internet are in perfect one-to-one correspondence. However, in real scenarios this assumption often fails to hold: the text description, obtained by crawling the affiliated metadata of the image, often suffers from semantic mismatch (the text does not fully describe the image) and mutual compatibility (nominally unpaired image-text samples may in fact match). To address these issues, we introduce PyramidCLIP, which constructs an input pyramid with different semantic levels for each modality, and aligns visual elements and linguistic elements in the form of hierarchy via peer-level semantics alignment and cross-level relation alignment. Furthermore, we soften the loss of negative samples (unpaired samples) so as to weaken the strict constraint during the pre-training stage, thus mitigating the risk of forcing the model to distinguish compatible negative pairs. Experiments on five downstream tasks demonstrate the effectiveness of the proposed PyramidCLIP. In particular, with the same amount of 15 million pre-training image-text pairs, PyramidCLIP exceeds CLIP on ImageNet zero-shot classification top-1 accuracy by 10.6%/13.2%/10.0% with ResNet50/ViT-B32/ViT-B16 based image encoder respectively. When scaling to larger datasets, PyramidCLIP achieves the state-of-the-art results on several downstream tasks. In particular, the results of PyramidCLIP-ResNet50 trained on 143M …

We introduce Breaking Bad, a large-scale dataset of fractured objects. Our dataset consists of over one million fractured objects simulated from ten thousand base models. The fracture simulation is powered by a recent physically based algorithm that efficiently generates a variety of fracture modes of an object. Existing shape assembly datasets decompose objects according to semantically meaningful parts, effectively modeling the construction process. In contrast, Breaking Bad models the destruction process of how a geometric object naturally breaks into fragments. Our dataset serves as a benchmark that enables the study of fractured object reassembly and presents new challenges for geometric shape understanding. We analyze our dataset with several geometry measurements and benchmark three state-of-the-art shape assembly deep learning methods under various settings. Extensive experimental results demonstrate the difficulty of our dataset, calling on future research in model designs specifically for the geometric shape assembly task. We host our dataset at https://breaking-bad-dataset.github.io/.

From optimal transport to robust dimensionality reduction, many machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has remained elusive how these results translate to the Riemannian case. Zhang et al. (2022) recently showed that geodesic convex-concave Riemannian problems always admit saddle-point solutions in the sense of Sion. An important question that immediately arises is whether a performance gap between the Riemannian and the optimal Euclidean convex-concave algorithms is necessary. Our work is the first to answer this question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly-convex-concave case, matching the Euclidean rate. Our results also extend to the stochastic and non-smooth cases, where RCEG and Riemannian gradient ascent-descent (RGDA) achieve near-optimal convergence rates up to factors depending on the curvature of the manifold. Finally, we empirically demonstrate the effectiveness of RCEG in solving robust PCA.

We give a simple, generic conformal prediction method for sequential prediction that achieves target empirical coverage guarantees on adversarial data. It is computationally lightweight --- comparable to split conformal prediction --- but does not require having a held-out validation set, and so all data can be used for training models from which to derive a conformal score. Furthermore, it gives stronger than marginal coverage guarantees in two ways. First, it gives threshold-calibrated prediction sets that have correct empirical coverage even conditional on the threshold used to form the prediction set from the conformal score. Second, the user can specify an arbitrary collection of subsets of the feature space --- possibly intersecting --- and the coverage guarantees will also hold conditional on membership in each of these subsets. We call our algorithm MVP, short for MultiValid Prediction. We give both theory and an extensive set of empirical evaluations.
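
For context, the split conformal baseline that MVP is compared against computes a single empirical quantile of held-out conformal scores. This is a sketch of that baseline, not of MVP itself; using absolute residuals as the score is an illustrative assumption:

```python
import math

def split_conformal_threshold(cal_scores, alpha):
    """Threshold q such that prediction sets {y : score(x, y) <= q} cover a
    fresh exchangeable point with probability >= 1 - alpha."""
    n = len(cal_scores)
    k = min(n, math.ceil((n + 1) * (1.0 - alpha)))  # conformal quantile rank
    return sorted(cal_scores)[k - 1]

# Calibration scores 1..100 with alpha = 0.1: the ceil(101 * 0.9) = 91st
# smallest score becomes the threshold.
q = split_conformal_threshold(list(range(1, 101)), alpha=0.1)
```

With absolute residuals as scores, the resulting interval for a new input is `prediction ± q`. MVP replaces this single marginal threshold with multivalid, group-conditional guarantees.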

Humans have a remarkable ability to rapidly generalize to new tasks that is difficult to reproduce in artificial learning systems. Compositionality has been proposed as a key mechanism supporting generalization in humans, but evidence of its neural implementation and impact on behavior is still scarce. Here we study the computational properties associated with compositional generalization in both humans and artificial neural networks (ANNs) on a highly compositional task. First, we identified behavioral signatures of compositional generalization in humans, along with their neural correlates using whole-cortex functional magnetic resonance imaging (fMRI) data. Next, we designed pretraining paradigms, aided by a procedure we term primitives pretraining, to endow ANNs with compositional task elements. We found that ANNs with this prior knowledge had greater correspondence with human behavior and neural compositional signatures. Importantly, primitives pretraining induced abstract internal representations, excellent zero-shot generalization, and sample-efficient learning. Moreover, it gave rise to a hierarchy of abstract representations that matched human fMRI data, where sensory rule abstractions emerged in early sensory areas, and motor rule abstractions emerged in later motor areas. Our findings give empirical support to the role of compositional generalization in human behavior, implicate abstract representations as its neural implementation, and illustrate that these representations …

Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated …

Learning set functions becomes increasingly important in many applications like product recommendation and compound selection in AI-aided drug discovery. The majority of existing works study methodologies of set function learning under the function value oracle, which, however, requires expensive supervision signals. This renders such methods impractical for applications with only weak supervision under the Optimal Subset (OS) oracle, the study of which is surprisingly overlooked. In this work, we present a principled yet practical maximum likelihood learning framework, termed EquiVSet, that simultaneously meets the following desiderata of learning neural set functions under the OS oracle: i) permutation invariance of the set mass function being modeled; ii) permission of varying ground sets; iii) minimum prior; and iv) scalability. The main components of our framework involve: an energy-based treatment of the set mass function, DeepSet-style architectures to handle permutation invariance, mean-field variational inference, and its amortized variants. Thanks to the delicate combination of these advanced architectures, empirical studies on three real-world applications (including Amazon product recommendation, set anomaly detection, and compound selection for virtual screening) demonstrate that EquiVSet outperforms the baselines by a large margin.

The logit outputs of a feedforward neural network at initialization are conditionally Gaussian, given a random covariance matrix defined by the penultimate layer. In this work, we study the distribution of this random matrix. Recent work has shown that shaping the activation function as network depth grows large is necessary for this covariance matrix to be non-degenerate. However, the current infinite-width-style understanding of this shaping method is unsatisfactory for large depth: infinite-width analyses ignore the microscopic fluctuations from layer to layer, but these fluctuations accumulate over many layers. To overcome this shortcoming, we study the random covariance matrix in the shaped infinite-depth-and-width limit. We identify the precise scaling of the activation function necessary to arrive at a non-trivial limit, and show that the random covariance matrix is governed by a stochastic differential equation (SDE) that we call the Neural Covariance SDE. Using simulations, we show that the SDE closely matches the distribution of the random covariance matrix of finite networks. Additionally, we recover an if-and-only-if condition for exploding and vanishing norms of large shaped networks based on the activation function.

In single positive multi-label learning (SPML), only one of multiple positive labels is observed for each instance. Previous work trains the model by simply treating unobserved labels as negative ones, and designs regularization to constrain the number of expected positive labels. However, in many real-world scenarios, the true number of positive labels is unavailable, making such methods less applicable. In this paper, we propose to solve SPML problems by designing a Label-Aware global Consistency (LAC) regularization, which leverages the manifold structure information to enhance the recovery of potential positive labels. On one hand, we first perform pseudo-labeling for each unobserved label based on its prediction probability. The consistency regularization is then imposed on model outputs to balance the fitting of identified labels and the exploration of potential positive labels. On the other hand, by enforcing label-wise embeddings to maintain global consistency, the LAC loss encourages the model to learn more distinctive representations, which is beneficial for recovering the information of potential positive labels. Experiments on multiple benchmark datasets validate that the proposed method can achieve state-of-the-art performance for solving SPML tasks.

This paper considers doing quantile regression on censored data using neural networks (NNs). This adds to the survival analysis toolkit by allowing direct prediction of the target variable, along with a distribution-free characterisation of uncertainty, using a flexible function approximator. We begin by showing how an algorithm popular in linear models can be applied to NNs. However, the resulting procedure is inefficient, requiring sequential optimisation of an individual NN at each desired quantile. Our major contribution is a novel algorithm that simultaneously optimises a grid of quantiles output by a single NN. To offer theoretical insight into our algorithm, we show firstly that it can be interpreted as a form of expectation-maximisation, and secondly that it exhibits a desirable 'self-correcting' property. Experimentally, the algorithm produces quantiles that are better calibrated than existing methods on 10 out of 12 real datasets.
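
The loss underlying quantile regression is the standard pinball loss. As a sketch (the brute-force candidate sweep below is for illustration only, not the paper's NN optimisation), minimising it over a sample recovers the empirical quantile:

```python
def pinball_loss(y, q, tau):
    """Pinball (quantile) loss; in expectation it is minimised when q is the
    tau-quantile of y."""
    diff = y - q
    return tau * diff if diff >= 0 else (tau - 1.0) * diff

# Sweeping candidates over the sample: the minimiser of the total pinball loss
# at tau = 0.9 is the empirical 90% quantile of the data.
data = list(range(1, 101))
best = min(data, key=lambda q: sum(pinball_loss(y, q, 0.9) for y in data))
```

A quantile-regression NN minimises this same loss over its outputs, one output per target quantile.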

Diffusion probabilistic models (DPMs) are emerging powerful generative models. Despite their high-quality generation performance, DPMs still suffer from their slow sampling as they generally need hundreds or thousands of sequential function evaluations (steps) of large neural networks to draw a sample. Sampling from DPMs can be viewed alternatively as solving the corresponding diffusion ordinary differential equations (ODEs). In this work, we propose an exact formulation of the solution of diffusion ODEs. The formulation analytically computes the linear part of the solution, rather than leaving all terms to black-box ODE solvers as adopted in previous works. By applying change-of-variable, the solution can be equivalently simplified to an exponentially weighted integral of the neural network. Based on our formulation, we propose DPM-Solver, a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. DPM-Solver is suitable for both discrete-time and continuous-time DPMs without any further training. Experimental results show that DPM-Solver can generate high-quality samples in only 10 to 20 function evaluations on various datasets. We achieve 4.70 FID in 10 function evaluations and 2.87 FID in 20 function evaluations on the CIFAR10 dataset, and a 4~16x speedup compared with previous state-of-the-art training-free samplers on various datasets.
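
The key idea of handling the linear part analytically is that of an exponential integrator. Here is a hedged scalar sketch of that principle only; DPM-Solver's actual high-order update works on the diffusion ODE with a learned score network, not this toy ODE:

```python
import math

def exp_euler_step(x, t, h, a, nonlinear):
    """One exponential-Euler step for dx/dt = a*x + s(x, t): the linear term
    a*x is integrated exactly; only s is approximated (held constant over h)."""
    return math.exp(a * h) * x + (math.exp(a * h) - 1.0) / a * nonlinear(x, t)

# dx/dt = -x + 1 with x(0) = 0 has exact solution x(t) = 1 - exp(-t).
# Because s is constant here, the exponential integrator is exact per step.
x, t, h = 0.0, 0.0, 0.5
for _ in range(4):
    x = exp_euler_step(x, t, h, a=-1.0, nonlinear=lambda x, t: 1.0)
    t += h
```

A black-box solver applied to the same ODE would incur discretisation error even on the linear term; pushing that term into the exponential is what lets DPM-Solver take far fewer steps.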

We propose a general and efficient framework to control auto-regressive generation models with NeurAlly-Decomposed Oracle (NADO). Given a pre-trained base language model and a sequence-level boolean oracle function, we aim to decompose the oracle function into token-level guidance to steer the base model in text generation. Specifically, the token-level guidance is provided by NADO, a neural model trained with examples sampled from the base model, demanding no additional auxiliary labeled data. Based on posterior regularization, we present the closed-form optimal solution to incorporate the decomposed token-level guidance into the base model for controllable generation. We further discuss how the neural approximation affects the quality of the solution. Experiments on two different applications, (1) text generation with lexical constraints and (2) machine translation with formality control, demonstrate that our framework efficiently guides the base model towards the given oracle while keeping high generation quality.

Attention mechanisms have become a standard tool for sequence modeling tasks, in particular by stacking self-attention layers over the entire input sequence as in the Transformer architecture. In this work we introduce a novel attention procedure called staircase attention that, unlike self-attention, operates across the sequence (in time), recurrently processing the input by adding another step of processing. A step in the staircase comprises backward tokens (encoding the sequence so far seen) and forward tokens (ingesting a new part of the sequence). Thus our model can trade off performance and compute, by increasing the amount of recurrence through time and depth. Staircase attention is shown to be able to solve tasks that involve tracking that conventional Transformers cannot, due to this recurrence. Further, it is shown to provide improved modeling power for the same size model (number of parameters) compared to self-attentive Transformers on large language modeling and dialogue tasks, yielding significant perplexity gains.

Groundbreaking language-vision architectures like CLIP and DALL-E proved the utility of training on large amounts of noisy image-text data, without relying on expensive accurate labels used in standard vision unimodal supervised learning. The resulting models showed capabilities of strong text-guided image generation and transfer to downstream tasks, while performing remarkably at zero-shot classification with noteworthy out-of-distribution robustness. Since then, large-scale language-vision models like ALIGN, BASIC, GLIDE, Flamingo and Imagen made further improvements. Studying the training and capabilities of such models requires datasets containing billions of image-text pairs. Until now, no datasets of this size have been made openly available for the broader research community. To address this problem and democratize research on large-scale multi-modal models, we present LAION-5B - a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, of which 2.32B contain English language. We show successful replication and fine-tuning of foundational models like CLIP, GLIDE and Stable Diffusion using the dataset, and discuss further experiments enabled with an openly available dataset of this scale. Additionally we provide several nearest neighbor indices, an improved web-interface for dataset exploration and subset generation, and detection scores for watermark, NSFW, and toxic content detection.

Forecasting complex time series is ubiquitous and vital in a range of applications but challenging. Recent advances endeavor to achieve progress by incorporating various deep learning techniques (e.g., RNN and Transformer) into sequential models. However, clear patterns are still hard to extract since time series are often composed of several intricately entangled components. Motivated by the success of disentangled variational autoencoders in computer vision and classical time series decomposition, we aim to infer a pair of representations that depict the seasonal and trend components of time series. To achieve this goal, we propose LaST, which, based on variational inference, aims to disentangle the seasonal-trend representations in the latent space. Furthermore, LaST supervises and disassociates representations from the perspectives of themselves and input reconstruction, and introduces a series of auxiliary objectives. Extensive experiments show that LaST achieves state-of-the-art performance on the time series forecasting task against the most advanced representation learning and end-to-end forecasting models. For reproducibility, our implementation is publicly available on Github.

In-context learning is the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output. Crucially, in-context learning happens only at inference time without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between tasks on which this succeeds and what is present in the training data. To investigate this, we consider the problem of training a model to in-context learn a function class (e.g., linear functions): given data derived from some functions in the class, can we train a model (e.g., a Transformer) to in-context learn most functions from that class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions---that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least squares estimator. In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the Transformer and inference-time prompts, and (ii) between the in-context …
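
The optimal least squares estimator that the trained Transformer is measured against is simple to state. A sketch for the 1-D no-intercept case (the general case solves the normal equations; the function name and numbers are illustrative):

```python
def least_squares_1d(xs, ys):
    """Closed-form least squares fit of w for y ≈ w * x (no intercept):
    w = Σ x·y / Σ x²."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# In-context examples drawn from the linear function y = 2x, then a query:
w = least_squares_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
prediction = w * 5.0  # query input x = 5
```

The paper's finding is that a Transformer, given the same (x, y) pairs purely in its prompt, matches this estimator's predictions on the query input.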

Neural representations are popular for representing shapes as they can be used for data cleanup, model completion, shape editing, and shape synthesis. Current neural representations can be categorized as either overfitting to a single object instance, or representing a collection of objects. However, neither allows accurate editing of neural scene representations: on the one hand, methods that overfit objects achieve highly accurate reconstructions but do not support editing, as they do not generalize to unseen object configurations; on the other hand, methods that represent a family of objects with variations do generalize but produce approximate reconstructions. We propose NeuForm to combine the advantages of both overfitted and generalizable representations by adaptively overfitting a generalizable representation to regions where reliable data is available, while using the generalizable representation everywhere else. We achieve this with a carefully designed architecture and an approach that blends the network weights of the two representations. We demonstrate edits that successfully reconfigure parts of human-made shapes, such as chairs, tables, and lamps, while preserving the accuracy of an overfitted shape representation. We compare with two state-of-the-art competitors and demonstrate clear improvements in terms of plausibility and fidelity of the resultant edits.

Multi-armed bandit problems provide a framework to identify the optimal intervention over a sequence of repeated experiments. Without additional assumptions, minimax optimal performance (measured by cumulative regret) is well-understood. With access to additional observed variables that d-separate the intervention from the outcome (i.e., they are a d-separator), recent "causal bandit" algorithms provably incur less regret. However, in practice it is desirable to be agnostic to whether observed variables are a d-separator. Ideally, an algorithm should be adaptive; that is, perform nearly as well as an algorithm with oracle knowledge of the presence or absence of a d-separator. In this work, we formalize and study this notion of adaptivity, and provide a novel algorithm that simultaneously achieves (a) optimal regret when a d-separator is observed, improving on classical minimax algorithms, and (b) significantly smaller regret than recent causal bandit algorithms when the observed variables are not a d-separator. Crucially, our algorithm does not require any oracle knowledge of whether a d-separator is observed. We also generalize this adaptivity to other conditions, such as the front-door criterion.

Language model (LM) pre-training is useful in many language processing tasks. But can pre-trained LMs be further leveraged for more general machine learning problems? We propose an approach for using LMs to scaffold learning and generalization in general sequential decision-making problems. In this approach, goals and observations are represented as a sequence of embeddings, and a policy network initialized with a pre-trained LM predicts the next action. We demonstrate that this framework enables effective combinatorial generalization across different environments and supervisory modalities. We begin by assuming access to a set of expert demonstrations, and show that initializing policies with LMs and fine-tuning them via behavior cloning improves task completion rates by 43.6% in the VirtualHome environment. Next, we integrate an active data gathering procedure in which agents iteratively interact with the environment, relabel past "failed" experiences with new goals, and update their policies in a self-supervised loop. Active data gathering further improves combinatorial generalization, outperforming the best baseline by 25.1%. Finally, we explain these results by investigating three possible factors underlying the effectiveness of the LM-based policy. We find that sequential input representations (vs. fixed-dimensional feature vectors) and LM-based weight initialization are both important for generalization. Surprisingly, however, the format …

Molecular conformer generation is a fundamental task in computational chemistry. Several machine learning approaches have been developed, but none have outperformed state-of-the-art cheminformatics methods. We propose torsional diffusion, a novel diffusion framework that operates on the space of torsion angles via a diffusion process on the hypertorus and an extrinsic-to-intrinsic score model. On a standard benchmark of drug-like molecules, torsional diffusion generates superior conformer ensembles compared to machine learning and cheminformatics methods in terms of both RMSD and chemical properties, and is orders of magnitude faster than previous diffusion-based models. Moreover, our model provides exact likelihoods, which we employ to build the first generalizable Boltzmann generator. Code is available at https://github.com/gcorso/torsional-diffusion.

Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most of the deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which still limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair estimate of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. Information can be found at https://amos22.grand-challenge.org.

The new generation of state-of-the-art computer vision systems are trained from natural language supervision, ranging from simple object category names to descriptive captions. This form of supervision ensures high generality and usability of the learned visual models, based on the broad concept coverage achieved through large-scale data collection process. Alternatively, we argue that learning with external knowledge about images is a promising way which leverages a much more structured source of supervision and offers sample efficiency. In this paper, we propose K-LITE (Knowledge-augmented Language-Image Training and Evaluation), a simple strategy to leverage external knowledge for building transferable visual systems: In training, it enriches entities in natural language with WordNet and Wiktionary knowledge, leading to an efficient and scalable approach to learning image representations that uses knowledge about the visual concepts; In evaluation, the natural language is also augmented with external knowledge and then used to reference learned visual concepts (or describe new ones) to enable zero-shot and few-shot transfer of the pre-trained models. We study the performance of K-LITE on two important computer vision problems, image classification and object detection, benchmarking on 20 and 13 different existing datasets, respectively. The proposed knowledge-augmented models show significant improvement in transfer learning performance …

Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance. Score-based generative modelling (SGM) consists of a "noising" stage, whereby a diffusion is used to gradually add Gaussian noise to data, and a generative model, which entails a "denoising" process defined by approximating the time-reversal of the diffusion. Existing SGMs assume that data is supported on a Euclidean space, i.e. a manifold with flat geometry. In many domains such as robotics, geoscience or protein modelling, data is often naturally described by distributions living on Riemannian manifolds and current SGM techniques are not appropriate. We introduce here \emph{Riemannian Score-based Generative Models} (RSGMs), a class of generative models extending SGMs to Riemannian manifolds. We demonstrate our approach on a variety of compact manifolds, and in particular with earth and climate science spherical data.

We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness (at test time).

U-Net architectures are ubiquitous in state-of-the-art deep learning, however their regularisation properties and relationship to wavelets are understudied. In this paper, we formulate a multi-resolution framework which identifies U-Nets as finite-dimensional truncations of models on an infinite-dimensional function space. We provide theoretical results which prove that average pooling corresponds to projection within the space of square-integrable functions and show that U-Nets with average pooling implicitly learn a Haar wavelet basis representation of the data. We then leverage our framework to identify state-of-the-art hierarchical VAEs (HVAEs), which have a U-Net architecture, as a type of two-step forward Euler discretisation of multi-resolution diffusion processes which flow from a point mass, introducing sampling instabilities. We also demonstrate that HVAEs learn a representation of time which allows for improved parameter efficiency through weight-sharing. We use this observation to achieve state-of-the-art HVAE performance with half the number of parameters of existing models, exploiting the properties of our continuous-time formulation.
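
The claim that average pooling is a projection can be seen concretely in one dimension: window-2 average pooling maps a signal onto the next-coarser piecewise-constant (Haar scaling) subspace. A minimal sketch, with illustrative names and data:

```python
def average_pool(signal):
    """Window-2 average pooling = L2 projection onto signals constant on pairs."""
    return [(signal[2 * i] + signal[2 * i + 1]) / 2.0
            for i in range(len(signal) // 2)]

def upsample(coarse):
    """Embed the coarse signal back by repeating each value over its pair."""
    return [v for v in coarse for _ in range(2)]

x = [1.0, 3.0, 2.0, 2.0]
projected = upsample(average_pool(x))  # [2.0, 2.0, 2.0, 2.0]
residual = [a - b for a, b in zip(x, projected)]  # lost "detail" component
```

The residual is orthogonal to the projected signal, exactly as for a projection onto the Haar scaling basis; stacking such pooling layers is what gives U-Nets their implicit wavelet structure.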

This work identifies the existence and cause of a type of posterior collapse that frequently occurs in the Bayesian deep learning practice. For a general linear latent variable model that includes linear variational autoencoders as a special case, we precisely identify the nature of posterior collapse to be the competition between the likelihood and the regularization of the mean due to the prior. Our result also suggests that posterior collapse may be a general problem of learning for deeper architectures and deepens our understanding of Bayesian deep learning.

Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains. Uncertainty quantification is all the more challenging when training distribution and test distribution are different, even if the distribution shifts are mild. Despite the ubiquity of distribution shifts in real-world applications, existing uncertainty quantification approaches mainly study the in-distribution setting where the train and test distributions are the same. In this paper, we develop a systematic calibration model to handle distribution shifts by leveraging data from multiple domains. Our proposed method---multi-domain temperature scaling---uses the heterogeneity in the domains to improve calibration robustness under distribution shift. Through experiments on three benchmark data sets, we find our proposed method outperforms existing methods as measured on both in-distribution and out-of-distribution test sets.
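For context, vanilla temperature scaling — the single-domain building block this method extends — can be sketched as follows. This is a minimal illustration; `fit_temperature` and the grid search are assumptions of this sketch, not the paper's implementation, and the multi-domain variant would fit one temperature per domain's validation split:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    """Negative log-likelihood of labels under temperature T."""
    p = softmax(logits, T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.1, 10.0, 200)):
    """Classic temperature scaling: pick the scalar T minimising
    validation NLL over a grid (a gradient-based fit is also common)."""
    return min(grid, key=lambda T: nll(T, logits, labels))
```

If the validation logits are overconfident (e.g. every logit is scaled up by a factor of 3 relative to the true conditional), the fitted temperature recovers roughly that factor and restores calibration.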

We analyze graph smoothing with mean aggregation, where each node successively receives the average of the features of its neighbors. Indeed, it has quickly been observed that Graph Neural Networks (GNNs), which generally follow some variant of Message-Passing (MP) with repeated aggregation, may be subject to the oversmoothing phenomenon: by performing too many rounds of MP, the node features tend to converge to a non-informative limit. In the case of mean aggregation, for connected graphs, the node features become constant across the whole graph. At the other end of the spectrum, it is intuitively obvious that some MP rounds are necessary, but existing analyses do not exhibit both phenomena at once: beneficial ``finite'' smoothing and oversmoothing in the limit. In this paper, we consider simplified linear GNNs, and rigorously analyze two examples for which a finite number of mean aggregation steps provably improves the learning performance, before oversmoothing kicks in. We consider a latent space random graph model, where node features are partial observations of the latent variables and the graph contains pairwise relationships between them. We show that graph smoothing restores some of the lost information, up to a certain point, by two phenomena: graph smoothing shrinks non-principal directions …
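The mean-aggregation operator and its oversmoothing limit are easy to observe directly. A minimal sketch (adding self-loops and row-normalising is one common convention, assumed here):

```python
import numpy as np

def mean_aggregate(A, X, rounds):
    """Repeatedly replace each node's features by the mean over its
    neighbours (including itself, via added self-loops)."""
    A = A + np.eye(len(A))                 # self-loops
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic mean operator
    for _ in range(rounds):
        X = P @ X
    return X
```

On a connected graph, a few rounds smooth the features while many rounds drive every node to the same vector — the non-informative limit the abstract describes.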

Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space. For objective functions whose gradient cannot be evaluated directly, Bayesian optimization offers one solution -- we construct a probabilistic model of the objective, design a policy to learn about the gradient at the current location, and use the resulting information to navigate the objective landscape. Previous work has realized this scheme by minimizing the variance in the estimate of the gradient, then moving in the direction of the expected gradient. In this paper, we re-examine and refine this approach. We demonstrate that, surprisingly, the expected value of the gradient is not always the direction maximizing the probability of descent, and in fact, these directions may be nearly orthogonal. This observation then inspires an elegant optimization scheme seeking to maximize the probability of descent while moving in the direction of most-probable descent. Experiments on both synthetic and real-world objectives show that our method outperforms previous realizations of this optimization scheme and is competitive against other, significantly more complicated baselines.
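The central observation — that the negative expected gradient need not maximise the probability of descent — can be checked with a toy Gaussian posterior over the gradient. The numbers below are invented purely for illustration:

```python
import math
import numpy as np

def prob_descent(d, mu, Sigma):
    """P(d . g < 0) for g ~ N(mu, Sigma): the probability that moving
    along direction d decreases the objective."""
    m = d @ mu
    s = math.sqrt(d @ Sigma @ d)
    return 0.5 * (1.0 + math.erf(-m / (s * math.sqrt(2.0))))

# Anisotropic posterior: large uncertainty in the first coordinate,
# tiny uncertainty in the second.
mu = np.array([1.0, 0.1])
Sigma = np.diag([100.0, 0.01])
d_expected = -mu / np.linalg.norm(mu)  # negative expected gradient
d_other = np.array([0.0, -1.0])        # low-variance coordinate
```

Here `prob_descent(d_expected, mu, Sigma)` is only about 0.54, while the nearly orthogonal `d_other` achieves about 0.84 — moving along the well-resolved coordinate is far more likely to descend.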

Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information. We found, however, that there is still great room for improvement in how to preserve historical information in neural networks while avoiding overfitting to noise present in the history. Addressing this allows better utilization of the capabilities of deep learning models. To this end, we design a Frequency improved Legendre Memory model, or FiLM: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by (19.2%, 22.6%), respectively. We also demonstrate that the representation module developed in this work can be used as a general plugin to improve the long-term prediction performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/FiLM/.
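The Legendre-projection idea — summarising a history window of arbitrary length by a fixed number of polynomial coefficients — can be sketched with NumPy's polynomial utilities. This is an illustration in spirit only; FiLM's actual module additionally applies Fourier-domain denoising and a low-rank approximation:

```python
import numpy as np

def legendre_compress(y, degree=8):
    """Project a history window onto the first `degree`+1 Legendre
    polynomials, returning the coefficients (a fixed-size summary)
    and the reconstruction on the same grid."""
    x = np.linspace(-1.0, 1.0, len(y))
    coeffs = np.polynomial.legendre.legfit(x, y, degree)
    return coeffs, np.polynomial.legendre.legval(x, coeffs)
```

A smooth 200-step history compresses to 9 coefficients with negligible reconstruction error, while high-frequency noise is largely discarded — the property the abstract exploits for forecasting.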

The multiple-try Metropolis (MTM) algorithm is an extension of the Metropolis-Hastings (MH) algorithm by selecting the proposed state among multiple trials according to some weight function. Although MTM has gained great popularity owing to its faster empirical convergence and mixing than the standard MH algorithm, its theoretical mixing property is rarely studied in the literature due to its complex proposal scheme. We prove that MTM can achieve a mixing time bound smaller than that of MH by a factor of the number of trials under a general setting applicable to high-dimensional model selection problems with discrete state spaces. Our theoretical results motivate a new class of weight functions called locally balanced weight functions and guide the choice of the number of trials, which leads to improved performance over standard MTM algorithms. We support our theoretical results by extensive simulation studies and real data applications with several Bayesian model selection problems.
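A minimal sketch of one MTM step, using the simple importance weight w(y) = pi(y) with a symmetric Gaussian proposal (the locally balanced weight functions the paper advocates would modify `log_w`; names here are illustrative):

```python
import numpy as np

def mtm_step(x, log_target, n_trials, step, rng):
    """One multiple-try Metropolis step for a 1-D target.
    `log_target` must accept NumPy arrays."""
    ys = x + step * rng.standard_normal(n_trials)       # forward trials
    log_w = log_target(ys)
    w = np.exp(log_w - log_w.max())
    y = rng.choice(ys, p=w / w.sum())                   # select one trial
    xs = y + step * rng.standard_normal(n_trials - 1)   # reference trials
    xs = np.append(xs, x)                               # include current x
    log_wx = log_target(xs)
    # generalized MH acceptance: sum of forward weights over reference weights
    a = min(1.0, np.exp(log_w.max() - log_wx.max()) * w.sum()
            / np.exp(log_wx - log_wx.max()).sum())
    return y if rng.random() < a else x
```

Iterating this step with a standard normal `log_target` produces samples whose mean and variance match the target, and increasing `n_trials` typically improves per-step mixing at extra compute cost — the trade-off the paper quantifies.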

The ability to extrapolate from short problem instances to longer ones is an important form of out-of-distribution generalization in reasoning tasks, and is crucial when learning from datasets where longer problem instances are rare. These include theorem proving, solving quantitative mathematics problems, and reading/summarizing novels. In this paper, we run careful empirical studies exploring the length generalization capabilities of transformer-based language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale. We then show that combining pretrained large language models' in-context learning abilities with scratchpad prompting (asking the model to output solution steps before producing an answer) results in a dramatic improvement in length generalization. We run careful failure analyses on each of the learning modalities and identify common sources of mistakes that highlight opportunities in equipping language models with the ability to generalize to longer problems.

We propose and analyze a reinforcement learning principle that approximates the Bellman equations by enforcing their validity only along a user-defined space of test functions. Focusing on applications to model-free offline RL with function approximation, we exploit this principle to derive confidence intervals for off-policy evaluation, as well as to optimize over policies within a prescribed policy class. We prove an oracle inequality on our policy optimization procedure in terms of a trade-off between the value and uncertainty of an arbitrary comparator policy. Different choices of test function spaces allow us to tackle different problems within a common framework. We characterize the loss of efficiency in moving from on-policy to off-policy data using our procedures, and establish connections to concentrability coefficients studied in past work. We examine in depth the implementation of our methods with linear function approximation, and provide theoretical guarantees with polynomial-time implementations even when Bellman closure does not hold.

Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the UniMASK framework, which provides a unified way to specify models which can be trained on many different sequential decision making tasks. We show that a single UniMASK model is often capable of carrying out many tasks with performance similar to or better than single-task models. Additionally, after fine-tuning, our UniMASK models consistently outperform comparable single-task models.

Widely observed neural scaling laws, in which error falls off as a power of the training set size, model size, or both, have driven substantial performance improvements in deep learning. However, these improvements through scaling alone require considerable costs in compute and energy. Here we focus on the scaling of error with dataset size and show how in theory we can break beyond power law scaling and potentially even reduce it to exponential scaling instead if we have access to a high-quality data pruning metric that ranks the order in which training examples should be discarded to achieve any pruned dataset size. We then test this improved scaling prediction with pruned dataset size empirically, and indeed observe better than power law scaling in practice on ResNets trained on CIFAR-10, SVHN, and ImageNet. Next, given the importance of finding high-quality pruning metrics, we perform the first large-scale benchmarking study of ten different data pruning metrics on ImageNet. We find most existing high performing metrics scale poorly to ImageNet, while the best are computationally intensive and require labels for every image. We therefore developed a new simple, cheap and scalable self-supervised pruning metric that demonstrates comparable performance to the best supervised metrics. …

Despite their wide adoption, the underlying training and memorization dynamics of very large language models are not well understood. We empirically study exact memorization in causal and masked language modeling, across model sizes and throughout the training process. We measure the effects of dataset size, learning rate, and model size on memorization, finding that larger language models memorize training data faster across all settings. Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process. We also analyze the memorization dynamics of different parts of speech and find that models memorize nouns and numbers first; we hypothesize and provide empirical evidence that nouns and numbers act as a unique identifier for memorizing individual training examples. Together, these findings present another piece of the broader puzzle of trying to understand what actually improves as models get bigger.

Web-crawled datasets have enabled remarkable generalization capabilities in recent image-text models such as CLIP (Contrastive Language-Image pre-training) or Flamingo, but little is known about the dataset creation processes. In this work, we introduce a testbed of six publicly available data sources---YFCC, LAION, Conceptual Captions, WIT, RedCaps, Shutterstock---to investigate how pre-training distributions induce robustness in CLIP. We find that the performance of the pre-training data varies substantially across distribution shifts, with no single data source dominating. Moreover, we systematically study the interactions between these data sources and find that mixing multiple sources does not necessarily yield better models, but rather dilutes the robustness of the best individual data source. We complement our empirical findings with theoretical insights from a simple setting, where combining the training data also results in diluted robustness. In addition, our theoretical model provides a candidate explanation for the success of the CLIP-based data filtering technique recently employed in the LAION dataset. Overall our results demonstrate that simply gathering a large amount of data from the web is not the most effective way to build a pre-training dataset for robust generalization, necessitating further study into dataset design. Code is available at https://github.com/mlfoundations/clip_quality_not_quantity.

Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities. However, for many sequential decision domains such as robotics, video games, and computer use, publicly available data does not contain the labels required to train behavioral priors in the same way. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning wherein agents learn to act by watching online unlabeled videos. Specifically, we show that with a small amount of labeled data we can train an inverse dynamics model accurate enough to label a huge unlabeled source of online data -- here, online videos of people playing Minecraft -- from which we can then train a general behavioral prior. Despite using the native human interface (mouse and keyboard at 20Hz), we show that this behavioral prior has nontrivial zero-shot capabilities and that it can be fine-tuned, with both imitation learning and reinforcement learning, to hard-exploration tasks that are impossible to learn from scratch via reinforcement learning. For many tasks our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take …

Continual Learning (CL) sequentially learns new tasks like human beings, with the goal of achieving better Stability (S, remembering past tasks) and Plasticity (P, adapting to new tasks). Because past training data is not available, it is valuable to explore how individual training examples differ in their influence on S and P, which may improve the learning pattern towards better SP. Inspired by the Influence Function (IF), we first study example influence by adding a perturbation to the example weight and computing the influence derivation. To avoid the storage and calculation burden of the Hessian inverse in neural networks, we propose a simple yet effective MetaSP algorithm to simulate the two key steps in the computation of IF and obtain the S- and P-aware example influence. Moreover, we propose to fuse the two kinds of example influence by solving a dual-objective optimization problem, and obtain a fused influence towards SP Pareto optimality. The fused influence can be used to control the update of the model and optimize the storage of rehearsal. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on both task- and class-incremental benchmark CL datasets.

Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer, captioning tasks, which evaluate the ability to describe a scene or an event, and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On …

Visual Counterfactual Explanations (VCEs) are an important tool to understand the decisions of an image classifier. They are “small” but “realistic” semantic changes of the image changing the classifier decision. Current approaches for the generation of VCEs are restricted to adversarially robust models and often contain non-realistic artefacts, or are limited to image classification problems with few classes. In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process. Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and late start of the diffusion process, allow us to generate images with minimal semantic changes to the original ones but different classification. Second, our cone regularization via an adversarially robust model ensures that the diffusion process does not converge to trivial non-semantic changes, but instead produces realistic images of the target class which achieve high confidence by the classifier.

Multi-label learning (MLL) learns from examples each associated with multiple labels simultaneously, where the high cost of annotating all relevant labels for each training example is challenging for real-world applications. To cope with this challenge, we investigate single-positive multi-label learning (SPMLL), where each example is annotated with only one relevant label, and show that one can successfully learn a theoretically grounded multi-label classifier for the problem. In this paper, a novel SPMLL method named SMILE, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed. Specifically, an unbiased risk estimator is derived, which is guaranteed to approximately converge to the optimal risk minimizer of fully supervised learning and shows that one positive label per instance is sufficient to train the predictive model. Then, the corresponding empirical risk estimator is established via recovering the latent soft label as a label enhancement process, where the posterior density of the latent soft labels is approximated by a variational Beta density parameterized by an inference model. Experiments on benchmark datasets validate the effectiveness of the proposed method.

In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text. Our code and checkpoints are available at: https://github.com/zinengtang/TVLT

We consider the problem of producing fair probabilistic classifiers for multi-class classification tasks. We formulate this problem in terms of ``projecting'' a pre-trained (and potentially unfair) classifier onto the set of models that satisfy target group-fairness requirements. The new, projected model is given by post-processing the outputs of the pre-trained classifier by a multiplicative factor. We provide a parallelizable, iterative algorithm for computing the projected classifier and derive both sample complexity and convergence guarantees. Comprehensive numerical comparisons with state-of-the-art benchmarks demonstrate that our approach maintains competitive performance in terms of accuracy-fairness trade-off curves, while achieving favorable runtime on large datasets. We also evaluate our method at scale on an open dataset with multiple classes, multiple intersectional groups, and over 1M samples.

As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus.

Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems (RS), closing the gap between metric learning and Collaborative Filtering. Following the convention of RS, existing methods exploit a unique user representation in their model design. This paper focuses on a challenging scenario where a user has multiple categories of interests. Under this setting, we argue that the unique user representation might induce preference bias, especially when the item category distribution is imbalanced. To address this issue, we propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML), with the hope of considering the commonly ignored minority interests of the user. The key idea behind DPCML is to include multiple representations for each user in the system. Based on this embedding paradigm, user preference toward an item is aggregated from the different embeddings by taking the minimum item-user distance among the user embedding set. Furthermore, we observe that the diversity of the embeddings for the same user also plays an essential role in the model. To this end, we propose a diversity control regularization term to better accommodate the multi-vector representation strategy. Theoretically, we show that DPCML could generalize well to unseen test data by …
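The min-distance aggregation at the heart of DPCML is simple to sketch (function and variable names here are illustrative, not the authors' code):

```python
import numpy as np

def preference_score(user_embeddings, item):
    """DPCML-style preference: the distance from an item to a user is
    the MINIMUM distance over that user's multiple interest embeddings,
    so any one interest close to the item yields a high score."""
    d = np.linalg.norm(user_embeddings - item, axis=1)
    return -d.min()
```

A user with two distinct interest centres then scores items near either centre highly, whereas a single averaged embedding would sit between them and under-serve both — the minority-interest failure mode the abstract describes.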

One concern with the rise of large language models lies with their potential for significant harm, particularly from pretraining on biased, obscene, copyrighted, and private information. Emerging ethical approaches have attempted to filter pretraining material, but such approaches have been ad hoc and failed to take context into account. We offer an approach to filtering grounded in law, which has directly addressed the tradeoffs in filtering material. First, we gather and make available the Pile of Law, a ~256GB (and growing) dataset of open-source English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may help with legal tasks that have the promise to improve access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers and discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, providing an exciting new research direction in model-based processing.

Optimal experimental design seeks to determine the most informative allocation of experiments to infer an unknown statistical quantity. In this work, we investigate optimal design of experiments for {\em estimation of linear functionals in reproducing kernel Hilbert spaces (RKHSs)}. This problem has been extensively studied in the linear regression setting under an estimability condition, which allows estimating parameters without bias. We generalize this framework to RKHSs, and allow for the linear functional to be only approximately inferred, i.e., with a fixed bias. This scenario captures many important modern applications such as estimation of gradient maps, integrals and solutions to differential equations. We provide algorithms for constructing bias-aware designs for linear functionals. We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise, enabling us to certify estimation with bounded error with high probability.

Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the rapid development of DNNs has largely benefited from high-quality (open-sourced) datasets, based on which researchers and developers can easily evaluate and improve their learning methods. Since data collection is usually time-consuming or even expensive, how to protect dataset copyrights is of great significance and worth further exploration. In this paper, we revisit dataset ownership verification. We find that existing verification methods introduce new security risks in DNNs trained on the protected dataset, due to the targeted nature of poison-only backdoor watermarks. To alleviate this problem, we explore an untargeted backdoor watermarking scheme, where the abnormal model behaviors are not deterministic. Specifically, we introduce two dispersibilities and prove their correlation, based on which we design the untargeted backdoor watermark under both poisoned-label and clean-label settings. We also discuss how to use the proposed untargeted backdoor watermark for dataset ownership verification. Experiments on benchmark datasets verify the effectiveness of our methods and their resistance to existing backdoor defenses.

Model-based offline reinforcement learning (RL) aims to find a highly rewarding policy by leveraging a previously collected static dataset and a dynamics model. While the dynamics model is learned through reuse of the static dataset, its generalization ability hopefully promotes policy learning if properly utilized. To that end, several works propose to quantify the uncertainty of predicted dynamics and explicitly apply it to penalize reward. However, as the dynamics and the reward are intrinsically different factors in the context of an MDP, characterizing the impact of dynamics uncertainty through a reward penalty may incur an unexpected tradeoff between model utilization and risk avoidance. In this work, we instead maintain a belief distribution over dynamics, and evaluate/optimize the policy through biased sampling from the belief. The sampling procedure, biased towards pessimism, is derived based on an alternating Markov game formulation of offline RL. We formally show that the biased sampling naturally induces an updated dynamics belief with a policy-dependent reweighting factor, termed Pessimism-Modulated Dynamics Belief. To improve the policy, we devise an iterative regularized policy optimization algorithm for the game, with a guarantee of monotonic improvement under certain conditions. To make it practical, we further devise an offline RL algorithm to approximately find the solution. Empirical results show that the proposed approach …

Sequential Monte Carlo (SMC) is an inference algorithm for state space models that approximates the posterior by sampling from a sequence of target distributions. The target distributions are often chosen to be the filtering distributions, but these ignore information from future observations, leading to practical and theoretical limitations in inference and model learning. We introduce SIXO, a method that instead learns target distributions that approximate the smoothing distributions, incorporating information from all observations. The key idea is to use density ratio estimation to fit functions that warp the filtering distributions into the smoothing distributions. We then use SMC with these learned targets to define a variational objective for model and proposal learning. SIXO yields provably tighter log marginal lower bounds and offers more accurate posterior inferences and parameter estimates in a variety of domains.
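For reference, a bootstrap particle filter targeting the filtering distributions — the baseline SIXO improves upon — can be sketched as follows, on a toy linear-Gaussian model chosen purely for illustration:

```python
import numpy as np

def bootstrap_smc(obs, n_particles, trans_std, obs_std, seed=0):
    """Bootstrap particle filter for the model
    x_t = x_{t-1} + N(0, trans_std^2),  y_t = x_t + N(0, obs_std^2).
    Targets are the filtering distributions; SIXO would additionally
    learn twist functions warping these towards the smoothing
    distributions. Returns final particles and a log-marginal estimate."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)
    log_z = 0.0
    for y in obs:
        x = x + trans_std * rng.standard_normal(n_particles)   # propagate
        log_w = (-0.5 * ((y - x) / obs_std) ** 2
                 - np.log(obs_std * np.sqrt(2 * np.pi)))       # weight
        log_z += np.logaddexp.reduce(log_w) - np.log(n_particles)
        w = np.exp(log_w - log_w.max())
        x = rng.choice(x, size=n_particles, p=w / w.sum())     # resample
    return x, log_z
```

Because each weighting step ignores future observations, particles that look good now but are poor in hindsight survive resampling — exactly the limitation that motivates smoothing-oriented targets.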

The past few years have seen the development of many benchmarks for Neural Architecture Search (NAS), fueling rapid progress in NAS research. However, recent work, which shows that good hyperparameter settings can be more important than using the best architecture, calls for a shift in focus towards Joint Architecture and Hyperparameter Search (JAHS). Therefore, we present JAHS-Bench-201, the first collection of surrogate benchmarks for JAHS, built to also facilitate research on multi-objective, cost-aware and (multi) multi-fidelity optimization algorithms. To the best of our knowledge, JAHS-Bench-201 is based on the most extensive dataset of neural network performance data in the public domain. It is composed of approximately 161 million data points and 20 performance metrics for three deep learning tasks, while featuring a 14-dimensional search and fidelity space that extends the popular NAS-Bench-201 space. With JAHS-Bench-201, we hope to democratize research on JAHS and lower the barrier to entry of an extremely compute intensive field, e.g., by reducing the compute time to run a JAHS algorithm from 5 days to only a few seconds.

Learning safe solutions is an important but challenging problem in multi-agent reinforcement learning (MARL). Shielded reinforcement learning is one approach for preventing agents from choosing unsafe actions. Current shielded reinforcement learning methods for MARL make strong assumptions about communication and full observability. In this work, we extend the formalization of the shielded reinforcement learning problem to a decentralized multi-agent setting. We then present an algorithm for decomposition of a centralized shield, allowing shields to be used in such decentralized, communication-free environments. Our results show that agents equipped with decentralized shields perform comparably to agents with centralized shields in several tasks, allowing shielding to be used in environments with decentralized training and execution for the first time.

Understanding decision-making is a core goal in both neuroscience and psychology, and computational models have often been helpful in the pursuit of this goal. While many models have been developed for characterizing behavior in binary decision-making and bandit tasks, comparatively little work has focused on animal decision-making in more complex tasks, such as navigation through a maze. Inverse reinforcement learning (IRL) is a promising approach for understanding such behavior, as it aims to infer the unknown reward function of an agent from its observed trajectories through state space. However, IRL has yet to be widely applied in neuroscience. One potential reason for this is that existing IRL frameworks assume that an agent's reward function is fixed over time. To address this shortcoming, we introduce dynamic inverse reinforcement learning (DIRL), a novel IRL framework that allows for time-varying intrinsic rewards. Our method parametrizes the unknown reward function as a time-varying linear combination of spatial reward maps (which we refer to as "goal maps"). We develop an efficient inference method for recovering this dynamic reward function from behavioral data. We demonstrate DIRL in simulated experiments and then apply it to a dataset of mice exploring a labyrinth. Our method returns interpretable reward …
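The reward parametrisation described above — a time-varying linear combination of fixed spatial goal maps — reduces to a single tensor contraction. A sketch with invented shapes (2 goal maps over 4 states):

```python
import numpy as np

def dynamic_reward(goal_maps, weights_t):
    """DIRL-style reward at time t over all states:
    r_t(s) = sum_k w_k(t) * m_k(s).
    goal_maps: (n_maps, n_states); weights_t: (n_maps,)."""
    return np.tensordot(weights_t, goal_maps, axes=1)  # shape (n_states,)
```

Sliding `weights_t` over time recovers, e.g., a reward that shifts from a "water port" map early in a session to a "home cage" map later — the kind of interpretable dynamic the method infers from trajectories.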

Structured prediction of tree-shaped objects is heavily studied under the name of syntactic dependency parsing. Current practice based on maximum likelihood or margin is either agnostic to or inconsistent with the evaluation loss. Risk minimization alleviates the discrepancy between training and test objectives but typically induces a non-convex problem. These approaches adopt explicit regularization to combat overfitting without probabilistic interpretation. We propose a moment-based distributionally robust optimization approach for tree structured prediction, where the worst-case expected loss over a set of distributions within bounded moment divergence from the empirical distribution is minimized. We develop efficient algorithms for arborescences and other variants of trees. We derive Fisher consistency, convergence rates and generalization bounds for our proposed method. We evaluate its empirical effectiveness on dependency parsing benchmarks.

Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner. State-of-the-art schemes for model parallel foundation model training, such as Megatron and Deepspeed, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates different computational “tasklets” in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments that represent different scenarios for …
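The allocation search described above can be caricatured with a minimal mutate-and-keep-best loop. The cost model and all names here are invented for illustration; the paper's formal cost model and evolutionary algorithm are far more involved.

```python
import random

random.seed(0)
n_tasklets, n_devices = 12, 4

# Hypothetical symmetric link costs between devices (0 = same device).
link = [[0] * n_devices for _ in range(n_devices)]
for i in range(n_devices):
    for j in range(i + 1, n_devices):
        link[i][j] = link[j][i] = random.randint(1, 10)

def cost(assign):
    # Consecutive tasklets exchange activations, paying the link cost.
    return sum(link[assign[t]][assign[t + 1]] for t in range(n_tasklets - 1))

init = [random.randrange(n_devices) for _ in range(n_tasklets)]
best = init[:]
for _ in range(2000):                  # mutate one tasklet, keep improvements
    child = best[:]
    child[random.randrange(n_tasklets)] = random.randrange(n_devices)
    if cost(child) <= cost(best):
        best = child

print(cost(best) <= cost(init))        # True: best-so-far never worsens
```

A real scheduler must also model heterogeneous compute speeds and data-parallel groups, which is where the paper's cost model comes in.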

Predicting the multimodal future behavior of traffic participants is essential for robotic vehicles to make safe decisions. Existing works either directly predict future trajectories from latent features or use dense goal candidates to identify an agent's destinations; the former strategy converges slowly, since all motion modes are derived from the same feature, while the latter has an efficiency issue, since its performance depends heavily on the density of goal candidates. In this paper, we propose the Motion TRansformer (MTR) framework, which models motion prediction as the joint optimization of global intention localization and local movement refinement. Instead of using goal candidates, MTR incorporates spatial intention priors by adopting a small set of learnable motion query pairs. Each motion query pair takes charge of trajectory prediction and refinement for a specific motion mode, which stabilizes the training process and facilitates better multimodal predictions. Experiments show that MTR achieves state-of-the-art performance on both the marginal and joint motion prediction challenges, ranking 1st on the leaderboards of the Waymo Open Motion Dataset. Code will be available at https://github.com/sshaoshuai/MTR.

Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, severely limit the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at odds with the chaotic nature of many, if not most, time series encountered in nature and society. It is particularly problematic in scientific applications where one aims to reconstruct the underlying dynamical system. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights we suggest …
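The link between the recurrence's stability and gradient norms can be illustrated on a linear RNN, where the backpropagated Jacobian is just a matrix power. This is a toy sketch of the phenomenon, not the paper's nonlinear analysis.

```python
import numpy as np

# For h_t = W h_{t-1}, the Jacobian of h_T w.r.t. h_0 is W^T applied T times,
# so gradient norms are governed by the spectral radius of W (the linear
# analogue of the largest Lyapunov exponent).
def grad_norm_through_time(W, T):
    J = np.eye(W.shape[0])
    for _ in range(T):
        J = W @ J                      # accumulate the Jacobian product
    return np.linalg.norm(J)

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
rho = max(abs(np.linalg.eigvals(A)))
A_stable = 0.9 * A / rho               # spectral radius 0.9: contracting
A_chaotic = 1.1 * A / rho              # spectral radius 1.1: expanding

print(grad_norm_through_time(A_stable, 100) < 1.0)   # gradients vanish
print(grad_norm_through_time(A_chaotic, 100) > 1.0)  # gradients explode
```

The paper's contribution is proving analogous statements for nonlinear RNNs via the Lyapunov spectrum of their orbits.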

Large-scale language models often learn behaviors that are misaligned with user expectations. Generated text may contain offensive or toxic language, contain significant repetition, or be of a different sentiment than desired by the user. We consider the task of unlearning these misalignments by fine-tuning the language model on signals of what not to do. We introduce Quantized Reward Konditioning (Quark), an algorithm for optimizing a reward function that quantifies an (un)wanted property, while not straying too far from the original model. Quark alternates between (i) collecting samples with the current language model, (ii) sorting them into quantiles based on reward, with each quantile identified by a reward token prepended to the language model’s input, and (iii) using a standard language modeling loss on samples from each quantile conditioned on its reward token, while remaining nearby the original language model via a KL-divergence penalty. By conditioning on a high-reward token at generation time, the model generates text that exhibits less of the unwanted property. For unlearning toxicity, negative sentiment, and repetition, our experiments show that Quark outperforms both strong baselines and state-of-the-art reinforcement learning methods like PPO, while relying only on standard language modeling primitives.
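Quark's quantile step can be sketched in a few lines. This is a minimal toy: the reward-token strings and the function name are illustrative, not taken from the released code.

```python
import numpy as np

def quantize_by_reward(samples, rewards, k=4):
    """Sort samples into k reward quantiles and prepend a per-quantile token."""
    order = np.argsort(rewards)               # low reward ... high reward
    conditioned = []
    for q, idx in enumerate(np.array_split(order, k)):
        token = f"<rw_{q}>"                   # hypothetical reward token
        conditioned += [(f"{token} {samples[i]}", q) for i in idx]
    return conditioned

samples = ["a", "b", "c", "d"]
rewards = [0.9, 0.1, 0.5, 0.7]
print(quantize_by_reward(samples, rewards, k=2))
# [('<rw_0> b', 0), ('<rw_0> c', 0), ('<rw_1> d', 1), ('<rw_1> a', 1)]
```

In the full algorithm these token-prefixed samples feed a standard language-modeling loss with a KL penalty, and generation conditions on the highest-reward token.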

There has been a significant research effort focused on explaining predictive models, for example through post-hoc explainability and recourse methods. Most of the proposed techniques operate upon a single, fixed, predictive model. However, it is well-known that given a dataset and a predictive task, there may be a multiplicity of models that solve the problem (nearly) equally well. In this work, we investigate the implications of this kind of model indeterminacy on the post-hoc explanations of predictive models. We show how it can lead to explanatory multiplicity, and we explore the underlying drivers. We show how predictive multiplicity, and the related concept of epistemic uncertainty, are not reliable indicators of explanatory multiplicity. We further illustrate how a set of models showing very similar aggregate performance on a test dataset may show large variations in their local explanations, i.e., for a specific input. We explore these effects for Shapley value based explanations on three risk assessment datasets. Our results indicate that model indeterminacy may have a substantial impact on explanations in practice, leading to inconsistent and even contradictory explanations.

Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire very different strategies from humans. We show that co-training these agents on predicting representations from natural language task descriptions and programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.

We explore how generating a chain of thought---a series of intermediate reasoning steps---significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
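The prompting format described above can be shown in miniature: the few-shot exemplar spells out intermediate steps before the final answer, and the new question is appended so the model imitates that format. The exemplar wording follows the commonly cited tennis-ball example; the second question is made up.

```python
# A chain-of-thought exemplar: reasoning steps precede "The answer is ...".
exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

# A new question in the same format; the model is expected to continue
# with its own chain of thought after "A:".
question = "Q: A baker made 24 rolls and sold 15. How many rolls are left?\nA:"

prompt = exemplar + "\n" + question
print(prompt)
```

The only change versus standard few-shot prompting is that the exemplar's answer includes the intermediate reasoning, which is what elicits step-by-step generation in sufficiently large models.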

The availability of large pre-trained models is changing the landscape of Machine Learning research and practice, moving from a "training from scratch" to a "fine-tuning" paradigm. While in some applications the goal is to "nudge" the pre-trained distribution towards preferred outputs, in others it is to steer it towards a different distribution over the sample space. Two main paradigms have emerged to tackle this challenge: Reward Maximization (RM) and, more recently, Distribution Matching (DM). RM applies standard Reinforcement Learning (RL) techniques, such as Policy Gradients, to gradually increase the reward signal. DM prescribes to first make explicit the target distribution that the model is fine-tuned to approximate. Here we explore the theoretical connections between the two paradigms and show that methods such as KL-control developed in the RM paradigm can also be construed as belonging to DM. We further observe that while DM differs from RM, it can suffer from similar training difficulties, such as high gradient variance. We leverage connections between the two paradigms to import the concept of baseline into DM methods. We empirically validate the benefits of adding a baseline on an array of controllable language generation tasks such as constraining topic, sentiment, and gender distributions in …
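The baseline trick being imported here can be demonstrated on a toy score-function estimator: subtracting the mean reward leaves the gradient estimate unbiased while cutting its variance. The setup below is synthetic, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
score = rng.standard_normal(n)             # stand-in for grad log pi(a)
reward = 5.0 + rng.standard_normal(n)      # rewards with a large constant offset

g_plain = reward * score                   # vanilla score-function estimator
g_base = (reward - reward.mean()) * score  # same estimator with a mean baseline

print(abs(g_plain.mean() - g_base.mean()) < 0.05)  # unbiasedness preserved
print(g_base.var() < g_plain.var())                # variance reduced
```

The constant offset in the reward is exactly what a baseline removes; in DM methods the analogous offset appears in the importance-weighted objective.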

Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias and therefore many alternative solutions have been proposed through differential optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly. This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed-form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to the performance. The closed-form implementation and …
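The closed-form separation idea can be checked directly: for k classes, maximally separated unit vectors form a regular simplex with pairwise cosine similarity -1/(k-1). The construction below (centering the identity's rows and normalizing) is one simple way to obtain such vectors, not necessarily the paper's recursion.

```python
import numpy as np

def max_separation_vectors(k):
    """k unit vectors with pairwise cosine -1/(k-1): a regular simplex."""
    V = np.eye(k) - np.full((k, k), 1.0 / k)   # center each basis vector
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    return V                                   # rows lie in a (k-1)-dim subspace

V = max_separation_vectors(5)
G = V @ V.T                                    # Gram matrix of the class vectors
print(np.allclose(np.diag(G), 1.0))            # True: unit norm
off_diag = G[~np.eye(5, dtype=bool)]
print(np.allclose(off_diag, -1.0 / 4))         # True: pairwise cos = -1/(k-1)
```

Plugging such a fixed matrix in before the softmax is the paper's one-line intervention; the vectors need no training, which is why the matrix can stay fixed.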

A Bayesian coreset is a small, weighted subset of data that replaces the full dataset during Bayesian inference, with the goal of reducing computational cost. Although past work has shown empirically that there often exists a coreset with low inferential error, efficiently constructing such a coreset remains a challenge. Current methods tend to be slow, require a secondary inference step after coreset construction, and do not provide bounds on the data marginal evidence. In this work, we introduce a new method---sparse Hamiltonian flows---that addresses all three of these challenges. The method involves first subsampling the data uniformly, and then optimizing a Hamiltonian flow parametrized by coreset weights and including periodic momentum quasi-refreshment steps. Theoretical results show that the method enables an exponential compression of the dataset in a representative model, and that the quasi-refreshment steps reduce the KL divergence to the target. Real and synthetic experiments demonstrate that sparse Hamiltonian flows provide accurate posterior approximations with significantly reduced runtime compared with competing dynamical-system-based inference methods.

In any given machine learning problem, there may be many models that could explain the data almost equally well. However, most learning algorithms return only one of these models, leaving practitioners with no practical way to explore alternative models that might have desirable properties beyond what could be expressed within a loss function. The Rashomon set is the set of all these almost-optimal models. Rashomon sets can be extremely complicated, particularly for highly nonlinear function classes that allow complex interaction terms, such as decision trees. We provide the first technique for completely enumerating the Rashomon set for sparse decision trees; in fact, our work provides the first complete enumeration of any Rashomon set for a non-trivial problem with a highly nonlinear discrete function class. This allows the user an unprecedented level of control over model choice among all models that are approximately equally good. We represent the Rashomon set in a specialized data structure that supports efficient querying and sampling. We show three applications of the Rashomon set: 1) it can be used to study variable importance for the set of almost-optimal trees (as opposed to a single tree), 2) the Rashomon set for accuracy enables enumeration of the Rashomon …
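The Rashomon set itself is easy to state concretely: all models whose loss is within a tolerance of the best. The toy below uses made-up model names and losses; the paper's contribution is enumerating this set over the enormous space of sparse decision trees rather than a small explicit list.

```python
# Rashomon set: models within eps of the best achievable loss.
losses = {"tree_a": 0.101, "tree_b": 0.103, "tree_c": 0.150, "tree_d": 0.104}
eps = 0.005

best = min(losses.values())
rashomon = {m for m, l in losses.items() if l <= best + eps}

print(sorted(rashomon))  # ['tree_a', 'tree_b', 'tree_d']
```

With the set in hand, one can ask questions like "which variables matter across all near-optimal trees?" rather than trusting a single returned model.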

Cascading bandits is a natural and popular model that frames the task of learning to rank from Bernoulli click feedback in a bandit setting. For the case of unstructured rewards, we prove matching upper and lower bounds for the problem-independent (i.e., gap-free) regret, both of which strictly improve the best known. A key observation is that the hard instances of this problem are those with small mean rewards, i.e., the small click-through rates that are most relevant in practice. Based on this, and the fact that small mean implies small variance for Bernoullis, our key technical result shows that variance-aware confidence sets derived from the Bernstein and Chernoff bounds lead to optimal algorithms (up to log terms), whereas Hoeffding-based algorithms suffer order-wise suboptimal regret. This sharply contrasts with the standard (non-cascading) bandit setting, where the variance-aware algorithms only improve constants. In light of this and as an additional contribution, we propose a variance-aware algorithm for the structured case of linear rewards and show its regret strictly improves the state-of-the-art.
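The advantage of variance-aware confidence widths at small means is easy to verify numerically. The formulas below are the standard textbook Hoeffding and empirical-Bernstein widths, not the paper's exact bounds.

```python
import math

def hoeffding_width(n, delta):
    # Variance-agnostic: depends only on n and delta.
    return math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def bernstein_width(p, n, delta):
    # Variance-aware: for Bernoulli(p), variance p(1-p) is small when p is.
    var = p * (1.0 - p)
    return (math.sqrt(2.0 * var * math.log(1.0 / delta) / n)
            + 2.0 * math.log(1.0 / delta) / (3.0 * n))

n, delta, p = 10_000, 0.05, 0.01       # small click-through rate
print(bernstein_width(p, n, delta) < hoeffding_width(n, delta))  # True
```

For p = 0.01 the Bernstein width is several times tighter, which is the mechanism behind the order-wise regret gap the abstract describes.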

There are over 300 sign languages in the world, many of which have very limited or no labelled sign-to-text datasets. To address low-resource data scenarios, self-supervised pretraining and multilingual finetuning have been shown to be effective in natural language and speech processing. In this work, we apply these ideas to sign language recognition. We make three contributions. First, we release SignCorpus, a large pretraining dataset on sign languages comprising about 4.6K hours of signing data across 10 sign languages. SignCorpus is curated from sign language videos on the internet, filtered for data quality, and converted into sequences of pose keypoints, thereby removing all personally identifiable information (PII). Second, we release Sign2Vec, a graph-based model with 5.2M parameters that is pretrained on SignCorpus. We envisage Sign2Vec as a multilingual large-scale pretrained model which can be fine-tuned for various sign recognition tasks across languages. Third, we create MultiSign-ISLR, a multilingual and label-aligned dataset of sequences of pose keypoints from 11 labelled datasets across 7 sign languages, and MultiSign-FS, a new finger-spelling training and test set across 7 languages. On these datasets, we fine-tune Sign2Vec to create multilingual isolated sign recognition models. With experiments on multiple benchmarks, we show that pretraining and …

This paper studies the fundamental problem of learning an energy-based model (EBM) in the latent space of a generator model. Learning such a prior model typically requires running costly Markov chain Monte Carlo (MCMC). Instead, we propose to use noise contrastive estimation (NCE) to discriminatively learn the EBM through density ratio estimation between the latent prior density and the latent posterior density. However, NCE typically fails to accurately estimate this density ratio when the gap between the two densities is large. To tackle this issue and learn a more expressive prior model, we develop adaptive multi-stage density ratio estimation, which breaks the estimation into multiple stages and learns the stages of the density ratio sequentially and adaptively. The latent prior model can be gradually learned using the ratio estimated in the previous stage, so that the final latent-space EBM prior is naturally formed as the product of the ratios from the different stages. The proposed method yields a more informative and much sharper prior than existing baselines, and can be trained efficiently. Our experiments demonstrate strong performance in image generation and reconstruction as well as anomaly detection.
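The density-ratio-via-classifier idea underlying NCE can be checked on a one-dimensional toy, independent of the paper's latent-space model: a logistic classifier between samples from p and q recovers log p(x)/q(x) as its logit. With p = N(0,1) and q = N(1,1), the true log ratio is 0.5 - x, so the fitted linear logit should have slope near -1 and intercept near 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
xp = rng.normal(0.0, 1.0, n)               # samples from p (label 1)
xq = rng.normal(1.0, 1.0, n)               # samples from q (label 0)
x = np.concatenate([xp, xq])
y = np.concatenate([np.ones(n), np.zeros(n)])

w, b = 0.0, 0.0                            # logit(x) = w*x + b
for _ in range(3000):                      # plain gradient descent on logistic loss
    p_hat = 1.0 / (1.0 + np.exp(-(w * x + b)))
    err = p_hat - y
    w -= 0.5 * np.mean(err * x)
    b -= 0.5 * np.mean(err)

# The learned logit approximates the true log density ratio 0.5 - x.
print(abs(w - (-1.0)) < 0.2, abs(b - 0.5) < 0.2)
```

The paper's multi-stage scheme chains such ratio estimates so that each stage only has to bridge a small density gap.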

Reinforcement learning (RL) with diverse offline datasets can have the advantage of leveraging the relation of multiple tasks and the common skills learned across those tasks, hence allowing us to deal with real-world complex problems efficiently in a data-driven way. In offline RL, where only offline data is used and online interaction with the environment is restricted, it remains difficult to achieve the optimal policy for multiple tasks, especially when data quality varies across tasks. In this paper, we present a skill-based multi-task RL technique for heterogeneous datasets that are generated by behavior policies of different quality. To learn shareable knowledge across those datasets effectively, we employ a task decomposition method in which common skills are jointly learned and used as guidance to reformulate a task into shared, achievable subtasks. In this joint learning, we use a Wasserstein auto-encoder (WAE) to represent both skills and tasks in the same latent space, and use a quality-weighted loss as a regularization term to induce tasks to be decomposed into subtasks that are more consistent with high-quality skills than others. To improve the performance of offline RL agents learned on the latent space, we also augment datasets with imaginary …

Satellite imagery is increasingly available, high resolution, and temporally detailed. Changes in spatio-temporal datasets such as satellite images are particularly interesting as they reveal the many events and forces that shape our world. However, finding such interesting and meaningful change events in the vast data is challenging. In this paper, we present new datasets for such change events that include semantically meaningful events like road construction. Instead of manually annotating the very large corpus of satellite images, we introduce a novel unsupervised approach that takes a large spatio-temporal dataset of satellite images and finds interesting change events. To evaluate the meaningfulness of these datasets, we create two benchmarks, CaiRoad and CalFire, which capture events of road construction and forest fires. These new benchmarks can be used to evaluate semantic retrieval/classification performance. We explore these benchmarks qualitatively and quantitatively using several methods and show that these new datasets are indeed challenging for many existing methods.

Bayesian coresets approximate a posterior distribution by building a small weighted subset of the data points. Any inference procedure that is too computationally expensive to be run on the full posterior can instead be run inexpensively on the coreset, with results that approximate those on the full data. However, current approaches are limited by either a significant run-time or the need for the user to specify a low-cost approximation to the full posterior. We propose a Bayesian coreset construction algorithm that first selects a uniformly random subset of data, and then optimizes the weights using a novel quasi-Newton method. Our algorithm is a simple-to-implement, black-box method that does not require the user to specify a low-cost posterior approximation. It is the first to come with a general high-probability bound on the KL divergence of the output coreset posterior. Experiments demonstrate that our method provides significant improvements in coreset quality against alternatives with comparable construction times, with far less storage cost and user input required.
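The first half of this recipe, a uniform subsample with weights N/m, can be sketched directly: the weighted subset log-likelihood is an unbiased estimate of the full-data log-likelihood. The subsequent quasi-Newton weight optimization is the paper's contribution and is omitted here; all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 100_000, 2_000
x = rng.normal(2.0, 1.0, N)                    # full dataset

def loglik(theta, data, weights):
    # Weighted Gaussian log-likelihood (up to additive constants).
    return float(np.sum(weights * (-0.5 * (data - theta) ** 2)))

idx = rng.choice(N, size=m, replace=False)     # uniform subsample
w = np.full(m, N / m)                          # uniform coreset weights

full = loglik(2.0, x, np.ones(N))
core = loglik(2.0, x[idx], w)
print(abs(core - full) / abs(full) < 0.15)     # True: close before any tuning
```

Optimizing `w` away from the uniform N/m initialization is what drives the coreset posterior's KL divergence down further.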

COMMENTS

  1. How to Write a Descriptive Essay

    Tips for writing descriptively. The key to writing an effective descriptive essay is to find ways of bringing your subject to life for the reader. You're not limited to providing a literal description as you would be in more formal essay types. Make use of figurative language, sensory details, and strong word choices to create a memorable ...

  3. Descriptive Essay: Definition, Format & Writing Tips

    A descriptive essay is one of the four main types of essays, alongside narrative, argumentative, and expository essays. Among these, descriptive essays can be particularly challenging because they demand a keen eye for detail and an appreciation for aesthetics. By vividly describing scenes and details, you engage your reader's senses, making ...

  4. Guide to a Perfect Descriptive Essay [Examples & Outline Included]

    The use of literary devices such as personification and metaphor makes the banyan tree in the second example come to life. This is how you can make your writing more vivid, descriptive, and poetic. 2. Use your senses. Sensory descriptors are one of the most important aspects of a descriptive essay.

  5. Descriptive Writing Task: How To Describe an Object

    Write 2 paragraphs describing the object. Try to use as many techniques as possible. Write as quickly as you can. Each paragraph should be on a different topic. For instance, you might choose to structure this as the item now vs it in the past, physical description vs memory, or zooming in on different details. Edit your work.

  6. What is a Descriptive Essay? How to Write It (with Examples)

    A descriptive essay's primary goal is to captivate the reader by writing a thorough and vivid explanation of the subject matter, while appealing to their various senses. A list of additional goals is as follows: - Spark feeling and imagination. - Create a vivid experience. - Paint a mental picture. - Pique curiosity.

  7. How to Write a Descriptive Observational Essay

    Your first task in writing a descriptive essay is to choose a topic that has many interesting parts or qualities to talk about. Unless you have a really vivid imagination, you'll find it difficult to write much about a simple object like a comb, for example. It's best to compare a few topics first to make sure they'll work.

  8. Descriptive Essay

    A descriptive essay is a detailed paper that describes a place, person, situation, object, or emotion. Different people have different points of view and your job is to explain yours in detail. You may be asked to write a descriptive essay about the beach or forest or about a person or situation.

  9. How to Write a Descriptive Essay: Writing Tips & Examples

    Objects and Items. Descriptive essays about objects and items give you the opportunity to examine everyday items or objects with personal significance in a new light. From a cherished family heirloom to a simple household item, the possibilities are endless. To create a vivid description, focus on sensory details such as texture, color, smell ...

  10. Descriptive Essays

    The descriptive essay is a genre of essay that asks the student to describe something—object, person, place, experience, emotion, situation, etc. This genre encourages the student's ability to create a written account of a particular experience. What is more, this genre allows for a great deal of artistic freedom (the goal of which is to ...

  11. Descriptive Essay

    A descriptive essay describes an object, person, place, or event that the writer has experienced. Writers use illustrative language to "show" the reader that topic that is described in the essay.

  12. How to Write a Descriptive Essay

    This brainstorming will give you the raw material for your descriptive essay. The next step is to create an essay outline. Typically, this will include: An Introduction - An outline of what you will describe and the "thesis" for your essay (i.e., a key theme that will run through your essay and guide your description). For instance, if ...

  13. What Is a Descriptive Essay? Examples and Guide

    A descriptive essay is a type of essay that involves describing a person, object, or any type of noun. We guide you through writing one with examples. Dictionary Thesaurus Sentences Grammar ... With a better understanding of how to approach a descriptive essay, you're ready to prosper and write an essay of your own. We can't write your ...

  14. How to Write a Descriptive Essay

    Descriptive Essay Definition. A descriptive essay is a type of paper where the writer describes an experience, person, place, or object (the essay topic) in great detail.. Overview of a Descriptive Essay. A descriptive essay is written in order to have the reader experience a person, place, object, event, or thing just as the writer did. In a descriptive essay, the writer uses several ...

  15. 50 Descriptive Essay Topics

    Descriptive Essay Topics: Objects. Describe an object that is special to you. Give a tour of one room in your house by describing the most important objects in that room. Describe one of your favorite outfits. Describe your favorite toy as a child. Describe how you get around (for example: a bicycle, skateboard, sneakers, your parents' car ...

  16. 15 Good Descriptive Essay Examples for All Students

    Descriptive Essay Example 5 Paragraph. 5 paragraphs essay writing format is the most common method of composing an essay. This format has 5 paragraphs in total. The sequence of the paragraphs is as follows; Introduction. Body Paragraph 1. Body Paragraph 2. Body Paragraph 3. Conclusion.

  17. Descriptive Essay About An Object Example

A descriptive essay is a kind of writing which describes something. It may be an object, person, place, situation, emotion, etc. It is written in a way that makes it understandable to all parties involved. The descriptive essay is a kind of creative writing, which is aimed to make the audience feel the same way you do.

  18. 3.5: Descriptive Essays

    Writing a Description Essay. Choosing a subject is the first step in writing a description essay. Once you have chosen the person, place, or object you want to describe, your challenge is to write an effective thesis statement to guide your essay. The remainder of your essay describes your subject in a way that best expresses your thesis.

  19. How to Write a Descriptive Essay in 7 Steps

    How to Write a Descriptive Essay in 7 Steps. Written by MasterClass. Last updated: Jun 7, 2021 • 3 min read. Descriptive essays teach students the basics of writing and self-expression. Depending on your line of work and your writing goals, you may continue writing descriptive essays well into your professional career.

  20. How to Start a Descriptive Essay: 12 Steps (with Pictures)

    3. Use sensory details. A key element of a good descriptive essay is a lot of details that focus on the five senses: smell, taste, touch, sight, and sound. Put a lot of sensory details into your opening paragraph. Describe how a scene sounds or tastes. Discuss how an object feels or smells.

  21. 20 Descriptive Essay Examples for Your Help

    3. Write a Thesis Statement. It is the most important part of any essay. When you are planning a descriptive essay, you need to come up with a strong thesis statement. A thesis statement is usually one or two sentences that explain the whole point of your essay to the reader. 4.

  22. How to Write a Descriptive Essay: Examples & Tips

    Choose an Engaging Topic. Selecting the right topic is the crucial first step in writing a descriptive essay. Your topic should be captivating, drawing the reader in and keeping them engaged throughout the essay. A well-chosen topic sets the stage for an immersive and memorable descriptive experience. Step# 2.

  23. IGCSE English Language: Writing Techniques for Descriptive Essays

    Descriptive essays are an essential part of the IGCSE English Language exam. They require you to vividly describe a person, place, object, or experience using sensory details and figurative language. Here are some techniques to help you write effective descriptive essays for the IGCSE exam: 1. Utilize Sensory Language

  24. Descriptive Essay

    Describe the setting, the situation, or the object of the essay. Thesis Statement: ... here is a step-by-step guideline to help you in composing a descriptive essay worth reading. 1. Choose a topic. ... Write a descriptive essay about a place you love to visit and what makes it special. Describe in a descriptive essay your dream job and what it ...

  25. Examples of language devices

    Hi there! You've got a great approach by seeking to incorporate language devices in your college essays to make your writing more engaging. Here are some examples of devices that could help elevate your work: 1. Imagery - Use vivid and descriptive language that helps paint a picture for your reader. This can include sensory details, such as sight, sound, taste, touch, and smell.

  30. NeurIPS 2022 Oral-Equivalent Papers

    The new generation of state-of-the-art computer vision systems are trained from natural language supervision, ranging from simple object category names to descriptive captions. This form of supervision ensures high generality and usability of the learned visual models, based on the broad concept coverage achieved through large-scale data ...