REPORTED SPEECH

Practice how to change direct speech into reported speech.

  • "You can have this bag". She told him that ... he COULD have the bag
  • "I'm arresting you" The police officer told him... (that) he was arresting him
  • "You stole the bag". The police officer said that .... he had stolen the bag

reported speech online wordwall

  • "I'm innocent!" She said that... she was innocent
  • "Criminals always pretend they haven't done anything wrong!". The officer said that criminals... ...always pretended they hadn't done anything wrong.

reported speech online wordwall

  • "I don't want to watch a film". Mike said that... he didn't want to watch a film

reported speech online wordwall

  • "I'm going to ring my mum". She said that... ... she was going to ring her mum.
  • "I can't find my mobile" . Nam said that ... ...he couldn't find his mobile.
  • "My mother doesn't have my best friend's number on her phone" said John John said that his mother didn't have his best friend's number on her phone.
  • "I'm going to Lan's house" said Mike. Mike said that he was going to Lan's house.
  • "I want to catch the shoplifter" said the security guard. The security guard said that he wanted to catch the shoplifter.
  • "The police arrested a vandal" she said. She said that the police had arrested a vandal.
  • "It's difficult to catch drug dealers" said the police inspector. The police inspector said that it was difficult to catch drug dealers.
  • "We are questioning two teenagers about the burglary" the police officer said. The police officer said that they were questioning two teenagers about the burglary.
  • "I sometimes go joyriding with my friends" he said. He said that he sometimes went joyriding with his friends.
  • "The police are looking for the bank robbers" she said. She said that the police were looking for the bank robbers.

reported speech online wordwall


Reported speech

Daisy has just had an interview for a summer job. 

Instructions

As you watch the video, look at the examples of reported speech. They are in  red  in the subtitles. Then read the conversation below to learn more. Finally, do the grammar exercises to check you understand, and can use, reported speech correctly.

Sophie:  Mmm, it’s so nice to be chilling out at home after all that running around.

Ollie: Oh, yeah, travelling to glamorous places for a living must be such a drag!

Ollie: Mum, you can be so childish sometimes. Hey, I wonder how Daisy’s getting on in her job interview.

Sophie: Oh, yes, she said she was having it at four o’clock, so it’ll have finished by now. That’ll be her ... yes. Hi, love. How did it go?

Daisy: Well, good I think, but I don’t really know. They said they’d phone later and let me know.

Sophie: What kind of thing did they ask you?

Daisy: They asked if I had any experience with people, so I told them about helping at the school fair and visiting old people at the home, that sort of stuff. But I think they meant work experience.

Sophie: I’m sure what you said was impressive. They can’t expect you to have had much work experience at your age.

Daisy:  And then they asked me what acting I had done, so I told them that I’d had a main part in the school play, and I showed them a bit of the video, so that was cool.

Sophie:  Great!

Daisy: Oh, and they also asked if I spoke any foreign languages.

Sophie: Languages?

Daisy: Yeah, because I might have to talk to tourists, you know.

Sophie: Oh, right, of course.

Daisy: So that was it really. They showed me the costume I’ll be wearing if I get the job. Sending it over ...

Ollie: Hey, sis, I heard that Brad Pitt started out as a giant chicken too! This could be your big break!

Daisy: Ha, ha, very funny.

Sophie: Take no notice, darling. I’m sure you’ll be a marvellous chicken.

We use reported speech when we want to tell someone what someone said. We usually use a reporting verb (e.g. say, tell, ask, etc.) and then change the tense of what was actually said in direct speech.

So, direct speech is what someone actually says? Like 'I want to know about reported speech'?

Yes, and you report it with a reporting verb.

He said he wanted to know about reported speech.

I said 'I want' and you changed it to 'he wanted'.

Exactly. Verbs in the present simple change to the past simple; the present continuous changes to the past continuous; the present perfect changes to the past perfect; can changes to could; will changes to would; etc.

She said she was having the interview at four o’clock. (Direct speech: 'I’m having the interview at four o’clock.')

They said they’d phone later and let me know. (Direct speech: 'We’ll phone later and let you know.')

OK, in that last example, you changed you to me too.

Yes, apart from changing the tense of the verb, you also have to think about changing other things, like pronouns and adverbs of time and place.

'We went yesterday.' > She said they had been the day before.

'I’ll come tomorrow.' > He said he’d come the next day.

I see, but what if you’re reporting something on the same day, like 'We went yesterday'?

Well, then you would leave the time reference as 'yesterday'. You have to use your common sense. For example, if someone is saying something which is true now or always, you wouldn’t change the tense.

'Dogs can’t eat chocolate.' > She said that dogs can’t eat chocolate.

'My hair grows really slowly.' > He told me that his hair grows really slowly.

What about reporting questions?

We often use ask + if/whether , then change the tenses as with statements. In reported questions we don’t use question forms after the reporting verb.

'Do you have any experience working with people?' They asked if I had any experience working with people.

'What acting have you done?' They asked me what acting I had done.

Is there anything else I need to know about reported speech?

One thing that sometimes causes problems is imperative sentences.

You mean like 'Sit down, please' or 'Don’t go!'?

Exactly. Sentences that start with a verb in direct speech need a to + infinitive in reported speech.

She told him to be good. (Direct speech: 'Be good!')

He told them not to forget. (Direct speech: 'Please don’t forget.')

OK. Can I also say 'He asked me to sit down'?

Yes. You could say 'He told me to …' or 'He asked me to …' depending on how it was said.

OK, I see. Are there any more reporting verbs?

Yes, there are lots of other reporting verbs like promise , remind , warn , advise , recommend , encourage which you can choose, depending on the situation. But say , tell and ask are the most common.

Great. I understand! My teacher said reported speech was difficult.

And I told you not to worry!
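
For readers who like to see the rules in one place, the backshift described in this conversation can be summarised as a simple lookup table. The sketch below is purely illustrative (real reported speech also needs the pronoun, time and place changes and the common-sense exceptions mentioned above):

```python
# Illustrative summary of the backshift rules from the conversation above.
# Real reported speech also changes pronouns and time/place expressions,
# and keeps the original tense when something is still true.
BACKSHIFT = {
    "present simple": "past simple",          # 'I want'       -> he wanted
    "present continuous": "past continuous",  # 'I'm having'   -> she was having
    "present perfect": "past perfect",        # 'I've done'    -> I had done
    "past simple": "past perfect",            # 'we went'      -> they had been
    "will": "would",                          # 'we'll phone'  -> they would phone
    "can": "could",                           # 'I can't find' -> he couldn't find
}

TIME_AND_PLACE = {
    "yesterday": "the day before",
    "tomorrow": "the next day",
    "here": "there",
    "this": "that",
}

print(BACKSHIFT["present simple"])   # -> past simple
print(TIME_AND_PLACE["tomorrow"])    # -> the next day
```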

  • Check your grammar: matching
  • Check your grammar: error correction
  • Check your grammar: gap fill
  • Worksheets and downloads

What was the most memorable conversation you had yesterday? Who were you talking to and what did they say to you?


Reported Speech Exercises (Perfect English Grammar)

Here's a list of all the reported speech exercises on this site:

(Click here to read the explanations about reported speech.)

Reported Statements:

  • Present Simple Reported Statement Exercise (quite easy) (in PDF here)
  • Present Continuous Reported Statement Exercise (quite easy) (in PDF here)
  • Past Simple Reported Statement Exercise (quite easy) (in PDF here)
  • Present Perfect Reported Statement Exercise (quite easy) (in PDF here)
  • Future Simple Reported Statement Exercise (quite easy) (in PDF here)
  • Mixed Tense Reported Statement Exercise (intermediate) (in PDF here)
  • 'Say' and 'Tell' (quite easy) (in PDF here)

Reported Questions:

  • Present Simple Reported Yes/No Question Exercise (intermediate) (in PDF here)
  • Present Simple Reported Wh Question Exercise (intermediate) (in PDF here)
  • Mixed Tense Reported Question Exercise (intermediate) (in PDF here)

Reported Orders and Requests:

  • Reported Requests and Orders Exercise (intermediate) (in PDF here)
  • Reported Speech Mixed Exercise 1 (difficult) (in PDF here)
  • Reported Speech Mixed Exercise 2 (difficult) (in PDF here)


English Grammar Online Exercises and Downloadable Worksheets

Online exercises: Reported Speech

Levels of difficulty: Elementary, Intermediate, Advanced

  • RS012 - Reported Speech Intermediate
  • RS011 - Reported Speech Intermediate
  • RS010 - Reporting Verbs Advanced
  • RS009 - Reporting Verbs Advanced
  • RS008 - Reporting Verbs Advanced
  • RS007 - Reporting Verbs Intermediate
  • RS006 - Reported Speech Intermediate
  • RS005 - Reported Speech - Introductory Verbs Advanced
  • RS004 - Reported Speech Intermediate
  • RS003 - Reporting Verbs Intermediate
  • RS002 - Reported Speech Intermediate
  • RS001 - Reported Speech Intermediate


Reported Speech – Free Exercise

Write the following sentences in indirect speech. Pay attention to backshift and the changes to pronouns, time, and place.

  • Two weeks ago, he said, “I visited this museum last week.” → Two weeks ago, he said that ____. (Hints: I → he; simple past → past perfect; this → that; last … → the … before)
  • She claimed, “I am the best for this job.” → She claimed that ____. (Hints: I → she; simple present → simple past; this → that)
  • Last year, the minister said, “The crisis will be overcome next year.” → Last year, the minister said that ____. (Hints: will → would; next … → the following …)
  • My riding teacher said, “Nobody has ever fallen off a horse here.” → My riding teacher said that ____. (Hints: present perfect → past perfect; here → there)
  • Last month, the boss explained, “None of my co-workers has to work overtime now.” → Last month, the boss explained that ____. (Hints: my → his/her; simple present → simple past; now → then)

Rewrite the question sentences in indirect speech.

  • She asked, “What did he say?” → She asked ____. (Hints: the subject comes directly after the question word; simple past → past perfect)
  • He asked her, “Do you want to dance?” → He asked her ____. (Hints: the subject comes directly after whether/if; you → she; simple present → simple past)
  • I asked him, “How old are you?” → I asked him ____. (Hints: the subject comes directly after the question word plus the corresponding adjective (how old); you → he; simple present → simple past)
  • The tourists asked me, “Can you show us the way?” → The tourists asked me ____. (Hints: the subject comes directly after whether/if; you → I; us → them)
  • The shop assistant asked the woman, “Which jacket have you already tried on?” → The shop assistant asked the woman ____. (Hints: the subject comes directly after the question word; you → she; present perfect → past perfect)

Rewrite the demands/requests in indirect speech.

  • The passenger requested the taxi driver, “Stop the car.” → The passenger requested the taxi driver ____. (Hint: to + same wording as in direct speech)
  • The mother told her son, “Don’t be so loud.” → The mother told her son ____. (Hint: not to + same wording as in direct speech, but remove don’t)
  • The policeman told us, “Please keep moving.” → The policeman told us ____. (Hint: to + same wording as in direct speech; please can be left off)
  • She told me, “Don’t worry.” → She told me ____. (Hint: not to + same wording as in direct speech, but remove don’t)
  • The zookeeper told the children, “Don’t feed the animals.” → The zookeeper told the children ____. (Hint: not to + same wording as in direct speech, but remove don’t)


Open access | Published: 26 April 2024

Online speech synthesis using a chronically implanted brain–computer interface in an individual with ALS

Miguel Angrick, Shiyu Luo, Qinwan Rabbani, Daniel N. Candrea, Samyak Shah, Griffin W. Milsap, William S. Anderson, Chad R. Gordon, Kathryn R. Rosenblatt, Lora Clawson, Donna C. Tippett, Nicholas Maragakis, Francesco V. Tenore, Matthew S. Fifer, Hynek Hermansky, Nick F. Ramsey & Nathan E. Crone

Scientific Reports volume 14, Article number: 9617 (2024)


Subjects: Amyotrophic lateral sclerosis, Neuroscience

Brain–computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired articulation due to ALS, participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant’s voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.


Introduction

A variety of neurological disorders, including amyotrophic lateral sclerosis (ALS), can severely affect speech production and other purposeful movements while sparing cognition. This can result in varying degrees of communication impairments, including Locked-In Syndrome (LIS) 1 , 2 , in which patients can only answer yes/no questions or select from sequentially presented options using eyeblinks, eye movements, or other residual movements. Individuals such as these may use augmentative and alternative technologies (AAT) to select among options on a communication board, but this communication can be slow, effortful, and may require caregiver intervention. Recent advances in implantable brain-computer interfaces (BCIs) have demonstrated the feasibility of establishing and maintaining communication using a variety of direct brain control strategies that bypass weak muscles, for example to control a switch scanner 3 , 4 , a computer cursor 5 , to write letters 6 or to spell words using a hybrid approach of eye-tracking and attempted movement detection 7 . However, these communication modalities are still slower, more effortful, and less intuitive than speech-based BCI control 8 .

Recent studies have also explored the feasibility of decoding attempted speech from brain activity, outputting text or even acoustic speech, which could potentially carry more linguistic information such as intonation and prosody. Previous studies have reconstructed acoustic speech in offline analysis from linear regression models 9 , convolutional 10 and recurrent neural networks 11 , 12 , and encoder-decoder architectures 13 . Concatenative approaches from the text-to-speech synthesis domain have also been explored 14 , 15 , and voice activity has been identified in electrocorticographic (ECoG) 16 and stereotactic EEG recordings 17 . Moreover, speech decoding has been performed at the level of American English phonemes 18 , spoken vowels 19 , 20 , spoken words 21 and articulatory gestures 22 , 23 .

Until now, brain-to-speech decoding has primarily been reported in individuals with unimpaired speech, such as patients temporarily implanted with intracranial electrodes for epilepsy surgery. To date, it is unclear to what extent these findings will ultimately translate to individuals with motor speech impairments, as in ALS and other neurological disorders. Recent studies have demonstrated how neural activity acquired from an ECoG grid 24 or from microelectrodes 25 can be used to recover text from a patient with anarthria due to a brainstem stroke, or from a patient with dysarthria due to ALS, respectively. Prior to these studies, a landmark study allowed a locked-in volunteer to control a real-time synthesizer generating vowel sounds 26 . More recently, Metzger et al. 27 demonstrated in a clinical trial participant diagnosed with quadriplegia and anarthria a multimodal speech-neuroprosthetic system that was capable of synthesizing sentences in a cued setting from silent speech attempts. In our prior work, we presented a ‘plug-and-play’ system that allowed a clinical trial participant living with ALS to issue commands to external devices, such as a communication board, by using speech as a control mechanism 28 .

In related work, BCIs based on non-invasive modalities, such as electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS) or functional magnetic resonance imaging (fMRI), have been investigated for speech decoding applications. These studies have largely focused on imagined speech 29 to avoid contamination by movement artifacts 30 . Recent work by Dash et al., for example, reported speech decoding results for imagined and spoken phrases from 3 ALS patients using magnetoencephalography (MEG) 31 . While speech decoding based on non-invasive methodologies is an important branch of the BCI field, since these methods do not require surgery and may be adopted by a larger population more easily, their current state of the art has disadvantages compared to implantable BCIs: they lack either temporal or spatial resolution, or are not yet feasible for use at home.

Here, we show that an individual living with ALS and participating in a clinical trial of an implantable BCI (ClinicalTrials.gov, NCT03567213) was able to produce audible, intelligible words that closely resembled his own voice, spoken at his own pace. Speech synthesis was accomplished through online decoding of ECoG signals generated during overt speech production from cortical regions previously shown to represent articulation and phonation, following similar previous work 11 , 19 , 32 , 33 . Our participant had considerable impairments in articulation and phonation. He was still able to produce some words that were intelligible when spoken in isolation, but his sentences were often unintelligible. Here, we focused on a closed vocabulary of 6 keywords, originally used for decoding spoken commands to control a communication board. Our participant was capable of producing these 6 keywords individually with a high degree of intelligibility. We acquired training data over a period of 6 weeks and deployed the speech synthesis BCI in several separate closed-loop sessions. Since the participant could still produce speech, we were able to easily and reliably time-align the individual’s neural and acoustic signals to enable a mapping between his cortical activity during overt speech production processes and his voice’s acoustic features. We chose to provide delayed rather than simultaneous auditory feedback in anticipation of ongoing deterioration in the patient’s speech due to ALS, with increasing discordance and interference between actual and BCI-synthesized speech. This design choice would be ideal for a neuroprosthetic device that remains capable of producing intelligible words as an individual’s speech becomes increasingly unintelligible, as was expected in our participant due to ALS.

Here, we present a self-paced BCI that translates brain activity directly to acoustic speech that resembles characteristics of the user’s voice profile, with most synthesized words of sufficient intelligibility to be correctly recognized by human listeners. This work makes an important step in adding more evidence that recent speech synthesis from neural signals in patients with intact speech can be translated to individuals with neurological speech impairments, by first focusing on a closed vocabulary that the participant can reliably generate at his own pace, before generalizing towards unseen words. Synthesizing speech from the neural activity associated with overt speech allowed us to demonstrate the feasibility of reproducing the acoustic features of speech when ground truth is available and its alignment with an acoustic target is straightforward, in turn setting a standard for future efforts when ground truth is unavailable, as in the Locked In Syndrome. Moreover, because our speech synthesis model was trained on data that preceded testing by several months, our results also support the stability of ECoG as a basis for speech BCIs.

In order to synthesize acoustic speech from neural signals, we designed a pipeline that consisted of three recurrent neural networks (RNNs) to (1) identify and buffer speech-related neural activity, (2) transform sequences of speech-related neural activity into an intermediate acoustic representation, and (3) eventually recover the acoustic waveform using a vocoder. Figure  1 shows a schematic overview of our approach. We acquired ECoG signals from two electrode grids that covered cortical representations for speech production including ventral sensorimotor cortex and the dorsal laryngeal area (Fig.  1 A). Here, we focused only on a subset of electrodes that had previously been identified as showing significant changes in high-gamma activity associated with overt speech production (see Supplementary Fig.  2 ). From the raw ECoG signals, our closed-loop speech synthesizer extracted broadband high-gamma power features (70–170 Hz) that had previously been demonstrated to encode speech-related information useful for decoding speech (Fig.  1 B) 10 , 14 .

Figure 1

Overview of the closed-loop speech synthesizer. ( A ) Neural activity is acquired from a subset of 64 electrodes (highlighted in orange) from two 8 × 8 ECoG electrode arrays covering sensorimotor areas for face and tongue, and for upper limb regions. ( B ) The closed-loop speech synthesizer extracts high-gamma features to reveal speech-related neural correlates of attempted speech production and propagates each frame to a neural voice activity detection (nVAD) model ( C ) that identifies and extracts speech segments ( D ). When the participant finishes speaking a word, the nVAD model forwards the high-gamma activity of the whole extracted sequence to a bidirectional decoding model ( E ) which estimates acoustic features ( F ) that can be transformed into an acoustic speech signal. ( G ) The synthesized speech is played back as acoustic feedback.

We used a unidirectional RNN to identify and buffer sequences of high-gamma activity frames and extract speech segments (Fig.  1 C,D). This neural voice activity detection (nVAD) model internally employed a strategy to correct misclassified frames based on each frame's temporal context, and additionally included a context window of 0.5 s to allow for smoother transitions between speech and non-speech frames. Each buffered sequence was forwarded to a bidirectional decoding model that mapped high-gamma features onto 18 Bark-scale cepstral coefficients 34 and 2 pitch parameters, henceforth referred to as LPC coefficients 35 , 36 (Fig.  1 E,F). We used a bidirectional architecture to include past and future information while making frame-wise predictions. Estimated LPC coefficients were transformed into an acoustic speech signal using the LPCNet vocoder 36 and played back as delayed auditory feedback (Fig.  1 G).
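
Read as a data flow, the three stages above amount to: buffer frames while the nVAD reports speech, then hand the whole buffered segment to the bidirectional decoder and the vocoder once the word ends. The following toy skeleton illustrates that flow only; `nvad`, `decoder` and `vocoder` are placeholders for the trained models described in the Methods, not the authors' code.

```python
import numpy as np

def synthesis_loop(highgamma_frames, nvad, decoder, vocoder):
    """Toy skeleton of the 3-stage pipeline (placeholder callables).

    highgamma_frames: iterable of per-frame feature vectors (10 ms frames)
    nvad:    frame-wise speech/no-speech detector (stage 1)
    decoder: maps a buffered (frames, channels) segment to LPC features (stage 2)
    vocoder: turns LPC features into a 16 kHz waveform (stage 3, LPCNet-like)
    """
    buffer, in_speech = [], False
    for frame in highgamma_frames:
        if nvad(frame):                       # speech frame: keep buffering
            in_speech = True
            buffer.append(frame)
        elif in_speech:                       # word just ended: decode the whole segment
            lpc = decoder(np.stack(buffer))   # bidirectional model sees past and future frames
            yield vocoder(lpc)                # played back as delayed auditory feedback
            buffer, in_speech = [], False
```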

Synthesis performance

When deployed in sessions with the participant for online decoding, our speech-synthesis BCI was reliably capable of producing acoustic speech that captured many details and characteristics of the voice and pacing of the participant’s natural speech, often closely resembling the words the participant spoke in isolation. Figure 2 A provides examples of original and synthesized waveforms for a representative selection of words time-aligned by subtracting the duration of the extracted speech segment from the nVAD. Onset timings from the reconstructed waveforms indicate that the decoding model captured the flow of the spoken word while also synthesizing silence around utterances for smoother transitions. A comparison between voice activity for spoken and synthesized speech revealed a median Levenshtein distance of 235 ms, indicating that the synthesis approach was capable of generating speech that adequately matched the timing of the spoken counterpart. Figure 2 B shows the corresponding acoustic spectrograms for the spoken and synthesized words, respectively. The spectral structures of the original and synthesized speech shared many common characteristics and achieved average correlation scores of 0.67 (± 0.18 standard deviation), suggesting that phoneme- and formant-specific information was preserved.

Figure 2

Evaluation of the synthesized words. ( A ) Visual example of time-aligned original and reconstructed acoustic speech waveforms and their spectral representations ( B ) for 6 words that were recorded during one of the closed-loop sessions. Speech spectrograms are shown between 100 and 8000 Hz with a logarithmic frequency range to emphasize formant frequencies. ( C ) The confusion matrix between human listeners and ground truth. ( D ) Distribution of accuracy scores from all who performed the listening test for the synthesized speech samples. Dashed line shows chance performance (16.7%).

We conducted 3 sessions across 3 different days (approximately five and a half months after the training data was acquired; each session lasted 6 min) to repeat the experiment with acoustic feedback from the BCI to the participant (see Supplementary Video 1 for an excerpt). Other experiment parameters were not changed. All synthesized words were played back on loudspeakers and simultaneously recorded for evaluation.

To assess the intelligibility of the synthesized words, we conducted listening tests in which human listeners played back individual samples of the synthesized words and selected the word that most closely resembled each sample. Additionally, we mixed in samples that contained the originally spoken words. This allowed us to assess the quality of the participant’s natural speech. We recruited a cohort of 21 native English speakers to listen to all samples that were produced during our 3 closed-loop sessions. Out of 180 samples, we excluded 2 words because the nVAD model did not detect speech activity and therefore no speech output was produced by the decoding model. We also excluded a few cases where speech activity was falsely detected by the nVAD model, which resulted in synthesized silence and remained unnoticed to the participant.

Overall, human listeners achieved an accuracy score of 80%, indicating that the majority of synthesized words could be correctly and reliably recognized. Figure  2 C presents the confusion matrix regarding only the synthesized samples where the ground truth labels and human listener choices are displayed on the X- and Y-axes respectively. The confusion matrix shows that human listeners were able to recognize all but one word at very high rates. “Back” was recognized at low rates, albeit still above chance, and was most often mistaken for “Left”. This could have been due in part to the close proximity of the vowel formant frequencies for these two words. The participant’s weak tongue movements may have deemphasized the acoustic discriminability of these words, in turn resulting in the vocoder synthesizing a version of “back” that was often indistinct from “left”. In contrast, the confusion matrix also shows that human listeners were confident in distinguishing the words “Up” and “Left”. The decoder synthesized an intelligible but incorrect word in only 4% of the cases, and all listeners accurately recognized the incorrect word. Note that all keywords in the vocabulary were chosen for intuitive command and control of a computer interface, for example a communication board, and were not designed to be easily discriminable for BCI applications.

Figure  2 D summarizes individual accuracy scores from all human listeners from the listening test in a histogram. All listeners recognized between 75 and 84% of the synthesized words. All human listeners achieved accuracy scores above chance (16.7%). In contrast, when tested on the participant’s natural speech, our human listeners correctly recognized almost all samples of the 6 keywords (99.8%).

Anatomical and temporal contributions

In order to understand which cortical areas contributed to identification of speech segments, we conducted a saliency analysis 37 to reveal the underlying dynamics in high-gamma activity changes that explain the binary decisions made by our nVAD model. We utilized a method from the image processing domain 38 that queries spatial information indicating which pixels have contributed to a classification task. In our case, this method ranked individual high-gamma features over time by their influence on the predicted speech onsets (PSO). We defined the PSO as the first occurrence when the nVAD model identified spoken speech and neural data started to get buffered before being forwarded to the decoding model. The absolute values of their gradients allowed interpretations of which contributions had the highest or lowest impact on the class scores from anatomical and temporal perspectives.

The general idea is illustrated in Fig.  3 B. In a forward pass, we first estimated for each trial the PSO by propagating through each time step until the nVAD model made a positive prediction. From here, we then applied backpropagation through time to compute all gradients with respect to the model’s input high-gamma features. Relevance scores |R| were computed by taking the absolute value of each partial derivative and the maximum value across time was used as the final score for each electrode 38 . Note that we only performed backpropagation through time for each PSO, and not for whole speech segments.
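
In PyTorch terms, this relevance computation can be sketched roughly as follows. The `nvad_model` interface (a callable mapping a `(1, T, channels)` tensor to `(1, T, 2)` logits) and the way the PSO is located are simplifying assumptions, not the exact analysis code.

```python
import torch

def relevance_at_pso(nvad_model, highgamma, speech_class=1):
    """Approximate saliency: |d(speech logit at the PSO) / d(input features)|.

    highgamma: tensor of shape (T, channels) for one trial.
    Returns one relevance score per channel (max |gradient| over time up to the PSO).
    """
    x = highgamma.clone().detach().requires_grad_(True)
    logits = nvad_model(x.unsqueeze(0)).squeeze(0)          # (T, 2) speech/no-speech logits
    is_speech = logits.argmax(dim=-1) == speech_class
    pso = int(is_speech.float().argmax())                   # first frame classified as speech
    logits[pso, speech_class].backward()                    # backpropagation through time
    relevance = x.grad.abs()                                # |R| per frame and channel
    return relevance[: pso + 1].max(dim=0).values           # per-electrode score
```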

Figure 3

Changes in high-gamma activity across motor, premotor and somatosensory cortices trigger detection of speech output. ( A ) Saliency analysis shows that changes in high-gamma activity predominantly from 300 to 100 ms prior to predicted speech onset (PSO) strongly influenced the nVAD model’s decision. Electrodes covering motor, premotor and somatosensory cortices had the strongest impact on model decisions, while electrodes covering the dorsal laryngeal area only modestly added information to the prediction. Grey electrodes were either not used, bad channels or had no notable contributions. ( B ) Illustration of the general procedure for how relevance scores were computed. For each time step t , relevance scores were computed by backpropagation through time across all previous high-gamma frames X t . Predictions of 0 correspond to no-speech frames, while 1 represents speech frames. ( C ) Temporal progression of mean magnitudes of the absolute relevance score in 3 selected channels that strongly contributed to PSOs. Shaded areas reflect the standard error of the mean (N = 60). Units of the relevance scores are in 10 –3 .

Results from the saliency analysis are shown in Fig.  3 A. For each channel, we display the PSO-specific relevance scores by encoding the maximum magnitude of the influence in the size of the circles (bigger circles mean stronger influence on the predictions), and the temporal occurrence of that maximum in the respective color coding (lighter electrodes have their maximal influence on the PSO earlier). The color bar at the bottom limits the temporal influence to − 400 ms prior to PSO, consistent with previous reports about speech planning 39 and articulatory representations 19 . The saliency analysis showed that the nVAD model relied on a broad network of electrodes covering motor, premotor and somatosensory cortices whose collective changes in the high-gamma activity were relevant for identifying speech. Meanwhile, voice activity information encoded in the dorsal laryngeal area (highlighted electrodes in the upper grid in Fig.  3 A) 19 only mildly contributed to the PSO.

Figure  3 C shows relevance scores over a time period of 1 s prior to PSO for 3 selected electrodes that strongly contributed to predicting speech onsets. In conjunction with the color coding from Fig.  3 A, the temporal associations were consistent with previous studies that examined phoneme decoding over fixed window sizes of 400 ms 18 and 500 ms 40 , 41 around speech onset times, suggesting that the nVAD model benefited from neural activity during speech planning and phonological processing 39 when identifying speech onset. We hypothesize that the decline in the relevance scores after − 200 ms can be explained by the fact that voice activity information might have already been stored in the long short-term memory of the nVAD model and thus changes in neural activity beyond this time had less influence on the prediction.

Here we demonstrate the feasibility of a closed-loop BCI that is capable of online synthesis of intelligible words using intracranial recordings from the speech cortex of an ALS clinical trial participant. Recent studies 10 , 11 , 13 , 27 suggest that deep learning techniques are a viable tool to reconstruct acoustic speech from ECoG signals. We found an approach consisting of three consecutive RNN architectures that identify and transform neural speech correlates into an acoustic waveform that can be streamed over the loudspeaker as neurofeedback, resulting in an 80% intelligibility score on a closed-vocabulary, keyword reading task.

The majority of human listeners were able to correctly recognize most synthesized words. All words from the closed vocabulary were chosen for a prior study 28 that explored speech decoding for intuitive control of a communication board rather than being constructed to elicit discriminable neural activity that benefits decoder performance. The listening tests suggest that the words “Left” and “Back” were responsible for the majority of misclassified words. These words share very similar articulatory features, and our participant’s speech impairments likely made these words less discriminable in the synthesis process.

Saliency analysis showed that our nVAD approach used information encoded in the high-gamma band across predominantly motor, premotor and somatosensory cortices, while electrodes covering the dorsal laryngeal area only marginally contributed to the identification of speech onsets. In particular, neural changes previously reported to be important for speech planning and phonological processing 19 , 39 appeared to have a profound impact. Here, the analysis indicates that our nVAD model learned a proper representation of spoken speech processes, providing a connection between neural patterns learned by the model and the spatio-temporal dynamics of speech production.

Our participant was chronically implanted with 128 subdural ECoG electrodes, roughly half of which covered cortical areas where similar high-gamma responses have been reliably elicited during overt speech 18 , 19 , 40 , 42 and have been used for offline decoding and reconstruction of speech 10 , 11 . This study and others like it 24 , 27 , 43 , 44 explored the potential of ECoG-based BCIs to augment communication for individuals with motor speech impairments due to a variety of neurological disorders, including ALS and brainstem stroke. A potential advantage of ECoG for BCI is the stability of signal quality over long periods of time 45 . In a previous study of an individual with locked-in syndrome due to ALS, a fully implantable ECoG BCI with fewer electrodes provided a stable switch for a spelling application over a period of more than 3 years 46 . Similarly, Rao et al. reported robust responses for ECoG recordings over the speech-auditory cortex for two drug-resistant epilepsy patients over a period of 1.5 years 47 . More recently, we showed that the same clinical trial participant could control a communication board with ECoG decoding of self-paced speech commands over a period of 3 months without retraining or recalibration 28 . The speech synthesis approach we demonstrated here used training data from five and a half months prior to testing and produced similar results over 3 separate days of testing, with recalibration but no retraining in each session. These findings suggest that the correspondence between neural activity in ventral sensorimotor cortex and speech acoustics were not significantly changed over this time period. Although longitudinal testing over longer time periods will be needed to explicitly test this, our findings provide additional support for the stability of ECoG as a BCI signal source for speech synthesis.

Our approach used a speech synthesis model trained on neural data acquired during overt speech production. This constrains our current approach to patients with speech motor impairments in which vocalization is still possible and in which speech may still be intelligible. Given the increasing use of voice banking among people living with ALS, it may also be possible to improve the intelligibility of synthetic speech using an approach similar to ours, even in participants with unintelligible or absent speech. This speech could be utilized as a surrogate but would require careful alignment to speech attempts. Likewise, the same approach could be used with a generic voice, though this would not preserve the individual’s speech characteristics. Here our results were achieved without the added challenge of absent ground truth, but they serve as an important demonstration that if adequate alignment is achieved, direct synthesis of acoustic speech from ECoG is feasible, accurate, and stable, even in a person with dysarthria due to ALS. Nevertheless, it remains to be seen how long our approach will continue to produce intelligible speech as our patient’s neural responses and articulatory impairments change over time due to ALS. Previous studies of long-term ECoG signal stability and BCI performance in patients with more severe motor impairments suggest that this may be possible 3 , 48 .

Although our approach allowed for online, closed-loop production of synthetic speech that preserved our participant’s individual voice characteristics, the bidirectional LSTM imposed a delay in the audible feedback until after the patient spoke each word. We considered this delay to be not only acceptable, but potentially desirable, given our patient’s speech impairments and the likelihood of these impairments worsening in the future due to ALS. Although normal speakers use immediate acoustic feedback to tune their speech motor output 49 , individuals with progressive motor speech impairments are likely to reach a point at which there is a significant, and distracting, mismatch between the subject’s speech and the synthetic speech produced by the BCI. In contrast, providing acoustic feedback immediately after each utterance gives the user clear and uninterrupted output that they can use to improve subsequent speech attempts, if necessary.

While our results are promising, the approach used here did not allow for synthesis of unseen words. The bidirectional architecture of the decoding model learned variations of the neural dynamics of each word and was capable of recovering their acoustic representations from corresponding sequences of high-gamma frames. This approach did not capture more fine-grained and isolated part-of-speech units, such as syllables or phonemes. However, previous research 11 , 27 has shown that speech synthesis approaches based on bidirectional architectures can generalize to unseen elements that were not part of the training set. Future research will be needed to expand the limited vocabulary used here, and to explore to what extent similar or different approaches are able to extrapolate to words that are not in the vocabulary of the training set.

Our demonstration here builds on previous seminal studies of the cortical representations for articulation and phonation 19 , 32 , 40 in epilepsy patients implanted with similar subdural ECoG arrays for less than 30 days. These studies and others using intraoperative recordings have also supported the feasibility of producing synthetic speech from ECoG high-gamma responses 10 , 11 , 33 , but these demonstrations were based on offline analysis of ECoG signals that were previously recorded in subjects with normal speech, with the exception of the work by Metzger et al. 27 Here, a participant with impaired articulation and phonation was able to use a chronically implanted investigational device to produce acoustic speech that retained his unique voice characteristics. This was made possible through online decoding of ECoG high-gamma responses, using an algorithm trained on data collected months before. Notwithstanding the current limitations of our approach, our findings here provide a promising proof-of-concept that ECoG BCIs utilizing online speech synthesis can serve as alternative and augmentative communication devices for people living with ALS. Moreover, our findings should motivate continued research on the feasibility of using BCIs to preserve or restore vocal communication in clinical populations where this is needed.

Materials and methods

Participant

Our participant was a male native English speaker in his 60s with ALS who was enrolled in a clinical trial (NCT03567213), approved by the Johns Hopkins University Institutional Review Board (IRB) and by the FDA (under an investigational device exemption), to test the safety and preliminary efficacy of a brain-computer interface composed of subdural electrodes and a percutaneous connection to external EEG amplifiers and computers. All experiments conducted in this study complied with all relevant guidelines and regulations, and were performed according to a clinical trial protocol approved by the Johns Hopkins IRB. Our participant had been diagnosed with ALS 8 years prior to implantation; his motor impairments had chiefly affected bulbar and upper extremity muscles and were sufficient to render continuous speech mostly unintelligible (though individual words were intelligible) and to require assistance with most activities of daily living. His ability to carry out activities of daily living was assessed using the ALSFRS-R measure 50 , resulting in a score of 26 out of 48 possible points (speech was rated at 1 point, see Supplementary Data S5 ). Furthermore, speech intelligibility and speaking rate were evaluated by a certified speech-language pathologist, whose detailed assessment may be found in the Supplementary Note . The participant gave informed consent after being counseled about the nature of the research and implant-related risks and was implanted with the study device in July 2022. Additionally, the participant gave informed consent for use of his audio and video recordings in publications of the study results.

Study device and implantation

The study device was composed of two 8 × 8 subdural electrode grids (PMT Corporation, Chanhassen, MN) connected to a percutaneous 128-channel Neuroport pedestal (Blackrock Neurotech, Salt Lake City, UT). Both subdural grids contained platinum-iridium disc electrodes (0.76 mm thickness, 2-mm diameter exposed surface) with 4 mm center-to-center spacing and a total surface area of 12.11 cm 2 (36.6 mm × 33.1 mm).

The study device was surgically implanted during a standard awake craniotomy with a combination of local anesthesia and light sedation, without neuromuscular blockade. The device’s ECoG grids were placed on the pial surface of sensorimotor representations for speech and upper extremity movements in the left hemisphere. Careful attention was paid to ensure that the scalp flap incision was well away from the external pedestal. Cortical representations were targeted using anatomical landmarks from pre-operative structural (MRI) and functional imaging (fMRI), in addition to somatosensory evoked potentials measured intraoperatively. Two reference wires attached to the Neuroport pedestal were implanted in the subdural space on the outward-facing surface of the subdural grids. The participant was awoken during the craniotomy to confirm proper functioning of the study device and final placement of the two subdural grids. For this purpose, the participant was asked to repeatedly speak a single word as event-related ECoG spectral responses were noted to verify optimal placement for the implanted electrodes. On the same day, the participant had a post-operative CT which was then co-registered to a pre-operative MRI to verify the anatomical locations of the two grids.

Data recording

During all training and testing sessions, the Neuroport pedestal was connected to a 128-channel NeuroPlex-E headstage that was in turn connected by a mini-HDMI cable to a NeuroPort Biopotential Signal Processor (Blackrock Neurotech, Salt Lake City, UT, USA) and external computers. We acquired neural signals at a sampling rate of 1000 Hz.

Acoustic speech was recorded through an external microphone (BETA® 58A, SHURE, Niles, IL) in a room isolated from external acoustic and electronic noise, then amplified and digitized by an external audio interface (H6-audio-recorder, Zoom Corporation, Tokyo, Japan). The acoustic speech signal was split and forwarded to: (1) an analog input of the NeuroPort Biopotential Signal Processor (NSP) to be recorded at the same frequency and in synchrony with the neural signals, and (2) the testing computer to capture high-quality (48 kHz) recordings. We applied cross-correlation to align the high-quality recordings with the synchronized audio signal from the NSP.
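
One standard way to implement this alignment, sketched here under the assumption that the high-quality recording has already been resampled to the NSP audio rate, is to find the lag that maximizes the cross-correlation between the two signals:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def alignment_offset(hq_audio, nsp_audio):
    """Estimate the lag (in samples) of the high-quality recording relative to the
    NSP analog audio copy; both signals are assumed to share one sampling rate."""
    corr = correlate(hq_audio, nsp_audio, mode="full")
    lags = correlation_lags(len(hq_audio), len(nsp_audio), mode="full")
    return int(lags[np.argmax(corr)])   # shift hq_audio by -offset to align the two
```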

Experiment recordings and task design

Each recording day began with a syllable repetition task to acquire cortical activity to be used for baseline normalization. Each syllable was audibly presented through a loudspeaker, and the participant was instructed to repeat the heard stimulus aloud. Stimulus presentation lasted for 1 s, and trial duration was set randomly between 2.5 s and 3.5 s in steps of 80 ms. In the syllable repetition task, the participant repeated 12 consonant–vowel syllables (Supplementary Table S4), with each syllable repeated 5 times. We extracted high-gamma frames from all trials to compute, for each day, the mean and standard deviation used for channel-specific normalization.

To collect data for training our nVAD and speech decoding model, we recorded ECoG during multiple blocks of a speech production task over a period of 6 weeks. During the task, the participant read aloud single words that were prompted on a computer screen, interrupted occasionally by a silence trial in which the participant was instructed to say nothing. The words came from a closed vocabulary of 6 words ("Left", "Right", "Up", "Down", "Enter", "Back", and “…” for silence) that were chosen for a separate study in which these spoken words were decoded from ECoG to control a communication board 28 . In each block, there were ten repetitions of each word (60 words in total) that appeared in a pseudo-randomized order by having a fixed set of seeds to control randomization orders. Each word was shown for 2 s per trial with an intertrial interval of 3 s. The participant was instructed to read the prompted word aloud as soon as it appeared. Because his speech was slow, effortful, and dysarthric, the participant may have sometimes used some of the intertrial interval to complete word production. However, offline analysis verified at least 1 s between the end of each spoken word and the beginning of the next trial, assuring that enough time had passed to avoid ECoG high-gamma responses leaking into subsequent trials. In each block, neural signals and audibly vocalized speech were acquired in parallel and stored to disc using BCI2000 51 .

We recorded training, validation, and test data for 10 days, and deployed our approach for synthesizing speech online five and a half months later. During the online task, the synthesized output was played to the participant while he performed the same keyword reading task as in the training sessions. The feedback from each synthesized word began after he spoke the same word, avoiding any interference with production from the acoustic feedback. The validation dataset was used for finding appropriate hyperparameters to train both the nVAD and decoding models. The test set was used to validate final model generalizability before online sessions. We also used the test set for the saliency analysis. In total, the training set comprised 1570 trials, amounting to approximately 80 min of data (21.8 min of pure speech), while the validation and test sets contained 70 trials each with around 3 min of data (0.9 min of pure speech). The data in each of these datasets were collected on different days, so that no baseline or other statistics in the training set leaked into the validation or test set.

Signal processing and feature extraction

Neural signals were transformed into broadband high-gamma power features that have been previously reported to closely track the timing and location of cortical activation during speech and language processes 42 , 52 . In this feature extraction process, we first re-referenced all channels within each 64-contact grid to a common-average reference (CAR filtering), excluding channels with poor signal quality in any training session. Next, we selected all channels that had previously shown significant high-gamma responses during the syllable repetition task described above. This included 64 channels (Supplementary Fig. S2 , channels with blue outlines) across motor, premotor and somatosensory cortices, including the dorsal laryngeal area. From here, we applied two IIR Butterworth filters (both with filter order 8) to extract the high-gamma band in the range of 70 to 170 Hz while subsequently attenuating the first harmonic (118–122 Hz) of the line noise. For each channel, we computed logarithmic power features based on windows with a fixed length of 50 ms and a frameshift of 10 ms. To estimate speech-related increases in broadband high-gamma power, we normalized each feature by the day-specific statistics of the high-gamma power features accumulated from the syllable repetition task.
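
An offline, simplified version of this feature pipeline might look like the sketch below. The causal real-time implementation, the exact filter design, the channel selection and the day-specific baseline statistics differ from this; treat the normalization at the end in particular as a stand-in.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # Hz, ECoG sampling rate

def high_gamma_features(ecog, win=50, shift=10):
    """ecog: (samples, channels) array from one 64-contact grid at 1 kHz.
    Returns log high-gamma power, shape (frames, channels)."""
    ecog = ecog - ecog.mean(axis=1, keepdims=True)                   # common-average reference
    band = butter(8, [70, 170], btype="bandpass", fs=FS, output="sos")
    notch = butter(8, [118, 122], btype="bandstop", fs=FS, output="sos")
    hg = sosfiltfilt(notch, sosfiltfilt(band, ecog, axis=0), axis=0)  # 70-170 Hz, notch at 120 Hz
    frames = []
    for start in range(0, hg.shape[0] - win + 1, shift):              # 50 ms windows, 10 ms shift
        power = np.mean(hg[start:start + win] ** 2, axis=0)
        frames.append(np.log(power + 1e-12))
    feats = np.asarray(frames)
    # Stand-in normalization: the study used day-specific mean/std from a
    # syllable-repetition baseline task rather than within-block statistics.
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-12)
```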

For the acoustic recordings of the participant’s speech, we downsampled the time-aligned high-quality microphone recordings from 48 to 16 kHz. From here, we padded the acoustic data by 16 ms to account for the shift introduced by the two filters on the neural data and estimated the boundaries of speech segments using an energy-based voice activity detection algorithm 53 . Likewise, we computed acoustic features in the LPC coefficient space through the encoding functionality of the LPCNet vocoder. Both voice activity detection and LPC feature encoding were configured to operate on 10 ms frameshifts to match the number of samples from the broadband high-gamma feature extraction pipeline.
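
The energy-based voice activity detection used to label the acoustic training data can be approximated by simple frame-energy thresholding. This is a generic stand-in for the cited algorithm, with an arbitrary threshold:

```python
import numpy as np

def energy_vad(audio, fs=16000, frame_ms=10, threshold_db=-40):
    """Label 10 ms acoustic frames as speech (True) or silence (False) relative to
    the peak frame energy; a generic stand-in for the cited VAD algorithm."""
    hop = int(fs * frame_ms / 1000)
    n = len(audio) // hop
    frames = audio[: n * hop].reshape(n, hop)
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return energy_db > (energy_db.max() + threshold_db)
```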

Network architectures

Our proposed approach relied on three recurrent neural network architectures: (1) a unidirectional model that identified speech segments from the neural data, (2) a bidirectional model that translated sequences of speech-related high-gamma activity into corresponding sequences of LPC coefficients representing acoustic information, and (3) LPCNet 36 , which converted those LPC coefficients into an acoustic speech signal.

The network architecture of the unidirectional nVAD model was inspired by Zen et al. 54 in using a stack of two LSTM layers with 150 units each, followed by a linear fully connected output layer with two units representing speech and non-speech class logits (Fig. 4). We trained the unidirectional nVAD model using truncated backpropagation through time (BPTT) 55 to keep the cost of a single parameter update manageable. We initialized this algorithm’s hyperparameters k1 and k2 to 50 and 100 frames of high-gamma activity, respectively, such that the unfolding procedure of the backpropagation step was limited to 100 frames (1 s) and repeated every 50 frames (500 ms). Dropout was used as a regularization method with a probability of 50% to counter overfitting effects 56 . Predicted and target labels were compared using the cross-entropy loss. Training was limited by an early stopping mechanism that evaluated network performance on a held-out validation set after each epoch and stored the model weights only when the frame-wise accuracy improved. The learning rate of the stochastic gradient descent optimizer was dynamically adjusted in accordance with the RMSprop formula 57 , with an initial learning rate of 0.001. Using this procedure, the unidirectional nVAD model was trained for 27,975 update steps, achieving a frame-wise accuracy of 93.4% on held-out validation data. The architecture of the nVAD model had 311,102 trainable weights.
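
Taken literally, the architecture described in this paragraph corresponds to a small PyTorch module along the following lines; the sketch reproduces the reported 311,102 trainable weights, but the truncated-BPTT loop, early stopping and logging are omitted and the details are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class NVAD(nn.Module):
    """Unidirectional nVAD sketch: two LSTM layers of 150 units with 50% dropout,
    plus a linear output layer producing speech/no-speech logits per 10 ms frame."""
    def __init__(self, n_features=64, hidden=150):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, dropout=0.5)
        self.out = nn.Linear(hidden, 2)

    def forward(self, x, state=None):
        # x: (batch, frames, channels) of normalized high-gamma features
        h, state = self.lstm(x, state)       # state lets chunks of a stream share context
        return self.out(h), state            # frame-wise class logits

model = NVAD()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
print(sum(p.numel() for p in model.parameters()))  # 311102, matching the reported count
```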

Figure 4

System overview of the closed-loop architecture. The computational graph is designed as a directed acyclic network. Solid shapes represent ezmsg units; dotted ones represent initialization parameters. Each unit is responsible for a self-contained task and distributes its output to all its subscribers. Logger units run in separate processes so as not to interrupt the main processing chain for synthesizing speech.

The network architecture of the bidirectional decoding model had a very similar configuration to the unidirectional nVAD but employed a stack of bidirectional LSTM layers for sequence modelling 11 to include past and future contexts. Since the acoustic space of the LPC components was continuous, we used a linear fully connected output layer for this regression task. Figure  4 contains an illustration of the network architecture of the decoding model. In contrast to the unidirectional nVAD model, we used standard BPTT to account for both past and future contexts within each extracted segment identified as spoken speech. The architecture of the decoding model had 378,420 trainable weights and was trained for 14,130 update steps using a stochastic gradient descent optimizer. The initial learning rate was set to 0.001 and dynamically updated in accordance with the RMSProp formula. Again, we used dropout with a 50% probability and employed an early stopping mechanism that only updated model weights when the loss on the held-out validation set was lower than before.
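
The decoding model can be sketched in the same way. The hidden size is not stated in the text; 100 units per direction in a two-layer bidirectional stack happens to reproduce the reported 378,420 trainable weights, so it is used here, but it should be read as an inference rather than a documented hyperparameter.

```python
import torch
import torch.nn as nn

class LPCDecoder(nn.Module):
    """Bidirectional decoder sketch: stacked bidirectional LSTMs mapping a buffered
    high-gamma segment to 18 Bark-scale cepstral coefficients + 2 pitch parameters."""
    def __init__(self, n_features=64, hidden=100, n_lpc=20):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True,
                            dropout=0.5, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_lpc)

    def forward(self, x):
        # x: (batch, frames, channels) for one complete buffered word
        h, _ = self.lstm(x)                  # forward and backward passes over the segment
        return self.out(h)                   # (batch, frames, 20) LPC features per frame

print(sum(p.numel() for p in LPCDecoder().parameters()))  # 378420, matching the reported count
```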

Both the unidirectional nVAD model and the bidirectional decoding model were implemented in the PyTorch framework. For LPCNet, we used the C implementation and pretrained model weights provided by the original authors and communicated with the library through wrapper functions written in Cython.

Closed-loop architecture

Our closed-loop architecture was built on ezmsg, a general-purpose framework that enables the implementation of streaming systems as a directed acyclic network of connected units, which communicate with each other through a publish/subscribe software pattern using asynchronous coroutines. Each unit represents a self-contained operation that may receive multiple inputs and optionally propagates its output to all of its subscribers. A unit consists of a settings class and a state class, enabling initial and updatable configurations, and has multiple input and output streams for communicating with other nodes in the network. Figure 4 shows a schematic overview of the closed-loop architecture. ECoG signals were received by connecting to BCI2000 via a custom ZeroMQ (ZMQ) networking interface that sent packets of 40 ms of data over the TCP/IP protocol. From there, each unit interacted with other units through an asynchronous message system implemented on top of a shared-memory publish/subscribe multi-processing pattern. As shown in Fig. 4, the closed-loop architecture comprised five units for the synthesis pipeline, plus several additional units that acted as loggers and wrote intermediate data to disk.
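The sketch below mimics this unit/publish-subscribe pattern with plain asyncio primitives rather than the actual ezmsg API, purely to illustrate how self-contained units connected into an acyclic graph pass messages downstream; all names and the toy processing functions are invented for the example.

    import asyncio
    from typing import Any, Callable, List

    class Unit:
        """Toy stand-in for a pub/sub processing unit: applies `work` to each
        incoming message and forwards the result to every subscribed unit."""
        def __init__(self, name: str, work: Callable[[Any], Any]):
            self.name, self.work = name, work
            self.inbox: asyncio.Queue = asyncio.Queue()
            self.subscribers: List["Unit"] = []

        def subscribe(self, other: "Unit") -> None:
            self.subscribers.append(other)              # edge in the acyclic graph

        async def run(self) -> None:
            while True:
                msg = await self.inbox.get()
                out = self.work(msg)
                if out is not None:
                    for sub in self.subscribers:        # publish to all subscribers
                        await sub.inbox.put(out)

    async def main() -> None:
        # ecog -> features -> decoder, with a logger tapping the feature stream
        ecog = Unit("ecog", lambda pkt: pkt)
        feats = Unit("high_gamma", lambda pkt: [x * x for x in pkt])
        decoder = Unit("decoder", lambda f: sum(f))
        logger = Unit("logger", lambda f: print("log:", f))
        ecog.subscribe(feats); feats.subscribe(decoder); feats.subscribe(logger)
        tasks = [asyncio.create_task(u.run()) for u in (ecog, feats, decoder, logger)]
        await ecog.inbox.put([0.1, 0.2, 0.3])           # one incoming 40 ms "packet"
        await asyncio.sleep(0.1)
        for t in tasks:
            t.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)

    asyncio.run(main())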

To play back the synthesized speech during closed-loop sessions, we wrote the bytes of the raw PCM waveform to standard output (stdout) and reinterpreted them by piping them into SoX. We implemented our closed-loop architecture in Python 3.10. To keep the computational complexity manageable for this streaming application, we implemented several functionalities, such as ring buffers and specific calculations in the high-gamma feature extraction, in Cython.
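One way to realize such a playback path is sketched below: instead of piping the process's own stdout, it starts SoX's play utility as a child process that reads raw 16-bit mono PCM at 16 kHz from a pipe. The exact command-line flags and the int16 conversion are our guess at a working invocation, not the configuration used in the study.

    import subprocess
    import numpy as np

    # SoX's `play` reading raw signed 16-bit mono PCM at 16 kHz from stdin ("-")
    player = subprocess.Popen(
        ["play", "-q", "-t", "raw", "-r", "16000",
         "-e", "signed-integer", "-b", "16", "-c", "1", "-"],
        stdin=subprocess.PIPE,
    )

    def play_frame(samples: np.ndarray) -> None:
        """Convert one frame of float samples in [-1, 1] to PCM bytes and stream it."""
        pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype("<i2")   # little-endian int16
        player.stdin.write(pcm.tobytes())
        player.stdin.flush()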

Contamination analysis

Overt speech production can cause acoustic artifacts in electrophysiological recordings, allowing learning machines such as neural networks to rely on information that would not be available in a deployed system, a phenomenon widely known as the Clever Hans effect 58. We used the method proposed by Roussel et al. 59 to assess the risk that our ECoG recordings had been contaminated. This method computes correlations between neural and acoustic spectrograms and derives a contamination index that describes the average correlation of matching frequencies. The observed contamination index is then compared with the distribution of contamination indices obtained by randomly permuting the rows and columns of the contamination matrix, which allows a statistical test under the assumption that no acoustic contamination is present.
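The following simplified sketch computes a contamination index and an empirical permutation p-value from a pair of spectrograms that share the same frequency bins. It replaces the t-test criterion described under Statistical analysis with a plain permutation p-value and is meant only to convey the idea of the method, not to reproduce Roussel et al.'s implementation.

    import numpy as np

    def contamination_matrix(neural_spec: np.ndarray, audio_spec: np.ndarray) -> np.ndarray:
        """C[i, j] = correlation of neural frequency bin i with audio frequency bin j
        (both spectrograms shaped freq_bins x time_frames, same bins and frames)."""
        n = neural_spec.shape[0]
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                C[i, j] = np.corrcoef(neural_spec[i], audio_spec[j])[0, 1]
        return C

    def contamination_test(C: np.ndarray, n_perm: int = 1000, seed: int = 0):
        """Contamination index = mean correlation of matching frequencies (diagonal),
        compared against the same statistic after permuting rows and columns of C."""
        rng = np.random.default_rng(seed)
        observed = float(np.mean(np.diag(C)))
        null = np.empty(n_perm)
        for k in range(n_perm):
            permuted = C[rng.permutation(C.shape[0])][:, rng.permutation(C.shape[1])]
            null[k] = np.mean(np.diag(permuted))
        p_value = float(np.mean(null >= observed))
        return observed, p_value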

For each recording day across the training, validation and test sets, we analyzed acoustic contamination in the high-gamma frequency range. We identified one channel (channel 46) that was likely contaminated during three recording days (D5, D6 and D7), and we corrected this channel by averaging the high-gamma power features of its neighbouring channels (8-neighbour configuration, excluding bad channel 38). A detailed report can be found in Supplementary Fig. S1, where each histogram corresponds to the distribution of contamination indices from permuted contamination matrices and the colored vertical bar indicates the actual contamination index (green: p > 0.05, red: p ≤ 0.05). After excluding the neural data from channel 46, Roussel's method indicated no statistically significant acoustic contamination, and we therefore concluded that acoustic speech had not interfered with the neural recordings.
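A minimal sketch of this kind of repair is shown below, assuming the channels are arranged on a rectangular electrode grid given as a 2-D array of channel indices; the grid layout, argument names and handling of excluded channels are illustrative assumptions rather than the study's exact procedure.

    import numpy as np

    def repair_channel(feats: np.ndarray, grid: np.ndarray,
                       bad_ch: int, excluded: set) -> np.ndarray:
        """Replace the features of `bad_ch` in `feats` (time x channels) with the mean
        of its valid 8-neighbours on the electrode grid (rows x cols of channel indices)."""
        r, c = np.argwhere(grid == bad_ch)[0]
        neighbours = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                inside = 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                if (dr, dc) != (0, 0) and inside:
                    ch = int(grid[rr, cc])
                    if ch != bad_ch and ch not in excluded:
                        neighbours.append(ch)
        repaired = feats.copy()
        repaired[:, bad_ch] = feats[:, neighbours].mean(axis=1)   # assumes >= 1 valid neighbour
        return repaired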

Listening test

We conducted a forced-choice listening test similar to Herff et al. 14 in which 21 native English speakers evaluated the intelligibility of the synthesized output and of the originally spoken words. Listeners were asked to listen to one word at a time and to select which of six options most closely resembled it. Listeners could replay each sample as many times as they wished before submitting a choice. We implemented the listening test on top of the BeaqleJS framework 60. All words that were spoken or synthesized during the three closed-loop sessions were included in the listening test and were presented in a uniformly random order that was unique for each listener. Supplementary Fig. S3 provides a screenshot of the interface presented to the listeners.

Listeners were recruited only through indirect means, such as IRB-approved flyers posted on campus, and had no direct connection to the PI. Anonymous demographic data (age and preferred gender) were collected at the end of the listening test. Overall, 23.8% of the recruited participants were male and 61.9% were female (14.3% selected other or preferred not to answer), with ages ranging from 18 to 30 years.

Statistical analysis

Original and reconstructed speech spectrograms were compared using Pearson's correlation coefficients over 80 mel-scaled spectral bins. For this, we transformed the original and reconstructed waveforms into the spectral domain using the short-time Fourier transform (window size: 50 ms, frameshift: 10 ms, window function: Hanning), applied 80 triangular filters to focus on differences perceptible to human listeners 61, and Gaussianized the distribution of the acoustic space using the natural logarithm. Pearson correlation scores were calculated for each sample by averaging the correlation coefficients across frequency bins. The 95% confidence interval (two-sided) was used in the feature selection procedure, and the z-criterion was Bonferroni corrected across time points. Lower and upper bounds for all channels and time points can be found in the supplementary data. The contamination analysis is based on permutation tests that use t-tests as their statistical criterion with a Bonferroni-corrected significance level of α = 0.05/N, where N is the number of frequency bins multiplied by the number of selected channels.
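A sketch of this correlation measure is given below, using librosa's mel filterbank as a stand-in for the 80 triangular filters; the filterbank details, frame alignment and choice of library are our simplifications rather than the exact analysis code.

    import numpy as np
    import librosa

    def mel_log_spec(wav: np.ndarray, fs: int = 16_000, n_mels: int = 80) -> np.ndarray:
        """Natural-log mel spectrogram with a 50 ms window and 10 ms frameshift."""
        mel = librosa.feature.melspectrogram(
            y=wav, sr=fs, n_fft=int(0.05 * fs), hop_length=int(0.01 * fs),
            win_length=int(0.05 * fs), window="hann", n_mels=n_mels, power=2.0)
        return np.log(mel + 1e-10)                    # shape: (n_mels, frames)

    def spectral_correlation(original: np.ndarray, reconstructed: np.ndarray) -> float:
        """Mean Pearson correlation across the 80 mel bins of two waveforms."""
        a, b = mel_log_spec(original), mel_log_spec(reconstructed)
        n = min(a.shape[1], b.shape[1])               # align frame counts
        corrs = [np.corrcoef(a[m, :n], b[m, :n])[0, 1] for m in range(a.shape[0])]
        return float(np.mean(corrs))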

Overall, we used the SciPy stats package (version 1.10.1) for statistical evaluation; the contamination analysis was done in MATLAB with the Statistics and Machine Learning Toolbox (version 12.4).

Data availability

Neural data and anonymized speech audio are publicly available at http://www.osf.io/49rt7/. This includes the experiment recordings used as training data and the experiment runs from our closed-loop sessions. We also include supporting data used for rendering the figures in the main text and in the supplementary material.

Code availability

The corresponding source code for the closed-loop BCI and the scripts for generating figures can be obtained from the official Crone Lab GitHub page at https://github.com/cronelab/delayed-speech-synthesis. This includes source files for training, inference, and data analysis/evaluation. The ezmsg framework can be obtained from https://github.com/iscoe/ezmsg.

Bauer, G., Gerstenbrand, F. & Rumpl, E. Varieties of the locked-in syndrome. J. Neurol. 221 , 77–91 (1979).

Smith, E. & Delargy, M. Locked-in syndrome. BMJ 330 , 406–409 (2005).

Vansteensel, M. J. et al. Fully implanted brain–computer interface in a locked-in patient with ALS. N. Engl. J. Med. 375 , 2060–2066 (2016).

Chaudhary, U. et al. Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nat. Commun. 13 , 1236 (2022).

Pandarinath, C. et al. High performance communication by people with paralysis using an intracortical brain–computer interface. eLife 6 , e18554 (2017).

Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M. & Shenoy, K. V. High-performance brain-to-text communication via handwriting. Nature 593 , 249–254 (2021).

Oxley, T. J. et al. Motor neuroprosthesis implanted with neurointerventional surgery improves capacity for activities of daily living tasks in severe paralysis: First in-human experience. J. NeuroInterventional Surg. 13 , 102–108 (2021).

Chang, E. F. & Anumanchipalli, G. K. Toward a speech neuroprosthesis. JAMA 323 , 413–414 (2020).

Herff, C. et al. Towards direct speech synthesis from ECoG: A pilot study. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1540–1543 (2016).

Angrick, M. et al. Speech synthesis from ECoG using densely connected 3D convolutional neural networks. J. Neural Eng. 16 , 036019 (2019).

Anumanchipalli, G. K., Chartier, J. & Chang, E. F. Speech synthesis from neural decoding of spoken sentences. Nature 568 , 493–498 (2019).

Wairagkar, M., Hochberg, L. R., Brandman, D. M. & Stavisky, S. D. Synthesizing speech by decoding intracortical neural activity from dorsal motor cortex. In 2023 11th International IEEE/EMBS Conference on Neural Engineering (NER) 1–4 (2023).

Kohler, J. et al. Synthesizing speech from intracranial depth electrodes using an encoder-decoder framework. Neurons Behav. Data Anal. Theory https://doi.org/10.51628/001c.57524 (2022).

Herff, C. et al. Generating natural, intelligible speech from brain activity in motor, premotor, and inferior frontal cortices. Front. Neurosci. https://doi.org/10.3389/fnins.2019.01267 (2019).

Wilson, G. H. et al. Decoding spoken English from intracortical electrode arrays in dorsal precentral gyrus. J. Neural Eng. 17 , 066007 (2020).

Kanas, V. G. et al. Joint spatial-spectral feature space clustering for speech activity detection from ECoG signals. IEEE Trans. Biomed. Eng. 61 , 1241–1250 (2014).

Soroush, P. Z., Angrick, M., Shih, J., Schultz, T. & Krusienski, D. J. Speech activity detection from stereotactic EEG. In 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC) 3402–3407 (2021).

Mugler, E. M. et al. Direct classification of all American English phonemes using signals from functional speech motor cortex. J. Neural Eng. 11 , 035015 (2014).

Bouchard, K. E., Mesgarani, N., Johnson, K. & Chang, E. F. Functional organization of human sensorimotor cortex for speech articulation. Nature 495 , 327–332 (2013).

Bouchard, K. E. & Chang, E. F. Neural decoding of spoken vowels from human sensory-motor cortex with high-density electrocorticography. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society 6782–6785 (2014).

Kellis, S. et al. Decoding spoken words using local field potentials recorded from the cortical surface. J. Neural Eng. 7 , 056007 (2010).

Mugler, E. M., Goldrick, M., Rosenow, J. M., Tate, M. C. & Slutzky, M. W. Decoding of articulatory gestures during word production using speech motor and premotor cortical activity. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 5339–5342 (2015).

Mugler, E. M. et al. Differential representation of articulatory gestures and phonemes in precentral and inferior frontal gyri. J. Neurosci. 38 , 9803–9813 (2018).

Moses, D. A. et al. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. N. Engl. J. Med. 385 , 217–227 (2021).

Willett, F. R. et al. A high-performance speech neuroprosthesis. Nature 620 , 1031–1036 (2023).

Guenther, F. H. et al. A wireless brain–machine interface for real-time speech synthesis. PLoS ONE 4 , e8218 (2009).

Metzger, S. L. et al. A high-performance neuroprosthesis for speech decoding and avatar control. Nature 620 , 1037–1046 (2023).

Luo, S. et al. Stable decoding from a speech BCI enables control for an individual with ALS without recalibration for 3 months. Adv. Sci. 10 , 2304853 (2023).

Cooney, C., Folli, R. & Coyle, D. Neurolinguistics research advancing development of a direct-speech brain–computer interface. iScience 8 , 103–125 (2018).

Herff, C. & Schultz, T. Automatic speech recognition from neural signals: A focused review. Front. Neurosci. https://doi.org/10.3389/fnins.2016.00429 (2016).

Dash, D. et al. Neural speech decoding for amyotrophic lateral sclerosis. In Proc. Interspeech 2020, 2782–2786 (2020). https://doi.org/10.21437/Interspeech.2020-3071.

Chartier, J., Anumanchipalli, G. K., Johnson, K. & Chang, E. F. Encoding of articulatory kinematic trajectories in human speech sensorimotor cortex. Neuron 98 , 1042-1054.e4 (2018).

Akbari, H., Khalighinejad, B., Herrero, J. L., Mehta, A. D. & Mesgarani, N. Towards reconstructing intelligible speech from the human auditory cortex. Sci. Rep. 9 , 874 (2019).

Moore, B. An Introduction to the Psychology of Hearing 6th edn (Brill, 2013).

Taylor, P. Text-to-Speech Synthesis (Cambridge University Press, 2009).

Valin, J.-M. & Skoglund, J. LPCNET: Improving neural speech synthesis through linear prediction. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 5891–5895 (2019).

Montavon, G., Samek, W. & Müller, K.-R. Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 73 , 1–15 (2018).

Simonyan, K., Vedaldi, A. & Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. In International Conference on Learning Representations (ICLR) (2014).

Indefrey, P. The spatial and temporal signatures of word production components: A critical update. Front. Psychol. https://doi.org/10.3389/fpsyg.2011.00255 (2011).

Ramsey, N. F. et al. Decoding spoken phonemes from sensorimotor cortex with high-density ECoG grids. NeuroImage 180 , 301–311 (2018).

Jiang, W., Pailla, T., Dichter, B., Chang, E. F. & Gilja, V. Decoding speech using the timing of neural signal modulation. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1532–1535 (2016).

Crone, N. E. et al. Electrocorticographic gamma activity during word production in spoken and sign language. Neurology 57 , 2045–2053 (2001).

Moses, D. A., Leonard, M. K., Makin, J. G. & Chang, E. F. Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat. Commun. 10 , 3096 (2019).

Herff, C. et al. Brain-to-text: Decoding spoken phrases from phone representations in the brain. Front. Neurosci. https://doi.org/10.3389/fnins.2015.00217 (2015).

Morrell, M. J. Responsive cortical stimulation for the treatment of medically intractable partial epilepsy. Neurology 77 , 1295–1304 (2011).

Pels, E. G. M. et al. Stability of a chronic implanted brain–computer interface in late-stage amyotrophic lateral sclerosis. Clin. Neurophysiol. 130 , 1798–1803 (2019).

Rao, V. R. et al. Chronic ambulatory electrocorticography from human speech cortex. NeuroImage 153 , 273–282 (2017).

Silversmith, D. B. et al. Plug-and-play control of a brain–computer interface through neural map stabilization. Nat. Biotechnol. 39 , 326–335 (2021).

Denes, P. B. & Pinson, E. The Speech Chain (Macmillan, 1993).

Cedarbaum, J. M. et al. The ALSFRS-R: A revised ALS functional rating scale that incorporates assessments of respiratory function. J. Neurol. Sci. 169 , 13–21 (1999).

Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N. & Wolpaw, J. R. BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51 , 1034–1043 (2004).

Leuthardt, E. et al. Temporal evolution of gamma activity in human cortex during an overt and covert word repetition task. Front. Hum. Neurosci. https://doi.org/10.3389/fnhum.2012.00099 (2012).

Povey, D. et al. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding (IEEE Signal Processing Society, 2011).

Zen, H. & Sak, H. Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 4470–4474 (2015).

Sutskever, I. Training Recurrent Neural Networks (University of Toronto, 2013).

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15 , 1929–1958 (2014).

Ruder, S. An overview of gradient descent optimization algorithms. Preprint at https://arxiv.org/abs/1609.04747 (2016).

Lapuschkin, S. et al. Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10 , 1096 (2019).

Roussel, P. et al. Observation and assessment of acoustic contamination of electrophysiological brain signals during speech production and sound perception. J. Neural Eng. 17 , 056028 (2020).

Kraft, S. & Zölzer, U. BeaqleJS: HTML5 and JavaScript based framework for the subjective evaluation of audio quality. In Linux Audio Conference (2014).

Stevens, S. S., Volkmann, J. & Newman, E. B. A scale for the measurement of the psychological magnitude pitch. J. Acoust. Soc. Am. 8 , 185–190 (1937).

Acknowledgements

Research reported in this publication was supported by the National Institute Of Neurological Disorders And Stroke of the National Institutes of Health under Award Number UH3NS114439 (PI N.E.C., co-PI N.F.R.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information

Authors and Affiliations

Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

Miguel Angrick, Samyak Shah, Kathryn R. Rosenblatt, Lora Clawson, Donna C. Tippett, Nicholas Maragakis & Nathan E. Crone

Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

Shiyu Luo & Daniel N. Candrea

Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA

Qinwan Rabbani

Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA

Griffin W. Milsap, Francesco V. Tenore & Matthew S. Fifer

Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

William S. Anderson & Chad R. Gordon

Section of Neuroplastic and Reconstructive Surgery, Department of Plastic Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

Chad R. Gordon

Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

Kathryn R. Rosenblatt

Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

Donna C. Tippett

Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, USA

Hynek Hermansky

Human Language Technology Center of Excellence, The Johns Hopkins University, Baltimore, MD, USA

UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands

Nick F. Ramsey


Contributions

M.A. and N.C. wrote the manuscript. M.A., S.L., Q.R. and D.C. analyzed the data. M.A. and S.S. conducted the listening test. S.L. collected the data. M.A. and G.M. implemented the code for the online decoder and the underlying framework. M.A. created the visualizations. W.A., C.G., K.R., L.C. and N.M. conducted the surgical and medical procedures. D.T. performed the speech and language assessments. F.T. handled the regulatory aspects. H.H. supervised the speech processing methodology. M.F., N.R. and N.C. supervised the study and its conceptualization. All authors reviewed and revised the manuscript.

Corresponding authors

Correspondence to Miguel Angrick or Nathan E. Crone.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

  • Supplementary Information
  • Supplementary Video 1
  • Supplementary Legends

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Angrick, M., Luo, S., Rabbani, Q. et al. Online speech synthesis using a chronically implanted brain–computer interface in an individual with ALS. Sci Rep 14 , 9617 (2024). https://doi.org/10.1038/s41598-024-60277-2


Received: 19 October 2023

Accepted: 21 April 2024

Published: 26 April 2024

DOI: https://doi.org/10.1038/s41598-024-60277-2



Biden administration faces pressure to step up its response to antisemitic incidents on college campuses

As antisemitic incidents mushroom on college campuses, some Jewish leaders and lawmakers from both parties are accusing President Joe Biden’s administration of taking a lax approach toward enforcement of civil rights laws, exposing Jewish students to continued harassment.

They point to a surge of complaints filed with the Education Department that have yet to be resolved, creating a backlog that effectively eases pressure on school administrators to take action needed to protect Jewish students amid protests over the Israel-Hamas war.

Rep. Josh Gottheimer, a New Jersey Democrat, sent a letter Thursday to Education Secretary Miguel Cardona objecting to the “speed of these investigations, delayed conclusions, and lack of adequate resources allocated to these investigations.”

The congressman asked Cardona for an update on the pending investigations into antisemitism on college campuses, noting that at Columbia University in New York, “the eruption of antisemitism … has created a particularly hostile environment for Jewish students.”

Neither Columbia nor the New York Police Department has released data on the number of antisemitic incidents at the school.

The tumult spreading through college campuses is especially tricky for Biden as he works to rebuild the voting coalition from the 2020 presidential race. Many of the students protesting the war in Gaza say they are unhappy with him for not bringing about a cease-fire.

At the same time, some Jewish students and their defenders are displeased that the Biden administration isn’t showing more resolve in stamping out antisemitic harassment on campus.

The breakdown in support is not so tidy, though. Plenty of Jewish Americans across the country oppose Israel’s conduct of the war and have joined with protesters in demanding a cease-fire and an end to U.S. military aid to Israel. At Columbia last week, one Jewish student stood near an encampment that has cropped up in the shadow of Butler Library and told NBC News he had quietly celebrated Passover inside the tents with other protesters.

In this volatile atmosphere, any position Biden takes is bound to alienate someone — and has.

“They want to get re-elected, and they’re afraid of what’s going to happen in the swing states,” Michael Oren, former Israeli ambassador to the U.S., said of team Biden. “The joke is that the two-state solution is Michigan and Pennsylvania.”

Herbie Ziskend, White House deputy communications director, said that electoral considerations don’t drive the president’s actions.

“Politics doesn’t enter in,” he said.

Hamas’ surprise attack on Israel on Oct. 7 and the ensuing Israel counteroffensive touched off a wave of antisemitic incidents around the country and turned college campuses into flashpoints of anger. In the two and a half months between the start of the war and the end of 2023, the Anti-Defamation League tallied more than 5,200 antisemitic incidents nationwide, exceeding the total for all of 2022 — though  the group said  that number includes 1,317 rallies that were marked by “antisemitic rhetoric, expressions of support for terrorism against the state of Israel and/or anti-Zionism,” which weren’t “necessarily” counted in prior tallies.

What followed was a rash of complaints filed with the Education Department’s civil rights office — an arm of the Biden administration that enforces Title VI, a provision of the 1964 Civil Rights Act that prohibits discrimination in programs receiving federal aid.

Since the war began, the office opened 93 investigations into cases of discrimination against members of ethno-religious groups — about seven times the number begun in a comparable period before Hamas’ attack. The complaints involve both secondary schools and some of the nation’s most prestigious universities, including Columbia, Harvard, Princeton and Yale.

“While the evidence is often clear and convincing, many Title VI investigations have remained unresolved for months, and even years,” Gottheimer wrote.

“The proliferation of attacks and threats on Jewish and pro-Israel students demands immediate action,” he added.

Starting an inquiry is meaningless unless the department moves swiftly to resolve the complaint and hold schools accountable, others said.

“Something needs to give,” said Brian Cohen of the Kraft Center for Jewish Student Life at Columbia/Barnard Hillel. “The universities aren’t acting fast enough. I don’t think the Department of Education is working fast enough. Universities around the country are spiraling out of control and that’s not good for anybody connected to higher education.”

The Education Department did not make Cardona available for comment. In a prepared statement, he said: “As the nation’s secretary of education, I am incredibly concerned by the reports we are hearing about antisemitic hate being directed at students. The Department’s Office for Civil Rights is committed to actively investigating complaints from those who feel their institution is not protecting their civil rights.”

Said Ziskend: “The president has forcefully condemned antisemitism and hate. He has spoken out with moral clarity on the need to condemn antisemitism. The whole administration has done that and continues to do that.”

Last week, Douglas Emhoff, husband of Vice President Kamala Harris, spoke to Cohen by phone.

Emhoff, who is Jewish, “wanted to check in on me and our Jewish students and offer his support,” Cohen said. “He ended the call by reminding me that the work we’re doing is incredibly important and not to forget Jewish joy, as well.”

A longtime supporter of Israeli statehood, Biden has an affinity for the Jewish community, advisers say, and has taken myriad steps to combat antisemitism. Last May, he released a 60-page national strategy to counter antisemitism, billing it as the first of its kind.

“This blatant antisemitism is reprehensible and dangerous — and it has absolutely no place on college campuses, or anywhere in our country,” Biden said in a recent statement.

A month into the war in Gaza, the Education Department’s civil rights office sent a letter to schools reminding them that they are legally obligated to prevent discrimination against students, be they Jewish, Israeli, Muslim or Palestinian.

But such gestures aren’t keeping students safe as pro-Palestinian protests spread to other campuses, lawmakers and Jewish rights groups say.

Kenneth Marcus, who headed the Education Department’s civil rights office in the George W. Bush and Donald Trump administrations, said, “The department’s office of civil rights should be seizing the moment and taking charge of this situation. It’s not enough merely to wait passively for complaints to come in and log them and indicate that investigations have been opened.”

“They should be proactively opening investigations rather than waiting,” added Marcus, who chairs the Brandeis Center, which promotes civil and human rights for Jews. 

In a visit to Columbia on Wednesday, House Speaker Mike Johnson, R-La., met privately with Jewish students who, he said, showed him flyers appearing on campus that “looked like Nazi propaganda from the 1930s.” Johnson’s staff showed one to NBC News: a drawing of a skunk with a Jewish star on its side. “Skunk on Campus,” the caption read.

Some Jewish students at Columbia said that they have felt threatened walking on school grounds. After meeting with Johnson, one told NBC News that he has spotted Hamas flags on campus. Another said that he’s heard chants of “Go back to Europe.”

“It’s time to say, ‘Enough,’” said Ben Solomon, 22, who is studying economics and political science. “This isn’t speech. This is disruption. This is intimidation.”

As he left the university, Johnson told NBC News that he planned to call Cardona with a message: He would tell him “what I saw here and encourage him to come and make a visit himself.”


Peter Nicholas is a senior national political reporter for NBC News.


Chants of ‘shame on you’ greet guests at White House correspondents’ dinner shadowed by war in Gaza

Pro-Palestinian protesters gathered outside the site of the White House Correspondents’ Dinner on Saturday, accusing U.S. journalists of undercovering the war in Gaza and misrepresenting it. (AP video by Serkan Gurbuz)


An election-year roast of President Joe Biden before journalists, celebrities and politicians at the annual White House correspondents’ dinner on Saturday butted up against growing public discord over the Israel-Hamas war.


President Joe Biden, right, introduces host Colin Jost at the White House Correspondents’ Association Dinner at the Washington Hilton, Saturday, April 27, 2024, in Washington. (AP Photo/Manuel Balce Ceneta)


(Image gallery: photos of President Joe Biden, host Colin Jost and pro-Palestinian demonstrations outside the Washington Hilton during the White House Correspondents' Association Dinner, Saturday, April 27, 2024. AP Photos)

WASHINGTON (AP) —

The war in Gaza spurred large protests outside a glitzy roast with President Joe Biden, journalists, politicians and celebrities Saturday but went all but unmentioned by participants inside, with Biden instead using the annual White House correspondents’ dinner to make both jokes and grim warnings about Republican rival Donald Trump’s fight to reclaim the U.S. presidency.

An evening normally devoted to presidents, journalists and comedians taking outrageous pokes at political scandals and each other often seemed this year to illustrate the difficulty of putting aside the coming presidential election and the troubles in the Middle East and elsewhere.

Biden opened his roast with a direct but joking focus on Trump, calling him “sleepy Don,” in reference to a nickname Trump had given the president previously.

President Joe Biden applauds at the conclusion of the White House Correspondents' Association Dinner at the Washington Hilton, Saturday, April 27, 2024, in Washington. (AP Photo/Manuel Balce Ceneta)

Despite being similar in age, Biden said, the two presidential hopefuls have little else in common. “My vice president actually endorses me,” Biden said. Former Trump Vice President Mike Pence has refused to endorse Trump’s reelection bid.

But the president quickly segued to a grim speech about what he believes is at stake this election, saying that another Trump administration would be even more harmful to America than his first term.

“We have to take this serious — eight years ago we could have written it off as ‘Trump talk’ but not after January 6,” Biden told the audience, referring to the supporters of Trump who stormed the Capitol after Biden defeated Trump in the 2020 election.

A Pro-Palestinian demonstration encampment is seen at the Columbia University, Saturday, April 27, 2024, in New York. (AP Photo/Yuki Iwamura)

Trump did not attend Saturday’s dinner and never attended the annual banquet as president. In 2011, he sat in the audience, and glowered through a roasting by then-President Barack Obama of Trump’s reality-television celebrity status. Obama’s sarcasm then was so scalding that many political watchers linked it to Trump’s subsequent decision to run for president in 2016.

Biden’s speech, which lasted around 10 minutes, made no mention of the ongoing war or the growing humanitarian crisis in Gaza.

One of the few mentions came from Kelly O’Donnell, president of the correspondents’ association, who briefly noted some 100 journalists killed in Israel’s 6-month-old war against Hamas in Gaza. In an evening dedicated in large part to journalism, O’Donnell cited journalists who have been detained across the world, including Americans Evan Gershkovich in Russia and Austin Tice, who is believed to be held in Syria. Families of both men were in attendance as they have been at previous dinners.

To get inside Saturday’s dinner, some guests had to hurry through hundreds of protesters outraged over the mounting humanitarian disaster for Palestinian civilians in Gaza. They condemned Biden for his support of Israel’s military campaign and Western news outlets for what they said was undercoverage and misrepresentation of the conflict.

“Shame on you!” protesters draped in the traditional Palestinian keffiyeh cloth shouted, running after men in tuxedos and suits and women in long dresses holding clutch purses as guests hurried inside for the dinner.

Demonstrators hold a sign while press vest lay on the ground covered in red paint during a pro-Palestinian protest over the Israel-Hamas war at the White House Correspondents' Association Dinner, Saturday April 27, 2024, in Washington. (AP Photo/Terrance Williams)

“Western media we see you, and all the horrors that you hide,” crowds chanted at one point.

Other protesters lay sprawled motionless on the pavement, next to mock-ups of flak vests with “press” insignia.

Ralliers cried “Free, free Palestine.” They cheered when at one point someone inside the Washington Hilton — where the dinner has been held for decades — unfurled a Palestinian flag from a top-floor hotel window.

A demonstrator with red paint on their hand and face is seen behind a police barricade during a pro-Palestinian protest over the Israel-Hamas war at the White House Correspondents' Association Dinner, Saturday April 27, 2024, in Washington. (AP Photo/Terrance Williams)

Criticism of the Biden administration's support for Israel's military offensive in Gaza has spread through American college campuses, with students pitching encampments and withstanding police sweeps in an effort to force their universities to divest from Israel. Counterprotests back Israel's offensive and complain of antisemitism.

Biden’s motorcade Saturday took an alternate route from the White House to the Washington Hilton than in previous years, largely avoiding the crowds of demonstrators.

Saturday’s event drew nearly 3,000 people. Celebrities included Academy Award winner Da’Vine Joy Randolph, Scarlett Johansson, Jon Hamm and Chris Pine.

Both the president and comedian Colin Jost, who spoke after Biden, made jabs at the age of both the candidates for president. “I’m not saying both candidates are old. But you know Jimmy Carter is out there thinking, ‘maybe I can win this thing,’” Jost said. “He’s only 99.”

Law enforcement, including the Secret Service, instituted extra street closures and other measures to ensure what Secret Service spokesman Anthony Guglielmi said would be the “highest levels of safety and security for attendees.”

Protest organizers said they aimed to bring attention to the high numbers of Palestinian and other Arab journalists killed by Israel’s military since the war began in October.

More than two dozen journalists in Gaza wrote a letter last week calling on their colleagues in Washington to boycott the dinner altogether.

“The toll exacted on us for merely fulfilling our journalistic duties is staggering,” the letter stated. “We are subjected to detentions, interrogations, and torture by the Israeli military, all for the ‘crime’ of journalistic integrity.”

A demonstrator protests as guests arrive at the White House Correspondents Association Dinner at the Washington Hilton on Saturday, April 27, 2024, in Washington. (AP Photo/Kevin Wolf)

One organizer complained that the White House Correspondents’ Association — which represents the hundreds of journalists who cover the president — largely has been silent since the first weeks of the war about the killings of Palestinian journalists. WHCA did not respond to a request for comment.

According to a preliminary investigation released Friday by the Committee to Protect Journalists, nearly 100 journalists have been killed covering the war in Gaza. Israel has defended its actions, saying it has been targeting militants.

“Since the Israel-Gaza war began, journalists have been paying the highest price — their lives — to defend our right to the truth. Each time a journalist dies or is injured, we lose a fragment of that truth,” CPJ Program Director Carlos Martínez de la Serna said in a statement.

Sandra Tamari, executive director of Adalah Justice Project, a U.S.-based Palestinian advocacy group that helped organize the letter from journalists in Gaza, said “it is shameful for the media to dine and laugh with President Biden while he enables the Israeli devastation and starvation of Palestinians in Gaza.”

In addition, Adalah Justice Project started an email campaign targeting 12 media executives at various news outlets — including The Associated Press — expected to attend the dinner who previously signed onto a letter calling for the protection of journalists in Gaza.

“How can you still go when your colleagues in Gaza asked you not to?” a demonstrator asked guests heading in. “You are complicit.”

___ Associated Press writers Mike Balsamo, Aamer Madhani, Fatima Hussein and Tom Strong contributed to this report.

FARNOUSH AMIRI

Dozens Arrested at U.Va. as Others Show Defiance at Commencement


People in the foreground wearing goggles. One is facing the camera and is wearing a black mask. The others are facing away from the camera toward a line of police officers in riot gear.

Here is the latest on campus protests.

At least 25 people were arrested on Saturday at the University of Virginia, as protests over the war in Gaza continued to disrupt university campuses and puncture the celebratory atmosphere around graduation ceremonies across the country.

The arrests and aggressive efforts to clamp down on protests underscored just how tumultuous the end of the spring semester has been for universities, many of which are now holding commencement ceremonies this weekend against the backdrop of tense protests on their campuses.

Pro-Palestinian students, for their part, have signaled that they will continue to challenge their universities over their financial ties to Israel and military companies; express outrage over the violence in Gaza; and condemn aggressive treatment of protesters on campus. At one point on Saturday, police in riot gear sprayed dozens of people at the University of Virginia with chemical irritants.

Other protests have extended from campus property to commencement. At the University of Michigan’s ceremony, pro-Palestinian supporters briefly disrupted the ceremony and were met by state police. At Indiana University in Bloomington, students walked out of the commencement remarks in protest.

School officials have struggled with how to respond to the protests as they try to balance free speech with campus security. For the graduation ceremonies, some universities plan to set up designated areas for protests in an attempt to allow the ceremonies to go forward without suppressing speech.

Among other schools set to hold ceremonies this weekend are Northeastern University and Ohio State University — both universities that have grappled with unrest over student protests.

Across the country, more than 2,300 people have been arrested or detained on campuses in the past two weeks, according to a tally by The New York Times.

Here is what else to know:

In Charlottesville, Va., at least three law enforcement agencies moved in to clear out the protesters at the University of Virginia, who said their demonstration was peaceful. Police officials said two people had been released, and all those arrested had been charged with trespassing.

Dozens of protesters were arrested at the Art Institute of Chicago on Saturday, after the school asked the police to intervene and remove demonstrators from school property.

The University of Mississippi said it was investigating at least one student after counterprotesters directed racist taunts at pro-Palestinian protesters this week, school officials said. The university chancellor, Glenn F. Boyce, said that statements made at the demonstration were “offensive, hurtful and unacceptable.”

The University of Michigan has seen repeated protests during its graduation festivities. One person was arrested Friday evening during a protest outside a dinner for recipients of honorary degrees, while the Saturday graduation ceremony saw cheers and boos as people brought Palestinian flags down the venue’s aisles.

At the University of Chicago, which adopted a set of free speech standards in 2015 that have been adopted by colleges across the country, the school's president said an encampment there "cannot continue," citing disruptions and vandalism.

A handful of universities have agreed to some of the protesters' demands, bringing peaceful ends to demonstrations but also criticism from some Jewish groups. The schools announcing agreements this week included the University of California, Riverside; Brown; Northwestern; Rutgers; and the University of Minnesota. It is unclear how many of them might work.

— Emily Cochrane, John Yoon, Ryan Patrick Hooper and Jackson Landers

Police aggressively push U.Va. protesters off a campus lawn and arrest 25 people.


The police arrested at least 25 pro-Palestinian protesters on Saturday at the University of Virginia in Charlottesville after aggressively clearing demonstrators off a university lawn and at one point using chemical irritants on dozens of people.

Like hundreds of students, faculty and staff across the country, students in Charlottesville protested this week in the heart of their campus, calling for the university to divest from Israel, weapons manufacturers and companies with ties to Israeli institutions, and to pledge to protect students’ right to peacefully protest. Tents were set up Friday, but cleared the next day.

In a news release, the university said the protesters had violated school policy on Friday by setting up tents on the lawn and by using megaphones. But the encampment was not forcibly removed then, the statement read, “given continued peaceful behavior and the presence of young children at the demonstration site, and due to heavy rain Friday night.”

Jim Ryan, the university president, wrote in a letter to the campus, “I sincerely wish it were otherwise, but this repeated and intentional refusal to comply with reasonable rules intended to secure the safety, operations and rights of the entire university community left us with no other choice than to uphold the neutral application and enforcement of those rules.”

By Saturday afternoon, protesters were met with police officers in riot gear. At one point, the police used chemical irritants against the crowd to get people to disperse.

The university said it was not immediately clear how many of the 25 who were arrested were affiliated with the school. All were charged with trespassing, according to a police official.

“Shame on you, shame on you!” chanted a crowd of hundreds of students and Charlottesville locals as a combined force of dozens of officers from at least three law enforcement agencies pushed them into the street in front of the university’s Rotunda building.

“This is absolutely obscene,” said Colden Dorfman, a third-year student majoring in computer science, who faced down the cordon as the police sprayed chemical irritants. “This is insanity. Everyone came here with peaceful intentions. I’m ashamed that this is what our police force is being used for.”

Some protesters and their supporters directly questioned the magnitude of the police response, particularly compared with the school's response in 2017 to hundreds of white nationalists marching on campus with torches.

“What did you do when the K.K.K. came to town?” protesters could be heard yelling, as the police moved to push them into University Avenue, which had been blocked off to traffic.

Even as it began to rain, hundreds of people remained for hours before dispersing. Some people headed to the Albemarle-Charlottesville Regional Jail, where a new protest was forming.

— Jackson Landers, Hawes Spencer and Emily Cochrane. Jackson Landers and Hawes Spencer reported from Charlottesville, Va.


At Michigan, commencement is briefly disrupted by dozens of pro-Palestinian graduates.

Video: Pro-Palestinian graduates briefly interrupt Michigan's commencement, marching toward the stage with flags and signs during the University of Michigan's ceremony.

On a balmy Saturday in Ann Arbor, Mich., thousands of graduates in caps and gowns filed into the biggest stadium in the country for the University of Michigan’s graduation ceremony.

As tens of thousands of spectators found their seats in the packed Michigan Stadium, planes with dueling messages circled overhead: one with a banner that read, “We stand with Israel. Jewish lives matter,” and another with the message, “Divest from Israel now! Free Palestine!”

Then, dozens of pro-Palestinian graduates draped in flags, kaffiyeh and graduation caps marched down the center aisle toward the stage. They chanted, “Regents, regents, you can’t hide! You are funding genocide!” calling for the university to divest from investments that have benefited Israel.

At least a dozen officers from the Michigan State Police quickly followed to block the parade from making it to the stage, urging protesters to retreat to the back of the graduates section.

As the chants reverberated throughout the stadium and demonstrators talked to the police, some students got up from their seats and joined in, disobeying police officers who told them to sit down.

But other students — some with the Star of David on their caps — were enraged by the disruption and demanded that the protesters be kicked out. “You’re ruining our graduation!” one yelled. Some patrons in private boxes hung Israeli flags from their seats.

Once the demonstrators moved to the back of the ceremony, tensions simmered, and the protest remained peaceful. University officials said that peaceful protests are not uncommon at graduation or university events.

The chants never stopped — though how audible and distracting it was might have depended on where people sat in the stadium — but the audience returned their attention to the stage as the ceremony carried on.

About a mile away from graduation, a pro-Palestinian encampment on the university’s Diag, a central quadrangle on campus, was abuzz with campers, activists and recently graduated students and their parents.

Nestled between brick academic buildings and lush greenery, the encampment sits just outside the steps of a library and not far from a busy pedestrian strip of shops and restaurants. The occupation has seen as many as 200 protesters overnight and includes dozens of tents.

Salma Hamamy, 22, one of the organizers of the encampment, was still wearing her graduation cap and gown after marching down the aisle in protest at commencement. She does not regret protesting at her graduation — a “once in a lifetime” moment, she said.

“It would feel completely wrong of me to not use graduation as an opportunity to call attention to this. That’s where all the regents are,” she said. “It’s important that they can physically see us. They can’t ignore us.”

Jonathan Ellis contributed reporting.

— Ryan Patrick Hooper, reporting from Ann Arbor, Mich.

At least one student at Ole Miss is being investigated after racist counterprotest.

The University of Mississippi is investigating the conduct of at least one student after counterprotesters directed racist taunts at pro-Palestinian protesters this week, school officials said.

In a letter to students, faculty and staff members on Friday evening, Glenn F. Boyce, the university chancellor, said the school had begun to investigate one student and may look at more.

“From yesterday’s demonstration, university leaders are aware that some statements made were offensive, hurtful and unacceptable, including actions that conveyed hostility and racist overtones,” Mr. Boyce wrote. He did not identify the student, citing privacy law.

He added, “To be clear, people who say horrible things to people because of who they are will not find shelter or comfort on this campus.”

Video captured by the Mississippi Free Press and the Daily Mississippian showed a crowd of white male students jeering and taunting a lone Black woman standing in front of the protest on campus, with one man making monkey gestures and hooting at her. Another video compilation showed the men yelling profane and derogatory insults.

The few dozen pro-Palestinian protesters appeared to be vastly outnumbered by the crowd of counterdemonstrators, though university officials said no one was arrested or injured.

Gov. Tate Reeves of Mississippi, a Republican, approvingly captioned a separate video of the demonstrations that showed the counterprotesters singing “The Star-Spangled Banner” over the protest chants, though he made no mention of the other video clips that soon circulated. And former President Donald J. Trump, the presumptive Republican nominee for president, also shared a separate video on social media from the protests where the men could be heard chanting “we want Trump.”

The university has a painful history of racist episodes, and, for some, the videos evoked the mob and deadly riots that sought to stop the enrollment of James Meredith, the first Black student at the school, in 1962. And while the school has shed some of its Confederate imagery, in 2012, two students were arrested after racial slurs were chanted at a protest over former President Barack Obama’s re-election. In 2014, a noose was placed around a statue of Mr. Meredith.

“It is important to acknowledge our challenging history, and incidents like this can set us back,” Mr. Boyce wrote. “It is one reason why we do not take this lightly and cannot let the unacceptable behavior of a few speak for our institution or define us.”

— Emily Cochrane

Echoing Vietnam War protests, demonstrators at Kent State University call for the university to divest.

Hundreds of pro-Palestinian demonstrators gathered at Kent State University in Ohio on Saturday to protest the war in Gaza, exactly 54 years after a similar campus demonstration ended in four student deaths.

The activists were silent but impossible to miss. They assembled in a semicircle around a stage on Kent State’s commons where speakers were commemorating the events of May 4, 1970: James Rhodes, then the governor of Ohio, had called in the National Guard to quell a demonstration against U.S. involvement in the Vietnam War. The troops opened fire. Four people — Allison Krause, William Schroeder, Sandra Scheuer and Jeffrey Miller — were killed. Several others were wounded.

The campus still bears the scars of the 1970 shooting. Illuminated columns mark the precise spots where the four students were killed, and the tragedy was immortalized in the song “Ohio” performed by the folk-rock quartet Crosby, Stills, Nash & Young.

In a speech on Saturday to honor the victims, Sophia Swengel, a sophomore and the president of the May 4 Task Force, a group formed in 1975 to keep the students’ legacy alive, also acknowledged the protesters. Many of them were hoisting signs calling on the university to divest from weapons manufacturers and military contractors.

“Once again students are taking a stand against bloodshed abroad,” she said, referring to Israel’s assault on Gaza, which followed the Hamas-led attack of Oct. 7. “Much like they did against the Vietnam War back in the ’60s,” Ms. Swengel added.

Among the student demands in 1970 were abolishing the R.O.T.C. program, ending the university’s ties with police training programs and halting the research and development of the liquid crystal used in heat detectors that guided bombs dropped on Cambodia.

Today, demonstrators at Kent State are asking the university to divest its portfolio of instruments of war. “The university is profiting from war, and they were arguing in ’69 and ’70 that the university was also profiting from war,” said Camille Tinnin, a 31-year-old Ph.D. student studying political science who has met with the school’s administration to discuss divestiture.

While Kent State cannot end the war in Gaza, “what the university can control is its own investment portfolio,” said Yaseen Shaikh, 19, a member of Students for Justice in Palestine who is about to graduate with a degree in computer science.

Ms. Tinnin and Mr. Shaikh, along with two other students, met with Mark Polatajko, senior vice president for finance and administration for Kent State, on Dec. 4, a meeting confirmed in a statement from Rebecca Murphy, a Kent State spokeswoman. Mr. Polatajko shared the university’s investment portfolio with the four activists during the meeting, Ms. Tinnin said in an interview before Saturday’s protest. She said activists who scrutinized the portfolio found that it included investments in weapons manufacturers.

On Saturday, in a nod to nationwide student demonstrations against the war in Gaza, Ms. Swengel said that encampments and demonstrations “stand as living, breathing monuments of the willingness of students to stand up against genocide and for what they believe in.”

In a statement emailed to reporters, Ms. Murphy said the university “upholds the First Amendment rights of free speech and peaceful assembly for all.”

“Consistent with our core values, we encourage open dialogue and respectful civil discourse in an inclusive environment,” she added.

— Patrick Cooley

Vassar protesters removed their tents after the college agreed to review its investments.

Pro-Palestinian protesters dismantled their encampment at Vassar College in Poughkeepsie, N.Y., on Saturday after reaching an agreement with the institution that requires administrators to review a divestment proposal.

Student demonstrators pitched dozens of tents on Vassar’s campus, starting on Tuesday. The liberal arts college is a bastion of progressive ideas with a long history of student protest, and Vassar’s president said in a statement this week that she hoped to resolve the current disagreement with pro-Palestinian demonstrators peacefully.

In the agreement reached on Saturday, Vassar officials agreed to review a proposal to divest funds from “defense-related investments, such as militarized surveillance and arms production,” and to support student fund-raising efforts in support of refugees, according to a statement by the president, Elizabeth H. Bradley.

The divestment language did not mention Israel or the war in the Gaza Strip, as the protesters had in their demands.

But Ms. Bradley said administrators had also agreed to “recruit and support Palestinian students and scholars-at-risk, who have lost educational and professional opportunities” since Oct. 7, a reference to the attacks in Israel by Hamas and its allies that prompted Israel’s war in Gaza.

“With these commitments, the college will work to improve our understanding, dialogue about, and educational programming concerning peace and conflict, with focus on Gaza and the Middle East,” she said.

The Vassar agreement is one of several in which student protesters have agreed to clear camps in exchange for commitments to discuss institutional investment policies around Israel. Students for Justice in Palestine at Vassar, the group that organized the encampment and negotiated with administrators, said in a statement on social media that it did not feel like a victory.

“We are not happy about the concessions we’ve made, but our work is not done,” the group said in the statement, adding that the administration had not agreed to all of the demands laid out by protesters when they launched the encampment. Those demands included calls for the Vassar administration to release a public statement calling for “an immediate end to Israel’s siege on Gaza and an end to U.S. aid for Israel,” and to completely boycott Israeli academic institutions, including Vassar-sponsored study abroad programs in Israel.

“At this time, we believe this is the most strategic decision we can make in order to further our efforts for divestment and Palestinian liberation,” the students said of the agreement.

They said they would donate the roughly $7,000 they had raised since launching their encampment to families in Gaza, and redistribute any donated supplies to people and organizations in Poughkeepsie.

— Erin Nolan

Dozens of Indiana University graduates walked out in protest during commencement.

Dozens of students walked out of Indiana University’s graduation ceremony on Saturday in protest of the war in Gaza, moving instead to a green space on campus where students had been demonstrating for weeks.

More than 6,700 graduates filed into Memorial Stadium in Bloomington, Ind., to receive their diplomas. There were more than 40,000 people in attendance, according to the university. Outside the stadium, the police presence was heavy. Above it, a plane circled towing a banner that said, “let Gaza live.”

The students walked out in two groups. The first briefly interrupted the ceremony, leaving and chanting “Shut it down” and “Free, free Palestine” as the school’s embattled president, Pamela Whitten, opened the program. The beginning of her remarks was largely drowned out by jeers, but she continued without pausing.

“We have been looking forward to celebrating this moment with you,” she said at one point in her brief remarks. She made no mention of the protests.

The second batch of protesters walked out during a speech by the commencement speaker, the tech entrepreneur Scott Dorsey. Protesters chanted “Free, free Palestine” as they filed out. They were drowned out by boos.

Lauren Ulrich, 21, of Rolla, Mo., graduated on Saturday with degrees in journalism and environmental studies. But she did not stay at the commencement ceremony long enough to turn her tassel. Her decision to walk out was one that Ms. Ulrich said she had not made lightly.

“I think sometimes it is scary to do the right thing,” she said. “I was scared. But people are dying and there’s no way I could not do something about it.”

After months of participating in protests and the school’s encampment, Ms. Ulrich said she planned to leave campus the day after graduation. She said she was “incredibly sad” but felt that the protest movement had enough supporters to keep up momentum over the summer.

“I think they will get creative in how they will continue it,” Ms. Ulrich said.

Liz Capp, 22, of Indianapolis, graduated on Saturday with a degree in therapy and did not participate in the protest. Before the ceremony, she anticipated that there would be some kind of demonstration. But it had not concerned her.

“Everyone has the right to peacefully protest,” she said.

— Kevin Williams

The president of the University of Chicago says an ‘encampment cannot continue.’

The president of the University of Chicago said on Friday that the pro-Palestinian encampment on his campus’s quad “cannot continue,” a position that was being closely watched in higher education because the university has long held itself up as a national model for free expression.

Administrators had initially taken a permissive approach to the camp and pointed toward what is known as the Chicago statement, a set of free speech standards adopted in 2015 that have become a touchstone and guide for colleges across the country. But President Paul Alivisatos said on Friday that those protections were not absolute, and that the encampment had run afoul of university policies.

“On Monday, I stated that we would only intervene if what might have been an exercise of free expression blocks the learning or expression of others or substantially disrupts the functioning or safety of the university,” Dr. Alivisatos said in a message to the campus. “Without an agreement to end the encampment, we have reached that point.”

In the hours after his announcement, hundreds of protesters remained at the encampment, where they chanted and held signs as counterprotesters gathered nearby. At one point, some pro-Palestinian demonstrators and counterprotesters briefly fought one another. By early afternoon, more police officers, both from the university and the city, were visible near the quad.

The scene had quieted down, at least temporarily, by early Friday evening. Several security guards were stationed around the quad, where protesters moved quietly around their encampment while others studied or walked nearby. There was no effort by law enforcement to forcibly disband the encampment.

Chicago’s mayor, Brandon Johnson, issued a statement saying he had been in touch with Dr. Alivisatos and had “made clear my commitment to free speech and safety on college campuses.”

As at dozens of colleges across the country, Chicago students have erected tents on campus and issued a set of demands to administrators, including divesting from weapons manufacturers. A member of a group leading the encampment, UChicago United for Palestine, accused the university of “negotiating in bad faith” in a statement on Friday.

The protest group “refuses to accept President Alivisatos’s repeated condescending offer of a public forum to discuss ‘diverse viewpoints’ on the genocide, as this is clearly a poor attempt at saving face without material change,” said Christopher Iacovetti, a student who participated in negotiations.

Dr. Alivisatos, a chemist who became president of the university in 2021, said in his message to campus that the encampment had become far more than a cluster of tents. He accused protesters of vandalizing buildings, blocking walkways, destroying a nearby installation of Israeli flags and flying a Palestinian flag from a university flagpole.

“The encampment has created systematic disruption of campus,” Dr. Alivisatos said. “Protesters are monopolizing areas of the Main Quad at the expense of other members of our community. Clear violations of policies have only increased.”

The University of Chicago, a private college that is one of the country’s most selective, has been praised by conservatives and free speech advocates in recent years for its approach to expression on its campus.

As part of its free speech philosophy, the university also put forward the principle of institutional neutrality.

In a 1967 declaration, the university called for schools to remain neutral on political and social matters, saying a campus “is the home and sponsor of critics; it is not itself the critic.” But at other colleges, students over the years have frequently and successfully pressed their administrations to take positions on matters like police brutality and global warming.

In August 2016, the University of Chicago informed incoming freshmen: “We do not support so-called trigger warnings, we do not cancel invited speakers because their topics might prove controversial, and we do not condone the creation of intellectual safe spaces where individuals can retreat from ideas and perspectives at odds with their own.”

Versions of the university’s declaration of free speech principles have been adopted by dozens of other colleges in recent years.

“In a word, the university’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the university community to be offensive, unwise, immoral or wrong-headed,” that declaration said.

But the statement also describes clear limits, including a right to prohibit illegal activities and speech “that constitutes a genuine threat or harassment.”

— Mitch Smith and Robert Chiarito Reporting from Chicago

Encampment ends at U.C. Riverside after protesters and school officials reach a deal.

Protesters took down their encampment at the University of California, Riverside, Friday night after they came to an agreement with school administrators.

As part of the deal, the school agreed to disclose and examine its investments; create a task force made of students and faculty members to look at the administering of its endowment; and end a business school study-abroad program in Israel, Jordan, Egypt and other countries because the school said it was not consistent with university policies.

The task force will produce a report by the end of the winter quarter of 2025 to present to the board of trustees, the school said.

In a letter to the campus community on Friday, the school’s chancellor, Kim Wilcox, said that his goal had been to resolve this peacefully and that he was encouraged by the result. He said that school leaders had been meeting with leaders of the student encampment on campus since Wednesday.

The school’s chapter of Students for Justice in Palestine called the agreement “a win for all of us” in a statement posted on Instagram, adding that all of its demands were met.

The school also agreed, at the request of students, to review the availability of Sabra hummus on campus. Pro-Palestinian activists have frequently called on people to boycott the brand over the years, as one of Sabra’s joint owners is the Strauss Group, an Israeli food company. In 2010, the Strauss Group said on its website that it had provided financial support to part of Israel’s military force. And today it says it maintains contact with “IDF divisions.” Sabra is co-owned by PepsiCo. Efforts to reach that company were unsuccessful.

The agreement at U.C. Riverside is not the first between protesters and universities since protests on campus began against the war in Gaza.

Earlier this week, officials at Brown University also made an agreement with pro-Palestinian protesters. Demonstrators agreed to dismantle their encampment at Brown, which had been removed by Tuesday evening, and university leaders said they would discuss, and later vote on, divesting funds from companies connected to the Israeli military campaign in Gaza.

Several days ago, an agreement was reached between Northwestern University and the pro-Palestinian demonstrators on campus. The agreement included a promise by the university to be more transparent about its financial holdings. In turn, demonstrators removed the tent camp they built last week at Deering Meadow, a stretch of lawn on campus.

Jewish leaders, including officials from the American Jewish Committee, strongly objected to the agreement at Northwestern, saying it “succumbed to the demands of a mob,” and seven members of a Northwestern committee created to advise the university’s president on preventing antisemitism stepped down in protest on Wednesday.

Agreements between school administrators and student protesters have also taken place at other schools, including Rutgers University.

— Anna Betts

Police treatment of a Dartmouth professor stirs anger and debate.

The video is jarring: A gray-haired woman tumbles, gets up to reach for her phone, held by police officers, and is yanked and taken to the ground. “Are you kidding me?” a bystander asks.

“What are they doing to her?” another adds.

Annelise Orleck, a labor historian who has taught at Dartmouth College for more than three decades, was at a protest for Palestinians in Gaza on Wednesday night, when she was knocked to the ground. Dr. Orleck, 65, was zip-tied and was one of 90 people who were arrested, according to the local police.

The professor walked away with a case of whiplash. But a short video clip of the episode flew around the internet, intensifying the debate over the relatively swift decision by Dartmouth’s president, Sian Leah Beilock, to call in police to arrest students and clear out an encampment.

Unlike at other campuses, where tents were tolerated for days, the police action at Dartmouth began a little more than two hours after the encampment first appeared, according to the college’s newspaper, The Dartmouth, and students who observed the events on Wednesday.

Dr. Beilock defended her decision.

“Last night, people felt so strongly about their beliefs that they were willing to face disciplinary action and arrest,” Dr. Beilock said in a message to campus on Thursday. “While there is bravery in that, part of choosing to engage in this way is not just acknowledging — but accepting — that actions have consequences.”

Dr. Beilock did not directly address the treatment of Dr. Orleck, who called the message “outrageous.”

“Her actions have consequences, too,” Dr. Orleck said in an interview. “The campus is in an uproar. Neither the students nor the faculty have been as radicalized in a long time as they’re feeling today.”

“I’ve been teaching here for 34 years,” she added. “There have been many protests, but I’ve never, ever seen riot police called to the green.” Dartmouth declined to comment on the incident.

How to handle the encampments has become a grinding challenge for university administrators. Earlier this month, the decision by Columbia University’s president to call in police stirred up protests at campuses across the country.

Demonstrations over the war in Gaza have led to more than 2,000 arrests over the last two weeks at universities across the country, according to a New York Times tally. The arrests have also angered some faculty, who have sometimes stepped in to try to help students.

The police in Hanover, N.H., the home of Dartmouth, said that the arrested included students and nonstudents, but did not provide a breakdown. The charges included criminal trespassing and resisting arrest. When the Hanover Police Department and the state police asked students to disperse, some did and others didn’t, police officials said.

It was unclear what disciplinary action, if any, the arrested students would face from the university.

Dr. Orleck said she was charged with criminal trespass and temporarily banned from campus, as a condition of her bail. The college’s administrators said on Thursday that the suspension was an error in the bail process, which they were working to fix.

In her message, Dr. Beilock strongly defended the decision to sweep away the encampment. And, she said, a key demand of protesters — that trustees vote on divestment from companies connected with Israel — violated the rules for making such decisions.

“Dartmouth’s endowment is not a political tool,” she said, “and using it to take sides on such a contested issue is an extraordinarily dangerous precedent to set.”

Dr. Orleck, who once served as the head of Jewish studies at the university, said she had watched with unease as police confrontations with student protesters escalated across the country.

She said she wanted to be at the Dartmouth protest because she thought her presence as an older Jewish professor, alongside many other older Jewish professors, could help keep her students safe.

As the police moved in, arresting students, Dr. Orleck said she started taking videos.

“I said to them, and I said it with some anger, ‘Leave our students alone. They’re students. They’re not criminals,’” she said. “The next thing I knew, I was rushed from the back.”

Messages left for the local and state police were not immediately returned.

One of the short viral videos begins with Dr. Orleck tumbling to the ground. She gets up. She moves toward an officer with her hand extended — grasping for her phone, she said. She is jerked and knocked down again. It is unclear what took place before the video begins.

Ivy Schweitzer, a recently retired English professor at the college, said the situation took a turn when campus security stepped back, and outside law enforcement moved in to make the arrests.

Dr. Orleck, she said, was recording the police with her phone.

“Annelise would never be physical with a police officer,” Dr. Schweitzer said. “But she would put her phone in their face, and I’m sure they wouldn’t like that.”

Jenna Russell contributed reporting. Sheelagh McNeill contributed research.

— Vimal Patel

Columbia’s president urges the university to ‘rebuild community’ in a video.

Columbia University’s president, Nemat Shafik, released a video message late on Friday, following several weeks of tension over Gaza war protests on campus that have spawned a wave of antiwar activism at universities across the country.

On Tuesday, those tensions erupted after Dr. Shafik asked the New York Police Department to clear a building occupied by pro-Palestinian protesters, as well as encampments on campus. Police officers in riot gear arrested more than 100 demonstrators at Columbia University.

It was the second time in two weeks that Columbia officials had asked the police to enter the Manhattan campus to remove demonstrators. On April 18, another 100 or so Columbia students were arrested. The decision to bring law enforcement onto campus, and to request that officers remain there until May 17, has drawn criticism from many members of the Columbia community, including faculty, alumni and students.

Over the last six months, the university has released numerous letters to its students, faculty and alumni regarding the Oct. 7 Hamas-led attack, the war in Gaza and the related protests and unrest on campus. But the video released on Friday was the first from Dr. Shafik to appear on the school’s Vimeo page in months.

In the video message, Dr. Shafik discussed the need for the community to work together to return civility to the campus after weeks of unrest.

“These past two weeks have been among the most difficult in Columbia’s history,” Dr. Shafik said. “The turmoil and tension, division and disruption have impacted the entire community.”

Speaking directly to the students, Dr. Shafik highlighted the fact that many seniors are now spending their final days in college the way they began in 2020 — online.

“No matter where you stand on any issue, Columbia should be a community that feels welcoming and safe for everyone,” she said.

In the video, Dr. Shafik said that her administration tried “very hard to resolve” the issue of the encampment through dialogue and discussion with the student protesters, but that, ultimately, they could not reach an agreement.

When a group of protesters broke into and occupied Hamilton Hall, Dr. Shafik said, it “crossed a new line,” and put students at risk.

Despite the turmoil of the last few weeks and months, Dr. Shafik told the Columbia community that she has confidence in the future.

“During the listening sessions I held with many students in recent months, I’ve been heartened by your intelligence, thoughtfulness and kindness,” she said.

“Every one of us has a role to play in bringing back the values of truth and civil discourse that polarization has severely damaged,” she added. “Here at Columbia, parallel realities and parallel conversations have walled us off from other perspectives. Working together, I know we can break down these barriers.”

In a departure from her earlier messages to the Columbia community, Dr. Shafik also shared personal anecdotes about her upbringing in the video.

“As many of you know, I was born in the Middle East. I grew up in a Muslim family, with many Christian and Jewish friends,” she said. “I spent two decades working in international organizations with people from every nationality and religion in the world where if you can’t bridge divides and see each other’s point of view, you can’t get anything done.”

Dr. Shafik said she had learned from that experience that “people can disagree and still make progress.”

The issues challenging the campus, she said, namely “the Palestinian-Israeli conflict, antisemitism, and anti-Arab and anti-Muslim bias,” have existed for a long time, and she added that Columbia University cannot solve them single-handedly.

“What we can do is be an exemplar of a better world where people who disagree do so civilly, recognize each other’s humanity, and show empathy and compassion for one another,” she said. “We have a lot to do, but I am committed to working at it, every day and with each of you, to rebuild community on our campus.”

Earlier on Friday, more than 700 Columbia University community members attended an online meeting of the university’s Senate, a policymaking body made up of faculty members, students and others.

During the meeting, many expressed a lack of confidence in university leadership. Eventually, the chat was shut down because of arguing.

Jeanine D’Armiento, chair of the Senate, said in the meeting that the group’s executive committee had recommended the university continue negotiations with students instead of calling in the police on Tuesday. But, she said, “We were not asked for our opinion.”

Sharon Otterman contributed reporting.
