Updated January 19, 2024 by Phil Hubbard

Ph.D. in Chinese Linguistics

The Ph.D. program prepares students to conduct original research and scholarship in Chinese linguistics.

Students should consult the most up-to-date version of the degree plan on the Stanford Bulletin as well as the EALC Graduate Handbook. Each student should meet with their faculty advisor at least once per quarter to discuss the degree requirements and their progress.

Admission to Candidacy

Candidacy is the most important University milestone on the way to the Ph.D. degree. Admission to candidacy rests both on the fulfillment of department requirements and on an assessment by department faculty that the student has the potential to successfully complete the Ph.D.

Following University policy (GAP 4.6.1), students are expected to complete the candidacy requirements by Spring Quarter of the second year of graduate study.

Pre-Candidacy Requirements

  • CHINLANG 103 - Third-Year Modern Chinese, Third Quarter (5 units)
  • CHINLANG 103B - Third-Year Modern Chinese for Bilingual Speakers, Third Quarter (3 units)
  • CHINA 208 - Advanced Classical Chinese: Philosophical Texts (3-5 units)
  • CHINA 209 - Advanced Classical Chinese: Historical Narration (2-5 units)
  • CHINA 210 - Advanced Classical Chinese: Literary Essays (2-5 units)
  • EALC 201 - Proseminar in East Asian Humanities I: Skills and Methodologies (3 units)
  • CHINA 290 - Research in Chinese Linguistics (2-3 units)
  • Four courses numbered above 200 in the field of China studies, at least two of which must be listed with the CHINA subject code, and the other two of which may be in different sub-fields such as anthropology, art history, history, philosophy, political science, religious studies, or another relevant field, as approved by the student’s advisor.

All doctoral students must complete an MA qualifying paper. An MA thesis is accepted instead of a qualifying paper for students initially admitted as EALC MA students. Students seeking an MA en route to the PhD must secure approval from the primary advisor and submit an MA thesis.

A graded MA qualifying paper or thesis must be submitted to the DGS and SSO with an accompanying note from the student’s primary advisor by week five of spring quarter of the second year of study for the annual review and candidacy decision.

During the quarter when students complete the MA qualifying paper or thesis (25-30 pages), they must enroll in EALC 299.

Teaching Requirement

  • DLCL 301 - The Learning and Teaching of Second Languages (3 units)
  • Demonstrate pedagogical proficiency by serving as a teaching assistant for at least three quarters, starting no later than autumn quarter of the third year of graduate study. The department may approve exceptions to the timing of the language teaching requirement.

Post-Candidacy Requirements

Demonstrate proficiency in at least one supporting language (beyond the near-native level required in Chinese and English) to be chosen in consultation with the primary advisor according to the candidate’s specific research goals. For this supporting language (typically Japanese, Korean, or a European language), students must be proficient at a second-year level at the minimum; a higher level of proficiency may be required depending on the advisor’s recommendation. Reading proficiency must be certified through a written examination or an appropriate amount of coursework to be determined on a case-by-case basis. This requirement must be fulfilled by the end of the fourth year of graduate study.

Students in Chinese linguistics must take at least one literature course.

Complete two relevant seminars at the 300 level. EALC 200 may be substituted for one of these two seminars.

Pass three comprehensive written examinations, one of which tests the candidate’s methodological competence in the relevant discipline. The remaining two fields are chosen, with the approval of the student’s advisor, from the following: Chinese literature, Japanese literature, Korean literature, archaeology, anthropology, art history, comparative literature, communication, history, linguistics, philosophy, and religious studies. With the advisor’s approval, a PhD minor in a supporting field may be deemed equivalent to completing one of these three examinations.

Students should submit a dissertation prospectus before advancing to Terminal Graduate Registration (TGR) status. The prospectus should comprehensively describe the dissertation project and include sections on the project rationale, key research questions, contributions to the field, a literature review, a chapter-by-chapter outline, a projected timeline, and a bibliography.

Pass the University Oral Examination (dissertation defense). General regulations governing the oral examination are found in Graduate Academic Policies and Procedures (GAP 4.7.1). The candidate is examined on questions related to the dissertation after acceptable parts have been completed in draft form.

Following University policy (GAP 4.8.1), submit a dissertation demonstrating the ability to undertake original research based on primary and secondary materials in Chinese.

ALPS Lab

Principal Investigator

Judith Degen

Judith did her undergrad and MSc work in Cognitive Science at the University of Osnabrück and her PhD work in Brain & Cognitive Sciences and Linguistics at the University of Rochester. She is interested in how people construct meaning in communication. She spends her time thinking about how to characterize the interaction of linguistic information, context, and world knowledge in language production and comprehension.

Graduate students

Jiayi Lu

Jiayi did his undergraduate study in Linguistics, Neuroscience, and Integrated Sciences at Northwestern University before coming to Stanford. Jiayi is primarily interested in psycholinguistics and syntax. Specifically, he is interested in exploring the various factors that affect sentence acceptability judgments, and how experimental methods can inform syntactic theories.

Brandon Waldon

Brandon did his BA in Linguistics at the University of Chicago before spending a year as a visiting student researcher at Leibniz-ZAS Berlin. He is interested in experimental approaches to semantics and pragmatics, corpus linguistics, and philosophy of language.

Bran Papineau

Bran is a PhD student of Linguistics. Their interests include language and gender, language and music, and socio- and psycholinguistics more broadly. They also occasionally enjoy straying into morphology, and their current QP deals with English gender morphology and social ideologies. They also enjoy the language-learning side of linguistics, and have studied Spanish, Faroese, Russian, Mandarin, and Greek.

Anthony Velasquez

Tony is a PhD student in Linguistics. He is interested in sociolinguistics, especially third-wave variationist work, and the intersection between social and semantic/pragmatic meaning, as well as exploring how an understanding of language as socially and cognitively embedded can provide paths forward in modelling language behavior. His current work focuses on Bayesian modelling of the impacts of social information on semantic interpretation.

Jesús Adolfo Hermosillo

Adolfo is a PhD student in the Department of Linguistics. He is interested in computational linguistics, semantics, sociolinguistics and multilingualism. He uses computational and experimental methods to answer questions about meaning and linguistic variation.

Penny Pan

Penny received her B.A. in Cognitive Science from Vassar College and is currently a master’s student in the Symbolic Systems program at Stanford. She is interested in using computational models as well as behavioral experiments to study how people process language. Her work focuses on psycholinguistics, pragmatics, and bilingualism.

Ahmad Jabbar

Ahmad is a Linguistics PhD student. He works on formal pragmatics and semantics, with interests in computation theory, NLP, and psycholinguistics. His current projects focus on compositionality and discourse structure.

Undergraduate Students

Madigan Brodsky

Madigan is an undergraduate linguistics student at Stanford. She is broadly interested in language as a product of more general cognitive processes, as well as linguistic applications in the legal field. She is currently working on projects involving the resolution of linguistic vagueness in legal contracts, investigations into the linguistic effects of orthographic constructions in English, and building further research regarding uncertainty and causation.

Neil Rathi

Neil is an undergraduate student studying Linguistics and Math. He is interested in morphology and pragmatics. Specifically, he is interested in probabilistic models of language processing and comprehension, as well as their broader cross-linguistic and typological implications.

Lian Wang

Lian is an undergraduate linguistics major. She is broadly interested in situating language in the human cognitive system. She works on syntax and computational models of language.

Sebastian Schuster

Ciyang Qing

Masoud Jasbi

Daisy Leigh

Morgan Moyer

Michael Hahn

Elisa Kreiss

Stefan Pophristic

Eva Portelance

Dhara Yu

Leyla Kursat

Graduate Admissions

The department welcomes applications from those seeking a graduate program that allows students to craft individualized programs of study within broad guidelines and provides them with considerable flexibility in developing their research directions.

Through course work, dedicated faculty advising, and collaborative projects, our students learn how to approach significant theoretical questions using diverse empirical methodologies and detailed linguistic description. They are encouraged to undertake original research that spans subfields of linguistics or makes contact with neighboring disciplines.

The department receives approximately 150 applications for the Ph.D. program each year, from which, on average, 7 students are admitted. The department does not admit external applicants to the M.A. program.

The Stanford Department of Linguistics considers graduate admissions applications once a year. The online application opens in late September, and the deadline to apply to the Ph.D. program is November 30, 2023, for study beginning in the 2024-25 academic year.

Start Your Application

Bios for Christopher Manning

Stanford webpage 1999.

Chris Manning works on systems and formalisms that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, text understanding and mining, constraint-based theories of grammar (HPSG and LFG), computational lexicography (involving work in XML, XSL, and information visualization), information extraction, and syntactic typology.

Stanford SoE Faculty and Resource Guide 1999

NAME: Christopher D. Manning
TITLE: Assistant Professor of Computer Science and Linguistics
AREA OF INTEREST: Human Language Technology / Natural Language Processing
BRIEF DESCRIPTION: Manning works on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, and information extraction. Ph.D. Stanford 1994.

Computer Forum, 1999

Christopher Manning, Assistant Professor of Computer Science and Linguistics works on systems and formalisms that can intelligently process and produce human languages. His research interests range from applying statistical natural language processing techniques to problems of information retrieval, information extraction, text data mining, and computational lexicography through building probabilistic models of language phenomena to constraint-based theories of grammar (HPSG and LFG), and their use in explaining grammatical structures and their variation across languages.

For AAAI-2000 tutorial on Statistical NLP

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include statistical models of language, information extraction, and computational lexicography. He is co-author of Foundations of Statistical Natural Language Processing (MIT Press, 1999).

CSLI Industrial Affiliate Program 2000

Christopher Manning is Assistant Professor of Computer Science and Linguistics. He works primarily on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, information extraction, and topics in linguistic typology, including argument structure, serial verbs, causatives, and ergativity.

For AI broad area colloq 2000

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, and syntactic typology. He received his Ph.D. in linguistics from Stanford University in 1994. From 1994-1996, he was on the faculty of the Computational Linguistics Program at Carnegie Mellon University, and from 1996-1999 he was "back home" at the University of Sydney, before returning to Stanford at the start of this academic year. His most recent book is Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

For AI intro, 2000 CS admits weekend

Chris Manning works on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, information extraction and text mining, and topics in syntactic theory.

For 9th Logic, Language, and Computation conference, 2000

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, information extraction, and topics in linguistic typology.

For Australian Linguistic Institute brochure

Christopher Manning received his BA (Hons) from the Australian National University and then a Ph.D. from Stanford University in 1994. Since then he has held faculty positions in the Computational Linguistics Program (Philosophy Dept) at Carnegie Mellon University, the Linguistics Department at the University of Sydney, and since September 1999 he is Assistant Professor of Computer Science and Linguistics at Stanford University. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, semi-structured data and XML, information extraction, and topics in syntactic typology including ergativity and argument structure. For the last two years, he has been working with Jane Simpson and other colleagues on projects in computational lexicography and dictionary usability, focussing particularly on Australian languages. As well as various articles and book chapters, he is author or co-author of three books, Ergativity: Argument Structure and Grammatical Relations (CSLI Publications, 1996), Complex Predicates and Information Spreading in LFG (CSLI Publications, 1999, with Avery Andrews), and Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

For Biomedical Informatics brochure, 2000

  • Miriam Corris, Christopher Manning, Susan Poetsch, and Jane Simpson. 2000. Bilingual Dictionaries for Australian Languages: User studies on the place of paper and electronic dictionaries. Proceedings of Euralex 2000, Stuttgart.
  • Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
  • Avery D. Andrews and Christopher D. Manning. 1999. Complex Predicates and Information Spreading in LFG. Stanford, CA: CSLI Publications.
  • Christopher D. Manning and Bob Carpenter. 1997. Probabilistic Parsing Using Left Corner Language Models. Proceedings of the Fifth International Workshop on Parsing Technologies (IWPT-97), MIT, pp. 147-158.

For CMU LTI talk, 2000

Bio for company, 2000.

Christopher Manning is the only faculty member at Stanford University with appointments in both the Computer Science and Linguistics departments. He works on systems and formalisms that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, text understanding and text mining, constraint-based theories of grammar (HPSG and LFG), computational lexicography (involving work in XML, XSL, and information visualization), information extraction, and syntactic typology. Chris received his BA (Hons) from the Australian National University, in mathematics, computer science and linguistics; and his PhD from Stanford in Linguistics. Prior to joining the Stanford faculty, he held faculty positions at Carnegie Mellon University and the University of Sydney. He is the author or coauthor of three books including Foundations of Statistical Natural Language Processing.

Bio for DB Seminar, Sep 2000

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar, parsing systems, computational lexicography, information extraction and text mining, and topics in syntactic theory and typology.

Five line bio for talk at company, Oct 2000

Bio for CSLI IAP meeting, Nov 2000.

Christopher Manning, Assistant Professor of Computer Science and Linguistics at Stanford University, works primarily on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography (working with XML, XSL, and information visualization), information extraction and text mining, and topics in syntax and cross-linguistic typology. His most recent book is Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schütze).

Bio for NIPS 2001 tutorial, December 2001

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and served on the faculty of the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. His research interests include probabilistic models of language, natural language parsing, constraint-based linguistic theories, syntactic typology, information extraction and text mining, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for MIT Press book, December 2001

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and served on the faculty of the Computational Linguistics Program at Carnegie Mellon University and the Linguistics Department at the University of Sydney before returning to Stanford. His research interests include probabilistic models of language, statistical natural language processing, constraint-based linguistic theories, syntactic typology, information extraction, text mining, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schütze).

Bio for Berkeley/JHU talks, September 2002

Bio for ACL 2003 tutorial, March 2003.

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Previously, he has held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language processing, syntax, information extraction, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for Computer Forum 2003, April 2003

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Previously, he has held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language processing, syntax, computational lexicography, information extraction and text mining. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for Pattern Recognition Journal 2004

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Prior to this, he received his BA (Hons) from the Australian National University, his PhD from Stanford in 1994, and held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language processing, syntax, parsing, computational lexicography, information extraction and text mining. He is the author of three books, including the well-known text Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for KDD grant 2004

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and held faculty positions in the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and in the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. He is a Terman Fellow and recipient of an IBM Faculty Award. His recent work has concentrated on statistical parsing, grammar induction, and probabilistic approaches to problems such as word sense disambiguation, part-of-speech tagging, and named entity recognition, with an emphasis on complementing leading machine learning methods with use of rich linguistic features. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and (with Dan Klein) received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing.

MSFT talk 2005

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and held faculty positions in the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and in the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. His recent work has concentrated on statistical parsing, grammar induction, and probabilistic approaches to problems such as part-of-speech tagging, named entity recognition, and learning semantic relations, with an emphasis on complementing leading machine learning methods with use of rich linguistic features. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and with Dan Klein received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing.

Bio for MIT Talk, October 2005

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language parsing, syntax, information extraction and text mining. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

LSA Institute 2007

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. His recent work has concentrated on probabilistic approaches to NLP problems, particularly statistical parsing, robust textual inference, and grammar induction. His work emphasizes considering different languages and complementing leading machine learning methods with use of rich linguistic features. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has an in press book on Information Retrieval and Web Search (with Raghavan and Schuetze, CUP 2007). Together with Dan Klein, he received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For TILR 2007 (Toward the Interoperability of Language Resources)

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. His recent work has concentrated on probabilistic approaches to NLP problems, particularly statistical parsing, robust textual inference, and grammar induction, but has also involved ongoing work on computational lexicography and dictionary usability, focussing particularly on Australian languages. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has an in press book on Information Retrieval and Web Search (with Raghavan and Schuetze, CUP 2008). Together with Dan Klein, he received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Computer Forum 2007

Professor Manning works on systems and formalisms that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, robust textual understanding and inference, named entity recognition, information extraction, and text mining, statistical parsing of various languages, constraint-based theories of grammar and probabilistic extensions of them, and computational lexicography (involving work in XML, XSL, and information visualization).

For letter 2007

I am an associate professor of computer science and linguistics at Stanford University. Previously, I graduated with a PhD from Stanford Linguistics in 1994, and then held faculty positions at Carnegie Mellon University and the University of Sydney. My research interests include probabilistic natural language parsing, statistical parsing, grammar induction and probabilistic approaches to information extraction, text mining, and linguistic questions. In general my work emphasizes complementing leading machine learning methods with use of rich linguistic features. I am the author of three published books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schütze), and also of the in-press textbook Introduction to Information Retrieval (Cambridge, 2008, with Prabhakar Raghavan and Hinrich Schütze). Together with Dan Klein, I received the best paper award at the 2003 meeting of the Association for Computational Linguistics.

For NLP retreat 2008

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. His recent work has concentrated on probabilistic approaches to NLP problems, particularly statistical parsing, robust textual inference, and grammar induction. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has an in press book on Information Retrieval and Web Search (with Raghavan and Schuetze, CUP 2008). He is Australian; his Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Google Faculty Summit 2008

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. His work concentrates on probabilistic approaches to NLP, particularly statistical parsing, robust textual inference, and grammar induction. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and a new book this summer, Introduction to Information Retrieval (with Raghavan and Schuetze). Australian. Stanford Ph.D. 1994. Previous faculty positions at Carnegie Mellon University and the University of Sydney.

Textual Inference workshop 2009

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has recently coauthored Introduction to Information Retrieval (with Raghavan and Schuetze). His work concentrates on probabilistic approaches to NLP, including statistical parsing, grammar induction, named entity recognition, and machine translation. For the last 5 years he has been particularly interested in pursuing approaches to text understanding and computational semantics. His group has participated in all of the RTE Challenges, and a recent paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award.

DARPA Machine Reading 2009

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University (PhD, Stanford, 1994). Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schuetze 1999) and information retrieval (Manning et al. 2008). His recent work concentrates on statistical parsing, text understanding and computational semantics, machine translation, and large-scale joint inference for NLP. His recent paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award.

Dan Jurafsky is an Associate Professor of Linguistics and Computer Science, by courtesy, at Stanford University (PhD, Berkeley, 1992). His recent work concentrates on semantic role labeling, prosody in speech, and lexical relation acquisition. Jurafsky recently co-authored a second edition of the standard textbook Speech and Language Processing (2008). He received a MacArthur Fellowship in 2003.

Andrew Ng is an Assistant Professor of Computer Science at Stanford University (PhD, Berkeley, 2003). His research interests include machine learning theory, lexical relation acquisition, reinforcement learning for robot control, computer vision, and broad-competence AI. His group has won best paper/best student paper awards at ACL, CEAS, 3DRR and ICML. He is also a recipient of the Alfred P. Sloan Fellowship.

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work concentrates on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, grammar induction, and large-scale joint inference for NLP. He has won several best paper awards; most recently his paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Specialties: Natural Language Processing, Computational Linguistics

For NSF Panel 2009

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, grammar induction, and large-scale joint inference for NLP. He also maintains interests in probabilistic approaches to linguistics and work on computational lexicography and dictionary usability, focusing particularly on Australian languages. His recent paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Google 2009

For Learning Workshop 2011.

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, part-of-speech tagging, and named entity recognition; robust textual inference; machine translation; grammar induction; and large-scale joint inference for NLP. Recently he has been trying to swap back in memories of Rumelhart and McClelland (1986).

For Computer Forum 2012

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a fellow of AAAI and the Association for Computational Linguistics. Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.

For ICLR 2013

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a AAAI Fellow and an ACL Fellow, and has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on machine learning approaches to various NLP problems, including statistical parsing, named entity recognition, robust textual inference, machine translation, recursive deep learning models for NLP, and large-scale joint inference for NLP.

Web page around 2012

Chris Manning works on systems and formalisms that can intelligently process and produce human languages. His research concentrates on probabilistic models of language and statistical natural language processing; including text understanding, text mining, machine translation, information extraction, named entity recognition, part-of-speech tagging, probabilistic parsing and semantic role labeling, syntactic typology, computational lexicography, and other topics in computational linguistics and machine learning.

For Poetics journal 2013

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. He is a AAAI Fellow and an ACL Fellow, and has coauthored leading textbooks on statistical approaches to natural language processing and information retrieval. His recent research concentrates on machine learning approaches to various computational linguistic problems, including parsing, semantic similarity, and textual inference.

For SWANK 2014

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. He is a AAAI Fellow and an ACL Fellow, and has coauthored leading textbooks on statistical approaches to natural language processing and information retrieval. His recent research concentrates on machine learning approaches to various computational linguistic problems and computational semantics, including parsing, textual inference, machine translation, and hierarchical deep learning for NLP.

For EngX 2014

Christopher Manning is a professor of computer science and linguistics at Stanford University. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and hierarchical deep learning for NLP. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

For Tencent 2014

Christopher Manning is a professor of computer science and linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and recursive deep learning for NLP. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

For Rework 2016

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. Manning is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

Christopher Manning is a professor of computer science and linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and deep learning for NLP. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

Neural Machine Translation tutorial, 2016

Christopher Manning, Stanford University, [email protected], @chrmanning

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. Manning is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

LinkedIN 2017

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL. His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and a Past President of ACL. He has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

100 words for Harker Programming Invitational

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing and has explored tree recursive neural networks, the GloVe word vectors, neural machine translation, parsing, and multilingual language processing, including developing Stanford Dependencies and Universal Dependencies. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and a Past President of ACL. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

Tsinghua 2017

Christopher Manning is the Thomas M. Siebel Professor in Machine Learning at Stanford University, in the Departments of Computer Science and Linguistics. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. His computational linguistics work also covers probabilistic models of language, natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL. Research of his has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

Website 2019

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

90 words for Alexa Prize

Christopher Manning is a professor of computer science and linguistics at Stanford University and Director of the Stanford AI Lab. He is a leader in applying deep neural networks to Natural Language Processing, including work on tree recursive models, sentiment analysis, neural machine translation and parsing, and the GloVe word vectors. He founded the Stanford NLP group (@stanfordnlp), developed Stanford Dependencies and Universal Dependencies, and manages development of the Stanford CoreNLP software. Manning is an ACM, AAAI, and ACL Fellow, and a Past President of ACL.

Bengio Turing Prize letter

Christopher Manning is a Professor at Stanford University, who has worked on Natural Language Processing since 1993. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). He is the most-cited researcher within the field of NLP, and he has won best paper awards at ACL, Coling, EMNLP, and CHI. He is a leader in applying deep learning to NLP, with well-known work on sentiment analysis, dependency parsing, the GloVe model of word vectors, neural machine translation, question answering, and summarization.

Tenure letter 2019

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). He has worked on Natural Language Processing since 1993 and is the most-cited researcher within the NLP field, with best paper awards at ACL, Coling, EMNLP, and CHI. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is a leader in applying deep learning to NLP, with well-known work on sentiment analysis, dependency parsing, the GloVe model of word vectors, neural machine translation, question answering, and summarization.

Tenure letter 2020, 100 words

Christopher Manning is a professor of computer science and linguistics at Stanford University, Director of the Stanford Artificial Intelligence Lab, and an Associate Director of the Stanford Institute for Human-Centered AI. He is a leader in applying deep neural networks to Natural Language Processing, including work on tree-recursive models, neural machine translation, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.

With teaching 2020

Christopher Manning is a professor of computer science and linguistics at Stanford University, Director of the Stanford Artificial Intelligence Lab (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is a leader in applying deep neural networks to Natural Language Processing (NLP), including work on tree-recursive models, neural machine translation, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), teaches and has written textbooks for NLP (CS 224N) and information retrieval (CS 276), co-developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.

Longer 2021

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). His research goal is computers that can intelligently process, understand, and generate human language material. Manning has worked on Natural Language Processing (NLP) since 1992 and is the most-cited researcher within NLP, with best paper awards at ACL, Coling, EMNLP, and CHI, and with well-known work on applying deep neural networks to NLP, including on tree-recursive models, neural machine translation, parsing, sentiment analysis, natural language inference, question answering, summarization, and the GloVe word vectors. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Christopher Manning is a professor of linguistics and computer science at Stanford University, Director of the Stanford Artificial Intelligence Lab (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is a leader in applying deep neural networks to natural language processing (NLP), including work on neural machine translation, tree-recursive models, natural language inference, summarization, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), teaches and has co-written textbooks for NLP (CS 224N) and information retrieval (CS 276), co-developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP and Stanza software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.

Longer 2022

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), an Associate Director of the Stanford Institute for Human-Centered AI (HAI), and an Investment Partner at AIX Ventures. He is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). His research goal is computers that can intelligently process, understand, and generate human language. Manning is the most-cited researcher within NLP, with best paper awards at ACL, Coling, EMNLP, and CHI and well-known work on applying deep neural networks to NLP, including neural machine translation, parsing, sentiment analysis, natural language inference, question answering, and summarization. He founded the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP and Stanza software. Manning has coauthored leading textbooks on statistical Natural Language Processing (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Shorter verbal 2022

Christopher Manning is a Professor in the Departments of Computer Science and Linguistics at Stanford University, Director of SAIL, the Stanford Artificial Intelligence Laboratory, and an Associate Director at HAI, the Stanford Institute for Human-Centered AI. His research is on computers that can intelligently process, understand, and generate human language. Chris is the most-cited researcher within NLP, with best paper awards at the ACL, Coling, EMNLP, and CHI conferences and very well-known work on applying deep neural networks to NLP. He founded the Stanford NLP group, has written widely used NLP textbooks, and teaches the popular NLP class CS224N, which is also available online.

Homepage bio mid 2023

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning is a leader in applying Deep Learning to Natural Language Processing (NLP), with well-known research on the GloVe model of word vectors, attention, machine translation, question answering, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, dependency parsing, sentiment analysis, and summarization. He also focuses on computational linguistic approaches to parsing, natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP and Stanza software.

Shorter 2023

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research is on computers that can intelligently process, understand, and generate human language. Chris is the most-cited researcher within NLP, with best paper awards at the ACL, Coling, EMNLP, and CHI conferences and an ACL Test of Time award for his pioneering work on applying neural network or deep learning approaches to human language understanding. He founded the Stanford NLP group, has written widely used NLP textbooks, and teaches the popular NLP class CS224N, which is also available online.

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning is best-known as a leader in applying Deep Learning to Natural Language Processing (NLP), with well-known early research on the GloVe model of word vectors, attention, self-supervised model pre-training, tree-recursive neural networks, and machine reasoning. Earlier on he worked on probabilistic NLP models for parsing, sequence tagging, and grammar induction and he also focuses on computational linguistic approaches to natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning manages development of the open-source Stanford CoreNLP and Stanza software, and his software and algorithms are used in the systems of many companies for tasks such as sentiment analysis, machine translation, dependency parsing, summarization, and question answering. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards and an ACL Test of Time award. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford, where he founded the Stanford NLP group (@stanfordnlp).

Christopher Manning is the inaugural Thomas M. Siebel Professor of Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning is a leader in applying Deep Learning to Natural Language Processing (NLP), with pioneering research on the GloVe model of word vectors, attention, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, and direct preference optimization. Earlier on, he worked on probabilistic NLP models for parsing, sequence tagging, and grammar induction and he also focuses on computational linguistic approaches to natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning manages development of the open-source Stanford CoreNLP and Stanza software, and his software and algorithms are used in the systems of many companies for tasks such as sentiment analysis, machine translation, dependency parsing, summarization, and question answering. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), and teaches the popular NLP class CS224N, which is available online. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won the 2024 IEEE John von Neumann Medal; ACL, Coling, EMNLP, and CHI Best Paper Awards; and a 2023 ACL Test of Time award. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford, where he founded the Stanford NLP group (@stanfordnlp).
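
The Stanza toolkit and the Universal Dependencies scheme mentioned in these bios can be tried directly. The following minimal Python sketch is an editorial illustration only (the example sentence and the pipeline options are chosen here, not taken from the bios); it prints a Universal Dependencies parse using the open-source Stanza library.

```python
# Minimal sketch: a Universal Dependencies parse with the open-source Stanza library.
# Assumes `pip install stanza` and a one-time model download; the sentence is illustrative only.
import stanza

stanza.download("en")  # fetch the English models (run once)
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

doc = nlp("Stanford linguists study how people use language.")
for sentence in doc.sentences:
    for word in sentence.words:
        # word.head is 1-indexed; 0 means the word attaches to the artificial ROOT node
        head = sentence.words[word.head - 1].text if word.head > 0 else "ROOT"
        print(f"{word.text}\t{word.upos}\t{word.deprel}\t-> {head}")
```

Each output line shows a token, its universal part-of-speech tag, its dependency relation, and the head word it attaches to.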


Linguistics


School of Humanities and Sciences

Explore the principal areas of linguistics (phonetics, phonology, morphology, syntax, semantics, pragmatics, historical linguistics, and sociolinguistics) and gain the skills to do more advanced work in these subfields.

What You'll Study

The mission of the undergraduate program in Linguistics is to provide students with basic knowledge in the principal areas of linguistics (phonetics, phonology, morphology, syntax, semantics, pragmatics, historical linguistics, and sociolinguistics) and the skills to do more advanced work in these subfields. Courses in the major also involve interdisciplinary work with connections to other departments including computer science, psychology, cognitive science, communication, anthropology, and foreign language. The program provides students with excellent preparation for further study in graduate or professional schools as well as careers in business, social services, government agencies, and teaching.

Degrees Offered

  • BA
  • Coterm

More Information

Learn more about Linguistics in the Stanford Bulletin

  • Stanford Linguistics
  • School of Humanities & Sciences
  • Explore IntroSems related to this major

Exploratory Courses

LINGUIST 1

Introduction to Linguistics

LINGUIST 150

Language and Society

LINGUIST 156

Language, Gender, & Sexuality (FEMGEN 156X)

LINGUIST 30N

Linguistic Meaning and the Law

LINGUIST 47N

Languages, Dialects, Speakers

Linguistic Anthropology

Linguistic and semiotic anthropology have been a vital part of the anthropological intellectual tradition, and one in which the Anthropology Department at Stanford has had historic strength. Linguistic anthropology examines language in social and cultural practices and contexts. Ethnographies of language operate across multiple scales, from local face to face interaction to the circulation of discourse throughout regional and global networks. Anthropologists at Stanford are especially interested in the boundaries and connections between ostensibly linguistic and non-linguistic phenomena, researching new media platforms, digital and media technologies, formations of mass politics and power structures, language learning and identity in education, ethnoracial and linguistic borders, and religious networks and practices. Closely aligned with the critical theory tradition, linguistic anthropology at Stanford offers theoretical and methodological tools to investigate the constitutive and performative role of language in the formation of different identities and social relationships, as well as the production and reproduction of ideologies and power relations.

Miyako Inoue

Kabir Tambar

LING-MA - Linguistics (MA)

Program Overview

Master of Arts in Linguistics

See Graduate Degrees for the university’s basic requirements for the MA degree.

The Department of Linguistics occasionally admits graduate students already enrolled at Stanford for the MA degree.


Center on Democracy, Development and the Rule of Law is housed in the Freeman Spogli Institute for International Studies

Lyubov Sobol

  • CDDRL Visiting Scholar, 2022

Lyubov Sobol is a Russian political and public figure. She consistently advocates for the democratization of Russia and opposes Putin's policies. She produces Alexei Navalny's YouTube channel "Navalny Live" (more than 2.7 million subscribers and more than 90 million views per month, including more than 20 million unique viewers). She took part in the election campaigns for the Moscow City Duma in 2019 and the State Duma of Russia in 2021, but was unlawfully barred from running because of her political position: opposition to the actions of the current government. In May 2018 she became a member of the Central Council of Alexei Navalny's political party, Russia of the Future. Sobol was a lawyer for the Anti-Corruption Foundation until its closure in 2021.

In The News

  • Pushing Back on Putin: The Fight for Democracy Within Russia
  • CDDRL Welcomes Anti-Corruption Activist Lyubov Sobol as Visiting Scholar

The Sesquipedalian

The newsletter of the Stanford Department of Linguistics is published every Friday during the academic year.

If you'd like to submit news of your own or on behalf of someone else, use this link!

Previous News Listings

The Archives

For issues from before mid-April 2015, go to the archives.

View Sesquipedalian Archive

Russian Linguistic Bulletin

Journal Information

Russian Linguistic Bulletin is a peer-reviewed academic journal dedicated to problems of linguistics, language studies, and literature. The journal gives graduate students, university professors, holders of academic degrees, and other interested parties an opportunity to publish their research.

ISSN (Print): 2313-0288.

ISSN (Online): 2411-2968.

Russian Linguistic Bulletin is registered with the Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roskomnadzor): Mass Media Registration Certificate PI № FS 77 - 58339 (link), terminated 07.04.2021, and Mass Media Registration Certificate EL № FS 77 - 73011 (link).

The journal is published in English and Russian every month.

Terms and conditions are available at the link.

Publication Ethics and Publication Malpractice Statement.

Impact Factor: ResearchGate 2018/2019 = 0.11.

Publisher and founder: Cifra LLC.

Editor-in-chief: Smirnova N.L. (RSCI profile), PhD, Institute for the Development of Education of the Sverdlovsk Region, smirnova[at]rulb.org, information page.

Managing editor: Mavrin V.I. (RSCI profile), PhD, Ural Federal University, editors[at]rulb.org.

Mailing address: Ekaterinburg, Akademicheskaya 11A – 1, 620066, Russia.

Editorial office phone number: +7-900-202-3909.

Editorial board:

Rastyagaev AV., Doctor of Philology, Moscow City University (Moscow, Russia)  RSCI .

Slojenikina YV., Doctor of Philology, Moscow City University (Moscow, Russia)  RSCI .

Shtreker N.Y., Doctor of Pedagogy, PhD in Philology, Kaluga State Pedagogical University (Kaluga, Russia)  RSCI .

Levitsky A.E., Doctor of Philology, Moscow State University (Moscow, Russia)  RSCI .

Alikaev R.S., Doctor of Philology, Kabardino-Balkarian State University (Nalchik, Russia)  RSCI .

Erofeeva E.V., Doctor of Philology, Perm State University (Perm, Russia)  RSCI  / Scopus .

Ivanov A.V., Doctor of Philology, State Linguistic University of Nizhny Novgorod (Nizhny Novgorod, Russia)  RSCI  / ResearcherID .

Magirovskaya O.V., Doctor of Philology, Siberian Federal University (Krasnoyarsk, Russia)  RSCI  /  ResearcherID .

Kyuchukov H., Visiting Professor at Magdeburg-Stendal University of Applied Sciences (Stendal, Germany)  Scopus /  ResearcherID .

Jurkovič V., PhD, Associate Professor, University of Ljubljana (Ljubljana, Slovenia). Scopus / ORCID.

Ma Daoshan, PhD, Professor, Tianjin Polytechnic University (Tianjin, China).

Iseni B.A., Professor, PhD in Applied Linguistics and PhD in Comparative Literature, Faculty of Philology, State University of Tetovo (Tetovo, North Macedonia).

Gutovskaya G.S., PhD in Philology, Associate Professor, Belarusian State University (Minsk, Republic of Belarus)  ORCID .

Moradi H., PhD, Lecturer, Shahid Chamran University of Ahvaz (Ahvaz, Iran).

Medvedeva O., PhD in Humanities, Vilnius University (Vilnius, Republic of Lithuania).

Korostenskienė J., PhD in Linguistics, Professor, Vilnius University (Vilnius, Republic of Lithuania)  ORCID .

Alimi D., PhD in Linguistics, Assoc.Prof., State University of Tetovo (Tetovo, North Macedonia).

Pllana Gani, Doctor, PhD in Linguistics, University of Prishtina "Hasan Prishtina"  (Prishtina, Republic of Kosovo).

Kulesh H.I., Professor, Doctor of Pedagogy, PhD in Philology, Belarusian State University (Minsk, Republic of Belarus)  ORCID .

Ashrapov B.P., Candidate of Philological Sciences, Khujand National University named after Academician B. Gafurova (Khujand, Tajikistan); RSCI /  WoS .

Samin Khan

Samin Khan is an AI product-builder for higher education with a former life as an AI researcher and founder. With a background in computer science and cognitive science from the University of Toronto, Samin applied computational linguistics to predict mental health trends; this research became the foundation of his start-up, Autumn, built around a privacy-first AI model.

