When is the evidence too old?

A few weeks ago, when submitting an abstract to a nursing conference, I was suddenly faced with a dilemma about age. Not my own age, but the age of evidence I was using to support my work. One key element of the submission criteria was to provide five research citations to support the abstract, and all citations were to be less than ten years old.  This requirement left me stumped for a while. The research I wanted to cite was more than ten years old, yet it was excellent research within a very small body of work on the topic. Suddenly I struggled to meet the criteria and almost gave up on the submission, thinking my abstract would not tick all of the boxes if I used research now deemed to be ‘out of date’. I suddenly thought about all of the work I had published more than ten years ago – all that hard work past its use-by date.

Way back in the mid-1990s, a colleague and I started to have conversations with Australian nurses about the importance of evidence-based practice (EBP) for the future of Australian nursing. The movement away from the comfort of ‘ritual and routine’ to the uncertainty of EBP was challenging. At the time we described EBP according to the principle that “all interventions should be based on the best currently available scientific evidence”.1 We had embraced the ideas of authors such as Ian Chalmers2 and were keen to educate nurses and nursing students about “practices that had been clearly shown to work and question practices for which no evidence exists and discard those which have been shown to do harm”.1 It was very much about the importance of using the most ‘robust’ and ‘reliable’ evidence that we had available to guide us in clinical decision making, taking into account individual patients at the centre of care. It was also about teaching nurses and nursing students how to ask the right questions, where to look for answers and how to recognize when you have found the right answer to support individualized patient care.

Definitions of evidence-based practice are quite varied and I have heard nurses talk about using “current best evidence” while others use the “most current evidence”. These are quite different approaches, with the latter statement suggesting that more recent is best. This is sometimes reinforced in nursing education, where students are graded according to the use of recent research, with limitations placed on the age of resources used to support their work. However, I wonder if we are losing something in this translation about the meaning of ‘best evidence’ to support care. When does the published evidence get too old and where do we draw the line and stop reading research from our past?

Personally I have always expected my students to use up-to-date research when supporting their recommendations for care. However, I have also encouraged them to look back to see where the new research has come from and to acknowledge the foundation it has been built on. I am always keen to hear about the latest developments in healthcare and work to support the readers of EBN who need and want to know about what is new and important in the health care literature. Keeping up to date with new evidence is critically important for change. But I wonder how we strike a balance between absorbing recent research and taking into account robust research that preceded its publication by more than a decade.

So, let’s think about these ideas for a minute. If we put our blinkers on and ignore important research from the recently ‘outdated’ literature from the 1990s (when I first became interested in doing research), we could miss some important foundational work that still influences practice today. The two references I have used below, both from the 1990s, would not be included in the discussion at all. If we only consider literature that is recent, and value that more highly than if it is robust, then we will be missing important evidence to inform practice. Researchers could start asking the same research questions over and over (I have seen some of this already in nursing literature) and even feel pressured to repeat previous studies all over again to check if the findings still hold true in the contemporary world. Perhaps that is something to watch for in the future.

It is important to keep up to date with current research findings, new innovations in care, recent trends in patient problems, trends in patient outcomes and changes in the social, political and system context of the care we provide. But it is also important to look back as we move forward, thinking about the strength of the evidence as well as its age.

Allison Shorten RN RM PhD

Yale University School of Nursing

References:

  • Shorten A. & Wallace MC. ‘Evidence-based practice – the future is clear’. Australian Nurses Journal, 1996, Vol. 4, No. 6, pp. 22-24.
  • Chalmers I. ‘The Cochrane Collaboration: preparing, maintaining, and disseminating systematic reviews of the effects of health care’. Annals of the New York Academy of Sciences, 1993, Vol. 703, pp. 156-165.

All BMJ blog posts are posted under a CC-BY-NC licence

The Scholarly Kitchen

What’s Hot and Cooking In Scholarly Publishing

Why are Authors Citing Older Papers?



With so much new literature published each year, why are authors increasingly citing older papers?

Late last year, computer scientists at Google Scholar published a report describing how authors were citing older papers. The researchers posed several explanations for the trend that focused on the digitization of publishing and the marvelous improvements to search and relevance ranking.

However, as I wrote in my critique of their paper, the trend to cite older papers began decades before Google Scholar, Google, or even the Internet was invented. When you are in the search business, everything good in this world must be the result of search.

In order to validate their results, the helpful folks at Thomson Reuters Web of Science sent me a dataset that included the cited half-life for 13,455 unique journal names reported in their Journal Citation Reports (the report that discloses journal Impact Factors). Rather than relying on the individual citation as the unit of observation (the approach used by Google Scholar), we based our analysis on the cited half-life of journals. This approach has the obvious advantage of scale, allowing us to approach the problem using thousands of journals rather than tens of millions of citations.

In order to approximate a citation-based analysis, each journal was weighted by the number of papers it published, so that small quarterly journals don’t have the same weight as mega-journals like PLOS ONE. Each journal was also classified into one or more subject categories and measured each year over the 17-year observation period. Our variable of interest is the cited half-life, which is the median age of articles cited in a given journal for a given year. By definition, half of the articles cited in a journal will be older than the cited half-life; the other half will be younger. The concept of half-life can also be applied to article downloads.
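To make the definition concrete, here is a minimal sketch of how a cited half-life could be computed (the reference years are made up, and `cited_half_life` is an illustrative helper, not part of any JCR tooling):

```python
from statistics import median

def cited_half_life(citing_year, cited_years):
    """Median age of the articles cited in a given year.

    By definition, half of the cited articles are older than this
    value and half are younger."""
    ages = [citing_year - y for y in cited_years]
    return median(ages)

# Hypothetical reference list drawn from a journal volume published in 2013
refs = [2012, 2011, 2010, 2008, 2007, 2005, 2003, 1998, 1995, 1990]
print(cited_half_life(2013, refs))
```

The actual JCR metric aggregates this over every citation a journal receives in a year, but the median-age idea is the same.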

For the entire dataset of journals, the mean weighted cited half-life was 6.5 years, which grew at a rate of 0.13 years per annum. For those journals that had been indexed continuously in the dataset over the 17 years, the mean weighted cited half-life was 7.1 years, which grew at the same rate. For the newer journals, the cited half-life was just 5.1 years, but grew at a rate of 0.19 years per annum.

Focusing on the journals for which we have a continuous series of cited half-life observations, 91% (209 of 229) of subject categories experienced increasing half-lives. Some of these categories grew significantly more than average. For example, Developmental Biology journals grew at 0.25 years per annum, Genetics & Heredity journals grew at 0.20 years per annum and Cell Biology journals grew at 0.17 years per annum.

Conversely, the cited half-life of 20 (9%) of journal categories decreased over the observation period. With few exceptions, these fields covered the general fields of Chemistry and Engineering. For example, the cited half-life for journals classified under Energy & Fuels declined by 0.11 years per annum, Chemistry-Multidisciplinary declined by 0.07 years per annum, Engineering-Multidisciplinary by 0.05 years per annum, and Engineering-Chemical by 0.04 years per annum. Granted, these are smaller declines, but they do run contrary to overall trends.


We also discovered that cited half-life increases with total citations, meaning, as a journal attracts more citations, a larger proportion of these citations target older articles. This can be seen in Figure 2, as journal categories move from the bottom left to the upper right quadrant of the graph over the observation period.


The next figure highlights the trajectory of highly-cited journals from 1997 to 2013, illustrating how cited half-life increases with the total citations to a journal. While most highly-cited journals move toward the upper-right quadrant of the graph, we highlight three chemistry journals that run contrary to this trend: Journal of the American Chemical Society, Angewandte Chemie-International Edition, and Chemical Communications. Those readers wishing to speculate why Chemistry and Engineering journals were bucking the overall trend are welcome to do so in the comment section below.

Readers are also welcome to explore the data (for categories and for journals). The files (.swf) require the Adobe Flash plug-in. Mac users may need to hold the Control key and select their browser when opening these files. Categories may be split into component journals. Other controls moderate the size, speed and display of the data.


In sum, we were able to validate the claims by the Google Scholar team that scholars have been citing older materials, with some exceptions.

The citation behavior of authors reflects cultural, technological, and normative behaviors, all acting in concert. While digital publishing and technologies were invented to aid the reader in discovering, retrieving, and citing the literature, the  trend appears to predate many of these technologies. Indeed, equal credit may be due to the photocopier, the fax machine, FTP, and email as is given to Google, EndNote, or the DOI.

Nevertheless, a growing cited half-life might also reflect major structural shifts in the way science is funded and the way scientists are rewarded. A gradual move to fund incremental and applied research may result in fewer fundamental and theoretical studies being published. Giving credit to these foundational works may require authors to cite an increasingly aging literature.

Correction note: Table 1 of the manuscript “ Cited Half-Life of the Journal Literature ” (arXiv) contains a sorting error. A corrected version (v2) was submitted and will become live at 8pm (EDT). Thanks to Dr. Jacques Carette, Dept. of Computing and Software at McMaster University for spotting this error.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

26 Thoughts on "Why are Authors Citing Older Papers?"


This is interesting stuff. I wonder how big the availability effect is. Being able as we are now to search (even full text) and retrieve the full backfiles of journals compared to browsing recent issues/volumes as we did say 20 years ago may be influencing these patterns. Of course this effect is dampened by authors’ decisions not to cite if they deem the article too old or its contents out of date. For the recent years this effect is extra strong because of the hybrid way Google Scholar ranks, with citations (and thus age) strongly affecting the ranking.

Another effect may be simply the growing historical volume of journals. Probably with all other things being equal a journal with a 100 year history will have a higher half life than a journal with a 20 year history, simply because of the chance of encountering relevant older papers. This effect is especially strong for young journals that in the research window 1997-2013 have grown from for instance 5 to 22 available years (because relatively fewer topically relevant papers will be rejected for citation by authors because of their age). Consequently perhaps there is also an effect of the growth of the number of journals in the JCR with many journals added that have a short history.

  • By Jeroen Bosman
  • Apr 29, 2015, 6:13 AM


Availability may be creating a mild cultural shift. My research suggests that most citations occur early in an article, where the history of the problem is explained. Reviewers may simply be calling for more history, especially in scientific fields.

  • By David Wojick
  • Apr 29, 2015, 7:21 AM


A quick comment on the culture of mathematics. I find it fascinating really, in that once a theorem is proved, it stands for eternity. A proven theorem is essentially a building block for future research. This means that for mathematicians, their articles’ citation half lives are often long – a cultural dimension.

  • By Robert Harington
  • Apr 29, 2015, 8:17 AM

This is also largely true for science and engineering. Paradigms may be overthrown by revolution, but individual research results seldom are. So while we may lurch from paradigm to paradigm, there is also a steady accumulation of knowledge. Falsification seldom occurs at the project level. (This has implications for the data sharing issue as well.)

  • Apr 29, 2015, 8:50 AM


Advocates of what we have been calling “basic research” are now wanting to call it “discovery research,” because basic research has become dull and more difficult to fund, while also becoming more politicized in many fields. Discovery research — discoveries that can blow open paradigms, make us rethink what’s possible or even probable, and lead to entirely new ways of pursuing science — has been underfunded for decades now. I’m tempted to believe much of this stems from the incrementalism of modern research, which is necessary but is not being matched by an equal rate of really new discoveries.

The recent reviews of the first decades of the Hubble telescope show how funding breakthrough devices and approaches can yield all sorts of discoveries. In the case of Hubble, things like dark energy and the idea that black holes were at the center of every galaxy weren’t anticipated, but were discovered through Hubble. We need more of this in every field.

  • By Kent Anderson
  • Apr 29, 2015, 8:51 AM


Classic papers can’t help getting older. Perhaps it takes time for most classics to get that status. Indeed it’s only by being cited over a long period of time, that they generally get this status.

So younger papers that fail to get cited enough early on find it hard to keep getting cited. And fall off. The good ones keep getting cited and so eventually continue as they move beyond the half life. And keep being cited – perhaps by habit, perhaps because they really are irreplaceable.

So if we say papers cite on average 30 other papers. And there are 10 classics. And 10 established mid age papers. And 10 recent papers.

In five years’ time, the equivalent paper still cites 8 of those classics, plus 5 of those previously established papers that are now classics, plus 7 of the then-recent papers that are now mid-aged. This leaves 10 that are still recent. But the average age overall is likely to have got older, as 66% (20 of 30) of the cited papers are the same ones, now five years older.

Can we test this? Can we examine if there is a concentration in the number of older papers cited (ie fewer but more often) whereas more younger papers show the opposite (more cited but less often)? This might be an effect of so many similar younger papers getting published that only a select few can gain classic status. Statistics can mask all sorts of effects.

  • Apr 29, 2015, 9:26 AM


Search and sort could also influence this behavior: papers with higher numbers of citations (often older papers) land on the first page of Google Scholar results, and on the first page of citation database results when the option to sort by citation count is available. More people citing a paper means more people see that paper, means more people cite that paper – driving the Matthew Effect, where the rich (in citations) get richer.

It is also true that people are tweeting and posting and so re-invigorating older papers.

You can only legitimately cite a paper that you’ve seen. Anything that contributes to the discoverability and visibility of a paper creates a condition favorable to citation.

I’d love to see this: for each year of citing papers, one can determine the number of citations per cited paper for each year of age. You could also see if citation concentration has changed by comparing across successive citing years.

  • By Marie McVeigh
  • Apr 29, 2015, 1:29 PM


I think this is an excellent point Marie. I also wonder if this may reflect poor searching strategies from newer researchers in emerging markets. In my experience in working with many of these authors, they often restrict their literature search to Google Scholar (free) and review articles (which often cite older papers). Because of the citation bias in Google Scholar (and the fact that it indexes too broadly; i.e., non-peer reviewed material), I usually recommend that younger authors use stricter databases that are both broad (like Web of Science and Scopus) and field-specific (like PsycINFO, PubMed, SciFinder, etc.).

I think educating younger researchers, particularly in emerging markets, in how to properly find recently published quality articles related to their topic is essential to doing good science (identifying current trends and relevant research questions).

  • By Dr. Jeffrey Robens (@jeffreyrobens)
  • Apr 29, 2015, 5:55 PM


Older articles are more accessible today than they have ever been. Several factors have contributed: publishers’ own archive digitisation programmes – I have been involved in two major retro-digitisation programmes over the last 10-15 years, one in Physics going back to 1874 and another in healthcare going back to the late 60s; the library’s continued drive and desire to provide access where possible to older materials for researchers, made possible often by last-minute or year-end funding; and digitisation of retrospective journal indexes (e.g. INSPEC, BNI etc), which helps the postgrad or librarian doing the discovering if not the author themselves. Lastly but by no means least, CrossRef has provided a huge service for access to retrospective literature. Many publishers will quite sensibly include the one-time cost of uploading their DOIs to CrossRef when building their budgets for their own journal archive digitisation projects.

It’s amazing to see quite how heavily older material is used when compared to current literature.

  • By Tony O’Rourke
  • Apr 29, 2015, 9:35 AM


Digitizing and making older content accessible, particularly in search, is certainly a factor. But, just because it’s there doesn’t mean people will cite it. Clearly there is value to the older content, not just in downloads but also in the “thumbs up” citation. This is encouraging for publishers looking to include backfile content in their digital archives.

  • By Angela Cochran
  • Apr 29, 2015, 9:57 AM


Back in the good old days, even before Current Contents, at least in the chem/physics arena, it was important to cite key early papers as foundational (e.g. the works of Einstein) as a way of validating your work and, perhaps equally important, so that others searching for that key paper would find yours because of the citation. That got you more requests for reprints and raised your article’s prestige, and thus your rank in the community. It was, in many ways, an early version of “gaming” the system. Of course your work did stand on the shoulders of those who came before – sort of.

  • Apr 29, 2015, 10:46 AM


This is v interesting, Phil. In the spirit of “When you are in the X business, everything good in this world must be the result of X.” I also wonder whether new services that enable researchers to breathe new life into their older publications, connecting them to later work / developments in the field, and explaining how the one influenced the other, could lead to more citations of older work. (Probably not yet, but it might be another variable to factor in to future analysis of this kind).

  • By Charlie Rapple
  • Apr 29, 2015, 12:56 PM


While the growth in fraction of older citations has indeed been occurring over a long period, an acceleration in the growth occurred over the timeframe that digitization of archives brought the treasure trove of long-hidden scholarship to light and relevance-ranked fulltext search made it easy to dig deep into it. You can see this in Fig 1 of our paper. It is also summarized as one of the conclusions of our article:

“Second, for most areas, the change over the second half (2002-2013) was significantly larger than that over the first half (1990-2001). Overall, the increase in the second half was double the increase in the first half. Note that most archival digitization efforts as well as the move to fulltext relevance-ranked search occurred over the second half.”

To quantify this at a bit more granular level, I recently computed percentiles. Of the total growth in the fraction of older citations over 1990-2013, the first one-third occurred over about 12 years (1990-2002), the next one-third over 7 years, and the final one-third over the last 4 years.

Digitization of journal archives has been a great achievement by scholarly publishers. It has made it possible, easy and indeed expected for researchers to dig deeper into the scholarly record. As someone who grew up in a place with limited libraries, the ease of being able to search over such a large collection is magical.

As for why researchers dig deeper now than before, the obvious answer, because it is so easy to do, is more likely than other more complicated causes. That which is useful and is easy to do gets done a lot more. We all do this. And do it all the time. Consider how often you use maps, write notes, publish short notes (like this one), lookup information and so on compared to 20 years ago…

  • By Anurag Acharya
  • Apr 29, 2015, 12:59 PM


What is the contribution of review articles in these data sets? I would assume that with the increasing number of scientific articles being published (from the US and the world) that more reviews are being written. In my experience, reviews tend to cite older literature.

  • By Clay Comstock
  • Apr 29, 2015, 4:38 PM


Clay, good question. The data set is based on the journal level, not the article level. But I could take a first cut of titles with the word “Review” in it and see if they perform any differently.

  • By Phil Davis
  • Apr 29, 2015, 4:49 PM

Certainly agree that in your dataset it may be difficult to discern the contributions of journals that publish more reviews. If I recall correctly, reviews were treated like articles in the Acharya datasets, so it should be possible to discern there.

Also, I’m not sure pulling titles with “review” in them will help, at least for oncology many don’t include “review” in the title.

  • Apr 29, 2015, 6:25 PM


This is interesting but it would seem to me to require research into the amount and type of research as well as funding.

  • Apr 30, 2015, 6:31 AM


We are glad to learn that the results offered by Phil Davis’ work match the ones we presented earlier this year ( http://arxiv.org/abs/1501.02084 ), even though the designs of the studies are slightly different from one another (we used the Aggregate Cited Half-Life for the 220 subject categories present in the Journal Citation Reports and limited the study to the period comprised between 2003 and 2013, while Davis’ work analyzes a longer period). Nevertheless, probably the most interesting addition to this debate is that we discussed the causes that might explain this phenomenon. We pointed out three causes, in this order:

The first factor that should be considered has already been studied extensively: the relation between the exponential growth of scientific production and the pace of obsolescence. If we know that growth and obsolescence are closely related, what does the increase in citations to old documents reported in this study mean? Does it mean that we are in a period of slow scientific growth? Put another way, is science still growing exponentially as in previous periods? Or is today’s scientific production of a lower quality, not providing as many new discoveries and techniques? These are interesting as well as disturbing questions ( http://goo.gl/HAk8xt ).

The second factor is the one the Google Scholar team pointed out in their work and now also here: the advancements in information and communications technologies have increased accessibility to old documents. The truth is that these arguments seem quite reasonable, and they are supported by the changes in scientists’ reading habits detected by Tenopir & King ( http://goo.gl/WTgZL9 ): “the age of articles read appears to be fairly stable over the years, with a recent increase in reading of older articles. Electronic technologies have enhanced access to older articles, since nearly 80% of articles over ten years old are found by on-line searching or from citation (linkages) and nearly 70% of articles over ten years old are provided by libraries (mostly electronic collections)”.

The third factor that should be considered is the influence of Google Scholar itself. It is undeniable that Google Scholar has revolutionized the way we search for and access scientific information. A clear manifestation of this is the way results are nowadays displayed in most search engines and databases, a key issue that determines how a document is accessed, read, and potentially cited. There is a “first results page syndrome”: users are increasingly accustomed to accessing only those documents displayed on the first pages of results. In Google Scholar, as opposed to traditional bibliographic databases (Web of Science, Scopus, ProQuest) and library catalogues, documents are sorted by relevance and not by publication date. Relevance, in the eyes of Google Scholar, is strongly influenced by citations ( http://goo.gl/bqrwgs , http://arxiv.org/abs/1410.8464 ).

Google Scholar favours the most cited documents (which are obviously also the oldest) over more recent documents, which have had less time to accumulate citations. Although it is true that GS offers the possibility of sorting and filtering searches by publication date, this option is not used by default. Traditional databases do the exact opposite: trying to prioritize novelty and recency (the criterion they have always assumed the user will be most interested in), they sort their results by publication date by default, allowing the user to select other criteria if so inclined (citations, relevance, name of first author, publication name, etc.). So the question stands: is Google Scholar changing reading and citation habits because of the way information is searched and accessed through its search engine?

  • By Emilio Delgado López-Cózar
  • Apr 30, 2015, 7:06 AM

I think Phil has said that this trend is far too old to be due to Google Scholar, or even to the Web. The Internet was fielded in the scholarly community in the 1970s, and other availability technologies have emerged since. GS might have helped with the recent acceleration, if there really is one. My conjecture is that adding more history to an article’s opening background section has been the trend, simply because of the increase in availability. There may be nothing particularly deep in this trend.

  • Apr 30, 2015, 9:34 AM


The explanation may be simple: life sciences had a boom period from the late 70s to early 2000s – molecular biology revolution, genome sequencing, etc. A lot of path-breaking, field-opening papers were published then. We are still productive and making progress, but we are making that progress in the same fields so we all cite those papers that opened those fields.

  • May 3, 2015, 8:13 AM


Thanks for a very nice analysis that is provoking much valuable discussion. Among other things, it helps to give the lie to the Impact Factor, with its very narrow time window (citations within two years of publication). Many of us in research areas where longer ‘half-lives’ are typical for papers rightly bemoan the grip the benighted Impact Factor still has on the science publishing and funding world. This article helps to reveal another aspect of the Impact Factor’s irrelevance to assessing real impact, long-term significance, real ground-breaking work, or ‘esteem’ in the community of researchers. Much food for thought!

  • By David Miller
  • May 9, 2015, 6:51 AM


In chemistry, the reduced citation half-life of multidisciplinary journals might have to do with the growing percentage of life sciences-related content these journals publish. Since life sciences have a shorter citation half-life than traditional chemistry areas such as organic chemistry, that might bring the averages for chemistry journals down.

  • By Stefano Tonzani
  • May 15, 2015, 12:35 PM

Comments are closed.



The growth of papers is crowding out old classics

The benefits of middle age. 

Gemma Conroy


Credit: Raj Kumar Pan et al.


4 November 2019


Freshly published and very old articles are missing out on citations as researchers try to stay abreast of the rapidly growing scientific literature, an analysis of more than 32 million papers reveals.

An analysis led by Raj Kumar Pan, a computer scientist at Aalto University in Finland, found that the number of academic papers is increasing by 4% each year. The total number of citations is growing by 5.6% each year, doubling every 12 years.
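The "doubling every 12 years" figure follows from ordinary compound-growth arithmetic; a quick sketch using the rates quoted above:

```python
import math

def doubling_time(annual_rate):
    # Years for a quantity growing at a constant annual rate to double:
    # ln(2) / ln(1 + rate)
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.056), 1))  # citations, 5.6% per year -> ~12.7 years
print(round(doubling_time(0.04), 1))   # papers, 4% per year -> ~17.7 years
```

So at these rates the citation pool doubles noticeably faster than the paper count itself.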

According to Alexander Petersen, co-author of the study, this huge volume of new articles isn’t just reshaping scientific publishing, it’s also changing how researchers "follow the reference trail". Rather than sift through large volumes of new papers, researchers are opting for middle-aged articles that have gained greater visibility and more citations.

"The deluge of new literature is crowding out old literature," says Petersen, a computational social scientist at the University of California, Merced.

Uneven citations

Over the past decade, the number of papers published each year has grown to around 2.5 million. The rise of online-only "mega-journals" has been a significant driver of this increase, with PLOS ONE growing by 78.6% annually in its first six years.

In 2012 alone, this open-access mega-journal published more than 23,000 articles, which accounted for 1.4% of the entire collection indexed by the Web of Science for that year.

To examine how this growth is influencing how papers are cited over time, Petersen and his colleagues analyzed papers that were published between 1965 and 2012 and their reference lists.

They found that the growth of papers and reference lists has increased the 'citability' of articles overall. For instance, in 1980, around 30% of Science papers remained uncited for the first five years after publication. By 2005, 90% of papers had at least one citation within five years.

Petersen’s team used a network growth model to visualize the evolution of scientific literature and how it has impacted the spread of citations over 150 years.


The image, published with the analysis in the Journal of Informetrics, shows that citations are unevenly distributed, with very new and very old papers missing out on being referenced.

Zooming in on the visualization reveals a collection of circles and interconnected lines, representing papers and their citations.

The size of the circles is determined by their citation count, and the colour scale reveals the publishing year.

The dark green circles forming the tail of the visualization represent the oldest papers, while the bright green circles accumulated around the head of the image are the youngest. As these densely linked, bright green circles show, younger articles tend to cite middle-aged papers over brand new or very old research.

Petersen says that, while the visualization captures the growth of science, it also reveals how knowledge gathering has become more narrowly focused.

"There are fewer citations to research that is younger than six years old, and to papers that are older than 50 years," says Petersen. "This makes it challenging for search engines to ensure that search query results are appropriately balanced in time."


Expert Commentary

How to find an academic research paper

Looking for research on a particular topic? We’ll walk you through the steps we use here at Journalist's Resource.


This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License .

by David Trilling, The Journalist's Resource October 18, 2017


Journalists frequently contact us looking for research on a specific topic. While we have published a number of resources on how to understand an academic study and how to pick a good one — and why using social science research enriches journalism and public debate — we have little on the mechanics of how to search. This tip sheet will briefly discuss the resources we use.

Google Scholar

Let’s say we’re looking for papers on the opioid crisis. We often start with Google Scholar, a free service from Google that searches scholarly articles, books and documents rather than the entire web: scholar.google.com .

But a search for the keyword “opioids” returns almost half a million results, some from the 1980s. Let’s narrow down our search. On the left, you see options “anytime” (the default), “since 2013,” “since 2016,” etc. Try “since 2017” and the results are now about 17,000. You can also insert a custom range to search for specific years. And you can include patents or citations, if you like (unchecking these will slightly decrease the number of results).

Still too many results. To narrow the search further, try any trick you’d use with Google. (Here are some tips from MIT on how to supercharge your Google searches.) Let’s look for papers on opioids published in 2015 that look at race and exclude fentanyl (Google: “opioids +race -fentanyl”). Now we’re down to 2,750 results. Better.
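The sidebar's custom year range corresponds to Scholar's `as_ylo`/`as_yhi` URL parameters, so the same searches can be constructed programmatically. A minimal sketch, assuming those parameter names behave as they do at the time of writing (the query string is the example above):

```python
from urllib.parse import urlencode

def scholar_url(query, year_lo=None, year_hi=None):
    """Build a Google Scholar search URL; as_ylo/as_yhi bound the
    publication year, mirroring the 'custom range' sidebar option."""
    params = {"q": query}
    if year_lo is not None:
        params["as_ylo"] = year_lo
    if year_hi is not None:
        params["as_yhi"] = year_hi
    return "https://scholar.google.com/scholar?" + urlencode(params)

print(scholar_url("opioids +race -fentanyl", year_lo=2015, year_hi=2015))
```

The `+`/`-` operators travel inside the `q` parameter (percent-encoded by `urlencode`), just as they would typed into the search box.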


Unless you tell Google to “sort by date,” the search engine will generally weight the papers that have been cited most often, so you will see them first.

Try different keywords. If you’re looking for a paper that studies existing research, include the term “meta-analysis.” Try searching by the author’s name, if you know it, or title of the paper. Look at the endnotes in papers you like for other papers. And look at the papers that cited the paper you like; they’ll probably be useful for your project.

If you locate a study and it’s behind a paywall, try these steps:

  • Click on “all versions.” Some may be available for free. (Though check the date, as this may include earlier drafts of a paper.)
  • Reach out to the journal and the scholar. (The scholar’s email is often on the abstract page. Also, scholars generally have an easy-to-find webpage.) One is likely to give you a free copy of the paper, especially if you are a member of the press.
  • In regular Google, search for the study by title and you might find a free version.

More tips on using Google Scholar from MIT and Google .

Other databases

  • PubMed Central at the National Library of Medicine: If you are working on a topic that has a relationship to health, try this database run by the National Institutes of Health. This free site hosts articles or abstracts and links to free versions of a paper if they are available. Often Google Scholar will point you here.
  • If you have online access to a university library or a local library, try that.
  • Directory of Open Access Journals .
  • Digital Public Library of America .
  • Subscription services include org and Web of Science .

For more on efforts to make scholarly research open and accessible for all, check out SPARC, a coalition of university libraries.

Citations as a measure of impact

How do you know if a paper is impactful? Some scholars use the number of times the paper has been cited by other scholars. But that can be problematic: Some papers cite papers that are flawed simply to debunk them. Some topics will be cited more often than others. And new research, even if it’s high-quality, may not be cited yet.

The impact factor measures how frequently a journal, not a paper, is cited.

This guide from the University of Illinois, Chicago, has more on metrics.

Here’s a useful source of new papers curated by Boston Globe columnist Kevin Lewis for National Affairs.

Another way to monitor journals for new research is to set up an RSS reader like Feedly . Most journals have a media page where you can sign up for press releases or newsletters featuring the latest research.
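Journal RSS feeds are plain XML, so even the standard library can pull new titles out of one once you have fetched it. A minimal sketch against an inline sample feed (the feed content and URLs below are made up; in practice you would download the journal's feed with `urllib.request`):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 fragment like those journal feeds publish.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Journal: latest articles</title>
  <item><title>Paper A</title><link>https://example.org/a</link>
        <pubDate>Mon, 04 Nov 2019 00:00:00 GMT</pubDate></item>
  <item><title>Paper B</title><link>https://example.org/b</link>
        <pubDate>Tue, 05 Nov 2019 00:00:00 GMT</pubDate></item>
</channel></rss>"""

def latest_titles(feed_xml):
    """Return (title, link) pairs for each item in an RSS feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(latest_titles(SAMPLE_FEED))
```

A dedicated reader like Feedly does the polling and deduplication for you; the point here is only that the underlying format is simple enough to script.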

Relevant tip sheets from Journalist’s Resource:

  • 10 things we wish we’d known earlier about research
  • How to tell good research from bad: 13 questions journalists should ask  (This post also discusses how to determine if a journal is good.)
  • Lessons on online search techniques, reading studies, understanding data and methods
  • Guide to critical thinking, research, data and theory: Overview for journalists

About The Author


David Trilling


https://www.nist.gov/nist-research-library/journal-research-nist/past-papers

Journal of Research of NIST

Journal of Research past papers:

  • Journal of Research of the National Institute of Standards and Technology, 1989-present
  • Journal of Research of the National Bureau of Standards, 1977-1988
  • Journal of Research of the National Bureau of Standards, Section A: Physics and Chemistry, 1959-1977
  • Journal of Research of the National Bureau of Standards, Section B: Mathematical Sciences, 1968-1977
  • Journal of Research of the National Bureau of Standards, Section B: Mathematics and Mathematical Physics, 1959-1967
  • Journal of Research of the National Bureau of Standards, Section C: Engineering and Instrumentation, 1959-1977
  • Journal of Research of the National Bureau of Standards, Section D: Radio Science, 1964-1965
  • Journal of Research of the National Bureau of Standards, Section D: Radio Propagation, 1959-1963
  • Journal of Research of the National Bureau of Standards, 1934-1958
  • Bureau of Standards Journal of Research, 1928-1934
  • Scientific Papers of the Bureau of Standards, 1919-1928 1
  • Technologic Papers of the Bureau of Standards, 1910-1928 1
  • Bulletin of the Bureau of Standards, 1904-1919 1

1 Results of research in science and technology from the National Bureau of Standards were reported in the Scientific Papers. The first 14 volumes of the Scientific Papers were issued as the Bulletin of the Bureau of Standards (1904-1919). Volumes 15-22 were issued as Scientific Papers of the Bureau of Standards (1919-1928). Results of investigations of materials and methods of testing were reported in the 22 volumes of Technologic Papers (1910-1928). In July 1928 the Scientific Papers and Technologic Papers were combined and issued under the title Bureau of Standards Journal of Research.

Research paper

A research paper is a written report that contains either the results of original scientific research (a primary research article) or a review of published scientific papers on one or several topics (a review article). In primary research articles, the authors give the essential details of the research so that other members of the scientific community can evaluate it, reproduce the experiments, and assess the reasoning and conclusions drawn from them. Review articles are designed to analyze, evaluate, summarize, or synthesize research already conducted in primary academic sources. Quite often, a scientific article combines these two types of text, including both an overview and an original part.

The number of open-access scientific articles is growing fast, but they are spread across numerous websites, which makes it hard for researchers to find the information they need for new discoveries, or to download a PDF from an unreliable site.

CyberLeninka is intended to solve this problem. We provide a platform that aggregates free articles from various open-access peer-reviewed journals. Our broader goal is to build a new research infrastructure for academia.

Directory of open access articles based on OECD fields of science and technology

  • Medical and Health sciences
  • Basic medicine
  • Clinical medicine
  • Health sciences
  • Health biotechnology
  • Natural sciences
  • Mathematics
  • Computer and information sciences
  • Physical sciences
  • Chemical sciences
  • Earth and related environmental sciences
  • Biological sciences
  • Engineering and Technology
  • Civil engineering
  • Electrical engineering, electronic engineering, information engineering
  • Mechanical engineering
  • Chemical engineering
  • Materials engineering
  • Medical engineering
  • Environmental engineering
  • Environmental biotechnology
  • Industrial biotechnology
  • Nano technology
  • Agricultural sciences
  • Agriculture, forestry, and fisheries
  • Animal and dairy science
  • Veterinary science
  • Agricultural biotechnology
  • Social sciences
  • Economics and business
  • Educational sciences
  • Political science
  • Social and economic geography
  • Media and communications
  • History and archaeology
  • Languages and literature
  • Philosophy, ethics and religion
  • Arts, history of arts, performing arts, music

Association "Open Science"

Shapiro Library

FAQ: How old should or can a source be for my research?

Last Updated: Jun 22, 2023 Views: 124187

How old your research sources can be, using the publication date or date of creation as the defining criterion, is either stated in your assignment rubric or depends on your field of study or academic discipline. If it’s a requirement for your assignment, look for words like “sources must be published in the last 10 years” or similar phrasing that specifies the publication date or range required. If the currency of sources is not a requirement of your assignment, think about the course involved and what an appropriate age might be.

How fast-changing is the field of study?

Sources for a history paper might, by their very nature, be older if they are diaries, personal letters, or other documents created long ago and used as primary sources. Research in the sciences (health care, nursing, engineering), business and finance, and education and other social science fields requires more “cutting edge” sources, as these fields change quickly with the acquisition of new knowledge and the need to share it rapidly with practitioners.

A good rule of thumb is to use sources published in the past 10 years for research in the arts, humanities, literature, history, etc.

For faster-paced fields, sources published in the past 2-3 years are a good benchmark, since these are more current and reflect the newest discoveries, theories, processes, or best practices.
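Applied mechanically, these rules of thumb are just a filter on publication year. A small illustrative sketch (the source list and years are hypothetical):

```python
from datetime import date

# Hypothetical reading list: (title, publication year).
sources = [
    ("Classic theory paper", 1998),
    ("Recent clinical trial", 2022),
    ("Methods overview", 2016),
]

def within_years(sources, max_age, today=None):
    """Keep sources published within `max_age` years of `today`."""
    year = (today or date.today()).year
    return [s for s in sources if year - s[1] <= max_age]

# 10-year window (arts/humanities rule of thumb) vs. 2-3-year
# window (fast-moving fields):
print(within_years(sources, 10, today=date(2023, 6, 22)))
print(within_years(sources, 3, today=date(2023, 6, 22)))
```

The 10-year window keeps the 2016 overview; the stricter window for fast-moving fields drops it.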

Use the library’s Multi-Search search results page to limit your sources to those published within a date range you specify.  Use the Publication Date custom setting seen on the left side of the search results page:

Screenshot of the publication date area in multisearch

For further assistance with this or other search techniques, contact the Shapiro Library email at [email protected]  or use our 24/7 chat service.



PLoS Comput Biol. 2013 Jul; 9(7)

Ten Simple Rules for Writing a Literature Review

Marco Pautasso

1 Centre for Functional and Evolutionary Ecology (CEFE), CNRS, Montpellier, France

2 Centre for Biodiversity Synthesis and Analysis (CESAB), FRB, Aix-en-Provence, France

Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications [1] . For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively [2] . Given such mountains of papers, scientists cannot be expected to examine in detail every single new paper relevant to their interests [3] . Thus, it is both advantageous and necessary to rely on regular summaries of the recent literature. Although recognition for scientists mainly comes from primary research, timely literature reviews can lead to new synthetic insights and are often widely read [4] . For such summaries to be useful, however, they need to be compiled in a professional way [5] .

When starting from scratch, reviewing the literature can require a titanic amount of work. That is why researchers who have spent their career working on a certain research issue are in a perfect position to review that literature. Some graduate schools are now offering courses in reviewing the literature, given that most research students start their project by producing an overview of what has already been done on their research issue [6] . However, it is likely that most scientists have not thought in detail about how to approach and carry out a literature review.

Reviewing the literature requires the ability to juggle multiple tasks, from finding and evaluating relevant material to synthesising information from various sources, from critical thinking to paraphrasing, evaluating, and citation skills [7] . In this contribution, I share ten simple rules I learned working on about 25 literature reviews as a PhD and postdoctoral student. Ideas and insights also come from discussions with coauthors and colleagues, as well as feedback from reviewers and editors.

Rule 1: Define a Topic and Audience

How to choose which topic to review? There are so many issues in contemporary science that you could spend a lifetime of attending conferences and reading the literature just pondering what to review. On the one hand, if you take several years to choose, several other people may have had the same idea in the meantime. On the other hand, only a well-considered topic is likely to lead to a brilliant literature review [8] . The topic must at least be:

  • interesting to you (ideally, you should have come across a series of recent papers related to your line of work that call for a critical summary),
  • an important aspect of the field (so that many readers will be interested in the review and there will be enough material to write it), and
  • a well-defined issue (otherwise you could potentially include thousands of publications, which would make the review unhelpful).

Ideas for potential reviews may come from papers providing lists of key research questions to be answered [9] , but also from serendipitous moments during desultory reading and discussions. In addition to choosing your topic, you should also select a target audience. In many cases, the topic (e.g., web services in computational biology) will automatically define an audience (e.g., computational biologists), but that same topic may also be of interest to neighbouring fields (e.g., computer science, biology, etc.).

Rule 2: Search and Re-search the Literature

After having chosen your topic and audience, start by checking the literature and downloading relevant papers. Five pieces of advice here:

  • keep track of the search items you use (so that your search can be replicated [10] ),
  • keep a list of papers whose pdfs you cannot access immediately (so as to retrieve them later with alternative strategies),
  • use a paper management system (e.g., Mendeley, Papers, Qiqqa, Sente),
  • define early in the process some criteria for exclusion of irrelevant papers (these criteria can then be described in the review to help define its scope), and
  • do not just look for research papers in the area you wish to review, but also seek previous reviews.
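The first of those bullets, keeping track of the search items you use so the search can be replicated, can be as simple as a dated log of database/query pairs that you later export for the methods section. An illustrative sketch (the queries and hit counts are invented):

```python
import csv
import io

# Hypothetical search log: one row per database/query pair, so the
# literature search can be replicated and reported.
log = [
    {"date": "2013-07-01", "database": "Web of Science",
     "query": "climate change AND plant disease", "hits": 412},
    {"date": "2013-07-01", "database": "Scopus",
     "query": "climate change AND plant disease", "hits": 389},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "database", "query", "hits"])
writer.writeheader()
writer.writerows(log)
print(buf.getvalue())
```

A spreadsheet or a plain text file serves the same purpose; what matters is recording the query, the database, and the date.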

The chances are high that someone will already have published a literature review ( Figure 1 ), if not exactly on the issue you are planning to tackle, at least on a related topic. If there are already a few or several reviews of the literature on your issue, my advice is not to give up, but to carry on with your own literature review,

Figure 1. The bottom-right situation (many literature reviews but few research papers) is not just a theoretical situation; it applies, for example, to the study of the impacts of climate change on plant diseases, where there appear to be more literature reviews than research studies [33].

  • discussing in your review the approaches, limitations, and conclusions of past reviews,
  • trying to find a new angle that has not been covered adequately in the previous reviews, and
  • incorporating new material that has inevitably accumulated since their appearance.

When searching the literature for pertinent papers and reviews, the usual rules apply:

  • be thorough,
  • use different keywords and database sources (e.g., DBLP, Google Scholar, ISI Proceedings, JSTOR Search, Medline, Scopus, Web of Science), and
  • look at who has cited past relevant papers and book chapters.

Rule 3: Take Notes While Reading

If you read the papers first, and only afterwards start writing the review, you will need a very good memory to remember who wrote what, and what your impressions and associations were while reading each single paper. My advice is, while reading, to start writing down interesting pieces of information, insights about how to organize the review, and thoughts on what to write. This way, by the time you have read the literature you selected, you will already have a rough draft of the review.

Of course, this draft will still need much rewriting, restructuring, and rethinking to obtain a text with a coherent argument [11] , but you will have avoided the danger posed by staring at a blank document. Be careful when taking notes to use quotation marks if you are provisionally copying verbatim from the literature. It is advisable then to reformulate such quotes with your own words in the final draft. It is important to be careful in noting the references already at this stage, so as to avoid misattributions. Using referencing software from the very beginning of your endeavour will save you time.

Rule 4: Choose the Type of Review You Wish to Write

After having taken notes while reading the literature, you will have a rough idea of the amount of material available for the review. This is probably a good time to decide whether to go for a mini- or a full review. Some journals are now favouring the publication of rather short reviews focusing on the last few years, with a limit on the number of words and citations. A mini-review is not necessarily a minor review: it may well attract more attention from busy readers, although it will inevitably simplify some issues and leave out some relevant material due to space limitations. A full review will have the advantage of more freedom to cover in detail the complexities of a particular scientific development, but may then be left in the pile of the very important papers “to be read” by readers with little time to spare for major monographs.

There is probably a continuum between mini- and full reviews. The same point applies to the dichotomy of descriptive vs. integrative reviews. While descriptive reviews focus on the methodology, findings, and interpretation of each reviewed study, integrative reviews attempt to find common ideas and concepts from the reviewed material [12] . A similar distinction exists between narrative and systematic reviews: while narrative reviews are qualitative, systematic reviews attempt to test a hypothesis based on the published evidence, which is gathered using a predefined protocol to reduce bias [13] , [14] . When systematic reviews analyse quantitative results in a quantitative way, they become meta-analyses. The choice between different review types will have to be made on a case-by-case basis, depending not just on the nature of the material found and the preferences of the target journal(s), but also on the time available to write the review and the number of coauthors [15] .

Rule 5: Keep the Review Focused, but Make It of Broad Interest

Whether your plan is to write a mini- or a full review, it is good advice to keep it focused [16], [17]. Including material just for the sake of it can easily lead to reviews that are trying to do too many things at once. The need to keep a review focused can be problematic for interdisciplinary reviews, where the aim is to bridge the gap between fields [18]. If you are writing a review on, for example, how epidemiological approaches are used in modelling the spread of ideas, you may be inclined to include material from both parent fields, epidemiology and the study of cultural diffusion. This may be necessary to some extent, but in this case a focused review would only deal in detail with those studies at the interface between epidemiology and the spread of ideas.

While focus is an important feature of a successful review, this requirement has to be balanced with the need to make the review relevant to a broad audience. This square may be circled by discussing the wider implications of the reviewed topic for other disciplines.

Rule 6: Be Critical and Consistent

Reviewing the literature is not stamp collecting. A good review does not just summarize the literature, but discusses it critically, identifies methodological problems, and points out research gaps [19] . After having read a review of the literature, a reader should have a rough idea of:

  • the major achievements in the reviewed field,
  • the main areas of debate, and
  • the outstanding research questions.

It is challenging to achieve a successful review on all these fronts. A solution can be to involve a set of complementary coauthors: some people are excellent at mapping what has been achieved, some others are very good at identifying dark clouds on the horizon, and some have instead a knack at predicting where solutions are going to come from. If your journal club has exactly this sort of team, then you should definitely write a review of the literature! In addition to critical thinking, a literature review needs consistency, for example in the choice of passive vs. active voice and present vs. past tense.

Rule 7: Find a Logical Structure

Like a well-baked cake, a good review has a number of telling features: it is worth the reader's time, timely, systematic, well written, focused, and critical. It also needs a good structure. With reviews, the usual subdivision of research papers into introduction, methods, results, and discussion does not work or is rarely used. However, a general introduction of the context and, toward the end, a recapitulation of the main points covered and take-home messages make sense also in the case of reviews. For systematic reviews, there is a trend towards including information about how the literature was searched (database, keywords, time limits) [20] .

How can you organize the flow of the main body of the review so that the reader will be drawn into and guided through it? It is generally helpful to draw a conceptual scheme of the review, e.g., with mind-mapping techniques. Such diagrams can help recognize a logical way to order and link the various sections of a review [21] . This is the case not just at the writing stage, but also for readers if the diagram is included in the review as a figure. A careful selection of diagrams and figures relevant to the reviewed topic can be very helpful to structure the text too [22] .

Rule 8: Make Use of Feedback

Reviews of the literature are normally peer-reviewed in the same way as research papers, and rightly so [23] . As a rule, incorporating feedback from reviewers greatly helps improve a review draft. Having read the review with a fresh mind, reviewers may spot inaccuracies, inconsistencies, and ambiguities that had not been noticed by the writers due to rereading the typescript too many times. It is however advisable to reread the draft one more time before submission, as a last-minute correction of typos, leaps, and muddled sentences may enable the reviewers to focus on providing advice on the content rather than the form.

Feedback is vital to writing a good review, and should be sought from a variety of colleagues, so as to obtain a diversity of views on the draft. This may lead in some cases to conflicting views on the merits of the paper, and on how to improve it, but such a situation is better than the absence of feedback. A diversity of feedback perspectives on a literature review can help identify where the consensus view stands in the landscape of the current scientific understanding of an issue [24] .

Rule 9: Include Your Own Relevant Research, but Be Objective

In many cases, reviewers of the literature will have published studies relevant to the review they are writing. This could create a conflict of interest: how can reviewers report objectively on their own work [25] ? Some scientists may be overly enthusiastic about what they have published, and thus risk giving too much importance to their own findings in the review. However, bias could also occur in the other direction: some scientists may be unduly dismissive of their own achievements, so that they will tend to downplay their contribution (if any) to a field when reviewing it.

In general, a review of the literature should neither be a public relations brochure nor an exercise in competitive self-denial. If a reviewer is up to the job of producing a well-organized and methodical review, which flows well and provides a service to the readership, then it should be possible to be objective in reviewing one's own relevant findings. In reviews written by multiple authors, this may be achieved by assigning the review of the results of a coauthor to different coauthors.

Rule 10: Be Up-to-Date, but Do Not Forget Older Studies

Given the progressive acceleration in the publication of scientific papers, today's reviews of the literature need awareness not just of the overall direction and achievements of a field of inquiry, but also of the latest studies, so as not to become out-of-date before they have been published. Ideally, a literature review should not identify as a major research gap an issue that has just been addressed in a series of papers in press (the same applies, of course, to older, overlooked studies (“sleeping beauties” [26] )). This implies that literature reviewers would do well to keep an eye on electronic lists of papers in press, given that it can take months before these appear in scientific databases. Some reviews declare that they have scanned the literature up to a certain point in time, but given that peer review can be a rather lengthy process, a full search for newly appeared literature at the revision stage may be worthwhile. Assessing the contribution of papers that have just appeared is particularly challenging, because there is little perspective with which to gauge their significance and impact on further research and society.

Inevitably, new papers on the reviewed topic (including independently written literature reviews) will appear from all quarters after the review has been published, so that there may soon be the need for an updated review. But this is the nature of science [27]–[32]. I wish everybody good luck with writing a review of the literature.

Acknowledgments

Many thanks to M. Barbosa, K. Dehnen-Schmutz, T. Döring, D. Fontaneto, M. Garbelotto, O. Holdenrieder, M. Jeger, D. Lonsdale, A. MacLeod, P. Mills, M. Moslonka-Lefebvre, G. Stancanelli, P. Weisberg, and X. Xu for insights and discussions, and to P. Bourne, T. Matoni, and D. Smith for helpful comments on a previous draft.

Funding Statement

This work was funded by the French Foundation for Research on Biodiversity (FRB) through its Centre for Synthesis and Analysis of Biodiversity data (CESAB), as part of the NETSEED research project. The funders had no role in the preparation of the manuscript.


Computer Science > Computation and Language

Title: ReALM: Reference Resolution As Language Modeling

Abstract: Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds. This context includes both previous turns and context that pertains to non-conversational entities, such as entities on the user's screen or those running in the background. While LLMs have been shown to be extremely powerful for a variety of tasks, their use in reference resolution, particularly for non-conversational entities, remains underutilized. This paper demonstrates how LLMs can be used to create an extremely effective system to resolve references of various types, by showing how reference resolution can be converted into a language modeling problem, despite involving forms of entities like those on screen that are not traditionally conducive to being reduced to a text-only modality. We demonstrate large improvements over an existing system with similar functionality across different types of references, with our smallest model obtaining absolute gains of over 5% for on-screen references. We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it.



COMMENTS

  1. Internet Archive Scholar

    Search Millions of Research Papers. This fulltext search index includes over 35 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest Open Access conference proceedings and preprints crawled from the World Wide Web.

  2. Classics

    Classics. PNAS Classics are landmark papers from the archives of PNAS—significant scientific reports that have withstood the test of time. The authors of PNAS Classics opened new avenues of research within their respective fields. These groundbreaking discoveries offer insights into how the best science is practiced.

  3. When is the evidence too old?

    A few weeks ago, when submitting an abstract to a nursing conference, I was suddenly faced with a dilemma about age. Not my own age, but the age of evidence I was using to support my work. One key element of the submission criteria was to provide five research citations to support the abstract, and all citations were to be less than ten years old.

  4. When is data too old to inform nursing science and practice?

    A long delay between research data collection and publication time can impede clinicians from having the most up-to-date information to inform their practice. This is concerning since an often-cited statistic states that it takes an average of 17 years for 14% of research evidence to be widely implemented into clinical practice (Westfall et al ...

  5. Home

    Advanced. Journal List. PubMed Central ® (PMC) is a free full-text archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health's National Library of Medicine (NIH/NLM).

  6. publications

    Specifically I would like to know if any academics out there (especially in the fields of Maths, Physics, and Engineering) know of good resources for old papers/journal articles. Most of the papers of interest to me are in German and are between the years 1900 and 1930. A lot of these journals do not exist anymore or were renamed.

  7. Classic Papers: Articles That Have Stood The Test of Time

    The list of classic papers includes articles that presented new research. It specifically excludes review articles, introductory articles, editorials, guidelines, commentaries, etc. It also excludes articles with fewer than 20 citations and, for now, is limited to articles written in English.

  8. Why are Authors Citing Older Papers?

    Say papers cite on average 30 other papers: 10 classics, 10 established mid-age papers, and 10 recent papers. In five years' time, the equivalent paper still cites 8 of those classics, plus 5 of those previously established papers that are now classics, plus 7 of the then-recent papers that are now mid-aged.

  9. Older papers are increasingly remembered—and cited

    But a 2008 study published in Science reached the opposite conclusion: Obsolescence has accelerated over the past 2 decades as journals have gone online, with authors tending to ignore older papers. For a study to mark Google Scholar's 10th anniversary celebration, its researchers analyzed scientific papers published between 1990 and 2013. They ...

  10. The growth of papers is crowding out old classics

    Over the past decade, the number of papers published each year has grown to around 2.5 million. The advance of online-only "mega-journals" has been a significant driver of this increase, with PLOS ...

  11. citations

    I have come across a few times in research papers or in books, where the authors refer a past (mostly an old) research paper as classic paper. For example, in a book Bratko refers the following: The procedural meaning of Prolog is based on the resolution principle for mechanical theorem proving introduced by Robinson in his classic paper (1965 ...

  12. OATD

    OATD.org aims to be the best possible resource for finding open access graduate theses and dissertations published around the world. Metadata (information about the theses) comes from over 1100 colleges, universities, and research institutions. OATD currently indexes 7,426,620 theses and dissertations. About OATD (our FAQ). Visual OATD.org

  13. Using Old Data: When Is It Appropriate?

    Defining what is "old" depends on the context. A 15-year-old dog is old, but a 15-year-old human is young. A good starting point for deciding if data are old is considering whether leading popular press publications, such as the Wall Street Journal, would report the results of a study using the data. The answer, no doubt, varies depending ...

  14. How Are Academic Age, Productivity and Collaboration Related to Citing

    After examining the journal we realized that BAMS publishes primarily explanatory papers, not original research articles. Because genre is one of the main determinants of referencing behavior, we removed this journal from the analysis. ... However, the old fraction (fraction of references older than 10 years, ...

  15. How to find an academic research paper

    Reach out to the journal and the scholar. (The scholar's email is often on the abstract page. Also, scholars generally have an easy-to-find webpage.) One is likely to give you a free copy of the paper, especially if you are a member of the press. In regular Google, search for the study by title and you might find a free version.

  16. How to find old and well-cited papers?

    There is a Python code that can be used to filter the search results of "Google Scholar" based on the number of cites, the number of cites per year, year, etc. You can find the code and some ...

  17. Literature review as a research methodology: An ...

    This paper discusses literature review as a methodology for conducting research and offers an overview of different types of reviews, as well as some guidelines to how to both conduct and evaluate a literature review paper. ... the author begins by describing previous research to map and assess the research area to motivate the aim of the study ...

  18. Journal Of Research Past Papers

    In July 1928 the Scientific Papers and Technologic Papers were combined and issued under the title Bureau of Standards Journal of Research. Chemistry, Energy and Mathematics and statistics. Created August 1, 2012, Updated January 8, 2021. Journal of Research of the National Institute of Standards and Technology, 1989 - Present.

  19. Research results have expiration dates: ensuring timely systematic

    Introduction. Systematic reviews are on the crest of a wave of popularity in the health care disciplines as the pressure for evidence-based practice grows more intense. Systematic reviews of research, especially syntheses of research findings, are advanced as a way to make sense of the hundreds of results of the many studies conducted in common ...

  20. Research papers that are free to download in PDF legally (open access

    Research paper is a written report which contains the results of original scientific research (primary research article) or the review of published scientific papers on one or several science topics (review article). In primary research articles, the authors give vital information about the research that allows other members of the scientific ...

  21. FAQ: How old should or can a source be for my research?

    A good rule of thumb is to use sources published in the past 10 years for research in the arts, humanities, literature, history, etc. For faster-paced fields, sources published in the past 2-3 years are a good benchmark, since these sources are more current and reflect the newest discoveries, theories, processes, or best practices. Use the ...

  22. Reviewing Assessment Tools for Measuring Country Statistical Capacity

    This paper offers the first review study that fills this gap, paying particular attention to data and practical measurement challenges. It compares the World Bank's recently developed Statistical Performance Indicators and Index with other widely used indexes, such as the Open Data Inventory index, the Global Data Barometer index, and other ...

  23. Mandating indoor air quality for public buildings

    Vol 383, Issue 6690. pp. 1418 - 1420. DOI: 10.1126/science.adl0677. People living in urban and industrialized societies, which are expanding globally, spend more than 90% of their time in the indoor environment, breathing indoor air (IA). Despite decades of research and advocacy, most countries do not have legislated indoor air quality (IAQ ...

  24. Ten Simple Rules for Writing a Literature Review

    Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications. For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively. Given such mountains of papers, scientists cannot be expected to examine in detail every ...

  25. Historical Newspapers from the 1700's-2000s

    The largest online newspaper archive, established in 2012. Used by millions for genealogy and family history, historical research, crime investigations, journalism, and entertainment. Search for ...

  26. [2403.20329] ReALM: Reference Resolution As Language Modeling

    ReALM: Reference Resolution As Language Modeling. Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds. This context includes both previous turns and context that pertains to non-conversational entities, such as entities on the user's screen or those running in the ...

  27. Seattle's AI2 Incubator launches online forum for spotlighting research

    Seattle's AI2 Incubator launched a new online forum called Harmonious for discussing research papers and advances related to artificial intelligence, saying the aim was to cut through the glut ...
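Item 16 above points to Python code for filtering Google Scholar search results by citation counts; that code is not reproduced here, but the underlying idea — keep papers that are both old and still well cited per year — can be sketched in a few self-contained lines. The `Paper` records, thresholds, and example data below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    citations: int

def well_cited_old_papers(papers, current_year, min_age=10, min_cites_per_year=5.0):
    """Return papers at least `min_age` years old whose average yearly
    citation rate meets `min_cites_per_year`, oldest first."""
    selected = []
    for p in papers:
        age = current_year - p.year
        if age >= min_age and p.citations / max(age, 1) >= min_cites_per_year:
            selected.append(p)
    return sorted(selected, key=lambda p: (p.year, -p.citations))

papers = [
    Paper("Classic resolution paper", 1965, 12000),
    Paper("Recent preprint", 2023, 40),
    Paper("Mid-age, lightly cited", 2010, 30),
]
print([p.title for p in well_cited_old_papers(papers, current_year=2024)])
# → ['Classic resolution paper']
```

Normalizing by age (citations per year) rather than using raw counts keeps a fair comparison between a 1965 classic and a 2010 paper; the thresholds are field-dependent and would need tuning.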