
Guide to Sources for Finding Unpublished Research


  • Research Networks
  • Conference Proceedings
  • Clinical Research in Progress
  • Grey Literature
  • Institutional Repositories
  • Preprint Servers
  • Finding Theses

This guide takes you through the tools and resources for finding research in progress and unpublished research in Paramedicine. 

What do we mean by unpublished?  

Typically we mean anything that is publicly available on the internet but isn't formally published in a journal article or conference proceedings. By their nature these "unpublications" are varied, but they might include things like:

  • Preprints: work in progress or an early version of an article intended for publication, made available for comment by interested researchers,
  • Presentations, posters, and conference papers published on personal websites or research networks like ResearchGate or Mendeley,
  • Theses and dissertations published on the web or through repositories.

Unpublished research can be harder to find for a number of reasons. There is no one place to look, so you have to dig a little deeper; the tools you can use to do this are covered in this guide. Also, there isn't that much of it. Paramedic researchers are relatively few and widely dispersed, both geographically and across different organizations (academic and EMS/ambulance services). Compared to similar areas, paramedic research is in the early stages of development. To use an analogy, paramedic research is still taxiing up the runway while other areas are already up and flying. Finding it is not impossible; it's just harder than in more established research areas.

Why would you want to look?

If you are wondering why you would want to search for unpublished material, there could be a number of reasons:

  • Completeness: if you need to cover a topic completely, including work in progress and projects and ideas that haven't made it to formal publication,
  • Real-world examples and case studies: not every project or implementation will make it to formal publication, but it may be reported informally as a presentation, thesis, or dissertation,
  • Currency: the lengthy publication process encourages researchers to find alternative routes to promote research in progress, share ideas, and inform current practice. Typically this means preprints, but there are other informal methods such as copies of posters and presentations.
  • Last Updated: Nov 9, 2023 10:51 PM
  • URL: https://ambulance.libguides.com/unpublishedresearch

Preemption Check Checklist


Step 5: Searching for Unpublished Articles

The publication process takes a long time—sometimes a year or more—so it's important to search for articles on your topic that have already been written but not yet published. SSRN and bepress are the best sources for unpublished articles and working papers:

  • Social Science Research Network (SSRN): disseminates abstracts and full-text documents. Try starting with a broad keyword search, then use the "Search Within Results" box to narrow your results. For case studies, click on Search and select Title Only, then follow the instructions above (the asterisk doesn't work in SSRN, so you have to type "case study" or "case studies").
  • bepress Legal Repository: the Berkeley Electronic Press hosts working papers from many law schools, including Yale, Berkeley, Michigan, and Virginia, and is especially useful for law and economics research. Try both a keyword search and browsing by "Subject Areas" to find articles on your topic. The searching is unreliable, but you can browse papers by topic.
  • Google Scholar: useful for finding conference papers and other grey literature not published in article databases.

Searching for Conferences & Workshops

Check the Legal Scholarship Blog for conferences, workshops, and calls for papers on your topic. You may find that a law journal is hosting an entire symposium on your topic, or a law professor is currently researching your topic. This will alert you to potential preemption issues that may crop up down the road.

  • Legal Scholarship Blog: carries news of upcoming legal academic conferences. Use the search box (top right corner) or browse the subjects listed under "Categories" to find conferences related to your topic.

Searching for Blog Posts

Search for law blog posts on your topic. While a blog post cannot preempt a scholarly article or paper on the same topic, it can help you identify scholars who are interested in your topic. You can then review their published and unpublished works, and set alerts to receive notification of their future publications.

  • Justia BlawgSearch: sort by date instead of relevance to see the most recent posts on your topic. If you get too many results with a keyword search, try browsing the subjects listed under "Categories."


  • Updated: Jan 10, 2023 2:55 PM
  • URL: https://guides.lib.uchicago.edu/preemption


Out of sight but not out of mind: how to search for unpublished clinical trial evidence

  • An-Wen Chan, assistant professor and Phelan scientist
  • Women’s College Research Institute, University of Toronto, Toronto, Ontario, Canada
  • Correspondence to: A-W Chan anwen.chan{at}utoronto.ca

A key challenge in conducting systematic reviews is to identify the existence and results of unpublished trials, and unreported methods and outcomes within published trials. An-Wen Chan provides guidance for reviewers on adopting a comprehensive strategy to search beyond the published literature

Summary points

The validity of systematic reviews relies on the identification of all relevant evidence

Systematic reviewers should search for unpublished information on the methods and results of published and unpublished clinical trials

The potential sources of unpublished information on clinical trials have expanded over recent years

Recognition of the strengths and limitations of these key information sources can help to identify areas for further emphasis and improvement

Systematic reviews of randomised trials play a key role in guiding patient care and health policy. Their validity depends to a large extent on reviewers’ ability to retrieve relevant information from all existing trials. Unfortunately, about half of clinical trials remain unpublished after receiving ethics approval—particularly those with statistically non-significant findings. 1 Even when published, most journal articles do not report all of the outcome data or key methodological information. 2 3 The overall result is that the published literature tends to overestimate the efficacy and underestimate the harms of a given intervention, while providing insufficient information for readers to evaluate the risk of bias.

It is thus important that systematic reviewers adopt a comprehensive strategy to search beyond the published literature. The optimal systematic review would have complete information about every trial—the full protocol, final study report, raw dataset, and any journal publications and regulatory submissions. 4 The eligibility and risk of bias for each trial could then be evaluated, regardless of its publication status.

There are several potential sources of unpublished information on trial methods and results (table). These sources can help to identify the existence and results of unpublished trials, as well as unreported outcomes within published trials. They can also provide methodological information that facilitates assessment of risk of bias, including the detection of discrepancies between unpublished and published methods. 5 6 Systematic reviewers should consider using all potential information sources as part of their search strategy, while keeping in mind the strengths and limitations of each source (table).

Potential sources of unpublished information on trial methods and results


Trial registries and results databases

Trial registries serve as a readily accessible online resource for identifying unpublished trials and unreported outcomes. Since 2005, prospective trial registration has gained broad acceptance as an important means of enhancing transparency and tracking the existence of clinical trials at inception. Key stakeholders—including medical journal editors, legislators, and funding agencies—provide enforcement mechanisms that have greatly improved adherence to registration practices.

Basic protocol information on ongoing and completed trials of any intervention type can be retrieved via the World Health Organization’s International Clinical Trials Registry Platform Search Portal ( www.who.int/trialsearch/ ). This searches records from national and international trial registries that meet certain standards, including WHO Primary Registries and ClinicalTrials.gov. Users can search the main registry fields using key words related to the study topic, sponsor, recruitment status, and sites. When the same trial is registered in multiple registries, the WHO Search Portal displays similar records together to facilitate identification of duplicate records. Some registry websites also provide access to the history of changes to the registered information fields.

In addition to basic protocol information, certain registries house study results. Since 2008, ClinicalTrials.gov has had the legislative mandate to record summary results for trials (other than phase I) that involve a drug or device regulated by the US Food and Drug Administration. 7 Sponsors are required by law to provide summary baseline and outcome data, which are displayed in a standard format.
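Registry records like these can also be retrieved programmatically. As an illustration only, the sketch below builds a query URL for the modern ClinicalTrials.gov REST API; that API postdates this article, and the parameter names used (`query.cond`, `filter.overallStatus`, `pageSize`) are assumptions about its current v2 interface rather than anything described in the text.

```python
from urllib.parse import urlencode

# Base endpoint of the ClinicalTrials.gov v2 REST API (an assumption:
# this interface postdates the article and may change).
BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_registry_query(condition, status=None, page_size=20):
    """Build a registry query URL for a topic search.

    Fetching the returned URL yields JSON records containing registered
    protocol fields (title, sponsor, status, outcomes, ...), which a
    reviewer could scan for trials that never reached publication.
    """
    params = {"query.cond": condition, "pageSize": page_size}
    if status:
        # e.g. "COMPLETED" to focus on trials that should have results
        params["filter.overallStatus"] = status
    return f"{BASE}?{urlencode(params)}"

print(build_registry_query("out-of-hospital cardiac arrest", status="COMPLETED"))
```

A reviewer would still need to deduplicate the returned records against other registries (as the WHO Search Portal does) and against the published literature.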

Some pharmaceutical companies also maintain their own voluntary trial registers and results databases for drugs that have received regulatory approval. Systematic reviews have previously incorporated unpublished data retrieved from industry registers. 8 These public registers provide a synopsis of trial methods and summary results as dictated by company policy. Information is presented in various formats with non-standardised content. For certain companies, there may be information posted for older trials of some marketed interventions. It should be noted that ClinicalStudyResults.org, the results database launched by the International Federation of Pharmaceutical Manufacturers and Associations in 2004, was to be discontinued by the end of 2011 because of overlap with other registries.

Beyond basic protocol information and results, trial registries have the potential to be the repository for full protocols. Legislation in the US allows for the possibility of requiring submission of full protocols to ClinicalTrials.gov for applicable trials. 7 Furthermore, certain pharmaceutical companies are recognising the importance of public access to full protocols and have committed to posting them on their register for all published trials. 9 These are promising first steps towards facilitating access to protocols for all trials, regardless of publication status.

Despite their importance, trial registries and results databases have several limitations. Firstly, there is no universal mechanism for ensuring adherence to standards for registration or results disclosure, meaning that not all trials will be captured. Journal policy will be ineffective for trials that are not intended for publication, while current legislation does not pertain to procedural, educational, and other unregulated interventions. Secondly, the quality of registered information is highly variable and often uninformative. 7 10 11 12 13 Changes to registered information are common, 12 meaning that systematic reviewers should review the history of amendments for each registry record. Thirdly, even when a trial is fully registered with complete summary results presented, there is a limited amount of methodological information available that is largely inadequate for assessing the risk of bias. 10 This concern would be addressed if full protocols were made available on the registries. 9 14 Finally, most trials will not have been registered prior to the introduction of International Committee of Medical Journal Editors policy and WHO standards in 2005.

Regulatory agencies

Regulatory agencies have access to substantially more clinical trial information than the healthcare providers, patients, and researchers who use and evaluate the interventions. Successful attempts to obtain access to regulatory data have previously necessitated litigation and incurred lengthy delays. 15 16 17 Over recent years, regulatory agencies have recognised the need to address this untenable situation by increasing public access to information from regulatory submissions. 18 19

There are currently two main routes for reviewers to obtain trial data from regulatory agencies—scientific reviews posted in online databases, 20 21 and written requests to regulatory agencies. 15 Scientific reviews of regulatory submissions contain a narrative summary of the clinical trials that form the basis for approval of regulated drugs. These documents are generally available on searchable internet databases provided by the US Food and Drug Administration and the European Medicines Agency:

Drugs@FDA— www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm

European public assessment reports (EPAR)— www.ema.europa.eu/ema/index.jsp?curl=pages/medicines/landing/epar_search.jsp&murl=menus/medicines/medicines.jsp&mid=WC0b01ac058001d125&jsenabled=true

Relevant clinical trial summaries are generally labelled as “Statistical review” on Drugs@FDA, and “Scientific Discussion” in EPAR. The Pharmaceuticals and Medical Devices Agency in Japan ( http://www.pmda.go.jp/english/service/approved.html ) also posts a limited number of reviews with English translations for select drugs and devices.

Limitations of the scientific reviews obtained from regulatory agency websites include the variable presentation format and the lack of text search facility for some scanned documents. In addition, the content is not standardised, information deemed to be commercially sensitive is redacted, and insufficient methodological detail is provided to assess the risk of bias for a trial. Furthermore, many trials are not included in regulatory databases, such as trials of devices and non-regulated interventions. Most trials conducted after regulatory approval would not be captured. For the European Medicines Agency, drugs that are approved by regulators in individual countries but not the central agency will not have public assessment reports available. Drugs@FDA includes information on withdrawn drugs but does not provide scientific reviews for unapproved drugs or drugs approved before 1998.

A second approach has the potential to yield more detailed information from regulatory agencies. Reviewers can make written requests to access the trial protocols and detailed clinical study reports submitted by sponsors. As of December 2010, the European Medicines Agency has committed to accommodating such requests for documents contained in regulatory submissions for drugs, subject to redaction of commercially sensitive information. 19 This important advance will be expanded in the future to include proactive public disclosure of documents on the European Medicines Agency website as part of routine practice. The US Food and Drug Administration has previously granted access to clinical trial documents in response to litigation relating to freedom of information requests 16 17 and is also exploring ways to increase transparency. 18

Limitations of this second approach include potentially lengthy delays in receiving a final decision from regulators, resource-intensive appeals or litigation for denied requests, redaction of potentially important information from documents, and lack of information on interventions other than regulated drugs and devices.

Contacting trialists and sponsors

Systematic reviewers have had variable success in contacting trialists, clinicians, and sponsors for information about unpublished trials. 4 22 23 24 25 Efforts to obtain full trial protocols from trialists have been largely disappointing. 26 27 On the other hand, surveys soliciting information on the existence and statistical significance of unreported outcomes for published trials have had higher response rates from trialists. 28 29 These surveys have also yielded information about the reasons for changing or omitting trial outcomes.

Logistical obstacles include the burden of identifying up to date contact information and sending inquiries and reminders to a potentially large number of individuals who might have knowledge about existing trials. It is also likely that trials for which additional information is provided by investigators or sponsors will differ systematically from trials without such information provided.

Systematic reviewers will need to weigh up the potential yield and costs of contacting investigators and sponsors, which will vary depending on the topic and scope of the review. At a minimum, for each trial identified in the systematic review, it would be reasonable for reviewers to contact investigators to request full protocols as well as information on unreported outcomes, unpublished trials, and other areas of potential bias.

Other sources of information

In some cases trial protocols and results can be obtained from litigation documents. Examples include researchers who had access to internal company documents while serving as expert witnesses in litigation against pharmaceutical companies. 30 31 32 In many jurisdictions, these documents are deemed confidential and their use is restricted to the purposes of the particular litigation—unless unsealed through a court order or agreement by the company. Systematic reviewers who are external to the litigation could submit a request to have the documents unsealed by the court to serve the public interest, although this approach has not been widely tested for pharmaceutical data. More extensive experience with public availability and archiving of litigation documents exists for other industries. 33

Another potential source of information consists of conference abstracts. 34 The Cochrane handbook lists several databases of abstracts that can be useful to search. 35 Given the limited amount of information on trial methods and results contained in abstracts, their usefulness lies mainly with identifying the existence of a trial and the types of outcomes measured.

Finally, an internet search of key words can be done to locate full trial protocols in a relatively short amount of time. The median search time in one systematic review was 12 minutes per trial, with protocols being found for five of 42 trials. 36 The retrieved documents are often those posted on the websites of specific trials, trial groups, and funders.

Conclusions

Given the dangers of selective data suppression and biased study design or conduct, it is critical that systematic reviewers search beyond the literature for additional information on both published and unpublished trials. The potential sources of information on study methods and results have expanded over recent years, particularly for pharmaceutical trials. These sources can provide complementary trial information that can be collated and compared to identify discrepancies and evaluate the risk of bias.

It is important to recognise the limitations and variable yield of existing information sources. Much work remains to ensure that comprehensive, high quality information is publicly available for all trials, including full protocols, clinical study reports, and raw datasets. 4 14 37 There is also a need to develop rigorous methods for reviewing the large amount of unpublished trial information that can potentially be retrieved. 4 15 Only with continued advances in access to clinical trial information can the systematic evaluation of health interventions become more accurate, efficient, and reliable for patient care.

Cite this as: BMJ 2012;344:d8013

  • Editorials doi:10.1136/bmj.d8158

Contributors: A-WC was responsible for interpretation of information, drafting the article, and final approval of the version to be published.

Competing interests: All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Provenance and peer review: Commissioned; externally peer reviewed.

1. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010;14:1-193.
2. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008;3:e3081.
3. Hopewell S, Dutton S, Yu LM, Chan A-W, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ 2010;340:c723.
4. Jefferson T, Doshi P, Thompson M, Heneghan C. Ensuring safe and effective evidence for drugs—who can do what it takes? BMJ 2011;342:c7258.
5. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928.
6. Chan A-W, Hróbjartsson A, Jørgensen KJ, Gøtzsche PC, Altman DG. Discrepancies in sample size calculations and data analyses reported in randomized trials: comparison of publications with protocols. BMJ 2008;337:a2299.
7. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database—update and key issues. N Engl J Med 2011;364:852-60.
8. Nissen SE, Wolski K. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. N Engl J Med 2007;356:2457-71.
9. GlaxoSmithKline. Public disclosure of clinical research. Global Public Policy Issues, October 2011. www.gsk.com/policies/GSK-on-disclosure-of-clinical-trial-information.pdf.
10. Reveiz L, Chan A-W, Krleža-Jerić K, Granados CE, Pinart M, Etxeandia I, et al. Reporting of methodologic information on trial registries for quality assessment: a study of trial records retrieved from the WHO search portal. PLoS One 2010;5:e12484.
11. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009;6:e1000144.
12. Huić M, Marušić M, Marušić A. Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy. PLoS One 2011;6:e25258.
13. Viergever RF, Ghersi D. The quality of registration of clinical trials. PLoS One 2011;6:e14701.
14. Chan A-W. Access to clinical trial data. BMJ 2011;342:d80.
15. Gøtzsche PC, Jørgensen AW. Opening up data at the European Medicines Agency. BMJ 2011;342:d2686.
16. Kesselheim AS, Mello MM. Confidentiality laws and secrecy in medical research: improving public access to data on drug safety. Health Aff (Millwood) 2007;26:483-91.
17. Lurie P, Zieve A. Sometimes the silence can be like the thunder: access to pharmaceutical data at the FDA. Law Contemporary Problems 2008;69:85-97.
18. Asamoah AK, Sharfstein JM. Transparency at the Food and Drug Administration. N Engl J Med 2010;362:2341-3.
19. European Medicines Agency. European Medicines Agency policy on access to documents (related to medicinal products for human and veterinary use), POLICY/0043 (EMA/110196/2006). 2010.
20. Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med 2008;5:e217.
21. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.
22. Reveiz L, Cardona AF, Ospina EG, de Agular S. An e-mail survey identified unpublished studies for systematic reviews. J Clin Epidemiol 2006;59:755-8.
23. McGrath J, Davies G, Soares K. Writing to authors of systematic reviews elicited further data in 17% of cases. BMJ 1998;316:631.
24. Clarke M, Greaves L. Identifying relevant studies for systematic reviews. BMJ 1995;310:741.
25. Hetherington J, Dickersin K, Chalmers I, Meinert CL. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 1989;84:374-80.
26. Smyth RM, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ 2011;342:c7153.
27. Hahn S, Williamson PR, Hutton JL. Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract 2002;8:353-9.
28. Chan A-W, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ 2005;330:753.
29. Chan A-W, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 2004;291:2457-65.
30. Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med 2009;361:1963-71.
31. Ross JS, Madigan D, Hill KP, Egilman DS, Wang Y, Krumholz HM. Pooled analysis of rofecoxib placebo-controlled clinical trial data: lessons for postmarket pharmaceutical safety surveillance. Arch Intern Med 2009;169:1976-85.
32. Psaty BM, Kronmal RA. Reporting mortality findings in trials of rofecoxib for Alzheimer disease or cognitive impairment: a case study based on documents from rofecoxib litigation. JAMA 2008;299:1813-7.
33. Bero L. Implications of the tobacco industry documents for public health and policy. Annu Rev Public Health 2003;24:267-88.
34. Dundar Y, Dodd S, Dickson R, Walley T, Haycox A, Williamson PR. Comparison of conference abstracts and presentations with full-text articles in the health technology assessments of rapidly evolving technologies. Health Technol Assess 2006;10(5).
35. Higgins JPT, Green S, eds. 6.2.2.4 Conference abstracts or proceedings. In: Cochrane handbook for systematic reviews of interventions. Version 5.1.0. Cochrane Collaboration, 2011. www.cochrane-handbook.org.
36. Hartling L, Bond K, Vandermeer B, Seida J, Dryden DM, Rowe BH. Applying the risk of bias tool in a systematic review of combination long-acting beta-agonists and inhaled corticosteroids for persistent asthma. PLoS One 2011;6:e17242.
37. Krumholz HM, Ross JS. A model for dissemination and independent analysis of industry data. JAMA 2011;306:1593-4.


Writing about Design

Principles and tips for design-oriented research.


How to get access to articles that are not Open Access

Although there is a strong push within the academic community to make research freely accessible ("open access") to everyone, many research papers are unfortunately still published "behind the paywall". That means that the publisher of the journal, book, or conference proceedings requires someone to subscribe to the content. Typically that payer is a university library, which can then make the content accessible to the university's students and employees.

But making use of this subscription can be complicated. Even if the person seeking papers – such as a student or a researcher – has an affiliation with a university, it is not self-evident how to get past the paywall and download the paper.

This post presents ways to get through the paywall; they require that you have a user account at a university.

This post assumes that the seeker's main literature search tool is Google Scholar, due to its simple use and excellent coverage of different research papers.

Google Scholar with basic internet connection: limited access to “full texts”

It is important to notice that Google Scholar provides access to articles differently depending on the type of internet connection. The difference shows in the links that Scholar displays in its search results.

When a user searches for papers with Google Scholar over an ordinary internet connection, Google Scholar helps find papers but cannot always provide the "full text". By full text, publishing companies mean the PDF that contains the article in its entirety. If access is not full text, the user can only see the title, abstract, references, and maybe the first page of the paper.

It is sometimes possible to get access to a full text even with this internet setup. This happens if:

  • The text has been published as open access (OA), or with a permission that allows the authors to upload a copy of the text to their personal public repository of papers.
  • Someone has uploaded the text somewhere on the internet even though it is not OA, possibly infringing the copyright of that content, and Google has found that copy.
  • The paper's preprint or so-called "accepted manuscript" version is available from the authors of the paper. A preprint is often an earlier version of the paper: similar or even identical in content to the final one, but not copy-edited into the final layout, and therefore lacking correct page numbers and possibly containing minor typing errors.

The screenshot below shows an example of what a Google Scholar search can produce.

From the screenshot, different kinds of links can be seen. They provide different levels of access to the content:

  • [PDF] : the full text can be downloaded from the linked web page
  • [HTML] : the full text can usually be downloaded, but not always
  • Getit@Grifols : this is a link whose purpose is not clear to me. No full texts are available through it, anyway.
  • The paper title’s link : takes the user to the publisher’s website. The full text may or may not be downloadable there.

As a rule of thumb, if a result has no links in the right-hand column, the full-text PDF is not available from any source.

Google Scholar with university’s VPN connection

Access to papers improves dramatically if you tunnel your Internet connection through a university’s VPN service. With a VPN, all Internet traffic from your computer travels (i.e., is “tunneled”) through a designated server. Both Google Scholar and the publishing companies then recognise that you are connecting from an Internet address that is also entitled to access subscribed content.

It therefore makes sense to use the VPN when using Google Scholar. Here is how the same search results look with the VPN. Note that the two previously inaccessible papers are now downloadable via a “ sfx@Aalto ” link. Sadly, Klein & Weitzenfeld’s text in Educational Psychologist still remains inaccessible: my university does not have a subscription to its contents:

I cannot be sure, but I believe all universities provide a similar service to Aalto University’s, so a VPN opens doors to papers (for those who have a university account, of course). Here are the instructions for Aalto users on how to install the VPN client on your computer.

Accessing papers without VPN

It is not necessary to use a VPN to access texts, however. An alternative is to navigate to a university library’s search interface and download the paper through it. Every library works slightly differently in serving its users. At Aalto University, you can enter e.g. the paper’s title, and the service tries to find a matching piece of content among its digitally subscribed sources. Aalto University’s article search interface is here .

A good idea is to use the text’s DOI (digital object identifier) as the search term instead of the paper’s name, because otherwise you may also get lots of irrelevant search hits. The DOI uniquely identifies the paper and can always be found somewhere on the publisher’s website.
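The DOI is also enough to reach the publisher’s page directly: prefixing it with `https://doi.org/` yields a resolver link. The small helper below (hypothetical, for illustration) validates the characteristic `10.<registrant>/<suffix>` shape and builds that link.

```python
import re

# DOIs start with "10.", a 4-9 digit registrant code, "/", then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_to_url(doi):
    """Return the doi.org resolver URL for a DOI string, or raise ValueError."""
    doi = doi.strip()
    if not DOI_PATTERN.match(doi):
        raise ValueError("not a valid-looking DOI: %r" % doi)
    return "https://doi.org/" + doi

print(doi_to_url("10.1000/xyz123"))  # https://doi.org/10.1000/xyz123
```

Pasting the resulting link into a browser redirects to the publisher’s landing page for the paper.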

When you find the desired search result, click on the link that promises to take you to the electronic full text. At that stage you will need to provide your user name and password to prove that you are entitled to access the content.

Another Aalto-specific tip is to use the list of digital paper libraries at libproxy.aalto.fi . If you know the publisher of a paper you are interested in, you can go to that library via a link on the libproxy page. As long as you browse papers in that library, you are surfing within the paywall and will be able to download full texts.
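Many library gateways of this kind are EZproxy installations, which accept the target URL after a `login?url=` prefix so that browsing stays inside the paywall. Whether libproxy.aalto.fi follows this exact convention is an assumption on my part (the gateway host below is a placeholder), so check your own library’s instructions.

```python
def proxied_url(target, gateway="https://libproxy.example.edu"):
    """Build an EZproxy-style login link that routes `target`
    through the library gateway (placeholder host)."""
    return gateway + "/login?url=" + target

print(proxied_url("https://publisher.example.com/article/123"))
# https://libproxy.example.edu/login?url=https://publisher.example.com/article/123
```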

Paper request from ResearchGate

As a last resort, you may turn to ResearchGate  or Academia.edu . These are services where researchers can upload their works and create profile pages. If a researcher has uploaded the paper to the service, ResearchGate/Academia.edu provides a feature where you can ask the researcher to send you a copy of the paper privately.

A word of advice: it is best to use this feature only after you have tried the other possibilities above and they have failed. In my personal experience, it is always irritating to react to ResearchGate’s paper requests when the requested paper is also available as Open Access. A request in such a situation only shows that the requester has not spent even a minimal amount of effort to find it. I get paper requests from ResearchGate approximately once a week; much more cited researchers therefore probably get several requests each day.

Final words

As can be seen, retrieving a text can require quite a bit of work. It is better to minimise the work you spend on downloading: always download and store the paper on your computer, and make sure that you can easily find it there later. For example, use a systematic file-naming principle (e.g., authorname year paper title.pdf) for all your papers, or start using a reference manager such as Mendeley or Zotero. That will save you time next time, and lets you annotate the texts that you read with your own observations.
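A file-naming principle like the one suggested above is easy to automate. The sanitisation rules below are one possible choice, not a standard, and the paper title in the example is invented:

```python
import re

def paper_filename(author, year, title, max_len=80):
    """Build an 'authorname year paper title.pdf' file name,
    dropping characters that are unsafe in file names and
    capping the length of the stem."""
    stem = "%s %s %s" % (author, year, title)
    stem = re.sub(r'[<>:"/\\|?*]', "", stem)  # filesystem-hostile characters
    stem = re.sub(r"\s+", " ", stem).strip()  # collapse runs of whitespace
    return stem[:max_len].rstrip() + ".pdf"

print(paper_filename("Higgins", 1999, "Obtaining missing data: a randomised comparison"))
# Higgins 1999 Obtaining missing data a randomised comparison.pdf
```

Running every downloaded PDF through one such function keeps the collection consistently named and easy to search.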

Acknowledgments

Thanks to Markku Reunanen for informing me about the digital library listing at libproxy.aalto.fi.


Open Access Theses and Dissertations

Advanced research and scholarship. Theses and dissertations, free to find, free to use.


About OATD.org

OATD.org aims to be the best possible resource for finding open access graduate theses and dissertations published around the world. Metadata (information about the theses) comes from over 1100 colleges, universities, and research institutions. OATD currently indexes 6,912,508 theses and dissertations.

About OATD (our FAQ).

Visual OATD.org

We’re happy to present several data visualizations to give an overall sense of the OATD.org collection by country of publication, language, and field of study.

You may also want to consult these sites to search for other theses:

  • Google Scholar
  • NDLTD , the Networked Digital Library of Theses and Dissertations. NDLTD provides information and a search engine for electronic theses and dissertations (ETDs), whether they are open access or not.
  • ProQuest Dissertations and Theses (PQDT), a database of dissertations and theses, whether published electronically or in print, and mostly available for purchase. Access to PQDT may be limited; consult your local library for access information.

Cochrane Database Syst Rev

Methods for obtaining unpublished data

Background

In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies conducted but not published (unpublished data), as either an abstract or full‐text paper, as well as missing data (data available to original researchers but not reported) in published abstracts or full‐text publications. The effectiveness of different methods used to obtain unpublished or missing data has not been systematically evaluated.

Objectives

To assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews.

Search methods

We identified primary studies comparing different methods of obtaining unpublished studies (data) or missing data by searching the Cochrane Methodology Register (Issue 1, 2010), MEDLINE and EMBASE (1980 to 28 April 2010). We also checked references in relevant reports and contacted researchers who were known or who were thought likely to have carried out relevant studies. We used the Science Citation Index and PubMed 'related articles' feature to identify any additional studies identified by other sources (19 June 2009).

Selection criteria

Primary studies comparing different methods of obtaining unpublished studies (data) or missing data in the healthcare setting.

Data collection and analysis

The primary outcome measure was the proportion of unpublished studies (data) or missing data obtained, as defined and reported by the authors of the included studies. Two authors independently assessed the search results, extracted data and assessed risk of bias using a standardised data extraction form. We resolved any disagreements by discussion.

Main results

Six studies met the inclusion criteria: two were randomised studies and four were observational comparative studies.

Methods to obtain missing data

Five studies, two randomised studies and three observational comparative studies, assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study).

Two studies found that correspondence with study authors by e‐mail resulted in the greatest response rate with the fewest attempts and shortest time to respond. The difference between the effect of a single request for missing information (by e‐mail or surface mail) versus a multistage approach (pre‐notification, request for missing information and active follow‐up) was not significant for response rate and completeness of information retrieved (one study). Requests for clarification of methods (one study) resulted in a greater response than requests for missing data. A well‐known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

Methods to obtain unpublished studies

One observational comparative study assessed methods to obtain unpublished studies (i.e. data for studies that have never been published). Identifying unpublished studies ahead of time and then asking the drug industry to provide further specific detail proved to be more fruitful than sending a non‐specific request.

Authors' conclusions

Those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Contacting authors by e‐mail results in the greatest response rate with the fewest number of attempts and the shortest time to respond.

Plain language summary

This methodology review was conducted to assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews. Six studies met the inclusion criteria: two were randomised studies and four were observational comparative studies.

Five studies assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study). Two studies found that correspondence with study authors by e‐mail resulted in the greatest response rate with the fewest attempts and shortest time to respond. The difference between the effect of a single request for missing information (by e‐mail or surface mail) versus a multistage approach (pre‐notification, request for missing information and active follow‐up) was not significant for response rate and completeness of information retrieved (one study). Requests for clarification of methods (one study) resulted in a greater response than requests for missing data. A well‐known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

One study assessed methods to obtain unpublished studies (i.e. data for studies that have never been published). Identifying unpublished studies ahead of time and then asking the drug industry to provide further specific detail proved to be more fruitful than sending a non‐specific request.

Description of the problem or issue

Reporting bias arises when dissemination of research findings is influenced by the nature and direction of results. Publication bias (the selective publication of research studies as a result of the strength of study findings), time‐lag bias (the rapid or delayed publication of results depending on the results) and language bias (the publication in a particular language depending on the nature and direction of the results) are typical types of reporting bias ( Higgins 2009 ).

Publication bias, especially, is a major threat to the validity of systematic reviews ( Song 2000 ; Sterne 2008 ). Hopewell et al examined the impact of grey literature (literature which has not formally been published) in meta‐analysis of randomised trials of healthcare interventions and found that published trials tend to be larger and show an overall greater treatment effect than trials from grey literature ( Hopewell 2007 ). Not making an attempt to include unpublished data in a systematic review can thus result in biased larger treatment effects ( Higgins 2009 ).

In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies conducted but not published (unpublished data) as either an abstract or full‐text paper, as well as missing data (data available to the original researchers but not reported) in published abstracts or full‐text publications. Types of data commonly missing from published papers include details of allocation concealment and blinding, information about loss to follow‐up and standard deviations. This is different from data that are 'missing' because the original researchers do not have them, but might be able to get them (e.g. a specific subgroup analysis not done by the original researchers, but which could be carried out retrospectively in response to a request from systematic review authors) or data that are missing because they were never collected by the original researchers and are not retrievable by other means (e.g. patient’s quality of life at specific time points).

Often, the search for and retrieval of unpublished and missing data delays the time to review completion.

Description of the methods being investigated

Different methods are used to search for and obtain unpublished data or missing data from studies to be included in systematic reviews.

Authors of systematic reviews informally contact colleagues to find out if they know about unpublished studies ( Greenhalgh 2005 ). In addition, formal requests for information on completed but unpublished studies, as well as ongoing studies, are sent to researchers (authors of identified included studies of the relevant review), experts in the field, research organisations and pharmaceutical companies ( Lefebvre 2008 ; Song 2000 ). Some organisations might set up websites for systematic review projects, listing the studies identified to date and inviting submission of information on studies not already listed ( Lefebvre 2008 ).

Prospective clinical trial registries, both national and international, are also searched to identify ongoing studies. Plus, registries of grey literature are searched to identify unpublished studies.

In order to obtain details about missing data (data available to the original researchers but not reported) authors of systematic reviews contact the authors of studies included in the review by telephone, e‐mail or letters by post.

How these methods might work

Approaching researchers for information about completed but never‐published studies has had varied results, ranging from willingness to share information to no response ( Greenhalgh 2005 ). The willingness of investigators of located unpublished studies to provide data may depend on the findings of the study; more favourable results may be shared more willingly ( Smith 1998 ).

Why it is important to do this review

The effectiveness of the different methods used to obtain unpublished or missing data has not been systematically evaluated. This review systematically evaluates these effects and will thus help review authors conduct their reviews more efficiently.

Criteria for considering studies for this review

Types of studies

Primary studies comparing different methods of obtaining unpublished studies (data) or missing data. We excluded studies without a comparison of methods.

Types of data

All relevant studies in the healthcare setting.

Types of methods

Any method designed to obtain unpublished studies (data) or missing data (i.e. data available to researchers but not reported).

Types of outcome measures

Primary outcomes

Methods to obtain missing data (data available to researchers but not reported in the published study).

  • Proportion of missing data obtained as defined and reported by authors.

Methods to obtain unpublished studies (data for studies that have never been published).

  • Proportion of unpublished studies (data) obtained as defined and reported by authors.

Secondary outcomes

Methods to obtain missing data (data available to the original researchers but not reported in the published study).

  • Completeness (the extent to which the data obtained answer the questions posed by those seeking the data) of missing data obtained.
  • Type (e.g. outcome data, baseline data) of missing data obtained.
  • Time taken to obtain missing data (i.e. time from when efforts start until data are obtained).
  • Number of attempts (as defined by the authors) made to obtain missing data.
  • Resources required.
  • Time taken to obtain unpublished studies (i.e. time from when efforts start until data are obtained).
  • Number of attempts (as defined by the authors) made to obtain unpublished studies (data).

Search methods for identification of studies

To identify studies we carried out both electronic and manual searches. All languages were included.

Electronic searches

We searched the Cochrane Methodology Register (CMR) (Issue 1, 2009) using the search terms in Appendix 1 . We searched MEDLINE and Ovid MEDLINE(R) In‐Process & Other Non‐Indexed Citations using OVID (1950 to 10 February 2009) ( Appendix 2 ) and adapted these terms for use in EMBASE (1980 to 2009 Week 06) ( Appendix 3 ). We conducted an updated search in EMBASE, MEDLINE and the Cochrane Methodology Register on 28 April 2010.

Searching other resources

We also checked references in relevant reports ( Horsley 2011 ) and contacted researchers who were known or who were thought likely to have carried out relevant studies. We used the Science Citation Index and PubMed 'related articles' feature to identify any additional studies identified by the sources above (19 June 2009).

Selection of studies

Two authors independently screened titles, abstracts and descriptor terms of the electronic search results for relevance based on the criteria for considering studies for this review. We obtained full‐text articles (where available) of all selected abstracts and used an eligibility form to determine final study selection. We resolved any disagreements through discussion.

Data extraction and management

Two authors independently extracted data using a standardised data extraction form. We resolved any disagreements by discussion.

Data extracted included the following.

  • Administrative details for the study ‐ identification; author(s); published or unpublished; year of publication; year in which study was conducted; details of other relevant papers cited.
  • Details of study ‐ study design; inclusion and exclusion criteria; country and location of the study.
  • Details of intervention ‐ method(s) used to obtain unpublished or missing data.
  • Details of outcomes and results ‐ proportion, completeness and type of unpublished or missing data; time taken to obtain unpublished or missing data; number of attempts made and resources required.

Assessment of risk of bias in included studies

Two authors independently evaluated the risk of bias of included studies, which included both the inherent properties of the study and the adequacy of its reporting.

For randomised studies comparing different methods to obtain data we assessed the following criteria, based on The Cochrane Collaboration's 'Risk of bias' tool and classified as adequate, inadequate or unclear:

  • generation of the allocation sequence;
  • concealment of the allocation sequence;
  • blinding of the participants, personnel and outcome assessor.

For non‐randomised studies comparing different methods to obtain data we assessed the following criteria and reported whether they were adequate, inadequate or unclear:

  • how allocation occurred;
  • attempt to balance groups by design;
  • use of blinding.

Based on these criteria, we assessed studies as being at 'high', 'low' or 'moderate' risk of bias.

Dealing with missing data

If any of the data were insufficient or missing, we sought data from the contact author of the empirical study using e‐mail. This was successful for one of the two studies ( Higgins 1999 ) for which we contacted the authors.

Data synthesis

Due to significant differences in study design, it was not possible to carry out a meta‐analysis of the included studies. Therefore the results of the individual studies are presented descriptively, reporting individual study effect measures and 95% confidence intervals where available.

Description of studies

See Characteristics of included studies and Characteristics of excluded studies .

Results of the search

Of 4768 identified abstracts and titles, we selected 18 potentially eligible publications, referring to 15 studies, for detailed independent eligibility assessment ( Figure 1 ).

[Figure 1: Study flow diagram.]

Included studies

Six studies ( Brown 2003 ; Gibson 2006 ; Guevara 2005 ; Higgins 1999 ; Milton 2001 ; Shukla 2003 ) met the inclusion criteria. Of these, five were published as abstracts for Cochrane Colloquia and one as a full paper ( Gibson 2006 ). Only one observational comparative study evaluated the effects of different methods to obtain unpublished data ( Shukla 2003 ). The other five studies, two randomised studies ( Higgins 1999 ; Milton 2001 ) and three observational comparative studies ( Brown 2003 ; Gibson 2006 ; Guevara 2005 ), evaluated different methods for obtaining missing data. Table 1 provides a summary of the interventions studied and outcomes measured.

Excluded studies

Nine studies ( Bohlius 2003 ; Eysenbach 2001 ; Hadhazy 1999 ; Hetherington 1987 ; Kelly 2002 ; Kelly 2004 ; McGrath 1998 ; Reveiz 2004 ; Wille‐Jorgensen 2001 ) did not meet the inclusion criteria as there was no comparison of different methods of obtaining missing data.

Risk of bias in included studies

Brown 2003 , Gibson 2006 , Guevara 2005 and Shukla 2003 , the four observational, comparative studies, did not report on the methodology used and therefore assessments of risk of bias for these studies are incomplete.

We assessed risk of bias for the two randomised studies by looking at the methods used for allocation sequence generation, allocation concealment and blinding. Allocation concealment was adequate for Higgins 1999 and unclear for Milton 2001 . Allocation sequence generation and blinding were not reported.

Effect of methods

Five of the six studies assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study).

Proportion of missing data obtained as defined and reported by authors

All five studies provided information on the proportion of missing data obtained.

Brown 2003 used a non‐randomised design to compare contacting 112 authors (of 139 studies) via 39 e‐mails and 73 letters. The study was designed as a comparative study but data per study arm were not reported. Twenty‐one replies (19%) were received. One study published in the period 1980‐1984 elicited no response, nine 1985‐1989 studies elicited two responses, 41 1990‐1994 studies elicited six responses, 38 1995‐1999 studies elicited eight responses and 21 2000‐2002 studies elicited four responses.

Gibson 2006 used a non‐randomised design to compare contacting authors by e‐mail, letter or both. Two hundred and forty‐one studies (40%) had missing or incomplete data. They were unable to locate 95 authors (39%). Of the remaining 146 authors, 46 (32%) responded to information requests. The response rate differed by mode of contact: letter (24%), e‐mail (47%) and both (73%). Response was significantly higher with e‐mail compared with letters (hazard ratio 2.5; 95% confidence interval (CI) 1.3 to 4.0). Combining letter and e‐mail had a higher response rate; however, it was not significantly different from using e‐mail alone (reported P = 0.36). The combination of methods (letter plus e‐mail follow‐up), rather than multiple contacts using the same method, was more effective for eliciting a response from the author. Response rates from US authors did not differ from those of other countries. The older the article, the less likely the response.

Guevara 2005 used a non‐randomised design to compare e‐mail versus letter versus fax. Fifteen authors (60%) responded to information requests. E‐mail resulted in fewer attempts and a greater response rate than post or fax. Authors of studies published after 1990 were as likely to respond (67% versus 50%, reported P = 0.45) as authors of studies published earlier. Similarly, corresponding authors were no more likely to respond (58% versus 9%, reported P = 0.44) than secondary authors, although few secondary authors were contacted.

Higgins 1999 used a randomised comparison of single request for missing information (by e‐mail or surface mail) (n = 116) versus a multistage approach involving pre‐notification, request for missing information and active follow‐up (n = 117) and found no significant difference between the two groups (risk ratio (RR) 1.04; 95% CI 0.74 to 1.45) in response rate.

Milton 2001 compared, using a randomised design, the response of clinical trial investigators to requests for information signed by either Richard Smith (RS), editor of the British Medical Journal (n = 96), or an unknown researcher (n = 48) and found no significant differences between signatory groups in response rates. By three weeks, 34% in the former group and 27% in the unknown researcher’s group had responded (odds ratio (OR) 1.35; 95% CI 0.59 to 3.11). No baseline data had been provided by three weeks. By the end of the study, at five weeks, 74% and 67% respectively had responded (OR 1.42; 95% CI 0.62 to 3.22), and 16 out of 53 studies in the RS group and five out of 27 in the unknown researcher’s group had provided baseline data (OR 1.90; 95% CI 0.55 to 6.94).
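The final odds ratio can be checked against the reported counts (16 of 53 in the RS group versus 5 of 27 in the unknown researcher's group providing baseline data); a quick arithmetic sketch:

```python
def odds_ratio(a_yes, a_total, b_yes, b_total):
    """Odds ratio of group A versus group B, from simple counts."""
    a_no = a_total - a_yes
    b_no = b_total - b_yes
    return (a_yes / a_no) / (b_yes / b_no)

# 16/53 provided baseline data in the RS-signed group, 5/27 in the other group:
# (16/37) / (5/22) ≈ 1.90, matching the reported OR.
print(round(odds_ratio(16, 53, 5, 27), 2))  # 1.9
```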

Completeness of data

One of the five studies assessed the extent to which the data obtained answered the questions posed by those seeking the data.

Higgins 1999 compared, using a randomised design, the completeness of information retrieved between study arms (single request for missing information (by e‐mail or surface mail) (n = 116) versus multistage approach involving pre‐notification, request for missing information and active follow‐up (n = 117)) and found no significant difference between the two study methods. 

Type of missing data obtained

Two of the five studies assessed the type of missing data obtained.

Brown 2003 used a non‐randomised design to compare contacting 112 authors (of 139 studies) via 39 e‐mails and 73 letters and received 21 replies (19%), of which nine provided relevant outcome and quality data, one provided additional data on study quality only and one provided information regarding duplicate publications. Eleven studies provided no useful information. Data per study arm were not reported.

Guevara 2005 used a non‐randomised design to compare e‐mail versus letter versus fax and reported that requests for clarification of methods resulted in a greater response (50% versus 32%, P = 0.03) than requests for missing data. Once again, data per study arm were not reported.

Time taken to obtain missing data

Two of the five studies assessed the time taken to obtain missing data (i.e. time from when efforts start until data are obtained).

Gibson 2006 used a non‐randomised design to compare contacting authors by e‐mail, letter or both, and reported that the time to respond differed significantly by contact method (P < 0.05): e‐mail (3 +/‐ 3 days; median one day), letter (27 +/‐ 30 days; median 10 days) and both (13 +/‐ 12 days; median nine days).

Guevara 2005 used a non‐randomised design to compare e‐mail versus letter versus fax and reported that e‐mail had a shorter response time than post or fax.

Number of attempts made to obtain missing data

One of the five studies assessed the number of attempts made to obtain missing data.

Gibson 2006 used a non‐randomised design to compare contacting authors by e‐mail, letter or both, and reported that the number of items requested per author averaged two or more. The number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

Resources required

One of the five studies assessed the resources required to obtain missing data.

Brown 2003 used a non‐randomised design to compare contacting 112 authors (of 139 studies) via 39 e‐mails and 73 letters and reported total costs of 80 GBP for printing and postage. Cost was not reported per study arm.

One of the six included studies assessed methods to obtain unpublished studies (i.e. data for studies that have never been published).

Proportion of unpublished studies (data) obtained as defined and reported by authors

Shukla 2003 , using a non‐randomised design, assessed two different approaches to seeking unpublished information from the drug industry. The outcome of a general request letter was compared with efforts to identify unpublished data first and then contacting the industry for further specific detail. With the first approach, no unpublished information was obtained. With the second approach, relevant unpublished information was obtained for four of the five systematic reviews (in the form of manuscripts or oral/poster presentations).

No information was available for the following secondary outcome measures.

Despite extensive searches we identified only six studies as eligible for inclusion in this review. Of these, five were published as abstracts and one as a full paper. Owing to the lack of high‐quality studies, the results should be interpreted with caution. Five studies (two randomised and three observational comparative studies) evaluated different methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study). Two studies found that correspondence with study authors by e‐mail resulted in the greatest response rate, with the fewest attempts and the shortest time to respond, compared with correspondence by fax or letter. Combining letter and e‐mail produced a higher response rate, but this was not significantly different from using e‐mail alone. Another study found that authors of more recently published studies were more likely to respond. In addition, requests for clarification of the study methods appeared to result in a greater response rate than requests for missing data about the study results.

The effect of a single request for missing information (by e‐mail or surface mail) versus a multistage approach (pre‐notification, request for missing information and active follow‐up) did not appear to affect the rate of response or the completeness of the information retrieved; neither did the number of attempts made to obtain missing data or the number of items requested. Interestingly, the use of a well‐known signatory also had no significant effect on the likelihood of authors responding to a request for unpublished information. Only one study evaluated the effects of different methods to obtain unpublished data (i.e. data for studies that have never been published). It found that doing the leg‐work ahead of time to identify the specific unpublished information required, and then requesting it directly, can prove more fruitful than sending a non‐specific request. The Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2009 ) suggests that review authors also consider contacting colleagues to find out if they are aware of any unpublished studies; we did not find any studies addressing the effectiveness of this approach.

When considering the findings of this review it is important to bear in mind the incompleteness of the available data and how this weakens any recommendations we are able to draw. The general problem that a large proportion of conference abstracts do not get published in full has been shown by others ( Scherer 2007 ), and it was recently found that about two‐thirds of the research studies presented at Cochrane Colloquia do not get published in full ( Chapman 2010 ). We encountered this problem in this review, with five of the six studies being available only as abstracts at Colloquia. These abstracts lacked information about the study methodology and detailed results, and were never written up and published in full. Despite attempts to contact the authors of these studies we were only able to obtain additional information for one of the five. Ironically, our systematic review is subject to the same problems of obtaining missing data that it is trying to address. Assessment of risk of bias was also hampered by incomplete data: the four observational studies did not report on their study methods, and only one of the two randomised studies reported its method of allocation concealment. The Brown 2003 study was designed as a comparative study; however, only combined results were reported. It is therefore reported in this review as though it were a non‐comparative report of the experience of contacting original authors.

Missing and incomplete data continue to be a major problem and a potential source of bias for those carrying out systematic reviews. If data were missing from study reports at random, there would simply be less information available, but the missing information would not necessarily be biased. The problem is that there is considerable evidence that studies are more likely to be published, and published more quickly, if they have significant findings ( Scherer 2007 ). Even when study results are published, there is evidence that authors are more likely to report significant study outcomes than non‐significant ones ( Kirkham 2010 ). The findings from our review support the current recommendations in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2009 ) that those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Where authors cannot be contacted to obtain missing data, review authors should also consider the potential benefits of searching prospective clinical trial registries and trial results registers. For example, in 2007 the US government passed legislation requiring the results of certain clinical trials to be posted on www.clinicaltrials.gov within one year of study completion, thus making available previously unpublished information. The setting up of websites for systematic review projects, listing the studies identified to date and inviting submission of information on studies not already listed ( Lefebvre 2008 ), has also been proposed as a way of identifying unpublished studies.

Implications for methodological research

The strength of the evidence included in this review is limited by the incompleteness of the available data: five of the six included studies lacked information about their methodology and results. Despite extensive searching, only one study assessed methods for obtaining unpublished data. Further robust, well‐conducted and well‐reported comparative studies are needed on strategies to obtain missing and unpublished data.

Acknowledgements

We are very grateful to Julian Higgins who provided us with additional information regarding the First Contact study.

Appendix 1. Cochrane Methodology Register search strategy

#1 ("study identification" next general) or ("study identification" next "publication bias") or ("study identification" next "prospective registration") or ("study identification" next internet) or ("data collection") or ("missing data") or ("information retrieval" next general) or ("information retrieval" next "retrieval techniques") or ("information retrieval" next "comparisons of methods"):kw  in Methods Studies

#2 (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (grey or unpublished or "un published" or "not published"):ti or (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (grey or unpublished or "un published" or "not published"):ab

#3 (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (missing or missed or insufficient or incomplete or lack* or addition*):ti or (request* or obtain* or identify* or locat* or find* or detect* or search or "ask for") NEAR/3 (missing or missed or insufficient or incomplete or lack* or addition*):ab

#4 (missing or incomplete or unpublished or "un published" or "not published") NEAR/3 (data or information or study or studies or evidence or trial or trials):ti or (missing or incomplete or unpublished or "un published" or "not published") NEAR/3 (data or information or study or studies or evidence or trial or trials):ab

#5 (bad or ambiguous or insufficient or incomplete) NEAR/6 report*:ti or (bad or ambiguous or insufficient or incomplete) NEAR/3 report*:ab

#6 (#1 OR #2 OR #3 OR #4 OR #5)
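As an aside for readers unfamiliar with the syntax above, NEAR/3 requires its two operand terms to occur within three words of each other. A minimal, hypothetical sketch in Python (not part of the review's methods; the term lists are copied from search line #4, and NEAR/3 is read here as "at most three intervening words, in either order") could approximate that line against plain text:

```python
import re

# Term lists from search line #4 of the strategy above.
LEFT = r"(?:missing|incomplete|unpublished|un published|not published)"
RIGHT = r"(?:data|information|stud(?:y|ies)|evidence|trials?)"

# NEAR/3 approximation: either order, with at most three words
# between the two terms.
NEAR3 = re.compile(
    rf"\b(?:{LEFT}(?:\W+\w+){{0,3}}\W+{RIGHT}"
    rf"|{RIGHT}(?:\W+\w+){{0,3}}\W+{LEFT})\b",
    re.IGNORECASE,
)

def matches_line4(text: str) -> bool:
    """Return True if the text would satisfy search line #4."""
    return bool(NEAR3.search(text))
```

Real database engines apply such operators to indexed title/abstract fields rather than raw text, so this is only an illustration of what the operator expresses.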

Appendix 2. MEDLINE search strategy

1. ((request$ or obtain$ or identify$ or locat$ or find$ or detect$ or search or ask for) adj3 (grey or unpublished or "un published" or "not published") adj3 (data or information or evidence or study or studies or trial? or paper? or article? or report? or literature or work)).tw.

2. ((request$ or obtain$ or identify$ or locat$ or find$ or detect$ or search or ask for) adj3 (missing or insufficient or incomplete or lack$ or addition$) adj3 (data or information or evidence)).tw.

3. ((bad or ambiguous or insufficient or incomplete) adj6 reporting).tw.

4. 1 or 2 or 3

5. (2000$ or 2001$ or 2002$ or 2003$ or 2004$ or 2005$ or 2006$ or 2007$ or 2008$ or 2009$).ep.

Appendix 3. EMBASE search strategy

5. (2004$ or 2005$ or 2006$ or 2007$ or 2008$ or 2009$).em.

Characteristics of studies

Characteristics of included studies [ordered by study ID]

CCOHTA: Canadian Co‐ordinating Office for Health Technology Assessment
NSAID: non‐steroidal anti‐inflammatory drug
RCT: randomised controlled trial

Characteristics of excluded studies [ordered by study ID]

Contributions of authors

Taryn Young (TY) developed the protocol and Sally Hopewell (SH) provided comments on it. Both authors reviewed the search results, selected potential studies for inclusion, independently carried out formal eligibility assessments and then extracted data from the included studies. TY drafted the review with input from SH.

Sources of support

Internal sources

  • South African Cochrane Centre, South Africa.
  • UK Cochrane Centre, NHS Research & Development Programme, UK.

External sources

  • No sources of support supplied

Declarations of interest

None known.

References to studies included in this review

Brown 2003 {published data only}

  • Brown T, Hooper L. Effectiveness of brief contact with authors. XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 Oct 26‐31; Barcelona, Spain. 2003.

Gibson 2006 {published data only}

  • Gibson CA, Bailey BW, Carper MJ, Lecheminant JD, Kirk EP, Huang G, et al. Author contacts for retrieval of data for a meta‐analysis on exercise and diet restriction. International Journal of Technology Assessment in Health Care 2006;22(2):267‐70.

Guevara 2005 {published data only}

  • Guevara J, Keren R, Nihtianova S, Zorc J. How do authors respond to written requests for additional information? XIII Cochrane Colloquium; 2005 Oct 22‐26; Melbourne, Australia. 2005.

Higgins 1999 {published data only}

  • Higgins J, Soomro M, Roberts I, Clarke M. Collecting unpublished data for systematic reviews: a proposal for a randomised trial. 7th Annual Cochrane Colloquium Abstracts, October 1999, Rome. 1999.

Milton 2001 {published data only}

  • Milton J, Logan S, Gilbert R. Well‐known signatory does not affect response to a request for information from authors of clinical trials: a randomised controlled trial. 9th Annual Cochrane Colloquium Abstracts, October 2001, Lyon. 2001.

Shukla 2003 {published data only}

  • Shukla V. The challenge of obtaining unpublished information from the drug industry. XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 Oct 26‐31; Barcelona, Spain. 2003.

References to studies excluded from this review

Bohlius 2003 {published data only}

  • Bohlius J, Langensiepen S, Engert A. Data hunting: a case report. XI Cochrane Colloquium: Evidence, Health Care and Culture; 2003 Oct 26‐31; Barcelona, Spain. 2003.

Eysenbach 2001 {published data only}

  • Eysenbach G, Tuische J, Diepgen TL. Evaluation of the usefulness of Internet searches to identify unpublished clinical trials for systematic reviews. Medical Informatics and the Internet in Medicine 2001;26(3):203‐18.
  • Eysenbach G, Tuische J, Diepgen TL. Evaluation of the usefulness of internet searches to identify unpublished clinical trials for systematic reviews. Chinese Journal of Evidence‐Based Medicine 2002;2(3):196‐200.

Hadhazy 1999 {published data only}

  • Hadhazy V, Ezzo J, Berman B. How valuable is effort to contact authors to obtain missing data in systematic reviews. 7th Annual Cochrane Colloquium Abstracts, October 1999, Rome. 1999.

Hetherington 1987 {published data only}

  • Hetherington J. An international survey to identify unpublished and ongoing perinatal trials [abstract]. Controlled Clinical Trials 1987;8:287.
  • Hetherington J, Dickersin K, Chalmers I, Meinert CL. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 1989;84(2):374‐80.

Kelly 2002 {published data only}

  • Kelley GA, Kelley KS, Tran ZV. Retrieval of individual patient data for an exercise‐related meta‐analysis. Medicine & Science in Sports & Exercise 2002;34(5 Suppl 1):S225.

Kelly 2004 {published data only}

  • Kelley GA, Kelley KS, Tran ZV. Retrieval of missing data for meta‐analysis: a practical example. International Journal of Technology Assessment in Health Care 2004;20(3):296‐9.

McGrath 1998 {published data only}

  • McGrath J, Davies G, Soares K. Writing to authors of systematic reviews elicited further data in 17% of cases. BMJ 1998;316:631.

Reveiz 2004 {published data only}

  • Reveiz L, Andres Felipe C, Edgar Guillermo O. Using e‐mail for identifying unpublished and ongoing clinical trials and those published in non‐indexed journals. 12th Cochrane Colloquium: Bridging the Gaps; 2004 Oct 2‐6; Ottawa, Ontario, Canada. 2004.
  • Reveiz L, Cardona AF, Ospina EG, Agular S. An e‐mail survey identified unpublished studies for systematic reviews. Journal of Clinical Epidemiology 2006;59(7):755‐8.

Wille‐Jorgensen 2001 {published data only}

  • Wille‐Jorgensen. Problems with retrieving original data: is it a selection bias? 9th Annual Cochrane Colloquium, Lyon, October 2001.

Additional references

Chapman 2010

  • Chapman S, Eisinga A, Clarke MJ, Hopewell S. Passport to publication? Do methodologists publish after Cochrane Colloquia? Joint Cochrane and Campbell Colloquium; 2010 Oct 18‐22; Keystone, Colorado, USA. Cochrane Database of Systematic Reviews 2010;Suppl:14.

Greenhalgh 2005

  • Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ 2005;331(7524):1064‐5.

Higgins 2009

  • Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.2 [updated September 2009]. The Cochrane Collaboration, 2009. Available from www.cochrane‐handbook.org.

Hopewell 2007

  • Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta‐analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews 2007, Issue 2. [DOI: 10.1002/14651858.MR000010.pub3]

Horsley 2011

  • Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database of Systematic Reviews 2011, Issue 8. [DOI: 10.1002/14651858.MR000026.pub2]

Kirkham 2010

  • Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smith R, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 2010;340:c365.

Lefebvre 2008

  • Lefebvre C, Manheimer E, Glanville J, on behalf of the Cochrane Information Retrieval Methods Group. Chapter 6: Searching for studies. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available from www.cochrane‐handbook.org.

Scherer 2007

  • Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews 2007, Issue 2. [DOI: 10.1002/14651858.MR000005.pub3]

Smith 1998

  • Smith GD, Egger M. Meta‐analysis: unresolved issues and future developments. BMJ 1998;316(7126):221‐5.

Song 2000

  • Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication and related biases. Health Technology Assessment 2000;4(10):1‐115.

Sterne 2008

  • Sterne JAC, Egger M, Moher D (editors). Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available from www.cochrane‐handbook.org.

Cochrane Methods Information Retrieval

Searching for unpublished studies

A consortium consisting of York Health Economics Consortium and the Cochrane Information Retrieval Methods Group has looked into the issue of searching for unpublished studies and obtaining access to unpublished data and has produced the following report and bibliography:

Arber M, Cikalo M, Glanville J, Lefebvre C, Varley D, Wood H. Annotated bibliography of published studies addressing searching for unpublished studies and obtaining access to unpublished data. York: York Health Economics Consortium; 2013.

This work was a sub-project of a larger project entitled “Searching for unpublished trials using trials registers and trials web sites and obtaining unpublished trial data and corresponding trial protocols from regulatory agencies”.

Other outputs of this project include:

Schroll JB, Bero L, Gotzsche P. Searching for unpublished data for Cochrane reviews: cross sectional study. BMJ 2013;346:f2231.

Wolfe N, Gotzsche PC, Bero L. Strategies for obtaining unpublished drug trial data: a qualitative interview study. Systematic Reviews 2013;2:31. http://www.systematicreviewsjournal.com/content/2/1/31

The project was a collaboration between the San Francisco Branch of the United States Cochrane Center, Nordic Cochrane Centre, Cochrane Acute Respiratory Infections Group, York Health Economics Consortium and the Cochrane Information Retrieval Methods Group. This sub-project was undertaken by staff of York Health Economics Consortium and Carol Lefebvre of Lefebvre Associates Ltd, for which some funding was provided by the Cochrane Collaboration under the Methods Innovation Funding initiative.

We thank the authors for allowing us to link to the full text of the report from this site.


APA Referencing - Education & CCSC students: Unpublished or informally published work


Unpublished or informally published work

How to reference an unpublished or informally published work.

Referencing in academic writing is a matter of establishing the authority of the source or information you rely on as evidence for the claims you make. This is the purpose of peer review: it establishes the authority of a work through expert checking, and peer-reviewed published works are accepted as having greater authority than works that are not peer reviewed. Sometimes, however, the most useful research article is not available as a peer-reviewed publication but is available in an unpublished form. Use peer-reviewed articles where possible, but if published research reports are lacking and, for example, a pre-press version is available directly from the author, you may use it. Check whether the article has been published before submitting your final assignment or thesis and, if it has, reference the final version, taking into account any changes the editors may have required in the peer-review process.

Unpublished and informally published works include:

  • work in progress
  • work submitted for publication
  • work prepared for publication but not submitted

Such works may be available from:

  • a university website
  • an electronic archive such as academia.edu or ResearchGate
  • the author's personal website

In-text citation

Reference list

Author, A. A. (Year).  Title of manuscript.  Unpublished manuscript [or "Manuscript submitted for publication" or "Manuscript in preparation"].

If the unpublished manuscript is from a university, give this information at the end.

If you locate the work on an electronic archive, give this information at the end.

If a URL is available, give it at the end. 

If you use a pre-print version of an article that is later published, reference the published version.
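For instance, following the template above, a reference to a hypothetical unpublished manuscript located on a university archive (author, title, university and URL are invented for illustration) might read:

Nguyen, T. A. (2023).  Prehospital analgesia decision-making: A scoping study.  Unpublished manuscript, Department of Paramedicine, Example University. https://example.edu/archive/nguyen2023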


College & Research Libraries News  ( C&RL News ) is the official newsmagazine and publication of record of the Association of College & Research Libraries,  providing articles on the latest trends and practices affecting academic and research libraries.

C&RL News  became an online-only publication beginning with the January 2022 issue.


Internet resources: Gray literature: resources for locating unpublished research

by Brian S. Mathews

Gray or grey literature has long been considered the proverbial needle in the haystack. It is commonly defined as any documentary material that is not commercially published and is typically composed of technical reports, working papers, business documents, and conference proceedings. The greatest challenges involved with these items are the process of identification, since there is limited indexing, and acquisition, since availability is usually marred with uncertainty. Added to this is the absence of editorial control, raising questions about authenticity and reliability. Yet despite these considerations, gray literature is continually referenced in scholarly articles and dissertations and therefore remains an issue that academic librarians must contend with.

While the search for these eclectic materials is not new, the development of the Web has increased opportunities. Gray literature is now freely available on many Web sites and is selectively indexed by numerous commercial database vendors. Many organizations and individuals are also providing access to their works online. While all these factors present a new optimism, they also raise expectations that everything is available quickly, if not instantly, creating unrealistic perceptions.

Included in this section are Web sites that aid in understanding the nature of gray literature as well as various search tools. The focus is upon freely available resources that offer some full-text coverage. While the majority of these selections concentrate upon scientific and technical literature, other resources have been included to illustrate the wide range and variation of gray literature.

About gray literature

• IL Toolkit—Finding Information: Gray Literature. This site provides detailed definitions of gray literature and links to many technical report Web sites. Access: http://www.bcinow.com/demo/iltoolkit2/Types_Gray_Lit.htm.

• The Role of Grey Literature in the Sciences. This site provides a concise overview of gray literature and offers insight regarding its impact on the sciences. Access: http://library.brooklyn.cuny.edu/access/greyliter.htm.

• TextRelease: Grey Literature Program and Conference Bureau. This site provides information about the International Conference on Grey Literature and includes purchasing information for past proceedings as well as updates on future sessions. Access: http://www.textrelease.com.

Directories

• NY Academy of Medicine: Grey Literature Page. This site focuses on gray literature resources from the medical field and includes an extensive listing of agencies and organizations that produce health-related materials. The site also features a quarterly “Grey Literature Report,” listing many items that are available freely online. Access: http://www.nyam.org/library/grey.shtml.

About the author

Brian S. Mathews is reference and instruction librarian at George Washington University-Virginia Campus Library, e-mail: [email protected]

• Virtual Technical Reports Central. This site is hosted by the University of Maryland Libraries and offers a large listing of gray literature-producing institutions. While technical and research reports are prominent, it also features preprints, reprints, and e-prints. Access: http://www.lib.umd.edu/ENGIN/TechReports/Virtual-TechReports.html.

• U.S. Government Information: Technical Reports. This site is hosted by the University of Colorado Libraries and provides an annotated listing of government agencies and resources related to technical reports. Access: http://www-libraries.colorado.edu/ps/gov/us/techrep.htm.

Scientific and technical reports: Government resources

• Agricultural Online Access (Agricola). This database is administered by the National Agricultural Library and provides access to records of articles, chapters, reports, and reprints, encompassing all aspects of agriculture and allied disciplines. Access: http://agricola.nal.usda.gov/.


• Energy Citations Database (ECD). This resource was designed by the Department of Energy and covers the areas of physics, mathematics, engineering, and computer science. The database provides limited full-text availability and indexes books, articles, reports, and conference papers back to 1948. Access: http://www.osti.gov/energycitations/.

• GrayLIT Network. This site provides information on technical reports generated through federally funded research and development projects. This meta-search portal includes archives from the Defense Department reports collection, the DOE Information Bridge, EPA Reports, and a selection of NASA reports. Access: http://graylit.osti.gov/.


• NASA Technical Reports Server. This search tool provides comprehensive access to the numerous NASA report archives, including documents from its predecessor NACA. The site provides some full-text coverage and indexes back to 1917. The advanced search includes additional physics-related resources. Access: http://ntrs.nasa.gov/.


• National Technical Information Service (NTIS). The NTIS site is a comprehensive resource for federally funded scientific, engineering, and business-related information. The database provides some full-text access and indexes over two million publications back to 1990. Access: http://www.ntis.gov/search/.

• Scientific and Technical Information Network (STINET). The STINET site provides access to citations of unclassified defense research documents. Many of the records are available in full text, and indexing coverage extends back to 1974. Access: http://stinet.dtic.mil/.

• Transportation Research Information Services (TRIS Online). This database provides access to nearly 500,000 records in transportation-related fields. Sources include books and articles as well as some full-text research studies, technical reports, and conference papers. Access: http://trisonline.bts.gov/sundev/search.cfm.

• U.S. Patent Database. This site provides full image access to all U.S.-granted patents back to 1790, with full-text searching back to 1976 and the full text of published applications. The database also enables searching for references, which can include articles, reports, and proceedings. Access: http://www.uspto.gov/patft/index.html.

Other scientific and technical report resources


• CiteSeer: Scientific Literature Digital Library. This site aims toward improving the distribution and response of scientific literature. It indexes over 600,000 full-text documents and includes features allowing for citation analysis, reference linking, awareness tracking, and more. Additionally, the site provides algorithms, techniques, and software that can be used by other digital libraries. Access: http://citeseer.nj.nec.com/cs.

• MAGiC Project. The site aims toward establishing a collaborative system for the collection, storage, and use of engineering gray literature. Administered by the Cranfield University Library in the United Kingdom, the MAGiC Project includes a citation database of over 120,000 international technical reports. Access: http://www.magic.ac.uk/.


• MathSearch. This site searches a collection of more than 200,000 documents on academic mathematics and statistics servers and includes the full text of some technical reports, working papers, and other eclectic materials. Access: http://www.maths.usyd.edu.au:8000/MathSearch.html.

• SPIRES High-Energy Physics (HEP) Database. This database comprehensively indexes over 500,000 articles, papers, preprints, and technical reports. Most of the materials are available in full text, with coverage extending back to 1974. Access: http://www.slac.stanford.edu/spires/hep/.


Information technology reports and resources

• HP Labs Technical Reports. This site provides access to technical reports authored by HP researchers. Indexing goes back to 1990, and the site includes mostly full-text materials. Access: http://www.hpl.hp.com/techreports/.

• IBM Research: Technical Paper Search. This database provides access to article citations and technical reports authored by the IBM Research community. Indexing goes back to 1987 and includes some full-text materials. Access: http://domino.watson.ibm.com/library/cyberdig.nsf/home.

• Microsoft Research: Technical Reports. This mostly full-text database provides access to computer science publications, technical reports, and projects authored by Microsoft researchers. Access: http://research.microsoft.com/pubs/.

• Networked Computer Science Technical Library (NCSTRL). This resource provides access to an international collection of computer science research reports and papers. The database is designed for educational use and is a collaborative effort between academic, industrial, and government research laboratories. Access: http://www.ncstrl.org.

• ZDNET IT Directory. This site provides a full-text digital library of technical white papers, Webcasts, and case studies on various IT-related topics. The site requires a free registration to view materials. Access: http://itpapers.zdnet.com/.

Miscellaneous resources

• National Standards Systems Network (NSSN). Although standards are available commercially, they are typically grouped into the gray literature category. This site provides a federated search tool of more than 250,000 records from more than 600 developers and links to purchasing information. Access: http://www.nssn.org/.

• Political Science Sites of Working Papers. This site provides links to scholarly working papers on political science and related fields. Access: http://www.workingpapers.org/.


• Research Papers in Economics (RePEc). This site is an international collaborative effort to enhance the dissemination of research in economics and related fields. The project includes a mostly full-text database of working papers, journal articles, chapter listings, and downloadable software components. Access: http://www.repec.org/.

• Search Adobe PDF Online. This Adobe search tool scans millions of PDF documents online. It indexes many files undetected by major search engines, and, although results can be varied, it does include a wealth of gray literature. Access: http://searchpdf.adobe.com/.

• World Bank Research Resources. This site provides access to working papers, current studies, and datasets from the World Bank Group. Access: http://econ.worldbank.org/resource.php.

E-print archives and resources

• Clinical Medicine and Health Research NetPrints. This site provides a full-text archive for medical researchers to post their completed clinical studies. Access: http://clinmed.netprints.org/.


• Cogprints. This self-archive provides material relevant to the study of cognition in the areas of psychology, biology, linguistics, and philosophy, as well as in the computer, physical, social, and mathematical sciences. The site includes full-text access to articles, chapters, technical reports, and conference papers. Access: http://cogprints.ecs.soton.ac.uk/.

• Education-line. Administered by the Brotherton Library in the United Kingdom, this site provides full-text access to reports, working papers, and conference documents that support educational research, policy, and practice. Access: http://www.leeds.ac.uk/educol/.

• E-print Network. This site is a metasearch for scientific e-print resources and enables federated searching of more than 30 major databases and servers. Access: http://www.osti.gov/eprints/.

• GNU E-Prints Archives. This site provides a listing of more than 100 online archives using e-print software, and includes many academic institutions. Access: http://software.eprints.org/archives.php.

Discussion lists

• Engineering Librarians Division (ELDNET-L). This moderated list addresses “issues of interest to engineering and related subject area libraries and librarians.” ELDNET-L is sponsored by the American Society for Engineering Education, Engineering Libraries Division, but is open to all interested subscribers. Access: http://www.englib.cornell.edu/eld/listserv/eldnetfile.html.


• GreyNet Listserv. This international moderated list seeks “to facilitate dialog and communication between persons and organisations in the field of grey literature.” In addition to the electronic lists, the site includes information about the International Conference Series on Grey Literature and provides an extensive categorical listing of resources. Access: http://www.greynet.org.

• Science and Technology Librarians Section Listserv (STS-L). This moderated list “provides a forum through which librarians in scientific and technical subject fields can achieve and maintain awareness of the impact and range of information with which they work.” STS-L is designed for communication of ACRL’s Science and Technology Section, but is open to all interested subscribers. Access: http://listserv.utk.edu/archives/sts-l.html.


© 2024 Association of College and Research Libraries , a division of the American Library Association

Print ISSN: 0099-0086 | Online ISSN: 2150-6698


APA – Citing Sources

Unpublished material.

These types of publications are manuscripts that might be submitted for publication or works in progress. In this category, you might also find manuscripts that are not formally published but are retrievable online on personal or institutional websites. If no year can be identified, use "n.d." (no date) instead.

In-text citation:

  • According to a study (Blackwell & Conrod, 2003), there are . . .
  • Blackwell and Conrod (2003) argue that . . .

In the reference list:

Author, A. A., & Author, B. B. (year). Title of manuscript. [Unpublished manuscript or Manuscript submitted for publication or Manuscript in preparation]. https://xxxx

If a university or other organization can be identified:

Author, A. A., & Author, B. B. (year). Title of manuscript. [Unpublished manuscript or Manuscript submitted for publication or Manuscript in preparation], Department, University. https://xxxx

Reference example:

Blackwell, E., & Conrod, P. J. (2003). A five-dimensional measure of drinking motives. [Unpublished manuscript], Department of Psychology, University of British Columbia, Vancouver, Canada.

  • << Previous: Tables and figures
  • Next: Websites >>



Literature Search: Databases and Gray Literature

The literature search.

  • A systematic review search includes a search of databases, gray literature, personal communications, and a handsearch of high impact journals in the related field.  See our list of recommended databases and gray literature sources on this page.
  • A comprehensive literature search cannot depend on a single database, nor on bibliographic databases only.
  • Inclusion of multiple databases helps avoid publication bias (geographic bias or bias against publication of negative results).
  • The Cochrane Collaboration recommends PubMed, Embase and the Cochrane Central Register of Controlled Trials (CENTRAL) at a minimum.     
  • NOTE:  The Cochrane Collaboration and the IOM recommend that the literature search be conducted by librarians or persons with extensive literature search experience. Please contact the NIH Librarians for assistance with the literature search component of your systematic review. 

Cochrane Library

A collection of six databases that contain different types of high-quality, independent evidence to inform healthcare decision-making, including the Cochrane Central Register of Controlled Trials.

Embase

European database of biomedical and pharmacologic literature.

PubMed

PubMed comprises more than 21 million citations for biomedical literature from MEDLINE, life science journals, and online books.

Scopus

Largest abstract and citation database of peer-reviewed literature and quality web sources. Contains conference papers.

Web of Science

World's leading citation databases. Covers over 12,000 of the highest impact journals worldwide, including Open Access journals and over 150,000 conference proceedings. Coverage in the sciences, social sciences, arts, and humanities, with coverage back to 1900.

Subject Specific Databases

APA PsycINFO

Over 4.5 million abstracts of peer-reviewed literature in the behavioral and social sciences. Includes conference papers, book chapters, psychological tests, scales and measurement tools.

CINAHL Plus

Comprehensive journal index to nursing and allied health literature; includes books, nursing dissertations, conference proceedings, practice standards and book chapters.

LILACS

Latin American and Caribbean health sciences literature database.

Gray Literature

  • Gray Literature  is the term for information that falls outside the mainstream of published journal and monograph literature and is not controlled by commercial publishers
  • hard to find studies, reports, or dissertations
  • conference abstracts or papers
  • governmental or private sector research
  • clinical trials - ongoing or unpublished
  • experts and researchers in the field     
  • Library catalogs
  • Professional association websites
  • Google Scholar  - Search scholarly literature across many disciplines and sources, including theses, books, abstracts and articles.
  • Dissertation Abstracts - dissertations and theses database - NIH Library biomedical librarians can access and search for you.
  • NTIS  - central resource for government-funded scientific, technical, engineering, and business related information.
  • AHRQ  - Agency for Healthcare Research and Quality
  • Open Grey  - system for information on grey literature in Europe. Open access to 700,000 references to the grey literature.
  • World Health Organization  - providing leadership on global health matters, shaping the health research agenda, setting norms and standards, articulating evidence-based policy options, providing technical support to countries and monitoring and assessing health trends.
  • New York Academy of Medicine Grey Literature Report  - a bimonthly publication of The New York Academy of Medicine (NYAM) alerting readers to new gray literature publications in health services research and selected public health topics. NOTE: Discontinued as of Jan 2017, but resources are still accessible.
  • Gray Source Index
  • OpenDOAR - directory of academic repositories
  • International Clinical Trials Registry Platform  - from the World Health Organization
  • Australian New Zealand Clinical Trials Registry
  • Brazilian Clinical Trials Registry
  • Chinese Clinical Trial Registry
  • ClinicalTrials.gov   - U.S.  and international federally and privately supported clinical trials registry and results database
  • Clinical Trials Registry  - India
  • EU clinical Trials Register
  • Japan Primary Registries Network  
  • Pan African Clinical Trials Registry

Methods for obtaining unpublished data

Affiliation.

  • 1 Centre for Evidence-based Health Care, Faculty of Health Sciences, Stellenbosch University, Tygerberg, South Africa. [email protected].
  • PMID: 22071866
  • PMCID: PMC7390448
  • DOI: 10.1002/14651858.MR000027.pub2

Background: In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies conducted but not published (unpublished data), as either an abstract or full-text paper, as well as missing data (data available to original researchers but not reported) in published abstracts or full-text publications. The effectiveness of different methods used to obtain unpublished or missing data has not been systematically evaluated.

Objectives: To assess the effects of different methods for obtaining unpublished studies (data) and missing data from studies to be included in systematic reviews.

Search methods: We identified primary studies comparing different methods of obtaining unpublished studies (data) or missing data by searching the Cochrane Methodology Register (Issue 1, 2010), MEDLINE and EMBASE (1980 to 28 April 2010). We also checked references in relevant reports and contacted researchers who were known or who were thought likely to have carried out relevant studies. We used the Science Citation Index and PubMed 'related articles' feature to identify any additional studies identified by other sources (19 June 2009).

Selection criteria: Primary studies comparing different methods of obtaining unpublished studies (data) or missing data in the healthcare setting.

Data collection and analysis: The primary outcome measure was the proportion of unpublished studies (data) or missing data obtained, as defined and reported by the authors of the included studies. Two authors independently assessed the search results, extracted data and assessed risk of bias using a standardised data extraction form. We resolved any disagreements by discussion.

Main results: Six studies met the inclusion criteria; two were randomised studies and four were observational comparative studies evaluating different methods for obtaining missing data.

Methods to obtain missing data: Five studies (two randomised studies and three observational comparative studies) assessed methods for obtaining missing data (i.e. data available to the original researchers but not reported in the published study). Two studies found that correspondence with study authors by e-mail resulted in the greatest response rate with the fewest attempts and shortest time to respond. The difference between the effect of a single request for missing information (by e-mail or surface mail) versus a multistage approach (pre-notification, request for missing information and active follow-up) was not significant for response rate and completeness of information retrieved (one study). Requests for clarification of methods (one study) resulted in a greater response than requests for missing data. A well-known signatory had no significant effect on the likelihood of authors responding to a request for unpublished information (one study). One study assessed the number of attempts made to obtain missing data and found that the number of items requested did not influence the probability of response. In addition, multiple attempts using the same methods did not increase the likelihood of response.

Methods to obtain unpublished studies: One observational comparative study assessed methods to obtain unpublished studies (i.e. data for studies that have never been published). Identifying unpublished studies ahead of time and then asking the drug industry to provide further specific detail proved more fruitful than sending a non-specific request.

Authors' conclusions: Those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Contacting authors by e-mail results in the greatest response rate with the fewest number of attempts and the shortest time to respond.

Publication types

  • Research Support, Non-U.S. Gov't
  • Systematic Review
  • Access to Information*
  • Documentation / methods*
  • Electronic Mail*
  • Randomized Controlled Trials as Topic

Library Guides

Systematic Reviews

  • Introduction to Systematic Reviews
  • Systematic review
  • Systematic literature review
  • Scoping review
  • Rapid evidence assessment / review
  • Evidence and gap mapping exercise
  • Meta-analysis
  • Systematic Reviews in Science and Engineering
  • Timescales and processes
  • Question frameworks (e.g. PICO)
  • Inclusion and exclusion criteria
  • Using grey literature
  • Search Strategy
  • Subject heading searching (e.g. MeSH)
  • Database video & help guides
  • Documenting your search and results
  • Data management
  • How the library can help
  • Systematic reviews A to Z

how to find unpublished research papers

Where to search

In conducting a systematic review, it is important that you search widely through published and unpublished research to find all information available on a particular topic. This usually includes searching sources such as:

  • Bibliographic databases
  • Trials registers
  • Reviews and guidelines
  • Grey literature
  • Hand searching other relevant sources.  

The databases you choose to search will depend on the topic of your systematic review. It is important to search a range of multidisciplinary and subject specific databases. 

You can access a wide variety of databases via the Database A-Z linked from the Primo homepage.

Guides and tutorials on how to use each database are also available.

You can create personal accounts on most database provider websites, which allow you to save searches, mark papers to return to, and set up alerts when new items on your research topic are added to the database.

Trials Registers

Why are the results of trials important to consider?

It is important that all healthcare decisions are informed by all available evidence, thus overcoming publication bias and selective reporting. The data contained in clinical trials that are unpublished or ongoing can provide important additional clinical evidence.

Development of trial registers

In the past decade, mandates around trial registration have increased the retrievability of trials in trial registers. There is increasing acceptance among investigators of the importance of registering trials at inception, and leading medical journal publishers are unwilling to publish reports of trials that are not properly registered. See the Cochrane Handbook (Part 2, section 6.2.3).

No single resource gives access to all trials, and multiple registers should be searched as broadly as possible. Examples include:

  • PROSPERO - contains over 2000 records of prospectively registered systematic reviews; a valuable resource for identifying ongoing reviews to help avoid unplanned duplication of reviews.
  • WHO International Clinical Trials Registry Platform (ICTRP) - key trials register portal
  • Cochrane Central Register of Controlled Trials (CENTRAL) - includes trials published in bibliographic databases such as Medline and Embase, as well as those from other published and unpublished sources
  • Clinicaltrials.gov (a service of the U.S. National Institutes of Health)
  • ISRCTN Registry - first online service that provided unique numbers to randomized controlled trials in all areas of health care and from all countries around the world.

Further discussion on this topic can be found in the article below:

  • Challenges of identifying unpublished data from clinical trials: Getting the best out of clinical trials registers and other novel sources.

Grey Literature

Whether or not to include grey literature in your search will depend on the purpose and scope of the review; however, there are a number of reasons why it may be important to include in your review:

  • Grey literature can provide further sources of evidence for your review 
  • It can provide a more rounded view of your field of research with access to different perspectives 
  • It can help balance publication bias, e.g. negative and neutral research results are often not published by conventional means 
  • It can be a source of raw data 
  • It can provide more currency to your review through access to the most up-to-date and pre-publication material

For more information see the Using grey literature guide.

Hand Searching

"Handsearching involves a manual page-by-page examination of the entire contents of a journal issue or conference proceedings to identify all eligible reports of trials. In journals, reports of trials may appear in articles, abstracts, news columns, editorial, letters or other text"  (Cochrane Handbook, 6.2.2.1)                                                                

Why is handsearching important?  ​

Trials that are reported may not be easily identified as trials, due to indexing issues associated with some databases.

Note:  Conference proceedings are important to hand search because individual conference papers are rarely indexed.

For more information, refer to:


Cochrane Collaboration Class - Handsearching

Systematic Reviews in the Health Sciences: Handsearching (Rutgers University Libraries)

  • << Previous: Inclusion and exclusion criteria
  • Next: Using grey literature >>
  • Last Updated: Jan 23, 2024 10:52 AM
  • URL: https://plymouth.libguides.com/systematicreviews


ACAP LEARNING RESOURCES

Reference in APA 7


Reference Elements: Unpublished & Informally Published Material

Author, A. A., & Author, B. B. (Year). Title of work in italics [Description of unpublished manuscript]. Department Name, University Name. https://xxxxxx

Author, A. A., & Author, B. B. (Year). Title of work in italics (Publication No. ###). Name of Database or Archive. https://doi.org/xxxxxx

Use specific manuscript descriptions, e.g. [Unpublished manuscript], [Manuscript in preparation], or [Manuscript submitted for publication]. Always use a DOI if the resource has one. Include a URL if there is no DOI and the link resolves without authentication.
  • REFERENCE LIST EXAMPLES
  • IN TEXT EXAMPLES

Leemans, S. J. J., & Artem, P. (2019). Proofs with stochastic-aware conformance checking: An entropy-based approach [Unpublished manuscript]. Faculty of Science and Technology, Queensland University of Technology. https://eprints.qut.edu.au/129860/

Winegard, B. M., Winegard, B. M., Geary, D. C., & Clark, C. J. (2018). The status competition model of cultural production . PsyArXiv.  https://doi.org/10.31234/osf.io/apw5e/
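The manuscript template above is regular enough to assemble mechanically. The sketch below is a hypothetical helper (not part of any citation tool or the APA Style guidance itself) that builds a reference string for an unpublished manuscript, assuming author names are already in "Surname, I. I." form:

```python
def format_unpublished_reference(authors, year, title, description,
                                 department=None, university=None, url=None):
    """Assemble an APA 7-style reference for an unpublished manuscript.

    `description` is the bracketed label, e.g. "Unpublished manuscript",
    "Manuscript in preparation", or "Manuscript submitted for publication".
    """
    # APA joins the final two authors with ", & ".
    if len(authors) == 1:
        author_part = authors[0]
    else:
        author_part = ", ".join(authors[:-1]) + ", & " + authors[-1]

    parts = [f"{author_part} ({year}). {title} [{description}]."]
    if department and university:
        # Source of the work, when an institution can be identified.
        parts.append(f"{department}, {university}.")
    if url:
        parts.append(url)
    return " ".join(parts)
```

For example, the Blackwell and Conrod reference shown earlier in this guide would come out as "Blackwell, E., & Conrod, P. J. (2003). A five-dimensional measure of drinking motives [Unpublished manuscript]. Department of Psychology, University of British Columbia." Note that italics cannot be represented in a plain string, so the title would still need italicising in the final document.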

Parenthetical Style

See theorem one as follows "for any log L and model M (given as SDFAs), it holds that 0 ≤ recall(L, M) ≤ 1 and 0 ≤ precision(L, M) ≤ 1" (Leemans et al., 2019, p. 2).

In this example, the architecture of Frank Lloyd Wright is used for its functional and aesthetic qualities (Winegard et al., 2018).

Narrative Style

Leemans et al. (2019) propose "for any log L and model M (given as SDFAs), it holds that 0 ≤ recall(L, M) ≤ 1 and 0 ≤ precision(L, M) ≤ 1" (p. 2).

Winegard et al. (2018) use the architecture of a Frank Lloyd Wright house as an example for its functional and aesthetic qualities.

  • << Previous: Course Material & Unpublished
  • Next: Statistics, Tests & Data Sets >>
  • Last Updated: Mar 13, 2024 1:57 PM
  • URL: https://libguides.navitas.com/apa7

10.3.2  Including unpublished studies in systematic reviews

Publication bias clearly is a major threat to the validity of any type of review, but particularly of unsystematic, narrative reviews. Obtaining and including data from unpublished trials appears to be one obvious way of avoiding this problem. Hopewell and colleagues conducted a review of studies comparing the effect of the inclusion or exclusion of ‘grey’ literature (defined here as reports that are produced by all levels of government, academics, business and industry in print and electronic formats but that are not controlled by commercial publishers) in meta-analyses of randomized trials (Hopewell 2007b). They included five studies (Fergusson 2000, McAuley 2000, Burdett 2003, Hopewell 2004), all of which showed that published trials had an overall greater intervention effect than grey trials. A meta-analysis of three of these studies suggested that, on average, published trials showed a 9% larger intervention effect than grey trials (Hopewell 2007b).

The inclusion of data from unpublished studies can itself introduce bias. The studies that can be located may be an unrepresentative sample of all unpublished studies. Unpublished studies may be of lower methodological quality than published studies: a study of 60 meta-analyses that included published and unpublished trials found that unpublished trials were less likely to conceal intervention allocation adequately and to blind outcome assessments (Egger 2003). In contrast, Hopewell and colleagues found no difference in the quality of reporting of this information (Hopewell 2004).

A further problem relates to the willingness of investigators of located unpublished studies to provide data. This may depend upon the findings of the study, more favourable results being provided more readily. This could again bias the findings of a systematic review. Interestingly, when Hetherington et al., in a massive effort to obtain information about unpublished trials in perinatal medicine, approached 42,000 obstetricians and paediatricians in 18 countries, they identified only 18 unpublished trials that had been completed for more than two years (Hetherington 1989).

A questionnaire assessing the attitudes toward inclusion of unpublished data was sent to the authors of 150 meta-analyses and to the editors of the journals that published them (Cook 1993). Researchers and editors differed in their views about including unpublished data in meta-analyses. Support for the use of unpublished material was evident among a clear majority (78%) of meta-analysts while journal editors were less convinced (47%) (Cook 1993).  This study was recently repeated, with a focus on the inclusion of grey literature in systematic reviews, and it was found that acceptance of inclusion of grey literature has increased and, although differences between groups remain (systematic review authors: 86%, editors: 69%), they may have decreased compared with the data presented by Cook et al. (Tetzlaff 2006).

Reasons for reluctance to include grey literature included the absence of peer-review of unpublished literature. It should be kept in mind, however, that the refereeing process has not always been a successful way of ensuring that published results are valid (Godlee 1999). The team involved in preparing a Cochrane review should have at least a similar level of expertise with which to appraise unpublished studies as a peer reviewer for a journal. On the other hand, meta-analyses of unpublished data from interested sources are clearly a cause for concern.


Library Services

UCL LIBRARY SERVICES

  • Guides and databases
  • Library skills

Unpublished report


To be made up of:

  • Author or organisation.
  • Year produced (in round brackets).
  • Title of report (in italics).
  • Internal report (including name of institution).
  • Unpublished.

In-text citation:

(Hegenbarth, 2014)

Reference List:

Hegenbarth, L. (2014). Focus group recommendations. Internal LGU report. Unpublished.

Quick links

  • Harvard references A-Z

Published reports are reports which an individual or organisation has distributed, either electronically or in print, to the wider public. This can include annual reports and research reports. 

Unpublished reports, such as internal reports, are referenced differently to published reports. 

Reports by government departments should be referenced as Official publications .

  • Reference a published report
  • Reference an official publication
  • << Previous: Tutor materials for academic course
  • Next: Working paper >>
  • Last Updated: Feb 28, 2024 12:08 PM
  • URL: https://library-guides.ucl.ac.uk/harvard


APA Style Examples

Unpublished examples

PERSONAL COMMUNICATION

IN TEXT 

(Communicator, personal communication, Date of communication).

L. Mardis (personal communication, July 29, 2019) reported that the library's guides underwent usability testing.

(L. A. Mardis, personal communication, January 22, 2020).

(L. A. Mardis, class handouts, January 21, 2020).

REFERENCE LIST

[ APA Citation Example for an Interview ]

Mardis, L. A. (2018, October 19). Social media success in academic libraries [Interview]. Maryville, MO: Northwest Missouri State University.

(For more examples, see p. 340 of the 7th edition)

[ APA Citing Example - Test ]

Goldberg, I. K. (2003). Screening for Bipolar Spectrum Disorders [Measurement instrument]. http://psychiatryassociatespc.com/doc/Goldberg's_bipolar_screening_scale.pdf

IN TEXT

In this study, Goldberg's (2003) Screening for Bipolar Spectrum Disorders was used to identify whether individuals were most likely suffering from major (unipolar) depression.  

  • << Previous: AI
  • Next: Stats/Figures >>
  • Last Updated: May 17, 2024 12:55 PM
  • URL: https://libguides.nwmissouri.edu/apa


Search for unpublished data by systematic reviewers: an audit

Volume 7, Issue 10

  • Hedyeh Ziai 1 , 2 ,
  • Rujun Zhang 1 , 3 ,
  • An-Wen Chan 4 , 5 ,
  • Nav Persaud 3 , 6 , 7
  • 1 Keenan Research Centre, Li Ka Shing Knowledge Institute, St Michael’s Hospital , Toronto , Canada
  • 2 Faculty of Medicine , University of Ottawa , Ottawa , Canada
  • 3 Department of Family and Community Medicine , University of Toronto , Canada
  • 4 Women’s College Research Institute, Women’s College Hospital , Toronto , Canada
  • 5 Department of Medicine , University of Toronto , Toronto , Canada
  • 6 Department of Family and Community Medicine , St Michael’s Hospital , Toronto , Canada
  • 7 Centre for Urban Health Solutions , Li Ka Shing Knowledge Institute, St. Michael’s Hospital , Toronto , Ontario , Canada
  • Correspondence to Dr Nav Persaud; nav.persaud{at}utoronto.ca

Objectives We audited a selection of systematic reviews published in 2013 and report the proportion of reviews that searched for unpublished data, included unpublished data in analysis and assessed for publication bias.

Design Audit of systematic reviews.

Data sources We searched PubMed and Ovid MEDLINE In-Process & Other Non-Indexed Citations between 1 January 2013 and 31 December 2013 for the following journals: Journal of the American Medical Association , The British Medical Journal , Lancet , Annals of Internal Medicine and the Cochrane Database of Systematic Reviews . We also searched the Cochrane Library and included 100 randomly selected Cochrane reviews.

Eligibility criteria Systematic reviews published in 2013 in the selected journals were included. Methodological reviews were excluded.

Data extraction and synthesis Two reviewers independently reviewed each included systematic review. The following data were extracted: whether the review searched for grey literature or unpublished data, the sources searched, whether unpublished data were included in analysis, whether publication bias was assessed and whether there was evidence of publication bias.

Main findings 203 reviews were included for analysis. 36% (73/203) of studies did not describe any attempt to obtain unpublished studies or to search grey literature. 89% (116/130) of studies that sought unpublished data found them. 33% (68/203) of studies included an assessment of publication bias, and 40% (27/68) of these found evidence of publication bias.

Conclusion A significant fraction of systematic reviews included in our study did not search for unpublished data. Publication bias may be present in almost half the published systematic reviews that assessed for it. Exclusion of unpublished data may lead to biased estimates of efficacy or safety in systematic reviews.

  • Keywords: systematic reviews, publication bias, unpublished data

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

https://doi.org/10.1136/bmjopen-2017-017737


Strengths and limitations of this study

To our knowledge, this is the first study to report the proportion of systematic reviews that searched for and included unpublished data from multiple sources including registries.

Two reviewers independently reviewed each included review to minimise bias.

Only reports of trials published in selected high-impact journals were reviewed, and this may reduce the external validity of our results.

Introduction

Readers of systematic reviews should know whether unpublished data were sought, as the conclusions of a review may depend on studies that have not been published. Studies with positive results have threefold higher odds of being published than those with negative or null results. 1 The results of studies in favour of a new treatment also have a higher chance of being published than negative studies. 2 3 Published studies are likely to report larger treatment effects than unpublished studies. 4–6 An analysis of 42 meta-analyses of drug trials found that 92% of the efficacy estimates were altered when unpublished studies that were submitted to regulatory agencies were included. 7 For example, agomelatine was reported to be an effective depression treatment by several systematic reviews of only published studies, 8 9 but no treatment effect was demonstrated when seven unpublished studies were included. 10 11 Other examples of publication bias include oseltamivir for influenza in adults, 12 statins for the prevention of venous thromboembolic events, 13 quinine for nocturnal leg cramps 14 and reboxetine for depression. 15

The Cochrane Handbook for Systematic Reviews of Interventions states that ‘the convincing evidence for the presence of several types of reporting biases demonstrates the need to search comprehensively for studies that meet the eligibility criteria for a Cochrane review’. 16 Several studies have demonstrated that widely agreed on reporting standards are not always followed. Following the publication of reporting guidelines, the reporting of randomised control trials (RCTs) 17 and abstracts 18 remained poor. Audits of adherence to reporting standards might have improved the reporting of RCTs. 19 Just as changes to reporting guidelines for RCTs improved RCT reporting, changes to reporting guidelines for systematic reviews may improve systematic review reporting. A recent editorial and other published guidance suggest that data should routinely be sought from regulatory agencies and trial registries. 20 21

We performed an audit of systematic reviews of healthcare interventions published in 2013 to determine the proportion of reviews that reported searching for unpublished data. We also determined how many systematic reviews included unpublished data in the analysis and how many assessed for publication bias.

Eligibility criteria

Ethics approval was not required as the study was based on published systematic reviews. We included all full-text systematic reviews (with prespecified methods) that were published in 2013 in Journal of the American Medical Association  ( JAMA ), British Medical Journal  ( BMJ ), Lancet and Annals of Internal Medicine , as well as a random subset of 100 reviews published in the Cochrane Database of Systematic Reviews . Methodological reviews were excluded.

Search strategy

PubMed and the Ovid MEDLINE In-Process & Other Non-Indexed Citations database were used to identify all systematic reviews published between 1 January 2013 and 31 December 2013 in the following journals: JAMA , BMJ , Lancet and Annals of Internal Medicine . The search for each source included the journal name, ‘systematic review’ and the publication date (eg, the search strategy used for JAMA in PubMed was: ‘JAMA’[journal] AND ‘systematic review’[title] AND ‘2013’[pdat]). The journals were also manually searched to identify potentially relevant papers missed by the electronic searches.
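The per-journal searches described above can be sketched as simple string templates. This is only an illustration of how the query strings are assembled (straight quotes are used in place of the article's typographic quotes); it does not run a live PubMed search:

```python
# Per-journal PubMed query strings, as described in the search strategy.
# Field tags: [journal] = journal name, [title] = words in the article title,
# [pdat] = publication date.
JOURNALS = ["JAMA", "BMJ", "Lancet", "Annals of Internal Medicine"]

def build_query(journal, year=2013):
    """Return the PubMed search string for one journal and year."""
    return f'"{journal}"[journal] AND "systematic review"[title] AND "{year}"[pdat]'

queries = [build_query(j) for j in JOURNALS]
print(queries[0])  # "JAMA"[journal] AND "systematic review"[title] AND "2013"[pdat]
```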

The Cochrane Library was also searched to identify literature published in the Cochrane Database of Systematic Reviews not retrieved by the previous electronic searches. The search was limited to the ‘Cochrane Database of Systematic Reviews’, the publication date was set to ‘2013 to 2013’ and the product type was identified as ‘Review’. A random number sequence generator was used to select and include 100 reviews from the Cochrane Database of Systematic Reviews . Based on our piloting of the data abstraction instrument, we believed that 100 reviews would be sufficient to determine the common approaches to unpublished data. The date of the last search for all databases was 11 February 2014.
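The random selection of 100 reviews from the 1090 retrieved can be sketched as sampling without replacement. The review IDs below are hypothetical stand-ins, and the paper does not specify the generator used; this is just a minimal reproducible illustration:

```python
import random

# Hypothetical stand-ins for the 1090 reviews retrieved from the Cochrane Library.
retrieved = [f"review-{i:04d}" for i in range(1090)]

rng = random.Random(42)                 # fixed seed so the draw is reproducible
selected = rng.sample(retrieved, 100)   # 100 reviews, drawn without replacement
```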

Data extraction

Unpublished data included complete trials that have never been published as well as specific outcomes that are not reported in published trials. For this study, we considered data appearing in conference proceedings, research reports and dissertations as part of the grey literature, and these were included as a type of unpublished data.

From each systematic review included in the analysis, the following general characteristics were collected: journal name, title, first author and the date of publication. The following main outcomes were extracted: whether there was any search for unpublished data as described in the methods or evidence of unpublished data inclusion, whether unpublished data were used in meta-analysis, whether the interventions of review were pharmacological in nature and the results of any assessment of publication bias.

Two authors (HZ, NP or RZ) independently extracted data from the Abstract, Methods and Results sections using a standardised electronic form. Disagreements were discussed until consensus was achieved or resolved by a third individual. The following information was collected from the methods sections of each report: databases searched, electronic search strategy, other sources searched (conference abstracts, unpublished studies, ongoing studies, contact with study authors/experts and so on), any restrictions to publication status, language, whether publication bias was assessed and methods used to assess publication bias. For the results section, we reviewed any analysis of unpublished data and assessment of publication bias. For the discussion section, we assessed whether the authors commented on the presence of publication bias. Additionally, the following terms were searched electronically in the article text to ensure inclusion of all relevant information: ‘unpublish’, ‘publication’, ‘bias’, ‘funnel’, ‘Begg’, ‘Egger’, ‘gray’, ‘grey’ and ‘reporting’.

Data synthesis and analysis

The primary purpose of this study was to determine the proportion of systematic reviews that reported searching for unpublished data. The secondary objective was to determine the proportion of systematic reviews that found unpublished data and included them in the meta-analysis. Pearson χ 2 tests were completed to assess for any difference between systematic reviews of pharmacological versus other interventions, and for any difference between reviews published in the Cochrane Database of Systematic Reviews versus the other journals.
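The Pearson χ² comparison above can be illustrated on a 2×2 table (searched vs did not search, crossed with the two review groups). The function below computes only the χ² statistic, not the p-value, and the counts in the check are hypothetical; the paper reports only the resulting p-value (p<0.0001):

```python
# Pearson chi-square statistic for a 2x2 contingency table, computed directly
# from observed and expected counts under independence.
def chi_square_2x2(table):
    """table = [[a, b], [c, d]] of observed counts; returns the statistic."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Perfectly independent counts give a statistic of 0.
print(chi_square_2x2([[10, 10], [10, 10]]))  # 0.0
```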

Results

We identified 104 systematic reviews published in JAMA (n=13), BMJ (n=40), Lancet (n=10) and Annals of Internal Medicine (n=41). The Cochrane Library search strategy yielded 1090 results, from which 100 articles were randomly selected. Of the 204 systematic reviews reviewed in full text, one was excluded as a methodological review, leaving a final cohort of 203 reviews. Seventy-one reviews (35%) addressed pharmaceutical interventions.

Search for unpublished data

Of the 203 included systematic reviews, 73 (36%) did not describe in the methods whether there was any search for unpublished or grey literature data and 130 (64%) described some search for unpublished data. Overall, 42% (86/203) of reviews described searching for unpublished completed studies (with or without other types of unpublished data) in their search strategy and 22% (44/203) of reviews described searching the grey literature without specifically describing searching for unpublished studies. No reviews described only searching for unreported data from published studies included in the review in their methods.

Table 1 summarises the number of reviews that searched each source, according to what was reported in the entire report (results and discussion as well as the methods section). National and international trial registries were the most frequently searched source of unpublished data, followed by contact with individuals or organisations and conference proceedings. Specifically, 42% (85/203) searched national and international trials registers, 41% (84/203) contacted individuals or organisations (eg, sponsors or research organisations), 35% (71/203) searched conference proceedings, 8% (16/203) searched pharmaceutical industry trials registers and 2% (4/203) searched subject-specific trials registers. No review included a search for data from regulators via their websites or information requests.

Table 1: Sources of data and yield of searches in systematic reviews. Types of unpublished data are in rows, and sources of unpublished data are in columns.

Some reviews identified types of unpublished data that were not described in the search strategy in the Methods section (ie, they included unpublished data in the results section without describing the methodology of searching for unpublished data in the methods). Among the 130 reviews that described searching sources for unpublished data, 55 of them (43%) described only searching one source, 37 (28%) reviews described searching two sources and the remaining 38 (29%) reviews described searching three or more sources. No review described searching all sources for unpublished data. Furthermore, some included systematic reviews did not specify the source of unpublished data.

In a prespecified analysis of factors associated with performing a search for unpublished studies, we did not find any difference between reviews of pharmacological and non-pharmacological interventions. However, reviews published in the Cochrane Database of Systematic Reviews were significantly more likely to search for unpublished studies than those published in standard journals (p<0.0001).

The unpublished data discovered by systematic reviews were often not included in the results and analyses. Among the 130 reviews that included a search for unpublished or grey literature data, 89% (n=116) found such data. Of the 116 reviews that searched and found unpublished data, 46 reviews (40%) both identified and included unpublished data. The remaining 70 reviews did not include the supplementary data. Among the 46 studies that included unpublished data, 23 (50%) included data from unpublished studies. Twenty reviews (43%) included unpublished data from published studies and 7% (3/46) included data from abstracts of published papers only.

Assessment of publication bias

Thirty-three per cent (68/203) of the reviews included an assessment of publication bias. An additional 27% (55/203) planned a publication bias assessment, but these analyses were not reported: 10 did not find any eligible studies, 42 had an insufficient number (<10) of included studies or cited other reasons, and three did not provide an explanation. Thirty-nine per cent (80/203) of the reviews did not describe an intent to assess publication bias.

Of the 68 systematic reviews that assessed publication bias, 58 reviews performed statistical or graphical analysis, and 10 reviews employed other methods such as a discussion of the likelihood that highly statistically significant results could be explained by publication bias. Thirty-four per cent (20/58) of the non-qualitative assessments and 70% (7/10) of the qualitative assessments were significant for publication bias ( table 2 ). There was a trend towards systematic reviews that searched for unpublished data being less likely to indicate publication bias.

Table 2: Results of assessment for publication bias

Discussion

Among the 203 systematic reviews published in high-impact general medical journals in 2013, 36% did not describe any attempt to search for unpublished studies, although guidelines recommend searching for unpublished data. Of the 116 reviews that completed a search and found unpublished data or grey literature, 40% included the unpublished data in their analyses. Thirty-three per cent of the reviews included a publication bias assessment, and 40% of these revealed evidence of publication bias. Quantitative and/or qualitative suggestion of publication bias was more prevalent in reviews that did not search for unpublished data than in those that did.

To our knowledge, this is the largest study that has investigated the proportion of systematic reviews that searched for and included unpublished data. A 2017 audit found that 52% of selected systematic reviews did not report a search of trial registries. 22 A cross-sectional survey of corresponding authors of Cochrane reviews with a 37% response rate found that 76% of Cochrane respondents reported searching for unpublished data, that 82% of unpublished data were used in analysis and that the most common source of unpublished data was contacting the study investigators. 23 Over 10% of Cochrane reviews from 2000 to 2006 included unpublished studies. 24

The results of this study highlight inadequacies in identifying unpublished data in systematic reviews. Cochrane reviews might have been more likely to search for unpublished data because of guidance in the Cochrane handbook as well as rigorous protocol review and editorial practices. Clear standards regarding the search for and inclusion of unpublished studies may help. There is some evidence that unpublished data are sometimes misleading 25 and that unpublished data may not contain sufficient information to assess methodological quality and in turn may be unreliable. Conference abstracts may not always be reliable; 40%–60% may not have reported the main outcome results in the same way as the final published study. 25–30 Further studies will be needed to determine whether including unpublished data improves systematic reviews, given that unpublished data may be unreliable.

Current reporting guidelines for systematic reviews such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) require only specification of the data sources searched, but they do not recommend describing whether a search for unpublished data was performed. 31 One of the 27 items is on the description of information sources, but the checklist does not mention unpublished data. The PRISMA statement explanation and elaboration indicates that describing the search for unpublished information is ‘useful’ but not mandatory. Specifically, they indicate: ‘ Authors should also report if they attempted to acquire any missing information (such as on study methods or results) from investigators or sponsors; it is useful to describe briefly who was contacted and what unpublished information was obtained ’. 31 Two systematic review quality assessment tools, A Measurement Tool to Assess Systematic Reviews and the Risk of Bias in Systematic Reviews (ROBIS), explicitly include the search for unpublished data. 32 33

The reporting of the search for unpublished data in systematic reviews may be improved by its explicit inclusion as a recommendation in the PRISMA reporting guideline. Similarly, the Consolidated Standards of Reporting Trials (CONSORT) statement was developed in response to concerns about the quality of RCTs. 34 35 Evaluations of its effectiveness have suggested that journal adoption of CONSORT is associated with improved quality of reporting of RCTs. 17 36–38 Furthermore, the inclusion of unpublished studies in systematic reviews might lead to those primary studies eventually being published through initiatives such as the Restoring Invisible and Abandoned Trials (RIAT) process. 26

To reduce potential reporting bias, it is important to search for unpublished data using several sources: subject matter experts, investigators, commercial sponsors, trial registries, regulatory agency documents and conference proceedings. 21 Searching for trial protocols can help to better understand the study methods and identify selective reporting of outcomes within published studies. 1 If there are concerns about the quality of data in any included study, whether it is published or not, a sensitivity analysis should be conducted to determine the effect of the suspect data.

Limitations

There are some limitations to this study. Only reports of trials published in JAMA , BMJ , Lancet , Annals of Internal Medicine and the Cochrane Database of Systematic Reviews were reviewed; systematic reviews published in lower impact factor journals were not included. We also randomly included 100 reviews from a group of 1090 reviews in the Cochrane Library. The findings might therefore not be representative of all systematic reviews published in 2013. We report the searches as described in the methods section of the publications, and the actual searches may have been different. We found that two systematic reviews obtained unreported data regarding published studies from registries, although they did not describe searching these registries. Time-lag bias may have led us to overestimate the under-reporting of unpublished data, although time-lag bias cannot explain the poor descriptions of the search strategy.

Conclusion

The search for unpublished data in systematic reviews is still suboptimal. A significant number of systematic reviews published in 2013 did not search for grey literature and unpublished data. Inadequate reporting of unpublished data and grey literature can lead to reporting bias. Almost half of the included reviews that assessed for publication bias suggested the presence of publication bias. Improving reporting guidelines for systematic reviews and better adherence to reporting guidelines may help address this issue.



Contributors HZ contributed to the conception and design of the project and the acquisition, analysis and interpretation of the data; drafted the work and revised it; approved the final version to be published; and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy and integrity are appropriately investigated and resolved. RZ contributed to the analysis and interpretation of the data; drafted the work and revised it; provided final approval of the version to be published; and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy and integrity are appropriately investigated and resolved. A-WC contributed to the conception of the project, reviewed it critically for important intellectual content, provided final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy and integrity are appropriately investigated and resolved. NP contributed to the conception and design of the project; the interpretation of the data; revised the work; approved the final version to be published; and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy and integrity are appropriately investigated and resolved.

Funding There was no specific funding for this project. NP is supported by a Physician Services Incorporated Graham Farquharson Knowledge Translation Fellowship.

Disclaimer The views expressed in the submitted article are those of the authors and not an official position of the institution or funder.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Data sharing statement We have no further data to share.


Citation Guide: How to cite UNPUBLISHED SOURCES


Theses and Dissertations

Note: Note number. Author First Last Name, “Title” (Type of dissertation, Location of Publisher, Year of Pub.), pages cited, URL or database (if online).

Sample Note:

      43. Afrah Daaimah Richmond, “Unmasking the Boston Brahmin: Race and Liberalism in the Long Struggle for Reform at Harvard and Radcliff, 1945-1990” (PhD diss., New York University, 2011), 211-12, ProQuest Dissertations & Theses.

Bibliography:

Author Last, First Name. “Title.” Type of Dissertation, Location of Publisher, Year of Pub. URL or database (if online).

Sample Citation:

Culcasi, Karen Leigh. “Cartographic Representations of Kurdistan in the Print Media.” Master’s Thesis, Syracuse University, 2003.

Lectures or Papers presented at a meeting

Note number. Author First Last Name, “Title” (Sponsor, Location, Year). URL or database (if online).

43. Irineu de Carvalho Filho and Renato P. Colistete, “Education Performance: Was it All Determined 100 Years Ago? Evidence from Sao Paulo, Brazil” (Paper presented at the 70th annual meeting of the Economic History Association, Evanston, IL, September 24-26, 2010). http://mpra.ub.uni-muenchen.de/24494/1/MPRA_paper_24494.pdf.

Bibliography:

Author Last, First Name. “Title of Speech or lecture.” Sponsor, Location, Year. URL or database (if online).

Crane, Gregory R. “Contextualizing Early Modern Religion in a Digital World.” Lecture, Newberry Library, Chicago, September 16, 2011.

Carvalho Filho, Irineu de, and Renato P. Colistete. “Education Performance: Was it All Determined 100 Years Ago? Evidence from Sao Paulo, Brazil.” Paper presented at the 70th annual meeting of the Economic History Association, Evanston, IL, September 24-26, 2010. http://mpra.ub.uni-muenchen.de/24494/1/MPRA_paper_24494.pdf.

  • Last Updated: Aug 22, 2023 2:00 PM
  • URL: https://utahtech.libguides.com/citationguide

Holocaust-Era Assets


Unpublished Research Papers

Unpublished Research papers, relating to Holocaust-Era Assets, made available online

  • Berenbaum, Michael. Testimony before the Nazi War Criminals Interagency Working Group , June 24, 1999.
  • Bradsher, Greg. Archivists, Archival Records, and Looted Cultural Property Research . Paper presented at the Vilnius International Forum on Holocaust-Era Looted Cultural Assets, Lithuania, October 3, 2000.
  • Bradsher, Greg. Turning history into justice: the search for records relating to Holocaust-Era Assets at the National Archives . Paper given at the Society of American Archivists, Pittsburgh, PA, August 27, 1999.
  • Kleiman, Miriam. My search for "GOLD" at the National Archives . Paper given at the Society of American Archives, Pittsburgh, PA, August 27, 1999.
  • Marchesano, Louis. Classified Records, Nazi Collecting, and Looted Art: An Art Historian's Perspective . Paper delivered to the Nazi War Criminal Records Interagency Working Group at the Simon Wiesenthal Center, Los Angeles, June 24, 1999.
  • Rickman, Gregg. The Truth Shall Set You Free: The Archives and the Swiss Bank . Paper delivered at the Society of American Archivists, Pittsburgh, PA, August 27, 1999. Rickman is scheduled to discuss his new book, Swiss Banks and Jewish Souls, at the National Archives Author Lecture and Booksigning event on September 9, 1999.
  • Sullivan, Steve. Marta's List: The Pursuit of Holocaust Survivors' Lost Insurance Claims .
  • Wolfe, Robert. A Brief Chronology of the National Archives Captured Records Staff

Symposium participants are invited to send their papers, electronically or in hardcopy, to [email protected] or to Lida Churchville, National Archives Library, Rm. 2380, 8061 Adelphi Rd, College Park, MD 20740.


How to Cite an Unpublished Paper or Manuscript in APA Referencing


  • 23rd June 2020

Did you know that you can cite unpublished works, such as in-progress research papers or manuscripts, in an essay? Well, you can! The key is citing them correctly. And in this post, we will look at how to cite an unpublished paper or manuscript in APA referencing .

How to Cite an Unpublished Paper in APA referencing

In APA referencing, you can cite an unpublished work in the same way as you would a published one. This means giving an author’s name and a date in brackets . The only difference is that you give a year of production (i.e., when the paper was written) rather than a year of publication:

Few fully understand the publication process (Clarke, 2020).

Like other sources, if you name the author in the text, you do not need to repeat it in the brackets. And if you quote an unpublished paper, you should give page numbers. For example:

According to Clarke (2020), publication “is a complex process” (p. 20).

When a paper has been accepted for publication but not yet published, however, you should use the term “in press” in place of a year in citations:

Few fully understand the publication process (Clarke, in press).

How to Reference an Unpublished Work in APA Referencing

When adding an unpublished paper to an APA reference list , the correct format will depend on where it is in the publication process. But let’s start with works that will not be published at all (e.g., a paper that the author never submitted or that the publisher rejected).

In this case, the correct format is:

Author Surname, Initial(s). (Year of Production). Title of manuscript [Unpublished manuscript]. Department, University Name.

So, in practice, we could cite an unpublished paper like this:

Clarke, J. (2020). The publication process explained [Unpublished manuscript]. School of Journalism, Media and Performance, University of Central Lancashire.

Referencing a Work Submitted for Publication

If a paper has been submitted for publication but not yet accepted, the reference should state “manuscript submitted for publication.” However, you should not include any other information about the submission, such as where it was submitted, as this information could go out of date quickly.

Find this useful?

Subscribe to our newsletter and get writing tips from our editors straight to your inbox.

The correct format in this case is therefore:

Author Surname, Initial(s). (Year of Production). Title of manuscript [Manuscript submitted for publication]. Department, University Name.

For example, we would list the paper above as follows:

Clarke, J. (2020). The publication process explained [Manuscript submitted for publication]. School of Journalism, Media and Performance, University of Central Lancashire.

Referencing a Paper in Press

If a paper has been accepted for publication, use the following format:

Author Surname, Initial(s). (in press). Title. Periodical or Journal Title .

As you can see, we now include both:

  • The phrase “in press” to show that the paper has been accepted by the journal and is now awaiting publication.
  • The title of the journal that accepted it (note, too, that we only use italics for the journal title here, not the title of the paper itself).

In practice, then, we would reference a paper awaiting publication like this:

Clarke, J. (in press). The publication process explained. Publishing Research Quarterly.

It is always worth checking the status of submitted papers before finalizing your reference list, too, as they can go from “submitted for publication” to “in press” quite suddenly, leaving your reference out of date.

Hopefully, you will now be able to cite an unpublished paper or manuscript correctly. But if you would like any further help with your writing, why not submit a document for proofreading ?
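The three statuses covered above can be sketched as a small plain-text formatter. The function name is my own, the author and journal names come from the post's examples, and a plain string cannot show the italics APA requires for journal titles:

```python
# Build a plain-text APA reference for an unpublished work, switching on the
# three statuses described in this post.
def apa_unpublished(author, year, title, status, affiliation=None, journal=None):
    if status == "unpublished":
        return f"{author} ({year}). {title} [Unpublished manuscript]. {affiliation}."
    if status == "submitted":
        return f"{author} ({year}). {title} [Manuscript submitted for publication]. {affiliation}."
    if status == "in press":
        # In-press works replace the year with "in press" and name the journal.
        return f"{author} (in press). {title}. {journal}."
    raise ValueError(f"unknown status: {status}")

print(apa_unpublished("Clarke, J.", 2020, "The publication process explained",
                      "in press", journal="Publishing Research Quarterly"))
# Clarke, J. (in press). The publication process explained. Publishing Research Quarterly.
```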


APA 7th Edition Style Guide: Unpublished Manuscripts/Informal Publications (i.e. course packets and dissertations)

Formatting your References

Once you have typed your references on the reference page, apply a hanging indent and double-space the entire reference list. In Microsoft Word, highlight the references from A to Z, open the Paragraph dialog from the Word ribbon, then select Hanging under Indentation and Double under Spacing. See the Formatting your References tab for instructions on doing this on a Mac or in Google Docs.

Abbas, D. D. F. (2020). Manipulating of audio-visual aids in the educational processes in Al-Hilla University College. International Journal of Psychosocial Rehabilitation, 24(3), 1248-1263. https://doi.org.db12.linccweb.org/10.37200/ijpr/v24i3/pr200875

  • Last Updated: Feb 13, 2024 6:21 PM
  • URL: https://irsc.libguides.com/APA

Why the Pandemic Probably Started in a Lab, in 5 Key Points

By Alina Chan

Dr. Chan is a molecular biologist at the Broad Institute of M.I.T. and Harvard, and a co-author of “Viral: The Search for the Origin of Covid-19.”

This article has been updated to reflect news developments.

On Monday, Dr. Anthony Fauci returned to the halls of Congress and testified before the House subcommittee investigating the Covid-19 pandemic. He was questioned about several topics related to the government’s handling of Covid-19, including how the National Institute of Allergy and Infectious Diseases, which he directed until retiring in 2022, supported risky virus work at a Chinese institute whose research may have caused the pandemic.

For more than four years, reflexive partisan politics have derailed the search for the truth about a catastrophe that has touched us all. It has been estimated that at least 25 million people around the world have died because of Covid-19, with over a million of those deaths in the United States.

Although how the pandemic started has been hotly debated, a growing volume of evidence — gleaned from public records released under the Freedom of Information Act, digital sleuthing through online databases, scientific papers analyzing the virus and its spread, and leaks from within the U.S. government — suggests that the pandemic most likely occurred because a virus escaped from a research lab in Wuhan, China. If so, it would be the most costly accident in the history of science.

Here’s what we now know:

1 The SARS-like virus that caused the pandemic emerged in Wuhan, the city where the world’s foremost research lab for SARS-like viruses is located.

  • At the Wuhan Institute of Virology, a team of scientists had been hunting for SARS-like viruses for over a decade, led by Shi Zhengli.
  • Their research showed that the viruses most similar to SARS‑CoV‑2, the virus that caused the pandemic, circulate in bats that live roughly 1,000 miles away from Wuhan. Scientists from Dr. Shi’s team traveled repeatedly to Yunnan province to collect these viruses and had expanded their search to Southeast Asia. Bats in other parts of China have not been found to carry viruses that are as closely related to SARS-CoV-2.

[Map: The closest known relatives to SARS-CoV-2 were found in southwestern China and in Laos. There are hundreds of large cities in China and Southeast Asia, but the pandemic started roughly 1,000 miles away, in Wuhan, home to the world’s foremost SARS-like virus research lab. Sources: Sarah Temmam et al., Nature; SimpleMaps. Note: Cities shown have a population of at least 200,000.]

  • Even at hot spots where these viruses exist naturally near the cave bats of southwestern China and Southeast Asia, the scientists argued, as recently as 2019, that bat coronavirus spillover into humans is rare.
  • When the Covid-19 outbreak was detected, Dr. Shi initially wondered if the novel coronavirus had come from her laboratory, saying she had never expected such an outbreak to occur in Wuhan.
  • The SARS‑CoV‑2 virus is exceptionally contagious and can jump from species to species like wildfire. Yet it left no known trace of infection at its source or anywhere along what would have been a thousand-mile journey before emerging in Wuhan.

2 The year before the outbreak, the Wuhan institute, working with U.S. partners, had proposed creating viruses with SARS‑CoV‑2’s defining feature.

  • Dr. Shi’s group was fascinated by how coronaviruses jump from species to species. To find viruses, they took samples from bats and other animals, as well as from sick people living near animals carrying these viruses or associated with the wildlife trade. Much of this work was conducted in partnership with the EcoHealth Alliance, a U.S.-based scientific organization that, since 2002, has been awarded over $80 million in federal funding to research the risks of emerging infectious diseases.
  • The laboratory pursued risky research that resulted in viruses becoming more infectious: Coronaviruses were grown from samples from infected animals and genetically reconstructed and recombined to create new viruses unknown in nature. These new viruses were passed through cells from bats, pigs, primates and humans and were used to infect civets and humanized mice (mice modified with human genes). In essence, this process forced these viruses to adapt to new host species, and the viruses with mutations that allowed them to thrive emerged as victors.
  • By 2019, Dr. Shi’s group had published a database describing more than 22,000 collected wildlife samples. But external access was shut off in the fall of 2019, and the database was not shared with American collaborators even after the pandemic started, when such a rich virus collection would have been most useful in tracking the origin of SARS‑CoV‑2. It remains unclear whether the Wuhan institute possessed a precursor of the pandemic virus.
  • In 2021, The Intercept published a leaked 2018 grant proposal for a research project named Defuse, which had been written as a collaboration between EcoHealth, the Wuhan institute and Ralph Baric at the University of North Carolina, who had been on the cutting edge of coronavirus research for years. The proposal described plans to create viruses strikingly similar to SARS‑CoV‑2.
  • Coronaviruses bear their name because their surface is studded with protein spikes, like a spiky crown, which they use to enter animal cells. The Defuse project proposed to search for and create SARS-like viruses carrying spikes with a unique feature: a furin cleavage site — the same feature that enhances SARS‑CoV‑2’s infectiousness in humans, making it capable of causing a pandemic. Defuse was never funded by the United States. However, in his testimony on Monday, Dr. Fauci explained that the Wuhan institute would not need to rely on U.S. funding to pursue research independently.

[Diagram: The Wuhan lab ran risky experiments to learn about how SARS-like viruses might infect humans. 1. Collect SARS-like viruses from bats and other wild animals, as well as from people exposed to them. 2. Identify high-risk viruses by screening for spike proteins that facilitate infection of human cells. 3. Create new coronaviruses by inserting spike proteins or other features that could make the viruses more infectious in humans (in Defuse, the scientists proposed to add a furin cleavage site to the spike protein). 4. Infect human cells, civets and humanized mice with the new coronaviruses, to determine how dangerous they might be.]

  • While it’s possible that the furin cleavage site could have evolved naturally (as seen in some distantly related coronaviruses), out of the hundreds of SARS-like viruses cataloged by scientists, SARS‑CoV‑2 is the only one known to possess a furin cleavage site in its spike. And the genetic data suggest that the virus had only recently gained the furin cleavage site before it started the pandemic.
  • Ultimately, a never-before-seen SARS-like virus with a newly introduced furin cleavage site, matching the description in the Wuhan institute’s Defuse proposal, caused an outbreak in Wuhan less than two years after the proposal was drafted.
  • When the Wuhan scientists published their seminal paper about Covid-19 as the pandemic roared to life in 2020, they did not mention the virus’s furin cleavage site — a feature they should have been on the lookout for, according to their own grant proposal, and a feature quickly recognized by other scientists.
  • Worse still, as the pandemic raged, their American collaborators failed to publicly reveal the existence of the Defuse proposal. The president of EcoHealth, Peter Daszak, recently admitted to Congress that he doesn’t know about virus samples collected by the Wuhan institute after 2015 and never asked the lab’s scientists if they had started the work described in Defuse. In May, citing failures in EcoHealth’s monitoring of risky experiments conducted at the Wuhan lab, the Biden administration suspended all federal funding for the organization and Dr. Daszak, and initiated proceedings to bar them from receiving future grants. In his testimony on Monday, Dr. Fauci said that he supported the decision to suspend and bar EcoHealth.
  • Separately, Dr. Baric described the competitive dynamic between his research group and the institute when he told Congress that the Wuhan scientists would probably not have shared their most interesting newly discovered viruses with him. Documents and email correspondence between the institute and Dr. Baric are still being withheld from the public while their release is fiercely contested in litigation.
  • In the end, American partners very likely knew of only a fraction of the research done in Wuhan. According to U.S. intelligence sources, some of the institute’s virus research was classified or conducted with or on behalf of the Chinese military. In the congressional hearing on Monday, Dr. Fauci repeatedly acknowledged the lack of visibility into experiments conducted at the Wuhan institute, saying, “None of us can know everything that’s going on in China, or in Wuhan, or what have you. And that’s the reason why — I say today, and I’ve said at the T.I.,” referring to his transcribed interview with the subcommittee, “I keep an open mind as to what the origin is.”

3 The Wuhan lab pursued this type of work under low biosafety conditions that could not have contained an airborne virus as infectious as SARS‑CoV‑2.

  • Labs working with live viruses generally operate at one of four biosafety levels (known in ascending order of stringency as BSL-1, 2, 3 and 4) that describe the work practices that are considered sufficiently safe depending on the characteristics of each pathogen. The Wuhan institute’s scientists worked with SARS-like viruses under inappropriately low biosafety conditions.

[Diagram: In the United States, virologists generally use stricter Biosafety Level 3 protocols when working with SARS-like viruses: biosafety cabinets prevent viral particles from escaping, personal respirators provide a second layer of defense against breathing in the virus, gloves prevent skin contact, and disposable wraparound gowns cover much of the rest of the body. The Wuhan lab had been regularly working with SARS-like viruses under Biosafety Level 2 conditions, which could not prevent a highly infectious virus like SARS-CoV-2 from escaping: some work is done in the open air, masks are not required, and less protective equipment provides more opportunities for contamination. Note: Biosafety levels are not internationally standardized, and some countries use more permissive protocols than others.]

  • In one experiment, Dr. Shi’s group genetically engineered an unexpectedly deadly SARS-like virus (not closely related to SARS‑CoV‑2) that exhibited a 10,000-fold increase in the quantity of virus in the lungs and brains of humanized mice. Wuhan institute scientists handled these live viruses at low biosafety levels, including BSL-2.
  • Even the much more stringent containment at BSL-3 cannot fully prevent SARS‑CoV‑2 from escaping. Two years into the pandemic, the virus infected a scientist in a BSL-3 laboratory in Taiwan, which was, at the time, a zero-Covid country. The scientist had been vaccinated and was tested only after losing the sense of smell. By then, more than 100 close contacts had been exposed. Human error is a source of exposure even at the highest biosafety levels, and the risks are much greater for scientists working with infectious pathogens at low biosafety.
  • An early draft of the Defuse proposal stated that the Wuhan lab would do their virus work at BSL-2 to make it “highly cost-effective.” Dr. Baric added a note to the draft highlighting the importance of using BSL-3 to contain SARS-like viruses that could infect human cells, writing that “U.S. researchers will likely freak out.” Years later, after SARS‑CoV‑2 had killed millions, Dr. Baric wrote to Dr. Daszak: “I have no doubt that they followed state determined rules and did the work under BSL-2. Yes China has the right to set their own policy. You believe this was appropriate containment if you want but don’t expect me to believe it. Moreover, don’t insult my intelligence by trying to feed me this load of BS.”
  • SARS‑CoV‑2 is a stealthy virus that transmits effectively through the air, causes a range of symptoms similar to those of other common respiratory diseases and can be spread by infected people before symptoms even appear. If the virus had escaped from a BSL-2 laboratory in 2019, the leak most likely would have gone undetected until too late.
  • One alarming detail — leaked to The Wall Street Journal and confirmed by current and former U.S. government officials — is that scientists on Dr. Shi’s team fell ill with Covid-like symptoms in the fall of 2019. One of the scientists had been named in the Defuse proposal as the person in charge of virus discovery work. The scientists denied having been sick.

4 The hypothesis that Covid-19 came from an animal at the Huanan Seafood Market in Wuhan is not supported by strong evidence.

  • In December 2019, Chinese investigators assumed the outbreak had started at a centrally located market frequented by thousands of visitors daily. This bias in their search for early cases meant that cases unlinked to or located far away from the market would very likely have been missed. To make things worse, the Chinese authorities blocked the reporting of early cases not linked to the market and, claiming biosafety precautions, ordered the destruction of patient samples on January 3, 2020, making it nearly impossible to see the complete picture of the earliest Covid-19 cases. Information about dozens of early cases from November and December 2019 remains inaccessible.
  • A pair of papers published in Science in 2022 made the best case for SARS‑CoV‑2 having emerged naturally from human-animal contact at the Wuhan market by focusing on a map of the early cases and asserting that the virus had jumped from animals into humans twice at the market in 2019. More recently, the two papers have been countered by other virologists and scientists who convincingly demonstrate that the available market evidence does not distinguish between a human superspreader event and a natural spillover at the market.
  • Furthermore, the existing genetic and early case data show that all known Covid-19 cases probably stem from a single introduction of SARS‑CoV‑2 into people, and the outbreak at the Wuhan market probably happened after the virus had already been circulating in humans.

[Chart: An analysis of SARS-CoV-2’s evolutionary tree shows how the virus evolved as it started to spread through humans. The viruses that infected people linked to the market were most likely not the earliest form of the virus that started the pandemic. Source: Lv et al., Virus Evolution (2024), as reproduced by Jesse Bloom.]

  • Not a single infected animal has ever been confirmed at the market or in its supply chain. Without good evidence that the pandemic started at the Huanan Seafood Market, the fact that the virus emerged in Wuhan points squarely at its unique SARS-like virus laboratory.

5 Key evidence that would be expected if the virus had emerged from the wildlife trade is still missing.

[Diagram: In previous outbreaks of coronaviruses, scientists were able to demonstrate natural origin by collecting multiple pieces of evidence linking infected humans to infected animals: infected animals found; earliest known cases exposed to live animals; antibody evidence of animals and animal traders having been infected; ancestral variants of the virus found in animals; and documented trade of host animals between the area where bats carry closely related viruses and the outbreak site. For SARS-CoV-2, these same key pieces of evidence are still missing, more than four years after the virus emerged.]

  • Despite the intense search trained on the animal trade and people linked to the market, investigators have not reported finding any animals infected with SARS‑CoV‑2 that had not been infected by humans. Yet, infected animal sources and other connective pieces of evidence were found for the earlier SARS and MERS outbreaks as quickly as within a few days, despite the less advanced viral forensic technologies of two decades ago.
  • Even though Wuhan is the home base of virus hunters with world-leading expertise in tracking novel SARS-like viruses, investigators have either failed to collect or report key evidence that would be expected if Covid-19 emerged from the wildlife trade. For example, investigators have not determined that the earliest known cases had exposure to intermediate host animals before falling ill. No antibody evidence shows that animal traders in Wuhan are regularly exposed to SARS-like viruses, as would be expected in such situations.
  • With today’s technology, scientists can detect how respiratory viruses — including SARS, MERS and the flu — circulate in animals while making repeated attempts to jump across species. Thankfully, these variants usually fail to transmit well after crossing over to a new species and tend to die off after a small number of infections. In contrast, virologists and other scientists agree that SARS‑CoV‑2 required little to no adaptation to spread rapidly in humans and other animals. The virus appears to have succeeded in causing a pandemic upon its only detected jump into humans.

The pandemic could have been caused by any of hundreds of virus species, at any of tens of thousands of wildlife markets, in any of thousands of cities, and in any year. But it was a SARS-like coronavirus with a unique furin cleavage site that emerged in Wuhan, less than two years after scientists, sometimes working under inadequate biosafety conditions, proposed collecting and creating viruses of that same design.

While several natural spillover scenarios remain plausible, and we still don’t know enough about the full extent of virus research conducted at the Wuhan institute by Dr. Shi’s team and other researchers, a laboratory accident is the most parsimonious explanation of how the pandemic began.

Given what we now know, investigators should follow their strongest leads and subpoena all exchanges between the Wuhan scientists and their international partners, including unpublished research proposals, manuscripts, data and commercial orders. In particular, exchanges from 2018 and 2019 — the critical two years before the emergence of Covid-19 — are very likely to be illuminating (and require no cooperation from the Chinese government to acquire), yet they remain beyond the public’s view more than four years after the pandemic began.

Whether the pandemic started on a lab bench or in a market stall, it is undeniable that U.S. federal funding helped to build an unprecedented collection of SARS-like viruses at the Wuhan institute, as well as contributing to research that enhanced them. Advocates and funders of the institute’s research, including Dr. Fauci, should cooperate with the investigation to help identify and close the loopholes that allowed such dangerous work to occur. The world must not continue to bear the intolerable risks of research with the potential to cause pandemics.

A successful investigation of the pandemic’s root cause would have the power to break a decades-long scientific impasse on pathogen research safety, determining how governments will spend billions of dollars to prevent future pandemics. A credible investigation would also deter future acts of negligence and deceit by demonstrating that it is indeed possible to be held accountable for causing a viral pandemic. Last but not least, people of all nations need to see their leaders — and especially, their scientists — heading the charge to find out what caused this world-shaking event. Restoring public trust in science and government leadership requires it.

A thorough investigation by the U.S. government could unearth more evidence while spurring whistleblowers to find their courage and seek their moment of opportunity. It would also show the world that U.S. leaders and scientists are not afraid of what the truth behind the pandemic may be.

Alina Chan ( @ayjchan ) is a molecular biologist at the Broad Institute of M.I.T. and Harvard, and a co-author of “ Viral : The Search for the Origin of Covid-19.” She was a member of the Pathogens Project , which the Bulletin of the Atomic Scientists organized to generate new thinking on responsible, high-risk pathogen research.

  • Share full article

Advertisement

IMAGES

  1. (DOC) Unpublished Research Papers

    how to find unpublished research papers

  2. (PDF) Unpublished research

    how to find unpublished research papers

  3. How to Publish a Research Paper in Reputed Journals?

    how to find unpublished research papers

  4. Unpublished dissertations. Best Website For Homework Help Services

    how to find unpublished research papers

  5. About Scientific Reports

    how to find unpublished research papers

  6. How to get unpublished research papers from any websites and get them published under my name

    how to find unpublished research papers

VIDEO

  1. Find Unpublished List Page on a Google Site

  2. Research Emotional Intelligence Unpublished Draft 1

  3. How to Submit a Paper and Check the Status

  4. How To Find Draft Post On Facebook

  5. Unpublished Manuscript

  6. Inaugural video of The Third Annual International Capital Markets Conference 2022

COMMENTS

  1. Research Paper

    97% of Grammarly users report that it is their favorite writing tool. Write clear, compelling papers and essays with Grammarly's real-time writing feedback.

  2. Unpublished Research

    Presentations, posters, conference papers published on personal websites or research networks like ResearchGate or Mendeley, Theses and dissertations published on the web or through repositories. Unpublished research can be harder to find a number of reasons. There is no one place to look. You have to dig a little deeper.

  3. Step 5: Unpublished Materials

    Step 5: Searching for Unpublished Articles. The publication process takes a long time—sometimes a year or more—so it's important to search for articles on your topic that have already been written but not yet published. SSRN and bepress are the best sources for unpublished articles and working papers: Social Science Research Network (SSRN)

  4. Out of sight but not out of mind: how to search for unpublished

    Corrrespondence to: A-W Chan [email protected]. A key challenge in conducting systematic reviews is to identify the existence and results of unpublished trials, and unreported methods and outcomes within published trials. An-Wen Chan provides guidance for reviewers on adopting a comprehensive strategy to search beyond the published literature.

  5. How to get access to articles that are not Open Access

    Accessing papers without VPN. It is not necessary to use VPN to access texts, however. An alternative is to navigate to a university library's search interface and download the paper through it. Every library works slightly differently in service its users. At Aalto University, it is possible to enter e.g., the paper's title, and the ...

  6. OATD

    You may also want to consult these sites to search for other theses: Google Scholar; NDLTD, the Networked Digital Library of Theses and Dissertations.NDLTD provides information and a search engine for electronic theses and dissertations (ETDs), whether they are open access or not. Proquest Theses and Dissertations (PQDT), a database of dissertations and theses, whether they were published ...

  7. Methods for obtaining unpublished data

    Methods to obtain unpublished studies. One of the six included studies assessed methods to obtain unpublished studies (i.e. data for studies that have never been published). Proportion of unpublished studies (data) obtained as defined and reported by authors.

  8. Searching for unpublished studies.

    Searching for unpublished studies. A consortium consisting of York Health Economics Consortium and the Cochrane Information Retrieval Methods Group has looked into the issue of searching for unpublished studies and obtaining access to unpublished data and has produced the following report and bibliography: Arber M, Cikalo M, Glanville J ...

  9. Unpublished or informally published work

    Sometimes, however, the most useful research article might not be available as a peer-reviewed published article but it is available to us in an unpublished form. Use other peer-reviewed articles if possible but if there is a lack of published research reports and, for example, a pre-press version is available directly from the author, you may ...

  10. College & Research Libraries News

    INTERNET RESOURCES: Gray literature: Resources for locating unpublished research. by Brian S. Mathews. ... • Research Papers in Economics (RePEc).This site is an international collaborative effort to enhance the dissemination of research in economics and related fields. The project includes a mostly full-text database of working papers ...

  11. Guides: APA

    Unpublished material. These types of publications are manuscripts that might be submitted for publication or works in progress. In this category, you might also find manuscripts that are not formally published but are retrievable online on personal or institutional websites. If no year can be identified, use "n.d." instead (= no date).

  12. Literature Search: Databases and Gray Literature

    Gray Literature. Gray Literature is the term for information that falls outside the mainstream of published journal and monograph literature and is not controlled by commercial publishers. It includes: hard-to-find studies, reports, or dissertations; conference abstracts or papers; governmental or private sector research.

  13. Methods for obtaining unpublished data

    Background: In order to minimise publication bias, authors of systematic reviews often spend considerable time trying to obtain unpublished data. These include data from studies conducted but not published (unpublished data), as either an abstract or full-text paper, as well as missing data (data available to original researchers but not reported) in published abstracts or full-text publications.

  14. Library Guides: Systematic Reviews: Where to search

    In conducting a systematic review, it is important that you search widely through published and unpublished research, to find all information available on a particular topic. This usually includes searching sources such as: Bibliographic databases; Trials registers; Reviews and guidelines; Grey literature; Hand searching other relevant sources.

  15. ACAP Learning Resources: Reference in APA 7: Unpublished Works

    Proofs with stochastic-aware conformance checking: An entropy-based approach [Unpublished manuscript]. Faculty of Science and Technology, Queensland University of Technology.

  16. 10.3.2 Including unpublished studies in systematic reviews

    10.3.2 Including unpublished studies in systematic reviews. Publication bias clearly is a major threat to the validity of any type of review, but particularly of unsystematic, narrative reviews. Obtaining and including data from unpublished trials appears to be one obvious way of avoiding this problem. Hopewell and colleagues conducted a review ...

  17. Unpublished report

    Unpublished report. To be made up of: Author or organisation. Year produced (in round brackets). Title of report (in italics). Internal report (including name of institution). Unpublished.

  18. Unpublished/Not retrievable

    Find how to cite a web page, journal, book, eBook, textbook, magazine, newspaper, video, DVD, TV show, Twitter, Tweet, Instagram, Facebook, or blog post. Find how to format in-text/parenthetical citations, papers or title pages and cite when no author. Class documents/notes, Interviews/letters/emails, Surveys, AI/ChatGPT

  19. How do I cite unpublished research in APA?

    You will cite unpublished work the same as you would published work, with the author's last name and the year the work is in progress or was completed. Keep in mind that authors are protected by copyright law against unauthorized use of their unpublished research. Until their work is published, authors own the copyright to their work, and you ...

  20. Search for unpublished data by systematic reviewers: an audit

    Objectives: We audited a selection of systematic reviews published in 2013 and reported on the proportion of reviews that searched for unpublished data, included unpublished data in analysis and assessed for publication bias. Design: Audit of systematic reviews. Data sources: We searched PubMed and Ovid MEDLINE In-Process & Other Non-Indexed Citations between 1 January 2013 and 31 December 2013 ...

  21. How to cite UNPUBLISHED SOURCES

    Note number. Author First Last Name, "Title" (Type of dissertation, Location of Publisher, Year of Pub.), pages cited, URL or database (if online). Sample Note: 43. Afrah Daaimah Richmond, "Unmasking the Boston Brahmin: Race and Liberalism in the Long Struggle for Reform at Harvard and Radcliff, 1945-1990" (PhD diss.,

  22. Searching practices and inclusion of unpublished studies in systematic

    Specific sources of unpublished studies were searched in 22 and 68 reviews, for example, conference proceedings (n = 4 and n = 18), databases only containing conference abstracts (n = 2 and n = 33), or trial registries (n = 12 and n = 39). At least one unpublished study was included in 17 and 23 reviews.

  23. Is unpublished paper necessarily bad?

    Unpublished papers are not necessarily bad ones. How can it happen that good papers remain unpublished? Good papers (or papers with good ideas/methods…) get rejected quite frequently. Reasons may be that the authors targeted the wrong audience or journal, or failed to get their main points across. Sometimes this also happens after a first ...

  24. Unpublished Research Papers

    Unpublished research papers relating to Holocaust-Era Assets, made available online. Papers: Berenbaum, Michael. Testimony before the Nazi War Criminals Interagency Working Group, June 24, 1999. Bradsher, Greg. Archivists, Archival Records, and Looted Cultural Property Research. Paper presented at the Vilnius International Forum on Holocaust-Era Looted Cultural Assets, Lithuania, October 3 ...

  25. How to Cite an Unpublished Paper or Manuscript in APA Referencing

    In this case, the correct format is: Author Surname, Initial(s). (Year of Production). Title of manuscript [Unpublished manuscript]. Department, University Name. So, in practice, we could cite an unpublished paper like this: Clarke, J. (2020). The publication process explained [Unpublished manuscript].

  26. APA 7th Edition Style Guide: Unpublished Manuscripts/Informal

    These may be published in a database or freely available online or they may be unpublished. Cite unpublished dissertation or thesis (Skidmore, 2017). Skidmore, K. L. (2017). The effects of postpartum depression among young mothers who give children up for adoption (Unpublished master's thesis). Nova Southeastern University, Fort Lauderdale, FL.
