
Making randomised trials more efficient: report of the first meeting to discuss the Trial Forge platform

Abstract

Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants’ views on the processes in the life of a randomised trial that should be covered by Trial Forge.

General support existed at the workshop for the Trial Forge approach to increasing the evidence base for making randomised trial decisions and improving trial efficiency. Key processes agreed upon included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. Linking to existing initiatives wherever possible was considered crucial. Trial Forge will not be a guideline or a checklist but a ‘go to’ website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.

Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.


Background

‘There is a peculiar paradox that exists in trial execution - we perform clinical trials to generate evidence to improve patient outcomes; however, we conduct clinical trials like anecdotal medicine: (1) we do what we think works; (2) we rely on experience and judgement; and (3) limited data to support best practices.’

Monica Shah, quoted in Gheorghiade et al. [1].

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge (www.trialforge.org), an initiative focused on improving randomised trial efficiency and quality. The initiative is aimed at the people who design and run trials: staff at trials units, for example, or clinicians and others who design studies. In this paper, we outline the problem of inefficiency in trials and describe the Trial Forge initiative to improve efficiency. We hope that many of those reading the paper will be interested in contributing to Trial Forge in the future.

Randomised trials (hereafter ‘trials’), especially when brought together in systematic reviews, are regarded as the gold standard for evaluating the effects of healthcare treatments, with thousands of trials and hundreds of systematic reviews reported every year. PubMed has indexed over 370,000 reports of randomised trials; the World Health Organisation’s International Clinical Trials Registry Platform [2] contains over 250,000 trial records, of which 71,000 are listed as recruiting; and the Cochrane Central Register of Controlled Trials contains more than 800,000 records. Tens of billions of dollars of public and private money are invested globally in trials every year (US $25 billion in the United States alone in 2010 [3]), and the average cost per participant in a trial is estimated to be almost £8,500 in the United Kingdom [4].

Many of these resources are wasted, often because insufficient account is taken of existing evidence when choosing questions to address [5], and results are either not published or poorly reported. Moreover, despite trials being a cornerstone of evidence-based healthcare, the methods and infrastructure for conducting these complex studies are largely evidence free [6]. For example, every trial has to recruit and retain participants, but only a handful of recruitment and retention strategies and interventions are currently supported by high-quality evidence [7, 8]. A recent analysis found that only 55 % of UK National Institute for Health Research and Medical Research Council (MRC) trials (a set of large, relatively well-funded studies in the UK) recruiting between 2002 and 2008 met their recruitment targets [9]. The same study found that extensions are common, with 45 % of trials needing at least one funding extension, although only 55 % of these then go on to meet their recruitment targets. Furthermore, although data collection is central to trials and can consume a large proportion of trial resources, researchers often collect more data than they are able to analyse and publish [10]. Indeed, there is a dearth of research into the optimal methods for data collection and data management [11]. This is a different problem from selective reporting, where bias is introduced through the way outcomes are selected and presented in trial reports, especially for harms [12]. Vera-Badillo and colleagues called this type of bias ‘spin’ [13].

As a consequence, the most appropriate methods are not always chosen when trials are designed, leading to trial management and delivery problems later. Indeed, poor design decisions may do more than make a trial difficult to deliver; they may mean that any eventual findings will be of limited value. This could be because, for example, the comparator used renders the trial clinically irrelevant [14], the outcome measures are not relevant to those making treatment decisions [15], or the patients involved do not represent the majority of patients with the condition of interest [16]. The patients, health professionals, and policymakers who look to systematic reviews of trials for help in their decision making are often frustrated to find that the questions addressed by researchers do not reflect clinical decision-making needs (a failure of prioritisation) [17], have dubious relevance in their settings [17–19], or that failings in the conduct or reporting of trials mean that they do not provide the reliable and robust evidence that they need. Some trials may simply be unnecessary [20]. This all represents an unacceptably wasteful approach to designing, running, analysing, and reporting trials. The problem of inefficiency in medical research is not new: Schwartz and Lellouch urged trialists to change the way they designed trials as long ago as 1967 [21], Altman pointed to the scandal of poor medical research in 1994 [22], and, in 2009 [23], Chalmers and Glasziou estimated that more than 85 % of the resources invested in medical research were being avoidably wasted. What has been lacking is a coordinated attempt to tackle inefficiency in clinical trials.

Main text

Trial Forge

Trial Forge (www.trialforge.org) aims to address the lack of an accessible evidence base around trial efficiency and quality. A one-day workshop, funded by the Network of MRC Hubs for Trials Methodology Research and the Health Services Research Unit at the University of Aberdeen, UK, was held in Edinburgh on 10 July 2014 to discuss the initiative. The grant holders of the MRC Hub funding (Marion Campbell, Mike Clarke, Athene Lane, Trudie Lang, John Norrie, Shaun Treweek, and Paula Williamson) invited to the workshop 38 participants with experience in methodology and trial design, trial management, statistics, data management, clinical care, commissioning and publishing research, public and patient involvement, and providing trial support through trials units.

The aims of the workshop were as follows:

  1. To share knowledge on resources that already exist with regard to efficient trials.

  2. To share knowledge on guidance relating to trial design, conduct, analysis, and reporting.

  3. To agree on the key processes of the trial pathway, that is, the major processes in the life of a trial.

  4. To begin to suggest features that Trial Forge must have.

  5. To promote awareness of Trial Forge.

  6. To produce a statement paper on the Trial Forge initiative.

As the workshop members were professional trialists, trial managers, statisticians, and others involved in trial design, conduct, analysis, and reporting, and the discussions concerned current practice, no formal ethics approval or consent was deemed necessary.

How will Trial Forge work?

Discussion at the workshop highlighted several substantial problems, some of which are listed in Table 1. Trial Forge aims to remove or reduce these problems and others through targeted collaborative work; some of the ways it will do this are also listed in Table 1. Trial Forge will use a five-step process to identify and address gaps in knowledge about trial methods:

Table 1. Examples of trial challenges and how Trial Forge could help
  1. Identify trial processes.

  2. Collate what is known about these processes.

  3. Strengthen the evidence base by creating a methodology research agenda.

  4. Collaborate to work through the methodology research agenda.

  5. Disseminate the findings.

Step 1 - Identify trial processes

Step 1 will identify the processes that make up a trial, starting with the main processes (for example, recruitment) and then breaking these down into smaller processes (for example, setting the eligibility criteria for a trial, selecting the components of the recruitment strategy, identifying potential participants, and targeting appropriate recruitment strategies for them). This is similar to the process improvement approach taken by the British cycling team in its preparation for the 2012 London Olympic Games. Dave Brailsford, British Cycling's Performance Director at the time, said when asked about the team’s approach:

‘The whole principle came from the idea that if you broke down everything you could think of that goes into riding a bike, and then improved it by 1 %, you will get a significant increase when you put them all together.’ [24]

There are very many processes involved in a trial; learning about and improving each of them may have a minimal effect on its own, but taken together these improvements could have a much more profound impact.
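The arithmetic behind this ‘aggregation of marginal gains’ is worth making explicit. As a purely illustrative calculation (the figure of 50 processes is a hypothetical assumption, not a number agreed at the workshop), suppose a trial involves 50 distinct processes and each is made 1 % more efficient. If the gains compounded multiplicatively, the overall improvement would be

$$(1.01)^{50} \approx 1.64,$$

that is, an overall gain of roughly 64 %, even though no single improvement exceeds 1 %. Real trial processes will not compound this neatly, but the example shows why many small, evidence-based improvements can matter greatly in aggregate.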

Participants at the Edinburgh workshop produced an initial list of headline trial processes (Fig. 1) for which collating (and creating) research evidence would be beneficial. This list will form the starting point for Trial Forge work.

Fig. 1. Key processes of the trial pathway (many of which are overlapping and non-linear). Suggestions from a one-day workshop held in Edinburgh on 10 July 2014. The placement and length of the bars give an indication of when in the trial they start and end, though this is likely to vary greatly between trials.
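To make the decomposition described in Step 1 concrete, the sketch below shows one way such a nested process breakdown could be represented. It is a minimal illustration only: the structure and all process names are assumptions drawn from the examples above, not an agreed Trial Forge taxonomy.

```python
# Minimal sketch: a trial-process hierarchy as nested dictionaries.
# Process names are illustrative assumptions, not an agreed taxonomy.
trial_processes = {
    "recruitment": {
        "set eligibility criteria": {},
        "select recruitment strategy components": {},
        "identify potential participants": {},
        "target recruitment strategies appropriately": {},
    },
    "retention": {
        "choose retention interventions": {},
        "schedule follow-up contacts": {},
    },
}

def list_processes(node, depth=0):
    """Print each process and its sub-processes, indented by depth."""
    for name, children in node.items():
        print("  " * depth + name)
        list_processes(children, depth + 1)

list_processes(trial_processes)
```

Each headline process (Fig. 1) would sit at the top level, with smaller processes nested beneath it; evidence and research gaps could then be attached at whatever level of the hierarchy they apply to.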

Step 2 - Collate what is known about these processes

In Step 2, Trial Forge will either identify existing initiatives to collate what is known about individual processes or work to collate the evidence itself (which may include providing links to ongoing studies) and integrate reviews (and other relevant literature) using both quantitative and qualitative synthesis approaches [25–28]. For example, for help in choosing trial outcomes, Trial Forge would direct people towards the COMET (Core Outcome Measures in Effectiveness Trials [29], http://www.comet-initiative.org) Initiative. COMET has systematically reviewed published standardised core outcome sets for trials [30] and combined these in the COMET database with information on core outcome sets currently being developed. As another example, for help with choosing evidence-based recruitment interventions, the MRC Network of Hubs for Trials Methodology Research is funding a project to develop a searchable database containing published and ongoing research into recruitment. On a smaller scale, Cochrane Methodology Reviews and other systematic reviews have brought together existing research in specific topic areas. These will be highlighted in Step 2. Epistemonikos (http://www.epistemonikos.org/en/), a website that links together systematic reviews, overviews of reviews, and primary studies to support health-policy decisions, is another example of how research evidence can be collated.
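For readers unfamiliar with the terminology, ‘quantitative synthesis’ here means statistical meta-analysis of the kind used in the reviews above. As a standard illustration (a textbook formula, not a method specific to Trial Forge), a fixed-effect meta-analysis pools the estimates $\hat{\theta}_i$ from $k$ individual studies using inverse-variance weights:

$$\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\operatorname{Var}(\hat{\theta}_i)},$$

so that more precise studies contribute more to the pooled estimate. Qualitative synthesis approaches, such as meta-ethnography and thematic synthesis [27, 28], integrate findings narratively rather than statistically.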

More generally, the Evidence-Based Research Network (http://www.ebrnetwork.org) is an example of an initiative that aims to promote the efficient use of existing research, especially through the use of systematic reviews [31] and information about ongoing research. Proposals for new research should be informed by systematic reviews, and reports of new research should be set in the context of updated systematic reviews.

Trial Forge will aim to apply quality criteria when pointing to external resources and when collating individual studies. How to do this will form part of the initial work of Trial Forge, though it is likely that GRADE [32] (a system for grading the quality of evidence and the strength of recommendations, particularly for guidelines) will play an important part. Different approaches to presenting evidence will be explored using methods developed by the GRADE Working Group where appropriate, and the methods used to present the information will be informed by work done with, among others, the Cochrane Plain Language Summaries [33], the GRADE Summary of Findings tables [34, 35], and the DECIDE project (a project that aims to improve the way research information is presented in guidelines, http://www.decide-collaboration.eu). This presentation work will also be evaluated.

Step 3 - Strengthen the evidence base by creating a methodology research agenda

Step 3 will focus on strengthening the evidence base by providing a platform to highlight key areas of uncertainty, which would enable individuals and research groups to suggest ways in which the uncertainties could be addressed. For example, we know less about the effect of recruitment interventions aimed at recruiters than we do about those aimed at potential participants [7]. Recruiters play a hugely influential role and can have a substantial impact on recruitment [36, 37], but there remains uncertainty about how best to address the issues and concerns that recruiters face [36–45]. One way to help fill this gap (and others) may be through the availability of standard outlines for Studies Within A Trial (SWATs). SWAT-1, for example, outlines a study of the effects of site visits by the principal investigator on recruitment [46].

Publishing protocols for methodology research, which can then be embedded in other studies, makes it easier for research groups to become involved in filling evidence gaps. Much of the intellectual work around the appropriate methodology research will already have been done by the authors of the protocol. A database of outlines for SWATs is being developed to improve access to these ideas [47]. Step 3 of Trial Forge will produce SWATs as well as link people to initiatives such as the MRC-funded Systematic Techniques for Assisting Recruitment to Trials (START) programme (http://www.population-health.manchester.ac.uk/mrcstart/), which is developing a platform to evaluate recruitment interventions simultaneously across many trials.

Finally, where evidence does not yet exist, information about these gaps will be systematically directed to funding agencies for consideration in their prioritisation processes. In the meantime, Trial Forge will provide a repository for experience and knowledge from the community of trialists, trial managers, and others about interventions and approaches that they believe worked well in their settings. Trial Forge will thus provide support for electronically linked communities of practice (for example, through question-and-answer and discussion sections on its website) to facilitate sharing of knowledge and experiences, especially when rigorous evidence to inform decisions is lacking.

Step 4 - Collaborate to work through the methodology research agenda

A methodology research agenda will have been created in Step 3. Step 4 will encourage wide collaboration among methodologists, trialists, and other relevant stakeholders to tackle this research agenda. For some processes in the trial pathway (Fig. 1), the agenda will be substantial and very challenging. A single research group or trials unit is unlikely to have the skills, capacity, or interest to take on a whole agenda. By bringing research groups together around a shared agenda, Trial Forge will minimise unnecessary duplication, focus work on topics shown to be most in need of attention (with a recent survey of the priorities of UK Clinical Trials Unit Directors providing a good starting point [48]), and identify groups with the necessary expertise to do the work. For example, groups could work together to evaluate an intervention described in a SWAT. This collaboration between groups may happen naturally through direct contact but could be facilitated by Trial Forge, for example, by having a coordinator identify potential links and encourage collaboration.

Step 5 - Dissemination

The value of the expanded evidence base will be realised in Step 5: when Trial Forge has identified or generated an important result from, for example, an up-to-date systematic review of relevant methodology research, people who need to know about it should be informed efficiently. For example, if adding new trial data to the Cochrane review of interventions to improve retention in trials [8] meant that there was now clear evidence that a particular intervention was effective, Trial Forge would help to ensure that this information is disseminated efficiently to trialists. A variety of dissemination routes will be used, for example, electronic mailing lists, a Twitter feed, presentations at the UK Clinical Trials Units Directors’ meetings, and training courses. Dissemination routes are likely to need to change over time and may well need to differ depending on the trial process being addressed. An underlying principle will be that simply publishing the findings in a journal article is unlikely to be sufficient to promote uptake. To maximise the impact of this methodology research, Trial Forge will draw on evidence from implementation research on clinical and professional behaviour change interventions [49]. This step of Trial Forge will also be evaluated.

The five steps in Trial Forge will be iterative, especially since many trial processes are linked and because suggestions for change in one area may have consequences for others. Trial Forge’s own processes will also be evaluated and modified over time as we and others learn from our experience of using the five steps to reduce gaps in knowledge about how best to design, conduct, analyse, and report trials. Once started, Trial Forge should produce a steady stream of methodology innovations that address trial process problems of recognised significance to people involved in trials. Importantly, work and prestige will not be concentrated in one place or group but will be distributed across a collaborative network. Groups engaging with Trial Forge will be encouraged to build up their own portfolios of methodology work in areas that match their interests and expertise.

Conclusion

Trial Forge aims to support active and regular engagement with people who design, conduct, analyse, and report trials in the UK and elsewhere. It will promote meaningful improvements in trial efficiency and greater potential for trials to improve health. Moreover, Trial Forge will support an informal network of interested trialists, who will meet virtually and occasionally in person to build capacity and knowledge in efficient trial design and conduct. It will aim to be the ‘go to’ website both for summaries of what is known from trial methods research and for a linked programme of applied methodology research that encourages people to collaborate to fill gaps in evidence.

Not all problems in trials need more methodology research. However, many aspects of trial design, conduct, analysis, and reporting could be subjected to research to identify the relative effects of alternative approaches. Whether these aspects are scientific, methodological, or administrative, they all have uncertainties that could be addressed by research, leading to more evidence-based approaches than is currently the case. We believe that Trial Forge will maximise the effectiveness and efficiency of trials, increase the chances that they will produce reliable and robust answers, and minimise waste. Trialists share many of the same problems; Trial Forge is about working together to solve them.

Abbreviations

COMET: Core Outcome Measures in Effectiveness Trials

DECIDE: Developing and Evaluating Communication Strategies to Support Informed Decisions and Practice Based on Evidence

GRADE: Grading of Recommendations Assessment, Development, and Evaluation

MRC: Medical Research Council

START: Systematic Techniques for Assisting Recruitment to Trials

SWAT: Studies Within A Trial

References

  1. Gheorghiade M, Vaduganathan M, Greene SJ, Mentz RJ, Adams KF Jr, Anker SD, et al. Site selection in global clinical trials in patients hospitalized for heart failure: perceived problems and potential solutions. Heart Fail Rev. 2014;19:135–52.

  2. Ghersi D, Pang T. From Mexico to Mali: four years in the history of clinical trial registration. J Evid Based Med. 2009;2:1–7.

  3. The Clinical Trials Business. BCC Research. http://www.bccresearch.com/market-research/pharmaceuticals/clinical-trials-market-phm027c.html. Accessed 2 Jan 2015.

  4. Hawkes N. UK must improve its recruitment rate in clinical trials. BMJ. 2012;345:e8104.

  5. Research: increasing value, reducing waste. Available from www.researchwaste.net. Accessed 2 Jan 2015.

  6. Salman RAS, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J, et al. Increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383:176–85.

  7. Treweek S, Mitchell E, Pitkethly M, Cook J, Kjeldstrøm M, Johansen M, et al. Methods to improve recruitment to randomised controlled trials: Cochrane systematic review and meta-analysis. BMJ Open. 2013;3:e002360.

  8. Brueton VC, Tierney J, Stenning S, Harding S, Meredith S, Nazareth I, Rait G. Strategies to improve retention in randomised trials. Cochrane Database Syst Rev. 2013;12:MR000032.

  9. Sully BGO, Julious SA, Nicholl J. A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies. Trials. 2013;14:166.

  10. O’Leary E, Seow H, Julian J, Levine M, Pond GR. Data collection in cancer clinical trials: too much of a good thing? Clin Trials. 2013;10:624–32.

  11. Marcano Belisario JS, Huckvale K, Saje A, Porcnik A, Morrison CP, Car J. Comparison of self administered survey questionnaire responses collected using mobile apps versus other methods (Protocol). Cochrane Database Syst Rev. 2014;MR000042.

  12. Saini P, Loke YK, Gamble C, Altman DG, Williamson PR, Kirkham JJ. Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews. BMJ. 2014;349:g6501.

  13. Vera-Badillo FE, Shapiro R, Ocana A, Amir E, Tannock IF. Bias in reporting of end points of efficacy and toxicity in randomized, clinical trials for women with breast cancer. Ann Oncol. 2013;24:1238–44.

  14. Habre C, Tramer MR, Popping DM, Elia N. Ability of a meta-analysis to prevent redundant research: systematic review of studies on pain from propofol injection. BMJ. 2014;349:g5219.

  15. Sinha IP, Altman DG, Beresford MW, Boers M, Clarke M, Craig J, et al. Selection, measurement, and reporting of outcomes in clinical trials in children. Pediatrics. 2012;129:S146–52.

  16. Saunders C, Byrne CD, Guthrie B, Lindsay RS, McKnight JA, Philip S, et al. External validity of randomized controlled trials of glycaemic control and vascular disease: how representative are participants? Diabet Med. 2013;30:300–8.

  17. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–65.

  18. Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10:37.

  19. Rothwell PM. Treating individuals 1: external validity of randomised controlled trials: “To whom do the results of this trial apply?”. Lancet. 2005;365:82–93.

  20. Clarke M, Brice A, Chalmers I. Accumulating research: a systematic account of how cumulative meta-analyses would have provided knowledge, improved health, reduced harm and saved resources. PLoS ONE. 2014;9:e102670.

  21. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis. 1967;20:637–48.

  22. Altman DG. The scandal of poor medical research. BMJ. 1994;308:283–4.

  23. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–9.

  24. Slater M. Olympics cycling: marginal gains underpin Team GB. Available at http://www.bbc.co.uk/sport/0/olympics/19174302. Accessed 2 Jan 2015.

  25. Booth A, Papaioannou D, Sutton A. Systematic approaches to a successful literature review. London: Sage Publications; 2012.

  26. Candy B, King M, Jones L, Oliver S. Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Med Res Methodol. 2011;11:124.

  27. Doyle LH. Synthesis through meta-ethnography: paradoxes, enhancements, and possibilities. Qual Res. 2003;3:21–4.

  28. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45.

  29. Gargon E, Williamson PR, Altman DG, Blazeby JM, Clarke M. The COMET Initiative database: progress and activities from 2011 to 2013. Trials. 2014;15:279.

  30. Gargon E, Gurung B, Medley N, Altman DG, Blazeby JM, Clarke M, et al. Choosing important health outcomes for comparative effectiveness research: a systematic review. PLoS ONE. 2014;9:e99111.

  31. Chalmers I, Nylenna M. A new network to promote evidence-based research. Lancet. 2014;384:1903–4.

  32. The GRADE Working Group. List of GRADE working group publications and grants. Available from http://www.gradeworkinggroup.org/publications/index.htm. Accessed 2 Jan 2015.

  33. Glenton C, Santesso N, Rosenbaum S, Nilsen ES, Rader T, Ciapponi A, et al. Presenting the results of Cochrane systematic reviews to a consumer audience: a qualitative study. Med Decis Making. 2010;30:566–77.

  34. Santesso N, Rader T, Nilsen ES, Glenton C, Rosenbaum S, Ciapponi A, et al. A summary to communicate evidence from systematic reviews to the public improved understanding and accessibility of information: a randomized controlled trial. J Clin Epidemiol. 2014;1–9.

  35. Rosenbaum SE, Glenton C, Oxman AD. Summary-of-findings tables in Cochrane reviews improved understanding and rapid retrieval of key information. J Clin Epidemiol. 2010;63:620–6.

  36. Donovan JL, Paramasivan S, de Salis I, Toerien M. Clear obstacles and hidden challenges: understanding recruiter perspectives in six pragmatic randomised controlled trials. Trials. 2014;15:5.

  37. Donovan JL, de Salis I, Toerien M, Paramasivan S, Hamdy FC, Blazeby JM. The intellectual challenges and emotional consequences of equipoise contributed to the fragility of recruitment in six randomized controlled trials. J Clin Epidemiol. 2014;67:912–20.

  38. Eborall HC, Dallosso HM, Daly H, Martin-Stacey L, Heller SR. The face of equipoise – delivering a structured education programme within a randomized controlled trial: qualitative study. Trials. 2014;15:15.

  39. Garcia J, Elbourne D, Snowdon C. Equipoise: a case study of the views of clinicians involved in two neonatal trials. Clin Trials. 2004;1:170–8.

  40. Graffy J, Grant J, Boase S, Ward E, Wallace P, Miller J, et al. UK research staff perspectives on improving recruitment and retention to primary care research: nominal group exercise. Fam Pract. 2009;26:48–55.

  41. Hamilton DW, de Salis I, Donovan JL, Birchall M. The recruitment of patients to trials in head and neck cancer: a qualitative study of the EaStER trial of treatments for early laryngeal cancer. Eur Arch Otorhinolaryngol. 2013;270:2333–7.

  42. Howard L, de Salis I, Tomlin Z, Thornicroft G, Donovan J. Why is recruitment to trials difficult? An investigation into recruitment difficulties in an RCT of supported employment in patients with severe mental illness. Contemp Clin Trials. 2009;30:40–6.

  43. Menon U, Gentry-Maharaj A, Ryan A, Sharma A, Burnell M, Hallett R, et al. Recruitment to multicentre trials – lessons from UKCTOCS: descriptive study. BMJ. 2008;337:a2079.

  44. Paramasivan S, Huddart R, Hall E, Lewis R, Birtle A, Donovan JL. Key issues in recruitment to randomised controlled trials with very different interventions: a qualitative investigation of recruitment to the SPARE trial (CRUK/07/011). Trials. 2011;12:78.

  45. Wade J, Donovan JL, Lane JA, Neal DE, Hamdy FC. It’s not just what you say, it’s also how you say it: opening the ‘black box’ of informed consent appointments in randomised controlled trials. Soc Sci Med. 2009;68:2018–28.

  46. Smith V, Clarke M, Devane D, Begley C, Shorter G, Maguire L. SWAT 1: what effects do site visits by the principal investigator have on recruitment in a multicentre randomized trial? J Evid Based Med. 2013;6:136–7.

  47. Clarke M. Online database for SWAT (Studies Within A Trial) and SWAR (Studies Within A Review). Available at http://pure.qub.ac.uk/portal/en/projects/online-database-for-swat-studies-within-a-trial-and-swar-studies-within-a-review(1d8d69e3-1fcb-442f-a663-35dd5056459d).html. Accessed 2 Jan 2015.

  48. Smith CT, Hickey H, Clarke M, Blazeby J, Williamson P. The trials methodological research agenda: results from a priority setting exercise. Trials. 2014;15:32.

  49. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.


Acknowledgments

We are grateful for the contributions of Monica Ensini, Michela Guglieri, Peter Holding, Lynn McKenzie, Ken Snowden, and David Torgerson. The Edinburgh workshop was funded by the Network of MRC Hubs for Trials Methodology Research and the Health Services Research Unit at the University of Aberdeen. The Health Services Research Unit at the University of Aberdeen is funded by the Chief Scientist Office of the Scottish Government Health Directorates.

Funding

The workshop was funded by the Network of MRC Hubs for Trials Methodology Research and the Health Services Research Unit at the University of Aberdeen.

Author information


Corresponding author

Correspondence to Shaun Treweek.

Additional information

Competing interests

All authors declare that they have no competing interests.

Authors’ contributions

ST and MClarke jointly conceived of the idea of a platform to share knowledge of effective ways of improving trial efficiency. They, together with MCampbell, AL, TL, JN, and PW obtained funding for and organised the Trial Forge meeting. All authors (ST, DA, PB, MCampbell, IC, SC, PC, DC, PD, DD, LD, JD, DE, BF, CG, KG, KH, TL, RL, KL, AMcD, GMcP, AN, JN, CR, PS, DRS, WS, MS, PW, and MClarke) made contributions to the issues discussed at the meeting and described in the manuscript. ST wrote the first draft of the paper, and all authors contributed to revising it. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Treweek, S., Altman, D.G., Bower, P. et al. Making randomised trials more efficient: report of the first meeting to discuss the Trial Forge platform. Trials 16, 261 (2015). https://doi.org/10.1186/s13063-015-0776-0

