
Site Initiation Visit (SIV): Clinical Trial Basics

An SIV (clinical trial site initiation visit) is a preliminary inspection of the trial site by the sponsor before the enrollment and screening process begins at that site. It is generally conducted by a monitor or clinical research associate (CRA), who reviews all aspects of the trial with the site staff, including going through protocol documents and conducting any necessary staff training.[1],[2]

Also known as a study start-up visit, the SIV can be requested by the sponsor only after the site has been selected and formal agreements such as the clinical trial agreement (CTA) have been signed.

What is the purpose of an SIV?

Clinical trial SIVs are necessary to ensure that all personnel at a given site who will be involved in the clinical trial, such as investigators and study staff, thoroughly understand the trial protocol and are trained appropriately to handle their roles and responsibilities. Furthermore, the site initiation visit aims to ensure the trial site is operationally ready, with working infrastructure, tools, and any study materials needed.[1]

Given the scope of the SIV, clinical trial sponsors should schedule this visit well before enrollment so that there is ample time to comprehensively inspect all relevant processes and, if necessary, conduct further training or rectify any issues.

Can the SIV be conducted before IRB approval?

IRB approval is generally necessary before the SIV is carried out. Clinical trial sponsors should select sites that are compliant with all applicable regulatory requirements, and after the site receives IRB approval for the research, the sponsor can conduct the SIV.

SIV checklist for thorough site initiation visits

Given the importance of the SIV, clinical trial sponsors should make the most of this inspection visit by coming fully prepared with a detailed checklist of what is to be confirmed during the SIV.

Clinical trial sites might receive a copy of this checklist so they can ensure that all relevant staff are present for the visit. Specific tasks to cover in the SIV checklist include the following:[1],[2],[3],[4]

  • Discussing the clinical trial’s objectives with study staff
  • Educating the research team on Good Clinical Practice (GCP)
  • Reviewing the operation schedule for the protocol
  • Discussing the enrollment and screening process, including clarifying the inclusion and exclusion criteria
  • Reviewing the informed consent documents and procedure
  • Clarifying procedures for storing, dispensing, and managing the investigational product (IP)
  • Checking inventory for all required medical supplies and equipment
  • Ensuring secure access to all digital platforms, e.g., correct usernames and passwords
  • Touring the clinical trial site to ensure facilities are in proper condition
  • Reviewing and discussing all clinical trial documentation, such as forms, surveys, SOPs, etc.
  • Reviewing the data management system and any other technological solutions forming part of the site’s or sponsor’s workflow
  • Ensuring that site staff are up to date on training and understand how to maintain essential documentation
  • Reviewing the site/trial budget and financial protocols, including any processes related to compensating trial participants
  • Verifying and testing reporting procedures for possible adverse events
  • Leaving room for an open discussion of any specific concerns that trial staff may have

This checklist provides basic guidelines only, and should be built upon and customized for each individual study according to risk areas and specific protocols.


Clinical Site Initiation Visit Checklist and Best Practices

Medha Datar • March 3, 2023


The clinical site initiation visit is a critical component of the clinical trial start-up process. It involves the CRA visiting the study site to ensure that the site is prepared to conduct the study according to the protocol and Good Clinical Practice (GCP) guidelines. The purpose of the site initiation visit is to confirm that the site has the necessary resources, procedures, and training in place to conduct the study and collect accurate data.


Here are some best practices for conducting a successful site initiation visit:

  • Schedule the site initiation visit as early as possible in the study start-up process to allow sufficient time for addressing any issues that may arise.
  • Confirm that the site has all the necessary study documents, including the protocol, informed consent form, case report form, and monitoring plan.
  • Verify that the site has obtained IRB/EC approval and that all regulatory documents are complete and accurate.
  • Ensure that all site staff have completed the required training, including GCP training, and that their CVs are up to date.
  • Review the study drug or device management plan and confirm that the site has procedures in place for managing adverse events and protocol deviations.
  • Explain the monitoring process to the site staff and discuss the CRA’s role in monitoring the study.
  • Confirm that the site has a plan for managing subject enrollment and explain the subject screening and recruitment process.
  • Review the case report form with the site staff and explain how to complete it accurately and completely.
  • Discuss the communication plan between the site staff and the sponsor/CRO, including how to report issues and the frequency and format of study updates.
  • Verify that the site has procedures in place for data management and document retention.


By following these best practices, the CRA can help ensure that the study is conducted according to the protocol and GCP guidelines and that high-quality data are collected. The site initiation visit is an important opportunity to establish a good working relationship with the site staff and to identify any issues that may need to be addressed before the study begins.


Monitoring Planning

Monitoring is the act of overseeing the progress of a clinical trial and of ensuring that it is conducted, recorded, and reported in accordance with the protocol, SOPs, GCP, and applicable regulatory requirements. The clinical research monitor is the person with direct access to, and oversight of, the site's research activities. They conduct monitoring visits in accordance with a monitoring plan and provide feedback and training to sites as necessary.

Regular site monitor visits can be broken down into four types: pre-study visits, initiation visits, periodic monitoring visits, and close-out visits. Study sites may also be monitored or audited by the FDA, Clinical Research Organizations (CROs), IRBs and sponsors.

Before the study begins, the monitor may conduct pre-study visits, either on-site or remotely. A site initiation visit will then be conducted, in person or remotely, to formally document that a site is ready to begin the conduct of the trial.

At the start of the study, the monitor should be identified and the site qualified by the OCR Regulatory Services team. For industry-sponsored trials, a monitor will be provided; for investigator-initiated trials (IITs), it is necessary to identify a monitor for the study.

The monitor is typically part of the Site Initiation Visit (SIV), led by the Regulatory Sponsor for the study. The monitor will then conduct ongoing monitoring, according to the data safety monitoring plan, throughout the trial.

Study Initiation Visit – Check All Procedures in Place


The visit is usually conducted by the trial coordinator or, often, a monitor on behalf of the sponsor. Essentially, their aim is to work with the site to ensure that the site's planned operational procedures fit the requirements of the protocol and will ensure accurate data as well as safe and ethical conduct of the trial.

The initiation visit is held once sites have their regulatory and ethics approval in place and after all the essential documents have been retrieved for the site and sponsor files. A number of items may be discussed at this visit:

  • Study protocol (study objectives, purpose, endpoints)
  • Reporting AEs and SAEs
  • Investigational drugs (storage, dispensing, destruction, accountability) or any procedures necessary for other types of intervention
  • Inclusion/exclusion criteria of the protocol
  • Patient enrolment (including withdrawal criteria)
  • Case record form completion and error correction
  • Protocol compliance and deviation issues
  • Quality management

In some cases, the sponsor may decide to waive the initiation visit, either at particular sites or across the whole study: for instance, where the sites have been used for a similar study previously, for simpler studies where the study procedures are straightforward and easy to follow, or where the sponsor has decided that the investigator meeting will also act as the initiation visit.

Where this occurs, it should be made clear and documented in the study files what training has occurred to replace the initiation visit.

Preparing for the Initiation Visit

  • A meeting room should be available.
  • A site checklist is used by the monitor or trial coordinator to ensure that all items have been covered during the initiation visit.
  • The monitor should check that all regulatory documents have been retrieved prior to the meeting. It is possible to retrieve the last of the documents at the initiation visit, provided the site has confirmed that they have these documents at the site for collection.
  • All trial staff should also be available at the visit. This includes the principal and sub-investigators, study coordinators, pharmacist, etc. If they cannot all be available at the same time, the monitor could split the visit over several sessions in the day to meet with all members of the trial team.

During the Initiation Visit

A useful technique for planning an initiation visit is the 'trial walk-through'. Here, those in the meeting work through the whole study, usually from the perspective of a participant. This helps show how the trial will run on the ground logistically and enables any potential glitches to be identified. For example, the walk-through for a vaccine trial might work as follows:

1. Community engagement in the study areas to explain the planned trial with community leaders and then the whole community
2. Mothers present at clinic and are approached as a group to explain the trial
3. One-to-one consent discussions with each mother with a study nurse
4. Consent taking by clinical officer
5. History of child taken and blood sample
6. Sample labelled and prepared for cold transportation to laboratory
7. Child vaccinated
8. Mother reminded of follow-up visit and contact details of locally based field worker
9. Samples transported back to lab within 3 hours
10. Samples received and processed in trial lab
11. Any abnormal lab results reported back to investigators
12. Follow-up visit one
13. Adverse event reported and child needing treatment
14. Child late to next follow-up visit and needing finding in community
15. Next follow-up visit completed
16. Child ends trial

Going through the whole trial journey from the participant's perspective enables many of the potential outcomes to be thought about beforehand, so the staff team can be prepared.

Generally, a trial initiation visit should cover the following:

  • All aspects of the study should be discussed as mentioned above, including lab sample collections, method of randomisation (where necessary), advertising, and any other matter that needs clarification.
  • Where the study medication is already on site, the monitor will check that it is stored correctly and that plans for dispensing have been agreed by the PI and pharmacist.
  • Ensure that all site staff have completed the site personnel log.
  • Check that sites have sufficient study materials, such as CRFs (where necessary), lab kits, and any other study-specific kits, for sites to start recruiting.
  • Discuss the site arrangements for archiving the study documents at the end of the study.
  • Retrieve the site contact details and establish the frequency of monitoring visits.
  • Retrieve any last pieces of documentation (where applicable).

Post Initiation Visit

  • The monitor will complete the initiation report and file a copy of the report in the site file.
  • Any actions from the meeting will also be addressed, and a follow-up letter forwarded to the site to clarify any questions that were raised at the meeting.


Site Qualification Visits and Site Initiation Visits

Thank you to Patient Recruitment Centre: Leicester for providing this content. Version 1 - March 2023.


Site Qualification Visit Checklist

The purpose of an SQV is to assess, from the sponsor's perspective, whether it is feasible for a site to run a study. You will still need an internal feasibility assessment to discuss the study in much more detail, in particular recruitment strategies and targets.

Consider making a video tour of your facilities, since this reduces staff burden and provides an overview of the site. It also showcases your research department and can be used during the EOI process.

Book a room and establish whether the visit will be in person or virtual.

Be prepared to give a tour of your facility including relevant departments e.g. pharmacy.

Ensure you have display screen equipment for SQV slides. If your organisation does not allow encrypted or external devices, request that the slides are sent in advance.

Invite appropriate people from support departments, e.g. pharmacy or radiology, or consider whether they will have a separate meeting with the sponsor.

Make sure you can accommodate the number of attendees, both internal and from the sponsor and/or the CRO, and allow for additional guests.

Preparation Needed

Ensure your department is clean and tidy and be aware of confidentiality with departmental documents.

Review material provided e.g. protocol synopsis, training slides and compile questions.

Check you have the equipment required, e.g. fridge, freezer, centrifuge, and sufficient space. If not, make a list of what is required.

Collate any information about previous sponsor audits or site inspection outcomes.

During the Meeting

Ask the following questions:

  • What is the status of the study?
  • What are the study timelines?
  • What is the expected target?
  • What is the recruitment period?
  • How many UK sites are there?
  • Is the protocol finalised? Can amendments be suggested by the PI and site staff?
  • If the study is already open, what have the challenges been?
  • What is the screen failure rate?

Identify any equipment that the sponsor will need to provide or fund, and who will order it. Ensure this is discussed and made clear at an early stage; do not wait until the SIV, by which point the contract will likely have been finalised.

Review recruitment strategy.

  • Will they allow PIC sites?
  • What support do you need from the sponsor, e.g. what advertising materials are provided? Is there the opportunity to suggest alterations?

Ask when the sponsor will inform the site if they have been selected or not.

After the Meeting

Follow up with any actions.

Be prepared to provide documentation such as GCP certificates, CVs, calibration certificates, the FDF, and a contact list of staff.

If you are not selected as a site, remember to ask for feedback.

Site Initiation Visit Checklist

The sponsor runs the SIV; however, the lead site staff can use this opportunity to ask any remaining queries regarding the protocol and identify any outstanding requirements.

Ensure you have display screen equipment for SIV slides. If your organisation does not allow encrypted or external devices, request that the slides are sent in advance.

Invite all staff who will be working on the study. If they are not available, they can review the slides after the event. Best practice is to book the SIV when all key site staff are available. The PI may only be needed for part of the meeting.

Invite appropriate people from support departments e.g. pharmacy or radiology and consider if they will have a separate meeting with the sponsor.

Make sure you can accommodate the number of attendees internally and from the sponsor/CRO. Be prepared for additional external staff to attend.

Some SIVs can last a full working day. Ensure you are clear on how long the meeting is meant to be and be aware of lunch arrangements, i.e. whether the sponsor is providing or funding lunch.

Prepare any questions you have about the study beforehand.

Review whether everything is in place for starting the study: IMP, lab kits, system accesses, all documents including site files, and equipment, e.g. ECG and medical devices. This may not be the case for some studies with expedited set-up.

Ensure you know the protocol.

Ensure someone who knows the contract is in attendance, in case any activities are discussed that are not included in the contract.

Complete the SIV attendance log or send a list of attendees to the CRA.

Circulate the delegation log if not already completed.

Raise any queries about missing equipment, documents etc.

Review inclusion and exclusion criteria.

Source document review: agree which documents are source documents and which electronic systems the monitors will need access to.

Decide who will be reviewing the safety reports.

Record the SIV training on the finance tracker so it can be invoiced.

Put a plan in place for screening the first participant.

Review the participant recruitment pathway.

Consider a dummy run especially if numerous support departments are involved.

Request any amendments to the contract that are identified.

Make worksheets if not already prepared at this point.

Monitoring strategies for clinical intervention studies

Background

Trial monitoring is an important component of good clinical practice to ensure the safety and rights of study participants, confidentiality of personal information, and quality of data. However, the effectiveness of various existing monitoring approaches is unclear. Information to guide the choice of monitoring methods in clinical intervention studies may help trialists, support units, and monitors to effectively adjust their approaches to current knowledge and evidence.

Objectives

To evaluate the advantages and disadvantages of different monitoring strategies (including risk‐based strategies and others) for clinical intervention studies examined in prospective comparative studies of monitoring interventions.

Search methods

We systematically searched CENTRAL, PubMed, and Embase via Elsevier for relevant published literature up to March 2021. We searched the online 'Studies within A Trial' (SWAT) repository, grey literature, and trial registries for ongoing or unpublished studies.

Selection criteria

We included randomized or non‐randomized prospective, empirical evaluation studies of different monitoring strategies in one or more clinical intervention studies. We applied no restrictions for language or date of publication.

Data collection and analysis

We extracted data on the evaluated monitoring methods, countries involved, study population, study setting, randomization method, and numbers and proportions in each intervention group. Our primary outcome was critical and major monitoring findings in prospective intervention studies. Monitoring findings were classified according to different error domains (e.g. major eligibility violations) and the primary outcome measure was a composite of these domains. Secondary outcomes were individual error domains, participant recruitment and follow‐up, and resource use. If we identified more than one study for a comparison and outcome definitions were similar across identified studies, we quantitatively summarized effects in a meta‐analysis using a random‐effects model. Otherwise, we qualitatively summarized the results of eligible studies stratified by different comparisons of monitoring strategies. We used the GRADE approach to assess the certainty of the evidence for different groups of comparisons.
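Where studies could be combined, effects were pooled with a random‐effects model. As a rough illustration of the mechanics only, here is a minimal Python sketch of DerSimonian–Laird pooling of log risk ratios; the input effects and variances are hypothetical and are not data from the included studies.

```python
# A minimal sketch of DerSimonian-Laird random-effects pooling of log risk
# ratios. All input numbers are hypothetical; this illustrates the method,
# not the review's actual analysis.
import math

def random_effects_pool(effects, variances):
    """Pool per-study log risk ratios; returns (pooled, se, tau2)."""
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re)), tau2

log_rr = [math.log(1.10), math.log(0.95)]   # hypothetical study effects
var_log_rr = [0.02, 0.06]                   # hypothetical variances
pooled, se, tau2 = random_effects_pool(log_rr, var_log_rr)
print(f"pooled RR {math.exp(pooled):.2f}, "
      f"95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f}")
```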

Main results

We identified eight eligible studies, which we grouped into five comparisons.

1. Risk‐based versus extensive on‐site monitoring: based on two large studies, we found moderate certainty of evidence for the combined primary outcome of major or critical findings that risk‐based monitoring is not inferior to extensive on‐site monitoring. Although the risk ratio was close to 'no difference' (1.03, with a 95% confidence interval [CI] of 0.81 to 1.33; values below 1.0 favor the risk‐based strategy; an illustrative calculation appears after comparison 5), the high imprecision in one study and the small number of eligible studies resulted in a wide CI of the summary estimate. Low certainty of evidence suggested that monitoring strategies with extensive on‐site monitoring were associated with considerably higher resource use and costs (up to a factor of 3.4). Data on recruitment or retention of trial participants were not available.

2. Central monitoring with triggered on‐site visits versus regular on‐site visits: combining the results of two eligible studies yielded low certainty of evidence with a risk ratio of 1.83 (95% CI 0.51 to 6.55) in favor of the triggered monitoring intervention. Data on recruitment, retention, and resource use were not available.

3. Central statistical monitoring and local monitoring performed by site staff with annual on‐site visits versus central statistical monitoring and local monitoring only: based on one study, there was moderate certainty of evidence that a small number of major and critical findings were missed with the central monitoring approach without on‐site visits: 3.8% of participants in the group without on‐site visits and 6.4% in the group with on‐site visits had a major or critical monitoring finding (odds ratio 1.7, 95% CI 1.1 to 2.7; P = 0.03; a worked check of this odds ratio appears after comparison 5). The absolute number of monitoring findings was very low, probably because defined major and critical findings were very study specific and central monitoring was present in both intervention groups. Very low certainty of evidence did not suggest a relevant effect on participant retention, and very low‐quality evidence indicated an extra cost for on‐site visits of USD 2,035,392. There were no data on recruitment.

4. Traditional 100% source data verification (SDV) versus targeted or remote SDV: the two studies assessing targeted and remote SDV reported findings only related to source documents. Compared to the final database obtained using the full SDV monitoring process, only a small proportion of remaining errors on overall data were identified using the targeted SDV process in the MONITORING study (absolute difference 1.47%, 95% CI 1.41% to 1.53%). Targeted SDV was effective in the verification of source documents but increased the workload on data management. The other included study was a pilot study which compared traditional on‐site SDV versus remote SDV and found little difference in monitoring findings and the ability to locate data values despite marked differences in remote access in two clinical trial networks. There were no data on recruitment or retention.

5. Systematic on‐site initiation visit versus on‐site initiation visit upon request: very low certainty of evidence suggested no difference in retention and recruitment between the two approaches. There were no data on critical and major findings or on resource use.
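To make two of the effect measures above concrete: for comparison 1, a risk ratio and its 95% CI are computed on the log scale. The counts below are hypothetical and are not the data behind the reported estimate of 1.03 (95% CI 0.81 to 1.33); the sketch only shows the standard calculation.

```python
# Hypothetical worked example of a risk ratio with a 95% CI on the log scale;
# the counts are invented and do not reproduce the review's data.
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    rr = (events_a / n_a) / (events_b / n_b)
    # delta-method standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

print(risk_ratio_ci(52, 500, 50, 495))  # ~ (1.03, 0.71, 1.49) with these invented counts
```

For comparison 3, the reported odds ratio of 1.7 can be approximately reproduced from the two quoted proportions (3.8% without and 6.4% with on‐site visits):

```python
# Reproducing the reported odds ratio from the proportions quoted in
# comparison 3: odds = p / (1 - p); OR = odds_with / odds_without.
p_with, p_without = 0.064, 0.038
odds_ratio = (p_with / (1 - p_with)) / (p_without / (1 - p_without))
print(f"OR = {odds_ratio:.2f}")  # about 1.73, matching the reported 1.7
```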

Authors' conclusions

The evidence base is limited in terms of quantity and quality. Ideally, for each of the five identified comparisons, more prospective, comparative monitoring studies nested in clinical trials and measuring effects on all outcomes specified in this review are necessary to draw more reliable conclusions. However, the results suggesting risk‐based, targeted, and mainly central monitoring as an efficient strategy are promising. The development of reliable triggers for on‐site visits is ongoing; different triggers might be used in different settings. More evidence on risk indicators that identify sites with problems or the prognostic value of triggers is needed to further optimize central monitoring strategies. In particular, approaches with an initial assessment of trial‐specific risks that need to be closely monitored centrally during trial conduct with triggered on‐site visits should be evaluated in future research.

Plain language summary

New monitoring strategies for clinical trials

Our question

We reviewed the evidence on the effects of new monitoring strategies on monitoring findings, participant recruitment, participant follow‐up, and resource use in clinical trials. We also summarized the different components of tested strategies and qualitative evidence from process evaluations.

Monitoring a clinical trial is important to ensure the safety of participants and the reliability of results. New methods have been developed for monitoring practices but further assessments of these new methods are needed to see if they do improve effectiveness without being inferior to established methods in terms of patient rights and safety, and quality assurance of trial results. We reviewed studies that examined this question within clinical trials, i.e. studies comparing different monitoring strategies used in clinical trials.

Study characteristics

We included eight studies which covered a variety of monitoring strategies in a wide range of clinical trials, including national and large international trials. They included primary (general), secondary (specialized), and tertiary (highly specialized) health care. The size of the studies ranged from 32 to 4371 participants at one to 196 sites.

Key results

We identified five comparisons. The first comparison of risk‐based monitoring versus extensive on‐site monitoring found no evidence that the risk‐based approach is inferior to extensive on‐site monitoring in terms of the proportion of participants with a critical or major monitoring finding not identified by the corresponding method, while resource use was three‐ to five‐fold higher with extensive on‐site monitoring. For the second comparison of central statistical monitoring with triggered on‐site visits versus regular (untriggered) on‐site visits, we found some evidence that central statistical monitoring can identify sites in need of support by an on‐site monitoring intervention. In the third comparison, the evaluation of adding an on‐site visit to local and central monitoring revealed a high percentage of participants with major or critical monitoring findings in the on‐site visit group, but low numbers of absolute monitoring findings in both groups. This means that without on‐site visits, some monitoring findings will be missed, but none of the missed findings had any serious impact on patient safety or the validity of the trial's results. In the fourth comparison, two studies assessed new source data verification processes, which are used to check that data recorded within the trial Case Report Form (CRF) match the primary source data (e.g. medical records), and reported little difference to full source data verification processes for the targeted as well as for the remote approach. In the fifth comparison, one study showed no difference in participant recruitment and participant follow‐up between a monitoring approach with systematic initiation visits versus an approach with initiation visits upon request by study sites.

Certainty of evidence

We are moderately certain that risk‐based monitoring is not inferior to extensive on‐site monitoring with respect to critical and major monitoring findings in clinical trials. For the remaining body of evidence, there is low or very low certainty in results due to imprecision, small number of studies, or high risk of bias. Ideally, for each of the five identified comparisons, more high‐quality monitoring studies that measure effects on all outcomes specified in this review are necessary to draw more reliable conclusions.

Summary of findings

Summary of findings 1

a Downgraded one level due to the imprecision of the summary estimate with the 95% confidence interval including the substantial advantages and disadvantages with the risk‐based monitoring intervention. b Downgraded two levels due to substantial imprecision; there were no confidence intervals for either of the two estimates on resource use provided in the ADAMON and OPTIMON studies and the two estimates could not be combined due to the nature of the estimate (resource use versus cost calculation).

Summary of findings 2

a Downgraded one level because both studies were not randomized, and downgraded one level for imprecision.

Summary of findings 3

a Downgraded one level because the estimate was based on a small number of events and because the estimate stemmed from a single study nested in a single trial (indirectness). b Downgraded three levels because the 95% confidence interval of the estimate allowed for substantial benefit as well as substantial disadvantages with the intervention and there was only a small number of events (serious imprecision); in addition, the estimate stemmed from a single study nested in a single trial (indirectness). c Downgraded three levels because the estimate was not accompanied by a confidence interval (imprecision) and because the estimate stemmed from a single study nested in a single trial (indirectness).

Summary of findings 4

a Downgraded two levels because randomization was not blinded in one of the studies and the outcomes of the two studies could not be combined. b Downgraded by one additional level in addition to (a) for imprecision because there were no confidence intervals provided.

Summary of findings 5

a Downgraded three levels because of substantial imprecision (relevant advantages and relevant disadvantages were plausible given the small amount of data), and indirectness (a single study nested in a single trial).

b We downgraded by one additional level in addition to (a) for imprecision due to the small number of events.

Trial monitoring is important for the integrity of clinical trials, the validity of their results, and the protection of participant safety and rights. The International Council on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) for Good Clinical Practice (GCP) formulated several requirements for trial monitoring ( ICH 1996 ). However, the effectiveness of various existing monitoring approaches was unclear. Source data verification (SDV) during monitoring visits was estimated to use up to 25% of the sponsor's entire clinical trial budget, even though the association between data quality or participant safety and the extent of monitoring and SDV has not been clearly demonstrated ( Funning 2009 ). Consistent application of intensive on‐site monitoring creates financial and logistical barriers to the design and conduct of clinical trials, with no evidence of participant benefit or increase in the quality of clinical research ( Baigent 2008 ;  Duley 2008 ;  Embleton‐Thirsk 2019 ;  Hearn 2007 ;  Tudur Smith 2012a ;  Tudur Smith 2014 ).

Recent developments at international bodies and regulatory agencies such as the European Medicines Agency (EMA), the Organisation for Economic Co‐operation and Development (OECD), the European Commission (EC) and the Food and Drug Administration (FDA), as well as the 2016 addendum to ICH E6 GCP, have supported the need for risk‐proportionate approaches to clinical trial monitoring and overall trial management ( EC 2014 ; EMA 2013 ; FDA 2013 ; ICH 2016 ; OECD 2013 ). This has encouraged study sponsors to implement risk assessments in their monitoring plans and to use alternative monitoring approaches. There are several publications reporting on the experience of using a risk‐based monitoring approach, often including central monitoring, in specific clinical trials ( Edwards 2014 ; Heels‐Ansdell 2010 ; Valdés‐Márquez 2011 ).

The main idea is to focus monitoring on trial‐specific risks to the integrity of the research and to essential GCP objectives, that is, risks that threaten the safety, rights, and integrity of trial participants; the safety and confidentiality of their data; or the reliable report of the trial results ( Brosteanu 2017a ). The conduct of 'lower risk' trials (lower risk for study participants) — which optimize the use of already authorized medicinal products, validated devices, implemented interventions, and interventions formally outside of the clinical trials regulations — may particularly benefit from a risk‐based approach to clinical trial monitoring in terms of timely completion and cost efficiency. Such 'lower risk' trials are often investigator‐initiated or academic‐sponsored clinical trials conducted in the academic setting ( OECD 2013 ).

Different risk assessment strategies for clinical trials have been developed, with the objective of defining risk‐proportionate monitoring plans ( Hurley 2016 ). There is no standardized approach for examining the baseline risk of a trial. However, risk assessment approaches evaluate risks associated with the safety profile of the investigational medicinal product (IMP), the phase of the clinical trial, and the data collection process. Based on a prior risk assessment, a study‐specific combination of central/centralized and on‐site monitoring might be effective.

Centralized monitoring, also referred to as central monitoring, is defined as any monitoring processes that are not performed at the study site ( FDA 2013 ), and includes remote monitoring processes. Central data monitoring is based on the evaluation of electronically available study data in order to identify study sites with poor data quality or problems in trial conduct ( SCTO 2020 ; Venet 2012 ), whereas on‐site monitoring comprises site inspection, investigator/staff contact, SDV, observation of study procedures, and the review of regulatory elements of a trial. Central statistical monitoring (including plausibility checks of values for different variables, for instance) is an integral part of central data monitoring ( SCTO 2020 ), but this term is sometimes used interchangeably with central data monitoring.

The OECD classifies risk assessment strategies into stratified approaches and trial‐specific approaches, and proposes a harmonized two‐pronged strategy based on internationally validated tools for risk assessment and risk mitigation ( OECD 2013 ). The effectiveness of these new risk‐based approaches in terms of quality assurance, patient rights and safety, and reduction of cost needs to be empirically assessed.
We examined the risk‐based monitoring approach followed at our own institution (the Clinical Trial Unit and Department of Clinical Research, University Hospital Basel, Switzerland) using mixed methods ( von Niederhausern 2017 ). In addition, several prospective studies evaluating different monitoring strategies have been conducted. These include ADAMON (ADApted MONitoring study;  Brosteanu 2017a  ), OPTIMON (Optimisation of Monitoring for Clinical Research Studies;  Journot 2015 ), TEMPER (TargetEd Monitoring: Prospective Evaluation and Refinement;  Stenning 2018a ), START Monitoring Substudy (Strategic Timing of AntiRetroviral Treatment;  Hullsiek 2015 ;  Wyman Engen 2020 ), and MONITORING ( Fougerou‐Leurent 2019 ).

Description of the methods being investigated

Traditional trial monitoring consists of intensive on‐site monitoring strategies comprising frequent on‐site visits and up to 100% SDV. Risk‐based monitoring is a new strategy that recognizes that not all clinical trials require the same approach to quality control and assurance ( Stenning 2018a ), and allows for stratification based on risk indicators assessed during the trial or before it starts. Risk‐based strategies differ in their risk assessment approaches as well as in their implementation and extent of on‐site and central monitoring components. They are also referred to as risk‐adapted or risk‐proportionate monitoring strategies. In this review, which is based on our published protocol ( Klatte 2019 ), we investigated the effects of monitoring methods on ensuring patient rights and safety, and the validity of trial data. These key elements of clinical trial conduct are assessed by monitoring for critical or major violation of GCP objectives, according to the classification of GCP findings described in  EMA 2017 .

Monitoring strategies empirically evaluated in studies

All the monitoring strategies eligible for this review introduced new methods that might be effective in directing monitoring components and resources guided by a risk evaluation or prioritization.

1. Risk‐based monitoring strategies

The risk‐based strategy proposed by Brosteanu and colleagues is based on an initial assessment of the risk associated with an individual trial protocol (ADAMON:  Brosteanu 2009 ). The implementation of this three‐level risk assessment focuses on critical data and procedures describing the risk associated with a therapeutic intervention and incorporates an assessment of indicators for patient‐related risks, indicators of robustness, and indicators for site‐related risks. Trial‐specific risk analysis then informs a monitoring plan that contains on‐site elements as well as central and statistical monitoring methods to a different extent corresponding to the judged risk level. The consensus risk‐assessment scale (RAS) and risk‐adapted monitoring plan (RAMP) developed by Journot and colleagues in 2010 consists of a four‐level initial risk assessment, leading to monitoring plans of four levels of intensity (OPTIMON;  Journot 2011 ). The optimized monitoring strategy concentrates on the main scientific and regulatory aspects, compliance with requirements for patient consent and serious adverse events (SAE), and the frequency of serious errors concerning the validity of the trial's main results and the trial's eligibility criteria ( Chene 2008 ). Both strategies incorporate central monitoring methods that help to specify the monitoring intervention for each study site within the framework of their assigned risk level.
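Both strategies share the same basic shape: an initial trial-level risk assessment selects a monitoring plan of corresponding intensity. The sketch below illustrates that mapping schematically; the risk levels, plan parameters, and values are invented simplifications, not the published ADAMON or OPTIMON (RAS/RAMP) instruments.

```python
# Hypothetical illustration of a risk-adapted monitoring plan: an assessed
# trial-level risk category selects the intensity of on-site and central
# monitoring. Levels and values are schematic, not the ADAMON/OPTIMON scales.
from dataclasses import dataclass

@dataclass
class MonitoringPlan:
    on_site_visits_per_year: int
    sdv_fraction: float        # fraction of source data verified on-site
    central_statistical: bool  # continuous central/statistical monitoring

RISK_ADAPTED_PLANS = {
    "low":    MonitoringPlan(on_site_visits_per_year=1, sdv_fraction=0.10, central_statistical=True),
    "medium": MonitoringPlan(on_site_visits_per_year=2, sdv_fraction=0.30, central_statistical=True),
    "high":   MonitoringPlan(on_site_visits_per_year=4, sdv_fraction=1.00, central_statistical=True),
}

def plan_for_trial(risk_level: str) -> MonitoringPlan:
    """Select the monitoring plan matching the trial's assessed risk level."""
    return RISK_ADAPTED_PLANS[risk_level]

print(plan_for_trial("medium"))
```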

2. Central monitoring with triggered on‐site visits

The triggered on‐site monitoring strategy suggested by the Medicines and Healthcare products Regulatory Agency, Medical Research Council (MRC), and UK Department of Health includes an initial risk assessment on the basis of the intervention and design of the trial and a resulting monitoring plan for different trial sites that is continuously updated through centralized monitoring. Over the course of a clinical trial, sites are prioritized for on‐site visits based on predefined central monitoring triggers ( Meredith 2011 ; TEMPER:  Stenning 2018a ).
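Schematically, triggered monitoring amounts to flagging sites whose centrally tracked indicators cross predefined thresholds. The indicators and thresholds below are invented for illustration and are not the actual TEMPER triggers.

```python
# Schematic of central monitoring with triggered on-site visits: sites whose
# centrally monitored indicators exceed predefined thresholds are prioritized
# for a visit. Indicators and thresholds are invented for illustration.
THRESHOLDS = {
    "missing_data_rate": 0.10,     # >10% missing key data items
    "overdue_sae_reports": 1,      # more than 1 SAE report overdue
    "eligibility_violations": 2,   # more than 2 eligibility violations
}

def triggered_sites(site_metrics):
    """Return site IDs whose indicators exceed any trigger threshold."""
    return [site_id for site_id, metrics in site_metrics.items()
            if any(metrics[k] > limit for k, limit in THRESHOLDS.items())]

example = {
    "site_A": {"missing_data_rate": 0.04, "overdue_sae_reports": 0, "eligibility_violations": 0},
    "site_B": {"missing_data_rate": 0.15, "overdue_sae_reports": 0, "eligibility_violations": 1},
}
print(triggered_sites(example))  # ['site_B']
```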

3. Central and local monitoring

A strategy that is mainly based on central monitoring, combined with a local quality control provided by qualified personnel on‐site, is being evaluated in the START Monitoring Substudy ( Hullsiek 2015 ). In this study, continuous central monitoring uses descriptive statistics on the consistency and quality of the data and data completeness. Semi‐annual performance reports are generated for each site, focusing on the key variables/endpoints regarding patients' safety (SAEs, eligibility violations) and data quality. This evaluates whether adding on‐site monitoring to these procedures leads to differences in the participant‐level composite outcome of monitoring findings.
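As a minimal illustration of the descriptive statistics such central reports rely on, the sketch below computes per-site completeness of a few key fields; the field names and records are hypothetical.

```python
# Minimal sketch of one descriptive statistic a central monitoring report
# might include: per-site completeness of key data fields. Field names and
# records are hypothetical.
KEY_FIELDS = ["consent_date", "eligibility_confirmed", "primary_endpoint"]

def completeness_by_site(records):
    """Percentage of key fields filled in, per site."""
    report = {}
    for site, rows in records.items():
        total = len(rows) * len(KEY_FIELDS)
        filled = sum(1 for row in rows for f in KEY_FIELDS if row.get(f) is not None)
        report[site] = 100.0 * filled / total if total else 0.0
    return report

records = {
    "site_A": [{"consent_date": "2021-03-01", "eligibility_confirmed": True, "primary_endpoint": 4.2}],
    "site_B": [{"consent_date": "2021-03-05", "eligibility_confirmed": None, "primary_endpoint": None}],
}
print(completeness_by_site(records))  # {'site_A': 100.0, 'site_B': 33.33...}
```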

4. Monitoring with targeted or remote source data verification

The monitoring strategy developed for the MONITORING study is characterized by a targeted SDV in which only regulatory and scientific key data are verified ( Fougerou‐Leurent 2019 ). This strategy is compared to full SDV and assessed based on final data quality and costs. One pilot study assessed a new strategy of remote SDV where documents were accessed via electronic health records, clinical data repositories, web‐based access technologies, or authentication and auditing tools ( Mealer 2013 ).

5. On‐site initiation visits upon request

In this monitoring strategy, systematic initiation visits at all sites are replaced by initiation visits that take place only upon investigators' request at a site ( Liènard 2006 ).

How these methods might work

The intention for risk‐based monitoring methods is to increase the efficiency of monitoring and to optimize resource use by directing the amount and content of monitoring visits according to an initially assessed risk level of an individual trial. These new methods should be at least non‐inferior in detecting major or critical violation of essential GCP objectives, according to EMA 2017 , and might even be superior in terms of prioritizing monitoring content. The risk assessment preceding the risk‐based monitoring plan should consider the likelihood of errors occurring in key aspects of study performance, and the anticipated effect of such errors on the protection of participants and the reliability of the trial's results ( Landray 2012 ).

Trials within a certain risk category are initially assigned to a defined monitoring strategy which remains adjustable throughout the conduct of the trial and should always match the needs of the trial and specific trial sites. This flexibility is an advantage, considering the heterogeneity of study designs and participating trial sites. Central monitoring would also allow for continuous verification of data quality based on prespecified triggers and thresholds, and would enable early intervention in cases of procedural or data‐recording errors. Besides the detection of missing or invalid data, trial entry procedures and protocol adherence, as well as other performance indicators, can be monitored through a continuous analysis of electronically captured data ( Baigent 2008 ).

In addition, comparison with external sources may be undertaken to validate information contained in the data set; and the identification of poorly performing sites would ensure a more targeted application of on‐site monitoring resources. Use of methods that take advantage of the increasing use of electronic systems (e.g. electronic case report forms [eCRFs]) may allow data to be checked by automated means and allows the application of entry rules supporting up‐to‐date, high‐quality data. These methods would also ensure patient rights and safety while simultaneously improving trial management and optimizing trial conduct. Adaptations in the monitoring approach toward a reduction of on‐site monitoring visits, provided that patient rights and safety are ensured, could allow the application of resources to the most crucial components of the trial ( Journot 2011 ).
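As an illustration of the automated checks mentioned above, the sketch below applies simple range and consistency rules to a captured record; the field names and limits are hypothetical.

```python
# Minimal sketch of an automated edit check of the kind an eCRF system can
# apply on data entry: simple range and consistency rules. Field names and
# limits are hypothetical.
import datetime as dt

RULES = [
    (lambda r: 18 <= r["age"] <= 100, "age outside 18-100"),
    (lambda r: r["visit_date"] >= r["consent_date"], "visit recorded before consent"),
]

def edit_check(record):
    """Return a message for every rule the record violates."""
    return [msg for rule, msg in RULES if not rule(record)]

record = {"age": 17,
          "consent_date": dt.date(2021, 3, 1),
          "visit_date": dt.date(2021, 2, 27)}
print(edit_check(record))  # ['age outside 18-100', 'visit recorded before consent']
```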

In order to evaluate whether these new risk‐based monitoring approaches are non‐inferior to traditional extensive on‐site monitoring, an assessment of differences in critical and major findings during monitoring activities is essential. Monitoring findings are determined with respect to patient safety, patient rights, and reliability of the data, and classified as critical or major according to the classification of GCP findings described in the Procedures for reporting of GCP inspections requested by the Committee for Medicinal Products for Human Use ( EMA 2017 ). Critical findings are conditions, practices, or processes that adversely affect the rights, safety, or well‐being of the participants or the quality and integrity of data. Major findings are conditions, practices, or processes that might adversely affect the rights, safety, or well‐being of the participants or the quality and integrity of data.

Why it is important to do this review

There is insufficient information to guide the choice of a GCP‐consistent monitoring approach for any given trial, and there is a lack of evidence on the effectiveness of suggested monitoring approaches. This has resulted in high heterogeneity in the monitoring practices used by research institutions, especially in the academic setting ( Morrison 2011 ). A guideline describing which type of monitoring strategy is most effective for clinical trials in terms of patient rights and safety, and data quality, is urgently needed for the academic clinical trial setting. Evaluating the benefits and disadvantages of different risk‐based monitoring strategies, incorporating components of central or targeted and triggered (or both) monitoring versus intensive on‐site monitoring, might lead to a consensus on how effective these new approaches are. In addition, evaluating the evidence of effectiveness could provide information on the extent to which on‐site monitoring content (such as SDV or the frequency of site visits) can be adapted or supported by central monitoring interventions. In this review, we explored whether monitoring that incorporates central (including statistical) components could be extended to support the overall management of study quality in terms of participant recruitment and follow‐up.

The risk‐based monitoring interventions that are eligible for this review incorporate on‐site and central monitoring components, which may vary in extent and procedural structure. In line with the recommendation from the Clinical Trials Transformation Initiative ( Grignolo 2011 ), it is crucial to systematically analyze and compare the existing evidence so that best practices may be established. This review may facilitate the sharing of current knowledge on effective monitoring strategies, which would help trialists, support units, and monitors to choose the best strategy for their trials. Evaluation of the impact of a change of monitoring approach on data quality and study cost is relevant for the effective adjustment of current monitoring strategies. In addition, evaluating the effectiveness of these new monitoring approaches in comparison with intensive on‐site monitoring might reveal possible methods to replace or support on‐site monitoring strategies by taking advantage of the increasing use of electronic systems and the resulting opportunities to implement statistical analysis tools.

Criteria for considering studies for this review

Types of studies

We included randomized or non‐randomized prospective, empirical evaluation studies that assessed monitoring strategies in one or more clinical intervention studies. These types of embedded studies have recently been called 'studies within a trial' (SWATs) ( Anon 2012 ;  Treweek 2018a ). We excluded retrospective studies because of their limitations with respect to outcome standardization and variable definitions.

We followed the Cochrane Effective Practice and Organisation of Care (EPOC) Group definitions for the eligible study designs ( EPOC 2016 ).

We applied no restrictions on language or date of publication.

Types of data

We extracted information about monitoring processes as well as evaluations of the comparison and advantages/disadvantages of different monitoring approaches. We included data from published and unpublished studies, and grey literature, that compared different monitoring strategies (e.g. standard monitoring versus a risk‐based approach).

Study characteristics of interest were:

  • monitoring interventions;
  • risk assessment characteristics;
  • rates of serious/critical audit findings; and
  • impact on participant recruitment and follow‐up.

Types of methods

We included studies that compared:

  • a risk‐based monitoring strategy versus an intensive on‐site monitoring strategy for prospective intervention studies; or
  • any other prospective comparison of monitoring strategies for intervention studies.

Types of outcome measures

Specific outcome measures were not part of the eligibility criteria.

Primary outcomes

  • Combined outcome of critical and major monitoring findings in prospective intervention studies. Different error domains of critical and major monitoring findings were combined in the primary outcome measure (eligibility violations, informed‐consent violations, findings that raise doubt about the accuracy or credibility of key trial data and deviations of intervention from the trial protocol, errors in endpoint assessment, and errors in SAE reporting).

Critical and major findings were defined according to the classification of GCP findings described in  EMA 2017 , as follows.

  • Critical findings: conditions, practices, or processes that adversely affected the rights, safety, or well‐being of the study participants or the quality and integrity of data. Observations classified as critical may have included a pattern of deviations classified either as major, or bad quality of the data or absence of source documents (or both). Manipulation and intentional misrepresentation of data was included in this group.
  • Major findings: conditions, practices, or processes that might adversely affect either the rights, safety, or well‐being of the study participants or the quality and integrity of data (or both). Major observations are serious deficiencies and are direct violations of GCP principles. Observations classified as major may have included a pattern of deviations or numerous minor observations (or both).

Our protocol stated the definitions of the combined outcome of critical and major findings used in the respective studies ( Table 6 ) ( Klatte 2019 ).

Abbreviations (Table 6): ART: antiretroviral therapy; CTU: clinical trials unit; GCP: good clinical practice; IRB: institutional review board; SAE: serious adverse event; TSM: trial supply management.

Secondary outcomes

  • Major eligibility violations.
  • Major informed‐consent violations.
  • Findings that raised doubt about the accuracy or credibility of key trial data, and deviations of the intervention from the trial protocol (with impact on patient safety or data validity).
  • Errors in endpoint assessment.
  • Errors in SAE reporting.
  • Impact of the monitoring strategy on participant recruitment and follow‐up.
  • Effect of the monitoring strategy on resource use (costs).
  • Qualitative research data or process evaluations of the monitoring interventions.

Search methods for identification of studies

Electronic searches

We conducted a comprehensive search (May 2019) using a search strategy that we developed together with an experienced scientific information specialist (HE). We systematically searched the Cochrane Central Register of Controlled Trials (CENTRAL), PubMed, and Embase via Elsevier for relevant published literature (the PubMed strategy is shown below; all searches are reported in full in  Appendix 1 ). The search strategy for all three databases was peer‐reviewed according to PRESS guidelines ( McGowan 2016 ) by the Cochrane information specialist, Irma Klerings (Cochrane Austria). We also searched the online SWAT repository (go.qub.ac.uk/SWAT-SWAR). We applied no restrictions regarding language or date of publication. Since our original search for the review took place in May 2019, we performed an updated search in March 2021 to ensure that we included all eligible studies up to that date. Our updated search identified no additional eligible studies.

We used the following terms to identify prospective studies that compared different strategies for trial monitoring:

  • triggered monitoring;
  • targeted monitoring;
  • risk‐adapted monitoring;
  • risk adapted monitoring;
  • risk‐based monitoring;
  • risk based monitoring;
  • centralized monitoring;
  • centralised monitoring;
  • statistical monitoring;
  • on site monitoring;
  • on‐site monitoring;
  • monitoring strategy;
  • monitoring method;
  • monitoring technique;
  • trial monitoring; and
  • central monitoring.

The search was intended to identify randomized trials and non‐randomized intervention studies that evaluated monitoring strategies in a prospective setting. Therefore, we modified the Cochrane sensitivity‐maximizing filter for randomized trials ( Lefebvre 2011 ).

PubMed search strategy:

(“on site monitoring”[tiab] OR “on‐site monitoring”[tiab] OR “monitoring strategy”[tiab] OR “monitoring method”[tiab] OR “monitoring technique”[tiab] OR “triggered monitoring”[tiab] OR “targeted monitoring”[tiab] OR “risk‐adapted monitoring”[tiab] OR “risk adapted monitoring”[tiab] OR “risk‐based monitoring”[tiab] OR “risk based monitoring”[tiab] OR “risk proportionate”[tiab] OR “centralized monitoring”[tiab] OR “centralised monitoring”[tiab] OR “statistical monitoring”[tiab] OR “central monitoring”[tiab]) AND (“prospective”[tiab] OR “prospectively”[tiab] OR randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab] OR placebo[tiab] OR drug therapy[sh] OR randomly[tiab] OR trial[tiab] OR groups[tiab]) NOT (animals[mh] NOT humans[mh])
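For readers who wish to reproduce or update such a search programmatically, the following sketch shows how a query of this form could be submitted to PubMed through NCBI's E-utilities via Biopython's Entrez module. The query shown is abridged from the full strategy above, and the email address is a placeholder required by NCBI usage policy:

# Sketch: running an abridged version of the PubMed strategy via NCBI
# E-utilities, using Biopython's Entrez module (pip install biopython).
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; required by NCBI

query = (
    '("on site monitoring"[tiab] OR "on-site monitoring"[tiab] OR '
    '"monitoring strategy"[tiab] OR "risk-based monitoring"[tiab]) '  # abridged
    'AND (randomized[tiab] OR trial[tiab] OR prospective[tiab]) '
    'NOT (animals[mh] NOT humans[mh])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])  # hit count and first PMIDs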

Searching other resources

We handsearched reference lists of included studies and similar systematic reviews to find additional relevant study articles ( Horsley 2011 ). In addition, we searched the grey literature ( Appendix 2 ) (i.e. conference proceedings of the Society for Clinical Trials and the International Clinical Trials Methodology Conference), and trial registries (ClinicalTrials.gov, the World Health Organization International Clinical Trials Registry Platform, the European Union Drug Regulating Authorities Clinical Trials Database, and ISRCTN) for ongoing or unpublished prospective studies. Finally, we collaborated closely with researchers of already identified eligible studies (e.g. OPTIMON, ADAMON, INSIGHT START, and MONITORING) and contacted researchers to identify further studies (and unpublished data, if available).

Data collection and analysis methods were based on the recommendations described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ) and Methodological Expectations for the Conduct of Cochrane Intervention Reviews ( Higgins 2016 ).

Selection of studies

After elimination of duplicate records, two review authors (KK and PA) independently screened titles and abstracts for eligibility. We retrieved potentially relevant studies as full‐text reports and two review authors (KK and MB) independently assessed these for eligibility, applying prespecified criteria (see:  Criteria for considering studies for this review ). We resolved any disagreements between review authors by discussion until consensus was reached, or by involving a third review author (CPM). We documented the study selection process in a flow diagram, as described in the PRISMA statement ( Moher 2009 ).

Data extraction and management

For each eligible study, two review authors (KK and MMB) independently extracted information on a number of key characteristics, using electronic data collection forms ( Appendix 3 ). Data were extracted in EPPI‐Reviewer 4 ( Thomas 2010 ). We resolved any disagreements by discussion until consensus was reached, or by involving a third review author (MB). We contacted authors of included studies directly when target information was unreported or unclear to clarify or complete extracted data. We summarized the data qualitatively and quantitatively (where possible) in the  Results  section, below. If meta‐analysis of the primary or secondary outcomes was not applicable due to considerable methodological heterogeneity between studies, we reported the results qualitatively only.

Extracted study characteristics included the following.

  • General information about the study: title, authors, year of publication, language, country, funding sources.
  • Methods: study design, allocation method, study duration, stratification of sites (stratified on risk level, country, projected enrolment, etc.).
  • Host trial characteristics:
    • design (randomized or other prospective intervention trial);
    • setting (primary care, tertiary care, community, etc.);
    • national or multinational;
    • study population;
    • total number of sites randomized/analyzed;
    • inclusion/exclusion criteria;
    • IMP risk category;
    • support from a clinical trials unit (CTU) or clinical research organization for the host trial, or evidence of an experienced research team; and
    • trial phase.
  • Monitoring intervention characteristics:
    • number of sites randomized/allocated to groups (specifying number of sites or clusters);
    • duration of intervention period;
    • risk assessment characteristics (follow‐up questions)/triggers or thresholds that induce on‐site monitoring (follow‐up questions);
    • frequency of monitoring visits;
    • extent of on‐site monitoring;
    • frequency of central monitoring reports;
    • number of monitoring visits per participant;
    • cumulative monitoring time on‐site;
    • mean number of monitoring visits per site;
    • delivery (procedures used for central monitoring: structure/components of on‐site monitoring/triggers/thresholds);
    • who performed the monitoring (study team, trial staff; qualifications of monitors);
    • degree of SDV (median number of participants undergoing SDV); and
    • co‐interventions (site/study‐specific co‐interventions).
  • Outcomes: primary and secondary outcomes, individual components of the combined primary outcome, outcome measures and scales, time points of measurement, statistical analysis of outcome data.
  • Data to assess the risk of bias of included studies (e.g. random sequence generation, allocation concealment, blinding of outcome assessors, performance bias, selective reporting, or other sources of bias).

Assessment of risk of bias in included studies

Two review authors (KK and MMB) independently assessed the risk of bias in each included study using the criteria described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ) and by the Cochrane EPOC Review Group ( EPOC 2017 ). The domains provided by these criteria were evaluated for all included randomized studies and assigned ratings of low, high, or unclear risk of bias. We assessed non‐randomized studies separately, using the ROBINS‐I tool for bias assessment in non‐randomized studies ( Higgins 2020 , Chapter 25).

We assessed the risk of bias for randomized studies as follows.

Selection bias

Generation of the allocation sequence

  • If sequence generation was truly random (e.g. computer generated): low risk.
  • If sequence generation was not specified and we were unable to obtain relevant information from study authors: unclear risk.
  • If there was a quasi‐random sequence generation (e.g. alternation): high risk.
  • Non‐randomized trials: high risk.

Concealment of the allocation sequence (steps taken prior to the assignment of intervention to ensure that knowledge of the allocation was not possible)

  • If opaque, sequentially numbered envelopes were used or central randomization was performed by a third party: low risk.
  • If the allocation concealment was not specified and we were unable to ascertain whether the allocation concealment had been protected before and until assignment: unclear risk.
  • Non‐randomized trials and studies that used inadequate allocation concealment: high risk.

For non‐randomized studies, we further assessed whether investigators attempted to balance groups by design (to control for selection bias) and attempted to control for confounding. Such studies were rated at high risk according to the Cochrane risk of bias tool, but we considered these efforts to control bias in our judgment of the certainty of the evidence according to GRADE.

Performance bias

It was not practicable to blind participating sites and monitors to the intervention to which they were assigned, because of the procedural differences between monitoring strategies.

Detection bias (blinding of the outcome assessor)

  • If the assessors performing audits had knowledge of the intervention and thus outcomes were not assessed blindly: high risk.
  • If we could not ascertain whether assessors were blinded and study authors did not provide information to clarify: unclear risk.
  • If outcomes were assessed blindly: low risk.

Attrition bias

We did not expect to have missing data for our primary outcome (i.e. the rates of serious/critical audit findings at the end of the host clinical trials; and because missing participants were not audited, missing data in the proportion of critical findings were not expected). However, for the statistical power of the individual study outcomes, missing data for participants and site accrual could be an issue and is discussed below ( Discussion ).

Selective reporting bias

We investigated whether all outcomes mentioned in available study protocols, registry entries, or methodology sections of study publications were reported in results sections.

  • If not all outcomes specified in the methodology section or the study protocol were reported in the results, or if outcomes reported in the results were not listed in the methodology section or the protocol: high risk.
  • If outcomes were only partly reported in the results, or if an obvious outcome was not mentioned in the study: high risk.
  • If information on the prespecified outcomes and the study protocol was unavailable: unclear risk.
  • If all outcomes were listed in the protocol/methodology section and reported in the results: low risk.

Other potential sources of bias

  • If there was one or more important risk of bias (e.g. flawed study design): high risk.
  • If there was incomplete information regarding a problem that may have led to bias: unclear risk.
  • If there was no evidence of other sources of bias: low risk.

We assessed the risk of bias for non‐randomized studies as follows.

Pre‐intervention domains

  • Confounding – baseline confounding occurs when one or more prognostic variables (factors that predict the outcome of interest) also predict the intervention received at baseline.
  • Selection bias (bias in selection of participants into the study) – when exclusion of some eligible participants, or the initial follow‐up time of some participants, or some outcome events, is related to both intervention and outcome, there will be an association between interventions and outcome even if the effect of interest is truly null.

At‐intervention domain

  • Information bias – bias in classification of interventions, i.e. bias introduced by either differential or non‐differential misclassification of intervention status.

Post‐intervention domains

  • Confounding – bias that arises when there are systematic differences between experimental intervention and comparator groups in the care provided, which represent a deviation from the intended intervention(s).
  • Selection bias – bias due to exclusion of participants with missing information about intervention status or other variables such as confounders.
  • Information bias – bias introduced by either differential or non‐differential errors in measurement of outcome data.
  • Reporting bias – bias in selection of the reported result.

Measures of the effect of the methods

We conducted a comparative analysis of the impact of different risk‐based monitoring strategies on data quality and on measures of patient rights and safety, for example the proportion of critical findings.

If meta‐analysis was appropriate, we analyzed dichotomous data using a risk ratio with a 95% confidence interval (CI). We analyzed continuous data using mean differences with a 95% CI if the measurement scale was the same. If the scale was different, we used standardized mean differences with 95% CIs.
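As a worked illustration of the first of these effect measures, the following sketch computes a risk ratio and its 95% CI from 2×2 counts using the standard log‐scale normal approximation; the counts are invented for illustration only:

# Sketch: risk ratio with a 95% CI from 2x2 counts (illustrative data).
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for independent binomial proportions
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

print(risk_ratio(40, 100, 34, 100))  # RR ~ 1.18 (0.82 to 1.69)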

Unit of analysis issues

Included studies could differ in outcomes chosen to assess the effects of the respective monitoring strategy. Critical/serious audit findings could be reported on a participant level, per finding event, or per site. Furthermore, components of the primary endpoints could vary between studies. We specified the study outcomes as defined in the study protocols or reports, and only meta‐analyzed outcomes that were based on similar definitions. In addition, we compared individual components of the primary outcome if these were consistently defined across studies (e.g. eligibility violations).

Cluster randomized trials have been highlighted separately from individually randomized trials. We reported the baseline comparability of clusters and considered statistical adjustment to reduce any potential imbalance. We estimated the intracluster correlation coefficient (ICC), as described by  Higgins 2020 , using information from the study (if available) or an external estimate from a similar study. We then conducted sensitivity analyses to explore variation in ICC values.
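To make the role of the ICC concrete, the following sketch applies the standard design‐effect adjustment for cluster‐randomized trials described in the Cochrane Handbook, deflating the sample size so it can be combined with individually randomized trials; the cluster size and ICC are illustrative assumptions:

# Sketch: design-effect adjustment for a cluster-randomized trial.
def effective_sample_size(n_participants, mean_cluster_size, icc):
    """Deflate a cluster trial's sample size by the design effect
    DE = 1 + (m - 1) * ICC, where m is the average cluster size."""
    design_effect = 1 + (mean_cluster_size - 1) * icc
    return n_participants / design_effect

# 1600 participants in clusters of average size 20, assumed ICC of 0.05:
print(round(effective_sample_size(1600, 20, 0.05)))  # 821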

Dealing with missing data

We contacted authors of included studies in an attempt to obtain unpublished data or additional information of value for this review ( Young 2011 ). Where a study had been registered and a relevant outcome was specified in the study protocol but no results were reported, we contacted the authors and sponsors to request study reports. We created a table to summarize the results for each outcome. We narratively explored the potential impact of missing data in our  Discussion .

Assessment of heterogeneity

When we identified methodological heterogeneity, we did not pool results in a meta‐analysis. Instead, we qualitatively synthesized results by grouping studies with similar designs and interventions, and described existing methodological heterogeneity (e.g. use of different methods to assess outcomes). If study characteristics, methodology, and outcomes were sufficiently similar across studies, we quantitatively pooled results in a meta‐analysis and assessed heterogeneity by visually inspecting forest plots of included studies (location of point estimates and the degree to which CIs overlapped), and by considering the results of the Chi² test for heterogeneity and the I² statistic. We followed the guidance outlined in  Higgins 2020  to quantify statistical heterogeneity using the I² statistic:

  • 0% to 40% might not be important;
  • 30% to 60% may represent moderate heterogeneity;
  • 50% to 90% may represent substantial heterogeneity;
  • 75% to 100%: considerable heterogeneity.

The importance of the observed value of the I² statistic depends on the magnitude and direction of effects, and the strength of evidence for heterogeneity (e.g. the P value from the Chi² test, or a credibility interval for the I² statistic). If our I² value indicated that heterogeneity was a possibility, and either Tau² was greater than zero or the P value for the Chi² test was low (less than 0.10), heterogeneity may have been due to a factor other than chance.
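For illustration, the I² statistic can be computed directly from Cochran's Q and its degrees of freedom (number of studies minus one), as in the following sketch with invented inputs:

# Sketch: I-squared from Cochran's Q (inputs are illustrative).
def i_squared(q, df):
    """I2 = max(0, (Q - df) / Q) * 100: the percentage of variability in
    effect estimates attributable to heterogeneity rather than chance."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

print(i_squared(q=8.4, df=4))  # ~52.4%, i.e. moderate to substantial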

Possible sources of heterogeneity from the characteristics of host trials included:

  • trial phase;
  • support from a CTU or clinical research organization for host trial or evidence for an experienced research team; and
  • study population.

Possible sources of heterogeneity from the characteristics of methodology studies included:

  • study design;
  • components of outcome;
  • method of outcome assessment;
  • level of outcome (participant/site); and
  • classification of monitoring findings.

Due to the high heterogeneity of the included studies, we used the random‐effects method ( DerSimonian 1986 ), which incorporates an assumption that the different studies are estimating different, yet related, intervention effects. As described in Section 9.4.3.1 of the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ), the method is based on the inverse‐variance approach, adjusting the study weights according to the extent of variation, or heterogeneity, among the intervention effects. The DerSimonian and Laird method estimates the amount of variation across studies by comparing each study's result with an inverse‐variance fixed‐effect meta‐analysis result. Given the small number of studies in our meta‐analyses and the large differences between studies in the number of participants or sites analyzed, this approach resulted in a more appropriate weighting of the included studies.
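The following sketch illustrates the DerSimonian and Laird procedure on two invented (log risk ratio, standard error) pairs. It is a minimal illustration of the weighting scheme, not the review's actual analysis:

# Sketch: DerSimonian-Laird random-effects pooling of log risk ratios
# (illustrative inputs, not the review's data).
import math

def dersimonian_laird(log_rrs, ses):
    w = [1 / se**2 for se in ses]                        # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]            # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Pooled RR with 95% CI for two hypothetical studies:
print(dersimonian_laird([math.log(0.92), math.log(1.18)], [0.11, 0.16]))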

Assessment of reporting biases

To decrease the risk of publication bias affecting the findings of the review, we applied various search approaches using different resources. These included grey literature searching and checking reference lists (see  Search methods for identification of studies ). If 10 or more studies had been available for a meta‐analysis, we would have created a funnel plot to investigate whether reporting bias may have existed, unless all studies were of a similar size. If we had noticed asymmetry, we could not have concluded that reporting biases existed; rather, we would have considered the sample sizes and the presence (and possible influence) of outliers, discussed potential explanations such as publication bias or poor methodological quality of included studies, and performed sensitivity analyses.
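Had a funnel plot been warranted, it would take the familiar form sketched below: effect estimates plotted against their standard errors, with the axis inverted so the most precise studies sit at the top. The data here are invented purely to show the construction:

# Sketch: a funnel plot of log risk ratios vs. standard errors
# (illustrative data; requires matplotlib).
import math
import matplotlib.pyplot as plt

log_rrs = [math.log(x) for x in (0.8, 0.95, 1.0, 1.1, 1.3, 0.7, 1.5, 0.9, 1.05, 1.2)]
ses = [0.10, 0.12, 0.15, 0.18, 0.22, 0.25, 0.30, 0.35, 0.40, 0.45]

plt.scatter(log_rrs, ses)
plt.axvline(0, linestyle="--")   # line of no effect (RR = 1)
plt.gca().invert_yaxis()         # most precise studies at the top
plt.xlabel("log risk ratio")
plt.ylabel("standard error")
plt.title("Funnel plot (illustrative)")
plt.show()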

Data synthesis

Data were synthesized using tables to compare different monitoring strategies. We also reported results by different study designs. This was accompanied by a descriptive summary in the  Results  section. We used Review Manager 5 to conduct our statistical analysis and undertake meta‐analysis, where appropriate ( Review Manager 2014 ).

If meta‐analysis of the primary or secondary outcomes was not possible, we reported the results qualitatively.

Two review authors (KK and MB) assessed the quality of the evidence. Based on the methods described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ) and GRADE ( Guyatt 2013a ;  Guyatt 2013b ), we created summary of findings tables for the main comparisons of the review. We presented all primary and secondary outcomes outlined in the  Types of outcome measures  section. We described the study settings and number of sites addressing each outcome. For each assumed risk of bias cited, we provided a source and rationale, and we implemented the GRADE system to assess the quality of the evidence using GRADEpro GDT software or the GRADEpro GDT app ( GRADEpro GDT ). If meta‐analysis was not appropriate or the units of analysis could not be compared, we presented results in a narrative summary of findings table. In this case, the imprecision of the evidence was an issue of concern due to the lack of a quantitative effect measure.

Subgroup analysis and investigation of heterogeneity

If visual inspection of the forest plots, the Chi² test, the I² statistic, and the Tau² statistic indicated that statistical heterogeneity might be present, we carried out exploratory subgroup analysis. A subgroup analysis was deemed appropriate if the included studies satisfied criteria assessing the credibility of subgroup analyses ( Oxman 1992 ;  Sun 2010 ).

The following was our a priori subgroup: monitoring strategies using very similar approaches and consistent outcomes.   

Sensitivity analysis

We conducted sensitivity analyses restricted to:

  • peer‐reviewed and published studies only (i.e. excluding unpublished studies); and
  • studies at low risk of bias only (i.e. excluding non‐randomized studies and randomized trials without allocation concealment;  Assessment of risk of bias in included studies ).

Description of studies

See: Characteristics of included studies and Characteristics of excluded studies tables.

Results of the search

See  Figure 1  (flow diagram).

Figure 1. Study flow diagram.

Our search of CENTRAL, PubMed, and Embase resulted in 3103 unique citations after removal of duplicates; two additional citations were identified through the reference lists of relevant articles, giving 3105 citations in total. After screening titles and abstracts, we sought the full texts of 51 records to confirm inclusion or clarify uncertainties regarding eligibility. Eight studies (14 articles) were eligible for inclusion. The results of six of these were published as full papers ( Brosteanu 2017b ;  Fougerou‐Leurent 2019 ;  Liènard 2006 ;  Mealer 2013 ;  Stenning 2018b ;  Wyman 2020 ), one study was published as an abstract only ( Knott 2015 ), and one study was submitted for publication ( Journot 2017 ). We did not identify any ongoing eligible studies or studies awaiting classification.

Included studies

Seven of the eight included studies were government or charity funded; the other was industry funded ( Liènard 2006 ). The primary objectives were heterogeneous and included non‐inferiority evaluations of overall monitoring performance as well as of single elements of monitoring (SDV, initiation visit); see the  Characteristics of included studies  table and  Table 7 .

Abbreviations (Table 7): ARDS network: Acute Respiratory Distress Syndrome network; ART: antiretroviral therapy; ChiLDReN: Childhood Liver Disease Research Network; CRF: case report form; CTU: clinical trials unit; GCP: good clinical practice; IQR: interquartile range; min: minute; MRC: Medical Research Council; SAE: serious adverse event; SD: standard deviation; SDV: source data verification.

Overall, there were five groups of comparisons:

  • risk‐based monitoring guided by an initial risk assessment and information from central monitoring during study conduct versus extensive on‐site monitoring (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 );
  • central monitoring with triggered on‐site visits versus regular (untriggered) on‐site visits ( Knott 2015 ; TEMPER:  Stenning 2018b );
  • central statistical monitoring and local monitoring at sites with annual on‐site visits (untriggered) versus central statistical monitoring and local monitoring at sites only (START‐MV:  Wyman 2020 );
  • 100% on‐site SDV versus remote SDV ( Mealer 2013 ) or targeted SDV (MONITORING:  Fougerou‐Leurent 2019 ); and
  • on‐site initiation visit versus no on‐site initiation visit ( Liènard 2006 ).

Since there was substantial heterogeneity in the investigated monitoring strategies and applied study designs, a short overview of each included study is provided below.

General characteristics of individual included studies

1. Risk‐based versus extensive on‐site monitoring

The ADAMON study was a cluster randomized non‐inferiority trial comparing risk‐adapted monitoring with extensive on‐site monitoring at 213 sites participating in 11 international and national clinical trials (all in secondary or tertiary care and with adults and children as participants) ( Brosteanu 2017b ). It included only randomized, multicenter clinical trials (at least six trial sites) with a non‐commercial sponsor that had standard operating procedures (SOPs) for data management and trial supervision as well as central monitoring of at least basic extent. A prior risk analysis assigned each trial to one of three risk categories, and trials were monitored according to a prespecified monitoring plan for their respective category. While the risk‐adapted monitoring plan for the highest risk category was only marginally less extensive than full on‐site monitoring, the plans for the lower risk categories relied on information from central monitoring and previous visits to determine the amount of on‐site monitoring. This resulted in a marked reduction of on‐site monitoring for sites without noticeable problems, limited to monitoring of key data (20% to 50%). Only studies classified as either intermediate or low risk based on the trial‐specific risk analysis ( Brosteanu 2009 ) were included in the study. From the 11 clinical trials, 156 sites were audited by ADAMON‐trained auditors and included in the final analysis. The analysis included a meta‐analysis of results obtained within each trial.

The OPTIMON study was a cluster randomized non‐inferiority trial evaluating a risk‐based monitoring strategy within 22 national and international multicenter studies ( Journot 2017 ). The 22 trials included 15 randomized trials, four cohort studies, and three cross‐sectional studies in the secondary care setting with adults, children, and older people as participants. All trials involved methodology and management centers or CTUs with at least two years of experience in multicenter clinical research and SOPs in place. A total of 83 sites were randomized to one of two monitoring strategies. The risk‐based monitoring approach consisted of an initial risk assessment with four outcome levels (low, moderate, substantial, and high) and a standardized monitoring plan, where on‐site monitoring increased with the risk level of the trial ( Journot 2011 ). The study aimed to assess whether such a risk‐adapted monitoring strategy provided results similar to those of the 100% on‐site strategy on the main study quality criteria and, at the same time, improved other aspects such as timeliness and costs ( Journot 2017 ). Only 759 participants from 68 sites were included in the final analysis, because of insufficient recruitment at 15 of the 83 randomized sites. The difference between strategies was evaluated by the proportion of participants without remaining major non‐conformities in all of the four assessed error domains (consent violation, SAE reporting violation, eligibility violation, and errors in primary endpoint assessment), assessed after trial monitoring by the OPTIMON team. The overall comparison of strategies was estimated using a generalized estimating equation (GEE) model, adjusted for risk level and an intra‐site, intra‐patient correlation common to all sites.
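As an illustration of this type of analysis, the following sketch fits a GEE logistic model with an exchangeable intra‐site correlation structure using the statsmodels API. The data are simulated and the variable names are our own, so this mirrors the general approach rather than the OPTIMON analysis itself:

# Sketch: GEE logistic model with exchangeable intra-site correlation,
# on simulated data (requires numpy, pandas, statsmodels).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, per_site = 20, 10
df = pd.DataFrame({
    "site": np.repeat(np.arange(n_sites), per_site),
    "risk_based": np.repeat(rng.integers(0, 2, n_sites), per_site),
})
# Simulate a binary 'major finding remaining' outcome per participant:
p = 0.35 + 0.05 * df["risk_based"]
df["major_finding"] = rng.binomial(1, p)

model = smf.gee(
    "major_finding ~ risk_based",
    groups="site",                        # cluster on trial site
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())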

2. Central monitoring with triggered on‐site visits versus regular (untriggered) on‐site visits

Knott 2015  was a monitoring study embedded in a large international multicenter trial, evaluating the ability of central statistical monitoring procedures to identify sites with problems. Monitoring findings at sites visited as a result of central statistical monitoring procedures were compared to monitoring findings at sites chosen by regional co‐ordinating centers. Oversight of the clinical multicenter trial was supported by central statistical monitoring that identified high‐scoring sites as priorities for further investigation and triggered a targeted on‐site visit. To compare targeted on‐site visits with regular on‐site visits, both high‐scoring sites and some low‐scoring sites in the same countries, identified by the country teams as potentially problematic, were visited. The decision about which of the low‐scoring sites would benefit most from an on‐site visit was based on the regional co‐ordinating centers' prior experience with the site. Twenty‐one sites (12 identified by central statistical monitoring, nine others as comparators) received a comprehensive monitoring visit from a senior monitor, and the numbers of major and minor findings were compared between the two types of visits (targeted versus regular).

The TEMPER study ( Stenning 2018b ) was conducted in three ongoing phase III randomized multicenter oncology trials with 156 UK sites ( Diaz‐Montana 2019a ). All three included trials were in secondary care settings, were conducted and monitored by the MRC CTU at University College London, were sponsored by the UK MRC, and employed a triggered monitoring strategy. The study used a matched‐pair design to assess the ability of targeted monitoring to distinguish sites at which higher and lower rates of protocol or GCP violations (or both) would be found during site visits. The targeted monitoring strategy was based on trial data that were scrutinized centrally, with prespecified triggers provoking an on‐site visit when certain thresholds had been crossed. To compare this approach with standard on‐site monitoring, a matching algorithm proposed untriggered sites to visit by minimizing differences in (1) the number of participants and (2) the time since the first participant was randomized, and by maximizing differences in trigger score. Monitoring data from 42 matched pairs of visits (84 visits) at 63 sites were included in the analysis of the TEMPER study. The monitoring strategy was assessed over all trial phases, and the outcome was assessed by comparing the proportion of sites with one or more major or critical findings not already identified through central monitoring or a previous visit ('new' findings). The prognostic value of individual triggers was also assessed.

3. Central and local monitoring with annual on‐site visits versus central and local monitoring only

The START Monitoring Substudy was conducted within one large international, publicly funded randomized clinical trial (START – Strategic Timing of AntiRetroviral Treatment) ( Wyman 2020 ). The monitoring substudy included 4371 adults from 196 secondary care sites in 34 countries. All clinical sites were associated with one of four INSIGHT co‐ordinating centers, and central monitoring by the statistical center was performed continuously using central databases. In addition, local monitoring of regulatory files, SDV, and study drug management was performed by site staff semi‐annually. In the monitoring substudy, sites were randomized to receive annual on‐site monitoring in addition to central and local monitoring, or central and local monitoring alone. The composite monitoring outcome consisted of eligibility violations, informed‐consent violations, intervention violations (use of antiretroviral therapy as initial treatment not permitted by protocol), and primary endpoint and SAE reporting violations. In the analysis, a generalized estimating equation (GEE) model with fixed effects to account for clustering was used, and each component of the composite outcome was evaluated to interpret the relevance of the overall composite result.

4. Traditional 100% source data verification versus remote or targeted source data verification

Mealer 2013  was a pilot study of remote SDV in two national clinical trial networks, in which study participants were randomized to either remote SDV followed by on‐site verification or traditional on‐site SDV. Thirty‐two participants in randomized and other prospective clinical intervention trials within the adult trials network and the pediatric network were included in this monitoring study. A sample of participants in this secondary and tertiary care setting who were due for an upcoming monitoring visit that included full SDV was randomized, stratified by individual hospital. The five study sites had different health information technology infrastructures, resulting in different approaches to enable remote access and remote data monitoring. Only participants randomized to remote SDV had a previsit remote SDV performed prior to full SDV at the scheduled visit. Remote SDV was performed by validating the data elements captured on CRFs submitted to the co‐ordinating center, using the same data verification protocols as during on‐site visits; remote monitors had telephone access to the local co‐ordinators. The primary outcome was the proportion of data values identified versus not identified for both monitoring strategies. As an additional economic outcome, the total time required for the study monitor to verify a case report form item with either remote or on‐site monitoring was analyzed.

The MONITORING study was a prospective cross‐over study comparing full SDV, where 100% of data were verified for all participants, with targeted SDV, where only key data were verified for all participants ( Fougerou‐Leurent 2019 ). Data from 126 participants in one multinational and five national clinical trials managed by the Clinical Investigation Center at the Rennes University Hospital INSERM in France were included in the analysis. These studies comprised five randomized trials and one non‐comparative pilot single‐center phase II study taking place in either tertiary or secondary care units. Key data verified by the targeted SDV included informed consent, inclusion and exclusion criteria, main prognostic variables at inclusion, primary endpoint, and SAEs. The same CRFs were analyzed with full and targeted SDV. SDV under both strategies was followed by the same data‐management program, which detected missing data and checked consistency; the strategies were compared on final data quality, global workload, and staffing costs. The databases resulting from full SDV and from targeted SDV after the data‐management process were compared, and identified discrepancies were considered as errors remaining after targeted monitoring.

5. Systematic on‐site initiation visit versus on‐site initiation visit upon request

Liènard 2006  was a monitoring study within a large international randomized trial of cancer treatment. A total of 573 participants from 135 centers in France were randomized at the center level to receive an on‐site initiation visit for the study or no initiation visit. Although the study was terminated early, 68 secondary care centers, stratified by center type (private versus public hospital), had entered at least one participant into the study. The study was terminated because the sponsor decided to redirect on‐site monitoring visits to centers in which a problem had been identified. The aim of this monitoring study was to assess the impact of on‐site initiation visits on the following outcomes: participant recruitment, quantity and quality of data submitted to the trial co‐ordinating office, and participants' follow‐up time. On‐site initiation visits by monitors included review of the protocol, inclusion and exclusion criteria, safety issues, randomization procedure, CRF completion, study planning, and drug management. Investigators requesting on‐site visits were visited regardless of their allocated randomized group, and results were analyzed by randomized group.

Characteristics of the monitoring strategies

There was substantial heterogeneity in the characteristics of the evaluated monitoring strategies.  Table 7  summarizes the main components of the evaluated strategies.

Central monitoring components within the monitoring strategies

Use of central monitoring to trigger/adjust on‐site monitoring

Central monitoring plays an important role in the implementation of risk‐based monitoring strategies. An evaluation of site performance through continuous analysis of data quality can be used to direct on‐site monitoring to specific sites or to support remote monitoring methods. A reduction in on‐site monitoring for certain trials was accompanied by central monitoring, which also enabled additional on‐site intervention in cases of poor performance related to data quality, completeness, or patient rights and safety at specific sites. Six included studies used central monitoring methods to support their new monitoring strategy (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 ;  Knott 2015 ;  Mealer 2013 ; TEMPER:  Stenning 2018b ; START Monitoring Substudy:  Wyman 2020 ). Four of these studies used central monitoring information to trigger or direct on‐site monitoring. In the ADAMON study, part of the monitoring plan for the lower‐ and medium‐risk studies comprised a regular assessment of the trial sites as 'with' or 'without noticeable problems' ( Brosteanu 2017b ). Classification as a site 'with noticeable problems' resulted in an increased number of on‐site visits per year. In the OPTIMON study, major problems (patient rights and safety, quality of results, regulatory aspects) triggered an additional on‐site visit for level B and C sites, or a first on‐site visit for level A sites ( Journot 2017 ). All entered data were checked for completeness and consistency for all participants at all sites ( OPTIMON study protocol 2008 ). The TEMPER study evaluated prespecified triggers for all sites in order to direct on‐site visits to sites with a high trigger score ( Stenning 2018b ). A trigger data report based on database exports was generated and used in the trigger meeting to guide the prioritization of triggered sites. Triggers were 'fired' when an inequality rule reflecting a certain threshold of data non‐conformities evaluated as 'true'. Each trigger had an associated weight specifying its importance relative to other triggers, resulting in a trigger score for each site that was evaluated in trigger meetings and guided the prioritization of on‐site visits ( Diaz‐Montana 2019a ). In  Knott 2015 , all sites of the multicenter international trial received central statistical monitoring that identified high‐scoring sites as priorities for further investigation. Scoring was applied every six months; in a subsequent meeting, the central statistical monitoring group, including the chief investigator, chief statistician, junior statistician, and head of trial monitoring, assessed high‐scoring sites and discussed trigger adjustments. Fired triggers resulted in a score of one, and high‐scoring sites were chosen for a monitoring visit in the triggered intervention group.
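A minimal sketch of this weighted trigger‐scoring mechanism is shown below. The trigger rules, weights, and site data are illustrative assumptions, not the actual TEMPER triggers:

# Sketch: weighted trigger scoring in the spirit of TEMPER. Each trigger
# is an inequality rule over site data with an associated weight; a
# site's score is the sum of the weights of 'fired' triggers.
def site_trigger_score(site, triggers):
    return sum(weight for rule, weight in triggers if rule(site))

triggers = [
    (lambda s: s["sae_reports_overdue"] > 2, 2.0),     # illustrative rule/weight
    (lambda s: s["missing_forms_rate"] > 0.10, 1.5),
    (lambda s: s["eligibility_queries"] > 5, 1.0),
]

sites = {
    "site-A": {"sae_reports_overdue": 0, "missing_forms_rate": 0.04, "eligibility_queries": 1},
    "site-B": {"sae_reports_overdue": 4, "missing_forms_rate": 0.15, "eligibility_queries": 2},
}

scores = {name: site_trigger_score(data, triggers) for name, data in sites.items()}
print(scores)  # {'site-A': 0, 'site-B': 3.5} -> site-B prioritized for a visit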

Use of central monitoring and remote monitoring to support on‐site monitoring

In the ADAMON study, central monitoring activities included statistical monitoring with multivariate analysis, structured telephone interviews, site status in terms of participant numbers (number of included participants, number lost to follow‐up, screening failures, etc.) ( Brosteanu 2017b ). In the OPTIMON study, computerized controls were made on data entered from all participants in all investigation sites to check their completeness and consistency ( Journot 2017 ). Following these controls, the clinical research associate sent the investigator requests for clarification or correction of any inconsistent data. Regular contact was maintained by telephone, fax, or e‐mail with the key people at the trial site to ensure that procedures were observed, and a report was compiled in the form of a standardized contact form.

Use of central monitoring without on‐site monitoring

In the START Monitoring Substudy, central monitoring was performed by the statistical center using data in the central database on a continuous basis ( Wyman 2020 ). Reports summarizing the reviewed data were provided to all sites and site investigators and were updated regularly (daily, weekly, or monthly). Sites and staff from the statistical center and co‐ordinating centers also reviewed data summarizing each site's performance every six months and provided quantitative feedback to clinical sites on study performance. These reviews focused on participant retention, data quality, timeliness, and completeness of START Monitoring Substudy endpoint documentation, and adherence to local monitoring requirements. In addition, trained nurses at the statistical center reviewed specific adverse events and unscheduled hospitalizations for possible misclassification of primary START clinical events. Tertiary data, for example, laboratory values, were also reviewed by central monitoring ( Hullsiek 2015 ).

Use of central monitoring for source data verification

In the  Mealer 2013  pilot study, remote SDV validated the data elements captured on CRFs submitted to the co‐ordinating center. Data collection instruments for capturing study variables were developed and remote access for the study monitor was set up to allow secure online access to electronic records. The same data verification protocols were used as during on‐site visits and remote monitors had telephone access to local co‐ordinators.

Initial risk assessment

An initial risk assessment of trials was performed in the ADAMON ( Brosteanu 2017b ) and OPTIMON ( Journot 2017 ) studies. The risk assessment scale (RAS) used in the OPTIMON study was evaluated for validity and reproducibility in the Pre‐OPTIMON study, and was performed in three steps leading to four risk categories that imply different monitoring plans. The first step related to the risk of the studied intervention in terms of product authorization, invasiveness of the surgical technique, CE marking class, and invasiveness of other interventions, which led to a temporary classification in the second step. In the third step, the risk of mortality based on the procedures of the intervention and the vulnerability of the study population were additionally taken into consideration and could lead to an increase in risk level. The risk analysis used in the ADAMON study also had three steps. The first step involved an assessment of the risk associated with the therapeutic intervention compared to the standard of care. The second step was based on the presence of at least one of a list of risk indicators for the participant or the trial results. In the third step, the robustness of trial procedures (a reliable and easy‐to‐assess primary endpoint, simple trial procedures) was evaluated. The risk analysis resulted in one of three risk categories, each entailing different basic on‐site monitoring measures.
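Schematically, such a stepwise categorization can be thought of as a simple decision rule, as in the following deliberately simplified sketch. The inputs and branching are our own illustration, not the published ADAMON algorithm:

# Schematic sketch of a stepwise trial risk categorization in the spirit
# of the ADAMON procedure; inputs and rules are simplified illustrations.
def stepwise_risk_category(intervention_beyond_standard_care,
                           has_risk_indicator,
                           robust_procedures):
    """Return 'high', 'intermediate', or 'low' risk, each implying a
    different basic on-site monitoring plan."""
    # Step 1: risk of the therapeutic intervention vs. standard of care
    if intervention_beyond_standard_care:
        return "high"
    # Step 2: presence of risk indicators for participants or results
    if has_risk_indicator:
        # Step 3: robust trial procedures can mitigate the risk
        return "intermediate" if robust_procedures else "high"
    return "low"

print(stepwise_risk_category(False, True, True))   # intermediate
print(stepwise_risk_category(False, False, True))  # low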

Excluded studies

We excluded 37 studies after full‐text screening ( Characteristics of excluded studies  table). We excluded articles for the following reasons: 21 studies did not compare different monitoring strategies and 16 were not prospective studies.   

Risk of bias in included studies

Risk of bias in the included studies is summarized in  Figure 2  and  Figure 3 . We assessed all studies for risk of bias following the criteria described in the Cochrane Handbook for Systematic Reviews of Interventions for randomized trials ( Higgins 2020 ). In addition, we used the ROBINS‐I tool for the three non‐randomized studies ( Fougerou‐Leurent 2019 ;  Knott 2015 ;  Stenning 2018b ; results shown in  Appendix 4 ).

Figure 2. Risk of bias graph: review authors' judgments about each risk of bias item presented as percentages across all included studies.

Figure 3. Risk of bias summary: review authors' judgments about each risk of bias item for each included study.

Group allocation was random and concealed in four of the eight studies, which we judged at low risk of selection bias ( Brosteanu 2017b ;  Journot 2017 ;  Liènard 2006 ;  Wyman 2020 ). Three were non‐randomized studies; two evaluated triggered monitoring (matched comparator design), where randomization was not practicable due to the dynamic process of the monitoring intervention ( Knott 2015 ;  Stenning 2018b ), and the other used a prospective cross‐over design (the same CRFs were analyzed with full or targeted SDV) ( Fougerou‐Leurent 2019 ). Since we could not identify an increased risk of bias for the prospective cross‐over design (the intervention was applied to the same participant data), we rated that study at low risk of selection bias. Although the original investigators attempted to balance groups and to control for confounding in the TEMPER study ( Stenning 2018b ), we rated the design at high risk of bias according to the criteria described in the Cochrane Handbook for Systematic Reviews of Interventions ( Higgins 2020 ). One study randomly assigned participant‐level data without any information about allocation concealment (unclear risk of bias) ( Mealer 2013 ).

In six studies, investigators, site staff, and data collectors of the trials were not informed about the monitoring strategy applied ( Brosteanu 2017b ;  Journot 2017 ;  Knott 2015 ;  Liènard 2006 ;  Stenning 2018b ;  Wyman 2020 ). However, blinding of monitors was not practicable in these six studies, and thus we judged them at high risk of bias. In two studies, blinding of site staff was difficult because the monitoring interventions involved active participation of trial staff (high risk of bias) ( Fougerou‐Leurent 2019 ;  Mealer 2013 ). It is unclear whether data management was blinded in these two studies.

Detection bias

Although monitoring could usually not be blinded due to the methodologic and procedural differences in the interventions, three studies performed a blinded outcome assessment (low risk of bias). In ADAMON, the audit teams verifying the monitoring outcomes of the two monitoring interventions were not informed of the sites' monitoring strategy and did not have access to any monitoring reports ( Brosteanu 2017b ). Audit findings were reviewed in a blinded manner by members of the ADAMON team and discussed with auditors, as necessary, to ensure that reporting was consistent with the ADAMON audit manuals ( ADAMON study protocol 2008 ). In OPTIMON, the main outcome was validated by a blinded validation committee ( Journot 2017 ). In TEMPER, the lack of blinding of monitoring staff was mitigated by consistent training on the trials and monitoring methods, the use of a common finding grading system, and independent review of all major and critical findings which was blind to visit type ( Stenning 2018b ). The other five studies provided no information on blinded outcome assessment or blinding of statistical center staff (unclear risk of bias) ( Fougerou‐Leurent 2019 ;  Knott 2015 ;  Liènard 2006 ;  Mealer 2013 ;  Wyman 2020 ).

Incomplete outcome data

All eight included studies were at low risk of attrition bias ( Brosteanu 2017b ;  Fougerou‐Leurent 2019 ;  Journot 2017 ;  Knott 2015 ;  Liènard 2006 ;  Mealer 2013 ;  Stenning 2018b ;  Wyman 2020 ). However, ADAMON reported that "… one site refused the audit, and in the last five audited trials, 29 sites with less than three patients were not audited due to limited resources, in large sites (>45 patients), only a centrally preselected random sample of patients was audited. Arms are not fully balanced in numbers of patients audited (755 extensive on‐site monitoring and 863 risk‐adapted monitoring) overall" ( Brosteanu 2017b ). Another study was terminated prematurely due to slow participant recruitment, but the number of centers that randomized participants were equal in both groups (low risk of bias) ( Liènard 2006 ).   

Selective reporting

A design publication was available for one study (START Monitoring Substudy [two publications]  Hullsiek 2015 ;  Wyman 2020 ) and three studies published a protocol (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 ; TEMPER:  Stenning 2018b ). Three of these studies reported on all outcomes described in the protocol or design paper in their publications ( Brosteanu 2017b ;  Stenning 2018b ;  Wyman 2020 ), and one study has not been published as a full report yet, but provided outcomes stated in the protocol in the available conference presentation ( Journot 2017 ). One study has only been published as an abstract to date ( Knott 2015 ), but results of the prespecified outcomes were communicated to us by the study authors. For the three remaining studies, there were no protocol or registry entries available but the outcomes listed in the methods sections of their publications were all reported in the results and discussion sections (MONITORING:  Fougerou‐Leurent 2019 ;  Liènard 2006 ;  Mealer 2013 ).

There was an additional potential source of bias in one study (MONITORING:  Fougerou‐Leurent 2019 ). If the clinical research assistant spotted false or missing non‐key data while checking key data, he or she may have corrected the non‐key data in the CRF. This potential bias may have led to an underestimate of the difference between the two monitoring strategies. The CRF verified by full SDV was considered to be without errors.

Effect of methods

In order to summarize the results of the eight included studies, we grouped them according to their intervention comparisons and their outcomes.

Primary outcome

Combined outcome of critical and major monitoring findings

Five studies, of which three were randomized (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 ; START Monitoring Substudy:  Wyman 2020 ) and two used a matched‐pair design (TEMPER:  Stenning 2018b ;  Knott 2015 ), reported a combined monitoring outcome with four to six underlying error domains (e.g. eligibility violations). The ADAMON and OPTIMON studies defined findings as protocol and GCP violations that were not corrected or identified by the randomized monitoring strategy. The START Monitoring Substudy directly compared findings identified by the randomized monitoring strategies, without a subsequent evaluation of remaining findings not corrected by the monitoring intervention. The classification of findings into different severities comprised different categories with different denominations in three included studies (non‐conformity/major non‐conformity [ Journot 2017 ]; minor/major/critical [ Brosteanu 2017b ;  Stenning 2018b ]), but was consistent in the assessment of severity with regard to participants' rights and safety or the validity of study results. Only findings classified as major or critical (or both) were included in the primary comparison of monitoring strategies in the ADAMON and OPTIMON studies. The START Monitoring Substudy only assessed major violations, which constitute the highest severity of findings with regard to participants' rights and safety or the validity of study results. All three of these studies defined monitoring findings for the most critical aspects in the domains of consent violations, eligibility violations, SAE reporting violations, and errors in endpoint assessment. Since the START Monitoring Substudy focused on only one trial, its descriptions of critical aspects are very trial‐specific compared to the broader range of critical aspects considered in ADAMON and OPTIMON with a combined monitoring outcome. Critical and major findings were defined according to the classification of GCP findings described in  EMA 2017 . For detailed information about the classification of monitoring findings in the included studies, see the Additional tables.

1. Risk‐based monitoring versus extensive on‐site monitoring

ADAMON and OPTIMON evaluated the primary outcome as the remaining combined major and critical findings not corrected by the randomized monitoring strategy. Pooling the results of ADAMON and OPTIMON for the proportion of trial participants with at least one major or critical finding not corrected by the monitoring intervention resulted in a risk ratio of 1.03 with a 95% CI of 0.80 to 1.33 (a ratio below 1.0 would favor the risk‐based strategy; Analysis 1.1; Figure 4). However, the START Monitoring Substudy evaluated the primary outcome of combined major and critical findings as a direct comparison of monitoring findings during trial conduct, and its comparison of monitoring strategies differed from the one assessed in ADAMON and OPTIMON. Therefore, we did not include the START Monitoring Substudy in the pooled analysis, but report its results separately below.
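For readers unfamiliar with the pooling step, the following Python sketch shows the mechanics of an inverse‐variance meta‐analysis of two risk ratios. The 2×2 counts are hypothetical placeholders (this section reports proportions, not the per‐study denominators used here), so the output illustrates the method rather than reproducing the published estimate of 1.03.

```python
# Inverse-variance pooling of two risk ratios; the 2x2 counts below are
# HYPOTHETICAL and only illustrate the mechanics of the pooled estimate.
import math

def log_rr_and_se(events_a, n_a, events_b, n_b):
    """Log risk ratio (intervention vs control) and its standard error."""
    log_rr = math.log((events_a / n_a) / (events_b / n_b))
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return log_rr, se

# Hypothetical counts loosely shaped like ADAMON (59.2% vs 64.2%) and
# OPTIMON (40% vs 34%); the real denominators are not given in this section.
studies = [log_rr_and_se(148, 250, 161, 251),
           log_rr_and_se(60, 150, 51, 150)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * lrr for w, (lrr, _) in zip(weights, studies)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```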

Figure 4. Forest plot of comparison 1: risk‐based versus on‐site monitoring – combined primary outcome; outcome 1.1: combined outcome of critical and major monitoring findings.

Analysis 1.1. Comparison 1: risk‐based versus on‐site monitoring – combined primary outcome; outcome 1: combined outcome of critical and major monitoring findings.

In the ADAMON study, 59.2% of participants in the risk‐based monitoring group had at least one major finding not corrected by the randomized monitoring strategy, compared with 64.2% of participants in the 100% on‐site group ( Brosteanu 2017b ). The analysis of the composite monitoring outcome in the ADAMON study, using a random‐effects logistic regression with sites as random effects to account for clustering, provided evidence of non‐inferiority (point estimates near zero on the logit scale, with all two‐sided 95% CIs clearly excluding the prespecified tolerance limit) ( Brosteanu 2017a ).

The OPTIMON study reported the proportions of participants without major monitoring findings ( Journot 2017 ). Expressed as proportions of participants with major monitoring findings, 40% of participants in the risk‐adapted monitoring group had a finding not identified by the randomized monitoring strategy compared to 34% in the 100% on‐site group. Analysis of the composite primary outcome with a generalized estimating equation (GEE) logistic model estimated a relative difference between strategies of 8% in favor of the 100% on‐site strategy. Since the upper one‐sided confidence limit of this difference was 22%, non‐inferiority within the prespecified margin of 11% could not be demonstrated.
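The non‐inferiority logic here reduces to comparing the upper one‐sided confidence limit with the prespecified margin. A minimal sketch, using only the figures quoted in the text (the underlying GEE model itself is not reproduced):

```python
# OPTIMON-style non-inferiority check: non-inferiority of the risk-adapted
# strategy is demonstrated only if the entire one-sided confidence region
# for the between-strategy difference lies below the prespecified margin.
estimated_difference = 0.08  # 8% relative difference favoring 100% on-site
upper_one_sided_cl = 0.22    # upper one-sided confidence limit (GEE model)
margin = 0.11                # prespecified non-inferiority margin

non_inferior = upper_one_sided_cl < margin
print(f"non-inferiority demonstrated: {non_inferior}")  # False: 0.22 >= 0.11
```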

Two studies used a matched comparator design ( Knott 2015 ;  Stenning 2018b ). In these strategies, on‐site visits were triggered when prespecified trigger thresholds were exceeded. The studies reported the number of triggered sites with monitoring findings versus the number of matched control sites with monitoring findings.

We pooled these two studies for the primary combined outcome of major and critical monitoring findings including all error domains ( Analysis 3.1 ;  Figure 5 ) and also after excluding re‐consent for the TEMPER study ( Analysis 4.1 ;  Figure 6 ). Excluding the error domain "re‐consent" gave a risk ratio of 2.04 (95% CI 0.77 to 5.38) in favor of triggered monitoring, while including re‐consent findings gave a risk ratio of 1.83 (95% CI 0.51 to 6.55), also in favor of the triggered monitoring intervention. These results provide some evidence that the trigger process was effective in guiding on‐site monitoring, but the differences were not statistically significant.

Figure 5. Forest plot of comparison 3: triggered versus untriggered on‐site monitoring; outcome 3.1: sites with one or more major monitoring finding (combined outcome).

Figure 6. Forest plot of comparison 4: sensitivity analysis of triggered versus untriggered on‐site monitoring (sensitivity outcome TEMPER); outcome 4.1: sites with one or more major monitoring finding, excluding re‐consent.

Analysis 3.1. Comparison 3: triggered versus untriggered on‐site monitoring; outcome 1: sites with ≥ 1 major monitoring finding (combined outcome).

Analysis 4.1. Comparison 4: sensitivity analysis of triggered versus untriggered on‐site monitoring (sensitivity outcome TEMPER); outcome 1: sites with ≥ 1 major monitoring finding, excluding re‐consent.

In the study conducted by Knott and colleagues, 21 sites received an on‐site visit (12 identified by central statistical monitoring and nine others as comparators); 11 of the 12 sites identified by central statistical monitoring had one or more major or critical monitoring finding (92%), while only two of the nine comparator sites (22%) had such a finding ( Knott 2015 ). The difference in the proportions of sites with at least one major or critical monitoring finding was therefore 70%. Minor findings indicative of 'sloppy practice' were identified at 10 of 12 sites in the triggered group and at two of nine in the comparator group. At one site identified by central statistical monitoring, there were serious findings indicative of an underperforming site. These results suggest that information from central statistical monitoring can help focus the nature of on‐site visits and any interventions required to improve site quality.

The TEMPER study identified one or more major or critical finding, not already identified through central monitoring or a previous visit, at 37 of 42 (88.1%) triggered sites and at 34 of 42 (81.0%) matched untriggered sites (difference 7.1%, 95% CI –8.3% to 22.5%; P = 0.365) ( Stenning 2018b ). More than 70% of on‐site findings related to issues in recording informed consent, and 70% of these related to re‐consent. Thus, in the primary analysis, triggered monitoring in the TEMPER study did not satisfactorily distinguish sites with higher and lower levels of concerning on‐site monitoring findings. However, the prespecified sensitivity analysis excluding re‐consent findings demonstrated a clear difference in event rates: 85.7% for triggered sites versus 59.5% for untriggered sites (difference 26.2%, 95% CI 8.0% to 44.4%; P = 0.007). There was greater consistency between trials in the sensitivity and secondary analyses. In addition, there was some evidence that the trigger process could identify sites at increased risk of serious concern: around twice as many triggered visits had one or more critical finding in both the primary and sensitivity analyses.
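As a rough check of the TEMPER site‐level differences, the sketch below recomputes them with a simple unpaired Wald interval, which happens to reproduce the reported values closely. TEMPER's matched design may have been analyzed with a method that accounts for pairing, so this is an approximation, not the study's own analysis.

```python
# Difference in proportions of sites with >= 1 major/critical finding,
# with an unpaired Wald 95% CI (an approximation; TEMPER matched sites).
import math

def diff_in_proportions(x1, x2, n, z=1.96):
    p1, p2 = x1 / n, x2 / n
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Primary outcome: 37/42 triggered vs 34/42 untriggered sites
print(["%.1f%%" % (100 * v) for v in diff_in_proportions(37, 34, 42)])
# -> ['7.1%', '-8.3%', '22.5%'], matching the reported values

# Sensitivity analysis (re-consent excluded): 36/42 vs 25/42
print(["%.1f%%" % (100 * v) for v in diff_in_proportions(36, 25, 42)])
# -> ['26.2%', '8.0%', '44.4%']
```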

The START Monitoring Substudy ( Wyman 2020 ), with 196 sites in a single large international trial, reported a higher proportion of participants with a monitoring finding detected in the on‐site monitoring group (6.4%) compared to the group with only central and local monitoring (3.8%), resulting in an odds ratio (OR) of 1.7 (95% CI 1.1 to 2.7; P = 0.03) ( Wyman Engen 2020 ). However, it was not clearly reported whether the findings within the groups were identified on‐site (on‐site visit or local monitoring) or by central monitoring, and it was not verified whether central and local monitoring alone would have failed to detect any violations or discrepancies within sites randomized to the intervention group. In addition, relatively few monitoring findings that would have impacted START results were identified by on‐site monitoring (no findings of participants who were inadequately consented, and no findings of data alteration or fraud).

The two studies of targeted (MONITORING: Fougerou‐Leurent 2019) and remote ( Mealer 2013 ) SDV reported findings related only to source documents. Different components of source data were assessed, including consent verification as well as key data, but findings were reported only as a combined outcome. Both studies identified minimal relative differences in the parameters used to assess the effectiveness of these methods compared with full SDV. Both studies assessed SDV only as the process of double‐checking that the same piece of information appeared in the study database and in the source documents. Processes often referred to as source data review, which confirm that trial conduct complies with the protocol and GCP and that appropriate regulatory requirements have been followed, were not included as study outcomes.

The prospective cross‐over MONITORING study compared the databases after full SDV and after targeted SDV, following the data management process, and identified an overall error rate of 1.47% (95% CI 1.41% to 1.53%) and an error rate of 0.78% (95% CI 0.65% to 0.91%) on key data ( Fougerou‐Leurent 2019 ). The majority of these discrepancies, considered as the errors remaining with targeted monitoring, were observed on baseline prognostic variables. The researchers further assessed the impact of the two monitoring strategies on data‐management workload. While the overall number of queries was larger with targeted SDV, there was no statistical difference for queries related to key data (13 [standard deviation (SD) 16] versus 5 [SD 6]; P = 0.15), and targeted SDV generated fewer corrections on key data in the data‐management step. Considering the increased data‐management workload, at least in the early setup phase of a targeted SDV strategy, monitoring and data management should be viewed as a whole in terms of efficiency.

The pilot study conducted by Mealer and colleagues assessed the feasibility of remote SDV in two clinical trial networks ( Mealer 2013 ). The accuracy and completeness of remote versus on‐site SDV were determined by comparing the data values classified after remote SDV as identical to, different from, missing from, or unknown in the source data against all data values identified via subsequent on‐site monitoring. The percentage of data values that could not be identified or were missed via remote access was compared with direct on‐site monitoring in another group of participants. In the adult network, only 0.47% (95% CI 0.03% to 0.79%) of all data values assigned to monitoring could not be correctly identified via remote monitoring, and in the ChiLDReN network, all data values were correctly identified. In comparison, three data values could not be identified in the on‐site‐only group (0.13%, 95% CI 0.03% to 0.37%). In summary, 99.5% of all data values were correctly identified via remote monitoring. Information on the difference in monitoring findings between the two SDV methods was not reported in the publication. The study showed that remote SDV was feasible despite marked differences in remote access and remote chart review policies and technologies.
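Confidence intervals for such small error rates are typically exact (Clopper‐Pearson) binomial intervals. The sketch below recomputes the on‐site group's interval under the assumption that the denominator was roughly 2,308 data values, back‐calculated from the reported 3 values at 0.13%; the publication's exact denominator is not restated here.

```python
# Exact (Clopper-Pearson) binomial CI for a small error rate, using an
# ASSUMED denominator of 2308 inferred from "three data values ... (0.13%)".
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(3, 2308)
print(f"rate {3 / 2308:.2%}, 95% CI {lo:.2%} to {hi:.2%}")
# -> roughly 0.13%, 0.03% to 0.38%, close to the reported 0.03% to 0.37%
```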

5. On‐site initiation visit versus no on‐site initiation visit

There were no data on critical and major findings in  Liènard 2006 .

Individual components of the primary outcome

The included studies reported the following results for individual components (error domains) of the primary outcome.

In the ADAMON study, there was non‐inferiority for all five error‐domain components of the combined primary outcome: informed consent process, patient eligibility, intervention, endpoint assessment, and SAE reporting ( Brosteanu 2017a ). In the OPTIMON study, the biggest difference between monitoring strategies was observed for findings related to eligibility violations (major non‐conformity in the eligibility error domain in 12% of participants in the risk‐adapted group versus 6% in the extensive on‐site group), while remaining findings related to informed consent were higher in the extensive on‐site monitoring group (major non‐conformity in the informed consent error domain in 7% of participants in the risk‐adapted group versus 10% in the extensive on‐site group). In the OPTIMON study, the consent form signature was checked remotely in the risk‐adapted strategy, using a modified consent form and a validated specific procedure ( Journot 2013 ). To summarize the domain‐specific monitoring outcomes of the ADAMON and OPTIMON studies, we analyzed the results of both studies within the four common error domains ( Analysis 2.1 , including unpublished results from OPTIMON). Pooling the results of the four common error domains (informed consent process, patient eligibility, endpoint assessment, and SAE reporting) resulted in a risk ratio of 0.95 (95% CI 0.81 to 1.13) in favor of the risk‐based monitoring intervention ( Figure 7 ).

Figure 7. Forest plot of comparison 2: risk‐based versus on‐site monitoring – error domains of major findings; outcome 2.1: combined outcome of major or critical findings in four error domains.

Analysis 2.1. Comparison 2: risk‐based versus on‐site monitoring – error domains of major findings; outcome 1: combined outcome of critical and major findings in four error domains.

In TEMPER, informed consent violations were more frequently identified by a full on‐site monitoring strategy ( Stenning 2018b ). During the study, but prior to the first analysis, the TEMPER Endpoint Review Committee recommended a sensitivity analysis excluding all findings related to re‐consent, because re‐consent typically concerned minor changes in the adverse effect profile that could have been conveyed to participants without requiring re‐consent. Excluding re‐consent findings, to evaluate the ability of the applied triggers to identify sites at higher risk of critical on‐site findings, resulted in a significant difference of 26.2% (95% CI 8.0% to 44.4%; P = 0.007). Excluding all consent findings also resulted in a significant difference of 23.8% (95% CI 3.3% to 44.4%; P = 0.027).

There were no data on individual components of critical and major findings in  Knott 2015 .

In the START Monitoring Substudy, informed consent violations accounted for most of the primary monitoring outcomes in each group (41 [1.8%] participants in the no on‐site group versus 56 [2.7%] participants in the on‐site group), with an OR of 1.3 (95% CI 0.6 to 2.7; P = 0.46) ( Wyman 2020 ). The most common consent violation was a missing signature page for the most recently signed consent form, and surveillance of these consent violations by on‐site monitors varied. Within the substudy, the investigators had to modify the primary outcome component for consent violations prior to the outcome assessment in February 2016 because documentation and ascertainment of consent violations were not consistent across sites. These inconsistencies and the variation between sites could therefore have influenced the results of this primary outcome component. In addition, the follow‐up on consent violations by the co‐ordinating centers identified no individuals who had not been properly consented. The largest relative difference was for findings related to eligibility (1 [0.04%] participant in the no on‐site group versus 12 [0.6%] participants in the on‐site group; OR 12.2, 95% CI 1.8 to 85.2; P = 0.01), but 38% of eligibility violations were first identified by site staff. In addition, a relative difference was reported for SAE reporting (OR 2.0, 95% CI 1.1 to 3.7; P = 0.02), while the differences for the error domains of primary endpoint reporting (OR 1.5, 95% CI 0.7 to 3.0; P = 0.27) and protocol violation of prescribing initial antiretroviral therapy not permitted by START (OR 1.4, 95% CI 0.6 to 3.4; P = 0.47), as well as for the informed consent domain, were small.
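The very wide interval for the eligibility domain follows directly from having a single event in one arm. A crude Wald calculation on the log‐odds scale, with hypothetical arm sizes of about 2,250 participants each (the substudy's exact denominators are not restated here, and the published estimates may also adjust for site‐level clustering), lands in the same ballpark:

```python
# Crude odds ratio with a Wald 95% CI on the log-odds scale. Arm sizes are
# HYPOTHETICAL; with only one control event the CI is necessarily very wide.
import math

def odds_ratio_ci(events1, n1, events2, n2, z=1.96):
    or_ = (events1 / (n1 - events1)) / (events2 / (n2 - events2))
    se_log = math.sqrt(1 / events1 + 1 / (n1 - events1)
                       + 1 / events2 + 1 / (n2 - events2))
    return or_, or_ * math.exp(-z * se_log), or_ * math.exp(z * se_log)

# Eligibility domain: 12 events (on-site) vs 1 event (no on-site)
or_, lo, hi = odds_ratio_ci(12, 2250, 1, 2250)
print(f"OR {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # ~12 (about 1.6 to 93)
```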

There were no data on individual components of critical and major findings in MONITORING ( Fougerou‐Leurent 2019 ) or  Mealer 2013 .

There were no data on individual components of critical and major findings in  Liènard 2006 .

Impact of the monitoring strategy on participant recruitment and follow‐up

Only two included studies reported participant recruitment and follow‐up as an outcome for the evaluation of different monitoring strategies ( Liènard 2006 ; START Monitoring Substudy:  Wyman 2020 ).

Liènard 2006  assessed the impact of their monitoring approaches on participant recruitment and follow‐up in their primary outcomes. Centers were randomized to receive an on‐site initiation visit by monitors or no visit. There was no statistical difference in the number of recruited participants between the two groups (302 participants in the on‐site group versus 271 in the no on‐site group), and no impact of monitoring visits on recruitment categories (poor, average, good, and excellent). About 80% of participants were recruited in only 30 of 135 centers, and almost 62% in the 17 'excellent recruiters'. The duration of follow‐up at the time of analysis did not differ significantly between the randomized groups. However, the proportion of participants with no follow‐up at all was larger in the visited group than in the non‐visited group (82% in the on‐site group versus 70% in the no on‐site group).

Within the START Monitoring Substudy, central monitoring reports included tracking of losses to follow‐up ( Wyman 2020 ). Losses to follow‐up were similar between groups (proportion of participants lost to follow‐up: 7.1% in the on‐site group versus 8.6% in the no on‐site group; OR 0.8, 95% CI 0.5 to 1.1), and a similar percentage of study visits were missed by participants in each monitoring group (8.6% in the on‐site group versus 7.8% in the no on‐site group).

Effect of monitoring strategies on resource use (costs)

Five studies provided data on resource use.

The ADAMON study reported that, with extensive on‐site monitoring, the number of monitoring visits per participant and the cumulative monitoring time on‐site were higher than with risk‐adapted monitoring by factors of 2.1 (monitoring visits) and 2.7 (cumulative monitoring time), with the ratios of effort calculated within each trial and summarized with the geometric mean ( Brosteanu 2017b ). This difference was more pronounced in the lowest risk category, with monitoring visits per participant higher by a factor of 3.5 and cumulative monitoring time on‐site higher by a factor of 5.2. In the medium‐risk category, the number of monitoring visits per participant was higher by a factor of 1.8 and the cumulative monitoring time on‐site by a factor of 2.1 for the extensive on‐site group compared to the risk‐based monitoring group.
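Summarizing per‐trial effort ratios with a geometric mean, as ADAMON did, simply averages the ratios on the log scale. A minimal sketch with hypothetical per‐trial ratios (the individual trial ratios are not reported here):

```python
# Geometric mean of per-trial effort ratios (extensive on-site / risk-adapted).
# The four ratios below are HYPOTHETICAL placeholders for ADAMON's trials.
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

visit_ratios = [1.6, 2.0, 2.3, 2.7]
print(f"{geometric_mean(visit_ratios):.1f}")  # ~2.1, the scale of the reported factor
```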

In the OPTIMON study, travel costs were calculated based on distance, and on‐site visits were assumed to require two days of one monitor's time, resulting in monitoring costs of EUR 180 per visit ( Journot 2017 ). The costs were higher by a factor of 2.7 for the 100% on‐site strategy when considering travel costs only, and by a factor of 3.4 when considering travel and monitor costs.

There were no data on resource use from TEMPER ( Stenning 2018b ) or  Knott 2015 .

In the START Monitoring Substudy, the economic consequence of adding on‐site monitoring to local and central monitoring was assessed via the person‐hours that on‐site monitors and co‐ordinating centers spent on on‐site monitoring‐related activities, estimated at 16,599 person‐hours ( Wyman 2020 ). With a salary allocation of USD 75 per hour for on‐site monitors, this equated to USD 1,244,925. Adding the USD 790,467 of international travel costs allocated for START monitoring gave a total of USD 2,035,392 attributed to on‐site monitoring. It should also be considered that there were four additional for‐cause visits in the on‐site group and six for‐cause visits in the no on‐site group.
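The cost arithmetic quoted above can be checked directly; the sketch below simply reproduces it from the figures in the text.

```python
# START Monitoring Substudy cost arithmetic, using the figures from the text.
person_hours = 16_599      # on-site monitoring-related person-hours
hourly_rate_usd = 75       # salary allocation per monitor-hour
travel_usd = 790_467       # allocated international travel costs

salary_usd = person_hours * hourly_rate_usd   # 1,244,925
total_usd = salary_usd + travel_usd           # 2,035,392
print(f"salary USD {salary_usd:,}; total USD {total_usd:,}")
```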

For the MONITORING study, economic data were assessed as the time spent on SDV and data management with each strategy ( Fougerou‐Leurent 2019 ). A query was estimated to take 20 minutes of a data manager's time and 10 minutes of the clinical study co‐ordinator's time. Across the six studies, the clinical research associate devoted 140 hours to targeted SDV versus 317 hours to full SDV. However, targeted SDV generated 587 additional queries across the studies, ranging from less than one (0.3) to more than eight additional queries per participant, depending on the study. Based on the estimate of 30 minutes for handling a single query, these additional queries resulted in 294 hours of extra time spent (mean 2.4 [SD 1.7] hours per participant).

For the cost analysis, hourly costs were estimated at EUR 33.00 for a clinical research associate and EUR 30.50 each for a data manager and a clinical study co‐ordinator. Based on these estimates, the targeted SDV strategy saved EUR 5841 on monitoring but added EUR 8922 in query‐related costs, amounting to a net extra cost of EUR 3081.
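Most of these figures can be reconstructed from the stated inputs. In the sketch below, the monitoring saving reproduces the published EUR 5841 exactly, while the query cost lands about EUR 30 above the published EUR 8922, presumably because of rounding in the published inputs.

```python
# Back-of-envelope reconstruction of the MONITORING cost analysis.
cra_rate_eur = 33.00                  # clinical research associate, per hour
dm_rate_eur = coord_rate_eur = 30.50  # data manager / study co-ordinator, per hour

saving_eur = (317 - 140) * cra_rate_eur                            # 5841.0, exact
per_query_eur = dm_rate_eur * 20 / 60 + coord_rate_eur * 10 / 60   # 15.25
query_cost_eur = 587 * per_query_eur                   # ~8952 vs published 8922
print(f"saving {saving_eur:.0f}, query cost {query_cost_eur:.0f}, "
      f"net extra {query_cost_eur - saving_eur:.0f}")
```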

The study on remote SDV by  Mealer 2013  compared only the time consumed per data item and per case report form in the two included networks. Although the difference per data item between the two strategies was not relevant (less than 30 seconds), more time was spent with remote SDV. However, this study did not consider travel time for monitors, and delayed access and longer response times in communication with study co‐ordinators affected the overall time spent. The authors proposed SOPs for prescheduling telephone slots to review questions and the introduction of a single electronic health record.

For both of the introduced SDV monitoring strategies, growing experience with the new methods would most likely translate into improved efficiency, making it difficult to estimate long‐term resource use from these initial studies. For the risk‐based strategy in the OPTIMON study, a remote pre‐enrollment check of consent forms was a good preventive measure and improved the quality of consent forms (80% of non‐conformities were identified via remote checking). In general, remote SDV may reduce the frequency of on‐site visits or influence their timing, ultimately decreasing the resources needed for on‐site monitoring.

There were no data on resource use from  Liènard 2006 .

Qualitative research data or process evaluations of the monitoring interventions

The  Mealer 2013  pilot study of traditional 100% SDV versus remote SDV provided some qualitative information. This came from an informal post‐study interview of the study monitors and site co‐ordinators. These interviews revealed a high level of satisfaction with the remote monitoring process. None of the study monitors reported any difficulty with using the different electronic access methods and data review applications.

The secondary analyses of the TEMPER study assessed the ability of individual triggers and site characteristics to predict on‐site findings by comparing the proportion of visits with the outcome of interest (one or more major or critical findings) between triggered and regular (untriggered) on‐site visits ( Stenning 2018b ). This analysis also considered information of potential prognostic value obtained from questionnaires completed by the trials unit and site staff prior to the monitoring visits. Trials unit teams completed 90 of 94 pre‐visit questionnaires. There was no clear evidence of a linear relationship between the trial team ratings and the presence of major or critical findings, whether consent findings were included or excluded (data not shown). A total of 76 of 94 sites provided pre‐visit site questionnaires. There was no evidence of a linear association between the chance of one or more major or critical findings and the number of active trials, either per site or per staff member (data not shown). There was, however, evidence that the greater the number of different trial roles undertaken by the research nurse, the lower the probability of major or critical findings (proportion of visits with one or more major or critical finding, excluding re‐consent findings, by number of research nurse roles [grouped]: less than 3 roles: 94%; 4: 94%; 5: 80%; 6: 48%; P < 0.001 from a chi‐squared test for linear trend) ( Stenning 2018b , Online Supplementary Material Table S5).
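The trend test reported here is typically a Cochran‐Armitage‐style chi‐squared test for linear trend in proportions. The sketch below implements it; only the proportions come from the text, while the per‐group visit counts are hypothetical, so the P value merely illustrates the order of magnitude.

```python
# Cochran-Armitage test for linear trend in proportions across ordered groups.
# Group sizes are HYPOTHETICAL; only the proportions mirror the TEMPER data.
import math
from scipy.stats import norm

scores = [3, 4, 5, 6]        # number of research-nurse roles (grouped)
n =      [17, 17, 20, 21]    # hypothetical visits per group
events = [16, 16, 16, 10]    # ~94%, 94%, 80%, 48% with >= 1 finding

N, R = sum(n), sum(events)
p_bar = R / N
t = sum(s * (r - ni * p_bar) for s, ni, r in zip(scores, n, events))
var_t = p_bar * (1 - p_bar) * (sum(ni * s * s for ni, s in zip(n, scores))
                               - sum(ni * s for ni, s in zip(n, scores)) ** 2 / N)
z = t / math.sqrt(var_t)
print(f"z = {z:.2f}, two-sided P = {2 * norm.sf(abs(z)):.4f}")  # P < 0.001
```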

Summary of main results

We identified eight studies that prospectively compared different monitoring interventions in clinical trials. These studies were heterogeneous in design and content, and covered different aspects of new monitoring approaches. We identified no ongoing eligible studies.

Two large studies compared risk‐based versus extensive on‐site monitoring (ADAMON:  Brosteanu 2017b ; OPTIMON:  Journot 2017 ), and the pooled results provided no evidence of inferiority of a risk‐based monitoring intervention in terms of major and critical findings, based on moderate certainty of evidence ( Table 1 ). However, a formal demonstration of non‐inferiority would require more studies.

Considering the commonly reported error domains of monitoring findings (informed consent, eligibility, endpoint assessment, SAE reporting), we found no evidence of inferiority of a risk‐based monitoring approach in any of the error domains except eligibility. However, the CIs were wide. Verifying a participant's eligibility usually requires extensive SDV, which might explain the potential difference in this error domain. We found a similar trend in the START Monitoring Substudy for the eligibility error domain. Expanding processes for remote SDV may improve the performance of monitoring strategies with a larger proportion of central and remote monitoring components. The OPTIMON study used an established process to remotely verify the informed consent process ( Journot 2013 ), which was shown to be efficient in reducing non‐conformities related to informed consent. A similar remote approach to SDV related to eligibility before randomization might improve the performance of risk‐based monitoring interventions in this domain.

In the TEMPER study ( Stenning 2018b ) and the START Monitoring Substudy ( Wyman 2020 ), most findings related to documenting the consent process. However, in the START Monitoring Substudy, there were no findings of participants whose consent process was inadequate and, in the ADAMON and OPTIMON studies, findings in the informed consent process were lower in the risk‐adapted groups. Timely central monitoring of consent forms and eligibility documents with adequate anonymization ( Journot 2013 ) may mitigate the effects of many consent form completion errors and identify eligibility violations prior to randomization. This is also supported by the recently published further analysis of the TEMPER study ( Cragg 2021a ), which suggested that most visit findings (98%), notably all findings relating to initial informed consent forms, were theoretically detectable or preventable through feasible, centralized processes, making it possible to prevent patients from starting treatment if issues arise.  Mealer 2013  assessed a remote process for SDV and found it to be feasible. Data values were reviewed to confirm eligibility and proper informed consent, to validate that all adverse events were reported, and to verify data values for primary and secondary outcomes. Almost all (99.6%) data values were correctly identified via remote monitoring at five different trial sites despite marked differences in remote access and remote chart review policies and technologies. In the MONITORING study, the number of errors remaining after targeted SDV (verified by full SDV) was very small for the overall data and even smaller for key data items ( Fougerou‐Leurent 2019 ). These results provide evidence that new concepts in the process of SDV do not necessarily decrease data quality or endanger patients' rights and safety. Processes involved in on‐site visits, often referred to as source data review, which confirm that trial conduct complies with the protocol and GCP and that appropriate regulatory requirements have been followed, have to be assessed separately. Evidence from retrospective studies evaluating SDV suggests that intensive SDV is often of little benefit to clinical trials, with any discrepancies found having minimal impact on the robustness of trial conclusions ( Andersen 2015 ;  Olsen 2016 ;  Tantsyura 2015 ;  Tudur Smith 2012a ).

Furthermore, we found evidence that central monitoring can guide on‐site monitoring of trial sites via triggers. The prespecified sensitivity analysis of the TEMPER results excluding re‐consent findings ( Stenning 2018b ) and the results from  Knott 2015  suggested that triggers from a central monitoring process can identify sites at higher risk of major GCP violations. However, the triggers used in TEMPER may not have been ideal for all included trials, and some tested triggers seemed to have no prognostic value. Additional work is needed to identify more discriminatory triggers and should encompass work on key performance indicators ( Gough 2016 ) and central statistical monitoring ( Venet 2012 ). Since  Knott 2015  focused on only one study, its triggers were more trial‐specific than those used in TEMPER. Developing trial‐specific triggers may thus lead to even more efficient triggers for on‐site monitoring. This may help to distinguish low‐performing from high‐performing sites and guide monitors to the most urgent problems within an identified site. Study‐specific triggers could even provoke specific monitoring activities (e.g. staff turnover could indicate a need for additional training, or data quality issues could trigger SDV activities). Central review of information across sites and over time would help direct on‐site resources to targeted SDV and to activities best performed in person, for example, process review or training. We found no evidence that the addition of untriggered on‐site monitoring to central statistical monitoring, as assessed in the START Monitoring Substudy, had a major impact on trial results or on participants' rights and safety ( Wyman 2020 ). In addition, there was no evidence that the no on‐site group was inferior in the study‐specific secondary outcomes, including the percentage of participants lost to follow‐up and timely data submission and query resolution, and the absolute number of monitoring outcomes in the START Monitoring Substudy was very low ( Wyman 2020 ). This might be due to the study‐specific definition of critical and major findings in the monitoring plan and the presence of an established central monitoring system in both intervention groups of the study.

With respect to resource use, both studies evaluating a risk‐based monitoring approach showed that considerable resources could be saved with risk‐based monitoring (by a factor of three to five;  Brosteanu 2017b ;  Journot 2017 ). However, the potential increase in resource use at the co‐ordinating centers (including data management) was not considered in any of the analyses. The START Monitoring Substudy reported more than USD 2,000,000 for on‐site monitoring, taking into account the monitoring hours as well as the international travel costs ( Wyman 2020 ). In both groups, central monitoring and local monitoring by site staff were performed to an equal extent, suggesting no difference in the resources consumed by data management. The MONITORING study reported a reduction in the cost of on‐site monitoring with the targeted SDV approach, but this was offset by an increase in data management resources due to queries ( Fougerou‐Leurent 2019 ). This increase may to some degree be due to site staff's and trial monitors' inexperience with the new approach. There was no statistical difference in the number of queries related to key data between targeted SDV and full SDV. When an infrastructure for centralized monitoring and remote data checks is already established, a larger difference between the resources spent on risk‐based and on extensive on‐site monitoring would be expected. Setting up the infrastructure for automated checks, remote processes, and other data management structures, as well as training monitors and data managers in a new monitoring strategy, requires an upfront investment.

Only two studies assessed the impact of different monitoring strategies on recruitment and follow‐up. This is an important outcome for monitoring interventions because it is crucial to the successful completion of a clinical trial ( Houghton 2020 ). The START Monitoring Substudy found no significant difference in the percentage of participants lost to follow‐up between the on‐site and no on‐site groups ( Wyman 2020 ). Likewise, on‐site initiation visits had no effect on participant recruitment in  Liènard 2006 . Closely monitoring site performance in terms of recruitment and losses to follow‐up could enable early action to support affected sites. Secondary qualitative analyses of the TEMPER study revealed that the experience of the research nurse had an impact on monitoring outcomes ( Stenning 2018b ). The experience of the study team and site staff might therefore be an important factor to consider in the risk assessment of a study or in the prioritization of on‐site visits.

Overall completeness and applicability of evidence

Although we searched extensively for eligible studies, we found only one or two studies for specific comparisons of monitoring strategies. This very limited evidence base stands in stark contrast to the number of clinical trials run each year, each of which needs to perform monitoring in some form. None of the included studies reported on all primary and secondary outcomes specified for this review, and most reported only a few. For instance, only one study reported on participant recruitment ( Liènard 2006 ), and only two studies reported on participant retention ( Liènard 2006 ;  Wyman 2020 ). Some monitoring comparisons were nested in a single clinical trial, limiting the generalizability of the results (e.g.  Knott 2015 ; START Monitoring Substudy:  Wyman 2020 ). However, the OPTIMON ( Journot 2017 ) and ADAMON ( Brosteanu 2017b ) studies included multiple, heterogeneous clinical trials in their comparison of risk‐based and extensive on‐site monitoring strategies, increasing the generalizability of their results. The risk assessments of the ADAMON and OPTIMON studies differed in certain aspects ( Table 7 ), but the main concept of categorizing studies according to their evaluated risk and adapting the monitoring requirements to the risk category was very similar. The much lower number of overall monitoring findings in the START study (based on one clinical trial only) compared with OPTIMON or ADAMON (involving multiple clinical trials) suggests that the trial context is crucial with respect to monitoring findings. Violations considered in the primary outcome of the START Monitoring Substudy were tailored to issues that could impact the validity of the trial's results or the safety of study participants. Such a definition, focused on the most critical aspects of a study that should be monitored closely, is often missing in extensive monitoring plans, which leaves some margin of interpretation to study monitors.

The TEMPER study introduced triggers that could direct on‐site monitoring and evaluated the prognostic value of these triggers ( Stenning 2018b ). Only three of the proposed triggers showed a significant prognostic impact across all three included trials. A set of triggers or site performance measures that reliably indicate the need for additional support across a wide range of clinical trials is yet to be determined, and trigger refinement is still ongoing. Triggers will always depend to some degree on the specific risks determined by the study procedures, management structure, and design of the study at hand. A combination of performance metrics appropriate for a large group of trials and study‐specific performance measures might be most effective. Multinational, multicenter trials might benefit the most from directing on‐site monitoring to sites that show low performance quality. More studies in trials with large numbers of participants and sites, and in trials covering diverse geographic areas, are needed to assess the value of centralized monitoring in identifying the sites where additional support in terms of training is needed most. This would lead to a more 'needs‐oriented' approach, so that clinical routine and study processes at well‐performing sites are not unnecessarily interrupted. An overview of the progress of the ongoing trial in terms of site performance and other aspects, such as recruitment and retention, would also support the complex management processes of trial conduct in these large trials.

Since this review focused on prospective comparisons of monitoring interventions, evidence from retrospective studies and reports from implementation studies is not included in the above results but is discussed below. We excluded retrospective studies because their data were collected before the analysis was planned, making standardization of the extracted data impossible, especially for our primary outcome. However, trending analyses provide valuable information on outcomes such as improved data quality, recruitment, and follow‐up compliance, and thus demonstrate the effect of monitoring approaches on overall trial conduct and the success of the study. We considered the results from retrospective studies in our discussion of monitoring strategies but also point out the need to establish more studies within a trial (SWATs) to prospectively compare methods with a predefined mode of analysis.

Quality of the evidence

Overall, the certainty of this body of evidence on monitoring strategies for clinical intervention studies was low or very low for most comparisons and outcomes ( Table 1 ;  Table 2 ;  Table 3 ;  Table 4 ;  Table 5 ). This was mainly due to imprecision of effect estimates because of small numbers of observations, and to indirectness because some comparisons were based on only one study nested in a single trial. The included studies varied considerably in the outcomes they reported, with most studies reporting only some. In addition, the risk of bias varied across studies. A risk of performance bias was attributed to six of the included studies and was unclear in two. Since it was difficult to blind monitors to the different monitoring interventions, an influence of the monitors' performance on the monitoring outcomes could not be excluded in these studies. Two studies were at high risk of bias because of their non‐randomized design ( Knott 2015 ; TEMPER:  Stenning 2018b ). However, since the intervention determined the selection of sites for an on‐site visit in the triggered groups, a randomized design was not practicable. In addition, the TEMPER study attempted to balance groups by design and controlled the risk of known confounding factors by using a matching algorithm. Therefore, the judgment of high risk of bias for TEMPER ( Stenning 2018b ) and  Knott 2015  remains debatable. In the START Monitoring Substudy, no independent validation of remaining findings was performed after the monitoring intervention. It is therefore uncertain whether central monitoring without on‐site monitoring missed any major GCP violations, and chance findings cannot be ruled out. More evidence is needed to evaluate the value of on‐site initiation visits.  Liènard 2006  found no evidence that on‐site initiation visits affected participant recruitment, or data quality in terms of timeliness of data transfer and data queries. However, the informative value of the study was limited by its early termination and the small number of ongoing monitoring visits. In general, embedding methodology studies in clinical intervention trials provides valuable information for the improvement and adaptation of methodology guidelines and trial practice ( Bensaaud 2020 ;  Treweek 2018a ;  Treweek 2018b ). Whenever randomization is not practicable in a methodology substudy, attempting to follow a 'diagnostic study design' and minimize confounding factors as much as possible can increase the generalizability and impact of the study results.

Potential biases in the review process

We screened all potentially relevant abstracts and full‐text articles, assessed the risk of bias of included studies, and extracted information from included studies, each step independently and in duplicate. We did not calculate any agreement statistics, but all disagreements were resolved by discussion. We successfully contacted the authors of all included studies for additional information. Since we were unable to extract only the outcomes of the randomized trials included in the OPTIMON study ( Journot 2015 ), we used the available data, which covered mainly randomized trials but also a few cohort and cross‐sectional studies. The focus of this review was on monitoring strategies for clinical intervention studies, so including all studies from the OPTIMON study might introduce some bias. With regard to the pooling of study results, our judgment of heterogeneity might be debatable. The process of choosing comparator sites for triggered sites differed between the TEMPER study ( Stenning 2018b ) and  Knott 2015 . While both studies selected high‐scoring sites for triggered monitoring and low‐scoring sites as controls, the TEMPER study applied a matching algorithm to identify sites that resembled the high‐scoring sites in certain parameters. In  Knott 2015 , comparator sites from the same countries were identified by the country teams as potentially problematic among the low‐scoring sites, without pairwise matching to a high‐scoring site. However, the principle of choosing sites for evaluation based on results from central statistical monitoring closely resembled the methods used in the TEMPER study. Therefore, we decided to pool the results from TEMPER and  Knott 2015 .

Agreements and disagreements with other studies or reviews

Although available research comparing the effectiveness of risk‐based monitoring tools permits no definitive conclusions, the OECD advises clinical researchers to use risk‐based monitoring tools ( OECD 2013 ). They emphasized that risk‐based monitoring should become a more reactive process, in which the risk profile and performance are continuously reviewed during trial conduct and monitoring practices are modified accordingly. One systematic review of risk‐based monitoring tools for clinical trials by Hurley and colleagues summarized, by grouping common ideas, a variety of new risk‐based monitoring tools implemented in recent years ( Hurley 2016 ). They did not identify a standardized approach to the risk assessment process for a clinical trial among the 24 included risk‐based monitoring tools, although the process developed by TransCelerate BioPharma Inc. has been replicated by six other risk‐based monitoring tools ( TransCelerate BioPharma Inc 2014 ). Hurley and colleagues suggested that the responsiveness of a tool depends on its mode of administration (paper‐based, powered by Microsoft Excel, or operated as software as a service) and the degree of centralized monitoring involved ( Hurley 2016 ). An electronic data capture system is beneficial to the efficient performance of centralized monitoring. However, to support the reactive process of risk‐based monitoring, tools should be able to incorporate information on risks provided by the study monitors' on‐site experiences. This is in agreement with our findings that a risk‐based monitoring tool should support both on‐site and centralized monitoring and that assessments should be continuously reviewed during study conduct. Monitoring is most efficient when integrated into a risk‐based quality management system, as also discussed by Buyse and colleagues ( Buyse 2020 ), who emphasize a focus on trial aspects with a potentially high impact on patient safety and trial validity, and on systematic errors.

Of the five main comparisons that we identified through our review, four have also been assessed in available retrospective studies.

Risk‐based versus extensive on‐site monitoring: Kim and colleagues retrospectively reviewed three multicenter, investigator‐initiated trials that were monitored by a modified ADAMON method consisting of on‐site and central monitoring according to the risk of the trial ( Kim 2021 ). Central monitoring was more effective than on‐site monitoring in revealing minor errors and showed comparable results in revealing major issues such as investigational product compliance and delayed reporting of SAEs. The risk assessment evaluated by Higa and colleagues was based on the Risk Assessment Categorization Tool (RACT) originally developed by TransCelerate BioPharma Inc. ( TransCelerate BioPharma Inc 2014 ) and was continuously adapted during the study based on the results of centralized monitoring running in parallel with site (on‐site/off‐site) monitoring. The mean on‐site monitoring frequency decreased as the study progressed, and a Pharmaceutical and Medical Devices Agency inspection after study end found no significant non‐conformance that would have affected the study results or patient safety ( Higa 2020 ).

Central monitoring with triggered on‐site visits versus regular on‐site visits: several studies have assessed triggered monitoring approaches that depend on individual study risks in trending analyses of their effectiveness. Diani and colleagues evaluated the effectiveness of their risk‐based monitoring approach in clinical trials involving implantable cardiac medical devices ( Diani 2017 ). Their strategy included a data‐driven risk assessment methodology to target on‐site monitoring visits. They found significant improvement in data quality related to the three risk factors most critical to the overall compliance of cardiac rhythm management, along with improvement in a majority of measurable risk factors at the worst‐performing site quantiles. The methodology evaluated by Agrafiotis and colleagues is centered on quality by design, central monitoring, and triggered, adaptive on‐site and remote monitoring. The approach is based on a set of risk indicators that are selected and configured during the setup of each trial and are derived from various operational and clinical metrics. Scores from these indicators form the basis of an automated, data‐driven recommendation on whether to prioritize, increase, decrease, or maintain the level of monitoring intervention at each site. They assessed the trending impact of their new approach by retrospectively analyzing the change in risk level later in the trials. All 12 included trials showed a positive effect on risk level change, and the results were statistically significant in eight of them ( Agrafiotis 2018 ). The evaluation by Cragg and colleagues of a new trial management method for monitoring and managing data return rates in a multicenter phase III trial adds to the findings of increased efficiency through prioritizing sites for support ( Cragg 2019 ). Using an automated database report to summarize the data return rate, overall and per center, enabled early notification of centers whose data return rate appeared to be falling or had crossed the predefined acceptability threshold. Concentrating on the gradual improvement of centers with persistent data return problems resulted in an increase in the overall data return rate and in return rates above 80% in all centers. These results agree with the evidence we found for the effectiveness of the triggered monitoring approach evaluated in TEMPER ( Stenning 2018b ) and  Knott 2015 , and emphasize the need for study‐specific performance indicators. In addition, the data‐driven risk assessment implemented by  Diani 2017  highlighted key focus areas for both on‐site and centralized monitoring efforts and enabled an emphasis on site performance improvements where they were needed most. Our findings agree with retrospective assessments that focusing on the most critical aspects of a trial and guiding monitoring resources to trial sites in need of support may be an efficient way to improve overall trial conduct.

Central statistical versus on‐site monitoring: one retrospective analysis of the potential of central monitoring to completely replace on‐site monitoring performed by trial monitors showed that the majority of reviewed on‐site findings could be identified using central monitoring strategies ( Bakobaki 2012 ). One recent scoping review focused on methods used to identify sites 'of concern', at which monitoring activity may be targeted, and consequently sites 'not of concern', for which monitoring may be reduced or omitted ( Cragg 2021b ). It included all original reports describing, in a reproducible way, methods for using centrally held data to assess site‐level risk. Thus, in agreement with our research, they identified only one full report of a study ( Stenning 2018b ) that prospectively assessed a method's ability to target on‐site monitoring visits to the most problematic sites. However, by contacting the authors of  Knott 2015 , which is available only as an abstract, we gained more detailed information on the methodology of that study and were able to include its results in our review. In contrast to our review,  Cragg 2021b  included retrospective assessments (comparisons with on‐site monitoring, effects on data quality or other trial parameters) as well as case studies, illustrations of methods on data, and assessments of methods' ability to identify simulated problem sites or known problems in real trial data. Thus, it constitutes an overview of the methods introduced to the research community, and simultaneously underlines the lack of evidence for their efficacy or effectiveness.

Traditional 100% SDV versus targeted or remote SDV: in addition to these retrospective evaluations of methods to prioritize sites and the increased use of centralized monitoring methods, several studies have retrospectively assessed the value and effectiveness of remote monitoring methods, including alternative SDV methods. Our findings related to a reduction of 100% on‐site SDV in  Mealer 2013  and the MONITORING study ( Fougerou‐Leurent 2019 ) are in agreement with  Tudur Smith 2012b , which assessed the value of 100% SDV in a cancer clinical trial. In their retrospective comparison of data discrepancies and comparative treatment effects obtained following 100% SDV with those based on data without SDV, the identified discrepancies for the primary outcome did not differ systematically across treatment groups or across sites and had little impact on trial results. They also suggested that focusing SDV on less experienced sites or on sites with differing reporting characteristics of SDV‐related information (e.g. SAE reporting compared to other sites), combined with regular training, may be more efficient. Similarly, Andersen and colleagues analyzed error rates in data from three randomized phase III trials monitored with a combination of complete and partial SDV that were subjected to post hoc complete SDV ( Andersen 2015 ). Comparing partly and fully monitored trial participants, there were only minor differences in variables of major importance to efficacy or safety. In agreement with these studies, Embleton‐Thirsk and colleagues showed that the impact of extensive retrospective SDV and further extensive quality checks in a phase III academic‐led, international, randomized cancer trial was minimal ( Embleton‐Thirsk 2019 ). Besides the potential reduction in SDV, remote monitoring systems for full or partial SDV have become more relevant during the COVID‐19 pandemic and are currently being evaluated in various forms. Another recently published study assessed the effectiveness of remote risk‐based monitoring versus on‐site monitoring with 100% SDV ( Yamada 2021 ). It used a cloud‐based remote monitoring system that does not require site‐specific infrastructure, since it can be downloaded onto mobile devices as an application and involves the upload of photographs. Remote monitoring focused on risk items that could lead to critical data and process errors, determined using the risk assessment and categorization tool developed by TransCelerate BioPharma Inc. ( TransCelerate BioPharma Inc 2014 ). Using this approach, 92.9% (95% CI 68.5% to 98.7%) of critical process errors could be detected by remote risk‐based monitoring. With a retrospective review of monitoring reports, Hirase and colleagues reported increased efficiency of monitoring and resource use with a combination of on‐site and remote monitoring using a web‐conference system ( Hirase 2016 ).

The qualitative finding in TEMPER ( Stenning 2018b ) that the experience of the research nurse had an impact on monitoring outcomes is also reflected in the retrospective study by von Niederhäusern and colleagues, which found that experienced site staff was one of the factors associated with lower numbers of monitoring findings and concluded that the human factor is underestimated in current risk‐based monitoring approaches ( von Niederhausern 2017 ).

Implications for systematic reviews and evaluations of healthcare

We found no evidence of inferiority of a risk‐based monitoring approach compared to extensive on‐site monitoring in terms of critical and major monitoring findings. The overall certainty of the evidence for this outcome was moderate. The initial risk assessment of a study can facilitate a reduction of monitoring. However, it might be more efficient to use the outcomes of a risk assessment to guide on‐site monitoring by prioritizing sites with conspicuously low performance on the critical aspects identified by the risk assessment. Some of the triggers used in the TEMPER study ( Stenning 2018b ) and  Knott 2015  could help identify the sites that would benefit most from an on‐site monitoring visit. Trigger refinement and the inclusion of more trial‐specific triggers will, however, be necessary. The development of remote access to trial documentation may further improve the impact of central triggers. Timely central monitoring of consent forms or eligibility documents, with adequate anonymization and data protection, may mitigate the effects of many formal documentation errors. More studies are needed to assess the feasibility of eligibility‐ and informed consent‐related assessment and remote contact with site teams in terms of data security and effectiveness, without on‐site review of documents. The COVID‐19 pandemic has prompted innovative monitoring approaches in the context of restricted on‐site monitoring, including the remote monitoring of consent forms and other original records, as well as of compliance with study procedures usually verified on‐site. Whereas central data monitoring and remote monitoring of documents were formerly applied to improve efficiency, they now have to substitute for on‐site monitoring to comply with pandemic restrictions, making the monitoring methods evaluated in this review even more valuable to the research community. Both the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have provided guidance on aspects of clinical trial conduct during the COVID‐19 pandemic, including remote site monitoring, handling informed consent in remote settings, and the importance of maintaining data integrity and the audit trail ( EMA 2021 ;  FDA 2020 ). The FDA has also adopted contemporary approaches to consent involving telephone calls or video visits in combination with a witnessed signing of the informed consent ( FDA 2020 ). Experiences with new informed consent processes, and advice on how remote monitoring and centralized methods can be used to protect the safety of patients and preserve trial integrity during the pandemic, have been published and provide additional support for sites and sponsors ( Izmailova 2020 ;  Love 2021 ;  McDermott 2020 ). This review may support study teams facing pandemic‐related restrictions with information on evaluated methods that focus primarily on remote and centralized monitoring. It will be important to provide more management support for clinical trials in the academic setting and to develop new recruitment strategies. In our review, low‐certainty evidence suggested that initiation visits or more frequent on‐site visits were not associated with increased recruitment or retention of trial participants. Consequently, trial investigators should plan other, more trial‐specific strategies to support recruitment and retention. To what extent recruitment or retention can be improved through real‐time central monitoring remains to be evaluated.
Research has emphasized the need for evidence on effective recruitment strategies (Treweek 2018b), and new, flexible recruitment approaches initiated during the pandemic may add to this. During the COVID-19 pandemic, both social media and digital health platforms have been leveraged in novel ways to recruit heterogeneous cohorts of participants (Gaba 2020). In addition, the pandemic underlines the need for a study management infrastructure supported by central data monitoring and remote communication (Shiely 2021). One retrospective study at the Beijing Cancer Hospital assessed the impact of its newly implemented remote management model on critical trial indicators: protocol compliance rate, rate of loss to follow-up, rate of participant withdrawal, rates of disease progression and mortality, and detection rate of monitoring problems (Fu 2021). The measures implemented after the first COVID-19 outbreak led to significantly higher rates of protocol compliance and significantly lower rates of loss to follow-up or withdrawal after the second outbreak compared with the first, without affecting rates of disease progression or mortality. In general, experience with electronic methods introduced during the COVID-19 pandemic may facilitate the development, and even the improvement, of clinical trial management.

Implications for methodological research

Several new monitoring interventions have been introduced in recent years. However, the evidence base gathered for this Cochrane Review is limited in both quantity and quality. Ideally, for each of the five identified comparisons (risk-based versus extensive on-site monitoring; central statistical monitoring with triggered on-site visits versus regular [untriggered] on-site visits; central and local monitoring with annual on-site visits versus central and local monitoring only; traditional 100% source data verification [SDV] versus remote or targeted SDV; and on-site initiation visit versus no on-site initiation visit), more randomized monitoring studies nested in clinical trials and measuring effects on all outcomes specified in this review are needed to draw more reliable conclusions.

The development of triggers to guide on-site monitoring while centrally monitoring incoming data is ongoing, and different triggers might be used in different settings. In addition, more evidence on risk indicators that help identify sites with problems, and on the prognostic value of triggers, is needed to further optimize central monitoring strategies. Future methodological research should particularly evaluate approaches with an initial trial-specific risk assessment followed by close central monitoring and the possibility of triggered, targeted on-site visits during trial conduct.

Outcome measures such as the impact on recruitment, retention, and site support should be emphasized in further research, and the potential of central monitoring methods to support the whole study management process needs to be evaluated. Directing monitoring resources to sites with problems independent of data quality issues (recruitment, retention) could promote the role of experienced study monitors as a site support team offering training and advice. The overall progress in conduct and success of a trial should be considered in the evaluation of every new approach. The fact that most of the eligible studies identified for this review are government or charity funded suggests a need for industry-sponsored trials to evaluate their monitoring and management approaches. This could particularly promote the development and evaluation of electronic case report form-based centralized monitoring tools, which require substantial resources.
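To make the trigger-based prioritization discussed above concrete, here is a minimal Python sketch of centrally computed triggers used to rank sites for a targeted on-site visit. The metric names, thresholds, and site data are illustrative assumptions for this sketch only; they are not the triggers evaluated in TEMPER (Stenning 2018b) or Knott 2015, and a real monitoring plan would derive its thresholds from the trial-specific risk assessment.

```python
# A minimal, hypothetical sketch of centrally computed monitoring triggers.
# Metric names, thresholds, and site data are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class SiteMetrics:
    site_id: str
    overdue_crf_rate: float          # fraction of case report forms overdue
    query_rate: float                # data queries per enrolled participant
    sae_reporting_delay_days: float  # mean delay in reporting serious adverse events
    protocol_deviations: int         # deviations logged since the last visit


# Illustrative thresholds; a real plan would derive these from the
# trial-specific risk assessment and refine them during the trial.
THRESHOLDS = {
    "overdue_crf_rate": 0.20,
    "query_rate": 3.0,
    "sae_reporting_delay_days": 7.0,
    "protocol_deviations": 5,
}


def fired_triggers(site: SiteMetrics) -> list[str]:
    """Return the names of all thresholds this site exceeds."""
    return [name for name, limit in THRESHOLDS.items()
            if getattr(site, name) > limit]


def prioritise_for_visit(sites: list[SiteMetrics], min_triggers: int = 2) -> list[str]:
    """Flag sites firing at least `min_triggers` triggers, worst first."""
    flagged = [(s.site_id, len(fired_triggers(s))) for s in sites]
    flagged = [(sid, n) for sid, n in flagged if n >= min_triggers]
    return [sid for sid, _ in sorted(flagged, key=lambda pair: -pair[1])]


if __name__ == "__main__":
    sites = [
        SiteMetrics("site-01", 0.05, 1.2, 2.0, 1),  # fires no triggers
        SiteMetrics("site-02", 0.35, 4.1, 9.0, 6),  # fires all four triggers
        SiteMetrics("site-03", 0.25, 2.0, 3.0, 2),  # fires one trigger
    ]
    print(prioritise_for_visit(sites))  # -> ['site-02']
```

In practice, the metrics would be computed from incoming trial data, and trigger refinement over the course of the trial (as reported for TEMPER) would adjust both the metric set and the thresholds.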

Protocol first published: Issue 12, 2019
Review first published: Issue 12, 2021

Acknowledgements

We thank the monitoring team of the Department of Clinical Research at the University Hospital Basel, including Klaus Ehrlich, Petra Forst, Emilie Müller, Madeleine Vollmer, and Astrid Roesler, for sharing their experience and contributing to discussions on monitoring procedures. We would further like to thank the information specialist Irma Klerings for peer reviewing our electronic database searches.

Appendix 1. Search strategies CENTRAL, PubMed, and Embase

Cochrane Review on monitoring strategies: search strategies. Terms shown in italics differ from the corresponding PubMed strategy.

CENTRAL 3 May 2019: 842 hits (836 trials/6 reviews); Update 16 March 2021: 1044 hits

(monitor* NEAR/2 (site OR risk OR central*)):ti,ab OR "monitoring strategy":ti,ab OR "monitoring method":ti,ab OR "monitoring technique":ti,ab OR "triggered monitoring":ti,ab OR "targeted monitoring":ti,ab OR "risk proportionate":ti,ab OR "trial monitoring":ti,ab OR "study monitoring":ti,ab OR "statistical monitoring":ti,ab

PubMed 13 May 2019: 1697 hits; Update 16 March 2021: 2198 hits

("on site monitoring"[tiab] OR "on‐site monitoring"[tiab] OR "monitoring strategy"[tiab] OR "monitoring method"[tiab] OR "monitoring technique"[tiab] OR "triggered monitoring"[tiab] OR "targeted monitoring"[tiab] OR "risk‐adapted monitoring"[tiab] OR "risk adapted monitoring"[tiab] OR "risk‐based monitoring"[tiab] OR "risk based monitoring"[tiab] OR "risk proportionate"[tiab] OR "centralized monitoring"[tiab] OR "centralised monitoring"[tiab] OR "statistical monitoring"[tiab] OR "central monitoring"[tiab] OR “trial monitoring”[tiab] OR “study monitoring”[tiab]) AND ("Clinical Studies as Topic"[Mesh] OR (("randomized controlled trial"[pt] OR controlled clinical trial[pt] OR trial*[tiab] OR study[tiab] OR studies[tiab]) AND (conduct*[tiab] OR practice[tiab] OR manag*[tiab] OR standard*[tiab] OR harmoni*[tiab] OR method*[tiab] OR quality[tiab] OR performance[tiab])))

Embase (via Elsevier) 13 May 2019: 1245 hits; Update 16 March 2021: 1494 hits

('monitoring strategy':ti,ab OR 'monitoring method':ti,ab OR 'monitoring technique':ti,ab OR 'triggered monitoring':ti,ab OR 'targeted monitoring':ti,ab OR 'risk‐adapted monitoring':ti,ab OR 'risk adapted monitoring':ti,ab OR 'risk based monitoring'/exp OR 'risk proportionate':ti,ab OR 'trial monitoring':ti,ab OR 'study monitoring':ti,ab OR 'statistical monitoring':ti,ab OR (monitor* NEAR/2 (site OR risk OR central*)):ti,ab) AND ('clinical trial (topic)'/exp OR ((trial* OR study OR studies) NEAR/3 (conduct* OR practice OR manag* OR standard* OR harmoni* OR method* OR quality OR performance)):ti,ab)
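For readers who want to re-run or update such a search programmatically, the sketch below queries PubMed through the public NCBI E-utilities esearch endpoint. The query shown is a shortened, illustrative fragment of the full PubMed strategy above (the complete strategy string would be passed the same way), and the retmax value is arbitrary. CENTRAL and Embase have their own interfaces and would need to be searched separately.

```python
# A minimal sketch of re-running (part of) the PubMed strategy via the
# NCBI E-utilities esearch endpoint. The query is an abbreviated,
# illustrative fragment of the full strategy in this appendix.

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '("risk-based monitoring"[tiab] OR "central monitoring"[tiab] '
    'OR "triggered monitoring"[tiab]) '
    'AND ("Clinical Studies as Topic"[Mesh] OR trial*[tiab])'
)

resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": query, "retmax": 100, "retmode": "json"},
    timeout=30,
)
resp.raise_for_status()

result = resp.json()["esearchresult"]
print(result["count"], "records; first PMIDs:", result["idlist"][:5])
```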

Appendix 2. Grey literature search

  • British Library Direct Plus (Discipline: Medicine)
  • BIOSIS databases ( www.biosis.org/ )
  • Web of Science Citation Index
  • Web of Science (Core Collection): Proceedings Papers and Meeting Abstracts (conferences)
  • Handsearch of references in identified articles
  • WHO Registry (ICTRP portal)
  • Risk‐based Monitoring Toolbox

Appendix 3. Data collection form content

1. General Information

Name of person extracting data, report title, report ID, publication type, study funding source, possible conflicts of interest.

2. Methods and study population (trials)

Study design, study duration, design of host trials, characteristics of host trials (primary care, tertiary care, allocated …), total number of sites randomized, total number of sites included in the analysis, stratification of sites (example: stratified on risk level, country, projected enrolment, etc.), inclusion/exclusion criteria for host trials.

3. Risk of bias assessment

Random sequence generation, allocation concealment, blinding of outcome assessment, performance bias, incomplete outcome data, selective outcome reporting, other bias, validated outcome assessment – grading of findings (minor, major, critical).

4. Intervention groups

Number randomized to group, duration of intervention period, was there an initial risk assessment preceding the monitoring plan?, classification of trials/sites, risk assessment characteristics, differing monitoring plan for risk classification groups, what was the extent of on‐site monitoring in the risk‐based monitoring group?, triggers or thresholds that induced on‐site monitoring, targeted on‐site monitoring visits or according to the original trials monitoring plan?, timing (frequency of monitoring visits, frequency of central/remote monitoring), number of monitoring visits per participant, cumulative monitoring time on‐site, mean number of monitoring visits per site, delivery (procedures used for central monitoring structure/components of on‐site monitoring triggers/thresholds), who performed the monitoring (part of study team, trial staff – qualification of monitors), degree of source data verification (median number of participants undergoing source data verification), co‐interventions (site/study‐specific co‐interventions).

5. Outcomes

Primary outcome, secondary outcomes, components of primary outcome (finding error domains), predefined level of outcome variables (major, critical, others, upgraded)?, time points measured (end of trial/during trial), factors impacting the outcome measure, person performing the outcome assessment, was outcome/tool validated?, statistical analysis of outcome data, imputation of missing data.

6. Analysis: comparison of interventions, outcome, subgroup (error domains), postintervention or change from baseline?, unit of analysis, statistical methods used and appropriateness of these methods.

7. Other information (key conclusions of study authors).
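As a rough illustration of how the extraction fields above could be held in a structured, machine-readable form, here is a hedged Python sketch. The class and field names are illustrative simplifications of the full form, not the actual data structure used by the review team.

```python
# A rough, illustrative sketch of the extraction form as structured records.
# Class and field names are simplifications of the full form described above,
# not the actual data structure used by the review team.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class GeneralInformation:
    extractor: str                 # name of person extracting data
    report_title: str
    report_id: str
    publication_type: str
    funding_source: str
    conflicts_of_interest: Optional[str] = None


@dataclass
class InterventionGroup:
    label: str                     # e.g. "risk-based" or "extensive on-site"
    n_randomized: int
    initial_risk_assessment: bool  # risk assessment preceding the monitoring plan?
    visits_per_site: Optional[float] = None  # mean on-site monitoring visits
    sdv_percent: Optional[float] = None      # degree of source data verification


@dataclass
class ExtractionRecord:
    general: GeneralInformation
    groups: list[InterventionGroup] = field(default_factory=list)
    primary_outcome: str = ""      # e.g. critical/major monitoring findings
    key_conclusions: str = ""      # section 7: key conclusions of study authors


# Example: a skeletal record for a two-arm monitoring study.
record = ExtractionRecord(
    general=GeneralInformation("KK", "Example monitoring study", "R-001",
                               "journal article", "public"),
    groups=[InterventionGroup("risk-based", 32, True, 1.0, 10.0),
            InterventionGroup("extensive on-site", 33, True, 4.0, 100.0)],
)
print(record.groups[0].label, record.groups[0].sdv_percent)
```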

Appendix 4. Risk of bias assessment for non‐randomized studies


Data and analyses

Comparison 1, Comparison 2, Comparison 3, Comparison 4; characteristics of studies; characteristics of included studies [ordered by study ID].

ARDS network: Acute Respiratory Distress Syndrome network; ChiLDReN: Childhood Liver Disease Research Network; CRA: clinical research associate; CRF: case report form; CTU: clinical trials unit; DM: data management; SAE: serious adverse event; SDV: source data verification.

Characteristics of excluded studies [ordered by study ID]

Differences between protocol and review

We did not estimate the intracluster correlation and heterogeneity across sites within the ADAMON and OPTIMON studies as planned in our review protocol (Klatte 2019), due to lack of information.

We planned in the protocol to assess the statistical heterogeneity of studies in meta‐analyses. Due to the small number of included studies per comparison, it was not reasonable to assess heterogeneity statistically.

Planned sensitivity analyses were also not performed because of the small number of included studies.

We removed characteristics of monitoring strategies from the list of secondary outcomes at the request of reviewers and included the information in the section on general characteristics of included studies. We changed the order of the secondary outcomes in an attempt to improve the logical flow of the Results section.

Contributions of authors

KK, CPM, and MB conceived the study and wrote the first draft of the protocol.

SL, MS, PB, NB, HE, PAJ, and MMB reviewed the protocol and suggested changes for improvement.

HE and KK developed the search strategy and conducted all searches.

KK, CPM, and MB screened titles and abstracts as well as full texts, and selected eligible studies.

KK and MMB extracted relevant data from included studies and assessed risk of bias.

KK conducted the statistical analyses and interpreted the results together with MB and CPM.

KK and MB assessed the certainty of the evidence according to GRADE and wrote the first draft of the review manuscript.

CPM, SL, MS, PB, NB, HE, PAJ, and MMB critically reviewed the manuscript and made suggestions for improvement.

Sources of support

Internal sources

The Department of Clinical Research provided salaries for review contributors.

External sources

  • No sources of support provided

Declarations of interest

MS was a co‐investigator on an included study (TEMPER), but had no role in study selection, risk of bias, or certainty of evidence assessment for this review. He has no other relevant conflicts to declare.

References to studies included in this review

Brosteanu 2017b {published data only}

  • Brosteanu O, Houben P, Ihrig K, Ohmann C, Paulus U, Pfistner B, et al. Risk analysis and risk adapted on-site monitoring in noncommercial clinical trials . Clinical Trials 2009; 6 :585-96. [ PubMed ] [ Google Scholar ]
  • Brosteanu O, Schwarz G, Houben P, Paulus U, Strenge-Hesse A, Zettelmeyer U, et al. Risk-adapted monitoring is not inferior to extensive on-site monitoring: results of the ADAMON cluster-randomised study . Clinical Trials 2017; 14 :584-96. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Study protocol ("Prospektive cluster-randomisierte Untersuchung studienspezifisch adaptierterStrategien für das Monitoring vor Ort in Kombination mit zusätzlichenqualitätssichernden Maßnahmen") . www.tmf-ev.de/ADAMON/Downloads.aspx (accessed prior to 19 August 2021).

Fougerou‐Leurent 2019 {published and unpublished data}

  • Fougerou-Leurent C, Laviolle B, Bellissant E. Cost-effectiveness of full versus targeted monitoring of randomized controlled trials . Fundamental & Clinical Pharmacology 2018; 32 ( S1 ):49 (PM2-035). [ Google Scholar ]
  • Fougerou-Leurent C, Laviolle B, Tual C, Visseiche V, Veislinger A, Danjou H, et al. Impact of a targeted monitoring on data-quality and data-management workload of randomized controlled trials: a prospective comparative study . British Journal of Clinical Pharmacology 2019; 85 ( 12 ):2784-92. [DOI: 10.1111/bcp.14108] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Journot 2017 {published and unpublished data}

  • Journot V, Perusat-Villetorte S, Bouyssou C, Couffin-Cadiergues S, Tall A, Chene G. Remote preenrollment checking of consent forms to reduce nonconformity . Clinical Trials 2013; 10 :449-59. [ PubMed ] [ Google Scholar ]
  • Journot V, Pignon JP, Gaultier C, Daurat V, Bouxin-Metro A, Giraudeau B, et al. Validation of a risk-assessment scale and a risk-adapted monitoring plan for academic clinical research studies – the Pre-Optimon study . Contemporary Clinical Trials 2011; 32 :16-24. [ PubMed ] [ Google Scholar ]
  • Journot V. OPTIMON – first results of the French trial on optimisation of monitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/docs/Communications/2015-Montpellier/OPTIMON%20-%20EpiClin%20Montpellier%202015-05-20%20EN.pdf (accessed 2 October 2019).
  • Journot V. OPTIMON – the French trial on optimization of monitoring . SCT Annual Meeting; 2017 May 7-10; Liverpool, UK .
  • Study protocol: evaluation of the efficacy and cost of two monitoring strategies for public clinical research. OPTIMON study: OPTImisation of MONitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/DOCS/OPTIMON%20-%20Protocol%20v12.0%20EN%202008-04-21.pdf (accessed prior to 19 August 2021).

Knott 2015 {published and unpublished data}

  • Knott C, Valdes-Marquez E, Landray M, Armitage J, Hopewell J. Improving efficiency of on-site monitoring in multicentre clinical trials by targeting visits . Trials 2015; 16 ( Suppl 2 ):O49. [ Google Scholar ]

Liènard 2006 {published data only}

  • Liénard JL, Quinaux E, Fabre-Guillevin E, Piedbois P, Jouhaud A, Decoster G, et al. Impact of on-site initiation visits on patient recruitment and data quality in a randomized trial of adjuvant chemotherapy for breast cancer . Clinical Trials 2006; 3 ( 5 ):486-92. [DOI: 10.1177/1740774506070807] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Mealer 2013 {published data only}

  • Mealer M, Kittelson J, Thompson BT, Wheeler AP, Magee JC, Sokol RJ, et al. Remote source document verification in two national clinical trials networks: a pilot study . PloS One 2013; 8 ( 12 ):e81890. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Stenning 2018b {published data only}

  • Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Assessing the potential for prevention or earlier detection of on-site monitoring findings from randomised controlled trials: further analyses of findings from the prospective TEMPER triggered monitoring study . Clinical Trials 2021; 18 ( 1 ):115-26. [DOI: 10.1177/1740774520972650] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Diaz-Montana C, Choudhury R, Cragg W, Joffe N, Tappenden N, Sydes MR, et al. Managing our TEMPER: monitoring triggers and site matching algorithms for defining triggered and control sites in the temper study . Trials 2017; 18 :P149. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Diaz-Montana C, Cragg WJ, Choudhury R, Joffe N, Sydes MR, Stenning SP. Implementing monitoring triggers and matching of triggered and control sites in the TEMPER study: a description and evaluation of a triggered monitoring management system . Trials 2019; 20 :227. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Stenning SP, Cragg WJ, Joffe N, Diaz-Montana C, Choudhury R, Sydes MR, et al. Triggered or routine site monitoring visits for randomised controlled trials: results of TEMPER, a prospective, matched-pair study . Clinical Trials 2018; 15 :600-9. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Study protocol: TEMPER (TargetEd Monitoring: Prospective Evaluation and Refinement) prospective evaluation and refinement of a targeted on-site monitoring strategy for multicentre cancer clinical trials . journals.sagepub.com/doi/suppl/10.1177/1740774518793379/suppl_file/793379_supp_mat_2.pdf (accessed prior to 19 August 2021).

Wyman 2020 {published data only}

  • Hullsiek KH, Kagan JM, Engen N, Grarup J, Hudson F, Denning ET, et al. Investigating the efficacy of clinical trial monitoring strategies: design and implementation of the cluster randomized START monitoring substudy . Therapeutic Innovation and Regulatory Science 2015; 49 :225-33. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Wyman Engen N, Huppler Hullsiek K, Belloso WH, Finley E, Hudson F, Denning E, et al. A randomized evaluation of on-site monitoring nested in a multinational randomized trial . Clinical Trials 2020; 17 ( 1 ):3-14. [DOI: 10.1177/1740774519881616] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

References to studies excluded from this review

Agrafiotis 2018 {published data only}

  • Agrafiotis DK, Lobanov VS, Farnum MA, Yang E, Ciervo J, Walega M, et al. Risk-based monitoring of clinical trials: an integrative approach . Clinical Therapeutics 2018; 40 :1204-12. [ PubMed ] [ Google Scholar ]

Andersen 2015 {published data only}

  • Andersen JR, Byrjalsen I, Bihlet A, Kalakou F, Hoeck HC, Hansen G, et al. Impact of source data verification on data quality in clinical trials: an empirical post hoc analysis of three phase 3 randomized clinical trials . British Journal of Clinical Pharmacology 2015; 79 :660-8. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Bailey 2017 {published data only}

  • Bailey L, Straw FK, George SE. Implementing a risk based monitoring approach in the early phase myeloma portfolio at Leeds CTRU . Trials 2017; 18 :220. [ Google Scholar ]

Bakobaki 2011 {published data only}

  • Bakobaki J, Rauchenberger M, Kaganson N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring in clinical trials: a review of monitoring findings from an international multi-centre clinical trial . Clinical Trials 2011; 8 :454-5. [ PubMed ] [ Google Scholar ]

Bakobaki 2012 {published data only}

  • Bakobaki JM, Rauchenberger M, Joffe N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring: findings from an international multi-centre clinical trial . Clinical Trials 2012; 9 :257-64. [ PubMed ] [ Google Scholar ]

Biglan 2016 {published data only}

  • Biglan K, Brocht A, Raca P. Implementing risk-based monitoring (RBM) in STEADY-PD III, a phase III multi-site clinical drug trial for Parkinson disease . Movement Disorders 2016; 31 ( 9 ):E10. [ Google Scholar ]

Collett 2019 {published data only}

  • Collett L, Gidman E, Rogers C. Automation of clinical trial statistical monitoring . Trials 2019; 20 ( Suppl 1 ):P-251. [ Google Scholar ]

Cragg 2019 {published data only}

  • Cragg WJ, Cafferty F, Diaz-Montana C, James EC, Joffe J, Mascarenhas M, et al. Early warnings and repayment plans: novel trial management methods for monitoring and managing data return rates in a multi-centre phase III randomised controlled trial with paper case report forms . Trials 2019; 20 :241. [DOI: 10.1186/s13063-019-3343-2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Del Alamo 2018 {published data only}

  • Del Alamo M, Sanchez AI, Serrano ML, Aguilar M, Arcas M, Alvarez A, et al. Monitoring strategies for clinical trials in primary care: an independent clinical research perspective . Basic & Clinical Pharmacology & Toxicology 2018; 123 :25-6. [ Google Scholar ]

Diani 2017 {published data only}

  • Diani CA, Rock A, Moll P. An evaluation of the effectiveness of a risk-based monitoring approach implemented with clinical trials involving implantable cardiac medical devices . Clinical Trials 2017; 14 :575-83. [ PubMed ] [ Google Scholar ]

Diaz‐Montana 2019b {published data only}

  • Diaz-Montana C, Masters L, Love SB, Lensen S, Yorke-Edwards V, Sydes MR. Making performance metrics work: developing a triggered monitoring management system . Trials 2019; 20 ( Suppl 1 ):P-63. [ Google Scholar ]

Edwards 2014 {published data only}

  • Edwards P, Shakur H, Barnetson L, Prieto D, Evans S, Roberts I. Central and statistical data monitoring in the Clinical Randomisation of an Antifibrinolytic in Significant Haemorrhage (CRASH-2) trial . Clinical Trials 2014; 11 :336-43. [ PubMed ] [ Google Scholar ]

Elsa 2011 {published data only}

  • Elsa VM, Jemma HC, Martin L, Jane A. A key risk indicator approach to central statistical monitoring in multicentre clinical trials: method development in the context of an ongoing large-scale randomized trial . Trials 2011; 12 :A135. [ Google Scholar ]

Fu 2021 {published data only}

  • Fu ZY, Liu XH, Zhao SH, Yuan YN, Jiang M. A preliminary analysis of remote monitoring practice in clinical trials . Chinese Journal of New Drugs 2021; 30 ( 3 ):209-14. [ Google Scholar ]

Hatayama 2020 {published data only}

  • Hatayama T, Yasui S. Bayesian central statistical monitoring using finite mixture models in multicenter clinical trials . Contemporary Clinical Trials Communication 2020; 19 :100566. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Heels‐Ansdell 2010 {published data only}

  • Heels-Ansdell D, Walter S, Zytaruk N, Guyatt G, Crowther M, Warkentin T, et al. Central statistical monitoring of an international thromboprophylaxis trial . American Journal of Respiratory and Critical Care Medicine 2010; 181 :A6041. [ Google Scholar ]

Higa 2020 {published data only}

  • Higa A, Yagi M, Hayashi K, Kosako M, Akiho H. Risk-based monitoring approach to ensure the quality of clinical study data and enable effective monitoring . Therapeutic Innovation and Regulatory Science 2020; 54 ( 1 ):139-43. [ PubMed ] [ Google Scholar ]

Hirase 2016 {published data only}

  • Hirase K, Fukuda-Doi M, Okazaki S, Uotani M, Ohara H, Furukawa A, et al. Development of an efficient monitoring method for investigator-initiated clinical trials: lessons from the experience of ATACH-II trial . Japanese Pharmacology and Therapeutics 2016; 44 :s150-4. [ Google Scholar ]

Jones 2019 {published data only}

  • Jones L, Ogburn E, Yu LM, Begum N, Long A, Hobbs FD. On-site monitoring of primary outcomes is important in primary care clinical trials: Benefits of Aldosterone Receptor Antagonism in Chronic Kidney Disease (BARACK-D) trial – a case study . Trials 2019; 20 ( Suppl 1 ):P-272. [ Google Scholar ]

Jung 2020 {published data only}

  • Jung HY, Jeon Y, Seong SJ, Seo JJ, Choi JY, Cho JH, et al. Information and communication technology-based centralized monitoring system to increase adherence to immunosuppressive medication in kidney transplant recipients: a randomized controlled trial . Nephrology, Dialysis, Transplantation 2020; 35 ( Suppl 3 ):gfaa143.P1734. [DOI: 10.1093/ndt/gfaa143.P1734] [ CrossRef ] [ Google Scholar ]

Kim 2011 {published data only}

  • Kim J, Zhao W, Pauls K, Goddard T. Integration of site performance monitoring module in web-based CTMS for a global trial . Clinical Trials 2011; 8 :450. [ Google Scholar ]

Kim 2021 {published data only}

  • Kim S, Kim Y, Hong Y, Kim Y, Lim JS, Lee J, et al. Feasibility of a hybrid risk-adapted monitoring system in investigator-sponsored trials in cancer . Therapeutic Innovation and Regulatory Science 2021; 55 ( 1 ):180-9. [ PubMed ] [ Google Scholar ]

Lane 2013 {published data only}

  • Lane JA, Wade J, Down L, Bonnington S, Holding PN, Lennon T, et al. A Peer Review Intervention for Monitoring and Evaluating sites (PRIME) that improved randomized controlled trial conduct and performance . Journal of Clinical Epidemiology 2011; 64 :628-36. [ PubMed ] [ Google Scholar ]
  • Lane JA. Improving trial quality through a new site monitoring process: experience from the Protect Study . Clinical Trials 2008; 5 :404. [ Google Scholar ]
  • Lane JJ, Davis M, Down E, Macefield R, Neal D, Hamdy F, et al. Evaluation of source data verification in a multicentre cancer trial (PROTECT) . Trials 2013; 14 :83. [ Google Scholar ]

Lim 2017 {published data only}

  • Lim JY, Hackett M, Munoz-Venturelli P, Arima H, Middleton S, Olavarria VV, et al. Monitoring a large-scale international cluster stroke trial: lessons from head position in stroke trial . Stroke 2017; 48 :ATP371. [ Google Scholar ]

Lindley 2015 {published data only}

  • Lindley RI. Cost effective central monitoring of clinical trials . Neuroepidemiology 2015; 45 :303. [ Google Scholar ]

Miyamoto 2019 {published data only}

  • Miyamoto K, Nakamura K, Mizusawa J, Balincourt C, Fukuda H. Study risk assessment of Japan Clinical Oncology Group (JCOG) clinical trials using the European Organisation for Research and Treatment of Cancer (EORTC) study risk calculator . Japanese Journal of Clinical Oncology 2019; 49 ( 8 ):727-33. [ PubMed ] [ Google Scholar ]

Morales 2020 {published data only}

  • Morales A, Miropolsky L, Seagal I, Evans K, Romero H, Katz N. Case studies on the use of central statistical monitoring and interventions to optimize data quality in clinical trials . Osteoarthritis and Cartilage 2020; 28 :S460. [ Google Scholar ]

Murphy 2019 {published data only}

  • Murphy J, Durkina M, Jadav P, Kiru G. An assessment of feasibility and cost-effectiveness of remote monitoring on a multicentre observational study . Trials 2019; 20 ( Suppl 1 ):P-265. [ Google Scholar ]

Pei 2019 {published data only}

  • Pei XJ, Han L, Wang T. Enhancing the system of expedited reporting of safety data during clinical trials of drugs and strengthening the management of clinical trial risk monitoring . Chinese Journal of New Drugs 2019; 28 ( 17 ):2113-6. [ Google Scholar ]

Stock 2017 {published data only}

  • Stock E, Mi Z, Biswas K, Belitskaya-Levy I. Surveillance of clinical trial performance using centralized statistical monitoring . Trials 2017; 18 :200. [ Google Scholar ]

Sudo 2017 {published data only}

  • Sudo T, Sato A. Investigation of the factors affecting risk-based quality management of investigator-initiated investigational new-drug trials for unapproved anticancer drugs in Japan . Therapeutic Innovation and Regulatory Science 2017; 51 :589-96. [DOI: 10.1177/2168479017705155] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Thom 1996 {published data only}

  • Thom E, Das A, Mercer B, McNellis D. Clinical trial monitoring in the face of changing clinical practice. The NICHD MFMU Network . Controlled Clinical Trials 1996; 17 :58S-59S. [ Google Scholar ]

Tudur Smith 2012b {published data only}

  • Tudur Smith C, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, et al. The value of source data verification in a cancer clinical trial . PloS One 2012; 7 ( 12 ):e51623. [ PMC free article ] [ PubMed ] [ Google Scholar ]

von Niederhäusern 2017 {published data only}

  • von Niederhäusern B, Orleth A, Schädelin S, Rawi N, Velkopolszky M, Becherer C, et al. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned. BMC Medical Research Methodology 2017;17:26. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Yamada 2021 {published data only}

  • Yamada O, Chiu SW, Takata M, Abe M, Shoji M, Kyotani E, et al. Clinical trial monitoring effectiveness: remote risk-based monitoring versus on-site monitoring with 100% source data verification . Clinical Trials (London, England) 2021; 18 ( 2 ):158-67. [DOI: 10.1177/1740774520971254] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Yorke‐Edwards 2019 {published data only}

  • Yorke-Edwards VE, Diaz-Montana C, Mavridou K, Lensen S, Sydes MR, Love SB. Risk-based trial monitoring: site performance metrics across time . Trials 2019; 20 ( Suppl 1 ):P-33. [ Google Scholar ]

Zhao 2013 {published data only}

  • Zhao W. Risk-based monitoring approach in practice-combination of real-time central monitoring and on-site source document verification . Clinical Trials 2013; 10 :S4. [ Google Scholar ]

Additional references

Adamon study protocol 2008

  • ADAMON study protocol. Study protocol ("Prospektive cluster-randomisierte Untersuchung studienspezifisch adaptierter Strategien für das Monitoring vor Ort in Kombination mit zusätzlichen qualitätssichernden Maßnahmen"). www.tmf-ev.de/ADAMON/Downloads.aspx (accessed prior to 19 August 2021).

Anon 2012

  • Anon. Education section: Studies Within A Trial (SWAT). Journal of Evidence-based Medicine 2012;5:44-5. [ PubMed ] [ Google Scholar ]

Baigent 2008

  • Baigent C, Harrell FE, Buyse M, Emberson JR, Altman DG. Ensuring trial validity by data quality assurance and diversification of monitoring methods . Clinical Trials 2008; 5 :49-55. [ PubMed ] [ Google Scholar ]

Bensaaud 2020

  • Bensaaud A, Gibson I, Jones J, Flaherty G, Sultan S, Tawfick W, et al. A telephone reminder to enhance adherence to interventions in cardiovascular randomized trials: a protocol for a Study Within A Trial (SWAT) . Journal of Evidence-based Medicine 2020; 13 ( 1 ):81-4. [DOI: 10.1111/jebm.12375] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Brosteanu 2009

  • Brosteanu O, Houben P, Ihrig K, Ohmann C, Paulus U, Pfistner B, et al. Risk analysis and risk adapted on-site monitoring in noncommercial clinical trials. Clinical Trials 2009;6:585-96. [ PubMed ] [ Google Scholar ]

Brosteanu 2017a

  • Brosteanu O, Schwarz G, Houben P, Paulus U, Strenge-Hesse A, Zettelmeyer U, et al. Risk-adapted monitoring is not inferior to extensive on-site monitoring: results of the ADAMON cluster-randomised study. Clinical Trials 2017;14:584-96. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Buyse 2020

  • Buyse M, Trotta L, Saad ED, Sakamoto J. Central statistical monitoring of investigator-led clinical trials in oncology. International Journal of Clinical Oncology 2020;25(7):1207-14. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Chene 2008

  • Chene G. Evaluation of the efficacy and cost of two monitoring strategies for public clinical research. OPTIMON study: OPTImisation of MONitoring. ssl2.isped.u-bordeaux2.fr/OPTIMON/DOCS/OPTIMON%20-%20Protocol%20v12.0%20EN%202008-04-21.pdf (accessed 2 October 2019).

Cragg 2021a

  • Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Assessing the potential for prevention or earlier detection of on-site monitoring findings from randomised controlled trials: further analyses of findings from the prospective TEMPER triggered monitoring study . Clinical Trials 2021; 18 ( 1 ):115-26. [DOI: 10.1177/1740774520972650] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Cragg 2021b

  • Cragg WJ, Hurley C, Yorke-Edwards V, Stenning SP. Dynamic methods for ongoing assessment of site-level risk in risk-based monitoring of clinical trials: a scoping review . Clinical Trials 2021; 18 ( 2 ):245-59. [DOI: 10.1177/1740774520976561] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

DerSimonian 1986

  • DerSimonian R, Laird N. Meta-analysis in clinical trials . Controlled Clinical Trials 1986; 7 ( 3 ):177-88. [ PubMed ] [ Google Scholar ]

Diaz‐Montana 2019a

Duley 2008

  • Duley L, Antman K, Arena J, Avezum A, Blumenthal M, Bosch J, et al. Specific barriers to the conduct of randomised trials. Clinical Trials 2008;5:40-8. [ PubMed ] [ Google Scholar ]

European Commission 2017

  • European Commission. Risk proportionate approaches in clinical trials. Recommendations of the expert group on clinical trials for the implementation of Regulation (EU) No 536/2014 on clinical trials on medicinal products for human use. ec.europa.eu/health/sites/default/files/files/eudralex/vol-10/2017_04_25_risk_proportionate_approaches_in_ct.pdf (accessed 28 July 2021).

EMA 2013

  • European Medicines Agency. Reflection paper on risk based quality management in clinical trials, 2013. ema.europa.eu/docs/en_GB/document_library/Scientific_guidelines/2013/11/WC500155491.pdf (accessed 2 July 2021).

EMA 2017

  • European Medicines Agency. Procedure for reporting of GCP inspections requested by the Committee for Medicinal Products for Human Use, 2017. ema.europa.eu/en/documents/regulatory-procedural-guideline/ins-gcp-4-procedure-reporting-good-clinical-practice-inspections-requested-chmp_en.pdf (accessed 2 July 2021).

EMA 2021

  • European Medicines Agency. Guidance on the management of clinical trials during the COVID-19 (coronavirus) pandemic, V4. ec.europa.eu/health/sites/default/files/files/eudralex/vol-10/guidanceclinicaltrials_covid19_en.pdf (accessed August 2021).

Embleton‐Thirsk 2019

  • Embleton-Thirsk A, Deane E, Townsend S, Farrelly L, Popoola B, Parker J, et al. Impact of retrospective data verification to prepare the ICON6 trial for use in a marketing authorization application. Clinical Trials 2019;16(5):502-11. [ PMC free article ] [ PubMed ] [ Google Scholar ]

EPOC 2016

  • Effective Practice and Organisation of Care. What study designs should be included in an EPOC review and what should they be called? EPOC resources for review authors, 2016. epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/EPOC%20Study%20Designs%20About.pdf (accessed 2 July 2021).

EPOC 2017

  • Effective Practice and Organisation of Care. Suggested risk of bias criteria for EPOC reviews. EPOC resources for review authors, 2017. epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/Resources-for-authors2017/suggested_risk_of_bias_criteria_for_epoc_reviews.pdf (accessed 2 July 2021).

FDA 2013

  • US Department of Health and Human Services, Food and Drug Administration. Guidance for industry: oversight of clinical investigations – a risk-based approach to monitoring. www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf (accessed 2 July 2021).

FDA 2020

  • US Food and Drug Administration. FDA guidance on conduct of clinical trials of medical products during COVID-19 public health emergency: guidance for industry, investigators, and institutional review boards, 2020. www.fda.gov/media/136238/download (accessed 19 August 2021).

Funning 2009

  • Funning S, Grahnén A, Eriksson K, Kettis-Linblad A. Quality assurance within the scope of good clinical practice (GCP) – what is the cost of GCP-related activities? A survey within the Swedish Association of the Pharmaceutical Industry (LIF)'s members . Quality Assurance Journal 2009; 12 ( 1 ):3-7. [DOI: 10.1002/qaj.433] [ CrossRef ] [ Google Scholar ]
Gaba 2020

  • Gaba P, Bhatt DL. The COVID-19 pandemic: a catalyst to improve clinical trials. Nature Reviews Cardiology 2020;17:673-5. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Gough 2016

  • Gough J, Wilson B, Zerola M. Defining a central monitoring capability: sharing the experience of TransCelerate BioPharma's approach, part 2. Therapeutic Innovation and Regulatory Science 2016;50(1):8-14. [DOI: 10.1177/2168479015618696] [ PubMed ] [ Google Scholar ]

GRADEpro GDT [Computer program]

  • GRADEpro GDT . Version Accessed August 2021. Hamilton (ON): McMaster University (developed by Evidence Prime Inc), 2020. Available at gradepro.org.

Grignolo 2011

  • Grignolo A. The Clinical Trials Transformation Initiative (CTTI) . Annali dell'Istituto Superiore di Sanita 2011; 47 :14-8. [DOI: 10.4415/ANN_11_01_04] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Guyatt 2013a

  • Guyatt GH, Oxman AD, Santesso N, Helfand M, Vist G, Kunz R, et al. GRADE guidelines: 12. Preparing summary of findings tables – binary outcomes . Journal of Clinical Epidemiology 2013; 66 :158-72. [ PubMed ] [ Google Scholar ]

Guyatt 2013b

  • Guyatt GH, Thorlund K, Oxman AD, Walter SD, Patrick D, Furukawa TA, et al. GRADE guidelines: 13. Preparing summary of findings tables and evidence profiles – continuous outcomes . Journal of Clinical Epidemiology 2013; 66 :173-83. [ PubMed ] [ Google Scholar ]
Hearn 2007

  • Hearn J, Sullivan R. The impact of the 'Clinical Trials' directive on the cost and conduct of non-commercial cancer trials in the UK. European Journal of Cancer 2007;43:8-13. [ PubMed ] [ Google Scholar ]

Higgins 2016

  • Higgins JP, Lasserson T, Chandler J, Tovey D, Churchill R. Methodological Expectations of Cochrane Intervention Reviews . London (UK): Cochrane, 2016. [ Google Scholar ]

Higgins 2020

  • Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 6.1 (updated September 2020). Cochrane, 2020 . Available from handbook: training.cochrane.org/handbook/archive/v6.1 .

Horsley 2011

  • Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews . Cochrane Database of Systematic Reviews 2011, Issue 8 . Art. No: MR000026. [DOI: 10.1002/14651858.MR000026.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Houghton 2020

  • Houghton C, Dowling M, Meskell P, Hunter A, Gardner H, Conway A, et al. Factors that impact on recruitment to randomised trials in health care: a qualitative evidence synthesis . Cochrane Database of Systematic Reviews 2020, Issue 10 . Art. No: MR000045. [DOI: 10.1002/14651858.MR000045.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Hullsiek 2015

  • Hullsiek KH, Kagan JM, Engen N, Grarup J, Hudson F, Denning ET, et al. Investigating the efficacy of clinical trial monitoring strategies: design and implementation of the cluster randomized START monitoring substudy . Therapeutic Innovation and Regulatory Science 2015; 49 ( 2 ):225-33. [DOI: 10.1177/2168479014555912] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Hurley 2016

  • Hurley C, Shiely F, Power J, Clarke M, Eustace JA, Flanagan E, et al. Risk based monitoring (RBM) tools for clinical trials: a systematic review . Contemporary Clinical Trials 2016; 51 :15-27. [ PubMed ] [ Google Scholar ]
  • International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. ICH Harmonised Tripartite Guideline: guideline for good clinical practice E6 (R2) . www.ema.europa.eu/en/documents/scientific-guideline/ich-e-6-r2-guideline-good-clinical-practice-step-5_en.pdf (accessed 28 July 2021).
  • International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. Integrated Addendum to ICH E6(R1): guideline for good clinical practice E6(R2). database.ich.org/sites/default/files/E6_R2_Addendum.pdf (accessed 2 July 2021).

Izmailova 2020

  • Izmailova ES, Ellis R, Benko C. Remote monitoring in clinical trials during the COVID-19 pandemic . Clinical and Translational Science 2020; 13 ( 5 ):838-41. [DOI: 10.1111/cts.12834] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Journot 2011

  • Journot V, Pignon JP, Gaultier C, Daurat V, Bouxin-Metro A, Giraudeau B, et al. Validation of a risk-assessment scale and a risk-adapted monitoring plan for academic clinical research studies – the Pre-Optimon study. Contemporary Clinical Trials 2011;32:16-24. [ PubMed ] [ Google Scholar ]

Journot 2013

  • Journot V, Perusat-Villetorte S, Bouyssou C, Couffin-Cadiergues S, Tall A, Chene G. Remote preenrollment checking of consent forms to reduce nonconformity. Clinical Trials 2013;10:449-59. [ PubMed ] [ Google Scholar ]

Journot 2015

  • Journot V. OPTIMON – first results of the French trial on optimisation of monitoring. ssl2.isped.u-bordeaux2.fr/OPTIMON/docs/Communications/2015-Montpellier/OPTIMON%20-%20EpiClin%20Montpellier%202015-05-20%20EN.pdf (accessed 28 July 2021).

Landray 2012

  • Landray MJ, Grandinetti C, Kramer JM, Morrison BW, Ball L, Sherman RE. Clinical trials: rethinking how we ensure quality . Drug Information Journal 2012; 46 :657-60. [DOI: 10.1177/0092861512464372] [ CrossRef ] [ Google Scholar ]

Lefebvre 2011

  • Lefebvre C, Manheimer E, Glanville J. Chapter 6: Searching for studies. In: Higgins JP, Green S, editor(s). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration, 2011 . Available from training.cochrane.org/handbook/archive/v5.1/ .
Love 2021

  • Love SB, Armstrong E, Bayliss C, Boulter M, Fox L, Grumett J, et al. Monitoring advances including consent: learning from COVID-19 trials and other trials running in UKCRC registered clinical trials units during the pandemic. Trials 2021;22:279. [ PMC free article ] [ PubMed ] [ Google Scholar ]

McDermott 2020

  • McDermott MM, Newman AB. Preserving clinical trial integrity during the coronavirus pandemic . JAMA 2020; 323 ( 21 ):2135-6. [ PubMed ] [ Google Scholar ]

McGowan 2016

  • McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement . Journal of Clinical Epidemiology 2016; 75 :40-6. [DOI: 10.1016/j.jclinepi.2016.01.021] [PMID: ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Meredith 2011

  • Meredith S, Ward M, Booth G, Fisher A, Gamble C, House H, et al. Risk-adapted approaches to the management of clinical trials: guidance from the Department of Health (DH) / Medical Research Council (MRC)/Medicines and Healthcare Products Regulatory Agency (MHRA) Clinical Trials Working Group . Trials 2011; 12 :A39. [ Google Scholar ]
Moher 2009

  • Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Journal of Clinical Epidemiology 2009;62:1006-12. [ PubMed ] [ Google Scholar ]

Morrison 2011

  • Morrison BW, Cochran CJ, White JG, Harley J, Kleppinger CF, Liu A, et al. Monitoring the quality of conduct of clinical trials: a survey of current practices . Clinical Trials 2011; 8 ( 3 ):342-9. [ PubMed ] [ Google Scholar ]
  • Organisation for Economic Co-operation and Development. OECD recommendation on the governance of clinical trials. oecd.org/sti/inno/oecdrecommendationonthegovernanceofclinicaltrials.htm (accessed 2 July 2021).

Olsen 2016

  • Olsen R, Bihlet AR, Kalakou F. The impact of clinical trial monitoring approaches on data integrity and cost – a review of current literature. European Journal of Clinical Pharmacology 2016;72:399-412. [ PubMed ] [ Google Scholar ]

OPTIMON study protocol 2008

  • OPTIMON study protocol. Study protocol: evaluation of the efficacy and cost of two monitoring strategies for public clinical research. OPTIMON study: OPTImisation of MONitoring . ssl2.isped.u-bordeaux2.fr/OPTIMON/DOCS/OPTIMON%20-%20Protocol%20v12.0%20EN%202008-04-21.pdf (accessed prior to 19 August 2021).
Oxman 1992

  • Oxman AD, Guyatt GH. A consumer's guide to subgroup analyses. Annals of Internal Medicine 1992;116:78-84. [ PubMed ] [ Google Scholar ]

Review Manager 2014 [Computer program]

  • Review Manager 5 (RevMan 5) . Version 5.3. Copenhagen: Nordic Cochrane Centre, The Cochrane Collaboration, 2014.
  • Monitoring Platform of the Swiss Clinical Trial Organisation (SCTO). Fact sheet: central data monitoring in clinical trials, V 1.0. www.scto.ch/monitoring (accessed 2 July 2021).

Shiely 2021

  • Shiely F, Foley J, Stone A, Cobbe E, Browne S, Murphy E, et al. Managing clinical trials during COVID-19: experience from a clinical research facility . Trials 2021; 22 :62. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Stenning 2018a

Sun 2010

  • Sun X, Briel M, Walter SD, Guyatt GH. Is a subgroup effect believable? Updating criteria to evaluate the credibility of subgroup analyses. BMJ 2010;340:c117. [ PubMed ] [ Google Scholar ]

Tantsyura 2015

  • Tantsyura V, Dunn IM, Fendt K. Risk-based monitoring: a closer statistical look at source document verification, queries, study size effects, and data quality . Therapeutic Innovation and Regulatory Science 2015; 49 :903-10. [ PubMed ] [ Google Scholar ]

Thomas 2010 [Computer program]

  • EPPI-Reviewer: software for research synthesis. EPPI-Centre Software . Thomas J, Brunton J, Graziosi S, Version 4.0. London (UK): Social Science Research Unit, Institute of Education, University of London, 2010.

TransCelerate BioPharma Inc 2014

  • TransCelerate BioPharma Inc. Risk-based monitoring methodology. www.transceleratebiopharmainc.com/wp-content/uploads/2016/01/TransCelerate-RBM-Position-Paper-FINAL-30MAY2013.pdf (accessed 28 July 2021).

Treweek 2018a

  • Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Trials 2018; 19 :139. [DOI: 10.1186/s13063-018-2535-5] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Treweek 2018b

  • Treweek S, Pitkethly M, Cook J, Fraser C, Mitchell E, Sullivan F, et al. Strategies to improve recruitment to randomised trials . Cochrane Database of Systematic Reviews 2018, Issue 2 . Art. No: MR000013. [DOI: 10.1002/14651858.MR000013.pub6] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

Tudur Smith 2012a

  • Tudur Smith C, Stocken DD, Dunn J, Cox T, Ghaneh P, Cunningham D, et al. The value of source data verification in a cancer clinical trial . PloS One 2012; 7 :e51623. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Tudur Smith 2014

  • Tudur Smith C, Williamson P, Jones A, Smyth A, Hewer SL, Gamble C. Risk-proportionate clinical trial monitoring: an example approach from a non-commercial trials unit . Trials 2014; 15 :127. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Valdés‐Márquez 2011

  • Valdés-Márquez E, Hopewell CJ, Landray M, Armitage J. A key risk indicator approach to central statistical monitoring in multicentre clinical trials: method development in the context of an ongoing large-scale randomized trial . Trials 2011; 12 ( Suppl 1 ):A135. [ Google Scholar ]
Venet 2012

  • Venet D, Doffagne E, Burzykowski T, Beckers F, Tellier Y, Genevois-Marlin E, et al. A statistical approach to central monitoring of data quality in clinical trials. Clinical Trials 2012;9:705-13. [ PubMed ] [ Google Scholar ]

von Niederhausern 2017

  • Niederhausern B, Orleth A, Schadelin S, Rawi N, Velkopolszky M, Becherer C, et al. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned . BMC Medical Research Methodology 2017; 17 :26. [ PMC free article ] [ PubMed ] [ Google Scholar ]

Wyman Engen 2020

  • Wyman Engen N, Huppler Hullsiek K, Belloso WH, Finley E, Hudson F, Denning E, et al. A randomized evaluation of on-site monitoring nested in a multinational randomized trial . Clinical Trials 2020; 17 ( 1 ):3-14. [DOI: 10.1177/1740774519881616] [PMID: ] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
Young 2011

  • Young T, Hopewell S. Methods for obtaining unpublished data. Cochrane Database of Systematic Reviews 2011, Issue 11. Art. No: MR000027. [DOI: 10.1002/14651858.MR000027.pub2] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

References to other published versions of this review

Klatte 2019

  • Klatte K, Pauli-Magnus C, Love S, Sydes M, Benkert P, Bruni N, et al. Monitoring strategies for clinical intervention studies . Cochrane Database of Systematic Reviews 2019, Issue 12 . Art. No: MR000051. [DOI: 10.1002/14651858.MR000051] [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]


Pre-Study Preparation

The key steps involved in the pre-study preparation of industry-sponsored clinical trials include the following:

Confirm Research Support Services/Facilities

Based on your final budget and contract, contact the supporting departments that will be involved with the trial to let them know you are ready to begin. This might include:

ITHS CLINICAL RESEARCH CENTER

ITHS RESEARCH COORDINATION CENTER

INVESTIGATIONAL DRUG SERVICE PHARMACY Harborview Medical Center (206) 744-5448 | [email protected] UW Medical Center (206) 598-6054 | [email protected] Website

UWMC LABORATORY MEDICINE Administration (206) 598-6131 |  [email protected]

RESEARCH TESTING SERVICE (206) 616-8979 |  [email protected]

UW DEPARTMENT OF PATHOLOGY NWBioSpecimen

UW DEPARTMENT OF RADIOLOGY RESEARCH PROGRAM

Site Initiation Visit

Many industry sponsors/Clinical Research Organizations conduct a Site Initiation Visit (SIV) to prepare and set up a research site, conduct protocol training, and ensure the Principal Investigator (PI) fully understands all trial responsibilities.  The visit usually occurs after the site has completed all regulatory requirements, including Institutional Review Board approval, but prior to recruiting participants.  The sponsor/Clinical Research Organization will want to meet with the PI and as many members of the research team as possible.  The sponsor/Clinical Research Organization may ask to meet with representatives from supporting departments (e.g., pharmacy, radiology, lab medicine).

Topics of discussion during the site initiation visit include:

  • PI responsibilities
  • PI and research team qualifications
  • Study objectives, eligibility criteria, recruitment, and procedures
  • Space requirements, availability of a secure area to store investigational drug or devices, availability of required equipment
  • Lab manual, specimen processing, and shipping
  • Regulations and Good Clinical Practice (GCP) guidelines, informed consent requirements, Institutional Review Board obligations, adverse event reporting, drug accountability, source documentation, and records retention (regulatory documents and study file organization)
  • Data forms review (Case Report Forms, or CRFs), including electronic data entry

If it is a large multi-site trial, a sponsor/Clinical Research Organization may choose to hold an Investigator Meeting in lieu of conducting site initiation visits. In this case, the meeting is held sometime after the Site Qualification Visit and prior to recruiting participants. The Investigator Meeting is usually attended only by the PI. However, if the PI is unable to attend, the sponsor/Clinical Research Organization may allow a co-investigator or research coordinator to attend. Travel to this meeting is coordinated and paid for by the sponsor/Clinical Research Organization.

Access to UW Electronic Medical Records

To use the University of Washington’s electronic medical record systems (ORCA and Epic) to identify eligible patients or capture clinical data about participants, you will need to request ORCA and Epic access for necessary research staff from User Access Administration .

If you have questions about ORCA access privileges or how to complete the form, contact: [email protected] .

If you access PHI for research purposes under an Institutional Review Board-approved HIPAA waiver, you must record and submit records of the dates and purposes of these disclosures to UW Medicine Compliance via the UW Medicine Disclosure Accounting online database.

UW MEDICINE COMPLIANCE (206) 543-3098 | [email protected] Accounting of Disclosures Use & Disclosure of Protected Health Information for Research Accounting of Disclosures of Protected Health Information

Pre-Screening to Identify Participants

To identify possible subjects for research, you may need to access Epic or ORCA to look at healthcare records. Even if you are involved in the clinical care of the possible subjects, you will need Institutional Review Board approval of a Waiver of HIPAA Authorization (and if the UW is the Institutional Review Board reviewing the project, a Waiver of Informed Consent and Confidentiality Agreement as well).

Members of your research team may need to complete Epic Training, which is described in the “Training/Credentialing” section below.

Clinical Research Billing for Research Procedures

There are complex federal and private payer rules that govern the conditions under which clinical services, items and tests associated with a research study can be billed to study sponsors, study subjects and/or their insurers. Research teams are required to use Epic to accurately bill for research procedures. Accurate research billing depends on planning and collaboration between the research team and a wide variety of individuals and offices before, during and after the study is initiated.

EPIC REVENUE CYCLE OPERATIONS EDUCATION [email protected] Research Billing Compliance Policies Research Participant Association in Epic Epic Billing Tools

Laboratory Medicine and Research Testing Services

If results of testing performed by University of Washington Lab Medicine will be part of data capture, you must maintain copies of lab normal ranges along with CLIA and CAP certifications.

Licenses & Accreditation

Lab Medicine is also the home department for Research Testing Services, which coordinates and provides research-related phlebotomy, CLIA-licensed testing, research-only testing, processing, and limited specimen storage.

UW LABORATORY MEDICINE Research Testing Service (206) 616‑8979 |  [email protected]

Research Instrument Validation and Calibration

Scientific Instruments supports more than 18,000 pieces of patient care, laboratory, and research equipment spread across the greater Seattle area, including UW Medical Center, Harborview Medical Center, Northwest Hospital & Medical Center, Seattle Cancer Care Alliance, UW Physician's Neighborhood Clinics, and a variety of other University, state, federal, and other publicly funded agencies. Their records can be used to document compliance with TJC, CAP, CLIA, AABB, Food and Drug Administration, CMS, or other accrediting agencies' requirements for equipment maintenance.

UW HEALTH SCIENCES SCIENTIFIC INSTRUMENTS Machine Shop (206) 616-5074 |  [email protected]

Arranging Compensation for Research Participants

Trials often compensate research participants to encourage participation and acknowledge the time and effort involved. Payments or travel/parking reimbursements to research subjects must be approved by the IRB as part of the research activities. You may need to work with your department to identify specific procedures for compensating research participants.

UW HUMAN SUBJECTS DIVISION (206) 543-0098 |  [email protected]

UW PROCUREMENT SERVICES (206) 543-4500 |  [email protected]

UW TRANSPORTATION SERVICES 206-221-3701 | [email protected]

Register Study on Participate in Research

Researchers can post their trials to www.ParticipateInResearch.org, and potential volunteers can search for studies that apply to them. This website was developed by the Institute of Translational Health Sciences in partnership with the UW School of Medicine's Office of Research and Graduate Education to connect research teams with members of the community.

Set Up the Study Binder

Regulatory binders help research teams organize their files, maintain regulatory compliance, and adhere to Good Clinical Practice (GCP) standards for record keeping practices for research involving human subjects. Most sponsors/Clinical Research Organizations will provide the organizational forms and supplies they require you to maintain throughout the trial.

Helpful background can be found in Section 8, “Essential Documents for the Conduct of a Clinical Trial,” of the Food and Drug Administration’s guidance document E6, Good Clinical Practice: Consolidated Guidance.

Do a Walk Through

To make sure you are prepared to conduct the trial, do a walk-through of the research procedures before you schedule the first visit:

  • Confirm pre-screening steps in Epic and ORCA
  • Create visit packets containing the recruitment, consent, and data collection resources you will use when approaching participants
  • Role play a recruitment conversation using the recruitment script
  • Pretend to schedule a study visit
  • Role play an informed consent discussion
  • Walk from the place where you’ll meet the participant to the visit location(s)
  • Make sure you have everything you need at the visit location(s):  lab kits, MD orders, pharmacy communication, lab requisition slips, data collection forms, laptop to access eCRFs/regulatory docs, equipment calibrated and in working order, mailing/shipping containers
  • Review data collection forms (CRFs) and confirm access to electronic data entry system

Training/Credentialing

Human Subjects Protections Training

There is no institution-wide requirement for human subjects training at the University of Washington. However, many sponsors, funding agencies, UW departments, and collaborating institutions require that all members of the research team complete a standard training course on protecting human subjects in research. The Collaborative Institutional Training Initiative (CITI) web-based training meets the requirements of most industry sponsors. To take a course on the CITI website, you must register for an account and then affiliate yourself with the UW.

UW HUMAN SUBJECTS DIVISION TRAINING (206) 543-0098 |  [email protected] Human Subjects Division website

Clinical Trial Policy Training

University of Washington’s Clinical Research Budget & Billing (CRBB) support office provides Clinical Trial Policy Training to ensure that all UW Medicine faculty share a uniform knowledge of the regulations governing clinical research and understand the internal processes implemented to maintain compliance in clinical research billing.

Your research support staff may also need to complete the CRBB modules within the UW Medicine Clinical Research Staff Training Program (CRB1, CRB2, CRB3).

HIPAA Training

UW employees who are involved with research conducted within UW Medicine facilities must complete “HIPAA Online Training” and sign the “UW Medicine Privacy, Confidentiality and Information Security Agreement” within 30 days of joining the research team.

UW MEDICINE COMPLIANCE  (206) 543-3098 |  [email protected] HIPAA Privacy and Information Security Training
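
Because the 30-day window runs from each person’s start date, a small sketch like this can flag whose HIPAA training is compliant, due, or overdue. It is plain Python with a purely illustrative roster; the names and dates are made up.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative roster (made-up names/dates): name -> (team start date,
# HIPAA training completion date, or None if not yet completed).
roster = {
    "coordinator_a": (date(2024, 5, 1), date(2024, 5, 20)),
    "coordinator_b": (date(2024, 6, 10), None),
}

def hipaa_status(start: date, completed: Optional[date], today: date) -> str:
    """HIPAA training is due within 30 days of joining the research team."""
    deadline = start + timedelta(days=30)
    if completed is not None:
        return "compliant" if completed <= deadline else f"completed late (deadline was {deadline})"
    return "OVERDUE" if today > deadline else f"due by {deadline}"

today = date(2024, 7, 1)
for name, (start, completed) in roster.items():
    print(f"{name}: {hipaa_status(start, completed, today)}")
```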

Epic Training

For trials that utilize UW Medicine clinical facilities, investigators (or their designees) are required to use Epic scheduling software to enter research subject enrollment status information and to forward study-related admission notifications to the UW Clinical Research Budget and Billing office. To do this work, members of the research team need to complete Epic training and obtain access to Epic.

Register for the Epic classroom training course, RES110: Epic Research Participant Enrollment.

EPIC REVENUE CYCLE OPERATIONS EDUCATION [email protected] UW Medicine Account Activation Request Form

Bloodborne Pathogens Training

Principal investigators are responsible for assessing research activities to determine whether members of the research team have a potential for exposure to human blood and its components, human tissue, human cell lines, human source materials, medications derived from blood (e.g., immune globulins, albumin), or other potentially infectious materials. If your research activities involve human blood and its components, you and your research team are required to comply with the UW’s Bloodborne Pathogens Program, which includes a training requirement.

UW ENVIRONMENTAL HEALTH AND SAFETY Training Administration (206) 543-7201 |  [email protected] Bloodborne Pathogens Program

Radiation Safety Training

Members of your research team may need to complete radiation safety training or review “Radiation Safety Training for Ancillary Personnel.”

UW ENVIRONMENTAL HEALTH AND SAFETY Training Administration (206) 543-7201 |  [email protected] Radiation Safety Training for Ancillary Personnel Radiation Safety Training

Shipping Biohazards Certification

If your research team will package and ship specimens by land, air, or sea, all team members must be trained and certified to ship hazardous materials. There are prescriptive requirements for packaging and labeling hazardous materials and for the associated documentation used in the event of an emergency. Noncompliance risks fines for lack of certification or improper packaging and, worse, loss of life and property.

UW ENVIRONMENTAL HEALTH AND SAFETY Training Administration (206) 543-7201 |  [email protected] Shipping Hazardous Materials

UW Medical Center Credentialing

The UW Medical Center credentialing process ensures that individuals other than physicians, nurse practitioners, and physician assistants who interact with patients at UW Medical Center are competent to practice in their roles and have current immunizations, protecting patient safety. Members of your research team who interact with UW Medical Center patients must have current credentialing.

UW MEDICAL CENTER CREDENTIALING [email protected] Credentialing
