Presented by PaCCSC, this free online four-part Masterclass Series was delivered to new and emerging clinician researchers to help them build knowledge, make connections, and get involved in clinical research in palliative care.
Clinical trial masterclass series
Clinical trial masterclass 1 - The average clinical trial lifecycle
Masterclass 1 will give you an overview of the entire clinical trial process, taking you through every aspect from the spark of a new trial idea to seeking funding, getting your trial started, trial monitoring, data management, and dissemination of findings.
Presenters
Belinda Fazekas, National Project Officer, leads the IMPACCT Trials Coordination Centre team and has over 15 years’ experience in the conduct of clinical trials in palliative care and cancer symptom management.
Fran Hyslop, Project Officer, Palliative Care Clinical Studies Collaborative (PaCCSC), has worked in research, support management, and education for two decades.
Welcome to the UTS IMPACCT Trials Coordination Centre Masterclasses, funded by the New South Wales Ministry of Health and aimed at helping you bring your clinical trial idea to life.
Today, we're going to talk about the average clinical trial life cycle. First, I'd like to acknowledge the Gadigal people of the Eora Nation upon whose ancestral lands our city campus now stands.
I would like to pay respect to the elders both past and present, acknowledging them as the traditional custodians of knowledge for this land. I would like to acknowledge the traditional custodians of the various lands from which all our attendees joined today, and to pay respects to those elders past and present, and I extend this respect to First Nations people attending today.
My name is Fran Hyslop. My colleague, Belinda Fazekas, National Project Officer, and I will be presenting today. I have worked in research, support management, and education for two decades.
Belinda and I are from PaCCSC and CST at the University of Technology Sydney. PaCCSC is the Palliative Care Clinical Studies Collaborative, and CST is the Cancer Symptom Trials Collaborative. Both PaCCSC and CST are part of UTS IMPACCT. The Trials Coordination Centre or ITCC works to coordinate PaCCSC and CST trials.
This is the first in a series of masterclasses from the UTS IMPACCT Trials Coordination Centre aimed to help you bring your clinical trial idea to life. Other topics we'll be covering in this series include developing your clinical trial protocol, how we can support you in running your trial, and a critical appraisal masterclass.
I'd like to hand over to Belinda to introduce herself.
Hi, thanks Fran. My name is Belinda Fazekas and I lead the ITCC team. I've worked in the area of clinical trials coordination within palliative care and cancer symptom management for over 15 years.
We hope that this will be an interesting session. Please feel free to seek clarification as we move through the life cycle of a clinical trial, and please understand that we cover all the ground for multi-site trials. Some of the steps may not apply to your current or proposed work, but they may give you insight into the larger picture. So we're just going to launch straight in.
Today, we'll progress through the average life cycle of a multi-site clinical trial and provide you with information on what, when, and how various tasks need to happen in order to commence and complete a clinical trial.
With that in mind, let us walk you through the life cycle of a multi-site clinical trial. You may have this copy of the diagram in your notes. This flow diagram shows all of the steps and time points for a typical fully powered multi-site randomised controlled clinical trial.
Fran and I will talk you through this flow. It might look overwhelming, but hopefully it's logical to follow and steps you through a clinical trial.
Our experience is that an entire Phase 3 clinical trial can take about 10 years. There may be some parts that take longer or happen more quickly than expected, but it seems that 10 years is a magic number.
We are not limiting this cycle to the recruitment stage alone. Recruitment is only one part of the picture. There is so much more that goes on prior to this and also after.
So we'll work through the entire life cycle of the average clinical trial from study concept through to peer review publication and beyond.
Year by year, we're going to look at each of the major activities and trial phases. So let's look at Year 1, the beginning. There are five important steps in this first year.
Firstly, how do clinical trials originate? Sometimes a medication or an intervention works and sometimes it doesn't. You may know there's no evidence in the population you're treating or you may be unsure of what dose should be used or how the intervention is best delivered. There may be little or nothing in the literature and little consensus among your colleagues. These can all lead to ideas for new studies.
Or you may already have an idea. Fantastic, but where do you go next? Who can help you? Who do you need to speak to? What support do you have for research in your own organisation? Is there someone doing this already? What training, policies, and procedures do you need to conduct clinical research?
Firstly, your idea needs to be developed into a protocol. A protocol is a comprehensive document that describes every detail of the study, detailing everything that needs to happen and how it is to happen. Part of the protocol development process will involve building your team.
You need to consider who you will invite to help and think about what each individual will bring to the study, such as expertise in trial design, access as a recruiting site, networking skills, and so on.
To make this work, you will need other experts in study design, statistics, health economics, and so on. During this time you'll cement your team and iron out the details of the study design within the protocol.
The protocol continues to develop along with your team and may continue for what seems like months. When it's done, the detail in your protocol should be sufficient for someone else to pick up and implement.
So what does a protocol look like? The ITCC team has a protocol template which is available to members, but I also direct you to ICH GCP E6, the Good Clinical Practice guideline, which has a whole section outlining the requirements for a protocol.
The PaCCSC and CST templates are based on ICH GCP. Using these templates will ensure that you have covered every contingency. At this point, the more comprehensive the protocol, the fewer problems you will experience later.
As we progress, I will mention version control a number of times. I cannot stress enough the importance of having control of the many documents generated throughout a clinical trial.
Starting with the protocol, each document needs to carry version information, perhaps a simple name and number such as Version 1.0, plus a date and any other details that distinguish one document from a previous version. This should be applied to every document associated with the study protocol.
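As an illustration only, a version label combining name, number, and date might look like the sketch below. The `Name_vX.Y_YYYY-MM-DD` pattern is an assumption for this example, not a PaCCSC/CST requirement; use whatever convention your sponsor or trial office mandates, applied consistently.

```python
from datetime import date

def version_label(doc_name: str, major: int, minor: int, on: date) -> str:
    """Build an identifying label from document name, version number, and date.

    The 'Name_vX.Y_YYYY-MM-DD' pattern is illustrative only.
    """
    return f"{doc_name}_v{major}.{minor}_{on.isoformat()}"

label = version_label("StudyProtocol", 1, 0, date(2024, 3, 1))
# e.g. 'StudyProtocol_v1.0_2024-03-01'
```

The point is not the exact format but that every document carries enough information to tell it apart from every earlier version.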
Every trial needs a sponsor and you need to identify who the sponsor will be. The sponsor has a broad range of legal obligations and responsibility for all sites in a multi-site trial.
Sponsors can be your local institution, university, biotech, or pharmaceutical company. Think about who is the most suitable sponsor. Is it your organisation, the grant holder, an academic institution you're aligned with? Do they have the appropriate experience in trial sponsorship?
Having your idea and your subsequent protocol evaluated by experts for scientific, academic, and/or professional rigour is absolutely essential. They can pick up issues with your protocol and comment on any logistical problems you might encounter during implementation.
Remember that your protocol will determine the entire study and its results. Multiple rounds of peer review are critical. PaCCSC and CST both have scientific advisory committees which perform this function.
Funding is where it gets really serious, bigger picture funding at this point. You need to obtain funding to run your study. You need the right team to attract funding and a plan to follow when funding applications are unsuccessful.
Grant applications need to include all costs associated with the study such as staff time, materials, medication, and pathology. You also need to consider other trial costs such as ethics, travel, and salaries.
It may be possible to run a small pilot or feasibility study with smaller grants. This will also improve your chances of success for bigger applications. Feedback from funding bodies can help you with the development and refinement of the protocol for future submissions.
So don't be surprised if this process takes a full year and often longer. Your experience with protocols to date may be involvement in a study with a protocol already in place. Bear in mind that the investigators may have taken a year to get to that point.
And so, we come to the second year. By now, you will have a near final protocol. If you've been successful in attracting funding, you need to start planning for the actual study rollout.
You should have some idea about who you will need to meet with regarding the study and how frequently. You may have already been having regular protocol development meetings. These can become trial management committees.
You'll need to meet regularly with the planned participating sites, probably monthly in the lead up to trial initiation and recruitment opening. At times, you may need to hold specialist meetings, such as for safety reporting, protocol violations, or provide upstream or downstream reporting to your sponsor, funding body, lead organisation, participating sites, et cetera. You need to know who is responsible for what decisions.
Every study will have different agreement requirements depending on the study, the sponsor, and the funding body. As an example, if you are successful in gaining competitive category one NHMRC, MRFF, or ARC funding, you're going to need a multi-institutional agreement or research collaboration agreements with each associated member on the grant, regardless of whether they are a recruiting site.
If the trial sponsor is not the lead agency, then a research collaboration agreement will be needed between them. You will also require Clinical Trial Research Agreements (CTRAs) between the sponsor and the individual sites, and if your grant holder is not also your sponsor, they will need to be involved.
CTRAs set out the terms and conditions between the sponsor and the recruiting sites. Medicines Australia has a template agreement that we strongly recommend.
Year 2 is when you also plan your sites. More than one site is often needed for successful recruitment. In fact, it's almost impossible to successfully recruit to a clinical trial in Australia without involving many sites.
You need to carefully consider which sites you wish to involve and how you make this choice. Do you make a random list of sites, or are you more targeted? Do you choose the sites where you know the team already?
You need to think about those sites that will enable your study to recruit on time, in budget, and with data that you can use. Basically, which sites will give you the most successful outcome?
Consider their previous experience, staffing, other competing studies, patient population, clinical interest, et cetera. Also, consider how many sites you will need to ensure your sample size is attainable. The more sites, the better the chance of recruiting, but the cost and complexities also increase.
The intervention itself needs to be carefully planned. It's very easy to put this into a protocol, but the actual logistics are not as easy.
What is the intervention? If it's a drug trial, is blinding involved and how will this occur? Is a specific manufacturing process involved? How will the intervention be presented? Is specific packaging required? Will there be any potential confusion regarding the administration for patients or staff?
Other issues such as accountability of the intervention, stock control, and multiple other planning issues will need to be commenced at this point. Non-drug interventions are no less complex and still require much planning.
So now it's also the time to start thinking about the data: what do you want, in what format, who will collect it, and how? The data collection worksheets are crucial. The data captured on these worksheets and then entered into the database will constitute the data for the analysis at the end.
If you don't ask the right questions in your worksheets, you won't have the data you need to answer your research question. Be very careful and pilot the forms if you can.
Set up master files. These are detailed in GCP Essential Documents. There are certain documents that are required to be kept at the coordinating office and at each site. And the master file will ensure that these are set up and maintained so everything has its place. It needs to be easy to maintain from day one to prevent loss of documents.
The take home message is even before you start, think about being able to trust the integrity and rigour of your data.
Consider that a study may take 10 years or more across numerous sites with lots of paperwork. The required archive period is 15 years, so it's possible that there may be some review of the files in 25 years' time. The likelihood is that we may have all moved on in one way or another. The patients may have all died. Some of the hospitals may have even closed or changed their service delivery.
Any future reviewers and auditors will only have the quality of the available paperwork to see if all the steps of the research process were correctly undertaken so they can assess the quality of the study results.
If it's not documented, it didn't happen. The paper trail starts at the start of the life cycle and continues until the end. In GCP, this covers all parts of drug development and continues until the drug is in clinical practice. So you can see that trials are only two or three parts of the complete picture that you can see on your screen.
Filing is very important. It's very easy to have documents either paper or electronic flying everywhere, and you need to have control of them. Setting up a filing system keeps you and everyone else organised and in control.
The use of standard operating procedures is unavoidable in multi-site trials. They provide detailed written instructions that describe a specific activity and ensure uniformity and consistency across the study, across sites, and over time.
They enable scrutiny of the procedures at a later date. So SOPs support the study protocol and may include SOPs around consent, worksheet completion, patient flow, follow-up procedures, and any protocol specific procedures where consistency is required.
SOPs also ensure that sites are operating in accordance with the international regulations and the study protocol. Again, these support external scrutiny of the study. PaCCSC and CST have a full suite of SOPs available for researchers.
Now, we get to the fun stuff. The regulatory environment. Clinical trial registration is essential. The World Health Organisation regards trial registration as an important scientific, ethical, and moral responsibility.
You need to do this now if you ever want to publish anything on this study. No journal will accept a paper reporting a trial that was not registered before recruitment of the first participant. Once a trial is registered, it is the sponsor's responsibility to update the registry regularly and if the study changes. We mentioned CTRAs before.
If you intend to use a drug, check the Australian Register of Therapeutic Goods (ARTG) for the currently approved indication, dose, and population. If your trial will be using the drug for a different indication, a different dose, or a different population than the current TGA approval, you must submit a Clinical Trial Notification (CTN) to the TGA, and both the sponsor and participating sites require appropriate insurance.
The process of ethics and governance submission and approval is a major achievement and this is a big step.
Most times now you'll need to complete an online application, which provides the HREC, the Human Research Ethics Committee, with the specific ethics focus for its review of the study protocol.
Other patient facing documents such as advertisements, patient diaries, questionnaires, consent forms, et cetera, will need to be included for HREC review.
Version control is important here, as it is essential that the study uses the version that has been approved. Consider which HREC you will submit to and how you choose: is it convenience, an institutional requirement, or do they have experience with the type of trial you are planning?
Site Specific Agreements or SSAs are generated after the application is locked so that each site can submit to their own governance office for local approval.
Even though you already have funding and a budget, now you need to detail your costs and payment schedules. Your per participant payment and other site payments could account for up to 50% of your operating costs.
You need to be able to track spending throughout the study to monitor and adjust for blowouts. Other regularly occurring items are listed, but the overall study budget will predominantly depend on the study design.
It might not be obvious, but most questionnaires and assessment tools and scales are the intellectual property of others and you need to seek permission to use them.
So look online or check the validation publications to see if this is required and a fee may be involved. You need to obtain written permission and keep this document in your files.
All trial equipment provided to or used within health facilities will need to be approved, tagged, and checked, and will need a maintenance programme. And any computer programs or software will need to be installed by the IT administrators and be allowed through the institutional firewalls.
And so, we come to the third year: Site Start Up.
This is the year to finalise all of your documents, check everything, and start getting the sites ready for recruitment. This is usually done through a Site Initiation Visit or SIV.
Do they have the correct approved versions of the patient materials that they need? Is the study drug ready and available? Can your sites enter data? Do they understand the study and is there any additional training required?
This will usually happen over a period of time as it's unusual for all sites to have approval for recruitment at the same time as each other. But time spent now on processes will save time when things get busy, when patients are being recruited, data is coming in, and questions are being asked.
A filing system here means that documents get stored in a logical order. Ensure that the sites have a way of dealing with the paperwork and can file and retrieve documents easily.
They will need to establish an Investigator Site File or ISF, and perhaps this is the time to also start thinking about your next study.
So the next stage is a significant milestone moment. You are recruiting, and you are already a few years into the study. So how exciting.
The recruiting years can vary in time. We've allowed four years, but this can be much less. The years of recruitment are where it all happens. So does the protocol work and what do the data look like?
This period can vary. It may be short, but it might take years. If it's slow, there'll be problems with sites becoming disenchanted and demoralised. Staff will move on and the details may be overlooked.
Recruitment is hard work and the dedicated teams who recruit participants need some joy and positive feedback. So factor in how this can be done. And during this time, while you need to keep track of recruitment, there are also procedural issues that may crop up.
There may be new sites that require training. Consider how feedback on recruitment will be handled, how the study stays on track, and how deviations from study procedures are prevented. Your monitoring plans will be initiated here. Continue to monitor the consenting process, changes in referral patterns, and changes in staff. This is a time to continually monitor progress.
So just to let you know how challenging this time can be, this slide shows the recruitment number for six completed studies. Each study gives the referral numbers throughout recruitment and the number of those randomised. And the right-hand column shows the percentage of those referred who were then randomised.
And you can see that, while there is a little variation across the trials on the screen, the overall recruitment rate shown in the circled number is about one recruitment for every five referrals.
From this same table, you can see that Study M is in trouble with a recruitment rate of only 13%. We'll discuss this shortly, but recruitment is hard work.
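The arithmetic behind that table is simple but worth tracking continuously. A minimal sketch, with made-up referral and randomisation counts rather than the real study figures from the slide:

```python
def randomisation_rate(referred: int, randomised: int) -> float:
    """Fraction of referred patients who were actually randomised."""
    return randomised / referred

# Hypothetical counts for illustration only.
studies = {"Study A": (500, 100), "Study M": (400, 52)}
for name, (referred, randomised) in studies.items():
    rate = randomisation_rate(referred, randomised)
    print(f"{name}: {rate:.0%} of {referred} referrals randomised")
# A healthy rate is around 1 in 5 (20%); a rate like 13% signals trouble.
```

Recomputing this rate per site, per month, rather than only at the end, is what lets you spot a struggling study early.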
This period of time will also require you to monitor and report safety. What are the safety concerns with the study? And there always will be some. So how is this being reported and to whom? What's being communicated to the ethics committee and to the data safety monitoring committee that was established way back under trial governance?
The HREC, the TMC and the Data Safety Monitoring Committee will also require periodic monitoring of the adverse events. So there needs to be a way to collect and collate the events to enable efficient reporting.
Data management also means keeping track of the recruitment through progress KPIs. Is your recruitment rate as expected and can you tell if you're not on track?
So have a look at the graph for Study M that we mentioned a few slides ago. The blue line shows the expected recruitment for this study given the sample size required and the funding period. The pink line shows the actual recruitment based on data entry, the data quality, and the protocol defined completion numbers.
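The "expected" line on such a graph is often just the target sample size spread linearly over the recruitment period; comparing it with actual completions month by month flags a shortfall early. A hedged sketch of that comparison, with all numbers invented:

```python
def expected_by_month(sample_size: int, recruitment_months: int, month: int) -> float:
    """Linear recruitment target: cumulative participants expected by a given month."""
    return sample_size * min(month, recruitment_months) / recruitment_months

def on_track(sample_size: int, recruitment_months: int, month: int,
             actual: int, tolerance: float = 0.9) -> bool:
    """Flag whether actual recruitment is within tolerance of the target."""
    return actual >= tolerance * expected_by_month(sample_size, recruitment_months, month)

# Hypothetical study: 240 participants over 48 months, i.e. 5 per month.
print(expected_by_month(240, 48, 12))    # 60.0 expected by month 12
print(on_track(240, 48, 12, actual=38))  # False -- time to intervene
```

The 90% tolerance is an arbitrary illustrative threshold; a real monitoring plan would set its own trigger points.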
You can see that this study is in trouble. It is not on track to meet the sample size and the implications could be significant. So there are some possible strategies that can be implemented to increase recruitment and these could be to meet with the recruitment staff, to discuss the barriers.
You can engage consumers to give you feedback about perhaps some of the issues that participants may have. You can increase advertising for the study and you can add sites.
Other strategies could be to review eligibility criteria and/or reduce the target sample size, but this may have implications for your study's power. You need to be very cautious about making any changes that could leave you unable to test your hypothesis.
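To see why shrinking the sample size is risky, the textbook normal-approximation formula for a two-arm comparison shows how the required number per group scales with the detectable effect size. This sketch uses only the standard library and standard defaults (two-sided alpha of 0.05, power of 0.80); it is a general illustration, not any particular study's statistical analysis plan:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Normal-approximation sample size per arm for a two-sided two-sample comparison:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, rounded up."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))   # 63 per arm for a medium effect
print(n_per_group(0.25))  # 252 -- halving the detectable effect quadruples n
```

The inverse-square relationship is the key point: trimming the sample size does not trim your power a little, it can cost you the ability to detect the effect at all.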
There may be circumstances where you will need to amend or change a protocol, and it may be an attempt to improve recruitment, or it may be a safety issue, inconsistency with implementation, lack of clarity, or external factors such as medication safety warnings, for example.
Consider how the amendment may change your data, and can the worksheets accommodate those changes?
We work with online data entry, which also means making changes to a live database and this is always risky. What are the budgetary implications and will the protocol amendment itself delay recruitment?
Your budget included site payments that were agreed in the CTRA, including ending clauses. Your recruitment sites will invoice for payment per participant or per completion.
Depending on the model and time period of the study intervention, you may have staggered payments related to critical time points. There are various other payments, such as lead site and other annual payments.
Your payments and your data are absolutely interlinked. If there is no data, there should be no payment. You will need a study log to track each participant's journey through the study and link to payment time points. You need to keep good records.
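A study log linking each participant's milestones to payment points can be as simple as a table keyed by participant, paying only for milestones with data recorded. A minimal sketch, where the milestone names and dollar amounts are invented for illustration:

```python
# Hypothetical milestone payments agreed in a CTRA (names and amounts invented).
MILESTONE_PAYMENTS = {"randomised": 500, "treatment_complete": 750, "followup_complete": 250}

def payable(participant_log: dict) -> int:
    """Total owed to a site for one participant: pay only for milestones
    with supporting data recorded -- no data, no payment."""
    return sum(amount for milestone, amount in MILESTONE_PAYMENTS.items()
               if participant_log.get(milestone) == "data_entered")

log = {"randomised": "data_entered", "treatment_complete": "data_entered",
       "followup_complete": "pending"}   # follow-up data not yet in
print(payable(log))  # 1250 -- the follow-up payment is withheld until data arrive
```

Whatever form the log takes, the essential design choice is the same as above: a payment is released only when the corresponding data point exists and can be audited.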
Thinking about dissemination, who needs to know about your study and the results when they're available? All trial results, regardless of whether the outcome is positive or negative, should be subjected to peer review and many funding bodies now make this a requirement of funding. Peer review confirms the rigour of the process and conduct of the study to ensure the results are reliable and can stand up to scrutiny.
What dissemination techniques will you use? Presentations, publications, letters, reports? Are there key stakeholders you need to liaise with? Do you need to develop strategies to ensure your key findings are disseminated to key stakeholder groups? Do you need to liaise with pharmaceutical companies to ensure the results are relevant and available to enable submissions to the TGA and PBAC?
How do you make study findings available to the clinical sector and work to translate your results into clinical practice? How do you make your study findings publicly available in an efficient and timely manner? Data analysis can take months or even years.
And so, we come to the ninth year, two years to go. Reaching the sample size is often really hard to call and fraught with anxiety. You think you've got the numbers, but have you met with your statistician, your chief investigator, and protocol investigator team to review the protocol violations to see if participants will be removed from the study?
Closing too early seriously interrupts momentum. It's better to keep the momentum going and risk getting more patients than stopping prematurely, but every extra patient costs money. So leave the study open and work fast to answer your data questions, and then call it and break out the bubbles.
No, you need to hold the bubbles. It's not over yet. At this point, check all the data for completion and accuracy. Make use of your data checking procedures, audit trails, et cetera. So do you know what data changes have occurred and have you followed your standard operating procedures?
This is also when you close access to the database or database lock after which no changes are made. You need to then download your data set and this is likely to happen a number of times as you continually check your data. If you don't design your worksheets properly, this is where it will all come home to roost.
You also need to check your statistical analysis plan, your SAP against the protocol to ensure consistency. Complete the tables in the template to ensure that you have the required information.
And the complexity of the SAP can vary, but it does ensure that the analysis is decided before you unblind. This means that you are not mining for the results that you think you should get. You let the data tell the story.
As part of the analysis, you will need to unblind if this is a blinded study, revealing the allocation, and you need to be really confident of the randomisation schedules you developed in year two.
Do you know who got what, and can you trust the record? Start building up your demographic tables to describe your study population.
Your CONSORT diagram will show patient flow. It ensures that every single patient can be accounted for and it tracks every patient journey. This diagram takes a surprising amount of time.
Once you have your analysis, you can start writing your outcome papers. You need to consider how many publications will be written. You may write up your protocol, your main results, and quality of life and sub-studies separately, but you need to be careful of salami slicing, where the results of your study are subdivided into as many different papers as you can think of. It's not all about quantity of publications. Each needs to be substantial in its own right.
For each paper, consider which journal you are going to target and the authorship: who will be the lead author? The lead author takes responsibility for timeliness, drafting, version control, and submission.
And who else should be involved? What criteria will you use to determine authorship eligibility? Many journals, and the PaCCSC and CST SOPs, require authors to meet the International Committee of Medical Journal Editors (ICMJE) authorship criteria. Just being involved in patient recruitment alone does not necessarily give you authorship rights.
So we're up to year 10. How did we get here so quickly? The end is in sight. This last year is a long year and it feels a bit like mopping up, often done in your own time, but it's equally as important as the earlier years.
Final reporting is required for every committee, and there may have been multiple committees. The final reports need to be a comprehensive account of the study. You may need to use templates. Where possible include the study results, though this will depend on where you are in the publication process.
The clinical study report template alone is 32 pages long and it collects all of the information from the past 10 years of your work. The CSR is a significant body of work and you should allow yourself weeks or months to complete it.
The CSR can be populated from information collected during the study, such as study protocol, your registration and approval processes, the statistical analysis plan, the database users, the meetings and outcomes such as DSMC, protocol violation and monitoring; the allocations if it's a randomised trial; and descriptions of the adverse events, serious adverse events, and deaths.
It comes back to good record-keeping and filing, as you will need to retrieve these records, minutes, reports, tracking, et cetera, from the previous years.
The CSR should be comprehensive enough to allow other researchers to reproduce your study in every detail and for auditors to assess the integrity of the entire study.
It's time to go back and review the dissemination plan developed during years four to eight.
Update it accordingly and set about publicising your results; dissemination is a whole mini-project on its own. Now, you want to change practice.
Dissemination generally commences with presentations, then peer review publications and a circulation programme to get the results into practise and to get policymakers thinking.
So do we all pack up and do something else now?
You'll need to work through a programme of formally closing the study, and this will include final reports if required, including to the funding body; final drug accountability and destruction procedures; archiving of study materials; and you'll need to ensure that every site has completed the tasks required to close.
You'll need to determine where the study files get stored and who has access and for how long, and contingencies if key people leave. And you need to consider computing changes over time for your electronic files.
So that is 10 years of your life and a great contribution to clinical care and decision-making of treatment options.
We hope this map has given you an overview and understanding of the steps involved.
Ten years condensed into 40 minutes; I expect you're a bit overwhelmed. While that was not our objective, conducting multi-site clinical trials is very complex. But if you have a research question or an idea you would like to progress to a clinical trial, then we are here to help you.
Please don't hesitate to get in touch with us. Our job is to help you navigate the process.
If you would like to know more about PaCCSC or CST or become a member, please visit our website. Our website has information about our trials and other work and the resources, support, and networking opportunities available for PaCCSC and CST members. We have several avenues to support new study ideas, provide networking opportunities, and forge new collaborations.
If you have any other questions, please don't hesitate to get in touch through our website.
We'd like to thank the Ministry of Health for funding today's masterclass and we hope you found this to be an interesting and informative session. And we thank you for your time.
Resources mentioned in the video
- ICH Efficacy Guidelines [opens external site]
- VC Clinical Trials risks [opens external site]
- InFORMed Project [opens external site]
Clinical trial masterclass 2 - Support for your clinical trial concept
Masterclass 2 is presented by Dr Vanessa Yenson, Research Assistant-Writer. You’ll learn about the support available to you as a member of the Palliative Care Clinical Studies Collaborative or Cancer Symptom Trials.
Vanessa has extensive clinical trial knowledge and writing expertise. Her role is to work with investigators to prepare grant and ethics applications, clinical trial protocols and other clinical trial documents.
Fran Hyslop: Welcome everybody to the UTS IMPACCT Trials Coordination Centre Masterclasses funded by the New South Wales Ministry of Health, aimed to help you bring your clinical trial idea to life. Today we're going to talk about how we can support your new clinical trial concept. First, I'd like to acknowledge the Gadigal people of the Eora Nation upon whose ancestral lands our city campus now stands, and the traditional custodians of the land that Vanessa is presenting from today, the Cammeraygal people of the Eora Nation. I would like to acknowledge the traditional custodians of the various lands from which all our attendees join us today and pay my respects to their elders past and present, acknowledging them as traditional custodians of knowledge for this land, and I extend this respect to First Nations people attending today.
Fran Hyslop: My name is Fran Hyslop and my colleague Vanessa Yenson will be presenting today. Vanessa and I are from PaCCSC and CST at the University of Technology Sydney. PaCCSC is the Palliative Care Clinical Studies Collaborative and CST is the Cancer Symptom Trials Collaborative. Both PaCCSC and CST are part of UTS IMPACCT. The Trials Coordination Centre or ITCC works to coordinate PaCCSC and CST clinical trials. This is the second in a series of masterclasses from the UTS IMPACCT Trials Coordination Centre, the ITCC. In the first masterclass, we discussed the average clinical trial lifecycle, and today we're going to cover how we can support your new clinical trial concept. I'd like to hand over to Vanessa to introduce herself.
Vanessa Yenson: Hi, as Fran said, my name is Vanessa Yenson and I'm part of the ITCC team pictured here. I'm in the middle at the back. I provide research, writing and editing support to help convert new study ideas into clinical trials. I assist clinical researchers to develop trial protocols, generate literature reviews, and prepare grant applications and ethics submissions. Just a bit about my background: I have a PhD in immunology, which involved laboratory research, and I have also previously worked as a monitor for clinical trials. As a cancer survivor myself, I'm a member of the consumer group called ConViCTioN, which stands for Consumer Voices in Clinical Trials New South Wales, where I advocate for a strong consumer voice in the preparation, implementation and dissemination of all health and medical research, including clinical trials.
Vanessa Yenson: So let's get started. You might have come up with an idea to help people that you see in clinical practice to better manage their cancer symptoms or improve the quality of life for those with a life-limiting illness. You know you need to test your idea, but you're just not sure how to progress this into a clinical trial. So how can we support your new trial concept? As a member of CST or PaCCSC, you can flesh out your idea on a new study concept template, giving more detail around your research idea and how you might turn it into a clinical trial, and submit your new study concept for feedback.
Vanessa Yenson: The focus of your proposed idea and study will determine which collaborative you submit to. PaCCSC clinical trials contribute to improving quality of life for people living with a life-limiting illness. CST clinical trials address improving symptom management specifically related to cancer. Our scientific advisory committee will review your submission. Your new study concept will first go for an out-of-session review by two healthcare professionals, at least one consumer, and the Cancer Australia-supported national technical services for health economics (CREST) and for quality of life or patient-reported outcomes (CQUEST).
Vanessa Yenson: You will have the opportunity to partner with the other clinical trials groups that CST and PaCCSC are aligned with. You will receive reviewer feedback before being given the opportunity to present to the scientific advisory committee. The committee will then be able to ask you questions and provide further feedback to improve your clinical trial idea. Formal feedback and/or endorsement will be provided after the presentation. Once your study has been endorsed, it's time to start working on your protocol. A protocol is a comprehensive document that describes every detail of the clinical trial. It is the focal point for the whole study, detailing everything that needs to happen and how it is to happen.
Vanessa Yenson: Protocol development is a significant task and you may continue to work on this for a year or longer. Developing your protocol will be covered in depth in the third masterclass, but in short, we can provide you with a template that has sections to outline your introduction and aims, outcome measures, study design, consumer engagement, patient reported outcome measures and health economic considerations. Developing your protocol will also help you consider the different risks involved in your proposal and potential mitigation strategies. We can assist you with all aspects of your protocol, including study design and development, systematic and literature reviews, sample size calculations, and so on. When it's done, the detail of your protocol should be sufficient for someone else to pick up and implement the study.
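One of the tasks mentioned above, the sample size calculation, can be illustrated with a simplified sketch. This assumes a basic two-arm comparison of means with a standardised effect size; a trial statistician would confirm the right method and parameters for your study, and the function name here is purely illustrative.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    """Approximate participants per arm for a two-arm trial comparing
    means, given a standardised effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A moderate effect (d = 0.5) at 5% significance and 80% power
print(n_per_arm(0.5))  # 63 per arm, before allowing for dropout
```

Larger effect sizes need fewer participants, which is one reason a realistic effect estimate from your literature review matters so much.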
Vanessa Yenson: Once your study idea has been reviewed and endorsed and your protocol is on its way, we can help you with getting your project funded and into hospitals. Funding your research can be stressful. We can help you find suitable grants to apply for, such as those offered by the National Health and Medical Research Council or the NHMRC, the Medical Research Future Fund or the MRFF, seed funding and implementation grants, early career research fellowships and community and organisation grants. To do this, we actively search for suitable grants based on your study and where you are in your research career. Some grants are specifically aimed at EMCRs or early and mid-career researchers.
Vanessa Yenson: We have signed up for notifications about government, community, organisation and UTS grants and we'll notify you when a good fit for your study comes through. We can collaborate with you to write the grant proposal, including assistance with completing the application portal. To do this, we'll go through the grant application guidelines to make sure all eligibility requirements are met and work with you to complete the portal sections. This includes working with you on the grant proposal itself, project feasibility analysis, risk management plan and measures of success. There is often a short turnaround time on grant offerings, which is why it is a good idea to have your proposal already fleshed out in some detail.
Vanessa Yenson: We will liaise with the Research Administering Office or the RAO, and funding bodies if required, and coordinate the investigators, including consumers to finalise the grant. We will meet with the UTS research office who run compliance checks on the application, complete the university requirements for submission and coordinate and assist the investigators, including consumers with meetings and profile updates in the grant portals. The grant application and research project will always be yours, but ITCC offers writing, technical and administrative support. So again, you're not reinventing the wheel.
Vanessa Yenson: So you've applied for funding and you want to get started on the ethics submission because you've heard this can take a while. But how difficult can this part be? You've already got a protocol, so what else needs to be submitted for ethics review and approval? This is a recent list of documents that all needed to be submitted for ethics approval for a study that is now ready for its first patient. Along with your protocol, you will need to submit all patient-facing material for ethics approval, including patient information sheets, consent forms, patient cards, advertising posters, leaflets and patient diaries, as well as the investigator's brochure and all the assessment tools that you're going to use, radiation assessments and other documents that are specific to your study, so there can be quite a lot.
Vanessa Yenson: You need to keep in mind that after the lead HREC has approved your master documents in Australia, each site will then have their own governance submission processes. So being on top of this first application and all the required documents is very important. We can help you with your initial ethics submissions, as our staff have intricate knowledge of your protocol, especially if we have collaborated with you to write it, and experience with many New South Wales HRECs. We can help finalise the documents for submission, including master documents, assessment tools and other examples that were on the previous slide. We can collaborate with you to complete the online HREC application.
Vanessa Yenson: HRECs often come back to you with questions or requests for further clarification and we can liaise with you to answer any questions raised from the HREC review. We can submit your study for UTS HREC ratification, which is a requirement for any study that goes through ITCC. You are the coordinating principal investigator, so the finished application and response to questions will always be your responsibility, but we can assist in making sure the application and responses are as complete and as comprehensive as possible.
Vanessa Yenson: Once ethics and all of your documents have been approved by your lead HREC and ratified by UTS HREC, we can assist with the subsequent approvals and requirements, which might include site-specific approvals or SSAs for each individual study site, indemnity and clinical trial insurance, clinical trial notification through the TGA, which is known as the CTN or CTX, clinical trial research agreements, site feasibility and confirmation, and assessment tool licensing.
Vanessa Yenson: Obtaining funding and ethics approval is a moment for congratulations. It's a huge milestone and now you're ready to roll. But how does it actually happen? ITCC can provide central coordination for your multi-site trials as well as site-specific support, from study initiation through to closeout, including safety and data monitoring. We work with your investigator team to operationalise trials and work with sites on the development and improvement of your participant recruitment strategies. We can help with trial registration, standard operating procedures, guidance documents and templates.
Vanessa Yenson: There's still a bit more preparation that is needed before you can recruit any participants. ITCC can help you plan the data management for your trial, including the randomisation schedule, electronic case report forms or eCRFs for direct data entry into REDCap, data collection worksheets, data management plans, monitoring and statistical analysis plans, and we can provide manuals for the coordinating principal investigator and the principal investigator, and for e-consent, and we can help you develop the site investigator and pharmacy manuals.
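To make the randomisation schedule mentioned above a little more concrete, here is a minimal, hypothetical sketch of permuted-block randomisation, one common way such schedules are generated. In a real trial you would use a validated randomisation service rather than code like this; the function and parameter names are illustrative only.

```python
import random

def block_randomisation(n_participants, block_size=4, arms=("A", "B"), seed=2024):
    """Generate a simple permuted-block allocation schedule.

    Each block contains equal numbers of each arm, shuffled, so group
    sizes stay balanced throughout recruitment."""
    rng = random.Random(seed)          # fixed seed for a reproducible schedule
    per_arm = block_size // len(arms)  # allocations per arm within each block
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * per_arm
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_participants]

print(block_randomisation(8))
```

Permuted blocks keep the arms balanced at every point in recruitment, which matters if a trial is stopped early or recruitment is slower than planned.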
Vanessa Yenson: During the trial, we can help with site and research staff training, recruitment and advertising, trial randomisation services, database access including the help desk, trial and pharmacy monitoring, adverse event and serious adverse event reporting requirements, medical monitor oversight, data and safety monitoring committee support. We can organise stock ordering of the investigational product and help with the budget finalisation, study payments and tracking.
Vanessa Yenson: Once your trial has recruited all the participants and collected all the data, what then? Did it work? How can you know? After data collection, we can help you with statistical and health economic analysis, report writing, sponsor and institutional reporting requirements, and result dissemination, including the preparation of manuscripts and papers, conference presentations and reports. If your trial has shown that your intervention improves outcomes for your patients, you want to get the results out there to change practice.
Vanessa Yenson: The ITCC team is a group of solution-focused professionals with diverse qualifications and combined expertise in clinical trial coordination and in progressing new medications, devices and health service reforms. We offer a full project management service from concept to startup, conduct and completion. So that is just a quick overview of what PaCCSC and CST membership and the ITCC can do for you.
Fran Hyslop: If you would like to know more about becoming a PaCCSC or CST member, please visit our website. Our website has information about our trials and other work, and the resources, support and networking opportunities for PaCCSC and CST members, where you can see all the different things that we do as part of PaCCSC and CST. We have several avenues to support new study ideas, provide networking opportunities and forge new collaborations, so please don't hesitate to get in touch. I'm sure you'll all join me in thanking Vanessa for her presentation today. Our thanks too to the New South Wales Ministry of Health for funding these masterclasses. We thank you all for your time. We hope you have a pleasant afternoon. I'll end the meeting now. Thank you.
Resources mentioned in the video
- Critical Appraisal Skills Programme [opens external site]
- CASP Randomised Controlled Trial Checklists [opens external site]
- JBI Critical Appraisal Tools [opens external site]
- Centre for Evidence-Based Medicine Critical Appraisal Tools [opens external site]
- BMJ Critical Appraisal Tools [opens external site]
- AMSTAR 2 Checklist [opens external site]
Clinical trial masterclass 3A - Developing your protocol
Masterclass 3 is presented in two parts by Belinda Fazekas and Dr Charmaine Strauss.
In Part A, Belinda and Charmain lead a deep dive into the process of developing your clinical trial protocol, which is the bedrock of your clinical trial, and will inform everything else you will do to roll out your study.
Good afternoon everyone. Welcome to the UTS IMPACCT Trials Coordination Centre Masterclasses funded by the New South Wales Ministry of Health, aimed to help you bring your clinical trial idea to life. First, I'd like to acknowledge the Gadigal people of the Eora Nation, upon whose ancestral lands our city campus now stands. I would like to pay respect to the Elders, both past and present, acknowledging them as the traditional custodians of knowledge for this land.
I would like to acknowledge the traditional custodians of the various lands from which all our attendees joined today and to pay respects to those Elders past and present. And I extend this respect to First Nations people attending today. My name is Fran Hyslop, and my colleagues Belinda Fazekas and Charmaine Strauss will be presenting today. Belinda, Charmaine and I are from PaCCSC and CST at the University of Technology Sydney. PaCCSC is the Palliative Care Clinical Studies Collaborative, and CST is the Cancer Symptom Trials Collaborative.
PaCCSC and CST are member-based collaboratives that conduct investigator-led clinical trials. Both PaCCSC and CST are part of UTS IMPACCT. The IMPACCT Trials Coordination Centre, or ITCC, works to coordinate PaCCSC and CST clinical trials. This is the third in a series of masterclasses from the UTS IMPACCT Trials Coordination Centre aimed to help you bring your clinical trial idea to life. This masterclass is in two parts. Today we will cover developing your clinical trial protocol, and part B will be held later this week and cover tips, tricks, and pitfalls. I'd like to hand over to Belinda and Charmaine to introduce themselves.
Hi everyone. My name is Belinda Fazekas, and I lead the ITCC team. I've worked in the area of clinical trials within palliative care and cancer symptom management now for over 15 years. I'm involved in all aspects of clinical trials. We hope this will be an interactive session, and please feel free to seek clarification as we move through the development of a clinical trial protocol.
Hi everyone. My name's Charmaine Strauss and I'm a project officer with the ITCC. I have over nine years' experience in clinical research covering all aspects of clinical trials in palliative care and cancer symptom management, supporting Belinda and the ITCC.
Let's dive straight in. If you've come up with a research idea that you think will help your patients better manage their cancer symptoms or improve the quality of life for people living with a life-limiting illness, and you want to test this idea, then your next step is to develop this into a protocol. On the screen is an overview of the entire clinical trial life cycle, which we covered in detail in our first masterclass. This flow diagram shows all of the steps from start to finish and the basic requirements for a typical multi-site clinical trial.
As you can see, the life cycle starts with an idea and continues until publication and beyond. The second step in the life cycle is where you turn your idea into a clinical trial protocol. Protocol development is the focus of today's session. The protocol is the focal point for the whole study. It needs to describe every detail of the study and how it is to be conducted. As you begin this process, you will clarify what other expertise you need and what other team members you need to invite to join you. It's helpful to consider what each individual will bring to the study.
The ITCC has developed a comprehensive clinical trial protocol template, which will help you develop your clinical trial idea. This template is available to PaCCSC and CST members to help you write every section of your protocol. We'll go through the sections of the template, and we'll look at a case study. We hope there'll be a chance for you to contribute to the discussion so we can all learn from one another. The ITCC template covers the introduction, a description of the problem and the importance of the trial, a description of the methods and exactly how the trial will be conducted, ethical considerations, and how the results are going to be disseminated. The ITCC template has up-to-date information from guidance bodies around good clinical practice and quality clinical trials, including patient-reported outcomes and pathology considerations. It provides links and references to associated guidelines and suggested wording for commonly used assessment tools.
I also direct you to the ICH GCP E6 Good Clinical Practice guideline, which has a whole section outlining the requirements of a protocol. These are the guidelines established by the International Council for Harmonisation, known as ICH, to ensure that clinical trials are conducted in accordance with regulations and conventions. If you are involved in clinical trials of any intervention, I strongly recommend you undertake training in GCP. It can be done online, and there are free courses available that will take you a few hours to complete. The ITCC template is based on the ICH GCP requirements. Using this template will ensure you have included every contingency. The more comprehensive your protocol, the fewer problems you'll experience later.
As a starting point, I cannot stress enough the importance of having control of the many documents generated throughout the clinical trial, starting with the protocol. Each document needs to have a form of versioning: perhaps a simple name and number such as version 2.3, a date, and other information which can distinguish one document from a previous version. This should be applied to every document associated with a protocol. The protocol begins with a detailed introduction to the trial. What are we trying to find out or achieve? Why and how will it be done? Firstly, we have the background. What is already known about your research topic? So you've had this great idea, but what has already been done in the field? Your background will need to include a review of the literature, looking to answer questions such as: what is the problem? What has been done to date? What is the gap, and what is the significance of filling this gap?
You would also discuss the standard of care for the therapeutic area or indication. You need to justify why the specific intervention is being proposed and summarise the known potential risks and benefits. There'll be more on the risks in the risk register later in the presentation. You'll need a clear statement for your hypothesis. What do you hypothesise will happen as a result of your intervention? What answers do you anticipate for your research questions? Your hypothesis will be a statement that your proposed treatment or treatments will do something better or worse than something else. The study needs to prove or disprove the hypothesis. Next, you need to consider your objectives. Essentially, objectives are how you are going to carry out the aim of your study. What will you measure to determine your aim?
If your research question requires Patient-Reported Outcomes, known as PROs, to be measured, remember to state any specific PRO objectives as well as any that will be used to evaluate the intervention. The last part of this introductory section, before we look at some examples with an opportunity for you to contribute to the discussion, is around the study design. Your study design needs to be appropriate to answer the scientific question. There are many different types of studies, such as crossover, parallel, randomised. Studies can also be qualitative, quantitative, or mixed. Here at the ITCC, we focus on Randomised Controlled Trials or RCTs, small pilot studies for proof of concept, as well as sub-studies embedded within a current study that add value to the suite of currently running RCTs. It's important to consider your study outcomes and specific endpoints when designing your clinical trial.
Endpoints and study outcomes will be discussed in more detail later in this presentation. Another important thing to consider at this point is whether a substudy would enhance your research. Now let's take a look at an example of a study diagram. It should be clear and simple and enable the reader to identify the study design and the main time points, and gain an overall impression of the study. This example shows a simple parallel-arm Randomised Controlled Trial of an intervention versus placebo over 12 weeks with a follow-up period. It's important to also consider the specific risks associated with the trial itself, including the mitigation strategies that must be considered in the trial design. This process occurs in parallel with protocol development. As you work through each section of the protocol, consider all the risks pertinent to each section and build your repository or risk register in an Excel file, ensuring that these six points on the slide are covered.
So one, you would identify the risk. Two, analyse the probability and impact of the risk. Three, rank the level of the risk. Four, list your mitigation strategies. Five, monitor the frequency and type. And six, act and respond. Once completed, the risk register can be attached to your protocol as an appendix. Risks form a big part of the application for ethical approval, so addressing this clearly in your protocol will get you on your way to getting your study approved and recruiting. Masterclass three, part B will go through risks in more detail.
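As an illustration only, the six points above could be captured in a structure like the following. The entries are hypothetical; in practice this would live in your Excel risk register, with one row per risk.

```python
# Illustrative risk register entry following the six steps:
# identify, analyse, rank, mitigate, monitor, act.
risk_register = [
    {
        "risk": "Slow participant recruitment",            # 1. identify the risk
        "probability": "Likely", "impact": "High",         # 2. analyse probability and impact
        "rank": 1,                                         # 3. rank the level of the risk
        "mitigation": "Open additional sites; review eligibility criteria",  # 4. mitigation
        "monitoring": "Monthly recruitment report",        # 5. monitoring frequency and type
        "response": "Escalate to the trial management committee",  # 6. act and respond
    },
]

# Review the highest-priority risks first
for entry in sorted(risk_register, key=lambda r: r["rank"]):
    print(entry["risk"], "->", entry["mitigation"])
```

However you store it, the point is that every risk carries its analysis, mitigation, and monitoring plan with it, so the register can be attached to the protocol as a self-contained appendix.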
Now, let's have a look at an example. We're conscious of our own and others' intellectual property, so we'll use a lighthearted example for our case study, and we hope you'll feel confident to contribute. So we have a scenario where you're attending a work social picnic event, and you'll be packing ham and tomato sandwiches to bring with you to share with your colleagues. This activity leads you to a simple question: which wrapper is best to use to ensure the sandwiches remain fresh? You have cling wrap and paper available on hand, so your simple question can lead you to think about an actual study to test this.
First, what's the background and what's already known? Cling wrap adheres to itself and forms a tight seal to keep sandwiches moist. It isolates food and limits cross-contamination, but it's hard to open. It is good for food hygiene but bad environmentally. Paper, on the other hand, can be fiddly to wrap as it's not adhesive. It's easier to open as it can be torn. It's more economical, and it has lower environmental impact. There are many options, but what's a potential hypothesis? For our case study, we postulate that cling wrap is more efficient than paper at maintaining the freshness, flavour, and moisture of a ham and tomato sandwich.
The next item to include in your introduction relates to the objective of your study, and you need to go back to your research question. In this instance, it's to conduct a study to determine which type of sandwich wrapping is best. Your objective is how you are going to do it and what you will measure to determine your aim. For our case study, we've defined the objective as, "Comparing the efficiency of cling and paper wrapping in maintaining the freshness of ham and tomato sandwiches." Now we need to consider what study design would be appropriate to answer this question.
There are many types of designs, such as pilot feasibility, RCT, adaptive, among others. For the purposes of this sandwich wrapper case study, let us proceed with a Phase II RCT, which is usually a smaller pilot or feasibility study. Once you know all of the above, you can draft your study name, being aware that it's early days and things can change. The name of the study will include important information such as phase, study design, population, intervention, and, if applicable, a trial acronym. So let's call our study a Phase II, multi-centre, parallel-arm randomised study of cling versus paper wrapping in maintaining sandwich freshness.
So now we have drafted the main introduction for our sandwich study. We can get onto the next section of the clinical trial protocol template, which is the methods. It's easily the largest section and forms the bulk of your protocol. Our template divides it into three subsections centred around participants, data, and monitoring. The first subsection of your methods focuses on your participants. As you can see, there are many subheadings in this section. Each one will need to be addressed in detail.
The methods section begins with a description of the study setting. This is where your study will be conducted. You'll need to specify the number, location, and type of sites where participants will be recruited. Consider the clinical areas where referrals will be sourced from. For international trials, include a list of participating countries. The eligibility criteria include a description of the specific population needed for the trial to evaluate the intended question. These are the participants you want to include in your trial and those who should not be included in your trial.
You'll need to consider the commonly accepted criteria for diagnosing and evaluating patients with the disease under study and the comorbid conditions that are exclusionary. Look at any participant populations that need to be excluded for safety reasons and whether excluding a specific population has the potential to affect the integrity of the trial results. Ensure exclusions on the basis of language, particularly on the basis of English only, are fully justified. Remember to seek input on the eligibility criteria from a variety of protocol investigators, including consumers and patient advocacy groups. We will be going into more detail around the inclusion and exclusion criteria in the part B masterclass.
Your intervention will need to be described with sufficient detail to allow for replication, including how and when the intervention will be administered. You need to describe the medication or intervention presentation, the dosing or schedule, the method of administration and the schedule for dispensing and returning of the medication. Also include the criteria for discontinuing or modifying the intervention for a given trial participant. This will apply for non-pharmacological interventions also. A diagram may be useful if the intervention is complicated. The intervention section also needs to include the manufacturing process, the blinding and randomisation procedures, presentation and packaging.
Specifically, consider if the study involves a placebo, how easily it can be manufactured to look, feel, taste the same, et cetera as the intervention. If the intervention or the placebo needs to be manufactured, who will do that manufacturing, and do they have the appropriate licence to manufacture in the context of clinical trials? The treatment of participants section is a major part of the protocol. How are patients or participants going to be treated and kept safe while on study? As mentioned previously, it's important to explain how any risks associated with the intervention will be mitigated. This section would typically be more relevant for drug or device trials and would include a description of any rescue medications, the dosage and the circumstances for the administration during the treatment period.
For example, you may wish to determine the assessment and treatment of participants if they develop nausea or vomiting. You'll need to outline a nausea treatment protocol, such as the allowed medication and dosage, and whether this will result in discontinuation or adjustment of the intervention. Pictured is an example of the management of the intervention if nausea develops. Another example could be if your intervention is a questionnaire or a survey about a particularly sensitive topic. You may anticipate that some participants may be triggered by some of the questions in the survey, which could cause them psychological distress. This section of the protocol would be where you would address how this will be handled in your study.
In this section of the protocol, you want to describe the primary, secondary and other outcomes. The outcomes selected for evaluation must address the trial objectives. You'll need to describe the specific variable that will be measured, such as systolic blood pressure, what will be analysed, looking, for example, at a change from baseline, the method of aggregation, and the time point for each outcome. Once you have defined your outcomes, you then need to determine your endpoints. Endpoints are the specific measures of these outcomes.
To be valid, an endpoint should capture the outcome of interest accurately, precisely, and consistently with repeated measurements. Things to consider when determining the study endpoints are: do they align with the scientific question and objectives of the study? Are there standardised and generally accepted definitions and methods to determine the endpoints? With regards to your primary endpoint specifically, how is the endpoint defined? Is the endpoint objective, such as pregnancy or death, or subjective, such as a pain score? Is it accessible for all participants? How and by whom will the endpoints be ascertained?
Will it be an investigator or will it be determined centrally by a third party uninvolved in the study? Your interventions, treatment, and outcomes can be visualised on your study diagram. There are many types of study diagrams. This should be a snapshot of the study period and the main assessment points showing the main activity at those times. Your timeline would include the schedule of enrollment, interventions, assessments, and visits for participants. Both the study visit diagram and the timeline are aimed at clarifying the pertinent visits during the trial. This removes ambiguity. One format is pictured here, which combines both.
The methods section also requires detailed information about informed consent, and obtaining informed consent from study participants is imperative. No study procedures should be performed prior to the participants being fully informed about the key facts of the clinical trial and confirming their decision to participate. Your protocol section should describe the circumstances of the consent process, remembering that it is not static but a two-way process. You need to consider questions such as: how does the consent process, as opposed to the consent document, fit within the study processes? What are the key elements of the informed consent process for that particular study?
Consider how long it will take, how much time you will give participants to consider consent, who will obtain the consent, and how. If the discussion is not face-to-face, then describe how this will be done and consider the security of any systems used and ensure that you are able to substantiate that process. This includes how the informed consent will be documented. It's important also to consider the target population of the study when determining the consent process. And, of course, the approved information sheet and consent form is to be used at all times to ensure that the process complies with the national and international requirements.
Informed consent is very clearly spelled out in ICH GCP, where there are three full pages of detailed guidelines, and also within the Australian National Health and Medical Research Council National Statement. Both of these documents are non-negotiable, and their conditions must be met. For most trials within Australia, there are standard and detailed templates to use which will ensure that the above requirements are met. Let's come back to the risk register you are building alongside the protocol. What risks would you consider given the type of consent you might collect and the method of consent, and how might these risks be mitigated? Consider your study population. Is there a potential for inclusion of vulnerable trial participants or people who have accessibility difficulties due to geography, vision, or cognitive impairment, et cetera?
We've got a Mentimeter on the next slide, so please grab hold of your phone again. Let us know what you think might be risks related to consent. You can choose as many as you think might apply, and I'll just give it a minute or two for people to refresh and enter in what they think. That's great. Thank you, everyone, for contributing. As you can see, these are just a few of the risks related to consent to consider, and there are many more. Remember that the protocol is a document that not only fully describes how the study is going to be conducted by others as well as you, but will also be a record to demonstrate that the study was well planned and conducted in accordance with GCP and with respect to the principles of research as outlined within the NHMRC National Statement.
So now let's revisit our case study and consider what the methods section for the sandwich wrapper trial might look like, remembering that our hypothesis is that cling wrap is more efficient than paper at maintaining the freshness, flavour, and moisture of a ham and tomato sandwich. So first, what would be an appropriate setting to recruit our study participants? It's a multi-centre study, so the number of participating sites would be dependent on your budget and your sample size. We propose three sites within the one state so that there is access to the same brand of materials. The setting might be schools, workplaces, or public spaces like a shopping centre or a cafe. But for simplicity, we suggest that the setting be within the clinical research lab at the included sites. All sites will have a dedicated trial space where participants can relax and be comfortable during their participation.
So now let us think about the participants who should be included in our RCT. What factors do you think should be included in the eligibility criteria? As you can see, there are many factors that you can consider, and it all comes back to your research question and your objectives. You want to ensure that the risk of your intervention is minimised for the participants whilst ensuring that there's a representative population enrolled in the trial, which will generate meaningful results. That is, there'll be no bias, and you'll achieve a heterogeneous population. Things to consider with respect to the age of participants are the implications for consent of including children. Similarly, it's no longer acceptable to exclude participants based on language ability. There should be provisions made to enable participants from culturally and linguistically diverse populations to participate in research, such as using interpreters and providing translated study documents.
So these are some of the criteria we've selected for our case study. We're looking to enrol participants who are adults, so 18 or over, and who are able to swallow; participants who do not have any allergies or intolerances to the sandwich ingredients, which means excluding those who have celiac disease; participants who do not have any dietary restrictions due to lifestyle, religion, or culture, because they may not be able to eat the ham; as well as participants who do not have any taste or smell disorders. Keeping this in mind, this leaves the potential to include a substudy of people with celiac disease to see if there are any differences between cling and paper wrap on ham and tomato sandwiches that are prepared with gluten-free bread.
So now that we've determined our inclusion and exclusion criteria, it's time to describe our study intervention. How will it be presented, administered, and when? So key considerations here are to ensure that we have adequate controls in place so that any differences between our two groups arise due to our intervention being the wrapper and not as a result of other external factors. In this case, we want to make sure that our sandwich ingredients and the testing environment are consistent and controlled.
So our sandwich will consist of two slices of wholemeal bread, one thin spread of margarine of about a teaspoon, two slices of ham of 55 grammes, and four slices of truss tomatoes. The same brand of bread, margarine, and ham, and variety of tomato will be used, ensuring that these are supermarket bought. Arm one will be cling wrap: the sandwich is to be tightly wrapped in a 20 x 20 centimetre square of cling wrap. Arm two will be paper wrap: sandwiches will be tightly wrapped in a 20 x 20 centimetre square of paper. Wrappers will be sourced from the same brand. The participants will be randomly allocated to one of the two arms without stratification using a web-based randomisation system, which will need to be specified.
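The web-based randomisation system itself would be specified in the protocol, but the underlying idea can be sketched. Below is a minimal, hypothetical Python sketch of 1:1 allocation using randomly permuted blocks, which keeps the two arms balanced as participants enrol; a real trial would use a validated system with concealed allocation, not ad-hoc code like this.

```python
import random

def permuted_block_allocation(n_participants, block_size=4, seed=None):
    """Build a 1:1 allocation sequence ("cling" vs "paper") from randomly
    permuted blocks so arm sizes stay balanced throughout recruitment."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["cling", "paper"] * (block_size // 2)
        rng.shuffle(block)  # randomise the order within each block
        sequence.extend(block)
    return sequence[:n_participants]

allocations = permuted_block_allocation(100, seed=2024)
print(allocations.count("cling"), allocations.count("paper"))  # 50 50
```

Because 100 is a multiple of the block size, the two arms end up exactly balanced at 50 each.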
It's also important to ensure that the preparation of our intervention is controlled. Who will prepare and how? The sandwiches are to be prepared and wrapped by the delegated project officer four hours plus or minus 10 minutes prior to the expected consumption time. And they will then be immediately stored in the fridge at a temperature between four to eight degrees Celsius, and there'll be a log for the fridge to ensure it remains within the temperature range. A standard operating procedure will detail the sandwich preparation to ensure the same conditions are in place for each sandwich.
There'll also need to be a training session, possibly with follow-up sessions every few months, as well as preparation sessions. Prior to wrapping the sandwich, a small sample will be taken to measure the moisture content. There will also be a standard operating procedure to detail this process. Participants will be provided with the allocated wrapped sandwich three hours after their last meal. It's important to keep the time since the last meal consistent to ensure the baseline hunger level for all participants is the same.
So the sandwiches must be taken out of the fridge and placed at room temperature 30 minutes plus or minus five minutes prior to the provision to the participants. Participants will unwrap the sandwich, and a repeat sample will be extracted to measure the moisture content. Participants will then eat their sandwich over a 15-minute period, and at the end of this time, any wrappers and uneaten sandwiches are to be returned.
Although this fictional study is not a drug or device trial, we could consider including, in the treatment of participants section, details related to participants who exhibit allergy symptoms after consuming the sandwich. We could consider various symptom clusters, ranging from a localised rash or itchy mouth to more severe symptoms, such as breathlessness and/or respiratory distress. For the latter, the differential diagnosis is acute anaphylaxis or airway obstruction from other causes, and the management plan would entail immediate cessation of the intervention and provision of urgent medical attention to the airway, breathing, and circulation.
Probably one of the key considerations in developing your clinical trial is ensuring that you have robust and clearly defined outcomes and endpoints, as these will directly affect your method. For the purposes of this case study, we've defined the following outcomes. The primary outcome will be the difference in participant-reported freshness score for sandwiches stored in cling wrap compared to paper. Our primary endpoint will be the participant-reported sandwich flavour and moisture score 10 minutes after consuming the sandwich.
Our secondary outcomes will be looking at the change in moisture content through refractometer testing between baseline and sandwich consumption. How easy was it to open the wrapper? Did the wrapper cause the sandwich to go soggy? Did the type of wrapper affect the structural integrity of the sandwich as it was consumed? That is, as you unwrapped it, did the sandwich fall apart? Did the amount of sandwich left over, if any, correlate with a decrease in freshness score, or was it a result of the participant feeling full? So now that we've detailed our intervention and our outcomes for this case study, we need to visualise these on our Study Visit Diagram. Here's what it looks like for our sandwich wrapper study.
So for our sandwich wrapper trial, the informed consent section will need to specify that the consent form will be posted out one week prior to the screening appointment. Written consent will be obtained through a signed and dated Participant Information and Consent Form, or PICF. Consent will be obtained prior to any trial activities occurring and prior to the collection of study assessments. Informed consent will be obtained by the PI, or principal investigator, or the delegated project officer. The signed PICF will be copied, and the copy will be provided to the participant. The original signed form will be filed in the study file. The consent process, including if the participant withdraws their consent, will be documented in full in the participant study notes.
Now let's return to the methods section of the protocol template, which also includes a subsection for the participant timeline. This section will describe what the study will involve for participants, and it may be helpful to add this as a diagram. It may include what visits are required, how long they will take, what forms will need to be completed, and at what points the intervention will be administered. The next section requires the protocol description for sample size. Sample size refers to the estimated number of participants needed to achieve the study objectives.
You'll need the expertise of a statistician to determine your sample size, to ensure the power calculation is aligned with the primary outcome, and for the statistical analysis. Now that you've determined your sample size, how will you recruit the required participants? The recruitment plan section of the protocol will describe the enrolments needed by the sites, and overall, to complete the study. It should include information about the sites and clinical areas, how participants are going to know about the study, the referral mechanisms, and any associated recruitment materials.
A recruitment plan will include the advertising forums, such as social media, clinic rooms, referral forms, posters, advocacy groups, and support networks. You will need to describe each of these and provide a copy of the advertising materials to the Ethics Committee for approval. The plan will also include how you're going to deal with competing studies at any of the planned sites. You will also want to consider and perhaps outline the risks with recruitment and your strategies for dealing with these.
So now, let's go back to our case study to complete the last subsections of the participant section of the methods, remembering that our hypothesis is that cling wrap is more efficient than paper at maintaining the freshness, flavour, and moisture of a ham and tomato sandwich. Shown here is what the visual representation of our participant timeline would look like for the case study. And next, our sample size.
So our statistician has determined that in order to meet the primary outcome of participant-reported freshness score, our sample size should be 100 with 50 participants per arm, and our recruitment plan will involve advertising on social media. We'll need to submit the text and the forum to the Ethics Committee for approval. We'll also advertise in social venues like local workplaces and schools, maybe using posters in the staff room or through the parent newsletters. So now going back to our protocol template, the second subsection of the methods will focus on the data.
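The power calculation behind a figure like 50 per arm is the statistician's job, but the standard normal-approximation formula for comparing two means can be sketched. This is a hedged illustration only: the effect size passed in below is a made-up assumption, not a value from the case study.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm comparison of means
    (two-sided test), before inflating for expected dropout."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = nd.inv_cdf(power)           # about 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_arm(0.5))  # 63 per arm for a "medium" standardised effect
```

Note how sensitive the answer is to the assumed effect size, which is exactly why the statistician, the primary outcome, and the power calculation must be aligned.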
This focuses on how the relevant data for your study will be collected and managed. This is where you will detail all of the assessments that will be performed. As part of this section, you should build up a table of study measures, which details each data collection in tabular form, and you can see an example on your screen. This is one of the most important tables in your protocol, but it's very easy to do badly. It should be possible to conduct your trial simply by looking at this table.
Each measure should also be detailed in the next subsection, study assessments, providing a full description including references, the justification for using the tool or instrument, an outline of the mode of administration, the time points for collection, and the anticipated burden on the participant. And you'll need to identify the collection method, whether it be by the participant, the researcher, or an external provider.
An example on your screen would be a quality of life assessment. You would need to give a full description of that instrument and justify why it was selected. You'll need to summarise the purpose of the instrument, the number of questions, who completes it, how long it takes, and when it is to be completed. Also, include a copy of the instrument as an appendix or attachment to the protocol. You would expect one paragraph per assessment instrument.
Items to consider in the section for data management are listed on the slide. Guidance is provided within the protocol template. Each subsection needs to be detailed in full. An example of the source data is shown on the screen. This will be a full description of the data management for the trial. If data is not managed correctly, the study results may be in jeopardy. You'll also need to describe the quality control procedures, such as training and ongoing monitoring, collection of samples, pharmacy procedures, and the trial monitoring plan.
You'll certainly need the support of a statistician on your team. There is no point undertaking a trial if you don't know how many participants you need, or what and how many data will be required to answer your research question. This section needs to be a full description of the methods to be used, the hypothesis to be tested, and how the analysis of the various endpoints will be presented, including the primary and secondary endpoints, any analysis of the safety and toxicity data, any analysis of other efficacy outcomes, and also health economics.
Further description will be required about the handling of missing data. For larger studies, there may be an associated statistical analysis plan. This section, the monitoring section, will need to describe the way your study will be monitored. What oversight will there be to ensure safety of the participants and the safety and integrity of the data? Detail will be required to describe the potential harm to the participant, such as adverse events and how these will be defined, identified, assessed, recorded, and reported. The procedures for trial monitoring need to be described, including any committee involvement that may be required.
So let's return to our case study and the data collection section for our methods. The first step is to put together the table of study measures. For the sandwich wrapper trial, the table would look like this. As you can see, all of the measures are listed, and they've been separated into groups based on who will be completing them. There are also windows for the visit time points and footnotes to provide additional detail regarding specific measures. To collect our study data, there will be two main forms of study assessments: Patient Reported Outcomes, or PROs, and laboratory testing. Participants will be provided with four different PRO questionnaires to complete 10 minutes after consuming their sandwich. They will be asked to rate the flavour, moisture level, ease of opening the wrapper, and the ability of the wrapper to maintain structural integrity. The types of questionnaires selected directly reflect the outcomes for our sandwich wrapper trial.
Shown here is an example of the PRO instrument for moisture level of the sandwich consumed, rated by the participant using a five-point Likert scale. In addition, laboratory testing of the moisture content will use a refractometer, as you can see on the screen, to analyse a small hole-punch-sized sample of the sandwich, and it will be performed at two time points: at baseline, immediately after preparation and prior to wrapping and refrigeration, and then after 30 minutes at room temperature, immediately after unwrapping and preceding consumption by the participant. For data management, data will be collected using paper forms completed by the project officer and the participant. A delegated lab person will collect a sample and perform the moisture measurement using the refractometer. There will be an SOP for sample preparation and refractometer operation.
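The secondary outcome of change in moisture content then comes from pairing the two refractometer readings per sandwich. As a minimal sketch, where the readings and the function name are hypothetical, not study data:

```python
def moisture_change(baseline_pct, post_pct):
    """Change in refractometer moisture reading from baseline, in
    percentage points; negative values indicate moisture loss."""
    return round(post_pct - baseline_pct, 2)

# hypothetical paired readings (baseline, pre-consumption) for three sandwiches
readings = [(38.0, 35.5), (40.2, 39.8), (37.5, 33.1)]
changes = [moisture_change(baseline, post) for baseline, post in readings]
print(changes)  # [-2.5, -0.4, -4.4]
```

In the real trial these change scores would be derived within REDCap or the statistical analysis, not by ad-hoc scripting, but the pairing of time points is the same.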
A photograph of the refractometer reading will be taken and timestamped, and this will constitute the source data from the refractometer. Data will be entered into an electronic data capture system called REDCap. For monitoring, staff will be specifically trained on study procedures, including the preparation of sandwiches, refractometer operation, data collection, consent, and delegation, et cetera. This will take place at site initiation and will be recorded in training logs. Service logs of equipment will be maintained and filed. There will be checking that the type of wrap is compliant with the protocol requirements, including a recording of measurement during preparation, as well as ongoing monitoring of data entry and errors.
Okay, so now, back to our protocol template. The next section is ethics. This section of the protocol describes the ethical issues relevant to the protocol, and this is where the issues identified in the risk register mentioned previously will be addressed. Specifically, there'll need to be detail around the benefit anticipated from the study and an assessment of the burden and stress; the potential risks will be listed, and the mitigation strategies will be outlined. This is where you'll refer to your risk register. How will confidentiality and access to data be managed? This section will ensure that the principles of ethical research are being followed and that the trial complies with the NHMRC National Statement. Including these details in your protocol will also assist completion of your HREA, your Human Research Ethics Application, as part of the initial submission to ethics.
And finally, the last section of the clinical trial protocol template is the dissemination of results. Specifically, there will need to be detail in this section around declaration of interests. What is the intended use of the data you collected? It will include a description of the sponsor and collaborations and the dissemination policy, which is the plan for communication of the results. Consider if the results will be communicated to participants, and if so, how. The authorship subsection will need to outline the authorship guidelines for all planned publications, for clarity and to avoid future conflict, and provide an outline of plans for granting access to the protocol, participant-level data, and the statistical code. This concludes all of the required sections of the protocol. We've covered a lot of ground today, so let's summarise the key points we have touched on.
If you want to turn your idea into a clinical trial, you'll need a protocol. We recommend that your protocol is developed from a template, either the ITCC one or, at the very least, the ICH GCP guidance. Following a template will ensure that all aspects are covered and that current requirements and guidelines are followed. Don't delete sections just because they look too hard. It does look big and daunting, but completing the entire template ensures your protocol is comprehensive and unambiguous, can be followed by anyone, anywhere, and will save you trouble during your trial. As you complete the protocol template, build a good team. Listen to the experts and those who have gone before you and all your key stakeholders. And, of course, ask for help.
There are some links here that you can see on the screen that describe where you can go to find further information about the things we've talked about today. There's certainly plenty of information out there, and we have barely peeked beneath the surface of what's involved in writing a clinical trial protocol today. If you'd like to know more about PaCCSC and CST or become a member, please visit our website.
Our website has information about our trials and other work and the resources, support, and networking opportunities available for PaCCSC and CST members. We have several avenues to support new study ideas, provide networking opportunities, and forge new collaborations. And membership of PaCCSC and CST is free. Our thanks to Belinda and Charmain for presenting today and to the New South Wales Ministry of Health for funding these masterclasses. And to our attendees, we thank you all for your time.
Resources mentioned in the video
- Australian Government Clinical Trials toolkit [opens external site]
- Good Clinical Practice (GCP) in Australia [opens external site]
- ICH Efficacy Guidelines [opens external site]
- NHMRC National Statement on Ethical Conduct in Human Research [opens external site]
- CONSORT Statement reporting guidelines [opens external site]
- EQUATOR Network reporting guidelines [opens external site]
Clinical trial masterclass 3B - Tips, tricks and pitfalls
Masterclass 3 is presented in two parts by Belinda Fazekas and Dr Charmain Strauss.
In Part B, Belinda and Charmain delve into the more complex protocol items, helping you to avoid re-inventing the wheel by providing tips and tricks to enable you to avoid common pitfalls encountered by early career researchers.
Fran Hyslop:
Welcome to the UTS IMPACCT Trials Coordination Centre Masterclasses, funded by the New South Wales Ministry of Health and aimed at helping you bring your clinical trial idea to life. This is the second part of the third Masterclass in this series. Today, we're going to talk about developing your clinical trial protocol and focus on tips and pitfalls. First, I'd like to acknowledge the Gadigal people of the Eora Nation, upon whose ancestral lands our city campus now stands. I would like to pay respect to the elders both past and present, acknowledging them as the traditional custodians of knowledge for this land. I would like to acknowledge the traditional custodians of the various lands from which all our attendees join today and pay respects to those elders past and present.
I extend this respect to First Nations people attending today. My name is Fran Hyslop, and my colleagues Belinda Fazekas and Charmain Strauss will be presenting today. Belinda, Charmain and I are from PaCCSC and CST at the University of Technology Sydney. PaCCSC is the Palliative Care Clinical Studies Collaborative. CST is the Cancer Symptom Trials Collaborative. Both PaCCSC and CST are part of UTS IMPACCT. The IMPACCT Trials Coordination Centre, or ITCC, works to coordinate PaCCSC and CST clinical trials. This is the second part of this Masterclass, covering tips and pitfalls during the development of a trial protocol. I'd like to hand over to Belinda and Charmain to introduce themselves.
Belinda Fazekas:
Hi everyone. My name is Belinda, and I lead the ITCC team. I've worked in the area of clinical trials within palliative care and symptom management for over 15 years. I'm involved in all aspects of clinical trials.
Charmain Strauss:
Hi everyone. My name is Charmain Strauss, and I'm a project officer with the ITCC. I have over nine years' clinical research experience coordinating all aspects of clinical trials in palliative care and cancer symptom management, supporting Belinda and the rest of the ITCC team. As we discussed in the previous class, the ITCC has developed a comprehensive protocol template, which will help you write every section of your clinical trial protocol. Developing the protocol is a major piece of work. It takes time and can involve input from multiple stakeholders. The more comprehensive your protocol, the fewer problems you'll experience later. Today, we want to cover some of the common issues that occur during clinical trials, so you can consider them in your protocol, which will help you either avoid them or deal with them when they crop up later down the track.
First, let's review some of the common issues that can affect clinical trial protocols. Firstly, your protocol might not be comprehensive enough: there isn't sufficient information to allow for the study to be conducted consistently and accurately. The protocol may also be ambiguous, where the terminology used is inconsistent throughout or there is conflicting information between one section and another. This can result in multiple interpretations of the protocol. While it may seem clear to those who wrote it, it may not be as evident to the principal investigator and their staff who are implementing the study at a site. If different sites are implementing the study differently, then this can jeopardise the quality of your data. Remember that the protocol is a critical piece of trial documentation, not only during the trial but also in the years after. The protocol should be of a standard that would allow replication of the trial by another party and also allow scrutiny to enable assessment of compliance with GCP.
It forms part of the long-term paper trail that supports the trial results into the future. Poorly written protocols or inadequate study designs are costly and can delay timelines. They can jeopardise patient safety and data quality, resulting in failure to support regulatory approval of the intervention. Site staff may misinterpret aspects of the protocol, which can compromise conclusions and data integrity. A poor protocol may also cause time delays where the sponsor can no longer conduct the study, or certain aspects of it, until issues are addressed. The budgetary implications of this can be quite significant, particularly for investigator-initiated trials where funding is often limited. In addition, this may lead to the study results being ignored, challenged, or even failing to lead to regulatory approvals. In today's session, we'll focus on the following five topics: risks; inclusion and exclusion criteria; study diagrams, particularly the table of study measures; version control; and consent, specifically the requirements for an impartial witness.
These are, from our experience, the most common areas where problems typically arise and which we feel are essential to get right to ensure the success of a study. We hope to share some tips and tricks for you to consider. The aim is to achieve a well-written protocol that is operationally feasible, generates quality data, and is compliant with regulatory guidelines. All clinical trials must be conducted in accordance with ICH efficacy guideline E6, Good Clinical Practice. These are the guidelines established by the International Conference on Harmonisation, known as ICH, to ensure that clinical trials meet the requirements mandated by regulations and conventions. If you are involved in clinical trials of any intervention, I strongly recommend you undertake training in GCP. It can be done online, and there are free courses available that will take you a few hours to complete.
Now, we'll look at the first topic, which is risks. The sponsor and investigator are responsible for evaluating all risks to participants and the trial data before a trial starts and developing a plan to control the risks to an acceptable level. If risks are not identified as much as possible ahead of time, then you'll spend your time putting out spot fires. It's better to plan for potential issues. This ensures you'll know what to do when they arise and you can deal with problems in a consistent manner. Lack of consideration of risks leads to wasted time, wasted money, and potentially wasted participant data. While the sponsor of the study will develop and manage an overall risk assessment of the trial, including items such as site selection, budget, recruitment, and monitoring, it's also important to consider the specific risks associated with the trial itself, including the mitigation strategies that must be considered in your trial design.
This process occurs in parallel with protocol development, and it follows six steps. First, you identify the risks; then you analyse and rank them; then you list the mitigation strategies and monitor the frequency of the risks; and finally, you act and respond to the risks. Let's review each of these steps individually. During protocol development, you should identify risks to your trial participants. This includes any harms related to the trial intervention and trial procedures, as well as any serious breaches. Next, you also need to identify risks associated with the conduct of your trial. Examples of this include the trial being inadequately powered to meet your primary outcome, poor recruitment, inadequate safety monitoring, inadequate data collection, and inadequate data management systems.
Once the risks have been identified, you need to perform an assessment of the risks and associated control measures. This involves assessing all identified risks in terms of the likelihood and the severity or impact of the potential harm on the participants, specifically on their safety, their rights, and their wellbeing, and on the reliability of the trial results. You need to assess each risk individually and assign it an impact of low, medium, or high. Next, you need to rank the level of each risk. Is it critical, high, moderate, or low? The last stage of the initial assessment is to manage and monitor the risks. You'll need to establish a plan to reduce all risks to an acceptable level and review the risks regularly throughout the trial. Finally, you now have your risk assessment plan in place. You'll need to implement this plan to maintain risks at an acceptable level.
As risks arise, use the plan to act and respond by implementing corrective and preventative actions. The ITCC protocol template does walk users through the development of a risk register. You can see a great diagram from the Victorian Comprehensive Cancer Centre Alliance on your screen, which summarises different types of risks commonly associated with clinical trials. Let's have a look at the risks associated with outcomes not being collected. This may result from the outcome measures not being feasible or from participant dropout. There may be risks associated with your study design and objectives. For example, if your primary outcome requires specific equipment, what are the risks of that equipment failing? One real-life example is a study that required step counters to measure the number of steps at a specific time, which was the endpoint used to determine the primary outcome: the change in physical activity from baseline.
The risk was: what happens if the step counters fail to work? Which is exactly what happened. What is the probability of this risk and what is the impact on the study? Equipment failure does occur, and the probability would depend on the equipment. For example, if it was purchased new at the start of the study, the likelihood of it failing is low to moderate. However, the probability of the equipment failing due to user error is moderate to high. The impact of the equipment failing would be high, as you cannot generate results for the study without it. In this case, the overall risk would then be rated as high. Now, consider how significant the risk is. In this case, it's critical. Without the equipment, you cannot measure your primary outcome. It's a risk you can't ignore, and you must put strategies in place to deal with it. Then you would go on to list your proposed mitigation strategies. These would include monitoring use of the equipment and any failures, and ensuring that you perform routine checking and maintenance.
You'd consider additional monitoring of the data collected, such as monitoring the data as it's entered to identify missing data due to equipment failure or user error. Actions you would take in response to the risk would be to provide in-depth training at site initiation, then update educational materials to ensure user knowledge and training is kept up to date. You could also schedule regular meetings with sites so issues can be identified and discussed in a timely manner to ensure quality of the data, as well as any additional retraining that may be required. If the risk cannot be mitigated, then you'll need to review the outcome measure. This would require a protocol amendment. Now, having identified a critical risk, the items can be added to a simple tabular risk register, as shown here. You can see the six elements in separate columns. We have the risk related to the equipment identified. The probability is high. The risk is ranked as critical. The mitigation strategies, monitoring, and response are also detailed.
For each risk identified, you would start a new line or row and add this to the table. As you work through each section of the protocol, remember to consider all the risks pertinent to each section and build up your repository, or what we call a risk register, in an Excel file or similar, ensuring that all six elements discussed previously are covered. Once complete, the risk register can be attached to your protocol as an appendix. Risks form a big part of the application for ethical approval. Addressing this clearly in your protocol will get you on your way to getting your study approved and recruiting. Having a risk register will save you time as you complete the Human Research Ethics Application, or HREA, for your study, because it contains a major section centred around the risks, benefits, and harms of the research.
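A register like the one described can live in Excel, but as a sketch, the same six-column, one-row-per-risk structure can be written as a plain CSV file. The column names and the example row below are assumptions for illustration, based on the step-counter example; they are not a mandated format.

```python
import csv

# Illustrative columns covering the six elements discussed:
# the risk itself, its probability, its rank, then mitigation,
# monitoring, and response. Adapt to your own register.
FIELDS = ["risk", "probability", "rank", "mitigation", "monitoring", "response"]

register = [
    {
        "risk": "Step counter fails or is misused",
        "probability": "high",
        "rank": "critical",
        "mitigation": "Routine equipment checks; in-depth training at site initiation",
        "monitoring": "Review entered data for missing step counts",
        "response": "Retrain site staff; update educational materials",
    },
]

# One row per identified risk; the file can then be attached to the
# protocol as an appendix.
with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(register)
```

Keeping the register in a simple tabular file like this makes it easy to add a row as each new risk is identified while working through the protocol.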
Belinda Fazekas:
Now that we've reviewed some of the issues and important considerations related to risk assessment, let's look at the inclusion and exclusion criteria, as the impact of getting these wrong is high and critical. I will outline the criteria you'll need to cover and how to avoid pitfalls. First, let's look at the inclusion criteria. Going back to the protocol template, what does inclusion mean? This refers to the individuals who are able to participate. The inclusion criteria cover all of the elements that will allow inclusion of the intended study population. For each item in the inclusion criteria, you must be able to provide a yes or no answer. Each potential participant must be seen as either being able to participate or not.
There should be no ambiguity or maybe. Can another person confirm that the criterion was met? Be wary of double negatives, as they can be very confusing: both a yes and a no answer might be understood to apply, and they should therefore be avoided. Inclusion criteria based on a test result or a specific score must be recorded so that they can be independently confirmed; ensure that the record with the score or test result is available for review and kept as part of the source data. If a range is allowed, provide the absolute values and specify the range. Also, specify if the value needs to be signed off or be available. An example might be the range of a pathology result. If there is no range, then specify if clinician discretion is allowed. Some wording for this might include, or if otherwise appropriate as assessed by the clinician.
If a criterion is based on a specific assessment, then this must be available. For example, if it is based on a blood pressure reading, the recording of the blood pressure must form part of the source data. Or if it is based on a medical assessment of respiratory function, for example, then access to the medical record or a print of the report must be available to confirm the decision. Let's work through some real-world examples and questions related to inclusion criteria. I'm sure we've all seen a criterion that says adult with cancer. Should there be an age range, and what is the definition of adult? We've also seen criteria looking for a particular score on a particular scale; as you can see on the screen, a score of a certain amount on an NRS scale. Should we spell out what the assessment is or use an acronym?
What is an NRS and what are the parameters? As an example, you might want to state a numeric rating scale for breathlessness, with zero being no breathlessness and 10 being worst possible breathlessness. Now, what if the inclusion criterion is amended to be the score on the NRS scale or a score of something else on a different scale, such as a Borg scale, where the Borg scale is a different measure of breathlessness? What happens if both of those scores apply? Which one is scored? Are both done, or only one? If the first is not met, is the second measure assessed? Or if one applies and not the other, is one more important? Thirdly, we've seen criteria that state an adequate trial of a specific medication, an inability to tolerate a specific dose, or an inability to swallow. What is considered adequate, and how is this defined? Ensure that you can tick yes. A yes to inability to swallow should actually exclude those people.
A fourth criterion that we've seen is one that says able to complete study measures or study assessments. But which assessments? They need to be specified, such as able to score breathlessness on a numeric rating scale. Does the ability to complete the study assessments refer only to those related to the primary endpoint, or to all assessments in the protocol, which could include quality of life diaries, other questionnaires for symptoms, et cetera? How is meeting this criterion measured and recorded? Does it rely on the potential participant stating their ability, or is there an assessment to measure this? Now, let's look at the exclusion criteria. This refers to the individuals who should not participate. GCP and the National Statement require that researchers protect the wellbeing and safety of participants. This is the reason for excluding some people.
The exclusion criteria cover all of those attributes to avoid enrolling participants who may be at higher risk of adverse events or at risk of early dropout, for example due to drug interactions or concurrent medical conditions that can put them at higher risk. Higher dropout puts study recruitment requirements and the associated budget at risk. The major attributes to cover in exclusion criteria should be those related to comorbidities. You need to specify these and the reason for exclusion, which is mostly safety. Consider the impact of getting it wrong. What will happen if a person is enrolled who should have otherwise been excluded? You need to consider the safety and data implications of this. What are the known and potential drug interactions? Use the investigator's brochure or the product information to determine those.
What happens if the person is already involved in another study? Will the current study impact the other study and vice versa? Can both studies coexist? Will the intervention or data collection of one interfere with the intervention or data collection of the other? Is it safe for the participant to be on both studies at the same time? What is the cumulative potential burden on the participant? That's an important factor to consider. As with the inclusion criteria, let's have a look at a few examples I've come across and the questions related to them. One, we've seen an exclusion criterion that says concurrent use of medications known to interact with the study medication. You should provide a list, or a reference to another part of the protocol where this is included, to remove any ambiguity or doubt.
We've seen exclusion criteria that state cognitive impairment. We need to define cognitive impairment. It should be determined using an objective measure, and you should specify what that measure is, for example, the Mini-Mental State Examination. You should also state who should measure it, what the exclusionary score will be, and what happens if the score changes during the screening period. We've also seen exclusion criteria that state renal failure. How will this be measured? Is a calculation required, such as eGFR or MDRD, and which one is to be used? What range is to be applied, and who assesses and signs this off? Is a timeframe to be applied, such as using the result from the previous week or month? What is the risk of the range changing in that time? Will this impact the participant's safety?
I think we've all seen exclusion criteria that state pregnancy or breastfeeding. Is a pregnancy test required, and when is this test performed? Or do you rely on verbal participant report? Does the test result form the source data and therefore need to be kept? How do you do that if it's a urine stick sample, and who reads the result? How can we avoid errors in the study inclusion and exclusion criteria? Firstly, we should consider stating the specific age along with the date of birth instead of a criterion that simply says adult. For a gender requirement, we need to specify if this is biological, particularly if drug interactions, for example, are going to be an issue. Otherwise, gender can be a demographic data item but not an inclusion criterion. If capacity is to be assessed, then state that a certain score from a specific test or instrument must be met. The criteria and the study assessment section of the protocol need to state who assesses this and how.
If pregnancy is to be an exclusion criterion, then specify the need for a test, which test is to be used, and how the result is to be recorded. Also provide a timeframe for when the test is to be undertaken. And remain cognisant of religious organisational restrictions, such as those from Catholic hospitals regarding contraception, particularly if the exclusion criterion refers to adequate contraception. Excluded medications as part of the criteria should be specified. This can be by medication group, with a full list then provided within the protocol for clarity. Bear in mind that brand names can change over time, or new ones can become available, so it's useful to include the active ingredient to improve clarity. It's a good idea to state that the list is non-exhaustive, and the protocol needs to direct sites to double-check that none of the concomitant medications belong to one of the exclusionary drug classes.
Comorbid diseases are often excluded, mainly due to safety considerations. The protocol needs to specify which conditions, how and who tests this, how the results are scrutinised and recorded, and whether there is any room for discretion. If discretion is not specifically stated, then the firm criteria must be adhered to. Let me use the AKPS as an example. With the Australia-modified Karnofsky Performance Status, you can exclude those who fall below a certain score. A score such as 40 would generally give an indication that the person's prognosis is not great, and this in itself may exclude them from the study. There needs to be consideration and explanation as to why the test is being done. In this case, it may be for the purposes of prognosis. However, you need to consider the possibility of physical disabilities that may result in a low performance score. A person can score less than 40 if they're disabled but otherwise have a good prognosis. Clinical discretion may be appropriate to state within this criterion.
Just as a side note, if criteria are fixed, which in most cases they will be, there is no scope for waivers without prior HREC approval. If a potential participant is just slightly outside the parameters, they will be excluded, unless there is a statement allowing clinician discretion specifically written into the criteria. Assessment tools are often used to determine eligibility, such as a pain score or meeting a certain severity on PRO assessments. If this is the case, the instrument should be specified, along with who administers that instrument, how it is to be scored, and particularly whether a sub-score is to be used. Again, this can be specified within the study assessment section of the protocol, but it needs to be very clear. Also, I've come across inclusion or exclusion criteria which refer to a required assessment at baseline. While possible, this is often difficult and confusing, as baseline scores are taken after eligibility screening.
A person cannot usually proceed to the collection of baseline data until all the inclusion and exclusion criteria have been met. Avoid long and convoluted criteria, where interpretation by others can lead to different understandings. Consider splitting long criteria into separate statements if possible. In building the inclusion and exclusion criteria, it's very easy to become long and convoluted, and therefore less and less easy to follow. Criteria that are not clear or easy to follow will result in errors. Consider the points that you can see on your screen. Assess whether the criteria will result in recruitment of the intended population. Inclusion and exclusion criteria are aimed at including as many participants as possible, and then excluding those who should not participate due to safety or other risks.
Ensure that each inclusion criterion can be ticked as yes or no. Yes will include the participant. Ensure each exclusion criterion can also be ticked as yes or no. Yes will indicate that the exclusion has been met, which means they're excluded. No, and I'll bring in a double negative here, will not exclude the participant, and therefore they remain included. All inclusion criteria should be ticked yes and all exclusion criteria should be ticked no for a patient to be enrolled. All criteria need to be verified. Consider what will be recorded as source data. Specify if a range is allowed and if discretion is possible in assessing a criterion. No waivers are allowable. A person will be included and then not excluded, but there are no maybes.
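The tick-yes, tick-no logic just described can be sketched in a few lines of Python. The criteria names here are hypothetical, invented for illustration; in practice each key would be one clearly worded criterion from your protocol.

```python
# Hypothetical criteria for illustration; real criteria come from your protocol.
def is_eligible(inclusion_answers: dict, exclusion_answers: dict) -> bool:
    """Eligible only if every inclusion criterion is ticked yes and every
    exclusion criterion is ticked no. There are no maybes and no waivers."""
    return all(inclusion_answers.values()) and not any(exclusion_answers.values())

inclusion = {"aged_18_or_over": True, "nrs_breathlessness_score_3_or_more": True}
exclusion = {"pregnant_or_breastfeeding": False, "taking_interacting_medication": False}

print(is_eligible(inclusion, exclusion))  # True: all inclusion yes, all exclusion no
```

Notice there is no "maybe" value anywhere: if a criterion cannot be answered with a definite yes or no, the criterion itself needs rewording, not the logic.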
Charmain Strauss:
The next feature of the protocol, which is often a source of problems or ambiguity, is the study diagrams. If you were able to attend masterclass part A, you'll have noticed that a protocol can and should contain a number of diagrams. They're not all required, but there are some that can provide clarification around complex or long text regarding certain aspects of the trial. There are four that should be considered, which are listed on the screen now. To improve the clarity of the protocol, there could also be others that you can consider. We don't intend to be definitive with the diagrams; what we would like to do now is show that a variety of diagrams exist and that there is no fixed format.
The study diagram should be a visual display of your study design. Ideally, it would show the timeframe for the study, including any significant time periods or phases, as well as the groups, crossover, or other main design elements. It might be a flow diagram, as shown on the screen. Here, you can see the duration of the study from day one to day 24, and the two treatment arms, the active medication and the placebo arm. Or it can be more comprehensive, like this, where the possible dose titrations and repeat assessments are shown. While this diagram looks overwhelming, it's a good representation for a complex study and very much simpler than trying to describe it in prose.
This diagram shows the treatment arms, an overview of the assessments at the important time points, such as safety and efficacy, the length of the treatment, and the follow-up period. Or even this one: while the diagram might appear busy and intricate, it does visually show a complex study design, including multiple randomisations, titration, and rescue medications. Although study diagrams may vary in appearance, they should all visually demonstrate the overall design of the study in a snapshot and complement the text description in the relevant sections of the protocol. Another type of diagram is the study timeline. Although this is not always included, it can provide an alternative way to show the study design in broad terms. This can be more of a timeline rather than a flow diagram, and it steps the reader through the days involved and the major activities at each of the salient times.
A participant timeline is one way to visually show what the study will mean for a participant: when there will be visits, what is required at those visits, and how many there will be. Often, this is combined with the study diagram, but that can sometimes become visually complex to look at, and separating them can make the process much clearer. Here's another example of a participant timeline showing dosing, visits, and data collection. Perhaps the most important diagram within the protocol is in fact the table of study measures. This table should summarise all the assessments and activities for each time point within a trial. It should be able to outline to the study staff when and what they should do at any given time without their having to repeatedly read through the many pages explaining each assessment. I'll just draw your attention to the circle where biochemistry is listed. This is rather ambiguous. What tests does this entail exactly?
The table can group the assessments into those that are clinician assessed, participant reported, or taken from investigations, such as this one. You can see that the biochemistry item from the previous table now lists specific pathology measures, which is clearer. This table takes into account those measures that will be collected if the participant ceases at any time in between visits, and it also includes the measures to be collected during the follow-up visits. Ideally, the table would show the acceptable windows for each assessment, and also whether the assessment can be remote, for example by telephone, or is required to be face-to-face. Each of the visits, including the allowable windows, is included in the example on the screen, along with each visit as detailed in the participant timeline section of the protocol and each assessment as detailed in the study assessment section of the protocol.
Of course, whilst the table is comprehensive and does include footnotes, each item is also required to be detailed in depth within the text of the protocol, including how the remote visits are to be conducted. In summary, your table of study measures should provide information on all the assessments that will be collected as part of the study, who will collect these, and when they'll be collected. This includes when investigational product is prepared, dispensed, and returned, as well as when any expected compliance checks are done. For example, if participants are completing a diary, checking that this has been done at specific time points gives an opportunity to re-educate the participant early if non-compliance is observed, to avoid missing data. You should complete the table first and then complete the text of your protocol. This will ensure that everything is included and consistent.
Site staff should be able to refer to your table of study measures and conduct the study accurately. This should be able to be replicated by other staff at the same site and across the various sites in the same way to ensure consistency. Here are some of my tips to help with preparing your table of study measures. List all your assessments individually. For example, don't simply write quality of life for patient-reported outcomes; list each one. Show each visit and which of the assessments will be conducted at each visit. Ensure you include timeframes or windows for collection of assessments, such as plus or minus five minutes, as well as windows for the visit or for the contact, for example, plus or minus three days.
Include all time points of collection, not just regular study visits, but also end of treatment, end of study or exit, withdrawal, and follow-up. Use footnotes to specify additional pertinent information. For example, instead of simply including the general term vital signs in the table, you would list these separately, such as tympanic temperature, blood pressure, pulse, et cetera. You can then use footnotes to provide further details regarding each measure. For example, for blood pressure, you may specify that this needs to be collected from the same arm at each visit. Bear in mind that the more tables and diagrams there are, the greater the potential for divergence away from the text if amendments are made over time.
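To make the structure of a table of study measures concrete, here's a small Python sketch. The visits, windows, and assessments below are invented for illustration only; a real schedule comes from your protocol.

```python
# Hypothetical schedule-of-assessments sketch; the visits, windows, and
# assessments are illustrative, not from any real protocol.
schedule = {
    "screening": {
        "window": "day -7 to day -1",
        "assessments": ["eligibility check", "blood pressure (same arm each visit)"],
    },
    "baseline (day 1)": {
        "window": "day 1",
        "assessments": ["NRS breathlessness", "dispense investigational product"],
    },
    "visit 2 (day 8)": {
        "window": "plus or minus 3 days",
        "assessments": ["NRS breathlessness", "diary compliance check"],
    },
    "end of treatment": {
        "window": "plus or minus 3 days",
        "assessments": ["NRS breathlessness", "return investigational product"],
    },
}

# Print one line per visit so staff can see what happens when, and within
# what window.
for visit, details in schedule.items():
    print(f"{visit} [{details['window']}]: {', '.join(details['assessments'])}")
```

Note how every time point carries its window, assessments are listed individually rather than as general terms, and end-of-treatment collection appears alongside the regular visits, mirroring the tips above.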
Belinda Fazekas:
I'd like to spend a few minutes talking about version control. In our first Masterclass, the average clinical trial life cycle, we walked everyone through the entire life cycle of a clinical trial. You'll recognise the diagram that's on the screen. Protocol development is a very early part of that life cycle and was covered in much more detail in part A of this Masterclass. During that class, we touched on version control. Obviously, the development process will result in numerous versions of a protocol before it becomes finalised. Being able to keep track of which version is the most recent is very important. It's also not the end. Once final and approved, there are other occasions where the protocol version may change and where it's equally important to keep track.
Let's skip to later in the life cycle of a clinical trial where ethics and governance approvals are sought, as this is an important part of the protocol development. Once the protocol has been finalised and has completed development and peer review, it will need to be submitted to the HREC for ethical approval to continue. At this point, the HREC will review the protocol in detail for ethical and scientific merit, along with the associated documents, such as the PICF, advertising materials, patient facing documents, et cetera. These can be numerous as outlined in the previous Masterclass. As a result of this review, it's highly likely that changes will be requested by the HREC, or more clarity within the protocol will be required.
Still later in the life cycle, into the recruitment phase, further amendments to the protocol may be required. Reasons can vary, but they include cases where parts of the protocol are unclear to the study teams and further clarity is required. Having had clinical input during development can help to minimise this. There may be changes required to procedures due to technical or facility circumstances. The inclusion or exclusion criteria may need to be changed or corrected, whether in response to recruitment issues or simply to ambiguity. There are circumstances where the intervention itself may change, such as a change to the manufacturer or provider. Any protocol amendment made during recruitment has implications for the trial budget and for recruitment. There may be delays while changes are being rolled out and while any additional training is being conducted.
For a protocol amendment, maintaining control of the versions is critical. This is to ensure that the study is always being conducted in accordance with the current approvals, and that these are being followed at each site. Let me run through the process for maintaining control of the versions. On the screen, we have an approved version, version 1.1, from the 5th of January 2023. The current protocol should be opened in its original editable format, such as Word. Immediately save this as a new file using the save as option. The version might then be version 1.2, dated the 8th of May 2023. Then you add track change, or TC, to the file name. Turn on the track changes function from the review menu in Word, as indicated on the screen. Make your required changes in the protocol, including the text, and also check your headers and footers and ensure that the document version numbers and dates are changed.
Just check that after each section break the footers have been changed throughout, and then save. Now, you save as again with the same file name, but this time with clean in the file name. Only now can you accept the changes, save, and then turn off the track changes. Both the track change version and the clean version should be saved and submitted for approval, I suggest as PDFs if possible. Why do we need two versions? Ethics will want to see exactly what changes were made, to ensure that your request for an amendment is consistent with the submitted protocol. The track change version allows them to see the changes that you made. The ethics committee will also want to see the final version, and both will, or should, be listed in the approval letter.
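The two-file convention above can be sketched as a small naming helper. The pattern below (study code, version, date, then TC or clean) is our assumption for illustration, not a mandated convention, and the study code DEMO01 is hypothetical.

```python
# Illustrative naming sketch; the pattern is an assumption, not a mandated
# convention. Both the TC and the clean file are kept and submitted.
def protocol_filenames(study: str, version: str, date: str) -> dict:
    base = f"{study}_protocol_v{version}_{date}"
    return {
        "track_changes": f"{base}_TC.docx",  # lets ethics see every change made
        "clean": f"{base}_clean.docx",       # the final version for approval
    }

names = protocol_filenames("DEMO01", "1.2", "2023-05-08")
print(names["track_changes"])  # DEMO01_protocol_v1.2_2023-05-08_TC.docx
```

Encoding the version and date directly in the file name means that which version is current, and which pair of files belongs together, can be read at a glance.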
Charmain Strauss:
To finish up today, we'll look at informed consent. This is the most important process in clinical research. It's an essential legal and ethical obligation which must precede the conduct of any clinical trial procedure. Whilst it's a process that is separate from the protocol, it must be detailed in the protocol, including how the consent process will fit within the study processes and what the key elements of the informed consent process will be for the study. One of the key considerations when determining the consent process is the target population, as this will directly determine the process that should be followed to comply with GCP, regulatory, and ethical requirements. Clinical trials involving vulnerable participants must ensure that there is appropriate support to help vulnerable participants make informed decisions about participating in a study.
Some examples of vulnerable people and groups are listed here. Researchers need to balance the rights of vulnerable individuals and groups, as well as any potential benefits of their participation in research against any increased risk to them. Specifically, we need to consider three types of people when seeking informed consent. Those who can give informed consent. Those who require assistance to give informed consent. Those who can't give informed consent. The latter two groups would constitute individuals with a diminished capacity to consent. This would directly impact the type of consent that will be obtained, the format of the discussion, the documentation, as well as the parties that may be involved during the consent process.
Consent may be collected verbally and/or in written format. For example, individuals who are blind can verbally consent but may not be able to provide written consent. In these cases, their legal representative can provide written consent, and an independent, impartial witness should also be present to witness the consent process. Consent discussions may occur face-to-face or remotely. For certain vulnerable people, it may be more appropriate or necessary to have discussions face-to-face to ensure a person-centred, supported decision-making model, particularly for those participants who rely heavily on medical care, such as terminal or ICU patients. The timeframe for discussion will also depend on the study population. Participants should generally be given ample time to consider the study. In emergency situations, however, the consent process needs to occur very rapidly. Consent will most likely need to be obtained from the legal representative, who is often provided with a short timeframe for consideration so as not to delay critical treatment.
Documentation of informed consent includes signing the Participant Information and Consent Form, or PICF, as well as keeping a permanent record of the process in the participant's notes, as a form of evidence that information was provided in an appropriate manner and that consent was obtained free from coercion. The target population directly impacts the development of the PICF. The PICF needs to deliver the information in a form that's appropriate to the individual being consented. The PICF must take into account factors such as level of understanding, reading ability, and knowledge about research and research requirements. If the individual is unable to read or write, then using verbal or other alternative methods of communication to convey information and to record informed consent may be required. Many PICF templates will include an impartial witness section specifically for participants who are unable to read or write.
Now that we've reviewed some of the important considerations related to informed consent, I would like to look in more detail at four specific issues related to consent: the first being the use of a witness and what this means, then how to document the informed consent process. We'll also look at data sharing statements and give you an overview of the informed project. Firstly, what is meant by an impartial witness? In our experience, this causes significant confusion at study sites. This is a person who is independent of the trial and who can't be unfairly influenced by people involved with the trial. They're commonly referred to as an impartial witness. This person is present throughout the entire informed consent process, including all discussions, and also reads the informed consent form and any other written information that's supplied to the potential participant. An impartial witness is required under GCP to be present during the consent discussion.
This is required where a participant or their legal representative, such as a guardian, cannot read. After the written informed consent form and any other written information has been discussed in accordance with the NHMRC National Statement, the witness should sign and personally date the consent form. By signing the consent form, the witness confirms that the information in the consent form and any other written information provided was accurately explained to and understood by the participant or their legal representative, and that informed consent was freely given. Bear in mind that if an interpreter is used, they cannot sign the consent form on behalf of the witness or act as the impartial witness.
In 2012, the National Health and Medical Research Council, or NHMRC, in collaboration with the Australian states and territories, developed templates to serve as a starting point for the development of written PICFs to be used for research conducted in Australia. There are templates available for different types of studies, including genetic, interventional, non-interventional, and health and social research. These templates include a specific signature field for the impartial witness, and this is what it looks like. A common misconception is that the person signing this box is witnessing the participant's signature only. This is likely because, as you can see, it specifies "name of witness to participant signature", which is confusing. However, the blue text specifically refers to the TGA annotated guide to GCP, section 4.8.9, which outlines the requirements for an impartial witness when consenting individuals or their legal representatives who are unable to read.
Per GCP, this is clearly not simply a witness to the signature, but to the entire consent process, which includes the consent discussion. It's commonly found during routine monitoring and audits that this impartial witness section has been incorrectly signed. For participants or their legal representatives who can in fact read, most of the time this signature field will not be applicable, as an impartial witness will not be required. In these circumstances, the person obtaining consent should strike through the section, and provide their initials and date with a comment "not required". In order to record that information was provided in an appropriate manner and that informed consent was obtained free from coercion, your consent process must be documented in the participant study notes or in their medical notes.
Why is a signed PICF not enough, especially when it does state that consent was informed and provided willingly? That additional documentation serves to confirm that the correct procedures for consent were followed, including appreciation of the vulnerable position of the participant, and that consent was provided with full disclosure and full understanding, and obtained free of coercion. It helps to provide substantiation of the process being a dialogue and a sharing of information. It helps to provide evidence of the timeline for participation in the study, that is, that consent was obtained prior to the participant being randomised and prior to initiating any of the study-related procedures. And it can confirm the process in the event that the signed PICF cannot be located at a future time, which does happen. Documents get lost.
GCP is very specific about the elements that need to be recorded, and this includes the study name and number; who was there; what was covered; what questions were asked and the responses given; that consent was provided by the participant or their legal representative; that the participant entered the study willingly; that the PICF was signed by the participant, the impartial witness if applicable, and the person who obtained consent; and that a copy of the signed form was given to the participant. The documentation needs to be completed by the person who obtained consent, or somebody who assisted with the consent process and was present during the entire process. The consent documentation is also applicable to consent obtained for sub-studies, such as caregiver sub-studies, as well as re-consent after each HREC-approved amendment to the PICF, as required.
How can we ensure GCP compliance and a complete record of informed consent? First, consider specifying the PICF version that was provided to the participant and the version that they signed. That's particularly relevant when you have multiple versions of the PICF throughout the study as the protocol gets amended. Second, ensure that the documentation is signed and dated by the person making the record. It's not enough for it to just be a paragraph; we need to know who made that record. My most valuable tip would be to consider using a template. We consistently find that sites either fail to accurately record the entire consent process or fail to include all the required elements in their documentation. They may, for example, forget to specify that they've provided a copy to the participant.
As a result, they're unable to substantiate that consent was obtained in compliance with GCP and the NHMRC national statement. A template makes this easier, and it may be in the form of sample text, as shown here. You can see, highlighted in yellow within this template, that it contains all of the required elements we've discussed in the previous slides. Or it can be a checklist, such as this. Both formats capture the same information and ensure that all site staff are recording the consent process completely, accurately and consistently. Such documentation demonstrates the process that was followed, that it was in accordance with GCP and the NHMRC national statement, and that there was no ambiguity in the process. Now, we'll be looking at data sharing statements. You also need to consider the data that will be collected during the study and how this will be used and shared.
Whilst data sharing statements have been within the PICF template for many years, they're gaining increasing importance. As the cost of trials escalates, the full and complete use of all data obtained during the course of a trial is imperative, but not all data is used by the investigator team. Trial participants need to be asked to consider if the data they contribute can be used for future or ongoing research. Within the national statement, consent for use of data can be for the current project only, for example, secondary analysis; for extended research, where future research may be related to the original project or in the same general area of inquiry; or unspecified, where data may be entered into a data bank for future researchers to access.
The protocol should clearly describe this, and the PICF should be specific in the statement. Participants also need to be informed of how and why their data is going to be shared with others. They need to consent to this even if the data is being shared with external collaborators. All data sharing needs to comply with the Australian Privacy Principles. Whilst having a national PICF template is beneficial to ensure consistency and demonstrate ethical as well as regulatory compliance, the NHMRC PICF templates are very long and complex, often 25 or more pages.
A collaborative of stakeholders involved in clinical trials, known as CTIQ, has initiated the InFORMed Project, the primary objective of which is to develop a simplified, participant-friendly national PICF template. Through this project, CTIQ aims to demonstrate widespread intent to adopt the redesigned PICF template to justify NHMRC endorsement, with the ultimate goal of replacing the 2012 NHMRC templates on their website. In addition to revising the national PICF templates, CTIQ is also working with the Health Studies Australian National Data Asset, or HeSANDA, to develop a data sharing agreement to be incorporated in the national PICFs, to ensure that we have ethically and legally robust consent for future data sharing. This work is currently underway, so watch this space.
Belinda Fazekas:
We've covered quite a lot of ground, and we really only scratched the surface with some of the points that we come across. Protocols can be fraught with ambiguity and inconsistencies. We hope that we've helped to highlight and explain the most common problems we see, and to explain how to make protocols clearer. We've covered five topics today: risks, inclusion and exclusion criteria, study diagrams, version control and consent. We really hope that this has been helpful to you as you move forward with the development of your own protocols. Charmain, can I put you on the spot? Sorry.
Charmain Strauss:
Yes.
Belinda Fazekas:
When we're talking about risk, obviously you can't cover all risks. It's impossible. I'm just wondering what happens if you've identified a risk but you can't actually mitigate that risk?
Charmain Strauss:
Well, you can't actually cover every single scenario. I think what we need to do here is think about the most common things that you might come across. Identify those that are going to be high and critical and that you can do something about, maybe in the training you provide, in your protocol, as well as in your standard operating procedures, if you have them. If you don't have them, consider creating them, because that's a good way to address risks, but always keep in mind that there are things that can pop up that we may not think about. One good example of that is COVID. Nobody was prepared for that, and it affected trials in a big way.
I don't think anybody had that in their risk register, but we all adapted and we learned. Now we have trials that do remote visits, which before were things a lot of sponsors and a lot of investigators were reluctant to do and to incorporate. I think even though we may not identify everything, we can still learn from the things that come along. If you identify a risk that's high or critical and you can't actually work out any mitigation strategies for it, then you might need to consider changing your study, or the design of your study, to actually ensure that you can run it. If you can't meet your primary outcome, then there's a problem there and you're wasting your time. I think that's quite important to do.
Belinda Fazekas:
You mentioned COVID and remote visits. I think COVID forced the trials community to initiate quite a number of other changes as well, such as remote monitoring. It was fairly standard for monitors to go on site and sit there for a few days and look through all the files and look through the medical records. Now, a lot of that is done remotely. In fact, worldwide monitoring procedures have changed. It's been an interesting space, that's for sure.
Charmain Strauss:
Transitioning from 100% source data review to now doing more risk-based monitoring. The other thing also, for the inclusion and exclusion criteria, Belinda: what happens if somebody actually gets enrolled who's not eligible?
Belinda Fazekas:
That does happen. I guess it does depend on the study, and it depends on whether that person then goes on to participate. From a sponsor point of view and from the study point of view, that's a protocol violation: a participant has gone into the study who should not have gone in. For some phase studies, that will be picked up very early, often before the participant's been randomised or received the intervention, in which case it can be interrupted. In other cases, the person may in fact go on to receive the intervention. At those times, that person's participation and the data that they've provided is reviewed by, in our case for our studies, a data safety monitoring committee that will also include a statistician. They'll have a look at the circumstances, at whether that person's participation will interfere with the data, and at whether that person actually needs to remain in the study or not in terms of their data. It may be that that data is removed from the overall set.
Charmain Strauss:
One of the other things I didn't really cover in my section about the table of study measures is: what is the balance between including everything that you need in the table versus having so much information that it's impossible to actually read the table in an efficient way? You don't want a table that goes for 20 pages. You also don't want something with so little information that you can't actually use it to know what you need to do. I don't think there is a right or wrong answer here. I think it very much depends on what the study is and how complex it is. There should definitely be an overlap between what is in your table and the text, particularly if the text sections are quite complicated. The table does help to summarise. You also don't want, and I know we talked about this yesterday, sites to simply look at the table and not read the rest of the protocol, because they may miss some things by doing so.
That's just more of a general comment for everyone on the line to keep in mind when you're putting your study tables together. We have a comment from Ada: in the clinical trial life cycle, in years four to eight, there is mention of protocol amendments. If you amend some processes in the study design, do you need to go back to the sponsor and ethics to get approval? Yes, you definitely do. Any amendments to the protocol need to be pre-approved by ethics before they can be implemented on site. The protocol amendment might be ready and have gone to ethics, but you can't actually start following the new protocol until it has received ethics approval. All subsequent study documents that need to be amended in response to that protocol change also need to be approved.
Belinda Fazekas:
I guess the only exception to that is where there's, say, an urgent safety requirement to change something. Then you can implement it straight away, but you still need to get ethics approval for it. To maintain safety of the patients, you can implement that change immediately. The difficulty also comes in, particularly if you've got multiple sites, where a protocol that's been amended and approved by ethics also requires changes to site-specific documents; each site can't implement that change until they've had local approval. An example might be the PICF, where the ethics committee has approved the change to the PICF, but the individual site PICFs need to be approved by research governance. They can't be used until that approval's been given. You may then have a period of time where some sites are operating on one PICF and other sites on another. Fran, maybe we'll hand it back to you.
Fran Hyslop:
Fantastic. Thank you so much for that. Thanks to everybody. For those of you who may have found the earlier masterclass, and this one today, a tad overwhelming, we would like to encourage you not to be put off. We've really only just peeked beneath the surface of the complexity of running multi-site clinical trials. This is why the ITCC exists. No one expects any one person to be all over this. It does need a dedicated group of experts, which is exactly what the ITCC team is. We're a group of solution-focused professionals with diverse qualifications and combined expertise in clinical trial coordination. We offer a full project management service from concept to startup, conduct and completion.
We do this all day, every day, and we are here to help you. Membership of PaCCSC and CST is free, and gains you access to a network of experts who can assist you every step of the way to evaluate your idea in clinical practise and to improve outcomes for your patients, which is why we're all here. To learn more about PaCCSC and CST, or to become a member, please visit our website. Our website has information about our trials and other work, and the resources, support and networking opportunities available for PaCCSC and CST members. We have several avenues to support new study ideas, provide networking opportunities, and forge new collaborations. Our thanks to Belinda and Charmain for presenting today. To all our attendees, thank you for your time. We hope you have a pleasant afternoon.
Resources mentioned in the video
- ICH Efficacy Guidelines [opens external site]
- VCCC Clinical Trials risks [opens external site]
- InFORMed Project [opens external site]
Clinical trial masterclass 4 - Critical appraisal skills
Critical appraisal is an essential skill to evaluate the quality and relevance of published research that may inform your new clinical trial idea.
Masterclass 4 is presented by Dr Wei Lee, Associate Professor Ann Dadich and Misbah Faiz. Wei, Ann and Misbah present an engaging and interactive masterclass to improve your critical appraisal skills. You’ll find out if your trial idea has already been researched, what the outcome was, and whether the quality of the research was adequate or needs further investigation.
Welcome, everybody, to the UTS IMPACCT Trials Coordination Centre masterclasses, funded by the New South Wales Ministry of Health, and aimed to help you bring your clinical trial idea to life. This is the fourth and last in the series, and today we're going to talk about critical appraisal skills. First, I'd like to acknowledge the Gadigal people of the Eora Nation, upon whose ancestral lands our city campus now stands. I would like to pay respect to the elders both past and present, acknowledging them as the traditional custodians of knowledge for this land. I'd like to acknowledge the traditional custodians of the various lands from which all our attendees join today, and pay respect to those elders past and present. And I extend this respect to First Nations people attending today.
My name is Fran Hyslop, and my colleagues Wei Lee, Ann Dadich, and Misbah Faiz will be presenting today. Dr. Lee and I are from PaCCSC and CST at the University of Technology Sydney. PaCCSC is the Palliative Care Clinical Studies Collaborative, and CST is the Cancer Symptom Trials Collaborative. Both PaCCSC and CST are part of UTS IMPACCT. The IMPACCT Trials Coordination Centre, or ITCC, works to coordinate PaCCSC and CST clinical trials. I'd like to hand over to Wei and Misbah to introduce themselves.
Hi, my name is Wei, so I'm one of the palliative care specialists, based in Northern Sydney. And I'll hand over to Misbah.
Good afternoon, everyone, my name's Misbah Faiz. I'm currently acting quality improvement officer at the clinical governance unit here at Southwestern Sydney LHD, and I have a substantive position at the Multicultural Health Service here at Southwestern Sydney LHD, also.
Good afternoon colleagues, Ann Dadich. I'm an associate professor in the School of Business at Western Sydney University. I pursue a research programme on the management of health services, broadly defined. I have a particular interest in knowledge translation, and that's the myriad ways in which different knowledges coalesce to promote quality care. I'm joined today by my esteemed colleagues and friends Wei and Misbah to facilitate this master class this afternoon on critical appraisal. Specifically, we'll clarify what critical appraisal is, its importance, and how you can conduct a critical appraisal.
But why bother with critical appraisal? Well, colleagues, there are several reasons. Critical appraisal helps us to evaluate the validity of evidence. In doing so, we can identify and reduce the information that is unlikely to be helpful. By appraising research, we can avoid bias, we can avoid errors, because we've come to judge the trustworthiness of the evidence. This, in turn, helps us to make better-informed decisions.
Critical appraisal provides a framework to assess the strengths and the limitations of research. As such, it enables us to determine the extent to which findings can be applied to practise or to our decisions. If we make better decisions, we're likely to enhance the care we provide to patients and their carers. By critically appraising the evidence, we can identify the most effective interventions and the most appropriate diagnostic tools and treatment options for patients, thereby promoting patient outcomes and carer experiences.
But critical appraisal is not just helpful for clinicians and decision-makers; it's also an important skill for researchers. By evaluating the evidence, we can identify gaps in knowledge, areas for further investigation, and opportunities to improve research methods. So for these and other reasons, critical appraisal is important, helping to separate what's important from what is not.
So how do we conduct a critical appraisal? Well, unfortunately, colleagues, there's no universally-accepted approach or, indeed, gold standard. And this has been indicated by a few reviews of myriad tools, suggesting there's no consensus on the preferred appraisal tool available to us. Now, while there is no gold standard, there are some commonly-used tools. Consider those of JBI, which you might know as the Joanna Briggs Institute. JBI offers some 13 tools to guide how you critically appraise different forms of research. There's also the tools of the Centre for Evidence-Based Medicine in Oxford, which offers six different tools to critically appraise different forms of research: systematic reviews, randomised controlled trials among others. Similarly, BMJ in London offers five tools to appraise different research designs: RCTs, randomised controlled trials, systematic reviews, among some other study designs as well.
And you might have also heard of the AMSTAR, used to critically appraise systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. And there are, of course, the eight tools offered by the Critical Appraisal Skills Programme, or CASP, in Oxford. CASP commenced in the '90s to help healthcare decision-makers understand scientific evidence. While varied, the CASP tools collectively encourage us to consider three questions. First, is the study valid? So the first step is to decide whether the study was unbiased by evaluating its methodological quality. The second question to consider is what are the results? If we decide that the study is valid, we can go on to look at the results. The third question we're invited to consider by the CASP tools is whether the results are useful. So once you've decided that your evidence is valid and important, you need to think about how it applies to your question. Is it likely, for example, that your patients or population may have different characteristics to those in the study?
With this brief overview, I'd like to now invite my colleague Wei to demonstrate how we conduct a critical appraisal.
Thank you very much, Ann. So start imagining, say, that you are a medical officer doing on-call, which is not uncommon in a lot of your experiences. One afternoon, while you're on call in a busy tertiary hospital, you get a consult from a respiratory team, saying that there's a 75-year-old female smoker with chronic obstructive pulmonary disease, importantly with CO2 retention and known right-sided heart failure, now admitted to hospital with type 2 respiratory failure, or a really high CO2 level, because she can't breathe.
The PaCO2 is as high as 60mmHg despite the BiPAP that they're trying to optimise, with a respiratory rate of 32 per minute. She also has hypoxia, saturating only 88% on the BiPAP machine. She's on extensive bronchodilators, including Atrovent 500 microgram Q4 hourly, salbutamol 5 milligramme Q2 hourly, and hydrocortisone IV 100 milligramme QID, plus the usual antibiotics, and the fan and the pursed-lip breathing that we all do in pall care.
Now, the respiratory team called for the pall care consult for symptom control, but actually demanded not to use opioids and benzos due to the fear of worsening hypercapnia and type 2 respiratory failure. I actually had this case in a well-known hospital, which is very disturbing. You then discuss the case with the respiratory physician and say, "Well, opioids and benzos are quite safe for breathlessness," but sadly, despite all your effort, the treating doctors still decide not to use them, because they're worried about worsening respiratory depression and the already-high CO2 level.
So, out of desperation, you start calling your friends for [inaudible] and also for advice. A friend of yours who is really into alternative and complementary medicine says, "You know what? You can't do the usual treatment, so why don't you try some acupuncture?" You then say, "Well, hm, how does that work?" So you start performing a search on PubMed, and find an interesting article in a famous journal, the Lancet. The article is about a controlled trial of acupuncture for disabling breathlessness. So you thought you might give it a read. And being a great advocate for evidence-based medicine, you decide to put this article to the test.
So I'm going to now use the CASP checklist with you, together. Because of the number of people, what we'll do is a didactic teaching style, with you polling your opinion of what you think, and we'll see how everybody does. It'll be anonymous. As Ann already described, the CASP checklist goes down a series of questions. Importantly, a lot of us clinicians, when we look at articles, tend to, because of our busy schedules, just look at the results and conclusion, or sometimes just the conclusion, and say, "Oh, what's the conclusion? Hm, can I apply it?"
In the CASP checklist, that's just like looking at sections C and D directly, foregoing everything else, and deciding, "I'm going to apply it." The problem with that is, a lot of the time the study may not be done in a way that is methodologically sound, and that means the result that's produced may not be totally trustworthy or reliable. So what we actually need to do, before we trust the result, is look into the study design and ask, "Hey, can I actually trust the result that has been produced?"
So what we'll do, unlike the usual practise, is actually start looking at whether or not the design of the study is methodologically sound before we go into the results. The first thing, section A: is the basic study design valid for a randomised controlled trial? And you might be surprised. Essentially the idea is, is this study that claims to be a randomised controlled trial really a randomised controlled trial?
To do so, firstly: like any research, it's about finding the answer to your question. And in order to do so, you need a question that is quite specific, because otherwise you can't find a specific answer. The question, then, is: do we have a specific goal or objective in the study? By that, you can also look into the population, intervention, comparator, and outcome measures as well. Often the aim of the study is located within the introduction, and often it will be the last few sentences of the introduction; that's where you can find it.
Just because of time, what I'll do throughout this workshop is give you the quotation of what the study includes, for you to then decide whether or not you think it meets the checklist criteria for CASP. So in this study it says, "Although there is controversy about whether acupuncture of any form is more than a placebo effect, and it has even been suggested that it is based on irrational principles, one can equally argue by inference from the work on acupuncture, pain, and endogenous opiates, that acupuncture might be able to alter the perception of breathlessness and sensations of distress in patients with COPD. We have investigated this hypothesis in a randomised controlled trial." And that's it. There's no explicitly stated aim in this article, and you move on to the method.
If you go to the method and look for the PICO, the population, intervention, control, and outcome side of things, you'll find that, population-wise, the article says, "All patients had COPD and all had been smokers. Two continued to smoke." For the intervention, it says, "Acupuncture needles were inserted into acupuncture points according to the principles of traditional Chinese medicine, with addition of moxibustion," so the burning of the herb artemisia, which is like an anti-inflammatory herb, over certain acupuncture points if indicated. Genuine acupuncture treatments were given on 13 occasions spread over three weeks.
For the control group, the authors mention that placebo patients received the same number of treatments over the same period, were assessed in the same way, and were given the same amount of attention. They were given exactly the same number of needles, with or without moxibustion, which were left in for the same length of time as their pair. The only difference was that the needles were inserted into non-acupuncture dead points, in an area along the middle of the knee, over the patella.
For outcome, it talks about subjective Visual Analogue Scale scores of general wellbeing and breathlessness with a five-point scale, an oxygen cost score, modified Borg scale, six-minute walk test, and lung function tests. Now, with that in mind, I'm going to ask you to take part in a poll, which Misbah will be helping us with. You'll see on your computer screen a poll will come up very shortly, and we'll see what you think: whether or not the study addresses a clearly-focused research question.
Thanks, Wei. The results are completely anonymous, so just choose what you think might be the best answer, and then I'll hand it back over to Wei, and we'll see what he has to say. And it looks like we had about 50/50 with yes and no, and one person was unsure. So Wei, I'll hand it back to you, and you tell us what the answer is.
We actually saw, in this particular case, that the study did not address a clearly-focused research question. Why is that so? Firstly, when we go back to the aim, the aim itself is reported in a way that is quite obscure and indirect. It wasn't identified in a clear sentence. Similarly, you can't find clearly what is considered the primary objective or the primary outcome. Likewise, if I say, "Hey, I'm going to measure breathlessness, and whether or not acupuncture works for breathlessness," you might ask me, "What do you mean by breathlessness? Do you mean your subjective breathlessness from zero to 10? Do you mean the distress?"
And because each different measurement necessitates a different kind of sample size calculation, you might not be able to get the answer you want from the study if you don't have a very clearly-identified aim.
Similarly for the population side: when the study says, "All patients had COPD and all had been smokers," it's not really clear; they do not report clear eligibility criteria if you read through the study. It doesn't talk about the severity of the COPD patients, including: are they functionally bedbound, or are they so functional that they can run one kilometre a day? Are they people who do or do not respond to bronchodilators? It doesn't really say.
Similarly, when it comes to the intervention: the acupuncture needles were inserted into the acupuncture points, but it doesn't describe where to ... And if you know Chinese medicine, you know that a lot of the time the effect really depends on where you put the needles. So for reproducibility, you really need to know where the needles are inserted. Similarly for the use of moxibustion, the burning of the herb that is anti-inflammatory; it's like giving somebody Nurofen, for example. The question then is, what's the dose that's been used? It hasn't really been mentioned either. And it's only used for people where indicated, yet nowhere in the article does it mention what the indication for using moxibustion is.
In terms of the control itself, they used acupuncture dead points, which are on the patella. Some people who practise dry needling may argue there are potential effects of dry needling. This is a difficult study to do, though. Importantly, it's not to say, "Oh, we can't use dry needling," just that the article also needs to talk about what the confounding effect may be, and how the authors therefore deal with it in interpreting the results.
And lastly, for the outcome, they list all the different kinds of outcomes measured. But importantly, it doesn't tell you which is the most important, what we call the primary outcome, which is the outcome measure that the sample size calculation is geared at so the study can pick up significant differences. So from that, you don't really know what the study is trying to target.
And therefore, because it is a bit obscure, we said no. But let's give it the benefit of the doubt and proceed further for now. So the second question, to try and make sure this is indeed a randomised controlled trial, is asking: were the participants randomised? It sounds like a very dumb question, because this is reported to be a randomised controlled trial. But the thing about it is, some people will say, "Well, if I write numbers one to 100 on a piece of paper, close my eyes and pick a number, that's called randomization." Or, "I put the numbers into envelopes and people just pick one, and that's randomization."
So how was randomization carried out? Is it truly randomised? Is the method that has been used appropriate? Was the randomization sufficient to eliminate systematic bias? And are the research staff truly unaware of the allocations? Would they deliberately think, "Oh, this patient's COPD is too severe, let's definitely give the patient acupuncture," for example? So do you really know this is a true randomization?
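As an aside, the kind of computer-generated allocation this CASP question is probing for can be sketched in a few lines. This is an illustrative sketch only, not the study's (unreported) method; the arm labels, block size, and seed are assumptions for the example.

```python
import random

def block_randomise(n_participants, block_size=4, seed=20240101):
    """Illustrative permuted-block randomisation for a two-arm trial.

    A fixed seed makes the allocation sequence reproducible and
    auditable; permuted blocks keep the two arms balanced as
    recruitment proceeds.
    """
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["acupuncture", "placebo"] * (block_size // 2)
        rng.shuffle(block)  # random order within each balanced block
        sequence.extend(block)
    return sequence[:n_participants]

allocations = block_randomise(26)
```

In practice the sequence would be generated by someone independent of recruitment and concealed from the recruiting staff (for example, via sequentially numbered opaque envelopes or a central service), which is exactly what the allocation concealment questions are asking about.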
So in the study itself, it's reported that the patients were randomly allocated to genuine or placebo acupuncture treatment. Then, pairs of patients, one from each group, were matched as far as possible for age, sex, severity of dyspnea, shortness of breath, oxygen cost, general wellbeing, and measured FEV1 from the initial three baseline values, by a physician not involved in the rest of the study. Such allocation of patients produced two groups well-matched in all respects. It also says the study was "intended to be single-blind throughout" for the authors, but one author could not remain blind for practical reasons.
In the discussion, they mentioned that the acupuncturist could not keep treatment blind, and no study of traditional Chinese medicine could ever achieve this. "Seen in the context of the traditional approach of the Chinese physician, the use of a placebo control group might have presented ethical problems. It could be argued that placebo treatment may not, therefore, have been given with equal conviction."
And lastly, they mention, "To obtain the greatest possible placebo effect, all patients were under the impression that they were receiving true acupuncture at entry to the study, so for them treatments were truly blind." Now, the question for you and the poll is: was the assignment of participants to interventions randomised? And over to Misbah.
Thank you, Wei. All right, let's share these results. As you can see, 82% of the people who responded are saying no, they don't believe this was randomised, Wei. So what do you have to say about that?
Thank you, Misbah. So we thought, actually, we can't tell. Firstly, on the method itself, the patients are claimed to be randomly allocated, but it doesn't report how this was done. Was it done by a random number generator, or by writing it on a piece of paper? And similarly for blinding, the paper does genuinely report that one author could not remain blind for practical reasons. But usually when that happens, an author will then write about the reason for that, and in this case that's not reported either. So overall, that's why we said we can't tell.
Let's move on to the third question, then. Were all participants who entered the study accounted for at its conclusion? Because you want to know that if you give, let's say, the intervention of acupuncture here, when you analyse the study at the end and say acupuncture works, you had enough participants, rather than 80% having dropped off, or died, or become so painfully irritated by acupuncture that they left. If everyone for whom it isn't working, or who gets tremendous side effects, leaves, and you only analyse those who can tolerate it and report that result, that's a problem, essentially. So you really want to know that, from the start to the end, all the participants' movement is accounted for.
So to find this kind of information, you need to look in the results section, and generally it will be in the first part of the results, in what we call, for a randomised controlled trial, a CONSORT diagram. A CONSORT diagram is a flowchart, which you can see on the right, and generally you will see how many patients were assessed for eligibility and then enrolled, how many, for whatever reason, dropped off before proceeding to the study, how many came to the baseline assessment, how many had the actual intervention, and how many came to the follow-up and analysis stage. That way you can know, for the results that are actually presented, what the population consists of.
And so generally you also compare table one, the baseline demographics, with table two, which is generally the main result, to look for any discrepancy. So, for example, if 80% dropped off, you know there's something wrong, and the authors tend to discuss that in more depth as well. As for the data analysis, it will be in the last part of the methods, where they talk about how they analysed the data.
So for this study, it mentioned that 26 patients were selected from the outpatient department of Osler in Oxford. They were randomised to genuine or placebo acupuncture points, resulting in 13 pairs. For the treatment, it says, "One patient whose medication was changed because of an acute exacerbation after baseline testing, and before starting treatment, was withdrawn from analysis together with his matched pair." And for the statistics, it mentioned, "Although 13 pairs entered into the trial, matched data following treatment have been analysed for only 12 pairs because of the pair withdrawn." That's the one patient lost, essentially. "And measures of FVC are for 11 pairs, since one patient could not perform this test." The study was not stopped early. So, question: were all participants who entered the study accounted for at its conclusion? Let's start the poll.
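The pair accounting the authors report can be checked with simple arithmetic. Here is a tiny sketch, with the numbers taken from the paper as quoted above:

```python
# Participant accounting for the acupuncture study, as reported:
# 26 patients -> 13 matched pairs; 1 pair withdrawn before treatment;
# matched analysis on 12 pairs; FVC available for 11 pairs.
flow = {
    "patients_enrolled": 26,
    "pairs_entered": 13,
    "pairs_withdrawn": 1,   # medication change after baseline; matched pair removed too
    "pairs_analysed": 12,
    "pairs_with_fvc": 11,   # one patient could not perform the FVC test
}

assert flow["patients_enrolled"] == 2 * flow["pairs_entered"]
assert flow["pairs_entered"] - flow["pairs_withdrawn"] == flow["pairs_analysed"]
assert flow["pairs_with_fvc"] == flow["pairs_analysed"] - 1
```

This is the same check you do by eye on a CONSORT diagram: every participant who enters must appear somewhere downstream, either analysed or explicitly withdrawn.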
Thank you, Wei. So yeah, were all the participants accounted for? I'll share the results now. Nearly 60% of participants said yes, all of the participants were accounted for, a couple of people said no, and a few people couldn't tell. So could you just explain that a bit further, Wei?
Thank you, Misbah. So yes, in this case, with the detailed answers and reporting, we actually thought that the participants were accounted for at the study's conclusion, including the one that was lost, and they did report it. So, we're going to move on to the next poll really shortly. Do you think this article is worth continuing with, considering the previous questions?
Is this study worth continuing with? What do you guys think? If we look at the results here, which I'll just share: 71% of people are saying no, they don't think it's worth continuing, and the remainder said yes, they would consider continuing. What do you have to say?
It's very interesting in that way, because I think as clinicians, a lot of the time we just read whatever we get, and we read the whole thing. And I think it's just important to keep in mind that the CASP tool is simply saying that if a study is not methodologically sound, then the results might be problematic, so don't rely on it, and don't trust it and make your major decisions on it.
So in this case, according to the CASP rules, we actually said no, it's not worth continuing. You might say it's worth continuing because you want to learn about how they did things; feel free, go ahead, definitely. We're talking about the context of reading on past the results and possibly making a decision on them. In this case, don't do that, because you don't know whether these are reliable results.
So, the implication of a so-called bad paper. In journal club, a lot of the time, as junior medical officers, we always want to critically appraise the paper. And the idea of appraisal is a funny one, because most of the time when I was doing it, I thought, "Oh, that means I need to be very critical. I need to try and criticise the paper or the author, find faults with it." Trusting something sometimes becomes very hard.
And so generally, if you give me a tool and say, "Use this tool to measure how good a paper is," and a paper doesn't fulfil the criteria, I'll say, "Oh yeah, that's a bad paper." With this bad paper, then, I might think, "Hm, maybe the intervention's not good enough?" So our question to you is: given that it failed the CASP criteria, would you now forget acupuncture for this case, the COPD patient you had who can't have opioids and benzos, think acupuncture is a useless treatment not worth pursuing further, and tell your friends acupuncture is worthless? Or would you think, "Hm, acupuncture may still be worthwhile and worth investigating further despite the study"? Or would you contact the author, ask for further clarification, and then decide what you would do? Let's start polling.
I've just launched the poll. I don't think there are any right or wrong answers here; it's just about what you think you would do with the information that you've been presented. So: forget about acupuncture; think it may or may not be worthwhile; or contact the author and ask for further information? Now I will just share the results. So Wei, it looks like the majority of the group thinks acupuncture is worthwhile and worth investigating further. That was 70% of respondents. 25% want to contact the author and ask for further information, and 5%, or one respondent, said that acupuncture is not something they would consider.
Thank you, Misbah. So I think from the JMO perspective, a lot of the time when I was presenting in journal club, I used to think, "Ugh, a bad paper means that intervention should just go out of our usual clinical practice," and in fact, you might see this generally in a lot of clinical practice. Once a paper comes out with a negative study, for example, then that so-called intervention is no longer popular anymore.
Now, that may miss a number of nuances. What we're trying to say is that the implication of a bad paper is this: an inadequately designed or reported study showing that an intervention doesn't work is not the same as a well-designed and reported study showing that an intervention doesn't work. So a study failing this test doesn't mean that the studied intervention doesn't work; rather, the properties of the intervention remain uncertain and may need further evaluation.
Whereas if the study is really well-designed and says to you, "Acupuncture doesn't work," for example, then definitely don't go ahead and use acupuncture. In fact, after this study, in 2020 a systematic review reported that acupuncture improved breathlessness severity in patients with advanced disease. So if I had just, like I used to do, thought, "Oh, this is a bad paper, I'm going to forego this intervention," then I might have missed a potentially useful intervention for my patient.
So the key thing I want to drive home is that we want to build bridges for ourselves and with others, rather than burn bridges. Rather than doing critical appraisal with a very critical attitude, in a way that says, "Ah, this scientist or this author did a crappy job, I'm not going to believe what they say anymore," or, "This intervention is covered by a bad paper, and therefore the intervention doesn't work, I'm not going to read about it anymore or contact the author." Rather than doing that, if you find a study is inadequately reported, it's worth actually contacting the author, building a bridge with the author, and saying, "Hey, what do you mean by 'you did randomization'? How did you do it?" Because it may be that the randomization was done really well, it's just not reported. And if you ever get to be a journal reviewer, that's what you will be doing a number of times: trying to help the authors report fully, in a way that allows other people to reproduce the study findings.
So moving back to the case: you ponder what to do next after reading this paper. The medical student on your team has a light bulb moment and says, "I recently got told of nebulized furosemide. Down southeast in Sydney, people used that a lot five years ago in the management of breathlessness. What do you think? Do you think it would work?"
And so again, you do a literature search, and you find a randomised controlled trial examining the use of nebulized furosemide in the setting of COPD exacerbation. This is an article by Vahedi titled, "The Adjunctive Effect of Nebulized Furosemide in COPD Exacerbation: A Randomised Controlled Trial," and again, you decide to critically appraise the article using CASP.
So to do so, again, we'll go through the A, B, C, D sections. First: did the study address a clearly-focused research question? And where do we find it? As a reminder, we find it at the end of the introduction, as well as in the methods. First, the aim: the end of the introduction mentions, "To examine the effect of nebulized furosemide as an adjunct to conventional treatment of patients with COPD exacerbation in an emergency department." The population specified in the methods is adults aged 18 years or greater with COPD exacerbation, defined as increasing dyspnea within 24 hours of hospitalisation as defined by the American Thoracic Society and the Global Initiative for Chronic Obstructive Lung Disease guidelines, and the participants also need to not be on mechanical ventilation.
The intervention itself is furosemide 40 milligrammes, with that concentration giving four mLs of volume, plus conventional treatment. The control is normal saline, four mLs, plus conventional treatment. Conventional treatment is defined as the use of oxygen at 0.5 litre per minute for 30 minutes, plus the use of salbutamol 200 micrograms and ipratropium 40 micrograms via a metered dose inhaler without a spacer, plus hydrocortisone 200 milligrammes IV.
The outcomes they specified: the primary outcome, which is what the sample size calculation is geared at, is the change in FEV1 and dyspnea severity at one hour after treatment. The secondary outcomes are heart rate, breathing frequency, blood pressure, partial pressures of CO2 and oxygen, bicarbonate, and saturation. So, the question to you is: did the study address a clearly-focused research question? Let's start the poll.
We've got 17 responses. I think these people have really honed their critical appraisal skills already, so this is some really great news. I'll give people another moment to respond. We have a clear winner: 100% of you think yes, there is a clearly-focused research question here. So is that the correct answer, Wei?
I think overall, our team did say that this study addressed a clearly-focused research question, because it is so specific and able to answer the PICO domains, as well as the aim, in such a specific way. So the next question we have is: was the assignment of participants to interventions randomised? Essentially, is this a true randomization, considering how it was carried out, whether it can eliminate systematic bias, and the allocation concealment as well.
This you can usually find in the methods section. If you have time to read it, which I've provided on the slides, under the randomization process, in quotation marks, the study authors say, "We used statistical software to randomise the subjects into intervention and placebo groups." For the question of whether it sufficiently eliminates systematic bias, you can look at table one, the baseline demographics, and compare the groups at baseline to see whether there are significant differences. You can read it on the right side, and I'll give you around 10 seconds to have a read and see what you think.
Meanwhile, for allocation concealment, that is, whether the people in the research team are blind, essentially, the authors wrote, "Daily, an assistant who was not involved with the subjects' care blindly provided the medication/placebo by filling 10 similar vials, labelled one or two, with four mLs of furosemide or four mLs of normal saline. The furosemide group received vials labelled one, the saline group received vials labelled two." Question to you now: was the assignment of participants to interventions randomised? Let's start the poll.
So what do you guys think? Do you feel these interventions were randomised or not? Okay, we've got about 15 responses so far, and there's definitely one answer that everyone is leaning towards. Just sharing the results, we can see here that 83% of you, 15 altogether, say yes, you think the participant assignment was randomised, and we've got a couple of people who say no, or they can't tell. But Wei, could you please elaborate further on this one?
Thank you. So in our case, we actually thought it's good [inaudible] and said yes, or maybe yes. Because nothing like this is ever 100% definite, but it leans towards the yes side. That's because they mentioned they used SPSS to randomise subjects. Also, in terms of table one and whether there are differences between the groups, most of the variables show no statistically significant differences, but there are a few variables that are statistically different. And you may argue, can you ever achieve a case where there is no statistically significant difference at all between the two groups? It's relatively random and beyond people's control.
So let's look at what is different. I've highlighted it on the right side, underlined in blue. The heart rate, for example, is different, as are the bicarbonate and the FEV1 percentage predicted. If there is a statistically significant difference, what we can then do is look at whether there is a clinically significant difference. Looking at the heart rate, the baseline for the furosemide group, the intervention group, is 88.9, whereas the placebo is 101. And you may say, "Oh, a difference of nearly 10 is, clinically, maybe not too significant," whereas if it were 30 or 40, maybe that would be more significant. So I'll leave that up to you guys.
Meanwhile, for the bicarbonate, one is 31 and one is nearly 29. And the FEV1 percentage predicted is nearly 55 versus nearly 53, even though that difference is statistically significant. So overall, we thought that the clinical differences were relatively small, and therefore we can consider proceeding.
As for allocation concealment, we can't tell whether the investigators knew the randomised sequence. But we know that the administrator, the participants, and the assessors, which partly consist of the participants for the primary outcome, because the primary outcome is a subjective score of breathlessness that relies on the participants to report on themselves, did not know whether they had been given saline or furosemide. And therefore we say, even though we can't tell fully, there is at least partial blinding. So we've given it the benefit of the doubt, and we'll say yes.
So we'll continue on to the third item: were all participants who entered the study accounted for at its conclusion? Again, looking from the start of the study to the end, did the report give you a sense of all the patients who dropped out, and whether there was a systematic reason for them leaving the study, such as side effects or benefits or whatever it is?
To find that, as a reminder, you generally look at the first section of the results, at the CONSORT diagram, looking at the dropouts and loss to follow-up, and then you compare the baseline data, which is usually table one, the baseline demographics, with table two, which is usually the main result. So look for any dropout. And for how they analysed the data, whether quantitatively or qualitatively, you look at the end of the methods, where they talk about how they analysed the data.
For the results, I've summarised them here for you. The authors say they enrolled 100 patients, so we know that's the number enrolled. At baseline assessment, they analysed or described 50 furosemide participants and 50 placebo participants. When they reported their primary outcomes, they also reported 50 in the intervention group, as well as 50 in the placebo group. The question to you is: were all patients who entered the study accounted for at its conclusion? Let's start the poll.
Okay, let's see what you think. Yep, I think we've got a clear response. 73% of respondents said yes, they're happy that all the participants were accounted for, and the remainder can't tell. So could you please explain that a little bit further?
Thank you. As a group, we actually said yes to that. Overall, in answering: the trial itself was not stopped early, the number started at 100 for the whole trial, there was essentially no dropout reported, and there's no mention of any cross-contamination. And were the patients analysed in the groups to which they were randomised? The answer's yes, and that's why we wrote yes for that. So, is this article worth continuing with, considering the previous questions?
Would you continue, yes or no, based on all the information you've seen today? We do have a clear winner here, people: 100% of respondents, all 18 of you, said yes, this is worth continuing.
Yeah, we also thought that this article is worth continuing with. Because of that, we're going to move on to section B. I hope you get a sense of what section A is trying to do: if you feel section A of the CASP tool is not fulfilled, then don't waste your time, and do not trust the results. So this time we're going to look at, given it is a randomised controlled trial, whether its conduct is methodologically sound.
With that, again, there's some overlap with section A, which will save us some time. Were the participants blinded to the intervention they were given? Were the investigators blinded to the intervention they were giving to the participants? And were the people assessing and analysing the outcomes blinded?
And so we go back to what we were looking at previously. In the methods section, the authors mentioned, "Daily, an assistant who was not involved with the subjects' care blindly provided the medication/placebo by filling 10 similar vials, labelled one or two, with four mLs of furosemide or four mLs of normal saline. The furosemide group received vials labelled one, and the saline group received vials labelled two."
And so this time, when we ask, "Were the participants blind to the intervention they were given?", we answered yes. But on the investigator side, we actually can't tell whether the investigators knew the randomised sequence. We do know that the assessors, including the participants themselves for the primary outcome, were blinded according to the outcome measures they mentioned.
So, the next question is: were the study groups similar at the start of the randomised controlled trial? Which we also looked at as part of the section A questions. Taking us back again, we look at table one and try to find the differences. Just to recap: initially we look for statistically significant differences, and if there is a difference, we then look at whether that difference is clinically meaningful. In this case, we previously identified those three variables, and said that the intervention group had a lower heart rate, higher bicarb, and higher FEV1. Judging clinical significance is the subjective part, depending on your clinical experience, which is why there's no strict right or wrong answer in CASP tool usage. Based on your experience, as well as the patient in front of you, you may say, "Hm, okay, the clinically significant difference seems small," for example.
And so we said, "Well, the study groups were similar at the start of the randomised controlled trial," relatively. Then we move on to the sixth question, which asks: apart from the experimental intervention, did each study group receive the same level of care? That is, were they treated equally? If you're not sure how to conduct that assessment, the CASP tool asks you to consider, for example: was there a clearly-defined study protocol? Was any additional intervention given to any particular group? Were the groups treated similarly? And what about the follow-up intervals for both groups; were they set the same? Or would you say that because one group was given furosemide, the investigators were a bit stressed about side effects and therefore followed up a bit more rigorously than in the placebo group?
To find that information, you look at the methods section again, in the intervention and control paragraphs. It mentions, in the whole paragraph on the right side, which I've summarised on the left for you, "All subjects received conventional treatment," and they also define conventional treatment, as we previously mentioned, as oxygen at 0.5 litre per minute for 30 minutes, followed by the use of salbutamol 200 micrograms and ipratropium 40 micrograms via a metered dose inhaler without a spacer, and hydrocortisone 200 milligrammes IV. They also mention, "All variables were measured again one hour after treatment." So there's no difference between the groups, and therefore we said as a panel that yes, the study groups, whether they received furosemide or saline, received the same level of care according to what was reported.
So, that's section B for you. Given that we thought section B was relatively okay, we can move on to what a lot of clinicians are very interested in: what are the results? To look at the results, we ask whether the effects of the intervention were reported comprehensively. You may say, "What do you mean by comprehensive reporting?" For something to be comprehensive, a lot of the time you want to know whether a power calculation was undertaken and whether the sample size is big enough. Because it can be that an intervention is effective, but the study just did not recruit enough participants to show that the intervention works. So if you only look at a conclusion that says, "Oh, the intervention is no better than the placebo," you might think, "Oh, this drug just doesn't work."
So when you see a negative study, you might have to go back and look at the sample size, and look at the study limitations and ask, "Did they have any trouble recruiting?", for example. A lot of the time with palliative interventions, when we say a drug doesn't work, the outcome that was measured, let's say pain or breathlessness, is a secondary outcome. And if it's a secondary outcome, it means the sample size calculation was not designed to demonstrate a difference for that outcome. So if you find a study that says, let's say, duloxetine doesn't help with neuropathic pain, and yet pain is a secondary outcome measure, you may say, "Well, actually, let's look at the recruitment rate and the sample size." Maybe to really demonstrate it properly, we need to redo the study, with the outcome of interest as the primary outcome, and do the sample size calculation accordingly.
That brings us to another point: how were the outcomes measured? Are they clearly specified? We need to know that. And what about how the results are expressed? If the results are expressed as a percentage of positive responses, as a binary outcome, were both the relative and absolute effects reported? Let's say the authors report, "Lasix improves the person's breathlessness by 50%," and yet the absolute improvement on the absolute scale is less than one point; would that really be meaningful? So you need to look at whether they report both absolute and relative effects.
Lastly, with all that being said, importantly: any missing data? If there is missing data, is there a particular reason it is missing? Did the authors mention, "It's because of the harms or the benefits that the patient dropped out," for example? And finally, looking at the tests used and p-values: are they reported? Have they addressed potential sources of bias?
So overall, where do you find this information? A lot of it can be found in the methods, the results, and the discussion. The power, statistical tests, p-values, and sample size calculation are in the methods. In this article, you can find that it mentions, "We used statistical software, Stata, to estimate the minimum required sample size to detect a similar difference, with type one and type two error both set at 5%, and the sample size calculation was 78 subjects."
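To give a sense of how such a calculation works, here is a minimal sketch of the standard normal-approximation sample size formula for comparing two means. This is a generic textbook formula, not a reproduction of the authors' Stata calculation, and the example effect size is an assumption for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a mean difference `delta`
    when the outcome has standard deviation `sigma`:
        n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sigma / delta)**2
    """
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

# e.g. to detect a half-standard-deviation difference with 80% power:
n_per_group(delta=0.5, sigma=1.0)  # 63 per group
```

Note how the required n grows with the square of sigma/delta: this is why a study powered for its primary outcome can easily be underpowered for a secondary one, the point made above about negative findings on secondary outcomes.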
They also mentioned, "To cope with the possibility of non-adherence, dropouts, and/or missed or excluded measurements, 100 patients were recruited," so just keep that in mind: they are trying to account for missing data. Next, in terms of the actual data analysis, when they are looking at results, on the right side they mention that the data were analysed using the statistical software SPSS, using chi-squared for categorical variables, that is, the binary outcomes, and the independent t-test for the non-binary outcomes.
The outcomes measured and results reported are mentioned in the methods as well as table two, which say, "All the outcomes were measured again one hour after treatment. The primary endpoints were the changes in FEV1 and dyspnea severity. The secondary endpoints were the changes in the other parameters," and these are the parameters all reported here, with the primary outcomes being the last two over here. The missing data, dropouts, and bias are mentioned in the discussion, which is where you can look, as well as the limitations.
So overall, as a panel, looking at this briefly, we said, "Yes, the effects of the intervention were reported comprehensively." Moving on to question eight: was the precision of the estimate of the intervention or treatment effect reported? You might say, "What does precision mean?" So let's take you back to statistical analysis 101. If I have 10 people in a room and I measure their height, and I tell you the average height of people in the room is 170 centimetres, you may say, "Great." But then, to really imagine who is in the room and how tall they are, you might ask me, "What is the range of the heights of people in the room? Is it as high as two metres and as low as 110 centimetres, or is everybody similar, between 160 and 180? What's the range of this data?"
And that's called the precision of the estimate. Commonly in the trial setting, people like to report it as a confidence interval, for example a 95% confidence interval: you get a range, and you know that 95% of the time the true value lies within this range. However, sometimes, rather than using a confidence interval, people might report the standard deviation or the standard error. And there are ways for us to convert these values to get what we need: overall, a range, essentially, of the data being presented.
So over here, in this article, they didn't give confidence intervals, but they gave standard deviations for the intervention and placebo groups. On the right side, you can see that for the intervention group, the first value is the average value, or the mean, and the next number, after the plus or minus, is the standard deviation. That's explained here in small print by the authors.
You may say, "I don't know how to read a standard deviation." One way to get a better sense is to convert it to a confidence interval: multiply the standard deviation by 1.96, which is nearly two, and you get a 95% confidence interval. For example, take the dyspnea score, the primary outcome at the very bottom here. The intervention group has an average change of minus 2.7 from pre- to post-nebulized furosemide, with a standard deviation of plus or minus one. Take that one, multiply it by two rather than 1.96, and you can say, "Okay, the dyspnea score therefore has a 95% confidence interval ranging from -0.7 to -4.7."
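The conversion just described can be sketched in a few lines of Python. This is only an illustration of the speaker's shortcut (multiplying the standard deviation by roughly two); the mean of -2.7 and SD of 1 are the dyspnea values quoted above, and the function name is my own.

```python
def approx_95_range(mean, sd, multiplier=1.96):
    """Approximate 95% range: mean +/- multiplier * sd.

    The talk rounds the 1.96 multiplier up to 2 for mental arithmetic.
    """
    half_width = multiplier * sd
    return (mean - half_width, mean + half_width)

# Dyspnea score change in the furosemide group: mean -2.7, SD 1
low, high = approx_95_range(-2.7, 1, multiplier=2)
print(f"{round(low, 1)} to {round(high, 1)}")  # -4.7 to -0.7, as quoted in the talk
```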
We can do the same thing for another outcome they mentioned, breathing frequency, which is one of the secondary outcomes. They reported that after Lasix, on average people reduced their respiratory rate by seven per minute, with a standard deviation of 3.2. If you multiply this number by two and put it as a range around minus seven, then the 95% confidence interval for the breathing frequency change is around -0.6 to -13.4 per minute after the Lasix.
So overall, you may say, "Wow, that is a big range, and therefore the results are not precise enough." And if you ever find a result not precise enough, that tells you the study needed a bigger sample size; a larger sample is how you gain precision in the results.
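As an aside on why a bigger sample buys precision: a confidence interval for a mean is built from the standard error (the standard deviation divided by the square root of the sample size, mentioned earlier as another way precision is reported), so the interval narrows as n grows. A minimal sketch, using the SD of 3.2 quoted above for breathing frequency; the sample sizes here are hypothetical:

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width of a 95% CI for a mean: z * standard error,
    where the standard error is sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

# Same spread (SD = 3.2), increasing hypothetical sample sizes:
# quadrupling n halves the width of the interval.
for n in (10, 40, 160):
    print(n, round(ci_half_width(3.2, n), 2))
```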
So overall, for this study, was the precision of the estimate reported? I said yes, they did report it, even though it's reported as standard deviations rather than confidence intervals, and even though the estimate is not very precise. The question is whether it's reported, not how precise it is.
The next question is: do the benefits of the experimental intervention outweigh the harms and the costs? When you as a clinician are looking through a trial, you want to know, is this worth it? And to know that, you're trying to figure out whether the benefit outweighs the harm. To do so, you first look at the benefit. Is the benefit big or small? That's the effect size. Then, if the effect is good, are there side effects, and what are they? How big are they? And at a service level, if you are the director of a service, you may say, "This intervention might be too costly, or not," and you want a cost-effectiveness analysis to be undertaken.
In terms of the treatment effect, in the literature people often talk about a value called Cohen's d. A Cohen's d of 0.2 after the statistical analysis tells you the treatment effect is small; at 0.5 the treatment effect is medium; and at 0.8 or greater it's a large effect size.
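Cohen's d isn't reported in this paper, but if a trial gives you two group means and standard deviations, it's straightforward to compute and to bucket against the thresholds just mentioned. A minimal sketch; the numbers in the example are hypothetical, not from the study:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

def effect_label(d):
    """The conventional 0.2 / 0.5 / 0.8 thresholds."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

# Hypothetical groups of 50: means 2.7 vs 1.7, both with SD 1.0
d = cohens_d(2.7, 1.7, 1.0, 1.0, 50, 50)
print(round(d, 2), effect_label(d))  # 1.0 large
```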
That's how a number of trials will report it. However, it's not reported here, and when it's not reported, your way of judging whether the treatment effect is big often comes down to clinical relevance, which depends on your own experience and the clinical context.
In the results themselves, and oftentimes you'll find the results in papers as table two or table three, usually table two, look in particular at the primary outcome, which is what they set out to target. Compare the furosemide group to the placebo group. Let's look at the dyspnea score first. Those who got the furosemide had, on average, a minus 2.7 change in their breathlessness score from baseline, versus the placebo group, so you can say that breathlessness for people who received furosemide was one unit better than for those who received the saline.
Similarly, we can do the same for FEV1, the airway obstruction percentage, looking at the furosemide and placebo groups: subtract the two means and you can say, "Those who received furosemide did 6.6 percentage points better on their FEV1." If we do the rest of the secondary outcomes, you'll find that the respiratory rate for those who received furosemide is 3.7 per minute better on the mean, and they also had a lower PaCO2 with a higher partial pressure of oxygen, potentially reflecting that they were less physiologically stressed and not hyperventilating as much.
The authors also proceeded to a subgroup analysis. What they did is look at different categories of FEV1 for COPD patients. Generally, respiratory physicians will say that an FEV1 percentage of 51 to 80% is considered mild COPD, whereas 31 to 50% is considered moderate to severe COPD. They found that subjects with more severe COPD, who had a lower FEV1 on admission, seemed to get more benefit from furosemide than those with a higher baseline FEV1. The only thing is, they didn't report whether this difference is statistically significant.
After benefit, you look at the harm. To report the harm quickly from this study: they mentioned a drop in blood pressure from this intervention, with a mean of minus 8.9 millimetres of mercury and a standard deviation of 10. Therefore you may say, "Well, depending on the patient's blood pressure, that might limit its use." If your patient's systolic blood pressure is already only 80 or 70, you may say, "If I give the patient this drug, they'll have a cardiac arrest," whereas if the blood pressure's okay, then that's not too bad.
And remember, you can transform the standard deviation by multiplying it by two to get your confidence interval. You may then say for the case you're looking at, sitting in your clinic, that the blood pressure's good, and you can't use opioids or benzos, so maybe it's worth a try; just consider the clinical net benefit in your case. Now, looking at cost-effectiveness or health economic analysis, usually people talk about the money needed per unit of health benefit. However, this was not done in this RCT, and many older RCTs didn't do it either. It's a grey area for trials, so you'll find that a number of trials don't do it, and if you're running trials, it may take a specialist health economist to do this properly in your trial.
So the question then is: do the benefits of the experimental intervention, in this case Lasix, outweigh the harms and the costs? And I'd think that overall, yes, likely the benefits outweigh the costs. After section C, looking at the results, we then ask: are these results useful? By that we mean, "Will the results help us locally?" To answer that, you look at whether you can apply the results to your local context, or to the patient in front of you. What you need to think about, then, is: are the study participants similar to the people you care for? Are there differences between your population and the study participants, and would they alter the study outcome? And are the outcomes, as measured and reported here, important to your population?
Because your patient may say, "Well, actually, I don't feel breathless, that is not important for me. All I care about is that, with my COPD, I can do my laundry, and I can walk far enough to get to cafes and do my shopping." What they care about is exercise tolerance rather than subjective breathlessness, so keep that in mind. Or some people may say, "All I care about is that I can drive without getting foggy in my head," in which case you might want to consider an outcome measure of, for example, drowsiness or ability to drive. If that's the case, this study might not help you make that decision. Are there any other outcomes you would want information on that weren't reported or studied? Are there limitations of the study that would affect your decisions?
To do so, a lot of the time you look through the methods again, the results, the tables, and the discussion. From that, you can check, for example, the eligibility criteria in the methods and ask, "Are they similar to my patient population?" Look in the results at the table one demographics: are they similar to the population I'm caring for? The outcomes that have been reported: are they meaningful to me or my clinical practice? And lastly, the limitations: is there anything else that would make me interpret the results in a different way? Can I generalise the findings, can I apply them to clinical practice?
In fact, a number of experienced researchers will tell me, "Hey, rather than looking at the study, the results, and the conclusion, and then starting to read deeply into it, we should prioritise reading the limitations. Because that's usually a small section, you jump to the limitations, see what the authors say about their own limitations, and then you'll get a sense of how to interpret the results when you read them."
So for this case, our patient likely has really bad COPD at baseline. She has a worse baseline FEV1 percentage than the moderate/severe baseline characteristics presented in this trial. Both the paper's context and our current clinical case are based in an acute hospital rather than at home. Given the case likely has a worse baseline FEV1, she might have a higher chance of responding positively to furosemide, according to the study finding that those with a lower FEV1 percentage have less potential to bronchodilate with conventional treatment, and therefore respond better to Lasix.
The secondary outcomes reported, for example the smaller reduction in PaCO2 and greater improvement in PaO2 in the Lasix group than the placebo group, might also give an indirect indication of improved comfort for those on nebulized furosemide. So what other outcomes would you want that might not have been studied? What I as a clinician might want: I was quite keen to look at whether there were diuretic effects, whether the use of Lasix changes the total opioid and benzo requirement, and the subsequent impact on overall alertness and cognition if, for example, less opioid and benzo is used.
However, while these outcomes are quite desirable to include in trials, a lot of the time you'll find that in a number of palliative care trials we have too many outcomes. Simply put, the patients will just get tired; you can't ask them too many questions or run too many tests. To be feasible, sometimes we do need to aim for a balance, and you might say, "I need to run a different trial later to address that." So in this case, not having those outcomes may not affect the management decisions for the person in your case. And this will be different case by case.
So can the results be applied to your clinical context or your local population? For this current case, in my context, I wrote yes, or likely yes. But again, remember, in a different context, in a different hospital, with different resources, it might not be the case. The next question to think about is: would the experimental intervention provide greater value to people in your care than any existing intervention? What you're trying to think about is that this is an experimental drug or intervention. Compared to the standard of care that you have, with your current resourcing, with a labour shortage of doctors and nurses, and the knowledge deficit, if you tell people to do something new, sometimes they might do it wrong, or they might get really nervous about it. Is the resourcing you need to introduce this intervention too much, or is it relatively doable? And if you're going to invest resources to introduce this new intervention, what is the opportunity cost?
Again, this is something that doesn't have a clear-cut answer, but it's something you need to think about; it's locally dependent and case dependent, and you need to consider it in your own setting. So for you, in your current service setting, if this patient presented to you, do you think this study would provide enough value for you to change your practice? Let's do a poll.
If you can just take a moment and reflect on the sector you work in: with the information that's been presented to you today, do you think furosemide is a better option? Yeah, we've really got a mixed bag here, because, as you said, Wei, it is locally dependent, and you kind of have to look at where you're working, who you're working with, who you're treating, and it's all on a case-by-case basis. So I guess it's no surprise that it's a mixed response. The majority said yes, some people said no, and some people said can't tell. Did you have anything else you wanted to add to that, Wei? What do you think?
Yeah, so again, highlighting that it is very service-dependent. In the particular hospital where this case happened, the respiratory team was so scared of opioids and benzos that those were just not going to happen, and the person was having a lot of respiratory issues. Given that we couldn't use conventional palliative care pharmacological interventions, and that in this particular ward they had given nebulized Lasix in the past, with the nurses having that experience, we actually did end up saying yes, we should give it, and in fact the patient did receive it.
Whereas, for example, if you work in a different setting and the respiratory physician says, "You know what? I'm actually pretty okay with using opioids and benzos," and the evidence for opioids and benzos is much better than for Lasix nebulizers, then you may say, "You know, I don't need that, and this study won't change my practice for this particular case in front of me." So just take that into consideration.
So after critically appraising this article, would you use a Lasix nebulizer in this patient? What would you do the next time you see a similar case? That's something to think about. Any comments or questions? I'll open this to the floor for discussion now.
A lot of what you said in your presentation, Wei, connects for me with medical misinformation and critical appraisal skills; I definitely connect the dots there. Because even everyday people need critical appraisal skills to look at the news, magazines, or media, critically appraise the information they see, judge whether it's right or wrong, and decide whether that information applies to them. That's just something that came to the forefront of my mind while I was listening to your presentation.
Yeah, so we just had one comment that says, "Thank you for a really clear example of how to utilise a CASP checklist for this type of study." Thank you, thanks for that feedback. And another comment that just says, "Thank you, Wei, for this well-presented workshop." So lovely, thank you for that feedback.
Thank you, Misbah. I think the principles of this tool, as you said, don't just apply to journal articles. Sometimes with families at home, the older generation watch social media and say, "Oh, this new drug is coming out, this magical pill for this cancer, I want that, it definitely works; immunotherapy's definitely the way to go for cancer, no side effects, everybody just gets cured. Oh, the cannabis oil." When we get all these kinds of social media claims, it's important to go back and ask, "Hey, how was the study done? What exactly was done in the study?" to shed some light on whether or not you can trust it. Yeah-
Definitely.
Yeah.
And who funded the study as well.
True, true, Ann.
That's a good point, yeah, for sure.
So essentially what we've done in this workshop is talk about what critical appraisal is. Ann very clearly went through a concise but very interesting history of critical appraisal and the various different tools that are available, and we just want you to realise that there's no one way of doing things. This is still evolving. For example, the Lancet was already such a famous journal back then, but the way papers were reported then versus now is actually quite different, and the same goes for Nature and the other famous journals. You'll find that the reporting criteria you must follow as an author keep changing. So it is very important that we continue to look at information critically, yet not necessarily with a critical attitude; in a way that builds bridges rather than burns them down.
And therefore, using CASP as just one example, you can use it to quickly appraise a study, not necessarily with a 100% right or wrong answer, but also as a tool to communicate with your colleagues and say, "Hey, what do you think? I don't really think this study informs me in this way." Remember there's no necessarily right or wrong answer, so don't go looking at this slide and say, "Oh, I didn't get this answer." It's not like that.
So the take-home messages: critical appraisal is important and it can inform the quality of patient care. Use the CASP tool as a guide; it can be very useful. And don't burn bridges: build bridges with researchers and with your colleagues. Those are our contact details if you have any questions, and I will hand over to Fran.
Thank you, thank you, Wei, thank you Misbah and Ann. I'm sure you'll all join me in thanking Wei, Ann, and Misbah for their fabulous masterclass today. If you are interested in learning more about PaCCSC, CST, and ITCC, our website has information about our trials and other work, and the resources, support, and networking opportunities available for PaCCSC and CST members. There are several avenues to support new study ideas, provide networking opportunities, and forge new collaborations. If you have any other questions or would like to become a member of PaCCSC, CST, or both, please don't hesitate to get in touch through our website, or by replying to the PaCCSC emails you will have received about today's masterclass. We hope you found this interesting and informative. Thanks go to the New South Wales Ministry of Health for the funding to support these masterclasses, and again, thank you to Wei, Ann, and Misbah for their time presenting today. And thank you to all our attendees for your time today as well. I hope you all have a lovely evening; thank you so much for joining us.
Resources mentioned in the video
- Critical Appraisal Skills Programme [opens external site]
- CASP Randomised Controlled Trial Checklists [opens external site]
- JBI Critical Appraisal Tools [opens external site]
- Centre for Evidence-Based Medicine Critical Appraisal Tools [opens external site]
- BMJ Critical Appraisal Tools [opens external site]
- AMSTAR 2 Checklist [opens external site]