The Keosys Blog | Insights on Medical Imaging in Clinical Trials

Choosing the Appropriate Imaging Modality and Imaging Criteria for Your Target Condition

Written by Pierre Terve | Chief Scientific Officer | Nov 6, 2018 6:48:37 PM

The application of advanced imaging methods streamlines the clinical trial process by facilitating timely go/no-go decisions. Specifically, robust imaging data enable scientists and clinicians to:

  • Gain a better understanding of disease pathways
  • Improve patient selection for therapy
  • Monitor patient responses and disease progression under treatment
  • Analyze the safety and pharmacokinetics of a therapeutic

Magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) are the main modalities used in medical imaging. But how do you decide which is the most suitable for a clinical trial? And how are the resulting images best analyzed?

Selecting the Appropriate Modality

Knowledge of the limitations and strengths of available imaging modalities and an understanding of disease biology will help determine which is optimal for the trial. CT is often used to generate high-resolution anatomical images, whereas PET and MRI are used to measure functional characteristics of organs and cells.

For example, for solid tumors, CT scans constitute >90% of evaluations in a typical Phase III trial (with the remainder provided by MRI), in which lesion size is a primary measurement parameter. Meanwhile, PET imaging with the radioactive tracer FDG (18F-fluorodeoxyglucose) is used in numerous oncological and non-oncological indications. FDG provides a measure of cellular metabolic activity: highly active cancer cells show elevated FDG uptake, whereas brain regions affected by dementia show reduced FDG uptake.
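In practice, FDG uptake is usually quantified as a standardized uptake value (SUV), which normalizes the measured tissue activity to the injected dose and the patient's body weight. The sketch below illustrates the standard body-weight SUV formula; the function and variable names are ours, not part of any imaging vendor's API:

```python
def suv(tissue_activity_kbq_per_ml: float,
        injected_dose_mbq: float,
        body_weight_kg: float) -> float:
    """Body-weight-normalized standardized uptake value (SUVbw).

    SUV = tissue activity concentration / (injected dose / body weight),
    assuming a tissue density of ~1 g/mL so the result is dimensionless.
    """
    # Convert the injected dose to kBq and the weight to grams,
    # then normalize the measured activity per gram of body weight.
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return tissue_activity_kbq_per_ml / (dose_kbq / weight_g)

# A lesion measuring 15 kBq/mL after a 370 MBq injection in a 70 kg patient:
print(round(suv(15.0, 370.0, 70.0), 2))  # -> 2.84
```

An SUV well above the surrounding background (often a threshold around 2.5 is cited for FDG-avid lesions) flags metabolically active tissue, which is what makes FDG-PET informative beyond pure anatomy.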

Combinations of techniques have also emerged to address the specific diagnostic needs of certain diseases and to increase sensitivity. For example, for disease assessment in Hodgkin lymphoma, a review noted that FDG-PET plus CT increased sensitivity by 10% to 20% compared with conventional CT alone. Importantly, an upstaging rate of 10% to 40% was observed, and treatment decisions were modified for up to 20% of patients.

However, selecting which imaging modality to use for a study is almost the ‘easy’ part of the decision-making process. Deciding what parameter to measure is more challenging.

Selecting the Appropriate Imaging Criteria

In cancer, the gold standard for determining benefit from an investigational product is an improvement in overall survival. But demonstrating a survival benefit can require years of follow-up.

Over the years, numerous imaging markers and surrogate endpoints have been developed to assess early on whether a treatment provides clinical benefit and to protect patient comfort and safety. It is equally important to withdraw a patient early in the course of a trial when there is clear imaging evidence that the therapy has no significant beneficial effect.

The Response Evaluation Criteria in Solid Tumors (RECIST) were among the earliest guidelines adopted by industry and regulatory agencies. RECIST assesses treatment response according to tumor shrinkage. It soon became apparent, though, that applying RECIST posed some problems. RECIST was therefore revised as RECIST 1.1, and several cancer-specific and therapy-specific criteria were developed. For example, with the rapid rise of immunotherapies, the iRECIST criteria were developed to evaluate anti-tumor responses to immunotherapeutic agents. To guide researchers in deciding which criteria to choose, some scientists have developed a decision tree (see the figure).
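To make the "response according to tumor shrinkage" idea concrete: RECIST 1.1 categorizes response from the sum of target-lesion diameters — complete response (all target lesions disappear), partial response (≥30% decrease from the baseline sum), progressive disease (≥20% increase over the smallest sum on study, with at least a 5 mm absolute increase), and stable disease otherwise. The following is a deliberately simplified sketch of that threshold logic, ignoring non-target lesions, new lesions, and lymph-node short-axis rules:

```python
def recist_response(baseline_mm: float, nadir_mm: float, current_mm: float) -> str:
    """Classify response from sums of target-lesion diameters (mm).

    Simplified RECIST 1.1 logic: omits non-target lesions, new
    lesions, and nodal measurement rules.
    """
    if current_mm == 0:
        return "CR"  # complete response: all target lesions gone
    # Progressive disease: >=20% increase over the smallest sum on
    # study (the nadir) AND an absolute increase of at least 5 mm.
    if current_mm - nadir_mm >= 5 and current_mm >= 1.2 * nadir_mm:
        return "PD"
    # Partial response: >=30% decrease from the baseline sum.
    if current_mm <= 0.7 * baseline_mm:
        return "PR"
    return "SD"  # stable disease

print(recist_response(100, 100, 65))  # 35% shrinkage from baseline -> "PR"
```

A reader applying the full criteria must also track new lesions and non-target disease, which is precisely why criteria selection and reader training matter in a trial setting.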

Imaging-based surrogate endpoints also provide valuable objective assessments of therapeutic benefit in other indications. Neuropsychological measures, such as ADAS-Cog, are often used as the standard outcome measure in clinical trials of investigational therapies for Alzheimer's disease. Unfortunately, ADAS-Cog suffers from poor sensitivity in tracking disease progression. In contrast, MRI measurement of whole-brain or hippocampal atrophy rate can be used to support clinical outcome endpoints.

Overall, imaging biomarkers used as surrogate endpoints are objective and fast to measure, and they can reveal subtle changes reflecting progression or regression that might be missed with clinical approaches. Moreover, they allow sponsors to demonstrate the efficacy of a therapy far earlier than endpoints such as overall survival.

Benefits of Using a Specialist Imaging CRO

Owing to the rapid pace of development in medical imaging, keeping up with the latest developments is challenging. For example, the RECIL criteria for lymphoma aim to replace the current Lugano criteria, but regulatory acceptance has not kept pace.

Even with established imaging criteria such as RECIST 1.1, errors can occur unless there is a clear understanding of how to apply them to the indication being investigated.

Although guidance is available from regulatory authorities (e.g., the FDA's Clinical Trial Imaging Endpoint Process Standards), it is useful to have an experienced imaging CRO, such as Keosys, that can advise on the selection of imaging modality and endpoint criteria.

Keosys is also expert in implementing robust, validated workflows around these criteria and can train expert readers or write imaging charters tailored to the pathology and the product under investigation.

Example: Decision Tree
Example of a decision tree for selecting the most appropriate response criteria for cancer.
Source: Subbiah, V. et al. Diagnostics 7(1): 10 (2017).