21CBT RESEARCH UPDATES – COMING FALL 2014

21CBT-FUNDED VALIDATION STUDY OF SELF-ASSESSMENT MOBILE APPLICATIONS

The 21st Century BrainTrust™ has underwritten a study to evaluate mobile computing devices in a well-characterized longitudinal clinical research population (individuals with normal or very subtly impaired cognition).

The laboratory for these studies will be the Massachusetts Alzheimer’s Disease Research Center, with investigators Brad Hyman, MD, PhD; Deborah Blacker, MD, ScD; and Dorene M. Rentz, PsyD.

THINKING

There is a pressing need to develop methods to evaluate the “brain health” of individuals in the community, methods that allow them to receive appropriate feedback and empower them to seek care, change behaviors, or become involved in research.

Technological breakthroughs now make it possible to complete neuropsychological assessments on mobile devices like the iPad at any time and in any place, but major practical and research questions remain.

What information might be improved by home testing on a mobile device?

Most critically, it decreases the variation introduced by measuring cognition on a single occasion, where there may be undue influence of intercurrent illness or mood fluctuation, compounded by the fatigue of travel and the long test batteries typically administered.

It seems clear that a single observation of a complex function like cognition in one ~2-hour period is unlikely to be as good an estimate as repeated assessments of the same functions over many occasions. In addition, of course, home testing greatly reduces cost, makes testing much less cumbersome for the subject and investigator, and enables participation by individuals residing (or traveling) anywhere, not just within driving distance of a testing center.
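As a rough illustration of this point (not part of the study protocol), the short simulation below uses arbitrary assumed numbers to show how averaging several brief at-home sessions shrinks the noise around a person’s underlying score compared with a single long session.

import numpy as np

rng = np.random.default_rng(0)

true_score = 50.0   # hypothetical stable cognitive score
session_sd = 5.0    # assumed occasion-to-occasion noise (mood, fatigue, illness)
n_people = 10_000   # simulated examinees

# One long in-clinic session: a single noisy observation per person.
single = true_score + rng.normal(0, session_sd, n_people)

# Eight brief at-home sessions: the average of eight noisy observations per person.
repeated = true_score + rng.normal(0, session_sd, (n_people, 8)).mean(axis=1)

print(f"SD of single-session estimate: {single.std():.2f}")    # about 5.0
print(f"SD of eight-session average:   {repeated.std():.2f}")  # about 5.0 / sqrt(8), i.e. ~1.8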

What variation is introduced by home testing on a computer? 

Some of the questions are specific to computerized vs. oral or pencil-and-paper testing: inter-individual and intra-individual variation in manual dexterity, typing skill, and, for voice recognition software, voice quality. Others relate to testing done off-site, out of the investigators’ control, in non-uniform settings with variation in lighting conditions and noise levels, and using devices that differ in screen brightness, keyboard responsiveness, etc.

This variation can be minimized, and the impact of specific factors can be directly tested, e.g., by enforcing use of the same device (or not) and by providing individualized training in person or over the phone (or not). Sometimes the “disadvantage” can be turned to advantage: information about manual dexterity can be quantified from input devices more readily than from clinical observation.
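As a purely illustrative sketch (none of the data or fields below come from the study), dexterity summaries of this kind could be derived from hypothetical touch-event logs recording, for each tap, a timestamp and the offset from the intended target:

from math import hypot
from statistics import mean, median

# Hypothetical log: (timestamp_ms, x_offset_px, y_offset_px) for each tap on a target.
taps = [(0, 3, -2), (410, 5, 1), (795, -4, 2), (1230, 2, 6), (1650, -1, -3)]

intervals = [b[0] - a[0] for a, b in zip(taps, taps[1:])]  # milliseconds between taps
errors = [hypot(x, y) for _, x, y in taps]                 # distance from target center

print(f"median inter-tap interval: {median(intervals):.0f} ms")
print(f"mean targeting error: {mean(errors):.1f} px")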

TIMEFRAME

The time frame for obtaining answers to these questions is short for several reasons. In addition to the broader pressing need (we have no time to waste!), the A4 trial of early intervention in individuals with documented brain amyloid deposits is about to launch with COGSTATE mobile neuropsychological testing as part of its outcome assessment, but the tool has yet to be validated in this population. We therefore propose to join forces with 21CBT and leverage our extensive experience with a longitudinally followed cohort of individuals to (1) answer a specific set of research objectives about these tools and (2) begin the conversation about how to communicate information about cognitive status to individuals and their health care providers.

METHODOLOGY

A selection of subjects from among the ~900 in our longitudinal cohort, approximately 1/3 clinically normal, 1/3 “concerned” (i.e., not clearly normal but not “impaired” in the clinical sense), and 1/3 demented, will be the testbed for answering specific questions about the reliability and validity of these tools, as well as about efficient study designs that maximize the utility of neurocognitive assessments done without a research technician or clinical evaluator, using only a computer interface.

PILOT PHASE

Based on the pilot phase, we will develop a formal evaluation phase with up to 100 subjects across the same range of impairment. In this phase, we will compare the two or three most promising testing schedules for reliability, and assess the validity of the tool against standard clinical evaluations. In addition, after one year we will validate the tool as a measure of longitudinal change compared to other office-based markers. Finally, in tandem with the baseline and follow-up evaluations during the test phase, we will conduct a formal quantitative and qualitative evaluation of subjects’ responses.
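As a minimal sketch of how such reliability and validity summaries might be computed (the data are simulated and the noise levels are arbitrary assumptions, not study results), one could correlate scores from two administrations of a candidate schedule with each other and with a standard in-clinic composite:

import numpy as np

rng = np.random.default_rng(1)

# Simulated scores for 100 subjects: two administrations of one candidate
# schedule plus a standard in-clinic composite used as the validity criterion.
ability = rng.normal(0, 1, 100)
admin_1 = ability + rng.normal(0, 0.4, 100)
admin_2 = ability + rng.normal(0, 0.4, 100)
clinic = ability + rng.normal(0, 0.3, 100)

test_retest_r = np.corrcoef(admin_1, admin_2)[0, 1]  # reliability of the schedule
validity_r = np.corrcoef(admin_1, clinic)[0, 1]      # agreement with clinic testing

print(f"test-retest reliability: r = {test_retest_r:.2f}")
print(f"criterion validity: r = {validity_r:.2f}")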
