Mental health apps may be promising, but they sorely lack regulation and quality reviews
Mobile health care apps now number in the thousands on the Apple and Google online stores, and many of these are targeted toward mental health. The need is real: in both the U.S. and the U.K., a lack of mental health services and an ongoing stigma are barriers to receiving help. Apps that are both accessible and affordable have the potential to fill this void. In addition to providing treatment using methods like cognitive behavioral therapy, portable devices can help with diagnosis and symptom monitoring by passively gathering a constant stream of personal data, such as sleep patterns and physical activity.
The promise is there. But despite the game-changing potential of mental health apps, there is very little scientific evidence to prove that most of them actually work. Researchers who have analyzed the literature found a severe lack of evidence-based research. In 2013 a study in JMIR mHealth and uHealth revealed that only five apps targeting depression, anxiety and substance abuse had been tested for clinical effectiveness.
Two years later, little has changed.
A similar study this May in Internet Interventions showed that as of last November there were only 10 peer-reviewed published articles on depression apps, and four on bipolar disorder apps. The authors concluded that although the small body of literature was informative, large gaps remained in the research: key types of studies on depression apps were missing, and the effectiveness of app-based interventions for bipolar disorder had not been evaluated at all. “I think a lot of these mobile digital health technologies would like to take the role of ‘new pharma,’ which is especially interesting since there hasn’t been any new blockbuster drugs for mental health in a while,” says John Torous, a psychiatrist at Harvard Medical School and coauthor of the 2015 study. “But at the same time, we need to do high-quality clinical studies and apply the same standards that you would to a new pharmaceutical.”
These types of studies are essential because health apps can sometimes have unintended consequences. For example, an app made to reduce alcohol use among college students actually raised drinking rates in male students. There are also practical issues: a feasibility study of an app designed to help manage schizophrenia found that although users considered the app effective, the most common technical problem was simply forgetting to charge their phones.
In addition to the lack of evidence, there is currently no method for gauging quality, making it difficult for both clinicians and patients to weed out the good apps from the untested and ineffective ones. In the U.K., the National Health Service (NHS) launched a Health Apps Library in March 2013, but the library’s recommendations had their own problems. In September, researchers at Imperial College London found that some of these apps were leaking users’ personal data. Last month, economist Simon Leigh at the University of Liverpool co-authored an article in Evidence Based Mental Health reporting that of the 14 apps for depression and anxiety recommended by the NHS, only four had any research to back up their claims, and only two of those used validated measurement tools to test their effectiveness.
Amid these concerns, the NHS took down its list last month, explaining that it is “working to upgrade” the library, which was only ever intended as a pilot. The need for a comprehensive list of effective apps remains acute. “I think a lot of this boils down to the NHS and other regulatory bodies having to make clear exactly what is acceptable,” Leigh says.
In the U.S., no library of accredited apps exists, but efforts to regulate them are underway. Earlier this year, the American Psychiatric Association put together a task force, led by Torous, to help clinicians and patients find safe and effective apps. “This is a challenging task given the lack of clinical data on how apps can help or harm patients, serious concerns about privacy and data security, and the need for more discussion on related ethical issues,” Torous says.
Another barrier to developing an evidence-based set of health care apps is that they are always a moving target, making them difficult to evaluate. Whereas a new pharmaceutical can take more than a decade to hit the market, apps can be developed and ready for use in less than a year. Apps are also constantly updated, making it difficult to keep track of the changes from one version to another. “The app space is built on speed and market penetration, as opposed to the years and hundreds of millions of dollars it takes to launch a drug,” says John Fromson, the chief of psychiatry at Brigham and Women’s Faulkner Hospital, who was not involved in the studies. “It’s a blessing and a curse.” But apps may not require the same level of vetting a drug receives, says Leigh, who suggests that any form of observational evidence would suffice, as long as there is transparency about the participants being studied to avoid biased reports.
So should you try out a mobile mental health app? At this stage, researchers, including Torous, suggest the best course of action is to check in with a physician before trying one. Previous studies have shown that Internet-based interventions yield better outcomes and lower dropout rates when users are supported by a mental health practitioner. In the meantime, it is up to researchers and policy makers to create better mechanisms for regulation. “You have the potential for clinical benefit, and it can be instantly scaled and is totally affordable,” Fromson says. “Yet the regulatory environment is not set up to handle something that is so ideal and antithetical to all the problems we have.”