ABOUT
Neha Chaudhary, MD (@NehaChaudharyMD) is a child and adolescent psychiatrist at Massachusetts General Hospital and Harvard Medical School. She is a co-founder of Brainstorm, The Stanford Lab for Mental Health Innovation, and is on the APA's Committee on Innovation. She also consults to tech companies, including Pinterest.
Nina Vasan, MD, MBA (@NinaVasan) is a psychiatrist and Assistant Clinical Professor at Stanford School of Medicine. She is the director of Brainstorm, Chair of the APA's Committee on Innovation, and the author of Do Good Well. She also consults to tech companies, including Pinterest.

This story, though fictional, is unfortunately not unique. Such patterns are common among the many teens we see as psychiatrists whose mental health has been affected by social media. Research shows that the more time teens spend online, the more likely they are to be exposed to self-harm content, engage in self-harm behavior (like cutting or hitting oneself), and develop suicidal thoughts. In fact, a study showing an uptick in suicides in the months immediately following the release of 13 Reasons Why on Netflix illustrates the dangerous contagion effect that can occur when kids start to emulate what they see. Last year, Instagram was criticized after the suicide of Molly Russell, a 14-year-old in the UK whose parents believe she had seen pictures encouraging self-harm and suicide on the site. YouTube's algorithms were found to be identifying and recommending videos of partially clothed children. Facebook Live continues to struggle with people broadcasting suicides. The list is only growing.
The online manifestations of mental health have added a complex new dimension to psychiatry. Facebook CEO Mark Zuckerberg issued a call to action in The Washington Post. "Internet companies should be accountable for enforcing standards on harmful content," he wrote, calling on third-party experts to set safety standards.

Request received. As psychiatrists working in Silicon Valley, we split our days between treating patients and working directly with tech companies to create new products that improve mental health. We've learned that tech companies often lack clear guidelines for making user-safety decisions, especially in the face of simultaneous pressures: public outcry, a lack of legal precedent, financial repercussions, and more. With our colleagues from Brainstorm, Stanford's Lab for Mental Health Innovation, we've created guiding principles that companies can follow to create products that protect users and have the potential to help:
Do no harm:
- Do not allow teens to be harmed by what they consume.
- Do not allow teens to harm others by what they create.
- If there is concern for imminent high risk (like thoughts of harm to oneself or others) that is identified on a platform, address it immediately in line with legal, ethical, and cultural norms, and pass it along to experts. This is an obligation.
- Help teens get help on and off the platform.
- Work with mental health experts like psychiatrists to understand how the platform can be leveraged to improve teens' lives, and refer teens to barrier-free resources.
“Do no harm” is inspired by the oath we take as physicians, “primum non nocere,” stressing that our first responsibility is to safeguard against the negative. For example, if someone posts saying they have no reason to live and want to say goodbye to their friends, that qualifies as “imminent risk” and cannot be ignored; at the least, warnings to call 911 or go to the closest ER should be presented on the platform. Social media companies are not, however, physicians. They do not have the responsibility or the training, and they should not make medical decisions or shoulder the burden of medical care. The highest-risk situations should be left to the professionals, and companies should help people get there when they are in danger.
Beyond that, companies can “do good” and offer users tools that responsibly and safely improve their wellbeing.

We believe that tech and social media companies are uniquely suited to be a psychiatrist’s biggest ally in our mission to improve mental health for the 2 billion people around the world struggling with brain and behavioral health disorders. The same ingenuity that allowed Facebook to acquire 2.3 billion users and Twitter to help us send 500 million tweets per day can be the key to identifying people at risk of depression, preventing slut-shaming, or shepherding teens to the best medical treatment center. With that many people on a platform, however, things are bound to go awry. Recently, a BBC article revealed a worldwide underground network of thousands of Instagrammers with “dark” accounts. The posts ranged from serious self-harm to documentation of the final hours before someone’s death by suicide. Many are disguised as benign, everyday photos in an effort to circumvent Instagram’s new bans on graphic content.
When Algorithms Think You Want to Die