How to evolve artificial intelligence models alongside societal needs

We live in a time when generation-defining events and emerging technologies are colliding at a rapid pace, changing the way we communicate and interact with one another. Nowhere is this more evident than in the advent of digital therapeutics in mental health, as new models of care and apps are developed to treat physical and behavioral health conditions such as pain, sleep problems, anxiety and depression.

While it is often assumed that the act of bonding is reserved exclusively for human therapeutic relationships, recent studies have shown that digital therapeutic tools are indeed capable of creating a comparable therapeutic bond with users. But just as relationships between people need to be nurtured, so too must the connection between virtual mental health services and their users be nurtured.

As society becomes more open to and dependent on these tools, it is the responsibility of tech companies, especially those tasked with supporting people’s mental health, to develop and maintain artificial intelligence (AI) and machine learning (ML) models that adapt to the needs of society. But what does it mean to use this technology responsibly?

It takes good people to build AI for good.

We are still a long way from building an AI that can consistently replicate many of the unique characteristics of long-lasting human relationships. For this reason, algorithmic speech generation by an AI is fraught with risk, a risk that is amplified in conversations about health. Assessment and collaboration between clinicians and technologists are key to identifying where interventions are needed and generating relevant, thoughtful, and clinically effective responses.

Ultimately, AI that serves the mental health needs of humans cannot be developed in isolation. It requires a comprehensive understanding of the human condition as well as the factors that create stress and anxiety in our daily lives. By injecting human oversight throughout the process, developers can create models that reflect people’s lived experiences and the diversity therein, making it easier for users to bond with a relational agent.
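To make that oversight concrete, here is a minimal sketch of one way a team might keep clinicians in the loop: rather than generating free-form replies, the system selects from a library of responses that clinicians have reviewed and approved. The `ApprovedResponse` type, the library contents, and the `respond` helper are hypothetical illustrations, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedResponse:
    intent: str
    text: str
    reviewed_by: str  # clinician or team who signed off on this wording

# A small, clinician-reviewed response library (illustrative content only).
APPROVED_LIBRARY = [
    ApprovedResponse(
        intent="sleep_difficulty",
        text="It sounds like sleep has been hard lately. Would you like to talk about it?",
        reviewed_by="clinical_team",
    ),
    ApprovedResponse(
        intent="unknown",
        text="I want to make sure I understand. Could you tell me a bit more?",
        reviewed_by="clinical_team",
    ),
]

def respond(detected_intent: str) -> str:
    """Return a clinician-approved reply; fall back to a clarifying question."""
    for candidate in APPROVED_LIBRARY:
        if candidate.intent == detected_intent:
            return candidate.text
    # No approved response for this intent: ask rather than improvise.
    return next(r.text for r in APPROVED_LIBRARY if r.intent == "unknown")

print(respond("sleep_difficulty"))
```

The design point is that every wording a user can receive has passed through human review, so the model’s job is routing rather than unsupervised generation.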

Once a model is deployed, it needs to be continuously evaluated.

Regular performance evaluation and model retraining are critical to ensure the technology keeps pace with our evolving world. This process requires a commitment to understanding how individuals interact with digital therapeutics in order to recognize emerging societal behaviors and adjust responses accordingly. As world events shape the topics users want to discuss and new generations coin new slang on social media, we have a responsibility to monitor how society changes over time and adapt accordingly. This maintenance ensures that users feel uniquely understood as individuals during their conversations, so therapeutic bonds continue to form.
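As a rough illustration of what that ongoing evaluation could look like, the sketch below assumes a hypothetical recurring job that measures how often the deployed model is no longer confident about what users are saying, which is one possible signal that topics and language have drifted since training. The `model.predict` interface, the thresholds, and the `needs_retraining` helper are assumptions made for the sake of the example.

```python
from statistics import mean

CONFIDENCE_FLOOR = 0.6       # below this, treat a prediction as "unsure"
BASELINE_UNSURE_RATE = 0.08  # unsure rate measured when the model shipped
DRIFT_THRESHOLD = 0.05       # extra unsure rate tolerated before retraining

def unsure_rate(model, messages) -> float:
    """Fraction of recent messages the model cannot classify with confidence."""
    if not messages:
        return 0.0
    return mean(model.predict(m).confidence < CONFIDENCE_FLOOR for m in messages)

def needs_retraining(model, recent_messages) -> bool:
    """Flag the model for review when its unsure rate drifts past the baseline."""
    drift = unsure_rate(model, recent_messages) - BASELINE_UNSURE_RATE
    return drift > DRIFT_THRESHOLD
```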

AI is the gateway to understanding conversations, not the conversation itself.

Even when retrained, AI models must be designed to work within established and validated assumptions, never make unilateral decisions without the user’s explicit consent, and always leave the final decision in the user’s hands. While ML can be used to interpret natural language, it should mirror reflective listening – with mechanisms that actively confirm with users whether what the model has heard is accurate, and that enable their participation and collaboration in defining the path forward.

For example, if a model determines that a person has a sleep problem, the conversation flow can be designed to acknowledge and reflect that finding. Instead of pressing ahead with the conversation, the model can pause and ask: “It sounds like you’re having trouble sleeping, am I right?” Even if the answer is no, the conversation can move forward in a more informed manner.
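A minimal sketch of that confirm-before-acting pattern might look like the following. The `Inference` type, the `ask_user` callback, and `reflect_and_confirm` are hypothetical stand-ins for a real NLU pipeline, but they show how the model’s inference can be treated as a hypothesis for the user to confirm rather than a settled fact.

```python
from dataclasses import dataclass

@dataclass
class Inference:
    topic: str         # e.g. "sleeping"
    confidence: float  # the model's own score, between 0.0 and 1.0

def reflect_and_confirm(inference: Inference, ask_user) -> dict:
    """Mirror the model's inference back to the user and let them decide."""
    prompt = f"It sounds like you're having trouble {inference.topic}, am I right?"
    confirmed = ask_user(prompt)  # True/False derived from the user's reply

    # The user's answer, not the model's score, determines the path forward.
    return {
        "topic": inference.topic,
        "model_confidence": inference.confidence,
        "user_confirmed": confirmed,
    }

# Simulated turn: the classifier suspects a sleep problem, the user says no,
# and that correction is what the rest of the conversation builds on.
result = reflect_and_confirm(
    Inference(topic="sleeping", confidence=0.81),
    ask_user=lambda prompt: False,
)
print(result)
```

The point of the pattern is simply that the model’s classification never silently redirects the conversation; the user’s confirmation or correction is what gets recorded and carried forward.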

Ultimately, each step must be carefully designed to cause no harm and to support better care. Even if we build great models, without great conversational design and without preserving human autonomy we will not enable clinical outcomes or improve mental health.

There has never been a more urgent need to address the ethical complexities that digital tools bring to mental health. Ultimately, we are here to improve outcomes for users, and it is our responsibility to consistently assess people’s emerging needs and build failsafes into our systems so that users retain the autonomy to shape their own journey. With a proactive approach grounded in the authenticity of human relationships and guided by principles of transparency and respect for self-determination, we can build dynamic digital experiences while remaining accountable to our users and ourselves.

