
Lost in translation? Use of artificial intelligence in psychiatry

Updated: Jun 23

Interview with Helga Brøgger




Bridging the gap between AI and healthcare


In 2016, Geoffrey Hinton, the AI pioneer who would go on to win the Nobel Prize in Physics, stated that radiologists would soon become obsolete due to advancements in artificial intelligence (AI).


“I was personally offended,” says Helga Brøgger, a specialist in radiology. “I thought, ‘That’s not happening without a fight,’ and I decided to learn about technology from a medical perspective.”


Despite Hinton’s prediction, radiologists are still being trained, and Brøgger has since been recognized as one of Norway’s leading women in technology. Today, she works full-time as a researcher on AI at DNV.

 

Merging medicine and technology


Brøgger’s decision to enter the field of AI was driven partly by protest and partly by concern.


“I was worried that technologists had too narrow an idea of what AI should do in healthcare. Technologists understand technology. Doctors understand health.”


She describes herself as a technology optimist.


“I have great faith in technological possibilities, but we must always ask whether technology improves morbidity, mortality, and quality of life. That’s why we need responsible oversight.”


Over time, she has become one of those responsible voices in health technology. She frequently gives lectures and engages in public outreach.

 

Translator between two worlds


“The intersection between health and technology is full of exciting developments, but it’s also a fragile field. We need people who can translate between these domains.”


Brøgger sees her role as part diplomat, part facilitator, and part translator.


“Sometimes, I feel like an educator as well,” she adds.


In 2024, she was invited to speak before the board and committee leaders of the Norwegian Psychiatric Association. She had to begin with some very fundamental explanations.


Helga Brøgger: “I was worried that technologists had too narrow an idea of what AI should do in healthcare. Technologists understand technology. Doctors understand health.”

 

Empowering healthcare professionals in the AI era


“One of my goals is to help healthcare professionals feel confident in their roles, responsibilities, and expertise. Their knowledge remains just as valid in the face of technology as in any other context. There’s no reason to feel incompetent when interacting with engineers and technologists. Healthcare professionals must trust their own judgment. They must demand that technology is safe and effective. Doctors need to speak up - not just for themselves but for their patients.”


Her advice to healthcare professionals is clear:


“Be curious, optimistic, and forward-thinking. Don’t doubt your own questions - others are likely wondering the same things. Demand clear explanations.”

 

The hidden dangers of AI in healthcare


“Gray areas - where things aren’t transparent - are dangerous.”


She is particularly concerned about technology marketed as “wellness” or “optimization” but that, in reality, impacts health. She mentions that Apple has filed a patent for AirPods capable of recording EEG signals. “Why on earth would they need EEG sensors in earbuds? They want brain data,” she exclaims, visibly alarmed. She notes that organizations like Amnesty International are increasingly concerned about ‘neuro privacy.’


She warns that much of today’s technology, designed for other purposes, actually has the ability to monitor emotions and cognitive activity.


“In the worst-case scenario, technology could undermine free will and personal agency. We must stay vigilant.”

 

The risk of AI misuse in healthcare


Brøgger is also worried about app developers who build tools meant to help people but who lack sufficient knowledge of diagnostic accuracy.


“There should be room for technology, and in some cases, such tools may be better than nothing. But there’s a fine line between ‘almost something’ and ‘nothing.’ It’s dangerous if people believe they’re receiving help when they aren’t.”


“Establishing causal relationships in medicine is already difficult. If we add technology on top of that without ensuring specificity and sensitivity of interventions, it leads to chaos. I’m far more concerned about irresponsible actors misusing AI than I am about qualified healthcare professionals engaging with AI.”


Helga Brøgger: “AI could certainly be useful in one-on-one consultations, but its greatest potential lies in identifying broad patterns - such as the relationships between education, nutrition, and mental health.” Image by Unsplash.

Artificial intelligence in psychiatry – opportunities and caution


When it comes to psychiatry, Helga Brøgger is not particularly worried.


“If you are a psychiatrist, you have worked on professional development and on understanding your role within a specific context. You are also accustomed to considering the user’s perspective. This remains crucial when utilizing health technology.”

 

Brøgger offers several recommendations for healthcare professionals interested in how AI can benefit psychiatry.


Remember that if AI is used to diagnose, prevent, monitor, predict, prognosticate, treat, or alleviate disease in the EU, it must have obtained a CE mark of conformity with the EU’s Medical Device Regulation (MDR).

 

Brøgger encourages those interested in AI to investigate which datasets an AI tool has been trained on. Are those datasets representative of the patient population under your care?

 

“It’s also important to accept that AI is new and that we are all in a learning process together. Transparency with patients is key,” she advises, emphasizing the importance of being completely open with patients about how AI is being used.


“Mistakes happen in medicine. The key is to learn from them.” She also advises organizations to have clear systems for detecting errors, reporting issues, and managing risks.

 

If a psychiatrist wants to integrate AI tools into their clinical practice, Brøgger recommends raising the issue with their leadership.


“Make it clear what kind of assistance you need! Then, try it out and share experiences. Don’t work in isolation - collaborate with colleagues in similar roles. It’s also beneficial to involve lawyers and technologists alongside managers and clinicians.”

 

The long-term impact of AI in psychiatry


In the long run, she believes AI will be most impactful when it comes to big data.


“AI could certainly be useful in one-on-one consultations, but its greatest potential lies in identifying broad patterns - such as the relationships between education, nutrition, and mental health.”

 

The hidden costs of AI


One aspect of AI that is often overlooked is its resource consumption - in terms of energy, water, and human labor.


“Everything has a cost. From a sustainability perspective, we need to consider what we are using energy for. In my opinion, patient care and research are far more worthy causes than advertising designed to make people buy more things.”


She reminds us that AI and cloud services may seem abstract, but they ultimately rely on hardware, energy, water, and people on the ground.

 

Data sharing and ethical AI use


Scientific journals are increasingly requesting disclosure of how AI is used in research. At the same time, many people use publicly available AI tools like ChatGPT in their daily lives.


Brøgger urges caution when using such services: “When using publicly available AI tools, remember that there is always someone else in the loop. If AI is to be used in treatment, there must be formal agreements between the service provider and the user. Think carefully about what data you are sharing.”


In larger organizations, these considerations fall under leadership responsibilities.

 

For private users, she has a simple rule: “Only share public information. Do not share secrets or confidential data.”

 

AI is not neutral


Brøgger is also concerned about bias in AI systems.


“What an AI model is trained on matters. AI is not neutral - it reflects the data it is built on. The values and perspectives of the developers can be found within the data.”


She quotes Ghanaian-Canadian-American computer scientist and digital activist Joy Buolamwini, who said: “Neural does not equal neutral.”


Brøgger stresses that categorization and modeling are never fully accurate at an individual level but can still be useful - as long as their limitations are understood.


“Tech enthusiasts may fall into the trap of believing they can understand reality through data streams alone. But they can’t. There’s far too much we don’t have control over or insight into.”

 

Accessing reliable AI guidelines


The EU AI Act provides guidelines on AI usage and defines the roles of various stakeholders.

 

DNV has written a white paper titled “Ensuring safe and trustworthy AI: An AI Act playbook for the healthcare sector.”

 

This paper explores the AI Act from a healthcare perspective, with a focus on the responsibilities of both the developers of AI systems and the health institutions that utilize AI. It covers key definitions, describes the interplay of the AI Act with established product safety law, and explains relevant regulatory topics including the conformity assessment pathway, risk classification and harmonized standards.

 

The paper gives extra focus to novel aspects of the AI Act, such as the new role of deployer, regulatory sandboxes, transparency obligations, human oversight requirements, and the protection of fundamental rights. Finally, this white paper includes a basic six-step plan to begin work towards AI Act compliance.

 

The six practical steps are:

 

  1. System inventory and qualification;

  2. Risk classification;

  3. Assess roles for each AI system;

  4. Identify regulatory requirements;

  5. Obtain evidence and conduct a gap analysis;

  6. Build a compliance roadmap.



Links

 

For those who want to read more, DNV offers the publication for free: 🔗 Ensuring Safe and Trustworthy AI

 

Additionally, Brøgger has co-authored a detailed article on AI in healthcare.

