Trust and acceptance of medical expert systems

Project Proposal by Martin Stacey


Software

None

Covers

Artificial intelligence, social attitudes to computers, medical ethics

Skills Required

Interest in AI, and preferably also an interest in psychology or ethics.

Challenge

Conceptual ????   Technical ??   Programming (none)

Brief Description

In the 1970s and 1980s medical diagnosis was a major area of research in artificial intelligence, with plenty of juicy problems to excite researchers, and to some extent it still is. Hopes were high that AI would bring major benefits to patients, doctors, and other medical practitioners. But medical AI has largely disappeared from view. That is only partly because AI isn't as sexy as it used to be, and certainly not because the prototype systems didn't work well: several did. Medical expert systems are alive and well in one or two niche applications, notably on navy ships. And in Britain the NHS Direct advice service relies on an expert system: old-time AI types can sniff its presence in the background, but the general public aren't aware of it.

So why haven't medical AI systems achieved much more widespread and visible success? What attitudes among doctors, patients, administrators and decision-makers influence willingness to use and trust medical expert systems? What attitudes govern decisions to invest in AI for practical medical applications? Are these attitudes rational? Are they grounded in accurate views of how AI systems work and what they can and can't do? What would need to change to enable greater acceptance of medical AI systems, and would those changes be a good thing? Are the real technical limitations of AI systems deal-breakers, and if so, when and why?

Variants

This could be a computer ethics and social responsibility project rather than a social investigation. How much are people influenced by ethical issues, and are these ethical concerns rational, or the right ones to have? What is the moral relationship between an AI system, its users, and its builders? Are AI systems ethically different from textbooks or research reports?

How do attitudes to using AI systems in medical practice depend on circumstances? What do they look like in different cultures? What do they look like in situations where access to qualified doctors is limited by cost or geographical isolation?

You could choose to focus on one particular aspect of medicine: psychotherapy might be a good choice; there is interesting work on sex therapy, a field where providing patients with accurate knowledge and dispelling ignorance is often the most important thing medical professionals need to do. You could alternatively look at the potential of AI systems as teaching tools.

Where else is public or professional trust, or the lack of it, an issue in the adoption of AI in visible or safety-critical roles? What factors influence that trust...
