Medical monitoring creates ethical dilemmas
Surveillance cameras with night vision in care receivers’ homes. Medicine boxes that detect a missed dose and sound an alarm. Sensors in clothing and watches that measure heart rate and blood pressure. Medical monitoring is becoming increasingly sophisticated – but it also creates ethical dilemmas.
These dilemmas are being studied by Professor Göran Collste and postdoctoral student Elin Palm, both researchers in Applied Ethics at Linköping University.
“The EU is investing heavily in developing new technologies in health care”, says Göran Collste. A guiding motto is that health care should move out of hospitals and into homes.
Much is centred on streamlining care so that service users can live at home and be monitored remotely. Night-vision cameras in service users’ homes, for example, are about to be introduced in several county councils. Another example is the Giraffe, a robot with a screen that can assist the care recipient wherever they are in their home. This new technology is to be tested in Västerås, among other places. And in Norrköping, as part of the project New Tools for Health, smart medicine boxes are about to be tested. These are programmed to sound the alarm if a medication is not taken at the right time.
Technological development is in full swing, but attention is also being given to the ethical aspects early in the process – something Göran Collste appreciates.
“Often, ethicists like me are involved too late, when the technology is already developed and in use. Now we participate early on in the process and hopefully this means we can avoid some pitfalls.”
The project, which is funded by the EU's 7th Framework Programme, brings together researchers in Informatics, medical psychologists, lawyers and ethicists at universities in five EU countries.
“A lack of human contact is an obvious risk in remote medical monitoring,” says Elin Palm, a postdoctoral researcher involved in the project.
“Although the technology is introduced and described as an optional extra to the care we already have, there is obviously the risk that social interactions will decrease”, she says.
She also sees a risk of creating a false sense of security: for the technology to work, an actual human being must be available to notice and respond when the alarm sounds.
However, after interviewing service users, she does not believe they see the threat to privacy as a major issue.
“The people involved are often very old and sick. For them, security is more important than complete personal integrity, if they are faced with a choice.”
Göran Collste also highlights the risk of the medicalisation of healthy people as a possible long-term effect.
“The question is what the continuous monitoring of our health does to our identity. Will we always feel like patients if we are constantly aware of and preoccupied with how we feel?”
The two researchers are now developing a model for the ethical evaluation of technology, which could be of use in the procurement of new technologies. If buyers, often the county councils, can detect and avoid problems in advance, and thus put pressure on manufacturers, there is a lot to be gained, they say.
The project will publish its final and complete report in 2012.
Last updated: 2010-05-20