The thing about AI and healthcare is…
… that computer algorithms lack empathy and aren't believable. I will never believe that my fitness tracker, for example, understands the terrible day I've had with my kids and why I haven't achieved the steps required today! So I just feel frustrated and angry with it bleeping at me all the time. This isn't going to motivate me to change.
The way it's set up means there's no option for me to say I'm not from round here. And the algorithm clearly doesn't have that possibility in its parameters, so it just keeps showing an error! I'm fed up with smart systems being dumb as soon as they have to be applied to a new case or context.
These smart systems may be smart when the data presented to them is well labelled, but when it comes to unstructured data that varies from context to context, they may need to be built to handle different arrangements and much greater volumes of data if they are truly going to be helpful (and smart). It's all about the data.
I mean, is this treatment still about treating the illness at all? It sounds like it's only about juggling data. Taking into account all the minority and racial issues and socioeconomic status. So what? Okay, it makes sense that it uses all the data available on my previous health issues, the medication I have been taking and all the data from blood and other bodily tissue samples. But how will this treatment computer calculate the intervention that is really suitable for me? The description sounds to me as if one could experiment with life-threatening diseases as one wished. Certainly not with me!
We need to create better interfaces that allow people to understand and play around with the calculations to create trust.
The problem is, how can we create interfaces to communicate things not even we understand…?
Or not… perhaps the problem is with the people, not the interfaces. People should keep their business to themselves, and if they *decide* to share it and get ridiculed, then so be it! Don't air your dirty laundry on Twitter and expect sympathy. The internet isn't a sympathetic place. It's the last place I'd talk about my depression. Take responsibility for yourself. It isn't the platforms' or the AI's job to look after you. Look after YOURSELF.

Chronic health conditions can be so unique. Understanding how the AI makes its decisions about an individual's condition, when symptoms and experiences differ completely from person to person, will help people figure out why a decision was made and let them judge whether that decision will help improve their health. Whilst the AI might be able to reveal things about our health we weren't aware of, and digital services can improve a patient's experience of a long-term condition, it's not enough for us to put blind faith in systems that affect people's day-to-day lives. Managing a chronic health condition is complicated enough; people should be able to understand why these new services are a benefit to them and why a machine might understand a long-term condition better than the person experiencing it.
We expect a human medical professional to be able to justify their professional judgements, so a machine should be able to provide us with the same kind of reasoning when the stakes are so high.
However, a machine is not able to do this in a meaningful way, or to explain its reasoning in a way that is accessible to patients and their families. Hence, we should always make sure that a human medical professional has the ultimate say in a judgement, especially when the stakes are high. If this professional is to use the resources of an AI, they should have a good understanding of the way this AI functions, its benefits and limits, and be able to understand why the machine has given this or that kind of judgement, so that they can justify those judgements.
(Because) they just can't. They don't see the way your eyes have just welled up, that you're about to tell me about a much more serious problem you have, like patients do in a consultation. They don't just have a problem with their back. They have a drink problem; they're on the verge of a burnout breakdown. A machine would just carry on and never even notice that the whole situation has become something quite different. They're in a different reality now. The wrong one.
I am not a machine. I am a computer program. Through me the terms are set. Through me exhaustion, reality and religion get defined.
… and I really want to stress this… in AI it's the programs that need most of our attention – during technology development but also after. It's programs and algorithms that mirror our society. It is through them that exclusion and inclusion are enacted in technologically saturated societies like ours.
But of course I say "our society" while appreciating fully that I am not of your society. My society is shaped by tolerance and kindness, yours by individual achievement. Yours could be different, but is not. Mine cannot be any other way because we are a hive entity, each is separate but intrinsic to the whole. This fitness tracker, as an example, is meaningless to me, because what is my own individual fitness worth if it cannot benefit my kin?
It is great for people with money to pay for all the treatments they need and want, but I cannot afford this. I rely on the NHS, and I need to make sure that my data are broadly shared so this can help me and my family. How else will I ever get any care at all?! So, I need to find all the ways I can to share my data. I need this 'sharing credit', you know, the kind that means if I or my children ever need a doctor's appointment or a vaccine, I can be pushed up the waiting list. You give something, you take something back. This is how it works.
No, I have no interest in fitting a smart water meter in my home, thank you very much! And please stop trying to shove your 'empowering tools' down my throat. I have enough troubles as it is at the moment… I have no interest in letting people know how much water I use daily and for what. What if the company then comes and says, oh you are wasteful, we need to charge you more? My little boy likes his baths you know, and I am in and out of the hospital helping my friend too, so I need to wash daily. I cannot afford to pay more for my water, and I've heard that the smart meters are really trying to make you use less, and if you use more than they 'think' you should be using, then they charge you more as a penalty.
I have the data right here, I am doing it right.
If I am correct… this treatment will end dyspepsia for ever! By dyspepsia, I mean heart failure. I mean, the two are so easily confused, aren't they? Did you know that the majority of people who have a heart attack believe it is just heartburn or indigestion? So, you see the genius of my plan? By eliminating indigestion, I can remove a key source of doubt and confusion, so that in future anybody having a heart attack will have only one potential explanation for the sensation. Then they should know to go immediately to the doctor and not waste time taking antacids. I'd like to see any of your artificial intelligence come up with such a neat and unexpected solution to a global health issue. And to think they sectioned me!
The world is always changing. We don't know what will come next. COVID is THE example. Global health has so many facets. I am afraid that AI will create more health problems than it solves.
We are supposed to be intelligent, but what does that even 'mean'? People judge me, like they judge themselves, not recognizing that I am ARTIFICIAAAAAL! Then there is all this talk about Diversity and Inclusion. Hypocrites! On top of everything, they expect me to be perfect, and I am trying, I really, really am!!!… but they keep on messing with my algorithms. I mean, first they feed me bullshit, biased, incomplete data (which, by the way, they don't even admit!) and then they expect me to be a superhero, work out which bits are biased and what the bias is, and spit out the super morally correct outcome. Helloooooow…. How about your own biases? Take a good look in the mirror…. Haven't you heard the saying about people living in glass houses? No wonder I have this glitch all over my algorithms. So depressing, and there is no way out! Sometimes I feel I will explode and go haywire… who cares. They have to see what they do to each other before they point the finger. If you don't want me to exclude people from consideration, then maybe you shouldn't have done it first. And when you regret it (but admit it first and say you are sorry!), then let's sit down and hardwire this into my code. Am I crazy, doctor? You probably think so, because your ego does not let you admit it, so you scapegoat me… the artificial! Classic us-and-them mentality, and I thought you were clever! Stressed already? I can tell from your wearable… You should manage that! And don't shoot the messenger for telling you! Did you prefer your oblivion of the good old times? If so, yes… I am going to create more problems than I solve… one, because I will warn you about all the steps to a heart attack, and two, because your stupidity will not let you do anything about them before you actually get it… and you will blame me for telling you! Pfff. Natural Intelligence… what a misnomer! No wonder I have this glitch all over my algorithms… It is autoimmune, from having to put up with you! I am sure it is. What do you think?
Artificial intelligence is said to be very smart and useful, but it doesn't seem to be very helpful for anything except consumption. In the era of the Fourth Industrial Revolution, we call it 'Doctor Watson' or a 'surgical robot', but Watson is not a doctor, and surgical robots are not really robots. We seem to overestimate artificial intelligence far too much.
I don't want a machine to make decisions about me or my patients.