The DeepMind Conspiracy – Is Google Really Hiding Something?
There is an apocryphal story in technology circles: Google and Facebook could do things with our data that would make our heads spin, but they don’t because it would freak everyone out.
Apocryphal it may be, but data companies do not have to do much to freak us out. Some people will be extremely unnerved by our revelation that Google’s subsidiary DeepMind has access to healthcare data from three UK National Health Service hospitals. People who use the hospitals now know that DeepMind has intimate – albeit anonymised – details of their medical history, including HIV status, past drug overdoses and abortions.
It doesn’t help that DeepMind is reluctant to say what it wants to do with this data. If it were more open, there would arguably be less room for freaking out. The data is a gold mine: by digging into it, the company could build powerful tools to diagnose disease sooner. Indeed, for some medical practitioners who are used to working with such data, this is a non-story: it already happens all the time.
This tension between privacy and progress is a critical issue for modern society. Powerful technology companies can tell us valuable things, but only if we give them control of our data. The trade-off is especially acute in the development of artificial intelligence: without data about real human experiences, whether games of Go or the results of cancer treatments, AI cannot learn anything.
At the moment we give our data up very easily, arguably without ever making a real choice. The NHS patients whose data is now in Google’s hands technically gave consent, but probably without realising it. That needs to change. If we are to hand Google et al. ever more data, then we should insist they ask us first, and tell us what they want it for. Who knows: if we were better informed and the companies less secretive, they might win our consent to do things that would really make our heads spin.