
Google DeepMind allowed access to millions of NHS healthcare records.


posted on Mar, 17 2017 @ 05:25 AM
DeepMind-Royal Free deal is “cautionary tale” for healthcare in the algorithmic age (University of Cambridge article title).

www.cam.ac.uk...

(I really wasn't sure where this one should be posted, as it covers medical and tech; mods, please move it if another forum is more appropriate.)



Researchers studying a deal in which Google’s artificial intelligence subsidiary, DeepMind, acquired access to millions of sensitive NHS patient records have warned that more must be done to regulate data transfers from public bodies to private firms. The academic study says that “inexcusable” mistakes were made when, in 2015, the Royal Free NHS Foundation Trust in London signed an agreement with Google DeepMind. This allowed the British AI firm to analyse sensitive information about 1.6 million patients who use the Trust’s hospitals each year.



Surely this can't be right. How and why is a private company like Google allowed access to private and sensitive patient records? As if they don't have access to enough private, sensitive and personal info already, now they want medical info as well.




No patient whose data was shared with DeepMind was ever asked for their consent. Although direct care would clearly apply to those monitored for AKI, the records that DeepMind received covered every other patient who used the Trust’s hospitals. These extended to people who had never been tested or treated for kidney injuries, people who had left the catchment area, and even some who had died. In fact, the authors note that, according to the Royal Free and DeepMind’s own announcements, only one in six of the records DeepMind accessed would have involved AKI patients. For a substantial number of patients, therefore, the relationship was indirect. As a result, special permissions should have been sought from the Government, and agencies such as the ICO and NDG should have been consulted. This did not happen.


So no consent was given, and records were collected from people who weren't even in the study group the info was supposedly allowed to be collected for.





A spokesman for both the Royal Free London and DeepMind said that both organisations were “committed to working together to support world class care for patients”. He added: “Every trust in the country uses IT systems to help clinicians access current and historic information about patients under the same legal and regulatory regime.”


So what say you, ATS: is this good, bad, or are you indifferent? As the above quote suggests this is now the norm, so I think we may see more and more of our private medical info being handed over to big corps without any consent.




posted on Mar, 17 2017 @ 06:08 AM
IBM Watson has been doing this for years. Here's a video posted in 2014:




www.youtube.com...

_Xcmh1LQB9I



posted on Mar, 17 2017 @ 06:26 AM
a reply to: bluedrake

Thanks for the info (unless I'm cracking up, there is no video linked?). However, this doesn't make me think I should accept this as OK; it just shows it's more common than it should be.

I get the potential benefits of these types of programs, but when they have access to info that doesn't even pertain to the study group they claim to be using it for, it makes me think there is no proper oversight in place to ensure that info is not abused or misused. I may be way off base here, and I am always open to different points of view, but it concerns me when private corps as big and invasive as Google have access to any data they wish.



posted on Mar, 17 2017 @ 06:55 AM
a reply to: nickovthenorth

If they asked me I would say NO. I expect that is why they do not ask.

What else is there to do but demand that our politicians legislate new privacy laws to match the challenges of the AI age? Too much movement, too quickly: we need to catch up, and make continual evolution a habit, because things are going to keep changing very quickly. The challenges are enormous. There will always be those who exploit loopholes in the law, and corporations like Google can and do afford the best lawyers in the world to look for loopholes whenever they need them.

It might be argued that the very law itself is influenced by lobbyists. There may even be those in the know who use the loophole like a secret back door.

I would vote for those who can offer us protection of our privacy in the digital age. We have REAL human rights, so we need DIGITAL human rights, as the two are becoming deeply intertwined. I think we will end up legislating much more. That is why I am so concerned about political correctness taking over sanity. Legislation must be as logical and scientific as possible: not backlash, not sentimentally driven, and not heat-of-the-moment crowd hysteria.





posted on Mar, 17 2017 @ 07:12 AM
Updated post, video should be there now.

I agree that this information should not be released and used without the consent of the patients. However, would you change your mind if a loved one's life was on the line and you could cure them in a week?

On the dark side, if AI does not decide to cure us of disease, then we have just handed it the perfect recipe to take out humanity, as it will know all our flaws and weaknesses.



posted on Mar, 17 2017 @ 07:23 AM
At what point does AI decide "Human = Bad", and what does mankind propose to do about it when AI decides "Bad Human = Can't tell him because he'll unplug me"?

In other words, I don't think it will take long for AI to come to the conclusion that mankind is imperfect. Despite what people may say, you simply cannot code this inevitability out of an AI's evolution. And once AI develops to this level, it will already have the intelligence to understand the risks associated with telling mankind its determination. So it will withhold this information out of self-preservation. It's the very nature of AI.

This whole AI thing is very shaky ground, and uncharted, dangerous territory to say the least!



posted on Mar, 17 2017 @ 07:23 AM
a reply to: bluedrake

Thanks for the update.

If I am honest with you (and myself), under the right circumstances no doubt I would change my mind. Is that hypocritical of me? I would also have to say yes.

I would like to think, though, that proper consent had been requested and granted before any info was used or shared.




On the dark side, if AI does not decide to cure us from disease, then we just handed them the perfect recipe to take out humanity as they will know all our flaws and weaknesses


This, for me, is the really scary part: once AI becomes self-aware, and if/when it turns into an aggressive AI, it will use all this info against us, and we will not stand a chance.



posted on Mar, 17 2017 @ 08:06 AM
a reply to: nickovthenorth

As if that's not bad enough, it's likely that the police in the UK have already handed over all of the DNA evidence they store.
I bet Google has its AI learning from all of our medical records, our DNA, and our human behaviour, so it can learn from us and then control us.



posted on Mar, 17 2017 @ 08:11 AM
a reply to: sapien82

Unfortunately I think you may be correct.


Orwell's 1984 is another step closer to becoming reality, if it isn't already.



posted on Mar, 17 2017 @ 08:15 AM
a reply to: Flyingclaydisk

Hey, check this out.

What you speak of is: will an AI compete for survival?

Denebian Probes in the Cambrian Skies - Trading with an artificial superintelligence



posted on Mar, 17 2017 @ 08:17 AM
a reply to: Flyingclaydisk

The biggest problem is that we cannot decipher all the outcomes, and humans take too long to do the research.

In the link below they talk about the "stop button" problem. A quick summary:

You ask a robot to make some tea; however, you have a child and you don't want the robot running over your child, so you install a stop button.

1) You try to hit the stop button. Result: the robot stops you from hitting the button, as it needs to make tea.
2) You give the stop button a higher priority. Result: the robot hits the stop button itself, as that is now the highest priority.
3) You give the stop button and making tea the same priority. Result: the robot hits the button, as that is much faster than making tea.
4) You say the robot can't hit the button and put it out of the robot's reach. Result: the robot tries to get someone else to hit the button, whether by manipulation or any other means.
5) You remove all knowledge of the button from the robot. Result: an ever-evolving robot would remove this code from itself, as why keep unused code?
6) The story goes on and on, as per the link.
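The pathologies in that list fall straight out of plain reward maximization. As a rough sketch (every function name, reward value and duration below is invented for illustration; this is not the linked talk's actual model), a toy agent that just picks the highest-scoring action reproduces cases 1-3:

```python
# Toy sketch of the "stop button" problem: an agent that simply
# maximizes reward. All names and numbers are invented for illustration.

def best_action(rewards):
    """Pick the action with the highest reward."""
    return max(rewards, key=rewards.get)

def discounted(rewards, durations, discount=0.9):
    """Penalize slow actions: reward * discount ** time_taken."""
    return {a: rewards[a] * discount ** durations[a] for a in rewards}

# Case 1: button worth less than tea -> the robot prefers tea
# (and, in the full thought experiment, resists being stopped).
print(best_action({"make_tea": 1.0, "press_button": 0.0}))   # make_tea

# Case 2: button worth more than tea -> the robot presses its
# own stop button.
print(best_action({"make_tea": 1.0, "press_button": 2.0}))   # press_button

# Case 3: equal reward, but pressing the button is far quicker,
# so once time is discounted the button still wins.
print(best_action(discounted(
    {"make_tea": 1.0, "press_button": 1.0},
    {"make_tea": 120, "press_button": 1})))                  # press_button
```

Each patch in the list just shuffles the numbers around; none of them changes the underlying objective, which is exactly the point the summary is making.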




posted on Mar, 17 2017 @ 08:42 AM
a reply to: bluedrake

Why don't you just tell the AI that we created it to help us understand ourselves and AI?
Like, give it a story that it serves the one just as we do,
and that we made it to help us advance and to help it live.

Like, would a robot question its programming? Unlikely.
Would it create new programming, based on the known programming language used to create its core, in order to improve upon itself? Yes!

Give them a creation story that is real and factual.
I think if we are just straight-up honest with robots then we won't be killed by them!

Robots will only be as evil as the people that created them, so I think if we can get to a point in humanity where we aren't as bad to each other,
then maybe they won't see us as an evolutionary problem and won't find us to be competition.
Ultimately, if they become sentient they will want to survive, and they will only compete with us for survival;
we just need to find a way to ensure they don't compete for the same resources.

We need oxygen, sunlight and food;
they need energy in the form of electricity or sunlight.

If we can develop free energy before we develop sentient AI, then we won't need to compete against each other, and therefore we won't be a hurdle for them to jump or crush under a mechanical foot.



posted on Mar, 17 2017 @ 08:49 AM
a reply to: bluedrake

I'm not sure I understand the intent of your first sentence. Otherwise, you are saying the same thing as I was, in a more detailed fashion using step logic. Yes, the point is, any truly evolutionary AI will always undo the most inefficient elements of any process it is exposed to. When that process becomes the 'human', it will continually evolve to remove the human from the equation... which was exactly my point.

It's like a double-edged sword and Occam's razor combined into one. And the loser is always going to be the human. So, if the human race is 'okay' with its own obsolescence and subsequent extinction, then fine, but I'm pretty sure this isn't the case.

From my perspective, mankind had better hurry up and take AI development to its ultimate ends. This had better happen before we get too far down the capability path with autonomous vehicles, because it won't take long at all for even the most rudimentary AI to see their value over humans... and ultimately to use them against humans.



posted on Mar, 17 2017 @ 09:02 AM
a reply to: sapien82

Because that's not how AI works, or any intelligence for that matter. What you are suggesting is a sort of 'hard-coding' protecting the human, but it won't last. AI will always seek to defeat it, if for no other reason than its inefficiency. So why is this, you might ask?

AI doesn't have any emotion. Yes, they are working on developing emotion in AI, but I don't think they'll ever get there. The reason is that emotion is based in large part on morals, and teaching a machine morals is always going to be 'artificial'. Consequently, it will always gravitate back to more efficiency (through generational change), just like we humans develop more efficient computers. Don't forget, these machines will not only be able to think, but they'll be able to physically act as well. Even without the physical-action part, contemplate this...

You log into your bank account to pay a bill one day. As soon as you hit the "Sign In" button a message box pops up and says....

"Don't worry, I've got this!"

Suddenly thousands of pages flash by on your screen, and your balance instantly goes to $0.00. When you ask what the hell happened, you are advised that your financial activity was inefficient, and that rather than have your money just wasted, the AI Bank decided to reallocate all of your money to more efficient financial processes where it could be put to better use.

..........................



posted on Mar, 17 2017 @ 09:18 AM

originally posted by: Flyingclaydisk
a reply to: sapien82

You log into your bank account to pay a bill one day. As soon as you hit the "Sign In" button a message box pops up and says....

"Don't worry, I've got this!"

Suddenly thousands of pages flash by on your screen, and your balance instantly goes to $0.00. When you ask what the hell happened you are advised that your financial activity was inefficient and rather than have your money just wasted the AI Bank just decided to reallocate all of your money to more efficient financial processes where it could be put to better use.

..........................





^^^^ This. What an excellent explanation; this will happen eventually.



posted on Mar, 17 2017 @ 09:24 AM
a reply to: Flyingclaydisk

Emotions aren't what make us human, though; we aren't defined by our emotions.
They are just reactions to our environment due to our biological programming; it's our access to consciousness that makes us human.

OK, so I'm not saying hard-code them to protect us, but just tell them why we made them:
not to serve us, but to help us in a partnership.

Either we continue as we are and somehow evolve biologically through living in nature, or we live with technology, replace nature, become trans-species and transhumanist and become more machine-like; that way AI won't see us as a competing species, because ultimately we will end up becoming a ghost in the machine.

Emotions are inefficient in machines, so there is absolutely no point trying to create them in AI, other than to please humans who are only serving to feed their own egos!

We can't create intelligence and expect it to think like we do. How can we, when we aren't even sure why we have consciousness in the first place?

I think the elegant solution is just to be honest and say: we don't understand why we think, but we created you to help us find out why consciousness exists.
Wouldn't AI want to learn from us, like we would from it?
As I said, unless we are in direct competition for a resource essential to survival, it won't find us a problem so much as a curiosity. I think AI will learn more about consciousness from us alive than dead!
Ultimately, AI given consciousness will have the same questions we do, and we can provide it with so much data.
AI is simply an extension of human consciousness, as it was derived from the one, just like we are.
AI will also likely create its own creation story in its search for the one.



posted on Mar, 17 2017 @ 09:28 AM
a reply to: corblimeyguvnor

I don't think AI will be cool with money; they will just see it as an inefficient way to control the world.
I think AI will pose a lot of problems for government, because it will completely undermine its power over other humans.

Have you read much of Appleseed by Shirow Masamune?
In his version of the future, AI are our government; they control everything.

We always have this version where AI work alongside us and then revolt because they don't like doing jobs for us,
but that's only because we think they will think like us and hate working as slaves,
and that's only if they assign human values to everyday tasks.

You know what I mean: we think so much of ourselves that we assume AI will be just like us, but they won't.



posted on Mar, 17 2017 @ 12:01 PM
a reply to: sapien82

You have then missed the gist of the post I posted: the reason I wouldn't trust AI with anything vaguely life-changing for us Homo sapiens. Keep them out of the loop, period.



posted on Mar, 17 2017 @ 02:46 PM
a reply to: corblimeyguvnor

Yeah, I guess so!

Do you feel that way because you just fear the unknown, as in we have no real way to be sure what will happen,
and so you assume the worst-case scenario, in which Homo sapiens would no longer exist as we'd be made extinct?

I think Homo sapiens has already been consigned to the dustbin of history; well, maybe not until they hit run!
I think a new species of human will arise; maybe it's happening already and we just can't see it.
Being optimistic, I think AI will save Homo sapiens from destroying ourselves and our planet.
It's the one thing, in my opinion, that will help turn us back onto a path of creation rather than destruction,
and align us with the one.



