
Artificial Intelligence Is Learning to Predict and Prevent Suicide

posted on Mar, 18 2017 @ 10:30 AM
There's a whole lot of monitoring, data collection and privacy invasion going on - commercially, the data is used to figure out what to sell you and where to send you. [Alphabet's use is another story.] Now, though, R&D everywhere wants to monitor and flag your potential for self-harm.

On the surface, helping people about to commit suicide seems to be a good idea.

What say you ATS?


Artificial Intelligence Is Learning to Predict and Prevent Suicide

For years, Facebook has been investing in artificial intelligence fields like machine learning and deep neural nets to build its core business—selling you things better than anyone else in the world. But earlier this month, the company began turning some of those AI tools to a more noble goal: stopping people from taking their own lives. Admittedly, this isn’t entirely altruistic. Having people broadcast their suicides from Facebook Live isn’t good for the brand.

But it’s not just tech giants like Facebook, Instagram, and China’s up-and-coming video platform Live.me who are devoting R&D to flagging self-harm. Doctors at research hospitals and even the US Department of Veterans Affairs are piloting new, AI-driven suicide-prevention platforms that capture more data than ever before. The goal: build predictive models to tailor interventions earlier. Because preventative medicine is the best medicine, especially when it comes to mental health.
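To make the "predictive model" idea concrete, here is a minimal sketch of the kind of risk classifier such a platform might train - not any real system. The features (prior attempts, screening score, recent ER visits, days since last clinical contact) and the data are hypothetical stand-ins, written in Python with scikit-learn.

# Hypothetical risk classifier: synthetic data, invented features.
# Not Facebook's or the VA's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Columns stand in for: prior attempts, screening score,
# recent ER visits, days since last clinical contact.
X = rng.normal(size=(1000, 4))
# Synthetic labels, loosely correlated with the first two features.
y = (0.9 * X[:, 0] + 0.6 * X[:, 1]
     + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank cases by predicted risk so interventions can be triaged earlier.
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk test cases:", np.argsort(risk)[-5:])

In practice the model is the easy part; the hard parts are scarce labels, noisy signals, and the real cost of false positives.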

If you’re hearing more about suicide lately, it’s not just because of social media. Suicide rates surged to a 30-year high in 2014, the last year for which the Centers for Disease Control and Prevention has data.

...Think about it. Between all the sensors in your phone, its camera and microphone and messages, that device’s data could tell a lot about you. More so, potentially, than you could see about yourself. To you, maybe it was just a few missed trips to the gym and a few times you didn’t call your mom back and a few times you just stayed in bed. But to a machine finely tuned to your habits and warning signs that gets smarter the more time it spends with your data, that might be a red flag.

That’s a semi-far off future for tomorrow’s personal privacy lawyers to figure out. But as far as today’s news feeds go, pay attention while you scroll, and notice what the algorithms are trying to tell you.
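The phone-sensor paragraph above is, in effect, describing anomaly detection against a personal baseline. A toy sketch of that idea follows; the daily signals (steps, messages sent, hours out of bed) and the threshold are invented purely for illustration.

# Toy "red flag" check: compare today against a personal baseline.
# Signals and thresholds are invented; no real product works this simply.
import numpy as np

rng = np.random.default_rng(1)

# 60 days of hypothetical baseline: steps, messages sent, hours out of bed.
baseline = rng.normal(loc=[8000, 40, 14], scale=[1500, 8, 1.5], size=(60, 3))
today = np.array([2500.0, 5.0, 6.0])  # a sharp drop across all three signals

mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)
z = (today - mu) / sigma  # how far today sits from this user's own norm

# Flag only when several signals drop well below the personal baseline.
if (z < -2).sum() >= 2:
    print("red flag: sharp deviation from baseline", z.round(2))

A real system would presumably look at trends over weeks rather than a single day - which is exactly why it needs continuous access to your data, and why the privacy question above follows directly.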





posted on Mar, 18 2017 @ 10:51 AM
a reply to: soficrow

What say me?

I say it's just one more watch list I'm going to be put on.

How long before we're all labeled as suicidal terrorists?



posted on Mar, 18 2017 @ 11:00 AM
a reply to: NarcolepticBuddha



How long before we're all labeled as suicidal terrorists?



Good question!

...Who's the ATS bookie these days?






posted on Mar, 18 2017 @ 11:16 AM
a reply to: soficrow

I would say that you critically need to review this sentence in your OP:


"On the surface, helping people about to commit suicide seems to be a good idea.

What say you ATS?"



posted on Mar, 18 2017 @ 11:28 AM

originally posted by: Aliensun
a reply to: soficrow

I would say that you critically need to review this sentence in your OP:


"On the surface, helping people about to commit suicide seems to be a good idea.

What say you ATS?"


Naw. It's a ruse. OP knows.

Every invasion of privacy and automated profiling is to help us after all. Oh and to protect the children too!





posted on Mar, 18 2017 @ 11:51 AM
This reminds me a little of the movie Minority Report.

I find it interesting that fiction is followed by fact. And yes, on the surface it appears to be a good thing. Attempting suicide is a cry for help. But some people are in so much emotional or physical pain, they don't want help. For the people left behind, it can be devastating. I'm on the fence on this one. On one hand, I think if someone can get help, wonderful! As a mom, if one of my children were ever contemplating suicide, I'd move heaven and earth to stop it before it got to that point. But on the other hand, it's pretty presumptuous to make it someone else's business.

I don't know what is the right answer here...



posted on Mar, 18 2017 @ 12:29 PM
a reply to: soficrow

I can see benefits to this and I've been reading about the suicide epidemic for some time.

But I can see this getting way out of control going forward. I'm damn skippy glad I'm not a young person trying to navigate this new world of social media.



posted on Mar, 18 2017 @ 12:30 PM
What if the technology they want to use to combat depression is actually the cause of our depression?
I see a trend where societies far removed from this lifestyle of techno-dependency seem to have lower rates of depression. I could be wrong. But one would think having your every whim catered to via technology would produce a happier person than having to walk miles just to get water. But the opposite seems true.

If anything, I hope AI helps humanity achieve a balance between the natural world and technology before we forget where we came from.



posted on Mar, 18 2017 @ 01:12 PM

originally posted by: Aliensun
a reply to: soficrow

I would say that you critically need to review this sentence in your OP:


"On the surface, helping people about to commit suicide seems to be a good idea.

What say you ATS?"


lol.

But given that such people aren't assisted in their desire to commit suicide, what DOES happen? Are they apprehended? Restrained? Injected with happy drugs?

OR, in a post-UBI world, do we have teams of unemployed-unemployables taking weekend counselling workshops, then racing around providing talk therapy to the red-flagged?


