
AI can predict when someone will die with unsettling accuracy


posted on Mar, 28 2019 @ 09:54 PM

originally posted by: DBCowboy
a reply to: InTheLight

We're off topic and should stop.

My relationship is private in any regard.

____________________________________________________

So an AI can predict death! Insurance companies will love this!


Yes, we must stop and not reveal the truth.



posted on Mar, 29 2019 @ 03:01 AM
Sounds like those websites that use health questions to evaluate when you will die.
I don’t want to know. Surprise me.



posted on Mar, 29 2019 @ 06:37 AM
a reply to: neoholographic

It doesn't take a genius to predict death by heart disease.



posted on Mar, 29 2019 @ 08:14 AM

originally posted by: dfnj2015
a reply to: neoholographic

It doesn't take a genius to predict death by heart disease.


It does take a deep learning algorithm to predict the death of 14,500 people out of 500,000 people.

The benefits to this are huge because it can connect causes of death to certain combinations of medicine all the way down to particular diets that could make things worse if you have a certain ailment. Let me repeat the relevant line from the article:

The Cox model leaned heavily on ethnicity and physical activity, while the machine-learning models did not. By comparison, the random forest model placed greater emphasis on body fat percentage, waist circumference, the amount of fruit and vegetables that people ate, and skin tone, according to the study. For the deep-learning model, top factors included exposure to job-related hazards and air pollution, alcohol intake and the use of certain medications.

When all the number crunching was done, the deep-learning algorithm delivered the most accurate predictions, correctly identifying 76 percent of subjects who died during the study period. By comparison, the random forest model correctly predicted about 64 percent of premature deaths, while the Cox model identified only about 44 percent.


Again, the benefits are obvious and the deep learning algorithm will just get better with more data.



posted on Mar, 29 2019 @ 02:51 PM
a reply to: Narcolepsy13



14,418 deaths (2.9%) occurred over a total follow-up time of 3,508,454 person-years. A simple age and gender Cox model was the least predictive (AUC 0.689, 95% CI 0.681–0.699). A multivariate Cox regression model significantly improved discrimination by 6.2% (AUC 0.751, 95% CI 0.748–0.767). The application of machine-learning algorithms further improved discrimination by 3.2% using random forest (AUC 0.783, 95% CI 0.776–0.791) and 3.9% using deep learning (AUC 0.790, 95% CI 0.783–0.797). These ML algorithms improved discrimination by 9.4% and 10.1% respectively from a simple age and gender Cox regression model. Random forest and deep learning achieved similar levels of discrimination with no significant difference. Machine-learning algorithms were well-calibrated, while Cox regression models consistently over-predicted risk.

PLOS.org - Prediction of premature all-cause mortality: A prospective general population cohort study comparing machine-learning and standard epidemiological approaches.

+10% over the standard is not really mind-blowing.

Factor opioid use into your Cox model and then you might get even closer to "deep learning"!

In the end, more people are dying than expected. That should make you go, "hmm?"
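For anyone wanting to check what those AUC numbers actually mean: AUC is the chance the model ranks a randomly chosen death above a randomly chosen survivor. A minimal sketch in plain Python with made-up scores (not the study's data or code):

```python
# The AUC figures quoted above measure discrimination: the probability that
# a randomly chosen person who died is scored as higher-risk than a randomly
# chosen survivor. Pairwise implementation; all scores below are invented.
def auc(outcomes, scores):
    pos = [s for y, s in zip(outcomes, scores) if y == 1]
    neg = [s for y, s in zip(outcomes, scores) if y == 0]
    # Count ranking "wins" for each (death, survivor) pair; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

outcomes    = [0, 0, 0, 0, 1, 1, 0, 1, 0, 1]   # 1 = died during follow-up
cox_scores  = [0.10, 0.30, 0.20, 0.40, 0.35, 0.80, 0.25, 0.30, 0.15, 0.70]
deep_scores = [0.05, 0.20, 0.10, 0.65, 0.60, 0.90, 0.15, 0.70, 0.10, 0.85]

print(round(auc(outcomes, cox_scores), 3))   # 0.896
print(round(auc(outcomes, deep_scores), 3))  # 0.958
```

On these toy numbers the second scorer ranks the deaths more cleanly; the study's 0.790 vs. 0.751 comparison is the same statement at the scale of 500,000 people.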



posted on Mar, 29 2019 @ 04:21 PM

originally posted by: buddha
any remember that cat that could tell who would die?
now that was real.


Cats and dogs can smell cancer.



posted on Mar, 29 2019 @ 04:28 PM

originally posted by: Gothmog
You must be the AI that is stated in the OP
That answer was spot on for everyone.
Wow




posted on Mar, 29 2019 @ 04:43 PM
Life-Line: Robert Heinlein, 1939


"I will repeat my discovery. In simple language, I have invented a technique to tell how long a man will live. I can give you advance billing of the Angel of Death. I can tell you when the Black Camel will kneel at your door. In five minutes' time, with my apparatus, I can tell any of you how many grains of sand are still left in your hourglass." He paused and folded his arms across his chest. For a moment no one spoke. The audience grew restless.

www.baen.com...



posted on Mar, 29 2019 @ 06:01 PM

originally posted by: TEOTWAWKIAIFF
a reply to: Narcolepsy13



14,418 deaths (2.9%) occurred over a total follow-up time of 3,508,454 person-years. A simple age and gender Cox model was the least predictive (AUC 0.689, 95% CI 0.681–0.699). A multivariate Cox regression model significantly improved discrimination by 6.2% (AUC 0.751, 95% CI 0.748–0.767). The application of machine-learning algorithms further improved discrimination by 3.2% using random forest (AUC 0.783, 95% CI 0.776–0.791) and 3.9% using deep learning (AUC 0.790, 95% CI 0.783–0.797). These ML algorithms improved discrimination by 9.4% and 10.1% respectively from a simple age and gender Cox regression model. Random forest and deep learning achieved similar levels of discrimination with no significant difference. Machine-learning algorithms were well-calibrated, while Cox regression models consistently over-predicted risk.

PLOS.org - Prediction of premature all-cause mortality: A prospective general population cohort study comparing machine-learning and standard epidemiological approaches.

+10% over the standard is not really mind-blowing.

Factor opioid use into your Cox model and then you might get even closer to "deep learning"!

In the end, more people are dying than expected. That should make you go, "hmm?"


This is wrong and what you quoted explains why:

Machine-learning algorithms were well-calibrated, while Cox regression models consistently over-predicted risk.

This makes that 10% very important. The machine-learning algorithms didn't over-predict risk. This is EXTREMELY IMPORTANT, and a study like this can save many lives as the machine-learning algorithm gets better at its predictions, because over-predicting or under-predicting risk can have catastrophic consequences.

Say you're taking medicine x, which is labeled as carrying a low risk of internal bleeding. But the risk goes up if you have a certain skin tone, you're taking y and z medicines and you have a particular diet.

Trying to understate the value of a study like this is just ridiculous.

If medicine x is over-predicted to make you nauseous, but we find out this over-prediction applies to Blacks or Hispanics, and a family history of cancer or certain vegetable consumption makes that nausea something dangerous that could lead to a disease that causes death, then this is HUGE! The report says this:

Calibration was assessed by comparing observed to predicted risks; and discrimination by area under the ‘receiver operating curve’ (AUC).

Again, the machine-learning algorithms predicted risks in line with the observed risks. The Cox model over-predicted risk, which is very dangerous.
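The calibration point above can be made concrete in a few lines. This is a hypothetical sketch, not the study's code: a well-calibrated model's average predicted risk matches the observed death rate, while an over-predicting one sits above it.

```python
# Calibration compares predicted risk to what actually happened. Synthetic
# numbers only: 2 of 10 people die, so the observed event rate is 0.20.
outcomes = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
calibrated_risk    = [0.1, 0.2, 0.4, 0.1, 0.2, 0.1, 0.2, 0.5, 0.1, 0.1]
overpredicted_risk = [0.3, 0.5, 0.8, 0.4, 0.5, 0.3, 0.6, 0.9, 0.4, 0.3]

observed = sum(outcomes) / len(outcomes)
for name, risks in [("calibrated", calibrated_risk),
                    ("over-predicting", overpredicted_risk)]:
    mean_predicted = sum(risks) / len(risks)
    print(name, "predicts", round(mean_predicted, 2), "vs observed", observed)
```

Note that both models could rank people identically (same discrimination/AUC), yet the second tells everyone their risk is far higher than reality; that gap is the danger being pointed at here.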

Just look at the baseline predictor variables!

➢ Age (years)
➢ Gender (female; male)
➢ Educational qualifications (none; College/University; A/AS levels; O levels/GCSEs; CSEs; NVQ/HND/HNC; other professional qualifications; unknown)
➢ Townsend deprivation index (continuous)
➢ Ethnicity (White; South Asian; East Asian; Black; other/mixed race; unknown)
➢ Height (m)
➢ Weight (kg)
➢ Waist circumference (cm)
➢ Body mass index (kg/m2)
➢ Body fat percentage (%)
➢ Forced expiratory volume 1 (L)
➢ Diastolic blood pressure (mm HG)
➢ Systolic blood pressure (mm HG)
➢ Skin tone (very fair; fair; light olive; dark olive; brown; black; unknown)
➢ Vitamins and supplements (none; vitamin A; vitamin B; vitamin C; vitamin D; vitamin B9; calcium; multi-vitamins)
➢ Family history of prostate cancer (no; yes)
➢ Family history of breast cancer (no; yes)
➢ Family history of colorectal cancer(no; yes)
➢ Family history of lung cancer (no; yes)
➢ Smoking status (non-smoker; current smoker)
➢ Environmental tobacco smoke (hours per week)
➢ Residential air pollution PM2.5 (quintiles of μg/m3)
➢ Physical activity (MET-min per week)
➢ Beta-carotene supplements (no; yes)
➢ Alcohol consumption (never, special occasions only; 1–3 times per month; 1–3 times per week; daily or almost daily, unknown)
➢ Fruit consumption (pieces per day)
➢ Vegetable consumption (pieces per day)
➢ Beef consumption (never; < one per week; one per week; 2–4 times per week; 5–6 times per week; once or more daily; unknown)
➢ Pork consumption (never; < one per week; one per week; 2–4 times per week; 5–6 times per week; once or more daily; unknown)
➢ Processed meat consumption (never; < one per week; one per week; 2–4 times per week; 5–6 times per week; once or more daily; unknown)
➢ Cereal consumption (bowls per week)
➢ Cheese consumption (never; < one per week; one per week; 2–4 times per week; 5–6 times per week; once or more daily; unknown)
➢ Salt added to food (never/rarely; sometimes; usually; always; unknown)
➢ Type of milk used (never/rarely; other types; soya; skimmed; semi-skimmed; full cream; unknown)
➢ Fish consumption (never; < one per week; one per week; 2–4 times per week; 5–6 times per week; once or more daily; unknown)
➢ Sunscreen usage (never/rarely; sometimes; usually; always; unknown)
➢ Ease of skin tanning (very tanned; moderately tanned; mildly/occasionally tanned; never tan/only burn; unknown)
➢ Job exposure to hazardous materials (none; rarely; sometimes; often; unknown)
➢ Aspirin prescribed (no; yes)
➢ Warfarin prescribed (no; yes)
➢ Digoxin prescribed (no; yes)
➢ Metformin prescribed (no; yes)
➢ Oral contraceptives prescribed (no; yes)
➢ Hormone replacement therapy prescribed (no; yes)
➢ Anti-hypertensive drugs prescribed (no; yes)
➢ Statins prescribed (no; yes)
➢ Previously diagnosed with h. pylori infection (no; yes)
➢ Previously had radiotherapy (no; yes)
➢ Previously diagnosed with bowel polyps (no; yes)
➢ Previously diagnosed with Coeliac disease (no; yes)
➢ Previously diagnosed with Crohn’s disease (no; yes)
➢ Previously diagnosed with thyroid disease (no; yes)
➢ Previously diagnosed with acid reflux (no; yes)
➢ Previously diagnosed with hyperplasia (no; yes)
➢ Previously diagnosed with prostate disease (no; yes)
➢ Previously diagnosed with cancer (no; yes)
➢ Previously diagnosed with coronary heart disease [CHD] (no; yes)
➢ Previously diagnosed with stroke/transient ischemic attack [TIA] (no; yes)
➢ Previously diagnosed with Type II diabetes [T2DM] (no; yes)
➢ Previously diagnosed with chronic obstructive pulmonary disease [COPD] (no; yes)


This is AMAZING to say the least.

There are more variables that will be added.

Just think: machine-learning algorithms make correlations in the data like we do. They just do it EXTREMELY fast. You can eventually say things like:

If Mike works around these materials, he should be prescribed medicine x instead of medicine y because medicine y could lead to a disease that will most likely cause Mike's death.

The lives that will be saved and extended because of this will be in the millions, and of course the question will be asked whether we want to save those lives, given climate change and overpopulation.



posted on Mar, 29 2019 @ 06:30 PM
a reply to: neoholographic


It did not predict anything!

They trained it over an open data set of ~500,000 records. Each record gets fed through a feed-forward series of algorithms that can track multiple items and relate them to each other (each layer is technically a "statistical bias," as ANNs are set with values). The results of that run were compared to death notifications on record for individuals in the sample group (it does not even look like the individuals were followed up, to account for things like moving out of the country, let's say).

AI and deep learning are meant to do something: be repeatable. As a new data set is thrown at it, it should already "know" what to look for. That part is missing!

Give it another 500,000 records and then compare your results. If the trained AI has consistently higher percentages, throw another data set at it! You do this multiple times, from different years, different areas, different mixes, then report back your findings.
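The validation loop described above can be sketched in a few lines. Everything here is hypothetical (a trivial age-based scorer stands in for the deep net); the point is the protocol of freezing a model and re-scoring it on fresh cohorts, not the model itself.

```python
import random

random.seed(0)

# Hypothetical stand-in "model": risk simply rises with age. The names and
# numbers are invented; the protocol is what the post describes -- freeze
# the model, then score it on several fresh cohorts and compare results.
def predict_risk(age):
    return min(age / 100.0, 1.0)

def make_cohort(n):
    # Each synthetic person: (age, died), death chance loosely tied to age.
    return [(age, random.random() < age / 200.0)
            for age in (random.randint(40, 70) for _ in range(n))]

def accuracy(cohort, threshold=0.55):
    # Predict death when risk >= threshold; count agreement with outcomes.
    correct = sum((predict_risk(age) >= threshold) == died
                  for age, died in cohort)
    return correct / len(cohort)

# Score the frozen model on three independent cohorts, as the post suggests;
# consistent numbers across cohorts are what would make a result credible.
for i in range(3):
    print("cohort", i, "accuracy:", round(accuracy(make_cohort(1000)), 3))
```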

In the end, at least to me, a one-off run is not too impressive. Interesting, yes. Very! And it's not surprising that a computer can keep track of multiple "dimensions" of data better than a human (which is what the Cox model is a crutch for), but that is it as far as I am concerned.



posted on Mar, 29 2019 @ 07:02 PM
a reply to: TEOTWAWKIAIFF

You said:

It did not predict anything!

The title of the study:

Prediction of premature all-cause mortality: A prospective general population cohort study comparing machine-learning and standard epidemiological approaches, by Stephen F. Weng, Luis Vaz, Nadeem Qureshi

Here's more:

Machine-learning significantly improved accuracy of prediction of premature all-cause mortality in this middle-aged population, compared to standard methods. This study illustrates the value of machine-learning for risk prediction within a traditional epidemiological study design, and how this approach might be reported to assist scientific verification.

Yes, the machine learning algorithms predicted something extremely important as I pointed to earlier.

This study illustrates the value of machine-learning for risk prediction

Again, this is INVALUABLE if you just have a basic understanding of medical treatments.

If a machine learning algorithm can say person x with a family history of lung cancer that works around x materials should take y medication instead of z medication because z medication increases the likelihood of the person developing x disease which increases the likelihood of death, that's VERY IMPORTANT.

If you can't grasp how better risk prediction can save millions of lives then you should just Google risk prediction and read.

edit on 29-3-2019 by neoholographic because: (no reason given)



posted on Mar, 29 2019 @ 07:52 PM
"Ms. Peacock, in the study, with a pipe wrench."

The creators of the AI don't seem to have factored in any type of "war equation." The imbalance of sex/gender ratios can often predict when a given community will go to war. During times of national crisis, women tend to conceive slightly more boys than girls. And 19 years later (the average age of combatants), the same culture tends to be at war. So for Americans, the stock market crash of 1929 caused an increase in boys, and 19 years later America was in the depths of its involvement in the Korean War. And 19 years after that was the climax of the Vietnam War and the social chaos of the sixties. The mid-1980s didn't really see the US in a war; but there were a disproportionate number of boys born in 2002/2003, in the wake of "this generation's Pearl Harbor," which implies that the US will be at war 2020-2022.

Demographers do take seriously the correlation between excessive surpluses of unmarried males and the increase in crime/gang activity; as well as a more aggressive foreign policy to assuage the "extra" single males.

The Security Risks of China's abnormal demographics

Wikipedia: list of countries by sex ratio



posted on Mar, 29 2019 @ 09:04 PM
a reply to: neoholographic

That is the title (MSM, too).

Please read and understand what the actual study did do.

I am not claiming anything about medical studies. I am saying that “A.I.” as depicted as “sees the future” is, uh,... stretched, at best.

Again, YAY medicine and diagnostics. Boo click-bait headlines that try to “golly! Gee!” the public who doesn’t know any better.



posted on Mar, 29 2019 @ 09:32 PM
How is this unsettling? Surely the idea of applying an A.I. to a certain subject and then discovering that it's good at it in terms of accuracy was actually the goal to begin with, and therefore settling.

Unless of course, you're the person on the other end of the prediction I suppose.

😏



posted on Mar, 29 2019 @ 09:52 PM
a reply to: neoholographic

Neo,

A prediction is taking known data and stating “in the year 2000... people will discover that Supreme Court is just regular court but with tomatoes and sour cream”! (Thanks Team CoCo for that!)

All their data was from and about the past. Ever hear that saying, “hindsight is 20/20”??

That is my point. Nothing more. I love technology. It should be used to help all people. But I know that I cannot enter my file into their A.I. and know when I am going to die.



posted on Mar, 29 2019 @ 09:58 PM

originally posted by: RMFX1
How is this unsettling? Surely the idea of applying an A.I. to a certain subject and then discovering that it's good at it in terms of accuracy was actually the goal to begin with, and therefore settling.

Unless of course, you're the person on the other end of the prediction I suppose.

😏


The unsettling part is knowing when you might die but in exchange for that knowledge there will be some database with every bit of information about your life, genetics, medical history and more. This will allow the algorithm to make more correlations.

So you might get an alert that says, "Because you went through x treatment, you're taking x supplements, your diet is rich in x vegetables and you're a Hispanic male over 40 with a family history of diabetes, your chances of developing x disease over the next year have just increased by 30%."

Think of all the lives that will be saved and extended but it comes at a high cost. This A.I. would have to know just about everything about everyone.

I can see a future where A.I. Health Monitoring is mandatory. Of course, this would mean some Government could legally collect a lot of information about everyone.

We will not have any privacy but we will be healthier. That will be the pitch.



posted on Mar, 29 2019 @ 10:02 PM
a reply to: TEOTWAWKIAIFF

Again, you just don't understand it.

A prediction can be made by looking at past data and the more data you have the better predictions you can make. That's just basic common sense.



posted on Mar, 30 2019 @ 07:57 PM
a reply to: neoholographic

A pre-diction takes current state and says what the future will be.

In the study, it was all the past.

I understand. Do you?



posted on Mar, 30 2019 @ 09:12 PM
a reply to: TEOTWAWKIAIFF

Again, not trying to be mean or contrary, just saying that statistics is what it is.



posted on Mar, 31 2019 @ 04:02 PM
a reply to: TEOTWAWKIAIFF

You can't be serious?

We're not talking about a Psychic prediction, we're talking about a future prediction based on data from the past.

Every prediction outside of a Swami is made by using past data to predict future trends. The more data the better the prediction.

Here's a recent study with AI PREDICTING cancer.

AI predicts cancer patients' symptoms

You know how they were able to make these predictions?


Researchers analysed existing data of the symptoms experienced by cancer patients during the course of computed tomography x-ray treatment. The team used different time periods during this data to test whether the machine learning algorithms are able to accurately predict when and if symptoms surfaced.


Let me repeat.

Researchers analysed existing data of the symptoms experienced by cancer patients during the course of computed tomography x-ray treatment. The team used different time periods during this data to test whether the machine learning algorithms are able to accurately predict when and if symptoms surfaced.

www.sciencedaily.com...

AI had to use data from different time periods.

Here's one more.

AI predicts ovarian cancer survival rates from CT scans

Did the study look at past data or is this a Psychic Medium study? Let's look.

For their study, the researchers initially segmented the CT scans then used TEXLab 2.0—a machine learning software tool—to identify tumor aggressiveness in CT scans and tissue samples from 364 women with ovarian cancer between 2004 to 2015.

The software examined a total of 657 features relating to the structure, shape, size and genetic makeup of the tumors to assess the patients’ prognosis. Each patient then received a Radiomic Prognostic Vector (RPV) score, which indicates how mild to severe the disease was. Results from blood tests and currently utilized prognostic scores were used to estimate survival rates.


www.radiologybusiness.com...

You just can't be serious with your last post. It has to be a joke. You said:

A pre-diction takes current state and says what the future will be. In the study, it was all the past.

REALLY??

Of course it's all past, unless you're talking about psychics like Uri Geller or Sylvia Browne.

The more PAST DATA you have the better you can make future predictions.

How can AI make a prediction about Ovarian Cancer without CT scans from the past, specifically between 2004-2015?
edit on 31-3-2019 by neoholographic because: (no reason given)
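For what it's worth, the kind of prediction both of those studies do has a standard name: a temporal holdout, where you fit on earlier records and score later ones. A toy sketch with invented records (only the 2004-2015 year range mirrors the quote above):

```python
# Temporal holdout: fit on earlier records, predict on later ones -- the
# pattern behind "using past data to predict the future." Synthetic data:
# (year, feature, outcome), where the outcome loosely follows feature > 5.
records = [
    (2004, 2, 0), (2005, 7, 1), (2006, 3, 0), (2007, 8, 1),
    (2008, 6, 1), (2009, 1, 0), (2010, 9, 1), (2011, 4, 0),
    (2012, 7, 1), (2013, 2, 0), (2014, 8, 1), (2015, 3, 0),
]

train = [r for r in records if r[0] <= 2012]   # "the past"
test  = [r for r in records if r[0] > 2012]    # "the future"

# Learn a threshold from the training years only: the midpoint between the
# mean feature value of positive and negative cases.
pos = [f for _, f, y in train if y == 1]
neg = [f for _, f, y in train if y == 0]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

hits = sum((f > threshold) == bool(y) for _, f, y in test)
print("threshold:", round(threshold, 2), "test accuracy:", hits / len(test))
```

If the threshold learned from 2004-2012 still separates the 2013-2015 cases, the model "predicted the future" in exactly the sense the thread is arguing about.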


