
Big pharma turns to artificial intelligence to speed drug discovery

posted on Jul, 2 2017 @ 09:36 PM
Some big drug companies are turning to AI to cut the time it takes to get a new drug to market. The stated goal is to go from 5.5 years to one year for the discovery stage, and they say they have a responsibility to lower drug costs.


The world's leading drug companies are turning to artificial intelligence to improve the hit-and-miss business of finding new medicines, with GlaxoSmithKline unveiling a new $43 million deal in the field on Sunday.
(the) goal is to reduce the time it takes from identifying a target for disease intervention to finding a molecule that acts against it from an average 5.5 years today to just one year in future.

"That is a stretch. But as we've learnt more about what modern supercomputers can do, we've gained more confidence," Baldoni told Reuters. "We have an obligation to reduce the cost of drugs and reduce the time it takes to get medicines to patients."
ca.news.yahoo.com...

Glaxo is paying $43 million to search for drug candidates for up to 10 diseases. It sounds like a trial run for Exscientia, the company that has the know-how to do the computing.


The new deal with Exscientia will allow GSK (GlaxoSmithKline) to search for drug candidates for up to 10 disease-related targets. GSK will provide research funding and make payments of 33 million pounds ($43 million) if pre-clinical milestones are met.



posted on Jul, 2 2017 @ 09:46 PM
This sounds great: cutting testing times and, in the long run, prices.

I guess I just don't know enough about the process to fully comprehend the entire article.
Can someone explain it a little better for me?



posted on Jul, 2 2017 @ 09:51 PM
a reply to: essentialtremors

I find it amazing that AI is, in theory, going to take that from 5.5 years down to one.

I wonder how much $$$$ this is going to save consumers (in the US)?



posted on Jul, 2 2017 @ 10:11 PM
a reply to: seasonal

This will not save the public any money. The sooner they can make a cure, the faster they can create an ailment for that cure and let it spread.

The endgame is to create a disease or a sickness like diabetes that forces everyone to buy daily injections or suffer pain all day long. They will squeeze every single person for every last penny and then let us all die to lower the population as robots will be ready to serve the elite.



posted on Jul, 2 2017 @ 10:13 PM
a reply to: Heruactic

You earn the Mr. sunshine award, and I wish I could disagree with you.



posted on Jul, 2 2017 @ 10:14 PM

originally posted by: essentialtremors
This sounds great: cutting testing times and, in the long run, prices.

I guess I just don't know enough about the process to fully comprehend the entire article.
Can someone explain it a little better for me?


There are several different ways they can tell whether a particular molecule will work or not. The systems can even predict, with 54% accuracy, whether a molecule will have any effect on a particular disease:

www.techemergence.com...
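
To make that concrete (purely an illustrative Python sketch, not the system GSK or Exscientia actually uses): you can train a classifier on precomputed molecular fingerprints and see how well it separates active from inactive compounds. The fingerprints.npy and labels.npy files below are hypothetical stand-ins for real assay data.

# Sketch: predict whether a molecule is "active" against a target from
# precomputed fingerprint features. The data files are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("fingerprints.npy")   # shape (n_molecules, n_bits), e.g. 2048-bit fingerprints
y = np.load("labels.npy")         # 1 = showed activity in an assay, 0 = inactive

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")   # ~0.54 would match the figure above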

They can also use machine learning, deep learning, and neural networks to sift through vast amounts of human genetic data for biomarkers (a mutated gene, an active gene, an inactive gene) that indicate a link to a particular disease.
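
The biomarker search can be sketched the same way: given a (hypothetical) gene-expression matrix and patient/control labels, rank genes by how strongly they separate the two groups. Again, this is a minimal illustration, not any company's actual pipeline.

# Sketch: rank genes as candidate disease biomarkers by how strongly each one
# separates patients from controls. The expression data here is hypothetical.
import numpy as np
from sklearn.feature_selection import f_classif

expression = np.load("expression_matrix.npy")   # shape (n_samples, n_genes)
diagnosis = np.load("diagnosis.npy")            # 1 = has the disease, 0 = control

f_scores, p_values = f_classif(expression, diagnosis)
top = np.argsort(p_values)[:20]                 # the 20 most disease-associated genes
print("candidate biomarker gene indices:", top)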

There's also protein-folding modelling. That requires GPU processing to try all the possible combinations of rotations around the molecular hinge joints of an enzyme or protein with a million or more atoms.
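
A rough back-of-the-envelope calculation shows why that eats GPU time; the bond count and angle discretisation below are made-up round numbers, not values from the article.

# Why conformational search explodes: if each rotatable "hinge" can take a
# handful of discrete angles, conformations grow exponentially with hinge count.
rotatable_bonds = 100      # hypothetical count for a large protein segment
angles_per_bond = 3        # coarse discretisation, e.g. -60 / 60 / 180 degrees
conformations = angles_per_bond ** rotatable_bonds
print(f"{conformations:.3e} conformations to consider")   # ~5e47, hence GPUs and heuristics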

There are also robot chemical-analysis machines that can be assigned a particular task to investigate interactions between different genes and molecules. Such a machine performs one set of experiments, makes standard logical deductions to reach a conclusion, then creates a new hypothesis, combines those results, does some more logical deduction, and keeps narrowing down the options with further experiments until a final result is found. This replaced the job of technicians performing experiments, writing reports, sending them for approval, doing another experiment, and so on.
Something like: if A, B, C and D together affect E, then investigate whether A and B affect E and whether C and D affect E. If a subset has no influence, discard it, so the system eventually realizes that only A affects E.
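
Here's a toy Python sketch of that elimination logic, with a hypothetical run_experiment() standing in for the robot's real assay; it only illustrates the "discard what has no influence" idea, not how any actual robot scientist is implemented.

# Toy subset-elimination screen: recursively split the candidate factors and
# keep only the halves that still show an effect on the outcome.
def run_experiment(factors):
    # Placeholder "assay": in this toy example only factor "A" truly matters.
    return "A" in factors

def find_active_factors(factors):
    """Return the minimal set of factors that still affect the outcome."""
    if not run_experiment(factors):
        return set()                    # this subset has no influence: discard it
    if len(factors) == 1:
        return set(factors)             # a single factor that matters
    mid = len(factors) // 2
    return find_active_factors(factors[:mid]) | find_active_factors(factors[mid:])

print(find_active_factors(["A", "B", "C", "D"]))   # -> {'A'}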



posted on Jul, 2 2017 @ 11:06 PM
a reply to: Heruactic

They will squeeze every single person for every last penny and then let us all die to lower the population as robots will be ready to serve the elite.


Except the life expectancy in the US is higher now than it ever has been. It is projected to continue to increase over the next 30-50 years to an average of 80.

If they're trying to kill us all, they are doing it wrong.

Either that or maybe all these scary conspiracy hypotheses are simply bull#.



posted on Jul, 2 2017 @ 11:19 PM
Finding the key to trigger a desired response will not be so easy. It would take databases of the chemicals in plants and foods that are possible solutions. Take, for instance, the chemicals in popcorn. It is full of them; each name can generate a possible configuration, and those configurations can be analyzed to see whether they fit a lock on a protein and start a process or enzyme production. I saw the chemical list for plain popcorn. It contains a lot of chemicals, many of them methyl compounds, possible medicines to filter through. I stumbled across that list but did not save the link.

Here is one I did save....if it works. www.acs.org... .html

So what is wrong with snacks?



posted on Jul, 3 2017 @ 09:14 AM

originally posted by: rickymouse
Finding the key to trigger a desired response will not be so easy. It would take databases of the chemicals in plants and foods that are possible solutions. Take, for instance, the chemicals in popcorn. It is full of them; each name can generate a possible configuration, and those configurations can be analyzed to see whether they fit a lock on a protein and start a process or enzyme production. I saw the chemical list for plain popcorn. It contains a lot of chemicals, many of them methyl compounds, possible medicines to filter through. I stumbled across that list but did not save the link.

Here is one I did save....if it works. www.acs.org... .html

So what is wrong with snacks?


They've got the databases. I haven't heard of these before today, but that "big data" is already present:

"ZINC "... a free database of commercially-available compounds for virtual screening. ZINC contains over 13 million purchasable compounds in ready-to-dock, 3D formats."

"BindingDB "BindingDB is a public, web-accessible database of measured binding affinities, focusing chiefly on the interactions of protein considered to be drug-targets with small, drug-like molecules. BindingDB contains 781,982 binding data, for 6,448 protein targets and 342,414 small molecules."

"PheroBase "Currently, there are over 30000 entries, around 8000 molecules, and over 100000 static php pages that make it the world's largest database of behaviour modifying chemicals. In addition, mass spectral, NMR, synthesis data for more than 2500 compounds are included."

"DrugBank "... a unique bioinformatics and cheminformatics resource that combines detailed drug (i.e. chemical, pharmacological and pharmaceutical) data with comprehensive drug target (i.e. sequence, structure, and pathway) information. The database contains 6707 drug entries including 1436 FDA-approved small molecule drugs, 134 FDA-approved biotech (protein/peptide) drugs, 83 nutraceuticals and 5086 experimental drugs."

depth-first.com...

If you consider that there are 13 million chemicals, that human DNA has 30,000+ genes, each with associated proteins and enzymes, and that they all have hundreds of binding points, it is a massive problem.
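
Putting rough numbers on that, using the figures quoted in this thread (with "hundreds of binding points" taken at the low end):

# Rough scale of the brute-force matching problem described above.
compounds = 13_000_000     # purchasable compounds in ZINC
genes = 30_000             # human genes, order of magnitude
binding_sites = 100        # "hundreds of binding points", taken low
pairings = compounds * genes * binding_sites
print(f"{pairings:.2e} compound/site pairings")   # ~3.9e13, why naive screening is hopeless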



posted on Jul, 3 2017 @ 09:39 AM
a reply to: stormcell

Thanks for the links to the databases. I have other ones, but I have been looking for one to combine chemistries again. It has been a while now since I lost access to a good one I had in the past. Someone gave me a back door to it and I used it for about three years, but when that person graduated from the college, the database didn't work for me anymore. Sucked. I got used to using that site to combine chemistries. I have access to a couple of good chemistry sites I use regularly.



posted on Jul, 8 2017 @ 08:24 PM
a reply to: rickymouse

What exactly do you mean by combine chemistries?



posted on Jul, 8 2017 @ 09:31 PM

originally posted by: hypervalentiodine
a reply to: rickymouse

What exactly do you mean by combine chemistries?


I take certain chemicals listed in foods and combine them with other chemicals listed in other foods to try to determine what chemistries will form when the foods are cooked. I still have my heat-labile database connection... at least I think that one still connects. Then I try to determine what happens when the food hits the highly variable stomach acid and enzymes there.



posted on Jul, 8 2017 @ 11:39 PM
a reply to: rickymouse

I think you mean to say that you are predicting reaction products. I'm not sure I see the point in the context you describe.

To the OP: We already perform a lot of in silico testing, docking potential drug molecules into protein targets, but doing so accurately requires a lot of information about the protein structure (you would ideally want crystal data), how it binds to things, how it acts, etc. Even with all that, it's still no substitute for in vivo or in vitro assays. A potent drug molecule may also bind in a way not predicted by computational analyses, and so may be missed by such a screen. Furthermore, you are relying on databases that cover only a small area of chemical space and follow a set of rules. There are a lot of articles on why this can be prohibitive.

A quick read suggests that all they're really doing is purchasing more computing power to allow for virtual screening of more molecules more efficiently, for a set number of diseases. While useful in that rather limited context, I don't particularly see the novelty in this, nor how it would aid drug discovery any more than HTS did and does. I also have to question the number they use for the average time taken to get a drug to market; 5.5 years is too small. It's closer to 12 years. I can't see a world in which that goes down to one year, at least not in the foreseeable future, if for no other reason than that long-term clinical trials (which come in the later stages of the drug discovery pipeline) can easily take longer than a year. All of the biological testing and optimisation that currently takes place in the drug discovery process would still need to take place (we can't currently predict toxicity in silico, for example), so costs would remain much the same.
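
For anyone curious about the "set of rules" mentioned above: a common pre-filter before any docking is Lipinski's rule of five, which throws out compounds unlikely to be orally bioavailable. Here's a minimal sketch assuming the molecular properties have already been computed elsewhere (a real pipeline would derive them with a cheminformatics toolkit such as RDKit); the compounds listed are made up.

# Minimal Lipinski-style pre-filter: discard compounds unlikely to be orally
# bioavailable before spending compute on docking. Thresholds are the standard
# rule-of-five cut-offs; the library entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    mol_weight: float      # daltons
    logp: float            # octanol-water partition coefficient
    h_bond_donors: int
    h_bond_acceptors: int

def passes_rule_of_five(c: Compound) -> bool:
    return (c.mol_weight <= 500 and c.logp <= 5
            and c.h_bond_donors <= 5 and c.h_bond_acceptors <= 10)

library = [Compound("cpd-001", 342.4, 2.1, 2, 5),
           Compound("cpd-002", 712.9, 6.3, 6, 12)]
screen_queue = [c for c in library if passes_rule_of_five(c)]
print([c.name for c in screen_queue])              # only cpd-001 goes on to docking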



