
Artificial Intelligence Experts Thread


posted on May, 11 2017 @ 08:07 AM
a reply to: mrperplexed

I work with AI for a living. Specifically, my job is to develop & build strong AI for a variety of uses.

I am looking to chat with other ATS'ers who have a background in AI, deep learning, big data, etc. Personally I am interested in hybrid deep learning and video applications that combine static visuals, short-term patterns, and long-term temporal cues; however, this thread is open to all AI topics, ranging from common to esoteric.

What is it you wish to know about these areas? I'm curious - why these specific areas? Is your interest work related or personal?

Big Data is a different topic altogether, although it is a natural repository of information for a good strong AI system. It also benefits greatly from AI systems that can make sense out of all that data...

posted on May, 11 2017 @ 09:35 AM
a reply to: mrperplexed


We try to build AI, but it cannot simply be built. The only way to do it is to grow it, and even then it would be further from our frame of reference than at any other point in history. The more we try to grasp it, the more slippery it becomes!

It must be grown, and then the learning and building can begin!

What we have now is not AI.... the closest thing to AI is us humans!

posted on May, 11 2017 @ 10:23 AM
Since you're into the implications of AI in the visual realm, you might be interested in this link, as well as the links listed there - LINK

All stills were created by feeding footage through a set of different convolutional neural networks that were custom-trained for a few weeks, alongside tons of additional coding.
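As a toy sketch of the per-frame idea (everything here is invented for illustration; a real project would use a framework such as PyTorch with a trained network), each video frame is passed through convolutional filters. A single hand-written 3x3 edge filter applied to one grayscale "frame" looks like this:

```python
def conv2d(frame, kernel):
    """Valid-mode 2D convolution (no padding, stride 1) over a list-of-lists frame."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(frame) - kh + 1):
        row = []
        for j in range(len(frame[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += frame[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A simple Laplacian-style edge-detection kernel (hand-picked, not learned).
EDGE = [[0, -1, 0],
        [-1, 4, -1],
        [0, -1, 0]]

# A tiny 4x4 grayscale "frame" with a bright square in the middle.
frame = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]

edges = conv2d(frame, EDGE)
```

A trained CNN stacks many such filters, with the kernel values learned from data rather than written by hand; the stylized stills come from what those learned filters emphasize.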

posted on Jun, 13 2019 @ 07:24 AM
a reply to: mrperplexed

Why is AI producing chaos? If there were one global AI system that took in all empirical data, it would not work the way the average human thinks. But there is no way to mimic the "average human" with any chip. There is huge diversity between humans, cultures, and eco-environments. So there could instead be many AI systems, each for a specific population, a specific culture, language, occupation, trade, and so on. But that doesn't function either, because many AI systems are connected to one another, and when you connect many different specialized AI systems, it is chaos. We live in a global world where humans are exposed to many different AI systems, so it is chaos. So which is the path: one global AI, or many specialized AI systems?

posted on Jun, 16 2019 @ 07:32 AM
It's an interesting topic. What if the problem-solving skills of AI become spontaneous? If AI can solve simple problems, what's to prevent them from solving more complex problems? I'm not a programmer - just curious about opinions on the true capabilities of AI.

This article in the Atlantic discusses a publication from Facebook AI Research. Perhaps we need to change our perception of AI - it's not your grandma's AI.

An Artificial Intelligence Developed Its Own Non-Human Language
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.

JUN 15, 2017

I retrieved the original paper published by Facebook AI Research which can be found here:

During reinforcement learning, an agent A attempts to improve its parameters from conversations with another agent B. While the other agent B could be a human, in our experiments we used our fixed supervised model that was trained to imitate humans. The second model is fixed as we found that updating the parameters of both agents led to divergence from human language. In effect, agent A learns to improve by simulating conversations with the help of a surrogate forward model.
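The quoted setup can be caricatured in a few lines: agent A updates its policy from simulated conversations while agent B stays frozen, so A cannot drag both agents away from the original human-imitating behaviour. Everything below (the scalar "policy", the reward shape) is invented purely to illustrate the structure:

```python
import random

random.seed(0)

def fixed_b_reward(offer):
    """Frozen agent B: rewards offers near 0.6, standing in for the fixed supervised model."""
    return -(offer - 0.6) ** 2

def train_agent_a(steps=200, lr=0.5):
    offer = 0.0  # agent A's single learnable parameter
    for _ in range(steps):
        candidate = offer + random.uniform(-lr, lr)
        # Keep a perturbation only if the frozen partner rewards it more.
        if fixed_b_reward(candidate) > fixed_b_reward(offer):
            offer = candidate
    return offer

learned = train_agent_a()
```

Because B never changes, A converges toward what B already rewards; the paper's point is that updating both agents at once removes that anchor, and the shared "language" drifts.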

6.3 Intrinsic Evaluation

For development, we measured the perplexity of user-generated utterances, conditioned on the input and previous dialogue. Results are shown in Table 3, and show that the simple LIKELIHOOD model produces the most human-like responses, and the alternative training and decoding strategies cause a divergence from human language. Note however, that this divergence may not necessarily correspond to lower quality language—it may also indicate different strategic decisions about what to say. Results in §6.4 show all models could converse with humans.
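Perplexity, as used in the quoted intrinsic evaluation, is just the exponentiated average negative log-likelihood the model assigns to each human token — lower means the model finds human utterances less surprising. A minimal computation (the probabilities are made up for illustration):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Uniform 4-way guessing gives perplexity 4.0 ...
uniform = perplexity([0.25, 0.25, 0.25, 0.25])
# ... while a model confident in the human tokens scores much lower.
confident = perplexity([0.9, 0.8, 0.95, 0.85])
```

So a model that drifts away from human language assigns lower probability to real human utterances, and its measured perplexity rises — which is exactly the divergence the LIKELIHOOD baseline avoids.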

It appears that the "agents" communicated spontaneously in a language known only to them. It might be science fiction now, but imagine if AIs developed communication skills among themselves, morphing their language so frequently that humans could never understand it.

Any opinions on this research?


posted on Jun, 17 2019 @ 07:23 AM

originally posted by: Phantom423
Any opinions on this research?

It basically shows how languages evolve over time. Humans do the same thing but on a slower scale. English from 1000 years ago would be completely incomprehensible to you today.

All reinforcement learning models do this, it's one of the downfalls of the algorithms.

posted on Jun, 17 2019 @ 07:59 AM

originally posted by: Aazadan

originally posted by: Phantom423
Any opinions on this research?

It basically shows how languages evolve over time. Humans do the same thing but on a slower scale. English from 1000 years ago would be completely incomprehensible to you today.

All reinforcement learning models do this, it's one of the downfalls of the algorithms.

That makes sense. However, it appears that the AIs communicated in a language known only to them. The research paper didn't provide much detail about the AI language itself, probably because the main topic under investigation was "negotiation" between the AIs. So I'm just wondering if there are other similar events where AIs spontaneously created their own language to solve a problem. Thanks for the reply.

posted on Jun, 17 2019 @ 06:17 PM
a reply to: Phantom423

It happens all the time, and it's not really spontaneous either - rather, it comes from two AIs each giving feedback to the other.

When both can be modified, it leads to weird results. Usually one needs to be randomized so that you can avoid local minima/maxima (basically, a local peak in results where any further change looks like a step backward, even if it would later lead to a bigger gain). By using randomization you can sample data points that wouldn't normally be reached and determine whether an approach could work, showing a current solution to be optimal or non-optimal.
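The local-peak point can be sketched concretely: plain hill climbing stalls on a local peak, while restarting from several points and keeping the best result escapes it. The bumpy objective below is invented for the illustration:

```python
import random

random.seed(1)

def score(x):
    # Invented objective: local peak at x=1 (height 1),
    # global peak at x=4 (height 2), flat valley in between.
    return max(1 - (x - 1) ** 2, 2 - (x - 4) ** 2, 0)

def hill_climb(x, steps=300, step=0.1):
    """Greedy local search: accept a random neighbour only if it scores higher."""
    for _ in range(steps):
        candidate = x + random.choice([-step, step])
        if score(candidate) > score(x):
            x = candidate
    return x

# Starting on the local peak, greedy search never leaves it ...
stuck = hill_climb(1.0)
# ... but restarting from spread-out points and keeping the best escapes it.
starts = [0.5, 1.5, 2.5, 3.5, 4.5]
best = max((hill_climb(s) for s in starts), key=score)
```

Every improving move is accepted and every worsening move rejected, so from x=1 the search is frozen; the restarts are the "randomization" that exposes the bigger peak.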

This requires that the definition of a non-optimal result stay fixed, though. When it changes, both AIs, instead of rating against an objective metric, begin to modify themselves to get more positive feedback from each other. Introduce randomization and those metrics change. Once that happens, they start to develop their own language. Do this millions or billions of times, and things look quite different.

Think of it like a game of telephone where each time you tell the person next in line a word they want to hear, they reward you. The message you're conveying gets lost in favor of getting the reward.
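The telephone-game analogy can be made runnable: each speaker is rewarded by the next listener rather than for preserving the message, so the content drifts toward whatever earns reward. All words and listener preferences below are invented:

```python
VOCAB = ["ball", "hat", "deal", "zero", "one"]

def prefer(word):
    """A listener who rewards only its own favourite word."""
    return lambda heard: 1.0 if heard == word else 0.0

def telephone(message, listeners):
    for listener in listeners:
        # The speaker tries every word and keeps whichever this listener
        # rewards most; fidelity to the original message plays no role.
        message = max(VOCAB, key=listener)
    return message

final = telephone("ball", [prefer("hat"), prefer("deal"), prefer("zero")])
```

The original word "ball" is gone by the first hop; only the last reward signal survives, which is the lost-message effect the analogy describes.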
