Am I reading too much into these abstracts?


posted on Feb, 24 2011 @ 05:26 PM
While bored and digging around on Information Bridge a bit at random, I came across a couple of interesting studies. The abstract of the first one I found is as follows:

In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.

Sounds to me like they're studying how to hide artificially intelligent entities and make them unidentifiable.

A couple of phrases in that caught my eye: "deterministic finite automata" and "Turing machines"

Searching the same source for the former phrase yields a document with this abstract:

The aim of this paper is to show how certain diverse and advanced techniques of information processing and system theory might be integrated into a model of an intelligent, complex entity capable of materially enhancing an advanced information management system. To this end, we first examine the notion of intelligence and ask whether a semblance thereof can arise in a system consisting of ensembles of finite-state automata. Our goal is to find a functional model of intelligence in an information-management setting that can be used as a tool. The purpose of this tool is to allow us to create systems of increasing complexity and utility, eventually reaching the goal of an intelligent information management system that provides and anticipates needed data and information. We base our attempt on the ideas of general system theory where the four topics of system identification, modeling, optimization, and control provide the theoretical framework for constructing a complex system that will be capable of interacting with complex systems in the real world. These four key topics are discussed within the purview of cellular automata, neural networks, and evolutionary programming. This is a report of ongoing work, and not yet a success story of a synthetic intelligent system.

This latter study was completed in 1993 and we all know how fast technology advances.

posted on Feb, 24 2011 @ 05:32 PM
The first abstract is (if I understand it) talking about how to protect software from being hacked or copied by compiling the normal code into something that is unintelligible but still works in the same way.
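To make that concrete, here is a toy illustration in Python (my own example, nothing like the paper's actual cryptographic construction): two functions with identical input/output behavior, one readable and one deliberately opaque.

```python
# Readable version.
def is_even(n):
    return n % 2 == 0

# "Obfuscated" version: same behavior for every input, but the logic
# is hidden behind pointless bit tricks (x ^ y ^ y is just x).
def f(x):
    return ((x ^ (x >> 1)) ^ (x >> 1)) & 1 == 0

print(f(10) == is_even(10))  # → True
```

Real obfuscation schemes like the one in the paper aim for a provable "virtual black box" guarantee, not mere unreadability.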

In fact, it says as much in the paper:

Program obfuscation, if possible and practical, would have a considerable impact on the way we protect software systems today. It would be instrumental in protecting intellectual property, preventing software piracy, and managing use-control applications.

How it is achieved, though, is WAY too mathematical for me to give you any hints.

The full paper in PDF form is here

posted on Feb, 24 2011 @ 05:33 PM
I don't know the answer to your question, but I do understand that those passages contain many pseudoscientific buzzwords, and both were written in the hoodoo-voodoo vein common to writing submitted for grant funding.

posted on Feb, 24 2011 @ 05:46 PM
With terminology thrown around like that, I'd assume that there is some obfuscated meaning.

posted on Feb, 24 2011 @ 11:08 PM
reply to post by jadedANDcynical

the cynics have spoken (in the responses thus far), but they are wrong.

stephen wolfram, who is currently working on the wolfram alpha project, popularized the study of "cellular automata". in his book, "a new kind of science", he details how a simple color-change rule applied to a single starting cell can produce structures in the data set which appear to perform certain intelligent tasks.


aside from being beautiful, the most striking aspect of these cellular automata is that they appear, by all accounts, completely random. it is not possible to guess how any of the structures which emerge from the data set are going to interact with each other. the automaton unfolds in the present moment in very much the same way as we commonly associate with conscious intelligence.
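to make this concrete, here is a tiny elementary cellular automaton in python (a toy sketch of my own; rule 30 is one of the rules wolfram studies in the book). each cell's new color depends only on itself and its two neighbors:

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton (default: Rule 30).

    Each neighborhood (left, center, right) is turned into a 3-bit index,
    and the matching bit of the rule number gives the cell's next state.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Single live cell in the middle, evolved for a few generations.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

despite the rule being completely deterministic, the center column of rule 30 looks statistically random — which is exactly the "apparently random" behavior described above.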

so, that being said: the first paper is describing how a change in one of the initial starting rules of a cellular automaton can be used to either obscure or reveal what lies within the data set. in some way, they have enabled the emergent qualities of the cellular automaton as a technique of encryption.

the second paper is describing how apparently linear streams of data can spontaneously form "chunks" of independently manipulable forms, similar to the emergent forms in the cellular automata. the information system can then work from the level of the chunks, rather than with the raw data itself. this is, again, a function which intelligent systems seem to possess.

it may be difficult to believe, but we really are at the edge of the precipice. this is NOT "hoodoo".

p.s. skeptics please suck on THIS. (pimp my thread.)

posted on Feb, 24 2011 @ 11:10 PM
reply to post by chr0naut

"...terminology thrown around like that..." ?!?!?!? seriously. have you ever read a science paper?

posted on Feb, 25 2011 @ 06:51 AM
reply to post by tgidkp

Getting shades of the Michael Crichton book "Prey".

I sure hope those ivory tower eggheads include rules that prevent these things from harming people. But then again, I suppose that would negate the sentient aspect of the underlying program.


posted on Feb, 25 2011 @ 02:58 PM

Originally posted by jadedANDcynical

A couple of phrases in that caught my eye: "deterministic finite automata" and "Turing machines"

Both these terms are referring to mathematical machines. They are abstract machines that are used in computational theory to carry out work.

A "deterministic finite automaton" is just a fancy computer science term. It means a bit of programming that reads input symbols one at a time and moves from its current state to a new state drawn from a predetermined, finite list of possible states.

It can be as simple as something like recognizing a button press and toggling a light. The button produces an input signal, and the programming knows which state to switch to when it gets that signal.
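That button-and-light example can be sketched as a DFA in a few lines of Python (state names and the "press" symbol are my own illustrative choices):

```python
# A DFA is just states plus a transition table: (state, input) -> new state.
TRANSITIONS = {
    ("off", "press"): "on",
    ("on", "press"): "off",
}

def run_dfa(start, inputs):
    """Feed the input symbols through the machine, one at a time."""
    state = start
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run_dfa("off", ["press", "press", "press"]))  # → on
```

That's the whole machine: no memory beyond the current state, and a fixed, finite set of states.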

A "Turing machine" is another computer science term. A Turing machine is envisioned as a strip of tape with symbols on it. The machine reads these symbols and manipulates them based on a set of rules. It could be as simple as taking a series of 0s and turning them into 1s.
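The 0s-to-1s example above can be sketched in Python too (a toy, single-pass machine of my own; a real Turing machine can also move left and use many internal states):

```python
def turn_zeros_to_ones(tape):
    """A tiny Turing-machine-style sketch: scan right, rewriting 0 -> 1,
    halting on the blank symbol '_' that marks the end of the tape."""
    tape = list(tape)
    head = 0
    while tape[head] != "_":   # halt when the head reads a blank
        if tape[head] == "0":
            tape[head] = "1"   # rewrite the symbol under the head
        head += 1              # move the head one cell to the right
    return "".join(tape)

print(turn_zeros_to_ones("000_"))  # → 111_
```

The tape, the head, and the rewrite-then-move rule are what make this "Turing-machine-like": all the work happens through reading and writing single symbols.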
