
Bitcoinus Maximus: a computer science experiment

posted on Apr, 22 2014 @ 05:25 PM
Bitcoin is three things:
Bitcoin is the network of computer nodes that secures the system and shares transaction records called blocks.
bitcoins are the property that can be transferred between users over the node network; each one is an entry in a ledger.
Bitcoin is a triple-entry ledger system for the secure transfer of ownership of digital property; every node has every transaction recorded.

why is it an experiment?

In computer science there are a number of innovations that were thought to be too difficult to achieve.
One computer science problem that has been very hard to solve is "network consensus".
Network consensus across a large number of nodes, with agreement on a large number of parameters, is described as very difficult:
getting a large group of computers to "agree" on a large number of parameters in an environment where some of those computers
could be "rogue" was thought to be near impossible.

These problems are known as "Byzantine fault tolerance" and "the Byzantine generals problem".


Byzantine fault tolerance is a sub-field of fault tolerance research inspired by the Byzantine Generals' Problem,[1] which is a generalized version of the Two Generals' Problem.

The objective of Byzantine fault tolerance is to be able to defend against Byzantine failures, in which components of a system fail in arbitrary ways (i.e., not just by stopping or crashing but by processing requests incorrectly, corrupting their local state, and/or producing incorrect or inconsistent outputs). Correctly functioning components of a Byzantine fault tolerant system will be able to correctly provide the system's service assuming there are not too many Byzantine faulty components.


en.wikipedia.org...

This problem is about ongoing and continuing consensus in the face of failures, whether loss of connection, corruption of data, or incorrect or malicious data.

Byzantine generals problem

This problem is about agreeing on a time at which something was "seen" across the network of nodes. Put another way:
it is the problem that occurs when information needs to propagate across a network and more than one node can claim that some data has been seen. If two or more competing nodes propagate a time-stamped transaction or piece of data across the network, how do you know which one is the correct one to include, without conflicting with the others in a way that could "break" the consensus of the network?

The solutions to Byzantine fault tolerance and the Byzantine generals problem are similar but discrete solutions that leverage off each other, and both are required to make a failure-resistant consensus network that can agree on:
what happened,
when it happened,
so that all nodes agree on exactly when that thing happened, even if your node was not there to see it,
and can do so even when some nodes are "untrusted" and/or malicious.

Solving the two discrete Byzantine problems with one mechanism addresses both at once: the network gets the consensus it requires, plus an agreed time stamp and ordering of transactions that every node can record, even in the face of propagation delays.

the solution is the block chain

The block chain is the triple-entry ledger part of the network. Its time fault tolerance, anti-data-corruption and network consensus properties derive from what amounts to a "race condition":
all the nodes on the network "race" each other to perform a complex calculation that takes a "known" amount of time, or rather requires a "known" amount of compute power, to solve.
This gives each node a "random" probability of finding the solution first, and the "problem" is hard to compute initially but easy to "check" once the answer has been broadcast back to the rest of the network.

When a node solves the computation problem and wins the "race", that node is allowed to produce a "block" of compressed data. The network can verify "when" it happened, i.e. at what time the race condition was satisfied, and that any data "included" in the block has "already" been seen by the whole network.
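To make the "hard to find, easy to check" idea concrete, here is a rough Python sketch (my own toy illustration, not the actual bitcoin code, which double-hashes a structured block header):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # this node "won the race"
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Anyone can check the winner's claim with a single hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("some transactions", difficulty=4)   # slow: many attempts
print(verify("some transactions", nonce, 4))      # fast: one hash -> True
```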

adding blocks to the block chain

This is the fault tolerance part of the solution.
Once a block has consensus from the entire network it is added to the "blockchain":
an interdependent chain of previous blocks, solved by "random" nodes and added to over time by whichever node solved the last "block". Each new block must contain a hash of the previous block, hashed into the new block,
so that every block's hash commits to all of the consensus data of the current block, including the time data and the entire hash of the previous block.

This "chains" each new block to the last block in a way that cannot be "forged", because each new hash is determined by the contents of the current block "plus" the hash of the block before it.


A hash function is any algorithm that maps data of arbitrary length to data of a fixed length. The values returned by a hash function are called hash values, hash codes, hash sums, checksums or simply hashes. Recent development of internet payment networks also uses a form of 'hashing' for checksums, and has brought additional attention to the term.


en.wikipedia.org...
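A quick illustration of that definition using Python's standard hashlib: whatever the input length, the digest length is fixed.

```python
import hashlib

# inputs of wildly different lengths...
for data in [b"a", b"hello world", b"x" * 1_000_000]:
    digest = hashlib.sha256(data).hexdigest()
    # ...always map to a fixed-length 256-bit (64 hex character) value
    print(len(data), digest)
```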

how does this "chaining" of "blocks" provide fault tolerance?

Every node on the network keeps a ledger of all past blocks and any new block added. This is where the fault tolerance comes from:
every node must "verify" the solution to the race condition, verify that the data to be added in the new block has been seen, that the block contains the hash of the previous block, and that it happened at a specific time. This allows distributed consensus on all the required parameters, because to make a new block you must reference the previous block, and the previous block must already have consensus to be included.
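Here is a small Python sketch of that verification idea (a toy model I wrote for illustration; real blocks hash a structured header, not JSON): tamper with any old block and every later link breaks.

```python
import hashlib, json, time

def make_block(prev_hash: str, transactions: list) -> dict:
    """Each new block commits to the hash of the block before it."""
    block = {"prev_hash": prev_hash, "time": time.time(), "tx": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering with any block breaks the links after it."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, ["genesis"])]
chain.append(make_block(chain[-1]["hash"], ["alice pays bob 1 BTC"]))
print(verify_chain(chain))           # True
chain[0]["tx"] = ["forged history"]
print(verify_chain(chain))           # False: the forgery is detected
```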

solution to the double spend problem

In the digital realm it is easy to copy digital data, and it costs next to nothing to do so. So how does one solve the double spending problem?
The answer is actually already explained in the "triple entry ledger" that is the block chain.
When you go to send some bitcoins to someone, the network looks back through the block chain and follows the "chain of ownership" of the coins you are about to spend, checking for an entry in the ledger that corresponds to you receiving those coins. Because everyone has a copy of the block chain, they can all "look" for the chain of ownership back through time to see whether you really do have the coins you are trying to send or spend.
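A toy Python sketch of that ledger walk (the names and structure here are my own, for illustration only): spending requires an unspent entry showing you received the coin, and a second spend of the same coin is rejected.

```python
# toy shared ledger: every node holds the same entries
ledger = [{"coin": "coinA", "to": "alice", "spent": False}]

def can_spend(owner: str, coin: str) -> bool:
    """Look back through the ledger for an unspent entry
    showing `owner` actually received this coin."""
    return any(e["coin"] == coin and e["to"] == owner and not e["spent"]
               for e in ledger)

def spend(sender: str, receiver: str, coin: str) -> None:
    if not can_spend(sender, coin):
        raise ValueError("no unspent entry found: double spend rejected")
    for e in ledger:
        if e["coin"] == coin and e["to"] == sender and not e["spent"]:
            e["spent"] = True            # mark the old entry as consumed
            break
    ledger.append({"coin": coin, "to": receiver, "spent": False})

spend("alice", "bob", "coinA")           # fine: the ledger shows alice owns it
try:
    spend("alice", "carol", "coinA")     # second spend of the same coin
except ValueError as err:
    print(err)                           # double spend rejected
```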

solution to the address problem

If I want to spend bitcoins, how does my computer, or the entire network, "know" that I am the rightful owner of the coins?
The solution is a unique "address": a series of numbers and letters (excluding the easily confused characters 0, O, I and l) that can be generated by a two-step "hashing" function that is easy to compute but practically impossible to reverse.
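Roughly, the two hashing steps are SHA-256 followed by RIPEMD-160, with the result rendered in the Base58 alphabet that drops the confusable characters. A simplified Python sketch (real addresses also add a version byte and a checksum, and ripemd160 support depends on your local OpenSSL build):

```python
import hashlib

# Base58 alphabet: no 0 (zero), O, I or l, so an address can't be misread
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return out

def toy_address(public_key: bytes) -> str:
    """Two hashing steps: SHA-256, then RIPEMD-160 over the result.
    (Real addresses also prepend a version byte and append a checksum.)"""
    step1 = hashlib.sha256(public_key).digest()
    step2 = hashlib.new("ripemd160", step1).digest()  # needs OpenSSL ripemd160
    return base58(step2)

print(toy_address(b"an example public key"))
```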

This experiment has shown that consensus can be reached across many computers, over a given period of time, without a central authority:
a problem once thought impossible.

This tech could be used for many other applications, voting for example.

xploder



posted on Apr, 22 2014 @ 05:31 PM
For anyone who wants a high-level overview of the bitcoin network, here is a video I watched before writing this OP:



I see bitcoin as an experiment in computer science,
one that solves interesting computer science problems in a novel way.

For a simpler explanation, the following will give you a good idea of how it works:




xploder



posted on Apr, 22 2014 @ 05:47 PM
I am interested in how you would use this method for voting. I am no computer scientist, but from the voter side of the equation, would there not need to be a very secure database of all eligible voters that could not be gamed? Let's say that each voter would only get one vote via this methodology, which could be done, but what is to stop people from manufacturing fraudulent voters, using dead voters, etc.? The current state databases are most certainly inaccurate.



posted on Apr, 22 2014 @ 06:03 PM

originally posted by: ScientiaFortisDefendit
I am interested in how you would use this method for voting. I am no computer scientist, but from the voter side of the equation, would there not need to be a very secure database of all eligible voters that could not be gamed? Let's say that each voter would only get one vote via this methodology, which could be done, but what is to stop people from manufacturing fraudulent voters, using dead voters, etc.? The current state databases are most certainly inaccurate.


There are already people working on this exact use of the underlying technology.
While digital consensus can be reached, physical consensus, i.e. making sure in the physical world that each person only gets one coin, is another problem. I would suggest that the same mechanisms would be used for voting,
except that each person, when they vote, has a unique "token", and that token can only be voted with once.

How you ensure that each person only gets one token to vote with is a non-trivial problem, one that can perhaps only be verified by including biometric data in the message space provided with each coin.

Personally I don't like handing out biometric data, for any reason, to anyone,
but this has been stated as one possible solution.

i.e. facial recognition, fingerprint, palm vein scan etc.

The problem of physical impersonation of a person is not an easy one to solve,
but the "auditable" results of the digital vote can have network consensus.
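Purely as a hypothetical sketch of the "one token, one vote" idea (every name here is invented; issuing tokens fairly is exactly the unsolved physical-world problem):

```python
import secrets

issued = set()    # tokens handed out, one per verified voter (the hard part)
votes = {}        # token -> choice: the auditable, shared ledger

def issue_token() -> str:
    """Hypothetical registrar step; proving one-token-per-person in the
    physical world is the unsolved problem (biometrics is one proposal)."""
    token = secrets.token_hex(16)
    issued.add(token)
    return token

def cast_vote(token: str, choice: str) -> None:
    if token not in issued:
        raise ValueError("unknown token")
    if token in votes:
        raise ValueError("token has already voted")   # one token, one vote
    votes[token] = choice

t = issue_token()
cast_vote(t, "option A")
try:
    cast_vote(t, "option B")
except ValueError as err:
    print(err)   # token has already voted
```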

So we are a way off having a practical solution just yet.

xploder



posted on Apr, 22 2014 @ 06:55 PM
a reply to: ScientiaFortisDefendit

Found one group that is already using block chain tech to vote:

Liberal Alliance holds e-voting with bitcoin technology

www.version2.dk...

requires translating

xploder



posted on Apr, 23 2014 @ 05:26 AM
With all the electricity and processing already going on, I do question just how valid this mathematical lottery process is for dealing with the problems of transaction pools. The idea of a transaction chain sounds good, but setting random functions just to burn up processing power looks quite wasteful of processing resources.



posted on Apr, 23 2014 @ 01:57 PM

originally posted by: kwakakev
With all the electricity and processing already going on, I do question just how valid this mathematical lottery process is for dealing with the problems of transaction pools. The idea of a transaction chain sounds good, but setting random functions just to burn up processing power looks quite wasteful of processing resources.



Because profitability for miners is a function of hashes per second AND power consumption,
the chip fabricators are pushing the boundaries of processing power AND making ever more power-efficient designs
with every new iteration of ASIC (application-specific integrated circuit).

If you look at the evolution of the specific hardware, from CPU to GPU to FPGA (field-programmable gate array) to ASIC,
you will see a move from GENERAL-purpose computing to solve the hashes (wasteful)
to specialised chip fabrications that do nothing else but solve hashes (more efficient).
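As a back-of-envelope sketch of why efficiency matters as much as speed (every figure below is invented, purely for illustration):

```python
# all figures hypothetical, for illustration only
hash_rate_ths = 1.0          # miner speed in TH/s
watts = 1000.0               # power the hardware draws
power_price = 0.15           # dollars per kWh
revenue_per_ths_day = 0.10   # dollars earned per TH/s per day (varies constantly)

revenue = hash_rate_ths * revenue_per_ths_day
electricity = (watts / 1000) * 24 * power_price    # kWh per day times price
print(f"daily profit: ${revenue - electricity:.2f}")
# halving the watts per TH/s halves the electricity line, which is why
# each ASIC generation chases efficiency as hard as raw speed
```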

My prediction is that ASIC processing power will follow Moore's law,
and the power consumption per terahash will decrease by half every four years or so.

We are about to move into the age of super-low-power processors; advances in "photonic" chips and "on-chip" power supplies will decrease power consumption in large jumps.

While I agree that 62,000,000 GH/s is expensive in terms of power, it does provide security for the system,
and it provides an economic incentive for the chip fabricators to design ever more power-saving chips.

If you know of a method other than computational work (proof of work) to produce an equal probability (a random distribution) across nodes that also allows for a time-correlated consensus, I would love to hear it.



ps good to hear from ya

xploder



posted on Apr, 23 2014 @ 02:42 PM
a reply to: kwakakev


I do question just how valid this mathematical lottery process is for dealing with the problems of transaction pools


While the "perfect" solution is always sought,
sometimes, when solving multiple problems, a "compromise" between practicality and functionality is required.

While the lottery is a crude mechanism, it is effective:
it provides competition between pools,
and the community has an economic incentive both to cooperate and to compete.
Out of simple order comes complex behaviour.

it will be interesting to see how the pooling problem is resolved over time.



xploder



posted on Apr, 26 2014 @ 05:22 AM
a reply to: XPLodER

To quickly recap and confirm: a block chain is what is being mined in the bitcoin process; it holds and encrypts the currency unit. A transaction chain is what is being designed and implemented to help patch the current vulnerabilities when performing currency trades.

If this is right then what encryption / redundancy / double checking options exist for the transaction chain? There is a chance here to take advantage of a lot of processing and strengthen the system. There does appear to be a random feature from the bitcoin mining process; could this lottery turn into some kind of transaction coin for the bitcoin?

I can handle PHP but there is a lot I do not know about distributed computing and the challenges of it. The future of computing does look like it is still in start-up stages; it kinda does make you wonder just how much of a difference optimizing CPU distribution on the die can make. How much longer until not just having solar cells on your house, but millions of CPUs as well? To these ends you will need solid code, not just something that you hope to out-process with.

With the bitcoin limit being fixed it is important to not have any long term solutions based directly on their mining. With what is happening in the stock market and micro trades there does look to be a growing load with bitcoin and similar systems. How will the currency go when there are millions of transactions every second?

As for the pool, one option is to hand out tickets as they walk in; then the random function becomes simple again. But this does bring along a centralization issue. Do you know what part of the current system is responsible for setting the random puzzle? Is this part also responsible for reading the results and assigning a transaction chain?

It is good to chat to you again too.



posted on Apr, 26 2014 @ 06:16 PM

originally posted by: kwakakev
a reply to: XPLodER

To quickly recap and confirm, a block chain is what is being mined in the bitcoin process,

Mining is a three-step process. Step one is to "process" actual transactions, verifying their validity, into a pool of "pending" transactions. Step two is to collapse the pending transactions into a merkle tree, to conform to the size limitation of each "block" of transactions.
Step three is the proof of work (POW): a hash is made of the following components,
the hash of the previous block combined with the compressed transaction data and a nonce (a number).
When a hash output has a certain number of zeros preceding it (the difficulty), the POW is said to be solved, and the new compressed block is published and connected onto the chain of blocks. This is redone so that each POW takes approximately ten minutes, and each redo adds a confirmation of published transactions.
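A minimal Python sketch of step two (this mirrors bitcoin's approach of double SHA-256 with the last hash duplicated on odd counts, but it is my illustration, not the real client code):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin hashes everything twice with SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list) -> bytes:
    """Pair up hashes level by level until one root remains."""
    level = tx_hashes
    while len(level) > 1:
        if len(level) % 2:               # odd count: duplicate the last hash
            level = level + [level[-1]]
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256d(t.encode()) for t in ["tx-a", "tx-b", "tx-c"]]
print(merkle_root(txs).hex())   # one fixed-size commitment to all transactions
```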



If this is right then what encryption / redundancy / double checking options exist for the transaction chain?

As in sudoku, it is difficult to solve but easy to verify that a solution is correct. Every single node that has a copy of the block chain independently verifies that each block A) contains transactions that have been "seen" by that node, B) includes a hash of the previous block, and C) is attached to the longest "chain" of blocks (two blocks solved at the same time can contain different transactions, but the next block solved "after" will make one chain "longer", and the network will use its hash as the starting point for the next block). This means that after 6 blocks have been confirmed onto the block chain, it would take more processing power than all the rest of the network to "alter" the hashes of all 6 blocks and publish an alternate transaction pool,
i.e. a 51% attack.


There is a chance here to take advantage of a lot of processing and strengthen the system.

Because of the random distribution function of the POW, even with 51% of the network hash rate
you would not be guaranteed to succeed with an attack; it just increases the probability of success.
You would still have to win the POW puzzle a number of times in a row.
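The original bitcoin paper works this out as a gambler's ruin problem: an attacker controlling a fraction q of the hash power ever catches up from z blocks behind with probability (q/p)^z, which dies off exponentially while q stays below half. A quick sketch:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability an attacker with hash-power share q ever overtakes
    the honest chain from z blocks behind (from the bitcoin whitepaper)."""
    p = 1.0 - q                       # honest share of the hash power
    return 1.0 if q >= p else (q / p) ** z

for q in (0.10, 0.30, 0.45):
    print(q, [round(catch_up_probability(q, z), 6) for z in range(1, 7)])
# at q = 0.10, six confirmations leave roughly a 2-in-a-million chance
```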


There does appear to be a random feature from the bitcoin mining process; could this lottery turn into some kind of transaction coin for the bitcoin?

When a miner (processor) wins the POW race they are awarded newly minted bitcoin; this, along with transaction fees, is the economic incentive to process transactions for the network.


I can handle PHP but there is a lot I do not know about distributed computing and the challenges of it. The future of computing does look like it is still in start-up stages; it kinda does make you wonder just how much of a difference optimizing CPU distribution on the die can make.

Because of the nature of the specific calculations the POW algorithm requires, the architecture of the chip can be radically tailored for this operation exclusively. There is currently no point trying to compete in mining unless you have specialised ASICs.



How much longer until not just having solar cells on your house, but millions of CPUs as well? To these ends you will need solid code, not just something that you hope to out-process with.

Because of the money involved, many smart people are looking at optimization of the code, and many are also looking into the green energy this tech requires.



With the bitcoin limit being fixed it is important to not have any long term solutions based directly on their mining. With what is happening in the stock market and micro trades there does look to be a growing load with bitcoin and similar systems.

Micro transactions for disaster relief are many factors more efficient with bitcoin.


How will the currency go when there are millions of transactions every second?

I have been looking at rolling algorithms to solve the problem, but there are many people looking into this area.
Transaction propagation bottlenecks can be solved by new transmission control protocols that are faster by a large factor,
and by block chain pruning algorithms. The real answer is that the open source community will solve these problems before they become real limits to wide adoption.

Remember, this is a computer science experiment; I expect that serious academics will contribute to solving the problems of scalability.


As for the pool, one option is to hand out tickets as they walk in; then the random function becomes simple again. But this does bring along a centralization issue.

centralisation always brings more problems than it solves


Do you know what part of the current system is responsible for setting the random puzzle?

It is "hard coded" into the original code. The one self-adjusting parameter (the difficulty) requires that the first n digits of a "solved" hash equal zero, i.e. x number of places at the front of the hash (more zeros = more difficulty).
This is achieved by selecting different nonces until the hash output looks like this:
00000000000000008234e952fffb8c25b33ea5fff527e53d8e505c148dd5669c
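A sketch of that check (a simplification; the real client compares the hash against a full 256-bit numeric target, but counting leading zeros captures the idea):

```python
def meets_difficulty(block_hash: str, zeros: int) -> bool:
    """More required leading zeros = smaller target = harder puzzle."""
    return block_hash.startswith("0" * zeros)

h = "00000000000000008234e952fffb8c25b33ea5fff527e53d8e505c148dd5669c"
print(meets_difficulty(h, 16))   # True: this hash has 16 leading zero digits
print(meets_difficulty(h, 20))   # False: would need a harder nonce search
```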



Is this part also responsible for reading the results and assigning a transaction chain?
It is good to chat to you again too.


The first node to find a hash with the "correct" information encoded in it, and with the required number of zeros (the difficulty), is the node that gets to publish a new block for the rest of the network to verify and "see", creating a "consensus".
This "consensus" is time stamped to correlate "time" across a distributed network, and it becomes the starting point for the next POW, where every node starts the process again.

I hope my explanation is technically correct; it is a complicated subject with many facets.

xploder




