posted on Dec, 26 2010 @ 07:39 PM
It can be done, securely, but I would question their method.
One could make a single backup file, encrypt it with 256-bit encryption, chop it into little pieces, then sprinkle the pieces across many different places to make the file secure as hell.
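Something like this is all it would take on the sending side (a minimal sketch in Python; the cryptography package, the chunk size, and the file name are my own assumptions for illustration, not anything from a particular client):

    # Sketch: encrypt a backup with AES-256, then chop the ciphertext into pieces
    # to scatter across peers. Chunk size and file name are purely illustrative.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    CHUNK_SIZE = 256 * 1024  # 256 KiB per piece

    def encrypt_and_chop(path, key):
        """Encrypt the whole backup file, then split the ciphertext into pieces."""
        aes = AESGCM(key)                      # key is 32 bytes = 256 bits
        nonce = os.urandom(12)
        with open(path, "rb") as f:
            ciphertext = nonce + aes.encrypt(nonce, f.read(), None)
        return [ciphertext[i:i + CHUNK_SIZE]
                for i in range(0, len(ciphertext), CHUNK_SIZE)]

    key = AESGCM.generate_key(bit_length=256)  # stays on your own box, never leaves it
    pieces = encrypt_and_chop("backup.tar", key)
    # 'pieces' can now be sprinkled across many peers; without the key they are just noise.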
p2p would work great for this. That is one of my first targets for my modified version of transmission-gtk: to let server administrators back up their data into the cloud and to allow co-operative push seeding of single pieces between user groups and the cloud as a whole. The 256-bit encryption is secure enough that it would take a computer millions of years to break. The key keeps the data readable only by your system, and any way someone could get your key without you intentionally giving it up would likely mean they already have root access to your server anyway.
HENCE IT IS SECURE: not one piece of it is sent unencrypted, and the whole thing stays locked down.
Do any of you people have degrees in computer science? About the worst flaw I could picture is a weak random number generator, but I honestly don't think that's an issue anymore. Even if the numbers are not perfectly random, they are random enough to keep people out. Even 20,000 years to break into it would be a stretch, and that is on top of the time it would take to break the SSH encryption the packets themselves travel through, so eavesdropping is drastically less likely than an already unlikely crack of the encryption itself.
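Back-of-envelope, assuming an attacker who can somehow test a trillion keys per second (a round number I'm making up to be generous):

    # Rough brute-force arithmetic for a 256-bit key space.
    keys = 2 ** 256
    guesses_per_second = 1e12          # assumed, generously
    seconds_per_year = 60 * 60 * 24 * 365
    years = keys / guesses_per_second / seconds_per_year
    print(f"{years:.2e} years to sweep the key space")   # ~3.7e+57 years

So "millions of years" is, if anything, a massive understatement, even before you count the SSH layer.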
If I can convince server admins and average users to do this, and can create a recovery utility one can load onto a USB thumb drive, then that'd be just the ticket. I would trust it to back up important things (and it is definitely something I am going to write into the software). Personally I think it is a great idea; I just don't think I would trust anybody in a proprietary corporate environment -- such a thing should be purely open source.
I suppose that opens the door to people messing with the network, so there simply has to be a means to perform random integrity polling.
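The polling itself can stay simple (a sketch; fetch_proof_from_peer is a placeholder for whatever call the client would actually expose): pick a random piece, hand the peer a fresh nonce, and check that the hash it sends back matches what you compute from your own copy.

    # Sketch of random integrity polling: a peer proves it still holds a piece
    # by hashing that piece together with a fresh random challenge.
    import hashlib, os

    def expected_proof(piece_data, challenge):
        return hashlib.sha256(challenge + piece_data).hexdigest()

    def poll_peer(peer, piece_index, local_pieces, fetch_proof_from_peer):
        challenge = os.urandom(16)                       # fresh nonce per poll
        claimed = fetch_proof_from_peer(peer, piece_index, challenge)
        return claimed == expected_proof(local_pieces[piece_index], challenge)

    # Run poll_peer() on random (peer, piece) pairs at random intervals and
    # re-replicate anything held by peers that fail the check.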
Either way -- I wholly stand behind the idea. If this is paired with distributed computing of some sort, the cloud could act as a server and share the processing load of server-side objects, and the site itself would merely have to run a database service; that would suffice. There are ways to keep a distributed database across geographically disparate regions; it's been done before, so this can be made. In fact, if it's made right it could encapsulate all web traffic and provide greater anonymity in the end: once the tunneling and proxying capabilities are included in the deal, it will allow complete obfuscation of source and destination by using a series of agents to relay data and requests. The distributed nature of p2p makes this possible through careful management of how the hosted online data is distributed across a virtual net-drive.
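The "series of agents" part works by wrapping each request in one layer of encryption per relay, so any single agent only ever learns the next hop. A rough sketch, reusing 256-bit AES as above; the relay addresses and per-relay keys here are hypothetical:

    # Sketch: layered wrapping so each relay only sees the next hop, not the
    # source or final destination. Relay addresses and keys are placeholders.
    import os, json
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_layers(payload, route):
        """route is a list of (next_hop_address, relay_key), ordered exit-first."""
        blob = payload
        for next_hop, key in route:
            nonce = os.urandom(12)
            inner = json.dumps({"next": next_hop, "data": blob.hex()}).encode()
            blob = nonce + AESGCM(key).encrypt(nonce, inner, None)
        return blob  # each relay peels one layer and forwards the rest to "next"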
All of this stuff can be bundled in. A suitable replacement for DNS could be made, using UTF-8 format addresses instead of the typical TLD arrangements: your site is just a folder on the netdrive, so no more DNS is needed. If prisonplanet.ca is taken, try prisonplanet.canada, or prisonplanet.can, or prisonplanetca, or prisonplanetcanada. There may be a partial need for user accounts and for identifying non-critical information from your peer, but the only information really needed is tracking data for specific pieces. Such data shouldn't be stored, and eventually making the tracker itself a function of the cloud as a whole would be perfect.
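Resolving those free-form names is just a lookup in the same cloud that stores the pieces (a sketch; dht_get stands in for whatever distributed lookup the network ends up exposing):

    # Sketch: resolve a free-form UTF-8 name to a netdrive folder without DNS.
    import hashlib

    def name_to_key(name):
        # Any UTF-8 string works as an address: "prisonplanet.canada", "prisonplanetca", ...
        return hashlib.sha1(name.encode("utf-8")).hexdigest()

    def resolve(name, dht_get):
        return dht_get(name_to_key(name))   # folder record, or None if the name is unclaimed

    # First come, first served: if "prisonplanet.ca" is taken, hash and look up
    # "prisonplanet.canada" or "prisonplanetcanada" instead.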
One can create a subweb that is better than the web is today. Torrent traffic already dominates the web at something like 70% of all traffic, so why not make it 100% and just be done with the different protocols, at least from the perspective of what goes between point A and point B?
In fact, web searching capabilities and IM should be integrated in a decentralized fashion as well. Essentially I think we need a huge open source project to get this rolling. I wouldn't bother with time-sensitive packets... but the delivery of a file, or a webpage, or an email, is almost always not time sensitive; it gets there when it gets there, and the fact that it gets there soon enough is good enough. I don't care if it always takes 10 seconds to load a webpage on the undernet. I only care that it is delivered reliably, and, if the whole set of data is fairly large, that I can get it in one shot instead of making a connection for each and every little thing. The only things I should need to connect to a webserver for are to get their package and to talk to their database; that's it. It could be done easily with Flash apps, using a webserver as a backend with server-side scripting.
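On the client side, the "one package plus a database connection" model could be as plain as this (a sketch; the URL and archive layout are made up, and it skips the Flash front end entirely):

    # Sketch: pull a whole site down as one archive instead of dozens of requests,
    # then only go back to the server for database calls. URL/paths are hypothetical.
    import io, tarfile, urllib.request

    def fetch_site_package(url, dest="site_cache"):
        data = urllib.request.urlopen(url).read()          # one shot for the whole site
        with tarfile.open(fileobj=io.BytesIO(data)) as tar:
            tar.extractall(dest)
        return dest

    # fetch_site_package("http://example.net/site-package.tar.gz")
    # After this, pages load from the local copy; only dynamic queries hit the server.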