So how can you identify the hacker, if caught, by the software they attempted to use? The problem is that such software is always made available by its creators on hacker sites, because the more people who use it, the lower your risk of being identified.
Before we proceed though, I would note that the date of these mod times is 2016-07-05 — July 5th? That's almost a month after CrowdStrike had given the hackers the boot.
That's also weeks after Assange claimed to have emails that would hurt Clinton, and weeks after "Guccifer 2.0" had already been releasing documents.
The documents posted online do not appear to contain any emails or communications, but rather include shared passwords for the committee's shared accounts with various news services, Lexis, and PACER, the federal courts' public access system.
And really? Cygwin? What maverick of forensics is using Cygwin?
I was going to address the 22 MB/s thing.
That's not that fast; that's about 176 Mb/s. Let's pretend that the date makes sense and, just for the sake of argument, go along with this rather unsupported notion that we've got files copied from the DNC to somewhere remote.
* Where were the DNC servers located? On site in the offices? Or were they hosted somewhere?
* What kind of connection did the DNC have?
* How do we know that point B was local to the hacker and not an interim server?
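For reference, the unit conversion behind "that's like 176 Mb/s" is easy to check (a quick sketch; 22 MB/s is the figure from the post above):

```python
# Convert the claimed copy rate from megabytes/s to megabits/s.
rate_MBps = 22            # figure quoted above
rate_Mbps = rate_MBps * 8 # 8 bits per byte

print(f"{rate_MBps} MB/s = {rate_Mbps} Mb/s")  # 22 MB/s = 176 Mb/s
```

Well within reach of an ordinary business connection, in other words, which is why the questions above about where point A and point B actually were matter so much.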
What exactly is the author saying in this mess here?
Conclusion 5: The lengthy time gaps suggest that many additional files were initially copied en masse and that only a small subset of that collection was selected for inclusion into the final 7zip archive file (that was subsequently published by Guccifer 2). Given the calculations above, if 1.98 GB were copied at a rate of 22.6 MB/s and all the time gaps were attributed to additional file copying then approximately 19.3 GB in total were initially copied. In this hypothetical scenario, the 7zip archive represents only about 10% of the total amount of data that was initially collected
The problem there, of course, is that if the files were copied in a single batch, the time between last mod times doesn't make sense for the theory, because the start of each transfer would fall within a second of the last mod time of the file transferred immediately before.
So the obvious solution is to then invent theoretical missing files to fill in the gaps? And how would one determine the size and number of files in the gaps?
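To make the report's arithmetic explicit (a sketch using only the figures quoted in Conclusion 5; the individual gap durations aren't given, so the elapsed times are back-derived from the claimed rate):

```python
# Figures quoted in the report's Conclusion 5.
archive_gb = 1.98   # size of the published 7zip contents
total_gb = 19.3     # extrapolated total, if every gap was more file copying
rate_mb_s = 22.6    # claimed sustained copy rate

# Time to copy just the archive contents at the claimed rate.
archive_secs = archive_gb * 1024 / rate_mb_s   # ~90 s

# Implied total transfer window if all gaps were attributed to copying.
total_secs = total_gb * 1024 / rate_mb_s       # ~875 s, about 15 minutes

# The published archive as a share of the hypothesized collection.
share = archive_gb / total_gb                  # ~0.10, the report's "about 10%"
print(f"{archive_secs:.0f} s, {total_secs:.0f} s, {share:.1%}")
```

Note the circularity: the 19.3 GB total only exists because the gaps were assumed to be copying, and the "10%" conclusion then just restates that assumption.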
Initially when this data was analyzed, the “time gaps” were attributed to “think time”, where it was assumed that the individual who collected the files would copy the files in small batches and in between each batch would need some “think time” to find or decide on the next batch to copy. This may be an equally valid way to explain the presence of time gaps at various junctures in the top-level files and folders. However, in this analysis we will assume that a much larger collection of files were initially copied on 7/5/2016; the files in the final .7z file (the subject of this analysis) represent only a small percentage of all the files that were initially collected.
I certainly wouldn't point to this as any reason to say "case closed." It's not even remotely definitive. Relies on a TON of assumptions and ignores more plausible scenarios.
Exactly. It wasn't a hack. That's the whole point.
That's because the emails were not released in that batch:
I support government encryption and RF engineers who frequently use Cygwin. What's your point? And please don't spare the technical details.
I also support multiple clients with high-bandwidth connections to the internet in the same city or state (MD/DC), where we are close to a central hub (MAE-East). Even with the same service provider and close proximity, we never see consistent high-bandwidth transfer speeds; the speeds vary 10-15%. And you NEVER see full bandwidth utilization without pretty intense tuning of TCP windowing, MSS, and the receive window. Even with a dedicated connection supporting jumbo frames and highly tuned gear at both ends, I've only ever seen just over 85% utilization. The public internet doesn't support jumbo frames; packets max out at 1,500 bytes and are fragmented, needing reassembly at the receiving end. That adds latency on top of internet propagation time. Then there are out-of-order packets, retransmissions, etc. Possible? Yes, but HIGHLY, HIGHLY improbable.
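The windowing point can be made concrete with the bandwidth-delay product: a single sustained TCP stream can't exceed window size divided by round-trip time, no matter how fat the pipe is (a sketch; the RTT values are illustrative assumptions, not measurements from this case):

```python
# Max single-stream TCP throughput is bounded by window_size / RTT.
def max_throughput_mb_s(window_bytes: int, rtt_s: float) -> float:
    """Bandwidth-delay-product ceiling, in decimal MB/s."""
    return window_bytes / rtt_s / 1e6

# Classic 64 KiB receive window (no RFC 7323 window scaling),
# with a few illustrative round-trip times.
window = 64 * 1024
for rtt_ms in (5, 20, 50):
    ceiling = max_throughput_mb_s(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> {ceiling:5.1f} MB/s max")
```

Even at a very short 5 ms RTT, an unscaled 64 KiB window caps out around 13 MB/s; at 20 ms it's about 3.3 MB/s. Sustaining 22+ MB/s over the internet therefore requires window scaling and careful tuning at both ends, which is exactly the poster's point.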