posted on Nov, 17 2012 @ 08:44 PM
reply to post by HIWATT
I'm allergic to a hell of a lot of clicking....
Fortunately, I'm a programmer and I've been exposed to lists like this before. This will be a snap to follow for anyone using Linux or a Mac.
1. Save the webpage (with the list) using your browser.
2. Edit the page in a text editor, removing everything above and below the list.
3. Edit each download link so that only the href URL remains. Strip out everything that isn't part of the links, leaving one link per line.
4. edit each link address, using find and replace - replacing this:
open?id=
with this:
uc?export=download&id=
5. Save the file, changing its extension from .html to .txt. Let's say you downloaded it into your Downloads folder. The filename is now
100-neat-free-survival-downloads.txt
6. Make a "survival" folder in your Documents directory.
7. Open a terminal window and go to your new "survival" directory (cd ~/Documents/survival).
8. Run this command:
for LINE in $(cat ~/Downloads/100-neat-free-survival-downloads.txt); do curl -OJL "$LINE"; done
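If you'd rather not hand-edit the HTML at all, steps 2 through 5 can be sketched as a grep/sed pipeline. The filename and the exact Google Drive URL pattern here are assumptions — adjust them to match what's actually in your saved page:

```shell
# Sketch of steps 2-5: pull the Drive links out of the saved page and
# rewrite them into direct-download form. Assumes the page was saved as
# ~/Downloads/list.html and the links look like
# https://drive.google.com/open?id=... (check your page and adjust).
grep -o 'https://drive\.google\.com/open?id=[A-Za-z0-9_-]*' ~/Downloads/list.html \
  | sed 's/open?id=/uc?export=download\&id=/' \
  > ~/Downloads/100-neat-free-survival-downloads.txt
```

The `\&` in the sed replacement is needed because a bare `&` would insert the whole matched string instead of a literal ampersand.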
These are exact instructions for doing this on a Mac. Linux users should only need slight modifications to get it running on their systems.
This will download every file without further interaction. When I started writing this reply I had 12 files downloaded; now I'm over 20. It seems like a lot of setup, but instead of clicking links you can start reading one of the books while the rest flow onto your hard drive.
edit on 17-11-2012 by stutteringp0et because: (no reason given)