
Documenting and Cataloging an Unfolding Disaster - Ideas Please.


posted on May, 27 2012 @ 12:17 AM
Those of us who are interested in keeping on top of a developing crisis need to come together and create a resource which lists all of the places where we find information. This would be invaluable in documenting the unfolding disaster. It is particularly important in the early days of a disaster since the authorities often make slip-ups and mistakes in releasing information which they later retract or change.

I feel we need to prepare ourselves for other disasters which are likely to happen in the future, and to document what we have learned from Fukushima and similar incidents.
There is so much information which we have all brought to the Fukushima threads, and I have started to pull it together by analysing the posts - but only for one thread, when there are many on this topic. We need some way to organise and structure the data so that it can be referred to later (for example, it would make a book much easier to write if we could go to fewer places to find the information needed, run searches, and so on).

I do not think it is safe to keep the actual information in only one place, but we could hold a copy of it in one place and encourage people to download their own versions, so that the data is distributed everywhere.

The single most important thing stopping us from being REALLY effective is that we are disorganised: our information is spread over multiple threads, multiple websites, and multiple languages on the internet. As they say, information is power - but ONLY if it is in a form which can be used effectively and efficiently.

What we need is a structure for the data and an organised plan of action which can be used for the next crisis. As with everyone involved in the Fukushima disaster, it has taken us time to realise which resources are good, which are reliable, and onto which "side" of the fence each one falls.

What we need is a list of websites where we can go to look for images, web articles, data, PDF documents and videos.
We need links to the URLs, how those URLs are formed, and how the source data is organised (for example by date, 'j' for Japanese, 'e' for English), so that we can easily look for other files in the same series. Most large organisations follow a standard when referencing their documents; otherwise their document libraries would descend into complete chaos.
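As a rough illustration of the idea, here is a Python sketch of generating candidate URLs in a date-plus-language series. The pattern shown is entirely invented; the real naming scheme would have to be catalogued per organisation, and Python itself is just an illustrative choice:

```python
from datetime import date, timedelta

# Hypothetical URL pattern for a document series: date-stamped press
# releases with a language suffix ('j' Japanese, 'e' English).
# The actual pattern must be catalogued per site - this one is made up.
PATTERN = "https://example.org/press/{d:%Y%m%d}{lang}.pdf"

def candidate_urls(start: date, days: int, lang: str = "e"):
    """Yield the URLs the series would use for each day in the range."""
    for offset in range(days):
        yield PATTERN.format(d=start + timedelta(days=offset), lang=lang)

for url in candidate_urls(date(2012, 5, 1), 3):
    print(url)
```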

This was really my idea behind the "reference" for the Fukushima thread. If ATS gets taken down, then all of the information we have collected is lost. Well, it is not truly "lost", because each person holds parts of it, but as a single reference in one place it is lost, and it is unlikely that the same information could ever be recreated in its original form or organised into one forum thread again.

One of the problems I can foresee is that any search facility we build would need the original documents in a searchable form. This means that PDF documents would have to be converted into text, so that the search could both index them and then scan them for the things we need to find. That is very hard when documents are written in foreign languages, and there are so many different implementations of the PDF standard that it is difficult to find a web-based way of converting all these formats into text for searching and indexing. Ideally, documents would also be tagged with words which could be searched for later.
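For what it's worth, a minimal sketch of the PDF-to-text step, assuming the pypdf library is an acceptable tool; scanned image-only PDFs would still need OCR, which this does not attempt:

```python
from pypdf import PdfReader  # pip install pypdf

def pdf_to_text(path: str) -> str:
    """Extract plain text from a PDF so it can be indexed and searched.
    Image-only (scanned) PDFs yield nothing and would need OCR instead."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```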

It would become a huge project taking many man-hours, and one which would never be finished, so I am wondering how its scope could be limited while still keeping it a useful resource for us all.

Your ideas and suggestions please.

Unless, of course, there is another website which already does this kind of thing (I don't mean a site like the World Health Organisation, CrisisWatch or ThreatsWatch, which do not do the kind of investigations that ATS members do).
=======================================



posted on May, 27 2012 @ 12:27 AM
reply to post by qmantoo

surviveeconomiccollapse.com...
 


I always keep an eyeball on this one...you never know til it hits ya



posted on May, 27 2012 @ 09:37 PM
Really good thread - very well thought out, and a well-written post. I will think about this for a while; being prepared for the next disaster is really important. I look forward to other posts in this thread.



posted on May, 28 2012 @ 05:04 AM
What I learned from Fukushima is that we need to copy the page itself; just copying the web addresses is useless. Around 50% of all the reports and articles are not "there" anymore, because many newspapers, magazines, etc. keep their articles online for only a few months - after that time they are lost!

Tokyo, 0.110 µSv/h, rainy



posted on May, 28 2012 @ 09:10 AM
Yes, and some, like the Japanese newspapers, keep them for only a few days - three, I think, if I remember correctly. NHK News or NEI News, maybe.

I think we need to divide the information by how recent it is and how relevant it is.
For example
a) RSS feeds are



posted on May, 28 2012 @ 10:38 AM
This is a very important endeavor. TPTB don't want information easily accessible. In fact, they wish we'd all drop dead.

I think the single most important point is that a site for information storage and easy retrieval exists, period. Then it takes a sharpie to keep it together, with the ability to move the site with its data intact if need be. Q, if you're willing to do that - be point person for the site - the battle is half won.

Then I agree with Human that copying or downloading the moment data is discovered is really important. I have found we can't even wait a matter of minutes. I now save the whole web page; that way I can go through the folder and pull out the material I need.
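A minimal sketch of that "save it the moment you see it" habit, in Python; the archive folder name and the plain urllib fetch are illustrative choices, not anyone's actual setup:

```python
import re
import time
import urllib.request
from pathlib import Path

ARCHIVE = Path("archive")  # local folder, one file per captured page

def save_page(url: str) -> Path:
    """Download a page immediately and store it under a timestamped name,
    so the copy survives even if the original is pulled minutes later."""
    ARCHIVE.mkdir(exist_ok=True)
    safe = re.sub(r"[^A-Za-z0-9._-]", "_", url)[:100]
    dest = ARCHIVE / f"{time.strftime('%Y%m%d-%H%M%S')}_{safe}.html"
    with urllib.request.urlopen(url, timeout=30) as resp:
        dest.write_bytes(resp.read())
    return dest
```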

But equally important, we need a way to pass data without it being intercepted. Nothing against ATS, but U2Us aren't enough. A couple of us will need to exchange email addresses or other contact details once the site is up and running and we know who can be trusted.

And we'll need a way to notify others in case the site gets taken down and a new one has to be established.



posted on May, 28 2012 @ 06:32 PM

I think the single most important point is that a site for information storage and easy retrieval exists, period.

The only way to make it difficult to take down is to have it spread across many different places. Any site in the US can be removed at any time; any site in a single country can be removed, so we would have to plan for that - or not bother. We can easily make a backup copy of the site available to certain people, if they want to download it and keep it safe on their own machines.

Actually, I think that if TPTB have to start removing sites, then we are nearing the final battle and it really won't matter which sites are removed - our focus should then be on looking after ourselves. Besides, there are many more important sites to take out if that is what is going to happen. I don't think we should see this as making much trouble for them in their Big Game Plan; it is only there to document and identify where lies are being spread. If the final battle comes, the whole communications network will be taken out. We had better start using telepathy instead!

The only way to be certain is for the individual who found the site to download the information themselves before posting the link. Any site can be monitored, and any important new links that are posted can be removed. We can have a store of information, but I don't think we can expect it to be (a) secure, and (b) copied by the site before it gets taken down. We can copy the data, but we may have to upload it from the poster's copy if the original has been removed in the meantime.

If we say that the data has to live on this site, then it will become vast. Take one person's data and multiply it by 10 (for example), and we are already talking gigabytes and terabytes, due to YouTube videos and so on. That is not practical when considering multiple projects spanning multiple years, and any disaster on the scale of Fukushima will take years to sort out. Well, it IS practical, but it is the kind of thing which would probably end up on a server at someone's business or home, where they can manage it - and then it becomes vulnerable to interference.

Passing data between us in a secure way is not practical. As you know, the vast resources of the US government are currently pitted against terrorists, and the US agencies are building facilities with acres of computers intended to decode almost anything quickly. In the face of that kind of computing power, no data on the internet is truly secure, and trying to hide it only gets individuals "flagged" as concealing information. That is surely not what this is all about? So we have to recognise this and work around it.

It seems that PCs running Windows are fairly easy to hack into and monitor. It is probably possible to hack into other operating systems too, but perhaps not as easily as Windows, and then Mac. This is partly because Windows and Mac programs are not open source, so we cannot control what is actually running on our machines. Programs on those platforms can be open source, of course, but the antivirus programs often are not. Linux has a firewall built into the kernel (the core of the operating system), which makes it easier to block things, but perhaps also more complicated to administer.
Q



posted on Jun, 9 2012 @ 04:42 AM
The first outline of ideas is here and needs more input as we go.

For each "Crisis", as many RSS feeds as we want can be added. I have used a Google Alert for Oyster Creek as the pretend crisis for this exercise, and we can clear out the database or start another "Crisis" if one happens.

Currently the RSS feeds are updated by pressing the button on the right side of the main index page, but eventually (in the next couple of days) it will run automatically every 2(?) hours, downloading and processing the latest RSS feed.

Click the menu item in the right-hand menu to see the RSS items, and click any of them to be taken to the referenced page. Currently I am NOT saving the actual pages, as I feel this would take up loads of space over time.
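For anyone curious how that fetch-and-store step might look, here is a sketch assuming the feedparser library and an SQLite store; the actual site may well be built differently, and the alert feed URL shown is a placeholder:

```python
import sqlite3
import feedparser  # pip install feedparser

def process_feed(feed_url: str, db: sqlite3.Connection):
    """Pull an RSS feed (e.g. a Google Alert) and keep link/title/summary.
    Meant to run on a schedule, e.g. every couple of hours."""
    parsed = feedparser.parse(feed_url)
    for entry in parsed.entries:
        db.execute(
            "INSERT OR IGNORE INTO items(link, title, summary) VALUES (?,?,?)",
            (entry.get("link"), entry.get("title"), entry.get("summary")),
        )
    db.commit()

db = sqlite3.connect("crisis.db")
db.execute("CREATE TABLE IF NOT EXISTS items("
           "link TEXT PRIMARY KEY, title TEXT, summary TEXT)")
process_feed("https://www.google.com/alerts/feeds/EXAMPLE/EXAMPLE", db)
```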

Next, I am going to add Google searches.

So, let me have any of your ideas generated by this mock-up, please.
Q



posted on Jun, 9 2012 @ 07:22 AM
OK, so the way it works at the moment is, as I said, that I am not keeping the pages online. This means that whatever text we will need to search in the future has to be saved at the moment the RSS feed item is read from the web.
As soon as I have finished processing a page, it gets deleted, and the item then exists only as a link + title + short description from the RSS feed.

That is all virtually useless for searches; anything we will need, we have to extract while we have the page in memory. If the web page later disappears from the internet and the link becomes unusable, then we have to go back to archives - if there are any - or to the text we stored while we had the page in memory.

There is one problem: a computer cannot tell relevant text from adverts, page formatting, or the other rubbish everyone places on their pages. The best we can hope to do is to extract anything within HTML paragraph tags. Pages are often structured with these paragraph areas, but not all are, so it is rather hit-and-miss.
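To make the heuristic concrete, a sketch using only Python's standard-library HTML parser; it keeps text inside <p> tags and discards everything else, which is exactly the hit-and-miss trade-off described above:

```python
from html.parser import HTMLParser

class ParagraphText(HTMLParser):
    """Keep only text that appears inside <p> tags; everything else
    (navigation, adverts, scripts) is ignored. Crude but broad."""
    def __init__(self):
        super().__init__()
        self.depth = 0       # how many <p> tags we are currently inside
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.depth += 1
    def handle_endtag(self, tag):
        if tag == "p":
            self.depth = max(0, self.depth - 1)
    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

def paragraphs(html: str) -> str:
    """Return just the paragraph text of a page, one chunk per line."""
    p = ParagraphText()
    p.feed(html)
    return "\n".join(p.chunks)
```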

Even if we saved the whole page, we would probably still get searches matching rubbishy data, and relatively little of a news page is useful text. Most of it is adverts and page formatting, so perhaps an average of 50% would be wasted space, not worth keeping for text searches.

The reason Google is so good is that they have spent millions of dollars, and probably millions of man-hours, making their searches better and better. They have terabytes of data indexed in all kinds of ways which we are not going to be able to match.

When you are taking text from a single website, it is relatively easy to determine which text areas are worth keeping; but when the pages are coming from a broad range of websites across the internet, each with its own way of presenting content, a broad catch-all method is the best we can use.



posted on Jun, 10 2012 @ 01:23 AM
When I cannot afford to lose data off the web, I save the complete page/video and categorise it into folders, with redundancy and backups. That way, if a member on ATS requests certain data, you have it on hand quickly to collaborate.

If you want to get really serious, a Faraday cage is probably a good start for an 'SHTF backup solution'. It won't stop high-energy solar neutrinos, though.

For a centralised location, perhaps this is something we can work on with ATS? A reference forum for super-threads such as Fukushima, moderated by moderators or thread participants on a voted, rotating basis.
That still doesn't solve the problem if something goes down (e.g. ATS is taken down).
I don't see it working any other way without extensive P2P software, distributed files and databases - cloud computing, really. We each have to save as much as possible.



posted on Jun, 12 2012 @ 07:00 PM

Originally posted by qmantoo
The first outline of ideas is here and needs more input as we go...


Great, as usual.

Will this be user-accessible to add new info?

The users, of course, are pre-screened by the collective...?

Meaning that guests will not be able to add/edit, for security's sake.

We're in.

How can we be of service?

tfw



posted on Jun, 12 2012 @ 11:41 PM
Currently this is just a grab of whatever RSS feeds come from Google etc., so there is not much that users can contribute; but of course, we can have anything we want - we just need the ideas and examples, and if it is practical and doable I will build it.

I think users can definitely be involved by downloading and saving the referenced pages/files on their own computers, but I see this more as a regular reference site which points to pages on other sites - like a menu of what's going down at the moment for any particular crisis, plus some older material if there is a need to do research.

This is not a standard content-management or WordPress system, so I don't have logins on the site as such. These can be added later if we think we need them.

I have set it up so that I can easily add RSS feeds through Google Alerts; I just need to know which search terms bring the best results for the particular crisis.

The test ones I have at the moment are
"oyster creek"
fukushima tepco daiichi japan
"browns ferry"

and various combinations around
nuclear power plant

The alerts are returning about 10-20 results each time, as-it-happens, and get processed every couple of hours.

Google searches return longer-term results from the past, so I am working on extracting these, plus more focused searches such as .doc or PDF files only, or videos, etc.
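Google's documented query operators (filetype: and site:) can express those focused searches; a tiny illustrative helper, assuming we build the query strings ourselves:

```python
def focused_query(terms, filetype="", site=""):
    """Build a query using Google's filetype:/site: operators,
    e.g. focused_query('fukushima tepco', filetype='pdf')."""
    parts = [terms]
    if filetype:
        parts.append(f"filetype:{filetype}")
    if site:
        parts.append(f"site:{site}")
    return " ".join(parts)

print(focused_query("fukushima tepco daiichi", filetype="pdf"))
```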

I am not sure how to handle images yet. I am not really happy with the ones from the Fukushima thread, and I don't know how to present the images found in a meaningful way which is easily viewable and searchable. The problem is that there is no way to categorise them (except manually).

I can tell this is going to get very big, even without the referenced source documents.

Things to do at the moment:
- Google search extraction
- decide how we store page text content
- a search facility over that stored page content
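For the second and third items, one possible approach (my assumption, not the site's actual design) is SQLite's built-in FTS5 full-text index, which needs no external search engine for an archive of this size:

```python
import sqlite3

db = sqlite3.connect("crisis.db")
# FTS5 gives a small, self-contained full-text index over the stored text.
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pages "
           "USING fts5(url, fetched, body)")

def store(url: str, fetched: str, body: str):
    """Keep the extracted page text alongside its URL and fetch date."""
    db.execute("INSERT INTO pages VALUES (?,?,?)", (url, fetched, body))
    db.commit()

def search(query: str):
    """Return matching pages, best match first."""
    return db.execute(
        "SELECT url, fetched FROM pages WHERE pages MATCH ? ORDER BY rank",
        (query,)).fetchall()

store("http://example.org/a", "2012-06-09", "reactor containment readings")
print(search("containment"))
```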



posted on Jun, 14 2012 @ 12:38 PM
I got a lot of the real information about Fukushima from the real science channels, where they have done deep studies, not the average BS we can read in various blogs!

There are a few sites which offer this (Springer, MedLine, and ScienceDirect, for example); some articles are free, some are really expensive, but always good and stimulating for further research!

I am talking about these, for example:
www.scirus.com...
hps.org...

Tip: look for "BugMeNot", a Firefox extension!



posted on Jul, 2 2012 @ 05:19 PM
Added the Google search, and allowed users to add resource links and to watch a page for fresh content. However, the extraction of the Google search results is a bit 'iffy' at the moment.

Crisis Mapped

What else can I add?



posted on Mar, 13 2013 @ 08:55 PM
OK, so I am ready to take the next step, I think. This will be a huge project, and unfortunately not one which will be written overnight by one man. However, in approximately three months I reckon we should have a wobbly working site, obviously with some bugs and missing features.

I want some more comments please, as this is a modification/extension of the above idea. Anyone who cannot post here due to government restrictions can contact me via any of my pelicanbill com websites.

I actually want to start programming this now so any further comments, ideas and offers of help are very welcome.

=====================================
What I think we need is a place where a team can get together and work on a research project. This would have to be in a form SIMILAR to a forum, but not open to everyone who joins the site.

My ideas so far are as follows, though they may be modified depending on what is decided to be required. (A rough sketch of how this structure might look in code follows the list.)

1) Project proposals for investigation are submitted by Members who want to start an investigation around a particular topic.
2) All Visitors and Members of the site can see the project proposals submitted by members.
3) Each project has a Team Leader who drives and co-ordinates the project and his team. He decides who joins or leaves the team, based on the 'CV' of relevant experience submitted to him by site members.

4) All project pages will be openly viewable by everyone visiting the site (visitors - not logged in); however, resources will be private to projects, and only the titles/descriptions of resources (files, videos, HTML pages, PDFs, documents, etc.) will be shown to visitors. Project resources will not be viewable by all.

5) Project pages will show the true links to resources for logged-in team members.
6) Full copies are the responsibility of team members. Team members are encouraged to copy their project (partial or complete) and download it to their PCs as often as possible, to allow distributed security of the information.

7) A project can have multiple sub-projects, with sub-team leaders and sub-team members drawn from within the project team. Once someone is a member of a team, there are no security restrictions on sub-projects; basically, anyone in the team is trusted to modify pages within that project. Sub-teams are organisational structures only. Team Leaders (project proposers) have moderation responsibility for their teams.

8) A project will be divided into:
a) the project proposal, description, and team members (usernames only)
b) project resources
c) nodes, called "pnodes", consisting of questions which need to be investigated/documented/researched
d) node pages, which list the node resources referenced in that node; links to those resources can be on any sub-node page
e) nodes can have sub-nodes
f) nodes can be sub-projects consisting of their own nodes/resources/etc.

9) Members'/users' profiles: relevant-experience 'CV', email address (private), join date, password, etc.
10) Some method of communication, like instant messages perhaps.
11) I am planning to copy/keep all resource data (which will take a lot of storage space) - I don't know if this is practical.
12) to be continued...
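A rough sketch, as promised above, of how points 1) to 8) might map onto a data model; this is my own illustration, every name in it is hypothetical, and it is not a committed design:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Resource:
    title: str        # title/description shown to everyone (point 4)
    description: str
    url: str          # the true link: logged-in team members only (point 5)

@dataclass
class Node:
    """A 'pnode': one question being investigated/documented/researched."""
    question: str
    resources: List[Resource] = field(default_factory=list)
    children: List["Node"] = field(default_factory=list)  # sub-nodes
    sub_leader: Optional[str] = None  # set when the node is a sub-project

@dataclass
class Project:
    proposal: str                                      # visible to all visitors
    leader: str                                        # proposer; moderates team
    members: List[str] = field(default_factory=list)   # usernames only
    resources: List[Resource] = field(default_factory=list)
    nodes: List[Node] = field(default_factory=list)
```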



