
Does ATS need a fallout shelter? IP after DNS.



posted on Mar, 7 2013 @ 12:10 PM
reply to post by asciikewl
 


This is certainly another solution.
However, if the server does any sort of load balancing, or sits on multiple IPs, then you can run into issues.
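A quick sketch of why that bites, with made-up addresses: if a user bookmarks the single raw IP that DNS happened to hand them, and the operator later rotates that backend out of the pool, the bookmark goes stale.

```python
# Illustrative only: all addresses invented, and the dict stands in for DNS.
pool = {"www.example.com": ["203.0.113.10", "203.0.113.11"]}

def resolve(host, n=0):
    """Stand-in for a round-robin DNS lookup: pick the n-th pool address."""
    ips = pool[host]
    return ips[n % len(ips)]

bookmark = resolve("www.example.com")       # user saves one raw IP
pool["www.example.com"] = ["203.0.113.12"]  # operator later swaps backends
print(bookmark in pool["www.example.com"])  # → False: the saved IP is stale
```

The hostname keeps working through the swap; the frozen IP does not.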




posted on Mar, 7 2013 @ 12:13 PM

Originally posted by Mr Tranny
In the battlefield, lines of communication are worth their weight in gold, no matter how thin and fragile they are.
Situational awareness is paramount.
Access to information is always top priority, any way you can find it.

Without it, a soldier or civilian is deaf, dumb, and blind.


Agreed! If something long term happens, it does allow people to communicate and find help, resources, sanity-saving social interaction, or even just a friend. Granted... if something bad enough happens that most communication is down and we are forced to rely on a site / forum for survival... it is just as likely that we will not have electricity, and we'll probably be a little more concerned with finding shelter / surviving / etc.

Yes I am going a little extreme here but I usually do when it comes to thinking about being prepared! =)



posted on Mar, 7 2013 @ 01:01 PM

Originally posted by grey580
However, if you specify a port as well as a header, then that port can be used to access the website.

So www.website.com becomes 96.84.128.53:9001

If website owners did that, and had people bookmark that address, DNS outages wouldn't be an issue.


It would work, but the problem with that idea is that it still produces a link the googlebots can follow, which generates a duplicate of the website under a different host name, so a copy of the site shows up in the search engine under the IP address. Same as it does with normal raw-IP connectivity.

There is a reason for the whole raw-IP connectivity problem across the internet. The cause of that problem is the fact that the bone-crushingly stupid google can’t make a simple logical connection. They are unable to tie an IP address to the domain name that points at it.

You have IP address A with a website on it.
You have domain name B that points to IP address A.
Domain name B has a website on it.
The sites on domain name B, and IP address A appear to be substantially the same.
It would appear safe to assume that they are, in fact, the same website, not a duplicate, and that the IP version should be tied to the domain-name version and hidden from normal search results without affecting the ranking.
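The steps above boil down to a very small piece of logic. A hypothetical sketch (the domain, the A record, and the content hashes are all invented for illustration): fold a raw-IP result into its hostname when the hostname resolves to that IP and both serve substantially the same content.

```python
import ipaddress

dns = {"example.org": "198.51.100.7"}                            # assumed A record
page_hash = {"example.org": "abc123", "198.51.100.7": "abc123"}  # crawled content

def is_raw_ip(host):
    """True when `host` is a bare IP address rather than a domain name."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def canonical_host(host):
    """Return the domain-name form of a raw-IP result when content matches."""
    if is_raw_ip(host):
        for name, ip in dns.items():
            if ip == host and page_hash.get(name) == page_hash.get(host):
                return name
    return host

print(canonical_host("198.51.100.7"))  # → example.org
print(canonical_host("example.org"))   # → example.org
```

With that mapping in place, the IP result collapses into the hostname result instead of counting as a duplicate site.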

Good, have a cookie!
But no cookie for google, because they are too stupid to make that connection.

Google treats the IP version as a second website, and turns around and de-ranks both websites because they are duplicates. Google hates people that create duplicate websites under different domain names. But google evidently doesn’t understand how the internet works, so they are too stupid to figure out that the two are not duplicates, but the same site.

Thus, almost every website across the net seems to be scared of their IP shadow. They do their best to make sure the IP version is killed any way possible. They go out of their way to make sure you can’t connect via raw IP.

That is an example of how a simple expedient decision that google made when they were writing their search engine software has caused a net-wide degradation of connectivity.

Let’s look at FR.
They load balance between two servers.
209.157.64.200
209.157.64.201
When you enter the FR domain name you will get sent to one of the two servers.
FR does not seem to be afraid of their own shadow, so if you enter either IP address, you will also get the complete FR website. If the DNS system goes down, you can still enter one of those two IP addresses and get your FR.

But let’s look at how google treats it. Google found the raw IP links somewhere, so now if you search google, you will find that google thinks there are three FR websites: one under FR, and one under each of the 200 and 201 addresses. They rank all three differently, and ding all three for being duplicates. Bad google, stupid google!

Google should have said: “We have a domain pointing to two IPs. Both IPs appear to host the same site as the hostname that points at them. They are probably the same, so we should hide the IP results and consider them all FR.” But no, they don’t. That one policy decision has single-handedly caused this situation.

FR, on the other hand, has evidently gotten tired of running from their shadow and decided to give google the finger. They tried the redirect approach for a while, but decided to say hell with it and went back to open IP access.
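For reference, the “redirect” tactic mentioned above amounts to this: a site that fears the IP duplicate answers any request that arrives under a bare IP with a 301 to the canonical hostname. A sketch (the handler shape is illustrative, not FR’s actual setup; the IP is the one quoted earlier in this post):

```python
CANONICAL = "www.freerepublic.com"

def handle(host_header, path="/"):
    """Return (status, redirect_location): 301 to the canonical hostname when
    the request's Host header is a bare IPv4 address, else serve normally."""
    host = host_header.split(":")[0]          # drop any :port suffix
    if host != CANONICAL and host.replace(".", "").isdigit():
        return ("301 Moved Permanently", f"http://{CANONICAL}{path}")
    return ("200 OK", None)

print(handle("209.157.64.200"))  # → ('301 Moved Permanently', 'http://www.freerepublic.com/')
print(handle(CANONICAL))         # → ('200 OK', None)
```

That makes google happy, but it also means the raw IP is useless the moment DNS is down, since the redirect points back at the hostname nobody can resolve.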



posted on Mar, 7 2013 @ 04:03 PM
reply to post by Mr Tranny
 


My assumption of why Google chose to do what it did is that many websites are shared-hosted on one IP,
using the Host header to identify which site to serve up to the browser.

Certainly it makes economic sense to go this route. We do this here on IIS. One IP with a multitude of websites.

Like I said before, to get around this we can always specify a port, and everything should work as intended.
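Concretely, “IP plus port plus Host header” amounts to the following on the wire. The address 96.84.128.53:9001 and www.website.com come from the earlier post and are illustrative; nothing is actually sent here.

```python
def raw_ip_request(ip, port, hostname, path="/"):
    """Build the TCP target and the HTTP/1.1 payload you'd send to it.
    You open a socket straight to (ip, port) -- no DNS lookup -- and the
    Host header tells a shared server which of its sites you want."""
    payload = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {hostname}\r\n"
               f"Connection: close\r\n\r\n")
    return (ip, port), payload.encode("ascii")

addr, payload = raw_ip_request("96.84.128.53", 9001, "www.website.com")
print(addr)  # → ('96.84.128.53', 9001)
# To actually fetch: socket.create_connection(addr), sendall(payload), recv.
```

The Host header is the piece that makes this work on shared hosting: the socket finds the server, the header picks the site.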



posted on Mar, 7 2013 @ 04:16 PM
reply to post by grey580
 


It may make some sense on multi-website servers, though I still take issue with how it’s handled. On the other hand, it is hard to comprehend why the practice should be carried over to servers that clearly host one website.

To not take into account the normal backend functionality of a server is asinine in almost any regard, no matter how someone tries to explain it away.



The amount of trouble they have caused by a simple decision that they, and they alone, could easily correct almost leads me to believe that it is intentional. Like they want to cripple the non-DNS functionality of websites to make a DNS kill switch more effective.



posted on Mar, 7 2013 @ 04:57 PM
reply to post by Mr Tranny
 


I don't know. I don't see it as a huge conspiracy.
Google is simply going with what's popular. And that's DNS.

If a server is only hosting one website, it's probably behind a load balancer.

I would say that the majority of sites out there are sharing IP addresses,
with places like GoDaddy and other shared-hosting companies doing a brisk business.

Why do all that extra programming for the minority of sites?



posted on Mar, 7 2013 @ 05:39 PM
If the "Fallout Shelter" is a link on the main ATS website, then how would one go about clicking it if they can't get to the main site because of a DNS issue in the first place? If there were a DNS outage on a large scale, then site traffic and the like couldn't be tracked anyway, so using the IP address to access the site would go unnoticed by search engines.

What websites could do is run a backup local DNS server for their site only, and people who use the sites could configure their networks to use that DNS address. Of course, that would mean everyone would need their own local DNS server on their home network, but that can be accomplished with a virtual machine running a Windows Server or Linux ISO with the DNS service enabled and a bridged NIC to the physical host computer.

Anyway, if "someone" wanted to take down the internet on a large scale, I would contend that "they" would do it through routing (the BGP protocol) and not DNS servers. Kill the routers... kill the internet. Kill DNS... people will figure out a way around it.
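Short of running a full DNS server, the lightest version of that idea is a local bookmark table that kicks in only when normal resolution fails. A sketch (the FR address is the one quoted in this thread and may be stale; the point is the mechanism, not the value):

```python
import socket

# Hypothetical locally saved addresses, written down while DNS still works.
BOOKMARKS = {"www.freerepublic.com": "209.157.64.200"}

def resolve_with_fallback(host):
    """Try normal DNS first; if it fails (outage, kill switch), fall back to
    the locally saved address, or None if we never bookmarked one."""
    try:
        return socket.gethostbyname(host)
    except OSError:
        return BOOKMARKS.get(host)

# "host.invalid" is a reserved name that never resolves, simulating an outage;
# it is not in the table, so the fallback returns None.
print(resolve_with_fallback("host.invalid"))
```

A hosts-file entry does the same job at the OS level; the code just makes the try-DNS-then-fallback order explicit.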



posted on Mar, 7 2013 @ 06:19 PM

Originally posted by OptimusSubprime
If the "Fallout Shelter" is a link on the main ATS website, then how would one go about clicking it if they can't get to the main site because of a DNS issue in the first place?

Anyway, if "someone" wanted to take down the internet on a large scale, I would contend that "they" would do it through routing (the BGP protocol) and not DNS servers. Kill the routers... kill the internet. Kill DNS... people will figure out a way around it.


The fallout shelter would be a basic forum, which would include a link to the normal site. You could use the forum, or you could click the link to the normal DNS-based domain to get to the main site. The link is for people who wanted the main site but accidentally ended up in the fallout shelter. Basically an “exit” sign.

There is a big reason why a government would go after DNS, and not the routing protocols. The internet is also the heart of the government communications infrastructure. If they went after routing, they would break it for everyone, including themselves. Going after public DNS, on the other hand, disables it for 99 percent of the population while they retain use of the network for their own communications. The number of people who could figure out how to reach primary information sites on short order would be a negligible portion of the population.



posted on Mar, 8 2013 @ 11:15 AM

Originally posted by Mr Tranny

Originally posted by OptimusSubprime
If the "Fallout Shelter" is a link on the main ATS website, then how would one go about clicking it if they can't get to the main site because of a DNS issue in the first place?

Anyway, if "someone" wanted to take down the internet on a large scale, I would contend that "they" would do it through routing (the BGP protocol) and not DNS servers. Kill the routers... kill the internet. Kill DNS... people will figure out a way around it.


The fallout shelter would be a basic forum, which would include a link to the normal site. You could use the forum, or you could click the link to the normal DNS-based domain to get to the main site. The link is for people who wanted the main site but accidentally ended up in the fallout shelter. Basically an “exit” sign.

There is a big reason why a government would go after DNS, and not the routing protocols. The internet is also the heart of the government communications infrastructure. If they went after routing, they would break it for everyone, including themselves. Going after public DNS, on the other hand, disables it for 99 percent of the population while they retain use of the network for their own communications. The number of people who could figure out how to reach primary information sites on short order would be a negligible portion of the population.


I'm sure the government could configure the routing infrastructure with ACLs that would let them carry out their business while severely limiting access for the rest of us. There are not very many ISPs out there, and an ACL covering all of the ISPs' IP ranges would do the trick.



posted on Mar, 8 2013 @ 11:17 AM
reply to post by SkepticOverlord
 


Has ATS ever been attacked before?





