Originally posted by grey580
However if you specify a port as well as a header then that port can be used to access the website.
So www.website.com becomes 184.108.40.206:9001
If website owners did that and had people bookmark that address, DNS outages wouldn't be an issue.
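For reference, that trick looks something like this. A minimal sketch using Python's requests library, with the IP, port, and hostname taken straight from the quote (purely illustrative values):

import requests

# Connect by raw IP and port, but send the site's hostname in the Host
# header so a name-based virtual host still serves the right site.
resp = requests.get(
    "http://184.108.40.206:9001/",
    headers={"Host": "www.website.com"},
)
print(resp.status_code)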
It would work, but the problem with that idea is that it still produces a link the Googlebots can follow. That generates a copy of the website under a different host name, which causes a duplicate of the site to show up in the search engine under the IP address, same as it does with normal raw IP connectivity.
There is a reason for the whole raw-IP connectivity problem across the internet. The cause is that the bone-crushingly stupid Google can't make a simple logical connection: they can't tie an IP address to the domain name that points at it.
You have IP address A with a website on it.
You have domain name B that points to IP address A.
Domain name B has a website on it.
The sites on domain name B and IP address A appear to be substantially the same.
It would appear safe to assume that they are, in fact, the same website, not a duplicate, and that the IP version should be tied to the domain-name version and hidden from normal search results without affecting the ranking.
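In code, the connection Google fails to make is about this simple. A hypothetical sketch; the similarity score is assumed to come from whatever duplicate-content comparison the crawler already runs:

import socket

def same_site(ip_addr, domain, similarity):
    # Hypothetical heuristic: if the domain's A records include the IP
    # and the two copies look substantially the same, treat them as one
    # site and hide the raw-IP version from normal results.
    a_records = socket.gethostbyname_ex(domain)[2]
    return ip_addr in a_records and similarity > 0.9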
Good, have a cookie!
But no cookie for Google, because they are too stupid to make that connection.
Google treats the IP version as a second website, then turns around and de-ranks both websites because they are duplicates. Google hates people who create duplicate websites under different domain names. But Google evidently doesn't understand how the internet works, so they are too stupid to figure out that the websites are not duplicates, but the same site.
Thus, almost every website across the net seems to be scared of its own IP shadow. Site owners do their best to make sure the IP version is killed any way possible; they go out of their way to make sure you can't connect via raw IP.
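The usual way they kill it is a blanket redirect off the raw IP. A minimal sketch of the idea in Flask (www.example.com standing in for the real hostname; query strings dropped for brevity):

from flask import Flask, redirect, request

app = Flask(__name__)
CANONICAL_HOST = "www.example.com"  # stand-in for the site's real hostname

@app.before_request
def kill_raw_ip_access():
    # Anything arriving under a non-canonical Host header (a raw IP, say)
    # gets a permanent redirect to the domain-name version.
    host = request.host.split(":")[0]
    if host != CANONICAL_HOST:
        return redirect("http://" + CANONICAL_HOST + request.path, code=301)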
That is an example of how a simple, expedient decision Google made when they were writing their search engine software has caused a net-wide degradation of connectivity.
Let's look at FR.
They load-balance between two servers.
When you enter the FR domain name, you will get sent to one of the two servers.
FR does not seem to be afraid of its own shadow, so if you enter either IP address, you will also get the complete FR website. If the DNS system goes down, you can still enter one of those two IP addresses and get you some FR.
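That is exactly what an outage-tolerant client could do. A rough sketch; the hostname and fallback IPs here are placeholders, not FR's actual addresses:

import socket
import requests

def fetch_with_ip_fallback(hostname, fallback_ips):
    # Try the normal DNS route first.
    try:
        socket.gethostbyname(hostname)
        return requests.get("http://" + hostname + "/")
    except socket.gaierror:
        pass  # DNS is down; fall back to the known raw IPs.
    for ip in fallback_ips:
        try:
            # Send the hostname in the Host header so the right
            # virtual host answers on the raw-IP connection.
            return requests.get("http://" + ip + "/", headers={"Host": hostname})
        except requests.ConnectionError:
            continue
    return None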
But let's look at how Google treats it. Google found the raw IP links somewhere, so now if you search Google, you will find it thinks there are three FR websites: one under the FR domain name, and one under each of the 200 and 201 addresses. They rank all three differently, and ding all three for being duplicates.
Bad Google, stupid Google!
Google should have said, “We have a domain pointing to two IPs. Both IPs appear to carry the same site as the hostname that points to them. Maybe they are the same, so we should hide the IP results and consider them all FR.” But no, they don't. So that one policy decision has single-handedly caused this situation.
FR, on the other hand, has evidently gotten tired of running from its shadow and decided to give Google the finger. They tried the redirect routine for a while, but decided to say the hell with it and went back to open IP access.