SEO for Bing – Google and Bing Indexing Differences

Bing and Yahoo are now (mostly) sharing their search functions, and further integration will occur over the next several months. This combination has given Bing/Yahoo! a 25% market share of the Search landscape. If you have so far ignored the growing importance of Bing, now is the time to start looking at how to optimize your website for Bing search results. This post will take a look at how indexing occurs with Bing.

Google is clearly the leader in the Search space and a very mature and sophisticated search engine. They do a remarkable job of indexing a wide variety of content. Bing, on the other hand, is still in its infancy and has yet to develop some of the rich indexing capabilities that exist with Google. Some of the more compelling differences between how Google indexes websites versus how the Bing crawler operates include Canonical Requirements, Page Size, 301 & 302 Redirects, Meta Refreshes, and Backlink Requirements.

Canonical Requirements:  Google is very good at determining a website’s Canonical URL even if a website is not coded to properly return the Canonical URL.  Google’s Webmaster Tools even allows website owners to manage their Canonical URL without changing any code.  Additionally, Google supports the use of the Canonical tag as a way for website owners to easily avoid duplicate content issues.
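
For reference, the canonical tag is just a link element placed in a page's head section; the URL below is a placeholder for whatever you consider the preferred version of the page:

```html
<!-- Tells Google which URL is the preferred (canonical) version of this page.
     Note: per this post, Bing does not currently support this tag. -->
<link rel="canonical" href="https://www.example.com/preferred-page/">
```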

Bing, on the other hand, does not support the Canonical tag and does not offer Canonical URL management in their Webmaster Center. Bing therefore requires websites to handle canonicalization programmatically.

The Bing crawler, by default, initially accesses a website’s root domain without the “www” subdomain (for example, example.com rather than www.example.com). If the server sends back a 200 OK response, then Bing will register the domain in their index without the “www”. If the non-www domain is 301 redirected to the “www” subdomain, Bing will usually follow that directive without issue and properly index the “www” version of the domain. If your preferred domain configuration includes the “www” subdomain, make sure your Canonical redirects are in place to reflect this preference.
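
To make the logic above concrete, here is a minimal sketch of how you might classify a server's response to a bare-domain request the way the Bing crawler would see it. The function name and return strings are illustrative; in practice the status code and Location header would come from an HTTP HEAD request against the non-www domain.

```python
from typing import Optional

def classify_canonical_response(status: int, location: Optional[str] = None) -> str:
    """Classify how Bing is likely to treat the bare-domain response."""
    if status == 200:
        # Bing registers the non-www host as the indexed domain.
        return "indexed without www"
    if status == 301 and location and "://www." in location:
        # Permanent redirect: Bing follows it and indexes the www host.
        return "indexed with www"
    if status == 302:
        # Temporary redirect on the canonical: Bing declines to index it.
        return "not indexed"
    return "unknown"
```

Running it against a few hypothetical responses shows the three cases Bing distinguishes.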

Page Size: Back in the early days of Google, googlebot would only crawl the first 100k of any given page. As Google has matured, page size is less of an issue for their crawler. Bing, however, currently only caches the first 100k of most web pages (although the range is more like 95k-105k). Keep this in mind as you optimize your website for Bing. Be sure to place the important elements of your content within the first 100k or they will not make it into the Bing cache.
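
A quick way to audit this is to check where your key content lands in the raw page bytes. This is a minimal sketch under the 100k assumption above; the HTML here is a stand-in for a real page fetched from your site.

```python
BING_CACHE_LIMIT = 100 * 1024  # ~100 KB; the actual range is roughly 95k-105k

def within_bing_cache(html: bytes, marker: bytes) -> bool:
    """Return True if `marker` (e.g. a key phrase) appears in the cached portion."""
    index = html.find(marker)
    return 0 <= index < BING_CACHE_LIMIT

# A synthetic page whose important content sits past the 100 KB mark.
page = b"<html><body>" + b"<!-- filler -->" * 7000 + b"<p>key content</p></body></html>"
print(within_bing_cache(page, b"key content"))  # → False: the content would be missed
```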

301 & 302 Redirects:  Although Google prefers a 301 redirect, a 302 will not cause major issues with indexing.  However, if a website employs a 302 Canonical redirect instead of a 301, Bing will not follow the redirect and, in many cases, will refuse to index the website altogether.  For this reason, it is very important that Canonical redirects always use a 301.  Bing has stated that, “We do not index any pages that have been 302 redirected by design.”  In other words, if the non-www version of a domain 302 redirects to the www version of the domain, Bing simply will not index the website.
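
Since the 301-versus-302 distinction is the whole issue here, a minimal WSGI sketch shows what a Bing-friendly canonical redirect looks like at the application level. The host names are placeholders, and a real site would more likely do this in the web server config; the point is simply that the canonical redirect must be permanent.

```python
def canonical_redirect(environ, start_response):
    """Redirect bare-domain requests to the www host with a permanent 301."""
    host = environ.get("HTTP_HOST", "")
    path = environ.get("PATH_INFO", "/")
    if not host.startswith("www."):
        # Always 301 (permanent), never 302: Bing will not index a site
        # whose canonical redirect is temporary.
        start_response("301 Moved Permanently",
                       [("Location", f"https://www.{host}{path}")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```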

Meta Refreshes: Some websites still utilize a Meta Refresh to redirect users. Bing and Google handle this technique very differently. Google will follow a zero-second Meta Refresh and treat it like a 301. Bing will not. As a matter of fact, encountering a Meta Refresh will cause the Bing crawler to stop crawling the rest of the website. If you want Bing to index your entire site, don’t use Meta Refreshes.
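
If you aren't sure whether any of your pages still use this technique, a small scanner makes auditing easy. This sketch uses Python's standard-library HTML parser; the sample markup is illustrative.

```python
from html.parser import HTMLParser

class MetaRefreshFinder(HTMLParser):
    """Collect the content attribute of any meta-refresh tags on a page."""
    def __init__(self):
        super().__init__()
        self.refreshes = []  # e.g. ["0;url=https://example.com/new"]

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("http-equiv", "").lower() == "refresh":
            self.refreshes.append(attrs.get("content", ""))

finder = MetaRefreshFinder()
finder.feed('<head><meta http-equiv="refresh" content="0;url=https://example.com/new"></head>')
print(finder.refreshes)  # → ['0;url=https://example.com/new']
```

Any page that turns up in this scan is a candidate for replacement with a server-side 301.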

Backlink Requirements:  Google clearly has the largest index.  In years past, Google made a big deal about how many web pages were in their index.  These days, they don’t really brag too much about their index size as they have won that battle.  Bing doesn’t even play.  Instead of seeking to index each and every piece of content available on a domain, Bing actively removes web pages from their index if those pages are not found to have enough link authority or value to rank in their SERPs.

While Google will index every single file that it can find on a given website (and even some that don’t exist, thanks to JavaScript functions that expose internal URLs), Bing discards pages that do not have ranking authority. In most cases, in order for pages to maintain a place in Bing’s index, they must have at least one external website link to them. According to Bing’s former Program Manager Brett Yount, websites “need to build page-specific backlinks before those internal pages will get indexed.” There are some exceptions, but this is currently the standard operating procedure of Bing’s index.

Bing will certainly evolve as their search engine matures.  Check back for future posts that will discuss ranking factors for Bing as well as updates to their indexing capabilities.

by John Sherrod