On its 15th birthday, Google broke the news that its original search algorithm had been retired. No need to panic: Google still works. The new algorithm, named Hummingbird, is a combination of new and existing components with the overarching goal of better organizing the world’s information. Hummingbird quickly delivers more relevant content based on user intent. Amit Singhal, Senior VP and Google Fellow, told Danny Sullivan that the last time the algorithm was rewritten to this extent was over a decade ago, in 2001.

Unlike past Google algorithm updates such as Florida, Panda, or Penguin, Hummingbird wasn’t released in a single session. It was rolled out gradually over time, much like Caffeine. According to Google’s press team, Hummingbird has actually been in action for a few months, leading industry experts to reasonably assume the Google search team has done extensive testing and analysis on the new algorithm. Because of the manner in which it was released, Hummingbird will not show up as a single inflection point in search traffic, but rather as a gradual change. It also aligns with recent increases in “not provided” traffic (coincidence? I think not), making it even more difficult to pinpoint whether a site “won” or “lost” with this update.

The other major difference between Hummingbird and other updates is that Hummingbird is more of a replacement of the algorithm than a specific filter or adjustment. Hummingbird improves the way Google interprets queries and matches them to search results.

Query Intent

Search engines look at a variety of factors to determine the intent behind user search queries. The most visible factor tends to be location. Searching for “restaurants” will return dramatically different results based on the location of the user. Users with IP addresses or network connections in Atlanta, Chicago, and Boston will see remarkably different search results pages.

Another factor that Google looks at is historical search behavior. Suppose the user loves Zappos. They often search for “zappos” or “zappos shoes” and regularly interact with the site. If that same user searched instead for “running shoes” or “dress shoes,” Google would likely display a relevant page from the Zappos website in the search results, based on the user’s history of interaction with Zappos.com. Search engines may also want to deliver search results for women’s or men’s running shoes based on the user’s historical search behavior.
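To make the idea concrete, here is a toy sketch of how location and history signals might nudge a ranked list. Every score and boost value in it is invented purely for illustration; Google’s real personalization signals are far more sophisticated and not public.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    base_score: float  # hypothetical query-relevance score, made up for this sketch

def personalize(results, user_city, favorite_domains):
    # Apply small, invented boosts for pages matching the user's city
    # or coming from sites the user has interacted with before.
    def score(r):
        s = r.base_score
        if user_city in r.url:
            s += 0.2  # crude location signal
        if any(domain in r.url for domain in favorite_domains):
            s += 0.3  # crude history signal
        return s
    return sorted(results, key=score, reverse=True)

results = [
    Result("https://generic-reviews.com/best-running-shoes", 0.75),
    Result("https://example-shoes.com/atlanta/stores", 0.72),
    Result("https://www.zappos.com/running-shoes", 0.70),
]

# A user in Atlanta with a history of visiting Zappos sees Zappos first,
# even though the generic page had the higher base relevance score.
for r in personalize(results, "atlanta", {"zappos.com"}):
    print(r.url)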

With the Hummingbird update, Google has continued to refine how it processes queries based on search context. Users will eventually be able to string together a series of queries to narrow down a search. For example, you could start with “Where is the Eiffel Tower?” then proceed with “How tall is it?” and “How much are tours?” Each query depends on context from the previous question, and the result is conversational search.
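Here is a deliberately naive sketch of the idea behind that kind of query rewriting. It only handles pronoun substitution, and nothing about it reflects Google’s actual implementation.

import re

def rewrite(query, context):
    # Replace a standalone "it" with the most recently mentioned entity.
    if context.get("entity"):
        query = re.sub(r"\bit\b", context["entity"], query, flags=re.IGNORECASE)
    # Naive entity capture: remember a capitalized phrase like "the Eiffel Tower".
    match = re.search(r"the ((?:[A-Z]\w*)(?: [A-Z]\w*)*)", query)
    if match:
        context["entity"] = "the " + match.group(1)
    return query

context = {}
for q in ["Where is the Eiffel Tower?", "How tall is it?", "How much are tours?"]:
    print(rewrite(q, context))
# Where is the Eiffel Tower?
# How tall is the Eiffel Tower?
# How much are tours?

A real conversational system would also tie “tours” back to the tower in the third query; resolving that kind of implicit reference is exactly the hard part Hummingbird is working toward.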

Relevant Content

With Google’s Hummingbird algorithm, content is still king. However, relevant content is increasingly important. Hummingbird goes beyond simply examining the words in a search and matching them to Web pages that include those same keywords. When a user types in a query, Google now processes it to identify the intent and meaning behind it.

Google representatives have mentioned a few times that approximately 20% of the search queries entered each day are brand new. At one point, it was common for a webmaster to create content optimized for long-tail keywords. Whether targeting the 20% of queries users have not yet searched, or the other 30–50% that get very few searches each month, a given Web page was previously able to rank quite easily for a few targeted long-tail search terms. If a site employed long-tail optimization tactics sitewide, it could potentially receive tens of thousands of visitors each month from long-tail searches alone.

With the introduction of Hummingbird, webmasters would be wise to stop chasing long-tail synonyms for each general search query and instead focus on variants and similar questions. Instead of creating separate pages for “What’s an easy way to relieve allergy symptoms?,” “How are allergies treated?” and “How do you treat allergy symptoms?,” a website could have a single page about relieving and treating allergy symptoms, plus additional pages describing alternative treatments or treatment side effects (and other similar, but distinct, topics).

It’s no longer enough to create pages relevant to a specific keyword phrase or search query. Content should be relevant to search intent, and the surrounding architecture should also support that theme. Relevance can extend beyond the page itself to a specific section of a site and the domain as a whole. Long-tail optimization is still key, but in a different way than before.

Structured Data & Entity Search

In order for a search engine to understand the intent behind search queries and filter through billions of Web pages to return the most relevant results possible, it needs a common ground for comparison. How can a search engine understand that the hotels in a chain stretching across the world are all related to each other? What about a local burrito chain with a dozen locations? Is there some kind of database it references to connect these words? Well, yes, in fact, there is.

Hummingbird relies on a fundamental data layer of entities in order to organize the Web, although structured data isn’t a new thing for SEO. Google and Bing have been pushing webmasters to use Schema.org markup, a common format of structured data, to label entities and attributes within Web pages. Structured data can be used to mark up reviews, ratings, movies, TV shows, restaurants, apartment complexes, law firms, hotels, and even people. Google has been incorporating an increasing number of data elements into search results via rich snippets, including recipes, sports scores, and, most recently, TV episodes and showtimes.
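For anyone who hasn’t worked with structured data yet, here is a minimal sketch of what a Schema.org entity description can look like. The type and property names come from the public Schema.org vocabulary; the restaurant itself is made up, and JSON-LD is just one of the supported formats (microdata and RDFa are others).

import json

# "Restaurant", "PostalAddress", and "AggregateRating" are real Schema.org
# types; the business described here is fictional.
restaurant = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Burrito Co.",
    "servesCuisine": "Mexican",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Atlanta",
        "addressRegion": "GA",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "87",
    },
}

# Embedded in a page inside <script type="application/ld+json">...</script>,
# this gives a search engine an unambiguous description of the entity.
print(json.dumps(restaurant, indent=2))

Because each location of that fictional burrito chain can point at the same organization entity, a search engine no longer has to guess from matching words alone that the pages belong together.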

A year ago, Google’s Knowledge Graph was tracking 500 million entities and, more importantly, 3.5 billion relationships between those entities. The more Google expands its entity graph, the better it will be at combing through the Web and selecting the perfect content to match each search query. If you’re not already using structured data on your website, now’s as good a time as ever to escalate the priority of those stories in your development queue.

Is SEO Dead, Again?

With every search engine update, many ask the question, “Is SEO dead?” The answer depends on one’s approach to SEO. Are some “SEO” tactics dead? Probably. But, to be honest, they most likely weren’t helping very much in the first place. Buying links, link sculpting, hidden text, and keyword stuffing are not good ideas today, but there was a short period of time when they did “work,” if short-term success was the goal.

The way we approach SEO is sustainable. We focus on the user and what is best for connecting our clients’ websites with people who are looking for their content, services, and products. As SEOs, we are chasing the same goal as the search engines, and we forgo short-term tricks and secrets in favor of projects that support a complete long-term strategy for success. Google Hummingbird is a great thing for users and for our clients, and we look forward to future updates that continue to improve the way search engines organize the Web and improve user experiences.

by Jordan Silton, SEO Technology Lead