On its 15th birthday, Google broke the news that its original search algorithm had been retired. No need to panic, Google still works. The new algorithm, named Hummingbird, is a combination of new and existing components with the overarching goal of better organizing the world's information. Hummingbird quickly delivers more relevant content based on user intent. Amit Singhal, Senior VP and Google Fellow, told Danny Sullivan that the last time the algorithm was rewritten to this extent was over a decade ago, in 2001.

Unlike past Google algorithm updates such as Florida, Panda, or Penguin, Hummingbird wasn't released in a single session. It was gradually rolled out over time, much like Caffeine. According to Google's press team, Hummingbird has actually been in action for a few months, leading industry experts to reasonably assume the Google search team has done extensive testing and analysis on the new algorithm. Because of the manner in which the algorithm was released, Hummingbird will not be a single inflection point in search traffic, but rather a gradual change. It also aligns with recent increases in "not provided" traffic (coincidence? I think not), making it even more difficult to pinpoint whether a site "won" or "lost" with this update.

The other major difference between Hummingbird and other updates is that Hummingbird is more of a replacement of the algorithm than a specific filter or adjustment. Hummingbird improves the way Google interprets queries and matches them to search results.

Query Intent

Search engines look at a variety of factors to determine the intent behind user search queries. The most visible factor tends to be location. Searching for "restaurants" will return dramatically different results based on the location of the user. Users with IP addresses or network connections in Atlanta, Chicago, and Boston will see remarkably different search results pages.

Another factor that Google looks at is historical search behavior. Suppose the user loves Zappos. They often search for "zappos" or "zappos shoes" and regularly interact with the site. If that same user searched instead for "running shoes" or "dress shoes," Google would likely display a relevant page from the Zappos website in the search results based on the user's history of interaction with Zappos.com. Search engines may also want to deliver search results for women's or men's running shoes based on the user's historical search behavior.

With the Hummingbird update, Google has continued to refine how it processes queries based on search context. Users will eventually be able to string together a series of queries to narrow down a search. For example, you could start with "Where is the Eiffel Tower?," then proceed with "How tall is it?" and "How much are tours?" Each query depends on context from the previous question, and the result is conversational search.

Relevant Content

With Google's Hummingbird algorithm, content is still king. However, relevant content is increasingly important. Hummingbird goes beyond simply examining the words in a search and matching them to Web pages that include those same keywords. When a user enters a query, Google now processes it to identify the intent and meaning behind it.

Google representatives have mentioned a few times that approximately 20% of the search queries entered each day are brand new. At one point, it was common for a webmaster to create content optimized for long-tail keywords. Whether it was the 20% of queries users had not yet searched, or the other 30–50% that get very few searches each month, a given Web page was previously able to rank quite easily for a few targeted long-tail search terms. If a site employed long-tail optimization tactics sitewide, it could potentially receive tens of thousands of visitors each month from long-tail searches alone.

With the introduction of Hummingbird, webmasters would be wise to stop chasing long-tail synonyms for each general search query and instead focus on variants or similar questions. Instead of creating separate pages for "What's an easy way to relieve allergy symptoms?," "How are allergies treated?," and "How do you treat allergy symptoms?," a website could have a single page about relieving and treating allergy symptoms and additional pages describing alternative treatments or treatment side effects (and other similar, but distinct, topics).

It's no longer enough to create pages relevant to a specific keyword phrase or search query. Content should be relevant to search intent, and the surrounding architecture should also support that theme. Relevance can extend beyond the page itself to a specific section of a site and the domain as a whole. Long-tail optimization is still key, but in a different way than before.

Structured Data & Entity Search

In order for a search engine to understand the intent behind search queries and filter through billions of Web pages to return the most relevant results possible, it needs a common ground for comparison. How can a search engine understand that a chain of hotels stretching across the world are all related to each other? What about a local burrito chain with a dozen locations? Is there some kind of database it references to connect these words? Well, yes, in fact, there is.

Hummingbird relies on a fundamental data layer of entities in order to organize the Web, although structured data isn't a new thing for SEO. Google and Bing have been pushing webmasters to use Schema.org markup, a common format of structured data, in order to label entities and attributes within Web pages. Structured data can be used to mark up reviews, ratings, movies, TV shows, restaurants, apartment complexes, law firms, hotels, and even people. Google has been incorporating an increasing number of data elements into search results via rich snippets, including recipes, sports scores, and, most recently, TV episodes and showtimes.
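As a rough sketch of what this looks like in practice, here is Schema.org microdata for a hypothetical restaurant page (the business name, address, and rating values below are invented for illustration; the `itemtype` URLs and `itemprop` names come from the Schema.org vocabulary):

```html
<!-- Marks the page's content as a Restaurant entity so search
     engines can connect it to other locations of the same chain
     and display rich snippets (e.g., star ratings) in results. -->
<div itemscope itemtype="http://schema.org/Restaurant">
  <h1 itemprop="name">Example Burrito Co.</h1>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Peachtree St NE</span>,
    <span itemprop="addressLocality">Atlanta</span>,
    <span itemprop="addressRegion">GA</span>
  </div>
  <span itemprop="servesCuisine">Mexican</span>
  <!-- Aggregate review data, eligible for rich-snippet star display -->
  <div itemprop="aggregateRating" itemscope
       itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.5</span>/5 based on
    <span itemprop="reviewCount">87</span> reviews
  </div>
</div>
```

The visible text stays exactly as a user would read it; the `itemscope`/`itemprop` attributes simply label each piece of content as an attribute of a known entity type, giving search engines that common ground for comparison.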

A year ago, Google's Knowledge Graph was tracking 500 million entities and, more importantly, 3.5 billion relationships between those entities. The more Google expands its entity graph, the better it will be at combing through the Web and selecting the perfect content to match each search query. If you're not already using structured data on your website, now's as good a time as ever to escalate the priority of those stories in your development queue.

Is SEO Dead, Again?

With every search engine update, many ask the question, "Is SEO dead?" The answer depends on one's approach to SEO. Are some "SEO" tactics dead? Probably. But, to be honest, they most likely weren't helping very much in the first place. Buying links, link sculpting, hidden text, and keyword stuffing are not good ideas today, but there was a short period of time when they did "work," if short-term success was the goal.

The way we approach SEO is sustainable. We focus on the user and what is best for connecting our clients' websites with people who are looking for their content, services, and products. As SEOs, we are chasing the same goal as search engines and forgo short-term tricks or secrets in favor of projects that support a complete long-term strategy for success. Google Hummingbird is a great thing for users and for our clients, and we look forward to future updates that continue to improve the way search engines organize the Web and improve user experiences.

by Jordan Silton, SEO Technology Lead