The internet connects billions of people across the globe. It also hosts at least 1.5 billion websites (the exact number is virtually impossible to pin down). But when one of those billions of internet users wants to find one of those billions of websites, they overwhelmingly turn to just one site to find their way: Google.
Google dominates the search engine space (its rivals, including Yahoo and Microsoft’s Bing, split less than 10 percent of the search market, according to some estimates). And the search engine space is arguably the most important space on the internet. Search engines are where typical users start their journeys online, finding the news stories, products, and other web content they’re looking for.
So how does Google do it? How does one website keep tabs on so many others, and how is it able to produce relevant results to virtually limitless user queries?
And how can we use the answers to those questions to answer this one: How can we make Google offer up our website — the one we want our fans and customers to see — when users ask things relevant to what we do?
Crawling the web
Before Google can offer a list of relevant websites to users, it needs to know which websites are out there. That’s a monumental task; in fact, most of the internet sits in the “deep web,” a term for sites not indexed by Google or other search engines. (The deep web, by the way, is not the same thing as the “dark web,” the chunk of the internet whose sites can only be accessed through encrypted browsers. The dark web is part of the deep web, but most of the deep web is innocuous stuff, such as the portions of websites that sit behind login pages and the forgotten homemade sites of amateur web designers.)
To find all of the websites it wants to index, Google uses computer programs called “spiders.” Spiders earn their nickname by “crawling” the web, moving purposefully from link to link to create a constantly updating picture of what the internet looks like. Along the way, spiders check out the information that websites share with them: They look at the text on the page, the tags and captions used for images, the HTML tags and metadata, and more.
The spiders give Google a whole lot of information about each page. With that information, Google’s algorithm can decide how well a page should rank for a given query.
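To make the idea concrete, here is a minimal sketch of how a crawler works under the hood. This is not Google’s actual spider, of course; it’s a toy that follows the same pattern: fetch a page, pull out its links, and queue the links it hasn’t seen yet. The tiny in-memory “web” at the bottom is a made-up stand-in for real HTTP requests.

```python
from html.parser import HTMLParser
from collections import deque

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags: the links a spider follows."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start, fetch):
    """Breadth-first crawl: fetch a page, record its links, queue new ones."""
    seen = {start}
    queue = deque([start])
    index = {}  # page -> list of outgoing links
    while queue:
        url = queue.popleft()
        parser = LinkExtractor()
        parser.feed(fetch(url))
        index[url] = parser.links
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

# A made-up three-page "web" instead of real network requests.
fake_web = {
    "a.example": '<a href="b.example">B</a>',
    "b.example": '<a href="a.example">A</a> <a href="c.example">C</a>',
    "c.example": "no links here",
}
index = crawl("a.example", fake_web.get)
```

Starting from one page, the crawler discovers all three: the link structure itself is the map.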
Inside the algorithm
Google’s algorithm is a carefully guarded trade secret, but we do know some of the things it takes into account. We know that Google cares about links (the same links that its spiders use to crawl and catalog the web). If Google spots a link pointing to your website, it will think more highly of your site (provided, that is, that Google respects the page the link sits on). Google will also use the text of the link to infer something about what your website offers.
Google cares about the text on your page, too. Google’s algorithm quite reasonably assumes that your website’s text will talk about whatever your website is for: Restaurant websites will have menus on them, auto repair shops will talk about car parts, and so on. If your website has the right “keywords” (the same keywords that your would-be customers are Googling), that’s a good thing.
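One simple, measurable version of this idea is keyword density: what fraction of the words on a page match a given keyword. The snippet below is a rough sketch of that one metric only (real search ranking is far more sophisticated), and the sample sentence is invented for illustration.

```python
import re

def keyword_density(text, keyword):
    """Fraction of words in the text that match the keyword, case-insensitive."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

page = "Fast brake repair and brake pad replacement for every make of car."
density = keyword_density(page, "brake")  # 2 matches out of 12 words
```

A repair shop’s page naturally mentions “brake” often, so a user Googling that word is a plausible match, which is the assumption the algorithm is making.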
Catering to Google’s algorithm is what search engine optimization (“SEO” for short) is all about, explain the experts at LinkGraph.io. But it’s not something you’ll want to tackle on your own. Google’s top-secret and ever-changing algorithm makes for a difficult moving target, and you’ll want full-time professionals to make sure your website hits the right keyword density and earns high-authority links from other sites.
Google is a big, complex thing, but our own role here is simple: We want Google to rank us as high as possible. The key to that is SEO.