Back in the early days of search engines, Google wasn’t yet the dominant force it is today. Instead, there were several contenders for that throne, all vying for market share.

Some of those first true search engines, which used a crawler to index pages that could then be searched for keywords, included the likes of WebCrawler, Lycos, Infoseek, AltaVista and Excite.

Eventually, MSN Search launched in 1998, which would later become Bing, and around the same time, Google was made available to the public.

With multiple different search engines all producing unique results, users sometimes faced a dilemma over which was the best search engine for the job.

Meta-search engines aimed to overcome this by searching several of the leading search engines all at once and aggregating the results into a single page, theoretically with the most relevant results at the top.

It wasn’t foolproof – search engine technology was very new and assigning an accurate relevance score to results from different search engines was difficult. However, it gave users a starting point from which they could explore a comprehensive set of search results.
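The aggregation step described above can be sketched in code. The snippet below is an illustrative example only, not how any particular meta-search engine actually worked: it merges ranked result lists using reciprocal rank fusion, one simple way to combine rankings when the engines' own relevance scores aren't directly comparable. The engine names and URLs are hypothetical.

```python
from collections import defaultdict

def fuse_results(ranked_lists, k=60):
    """Merge several ranked lists of URLs into one, scoring each URL
    by the sum of 1 / (k + rank) across the lists it appears in."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] += 1.0 / (k + rank)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from three engines for the same query
engine_a = ["example.com/1", "example.com/2", "example.com/3"]
engine_b = ["example.com/2", "example.com/4", "example.com/1"]
engine_c = ["example.com/2", "example.com/1", "example.com/5"]

merged = fuse_results([engine_a, engine_b, engine_c])
print(merged[0])  # example.com/2 ranks highest: it is near the top of all three lists
```

A result that appears high up on several engines ends up near the top of the merged page, which is exactly the "most relevant results at the top" behaviour the early meta-search engines aimed for.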

So why aren’t we using meta-search engines anymore? It’s down to the way search has evolved, the way we want to find information, and the way the individual search engines have started to offer the kind of capabilities those early meta-search engines trailblazed.

Why did meta-search engines matter?

Before the first search engines emerged, there were two main ways to find websites on the internet:

  1. Type the full and correct URL into the address bar of your browser.
  2. Use an online directory like the early Yahoo! Directory and browse by category.

Free-text search engines meant you could type a word into the search box and get results containing that word, usually along with a percentage relevance score to help you see how well the page answered your search query.

But those primitive search engines had vastly different coverage of the web: on a page of ten results, each engine might give you two or three good ones, and they would be completely different from the good results on another engine.

Meta-search engines overcame this by aggregating the results of many search engines into a single page, theoretically giving you one place to access the best results of all the major search engines of the time.

They also had privacy benefits. By using the meta-search engine as a proxy for your search, you could avoid having your IP address and search history tracked by the search engines themselves (although it’s not clear how many people were concerned about this at the time).

A meta-search engine results page was like a dashboard to explore many different results relating to a particular subject, including reference sources, e-commerce pages, and brand names that happened to contain the specific word you searched for.

Where did meta-search engines go?

It’s easy to argue that the meta-search engines provided better results in the late 1990s and early 2000s.

Aggregating multiple search engines’ results gave them a large overall index. At that time, web users were happy to explore multiple links to build up a picture of the information they wanted.

Over time, that changed. The pure search engines increased their capabilities, moving beyond manual URL submission to start proactively crawling the web looking for newly published pages discovered via hyperlinks.

As the search engine indexes became more comprehensive, they increasingly offered equivalent coverage to one another, so meta-search didn’t fill in so many of the gaps.

We have also become impatient. Even the slight delay involved in querying a dozen search engines and aggregating their results can feel like an age on a superfast broadband connection.

Many people now want the information they seek to be given to them in a single result, ideally ranked at #1. The old sense of curiosity and exploration has gone from searching the web, which is why SEO and a first-page organic search ranking are so important nowadays.

The way the search engines present results has also moved closer to what those early meta-search engines offered. Google’s Knowledge Panel, for instance, summarises key information about certain entities when you search for them, doing the hard exploratory work for you.

And while privacy concerns are increasingly on the agenda, it’s too late for some of the early meta-search engines, which were arguably ahead of their time in offering a way for users to access all of the main search engines of their era, without exposing their IP address.

So while some of their functionality was underappreciated and some of it became relatively obsolete, the rest has influenced the ‘universal’ search results pages we take for granted today – we have a lot to thank those first meta-search engines for.

Are there any meta-search engines left?

Some of the old meta-search engines remain in operation, while other more recent ones have launched too.

WebCrawler grew to become a meta-search engine and is now owned by System1, which also owns the meta-search engine Excite (undergoing a redesign at the time of writing) and industry-specific curated portals like Carsgenius and Health n Well.

Also owned by System1 is Startpage.com, described as the world’s most private search engine. It’s a Europe-based search engine that doesn’t log or track personal data from users.

There’s also DuckDuckGo, another search engine that promises to protect users’ privacy and so lives up to that aspect of the original meta-search engines, despite not being founded until 2008.

Finally, eTools.ch searches 16 major search engines including Google, Bing, Yahoo! and DuckDuckGo, as well as Wikipedia and some specifically Swiss sources, all in an average of under 1.3 seconds.

The pure-play search engines clearly dominate the market – notably Google – but privacy-centric alternatives are finding favour too.

Whether we will ever see a full resurgence of meta-search engines seems doubtful. Still, in specific industries and for privacy-protected results, they continue to serve an important purpose for many internet users on a daily basis.