Refining Our Product Search Engine

In early 2013 I wrote a post about some major improvements to Detailed Image’s product search. The relatively robust autosuggest, synonym system, speed improvements, and automated sorting of results by popularity were massive advancements for us. Overall I thought our search engine was one of the better product search engines out there. However, it still had plenty of weaknesses, so a few weeks back I set out to make another round of improvements.

There’s huge untapped opportunity with search. It’s very important – our stats show that visitors who search are significantly more likely to buy – and most people suck at it. It can also be an endless project, with almost infinite possible improvements. We wanted to focus on fixing the majority of problems and then put a system in place to monitor the data regularly, with the ability to easily make changes without needing to dive into code each time. Classic Pareto Principle stuff. I try to apply the 80/20 rule to all projects, but search is particularly difficult because of the infinite scope.

The big hole for us was searches that returned 0 results when they should have returned something. We started by analyzing all of the queries that yielded 0 results since our new site launch in April 2013. We divided the group into two: products that we don’t carry and products that we do carry. The former is used to help determine what we should pick up in the future. The latter was what I homed in on. The list was long so I “narrowed” it down to about 100 changes that we could make to improve about 6,000 queries.

The one lucky thing about us (and most e-commerce sites) is that we “only” carry ~850 products. 850 may become 2,000 or 3,000 someday, but it probably won’t ever be 10,000. Knowing that, we’re able to attack individual queries directly as opposed to improving the algorithm. For instance, the biggest change I made was to make it easy to add a synonym to an item, all items within a brand, or all items within a category. This is primarily for misspellings and products that people refer to in a common-but-slightly-different way than we do. There aren’t that many of these, so it’s easier to manually enter the aforementioned 100 or so changes than it is to try to build a robust dictionary solution for misspellings. Besides, I’m not even sure there’s a dictionary on the planet that can appropriately correct all of the varieties of spellings of Meguiar’s.
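Conceptually, the synonym table works something like this (a rough sketch in Python; the terms, targets, and function names are made-up examples to illustrate the idea, not our actual code or data):

```python
# Rough sketch of a hand-entered synonym table. Each entry maps a
# misspelling or common variant to a single item, a whole brand, or a
# category. All terms and targets below are made-up examples.
SYNONYMS = {
    "meguires": ("brand", "Meguiar's"),
    "maguires": ("brand", "Meguiar's"),
    "clay bars": ("category", "Clay & Lubricants"),
}

def resolve_synonym(query):
    """Return (scope, target) for the first synonym found in the query."""
    q = query.lower()
    for term, target in SYNONYMS.items():
        if term in q:
            return target
    return None

print(resolve_synonym("Meguires M205"))  # -> ('brand', "Meguiar's")
```

The nice part is that adding a new entry is a data change, not a code change, so anyone on the team can fix a bad query in minutes.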

So now when someone searches Meguires M205 they’re shown the proper products instead of nothing because of the misspelling:

Detailed Image Search Example

I also spent some time improving what shows when 0 results are returned. First, we try to match any keywords in the query to anything in our database. For instance, someone recently made a typo and typed in Mazi Suds when they really meant Maxi Suds. Instead of showing nothing, we show all products with suds in the name, including what they were looking for:

Detailed Image Search Example

If we really can’t find anything to show them, we still show some of our best selling products:

Detailed Image Search Example
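Put together, the fallback chain looks something like this (a simplified Python sketch; the product list, best-seller list, and functions are illustrative stand-ins for our real catalog and search backend):

```python
# Sketch of the zero-result fallback chain. All data structures here are
# hypothetical stand-ins for the real catalog and search backend.
PRODUCTS = ["Maxi Suds II", "Honeydew Snow Foam", "Meguiar's M205"]
BEST_SELLERS = ["Meguiar's M205", "Maxi Suds II"]

def search(query):
    """Placeholder for the normal search: match the full query string."""
    q = query.lower()
    return [p for p in PRODUCTS if q in p.lower()]

def search_with_fallbacks(query):
    # 1. Normal search.
    results = search(query)
    if results:
        return results
    # 2. Try matching any individual keyword ("Mazi Suds" -> "suds").
    for token in query.lower().split():
        matches = [p for p in PRODUCTS if token in p.lower()]
        if matches:
            return matches
    # 3. Nothing matched at all: show best sellers instead of an empty page.
    return BEST_SELLERS

print(search_with_fallbacks("Mazi Suds"))  # -> ['Maxi Suds II']
```

The customer never hits a dead end: a typo degrades to a keyword match, and a completely unrecognized query degrades to our best sellers.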

All of this is relatively subtle, but it will affect thousands of queries from potential customers each month. It has the potential to make a drastic difference in our conversion rate. I think it’s also important that we set up a routine review of all of this. I created an internal wiki page for how to add synonyms to improve results for a specific query. Next time, instead of taking a week or two, it will take me a few hours.

Beyond studying 0-result queries, there are plenty of other things we can and should do eventually to improve the relevance of results, both on our product search and the contextual searches of our Guide and Ask-a-Pro blog. For now though, this is one more nice step in the right direction.

2 comments on Refining Our Product Search Engine

  1. Tim says:

    Interesting how behind the scenes “simple” things can really become complicated! Have you thought of doing a more curated search? For example when someone searches “towel” or “wax” have a feature area for a product that has knowingly high conversion (and profit), split search traffic with AB testing, run the Experiment in GA and see if you have higher conversion and AOV with one over the other? Might be an interesting test to run, you know, when you find the time 🙂 I always get goosebumps when I see this done well.

    Another tactic I encounter all the time that drives me nuts is Searchandizing, you search for a “blue widget” and a red MP3 player comes up because the retailer is paid by the manufacturer to be listed any time someone searches for a color. Creates a questionable user experience, that said as a retailer it may be worth a shot. Will a manufacturer pay you to be listed first when someone does a broad segment search like “polish”???

    • Adam McFarland says:

      Two really interesting ideas Tim. Right now it sorts by what we call “quality score”, an automated score based upon how many units of the item we’ve sold recently, so we sort of show the best products first by default. However I like the curated idea for really popular searches. That would def be worth split-testing at some pt.

      I hadn’t thought of the “searchandizing” idea. I’m not sure if a manufacturer would pay or not, and if it would be enough money to make it worth our time for the possible trade-off in user experience (also probably worth a split-test). It is worth keeping in mind for future negotiations w/a manufacturer. Maybe it’s something we could leverage to get a better deal.
