Visual search in SAP Hybris Commerce

We have explained why visual search in e-commerce is important. Before going into the implementation, let's recall the use case from a customer's perspective once again: you are walking down the street, see someone's outfit, snap a photo, and upload it to the web store of your favorite brand to search for similar products.

Let’s break the implementation into steps:

1. Search for visually similar products. What does "visually similar" mean? Does it cover only color, or also texture, structure, edges and so on?
2. Analyze what is in the image, i.e. which clothing items it contains.

LIRE for visual information retrieval

LIRE stands for Lucene Image Retrieval; it uses Lucene to store and query image feature values. It takes numeric image descriptors, which are mainly vectors or sets of vectors, and stores them inside a Lucene index as text. It comes with many image descriptor implementations, such as Color Layout, Pyramid Opponent Histogram, SIFT, SIMPLE, SURF and many more; depending on your requirements, different image features can be used. LIRE is used to extract the histogram (_hi), the hashes (_ha) and, if the feature supports it, the metric spaces (_ms) representation of an image. All of this is then stored in the index.
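To give a feel for what is extracted per image, here is a minimal sketch that runs one descriptor (Color Layout) through LIRE directly. It is an illustration under assumptions: the package names differ between LIRE versions, and the field suffixes simply mirror the _hi/_ha convention described above.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Arrays;
import javax.imageio.ImageIO;

import net.semanticmetadata.lire.imageanalysis.features.global.ColorLayout;
import net.semanticmetadata.lire.indexers.hashing.BitSampling;

public class LireFeatureSketch {
    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("product_300x300.jpg"));

        // Extract the Color Layout (cl) descriptor from the image.
        ColorLayout cl = new ColorLayout();
        cl.extract(image);

        // Histogram representation, the kind of value that ends up in a *_hi field.
        byte[] histogram = cl.getByteArrayRepresentation();

        // Locality-sensitive hashes for fast candidate lookup, as stored in a *_ha field.
        BitSampling.readHashFunctions();
        int[] hashes = BitSampling.generateHashes(cl.getFeatureVector());

        System.out.println("Histogram bytes: " + histogram.length);
        System.out.println("Hashes: " + Arrays.toString(hashes));
    }
}
```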

Indexing
To integrate with SAP Hybris Commerce, a new SolrIndexedProperty needs to be defined for the Solr indexed type, and the corresponding ValueResolver implemented. In this case, all 300×300 product images are processed, the cl, eh, jc, oh, ph, ac, ad, ce, fc, fo, jh and sc descriptors are used, and all three representations (hashes, histograms and metric spaces) are extracted.
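A value resolver for such a property could look roughly like the sketch below. It is not the exact implementation: the Hybris base class and addField signature are used as I remember them, and the extractLireFields helper is hypothetical shorthand for loading the 300×300 media and running the configured LIRE descriptors on it.

```java
import java.util.Collections;
import java.util.Map;

import de.hybris.platform.core.model.product.ProductModel;
import de.hybris.platform.solrfacetsearch.config.IndexedProperty;
import de.hybris.platform.solrfacetsearch.indexer.IndexerBatchContext;
import de.hybris.platform.solrfacetsearch.indexer.spi.InputDocument;
import de.hybris.platform.solrfacetsearch.provider.FieldValueProviderException;
import de.hybris.platform.solrfacetsearch.provider.impl.AbstractValueResolver;

/**
 * Sketch of a resolver that runs LIRE on the product's 300x300 image and
 * writes the histogram, hash and metric-space fields for each descriptor.
 */
public class LireImageValueResolver extends AbstractValueResolver<ProductModel, Object, Object> {

    @Override
    protected void addFieldValues(final InputDocument document, final IndexerBatchContext batchContext,
            final IndexedProperty indexedProperty, final ProductModel product,
            final ValueResolverContext<Object, Object> resolverContext) throws FieldValueProviderException {

        // Hypothetical helper: runs the configured descriptors (cl, eh, jc, ...)
        // and returns field name -> encoded value, e.g. "cl_hi", "cl_ha", "cl_ms".
        final Map<String, Object> lireFields = extractLireFields(product);

        for (final Map.Entry<String, Object> entry : lireFields.entrySet()) {
            document.addField(entry.getKey(), entry.getValue());
        }
    }

    private Map<String, Object> extractLireFields(final ProductModel product) {
        // Image loading + LIRE extraction would go here (see the LIRE sketch above).
        return Collections.emptyMap();
    }
}
```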

Searching

Now that we’ve indexed the features, it’s time to make them available to search.

One way to search for similar images is by using the LireRequestHandler, registered under the query type “/lireq”. It supports different kinds of queries, but the most important one returns images whose feature vector is similar to the one extracted from the uploaded image. The other way is to use function queries: lirefunc(arg1, arg2).
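A query against the /lireq handler could look roughly like the following SolrJ sketch. Treat the details as assumptions: the core name is made up, and the parameter names (field, url) follow the liresolr project's request handler and may differ between versions.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class LireQuerySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical Solr core name; in Hybris this would be the indexed type's core.
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/master_Product").build()) {

            SolrQuery query = new SolrQuery();
            // Route the request to the LIRE handler instead of the default /select.
            query.setRequestHandler("/lireq");
            // Hash field of the descriptor to search on, e.g. the Color Layout hashes.
            query.set("field", "cl_ha");
            // Image the customer uploaded; the handler extracts its feature vector.
            query.set("url", "http://example.com/uploads/outfit.jpg");
            query.setRows(20);

            QueryResponse response = solr.query(query);
            response.getResults().forEach(doc -> System.out.println(doc.getFieldValue("id")));
        }
    }
}
```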

To enable this, the new RequestHandler and the ValueSourceParser have to be registered in solrconfig.xml. As for the SAP Hybris Commerce implementation, a few data objects have to be extended, along with the corresponding populators and services.
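The registration in solrconfig.xml could look like the snippet below, following the liresolr project's documentation; the exact class names may vary depending on the liresolr version in use.

```xml
<!-- LIRE request handler for visual similarity queries (/lireq) -->
<requestHandler name="/lireq" class="net.semanticmetadata.lire.solr.LireRequestHandler" />

<!-- Function query parser enabling lirefunc(arg1, arg2) in sort/score expressions -->
<valueSourceParser name="lirefunc" class="net.semanticmetadata.lire.solr.LireValueSourceParser" />
```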

Each feature defines its own distance metric, and the metric spaces spanned by different features are not compatible, so using multiple features for one search is challenging: their relevance scores should not simply be added or subtracted. One possible approach is merging result lists, where each feature returns its own result list and the lists are then combined in some way. Another is filtering and ranking, where one feature is used to retrieve candidates and another to re-rank them. A third option is machine learning, using learning methods to fit weighting parameters and select appropriate dimensions. A sketch of the filter-and-re-rank approach is shown below.
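This is only an illustration of the idea, not the implementation used here: it assumes one descriptor (e.g. Color Layout via /lireq) already produced a candidate list, and re-ranks those candidates locally with a second descriptor (Edge Histogram) using LIRE's own distance function. The Candidate type is hypothetical.

```java
import java.awt.image.BufferedImage;
import java.util.Comparator;
import java.util.List;

import net.semanticmetadata.lire.imageanalysis.features.global.EdgeHistogram;

/**
 * Filter-and-re-rank sketch: candidates retrieved with one feature are
 * re-ordered by their distance to the query image under a second feature.
 */
public class ReRankSketch {

    // Hypothetical candidate type: product code plus the candidate's image.
    public record Candidate(String productCode, BufferedImage image) {}

    public static List<Candidate> reRank(BufferedImage queryImage, List<Candidate> candidates) {
        EdgeHistogram queryFeature = new EdgeHistogram();
        queryFeature.extract(queryImage);

        return candidates.stream()
                .sorted(Comparator.comparingDouble(c -> {
                    EdgeHistogram candidateFeature = new EdgeHistogram();
                    candidateFeature.extract(c.image());
                    // Smaller distance = more similar under the Edge Histogram metric.
                    return queryFeature.getDistance(candidateFeature);
                }))
                .toList();
    }
}
```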

Analyzing the image

It was relatively easy to train a model to tell which items can be seen in an image. But in the case where a customer uploads a photo of a whole outfit and wants to search only for a specific item, that is not enough: there is no information about where the wanted item is in the picture. There are not many open fashion datasets annotated with bounding boxes, and collecting and annotating images myself would take a long time, so I chose to use Algorithmia's DeepFashion instead. The only problem with it is the response time, which can be up to a minute.

This algorithm detects clothing items in images: it returns a list of the discovered clothing articles and annotates the input image with a bounding box for each found article.
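Calling it from Java could look roughly like this sketch with the Algorithmia Java client; the algorithm path and the input/output format are assumptions, so check the Algorithmia catalogue for the exact DeepFashion endpoint and its expected JSON.

```java
import com.algorithmia.Algorithmia;
import com.algorithmia.AlgorithmiaClient;
import com.algorithmia.algo.AlgoResponse;
import com.algorithmia.algo.Algorithm;

public class DeepFashionSketch {
    public static void main(String[] args) throws Exception {
        AlgorithmiaClient client = Algorithmia.client("YOUR_API_KEY");

        // Hypothetical algorithm path and version.
        Algorithm deepFashion = client.algo("algo://algorithmiahq/DeepFashion/1.3.0");

        // Hypothetical input format: an image URL wrapped in JSON.
        String input = "{\"image\": \"http://example.com/uploads/outfit.jpg\"}";
        AlgoResponse response = deepFashion.pipe(input);

        // The response lists the detected articles and their bounding boxes;
        // since this can take up to a minute, it should be called asynchronously.
        System.out.println(response.asJsonString());
    }
}
```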

Resource from hybris.com
