Similar items
The similar recommendation model can suggest items that other visitors also liked, based on the item you're currently viewing.
Common use-cases for this model are:
you-may-also-like recommendations on the item page: the context of the recommendation is the single item you're viewing now.
also-purchased widget on the cart page: the context of the recommendation is the contents of your cart.
There are two important parameters in the configuration:
factors
: how many hidden parameters the model tries to compute. The more, the better, but slower. Usually defined within the range of 50-500.
iterations
: how many factor refinement attempts are made. The more, the better, but slower. The normal range is 50-300.
Rule of thumb: start with low values for both parameters, then increase them gradually until training time becomes completely unreasonable.
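For context, a model definition combining these two parameters might look like the following sketch. The exact configuration keys (the `als` type name and the `interactions` list) are assumptions for illustration, not the authoritative schema; consult the Metarank configuration reference for the exact format.

```yaml
models:
  similar:                  # model name, used when calling the recommendation endpoint
    type: als               # assumed type identifier for this model
    interactions: [click]   # which feedback events to train on (assumption)
    factors: 100            # number of hidden parameters per embedding
    iterations: 100         # number of factor refinement passes
```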
See request & response formats in the .
Metarank uses a variation of a collaborative filtering algorithm for recommendations, based on the method by X. He, H. Zhang, M.Y. Kan and T.S. Chua.
The ALS family of algorithms decomposes a sparse matrix of user-item interactions into a set of smaller dense vectors of implicit user and item features (also known as user and item embeddings). The cool thing about these embeddings is that similar items will have similar embeddings!
So Metarank does the following:
computes item embeddings.
pre-builds an index for fast lookups of similar embeddings.
during inference (when you call the endpoint), it makes a k-NN index lookup of similar items.
Main pros and cons of such an approach:
pros: fast even for giant inventories, and simple to implement.
cons: lower precision compared to neural network-based methods, and the recommendations are not personalized.
There is ongoing work in the Metarank project to implement NN-based methods and to make the current ALS implementation personalized.
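The pipeline above (factor the interaction matrix into embeddings, then look up neighbours among the item embeddings) can be sketched in plain NumPy. This is an illustrative toy, not Metarank's actual implementation: the function names are hypothetical, the solver is a bare-bones regularized ALS, and a real system would use an approximate nearest-neighbour index instead of the brute-force cosine lookup shown here.

```python
import numpy as np

def train_als(interactions, factors=8, iterations=50, reg=0.1, seed=42):
    """Alternating least squares: factor the user-item matrix R (n_users x n_items)
    into user embeddings U and item embeddings V so that R is approximated by U @ V.T."""
    rng = np.random.default_rng(seed)
    n_users, n_items = interactions.shape
    U = rng.normal(scale=0.1, size=(n_users, factors))
    V = rng.normal(scale=0.1, size=(n_items, factors))
    eye = reg * np.eye(factors)
    for _ in range(iterations):
        # Fix V and solve for all user embeddings in closed form, then swap roles.
        U = np.linalg.solve(V.T @ V + eye, V.T @ interactions.T).T
        V = np.linalg.solve(U.T @ U + eye, U.T @ interactions).T
    return U, V

def similar_items(V, item_id, k=2):
    """k-NN lookup over item embeddings by cosine similarity
    (brute force here; a production system would use an ANN index)."""
    norms = V / np.linalg.norm(V, axis=1, keepdims=True)
    scores = norms @ norms[item_id]
    scores[item_id] = -np.inf  # exclude the query item itself
    return np.argsort(-scores)[:k]

# Toy interaction matrix: rows = users, columns = items.
# Items 0 and 1 are clicked by the same users, so their embeddings end up close.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
U, V = train_als(R)
print(similar_items(V, item_id=0, k=1))  # → [1]
```

In the toy matrix, items 0 and 1 share exactly the same users, so ALS assigns them near-identical embeddings and the k-NN lookup returns item 1 as the closest neighbour of item 0.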