API

Metarank's API provides an easy way to integrate Metarank with your applications.

Feedback

API Endpoint: /feedback

Method: POST

The feedback endpoint receives four types of events: item, user, interaction, and ranking.

Sending these events is crucial for personalization to operate properly and return relevant results.

Payload format

Events and their payload formats are described in Supported events.

Response

A JSON message with the following fields:

  • accepted: how many events from the submitted batch were processed

  • status: "ok" if no errors were found

  • tookMillis: how many milliseconds the batch processing took

  • updated: how many underlying ranking features were recomputed

Example:

{"accepted":1,"status":"ok","tookMillis":3,"updated":0}

Example

$> curl http://localhost:8080/feedback -d '{
    "event": "ranking",
    "id": "id1",
    "items": [
        {"id":"72998"}, {"id":"589"}, {"id":"134130"}, {"id":"5459"}, 
        {"id":"1917"}, {"id":"2571"}, {"id":"1527"}, {"id":"97752"}, 
        {"id":"1270"}, {"id":"1580"}, {"id":"109487"}, {"id":"79132"}
    ],
    "user": "alice",
    "session": "alice1",
    "timestamp": 1661431894711
}'
*   Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /feedback HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.86.0
> Accept: */*
> Content-Length: 354
> Content-Type: application/x-www-form-urlencoded
> 
< HTTP/1.1 200 OK
< Date: Mon, 28 Nov 2022 13:09:44 GMT
< Content-Length: 55
< 
{"accepted":1,"status":"ok","tookMillis":3,"updated":0}
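The same request can be made programmatically. The sketch below builds a ranking event in the shape shown above and POSTs it to /feedback; the base URL assumes a local Metarank instance, and the helper names (build_ranking_event, send_feedback) are illustrative, not part of the API.

```python
import json
import urllib.request

METARANK_URL = "http://localhost:8080"  # assumption: local instance

def build_ranking_event(event_id, user, session, item_ids, timestamp):
    """Assemble a ranking event in the shape shown in the example above."""
    return {
        "event": "ranking",
        "id": event_id,
        "items": [{"id": i} for i in item_ids],
        "user": user,
        "session": session,
        "timestamp": timestamp,
    }

def send_feedback(event):
    """POST a single event to the /feedback endpoint and decode the reply."""
    req = urllib.request.Request(
        METARANK_URL + "/feedback",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. {"accepted": 1, "status": "ok", ...}
```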

Train

API Endpoint: /train/<model name>

Method: POST

The train endpoint runs model training on persisted click-through data. You can call it at any time to re-train the model. See the Model retraining how-to for setting up periodic retraining.

Payload: none

Response

A JSON response with the following fields:

  • features: per-feature model weights

  • iterations: per-iteration timings and test/train metric values observed during training

  • sizeBytes: model size in bytes

Example:

{
  "features": [
    {
      "name": "vote_avg",
      "weight": 629.0
    },
    {
      "name": "profile",
      "weight": [
        1202.0,
        373.0,
        627.0,
        145.0
      ]
    }
  ],
  "iterations": [
    {
      "id": 0,
      "millis": 274,
      "testMetric": 0.5787768851757988,
      "trainMetric": 0.593075630098252
    },
    {
      "id": 1,
      "millis": 104,
      "testMetric": 0.5903952545996365,
      "trainMetric": 0.6083208266384491
    }
  ],
  "sizeBytes": 843792
}
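A client may want a flat view of the feature weights from this response. The sketch below assumes only the JSON shape shown above: scalar weights stay as-is, and vector weights (as produced for list-valued features like profile) are expanded with an index suffix.

```python
def flatten_feature_weights(train_response):
    """Flatten the /train response's features array into {name: weight}.

    Vector weights are expanded to name.0, name.1, ... so every entry
    maps a single string key to a single float.
    """
    flat = {}
    for feature in train_response["features"]:
        weight = feature["weight"]
        if isinstance(weight, list):
            for i, w in enumerate(weight):
                flat[f"{feature['name']}.{i}"] = w
        else:
            flat[feature["name"]] = weight
    return flat
```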

Ranking

API Endpoint: /rank/<model name>

Method: POST

Querystring Parameters:

  • explain: boolean, adds the calculated feature values for each item to the response.

The ranking endpoint does the real work of personalizing the items passed to it. You need to explicitly specify which model to invoke.

Payload format

{
  "id": "81f46c34-a4bb-469c-8708-f8127cd67d27",
  "timestamp": "1599391467000",
  "user": "user1",
  "session": "session1",
  "fields": [ 
    {"name": "query", "value": "jeans"},
    {"name": "source", "value": "search"}
  ],
  "items": [ 
    {"id": "item3", "fields": [{"name": "relevancy", "value": 2.0}]},
    {"id": "item1", "fields": [{"name": "relevancy", "value": 1.0}]},
    {"id": "item2", "fields": [{"name": "relevancy", "value": 0.1}]}
  ]
}
  • id: a request identifier later used to join ranking and interaction events. Pass the same value to the feedback endpoint in the corresponding interaction events.

  • user: an optional unique visitor identifier.

  • session: an optional session identifier, a single visitor may have multiple sessions.

  • timestamp: when this event happened (see the timestamp format description for the supported formats).

  • fields: an optional array of extra fields that you can use in your model, for more information refer to Supported events.

  • items: which particular items were displayed to the visitor.

  • items.id: id of the content item. Should match the item property from item metadata event.

  • items.fields: an optional set of per-item fields, for example BM25 scores coming from ES. See how to use BM25 scores in ranking.

  • items.label: an optional field for explicit relevance judgments.
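Assembling a rank request from these fields can be sketched as follows; the helper name and the choice of a random UUID for id are illustrative assumptions, not part of the API.

```python
import time
import uuid

def build_rank_request(user, session, items, fields=None):
    """Build a /rank payload in the shape described above.

    items is a list of (item_id, per_item_fields) pairs, where
    per_item_fields is a list like [{"name": "relevancy", "value": 2.0}].
    """
    return {
        "id": str(uuid.uuid4()),           # reused later in interaction events
        "timestamp": int(time.time() * 1000),
        "user": user,
        "session": session,
        "fields": fields or [],
        "items": [
            {"id": item_id, "fields": item_fields}
            for item_id, item_fields in items
        ],
    }
```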

Response format

{
  "took": 5,
  "items": [
    {"item": "item2", "score":  2.0, "features": [{"name": "popularity", "value": 10 }]},
    {"item": "item3", "score":  1.0, "features": [{"name": "popularity", "value": 5 }]},
    {"item": "item1", "score":  0.5, "features": [{"name": "popularity", "value": 2 }]}
  ]
}
  • took: request processing time in milliseconds

  • items.item: id of the content item. It will match the item property from the item metadata event.

  • items.score: the score calculated by the personalization model

  • items.features: an array of feature values calculated by the personalization model. This field is only returned when the explain query parameter is set to true. The structure of each entry varies depending on the feature type.

Recommendations

API Endpoint: /recommend/<model-name>

Method: POST

The recommend endpoint returns items produced by one of the recommendation model types.

Payload format:

{
  "count": 10,
  "user": "alice1",
  "items": ["item4"]
}

Where:

  • count: the number of items to recommend.

  • user: optional, the current user id.

  • items: the recommendation context. For example, it can be a single item for "similar-items" recommendations, or multiple items at once for "cart-style" recommendations.
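A payload builder with the basic validation implied above might look like this; the helper name is illustrative, not part of the API.

```python
def build_recommend_request(count, items, user=None):
    """Build a /recommend payload: count items, the context item ids,
    and an optional user id."""
    if count <= 0:
        raise ValueError("count must be positive")
    payload = {"count": count, "items": list(items)}
    if user is not None:
        payload["user"] = user
    return payload
```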

Response format

{
  "took": 5,
  "items": [
    {"item": "item2", "score":  2.0},
    {"item": "item3", "score":  1.0},
    {"item": "item1", "score":  0.5}
  ]
}
  • took: request processing time in milliseconds

  • items.item: id of the content item.

  • items.score: the score calculated by the recommender model.

Inference with LLMs

Metarank provides an API for quick-and-dirty LLM inference and text encoding. It can be useful for implementing hybrid search applications, where you need an actual embedding vector for a query.

LLM bi-encoders

API Endpoint: /inference/encoder/<name>

Method: POST

Encodes a batch of strings into embedding vectors using the configured model <name>.

Payload format

{
  "texts": [
    "Berlin is a capital city",
    "My cat is fast"
  ]
}

Response format

{
  "took": 5,
  "embeddings": [
    [1, 2, 3, 4],
    [0, 7, 2, 1]
  ]
}
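In a hybrid search setup, the returned embeddings are typically compared with cosine similarity. A minimal sketch, assuming only the vector format shown above:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```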

LLM cross-encoders

API Endpoint: /inference/cross/<name>

Method: POST

Encodes a batch of query-document pairs into similarity scores using the configured model <name>.

Payload format

{
  "input": [
    {"query": "cat", "text": "my cat is fast"},
    {"query": "cat", "text": "it has V8 engine"}
  ]
}

Response format

{
  "took": 5,
  "scores": [0.25, 0.01]
}
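The scores come back in the same order as the input pairs, so re-ranking candidate documents is a matter of pairing them up by position and sorting. A sketch, assuming only the response shape above:

```python
def rerank(texts, scores):
    """Return texts sorted by descending cross-encoder score.

    texts and scores are position-aligned, as in the /inference/cross
    request and response above.
    """
    return [t for _, t in sorted(zip(scores, texts), key=lambda p: -p[0])]
```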

Prometheus metrics

API Endpoint: /metrics

Method: GET

Dumps application and JVM metrics in the Prometheus text format.

Example

> GET /metrics HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.86.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Mon, 28 Nov 2022 13:30:41 GMT
< Transfer-Encoding: chunked
< 
# HELP metarank_feedback_events_total Number of feedback events received
# TYPE metarank_feedback_events_total counter
metarank_feedback_events_total 58441.0
# HELP metarank_rank_requests_total Number of /rank requests
# TYPE metarank_rank_requests_total counter
metarank_rank_requests_total{model="xgboost",} 5.0
# HELP metarank_rank_latency_seconds rank endpoint latency
# TYPE metarank_rank_latency_seconds summary
metarank_rank_latency_seconds{model="xgboost",quantile="0.5",} 0.011451508
metarank_rank_latency_seconds{model="xgboost",quantile="0.8",} 0.014340056
metarank_rank_latency_seconds{model="xgboost",quantile="0.9",} 0.119447575
metarank_rank_latency_seconds{model="xgboost",quantile="0.95",} 0.119447575
metarank_rank_latency_seconds{model="xgboost",quantile="0.98",} 0.119447575
metarank_rank_latency_seconds{model="xgboost",quantile="0.99",} 0.119447575
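For ad-hoc monitoring without a Prometheus server, the dump can be parsed directly. A minimal sketch that handles the sample-value lines shown above (it skips HELP/TYPE comments and treats the last space-separated token as the value):

```python
def parse_metrics(text):
    """Parse a Prometheus text-format dump into {metric_name: value}.

    The metric name key includes any label set, e.g.
    'metarank_rank_requests_total{model="xgboost",}'.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics
```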
