Changelog

In a human-readable format. For a technical changelog aimed at robots, see the GitHub releases page. Check our blog for more detailed updates.

0.7.9

  • expose redis click-through store TTL to config
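
A minimal sketch of how this might look in the config file; the exact key names and layout below are assumptions, so check the configuration reference for your version:

state:
  type: redis
  host: localhost
  port: 6379

core:
  clickthrough:
    ttl: 90d    # hypothetical key: how long click-through events are kept in Redis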

0.7.8

  • a bugfix release: slash/semicolon in key/value fields, Kinesis retries

0.7.7

  • a bugfix release: a race condition in cache invalidation, a native memory leak in the booster

0.7.6

0.7.5

0.7.4

  • support for rocksdb-backed file storage

0.7.3

  • a bugfix release

0.7.2

  • Support for kv-granular Redis TTLs

  • Support HF tokenizers for biencoders: now you can run a multi-lingual E5 model in Metarank!
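
For the HF tokenizer support, here is a sketch of a bi-encoder field_match feature pointing at a multilingual E5 model; the field names and the method layout are assumptions, so consult the semantic similarity docs for the exact format:

- name: query_title_match
  type: field_match
  rankingField: ranking.query
  itemField: item.title
  method:
    type: bi-encoder
    model: intfloat/multilingual-e5-base   # any HF model with a compatible tokenizer
    dim: 768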

0.7.1

0.7.0

0.6.4

  • a minor bugfix release

0.6.3

  • diversity feature extractor

  • scoped rate feature (see the sketch after this list)

  • fixed an important bug in dataset preparation (symptom: the NDCG reported by the booster was higher than the NDCG computed after training); prediction quality should improve significantly.
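
For the scoped rate feature mentioned above, a sketch of what the config might look like; the scope option and its values are assumptions here:

- name: ctr
  type: rate
  top: click
  bottom: impression
  bucket: 24h
  periods: [7, 30]
  scope: item    # hypothetical: compute the rate per item rather than globally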

0.6.2

  • print NDCG before and after reranking

  • print memory usage statistics after training

0.6.1

  • fix for a crash when using the file-based click-through store

0.6.0

Upgrading: note that the Redis state format has a non-backwards-compatible change, so you need to re-import your data when upgrading.

  • recommendations support for similar-items and trending-items models (see the sketch after this list).

  • Local caching for state; the import should be 2x-3x faster.
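
A sketch of how the two recommendation models might be defined; the model type names and options are assumptions, so see the recommendations docs for the real layout:

models:
  similar:
    type: als            # similar-items model; the type name is an assumption
    interactions: [click]
  trending:
    type: trending
    weights:
      - interaction: click
        decay: 0.8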

0.5.16

  • expose the JAVA_OPTS env variable to control JVM heap size (see the example after this list).

  • fixed a bug when a click referenced a non-existent item.
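
A minimal docker-compose sketch showing how JAVA_OPTS can be passed in; the image tag and heap sizes are just placeholders:

services:
  metarank:
    image: metarank/metarank:0.5.16
    environment:
      JAVA_OPTS: "-Xms1g -Xmx2g"   # forwarded to the JVM inside the container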

0.5.15

  • cache.maxSize for Redis now disables client-side caching altogether, which makes Metarank compatible with GCP Memorystore for Redis (see the sketch after this list).

  • fixed a memory leak in the click-through joining buffer.

  • lowered memory allocation pressure in the interacted_with feature.
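
A sketch of disabling the client-side cache, assuming that a zero maxSize turns it off; the surrounding Redis config layout is approximate:

state:
  type: redis
  host: memorystore.internal   # e.g. a GCP Memorystore endpoint
  port: 6379
  cache:
    maxSize: 0    # disables client-side caching entirely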

0.5.14

  • interacted_with feature now supports string[] fields (see the sketch after this list)

  • fixed a notorious bug in the local-file click-through store.
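
A sketch of an interacted_with feature pointing at a string[] field; the field and option names here are assumptions, so check the feature extractor docs for the exact format:

- name: clicked_tags
  type: interacted_with
  interaction: click
  field: item.tags    # a string[] item field now works here
  scope: user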

0.5.13

0.5.12

0.5.11

0.5.10

0.5.9

0.5.8

  • bugfix: added an explicit sync on the /feedback API call

  • bugfix: config decoding error for field_match over terms

  • bugfix: version detection inside Docker was broken

  • bugfix: the wrong network interface was bound inside Docker

0.5.7

0.5.6

0.5.5

Notable features:

0.5.4

Most notable improvements:

0.5.3

Highlights of this release are:

0.5.2

Highlights of this release are:

0.5.1

Highlights of this release are:

  • Flink is removed. As a result, only memory and Redis persistence modes are supported now.

  • The configuration file now has an updated structure and is not compatible with the previous format.

  • The CLI is updated; most of the options have moved into the configuration file.

    • We have updated the validate mode of the CLI, so you can use it to validate your data and configuration.

0.4.0

Highlights of this release are:

  • Kubernetes support: it's now possible to have a production-ready Metarank deployment in K8s

  • Kinesis source: on par with Kafka and Pulsar

  • Custom connector options pass-through

Kubernetes support

Metarank is a multi-stage and multi-component system, and now it's possible to get it deployed in minutes inside a Kubernetes cluster:

  • Inference API is just a regular Deployment (see the manifest sketch at the end of this section)

  • Bootstrap, Upload and Update jobs can be run both locally (to simplify things for small datasets) and inside the cluster in distributed mode.

  • Job management is done with flink-kubernetes-operator

See this doc section for details.
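
As an illustration of the first point above, a minimal Deployment manifest for the inference API might look like this; the image tag, the port and the omitted config mounting are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metarank-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metarank-api
  template:
    metadata:
      labels:
        app: metarank-api
    spec:
      containers:
        - name: metarank
          image: metarank/metarank:0.4.0
          ports:
            - containerPort: 8080   # assumed default API port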

Kinesis source

Kinesis Streams can also be used as an event source. It still has a couple of drawbacks compared with Kafka/Pulsar: for example, due to its maximum 7-day data retention it cannot be used effectively as permanent storage for historical events. But it's still possible to pair it with AWS Firehose writing events to S3, so real-time events come from Kinesis while historical events are offloaded to S3.

Check out the corresponding part of docs for details and examples.

Custom connector options pass-through

As we're using Flink's connector library for pulling data from Kafka/Kinesis/Pulsar, there are a ton of custom options you can tune for each connector. It's impossible to expose all of them directly, so the connector config now has a free-form options section, allowing you to set any supported option for the underlying connector.

Example of forcing Kinesis to use the EFO consumer:

type: kinesis
topic: events
region: us-east-1
offset: latest
options: 
  flink.stream.recordpublisher: EFO 
  flink.stream.efo.consumername: metarank 

See this doc section for details.
