Understanding Redis pipelining in Lettuce - Java

We are evaluating Redis clients, choosing between Jedis and Lettuce. One of the features we are looking at is pipelining commands.
Behaviour in Jedis:
We simply call sync() on the pipeline object to send all the commands to Redis. The commands are batched together by the client and a single request is made to Redis.
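For reference, the Jedis behaviour described above looks roughly like this (a minimal sketch; the host, port, and key names are placeholders):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class JedisPipelineExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // placeholder address
            Pipeline p = jedis.pipelined();
            // Commands are queued client-side; nothing is sent yet.
            Response<String> setReply = p.set("key1", "value1");
            Response<String> getReply = p.get("key1");
            // sync() writes all queued commands in one batch and reads the replies.
            p.sync();
            System.out.println(getReply.get());
        }
    }
}
```

Responses are only available (via Response.get()) after sync() has run; accessing them earlier throws an exception.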
How do we achieve the same in Lettuce?
Do we disable auto-flush and call flushCommands(), similar to sync() in Jedis?
With auto-flush enabled, is pipelining implicit? If so, when does Lettuce decide to flush the commands? Is there any configuration to tune this behaviour?
Any help or references regarding this are much appreciated.

You can read my answer to another question, which has a bit more detail, here — but tl;dr:
The sync interface does not pipeline, but both the async and reactive interfaces do.
Auto-flushing will pipeline, but it will write commands individually to the socket. You will generally perform better if you flush manually because multiple commands are written to the socket at once.
In both cases (auto vs. manual flushing) you can send multiple requests before awaiting the results.
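To illustrate the manual-flushing approach described above, a minimal sketch using Lettuce's async API (the connection URI, key names, and timeout are placeholders):

```java
import io.lettuce.core.LettuceFutures;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class LettucePipelineExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URI
        try (StatefulRedisConnection<String, String> conn = client.connect()) {
            RedisAsyncCommands<String, String> async = conn.async();
            async.setAutoFlushCommands(false); // queue commands instead of writing each one

            List<RedisFuture<?>> futures = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                futures.add(async.set("key:" + i, "value:" + i));
            }

            async.flushCommands(); // write all queued commands to the socket at once
            LettuceFutures.awaitAll(5, TimeUnit.SECONDS,
                    futures.toArray(new RedisFuture[0]));
        } finally {
            client.shutdown();
        }
    }
}
```

Note that disabling auto-flush affects the whole connection, so a connection in this mode should not be shared with code that expects commands to be sent immediately.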

Related

When is it safe to retry Redis command on error when using Lettuce?

I'm using Lettuce Redis client for JVM to build a queue backed by a Redis List.
Ideally it would behave like an in-memory queue but since the network interaction is involved this is not possible.
There is a section on error handling in the Lettuce docs, and in my case I would like to retry failed commands on error. The problem is that the exception hierarchy is not very fine-grained, and I'm not sure how to deal with the following issues:
how to determine whether a failed command can be retried? There are a number of reasons a command might fail indefinitely, in which case retrying would lead to an infinite loop: the current Redis version doesn't support the command syntax used, the key already exists and is of an incompatible type, etc.
can I rely on Lettuce to always reconnect in case of unreliable network? Is there a possibility that in some cases I should not retry on some RedisException or NativeIoException but recreate the Redis client instance or even restart the whole application?
is there a way to know whether a failed command was actually executed by Redis, so that retrying will not lead to duplication or data loss (or can this at least be determined in some cases from the exception class)? This may be a fundamental issue of exactly-once delivery, and as far as I know Redis doesn't provide any means to deal with it (unlike Kafka, for example), but maybe there is some established practice for this problem?
This problem looks like something everybody has to deal with but there doesn't seem to be a lot of information regarding this.

How to implement Redis-pipeline-like behavior in Aerospike

Can anyone please suggest how to implement/use Redis-pipeline-like behavior in the Aerospike Java client?
Redis is a single-threaded database with a simple request/response protocol. Since every command must be processed one by one, and each request has to have a response back, this can add up to a lot of latency if you have lots of operations to do. Pipelining is a way to send multiple commands at once, have the server process all of them, then get all the results back in a batch.
Aerospike is multi-threaded with its own custom wire protocol that can run multiple commands in parallel over the same connection without any special support. The official drivers handle sending commands as efficiently as possible.
Aerospike does have something called Multiple Operations, which means you can send multiple commands that act on the same key as one combined command. The Java (and other language) clients also support asynchronous operations, which should further increase concurrent performance in your code.
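A minimal sketch of the Multiple Operations API mentioned above (the server address, namespace, set, and bin names are all assumptions for illustration):

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Operation;
import com.aerospike.client.Record;

public class AerospikeOperateExample {
    public static void main(String[] args) {
        try (AerospikeClient client = new AerospikeClient("localhost", 3000)) { // placeholder address
            Key key = new Key("test", "demo", "counter"); // hypothetical namespace/set/key

            // Several operations on the same record, sent as one request
            // and applied atomically on the server:
            Record record = client.operate(null, key,
                    Operation.add(new Bin("hits", 1)),     // increment a counter bin
                    Operation.put(new Bin("note", "seen")), // write another bin
                    Operation.get("hits"));                 // read the result back in the same call
            System.out.println(record.getInt("hits"));
        }
    }
}
```

Unlike a Redis pipeline, operate() is limited to a single key; for many independent keys, the async client (or simply issuing commands from multiple threads) is the closer equivalent.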

Possible to know the publisher in Redis

I am building an Android application that requires me to subscribe to multiple channels. I am using Jedis 2.4.2 for connecting to a Redis server for the same. I am using a separate client for every channel. The problem is that since Jedis subscription is not thread-safe, I am having trouble unsubscribing. As a workaround, I was thinking of a server-side program that keeps listening for messages from different clients on a dedicated channel and kills all subscriptions on receiving a request from them. For this, I need to identify the publisher of the request. Is there any way to accomplish this or a simpler way to execute the unsubscription task?

vert.x replication over a cluster

I want to use Vert.x to implement a socket server.
I will be using a cluster, and I have one problem I cannot figure out. Say I create a ConcurrentMap to store the socket connections on one verticle, and this map is accessed by verticles on other nodes: what happens if the node running the verticle with the ConcurrentMap crashes? Obviously I would lose all connections in the ConcurrentMap. How would I replicate this ConcurrentMap so that a copy is always ready in case of a crash? I have looked over the documentation and there does not seem to be a solution for replication. The only solution I can think of is, whenever there is a new socket connection, to insert it into the ConcurrentMap and also into an in-memory Redis database every time. This seems like overkill, though, and recovery could take a long time if there are a lot of connections (millions). Is there any easier way?

Is there any way to read from one Netty channel only as fast as you can write to another?

We're experiencing an issue in LittleProxy where OutOfMemoryErrors pop up when reading from a fast server that LittleProxy is proxying access to, while writing to a slow client configured to use the proxy. The problem is that the data coming in from the server buffers up in memory faster than we can write it to the client. LittleProxy is just a simple HTTP proxy built atop Netty.
Is there any easy way to throttle the read from the remote server to be exactly the same speed as the client is able to read it?
See:
https://github.com/adamfisk/LittleProxy/issues/53
and
https://github.com/adamfisk/LittleProxy
You could have a look at the source code of org.jboss.netty.example.proxy.HexDumpProxyInboundHandler.
It sets the inbound channel's readable flag according to the outbound channel's status. Hope this helps.
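Roughly, the approach in that example looks like this (a Netty 3 sketch, since LittleProxy at the time was built on org.jboss.netty; the class name and field names here are made up):

```java
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Installed on the server-side (fast) channel; clientChannel is the
// slow channel the proxied data is relayed to.
public class ThrottlingRelayHandler extends SimpleChannelUpstreamHandler {

    private final Channel clientChannel;

    public ThrottlingRelayHandler(Channel clientChannel) {
        this.clientChannel = clientChannel;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        clientChannel.write(e.getMessage());
        // If the client's outbound buffer is saturated, stop reading
        // from the fast server until it drains.
        if (!clientChannel.isWritable()) {
            e.getChannel().setReadable(false);
        }
    }
}
```

In the HexDumpProxy example, a mirror-image handler on the client channel calls setReadable(true) on the server channel from channelInterestChanged() once the client becomes writable again, which resumes the read.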
