Couchbase SDK 2: bulk read operations, how to fail over to replicas - java

We are in the process of refactoring a benchmark tool, migrating it from the legacy Couchbase Client to the new Couchbase SDK 2.
The previous version has the following "bulk get" logic to retrieve keys in bulk; if reading from the master fails, it fails over to reading from the replicas.
Legacy code:
List<Map.Entry<String, OperationFuture<CASValue<JsonNode>>>> futures =
        new java.util.ArrayList<>(keys.size());
for (String key : keys) {
    futures.add(new AbstractMap.SimpleImmutableEntry<>(key, client.asyncGets(key, transcoder)));
}
Map<String, Long> casValues = new java.util.HashMap<>(keys.size(), 1f);
for (Map.Entry<String, OperationFuture<CASValue<JsonNode>>> e : futures) {
    String key = e.getKey();
    OperationFuture<CASValue<JsonNode>> future = e.getValue();
    try {
        CASValue<JsonNode> casVal = future.get();
        if (checkStatus(future.getStatus(), errIfNotFound) == OK) {
            result.put(key, JsonByteIterator.asMap(casVal.getValue()));
            casValues.put(key, casVal.getCas());
        } else {
            return ERROR;
        }
    } catch (RuntimeException te) {
        if (te.getCause() instanceof CheckedOperationTimeoutException) { // READ FROM REPLICA
            log.warn("Reading from replica as reading from master has timed out.");
            // This is a timeout on a read, so try to read from a replica
            ReplicaGetFuture<JsonNode> futureReplica = client.asyncGetFromReplica(key, transcoder);
            result.put(key, JsonByteIterator.asMap(futureReplica.get()));
        } else {
            throw te;
        }
    }
}
Using the new Couchbase SDK 2
According to the new Couchbase SDK 2 docs,
http://docs.couchbase.com/developer/java-2.0/documents-bulk.html
I have the following logic to retrieve documents in bulk, but I am not quite sure where to add the failover mechanism to read from the replicas using
bucket.async().getFromReplica(key, ReplicaMode.ALL);
List<RawJsonDocument> rawDocs = idObs.flatMap((key) -> {
    Observable<RawJsonDocument> rawJsonObs = bucket.async().get(key, RawJsonDocument.class);
    return rawJsonObs;
}).toList()
  .toBlocking()
  .single();
How can I implement this "read from replica" failover mechanism with the new RxJava-based Couchbase SDK?

I think I found the answer:
Observable<RawJsonDocument> rawDocs = idObs.flatMap((key) -> {
    System.out.println("key " + key);
    Observable<RawJsonDocument> rawJsonObs = bucket.async().get(key, RawJsonDocument.class);
    return rawJsonObs.onErrorResumeNext(new Func1<Throwable, Observable<RawJsonDocument>>() {
        @Override
        public Observable<RawJsonDocument> call(Throwable t1) {
            if (t1.getCause() instanceof TimeoutException) { // we have a timeout
                return bucket.async().getFromReplica(key, ReplicaMode.FIRST, RawJsonDocument.class).first();
            }
            throw OnErrorThrowable.from(t1);
        }
    });
});
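For completeness, here is a minimal sketch of the whole bulk pipeline with the replica failover folded in. This is an illustration under assumptions, not confirmed code: it presumes the idObs and bucket setup from the snippets above, RxJava 1 lambdas, and that the timeout surfaces as java.util.concurrent.TimeoutException (directly or as a cause).
// Sketch only: bulk get with per-key replica failover on timeout.
// Assumes the same SDK 2 / RxJava 1 imports as the snippets above.
List<RawJsonDocument> rawDocs = idObs
    .flatMap((String key) -> bucket.async()
        .get(key, RawJsonDocument.class)
        .onErrorResumeNext(t -> {
            // Fall back to a replica only on timeouts; propagate anything else.
            if (t instanceof TimeoutException || t.getCause() instanceof TimeoutException) {
                return bucket.async()
                    .getFromReplica(key, ReplicaMode.FIRST, RawJsonDocument.class)
                    .first();
            }
            return Observable.error(t);
        }))
    .toList()
    .toBlocking()
    .single();
Note that ReplicaMode.FIRST asks only the first replica, while ReplicaMode.ALL (as in the question) queries the active node and all replicas, and can therefore emit more than one document per key.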

Related

RMapCache onCreated listener is called multiple times

In a Spring Boot application I am trying to add data to Redis using Redisson.
Below is the sample code for adding data to Redis.
RMapCache<String, String> map = redisson.getMapCache("cacheName");
if (value != null && map != null) {
    map.addListener(new EntryCreatedListener<String, String>() {
        @Override
        public void onCreated(EntryEvent<String, String> event) {
            RKeys rkeys = redisson.getKeys();
            long ttl = rkeys.remainTimeToLive(event.getKey());
            System.out.println("on created key " + event.getKey() + ", ttl " + ttl
                    + ", map ttl " + map.remainTimeToLive());
        }
    });
    map.putIfAbsent(key, value, ttl, TimeUnit.SECONDS);
}
The Redisson version used is 3.13.1.
Output: the print statement is printed multiple times.
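The thread has no accepted answer, but one plausible cause (an assumption, not confirmed in the question) is that the snippet above registers a fresh listener on every write, so a single created entry fires once per registered copy. A sketch that registers the listener once, outside the write path:
// Hypothetical restructuring (not from the question): register the listener once,
// e.g. at startup, instead of on every put, so each created entry fires one callback.
// Assumes the same Redisson imports and the key/value/ttl variables as above.
RMapCache<String, String> map = redisson.getMapCache("cacheName");
int listenerId = map.addListener(new EntryCreatedListener<String, String>() {
    @Override
    public void onCreated(EntryEvent<String, String> event) {
        System.out.println("on created key " + event.getKey());
    }
});
// Write path: only the put, no listener registration.
if (value != null) {
    map.putIfAbsent(key, value, ttl, TimeUnit.SECONDS);
}
// Later, if the listener is no longer needed: map.removeListener(listenerId);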

How to merge multiple vertx web client responses

I am new to vertx and async programming.
I have 2 verticles communicating via an event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {

    public static final String GET_USEARCH_DOCS = "get.usearch.docs";

    @Autowired
    private Integer defaultPort;

    private void sendSearchRequest(RoutingContext routingContext) {
        final JsonObject requestMessage = routingContext.getBodyAsJson();
        final EventBus eventBus = vertx.eventBus();
        eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
            if (reply.succeeded()) {
                Logger.info("Search Result = " + reply.result().body());
                routingContext.response()
                        .putHeader("content-type", "application/json")
                        .setStatusCode(200)
                        .end((String) reply.result().body());
            } else {
                Logger.info("Document Search Request cannot be processed");
                routingContext.response()
                        .setStatusCode(500)
                        .end();
            }
        });
    }

    @Override
    public void start() throws Exception {
        Logger.info("Starting the Gateway service (Event Sender) verticle");
        // Create a Router
        Router router = Router.router(vertx);
        // Added a body handler so we can process json messages via the event bus
        router.route().handler(BodyHandler.create());
        // Mount the handler for incoming requests
        // Find documents
        router.post("/api/search/docs/*").handler(this::sendSearchRequest);
        // Create an HTTP Server using default options
        HttpServer server = vertx.createHttpServer();
        // Handle every request using the router
        server.requestHandler(router)
                // start listening on port 8083
                .listen(config().getInteger("http.port", 8083))
                .onSuccess(msg -> {
                    Logger.info("*************** Search Gateway Server started on "
                            + server.actualPort() + " *************");
                });
    }

    @Override
    public void stop() {
        // housekeeping
    }
}
Below is the target verticle that should be making the multiple web client calls and merging the responses:
@Component
public class SolrCloudVerticle extends AbstractVerticle {

    public static final String GET_USEARCH_DOCS = "get.usearch.docs";

    @Autowired
    private SearchRepository searchRepositoryService;

    @Override
    public void start() throws Exception {
        Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
        super.start();
        ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
                .setConfig(new JsonObject().put("path", "conf/config.json"));
        ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
                .addStore(fileStore);
        ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
        configRetriever.getConfig(ar -> {
            if (ar.succeeded()) {
                JsonObject configJson = ar.result();
                EventBus eventBus = vertx.eventBus();
                eventBus.<JsonObject>consumer(GET_USEARCH_DOCS)
                        .handler(getDocumentService(searchRepositoryService, configJson));
                Logger.info("Completed search service event processing");
            } else {
                Logger.error("Failed to retrieve the config");
            }
        });
    }

    private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService,
                                                            JsonObject configJson) {
        return requestMessage -> vertx.<String>executeBlocking(future -> {
            try {
                // I need to incorporate the logic here that adds futures to a list
                // and composes the CompositeFuture.
                /*
                // Below is my logic to populate the future list
                WebClient client = WebClient.create(vertx);
                List<Future> futureList = new ArrayList<>();
                for (Object collection : searchRepositoryService.findAllCollections(configJson)
                        .getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
                    Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
                            .expect(ResponsePredicate.SC_OK)
                            .sendJsonObject(requestMessage.body())
                            .map(HttpResponse::bodyAsString).recover(error -> {
                                System.out.println(error.getMessage());
                                return Future.succeededFuture();
                            });
                    futureList.add(future1);
                }
                */
                // Below is the CompositeFuture logic, but the logic and construct do not make sense
                // to me. What goes as the first and second argument of the executeBlocking method?
                /*
                CompositeFuture.join(futureList)
                        .onSuccess(result -> {
                            result.list().forEach(x -> {
                                if (x != null) {
                                    requestMessage.reply(result.result());
                                }
                            });
                        })
                        .onFailure(error -> {
                            System.out.println("We should not fail");
                        });
                */
                future.complete("DAO returns a Json String");
            } catch (Exception e) {
                future.fail(e);
            }
        }, result -> {
            if (result.succeeded()) {
                requestMessage.reply(result.result());
            } else {
                requestMessage.reply(result.cause().toString());
            }
        });
    }
}
I was able to use org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as opposed to using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vertx, but unfortunately Vertx CompositeFuture did not work here:
// This method supplies the parameter for the future.complete(..) line in
// getDocumentService(SearchRepository, JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
                                                         JsonObject configJson,
                                                         JsonObject requestMessage)
        throws SolrServerException, IOException {
    List<JsonObject> searchResultList = new ArrayList<>();
    for (Object collection : searchRepositoryService.findAllCollections(configJson)
            .getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
        searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
    }
    return aggregateMultiCollectionSearchResults(searchResultList);
}

public String doSearchPerCollection(String collection, String message) {
    org.springframework.web.reactive.function.client.WebClient client =
            org.springframework.web.reactive.function.client.WebClient.create();
    return client.post()
            .uri("http://127.0.0.1:8983/solr/" + collection + "/query")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(message))
            .retrieve()
            .bodyToMono(String.class)
            .block();
}

private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList) {
    // TODO: Search result aggregation
    return searchList;
}
My use case is that the second verticle should make multiple Vertx web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Please, any help on how my code above could be adapted to handle this use case?
I am looking at Vertx CompositeFuture, but have made no headway and seen no useful example yet.
What you are looking for can be done with Future coordination and a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
    if (ar.succeeded()) {
        // All succeeded
    } else {
        // All completed and at least one failed
    }
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join takes several future arguments (up to 6) and returns a future that is succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of them is failed.
Using join you wait for all futures to complete; the issue is that if one of them fails, you cannot obtain the responses from the others, because the CompositeFuture itself will have failed. To avoid this, add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures; in it, log the error and pass an empty response so that the future does not fail.
Here is a short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
        .map(HttpResponse::bodyAsString).recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        });
Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
        .map(HttpResponse::bodyAsString).recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        });
CompositeFuture.join(response2, response1)
        .onSuccess(result -> {
            result.list().forEach(x -> {
                if (x != null) {
                    System.out.println(x);
                }
            });
        })
        .onFailure(error -> {
            System.out.println("We should not fail");
        });
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures; if you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), which accepts a list with any number of futures.
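Tying this back to the original use case, here is a sketch of the list-based variant inside the Solr verticle. It is an illustration under assumptions: the host, port, and the collections iterable are placeholders, and the merge at the end is deliberately naive.
// Sketch: join an arbitrary number of per-collection requests, each guarded by
// recover() so that one failed call does not fail the whole composition.
// Assumes the io.vertx.ext.web.client and io.vertx.core imports from the snippets above.
WebClient client = WebClient.create(vertx);
List<Future> futureList = new ArrayList<>();
for (String collection : collections) { // "collections" is a hypothetical placeholder
    futureList.add(client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
            .expect(ResponsePredicate.SC_OK)
            .sendJsonObject(requestMessage.body())
            .map(HttpResponse::bodyAsString)
            .recover(error -> {
                System.out.println(error.getMessage()); // log and keep going
                return Future.succeededFuture();        // null entry keeps the join alive
            }));
}
CompositeFuture.join(futureList).onSuccess(result -> {
    List<String> merged = new ArrayList<>();
    result.list().forEach(x -> {
        if (x != null) {
            merged.add((String) x);
        }
    });
    requestMessage.reply(String.join(",", merged)); // naive merge, for illustration only
});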

Hazelcast 5.1 Nearcache Server - Client (no connection found)

This relates to: Hazelcast Nearcache Server - Client Spring Boot
I have the same issue, but with Hazelcast 5.1 and Java 17:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getConnectionStrategyConfig()
        .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC)
        .getConnectionRetryConfig().setClusterConnectTimeoutMillis(Integer.MAX_VALUE);
// setClusterName and addNearCacheConfig are ClientConfig methods, so they are set separately
clientConfig.setClusterName("cluster_name")
        .addNearCacheConfig(new NearCacheConfig("countries"));
clientInstance = HazelcastClient.newHazelcastClient(clientConfig);
And usage:
var task = new TimerTask() {
    @Override
    public void run() {
        try {
            Map<Integer, Country> countries = clientInstance.getMap("countries");
            if (countries.isEmpty()) {
                System.out.println("Map countries is empty");
            } else {
                for (Integer key : countries.keySet()) {
                    System.out.println("Name: " + countries.get(key).title());
                }
            }
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
    }
};
timer.scheduleAtFixedRate(task, 0, TimeUnit.SECONDS.toMillis(5));
Country class is:
public record Country(Integer id, String title) implements Serializable {
}
Calling with the server active works fine from the client, but when I shut down the server I get:
No connection found to cluster
Has something changed in version 5, or is my config wrong?
Thanks
I found that not every method can be used with a near cache: isEmpty() and keySet() will always return "no connection" because these methods are not proxy-object methods. So I may only use the getAll() and get() methods here.
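A sketch of the polling task reworked to use only get()/getAll(), per the finding above. The known key set is an assumption (a hypothetical knownKeys), since discovering keys via keySet() needs a live cluster connection:
// Hypothetical rework: read through getAll() only, which the near cache can serve
// for already-cached entries. Assumes com.hazelcast.map.IMap is imported and that
// the keys are known up front.
Set<Integer> knownKeys = Set.of(1, 2, 3);
IMap<Integer, Country> countries = clientInstance.getMap("countries");
try {
    Map<Integer, Country> cached = countries.getAll(knownKeys);
    cached.forEach((id, country) -> System.out.println("Name: " + country.title()));
} catch (Exception ex) {
    System.err.println(ex.getMessage());
}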

How to do a paged request - Spring Boot

In my system (a Spring Boot project) I need to process the people my query returns in batches of 350, paging through them and sending each batch. I looked for many ways to do this and found plenty based on JPA, but I am using jOOQ, so I asked the tool's community for help and they guided me to the limit and offset options.
This is the method where I do the search, set up my DTO, and in the end return the list of people.
public static ArrayList getAllPeople(Connection connection) {
    ArrayList<peopleDto> peopleList = new ArrayList<>();
    DSLContext ctx = null;
    peopleDto peopleDto;
    try {
        ctx = DSL.using(connection, SQLDialect.MYSQL);
        Result<Record> result = ctx.select()
                .from(people)
                .orderBy(people.GNUM)
                .offset(0)
                .limit(350)
                .fetch();
        for (Record r : result) {
            peopleDto = new peopleDto();
            peopleDto.setpeopleID(r.getValue(people.GNUM));
            peopleDto.setName(r.get(people.SNAME));
            peopleDto.setRM(r.get(people.SRM));
            peopleDto.setRG(r.get(people.SRG));
            peopleDto.setCertidaoLivro(r.get(people.SCERT));
            peopleDto.setCertidaoDistrito(r.get(people.SCERTD));
            peopleList.add(peopleDto);
        }
    } catch (Exception e) {
        log.error(e.toString());
    } finally {
        if (ctx != null) {
            ctx.close();
        }
    }
    return peopleList;
}
Without the limits, this search returns 1,400 people.
The question is: how do I send the limit, then return to this method and continue where I left off, until I reach the total number of records?
Feed your method with a Pageable parameter and return a Page from your method. Something along the lines of ...
public static Page<peopleDto> getAllPeople(Connection connection, Pageable pageable) {
    ArrayList<peopleDto> peopleList = new ArrayList<>();
    DSLContext ctx = null;
    peopleDto peopleDto;
    try {
        ctx = DSL.using(connection, SQLDialect.MYSQL);
        Result<Record> result = ctx.select()
                .from(people)
                .orderBy(people.GNUM)
                .offset(pageable.getOffset())
                .limit(pageable.getPageSize())
                .fetch();
        for (Record r : result) {
            peopleDto = new peopleDto();
            peopleDto.setpeopleID(r.getValue(people.GNUM));
            peopleDto.setName(r.get(people.SNAME));
            peopleDto.setRM(r.get(people.SRM));
            peopleDto.setRG(r.get(people.SRG));
            peopleDto.setCertidaoLivro(r.get(people.SCERT));
            peopleDto.setCertidaoDistrito(r.get(people.SCERTD));
            peopleList.add(peopleDto);
        }
    } catch (Exception e) {
        log.error(e.toString());
    } finally {
        if (ctx != null) {
            ctx.close();
        }
    }
    return new PageImpl<>(peopleList, pageable, hereyoushouldQueryTheTotalItemCount());
}
Now you can do something with those 350 users. With the help of the returned page you can iterate over the remaining people:
if (page.hasNext()) {
    getAllPeople(connection, page.nextPageable());
}
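For the "until I reach the total number of records" part, a minimal driving loop might look like the following sketch. Pageable, Page, and PageRequest come from org.springframework.data.domain, and process(...) is a hypothetical placeholder for whatever sends each batch:
// Sketch: walk every page of 350 until the result set is exhausted.
Pageable pageable = PageRequest.of(0, 350);
Page<peopleDto> page;
do {
    page = getAllPeople(connection, pageable);
    process(page.getContent());     // send this batch of up to 350 people (hypothetical helper)
    pageable = page.nextPageable(); // advance the offset by one page
} while (page.hasNext());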
Inspired by this article Sorting and Pagination with Spring and Jooq

Is Guava's ListenableFuture still useful/ up to date (after Java 8)?

Since Java 8, some features from Guava have become obsolete (for example string joining, optional values, precondition checks, futures, etc.).
In the ListenableFuture documentation (currently not updated for over a year) they say:
"We strongly advise that you always use ListenableFuture instead of Future in all of your code, because..."
I'm using Guava (and Cassandra) in an old project, and my question is: do the Java 8 standard libraries already have something that makes ListenableFuture obsolete, or is it still the best Future alternative?
Thanks.
I agree with pedro (https://stackoverflow.com/users/3371051/pedro): CompletableFuture is built into Java, while ListenableFuture depends on Guava. It is easy to convert a ListenableFuture to a CompletableFuture:
Create a CompletableFuture and call complete() or completeExceptionally() in a ListenableFuture callback (see the adapter sketch below).
Use thenCompose to chain a CompletableFuture with a ListenableFuture.
Use allOf to replace Futures.allAsList.
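As a compact illustration of the first point, a minimal adapter sketch (the helper name toCompletable is mine, not from the driver; it assumes Guava's three-argument Futures.addCallback overload with an explicit executor):
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

public final class FutureAdapters {
    // Bridge a Guava ListenableFuture into a CompletableFuture: complete on
    // success, completeExceptionally on failure.
    public static <T> CompletableFuture<T> toCompletable(ListenableFuture<T> lf, Executor executor) {
        CompletableFuture<T> cf = new CompletableFuture<>();
        Futures.addCallback(lf, new FutureCallback<T>() {
            @Override public void onSuccess(T result) { cf.complete(result); }
            @Override public void onFailure(Throwable t) { cf.completeExceptionally(t); }
        }, executor);
        return cf;
    }
}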
Like the following tested change I made for com.datastax driver-core 3.11.0:
CompletableFuture<Void> initAsync() {
CompletableFuture<Void> ret = new CompletableFuture<>();
if (factory.isShutdown) {
ret.completeExceptionally(
new ConnectionException(endPoint, "Connection factory is shut down"));
return ret;
}
ProtocolVersion protocolVersion =
factory.protocolVersion == null
? ProtocolVersion.NEWEST_SUPPORTED
: factory.protocolVersion;
try {
Bootstrap bootstrap = factory.newBootstrap();
ProtocolOptions protocolOptions = factory.configuration.getProtocolOptions();
bootstrap.handler(
new Initializer(
this,
protocolVersion,
protocolOptions.getCompression().compressor(),
protocolOptions.getSSLOptions(),
factory.configuration.getPoolingOptions().getHeartbeatIntervalSeconds(),
factory.configuration.getNettyOptions(),
factory.configuration.getCodecRegistry(),
factory.configuration.getMetricsOptions().isEnabled()
? factory.manager.metrics
: null));
ChannelFuture future = bootstrap.connect(endPoint.resolve());
writer.incrementAndGet();
future.addListener(
new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
writer.decrementAndGet();
// Note: future.channel() can be null in some error cases, so we need to guard against
// it in the rest of the code below.
channel = future.channel();
if (isClosed() && channel != null) {
channel
.close()
.addListener(
new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
ret.completeExceptionally(
new TransportException(
Connection.this.endPoint,
"Connection closed during initialization."));
}
});
} else {
if (channel != null) {
Connection.this.factory.allChannels.add(channel);
}
if (!future.isSuccess()) {
if (logger.isDebugEnabled())
logger.debug(
String.format(
"%s Error connecting to %s%s",
Connection.this,
Connection.this.endPoint,
extractMessage(future.cause())));
ret.completeExceptionally(
new TransportException(
Connection.this.endPoint, "Cannot connect", future.cause()));
} else {
assert channel != null;
logger.debug(
"{} Connection established, initializing transport", Connection.this);
channel.closeFuture().addListener(new ChannelCloseListener());
ret.complete(null);
}
}
}
});
} catch (RuntimeException e) {
closeAsync().force();
throw e;
}
Executor initExecutor =
factory.manager.configuration.getPoolingOptions().getInitializationExecutor();
return ret.thenCompose(
Void -> {
CompletableFuture<Void> ret2 = new CompletableFuture<>();
ProtocolOptions protocolOptions = factory.configuration.getProtocolOptions();
Future startupResponseFuture =
write(
new Requests.Startup(
protocolOptions.getCompression(), protocolOptions.isNoCompact()));
ListenableFuture<Void> channelReadyFuture =
GuavaCompatibility.INSTANCE.transformAsync(
startupResponseFuture,
onStartupResponse(protocolVersion, initExecutor),
initExecutor);
GuavaCompatibility.INSTANCE.addCallback(
channelReadyFuture,
new FutureCallback<Void>() {
@Override
public void onSuccess(Void result) {
ret2.complete(null);
}
@Override
public void onFailure(Throwable t) {
// Make sure the connection gets properly closed.
if (t instanceof ClusterNameMismatchException
|| t instanceof UnsupportedProtocolVersionException) {
// Just propagate
closeAsync().force();
ret2.completeExceptionally(t);
} else {
// Defunct to ensure that the error will be signaled (marking the host down)
Throwable e =
(t instanceof ConnectionException
|| t instanceof DriverException
|| t instanceof InterruptedException
|| t instanceof Error)
? t
: new ConnectionException(
Connection.this.endPoint,
String.format(
"Unexpected error during transport initialization (%s)", t),
t);
ret2.completeExceptionally(defunct(e));
}
// Ensure the connection gets closed if the caller cancels the returned future.
if (!isClosed()) {
closeAsync().force();
}
}
});
return ret2;
});
}
private CompletableFuture<Void> createPools(Collection<Host> hosts) {
List<CompletableFuture<?>> futures = Lists.newArrayListWithCapacity(hosts.size());
for (Host host : hosts)
if (host.state != Host.State.DOWN) futures.add(maybeAddPool(host, null));
return CompletableFuture.allOf(futures.toArray(new CompletableFuture[futures.size()]));
}
