Aerospike java client failure on scanAll - java

I am using the following method to truncate data from an Aerospike namespace.set.bins:
// Setting LUT
val calendar = Calendar.getInstance()
calendar.setTimeInMillis(startTime + 1262304000000L) // uses CITRUSLEAF_EPOCH - see https://discuss.aerospike.com/t/how-to-use-view-and-calulate-last-update-time-lut-for-the-truncate-command/4330
logger.info(s"truncate($startTime = ${calendar.getTime}, durableDelete = $durableDelete) on ${config.toRecoverMap}")

// Define scan and write policies
val writePolicy = new WritePolicy()
val scanPolicy = new ScanPolicy()
writePolicy.durableDelete = durableDelete
scanPolicy.filterExp = Exp.build(Exp.le(Exp.lastUpdate(), Exp.`val`(calendar)))

// Scan all records such that LUT <= startTime
config.toRecoverMap.flatMap { case (namespace, mapOfSetsToBins) =>
  for ((set, bins) <- mapOfSetsToBins) yield {
    val recordCount = new AtomicInteger(0)
    client.scanAll(scanPolicy, namespace, set, new ScanCallback() {
      override def scanCallback(key: Key, record: Record): Unit = {
        // Only nullify bins that actually exist on the record, to avoid unnecessary writes that load Aerospike
        val requiresNullify = bins.filter(record.bins.containsKey(_)).distinct
        if (requiresNullify.nonEmpty) {
          client.put(writePolicy, key, requiresNullify.map(Bin.asNull): _*)
          logger.debug(s"${recordCount.incrementAndGet()}: (${requiresNullify.mkString(",")}) Bins of Record: $record with $key are set to NULL")
        }
      }
    })
    logger.info(s"Totally ${recordCount.get} records affected during the truncate operation on $namespace.$set.$bins")
    recordCount.get
  }
}
}
The scan fails at:
...
2021-08-08 16:51:30,551 [Aerospike-6] DEBUG c.d.a.c.r.services.AerospikeService.scanCallback(55) - 33950: (IsActive) Bins of Record: (gen:3),(exp:0),(bins:(IsActive:0)) with test-recovery-set-multi-1:null:95001b26e70dbb35e1487802ebbc857eceb92246 are set to NULL
with the following exception:
Error -11,6,0,30000,0,5: Max retries exceeded: 5
com.aerospike.client.AerospikeException: Error -11,6,0,30000,0,5: Max retries exceeded: 5
at com.aerospike.client.query.PartitionTracker.isComplete(PartitionTracker.java:282)
at com.aerospike.client.command.ScanExecutor.scanPartitions(ScanExecutor.java:70)
at com.aerospike.client.AerospikeClient.scanAll(AerospikeClient.java:1519)
at com.aerospike.connect.reloader.services.AerospikeService.$anonfun$truncate$3(AerospikeService.scala:50)
at com.aerospike.connect.reloader.services.AerospikeService.$anonfun$truncate$3$adapted(AerospikeService.scala:48)
at scala.collection.Iterator$$anon$9.next(Iterator.scala:575)
at scala.collection.immutable.List.prependedAll(List.scala:153)
at scala.collection.immutable.List$.from(List.scala:651)
at scala.collection.immutable.List$.from(List.scala:648)
at scala.collection.IterableFactory$Delegate.from(Factory.scala:288)
at scala.collection.immutable.Iterable$.from(Iterable.scala:35)
at scala.collection.immutable.Iterable$.from(Iterable.scala:32)
at scala.collection.IterableOps$WithFilter.map(Iterable.scala:884)
at com.aerospike.connect.reloader.services.AerospikeService.$anonfun$truncate$1(AerospikeService.scala:48)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.immutable.Map$Map1.flatMap(Map.scala:241)
at com.aerospike.connect.reloader.services.AerospikeService.truncate(AerospikeService.scala:47)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.$anonfun$new$2(AerospikeServiceSpec.scala:23)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.wordspec.AnyWordSpecLike$$anon$3.apply(AnyWordSpecLike.scala:1077)
at org.scalatest.TestSuite.withFixture(TestSuite.scala:196)
at org.scalatest.TestSuite.withFixture$(TestSuite.scala:195)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.withFixture(AerospikeServiceSpec.scala:13)
at org.scalatest.wordspec.AnyWordSpecLike.invokeWithFixture$1(AnyWordSpecLike.scala:1075)
at org.scalatest.wordspec.AnyWordSpecLike.$anonfun$runTest$1(AnyWordSpecLike.scala:1087)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.wordspec.AnyWordSpecLike.runTest(AnyWordSpecLike.scala:1087)
at org.scalatest.wordspec.AnyWordSpecLike.runTest$(AnyWordSpecLike.scala:1069)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.runTest(AerospikeServiceSpec.scala:13)
at org.scalatest.wordspec.AnyWordSpecLike.$anonfun$runTests$1(AnyWordSpecLike.scala:1146)
at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
at scala.collection.immutable.List.foreach(List.scala:333)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:390)
at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:427)
at scala.collection.immutable.List.foreach(List.scala:333)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
at org.scalatest.wordspec.AnyWordSpecLike.runTests(AnyWordSpecLike.scala:1146)
at org.scalatest.wordspec.AnyWordSpecLike.runTests$(AnyWordSpecLike.scala:1145)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.runTests(AerospikeServiceSpec.scala:13)
at org.scalatest.Suite.run(Suite.scala:1112)
at org.scalatest.Suite.run$(Suite.scala:1094)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.org$scalatest$BeforeAndAfterAll$$super$run(AerospikeServiceSpec.scala:13)
at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.org$scalatest$wordspec$AnyWordSpecLike$$super$run(AerospikeServiceSpec.scala:13)
at org.scalatest.wordspec.AnyWordSpecLike.$anonfun$run$1(AnyWordSpecLike.scala:1191)
at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
at org.scalatest.wordspec.AnyWordSpecLike.run(AnyWordSpecLike.scala:1191)
at org.scalatest.wordspec.AnyWordSpecLike.run$(AnyWordSpecLike.scala:1189)
at com.aerospike.connect.reloader.tests.services.AerospikeServiceSpec.run(AerospikeServiceSpec.scala:13)
at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1320)
at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1314)
at scala.collection.immutable.List.foreach(List.scala:333)
at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1314)
at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:993)
at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:971)
at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1480)
at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:971)
at org.scalatest.tools.Runner$.run(Runner.scala:798)
at org.scalatest.tools.Runner.run(Runner.scala)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2or3(ScalaTestRunner.java:38)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:25)
Any ideas why it's happening?
LUT method:
def calculateCurrentLUT(): Long = {
  logger.info("calculateCurrentLUTs() Triggered")
  val policy = new WritePolicy()
  policy.setTimeout(config.operationTimeoutInMillis)
  val key = new Key(config.toRecover.head.namespace, AerospikeConfiguration.dummySetName, AerospikeConfiguration.dummyKey)
  client.put(policy, key, new Bin(AerospikeConfiguration.dummyBin, "Used by the Recovery process to calculate current machine startTime"))
  client.execute(policy, key, AerospikeConfiguration.packageName, "getLUT").asInstanceOf[Long]
}
with:
def registerUDFs(): RegisterTask = {
  logger.info(s"registerUDFs() Triggered")
  val policy = new WritePolicy()
  policy.setTimeout(config.operationTimeoutInMillis)
  client.registerUdfString(policy,
    """
      |function getLUT(r)
      |  return record.last_update_time(r)
      |end
      |""".stripMargin, AerospikeConfiguration.packageName + ".lua", Language.LUA)
}

AerospikeException: Error -11,6,0,30000,0,5: Max retries exceeded: 5 breaks down as follows: -11 is the error code (maximum retry attempts on this operation exceeded the specified value), and 6 is the number of iterations (original attempt + maxRetries). The remaining numbers are your connection settings: 0 is connectTimeout (the time allowed to create the initial socket, 0 being the default), 30000 (30 s) is the time after which an idle socket is closed, 0 is the total timeout for the scan operation (0 means never time out, which is correct for scans), and 5 is maxRetries. It looks like the server is not responding to the client's scan call within 30 seconds, so the client closes the idle socket and retries, and after 5 retries it throws the exception.
Something is obviously wrong - check the server log for more clues. For example, are you using a server version that supports expressions for scans? Second, I would check your computation of the LUT comparison expression. If the filter expression evaluates to false, the scan will just return EOF on completion (no matching records) - but if the socket times out before that, the scan will go into a retry.
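If the expression itself is fine and the scan is simply long-running, one thing to experiment with (a sketch, not part of the original answer; the values are illustrative, field names as in recent Aerospike Java client versions) is giving the scan more socket idle time and failing fast instead of retrying:
import java.util.Calendar;
import com.aerospike.client.exp.Exp;
import com.aerospike.client.policy.ScanPolicy;

// Sketch: a more forgiving scan policy for a long-running filtered scan.
Calendar calendar = Calendar.getInstance();   // LUT cutoff, computed as in the question
ScanPolicy scanPolicy = new ScanPolicy();
scanPolicy.filterExp = Exp.build(Exp.le(Exp.lastUpdate(), Exp.val(calendar)));
scanPolicy.socketTimeout = 300000;  // 5 minutes instead of the default 30s before an idle socket is closed
scanPolicy.totalTimeout = 0;        // no overall deadline, appropriate for scans
scanPolicy.maxRetries = 0;          // surface the failure immediately rather than retrying 5 times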

Related

Kafka - Why is default serde for key and value required for groupBy and reduce?

I am developing a streaming application with Quarkus. My application looks as follows.
1. flatMap to change the key and generate multiple messages from a single message.
2. join with a KTable using the key from Step 1.
3. transform for a stateful operation.
4. flatMap to change the key back to the original key, i.e., the one before Step 1.
5. groupBy with the key as in Step 4, which is actually that of Step 1.
6. reduce to "merge" the records into a single message comprising a JSON array.
The net effect is to split an incoming message (with key id1) into multiple messages (with different keys, e.g., k1, k2, etc.), enhance each message using join and transform, then change the key of each message back to id1, and finally "merge" each of the enhanced messages into a single message with key id1.
I keep getting an error telling me to set up a default key serde and value serde. While the default serdes can be set in application.properties, I am not clear why this error even arises.
Note that if I do not do Steps 5 and 6, the application works successfully.
This is the Java exception I get.
2022-10-17 16:42:34,884 ERROR [org.apa.kaf.str.KafkaStreams] (app-alerts-6a7c4df8-7813-4d5d-9a86-d6f3db7c8ef0-StreamThread-1) stream-client [app-alerts-6a7c4df8-7813-4d5d-9a86-d6f3db7c8ef0] Encountered the following exception during processing and the registered exception handler opted to SHUTDOWN_CLIENT. The streams client is going to shut down now. : org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.common.config.ConfigException: Please specify a key serde or set one through StreamsConfig#DEFAULT_KEY_SERDE_CLASS_CONFIG
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:627)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:551)
Caused by: org.apache.kafka.common.config.ConfigException: Please specify a key serde or set one through StreamsConfig#DEFAULT_KEY_SERDE_CLASS_CONFIG
at org.apache.kafka.streams.StreamsConfig.defaultKeySerde(StreamsConfig.java:1587)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.keySerde(AbstractProcessorContext.java:90)
at org.apache.kafka.streams.processor.internals.SerdeGetter.keySerde(SerdeGetter.java:47)
at org.apache.kafka.streams.kstream.internals.WrappingNullableUtils.prepareSerde(WrappingNullableUtils.java:63)
at org.apache.kafka.streams.kstream.internals.WrappingNullableUtils.prepareKeySerde(WrappingNullableUtils.java:90)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.initStoreSerde(MeteredKeyValueStore.java:195)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:144)
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:212)
at org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:97)
at org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:231)
at org.apache.kafka.streams.processor.internals.TaskManager.tryToCompleteRestoration(TaskManager.java:454)
at org.apache.kafka.streams.processor.internals.StreamThread.initializeAndRestorePhase(StreamThread.java:865)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:747)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:589)
... 1 more
These are StreamsConfig values:
acceptable.recovery.lag = 10000
application.id = machine-alerts
application.server =
bootstrap.servers = [kafka:9092]
buffered.records.per.partition = 1000
built.in.metrics.version = latest
cache.max.bytes.buffering = 10240
client.id =
commit.interval.ms = 1000
connections.max.idle.ms = 540000
default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler
default.dsl.store = rocksDB
default.key.serde = null
default.list.key.serde.inner = null
default.list.key.serde.type = null
default.list.value.serde.inner = null
default.list.value.serde.type = null
default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
default.value.serde = null
max.task.idle.ms = 0
max.warmup.replicas = 2
metadata.max.age.ms = 500
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = DEBUG
metrics.sample.window.ms = 30000
num.standby.replicas = 0
num.stream.threads = 1
poll.ms = 100
probing.rebalance.interval.ms = 600000
processing.guarantee = at_least_once
rack.aware.assignment.tags = []
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
repartition.purge.interval.ms = 30000
replication.factor = -1
request.timeout.ms = 40000
retries = 0
retry.backoff.ms = 100
rocksdb.config.setter = null
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
state.cleanup.delay.ms = 600000
state.dir = /tmp/kafka-streams
task.timeout.ms = 300000
topology.optimization = none
upgrade.from = null
window.size.ms = null
windowed.inner.class.serde = null
windowstore.changelog.additional.retention.ms = 86400000
I don't know how Quarkus works, but the serde error on groupBy statements is common when you start using Kafka Streams. The groupBy statement creates an internal, compacted repartition topic where the data is grouped by key (internally your streams application sends messages to that topic). For that reason you should specify the serdes for your groupBy statement at the code level whenever the key or value types differ from the default key and value serdes in your streams properties.
KGroupedStream<String, User> groupedStream = stream.groupByKey(
    Serialized.with(
        Serdes.String(),       /* key */
        new CustomUserSerde()) /* value */
);
In the above example, I'm using the desired Serdes in the group statement, not at the property level.
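In newer Kafka Streams versions, Serialized was replaced by Grouped. A sketch of the asker's Steps 5-6 (groupBy followed by reduce) with explicit serdes might look like the following; enrichedStream stands in for the stream after Step 4 and the concatenation in reduce() is a placeholder for the real JSON-array merge:
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

// Sketch: the key is already the original id after Step 4, so groupByKey is enough.
KTable<String, String> mergeById(KStream<String, String> enrichedStream) {
    Serde<String> valueSerde = Serdes.String();
    return enrichedStream
        .groupByKey(Grouped.with(Serdes.String(), valueSerde))     // serdes for the repartition topic
        .reduce((aggregate, next) -> aggregate + "," + next,       // placeholder "merge" logic
                Materialized.with(Serdes.String(), valueSerde));   // serdes for the state store
}
With the serdes passed explicitly here, the default.key.serde / default.value.serde settings are no longer consulted for this step.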

How to validate whether PoolingHttpClientConnectionManager is applied on a Jersey client

Below is the code snippet I am using for jersey client connection pooling.
ClientConfig clientConfig = new ClientConfig();
clientConfig.property(ClientProperties.CONNECT_TIMEOUT, defaultConnectTimeout);
clientConfig.property(ClientProperties.READ_TIMEOUT, defaultReadTimeout);
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(50);
cm.setDefaultMaxPerRoute(5);
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, cm);
clientConfig.connectorProvider(new ApacheConnectorProvider());
How can I validate that my client is using connection pooling? Is the poolStats.getAvailable() count a valid way of making sure? In my case the available count was 1 when I tested the client.
Yes, the count can be 1, but to confirm it you can try the following steps.
First, add a background thread that keeps running and prints the current state of the pool stats at some interval, say every 60 seconds. You can use the logic below; make sure it refers to the same PoolingHttpClientConnectionManager instance that you registered on the ClientConfig.
Then repeatedly call the logic that internally makes calls to the external service using the Jersey client (for example, in a for loop).
You should see the numbers in the logs change, which confirms that the Jersey client is actually using the pooled configuration.
Logic:
PoolStats poolStats = cm.getTotalStats();
Set<HttpRoute> routes = cm.getRoutes();
if (CollectionUtils.isNotEmpty(routes)) {
    for (HttpRoute route : routes) {
        PoolStats routeStats = cm.getStats(route);
        int routeAvailable = routeStats.getAvailable();
        int routeLeased = routeStats.getLeased();
        int routeIdle = (routeAvailable - routeLeased);
        log.info("Pool Stats for Route - Host = {}, Available = {}, Leased = {}, Idle = {}, Pending = {}, Max = {}",
            route.getTargetHost(), routeAvailable, routeLeased, routeIdle, poolStats.getPending(), poolStats.getMax());
    }
}
int available = poolStats.getAvailable();
int leased = poolStats.getLeased();
int idle = (available - leased);
log.info("Pool Stats - Available = {}, Leased = {}, Idle = {}, Pending = {}, Max = {}",
    available, leased, idle, poolStats.getPending(), poolStats.getMax());
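As a sketch of the background reporter (assuming cm is the same PoolingHttpClientConnectionManager instance registered on the ClientConfig), a scheduled executor is enough:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.pool.PoolStats;

// Sketch: print total pool stats every 60 seconds in a background thread.
void startPoolStatsReporter(PoolingHttpClientConnectionManager cm) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(() -> {
        PoolStats stats = cm.getTotalStats();
        System.out.printf("Pool Stats - Available = %d, Leased = %d, Pending = %d, Max = %d%n",
                stats.getAvailable(), stats.getLeased(), stats.getPending(), stats.getMax());
    }, 0, 60, TimeUnit.SECONDS);
}
While load is being generated against the client, the leased/available numbers should move, which is the confirmation you are looking for.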

Cassandra Exception

For my current project I'm using Cassandra DB and fetching data frequently: at least 30 DB requests hit every second, and each request needs to fetch at least 40000 rows from the DB. Following is my current code; this method returns a HashMap.
public Map<String, String> loadObject(ArrayList<Integer> tradigAccountList) {
    com.datastax.driver.core.Session session;
    Map<String, String> orderListMap = new HashMap<>();
    List<ResultSetFuture> futures = new ArrayList<>();
    List<ListenableFuture<ResultSet>> orderedFutures;
    try {
        session = jdbcUtils.getCassandraSession();
        PreparedStatement statement = jdbcUtils.getCassandraPS(CassandraPS.LOAD_ORDER_LIST);
        for (Integer tradingAccount : tradigAccountList) {
            futures.add(session.executeAsync(statement.bind(tradingAccount).setFetchSize(3000)));
        }
        orderedFutures = Futures.inCompletionOrder(futures);
        for (ListenableFuture<ResultSet> future : orderedFutures) {
            for (Row row : future.get()) {
                orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
            }
        }
    } catch (Exception e) {
        // exceptions are currently swallowed
    } finally {
    }
    return orderListMap;
}
My data request query is something like this,
"SELECT cliordid,ordermsg FROM omsks_v1.ordersStringV1 WHERE tradacntid = ?".
My Cassandra cluster has 2 nodes with 32 concurrent read and write threads each, and my DB schema is as follows:
CREATE TABLE omsks_v1.ordersstringv1_copy1 (
tradacntid int,
cliordid text,
ordermsg text,
PRIMARY KEY (tradacntid, cliordid)
) WITH bloom_filter_fp_chance = 0.01
AND comment = ''
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE'
AND caching = {
'keys' : 'ALL',
'rows_per_partition' : 'NONE'
}
AND compression = {
'sstable_compression' : 'LZ4Compressor'
}
AND compaction = {
'class' : 'SizeTieredCompactionStrategy'
};
My problem is that I am getting a Cassandra timeout exception. How can I optimize my code to handle all these requests?
It would be better if you attached a snippet of that exception (read/write exception). I assume you are getting a read timeout. You are trying to fetch a large data set in a single request.
each request needs to fetch at least 40000 rows from the DB
If the result set is too big, Cassandra throws an exception when the results cannot be returned within the time limit set in cassandra.yaml:
read_request_timeout_in_ms
You can increase the timeout, but this is not a good option. It may resolve the issue (no exception thrown), but the result will take even longer to return.
Solution: For a big data set you can get the results using manual pagination (range queries) with a limit:
SELECT cliordid, ordermsg FROM omsks_v1.ordersStringV1
WHERE tradacntid >= ? AND cliordid > ? LIMIT ?;
Or use a range query:
SELECT cliordid, ordermsg FROM omsks_v1.ordersStringV1
WHERE tradacntid = ? AND cliordid >= ? AND cliordid <= ?;
This will be much faster than fetching the whole result set.
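As a rough sketch of the manual pagination idea with the DataStax Java driver (not the asker's code; the page size of 1000 and the variable names are illustrative assumptions):
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Sketch: pull one partition (one tradacntid) in pages of 1000 rows using LIMIT.
Map<String, String> loadOnePartitionPaged(Session session, int tradingAccount) {
    Map<String, String> orderListMap = new HashMap<>();
    PreparedStatement pageStmt = session.prepare(
        "SELECT cliordid, ordermsg FROM omsks_v1.ordersstringv1 " +
        "WHERE tradacntid = ? AND cliordid > ? LIMIT ?");
    String lastCliOrdId = "";                        // cliordid is text; every real value sorts above ""
    while (true) {
        ResultSet rs = session.execute(pageStmt.bind(tradingAccount, lastCliOrdId, 1000));
        List<Row> rows = rs.all();
        if (rows.isEmpty()) {
            break;                                   // no more rows in this partition
        }
        for (Row row : rows) {
            orderListMap.put(row.getString("cliordid"), row.getString("ordermsg"));
            lastCliOrdId = row.getString("cliordid");
        }
        if (rows.size() < 1000) {
            break;                                   // last (partial) page
        }
    }
    return orderListMap;
}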
You can also try reducing the fetch size (public Statement setFetchSize(int fetchSize)) to check whether the exception is still thrown. Note that it will still return the whole result set eventually:
setFetchSize controls the page size, but it doesn't control the
maximum number of rows returned in a ResultSet.
Another point to note:
What's the size of tradigAccountList?
Too many requests at a time can also lead to a timeout. A large tradigAccountList means a lot of read requests issued at once (load balancing of requests is handled by Cassandra, and how many requests can be handled depends on cluster size and some other factors), which may cause this exception.
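One way to bound that concurrency (a sketch, not part of the original answer; the limit of 64 in-flight queries is an arbitrary assumption) is to gate executeAsync behind a semaphore:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;

// Sketch: cap the number of in-flight async queries so a large account list
// does not overwhelm the cluster.
List<ResultSetFuture> submitThrottled(Session session, PreparedStatement statement,
                                      List<Integer> tradigAccountList) {
    final Semaphore permits = new Semaphore(64);            // at most 64 queries in flight
    List<ResultSetFuture> futures = new ArrayList<>();
    for (Integer account : tradigAccountList) {
        permits.acquireUninterruptibly();                   // wait for a free slot
        ResultSetFuture future = session.executeAsync(statement.bind(account).setFetchSize(3000));
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override public void onSuccess(ResultSet rs) { permits.release(); }
            @Override public void onFailure(Throwable t)  { permits.release(); }
        }, MoreExecutors.directExecutor());
        futures.add(future);
    }
    return futures;
}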
Some related Links:
Cassandra read timeout
NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet
Cassandra .setFetchSize() on statement is not honoured

When does Kafka consumer poll() return empty records?

As shown below, my code is a high-level consumer fetching a topic with 32 partitions from the Kafka server, and I'm confused about why I sometimes get an empty return from consumer.poll().
I have tried increasing the poll timeout: when I set the timeout to 1000, each poll returns data, while with a timeout of 10 or 0 I see a lot of empty returns.
Can anyone tell me how to set a correct timeout?
def main(args: Array[String]): Unit = {
  val props = new Properties()
  props.put("bootstrap.servers", "kafka-01:9098")
  props.put("group.id", "kch1")
  props.put("enable.auto.commit", "true")
  props.put("auto.commit.interval.ms", "1000")
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  //props.put("max.poll.records", "1000")

  val consumers = new Array[KafkaConsumer[String, String]](16)
  for (i <- 0 to 15) {
    consumers(i) = new KafkaConsumer[String, String](props)
    consumers(i).subscribe(util.Arrays.asList("veh321"))
  }

  var cnt = 0
  var cacheIterator: Iterator[ConsumerRecord[String, String]] = null
  for (i <- 0 to 15) {
    new Thread(new Runnable {
      override def run(): Unit = {
        var finish = false
        while (!finish) {
          val start = System.currentTimeMillis()
          cacheIterator = consumers(i).poll(100).iterator()
          val end = System.currentTimeMillis() - start
          if (end > 10) {
            println(s"${Thread.currentThread().getId} + Duration is ${end}, ${cacheIterator.hasNext} ${cacheIterator.size}")
          }
        }
      }
    }).start()
  }
}
The Java consumer uses Linux's epoll as the underlying implementation by invoking java.nio.channels.Selector.select(timeout). It is very likely to return nothing if you only give it 100 ms to check how many SelectionKeys are ready during that short interval.
Besides, during the same 100 ms the consumer does other work, including polling the coordinator status, so the real time available for fetching records is noticeably less than 100 ms, which makes it harder to retrieve anything useful.
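In other words, an empty result from poll() is not an error; it just means nothing was ready within the timeout. A minimal sketch of the usual pattern (Java client; broker, group and topic names are taken from the question, the rest is illustrative) is to pick a comfortable timeout and simply keep looping:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "kafka-01:9098");
props.put("group.id", "kch1");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("veh321"));
    while (true) {
        // poll(Duration) in newer clients; the older poll(1000) overload behaves the same here
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
        for (ConsumerRecord<String, String> record : records) {
            // process record; an empty batch simply means nothing arrived within the timeout
        }
    }
}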

writeConcern is not being set to Acknowledged in MongoDB

private val DATABASE: String = config.getString("db.dbname")
private val SERVER: ServerAddress = {
  val hostName = config.getString("db.hostname")
  val port = config.getString("db.port").toInt
  new ServerAddress(hostName, port)
}
val connectionMongo = MongoConnection(SERVER)
def collectionMongo(name: String) = connectionMongo(DATABASE)(name)

val result: WriteResult = collectionMongo("pgroup")
  .insert(new BasicDBObject("_id", privateArtGroup.getUuid)
    .append("ArtGroupStatus", privateArtGroup.artGroupStatus.toString())
    .append("isNew", privateArtGroup.isNew), WriteConcern.Acknowledged)
log.info("what is the write concern " + collectionMongo("pgroup").getWriteConcern)
I am setting the WriteConcern to Acknowledged but it is not being set.
The log statement prints the following, which is how I know it is not set:
What is the write concer WriteConcern{w=0, wTimeout=null ms, fsync=null, journal=null
Why is w=0? It should be w=1.
I am using Casbah v3.1.1.
val result: WriteResult = collectionMongo("pgroup")
  .insert(new BasicDBObject("_id", privateArtGroup.getUuid)
    .append("ArtGroupStatus", privateArtGroup.artGroupStatus.toString())
    .append("isNew", privateArtGroup.isNew), WriteConcern.Acknowledged)
WriteConcern.Acknowledged - Write operations that use this write concern will wait for acknowledgement from the primary server before returning.
w: 1 - Requests acknowledgement that the write operation has propagated to the standalone mongod or the primary in a replica set.
Reason why w=0:
Once the given insert query is executed with write concern Acknowledged, the job is done. Moreover, you are setting the write concern for the insert query alone and not for the collection, which could be why the collection reports w=0.
Still, I couldn't figure out why you are getting w=0 in general, since w: 1 is the default write concern for MongoDB.
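If you also want the collection's getWriteConcern to report the acknowledged concern (so the log statement above shows w=1), one option is to set it on the collection itself rather than only per insert. A sketch using the underlying MongoDB Java driver API that Casbah wraps (method names from the Java driver, not from Casbah itself):
import com.mongodb.DBCollection;
import com.mongodb.WriteConcern;

// Sketch: setting the write concern on the collection makes getWriteConcern() report it,
// and it then applies to writes that do not pass an explicit write concern.
void useAcknowledgedOnCollection(DBCollection pgroup) {
    pgroup.setWriteConcern(WriteConcern.ACKNOWLEDGED);
    System.out.println("collection write concern: " + pgroup.getWriteConcern());  // now w=1
}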
