Cassandra, Java and MANY async requests: is this good?

I'm developing a Java application with Cassandra, with this table:
id      | registration | name
1       | 1            | xxx
1       | 2            | xxx
1       | 3            | xxx
2       | 1            | xxx
2       | 2            | xxx
...     | ...          | ...
100,000 | 34           | xxx
My table has a very large number of rows (more than 50,000,000). I have a list myListIds of String ids to iterate over. I could use:
SELECT * FROM table WHERE id IN (1,7,18, 34,...,)
// imagine more than 10,000,000 ids inside the IN clause
But this is a bad pattern, so instead I'm issuing async requests this way:
Map<String, ResultSetFuture> mapFutures = new HashMap<>();
// mapFutures : key = id, value = future with the data from Cassandra
for (String id : myListIds) {
    ResultSetFuture resultSetFuture = session.executeAsync(statement.bind(id));
    mapFutures.put(id, resultSetFuture);
}
Then I process the results with the getUninterruptibly() method.
Here is my problem: I'm issuing maybe more than 10,000,000 Cassandra requests (one request per id), and I'm putting all the results into a Map.
Can this cause a heap memory error? What's the best way to deal with it?
Thank you

Note: your question is "is this a good design pattern".
If you are having to perform 10,000,000 Cassandra requests then you have structured your data incorrectly. Ultimately you should design your database from the ground up so that you only ever have to perform 1-2 fetches.
Now, granted, if you have 5000 Cassandra nodes this might not be a huge problem (it probably still is), but it still reeks of bad database design. I think the solution is to take a look at your schema.

I see the following problems with your code:
Overloaded Cassandra cluster: it won't be able to process so many async requests, and your requests will fail with NoHostAvailableException.
Overloaded Cassandra driver: your client app will fail with IO exceptions, because the system will not be able to process so many async requests (see details about connection tuning at https://docs.datastax.com/en/developer/java-driver/3.1/manual/pooling/).
And yes, memory issues are possible; it depends on the data size.
A possible solution is to limit the number of in-flight async requests and process the data in chunks, as in the sketch below.
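Here is a minimal sketch of that throttling approach, assuming the DataStax Java driver 3.x (with its Guava-backed ResultSetFuture); the limit of 512 and the processRows helper are illustrative assumptions, not part of the original code:
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import java.util.concurrent.Semaphore;

// Allow at most MAX_IN_FLIGHT outstanding queries at any time;
// tune this to your cluster size and driver pooling settings.
final int MAX_IN_FLIGHT = 512;
final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);

for (final String id : myListIds) {
    permits.acquireUninterruptibly();   // blocks once the limit is reached
    ResultSetFuture future = session.executeAsync(statement.bind(id));
    Futures.addCallback(future, new FutureCallback<ResultSet>() {
        @Override public void onSuccess(ResultSet rs) {
            try {
                processRows(id, rs);    // hypothetical helper: consume rows here
            } finally {                 // instead of buffering 10M results in a Map
                permits.release();
            }
        }
        @Override public void onFailure(Throwable t) {
            permits.release();          // log or retry as appropriate
        }
    });
}
Consuming each ResultSet inside its callback keeps memory usage bounded by MAX_IN_FLIGHT instead of by the full id list.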

Related

How to get the total count of entities in a kind in Google Cloud Datastore

I have a kind with around 5 million entities in Google Cloud Datastore, and I want to get this count programmatically using Java. I tried the following code, but it only works up to a certain threshold (~800K).
When I ran the query for 5M records, it never returns a count and seems to loop forever (my guess). How can I get the entity count for data this big? I would prefer not to use the Google App Engine API since it requires setting up an environment.
private static Datastore datastore;
datastore = DatastoreOptions.getDefaultInstance().getService();
Query query = Query.newKeyQueryBuilder().setKind(kind).build();
int count = Iterators.size(datastore.run(query)); //count has the entities count
How accurate do you need the count to be? For a slightly out-of-date count you can use a stats entity to fetch the number of entities for a kind.
If you can't use the stale counts from the stats entity, then you'll need to keep counter entities for the real-time counts that you need. You should consider using a sharded counter. A sketch of the stats-entity approach follows.
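As a minimal sketch of the stats-entity approach, assuming the google-cloud-datastore Java client (the kind name "YourKind" is a placeholder); note the built-in statistics are only refreshed periodically, so the count may lag by a day or more:
import com.google.cloud.datastore.*;

Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
// Each kind has a __Stat_Kind__ entity whose "count" property holds
// the entity count as of the last statistics refresh.
EntityQuery query = Query.newEntityQueryBuilder()
    .setKind("__Stat_Kind__")
    .setFilter(StructuredQuery.PropertyFilter.eq("kind_name", "YourKind"))
    .build();
QueryResults<Entity> results = datastore.run(query);
if (results.hasNext()) {
    Entity stat = results.next();
    System.out.println("Approximate count: " + stat.getLong("count"));
}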
Check out Google Dataflow. A pipeline like the following should do it:
def send_count_to_call_back(callback_url):
    def f(record_count):
        r = requests.post(callback_url, data=json.dumps({
            'record_count': record_count,
        }))
    return f

def run_pipeline(project, callback_url):
    pipeline_options = PipelineOptions.from_dictionary({
        'project': project,
        'runner': 'DataflowRunner',
        'staging_location': 'gs://%s.appspot.com/dataflow-data/staging' % project,
        'temp_location': 'gs://%s.appspot.com/dataflow-data/temp' % project,
        # .... other options
    })
    query = query_pb2.Query()
    query.kind.add().name = 'YOUR_KIND_NAME_GOES HERE'
    p = beam.Pipeline(options=pipeline_options)
    _ = (p
         | 'fetch all rows for query' >> ReadFromDatastore(project, query)
         | 'count rows' >> apache_beam.combiners.Count.Globally()
         | 'send count to callback' >> apache_beam.Map(send_count_to_call_back(callback_url))
        )
I use Python, but they have a Java SDK too: https://beam.apache.org/documentation/programming-guide/
The only issue is that your process has to trigger this pipeline, let it run on its own for a few minutes, and then have it hit a callback URL to let you know it's done.

How to generate report from huge mongodb document [duplicate]

Using the code:
all_reviews = db_handle.find().sort('reviewDate', pymongo.ASCENDING)
print all_reviews.count()
print all_reviews[0]
print all_reviews[2000000]
The count prints 2043484, and it prints all_reviews[0].
However when printing all_reviews[2000000], I get the error:
pymongo.errors.OperationFailure: database error: Runner error: Overflow sort stage buffered data usage of 33554495 bytes exceeds internal limit of 33554432 bytes
How do I handle this?
You're running into the 32MB limit on an in-memory sort:
https://docs.mongodb.com/manual/reference/limits/#Sort-Operations
Add an index to the sort field. That allows MongoDB to stream documents to you in sorted order, rather than attempting to load them all into memory on the server and sort them in memory before sending them to the client.
As kumar_harsh said in the comments, I would like to add another point.
You can view the current buffer usage with the command below on the admin database:
> use admin
switched to db admin
> db.runCommand( { getParameter : 1, "internalQueryExecMaxBlockingSortBytes" : 1 } )
{ "internalQueryExecMaxBlockingSortBytes" : 33554432, "ok" : 1 }
It has a default value of 32 MB (33554432 bytes). If you're running out of buffer space, you can increase the limit to a value that suits you, for example ~50 MB as below:
> db.adminCommand({setParameter: 1, internalQueryExecMaxBlockingSortBytes:50151432})
{ "was" : 33554432, "ok" : 1 }
This limit can also be set permanently with the following parameter in the MongoDB config file:
setParameter=internalQueryExecMaxBlockingSortBytes=309715200
Hope this helps!
Note: this command is supported only from version 3.0 onwards.
Solved with indexing:
db_handle.ensure_index([("reviewDate", pymongo.ASCENDING)])
If you want to avoid creating an index (e.g. you just want a quick-and-dirty check to explore the data), you can use aggregation with disk usage:
all_reviews = db_handle.aggregate([{'$sort': {'reviewDate': 1}}], allowDiskUse=True)
(The mongo shell equivalent is db.collection.aggregate([{$sort: {reviewDate: 1}}], {allowDiskUse: true}).)
JavaScript API syntax for the index:
db_handle.ensureIndex({executedDate: 1})
In my case, it was necessary to define the needed indexes in code and recreate them:
rake db:mongoid:create_indexes RAILS_ENV=production
The memory overflow does not occur when the sorted field has the needed index.
P.S. Before this I had to disable the errors when creating long indexes:
# mongo
MongoDB shell version: 2.6.12
connecting to: test
> db.getSiblingDB('admin').runCommand( { setParameter: 1, failIndexKeyTooLong: false } )
A reIndex may also be needed:
# mongo
MongoDB shell version: 2.6.12
connecting to: test
> use your_db
switched to db your_db
> db.getCollectionNames().forEach( function(collection){ db[collection].reIndex() } )

Elasticsearch SearchContextMissingException during 'scan & scroll' query with Spring Data Elasticsearch

I am using Elasticsearch 2.2.0 with the default cluster configuration, and I've hit a problem with a scan and scroll query using Spring Data Elasticsearch. When I execute the query I get an error like this:
[2016-06-29 12:45:52,046][DEBUG][action.search.type ] [Vector] [155597] Failed to execute query phase
RemoteTransportException[[Vector][10.132.47.95:9300][indices:data/read/search[phase/scan/scroll]]]; nested: SearchContextMissingException[No search context found for id [155597]];
Caused by: SearchContextMissingException[No search context found for id [155597]]
at org.elasticsearch.search.SearchService.findContext(SearchService.java:611)
at org.elasticsearch.search.SearchService.executeScan(SearchService.java:311)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(SearchServiceTransportAction.java:433)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(SearchServiceTransportAction.java:430)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My 'scan & scroll' code:
public List<T> getAllElements(SearchQuery searchQuery) {
    searchQuery.setPageable(new PageRequest(0, PAGE_SIZE));
    String scrollId = elasticsearchTemplate.scan(searchQuery, 1000, false);
    List<T> allElements = new LinkedList<>();
    boolean hasRecords = true;
    while (hasRecords) {
        Page<T> page = elasticsearchTemplate.scroll(scrollId, 5000, resultMapper);
        if (page.hasContent()) {
            allElements.addAll(page.getContent());
        } else {
            hasRecords = false;
        }
    }
    elasticsearchTemplate.clearScroll(scrollId);
    return allElements;
}
When my query result size is less than the PAGE_SIZE parameter, the error occurs five times; I guess that is once per shard. When the result size is bigger than PAGE_SIZE, the error occurs a few more times. I've tried to refactor my code to not call:
Page<T> page = elasticsearchTemplate.scroll(scrollId, 5000, resultMapper);
when I'm sure that the page has no content, but that only works when PAGE_SIZE is bigger than the query result, so it is not a real solution.
I should add that the problem shows up only on the Elasticsearch side; on the client side the errors are hidden and in each case the query result is correct. Does anybody know what causes this issue?
Thank you for your help,
Simon.
I get this error if the Elasticsearch system closes the connection. Typically it's exactly what @Val said - dead connections. Things sometimes die in ES for no good reason - master node down, data node too congested, badly performing queries, Kibana running at the same time you are in the middle of querying... I've been hit by all of these at one time or another and gotten this error.
Suggestion: up the initial connection time - 1000L might be too short for it to get what it needs. It won't hurt if the query ends sooner.
This also happens randomly when I try to pull too much data too quickly; you might have huge documents, and trying to pull a PAGE_SIZE of 50,000 might be a little too much. We don't know what you chose for PAGE_SIZE.
Suggestion: lower PAGE_SIZE to something like 500, or even 20, and see if the smaller values reduce the errors.
I know I have had fewer of these problems since moving to ES 2.3.3.
This usually happens if your search context is not alive anymore.
In your case, you're starting your scan with a timeout of 1 second and then keeping each scroll alive for 5 seconds. That's probably too low. The default duration to keep the search context alive is 1 minute, so you should probably increase it to 60 seconds like this:
String scrollId = elasticsearchTemplate.scan(searchQuery, 60000, false);
...
Page<T> page = elasticsearchTemplate.scroll(scrollId, 60000, resultMapper);
I ran into a similar problem and suspect that Spring Data Elasticsearch has some internal bug in how it passes the scroll ID. In my case I just tried to scroll through the whole index, and I can rule out @Val's answer about the search context not being alive anymore, because the exceptions occurred regardless of the duration. The exceptions also started after the first page and occurred for every subsequent paging query.
In my case I could simply use elasticsearchTemplate.stream(). It uses scan & scroll internally and seems to pass the scroll ID correctly. Oh, and it's simpler to use:
SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withQuery(QueryBuilders.matchAllQuery())
        .withPageable(new PageRequest(0, 10000))
        .build();

Iterator<Post> postIterator = elasticsearchTemplate.stream(searchQuery, Post.class);
while (postIterator.hasNext()) {
    Post post = postIterator.next();
}

How could a distributed queue-like-thing be implemented on top of an RDBMS or NoSQL datastore or other messaging system (e.g., RabbitMQ)?

From the wouldn't-it-be-cool-if category of questions ...
By "queue-like-thing" I mean supports the following operations:
append(entry:Entry) - add entry to tail of queue
take(): Entry - remove entry from head of queue and return it
promote(entry_id) - move the entry one position closer to the head; the entry that currently occupies that position moves to the old position
demote(entry_id) - the opposite of promote(entry_id)
Optional operations would be something like:
promote(entry_id, amount) - like promote(entry_id) except you specify the number of positions
demote(entry_id, amount) - opposite of promote(entry_id, amount)
Of course, if we allow amount to be positive or negative, we can consolidate the promote/demote methods into a single move(entry_id, amount) method.
It would be ideal if the following operations could be performed on the queue in a distributed fashion (multiple clients interacting with the queue):
queue = ...
queue.append( a )
queue.append( b )
queue.append( c )
print queue
"a b c"
queue.promote( b.id )
print queue
"b a c"
queue.demote( a.id )
print queue
"b c a"
x = queue.take()
print x
"b"
print queue
"c a"
Are there any data stores that are particularly apt for this use case? The queue should always be in a consistent state even if multiple users are modifying the queue simultaneously.
If it weren't for the promote/demote/move requirement, there wouldn't be much of a problem.
Edit:
Bonus points if there are Java and/or Python libraries to accomplish the task outlined above.
Solution should scale extremely well.
Redis supports lists and ordered sets: http://redis.io/topics/data-types#lists
It also supports transactions and publish/subscribe messaging. So, yes, I would say this can be easily done on redis.
Update: in fact, about 80% of it has been done many times: http://www.google.co.uk/search?q=python+redis+queue
Several of those hits could be extended to add what you want. You would have to use transactions to implement the promote/demote operations; see the sketch after this answer.
It might be possible to use Lua on the server side to create that functionality, rather than having it in client code. Alternatively, you could create a thin wrapper around Redis on the server that implements just the operations you want.
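For example, a minimal sketch of a transactional promote with the Jedis client, modelling the queue as a sorted set named "queue" where a lower score is closer to the head; all names and values here are illustrative assumptions:
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;
import redis.clients.jedis.Tuple;

try (Jedis jedis = new Jedis("localhost", 6379)) {
    jedis.watch("queue");                    // EXEC fails if another client interferes
    Double score = jedis.zscore("queue", "b");
    // Member immediately ahead of "b": the highest score strictly below ours.
    Set<Tuple> ahead = (score == null)
            ? java.util.Collections.<Tuple>emptySet()
            : jedis.zrevrangeByScoreWithScores("queue", "(" + score, "-inf", 0, 1);
    if (!ahead.isEmpty()) {
        Tuple prev = ahead.iterator().next();
        Transaction tx = jedis.multi();
        tx.zadd("queue", prev.getScore(), "b");     // swap the two scores
        tx.zadd("queue", score, prev.getElement());
        tx.exec();                                  // returns null if WATCH tripped
    } else {
        jedis.unwatch();
    }
}
If exec() returns null, a concurrent modification was detected and the whole promote should simply be retried.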
Python: "Batteries Included"
Rather than looking to a data store like RabbitMQ, Redis, or an RDBMS, I think Python and a couple of libraries are more than enough to solve this problem. Some may complain that this do-it-yourself approach is re-inventing the wheel, but I prefer running a hundred lines of Python code over managing another data store.
Implementing a Priority Queue
The operations that you define - append, take, promote, and demote - describe a priority queue. Unfortunately Python doesn't have a built-in priority queue data type, but it does have a heap library called heapq, and priority queues are often implemented as heaps. Here's my implementation of a priority queue meeting your requirements:
import heapq

class PQueue:
    """
    Implements a priority queue with append, take, promote, and demote
    operations.
    """
    def __init__(self):
        """
        Initialize empty priority queue.
        self.toll is max(priority) and max(rowid) in the queue
        self.heap is the heap maintained for take command
        self.rows is a mapping from rowid to items
        self.pris is a mapping from priority to items
        """
        self.toll = 0
        self.heap = list()
        self.rows = dict()
        self.pris = dict()

    def append(self, value):
        """
        Append value to our priority queue.
        The new value is added with lowest priority as an item. Items are
        threeple lists consisting of [priority, rowid, value]. The rowid
        is used by the promote/demote commands.
        Returns the new rowid corresponding to the new item.
        """
        self.toll += 1
        item = [self.toll, self.toll, value]
        self.heap.append(item)
        self.rows[self.toll] = item
        self.pris[self.toll] = item
        return self.toll

    def take(self):
        """
        Take the highest priority item out of the queue.
        Returns the value of the item.
        """
        item = heapq.heappop(self.heap)
        del self.pris[item[0]]
        del self.rows[item[1]]
        return item[2]

    def promote(self, rowid):
        """
        Promote an item in the queue.
        The promoted item swaps position with the next highest item.
        Returns the number of affected rows.
        """
        if rowid not in self.rows: return 0
        item = self.rows[rowid]
        item_pri, item_row, item_val = item
        next = item_pri - 1
        if next in self.pris:
            iota = self.pris[next]
            iota_pri, iota_row, iota_val = iota
            iota[1], iota[2] = item_row, item_val
            item[1], item[2] = iota_row, iota_val
            self.rows[item_row] = iota
            self.rows[iota_row] = item
            return 2
        return 0
The demote command is nearly identical to the promote command, so I'll omit it for brevity. Note that this depends only on Python's lists, dicts, and the heapq library.
Serving our Priority Queue
Now, with the PQueue data type, we'd like to allow distributed interactions with an instance. A great library for this is gevent. Though gevent is relatively new and still beta, it's wonderfully fast and well tested. With gevent, we can set up a socket server listening on localhost:4040 pretty easily. Here's my server code:
from gevent.server import StreamServer

pqueue = PQueue()

def pqueue_server(sock, addr):
    text = sock.recv(1024)
    cmds = text.split(' ')
    if cmds[0] == 'append':
        result = pqueue.append(cmds[1])
    elif cmds[0] == 'take':
        result = pqueue.take()
    elif cmds[0] == 'promote':
        result = pqueue.promote(int(cmds[1]))
    elif cmds[0] == 'demote':
        result = pqueue.demote(int(cmds[1]))
    else:
        result = ''
    sock.sendall(str(result))
    print 'Request:', text, '; Response:', str(result)

if args.listen:  # args comes from the script's argparse setup (not shown)
    server = StreamServer(('127.0.0.1', 4040), pqueue_server)
    print 'Starting pqueue server on port 4040...'
    server.serve_forever()
Before this runs in production, you'll of course want better error and buffer handling. But it'll work just fine for rapid prototyping. Notice that it doesn't require any locking around the pqueue object: gevent doesn't actually run code in parallel, it just gives that impression. The drawback is that more cores won't help, but the benefit is lock-free code.
Don't get me wrong, the gevent StreamServer will process multiple requests at the same time, but it switches between answering requests through cooperative multitasking. This means you have to yield the coroutine's time slice. While gevent's socket I/O functions are designed to yield, our pqueue implementation is not. Fortunately, the pqueue completes its tasks really quickly.
A Client Too
While prototyping, I found it useful to have a client as well. It took some googling to write a client so I'll share that code too:
# gsocket is gevent's socket module, e.g. from gevent import socket as gsocket
if args.client:
    while True:
        msg = raw_input('> ')
        sock = gsocket.socket(gsocket.AF_INET, gsocket.SOCK_STREAM)
        sock.connect(('127.0.0.1', 4040))
        sock.sendall(msg)
        text = sock.recv(1024)
        sock.close()
        print text
To use the new data store, first start the server and then start the client. At the client prompt you ought to be able to do:
> append one
1
> append two
2
> append three
3
> promote 2
2
> promote 2
0
> take
two
Scaling Extremely Well
Given your thinking about a data store, it seems you're really concerned with throughput and durability. But "scale extremely well" doesn't quantify your needs, so I decided to benchmark the above with a test function. Here's the test function:
def test():
    import time
    import urllib2
    import subprocess
    import random
    random = random.Random(0)
    from progressbar import ProgressBar, Percentage, Bar, ETA
    widgets = [Percentage(), Bar(), ETA()]

    def make_name():
        alphabet = 'abcdefghijklmnopqrstuvwxyz'
        return ''.join(random.choice(alphabet)
                       for rpt in xrange(random.randrange(3, 20)))

    def make_request(cmds):
        sock = gsocket.socket(gsocket.AF_INET, gsocket.SOCK_STREAM)
        sock.connect(('127.0.0.1', 4040))
        sock.sendall(cmds)
        text = sock.recv(1024)
        sock.close()

    print 'Starting server and waiting 3 seconds.'
    subprocess.call('start cmd.exe /c python.exe queue_thing_gevent.py -l',
                    shell=True)
    time.sleep(3)

    tests = []

    def wrap_test(name, limit=10000):
        def wrap(func):
            def wrapped():
                progress = ProgressBar(widgets=widgets)
                for rpt in progress(xrange(limit)):
                    func()
                secs = progress.seconds_elapsed
                print '{0} {1} records in {2:.3f} s at {3:.3f} r/s'.format(
                    name, limit, secs, limit / secs)
            tests.append(wrapped)
            return wrapped
        return wrap

    def direct_append():
        name = make_name()
        pqueue.append(name)

    count = 1000000

    @wrap_test('Loaded', count)
    def direct_append_test(): direct_append()

    def append():
        name = make_name()
        make_request('append ' + name)

    @wrap_test('Appended')
    def append_test(): append()

    ...

    print 'Running speed tests.'
    for tst in tests: tst()
Benchmark Results
I ran 6 tests against the server running on my laptop, and I think the results scale extremely well. Here's the output:
Starting server and waiting 3 seconds.
Running speed tests.
100%|############################################################|Time: 0:00:21
Loaded 1000000 records in 21.770 s at 45934.773 r/s
100%|############################################################|Time: 0:00:06
Appended 10000 records in 6.825 s at 1465.201 r/s
100%|############################################################|Time: 0:00:06
Promoted 10000 records in 6.270 s at 1594.896 r/s
100%|############################################################|Time: 0:00:05
Demoted 10000 records in 5.686 s at 1758.706 r/s
100%|############################################################|Time: 0:00:05
Took 10000 records in 5.950 s at 1680.672 r/s
100%|############################################################|Time: 0:00:07
Mixed load processed 10000 records in 7.410 s at 1349.528 r/s
Final Frontier: Durability
Finally, durability is the only problem I didn't completely prototype. But I don't think it's that hard either. In our priority queue, the heap (list) of items has all the information we need to persist the data type to disk. Since, with gevent, we can also spawn functions in a multi-processing way, I imagined using a function like this:
def save_heap(heap, toll):
    name = 'heap-{0}.txt'.format(toll)
    with open(name, 'w') as temp:
        for val in heap:
            temp.write(str(val))
            gevent.sleep(0)  # yield to other coroutines between writes
and adding a save function to our priority queue:
def save(self):
    heap_copy = tuple(self.heap)
    toll = self.toll
    gevent.spawn(save_heap, heap_copy, toll)
You could now copy the Redis model of forking and writing the data store to disk every few minutes. If you need even greater durability, then couple the above with a system that logs commands to disk. Together, those are the AOF and RDB persistence methods that Redis uses.
WebSphere MQ can do almost all of this.
The promote/demote is almost possible, by removing the message from the queue and putting it back on with a higher/lower priority, or by using the "CORRELID" as a sequence number. A hedged sketch of the requeue approach follows.
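A minimal sketch of that remove-and-requeue idea against the generic JMS API (which WebSphere MQ implements); the destination, timeout, and try-with-resources on the consumer/producer (JMS 2.0) are illustrative assumptions:
import javax.jms.*;

void promote(Session session, Queue queue, String correlId, int newPriority)
        throws JMSException {
    // Select exactly the message whose correlation ID matches.
    String selector = "JMSCorrelationID = '" + correlId + "'";
    try (MessageConsumer consumer = session.createConsumer(queue, selector);
         MessageProducer producer = session.createProducer(queue)) {
        Message msg = consumer.receive(1000);   // wait up to 1s for a match
        if (msg != null) {
            // Put the same message back with the new priority.
            producer.send(msg, msg.getJMSDeliveryMode(), newPriority,
                          Message.DEFAULT_TIME_TO_LIVE);
        }
    }
}
Doing the receive and send inside a transacted session makes the swap atomic, so the entry can never be lost between the two steps.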
What's wrong with RabbitMQ? It sounds exactly like what you need.
We extensively use Redis as well in our production environment, but it doesn't have some of the functionality queues usually have, like marking a task as complete or re-sending the task if it isn't completed within some TTL. On the other hand, it has features a queue doesn't have, like being a generic store, and it is REALLY fast.
Use Redisson. It implements the familiar List, Queue, BlockingQueue and Deque Java interfaces in a distributed fashion on top of Redis. Example with a Deque:
Redisson redisson = Redisson.create();
RDeque<SomeObject> queue = redisson.getDeque("anyDeque");
queue.addFirst(new SomeObject());
queue.addLast(new SomeObject());
SomeObject obj = queue.removeFirst();
SomeObject someObj = queue.removeLast();
redisson.shutdown();
Other samples:
https://github.com/mrniko/redisson/wiki/7.-distributed-collections/#77-list
https://github.com/mrniko/redisson/wiki/7.-distributed-collections/#78-queue https://github.com/mrniko/redisson/wiki/7.-distributed-collections/#710-blocking-queue
If you for some reason decide to use an SQL database as a backend, I would not use MySQL, as it requires polling (and I would not use it for lots of other reasons), but PostgreSQL supports LISTEN/NOTIFY for signalling other clients so that they do not have to poll for changes. However, it signals all listening clients at once, so you would still need a mechanism for choosing a winning listener. A sketch of the listening side follows.
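A minimal sketch of the listening side with the PostgreSQL JDBC driver (getNotifications with a timeout needs a reasonably recent pgjdbc); the channel name, connection string, and SKIP LOCKED claim query are illustrative assumptions:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost/queue_db", "user", "pass");
try (Statement st = conn.createStatement()) {
    st.execute("LISTEN queue_events");   // producers run: NOTIFY queue_events
}
PGConnection pg = conn.unwrap(PGConnection.class);
while (true) {
    PGNotification[] notes = pg.getNotifications(10_000);   // block up to 10s
    if (notes == null) continue;         // timed out, loop and wait again
    for (PGNotification n : notes) {
        // All listeners wake up; only one should win the job, e.g. with
        // SELECT ... FROM jobs ... FOR UPDATE SKIP LOCKED LIMIT 1
        System.out.println("wake-up on channel " + n.getName());
    }
}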
As a sidenote, I am not sure whether a promote/demote mechanism would be useful; it would be better to schedule the jobs appropriately when inserting...

Inconsistent counter values between replicas in Cassandra

I've got a 3-machine Cassandra cluster using the rack-unaware placement strategy with a replication factor of 2.
The column family is defined as follows:
create column family UserGeneralStats with comparator = UTF8Type and default_validation_class = CounterColumnType;
Unfortunately after a few days of production use I got some inconsistent values for the counters:
Query on replica 1:
[default#StatsKeyspace] list UserGeneralStats['5261666978': '5261666978'];
Using default limit of 100
-------------------
RowKey: 5261666978
=> (counter=bandwidth, value=96545030198)
=> (counter=downloads, value=1013)
=> (counter=previews, value=10304)
Query on replica 2:
[default#StatsKeyspace] list UserGeneralStats['5261666978': '5261666978'];
Using default limit of 100
-------------------
RowKey: 5261666978
=> (counter=bandwidth, value=9140386229)
=> (counter=downloads, value=339)
=> (counter=previews, value=1321)
As the standard read-repair mechanism didn't seem to repair the values, I tried to force an anti-entropy repair using nodetool repair. It didn't have any effect on the counter values.
Data inspection showed that the lower values for the counters are the correct ones, so I suspect that either Cassandra (or Hector, which I used as the API to call Cassandra from Java) retried some increments.
Any ideas how to repair the data and possibly prevent the situation from happening again?
If neither read repair nor nodetool repair fixes it, it's probably a bug.
Please upgrade to 0.8.3 (out today) and verify it's still present in that version; then you can file a ticket at https://issues.apache.org/jira/browse/CASSANDRA.
