I've run into a very weird problem with Redis and its Java client Jedis. I have two lists in Redis named workMQ and backupMQ. When I execute llen workMQ in redis-cli, it returns 16. However, when I execute jedis.llen("workMQ") in Java code with Jedis, it returns 0. But when new data arrives via jedis.lpush("workMQ", "data") in Java, llen workMQ in Redis becomes 1. Why can't jedis.llen("workMQ") see the 16 data items remaining in this list?
Before this weird problem occurred, I performed an rpoplpush operation with a Lua script, as follows:
eval "for i = 1, 10 do\r redis.call('rpoplpush', 'backupMQ', 'workMQ')\r end" 0
Actually, this Lua script has some errors; the correct one is:
eval "for i = 1, 10 do\r redis.call('rpoplpush', KEYS[1], KEYS[2])\r end" 2 backupMQ workMQ
Maybe there is some type mismatch between Redis and Lua. I have executed both of these Lua scripts, but it still doesn't work.
PS: My Jedis client's version is 2.7.2, the latest stable version from the Jedis GitHub repository.
Thanks for your time.
Solved: After one night, the Redis server magically recognized workMQ's item count again, and all is fine. It's really strange.
This weird thing cannot happen; you must have gotten something wrong. For example, does redis-cli really accept a command like "llen(workMQ)"? Or do you actually mean "llen workMQ"?
I think it's very likely that you are operating on a different list key with Jedis than with redis-cli!
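A quick way to rule that out from the Java side is to check against the same database index that redis-cli uses by default; here is a minimal sketch, assuming a local Redis on the default port:

import redis.clients.jedis.Jedis;

public class LlenCheck {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379); // assumed host/port
        jedis.select(0); // redis-cli defaults to database 0 as well
        // If this prints 0 while redis-cli prints 16, the two clients are
        // almost certainly talking to different databases or instances.
        System.out.println("llen workMQ = " + jedis.llen("workMQ"));
        jedis.close();
    }
}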
The Lua problem is simple: you should return a value (of your choice) at the end of the Lua script. And if it still doesn't work, post the detailed error information for me!
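For example, here is a minimal sketch of calling the corrected script through Jedis with an explicit return value appended (the key names are taken from the question; host and port are assumed):

import redis.clients.jedis.Jedis;

public class MoveItems {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379); // assumed host/port
        // Same loop as the corrected script above, plus a return at the end.
        String script =
                "for i = 1, 10 do\n" +
                "  redis.call('rpoplpush', KEYS[1], KEYS[2])\n" +
                "end\n" +
                "return redis.call('llen', KEYS[2])";
        Object result = jedis.eval(script, 2, "backupMQ", "workMQ");
        System.out.println("workMQ length after move: " + result);
        jedis.close();
    }
}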
Hello everyone.
I am new to SNMP and have run into the following problem.
I have an SNMP table on an agent. It works only with the -Cb flag (requesting each new row with a getnext command). When I use net-snmp on Ubuntu, I get this table:
[screenshot: snmptable output from net-snmp]
Here is how it's done in Java with snmp4j:
The table is fetched step by step, retrieving each row by sending a getnext request.
But instead of pointing at the table OID, I point at the OIDs of the columns I want to get.
getnext returns a result plus the next, incremented OID to be used in the following request.
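For reference, here is a minimal sketch of that getnext loop in snmp4j; it assumes SNMPv2c, a community of "public", a placeholder agent address, and an example column OID. The non-increasing OID check at the end is exactly the condition behind the error message quoted below.

import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class GetNextWalk {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));                // assumed community
        target.setAddress(GenericAddress.parse("udp:192.0.2.1/161"));  // placeholder address
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(1);
        target.setTimeout(2000);

        OID column = new OID("1.3.6.1.2.1.2.2.1.2"); // example column (ifDescr)
        OID current = column;
        while (true) {
            PDU pdu = new PDU();
            pdu.setType(PDU.GETNEXT);
            pdu.add(new VariableBinding(current));

            ResponseEvent event = snmp.send(pdu, target);
            if (event.getResponse() == null) break; // timeout

            VariableBinding vb = event.getResponse().get(0);
            OID next = vb.getOid();
            // A conforming agent must return a strictly increasing OID;
            // a strict client stops (or errors) when it does not.
            if (next.compareTo(current) <= 0 || !next.startsWith(column)) break;

            System.out.println(next + " = " + vb.getVariable());
            current = next;
        }
        snmp.close();
    }
}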
From my research, the snmpgetnext query does not return an incremented value here. Instead I receive "OIDs returned from a GETNEXT or GETBULK are less or equal than the requested one (which is not allowed by SNMP)". So I can't get the table that way.
I suppose that net-snmp avoids this error by doing the increment internally when it hits it.
I also tried doing getnext manually via net-snmp on Ubuntu instead of snmptable, but for some columns I got only the first incremented value and that was it, and some did not increment at all.
But snmpget on the incremented value works:
[screenshot: snmpget output for the incremented OID]
Is this a bug in the SNMP agent? And does net-snmp increment the OID by itself when fetching an SNMP table?
Indeed it looks to me like you have a buggy SNMP Agent on your hands.
You should report this to the agent vendor. The data in your first screenshot should be enough evidence for them to take it on as a bug report.
The correct behavior is specified for SNMPv1 in RFC 1157 section 4.1.3, and a few other RFCs for subsequent SNMP versions. However, the gist of it remains the same in v2 and v3.
I'm not sure how the snmptable command works; it might be trying to guess the successor OID like you say, but more likely snmptable uses SNMP GetBulkRequest-PDUs in the background, and the agent's implementation of GetBulk is better than its GetNext. I.e. the table-traversal bug is not present in the code that handles GetBulk, which gives you the whole table.
Try traversing the table with snmpwalk, which I think uses only the GetNext operation. My guess is that snmpwalk will halt or loop, just like your snmpgetnext command!
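For completeness, the GetBulk variant in snmp4j differs only in the PDU setup; this fragment assumes the same Snmp instance and CommunityTarget that a plain getnext request would use:

// Same Snmp and CommunityTarget setup as for a getnext request.
PDU pdu = new PDU();
pdu.setType(PDU.GETBULK);
pdu.setNonRepeaters(0);      // no scalar prefix variables
pdu.setMaxRepetitions(10);   // ask the agent for up to 10 rows per request
pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.2.2.1.2"))); // example column
ResponseEvent event = snmp.send(pdu, target);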
I have an enhanced for loop that opens a ch.ethz.ssh2.Connection to obtain over 200 values. On every iteration a new server is authenticated, and only one value is retrieved from that server. Each iteration saves its data into an ArrayList to be displayed in HTML tables using Thymeleaf. But this approach takes forever to run through all 200 values one at a time, and then it has to start over when I open localhost:8080 to load the tables with all the data. It takes over 5 minutes to load the page. What can I do to speed things up?
Problem code
List<DartModel> data = new ArrayList<DartModel>();
for (String server : serverArray) {
    try {
        conn = new ch.ethz.ssh2.Connection(server);
        conn.connect();
        // j is assumed to index the credentials matching this server
        boolean isAuthenticated = conn.authenticateWithPassword(
                username_array[j], password_array[j]);
        if (!isAuthenticated) {
            throw new IOException("Authentication failed.");
        }
I need to somehow rework the code above so I can obtain all the data quickly.
Output
Loop1: Server1
Loop2: DifferentServer2
Loop3: AllDifferentSever3
and so on...
Alternative
I was thinking of letting the Java program run several times, saving the data into Redis on each run. Then auto-refresh the program; when it runs, it sends the data into Redis with an expiration time set. But I was unable to get the data into the Thymeleaf HTML tables. Would this work? If so, how can I display it in Thymeleaf?
You can query multiple servers at once (in parallel).
If your framework for remote connections is blocking (the methods you call actually wait until the response is received), you'd have to start a handful of threads (one thread per server in the edge case) to do that in parallel (which doesn't scale very well).
If you can use some Future/Promise-based tool, you can do it without much overhead (converting 200 futures into one future of 200 values/responses).
Note: in case you were querying a single server for 200 responses, it would not be a good idea to do it this way, because you would flood it with too many requests at once. In that case you should implement some way to get all the data in one request.
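For the Future-based approach, here is a minimal sketch using Java's CompletableFuture; fetchValue is a hypothetical helper standing in for your "connect, authenticate, read one value" SSH call:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelFetch {

    // Hypothetical stand-in for the per-server SSH work.
    static String fetchValue(String server) {
        return "value-from-" + server;
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList("server1", "server2", "server3");

        // A bounded pool keeps 200 servers from needing 200 threads at once.
        ExecutorService pool = Executors.newFixedThreadPool(20);

        List<CompletableFuture<String>> futures = servers.stream()
                .map(s -> CompletableFuture.supplyAsync(() -> fetchValue(s), pool))
                .collect(Collectors.toList());

        // Convert N futures into one list of N results.
        List<String> results = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println(results);
        pool.shutdown();
    }
}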
Short answer:
Create a message protocol that sends all values in one response.
More Info:
Define a simple response message protocol.
One simple example might be this:
count,value,...
count: contains the number of values returned.
value: one of the values.
Concrete simple example:
5,123,234,345,456,567
You can go bigger and define the response using JSON or XML.
Use whatever seems best for your implementation.
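As an illustration, here is a minimal sketch of parsing the count,value,... format above (assuming integer values and no escaping):

// Parses a response like "5,123,234,345,456,567" into its values.
public static int[] parseResponse(String response) {
    String[] parts = response.split(",");
    int count = Integer.parseInt(parts[0]);
    if (parts.length != count + 1) {
        throw new IllegalArgumentException("count does not match payload");
    }
    int[] values = new int[count];
    for (int i = 0; i < count; i++) {
        values[i] = Integer.parseInt(parts[i + 1]);
    }
    return values;
}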
Edit: My bad, this will not work if you are polling multiple servers. This solution assumes that you are retrieving 200 values from one server, not one value from 200 servers.
At face value, it's hard to tell without looking at your code (recommend sharing a gist or your code repo).
I assume you are using a library. In general, a single SSH2 operation will make several attempts to authenticate a client: it will iterate over several "methods". If you use ssh on the command line, you can see these with the -vv flag. If one method fails, it tries the next. The Java library implementation that I found appears to do the same.
In the loop you posted (assuming you loop 200 times), you'll attempt 200 x (number of authentication methods) authentications. I suspect the majority of your execution time is burned in SSH handshakes. This can be avoided by making sure you use each connection only once and get as much as you can from your (already authenticated) open socket.
Consider moving your connection outside the loop. If you absolutely must do SSH, and the data you are fetching is too large, parallelism may help some, but that will involve more coordination.
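As a rough sketch of the "authenticate once, reuse the socket" idea with ch.ethz.ssh2, for the case where several values come from the same server (host, credentials, and commands are placeholders):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.Session;
import ch.ethz.ssh2.StreamGobbler;

public class ReuseConnection {
    public static void main(String[] args) throws IOException {
        Connection conn = new Connection("server1.example.com"); // placeholder
        try {
            conn.connect();
            if (!conn.authenticateWithPassword("user", "secret")) { // one handshake
                throw new IOException("Authentication failed.");
            }
            // Reuse the authenticated connection: one Session per command.
            for (String command : new String[] { "uptime", "hostname" }) {
                Session session = conn.openSession();
                try {
                    session.execCommand(command);
                    BufferedReader reader = new BufferedReader(new InputStreamReader(
                            new StreamGobbler(session.getStdout())));
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(command + ": " + line);
                    }
                } finally {
                    session.close();
                }
            }
        } finally {
            conn.close();
        }
    }
}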
I'm a newbie with Spark. I'm trying to read the code to understand how K-means in Spark Streaming works. I don't know how to get the number of iterations the algorithm performs on the same group of data, and I can't find the Java file with this information.
Can you help me, please?
Thank you
Solution: In the file /spark-1.5.0/mllib/src/main/scala/org/apache/spark/mllib/clustering/KMeans.scala there is a while statement in the run method that uses a variable called iteration, and Spark writes it to a log on each run.
Just a small addition to majitux's solution (I am not allowed to comment yet). If you want to know the number of iterations K-means takes, simply change Spark's log level to INFO. Either inside the shell using:
spark.sparkContext.setLogLevel("INFO")
Or by setting it as default inside the conf/log4j.properties.
After K-means has finished running, the string "KMeans++ converged in X iterations" will appear in the log.
When you initialize the KMeans class, you can specify the max-iterations parameter:
new KMeans().setMaxIterations(iterations)
and it will then use that parameter for each run of the algorithm.
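Putting both answers together, here is a minimal sketch against the MLlib RDD API (the input path, k, and iteration cap are placeholders):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class KMeansIterations {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("kmeans-iterations").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.setLogLevel("INFO"); // so "KMeans++ converged in X iterations" is logged

        // Placeholder input: one space-separated vector per line.
        JavaRDD<Vector> points = sc.textFile("data/points.txt").map(line -> {
            String[] tokens = line.split(" ");
            double[] values = new double[tokens.length];
            for (int i = 0; i < tokens.length; i++) {
                values[i] = Double.parseDouble(tokens[i]);
            }
            return Vectors.dense(values);
        });
        points.cache();

        // setMaxIterations caps the loop; convergence may end it earlier.
        KMeansModel model = new KMeans()
                .setK(3)
                .setMaxIterations(20)
                .run(points.rdd());

        System.out.println("cost = " + model.computeCost(points.rdd()));
        sc.stop();
    }
}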
Is there a maximum size for the arguments when publishing an event?
I use this code (Java): wampClient.publish(token, response.toString());
response.toString() is a long JSON string in my case; it has about 70,000 characters. I suspect that the event does not get published, because when I replace response.toString() with a short string, the event gets published as expected.
I don't know much about the internals of WAMP, and an initial debugging session into the code did not give me much insight. As I said above, I think the long string is causing the problem.
Minimal running example: To get a minimal running example, please download the example Java project from here: http://we.tl/a3kj3dzJ7N and import it into your IDE.
In the demo folder there are two .java-files: Client.java and Server.java
Run/Start both of them and a GUI should appear for each. Then do the following procedure (C = Client, S = Server):
C: hit start
S: hit start
C: hit publish
Depending on the size of the message you will see different output on the console of your IDE. The size of the message can be changed in line 137 of Client.java via the size integer variable. As already explained above: if size is lower than 70000 (e.g. 60000), everything works as expected. The console output of Client.java is then as follows:
Open Client
Session1 status changed to Connecting
Session1 status changed to Connected
Publishing
Received event test.event with value 10000
However, if the integer variable size is changed to 70000 (or higher) the output is as follows:
Open Client
Session1 status changed to Connecting
Session1 status changed to Connected
Publishing
Completed event test.event
Session1 status changed to Disconnected
Session1 status changed to Connecting
Session1 status changed to Connected
As you can see, the Received event ... line is missing; hence, the event is not received. There is a Completed event test.event line, but the data is obviously missing.
To sum up, when running the example above you can see that the event is not received properly when the size of the transmitted string is greater than 70000. This problem may be related to Netty, since it is used under the hood of jawampa. Any help is appreciated. Maybe it's just some small configuration that can fix this problem.
EDIT 1: I updated the question with a minimal running example which can be downloaded.
EDIT 2: I think I now know the root of the problem (totally not sure though, see EDIT 3). It is related to the allowed size of a string literal in Java. See: Size of Initialisation string in java
In the example above I can reproduce that: if the size variable is lower than 65535 characters it works, otherwise it doesn't. Is there a workaround for this?
EDIT 3 aka SOLUTION: As suggested by the developer (see here), the variable DEFAULT_MAX_FRAME_PAYLOAD_LENGTH in NettyWampConnectionConfig.java:8 should be changed to a higher value. Then everything works like a charm.
As suggested by the developer (see here), the variable DEFAULT_MAX_FRAME_PAYLOAD_LENGTH can be overwritten through the NettyWampConnectionConfig class, which you can provide to the NettyWampClientConnectorProvider class. The variable value should, obviously, be increased.
There is a bug in jawampa, because DEFAULT_MAX_FRAME_PAYLOAD_LENGTH is 1 byte lower than the default split frame size in Crossbar. So DEFAULT_MAX_FRAME_PAYLOAD_LENGTH should be increased by just 1 byte, or Crossbar's split frame size should be lowered by 1.
Also, if you change DEFAULT_MAX_FRAME_PAYLOAD_LENGTH, it should be changed using the builder: .withConnectionConfiguration((new NettyWampConnectionConfig.Builder()).withMaxFramePayloadLength(65536).build())
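In context, that builder call might look roughly like this; the URI and realm are placeholders, and the package names assume jawampa's Netty transport module:

import ws.wamp.jawampa.WampClient;
import ws.wamp.jawampa.WampClientBuilder;
import ws.wamp.jawampa.connection.IWampConnectorProvider;
import ws.wamp.jawampa.transport.netty.NettyWampClientConnectorProvider;
import ws.wamp.jawampa.transport.netty.NettyWampConnectionConfig;

public class LargeFrameClient {
    public static void main(String[] args) throws Exception {
        IWampConnectorProvider connectorProvider = new NettyWampClientConnectorProvider();
        WampClient client = new WampClientBuilder()
                .withConnectorProvider(connectorProvider)
                .withUri("ws://localhost:8080/ws") // placeholder
                .withRealm("realm1")               // placeholder
                .withConnectionConfiguration(new NettyWampConnectionConfig.Builder()
                        .withMaxFramePayloadLength(1024 * 1024) // e.g. 1 MiB frames
                        .build())
                .build();
        client.open();
    }
}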
I would like to set the TTL for a collection once. What is the idiomatic way of achieving this when building a Java application that uses MongoDB? Do people simply apply settings like these in the shell? Or, in the application code, is it normal to check whether a collection already exists in the DB and, if not, create it with the desired options?
Thanks!
I never do index building in my application code anymore.
I confess that I used to. Every time my application started up I would ensure all my indexes, until suddenly one day a beginner developer got hold of my code and accidentally deleted a character within one of my index sequences.
Consequently the entire cluster froze and went down because it was building that index in the foreground. Fortunately I had a number of delayed, non-index-building slaves to repair from, but I still lost about 12 hours all in all, and in turn 12 hours of business.
I would recommend you don't do your index building in the application code but instead do it carefully from your mongo console. That goes for any operation like this, even TTL indexing.
You can set a TTL on a collection as documented here.
Using the Java driver, I would try:
// Note: a TTL index only expires documents whose indexed field holds a BSON date.
theTTLCollection.ensureIndex(new BasicDBObject("status", 1),
        new BasicDBObject("expireAfterSeconds", 3600));
hth.
Setting a TTL is an index operation, so I guess it would not be wise, performance-wise, to do it every time your code runs.
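If you do want the application to guarantee the index once at startup, here is a minimal sketch with the legacy Java driver (database, collection, and field names are placeholders); creating an index that already exists with the same options is a no-op on the server:

import java.net.UnknownHostException;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class EnsureTtlIndex {
    public static void main(String[] args) throws UnknownHostException {
        MongoClient mongo = new MongoClient("localhost", 27017); // placeholder
        DB db = mongo.getDB("mydb");                             // placeholder
        DBCollection sessions = db.getCollection("sessions");    // placeholder

        // createdAt must hold a BSON date for TTL expiry to apply.
        sessions.ensureIndex(new BasicDBObject("createdAt", 1),
                new BasicDBObject("expireAfterSeconds", 3600));

        mongo.close();
    }
}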