Hi everyone.
I am new to SNMP and have run into the following problem.
I have an SNMP table on an agent. It works only with the -Cb flag (requesting each new row with a getnext command). When I use net-snmp on Ubuntu, I can retrieve this table:
[screenshot: snmptable output from net-snmp]
Here is how it is done in Java with snmp4j:
The table is retrieved step by step: every row is fetched by sending a getnext request.
But instead of specifying the table OID, I specify the OIDs of the columns I want to get.
Each getnext returns the result together with the next (incremented) OID, which is then used in the following request.
From my research, the getnext query does not return an incremented OID. Instead I receive "OIDs returned from a GETNEXT or GETBULK are less or equal than the requested one (which is not allowed by SNMP)", so I cannot retrieve the table that way.
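Roughly, my traversal loop looks like this (a simplified sketch; the address, community string, and column OID below are placeholders, not my real values), and the OID comparison is where it fails:
// Simplified snmp4j sketch of the per-column GETNEXT loop described above.
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class ColumnWalk {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));                // placeholder
        target.setAddress(GenericAddress.parse("udp:192.168.0.1/161")); // placeholder
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(2);
        target.setTimeout(1500);

        OID column = new OID("1.3.6.1.4.1.9999.1.1.2");                 // hypothetical column OID
        OID current = column;
        while (true) {
            PDU pdu = new PDU();
            pdu.setType(PDU.GETNEXT);
            pdu.add(new VariableBinding(current));

            ResponseEvent event = snmp.send(pdu, target);
            PDU response = event.getResponse();
            if (response == null) break;                                // timeout

            VariableBinding vb = response.get(0);
            OID next = vb.getOid();
            // A well-behaved agent must return a strictly greater OID;
            // this is the check that fails with my agent.
            if (next.compareTo(current) <= 0) {
                System.out.println("Agent returned a non-increasing OID: " + next);
                break;
            }
            if (!next.startsWith(column)) break;                        // walked past the column
            System.out.println(next + " = " + vb.getVariable());
            current = next;
        }
        snmp.close();
    }
}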
I suppose that net-snmp avoids this error by incrementing the OID internally when it occurs.
I also tried doing getnext manually via net-snmp on Ubuntu instead of snmptable, but for some columns I got only the first incremented value and nothing more, and some columns did not increment at all.
However, an snmpget on the incremented OID works:
[screenshot: snmpgetnext/snmpget output from net-snmp]
Is this a bug in the SNMP agent? Does net-snmp increment the OID by itself when retrieving an SNMP table?
Indeed it looks to me like you have a buggy SNMP Agent on your hands.
You should report this to the agent vendor. The data in your first screenshot should be enough evidence for them to take it on as a bug report.
The correct behavior is specified for SNMPv1 in RFC 1157 section 4.1.3, and a few other RFCs for subsequent SNMP versions. However, the gist of it remains the same in v2 and v3.
I'm not sure exactly how the snmptable command works. It might be trying to guess the successor OID as you say, but more likely snmptable uses SNMP GetBulkRequest-PDUs in the background, and the agent's implementation of GetBulk is better than its GetNext. In other words, the table-traversal bug is not present in the code path that handles GetBulk, which is why you get the whole table.
Try traversing the table with snmpwalk, which I believe uses only the GetNext operation. My guess is that snmpwalk will halt or loop, just like your snmpgetnext command!
I've started using Selenide recently, and I'm loving the fluent code it allows.
I do have a strange issue with ElementsCollection, however.
$$("some ref").filterBy(not(attribute("an-attr-that-should-not-be"))).getTexts()
This query intermittently returns stringified StaleElementReferenceExceptions, and I can't understand why.
If I run the query in the debugger, it returns valid values, while during normal runtime (single thread application), this is what I get.
The target element is a GWT combo box results list.
Could someone please point me in the right direction?
Update: if it's relevant, I'm using InternetExplorerDriver.
Chrome and ChromeDriver in particular fire off StaleElementReferenceException like it's the point of your test: any time an element is no longer visible, the WebElement reference you hold to it becomes invalid and you must look it up again. If the combo box is showing, hiding, or changing, that could cause this (more details on which combo and what seems to trigger it would help me be more specific). Try looking the element up at the moment you need it instead of reusing the same reference again and again.
Found the problem. Apparently, the Selenide ElementsCollection cached a previous version of the element list, which updated a lot slower than anticipated, and was trying to access this ghost data when retrieving texts.
Fixed it by calling $$ at the point where the list is iterated, instead of the usual static constants in the class header.
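To illustrate the difference (a rough sketch reusing the selector and condition from the question; the class and method names here are just placeholders):
import static com.codeborne.selenide.Condition.attribute;
import static com.codeborne.selenide.Condition.not;
import static com.codeborne.selenide.Selenide.$$;

class ComboTexts {

    // Stale-prone: the collection is resolved once and kept around.
    // private static final ElementsCollection ITEMS =
    //         $$("some ref").filterBy(not(attribute("an-attr-that-should-not-be")));

    // Better: re-query the DOM at the moment the texts are needed.
    static void printCurrentTexts() {
        System.out.println(
                $$("some ref")
                        .filterBy(not(attribute("an-attr-that-should-not-be")))
                        .getTexts());
    }
}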
My solution to this problem was very simple and straightforward. I just set the timeout for element searches to around 10 seconds and it worked. It can be done with a single line:
Configuration.timeout=10000
The value is in milliseconds, of course.
I have run into a very weird problem with Redis and its Java client Jedis. I have two lists in Redis named workMQ and backupMQ. When I execute llen workMQ in redis-cli, it returns 16. However, when I execute jedis.llen("workMQ") in Java code with Jedis, it returns 0. And when new data arrives via jedis.lpush("workMQ", "data") in the Java code, llen workMQ in Redis becomes 1. Why does jedis.llen("workMQ") not see the 16 remaining items in this list?
Before this weird problem occurred, I had performed rpoplpush operations with the following Lua script:
eval "for i = 1, 10 do\r redis.call('rpoplpush', 'backupMQ', 'workMQ')\r end" 0
Actually, this Lua script has some errors; the corrected one is:
eval "for i = 1, 10 do\r redis.call('rpoplpush', KEYS[1], KEYS[2])\r end" 2 backupMQ workMQ
Maybe there is some type mismatch between Redis and Lua. I have executed both of these Lua scripts, but it still doesn't work.
PS: my Jedis client version is 2.7.2, the latest stable release from the Jedis GitHub.
Thanks for your time.
Solved: after one night, the Redis server magically recognized the length of workMQ and everything is fine now. It's really strange.
This weird thing cannot happen; you must have gotten something very wrong. For example, does redis-cli really accept a command like "llen(workMQ)", or do you actually mean "llen workMQ"?
I think it's very likely that you are using Jedis to operate on a different list key than the one you use in redis-cli!
The Lua problem is simple: you should return a value (of your choice) at the end of the Lua script. If it still doesn't work, post the detailed error information for me!
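For example, something along these lines (an untested sketch against Jedis 2.x; returning the list length from the script just makes its effect visible):
import redis.clients.jedis.Jedis;

public class MoveBack {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Move up to 10 items back to workMQ and return the resulting
            // length, so the script has an explicit return value.
            String script =
                "for i = 1, 10 do redis.call('rpoplpush', KEYS[1], KEYS[2]) end " +
                "return redis.call('llen', KEYS[2])";
            Object len = jedis.eval(script, 2, "backupMQ", "workMQ");
            System.out.println("workMQ length reported by the script: " + len);
            System.out.println("llen from Jedis: " + jedis.llen("workMQ"));
        }
    }
}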
Is there a maximum size for the arguments when publishing an event?
I use this code (java): wampClient.publish(token, response.toString());
response.toString() is a long JSON string in my case, about 70,000 characters. I suspect the event does not get published, because when I replace response.toString() with a short string, the event is published as expected.
I don't know much about the internals of WAMP, and an initial debugging session into the code did not give me much insight. As I said above, I think the long string is causing the problem.
Minimal running example: to get a minimal running example, please download the example Java project from here: http://we.tl/a3kj3dzJ7N and import it into your IDE.
In the demo folder there are two .java files: Client.java and Server.java.
Run/Start both of them and a GUI should appear for each. Then do the following procedure (C = Client, S = Server):
C: hit start
S: hit start
C: hit publish
Depending on the size of the message, you will see different output on the console of your IDE. The size of the message can be changed in line 137 of Client.java via the size integer variable. As explained above, if size is lower than 70000 (e.g. 60000), everything works as expected. The console output of Client.java is then as follows:
Open Client
Session1 status changed to Connecting
Session1 status changed to Connected
Publishing
Received event test.event with value 10000
However, if the integer variable size is changed to 70000 (or higher), the output is as follows:
Open Client
Session1 status changed to Connecting
Session1 status changed to Connected
Publishing
Completed event test.event
Session1 status changed to Disconnected
Session1 status changed to Connecting
Session1 status changed to Connected
As you can see, the Received event ... line is missing, so the event is not received. There is a Completed event test.event line, but the data is obviously missing.
To sum up, when running the example above, the event is not received properly once the size of the transmitted string exceeds 70000 characters. This problem may be related to Netty, since it is used under the hood of jawampa. Any help is appreciated; maybe it is just some small configuration change that fixes this problem.
EDIT 1: I updated the question with a minimal running example which can be downloaded.
EDIT 2: I think I now know the root of the problem (not at all sure though, see EDIT 3). It is related to the allowed size of a string literal in Java. See: Size of Initialisation string in java
I can reproduce that in the above example: if the size variable is lower than 65535 characters it works, otherwise it doesn't. Is there a workaround for this?
EDIT 3 aka SOLUTION: As suggested by the developer (see here), the variable DEFAULT_MAX_FRAME_PAYLOAD_LENGTH in NettyWampConnectionConfig.java:8 should be changed to a higher value. Then everything works like a charm.
As suggested by the developer (see here), the variable DEFAULT_MAX_FRAME_PAYLOAD_LENGTH can be overwritten through the NettyWampConnectionConfig class, which you can provide to the NettyWampClientConnectorProvider class. The variable value should, obviously, be increased.
There is a bug in jawampa, because DEFAULT_MAX_FRAME_PAYLOAD_LENGTH is 1 byte lower than the default split frame size in Crossbar. So DEFAULT_MAX_FRAME_PAYLOAD_LENGTH should be increased by just 1 byte, or the Crossbar split frame size should be lowered by 1.
Also, if you change DEFAULT_MAX_FRAME_PAYLOAD_LENGTH, it should be set through the builder: .withConnectionConfiguration((new NettyWampConnectionConfig.Builder()).withMaxFramePayloadLength(65536).build())
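Wired into a full client it would look roughly like this (an untested sketch; the URI and realm are placeholders, and the import packages may need adjusting for your jawampa version):
// Sketch: raise the WebSocket frame payload limit via the connection config.
import ws.wamp.jawampa.WampClient;
import ws.wamp.jawampa.WampClientBuilder;
import ws.wamp.jawampa.transport.netty.NettyWampClientConnectorProvider;
import ws.wamp.jawampa.transport.netty.NettyWampConnectionConfig;

public class LargeFrameClient {
    public static void main(String[] args) throws Exception {
        NettyWampClientConnectorProvider connectorProvider = new NettyWampClientConnectorProvider();
        WampClient client = new WampClientBuilder()
            .withConnectorProvider(connectorProvider)
            .withConnectionConfiguration(new NettyWampConnectionConfig.Builder()
                    .withMaxFramePayloadLength(1024 * 1024)  // well above the ~65535 default
                    .build())
            .withUri("ws://127.0.0.1:8080/ws")               // placeholder
            .withRealm("realm1")                             // placeholder
            .build();
        client.open();
    }
}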
I'm playing around with OSCeleton and Processing and have successfully managed to track skeletons and do stuff.
What I'm wondering is whether there is any way to change the delay before a "lost_user" message is sent to Processing.
This takes too long for what I'm trying to achieve, since I need to stop tracking a user as soon as they walk away from the screen, so I can accept another user's interaction (imagine an installation that a lot of people want to play with).
Any help/tips would be really appreciated.
Jon
As far as I can tell from OSCeleton's source, and with my minimal experience with the Kinect (I never used OSCeleton), there is no way to modify that code to do this. It seems to be handled at a lower level, by the driver or by the Kinect itself(?).
You need not be bound by that, though; I would suggest a couple of ways to bypass the problem, if I understand it properly.
First, the latest drivers and examples should have multi-user support, meaning you can simply decide who your main user is. From what I can tell from the source, you do get an OSC message in Processing when a new user is detected, along with an ID number. You can put each new user that arrives into an ArrayList and work out a way to do things without depending on the latest user.
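A rough sketch of that idea (it assumes the oscP5 library and that OSCeleton sends /new_user and /lost_user with the user ID as the first integer argument; the port number is a guess, so adjust it to your setup):
// Processing sketch: keep a list of currently tracked user IDs.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
ArrayList<Integer> users = new ArrayList<Integer>();

void setup() {
  oscP5 = new OscP5(this, 7110);   // assumed OSCeleton port
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/new_user")) {
    users.add(msg.get(0).intValue());
  } else if (msg.checkAddrPattern("/lost_user")) {
    users.remove(Integer.valueOf(msg.get(0).intValue()));
  }
}

void draw() {
  // The "main" user could simply be the first ID still in the list.
  if (!users.isEmpty()) {
    // interact with users.get(0) ...
  }
}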
If you are still going for the user-after-user approach, though, or I am mistaken about the multi-user support (which is mentioned nowhere in the README), you can check for yourself whether a user has left the area. You cannot get a definitive answer that way, but you can check, for example, whether a specific joint (or all joints) of a user has moved over the last 10-20 OSC messages received. That means storing the position of the joint in a 10-20 item array, continuously updating it, and checking whether the items differ. If all items in the array are the same, your user has not moved a bit and thus probably should not be taken into account.
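A minimal sketch of such a check (plain Java, so it can live in a Processing sketch tab; the window size and which joint you feed it are up to you):
// Tracks the last N positions of one joint and reports when it is frozen.
class JointActivity {
  final int window;          // e.g. the last 15 joint messages
  final float[][] samples;
  int filled = 0;
  int next = 0;

  JointActivity(int window) {
    this.window = window;
    this.samples = new float[window][3];
  }

  // Call this from the OSC handler for the joint you track (e.g. the head).
  void addSample(float x, float y, float z) {
    samples[next][0] = x;
    samples[next][1] = y;
    samples[next][2] = z;
    next = (next + 1) % window;
    if (filled < window) filled++;
  }

  // True once the window is full and every stored position is identical,
  // i.e. the skeleton is frozen and the user has probably left.
  boolean looksGone() {
    if (filled < window) return false;
    for (int i = 1; i < window; i++) {
      if (samples[i][0] != samples[0][0]
          || samples[i][1] != samples[0][1]
          || samples[i][2] != samples[0][2]) return false;
    }
    return true;
  }
}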
Last but not least, you can switch to other solutions. The one I used about a year ago was "Synapse for Kinect", which also seems stale now. The latest option is a Processing library called SimpleOpenNI, which definitely has multi-user tracking, and you won't need any intermediary programs running to give you the joints.
I hope this helps
Short story:
Like the title says, how do I get the SHA-1 hash of a checked-out file from the index using JGit?
Long story:
I am writing a GUI application and using JGit to version just one file, so a user can open a window that shows all the revisions of this file in a nice table.
The user can make changes and commit them. The user can also go back in time and choose an older revision from the table to work on.
This workflow is very simple. Internally, I use only one branch in JGit (the master branch). HEAD always points to this branch, and the tip of this branch is always the newest commit. When the user chooses an older revision, I simply instantiate the CheckoutCommand class, set the path to the file using addPath(), and select the master branch using setName().
The above results in HEAD pointing to the master branch, which in turn points to the newest revision (not the user-chosen revision). But the index and the working directory itself are now at the revision chosen by the user.
So, finally, I want to be able to show the user which of the revisions in the table is currently checked out (or activated, or whatever you want to call it). This revision would then be highlighted, as in the screenshot below. But I cannot use the tip of the master branch for this purpose; I need to somehow get the SHA-1 from the index.
There is a question posted that asks exactly what I want, but I need it in the context of JGit (the author of that question uses git).
EDIT: after a little more analysis I found that I can use JGit's DirCache to access the contents of the index. Using the DirCache class I am able to get the SHA-1 of the file in the index, just like in that question. But now I see that this hash is not the same as the hash of the revision I checked out from, which means I cannot use this method to determine which revision from the table is checked out.
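This is roughly what I do (a sketch; the repository location and file path are placeholders). The ObjectId stored in the index entry is the hash of the file's blob, which is why it never matches a commit hash:
import java.io.File;

import org.eclipse.jgit.dircache.DirCache;
import org.eclipse.jgit.dircache.DirCacheEntry;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.storage.file.FileRepositoryBuilder;

public class IndexSha {
    public static void main(String[] args) throws Exception {
        Repository repo = new FileRepositoryBuilder()
                .setGitDir(new File("/path/to/repo/.git"))    // placeholder
                .build();
        DirCache index = repo.readDirCache();
        DirCacheEntry entry = index.getEntry("the-file.txt"); // placeholder path
        if (entry != null) {
            ObjectId blobId = entry.getObjectId();
            // SHA-1 of the file's blob, not of any commit.
            System.out.println(blobId.name());
        }
        repo.close();
    }
}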
So, is there any other way, using the workflow described above, to determine which revision the user has chosen to work on? Or maybe someone can propose a different approach.
My current approach to this problem is to use JGit's AddNoteCommand. When the user checks out a revision, I simply add a note to that revision with some "key: value" pair, where the key indicates whether the revision is checked out or not. Anyone with a better suggestion?
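Something along these lines (a sketch with error handling omitted; the note message format is just an example):
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.revwalk.RevCommit;
import org.eclipse.jgit.revwalk.RevWalk;

public class MarkCheckedOut {
    // Attach a note to the commit the user just checked out from.
    static void markCheckedOut(Git git, ObjectId commitId) throws Exception {
        RevWalk walk = new RevWalk(git.getRepository());
        RevCommit commit = walk.parseCommit(commitId);
        git.notesAdd()
           .setObjectId(commit)
           .setMessage("checkedOut: true")   // example key/value
           .call();
        walk.dispose();
    }
}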
First of all, sorry to say it, but I think what you are doing is dangerous and unintuitive. Git is built so that you use branches. I think what you are doing is called detached-HEAD manipulation, and it is not recommended, even though JGit allows you to do many things.
But if you are very careful, you can go on.
Second, the DirCache (previously Index) object has been very mysterious to me, and I think (though I am not too sure) the JGit team is still working on it.
Finally, to actually answer the question: I think you should use the LogCommand with its addPath(...) method. You will get a list of RevCommits, from which you can determine the SHA-1. I don't remember precisely how you get the SHA-1; I think you call getName() once you have a Ref object. I guess you'll find it on Stack Overflow.
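An untested sketch of that idea; here getName() is called on the RevCommit itself (a RevCommit is an ObjectId, so that works), and the path parameter is whatever file you are tracking:
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.revwalk.RevCommit;

public class FileHistory {
    // Print the SHA-1 of every commit that touched the given file.
    static void printHistory(Git git, String path) throws Exception {
        Iterable<RevCommit> commits = git.log().addPath(path).call();
        for (RevCommit commit : commits) {
            System.out.println(commit.getName() + "  " + commit.getShortMessage());
        }
    }
}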
However, I would recommend using branches (depending on what operation you want to perform on your commit), based on the SHA-1 you got: create a branch from the SHA-1 you just found and you can safely perform any operation you want. Then either delete the branch if you don't want to commit anything, or merge it later.