Is there a maximum size for the arguments when publishing an event?
I use this code (Java): wampClient.publish(token, response.toString());
response.toString() is a long JSON string in my case, about 70,000 characters. I suspect that the event does not get published, because when I replace response.toString() with a short string, the event gets published as expected.
I don't know much about the internals of WAMP, and an initial debugging session into the code did not give me much insight. As said above, I think the long string is causing the problem.
Minimal running example: To get a minimum running example, please download the example java project from here: http://we.tl/a3kj3dzJ7N and import it into your IDE.
In the demo folder there are two .java-files: Client.java and Server.java
Run/Start both of them and a GUI should appear for each. Then do the following procedure (C = Client, S = Server):
C: hit start
S: hit start
C: hit publish
Depending on the size of the message you will see different output on the console of your IDE. The size of the message can be changed in line 137 of Client.java via the size integer variable. As already explained above: if size is lower than 70000 (e.g. 60000), everything works as expected. The console output of Client.java is then as follows:
Open Client
Session1 status changed to Connecting
Session1 status changed to Connected
Publishing
Received event test.event with value 10000
However, if the integer variable size is changed to 70000 (or higher) the output is as follows:
Open Client
Session1 status changed to Connecting
Session1 status changed to Connected
Publishing
Completed event test.event
Session1 status changed to Disconnected
Session1 status changed to Connecting
Session1 status changed to Connected
As you can see, the Received event ... line is missing; hence, the event is not received. There is a Completed event test.event, but the data is obviously missing.
To sum up, when running the example above one can see that the event is not received properly when the size of the transmitted string is greater than 70000. This problem may be related to Netty, since it is used under the hood of jawampa. Any help is appreciated. Maybe it's just a small configuration change that can fix this problem.
EDIT 1: I updated the question with a minimal running example which can be downloaded.
EDIT 2: I think I now know the root of the problem (totally not sure though, see EDIT 3). It is related to the allowed size of a string literal in Java. See: Size of Initialisation string in java
In the example above I can reproduce that: if the size variable is lower than 65535 characters it works, otherwise it doesn't. Is there a workaround for this?
EDIT 3 aka SOLUTION: As suggested by the developer (see here), the variable DEFAULT_MAX_FRAME_PAYLOAD_LENGTH in NettyWampConnectionConfig.java:8 should be changed to a higher value. Then everything works like a charm.
As suggested by the developer (see here), the variable DEFAULT_MAX_FRAME_PAYLOAD_LENGTH can be overwritten through the NettyWampConnectionConfig class, which you can provide to the NettyWampClientConnectorProvider class. The variable value should, obviously, be increased.
There is a bug in jawampa, because DEFAULT_MAX_FRAME_PAYLOAD_LENGTH is 1 byte lower than the default split frame size in Crossbar. So DEFAULT_MAX_FRAME_PAYLOAD_LENGTH should be increased by just 1 byte, or the Crossbar split frame size should be lowered by 1.
Also, if you change DEFAULT_MAX_FRAME_PAYLOAD_LENGTH, it should be changed using the builder: .withConnectionConfiguration((new NettyWampConnectionConfig.Builder()).withMaxFramePayloadLength(65536).build())
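For reference, a minimal sketch of how that builder call might be wired into the client setup (imports omitted; the URI and realm are placeholders, and the surrounding builder methods follow the jawampa examples, so adjust them to your version):

IWampConnectorProvider connectorProvider = new NettyWampClientConnectorProvider();
WampClient client = new WampClientBuilder()
        .withConnectorProvider(connectorProvider)
        .withConnectionConfiguration(
                new NettyWampConnectionConfig.Builder()
                        .withMaxFramePayloadLength(65536)   // raise the default frame payload limit
                        .build())
        .withUri("ws://localhost:8080/ws")                  // placeholder router URI
        .withRealm("realm1")                                // placeholder realm
        .build();
client.open();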
Hello everyone.
I am new to SNMP and am facing the following problem.
I have an SNMP table on an agent. It works only with the flag -Cb (request each new row via a GETNEXT command). When I use net-snmp on Ubuntu, I get this table:
(screenshot: snmptable output showing the full table)
Here is how it is done in Java with snmp4j:
It is performed step by step, getting every row by sending a GETNEXT request.
But instead of specifying the table OID, I specify the OIDs of the columns I want to get.
GETNEXT returns the result plus the next, incremented OID, which is then used in the next request.
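For illustration, a minimal sketch of such a GETNEXT walk over one column with SNMP4J (the host, community string, and column OID are placeholders; error handling is kept to the essentials):

import java.io.IOException;
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class ColumnWalk {
    public static void main(String[] args) throws IOException {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));               // placeholder community
        target.setAddress(GenericAddress.parse("udp:192.0.2.1/161")); // placeholder agent address
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(1);
        target.setTimeout(2000);

        OID column = new OID("1.3.6.1.4.1.9999.1.1.2");               // placeholder column OID
        OID current = column;
        while (true) {
            PDU pdu = new PDU();
            pdu.setType(PDU.GETNEXT);
            pdu.add(new VariableBinding(current));

            ResponseEvent event = snmp.getNext(pdu, target);
            PDU response = event.getResponse();
            if (response == null) {
                break;                                                // timeout
            }
            VariableBinding vb = response.get(0);
            OID next = vb.getOid();
            if (!next.startsWith(column)) {
                break;                                                // left the column, table is done
            }
            if (next.compareTo(current) <= 0) {
                // This is the situation described above: the agent returned an OID
                // that is not strictly greater than the requested one.
                System.out.println("Agent returned a non-increasing OID: " + next);
                break;
            }
            System.out.println(next + " = " + vb.getVariable());
            current = next;
        }
        snmp.close();
    }
}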
As far as I have researched, the snmpgetnext query does not return an incremented value. I receive "OIDs returned from a GETNEXT or GETBULK are less or equal than the requested one (which is not allowed by SNMP)". So I can't get it that way.
I suppose that net-snmp avoids this error by doing the increment internally when it receives this error.
I also tried to do GETNEXT manually via net-snmp on Ubuntu instead of snmptable, but for some columns I got only the first incremented value and that's it, and some do not increment at all.
But snmpget on the incremented OID works:
(screenshot: snmpget on the incremented OID returning a value)
Is this a bug in the SNMP agent? Does net-snmp do the increment by itself when retrieving the SNMP table?
Indeed it looks to me like you have a buggy SNMP Agent on your hands.
You should report this to the agent vendor. The data in your first screenshot should be enough evidence for them to take it on as a bug report.
The correct behavior is specified for SNMPv1 in RFC 1157 section 4.1.3, and a few other RFCs for subsequent SNMP versions. However, the gist of it remains the same in v2 and v3.
I'm not sure how the snmptable command works; it might be trying to guess the successor OID like you say, but more likely snmptable uses SNMP GetBulkRequest-PDUs in the background, and the agent's implementation of GetBulk is better than its GetNext. That is, the table traversal bug is not present in the code that handles GetBulk, which is why you get the whole table.
Try traversing the table with snmpwalk, which I think uses only the GetNext operation. My guess is that snmpwalk will halt or loop, just like your snmpgetnext command did!
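For example (community, host, and table OID here are placeholders):

snmpwalk -v2c -c public 192.0.2.1 <tableOID>

If the agent really returns non-increasing OIDs, net-snmp will typically stop with an "OID not increasing" error instead of walking the whole table.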
I'm working on a setup where we run our Java services in Docker containers hosted on a Kubernetes platform.
I want to create a dashboard in Grafana where I can monitor the heap usage of all instances of a service. Writing metrics to statsd with the pattern
<servicename>.<containerid>.<processid>.heapspace works well; I can see all heap usages in my chart.
After a redeployment the container names change, so new series are added to the existing graph. My problem is that the old lines continue to exist at the position of the last value received, even though those containers are already dead.
Is there any simple solution for this in Grafana? Can I just say: if you didn't receive data for a metric for more than X seconds, stop drawing the chart line?
Update:
Upgrading to the newest Grafana version and setting "null" as the value for "Null value" under Stacking & Null value didn't work.
Maybe it's a problem with statsd?
I'm sending data to statsd in form of:
felix.javaclient.machine<number>-<pid>.heap:<heapvalue>|g
Is anything wrong with this?
This can happen for two reasons: because Grafana is using the "connected" setting for null values, and/or (as is the case here) because statsd keeps sending the previously seen value for the gauge when there are no updates in the current period.
Grafana Config
You'll want to make 2 adjustments to your graph config:
First, go to the "Display" tab and under "Stacking & Null value" change "Null value" to "null"; that will cause Grafana to stop showing lines when there is no data for a series.
Second, if you're using a legend, you can go to the "Legend" tab and under "Hide series" check the "With only nulls" checkbox; that will cause items to be displayed in the legend only if they have a non-null value during the graph period.
statsd Config
The statsd documentation for gauge metrics tells us:
If the gauge is not updated at the next flush, it will send the previous value. You can opt to send no metric at all for this gauge, by setting config.deleteGauges
So, the Grafana changes alone aren't enough in this case, because the values in Graphite aren't actually null (since statsd keeps sending the last reading). If you change the statsd config to set deleteGauges: true, then statsd won't send anything and Graphite will contain the null values we expect.
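In the statsd config file that could look roughly like this (only the relevant key shown; merge it into your existing config):

{
  deleteGauges: true
}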
Graphite Note
As a side note, a setup like this will cause your data folder to grow continuously as you create new series each time a container is launched. You'll definitely want to look into removing old series after some period of inactivity to avoid filling up the disk. If you're using graphite with whisper that can be as simple as a cron task running find /var/lib/graphite/whisper/ -name '*.wsp' -mtime +30 -delete to remove whisper files that haven't been modified in the last 30 days.
To do this, I would use
maximumAbove(transformNull(felix.javaclient.*.heap, 0), 0)
The transformNull will take any datapoint that is currently null, or unreported for that instant in time, and turn it into a 0 value.
The maximumAbove will only display the series' that have a maximum value above 0 for the selected time period.
Using maximumAbove you can see all historical containers; if you wish to see only the currently running containers, use currentAbove instead.
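A sketch of the corresponding expression, mirroring the one above:
currentAbove(transformNull(felix.javaclient.*.heap, 0), 0)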
I have an enhanced for loop that opens a ch.ethz.ssh2.Connection to obtain over 200 values. Every time it enters the loop, a new server is authenticated and only one value is retrieved from that server. On each iteration the data is saved into an ArrayList to be displayed in HTML tables using Thymeleaf. This method takes forever to run through all 200 values one at a time, and it then has to run again when I open up localhost:8080 to load the page with all the tables and data. It takes over 5 minutes to load the page. What can I do to speed things up?
Problem code:
List<DartModel> data = new ArrayList<DartModel>();
for (String server : serverArray) {
    try {
        conn = new ch.ethz.ssh2.Connection(server);
        conn.connect();
        boolean isAuthenticated = conn.authenticateWithPassword(username_array[j], password_array[j]);
        if (!isAuthenticated) {
            throw new IOException("Authentication failed.");
        }
I need to somehow rework the code above so that I can obtain all the data quickly.
Output
Loop1: Server1
Loop2: DifferentServer2
Loop3: AllDifferentSever3
and goes on......
Alternative
I was thinking of letting the Java program run several times while saving the data into Redis, then auto-refreshing the program; each time it runs it sends the data into Redis and sets an expiration time. But I was unable to get the data into the Thymeleaf HTML tables. Would this work? If so, how can I display this in Thymeleaf?
You can query multiple servers at once (in parallel).
If your framework for remote connections is blocking (the methods you call actually wait until the response is received), you'd have to start a handful of threads (one thread per server in the edge case) to do that in parallel (which doesn't scale very well).
When you can use some Future/Promise based tool, you can do it without much overhead (convert 200 futures into one future of 200 values/responses).
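For illustration, a minimal sketch with CompletableFuture and a bounded thread pool; DartModel and fetchValueFromServer(...) stand in for your existing model class and SSH retrieval logic, so they are placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

static List<DartModel> fetchAll(List<String> servers) {
    ExecutorService pool = Executors.newFixedThreadPool(20);   // bounded pool: don't open 200 threads at once
    try {
        List<CompletableFuture<DartModel>> futures = new ArrayList<>();
        for (String server : servers) {
            // fetchValueFromServer(...) is a placeholder for your SSH call to one server
            futures.add(CompletableFuture.supplyAsync(() -> fetchValueFromServer(server), pool));
        }
        // "Convert 200 futures into one future of 200 values": join them into a single list.
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    } finally {
        pool.shutdown();
    }
}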
Note: If you were querying a single server for 200 responses, it would not be a good idea to do it this way, because you would flood it with too many requests at once. In that case you should implement some way to get all the data with one request.
Short answer:
Create a message protocol that sends all values in one response.
More Info:
Define a simple response message protocol.
One simple example might be this:
count,value,...
count: contains the number of values returned.
value: one of the values.
Concrete simple example:
5,123,234,345,456,567
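To illustrate, a small sketch of parsing such a response on the client side (the response string is just the example above):

String response = "5,123,234,345,456,567";
String[] parts = response.split(",");
int count = Integer.parseInt(parts[0]);          // first field: number of values
int[] values = new int[count];
for (int i = 0; i < count; i++) {
    values[i] = Integer.parseInt(parts[i + 1]);  // remaining fields: the values themselves
}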
You can go bigger and define the response using json or XML.
Use whatever seems best for your implementation.
Edit: My bad, this will not work if you are polling multiple servers. This solution assumes that you are retrieving 200 values from one server, not one value from 200 servers.
At face value, it's hard to tell without looking at your code (I recommend sharing a gist or a link to your code repo).
I assume you are using a library. In general, a single SSH2 operation will make several attempts to authenticate a client; it will iterate over several "methods". If you are using ssh on the command line, you can see these when you use the flag -vv. If one fails, it tries the next. The Java library implementation that I found appears to do the same.
In the loop you posted (assuming you loop 200 times), you'll attempt to authenticate 200 x (number of authentication methods) times. I suspect the majority of your execution time may be burned on SSH handshakes. This can be avoided by making sure you use a connection only once and get as much as you can out of your (already authenticated) open socket.
Consider moving your connection outside the loop. If you absolutely must do SSH, and the data you are using is too large, parallelism may help some, but that will involve more coordination.
Well, I had my Java process running overnight. First of all, this is what I already have:
I have basically:
80 million entries (things a Person has written) and
50 million entries of Persons
Now I have a CSV file that connects both via IDs.
My first attempt at the Java implementation ran at about 200 entries/sec (noTx),
while my latest runs at ~2000/sec (Tx).
But now I'm looking at the current state of the system: I still see CPU and RAM usage changing, and the process is still running. But when I look at the IO values, it's only reading.
So I was thinking that maybe the lines just contain IDs that are not in the database. Maybe! But I have a System.out.println that shows me the current state every 10,000 lines, and it's not coming up anymore. So that cannot be it.
By the way, I'm at line 16,777,000 right now, and I would say it's somehow frozen. It's working really hard but doing nothing =/
Also, I:
use Transactions every 100 lines
STORAGE_KEEP_OPEN=true
ENVIRONMENT_CONCURRENT=false
OIntentMassiveInsert=true
setUsingLog=false
You can find the log here https://groups.google.com/forum/#!topic/orient-database/Whedj893mIY
You need to watch out for the magic size: 2^24 is 16,777,216, as seen in the comments.
I am creating a Spigot (a performance-savvy fork of Bukkit, the Minecraft server software) plugin that communicates with a Bungee (a proxy server for managing multiple Spigot instances) server.
I have a functionality, that when you type a command "/setbar (time-in-seconds) (message)", it will use an API (BarAPI if you are familiar) to create a bar on every server connected to the Bungee instance.
The fault with this is that when a player joins one of the Spigot servers after the command was issued, the Bar is not there. I solve this by storing the bar's information on the Proxy level and sending these values to the specific Spigot instance the player attempts to join.
Okay, so enough background information. The problem I'm having is this: I store the time the admin (or whoever issued the command) requested in a variable. When a user joins later, the remaining time will obviously have decreased slightly (or a lot). My idea for making sure the joining user receives the proper elapsed time (so that BarAPI knows how large the timer graphic needs to be) was to store the time the command was executed (currentTimeMillis / currentTimeNano), convert that to seconds, and subtract it from the time specified in the command.
I know there is a flaw with my logic here, and I can't seem to work out the math. I know this is rather simple, but any help you can provide would be extremely beneficial.
Thanks in advance.
Postscript: Any information I have failed to provide, please let me know and I will add it to this post.
I realize this is a bit of a "no-end" question, as there isn't an exact answer given that I didn't actually provide any code.
Here is how I solved it in plain English, though:
Store the time the command was first executed, in milliseconds.
When the calculation is next needed, subtract that first value from the new current time and divide by 1000 to get a seconds value.
That seconds value is the elapsed time. One can find how much time is remaining by subtracting the elapsed seconds from the seconds value initially provided for the bar.
Erase the bar for the user in question and recreate it with the same values, but substitute the newly calculated remaining seconds for the original seconds value.
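In code, the calculation sketched above might look roughly like this (variable names are illustrative, not from the actual plugin):

// Stored when /setbar is executed:
long commandIssuedAtMillis = System.currentTimeMillis();
int barDurationSeconds = 60;   // the (time-in-seconds) argument from the command

// Later, when a player joins one of the Spigot servers:
long elapsedSeconds = (System.currentTimeMillis() - commandIssuedAtMillis) / 1000L;
long remainingSeconds = barDurationSeconds - elapsedSeconds;
if (remainingSeconds > 0) {
    // Recreate the bar for the joining player with remainingSeconds
    // instead of the original barDurationSeconds (e.g. via BarAPI).
}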