Holding input message for error logging - java

I have a scenario I'd like to get your input on. We've nearly decided which route we're going to take, but I'm curious what other opinions on a solution would be.
Our program is a converter service that sits between two larger systems: System A makes a copy and sticks it on a WebSphere queue, JMS picks it up and starts our service by calling the onMessage method in the Converter class, we do some processing, give it back to JMS, and JMS sticks it on another queue to System B.
We're looking at the best way to capture that input message as soon as it hits our onMessage method and hold onto it throughout our program's entire process. This way if we hit an error, we can print the message that caused said error in our stack-trace log to assist with troubleshooting.
During my research, I came across four methods of obtaining this persistence:
1) Save to a temporary file.
2) Global variable/Singleton.
3) Wrapper class.
4) Spring's dependency injection methods.
The solution we're leaning towards is (ominous music) using a global variable. We're using the following known facts to drive our decision:
It is only a single String with a max of 1000 characters, so the memory impact is negligible.
Only one class will ever have access to the setter (the Consumer class it's inside).
Every other instance will only access the getter.
Clearest/simplest code, easy for someone following to understand.
Our service will never become multi-threaded.
Only one instance of our service will run at one time on a given server.
The variable will be cleared and overwritten every time a new message comes through.
I know the general opinion is that global variables are very very bad, but I've always been of the opinion that global variables aren't inherently bad, they're just ridiculously easy to use in a bad way. We're of the opinion that this is the one instance where, being mindful of the dangers of global variables, they're the right choice. Your thoughts?
It should be noted that we can't add any libraries to our environment, so we're stuck with whatever we can do with Java and Spring.
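For concreteness, here is a minimal sketch of the global-holder approach we're leaning towards; the class and method names are just illustrative, not our actual code.

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Holds the raw input message for the duration of one conversion run.
class LastMessageHolder {
    private static String lastMessage;

    // Only the consuming class ever calls the setter, once per incoming message.
    static void set(String message) { lastMessage = message; }

    // Everything else only reads it, e.g. when building an error log entry.
    static String get()             { return lastMessage; }
}

public class Converter implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            String body = ((TextMessage) message).getText();
            LastMessageHolder.set(body);   // cleared/overwritten on every new message
            // ... conversion work ...
        } catch (Exception e) {
            // The offending input is available wherever the error is logged.
            System.err.println("Failed while processing message: " + LastMessageHolder.get());
            e.printStackTrace();
        }
    }
}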

Related

Pattern/Best practice for updating objects on server from multiple clients

I have a general question about a best practice or pattern to solve a problem.
Consider that you have three programs running on separate JVMs: Server, Client1 and Client2.
All three processes make changes to an object. When the object is changed in either client, the change in the object (not the new object) must be sent to the server. It is not possible just to send the new object from the client to the server because both clients might update the object at the same time, so we need the delta, and not the result.
I'm not so worried about reflecting changes on the server back to the clients at this point, but lets consider that a bonus question.
What would be the best practice for implementing this with X amount of processes and Y amount of object classes that may be changed?
The best way I can think of is to consistently use the Command pattern to change the object on the client and the server at the same time, but there has to be a better way?
One of the possible ways to solve that is the Remote Method Invocation system in Java. Keep all the data values on the Server, then have the clients use remote calls to query them.
This would however require some smart caching to reduce the amount of pointless calls. In the end you would end up with something similar to the Command Pattern.
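For illustration, here is a minimal sketch of the delta-as-command idea; every name in it is hypothetical.

import java.io.Serializable;

// The delta itself is what travels between JVMs, not the resulting object.
interface ChangeCommand<T> extends Serializable {
    void applyTo(T target);
}

class Player implements Serializable {
    private String name;
    void setName(String name) { this.name = name; }
    String getName()          { return name; }
}

class RenamePlayer implements ChangeCommand<Player> {
    private final String newName;
    RenamePlayer(String newName) { this.newName = newName; }

    @Override
    public void applyTo(Player player) {
        player.setName(newName);
    }
}

// A client applies the command to its local copy, then sends the same command to the server,
// which applies it to the authoritative copy (and may later reject or revoke it).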
Modern games try to solve this issue with something I'd call an Execute-Then-Verify pattern, where every client has a local copy of the game world that allows it to come to the same conclusion for each action as the server would. A player's actions are applied to the local copy of the game world on the assumption that they are correct, then sent to the server, which is the ultimate authority and either accepts them or revokes them later on.
The benefit of this variant of local caching is that most players do not experience much lag; however, in the case of contradictory actions they might experience the well-known roll-backs.
In the end it very much depends on what you are trying to do and what is more important for you: control over actions or client action flow.

How to approach JMX Client polling

Recently I dove into the world of JMX, trying to instrument our applications and expose some operations through a custom JMXClient. The work of figuring out how to instrument the classes without having to change much about our existing code is already done. I accomplished this using a DynamicMBean implementation. Specifically, I created a set of annotations, which we decorate our classes with. Then, when objects are created (or initialized, if they are used as static classes), we register them with our MBeanServer through a static class that builds a DynamicMBean for the class and registers it. This has worked out beautifully when we just use JConsole or VisualVM: we can execute operations and view the state of fields just as we should be able to. My question is more geared toward creating a semi-realtime JMXClient like JConsole.
The biggest problem I'm facing here is how to make the JMXClient report the state of fields in as close to realtime as I can reasonably get, without having to modify the instrumented libraries to push notifications (e.g. in a setter method of some class, set the field, then fire off a JMX notification). We want the classes to be all but entirely unaware they are being instrumented. If you check out JConsole while inspecting an attribute, there is a refresh button at the bottom of the screen that refreshes the attribute values. The value it displays to you is the value retrieved when that attribute was loaded into the view, and won't ever change without using the refresh button. I want this to happen on its own.
I have written a small UI which shows some data about connection states, and a few fields on some instrumented classes. In order to make those values reflect the current state, I have a Thread which spins in the background. Every second or so the thread attempts to get the current values of the fields I'm interested in, then the UI gets updated as a result. I don't really like this solution very much, as it's tricky to write the logic that updates the underlying models, and even trickier to update the UI in a way that doesn't cause strange bugs (using Swing).
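Roughly, that background polling loop looks something like the following; the connection URL, bean name, and attribute are made up for illustration.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class AttributePoller {
    static void start(JLabel label) throws Exception {
        MBeanServerConnection conn = JMXConnectorFactory
                .connect(new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"))
                .getMBeanServerConnection();
        ObjectName bean = new ObjectName("com.example:type=ConnectionStats");

        ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
        poller.scheduleAtFixedRate(() -> {
            try {
                Object value = conn.getAttribute(bean, "ActiveConnections");
                // Hop back onto the EDT before touching any Swing component.
                SwingUtilities.invokeLater(() -> label.setText(String.valueOf(value)));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.SECONDS);
    }
}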
I could also write an additional section of the JMXAgent on our application side, with a single thread that runs through the list of DynamicMBeans that have been registered, determines if the values of their attributes have changed, and then pushes notifications. This would move the notification logic out of the instrumented libraries, but still puts more load on the applications :(.
I'm just wondering if any of you have been in this position with JMX, or something else, and can guide me in the right direction for a design methodology for the JMXClient or really any other advice that could make this solution more elegant than the one I have.
Any suggestions you guys have would be appreciated.
If you don't want to change the entities then something is going to have to poll them. Either your JMXAgent or the JMX client is going to have to request the beans every so often. There is no way for you to get around this performance hit although since you are calling a bunch of gets, I don't think it's going to be very expensive. Certainly your JMXAgent would be better than the JMX client polling all of the time. But if the client is polling all of the beans anyway then the cost may be exactly the same.
You would not need to do the polling if the objects could call the agent to say that they have been changed or if they supported some sort of isDirty() method.
In our systems, we have a metrics system that the various components use. Each class increments its own metric, and it is the metrics that are wired into a persister. You can request the metric values using JMX or persist them to disk or over the wire. By using a Metric type, there is separation between the entity doing the counting and the entities that need access to all of the metric values.
By going to a registered Metric object type model, your GUI could then query the MetricRegistrar for all of the metrics and display them via JMX, HTML, or whatever. So your entities would just do metric.increment() or metric.set(...) and the GUI would query the metric whenever it needed the value.
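A rough sketch of that Metric/registrar idea, with entirely hypothetical names:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class Metric {
    private final AtomicLong value = new AtomicLong();
    public void increment()        { value.incrementAndGet(); }
    public void set(long newValue) { value.set(newValue); }
    public long get()              { return value.get(); }
}

class MetricRegistrar {
    private static final Map<String, Metric> METRICS = new ConcurrentHashMap<>();

    // Entities register the metrics they maintain...
    public static Metric register(String name) {
        return METRICS.computeIfAbsent(name, n -> new Metric());
    }

    // ...and the GUI (or a JMX/HTML exporter) queries them whenever it needs current values.
    public static Map<String, Metric> all() {
        return METRICS;
    }
}

class ConnectionPool {
    private final Metric activeConnections = MetricRegistrar.register("connections.active");

    void onConnectionOpened() { activeConnections.increment(); }
}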
Hope something here helps.
Being efficient here means staying inside the mbean server that contains the beans you're looking at. What you want is a way to convert the mbeans that don't know how to issue notifications into mbeans that do.
For watching numeric and string attributes, you can use the standard mbeans in the monitor package. Instantiate those in the mbean server that contains the beans you actually want to watch, and then set the properties appropriately. You can do this without adding code to the target because the monitor package is standard in the JVM. The monitor beans will watch the objects you select for changes and will emit change notifications only when actual changes are observed. Use setGranularityPeriod to tell the monitor beans how often to look at the target.
Once the monitor beans are in place, just register for the MonitorNotifications that will be created upon change.
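For illustration, here is roughly how wiring up one of the standard monitor beans might look for a string attribute; the object names and the attribute are hypothetical.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.Notification;
import javax.management.ObjectName;
import javax.management.monitor.MonitorNotification;
import javax.management.monitor.StringMonitor;

public class StatusWatcher {
    public static void watch() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // The MBean and attribute to observe -- hypothetical names.
        ObjectName target = new ObjectName("com.example:type=ConnectionStats");

        // The monitor lives in the same MBean server as the bean it watches.
        StringMonitor monitor = new StringMonitor();
        server.registerMBean(monitor, new ObjectName("com.example.monitor:type=StringMonitor,attr=Status"));

        monitor.addObservedObject(target);
        monitor.setObservedAttribute("Status");
        monitor.setStringToCompare("OK");
        monitor.setNotifyDiffer(true);       // notify when the value stops matching "OK"
        monitor.setNotifyMatch(true);        // notify when it matches "OK" again
        monitor.setGranularityPeriod(1000);  // observe once per second

        // Register for the MonitorNotifications emitted on change.
        monitor.addNotificationListener((Notification n, Object handback) -> {
            if (n instanceof MonitorNotification) {
                MonitorNotification mn = (MonitorNotification) n;
                System.out.println(mn.getObservedAttribute() + " is now: " + mn.getDerivedGauge());
            }
        }, null, null);

        monitor.start();
    }
}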
Not a solution per se, but you can simplify your polling-to-event-translating JMXAgent implementation using Spring Integration. It has something called a JMX Attribute Polling Channel, which seems to fulfill your need.

How can I get the history of an object or trace an object

I have a requirement where, to support my application, a lot of processing is happening; at some point an exception occurred due to an object. Now I would like to know the whole history of that object, meaning whatever has happened to that object since the application started.
Is peeping into the history of an object possible in any way, using JMX or anything else?
Thanks
In one word: No
With a few more words:
The JVM does not keep any history on any object past its current state, except for very little information related to garbage collection and perhaps some method call metrics needed for the HotSpot optimizer. Doing otherwise would imply a huge processing and memory overhead. There is also the question of granularity; do you log field changes only? Every method call? Every CPU instruction during a method call? The JVM simply takes the easy way out and does none of the above.
You have to isolate the class and/or specific instance of that object and log any operation that you need on your own. You will probably have to do that manually - I have yet to find a bytecode instrumentation library that would allow me to insert logging code at runtime...
Alternatively, you might be able to use an instrumenting profiler, but be prepared for a huge performance drop when doing that.
That's not possible with standard Java (or any other programming language I'm aware of). You should add sufficient logging to your application, which will allow you to get some idea of what's happened. Also, learn to use your IDE's debugger if you don't already know how.
I generally agree with #thkala and #artbristol (+1 for both).
But you have a requirement and have no choice: you need a solution.
I'd recommend trying to wrap your objects with dynamic proxies that perform auditing, i.e. log all changes that happen to the object.
You can probably use AspectJ for this. The aspect will note what method was called and what parameters were sent. You can also use other, lower-level tools, e.g. Javassist or CGLIB.
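As a minimal sketch of the auditing-proxy idea using plain JDK dynamic proxies (which only work against interfaces): the AccountService-style interface and names below are hypothetical.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;

class AuditingProxy {
    @SuppressWarnings("unchecked")
    static <T> T audit(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            // Record every call and its arguments before delegating to the real object.
            System.out.println("AUDIT: " + method.getName() + " " +
                    (args == null ? "[]" : Arrays.toString(args)));
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}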
The answer is no. The JVM doesn't maintain the history of an object's state. The most you can do is keep track of your object's states somewhere in memory; when you get the exception, you can serialize that in-memory record and then analyze it.

Is this over-eager loader object an example of a Proxy Pattern implementation?

I have a Java system that consumes an API. A few days ago, we started facing the following problem: the remote API was receiving too many requests from my system. Back in the system's early days this was not a major concern, but little by little the system's performance got worse and worse, since my data was growing and I made multiple requests for each entity. I noticed many of the network requests I made were not really necessary, since the data was not updated very frequently. So I implemented a class that, when my system starts, eagerly loads all of the remote API data. When I create or update an entity, I load it before any request is made; I treat deletion accordingly. The remote API also notifies me when any change is made, so I can stay up to date even when a change is made outside my system.
What I really want to know is: is there a name for this practice? Any known design pattern? I must say I've done a little research and I think it is the Proxy pattern, but I'm not very sure (in fact, most of the design patterns look very similar to each other), and I'm not really that much into design patterns.
I would call what you implemented a cache system. Not sure if there is a design pattern for this, though.
Also, the fact that the remote API notifies you when any change is made, might have been done using the observer pattern.
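For illustration, a rough sketch of such a cache kept current by remote-change notifications (observer-style); all names here are hypothetical.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class EntityCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();

    // Eagerly loaded at startup, then kept current by local writes and remote notifications.
    void put(K id, V entity) { entries.put(id, entity); }
    void remove(K id)        { entries.remove(id); }
    V get(K id)              { return entries.get(id); }

    // Called when the remote API notifies us of a change made outside our system.
    void onRemoteChange(K id, V updatedEntity) {
        if (updatedEntity == null) {
            entries.remove(id);
        } else {
            entries.put(id, updatedEntity);
        }
    }
}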
It's not quite the Proxy pattern, as the Proxy pattern falls more under the heading of 'lazy loading'. From the description of the Proxy pattern in Design Patterns (the Gang of Four book): "One reason for controlling access to an object is to defer the full cost of its creation and initialization until we actually need to use it."
I'm not sure what you'd call it other than over-eager loading.

Messaging: Lots of RemoteServices methods or Unique message builder/interpreter?

Hey guys,
I'm using GWT to code a simple multiplayer board game.
And while I was coding the question came up to my mind:
At first I thought my client could simply communicate with the server via RemoteService calls, so if a client wanted to connect to a game he could do as follows:
joinGame (String playerName, String gameName)
And the server implementation would do the necessary processing with the argument's data.
In other words, I would have lots of RemoteService methods, one for each type of message in the worst case.
I thought of another way, which would be creating a Message class and sub-classing it as needed.
This way, a single remoteService method would be enough:
sendMessage (Message m)
The message building and interpreting would also be done by specialized classes. In particular, the message-building class could even be put in the gwt-app shared package.
That said, I can't see the benefits of one over the other, so I'm not sure whether I should take one of these approaches or go a completely different way. Which do you think is better (has more benefits in the given situation)?
EDIT: One thing I forgot to mention is that one of the factors that made me think of the second (sendMessage) option is that my application has a CometServlet that queries game instances to see whether there are unsent messages for the client in its own message queue (each client has a message queue).
I prefer the command pattern in this case (something like your sendMessage() concept).
If you have one remote service method that accepts a Command, caching becomes very simple. Batching is also easier to implement in this case. You can also add undo functionality, if that's something you think you may need.
The gwt-dispatch project is a great framework that brings this pattern to GWT.
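A generic sketch of what the single-entry-point command style might look like (this is not the gwt-dispatch API itself; every name below is hypothetical):

import java.io.Serializable;

interface Action<R extends Result> extends Serializable { }
interface Result extends Serializable { }

// One concrete command per message type, instead of one RemoteService method per message.
class JoinGame implements Action<JoinGameResult> {
    String playerName;
    String gameName;
}

class JoinGameResult implements Result {
    boolean joined;
}

// The single service method the client calls for every message type.
interface GameService {
    <R extends Result> R execute(Action<R> action);
}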
Messaging takes more programmer time and creates a more obfuscated interface. Using remote service methods is cleaner and faster. If you think there are too many then you can split your service into multiple services. You could have a service for high scores, a service for player records, and a service for the actual game.
The only advantage I can see with messaging is that it could be slightly more portable if you were to move away from a Java RPC environment but that would be a fairly drastic shift.
