Recently I dove into the world of JMX, trying to instrument our applications and expose some operations through a custom JMXClient. The work of figuring out how to instrument the classes without having to change much about our existing code is already done. I accomplished this using a DynamicMBean implementation. Specifically, I created a set of annotations, which we decorate our classes with. Then, when objects are created (or initialized, if they are used as static classes), we register them with our MBeanServer through a static class that builds a DynamicMBean for the class and registers it. This has worked out beautifully when we just use JConsole or VisualVM: we can execute operations and view the state of fields, just as we should be able to. My question is more geared toward creating a semi-realtime JMXClient like JConsole.
The biggest problem I'm facing here is how to make the JMXClient report the state of fields in as close to realtime as I can reasonably get, without having to modify the instrumented libraries to push notifications (e.g. in a setter method of some class, set the field, then fire off a JMX notification). We want the classes to be all but entirely unaware they are being instrumented. If you check out JConsole while inspecting an attribute, there is a refresh button at the bottom of the screen that refreshes the attribute values. The value it displays to you is the value retrieved when that attribute was loaded into the view, and it won't ever change without using the refresh button. I want this to happen on its own.
I have written a small UI which shows some data about connection states, and a few fields on some instrumented classes. In order to make those values reflect the current state, I have a Thread which spins in the background. Every second or so the thread attempts to get the current values of the fields I'm interested in, and the UI gets updated as a result. I don't really like this solution very much, as it's tricky to write the logic that updates the underlying models, and even trickier to update the UI in a way that doesn't cause strange bugs (using Swing).
I could also write an additional section of the JMXAgent on our application side, with a single thread that runs through the list of DynamicMBeans that have been registered, determines whether the values of their attributes have changed, then pushes notifications. This would move the notification logic out of the instrumented libraries, but still puts more load on the applications :(.
I'm just wondering if any of you have been in this position with JMX, or something else, and can point me toward a design methodology for the JMXClient, or really give any other advice that could make this solution more elegant than the one I have.
Any suggestions you guys have would be appreciated.
If you don't want to change the entities then something is going to have to poll them. Either your JMXAgent or the JMX client is going to have to request the beans every so often. There is no way to get around this performance hit, although since you are just calling a bunch of getters, I don't think it's going to be very expensive. Certainly your JMXAgent polling would be better than the JMX client polling all of the time. But if the client is polling all of the beans anyway, then the cost may be exactly the same.
You would not need to do the polling if the objects could call the agent to say that they have been changed or if they supported some sort of isDirty() method.
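If you do end up polling from the client side, a minimal sketch of what that looks like (service URL and bean/attribute names are hypothetical):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class AttributePoller {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"); // hypothetical
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("com.example:type=Connection"); // hypothetical

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    // One remote round trip fetches several attributes at once.
                    System.out.println(conn.getAttributes(name, new String[] {"State", "QueueDepth"}));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, 0, 1, TimeUnit.SECONDS);
        }
    }

Batching the attribute names into a single getAttributes call keeps the per-poll cost down to one round trip per MBean.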
In our systems, we have a metrics system that the various components use. Each class increments its own metric, and the metrics are wired into a persister. You can request the metric values using JMX or persist them to disk or the wire. By using a Metric type, there is separation between the entity that does the counting and the entities that need access to all of the metric values.
By going to a registered Metric object model, your GUI could then query the MetricRegistrar for all of the metrics and display them via JMX, HTML, or whatever. So your entities would just do metric.increment() or metric.set(...) and the GUI would query the metric whenever it needed the value.
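A minimal sketch of what such a Metric/MetricRegistrar pair could look like (names hypothetical, not our actual implementation):

    import java.util.Collection;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.AtomicLong;

    public final class MetricRegistrar {

        public static final class Metric {
            private final String name;
            private final AtomicLong value = new AtomicLong();

            Metric(String name) { this.name = name; }
            public void increment()        { value.incrementAndGet(); }
            public void set(long newValue) { value.set(newValue); }
            public long get()              { return value.get(); }
            public String name()           { return name; }
        }

        private static final ConcurrentMap<String, Metric> METRICS = new ConcurrentHashMap<>();

        // Entities register once, then just call metric.increment() / metric.set(...).
        public static Metric register(String name) {
            return METRICS.computeIfAbsent(name, Metric::new);
        }

        // The GUI (JMX, HTML, whatever) pulls values whenever it needs them.
        public static Collection<Metric> all() { return METRICS.values(); }
    }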
Hope something here helps.
Being efficient here means staying inside the MBean server that contains the beans you're looking at. What you want is a way to convert the MBeans that don't know how to issue notifications into MBeans that do.
For watching numeric and string attributes, you can use the standard MBeans in the monitor package. Instantiate those in the MBean server that contains the beans you actually want to watch, and then set the properties appropriately. You can do this without adding code to the target because the monitor package is standard in the JVM. The monitor beans will watch the objects you select for changes and will emit change notifications only when actual changes are observed. Use setGranularityPeriod to tell the monitor beans how often to look at the target.
Once the monitor beans are in place, just register for the MonitorNotifications that will be created upon change.
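For example, for a String attribute, a StringMonitor from the JDK's javax.management.monitor package can be set up like this (target ObjectName and attribute names are hypothetical; numeric attributes would use GaugeMonitor or CounterMonitor instead):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import javax.management.monitor.StringMonitor;

    public class MonitorSetup {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // Hypothetical target: an already-registered MBean with a String attribute "State".
            ObjectName target = new ObjectName("com.example:type=Connection");

            StringMonitor monitor = new StringMonitor();
            monitor.addObservedObject(target);
            monitor.setObservedAttribute("State");
            monitor.setStringToCompare("CONNECTED"); // reference value
            monitor.setNotifyDiffer(true);           // notify when the observed value differs
            monitor.setGranularityPeriod(1000);      // sample once per second

            ObjectName monitorName = new ObjectName("monitors:type=StringMonitor,attr=State");
            mbs.registerMBean(monitor, monitorName);
            monitor.start();

            // Subscribe to the MonitorNotifications emitted on change.
            mbs.addNotificationListener(monitorName,
                    (notification, handback) -> System.out.println(notification.getMessage()),
                    null, null);
        }
    }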
Not a solution per se, but you can simplify your polling-to-event translator JMXAgent implementation using Spring Integration. It has something called a JMX Attribute Polling Channel, which seems to fulfill your need. There is an example here.
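A rough Java-config sketch of that idea, assuming Spring Integration's AttributePollingMessageSource (bean and attribute names are hypothetical):

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.InboundChannelAdapter;
    import org.springframework.integration.annotation.Poller;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.integration.core.MessageSource;
    import org.springframework.integration.jmx.AttributePollingMessageSource;
    import org.springframework.messaging.MessageHandler;

    @Configuration
    @EnableIntegration
    public class JmxPollingConfig {

        // Polls the attribute once per second and emits its value as a message.
        @Bean
        @InboundChannelAdapter(channel = "jmxUpdates", poller = @Poller(fixedRate = "1000"))
        public MessageSource<?> stateSource() throws Exception {
            AttributePollingMessageSource source = new AttributePollingMessageSource();
            source.setServer(ManagementFactory.getPlatformMBeanServer());
            source.setObjectName(new ObjectName("com.example:type=Connection")); // hypothetical
            source.setAttributeName("State");                                    // hypothetical
            return source;
        }

        // Downstream consumer; in a real client this would update your UI model.
        @Bean
        @ServiceActivator(inputChannel = "jmxUpdates")
        public MessageHandler stateHandler() {
            return message -> System.out.println("State is now: " + message.getPayload());
        }
    }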
We are running a setup locally where we start two instances of an Axon application. The following properties are set in application.yml:
    axon:
      eventhandling:
        processors:
          SomeProcessorName:
            initialSegmentCount: 2
            threadCount: 1
            mode: TRACKING
So both nodes have a single thread and they should each process a segment. They both connect to AxonServer. How do the two instances coordinate segment claims?
If I start both of these applications using an in-memory database, I can see in AxonServer that they both attempt to claim segment 0 and that segment 1 is claimed by neither. (We get a duplicated claim/unclaimed segment warning). If they connect to the same database, this does not happen, instance 1 claims segment 0, instance 2 claims segment 1.
Am I then correct in assuming that identical processors have to share a database in order for this to work properly? I can't find this information immediately in the reference docs.
Does this then also mean that if I would hypothetically want to replicate a projection model for performance reasons (e.g. a database server in the US and another one in the EU), this would not work properly?
To clarify: I would want both databases to build an identical query model that could both be queried separately. As it is right now (assuming that we could run two nodes on two databases), node 1 would only process events for segment 0, node 2 would only process events for segment 1. If I understand this correctly, this means that both databases only contain half of the information of the query model.
So in order to pull this off, I would have to create another near-identical codebase, with the only difference being the processor name?
I think I can give some guidance in this area.
Axon Server does not provide coordination between the Tracking Tokens of TrackingEventProcessors at this point in time.
Thus, coordination of this part is purely in your application environment, or differently put, with the Axon Server client.
The most pragmatic approach would be to share the underlying storage solution for your TokenStore between both applications, so your assumption on this part is correct.
Current implementations of the TokenStore are indeed database-based. Nothing stops you from coming up with a distributed solution of this though, as this is all open source and freely adjustable.
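For illustration, a sketch of wiring a shared JPA-backed TokenStore against the Axon 4 APIs (note that the axon-spring-boot-starter typically auto-configures this for you when a shared DataSource is present):

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import org.axonframework.common.jpa.SimpleEntityManagerProvider;
    import org.axonframework.eventhandling.tokenstore.TokenStore;
    import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
    import org.axonframework.serialization.Serializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class TokenStoreConfig {

        @PersistenceContext
        private EntityManager entityManager; // backed by the database both nodes share

        @Bean
        public TokenStore tokenStore(Serializer serializer) {
            return JpaTokenStore.builder()
                    .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                    .serializer(serializer)
                    .build();
        }
    }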
I do not completely follow your hypothetical suggestion that:
Does this then also mean that if I would hypothetically want to replicate a projection model for performance reasons (e.g. a database server in the US and another one in the EU), this would not work properly?
Well, this can work properly, but I think segmenting a given TrackingEventProcessor's TrackingToken is not the way to go for this part.
This solution is intended to share the work load of updating a single Query Model.
The 'work load' in this scenario is the Event Stream by the way.
If you're looking to replicate a given Query Model by means of reading the Event Stream, I'd indeed suggest having a second TrackingEventProcessor which has an identical Event Handling Component underneath.
Note that this should not require you to 'replicate the code base'.
You should merely need to register two Event Handling Components to two distinct TrackingEventProcessors.
If you are using Spring Boot for configuration, all of this is typically abstracted away from you. But if you take a look at the EventProcessingConfigurer, you should be able to find a fair API describing how to achieve this. If things aren't clear in that area, I'd suggest introducing a separate issue, as the topic somewhat diverges from the original question.
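A sketch of what that could look like against the Axon 4 EventProcessingConfigurer (the handler class and processor names are hypothetical):

    import org.axonframework.config.EventProcessingConfigurer;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ProjectionConfig {

        @Autowired
        public void configure(EventProcessingConfigurer configurer) {
            // Two instances of the same (hypothetical) handler, e.g. each wired to its own datasource.
            MyProjectionHandler usHandler = new MyProjectionHandler(/* US datasource */);
            MyProjectionHandler euHandler = new MyProjectionHandler(/* EU datasource */);

            configurer.registerEventHandler(config -> usHandler);
            configurer.registerEventHandler(config -> euHandler);

            // Route each instance to its own processing group...
            configurer.assignHandlerInstancesMatching("us-projection", handler -> handler == usHandler);
            configurer.assignHandlerInstancesMatching("eu-projection", handler -> handler == euHandler);

            // ...and give each group its own tracking processor (and thus its own token).
            configurer.registerTrackingEventProcessor("us-projection");
            configurer.registerTrackingEventProcessor("eu-projection");
        }
    }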
Hoping this is sufficient for you to proceed #MatthiasVanEeghem!
If I want to update a cache every minute, or do something else every hour, where should I put my code (Java)? I assume not in the servlets. Can you help me with this?
You need to use cron jobs:
Scheduled Tasks With Cron for Java
This is exactly what they have been designed for.
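For GAE/Java, the schedule lives in a cron.xml file deployed alongside your app (WEB-INF/cron.xml); a minimal sketch, with hypothetical URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <cronentries>
      <cron>
        <url>/tasks/refresh-cache</url>
        <description>Refresh the cache</description>
        <schedule>every 1 minutes</schedule>
      </cron>
      <cron>
        <url>/tasks/hourly-job</url>
        <description>Do something else</description>
        <schedule>every 1 hours</schedule>
      </cron>
    </cronentries>

At each scheduled time, App Engine issues an HTTP GET to the given URL, which you map to a servlet in web.xml like any other request.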
The answer by Andrei Volgin is correct, and you need to pursue the link.
However, I want to address the 'not in the servlets' part of your question. I think you are asking, from a design perspective, whether the code should reside inside the servlet class. I have answered this for myself recently.
The way Crons and Tasks are implemented by GAE, the code will be called via servlets, as these are background URL calls. So, theoretically, the code can be in the servlet class itself. If you are using a framework like Spring, you will probably have one entry point servlet and your own handlers/managers/services. In this case, you can write the code in the handler.
In my project, I created a single entry point servlet for all UI related processing. When I needed to implement the first Task Queue I created another entry point servlet for the queues/crons and then coded inside new handlers.
In general, your app design would look something like:
UI ---> Servlet Entry Point 1 ---> Generic Business Logic Handler ---> Specific Business Logic Handler --> System Services Handler ---> System Services
Instead of the UI, we now have Queues/Crons calling the system, but generally, as in my case, the cron calls code that is more 'internal'. For example, send-mail is implemented as a queued task which needs to directly call the System Services Handler, bypassing the two business logic layers. Similarly, ftp-today's-transactions is a cron that needs to directly call System Services, bypassing the business logic layers.
It makes sense NOT to directly call System Services from servlet entry point 1 just because you happen to have it at hand and configured in web.xml. It makes more sense to create another entry point for queues and crons, which are more 'internal'.
The code then resides in the next-level classes (sometimes called Handlers), and you can continue to maintain the hierarchy of layers if you are using packages to enforce it.
You then won't feel bad about calling something system-level directly from the servlet level, as this will be a specifically secured, separate access interface that is defined to call directly.
Just to make it more intuitive, my two servlets are called
Thin - Thin Http Interface on NudeBusinessObjects [All BOs extend this, and there is a non Http interface]
Thiq - Thiq Http Interface on Queues
Thin just ensures the required parameters are present and passes them to the handler. It always calls com.mybusiness classes, which in turn call com.mysystem classes if they need to.
Thiq has more code, needs secure credentials even for automatic calls, does more complicated validations, and generally has defined high-level behaviour for failures across crons/tasks. It always calls com.mysystem classes.
Just my two cents. It isn't too big a thing, and if you keep only one entry point and achieve the same effect by writing things in handlers, or even servlets, it isn't the end of the world. It just looks ugly when you make an architecture diagram.
I have a scenario I'd like to get your input on. We've nearly decided which route we're going to take, but I'm curious what some other opinions regarding a solution are.
Our program is a converter service that sits between two larger systems: System A makes a copy of a message and sticks it on a WebSphere queue; JMS picks it up and starts our service by calling the onMessage method in the Converter class; we do some processing and give it back to JMS, which sticks it on another queue to System B.
We're looking at the best way to capture that input message as soon as it hits our onMessage method and hold onto it throughout our program's entire process. This way if we hit an error, we can print the message that caused said error in our stack-trace log to assist with troubleshooting.
During my research, I came across four methods of obtaining this persistence:
1) Save to a temporary file.
2) Global variable/Singleton.
3) Wrapper class.
4) Spring's dependency injection methods.
The solution we're leaning towards is (ominous music) using a global variable. We're using the following known facts to drive our decision:
It is only a single String with a max of 1000 characters, so the memory impact is negligible.
Only one class will ever have access to the setter (the Consumer class it's inside)
Every other instance will only access the getter.
Clearest/simplest code, easy for someone following to understand.
Our service will never become multi-threaded.
Only one instance of our service will run at one time on a given server.
The variable will be cleared and overwritten every time a new message comes through.
I know the general opinion is that global variables are very very bad, but I've always been of the opinion that global variables aren't inherently bad, they're just ridiculously easy to use in a bad way. We're of the opinion that this is the one instance where, being mindful of the dangers of global variables, they're the right choice. Your thoughts?
It should be noted that we can't add any libraries to our environment, so we're stuck with whatever we can do with Java and Spring.
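For concreteness, the kind of holder we're leaning toward is nothing more than this (a sketch; names hypothetical):

    public final class CurrentMessageHolder {

        private static volatile String currentMessage; // volatile is cheap insurance, even single-threaded

        private CurrentMessageHolder() {}

        // Only the Consumer calls this, at the top of onMessage(...).
        public static void set(String message) { currentMessage = message; }

        // Everything else only reads, e.g. when building the stack-trace log entry.
        public static String get() { return currentMessage; }
    }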
I am working on a J2EE webapp divided into several modules. I have some metadata, such as user name and preferences, that I would like to access from everywhere in the app. I would maybe also like to gather data similar to logging information but specific to a request, and store it in that metadata, so that I could optionally send it back as debug information to the user.
Aside from passing a generic context object through every method, from the upper presentation classes down to the DAOs, or using AOP, the only solution that came to mind was using a ThreadLocal "Context" object (very similar to a session, BTW) and adding a filter that binds it on the incoming request and unbinds it on the response.
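Something like this, as a sketch (names hypothetical):

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public final class RequestContext {
        private static final ThreadLocal<RequestContext> CURRENT = new ThreadLocal<>();
        private final Map<String, Object> data = new HashMap<>();

        public static RequestContext current()   { return CURRENT.get(); }
        static void bind(RequestContext context) { CURRENT.set(context); }
        static void unbind()                     { CURRENT.remove(); }

        public Object get(String key)             { return data.get(key); }
        public void put(String key, Object value) { data.put(key, value); }
    }

    // Servlet 4.0+, where Filter.init/destroy have default implementations.
    class RequestContextFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            RequestContext.bind(new RequestContext());
            try {
                chain.doFilter(req, res);
            } finally {
                RequestContext.unbind(); // containers reuse threads, so always clean up
            }
        }
    }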
But such a thing feels a little hacky, since it breaks several patterns and could make things complicated when it comes to testing and debugging, so I wanted to ask: in your experience, is it OK to proceed like this?
ThreadLocal is a hack to make up for bad design and/or architecture. It's a terrible practice:
It's a pool of one or more global variables and global variables in any language are bad practice (there's a whole set of problems associated with global variables - search it on the net)
It may lead to memory leaks in any J2EE container that manages its threads, if you don't handle it well.
What's even worse practice is to use the ThreadLocal in the various layers.
Data communicated from one layer to another should be passed using Transfer Objects (a standard pattern).
It's hard to think of a good justification for using ThreadLocal. Perhaps if you need to communicate some values between 2 layers that have a third/middle layer between them, and you don't have the means to make changes to that middle layer. But if that's the case, I would look for a better middle layer.
In any case, if you store the values at one specific point in the code and retrieve them at another single point, then it may be excusable; otherwise you just never know what side effects any executing method may have on the values in the ThreadLocal.
Personally I prefer passing a context object, as the fact that the same thread is used for processing is an artifact of the implementation, and you shouldn't rely on such artifacts. The moment you want to use other threads, you'll hit a wall.
If those states are encapsulated in a Context object, I think that's clean enough.
When it comes to testing, the best tool is dependency injection. It allows you to inject fake dependencies into the object under test.
And all dependency injection frameworks (Spring, CDI, Guice) have the concept of a scope (where request is one of these scopes). Under the hood, beans stored in the request scope are indeed associated with a ThreadLocal variable, but this is all done by the dependency injection framework.
What I would thus do is use a DI framework, which would make request-scoped objects available anywhere, but without having to look them up (which would break testability). Just inject a request-scoped object where you want to use it, and the DI framework will retrieve it for you.
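For illustration, a sketch with Spring's @RequestScope (names hypothetical); the ThreadLocal plumbing and per-request cleanup stay inside the framework:

    import org.springframework.stereotype.Component;
    import org.springframework.stereotype.Service;
    import org.springframework.web.context.annotation.RequestScope;

    // One instance per HTTP request; proxied so it can be injected into singletons.
    @Component
    @RequestScope
    public class RequestMetadata {
        private String userName;
        public String getUserName() { return userName; }
        public void setUserName(String userName) { this.userName = userName; }
    }

    @Service
    class AccountDao {
        private final RequestMetadata metadata;

        AccountDao(RequestMetadata metadata) { // a scoped proxy is injected here
            this.metadata = metadata;
        }

        void doWork() {
            // Resolves to the current request's instance at call time.
            String user = metadata.getUserName();
            // ...
        }
    }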
You must know that a servlet container can / will re-use threads for requests so if you do use ThreadLocals, you'll need to clean up after yourself once the request is finished (perhaps using a filter)
If you are the only developer on the project and you think you gain something: just do it! Because it is your time. But be prepared to revert the decision and reorganize the code base later, as should always be the case.
Let's say there are ten developers on the project. Everybody might like to have their own thread-local variable to pass on parameters like currency, locale, roles; maybe it even becomes a HashMap....
I think, in the end, not everything that is feasible should be done. Complexity will strike back at you....
ThreadLocal can lead to memory leaks if you do not clear it manually (via remove()) once it's out of scope.
I have a lot of existing data in my database already, and want to develop a points mechanism that computes a score for each user based on what actions they do.
I am implementing this functionality in a pluggable way, so that it is independent of the main logic, and relies on Spring events being sent around, once an entity gets modified.
The problem is what to do with the existing data. I do not want to start collecting points from now, but rather include all the data until now.
What is the most practical way to do this? Should I design my plugins to provide an index() method, which would force my system to fetch every single entity from the database, send an EntityDirtyEvent for each one to fire the points plugins, and then update it so the points get saved next to each entity? That could result in a lot of overhead, right?
The simplest thing would be to create a complex stored procedure and have index() call that stored procedure. That, however, also seems like a bad idea. Since I will have to write the logic for computing the points in Java anyway, why have it once again in SQL? Also, in general I am not a fan of splitting business logic across different layers.
Has anyone done this before? Please help.
First let's distinguish between the implementation strategy and business rules.
Since you already have the data, consider obtaining results directly from the data. This forms the data domain model. Design the data model to store all your data. Then, create a set of queries, views and stored procedures to access and update the data.
Once you have those views, use a data access library such as Spring JDBC Template to fetch this data and represent them into java objects (lists, maps, persons, point-tables etc).
What you have completed thus far does not change much, irrespective of what happens in the upper layers of the system. This is called Model.
Then, develop a rule base or logic implementation which determines, for given inputs, user actions, data conditions, or any other conditions, what data is needed. In a mathematical sense, this is like a matrix. In a programming sense, this would be a set of logic statements: if this and this and this is true, then get this data, else get that data, etc. This encompasses the logic in your system. Hence it is called the "Controller".
Do not move this logic into the queries/stored procedure/views.
Then finally develop a front-end or "console" for this. In the simplest case, develop a console input system, which takes a .. and displays a set of results. This is your "view" of the system.
You can eventually develop the view into a web application. The above command-line view can still be viable in the form of a RESTful API server.
I think there is one problem here to be considered: as I understand it, there is a huge amount of data in the database already, so creating only one mechanism to calculate the point system may not be the best approach.
In fact, if you don't want to start collecting points from scratch but rather include all the existing data, you must process and calculate the information you have now. Yes, the first time you run this it can result in overhead, but as you said, you need this data calculated.
On the other hand, you may include another mechanism that watches for changes in an entity and launches a different process capable of calculating the new point difference that applies to this particular modification.
So you can use one service responsible for calculating the point system for a single entity, and another (which may take longer to finish) capable of calculating the global points. And if it doesn't need to be calculated in real time, you can create a scheduled job responsible for launching it.
Finally, I know it's not a good approach to split the business logic between two layers (DB + Java), but sometimes it is a requirement, for example, if you need to reply quickly to a request that ultimately works with a lot of records. I've found cases where there was no option other than adding business logic to the database (as stored procedures, etc.) to manage a lot of data and return the final result to the browser client (e.g. a calculation that runs at one specific time).
You seem to be heading in the right direction. You know you want your "points" thing decoupled from the main application. Since it is implied you are already using Hibernate (by the tag!), you can tap into the Hibernate event system (see here, section 14.2). Depending upon the size/complexity of your system, you can plug in your points calculations here (if it is not a large/complex system), or you can publish your own event to be picked up by whatever software is listening.
The point in either design approach is that neither knows or cares about your point calculations. If you are, as I am guessing, trying to create a fairly general-purpose plugin mechanism, then you publish your own events to that system from this tie-in point. Then if you have no plug-ins on a given install/setup, no one gets/processes the events. If you have multiple plug-ins on another install/setup, then each can decide what processing it needs to do based upon the event received. In the case of the "points plugin", it would calculate its point value and store it. No stored proc required....
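As a sketch of the tie-in point (the listener SPI has shifted between Hibernate versions, so treat this as Hibernate 5.x-flavoured wiring rather than the one true way; EntityDirtyEvent is the event type from the question, assumed to wrap the entity):

    import org.hibernate.SessionFactory;
    import org.hibernate.engine.spi.SessionFactoryImplementor;
    import org.hibernate.event.service.spi.EventListenerRegistry;
    import org.hibernate.event.spi.EventType;
    import org.hibernate.event.spi.PostUpdateEvent;
    import org.hibernate.event.spi.PostUpdateEventListener;
    import org.hibernate.persister.entity.EntityPersister;
    import org.springframework.context.ApplicationEventPublisher;

    public class EntityDirtyPublisher implements PostUpdateEventListener {

        private final ApplicationEventPublisher publisher;

        public EntityDirtyPublisher(ApplicationEventPublisher publisher) {
            this.publisher = publisher;
        }

        @Override
        public void onPostUpdate(PostUpdateEvent event) {
            // Republish as the application's own event; the points plugin listens for this.
            publisher.publishEvent(new EntityDirtyEvent(event.getEntity()));
        }

        @Override
        public boolean requiresPostCommitHanding(EntityPersister persister) {
            return false; // (method name sic in the SPI) fire immediately, not only after commit
        }

        // Registration, e.g. at startup:
        public static void register(SessionFactory sessionFactory, ApplicationEventPublisher publisher) {
            ((SessionFactoryImplementor) sessionFactory)
                    .getServiceRegistry()
                    .getService(EventListenerRegistry.class)
                    .appendListeners(EventType.POST_UPDATE, new EntityDirtyPublisher(publisher));
        }
    }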
You're trying to accomplish "bootstrapping." The approach you choose should depend on how complicated the point calculations are. If stored procedures or plain update statements are the simplest solution, do that.
If the calculations are complicated, write a batch job that loads your existing data, probably orders it oldest first, and fires the events corresponding to that data as if they've just happened. The code which deals with an event should be exactly the same code that will deal with a future event, so you won't have to write any additional code other than the batch jobs themselves.
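A sketch of such a batch job (the repository name and query method are hypothetical; EntityDirtyEvent is the event type from the question):

    import org.springframework.context.ApplicationEventPublisher;
    import org.springframework.stereotype.Service;

    @Service
    public class PointsBackfillJob {

        private final EntityRepository repository;          // hypothetical DAO
        private final ApplicationEventPublisher publisher;

        public PointsBackfillJob(EntityRepository repository, ApplicationEventPublisher publisher) {
            this.repository = repository;
            this.publisher = publisher;
        }

        public void run() {
            // Oldest first, through the exact same event path a live modification takes.
            repository.findAllOrderByCreatedAsc()
                      .forEach(entity -> publisher.publishEvent(new EntityDirtyEvent(entity)));
        }
    }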
Since you're only going to run this thing once, go with the simplest solution, even if it is quick and dirty.
There are two different ways.
One you already know: poll the database for changed data. In that case you are hitting the database even when there may be no changes, and it may slow down your process.
The second approach: whenever a change happens in the database, the database fires an event. You can do that using CDC (Change Data Capture). It minimizes the overhead.
You can look for more options in Spring Integration.