Start the target program with the arguments -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=4404.
Debug the target program using the com.sun.jdi classes. The classesByName method of the VirtualMachine class does not return classes loaded by a custom class loader.
In the target process I can get the class with
Class.forName("Script1", false, clazz.getClassLoader())
but the VirtualMachine class only has this method:
List<ReferenceType> classesByName(String className);
How should I do this?
Monitoring classloading in JDI
For the last few weeks, I have been building a Java process monitoring tool based on the Java Debug Interface. Although I've done much of this work before, it has been a few years, and so now I'm retracing my steps. As I remember the details and pitfalls, I've been posting my notes in the hope that you'll find them useful.
Today I'm going to talk about ClassPrepareEvents, after a little background. As you probably already know, you can attach a debugger to an already-running Java process, or launch the target process itself from your debugger (using various command-line switches). In my project, I'm always going to be attaching to a running process, as the point is to collect process data on an as-needed basis. The reason JDI's ClassPrepareEvent is interesting is that, when you launch a debug target process, or even when you attach to an already-running process, it's likely that some of your desired breakpoints lie in classes which have not yet been loaded.
In my usual scenario, I call the com.sun.jdi.VirtualMachine's allClasses() method to get a list of all loaded ReferenceTypes. One way to think of a ReferenceType is as a chunk of a Java class definition. If your Java class has inner classes, then they will be broken out by JDI into separate ReferenceTypes. Each ReferenceType contains a collection of line locations; these correspond to lines of code on which breakpoints can be set and are identified by (among other things) the source-code line number. If a line of source code cannot be the target of a breakpoint, then there will not be a line location for it in the ReferenceType. In my debugger-based applications, I step through the line locations of all the ReferenceTypes, matching up line locations with breakpoint specifications, and then register my breakpoint requests.
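A rough sketch of that matching step, assuming a single (source file, line number) breakpoint specification; the class, method, and variable names are illustrative, not the author's actual code:

import com.sun.jdi.AbsentInformationException;
import com.sun.jdi.Location;
import com.sun.jdi.ReferenceType;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.request.BreakpointRequest;
import com.sun.jdi.request.EventRequestManager;

public class BreakpointScanner {
    // Walk every loaded ReferenceType and register a breakpoint request for
    // each line location that matches the (sourceName, lineNumber) spec.
    static void registerBreakpoints(VirtualMachine vm, String sourceName, int lineNumber) {
        EventRequestManager erm = vm.eventRequestManager();
        for (ReferenceType type : vm.allClasses()) {
            try {
                for (Location loc : type.allLineLocations()) {
                    if (sourceName.equals(loc.sourceName()) && loc.lineNumber() == lineNumber) {
                        BreakpointRequest request = erm.createBreakpointRequest(loc);
                        request.enable();
                    }
                }
            } catch (AbsentInformationException e) {
                // Type was compiled without debug info; no line locations to match.
            }
        }
    }
}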
As you can guess, I have a potential problem: what should I do if a class I need has not yet been loaded at the time I'm constructing my breakpoint requests? The answer is: JDI's ClassPrepareEvent. The entry point for using this part of the API is the EventRequestManager's createClassPrepareRequest() method. Having made our request, the same event-listener loop we use to wait for breakpoint events can also be used to wait for class prepare events (see the JVM specification for a definition of class preparation).
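In code, the request side looks roughly like this; the class-name filter and the shape of the loop are assumptions, not the author's exact implementation:

import com.sun.jdi.VirtualMachine;
import com.sun.jdi.event.BreakpointEvent;
import com.sun.jdi.event.ClassPrepareEvent;
import com.sun.jdi.event.Event;
import com.sun.jdi.event.EventQueue;
import com.sun.jdi.event.EventSet;
import com.sun.jdi.request.ClassPrepareRequest;

public class PrepareListener {
    static void listen(VirtualMachine vm) throws InterruptedException {
        // Ask the target VM to report class preparation; the filter narrows
        // events to the packages we care about (illustrative pattern).
        ClassPrepareRequest prepare = vm.eventRequestManager().createClassPrepareRequest();
        prepare.addClassFilter("com.example.*");
        prepare.enable();

        // The same loop that services breakpoint events also sees prepare events.
        EventQueue queue = vm.eventQueue();
        while (true) {
            EventSet events = queue.remove();
            for (Event event : events) {
                if (event instanceof ClassPrepareEvent) {
                    ClassPrepareEvent cpe = (ClassPrepareEvent) event;
                    // cpe.referenceType() is now safe to scan for line locations
                    // and breakpoint requests.
                } else if (event instanceof BreakpointEvent) {
                    // Existing breakpoint handling goes here.
                }
            }
            events.resume();
        }
    }
}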
One thing I remember from my previous development on this API is that there is a timing risk here. You probably want to create the class prepare request before you iterate over the list of currently-loaded classes. The reason is that you don't want to fall into this trap:
1. Iterate over the set of currently-loaded classes, processing them and making breakpoint requests.
2. Suddenly, a class you need is loaded!
3. You register your class-prepare request and start getting events as classes are loaded, but you miss the class that loaded between step #1 and step #3.
Here's another possible trap:
1. Register for class-prepare events so you don't get caught by the above issue.
2. Iterate over the currently-loaded classes, requesting breakpoints as necessary.
3. Process newly-loaded classes, requesting breakpoints as necessary.
The problem with this second approach is that you may process the same breakpoint twice. Why? By the time you iterate over the currently-loaded classes, some of the classes in that list are very likely going to be classes which have shown up in your class-prepare listener. Neither of these problems can be fixed by slapping a synchronized keyword somewhere.
Whether you launch your target application from your debugger or attach to it after the fact, you will have to deal with some variation of this issue. The way I deal with it is to add some state to the class I use to define each breakpoint specification. As each corresponding loaded class is found and the breakpoint request is made, I set a flag on the specification so that I know the request was registered. Further, I follow the second approach outlined above (better to have duplicates than to miss one). If I see a class-prepare event for a class I've already processed from the VM's ReferenceType list, then I simply skip over it. I do the same for the reverse situation, in which my list of ReferenceTypes contains ReferenceTypes which I have just processed in my ClassPrepareEvent listener.
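A minimal sketch of that bookkeeping; the class and field names are illustrative, not the author's actual code:

import com.sun.jdi.ReferenceType;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class BreakpointBookkeeping {
    // One entry per breakpoint specification; `requested` is the flag set
    // once the corresponding BreakpointRequest has been registered.
    static class BreakpointSpec {
        final String className;
        final int lineNumber;
        volatile boolean requested;

        BreakpointSpec(String className, int lineNumber) {
            this.className = className;
            this.lineNumber = lineNumber;
        }
    }

    // Types already handled, whichever path (the allClasses() scan or a
    // ClassPrepareEvent) saw them first.
    private final Set<String> processedTypes = ConcurrentHashMap.newKeySet();

    void process(ReferenceType type, List<BreakpointSpec> specs) {
        if (!processedTypes.add(type.name())) {
            return; // duplicate: skip the type we have already processed
        }
        for (BreakpointSpec spec : specs) {
            if (!spec.requested && spec.className.equals(type.name())) {
                // Match line locations and create the breakpoint request here.
                spec.requested = true;
            }
        }
    }
}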
Finally, one issue I have not looked at before (either for this development effort, or in my previous development in this area) -- what happens when a class is unloaded, especially a class on which you have registered breakpoint requests. For example, will a registered breakpoint request prevent a class from being unloaded? Do you care about a stranded breakpoint request if the class isn't even loaded? (Answer: yes, I suppose, if it gets reloaded and you no longer have a valid breakpoint request for it). JDI does have a ClassUnloadEvent, for which you can also register a listener. As I said, I have not dealt with this (possible) issue, having never seen a target class get unloaded before, but it's good to know "there's an API for that".
I am struggling a little bit to understand how to implement a Hystrix Metrics Publisher plugin.
Having read the documentation, it is still not clear how things are supposed to work together.
My goal is to write a plugin that will collect every metric published by Hystrix and write these metrics to a file on disk.
This file will later be collected and processed by an external tool, giving us a good historical basis of the circuits' behavior and problems.
The system where Hystrix is running is a normal Spring application. That said, I am also somewhat new to the Java platform (although I am comfortable with the Java language).
I thought that a first step towards understanding how the plugin could be implemented would be looking at the already implemented publishers. With this in mind, I looked at some of the implementations in the hystrix-contrib directory.
I have chosen hystrix-codahale-metrics-publisher and hystrix-servo-metrics-publisher.
Both of them have a main class (for Servo it is HystrixServoMetricsPublisher) which seems to register to receive all possible kinds of metrics, plus some classes to deal with each kind of metric.
By looking at what I will call the main class, I see that, for example, there is a method called getMetricsPublisherForCommand that must return an implementation of the interface HystrixMetricsPublisherCommand.
Now questions start:
Question 1: I am assuming that once a plugin is registered, every execution of every command in the context where the plugin is registered (and by "command" I mean every execution of the execute() method of every class which inherits from HystrixCommand in that context) will generate a call to the getMetricsPublisherForCommand() method of my plugin. Is that true?
If so, given that there are a lot of low-level mechanisms in Hystrix such as thread pools and others, should my getMetricsPublisherForCommand() implementation be thread-safe, or am I guaranteed to receive calls sequentially? On what thread will my getMetricsPublisherForCommand() be executed?
Question 2: By looking at the documentation I am still not sure exactly what the implementation of HystrixMetricsPublisherCommand returned by getMetricsPublisherForCommand() has to do. This is because the HystrixMetricsPublisherCommand interface only specifies a method called initialize(). If it specified a method called, say, publish(), I would conclude that the Hystrix engine calls my custom getMetricsPublisherForCommand() method to get a metrics publisher, on which it would then call publish() to perform the custom publishing. But the initialize() method seems to be called only once, when the given object is returned, and I have found no other method the engine would call afterwards.
Also, by reading the documentation, I am under the impression that the implementation of HystrixMetricsPublisherCommand returned by getMetricsPublisherForCommand() will somehow be a singleton, which completely breaks my understanding of how the thing is supposed to work.
The documentation say this:
The initialize() method will be called once-and-only-once to indicate when this instance can register with external services, start publishing metrics etc.
If you look at the Servo publisher, however, you will notice that, unless I am completely and absolutely confused, the publishing work is performed right from the constructor. Now, if initialize() is called to do some setup, how can I put my logic in the constructor where, unless the object is a singleton, it will be executed before any method, including initialize(), has had a chance to be called? On the other hand, if this is a singleton, how can its constructor run for every Hystrix command?
Maybe I have missed something, I don't know, but I need to understand conceptually what is going on here in order to implement my logic the right way. Thanks for your patience, and I hope I have made myself clear enough in this long question.
First, I recommend staying within the one (concise) question format.
Second, I recommend using an existing implementation such as the default CodaHale (now Dropwizard) Metrics implementation (which publishes to a Graphite repository for Grafana consumption, for example) to get things working.
HystrixPlugins.reset();
final WebApplicationContext springContext =
        WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext());
HystrixPlugins plugins = HystrixPlugins.getInstance();
plugins.registerCommandExecutionHook(...);
// Good idea to use properties to enable/disable metrics generally...
// Using a Spring-style example...
if (hystrixMetricsEnabled.get()) {
    plugins.registerMetricsPublisher(new HystrixCodaHaleMetricsPublisher(
            getRegistry(springContext, sce.getServletContext())));
    ...
}
Otherwise the Hystrix documentation and full source of classes involved are publicly available:
https://github.com/Netflix/Hystrix/wiki/Plugins#metricspublisher
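If you do end up writing your own publisher, the overall shape looks roughly like the sketch below (assuming the Hystrix 1.x strategy API; the file-writing part is a hypothetical stub, not a working implementation):

import com.netflix.hystrix.HystrixCircuitBreaker;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandKey;
import com.netflix.hystrix.HystrixCommandMetrics;
import com.netflix.hystrix.HystrixCommandProperties;
import com.netflix.hystrix.strategy.metrics.HystrixMetricsPublisher;
import com.netflix.hystrix.strategy.metrics.HystrixMetricsPublisherCommand;

public class FileMetricsPublisher extends HystrixMetricsPublisher {
    @Override
    public HystrixMetricsPublisherCommand getMetricsPublisherForCommand(
            HystrixCommandKey commandKey,
            HystrixCommandGroupKey commandGroupKey,
            HystrixCommandMetrics metrics,
            HystrixCircuitBreaker circuitBreaker,
            HystrixCommandProperties properties) {
        // HystrixMetricsPublisherCommand has a single method, initialize(),
        // which per the documentation quoted above is called once-and-only-once;
        // any periodic sampling of `metrics` has to be scheduled from there.
        return () -> startPollingToFile(commandKey, metrics);
    }

    // Hypothetical: schedule a task that samples the metrics object and
    // appends the values to a file on disk.
    private void startPollingToFile(HystrixCommandKey key, HystrixCommandMetrics metrics) {
    }
}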
One can load a class dynamically using this method of java.lang.Class:
public static Class<?> forName(String name, boolean initialize,
ClassLoader loader)
According to the JavaDoc, the second parameter is used to control the timing of class initialization (execution of static initialization code). If true, the class is initialized after loading and during the execution of this method; if false, initialization is delayed until the first time the class is used.
Now, I understand all that, but the docs don't say how to decide which strategy to use. Is it better to always do initialization immediately? Is it better to always delay it to first use? Does it depend on the circumstances?
Yes, it depends on circumstances, but usually it is preferred to just let classes be loaded and initialized on first use.
Cases when you might want to early initialize them (e.g. by calling forName() for them):
Static initialization blocks might perform checks for external resources (e.g. files, database connection), and if those fail, you don't even want to continue the execution of your program.
Similar to the previous: loading external, native libraries. If those fail (or are not suitable for the current platform), you might want to detect that early and not continue with your app.
Static initialization blocks might perform lengthy operations. If you don't want delays or lags later on when the classes are really needed, you can initialize them early, or on different, background threads.
If you have static configuration files where class names are specified as text, you might want to initialize/load them early to detect configuration errors/typos. Examples are logger config files, web.xml, Spring contexts, etc.
Many classes in the standard Java library cache certain data. For example, HttpURLConnection caches the HTTP user agent returned by System.getProperty("http.agent"). When it is first used, the value is cached, and if you change it later (e.g. with System.setProperty()), the new value will not be used. You can force such caching by initializing the proper classes early, protecting them from being affected by code that modifies the property later on.
Cases when you should not initialize early:
Classes which might only be needed in rare cases, or might not be needed at all throughout the run of your application. For example, a GUI application might only show the About dialog when the user selects the Help/About menu. Obviously there is no need to load the relevant classes (e.g. AboutDialog) early, because this is a rare case and in most runs the user will not need it.
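As a small illustration of the timing difference described above (class names are made up; the static block runs only when initialization actually happens):

class Config {
    static {
        System.out.println("Config static initializer runs");
    }
}

public class ForNameDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader cl = ForNameDemo.class.getClassLoader();

        // Loaded but NOT initialized: nothing is printed yet.
        Class<?> lazy = Class.forName("Config", false, cl);

        // Loaded AND initialized: the static block prints here.
        Class<?> eager = Class.forName("Config", true, cl);
    }
}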
I have a scenario I'd like to get your input on. We've nearly decided which route we're going to take, but I'm curious what some other opinions regarding a solution are.
Our program is a converter service that sits between two larger systems: System A makes a copy and sticks it on a WebSphere queue, JMS picks it up and starts our service by calling the onMessage method in the Converter class, we do some processing, give it back to JMS, and JMS sticks it on another queue to System B.
We're looking at the best way to capture that input message as soon as it hits our onMessage method and hold onto it throughout our program's entire process. This way if we hit an error, we can print the message that caused said error in our stack-trace log to assist with troubleshooting.
During my research, I came across four methods of obtaining this persistence:
1) Save to a temporary file.
2) Global variable/Singleton.
3) Wrapper class.
4) Spring's dependency injection methods.
The solution we're leaning towards is (ominous music) using a global variable. We're using the following known facts to drive our decision:
It is only a single String with a max of 1000 characters, so the memory impact is negligible.
Only one class will ever have access to the setter (the Consumer class it's inside)
Every other instance will only access the getter.
Clearest/simplest code, easy for someone following to understand.
Our service will never become multi-threaded.
Only one instance of our service will run at one time on a given server.
The variable will be cleared and overwritten every time a new message comes through.
I know the general opinion is that global variables are very very bad, but I've always been of the opinion that global variables aren't inherently bad, they're just ridiculously easy to use in a bad way. We're of the opinion that this is the one instance where, being mindful of the dangers of global variables, they're the right choice. Your thoughts?
It should be noted that we can't add any libraries to our environment, so we're stuck with whatever we can do with Java and Spring.
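A minimal sketch of the holder being described, relying on the single-threaded, single-instance constraints listed above (class and method names are illustrative):

public final class CurrentMessageHolder {
    private static String currentMessage;

    private CurrentMessageHolder() {
    }

    // Only the consumer's onMessage path calls this, once per incoming message,
    // overwriting the previous value.
    static void set(String message) {
        currentMessage = message;
    }

    // Everything else (e.g. the code building the stack-trace log) only reads.
    public static String get() {
        return currentMessage;
    }
}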
Recently I dove into the world of JMX, trying to instrument our applications and expose some operations through a custom JMXClient. The work of figuring out how to instrument the classes without having to change much about our existing code is already done. I accomplished this using a DynamicMBean implementation. Specifically, I created a set of annotations which we decorate our classes with. Then, when objects are created (or initialized, if they are used as static classes), we register them with our MBeanServer through a static class that builds a DynamicMBean for the class and registers it. This has worked out beautifully when we just use JConsole or VisualVM. We can execute operations and view the state of fields as we should be able to. My question is more geared toward creating a semi-realtime JMXClient like JConsole.
The biggest problem I'm facing is how to make the JMXClient report the state of fields in as close to realtime as I can reasonably get, without having to modify the instrumented libraries to push notifications (e.g. in a setter method of some class, set the field, then fire off a JMX notification). We want the classes to be all but entirely unaware that they are being instrumented. If you check out JConsole while inspecting an attribute, there is a refresh button at the bottom of the screen that refreshes the attribute values. The value it displays is the value retrieved when that attribute was loaded into the view, and it won't ever change without using the refresh button. I want this to happen on its own.
I have written a small UI which shows some data about connection states and a few fields on some instrumented classes. In order to make those values reflect the current state, I have a thread which spins in the background. Every second or so the thread attempts to get the current values of the fields I'm interested in, and then the UI gets updated as a result. I don't really like this solution very much, as it's tricky to write the logic that updates the underlying models, and even trickier to update the UI in a way that doesn't cause strange bugs (using Swing).
I could also write an additional section of the JMXAgent on the application side, with a single thread that runs through the list of DynamicMBeans that have been registered, determines whether the values of their attributes have changed, and then pushes notifications. This would move the notification logic out of the instrumented libraries, but still puts more load on the applications :(.
I'm just wondering if any of you have been in this position with JMX, or something else, and can guide me in the right direction for a design methodology for the JMXClient or really any other advice that could make this solution more elegant than the one I have.
Any suggestions you guys have would be appreciated.
If you don't want to change the entities then something is going to have to poll them. Either your JMXAgent or the JMX client is going to have to request the beans every so often. There is no way for you to get around this performance hit although since you are calling a bunch of gets, I don't think it's going to be very expensive. Certainly your JMXAgent would be better than the JMX client polling all of the time. But if the client is polling all of the beans anyway then the cost may be exactly the same.
You would not need to do the polling if the objects could call the agent to say that they have been changed or if they supported some sort of isDirty() method.
In our systems, we have a metrics system that the various components use. Each class increments its own metric, and it is the metrics that are wired into a persister. You can request the metric values using JMX, or persist them to disk or the wire. By using a Metric type, there is separation between the entity that is doing the counting and the entities that need access to all of the metric values.
By going to a registered Metric object type model, your GUI could then query the MetricRegistrar for all of the metrics and display them via JMX, HTML, or whatever. So your entities would just do metric.increment() or metric.set(...) and the GUI would query the metric whenever it needed the value.
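A rough sketch of that Metric/registrar split; all names are illustrative, not a particular library:

import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class MetricRegistrar {
    public static class Metric {
        private final String name;
        private final AtomicLong value = new AtomicLong();

        Metric(String name) {
            this.name = name;
        }

        public void increment() { value.incrementAndGet(); }
        public void set(long v)  { value.set(v); }
        public long get()        { return value.get(); }
        public String name()     { return name; }
    }

    private static final Map<String, Metric> METRICS = new ConcurrentHashMap<>();

    // Entities ask for (or create) their metric once, then just increment/set it.
    public static Metric register(String name) {
        return METRICS.computeIfAbsent(name, Metric::new);
    }

    // The GUI / JMX layer queries the registrar whenever it needs current values.
    public static Collection<Metric> all() {
        return METRICS.values();
    }
}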
Hope something here helps.
Being efficient here means staying inside the mbean server that contains the beans you're looking at. What you want is a way to convert the mbeans that don't know how to issue notifications into mbeans that do.
For watching numeric and string attributes, you can use the standard mbeans in the monitor package. Instantiate those in the mbean server that contains the beans you actually want to watch, and then set the properties appropriately. You can do this without adding code to the target because the monitor package is standard in the JVM. The monitor beans will watch the objects you select for changes and will emit change notifications only when actual changes are observed. Use setGranularityPeriod to tell the monitor beans how often to look at the target.
Once the monitor beans are in place, just register for the MonitorNotifications that will be created upon change.
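A hedged sketch of wiring up one of those monitor beans locally; the target ObjectName, attribute name, and thresholds are all assumptions:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.monitor.GaugeMonitor;

public class MonitorSetup {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Hypothetical target MBean exposing a numeric "OpenConnections" attribute.
        ObjectName target = new ObjectName("com.example:type=ConnectionStats");

        // Standard monitor MBean from javax.management.monitor; no change to
        // the instrumented class is needed.
        GaugeMonitor monitor = new GaugeMonitor();
        monitor.addObservedObject(target);
        monitor.setObservedAttribute("OpenConnections");
        monitor.setGranularityPeriod(1000);   // look at the target once a second
        monitor.setNotifyHigh(true);
        monitor.setNotifyLow(true);
        monitor.setThresholds(100, 10);       // notify when crossing these values

        ObjectName monitorName =
                new ObjectName("com.example:type=GaugeMonitor,target=ConnectionStats");
        server.registerMBean(monitor, monitorName);

        // Receive the MonitorNotifications emitted when thresholds are crossed.
        server.addNotificationListener(monitorName,
                (notification, handback) ->
                        System.out.println("Monitor fired: " + notification.getType()),
                null, null);

        monitor.start();
    }
}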
Not a solution per se, but you can simplify your polling-to-event translator JMXAgent implementation using Spring Integration. It has something called a JMX Attribute Polling Channel, which seems to fulfill your need.
I've always wanted to write a simple 'world' in Java that I could leave running, and then add new objects to it (objects that didn't exist at the time the world started running) at a later date, to simulate/observe different behaviours between future objects.
The problem is that I don't want to ever stop or restart the world once it's started, I want it to run for a week without having to recompile it, but have the ability to drop in objects and redo/rewrite/delete/create/mutate them over time.
The world could be as simple as a 10 x 10 array of x/y 'locations' (think chessboard), but I guess would need some kind of ticktimer process to monitor objects and give each one (if any) a chance to 'act' (if they want to).
Example: I code up World.java on Monday and leave it running. Then on Tuesday I write a new class called Rock.java (that doesn't move). I then drop it (somehow) into this already running world (which just drops it someplace random in the 10x10 array and never moves).
Then on Wednesday I create a new class called Cat.java and drop that into the world, again placed randomly, but this new object can move around the world (over some unit of time). Then on Thursday I write a class called Dog.java which also moves around, but can 'act' on another object if it's in a neighbouring location, and vice versa.
Here's the thing: I don't know what kind of structure/design I would need for the actual world class so that it knows how to detect/load/track future objects.
So, any ideas on how you would do something like this?
I don't know if there is a pattern/strategy for a problem like this, but this is how I would approach it:
I would have all of these different classes that you are planning to make be objects of some common class (maybe a WorldObject class), and then put their differentiating features in separate configuration files.
Creation
When your program is running, it would routinely check the configuration folder for new items. If it sees that a new config file exists (say Cat.config), it would create a new WorldObject, give it the features it reads from the Cat.config file, and drop that new object into the world (a rough sketch of such a watching loop appears after the Deletion section).
Mutation
If your program detects that one of these items' configuration files has changed, it finds that object in the World, edits its features and then redisplays it.
Deletion
When the program looks in the folder and sees that the config file does not exist anymore, then it deletes the object from the World and checks how that affects all the other objects.
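A rough sketch of that folder-watching loop using the JDK WatchService (the directory name and the WorldObject handling are assumptions):

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class ConfigFolderWatcher {
    public static void main(String[] args) throws Exception {
        Path configDir = Paths.get("world/config");   // hypothetical drop folder
        WatchService watcher = FileSystems.getDefault().newWatchService();
        configDir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);

        while (true) {
            WatchKey key = watcher.take();            // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = configDir.resolve((Path) event.context());
                if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
                    // Parse e.g. Cat.config, build a WorldObject, drop it into the world.
                } else if (event.kind() == StandardWatchEventKinds.ENTRY_MODIFY) {
                    // Re-read the features and update the existing WorldObject.
                } else if (event.kind() == StandardWatchEventKinds.ENTRY_DELETE) {
                    // Remove the corresponding WorldObject from the world.
                }
            }
            key.reset();
        }
    }
}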
I wouldn't bet too much on the JVM itself running forever. There are too many ways this could fail (computer trouble, unexpected out-of-memory, permgen problems due to repeated classloading).
Instead I'd design a system that can reliably persist the state of each object involved (simplest approach: make each object serializable, but that would not really solve versioning problems).
So as the first step, I'd simply implement some nice classloader-magic to allow jars to be "dropped" into the world simulation which will be loaded dynamically. But once you reach a point where that no longer works (because you need to modify the World itself, or need to do incompatible changes to some object), then you could persist the state, switch out the libraries for new versions and reload the state.
Being able to persist the state also allows you to easily produce test scenarios or replay scenarios with different parameters.
Have a look at OSGi - this framework allows installing and removing packages at runtime.
The framework is a container for so called bundles, java libraries with some extra configuration data in the jars manifest file.
You could install a "world" bundle and keep it running. Then, after a while, install a bundle that contributes rocks or sand to the world. If you don't like it anymore, disable it. If you need other rocks, install an updated version of the very same bundle and activate it.
And with OSGi, you can keep the world spinning and moving around the sun.
The reference implementation is Equinox.
BTW: "I don't know what kinda of structure/design" - at least you need to define an interface for a "geolocatable object", otherwise you won't be able to place and display it. But for the "world", it really maybe enough to know, that "there is something at coordinates x/y/z" and for the world viewer, that this "something" has a method to "display itself".
If you only care about adding classes (and not modifying) here is what I'd do:
there is an interface Entity with all business methods you need (insertIntoWorld(), isMovable(), getName(), getIcon() etc)
there is a specific package where entities reside
there is a scheduled job in your application which every 30 seconds lists the class files of the package
keep track of the classes, and for any new class attempt to load it and cast it to Entity
for any newly loaded Entity, create a new instance and call its insertIntoWorld() (a rough sketch follows below)
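A rough sketch of that discovery step, assuming the entities live as .class files in an entities package under a known directory, and using the Entity interface from the list above (directory layout and error handling are simplified):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashSet;
import java.util.Set;

public class EntityScanner {
    private final Set<String> known = new HashSet<>();
    private final File packageDir = new File("world/classes/entities");  // hypothetical layout

    // Called by the scheduled job every 30 seconds or so.
    public void scanOnce() throws Exception {
        URL[] roots = { new File("world/classes").toURI().toURL() };
        URLClassLoader loader = new URLClassLoader(roots, EntityScanner.class.getClassLoader());
        File[] files = packageDir.listFiles((dir, name) -> name.endsWith(".class"));
        if (files == null) {
            return;
        }
        for (File f : files) {
            String className = "entities." + f.getName().replace(".class", "");
            if (!known.add(className)) {
                continue;                              // already seen on a previous scan
            }
            Class<?> cls = loader.loadClass(className);
            if (Entity.class.isAssignableFrom(cls)) {
                Entity e = (Entity) cls.getDeclaredConstructor().newInstance();
                e.insertIntoWorld();                   // from the Entity interface above
            }
        }
    }
}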
You could also skip the scheduler and automatic discovery thing and have a UI control in the World where from you could specify the classname to be loaded.
Some problems:
you cannot easily update an Entity. You'll most probably need to do some classloader magic
you cannot extend the Entity interface to add a new business method, so you are bound to the contract you initially started your application with
Too long an explanation for too simple a problem.
In other words, you just want to perform dynamic class loading.
First, if you somehow know the class name, you can load the class using Class.forName(). Then you can instantiate it using Class.newInstance(); if your class has a public default constructor, that is enough. For more details, read about the reflection API.
But how do you pass the name of the new class to a program that is already running?
I'd suggest two ways.
The program may poll a predefined file. When you wish to deploy a new class, you register it, i.e. write its name into this file. Additionally, the class has to be available on the classpath of your application.
The application may poll (for example) a special directory that contains jar files. Once it detects a new jar file, it may read its contents (see JarInputStream), then define the new class using ClassLoader.defineClass(), and then call newInstance(), etc.
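A minimal sketch of the jar-polling variant, using a URLClassLoader rather than calling ClassLoader.defineClass() by hand (file and class names are made up):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class JarDropLoader {
    // Load one class from a freshly dropped jar and instantiate it via its
    // public no-arg constructor. Keep the loader referenced for as long as
    // instances from this jar are alive.
    public static Object loadFromJar(File jarFile, String className) throws Exception {
        URL[] urls = { jarFile.toURI().toURL() };
        URLClassLoader loader = new URLClassLoader(urls, JarDropLoader.class.getClassLoader());
        Class<?> cls = loader.loadClass(className);
        return cls.getDeclaredConstructor().newInstance();
    }
}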
What you're basically creating here is called an application container. Fortunately there's no need to reinvent the wheel; there are already great pieces of software out there that are designed to stay running for long periods of time, executing code that can change over time. My advice would be to pick your IDE first, and that will lead you some way toward which app container you should use (some are better integrated than others).
You will need a persistence layer, the JVM is reliable but eventually someone will trip over the power cord and wipe your world out. Again with JPA et al. there's no need to reinvent the wheel here either. Hibernate is probably the 'standard', but with your requirements I'd try for something a little more fancy with one of the graph based NoSQL solutions.
What you probably want to have a look at is the "dynamic object model" pattern/approach. I implemented it some time ago. With it you can create/modify object types at runtime that act as a kind of template for objects. Here is a paper that describes the idea:
http://hillside.net/plop/plop2k/proceedings/Riehle/Riehle.pdf
There are more papers, but I was not able to post them because this is my first answer and I don't have enough reputation. But Google is your friend :-)