How to persist Spring Boot metrics as logs? - java

I am running a Spring Boot application that records standard and custom metrics. Now I want those metrics persisted in Elasticsearch for further examination. I have a functioning ELK stack in place, so the obvious solution to me was to persist the measurements in a log file and let Filebeat collect it.
I have found an example that could have achieved this, but it was using MetricRepository, which is no longer supported.
The official documentation is not helpful: all the examples shown use some proprietary format or write into a database.
Can anyone please provide a way to persist metrics as log files?

Sounds like you just have to write a MetricWriter implementation and mark the bean with @ExportMetricWriter. It should be a piece of cake: implement the three methods by just writing to the log, and that's it.
Something like:
import java.io.Closeable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.boot.actuate.autoconfigure.ExportMetricWriter;
import org.springframework.boot.actuate.metrics.Metric;
import org.springframework.boot.actuate.metrics.writer.Delta;
import org.springframework.boot.actuate.metrics.writer.MetricWriter;
import org.springframework.stereotype.Component;

@Component
@ExportMetricWriter
public class MyMetricWriter implements MetricWriter, Closeable {

    private static final Log logger = LogFactory.getLog(MyMetricWriter.class);

    @Override
    public void increment(Delta<?> delta) {
        // record increment to log
        logger.info("increment " + delta.getName() + " by " + delta.getValue());
    }

    @Override
    public void set(Metric<?> value) {
        // record set metric to log
        logger.info("set " + value.getName() + " to " + value.getValue());
    }

    @Override
    public void reset(String name) {
        // Not implemented
    }

    @Override
    public void close() {
        // ...
    }
}
The docs mention a few other writers whose implementations you can refer to for inspiration.
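If the log lines are destined for Filebeat and Elasticsearch, it helps to emit them as structured one-line JSON rather than free text, so no extra parsing pipeline is needed. A minimal sketch (the field names here are my own assumption, not any fixed schema):

```java
// Minimal sketch: render a metric as a single JSON log line so Filebeat
// can ship it to Elasticsearch without extra parsing. The field names
// ("name", "value", "timestamp") are illustrative assumptions.
public class MetricJsonFormatter {

    public static String toJsonLine(String name, Object value, long timestampMillis) {
        return String.format("{\"name\":\"%s\",\"value\":%s,\"timestamp\":%d}",
                name, value, timestampMillis);
    }

    public static void main(String[] args) {
        // prints {"name":"counter.http.200","value":42,"timestamp":1000}
        System.out.println(toJsonLine("counter.http.200", 42, 1000L));
    }
}
```

Each MetricWriter callback could then log the result of toJsonLine instead of a free-form message.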

Related

Listening to class reload in Java

For performance reasons, I have a class that stores a Map whose key is a Class<?> and whose value is a function of that class's fields. The map is populated during code execution according to the type of the calling object. The following is a generalization/simplification:
public class Cache {

    private static final Map<Class<?>, String> fieldsList = ...;

    // Synchronization omitted for brevity
    public String getHqlFor(Class<?> entity) {
        if (!fieldsList.containsKey(entity))
            fieldsList.put(entity, createHql(entity));
        return fieldsList.get(entity);
    }
}
During development, thanks to the help of JRebel, I often modify classes by changing entire properties or just their names, and I can continue development just fine. However, if I have already put a value into the cache, it will be stale forever.
What I am asking here is whether it is possible to intercept the event that a class in the classpath has changed. Very broad... but my specific problem is very simple: since I have such a need only during development, I just want to wipe that cache whenever any class in my classpath changes.
How can I accomplish this? I don't need anything more special than intercepting the event and simply wiping the cache.
JRebel has a plugin API that you can use to trigger code on class reloads. A tutorial, complete with an example application and plugin, is available here: https://manuals.zeroturnaround.com/jrebel/advanced/custom.html
The JRebel plugin is a self-contained jar built against the JRebel SDK, which is attached to the running application via the JVM argument -Drebel.plugins=/path/to/my-plugin.jar. The JRebel agent attached to the application will load and start plugins from this argument.
If the application is not started with the JRebel agent, the plugin is simply not loaded.
In your example you want to register a ClassEventListener that will clear the Cache.fieldsList map. As it is a private field, you need to access it via reflection or add a get/clear method via a ClassBytecodeProcessor:
public class MyPlugin implements Plugin {

    void preinit() {
        ReloaderFactory.getInstance().addClassReloadListener(new ClassEventListenerAdapter(0) {
            @Override
            public void onClassEvent(int eventType, Class<?> klass) throws Exception {
                Cache.clear();
            }
        });
    }

    // ... other methods ...
}
And to clear the map:
public class CacheCBP extends JavassistClassBytecodeProcessor {

    public void process(ClassPool cp, ClassLoader cl, CtClass ctClass) {
        ctClass.addMethod(CtMethod.make("public static void clear() { fieldsList.clear(); }", ctClass));
    }
}
However, a better option is to clear/recalculate only the single entry for the reloaded class, if possible. The example doesn't show whether the info computed for one class depends on superclass info, but if it does, the JRebel SDK also has methods to register a reload listener on the class hierarchy.
There is an existing class, ClassValue, which already does the job for you:
public class Cache {

    private final ClassValue<String> backend = new ClassValue<String>() {
        @Override
        protected String computeValue(Class<?> entity) {
            return createHql(entity);
        }
    };

    public String getHqlFor(Class<?> entity) {
        return backend.get(entity);
    }
}
When you call get, it will invoke computeValue if this is the first call for that specific Class argument, or return the already existing value otherwise. It already takes care of thread safety and of allowing classes to be garbage collected; you don't need to know when class unloading actually happens.
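A small self-contained demo of the compute-once behaviour, with a counter standing in for the expensive createHql call (the names here are illustrative):

```java
// Demonstrates ClassValue's compute-once-per-class behaviour: the counter
// stands in for the cost of the real createHql computation.
import java.util.concurrent.atomic.AtomicInteger;

public class ClassValueDemo {

    static final AtomicInteger computations = new AtomicInteger();

    static final ClassValue<String> backend = new ClassValue<String>() {
        @Override
        protected String computeValue(Class<?> entity) {
            computations.incrementAndGet();
            return "hql for " + entity.getSimpleName();
        }
    };

    public static void main(String[] args) {
        backend.get(String.class);
        backend.get(String.class);   // cached, computeValue not called again
        backend.get(Integer.class);  // new class, computed once
        System.out.println(computations.get()); // 2
    }
}
```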

How can I exclude annotated definition from build in java?

I am building an Android app. I have source code for API #1 and should get it adapted for API #2. Then I will publish both versions, for API #1 and API #2, in different packages. I can't use something like values-en, because both versions can be used worldwide. Also, the user may not have a choice.
As the new version will use the same UI and DB logic (and because the code is currently erroneous), I don't want to separate the code. If I were coding in C or C++, I would use #ifdef and a Makefile. However, I'm in Java. It's possible to run the API-dependent code by determining the package name at runtime, but it's somewhat weird.
I think I can use annotations. What I expect is:
package foo.app;

public class API {
    public boolean prepare() { ... }

    @TargetPlatform(1)
    public void open() { ... }

    @TargetPlatform(2)
    public void open() { ... }
}
and use only one of them. Also, this is good:
package foo.app;

public class R {
    @TargetPlatform(1) com.example.foo.app.R R;
    @TargetPlatform(2) net.example.foo.app.R R;
}
Just defining an annotation is simple. What I don't know is: how can I exclude the unused duplicates from the build or from execution? If it can be done this way, I can do anything.
You cannot use annotations for that.
It would be better to hide the implementation-specific classes behind an interface:
public interface Api {
    boolean prepare();
    void open();
}
To create an Api instance, use a factory class:
public class ApiFactory {

    public static Api createApi() {
        if (isTargetPlatform1())
            return new com.example.foo.app.Api();
        else
            return new net.example.foo.app.Api();
    }

    // must be static, since it is called from the static factory method
    private static boolean isTargetPlatform1() {
        // determine the current platform, e.g. by reading a configuration file
    }
}
In all other places, refer only to the Api interface and the ApiFactory class.
Use it like this:
Api api = ApiFactory.createApi();
api.open();
// ...
A more advanced solution would be to use dependency injection.
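For completeness, here is a runnable miniature of the interface-plus-factory idea. The platform check is stubbed with a system property, which is purely an assumption for illustration; a real app might read a configuration file instead, as the answer suggests:

```java
// Self-contained sketch of the interface + factory approach. The
// "target.platform" system property is a hypothetical stand-in for
// real platform detection.
public class ApiDemo {

    interface Api {
        String open();
    }

    static class Platform1Api implements Api {
        public String open() { return "platform1"; }
    }

    static class Platform2Api implements Api {
        public String open() { return "platform2"; }
    }

    static Api createApi() {
        boolean platform1 = "1".equals(System.getProperty("target.platform", "1"));
        return platform1 ? new Platform1Api() : new Platform2Api();
    }

    public static void main(String[] args) {
        Api api = createApi();
        System.out.println(api.open()); // platform1 (the default)
    }
}
```

Callers only ever see the Api interface, so swapping platforms touches a single place.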

List the names of methods being invoked

I'd like to have a reflection-like solution for displaying the methods which are called.
Example:
public class Foo {
    public void bar() {
        new Y().x();
    }
}

public class Y {
    public void x() {
    }
}

public class Main {
    public static void main(String[] args) {
        // SETTING UP THE MAGIC
        new Foo().bar();
        new Y().x();
    }
}
The output should be:
1. Foo.bar
1.2 Y.x
2. Y.x
Optimally there could be an event handler which would be fired every time a method is called (and some info could be passed to it as a parameter).
PS: I don't want to use this in a production environment; I only need it for displaying output in a skeleton app without dummy copy-paste.
I would use AspectJ to define an aspect that writes log messages for every method call. You can find an example here: Tracing Java Method Execution with AspectJ, or here: Logging with AspectJ.
The advantage of this solution is that you don't have to make any changes to your code. You will need some time to get into AspectJ, but it meets your requirement very well.
You would have to build a profiler agent if you wanted to achieve this without having to pollute your code.
Take a look at this article, it shows you how to do it. Specifically, look at profiling aspects lower in that article.
Listing 2 is a profiling aspect that can be used to output the class name, method name, and a timestamp every time the JVM enters or leaves a method.
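If pulling in AspectJ or a profiler agent is too heavy for a skeleton app, and the calls go through interfaces, a plain JDK dynamic proxy can fire a callback on every method call. A rough sketch (not equivalent to full AspectJ weaving, which also intercepts calls on concrete classes):

```java
// A JDK dynamic proxy that records every interface method call. Good
// enough for trace output in a skeleton app; only works for interface
// types, unlike AspectJ or a profiler agent.
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class CallTracer {

    public static final List<String> calls = new ArrayList<>();

    @SuppressWarnings("unchecked")
    public static <T> T trace(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[]{iface},
                (proxy, method, args) -> {
                    // the "event handler": fired on every call, with info available
                    calls.add(iface.getSimpleName() + "." + method.getName());
                    return method.invoke(target, args);
                });
    }

    interface Greeter { String greet(); }

    public static void main(String[] args) {
        Greeter g = trace(() -> "hi", Greeter.class);
        g.greet();
        System.out.println(calls); // [Greeter.greet]
    }
}
```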

Simple java based workflow manager / data workflow with ability to start ext. application, call web services etc

First of all, if there is already a question like mine on Stack Overflow, sorry for that, but I haven't managed to find it. Actually, I don't know what tags I could use to search for the solution I need.
Basically, I need a tool which can manage a data (object) flow using several tools/actions during the whole process. One of the existing BPM/workflow platforms could probably do that, but they seem too complicated for my requirements.
I have a "static" data model built with JPA/Hibernate. I need to change that static model by running different processing functions over it. Those functions could be Java classes, web services or external applications (which support batch mode). After that I need to catch the output from these functions and make some visualisations, draw some charts, etc.
I can assume that all these processing functions have access to the static model and can change it, so there is no need to pass input to them. On the other hand, their output should be caught by the main "workflow manager".
One more thing: the whole process should run automatically without any user interaction (this may change in the future, but that is how it looks for now). Before the process starts, an administrator defines which "processing functions" are used, and that's it.
And another thing: ideally the whole process would be triggered when the database state changes, but that is not crucial; I can start it, for example, by calling a web service.
The question is: should I use one of the existing BPM/workflow tools such as jBPM or Activiti, write a simple "workflow manager" on my own, or use an existing tool which is much simpler than jBPM/Activiti (is there any)? Of course I prefer the easiest approach...
Thanks a lot for any feedback.
Apache Camel is an open source integration framework which can help you with this.
You can use Apache Camel to build your own simple workflow manager, where each process implements Processor. Your data can be passed between processors using the Camel Exchange. Check out the Camel examples for more information.
For information on how to write a custom processor, please read here.
You can add Processors to a Camel RouteBuilder dynamically, schedule routes using the Quartz scheduler, etc., which will more or less address all your requirements.
Here is a good introduction to Camel: http://www.kai-waehner.de/blog/2012/05/04/apache-camel-tutorial-introduction/
A simple implementation of Workflow Manager using Camel:
WorkflowManager.java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class WorkflowManager {

    DefaultCamelContext camelContext;

    public WorkflowManager() {
        camelContext = new DefaultCamelContext();
        RouteBuilder routeBuilder = new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("timer:schedule?period=1s&daemon=true").process(new ProcessOne()).process(new ProcessTwo());
            }
        };
        try {
            camelContext.addRoutes(routeBuilder);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void start() throws Exception {
        camelContext.start();
    }

    public void stop() throws Exception {
        camelContext.stop();
    }

    public static void main(String[] args) {
        WorkflowManager workflowManager = new WorkflowManager();
        try {
            workflowManager.start();
            while (true) {
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
ProcessOne.java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class ProcessOne implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        System.out.println("In ProcessOne");
    }
}
ProcessTwo.java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class ProcessTwo implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        System.out.println("In ProcessTwo");
    }
}
I used Camel version 2.9.0 to compile this code. Please note that I've used an infinite loop in the main method to keep the main thread alive.
This code runs a route containing ProcessOne and ProcessTwo with a period of 1 second. You can see the period in the from(...) method where I add the processors to the route builder; hence this route runs repeatedly. Also, I am not flowing any data yet: you can use the Exchange in each processor's process method to flow data.
Output will be:
In ProcessOne
In ProcessTwo
In ProcessOne
In ProcessTwo
You can use camel components to make your WorkflowManager robust.

Could you suggest a pattern for logging exceptions during the Java application lifecycle?

I want to use the simple java.util.logging.* package to log exceptions. Any suggestions, links or articles on doing this in a basic but elegant way?
I need to implement it in a Java EE application.
I was thinking of a singleton class... what do you think?
The following doesn't handle the case of logging all exceptions automatically, but it might get you closer. At my current company we extended Exception with a CompanyException that supports a list of callbacks. Something like:
import java.util.ArrayList;
import java.util.List;

public abstract class CompanyException extends Exception {

    // replaced wholesale on registration (copy-on-write), so it must not be final
    private static volatile List<ExceptionCallback> callBacks =
            new ArrayList<ExceptionCallback>();

    public CompanyException(String message) {
        super(message);
        invokeCallbacks();
    }

    // register a callback to be invoked whenever an exception is constructed
    public static void registerCallback(ExceptionCallback callback) {
        // we expect registrations to be done long before the first exception is thrown
        List<ExceptionCallback> newCallbacks = new ArrayList<ExceptionCallback>();
        newCallbacks.addAll(callBacks);
        newCallbacks.add(callback);
        callBacks = newCallbacks;
    }

    private void invokeCallbacks() {
        for (ExceptionCallback callBack : callBacks) {
            try {
                callBack.exceptionConstructed(this);
            } catch (Throwable th) {
                // ignore it because we can't go recursive
            }
        }
    }
}
The callback was a simple interface:
public interface ExceptionCallback {
    void exceptionConstructed(Exception exception);
}
Then all of our internal exceptions would extend this one, or rethrow/wrap external exceptions. We would then register a logging class which could log exceptions using log4j or our internal logging mechanisms. Obviously the exception could do the logging itself, but we wanted the separation. Our logger had a dedicated thread to do the logging in the background and send it to a central logging server. You need to be very careful about recursion if you throw while you are logging.
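A self-contained miniature of that registration flow (the names are shortened and simplified for illustration; these are not the actual company classes):

```java
// Miniature of the callback-on-construction pattern: every AppException
// constructed notifies the registered listeners, which can then log it.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CallbackDemo {

    interface ExceptionCallback {
        void exceptionConstructed(Exception e);
    }

    static final List<ExceptionCallback> callbacks = new CopyOnWriteArrayList<>();

    static class AppException extends Exception {
        AppException(String message) {
            super(message);
            for (ExceptionCallback cb : callbacks) {
                try {
                    cb.exceptionConstructed(this);
                } catch (Throwable ignored) {
                    // never let a logging failure mask the original exception
                }
            }
        }
    }

    public static void main(String[] args) {
        List<String> logged = new java.util.ArrayList<>();
        callbacks.add(e -> logged.add(e.getMessage()));
        new AppException("boom");
        System.out.println(logged); // [boom]
    }
}
```

CopyOnWriteArrayList replaces the hand-rolled copy-on-write logic; the behaviour is the same.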
You can take a look at Base22's logging suggestions; they're pretty basic, but they cover a standard set of logging functionality.
I discourage the use of a singleton: with a modern logging framework you don't need one.
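If you stay with plain java.util.logging, the basic pattern is simply to pass the exception as the last argument of Logger.log so the stack trace is preserved, plus a default uncaught-exception handler as a safety net. A minimal sketch:

```java
// Basic java.util.logging pattern: log the exception where it is handled
// (keeping the stack trace), and install a default handler for anything
// that escapes a thread entirely.
import java.util.logging.Level;
import java.util.logging.Logger;

public class ExceptionLogging {

    private static final Logger LOGGER = Logger.getLogger(ExceptionLogging.class.getName());

    public static void main(String[] args) {
        // catch-all for exceptions that escape any thread
        Thread.setDefaultUncaughtExceptionHandler((thread, ex) ->
                LOGGER.log(Level.SEVERE, "Uncaught in " + thread.getName(), ex));

        try {
            throw new IllegalStateException("demo failure");
        } catch (IllegalStateException ex) {
            // log with the stack trace attached, then continue
            LOGGER.log(Level.WARNING, "Handled demo failure", ex);
        }
    }
}
```

No singleton is needed: Logger.getLogger returns the same named logger everywhere, and configuration lives in logging.properties.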
