I have a use case where six steps are performed in one request. The business is asking us to capture metrics on the result of each step in the process, and they want us to log them to a Kinesis stream.
Architecturally I am looking for the best solution. We have Java-based services, and I want a request-scoped object that is enriched as the request progresses; when the endpoint finishes we would make an asynchronous, fire-and-forget service call to Kinesis so that reporting does not hold up the main thread.
I was looking at using a raw ThreadLocal or a Guice scope. Has anyone run into and solved a similar problem? I'm thinking of using Guice request-scoped components, which would greatly simplify the code. Just looking for some opinions. Thanks!
I'm assuming you aren't in a servlet environment, because then you would just use the built-in request scope. Even outside a container you can use the request scope from guice-servlet by entering the scope yourself:
import java.util.Collections;

import com.google.inject.servlet.RequestScoper;
import com.google.inject.servlet.ServletScopes;

void processRequest() {
    // Seed an empty request scope and run every step inside it
    RequestScoper scope = ServletScopes.scopeRequest(Collections.emptyMap());
    try (RequestScoper.CloseableScope ignored = scope.open()) {
        step1();
        step2();
        step3();
        step4();
        step5();
        step6();
    }
}
You can use @RequestScoped and the same instance will be shared across all your steps. You can, for example, inject a Provider to get access to it.
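For the fire-and-forget publish described in the question, here is a minimal plain-Java sketch (the class names are hypothetical and the Kinesis call is stubbed out; a real implementation would call the AWS SDK from the async task):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical request-scoped holder; with Guice this would be an
// @RequestScoped component injected into each step instead of a raw ThreadLocal.
class StepMetrics {
    private static final ThreadLocal<List<String>> CURRENT =
            ThreadLocal.withInitial(ArrayList::new);

    // Each step records its outcome as the request progresses.
    static void record(String step, String result) {
        CURRENT.get().add(step + "=" + result);
    }

    // Detach the collected metrics from the request thread when the endpoint finishes.
    static List<String> drain() {
        List<String> out = new ArrayList<>(CURRENT.get());
        CURRENT.remove();
        return out;
    }
}

class MetricsPublisher {
    // Fire and forget: runs on a background pool so reporting
    // never holds up the request thread.
    static CompletableFuture<Void> publishAsync(List<String> metrics) {
        return CompletableFuture.runAsync(() -> {
            // a Kinesis putRecord call would go here (hypothetical client)
        });
    }
}
```

At the end of the endpoint you would call `MetricsPublisher.publishAsync(StepMetrics.drain());` and return immediately.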
Since gRPC makes service calls on a new thread and the gRPC Context is thread-local, how can I propagate this gRPC context? I found that Context.currentContextExecutor() and ContextPropagatingExecutorService can be used, but I haven't found enough resources or examples for these two options. Can someone help me implement them?
ClientInterceptors shouldn't change the Context instance seen by the application. The Context behavior shouldn't really change whether you are using blocking, async, or future stubs, and a blocking API would not be able to change the current context.
While an interceptor is free to modify a pre-existing (mutable) value in the Context, there's generally little need. It is normally easier to create a new interceptor instance for each RPC and communicate with the interceptor directly, or communicate via a custom CallOption.
If you have just a single call site that needs access to response headers, then MetadataUtils.newCaptureMetadataInterceptor() is a convenient (although roundabout) way to get the Metadata. It was designed for testing, but is appropriate for small-scale use outside of testing situations.
AtomicReference<Metadata> headers = new AtomicReference<>();
AtomicReference<Metadata> trailers = new AtomicReference<>();
// Using blocking for simplicity, but applies equally to futures
stub.withInterceptors(MetadataUtils.newCaptureMetadataInterceptor(headers, trailers))
.someRpc();
Metadata headersSeen = headers.get();
If you need to access the same header from multiple call sites, it is better to create a custom interceptor that does what you need.
CustomInterceptor interceptor = new CustomInterceptor();
stub.withInterceptors(interceptor)
.someRpc();
... = interceptor.getWhateverValue();
This demonstrates a "general" use case. Specific cases can often tweak their API further to be more convenient and natural.
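On the executor part of the question: Context.currentContextExecutor(executor) wraps an Executor so that each submitted task runs in the Context that was current when execute() was called. The mechanism can be illustrated with a plain ThreadLocal (a simplified sketch of the idea, not gRPC's actual implementation):

```java
import java.util.concurrent.Executor;

class ContextDemo {
    // Stand-in for io.grpc.Context's per-thread storage.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // Analogue of Context.currentContextExecutor: capture the caller's
    // context at execute() time and re-attach it around each task.
    static Executor propagating(Executor delegate) {
        return task -> {
            String captured = CONTEXT.get(); // read on the submitting thread
            delegate.execute(() -> {
                String previous = CONTEXT.get();
                CONTEXT.set(captured);       // attach the caller's context
                try {
                    task.run();
                } finally {
                    CONTEXT.set(previous);   // detach, restoring what was there
                }
            });
        };
    }
}
```

With the real API you wrap your application executor once, e.g. `Executor exec = Context.currentContextExecutor(appExecutor);`, and hand that to your async callbacks; ContextPropagatingExecutorService applies the same idea to a full ExecutorService.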
I want to find the actual Java class that serves the Spring Actuator endpoint (/actuator).
It's similar to this question in a way, but that person wanted to call it via a network HTTP call. Ideally, I can call it within the JVM to save on the cost of setting up an HTTP connection.
The reason for this is that we have two metrics frameworks in our system: a legacy framework built on OpenCensus, and Spring Actuator (Prometheus metrics based on Micrometer), which we migrated to. I think the Spring one is better, but I didn't realize how much infrastructure my company had built around the old one. For example, we leverage internal libraries that use OpenCensus, and the infra team depends on OpenCensus-based metrics from our app. So the idea is to merge and report both sets of metrics.
I want to create my own metrics endpoint that pulls in data from both OpenCensus's endpoint and Actuator's endpoint. I could make an HTTP call to each, but I'd rather call them within the JVM to save resources and reduce latency.
Or perhaps I'm thinking about it wrong. Should I simply be using MeterRegistry.forEachMeter() in my endpoint?
In any case, I thought if I found the Spring Actuator endpoint, I can see an example of how they're doing it and mimic the implementation even if I don't call it directly.
Bonus: I'll need to track down the Opencensus handler that serves its endpoint too and will probably make another post for that, but if you know the answer to that as well, please share!
I figured it out and posting this for anyone else interested.
The key finding: the MeterRegistry that is @Autowired is actually a PrometheusMeterRegistry if you enable Prometheus metrics.
Once you cast it to a PrometheusMeterRegistry, you can call its .scrape() method to get exactly the same metrics printout you would get from the HTTP endpoint.
I also needed to get the same info from OpenCensus, and I found a way to do that too.
Here's the snippet of code for getting metrics from both frameworks:
// OpenCensus: dump everything registered in the default collector registry
Enumeration<MetricFamilySamples> openCensusSamples =
        CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
StringWriter writer = new StringWriter();
TextFormat.write004(writer, openCensusSamples);
String openCensusMetrics = writer.toString();

// Micrometer: the autowired MeterRegistry is a PrometheusMeterRegistry
// when Prometheus metrics are enabled
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String micrometerMetrics = registry.scrape();

return openCensusMetrics.concat(micrometerMetrics);
I found another interesting way of doing this.
My other answer has one issue: it contains duplicate results. When I looked into it, I realized that both OpenCensus and Micrometer were reporting the same metrics.
It turns out that the PrometheusScrapeEndpoint implementation uses the same CollectorRegistry that OpenCensus does, so both sets of metrics were being added to the same registry.
You just need to make sure to provide these beans:
@PostConstruct
public void openCensusStats() {
    PrometheusStatsCollector.createAndRegister();
}

@Bean
public CollectorRegistry collectorRegistry() {
    return CollectorRegistry.defaultRegistry;
}
I am building a Spring RESTful service, and I have the following method that retrieves a Place object for a given zipcode:
@RequestMapping(value = "/placeByZip", method = RequestMethod.GET)
public Place getPlaceByZipcode(@RequestParam(value="zipcode") String zipcode) {
    Place place = placeService.placeByZip(zipcode);
    return place;
}
Is it best practice to have the return type be Place? I imagine this makes error handling difficult?
Using the latest versions of Spring for a RESTful web service, I do believe returning the object is good practice, as it simplifies your code and is specific about what you are returning. I see this as strongly typing your API response.
A good practice for error handling is to use the controller advice utility supplied by Spring.
Have a read of:
https://spring.io/blog/2013/11/01/exception-handling-in-spring-mvc
This allows you to throw your exceptions at any of your service layers and produce a nice helpful error response.
A better practice would be to create a Data Transfer Object with only the properties you will be using in the front end.
With a proper JS framework you could easily do proper error handling (for example, you could define a service in AngularJS which defines the DTO's fields).
Also, you might as well just do return placeService.placeByZip(zipcode);
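A minimal sketch of the DTO idea above (the entity and field names are made up for illustration):

```java
// Hypothetical domain entity carrying more than the front end needs.
class Place {
    String zipcode;
    String name;
    String internalAuditTrail; // should not leak to clients

    Place(String zipcode, String name, String internalAuditTrail) {
        this.zipcode = zipcode;
        this.name = name;
        this.internalAuditTrail = internalAuditTrail;
    }
}

// DTO exposing only what the front end actually uses.
class PlaceDto {
    final String zipcode;
    final String name;

    PlaceDto(String zipcode, String name) {
        this.zipcode = zipcode;
        this.name = name;
    }

    static PlaceDto from(Place p) {
        return new PlaceDto(p.zipcode, p.name);
    }
}
```

The controller would then return `PlaceDto.from(placeService.placeByZip(zipcode))` instead of the raw entity.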
As robinsio suggested, it is good practice to add a controller advice. You can set the status to some HTTP code (for example HttpStatus.CONFLICT) and have an exception handler (e.g. for a BaseServiceException) which you can throw inside your place service when validation rules you define are broken.
The controller advice could return a map which you handle consistently for the respective HTTP status code (say, a modal appears in the interface to show the message sent from the base service exception).
I'm looking to return per request debug information in a JSON HTTP response body. This debug information would be collected throughout my application, including data such as DB queries and how long they took, external calls made, whether certain conditions were met, general messages showing application flow etc.
I've looked into Java logging frameworks, and the Play Framework, which I am using, has a built-in logger that works great for logging information to files. But I can't seem to find any that handle request-level logging, i.e. store debug messages for a particular HTTP request, return them with that request's response, and then destroy them.
I could of course create a Debug class, instantiate that and pass that around throughout my application for each request, but this doesn't seem like a nice way to handle this as I would need to be passing this into a lot of classes in my application.
Are there any better ways/design patterns/libraries out there that can do what I'm looking for without having to pass a Debug object round my entire application?
It is not a common usage, so I do not think you will find an existing product implementing it. You basically have two possibilities:
fully implement a custom logger and use it throughout your application
use a well known local api (slf4j, apache commons logging) and implement a dedicated back-end
Either way, the backend part should :
initiate a buffer to collect logs at request reception (probably in a filter) and put it in a ThreadLocal variable
collect all logs during the request with the help of the ThreadLocal variable
release the ThreadLocal variable at the end of the request (in the same filter that allocated it)
And all servlets or controllers should be modified to add the logged content to the JSON response body.
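The backend described above can be sketched in plain Java (class name hypothetical; in a servlet app the init/drain calls would live in a filter):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Per-request debug buffer backed by a ThreadLocal.
class RequestDebugLog {
    private static final ThreadLocal<List<String>> BUFFER = new ThreadLocal<>();

    // Called at request reception (e.g. in a filter).
    static void init() {
        BUFFER.set(new ArrayList<>());
    }

    // Called anywhere in the application while the request is in flight.
    static void log(String message) {
        List<String> buf = BUFFER.get();
        if (buf != null) {
            buf.add(message);
        }
    }

    // Called at the end of the request; releases the ThreadLocal
    // so the worker thread does not leak the buffer.
    static List<String> drain() {
        List<String> buf = BUFFER.get();
        BUFFER.remove();
        return buf != null ? buf : Collections.emptyList();
    }
}
```

The controller then serializes `RequestDebugLog.drain()` into a debug field of the JSON response body.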
I found a solution: the Play Framework provides request-level storage for arbitrary objects via Http.Context.current().args. I'm using this HashMap to store my custom debug storage class so that I can access it anywhere in my application during a request.
Instead of passing a Debug object from layer to layer, why don't you use Http.Context? It is defined in the Controller class:
public abstract class Controller extends Results implements Status, HeaderNames {
...
public static Context ctx() {
return Http.Context.current();
}
}
Usage:
ctx().args.put(key, value);
ctx().args.get(key);
You don't need a logging framework for this. The best approach to returning your debug info in the JSON response is to use the same method you use to return the rest of the JSON.
This way, you can set it up to work in debug mode via -Dsomevar=debug or via an HTTP request parameter like debug=true.
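Gating the debug output could look like this (the somevar property name is just the placeholder from above, and the request parameter is assumed to be extracted by the controller):

```java
class DebugMode {
    // Debug is on when the JVM was started with -Dsomevar=debug,
    // or when the request carried debug=true.
    static boolean enabled(String debugRequestParam) {
        return "debug".equals(System.getProperty("somevar"))
                || "true".equals(debugRequestParam);
    }
}
```

When enabled(...) returns true, the controller simply adds the collected debug info to the JSON it already returns.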
I need to write a small Java class that will enable me to add to and read from the current user session.
Everything I see refers to Servlets but I'd ideally like to just use a plain old class.
Can anyone please help this Java Newbie?
Thanks
The general concept of a "session" is really just data storage for an interaction between an HTTP client and server. Session management is handled automatically for HTTP servlets. What framework are you using?
If you just want a console app to remember information between runs, then consider using XML to save/load data from a file.
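For the console-app case, java.util.Properties is an even simpler option than hand-rolled XML for remembering values between runs; a small sketch (class name hypothetical):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

class AppState {
    // Save key/value state to a file.
    static void save(Path file, Properties props) {
        try (Writer w = Files.newBufferedWriter(file)) {
            props.store(w, "app state");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Load it back on the next run (empty if the file is missing).
    static Properties load(Path file) {
        Properties props = new Properties();
        if (Files.exists(file)) {
            try (Reader r = Files.newBufferedReader(file)) {
                props.load(r);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        return props;
    }
}
```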
Use a component-based MVC framework which abstracts all the ugly Servlet API details away, so that you end up with zero javax.servlet imports. Examples are JSF2 and Struts2.
In JSF2, for example, you'd just declare the User class as a session-scoped managed bean:
@ManagedBean
@SessionScoped
public class User {
    // ...
}
Then in the "action" bean which you're using to process the form submit, reference it as a managed property:
@ManagedBean
@RequestScoped
public class SomeFormBean {

    @ManagedProperty(value="#{user}")
    private User user;

    public void submit() {
        SomeData someData = user.getSomeData();
        // ...
    }
}
That's it.
If you'd like to stick to the raw Servlet API, then you have to live with the fact that you must pass the raw HttpServletRequest/HttpServletResponse objects around. The best you can do is homegrow some abstraction around it (but then you end up doing what JSF2/Struts2 already do, so why homegrow, unless for hobby/self-learning purposes :)).
Yes, just pass the HttpServletRequest to your class from your servlet.
In your servlet, do something like this:
cmd.execute(request);
In your class, do something like this:
public class Command implements ICommand {
    // ...
    public void execute(HttpServletRequest request) {
        HttpSession sess = request.getSession(false);
    }
    // ...
}
In general, as mentioned in the other answers, a session in many ways acts as a store. So, to interact with a session from a class outside of the servlet/JSP framework, a reference to the session in question needs to be procured. There are a few ways this can be achieved:
1) Passing the session as a method parameter (already mentioned in other answers).
2) Binding the session to a thread-local variable on the current executing thread (see ThreadLocal). This has the advantage of not requiring extra parameters in the method signatures of the classes that use the session. In addition, if the calling thread passes through a library and then calls some specific class, e.g. Servlet -> YourClass0 -> Apache Some Library -> YourClass1, the session will also be available to YourClass1.
However, the thread-local must also be cleared when the executing thread returns through the initial component (a servlet, say); otherwise there could well be memory leaks.
In addition, please refer to your specific framework for its treatment of sessions; the above mechanism works fine in Tomcat.
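Option 2) can be sketched like this (class name hypothetical; Object stands in for HttpSession so the sketch stays servlet-free):

```java
// Binds the current request's session to the executing thread so that
// classes deep in the call stack can reach it without extra parameters.
class SessionHolder {
    private static final ThreadLocal<Object> CURRENT = new ThreadLocal<>();

    static void bind(Object session) {
        CURRENT.set(session);
    }

    static Object current() {
        return CURRENT.get();
    }

    // Must run in a finally block in the servlet/filter to avoid leaks.
    static void clear() {
        CURRENT.remove();
    }
}
```

In the servlet: `SessionHolder.bind(request.getSession()); try { /* handle request */ } finally { SessionHolder.clear(); }`.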