I have a @FeignClient interface:
@FeignClient(name="${some.service.id}", url="${some.service.url}")
public interface SomeInterface {
    ...
}
My question is: how can I direct Feign to use only one of the two properties (name/url)? I left the url property empty in the production properties file, but it seems that Feign always uses the url property.
So eventually I found a suitable solution here:
Define different Feign client implementations based on environment
Although I wanted to solve this using only one Feign client with configuration and profiles, I didn't find a way to do it. This solution is based on creating two different Feign clients, each of which is used in the matching profile; a rough sketch of that approach is below.
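For illustration only, here is a minimal sketch of that idea, assuming Spring Cloud OpenFeign. Every class, profile and path name here (SomeOperations, DiscoveryBasedClient, UrlBasedClient, the "discovery"/"direct-url" profiles, /items/{id}) is made up; the shared methods live in a plain interface that both clients extend, and each profile registers only one client.

import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Shared operations; callers inject this interface.
public interface SomeOperations {
    @GetMapping("/items/{id}")
    String getItem(@PathVariable("id") String id);
}

// Resolved through service discovery by its name.
@FeignClient(name = "${some.service.id}")
public interface DiscoveryBasedClient extends SomeOperations {
}

// Calls a fixed URL directly.
@FeignClient(name = "some-service-direct", url = "${some.service.url}")
public interface UrlBasedClient extends SomeOperations {
}

// Instead of @EnableFeignClients on the main class, each profile registers
// exactly one of the two clients, so only one bean of type SomeOperations exists.
@Configuration
@Profile("discovery")
@EnableFeignClients(clients = DiscoveryBasedClient.class)
class DiscoveryFeignConfiguration {
}

@Configuration
@Profile("direct-url")
@EnableFeignClients(clients = UrlBasedClient.class)
class DirectUrlFeignConfiguration {
}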
I want to find the actual Java class that serves the Spring Actuator endpoint (/actuator).
It's similar to this question in a way, but that person wanted to call it via a network HTTP call. Ideally, I'd call it within the JVM to save the cost of setting up an HTTP connection.
The reason for this is that we have two metrics frameworks in our system: a legacy framework built on OpenCensus, and Spring Actuator (Prometheus metrics based on Micrometer), which we migrated to. I think the Spring one is better, but I didn't realize how much infrastructure my company had built around the old one. For example, we leverage internal libraries that use OpenCensus, and the infra team depends on OpenCensus-based metrics from our app. So the idea is to merge and report both sets of metrics.
I want to create my own metrics endpoint that pulls in data from OpenCensus's endpoint and Actuator's endpoint. I could make an HTTP call to each, but I'd rather call them within the JVM to save resources and reduce latency.
Or perhaps I'm thinking about it wrong. Should I simply be using MeterRegistry.forEachMeter() in my endpoint?
In any case, I thought if I found the Spring Actuator endpoint, I can see an example of how they're doing it and mimic the implementation even if I don't call it directly.
Bonus: I'll also need to track down the OpenCensus handler that serves its endpoint, and I'll probably make another post for that, but if you know the answer to that as well, please share!
I figured it out and posting this for anyone else interested.
The key finding: the MeterRegistry that gets @Autowired is actually a PrometheusMeterRegistry if you enable Prometheus metrics.
Once you cast it to a PrometheusMeterRegistry, you can call its .scrape() method to get exactly the same metrics printout you would get from the HTTP endpoint.
I also need to get the same info from OpenCensus and I found a way to do that too.
Here's the snippet of code for getting metrics from both frameworks:
// OpenCensus metrics come from the default Prometheus CollectorRegistry
// (an empty filter set means "all metric families").
Enumeration<MetricFamilySamples> openCensusSamples =
        CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
StringWriter writer = new StringWriter();
TextFormat.write004(writer, openCensusSamples); // throws IOException; handle or declare it
String openCensusMetrics = writer.toString();

// Micrometer's registry is a PrometheusMeterRegistry when Prometheus is enabled.
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String micrometerMetrics = registry.scrape();
return openCensusMetrics.concat(micrometerMetrics);
I found another interesting way of doing this.
The other answer I gave works, but it has one issue: it contains duplicate results. When I looked into it, I realized that both OpenCensus and Micrometer were reporting the same results.
It turns out that the PrometheusScrapeEndpoint implementation uses the same CollectorRegistry that OpenCensus does, so both sets of metrics were being added to the same registry.
You just need to make sure to provide these beans:
// Register the OpenCensus Prometheus exporter with the default CollectorRegistry.
@PostConstruct
public void openCensusStats() {
    PrometheusStatsCollector.createAndRegister();
}

// Expose that same default registry as a bean so Micrometer/Actuator use it too.
@Bean
public CollectorRegistry collectorRegistry() {
    return CollectorRegistry.defaultRegistry;
}
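With those beans in place, and assuming Spring Boot's Prometheus auto-configuration builds its PrometheusMeterRegistry on the CollectorRegistry bean above (which it does when no other CollectorRegistry bean is defined), a single scrape already contains both sets of metrics, so the manual concatenation from my other answer is no longer needed. A minimal sketch:

// Both frameworks now feed the same default CollectorRegistry, so one scrape
// (or the /actuator/prometheus endpoint) returns metrics from both.
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String combinedMetrics = registry.scrape();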
My API currently calls one Elasticsearch endpoint using JestClient. I want to add some functionality that requires calling a second, different Elasticsearch endpoint. How is this possible, when you have to specify the endpoint upon initializing JestClient?
@Provides
@Singleton
public JestClient jestClient() {
    JestClientFactory factory = new JestClientFactory();
    factory.setHttpClientConfig(
            new HttpClientConfig.Builder("http://localhost:9200")
                    .build());
    return factory.getObject();
}
My application design uses Singleton classes for these initializations, so I'm not sure how to fix this aside from using a different Elasticsearch client for my second endpoint.
Alternatively, you can create two clients and inject them based on the use case in the calling classes: if a class requires both clients, inject both; if it requires one specific client, inject just that one. You only need to give each client a different name (for example clientV2 for the ES 2.x endpoint and clientV5 for the ES 5.x endpoint) to make it work. It's easy as well, since you know your use case and which client each of your classes needs; a sketch follows below.
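A minimal sketch of that idea, sticking with Guice-style injection as in the question; the module name, the "clientV2"/"clientV5" names and the endpoint URLs are all placeholders:

import com.google.inject.AbstractModule;
import com.google.inject.Inject;
import com.google.inject.Provides;
import com.google.inject.Singleton;
import com.google.inject.name.Named;
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;

public class ElasticsearchModule extends AbstractModule {

    @Override
    protected void configure() {
        // Bindings are supplied by the @Provides methods below.
    }

    // Client for the first endpoint.
    @Provides
    @Singleton
    @Named("clientV2")
    JestClient jestClientV2() {
        return buildClient("http://es2-host:9200");
    }

    // Client for the second endpoint.
    @Provides
    @Singleton
    @Named("clientV5")
    JestClient jestClientV5() {
        return buildClient("http://es5-host:9200");
    }

    private static JestClient buildClient(String url) {
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(new HttpClientConfig.Builder(url).build());
        return factory.getObject();
    }
}

A consumer class then asks for the specific client it needs by name:

public class SearchService {

    private final JestClient clientV5;

    @Inject
    public SearchService(@Named("clientV5") JestClient clientV5) {
        this.clientV5 = clientV5;
    }
}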
On a side note, active development of Jest stopped long ago, and Elasticsearch now provides an official Java client known as the Java High Level REST Client. So, IMHO, you should switch to it to get the maximum benefit and to make future migrations easy, which I think is your use case.
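For reference, and only as a rough sketch, constructing that client for two endpoints could look like this; the class name and hostnames are placeholders, and the builder details depend on your client version:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsClients {

    // One high-level REST client per Elasticsearch endpoint.
    public static RestHighLevelClient firstClient() {
        return new RestHighLevelClient(RestClient.builder(new HttpHost("es-host-one", 9200, "http")));
    }

    public static RestHighLevelClient secondClient() {
        return new RestHighLevelClient(RestClient.builder(new HttpHost("es-host-two", 9200, "http")));
    }
}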
You could create a factory and return the client depending on a parameter.
The parameter could be the URL itself or just an enum that represents the endpoint, as in the sketch below.
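A rough sketch of that idea, keeping JestClient; the class name, enum constants and URLs are all made up:

import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import java.util.EnumMap;
import java.util.Map;

public class JestClientProvider {

    public enum Endpoint { PRIMARY, SECONDARY }

    private final Map<Endpoint, JestClient> clients = new EnumMap<>(Endpoint.class);

    public JestClientProvider() {
        clients.put(Endpoint.PRIMARY, buildClient("http://localhost:9200"));
        clients.put(Endpoint.SECONDARY, buildClient("http://other-es-host:9200"));
    }

    // Callers pick the endpoint per call; the provider itself can be the singleton.
    public JestClient get(Endpoint endpoint) {
        return clients.get(endpoint);
    }

    private static JestClient buildClient(String url) {
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(new HttpClientConfig.Builder(url).build());
        return factory.getObject();
    }
}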
I've recently been working with microservices, developed as Spring Boot applications (v2.2), and in my company we're using Keycloak as the authorization server.
We chose it because we need complex policies, roles and groups, and we also need User-Managed Access (UMA) to share resources between users.
We configured Keycloak with a single realm and many clients (one client per microservice).
Now, I understand that I need to explicitly define Resources within Keycloak, and this is fine, but the question is: do I really need to duplicate all of them in my microservice's property file?
All the documentation, examples and tutorials end up with the same thing, something like:
keycloak.policy-enforcer-config.enforcement-mode=PERMISSIVE
keycloak.policy-enforcer-config.paths[0].name=Car Resource
keycloak.policy-enforcer-config.paths[0].path=/cars/create
keycloak.policy-enforcer-config.paths[0].scopes[0]=car:create
keycloak.policy-enforcer-config.paths[1].path=/cars/{id}
keycloak.policy-enforcer-config.paths[1].methods[0].method=GET
keycloak.policy-enforcer-config.paths[1].methods[0].scopes[0]=car:view-detail
keycloak.policy-enforcer-config.paths[1].methods[1].method=DELETE
keycloak.policy-enforcer-config.paths[1].methods[1].scopes[0]=car:delete
(This second example fits our case better because it also uses different authorization scopes per HTTP method.)
In real life, each microservice we're developing has dozens of endpoints, and defining them one by one seems to me a waste of time and a weakness in the code's robustness: if we change an endpoint, we need to reconfigure it in both Keycloak and the application properties.
Is there a way to use some kind of annotation at the controller level? Something like the following pseudo-code:
@RestController
@RequestMapping("/foo")
public class MyController {

    @GetMapping
    @KeycloakPolicy(scope = "foo:view")
    public ResponseEntity<String> foo() {
        ...
    }

    @PostMapping
    @KeycloakPolicy(scope = "bar:create")
    public ResponseEntity<String> bar() {
        ...
    }
}
In the end, I developed my own project that provides auto-configuration capabilities to a Spring Boot project that needs to work as a resource server.
The project is released under MIT2 license and it's available on my github:
keycloak-resource-autoconf
Let's say I have this configuration in my XML:
<int-sftp:outbound-channel-adapter id="sftpOutbound"
        channel="sftpChannel"
        auto-create-directory="true"
        remote-directory="/path/to/remote/directory/"
        session-factory="cachingSessionFactory">
    <int-sftp:request-handler-advice-chain>
        <int:retry-advice />
    </int-sftp:request-handler-advice-chain>
</int-sftp:outbound-channel-adapter>
How can I retrieve the attributes, e.g. remote-directory, in a Java class?
I tried using context.getBean("sftpOutbound"), but it returns an EventDrivenConsumer, which doesn't have methods to get the configuration.
I'm using spring-integration-sftp v4.0.0.
I am actually more concerned with why you want to access it. The remote directory and other attributes come with the headers of each message, so you will have access to them at the Message level, but not at the level of the EventDrivenConsumer, and that is by design; hence my question.
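If the value you need really does travel with the message, a minimal, hedged sketch of reading it in a downstream component could look like the following; the header name "file_remoteDirectory" is an assumption to verify against your own message headers, since whether and where it is populated depends on your flow and Spring Integration version:

import org.springframework.messaging.Message;

public class RemoteDirectoryInspector {

    // Returns the remote directory header, or null if nothing upstream populated it.
    public String remoteDirectoryOf(Message<?> message) {
        return (String) message.getHeaders().get("file_remoteDirectory");
    }
}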
I am trying to add some metric gathering to a Spring MVC app. Let's say I have a controller whose mapping is:
/User/{username}/Foobar
I want to gather metrics on all controller mapping invocations, keyed by the mapping path. Right now I can create a handler interceptor and look at the requests, but that will give me:
/User/Charlie/Foobar
That is not what I want. I want the controller mapping pattern itself to be logged, and I don't want to have to add something to every controller. I'd also rather not use AOP if I can help it.
It turns out that Spring hangs the best-matching controller pattern on the request itself. You can get it from within a HandlerInterceptor like this:
(String)request.getAttribute(HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE)
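For example, a minimal interceptor built around that attribute might look like this; the class name is made up, and the System.out call just stands in for whatever metrics or logging framework you use:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.HandlerMapping;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class MappingMetricsInterceptor extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        // Yields the pattern, e.g. "/User/{username}/Foobar", not the expanded "/User/Charlie/Foobar".
        String pattern = (String) request.getAttribute(HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE);
        if (pattern != null) {
            System.out.println("Handling request for mapping: " + pattern);
        }
        return true;
    }
}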
I can think of two choices:
It seems to me the results of the matching are obtained in the class org.springframework.web.servlet.handler.AbstractUrlHandlerMapping, which logs the patterns obtained (see line 266). I'd try enabling logging for that class to see if the output is helpful for your purposes.
(Complicated)
Extending org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping to override the lookupHandler method inherited from AbstractUrlHandlerMapping and log/register what you need. According to that class's documentation, you can register a different handler mapping so that the DispatcherServlet uses your version.
In Spring 3.2.x, DefaultAnnotationHandlerMapping is deprecated, so a different class would have to be used.