I know I can add handlers (JAX-WS) to an SEI using @HandlerChain.
I know I can add interceptors (Apache CXF) to an SEI like this:
http://web-gmazza.rhcloud.com/blog/entry/jaxwshandlers-to-cxfinterceptors
I know I can add handlers to the Provider interface using @HandlerChain:
https://docs.oracle.com/middleware/1213/wls/WSGET/jax-ws-soaphandlers.htm#WSGET3461
The question is:
Can I, and if so how (the same way as for an SEI?), add interceptors to the Provider interface?
Well, I figured out the answer to this specific question. You can add interceptors like this:
import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingOutInterceptor;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

ProviderImpl implementor = new ProviderImpl();
JaxWsServerFactoryBean svrFactory = new JaxWsServerFactoryBean();
svrFactory.setAddress("http://localhost:9000/providerexample");
svrFactory.setServiceBean(implementor);
// interceptors are attached to the factory exactly as for an SEI-based service
svrFactory.getInInterceptors().add(new LoggingInInterceptor());
svrFactory.getOutInterceptors().add(new LoggingOutInterceptor());
svrFactory.create();
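For context, ProviderImpl here is assumed to be a plain message-mode JAX-WS Provider, roughly along these lines (the class body is just a placeholder):

import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Provider;
import javax.xml.ws.Service;
import javax.xml.ws.ServiceMode;
import javax.xml.ws.WebServiceProvider;

// assumed shape of ProviderImpl: a message-mode JAX-WS Provider
@WebServiceProvider
@ServiceMode(Service.Mode.MESSAGE)
public class ProviderImpl implements Provider<SOAPMessage> {

    @Override
    public SOAPMessage invoke(SOAPMessage request) {
        // inspect or transform the request and build a response here
        return request; // echo back, placeholder only
    }
}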
But now the next problem: interceptors deal with SoapMessage (Apache CXF), while a Provider deals with SOAPMessage (JAX-WS). So I can get the interceptors to log and so on, but when I try to manipulate the SoapMessage, I run into trouble. I'm still not sure whether the reason is an incompatibility between these two classes (or whether the framework takes care of the conversion) or the specific code I'm using.
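For reference, one way to get at the SAAJ SOAPMessage inside a CXF interceptor is to put a SAAJInInterceptor on the chain and read the message content after it has run; a minimal sketch (the interceptor name and chosen phase are my own, not from the question):

import javax.xml.soap.SOAPMessage;

import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.binding.soap.saaj.SAAJInInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;

// runs after SAAJInInterceptor has built the SAAJ model, so the JAX-WS style
// SOAPMessage is available as content of the CXF SoapMessage
public class SaajAwareInInterceptor extends AbstractSoapInterceptor {

    public SaajAwareInInterceptor() {
        super(Phase.PRE_INVOKE);
        addAfter(SAAJInInterceptor.class.getName());
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        SOAPMessage saaj = message.getContent(SOAPMessage.class);
        if (saaj != null) {
            // manipulate the SAAJ tree here, e.g. via saaj.getSOAPBody()
        }
    }
}

It would be registered together with the SAAJ interceptor, e.g. svrFactory.getInInterceptors().add(new SAAJInInterceptor()); followed by svrFactory.getInInterceptors().add(new SaajAwareInInterceptor());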
EDIT: There is no problem with the interceptors; it was just some stupid errors I made.
I want to find the actual Java class that serves the Spring Actuator endpoint (/actuator).
It's similar to this question in a way, but that person wanted to call it via a network HTTP call. Ideally, I can call it within the JVM to save on the cost of setting up an HTTP connection.
The reason for this is that we have two metrics frameworks in our system. We have a legacy metrics framework built on OpenCensus, and we migrated to Spring Actuator (Prometheus metrics based on Micrometer). I think the Spring one is better, but I didn't realize how much infrastructure my company had built around the old one. For example, we leverage internal libraries that use OpenCensus, and the infra team depends on OpenCensus-based metrics from our app. So the idea is to try to merge and report both sets of metrics.
I want to create my own metrics endpoint that pulls in data from OpenCensus's endpoint and Actuator's endpoint. I could make an HTTP call to each, but I'd rather call them within the JVM to save resources and reduce latency.
Or perhaps I'm thinking about it wrong. Should I simply be using MeterRegistry.forEachMeter() in my endpoint?
In any case, I thought if I found the Spring Actuator endpoint, I can see an example of how they're doing it and mimic the implementation even if I don't call it directly.
Bonus: I'll need to track down the OpenCensus handler that serves its endpoint too and will probably make another post for that, but if you know the answer to that as well, please share!
I figured it out and am posting this for anyone else interested.
The key finding: the MeterRegistry that is @Autowired is actually a PrometheusMeterRegistry if you enable Prometheus metrics.
Once you cast it to a PrometheusMeterRegistry, you can call its .scrape() method to get the exact same metrics printout you would get when you hit the HTTP endpoint.
I also need to get the same info from OpenCensus and I found a way to do that too.
Here's the snippet of code for getting metrics from both frameworks:
// OpenCensus metrics: read everything registered in the default Prometheus CollectorRegistry
Enumeration<MetricFamilySamples> openCensusSamples = CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(ImmutableSet.of());
StringWriter writer = new StringWriter();
TextFormat.write004(writer, openCensusSamples);
String openCensusMetrics = writer.toString();

// Micrometer metrics: the autowired MeterRegistry is a PrometheusMeterRegistry, so scrape()
// returns the same text the Prometheus actuator endpoint serves
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) meterRegistry;
String micrometerMetrics = registry.scrape();

return openCensusMetrics.concat(micrometerMetrics);
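If you want to expose the merged output yourself, one option is to wrap that snippet in a custom Actuator endpoint; a rough sketch (the endpoint id and class name are my own, not from any framework):

import java.io.IOException;
import java.io.StringWriter;
import java.util.Collections;
import java.util.Enumeration;

import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.Collector.MetricFamilySamples;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.common.TextFormat;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// hypothetical custom endpoint exposed at /actuator/combinedmetrics
@Component
@Endpoint(id = "combinedmetrics")
public class CombinedMetricsEndpoint {

    private final PrometheusMeterRegistry registry;

    public CombinedMetricsEndpoint(PrometheusMeterRegistry registry) {
        this.registry = registry;
    }

    @ReadOperation(produces = "text/plain")
    public String metrics() throws IOException {
        // OpenCensus side: serialize the default CollectorRegistry in Prometheus text format
        Enumeration<MetricFamilySamples> samples =
                CollectorRegistry.defaultRegistry.filteredMetricFamilySamples(Collections.emptySet());
        StringWriter writer = new StringWriter();
        TextFormat.write004(writer, samples);

        // Micrometer side: scrape() returns the same text the Prometheus actuator endpoint would
        return writer.toString() + registry.scrape();
    }
}

The endpoint would still need to be exposed, e.g. via management.endpoints.web.exposure.include=combinedmetrics.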
I found another interesting way of doing this.
The other answer I gave works, but it has one issue: it contains duplicate results. When I looked into it, I realized that both OpenCensus and Micrometer were reporting the same results.
It turns out that the PrometheusScrapeEndpoint implementation uses the same CollectorRegistry that OpenCensus does, so both sets of metrics were being added to the same registry.
You just need to make sure to provide these beans:
@PostConstruct
public void openCensusStats() {
    PrometheusStatsCollector.createAndRegister();
}

@Bean
public CollectorRegistry collectorRegistry() {
    return CollectorRegistry.defaultRegistry;
}
Let's say I have a REST endpoint for my Driver resource.
I have a PUT method like this:
myapi/drivers/{id}
{body of put method}
I need to add functionality that allows enabling and disabling a driver.
Is it a good idea to create a new endpoint for that, like this?
PUT myapi/drivers/{id}/enable/false
Or is it better to use the existing endpoint? One problem with using the existing endpoint is that a driver has lots of fields (almost 30), and sending all of them just to enable or disable a driver seems like overkill.
What do you think?
This is exactly what the HTTP method PATCH is made for. It is used in cases where the resource has many fields but you only want to update a few.
Just like with PUT, you send a request to myapi/drivers/{id}. However, unlike with PUT, you only send the fields you want to change in the request body.
Creating endpoints like myapi/drivers/{id}/enable is not very RESTful, as "enable" can't really be called a resource on its own.
For an example implementation of a Spring PATCH endpoint, please see this link.
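A minimal sketch of what such a handler could look like in Spring (the Driver and DriverService types and the applyPartialUpdate method are placeholders, not from the question):

import java.util.Map;

import org.springframework.web.bind.annotation.PatchMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DriverController {

    private final DriverService driverService; // placeholder service

    public DriverController(DriverService driverService) {
        this.driverService = driverService;
    }

    // PATCH myapi/drivers/{id} with a body like {"enabled": false}
    @PatchMapping("/myapi/drivers/{id}")
    public Driver patchDriver(@PathVariable long id, @RequestBody Map<String, Object> changes) {
        // apply only the fields present in the body instead of resending all ~30 fields
        return driverService.applyPartialUpdate(id, changes);
    }
}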
Use the PATCH HTTP method to update a single field:
PATCH myapi/drivers/{id}/enable
We have an API which uses Spring JPA and provides access to some data in our database via REST. This API is exposed in a HATEOAS fashion (we are using the Spring implementation).
We are now considering whether to stick with this approach or code our own REST interface manually. Now, I have read a lot of articles about HATEOAS, but I am not sure what the big advantage of using it is. Sure, I understand that I can navigate through the API using links, but I still have to know about the existence of the links at each level, right?
To illustrate my problem, let's say that I have the following structure:
server.com/
- /store
- /users/
server.com/users
- /managers/
- /other/
server.com/managers
- list of entities with ids
I want to consume this API and get all 'manager' entities (located under server.com/users/managers).
What is the correct way to do so when using Spring Boot links?
Option one:
RequestEntity<Void> request = RequestEntity.get("server.com/users/managers").accept(HAL_JSON).build();
final Resources<Manager> managers = restTemplate.exchange(request, new ResourcesType<Manager>() {
}).getBody();
Option two:
// global endpoint
RequestEntity<Void> request = RequestEntity.get("server.com").accept(HAL_JSON).build();
final Resource<Object> rootLinks = restTemplate.exchange(request, new ResourceType<Object>() {
}).getBody();
Links links = new Links(rootLinks.getLinks());
final Link userLink = links.getLink("users").expand();
// users endpoint
request = RequestEntity.get(URI.create(userLink.getHref())).accept(HAL_JSON).build();
final Resource<Object> managerLinks = restTemplate.exchange(request, new ResourceType<Object>() {
}).getBody();
links = new Links(managerLinks.getLinks());
final Link managerLink = links.getLink("managers").expand();
// managers endpoint
request = RequestEntity.get(URI.create(managerLink.getHref())).accept(HAL_JSON).build();
final Resources<Manager> resourceAccounts = restTemplate.exchange(request, new ResourcesType<Manager>() {
}).getBody();
The first option seems straightforward, and I can get all entities with a single request. However, I fail to see how HATEOAS is beneficial if I just use this approach. The Spring documentation states that using hardcoded links is not recommended.
The second approach seems more in the HATEOAS fashion, but it makes three requests just to get to a resource whose location I already know. That doesn't seem right either.
I know it's probably a dumb question, but can somebody explain to me the great idea behind HATEOAS that I am clearly missing?
With HATEOAS, the server can guide a client through the links it provides. The contract between server and client consists of the link's relation type and the media type. By providing or not providing links on the same resource representation, the server can tell the client whether a resource is in a state where editing is enabled, or whether the user is authorized for some operation on the resource, and so on. The server can also change URLs without breaking the contract.
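On the client side, Spring HATEOAS also ships a small helper, Traverson, that follows relation names from the entry point for you, so the client only hardcodes the root URI and the rel names rather than the full path. A rough sketch, reusing the Manager type from the question:

import java.net.URI;

import org.springframework.core.ParameterizedTypeReference;
import org.springframework.hateoas.MediaTypes;
import org.springframework.hateoas.Resources;
import org.springframework.hateoas.client.Traverson;

// follow the "users" and "managers" rels starting from the API root
Traverson traverson = new Traverson(URI.create("http://server.com"), MediaTypes.HAL_JSON);
Resources<Manager> managers = traverson
        .follow("users", "managers")
        .toObject(new ParameterizedTypeReference<Resources<Manager>>() {});

This still issues the intermediate requests under the hood, but the client code no longer depends on the URL structure, which is exactly the part of the contract HATEOAS lets the server change freely.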
I am currently creating a SOAP client in Java with the help of Apache CXF.
I've generated the service classes from a given WSDL and configure the client programmatically (just to make clear that I'm not using Spring configuration).
The service I'm calling requires that each request I send be signed.
What I did so far is create my client and add the WSS4JOutInterceptor in order to sign the message.
Client client = ClientProxy.getClient(soapService.getRawSoapInterface());
//Actually not sure if this is really needed?
QName signatureQName = new QName("http://www.w3.org/2000/09/xmldsig#", "Signature");
Map<String, Object> properties = new HashMap<String, Object>();
Map<QName, Object> processorMap = new HashMap<QName, Object>();
processorMap.put(WSSecurityEngine.SIGNATURE, signatureQName);
properties.put("wss4j.processor.map", processorMap);
properties.put(WSHandlerConstants.USER, "clientSignatureAlias");
properties.put(WSHandlerConstants.PW_CALLBACK_CLASS, MyPwCallback.class.getName());
properties.put(WSHandlerConstants.PASSWORD_TYPE, WSConstants.PW_TEXT);
properties.put(WSHandlerConstants.ACTION, WSHandlerConstants.SIGNATURE);
properties.put(WSHandlerConstants.SIG_PROP_FILE, "client.properties");
properties.put(WSHandlerConstants.ENC_KEY_ID, "X509KeyIdentifier");
WSS4JOutInterceptor wssOutInterceptor = new WSS4JOutInterceptor(properties);
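The interceptor is then attached to the client's outbound chain, roughly like this (not shown above, but implied since the messages do get signed):

// attach the WS-Security signing interceptor to the outgoing chain
client.getOutInterceptors().add(wssOutInterceptor);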
My client.properties contains:
org.apache.wss4j.crypto.provider=org.apache.wss4j.common.crypto.Merlin
org.apache.wss4j.crypto.merlin.keystore.type=jks
org.apache.wss4j.crypto.merlin.keystore.password=secret
org.apache.wss4j.crypto.merlin.keystore.alias=cert_sig
org.apache.wss4j.crypto.merlin.keystore.file=clientCerts.jks
So far so good, and each message is getting signed.
Let's get to the issue:
The problem is that the interceptor puts these security headers into the SOAP request:
<SOAP-ENV:Header xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
First, I don't want them; second, the service I am calling doesn't know them and therefore answers with an exception.
Currently I cannot find a way to avoid this. Any suggestions?
As far as I understood, WSS4J is not able to create an enveloped signature at all!
Therefore I moved in another direction: I used Apache Santuario in order to create a signature for my message.
I used the interceptor mechanism of CXF to create my own interceptor; an abstract class for this use case is provided here: How To Modify The Raw XML message of an Outbound CXF Request?
There I was able to call the Santuario StAX API to create a valid signature; this is described very well in the following blog: http://coheigea.blogspot.ie/2014/03/apache-santuario-xml-security-for-java.html
Since I had some further modifications to make to the request, I modified the raw String, as sketched below.
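For reference, the pattern from that linked answer looks roughly like this (my own simplified sketch, not the exact class from the link; the transform step is where the Santuario signing would go):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// buffers the outgoing XML, lets a subclass rewrite it (e.g. add an enveloped
// signature via Santuario), then writes the result to the original stream
public abstract class RawXmlOutInterceptor extends AbstractPhaseInterceptor<Message> {

    public RawXmlOutInterceptor() {
        super(Phase.PRE_STREAM);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        OutputStream original = message.getContent(OutputStream.class);
        CachedOutputStream cache = new CachedOutputStream();
        message.setContent(OutputStream.class, cache);

        // let the rest of the chain serialize the SOAP envelope into the cache
        message.getInterceptorChain().doIntercept(message);

        try {
            cache.flush();
            String rawXml = IOUtils.toString(cache.getInputStream());
            String rewritten = transform(rawXml);
            original.write(rewritten.getBytes(StandardCharsets.UTF_8));
            original.flush();
            message.setContent(OutputStream.class, original);
        } catch (IOException e) {
            throw new Fault(e);
        }
    }

    // subclass hook: return the modified (e.g. signed) XML
    protected abstract String transform(String rawXml);
}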
Thank god that SOAP is a standardized protocol and everybody just does whatever they want...
I am using a Grizzly HttpServer and I want to add a specific header to every response. Specifically, I want to avoid CORS problems by adding an 'Access-Control-Allow-Origin' header.
So, ideally, I want something like this:
HttpServer server = GrizzlyServerFactory.createHttpServer(uri, crc);
server.setHeader("Access-Control-Allow-Origin" , "*");
Generally, I am looking for a solution that does not require me to manually insert this header in every request-response action.
Is there any way to do this?
As @alexey said, there is no way (in the current Grizzly Server version) to do this. If anyone finds something else that works, I will gladly accept it as the answer.
The best alternative, which works quite well, is to implement the 'ContainerResponseFilter' interface and override the 'filter' method.
Here is an example for 1.x API
Here is an example for 2.x API (minor changes)
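As an illustration, a minimal JAX-RS 2.x version of such a filter might look like this (the class name is my own):

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

// adds the CORS header to every response without touching individual resources
@Provider
public class CorsResponseFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) throws IOException {
        responseContext.getHeaders().add("Access-Control-Allow-Origin", "*");
    }
}

The filter just needs to be registered in (or scanned into) the ResourceConfig the Grizzly server is created from.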