For development purposes, not everyone can install nginx on their machine (for example, our developers on Windows), but we want a reverse proxy that behaves like nginx.
Here's our very specific case:
we have a Spring Boot REST service running on http://0.0.0.0:8081
we have a Spring Boot web application running on http://0.0.0.0:8082
We would like to serve both services from http://0.0.0.0:8080
So we would like to map it like this:
requests to http://0.0.0.0:8080/ get proxied to http://0.0.0.0:8082
requests to http://0.0.0.0:8080/api get proxied to http://0.0.0.0:8081
That way it works like nginx reverse proxying with URL rewriting.
I checked out the Undertow source code and examples, including this specific one: Reverse Proxy Example, but that is a load-balancer example; I haven't found any example that covers what I need.
Also, I know Undertow is capable of this, because WildFly can be configured to cover this exact case without issues through its Undertow subsystem, but we would like to implement it ourselves as a lightweight solution for local development.
Does anyone know of an example that does this, or documentation with enough detail to implement it? I've also read Undertow's documentation on reverse proxying and it isn't much help.
Thanks
This should do the job.
It's Java 8, so some parts may not work on your setup.
You can start it in a similar way to the example you mentioned in your question.
package com.company;
import com.google.common.collect.ImmutableMap;
import io.undertow.client.ClientCallback;
import io.undertow.client.ClientConnection;
import io.undertow.client.UndertowClient;
import io.undertow.server.HttpServerExchange;
import io.undertow.server.ServerConnection;
import io.undertow.server.handlers.proxy.ProxyCallback;
import io.undertow.server.handlers.proxy.ProxyClient;
import io.undertow.server.handlers.proxy.ProxyConnection;
import org.xnio.IoUtils;
import org.xnio.OptionMap;
import java.io.IOException;
import java.net.URI;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* Start the ReverseProxy with an ImmutableMap of matching endpoints and a default
*
* Example:
* mapping: ImmutableMap("api" -> "http://some-domain.com")
* default: "http://default-domain.com"
*
* Request 1: localhost:8080/foo -> http://default-domain.com/foo
* Request 2: localhost:8080/api/bar -> http://some-domain.com/bar
*/
public class ReverseProxyClient implements ProxyClient {
private static final ProxyTarget TARGET = new ProxyTarget() {};
private final UndertowClient client;
private final ImmutableMap<String, URI> mapping;
private final URI defaultTarget;
public ReverseProxyClient(ImmutableMap<String, URI> mapping, URI defaultTarget) {
this.client = UndertowClient.getInstance();
this.mapping = mapping;
this.defaultTarget = defaultTarget;
}
@Override
public ProxyTarget findTarget(HttpServerExchange exchange) {
return TARGET;
}
@Override
public void getConnection(ProxyTarget target, HttpServerExchange exchange, ProxyCallback<ProxyConnection> callback, long timeout, TimeUnit timeUnit) {
URI targetUri = defaultTarget;
Matcher matcher = Pattern.compile("^/(\\w+)(/.*)").matcher(exchange.getRequestURI());
if (matcher.find()) {
String firstUriSegment = matcher.group(1);
String remainingUri = matcher.group(2);
if (mapping.containsKey(firstUriSegment)) {
// If the first URI segment is in the mapping, update the targetUri
targetUri = mapping.get(firstUriSegment);
// Strip the mapping segment from the request URI before proxying.
exchange.setRequestURI(remainingUri);
}
}
client.connect(
new ConnectNotifier(callback, exchange),
targetUri,
exchange.getIoThread(),
exchange.getConnection().getByteBufferPool(),
OptionMap.EMPTY);
}
private final class ConnectNotifier implements ClientCallback<ClientConnection> {
private final ProxyCallback<ProxyConnection> callback;
private final HttpServerExchange exchange;
private ConnectNotifier(ProxyCallback<ProxyConnection> callback, HttpServerExchange exchange) {
this.callback = callback;
this.exchange = exchange;
}
@Override
public void completed(final ClientConnection connection) {
final ServerConnection serverConnection = exchange.getConnection();
serverConnection.addCloseListener(serverConnection1 -> IoUtils.safeClose(connection));
callback.completed(exchange, new ProxyConnection(connection, "/"));
}
@Override
public void failed(IOException e) {
callback.failed(exchange);
}
}
}
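To start it, here is a minimal sketch of how this client could be wired into a plain Undertow server (the ProxyHandler builder API shown assumes a recent Undertow 2.x release; older versions expose ProxyHandler constructors instead, so adjust to your version):

import com.google.common.collect.ImmutableMap;
import io.undertow.Undertow;
import io.undertow.server.handlers.proxy.ProxyHandler;
import java.net.URI;

public class ReverseProxyServer {
    public static void main(String[] args) {
        // Requests under /api/ go to the REST service, everything else to the web application.
        ReverseProxyClient proxyClient = new ReverseProxyClient(
                ImmutableMap.of("api", URI.create("http://localhost:8081")),
                URI.create("http://localhost:8082"));

        // The builder also lets you set a max request time and a fallback handler if needed.
        ProxyHandler proxyHandler = ProxyHandler.builder()
                .setProxyClient(proxyClient)
                .build();

        Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")
                .setHandler(proxyHandler)
                .build()
                .start();
    }
}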
As per M. Deinum's suggestion in the comments, I'll use the Zuul Spring Boot component instead of trying to do this with Undertow, as it's a better fit for this task.
Here's a link to a tutorial on how to do this:
https://spring.io/guides/gs/routing-and-filtering/
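For reference, the route configuration the guide walks you through ends up looking roughly like this in application.properties (property names assume Spring Cloud Netflix Zuul, the ports match the setup from the question, and with a catch-all route the ordering can matter, so YAML may be preferable):

zuul.routes.api.path=/api/**
zuul.routes.api.url=http://localhost:8081
zuul.routes.web.path=/**
zuul.routes.web.url=http://localhost:8082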
Hope this helps anyone else, as this is a pretty common case, and I didn't know about Zuul on Spring Boot.
I'm trying to get the current host and port in a Micronaut application. How do I get them dynamically?
I've tried @Value("{micronaut.server.host}") and @Value("{micronaut.server.port}") but it doesn't work.
@Controller("/some/endpoint")
class SomeController {
@Value("{micronaut.server.host}")
protected String host;
@Value("{micronaut.server.port}")
protected Long port;
}
There are a number of ways to do it. One is to retrieve them from the EmbeddedServer.
import io.micronaut.http.HttpStatus;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.runtime.server.EmbeddedServer;
@Controller("/demo")
public class DemoController {
protected final String host;
protected final int port;
public DemoController(EmbeddedServer embeddedServer) {
host = embeddedServer.getHost();
port = embeddedServer.getPort();
}
@Get("/")
public HttpStatus index() {
return HttpStatus.OK;
}
}
My mistake. As @JaredWare says, we should use the Environment to retrieve the property from the application.
@Controller("/some/endpoint")
class SomeController {
@Inject
protected Environment env;
@Get
public String someMethod () {
Optional<String> host = this.env.getProperty("micronaut.server.host", String.class);
return host.orElse("defaultHost");
}
}
The original way you had it is the same as retrieving it from the environment. You were just missing the $ in your @Value annotation.
@Value("${micronaut.server.host}") is equivalent to env.getProperty("micronaut.server.host", String.class)
That will retrieve whatever is configured. If instead you want to retrieve it from the embedded server itself, you can do that as well, since the actual port may differ from the configured one. That is because it's possible to simply not configure the port, or because the configured value is -1, which indicates a random port.
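For example, a minimal corrected version of the controller from the question (only the $ placeholder syntax changes):

@Controller("/some/endpoint")
class SomeController {

    @Value("${micronaut.server.host}")
    protected String host;

    @Value("${micronaut.server.port}")
    protected Long port;
}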
My team is trying to implement an async listener for the Amazon SQS Java Extended Client Library because, as noted in this and this link, support isn't coming any time soon.
The original problem is that we need to send a very large payload through SQS. To solve this, we used the aforementioned Amazon SQS Java Extended Client Library, which automatically uploads large payloads to S3 and later retrieves them as messages without the consumer ever having to worry about fetching them from S3. Sadly, out of the box this only works synchronously.
So to make it work asynchronously, we've come up with the following code using JmsListener. It works, and we obtain the serialized object in String format.
AWS/SQS Beans
import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.config.JmsListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.jms.ConnectionFactory;
@Configuration
public class AWSConfig {
@Bean
public AWSCredentialsProvider awsCredentialsProvider() {
return new DefaultAWSCredentialsProviderChain();
}
@Bean
public JmsListenerContainerFactory<?> jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
final DefaultJmsListenerContainerFactory jmsListenerContainerFactory = new DefaultJmsListenerContainerFactory();
jmsListenerContainerFactory.setConnectionFactory(connectionFactory);
return jmsListenerContainerFactory;
}
@Bean
public ConnectionFactory connectionFactory(final AmazonS3Client amazonS3Client, final AmazonSQSAsync amazonSqs) {
final ExtendedClientConfiguration extendedClientConfig = new ExtendedClientConfiguration()
.withLargePayloadSupportEnabled(amazonS3Client, "my-bucket-name");
final ProviderConfiguration providerConfiguration = new ProviderConfiguration();
providerConfiguration.setNumberOfMessagesToPrefetch(10);
final SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(providerConfiguration,
new AmazonSQSExtendedClient(amazonSqs, extendedClientConfig));
return sqsConnectionFactory;
}
}
The Listener that uses the @JmsListener annotation:
@Component
public class QueueListener {
...
@JmsListener(destination = "my-queue-name", containerFactory = "jmsListenerContainerFactory")
public void consumeMessage(String queueMessage) {
System.out.println(queueMessage);
}
}
The problem is that although this works, large messages read from S3 arrive as a String. We would really like the messages to be automatically converted into a specific object (as @SqsListener already allows, without us having to parse the String ourselves), like this:
@JmsListener(destination = "my-queue-name", containerFactory = "jmsListenerContainerFactory")
public void consumeMessage(SomeObject queueMessage) ...
There is already a similar question, but the answer's link is dead and its description does nothing for us. And this JMS messaging guide doesn't work well in conjunction with the SQS Extended Library. To make things a bit harder, we can only use the org.springframework.jms.support.converter classes when setting the converter on the jmsListenerContainerFactory, so we can't use the same Jackson mappers AWS uses.
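For context, the converter wiring that last paragraph refers to would look roughly like this (a sketch only, using the standard Spring JMS converter classes; whether it behaves correctly with the Extended Client's S3-backed payloads is exactly the open question, and "_type" is just an illustrative type-id property name the producer would have to set):

import org.springframework.jms.support.converter.MappingJackson2MessageConverter;
import org.springframework.jms.support.converter.MessageType;

@Bean
public JmsListenerContainerFactory<?> jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);

    // Deserialize the JSON text body into the listener method's parameter type.
    MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    converter.setTargetType(MessageType.TEXT);
    converter.setTypeIdPropertyName("_type"); // hypothetical header the producer must populate
    factory.setMessageConverter(converter);
    return factory;
}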
I'm new to the Play framework and trying to use JavaWS to make a call to a RESTful API. I've been struggling a lot with it. This is what I have so far:
This code is based on the JavaWS documentation (which I found quite confusing) and is meant to make the request. I think it works by returning a CompletionStage of an 'ok' Result containing the response body as text.
import javax.inject.Inject;
import com.fasterxml.jackson.databind.JsonNode;
import play.mvc.*;
import play.libs.ws.*;
import java.util.concurrent.*;
import static play.mvc.Results.ok;
public class MyClient implements WSBodyReadables, WSBodyWritables {
private final WSClient ws;
@Inject
public MyClient() {
this.ws = ws;
}
public CompletionStage<Result> index() {
return ws.url("http://example.com").get().thenApply(response ->
ok(response.asText())
);
}
}
This code is then called from a controller:
public Result call(){
MyClient client = new MyClient();
try {
return client.index()
.toCompletableFuture()
.get();
} catch(Exception e){
Logger.error("ah fuck");
}
return internalServerError();
}
I'm currently getting an error which says "variable ws might not have been initialized" which makes sense because I did not initialize ws. I can't figure out how to properly initialize a WSClient instance, nor do I really understand what comes after that. Any help would be greatly appreciated.
Thanks.
Alternatively, you can use the Feign library from Netflix to create a REST client.
@rkj had it right:
inject @Inject WSClient ws; in your controller and then pass the ws instance to the MyClient class and access it from there: MyClient client = new MyClient(this.ws);
That plus a few little bugs and it worked. Thanks!
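For anyone landing here later, a minimal sketch of that wiring (the class names follow the question; this assumes Play's built-in Guice injection and keeps the blocking toCompletableFuture().get() call only to mirror the original code):

import javax.inject.Inject;
import play.libs.ws.WSClient;
import play.mvc.Controller;
import play.mvc.Result;

public class MyController extends Controller {

    private final WSClient ws;

    // Play injects the WSClient into the controller...
    @Inject
    public MyController(WSClient ws) {
        this.ws = ws;
    }

    public Result call() throws Exception {
        // ...and the controller hands it to MyClient, whose constructor becomes:
        // public MyClient(WSClient ws) { this.ws = ws; }
        MyClient client = new MyClient(this.ws);
        return client.index().toCompletableFuture().get();
    }
}

Returning the CompletionStage<Result> directly from the controller action would be more idiomatic in Play than blocking on get(), but the above matches the structure of the question.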
I have a simple Dropwizard 0.8.1 REST service that pulls in Jersey 2.17. Upstream of the REST/Jetty service I have some authentication service that adds some nice authorization information to the HTTP Header that gets passed to my Dropwizard app.
I would love to be able to create a custom annotation in my Resource that hides all the messy header-parsing-to-POJO garbage. Something like this:
@Path("/v1/task")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class TaskResource {
@UserContext // <-- custom/magic annotation
private UserContextData userContextData; // <-- holds all authorization info
@GET
public Collection<Task> fetch() {
// use the userContextData to differentiate what data to return
}
I've spent the last day looking around Stack Overflow and found several other people who had the same issue and appeared (?) to get some satisfaction, but I can't seem to avoid getting a "Not inside a request scope" stack trace when I try to do this.
So I stashed all my changes and tried to implement the example provided in sections 22.1 and 22.2 of the Jersey documentation directly: https://jersey.java.net/documentation/2.17/ioc.html
Following along with their example (but in my Dropwizard app), I'm trying to get a @SessionInject annotation working in my Resource, but it also blows up with a "Not inside a request scope" stack trace each time. What am I doing wrong here?
Resource:
@Path("/v1/task")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class TaskResource {
private final TaskDAO taskDAO;
@Context
private HttpServletRequest httpRequest;
@SessionInject
private HttpSession httpSession;
public TaskResource(TaskDAO taskDAO) {
this.taskDAO = taskDAO;
}
@GET
public Collection<Task> fetch(@SessionInject HttpSession httpSession) {
if (httpSession != null) {
logger.info("TOM TOM TOM httpSession isn't null: {}", httpSession);
}
else {
logger.error("TOM TOM TOM httpSession is null");
}
return taskDAO.findAllTasks();
}
The SessionInjectResolver:
package com.foo.admiral.integration.jersey;
import com.foo.admiral.integration.core.SessionInject;
import javax.inject.Inject;
import javax.inject.Named;
import javax.servlet.http.HttpSession;
import org.glassfish.hk2.api.Injectee;
import org.glassfish.hk2.api.InjectionResolver;
import org.glassfish.hk2.api.ServiceHandle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SessionInjectResolver implements InjectionResolver<SessionInject> {
private static final Logger logger = LoggerFactory.getLogger(SessionInjectResolver.class);
@Inject
@Named(InjectionResolver.SYSTEM_RESOLVER_NAME)
InjectionResolver<Inject> systemInjectionResolver;
@Override
public Object resolve(Injectee injectee, ServiceHandle<?> handle) {
if (HttpSession.class == injectee.getRequiredType()) {
return systemInjectionResolver.resolve(injectee, handle);
}
return null;
}
@Override
public boolean isConstructorParameterIndicator() {
return false;
}
@Override
public boolean isMethodParameterIndicator() {
return false;
}
}
The HttpSessionFactory:
package com.foo.admiral.integration.jersey;
import javax.inject.Inject;
import javax.inject.Singleton;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import org.glassfish.hk2.api.Factory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@Singleton
public class HttpSessionFactory implements Factory<HttpSession> {
private static final Logger logger = LoggerFactory.getLogger(HttpSessionFactory.class);
private final HttpServletRequest request;
@Inject
public HttpSessionFactory(HttpServletRequest request) {
logger.info("Creating new HttpSessionFactory with request");
this.request = request;
}
@Override
public HttpSession provide() {
logger.info("Providing a new session if one does not exist");
return request.getSession(true);
}
@Override
public void dispose(HttpSession t) {
}
}
The annotation:
package com.foo.admiral.integration.core;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD})
public @interface SessionInject {
}
And, finally, the binding in the Dropwizard Application class:
@Override
public void run(TodoConfiguration configuration, Environment environment) throws Exception {
...
environment.jersey().register(new AbstractBinder() {
@Override
protected void configure() {
bindFactory(HttpSessionFactory.class).to(HttpSession.class);
bind(SessionInjectResolver.class)
.to(new TypeLiteral<InjectionResolver<SessionInject>>() { })
.in(Singleton.class);
}
});
Ye old stack trace:
Caused by: java.lang.IllegalStateException: Not inside a request scope.
at jersey.repackaged.com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at org.glassfish.jersey.process.internal.RequestScope.current(RequestScope.java:233)
at org.glassfish.jersey.process.internal.RequestScope.findOrCreate(RequestScope.java:158)
at org.jvnet.hk2.internal.MethodInterceptorImpl.invoke(MethodInterceptorImpl.java:74)
at org.jvnet.hk2.internal.MethodInterceptorInvocationHandler.invoke(MethodInterceptorInvocationHandler.java:62)
at com.sun.proxy.$Proxy72.getSession(Unknown Source)
at com.foo.admiral.integration.jersey.HttpSessionFactory.provide(HttpSessionFactory.java:29)
at com.foo.admiral.integration.jersey.HttpSessionFactory.provide(HttpSessionFactory.java:14)
Some clues that may be useful:
1) I'm noticing that the logging statements in my HttpSessionFactory never fire, so I don't think the Factory is correctly registered with Dropwizard.
2) If I change the annotation to be a parameter instead of a field and move it into the fetch() method signature like this, it doesn't throw the stack trace (but the httpSession is still null, presumably because the Factory isn't firing...):
public Collection<Task> fetch(@SessionInject HttpSession httpSession) {
3) It doesn't appear to matter if I "register" the binder with environment.jersey().register() or environment.jersey().getResourceConfig().register()... they appear to do the same thing.
Do you see any obvious problems? Thanks in advance!
This is weird behavior, but here is what appears to be going on.
You have registered TaskResource as an instance and not as a .class. I'm pretty sure of this (though you haven't mentioned it).
register(new TaskResource());
/* instead of */
register(TaskResource.class);
Doing the former puts the resource in singleton scope; the latter puts it in request scope (unless annotated otherwise; see below).
When the resource model is loading, it sees that the TaskResource is a singleton and that the HttpServletRequest is in request scope. Either that, or that the factory is in a per-request scope; I'm guessing it's one of the two.
I thought it might actually be a scope issue, as the error message suggests, but what I'm pretty sure of is that at runtime it gets handled with a thread-local proxy because of the lesser scope.
You can see it fixed by registering the TaskResource as a class and then annotating the TaskResource with @Singleton. This is if you actually do want the resource class to be a singleton; if not, just leave off the @Singleton.
The odd thing to me is that it fails on startup, when the resource is explicitly instantiated on startup, but works when the framework loads it on the first request (which is what happens when you register it as a class). Both are still in singleton scope.
One thing you might want to take into consideration is whether you actually want the resource to be a singleton at all. You do have to worry about thread-safety issues with singletons, and there are some other limitations. Personally, I prefer to keep them in request scope. You would have to do some performance testing to see if there is much of an impact for your application.
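Concretely, the suggested change would look something like this (a sketch only; once the resource is registered as a class, the TaskDAO has to come from HK2, for example via the same AbstractBinder you already register, since you no longer construct the resource yourself):

// In the Dropwizard Application run() method: register the class, not an instance.
environment.jersey().register(TaskResource.class);

// On the resource itself, add @Singleton only if singleton scope is really what you want.
@Singleton
@Path("/v1/task")
public class TaskResource {

    private final TaskDAO taskDAO;

    @Inject
    public TaskResource(TaskDAO taskDAO) { // e.g. bind(taskDAO).to(TaskDAO.class) in the binder
        this.taskDAO = taskDAO;
    }
    ...
}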
UPDATE
For parameter injection you may want to take a look at this post
UPDATE 2
See Also
jersey 2 context injection based upon HttpRequest without singleton. My answer should shed some more light.
I am trying to execute a Struts2 Action from inside a Quartz job -- generalizing, from any context which is not the processing of an HTTP request.
I started here http://struts.apache.org/2.0.6/docs/how-can-we-schedule-quartz-jobs.html but the document seems to be pretty obsolete.
I believe (but I may be wrong) I've boiled it down to the need to obtain a Container object:
import java.util.HashMap;
import com.opensymphony.xwork2.ActionProxy;
import com.opensymphony.xwork2.DefaultActionProxyFactory;
...
HashMap ctx = new HashMap();
DefaultActionProxyFactory factory= new DefaultActionProxyFactory();
factory.setContainer(HOW DO I GET THE CONTAINER??);
ActionProxy proxy = factory.createActionProxy("", "scheduled/myjob", ctx);
One solution would be to issue an http request (via TCP) against localhost. I would prefer to avoid that.
I somewhat fear what providing this answer may encourage some people to do, but here it is, as a proof of concept and as an actual solution for anyone who, for whatever reason (maybe they are inheriting some whacked-out application for which this is needed?), needs to execute Struts2 actions outside of a normal request context.
So here is a raw, but working, solution (provided as a starting point, not an optimal implementation):
First, add these three classes to a package called com.stackoverflow.struts2.quartz:
A simple job that just asks for a proxy for the given job context and executes it:
package com.stackoverflow.struts2.quartz;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
public class ActionJob implements Job {
@Override
public void execute(JobExecutionContext context) throws JobExecutionException {
try {
QuartzActionProxyFactory.getActionProxy(context).execute();
} catch (Exception e) {
e.printStackTrace();
throw new JobExecutionException(e);
}
}
}
Some constants for passing around the action details:
package com.stackoverflow.struts2.quartz;
public class QuartzActionConstants {
public static final String NAMESPACE = "struts.action.namespace";
public static final String NAME = "struts.action.name";
public static final String METHOD = "struts.action.method";
}
A custom ActionProxyFactory that can be accessed statically from the ActionJob:
package com.stackoverflow.struts2.quartz;
import java.util.HashMap;
import java.util.Map;
import org.apache.struts2.impl.StrutsActionProxyFactory;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import com.opensymphony.xwork2.ActionContext;
import com.opensymphony.xwork2.ActionProxy;
import com.opensymphony.xwork2.ActionProxyFactory;
public class QuartzActionProxyFactory extends StrutsActionProxyFactory {
private static ActionProxyFactory actionProxyFactory;
public QuartzActionProxyFactory() {
actionProxyFactory = this;
}
public static ActionProxy getActionProxy(JobExecutionContext context) throws JobExecutionException {
ActionProxy actionProxy = null;
try {
@SuppressWarnings("unchecked")
Map<String, Object> actionParams = context.getJobDetail().getJobDataMap();
Map<String, Object> actionContext = new HashMap<String, Object>();
actionContext.put(ActionContext.PARAMETERS, actionParams);
actionProxy = actionProxyFactory.createActionProxy(
(String) actionParams.get(QuartzActionConstants.NAMESPACE),
(String) actionParams.get(QuartzActionConstants.NAME),
(String) actionParams.get(QuartzActionConstants.METHOD),
actionContext,
false, //set to false to prevent execution of result, set to true if this is desired
false);
} catch (Exception e) {
throw new JobExecutionException(e);
}
return actionProxy;
}
}
Then, in your struts.xml, add:
<bean name="quartz" type="com.opensymphony.xwork2.ActionProxyFactory" class="com.stackoverflow.struts2.quartz.QuartzActionProxyFactory"/>
<constant name="struts.actionProxyFactory" value="quartz"/>
Then you can schedule action executions with some simple code:
SchedulerFactory sf = new StdSchedulerFactory();
Scheduler scheduler = sf.getScheduler();
scheduler.start();
JobDetail jobDetail = new JobDetail("someActionJob", Scheduler.DEFAULT_GROUP, ActionJob.class);
@SuppressWarnings("unchecked")
Map<String, Object> jobContext = jobDetail.getJobDataMap();
jobContext.put(QuartzActionConstants.NAMESPACE, "/the/action/namespace");
jobContext.put(QuartzActionConstants.NAME, "theActionName");
jobContext.put(QuartzActionConstants.METHOD, "theActionMethod");
Trigger trigger = new SimpleTrigger("actionJobTrigger", Scheduler.DEFAULT_GROUP, new Date(), null, SimpleTrigger.REPEAT_INDEFINITELY, 1000L);
scheduler.deleteJob("someActionJob", Scheduler.DEFAULT_GROUP);
scheduler.scheduleJob(jobDetail, trigger);
And that's it. This code will cause the action to be executed every second, indefinitely; the interceptors will all fire and the dependencies will be injected. Of course, any logic or interceptors that depend on Servlet objects like HttpServletRequest are not going to work properly, but then it wouldn't make sense to schedule those actions outside of the servlet context anyway.
You don't need an HttpServletRequest to format an email with FreeMarker. See the following answer:
Create multi-part message in MIME format Freemarker template via Spring 3 JavaMail
For sending mail, you can inject the mail component into your Quartz job using Spring. Even though there is a RequestContextHolder class to retrieve the HttpServletRequest, you won't get an HttpServletRequest from a Quartz job.
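A minimal sketch of that approach (class and template names here are hypothetical, and it assumes spring-context-support and FreeMarker are on the classpath; the JavaMailSender and FreeMarker Configuration are wired as ordinary Spring beans, with no servlet request involved):

import freemarker.template.Configuration;
import freemarker.template.Template;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.ui.freemarker.FreeMarkerTemplateUtils;

import javax.mail.internet.MimeMessage;
import java.util.HashMap;
import java.util.Map;

public class ReportMailer {

    private final JavaMailSender mailSender;        // injected by Spring
    private final Configuration freemarkerConfig;   // injected by Spring

    public ReportMailer(JavaMailSender mailSender, Configuration freemarkerConfig) {
        this.mailSender = mailSender;
        this.freemarkerConfig = freemarkerConfig;
    }

    public void sendReport(String to) throws Exception {
        // Build the template model and render it; no HttpServletRequest is needed.
        Map<String, Object> model = new HashMap<>();
        model.put("reportDate", new java.util.Date());

        Template template = freemarkerConfig.getTemplate("report.ftl");
        String body = FreeMarkerTemplateUtils.processTemplateIntoString(template, model);

        MimeMessage message = mailSender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message, true);
        helper.setTo(to);
        helper.setSubject("Scheduled report");
        helper.setText(body, true); // true = HTML body
        mailSender.send(message);
    }
}

A Quartz job that is wired through Spring (for example with a SpringBeanJobFactory) can then simply call reportMailer.sendReport(...) from its execute method.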