How to expose port from container to container on testcontainers? - java

I want to integration-test a few services running in a Docker environment. They have to talk to each other, so I need to somehow expose ports between them. Is this setup possible using the Testcontainers library? I did not find any docs on it.
The tested app runs locally and will connect to the dockerised servers:
mysql
backend
To make it even harder, the backend server needs to talk to the mysql service.
Is this possible using Testcontainers?

Testcontainers supports networks; there are a few examples in its tests.
You first need to create a network (via Network.newNetwork()), and then use withNetwork/withNetworkAliases to "connect" your containers to the same network.
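A minimal sketch of that setup, assuming a MySQL container plus a hypothetical backend image (the backend image name, the aliases and the DB_URL environment variable are placeholders, not anything from the question):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.MySQLContainer;
import org.testcontainers.containers.Network;

public class NetworkExample {
    public static void main(String[] args) {
        // One shared network for all containers in the test
        Network network = Network.newNetwork();

        MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0")
                .withNetwork(network)
                .withNetworkAliases("mysql");   // reachable as "mysql" inside the network

        GenericContainer<?> backend = new GenericContainer<>("my-backend:latest")
                .withNetwork(network)
                .withNetworkAliases("backend")
                .withExposedPorts(8080)
                // inside the network the backend resolves the alias, not localhost
                .withEnv("DB_URL", "jdbc:mysql://mysql:3306/test");

        mysql.start();
        backend.start();

        // The locally running test reaches the backend via the randomly mapped port
        String baseUrl = "http://" + backend.getHost() + ":" + backend.getMappedPort(8080);
        System.out.println(baseUrl);
    }
}
```

Container-to-container traffic uses the network aliases (backend reaches mysql:3306), while your local test code uses getHost()/getMappedPort().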

Generally, when network communication between containers is required, a network should be created at the Docker level and the containers should share it.
Things can be simplified by using docker-compose, which by default creates a network for all services declared in the docker-compose file. The containers can then communicate via their service names.
I have never used the Testcontainers project, but I see in the docs that it has a Docker Compose module.
Quoting from the docs:
Testcontainers will spin up a small 'ambassador' container, which will
proxy between the Compose-managed containers and ports that are
accessible to your tests. This is done using a separate, minimal
container that runs socat as a TCP proxy.
There is also an example in the docs that you can build on.
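A rough sketch of the Docker Compose module, assuming a docker-compose.yml declaring mysql and backend services (the file path and service names are placeholders):

```java
import java.io.File;

import org.testcontainers.containers.DockerComposeContainer;

public class ComposeExample {
    public static void main(String[] args) {
        DockerComposeContainer<?> environment =
                new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
                        .withExposedService("mysql_1", 3306)
                        .withExposedService("backend_1", 8080);

        environment.start();

        // The test talks to the services through the ambassador container,
        // not directly to the Compose-managed containers
        String host = environment.getServiceHost("backend_1", 8080);
        Integer port = environment.getServicePort("backend_1", 8080);
        System.out.println("http://" + host + ":" + port);
    }
}
```

Inside the Compose network the services still reach each other via their service names (backend connects to mysql:3306).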

Related

setup proxy to docker containers from java application

I need help with a technical solution.
I have a Java (Spring Boot) application which may start Docker containers. The application assigns an ID and a port to each container. That port is used as a separate UI, and the ID is used to stop the container.
For now, the application works on secured port 443, while each container opens its own port in the range 19000-19100.
Is it possible to set up something like a proxy server in the application, verify the request and then forward it to the container?
Let's say, instead of myhost.com:19000 I want to use myhost.com/container/{containerId}?
I'm thinking about RestTemplate or a Feign client, but I'm not sure how they would behave with websockets. Any thoughts? Existing tools or libraries?
Take a look at Spring Cloud Netflix Zuul.
I am working on a project where we use this to proxy requests from frontend to a service which handles persistence processes.
Maybe it will help you achieve what you are looking for.
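For reference, a static route in a Zuul gateway's application.yml looks roughly like this (the route name, path and target URL are placeholders; mapping myhost.com/container/{containerId} to a dynamically assigned port would additionally need a custom RouteLocator or a pre filter that resolves the containerId to the right port):

```yaml
zuul:
  routes:
    containers:
      path: /container/**
      url: http://localhost:19000
```

Note that Zuul 1.x does not support WebSockets out of the box, so the websocket concern from the question is worth prototyping early.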

Use Docker to invoke .NET Core from Java

I have made a small .NET Core REST API which I would like to be able to easily put on a Linux server running a Java app on Tomcat. Can one use Docker to ease the deployment of the .NET tool, and if so, how is it done? I was told by someone that Docker would (more or less) allow me to bundle the API as a single app/file without having to bother too much about deployment policies at the place I am working (which by default only allows Java apps to be deployed).
You can use a Docker image such as microsoft/dotnet to run your app in a Docker container. Please read the documentation on the linked page on how to run your app inside the container.
If you then map an exposed port (443, 80, 8080... depends on your app) using the -p option on container startup, you can access the REST endpoints from any software you like, because it is basically behaving like another REST server running on that host. Since you want to run Tomcat in parallel, you should avoid mapping the container's port to 8080 on your host, though! Other than that, this setup is totally independent of the application server running on the host itself.
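As a sketch, the build-and-run steps might look like this (the image name, tag and host port are examples; host port 5000 is chosen to stay clear of Tomcat on 8080):

```
# build the .NET Core API image from the project's Dockerfile
docker build -t my-dotnet-api .

# run it detached, mapping container port 80 to host port 5000
docker run -d --name dotnet-api -p 5000:80 my-dotnet-api
```

The Java app (or anything else on the host) can then call the API at http://localhost:5000.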

Zuul API GW as a docker container vs. as part of a Java Spring application?

I'm breaking my monolith application into a set of microservices written in Java Spring. As part of my microservice architecture, I'm implementing some basic patterns such as service discovery, API gateway and more.
I implemented my API gateway as a Spring Boot application using @EnableZuulProxy, which is part of the Spring Cloud project.
My questions are:
What is the difference between my implementation and using the Zuul Docker image off the shelf?
What are the cons and pros of each approach?
There is no functional difference between running your Zuul API gateway as a jar or as a Docker container; in both cases it plays the role of API gateway.
The difference is in Ops (from DevOps): how you build, check, destroy and publish it, control the number of instances, and so on.
If you have chosen Docker as the main part of your infrastructure, and manage it using Docker Swarm, Mesos & Marathon, Kubernetes, Nomad or the like, then wrap your API gateway in Docker too.
If you run your Docker containers by hand, using a console and the docker run command, you can leave the API gateway as a jar build. But then you lose all the benefits of containerization.
Both solutions support balancing between instances of your application.
The main difference is:
Zuul API GW:
cannot launch a new instance automatically (you have to scale manually)
no need to containerize your application
A Docker container orchestrator (Docker Swarm, Kubernetes ...) can scale automatically (launch a new instance when one is needed).

How to start slaves on different machines in spring remote partitioning strategy

I am using Spring Batch local partitioning to process my job. In local partitioning, multiple slaves are created in the same instance, i.e. in the same JVM. How is remote partitioning different from local partitioning? What I am assuming is that in remote partitioning each slave is executed on a different machine. Is my understanding correct? If so, how do I start the slaves on different machines without using CloudFoundry? I have seen Michael Minella's talk on remote partitioning (https://www.youtube.com/watch?v=CYTj5YT7CZU). I am curious to know how remote partitioning works without CloudFoundry. How can I start slaves on different machines?
While that video uses CloudFoundry, the premise of how it works applies off CloudFoundry as well. In that video I launch multiple JVM processes (web apps in that case). Some are configured as slaves so they listen for work. The other is configured as a master and he's the one I use to do the actual launching of the job.
Off of CloudFoundry, this would be no different than deploying WAR files onto Tomcat instances on multiple servers. You could also use Spring Boot to package executable jar files that run your Spring applications in a web container. In fact, the code for that video (which is available on Github here: https://github.com/mminella/Spring-Batch-Talk-2.0) can be used in the same way it was on CF. The only change you'd need to make is to not use the CF specific connection factories and use traditional configuration for your services.
In the end, the deployment model is the same off CloudFoundry or on. You launch multiple JVM processes on multiple machines (connected by middleware of your choice) and Spring Batch handles the rest.
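To make the split concrete, here is a plain-Java sketch of what a master's range partitioner computes before handing work to the slaves: it divides a key range into gridSize sub-ranges, one per slave step execution. The class and method names here are illustrative, not the actual Spring Batch Partitioner API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RangePartitioner {
    /**
     * Splits [min, max] into up to gridSize contiguous sub-ranges.
     * Each entry maps a partition name to its {start, end} bounds,
     * which a master would place into one ExecutionContext per slave.
     */
    public static Map<String, long[]> partition(long min, long max, int gridSize) {
        Map<String, long[]> partitions = new LinkedHashMap<>();
        long targetSize = (max - min) / gridSize + 1;
        long start = min;
        for (int i = 0; i < gridSize && start <= max; i++) {
            long end = Math.min(start + targetSize - 1, max);
            partitions.put("partition" + i, new long[] {start, end});
            start = end + 1;
        }
        return partitions;
    }
}
```

For 100 IDs and 4 slaves this yields four ranges of 25 IDs each; each slave, wherever its JVM runs, processes only the rows in its own range.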

Running a java application on a remote server

I want to run a standalone java application on a remote server. It would not be accessible to clients, but would do background calculations and interact with a database and Secure Socket connection to a third party site. It would also interact with a php site.
Do I have to deploy this with JSP, or can I write a standalone application? If the latter, how would I deploy a standalone Java application (jar file) on a remote server? I understand that I must have them install a JVM on the server (not a problem), but then how would I deploy it (if possible)? Would I start it from a command line?
I know I have much to learn, but I am not sure how I would access the command line on a remote server. Through cPanel?
Thanks.
First of all, you'll want to set up some firewall rules to allow access to that server. I'm hoping that you don't expose that server naked to the Internet.
If all you need is database access exposed on the Internet, I don't see why it can't be a secured web app deployed on a servlet/JSP engine and accessed via a web server. You can leverage basic auth for security, JDBC access to the database from the server, and servlets as controllers to accept requests in a nice REST API.
It'll save you the complications of sockets and inventing your own protocol (use HTTP), starting and stopping the application (now it's just a web server/servlet engine), and deployment (send a WAR file).
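As a minimal illustration of that suggestion, here is a tiny HTTP endpoint using the JDK's built-in com.sun.net.httpserver rather than a full servlet/JSP engine (the /status path and JSON body are made up; a real deployment would add the basic auth and database access described above):

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MiniRestServer {
    /** Starts a server on the given port (0 picks a free port) and returns it. */
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // A single GET endpoint standing in for a database-backed resource
        server.createContext("/status", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Packaged as a jar, this starts from the command line with java -jar, listens over plain HTTP, and the PHP site can call it with ordinary HTTP requests.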
Does it really have to be a 'standalone' application? I think that in your case the best match would be to use the Spring container to load your application within some server (Tomcat?) and expose services via standard controllers. With Spring, you actually only have to add some annotations on the service methods.
Your PHP site can then interact with these controllers using, for example, ajax requests.
If your application is already written, you can easily adapt it to run within the Spring container. It's very non-invasive and promotes the use of POJOs.
