Is there a way to deploy two containers in one heroku app? - java

Is there any way to deploy two containers in just one app on Heroku?
I ask because I need to deploy two Java applications to Heroku, where I already have a database configured in an app. These two applications, an API and a database update process, need to access the same database.
If there is no way to run two containers in one Heroku app, how would you solve this case: a remote database, a process that updates that database from a cron job and shell script, and an API that accesses the updated database?
Would a single image containing both applications and the jobs be an option?

A web dyno can expose only one port, so it is not possible to deploy the two applications together if both require inbound HTTP connectivity.
Option: web and worker dynos
The API handles the incoming HTTP traffic while the backend (DB) app runs in the background as a worker. They communicate via a queue, for example RabbitMQ, or you can use Redis (same concept: one app produces, the other consumes), as in the sketch below.
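Here is a minimal sketch of that producer/consumer split, assuming Redis as the queue and the Jedis client; the queue name is illustrative, and on Heroku both dynos would share the Redis add-on's REDIS_URL config var.

    import java.net.URI;
    import java.util.List;

    import redis.clients.jedis.Jedis;

    // Minimal sketch of the web/worker option, assuming Redis and the Jedis client.
    // The queue name is illustrative; REDIS_URL points both dynos at the same add-on.
    public class DbUpdateQueue {

        private static final String QUEUE = "db-update-jobs";

        private static Jedis connect() {
            return new Jedis(URI.create(System.getenv("REDIS_URL")));
        }

        // Called from the web dyno (the API) to hand work to the background process.
        public static void produce(String payload) {
            try (Jedis jedis = connect()) {
                jedis.lpush(QUEUE, payload);
            }
        }

        // Runs in the worker dyno: blocks until a job arrives, then processes it.
        public static void consumeForever() {
            try (Jedis jedis = connect()) {
                while (true) {
                    // BRPOP returns [queueName, value]; timeout 0 blocks indefinitely.
                    List<String> job = jedis.brpop(0, QUEUE);
                    System.out.println("processing " + job.get(1));
                }
            }
        }
    }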
Option: two web dynos
Deploy the two apps independently on two different web dynos and have them communicate over HTTPS using a secure token.
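A minimal sketch of that call from one dyno to the other, using the JDK 11 HttpClient; the app URL, endpoint and SHARED_TOKEN variable are placeholders, and the receiving app must verify the token before doing any work.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal sketch of the two-web-dyno option: one app calling the other over
    // HTTPS with a shared secret. URL, endpoint and env var are placeholders.
    public class UpdaterClient {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://db-updater-app.herokuapp.com/refresh"))
                    .header("Authorization", "Bearer " + System.getenv("SHARED_TOKEN"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            // The receiving app checks the token before running the update.
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }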
Option: both apps in one Docker image
Although it is technically possible, you won't find much help (or consensus) for this, as it goes against the Docker principle of one process per container.
If you really want to give it a try, you can start both apps (on different ports) and expose only the API for incoming traffic.

Related

Deploying springboot app and ui separately

Currently, I have a Spring Boot jar file with a number of REST APIs, including login calls, deployed on EC2. I also have a separate code base for my UI, i.e. JS, HTML and CSS. What is the best way to deploy this on AWS and keep it separate from the backend?
This can be done in many ways, but I will share a simple one.
Deploy your Spring Boot app on one AWS instance.
Deploy the front-end app on another AWS instance.
This is a kind of two-tier application where the server and the client app are hosted on different instances. You can restrict your REST API so that it is only accessible from the front-end app (see the sketch after the links below). For a trial you can use a Heroku account. E.g.
Github: https://github.com/krishna28/springbootapi
Also check https://github.com/krishna28/etodo
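If the restriction is enforced in the Spring Boot app itself, one option for browser traffic is a CORS rule that only allows the front-end's origin; the sketch below assumes Spring MVC, and the origin URL and path pattern are placeholders. Network-level restriction (e.g. AWS security groups) is configured outside the code.

    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.servlet.config.annotation.CorsRegistry;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

    // Minimal sketch: allow only the front-end instance's origin to call the REST API
    // from the browser. Origin and path pattern are placeholders.
    @Configuration
    public class CorsConfig implements WebMvcConfigurer {

        @Override
        public void addCorsMappings(CorsRegistry registry) {
            registry.addMapping("/api/**")
                    .allowedOrigins("http://frontend.example.com")
                    .allowedMethods("GET", "POST", "PUT", "DELETE");
        }
    }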

How to start slaves on different machines in spring remote partitioning strategy

I am using Spring Batch local partitioning to process my job. In local partitioning, multiple slaves are created within the same instance, i.e. in the same job. How is remote partitioning different from local partitioning? What I am assuming is that in remote partitioning each slave is executed on a different machine. Is my understanding correct? If so, how do I start the slaves on different machines without using Cloud Foundry? I have seen Michael Minella's talk on remote partitioning (https://www.youtube.com/watch?v=CYTj5YT7CZU). I am curious to know how remote partitioning works without Cloud Foundry. How can I start slaves on different machines?
While that video uses CloudFoundry, the premise of how it works applies off CloudFoundry as well. In that video I launch multiple JVM processes (web apps in that case). Some are configured as slaves, so they listen for work. The other is configured as a master, and it is the one I use to do the actual launching of the job.
Off of CloudFoundry, this would be no different than deploying WAR files onto Tomcat instances on multiple servers. You could also use Spring Boot to package executable jar files that run your Spring applications in a web container. In fact, the code for that video (which is available on Github here: https://github.com/mminella/Spring-Batch-Talk-2.0) can be used in the same way it was on CF. The only change you'd need to make is to not use the CF specific connection factories and use traditional configuration for your services.
In the end, the deployment model is the same off CloudFoundry or on. You launch multiple JVM processes on multiple machines (connected by middleware of your choice) and Spring Batch handles the rest.
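For reference, a minimal sketch of the master-side wiring with spring-batch-integration's MessageChannelPartitionHandler; the channel beans, grid size and the step name "workerStep" are illustrative (not taken from the talk's code) and assume the middleware (RabbitMQ, JMS, etc.) is already wired into the "requests" and "replies" channels.

    import java.util.HashMap;
    import java.util.Map;

    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.core.partition.support.Partitioner;
    import org.springframework.batch.integration.partition.MessageChannelPartitionHandler;
    import org.springframework.batch.item.ExecutionContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.core.MessagingTemplate;
    import org.springframework.messaging.PollableChannel;

    // Minimal sketch of the master side of remote partitioning; bean names,
    // grid size and "workerStep" are illustrative assumptions.
    @Configuration
    public class MasterPartitionConfig {

        @Bean
        public Partitioner partitioner() {
            // Each entry becomes a StepExecution that a slave JVM will pick up.
            return gridSize -> {
                Map<String, ExecutionContext> partitions = new HashMap<>();
                for (int i = 0; i < gridSize; i++) {
                    ExecutionContext ctx = new ExecutionContext();
                    ctx.putInt("partition.id", i);
                    partitions.put("partition" + i, ctx);
                }
                return partitions;
            };
        }

        @Bean
        public MessageChannelPartitionHandler partitionHandler(MessagingTemplate requests,
                                                               PollableChannel replies) {
            MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
            handler.setStepName("workerStep");        // the step deployed in the slave JVMs
            handler.setGridSize(4);
            handler.setMessagingOperations(requests); // sends StepExecutionRequests over the middleware
            handler.setReplyChannel(replies);         // waits for the slaves' results
            return handler;
        }

        @Bean
        public Step masterStep(StepBuilderFactory steps,
                               Partitioner partitioner,
                               MessageChannelPartitionHandler partitionHandler) {
            return steps.get("masterStep")
                    .partitioner("workerStep", partitioner)
                    .partitionHandler(partitionHandler)
                    .build();
        }
    }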

spring boot application in cluster

I am developing a Spring Boot application.
Spring Boot creates a .jar file for the application.
I want to cluster this application across different servers. Let's say I build the jar file and run the project; it should then run in cluster mode across a number of defined servers and be able to serve end users' needs.
My jar will reside on only one server, but it will be clustered across a number of servers. When an end user calls a web service from my Spring Boot app, they never know which server it is being served from.
The reason behind clustering is that if any of the servers goes down in the future, end users will still be able to access the web services from another server. But I don't know how to make it clustered.
Can anyone please give me insight on this?
If you want to have it clustered, you just run your Spring Boot application on multiple servers (of course, the JAR must be present on those servers, otherwise you can't run it). You would then place a load balancer in front of the application servers to distribute the load.
If all the services you are going to expose are stateless, you only need a load balancer in front of your nodes, for example Apache or nginx. If your services are stateful (they store any state: sessions, data in a DB), you have to use a distributed cache or an in-memory data grid:
For sessions you can use the spring-session project, which can use Redis to store them (see the sketch after this list).
For data stored in a DB, you need to cluster the DB itself, and you can use a distributed cache above your DB layer, such as Hazelcast.
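For the session part, a minimal sketch with spring-session and Redis (Lettuce connection factory assumed); host and port are placeholders, and every node behind the load balancer must point at the same Redis instance.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    // Minimal sketch: externalise HTTP sessions to Redis so any node can serve any user.
    // Host and port are placeholders.
    @Configuration
    @EnableRedisHttpSession
    public class SessionConfig {

        @Bean
        public LettuceConnectionFactory redisConnectionFactory() {
            // Point every clustered node at the same Redis instance.
            return new LettuceConnectionFactory("redis-host", 6379);
        }
    }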
Look into Spring Cloud; it uses some Netflix open-source software along with Amazon's to create 12-factor apps for microservices.
Ideally you would need a load balancer and a service registry to help you run multiple instances of the Spring Boot app. I believe you have to add a dependency called Eureka (see the sketch below).
Check the link below:
Spring Cloud
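A minimal sketch of what that looks like, assuming the spring-cloud-starter-netflix-eureka-client dependency and a running Eureka server configured via eureka.client.serviceUrl.defaultZone; the class name is illustrative.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

    // Minimal sketch: each server runs this same jar and registers with Eureka,
    // so a load balancer or client-side discovery can route around failed nodes.
    @SpringBootApplication
    @EnableEurekaClient
    public class ClusteredApiApplication {

        public static void main(String[] args) {
            SpringApplication.run(ClusteredApiApplication.class, args);
        }
    }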
You can deploy it on Cloud Foundry and use the autoscale function to increase the number of your application instances.

Scaling Scheduler Web Service

We are developing an application which periodically syncs the LDAP servers of different clients with our database. This application needs to be accessed via a web portal. A web user will create, modify or delete scheduled tasks on this application. So, we have developed this application as a web service.
Now, we have to scale this application and also ensure high availability.
The application is an Axis2-based web service running on Tomcat. We have thought of an httpd + mod_jk + Tomcat combination for load balancing. The problem is that if a request for modification/deletion comes in, it should land on the same Tomcat server on which the task was created initially. But since requests can come from different web users accessing the web portal from different IP addresses, we cannot rely on the same session ID (sticky sessions).
Any solutions? Different architecture? Anything.
We have also thought of using the Quartz scheduler API. The site says it supports load balancing and clustering. Does anyone have experience with such a scenario using Quartz?
If you are using Quartz for your scheduling, it can be backed by a database (see JDBCJobStore). Then you could access any Tomcat server and the scheduling would be centralized. I would recommend using a key in the database that you return to the Axis service, so that the user can reference the same data between calls.
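A minimal sketch of such a clustered, database-backed Quartz setup; the data source name, JDBC settings and thread count are placeholders, and the Quartz tables must already exist in the shared database.

    import java.util.Properties;

    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.impl.StdSchedulerFactory;

    // Minimal sketch: clustered Quartz with JDBCJobStore, so any Tomcat node can
    // create, modify or delete the same scheduled tasks. JDBC settings are placeholders.
    public class ClusteredSchedulerFactory {

        public static Scheduler create() throws SchedulerException {
            Properties props = new Properties();
            props.setProperty("org.quartz.scheduler.instanceName", "ldapSyncScheduler");
            props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
            props.setProperty("org.quartz.threadPool.threadCount", "5");
            // Store jobs and triggers in the shared database instead of in memory.
            props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            props.setProperty("org.quartz.jobStore.isClustered", "true");
            props.setProperty("org.quartz.jobStore.dataSource", "schedulerDS");
            props.setProperty("org.quartz.dataSource.schedulerDS.driver", "com.mysql.cj.jdbc.Driver");
            props.setProperty("org.quartz.dataSource.schedulerDS.URL", "jdbc:mysql://db-host/scheduler");
            props.setProperty("org.quartz.dataSource.schedulerDS.user", "quartz");
            props.setProperty("org.quartz.dataSource.schedulerDS.password", "secret");

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start();
            return scheduler;
        }
    }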
Alternatively it is not difficult to use the database as a job scheduler, then have your tasks run on Tomcat (any location), and put the results into the database. If the results of the job (such as its status) are small, this would work fine.

Web Server for existing Java application

I have an existing Java application running on a linux box that monitors the state of relays and network information. This is a stand-alone device currently.
What I would like to develop further is a web server for the device, to remotely monitor, configure, and control it via an Ethernet connection. To achieve this, I need some way to interface between the web server, which could be configured as enabled/disabled, and the master device, which is always running.
Is there a good way to do this? I have looked at Apache with Tomcat and similar web servers, but I am not sure this is what I need. The key here is that the web server needs to be able to access the existing Java application without interfering with its always-running services.
You can either develop a webapp that uses your Java application's API and deploy it inside a web container, or you can do the reverse and embed a web server inside your application (see the Jetty documentation on embedding), as in the sketch below.
If you want to keep the webapp and the original application in two separate JVMs, you'll need some way to communicate between both, like sockets, RMI, or even files, but it will be more complex.
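A minimal sketch of the embedded option, assuming the Jetty 9.x API; the relayState() stub is a hypothetical stand-in for the existing application's monitoring code, which would be called directly since everything runs in the same JVM.

    import java.io.IOException;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    // Minimal sketch of embedding Jetty (9.x API assumed) inside the existing
    // application so the monitoring UI runs in the same JVM as the relay service.
    public class EmbeddedStatusServer {

        // Placeholder for the existing application's API (hypothetical).
        static String relayState() {
            return "all relays closed";
        }

        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);   // only started if the web server is enabled
            server.setHandler(new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request, HttpServletResponse response)
                        throws IOException {
                    response.setContentType("text/plain");
                    response.getWriter().println("relay state: " + relayState());
                    baseRequest.setHandled(true);
                }
            });
            server.start();   // runs in its own threads, so the device's services keep running
            server.join();
        }
    }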
You might want to take a look at JMX http://docs.oracle.com/javase/tutorial/jmx/overview/index.html
