I am using Spring Batch local partitioning to process my job. In local partitioning, multiple slaves are created in the same instance, i.e. the job runs them all in one JVM. How is remote partitioning different from local partitioning? What I am assuming is that in remote partitioning each slave is executed on a different machine. Is my understanding correct? If it is, how do I start the slaves on different machines without using CloudFoundry? I have seen Michael Minella's talk on remote partitioning (https://www.youtube.com/watch?v=CYTj5YT7CZU). I am curious to know how remote partitioning works without CloudFoundry. How can I start slaves on different machines?
While that video uses CloudFoundry, the premise of how it works applies off CloudFoundry as well. In that video I launch multiple JVM processes (web apps, in that case). Some are configured as slaves, so they listen for work. The other is configured as the master, and it's the one I use to do the actual launching of the job.
Off CloudFoundry, this would be no different from deploying WAR files onto Tomcat instances on multiple servers. You could also use Spring Boot to package executable jar files that run your Spring applications with an embedded web container. In fact, the code for that video (available on GitHub here: https://github.com/mminella/Spring-Batch-Talk-2.0) can be used the same way it was on CF. The only change you'd need to make is to not use the CF-specific connection factories and instead use traditional configuration for your services.
In the end, the deployment model is the same on or off CloudFoundry: you launch multiple JVM processes on multiple machines (connected by the middleware of your choice) and Spring Batch handles the rest.
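To make that concrete, here is a minimal master-side sketch using Spring Batch Integration's MessageChannelPartitionHandler. The bean wiring, step names, and grid size are illustrative assumptions, not the exact code from the talk:

    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.core.partition.PartitionHandler;
    import org.springframework.batch.core.partition.support.Partitioner;
    import org.springframework.batch.integration.partition.MessageChannelPartitionHandler;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.core.MessagingTemplate;

    // Master-side configuration sketch for remote partitioning. Assumes a
    // MessagingTemplate wired to a request channel backed by your middleware
    // (e.g. RabbitMQ); reply-channel wiring is omitted for brevity.
    @Configuration
    public class MasterConfiguration {

        @Bean
        public Step masterStep(StepBuilderFactory steps,
                               Partitioner partitioner,
                               PartitionHandler partitionHandler) {
            return steps.get("masterStep")
                    .partitioner("slaveStep", partitioner) // logical name of the remote step
                    .partitionHandler(partitionHandler)    // ships StepExecutionRequests to the slaves
                    .build();
        }

        @Bean
        public PartitionHandler partitionHandler(MessagingTemplate messagingTemplate) {
            MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
            handler.setMessagingOperations(messagingTemplate); // sends requests over the middleware
            handler.setStepName("slaveStep");                  // step the slaves execute
            handler.setGridSize(4);                            // number of partitions to create
            return handler;
        }
    }

On the slave side, a StepExecutionRequestHandler listens on the same channel and executes the named step, so starting slaves on different machines is simply a matter of launching those JVMs wherever you want them.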
Is there any way to deploy two containers in just one app on Heroku?
I have this question because I need to deploy two Java applications to Heroku, where I have a database configured in one app. These two applications, an API and a database-update process, need to access the same database.
If there is no way to run two containers in one Heroku app, how would you solve this case: a remote database, a process that updates that database based on a cron job and shell script, and an API that accesses the updated database?
Would it be an option to have a single image with both applications and jobs?
A Web Dyno can expose only one port, so it is not possible to deploy two applications together if both require HTTP connectivity.
Option: Web and Worker Dynos
The API processes the incoming HTTP traffic while the backend (DB) app runs in the background as a worker; they communicate via a queue, for example RabbitMQ, or you can use Redis (same concept: one app produces, the other consumes).
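As a rough sketch of that split with Spring AMQP (the queue name, payload type, and class names are assumptions): the web dyno publishes tasks and the worker dyno consumes them.

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Component;

    // In the web (API) app: publish work to a queue instead of doing it in-process.
    @Component
    class TaskPublisher {
        private final RabbitTemplate rabbitTemplate;

        TaskPublisher(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        void submit(String taskPayload) {
            rabbitTemplate.convertAndSend("db-update-tasks", taskPayload);
        }
    }

    // In the worker app: consume the task and perform the database update.
    @Component
    class TaskConsumer {
        @RabbitListener(queues = "db-update-tasks")
        void handle(String taskPayload) {
            // run the DB update for this task
        }
    }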
Option: 2 Web Dynos
Deploy the 2 apps independently on 2 different Web Dynos, then have them communicate over HTTPS using a secure token.
Option: Both apps in one Docker image
Although it is technically possible, you won't find much help (or approval) for this approach, as it violates Docker principles.
If you really want to give it a try, I think you can start both apps (on different ports) and expose only the API for incoming traffic.
I have coded a Spring MVC/Hibernate application with RabbitMQ as a messaging server and a MySQL DB. I have also used Hazelcast, an in-memory distributed cache, to centralize the state of the application, moving the local Tomcat session to a centralized session and implementing distributed locks.
The app is currently hosted on a single Tomcat server on my local system.
I want to test my application in a multi-JVM environment, i.e. with the app running on multiple Tomcat servers.
What would be the best approach to test the app?
A few things come to my mind:
A. Install and configure a load balancer and set up a Tomcat cluster on my local system. This, I believe, is a tedious task that requires much effort.
B. Host the application on a PaaS like OpenShift or Cloud Foundry, but I am not sure whether I would be able to test my application on several nodes there.
C. Any other way to simulate a clustered environment on my local Windows system?
I would suggest you first understand your application's requirements: for the real production/live environment, are you going to use Infrastructure as a Service (IaaS) or a PaaS?
If IaaS, then:
I would suggest creating a local cluster environment and using the Tomcat/Spring sticky-session concept. Persist the session in a Hazelcast or Redis server installed on a different node, and configure a load balancer in front of the multiple nodes running Tomcat. Two or three VMs would be suitable for testing purposes.
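For the session-persistence part, a minimal sketch using Spring Session's Hazelcast support (the member addresses are placeholders, and spring-session-hazelcast is assumed to be on the classpath):

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

    // Stores HTTP sessions in a Hazelcast cluster so any Tomcat node can serve any user.
    @Configuration
    @EnableHazelcastHttpSession
    public class SessionConfig {

        @Bean
        public HazelcastInstance hazelcastInstance() {
            Config config = new Config();
            // Placeholder member addresses: point each node at the others.
            config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
            config.getNetworkConfig().getJoin().getTcpIpConfig()
                  .setEnabled(true)
                  .addMember("192.168.1.10")
                  .addMember("192.168.1.11");
            return Hazelcast.newHazelcastInstance(config);
        }
    }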
If the requirement is PaaS, then:
Don't bother with a local environment. Test directly on an OpenShift or AWS free account and, trust me, you will be able to test on the PaaS if the setup is correct.
In the past, when a new webapp or set of services was to be deployed, it was common practice to be given a new VM with Tomcat installed to deploy to. In my current position the client is only giving me one Linux instance to deploy several webapps to (small internal usage, zero scaling, deploying to a single AWS EC2 Linux machine).
The applications are required to be given unique domains, i.e. app1 and app2 could be mapped to smallapp1.com:8080/app1/login and smallerapp2.com:8080/app2/login (the ports are for example only and not a requirement).
I currently have two installations of Tomcat 8 running on the instance; each application is deployed to its own Tomcat install and runs on a different port (one on 8080 and the other on 8081).
If I wanted to deploy a handful of other small applications, would I be better off using individual Tomcat installations, or should I be using virtual hosting?
I am new to deployment. In the past I was handed a deployment destination and procedure; in the new position I was simply given credentials to a single instance. I am not sure which is better practice, or in which situation one is better than the other. If it matters, each application will only ever be used by a maximum of 20 users at the same time.
TL;DR: multiple installations of Tomcat on the same instance, or the same Tomcat installation hosting multiple applications?
Virtual hosts are the better option: you are not bloating the server with several installations that could conflict with each other, and you don't have to take up one port for each instance of Tomcat.
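For example, a sketch of name-based virtual hosts in a single Tomcat's conf/server.xml (the host names and appBase directories are placeholders):

    <Engine name="Catalina" defaultHost="smallapp1.com">
      <!-- Each Host serves one app from its own appBase; both share the same port -->
      <Host name="smallapp1.com" appBase="webapps-app1" unpackWARs="true" autoDeploy="true"/>
      <Host name="smallerapp2.com" appBase="webapps-app2" unpackWARs="true" autoDeploy="true"/>
    </Engine>

With DNS pointing both domains at the instance, Tomcat routes requests by the Host header, so one installation and one port are enough.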
Keep in mind that Tomcat is best suited to Java web applications; if you are not running servlets or JSPs you are better off using the Apache HTTP Server.
I am trying to figure out an easy way to manage many Spring Boot applications on my production server. Right now I have many fat jars running in different folders, where each one has its own script to start/stop the application, and there is an external folder for the configuration (logback, properties, XML). For the record, those configurations are loaded by passing -Dloader.path on the Spring Boot command line.
So how can I avoid conflicts over the same HTTP/HTTPS port already in use in production? Does any kind of application manager exist that system administrators could use to control this? One solution I found was to virtualize the Spring Boot applications with Docker, but my environment is Unix Solaris.
Is there any Java solution for this scenario?
You can have a look at Spring Cloud, which will give you better control and management when running multiple Boot applications. Not all components of Spring Cloud may be useful to you, but a few of them will help with port resolution, service routing, and property maintenance. Along with the above you can also try Spring Boot Admin (SBA), and Nginx for UI load balancing and reverse proxying.
I am developing a spring boot application.
Spring Boot creates a .jar file for the application.
I want to cluster this particular application across different servers. Let's say I build a jar file and run the project; it should then run in cluster mode across a number of defined servers and be able to serve end users' needs.
My jar will reside on only one server, but it will be clustered across a number of servers. When an end user calls a web service from my Spring Boot app, he never knows where it is being served from.
The reason behind clustering is that if any of the servers goes down in the future, the end user will still be able to access the web services from another server. But I don't know how to make it clustered.
Can anyone please give me some insight on this?
If you want to have it clustered, you just run your Spring Boot application on multiple servers (of course, the JAR must be present on those servers, otherwise you can't run it). You would then place a load balancer in front of the application servers to distribute the load.
If all the services you are going to expose are stateless, you only need a load balancer in front of your nodes, e.g. Apache or Nginx. If your services are stateful (they store any state: sessions, or data in a DB), then you have to use a distributed cache or an in-memory data grid:
For sessions you can use the Spring Session project, which can use Redis to store sessions.
For data stored in the DB, you need to cluster the DB itself, and you can use a distributed cache above your DB layer, such as Hazelcast.
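For the session part, a minimal sketch using Spring Session's Redis support (the host and port are placeholders, and spring-session-data-redis is assumed to be on the classpath):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    // Keeps HTTP sessions in a shared Redis instance so any node can serve any request.
    @Configuration
    @EnableRedisHttpSession
    public class HttpSessionConfig {

        @Bean
        public LettuceConnectionFactory connectionFactory() {
            // Placeholder address of the shared Redis server
            return new LettuceConnectionFactory("redis-host", 6379);
        }
    }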
Look into Spring Cloud; they have combined some Netflix open source software with Amazon's to create 12-factor apps for microservices.
Ideally you would need a load balancer and a service registry to help you run multiple instances of Spring Boot. I believe you have to add a dependency called Eureka.
Check the link below:
Spring Cloud
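As a sketch of that setup (the class name is an assumption; it requires the spring-cloud-starter-netflix-eureka-client dependency and a running Eureka server):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

    // Registers this instance with Eureka so load balancers and other services can discover it.
    @SpringBootApplication
    @EnableEurekaClient
    public class ServiceApplication {
        public static void main(String[] args) {
            SpringApplication.run(ServiceApplication.class, args);
        }
    }

Each instance registers itself with the registry, and callers can then spread requests across the registered instances.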
You can deploy it to Cloud Foundry and use the autoscale function to increase the number of application instances.