Docker is a very hot technology right now, but I wonder whether it is really necessary for Java. My reasons are listed below; any feedback is welcome:
The JVM is already like a VM, separating the application from the OS.
Maven manages dependencies.
Spring Boot embeds a servlet container such as Tomcat, so every application can be shipped as a standard JAR file.
So, apart from having to install the JDK yourself, everything else is consistent and organized. I compare Java and Docker like this:
JAR -> Image
JVM -> VM
So, does Docker really add anything for Java?
Docker is not necessary for Java. You can run a JVM on an operating system without worrying about Docker.
Docker is similar to a JVM in that they are both a level of virtualization, but it is probably not helpful to think of the virtualization provided by Docker as the same as the JVM's. Docker is really a convenience tool for packaging multiple applications/services together into a container that is portable. It allows you to build the same virtual environment on multiple, possibly different, machines, or to destroy a virtual environment and restart it.
If you run a JAR file with different versions of the JVM, you may get different results. For example, if the JAR was compiled for Java 8 but you run it on a Java 7 VM, you will get an UnsupportedClassVersionError. The point of Docker is that you control the versions, so this does not happen.
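As a concrete illustration (Hello is a hypothetical class, compiled with JDK 8, i.e. class file major version 52):

    javac Hello.java    # compiled with a JDK 8 compiler
    java Hello          # run on a Java 7 JRE
    Exception in thread "main" java.lang.UnsupportedClassVersionError:
        Hello : Unsupported major.minor version 52.0

Pinning the JVM version inside a Docker image rules out this class of failure, because the runtime ships together with the application.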
It depends. How many Java versions does your code have to support? How do you deploy your code? Do you use any databases or services outside of Java?
Docker mostly simplifies the development process when one developer's machine differs from another's. Java takes care of part of that through the JVM, but think of the case where you use functions from a newer Java version while your peer has an older Java version on their machine. God forbid it's your server.
The same applies when you launch other services alongside your Java app: databases and other services are versioned too. Maven manages Java's own libraries, but it has no control over the other services you depend on.
In short, no, Docker is not necessary for Java. It's not necessary for any language, actually. It simply creates consistency across developer machines, production, and other services, hence its popularity.
You do not have to use Docker with Java, but not for the reasons you gave.
Docker is not similar to a JVM! Docker is about having an isolated, easily deployable environment that is configurable from outside and that can be started, stopped, and cloned easily.
Docker is similar to other virtualization technologies such as VirtualBox or VMware, but definitely not the JVM.
You can use Docker to select your OS, firewall settings, JVM version, and more. You can use Docker to deploy 1 version or 1000 versions of your software. You can use Docker to hand a fully working environment to a customer or colleague.
JVMs do not do that.
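For example, with a hypothetical image named myapp (all names and ports here are illustrative), the same environment can be configured from outside, stopped, restarted, and cloned:

    docker run -d --name myapp-prod -p 8080:8080 -e JAVA_OPTS="-Xmx512m" myapp
    docker stop myapp-prod                               # stop the environment
    docker start myapp-prod                              # bring it back as configured
    docker run -d --name myapp-test -p 8081:8080 myapp   # a second, cloned instance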
Oh and there are security implications too.
Related
I am using Elasticsearch, which uses Java 8. I also want to install Kafka on the same machine, but Kafka uses Java 11. Both services need to run in parallel. Can anyone tell me how I can run both Java versions at the same time?
Manually download and unpack Java
https://adoptium.net/releases.html?variant=openjdk11
https://www.azul.com/downloads/?version=java-17-lts&os=windows&architecture=x86-64-bit&package=jre
Instead of simply starting Java with the plain

    java -args

command line, you can start it via

    /install/path/to/java/bin/java -args

or, on Windows:

    C:\install\location\bin\java.exe -args
You might want to make some start scripts / batch files for that, depending on the exact requirements of your system, Elasticsearch, Kafka, and possibly other software.
That's it.
One little addition:
If you can NOT call java directly, or the software starts more Java apps via the 'default' java, you can also use scripts to manipulate your system's PATH variable before starting the app. Then you (and your apps) can simply call java -args again.
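For example, a start script for Kafka that pins JDK 11 (all paths here are illustrative; kafka-server-start.bat ships with the Kafka distribution):

    rem start-kafka.cmd -- adjust paths to where you unpacked JDK 11 and Kafka
    set "JAVA_HOME=C:\java\jdk-11"
    set "PATH=%JAVA_HOME%\bin;%PATH%"
    call C:\kafka\bin\windows\kafka-server-start.bat C:\kafka\config\server.properties

A similar script pointing JAVA_HOME at your JDK 8 install can start Elasticsearch.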
Once you download the different JRE (Java Runtime Environment) versions to your machine, and if you use the Eclipse IDE, you can check out different projects in a single workspace and set each project's Java Build Path to the JRE version you want.
This way, you can run multiple applications that use different versions of Java.
I think other Java IDEs also have this kind of support.
In Docker theory, the "Docker Engine" sits between the application layer and the platform, making the application layer platform independent.
This seems very similar to the JVM concept, which makes Java a platform-independent language.
Question:
Why, then, does Docker have two types of engines (a Linux engine and a Windows engine)?
My understanding:
This seems to violate the fundamental concept of being "platform independent".
Can you help me clear up my understanding of this?
The platform independence is for the container wrapping the application, not for the engine itself.
The whole idea of Docker is to wrap the application with its dependencies so that it can be deployed on any machine where Docker is installed.
Docker started initially with Linux-only distributions. It was then extended to let users run containers on Windows/Mac. This was achieved by deploying a mini Linux VM in the background when installing Docker on Windows/Mac; the Docker engine then runs inside this Linux VM, and all containers run there as well.
The reason is that containers need support at the level of the OS kernel, and initially only Linux had this support. Then big companies started realizing the advantages of, and the huge community interest in, Docker, so Microsoft made the necessary OS changes to have a Docker engine running natively on Windows 10.
In short, the platform independence is from the perspective of the application container. A Docker container built for Linux also runs on a Windows host (via the Linux VM) without any changes. This is very similar to the JVM: the JVM itself is specific to an OS, yet the same Java application can run anywhere a JVM is installed.
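You can see which engine is serving your containers from the client side; a quick check with the standard docker CLI:

    docker version --format "{{.Server.Os}}"
    # prints "linux" or "windows", depending on which engine is running your containers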
I am about to deploy a set of Java-based microservices.
I am confused as to whether I should:
Run them as plain JARs via java -jar [JAR_NAME].
Run them in a Java-based Docker container.
Run them as WARs.
Please give me the pros and cons of each approach, as using the best one will save me a lot of headache :)
Thanks in advance.
Definitely Docker. Containerization gives you maximum flexibility.
In your first approach, your JAR depends on a locally installed Java. Whenever you create a new VM, you need to install a fixed set of software to support your application.
The benefits of the second approach:
First, everything lives in a single container.
You can install all the required software in the container, and that container can be used on any VM. You have the flexibility to choose the Java version for each microservice. Install only Docker, and everything just works.
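A minimal Dockerfile sketch, assuming a Spring Boot style fat JAR (the base image tag and jar path are illustrative):

    FROM eclipse-temurin:17-jre
    COPY target/my-service.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]

Build and run it with docker build -t my-service . and docker run -p 8080:8080 my-service; the JVM version is now baked into the image.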
Second, dev/prod parity.
If you care about microservice architecture and twelve-factor apps, Docker helps you satisfy many of the factors.
Your Java version and other software are identical across all of your environments. That means you will never be surprised by something working in QA but not in Prod because of a version mismatch in the runtime environment.
Third, flexibility.
If you go with a microservice architecture, why only Java? You can also use Go, Python, or other languages. Rather than installing a runtime environment for every platform on every VM, it is much easier to have each microservice in a container.
Last, ease of deployment.
You can use docker-compose or Docker Swarm to run hundreds of microservices with a single command.
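For example, a minimal docker-compose.yml (service and image names are illustrative):

    version: "3.8"
    services:
      orders:
        image: my-orders-service:1.0
        ports:
          - "8081:8080"
      payments:
        image: my-payments-service:1.0
        ports:
          - "8082:8080"

A single docker-compose up -d then starts the whole set.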
I am working on an enterprise product, and there are primarily three pieces to it: a Swing-based client, a DB, and a server (for now we can ignore the DB part). Being an enterprise product, the client and server come with their own installers (it is not like configuring Apache or JBoss and deploying WARs onto it).
We have CI configured to generate nightly OS-specific builds of the client and server, which can then be installed.
So we have to test these builds regularly on specific OSes, which requires a lot of manual work installing and creating systems with client version X on OS Y, or server version X on OS Y. This is becoming very tedious, since we are all on Windows, and clicking next -> next -> next really sucks (I have created a script which installs our product via shell, but there are still steps which I believe can be automated, I just don't know how). We also need isolation.
Now I am wondering how we can automate the process of creating these test machines. I have just started exploring Vagrant/Docker to see whether they can help me (I still don't understand Puppet/Chef), and I am confused about which strategy to adopt:
Create a VM via Vagrant and run my installation script on that box (this will require one VM per client or per server).
Create a VM via Vagrant and run my client Docker containers on it (this, I guess, will require one VM for multiple clients or servers, since they would be in containers).
Note: I have to create a VM either way, via Vagrant or via boot2docker, since we are on Windows.
So my questions are:
Are these two strategies valid, and if so, which of the two should I adopt?
Is there a different strategy I am missing, or am I approaching this the right way?
If strategy #2 is adopted, how can I create Docker images in which my client is installed?
how can I create container/docker images in which my client is installed
You must put in a Dockerfile everything you do to get your client installed, started, and configured.
To do so, you can either create a container, do all the setup by hand, and then docker commit the result, or, the better way, put all the required commands in a Dockerfile, so that when you make a slight modification you can easily build a new version with a basic docker build -t myclient_version_n .
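A sketch of what such a Dockerfile could look like, assuming the client ships a Linux installer with a silent mode (the base image, installer name, silent flag, and install path are all assumptions):

    FROM ubuntu:20.04
    COPY client-installer.sh /tmp/client-installer.sh
    RUN chmod +x /tmp/client-installer.sh \
     && /tmp/client-installer.sh --silent \
     && rm /tmp/client-installer.sh
    CMD ["/opt/myclient/bin/client"]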
Check the docs
https://docs.docker.com/examples/mongodb/#creating-a-dockerfile-for-mongodb
and how to automate builds
http://docs.docker.com/docker-hub/builds/#automated-builds
how to create a Dockerfile
https://docs.docker.com/examples/nodejs_web_app/#creating-a-dockerfile
and have a look at the existing Dockerfiles of containerized applications on the Docker Hub
https://registry.hub.docker.com/
An alternative to Vagrant would be to use Docker Machine. You could also leverage the cloud providers, as @m1keil mentioned. Machine can provision Docker hosts on a number of providers, and they come ready to go.
Disclosure: I work at Docker and am the maintainer of Machine :)
Your strategies seem valid to me. Adding containers (Docker) to your process might help you speed up and parallelize the testing (if it is fully automated), since a container's initialization time and general resource consumption are lower. However, no one can give you a definitive answer without inspecting your testing process first, and since you haven't provided any details about it, it is hard to say whether you should use the first or the second strategy.
You can take advantage of the cloud and use services such as AWS, Azure, GCE, etc. to initialize machines and run your tests. You can use Vagrant to do this, or skip Vagrant and write your own simple scripts against the appropriate APIs of your chosen cloud provider.
Also, you can take a look at services such as Travis CI, Circle CI, and others, which might help you create an automated testing pipeline without having to spend too much time on the plumbing.
I really like Docker's ease of use via the Dockerfile. The Dockerfile lets you easily update and control the software in the Docker image, and you can then provision it in your CI/testing environment. Docker now has native Windows support, so this shouldn't prevent you from using it: https://docs.docker.com/docker-for-windows/ Furthermore, I like that you can set up very lightweight, minimal machines with only the build and runtime dependencies your project needs, and store them for free on hub.docker.com. Depending on how long certain dependencies take to build and install, this can speed up your testing: you can just download a Docker image with everything already installed and built, and then build and test only your actual project.
I use this for https://github.com/sourceryinstitute/opencoarrays, which is GCC's official implementation of Coarray Fortran. I have a little project, https://github.com/zbeekman/nightly-docker-rebuild, that lets you set up nightly Docker image builds on hub.docker.com in under two minutes. I use it to trigger builds of https://github.com/zbeekman/nightly-gcc-trunk-docker-image because I can't rebuild GCC from source on Travis-CI.org without the build timing out. This way, I delegate the GCC nightly build to hub.docker.com and then just docker pull zbeekman/nightly-gcc-trunk-docker-image into a Travis CI instance to test OpenCoarrays against the latest GCC trunk.
I have been tasked with setting up a Java-based development environment across multiple Windows machines. The problem is that I want the process to be automatic and easy on each machine, so the developers don't have to waste time downloading and installing all the different applications. Ideally, I would like to have the following:
Automated and unattended initial installs
Some sort of a monitor on those installations that would make sure the settings remain constant between all machines
A possibility to push new settings/programs/upgrades when required.
I've looked into several tools for the job. Currently the most promising one seems to be Puppet. However, Puppet doesn't work as well on Windows...
Using a VM image would solve the first requirement, but it is out of the question since the hardware differs across the machines and upgrades won't come easily.
Has anyone had any experience with this sort of task? How would you solve it?
I've been playing with Vagrant for a couple of weeks and find it a fantastic tool for this. It uses Puppet, Chef, or a custom "provisioner" on top of VirtualBox, and is controlled by a simple command. They have a great tutorial/tour that will show you what it's capable of.
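The basic workflow is only a handful of commands (the box name here is illustrative):

    vagrant init ubuntu/focal64   # writes a Vagrantfile
    vagrant up                    # boots the VM and runs the configured provisioner
    vagrant ssh                   # log into the resulting environment
    vagrant destroy               # tear it down when you are done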
As an example, the direction I've been heading this week is writing Puppet scripts/modules to set up my production server, with all the dynamic parts handled by parameterized classes. So my development environment will have the same OS, the same firewall settings, the same daemons, etc., all without affecting my host OS or doing any manual configuration steps.
That being said, I've not used it on a day-to-day basis so I don't know if there are any blocking issues, but I have used manually managed VirtualBox for the same purpose without trouble, so I don't foresee any problems.
The more interesting functionality is pulling information from the developers' machines. The development environment changes, and different developers try out new things/programs/settings at a rate that is difficult to keep up with unless it is automated. Having only one configuration (the centralized model) kills your ability to respond to change. It is important to understand the differences between configurations, though.
One interesting option might be to standardize on the Eclipse IDE plus a set of plugins (SCM, testing, J2EE development etc.) and use the Eclipse update mechanism to deploy an identical configuration to every machine. Dependencies, synchronization and suchlike would be handled automatically by the Eclipse platform.
This might not work for you if you need specific tools that are not available in the Eclipse ecosystem, but my personal development environment is 100% Eclipse based, so it is certainly possible to make this work.
Java can easily be installed globally. For Windows, have your system administrator push out the MSI file embedded in the Java JRE installation executable. For Ubuntu, ssh in and have the sun-java6-jdk package installed.
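On Ubuntu that comes down to a single command (sun-java6-jdk is the package name from the era of this answer):

    sudo apt-get install sun-java6-jdk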
Then Eclipse is just a matter of pushing out an unzipped distribution to the users.
Most developers like to customize their setups, so I'm not sure this is going to be popular. You could go bleeding edge and look into providing them with instances in the cloud (once you've got one set up correctly, you can clone away!).
1) Use a Disk Image.
or
2) Put everything (the Eclipse executable etc.) in SVN (or some other source repository). Then they just have to install SVN and check out.