I wanted to know the easiest way to deploy a web server written in Java or Kotlin. With Node.js, I just keep all the server code on the remote machine and edit it using the SSH FS plugin for VS Code. For JVM-based servers, this doesn't appear as easy, since IntelliJ doesn't provide remote editing support. Is there a method for JVM-based servers which allows a quick iterative development cycle?
Do you have to keep your server code on the remote machine? How about developing and testing it locally, and deploying only when you want to test it on the actual deployment site?
I once tried to use SSH-FS with IntelliJ, and because of the way IntelliJ builds its cache, the performance was terrible. Indexing was still in progress after 15 minutes, so I gave up. And IntelliJ without its caching and smart hints would be little more than a regular editor.
In my professional environment, I also use Unison from time to time: https://www.cis.upenn.edu/~bcpierce/unison/. I have it configured to copy only code, not the generated sources. Most of the time it works pretty well, but it has its quirks, which can make you waste half a day debugging it.
To sum up, I see the following options:
Developing and testing locally, and avoiding frequent deployments to the remote machine.
VS Code with the SSH FS plugin, because why not, if it's good enough for you with Node.js?
A synchronization tool like Unison.
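For the Unison option, a profile can be kept quite small. A hypothetical sketch (all paths and names here are illustrative, not from the original setup) that syncs only sources and ignores build output:

```
# ~/.unison/code.prf - sync sources to the remote box, skip generated files
root = /home/me/projects/myserver
root = ssh://deploy@remote//home/deploy/myserver
path = src
path = build.gradle.kts
ignore = Path build
ignore = Name *.class
batch = true
```

Running `unison code` then propagates only source changes in both directions, which is the "copy only code, not the generated sources" configuration mentioned above.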
Related answers regarding SSHFS from IntelliJ Support (several years old, but, I believe, still hold true):
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206592225-Indexing-on-a-project-hosted-via-SSHFS-makes-pycharm-unusable-disable-indexing-
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206599275-Working-directly-on-remote-project-via-ssh-
A professional deployment won't keep source code on the remote server, for several reasons:
It's less secure. If you can change your running application by editing source code and recompiling (or even if edits are deployed automatically), it's that much easier for an attacker to do the same.
It's less stable. What happens to users who try to access your application while you are editing source files or recompiling? At best, they get an error page; at worst, they could get a garbage response, or even a leak of customer data.
It's less testable. If you edit your source code and deploy immediately, how do you test to ensure that your application works? Throwing untested buggy code directly at your users is highly unprofessional.
It's less scalable. If you can keep your source code on the server, then by definition you only have one server. (Or, slightly better, a small number of servers that share a common filesystem.) But that's not very scalable: you're clearly hosted in only one geographic location and thus vulnerable to all kinds of single points of failure. A professional web-scale deployment will need to be geographically distributed and redundant at every level of the application.
If you want a "quick iterative development cycle" then the best way to do that is with a local development environment, which may involve a local VM (managed with something like Vagrant) or a local container (managed with something like Docker). VMs and containers both provide mechanisms to map a local directory containing your source code into the running application server.
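As a hedged sketch of that last point (the image, port, and paths are illustrative), a Compose file can bind-mount your local build output into the container, so a local rebuild shows up in the running server without rebuilding the image:

```yaml
# docker-compose.yml - local dev setup; image and paths are assumptions
services:
  app:
    image: tomcat:10-jdk11
    ports:
      - "8080:8080"
    volumes:
      # bind-mount the exploded webapp so a local rebuild is picked up
      - ./target/myapp:/usr/local/tomcat/webapps/myapp
```

With `docker compose up` running, the cycle becomes: edit locally in IntelliJ, rebuild, refresh — no remote filesystem involved.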
I am new to Docker and have been reading a lot about it. But when I look at it from a Java application perspective, I am not sure what value it adds in terms of 'packaging dependencies', which is one of the important factors called out in its documentation.
Java is already a language which can run on multiple OSes using the JVM layer of abstraction. Build once, run anywhere is not a new concept. Docker containers do allow me to ship my JRE version along with my application code, so I see that benefit, but is there any other benefit I get, especially when my host environments aren't going to change? I.e., I will be using Linux boxes for deployments.
A fat jar file is about as good as packaging can get for bundling all the dependencies using a Maven build. I understand that containers really help with deploying on platforms like Kubernetes, but if I have to strictly judge containers in terms of the packaging issue, isn't a jar package enough? I may still have to containerize it to benefit from running lightweight processes instead of running on VMs.
Does the JRE layer get reused in all other containers? That would be akin to installing the JRE on my VM boxes, where all apps on the box use the same JRE version - unless I need to run different JRE versions for my applications, which is highly unlikely.
If you have a working deployment system using established technologies, you should absolutely keep using it, even if there is something newer and shinier. The Java app server space is very mature and established, and if you have a pure-JVM application and a working setup, there's nothing wrong with staying on that setup even if Docker containers exist now.
As you note, a Docker image contains a JVM, an application server, library dependencies, and the application. If you have multiple images, they're layered correctly, and these details match exactly, then the layers can be shared; but there's also a very real possibility that one image has a slightly newer patch release of the JVM or the base Linux distribution than another. In general, I'd say the Docker ecosystem broadly assumes that applications using "only" tens of megabytes of disk or memory aren't significant overhead; this is a major difference from the classic Java ecosystem, where multiple applications would run on a single shared JVM inside a single process.
# This base image will be shared between derived images; _if_ the
# version/build matches exactly
FROM tomcat:10-jdk11
# These libraries will be shared between derived images; _if_ the
# _exact_ set of files match _exactly_, and Tomcat is also shared
COPY target/libs/*.jar /usr/local/tomcat/lib
# This jar file presumably won't be shared
COPY target/myapp.jar /usr/local/tomcat/webapps
I'd start looking into containers if you had a need to also incorporate non-JVM services into your overall system. If you have a Java component, and a Node component, and a Python component, and they all communicate over HTTP, then Docker will make them also all deploy the same way and it won't really matter which pieces are in which languages. Trying to stay within the JVM ecosystem (and maybe using JVM-based language implementations like JRuby or Jython, if JVM-native languages like Java/Kotlin/Groovy/Scala don't meet your needs) makes sense the way you've described your current setup.
The JVM/JRE doesn't get reused across containers. You may feel that running in an application server environment would be better; Docker does, in comparison to running several apps directly on a single Java SE runtime, have a higher overhead, and the advantage of running Docker just by itself is vanishingly small compared to that setup.
Some advantages could be:
Testing
Testing out your code on different JRE versions quickly
Automated testing. With a dockerfile, your CI/CD pipeline can check out your code, compile it, spin up a docker image, run the tests and spit out a junit formatted test report.
Having consistent environments (injected dependencies such as JKS keystores and config files, OS version, JRE, etc.)
Environment by configuration.
You don't have to spend time installing the OS, JRE, etc.; that becomes a configuration file in your source control system of choice.
This makes disaster recovery much easier
Migrations are simplified (partially)
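For instance, "testing your code on different JRE versions quickly" from the list above can be a short loop around Docker. A hedged sketch, assuming a Maven project and using illustrative image tags:

```shell
# Run the same test suite under several JDKs; tags and versions are examples
for jdk in 11 17 21; do
  echo "=== Testing on JDK $jdk ==="
  docker run --rm -v "$PWD":/app -w /app \
    "maven:3-eclipse-temurin-$jdk" mvn -q test
done
```

Doing the equivalent on bare VMs would mean installing and switching between three JDKs by hand.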
The advantages of running in an orchestrated environment (a PaaS, for instance plain Kubernetes, or OpenShift, or something like that) are, in addition to base Docker:
Possibility to do canary deployments
Routing, scaling and load balancing across the same or several machines to optimize usage per machine (there are sweet spots beyond which JRE performance lags for some operations)
A fat jar file is as good as a packaging can get to bundle all the dependencies using maven build
It's not as good as it can get.
Your app probably has a bunch of dependencies: Spring, Tomcat, whatever. In any enterprise application, the size of the final artifact will be made up of something like 99% dependencies, 1% your code. These dependencies probably change infrequently, only when you add some functionality or decide to bump up a version, while your code changes very frequently.
A fat JAR means that every time you deploy, every time you push to a repository host (e.g. Nexus), you are wasting time uploading or downloading something that's 99% identical to the last version. It's just a dumb zip file.
Docker is a smarter format. It has the concept of layers, so you can (and are encouraged to) use one layer for dependencies and another for your code. This means that if the dependency layer doesn't change, you don't have to deploy it again, or update it again in your deployments.
So you can have faster CI builds, that require less disk space in your repo host, and can be installed faster. You can also use the layers to more easily validate your assumptions that only business code has changed, say.
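A hedged sketch of that layering (the base image, paths, and main class are illustrative): copy the rarely-changing dependency jars in one layer and your frequently-changing application jar in a later one, so a typical code change rebuilds and re-pushes only the last, tiny layer:

```dockerfile
# Layer order matters: least-frequently-changing content comes first
FROM eclipse-temurin:17-jre
# Dependencies: this layer is re-uploaded only when a dependency changes
COPY target/libs/ /opt/app/libs/
# Application code: the only layer that changes on a normal build
COPY target/myapp.jar /opt/app/myapp.jar
CMD ["java", "-cp", "/opt/app/myapp.jar:/opt/app/libs/*", "com.example.Main"]
```

Because Docker caches layers by content, pushing a new build after a code-only change transfers roughly the size of your jar, not the size of Spring plus Tomcat.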
I am responsible for a number of java application servers, which host apps from different developers from different teams.
A common problem is that an app arrives that does not work as expected, and it turns out that the app was developed on some other platform, e.g. OpenJDK on Windows XP or 7, and then deployed to Linux running the Oracle JDK, or vice versa.
It would be nice to be able to enforce something up front, but this is practically not possible.
Hence, are there any techniques to detect problems upon deployment - I mean without the source code, by scanning the class files?
If that is not possible, what tool can I send to the developers so they can identify from their source code what incompatibilities they have relied upon?
It's not possible to detect all such problems in a totally automated way. For example, it is extremely difficult to detect hard-coded pathnames, which are probably the single biggest issue in cross-platform deployments.
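One mismatch that can be caught mechanically, though, is a class compiled for a newer JVM than the deployment target: the class-file header carries a major version number (52 = Java 8, 55 = Java 11, and so on). A minimal sketch using only the standard library - the class and method names here are mine, not from any existing tool:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassVersionCheck {

    // Returns the class-file major version (e.g. 52 = Java 8, 55 = Java 11),
    // or -1 if the stream does not start with the 0xCAFEBABE class-file magic.
    static int majorVersion(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        if (data.readInt() != 0xCAFEBABE) {
            return -1;
        }
        data.readUnsignedShort();        // minor version (skipped)
        return data.readUnsignedShort(); // major version
    }

    public static void main(String[] args) throws IOException {
        // Demonstration: inspect this class's own bytecode.
        try (InputStream in = ClassVersionCheck.class
                .getResourceAsStream("ClassVersionCheck.class")) {
            System.out.println("major version: " + majorVersion(in));
        }
    }
}
```

Walking every entry of a deployed WAR/JAR and flagging classes whose major version exceeds what the server JVM supports would catch the "compiled on a newer JDK" case before startup; hard-coded pathnames, as noted above, still need runtime tests.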
My suggestion would be to compensate for this with more automated testing at deployment time.
For example, in a similar environment I used to insist on deployment test suites: typically implemented as a secure web page that I could navigate to as the administrator which would run a suite of tests on the deployed application and display all the results.
If anything failed, it was an immediate rollback.
Some of the tests were the same as the tests used in development, but some were different ones (e.g. checking the expected configuration of the environment being deployed into)
Obviously, this means you have to bundle at least some of your test code into your production application, but I think it is well worth it. As an administrator, you are going to be much more confident pushing a release to production if you've just seen a big screen full of green ticks in your staging environment.
I'm using JSP + Struts2 + Tomcat 6 + Hibernate + MySQL as my J2EE development environment. The first phase of the project has finished and it's up and running on a single server. Due to the growing scale of the website, it's predicted that we're going to face performance issues in the future.
So we want to distribute the application across several servers. What are my options here?
Before optimizing anything you should find out where your bottleneck is (services, database, ...). If you do not do this, the optimization will be a waste of time and money.
The right optimization then depends on your use case.
For example, if you have a read-only application and the bottleneck is both the Java server and the database, then you can set up two database servers and two Java servers.
Hardware is very important too. Maybe the easiest way is to upgrade the hardware - but this will only help if the hardware is the bottleneck.
You can use any J2EE application server that supports clustering (e.g. WebLogic, WebSphere, JBoss, Tomcat). You are already using Tomcat, so you may want to use their clustering solution. Note that each offering provides different levels of clustering support, so you should do some research before picking a particular app server (make sure it is the right clustering solution for your needs).
Also porting code from a standalone to a cluster environment often requires a non-negligible amount of development work. Among many other things you'll need to make sure that your application doesn't rely on any local files on the file system (this is a bad J2EE practice anyway), that state (HTTP sessions or stateful EJB - if any) gets properly propagated to all nodes in your cluster, etc. As a general rule, the more stateless, the smoother the transition to a cluster environment.
As you are using Tomcat, I'd recommend taking a look at mod_cluster. But I suggest you consider a real application server, like JBoss AS. Also, make sure to run some performance tests and understand where the bottleneck of your application is. Throwing more application servers at the problem is ineffective if, for instance, the bottleneck is the database.
I'm looking for a way to boost my team's productivity, and one way to do that would be to shorten the time it takes to compile & unit test & package & deploy our Java EE application which is getting bigger and bigger.
The trivial solution that I know of is to set up a powerful computer with N processors (N ~= num of developers) and a blazingly fast disk system and a lot of memory, and run everything on this computer and connect to it via X remotely. It would certainly be much faster than compiling on our laptops, but still cheaper and easier to maintain than to buy each developer his/her own supercomputer.
Is there another way to solve this problem? For example, could we run our IDEs locally and then tell it to remote compile java source? Can Netbeans / Eclipse / IntelliJ / etc. do this? Or is there a special tool that enables remote java compilation, also that makes use of multiple processors? It need not be free/open source.
Unfortunately our laptops MUST run a (company managed) Windows Vista, so another reason to go for the separate server computer is to let us use linux on it and finally get rid of the annoying managed environment.
EDIT: to sum up the answers so far, one way to shorten build times is to leave compilation to the developers individually (because compiling is supposed to be fast), skip running unit tests, and hot-deploy (without packaging) to the container.
Then, when the developer decides to check his/her code in, a continuous integration server (such as Hudson) is triggered to clean & build & run tests & package & deploy.
SOLUTION: I've accepted Thorbjørn's answer since I think that's going to be the closest to which way I'm planning to proceed. Although out of curiosity I'm still interested in solving the original problem (=remote Java compiling)...
You essentially need two workflows.
The OFFICIAL build, which checks out the sources, builds the whole thing from scratch, runs all the unit tests, and then builds the bits which will eventually ship to the customer after testing.
Developer hot-deploying after each source code change into the container the IDE knows about.
These two can actually be vastly different!
For the official build, get Jenkins up and running and tell it to watch your source repository and build whenever there is a change (and tell those who break the build). If you can get the big computer for building, use it for this purpose.
For the developers, look into a suitable container with very good IDE deployment options, and set that up for usage for each and every developer. This will VERY rapidly pay off! JBoss was previously very good for exactly this purpose.
And, no, I don't know of an efficient remote Java compilation option, and I don't think this is what you should pursue for the developers.
See what Joel thinks about Build Servers: http://www.joelonsoftware.com/articles/fog0000000023.html
If you don't like Jenkins, plenty others exist.
(2016 edit: Hudson changed to Jenkins. See https://stackoverflow.com/a/4974032/53897 for the history behind the name change)
It's common to set up a build server, e.g. running Hudson, to do the compiling/packaging/unit-testing/deploying.
Though you'd likely still need the clients to at least perform a compile. Shifting to a build server, you might need to change the work process too, if you aren't using one now - e.g., if the goal is to take load off the client machines, your developers will check code in and automated unit tests get run then, instead of running unit tests first and then checking in.
You could mount each developer's directory over NFS on the powerful machine and then create an External Tool Configuration in Eclipse (GUI access) that triggers a build on the external server.
JavaRebel can increase productivity too. It eliminates the need for redeployments.
You can recompile a single file and see the changes being applied directly on the server.
When things start getting too big for efficient builds, it may be time to investigate breaking up your code into modules/JARs (how it breaks apart would depend on many project specifics and how your team tends to work). If you find a good setup, you can get away with less compiling (you don't always need to rebuild the whole project) and more/quicker copying/JARing to get to the point where you can test new code.
What your project needs is a build system to do the building, testing and packaging for you. Hudson is a good example of such a continuous integration build system.
I recently used a Java Web Start application. I launched it from my web browser using an embedded jnlp link in the page I was viewing. The application was downloaded, launched and worked just fine. It had access to my local file-system and remembered my preferences between restarting it.
What I want to know is why are Java Web Start applications not a more popular delivery format for complex applications on the web? Why do developers often spend considerable time & energy replicating desktop functionality in html/javascript when the power of a desktop application could be delivered more easily using Java & Java Web Start?
I know that in some corporate environments, e.g. banking, they are a relatively popular way of delivering complex trading applications to clients, but why are they not pervasive across the web as a whole?
(For the sake of discussion let's assume a world where: download sources are "trusted" & applications are "signed" (i.e. no security concerns), download speeds are fast (load time is quick) and developers know Java (in the numbers they know html/js/php)).
I think the reason is not security nor startup time of the app. Let's understand what's behind the scene before we find out the root cause.
Java Control Panel has settings that allow users to use the default browser's proxy settings or to override them. In other words, infrastructure teams are able to customize the Windows or OS installation images to have JVM pre-installed with enterprise proxy settings. So I believe this is not an issue at all.
Java Web Start actually caches all apps, with customizable settings in the Java Control Panel. Once the app is cached, the app is "installed" just like other apps. First-time execution may be slow, but the second time will be fast thanks to the JVM's smart memory allocation techniques. So start-up time could be an issue, but consider that a lot of web sites (even enterprise-internal ones) have now migrated to portals. A web portal normally contains lots of unused libraries, because the portal itself cannot anticipate what kinds of portlets are built and deployed on a specific page. Therefore, downloading a single portal page can consume megabytes and take more than 5 seconds to complete; that is only one page, and caching helps by up to 30%, but there are still lots of HTML/JavaScript/CSS components to download every time. With this in mind, I am sure Java Web Start has an advantage here.
Java Web Start does not download again as long as the app is cached and the server copy has NOT been upgraded. Therefore if, e.g., a project management application like MS Project were built using SmartClient (similar to JWS), the information exchange between the client and server would be purely data, with no presentation traffic like a browser's full page refresh. Even with the help of Ajax, full page downloads are not eliminated entirely. Besides, a lot of companies still consider Ajax immature and insecure, which is why Ajax is a hot topic among developers but not yet within enterprise software. With that in mind, JWS apps definitely have more advantages, such as being deployed and executed in sandboxes, signed, and offering a much more interactive GUI.
Other advantages include faster development (easier to debug code and performance), a more responsive user interface (no Comet servers required to provide PUSH functionality), and faster execution (since client computers render the GUI without translation through HTML/JavaScript/CSS, and there is less data processing).
After all this, I still haven't touched the question: why is JWS not more popular?
My opinion is the same as Brian Knoblauch's comment: it comes down to a lack of awareness.
IT folks are too attracted by the hype of web technologies; Ajax, PUSH, GWT, and all those buzzwords make them biased towards the fun of using different technologies or solving technical challenges, instead of what really works for the clients.
Take a look at Citrix. I think Citrix is actually a great idea. Citrix allows you to build your own app farms behind the scenes. There are tons of upgrade and implementation strategies you can go for without impacting the client experience. Citrix deployment is extremely easy, stable and secure, and enterprises are still using it. However, I think JWS is even better than Citrix: the idea of JWS is to run apps on client machines, which are perfectly capable of running them, instead of hosting tons of server farms. This saves a company a lot of money! With JWS, the development team can still keep business logic and data on the server side; by dropping the server-side rendering tier and letting the client computers do the rendering, it greatly reduces network consumption and server processing power.
Another example of why JWS is an amazing idea is BlackBerry MDS. BlackBerry apps are actually Java apps translated from JavaScript. With BB's MDS Studio, you use the GUI tool to build the BB app's GUI, coding the GUI logic in JavaScript. The apps are then translated and deployed on a BES server, and the BES server distributes them to the BlackBerries. Each BB runs a thin Java app with GUI rendering and networking capability only; whenever the app requires data, it communicates with the BES through web services to consume services from other servers. Isn't this just a BB version of JWS? It's been extremely successful.
Finally, I think JWS is not popular because of how Sun advertised it. BB never advertises how good their BB Java apps are; they believe clients won't even care what it is. BB advertises the benefits of using MDS to develop apps: speed, cost savings, business returns.
Just my, a bit long, 2 cents... :)
A major roadblock for Java Webstart is probably that you still need to have a JVM installed before it can even attempt to download and start your application. Everyone has a browser. Not everyone has a JVM.
Edit:
I've since acquired some hands-on webstart experience and can now add these two points:
The Deployment Toolkit script and the modularized JVM released around Java 1.6u10 make the JVM requirement less problematic, since it can automatically download a JVM and the API core and start the program while downloading the rest.
Web Start is seriously buggy. Even among the Java 1.6 releases there was one which downloaded the entire app every time, and another which downloaded it and then failed with an obscure error message. All in all, I cannot really recommend relying on such a fragile system.
I think it's mostly due to a lack of awareness. It works very well. Quite seamless. The app only downloads the first time, when there's been an upgrade, or if the end-user has cleared the cache. A great way to deploy full-blown desktop apps that the user won't have to worry about manually upgrading!
The problem with Web Start is that you actually have to 'start' something, which isn't all that fast even with a fast connection, while with a webapp you enter the URL and the app is there.
Also, a lot of things can go wrong with Web Start. Maybe the intended user doesn't have the needed privileges, or the Web Start proxy is configured wrong, or something went wrong with JRE dependencies, or there is simply no Java installed in the first place. So for the average John Doe on the internet it is not at all pleasant.
In controlled environments like a company it is a good and easy solution in many cases.
I've worked on a JWS-deployed application for a few years over a user base of a few thousands and its automatic upgrades are actually a huge pain.
On every update for some reason dozens of users get "stuck in the middle". All you get is the "class not found" exception (if you're lucky), or uninformative "unable to launch" from JWS before it even gets to your code. Looks like the update is half-downloaded. Or, in other words, it does not download and apply the update atomically AND has poor caching so that relaunching the app from the same URL does not fix anything.
There's no way to resolve it other than clearing JWS cache or providing a different URL (e.g. append ?dummyparam=jwssucks at the end). Even I as a developer hit it sometimes and don't see a way around.
When it works, it works. But too often it doesn't, and then it's a huge pain for you and your helpdesk. I would not recommend it for enterprise or mission-critical use.
There is a very big issue, namely that it doesn't allow for "start the program instantly and THEN check for and download any updates in the background" deployments, which is the de facto behaviour applications are converging towards.
Personally, I consider this so big an annoyance that we are actively looking for another technology which provides that.
From these posts it looks like, when using Web Start, it is important to take good care of the server. The "huge pain" of downloading the application on every startup may be caused by an incorrect timestamp delivered by the server; here not the application but the server must be configured to use caching properly, not just to disable it. About the buggy starts I am not so sure, but it seems to me that these may also be caused by unreliable connections.
Important advantage of Web start is that it works nicely with OpenJDK under Linux. Clients of some happy developers use Windows only but my clients do not.
HTML and JavaScript, mentioned in the initial question, are lighter approaches that work fine for smaller tasks like animated buttons or even interactive tables. Java's niche seems to be much more complex tasks.
Java Web Start is kind of a successor of Java applets, and applets got burned around the new millennium.
But I still think Java applets are way better than GWT or JavaScript hell.