I have a third-party Java webapp that runs within Apache Tomcat. We use Docker to run this application. When run on my machine, we receive an error:
I want to stress that we do not build this .war. We do not own this .war. We are simply trying to spin up a docker container that runs this already built code.
javax.servlet.ServletException: Servlet execution threw an exception
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
with the details
java.lang.IncompatibleClassChangeError: Class javax.ws.rs.core.Response$Status does not implement the requested interface javax.ws.rs.core.Response$StatusType
Investigating this online a bit shows a lot of people having issues with conflicting Jersey jars. We should not have that problem because we are using Docker for Mac... and this works on everyone else's machine across multiple versions of Docker.
I have wiped out and reinstalled Docker multiple times, restarting my machine between each reinstall. I have used the same Docker version as the machines where it does work.
I am on OSX Sierra 10.12.2
The docker image is based on tomcat:8.5.9-jre8-alpine
It copies the relevant .war to $CATALINA_HOME/webapps/ and a single log4j .jar to $CATALINA_HOME/lib/ along with two .properties files.
Is there anything on this specific machine that could possibly be interfering with the container?
Dockerfile is as follows
FROM tomcat:8.5.9-jre8-alpine@sha256:780c0fc47cf5e212bfa79a379395444a36bbd67f0f3cd9d649b90a74a1a7be20
ENV JAVA_OPTS="-Xms2048m -Xmx2048m"
WORKDIR $CATALINA_HOME
COPY ./target/lib/* $CATALINA_HOME/lib/
COPY ./target/<appname>.war $CATALINA_HOME/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
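Since the usual culprit behind this IncompatibleClassChangeError is two different JAX-RS/Jersey implementations on the classpath, one way to rule that out is to check for duplicate libraries inside the built image. A minimal sketch (the version-stripping pattern is a rough assumption about jar naming conventions):

```shell
# find_dup_jars DIR: list library base names that appear more than once
# after stripping a trailing version suffix, so e.g. jersey-core-1.19.jar
# and jersey-core-2.25.1.jar both collapse to "jersey-core".
find_dup_jars() {
  ls "$1"/*.jar 2>/dev/null \
    | sed 's/.*\///; s/-[0-9][0-9.]*\.jar$//' \
    | sort | uniq -d
}
```

Run it against both `$CATALINA_HOME/lib` and the exploded webapp's `WEB-INF/lib` inside the container, e.g. via `docker run --rm --entrypoint sh <image> -c '...'`.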
Check all the volumes; the most critical ones in this case are mounts from your ~/.m2 into the container's ~/.m2. This would explain your issue.
Other than that, it would not make sense that it works on one machine and not the other, as long as you use the same image AND:
a) The compilation of the WAR file does not happen on the problematic computer (most probably a local maven / .m2 issue then)
b) You do not do any crazy stuff in the entrypoint that really hurts the paradigm (I do not think so)
c) You do not mount your local .m2 into the container's .m2 and thus share libs
We do run a lot of Tomcat containers with different war apps and they, as you'd expect, run absolutely cross-platform.
Hint: to avoid a), always build in the image, using the image's Java environment.
Related
For a new Spring project I'd like to setup a Docker container to build + run + debug my application.
At the moment I'm using this Dockerfile:
FROM maven:3.6.2-jdk-8-slim
COPY . /app/
WORKDIR /app/
RUN mvn clean package
FROM maven:3.6.2-jdk-8-slim
COPY --from=0 /app/target/app.jar app.jar
ENTRYPOINT ["java","-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005", "-jar","/app.jar"]
EXPOSE 5005
In the first stage the project is built. In the second stage the application is run, exposing port 5005 for "remote" debugging.
Then from my IDE (IntelliJ IDEA) I'm configuring a remote debugging configuration to execute debug on the container.
As you may guess, it is a bit awkward to execute these steps for every little edit I'd like to debug in the project.
So, I'm wondering if there's a more practical approach using IntelliJ to automatically build and attach the debugger to my application just like when developing directly on my dev machine...
First of all, you can open a pom.xml right from IntelliJ and run the application without the need to run maven (IntelliJ has an excellent maven plugin).
Since you're running it as a plain java -jar, you don't even need the Ultimate edition of IntelliJ.
This is how we usually develop, even before Maven comes into play. You can run mvn clean package locally as well if you want to, say, check that the tests pass (again, you can also do that in IDEA). And when you push your changes, build a Docker image and deploy it on the server.
This is by far the best solution I can recommend. The way you've described in the question suits debugging remote servers better (read: ready environments).
If you absolutely need it this way, you can still use the HotSwap feature of the JVM for small changes (as long as these changes are inside a method): while connected via the Remote Debugger, right-click and "Recompile" the class that has a change. It will be automatically loaded into the remote JVM, so you don't actually need to trigger this whole process.
You also don't have to run all the tests in Maven (mvn clean package -DskipTests).
A couple of ideas that you can implement:
Use CI/CD to build your docker images
Instead of offloading this work to Dockerfiles, let your CI/CD pipeline build your artifact and then pack it into a Docker image (you'll have more control over the process). Finally, you can also deploy it to a target environment.
Use IntelliJ to run and debug your project on the DEV machine
You gain almost no benefit by running your project using Docker on your DEV machine, only a lot of hassle.
I am working on a project which has a particular filesystem requirement. In order to build the project, I will have to create various sub-filesystems on my Mac. However, I do not want to meddle with the filesystem on my actual Mac, as I could corrupt it. Hence, I want to use a Docker container.
I use Eclipse as my IDE. However, in order to use the Docker filesystem in my IDE, I have to run the IDE from within the container. (I am able to do that successfully by following this.)
However, this is extremely slow, and I cannot develop on an IDE running inside the container.
Is there a way to use my IDE by running it outside the docker container (on my actual machine) BUT link it to the file system and directories of the container?
Having everything inside a docker container can quickly lead to absolutely horrible IO performance. See here for in-depth details.
We have a similar problem: a really large project that can be built using a predefined Docker infrastructure. But having the Docker container work on the native macOS filesystem is several times slower compared to running the same Docker setup on a Linux machine (simply because of the IO from the container to the underlying filesystem).
Our solution: the source code lives and gets edited directly on MacOS filesystem. Then there is a docker volume that contains a copy of the project. And: a permanent docker instance that does nothing else but rsync the two sides. Of course, the first rsync takes its time, but afterwards, it is just about small changes on either side.
Long story short: I suggest to "reverse" things. Don't move your IDE into docker, but move the source code out of docker.
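The sync side of that setup is small; a sketch of the one-way mirror step, assuming rsync is available in the sync container (paths are placeholders):

```shell
# sync_once SRC DEST: mirror SRC into DEST, deleting files that were
# removed on the source side. A trailing slash on SRC copies its contents.
sync_once() {
  rsync -a --delete "$1" "$2"
}

# The permanent sync container then just loops, e.g.:
#   while true; do sync_once /host-src/ /volume-copy/; sleep 2; done
```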
I have a task to automatically extend the heap memory for our Tomcat application. I know it isn't possible via JDK features, and I'm looking for any hack, dirty or not, to get that kind of feature. The main reason for this demand is minimal configuration of Tomcat before starting a demo version of our application. In other words, no user of our application will have any access to the JVM/Tomcat configuration.
The application requires ~1024M of heap memory; the default value for Tomcat 8 is 256M, which isn't appropriate for our goals.
At this moment I have two possible solutions:
A .sh/.bat script which will configure Tomcat. Pros: it will do the work. Cons: it's another configuration point for the demo stand (copying of a script).
A wrapper for our application, which goes in the same war file and configures Tomcat if required. Pros: it will do the work, and there is no new configuration point (just install Tomcat and copy the war file).
Is there another, more common and simple way to do that?
EDIT The main goal is to make installation of our application in following steps:
Install tomcat
Copy war file
Start tomcat
No additional configuration; just copy the war and start Tomcat.
That is commonly solved by wrapping the installation and configuration of Tomcat in an installation script. Pros: the end user just downloads the installer and runs it. Cons: the installation script must be tailored to the final environment (one for Windows, one for Linux) and can be hard to write. A sometimes simpler way is to provide a zip containing a readme.txt file and an install.bat (or install.sh).
An alternate way, if the configuration is really complex, is to directly configure a VM (VMDK is a de facto standard) and let the users install the virtualizer they prefer.
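For the zip-plus-script variant, the script can stay tiny. A minimal install.sh sketch (the Tomcat path, war name, and heap size are all assumptions): it pins the heap through bin/setenv.sh, which Tomcat picks up automatically at startup.

```shell
#!/bin/sh
# write_setenv TOMCAT_HOME: pin the heap via bin/setenv.sh so the demo
# starts with enough memory without the user touching any configuration.
write_setenv() {
  mkdir -p "$1/bin"
  printf 'CATALINA_OPTS="-Xms1024m -Xmx1024m"\n' > "$1/bin/setenv.sh"
}

# Typical install steps (commented; demo.war and /opt/tomcat are placeholders):
#   write_setenv /opt/tomcat
#   cp demo.war /opt/tomcat/webapps/
#   /opt/tomcat/bin/startup.sh
```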
I am using a profiling tool which gets loaded when I start up Tomcat with the application war file placed in the webapps directory. So once I run startup, my classes get instrumented and everything works fine.
But for this, I am taking the war file generated as part of maven install (which downloads Tomcat and deploys the war file in it), and placing it in another Tomcat which I have downloaded manually. Then I need to do some editing in the catalina.bat file to set the JAVA_OPTS property to the javaagent so that it gets started on startup.
What I would like to do is set up the tool and integrate it with Maven such that on a clean and install, the classes get instrumented and the profiling tool starts running. I believe we can make some configuration changes in pom.xml to achieve this? Any help in this regard would be greatly appreciated! Thanks.
This is only part of what you need, but you should configure your tomcat in a different way - maybe this eases your task sufficiently that you'll be able to solve the rest yourself:
You don't need to update catalina.bat - instead create a file named setenv.bat in the same directory: It's not included in tomcat, but if it's there, it will be taken into account during startup/shutdown of tomcat.
Speaking about startup/shutdown: The JAVA_OPTS that you set in this file will be used for startup as well as shutdown (there's a java process started when tomcat shall shut down, running for a brief time). If you have massive memory requirements, allocate JMX ports etc, these will apply for both processes, thus may be conflicting. You rather want to set CATALINA_OPTS - this is just used for starting tomcat, not for shutting it down.
So, the typical content for setenv.bat is
set "CATALINA_OPTS=-DyourSettings -DwhateverYouLike"
And, by the way, the same works for setenv.sh on other platforms
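For the javaagent case in the question, the same setenv mechanism carries the agent flag. A sketch of the setenv.sh equivalent, where the agent jar path is a placeholder for your profiler:

```shell
# setenv.sh sketch; the agent jar path below is a placeholder.
# catalina.sh sources this file, so plain assignment is enough.
CATALINA_OPTS="-javaagent:/path/to/profiler-agent.jar"
export CATALINA_OPTS
```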
In the past 10 years or so, I have had the opportunity to deploy web applications into a Tomcat countless times. I also wrote several scripts trying to do that automatically, but never managed to completely automate it.
Here is the issue. I am trying to deploy a new war, with the same name as an existing war in the webapps of my tomcat.
Option 1: The naive approach - just copy the war and wait for it to update the exploded directory. This sometimes works. Many times, the exploded directory is not updated in a reasonable time.
Option 2: The thorough approach - stop the Tomcat, delete all the wars and temporary files, copy the war, and start the Tomcat. This usually involves stopping the Tomcat, waiting for a while, and then checking to see if the process is still alive and killing it.
Option 3: The manual approach - this might be surprising, but I found it to work much of the time: copy the war, wait for the exploded directory to be updated, and once it does, restart the Tomcat. If it doesn't, you can try to delete the temporary work files, and that sometimes helps.
I also tried many options, with different orders and subsets of the actions: restart, stop, delete war, delete exploded, delete localhost context, delete localhost work directory, copy war, sleep, compare dates, ask the Tomcat politely to reload, etc. Nothing seemed to just work.
It might be something that I am doing wrong, but I've heard the same experience from numerous people, so I'm here to get some advice - what say you? What is the best way to deploy a new war to a tomcat?
Thanks!
You can easily automate this in a shell script with curl.
on tomcat 6:
curl --upload-file deployme.war "http://tomcat:s3cret@localhost:8088/manager/deploy?path=/deployme&update=true"
on tomcat 7
curl -T "deployme.war" "http://tomcat:s3cret@localhost:8080/manager/text/deploy?path=/deployme&update=true"
or via almost any programming language. I posted a Java-based solution here.
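One gotcha when scripting this: curl exits 0 even when the manager reports a deployment error, because the HTTP request itself succeeded. The text API answers with a body starting in "OK - ..." on success and "FAIL - ..." on error, so a script can check the response; a small sketch:

```shell
# check_deploy RESPONSE: the manager's text API answers "OK - ..." on
# success and "FAIL - ..." on error; return non-zero so a script can abort.
check_deploy() {
  case "$1" in
    OK*) return 0 ;;
    *)   echo "deploy failed: $1" >&2; return 1 ;;
  esac
}

# Usage (URL/credentials as in the curl commands above):
#   out=$(curl -s -T deployme.war "http://user:pass@host:8080/manager/text/deploy?path=/deployme&update=true")
#   check_deploy "$out" || exit 1
```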
I tend to go for Option 2. If there is a project I am working on in the IDE, especially with a debugger attached, I find things eventually start getting messed up. I might chase a red herring for an hour before I discover that clearing everything away makes the problem go away. So it is nice to have a script on the side that I can occasionally launch to clear everything up:
shutdown force with a 60s timeout
clear out the log, temp, work directories
clear out the webapp folder
copy in the new war file from the build location
explode the new war file
if necessary, run an awk script to customize machine specific values in the properties files (hence the previous explode)
startup with the CATALINA_PID environment variable set (to enable the shutdown force)
Normally things shut down nicely. If not, then there is usually a background thread that was started but is missing a shutdown hook (say, a memcached client) and needs to be hunted down. Normally, just dropping in the new war seems to work too. But in a dev environment, a script for doing the full-blown restart is nice.
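The core of that cleanup script can be sketched as a pair of shell functions (paths and app name are assumptions; the shutdown/startup calls are left as comments since they depend on the local Tomcat install):

```shell
#!/bin/sh
# wipe_runtime_dirs CATALINA_HOME: clear the log, temp, and work directories.
wipe_runtime_dirs() {
  rm -rf "$1"/logs/* "$1"/temp/* "$1"/work/*
}

# stage_war WAR CATALINA_HOME APP: remove the old war and exploded directory,
# then copy in the new war. Tomcat re-explodes it on startup; unzip it
# yourself first if you need to awk machine-specific properties.
stage_war() {
  rm -rf "$2/webapps/$3" "$2/webapps/$3.war"
  cp "$1" "$2/webapps/$3.war"
}

# Typical driver (commented; shutdown -force requires CATALINA_PID to be set):
#   "$CATALINA_HOME/bin/shutdown.sh" 60 -force
#   wipe_runtime_dirs "$CATALINA_HOME"
#   stage_war build/myapp.war "$CATALINA_HOME" myapp
#   CATALINA_PID="$CATALINA_HOME/tomcat.pid" "$CATALINA_HOME/bin/startup.sh"
```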
Cargo - http://cargo.codehaus.org/ - can be used to remotely deploy WAR files to a number of web containers, Tomcat included.
See http://cargo.codehaus.org/Quick+start for examples in Java. Ant and Maven support is also available.
I upload the WAR to my home directory, cd to /usr/local/tomcat, then run the following commands:
bin/shutdown.sh
rm webapps/ROOT.war
rm -rf webapps/ROOT
cp ~/ROOT.war webapps
bin/startup.sh
Easy enough to automate, but I've been too lazy (or not lazy enough) to do that thus far.
I just use the Tomcat management tool to stop the process, remove it, and install the new WAR. Easy peasy.
See the section on "Deploying using the Client Deployer Package"
It's basically a ready-made Ant script to perform common Tomcat deployment operations.
http://tomcat.apache.org/tomcat-7.0-doc/deployer-howto.html#Deploying_on_a_running_Tomcat_server