I am working on a project that has a particular filesystem requirement: in order to build it, I will have to create various sub-filesystems on my Mac. However, I do not want to meddle with the filesystem of my actual Mac, as I could corrupt it. Hence, I want to use a Docker container.
I use Eclipse as my IDE. However, in order to use the container's filesystem in my IDE, I have to run the IDE from within the container. (I am able to do that successfully by following an existing guide.)
However, this is extremely slow, and I cannot realistically develop on an IDE running inside the container.
Is there a way to use my IDE by running it outside the Docker container (on my actual machine) but link it to the filesystem and directories of the container?
Having everything inside a Docker container can quickly lead to terrible I/O performance. See here for in-depth details.
We have a similar problem: a really large project that can be built using a predefined Docker infrastructure. But having the Docker container work on the native macOS filesystem is several times slower than running the same Docker setup on a Linux machine, simply because of the I/O overhead between the container and the underlying filesystem.
Our solution: the source code lives and gets edited directly on the macOS filesystem. Then there is a Docker volume that contains a copy of the project, plus a permanent Docker container that does nothing but rsync the two sides. Of course, the first rsync takes its time, but afterwards it only transfers small changes on either side.
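A minimal sketch of such a setup (the volume name, image names, and paths are illustrative, not from the original setup; the sync shown here is one-way for brevity, whereas the setup described above syncs both directions): a small sync container bind-mounts the macOS checkout read-only and keeps a named volume up to date, and the build container only ever mounts that fast named volume.

# create a named volume that will hold the in-container copy of the sources
docker volume create project-src-cache

# sync container: bind-mounts the macOS checkout and rsyncs it into the volume in a loop
docker run -d --name project-sync \
  -v "$PWD":/host-src:ro \
  -v project-src-cache:/container-src \
  alpine sh -c 'apk add --no-cache rsync && while true; do rsync -a --delete /host-src/ /container-src/; sleep 2; done'

# build container: mounts only the named volume, never the slow macOS bind mount
docker run --rm -v project-src-cache:/workspace my-build-image make -C /workspace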
Long story short: I suggest reversing things. Don't move your IDE into Docker; move the source code out of Docker.
I want to install an application that requires a Java SDK to be installed, but I don't want to install Java on my computer. Instead I would like to install Java to a Docker container and allow apps on my host machine to use it.
Is this possible?
Docker doesn’t really work that way. It’s best for packaged complete applications that you can access via a network connection; in the Java case, a JRE plus an app server plus an installed application would make a reasonable complete image.
It’s possible to share files between the host and a container, but mostly that direction is “push things into a container”. In your example you could use the docker run -v option to push your source directory into the container, but you’d have to actually run the JDK commands (javac, mvn, ...) from inside the container. You can use docker exec to do this, but it’s an unnatural workflow IMHO, and it requires you to have administrator-level privileges to do even routine things.
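As a rough illustration of that workflow (the image tag, directory layout, and Main.java are just examples, not taken from the question), you would bind-mount your source tree into a JDK container and run the compiler there:

# compile the host's files, but with the JDK that lives inside the container
docker run --rm \
  -v "$PWD/src":/workspace/src \
  -v "$PWD/out":/workspace/out \
  -w /workspace \
  openjdk:8-jdk \
  javac -d out src/Main.java

Every compile (or mvn invocation) has to go through docker run or docker exec like this, which is the unnatural workflow mentioned above.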
I’d put the JDK and the application in the same place: either bundle both into a single Docker image, or install the JDK on your host.
I have a task to automatically increase the heap memory for our Tomcat application. I know this isn't possible through JDK features alone, and I'm looking for any hack (dirty or not) that achieves it. The main reason for this requirement is to minimize the Tomcat configuration needed before starting a demo version of our application. In other words, no user of our application will have any access to the JVM/Tomcat configuration.
The application requires ~1024 MB of heap memory; the default value for Tomcat 8 is 256 MB, which isn't appropriate for our goals.
At this moment I have two possible solutions:
A .sh/.bat script that configures Tomcat. Pros: it does the job. Cons: it is another configuration point for the demo stand (the script has to be copied over).
A wrapper for our application that ships in the same .war file and configures Tomcat if required. Pros: it does the job, and there is no new configuration point (just install Tomcat and copy the .war file).
Is there another, more common and simpler way to do this?
EDIT: The main goal is to make installation of our application consist of the following steps:
Install Tomcat
Copy the .war file
Start Tomcat
No additional configuration: just copy the .war and start Tomcat.
That is commonly solved by wrapping the installation and configuration of Tomcat in an installation script. Pros: the end user just downloads the installer and runs it. Cons: the installation script must be tailored to the target environment (one for Windows, one for Linux) and can be hard to write. A sometimes simpler way is to provide a zip containing a readme.txt and an install.bat (or install.sh).
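A minimal install.sh along those lines might look like this (the Tomcat location, .war name, and heap sizes are placeholders; the heap is raised through bin/setenv.sh, which Tomcat sources on startup):

#!/bin/sh
# minimal installer sketch: deploy the demo .war and preconfigure the heap
set -e
TOMCAT_HOME=/opt/tomcat                         # assumed Tomcat install location
cp demo-app.war "$TOMCAT_HOME/webapps/"         # deploy the application
# setenv.sh is picked up by catalina.sh, so this raises the heap without touching other config
echo 'export CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m"' > "$TOMCAT_HOME/bin/setenv.sh"
"$TOMCAT_HOME/bin/startup.sh"                   # start Tomcat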
An alternative, if the configuration is really complex, is to ship a preconfigured VM (VMDK is a de facto standard) and let the users install whichever virtualizer they prefer.
I'm using Jubula to run some automation for a large Java project. The gateway is a launcher that sets all the parameters etc. needed for the project to run; it is wrapped as a .exe file. I converted it back to a .jar in order to get Jubula working with it. I managed to once, but not all of the project's jars were launched from the main project launcher. If I attempt to use the .exe within the AUT properties, it won't launch at all. If I convert back to a .jar, then I run into the issue of either not being able to object map, no matter how much I press CTRL+SHIFT+Q, or some of the apps not launching when I use Jubula to automate. In any case I also need to create a .bat file to launch either the .jar or the .exe; I can't just launch the .jar from the settings within the AUT properties.
Is the .exe wrapper the culprit, and should I just launch everything without the project launcher, or are there known issues with object mapping that someone can alert me to?
The .exe should not be a problem. If you can't map, it means the RC (Remote Control) .jar is either not present or not launched. Could you check that?
Less likely possibilities are a firewall blocking the communication between the remote client and the AUT agent (corporate Windows machines usually do), or that you're using incompatible versions of the RC / AUT agent / testexec triple. The latter can only happen if you've updated the Jubula on your machine to a newer version.
Actually it wasn't an issue with Jubula at all. I had a permissions issue somehow with anything Java related. So I got my machine reimaged and now everything works like a champ.
I have a third-party Java webapp that runs within Apache Tomcat. We are using Docker to run this application. When it is run on my machine, we receive an error
I want to stress that we do not build this .war. We do not own this .war. We are simply trying to spin up a docker container that runs this already built code.
javax.servlet.ServletException: Servlet execution threw an exception
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
with the details
java.lang.IncompatibleClassChangeError: Class javax.ws.rs.core.Response$Status does not implement the requested interface javax.ws.rs.core.Response$StatusType
Investigating this online a bit shows a lot of people having issues with conflicting Jersey jars. We should not have that problem, because we are using Docker for Mac... and this works on everyone else's machine across multiple versions of Docker.
I have wiped out and reinstalled Docker multiple times, restarting my machine in between each reinstall. I have used the same version as the machines where it does work.
I am on OSX Sierra 10.12.2
The docker image is based on tomcat:8.5.9-jre8-alpine
It copies the relevant .war to $CATALINA_HOME/webapps/ and a single log4j .jar to $CATALINA_HOME/lib/, along with two .properties files.
Is there anything on this specific machine that could possibly be interfering with the container?
The Dockerfile is as follows:
FROM tomcat:8.5.9-jre8-alpine@sha256:780c0fc47cf5e212bfa79a379395444a36bbd67f0f3cd9d649b90a74a1a7be20
ENV JAVA_OPTS="-Xms2048m -Xmx2048m"
WORKDIR $CATALINA_HOME
COPY ./target/lib/* $CATALINA_HOME/lib/
COPY ./target/<appname>.war $CATALINA_HOME/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
Check all the volumes; the most critical ones in this case are mounts from your ~/.m2 into the container's ~/.m2. That would explain your issue.
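A quick way to check that (the container name is a placeholder, and <appname> matches the placeholder used in the Dockerfile above): list the container's mounts and look at which jars actually ended up on the webapp's classpath.

# show every volume/bind mount of the running container
docker inspect --format '{{ json .Mounts }}' my-tomcat-container

# look at the jars the webapp really sees inside the container
docker exec my-tomcat-container ls /usr/local/tomcat/webapps/<appname>/WEB-INF/lib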
Other than that, it would not make sense that it works on one machine and not the other, as long as you use the same image AND:
a) the compilation of the .war file does not happen on the problematic computer (most probably a local Maven / .m2 issue then);
b) you do not do any crazy stuff in the entrypoint that really hurts the paradigm (I do not think so);
c) you do not mount your local .m2 into the container's .m2 and thus share libs.
We run a lot of Tomcat containers with different .war apps and, as you would expect, they run absolutely cross-platform.
Hint: to avoid a), always build inside the image, using the image's Java environment.
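One way to apply that hint (assuming a Maven build; the image tag is illustrative): run the packaging step in a disposable container, so it uses the container's JDK and a clean in-container ~/.m2 instead of anything from the host.

# build the .war with the container's JDK and a fresh in-container Maven repository
docker run --rm -v "$PWD":/build -w /build maven:3-jdk-8 mvn -B package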
I have a Windows service running a Java application. When I install this application on a shared drive, it takes ages to start. I decided I want to copy all the application jars and libs to a local path and run it from there.
The problem is I can't find a clean way to do it. I understand I cannot run a batch script (to copy the files before starting the app) as a service. I don't want to create two services with dependencies between them, and creating another Java app just for copying the jars sounds like overkill for the problem.
Can you think of a nice way to do it? I thought maybe downloading a template of a generic Windows service (I don't care what language, preferably C/C++) and making it copy the jars/libs to the local disk and then execute the regular service binary from there. If this is the right way, is there an equivalent of the exec Linux system call on Windows? I don't want the startup executable to stay alive while the app is running.
Thanks in advance