Fixed REST Endpoints - Java

I have a WAR file deployed on Tomcat (in the /var/lib/tomcat7/webapps folder), say, rest-api-webapp-0.0.1.war.
To access the REST endpoint check, exposed in this WAR, I use curl in the format
curl -X POST -H "Content-Type:application/x-www-form-urlencoded" -d "remarks=Tester" https://localhost:8080/rest-api-webapp-0.0.1/check
The problem I face is that whenever I bump the patch/major/minor version of my webapp, I need to change the curl command accordingly; say the version is now 0.1.4, then the curl must change to
curl -X POST -H "Content-Type:application/x-www-form-urlencoded" -d "remarks=Tester" https://localhost:8080/rest-api-webapp-0.1.4/check
I do not wish to change the way the client calls the endpoint (because it requires the client to upgrade their app, which they resent and see as high maintenance). Can this be avoided by doing something like this?

Create a symbolic link inside the /var/lib/tomcat7/webapps folder, as below,
ln -s rest-api-webapp-0.0.1.war rest-api
so that whenever I bump the version, I just change the symlink to point to the new version and the client need not do anything to use the new version of the API. In effect, I need the API endpoint to be fixed and not change as and when I bump the versions on the server. For example, I need the endpoint to stay fixed as https://gva.atr.in/colouring-api/check, and whenever I have a major change in the controllers, all that I need to do is just update the symlink and not change the endpoint. If you find that this approach is flawed, please show me the right direction, as I have been trying to read about this for the past two days but have found very few articles that address my problem.

I read the Tomcat documentation and realised that I need to create a symbolic link to the WAR file and give the symlink the .war extension. So, if this is what you have in your Tomcat webapps directory:
/var/lib/tomcat7/webapps user1$ ls
drwxr-xr-x 11 user1 wheel 374 Oct 19 21:52 .
drwxr-xr-x 9 user1 admin 306 Oct 19 19:36 ..
drwxr-xr-x 19 user1 wheel 646 Aug 29 20:19 ROOT
drwxr-xr-x 55 user1 wheel 1870 Aug 29 20:19 docs
drwxr-xr-x 4 user1 wheel 136 Oct 19 19:46 rest-api-webapp-0.2.1
-rw-r--r-- 1 user1 wheel 48258097 Oct 19 19:46 rest-api-webapp-0.2.1.war
drwxr-xr-x 7 user1 wheel 238 Aug 29 20:19 host-manager
drwxr-xr-x 8 user1 wheel 272 Aug 29 20:19 manager
Do this:
ln -s rest-api-webapp-0.2.1.war rest-api.war
so that the directory looks like this (wait for some time for the Tomcat engine to deploy the new WAR):
drwxr-xr-x 11 user1 wheel 374 Oct 19 21:52 .
drwxr-xr-x 9 user1 admin 306 Oct 19 19:36 ..
drwxr-xr-x 19 user1 wheel 646 Aug 29 20:19 ROOT
drwxr-xr-x 4 user1 wheel 136 Oct 19 21:52 rest-api
drwxr-xr-x 4 user1 wheel 136 Oct 19 19:46 rest-api-webapp-0.2.1
-rw-r--r-- 1 user1 wheel 48258097 Oct 19 19:46 rest-api-webapp-0.2.1.war
lrwxr-xr-x 1 user1 wheel 25 Oct 19 21:51 rest-api.war -> rest-api-webapp-0.2.1.war
drwxr-xr-x 7 user1 wheel 238 Aug 29 20:19 host-manager
drwxr-xr-x 8 user1 wheel 272 Aug 29 20:19 manager
If needed, restart your Tomcat, and you can use the curl command like this:
curl -X POST -H "Content-Type:application/x-www-form-urlencoded" -d "remarks=Tester" https://localhost:8080/rest-api/check
totally not bothering about the major, minor, and patch versions. All that you need to do once you have a new version is:
unlink rest-api.war
ln -s rest-api-webapp-X.Y.Z.war rest-api.war
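If you bump versions often, the whole swap can be scripted. A minimal sketch, assuming the fixed symlink name rest-api.war from above and that the script runs on the Tomcat host (the new WAR name is only an example; use whatever your build produces):

#!/bin/sh
# Repoint the fixed context name at a new WAR version (example version number)
NEW_WAR="rest-api-webapp-0.3.0.war"
cd /var/lib/tomcat7/webapps || exit 1
unlink rest-api.war                 # drop the old symlink
ln -s "$NEW_WAR" rest-api.war       # point the fixed name at the new WAR
# Tomcat's auto-deployment should pick up the change; restart only if it does not:
# sudo service tomcat7 restart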

Related

Why is my jar so big?

I've got a relatively simple jar-with-dependencies being built with Maven that is way larger than it seems like it should be. It's around 20MB, and in order to figure out what's taking up so much space, I've done the following:
First, I ran mvn dependency:tree. Then I checked in my .m2 cache for the size of each of the jars in the dependency tree. If I add up all of those sizes, it comes to about 8MB. How can I figure out where the other 12MB are coming from?
One thing I noticed in looking through my .m2 was that for many of the dependencies, they'll have something like this:
total 5224
-rw-r--r-- 1 user 289B Jul 25 2016 _remote.repositories
-rw-r--r-- 1 user 1.7M Jul 25 2016 commons-compress-1.12-javadoc.jar
-rw-r--r-- 1 user 407B Jul 25 2016 commons-compress-1.12-javadoc.jar.lastUpdated
-rw-r--r-- 1 user 40B Jul 25 2016 commons-compress-1.12-javadoc.jar.sha1
-rw-r--r-- 1 user 427K Jul 25 2016 commons-compress-1.12-sources.jar
-rw-r--r-- 1 user 407B Jul 25 2016 commons-compress-1.12-sources.jar.lastUpdated
-rw-r--r-- 1 user 40B Jul 25 2016 commons-compress-1.12-sources.jar.sha1
-rw-r--r-- 1 user 432K Jul 22 2016 commons-compress-1.12.jar
-rw-r--r-- 1 user 407B Jul 22 2016 commons-compress-1.12.jar.lastUpdated
-rw-r--r-- 1 user 40B Jul 22 2016 commons-compress-1.12.jar.sha1
-rw-r--r-- 1 user 13K Jul 22 2016 commons-compress-1.12.pom
-rw-r--r-- 1 user 407B Jul 22 2016 commons-compress-1.12.pom.lastUpdated
-rw-r--r-- 1 user 40B Jul 22 2016 commons-compress-1.12.pom.sha1
What are the -sources and -javadoc jars? Are those included in my uber jar? Because if every one of my dependencies uses the -javadoc jar instead of the standard one, that gets me a lot closer to 20MB.
Run
jar tvf <your_simple-jar-with-dependencies.jar>
or open it with any zip-compatible archiver and examine the contents to determine what is being included. Alternatively, run Maven with -X for more extensive runtime information.
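If you want to see which entries inside the uber JAR dominate its size, a minimal sketch with standard zip tooling (the JAR path is a placeholder for whatever your build produces):

# List entries sorted by uncompressed size, largest last
# (the final lines also include unzip's totals line)
unzip -l target/myapp-0.1.0-jar-with-dependencies.jar | sort -n -k1 | tail -25

# Sum the size of everything under one package prefix, e.g. Commons Compress
unzip -l target/myapp-0.1.0-jar-with-dependencies.jar "org/apache/commons/compress/*" | tail -1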

ZooKeeper starting error: 'JAVA_HOME error in zookeeper.out'

When I do
bin/zkServer.sh start #It shows it has started
ZooKeeper JMX enabled by default
Using config: /data/sparkHA/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
After a few seconds, when I check the status, I get:
ZooKeeper JMX enabled by default
Using config: /data/sparkHA/zookeeper-3.4.9/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
My zookeeper.out says
nohup: failed to run command ‘/usr/bin/java/bin/java’: Not a directory
But my JAVA_HOME in .bashrc is /usr/bin/java. How come an extra /bin/java is appended, resulting in an invalid path?
Also echo $JAVA_HOME outputs
/usr/bin/java
How should I approach this error? Please help. Thanks.
I also tried setting JAVA_HOME in zkServer.sh by following "Zookeeper not starting, nohup error", but I get the same error.
Your JAVA_HOME points to the /usr/bin/java file, but it should point to the root directory of your JDK. For example, for me this is:
➜ ~ echo $JAVA_HOME
/Library/Java/JavaVirtualMachines/jdk1.8.0_112.jdk/Contents/Home
➜ ~ ll $JAVA_HOME
total 52064
-rw-rw-r-- 1 root wheel 3.2K Oct 1 09:00 COPYRIGHT
-rw-rw-r-- 1 root wheel 40B Oct 1 09:01 LICENSE
-rw-rw-r-- 1 root wheel 159B Oct 1 09:01 README.html
-rwxrwxr-x 1 root wheel 108K Sep 22 22:49 THIRDPARTYLICENSEREADME-JAVAFX.txt
-rw-rw-r-- 1 root wheel 173K Oct 1 09:01 THIRDPARTYLICENSEREADME.txt
drwxrwxr-x 46 root wheel 1.5K Oct 1 09:04 bin
drwxrwxr-x 9 root wheel 306B Oct 1 09:00 db
drwxrwxr-x 9 root wheel 306B Oct 1 09:00 include
-rwxrwxr-x 1 root wheel 4.9M Sep 22 22:49 javafx-src.zip
drwxrwxr-x 10 root wheel 340B Oct 1 09:02 jre
drwxrwxr-x 14 root wheel 476B Oct 1 09:02 lib
drwxrwxr-x 5 root wheel 170B Oct 1 09:01 man
-rw-rw-r-- 1 root wheel 529B Oct 1 09:01 release
-rw-rw-r-- 1 root wheel 20M Oct 1 09:01 src.zip
So, try to set a valid path to the JDK root directory. I think it should fix your problem.
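As a minimal sketch of what that looks like (assuming a Linux box where the java on the PATH is a symlink into the real JDK; the OpenJDK path below is only an example):

# Find where the java binary on the PATH really lives
readlink -f "$(which java)"
# e.g. /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java  (example; yours will differ)

# Point JAVA_HOME at the JDK root directory, not at the java binary, e.g. in ~/.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"

# Reload the shell configuration and restart ZooKeeper
source ~/.bashrc
bin/zkServer.sh restart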

How to have IntelliJ automatically overwrite files when exporting jar

In Eclipse it does this anyway when exporting JARs, and will only give a warning unless you tell it not to. However, in IntelliJ it refuses to delete the file when building a new version, and I have to go through and manually delete the JAR myself for IntelliJ to export properly. Is there a way I can force IntelliJ to overwrite JARs when it exports?
IntelliJ always overwrites artifacts. Try it again. Make sure no other process is using the file with:
$ lsof | grep [file name]
I just tried to make a project with artifact generation, no issues whatsoever.
bender:queues_jar demo$ ls -ltra
total 1408
-rw-r--r-- 1 demo staff 718737 Feb 11 21:26 queues.jar
drwxr-xr-x 3 demo staff 102 Feb 11 21:26 ..
drwxr-xr-x 3 demo staff 102 Feb 11 21:26 .
bender:queues_jar demo$ ls -ltra
total 1408
drwxr-xr-x 3 demo staff 102 Feb 11 21:26 ..
-rw-r--r-- 1 demo staff 718737 Feb 11 21:27 queues.jar
drwxr-xr-x 3 demo staff 102 Feb 11 21:27 .
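If the JAR still isn't replaced on your machine, check whether some process is holding it open; queues.jar here is just the artifact name from the listing above (substitute your own):

lsof | grep queues.jar
# If a process shows up holding the file, close or stop it, then rebuild the artifact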

Play! not shutting down H2 correctly

I'm using Play to write a webapp which is deployed in Tomcat. Because the app won't be processing very much data I'm using the default H2 database with Hibernate. When I want to deploy a new version of the app, I shut down tomcat, wipe the old webapp and WAR, add my new WAR, and start back up.
This worked until a few days ago, when I added the database component. Now, I am often unable to redeploy the app. When I delete the old directory, it is automatically regenerated with this structure:
$ ls -laR myapp/
myapp/:
total 24
drwxr-xr-x 3 root root 4096 Aug 24 17:20 .
drwxr-xr-x 13 root root 4096 Aug 24 17:20 ..
drwxr-xr-x 3 root root 4096 Aug 24 17:20 WEB-INF
myapp/WEB-INF:
total 24
drwxr-xr-x 3 root root 4096 Aug 24 17:20 .
drwxr-xr-x 3 root root 4096 Aug 24 17:20 ..
drwxr-xr-x 3 root root 4096 Aug 24 17:20 application
myapp/WEB-INF/application:
total 24
drwxr-xr-x 3 root root 4096 Aug 24 17:20 .
drwxr-xr-x 3 root root 4096 Aug 24 17:20 ..
drwxr-xr-x 3 root root 4096 Aug 24 17:20 db
myapp/WEB-INF/application/db:
total 24
drwxr-xr-x 3 root root 4096 Aug 24 17:20 .
drwxr-xr-x 3 root root 4096 Aug 24 17:20 ..
drwxr-xr-x 2 root root 4096 Aug 24 17:20 h2
myapp/WEB-INF/application/db/h2:
total 24
drwxr-xr-x 2 root root 4096 Aug 24 17:20 .
drwxr-xr-x 3 root root 4096 Aug 24 17:20 ..
-rw-r--r-- 1 root root 100 Aug 24 17:20 play.lock.db
The same happens when the WAR unzips.
I recently noticed a message whiz by in the catalina.out log complaining about my app not shutting down a process called something like "H2 File Lock Watchdog". Based on a brief search of the H2 docs, I think that process is what's interfering with my app.
EDIT
Here's the complaining line in the log file:
SEVERE: The web application [/myapp] appears to have started a thread named [H2 File Lock Watchdog /var/lib/apache-tomcat-6.0.32/webapps/myapp/WEB-INF/application/db/h2/play.lock.db] but has failed to stop it. This is very likely to create a memory leak.
So, how do I kill this process? I can't restart the machine because it's not mine, and I can't find the watchdog with top or ps. I'd prefer a way for Play to shut it down automagically, but I'm not above building it into my deployment script.
Thanks a million if you've read this far!
I shut down tomcat
Are you sure you have shut down Tomcat completely? Because the H2 database is still running. If you shut down the Tomcat process, the database is also stopped (because H2 is running within the Tomcat process), unless you run the database in a different process.
Or did you just shut down the web application within Tomcat? If that is the case, then at least one database connection was not closed, so the database keeps running (and creates this .lock.db file).
Now, I don't know the Play framework, so I can't say how to ensure all database connections are closed.
One way to force the database to close is to run the SQL statement SHUTDOWN.
I can't find the watchdog with top or ps
top and ps only display processes. The H2 watchdog is a thread within a java process. To see the thread, use:
jps -l (to get the list of Java processes)
jstack -l <pid> (to get a full thread dump)
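Put together, a minimal sketch of spotting the lingering watchdog thread inside the Tomcat JVM (Tomcat usually shows up in jps as Bootstrap; the PID below is only an example):

jps -l
# e.g. 12345 org.apache.catalina.startup.Bootstrap   <- the Tomcat process

jstack -l 12345 | grep -A 3 "H2 File Lock Watchdog"
# If the thread appears, the database is still open inside the Tomcat JVM;
# stopping that process (not just the webapp) releases play.lock.db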

Subclipse can't rename file (OS X)

I can't perform any Subversion operations on my Eclipse project as Subclipse can't rename a file. The error is:
Caused by: org.tigris.subversion.javahl.ClientException: svn: Cannot rename file '/Users/damianharvey/Sites/Odyssey3.5/OdysseyEDIJAXB/src/com/locuslive/edi/edifact/d95b/coreor/.svn/tmp/entries' to '/Users/damianharvey/Sites/Odyssey3.5/OdysseyEDIJAXB/src/com/locuslive/edi/edifact/d95b/coreor/.svn/entries'
at org.tigris.subversion.javahl.JavaHLObjectFactory.throwException(JavaHLObjectFactory.java:777)
at org.tmatesoft.svn.core.javahl.SVNClientImpl.throwException(SVNClientImpl.java:1850)
at org.tmatesoft.svn.core.javahl.SVNClientImpl.cleanup(SVNClientImpl.java:863)
at org.tigris.subversion.svnclientadapter.javahl.AbstractJhlClientAdapter.cleanup(AbstractJhlClientAdapter.java:1958)
... 8 more
Caused by: org.tmatesoft.svn.core.SVNException: svn: Cannot rename file '/Users/damianharvey/Sites/Odyssey3.5/OdysseyEDIJAXB/src/com/locuslive/edi/edifact/d95b/coreor/.svn/tmp/entries' to '/Users/damianharvey/Sites/Odyssey3.5/OdysseyEDIJAXB/src/com/locuslive/edi/edifact/d95b/coreor/.svn/entries'
I'm running OSX Snow Leopard, Eclipse 3.5, Subclipse 1.6.5.
It looks like a permissions problem. If I list the directories in the error I get:
drwxrwxrwx 8 damianharvey staff 272 19 Nov 17:43 .
drwxrwxrwx 16 damianharvey staff 544 21 Sep 14:53 ..
-r--r--r-- 1 damianharvey staff 2030 21 Sep 14:53 all-wcprops
-r--r--r-- 1 damianharvey staff 2313 21 Sep 14:53 entries
drwxrwxrwx 2 damianharvey staff 68 21 Sep 14:53 prop-base
drwxrwxrwx 2 damianharvey staff 68 21 Sep 14:53 props
drwxrwxrwx 15 damianharvey staff 510 21 Sep 14:53 text-base
drwxrwxrwx 6 damianharvey staff 204 19 Nov 17:19 tmp
So I assume that it's the read-only permissions that are preventing this. If I try to chmod this to a very broad 777:
sudo chmod 777 /Users/damianharvey/Sites/Odyssey3.5/OdysseyEDIJAXB/src/com/locuslive/edi/edifact/d95b/coreor/.svn/entries
chmod: Unable to change file mode on /Users/damianharvey/Sites/Odyssey3.5/OdysseyEDIJAXB/src/com/locuslive/edi/edifact/d95b/coreor/.svn/entries: Operation not permitted
Any ideas? Would quite like to commit my code.
Many Thanks.
No worries. Aunty Google found it for me:
chflags -R nouchg .
From the comments here:
If you're changing workspaces on OS X and you import an SVN-based project into your new workspace, some of your files may have the uchg flag set. Subclipse/SVN will not be able to update this project. You will get an error:

svn: Cannot rename file

every time you try to invoke svn. If you issue:

chflags -R nouchg .

at the top level of the project directory, this will clear these flags and restore SVN function.
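To confirm the flag before and after clearing it, a quick check with BSD/macOS ls (run from the project directory; the entries path comes from the error above):

# The flags column of -O shows "uchg" on locked files
ls -lO .svn/entries

# Clear the user-immutable flag recursively from the project root, then re-check
chflags -R nouchg .
ls -lO .svn/entries   # the uchg flag should now be gone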
