How to manage Tomcat via Java

I'm looking for a way to manage Tomcat (on localhost) programmatically via Java.
I want to start/stop Tomcat and deploy WARs.
Any help is appreciated.

You can run Tomcat embedded in your app.

One way to start/stop Tomcat from Java is to execute bootstrap.jar (using the Runtime class) with parameters such as -Dcatalina.home=c:/tomcat/.
Sample code showing how Ant starts and stops Tomcat:
http://ptrthomas.wordpress.com/2006/03/25/how-to-start-and-stop-tomcat-from-ant
Sample code showing how external programs are executed from Java:
http://www.linglom.com/2007/06/06/how-to-run-command-line-or-execute-external-application-from-java/

You can use the Java Runtime class to call a batch file. Make sure the user running the Java process has rights to start and stop Tomcat:
try {
    // Pass the command as an array: a single string containing spaces
    // ("c:/program files/...") would be split into separate tokens.
    Runtime.getRuntime().exec(new String[] { "c:/program files/tomcat/bin/startup.bat" });
} catch (IOException e) {
    System.out.println("exception");
}
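A slightly more robust sketch of the same idea uses ProcessBuilder, which takes the command as a list (no whitespace-splitting pitfalls) and lets you set environment variables like CATALINA_HOME. The script path and CATALINA_HOME value below are assumptions; substitute your real install locations:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class TomcatScriptRunner {

    // Builds the command to run a Tomcat control script.
    // The path passed in is whatever your install uses (startup.bat, shutdown.sh, ...).
    public static List<String> command(String scriptPath) {
        return Arrays.asList(scriptPath);
    }

    // Runs a command and returns its exit code.
    public static int run(List<String> cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.environment().put("CATALINA_HOME", "c:/tomcat"); // assumed location
        pb.inheritIO(); // forward the script's output to our console
        return pb.start().waitFor();
    }

    public static void main(String[] args) {
        // Hypothetical path; replace with your real bin/startup script.
        System.out.println(command("c:/tomcat/bin/startup.bat"));
    }
}
```

Calling run(command("c:/tomcat/bin/shutdown.bat")) would stop the server the same way.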

To manage Tomcat programmatically, you may want to take a look at JMX and the built-in MBean capabilities of Tomcat.
In essence, you can write your own Java-based JMX client to talk to the MBeans over RMI, or you can take advantage of the JMX HTTP proxy in the Manager app and use plain old HTTP requests to script and manage the Tomcat instance.
For a good reference of JMX and Tomcat 6:
http://www.datadisk.co.uk/html_docs/java_app/tomcat6/tomcat6_jmx.htm
A good reference of Manager App and JMX Http Proxy:
http://tomcat.apache.org/tomcat-6.0-doc/manager-howto.html#JMX_Set_command
You should be able to deploy and undeploy WARs fairly easily.
I don't think there is an existing MBean that allows you to shut down Tomcat, but it's fairly easy to implement one yourself that calls System.exit().
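The remote JMX client described above can be sketched with the standard javax.management API. This assumes Tomcat was started with JMX remote enabled (the -Dcom.sun.management.jmxremote.port=... system properties); the port 9000 below is a placeholder:

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatJmxClient {

    // Builds the RMI service URL for a remote JMX port.
    public static String serviceUrl(String host, int port) {
        return "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi";
    }

    // Lists the web modules Tomcat registers as MBeans, with their lifecycle state.
    // Fails to connect unless Tomcat is running with JMX remote enabled.
    public static void listWebApps(String host, int port) throws Exception {
        JMXServiceURL url = new JMXServiceURL(serviceUrl(host, port));
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            Set<ObjectName> modules =
                    conn.queryNames(new ObjectName("Catalina:j2eeType=WebModule,*"), null);
            for (ObjectName module : modules) {
                System.out.println(module + " state=" + conn.getAttribute(module, "stateName"));
            }
        }
    }

    public static void main(String[] args) {
        // No connection is attempted here; 9000 is an assumed JMX port.
        System.out.println(serviceUrl("localhost", 9000));
    }
}
```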

You can use the Tomcat Manager app, or read its sources to learn how the Manager handles deploy operations.
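The Manager app also exposes a plain-text HTTP interface you can script against. A minimal sketch of deploying a WAR with an HTTP PUT, assuming a Tomcat 7+ /manager/text servlet and a user with the manager-script role (host, port, and context path below are placeholders):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class ManagerDeployer {

    // Builds the Manager "text" deploy URL for a given context path.
    public static String deployUrl(String host, int port, String contextPath) {
        return "http://" + host + ":" + port + "/manager/text/deploy?path=" + contextPath + "&update=true";
    }

    // PUTs the WAR file to the Manager app and returns the HTTP status line.
    public static String deploy(String url, String user, String password, Path war) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("PUT");
        String auth = Base64.getEncoder().encodeToString((user + ":" + password).getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(war, out); // stream the WAR as the request body
        }
        return conn.getResponseCode() + " " + conn.getResponseMessage();
    }

    public static void main(String[] args) {
        System.out.println(deployUrl("localhost", 8080, "/myapp"));
    }
}
```

Undeploying works the same way against /manager/text/undeploy?path=...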

You can restart an individual Tomcat connector, i.e. restart just the port your application is running on (e.g. 8843). One scenario where this is required is when you obtain a signed certificate through an API, or when you modify your truststore.
Here is the complete method I am using to restart Tomcat connectors after I add/delete certificates:
public void refreshTrustStore() throws Exception
{
    try
    {
        // Replace the following line based on where you get your port;
        // it may be passed in as an argument.
        String httpsPort = configurationManager.getHttpsPort();
        String objectString = "*:type=Connector,port=" + httpsPort + ",*";
        final ObjectName objectNameQuery = new ObjectName(objectString);
        for (final MBeanServer server : MBeanServerFactory.findMBeanServer(null))
        {
            if (server.queryNames(objectNameQuery, null).size() > 0)
            {
                MBeanServer mbeanServer = server;
                ObjectName objectName = (ObjectName) server.queryNames(objectNameQuery, null).toArray()[0];
                mbeanServer.invoke(objectName, "stop", null, null);
                // Polling sleep to reduce delay to the safe minimum.
                // Use currentTimeMillis() over nanoTime() to avoid issues
                // with migrating threads across sleep() calls.
                long start = System.currentTimeMillis();
                // Maximum of 6 seconds, 3x the time required on an idle system.
                long maxDuration = 6000L;
                long duration = 0L;
                do
                {
                    try
                    {
                        Thread.sleep(100);
                    }
                    catch (InterruptedException e)
                    {
                        Thread.currentThread().interrupt();
                    }
                    duration = System.currentTimeMillis() - start;
                } while (duration < maxDuration &&
                        server.queryNames(objectNameQuery, null).size() > 0);
                // Log how long the stop actually took.
                String message = "TrustStoreManager TrustStore Stop: took " + duration + " milliseconds";
                logger.information(message);
                mbeanServer.invoke(objectName, "start", null, null);
                break;
            }
        }
    }
    catch (Exception exception)
    {
        // Log and rethrow
        throw exception;
    }
}

Related

How to wait for the WildFly server to reload?

I reload the WildFly server as follows:
CliCommandBuilder cliCommandBuilder = ...
cliCommandBuilder
.setCommand(
"reload"
);
Launcher.of(cliCommandBuilder)
.inherit()
.setRedirectErrorStream(true)
.launch();
And I need to wait for the server to start, because I will then deploy the new content. How can I do this?
I tried using the .waitFor() method from java.lang.Process:
Launcher.of(cliCommandBuilder)
.inherit()
.setRedirectErrorStream(true)
.launch().waitFor();
But waitFor() returns after WildFly shuts down, not after it has finished starting again.
I thought the reload command waited to terminate the process until WildFly was reloaded. However, there is a helper API you could use to check the process. Something like this should work:
final CliCommandBuilder commandBuilder = CliCommandBuilder.of("/opt/wildfly-27.0.0.Final")
.setConnection("localhost:9990")
.setCommand("reload");
final Process process = Launcher.of(commandBuilder)
.inherit()
.setRedirectErrorStream(true)
.launch();
// Wait for the process to end
if (!process.waitFor(5, TimeUnit.SECONDS)) {
throw new RuntimeException("The CLI process failed to terminate");
}
try (ModelControllerClient client = ModelControllerClient.Factory.create("localhost", 9990)) {
    while (!ServerHelper.isStandaloneRunning(client)) {
        TimeUnit.MILLISECONDS.sleep(200L);
    }
    // Read the running-mode attribute so we can report the server state.
    final ModelNode result = client.execute(
            Operations.createReadAttributeOperation(new ModelNode().setEmptyList(), "running-mode"));
    if (!Operations.isSuccessfulOutcome(result)) {
        throw new RuntimeException("Failed to check state: " + Operations.getFailureDescription(result).asString());
    }
    System.out.printf("Running Mode: %s%n", Operations.readResult(result).asString());
}
You need to explicitly connect the CLI to the server. By default, this is localhost:9990, which you can access by adding connect to the CLI commands (see the WildFly Admin Guide). Otherwise, you should use the setController method.

Hazelcast - Client mode - How to recover after cluster failure?

We are using Hazelcast distributed lock and cache functions in our products. Distributed locking is vitally important for our business logic.
Currently we are using embedded mode (each application node is also a Hazelcast cluster member). We are going to switch to client-server mode.
The problem we have noticed with client-server mode is that, once the cluster is down for a period, after several attempts the clients are destroyed and any objects (maps, sets, etc.) that were retrieved from those clients are no longer usable.
Also, the client instance does not recover even after the Hazelcast cluster comes back up (we receive HazelcastInstanceNotActiveException).
I know that this issue has been addressed several times and ended up as being a feature request:
issue1
issue2
issue3
My question: What should the strategy be to recover the client? Currently we plan to enqueue a task in the client process, as below. Based on a condition, it will try to restart the client instance.
We check whether the client is running via clientInstance.getLifecycleService().isRunning().
Here is the task code:
private class ClientModeHazelcastInstanceReconnectorTask implements Runnable {
    @Override
    public void run() {
        try {
            HazelCastService hazelcastService = HazelCastService.getInstance();
            HazelcastInstance clientInstance = hazelcastService.getHazelcastInstance();
            boolean running = clientInstance.getLifecycleService().isRunning();
            if (!running) {
                logger.info("Current clientInstance is NOT running. Trying to start hazelcastInstance from ClientModeHazelcastInstanceReconnectorTask...");
                hazelcastService.startHazelcastInstance(HazelcastOperationMode.CLIENT);
            }
        } catch (Exception ex) {
            logger.error("Error occurred in ClientModeHazelcastInstanceReconnectorTask !!!", ex);
        }
    }
}
Is this approach suitable? I also tried listening to lifecycle events, but could not make it work that way.
Regards
In Hazelcast 3.9 we changed the way connection and reconnection works in clients. You can read about the new behavior in the docs: http://docs.hazelcast.org/docs/3.9.1/manual/html-single/index.html#configuring-client-connection-strategy
I hope this helps.
In Hazelcast 3.10 you can increase the connection attempt limit from the default of 2 to the maximum:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(Integer.MAX_VALUE);
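Combining the two answers above, a client configuration sketch for Hazelcast 3.9+: keep retrying the connection, and reconnect asynchronously so the client survives a full cluster restart. The connection-strategy API is the one described in the docs linked above; verify the names against your client version:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientConnectionStrategyConfig;
import com.hazelcast.core.HazelcastInstance;

ClientConfig clientConfig = new ClientConfig();
// Keep retrying instead of giving up after the default 2 attempts.
clientConfig.getNetworkConfig().setConnectionAttemptLimit(Integer.MAX_VALUE);
// Reconnect in the background instead of shutting the client down.
clientConfig.getConnectionStrategyConfig()
        .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
```

This is a configuration fragment, not a complete program; it requires the Hazelcast client jar on the classpath.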

Starting and stopping a Jetty server between JUnit tests

I'm trying to simulate tests of various run-throughs of my program, setting up a Jetty server in a @Before method and closing it down in an @After.
My first test runs successfully, but upon attempting to POST data in subsequent tests, com.sun.jersey.api.client.ClientHandlerException: java.net.SocketException: Software caused connection abort: recv failed occurs. Is there any way I can get my Server (and Client?) to shut down cleanly between tests?
My Before and After code is as follows:
@Before
public void startServer() {
    try {
        server = new Server(8080);
        ServletContextHandler root = new ServletContextHandler(server, "/ingest", ServletContextHandler.SESSIONS);
        root.addServlet(new Servlet(), "/*");
        server.start();
        client = new Client();
        client.setChunkedEncodingSize(16 * 1024);
        FileInputStream stream = new FileInputStream(testFile);
        try {
            client.resource(uri).type(MediaType.APPLICATION_OCTET_STREAM).post(stream);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            Closeables.closeQuietly(stream);
            client.destroy();
        }
    } catch (Exception e) {
        e.printStackTrace();
        fail("Unexpected Exception when starting up server.");
    }
}

@After
public void shutDown() {
    if (output.exists()) {
        output.delete();
    }
    try {
        server.stop();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Best practice in testing scenarios is to not hard code the port. That only leads to conflicts when running elsewhere, especially on CI systems that have even a moderate load or variety of projects.
In Jetty 9 (the same idea applies in 6, 7, and 8):
_server = new Server();
_connector = new ServerConnector(_server);
_server.setConnectors(new Connector[] { _connector });
_server.start();
int port = _connector.getLocalPort();
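The same trick works with plain sockets, which makes it easy to test without Jetty on the classpath: bind to port 0 and the OS hands back a free ephemeral port, exactly as ServerConnector does when no port is configured. A minimal sketch:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPort {

    // Bind to port 0; the OS assigns a free ephemeral port.
    // Closing the socket immediately frees it for the server under test
    // (a tiny race is possible if something else grabs it first).
    public static int pickFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free port: " + pickFreePort());
    }
}
```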
It turns out that what I had was in fact working; however, due to the asynchronous nature of server.stop(), my new server was attempting to start before the previous server's shutdown thread had completely executed.
A simple Thread.sleep(n) after server.stop() gives the server the time it needs to shut down between tests. Unfortunately, the server seems to claim prematurely that it has stopped, preventing an exact solution based on checking the server state; but perhaps there is something to poll on the server, possibly the thread pool, that could provide a consistent result.
In any case, as this is only for testing purposes, merely starting the server in the @BeforeClass and shutting it down in @AfterClass prevents the whole server shut down kerfuffle, but beware of then starting another server on the same port in your test suite.
My guess is that it was getting a port conflict. We actually do this for our tests, and surprisingly the performance hit isn't that bad. We began by starting a single server before all tests as the answer suggests, but we had to switch to support mutation testing. One downside to relying on Maven is that you have to start it up on the side to run a single test in an IDE.
For anyone interested, our implementation is here: embedded-test-jetty. It runs multiple servers at once on different ports (for parallel testing), checks port availability, supports SSL, etc.
I handle this using a couple of things. First, after each test, make sure your server is shut down, and join() on it. Either do this in @After or @AfterClass depending on what you are doing.
server.stop();
server.join();
Next, before each test, make sure the port is available. I use the snippet available at Sockets: Discover port availability using Java
Then, the setup code becomes
public static void waitForPort(int port) {
    while (!available(port)) {
        try { Thread.sleep(PORT_SLEEP_MILLIS); }
        catch (InterruptedException e) {}
    }
}
@Before
public void setUp() throws Exception {
    waitForPort(9876);
    waitForPort(9877);
    // Make sure the ports are clear
    Thread.sleep(500);
}
The little extra sleep at the end helps ensure the port is really free, because the act of checking it can itself briefly tie it up. Another option is to set SO_REUSEADDR when opening the port after having checked it.
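The availability check with SO_REUSEADDR can be sketched like this: create an unbound ServerSocket, enable reuse before binding, and treat a bind failure as "port in use". The reuse flag lets the probe (and the server started afterwards) grab a port that is still in TIME_WAIT from the previous test:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortCheck {

    // Returns true if we can bind the port right now. setReuseAddress(true)
    // must be called before bind(); it allows binding a port lingering in
    // TIME_WAIT, though not one that is still actively bound.
    public static boolean available(int port) {
        try (ServerSocket socket = new ServerSocket()) {
            socket.setReuseAddress(true);
            socket.bind(new InetSocketAddress(port));
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket holder = new ServerSocket(0)) {
            // The holder still has the port bound, so the probe reports it busy.
            System.out.println("held port available? " + available(holder.getLocalPort()));
        }
    }
}
```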
Try:
server = new Server();
SocketConnector connector = new SocketConnector();
connector.setPort(8080);
server.setConnectors(new Connector[] { connector });
WebAppContext context = new WebAppContext();
context.setServer(server);
context.setContextPath("/your-context");
context.setWar("path to war");
server.addHandler(context);
Thread monitor = new MonitorThread();
monitor.start();
server.start();
server.join();
then somewhere you say:
server.stop()
Helpful article:
http://www.codeproject.com/Articles/128145/Run-Jetty-Web-Server-Within-Your-Application
I realise that this doesn't directly answer your question... but starting and stopping a server in @Before and @After methods is inefficient when you have more than one integration test that requires a server to be running, as the server would be restarted for every test.
You may want to consider starting and stopping your server around your entire suite of tests. If you are using Maven for builds, you can do this with the combination of failsafe and Jetty plugins.

How can I avoid running more than one instance of the same Java project at the same time?

I have a Java project that works as a server. When an instance of this project is running, I can still run another instance.
How can I prevent more than one instance of the same Java project from running at the same time?
(Stop the server when another instance is detected.)
import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.ServerSocket;
.....
private static final int PORT = 9999;
private static ServerSocket socket;

public static void main(String[] args) {
    try {
        socket = new ServerSocket(PORT, 0, InetAddress.getByAddress(new byte[] { 127, 0, 0, 1 }));
        /* here write your own code that must run in main */
    } catch (BindException e) {
        System.err.println("Already running.");
        System.exit(1);
    } catch (IOException e) {
        System.err.println("Unexpected error.");
        e.printStackTrace();
        System.exit(2);
    } catch (Exception e) {
        System.err.println("Error");
        System.exit(3);
    }
}
I used this code and it works; try it.
The easiest way is to use a lock file, but this causes problems if the app crashes. Try writing the PID into the lock file; then you can check whether that PID still exists (although not natively; maybe in a wrapper shell script).
If you are running a server, can you not check whether a port is open? Or, better still, expose a JMX instance on a known port.
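The lock-file idea can be made crash-safe with java.nio file locking: the OS releases the lock automatically when the process dies, so a crash leaves nothing stale behind. A minimal sketch:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class SingleInstanceLock {

    private static FileLock lock; // held for the life of the process

    // Tries to take an exclusive OS-level lock on the given file.
    // Returns false if another process already holds it.
    public static boolean tryAcquire(File lockFile) throws Exception {
        FileChannel channel = new RandomAccessFile(lockFile, "rw").getChannel();
        lock = channel.tryLock();
        return lock != null;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("myserver", ".lock");
        // First acquisition on a fresh file always succeeds.
        System.out.println("first instance? " + tryAcquire(f));
    }
}
```

In a real server you would use a fixed, well-known path (e.g. under the app's data directory) instead of a temp file, and exit if tryAcquire returns false.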
I totally support @vickirk - his approach allows the second "un-needed" instance of your server to become "dormant" instead of simply terminating, i.e. periodically check whether the "active" instance is still actually active/present, and take over if it went down.
In the distributed case, if the requirement is to have a single server instance spanning multiple machines, the approach is still to find a common resource that can be locked, physically or logically. For that purpose, I personally use a control database table where the active process writes its PID and a "heartbeat", and all others check that the "heartbeat" is fairly recent, becoming active if it is not.
You can write a simple command-line script for app start that checks whether the server is already running before actually launching a new instance. Just check a URL with wget, for example...

How can I check if MySQL and Tomcat are running?

I've created a Java application that is split into different subcomponents, each of which runs on a separate Tomcat instance. Also, some components use a MySQL db through Hibernate.
I'm now creating an administration console that reports the status of all my Tomcat instances and of MySQL. I don't need detailed information; knowing whether they are running is enough.
What could be the best solution to do that?
Thanks
The most straightforward way would be to just connect to the server and see if it succeeds.
MySQL:
Connection connection = null;
try {
    connection = DriverManager.getConnection(url, username, password);
    // Success!
} catch (SQLException e) {
    // Fail!
} finally {
    if (connection != null) try { connection.close(); } catch (SQLException ignore) {}
}
Tomcat:
try {
    new URL(url).openConnection().connect();
    // Success!
} catch (IOException e) {
    // Fail!
}
If you want a bit more specific status, e.g. checking if a certain DB table is available or a specific webapp resource is available, then you have to fire a more specific SELECT statement or HTTP request respectively.
I assume that you know in advance the ports on which they are running (or can read them from configuration files). The easiest way to check is to make socket connections to those ports, as a telnet program does. Something like:
public boolean isServerUp(int port) {
    boolean isUp = false;
    try {
        Socket socket = new Socket("127.0.0.1", port);
        // Server is up
        isUp = true;
        socket.close();
    } catch (IOException e) {
        // Server is down
    }
    return isUp;
}
Usage:
isTomcatUp = isServerUp(8080);
isMysqlUp = isServerUp(3306);
However, this check can report false positives: sometimes it says the server is UP even though the server is stuck or not responding.
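To avoid that false positive, check the response body rather than just the socket: the check only passes when the application actually answered with the expected content. A self-contained sketch, using the JDK's built-in HttpServer as a stand-in for a real status page (in production the URL would point at your Tomcat status servlet):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import com.sun.net.httpserver.HttpServer;

public class StatusCheck {

    // True only if the URL answers 200 with the expected body within the timeouts.
    public static boolean isHealthy(String url, String expectedBody) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000); // a stuck server fails here instead of passing
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                return conn.getResponseCode() == 200 && expectedBody.equals(in.readLine());
            }
        } catch (Exception e) {
            return false; // down, stuck, or wrong content
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in status page that returns "OK".
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "OK".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/status";
        System.out.println("healthy? " + isHealthy(url, "OK"));
        server.stop(0);
    }
}
```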
I would make sure that whatever monitoring you set up is actually exercising some code. Monitoring the JVM via JMX can also be helpful after the fact. Check out http://www.cacti.net/ .
Fire a simple fixed query through MySQL:
SELECT 'a-ok';
and have the JSP return that a-ok text. If it times out and/or doesn't respond with a-ok, then something's hinky. If you need something more detailed, you can add extra checks, like requesting now() or something bigger, like SHOW INNODB STATUS.
The easiest thing is to look for the MySQL and Tomcat PID files. You need to look at your start scripts to be sure of the exact location, but once you find it, you simply test for the existence of the PID file.
Create a servlet as a status page. In the servlet, perform a cheap query; if the query succeeds, have the servlet print OK, otherwise Error. Put the servlet into a WAR and deploy it to all instances.
Your admin console can then check every instance by looping over them.
I'd create a simple REST webservice that runs on each Tomcat instance and does a no-op query against the database. That makes it easy to drive from anywhere (command line, web app, GUI app, etc.)
If these are publicly available servers you can use a service like binarycanary.com to poll a page or service in your app.
