My Spring Boot application is quite small and has one job: act as a client by opening a WebSocket connection to a third-party website and listening for messages.
The problem is that after my javax.websocket.Endpoint implementation has been initialised and the connection has been created, my Spring Boot application exits.
I would have thought that any open websocket connection would keep my application up and running?
I don't need an embedded servlet container so I have specifically set web-environment: false in application.yaml.
Is there a way to remedy this without adding a servlet container I will never use?
I fixed this by using OkHttpClient and initialising it with a @PostConstruct in my @Configuration. I annotated the listener with @Component and it now stays alive without needing the embedded servlet container.
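Roughly, the setup looks something like this (a minimal sketch assuming OkHttp 3.x; the listener class and the URL are illustrative, not taken from my actual code):

import javax.annotation.PostConstruct;

import org.springframework.context.annotation.Configuration;

import okhttp3.OkHttpClient;
import okhttp3.Request;

@Configuration
public class WebSocketClientConfig {

    // a @Component that extends okhttp3.WebSocketListener (onMessage, onFailure, ...)
    private final MyMessageListener listener;

    public WebSocketClientConfig(MyMessageListener listener) {
        this.listener = listener;
    }

    @PostConstruct
    void connect() {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url("wss://example.com/feed") // illustrative third-party endpoint
                .build();
        // OkHttp manages the open connection on its own worker threads,
        // which is what keeps the JVM up without an embedded servlet container.
        client.newWebSocket(request, listener);
    }
}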
@PostConstruct
void handleRequest() {
    Runnable runnable = () -> {
        // serverSocket, requestHandler and getRequestBodyFromInput(...) are
        // fields/helpers defined elsewhere in this class
        while (true) {
            try {
                JSONObject requestBody = getRequestBodyFromInput(serverSocket.accept());
                requestHandler.handleRequest(requestBody);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    };
    new Thread(runnable).start();
}
I started a new thread on which the socket keeps listening, and I don't let that thread die because it loops forever inside the while loop.
You could simply loop forever.
while (true) {}
I have a client class from a third-party jar that needs to be integrated into my WebLogic EAR application. A @Startup @Singleton EJB is created to contain and call the initialization of this 3rd-party client.
But during the initialization of the 3rd-party client, it tried to create some of its own JMS connectors and failed (because there is no connection in the test environment); it then got stuck there and didn't return a client instance.
As a result, the deployment of my EAR application failed with "Error Timed out waiting for completion: Activate State: STATE_DISTRIBUTED" and could not complete a normal deployment to reach the Active state.
I have tried to make the initialization method asynchronous, or the whole singleton EJB asynchronous, but it didn't work out.
@Singleton
@Startup
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public class ClientStartupBean
{
    ThirdPartyClient client = null;

    public ClientStartupBean()
    {
        // some init
    }

    @PostConstruct
    public void initialize()
    {
        try
        {
            Trace.info(TCI, MN, "Start to init 3rd party client");
            client = ThirdPartyClient.getThirdPartyClient(some_init_para_a, init_para_b);
            Trace.info(TCI, MN, "Finished init of 3rd party client");
        }
        catch (Exception e)
        {
            throw new EJBException(e);
        }
    }

    // other stuff
}
I can see 'Start to init...' in the logs, but not the 'Finished init' line that comes right after the 3rd-party client initialization.
What is a proper way to resolve this? Thanks in advance for your help.
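One pattern that often avoids this kind of startup blocking is to delegate the blocking call to an @Asynchronous method on a separate bean, so the @PostConstruct returns immediately and deployment can finish. Note that @Asynchronous only takes effect when the method is invoked through the container proxy, so it has to live on a different bean than the caller. A rough sketch under those assumptions (the helper bean and its names are illustrative, not from the original application):

// ClientInitializer.java
import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class ClientInitializer {

    @Asynchronous
    public Future<ThirdPartyClient> initAsync() {
        // the blocking 3rd-party initialization now runs off the deployment thread;
        // some_init_para_a and init_para_b are the same parameters as in the question
        ThirdPartyClient c = ThirdPartyClient.getThirdPartyClient(some_init_para_a, init_para_b);
        return new AsyncResult<ThirdPartyClient>(c);
    }
}

// ClientStartupBean.java
import java.util.concurrent.Future;

import javax.annotation.PostConstruct;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class ClientStartupBean {

    @EJB
    ClientInitializer initializer;

    private Future<ThirdPartyClient> clientFuture;

    @PostConstruct
    public void initialize() {
        // returns immediately, so the EAR can reach the Active state;
        // callers obtain the client later via clientFuture.get()
        clientFuture = initializer.initAsync();
    }
}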
I have a periodic job that runs every second (this is configurable).
In this job, I first create a connection to Elasticsearch server:
RestHighLevelClient client = new RestHighLevelClient(
RestClient.builder(new HttpHost(address, port, "http")));
Then I check for the existence of a special index called test. If it doesn't exist, I create it first.
GetIndexRequest indexRequest = new GetIndexRequest();
indexRequest.indices("test");

boolean testIndexIsExists = false;
try {
    testIndexIsExists = client.indices().exists(indexRequest, RequestOptions.DEFAULT);
} catch (IOException ioe) {
    logger.error("Can't check the existence of test index in Elasticsearch!");
}

if (testIndexIsExists) {
    // bulk request...
} else {
    CreateIndexRequest testIndex = new CreateIndexRequest("test");
    try {
        testIndex.mapping("doc", mappingConfiguration);
        client.indices().create(testIndex, RequestOptions.DEFAULT);
        // bulk request...
    } catch (IOException ioe) {
        logger.error("Can't create test index in Elasticsearch");
    }
}
And after performing a bulk request of nearly 2000 documents, I close the Elasticsearch client connection:
client.close();
Java High Level REST Client version:
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
<version>6.4.0</version>
</dependency>
My problem is that a bunch of TCP connections get established and are never closed. Over time, these TCP connections exhaust the operating system's available TCP connections.
On the other hand, I'm a bit confused: should the RestHighLevelClient instance be a singleton object for the entire application, or must I create a new instance in every job cycle and close it after that job is done?
The high level client is already maintaining a connection pool for you, so I would use it as a singleton. Constantly creating and closing connection pools is expensive, and the client and underlying HTTP connection pool are thread safe. Also, calling close() on the client just delegates to the Apache HTTP client shutdown() method, so you're at the mercy of how they handle cleanup and releasing resources.
If you're using Spring or some other DI framework, it's easy to create a singleton instance of the client that can be injected as needed. And you can add the call to client.close() as part of the bean shutdown/destroy lifecycle phase.
Quick example using Spring Boot:
@Configuration
@ConditionalOnClass(RestHighLevelClient.class)
public class ElasticSearchConfiguration {

    @Value("${elasticsearch.address}")
    String address;

    @Value("${elasticsearch.port}")
    int port;

    @Bean(destroyMethod = "close")
    public RestHighLevelClient restHighLevelClient() {
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost(address, port, "http")));
    }
}
Note: In this case Spring will automatically detect that the bean has a close method and call it for you when the bean is destroyed. Other frameworks may require you to specify how shutdown should be handled.
RestHighLevelClient should generally be a singleton, unless you have a good reason not to. For example, if your job runs every hour rather than every minute, it might make sense to create a new instance and close it after the job.
If you are sure you are calling close() in all cases (e.g. you haven't missed any exceptions), then my next guess is a bug in the Elasticsearch client.
It looks like they forget to consume the response in the exists call:
https://github.com/elastic/elasticsearch/blob/v6.4.0/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java#L1419
Are you able to test without the exists call?
I have already implemented a Kinesis stream consumer which will run forever, and I want to integrate it into the Spring framework for monitoring and graceful shutdown. But I found I wasn't able to stop the consumer via the HTTP shutdown request. More specifically, only the Spring web app is stopped, but not the consumer. Here's what I did:
I created a main class for spring as follows:
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(Application.class);
        application.addListeners(new ApplicationPidFileWriter("./app.pid"));
        application.run(args);
        System.out.println("I'm Here!!");
    }
}
And at the entry point of the consumer class, I added @EventListener(ApplicationReadyEvent.class) to the startConsumer method:
@EventListener(ApplicationReadyEvent.class)
public static void startConsumer() throws Exception {
    init();
    ...
    int exitCode = 0;
    try {
        worker.run(); // will run forever
    } catch (Throwable t) {
        System.err.println("Caught throwable while processing data.");
        t.printStackTrace();
        exitCode = 1;
    }
    System.exit(exitCode);
}
The consumer successfully started after mvn package && java -jar myJar, but when I use the HTTP shutdown to stop the program, only the Spring app stops. The consumer is still running.
Any idea on how to stop the consumer? Or, more generally, how do I integrate a long-running process into the Spring framework? I've tried the non-web option, but that prevents me from using HTTP requests for monitoring.
Any suggestion will be appreciated!!!
One important thing is that it is not correct to block execution in an EventListener. You should start a thread from the event listener method, and that thread will do the processing for you. So you need to invoke Worker.run in a separate thread. Alternatively, you can mark your event listener as @Async.
Now the problem is how to stop it correctly when the Spring Boot application is stopped.
In order to be able to react to shutdown events in Spring, you can implement SmartLifecycle in your bean.
When stop is invoked you need to stop the Worker. This answer has some good options on how to do that.
Make sure you invoke the Runnable passed to stop() when the worker shutdown is complete. For more details see the SmartLifecycle.stop javadoc.
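Putting that together, a minimal sketch could look like this (assuming worker is the KCL Worker from the question and that your KCL version exposes a shutdown() method that makes run() return; the class and field names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;

@Component
public class KinesisConsumerLifecycle implements SmartLifecycle {

    private final Worker worker; // built the same way as in the question's init()
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private volatile boolean running = false;

    public KinesisConsumerLifecycle(Worker worker) {
        this.worker = worker;
    }

    @Override
    public void start() {
        running = true;
        executor.submit(worker::run); // never block the Spring lifecycle thread
    }

    @Override
    public void stop(Runnable callback) {
        worker.shutdown();   // ask the worker to exit its run() loop
        executor.shutdown();
        try {
            // wait until worker.run() has actually returned before signalling Spring
            executor.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        running = false;
        callback.run();      // tell Spring the asynchronous stop has completed
    }

    @Override
    public void stop() {
        stop(() -> { });
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        return 0;
    }
}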
I'm looking to restart the Spring Boot app. Using the Spring Actuator /restart endpoint works with curl, but I'm looking to call the same function from Java code inside the app. I've tried this code, but it's not working:
Thread thread = new Thread(new Runnable() {
    @Override
    public void run() {
        RestartEndpoint p = new RestartEndpoint();
        p.invoke();
    }
});
thread.setDaemon(false);
thread.start();
You need to inject the RestartEndpoint:
@Autowired
private RestartEndpoint restartEndpoint;

...

Thread restartThread = new Thread(() -> restartEndpoint.restart());
restartThread.setDaemon(false);
restartThread.start();
It works, even though it will log a warning to inform you that this may lead to memory leaks:
The web application [xyx] appears to have started a thread named
[Thread-6] but has failed to stop it. This is very likely to create a
memory leak. Stack trace of thread:
Note to future readers of this question/answer: RestartEndpoint is NOT included in spring-boot-actuator; you need to add the spring-cloud-context dependency.
Get the JSON here:
@Autowired
private HealthEndpoint healthEndpoint;

public Health getAlive() {
    return healthEndpoint.health();
}
Add custom logic
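The usual way to do that is to register your own HealthIndicator bean, which the HealthEndpoint aggregates into the overall status. A minimal sketch (the check itself is a placeholder):

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MyHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean ok = checkSomething(); // placeholder for your own check
        return ok
                ? Health.up().build()
                : Health.down().withDetail("reason", "custom check failed").build();
    }

    private boolean checkSomething() {
        return true; // replace with real logic
    }
}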
I have a Spring web app running in a Tomcat server. In it there is a piece of code, in one of the Spring beans, which waits for a database connection to become available. My scenario is that while waiting for a database connection, if Tomcat is shut down, I should stop waiting for a DB connection and break out of the loop.
private Connection prepareDBConnectionForBootStrapping() {
    Connection connection = null;
    while (connection == null && !Thread.currentThread().isInterrupted()) {
        try {
            connection = getConnection();
            break;
        } catch (MetadataServerException me) {
            try {
                if (!Thread.currentThread().isInterrupted()) {
                    Thread.sleep(TimeUnit.MINUTES.toMillis(1));
                } else {
                    break;
                }
            } catch (InterruptedException ie) {
                logger.error("Thread {} got interrupted while waiting for the database to become available.",
                        Thread.currentThread().getName());
                break;
            }
        }
    }
    return connection;
}
The above piece of code is executed by one of Tomcat's threads, and it does not get interrupted when shutdown is invoked. I also tried to achieve my scenario by using the Spring bean's destroy method, but to my surprise the destroy method was never called. My assumption is that the Spring application context never gets fully constructed - since I have the above waiting loop in the Spring bean - and so when shutdown is invoked the corresponding context close is never invoked.
Any suggestions?
Tomcat defines a life-cycle for web applications (well, it's really part of the common Servlet specification and not just Tomcat-specific, but anyway...).
So there is a way to hook into this process and terminate the loop or whatever.
In Spring it's very easy, because if Tomcat shuts down gracefully, it attempts to "undeploy" the WAR before actually exiting the process. The Servlet specification defines a special web listener that can be registered and is invoked for exactly this purpose (see the javax.servlet.ServletContextListener API for more information).
Now, Spring actually implements such a listener, and once it is invoked, it simply closes the application context.
Once this happens, your @PreDestroy methods will be called automatically. This is already Spring's doing; Tomcat has nothing to do with it.
So, bottom line: define a @PreDestroy method on your bean that sets some flag, and have the loop that is waiting for the connection check that flag and stop.
Of course, everything stated above doesn't really work if you just kill -9 <tomcat's pid>, but in that case the whole JVM stops, so the bean is irrelevant.
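A minimal sketch of that @PreDestroy flag idea (the field and method names are illustrative; getConnection(), MetadataServerException and the loop are abbreviated from the question's code):

import java.sql.Connection;
import java.util.concurrent.TimeUnit;

import javax.annotation.PreDestroy;

public class BootstrapBean {

    private volatile boolean shuttingDown = false;

    @PreDestroy
    public void onShutdown() {
        // called by Spring when the context closes (Tomcat undeploy/shutdown)
        shuttingDown = true;
    }

    private Connection prepareDBConnectionForBootStrapping() {
        Connection connection = null;
        while (connection == null && !shuttingDown && !Thread.currentThread().isInterrupted()) {
            try {
                connection = getConnection(); // same helper as in the question
            } catch (MetadataServerException me) {
                try {
                    Thread.sleep(TimeUnit.MINUTES.toMillis(1));
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        return connection;
    }
}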