JMS - ActiveMQ - Servlet(Remote Server (Apache-ActiveMQ)) and Console Java Program - java

I have to send a message from a Java console program to a servlet on an Apache Tomcat 7.0.42 server using ActiveMQ 5.8.0, send an acknowledgement message back to the program, and keep doing this until the server goes offline.
I am completely new to JMS; I only know servlets, JSPs and listeners, i.e. no frameworks.
I have Eclipse Kepler and JDK 1.7, and I was not able to configure ActiveMQ with Apache Tomcat.
I have read quite a few blogs, but nothing seems to work.
Please guide me on how to go about this problem.
Thank you.

If you are using a servlet-container only (Tomcat), you can create an unmanaged thread like this:
import java.util.Timer;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class MyServletContextListener implements ServletContextListener {

    @Override
    public void contextInitialized(final ServletContextEvent sce) {
        final Timer timer = new Timer();
        // Executes repeatedly (delay = 4000 ms, period = 5000 ms)
        timer.schedule(new ReplyTask(), 4000, 5000);
        sce.getServletContext().setAttribute("replyTaskTimer", timer);
    }

    @Override
    public void contextDestroyed(final ServletContextEvent sce) {
        final Timer timer = (Timer) sce.getServletContext().getAttribute("replyTaskTimer");
        timer.cancel();
    }
}
In the ReplyTask, just read the incoming queue and send something on an outgoing queue (I suggest using two different queues for the ping and the pong). You must cancel the timer in contextDestroyed; otherwise that thread will survive undeployments and redeployments.
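A minimal sketch of such a ReplyTask, assuming a local ActiveMQ broker at tcp://localhost:61616 and two placeholder queue names (incoming.queue and outgoing.queue); creating the connection on every run keeps the example short, a real task would reuse it:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ReplyTask extends java.util.TimerTask {

    @Override
    public void run() {
        try {
            // Broker URL and queue names are placeholders
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Read one message from the incoming ("ping") queue, waiting at most one second
            MessageConsumer consumer = session.createConsumer(session.createQueue("incoming.queue"));
            Message request = consumer.receive(1000);

            if (request instanceof TextMessage) {
                // Send the acknowledgement on the outgoing ("pong") queue
                MessageProducer producer = session.createProducer(session.createQueue("outgoing.queue"));
                producer.send(session.createTextMessage("ACK: " + ((TextMessage) request).getText()));
            }

            connection.close();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}

The console program would do the mirror image: send on incoming.queue and receive from outgoing.queue.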
Note: If you were using an application server (JBoss for example), you could do that using a Message driven bean (MDB). More elegant and concise, and the threading is managed by the application server. The extra benefit of using an application server like JBoss is the integrated HornetQ JMS provider.
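For comparison, a rough sketch of the MDB variant (the destination name is a placeholder, and the exact activation properties depend on the JMS provider):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// The container manages the threading and delivers each queue message to onMessage()
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/incoming.queue")
})
public class ReplyMessageBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // read the request here and send the acknowledgement on the outgoing queue
    }
}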

Related

Spring boot thread hanging?

I have a simple (that's what I think) Spring Boot application. There are 4 layers:
Rest Controller
Application Service (called by the Rest Controller)
Domain Service (called by Application Service. It connects to the database - repository layer)
Adapter Service (called by Application Service for outbound calls via Hystrix)
Now the problem is that it can only handle a maximum of 15 parallel calls. If any additional REST API request arrives while these calls are being processed, it makes it to the Application Service layer and then waits. Once one of those 15 parallel calls returns, the new request proceeds to call the Domain Service layer and return.
I tried multiple things:
Increasing spare threads for the server in application.properties file
server.tomcat.min-spare-threads=1000
server.tomcat.max-connections=1000
server.tomcat.max-threads=1000
Once I do this, I see the # of http-nio-* threads increase to 1000 but the hanging issue is not fixed.
I found this snippet online to customize the tomcat container but it didn't help either:
@Bean
public WebServerFactoryCustomizer<TomcatServletWebServerFactory> containerCustomizer() {
    return new WebServerFactoryCustomizer<TomcatServletWebServerFactory>() {
        @Override
        public void customize(TomcatServletWebServerFactory factory) {
            factory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
                @Override
                public void customize(Connector connector) {
                    Arrays.stream(connector.getProtocolHandler().findUpgradeProtocols())
                          .filter(upgradeProtocol -> upgradeProtocol instanceof Http2Protocol)
                          .map(upgradeProtocol -> (Http2Protocol) upgradeProtocol)
                          .forEach(http2Protocol -> http2Protocol.setMaxConcurrentStreamExecution(1000));
                }
            });
        }
    };
}
I tried configuring the thread pool via code
#Bean(name = "taskExecutor")
public TaskExecutor threadPoolTaskExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(200);
executor.setMaxPoolSize(300);
executor.setQueueCapacity(300);
executor.setThreadNamePrefix("anniversary");
executor.initialize();
System.out.println("******* name " + executor.getThreadNamePrefix());
System.out.println("********** core pool size " + executor.getCorePoolSize());
return executor;
}
But none of this helps, and I believe the issue is not with the number of threads but elsewhere, since the request is not able to go from one service to another. There are hundreds of http-nio-* threads in waiting state, and when a new request comes in, it is assigned its own thread; I can see that in debug mode.
Any pointers, help, or tips are much appreciated. What resource is required for service-to-service invocation by Spring Boot?
I believe your observation is right; it's most likely not Tomcat that is the bottleneck here. From what you write, I would rather look at the domain service. Is the domain service communicating with the database or talking to something else over the network (for example over HTTP)?
If you talk to a database in there, check Spring's datasource configuration. There will be a database connection pool with a limited maximum number of concurrent connections to the database. Once these connections are all in use, threads that want to talk to the DB will block until one of the connections becomes free again.
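For example, with the Hikari pool that recent Spring Boot versions use by default, the maximum number of concurrent connections can be raised in application.properties (the value 50 is only an illustration; the right size depends on your database):
spring.datasource.hikari.maximum-pool-size=50
If all connections are checked out, additional request threads simply wait in the pool, which would look exactly like the hanging you describe.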
Similar connection pools are in place for many other things that talk over the network (e.g. the Apache HTTP client also has a configurable connection pool).
That's where I would look next.
Cheers,
Matthias

Perform route Shutdown Logic with apache camel

I've recently started playing with Apache Camel, and one of the things I've been having issues with is properly performing shutdown logic on selected routes. Since the shutdown logic would vary between routes, Camel's RoutePolicy made the most sense. Here's an example of what I'm trying to do.
public class ProcessingRouteBuilder extends RouteBuilder {

    private ProducerTemplate prodTemplate;

    public ProcessingRouteBuilder(ProducerTemplate aProdTemplate) {
        prodTemplate = aProdTemplate;
    }

    @Override
    public void configure() {
        from("direct://processing")
            .routePolicy(new RoutePolicySupport() {
                @Override
                public void onStop(Route route) {
                    super.onStop(route);
                    prodTemplate.sendBody("direct://shutdownRoute", "msg");
                }
            })
            .process(ex -> { /* Do stuff */ });

        from("direct://shutdownRoute")
            .log("Running shutdown A route body - ${body}");
    }
}
The shutdown is done as described in http://camel.apache.org/how-can-i-stop-a-route-from-a-route.html. The ProducerTemplate comes from the primary CamelContext (I read that it is good practice to create one ProducerTemplate per context).
Running this gives me a DirectConsumerNotAvailableException. I've tried seda and vm (I don't plan to interact with multiple contexts, but I gave it a shot anyway); neither throws an exception, but the shutdown routes are never hit. Some questions I have:
I might be using the Producer Template wrong? It doesn't look like it's creating an exchange.
Can I even use the ProducerTemplate once the shutdown hook has been initiated? I'm not sure how Camel performs the shutdown, but it makes sense that it wouldn't allow new messages to be sent, or that the shutdown route might not even be available at the time of sending.
One thing to note, which I'm not handling here, is ensuring that the shutdown route runs only after the processing route has finished processing all messages in its queue. I'm not entirely sure whether the onStop() method is called after there are no more in-flight messages, and if not, how to enforce that.
I figure another approach is to use when/choice at the beginning of each route and send some sort of shutdown notifier or message, but this seems a little clunkier.
Thanks guys!
To programmatically shut down a route you can also use the Control Bus EIP.
However, the "stop" logic is not entirely clear: you want to send a message to the shutdownRoute when the processing route stops, but if the stop happens because you are shutting down the Camel context, the shutdownRoute may already have been stopped.

WildFly: have an ever-running process in the background

I'm making an HTML5/JS game that will have online capabilities, and for my backend I've decided to use a WildFly server. The client will communicate with the server via WebSockets.
I intended for my WildFly server to also be in charge of game logic decisions such as moving NPCs. My plan was to have a @Startup bean that would run a server game loop to handle this logic. The server loop would then talk to the server endpoint via HornetQ. My server endpoint and server loop look like this:
ServerEndPoint
@ServerEndpoint(value = "/game/{user-id}")
public class GameEndpoint {

    @Inject
    GameManager gameState;
    GameWorld gameWorld;
    Player p;

    private Logger logger = Logger.getLogger(this.getClass().getName());

    @OnOpen
    public void onOpen(Session session) {
        // do stuff
    }

    @OnMessage
    public void onMessage(Session session, String message) {
        // do stuff
    }

    @OnClose
    public void onClose(CloseReason reason) {
        // do stuff
    }

    @OnError
    public void error(Throwable t) {
        // do stuff
    }
}
GameWorld
@Startup
@Singleton
public class GameWorld {

    @Inject
    GameManager gameState;

    private Logger logger = Logger.getLogger(this.getClass().getName());

    @PostConstruct
    public void init() {
        gameloop();
    }

    private void gameloop() {
        while (true) {
            logger.info("This is a test!");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    @PreDestroy
    public void terminate() {
        // do stuff
    }
}
The issue with this is that the server loop freezes everything, as it is an infinite loop (for instance, if I try to access the HTML web page I get a 404). Obviously this could be solved if the server loop ran on its own separate thread, but after doing some research it seems threading in JBoss is very difficult, as it's hard to know what dependencies to inject, etc.
Can anyone shed some light on how I can solve this issue? Any help on the matter would be amazing.
What you have encountered has to do with what Java EE is and what it is not: Java EE is optimized for handling many concurrent, short-lived requests, each of which (usually) handles a single transaction. The containers do that very well, particularly with stateless beans, but also with stateful beans (cluster replication etc.). As such, Java EE might be well suited to process the requests coming from your HTML5/JS clients and feed them to the messaging infrastructure. Java EE is not, however, designed for long-running, thread-blocking background processes like yours.
FWIW, another issue you have not yet encountered: even if you could get that one fixed, next you will run into the transaction timeout on your @PostConstruct method.
I think you are better off moving the game engine out of the Java EE stack. You already mentioned you plan to use HornetQ, so why not put the game engine in a simple standalone application that receives messages from HornetQ and feeds replies back into HornetQ?
Another option might be a dedicated Java game server engine, see, e.g., this question and its accepted answer on programmers.stackoverflow.com. (Update: it seems the "RedDwarf Server" project mentioned in that answer was discontinued 3 years ago).
If you absolutely want to use the Java EE environment, I suggest you use a TimerService instead. Note, however, that this also requires that your game loop calculation is quick and guaranteed to finish before the next timeout is scheduled, otherwise the container will skip the scheduled invocation (with a "still running" message or similar).
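A minimal sketch of that variant, assuming a one-second tick is acceptable for the game loop (the interval and method names are illustrative):

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timeout;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Startup
@Singleton
public class GameWorld {

    @Resource
    private TimerService timerService;

    @PostConstruct
    public void init() {
        // Fire every second instead of blocking in an endless loop;
        // the TimerConfig makes the timer non-persistent
        timerService.createIntervalTimer(0, 1000, new TimerConfig(null, false));
    }

    @Timeout
    public void tick() {
        // One iteration of the game loop; it must finish well before the next tick
    }
}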
Finally, let me mention that if I were to start a new game server today, I would definitely take a look at Akka, Node.js or similar projects that support "reactive" programming.

Java TCPIP EJB Threads

I am developing a TCP/IP application where the client will send information to a specified port and the server will listen on that port. I would like to achieve the following:
Reconnect to the client/port to see whether it is active after a specified time period.
I have the below code implemented:
@Stateless
@Local
public class Listener implements ConnectionListener {

    public void listen() throws Exception {
        ServerSocket serverSocket = new ServerSocket(somePort);
        Socket socket = serverSocket.accept();
        while (!socket.isClosed()) {
            // ...
        }
    }
}

public interface ConnectionListener {
    public void listen() throws Exception;
}
How can this be achieved with EJB technology? I know the while loop is wrong, but what should go in its place? I have two approaches:
Wait for a time period to reconnect
Thread
However, I do not wish to use the Thread approach.
I know that in EJB there are things such as the EJB timer. What would be the best way to implement this, and how could it be done in the code? Could you help me change the while loop to do what I want it to do?
Also, the client has no API to call on this application. How can this instance be triggered on startup of the application server so that messages are listened to? Usually this could be achieved through a servlet, but I don't think that will work here. Any ideas?
This kind of functionality is at the "edge" of the types of things you would do in EJB. Typically, EJBs do not directly do network-type activity, especially long-term listening.
What you really want is an EJB singleton. This is kind of like a standalone instance which can do more advanced things like custom concurrency/threading and (relevant to you) start a custom socket listener.
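One possible shape of that idea, sketched with a startup singleton handing the blocking accept loop to the container's default ManagedExecutorService (Java EE 7+); the port number and class names are illustrative:

import java.net.ServerSocket;
import java.net.Socket;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.enterprise.concurrent.ManagedExecutorService;

@Startup
@Singleton
public class SocketListenerBean {

    @Resource
    private ManagedExecutorService executor;

    @PostConstruct
    public void startListening() {
        // Run the blocking accept loop on a container-managed thread,
        // so application startup is not blocked
        executor.submit(() -> {
            try (ServerSocket serverSocket = new ServerSocket(4444)) {
                while (true) {
                    Socket socket = serverSocket.accept();
                    // Hand each accepted connection off for processing
                    executor.submit(() -> handle(socket));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }

    private void handle(Socket socket) {
        // Read from / write to the socket here
    }
}

Because the singleton is marked @Startup, the listener starts with the application server, which also answers the question about triggering it without a client call.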

Polling data from a server using RMI and Spring

I am new to RMI and Spring and need a little help with a feature we are implementing.
We are creating chat software in Java and want to use RMI with Spring.
I can set up a client-server interaction fairly easily using RmiServiceExporter, interfaces, etc.
The problem I can't get my head around is that the client needs to poll data from the server; we need to keep checking for new messages.
We can't push data from the server, for other reasons.
How do I go about setting up RMI with Spring so that the client polls data from the server? I have looked at callbacks, but those involve pushing from the server!?
Is there a way to do this? Let me know if you need me to explain this further.
RMI is just a transport protocol used for client-server communication. On the client side, once your RmiProxyFactoryBean has been properly defined and initialized in the Spring container, where, when and how to use this service bean is totally up to the developer. For a polling implementation, we usually use a ScheduledThreadPoolExecutor to schedule the RMI call at a given time interval, for example:
public class ChatClient {

    // Defined and wired as RmiProxyFactoryBean in applicationContext.xml
    private ChatService chatService;

    private ScheduledExecutorService scheduleTaskService;

    // ... ...

    // At some point during the chat application's running life cycle:
    scheduleTaskService = Executors.newScheduledThreadPool(5);

    // This schedules the polling task to run every 2 minutes:
    scheduleTaskService.scheduleAtFixedRate(new Runnable() {
        public void run() {
            // Poll the server using an RMI call:
            chatService.poolingData();
        }
    }, 0, 2, TimeUnit.MINUTES);

    // ... ...
}
For a more enterprise-grade solution, we usually use Quartz; check out this blog post as a live example.
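Since that post is not reproduced here, a hedged sketch of the same two-minute schedule with the Quartz 2.x API (class and job names are illustrative, and the job body is a placeholder):

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzPollingExample {

    // The job that performs one polling call
    public static class PollingJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // call the RMI-backed chat service here
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        // Fire immediately and then every 2 minutes, forever
        scheduler.scheduleJob(
                JobBuilder.newJob(PollingJob.class).withIdentity("pollingJob").build(),
                TriggerBuilder.newTrigger().startNow()
                        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                                .withIntervalInMinutes(2)
                                .repeatForever())
                        .build());
    }
}

In a Spring application the Scheduler would typically be created through Spring's SchedulerFactoryBean rather than StdSchedulerFactory.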
