All web containers occupied by one app that depends on 2nd app - java

We have an application on our WebSphere Application Server that calls a web service of a second application which is deployed on the same app server.
There are 100 available web containers (threads).
At times when there are many active users, application 1 allocates all available web container threads. When application 1 tries to call the web service (application 2) there are no free threads, so application 1 never finishes and therefore the whole system hangs.
How can I solve this? Is it possible to restrict the web container thread count per application, e.g. to permit application 1 to use only 50% of the available threads?
One solution would be to add code to application 1 that tracks the number of requests being processed simultaneously. But I'd like to avoid that if possible, because I think it is very error prone. Earlier we used the synchronized keyword, but that allows only one request at a time, which caused even bigger problems.
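If you did go the in-application route, a java.util.concurrent.Semaphore caps concurrency at a chosen limit without serializing everything the way synchronized does. A minimal sketch (the limit, the timeout values, and the servlet wiring are assumptions, not something WebSphere provides):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class RequestThrottle {
    private final Semaphore permits;

    public RequestThrottle(int maxConcurrent) {
        // Fair ordering so waiting requests are served roughly FIFO.
        this.permits = new Semaphore(maxConcurrent, true);
    }

    // Runs the work if a permit frees up within the timeout; otherwise returns
    // false so the caller can answer with 503 instead of hanging forever.
    public boolean tryHandle(Runnable work, long timeoutMs) throws InterruptedException {
        if (!permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
            return false;
        }
        try {
            work.run();
            return true;
        } finally {
            permits.release();
        }
    }
}
```

A servlet filter around application 1's entry points could delegate to tryHandle with, say, 50 permits, leaving the remaining web container threads free for the web service call.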

It could be possible by defining a separate transport chain and thread pool.
I don't have the web console in front of me, so here are the steps in rough order:
create a separate thread pool for your SOAP service app
create a separate web transport chain on a new port, e.g. 9045
associate that thread pool with the transport chain
create a new virtual host with host alias *:9045
map your SOAP service app to that virtual host
If you access the app via port 9045, it will use its own separate thread pool.
Concerns:
if it is only local access (from one app to the other), then you just access it via localhost:9045 and you are good to go
if your SOAP service ALSO needs to be accessible from outside, e.g. via the plugin on the default HTTPS port (443), you would need to create a different DNS hostname so it can be associated with your SOAP service app, e.g. soap-service.domain.com (and then you use that hostname in the host alias instead of *). In that case the plugin should use port 9045 for transport as well, but I don't have an environment at hand to verify that.
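On the calling side nothing changes except the endpoint. A sketch of application 1 calling the co-deployed service on the dedicated port (9045 is the example port from the steps above; the path and timeout values are illustrative), with explicit timeouts so application 1 fails fast rather than hanging if things do saturate:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapClientSketch {
    // Call the co-deployed SOAP service via its dedicated transport-chain port,
    // e.g. "http://localhost:9045/soap-service/MyService", so the request is
    // served by the separate thread pool rather than the shared default one.
    static int call(String endpoint) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setConnectTimeout(5000);  // ms to establish the connection
        conn.setReadTimeout(10000);    // ms to wait for the response
        try {
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}
```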
I hope I didn't complicate it too much. ;-)


App with Event logger on port:8080 listening calls from API port:8090 in SpringBoot

I'm trying to create an app with a notification service that fires whenever a call is made to an API.
Is it possible to create a logger on port 8080 so that, when the app runs on the server, it listens to an API running on another server?
Both applications run on the local machine for testing purposes, using Docker.
So far I've been reading https://www.baeldung.com/spring-boot-logging in order to implement it, but I'm having problems understanding the path mapping.
Any ideas?
First let's name the two applications:
API - the API service that you want to monitor
Monitor - which wants to see what calls are made to the API
There are several ways to achieve this.
a) Open up a socket on Monitor for inbound traffic. Communicate the IP address and socket port manually to the API server, have it open up the connection to the Monitor, and send some packet of data down this "pipe". This is the lowest-level approach: simple, but very fragile, as you have to coordinate the starting of the services and decide on a "protocol" for how the applications exchange data.
b) REST: Create a RESTful controller on the Monitor app that accepts a POST. Communicate the IP address and port manually to the API server. Initiate a POST request to the Monitor app when needed. This is more robust, but still suffers from needing careful starting of the servers.
c) Message queue: Install a message queue system like RabbitMQ or ActiveMQ (available in Docker containers). The API server publishes a message to a queue; the Monitor subscribes to the queue. Much more robust. It still requires each application to be told the address of the MQ server, but now you can stop/start the two applications in any order.
d) Logging: The article you linked is a good starter on Java logging. Most use cases log to a local file on the local server. Some logging backends send logs to remote destinations (I don't think that article covers them), and there are ways of adding your own custom receiver of this log traffic. With this option, the API side would use ordinary logging code with no knowledge of the downstream consumption of the logging, while your Monitor app would need to integrate tightly with a particular logging system.
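Option (b) can be sketched without any framework using the JDK's built-in HttpServer (the /events path and the plain-text payload are my assumptions; in Spring Boot, a @RestController with a @PostMapping would play the same role as the server half):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Monitor {
    final List<String> events = new CopyOnWriteArrayList<>();
    HttpServer server;

    // Monitor side: accept POSTed notifications on /events and record them.
    void start(int port) throws IOException {
        server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/events", ex -> {
            String body = new String(ex.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            events.add(body);
            ex.sendResponseHeaders(204, -1); // no response body needed
            ex.close();
        });
        server.start();
    }

    // API side: after serving a call, POST a short description to the Monitor.
    static void notifyMonitor(String monitorUrl, String event) throws IOException {
        HttpURLConnection c = (HttpURLConnection) new URL(monitorUrl).openConnection();
        c.setRequestMethod("POST");
        c.setDoOutput(true);
        try (OutputStream os = c.getOutputStream()) {
            os.write(event.getBytes(StandardCharsets.UTF_8));
        }
        c.getResponseCode(); // actually send the request and wait for the ack
        c.disconnect();
    }
}
```

As the answer notes, this still requires telling the API side the Monitor's address up front, which is the weakness option (c) removes.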

Why should a 12 Factor app be self contained?

In the 12 Factor article on Port Binding (http://12factor.net/port-binding) there is a requirement that every app be self-contained and not have a runtime injected, e.g. Tomcat. Why is this advised? What are the advantages of self-contained apps for microservices?
To understand rules around port binding and self-contained apps, it's helpful to view things from the perspective of the platforms designed to run 12-factor apps, like Heroku or Deis.
These platforms are scaling applications at the process level. When processes are scaled up, the platform tries to place these additional workers behind the routing mesh so they can start serving traffic. If the app is not self-contained and, for example, is tightly coupled to a front-end Apache server using mod_jk -- it is not possible to scale by running more isolated worker processes.
Port binding exists to solve the problem of "port brokering" at the platform level. If every application worker listened on port 80 there would be conflicts. To solve this, port binding is a convention whereby the application listens on a port the platform has allocated -- and which is passed in as a $PORT environment variable. This ensures a) the application worker listens on the right port and b) the platform knows where to route traffic destined for that worker.
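A minimal self-contained worker illustrating the convention, using the JDK's built-in HttpServer (the "ok" body and the 8080 fallback are placeholders): it reads $PORT, binds to it, and serves traffic with no injected runtime.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PortBoundApp {
    // Bind to the given port and serve a trivial response on "/".
    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        // The platform allocates the port and passes it in as $PORT;
        // fall back to 8080 for local development runs.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        start(port);
    }
}
```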
I think it's because it gives you a great deal of flexibility when the time comes to scale up your app. If you use Tomcat, you'll have to copy your .war, drop it inside another Tomcat, and then load balance your requests to either of them.
Instead, if your app has a self-contained HTTP server, you could just run another instance on another port and forget about all that Tomcat stuff. You would still have to load balance your requests across your app instances, but it seems more straightforward.

How to get Nginx advantages with a Java web service

Over the last few years I have used the Apache httpd server for my servers.
As I understand it, the biggest advantage of using Nginx is that Apache opens a different thread for each HTTP request, which can load my server very quickly, while Nginx uses another technique (event-driven) to get the maximum out of my server's memory and hardware.
So far so good.
I'm building a new web service which I expect to have lots of HTTP traffic, so I've decided to use Nginx.
As a good Java programmer I like Java more than PHP, but I have a conceptual problem using it in my case:
In all the posts I've found, the way to use Java with Nginx is to wrap the application as Nginx + Tomcat (or another Java server) + Java. So, if I understand correctly, I will not get the Nginx advantage, since Tomcat will open a new thread for each request to serve the Java web service.
Questions:
Did I understand this correctly?
Does using Nginx with PHP open a new process for each request, rather than a new thread?
You understand it correctly. In this case, nginx acts as a reverse proxy and Tomcat works as an application server. Most of the time, the bottleneck appears at the application level: the application server or the application itself.
PHP uses processes, not threads, to execute requests: each request needs a php-cgi process to deal with it, and only when the request finishes is the process released to handle another request. php-fpm usually pre-forks many child processes, like a pool, and we need to calculate the size of this pool according to the real QPS and the state of the machine.
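As a rough rule for sizing that worker pool, Little's law gives a starting point (the numbers here are illustrative, not from the question):

```
busy workers ≈ arrival rate × average service time
e.g. 200 requests/s × 0.1 s/request ≈ 20 busy php-fpm workers
```

The pool then needs some headroom above that estimate, bounded by how much memory each worker consumes on the machine.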
Yes, you got it correctly: what you're doing here is putting an extra layer above Tomcat, so you'll not get the full advantage. The one advantage you do get is serving assets (images and static files) directly, without passing those requests to the backend, which might give a slight edge.
Why PHP has this advantage: when using nginx, instead of running PHP as a module of Apache (mod_php), we install a separate server, php-fcgi or php-fpm, so it's independent of Apache's method of spawning workers or threads.
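For reference, the split described above usually looks something like this in nginx (paths, ports, and hostnames are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    # Static assets are served directly by nginx's event loop,
    # never touching a Tomcat thread.
    location /static/ {
        root /var/www/myapp;
    }

    # Everything else is proxied to Tomcat, which still uses
    # a thread per request behind the proxy.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```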

Multiple RMI Connections from single JVM using Weblogic RMI over T3

I'm attempting to use JMeter with some custom samplers to load test a Java application that is normally accessed via Weblogic RMI over T3 from a Swing-based GUI. Intention is to load the application server and measure the response time of particular transactions by simulating many concurrent user connections/interactions (up to ~500).
I have implemented a couple of JMeter samplers that acquire a RMI connection to the server via a JNDI lookup and that works fine. However I've noticed that, even if I acquire two contexts on different threads using different credentials, only one T3 connection is opened.
Is there a way to effectively create multiple independent connections to the app server from within one JVM, or will I be forced to run one user per JVM?
App is running in WLS 11g, currently on Hotspot 32bit but will be moving to JRockit 64bit.
Thanks.
You are running up against RMI connection pooling. There are ways to turn it down (see the RMI Home Page and the Properties pages linked from it), but it's still an unrealistic test for other reasons, such as port exhaustion on the client host. You should really look at using as many client hosts as possible, with as many separate JVMs as possible.

Tomcat's behaviour when two simultaneous requests come from same ip

When I try to run two wget commands simultaneously against my server (http://myserver), Tomcat appears to allocate two threads to process them. But I believed that when Tomcat receives two requests simultaneously from the same IP address, it would not create a new thread for the second request, since it considers both requests to come from the same session.
If I want to check whether both threads are the same or different, is using Thread.getId() the only way? I think this id may be reused for new threads. Is there any unique property of a thread, other than the thread id, that can be used to check its identity?
I suggest never relying on threads to identify their source. The Servlet spec makes no guarantees about threads, and newer Servlet implementations make use of NIO. You are skating on thin ice.
Web servers will almost always assign multiple threads (or processes) to multiple simultaneous requests, since the client can work faster when it does not have to wait for each response.
Newer servers may use asynchronous IO (nio), however, and a single thread can simultaneously serve many clients.
Yes, Thread.getId() is a way of identifying threads.
Session IDs are the mechanism used to identify requests from a single client.
The IP address is not a good way to do that, since multiple machines can expose the same IP when hiding behind a NAT.
I believe Tomcat will always create a new thread of execution irrespective of whether the request comes from the same IP or not. If the client application running on that particular IP has a mechanism to send across the session id, then Tomcat will simply associate the same session context with the request thread [making it stateful].
In your case, you'll need to customise wget to hold on to the session id [the Tomcat web app might send it across in a cookie or as a URL parameter, jsessionid]. wget will then need to send it back with subsequent requests [rewrite the URL to include the jsessionid parameter, or exchange cookies]. This way Tomcat will be able to treat each request as coming from a unique client instance and associate state with it.
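The session-id mechanics (independent of wget) can be demonstrated in plain Java with the JDK's HttpServer and a cookie-aware client. The JSESSIONID name mirrors Tomcat's convention; everything else here is a toy, and note that a worker thread still handles each request regardless of session:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.UUID;

public class SessionDemo {
    // Toy server: issues a JSESSIONID-style cookie on the first request (201)
    // and answers 200 once the client presents the cookie back. Either way,
    // a thread from the pool processes the request.
    static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            boolean hasSession = ex.getRequestHeaders().containsKey("Cookie");
            if (!hasSession) {
                ex.getResponseHeaders().add("Set-Cookie", "JSESSIONID=" + UUID.randomUUID());
            }
            ex.sendResponseHeaders(hasSession ? 200 : 201, -1);
            ex.close();
        });
        server.start();
        return server;
    }

    // Client helper: with a CookieManager installed via CookieHandler.setDefault,
    // HttpURLConnection stores Set-Cookie and replays it on later requests.
    static int request(int port) throws IOException {
        HttpURLConnection c = (HttpURLConnection)
                new URL("http://localhost:" + port + "/").openConnection();
        return c.getResponseCode();
    }
}
```

The same replay behaviour is what a session-aware wget would need to reproduce by saving and resending the cookie.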
