I am having an issue with the integrated Google reCAPTCHA: on submitting the request I get a "gateway timeout" exception. This is mainly because our application server (JBoss 4.2) is not able to receive a valid response from the Google server.
Google's reCAPTCHA documentation states that Google occasionally (not frequently) changes the IP addresses of its servers, and that this causes issues with Java's JVM, which caches DNS lookups. To fix this we need to refresh the DNS cache.
One suggestion given by Google is to set the DNS TTL to 30 seconds (which is definitely an overhead on our application servers, refreshing the DNS cache every 30 seconds), or to restart the JVM (which is not possible, as it is a production server).
If anybody could suggest how to clear the JVM DNS cache manually, without restarting the JBoss application server, or by means of a console, it would be most appreciated.
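For reference, my understanding of the TTL suggestion in code (a minimal sketch, assuming the standard networkaddress.cache.* security properties, which only take effect if set before the JVM performs its first lookup):

import java.security.Security;

public class DnsCacheTtl {
    public static void main(String[] args) {
        // Cache successful lookups for 30 seconds instead of the default
        // (forever, when a security manager is installed).
        Security.setProperty("networkaddress.cache.ttl", "30");
        // Do not cache failed lookups at all.
        Security.setProperty("networkaddress.cache.negative.ttl", "0");
        // ... start the rest of the application after this point,
        // before any name lookup happens.
    }
}

That still requires a change at startup, though, which brings me back to the question of doing it on a live server.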
We have a Spring Java app that uses EWS to connect to our on-prem Exchange 2016 server and pull emails via streaming. Every 30 minutes a new 30-minute subscription is made (via a new thread); we assume the old connection just expires.
When one instance is running in our environment it works perfectly fine, but when two instances run, after some time one instance will eventually start throwing an error:
You have exceeded the available concurrent connections for your account. Try again once your other requests have completed.
It looks like we are hitting throttling. I found that the Exchange server's config is:
EWSMaxConcurrency=27, MaxStreamingConcurrency=10,
HangingConnectionLimit=10
Our code previously didn't explicitly close connections or unsubscribe (it ran fine that way with a single instance). We tried adding both, but the issue still persists, and we noticed the close method of StreamingSubscriptionConnection throws an error. The team that handles the Exchange server can find errors referencing the exceeded-connection-count error above, but nothing relating to the close-connection error:
...[m.e.w.d.n.StreamingSubscriptionConnection.close(349)]: java.lang.Exception: microsoft.exchange.webservices.data.notification.StreamingSubscriptionConnection
Currently we don't have much ability to make changes on the Exchange server side. I'm not familiar with SOAP messages, but I was planning to look into how to monitor them, to see what inbound and outbound messages there are, for some insight.
For the service I set service.setTraceEnabled(true) and service.setTraceFlags(EnumSet.allOf(TraceFlags.class)).
However, I only see trace messages in the console when an email arrives. I don't see any messages during startup, when a subscription/connection is created.
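In case it matters, this is roughly how I'm wiring the tracing up, with a custom listener instead of the default console writer (a sketch; the import paths are from ews-java-api 2.0 and may differ in other versions):

import java.util.EnumSet;
import microsoft.exchange.webservices.data.core.ExchangeService;
import microsoft.exchange.webservices.data.core.ITraceListener;
import microsoft.exchange.webservices.data.core.enumeration.misc.TraceFlags;

public class TraceSetup {
    static void enableTracing(ExchangeService service) {
        service.setTraceEnabled(true);
        service.setTraceFlags(EnumSet.allOf(TraceFlags.class));
        // Route all trace output through our own listener so the
        // subscription-related SOAP traffic can be captured and logged too.
        service.setTraceListener(new ITraceListener() {
            public void trace(String traceType, String traceMessage) {
                // traceType is e.g. "EwsRequest" or "EwsResponse"
                System.out.println("[" + traceType + "] " + traceMessage);
            }
        });
    }
}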
Can anyone provide advice on how I can monitor these subscription-related messages?
I tried using SoapUI, but I'm having difficulty applying our server's WSDL. I considered using the Tunnelij plugin for IntelliJ, but I'm not too familiar with how to set that up either.
My suspicion is that there is some intermittent latency issue on the Exchange server side; perhaps response messages are not coming back in a timely manner, and this is breaking the subscriptions. I presume that if I monitor these SOAP messages, I should see more than 10 subscribe requests before that error appears.
The EWS logs on the CAS (Client Access Server) should have details about the throttling issue. Are you using impersonation in your application? If you are not using impersonation, the concurrent connections are charged against the account you are authenticating with; with impersonation, they are charged against the account you are impersonating. The difference here is that a single user can have no more than 10 streaming subscriptions (unless you modify the web.config), whereas if you are using impersonation you can scale your application to thousands of users; see https://github.com/MicrosoftDocs/office-developer-exchange-docs/blob/main/docs/exchange-web-services/how-to-maintain-affinity-between-group-of-subscriptions-and-mailbox-server.md
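For example, switching the service to impersonate the target mailbox before subscribing looks roughly like this (a sketch; the package paths are from ews-java-api 2.0 and the mailbox address is a placeholder):

import microsoft.exchange.webservices.data.core.ExchangeService;
import microsoft.exchange.webservices.data.core.enumeration.misc.ConnectingIdType;
import microsoft.exchange.webservices.data.misc.ImpersonatedUserId;

public class ImpersonationSetup {
    static void impersonate(ExchangeService service, String mailbox) {
        // The account authenticating here needs the ApplicationImpersonation
        // RBAC role; subscriptions are then charged against the impersonated
        // mailbox instead of the service account.
        service.setImpersonatedUserId(
                new ImpersonatedUserId(ConnectingIdType.SmtpAddress, mailbox));
        // Per the affinity article above, pin requests to the mailbox server.
        service.getHttpHeaders().put("X-AnchorMailbox", mailbox);
    }
}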
I am using Google Cloud SQL with the db-f1-micro machine type for a project deployed on Google App Engine standard environment (Java). Sometimes I get the error below while connecting to the database. This scenario happens when opening the same page in multiple tabs at the same time (load/performance testing).
The source code used in the project is from https://github.com/GoogleCloudPlatform/appengine-cloudsql-native-mysql-hibernate-jpa-demo-java
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
The metrics from the App Engine logs show the errors and the MySQL usage; you can easily see that MySQL active connection usage is below 100%.
Please suggest what I am doing wrong.
Looks like this thread is old, but we have this problem in our test environment. It happens frequently and repeatedly after our GAE test system has not been used for a while; the first time someone tries to access the app, we get one or two of these.
I assume it has something to do with GAE spinning up a server instance, although I'm not sure why this happens with the db. I don't think we have any connection pooling (specifically because GAE can make our app go dormant).
And with the app just starting up, we can't be exceeding any connection limits.
From https://cloud.google.com/appengine/docs/standard/java/cloud-sql/pricing-access-limits
"Each App Engine instance cannot have more than 12 concurrent connections to a Google Cloud SQL instance."
How many requests are being sent to App Engine, and how many connections does the app instance open for each of those requests?
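If connections are opened per request and not always closed, they can creep toward that 12-connection limit even when the database itself looks idle. A minimal sketch of a bounded per-request pattern (the connection URL, table, and class names here are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class GreetingDao {
    // Placeholder URL; on App Engine standard the native MySQL URL has the
    // form jdbc:google:mysql://project:region:instance/database
    private static final String URL =
            "jdbc:google:mysql://my-project:us-central1:my-instance/mydb?user=root";

    public int countGreetings() throws SQLException {
        // try-with-resources guarantees the connection is closed even on
        // error, so one instance can't creep past the connection limit.
        try (Connection conn = DriverManager.getConnection(URL);
             PreparedStatement stmt =
                     conn.prepareStatement("SELECT COUNT(*) FROM greetings");
             ResultSet rs = stmt.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}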
I am completely new to AWS, and I have successfully deployed my Java program to Elastic Beanstalk.
For the first 30 minutes, or sometimes even 6 hours, it works pretty fine.
But later I always get a message:
"Environment health has transitioned from Ok to Warning. 1 out of 1 instances are impacted. See instance health for details."
or
"Environment health has transitioned from Ok to Warning. 100.0 % of the requests are failing with HTTP 5xx."
Then my site stops working, and when I try to access it through my browser it says:
"Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /.
Reason: Error reading from remote server"
If I run my program on my computer, it works fine without errors, so I think the problem is in my AWS environment.
I am using a free t2.micro instance; does it have some limit on processing power per hour, or something like that?
If not, how can I find out what is going wrong with my environment or instance?
The HTTP 5xx error is coming from your application server and is most probably not an AWS issue. Please check your server's logs.
Yes, every server (micro or the biggest server in the world) has some limits, but I don't think that's the problem in your case.
Per the documentation, t2.micro instances only have 1 GB of RAM. I suspect that your application is consuming more than that after some amount of time. As @Deepak suggested, your application logs should illuminate the problem.
All t2 instances are Burstable Performance Instances, which means that after a sustained period of load, their performance will drop off significantly. However, that alone shouldn't be causing your 5xx errors.
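To check the memory theory from inside the app, you could periodically log the JVM's own view of its heap (a generic diagnostic sketch, not tied to anything Beanstalk-specific):

public class HeapProbe {
    // Call this from a scheduled task or at the top of a request handler.
    static void logHeap() {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("JVM heap: " + usedMb + " MB used of " + maxMb + " MB max");
    }
}

If the used figure climbs steadily toward the max before the 5xx errors start, that points to a leak or an undersized instance.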
We have implemented a Java servlet, running in a JBoss container, that uses CometD long-polling. This has been deployed in a few organizations without any issue, but in a recent implementation there are functional issues which appear to be related to that organization's network setup.
Specifically, around 5% of the time, the connect requests are getting back 402 errors:
{"id":"39","error":"402::Unknown client","successful":false,"advice":{"interval":0,"reconnect":"handshake"},"channel":"/meta/connect"}
Getting this organization to address network performance is a significant challenge, so we are looking at a way to tune the implementation to reduce these issues.
Which cometd configuration parameters can be updated to improve this?
maxInterval, timeout, multiSessionInterval, etc.?
Thank you!
The "402 unknown client" error is due to the fact that the server does not see /meta/connect heartbeat messages from the client and expires the correspondent session on the server. This is typically due to network issues.
Once the client network is restored, the client sends a /meta/connect heartbeat message but the server doesn't have the correspondent session, hence the 402.
The parameter that controls the server side expiration of sessions is maxInterval, documented here: https://docs.cometd.org/current/reference/#_java_server.
By default it is 10 seconds. If you increase it, you are retaining sessions in server memory for a longer time, so you need to take that into account.
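For example, a sketch of raising it to 60 seconds by creating the BayeuxServer yourself before the CometD servlet starts (this assumes CometDServlet picks up a pre-created instance from the standard servlet context attribute; the same option can also be set as a servlet init-param):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.server.BayeuxServerImpl;

public class CometDConfigListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        BayeuxServerImpl bayeux = new BayeuxServerImpl();
        // Give clients 60s (instead of the 10s default) to deliver their
        // /meta/connect before the server expires the session. Value is in
        // milliseconds; sessions stay in memory for longer as a trade-off.
        bayeux.setOption("maxInterval", 60000L);
        sce.getServletContext().setAttribute(BayeuxServer.ATTRIBUTE, bayeux);
    }

    public void contextDestroyed(ServletContextEvent sce) {
    }
}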
I have a web application built with GWT (2.0.3) and run on Apache Tomcat 6.
My application uses long polling to enable client-server conversations.
When a client is unable to connect to the server it displays a disconnected message on the page and grays out the controls until it is able to resume conversation with the server.
This happens through the use of the onFailure method of the RPC services; I keep track of how many consecutive exceptions I've received, and if it passes a defined threshold the above scenario happens.
This allows notifying the user of a problem while continuing, in the background, to try to resume the server conversation.
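Roughly how the counting works (a simplified sketch; the threshold value and the helper method names are placeholders for my actual code):

import com.google.gwt.user.client.rpc.AsyncCallback;

public class ConnectionMonitor {
    private static final int FAILURE_THRESHOLD = 3; // placeholder value
    private int consecutiveFailures = 0;

    private final AsyncCallback<String> pollCallback = new AsyncCallback<String>() {
        public void onFailure(Throwable caught) {
            consecutiveFailures++;
            if (consecutiveFailures >= FAILURE_THRESHOLD) {
                showDisconnectedMessage(); // gray out controls, show banner
            }
            schedulePoll(); // keep retrying in the background
        }

        public void onSuccess(String result) {
            consecutiveFailures = 0;
            hideDisconnectedMessage();
            schedulePoll();
        }
    };

    void showDisconnectedMessage() { /* placeholder */ }
    void hideDisconnectedMessage() { /* placeholder */ }
    void schedulePoll() { /* placeholder: re-issue the long-poll RPC */ }
}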
This has been the configuration for about 6 months, and without a problem.
I compiled the application after a change and wanted to see it in stand-alone mode, so I started up Tomcat (not via Eclipse) and everything seemed to work fine.
When I Ctrl+C'd Apache Tomcat (while clients were up), I saw the clients display a 503 error instead of my app with its disconnected message.
I then tried to reproduce the issue but was unable to, as on subsequent attempts the app behaved as expected.
I'm not sure if it's relevant but recently I added an UncaughtExceptionHandler to my module's onModuleLoad.
Has anyone encountered such an issue?
Do you know how I can make my client immune to such an issue?
Thanks a lot,
Ittai
Probably your app tried to connect to the server while it was in the process of shutting down. Some of the services might have already shut down, so the request failed with an internal server error.
I've had a similar issue with an Apache httpd in front of Tomcat: stopping Tomcat while one of the "background" async requests was being made meant that, due to the security redirection policy, the failing request ended up redirecting the browser, and voilà, our 503 error page.
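One way to make the client more tolerant of this case (a sketch, assuming GWT-RPC; it folds the 503 into the failure counter described above, so helper names are placeholders):

import com.google.gwt.user.client.rpc.StatusCodeException;

// Inside the existing onFailure handler:
public void onFailure(Throwable caught) {
    if (caught instanceof StatusCodeException
            && ((StatusCodeException) caught).getStatusCode() == 503) {
        // The container or fronting proxy answered instead of the app:
        // treat it like any other transient disconnect and keep retrying.
        consecutiveFailures++;
        schedulePoll();
        return;
    }
    // ... existing failure handling ...
}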