We are using the Exchange Web Service (EWS) API against Office 365 to create calendar events in the calendar of a user. This works fine for an on-premise deployment, but with an Office 365 deployment we seem to be hitting a throttling limit fairly quickly.
After creating 16 events in the calendars of 16 different users (from a service account, using delegate access to those calendars), we receive the following error:
ErrorTooManyObjectsOpened - Too many concurrent connections opened
After ~5 minutes this error clears and we can continue creating events. It appears that the EWS server caches connections to mailboxes, and that Office 365 only allows connections to 16 mailboxes at a time.
We have tried a number of things to overcome this error, but have not found a definitive solution or workaround. What we tried:
Using impersonation instead of delegation: this works, but is a no-go from a security perspective.
Using multiple service accounts: this works, although each account is still limited to ~16 users per 5 minutes.
We tried the X-AnchorMailbox and X-PreferServerAffinity headers, and we tried making requests with and without HTTP keep-alive, and with and without keeping the HTTP cookies (a sketch of how we set these headers follows below). None of this made any difference. From the debug info we see that we usually end up on the same front-end and back-end server if we keep the cookies/connection, and that we end up on a different front-end if we drop the cookies but send an X-AnchorMailbox header.
We have not yet tried the REST API, since the client credential flow is not available yet.
Only the CreateItems call seems to cause this problem; we can do a FindItems for many users without hitting a limit.
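For reference, setting these headers with the stock ews-java-api looks roughly like the sketch below (our code is a modified version of the library, so the details differ; getHttpHeaders() is the hook the Java port exposes for extra request headers):

// Sketch only: package layout follows the 1.x ews-java-api.
import java.net.URI;
import java.util.Date;
import microsoft.exchange.webservices.data.*;

public class DelegateCreateSketch {
    static void createEvent(String svcUser, String svcPassword, String targetSmtp) throws Exception {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);
        service.setUrl(new URI("https://outlook.office365.com/EWS/Exchange.asmx"));
        service.setCredentials(new WebCredentials(svcUser, svcPassword));

        // Ask Office 365 to route the request to the back-end holding the target mailbox.
        service.getHttpHeaders().put("X-AnchorMailbox", targetSmtp);
        service.getHttpHeaders().put("X-PreferServerAffinity", "true");

        Appointment appointment = new Appointment(service);
        appointment.setSubject("Created via delegate access");
        appointment.setStart(new Date());
        appointment.setEnd(new Date(System.currentTimeMillis() + 30 * 60 * 1000));

        // Delegate access: save directly into the target user's calendar folder.
        appointment.save(
                new FolderId(WellKnownFolderName.Calendar, new Mailbox(targetSmtp)),
                SendInvitationsMode.SendToNone);
    }
}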
Is anyone aware of a way to overcome this limitation? For example, is there some call we can make to close the cached mailbox sessions on the Office 365 side? Or is there an Office 365 administrator in the room who can shed some light on the exact throttling limits, and on why they are so much lower than the on-premises Exchange throttling limits?
Other details: we are using a modified version of the EWS Java API, but have done some extensive research and are quite sure this problem is server-side.
Unfortunately there is no call you can make to close the connections. Impersonation is the recommended solution. You said it's a "no-go" from a security perspective, can you elaborate?
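For reference, impersonation with the EWS Java API looks roughly like this minimal sketch (it assumes the service account has been granted the ApplicationImpersonation role, and uses 1.x-style package and class names):

import java.net.URI;
import microsoft.exchange.webservices.data.*;

public class ImpersonationSketch {
    static ExchangeService serviceFor(String targetSmtp, String svcUser, String svcPassword) throws Exception {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);
        service.setUrl(new URI("https://outlook.office365.com/EWS/Exchange.asmx"));
        service.setCredentials(new WebCredentials(svcUser, svcPassword));
        // Act as the target user: items are created in that user's own calendar,
        // and the throttling budget is generally charged to the impersonated mailbox
        // rather than to the service account, which is presumably why impersonation
        // side-steps the limit you are hitting.
        service.setImpersonatedUserId(
                new ImpersonatedUserId(ConnectingIdType.SmtpAddress, targetSmtp));
        return service;
    }
}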
I've been through different questions about this topic; however, none of them has cleared up my doubts about the best approach for notifying the client side of a server-client IM app.
The Problem:
The whole problem is how to notify the client application of updates. I've already seen the following approaches:
Client keeps checking for updates: From time to time, the client app checks with the server to see if there are updates for that specific user;
Problem: it does not perform well at all. Suppose you have one million users and each one of them checks for new updates every second: the server would have to deal with one million requests per second. Won't work.
Client app opens a socket: The client app opens a socket and sends its address to the server. The server, in turn, persists this information and connects to the socket whenever it needs to notify the client of an update.
Problem: Often the client will be behind a NAT, so the IP address it knows about is in a private, non-routable range. In order to send messages to this client, port forwarding would have to be configured on the NAT, which can't be done.
Regardless of the technology, I think this approach will always be used; however, I have no idea how the problem described above can be solved.
Google Cloud Messaging (GCM): use the GCM service to notify the client of any update. Problem: It doesn't seem right to use a third-party server to handle the IM, and it raises concerns about the scalability of the system. When the number of messages and users grows exponentially, it seems that the service will go down. Besides that, passing the information through two servers before delivering it to the targets seems to just add bottlenecks to the process.
A combination of 2 and 3: use GCM to reach the client when the last persisted address is no longer available.
Problem: same as described in 2.
XMPP: I've seen many answers suggesting XMPP for IM applications; however, from what I've found on the web, XMPP is a protocol. I don't see how it solves the problem described in 2, for instance.
Given the options above, can someone point me toward which approach I should pursue? Which one of these has the best chance of success?
Thank y'all in advance.
Use Google Cloud Messaging. Contrary to what you stated, this service is built to scale to billions of users, and it will generally not introduce performance bottlenecks.
What you basically want to do is use the messaging service to wake up devices. If you insist, you can then still use your client-server approach, and thus your own protocol, to have the client look up new messages from the backend.
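A rough illustration of the wake-up idea, sending a tiny data-only message from your backend (the endpoint, API key, and registration token below are placeholders; check the current GCM/FCM documentation for the exact details):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GcmWakeUp {
    private static final String GCM_ENDPOINT = "https://gcm-http.googleapis.com/gcm/send"; // placeholder
    private static final String API_KEY = "YOUR_SERVER_API_KEY";                           // placeholder

    static void wakeUp(String registrationToken) throws Exception {
        // Data-only payload: it just tells the device "something is waiting for you".
        String payload = "{\"to\":\"" + registrationToken + "\",\"data\":{\"event\":\"new_messages\"}}";

        HttpURLConnection conn = (HttpURLConnection) new URL(GCM_ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "key=" + API_KEY);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("GCM response: " + conn.getResponseCode());
        // On the device, the GCM receiver reacts to "new_messages" by calling your
        // own backend (your own protocol) to fetch the pending messages.
    }
}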
I am using the Java EWS API in my web application to connect to MS Exchange and read user email requests. I am also using a scheduler that pulls the subscription every minute.
The problem is that when I start my application, the EWS API works fine: it gets all new mails and processes them. But after a few days, whenever the scheduler tries to pull the subscription for the inbox, the application throws the following error:
microsoft.exchange.webservices.data.ServiceResponseException: The specified subscription was not found.
Maybe it is a threading issue or a memory issue; I am not sure. Please suggest possible reasons for this issue.
Have a look at this article; the Client Access Server affinity issue it describes may be what you are encountering.
http://blogs.msdn.com/b/exchangedev/archive/2011/07/20/client-access-server-affinity-and-network-load-balancing-considerations-for-programmatic-access-to-exchange-online.aspx
Supposedly, though, you shouldn't get this particular issue if you use version 1.1 (or later) of the EWS Java library.
So I'd check your EWS library version first, and if you still get the problem, add retry logic to your app to recreate the subscription when you encounter this error (a sketch follows below).
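A rough sketch of that retry idea, using the ews-java-api pull-notification calls (names may differ slightly in your version of the library):

import java.util.Arrays;
import microsoft.exchange.webservices.data.*;  // 1.x package layout

public class PullWithRetry {
    private final ExchangeService service;
    private PullSubscription subscription;

    PullWithRetry(ExchangeService service) throws Exception {
        this.service = service;
        this.subscription = subscribe();
    }

    private PullSubscription subscribe() throws Exception {
        return service.subscribeToPullNotifications(
                Arrays.asList(new FolderId(WellKnownFolderName.Inbox)),
                30,    // server-side subscription timeout in minutes
                null,  // no previous watermark
                EventType.NewMail);
    }

    // Called by the scheduler every minute.
    GetEventsResults poll() throws Exception {
        try {
            return subscription.getEvents();
        } catch (ServiceResponseException e) {
            if (e.getErrorCode() == ServiceError.ErrorSubscriptionNotFound) {
                // The server no longer knows this subscription (for example because
                // CAS affinity was lost): recreate it and poll once more.
                subscription = subscribe();
                return subscription.getEvents();
            }
            throw e;
        }
    }
}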
I have been using EWS push subscriptions since 2017, so I am not sure whether the following will help, but if you can share your code I can take a look and see if I can find something.
For push subscriptions I have seen many different errors. To avoid issues I use an object pool of connections, and if I encounter a random error from the Exchange server I discard the current connection and create a new one, which mostly solves these kinds of issues (a rough sketch of the pattern follows below).
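A stripped-down sketch of the idea (not my production code), using a plain BlockingQueue as the pool; how you build each ExchangeService (credentials, URL, anchor mailbox) is up to your existing setup:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;
import microsoft.exchange.webservices.data.*;  // 1.x package layout

public class ExchangeConnectionPool {
    private final BlockingQueue<ExchangeService> pool;
    private final Supplier<ExchangeService> factory;

    public ExchangeConnectionPool(int size, Supplier<ExchangeService> factory) {
        this.pool = new ArrayBlockingQueue<>(size);
        this.factory = factory;
        for (int i = 0; i < size; i++) {
            pool.add(factory.get());
        }
    }

    public <T> T withConnection(Callback<T> work) throws Exception {
        ExchangeService service = pool.take();
        try {
            T result = work.run(service);
            pool.add(service);        // the connection behaved, put it back
            return result;
        } catch (Exception e) {
            pool.add(factory.get());  // discard the failing connection, replace it
            throw e;
        }
    }

    public interface Callback<T> {
        T run(ExchangeService service) throws Exception;
    }
}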
You can also try setting the anchor mailbox while establishing the connection; it helps with some of these issues.
Also, if you can share some sample code, I am happy to check.
I'm working with Apache Tomcat 7 (JSP and servlets). In my application, I need to send some messages from the server to the client. Below, I'll explain a little bit about what I'm working on.
Brief explanation: The application brings up a login page every time the user wants to connect to the internet and isn't logged in yet. After the user has logged in successfully and his time is about to end, I need to send the client a message with the remaining time (for example, during the last few minutes). Another possible requirement is to open an advertising popup at a specific time.
I know about JMS, but I don't know how good a fit it is for my scenario. I have also read in other posts that WebSocket can be an option.
I'm running the server on CentOS 6.2.
Question: For this scenario, do you have some thoughts on how to handle it with Java technologies? If you have other ideas, feel free to share them!
N.B. For JavaScript and PHP I found good answers in other SO questions. I'm interested specifically in how to solve this issue with Java technologies.
http://jwebsocket.org/
Maybe this fits your needs.
You will not be able to initiate an HTTP connection from the server to the client. One solution is to use a WebSocket/Comet framework. Unfortunately, WebSockets are not yet really widespread (server + browser). I suggest you use a framework to fill the gap: https://github.com/Atmosphere/atmosphere
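If you do end up on a recent Tomcat 7 build with JSR-356 support (7.0.47 and later) rather than Atmosphere, a minimal server endpoint that pushes the "time remaining" warning could look something like the sketch below; with Atmosphere the idea is the same, just wrapped in its own API. The session map and the triggering of the warning are placeholders to wire into your own login bookkeeping.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/notifications/{userId}")
public class NotificationEndpoint {

    private static final Map<String, Session> SESSIONS = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session, @PathParam("userId") String userId) {
        SESSIONS.put(userId, session);
    }

    @OnClose
    public void onClose(Session session, @PathParam("userId") String userId) {
        SESSIONS.remove(userId);
    }

    // Called by your own timer when a user's remaining time runs low.
    public static void sendTimeWarning(String userId, long minutesLeft) throws IOException {
        Session session = SESSIONS.get(userId);
        if (session != null && session.isOpen()) {
            session.getBasicRemote().sendText(
                    "Your session will end in " + minutesLeft + " minutes");
        }
    }
}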
I don't understand your insistence on implementing the solution in Java; any valid solution should be portable across different server-side languages. However, if the termination is to occur without synchronous user-driven interaction, then you're just creating load on your server by trying to handle it there. If you want somebody to write the code for you, then this isn't the right forum.
I know about JMS....CentOS 6.2.
Not much help here.
The thing we really need to know is what you mean by:
After the user has logged in successfully and his time is about to end
(I assume you mean the session time is going to end, unless you've written some software which predicts when people will die).
How do you determine when the session will be ended?
Is it a time limit per page?
Is it a fixed time from when they login?
Is it when the session is garbage collected by the Java container?
In case 1, the easiest way to achieve this would be to use JavaScript to set a timeout on the page (when the user navigates to a new screen the timeout will be discarded), e.g.
setTimeout(function() {
alert('5 minutes has expired since you got here - about to be logged out');
}, (300000)); // 5 minutes
In the 2nd case you'd still use the method above, but reduce the JavaScript timeout by the time already spent on the server (write the JavaScript using Java, or drop a cookie containing the login timestamp at login).
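For example, a small servlet could compute the remaining time and emit the JavaScript with that value plugged in. The "loginTime" session attribute and the fixed one-hour allowance below are assumptions; swap in however you track the user's allotted time.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/timeLeft.js")
public class TimeLeftServlet extends HttpServlet {
    private static final long ALLOWED_MILLIS = 60 * 60 * 1000;  // assumed one-hour allowance

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Assumed to be set when the user logs in.
        Long loginTime = (Long) req.getSession().getAttribute("loginTime");
        long spent = loginTime == null ? 0 : System.currentTimeMillis() - loginTime;
        long remaining = Math.max(ALLOWED_MILLIS - spent, 0);

        resp.setContentType("application/javascript");
        resp.getWriter().print(
                "setTimeout(function() {"
              + " alert('Your time is almost up; you will be logged out shortly');"
              + " }, " + remaining + ");");
    }
}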
In the 3rd case... you don't really have any way of knowing when the user will be logged out.
I am developing a chat website using JSP/servlets. I will be hosting my website on Google App Engine. Now I have some doubts regarding whether to use server push or client pull technology.
1) If I use server push and I don't close the servlet response, will it cause the server to slow down? How many simultaneous connections can a typical Tomcat server handle if I keep the socket open for the entire chat session between two clients?
2) Would server push or client pull be better?
If you are using a servlet (prior to 3.0), then I guess you'll have to go with pull because of the servlet programming model. However, there ARE advantages to using a push model: primarily, it avoids wasted load on the server and the latency limitations. That's why there are technologies such as Comet. Servlet 3.0 also supports a push model. These are commonly used in Ajax-based apps.
In fact, I believe a push model is better suited for a chat app because of the faster response time (= better user experience) it can provide.
If you use an NIO-based implementation for the push model, you can support thousands or even more than 10k concurrent connections (obviously, your mileage may vary).
If you use a conventional IO-based implementation, it will likely be in the range of hundreds of concurrent connections (don't take this estimate too seriously though; I'm just giving these numbers to convey a very, very rough feeling).
As for Tomcat, last time I checked, people were saying that it wouldn't have good push-model support until version 7.0. But I'm not following the current status, so I'm not sure (sorry, perhaps somebody else can help you on this). If that is the case, you might want to check out the Comet support in Jetty.
Grizzly and Netty are also good NIO-based network frameworks, but if you want to use JSP and find that Tomcat is not sufficient, I guess Jetty would be the best bet.
edit: (some additional info)
In this "push models", it's not like the server opens a connection to the client. The connection will be kept alive, and the server will push messages as it sees fit.
Also, it's not like there are only "push" and "pull" models. You can have a hybrid, like long polling.
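For example, a long-polling hybrid built on the Servlet 3.0 async API (which Tomcat 7 supports) might look roughly like this. The blocking queue is a stand-in for wherever your chat messages really come from, and this simple form still occupies a thread from the async pool while it waits; a fully non-blocking version would register a listener on the message source instead.

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // Placeholder for the real message source.
    static final BlockingQueue<String> MESSAGES = new LinkedBlockingQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final AsyncContext ctx = req.startAsync();
        ctx.setTimeout(25000);  // give up before typical proxy timeouts

        ctx.start(new Runnable() {
            @Override
            public void run() {
                try {
                    // Hold the request open until a message arrives or 20 s pass.
                    String message = MESSAGES.poll(20, TimeUnit.SECONDS);
                    ctx.getResponse().getWriter().print(message == null ? "" : message);
                } catch (Exception e) {
                    // Fall through; the client will simply poll again.
                } finally {
                    ctx.complete();
                }
            }
        });
    }
}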
I don't know how you are thinking of achieving server push here. As far as I can see, the server needs a request to respond to over HTTP. So, when there is a request, the server will respond to it.
If I use server push and I don't close the servlet response, will it cause the server to slow down?
App Engine will not let you do that. You have to finish your response within thirty seconds, or it will be killed. The thirty seconds is also an edge case; most of the calculations they do (for quota and such) are based on a 75 millisecond response time.
How many simultaneous connections can a typical Tomcat server handle
Tomcat? I thought you were planning to use App Engine?
Pull. Always pull.
I know it's a manufacturing-oriented book but the advice from Lean Thinking (Womack & Jones) is invaluable in any context (roughly, from memory):
Start by defining value,
line up the activities that create value in the value-stream,
create flow across the value-stream,
let customers pull value from the value-stream,
compete against perfection rather than other organizations
If I misquoted them, I apologize. Anyway, all of those principles can easily be applied in the development of any software product, just as they can in the production of any physical product, but the one that matters for you is pull.
Letting consumers of a service pull rather than pushing to them not only makes your programming model easier, it aligns activity with demand. You can still use queuing to load-level over time, if you have to, just the way you could with push, but this way you have complete visibility into what, exactly, happens in any given transaction.
I don't quite get your first question but the answer is still pull.
The answer to your query depends on what underlying protocol you wish to use.
Since you have mentioned JSP/servlets, your app will be implemented over the HTTP protocol.
HTTP is a protocol over TCP. TCP is connection-oriented and the connection stays alive until it is closed. An HTTP exchange, however, typically only holds the connection for the duration of a single request-response cycle, after which the connection is released. That should answer your doubt about how many socket connections a typical Tomcat server will be able to handle: the connections are not held open for the whole chat session; they only last for the duration of an HTTP request-response cycle.
Given this basic idea, I would suggest you use a client-pull strategy to implement your app.
Even with "server push" over HTTP, it is really the web client that polls the server at regular intervals, which just gives the illusion of server push. The HTTP specification mandates that the client makes a request to which the server responds.
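A bare-bones illustration of the client-pull strategy, just to make the idea concrete (the URL and the response handling are placeholders):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChatPoller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
                try {
                    // Ask the server for anything new since the last poll.
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL("https://example.com/chat/messages?since=last").openConnection();
                    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println("new message: " + line);
                    }
                    in.close();
                } catch (Exception e) {
                    // Network hiccup: just try again on the next tick.
                }
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}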
I have considerable experience in developing chat applications (both mobile and web).
Let me know if you need any assistance. I will be more than willing to help.
A friend and I are currently working on a turn-based game with chat with both desktop browser and Android clients, with Google App Engine as the server.
We're using the Java API for GAE and HTTP for communication with the server. We've implemented simple chat functionality, and we're seeing undesirable latencies of 1-3 seconds from both the browser and Android clients when just posting simple one-word chat messages.
My friend thought it would be best to use XMPP instead of HTTP, but we want to use a Google Accounts cookie for authentication from the Android client, and according to the GAE documentation, XMPP clients cannot use a Google Accounts cookie and must use the user's password.
Does anyone have any suggestions as to where the latency might be coming from, how to troubleshoot it, and/or what to do about it?
Also, is anyone aware of any opensource implementations of chat (or something similar) on GAE done in Java? Can't seem to find any.
One way to analyze the situation would be to use Wireshark to look at the network traffic during the delays.
You don't say how your chat messages are getting from one JVM to the other. If you're using the datastore, maybe try memcache?
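If the messages currently go through the datastore, a quick experiment is to keep the recent chat lines in memcache instead. A rough sketch, where the key scheme and the list-per-room layout are just illustrations:

import java.util.ArrayList;
import java.util.List;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class ChatCache {
    private static final MemcacheService CACHE = MemcacheServiceFactory.getMemcacheService();

    @SuppressWarnings("unchecked")
    public static void append(String roomId, String message) {
        String key = "chat:" + roomId;
        List<String> messages = (List<String>) CACHE.get(key);
        if (messages == null) {
            messages = new ArrayList<String>();
        }
        messages.add(message);
        CACHE.put(key, messages);  // note: not atomic, fine for an experiment
    }

    @SuppressWarnings("unchecked")
    public static List<String> recent(String roomId) {
        List<String> messages = (List<String>) CACHE.get("chat:" + roomId);
        return messages == null ? new ArrayList<String>() : messages;
    }
}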
Also, startup time is often an issue; app engine starts and stops JVMs all the time, particularly for a low-traffic app. A way to diagnose this is to reload the page a bunch of times (send more messages) and see if it gets faster after a while. It should be pretty easy to tell the difference in the admin console logs.