I was wondering if there is any way to save an IMAP session in a database so that it can be reused.
Any help much appreciated
Darshan
TL;DR: No.
An IMAP session involves a considerable amount of state on at least two networked services. The concept of caching a connection is very useful when you want to avoid the potentially costly process of connection establishment, setup, configuration, and perhaps entering a particular state on the other end of the connection. With IMAP, this typically involves setting up a TCP connection, agreeing upon a TLS session on top of that, exchanging authentication data, reading the remote party's CAPABILITY bits, exchanging ID data for software version troubleshooting, and so on.
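To make that setup cost concrete, here is a minimal JavaMail sketch (host and credentials are placeholders); every step below happens on a live socket, and that live state is exactly what you cannot serialize into a database:

    import java.util.Properties;
    import javax.mail.Session;
    import javax.mail.Store;

    public class ImapConnectCost {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.store.protocol", "imaps"); // TLS from the start

            Session session = Session.getInstance(props);
            Store store = session.getStore("imaps");

            // TCP handshake + TLS negotiation + LOGIN + CAPABILITY all happen here.
            store.connect("imap.example.com", "user@example.com", "password");

            // ... use the store ...
            store.close(); // all of that state is gone once the socket closes
        }
    }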
You could do something: for example, it's quite possible to set up an IMAP proxy and design your DB caching to use some identifier within the DB as a key/index/token for your proxy. But that sounds like a lot of work compared to what IMAP can do out of the box (and compared to what IMAP can easily benefit from, such as TLS session caching).
Have you tried IMAP command pipelining already? Have you measured and verified your performance data to see where the bottlenecks really are?
Total Crystal/EJB/JNDI noob, so please be considerate :) I don't have any source code to post (yet) because I don't know enough about these three combined to post anything meaningful.
Anyway...
I am maintaining an application that uses EJB3 and JNDI to connect to the DB for my Crystal reports. Now, due to our requirements, we need to ensure that the communication between the app and the DB is secure (e.g. encrypted). How should this be done?
I've been seeing discussions on using security domains, JAAS, and roles (like this one), but from what I'm seeing, that would force me to put annotations on every method concerned with connecting to the DB, not to mention defining roles for users (which is not needed at this point). Then there are those saying it's just a matter of configuring my application server (in this case, JBoss), or putting a transport-guarantee in my web.xml (e.g. CONFIDENTIAL).
What's the best approach (or do you think that this is redundant/unnecessary)? Any help or hints on where to start would be really appreciated as I don't really see how to tackle this.
Thanks in advance!
You have two options:
IPsec
The first option is OS-level encryption: all network traffic between the host endpoints is encrypted. You will need to configure the OS of both the APP and DB servers for IPsec.
Encrypted JDBC connection
In this option, only the JDBC network traffic between the database server and the client is encrypted. On the DB server side, you'll need to configure your database server to SUPPORT or REQUIRE encryption. On the client side, you need to configure your JDBC connection properties to use encryption. The exact configuration and the path for the SSL keys depend on the database you are using.
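As a concrete illustration, here is roughly what the client side can look like with PostgreSQL's JDBC driver (property names vary by driver; the host, database, credentials, and certificate path are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class SecureJdbc {
        public static Connection connect() throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "appuser");
            props.setProperty("password", "secret");
            // PostgreSQL-specific property names; other drivers differ
            props.setProperty("ssl", "true");
            props.setProperty("sslmode", "verify-full");   // verify cert + hostname too
            props.setProperty("sslrootcert", "/path/to/root.crt");
            return DriverManager.getConnection(
                    "jdbc:postgresql://db.example.com:5432/reportsdb", props);
        }
    }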
The two options are not mutually exclusive; you can implement both at the same time, but I think that would be overkill.
I have a Scala application which maintains (or tries to) TCP connections to various servers for hours (possibly > 24) at a time. Each server sends a short, ~30 character message about twice a second. These messages are fed into an iteratee where they are parsed and eventually end up making state changes to a database.
If any of these connections fail for any reason, my app needs to continually try to reconnect until I specify otherwise. Any messages getting lost is Bad. I have no control over the servers I connect to, or the protocols used.
It is conceivable there would be as many as 300 of these connections at once. Not exactly a high-load scenario, so I don't think NIO is needed, though it might be nice to have? Other parts of the app are high-load.
I'm looking for some sort of socket controller/manager which can keep these connections up as reliably as possible. I am running my own blocking controller now, but as I'm inexperienced with socket coding (and all the various settings, options, timeouts, etc.) I doubt it will achieve the best possible uptime. Plus I may need SSL support at some point down the line.
Would NIO offer any real advantages?
Would Netty be the best choice here? I've seen the Uptime example here, and was thinking of simply duplicating it, but being new to lower-level networking I wasn't sure if there were better options.
However I'm uncertain of the best strategies for ensuring as few packets are lost as possible, and assumed this would be a "solved" problem in one library or another.
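For reference, the reconnect-on-close pattern that Netty's Uptime example demonstrates boils down to roughly this sketch (Netty 4 API; the host, port, and five-second retry delay are placeholder choices):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import java.util.concurrent.TimeUnit;

    public class ReconnectingClient {
        private final EventLoopGroup group = new NioEventLoopGroup();
        private final String host;
        private final int port;

        public ReconnectingClient(String host, int port) {
            this.host = host;
            this.port = port;
        }

        public void connect() {
            Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .handler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel ch) {
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                            @Override
                            public void channelInactive(ChannelHandlerContext ctx) {
                                // Connection dropped: schedule a reconnect attempt.
                                ctx.channel().eventLoop().schedule(
                                    ReconnectingClient.this::connect, 5, TimeUnit.SECONDS);
                            }
                        });
                    }
                });
            b.connect(host, port).addListener((ChannelFutureListener) f -> {
                if (!f.isSuccess()) {
                    // Initial connect failed: retry later.
                    f.channel().eventLoop().schedule(this::connect, 5, TimeUnit.SECONDS);
                }
            });
        }
    }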
Yup. JMS is an example.
I suppose a lot of it would come down to a timeout-guessing strategy? Close and re-open a socket too early and you've lost whatever packets were en route.
That is correct. That approach is not going to be reliable, especially if connections go up and down regularly.
A real solution involves having the other end keep track of what it has received, and letting the sender know when the connection is re-established. If that can't be done, you have no real way of controlling how much gets lost. (This is what the reliable messaging services do ...)
I have no control over the servers I connect to, so unless there's another way to adapt JMS to a generic TCP stream, I don't think it will work.
Yup. And the same applies if you try to implement this by hand. The other end has to cooperate.
I guess you could construct something where you run (say) a JMS end point on each of the remote servers, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you still have potential for message loss.
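If you did control both ends, that cooperation could look something like the following hypothetical resume handshake (the RESUME command and the "<seq> <payload>" wire format are invented purely for illustration):

    import java.io.*;
    import java.net.Socket;

    // Hypothetical pattern: the receiver persists the last sequence number it
    // processed and tells the sender where to resume after a reconnect. Both
    // sides must implement this; a plain TCP stream alone cannot provide it.
    public class ResumingReceiver {
        private long lastSeq; // persist this (e.g. in your database)

        public void run(String host, int port) throws IOException {
            try (Socket s = new Socket(host, port);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {

                out.println("RESUME " + lastSeq); // ask the sender to replay from here

                String line;
                while ((line = in.readLine()) != null) {
                    // Assumed wire format: "<seq> <payload>"
                    long seq = Long.parseLong(line.split(" ", 2)[0]);
                    if (seq > lastSeq) {           // skip duplicates after replay
                        process(line);
                        lastSeq = seq;             // persist atomically with the state change
                    }
                }
            }
        }

        private void process(String msg) { /* parse and update the database */ }
    }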
We are using a set of ActiveMQ servers (three) behind a load balancer.
The configured queues persist data to disk (to help recover in case of a crash).
My question is: should a developer or an MQ admin take care of these things?
Thanks
If the messages are REALLY important, you might think about replicating them. Once persisted to disk, replicate them on some other machine as well. That is the minimum you should do: don't keep messages on only one machine. You should be looking at distributed queues:
Distributed Queue
Whose responsibility is it? Well, your company's: the people who design and build the solution. It's everyone's. If you can do it (and I am sure you can at least try), then go ahead.
IMHO, in your case the ActiveMQ part needs to be done by a developer, and the replication on the server side by an admin (not necessarily an MQ admin, but an admin). Maybe set up a cron job to replicate the needed data?
Cheers, Eugene.
Your setup is only as safe as its weakest element. You can lose messages when one server crashes (disk failure). You will not be able to recover those messages, so you should take care of safety in the application.
ActiveMQ can be made safer (but slower): see Replicated Message Stores.
Look here http://activemq.apache.org/clustering.html
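On the developer side, the piece controlled from code is the delivery mode (and, for your three brokers, the failover transport); a minimal JMS sketch (queue name and broker URLs are placeholders):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PersistentProducer {
        public static void main(String[] args) throws JMSException {
            // The failover: transport retries the listed brokers automatically.
            ConnectionFactory cf = new ActiveMQConnectionFactory(
                "failover:(tcp://mq1:61616,tcp://mq2:61616,tcp://mq3:61616)");

            Connection conn = cf.createConnection();
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                session.createProducer(session.createQueue("ORDERS"));

            // PERSISTENT makes the broker write the message to its store
            // before acknowledging the send; this is the developer's part.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("important payload"));

            conn.close();
        }
    }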
First off, let me say: feel free to tell me whether long-lived persistent TCP connections are the way to go, or whether persistent HTTP connections are better.
I've also read that instead of holding a persistent connection, I could use a polling mechanism.
I'm just asking out of curiosity: how can I create a persistent connection from Android to a server?
Thanks!
This really depends on what your requirements are and whether you actually need a persistent connection or not.
If you have time-sensitive data that you need to push from the server to the device as soon as it becomes available, then a persistent TCP connection is your best bet.
If it is acceptable that your server and device only periodically exchange information, then polling or HTTP requests may be a better choice.
Personally, I think a well-implemented long-lived TCP connection with a binary protocol is the superior choice when dealing with persistent connections where information must always be current.
HTTP connections are generally expensive in terms of per-request overhead, especially if you are using an XML-based protocol such as SOAP. Also, setting up and tearing down sockets all the time is generally quite expensive.
On the other hand, persistent TCP connections can be tricky to implement on both the client and server side. Battery life is a huge factor on the device side, and if you are expecting anything more than a handful of users connected at once, you may have to implement an asynchronous communication model on the server side, which brings its own set of challenges.
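To make "tricky" concrete, the device side usually boils down to a read loop with keep-alive, a read timeout, and exponential backoff on reconnect; a rough sketch in plain Java (the timeout values are placeholders, and on Android this would live in a background service):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class DeviceLink {
        public void run(String host, int port) throws InterruptedException {
            long backoffMs = 1000;
            while (true) {
                try (Socket s = new Socket()) {
                    s.setKeepAlive(true);          // detect half-dead connections
                    s.setSoTimeout(60_000);        // give up on silent sockets
                    s.connect(new InetSocketAddress(host, port), 10_000);
                    backoffMs = 1000;              // reset after a good connect
                    InputStream in = s.getInputStream();
                    byte[] buf = new byte[1024];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        handle(buf, n);            // your binary protocol here
                    }
                } catch (IOException e) {
                    // fall through and reconnect
                }
                Thread.sleep(backoffMs);           // back off so we do not spin
                backoffMs = Math.min(backoffMs * 2, 60_000);
            }
        }

        private void handle(byte[] buf, int len) { /* parse frames */ }
    }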
Good luck!
Long-lived TCP connections are a bad idea in anything mobile because the network is so spotty. Personally, I use UDP or transient HTTP connections with an HTTP session concept.
I have a simple server-client application that uses JDBC to connect to a database, and things work OK. The application does a few simple things with the JDBC connection (get data, insert new rows, and a few others).
Now, I would like to keep the same application but use it outside the firewall. So I would put something else on some host:port (and open that port to the outside world) instead of JDBC opening database access directly.
I guess this problem has been faced many, many times, and surely there are a lot of approaches.
One way could be to write a servlet on the server side and access it from the client side.
I haven't touched Spring yet, but maybe another way would be to write a POJO Java class and use Spring to expose it as an HTTP service.
I have also heard "rumors" that Jetty has something that can help in this case (to minimize coding on the server and client side).
I would prefer something that:
- is not complicated (easy learning curve)
- reuses something that already exists.
What approach would you recommend?
Thank you and regards,
Igor
The normal approach would be to implement a web service, which can be pretty easy these days with Axis etc.
You really don't want to open direct JDBC to clients outside a firewall by tunnelling over HTTP... the server should strictly control what kind of interaction takes place with the database.
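For the servlet approach, the essential point is that the client never speaks JDBC at all; a minimal sketch (a real deployment would use a pooled DataSource rather than DriverManager, and the URL pattern, query, and names here are placeholders):

    import java.io.IOException;
    import java.sql.*;
    import javax.servlet.http.*;

    // The client can only ask for the narrow operations this servlet
    // chooses to expose; the server decides what touches the database.
    public class OrderLookupServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String id = req.getParameter("id");
            try (Connection c = DriverManager.getConnection(
                     "jdbc:postgresql://db-internal:5432/app", "app", "secret");
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT status FROM orders WHERE id = ?")) {
                ps.setString(1, id);   // parameterised: no SQL injection
                try (ResultSet rs = ps.executeQuery()) {
                    resp.setContentType("text/plain");
                    resp.getWriter().println(rs.next() ? rs.getString(1) : "not found");
                }
            } catch (SQLException e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            }
        }
    }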
I would recommend using something like SSH tunnels to carry your JDBC connections through the firewall. Set up a tunnel on the DMZ machine on whatever publicly open port you can, and connect the other end of the tunnel to the appropriate port on the DB server.
Then just change your JDBC connection settings to connect to the tunnel machine's public port and it will transparently end up communicating with the database as usual, while passing through the firewall via the accepted port.
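Concretely, that can look like the following (host names, ports, and credentials are all placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    // On the client, first forward a local port through the DMZ host to the
    // internal DB server, e.g.:
    //
    //     ssh -N -L 15432:db-internal:5432 user@dmz.example.com
    //
    // then point the otherwise unchanged application at the local end of the tunnel:
    public class TunnelledJdbc {
        public static Connection connect() throws Exception {
            return DriverManager.getConnection(
                "jdbc:postgresql://localhost:15432/app", "appuser", "secret");
        }
    }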
If this is an IT policy problem, in that they won't let you directly access the database, then you would need to work out what you are allowed to do and work with that as far as possible. Changing JDBC to another access method is unlikely to be acceptable to the IT policy in this case.
Edit: after reading Jon's answer, he may be right. I was assuming that the issue was the connection between your server/webapp, and the database server. If you were talking about the client creating direct JDBC connections to the database, then yes - firewall or no, this is very bad practice. The client should talk to your server to ask for what it wants, and your server should do the DB queries as required to get the information.
I think that would just be an unnecessary complication. Your DBMS (usually) provides access control and transport-layer security. If you introduce your own layer, are you sure you can make it safer than a direct connection to the DB?
I see your rationale, but if there isn't an existing framework that does this, avoid building your own! For example, PostgreSQL comes with a bunch of nifty options to tie things down: require SSL certificate-based authentication on the transport level (clients must present a cert that the server checks), or restrict access by IP.
Of course you still have to trust your DBMS implementation to get basic details like access control right (= "uncrackable"), but you need to rely on that anyway once the black hats have broken into your web proxy ;)
#dtsazza: Maybe edit your answer to include the keyword "VPN"? SSH tunnels probably scale badly outside of a private setup.
Volker