I am planning to implement RESTful web services that return large XML responses (up to 50 MB). Is REST suitable for such requests, or is SOAP (JAX-WS) better? Do I need to use some other protocol to make it more robust when it comes to marshalling/unmarshalling?
REST uses a regular HTTP GET, and HTTP GET is stable for very large files; downloading a 50 MB file (or other content) over HTTP is done quite regularly.
You only need to make sure there are no processing delays in the middle that would cause the connection to time out (usually after ~2 minutes). This is unlikely to be a problem.
If you use Restlet you can stream data of any size back to the client using ReadableRepresentation (I'm doing gigabytes). It takes a bit of effort, but it works fine.
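A minimal sketch of that approach with Restlet 2.x (the file path is illustrative; ReadableRepresentation wraps any ReadableByteChannel, so the response is streamed rather than buffered):

```java
import java.io.FileInputStream;
import java.io.IOException;
import org.restlet.data.MediaType;
import org.restlet.representation.ReadableRepresentation;
import org.restlet.representation.Representation;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;

// Streams a large XML file back to the client without loading it all into memory.
public class LargeXmlResource extends ServerResource {

    @Get
    public Representation represent() throws IOException {
        // FileChannel is a ReadableByteChannel; Restlet pulls bytes from it while
        // writing the response, so only a small buffer is held in memory at a time.
        FileInputStream in = new FileInputStream("/data/huge-response.xml"); // illustrative path
        return new ReadableRepresentation(in.getChannel(), MediaType.APPLICATION_XML);
    }
}
```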
We use Jersey 2.x (Jersey + Spring + Tomcat) to expose REST-based web services. For monitoring purposes, I need to maintain a histogram capturing the size of responses sent out by the server to clients over time. Is it possible to do this using Jersey?
One suggested way is to write a servlet filter (for Tomcat) that reads every request and response and captures the size of each response. However, I am worried about the performance impact of introducing a filter, so I am looking for a solution within Jersey itself.
Any other suggestions will also be appreciated.
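For reference, JAX-RS 2.x (which Jersey 2.x implements) offers WriterInterceptor, which can wrap the entity output stream and record its size without a servlet filter. A minimal sketch (the class name and the histogram call are illustrative; it counts entity bytes only, not headers):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

// Counts the bytes of every entity Jersey writes out; feed the counts into your histogram.
@Provider
public class ResponseSizeInterceptor implements WriterInterceptor {

    @Override
    public void aroundWriteTo(WriterInterceptorContext context) throws IOException {
        final AtomicLong size = new AtomicLong();
        final OutputStream original = context.getOutputStream();

        // Wrap the entity stream so every byte written is counted before being passed on.
        context.setOutputStream(new OutputStream() {
            @Override public void write(int b) throws IOException {
                size.incrementAndGet();
                original.write(b);
            }
            @Override public void write(byte[] b, int off, int len) throws IOException {
                size.addAndGet(len);
                original.write(b, off, len);
            }
        });

        context.proceed();

        // recordResponseSize(...) is a placeholder for whatever histogram library you use.
        recordResponseSize(size.get());
    }

    private void recordResponseSize(long bytes) {
        System.out.println("Response entity size: " + bytes + " bytes");
    }
}
```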
I'm on a project that exposes ABAP functionality as a web service via NetWeaver / Java (7.01 SP3, I think). We consume it on a .NET 4 UI tier. We are dealing with some large message structures (12 MB of serialized XML) that are taking too many seconds to shuttle between the various tiers.
We're tackling this performance problem on a number of fronts:
- Disk, network, CPU and memory are fine and nowhere near saturated.
- We're working to trial WCF streaming mode.
- We may try gzip compression on the web service's server.
- And lastly, the point of this question: is there a way to enable binary serialization that's interoperable?
Assuming that you already tried everything to get the payload size down and to split it into smaller pieces (12 MB of XML, seriously!), I'd say it depends on the kind of XML processing you need on the ABAP side. You could try to implement your own ICF HTTP handlers and go for some REST-style interface. This is especially interesting if you really want to transfer binary data (for example, some document you retrieve from an archive system), because you can then transfer the document via plain HTTP without the XML-binary ugliness. Even if you have to use full WSDL-based web services, you could try to refactor the binary parts out of that interface: just send some (GU)ID through the web service and have the client fetch the binary part from your custom ICF handler.
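A minimal sketch of that fetch-by-ID step (shown in Java for brevity, although the consumer here is .NET; the /sap/bc/docs/ handler path and the GUID are purely illustrative):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class DocumentFetcher {
    public static void main(String[] args) throws Exception {
        String guid = "some-guid-from-the-soap-response";            // ID received via the web service
        URL url = new URL("http://sapserver:8000/sap/bc/docs/" + guid); // hypothetical ICF handler
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            // The binary document travels as a plain HTTP body, no base64/XML wrapping needed.
            Files.copy(in, Paths.get("document.pdf"), StandardCopyOption.REPLACE_EXISTING);
        } finally {
            conn.disconnect();
        }
    }
}
```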
I'm trying to combine these articles: http://java.sun.com/developer/technicalArticles/RMI/rmi_corba/ and http://netbeans.org/kb/docs/javaee/entappclient.html to make a simple client-server app using GlassFish, in which I could send a file from a (local) client to a directory on the (local) server. This is something new for me and I feel a little overwhelmed at the moment. Any advice, please?
You're kind of in the wrong area. The things you're looking at are for support of RPC sessions. In theory you could send over an enormous byte array, but it is likely unwise to do so.
What would be preferable is to create a simple web app and push the file over HTTP.
Or you could try using a JAX-WS web service that's configured for MTOM -- it will handle large payloads as well. You can look here for an article on streaming MTOM messages. It's for WebLogic, but it's basically Sun JAX-WS, so it should work on GlassFish out of the box.
An advantage of the web service is that you can host it in an EJB, rather than having to deploy a separate WAR for this facility. What you want to watch out for is the payload being stored entirely in RAM. For example, if you want to send a 10 GB file, the actual traffic is going to be the same either way, but done naively, you will end up holding all 10 GB in the heap on the client and/or the server, which obviously is not desirable.
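Roughly, an MTOM-enabled JAX-WS endpoint looks like the sketch below (the service name and file paths are illustrative; whether large attachments are truly streamed rather than buffered depends on how the JAX-WS runtime is configured):

```java
import javax.activation.DataHandler;
import javax.activation.FileDataSource;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

// With MTOM enabled, the DataHandler content travels as a raw MIME attachment
// instead of being base64-encoded inside the SOAP body.
@MTOM(threshold = 1024)
@WebService
public class FileTransferService {

    @WebMethod
    public DataHandler download(String fileId) {
        // FileDataSource reads from disk on demand rather than preloading the bytes.
        return new DataHandler(new FileDataSource("/data/" + fileId)); // illustrative path
    }

    @WebMethod
    public void upload(String fileName, DataHandler content) throws java.io.IOException {
        // Consume the attachment as a stream; avoid content.getContent() on huge files.
        try (java.io.InputStream in = content.getInputStream()) {
            java.nio.file.Files.copy(in, java.nio.file.Paths.get("/data", fileName),
                    java.nio.file.StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```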
In the end, either way will work. The web service has the downside of forcing you to dig into the shadowy corners of the web service stack, whereas with a generic servlet and web app everything is more out in the open; however, you will likely be diving into the inner depths of HTTP to pull that off. For example, if you wanted to use Apache HttpClient, you would need to supply a RequestEntity that handles the streaming for you (see the sketch below).
It's all possible; it's just less commonly used and not the default, out-of-the-box, two-lines-of-code tutorial example.
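For the servlet route, the client side with Apache Commons HttpClient 3.x might look like this sketch (the URL and file name are placeholders); FileRequestEntity, a stock RequestEntity implementation, streams the file from disk, so the whole payload never sits in the heap:

```java
import java.io.File;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.FileRequestEntity;
import org.apache.commons.httpclient.methods.PostMethod;

public class StreamingUploadClient {
    public static void main(String[] args) throws Exception {
        File file = new File("big-file.bin");                                    // placeholder file
        PostMethod post = new PostMethod("http://localhost:8080/upload/files");  // placeholder URL

        // FileRequestEntity writes the file to the socket in chunks; Content-Length
        // comes from the file size, so nothing is buffered in memory.
        post.setRequestEntity(new FileRequestEntity(file, "application/octet-stream"));

        HttpClient client = new HttpClient();
        try {
            int status = client.executeMethod(post);
            System.out.println("Server responded with HTTP " + status);
        } finally {
            post.releaseConnection();
        }
    }
}
```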
I am working on a school project in which I query and receive some fairly large XML documents from a central server. This was fine in the beginning, as I was rarely making these requests (HTTP GET), but as the project progressed I came up with more things to do with this data, and now I have servlets requesting 3 or 4 XML documents, each in its own separate GET request, which is causing page generation times upwards of 25 seconds.
It's not possible to change the way the data is served, nor the way in which it's requested, as I have a fairly large code base and it's not as decoupled as it perhaps should be.
Is there a smart way to listen in on when my servlets execute these GET requests, intercept them, and perhaps supply them with a local, cached version instead? The data is not THAT volatile, so 'live' data is not needed.
So far, I have not been able to find information on listening on OUTgoing requests made by Tomcat...
I think a lot will depend upon your cache hit ratio. If the same 3-4 documents (or some small group of documents) are being requested on a regular basis, a local caching proxy server (like Squid) might be a possibility. Java can be configured to use a proxy server for HTTP requests.
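For example, pointing the JVM at a local Squid instance is just a matter of the standard proxy system properties (the host, port and URL below are placeholders):

```java
import java.io.InputStream;
import java.net.URL;

public class ProxyConfig {
    public static void main(String[] args) throws Exception {
        // Standard JVM networking properties; HttpURLConnection-based clients pick these up.
        System.setProperty("http.proxyHost", "localhost"); // where Squid runs
        System.setProperty("http.proxyPort", "3128");      // Squid's default port

        // Any subsequent HTTP GET from this JVM now goes through the caching proxy.
        URL url = new URL("http://central-server.example/data.xml"); // placeholder URL
        int total = 0;
        try (InputStream in = url.openStream()) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                total += n;
            }
        }
        System.out.println("Fetched " + total + " bytes via the proxy");
    }
}
```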
You can probably implement this using a servlet filter. It can act as a cache: if the requested document is already in the cache, just return it; if not, forward the HTTP request to your servlet.
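A rough sketch of that idea, assuming a pre-3.1 Servlet API (where only write(int) of ServletOutputStream is abstract) and a servlet that writes via getOutputStream() rather than getWriter(); the filter mapping and cache-expiry policy are left out:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class XmlCacheFilter implements Filter {

    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<String, byte[]>();

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String key = request.getRequestURI() + "?" + request.getQueryString();

        byte[] cached = cache.get(key);
        if (cached != null) {                               // cache hit: answer without the servlet
            response.setContentType("text/xml");
            response.getOutputStream().write(cached);
            return;
        }

        // Cache miss: let the servlet generate the document while we capture the bytes.
        final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
            public ServletOutputStream getOutputStream() {
                return new ServletOutputStream() {
                    public void write(int b) {
                        buffer.write(b);
                    }
                };
            }
        };
        chain.doFilter(request, wrapper);

        byte[] body = buffer.toByteArray();
        cache.put(key, body);
        response.getOutputStream().write(body);             // send the captured document onwards
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}
```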
I ended up using a ServletContextListener to load most of the data at start-up, along with an 'expiry date', into the servlet context attributes. It makes for some slow start-ups (nine GET requests to the central server!) but drastically reduces our page load times.
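A minimal sketch of that approach (the attribute names, document names and expiry window are illustrative; register the listener in web.xml or with @WebListener):

```java
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Preloads the XML documents at startup and stores them (plus an expiry timestamp)
// as servlet context attributes, so request-time code can use the cached copies.
public class XmlPreloadListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        Map<String, String> documents = new HashMap<>();
        for (String name : new String[] {"courses", "rooms", "staff"}) {   // illustrative names
            documents.put(name, fetchDocument(name));                      // one GET per document
        }
        event.getServletContext().setAttribute("cachedXml", documents);
        event.getServletContext().setAttribute("cachedXmlExpiry",
                System.currentTimeMillis() + 6 * 60 * 60 * 1000L);         // e.g. valid for 6 hours
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) { }

    private String fetchDocument(String name) {
        // Placeholder: perform the HTTP GET against the central server here.
        return "<" + name + "/>";
    }
}
```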
I have a .NET web service that returns XML, and I'd like to compress this before it is sent.
There are a couple of ways I can do this, but I'd rather not have to do it in code.
Can I set up IIS to gzip all content returned by my WebService? It's not being called from a browser.
The other question is: if this web service is being consumed by a Java client, will that affect anything?
I imagine that the client proxy will still need to decompress, but there shouldn't be any problem if I use gzip - that's universally supported, right?
The standard way to do this kind of thing is to use gzip compression over HTTP since it's directly supported by the protocol. As long as your client supports that then you should be good to go.
If you are writing the client from scratch with more fundamental tools, you may need to add handling for this yourself: a good example of this is shown here (Python).
I would expect a lot of SOAP client libraries to have built-in support for this, but you'll have to try yours to be sure: if they lean on a lower-level HTTP library to do their work, in all likelihood it should Just Work.
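If you do end up handling it manually on the Java side, the pattern is just an Accept-Encoding request header plus a GZIPInputStream over the response. A minimal sketch (the endpoint URL is a placeholder):

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public class GzipSoapCall {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/MyService.asmx");  // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip");      // tell IIS we can handle gzip

        InputStream body = conn.getInputStream();
        if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
            body = new GZIPInputStream(body);                    // transparently decompress
        }
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(body, "UTF-8"))) {
            for (String line; (line = reader.readLine()) != null; ) {
                System.out.println(line);
            }
        }
    }
}
```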
You can configure metabase.xml in IIS for better control over compression. You may want to add your web application's extensions (.asp, .asmx, ...) to the metabase if they are not already included.
See:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/25d2170b-09c0-45fd-8da4-898cf9a7d568.mspx?mfr=true
and also
http://www.businessanyplace.net/?p=wscompress