I am writing a REST web service for clients to download large data files. As part of this, I would like to implement a feature to enable resuming interrupted downloads in case an exception occurs or the connection is lost on the original request.
I did some research online and found that supporting Range/If-Range properties in the request header might be the solution, as indicated in http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html.
My questions are:
In the scope of a REST web service, is it the most commonly used and best practice to support the Range/If-Range fields in the client HTTP request header, or to just pass the byte offset as a query parameter in the client GET request, e.g., hostname:port/download?token=?&byteoffset=??
If taking the former approach, on the server side, is there a standard way to handle a request with a Range field in the JAX-RS specification (I am using Java)? The straightforward way is to just open an InputStream from the file and skip the given number of bytes.
In general, don't use query parameters for meta-information about the resource (or about which part of it you need), so you should be using the Range header, and make sure the server supports it.
Note that, for example, byteoffset is not a meaningful part, component, or semantically interesting bit of the resource itself, but a way of requesting partial content (and it works identically for all resources), so you should use the headers provided for exactly that; they exist for this purpose.
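As for the JAX-RS part of the question: the spec has no built-in Range support, so you read the header yourself, skip into the file, and answer with 206 Partial Content. A minimal sketch, assuming a fixed file path and only the simple "bytes=start-" form of the header (a real implementation should also handle "start-end" ranges and If-Range):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import javax.ws.rs.GET;
    import javax.ws.rs.HeaderParam;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Response;
    import javax.ws.rs.core.StreamingOutput;

    @Path("/download")
    public class DownloadResource {

        // Hypothetical location of the large file being served.
        private static final File DATA_FILE = new File("/data/large-file.bin");

        @GET
        public Response download(@HeaderParam("Range") String range) throws IOException {
            long length = DATA_FILE.length();
            long start = 0;

            // Naive parsing of "bytes=<start>-"; malformed or multi-range values are not handled.
            if (range != null && range.startsWith("bytes=")) {
                start = Long.parseLong(range.substring("bytes=".length()).replace("-", ""));
            }

            final long offset = start;
            StreamingOutput body = out -> {
                try (InputStream in = new FileInputStream(DATA_FILE)) {
                    in.skip(offset);                       // skip what the client already has
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                }
            };

            if (range == null) {
                // Full download; advertise that byte ranges are accepted for resuming.
                return Response.ok(body).header("Accept-Ranges", "bytes").build();
            }
            return Response.status(206)                    // 206 Partial Content
                    .header("Content-Range", "bytes " + offset + "-" + (length - 1) + "/" + length)
                    .entity(body)
                    .build();
        }
    }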
Related
I'm looking for any way to run PUT and DELETE methods for Java Servlets. In general, I have a simple application with a web module (WAR) connected to an EJB project that manages a simple list of libraries.
For each of the CRUD actions I would like to use a different method:
Creating by POST,
Reading by GET,
Update by PUT,
Delete by DELETE
However, I handle create with a simple form using the POST method, and read (list) is executed by simply entering the URL in the address bar (a GET, without a form), but I have no idea how I could use PUT and DELETE for their purposes.
Unfortunately, the online resources I found didn't answer that question, but I'm just curious how I could use them.
I assume deleting would use e.g. /Servlet/:libraryId (without any body), and updating would use the same URL scheme (with the updated data in the body). doPut() and doDelete() should just run the proper actions in the EJB.
There's no way at the HTML level to create a DELETE request from a form submit (it's always GET or POST, unless you write your own browser that handles the kind of non-standard HTML you proposed). All you can do is submit a value (taken from a hidden field, radio button, whatever) that tells the doPost() or doGet() handler on the server which action should be taken.
But then, there's nothing special about DELETE at the HTTP protocol level. The whole request is just a sequence of bytes, where the first few define the method (PUT, GET, DELETE, PATCH...), followed by a space, and so on.
As far as your server is concerned, you could even use web sockets to accept connections and create responses to self-defined fantasy methods. If you're using servlets, you have doPut(), doGet() and doDelete() available anyway. To handle other HTTP methods you'd have to override the service() method.
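On the servlet side, a minimal sketch of doPut() and doDelete() could look like the following; the URL mapping, the path parsing and the LibraryService EJB are made-up stand-ins for your own code:

    import java.io.IOException;
    import java.util.stream.Collectors;
    import javax.ejb.EJB;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet mapped so that /libraries/42 addresses library 42.
    @WebServlet("/libraries/*")
    public class LibraryServlet extends HttpServlet {

        @EJB
        private LibraryService libraryService;   // assumed EJB with update/delete methods

        private long idFromPath(HttpServletRequest req) {
            // getPathInfo() returns e.g. "/42" for /libraries/42
            return Long.parseLong(req.getPathInfo().substring(1));
        }

        @Override
        protected void doPut(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Containers don't parse form parameters for PUT, so read the raw body yourself.
            String body = req.getReader().lines().collect(Collectors.joining("\n"));
            libraryService.update(idFromPath(req), body);
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }

        @Override
        protected void doDelete(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            libraryService.delete(idFromPath(req));
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }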
On the client side, Angular's HttpClient supports all the methods you need; you could also use jQuery's ajax, or again take the WebSocket approach on the client, as long as the browser plays along.
If it is a Java client, there are again web sockets (though you'd want to send a valid HTTP request instead of "Hello", as done in the example¹), HttpURLConnection, and Apache HttpClient, at first glance.
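For a plain Java client, a rough HttpURLConnection sketch (the URL is invented and matches nothing in particular):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class LibraryClient {
        public static void main(String[] args) throws Exception {
            // DELETE /libraries/42 -- hypothetical endpoint
            HttpURLConnection delete = (HttpURLConnection)
                    new URL("http://localhost:8080/app/libraries/42").openConnection();
            delete.setRequestMethod("DELETE");
            System.out.println("DELETE status: " + delete.getResponseCode());

            // PUT /libraries/42 with a small body
            HttpURLConnection put = (HttpURLConnection)
                    new URL("http://localhost:8080/app/libraries/42").openConnection();
            put.setRequestMethod("PUT");
            put.setDoOutput(true);
            try (OutputStream out = put.getOutputStream()) {
                out.write("name=Central Library".getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("PUT status: " + put.getResponseCode());
        }
    }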
As for testing, there are browser extensions that let you compose requests other than GET and POST, SoapUI is a very popular tool, and IntelliJ even has an HTTP client that lets you type the complete request as plain text.
¹ If you have a hand-crafted server that agrees to such a "Hello" protocol, that's fine too. It's just not HTTP.
Since, as it turned out, there was quite some confusion in the question, I'd like to promote a bit further the idea of writing your own HTTP client/server with WebSockets. You have ready-made examples for the socket stuff, so that's easy.
What's more interesting is that you're then at the raw TCP/IP level that HTTP is built on: all the framework indirection (servlets, whatever) drops away, and you're basically left with a single text (well, a byte stream) that is the HTTP request. All the magic is gone; things are obvious and simple, just as specified.
I'm spending some time with DynaTrace.
I'm impressed by its cross-JVM instrumentation feature.
In simple words, DynaTrace is able to instrument Java code, creating traces with some statistical information. This is nothing new.
There is a really interesting feature: when a call to an external JVM is executed (e.g. a remote session bean, a web service, remote RMI and so on), DynaTrace is able to link the new trace to the caller's trace.
How could it be possible?
I'm not able to imagine how to implement this feature. Any ideas?
Thank you
Dynatrace actually doesn't rely on information from beans. As you correctly said in your question, we use byte code instrumentation, just like other tools on the market. We instrument key methods of certain frameworks, e.g. Servlet, Axis, JMS, JDBC, ...
In the scenario where you make a call from one JVM to another using, e.g., HTTP-based communication, we instrument both the sending side of the HTTP request and the receiving side on the other JVM. On the sending side we attach an additional HTTP header with the ID of the current PurePath. PurePath is our patented technology. Every PurePath (= every single transaction) gets a unique ID. This ID "travels" with the request, e.g. we put it on the HTTP request as an HTTP header. On the receiving side - your second JVM - we inspect that HTTP header and therefore know that all the data we collect belongs to that PurePath. This allows us to do real end-to-end tracing without relying on things like beans and without correlating the data based on, e.g., timestamps.
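Stripped of all Dynatrace internals, the tagging idea itself can be sketched in a few lines: the calling JVM adds a unique ID as an extra header, and the called JVM reads it back and attaches its measurements to the same ID. This is only an illustration of the concept, not Dynatrace's implementation; the header name and classes are invented:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.UUID;
    import javax.servlet.http.HttpServletRequest;

    public class CorrelationDemo {

        // Sending JVM: tag the outgoing request with a unique trace ID (made-up header name).
        public static String callRemote(String url) throws Exception {
            String traceId = UUID.randomUUID().toString();
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestProperty("X-Trace-Id", traceId);
            conn.getResponseCode();                 // fire the request
            return traceId;
        }

        // Receiving JVM: read the tag and associate everything measured here with it.
        public static String onIncomingRequest(HttpServletRequest request) {
            String traceId = request.getHeader("X-Trace-Id");
            // ... attach timings, SQL calls, exceptions, etc. to traceId ...
            return traceId;
        }
    }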
Makes sense?
If you have more questions, let me know. I also recorded some videos and put them on YouTube explaining the technology and the product itself: http://bit.ly/dttutorials
This information is normally extracted using MXBeans. Such beans provide a standard API for accessing standard runtime information. Similarly, such applications often scan the class loaders for specific classes and extract relevant information by hard-coded access. This is why less popular solutions are often not supported by monitoring tools.
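For reference, here is roughly what the MXBean route looks like with the standard java.lang.management API; nothing tool-specific, just the JDK:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.RuntimeMXBean;
    import java.lang.management.ThreadMXBean;

    public class MxBeanProbe {
        public static void main(String[] args) {
            RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            // Standard runtime information every JVM exposes through these beans.
            System.out.println("JVM: " + runtime.getVmName() + " " + runtime.getVmVersion());
            System.out.println("Uptime (ms): " + runtime.getUptime());
            System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
            System.out.println("Live threads: " + threads.getThreadCount());
        }
    }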
I have made a Java EE 6 application where a user can browse a set of questions, add new questions and so on. The user can optionally log in so that he/she gets "credit" for adding the question or reporting it as bad.
Now I want to make an iPhone application where the user can do pretty much the same. So the answer is a web service, I assume. I have not worked with web services before, but I see there are at least two alternatives: SOAP and REST.
Which one should I choose? I want the user to be able to log in from the application as well as browse the questions in the database... pretty much most of the actions you can do on the web site.
I don't know much about the security and overhead they introduce.
Also, I want the user to be able to retrieve the list of questions through the web server and have the option to save it, so he/she won't need internet access unless he/she wants to update it. Can I achieve this with both kinds of web services?
REST has less overhead than SOAP (WSDL contract, XML messages, supporting frameworks) so when the client is a mobile device REST seems more suitable. You could use JAX-RS (Jersey) to easily create REST services on the server side. The client request consists of the url structure and/or parameters like http://yourserver/questions/view/342 (to view question 342) or http://yourserver/questions/search?q=REST+vs+SOAP (to search for questions about REST vs SOAP). The response can be anything you want, but XML or JSON is pretty common.
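A bare-bones Jersey/JAX-RS sketch of such a questions resource; the paths match the example URLs above, while the Question type and the DAO are placeholders for whatever model you already have:

    import java.util.List;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.core.MediaType;

    @Path("/questions")
    @Produces(MediaType.APPLICATION_JSON)
    public class QuestionsResource {

        // Assumed persistence helper; substitute your own EJB or DAO.
        private final QuestionDao questionDao = new QuestionDao();

        @GET
        @Path("/view/{id}")
        public Question view(@PathParam("id") long id) {
            return questionDao.find(id);          // serialized to JSON by the provider
        }

        @GET
        @Path("/search")
        public List<Question> search(@QueryParam("q") String query) {
            return questionDao.search(query);
        }
    }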
Choosing REST means you will be leaning heavily on the HTTP protocol. For security, a common approach is to use HTTP Basic authentication in combination with HTTPS. Basic authentication means you add an 'Authorization:' header to your HTTP request containing a Base64-encoded username:password pair. Note that Base64 does not encrypt anything, it just obfuscates. To avoid eavesdropping you need to use at least HTTPS, meaning the connection is encrypted via TLS (the server's certificate and public key are used during the handshake to establish the encrypted session). To use HTTPS you need to set up the server with a certificate. If you want to avoid warnings about the certificate being 'untrusted', it needs to be issued by a recognized SSL certificate provider. For testing you can just generate one yourself.
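Building that header on the client is just Base64 over "username:password"; a small sketch with made-up credentials and URL:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class BasicAuthExample {
        public static void main(String[] args) throws Exception {
            HttpURLConnection connection = (HttpURLConnection)
                    new URL("https://yourserver/questions/view/342").openConnection();

            // Base64-encode "username:password" -- remember this is encoding, not encryption.
            String credentials = "alice:secret";
            String encoded = Base64.getEncoder()
                    .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
            connection.setRequestProperty("Authorization", "Basic " + encoded);

            System.out.println("Status: " + connection.getResponseCode());
        }
    }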
Finally you asked about saving a list of questions for offline usage. This is a concern of the app, not of the service. To do this you need to store the retrieved data on the device and access that data if the device goes offline. I am not an iPhone developer, but I can imagine you could use a flat file or some lightweight database to store the data. When the device is offline, the app component that retrieves data should switch from network access to local storage access. Also some app functionalities like adding a question might need to be disabled. If you don't disable these, you would need to temporarily store any data entered by the user and send it to the server when the device comes online again. This could be a bit tricky to get right so my advice would be to leave this for later.
You can take a look at this previous SO post for some guidance. I would recommend using REST; it seems to be less messy than SOAP, and Java has support available for it, as shown here.
Through the use of annotations, you can simply create a facade to which users will connect. In turn, this facade will call the relevant logic, which I am presuming you already have.
Well, with a simple search for "REST vs SOAP", you will eventually get to this.
There are plenty of other articles and even in-depth research papers, so it's only a matter of whether you really want to get serious with your research or not.
Good luck!
Short answer: Yes, you can achieve that with web services.
Web services are only a facade to your system: they can expose (or not) any behavior you want. If you have security concerns, you'll have to address them either way, with both approaches.
Personally, I'd use a RESTful approach, as it's usually simpler to implement and use. From Wikipedia:
A RESTful web service (also called a RESTful web API) is a simple web service implemented using HTTP and the principles of REST. It is a collection of resources, with four defined aspects:
the base URI for the web service, such as http://example.com/resources/
the Internet media type of the data supported by the web service. This is often JSON, XML or YAML but can be any other valid Internet media type.
the set of operations supported by the web service using HTTP methods (e.g., GET, PUT, POST, or DELETE).
The API must be hypertext driven.[11]
So you'd have a URL, say http://mywebsite.com/users and perform HTTP actions (GET, PUT, etc) on them. A GET request on /users/17 could return user 17, for instance, while a POST request on it would update said user.
As for login, when your users "log in" you would call a GET method that sends username:password (probably encrypted) and returns a login token. Every time the user executes an action, you would send said token with the request as an additional parameter.
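A rough JAX-RS sketch of that token idea, with an invented in-memory token store and placeholder credential check; in a real application you would run this over HTTPS and likely use an established authentication scheme instead of rolling your own:

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.WebApplicationException;

    @Path("/auth")
    public class AuthResource {

        // Hypothetical in-memory token store mapping token -> username.
        private static final Map<String, String> TOKENS = new ConcurrentHashMap<>();

        @GET
        @Path("/login")
        public String login(@QueryParam("user") String user,
                            @QueryParam("password") String password) {
            if (!"secret".equals(password)) {      // placeholder credential check
                throw new WebApplicationException(401);
            }
            String token = UUID.randomUUID().toString();
            TOKENS.put(token, user);
            return token;                          // client stores this and sends it back later
        }

        @GET
        @Path("/questions")
        public String listQuestions(@QueryParam("token") String token) {
            if (!TOKENS.containsKey(token)) {
                throw new WebApplicationException(401);
            }
            return "[ questions visible to " + TOKENS.get(token) + " ]";
        }
    }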
I am working on a school project in which I query and receive some fairly large XML documents from a central server. This was fine in the beginning, as I was rarely making these requests (HTTP GET), but as the project progressed I came up with more things to do with this data, and now I have servlets requesting 3 or 4 XML documents, each in its own separate GET request, which is causing page generation times upwards of 25 seconds.
It's not possible to change the way the data is served, nor the way in which it's requested, as I have a fairly large code base and it's not as decoupled as it perhaps should have been.
Is there a smart way to listen in on when my servlets execute these GET requests, intercept them, and perhaps supply them with a local, cached version instead? The data is not THAT volatile, so 'live' data is not needed.
So far, I have not been able to find information on listening on OUTgoing requests made by Tomcat...
I think a lot will depend upon your cache hit ratio. If the same 3-4 documents (or some small group of documents) are being requested on a regular basis, a local caching proxy server (like Squid) might be a possibility. Java can be configured to use a proxy server for HTTP requests.
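Pointing the JVM at such a proxy only takes the standard networking properties; a short sketch, assuming Squid on localhost with its default port 3128 and an invented central-server URL:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ProxyConfig {
        public static void main(String[] args) throws Exception {
            // Equivalent to passing -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 to the JVM.
            System.setProperty("http.proxyHost", "localhost");
            System.setProperty("http.proxyPort", "3128");

            // Subsequent HttpURLConnection requests now go through the caching proxy.
            URL url = new URL("http://central-server/questions.xml");   // hypothetical document
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            System.out.println("Status: " + conn.getResponseCode());
        }
    }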
You can probably implement this using an HttpFilter. It can be used as a cache: if the requested document is already in the cache, just return it; if not, forward the HTTP request to your servlet.
I ended up using a ContextListener to load most of the data at start-up, together with an 'expiry date', into the servlet context attributes. It makes for some slow start-ups (9 GET requests to the central server!) but drastically reduces our page load times.
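For anyone doing the same, a hedged sketch of that approach; the URL, attribute names and one-hour expiry are invented, and the fetch method stands in for whatever GET code you already have:

    import java.io.IOException;
    import java.net.URL;
    import java.util.Scanner;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;

    @WebListener
    public class XmlPreloadListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // Fetch the documents once at start-up and cache them in the servlet context.
            String questionsXml = fetch("http://central-server/questions.xml");  // hypothetical URL
            sce.getServletContext().setAttribute("questionsXml", questionsXml);
            sce.getServletContext().setAttribute("questionsXmlExpiry",
                    System.currentTimeMillis() + 60 * 60 * 1000L);               // refresh after 1 hour
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // nothing to clean up
        }

        private String fetch(String url) {
            // Stand-in for the existing HTTP GET code that retrieves an XML document as a string.
            try (Scanner s = new Scanner(new URL(url).openStream(), "UTF-8").useDelimiter("\\A")) {
                return s.hasNext() ? s.next() : "";
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }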
GWT RPC is proprietary but looks solid, is supported with patterns by Google, and is mentioned by every book and tutorial I've seen. Is it really the choice for GWT client/server communication? Do you use it, and if not, why, and what did you choose? I assume that I have generic server application code that can accommodate RPC, EJBs, web services/SOAP, REST, etc.
Bonus question: any security issues with GWT RPC I need to be aware of?
We primarily use three methods of communication:
GWT-RPC - This is our primary and preferred mechanism, and what we use whenever possible. It is the "GWT way" of doing things, and works very well.
XMLHttpRequest using RequestBuilder - This is typically for interaction with non-GWT back ends, and we use it mainly to pull in static web content that we need at runtime (something like server-side includes). It is especially useful when we need to integrate with a CMS. We wrap our RequestBuilder code in a custom "Panel" that takes a content URI as its constructor parameter and populates itself with the contents of the URI; see the sketch after this list.
Form submission using FormPanel - This also requires interaction with a non-GWT back end (a custom servlet), and is what we currently use for cross-site communication. We don't really communicate "cross site" per se, but we do sometimes need to send data over SSL on a non-SSL page, and this is the only way we've been able to do it so far (with some hacks).
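As mentioned in the RequestBuilder item above, a rough sketch of the self-populating panel; the class name and URI handling are invented and error handling is kept minimal:

    import com.google.gwt.http.client.Request;
    import com.google.gwt.http.client.RequestBuilder;
    import com.google.gwt.http.client.RequestCallback;
    import com.google.gwt.http.client.RequestException;
    import com.google.gwt.http.client.Response;
    import com.google.gwt.user.client.ui.HTML;
    import com.google.gwt.user.client.ui.SimplePanel;

    // Hypothetical panel that fills itself with static content fetched from a URI.
    public class ContentPanel extends SimplePanel {

        public ContentPanel(String contentUri) {
            RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, contentUri);
            try {
                builder.sendRequest(null, new RequestCallback() {
                    @Override
                    public void onResponseReceived(Request request, Response response) {
                        setWidget(new HTML(response.getText()));
                    }

                    @Override
                    public void onError(Request request, Throwable exception) {
                        setWidget(new HTML("Failed to load content"));
                    }
                });
            } catch (RequestException e) {
                setWidget(new HTML("Failed to load content"));
            }
        }
    }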
The problem is that you are in a web browser, so any non-HTTP protocol is pretty much not guaranteed to work (it might not get through a proxy).
What you can do is isolate the GWT-RPC stuff in a single replaceable class and strip it off as soon as possible.
Personally, I'd just rely on transferring a collection of objects with the information I need encoded inside the collection; that way there is very little RPC code, because all your RPC code ever does is "Collection commands = getCollection()". But there would be a million other possibilities.
Or just use GWT-RPC as it was intended, I don't think it's going anywhere.