In trying to serve GWT permutations out of the blob store, in order to escape the AppEngine hard limit of 150 MB for static files, I've succeeded in doing so for HTML files, image files (jpeg, png, etc.), and other .rpc calls, but am hung up on XSRF calls.
In the server logs, I see:
The serialization policy file '/theapplication/CCA65B31464BDB27545C23C142FEEEF8.gwt.rpc' was not found;
My upload log shows it was uploaded /CCA65B31464BDB27545C23C142FEEEF8.gwt.rpc : HTTP/1.1 200 OK
The request URL shows http://14.applicationXYZ.appspot.com/xsrf
The RequestPayload shows: http://14.applicationXYZ.appspot.com/theapplication/|CCA65B31464BDB27545C23C142FEEEF8|com.google.gwt.user.client.rpc.XsrfTokenService|getNewXsrfToken|1|2|3|4|0|
Other RPC calls are resolving (via a server filter that looks for /theapplication and maps the requests to a blob to serve), as in the following case where an RPC call is made without an XSRF request (the user is not logged in yet):
req url -- http://14.applicationXYZ.appspot.com/someRPCCall
RequestPayload -- http://14.applicationXYZ.appspot.com/theapplication/|62D7E6737056C685E10947B640409549|com.abc.client.rpc.Service|doWork|java.lang.String/2004016611|java.lang.Boolean/476441737|wwwerr|1|2|3|4|3|5|5|6|7|7|6|0|
So, I have a few questions:
1) Why is the XSRF call failing to return the appropriate blob, i.e. why doesn't the XSRF call get handled by the filter the way other URL calls to /theapplication/* do?
2) What can I do about it?
3) Also, I tried setting the content type to "text/x-gwt-rpc; charset=UTF-8" and also leaving it unspecified when I uploaded the blob. Does anyone know what the content type should be for *.gwt.rpc files, in case I do get the XSRF call working? Could having the wrong content type be causing the trouble?
***Note: applicationXYZ is not the real name, so no, the links won't work.
OK, /xsrf is mapped to a servlet as well, so if the filter returns a blob without passing the request down the filter chain, it seems it will never reach the servlet.
Anyway, it's easy enough just to upload the few .rpc files as normal and not serve them as blobs.
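For reference, a minimal sketch of the kind of blob-serving filter described above, assuming a hypothetical lookupBlobKey() that maps an uploaded path to its BlobKey; anything that matches the /theapplication/ prefix is served from the blobstore and never reaches a servlet, while /xsrf and the other servlet URLs fall through the chain:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;
import com.google.appengine.api.blobstore.*;

public class BlobServingFilter implements Filter {
    private final BlobstoreService blobstore = BlobstoreServiceFactory.getBlobstoreService();

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String path = request.getRequestURI();
        if (path.startsWith("/theapplication/")) {
            BlobKey key = lookupBlobKey(path);            // hypothetical path -> BlobKey mapping
            if (key != null) {
                blobstore.serve(key, (HttpServletResponse) res);
                return;                                   // request never reaches any servlet
            }
        }
        chain.doFilter(req, res);                         // /xsrf and other servlet URLs pass through
    }

    private BlobKey lookupBlobKey(String path) { /* e.g. a datastore lookup */ return null; }
    public void init(FilterConfig cfg) {}
    public void destroy() {}
}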
I am building a REST API using Spring and implementing the PUT functionality. I am trying to handle the scenario in which the client tries to PUT to a URI where the resource does not already exist. In this scenario, per the PUT spec, a new resource should be created at that ID. However, because of the ID generation strategy I am using (@GeneratedValue(strategy = GenerationType.IDENTITY)), I cannot create resources with IDs out of sequence. The database must use the next available value. However, according to the w3 spec on PUT...
If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.
If the server desires that the request be applied to a different URI, it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request.
In this case, I can do neither of these. I cannot create a new resource at the requested URI due to the ID generation restrictions, and I cannot send a 301 Moved Permanently response because, according to "How do I know the id before saving an object in jpa", it is impossible to know the next ID in a sequence before actually persisting the object. So I would have no way of telling the client what URI to redirect to in order to properly create the new resource.
I would imagine this problem has been solved many times over because it is the standard PUT functionality, yet I am having trouble finding any other people who have tried to do this. It seems most people just ignore the "create new resource" part of PUT, and simply use it as update only.
What I want to do is just go ahead and create the new resource, and then send the 301 Moved Permanently to redirect the client to the true location of the created resource - but as we see above, this violates the definition of PUT.
Is there a spring-y way to solve this problem? Or is the problem unsolved, and the true standard practice is to simply not allow creation of new resources via PUT?
If the server can't process the request due to an error in the request, just return a 400.
400 Bad Request -
The server cannot or will not process the request due to an apparent client error (e.g., malformed request syntax, size too large, invalid request message framing, or deceptive request routing).
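A minimal sketch of that approach in Spring, assuming a hypothetical Widget entity and WidgetRepository; the PUT only updates existing resources and answers unknown IDs with a 400 (a 404 or 409 would be equally defensible):

import javax.persistence.*;
import org.springframework.data.repository.CrudRepository;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@Entity
class Widget {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}

interface WidgetRepository extends CrudRepository<Widget, Long> {}

@RestController
@RequestMapping("/widgets")
class WidgetController {
    private final WidgetRepository repository;
    WidgetController(WidgetRepository repository) { this.repository = repository; }

    @PutMapping("/{id}")
    public ResponseEntity<Widget> update(@PathVariable Long id, @RequestBody Widget body) {
        return repository.findById(id)
                .map(existing -> {
                    body.setId(id);                 // keep the identity implied by the URI
                    return ResponseEntity.ok(repository.save(body));
                })
                // IDENTITY-generated keys mean we can't create at an arbitrary URI,
                // so refuse to create and return 400 instead.
                .orElseGet(() -> ResponseEntity.badRequest().build());
    }
}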
I'm building a Java REST API using JAX-RS, and to complete a GET request for a zip file I need a rather sizeable chunk of JSON. I'm not terribly experienced with REST, but I do know that GET requests shouldn't have a request body and a POST shouldn't be returning a resource. So I guess my question is: how do I complete a request that contains JSON (currently in the message body) and expects a zip file in the response, while keeping the application RESTful? It may be worth noting that the JSON could also contain a password.
I have used POST for similar scenarios. This is a common scenario for SEARCH operations where there is a need to send JSON data in the request. Though using POST for getting an object is not strictly in line with REST conventions, I found it to be the most suitable of the available options.
You can send a body in GET requests, but that is not supported by all frameworks/tools/servers. This link discusses that in detail.
If you use POST for the operation, you can use HTTPS to send confidential information in the body.
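A minimal JAX-RS sketch of that, assuming a hypothetical SearchCriteria bean for the JSON body and a hypothetical buildZipFor() that assembles the archive; the POST consumes JSON and streams back a zip, and since the body may carry a password it should go over HTTPS:

import java.io.File;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/archives")
public class ArchiveResource {

    // Hypothetical JSON-bound request body; a JSON provider (e.g. Jackson) maps it.
    public static class SearchCriteria {
        public String query;
        public String password;   // sent in the body, hence HTTPS
        public int chunkSize;
    }

    @POST
    @Path("/search")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces("application/zip")
    public Response search(SearchCriteria criteria) {
        File zip = buildZipFor(criteria);   // hypothetical: assemble the archive server-side
        return Response.ok(zip)
                .header("Content-Disposition", "attachment; filename=\"result.zip\"")
                .build();
    }

    private File buildZipFor(SearchCriteria criteria) {
        // e.g. collect matching files and zip them up; omitted in this sketch
        return new File("/tmp/result.zip");
    }
}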
You can think of your REST API as exposing a virtual file system, where the zip file you mentioned is just one resource in that VFS and files in a certain directory represent queries against that file system. Then you can create a new query object by sending a POST request to the queries directory, specifying all the query parameters you need, such as chunk size and the path of the zip file in the VFS.
The virtual file system I am referring to is actually a directory containing other directories and files that can represent real files on the disk or metadata records in a database.
For example, say you start with the following directory layout in the VFS:
/myvfs
/files
/archive.zip
/queries
To download the archive.zip file you can send a simple GET request:
// Request:
GET /myvfs/files/archive.zip
But this will stream the entire file at once. In order to break it into parts, you can create a query in which you ask to download chunks of 1MB:
// Request:
POST /myvfs/queries/archive.zip
{
    "chunk_size": 1048576
}
// Response:
{
    "query_id": 42,
    "chunks": 139
}
The new query lives at the address /myvfs/queries/archive.zip/42 and can be deleted by sending a DELETE request to that URL.
Now, you can download the zip file in parts. Note that the creation of the query does not actually create smaller files for each part; it only provides information about the offsets and sizes of the chunks, which can be persisted anywhere, from RAM to databases or plain text files.
To download the first 1MB chunk of the zip file, you can send a GET request:
GET /myvfs/queries/archive.zip/42/0
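If it helps, here is a rough JAX-RS sketch of the chunk endpoint behind that URL, assuming the chunk size was stored when the query was created; the class, path layout, and resolveRealFile() lookup are all made up for illustration:

import java.io.File;
import java.io.RandomAccessFile;
import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("/myvfs/queries/{file}/{queryId}")
public class ChunkResource {

    private static final long CHUNK_SIZE = 1_048_576L;   // matches the 1MB example above

    @GET
    @Path("/{chunkIndex}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public Response chunk(@PathParam("file") String file,
                          @PathParam("queryId") long queryId,
                          @PathParam("chunkIndex") long chunkIndex) throws Exception {
        File zip = resolveRealFile(file, queryId);        // hypothetical VFS lookup
        long offset = chunkIndex * CHUNK_SIZE;
        int length = (int) Math.min(CHUNK_SIZE, zip.length() - offset);
        if (length <= 0) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        byte[] part = new byte[length];
        try (RandomAccessFile raf = new RandomAccessFile(zip, "r")) {
            raf.seek(offset);
            raf.readFully(part);                          // read exactly this chunk's bytes
        }
        return Response.ok(part).build();
    }

    private File resolveRealFile(String file, long queryId) {
        // hypothetical: look up the query's target file and chunk metadata
        return new File("/data/" + file);
    }
}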
As a final note, you should also be aware that the query resource can be modeled to accommodate other scenarios, such as dynamic ranges of a certain file.
P.S. I am aware that the answer is not as clear as it should be, and I apologize for that. I will try to come back and refine it as time permits.
I have a Jersey RESTful web service under Glassfish which accepts incoming POST requests for uploading images, consuming MediaType.MULTIPART_FORM_DATA, which maps to the multipart MIME type.
When I receive the FormDataContentDisposition fileDetail instance in my service and call fileDetail.getSize(), I always get -1.
I wonder what the appropriate way is to fetch the correct file size using Jersey and multipart file upload.
I also encountered this problem and am still looking for the root cause and a proper solution.
Just a workaround: save the file temporarily and use the File#length() method to get the real size of the uploaded file.
More of an update:
As a complement to the above method, keep monitoring the number of bytes saved and throw an exception (or do whatever your policy requires) when the file size threshold is reached.
What's more, don't trust a size given directly by the client, which could be faked, unless the client is trusted.
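A minimal sketch of that workaround, assuming Jersey 2.x multipart packages, a form field named "file", and a hypothetical 10 MB policy; the stream is copied to a temp file while counting bytes, and the temp file's length is the real size even when getSize() reports -1:

import java.io.*;
import javax.ws.rs.*;
import javax.ws.rs.core.*;
import org.glassfish.jersey.media.multipart.FormDataContentDisposition;
import org.glassfish.jersey.media.multipart.FormDataParam;

@Path("/images")
public class ImageUploadResource {

    @POST
    @Path("/upload")
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public Response upload(@FormDataParam("file") InputStream in,
                           @FormDataParam("file") FormDataContentDisposition fileDetail)
            throws IOException {
        long maxBytes = 10L * 1024 * 1024;                 // hypothetical server-side threshold
        File tmp = File.createTempFile("upload-", ".tmp");
        long written = 0;
        try (OutputStream out = new FileOutputStream(tmp)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                written += n;
                if (written > maxBytes) {                  // enforce the policy while streaming
                    throw new WebApplicationException(Response.Status.REQUEST_ENTITY_TOO_LARGE);
                }
                out.write(buf, 0, n);
            }
        }
        // tmp.length() now holds the real size; fileDetail.getSize() may still be -1
        return Response.ok("size=" + tmp.length()).build();
    }
}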
I have been wondering if it's possible to anonymize a public URL. When a user makes a request with this anonymized public URL, let Nginx decode, fetch, and serve the URL.
Example
Public URL http://amazon.server.com/location/file.html
Anonymized URL https://amazon.server.com/09872340-932872389-390643289/983724.html
Nginx decodes 09872340-932872389-390643289/983724.html to location/file.html
Nginx has the reverse logic to decode, whereas the remote server has the logic to anonymize the URL.
Question
All I need to know is: how would Nginx decode the anonymized URL? Nginx gets the anonymized URL request; there has to be a way to decode it.
This is an answer to the updated question:
Question: All I need to know is: how would Nginx decode the anonymized URL? Nginx gets the anonymized URL request; there has to be a way to decode it.
Nginx would make a request to a script, e.g., either through proxy_pass or fastcgi_pass et al.
The script could decode the URL and provide the actual URL through a Location HTTP Response Header with a 302 Found HTTP Status.
Nginx would then have the decoded URL stored in the $upstream_http_location variable. It could subsequently be used in another proxy_pass et al. within a named location @named, to which you could redirect the processing of the original request from the user through error_page 302 = @named.
In all, each user request would be processed twice within nginx, but it'll all be transparent to the user -- they simply receive the resource through the original URL, with all redirects being done internally within nginx.
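A rough sketch of that configuration, assuming a hypothetical decoder backend on 127.0.0.1:8080 that answers with a 302 whose Location header carries the decoded URL (a resolver is needed because proxy_pass is given a variable):

location / {
    proxy_pass http://127.0.0.1:8080;         # decoder script (proxy_pass/fastcgi_pass et al.)
    proxy_intercept_errors on;                # let error_page handle the upstream 302
    error_page 302 = @decoded;
}

location @decoded {
    resolver 8.8.8.8;                         # required when proxy_pass uses a variable
    proxy_pass $upstream_http_location;       # decoded URL returned by the decoder backend
}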
Define "anonymize" for a URL? You can use any of the same methods as URL shorteners such as http://bitly.com. But that is not truly anonymous, since there is a definite mapping between the shortened URL and the target public URL. If you make this per-user, there is still a mapping, but it is user-based.
It looks like what you are suggesting is a variation on the above scheme where, instead of sending the user to the target URL via a redirect, you want your server to actually fetch the content and return it to the user. You need to be aware of the linked content in the public URL, such as style sheets and images, and adjust them accordingly. Many of the standard proxies have this kind of functionality built in. Also take a look at
https://github.com/jenssegers/php-proxy
http://search.cpan.org/~book/HTTP-Proxy-0.304/lib/HTTP/Proxy.pm.
If you are planning to build your own, these can serve as a base.
I think what you want to do here is somewhat similar to another question I've answered in the past, where for each request by the client, you effectively want to make two requests to two different upstreams under the hood (first one to an upstream capable of decoding the URL, second one to actually fetch said decoded URL), but, of course, only return one result.
https://serverfault.com/questions/202011/nginx-and-2-upstreams/485044#485044
As mentioned on serverfault, you could use error_page to process another request, after the first one is complete. You could then use $upstream_http_ to make the subsequent request based on the original one, for example, using $upstream_http_location.
You might also want to look into the X-Accel-Redirect header, introduced in this context at proxy_ignore_headers.
I am working on a Web application and need to pass data across HTTP redirects. For example:
http://foo.com/form.html
POSTs to
http://foo.com/form/submit.html
If there is an issue with the data, the Response is redirected back to
http://foo.com/form.html?error=Some+error+message
and the query param "error"'s value is displayed on the page.
Is there any other reliable way to pass data across redirects (i.e. HTTP headers, etc.)?
Passing the data as query params works but isn't ideal because:
it's cleartext (and in the query string, so SSL can't be relied on to encrypt it), so I wouldn't want to pass sensitive data
URIs are limited in length by the browser (albeit the length is generally fairly long).
IMPORTANT: This platform is stateless and distributed across many app servers, so I can't track the data in a server-side session object.
From the client-server interaction point of view, this is a server internal dispatch issue.
Browsers are not meant to re-post the entity of the initial request automatically according to the HTTP specification: "The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD."
If it's not already the case, make form.html dynamic rather than a static HTML file. Send the POST request to itself and pre-fill the values in case of error. Alternatively, you could make submit.html use the same template as form.html if there is a problem.
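A minimal servlet-style sketch of that idea (all names are made up); the form posts to itself, and on validation failure the same handler re-renders the form with the error and the previously entered value, so no redirect or query string is involved:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class FormServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        render(resp, "", null);                        // first visit: empty form, no error
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String email = req.getParameter("email");
        if (email == null || !email.contains("@")) {
            render(resp, email, "Some error message");  // re-render in place, no redirect
        } else {
            resp.sendRedirect("/form/success.html");    // only redirect on success (PRG)
        }
    }

    private void render(HttpServletResponse resp, String email, String error)
            throws IOException {
        resp.setContentType("text/html");
        if (error != null) {
            resp.getWriter().println("<p>" + error + "</p>");
        }
        resp.getWriter().println(
            "<form method='post' action=''>"
            + "<input name='email' value='" + (email == null ? "" : email) + "'/>"
            + "<button>Submit</button></form>");
    }
}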
it's cleartext (and in the query string, so SSL can't be relied on to encrypt it), so I wouldn't want to pass sensitive data
I'm not sure what the issue is here. You're submitting everything over plain HTTP anyway. Cookies, query parameters and the request entity will all be visible. Using HTTPS would actually protect all of this, although query parameters can still be an issue with browser history and server logs (those aren't part of the connection, which is what TLS protects).
I think using cookies would be a reasonable solution, depending on the amount of data, since you can't track it on the server side (by using a session, for example, which would be much simpler).
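A minimal sketch of the cookie approach with the servlet API (names made up): the failing handler sets a short-lived cookie and redirects, and the form page's handler reads the message, displays it, and expires the cookie so it shows only once:

import java.io.IOException;
import java.net.URLDecoder;
import java.net.URLEncoder;
import javax.servlet.http.*;

public class FlashCookie {

    // Called by submit.html's handler before redirecting back to form.html.
    static void set(HttpServletResponse resp, String message) throws IOException {
        Cookie flash = new Cookie("flash_error", URLEncoder.encode(message, "UTF-8"));
        flash.setPath("/");
        flash.setMaxAge(60);            // short-lived; only needs to survive the redirect hop
        resp.addCookie(flash);
        resp.sendRedirect("http://foo.com/form.html");
    }

    // Called by form.html's handler to read and clear the message.
    static String consume(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        if (req.getCookies() == null) return null;
        for (Cookie c : req.getCookies()) {
            if ("flash_error".equals(c.getName())) {
                Cookie expired = new Cookie("flash_error", "");
                expired.setPath("/");
                expired.setMaxAge(0);   // delete it so the message is shown only once
                resp.addCookie(expired);
                return URLDecoder.decode(c.getValue(), "UTF-8");
            }
        }
        return null;
    }
}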
You can store the error message in a database on the server and reference it by id:
http://foo.com/form.html?error_id=42
If the error texts are fixed, you don't even need a database.
Also, you can use Web Storage. Instead of redirecting with a "Location" header, you can display an output page with this JavaScript:
var error_message = "Something is wrong";
if (typeof(Storage) !== "undefined") {
    localStorage.error_message = error_message;
} else {
    // fallback for browsers without Web Storage (e.g. IE < 8)
    alert(error_message);
}
location.href = "new url";
And after redirection you can read localStorage.error_message using JavaScript and display the message.