I'm building a Java REST API using JAX-RS, and to complete a GET request for a zip file I need a rather sizeable chunk of JSON. I'm not terribly experienced with REST, but I do know that GET requests shouldn't have a request body and a POST shouldn't return a resource. So I guess my question is: how do I complete a request that contains JSON (currently in the message body) and expects a zip file in the response, while keeping the application RESTful? It may be worth noting that the JSON could also contain a password.
I have used POST for similar scenarios. This is common for SEARCH-style operations where there is a need to send JSON data in the request. Though using POST to retrieve an object is not strictly per REST conventions, I found it to be the most suitable of the options available.
You can send a body in GET requests, but that is not supported by all frameworks/tools/servers. This link discusses that in detail.
If you use POST for the operation, you can use HTTPS to send the confidential information (such as the password) in the body.
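A minimal JAX-RS sketch of that POST-for-retrieval pattern, assuming a hypothetical ZipRequest bean that your JSON provider binds the body to and a hypothetical buildZip() helper that assembles the archive:

import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/archives")
public class ArchiveResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces("application/zip")
    public Response createArchive(ZipRequest request) {
        // buildZip() stands in for whatever assembles the zip from the request parameters
        byte[] zipBytes = buildZip(request);
        return Response.ok(zipBytes)
                .header("Content-Disposition", "attachment; filename=\"result.zip\"")
                .build();
    }
}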
You can think of your REST API as exposing a virtual file system (VFS), where the zip file you mentioned is just one resource, and files in a certain directory represent queries against that file system. You can then create a new query object by sending a POST request to the queries directory, specifying all the query parameters you need, such as the chunk size and the path of the zip file in the VFS.
The virtual file system I am referring to is actually a directory containing other directories and files, which can represent real files on disk or metadata records in a database.
For example, say you start with the following directory layout in the VFS:
/myvfs
  /files
    /archive.zip
  /queries
To download the archive.zip file you can send a simple GET request:
// Request:
GET /myvfs/files/archive.zip
But this will stream the entire file at once. To break it into parts, you can create a query saying you want to download chunks of 1 MB:
// Request:
POST /myvfs/queries/archive.zip
{
  "chunk_size": 1048576
}
// Response:
{
  "query_id": 42,
  "chunks": 139
}
The new query lives at the address /myvfs/queries/archive.zip/42 and can be deleted by sending a DELETE request to that URL.
Now, you can download the zip file in parts. Note that the creation of the query does not actually create smaller files for each part; it only records the offsets and sizes of the chunks, information that can be persisted anywhere, from RAM to a database or plain text files.
To download the first 1MB chunk of the zip file, you can send a GET request:
GET /myvfs/queries/archive.zip/42/0
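A rough JAX-RS sketch of what that chunk endpoint could look like; QueryInfo and lookupQuery() here are hypothetical stand-ins for however you persisted the query's chunk size and the real file path:

import java.io.IOException;
import java.io.RandomAccessFile;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/myvfs/queries")
public class QueryResource {

    @GET
    @Path("/{file}/{queryId}/{chunkIndex}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public Response getChunk(@PathParam("file") String file,
                             @PathParam("queryId") int queryId,
                             @PathParam("chunkIndex") long chunkIndex) throws IOException {
        QueryInfo query = lookupQuery(file, queryId); // hypothetical: persisted chunk size + real path
        try (RandomAccessFile raf = new RandomAccessFile(query.path, "r")) {
            long offset = chunkIndex * query.chunkSize;
            if (offset >= raf.length()) {
                return Response.status(Response.Status.NOT_FOUND).build();
            }
            // read only this chunk's byte range from the underlying file
            int length = (int) Math.min(query.chunkSize, raf.length() - offset);
            byte[] buffer = new byte[length];
            raf.seek(offset);
            raf.readFully(buffer);
            return Response.ok(buffer).build();
        }
    }
}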
As a final note, you should also be aware that the query resource can be modeled to accommodate other scenarios, such as dynamic ranges of a certain file.
P.S. I am aware that the answer is not as clear as it should be and I apologize for that. I will try to come back and refine it, as time permits.
For mocking, we currently record SOAP requests and responses on the file system with a specific folder structure, such as:
Request folder
  test1_request.xml
  test2_request.xml
  test3_request.xml
Response folder
  test1_response.xml
  test2_response.xml
  test3_response.xml
When we run our test suite, we first scan through these directories and store the file contents in a map, e.g.
map.put(request, response)
Once all file contents are stored in the map, we start executing our test cases. In this process we construct the SOAP request and pass it to our controller, which in turn looks it up in the map and finds the corresponding response for the request.
Now the problem is that over a period of time we have accumulated thousands of test cases and request/response pairs, which is slowing down the overall test execution. For your information, we have integrated this into our build process, so every time a build is triggered we execute all our unit tests.
Any design recommendations to improve it?
I was thinking of indexing these request/response files using Solr or Lucene, but I am not sure whether they provide any map mechanism where I pass a SOAP request and get the matching response.
In case an incoming request must be completely equal to the cached request, you can create a hash of the cached request and name the response files accordingly.
In your controller you can then compute the hash of the incoming request and find the response directly, just by opening the correct file.
Alternatively, you can create a CSV/properties file mapping request hash to response filename and just load that file. It should be much faster than always loading all requests and responses. This is more or less a simple index; Lucene/Solr would really be too much for just this use case.
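A minimal sketch of that hash index, assuming a byte-for-byte match on the request XML is what you want; the SHA-256 digest of the raw request is used as the response filename:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ResponseIndex {
    private final Path responseDir;

    public ResponseIndex(Path responseDir) {
        this.responseDir = responseDir;
    }

    // SHA-256 over the raw request body; the matching response is stored as <hash>.xml
    public String findResponse(String soapRequest) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(soapRequest.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        Path responseFile = responseDir.resolve(hex + ".xml");
        return Files.exists(responseFile)
                ? new String(Files.readAllBytes(responseFile), StandardCharsets.UTF_8)
                : null; // no recorded response for this request
    }
}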
I'm trying to generate a pre-signed URL a client can use to upload an image to a specific S3 bucket. I've successfully generated requests to GET files, like so:
GeneratePresignedUrlRequest urlRequest = new GeneratePresignedUrlRequest(bucket, filename);
urlRequest.setMethod(method);
urlRequest.setExpiration(expiration);
where expiration and method are Date and HttpMethod objects respectively.
Now I'm trying to create a URL that allows users to PUT a file, but I can't figure out how to set the maximum Content-Length. I did find information on POST policies, but I'd prefer to use PUT here; I'd also like to avoid constructing the policy JSON, though that doesn't seem possible.
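For reference, the PUT variant of the generation code is essentially the same, as sketched below (reusing bucket, filename, and expiration from above; s3Client is the AmazonS3 client instance, not shown earlier). Nothing in it lets me cap the Content-Length:

// s3Client is the AmazonS3 client (an assumption; the question's snippet doesn't show it)
GeneratePresignedUrlRequest putRequest = new GeneratePresignedUrlRequest(bucket, filename);
putRequest.setMethod(HttpMethod.PUT);
putRequest.setExpiration(expiration);
putRequest.setContentType("image/jpeg"); // pins the Content-Type the client must send, not the size
URL url = s3Client.generatePresignedUrl(putRequest);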
Lastly, an alternative answer could be some way to pass an image upload from API Gateway to Lambda so I can upload it from Lambda to S3 after validating file type and size (which isn't ideal).
While I haven't managed to limit the file size at upload time, I ended up creating a Lambda function that is triggered on upload to a temporary bucket. The function has a signature like the one below:
public static void checkUpload(S3EventNotification event) {
(this is notable because all the guides I found online refer to an S3Event class that doesn't seem to exist anymore)
The function pulls the file's metadata (not the file itself, as that potentially counts as a large download) and checks the file size. If it's acceptable, it downloads the file then uploads it to the destination bucket. If not, it simply deletes the file.
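For reference, a trimmed-down sketch of that handler; the size limit and destination bucket name are made up, and the AWS SDK v1 classes are the ones the S3EventNotification signature above comes from:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.event.S3EventNotification;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;

public class UploadChecker {
    private static final long MAX_BYTES = 5L * 1024 * 1024;         // hypothetical 5 MB limit
    private static final String DEST_BUCKET = "destination-bucket"; // hypothetical target bucket

    public static void checkUpload(S3EventNotification event) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        for (S3EventNotification.S3EventNotificationRecord record : event.getRecords()) {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();

            // Metadata only -- avoids pulling a potentially huge object just to reject it
            ObjectMetadata meta = s3.getObjectMetadata(bucket, key);
            if (meta.getContentLength() <= MAX_BYTES) {
                // Small enough: download, then upload to the destination bucket
                S3Object obj = s3.getObject(bucket, key);
                s3.putObject(DEST_BUCKET, key, obj.getObjectContent(), meta);
            }
            s3.deleteObject(bucket, key); // remove from the temporary bucket either way
        }
    }
}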
This is far from ideal, as uploads failing to meet the criteria will seem to work but then simply never show up (as S3 will issue a 200 status code on upload without caring what Lambda's response is).
This is effectively a workaround rather than a solution, so I won't be accepting this answer.
I am working on a use case where I am displaying users' messages on a JSP. Details of the flow are:
All the messages will be shown in a table with an icon for attachments.
When the user clicks on an attachment, the file should get downloaded.
If there is more than one attachment, the user can select the required one to download.
The attachments will be stored on the local filesystem and the path for the attachments will be determined by the system.
I have tried to implement this by referring to these SO questions:
Input and Output binary streams using JERSEY?
Return a file using Java Jersey
file downloading in restful web services
However, these do not solve my problem. I have the following questions:
Is it possible to send message data (like subject, message, message ID, etc.) along with the attachment (InputStream) in one response?
If yes, what does the MediaType for the @Produces annotation on my resource method need to be? Currently my resource is annotated with @Produces(MediaType.APPLICATION_JSON). Will this work?
How do I send the file data in the response?
Any pointers appreciated. TIA.
You can add custom data to the response headers, so yes, you are able to send such message data along with the file.
@Produces(MediaType.APPLICATION_JSON) will not work, unless the clients accept JSON as a file, which they should not and will not do ;)
The correct MediaType depends on what kind of file you want to submit.
You can use the default MediaType / MIME type, MediaType.APPLICATION_OCTET_STREAM / application/octet-stream (see: Is there a "default" MIME type?), but I think it's better to use the correct and exact MIME type for your file.
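To make that concrete, here is a hedged sketch of a Jersey resource method that streams the attachment as an octet stream and carries the message fields in custom headers; MessageInfo, loadMessage(), openAttachment(), and the X-Message-* header names are all made up for illustration:

import java.io.InputStream;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/messages")
public class MessageResource {

    @GET
    @Path("/{messageId}/attachments/{attachmentId}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public Response getAttachment(@PathParam("messageId") String messageId,
                                  @PathParam("attachmentId") String attachmentId) {
        MessageInfo msg = loadMessage(messageId);        // hypothetical lookup of subject etc.
        InputStream file = openAttachment(attachmentId); // hypothetical stream from the filesystem
        return Response.ok(file)
                .header("Content-Disposition", "attachment; filename=\"" + msg.fileName + "\"")
                .header("X-Message-Subject", msg.subject) // message data carried in custom headers
                .header("X-Message-Id", messageId)
                .build();
    }
}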
You will find working examples of sending file data with Jersey in Input and Output binary streams using JERSEY? - so there is no need to answer this again :)
Hope this was helpful somehow, have a nice day.
In trying to serve GWT permutations out of the Blobstore in order to escape the App Engine hard limit of 150 MB for static files, I've succeeded in doing so for HTML files, image files (JPEG, PNG, etc.), and other .rpc calls, but am hung up on XSRF calls.
In the server logs, I see:
The serialization policy file '/theapplication/CCA65B31464BDB27545C23C142FEEEF8.gwt.rpc' was not found;
My upload log shows it was uploaded /CCA65B31464BDB27545C23C142FEEEF8.gwt.rpc : HTTP/1.1 200 OK
The request url shows http://14.applicationXYZ.appspot.com/xsrf
the RequestPayload shows: http://14.applicationXYZ.appspot.com/theapplication/|CCA65B31464BDB27545C23C142FEEEF8|com.google.gwt.user.client.rpc.XsrfTokenService|getNewXsrfToken|1|2|3|4|0|
Other RPC calls are resolving (via a server filter that looks for /theapplication and maps the requests to a blob to serve), as in the following case where an RPC call is made without an XSRF request (as the user is not logged in yet):
req url -- http://14.applicationXYZ.appspot.com/someRPCCall
RequestPayload -- http://14.applicationXYZ.appspot.com/theapplication/|62D7E6737056C685E10947B640409549|com.abc.client.rpc.Service|doWork|java.lang.String/2004016611|java.lang.Boolean/476441737|wwwerr|1|2|3|4|3|5|5|6|7|7|6|0|
So, I have three questions:
1) Why is the XSRF call failing to return the appropriate blob, i.e. why doesn't the XSRF call get handled by the filter the way other URL calls to /theapplication/* do?
2) What can I do about it?
3) Also, I tried setting the content type to "text/x-gwt-rpc; charset=UTF-8" and also leaving it unspecified when I uploaded the blob. Does anyone know what the content type should be for *.gwt.rpc files, in case I do get the XSRF call working? Could having the wrong content type be causing the trouble?
Note: applicationXYZ is not the real name, so no, the links won't work.
OK, /xsrf is mapped to a servlet as well, so if the filter returns a blob without passing the request along, it won't reach the servlet.
Anyway, it's easy enough just to upload the few .rpc files as normal and not serve them as blobs.
I want to send a file to the browser via the REST interface.
Can you suggest the most efficient way to do it, keeping in mind the following?
Not much traffic.
I am fetching the file from HBase, which means I get it as a byte array.
The files are not in any folder in the server. The files can only be fetched from the HBase table.
The Front end is PHP and I do not know PHP.
In the REST API you can just pass the byte array to Response and it takes care of itself.
Using code along these lines (fetchImageBytes() is a hypothetical stand-in for your HBase lookup) -

@GET
@Produces("image/jpeg")
public Response getImage() {
    // fetch the bytes from wherever you have them (e.g. the HBase result)
    byte[] imageBytes = fetchImageBytes();
    return Response.ok(imageBytes).build();
}
I am giving a case study of a web service by which I send a file:
It is always good to encode the file content and send it to the destination, where it will be decoded and the content read.
Sending it as a plain attachment is open to the world because it is not encrypted, and if the network has high traffic the chance of failure is high.
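If the encoding meant here is something like Base64, a minimal Java sketch is below. Note that Base64 is only an encoding, not encryption, so it does not by itself protect the content:

import java.util.Base64;

public class FileCodec {
    // Encode raw file bytes for transport; the receiver reverses it with decode()
    public static String encode(byte[] fileBytes) {
        return Base64.getEncoder().encodeToString(fileBytes);
    }

    public static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }
}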