I have a REST service to upload images, and this is the main code in charge of registering the file in MongoDB:
public String writeFiles(InputStream inputStream, String fileName, String contentType) throws IOException {
    // save the file in GridFS
    GridFS gridFS = new GridFS(getDB(), Collections.PICTURES_FILES.name());
    GridFSInputFile gridFSInputFile = gridFS.createFile(inputStream, fileName);
    gridFSInputFile.setContentType(contentType);
    gridFSInputFile.setMetaData(new BasicDBObject(ORIGINAL_PICT_COL, true));
    gridFSInputFile.save();
    // getId() returns an Object (the generated ObjectId), hence the toString()
    return gridFSInputFile.getId().toString();
}
The service then returns the file ID to the client so that the client can request and display the uploaded image.
The problem shows up with very large images: sometimes, requesting an image by its ID right after the upload gives an HTTP 404 error (the image ID is unknown on the server side, which is the correct behavior for a missing ID).
I suppose it happens because the registration on the server side takes longer than the client needs to get the ID back and request the new image, i.e. the .save() operation is asynchronous, right?
My question: how can I be sure that the save operation has completed before returning the ID in the code above?
Or how can I obtain a result object, as with the .insert operation?
Would
gridFSInputFile.validate();
be enough? Or should I use
getDB().getLastError()
?
I cannot easily reproduce this "bug", so I am asking in case someone with experience already knows how to solve this. Thanks in advance for your help.
If you are using a recent version of the Java driver (2.10 or later), try creating an instance of MongoClient instead of an instance of Mongo. The default write concern is WriteConcern.ACKNOWLEDGED for instances of MongoClient, so the save method will not complete until the write operation has completed.
Otherwise, in your getDB method (not shown), call the method DB.setWriteConcern(WriteConcern.SAFE) to change the default write concern.
The other possibility is that you are reading from a secondary member of your replica set. The default is to read from the primary, but if you are overriding that, then your reads will be eventually consistent, and you could see this problem in that case as well.
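As a minimal sketch of both suggestions, assuming the legacy 2.x Java driver; the host, port, database name and the MongoSetup class are placeholders, not part of the original code:

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.ReadPreference;
import com.mongodb.WriteConcern;

public class MongoSetup {

    public static DB openDb() throws Exception {
        // MongoClient (driver 2.10+) defaults to WriteConcern.ACKNOWLEDGED,
        // so gridFSInputFile.save() returns only once the write is acknowledged.
        MongoClient mongoClient = new MongoClient("localhost", 27017);
        DB db = mongoClient.getDB("pictures");

        // With an older Mongo instance, raise the write concern explicitly:
        // db.setWriteConcern(WriteConcern.SAFE);

        // If reads might hit a secondary, force them back to the primary:
        // db.setReadPreference(ReadPreference.primary());

        return db;
    }
}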
We are using a REST endpoint as the data source for Jasper reports, but wherever a REST endpoint is used it is mandatory to create an adapter with the REST URL and header info and use that as the data source.
We don't want to use an adapter; instead we want to use the constructor
public JsonDataSource(String location, String selectExpression) throws JRException
directly as a dataset expression, so we formed the expression as follows:
new net.sf.jasperreports.engine.data.JsonDataSource("http://vagrant.ptcnet.ptc.com:2280/Windchill/trustedAuth/servlet/odata/D...","value")
However, this particular endpoint expects some header information from the requester ("Accept", "application/json"); otherwise it responds with an error.
Is there any way we can pass header info here?
You need to use the constructor where you pass an InputStream:
public JsonDataSource(java.io.InputStream jsonStream, java.lang.String selectExpression)
The easiest way to provide the input stream is probably to create a method within your Java project that executes the request and returns the result as, for example, a ByteArrayInputStream.
If you need to do it directly within the report (jrxml), you need to do it in a single expression (the jrxml does not support multi-line code). In this case you could use the Apache HttpClients class, which is already included as a dependency of the JasperReports project.
It could be something like this
new net.sf.jasperreports.engine.data.JsonDataSource(
    org.apache.http.impl.client.HttpClients.createDefault().execute(
        org.apache.http.client.methods.RequestBuilder
            .get()
            .setUri("http://vagrant.ptcnet.ptc.com:2280/Windchill/trustedAuth/servlet/odata/D...")
            .setHeader("Accept", "application/json")
            .build()
    )
    .getEntity()
    .getContent(),
    ""
)
getContent() will return the InputStream, and JasperReports will close this stream when it is done. However, both the client and the execute response are theoretically Closeable, which means you should normally call close() on them to free up resources. Hence I'm not sure it is enough to close only the InputStream; you may risk leaking resources. This is why I initially suggested creating a method within a Java project where this can be handled appropriately.
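For reference, a minimal sketch of such a helper method, assuming Apache HttpClient 4.x on the classpath; the RestJsonFetcher class name is made up, while the header and general approach come from the question:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class RestJsonFetcher {

    // Executes the GET request with the required header and buffers the body,
    // so both the client and the response can be closed before the stream
    // is handed to JsonDataSource.
    public static InputStream fetchJson(String url) throws IOException {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet get = new HttpGet(url);
            get.setHeader("Accept", "application/json");
            try (CloseableHttpResponse response = client.execute(get)) {
                byte[] body = EntityUtils.toByteArray(response.getEntity());
                return new ByteArrayInputStream(body);
            }
        }
    }
}

The dataset expression then reduces to a single call such as new net.sf.jasperreports.engine.data.JsonDataSource(RestJsonFetcher.fetchJson("http://..."), "value").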
I'm running a service that either creates or updates objects in a GCP bucket, i.e. my code checks whether the object exists; if it does, my code reads it, updates it and writes it back.
Occasionally I'm getting an exception when trying to read the object.
My code:
Storage storage = googleStorage.get();
BlobId blobId = BlobId.of(STORAGE_BUCKET, "path/to.obj");
Blob blob = storage.get(blobId);
if (blob == null) return null;
byte[] blobContent = blob.getContent();
...
The stacktrace:
com.google.cloud.storage.StorageException: 404 Not Found
No such object: bucket/path/to.obj
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:220)
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.load(HttpStorageRpc.java:588)
    at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:464)
    at com.google.cloud.storage.StorageImpl$16.call(StorageImpl.java:461)
    at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89)
    at com.google.cloud.RetryHelper.run(RetryHelper.java:74)
    at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51)
    at com.google.cloud.storage.StorageImpl.readAllBytes(StorageImpl.java:461)
    at com.google.cloud.storage.Blob.getContent(Blob.java:455)
    ...
I would expect blob to be null if the object does not exist, or to be able to read its content if blob isn't null.
This behavior results in the object being updated several times (not sure if this is because my code retries the call or because of something the storage library is doing).
I'm using google-cloud-storage 1.27.0, it happens about once per ~10K objects.
I've tested the code you provided and it appears to work in the desired way: the Blob object is assigned null if the Cloud Storage object can't be located or if the path is not correct.
The incidence of failure is quite low. Perhaps if you configure exponential backoff using the RetryParams class, you can eliminate or reduce the impact of these failures.
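As an illustration of that suggestion only: in the 1.x client library the backoff appears to be configured through RetrySettings on StorageOptions (RetryParams belongs to the older gcloud-java releases), and the values below are arbitrary examples, not recommendations:

import com.google.api.gax.retrying.RetrySettings;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.threeten.bp.Duration;

public class StorageFactory {

    // Builds a Storage client with custom exponential backoff settings.
    public static Storage newStorage() {
        RetrySettings retrySettings = RetrySettings.newBuilder()
                .setMaxAttempts(6)
                .setInitialRetryDelay(Duration.ofMillis(500))
                .setRetryDelayMultiplier(2.0)
                .setMaxRetryDelay(Duration.ofSeconds(32))
                .setTotalTimeout(Duration.ofMinutes(5))
                .build();

        return StorageOptions.newBuilder()
                .setRetrySettings(retrySettings)
                .build()
                .getService();
    }
}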
You don't need the BlobId. You can use this method:
Blob blob = storage.get(bucketName).get(pathToObject);
This method will return null if the blob does not exist at the specified path.
I am using jQuery and Spring's ModelAndView with an annotated controller.
I'm trying to code a process in which the user clicks an icon and, if certain criteria in the DB are met, a zip file containing a bunch of files is produced on the server; this zip file should then be sent to the browser for saving.
If the criteria aren't met, an error message should be sent to the browser saying there isn't any file to be created.
However, if I use jQuery's .post method, I can receive the error message (when that is the case) but never the zip binary file.
If I use a regular href link, I can receive the file (when that is the case) but don't know how to receive the message when the file cannot be produced.
Is there an alternative or a standard way to do this?
Thanks for your support!
-Gabriel.
You should probably split your server-side method into two:
the first one validates the criteria. If the criteria are not met, it returns an error message; otherwise it returns a URL to the method described in the next point
the second one actually returns the zip file
In your frontend, the code will look something like this:
$.post(urlToPoint1, data, function(response) {
if (response.success) {
// download the file using the url provided
// (pointing to method described in point 2)
window.location.href = response.url;
}
else {
alert('whatever');
}
});
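For the server side, a rough sketch of the two methods, assuming Spring MVC annotated controllers with a JSON message converter (e.g. Jackson) configured; the paths, parameter names and the criteriaMet/writeZip helpers are hypothetical:

import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

import javax.servlet.http.HttpServletResponse;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ZipExportController {

    // Point 1: validate the criteria and answer with a small JSON payload.
    @RequestMapping(value = "/export/check", method = RequestMethod.POST)
    @ResponseBody
    public Map<String, Object> check(@RequestParam("id") String id) {
        Map<String, Object> result = new HashMap<String, Object>();
        if (criteriaMet(id)) {                 // hypothetical DB check
            result.put("success", true);
            result.put("url", "/export/download?id=" + id);
        } else {
            result.put("success", false);
            result.put("message", "There is no file to be created");
        }
        return result;
    }

    // Point 2: stream the zip; reached via window.location.href from the callback.
    @RequestMapping(value = "/export/download", method = RequestMethod.GET)
    public void download(@RequestParam("id") String id,
                         HttpServletResponse response) throws IOException {
        response.setContentType("application/zip");
        response.setHeader("Content-Disposition", "attachment; filename=\"export.zip\"");
        writeZip(id, response.getOutputStream()); // hypothetical zip producer
    }

    private boolean criteriaMet(String id) { return true; } // stub
    private void writeZip(String id, OutputStream out) { }  // stub
}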
I am building an application where I have to read a URL from a webpage as a String [it is not the address of the page itself]. The URL I read contains a query string, and I specifically need two parameters from that URL, so I am using the Uri class available in Android. The problem lies in the encoding/format of the URL and the query: one of the parameters I need is always a URL itself, and sometimes that query URL is %-encoded and sometimes not.
The URLs can look like the following:
Case 1 :
http://www.example.com/example/example.aspx?file=http%3A%2F%2FXX.XXX.XX.XXX%2FExample.file%3Ftoken%3D9dacfc85
Case 2 :
http://www.example.com/example/example.aspx?file=http://XX.XXX.XX.XXX/Example.file?token=9dacfc85
How do I get the correct URL contained in the file= query parameter?
I am using the following [to handle both cases uniformly]:
Uri.decode(urlString.getQueryParameter("file"));
Is this the correct way to do it?
UPDATE
I have decided to first encode the whole URL regardless of its value and then get the query parameter. Theoretically, it should work.
If you are uncertain about the type of URL you will get, then I would suggest decoding every URL you get from the parameter. And when you need to use it, you can encode it again.
As far as I know, you are doing it right.
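For illustration, a minimal sketch of that approach in Android Java; the FileParamExtractor class and the pageUrl parameter are made up, and the call mirrors the one from the question:

import android.net.Uri;

public final class FileParamExtractor {

    // Parses the page URL, pulls the "file" query parameter and decodes it,
    // mirroring Uri.decode(...getQueryParameter("file")) from the question.
    public static String extractFileUrl(String pageUrl) {
        Uri uri = Uri.parse(pageUrl);
        String file = uri.getQueryParameter("file"); // works for Case 1 and Case 2
        return file == null ? null : Uri.decode(file);
    }
}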
I am using the Workflow Services Java API (11.1.1) for SOA Suite to access and manipulate human tasks. I would like to be able to access and add file attachments to existing human tasks. I am using the methods provided in the AttachmentType interface.
When adding an attachment, the problem I am running into is that an attachment does get created and associated with the task, but it is empty and has no content. I have attempted both setting the input stream of the attachment and setting the content string, and in each case have had no success (setting the content string results in an exception when trying to update the corresponding task).
I have successfully added and accessed an attachment using the worklist application; however, when trying to access the content of this attachment through code I receive an object with mostly null/0 values, apart from the attachment name.
The code I am using to access attachments resembles:
List attachments = taskWithAttachments.getAttachment();
for (Object o : attachments) {
    AttachmentType a = (AttachmentType) o;
    String content = a.getContent();      // NULL
    InputStream str = a.getInputStream(); // NULL
    String name = a.getName();            // Has the attachment name
    String mime = a.getMimeType();        // Has the mime type
    long size = a.getSize();              // 0
    ...
}
As the APIs are not overly rich in documentation, I may well be using them incorrectly. I would really appreciate any help/suggestions/alternatives for dealing with BPEL task attachments.
Thanks
After contacting Oracle for support, it turns out that the attachments portion of the Workflow API is broken in the current release. The fix will be included in a future release.