My task is to make it possible to download simple .txt files from the application using Azure Blob Storage. The code is supposed to work; I didn't write it, but it looks OK to me, and as I'll show later in this post, it really does connect to Azure. What's more important, it only works when I'm testing the app on localhost, not on the publicly available site.
These are the steps I took:
uploaded the files to the storage (the underlined one is one of them):
added the proper link to the button that should download the attachment via the REST API
of course, I've also added a reference to the attachment in the database (its ID, name, etc.)
here's how it looks on the frontend:
And this is what I get:
I've seen somewhere that it might be caused by Azure CORS settings that don't allow the app to access the storage. Here's what I've done so far:
went to portal.azure.com and changed CORS settings like this:
found something about putting some code into the app under this Microsoft link, but it's not Java; I guess there are analogous ways to do it in Java:
https://blogs.msdn.microsoft.com/windowsazurestorage/2014/02/03/windows-azure-storage-introducing-cors/ . Is it necessary after the CORS rules have been added in the Azure Portal?
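From what I can tell, the same rules could also be set from Java through the blob service properties of the SDK already used in the code; below is a rough sketch (the connection string and the allowed origin are placeholders), though I'm not sure it's needed once the rules are already set in the portal:
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.CorsHttpMethods;
import com.microsoft.azure.storage.CorsRule;
import com.microsoft.azure.storage.ServiceProperties;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import java.util.Arrays;
import java.util.EnumSet;

public class CorsSetup {
    public static void configureCors(String connectionString) throws Exception {
        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudBlobClient blobClient = account.createCloudBlobClient();

        // Build a CORS rule equivalent to the one configured in the portal.
        CorsRule rule = new CorsRule();
        rule.setAllowedOrigins(Arrays.asList("https://your-app.example.com")); // placeholder origin
        rule.setAllowedMethods(EnumSet.of(CorsHttpMethods.GET, CorsHttpMethods.HEAD));
        rule.setAllowedHeaders(Arrays.asList("*"));
        rule.setExposedHeaders(Arrays.asList("*"));
        rule.setMaxAgeInSeconds(3600);

        // Download the current service properties, add the rule, and upload them back.
        ServiceProperties properties = blobClient.downloadServiceProperties();
        properties.getCors().getCorsRules().add(rule);
        blobClient.uploadServiceProperties(properties);
    }
}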
Also, I've found information that it may be caused by the storage access permissions. The Public Access Level is set to Container:
Not sure if it gives anything, but these are the container's properties:
What else could be the problem behind the BlobNotFound error I receive? I hope I've put enough information here, but if more is needed, say so in a comment and I'll provide it.
This is the code that's supposed to download the attachment, spread across 3 classes:
Controller class part:
#GetMapping("/download/{id}")
#ResponseStatus(HttpStatus.OK)
public void downloadAttachment(#PathVariable long id, HttpServletResponse response) throws IOException {
dataUploadRequestAttachmentService.downloadStaticAttachment(response, id);
}
Controller service class part:
public void downloadStaticAttachment(HttpServletResponse response, long id) throws IOException {
    ArticleAttachment articleAttachment = this.findAttachment(id);

    String mimeType = URLConnection.guessContentTypeFromName(articleAttachment.getName());
    if (mimeType == null) {
        mimeType = "application/octet-stream";
    }

    response.setContentType(mimeType);
    response.setHeader("Content-Disposition", String.format("attachment; filename=\"%s\"", articleAttachment.getName()));

    azureBlobStorageArticleAttachmentService.downloadArticleAttachment(
            articleAttachment.getName(),
            articleAttachment.getId(),
            response.getOutputStream()
    );
}
And the AzureBlobStorageArticleAttachmentService class:
public void downloadArticleAttachment(String attachmentName, Long articleId, OutputStream outputStream) {
    try {
        CloudBlockBlob blob = container.getBlockBlobReference(String.format("%s_%s", articleId, attachmentName));
        blob.download(outputStream);
    } catch (URISyntaxException | StorageException e) {
        e.printStackTrace();
        log.error(String.format("Download article attachment %s error", attachmentName));
    }
}
According to your description, please debug and check whether you get the correct blob name in this line: CloudBlockBlob blob = container.getBlockBlobReference(String.format("%s_%s", articleId, attachmentName));
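For example, a quick existence check (a sketch using the same legacy SDK; container is the already-initialized CloudBlobContainer from your service) makes a name mismatch behind BlobNotFound obvious in the logs:
// Sketch: log the exact blob name being requested and whether it exists
// before attempting the download.
String blobName = String.format("%s_%s", articleId, attachmentName);
CloudBlockBlob blob = container.getBlockBlobReference(blobName);
if (!blob.exists()) {
    log.error(String.format("Blob '%s' not found in container '%s'", blobName, container.getName()));
    return;
}
blob.download(outputStream);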
Here is a demo of how to download blobs using the Java SDK, for your reference:
/**
 * Download a blob to memory.
 *
 * @param containerName blob container name
 * @param blobName      blob name
 */
public static ByteArrayOutputStream downloadBlobToMemory(String containerName, String blobName) {
    CloudStorageAccount account = null;
    CloudBlobContainer container = null;
    ByteArrayOutputStream byteArrayOutputStream = null;
    try {
        account = CloudStorageAccount.parse(ConnString);
        CloudBlobClient client = account.createCloudBlobClient();
        container = client.getContainerReference(containerName);
        container.createIfNotExists();
        CloudBlockBlob cloudBlockBlob = container.getBlockBlobReference(blobName);
        byteArrayOutputStream = new ByteArrayOutputStream();
        cloudBlockBlob.download(byteArrayOutputStream);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return byteArrayOutputStream;
}
/**
 * Download a blob to local disk.
 *
 * @param containerName blob container name
 * @param blobName      blob name
 * @param filePath      for example: C:\\Test\\test.txt
 */
public static void downloadBlobToDisk(String containerName, String blobName, String filePath) {
    CloudStorageAccount account = null;
    CloudBlobContainer container = null;
    try {
        account = CloudStorageAccount.parse(ConnString);
        CloudBlobClient client = account.createCloudBlobClient();
        container = client.getContainerReference(containerName);
        container.createIfNotExists();
        CloudBlockBlob cloudBlockBlob = container.getBlockBlobReference(blobName);
        // Close the file stream when done so all bytes are flushed to disk.
        try (FileOutputStream fileOutputStream = new FileOutputStream(filePath)) {
            cloudBlockBlob.download(fileOutputStream);
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
Lee Liu's suggestion about the blob name turned out to be correct once I managed to find the correct application address. The domain visible to the user ended with "azureedge.net", but portal.azure.com showed a different one, and that was the main problem. After that, I did indeed find a problem with the blob names in storage: because of the String.format call, the blob names had to be prefixed with their database ID and a "_" sign; once they were, the files started downloading with content instead of being empty.
It seems the code was OK; the problem was the wrong address and the file names.
Related
I am still searching around this subject, but I cannot find a simple solution, and I'm not sure one even exists.
Part 1
I have a service in my application that generates an Excel doc from dynamic DB data.
public static void notiSubscribersToExcel(List<NotificationsSubscriber> data) {
    // generating the file dynamically from the DB's data
    String prefix = "./src/main/resources/static";
    String directoryName = prefix + "/documents/";
    String fileName = directoryName + "subscribers_list.xlsx";

    File directory = new File(directoryName);
    if (!directory.exists()) {
        directory.mkdir();
        // If you require it to make the entire directory path including parents,
        // use directory.mkdirs(); here instead.
    }

    // wb is the Workbook populated from 'data' above (creation omitted here)
    try (OutputStream fileOut = new FileOutputStream(fileName)) {
        wb.write(fileOut);
        wb.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Part 2
I want to access it from the browser, so that when I call it, it gets downloaded.
I know that for static content, all I need to do is call the file from the browser like this:
http://localhost:8080/documents/myfile.xlsx
Once I'm able to do that, all I need is to create a link to this URL from my client app.
The problem -
Currently, if I call the file as above, it will only download the file that was already there at compile time; if I generate new files while the app is running, their content won't be available.
It seems that the content is (as it's called) "static" and cannot be changed after startup.
So my question is
is there a way to define a folder in the app structure that will be dynamic? I just want to access the newly generated file.
BTW, I found this answer and others that use configuration methods or web services, but I don't want all that. I have tried some of them, but the result is the same.
FYI, I don't bundle my client app with the server app; I run them from different hosts.
The problem is to download a file with dynamic content from a Spring app.
This can be solved with Spring Boot. Here is the solution, as shown in this illustration: when I click Download report, my app generates a dynamic Excel report and it is downloaded to the browser:
From JS, make a GET request to a Spring controller:
function DownloadReport(e) {
    // Navigate to the controller endpoint; the browser downloads the response
    window.location = "../report";
}
Here is the Spring controller GET method mapped to /report:
@RequestMapping(value = ["/report"], method = [RequestMethod.GET])
@ResponseBody
fun report(request: HttpServletRequest, response: HttpServletResponse) {
    // Call exportExcel to generate an EXCEL doc with data using jxl.Workbook
    val excelData = excel.exportExcel(myList)
    try {
        // Download the report.
        val reportName = "ExcelReport.xls"
        response.contentType = "application/vnd.ms-excel"
        response.setHeader("Content-disposition", "attachment; filename=$reportName")
        org.apache.commons.io.IOUtils.copy(excelData, response.outputStream)
        response.flushBuffer()
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
This code is implemented in Kotlin - but you can implement it as easily in Java too.
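For reference, a rough Java equivalent of the controller above could look like this (a sketch; excel.exportExcel(myList) is assumed to return an InputStream of the generated workbook, as in the Kotlin version):
@RequestMapping(value = "/report", method = RequestMethod.GET)
@ResponseBody
public void report(HttpServletRequest request, HttpServletResponse response) {
    // Generate the Excel document from the current DB data (assumed helper).
    InputStream excelData = excel.exportExcel(myList);
    try {
        String reportName = "ExcelReport.xls";
        response.setContentType("application/vnd.ms-excel");
        response.setHeader("Content-disposition", "attachment; filename=" + reportName);
        // Stream the generated bytes straight to the browser.
        org.apache.commons.io.IOUtils.copy(excelData, response.getOutputStream());
        response.flushBuffer();
    } catch (Exception e) {
        e.printStackTrace();
    }
}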
I have a Spring Boot application.
Users can login to my application and upload files.
All the files of users are stored in a Google Cloud Storage.
Now, I want the users to be able to download their files.
So, I have to download the files from the Cloud Storage.
I don't know how my controller should look.
With my current code I'm getting an empty file. The upload is already made and the connection is fine as well.
public static Blob downloadFile(Storage storage, String fileName) {
    Blob blob = storage.get(BUCKET_NAME, fileName);
    return blob;
}
@RequestMapping(value = "/downloadFileTest")
@ResponseBody
public void downloadFile(HttpSession session, HttpServletResponse response) {
    Storage storage = de.msm.msmcenter.service.cloudstorage.Authentication.getStorage();
    Blob blob = de.msm.msmcenter.service.cloudstorage.Authentication.downloadFile(storage, "test.txt");
    ReadChannel readChannel = blob.reader();
    InputStream inputStream = Channels.newInputStream(readChannel);
    try {
        response.setContentType("application/force-download");
        response.setHeader("Content-Disposition", "attachment; filename=test.txt");
        IOUtils.copy(inputStream, response.getOutputStream());
        response.flushBuffer();
        inputStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I actually want to be able to download any file, not only txt.
When the user opens the link, the file with the name test.txt gets downloaded, but it's empty.
It seems like you just want to give the user access to download the file.
A solution for that would be to use a signed URL, which lets you provide the user with a URL to access/download the object for a limited time. If you redirect the user directly to that URL, the download will start immediately.
Thank you @Mayeru
I changed my code to:
public static String downloadFile(Storage storage, String fileName) {
    Blob blob = storage.get(BUCKET_NAME, fileName);
    String PATH_TO_JSON_KEY = "/your/path";
    URL signedUrl = null;
    try {
        signedUrl = storage.signUrl(BlobInfo.newBuilder(BUCKET_NAME, fileName).build(),
                1, TimeUnit.DAYS, SignUrlOption.signWith(ServiceAccountCredentials.fromStream(
                        new FileInputStream(PATH_TO_JSON_KEY))));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return signedUrl.toString();
}
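The controller can then simply redirect to the returned URL so the browser starts the download right away (a sketch; the method name and mapping are hypothetical, and the helper is the downloadFile above):
@RequestMapping(value = "/downloadViaSignedUrl")
public void downloadViaSignedUrl(HttpServletResponse response) throws IOException {
    Storage storage = de.msm.msmcenter.service.cloudstorage.Authentication.getStorage();
    // Build a time-limited signed URL for the object and send the browser there.
    String signedUrl = de.msm.msmcenter.service.cloudstorage.Authentication.downloadFile(storage, "test.txt");
    response.sendRedirect(signedUrl);
}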
# add this line to your spring-boot application.properties file
spring.cloud.gcp.credentials.location=classpath:key.json
// read/download objects
public static ResponseEntity<byte[]> getObjectFromGCP(String yourfileName) throws IOException {
    String objectNameWithLocation = "your file location with file name in GCP bucket";

    // create your storage object with your credentials
    Credentials credentials = GoogleCredentials.fromStream(
            new ClassPathResource("key.json").getInputStream());
    Storage storage = StorageOptions.newBuilder().setCredentials(credentials).build().getService();

    BlobId blobId = BlobId.of(bucketName, objectNameWithLocation);
    Blob blob = storage.get(blobId);

    return ResponseEntity.ok()
            .contentType(MediaType.valueOf(FileTypeMap.getDefaultFileTypeMap().getContentType(yourfileName)))
            .body(blob.getContent(BlobSourceOption.generationMatch()));
}
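You can then expose this from a simple controller method, for example (a sketch; the mapping path and the way the object name is passed in are assumptions):
@GetMapping("/download/{fileName}")
public ResponseEntity<byte[]> download(@PathVariable String fileName) throws IOException {
    // Delegate to the helper above; the response carries the object bytes
    // with the content type guessed from the file name.
    return getObjectFromGCP(fileName);
}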
So using Google Drive's API, I am trying to download a file from my drive account. I have followed Google's quickstart guide (https://developers.google.com/drive/web/quickstart/java) and used Google's DriveQuickStart.java to initialize the Drive object.
Everything with the object works correctly (i.e. acquiring all the files from my Google Drive account and displaying their IDs and titles); however, when I try downloading a file through the input stream in the function Google developed, I keep getting a null pointer exception.
Here is the code that I am using:
private static InputStream downloadFile(Drive service, File file) {
if (file.getDownloadUrl() != null && file.getDownloadUrl().length() > 0) {
try {
HttpResponse resp =
service.getRequestFactory().buildGetRequest(new GenericUrl(file.getDownloadUrl()))
.execute();
return resp.getContent();
} catch (IOException e) {
// An error occurred.
e.printStackTrace();
return null;
}
} else {
// The file doesn't have any content stored on Drive.
return null;
}
}
The problem is that when the method calls file.getDownloadUrl(), it returns a null value. According to the documentation, it should return null if the file I am trying to download is a native Google Drive file; however, the file that I am downloading is simply a jar file, so it can't be because of the file format (I also tried it with other formats).
Why is it returning a null value, and what can I do to resolve this issue? Thank you!
Figured it out.
For anyone else who struggled with this, the answer is really simple:
In the DriveQuickStart.java code, pay attention to this part:
/** Global instance of the scopes required by this quickstart. */
private static final List<String> SCOPES =
Arrays.asList(DriveScopes.DRIVE_METADATA_READONLY);
And make sure you set it to:
/** Global instance of the scopes required by this quickstart. */
private static final List<String> SCOPES =
Arrays.asList(DriveScopes.DRIVE);
So the only reason it didn't work was that the program did not have the appropriate permission to do so.
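As a side note (an assumption about your needs, not something from the quickstart itself): if the app only has to read and download files, the narrower read-only scope is enough, and after changing scopes you typically have to delete the previously stored credentials/tokens so the consent screen is shown again with the new scope.
/** Narrower alternative when the app only needs read access (enough for downloads). */
private static final List<String> SCOPES =
    Arrays.asList(DriveScopes.DRIVE_READONLY);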
I'm using Apache Tapestry v5.3.7 and I already use the normal Tapestry upload component in a form. For a better user experience, I'm now trying to integrate Dropzone.js into a normal Tapestry page without any form. The JavaScript integration works fine. The uploaded file data is transferred to my server with a POST request, and I can access the request with all of its parameters.
My question now is how I can access the binary data of the uploaded file (maybe as an InputStream) to save it in my system. I already injected the HTTP request, but getInputStream returns an empty stream.
Thanks for any suggestions
/** Code snippet of the page's Java part */
...
@Inject
protected HttpServletRequest _request;

public void onActivate(String rowId) {
    String fileName = _request.getParameter("file");
    try {
        InputStream is = _request.getInputStream();
        // if I read from is, it returns -1
        // :-(
        doSomeSaveStuff(is); // dummy code
    } catch (Exception e) {
        e.printStackTrace();
    }
}
...
Here's one way to do it:
In template:
<t:form t:id="testForm" class="dropzone">
</t:form>
In page.java
@Inject
MultipartDecoder multipartDecoder;

@Component(id = "testForm")
private Form testForm;

@Inject
RequestGlobals requestGlobals;

void onSubmitFromTestForm() throws ManagerException {
    System.out.println("test form invoked");
    HttpServletRequest r = requestGlobals.getHTTPServletRequest();
    UploadedFile u = multipartDecoder.getFileUpload("file");
    // ... work with the uploaded file here
}
The uploaded file contains what you uploaded and you can work with it the way you want.
Note: HttpServletRequest::getParameterMap() told me that the handle to the file is called "file", which is how I know that passing "file" to getFileUpload makes the decoder correctly parse the multipart POST.
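For example, persisting the decoded upload to disk could look roughly like this (a sketch that would live in the same page class; the target directory is a placeholder):
// Sketch: save the uploaded file; Tapestry's UploadedFile can write itself to a java.io.File.
void saveUpload(UploadedFile u) {
    if (u != null) {
        File target = new File("/tmp/uploads/" + u.getFileName()); // placeholder location
        u.write(target);
    }
}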
I am trying to write a workflow process step for the DAM update asset such that the uploaded asset will be sent to an external service that will modify the asset and then the modified asset can be sent to the Metadata extraction step. So I've added my process step to the DAM update asset like this:
And my code looks like this so far:
public void execute(WorkItem item, WorkflowSession wfsession,MetaDataMap args) throws WorkflowException {
try
{
log.info("Here2 in execute method"); //ensure that the execute method is invoked
final Map<String, Object> map = new HashMap<String, Object>();
map.put( "user.jcr.session", wfsession.getSession());
ResourceResolver rr = resolverFactory.getResourceResolver(map);
String path = item.getWorkflowData().getPayload().toString();
log.info("Here2 path: " + path);
Resource resource = rr.getResource(path);
log.info("Here2 resource: " + resource);
InputStream is = resource.adaptTo(InputStream.class);
log.info("Here2 assets IS: " + is);
}
catch (Exception e)
{
log.info("Here Error");
e.printStackTrace();
}
}
This is what I see in the logs when I upload an asset:
Here2 in execute method
Here2 path: /content/dam/photo1.JPG/jcr:content/renditions/original
Here2 asset: null
Question
My external service has an API accepting requests over HTTP. How should I send over the asset to the external service?
Once the external service modifies the asset, what should I do so that the Metadata extraction step reads the modified asset instead of the original?
In order to access your external service via HTTP, you have to write a client. CQ provides the commons-httpclient bundle and you can use it to access the service. Documentation for the library can be found here. I don't know whether the service expects the file to be sent using PUT or POST, but httpclient provides both methods. All you have to do is provide an appropriate InputStream: adapt your resource to Rendition and use its getStream() method to get the InputStream.
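For instance, posting the rendition's stream could look roughly like this, placed inside the existing try block of your execute() method (a sketch; the service URL is a placeholder and the exact request format depends on your external API):
// Sketch: POST the original rendition's binary to the external service
// using the commons-httpclient 3.x API available in CQ.
Rendition rendition = resource.adaptTo(Rendition.class);
InputStream assetStream = rendition.getStream();

HttpClient client = new HttpClient();
PostMethod post = new PostMethod("http://external-service.example.com/process"); // placeholder URL
post.setRequestEntity(new InputStreamRequestEntity(assetStream, rendition.getMimeType()));
try {
    int status = client.executeMethod(post);
    log.info("External service responded with status " + status);
    // The modified asset comes back in the response body; use it while the connection is open.
    InputStream newInputStream = post.getResponseBodyAsStream();
    // ... replace the original rendition with newInputStream (see below)
} finally {
    post.releaseConnection();
}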
When you'll get the modified asset from the webservice, you need to replace the original one:
// rendition = ...; // original rendition object created as above
// newInputStream = ...; // new asset received from your webservice
Asset asset = rendition.getAsset();
asset.addRendition("original", newInputStream, rendition.getMimeType());