AWS Java S3 Uploading error: "profile file cannot be null" - java

I get an exception when trying to upload a file to Amazon S3 from my Java Spring application. The method is pretty simple:
private void productionFileSaver(String keyName, File f) throws InterruptedException {
    String bucketName = "{my-bucket-name}";
    TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
    // TransferManager processes all transfers asynchronously,
    // so this call will return immediately.
    Upload upload = tm.upload(
            bucketName, keyName, new File("/mypath/myfile.png"));
    try {
        // Or you can block and wait for the upload to finish
        upload.waitForCompletion();
        System.out.println("Upload complete.");
    } catch (AmazonClientException amazonClientException) {
        System.out.println("Unable to upload file, upload was aborted.");
        amazonClientException.printStackTrace();
    }
}
It is basically the same code that Amazon provides here, and the exact same exception with the same message ("profile file cannot be null") appears when I try this other version.
The problem is not related to the file not existing or being null (I have checked in many ways that the File argument received by the TransferManager.upload method exists before calling it).
I cannot find any information about my exception message "profile file cannot be null". The first lines of the error log are the following:
com.amazonaws.AmazonClientException: Unable to complete transfer: profile file cannot be null
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:281)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:265)
at com.amazonaws.services.s3.transfer.internal.AbstractTransfer.waitForCompletion(AbstractTransfer.java:103)
at com.fullteaching.backend.file.FileController.productionFileSaver(FileController.java:371)
at com.fullteaching.backend.file.FileController.handlePictureUpload(FileController.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
My S3 policy allows getting and putting objects for all kinds of users.
What's happening?

ProfileCredentialsProvider() creates a credentials provider that returns the AWS security credentials configured for the default profile.
So if you don't have any configuration for the default profile in ~/.aws/credentials, this error is thrown when you try to put an object.
If you run your code on the Lambda service, that file is not available. In that case you also do not need to provide credentials: just assign the right IAM role to your Lambda function, and using the default constructor should solve the issue.
You may want to change the TransferManager constructor arguments according to your needs, as sketched below.
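For example, a minimal sketch (not the original poster's exact setup, and the region value is an assumption) that builds the S3 client with the default credentials provider chain, which also picks up environment variables, system properties, and IAM roles rather than only the ~/.aws/credentials profile:

// Rough sketch: explicit client with the default provider chain, then a TransferManager on top.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .withRegion(Regions.EU_WEST_1) // assumed region, adjust to your bucket
        .build();
TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(s3)
        .build();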

The solution was pretty simple: I was trying to implement this communication without an AmazonS3 bean in Spring.
This link helped with the configuration:
http://codeomitted.com/upload-file-to-s3-with-spring/

My code worked fine as below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
        .withRegion(clientRegion)
        .build();
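Since the accepted fix above was to expose the client as a Spring bean, a minimal sketch of that configuration might look like the following (the class name and region constant are assumptions, not taken from the linked article):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class S3Config {

    @Bean
    public AmazonS3 amazonS3() {
        // Assumed region; replace with the region of your bucket.
        return AmazonS3ClientBuilder.standard()
                .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                .withRegion(Regions.EU_WEST_1)
                .build();
    }
}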

Related

Azure Function timeout when accessing Blob

I have encountered a strange problem when accessing an Azure Storage Blob from an Azure Function. I have a Function written in Java which is supposed to download some data from a Blob and then execute a PowerShell command. The PowerShell command can launch another Java application, which accesses the same Blob. I have this process working except for the step where the Function first downloads the Blob, which always times out while trying to get the blob size.
The weird thing is that the Java application launched by the PowerShell command uses the same code to download the Blob and can do so without any trouble at all.
Here is the relevant code snippet:
try {
    blob = new BlobClientBuilder().endpoint(connStr).buildClient();
    int dataSize = (int) blob.getProperties().getBlobSize(); // <- timeout occurs here
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream(dataSize);
    blob.download(outputStream);
    outputStream.close();
    String result = new String(outputStream.toByteArray(), "UTF-8");
    return JsonParser.parseString(result).getAsJsonObject();
}
catch (Exception e) {
    System.out.println("ERROR: " + e.getMessage());
    e.printStackTrace();
    return null;
}
Some relevant info:
The Blob is very small - only a few KB.
The same connection string is used in both cases.
The Blob is not the trigger for the function, but rather stores data for it.
EDIT
After getting better logs with a Log Analytics workspace, I found the timeout is being caused by a NoSuchMethodError.
java.lang.NoSuchMethodError: io.netty.handler.ssl.SslProvider.isAlpnSupported(Lio/netty/handler/ssl/SslProvider;)Z
I've seen this error before when I had the wrong version of netty-all-x.x.x.Final.jar. Having already fixed this in the jars I upload with my code, I am now wondering where the Function gets libraries from, other than what I include.
Following the exception mentioned in the edit led me to this thread:
https://github.com/Azure/azure-functions-java-worker/issues/381.
The issue was that the Function App's own dependencies were loading before my code's dependencies, and there is a conflict between them, as mentioned here: https://github.com/Azure/azure-functions-java-worker/issues/365.
The solution was to set FUNCTIONS_WORKER_JAVA_LOAD_APP_LIBS = 1 in the Configuration settings of the Function App.
One more approach is to find the exact error by running the Azure Function in a local environment; most of the time the following error is misleading:
FailureException: ClassCastException: java.lang.NoSuchMethodError cannot be cast to java.lang.RuntimeException
Stack: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at ...
The following link will help you run and debug an Azure Function locally:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-maven-eclipse
Most of the time the issue is also due to dependency jar conflicts, as explained in the previous post.
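One quick way to see which jar a conflicting class is actually being loaded from (a generic diagnostic sketch, not something from the original answers) is to print its code source at runtime:

// Prints the location (jar or directory) the netty SslProvider class was loaded from,
// which helps spot worker-vs-application dependency conflicts.
// Note: getCodeSource() can be null for classes loaded by the bootstrap class loader.
Class<?> clazz = io.netty.handler.ssl.SslProvider.class;
System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());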

Hyperledger fabric endorsement policy error using Java SDK

I am working on the golang version of the fabcar smart contract while seeking to implement a Java SDK API which enrolls an admin, registers a user, and performs query/update operations, based on https://github.com/hyperledger/fabric-samples/tree/master/fabcar/java
I have successfully set up a 3-org, 9-peer blockchain network and installed, instantiated, and invoked the chaincode on the peers.
However, while implementing the corresponding API, I am only able to successfully query the blockchain database; updates fail with "Could not meet endorsement policy for chaincode mycc".
Please find below a screenshot of the error.
The endorsement policy is "OR ('Org1MSP.member','Org2MSP.member', 'Org3MSP.member')".
Should the registered user somehow get an Org1/Org2/Org3 member attribute? Any leads would be appreciated!
Like @Ikar Pohorský said, for me this got resolved after I used the correct method name. Also, ensure that you delete the 'wallet' folder in order to regenerate the user if your HLF network was recreated.
@Test
public void testMyMethodToBeInvoked() throws Exception {
    deleteDirectory(".\\wallet");
    EnrollAdmin.main(null);
    RegisterUser.main(null);

    // Load a file system based wallet for managing identities.
    final Path walletPath = Paths.get("wallet");
    final Wallet wallet = Wallet.createFileSystemWallet(walletPath);

    // Load a CCP (connection profile).
    final Path networkConfigPath = Paths
            .get("C:\\sw\\hlf146-2\\fabric-samples\\first-network\\connection-org1.yaml");

    final Gateway.Builder builder = Gateway.createBuilder();
    builder.identity(wallet, "user1").networkConfig(networkConfigPath).discovery(true);

    // Create a gateway connection.
    try (Gateway gateway = builder.connect()) {
        final Network network = gateway.getNetwork("mychannel");
        final Contract contract = network.getContract("mycc");

        String myJSONString = "{\"a\":\"b\"}";
        byte[] result;

        // The following did NOT work. Control goes directly to 'invoke' when
        // 'submitTransaction' is called; 'invoke' need not be mentioned here.
        // result = contract.submitTransaction("invoke", myJSONString);

        // The following DID work. In my chaincode (written in Java) I had a method named
        // 'myMethodToBeInvoked'. The chaincode was written similar to
        // https://github.com/hyperledger/fabric-samples/blob/release-1.4/chaincode/chaincode_example02/java/src/main/java/org/hyperledger/fabric/example/SimpleChaincode.java
        result = contract.submitTransaction("myMethodToBeInvoked", myJSONString);
        System.out.println(new String(result));
    }
}
EDIT: Also, please remember that if your chaincode returns an error response, you can also get this endorsement failure. So check that your chaincode is working without any issues.
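One way to verify that, assuming you use the same fabric-gateway-java Contract as in the test above, is a read-only call: evaluateTransaction only queries a peer and does not need the endorsement policy to be satisfied. The method name below is hypothetical.

// Hypothetical query method name; replace with a read-only function actually
// defined in your chaincode. If this fails too, the problem is in the chaincode,
// not the endorsement policy.
byte[] queryResult = contract.evaluateTransaction("myQueryMethod", "someKey");
System.out.println(new String(queryResult));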

Error while making Azure IoT Hub Device Identities in bulk

I am following https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-bulk-identity-mgmt to do a bulk upload of device identities in Azure IoT Hub. All the code given there is in C#, so I am converting it to the Java equivalent.
Using the "Import devices example – bulk device provisioning", I am getting the following JSON:
{"id":"d3d78b0d-6c8c-4ef5-a321-91fbb6a4b7d1","importMode":"create","status":"enabled","authentication":{"symmetricKey":{"primaryKey":"f8/UZcYbhPxnNdbSl2J+0Q==","secondaryKey":"lbq4Y4Z8qWmfUxAQjRsDjw=="}}}
{"id":"70bbe407-8d65-4f57-936f-ef402aa66d07","importMode":"create","status":"enabled","authentication":{"symmetricKey":{"primaryKey":"9e7fDNIFbMu/NmOfxo/vGg==","secondaryKey":"nwFiKR4HV9KYHzkeyu8nLA=="}}}
To import the file from the blob, the following function is called:
CompletableFuture<JobProperties> importJob = registryManager
        .importDevicesAsync(inURI, outURI);
In the above code we need to provide a URI with a SAS token; the Java equivalent of the "Get the container SAS URI" code is below:
static String GetContainerSasUri(CloudBlobContainer container) {
    SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
    sasConstraints.setSharedAccessExpiryTime(new Date(new Date().getTime() + TimeUnit.DAYS.toMillis(1)));
    sasConstraints.setPermissions(EnumSet.of(SharedAccessBlobPermissions.READ, SharedAccessBlobPermissions.WRITE,
            SharedAccessBlobPermissions.LIST, SharedAccessBlobPermissions.DELETE));

    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);
    permissions.getSharedAccessPolicies().put("testpolicy", sasConstraints);
    try {
        container.uploadPermissions(permissions);
    } catch (StorageException e1) {
        e1.printStackTrace();
    }

    String sasContainerToken = null;
    try {
        sasContainerToken = container.generateSharedAccessSignature(sasConstraints, "testpolicy");
    } catch (InvalidKeyException e) {
        e.printStackTrace();
    } catch (StorageException e) {
        e.printStackTrace();
    }

    System.out.println("URI " + container.getUri() + "?" + sasContainerToken);
    return container.getUri() + "?" + sasContainerToken;
}
Now the problem comes here. For the output container I am getting the following error:
java.util.concurrent.ExecutionException: com.microsoft.azure.sdk.iot.service.exceptions.IotHubBadFormatException: Bad message format! ErrorCode:BlobContainerValidationError;Unauthorized to write to output blob container. Tracking ID:2dcb2efbf1e14e33ba60dc8415dc03c3-G:4-TimeStamp:11/08/2017 16:16:10
Please help me understand why I am getting the bad message format error. Is there a problem with the SAS-generating code, or does my blob container not have write permission?
Are you using a service SAS or an account-level SAS? The error thrown suggests the service isn't authorized or doesn't have the delegated permissions to write to the designated blob container. Check out the resource here on how to set up an account-level SAS and how to delegate read, write and delete operations on blob containers: https://learn.microsoft.com/en-us/rest/api/storageservices/Delegating-Access-with-a-Shared-Access-Signature?redirectedfrom=MSDN Snipped content: "An account-level SAS, introduced with version 2015-04-05. The account SAS delegates access to resources in one or more of the storage services. All of the operations available via a service SAS are also available via an account SAS. Additionally, with the account SAS, you can delegate access to operations that apply to a given service, such as Get/Set Service Properties and Get Service Stats. You can also delegate access to read, write, and delete operations on blob containers, tables, queues, and file shares that are not permitted with a service SAS. See Constructing an Account SAS for more information about account SAS."
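If it is unclear whether the generated token itself grants write access, one quick check (a rough sketch, not from the original answer; the test blob name and the URI manipulation are made up) is to write a small blob through the same container SAS URI before handing it to importDevicesAsync:

// Hypothetical sanity check reusing the question's GetContainerSasUri helper:
// insert a test blob name before the query string and try to write to it.
// Throws StorageException if the SAS does not authorize writes to the container.
String containerSas = GetContainerSasUri(container);
URI testBlobUri = new URI(containerSas.replace("?", "/sas-write-test.txt?"));
CloudBlockBlob testBlob = new CloudBlockBlob(testBlobUri);
testBlob.uploadText("sas write test");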
I was facing the same issue while using a private storage account as the import/output container.
It works smoothly after I started using a public storage account.
Anyway, it should work even with a private storage account, so I have raised an issue. For more info, you may refer to this link.

FileSystemInteractionException: Could not access target file while using documents4j

I am using documents4j to convert Word documents to PDF, and sometimes I get the exception below:
2016-03-28 09:29:16.982 INFO 3660 --- [pool-1-thread-2] c.d.c.msoffice.MicrosoftWordBridge : Requested conversion from C:\conversion-temp\2b33637b-b74a-4aaa-ac65-a5ebc1eb3efc\temp3 (application/msword) to C:\conversion-temp\2b33637b-b74a-4aaa-ac65-a5ebc1eb3efc\temp4 (application/pdf)
2016-03-28 09:29:17.372 ERROR 3660 --- [http-nio-8080-exec-9] c.s.c.e.mappers.ExceptionMapper : Exception while handling request
com.documents4j.throwables.FileSystemInteractionException: Could not access target file
at com.documents4j.util.Reaction$FileSystemInteractionExceptionBuilder.make(Reaction.java:180) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.util.Reaction$ExceptionalReaction.apply(Reaction.java:75) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.conversion.ExternalConverterScriptResult.resolve(ExternalConverterScriptResult.java:70) ~[documents4j-transformer-api-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.evaluateExitValue(ProcessFutureWrapper.java:48) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:36) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:11) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.job.AbstractFutureWrappingPriorityFuture.run(AbstractFutureWrappingPriorityFuture.java:78) ~[documents4j-util-conversion-1.0.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_74]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_74]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
After this exception, any further requests are rejected by the documents4j library with the exception below:
com.documents4j.throwables.ConverterAccessException: The converter seems to be shut down
at com.documents4j.util.Reaction$ConverterAccessExceptionBuilder.make(Reaction.java:117) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.util.Reaction$ExceptionalReaction.apply(Reaction.java:75) ~[documents4j-util-all-1.0.2.jar:na]
at com.documents4j.conversion.ExternalConverterScriptResult.resolve(ExternalConverterScriptResult.java:70) ~[documents4j-transformer-api-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.evaluateExitValue(ProcessFutureWrapper.java:48) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:36) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.conversion.ProcessFutureWrapper.get(ProcessFutureWrapper.java:11) ~[documents4j-util-transformer-process-1.0.2.jar:na]
at com.documents4j.job.AbstractFutureWrappingPriorityFuture.run(AbstractFutureWrappingPriorityFuture.java:78) ~[documents4j-util-conversion-1.0.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_74]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_74]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
This is how I am doing the document conversion.
I instantiate a LocalConverter instance:
LocalConverter.builder()
        .workerPool(corePoolSize, maximumPoolSize, keepAliveTime, TimeUnit.MINUTES)
        .baseFolder(baseFolder)
        .processTimeout(processTimeout, TimeUnit.SECONDS)
        .build();
corePoolSize is 5
maximumPoolSize is 10
keepAliveTime is 3 minutes
processTimeout is 20 minutes
And I use this instance like this:
public File convertFile(MultipartFile file) throws ConversionException {
    try (InputStream docStream = file.getInputStream();
         ByteArrayOutputStream pdfStream = new ByteArrayOutputStream()) {
        boolean status = iConverter.convert(docStream, false).as(DocumentType.DOC)
                .to(pdfStream, false).as(DocumentType.PDF).execute();
        if (status) {
            // Conversion succeeded, send the response.
            File response = new File();
            //InputStream responseStream = new ByteArrayInputStream(pdfStream.toByteArray());
            response.setContentLength(pdfStream.size());
            //response.setInputStream(responseStream);
            response.setOutputStream(pdfStream);
            return response;
        } else {
            LOGGER.error("Failed to convert word to pdf, conversion status is {}", status);
            throw new ConversionException("failed to convert word to pdf");
        }
    } catch (FileSystemInteractionException fsie) {
        LOGGER.error("documents4j file system interaction exception", fsie);
        throw new ConversionException("File system exception", fsie);
    } catch (IOException ioe) {
        throw new ConversionException("Cannot read the input stream of file", ioe);
    }
}
The MultipartFile here is Spring's MultipartFile.
I checked the VB script that documents4j uses for the conversion, and I found that this error occurs when the Word document is not closed properly. Below is the snippet from the VB script which is the source of this error:
' Close the source document.
wordDocument.Close WdDoNotSaveChanges

If Err <> 0 Then
    WScript.Quit -3
End If

On Error GoTo 0
I am not sure why I am getting FileSystemInteractionException.
There are two possibilities that I can think of:
I am sending multiple simultaneous requests and the file is deleted by some other thread.
I am getting the input stream from the MultipartFile object, and the multipart file is temporary; as per the documentation (the Spring official docs), the user is responsible for copying the content to persistent storage.
How can I resolve this error, and what is its root cause?
There can be multiple reasons for this error:
com.documents4j.throwables.FileSystemInteractionException: Could not access target file
Exception documentation here
Have you tried saving the uploaded multipart file to a temporary file, then passing this temporary file to the converter? I am aware this is unnecessary overhead. However, if this works, then we can safely assume that the input "docStream" isn't fully populated when the IConverter instance tries to access it, and hence the error. In that case, you should ensure that the input stream is populated before attempting conversion, and that should resolve your issue.
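A minimal sketch of what that could look like, assuming Spring's MultipartFile and reusing the question's iConverter instance (the variable names and temp-file prefixes are made up):

// Copy the upload to a temporary file first, then run a file-based conversion.
// java.io.File is fully qualified here to avoid clashing with the custom File response class above.
java.io.File source = java.io.File.createTempFile("upload-", ".doc");
java.io.File target = java.io.File.createTempFile("converted-", ".pdf");
file.transferTo(source); // MultipartFile -> file on disk
boolean ok = iConverter.convert(source).as(DocumentType.DOC)
        .to(target).as(DocumentType.PDF)
        .execute();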
If you get this error even for file-based conversion scenarios, try the following steps:
Ensure that MS Office applications aren't running (for example because you opened a Word document externally).
Ensure that there is one and only one IConverter instance running across the physical machine (and not just per JVM).
If you are running Tomcat as a service (I'm assuming you're deploying this on Tomcat), run Tomcat not as a SYSTEM account service but under a local user account.
In a web application, the IConverter instance should be created once (as in a singleton class) and the same instance returned whenever one of your business methods requests it; a minimal sketch follows below. Also, do not shut down the converter if you anticipate simultaneous document conversion requests.
Ideally one of these steps should solve your issue at hand, let me know in the comments if you still face this issue.
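A minimal sketch of that singleton suggestion, reusing the builder parameters from the question (the class name and base folder path are assumptions):

import com.documents4j.api.IConverter;
import com.documents4j.job.LocalConverter;
import java.io.File;
import java.util.concurrent.TimeUnit;

// Holds a single IConverter for the whole application; every business method
// should call instance() instead of building its own LocalConverter.
public final class ConverterHolder {

    private static final IConverter INSTANCE = LocalConverter.builder()
            .workerPool(5, 10, 3, TimeUnit.MINUTES)
            .baseFolder(new File("C:\\conversion-temp"))
            .processTimeout(20, TimeUnit.MINUTES)
            .build();

    private ConverterHolder() {
    }

    public static IConverter instance() {
        return INSTANCE;
    }
}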
I am also facing the same error.
I used this PDF conversion inside a Spring Boot application deployed on a Windows Server.
When I run the application manually (using java -jar), it works perfectly fine.
But when I start it as a Windows service (using winsw.exe), it gives me the error:
com.documents4j.throwables.FileSystemInteractionException: Could not access target file

Amazon S3 - Batch File Upload Using Java API?

We're looking to begin using S3 for some of our storage needs and I'm looking for a way to perform a batch upload of 'N' files. I've already written code using the Java API to perform single file uploads, but is there a way to provide a list of files to pass to an S3 bucket?
I did look at the following question, is-it-possible-to-perform-a-batch-upload-to-amazon-s3, but it is from two years ago and I'm curious whether the situation has changed at all. I can't seem to find a way to do this in code.
What we'd like to do is set up an internal job (probably using Spring's task scheduling) to transfer groups of files every night. I'd like a way to do this rather than just looping over them and doing a put request for each one, or having to zip batches up to place on S3.
The easiest way to go if you're using the AWS SDK for Java is the TransferManager. Its uploadFileList method takes a list of files and uploads them to S3 in parallel, or uploadDirectory will upload all the files in a local directory.
public void uploadDocuments(List<File> filesToUpload)
        throws AmazonServiceException, AmazonClientException, InterruptedException {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withCredentials(getCredentials())
            .withRegion(Regions.AP_SOUTH_1)
            .build();
    TransferManager transfer = TransferManagerBuilder.standard().withS3Client(s3).build();
    String bucket = Constants.BUCKET_NAME;
    MultipleFileUpload upload = transfer.uploadFileList(bucket, "", new File("."), filesToUpload);
    upload.waitForCompletion();
}

private AWSCredentialsProvider getCredentials() {
    String accessKey = Constants.ACCESS_KEY;
    String secretKey = Constants.SECRET_KEY;
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
    return new AWSStaticCredentialsProvider(awsCredentials);
}
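If the nightly job places all the files in one local directory, uploadDirectory is an alternative to uploadFileList, as mentioned above. A short sketch; the bucket name, key prefix, and directory path below are made up:

// Uploads every file under /data/export to s3://my-bucket/backups/,
// recursing into subdirectories because the last argument is true.
MultipleFileUpload upload =
        transfer.uploadDirectory("my-bucket", "backups/", new File("/data/export"), true);
upload.waitForCompletion();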
