I have integrated the AWS Java SDK in my application. Unfortunately I am getting "Internal Failure. Please try your request again" as the response.
This is how I have implemented it.
Using Maven, I added this to pom.xml:
<dependencies>
  <dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>transcribe</artifactId>
  </dependency>
</dependencies>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>bom</artifactId>
      <version>2.10.12</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
And in code,
String localAudioFilePath = "/home/****.wav";
String key = config.awsSecretAccessKey;
String keyId = config.awsAccessKeyId;
String regionString = config.awsRegion; //"ap-south-1"
String outputBucketName = config.awsOutputBucket;
Region region = Region.of(regionString);
String inputLanguage = "en-US";
LanguageCode languageCode = LanguageCode.fromValue(inputLanguage);
AwsCredentials credentials = AwsBasicCredentials.create(keyId, key);
AwsCredentialsProvider transcribeCredentials = StaticCredentialsProvider.create(credentials);
AWSCredentialsProvider s3AwsCredentialsProvider = getS3AwsCredentialsProvider(key, keyId);
String jobName = subJob.getId()+"_"+subJob.getProgram_name().replace(" ", "");
String fileName = jobName + ".wav";
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion(regionString)
        .withClientConfiguration(new ClientConfiguration())
        .withCredentials(s3AwsCredentialsProvider)
        .build();
s3.putObject(outputBucketName, fileName, new File(localAudioFilePath));
String fileUri = s3.getUrl(outputBucketName, fileName).toString();
System.out.println(fileUri);
Media media = Media.builder().mediaFileUri(fileUri).build();
String mediaFormat = MediaFormat.WAV.toString();
jobName = jobName +"_"+ System.currentTimeMillis();
Settings settings = Settings.builder()
.showSpeakerLabels(true)
.maxSpeakerLabels(10)
.build();
StartTranscriptionJobRequest request = StartTranscriptionJobRequest.builder()
.languageCode(languageCode)
.media(media)
.mediaFormat(mediaFormat)
.settings(settings)
.transcriptionJobName(jobName)
.build();
TranscribeAsyncClient client = TranscribeAsyncClient.builder()
.region(region)
.credentialsProvider(transcribeCredentials)
.build();
CompletableFuture<StartTranscriptionJobResponse> response =
client.startTranscriptionJob(request);
System.out.println(response.get().toString());
GetTranscriptionJobRequest jobRequest =
GetTranscriptionJobRequest.builder().transcriptionJobName(jobName).build();
while (true) {
    CompletableFuture<GetTranscriptionJobResponse> transcriptionJobResponse =
            client.getTranscriptionJob(jobRequest);
    GetTranscriptionJobResponse response1 = transcriptionJobResponse.get();
    if (response1 != null && response1.transcriptionJob() != null) {
        if (response1.transcriptionJob().transcriptionJobStatus() == TranscriptionJobStatus.FAILED) {
            // It comes here and gives response1.failureReason = "Internal Failure. Please try your request again".
            break;
        }
    }
}
private AWSCredentialsProvider getS3AwsCredentialsProvider(String key, String keyId) {
    return new AWSCredentialsProvider() {
        @Override
        public AWSCredentials getCredentials() {
            return new AWSCredentials() {
                @Override
                public String getAWSAccessKeyId() {
                    return keyId;
                }

                @Override
                public String getAWSSecretKey() {
                    return key;
                }
            };
        }

        @Override
        public void refresh() {
        }
    };
}
The same thing works with the Python SDK: same region, same WAV file, same language, same settings, same output bucket, etc. What am I doing wrong?
Your flow looks correct. It may be an issue with the audio file you are uploading to AWS; I suggest you verify it first.
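If the file itself checks out, it is also worth polling the job less aggressively and logging the failure reason; the while(true) loop in the question spins without a delay. A minimal sketch, reusing the client, jobRequest and jobName from the question (Thread.sleep needs InterruptedException handling in the caller):

GetTranscriptionJobRequest jobRequest = GetTranscriptionJobRequest.builder()
        .transcriptionJobName(jobName)
        .build();
while (true) {
    GetTranscriptionJobResponse res = client.getTranscriptionJob(jobRequest).get();
    TranscriptionJobStatus status = res.transcriptionJob().transcriptionJobStatus();
    if (status == TranscriptionJobStatus.COMPLETED) {
        // The finished transcript location:
        System.out.println(res.transcriptionJob().transcript().transcriptFileUri());
        break;
    }
    if (status == TranscriptionJobStatus.FAILED) {
        System.out.println(res.transcriptionJob().failureReason());
        break;
    }
    Thread.sleep(5000); // back off while the job is still IN_PROGRESS
}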
First Time
I am trying to develop a controller to save data in DocumentDB on AWS.
The first time it saves fine, but the second time, when I look up the saved record, change some data, and save again, I get this error:
Caused by: com.mongodb.MongoCommandException: Command failed with error 301: 'Retryable writes are not supported' on server aws:27017. The full response is {"ok": 0.0, "code": 301, "errmsg": "Retryable writes are not supported", "operationTime": {"$timestamp": {"t": 1641469879, "i": 1}}}
This is my Java code:
@Service
public class SaveStateHandler extends Handler<SaveStateCommand> {

    @Autowired
    private MongoRepository repository;

    @Autowired
    private MongoTemplate mongoTemplate;

    @Override
    public String handle(Command command) {
        SaveStateCommand cmd = (SaveStateCommand) command;
        State state = buildState(cmd);
        repository.save(state);
        return state.getId();
    }

    private State buildState(SaveStateCommand cmd) {
        State state = State
                .builder()
                .activityId(cmd.getActivityId())
                .agent(cmd.getAgent())
                .stateId(cmd.getStateId())
                .data(cmd.getData())
                .dataAlteracao(LocalDateTime.now())
                .build();
        State stateFound = findState(cmd);
        if (stateFound != null) {
            state.setId(stateFound.getId());
        }
        return state;
    }

    private State findState(SaveStateCommand request) {
        Query query = new Query();
        selectField(query);
        where(request, query);
        return mongoTemplate.findOne(query, State.class);
    }

    private void selectField(Query query) {
        query.fields().include("id");
    }

    private void where(SaveStateCommand request, Query query) {
        query.addCriteria(new Criteria().andOperator(
                Criteria.where("activityId").is(request.getActivityId()),
                Criteria.where("agent").is(request.getAgent())));
    }
}
In AWS they suggest using retryWrites=false, but I don't know how to do that in Spring Boot.
I use Spring Boot 2.2.1.
I tried this:
@Bean
public MongoClientSettings mongoSettings() {
    return MongoClientSettings
            .builder()
            .retryWrites(Boolean.FALSE)
            .build();
}
But it did not work.
=================================================================================
Second Time
I connected to AWS DocumentDB through an SSH tunnel.
I started my application with this database configuration:
@Configuration
@EnableConfigurationProperties({MongoProperties.class})
public class MongoAutoConfiguration {

    private final MongoClientFactory factory;
    private final MongoClientOptions options;
    private MongoClient mongo;

    public MongoAutoConfiguration(MongoProperties properties, ObjectProvider<MongoClientOptions> options, Environment environment) {
        this.options = options.getIfAvailable();
        if (StringUtils.isEmpty(properties.getUsername()) || StringUtils.isEmpty(properties.getPassword())) {
            properties.setUsername(null);
            properties.setPassword(null);
        }
        properties.setUri(createUri(properties));
        this.factory = new MongoClientFactory(properties, environment);
    }

    private String createUri(MongoProperties properties) {
        String uri = "mongodb://";
        if (StringUtils.hasText(properties.getUsername()) && !StringUtils.isEmpty(properties.getPassword())) {
            uri = uri + properties.getUsername() + ":" + new String(properties.getPassword()) + "@";
        }
        return uri + properties.getHost() + ":" + properties.getPort() + "/" + properties.getDatabase() + "?retryWrites=false";
    }

    @PreDestroy
    public void close() {
        if (this.mongo != null) {
            this.mongo.close();
        }
    }

    @Bean
    public MongoClient mongo() {
        this.mongo = this.factory.createMongoClient(this.options);
        return this.mongo;
    }
}
And locally it saves the data without error.
But if I deploy my updated API to AWS ECS and try to save, I get the same error.
=================================================================================
Dependencies
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-mongodb</artifactId>
  <version>2.2.1.RELEASE</version>
</dependency>
<dependency>
  <groupId>com.querydsl</groupId>
  <artifactId>querydsl-mongodb</artifactId>
  <version>4.1.4</version>
</dependency>
When you construct your connection string, you can disable retryable writes by adding these parameters to your connection URI:
?replicaSet=rs0&readPreference=primaryPreferred&retryWrites=false&maxIdleTimeMS=30000
Then use this URI when creating the database factory and Mongo template (this example uses the reactive database factory, but the principle is the same for SimpleMongoClientDatabaseFactory):
@Bean
fun reactiveMongoDatabaseFactory(
    @Value("\${spring.data.mongodb.uri}") uri: String,
    @Value("\${mongodb.database-name}") database: String
): ReactiveMongoDatabaseFactory {
    return SimpleReactiveMongoDatabaseFactory(MongoClients.create(uri), database)
}
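For the blocking driver, a Java equivalent is sketched below. Treat the class names as assumptions: depending on the Spring Data version they are SimpleMongoClientDatabaseFactory/MongoDatabaseFactory (3.x) or SimpleMongoClientDbFactory/MongoDbFactory (2.x), and the property placeholders are the same as in the Kotlin example.

import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

@Bean
public MongoDatabaseFactory mongoDatabaseFactory(
        @Value("${spring.data.mongodb.uri}") String uri, // URI already carries retryWrites=false
        @Value("${mongodb.database-name}") String database) {
    return new SimpleMongoClientDatabaseFactory(MongoClients.create(uri), database);
}

@Bean
public MongoTemplate mongoTemplate(MongoDatabaseFactory factory) {
    return new MongoTemplate(factory);
}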
I created a Jira issue using the Jira REST Java client library:
<dependency>
  <groupId>com.atlassian.jira</groupId>
  <artifactId>jira-rest-java-client-app</artifactId>
  <version>5.2.1</version>
</dependency>
but while creating it I am not able to set the assignee (or assignee name) for the created Jira issue.
Below is my code:
BasicUser user = projectType.get().getLead();
System.out.println(user.getDisplayName());
builder = new IssueInputBuilder(project, issueType, issueDTO.getIssueSummery());
builder.setProject(project);
builder.setDescription(issueDTO.getIssueDescription());
IssueInput input = builder.build();
IssueRestClient client = restClient.getIssueClient();
BasicIssue issue = client.createIssue(input).claim();
//input = IssueInput.createWithFields(new FieldInput(IssueFieldId.ASSIGNEE_FIELD, ComplexIssueInputFieldValue.with("name", "Wraplive User")));
builder.setPriorityId(1L);
builder.setAssigneeName("Wraplive User");
IssueInput issueInput = builder.build();
client.updateIssue(issue.getKey(), issueInput);
I tried builder.setAssignee(user); but that sets the assignee to the project lead, which I don't want; I want to set another user, or the logged-in user.
I also tried the FieldInput approach that is commented out in the code above.
Can anyone help me see where I am going wrong?
public JiraRestClient getJiraRestClient()
{
return new AsynchronousJiraRestClientFactory().createWithBasicHttpAuthentication(getJiraUri(), JIRA_USERNAME, JIRA_PASSWORD);
}
public URI getJiraUri()
{
return URI.create(JIRA_URL);
}
//loadConnectionProperties();
restClient = getJiraRestClient();
BasicProject project = null;
IssueType issueType = null;
IssueInputBuilder builder = null;
try
{
final Iterable<BasicProject> projects = restClient.getProjectClient().getAllProjects().claim();
for(BasicProject projectStr : projects)
{
if(projectStr.getKey().equalsIgnoreCase(PROJECT_KEY))
{
project = projectStr;
}
}
Promise<Project> projectType = restClient.getProjectClient().getProject(PROJECT_KEY);
for(IssueType type : (projectType.get()).getIssueTypes())
{
if(type.getName().equalsIgnoreCase(Issue_Type))
{
issueType = type;
}
}
builder = new IssueInputBuilder(project, issueType, issueDTO.getIssueSummery());
builder.setProject(project);
builder.setDescription(issueDTO.getIssueDescription());
builder.setPriorityId(1L);
builder.setFieldInput(new FieldInput("assignee", ComplexIssueInputFieldValue.with("accountId", "557058:0fa57746-30a2-498c-9e34-9306679d0be7")));
IssueInput input = builder.build();
IssueRestClient client = restClient.getIssueClient();
BasicIssue issue = client.createIssue(input).claim();
System.out.println(issue.getKey());
LOG.error("Jira Created for " + issueDTO.getIssueSummery() + " ID is :: " + issue.getKey());
}
catch(Exception e)
{
e.printStackTrace();
return false;
}
return true;
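For what it's worth, a minimal sketch of setting the assignee in the create request itself is below. The accountId value is a placeholder: on Jira Cloud the user must be addressed by accountId, while on Jira Server/Data Center the "name" key with a username is used instead.

IssueInputBuilder builder = new IssueInputBuilder(project, issueType, issueDTO.getIssueSummery());
builder.setDescription(issueDTO.getIssueDescription());
// Jira Cloud: address the assignee by accountId (placeholder below);
// Jira Server: use ComplexIssueInputFieldValue.with("name", "<username>") instead.
builder.setFieldInput(new FieldInput(
        IssueFieldId.ASSIGNEE_FIELD,
        ComplexIssueInputFieldValue.with("accountId", "<assignee-account-id>")));
IssueInput input = builder.build();
BasicIssue issue = restClient.getIssueClient().createIssue(input).claim();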
I'm trying to read files available on Amazon S3, as the question title explains. I couldn't find an alternative call for the deprecated constructor.
Here's the code:
private String AccessKeyID = "xxxxxxxxxxxxxxxxxxxx";
private String SecretAccessKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
private static String bucketName = "documentcontainer";
private static String keyName = "test";
//private static String uploadFileName = "/PATH TO FILE WHICH WANT TO UPLOAD/abc.txt";
AWSCredentials credentials = new BasicAWSCredentials(AccessKeyID, SecretAccessKey);
void downloadfile() throws IOException
{
// Problem lies here - AmazonS3Client is deprecated
AmazonS3 s3client = new AmazonS3Client(credentials);
try {
System.out.println("Downloading an object...");
S3Object s3object = s3client.getObject(new GetObjectRequest(
bucketName, keyName));
System.out.println("Content-Type: " +
s3object.getObjectMetadata().getContentType());
InputStream input = s3object.getObjectContent();
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
while (true) {
String line = reader.readLine();
if (line == null) break;
System.out.println(" " + line);
}
System.out.println();
} catch (AmazonServiceException ase) {
//do something
} catch (AmazonClientException ace) {
// do something
}
}
Any help? If more explanation is needed, please mention it.
I have checked the sample code provided in the SDK's .zip file, and it's the same.
You can use either AmazonS3ClientBuilder or AwsClientBuilder as an alternative.
For S3, the simplest is AmazonS3ClientBuilder:
BasicAWSCredentials creds = new BasicAWSCredentials("access_key", "secret_key");
AmazonS3 s3Client = AmazonS3ClientBuilder
        .standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .build();
Use the code listed below to create an S3 client without credentials:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
A usage example would be a Lambda function calling S3.
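For instance, a hypothetical handler like the sketch below would pick up credentials from the Lambda execution role automatically (class, bucket, and key names are placeholders):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ReaderHandler implements RequestHandler<String, String> {

    // No explicit credentials: the Lambda execution role supplies them.
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public String handleRequest(String key, Context context) {
        return s3.getObjectAsString("my-example-bucket", key);
    }
}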
You need to pass the region information through the
com.amazonaws.regions.Region object.
Use AmazonS3Client(credentials, Region.getRegion(Regions.REPLACE_WITH_YOUR_REGION))
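With the non-deprecated builder, the same region information goes through withRegion; a sketch (region and keys are placeholders):

BasicAWSCredentials creds = new BasicAWSCredentials("access_key", "secret_key");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.AP_SOUTH_1) // pick the region your bucket lives in
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .build();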
You can create the S3 default client as follows (with aws-java-sdk-s3-1.11.232):
AmazonS3ClientBuilder.defaultClient();
The constructor taking only credentials is deprecated; you can use something like this instead:
val awsConfiguration = AWSConfiguration(context)
val awsCreds = CognitoCachingCredentialsProvider(context, awsConfiguration)
val s3Client = AmazonS3Client(awsCreds, Region.getRegion(Regions.EU_CENTRAL_1))
Using the AWS SDK for Java 2.x, one can also build their own credentials provider like so:
// Credentials provider
package com.myproxylib.aws;

import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;

public class CustomCredentialsProvider implements AwsCredentialsProvider {

    private final String accessKeyId;
    private final String secretAccessKey;

    public CustomCredentialsProvider(String accessKeyId, String secretAccessKey) {
        this.secretAccessKey = secretAccessKey;
        this.accessKeyId = accessKeyId;
    }

    @Override
    public AwsCredentials resolveCredentials() {
        return new CustomAwsCredentialsResolver(accessKeyId, secretAccessKey);
    }
}
// Credentials resolver
package com.myproxylib.aws;

import software.amazon.awssdk.auth.credentials.AwsCredentials;

public class CustomAwsCredentialsResolver implements AwsCredentials {

    private final String accessKeyId;
    private final String secretAccessKey;

    CustomAwsCredentialsResolver(String accessKeyId, String secretAccessKey) {
        this.secretAccessKey = secretAccessKey;
        this.accessKeyId = accessKeyId;
    }

    @Override
    public String accessKeyId() {
        return accessKeyId;
    }

    @Override
    public String secretAccessKey() {
        return secretAccessKey;
    }
}
// Usage of the provider
package com.myproxylib.aws.s3;

public class S3Storage implements IS3StorageCapable {

    private final S3Client s3Client;

    public S3Storage(String accessKeyId, String secretAccessKey, String region) {
        // Region.of is statically imported here as of(region)
        this.s3Client = S3Client.builder()
                .credentialsProvider(new CustomCredentialsProvider(accessKeyId, secretAccessKey))
                .region(of(region))
                .build();
    }
}
NOTE:
of course, the library user can obtain the credentials from wherever they want, e.g. parse them into a java Properties object before calling the S3Storage constructor.
When possible, favour the other methods mentioned in the other answers and the docs; this approach was necessary for my use case.
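For completeness: when fixed keys are all you need, the v2 SDK's built-in StaticCredentialsProvider already covers this case, so the custom classes above can be skipped. A short sketch:

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Same fixed-key case, using the SDK's built-in provider:
S3Client s3 = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
        .region(Region.of(region))
        .build();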
implementation 'com.amazonaws:aws-android-sdk-s3:2.16.12'
val key = "XXX"
val secret = "XXX"
val credentials = BasicAWSCredentials(key, secret)
val s3 = AmazonS3Client(
credentials, com.amazonaws.regions.Region.getRegion(
Regions.US_EAST_2
)
)
val expires = Date(Date().time + 1000 * 60 * 60)
val keyFile = "13/thumbnail_800x600_13_photo.jpeg"
val generatePresignedUrlRequest = GeneratePresignedUrlRequest(
"bucket_name",
keyFile
)
generatePresignedUrlRequest.expiration = expires
val url: URL = s3.generatePresignedUrl(generatePresignedUrlRequest)
GlideApp.with(this)
.load(url.toString())
.apply(RequestOptions.centerCropTransform())
.into(image)
I have to authorize my server to Firebase for the Firebase SDK, but unfortunately I can't read the credentials JSON file. I have put my service.json file into my WEB-INF folder.
I have added this to my appengine-web.xml file:
<resource-files>
  <include path="/service.json"/>
</resource-files>
And I am trying to read the file with this code:
FirebaseOptions options = new FirebaseOptions.Builder().setServiceAccount(ServletServletContext.class.getResourceAsStream("/WEB-INF/service.json"))
But when I try to read the file I get a NullPointerException.
This is my whole class:
@Api(
        name = "myApi",
        version = "v1",
        namespace = @ApiNamespace(
                ownerDomain = "backend",
                ownerName = "backend",
                packagePath = ""
        )
)
public class MyEndpoint {

    private static final Logger log = Logger.getLogger(MyEndpoint.class.getName());
    private String uid;

    @ApiMethod(name = "signup")
    public MyBean signup(@Named("token") String token) {
        uid = "empty";
        uid = "init";
        log.info(new File(getClass().getResource("/WEB-INF/service.json").toString()).exists() + "");
        FirebaseOptions options = new FirebaseOptions.Builder()
                .setServiceAccount(getClass().getResourceAsStream("/WEB-INF/service.json"))
                .setDatabaseUrl("url")
                .build();
        FirebaseApp.initializeApp(options);
        uid = "initialized";
        log.severe("initialized");
        FirebaseAuth.getInstance().verifyIdToken(token)
                .addOnSuccessListener(new OnSuccessListener<FirebaseToken>() {
                    @Override
                    public void onSuccess(FirebaseToken decodedToken) {
                        uid = decodedToken.getUid();
                    }
                });
        MyBean b = new MyBean();
        if (!uid.equals(""))
            b.setData(uid);
        // else
        //     b.setData("failed");
        return b;
    }
}
Thanks in advance for your help.
You can use
this.getClass().getResourceAsStream("/WEB-INF/service.json");
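If the classpath lookup still returns null, another option on App Engine standard is to read the file from the application root, since files listed under <resource-files> are deployed with the app. This is a sketch, assuming service.json sits directly under WEB-INF (requires java.io.FileInputStream and handling of the checked IOException):

// Path is relative to the application root, an assumption about the deployment layout:
InputStream serviceAccount = new FileInputStream("WEB-INF/service.json");
FirebaseOptions options = new FirebaseOptions.Builder()
        .setServiceAccount(serviceAccount)
        .setDatabaseUrl("url")
        .build();
FirebaseApp.initializeApp(options);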
I'm working on an application where the user uploads a ZIP file to my server; on the server that ZIP file is expanded, and then I need to upload its contents to an S3 bucket. Now my question is: how do I upload a directory with multiple files and sub-folders to an S3 bucket using Java? Are there any examples of that? Currently I'm using JetS3t to manage all my communications with S3.
Hi, this is the simple way to upload a directory to an S3 bucket:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(access_key_id,
secret_access_key);
AmazonS3 s3Client = new AmazonS3Client(awsCreds);
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
MultipleFileUpload upload = tm.uploadDirectory(existingBucketName,
"BuildNumber#1", "FilePathYouWant", true);
I built something very similar. After expanding the zip on the server, call FileUtils.listFiles(), which recursively lists the files in a folder. Just iterate the list, create S3Objects, and upload the files to S3. Make use of the ThreadedS3Service so that multiple files can be uploaded at the same time. Also ensure you process the upload events: if some files couldn't be uploaded, the JetS3t library will tell you.
I'll post the code I wrote once I get into the office.
EDIT: Here's the code:
private static ProviderCredentials credentials;
private static S3Service s3service;
private static ThreadedS3Service storageService;
private static S3Bucket bucket;
private List<S3Object> s3Objs = new ArrayList<S3Object>();
private Set<String> s3ObjsCompleted = new HashSet<String>();
private boolean isErrorOccured = true;
private final ByteFormatter byteFormatter = new ByteFormatter();
private final TimeFormatter timeFormatter = new TimeFormatter();

private void initialise() throws ServiceException, S3ServiceException {
    credentials = <create your credentials>;
    s3service = new RestS3Service(credentials);
    bucket = new S3Bucket(<bucket details>);
    storageService = new ThreadedS3Service(s3service, this);
}
private void uploadFolder(File folder) throws NoSuchAlgorithmException, IOException {
    readFolderContents(folder);
    uploadFilesInList(folder);
}

private void readFolderContents(File folder) throws NoSuchAlgorithmException, IOException {
    Iterator<File> filesinFolder = FileUtils.iterateFiles(folder, null, null);
    while (filesinFolder.hasNext()) {
        File file = filesinFolder.next();
        String key = <create your key from the filename or something>;
        S3Object s3Obj = new S3Object(bucket, file);
        s3Obj.setKey(key);
        s3Obj.setContentType(Mimetypes.getInstance().getMimetype(s3Obj.getKey()));
        s3Objs.add(s3Obj);
    }
}

private void uploadFilesInList(File folder) {
    logger.debug("Uploading files in folder " + folder.getAbsolutePath());
    isErrorOccured = false;
    s3ObjsCompleted.clear();
    storageService.putObjects(bucket.getName(), s3Objs.toArray(new S3Object[s3Objs.size()]));
    if (isErrorOccured || s3Objs.size() != s3ObjsCompleted.size()) {
        logger.debug("Have to try uploading a few objects again for folder " + folder.getAbsolutePath()
                + " - Completed = " + s3ObjsCompleted.size() + " and Total = " + s3Objs.size());
        List<S3Object> s3ObjsRemaining = new ArrayList<S3Object>();
        for (S3Object s3Obj : s3Objs) {
            if (!s3ObjsCompleted.contains(s3Obj.getKey())) {
                s3ObjsRemaining.add(s3Obj);
            }
        }
        s3Objs = s3ObjsRemaining;
        uploadFilesInList(folder);
    }
}
@Override
public void event(CreateObjectsEvent event) {
    super.event(event);
    if (ServiceEvent.EVENT_IGNORED_ERRORS == event.getEventCode()) {
        Throwable[] throwables = event.getIgnoredErrors();
        for (int i = 0; i < throwables.length; i++) {
            logger.error("Ignoring error: " + throwables[i].getMessage());
        }
    } else if (ServiceEvent.EVENT_STARTED == event.getEventCode()) {
        logger.debug("**********************************Upload Event Started***********************************");
    } else if (event.getEventCode() == ServiceEvent.EVENT_ERROR) {
        isErrorOccured = true;
    } else if (event.getEventCode() == ServiceEvent.EVENT_IN_PROGRESS) {
        StorageObject[] storeObjs = event.getCreatedObjects();
        for (StorageObject storeObj : storeObjs) {
            s3ObjsCompleted.add(storeObj.getKey());
        }
        ThreadWatcher watcher = event.getThreadWatcher();
        if (watcher.getBytesTransferred() >= watcher.getBytesTotal()) {
            logger.debug("Upload Completed.. Verifying");
        } else {
            int percentage = (int) (((double) watcher.getBytesTransferred() / watcher.getBytesTotal()) * 100);
            long bytesPerSecond = watcher.getBytesPerSecond();
            StringBuilder transferDetailsText = new StringBuilder("Uploading.... ");
            transferDetailsText.append("Speed: " + byteFormatter.formatByteSize(bytesPerSecond) + "/s");
            if (watcher.isTimeRemainingAvailable()) {
                long secondsRemaining = watcher.getTimeRemaining();
                if (transferDetailsText.length() > 0) {
                    transferDetailsText.append(" - ");
                }
                transferDetailsText.append("Time remaining: " + timeFormatter.formatTime(secondsRemaining));
            }
            logger.debug(transferDetailsText.toString() + " " + percentage);
        }
    } else if (ServiceEvent.EVENT_COMPLETED == event.getEventCode()) {
        logger.debug("**********************************Upload Event Completed***********************************");
        if (isErrorOccured) {
            logger.debug("**********************But with errors, have to retry failed uploads**************************");
        }
    }
}
Here is how I did it in December of 2021, since the AmazonS3Client constructor is deprecated now.
BasicAWSCredentials awsCredentials = new BasicAWSCredentials(env.getProperty("AWS_ACCESS_KEY_ID"),
        env.getProperty("AWS_SECRET_ACCESS_KEY"));
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
        .build();
TransferManager tm = TransferManagerBuilder.standard().withS3Client(s3Client).build();
MultipleFileUpload upload = tm.uploadDirectory(existingBucketName,
        "BuildNumber#1", "FilePathYouWant", true);