Is AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build() waiting for something?
I am using Amazon S3 and put many files into S3 every day using this code.
AmazonS3 s3 = null;
s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build();
try {
s3.putObject(bucket_name, key_name, new File(file_path));
} catch (AmazonServiceException e) {
System.err.println(e.getErrorMessage());
System.exit(1);
}
Most of the time it works fine, but sometimes AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build() suddenly stops responding.
If I wait for a day or so, it works again.
Here is a stack trace taken while AmazonS3ClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build() is not responding.
Do you have any idea?
java.lang.Thread.State: RUNNABLE
at java.lang.ClassLoader$NativeLibrary.load0(java.base@11.0.12/Native Method)
at java.lang.ClassLoader$NativeLibrary.load(java.base@11.0.12/ClassLoader.java:2442)
at java.lang.ClassLoader$NativeLibrary.loadLibrary(java.base@11.0.12/ClassLoader.java:2498)
- locked <0x000000070c9cf5c8> (a java.util.HashSet)
at java.lang.ClassLoader.loadLibrary0(java.base@11.0.12/ClassLoader.java:2694)
at java.lang.ClassLoader.loadLibrary(java.base@11.0.12/ClassLoader.java:2648)
at java.lang.Runtime.loadLibrary0(java.base@11.0.12/Runtime.java:830)
at java.lang.System.loadLibrary(java.base@11.0.12/System.java:1873)
at sun.security.ec.SunEC$1.run(jdk.crypto.ec@11.0.12/SunEC.java:63)
at sun.security.ec.SunEC$1.run(jdk.crypto.ec@11.0.12/SunEC.java:61)
at java.security.AccessController.doPrivileged(java.base@11.0.12/Native Method)
at sun.security.ec.SunEC.<clinit>(jdk.crypto.ec@11.0.12/SunEC.java:61)
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(java.base@11.0.12/Native Method)
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(java.base@11.0.12/NativeConstructorAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(java.base@11.0.12/DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(java.base@11.0.12/Constructor.java:490)
at java.util.ServiceLoader$ProviderImpl.newInstance(java.base@11.0.12/ServiceLoader.java:780)
at java.util.ServiceLoader$ProviderImpl.get(java.base@11.0.12/ServiceLoader.java:722)
at java.util.ServiceLoader$3.next(java.base@11.0.12/ServiceLoader.java:1395)
at sun.security.jca.ProviderConfig$ProviderLoader.load(java.base@11.0.12/ProviderConfig.java:340)
at sun.security.jca.ProviderConfig$3.run(java.base@11.0.12/ProviderConfig.java:248)
at sun.security.jca.ProviderConfig$3.run(java.base@11.0.12/ProviderConfig.java:242)
at java.security.AccessController.doPrivileged(java.base@11.0.12/Native Method)
at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.12/ProviderConfig.java:242)
at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.12/ProviderConfig.java:222)
- locked <0x000000070cde52d0> (a sun.security.jca.ProviderConfig)
at sun.security.jca.ProviderList.getProvider(java.base@11.0.12/ProviderList.java:266)
at sun.security.jca.ProviderList.getService(java.base@11.0.12/ProviderList.java:379)
at sun.security.jca.GetInstance.getInstance(java.base@11.0.12/GetInstance.java:157)
at javax.net.ssl.SSLContext.getInstance(java.base@11.0.12/SSLContext.java:168)
at com.amazonaws.internal.SdkSSLContext.getPreferredSSLContext(SdkSSLContext.java:32)
at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.getPreferredSocketFactory(ApacheConnectionManagerFactory.java:91)
at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(ApacheConnectionManagerFactory.java:65)
at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(ApacheConnectionManagerFactory.java:58)
at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(ApacheHttpClientFactory.java:50)
at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(ApacheHttpClientFactory.java:38)
at com.amazonaws.http.AmazonHttpClient.<init>(AmazonHttpClient.java:315)
at com.amazonaws.http.AmazonHttpClient.<init>(AmazonHttpClient.java:299)
at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:172)
at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:638)
at com.amazonaws.services.s3.AmazonS3Builder$1.apply(AmazonS3Builder.java:35)
at com.amazonaws.services.s3.AmazonS3Builder$1.apply(AmazonS3Builder.java:32)
at com.amazonaws.services.s3.AmazonS3ClientBuilder.build(AmazonS3ClientBuilder.java:64)
at com.amazonaws.services.s3.AmazonS3ClientBuilder.build(AmazonS3ClientBuilder.java:28)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
You are using a very old AWS SDK. Best practice is to update to the AWS SDK for Java v2.
Here is the v2 code to put an object into an Amazon S3 bucket. You can find many other examples on GitHub.
package com.example.s3;
// snippet-start:[s3.java2.s3_object_upload.import]
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
// snippet-end:[s3.java2.s3_object_upload.import]
/**
* To run this AWS code example, ensure that you have setup your development environment, including your AWS credentials.
*
* For information, see this documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class PutObject {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" <bucketName> <objectKey> <objectPath> \n\n" +
"Where:\n" +
" bucketName - the Amazon S3 bucket to upload an object into.\n" +
" objectKey - the object to upload (for example, book.pdf).\n" +
" objectPath - the path where the file is located (for example, C:/AWS/book2.pdf). \n\n" ;
if (args.length != 3) {
System.out.println(USAGE);
System.exit(1);
}
String bucketName =args[0];
String objectKey = args[1];
String objectPath = args[2];
System.out.println("Putting object " + objectKey +" into bucket "+bucketName);
System.out.println(" in bucket: " + bucketName);
Region region = Region.US_EAST_1;
S3Client s3 = S3Client.builder()
.region(region)
.build();
String result = putS3Object(s3, bucketName, objectKey, objectPath);
System.out.println("Tag information: "+result);
s3.close();
}
// snippet-start:[s3.java2.s3_object_upload.main]
public static String putS3Object(S3Client s3,
String bucketName,
String objectKey,
String objectPath) {
try {
Map<String, String> metadata = new HashMap<>();
metadata.put("x-amz-meta-myVal", "test");
PutObjectRequest putOb = PutObjectRequest.builder()
.bucket(bucketName)
.key(objectKey)
.metadata(metadata)
.build();
PutObjectResponse response = s3.putObject(putOb,
RequestBody.fromBytes(getObjectFile(objectPath)));
return response.eTag();
} catch (S3Exception e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
// Return a byte array
private static byte[] getObjectFile(String filePath) {
FileInputStream fileInputStream = null;
byte[] bytesArray = null;
try {
File file = new File(filePath);
bytesArray = new byte[(int) file.length()];
fileInputStream = new FileInputStream(file);
fileInputStream.read(bytesArray);
} catch (IOException e) {
e.printStackTrace();
} finally {
if (fileInputStream != null) {
try {
fileInputStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
return bytesArray;
}
// snippet-end:[s3.java2.s3_object_upload.main]
}
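As a side note, if you do not need the byte-array round trip at all, the v2 SDK can stream the upload straight from disk with RequestBody.fromFile. A minimal sketch of an alternative body for the putS3Object method above, using the same parameter names (it also needs import java.nio.file.Paths):
// Upload straight from the file on disk instead of loading it into a byte array first
PutObjectRequest putOb = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(objectKey)
        .build();
PutObjectResponse response = s3.putObject(putOb, RequestBody.fromFile(Paths.get(objectPath)));
return response.eTag();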
Related
I have one method that uploads files to Amazon S3. I am trying to write a JUnit test for this method but get a NullPointerException on the S3AsyncClient.
My class:
public class S3Client<T> {
private static final Logger log = LoggerFactory.getLogger(S3Client.class);
S3AsyncClient client;
/**
*
* @param s3Configuration
*/
public S3Client(AWSS3Configuration s3Configuration) {
this.client = s3Configuration.getAsyncClient();
}
/**
* Uploads a file to an S3 bucket and returns the ETag
* @param uploadData
* @return
* @throws S3Exception
*/
public CompletableFuture<String> uploadFile(S3UploadData<T> uploadData) throws S3Exception {
int contentLength;
AsyncRequestBody asyncRequestBody;
if(uploadData.getContent() instanceof String) {
String content = (String) uploadData.getContent();
contentLength = content.length();
asyncRequestBody = AsyncRequestBody.fromString(content);
}
else if(uploadData.getContent() instanceof byte[]){
byte[] bytes = (byte[]) uploadData.getContent();
contentLength = bytes.length;
asyncRequestBody = AsyncRequestBody.fromBytes(bytes);
}
else{
throw new IllegalArgumentException("Unsupported upload content type");
}
PutObjectRequest putObjRequest = PutObjectRequest.builder()
.bucket(uploadData.getBucketName())
.key(uploadData.getFileName())
.metadata(uploadData.getMetaData())
.contentLength((long) contentLength).build();
CompletableFuture<String> response = client.putObject(putObjRequest, asyncRequestBody).thenApply(
getPutObjectResponse -> {
log.info("Got response from S3 upload={}", getPutObjectResponse.eTag());
return getPutObjectResponse.eTag();
});
response.exceptionally(throwable -> {
log.error("Exception occurred while uploading a file intuit_tid={} file={}",uploadData.getTransactionId(),uploadData.getFileName());
throw S3Exception.builder().message(throwable.getMessage()).build();
});
return response;
}
The input for this class is an object of S3UploadData:
@Getter
@AllArgsConstructor
public class InputData<T> {
T content;
String fileName;
String bucketName;
String transactionId;
Map<String, String> metaData;
}
Can you please help with writing a JUnit test for the uploadFile method?
You have no JUnit test code. You should have code that uses org.junit.jupiter.api.*.
Instead of using a mock, call the actual S3 async code in a @TestInstance integration test to make sure it works. For example, here is my test in IntelliJ.
As you can see, my test passed and I know my code works, which is the point of this AWS integration test.
If my code failed or threw an exception for some reason, my test would fail. For example, if I passed a bucket name that does not exist, I would get:
Here is my Java Amazon S3 Async code:
package com.example.s3.async;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import java.nio.file.Paths;
import java.util.concurrent.CompletableFuture;
// snippet-end:[s3.java2.async_ops.import]
// snippet-start:[s3.java2.async_ops.main]
/**
* To run this AWS code example, ensure that you have setup your development environment, including your AWS credentials.
*
* For information, see this documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class S3AsyncOps {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" S3AsyncOps <bucketName> <key> <path>\n\n" +
"Where:\n" +
" bucketName - the name of the Amazon S3 bucket (for example, bucket1). \n\n" +
" key - the name of the object (for example, book.pdf). \n" +
" path - the local path to the file (for example, C:/AWS/book.pdf). \n";
if (args.length != 3) {
System.out.println(USAGE);
System.exit(1);
}
String bucketName = args[0];
String key = args[1];
String path = args[2];
Region region = Region.US_WEST_2;
S3AsyncClient client = S3AsyncClient.builder()
.region(region)
.build();
putObjectAsync(client, bucketName, key, path);
}
public static void putObjectAsync(S3AsyncClient client,String bucketName, String key, String path) {
PutObjectRequest objectRequest = PutObjectRequest.builder()
.bucket(bucketName)
.key(key)
.build();
// Put the object into the bucket
CompletableFuture<PutObjectResponse> future = client.putObject(objectRequest,
AsyncRequestBody.fromFile(Paths.get(path))
);
future.whenComplete((resp, err) -> {
try {
if (resp != null) {
System.out.println("Object uploaded. Details: " + resp);
} else {
// Handle error
err.printStackTrace();
}
} finally {
// Only close the client when you are completely done with it
client.close();
}
});
future.join();
}
}
Now for my test, I want to call this code, not mock it. I have set up my test in IntelliJ like this:
import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import software.amazon.awssdk.regions.Region;
import java.io.*;
import java.util.*;
import com.example.s3.async.*;
import software.amazon.awssdk.services.s3.S3AsyncClient;
@TestInstance(TestInstance.Lifecycle.PER_METHOD)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class AmazonS3AsyncTest {
private static S3AsyncClient s3AsyncClient;
// Define the data members required for the tests
private static String bucketName = "";
private static String objectKey = "";
private static String objectPath = "";
private static String toBucket = "";
@BeforeAll
public static void setUp() throws IOException {
// Run tests on Real AWS Resources
s3AsyncClient = S3AsyncClient.builder()
.region(Region.US_EAST_1)
.build();
try (InputStream input = AmazonS3AsyncTest.class.getClassLoader().getResourceAsStream("config.properties")) {
Properties prop = new Properties();
if (input == null) {
System.out.println("Sorry, unable to find config.properties");
return;
}
//load a properties file from class path, inside static method
prop.load(input);
// Populate the data members required for all tests
bucketName = prop.getProperty("bucketName");
objectKey = prop.getProperty("objectKey");
objectPath= prop.getProperty("objectPath");
toBucket = prop.getProperty("toBucket");
} catch (IOException ex) {
ex.printStackTrace();
}
}
@Test
@Order(1)
public void whenInitializingAWSS3Service_thenNotNull() {
assertNotNull(s3AsyncClient);
System.out.println("Test 1 passed");
}
@Test
@Order(2)
public void putObject() {
S3AsyncOps.putObjectAsync(s3AsyncClient, bucketName, objectKey, objectPath);
System.out.println("Test 2 passed");
}
}
You could use Mockito to mock the S3AsyncClient operations.
@Mock
private S3AsyncClient s3AsyncClient;
Below is the unit test for my upload-file implementation. It should give you a good idea of how it is done.
@Nested
class UploadFile {
@Captor
ArgumentCaptor<PutObjectRequest> putObjectRequestCaptor;
@Captor
ArgumentCaptor<AsyncRequestBody> requestBodyCaptor;
@Test
void testSuccessfulUpload() {
Flux<ByteBuffer> body = Flux.just();
var expectedResponse = PutObjectResponse.builder().build();
when(s3AsyncClient.putObject(putObjectRequestCaptor.capture(), requestBodyCaptor.capture())).thenReturn(CompletableFuture.completedFuture(expectedResponse));
fileUploadService.upload("TEST_PREFIX", "test.zip", body);
assertThat(putObjectRequestCaptor.getValue().bucket()).isEqualTo(TEST_BUCKET);
assertThat(putObjectRequestCaptor.getValue().key()).isEqualTo("TEST_PREFIX/test.zip");
assertThat(requestBodyCaptor.getValue()).isNotNull();
}
}
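If you prefer to mock rather than hit AWS, a test for the uploadFile method from the question could look roughly like the sketch below. It assumes the data class passed to uploadFile is S3UploadData with the constructor argument order shown for InputData in the question (content, fileName, bucketName, transactionId, metaData), that AWSS3Configuration can be mocked, and that the test lives in the same package as the S3Client class; adjust the names to your actual code.
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Collections;
import java.util.concurrent.CompletableFuture;

import org.junit.jupiter.api.Test;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;

class S3ClientTest {

    @Test
    void uploadFileReturnsETagFromPutObjectResponse() {
        // Mock the async client and stub putObject to complete with a known ETag
        S3AsyncClient asyncClient = mock(S3AsyncClient.class);
        PutObjectResponse stubbedResponse = PutObjectResponse.builder().eTag("test-etag").build();
        when(asyncClient.putObject(any(PutObjectRequest.class), any(AsyncRequestBody.class)))
                .thenReturn(CompletableFuture.completedFuture(stubbedResponse));

        // Mock the configuration so the constructor wires in the mocked client
        AWSS3Configuration config = mock(AWSS3Configuration.class);
        when(config.getAsyncClient()).thenReturn(asyncClient);
        S3Client<String> s3Client = new S3Client<>(config);

        // Hypothetical test data; argument order follows the data class shown in the question
        S3UploadData<String> uploadData = new S3UploadData<>(
                "file content", "file.txt", "my-bucket", "tid-123", Collections.emptyMap());

        String eTag = s3Client.uploadFile(uploadData).join();
        assertEquals("test-etag", eTag);
    }
}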
I'm making an application with the Google SpeechClient, which requires the GOOGLE_APPLICATION_CREDENTIALS environment variable to be set; once it is set, you can use the speech-to-text API.
My application needs to run on Linux and Windows. On Linux it runs perfectly; however, on Windows it throws the exception com.google.api.gax.rpc.UnavailableException: "io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata" when trying to run this thread:
package Controller.Runnables;
import Controller.GUI.VoxSpeechGUIController;
import Model.SpokenTextHistory;
import com.google.api.gax.rpc.ClientStream;
import com.google.api.gax.rpc.ResponseObserver;
import com.google.api.gax.rpc.StreamController;
import com.google.cloud.speech.v1.*;
import com.google.protobuf.ByteString;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;
import java.io.IOException;
import java.util.ArrayList;
public class SpeechRecognizerRunnable implements Runnable{
private VoxSpeechGUIController controller;
public SpeechRecognizerRunnable(VoxSpeechGUIController voxSpeechGUIController) {
this.controller = voxSpeechGUIController;
}
@Override
public void run() {
MicrofoneRunnable micrunnable = MicrofoneRunnable.getInstance();
Thread micThread = new Thread(micrunnable);
ResponseObserver<StreamingRecognizeResponse> responseObserver = null;
try (SpeechClient client = SpeechClient.create()) {
ClientStream<StreamingRecognizeRequest> clientStream;
responseObserver =
new ResponseObserver<StreamingRecognizeResponse>() {
ArrayList<StreamingRecognizeResponse> responses = new ArrayList<>();
public void onStart(StreamController controller) {}
public void onResponse(StreamingRecognizeResponse response) {
try {
responses.add(response);
StreamingRecognitionResult result = response.getResultsList().get(0);
// There can be several alternative transcripts for a given chunk of speech. Just
// use the first (most likely) one here.
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
String transcript = alternative.getTranscript();
System.out.printf("Transcript : %s\n", transcript);
String newText = SpokenTextHistory.getInstance().getActualSpeechString() + " " + transcript;
SpokenTextHistory.getInstance().setActualSpeechString(newText);
controller.setLabelText(newText);
}
catch (Exception ex){
System.out.println(ex.getMessage());
ex.printStackTrace();
}
}
public void onComplete() {
}
public void onError(Throwable t) {
System.out.println(t);
}
};
clientStream = client.streamingRecognizeCallable().splitCall(responseObserver);
RecognitionConfig recognitionConfig =
RecognitionConfig.newBuilder()
.setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
.setLanguageCode("pt-BR")
.setSampleRateHertz(16000)
.build();
StreamingRecognitionConfig streamingRecognitionConfig =
StreamingRecognitionConfig.newBuilder().setConfig(recognitionConfig).build();
StreamingRecognizeRequest request =
StreamingRecognizeRequest.newBuilder()
.setStreamingConfig(streamingRecognitionConfig)
.build(); // The first request in a streaming call has to be a config
clientStream.send(request);
try {
// SampleRate:16000Hz, SampleSizeInBits: 16, Number of channels: 1, Signed: true,
// bigEndian: false
AudioFormat audioFormat = new AudioFormat(16000, 16, 1, true, false);
DataLine.Info targetInfo =
new DataLine.Info(
TargetDataLine.class,
audioFormat); // Set the system information to read from the microphone audio
// stream
if (!AudioSystem.isLineSupported(targetInfo)) {
System.out.println("Microphone not supported");
System.exit(0);
}
// Target data line captures the audio stream the microphone produces.
micrunnable.targetDataLine = (TargetDataLine) AudioSystem.getLine(targetInfo);
micrunnable.targetDataLine.open(audioFormat);
micThread.start();
long startTime = System.currentTimeMillis();
while (!micrunnable.stopFlag) {
long estimatedTime = System.currentTimeMillis() - startTime;
if (estimatedTime >= 55000) {
clientStream.closeSend();
clientStream = client.streamingRecognizeCallable().splitCall(responseObserver);
request =
StreamingRecognizeRequest.newBuilder()
.setStreamingConfig(streamingRecognitionConfig)
.build();
startTime = System.currentTimeMillis();
} else {
request =
StreamingRecognizeRequest.newBuilder()
.setAudioContent(ByteString.copyFrom(micrunnable.sharedQueue.take()))
.build();
}
clientStream.send(request);
}
} catch (Exception e) {
System.out.println(e);
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
I've been working at this for hours and have not found a solution to my problem.
It is worth mentioning that the environment variable is being set correctly.
Has anyone ever had this problem with Google? What should I do to fix this?
This is my environment variable creator:
PS: I've already tried all of Google's alternative ways of validating credentials, but they all return errors.
package Controller.Autentication;
import java.io.*;
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
public class GoogleAuthentication {
private static final String GOOGLE_APPLICATION_CREDENTIALS = "GOOGLE_APPLICATION_CREDENTIALS";
private static final String VoxSpeechFolder = ".vox";
private static final String GoogleAuthenticationJsonFile = "VoxAuthentication.json";
public static void setupGoogleCredentials() {
String directory = defaultDirectory();
directory += File.separator+VoxSpeechFolder;
File voxPath = new File(directory);
if (!voxPath.exists()) {
voxPath.mkdirs();
}
ClassLoader classLoader = GoogleAuthentication.class.getClassLoader();
File srcFile = new File(classLoader.getResource(GoogleAuthenticationJsonFile).getFile());
if(srcFile.exists()){
try {
String voxDestPath = defaultDirectory() + File.separator + VoxSpeechFolder +File.separator+ GoogleAuthenticationJsonFile;
File destFile = new File(voxDestPath);
copyFile(srcFile,destFile);
} catch (IOException e) {
e.printStackTrace();
}
}
try {
Map<String,String> googleEnv = new HashMap<>();
String path = defaultDirectory() +File.separator+ VoxSpeechFolder +File.separator+ GoogleAuthenticationJsonFile;
googleEnv.put(GOOGLE_APPLICATION_CREDENTIALS, path);
setGoogleEnv(googleEnv);
} catch (Exception e) {
e.printStackTrace();
}
}
static void copyFile(File sourceFile, File destFile)
throws IOException {
InputStream inStream ;
OutputStream outStream ;
System.out.println(destFile.getPath());
if(destFile.createNewFile()){
inStream = new FileInputStream(sourceFile);
outStream = new FileOutputStream(destFile);
byte[] buffer = new byte[1024];
int length;
while ((length = inStream.read(buffer)) > 0){
outStream.write(buffer, 0, length);
}
inStream.close();
outStream.close();
}
}
static String defaultDirectory()
{
String OS = getOperationSystem();
if (OS.contains("WIN"))
return System.getenv("APPDATA");
else if (OS.contains("MAC"))
return System.getProperty("user.home") + "/Library/Application "
+ "Support";
else if (OS.contains("LINUX")) {
return System.getProperty("user.home");
}
return System.getProperty("user.dir");
}
static String getOperationSystem() {
return System.getProperty("os.name").toUpperCase();
}
protected static void setGoogleEnv(Map<String, String> newenv) throws Exception {
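// Note: this reflection trick only rewrites the JVM's cached copy of the environment
// (the maps behind System.getenv()), not the OS-level variables, and it is fragile
// across JDK versions and platforms.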
try {
Class<?> processEnvironmentClass = Class.forName("java.lang.ProcessEnvironment");
Field theEnvironmentField = processEnvironmentClass.getDeclaredField("theEnvironment");
theEnvironmentField.setAccessible(true);
Map<String, String> env = (Map<String, String>) theEnvironmentField.get(null);
env.putAll(newenv);
Field theCaseInsensitiveEnvironmentField = processEnvironmentClass.getDeclaredField("theCaseInsensitiveEnvironment");
theCaseInsensitiveEnvironmentField.setAccessible(true);
Map<String, String> cienv = (Map<String, String>) theCaseInsensitiveEnvironmentField.get(null);
cienv.putAll(newenv);
} catch (NoSuchFieldException e) {
Class[] classes = Collections.class.getDeclaredClasses();
Map<String, String> env = System.getenv();
for(Class cl : classes) {
if("java.util.Collections$UnmodifiableMap".equals(cl.getName())) {
Field field = cl.getDeclaredField("m");
field.setAccessible(true);
Object obj = field.get(env);
Map<String, String> map = (Map<String, String>) obj;
map.clear();
map.putAll(newenv);
}
}
}
String genv = System.getenv(GOOGLE_APPLICATION_CREDENTIALS);
System.out.println(genv);
}
}
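For reference, the Google client libraries also allow passing the credentials file to the client directly, instead of relying on the GOOGLE_APPLICATION_CREDENTIALS variable at all. This is a minimal sketch of that approach, not necessarily the fix for the UNAVAILABLE error; the JSON key path is whatever path your application resolves:
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechSettings;
import java.io.FileInputStream;
import java.io.IOException;

public class ExplicitCredentialsExample {
    public static SpeechClient createClient(String jsonKeyPath) throws IOException {
        // Load the service account key directly from the JSON file
        GoogleCredentials credentials;
        try (FileInputStream keyStream = new FileInputStream(jsonKeyPath)) {
            credentials = GoogleCredentials.fromStream(keyStream);
        }
        // Build the SpeechClient with these credentials instead of the environment variable
        SpeechSettings settings = SpeechSettings.newBuilder()
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build();
        return SpeechClient.create(settings);
    }
}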
I've created a web application which creates folders when the user fills in a destination input (for example, a path like C:\xxx\xxx).
When I run it locally (http://localhost:8080), it works perfectly: it finds the local Windows path and creates the folders.
But now I want to open this webapp up to a group of people, so I deployed Tomcat on an internal Unix server (http://ipnumber:portnumber).
The problem is that when a user fills in the input with a local destination, the code cannot find the path or access the user's local folder structure; it only sees the Unix server's folder structure.
How can I achieve this? I use AngularJS for the frontend, calling a REST API with an HTTP POST; the backend is Java.
package com.ama.ist.controller;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.ama.ist.model.CustomErrorType;
import com.ama.ist.model.Patch;
import com.ama.ist.service.PatchService;
@RestController
public class PatchController {
@Autowired
private PatchService patchService;
@CrossOrigin(origins = "http://ipnumber:portnumber")
@RequestMapping(value = "/mk", method = RequestMethod.POST)
public ResponseEntity<?> createFolder(@RequestBody Patch patch) {
System.out.println("patch ddest: => " + patch.getDestination());
String iscreatedstatus = patchService.create(patch);
System.out.println("iscreatedstatus" + iscreatedstatus);
if (!(iscreatedstatus.equals("Success"))) {
System.out.println("if success" );
return new ResponseEntity<Object>(new CustomErrorType("ER",iscreatedstatus), HttpStatus.NOT_FOUND);
}
System.out.println("if disinda success" );
return new ResponseEntity<Object>(new CustomErrorType("OK",iscreatedstatus), HttpStatus.CREATED);
}
//
#RequestMapping("/resource")
public Map<String,Object> home() {
Map<String,Object> model = new HashMap<String,Object>();
model.put("id", UUID.randomUUID().toString());
model.put("content", "Hello World");
return model;
}
}
This is the service:
package com.ama.ist.service;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.springframework.stereotype.Service;
import org.tmatesoft.svn.core.SVNDepth;
import org.tmatesoft.svn.core.SVNException;
import org.tmatesoft.svn.core.SVNProperties;
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.auth.BasicAuthenticationManager;
import org.tmatesoft.svn.core.auth.ISVNAuthenticationManager;
import org.tmatesoft.svn.core.wc.SVNCommitClient;
import org.tmatesoft.svn.core.wc.SVNWCUtil;
import com.ama.ist.model.Patch;
import com.ama.ist.model.User;
@Service
public class PatchService {
public String create(Patch patch) {
String ConstantPath = patch.getDestination();
File testFile = new File("");
String currentPath = testFile.getAbsolutePath();
System.out.println("current path is: " + currentPath);
System.out.println("ConstantPath => " + ConstantPath);
// if (!(isValidPath(ConstantPath))) {
// return "invalid Path";
// }
// System.out.println("Valid mi " + isValidPath(ConstantPath));
String foldername = patch.getWinNum() + " - " + patch.getWinName();
System.out.println(ConstantPath + foldername);
File files = new File(ConstantPath + foldername);
if (files.exists()) {
return "The Folder is already created in that path";
}
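// Note: the hard-coded "\\" separators below are Windows-specific; on the Unix server
// where Tomcat runs, File.separator (or new File(parent, child)) would be needed, and
// the path itself must exist on the server, not on the user's machine.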
File files1 = new File(ConstantPath + foldername + "\\Patch");
File files2 = new File(ConstantPath + foldername + "\\Backup");
File files3 = new File(ConstantPath + foldername + "\\Backup\\UAT");
File files4 = new File(ConstantPath + foldername + "\\Backup\\PROD");
if (!files.exists()) {
if (files.mkdirs()) {
files1.mkdir();
files2.mkdir();
files3.mkdir();
files4.mkdir();
createReadme(ConstantPath + foldername, patch);
if (patch.isChecked()) {
System.out.println("patch.getDestination => " + patch.getDestination());
System.out.println("patch.getDetail => " + patch.getDetail());
System.out.println("patch.getSvnPath => " + patch.getSvnPath());
System.out.println("patch.getWinName => " + patch.getWinName());
System.out.println("patch.getWinNum => " + patch.getWinNum());
System.out.println("patch.getUserName => " + patch.getUser().getUserName());
System.out.println("patch.getPassword => " + patch.getUser().getPassword());
ImportSvn(patch);
}
System.out.println("Multiple directories are created!");
return "Success";
} else {
System.out.println("Failed to create multiple directories!");
return "Unknwon error";
}
} else {
return "File name is already exists";
}
}
public static boolean isValidPath(String path) {
System.out.println("path => " + path);
File f = new File(path);
if (f.isDirectory()) {
System.out.println("true => ");
return true;
} else {
System.out.println("false => ");
return false;
}
}
public void createReadme(String path, Patch patch) {
try {
ClassLoader classLoader = getClass().getClassLoader();
File file = new File(classLoader.getResource("Readme.txt").getFile());
// System.out.println("!!!!!!!!!!" + new java.io.File("").getAbsolutePath());
// File file = new File("resources/Readme.txt");
System.out.println(file.getAbsolutePath());
FileReader reader = new FileReader(file);
BufferedReader bufferedReader = new BufferedReader(reader);
String line;
PrintWriter writer = new PrintWriter(path + "\\Readme.txt", "UTF-8");
System.out.println(path + "\\Readme.txt");
while ((line = bufferedReader.readLine()) != null) {
line = line.replace("#Winnumber", Integer.toString(patch.getWinNum()));
line = line.replace("#NameSurname", " ");
line = line.replace("#Type", "Package");
line = line.replace("#detail", patch.getDetail());
SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy");
String date = sdf.format(new Date());
line = line.replace("#Date", date);
line = line.replace("#Desc", patch.getWinName());
writer.println(line);
System.out.println(line);
}
reader.close();
writer.close();
} catch (IOException e) {
e.printStackTrace();
}
}
public void ImportSvn(Patch patch) {
String name = patch.getUser().getUserName();
String password = patch.getUser().getPassword();
// String filename = patch.getWinName()
String filename = patch.getWinNum() + " - " + patch.getWinName();
String url = patch.getSvnPath() + "/" + filename;
ISVNAuthenticationManager authManager = new BasicAuthenticationManager(name, password);
SVNCommitClient commitClient = new SVNCommitClient(authManager, SVNWCUtil.createDefaultOptions(true));
File f = new File(patch.getDestination() + filename);
try {
String logMessage = filename;
commitClient.doImport(f, // File/Directory to be imported
SVNURL.parseURIEncoded(url), // location within svn
logMessage, // svn comment
new SVNProperties(), // svn properties
true, // use global ignores
false, // ignore unknown node types
SVNDepth.INFINITY);
// SVNClientManager cm =
// SVNClientManager.newInstance(SVNWCUtil.createDefaultOptions(true),authManager);
//
// SVNUpdateClient uc = cm.getUpdateClient();
// long[] l = uc.doUpdate(new File[]{dstPath},
// SVNRevision.HEAD,SVNDepth.INFINITY, true,true);
} catch (SVNException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
This is the AngularJS side:
$scope.Create = function() {
$scope.obj = [];
console.log("$scope.svnPath" + $scope.patch.svnPath);
console.log("$scope.userName" + $scope.patch.user.userName);
$http({
method : "POST",
url : "http://ipnumber:port/patchinit/mk",
data : $scope.patch
}).then(function mySuccess(response) {
console.log("Success!! ");
$scope.obj = response.data;
$scope.errorMessage = response.data.errorMessage;
$scope.errorCode = response.data.errorCode;
}, function myError(response) {
//$scope.obj = response.statusText;
$scope.errorMessage = response.data.errorMessage;
$scope.errorCode = response.data.errorCode;
});
}
You can share that folder on Windows and mount the shared folder on Unix. Once mounted, it can easily be accessed via Samba (smb://192.168.1.117/Your_Folder).
Samba is standard on nearly all distributions of Linux and is commonly included as a basic system service on other Unix-based operating systems as well.
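Keep in mind that the backend can only see paths on the machine where Tomcat runs, so after mounting the share you would point the service at the mount point. A minimal sketch (the /mnt/patches mount point is hypothetical):
import java.io.File;

public class SharedFolderExample {
    public static boolean createPatchFolders(String folderName) {
        // Base directory is the mounted Windows share as seen from the Unix server
        File base = new File("/mnt/patches", folderName);
        // Use File(parent, child) instead of concatenating "\\" so the paths work on Unix too;
        // mkdirs() creates missing parent directories as needed
        return new File(base, "Patch").mkdirs()
                && new File(base, "Backup/UAT").mkdirs()
                && new File(base, "Backup/PROD").mkdirs();
    }
}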
Simply put, I want to apply image compression to a PNG/JPEG/bitmap file.
On Android we have Bitmap.CompressFormat to compress our bitmap file and use it for further operations.
The Bitmap.CompressFormat class allows compression in 3 formats, as below:
JPEG
PNG
WEBP
My question is that I want to compress the file in any one of the formats below:
JBIG2
TIFF G4
TIFF LZW
I have found some image compression libraries like ImageIO and ImageMagick but didn't have any success. I want to upload this file to an Amazon server. Please guide me on how to achieve this, or is there any other option to upload an image to an Amazon server?
Thanks for your time.
I don't know about compressing those file formats, but I created this class, which uses the Amazon SDK API, to upload files programmatically into an Amazon S3 bucket:
package com.amazon.util;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
public class AmazonS3Files {
private static final String existingBucketName = "bucketName";
private final String strProperties = "accessKey = MYACESSKEY \n"
+ "secretKey = my+secret+key";
private static final String urlRegion = "https://s3.amazonaws.com/";
public static final String urlPathS3 = urlRegion + existingBucketName;
public String UploadFile(InputStream inFile, String pathDocument, String fileName) {
return UploadFile(inFile, pathDocument, fileName, false);
}
public void deleteObjectsInFolder(String folderPath) {
InputStream inputStreamCredentials = new ByteArrayInputStream(strProperties.getBytes());
folderPath = folderPath.replace('\\', '/');
if (folderPath.charAt(folderPath.length() - 1) == '/') {
folderPath = folderPath.substring(0, folderPath.length() - 1);
}
if (folderPath.charAt(0) == '/') {
folderPath = folderPath.substring(1, folderPath.length());
}
try {
AmazonS3 s3Client = new AmazonS3Client(new PropertiesCredentials(inputStreamCredentials));
for (S3ObjectSummary file : s3Client.listObjects(existingBucketName, folderPath).getObjectSummaries()) {
s3Client.deleteObject(existingBucketName, file.getKey());
}
} catch (IOException | AmazonClientException e) {
System.out.println(e);
}
}
public void deleteFile(String filePath) {
InputStream inputStreamCredentials = new ByteArrayInputStream(strProperties.getBytes());
filePath = filePath.replace('\\', '/');
if (filePath.charAt(0) == '/') {
filePath = filePath.substring(1, filePath.length());
}
try {
AmazonS3 s3Client = new AmazonS3Client(new PropertiesCredentials(inputStreamCredentials));
s3Client.deleteObject(existingBucketName, filePath);
} catch (IOException | AmazonClientException e) {
System.out.println(e);
}
}
public String UploadFile(InputStream inFile, String pathDocument, String fileName, boolean bOverwiteFile) {
InputStream inputStreamCredentials = new ByteArrayInputStream(strProperties.getBytes());
String amazonFileUploadLocationOriginal;
String strFileExtension = fileName.substring(fileName.lastIndexOf("."), fileName.length());
fileName = fileName.substring(0, fileName.lastIndexOf("."));
fileName = fileName.replaceAll("[^A-Za-z0-9]", "");
fileName = fileName + strFileExtension;
pathDocument = pathDocument.replace('\\', '/');
try {
if (pathDocument.charAt(pathDocument.length() - 1) == '/') {
pathDocument = pathDocument.substring(0, pathDocument.length() - 1);
}
if (pathDocument.charAt(0) == '/') {
pathDocument = pathDocument.substring(1, pathDocument.length());
}
amazonFileUploadLocationOriginal = existingBucketName + "/" + pathDocument;
AmazonS3 s3Client = new AmazonS3Client(new PropertiesCredentials(inputStreamCredentials));
s3Client.setRegion(Region.getRegion(Regions.SA_EAST_1));
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(inFile.available());
if (!bOverwiteFile) {
boolean bFileServerexists = true;
int tmpIntEnum = 0;
while (bFileServerexists) {
String tmpStrFile = fileName;
if (tmpIntEnum > 0) {
tmpStrFile = fileName.substring(0, fileName.lastIndexOf(".")) + "(" + tmpIntEnum + ")" + fileName.substring(fileName.lastIndexOf("."), fileName.length());
}
if (!serverFileExists(urlRegion + amazonFileUploadLocationOriginal + "/" + tmpStrFile)) {
bFileServerexists = false;
fileName = tmpStrFile;
}
tmpIntEnum++;
}
}
String strFileType = fileName.substring(fileName.lastIndexOf("."), fileName.length());
if (strFileType.toUpperCase().equals(".jpg".toUpperCase())) {
objectMetadata.setContentType("image/jpeg");
} else if (strFileType.toUpperCase().equals(".png".toUpperCase())) {
objectMetadata.setContentType("image/png");
} else if (strFileType.toUpperCase().equals(".gif".toUpperCase())) {
objectMetadata.setContentType("image/gif");
} else if (strFileType.toUpperCase().equals(".gmap".toUpperCase())) {
objectMetadata.setContentType("text/plain");
}
PutObjectRequest putObjectRequest = new PutObjectRequest(amazonFileUploadLocationOriginal, fileName, inFile, objectMetadata).withCannedAcl(CannedAccessControlList.PublicRead);
PutObjectResult result = s3Client.putObject(putObjectRequest);
return "/" + pathDocument + "/" + fileName;
} catch (Exception e) {
// TODO: handle exception
return null;
}
}
public boolean serverFileExists(String URLName) {
try {
HttpURLConnection.setFollowRedirects(false);
HttpURLConnection con =
(HttpURLConnection) new URL(URLName).openConnection();
con.setRequestMethod("HEAD");
return (con.getResponseCode() == HttpURLConnection.HTTP_OK);
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
}
And for usage with your file:
BufferedImage img = null;
try {
img = ImageIO.read(new File("file.jpg"));
String strReturn = new AmazonS3Files().UploadFile(new ByteArrayInputStream(((DataBufferByte)(img).getRaster().getDataBuffer()).getData()), "path/to/file", "newfilename.jpg"); //Returns null if the upload doesn't work or the s3 file path of the uploaded file
} catch (IOException e) {
//Handle Exception
}
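On the compression formats themselves, which the class above does not address: if the conversion can run on a server-side JVM (javax.imageio is not available on Android itself), the TIFF writer bundled with ImageIO since Java 9 can emit TIFF G4 and TIFF LZW output. A rough sketch under those assumptions; the compression type names ("CCITT T.6" for G4, "LZW") are those of the JDK TIFF plugin, and G4 requires a bilevel (TYPE_BYTE_BINARY) image:
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;

public class TiffCompressor {
    // compressionType: e.g. "CCITT T.6" (G4, bilevel images only) or "LZW"
    public static void writeTiff(BufferedImage image, File out, String compressionType) throws IOException {
        Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("TIFF");
        if (!writers.hasNext()) {
            throw new IOException("No TIFF writer available (needs Java 9+ or a TIFF ImageIO plugin)");
        }
        ImageWriter writer = writers.next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionType(compressionType);
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(ios);
            writer.write(null, new IIOImage(image, null, null), param);
        } finally {
            writer.dispose();
        }
    }
}
The resulting file could then be uploaded with the S3 code above.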
Local - Wowza Streaming Engine 4.1.0, Windows 8, Java version 1.7.0_67
Server - The Wowza Streaming Engine AMI here. Java version 1.7.0_65
I have Wowza running locally and on an EC2 instance.
Locally it works fine and I can connect and publish streams to my application without a problem. I cannot connect or publish streams to the application on my server, however.
I removed the .jar (module) that went with the application, and I was able to connect and publish to my app, though it gave me a warning that it couldn't find the associated module, which was to be expected.
I put the module back in, restarted the server, and I was unable to connect.
It appears that my .jar file is stopping the application from loading for some reason.
Here's the source for my module:
package com.xxxxxxxxxxxxxxx.recorder;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.wowza.wms.application.*;
import com.wowza.wms.amf.*;
import com.wowza.wms.client.*;
import com.wowza.wms.module.*;
import com.wowza.wms.request.*;
import com.wowza.wms.stream.*;
import com.wowza.wms.rtp.model.*;
import com.wowza.wms.httpstreamer.model.*;
import com.wowza.wms.httpstreamer.cupertinostreaming.httpstreamer.*;
import com.wowza.wms.httpstreamer.smoothstreaming.httpstreamer.*;
public class RecorderModules extends ModuleBase implements AWSCredentialsProvider {
IApplicationInstance appInstance;
private String videoBucket;
private String thumbBucket;
private String videoDistro;
private String thumbnailDistro;
private String region;
private AmazonS3Client s3;
private String dir;
public void onAppStart(IApplicationInstance appInstance) {
String fullname = appInstance.getApplication().getName() + "/"
+ appInstance.getName();
getLogger().info("onAppStart: " + fullname);
this.appInstance = appInstance;
try{
videoBucket = appInstance.getProperties().getPropertyStr("videoBucket");
getLogger().info("Video bucket is " + videoBucket);
thumbBucket = appInstance.getProperties().getPropertyStr("thumbBucket");
getLogger().info("Thumb bucket is " + thumbBucket);
videoDistro = appInstance.getProperties().getPropertyStr("videoDistro");
getLogger().info("Video distro is " + videoDistro);
thumbnailDistro =appInstance.getProperties().getPropertyStr("thumbnailDistro");
getLogger().info("thumbnail distro is " + thumbnailDistro);
region = appInstance.getProperties().getPropertyStr("region");
getLogger().info("region is " + region);
s3 = new AmazonS3Client();
s3.setEndpoint(region);
getLogger().info("AmazonS3Client is created");
}catch(Exception e){
getLogger().info("Could not read config " + e);
}
}
public void doSave(IClient client, RequestFunction function, AMFDataList params) {
getLogger().info("doSave hit ");
new File(dir + params.getString(3) + ".flv").renameTo(new File(dir+params.getString(4)+".flv"));
getLogger().info("Starting upload");
String thumbName = params.getString(4).replace("vid_", "thumb_")+".jpg";
String flvName = params.getString(4)+".flv";
String mp4Name = params.getString(4)+".mp4";
try{
PutObjectRequest p = new PutObjectRequest(videoBucket,flvName, new File(dir+flvName));
p.setRequestCredentials(getCredentials());
p.setCannedAcl(CannedAccessControlList.BucketOwnerFullControl);
getLogger().info("attempting to upload " + flvName + " to " + videoBucket);
s3.putObject(p);
getLogger().info("flv upload complete " + videoBucket + " " + flvName);
PutObjectRequest p2 = new PutObjectRequest(thumbBucket,thumbName, new File(dir+thumbName));
p2.setRequestCredentials(getCredentials());
p2.setCannedAcl(CannedAccessControlList.PublicRead);
getLogger().info("attempting to upload " + thumbName + " to " + thumbBucket);
s3.putObject(p2);
getLogger().info("thumb upload complete " + thumbBucket + " " + thumbName);
String[] info = new String[5];
info[0] = videoDistro+params.getString(4);
info[1] = thumbnailDistro+thumbName;
info[2] = params.getString(4);
info[3] = videoBucket;
info[4] = thumbBucket;
getLogger().info("sending info to client " + info[0]);
//client.call("uploadDone", null,(Object[])info);
}catch(Exception e){
getLogger().info("Upload failed");
getLogger().info(e);
//client.call("uploadFailed")
}
//transcode
//-crf 23 -refs 3 -profile:v baseline -level 3.0 -pix_fmt yuv420p -preset veryslow
String[] command = {"ffmpeg",
"-i", dir+params.getString(4)+".flv",
"-crf", "23",
"-refs","3",
"-profile:v","baseline",
"-level","3.0",
"-pix_fmt","yuv420p",
"-preset","veryslow",
dir+params.getString(4)+".mp4"};
try {
ProcessBuilder builder = new ProcessBuilder(command);
builder.redirectErrorStream(true);
getLogger().info("Starting process");
Process process = builder.start();
BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line = null;
while((line = in.readLine()) != null) {
System.out.println(line);
}
process.waitFor();
PutObjectRequest p = new PutObjectRequest(videoBucket,mp4Name, new File(dir+mp4Name));
p.setRequestCredentials(getCredentials());
p.setCannedAcl(CannedAccessControlList.BucketOwnerFullControl);
getLogger().info("transcoding completed");
s3.putObject(p);
getLogger().info("mp4 file uploaded");
} catch (Exception e) {
getLogger().info("Error running ffmpeg");
e.printStackTrace();
}
deleteFiles(params.getString(4).replace("vid_",""));
}
public void saveThumbnail(IClient client, RequestFunction function, AMFDataList params){
String dir = client.getAppInstance().getStreamStoragePath()+"/"+"thumb_"+params.getString(4).split(",")[2]+".jpg";
getLogger().info(params);
Path path = Paths.get(dir);
byte[] byteArr = (byte[])((AMFDataByteArray)params.get(3)).getValue();
try {
Files.write(path, byteArr, StandardOpenOption.CREATE_NEW);
} catch (IOException e) {
e.printStackTrace();
}
}
public void onAppStop(IApplicationInstance appInstance) {
String fullname = appInstance.getApplication().getName() + "/"
+ appInstance.getName();
getLogger().info("onAppStop: " + fullname);
}
public void onConnect(IClient client, RequestFunction function, AMFDataList params) {
getLogger().info("onConnect: " + client.getClientId());
}
public void onConnectAccept(IClient client) {
getLogger().info("onConnectAccept: " + client.getClientId());
}
public void onConnectReject(IClient client) {
getLogger().info("onConnectReject: " + client.getClientId());
}
public void onDisconnect(IClient client) {
getLogger().info("onDisconnect: " + client.getClientId());
}
public void onStreamCreate(IMediaStream stream) {
getLogger().info("onStreamCreate: " + stream.getSrc());
}
public void onStreamDestroy(IMediaStream stream) {
getLogger().info("onStreamDestroy: " + stream.getSrc());
}
public void onHTTPSessionCreate(IHTTPStreamerSession httpSession) {
getLogger().info("onHTTPSessionCreate: " + httpSession.getSessionId());
}
public void onHTTPSessionDestroy(IHTTPStreamerSession httpSession) {
getLogger().info("onHTTPSessionDestroy: " + httpSession.getSessionId());
}
public void onHTTPCupertinoStreamingSessionCreate(HTTPStreamerSessionCupertino httpSession) {
getLogger().info(
"onHTTPCupertinoStreamingSessionCreate: "
+ httpSession.getSessionId());
}
public void onHTTPCupertinoStreamingSessionDestroy(HTTPStreamerSessionCupertino httpSession) {
getLogger().info(
"onHTTPCupertinoStreamingSessionDestroy: "
+ httpSession.getSessionId());
}
public void onHTTPSmoothStreamingSessionCreate( HTTPStreamerSessionSmoothStreamer httpSession) {
getLogger().info(
"onHTTPSmoothStreamingSessionCreate: "
+ httpSession.getSessionId());
}
public void onHTTPSmoothStreamingSessionDestroy( HTTPStreamerSessionSmoothStreamer httpSession) {
getLogger().info(
"onHTTPSmoothStreamingSessionDestroy: "
+ httpSession.getSessionId());
}
public void onRTPSessionCreate(RTPSession rtpSession) {
getLogger().info("onRTPSessionCreate: " + rtpSession.getSessionId());
}
public void onRTPSessionDestroy(RTPSession rtpSession) {
getLogger().info("onRTPSessionDestroy: " + rtpSession.getSessionId());
}
public void onCall(String handlerName, IClient client, RequestFunction function, AMFDataList params) {
getLogger().info("onCall: " + handlerName);
}
/* Overwritten method: Delete content of the same name before starting */
public void publish(IClient client, RequestFunction function, AMFDataList params) {
getLogger().info("publish hit");
String name = params.getString(3).replace("flv:","").replace("vid_","").replace("_temp", "");
getLogger().info("name:" + name);
dir = appInstance.decodeStorageDir("${com.wowza.wms.AppHome}"+"/content/recorder/");
deleteFiles(name);
invokePrevious(client,function,params);
}
private void deleteFiles(String name){
getLogger().info("deleting " + name);
try {
if(Files.exists(Paths.get(dir+"thumb_"+name+".jpg"))){
getLogger().info("deleting thumbnail");
Files.delete(Paths.get(dir+"thumb_"+name+".jpg"));
}
if(Files.exists((Paths.get(dir+"vid_"+name+".flv")))){
getLogger().info("deleting video");
Files.delete(Paths.get(dir+"vid_"+name+".flv"));
}
if(Files.exists((Paths.get(dir+"vid_"+name+".mp4")))){
getLogger().info("deleting mp4 video");
Files.delete(Paths.get(dir+"vid_"+name+".mp4"));
}
if(Files.exists((Paths.get(dir+"vid_"+name+"_temp.flv")))){
getLogger().info("deleting temp video");
Files.delete(Paths.get(dir+"vid_"+name+"_temp.flv"));
}
} catch (IOException e) {
getLogger().info("Could not delete old files");
}
}
@Override
public AWSCredentials getCredentials() {
getLogger().info("getting credentials");
return new BasicAWSCredentials(appInstance.getProperties().getPropertyStr("accessKey"),appInstance.getProperties().getPropertyStr("secretKey"));
}
@Override
public void refresh() {
// TODO Auto-generated method stub
}
}
It could be related to this:
http://www.wowza.com/forums/showthread.php?36693-Aws-plugin-breaks-application-with-no-errors
That might mean that my .jar file isn't being built with the required dependencies (the AWS stuff).
EDIT:
So I included all the dependencies, making sure that the AWS stuff was in the .jar (I looked at it with WinRAR), and now it gives me "Module class not found or could not be loaded" when the application starts. I can see that the application is there.
This might be related to an error I get in Eclipse when I try to create a runnable jar with all the dependencies extracted: "Could not find main method from given launch configuration." Even though I got this error, it appeared to work, as the .jar file grew several times in size.
Make sure the applications folder actually exists; in your wowza-install-folder there must be a subfolder called "applications". If it's not there, create it manually.
In Wowza 4.x, you must use the Engine Manager to create applications and manage them. Open the Engine Manager (http://your.ec2.server:8088) and choose the specific application; then select "Incoming security" and check what it says under "RTMP Publishing". If you don't want any protection, change it to "Open (no authentication required)"; otherwise you must send along credentials from your AS3 code when connecting with the NetConnection.
You may also want to check the Wowza logs in [wowza-install-folder]/logs - if the connection fails, there should be a message in the log about this that may give you useful information.
PS: I usually use the Pre-Built AMIs from Wowza's website to initiate a new instance.
I resolved it by installing everything differently on a new server.
I took my auto-generated .jar file, and the two jar files it depended on, commons-codec-1.9 and aws-java-sdk-1.4.7, and installed them using the "startup package" method, as opposed to transferring the dependencies later over FTP.
And everything worked fine.