Writing a JUnit test case for merging files on an SFTP server - Java

I am using Apache Commons VFS to connect to an SFTP server and write the content of the files in the /input directory into a single file in the /output directory. The names of the files in the input directory are provided as a List. I am struggling to write a JUnit test case for this. My intention is that once the code has executed, I will compare the contents of the files in /input against the content of the file in /output.
public void exportFile(List<String> fileNamesList) throws IOException {
    for (String file : fileNamesList) {
        try (FileObject fileObject = sftpConfiguration.connect(); // gets the FileObject (mocked in the test below)
             OutputStream fileOutputStream = fileObject.resolveFile("/output/" + "exportfile.txt")
                     .getContent().getOutputStream(true)) {
            fileObject.resolveFile("/input/" + file).getContent().getInputStream()
                    .transferTo(fileOutputStream);
        }
    }
}
I want to write a JUnit test case for the above. Below is my setup for the test case:
@BeforeAll
static void setUpSftpServer() throws IOException {
    System.out.println("inside setup ssh");
    sshd = SshServer.setUpDefaultServer();
    sshd.setPort(1234);
    sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider());
    sshd.setPublickeyAuthenticator(AcceptAllPublickeyAuthenticator.INSTANCE);
    sshd.setSubsystemFactories(Arrays.asList(new SftpSubsystemFactory()));
    sshd.start();
}
@Test
void exportFileTest() throws IOException, URISyntaxException {
    System.out.println("Inside exportFile test");
    FileObject fileObject = getFileObject();
    when(sftpConfiguration.connect()).thenReturn(fileObject);
    myobject.exportFile(Arrays.asList("a.txt"));
    String actualContent = fileObject.resolveFile("/input/a.txt").getContent().getString("UTF-8");
    String expectedContent = fileObject.resolveFile("/output/exportfile.txt").getContent().getString("UTF-8");
    assertTrue(actualContent.equals(expectedContent));
}
static FileObject getFileObject() throws URISyntaxException, FileSystemException {
    String userInfo = "uname" + ":" + "pwd";
    SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
    FileSystemOptions options = new FileSystemOptions();
    IdentityProvider identityInfo = new IdentityInfo(new File("/fake/path/to/key"), "test".getBytes());
    sftpConfigBuilder.setIdentityProvider(options, identityInfo);
    URI uri = new URI("sftp", userInfo, "127.0.0.1", Objects.requireNonNullElse(1234, -1), null, null, null);
    FileObject fileObject = VFS.getManager().resolveFile(uri.toString(), options);
    System.out.println("creating file object complete");
    fileObject.resolveFile("/input").createFolder(); // create a folder in the path
    fileObject.resolveFile("/output").createFolder();
    // code to create a file called a.txt inside /input and write the string "abc" to the file
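    // A sketch of what that comment describes (assumed approach, using VFS itself; any
    // IOException is wrapped because this helper only declares FileSystemException):
    FileObject testFile = fileObject.resolveFile("/input/a.txt");
    try (OutputStream out = testFile.getContent().getOutputStream()) {
        out.write("abc".getBytes(StandardCharsets.UTF_8));
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }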
    return fileObject;
}
But I am getting an exception like the one below:
org.apache.commons.vfs2.FileSystemException: Unknown message with code "Could not get the user id of the current user (error code: -1)".
I am getting this exception at the line
FileObject fileObject= VFS.getManager().resolveFile(uri.toString(),options);
How do I write the unit test for this case correctly?

This is caused by the SftpFileSystem failing to run the command id -u, which doesn't exist on some SSH servers such as Windows OpenSSH. VFS runs this command when attempting to detect whether an exec channel is available. Resolve this by adding the following SFTP configuration:
sftpConfigBuilder.setDisableDetectExecChannel(options, true);
See here.
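For context, a sketch of where that call fits in the question's getFileObject() (it reuses the options, identityInfo and uri variables from that method; the setter is available in recent commons-vfs2 releases):
FileSystemOptions options = new FileSystemOptions();
SftpFileSystemConfigBuilder sftpConfigBuilder = SftpFileSystemConfigBuilder.getInstance();
sftpConfigBuilder.setIdentityProvider(options, identityInfo);
// disable the "id -u" / exec-channel probe that fails on servers without a shell
sftpConfigBuilder.setDisableDetectExecChannel(options, true);
FileObject fileObject = VFS.getManager().resolveFile(uri.toString(), options);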

Related

Java geoip2 java.io.FileNotFoundException:

I use geoip2 to determine the country by IP. During development and testing of the code I have no problems, but when I run the compiled archive I encounter a java.io.FileNotFoundException. I understand that this is because the path to the file is absolute, and in the archive it changes. Question: how do I need to change my code so that I can access the file even from the archive?
public static String getCountryByIp(String ip) throws Exception {
    File database = new File(URLDecoder.decode(GeoUtils.class.getResource("/GeoLite2-Country.mmdb").getFile(), "UTF-8"));
    DatabaseReader dbReader = new DatabaseReader.Builder(database).build();
    InetAddress ipAddress = InetAddress.getByName(ip);
    CountryResponse response = dbReader.country(ipAddress);
    return response.getCountry().getName();
}
test.war/
test.war/WEB-INF/classes
You can try this
InputStream is = this.getClass().getClassLoader().getResourceAsStream("GeoLite2-Country.mmdb");
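DatabaseReader.Builder also accepts an InputStream, so the reader can be built straight from the classpath resource. A minimal sketch, keeping the method from the question:
public static String getCountryByIp(String ip) throws Exception {
    // load the database from the classpath so it also works when packaged inside a war/jar
    try (InputStream is = GeoUtils.class.getClassLoader().getResourceAsStream("GeoLite2-Country.mmdb")) {
        DatabaseReader dbReader = new DatabaseReader.Builder(is).build();
        CountryResponse response = dbReader.country(InetAddress.getByName(ip));
        return response.getCountry().getName();
    }
}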

How to get the resources directory path using Java

I am writing a test in Cucumber in which I want to upload a file from testData to an S3 bucket:
String bucket = bucketname + "/ADL";
String ActualFilesPathForComparison = Environment.getInstance().getValue(DATAINPUTPATH);
temp = ActualFilesPathForComparison + inputPath + File.separator + file;
s3.uploadFile(bucket, file, new File(temp));

public void uploadFile(String bucketName, String fileKeyName, File localFilePath) {
    try {
        this.s3.putObject((new PutObjectRequest(bucketName, fileKeyName, localFilePath)).withCannedAcl(CannedAccessControlList.PublicRead));
    } catch (Exception var5) {
        throw new RuntimeException("Upload file failed.", var5);
    }
}
I have this file:
src\main\resources\testData\testInputsFile\testLZInputUnZippedFiles\Log.csv
When I run the test, I see this in the debugger:
localFilePath = testData\testInputsFile\testLZInputUnZippedFiles\Log_WithHeader.csv
And I get this exception:
com.amazonaws.SdkClientException: Unable to calculate MD5 hash: testData\testInputsFile\testLZInputUnZippedFiles\Log_WithHeader.csv (The system cannot find the path specified)
What should I fix? I want to avoid copying the file outside of src.
To access a file named "my.properties" which is inside src/main/resources/, you just need to do:
File propertiesFile = new File(this.getClass().getClassLoader().getResource("my.properties").getFile());
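Adapted to the question's setup, that could look roughly like this (the step-definition class name and the exact resource path are assumptions based on the paths shown above):
// resolve the test file from the classpath instead of a path relative to the working directory
String resourcePath = "testData/testInputsFile/testLZInputUnZippedFiles/Log_WithHeader.csv";
File localFile = new File(UploadSteps.class.getClassLoader().getResource(resourcePath).getFile());
s3.uploadFile(bucket, file, localFile);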

Test a Camel sFTP endpoint

I've got the following route:
public void configure() throws Exception {
    from(ftpEndpoint)
        .routeId("import-lib-files")
        .log(INFO, "Processing file: '${headers.CamelFileName}' from Libri-FTP")
        .choice()
            .when(method(isFilenameAlreadyImported))
                .log(DEBUG, "'${headers.CamelFileName}' is already imported.")
            .endChoice()
            .otherwise()
                .bean(method(unzipLibFile))
                .bean(method(persistFilename))
                .log(DEBUG, "Import file '${headers.CamelFileName}'.")
            .endChoice()
        .end()
    .end();
}
Inside the unzipLibFile processor bean, the file from the FTP gets uncompressed and written to disk.
I want to test (integration test) this route, like this:
Copy the file to the FTP
Start the route
Evaluate the 'outcome'
What I have so far:
@Before
public void setUp() throws Exception {
    // delete test-file from sftp
    final String uploaded = ftpPath + "/" + destination + "/libri-testfile.zip";
    final File uploadedFile = new File(uploaded);
    uploadedFile.delete();
    // delete unzipped test-file
    final String unzippedFile = unzipped + "/libri-testfile.xml";
    final File expectedFile = new File(unzippedFile);
    expectedFile.delete();
    // delete entries from db
    importedLibFilenameRepository.deleteAll();
    // copy file to ftp
    final File source =
        new ClassPathResource("vendors/references/lib.zip/libri-testfile.zip").getFile();
    final String target = ftpPath + "/" + destination + "/libri-testfile.zip";
    FileUtils.copyFile(new File(source.getAbsolutePath()), new File(target));
}
@Test
@Ignore
public void testStuff() throws Exception {
    // Well, here is a problem I can't fix at the moment:
    // the Camel context within the Spring context gets started when the test starts.
    // During this process the Camel routes are executed, and because I copied the file to
    // the ftp all is fine... but I don't want to have a sleep in a test, I want to start the
    // route (like the commented code beneath the sleep)
    Thread.sleep(2000);
    // final Map<String, Object> headers = Maps.newHashMap();
    // headers.put("CamelFileName", "libri-testfile.zip");
    //
    // final File file =
    //     new ClassPathResource("vendors/references/lib.zip/libri-testfile.zip").getFile();
    // final GenericFile<File> genericFile =
    //     FileConsumer.asGenericFile(file.getParent(), file, StandardCharsets.UTF_8.name(), false);
    //
    // final String uri = libFtpConfiguration.getFtpEndpoint();
    // producer.sendBodyAndHeaders(uri, InOut, genericFile, headers);

    // test if entry was made in the database
    final List<ImportedLibFilename> filenames = importedLibFilenameRepository.findAll();
    assertThat(filenames).usingElementComparatorIgnoringFields("id", "timestamp")
        .containsExactly(expectedFilename("libri-testfile.zip"));

    // test if content of unzipped file is valid
    final String expected = unzipped + "/libri-testfile.xml";
    final Path targetFile = Paths.get(expected);
    final byte[] encoded = Files.readAllBytes(targetFile);
    final String actualFileContent = new String(encoded, Charset.defaultCharset());
    final String expectedFileContent = "This is my little test file for Libri import";
    assertThat(actualFileContent).isEqualTo(expectedFileContent);
}
private ImportedLibFilename expectedFilename(final String filename) {
    final ImportedLibFilename entity = new ImportedLibFilename();
    entity.setFilename(filename);
    return entity;
}
The problem is:
All Camel routes are started automatically, and because I copied the file to the FTP the test is green. But I have a sleep inside my test, which I don't want. I don't want any Camel route to start automatically; I want to start only the route I need.
My questions are:
How can I prevent the Camel routes from starting automatically?
Is the commented code (in the test method) the right way to start a route manually?
What are best practices to test a Camel route with an FTP endpoint?
Use .autoStartup(yourVariable) in your routes to make their startup configurable. Set the variable to true in normal environments and to false in your test cases.
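A minimal sketch of what that could look like (the flag name and how it gets injected are assumptions, not part of the original answer):
// true in normal environments, false in tests, e.g. injected from configuration
private boolean importRouteEnabled;

@Override
public void configure() throws Exception {
    from(ftpEndpoint)
        .autoStartup(importRouteEnabled)   // the route only starts consuming when the flag is true
        .routeId("import-lib-files")
        .log(INFO, "Processing file: '${headers.CamelFileName}' from Libri-FTP");
}
In a test you can then start the route on demand, e.g. with camelContext.startRoute("import-lib-files") (or camelContext.getRouteController().startRoute(...) on newer Camel versions), instead of relying on a sleep.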
I don't see code that starts a route, though.
Well, take a step back. Think about splitting your FTP route, for testing and other reasons:
For example, split the route into an FTP route and a processing route. The first does only the FTP transfer and then sends the received messages to the processing route (for example a direct: route).
Benefits:
SRP: Both routes do just one thing and you can concentrate on it.
Testability: You can test the processing route easily by sending messages to the direct: endpoint of the processing route. The tests can focus on one thing too.
Extensibility: Imagine there is a new input channel (JMS, HTTP, whatever). Then you just add another input route that also sends to your processing route. Done.
When you really want to test the whole process from FTP file drop until the end, think about using the Citrus test framework or similar tooling. Camel route tests are (in my opinion) a kind of "Unit tests for Camel routes", not full integration tests.
Thanks to @burki...
His advice to split the routes (single responsibility) helped me solve my problem.
Here are the routes:
The "main" route consuming from the sFTP:
@Override
public void configure() throws Exception {
    // @formatter:off
    from(endpoint)
        .setHeader("Address", constant(address))
        .log(INFO, "Import Libri changeset: Consuming from '${headers.Address}' the file '${headers.CamelFileName}'.")
        .to("direct:import-new-file");
    // @formatter:on
}
The first sub-route:
@Override
public void configure() throws Exception {
    // @formatter:off
    from("direct:import-new-file")
        .choice()
            .when(method(isFilenameAlreadyImported))
                .log(TRACE, "'${headers.CamelFileName}' is already imported.")
            .endChoice()
            .otherwise()
                .log(TRACE, "Import file '${headers.CamelFileName}'.")
                .multicast()
                    .to("direct:persist-filename", "direct:unzip-file")
            .endChoice()
        .end()
    .end();
    // @formatter:on
}
The two multicast targets:
@Override
public void configure() throws Exception {
    // @formatter:off
    from("direct:persist-filename")
        .log(TRACE, "Try to write filename '${headers.CamelFileName}' to database.")
        .bean(method(persistFilename))
    .end();
    // @formatter:on
}
and
@Override
public void configure() throws Exception {
    // @formatter:off
    from("direct:unzip-file")
        .log(TRACE, "Try to unzip file '${headers.CamelFileName}'.")
        .bean(method(unzipFile))
    .end();
    // @formatter:on
}
And with this setup I can write my tests like:
@Test
public void testRoute_validExtractedFile() throws Exception {
    final File source = ZIP_FILE_RESOURCE.getFile();
    producer.sendBodyAndHeaders(URI, InOut, source, headers());

    final String actual = getFileContent(unzippedPath, FILENAME);
    final String expected = "This is my little test file for Libri import";
    assertThat(actual).isEqualTo(expected);
}
@Test
public void testRoute_databaseEntryExists() throws Exception {
    final File source = ZIP_FILE_RESOURCE.getFile();
    producer.sendBodyAndHeaders(URI, InOut, source, headers());

    final List<ImportedFilename> actual = importedFilenameRepository.findAll();
    final ImportedFilename expected = importedFilename(ZIPPED_FILENAME);
    assertThat(actual).usingElementComparatorIgnoringFields("id", "timestamp")
        .containsExactly(expected);
}
private String getFileContent(final String path, final String filename) throws IOException {
    final String targetFile = path + "/" + filename;
    final byte[] encodedFileContent = Files.readAllBytes(Paths.get(targetFile));
    return new String(encodedFileContent, Charset.defaultCharset());
}
private Map<String, Object> headers() {
    final Map<String, Object> headers = Maps.newHashMap();
    headers.put("CamelFileName", ZIPPED_FILENAME);
    return headers;
}
I can start the Camel route with the ProducerTemplate (producer) and send a message to a direct endpoint (instead of the FTP endpoint).

Mapr using Java

I am new to Hadoop, MapR, and Pivotal. I have written Java code to write into Pivotal, but I am facing an issue while writing into MapR.
public class HadoopFileSystemManager {
    private String url;

    public void writeFile(String filePath, String data) throws IOException, URISyntaxException {
        Path fPath = new Path(filePath);
        String url = "hdfs://" + ip + ":" + "8020";
        FileSystem fs = FileSystem.get(new URI(url), new Configuration());
        System.out.println(fs.getWorkingDirectory());
        FSDataOutputStream writeStream = fs.create(fPath);
        writeStream.writeChars(data);
        writeStream.close();
    }
}
This code works fine with Pivotal but fails with MapR.
For MapR I am using port 7222.
I am getting the following error:
"An existing connection was forcibly closed by the remote host"
Please let me know if I am using the right port, or if anything needs to be changed in the code specifically for MapR.
I have stopped iptables.
Any info is much appreciated.
Thanks
Try this code, but make sure you have the MapR client set up on the node from which you are executing the test.
public class HadoopFileSystemManager {
    private String url;

    public void writeFile(String filePath, String data) throws IOException, URISyntaxException {
        System.setProperty("java.library.path", "/opt/mapr/lib");
        Path fPath = new Path(filePath);
        String url = "hdfs://" + ip + ":" + "8020";
        FileSystem fs = FileSystem.get(new URI(url), new Configuration());
        System.out.println(fs.getWorkingDirectory());
        FSDataOutputStream writeStream = fs.create(fPath);
        writeStream.writeChars(data);
        writeStream.close();
    }
}
Add the following to the classpath:
/opt/mapr/hadoop/hadoop-0.20.2/conf:/opt/mapr/hadoop/hadoop-0.20.2/lib/hadoop-0.20.2-dev-core.jar:/opt/mapr/hadoop/hadoop-0.20.2/lib/maprfs-0.1.jar:.:/opt/mapr/hadoop/hadoop-0.20.2/lib/commons-logging-1.0.4.jar:/opt/mapr/hadoop/hadoop-0.20.2/lib/zookeeper-3.3.2.jar
The statement System.setProperty("java.library.path", "/opt/mapr/lib"); in the above code can be removed and supplied via -Djava.library.path instead, if you are running your program from a terminal.
/opt/mapr may not be your path to the MapR files. If that's the case, replace the path accordingly wherever applicable.
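For example, when launching from a terminal it would be something like the following (the main class name is a placeholder, and the classpath is the one listed above):
java -cp <classpath above> -Djava.library.path=/opt/mapr/lib com.example.YourWriterMain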
After comment:
If you are using Maven to build your project, try using the following in the pom.xml, with scope provided. MapR is compatible with the normal Apache Hadoop distribution, so you can build against it; then, when you run your program, you supply the MapR jars on the classpath.
<dependency>
    <groupId>hadoop</groupId>
    <artifactId>hadoop</artifactId>
    <version>0.20.2</version>
    <scope>provided</scope>
</dependency>

How can I copy a file on the server using JSch's SCP support?

I managed to create a method which uploads a file into a directory.
How would I have to change this so I could copy a file from /123.html to /en/123.html via JSch?
public void upFile(String source, String fileName, String destination) throws Exception {
    try {
        try {
            // change to the target directory
            client.cd(destination);
        } catch (Exception e) {
            System.out.println("Target directory does not exist, creating it!");
            JschCreateDir.createDir(host, port, username, password, destination);
            client.cd(destination);
        }
        // upload the local file to the current directory
        File file = new File(source + fileName);
        client.put(new FileInputStream(file), fileName);
    } catch (Exception e) {
        logout();
        throw e;
    }
}
I understand your question to mean that you want to copy a file on the server from one directory to another (and not a local file to the server, which your code already seems to do).
Unfortunately, the SFTP protocol (which is implemented by JSch's ChannelSFTP class) doesn't support copying directly on the server.
You can certainly combine get and put to copy the file from one location to another, but this sends the contents over the wire twice: from server to client and back again.
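For completeness, a rough sketch of that get/put round trip over a single ChannelSftp (assuming an already connected Session; exception handling omitted):
ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
sftp.connect();
try (InputStream in = sftp.get("/123.html")) {
    sftp.put(in, "/en/123.html");   // the contents travel server -> client -> server
} finally {
    sftp.disconnect();
}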
A better way would be to use an exec channel and directly issue the server's copy command. On a unixoid server, this would be cp /123.html /en/123.html. (This assumes you have shell access to the server, not sftp-only access, as I have seen elsewhere.)
Here is some code (not tested, you might need to add exception handling):
public void copyFile(Session session, String sourceFile, String destinationFile)
        throws JSchException, InterruptedException {
    ChannelExec channel = (ChannelExec) session.openChannel("exec");
    channel.setCommand("cp " + sourceFile + " " + destinationFile);
    channel.connect();
    // wait until the remote command has finished and the channel is closed
    while (channel.isConnected()) {
        Thread.sleep(20);
    }
    int status = channel.getExitStatus();
    if (status != 0) {
        // CopyException is not a JSch class; substitute your own exception type
        throw new CopyException("copy failed, exit status is " + status);
    }
}
