Zipping a folder on the remote machine running the app server through a J2EE app - java

I am working on a zipper J2EE application. This application requires the following:
The application has to zip a folder on the remote machine where the app server is running, so that other applications can download this zipped folder directly instead of downloading each file one by one.
I am able to do this on my local machine using an absolute path, but I don't know how to go ahead with the remote machine.
Code I'm using for zipping on my local machine:
File file = new File(myFolderPath);
// split the path into the parent folder and the folder name
int index = myFolderPath.lastIndexOf("/");
String folderName = myFolderPath.substring(index);
String folderPath = myFolderPath.substring(0, index);
// the zip is written next to the folder, with the same name plus ".zip"
File outFolder = new File(folderPath + folderName + ".zip");
if (!outFolder.exists()) {
    zip(file, outFolder);
}
Here myFolderPath is a string. But how should I go ahead if it is a URL?
Thanks in advance.

URLs don't allow directory listings, so this is not possible. You will need absolute paths on the server, too, or convert the URL to an absolute path on the server, maybe by replacing http://server/ with the root folder of the webapp.
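For example, a minimal sketch of that mapping, assuming the folder actually lives inside the (exploded) webapp; the URL, context path, and variable names below are only illustrative:
String url = "http://server/myapp/data/reports";                 // hypothetical input URL
String prefix = "http://server" + servletContext.getContextPath();
String relativePath = url.substring(prefix.length());            // "/data/reports"
String absolutePath = servletContext.getRealPath(relativePath);  // null if the webapp is not expanded on disk
if (absolutePath != null) {
    File folderToZip = new File(absolutePath);
    // zip folderToZip exactly as in the local-machine code above
}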

My suggestion would be to use a message-driven bean for this case. You have an MDB that listens for incoming messages on a JMS queue. The message carries the folder to zip. The message bean then calls a helper that zips the folder handed to the MDB (a rough sketch follows). I'm basing this on the details available in your question: there is an app server running and you want the zipping to happen on that server machine. It will be much more efficient and clean in my opinion.
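Something along these lines (only a sketch; the queue name, the ZipHelper class, and the message format are assumptions, not part of your question):
import java.io.File;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/zipQueue")
})
public class ZipFolderMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // the text message carries the absolute path of the folder to zip
            String folderPath = ((TextMessage) message).getText();
            // ZipHelper is your own helper wrapping the zipping code you already have
            ZipHelper.zip(new File(folderPath), new File(folderPath + ".zip"));
        } catch (Exception e) {
            throw new RuntimeException("Could not zip folder", e);
        }
    }
}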

The application has to zip a folder on the remote machine where the app server is running
I guess you have a fair idea of the path to the directory you want to zip. So code the same thing you have done locally, but in a servlet: create the zip file and write its content to the output stream of the HttpServletResponse so it can be downloaded. A sketch is shown below.
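A minimal sketch of such a servlet, assuming the folder path is known on the server and the zip is built into a temporary file first; the path and servlet name are illustrative:
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadZipServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        File folder = new File("/path/on/server/myFolder");   // assumed server-side location
        File zipFile = File.createTempFile(folder.getName(), ".zip");
        zip(folder, zipFile);                                  // your existing zipping code

        resp.setContentType("application/zip");
        resp.setHeader("Content-Disposition", "attachment; filename=\"" + folder.getName() + ".zip\"");
        try (OutputStream out = resp.getOutputStream()) {
            Files.copy(zipFile.toPath(), out);
        } finally {
            zipFile.delete();
        }
    }

    private void zip(File folder, File target) throws IOException {
        // reuse the zip(File, File) helper you already have for the local machine
    }
}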

Related

Kafka Configure PKCS12 `ssl.keystore.location=user.p12` without access to local file system

I can successfully connect to an SSL secured Kafka cluster with the following client properties:
security.protocol=SSL
ssl.truststore.type=PKCS12
ssl.truststore.location=ca.p12
ssl.truststore.password=<redacted>
ssl.keystore.type=PKCS12
ssl.keystore.location=user.p12
ssl.keystore.password=<redacted>
However, I’m writing a Java app that is running in a managed cloud environment, where I don’t have access to the file system. So I can’t just give it a local file path to .p12 files.
Are there any other alternatives, like loading from S3, from memory, or from a JVM classpath resource?
Specifically, this is a Flink app running on Amazon's Kinesis Analytics Managed Flink cluster service.
Sure, you can download whatever you want from wherever you want before you hand a Properties object to a KafkaConsumer. However, the user running the Java process will need some access to the local filesystem in order to download files.
I think packaging the files as part of your application JAR makes more sense; however, I don't know an easy way to refer to a classpath resource as if it were a regular filesystem path. If the code runs in a YARN cluster, you can try the yarn.provided.lib.dirs option when submitting as well.
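One possible workaround (only a sketch, not an official Kafka feature): package the .p12 files in the JAR, copy them out to java.io.tmpdir at startup, and point the Kafka properties at those temporary paths. The resource names, the MyApp class, and the keystorePassword variable below are assumptions:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Properties;

// copy the keystore packaged in the JAR to a temporary file on disk
Path keystore = Files.createTempFile("user", ".p12");
try (InputStream in = MyApp.class.getResourceAsStream("/user.p12")) {
    Files.copy(in, keystore, StandardCopyOption.REPLACE_EXISTING);
}

Properties props = new Properties();
props.put("security.protocol", "SSL");
props.put("ssl.keystore.type", "PKCS12");
props.put("ssl.keystore.location", keystore.toAbsolutePath().toString());
props.put("ssl.keystore.password", keystorePassword);   // supplied elsewhere, e.g. an environment variable
// repeat the same idea for the truststore, then pass props to the KafkaConsumer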
I used a workaround temporarily: upload your certificates to a file share and have your application, during initialization, download the certificates from the file share and save them to a location of your choice such as /home/site/ca.p12. The Kafka properties should then read
...
ssl.truststore.location=/home/site/ca.p12
...
Here are a few lines of code to help you download and save your certificate (using the Azure Storage SDK for Java).
// Parse the storage account from its connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.parse("[CONNECTION_STRING]");
// Create the Azure Files client.
CloudFileClient fileClient = storageAccount.createCloudFileClient();
// Get a reference to the file share.
CloudFileShare share = fileClient.getShareReference("[SHARENAME]");
// Get a reference to the root directory of the share.
CloudFileDirectory rootDir = share.getRootDirectoryReference();
// Get a reference to the directory that contains the certificate file.
CloudFileDirectory containerDir = rootDir.getDirectoryReference("[DIRECTORY]");
// Get a reference to the file and download it to the local path.
CloudFile file = containerDir.getFileReference("[FILENAME]");
file.downloadToFile("/home/site/ca.p12");

java.io.FileNotFoundException when using tomcat

I have an application running fine on localhost, but I am having issues when it is deployed on Tomcat.
The code I am using to read the file is :
File jasperFile = new File(getClass().getClassLoader().getResource("reports/Header.jasper").getFile());
I get this error in the catalina log:
net.sf.jasperreports.engine.JRException: java.io.FileNotFoundException: file:/usr/local/apache-tomcat9/webapps/com.peek.facture.server/WEB-INF/lib/facture.server-1.0.0-SNAPSHOT.jar!/reports/Header.jasper
What puzzles me is the "!" at the end of the jar name; where does it come from?
Also, I have downloaded the jar and extracted it, and my Header.jasper is correctly in the resources/reports/ folder.
When you run locally, a stand-alone physical file Header.jasper exists (you can physically see it when you browse the reports directory).
However, when you deploy to a Tomcat server, that stand-alone physical file no longer exists. Instead, if you set up your build correctly, when you open up your jar (facture.server-1.0.0-SNAPSHOT.jar), there should be a directory called reports in it with the file Header.jasper within that directory.
So when you try to get a resource via getClass().getClassLoader().getResource(...).getFile(), you are actually trying to access a stand-alone physical file. Instead, you need to get the resource as an InputStream and then work with it from there...
InputStream inputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream("reports/Header.jasper");
When working with resources, it's always better to access them this way, especially if you are planning to deploy anywhere as a single artifact, because your resources should be packaged inside that artifact.
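For example, a sketch of feeding that InputStream to JasperReports (assuming Header.jasper is a compiled report; the parameter map and the JDBC connection here are placeholders):
import java.io.InputStream;
import java.sql.Connection;
import java.util.HashMap;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;

InputStream reportStream = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("reports/Header.jasper");
if (reportStream == null) {
    throw new IllegalStateException("reports/Header.jasper not found on the classpath");
}
// fillReport throws the checked JRException, so handle or declare it in your method
JasperPrint print = JasperFillManager.fillReport(reportStream, new HashMap<String, Object>(), connection);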

How can I modify files at server by servlet?

I need to modify an HTML file that is placed in a server folder, from my servlet.
Is there no other way than reading it with a FileInputStream into a byte[], converting it to a String[] by splitting the lines on "\n", changing what I need and then rewriting it?
I don't see one.
This is not possible by design. Your server might be serving the application straight from a .WAR file. If the server is not configured to unpack it, it will have to read all files directly from that archive, and you can guess that you cannot write at that location.
You would need to create some kind of working directory and serve files from there as well. You can always use this directory as the working directory:
File workingDir = (File)servletContext.getAttribute(ServletContext.TEMPDIR);
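A minimal sketch of that approach, assuming the original page is a webapp resource named /page.html and the modified copy is written into the container's temp directory; the names are illustrative:
import java.io.File;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import javax.servlet.ServletContext;

File workingDir = (File) servletContext.getAttribute(ServletContext.TEMPDIR);
File modifiedCopy = new File(workingDir, "page.html");

// read the original page from the webapp; this works even when it is served straight from the .war
try (InputStream in = servletContext.getResourceAsStream("/page.html")) {
    Files.copy(in, modifiedCopy.toPath(), StandardCopyOption.REPLACE_EXISTING);
}
String html = new String(Files.readAllBytes(modifiedCopy.toPath()), StandardCharsets.UTF_8);
html = html.replace("OLD TEXT", "NEW TEXT");   // change whatever you need
Files.write(modifiedCopy.toPath(), html.getBytes(StandardCharsets.UTF_8));
// then serve modifiedCopy from your servlet instead of the original file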

java desktop and web application using the same filepath

I wrote a desktop java application with a class (say ClassA) that reads the content of a file, processes it and returns some results. The filename was specified relative to the project using
File input = new File("config.xml");
Now, I want to upgrade the project into a web project. I wrote a servlet which calls the same Java class (i.e. ClassA) to read the content of the same file, but this time I get an error message saying the file was not found.
How do I refactor my code so that both the desktop and the web versions run smoothly?
Just copy the file config.xml into the proper location on the web server, e.g. public_html/www/.
The "working directory" of a web application is different; it depends on the configuration of the web server you are deploying to.
If you read a file without specifying a path, it is read from the current directory, which you can access with System.getProperty("user.dir");
So you can try to find out what value System.getProperty("user.dir") returns in your web app and place the file there.
But this may differ depending on the environment and servlet container (Tomcat etc.) and may not be a reliable solution.
Another way is to change your code so that it reads the file from the user.home directory, and place the file there (see the sketch below).
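A minimal sketch of that second suggestion, assuming you are free to place config.xml in the user's home directory; the error handling is illustrative:
import java.io.File;
import java.io.FileNotFoundException;

File input = new File(System.getProperty("user.home"), "config.xml");
if (!input.exists()) {
    throw new FileNotFoundException("Expected config.xml in " + System.getProperty("user.home"));
}
// ClassA can now read 'input' regardless of the current working directory,
// so the same code works for both the desktop and the web version.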

How to refer to a file system in CloudBees?

I'm new to CloudBees. I just opened an account and am trying to upload a .JAR file which basically downloads a file to the location specified by the user (through Java command-line arguments). So far I have only run the .JAR locally, referring to my local file system to save the file. If I deploy my .JAR file to the CloudBees SDK, where can I save the downloaded file (and then process it)?
NOTE: I know this is not a new requirement in Java if we deploy the jar on a UNIX/Windows OS, where we can refer to the file system relative to the home directory.
Edit#1:
I've seen couple of discussions about the same topic.
link#1
link#2
Everywhere they are talking about the ephemeral (temporary) file system which we can access through System.getProperty("java.io.tempDir"). But I'm getting null when I access java.io.tempDir. Did anyone manage to save files temporarily using this tempDir?
You can upload a jar with the java stack specifying the class and classpath (http://developer.cloudbees.com/bin/view/RUN/Java+Container)
Our filesystem, however, is not persistent, so if you are talking about saving a file from within your application, you could save it under the path returned by
System.getProperty("java.io.tmpdir")
(note the exact property name is java.io.tmpdir, all lowercase, not java.io.tempDir, which is why you were getting null), but it will be gone when your application hibernates, scales up/down or is redeployed.
If you want a persistent way to store files/images, you can use Amazon S3 from your CloudBees application: uploading your files there will ensure their persistence.
You can find an example of how to do that in this clickstart:
http://developer-blog.cloudbees.com/search?q=amazon+s3
More information here: https://wiki.cloudbees.com/bin/view/RUN/File+system+access
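For illustration, a rough sketch of the S3 side (assuming the AWS SDK for Java is on the classpath and credentials come from the environment; the bucket name and file names are placeholders):
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// save the downloaded file to the ephemeral tmp dir first
File tmp = new File(System.getProperty("java.io.tmpdir"), "downloaded.dat");
// ... download into 'tmp' here ...

// then push it to S3 so it survives hibernation and redeploys
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();  // picks up credentials from the environment
s3.putObject("my-bucket", "downloaded.dat", tmp);     // "my-bucket" is a placeholder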
