I have a Java application that uses lots of NIO methods like Files.copy, Files.move, Files.delete, and FileChannel...
What I'm now trying to achieve: I want to access a remote WebDAV server and modify data on that server with basic functions like upload, delete, or update - without changing every method in my application. So here comes my idea:
I think a WebDAV FileSystem implementation would do the trick: adding a custom WebDAV FileSystemProvider that manages the mentioned file operations on the remote data. I've googled a lot, and Apache VFS with the Sardine implementation looks good - BUT it seems that Apache VFS is not compatible with NIO?
Here's some example code, as I imagine it:
public class WebDAVManagerTest {

    private static DefaultFileSystemManager fsManager;
    private static WebdavFileObject testFile1;
    private static WebdavFileObject testFile2;
    private static FileSystem webDAVFileSystem1;
    private static FileSystem webDAVFileSystem2;

    @Before
    public static void initWebDAVFileSystem(String webDAVServerURL) throws FileSystemException, org.apache.commons.vfs2.FileSystemException {
        try {
            fsManager = new DefaultFileSystemManager();
            fsManager.addProvider("webdav", new WebdavFileProvider());
            fsManager.addProvider("file", new DefaultLocalFileProvider());
            fsManager.init();
        } catch (org.apache.commons.vfs2.FileSystemException e) {
            throw new FileSystemException("Exception initializing DefaultFileSystemManager: " + e.getMessage());
        }

        String exampleRemoteFile1 = "/foo/bar1.txt";
        String exampleRemoteFile2 = "/foo/bar2.txt";

        testFile1 = (WebdavFileObject) fsManager.resolveFile(webDAVServerURL + exampleRemoteFile1);
        webDAVFileSystem1 = (FileSystem) fsManager.createFileSystem(testFile1);
        Path localPath1 = webDAVFileSystem1.getPath(testFile1.toString());

        testFile2 = (WebdavFileObject) fsManager.resolveFile(webDAVServerURL + exampleRemoteFile2);
        webDAVFileSystem2 = (FileSystem) fsManager.createFileSystem(testFile2);
        Path localPath2 = webDAVFileSystem2.getPath(testFile2.toString());
    }
}
After that I want to work with localPath1 and localPath2 in my application, so that e.g. Files.copy(localPath1, newRemotePath) would copy a file on the WebDAV server to a new directory.
Is this the right course of action? Or are there other libraries to achieve this?
Apache VFS uses its own FileSystem interface, not the NIO one. You have three options, with varying levels of effort:
1. Change your code to use an existing WebDAV project that uses its own FileSystem, i.e. Apache VFS.
2. Find an existing project that implements the NIO FileSystem on top of WebDAV.
3. Implement the NIO FileSystem interface yourself.
Option 3 has already been done, so you may be able to customize what someone else has already written; have a look at nio-fs-provider or nio-fs-webdav. I'm sure there are others, but these two were easy to find using Google.
Implementing a WebDAV NIO FileSystem from scratch would be quite a lot of work, so I wouldn't recommend starting there; I'd take what someone else has done and make it work for me, i.e. Option 2.
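To give a feel for Option 2, here is a rough sketch of what the calling code could look like once an NIO WebDAV FileSystemProvider (e.g. from one of the projects above) is on the classpath. Note that the "webdav" URI scheme and the env keys are assumptions - they depend on the provider you pick:
// Sketch: open an NIO FileSystem backed by a WebDAV provider and use
// plain java.nio.file calls against it. Scheme and env keys are provider-specific.
Map<String, Object> env = new HashMap<>();
env.put("username", "user"); // hypothetical credential keys
env.put("password", "secret");

URI uri = URI.create("webdav://webdav.example.com/");
try (FileSystem webdavFs = FileSystems.newFileSystem(uri, env)) {
    Path remotePath = webdavFs.getPath("/foo/bar1.txt");
    Path newRemotePath = webdavFs.getPath("/foo/copy/bar1.txt");

    // Existing NIO code keeps working unchanged:
    Files.copy(remotePath, newRemotePath, StandardCopyOption.REPLACE_EXISTING);
}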
I just started with minio and Apache Beam. I have created a bucket on play.min.io and added a few files (let's suppose the stored files are one.txt and two.txt). I want to access the files stored in that bucket with the Apache Beam Java SDK. When I deal with local files I just pass the path of the file, like C://new//.., but I don't know how to get files from minio. Can anyone help me with the code?
I managed to make it work with some configuration on top of the standard AWS configuration:
AwsServiceEndpoint should point to your minio server (here localhost:9000).
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
...
options.as(AwsOptions.class).setAwsServiceEndpoint("http://localhost:9000");
PathStyleAccess has to be enabled (so that bucket access does not translate to a request to "http://bucket.localhost:9000" but to "http://localhost:9000/bucket").
This can be done by extending DefaultS3ClientBuilderFactory with this kind of MinioS3ClientBuilderFactory:
public class MinioS3ClientBuilderFactory extends DefaultS3ClientBuilderFactory {
    @Override
    public AmazonS3ClientBuilder createBuilder(S3Options s3Options) {
        // Force path-style access so the bucket name ends up in the URL path.
        AmazonS3ClientBuilder builder = super.createBuilder(s3Options);
        builder.withPathStyleAccessEnabled(true);
        return builder;
    }
}
and inject it into the options like this:
Class<? extends S3ClientBuilderFactory> builderFactory = MinioS3ClientBuilderFactory.class;
options.as(S3Options.class).setS3ClientFactoryClass(builderFactory);
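With those options in place, the files in the bucket can be read through Beam's ordinary s3:// paths. A minimal sketch, assuming the bucket is named "mybucket" and that region/credentials come from the default AWS provider chain (both assumptions, not taken from the question):
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
options.as(AwsOptions.class).setAwsServiceEndpoint("http://localhost:9000");
options.as(AwsOptions.class).setAwsRegion("us-east-1"); // any valid region string
options.as(S3Options.class).setS3ClientFactoryClass(MinioS3ClientBuilderFactory.class);

Pipeline p = Pipeline.create(options);
// one.txt and two.txt are the files mentioned in the question.
PCollection<String> lines = p.apply(TextIO.read().from("s3://mybucket/*.txt"));
p.run().waitUntilFinish();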
I have been using the Apache Thrift protocol for tablet-server and inter-language integration, and everything has been fine for a few years.
The integration spans several languages (C#/C++/PC Java/Dalvik Java), and Thrift is probably one of the simplest and safest options. Now I want to pack and repack sophisticated data structures (which have changed over the years) with the Thrift library - in Thrift terms, something like an OfflineTransport or OfflineProtocol.
Scenario:
I want to build a backup solution that can, for example, process data in offline mode during an internet provider failure: serialize it, store it, and try to process it in a few ways - for example, sending the serialized data by normal email over a poor backup connection, etc.
The question is: where in the Thrift philosophy is the best extension point for me?
I understand that only part of the online protocol can be backed up offline, i.e. a real-time return value is not possible; that is OK.
Look at the serializers. There are miscellaneous implementations, but they all share the same basic concept: use a buffer, file, or stream as the transport medium.
Writing data in C#
E.g. we plan to store the bits in a byte[] buffer. So one could write:
var trans = new TMemoryBuffer();
var prot = new TCompactProtocol(trans);
var instance = GetMeSomeDataInstanceToSerialize();
instance.Write(prot);
Now we can get hold of the data:
var data = trans.GetBuffer();
Reading data in C#
Reading works similarly, except that you need to know from somewhere which root instance to construct:
var trans = new TMemoryBuffer(serializedBytes);
var prot = new TCompactProtocol(trans);
var instance = new MyCoolClass();
instance.Read(prot);
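The same round trip on the Java side of your integration can be done with the TSerializer/TDeserializer helpers that ship with the Thrift Java library (MyCoolClass stands in for any generated struct):
// Serialize a generated Thrift struct to a byte[] using the compact protocol.
TSerializer serializer = new TSerializer(new TCompactProtocol.Factory());
byte[] data = serializer.serialize(instance);

// Deserialize back into a fresh instance; the caller must know the root type.
TDeserializer deserializer = new TDeserializer(new TCompactProtocol.Factory());
MyCoolClass copy = new MyCoolClass();
deserializer.deserialize(copy, data);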
Additional Tweaks
One solution to the chicken-egg problem during load could be to use a union as an extra serialization container:
union GenericFileDataContainer {
1 : MyCoolClass coolclass;
2 : FooBar foobar
// more to come later
}
By always using this container as the root instance during serialization, it is easy to add more classes without breaking compatibility, and there is no need to know up front what exactly is in a file - you just read it and check which element is set in the union.
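In the Java binding, for example, reading and dispatching on the union could look like the sketch below (getSetField() and the field-enum names follow the Thrift code generator's conventions; the handle* methods are hypothetical):
// "data" is the serialized byte[] from before.
GenericFileDataContainer container = new GenericFileDataContainer();
new TDeserializer(new TCompactProtocol.Factory()).deserialize(container, data);

// The generated union records which member was set during serialization.
switch (container.getSetField()) {
    case COOLCLASS:
        handleCoolClass(container.getCoolclass());
        break;
    case FOOBAR:
        handleFooBar(container.getFoobar());
        break;
}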
There is an RPC framework named "Thrifty" that uses the standard Thrift protocol. It has the same effect as using the Thrift IDL to define the service; that is, Thrifty is compatible with code that uses the Thrift IDL, which is very helpful for cross-platform work. It also has a ThriftSerializer class:
[ThriftStruct]
public class LogEntry
{
    [ThriftConstructor]
    public LogEntry([ThriftField(1)] String category, [ThriftField(2)] String message)
    {
        this.Category = category;
        this.Message = message;
    }

    [ThriftField(1)]
    public String Category { get; }

    [ThriftField(2)]
    public String Message { get; }
}
ThriftSerializer serializer = new ThriftSerializer(ThriftSerializer.SerializeProtocol.Binary);
byte[] bytes = serializer.Serialize(new LogEntry("category", "message"));
LogEntry entry = serializer.Deserialize<LogEntry>(bytes);
You can try it: https://github.com/endink/Thrifty
I am using the vCloud Java API provided by VMware to automate the creation of VMs in their enterprise cloud solution. I have been able to do this just fine. However, I am not able to figure out how to set custom properties on the VM. I have checked the VMware API reference and I cannot find anything that intuitively suggests how to do this. Any insight would be helpful.
Here is the code I have written so far to configure the VM; I want to add the custom property configuration to it.
private static SourcedCompositionItemParamType addVAppTemplateItem(String vAppNetwork, MsgType networkInfo, String vmHref, String ipAddress, String vmName) {
    SourcedCompositionItemParamType vappTemplateItem = new SourcedCompositionItemParamType();

    // Reference the source VM from the vApp template.
    ReferenceType vappTemplateVMRef = new ReferenceType();
    vappTemplateVMRef.setHref(vmHref);
    vappTemplateVMRef.setName(vmName);
    vappTemplateItem.setSource(vappTemplateVMRef);

    // Configure the network connection with a manually allocated IP address.
    NetworkConnectionSectionType networkConnectionSectionType = new NetworkConnectionSectionType();
    networkConnectionSectionType.setInfo(networkInfo);

    NetworkConnectionType networkConnectionType = new NetworkConnectionType();
    networkConnectionType.setNetwork(vAppNetwork);
    networkConnectionType.setIpAddressAllocationMode(IpAddressAllocationModeType.MANUAL.value());
    networkConnectionType.setIpAddress(ipAddress);
    networkConnectionType.setIsConnected(true);
    networkConnectionSectionType.getNetworkConnection().add(networkConnectionType);

    // Attach the network section to the item's instantiation parameters.
    InstantiationParamsType vmInstantiationParamsType = new InstantiationParamsType();
    List<JAXBElement<? extends SectionType>> vmSections = vmInstantiationParamsType.getSection();
    vmSections.add(new ObjectFactory().createNetworkConnectionSection(networkConnectionSectionType));
    vappTemplateItem.setInstantiationParams(vmInstantiationParamsType);

    return vappTemplateItem;
}
After going through the REST API documentation I realized that you put custom properties into the ProductSection. Unfortunately, I could not figure out a way to add a ProductSection when creating a vApp, so I added the ProductSection after creating the vApp by retrieving the VM and calling updateProductSections on it.
Response from the VMware community forum.
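For reference, the shape of that fix looked roughly like the sketch below. getVMByReference and updateProductSections are the SDK calls referred to above; treat the JAXB classes and setters used to build the ProductSection as approximations, since they vary between SDK/schema versions:
// Sketch: retrieve the VM after composing the vApp, then attach a
// ProductSection carrying the custom property. Property-building details
// are approximate; the key/value strings are just examples.
VM vm = VM.getVMByReference(vcloudClient, vmReference);

ProductSectionProperty property = new ProductSectionProperty();
property.setKey("my.custom.property");
CimString value = new CimString();
value.setValue("my-value");
property.setValueAttrib(value);

ProductSectionType productSection = new ProductSectionType();
productSection.getCategoryOrProperty().add(new ObjectFactory().createProductSectionTypeProperty(property));

List<ProductSectionType> productSections = new ArrayList<ProductSectionType>();
productSections.add(productSection);
vm.updateProductSections(productSections); // returns a Task to wait on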
My objective is to:
Use Firefox to take a series of screendump images and save them on the local filesystem with a reference.
Also, via my custom extension, send a reference to a Java program that performs the FTP transfer to a remote server.
This is pretty intimidating:
https://developer.mozilla.org/en/JavaScript/Guide/LiveConnect_Overview
Is it possible?
Can you see any potential problems or things I'd need to consider?
(I'm aware of filesystem problems, but it's for local use only.)
Are there any tutorials / references that might be handy?
I've tried linking to Java but hit problems using my own classes - I'm getting a class not found exception when I try:
JS:
var myObj = new Packages.message();
Java file:
public class Message {
    private String message;

    public Message() {
        this.message = "Hello";
    }

    public String getMessage() {
        return this.message;
    }
}
I'm not using a package on the Java side.
I'm just trying to run a quick test to see if this is viable. I'm under time pressure from those above, so I just wanted to see whether it was a worthwhile time investment or a dead end.
You might consider this Java tutorial instead: http://www.oracle.com/technetwork/java/javase/documentation/liveconnect-docs-349790.html.
What Java version are you using? Is your Message class an object inside a Java applet?
What is your experience of working with OpenOffice in server mode? I know OpenOffice is not multithreaded, and I now need to use its services on our server.
What can I do to overcome this problem?
I'm using Java.
With the current version of JODConverter (3.0-SNAPSHOT), it's quite easy to handle multiple threads of OOo in headless mode, as the library now supports starting up several instances and keeping them in a pool, just by providing several port numbers or named pipes when constructing an OfficeManager instance:
final OfficeManager om = new DefaultOfficeManagerConfiguration()
        .setOfficeHome("/usr/lib/openoffice")
        .setPortNumbers(8100, 8101, 8102, 8103)
        .buildOfficeManager();
om.start();
You can then use the library, e.g. for converting documents, without having to deal with the pool of OOo instances in the background:
OfficeDocumentConverter converter = new OfficeDocumentConverter(om);
converter.convert(new File("src/test/resources/test.odt"), new File("target/test.pdf"));
Yes, I am using OpenOffice as a document conversion server.
Unfortunately, the solution to your problem is to spawn a pool of OpenOffice processes.
The commons-pool branch of JODConverter (before it moved to code.google.com) implemented this out-of-the-box for you.
Thanks Bastian. I found another way, based on Bastian's answer. Opening several ports makes multithreading possible, but even without many ports (a few are enough) we can improve performance by increasing the task queue timeout (see the documentation). One more thing: we decided not to start and stop the OfficeManager on each conversion. In the end, I solved this task with the following approach:
public class JODConverter {

    private static volatile OfficeManager officeManager;
    private static volatile OfficeDocumentConverter converter;

    public static void startOfficeManager() {
        try {
            // 1) Start a pool of office processes, one per port.
            officeManager = new DefaultOfficeManagerConfiguration()
                    .setOfficeHome(new File("libre office home path"))
                    .setPortNumbers(8100, 8101, 8102, 8103, 8104)
                    .setTaskExecutionTimeout(600000L) // for big files
                    .setTaskQueueTimeout(200000L)     // wait if all ports are busy
                    .buildOfficeManager();
            officeManager.start();

            // 2) Create the JODConverter converter.
            converter = new OfficeDocumentConverter(officeManager);
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }

    public static void convertPDF(File inputFile, File outputFile) throws Throwable {
        converter.convert(inputFile, outputFile);
    }

    public static void stopOfficeManager() {
        officeManager.stop();
    }
}
I call JODConverter's convertPDF whenever a conversion is needed. The OfficeManager is stopped only when the application shuts down.
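For completeness, a typical call site looks like this (the file names are just examples):
try {
    JODConverter.startOfficeManager();
    JODConverter.convertPDF(new File("input.odt"), new File("output.pdf"));
} catch (Throwable t) {
    t.printStackTrace();
} finally {
    JODConverter.stopOfficeManager(); // only on application shutdown
}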
OpenOffice can be used in headless mode, but it was not built to handle a lot of requests in a stressful production environment.
Using OpenOffice in headless mode has several issues:
The process might die/become unavailable.
There are several memory leak issues.
Opening several OpenOffice "workers" does not scale as expected and needs some tweaking to really get separate processes (having several OpenOffice copies, several services, running under different users).
As suggested, jodconverter can be used to access the OpenOffice process.
http://code.google.com/p/jodconverter/wiki/GettingStarted
You can try this:
http://www.jopendocument.org/
It's an open-source Java-based library that allows you to work with OpenOffice documents without OpenOffice, thus removing the need for the OOo server.
Vlad is correct about having to run multiple instances of OpenOffice on different ports.
I'd just like to add that OpenOffice doesn't seem to be stable. We run 10 instances of it in a production environment and set the code up to retry with another instance if the first attempt fails (roughly as sketched below). This way, when one of the OpenOffice servers crashes (or doesn't crash but doesn't respond either), production is not affected. Since it's a pain to keep restarting the servers on a daily basis, we're slowly converting all our documents to JasperReports (see iReport for details). I'm not sure how you're using the OpenOffice server; we use it for mail merging (filling out forms for customers). If you need to convert things to PDF, I'd recommend iText.
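As a rough illustration of the retry approach (convertOnInstance is a hypothetical stand-in for whatever client call you make against a single OpenOffice instance):
// Try each configured OpenOffice instance in turn until one succeeds.
int[] ports = {8100, 8101, 8102};
boolean converted = false;
for (int port : ports) {
    try {
        convertOnInstance(port, inputFile, outputFile); // hypothetical helper
        converted = true;
        break;
    } catch (Exception e) {
        // This instance crashed or is hanging; try the next one.
    }
}
if (!converted) {
    throw new IllegalStateException("All OpenOffice instances failed");
}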