Is it possible to save a trained network to a file and then load it again later? Can you give a simple example? Currently I have to run the training every time:
EncogUtility.trainConsole(network, trainingSet, TRAINING_MINUTES);
You can use something like this to save the trained network to a file (this is C#; Java uses a different class in place of FileInfo):
FileInfo networkFile = new FileInfo(@"C:\Data\network.eg");
Encog.Persist.EncogDirectoryPersistence.SaveObject(networkFile, (BasicNetwork)network);
You can then use something like this to reload the network file:
network = (BasicNetwork)(Encog.Persist.EncogDirectoryPersistence.LoadObject(networkFile));
Use this example to save/load your network:
import static org.encog.persist.EncogDirectoryPersistence.*;
import java.io.File;
String filename = "C:/tmp/network.eg";
// save network...
saveObject(new File(filename), network);
// load network...
BasicNetwork loadFromFileNetwork = (BasicNetwork) loadObject(new File(filename));
Source: https://github.com/encog
I have an image file
image = javaSparkContext.binaryFiles("/path/to/image.jpg");
I would like to process it and then save the binary info using Spark to HDFS. Something like:
image.saveAsBinaryFile("hdfs://cluster:port/path/to/image.jpg")
Is this possible? I'm not saying it should be simple, just possible. If so, how would you do it? I'm trying to keep it one-to-one if possible, i.e. keeping the extension and type, so that if I download the file directly via the hdfs command line it would still be a viable image file.
Yes, it is possible. But you need a data serialization plugin, for example Avro (https://github.com/databricks/spark-avro).
Assume the image is represented as binary (byte[]) in your program, so the images can be a Dataset<byte[]>.
You can save it using
datasetOfImages.write()
.format("com.databricks.spark.avro")
.save("hdfs://cluster:port/path/to/images.avro");
images.avro would be a folder containing multiple partitions, and each partition would be an Avro file storing some of the images.
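To read the images back later, something like this should work (a hedged sketch; it assumes the spark-avro package is on the classpath and a SparkSession named spark):
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Hedged sketch: load the Avro folder written above back into a DataFrame.
Dataset<Row> loadedImages = spark.read()
        .format("com.databricks.spark.avro")
        .load("hdfs://cluster:port/path/to/images.avro");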
Edit:
It is also possible, but not recommended, to save the images as separate files. You can call foreach on the dataset and use the HDFS API to save each image.
See below for a piece of code written in Scala. You should be able to translate it into Java; a rough translation follows the Scala snippet.
import org.apache.hadoop.fs.{FileSystem, Path}

datasetOfImages.foreachPartition { images =>
  val fs = FileSystem.get(sparkContext.hadoopConfiguration)
  images.foreach { image =>
    val out = fs.create(new Path("/path/to/this/image"))
    out.write(image)
    out.close()
  }
}
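A rough Java translation might look like this (a hedged sketch; it assumes Spark 2.x with a Dataset<byte[]> named datasetOfImages, and, as in the Scala version, each image would need its own unique path rather than the hard-coded one shown here):
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.api.java.function.ForeachPartitionFunction;

// Hedged sketch: open one HDFS connection per partition, then write each image.
datasetOfImages.foreachPartition((ForeachPartitionFunction<byte[]>) images -> {
    FileSystem fs = FileSystem.get(new Configuration());
    while (images.hasNext()) {
        byte[] image = images.next();
        try (OutputStream out = fs.create(new Path("/path/to/this/image"))) {
            out.write(image);
        }
    }
});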
I am trying to save thousands of models produced by ML Pipeline. As indicated in the answer here, the models can be saved as follows:
import java.io._

def saveModel(name: String, model: PipelineModel) = {
  val oos = new ObjectOutputStream(new FileOutputStream(s"/some/path/$name"))
  oos.writeObject(model)
  oos.close()
}
schools.zip(bySchoolArrayModels).foreach {
  case (name, model) => saveModel(name, model)
}
I have tried using s3://some/path/$name and /user/hadoop/some/path/$name, as I would like the models to be saved to Amazon S3 eventually, but both fail with messages indicating that the path cannot be found.
How to save models to Amazon S3?
One way to save a model to HDFS is the following:
// persist model to HDFS
sc.parallelize(Seq(model), 1).saveAsObjectFile("hdfs:///user/root/linReg.model")
The saved model can then be loaded as:
val linRegModel = sc.objectFile[LinearRegressionModel]("linReg.model").first()
For more details see (ref)
Since Apache Spark 1.6, in the Scala API, you can save your models without using any tricks, because all models from the ML library come with a save method; you can check this in LogisticRegressionModel, and indeed it has that method. To load the model back, you can use the static load method:
val logRegModel = LogisticRegressionModel.load("myModel.model")
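For S3 specifically, the same save/load pair should accept an s3a:// URI once the Hadoop S3 connector and credentials are configured; a hedged Java sketch (the bucket name and path are placeholders):
import org.apache.spark.ml.classification.LogisticRegressionModel;

// Hedged sketch: save() and load() take any Hadoop-supported URI, so an
// s3a:// path should work once S3 credentials are configured.
model.save("s3a://my-bucket/models/myModel.model");
LogisticRegressionModel loaded =
        LogisticRegressionModel.load("s3a://my-bucket/models/myModel.model");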
FileOutputStream saves to the local filesystem (not through the Hadoop libraries), so saving to a local directory is the way to go about doing this. That being said, the directory needs to exist, so make sure it exists first.
That being said, depending on your model you may wish to look at https://spark.apache.org/docs/latest/mllib-pmml-model-export.html (PMML export).
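For example, the older mllib models that implement PMMLExportable can export themselves directly (a hedged sketch; it assumes an mllib model, such as a LinearRegressionModel, named model):
// Hedged sketch: mllib models implementing PMMLExportable expose toPMML.
model.toPMML("/some/path/model.pmml");   // writes the PMML XML to a local path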
I'm working on a microcontroller, and I'm trying to write some data from some sensors into a .txt file on the SD card, so that later on I can place the SD card in a card reader and read the data on a PC.
Does anyone know how to write a .txt file from scratch for a FAT32 file system? I don't have any predefined code/methods/functions to call; I'll need to create the code from nothing.
This is not a question about a specific programming language, which is why I tagged more than one. I can later convert the code from C or Java to my programming language of choice. But I can't seem to find such low-level methods/functions in any language :)
Any ideas?
FatFs is quite good, and highly portable. It has support for FAT12, FAT16 and FAT32, long filenames, seeking, reading and writing (most of these things can be switched on and off to change the memory footprint).
If you're really tight on memory there's also Petit FatFs, but it doesn't have write support by default and adding it would take some work.
After mounting the drive you'd simply open a file to create it. For example:
FATFS fatFs;
FIL newFile;
UINT written;

// The drive number may differ
if (f_mount(0, &fatFs) != FR_OK) {
    // Something went wrong
}
if (f_open(&newFile, "/test.txt", FA_WRITE | FA_OPEN_ALWAYS) != FR_OK) {
    // Something went wrong
}
// Write some data, then close the file so it is flushed to the card
if (f_write(&newFile, "hello\r\n", 7, &written) != FR_OK) {
    // Something went wrong
}
f_close(&newFile);
If you really need to create the file using only your own code, you'll have to traverse the FAT, looking for empty space, and then create new LFN entries (where you store the filename) and DIRENTs (which specify the clusters on the disk that will hold the file data). I can't see any reason for doing this unless it is some kind of homework / lab exercise. In any case, you should do some reading about the FAT structure first and come back with more specific questions once you've got started.
In Java you can do it like this:
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.Writer;

String text = "This is a test message";
File file = new File("write.txt");
Writer output = new BufferedWriter(new FileWriter(file));
output.write(text);
output.close();
System.out.println("Your file has been written");
I'm trying to generate a PDF document from an uploaded ".docx" file using JODConverter.
The call to the method that generates the PDF looks something like this:
File inputFile = new File("document.doc");
File outputFile = new File("document.pdf");
// connect to an OpenOffice.org instance running on port 8100
OpenOfficeConnection connection = new SocketOpenOfficeConnection(8100);
connection.connect();
// convert
DocumentConverter converter = new OpenOfficeDocumentConverter(connection);
converter.convert(inputFile, outputFile);
// close the connection
connection.disconnect();
I'm using Apache Commons FileUpload to handle uploading the docx file, from which I can get an InputStream object. I'm aware that java.io.File is just an abstract reference to a file in the system.
I want to avoid the disk write (saving the InputStream to disk) and the disk read (reading the saved file in JODConverter).
Is there any way I can get a File object referring to an input stream? Any other way to avoid disk I/O would also do!
EDIT: I don't care if this ends up using a lot of system memory. The application is going to be hosted on a LAN with very few (if any) parallel users.
File-based conversions are faster than stream-based ones (provided by StreamOpenOfficeDocumentConverter), but they require the OpenOffice.org service to be running locally and to have the correct permissions on the files.
Try the stream-based convert method to avoid disk writes:
convert(java.io.InputStream inputStream, DocumentFormat inputFormat, java.io.OutputStream outputStream, DocumentFormat outputFormat)
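For example, the whole round trip can stay in memory with the stream-based converter (a hedged sketch; it assumes JODConverter 2.x, where StreamOpenOfficeDocumentConverter exposes the convert overload quoted above, and uploadedInputStream is the stream obtained from commons-fileupload):
import java.io.ByteArrayOutputStream;
import com.artofsolving.jodconverter.DefaultDocumentFormatRegistry;
import com.artofsolving.jodconverter.DocumentFormat;
import com.artofsolving.jodconverter.DocumentFormatRegistry;
import com.artofsolving.jodconverter.openoffice.connection.OpenOfficeConnection;
import com.artofsolving.jodconverter.openoffice.connection.SocketOpenOfficeConnection;
import com.artofsolving.jodconverter.openoffice.converter.StreamOpenOfficeDocumentConverter;

// Hedged sketch: convert an uploaded document to PDF without touching disk.
OpenOfficeConnection connection = new SocketOpenOfficeConnection(8100);
connection.connect();
DocumentFormatRegistry registry = new DefaultDocumentFormatRegistry();
DocumentFormat docFormat = registry.getFormatByFileExtension("doc");
DocumentFormat pdfFormat = registry.getFormatByFileExtension("pdf");
StreamOpenOfficeDocumentConverter converter =
        new StreamOpenOfficeDocumentConverter(connection);
ByteArrayOutputStream pdfBytes = new ByteArrayOutputStream();
converter.convert(uploadedInputStream, docFormat, pdfBytes, pdfFormat);
connection.disconnect();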
There is no way to do it and keep the code solid. For one, the .convert() method only takes two Files as arguments.
So this would mean you'd have to extend File, which is possible in theory but very fragile, as you would need to delve into the library code, which can change at any time and render your extended class non-functional.
(Well, there is a way to avoid disk writes if you use a RAM-backed filesystem and read/write from that filesystem, of course.)
Chances are that Commons FileUpload has written the upload to the filesystem anyhow.
Check if your FileItem is an instance of DiskFileItem. If it is, the write implementation of DiskFileItem will try to move the file to the File object you pass, so you are not causing any extra disk I/O: the write has already happened.
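For instance (a hedged sketch; it assumes commons-fileupload's DiskFileItem and the converter from the question; the paths are placeholders):
import java.io.File;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItem;

// Hedged sketch: if the upload already landed on disk, write() is just a move.
if (fileItem instanceof DiskFileItem) {
    File inputFile = new File("/some/work/dir/document.doc");
    fileItem.write(inputFile);   // moves the temp file rather than copying it
    converter.convert(inputFile, new File("/some/work/dir/document.pdf"));
}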
I am a beginner in Java and I am building an Android app.
I want to have an XML file that has text in it.
Whenever the server sends updates, I want to change some lines in that file (by "update" I mean erasing some part of the text already written and replacing it with the update).
I know nothing about creating, writing, or reading files.
When I searched, I found out that internal storage suits me best.
But I don't know whether I have to create the XML file manually in some directory, or whether the code below will create the file automatically:
// If this is the first run, execute one-time code:
// create the XML file in internal storage
String FILENAME = "My_XML_file";
try {
    FileOutputStream fos = openFileOutput(FILENAME, Context.MODE_APPEND);
} catch (final IOException e) {
    e.printStackTrace();
}
Thank you in advance!
- First give the External Storage permission in the AndroidManifest.xml file.
- You can use JAXP & JAXB, and even Castor, to handle XML in a better way, but DOM and SAX are built into Android.
You can use something like this:
String s = "/sdcard/Myfolder/mytext.txt";
File f = new File(s);
The code you have will create a file in internal storage, but you need a bit more to create and maintain an XML file easily.
I suggest you use the built-in Android DOM parser (see the Android developers site docs on XML parsing options).
I found this example, which explains how to use the DOM parser to build a specific (new) XML file from code. In your context, where the output stream is created:
StreamResult result = new StreamResult(new File("C:\\file.xml"));
you might want to use the other constructor, based on the output stream you created above:
StreamResult result = new StreamResult(fos);
In a similar fashion, the DOM library allows you to read from an input stream (which you might get from Android's openFileInput) using DocumentBuilder.parse().
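Putting the pieces together, a minimal sketch (assuming the code runs inside an Activity; the file and element names here are made up):
import java.io.FileOutputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import android.content.Context;

// Build a small DOM document in memory.
DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
Document doc = builder.newDocument();
Element root = doc.createElement("updates");
root.setTextContent("initial text");
doc.appendChild(root);

// Write it to internal storage.
FileOutputStream fos = openFileOutput("My_XML_file", Context.MODE_PRIVATE);
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.transform(new DOMSource(doc), new StreamResult(fos));
fos.close();

// Later: read the document back, change the text, and rewrite the file.
Document loaded = builder.parse(openFileInput("My_XML_file"));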