I've got a wrapper for BufferedReader that reads in files one after the other to create an uninterrupted stream across multiple files:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.ArrayList;
import java.util.zip.GZIPInputStream;
/**
 * Reads in a whole bunch of files such that when one ends it moves to the
 * next file.
 *
 * @author isaak
 *
 */
class LogFileStream implements FileStreamInterface {

    private ArrayList<String> fileNames;
    private BufferedReader br;
    private boolean done = false;

    /**
     *
     * @param files an array list of files to read from, order matters.
     * @throws IOException
     */
    public LogFileStream(ArrayList<String> files) throws IOException {
        fileNames = new ArrayList<String>();
        for (int i = 0; i < files.size(); i++) {
            fileNames.add(files.get(i));
        }
        setFile();
    }

    /**
     * Advances the file that this class is reading from.
     *
     * @throws IOException
     */
    private void setFile() throws IOException {
        if (fileNames.size() == 0) {
            this.done = true;
            return;
        }
        if (br != null) {
            br.close();
        }
        // If the file is a .gz file do a little extra work.
        // Otherwise read it in with a standard FileReader.
        // In either case, set the buffer size to 128kb.
        if (fileNames.get(0).endsWith(".gz")) {
            InputStream fileStream = new FileInputStream(fileNames.get(0));
            InputStream gzipStream = new GZIPInputStream(fileStream);
            // TODO this probably needs to be modified to work well on any
            // platform, UTF-8 is standard for debian/novastar though.
            Reader decoder = new InputStreamReader(gzipStream, "UTF-8");
            // note that the buffer size is set to 128kb instead of the standard 8kb.
            br = new BufferedReader(decoder, 131072);
            fileNames.remove(0);
        } else {
            FileReader fileReader = new FileReader(fileNames.get(0));
            br = new BufferedReader(fileReader, 131072);
            fileNames.remove(0);
        }
    }

    /**
     * Returns true if there are more lines available to read.
     *
     * @return true if there are more lines available to read.
     */
    public boolean hasMore() {
        return !done;
    }

    /**
     * Gets the next line from the correct file.
     *
     * @return the next line from the files, if there isn't one it returns null
     * @throws IOException
     */
    public String nextLine() throws IOException {
        if (done == true) {
            return null;
        }
        String line = br.readLine();
        if (line == null) {
            setFile();
            return nextLine();
        }
        return line;
    }
}
If I construct this object on a large list of files (300MB worth of files) and then print nextLine() over and over again in a while loop, performance continually degrades until there is no more RAM to use. This happens even if I'm reading files that are ~500kb each and using a virtual machine that has 32MB of memory.
I want this code to be able to run on positively massive data sets (hundreds of gigabytes worth of files), and it is a component of a program that needs to run with 32MB or less of memory.
The files used are mostly labeled CSV files, hence the use of gzip to compress them on disk. This reader needs to handle both gzip and uncompressed files.
Correct me if I'm wrong, but once a file has been read through and had its lines spat out, the data from that file, the objects related to that file, and everything else should be eligible for garbage collection?
With Java 8, GZIP support has moved from Java code to native zlib usage.
Non-closed GZIP streams leak native memory (I really do mean "native", not "heap" memory) and it is far from easy to diagnose. Depending on how the application uses such streams, the operating system may reach its memory limit quite fast.
The symptom is that the operating system's process memory usage is not consistent with the JVM memory usage reported by Native Memory Tracking: https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html
You will find the full story at http://www.evanjones.ca/java-native-leak-bug.html
The last call to setFile() won't close your BufferedReader, so you are leaking resources.
Indeed, in nextLine() you read the first file until the end. When the end is reached, you call setFile() and check whether there are more files to process. However, if there are no more files, you return immediately without closing the last BufferedReader used.
Furthermore, if you don't process all of the files, you will still have a resource in use.
There is at least one leak in your code: the setFile() method does not close the last BufferedReader, because the if (fileNames.size() == 0) check comes before the if (br != null) check.
However, this could lead to the described effect only if LogFileStream is instantiated multiple times.
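For illustration, here is a minimal sketch of a reordered setFile() that always closes the previous reader before the early return (it reuses the fields and the 128kb buffer size from the question; this is just one way to apply the fix):
private void setFile() throws IOException {
    // Always close the previous reader first, even when no files are left.
    if (br != null) {
        br.close();
        br = null;
    }
    if (fileNames.isEmpty()) {
        this.done = true;
        return;
    }
    String next = fileNames.remove(0);
    if (next.endsWith(".gz")) {
        InputStream fileStream = new FileInputStream(next);
        InputStream gzipStream = new GZIPInputStream(fileStream);
        Reader decoder = new InputStreamReader(gzipStream, "UTF-8");
        br = new BufferedReader(decoder, 131072);
    } else {
        br = new BufferedReader(new FileReader(next), 131072);
    }
}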
It would also be better to use a LinkedList instead of an ArrayList, as fileNames.remove(0) is more 'expensive' on an ArrayList than on a LinkedList. You could instantiate it using the following single line in the constructor: fileNames = new LinkedList<>(files);
Every once in a while, you could flush() or close() the BufferedReader. This will clear the reader's contents, so maybe flush the reader every time you use the setFile() method. Then, just before every call like br = new BufferedReader(decoder, 131072), close() the BufferedReader.
The GC starts to work after you close your connection/reader. If you are using Java 7 or above, you may want to consider using the try-with-resources statement, which is a better way to deal with IO operations: https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
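For illustration only (not the asker's class, just the general pattern, and the file name here is made up), a try-with-resources block guarantees the reader and its underlying streams get closed even if an exception is thrown:
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(
                new GZIPInputStream(new FileInputStream("example.log.gz")), "UTF-8"), 131072)) {
    String line;
    while ((line = reader.readLine()) != null) {
        // process the line
    }
} // reader, GZIPInputStream and FileInputStream are all closed here automatically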
Related
Reading 20 uncompressed Parquet files with a total size of 3.2GB takes more than 12GB of RAM when reading them "concurrently".
"Concurrently" means that I need to read the second file before closing the first one, not multithreading.
The data is a time series, so my program needs to read all the files up to some point in time and then proceed.
I expect Arrow to use an amount of memory that corresponds to a single batch multiplied by the number of files, but in reality the memory used is much more than the size of the entire files.
The files were created with the pandas default config (using pyarrow), and reading them in Java gives the correct values.
When reading each file to the end and then closing it, the amount of RAM used is fine.
I have tried switching between the netty and unsafe memory jars, but they give the same results.
-Darrow.memory.debug.allocator=true did not produce any error.
Trying to limit the amount of direct memory (the excess memory is outside of the JVM), I have tried to replace NativeMemoryPool.getDefault() with
NativeMemoryPool.createListenable(DirectReservationListener.instance()) or NativeMemoryPool.createListenable(.. some custom listener ..)
but the result is this exception:
Exception in thread "main" java.lang.RuntimeException: JNIEnv was not attached to current thread
at org.apache.arrow.dataset.jni.JniWrapper.nextRecordBatch(Native Method)
at org.apache.arrow.dataset.jni.NativeScanner$NativeReader.loadNextBatch(NativeScanner.java:134)
at ParquetExample.main(ParquetExample.java:47)
Using -XX:MaxDirectMemorySize=1g and -Xmx4g had no effect anyway.
The runtime is using the environment variable:
_JAVA_OPTIONS="--add-opens=java.base/java.nio=ALL-UNNAMED"
on JDK 17.0.2 with Arrow 9.0.0.
The code is extracted into this simple example, taken from the official documentation:
import org.apache.arrow.dataset.file.FileFormat;
import org.apache.arrow.dataset.file.FileSystemDatasetFactory;
import org.apache.arrow.dataset.jni.NativeMemoryPool;
import org.apache.arrow.dataset.scanner.ScanOptions;
import org.apache.arrow.dataset.scanner.Scanner;
import org.apache.arrow.dataset.source.Dataset;
import org.apache.arrow.dataset.source.DatasetFactory;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.ipc.ArrowReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
public class ParquetExample {

    static BufferAllocator allocator = new RootAllocator(128 * 1024 * 1024); // limit does not affect problem

    public static ArrowReader read_parquet_file(Path filePath, NativeMemoryPool nativeMemoryPool) {
        String uri = "file:" + filePath;
        ScanOptions options = new ScanOptions(/*batchSize*/ 64 * 1024 * 1024);
        try (
            DatasetFactory datasetFactory = new FileSystemDatasetFactory(
                    allocator, nativeMemoryPool, FileFormat.PARQUET, uri);
            Dataset dataset = datasetFactory.finish()
        ) {
            Scanner scanner = dataset.newScan(options);
            return scanner.scan().iterator().next().execute();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        List<VectorSchemaRoot> schemaRoots = new ArrayList<>();
        for (Path filePath : [...] ) { // 20 files, total uncompressed size 3.2GB
            ArrowReader arrowReader = read_parquet_file(filePath,
                    NativeMemoryPool.getDefault());
            if (arrowReader.loadNextBatch()) { // single batch read
                schemaRoots.add(arrowReader.getVectorSchemaRoot());
            }
        }
    }
}
The question is: why is Arrow using so much memory in such a straightforward example, and why does replacing the NativeMemoryPool result in a crash?
Thanks
I'm attempting to copy / duplicate a DocumentFile in an Android application, but upon inspecting the created duplicate, it does not appear to be exactly the same as the original (which is causing a problem, because I need to do an MD5 check on both files the next time a copy is called, so as to avoid overwriting the same files).
The process is as follows:
The user selects a file from an ACTION_OPEN_DOCUMENT_TREE
The source file's type is obtained
A new DocumentFile in the target location is initialised
The contents of the first file are duplicated into the second file
The initial stages are done with the following code:
// Get the source file's type
String sourceFileType = MimeTypeMap.getSingleton().getExtensionFromMimeType(contextRef.getContentResolver().getType(file.getUri()));
// Create the new (empty) file
DocumentFile newFile = targetLocation.createFile(sourceFileType, file.getName());
// Copy the file
CopyBufferedFile(new BufferedInputStream(contextRef.getContentResolver().openInputStream(file.getUri())), new BufferedOutputStream(contextRef.getContentResolver().openOutputStream(newFile.getUri())));
The main copy process is done using the following snippet:
void CopyBufferedFile(BufferedInputStream bufferedInputStream, BufferedOutputStream bufferedOutputStream)
{
    // Duplicate the contents of the temporary local File to the DocumentFile
    try
    {
        byte[] buf = new byte[1024];
        bufferedInputStream.read(buf);
        do
        {
            bufferedOutputStream.write(buf);
        }
        while (bufferedInputStream.read(buf) != -1);
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        try
        {
            if (bufferedInputStream != null) bufferedInputStream.close();
            if (bufferedOutputStream != null) bufferedOutputStream.close();
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
The problem that I'm facing is that although the file copies successfully and is usable (it's a picture of a cat, and it's still a picture of a cat in the destination), it is slightly different.
The file size has changed from 2261840 to 2262016 (+176)
The MD5 hash has changed completely
Is there something wrong with my copying code that is causing the file to change slightly?
Thanks in advance.
Your copying code is incorrect. It is assuming (incorrectly) that each call to read will either return buffer.length bytes or return -1.
What you should do is capture the number of bytes read in a variable each time, and then write exactly that number of bytes. Your code for closing the streams is verbose and (in theory¹) buggy as well.
Here is a rewrite that addresses both of those issues, and some others as well.
void copyBufferedFile(BufferedInputStream bufferedInputStream,
                      BufferedOutputStream bufferedOutputStream)
        throws IOException
{
    try (BufferedInputStream in = bufferedInputStream;
         BufferedOutputStream out = bufferedOutputStream)
    {
        byte[] buf = new byte[1024];
        int nosRead;
        while ((nosRead = in.read(buf)) != -1)  // read this carefully ...
        {
            out.write(buf, 0, nosRead);
        }
    }
}
As you can see, I have gotten rid of the bogus "catch and squash exception" handlers, and fixed the resource leak using Java 7+ try-with-resources.
There are still a couple of issues:
It is better for the copy function to take file name strings (or File or Path objects) as parameters and be responsible for opening the streams.
Given that you are doing block reads and writes, there is little value in using buffered streams. (Indeed, it might conceivably be making the I/O slower.) It would be better to use plain streams and make the buffer the same size as the default buffer size used by the Buffered* classes, or larger.
If you are really concerned about performance, try using transferFrom as described here:
https://www.journaldev.com/861/java-copy-file
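For reference, here is a minimal sketch of the channel-based approach described at that link, using java.nio.channels.FileChannel. The method name and parameters are just illustrative, and it assumes you actually have File objects rather than content-resolver streams:
static void copyWithChannels(File source, File dest) throws IOException {
    try (FileChannel in = new FileInputStream(source).getChannel();
         FileChannel out = new FileOutputStream(dest).getChannel()) {
        // Transfer the entire source file into the destination channel
        out.transferFrom(in, 0, in.size());
    }
}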
¹ In theory, if the bufferedInputStream.close() throws an exception, the bufferedOutputStream.close() call will be skipped. In practice, it is unlikely that closing an input stream will throw an exception. But either way, the try-with-resources approach will deal with this correctly, and far more concisely.
I use Java to read a list of files. Some of them have a different encoding, ANSI instead of UTF-8. java.util.Scanner is unable to read these files and returns an empty output string.
I tried another approach:
FileInputStream fis = new FileInputStream(my_file);
BufferedReader br = new BufferedReader(new InputStreamReader(fis));
InputStreamReader isr = new InputStreamReader(fis);
isr.getEncoding();
I am not sure how to change the character encoding for the ANSI ones. UTF-8 and ANSI files are mixed in the same folder. I tried to use Apache Tika for this.
After I get the encoding of the file, I use Scanner, but I get empty output.
Scanner scanner = new Scanner(my_file, detector.getCharset().toString());
line = scanner.nextLine();
There is a library called juniversalchardet, which can help you guess the right encoding. It was updated recently and is currently located on GitHub:
https://github.com/albfernandez/juniversalchardet
However, there is no fail-safe tool to detect encodings, as there are many things unknown:
Is this file text at all or some PNG?
Is it stored in a (1,...,k,...,n)-bit encoding?
Which k-bit encoding was used?
Some guesswork can be done by counting the number of control characters that are not commonly used. When a file contains many control characters, it is likely that you've chosen the wrong encoding. (Then try the next one.)
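A rough sketch of that heuristic (the 1% threshold and the set of allowed control characters are arbitrary choices for illustration, not part of any library):
// Decode the bytes with a candidate charset and count control characters
// other than tab, CR and LF; many of them suggest the guess is wrong.
static boolean looksWrong(byte[] data, java.nio.charset.Charset candidate) {
    String text = new String(data, candidate);
    long controls = text.chars()
            .filter(c -> c < 0x20 && c != '\t' && c != '\r' && c != '\n')
            .count();
    return controls > text.length() / 100; // arbitrary 1% threshold
}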
Juniversalchardet tries multiple, and more successful, ways to determine encodings (even Chinese ones). It also provides convenient ways to open a reader from a file with the correct encoding already selected:
(Snippet taken from https://github.com/albfernandez/juniversalchardet#creating-a-reader-with-correct-encoding and adapted)
import org.mozilla.universalchardet.ReaderFactory;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;

public class TestCreateReaderFromFile {

    public static void main(String[] args) throws IOException {
        if (args.length != 1) {
            System.err.println("Usage: java TestCreateReaderFromFile FILENAME");
            System.exit(1);
        }

        BufferedReader reader = null;
        try {
            File file = new File(args[0]);
            reader = ReaderFactory.createBufferedReader(file);

            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // Print each line to console
            }
        }
        finally {
            if (reader != null) {
                reader.close();
            }
        }
    }
}
Edit: Added ScannerFactory
/*
(C) Copyright 2016-2017 Alberto Fernández <infjaf@gmail.com>
Adapted by Fritz Windisch 2018-11-15
The contents of this file are subject to the Mozilla Public License Version
1.1 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.mozilla.org/MPL/
Software distributed under the License is distributed on an "AS IS" basis,
WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
for the specific language governing rights and limitations under the
License.
Alternatively, the contents of this file may be used under the terms of
either the GNU General Public License Version 2 or later (the "GPL"), or
the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
in which case the provisions of the GPL or the LGPL are applicable instead
of those above. If you wish to allow use of your version of this file only
under the terms of either the GPL or the LGPL, and not to allow others to
use your version of this file under the terms of the MPL, indicate your
decision by deleting the provisions above and replace them with the notice
and other provisions required by the GPL or the LGPL. If you do not delete
the provisions above, a recipient may use your version of this file under
the terms of any one of the MPL, the GPL or the LGPL.
*/
import java.io.BufferedInputStream;
import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Objects;
import java.util.Scanner;
import org.mozilla.universalchardet.UniversalDetector;
import org.mozilla.universalchardet.UnicodeBOMInputStream;
/**
 * Create a scanner from a file with correct encoding
 */
public final class ScannerFactory {

    private ScannerFactory() {
        throw new AssertionError("No instances allowed");
    }

    /**
     * Create a scanner from a file with correct encoding
     * @param file The file to read from
     * @param defaultCharset defaultCharset to use if it can't be determined
     * @return Scanner for the file with the correct encoding
     * @throws java.io.IOException if some I/O error occurs
     */
    public static Scanner createScanner(File file, Charset defaultCharset) throws IOException {
        Charset cs = Objects.requireNonNull(defaultCharset, "defaultCharset must be not null");
        String detectedEncoding = UniversalDetector.detectCharset(file);
        if (detectedEncoding != null) {
            cs = Charset.forName(detectedEncoding);
        }
        if (!cs.toString().contains("UTF")) {
            return new Scanner(file, cs.name());
        }
        Path path = file.toPath();
        return new Scanner(new UnicodeBOMInputStream(new BufferedInputStream(Files.newInputStream(path))), cs.name());
    }

    /**
     * Create a scanner from a file with correct encoding. If the charset cannot be determined,
     * it uses the system default charset.
     * @param file The file to read from
     * @return Scanner for the file with the correct encoding
     * @throws java.io.IOException if some I/O error occurs
     */
    public static Scanner createScanner(File file) throws IOException {
        return createScanner(file, Charset.defaultCharset());
    }
}
Your approach will not give you the right encoding.
FileInputStream fis = new FileInputStream(my_file);
BufferedReader br = new BufferedReader(new InputStreamReader(fis));
InputStreamReader isr = new InputStreamReader(fis);
isr.getEncoding();
This will return the encoding being used by this InputStreamReader (read the javadoc), not the encoding of the characters written in the file (my_file in your case). And if the encoding is wrong, Scanner won't be able to read the file properly.
In fact, correct me if I am wrong, but there is no way to determine the encoding used for a particular file with 100% accuracy. There are a few projects which have a better success rate at guessing the encoding, but not 100% accuracy. On the other hand, if you know the encoding used, then you can read the file using:
Scanner scanner = new Scanner(my_file, "charset");
scanner.nextLine();
Also, find out the correct charset name used in Java for your "ANSI" files; on Western-European Windows systems this is usually windows-1252 (Cp1252), while Cyrillic systems use Cp1251.
Whichever path you take, be on the lookout for any IOException, which might point you in the right direction.
To make Scanner work with a different encoding, you have to provide the correct one to the Scanner's constructor.
To determine the file encoding, it is better to use an external lib (e.g. https://github.com/albfernandez/juniversalchardet). But if you definitely know the possible encodings, you can check manually (here, for a UTF-8 BOM) as described on Wikipedia:
public static void main(String... args) throws IOException {
    List<String> lines = readLinesFromFile(new File("d:/utf8.txt"));
}

public static List<String> readLinesFromFile(File file) throws IOException {
    try (Scanner scan = new Scanner(file, getCharsetName(file))) {
        List<String> lines = new LinkedList<>();
        while (scan.hasNext())
            lines.add(scan.nextLine());
        return lines;
    }
}

private static String getCharsetName(File file) throws IOException {
    try (InputStream in = new FileInputStream(file)) {
        // Check for the UTF-8 byte order mark (EF BB BF); otherwise assume ASCII/ANSI
        if (in.read() == 0xEF && in.read() == 0xBB && in.read() == 0xBF)
            return StandardCharsets.UTF_8.name();
        return StandardCharsets.US_ASCII.name();
    }
}
I am using the following standalone class to calculate the size of zipped files before zipping.
I am using 0-level compression, but I am still getting a difference of a few bytes.
Can you please help me out so that I get the exact size?
Quick help will be appreciated.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;
import org.apache.commons.io.FilenameUtils;
public class zipcode {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
        try {
            CRC32 crc = new CRC32();
            byte[] b = new byte[1024];
            File file = new File("/Users/Lab/Desktop/ABC.xlsx");
            FileInputStream in = new FileInputStream(file);
            crc.reset();

            // output file
            ZipOutputStream out = new ZipOutputStream(new FileOutputStream("/Users/Lab/Desktop/ABC.zip"));
            // name the file inside the zip file
            ZipEntry entry = new ZipEntry("ABC.xlsx");
            entry.setMethod(ZipEntry.DEFLATED);
            entry.setCompressedSize(file.length());
            entry.setSize(file.length());
            entry.setCrc(crc.getValue());

            out.setMethod(ZipOutputStream.DEFLATED);
            out.setLevel(0);
            //entry.setCompressedSize(in.available());
            //entry.setSize(in.available());
            //entry.setCrc(crc.getValue());
            out.putNextEntry(entry);

            // buffer size
            int count;
            while ((count = in.read(b)) > 0) {
                System.out.println();
                out.write(b, 0, count);
            }
            out.close();
            in.close();
        } catch (FileNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Firstly, I'm not convinced by the explanation of why you need to do this. There is something wrong with your system design or implementation if it is necessary to know the file size before you start uploading.
Having said that, the solution is basically to create the ZIP file on the server side so that you know its size before you start uploading it to the client:
Write the ZIP file to a temporary file and upload from that.
Write the ZIP file to a buffer in memory and upload from that.
If you don't have either the file space or the memory space on the server side, then:
Create "sink" outputStream that simply counts the bytes that are written to calculate the nominal file size.
Create / write the ZIP file to the sink, and capture the file size.
Open your connection for uploading.
Send the metadata including the file size.
Create / write the ZIP a second time, writing to the socket stream ... or whatever.
These 3 approaches will all allow you to create and send a compressed ZIP, if that is going to help.
If you insist on trying to do this on the fly in one pass, then you are going to need to read the ZIP file spec in forensic detail... and do some messy arithmetic. Helping you with that is probably beyond the scope of an SO question.
I had to do this myself to write zip results straight to AWS S3, which requires a file size. Unfortunately, I found no way to compute the size of a compressed file without performing the computation on each block of data.
One method is to zip everything twice. The first time you throw out the data but add up the number of bytes:
long getSize(List<InputStream> files) throws IOException {
    final AtomicLong counter = new AtomicLong(0L);
    final OutputStream countingStream = new OutputStream() {
        @Override
        public void write(int b) throws IOException {
            counter.incrementAndGet();
        }
    };
    ZipOutputStream zoutcounter = new ZipOutputStream(countingStream);

    // Loop through files or input streams here and do compression
    // ...

    zoutcounter.close();
    return counter.get();
}
The alternative is to do the above, creating an entry for each file but not writing any actual data (don't call write()), so you can compute the total size of just the zip entry headers. This will only work if you turn off compression like this:
entry.setMethod(ZipEntry.STORED);
The size of the zip entries plus the size of each uncompressed file should give you an accurate final size, but only with compression turned off. You don't have to set the CRC values or any of those other fields when computing the zip file size, as those entries always have the same size in the final entry header. It's only the name, comment and extra fields on the ZipEntry that vary in size. The other entries, like the file size, CRC, etc., take up the same space in the final zip file whether or not they were set.
There is one more solution you can try. Guess the size conservatively and add a safety margin, then compress aggressively. Pad the rest of the file until it equals your estimated size; zip ignores padding. If you implement an output stream that wraps your actual output stream but implements the close operation as a no-op, you can pass that as the output stream for your ZipOutputStream. After you close your ZipOutputStream instance, write padding to the actual output stream until it reaches your estimated number of bytes, then close it for real. The file will be larger than it strictly needs to be, but you save the computation of the accurate file size and the result still benefits from at least some compression.
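A minimal sketch of such a close-ignoring wrapper (the class name is made up; it extends java.io.FilterOutputStream):
// Forwards all writes to the real stream but turns close() into a flush,
// so closing the ZipOutputStream does not close the underlying target.
class NonClosingOutputStream extends FilterOutputStream {
    NonClosingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void close() throws IOException {
        flush(); // leave the underlying stream open for the padding bytes
    }
}
After the ZipOutputStream built on top of this wrapper is closed, you write the padding directly to the real stream until you reach the estimated size, and only then close the real stream.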
I would like to have a method that would return a list of BufferedReader objects (for example for all files in a directory):
private List<BufferedReader> getInputReaders(List<String> filenames) throws IOException {
    List<BufferedReader> result = new ArrayList<BufferedReader>();
    for (String filename : filenames) {
        result.add(new BufferedReader(new InputStreamReader(new FileInputStream(filename), "UTF-8")));
    }
    return result;
}
Will this be a major waste of resources?
Will all those streams be opened at the moment of creation and remain so, therefore holding system resources?
If yes, can I create those readers in "passive" mode without actually opening the streams, or is there any other workaround (so I can safely build a List with thousands of readers)?
Yes: FileInputStream invokes open() in its constructor, and open() is a native method which will most likely reserve a file descriptor for the file.
Instead of immediately returning a list of BufferedReaders, why not return a list of something that will open the underlying stream as needed? You can create a class that holds onto a filename and simply opens the resource when called.
I'm pretty sure it's a bad idea. You risk consuming all the available file descriptors, and there is no point in opening a reader to a file if you don't want to read from it.
If you want to read from a file, then open a reader, read from the file, and close the reader. Then do the same for the next file to read from.
If you want a unique abstraction to read from various sources (URLs, files, etc.), then create your own Source interface, with multiple implementations that wrap the resource to read from (URLSource, FileSource, etc.). Only open the actual reader on the wrapped resource when reading from your Source instance.
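A bare-bones sketch of that idea (the interface and class names here are only illustrative):
interface Source {
    BufferedReader openReader() throws IOException;
}

class FileSource implements Source {
    private final String fileName;

    FileSource(String fileName) {
        this.fileName = fileName;
    }

    @Override
    public BufferedReader openReader() throws IOException {
        // The file is only opened when the caller actually wants to read from it
        return new BufferedReader(
                new InputStreamReader(new FileInputStream(fileName), "UTF-8"));
    }
}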
Yes, those streams will be opened as soon as they are created.
A good way to avoid this is to create a LazyReader class that only initializes the underlying Reader on the first read:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;

public class LazyReader extends Reader {

    private final String fileName;
    private Reader reader = null;

    public LazyReader(String fileName) {
        super();
        this.fileName = fileName;
    }

    private void init() throws IOException {
        if (reader == null) {
            reader = new BufferedReader(
                    new InputStreamReader(new FileInputStream(fileName), "UTF-8"));
        }
    }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
        init();
        return reader.read(cbuf, off, len);
    }

    @Override
    public void close() throws IOException {
        // Only close the underlying reader if it was actually opened
        if (reader != null) {
            reader.close();
        }
    }

    // If you want marking you should also implement mark(int), reset() and markSupported()
}