Is there any way to create an InputStream that wraps a list of UTF-8 Strings? I'd like to do something like:
InputStream in = new XyzInputStream( List<String> lines )
You can read from a ByteArrayInputStream, and you can create your source byte[] array using a ByteArrayOutputStream.
So create the array as follows:
List<String> source = new ArrayList<String>();
source.add("one");
source.add("two");
source.add("three");
ByteArrayOutputStream baos = new ByteArrayOutputStream();
for (String line : source) {
baos.write(line.getBytes(StandardCharsets.UTF_8)); // encode explicitly, since the lines are UTF-8
}
byte[] bytes = baos.toByteArray();
And reading from it is as simple as:
InputStream in = new ByteArrayInputStream(bytes);
Alternatively, depending on what you're trying to do, a StringReader might be better.
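For example, if a Reader will do instead of an InputStream, a minimal sketch (class name illustrative) looks like this:
import java.io.Reader;
import java.io.StringReader;
import java.util.Arrays;
import java.util.List;

public class StringReaderExample {
    public static void main(String[] args) throws Exception {
        List<String> lines = Arrays.asList("one", "two", "three");
        // Join the lines and expose them through a Reader instead of an InputStream.
        Reader reader = new StringReader(String.join("\n", lines));
        int c;
        while ((c = reader.read()) != -1) {
            System.out.print((char) c);
        }
    }
}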
You can concatenate all the lines together to create a String, then convert it to a byte array using String#getBytes and pass that into a ByteArrayInputStream. However, this is not the most efficient way of doing it.
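A minimal sketch of that approach, with the charset made explicit since the question asks for UTF-8:
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ConcatenateExample {
    public static void main(String[] args) throws Exception {
        List<String> lines = Arrays.asList("one", "two", "three");
        // Concatenate everything up front; simple, but buffers all lines in memory twice.
        String joined = String.join("", lines);
        InputStream in = new ByteArrayInputStream(joined.getBytes(StandardCharsets.UTF_8));
        int b;
        while ((b = in.read()) != -1) {
            System.out.print((char) b);
        }
    }
}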
In short, no, there is no way of doing this using existing JDK classes. You could, however, implement your own InputStream that reads from a List of Strings.
EDIT: Dave Web has an answer above, which I think is the way to go. If you need a reusable class, then something like this might do:
public class StringsInputStream<T extends Iterable<String>> extends InputStream {
private ByteArrayInputStream bais = null;
public StringsInputStream(final T strings) throws IOException {
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
for (String line : strings) {
outputStream.write(line.getBytes(StandardCharsets.UTF_8)); // assume UTF-8, per the question
}
bais = new ByteArrayInputStream(outputStream.toByteArray());
}
@Override
public int read() throws IOException {
return bais.read();
}
@Override
public int read(byte[] b) throws IOException {
return bais.read(b);
}
@Override
public int read(byte[] b, int off, int len) throws IOException {
return bais.read(b, off, len);
}
@Override
public long skip(long n) throws IOException {
return bais.skip(n);
}
@Override
public int available() throws IOException {
return bais.available();
}
@Override
public void close() throws IOException {
bais.close();
}
@Override
public synchronized void mark(int readlimit) {
bais.mark(readlimit);
}
@Override
public synchronized void reset() throws IOException {
bais.reset();
}
@Override
public boolean markSupported() {
return bais.markSupported();
}
public static void main(String[] args) throws Exception {
List<String> source = new ArrayList<String>();
source.add("foo ");
source.add("bar ");
source.add("baz");
StringsInputStream<List<String>> in = new StringsInputStream<List<String>>(source);
int read = in.read();
while (read != -1) {
System.out.print((char) read);
read = in.read();
}
}
}
This is basically an adapter for ByteArrayInputStream.
You can create some kind of IterableInputStream:
public class IterableInputStream<T> extends InputStream {
public static final int EOF = -1;
private static final InputStream EOF_IS = new InputStream() {
@Override public int read() throws IOException {
return EOF;
}
};
private final Iterator<T> iterator;
private final Function<T, byte[]> mapper;
private InputStream current;
public IterableInputStream(Iterable<T> iterable, Function<T, byte[]> mapper) {
this.iterator = iterable.iterator();
this.mapper = mapper;
next();
}
@Override
public int read() throws IOException {
int n = current.read();
while (n == EOF && current != EOF_IS) {
next();
n = current.read();
}
return n;
}
private void next() {
current = iterator.hasNext()
? new ByteArrayInputStream(mapper.apply(iterator.next()))
: EOF_IS;
}
}
To use it:
public static void main(String[] args) throws IOException {
Iterable<String> strings = Arrays.asList("1", "22", "333", "4444");
try (InputStream is = new IterableInputStream<String>(strings, String::getBytes)) {
for (int b = is.read(); b != -1; b = is.read()) {
System.out.print((char) b);
}
}
}
In my case I had to convert a list of strings into the equivalent file content (with a line feed after each line).
This was my solution:
List<String> inputList = Arrays.asList("line1", "line2", "line3");
byte[] bytes = inputList.stream().collect(Collectors.joining("\n", "", "\n")).getBytes(StandardCharsets.UTF_8);
InputStream inputStream = new ByteArrayInputStream(bytes);
You can do something similar to this:
https://commons.apache.org/sandbox/flatfile/xref/org/apache/commons/flatfile/util/ConcatenatedInputStream.html
It just implements the read() method of InputStream and has a list of InputStreams it is concatenating. Once it reads an EOF it starts reading from the next InputStream. Just convert the Strings to ByteArrayInputStreams.
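Note that the JDK's own java.io.SequenceInputStream already does this concatenation, so a minimal sketch along those lines (names illustrative) could be:
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SequenceExample {
    public static void main(String[] args) throws Exception {
        List<String> lines = Arrays.asList("one", "two", "three");
        List<InputStream> streams = new ArrayList<>();
        for (String line : lines) {
            streams.add(new ByteArrayInputStream(line.getBytes(StandardCharsets.UTF_8)));
        }
        // SequenceInputStream reads each stream to EOF, then moves on to the next.
        InputStream in = new SequenceInputStream(Collections.enumeration(streams));
        int b;
        while ((b = in.read()) != -1) {
            System.out.print((char) b);
        }
    }
}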
You can also do it this way: create a serializable List.
List<String> quarks = Arrays.asList(
"up", "down", "strange", "charm", "top", "bottom"
);
//serialize the List
//note the use of abstract base class references
try{
//use buffering
OutputStream file = new FileOutputStream( "quarks.ser" );
OutputStream buffer = new BufferedOutputStream( file );
ObjectOutput output = new ObjectOutputStream( buffer );
try{
output.writeObject(quarks);
}
finally{
output.close();
}
}
catch(IOException ex){
fLogger.log(Level.SEVERE, "Cannot perform output.", ex);
}
//deserialize the quarks.ser file
//note the use of abstract base class references
try{
//use buffering
InputStream file = new FileInputStream( "quarks.ser" );
InputStream buffer = new BufferedInputStream( file );
ObjectInput input = new ObjectInputStream ( buffer );
try{
//deserialize the List
List<String> recoveredQuarks = (List<String>)input.readObject();
//display its data
for(String quark: recoveredQuarks){
System.out.println("Recovered Quark: " + quark);
}
}
finally{
input.close();
}
}
catch(ClassNotFoundException ex){
fLogger.log(Level.SEVERE, "Cannot perform input. Class not found.", ex);
}
catch(IOException ex){
fLogger.log(Level.SEVERE, "Cannot perform input.", ex);
}
I'd like to propose my simple solution:
public class StringListInputStream extends InputStream {
private final List<String> strings;
private int pos = 0;
private byte[] bytes = null;
private int i = 0;
public StringListInputStream(List<String> strings) {
this.strings = strings;
this.bytes = strings.get(0).getBytes(StandardCharsets.UTF_8);
}
@Override
public int read() throws IOException {
if (pos >= bytes.length) {
if (!next()) return -1;
else return read();
}
return bytes[pos++] & 0xff; // mask to 0..255 so bytes >= 0x80 are not mistaken for EOF
}
private boolean next() {
if (i + 1 >= strings.size()) return false;
pos = 0;
bytes = strings.get(++i).getBytes(StandardCharsets.UTF_8);
return true;
}
}
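A quick usage sketch (values are illustrative):
import java.util.Arrays;
import java.util.List;

public class StringListInputStreamDemo {
    public static void main(String[] args) throws Exception {
        List<String> lines = Arrays.asList("hello ", "world");
        StringListInputStream in = new StringListInputStream(lines);
        int b;
        while ((b = in.read()) != -1) {
            System.out.print((char) b); // prints "hello world"
        }
    }
}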
Related
I am kind of stuck. I usually know how to create a single CSV file, but it looks like I am missing something in this code: I am not able to create multiple CSV files from a Pojo class. The file is usually more than 15 MB, but I need to split it into multiple CSV files of about 5 MB each. Any suggestion would be a great help. Here is sample code that I am trying, but it is failing.
public static void main(String[] args) throws IOException {
getOrderList();
}
public static void getOrderList() throws IOException {
List<Orders> ordersList = new ArrayList<>();
Orders orders = new Orders();
orders.setOrderNumber("1");
orders.setProductName("mickey");
Orders orders1 = new Orders();
orders1.setOrderNumber("2");
orders1.setProductName("mini");
ordersList.add(orders);
ordersList.add(orders1);
Object [] FILE_HEADER = {"orderNumber","productName"};
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
int rowCount = 0;
int fileCount = 1;
try {
BufferedWriter fileWriter = new BufferedWriter(new OutputStreamWriter(byteArrayOutputStream));
CSVPrinter csvFilePrinter = new CSVPrinter(fileWriter,
CSVFormat.DEFAULT.withRecordSeparator("\n"));
csvFilePrinter.printRecord(FILE_HEADER);
for (Orders patient : ordersList) {
rowCount++;
patient.getOrderNumber();
patient.getProductName();
if (rowCount <= 1) {
csvFilePrinter.printRecord(patient);
csvFilePrinter.flush();
}
if (rowCount > 1 ) {
csvFilePrinter.printRecord(patient);
fileCount++;
csvFilePrinter.flush();
}
}
} catch (IOException e) {
throw new RuntimeException("Cannot generate csv file", e);
}
byte[] csvOutput = byteArrayOutputStream.toByteArray();
OutputStream outputStream = null;
outputStream = new FileOutputStream("demos" + fileCount + ".csv");
byteArrayOutputStream = new ByteArrayOutputStream();
byteArrayOutputStream.write(csvOutput);
byteArrayOutputStream.writeTo(outputStream);
}
public static class Orders {
private String orderNumber;
private String productName;
public String getOrderNumber() {
return orderNumber;
}
public void setOrderNumber(String orderNumber) {
this.orderNumber = orderNumber;
}
public String getProductName() {
return productName;
}
public void setProductName(String productName) {
this.productName = productName;
}
}
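For what it's worth, the loop above increments fileCount but never opens a new output, so every row still lands in the same stream. Below is a minimal sketch of one way to roll over to a new file every N rows, reusing the Orders class above; the threshold and file names are illustrative, and rolling by accumulated byte size would work the same way:
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;

public class CsvSplitter {
    // Write ordersList into demos1.csv, demos2.csv, ... with at most
    // maxRowsPerFile data rows (plus a header) per file.
    public static void writeChunked(List<Orders> ordersList, int maxRowsPerFile) throws IOException {
        Object[] header = {"orderNumber", "productName"};
        int fileCount = 1;
        int rowCount = 0;
        CSVPrinter printer = null;
        try {
            for (Orders order : ordersList) {
                if (printer == null) {
                    printer = new CSVPrinter(new FileWriter("demos" + fileCount + ".csv"),
                            CSVFormat.DEFAULT.withRecordSeparator("\n"));
                    printer.printRecord(header);
                }
                printer.printRecord(order.getOrderNumber(), order.getProductName());
                if (++rowCount >= maxRowsPerFile) {
                    printer.close(); // finish this chunk and start a fresh file
                    printer = null;
                    rowCount = 0;
                    fileCount++;
                }
            }
        } finally {
            if (printer != null) printer.close();
        }
    }
}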
When sending a file, you can do ctx.writeAndFlush(new ChunkedFile(new File("file.png")));.
How about a List<Object>?
The list contains a String and the bytes of an image.
From the documentation there's ChunkedInput, but I'm not able to work out how to use it.
UPDATE
Let's say in my Handler, inside the channelRead0(ChannelHandlerContext ctx, Object o) method where I want to send the List<Object>, I've done the following:
@Override
protected void channelRead0(ChannelHandlerContext ctx, Object o) throws Exception {
List<Object> msg = new ArrayList<>();
/**getting the bytes of image**/
byte[] imageInByte;
BufferedImage originalImage = ImageIO.read(new File(fileName));
// convert BufferedImage to byte array
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(originalImage, "png", baos);
baos.flush();
imageInByte = baos.toByteArray();
baos.close();
msg.clear();
msg.add(0, "String"); //add the String into List
msg.add(1, imageInByte); //add the bytes of images into list
/**Chunk the List<Object> and Send it just like the chunked file**/
ctx.writeAndFlush(new ChunkedInput(DONT_KNOW_WHAT_TO_DO_HERE)); //
}
Just implement your own ChunkedInput<ByteBuf>. Following the implementations shipped with Netty you can implement it as follows:
public class ChunkedList implements ChunkedInput<ByteBuf> {
private static final byte[] EMPTY = new byte[0];
private byte[] previousPart = EMPTY;
private final int chunkSize;
private final Iterator<Object> iterator;
public ChunkedList(int chunkSize, List<Object> objs) {
//chunk size in bytes
this.chunkSize = chunkSize;
this.iterator = objs.iterator();
}
public ByteBuf readChunk(ChannelHandlerContext ctx) {
return readChunk(ctx.alloc());
}
public ByteBuf readChunk(ByteBufAllocator allocator) {
if (isEndOfInput())
return null;
else {
ByteBuf buf = allocator.buffer(chunkSize);
boolean release = true;
try {
int bytesRead = 0;
if (previousPart.length > 0) {
if (previousPart.length > chunkSize) {
throw new IllegalStateException();
}
bytesRead += previousPart.length;
buf.writeBytes(previousPart);
}
boolean done = false;
while (!done) {
if (!iterator.hasNext()) {
done = true;
previousPart = EMPTY;
} else {
Object o = iterator.next();
//depending on the encoding
byte[] bytes = o instanceof String ? ((String) o).getBytes() : (byte[]) o;
bytesRead += bytes.length;
if (bytesRead > chunkSize) {
done = true;
previousPart = bytes;
} else {
buf.writeBytes(bytes);
}
}
}
release = false;
} finally {
if (release)
buf.release();
}
return buf;
}
}
public long length() {
return -1;
}
public boolean isEndOfInput() {
return !iterator.hasNext() && previousPart.length == 0;
}
public long progress() {
return 0;
}
public void close(){
//close
}
}
In order to write chunked content there is a special handler shipped with Netty: io.netty.handler.stream.ChunkedWriteHandler. So just add it to your pipeline. Here is the quote from the documentation:
A ChannelHandler that adds support for writing a large data stream
asynchronously neither spending a lot of memory nor getting
OutOfMemoryError. Large data streaming such as file transfer requires
complicated state management in a ChannelHandler implementation.
ChunkedWriteHandler manages such complicated states so that you can
send a large data stream without difficulties.
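Wiring it up might look like the sketch below; MyBusinessHandler is a hypothetical handler that eventually calls ctx.writeAndFlush(new ChunkedList(8192, list)) with an arbitrary 8 KiB chunk size:
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.stream.ChunkedWriteHandler;

public class PipelineSetup extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // ChunkedWriteHandler must sit in the pipeline so that ChunkedInput
        // implementations (like ChunkedList above) are drained chunk by chunk.
        ch.pipeline().addLast(new ChunkedWriteHandler());
        ch.pipeline().addLast(new MyBusinessHandler()); // hypothetical business handler
    }
}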
I've found Resources.readLines() and Files.readLines() to be helpful in simplifying my code.
The problem is that I often read gzip-compressed txt files, or txt files inside zip archives, from URLs (HTTP and FTP).
Is there a way to use Guava's methods to read from these URLs too? Or is that only possible with Java's GZIPInputStream/ZipInputStream?
You can create your own ByteSources:
For GZip:
public class GzippedByteSource extends ByteSource {
private final ByteSource source;
public GzippedByteSource(ByteSource gzippedSource) { source = gzippedSource; }
@Override public InputStream openStream() throws IOException {
return new GZIPInputStream(source.openStream());
}
}
Then use it:
Charset charset = ... ;
new GzippedByteSource(Resources.asByteSource(url)).asCharSource(charset).readLines();
Here is the implementation for the Zip. This assumes that you read only one entry.
public static class ZipEntryByteSource extends ByteSource {
private final ByteSource source;
private final String entryName;
public ZipEntryByteSource(ByteSource zipSource, String entryName) {
this.source = zipSource;
this.entryName = entryName;
}
@Override public InputStream openStream() throws IOException {
final ZipInputStream in = new ZipInputStream(source.openStream());
while (true) {
final ZipEntry entry = in.getNextEntry();
if (entry == null) {
in.close();
throw new IOException("No entry named " + entryName);
} else if (entry.getName().equals(this.entryName)) {
return new InputStream() {
@Override
public int read() throws IOException {
return in.read();
}
@Override
public void close() throws IOException {
in.closeEntry();
in.close();
}
};
} else {
in.closeEntry();
}
}
}
}
And you can use it like this:
Charset charset = ... ;
String entryName = ... ; // Name of the entry inside the zip file.
new ZipEntryByteSource(Resources.asByteSource(url), entryName).asCharSource(charset).readLines();
As Olivier Grégoire said, you can create the necessary ByteSources for whatever compression scheme you need in order to use Guava's readLines function.
For zip archives, though, while it's possible, I don't think it's worth it. It will be easier to make your own readLines method that iterates over the zip entries and reads the lines of each entry on your own. Here's a class that demonstrates how to read and output the lines of a URL pointing at a zip archive:
public class ReadLinesOfZippedUrl {
public static List<String> readLines(String urlStr, Charset charset) {
List<String> retVal = new LinkedList<>();
try (ZipInputStream zipInputStream = new ZipInputStream(new URL(urlStr).openStream())) {
for (ZipEntry zipEntry = zipInputStream.getNextEntry(); zipEntry != null; zipEntry = zipInputStream.getNextEntry()) {
// don't close this reader or you'll close the underlying zip stream
BufferedReader reader = new BufferedReader(new InputStreamReader(zipInputStream, charset));
retVal.addAll(reader.lines().collect(Collectors.toList())); // slurp all the lines from one entry
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
return retVal;
}
public static void main(String[] args) {
String urlStr = "http://central.maven.org/maven2/com/google/guava/guava/18.0/guava-18.0-sources.jar";
Charset charset = StandardCharsets.UTF_8;
List<String> lines = readLines(urlStr, charset);
lines.forEach(System.out::println);
}
}
I want to hash a file in Java, picking out a file whose name ends with .raw. This is the code I used:
FileSearch.java
public class FileSearch
{
private static final File file = null;
public static File findfile(File file) throws IOException
{
String drive = (new DetectDrive()).USBDetect();
Path start = FileSystems.getDefault().getPath(drive);
Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
{
if (file.toString().endsWith(".raw"))
{
System.out.println(file);
}
return FileVisitResult.CONTINUE;
}
});
return file;
}
public static void main(String[] args) throws Exception
{
Hash hasher = new Hash();
FileSearch.findfile(file);
try
{
if (file.toString().endsWith("raw"))
{
hasher.hash(file);
}
} catch (IOException e)
{
e.printStackTrace();
}
}
}
Hash.java
public class Hash
{
public void hash(File file) throws Exception
{
MessageDigest md = MessageDigest.getInstance("MD5");
FileInputStream fis = new FileInputStream(file);
byte[] dataBytes = new byte[1024];
int nread = 0;
while ((nread = fis.read(dataBytes)) != -1)
{
md.update(dataBytes, 0, nread);
};
byte[] mdbytes = md.digest();
StringBuffer sb = new StringBuffer();
for (int i = 0; i < mdbytes.length; i++)
{
sb.append(Integer.toString((mdbytes[i] & 0xff) + 0x100, 16).substring(1));
}
System.out.println("Digest(in hex format):: " + sb.toString());
}
}
The first class finds the file and runs the hash from its main method; the second class provides the method for hashing the file (with MD5). However, when I run it, it gives this output:
"name of raw file"
Exception in thread "main" java.lang.NullPointerException at FileSearch.main(FileSearch.java:33)
Line 33 is the if (file.toString().endsWith("raw")) line. Does anyone know how I can fix this?
You never initialize file with anything (well, you initialize it with null):
private static final File file = null;
So when you call
if (file.toString().endsWith("raw"))
file can only be null.
What you probably want is just
file = FileSearch.findfile(file);
See:
What is a NullPointerException, and how do I fix it?
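Note that findfile as posted also only prints the match and returns the static (null) field. A minimal corrected sketch, assuming you want the first .raw file found, could capture the result instead:
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.atomic.AtomicReference;

public class FileSearchFixed {
    public static Path findFile(Path start) throws IOException {
        AtomicReference<Path> found = new AtomicReference<>();
        Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                if (file.toString().endsWith(".raw")) {
                    found.set(file);
                    return FileVisitResult.TERMINATE; // stop at the first match
                }
                return FileVisitResult.CONTINUE;
            }
        });
        return found.get(); // null if nothing matched; check before hashing
    }
}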
In a traditional blocking-thread server, I would do something like this
class ServerSideThread {
ObjectInputStream in;
ObjectOutputStream out;
Engine engine;
public ServerSideThread(Socket socket, Engine engine) {
in = new ObjectInputStream(socket.getInputStream());
out = new ObjectOutputStream(socket.getOutputStream());
this.engine = engine;
}
public void sendMessage(Message m) {
out.writeObject(m);
}
public void run() {
while(true) {
Message m = (Message)in.readObject();
engine.queueMessage(m,this); // give the engine a message with this as a callback
}
}
}
Now, the object can be expected to be quite large. In my NIO loop, I can't simply wait for the object to come through; all my other connections (with much smaller workloads) will be waiting on me.
How can I only get notified that a connection has the entire object before it tells my nio channel it's ready?
You can write the object to a ByteArrayOutputStream, which lets you send the length before the object itself. On the receiving side, read that amount of data before attempting to decode it.
However, you are likely to find it much simpler and more efficient to use blocking IO (rather than NIO) with Object*Stream
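For comparison, a minimal sketch of the blocking alternative, with one thread per connection (names illustrative):
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

public class BlockingObjectConnection {
    // One thread per connection: readObject() simply blocks until a whole
    // object has arrived, so no length prefix or reassembly is needed.
    static void handle(Socket socket) throws IOException, ClassNotFoundException {
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        Object message = in.readObject();
        out.writeObject(message); // e.g. echo it back
    }
}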
Edit: something like this for the NIO route:
public static void send(SocketChannel socket, Serializable serializable) throws IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
for(int i=0;i<4;i++) baos.write(0);
ObjectOutputStream oos = new ObjectOutputStream(baos);
oos.writeObject(serializable);
oos.close();
final ByteBuffer wrap = ByteBuffer.wrap(baos.toByteArray());
wrap.putInt(0, baos.size()-4);
socket.write(wrap);
}
private final ByteBuffer lengthByteBuffer = ByteBuffer.wrap(new byte[4]);
private ByteBuffer dataByteBuffer = null;
private boolean readLength = true;
public Serializable recv(SocketChannel socket) throws IOException, ClassNotFoundException {
if (readLength) {
socket.read(lengthByteBuffer);
if (lengthByteBuffer.remaining() == 0) {
readLength = false;
dataByteBuffer = ByteBuffer.allocate(lengthByteBuffer.getInt(0));
lengthByteBuffer.clear();
}
} else {
socket.read(dataByteBuffer);
if (dataByteBuffer.remaining() == 0) {
ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(dataByteBuffer.array()));
final Serializable ret = (Serializable) ois.readObject();
// clean up
dataByteBuffer = null;
readLength = true;
return ret;
}
}
return null;
}
Inspired by the code above, I've created a Google Code project.
It includes a simple unit test:
SeriServer server = new SeriServer(6001, nthreads);
final SeriClient client[] = new SeriClient[nclients];
//write the data with multiple threads to flood the server
for (int cnt = 0; cnt < nclients; cnt++) {
final int counterVal = cnt;
client[cnt] = new SeriClient("localhost", 6001);
Thread t = new Thread(new Runnable() {
public void run() {
try {
for (int cnt2 = 0; cnt2 < nsends; cnt2++) {
String msg = "[" + counterVal + "]";
client[counterVal].send(msg);
}
} catch (IOException e) {
e.printStackTrace();
fail();
}
}
});
t.start();
}
HashMap<String, Integer> counts = new HashMap<String, Integer>();
int nullCounts = 0;
for (int cnt = 0; cnt < nsends * nclients;) {
//read the data from a vector (that the server pool automatically fills)
SeriDataPackage data = server.read();
if (data == null) {
nullCounts++;
System.out.println("NULL");
continue;
}
if (counts.containsKey(data.getObject())) {
Integer c = counts.get(data.getObject());
counts.put((String) data.getObject(), c + 1);
} else {
counts.put((String) data.getObject(), 1);
}
cnt++;
System.out.println("Received: " + data.getObject());
}
// asserts the results
Collection<Integer> values = counts.values();
for (Integer value : values) {
int ivalue = value;
assertEquals(nsends, ivalue);
System.out.println(value);
}
assertEquals(counts.size(), nclients);
System.out.println(counts.size());
System.out.println("Finishing");
server.shutdown();