Java: How to check that 2 binary files are the same?

What is the easiest way to check (in a unit test) whether binary files A and B are equal?

Are third-party libraries fair game? Guava has Files.equal(File, File). There's no real reason to bother with hashing if you don't have to; it can only be less efficient.
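For instance, a test assertion might look like this (a sketch, assuming Guava is on the test classpath; the paths are placeholders):

import com.google.common.io.Files;

File expected = new File("expected.bin"); // hypothetical paths
File actual = new File("actual.bin");
Assert.assertTrue("Binary files differ", Files.equal(expected, actual));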

There's always just reading byte by byte from each file and comparing them as you go. MD5, SHA-1, etc. still have to read all the bytes, so computing the hash is extra work that you don't have to do.
if (file1.length() != file2.length()) {
    return false;
}
try (InputStream in1 = new BufferedInputStream(new FileInputStream(file1));
     InputStream in2 = new BufferedInputStream(new FileInputStream(file2))) {
    int value1, value2;
    do {
        // since we're buffered, read() isn't expensive
        value1 = in1.read();
        value2 = in2.read();
        if (value1 != value2) {
            return false;
        }
    } while (value1 >= 0);
    // We already checked that the file sizes are equal, so if we're here
    // we reached the end of both files without a mismatch.
    return true;
}

With junit-addons' assertBinaryEquals:
public static void assertBinaryEquals(java.io.File expected,
java.io.File actual)
http://junit-addons.sourceforge.net/junitx/framework/FileAssert.html
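Usage is then a one-liner (assuming the junit-addons jar is on the test classpath; the paths are placeholders):

import junitx.framework.FileAssert;

FileAssert.assertBinaryEquals(new File("expected.bin"), new File("actual.bin"));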

Read the files in (small) blocks and compare them:
static boolean binaryEquals(File a, File b) throws IOException {
    if (a.length() != b.length()) {
        return false;
    }
    final int BLOCK_SIZE = 128;
    try (InputStream aStream = new BufferedInputStream(new FileInputStream(a));
         InputStream bStream = new BufferedInputStream(new FileInputStream(b))) {
        byte[] aBuffer = new byte[BLOCK_SIZE];
        byte[] bBuffer = new byte[BLOCK_SIZE];
        int aByteCount;
        do {
            aByteCount = aStream.read(aBuffer, 0, BLOCK_SIZE);
            int bByteCount = bStream.read(bBuffer, 0, BLOCK_SIZE);
            // If the streams return different counts or different bytes, the files differ.
            if (aByteCount != bByteCount || !Arrays.equals(aBuffer, bBuffer)) {
                return false;
            }
        } while (aByteCount > 0);
        return true;
    }
}

If you want to avoid dependencies, you can do it quite nicely with Files.readAllBytes and Assert.assertArrayEquals:
Assert.assertArrayEquals("Binary files differ",
        Files.readAllBytes(Paths.get(expectedBinaryFile)),
        Files.readAllBytes(Paths.get(actualBinaryFile)));
Note: This will read the whole file so it might not be efficient with large files.
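One mitigation, sketched here, is to compare the sizes first so the full read only happens when the lengths already match (Files.size is standard java.nio.file; the paths are placeholders):

Path expected = Paths.get("expected.bin"); // hypothetical paths
Path actual = Paths.get("actual.bin");
Assert.assertEquals("File sizes differ", Files.size(expected), Files.size(actual));
Assert.assertArrayEquals("Binary files differ",
        Files.readAllBytes(expected),
        Files.readAllBytes(actual));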

Since Java 12 you can also use the Files.mismatch method (see its JavaDoc). It returns -1L if the files are identical, otherwise the position of the first mismatching byte.
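In a unit test, that could look like this (a sketch; the paths are placeholders):

Assert.assertEquals("Binary files differ", -1L,
        Files.mismatch(Paths.get("expected.bin"), Paths.get("actual.bin")));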

I had to do the same in a unit test too, so I used SHA-1 hashes; to spare the calculation of the hashes, I check whether the file sizes are equal first. Here is my attempt:
public class SHA1Compare {
    private static final int CHUNK_SIZE = 4096;

    public void assertEqualsSHA1(String expectedPath, String actualPath) throws IOException, NoSuchAlgorithmException {
        File expectedFile = new File(expectedPath);
        File actualFile = new File(actualPath);
        Assert.assertEquals(expectedFile.length(), actualFile.length());
        try (FileInputStream fisExpected = new FileInputStream(expectedFile);
             FileInputStream fisActual = new FileInputStream(actualFile)) {
            Assert.assertEquals(makeMessageDigest(fisExpected),
                    makeMessageDigest(fisActual));
        }
    }

    public String makeMessageDigest(InputStream is) throws NoSuchAlgorithmException, IOException {
        byte[] data = new byte[CHUNK_SIZE];
        MessageDigest md = MessageDigest.getInstance("SHA1");
        int bytesRead;
        while (-1 != (bytesRead = is.read(data, 0, CHUNK_SIZE))) {
            md.update(data, 0, bytesRead);
        }
        return toHexString(md.digest());
    }

    private String toHexString(byte[] digest) {
        StringBuilder sha1HexString = new StringBuilder();
        for (int i = 0; i < digest.length; i++) {
            sha1HexString.append(String.format("%02x", digest[i]));
        }
        return sha1HexString.toString();
    }
}

Related

How to convert Reader to InputStream in java

I need to convert a Reader object into InputStream. My solution right now is below. But my concern is since this will handle big chunks of data, it will increase the memory usage drastically.
private static InputStream getInputStream(final Reader reader) {
    char[] buffer = new char[10240];
    StringBuilder builder = new StringBuilder();
    int charCount;
    try {
        while ((charCount = reader.read(buffer, 0, buffer.length)) != -1) {
            builder.append(buffer, 0, charCount);
        }
        reader.close();
    } catch (final IOException e) {
        e.printStackTrace();
    }
    return new ByteArrayInputStream(builder.toString().getBytes(StandardCharsets.UTF_8));
}
Since I use StringBuilder this will keep the full content of the reader object in memory. I want to avoid this. Is there a way I can pipe Reader object? Any help regarding this highly appreciated.
Using the Apache Commons IO library, you can do this conversion in one line:
//import org.apache.commons.io.input.ReaderInputStream;
InputStream inputStream = new ReaderInputStream(reader, StandardCharsets.UTF_8);
You can read the documentation for this class at https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/input/ReaderInputStream.html
It might be worth trying this to see if it solves the memory issue too.
First: this is a rare requirement; often it is the other way around, or there is a FileChannel, so one can use a ByteBuffer.
A PipedInputStream would be possible, with a PipedOutputStream driven from a second thread. However, that is unneeded here.
A Reader gives chars. Unicode code points are derived from either one or two chars (the latter being a surrogate pair).
/**
 * An InputStream over a Reader, producing the text as UTF-8 bytes.
 */
public class ReaderInputStream extends InputStream {

    private final Reader reader;
    private boolean eof;
    private int byteCount;
    private byte[] bytes = new byte[6];

    public ReaderInputStream(Reader reader) {
        this.reader = reader;
    }

    @Override
    public int read() throws IOException {
        if (byteCount > 0) {
            int c = bytes[0] & 0xFF; // mask to 0..255: read() may only return -1 for EOF
            --byteCount;
            for (int i = 0; i < byteCount; ++i) {
                bytes[i] = bytes[i + 1];
            }
            return c;
        }
        if (eof) {
            return -1;
        }
        int c = reader.read();
        if (c == -1) {
            eof = true;
            return -1;
        }
        char ch = (char) c;
        String s;
        if (Character.isHighSurrogate(ch)) {
            c = reader.read();
            if (c == -1) {
                // Error: a low surrogate was expected.
                eof = true;
                throw new IOException("Expected a low surrogate char instead of EOF");
            }
            char ch2 = (char) c;
            if (!Character.isLowSurrogate(ch2)) {
                throw new IOException("Expected a low surrogate char");
            }
            s = new String(new char[] {ch, ch2});
        } else {
            s = Character.toString(ch);
        }
        byte[] bs = s.getBytes(StandardCharsets.UTF_8);
        byteCount = bs.length;
        System.arraycopy(bs, 0, bytes, 0, byteCount);
        return read();
    }
}
Path source = Paths.get("...");
Path target = Paths.get("...");
try (Reader reader = Files.newBufferedReader(source, StandardCharsets.UTF_8);
     InputStream in = new ReaderInputStream(reader)) {
    Files.copy(in, target);
}

Servlet getContentLength() returns > 0 but getInputStream().available() returns 0 [duplicate]

How do I read an entire InputStream into a byte array?
You can use Apache Commons IO to handle this and similar tasks.
The IOUtils type has a static method to read an InputStream and return a byte[].
InputStream is;
byte[] bytes = IOUtils.toByteArray(is);
Internally this creates a ByteArrayOutputStream and copies the bytes to the output, then calls toByteArray(). It handles large files by copying the bytes in blocks of 4KiB.
You need to read each byte from your InputStream and write it to a ByteArrayOutputStream.
You can then retrieve the underlying byte array by calling toByteArray():
InputStream is = ...
ByteArrayOutputStream buffer = new ByteArrayOutputStream();

int nRead;
byte[] data = new byte[16384];

while ((nRead = is.read(data, 0, data.length)) != -1) {
    buffer.write(data, 0, nRead);
}

return buffer.toByteArray();
Finally, after twenty years, there’s a simple solution without the need for a 3rd party library, thanks to Java 9:
InputStream is;
…
byte[] array = is.readAllBytes();
Note also the convenience methods readNBytes(byte[] b, int off, int len) and transferTo(OutputStream) addressing recurring needs.
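For example, readNBytes blocks until it has read exactly the requested count or hit end of stream, which is handy when a length is known up front (a sketch; the 1024-byte header length is a made-up assumption):

byte[] head = new byte[1024]; // hypothetical fixed-size header
int n = is.readNBytes(head, 0, head.length); // returns the actual number of bytes read
if (n < head.length) {
    // stream ended early; only the first n bytes of head are valid
}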
Use vanilla Java's DataInputStream and its readFully method (which has existed since at least Java 1.4):
...
byte[] bytes = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(new FileInputStream(file));
dis.readFully(bytes);
...
There are some other flavors of this method, but I use this all the time for this use case.
If you happen to use Google Guava, it'll be as simple as using ByteStreams:
byte[] bytes = ByteStreams.toByteArray(inputStream);
Safe solution (close streams correctly):
Java 9 and newer:
final byte[] bytes;
try (inputStream) {
    bytes = inputStream.readAllBytes();
}
Java 8 and older:
public static byte[] readAllBytes(InputStream inputStream) throws IOException {
    final int bufLen = 4 * 0x400; // 4KB
    byte[] buf = new byte[bufLen];
    int readLen;
    IOException exception = null;

    try {
        try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
            while ((readLen = inputStream.read(buf, 0, bufLen)) != -1)
                outputStream.write(buf, 0, readLen);

            return outputStream.toByteArray();
        }
    } catch (IOException e) {
        exception = e;
        throw e;
    } finally {
        if (exception == null) inputStream.close();
        else try {
            inputStream.close();
        } catch (IOException e) {
            exception.addSuppressed(e);
        }
    }
}
Kotlin (when Java 9+ isn't accessible):
@Throws(IOException::class)
fun InputStream.readAllBytes(): ByteArray {
    val bufLen = 4 * 0x400 // 4KB
    val buf = ByteArray(bufLen)
    var readLen: Int = 0

    ByteArrayOutputStream().use { o ->
        this.use { i ->
            while (i.read(buf, 0, bufLen).also { readLen = it } != -1)
                o.write(buf, 0, readLen)
        }

        return o.toByteArray()
    }
}
To avoid nested use see here.
Scala (when Java 9+ isn't accessible) (by @Joan, thanks):
def readAllBytes(inputStream: InputStream): Array[Byte] =
Stream.continually(inputStream.read).takeWhile(_ != -1).map(_.toByte).toArray
As always, the Spring Framework also has something for you (spring-core since 3.2.2): StreamUtils.copyToByteArray().
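Assuming spring-core is on the classpath, that is:

import org.springframework.util.StreamUtils;

byte[] bytes = StreamUtils.copyToByteArray(inputStream);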
public static byte[] getBytesFromInputStream(InputStream is) throws IOException {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    byte[] buffer = new byte[0xFFFF];
    for (int len = is.read(buffer); len != -1; len = is.read(buffer)) {
        os.write(buffer, 0, len);
    }
    return os.toByteArray();
}
In case someone is still looking for a solution without dependencies, and you have a file:
DataInputStream
byte[] data = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(new FileInputStream(file));
dis.readFully(data);
dis.close();
ByteArrayOutputStream
InputStream is = new FileInputStream(file);
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
int nRead;
byte[] data = new byte[(int) file.length()];
while ((nRead = is.read(data, 0, data.length)) != -1) {
buffer.write(data, 0, nRead);
}
RandomAccessFile
RandomAccessFile raf = new RandomAccessFile(file, "r");
byte[] data = new byte[(int) raf.length()];
raf.readFully(data);
Do you really need the image as a byte[]? What exactly do you expect in the byte[] - the complete content of an image file, encoded in whatever format the image file is in, or RGB pixel values?
Other answers here show you how to read a file into a byte[]. Your byte[] will contain the exact contents of the file, and you'd need to decode that to do anything with the image data.
Java's standard API for reading (and writing) images is the ImageIO API, which you can find in the package javax.imageio. You can read in an image from a file with just a single line of code:
BufferedImage image = ImageIO.read(new File("image.jpg"));
This will give you a BufferedImage, not a byte[]. To get at the image data, you can call getRaster() on the BufferedImage. This will give you a Raster object, which has methods to access the pixel data (it has several getPixel() / getPixels() methods).
Lookup the API documentation for javax.imageio.ImageIO, java.awt.image.BufferedImage, java.awt.image.Raster etc.
ImageIO supports a number of image formats by default: JPEG, PNG, BMP, WBMP and GIF. It's possible to add support for more formats (you'd need a plug-in that implements the ImageIO service provider interface).
See also the following tutorial: Working with Images
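A sketch of getting at pixel data via the Raster, per the description above (the coordinates and band index are illustrative):

BufferedImage image = ImageIO.read(new File("image.jpg"));
Raster raster = image.getRaster();
// sample of band 0 (e.g. red in an RGB image) at pixel (x=10, y=20)
int sample = raster.getSample(10, 20, 0);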
If you don't want to use the Apache commons-io library, this snippet is taken from the sun.misc.IOUtils class. It's nearly twice as fast as the common implementation using ByteBuffers:
public static byte[] readFully(InputStream is, int length, boolean readAll)
        throws IOException {
    byte[] output = {};
    if (length == -1) length = Integer.MAX_VALUE;
    int pos = 0;
    while (pos < length) {
        int bytesToRead;
        if (pos >= output.length) { // Only expand when there's no room
            bytesToRead = Math.min(length - pos, output.length + 1024);
            if (output.length < pos + bytesToRead) {
                output = Arrays.copyOf(output, pos + bytesToRead);
            }
        } else {
            bytesToRead = output.length - pos;
        }
        int cc = is.read(output, pos, bytesToRead);
        if (cc < 0) {
            if (readAll && length != Integer.MAX_VALUE) {
                throw new EOFException("Detect premature EOF");
            } else {
                if (output.length != pos) {
                    output = Arrays.copyOf(output, pos);
                }
                break;
            }
        }
        pos += cc;
    }
    return output;
}
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
while (true) {
    int r = in.read(buffer);
    if (r == -1) break;
    out.write(buffer, 0, r);
}
byte[] ret = out.toByteArray();
@Adamski: You can avoid the buffer entirely.
Code copied from http://www.exampledepot.com/egs/java.io/File2ByteArray.html (yes, it is very verbose, but it needs half the memory of the other solution).
// Returns the contents of the file in a byte array.
public static byte[] getBytesFromFile(File file) throws IOException {
    InputStream is = new FileInputStream(file);

    // Get the size of the file
    long length = file.length();

    // You cannot create an array using a long type.
    // It needs to be an int type.
    // Before converting to an int type, check
    // to ensure that file is not larger than Integer.MAX_VALUE.
    if (length > Integer.MAX_VALUE) {
        // File is too large
        throw new IOException("File is too large: " + file.getName());
    }

    // Create the byte array to hold the data
    byte[] bytes = new byte[(int) length];

    // Read in the bytes
    int offset = 0;
    int numRead = 0;
    while (offset < bytes.length
            && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
        offset += numRead;
    }

    // Ensure all the bytes have been read in
    if (offset < bytes.length) {
        throw new IOException("Could not completely read file " + file.getName());
    }

    // Close the input stream and return bytes
    is.close();
    return bytes;
}
Given an input stream in:
ByteArrayOutputStream bos = new ByteArrayOutputStream();
int next = in.read();
while (next > -1) {
    bos.write(next);
    next = in.read();
}
bos.flush();
byte[] result = bos.toByteArray();
bos.close();
Java 9 finally gives you a nice method:
InputStream in = ...;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
in.transferTo( bos );
byte[] bytes = bos.toByteArray();
We are seeing some delay for a few AWS transactions while converting an S3 object to a ByteArray.
Note: the S3 object is a PDF document (max size is 3 MB).
We are using option #1 (org.apache.commons.io.IOUtils) to convert the S3 object to a ByteArray. We have noticed that S3 provides a built-in IOUtils method to convert the S3 object to a ByteArray; please confirm what the best way is to convert the S3 object to a ByteArray to avoid the delay.
Option #1:
import org.apache.commons.io.IOUtils;
is = s3object.getObjectContent();
content =IOUtils.toByteArray(is);
Option #2:
import com.amazonaws.util.IOUtils;
is = s3object.getObjectContent();
content =IOUtils.toByteArray(is);
Also let me know if there is any better way to convert the S3 object to a byte array.
I know it's too late, but here, I think, is a cleaner solution that's more readable...
/**
 * Converts an {@link InputStream} into a byte[] array.
 *
 * @param stream the {@link InputStream} object.
 * @return the byte[] array representation of the received {@link InputStream} object.
 * @throws IOException if an error occurs.
 */
public static byte[] streamToByteArray(InputStream stream) throws IOException {
    byte[] buffer = new byte[1024];
    ByteArrayOutputStream os = new ByteArrayOutputStream();

    int line = 0;
    // read bytes from stream, and store them in buffer
    while ((line = stream.read(buffer)) != -1) {
        // Writes bytes from byte array (buffer) into output stream.
        os.write(buffer, 0, line);
    }
    stream.close();
    os.flush();
    os.close();
    return os.toByteArray();
}
I tried to edit @numan's answer with a fix for writing garbage data, but the edit was rejected. While this short piece of code is nothing brilliant, I can't see any better answer. Here's what makes most sense to me:
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buffer = new byte[1024]; // you can configure the buffer size
int length;
while ((length = in.read(buffer)) != -1) out.write(buffer, 0, length); //copy streams
in.close(); // call this in a finally block
byte[] result = out.toByteArray();
btw ByteArrayOutputStream need not be closed. try/finally constructs omitted for readability
See the InputStream.available() documentation:
It is particularly important to realize that you must not use this
method to size a container and assume that you can read the entirety
of the stream without needing to resize the container. Such callers
should probably write everything they read to a ByteArrayOutputStream
and convert that to a byte array. Alternatively, if you're reading
from a file, File.length returns the current length of the file
(though assuming the file's length can't change may be incorrect,
reading a file is inherently racy).
Wrap it in a DataInputStream. If that is off the table for some reason, just use read to hammer on it until it gives you a -1 or the entire block you asked for:
public int readFully(InputStream in, byte[] data) throws IOException {
    int offset = 0;
    int bytesRead;
    boolean read = false;
    while ((bytesRead = in.read(data, offset, data.length - offset)) != -1) {
        read = true;
        offset += bytesRead;
        if (offset >= data.length) {
            break;
        }
    }
    return (read) ? offset : -1;
}
Java 8 way (thanks to BufferedReader and Adam Bien)
private static byte[] readFully(InputStream input) throws IOException {
    try (BufferedReader buffer = new BufferedReader(new InputStreamReader(input))) {
        return buffer.lines().collect(Collectors.joining("\n")).getBytes(<charset_can_be_specified>);
    }
}
Note that this solution wipes carriage returns ('\r'), mangles binary data, and can be inappropriate.
The other case is getting a correct byte array via a stream, after sending a request to the server and waiting for the response.

/**
 * Begin setup of the TCP connection to the PC app
 * to open an integrated connection between the mobile app and the PC app (or another mobile app).
 */
mSocket = new Socket(IP, port);
// mSocket.setSoTimeout(30000);
DataOutputStream mDos = new DataOutputStream(mSocket.getOutputStream());

String str = "MobileRequest#" + params[0] + "#<EOF>";
mDos.write(str.getBytes());

try {
    Thread.sleep(1000);
} catch (InterruptedException e) {
    e.printStackTrace();
}

/* Since data are accepted as bytes, all of them will be collected in the
   following byte array, which is initialised with the accepted data length. */
DataInputStream mDis = new DataInputStream(mSocket.getInputStream());
byte[] data = new byte[mDis.available()];

// Collecting data into byte array
for (int i = 0; i < data.length; i++)
    data[i] = mDis.readByte();

// Converting collected data in byte array into String.
String RESPONSE = new String(data);
You're doing an extra copy if you use ByteArrayOutputStream. If you know the length of the stream before you start reading it (e.g. the InputStream is actually a FileInputStream and you can call file.length(), or the InputStream is a zip-file-entry InputStream and you can call zipEntry.getSize()), then it's far better to write directly into the byte[] array: it uses half the memory and saves time.
// Read the file contents into a byte[] array
byte[] buf = new byte[inputStreamLength];
int bytesRead = Math.max(0, inputStream.read(buf));
// If needed: for safety, truncate the array if the file may somehow get
// truncated during the read operation
byte[] contents = bytesRead == inputStreamLength ? buf
: Arrays.copyOf(buf, bytesRead);
N.B. the last line above deals with files getting truncated while the stream is being read, if you need to handle that possibility. But if the file gets longer while the stream is being read, the contents of the byte[] array will not be lengthened to include the new file content; the array will simply be truncated to the old length inputStreamLength.
I use this.
public static byte[] toByteArray(InputStream is) throws IOException {
    ByteArrayOutputStream output = new ByteArrayOutputStream();
    try {
        byte[] b = new byte[4096];
        int n = 0;
        while ((n = is.read(b)) != -1) {
            output.write(b, 0, n);
        }
        return output.toByteArray();
    } finally {
        output.close();
    }
}
This is my copy-paste version:
@SuppressWarnings("empty-statement")
public static byte[] inputStreamToByte(InputStream is) throws IOException {
    if (is == null) {
        return null;
    }
    // Define a size if you have an idea of it.
    ByteArrayOutputStream r = new ByteArrayOutputStream(2048);
    byte[] read = new byte[512]; // Your buffer size.
    for (int i; -1 != (i = is.read(read)); r.write(read, 0, i));
    is.close();
    return r.toByteArray();
}
Java 7 and 8 (sun.misc.IOUtils is an internal API and is no longer accessible in later versions):
import sun.misc.IOUtils;
...
InputStream in = ...;
byte[] buf = IOUtils.readFully(in, -1, false);
You can try Cactoos:
byte[] array = new BytesOf(stream).bytes();
Here is an optimized version, that tries to avoid copying data bytes as much as possible:
private static byte[] loadStream(InputStream stream) throws IOException {
    int available = stream.available();
    int expectedSize = available > 0 ? available : -1;
    return loadStream(stream, expectedSize);
}

private static byte[] loadStream(InputStream stream, int expectedSize) throws IOException {
    int basicBufferSize = 0x4000;
    int initialBufferSize = (expectedSize >= 0) ? expectedSize : basicBufferSize;
    byte[] buf = new byte[initialBufferSize];
    int pos = 0;
    while (true) {
        if (pos == buf.length) {
            int readAhead = -1;
            if (pos == expectedSize) {
                readAhead = stream.read(); // test whether EOF is at expectedSize
                if (readAhead == -1) {
                    return buf;
                }
            }
            int newBufferSize = Math.max(2 * buf.length, basicBufferSize);
            buf = Arrays.copyOf(buf, newBufferSize);
            if (readAhead != -1) {
                buf[pos++] = (byte) readAhead;
            }
        }
        int len = stream.read(buf, pos, buf.length - pos);
        if (len < 0) {
            return Arrays.copyOf(buf, pos);
        }
        pos += len;
    }
}
Solution in Kotlin (usable on the JVM alongside Java too, of course), covering both the case where you know the size and the case where you don't:
fun InputStream.readBytesWithSize(size: Long): ByteArray? {
    return when {
        size < 0L -> this.readBytes()
        size == 0L -> ByteArray(0)
        size > Int.MAX_VALUE -> null
        else -> {
            val sizeInt = size.toInt()
            val result = ByteArray(sizeInt)
            readBytesIntoByteArray(result, sizeInt)
            result
        }
    }
}

fun InputStream.readBytesIntoByteArray(byteArray: ByteArray, bytesToRead: Int = byteArray.size) {
    var offset = 0
    while (true) {
        val read = this.read(byteArray, offset, bytesToRead - offset)
        if (read == -1)
            break
        offset += read
        if (offset >= bytesToRead)
            break
    }
}
If you know the size, it saves you from using double the memory compared to the other solutions (briefly, but it could still be useful). That's because you have to read the entire stream to the end and then convert it to a byte array (similar to an ArrayList which you convert to just an array).
So, if you are on Android, for example, and you have some Uri to handle, you can try to get the size using this:
fun getStreamLengthFromUri(context: Context, uri: Uri): Long {
    context.contentResolver.query(uri, arrayOf(MediaStore.MediaColumns.SIZE), null, null, null)?.use {
        if (!it.moveToNext())
            return@use
        val fileSize = it.getLong(it.getColumnIndex(MediaStore.MediaColumns.SIZE))
        if (fileSize > 0)
            return fileSize
    }
    //if you wish, you can also get the file-path from the uri here, and then try to get its size, using this: https://stackoverflow.com/a/61835665/878126
    FileUtilEx.getFilePathFromUri(context, uri, false)?.use {
        val file = it.file
        val fileSize = file.length()
        if (fileSize > 0)
            return fileSize
    }
    context.contentResolver.openInputStream(uri)?.use { inputStream ->
        if (inputStream is FileInputStream)
            return inputStream.channel.size()
        else {
            var bytesCount = 0L
            while (true) {
                val available = inputStream.available()
                if (available == 0)
                    break
                val skip = inputStream.skip(available.toLong())
                if (skip < 0)
                    break
                bytesCount += skip
            }
            if (bytesCount > 0L)
                return bytesCount
        }
    }
    return -1L
}
You can use the Cactoos library, which provides reusable object-oriented Java components.
OOP is emphasized by this library: no static methods, NULLs, and so on, only real objects and their contracts (interfaces).
A simple operation like reading InputStream, can be performed like that
final InputStream input = ...;
final Bytes bytes = new BytesOf(input);
final byte[] array = bytes.asBytes();
Assert.assertArrayEquals(
array,
new byte[]{65, 66, 67}
);
Having a dedicated type Bytes for working with data structure byte[] enables us to use OOP tactics for solving tasks at hand.
Something that a procedural "utility" method would forbid us to do.
For example, suppose you need to encode the bytes you've read from this InputStream to Base64.
In this case you use the Decorator pattern and wrap the Bytes object within an implementation for Base64.
cactoos already provides such implementation:
final Bytes encoded = new BytesBase64(
    new BytesOf(
        new InputStreamOf("XYZ")
    )
);
Assert.assertEquals(new TextOf(encoded).asString(), "WFla");
You can decode them in the same manner, by using Decorator pattern
final Bytes decoded = new Base64Bytes(
    new BytesBase64(
        new BytesOf(
            new InputStreamOf("XYZ")
        )
    )
);
Assert.assertEquals(new TextOf(decoded).asString(), "XYZ");
Whatever your task is, you will be able to create your own implementation of Bytes to solve it.

How to convert binary text into useable file

So I use the following methods (a file is converted to a byte array through convertFileToByteArray(), then written to a .txt file by convertByteArrayToBitTextFile()) to convert any kind of file into a binary text file (and by that I mean only 1's and 0's in human-readable form).
public static byte[] convertFileToByteArray(String path) throws IOException
{
    File file = new File(path);
    byte[] fileData;
    fileData = new byte[(int) file.length()];
    FileInputStream in = new FileInputStream(file);
    in.read(fileData);
    in.close();
    return fileData;
}

public static boolean convertByteArrayToBitTextFile(String path, byte[] bytes)
{
    String content = convertByteArrayToBitString(bytes);
    try
    {
        PrintWriter out = new PrintWriter(path);
        out.println(content);
        out.close();
        return true;
    }
    catch (FileNotFoundException e)
    {
        return false;
    }
}

public static String convertByteArrayToBitString(byte[] bytes)
{
    String content = "";
    for (int i = 0; i < bytes.length; i++)
    {
        content += String.format("%8s", Integer.toBinaryString(bytes[i] & 0xFF)).replace(' ', '0');
    }
    return content;
}
Edit: Additional Code:
public static byte[] convertFileToByteArray(String path) throws IOException
{
    File file = new File(path);
    byte[] fileData;
    fileData = new byte[(int) file.length()];
    FileInputStream in = new FileInputStream(file);
    in.read(fileData);
    in.close();
    return fileData;
}

public static boolean convertByteArrayToBitTextFile(String path, byte[] bytes)
{
    try
    {
        PrintWriter out = new PrintWriter(path);
        for (int i = 0; i < bytes.length; i++)
        {
            out.print(String.format("%8s", Integer.toBinaryString(bytes[i] & 0xFF)).replace(' ', '0'));
        }
        out.close();
        return true;
    }
    catch (FileNotFoundException e)
    {
        return false;
    }
}

public static boolean convertByteArrayToByteTextFile(String path, byte[] bytes)
{
    try
    {
        PrintWriter out = new PrintWriter(path);
        for (int i = 0; i < bytes.length; i++)
        {
            out.print(bytes[i]);
        }
        out.close();
        return true;
    }
    catch (FileNotFoundException e)
    {
        return false;
    }
}

public static boolean convertByteArrayToRegularFile(String path, byte[] bytes)
{
    try
    {
        PrintWriter out = new PrintWriter(path);
        for (int i = 0; i < bytes.length; i++)
        {
            out.write(bytes[i]);
        }
        out.close();
        return true;
    }
    catch (FileNotFoundException e)
    {
        return false;
    }
}

public static boolean convertBitFileToByteTextFile(String path)
{
    try
    {
        byte[] b = convertFileToByteArray(path);
        convertByteArrayToByteTextFile(path, b);
        return true;
    }
    catch (IOException e)
    {
        return false;
    }
}
I do this to try methods of compression on a very fundamental level, so please let's not discuss why I use the human-readable form.
Now this works quite well so far, but I have two problems.
1) It takes forever (>20 minutes for 230 KB into binary text). Is this just a by-product of the relatively complicated conversion, or are there other methods to do this faster?
2) And the main problem: I have no idea how to convert the files back to what they used to be. Renaming from .txt to .exe does not work (not too surprising, as the resulting file is two times larger than the original).
Is this still possible, or did I lose information about what the file is supposed to represent by converting it to a human-readable text file?
If so, do you know any alternative that prevents this?
Any help is appreciated.
The thing that'll cost you the most time is the construction of an ever-increasing String. A better approach would be to write the data as soon as you have it.
The other problem is very easy. You know that every sequence of eight characters ('0' or '1') was made from one byte. Hence, you know the value of each character in an 8-character block:
01001010
       ^----- 0*1
      ^------ 1*2
     ^------- 0*4
    ^-------- 1*8
   ^--------- 0*16
  ^---------- 0*32
 ^----------- 1*64
^------------ 0*128
-----
64+8+2 = 74
You only need to add the values where a '1' is present.
You can do it in Java like this, without even knowing the individual bit values:
String sbyte = "01001010";
int bytevalue = 0;
for (int i = 0; i < 8; i++) {
    bytevalue *= 2; // shifts the bit pattern to the left 1 position
    if (sbyte.charAt(i) == '1') bytevalue += 1;
}
Use StringBuilder to avoid generating enormous numbers of unused String instances.
Better yet, write directly to the PrintWriter instead of building it in-memory at all.
Loop through every 8-character subsequence and parse it back to a byte. Note that Byte.parseByte(text, 2) parses signed values and overflows for patterns starting with '1'; Integer.parseInt(text, 2) followed by a cast to byte handles the full 0-255 range.
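A sketch of that reverse conversion (it assumes the text is the output of convertByteArrayToBitString above, i.e. its length is a multiple of 8):

String bits = ...; // the content of the generated text file
byte[] original = new byte[bits.length() / 8];
for (int i = 0; i < original.length; i++) {
    String chunk = bits.substring(i * 8, i * 8 + 8);
    // parse as an unsigned value, then narrow to a byte
    original[i] = (byte) Integer.parseInt(chunk, 2);
}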

compression on java nio direct buffers

The gzip input/output streams don't operate on Java direct buffers.
Is there any compression algorithm implementation out there that operates directly on direct buffers?
This way there would be no overhead of copying a direct buffer to a java byte array for compression.
I don't mean to detract from your question, but is this really a good optimization point in your program? Have you verified with a profiler that you indeed have a problem? Your question as stated implies you have not done any research, but are merely guessing that you will have a performance or memory problem by allocating a byte[]. Since all the answers in this thread are likely to be hacks of some sort, you should really verify that you actually have a problem before fixing it.
Back to the question, if you're wanting to compress the data "in place" in on a ByteBuffer, the answer is no, there is no capability to do that built into Java.
If you allocated your buffer like the following:
byte[] bytes = getMyData();
ByteBuffer buf = ByteBuffer.wrap(bytes);
You can filter your byte[] through a ByteBufferInputStream as the previous answer suggested.
Wow old question, but stumbled upon this today.
Probably some libs like zip4j can handle this, but you can get the job done with no external dependencies since Java 11:
If you are interested only in compressing data, you can just do:
void compress(ByteBuffer src, ByteBuffer dst) {
    var def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    try {
        def.setInput(src);
        def.finish();
        def.deflate(dst, Deflater.SYNC_FLUSH);
        if (src.hasRemaining()) {
            throw new RuntimeException("dst too small");
        }
    } finally {
        def.end();
    }
}
Both src and dst will change positions, so you might have to flip them after compress returns.
In order to recover compressed data:
void decompress(ByteBuffer src, ByteBuffer dst) throws DataFormatException {
    var inf = new Inflater(true);
    try {
        inf.setInput(src);
        inf.inflate(dst);
        if (src.hasRemaining()) {
            throw new RuntimeException("dst too small");
        }
    } finally {
        inf.end();
    }
}
Note that both methods expect (de-)compression to happen in a single pass; however, we could use slightly modified versions in order to stream it:
void compress(ByteBuffer src, ByteBuffer dst, Consumer<ByteBuffer> sink) {
    var def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    try {
        def.setInput(src);
        def.finish();
        int cmp;
        do {
            cmp = def.deflate(dst, Deflater.SYNC_FLUSH);
            if (cmp > 0) {
                sink.accept(dst.flip());
                dst.clear();
            }
        } while (cmp > 0);
    } finally {
        def.end();
    }
}

void decompress(ByteBuffer src, ByteBuffer dst, Consumer<ByteBuffer> sink) throws DataFormatException {
    var inf = new Inflater(true);
    try {
        inf.setInput(src);
        int dec;
        do {
            dec = inf.inflate(dst);
            if (dec > 0) {
                sink.accept(dst.flip());
                dst.clear();
            }
        } while (dec > 0);
    } finally {
        inf.end();
    }
}
Example:
void compressLargeFile() throws IOException {
    var in = FileChannel.open(Paths.get("large"));
    var temp = ByteBuffer.allocateDirect(1024 * 1024);
    var out = FileChannel.open(Paths.get("large.zip"),
            StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    var start = 0L;
    var rem = in.size();
    while (rem > 0) {
        var mapped = Math.min(16L * 1024 * 1024, rem);
        var src = in.map(MapMode.READ_ONLY, start, mapped);
        compress(src, temp, (bb) -> {
            try {
                out.write(bb);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        start += mapped;
        rem -= mapped;
    }
}
If you want fully gzip-compliant data:
void zip(ByteBuffer src, ByteBuffer dst) {
    var u = src.remaining();
    var crc = new CRC32();
    crc.update(src.duplicate());
    writeHeader(dst);
    compress(src, dst);
    writeTrailer(crc, u, dst);
}
Where:
void writeHeader(ByteBuffer dst) {
    var header = new byte[] { (byte) 0x8b1f, (byte) (0x8b1f >> 8), Deflater.DEFLATED, 0, 0, 0, 0, 0, 0, 0 };
    dst.put(header);
}
And:
void writeTrailer(CRC32 crc, int uncompressed, ByteBuffer dst) {
    if (dst.order() == ByteOrder.LITTLE_ENDIAN) {
        dst.putInt((int) crc.getValue());
        dst.putInt(uncompressed);
    } else {
        dst.putInt(Integer.reverseBytes((int) crc.getValue()));
        dst.putInt(Integer.reverseBytes(uncompressed));
    }
}
So, gzip imposes 10+8 bytes of overhead.
In order to unzip a direct buffer into another, you can wrap the src buffer into an InputStream:
class ByteBufferInputStream extends InputStream {

    final ByteBuffer bb;

    public ByteBufferInputStream(ByteBuffer bb) {
        this.bb = bb;
    }

    @Override
    public int available() throws IOException {
        return bb.remaining();
    }

    @Override
    public int read() throws IOException {
        return bb.hasRemaining() ? bb.get() & 0xFF : -1;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        var rem = bb.remaining();
        if (rem == 0) {
            return -1;
        }
        len = Math.min(rem, len);
        bb.get(b, off, len);
        return len;
    }

    @Override
    public long skip(long n) throws IOException {
        var rem = bb.remaining();
        if (n > rem) {
            bb.position(bb.limit());
            n = rem;
        } else {
            bb.position((int) (bb.position() + n));
        }
        return n;
    }
}
and use:
void unzip(ByteBuffer src, ByteBuffer dst) throws IOException {
    try (var is = new ByteBufferInputStream(src); var gis = new GZIPInputStream(is)) {
        var tmp = new byte[1024];
        var r = gis.read(tmp);
        if (r > 0) {
            do {
                dst.put(tmp, 0, r);
                r = gis.read(tmp);
            } while (r > 0);
        }
    }
}
Of course, this is not cool since we are copying data to a temporary array, but nevertheless, it is sort of a roundtrip check that proves that nio-based zip encoding writes valid data that can be read from standard io-based consumers.
So, if we just ignore CRC consistency checks, we can drop the header/trailer:
void unzipNoCheck(ByteBuffer src, ByteBuffer dst) throws DataFormatException {
    src.position(src.position() + 10).limit(src.limit() - 8);
    decompress(src, dst);
}
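A hedged round-trip sketch tying zip and unzipNoCheck together (the buffer sizes are arbitrary for a small payload, and the DataFormatException simply propagates to the caller):

var src = ByteBuffer.wrap("stackexchange".getBytes(StandardCharsets.UTF_8));
var zipped = ByteBuffer.allocateDirect(1024);
zip(src, zipped);
zipped.flip();

var restored = ByteBuffer.allocateDirect(1024);
unzipNoCheck(zipped, restored); // skips the CRC check, as noted above
restored.flip(); // restored now holds the original "stackexchange" bytes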
If you are using ByteBuffers you can use some simple Input/OutputStream wrappers such as these:
public class ByteBufferInputStream extends InputStream {

    private ByteBuffer buffer = null;

    public ByteBufferInputStream(ByteBuffer b) {
        this.buffer = b;
    }

    @Override
    public int read() throws IOException {
        return (buffer.get() & 0xFF);
    }
}

public class ByteBufferOutputStream extends OutputStream {

    private ByteBuffer buffer = null;

    public ByteBufferOutputStream(ByteBuffer b) {
        this.buffer = b;
    }

    @Override
    public void write(int b) throws IOException {
        buffer.put((byte) (b & 0xFF));
    }
}
Test:
ByteBuffer buffer = ByteBuffer.allocate(1000);
ByteBufferOutputStream bufferOutput = new ByteBufferOutputStream(buffer);
GZIPOutputStream output = new GZIPOutputStream(bufferOutput);
output.write("stackexchange".getBytes());
output.close();
buffer.position(0);

byte[] result = new byte[1000];
ByteBufferInputStream bufferInput = new ByteBufferInputStream(buffer);
GZIPInputStream input = new GZIPInputStream(bufferInput);
input.read(result);
System.out.println(new String(result));

CharBuffer vs. char[]

Is there any reason to prefer a CharBuffer to a char[] in the following:
CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE);
while (in.read(buf) >= 0) {
    out.append(buf.flip());
    buf.clear();
}
vs.
char[] buf = new char[DEFAULT_BUFFER_SIZE];
int n;
while ((n = in.read(buf)) >= 0) {
    out.write(buf, 0, n);
}
(where in is a Reader and out is a Writer)?
No, there's really no reason to prefer a CharBuffer in this case.
In general, though, CharBuffer (and ByteBuffer) can really simplify APIs and encourage correct processing. If you were designing a public API, it's definitely worth considering a buffer-oriented API.
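For reference, the JDK itself already has such an API: java.lang.Readable, which every Reader implements, fills a caller-supplied CharBuffer:

public interface Readable {
    // Attempts to read characters into the given buffer;
    // returns the number of chars added, or -1 at end of stream.
    int read(java.nio.CharBuffer cb) throws java.io.IOException;
}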
I wanted to mini-benchmark this comparison.
Below is the class I have written.
The thing is I can't believe that the CharBuffer performed so badly. What have I got wrong?
EDIT: Since the 11th comment below, I have edited the code and the output times; performance is better all round, but there is still a significant difference in times. I also tried the out2.append((CharBuffer) buff.flip()) option mentioned in the comments, but it was much slower than the write option used in the code below.
Results: (time in ms)
char[] : 3411
CharBuffer: 5653
public class CharBufferScratchBox
{
    public static void main(String[] args) throws Exception
    {
        // Some Setup Stuff
        String smallString =
                "1111111111222222222233333333334444444444555555555566666666667777777777888888888899999999990000000000";

        StringBuilder stringBuilder = new StringBuilder();
        for (int i = 0; i < 1000; i++)
        {
            stringBuilder.append(smallString);
        }
        String string = stringBuilder.toString();
        int DEFAULT_BUFFER_SIZE = 1000;
        int ITERATIONS = 10000;

        // char[]
        StringReader in1 = null;
        StringWriter out1 = null;
        Date start = new Date();
        for (int i = 0; i < ITERATIONS; i++)
        {
            in1 = new StringReader(string);
            out1 = new StringWriter(string.length());

            char[] buf = new char[DEFAULT_BUFFER_SIZE];
            int n;
            while ((n = in1.read(buf)) >= 0)
            {
                out1.write(buf, 0, n);
            }
        }
        Date done = new Date();
        System.out.println("char[] : " + (done.getTime() - start.getTime()));

        // CharBuffer
        StringReader in2 = null;
        StringWriter out2 = null;
        start = new Date();
        CharBuffer buff = CharBuffer.allocate(DEFAULT_BUFFER_SIZE);
        for (int i = 0; i < ITERATIONS; i++)
        {
            in2 = new StringReader(string);
            out2 = new StringWriter(string.length());
            int n;
            while ((n = in2.read(buff)) >= 0)
            {
                out2.write(buff.array(), 0, n);
                buff.clear();
            }
        }
        done = new Date();
        System.out.println("CharBuffer: " + (done.getTime() - start.getTime()));
    }
}
If this is the only thing you're doing with the buffer, then the array is probably the better choice in this instance.
CharBuffer has lots of extra chrome on it, but none of it is relevant in this case - and will only slow things down a fraction.
You can always refactor later if you need to make things more complicated.
The difference, in practice, is actually <10%, not 30% as others are reporting.
To read and write a 5MB file 24 times, my numbers taken using a Profiler. They were on average:
char[] = 4139 ms
CharBuffer = 4466 ms
ByteBuffer = 938 (direct) ms
Individual tests a couple times favored CharBuffer.
I also tried replacing the File-based IO with In-Memory IO and the performance was similar. If you are trying to transfer from one native stream to another, then you are better off using a "direct" ByteBuffer.
With less than 10% performance difference in practice, I would favor the CharBuffer. Its syntax is clearer, there are fewer extraneous variables, and you can do more direct manipulation on it (i.e. anything that asks for a CharSequence).
Benchmark is below... it is slightly wrong as the BufferedReader is allocated inside the test-method rather than outside... however, the example below allows you to isolate the IO time and eliminate factors like a string or byte stream resizing its internal memory buffer, etc.
public abstract class Main {

    public static void main(String[] args) throws Exception {
        File f = getBytes(5000000);
        System.out.println(f.getAbsolutePath());
        try {
            System.gc();
            List<Main> impls = new java.util.ArrayList<Main>();
            impls.add(new CharArrayImpl());
            //impls.add(new CharArrayNoBuffImpl());
            impls.add(new CharBufferImpl());
            //impls.add(new CharBufferNoBuffImpl());
            impls.add(new ByteBufferDirectImpl());
            //impls.add(new CharBufferDirectImpl());
            for (int i = 0; i < 25; i++) {
                for (Main impl : impls) {
                    test(f, impl);
                }
                System.out.println("-----");
                if (i == 0)
                    continue; //reset profiler
            }
            System.gc();
            System.out.println("Finished");
            return;
        } finally {
            f.delete();
        }
    }

    static int BUFFER_SIZE = 1000;

    static File getBytes(int size) throws IOException {
        File f = File.createTempFile("input", ".txt");
        FileWriter writer = new FileWriter(f);
        Random r = new Random();
        for (int i = 0; i < size; i++) {
            writer.write(Integer.toString(5));
        }
        writer.close();
        return f;
    }

    static void test(File f, Main impl) throws IOException {
        InputStream in = new FileInputStream(f);
        File fout = File.createTempFile("output", ".txt");
        try {
            OutputStream out = new FileOutputStream(fout, false);
            try {
                long start = System.currentTimeMillis();
                impl.runTest(in, out);
                long end = System.currentTimeMillis();
                System.out.println(impl.getClass().getName() + " = " + (end - start) + "ms");
            } finally {
                out.close();
            }
        } finally {
            fout.delete();
            in.close();
        }
    }

    public abstract void runTest(InputStream ins, OutputStream outs) throws IOException;

    public static class CharArrayImpl extends Main {
        char[] buff = new char[BUFFER_SIZE];

        public void runTest(InputStream ins, OutputStream outs) throws IOException {
            Reader in = new BufferedReader(new InputStreamReader(ins));
            Writer out = new BufferedWriter(new OutputStreamWriter(outs));
            int n;
            while ((n = in.read(buff)) >= 0) {
                out.write(buff, 0, n);
            }
        }
    }

    public static class CharBufferImpl extends Main {
        CharBuffer buff = CharBuffer.allocate(BUFFER_SIZE);

        public void runTest(InputStream ins, OutputStream outs) throws IOException {
            Reader in = new BufferedReader(new InputStreamReader(ins));
            Writer out = new BufferedWriter(new OutputStreamWriter(outs));
            int n;
            while ((n = in.read(buff)) >= 0) {
                buff.flip();
                out.append(buff);
                buff.clear();
            }
        }
    }

    public static class ByteBufferDirectImpl extends Main {
        ByteBuffer buff = ByteBuffer.allocateDirect(BUFFER_SIZE * 2);

        public void runTest(InputStream ins, OutputStream outs) throws IOException {
            ReadableByteChannel in = Channels.newChannel(ins);
            WritableByteChannel out = Channels.newChannel(outs);
            int n;
            while ((n = in.read(buff)) >= 0) {
                buff.flip();
                out.write(buff);
                buff.clear();
            }
        }
    }
}
I think that CharBuffer and ByteBuffer (as well as any other XxxBuffer) were meant for reusability, so you can buf.clear() them instead of going through reallocation every time.
If you don't reuse them, you're not using their full potential and it will add extra overhead. However, if you're planning on scaling this function, it might be a good idea to keep them there.
You should avoid CharBuffer in recent Java versions; there is a bug in subSequence(). You cannot get a subsequence from the second half of the buffer, since the implementation confuses capacity and remaining. I observed the bug in Java 6-0-11 and 6-0-12.
The CharBuffer version is slightly less complicated (one less variable), encapsulates buffer size handling and makes use of a standard API. Generally I would prefer this.
However, there is still one good reason to prefer the array version, in some cases at least: CharBuffer was only introduced in Java 1.4, so if you are deploying to an earlier version you can't use CharBuffer (unless you roll your own or use a backport).
P.S. If you use a backport, remember to remove it once you catch up to the version containing the "real" version of the backported code.
