I'm trying to Gzip a file for output in Play Framework 2.2.1, with Java.
This is not an asset and not a static file; for instance, it could be a user avatar that the user uploads, such as a PNG image.
I've searched for this and found only how to gzip strings, plus the fact that the Play Framework gzips public assets automatically, which this is not.
Some code I've tried:
public static Result userAvatar(long userId) throws IOException {
    UserAvatar avatar = UserAvatar.get(userId);
    InputStream avatarStream;
    Long version;
    // Use the default avatar.
    if (avatar == null) {
        avatarStream = Play.current().resourceAsStream("public/images/noavatar.png").get();
        version = 0L;
    } else {
        avatarStream = new ByteArrayInputStream(avatar.avatar);
        version = avatar.version;
    }
    byte[] byteArray = IOUtils.toByteArray(avatarStream);
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream(byteArray.length);
    OutputStream gzip = new GZIPOutputStream(outputStream);
    int len;
    while ((len = avatarStream.read(byteArray)) > 0) {
        gzip.write(byteArray, 0, len);
    }
    avatarStream.close();
    gzip.close();
    // The client has the correct image cached if the ETag matches.
    String eTag = request().getHeader("If-None-Match");
    if (eTag != null && eTag.equals(version.toString())) {
        return status(NOT_MODIFIED, "Not modified");
    }
    response().setContentType("image/png");
    response().setHeader(ETAG, version.toString());
    return Results.ok(outputStream.toByteArray());
}
This did not work, and Google only returns answers about gzipping strings. Can anyone help?
EDIT: "Does not work" in this case means the result was 0 bytes.
len can never be greater than 0 here.
In this line:
byte[] byteArray = IOUtils.toByteArray(avatarStream);
you read all of avatarStream - it is now exhausted, with nothing left to read.
And in this line:
while ((len = avatarStream.read(byteArray)) > 0) {
you try to read from the already-exhausted stream, so the loop body never runs and nothing is ever written to the GZIPOutputStream.
Replace
int len;
while ((len = avatarStream.read(byteArray)) > 0) {
gzip.write(byteArray, 0, len);
}
by just
gzip.write(byteArray);
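For reference, a minimal corrected sketch of the compression part of the method above (same UserAvatar/IOUtils helpers and Play 2.2 Java API; one extra assumption is that the browser also needs a Content-Encoding: gzip header to decode the body, which the original code never sets - CONTENT_ENCODING is a HeaderNames constant, or the literal "Content-Encoding" works too):
byte[] byteArray = IOUtils.toByteArray(avatarStream);
avatarStream.close();
ByteArrayOutputStream outputStream = new ByteArrayOutputStream(byteArray.length);
OutputStream gzip = new GZIPOutputStream(outputStream);
gzip.write(byteArray);   // the bytes were already read above, so write them directly
gzip.close();            // closing also flushes the gzip trailer
response().setContentType("image/png");
response().setHeader(ETAG, version.toString());
response().setHeader(CONTENT_ENCODING, "gzip"); // assumption: tell the client the body is gzip-compressed
return Results.ok(outputStream.toByteArray());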
Currently in my iOS app I am using zlib to deflate data and I would like to implement the same logic in Android, so that the deflated data processed in these two platforms are compatible and can be transferred.
In the code below, inputString is an arbitrary string, for example:
Developers trust Stack Overflow to help solve coding problems and use Stack Overflow Careers to find job opportunities. We’re committed to making the internet a better place, and our products aim to enrich the lives of developers as they grow and mature in their careers.
In iOS, the following code segment is used:
NSData *rawData = [inputString dataUsingEncoding:NSUTF8StringEncoding];
NSInputStream * src = [NSInputStream inputStreamWithData:rawData];
[src open];
NSOutputStream * dest = [NSOutputStream outputStreamToMemory];
[dest open];
int res = [self deflateDataStream:src toOutputStream:dest level:Z_DEFAULT_COMPRESSION];
[dest close];
[src close];
if (res != Z_OK) return nil;
NSData *ret = [dest propertyForKey:NSStreamDataWrittenToMemoryStreamKey];
+ (int) deflateDataStream:(NSInputStream *)source toOutputStream:(NSOutputStream *)dest level:(int)level {
    int ret, flush;
    unsigned have;
    z_stream strm;
    unsigned char inBuf[CHUNK];
    unsigned char outBuf[CHUNK];
    strm.zalloc = Z_NULL;
    strm.zfree = Z_NULL;
    strm.opaque = Z_NULL;
    ret = deflateInit2(&strm, level, Z_DEFLATED, (16+MAX_WBITS), MAX_MEM_LEVEL, Z_DEFAULT_STRATEGY);
    if (ret != Z_OK) return ret;
    do {
        NSInteger res = [source read:inBuf maxLength:CHUNK];
        if (res < 0) {
            NSLog(@"!!! Error reading stream: %ld %@", (long)source.streamStatus, source.streamError);
            (void)deflateEnd(&strm);
            return Z_ERRNO;
        }
        flush = [source hasBytesAvailable] ? Z_NO_FLUSH : Z_FINISH;
        strm.avail_in = (uInt)res;
        strm.next_in = inBuf;
        do {
            strm.avail_out = CHUNK;
            strm.next_out = outBuf;
            ret = deflate(&strm, flush);
            assert(ret != Z_STREAM_ERROR);
            have = CHUNK - strm.avail_out;
            res = [dest write:outBuf maxLength:have];
            if (res != have || res < 0) {
                (void)deflateEnd(&strm);
                return Z_ERRNO;
            }
        } while (strm.avail_out == 0);
        assert(strm.avail_in == 0);
    } while (flush != Z_FINISH);
    assert(ret == Z_STREAM_END);
    (void)deflateEnd(&strm);
    return Z_OK;
}
Afterwards the compressed data will be processed further (encrypted, etc.) and saved.
And now for the Android version, which I am currently working on: according to the documentation page here, the Deflater class performs deflation with the same zlib logic, so I tried the following code segment:
byte[] dataToBeDeflated = inputString.getBytes(Charset.forName("UTF-8"));
Deflater deflater = null;
ByteArrayOutputStream outputStream = null;
byte[] deflatedData = null;
try {
    deflater = new Deflater();
    deflater.setStrategy(Deflater.DEFAULT_STRATEGY);
    deflater.setLevel(Deflater.DEFAULT_COMPRESSION);
    deflater.setInput(dataToBeDeflated);
    outputStream = new ByteArrayOutputStream(dataToBeDeflated.length);
    deflater.finish();
    byte[] buffer = new byte[1024];
    while (!deflater.finished()) {
        int count = deflater.deflate(buffer);
        outputStream.write(buffer, 0, count);
    }
    deflatedData = outputStream.toByteArray();
} catch (Exception e) {
    Log.e(TAG, "Deflate exception", e);
} finally {
    if (outputStream != null) {
        try {
            outputStream.close();
        } catch (IOException e) {
            Log.e(TAG, "Failed to close the output stream", e);
        }
    }
}
However, the above implementation on Android doesn't yield the same result as the iOS one, so the output can't be used by my existing iOS app.
Using the test string I quoted, iOS yields an NSData of 197 bytes from original string data of 273 bytes. The original input on Android is also 273 bytes, but the above implementation gives a result of 185 bytes.
Changing the logic on the iOS side is not viable at the moment, as it would involve additional processes like submitting the app for review.
I assume the underlying algorithm on both platforms is the same? If so, why are the results different? Did I do something wrong, and how can I correct it to obtain the same result on Android?
Thanks!
The 16+MAX_WBITS in deflateInit2() is requesting the gzip format, whereas the Deflater class is requesting the zlib format. You can get rid of the 16+ in the iOS code to request the zlib format.
Note that the output may still be different, since there is no requirement that the compressed data from different compressors be the same for the same input. All that matters is that you get the same thing from the decompressor that you gave to whichever compressor.
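Alternatively, if the iOS side truly cannot change (as the question notes), the Android side could emit the gzip format itself; a minimal sketch, under the assumption that wrapping java.util.zip.GZIPOutputStream around the output is acceptable in the app:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public final class GzipSketch {
    // Compress a string into the gzip format (deflate data framed with a gzip header and trailer),
    // which is what the iOS code's (16+MAX_WBITS) windowBits setting produces.
    static byte[] gzip(String inputString) throws IOException {
        byte[] input = inputString.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzos = new GZIPOutputStream(bos)) {
            gzos.write(input);
        }
        return bos.toByteArray();
    }
}
Even then, byte-for-byte identity with the iOS output is not guaranteed; what matters is that either side can decompress the other's output.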
There are two ways to initialize deflate in zlib on iOS:
1. deflateInit(strm, level)
2. deflateInit2(strm, level, method, windowBits, memLevel, strategy)
The first one is compatible with Java's Deflater. Also make sure you are using the same compression level on both iOS and Java (Android).
Compression Levels:
-1: default
0: NO_COMPRESSION
1: BEST_SPEED //generally used
......
9: BEST_COMPRESSION
Well, you are using different levels: on iOS you are using MAX_MEM_LEVEL (9), while on Android you are using DEFAULT_COMPRESSION (-1). Try using BEST_COMPRESSION (9) on Android.
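For reference, a sketch of setting an explicit level on the Android Deflater (which level to pick is an assumption; it should match whatever level the iOS code effectively uses):
// Hypothetical: choose the compression level explicitly instead of relying on the default.
Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION); // or Deflater.DEFAULT_COMPRESSION
deflater.setStrategy(Deflater.DEFAULT_STRATEGY);
deflater.setInput(dataToBeDeflated);
deflater.finish();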
For a project I am working on I have a few MP4 video files sitting on a Server.
A Java-based web app I am writing needs to play these files in the browser. Due to a security restriction, only the server hosting the web app has access to the video server; the browsers using the web app do not, so they cannot fetch the files directly for HTML5 playback.
The solution I am working on is having a servlet (sitting on the web app server) access the video file, write the video as its output, and act as the source for the HTML player.
However I seem to be unable to successfully output the video file as servlet output in a streaming fashion.
I've done a large amount of research. The closest thing I came to a solution is this:
private static final int BUFFER_LENGTH = 1024 * 16;
private static final long EXPIRE_TIME = 1000 * 60 * 60 * 24;
private static final Pattern RANGE_PATTERN = Pattern.compile("bytes=(?<start>\\d*)-(?<end>\\d*)");

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    URL video = new URL("http://localhost/App/Videos/FileSD.mp4");
    URLConnection yc = video.openConnection();
    yc.setDoOutput(true);
    int length = yc.getContentLength();
    int start = 0;
    int end = length - 1;
    int contentLength = end - start + 1;

    response.reset();
    response.setBufferSize(BUFFER_LENGTH);
    response.setHeader("Accept-Ranges", "bytes");
    response.setDateHeader("Last-Modified", yc.getLastModified());
    response.setDateHeader("Expires", System.currentTimeMillis() + EXPIRE_TIME);
    response.setContentType(yc.getContentType());
    response.setHeader("Content-Range", String.format("bytes %s-%s/%s", start, end, length));
    response.setHeader("Content-Length", String.format("%s", contentLength));
    response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);

    ReadableByteChannel input = Channels.newChannel(yc.getInputStream());
    int bytesRead;
    int bytesLeft = contentLength;
    ByteBuffer buffer = ByteBuffer.allocate(BUFFER_LENGTH);
    try (OutputStream output = response.getOutputStream()) {
        while ((bytesRead = input.read(buffer)) != -1 && bytesLeft > 0) {
            buffer.clear();
            output.write(buffer.array(), 0, bytesLeft < bytesRead ? bytesLeft : bytesRead);
            bytesLeft -= bytesRead;
            output.flush();
        }
        output.close();
        input.close();
    }
}
For the most part, this code works fine: it takes a URL as input and streams it as output. The problem is that it only works for HD video files; SD video files simply don't play. I was hoping someone has an idea why that is and how it can be fixed.
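As an aside, the RANGE_PATTERN declared at the top of that servlet is never applied to the incoming request; a hedged sketch of how the Range header could be parsed with that same pattern (the header may be absent, in which case the whole file is being requested):
// Hypothetical: parse the client's Range header with the RANGE_PATTERN declared above.
String rangeHeader = request.getHeader("Range");
int start = 0;
int end = length - 1; // 'length' as computed from the URLConnection
if (rangeHeader != null) {
    Matcher m = RANGE_PATTERN.matcher(rangeHeader); // java.util.regex.Matcher
    if (m.matches()) {
        if (!m.group("start").isEmpty()) start = Integer.parseInt(m.group("start"));
        if (!m.group("end").isEmpty()) end = Integer.parseInt(m.group("end"));
    }
}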
I have uploaded an .mp4 video file (18 MB) into GridFS and am trying to read it from Java code. Here are the points where I am unable to move further:
1) I can retrieve the whole video into a byte array and play it.
2) For the first N chunks (starting from the first chunk), I can also play the video by querying fs.chunks directly, as below, and handing the bytes to the ServletOutputStream.
DBCollection a = db.getCollection("fs.chunks");
DBCursor cur1 = a.find().limit(10);
System.out.println(cur1);
byte[] destination2 = new byte[2621440];
int length2 = 0;
while (cur1.hasNext()) {
    byte[] b2 = (byte[]) cur1.next().get("data");
    System.arraycopy(b2, 0, destination2, length2, b2.length);
    length2 += b2.length;
    System.out.println("##########");
    System.out.println(destination2.length);
}
3) This is where I am stuck: when reading from the middle of the chunks (i.e. after skipping the first n chunks in the find() operation), the video cannot be played by Windows Media Player - it complains about a missing codec, and so on. Am I going about this the right way?
DBCollection a = db.getCollection("fs.chunks");
DBCursor cur1 = a.find(new BasicDBObject("n", new BasicDBObject("$gt", 9))).limit(10);
System.out.println(cur1);
byte[] destination2 = new byte[2621440];
int length2 = 0;
while (cur1.hasNext()) {
    byte[] b2 = (byte[]) cur1.next().get("data");
    System.arraycopy(b2, 0, destination2, length2, b2.length);
    length2 += b2.length;
    System.out.println("##########");
    System.out.println(destination2.length);
}
...........
public void showVideos(Model model, HttpServletResponse response) throws IOException {
    ............
    response.setHeader("Content-Type", "video/quicktime");
    response.setHeader("Content-Disposition", "inline; filename=\"" + filename + "\""); // byte[] bytearray = destination2
    //response.s
    ServletOutputStream out = response.getOutputStream();
    System.out.println("hello");
    int n = 0;
    //while(is.read(bytes, 0, 4096) != -1)
    {
        System.out.println(n++);
        out.write(bytearray);
    }
Please suggest how to retrieve part of the video file from GridFS and play it.
I'd use the GridFS classes for this purpose. Pseudo-code below; myFS points to the bucket and findOne looks up the file by its id.
GridFS myFS = null;
if (bucket.isPresent()) {
    myFS = new GridFS(m.getDb(), bucket.get());
} else {
    myFS = new GridFS(m.getDb());
}
return Optional.fromNullable(myFS.findOne(id));
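Building on that, a hedged sketch of streaming the found file straight to the servlet response with the legacy Java driver (GridFSDBFile reassembles the chunks in order, so there is no need to query fs.chunks by hand):
// Sketch, assuming the same myFS, id and response as above.
GridFSDBFile file = myFS.findOne(id);
if (file != null) {
    response.setContentType(file.getContentType());
    response.setContentLength((int) file.getLength());
    file.writeTo(response.getOutputStream()); // copies every chunk, in order, to the response
}
Note that serving only a slice of the middle chunks cannot play on its own: an MP4 player also needs the file's header boxes, so partial playback is normally done with HTTP range requests over the complete file rather than by skipping chunks.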
I have some trouble extracting raw AMR audio frames from a .3gp file. I followed this link: "http://android.amberfog.com/?p=181", but instead of an "mdat" box type I get "moov". I read somewhere that the locations of the "moov" and "mdat" boxes differ from device to device. Does anyone know how to correctly skip the .3gp headers and extract the raw AMR data? Below is a code snippet:
public AMRFile convert3GPDataToAmr(String rawAmrFilePath) throws IOException {
    if (_3gpFile == null) {
        return null;
    }
    System.out.println("3GP file length: " + _3gpFile.getRawFile().length());
    //FileInputStream is = new FileInputStream(_3gpFile.getRawFile());
    ByteArrayInputStream bis = new ByteArrayInputStream(FileUtils.readFileToByteArray(_3gpFile.getRawFile()));
    // read FileTypeHeader
    System.out.println("Available1: " + bis.available());
    FileTypeBox ftypHeader = new FileTypeBox(bis);
    System.out.println("Available2: " + bis.available());
    String header = ftypHeader.getHeaderAsString();
    if (!FileTypeBox.HEADER_TYPE.equalsIgnoreCase(header)) {
        throw new IOException("File is not 3GP. ftyp header missing.");
    }
    // You can check if it is correct here
    // read MediaDataHeader
    MediaDataBox mdatHeader = new MediaDataBox(bis);
    System.out.println("Available3: " + bis.available());
    header = mdatHeader.getHeaderAsString();
    if (!MediaDataBox.HEADER_TYPE.equalsIgnoreCase(header)) {
        // here is THE PROBLEM!!!!! - header is "moov" instead of "mdat" !!!!!!!!!!!!!
        throw new IOException("File is not 3GP. mdat header missing.");
    }
    // You can check if it is correct here
    int rawAmrDataLength = mdatHeader.getDataLength();
    System.out.println("RAW Amr length: " + bis.available());
    int fullAmrDataLength = AMR_MAGIC_HEADER.length + rawAmrDataLength;
    byte[] amrData = new byte[fullAmrDataLength];
    System.arraycopy(AMR_MAGIC_HEADER, 0, amrData, 0, AMR_MAGIC_HEADER.length);
    bis.read(amrData, AMR_MAGIC_HEADER.length, rawAmrDataLength);
    bis.close();
    // create raw amr file
    File rawAmrFile = new File(rawAmrFilePath);
    FileOutputStream fos = new FileOutputStream(rawAmrFile);
    AMRFile amrFile = null;
    try {
        fos.write(amrData);
    } catch (Exception e) {
        Log.e(getClass().getName(), e.getMessage(), e);
    } finally {
        fos.close();
        amrFile = new AMRFile(rawAmrFile);
    }
    System.out.println("AMR file length: " + amrFile.getRawFile().length());
    return amrFile;
}
I used a hex viewer tool to look into my .3gp file and saw that the mdat box wasn't in the place where the algorithm looked for it, so I decided to read from the stream until I found the mdat box. Now the extraction works fine. I modified MediaDataBox a little bit:
public MediaDataBox(FileInputStream is) throws IOException {
    super(is);
    // check the mdat header - if it is not there, read on until mdat is found
    long last32Int = 0;
    long curr32Int = 0;
    long temp;
    while (is.available() >= 8) {
        temp = curr32Int = readUint32(is);
        // test like this to avoid low memory issues
        if ((HEADER_TYPE.charAt(0) == (byte)(temp >>> 24)) &&
            (HEADER_TYPE.charAt(1) == (byte)(temp >>> 16)) &&
            (HEADER_TYPE.charAt(2) == (byte)(temp >>> 8)) &&
            (HEADER_TYPE.charAt(3) == (byte)temp)) {
            size = last32Int;
            type = curr32Int;
            boxSize = 8;
            break;
        }
        last32Int = curr32Int;
    }
}
and the superclass:
public _3GPBox(FileInputStream is) throws IOException {
    size = readUint32(is);
    boxSize += 4;
    type = readUint32(is);
    boxSize += 4;
}
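For completeness, a different way to find mdat (a hedged sketch, not the code above) is to skip each top-level box by the size stored in its own header instead of scanning 4 bytes at a time; box sizes and types are 32-bit big-endian values per the ISO base media file format:
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: walk top-level boxes (ftyp, moov, mdat, ...) until mdat is reached.
// Assumes no 64-bit "largesize" boxes, which is typical for small recorder output.
static long seekToMdat(InputStream in) throws IOException {
    DataInputStream dis = new DataInputStream(in);
    while (true) {
        long size = dis.readInt() & 0xFFFFFFFFL;   // box size, including the 8-byte header
        byte[] type = new byte[4];
        dis.readFully(type);
        if (new String(type, "US-ASCII").equals("mdat")) {
            return size - 8;                       // payload length that follows
        }
        dis.skipBytes((int)(size - 8));            // jump over this box's payload
    }
}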
I'm trying to write a simple RTF document pretty much from scratch in Java, and I'm trying to embed JPEGs in the document. Here's an example of a JPEG (a 2x2-pixel JPEG consisting of three white pixels and a black pixel in the upper left, if you're curious) embedded in an RTF document (generated by WordPad, which converted the JPEG to WMF):
{\pict\wmetafile8\picw53\pich53\picwgoal30\pichgoal30
0100090000036e00000000004500000000000400000003010800050000000b0200000000050000
000c0202000200030000001e000400000007010400040000000701040045000000410b2000cc00
020002000000000002000200000000002800000002000000020000000100040000000000000000
000000000000000000000000000000000000000000ffffff00fefefe0000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000
0000001202af0801010000040000002701ffff030000000000
}
I've been reading the RTF specification, and it looks like you can specify that the image is a JPEG, but since WordPad always converts images to WMF, I can't see an example of an embedded JPEG. So I may also end up needing to transcode from JPEG to WMF or something....
But basically, I'm looking for how to generate the binary or hexadecimal (Spec, p.148: "These pictures can be in hexadecimal (the default) or binary format.") form of a JPEG given a file URL.
Thanks!
EDIT: I have the stream stuff working all right, I think, but still don't understand exactly how to encode it, because whatever I'm doing, it's not RTF-readable. E.g., the above picture instead comes out as:
ffd8ffe00104a464946011106006000ffdb0430211211222222223533333644357677767789b988a877adaabcccc79efdcebcccffdb04312223336336c878ccccccccccccccccccccccccccccccccccccccccccccccccccffc0011802023122021113111ffc401f001511111100000000123456789abffc40b5100213324355440017d123041151221314161351617227114328191a182342b1c11552d1f024336272829a161718191a25262728292a3435363738393a434445464748494a535455565758595a636465666768696a737475767778797a838485868788898a92939495969798999aa2a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9bac2c3c4c5c6c7c8c9cad2d3d4d5d6d7d8d9dae1e2e3e4e5e6e7e8e9eaf1f2f3f4f5f6f7f8f9faffc401f103111111111000000123456789abffc40b51102124434754401277012311452131612415176171132232818144291a1b1c19233352f0156272d1a162434e125f11718191a262728292a35363738393a434445464748494a535455565758595a636465666768696a737475767778797a82838485868788898a92939495969798999aa2a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9bac2c3c4c5c6c7c8c9cad2d3d4d5d6d7d8d9dae2e3e4e5e6e7e8e9eaf2f3f4f5f6f7f8f9faffda0c31021131103f0fdecf09f84f4af178574cd0b42d334fd1744d16d22bd3f4fb0b74b6b5bb78902450c512091c688aaaa8a0500014514507ffd9
This PHP library would do the trick, so I'm trying to port the relevant portion to Java. Here it is:
$imageData = file_get_contents($this->_file);
$size = filesize($this->_file);
$hexString = '';
for ($i = 0; $i < $size; $i++) {
    $hex = dechex(ord($imageData{$i}));
    if (strlen($hex) == 1) {
        $hex = '0' . $hex;
    }
    $hexString .= $hex;
}
return $hexString;
But I don't know what the Java analogue to dechex(ord($imageData{$i})) is. :( I got only as far as the Integer.toHexString() function, which takes care of the dechex part....
Thanks all. :)
Given a file URL for any file you can get the corresponding bytes by doing (exception handling omitted for brevity)...
int BUF_SIZE = 512;
URL fileURL = new URL("http://www.somewhere.com/someurl.jpg");
InputStream inputStream = fileURL.openStream();
byte[] smallBuffer = new byte[BUF_SIZE];
ByteArrayOutputStream largeBuffer = new ByteArrayOutputStream();
int numRead;
// Keep reading until the stream reports end-of-stream (-1); a short read does not mean we are done.
while ((numRead = inputStream.read(smallBuffer, 0, BUF_SIZE)) != -1) {
    largeBuffer.write(smallBuffer, 0, numRead); // only write the bytes actually read
}
byte[] bytes = largeBuffer.toByteArray();
I'm looking at your PHP snippet now and realizing that RTF is a bizarre specification! It looks like each byte of the image is encoded as 2 hex digits (which doubles the size of the image for no apparent reason). Then the entire thing is stored in raw ASCII encoding. So, you'll want to do...
StringBuilder hexStringBuilder = new StringBuilder(bytes.length * 2);
for (byte imageByte : bytes) {
    String hexByteString = Integer.toHexString(0x000000FF & (int) imageByte);
    if (hexByteString.length() == 1) {
        hexByteString = "0" + hexByteString; // pad single hex digits with a leading zero
    }
    hexStringBuilder.append(hexByteString);
}
String hexString = hexStringBuilder.toString();
byte[] hexBytes = hexString.getBytes("UTF-8"); // could also use US-ASCII
EDIT: Updated code sample to pad 0's on the hex bytes
EDIT: negative bytes were getting logically right shifted when converted to ints >_<
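As a side note, the same loop can be written more compactly (an alternative sketch, not part of the original answer) with String.format, which handles both the zero padding and the sign extension:
StringBuilder sb = new StringBuilder(bytes.length * 2);
for (byte b : bytes) {
    sb.append(String.format("%02x", b)); // %02x treats the byte as unsigned and pads to two digits
}
String hexString = sb.toString();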
https://joseluisbz.wordpress.com/2013/07/26/exploring-a-wmf-file-0x000900/
Maybe this will help you:
String HexRTFBytes = "Representation in text of the bytes from the RTF image";
String Destiny = "The path of the output file";
FileOutputStream wmf;
try {
    wmf = new FileOutputStream(Destiny);
    HexRTFBytes = HexRTFBytes.replaceAll("\n", ""); // erase newlines
    HexRTFBytes = HexRTFBytes.replaceAll(" ", "");  // erase blank spaces
    int NumBytesWrite = HexRTFBytes.length();
    int WMFBytes = NumBytesWrite / 2; // one byte is represented by 2 characters
    byte[] ByteWrite = new byte[WMFBytes];
    for (int i = 0; i < WMFBytes; i++) {
        String se = HexRTFBytes.substring(i * 2, i * 2 + 2);
        int Entero = Integer.parseInt(se, 16);
        ByteWrite[i] = (byte) Entero;
    }
    wmf.write(ByteWrite);
    wmf.close();
} catch (FileNotFoundException fnfe) {
    System.out.println(fnfe.toString());
} catch (NumberFormatException nfe) {
    System.out.println(nfe.toString());
} catch (EOFException eofe) {
    System.out.println(eofe.toString());
} catch (IOException ioe) {
    System.out.println(ioe.toString());
}
This code takes the hex representation as one string, and the result is stored in a file.
https://joseluisbz.wordpress.com/2011/06/22/script-de-clases-rtf-para-jsp-y-php/
Now if you want to obtain the representation of the image file, you can use this:
private void ByteStreamImageString(byte[] ByteStream) {
    this.Format = 0;
    this.High = 0;
    this.Wide = 0;
    this.HexImageString = "Error";
    if (ByteStream[0] == (byte)137 && ByteStream[1] == (byte)80 && ByteStream[2] == (byte)78) {
        this.Format = PNG; //PNG
        this.High = this.Byte2PosInt(ByteStream[22], ByteStream[23]);
        this.Wide = this.Byte2PosInt(ByteStream[18], ByteStream[19]);
    }
    if (ByteStream[0] == (byte)255 && ByteStream[1] == (byte)216
            && ByteStream[2] == (byte)255 && ByteStream[3] == (byte)224) {
        this.Format = JPG; //JPG
        int PosJPG = 2;
        while (PosJPG < ByteStream.length) {
            String M = String.format("%02X%02X", ByteStream[PosJPG+0], ByteStream[PosJPG+1]);
            if (M.equals("FFC0") || M.equals("FFC1") || M.equals("FFC2") || M.equals("FFC3")) {
                this.High = this.Byte2PosInt(ByteStream[PosJPG+5], ByteStream[PosJPG+6]);
                this.Wide = this.Byte2PosInt(ByteStream[PosJPG+7], ByteStream[PosJPG+8]);
            }
            if (M.equals("FFDA")) {
                break;
            }
            PosJPG = PosJPG + 2 + this.Byte2PosInt(ByteStream[PosJPG+2], ByteStream[PosJPG+3]);
        }
    }
    if (this.Format > 0) {
        this.HexImageString = "";
        int Salto = 0;
        for (int i = 0; i < ByteStream.length; i++) {
            Salto++;
            this.HexImageString += String.format("%02x", ByteStream[i]);
            if (Salto == 64) {
                this.HexImageString += "\n"; // to keep the hex readable
                Salto = 0;
            }
        }
    }
}
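The helper Byte2PosInt used above is not shown; a plausible implementation (an assumption on my part, based on how it is called with a high byte followed by a low byte) would be:
// Hypothetical helper: combine two bytes into a non-negative int, high byte first.
private int Byte2PosInt(byte high, byte low) {
    return ((high & 0xFF) << 8) | (low & 0xFF);
}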