Print PDF file through Bluetooth printer - Java

I am new to Android Bluetooth connectivity and to printing content on a printer, but I am now able to connect my Android device to a Bluetooth printer. For my project requirement I have to print the contents of a PDF file. Previously, in the same project, I was able to print the contents of a String variable.
With the requirement change there is now a PDF printing task. First I create the PDF with the iText library in Java, so the file creation part is done. To print the file, I generate a byte array from it.
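The creation part is roughly like this (a simplified iText 5 sketch; pdffile and the content are placeholders):
// Minimal iText 5 sketch of the PDF creation step (names are placeholders)
Document doc = new Document();
PdfWriter.getInstance(doc, new FileOutputStream(pdffile));
doc.open();
doc.add(new Paragraph("Content to print"));
doc.close();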
Here are the relevant lines of code:
FileInputStream fin = new FileInputStream(pdffile);
fileContent = new byte[(int) pdffile.length()]; // fileContent is the byte array for the PDF file
fin.read(fileContent); // read the file into the array
fin.close();
The next lines connect the Android device to the printer over Bluetooth:
mBTAdapter = BluetoothAdapter.getDefaultAdapter();
BluetoothDevice mdevice = mBTAdapter.getRemoteDevice(PRINTER_MAC_ID);
Method m = mdevice.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
mBTSocket = (BluetoothSocket) m.invoke(mdevice, 1);
mBTSocket.connect();
With that socket (mBTSocket) I get the OutputStream and then write the byte array to it:
os.write(fileContent);
os.flush();
mBTSocket.close();
When I try to print the PDF content through the Bluetooth printer, nothing happens: there is no exception and the application does not crash, but I get the warning getBluetoothService() called with no BluetoothManagerCallback, and no content prints on the paper. Can anyone tell me what I am doing wrong? I have searched this topic, but everything I found is about printing strings only, not files.
One link mentions an SDK called StarIO SDK for Android for printing files. Another problem: the new printing framework in the Android SDK was only introduced in Android 4.4 (API level 19), so how can this be done on earlier API levels? The printer used here is a Bluetooth thermal printer (small, 2-inch paper size). Thanks in advance.

mBTAdapter = BluetoothAdapter.getDefaultAdapter();
BluetoothDevice mdevice = mBTAdapter.getRemoteDevice(PRINTER_MAC_ID);
Method m = mdevice.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
mBTSocket = (BluetoothSocket) m.invoke(mdevice, 1);
mBTSocket.connect();
Thread.sleep(100);
After the socket connects, put the thread to sleep. This worked for me.
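For completeness, here is a minimal sketch of the whole send routine with the sleep added (PRINTER_MAC_ID and pdffile as in the question; the extra sleep before close is an untested precaution to let the printer drain its buffer). Also note that many 2-inch thermal printers only understand ESC/POS commands, so sending raw PDF bytes may print nothing even when the transfer succeeds:
BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
BluetoothDevice device = adapter.getRemoteDevice(PRINTER_MAC_ID);
Method m = device.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
BluetoothSocket socket = (BluetoothSocket) m.invoke(device, 1);
socket.connect();
Thread.sleep(100); // give the RFCOMM link time to settle before writing

byte[] fileContent = new byte[(int) pdffile.length()];
FileInputStream fin = new FileInputStream(pdffile);
fin.read(fileContent); // read the whole file into the buffer
fin.close();

OutputStream os = socket.getOutputStream();
os.write(fileContent);
os.flush();
Thread.sleep(500); // untested precaution: let the printer finish before closing
socket.close();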

Related

Android application crashes while defining byte buffer to read a file

I am trying to read a file into a byte buffer on Android. The application crashes whenever I initialize the byte buffer with a size equal to the size of the file. I have checked that the file size is well below Integer.MAX_VALUE. Because of the project setup I have to test the application on the device, so I don't have access to logcat.
File outputdir = new File(localcontext.getFilesDir(), "appData");
if (!outputdir.exists()) {
    if (outputdir.mkdir()) {
        Toast.makeText(localcontext, outputdir.getAbsolutePath(), Toast.LENGTH_SHORT).show();
    }
}
tempfile = new File(outputdir, "runningfile.mp4");
bytebuffer = new byte[(int) encryptedfile.length()];
OutputStream os = new FileOutputStream(tempfile.getAbsolutePath(), false);
// DataInputStream dataInputStream = new DataInputStream(fis);
// dataInputStream.readFully(bytebuffer);
// dataInputStream.close();
The application runs fine and displays the toast message when I comment out the byte buffer initialization line, but crashes otherwise.
I am unable to figure out what's wrong here. Please help. Thanks.
Wrap the allocation in a try/catch and display the error in a toast. Note that a failed allocation of this size typically throws OutOfMemoryError, which is an Error rather than an Exception, so catch Throwable to actually see it:
try {
    bytebuffer = new byte[(int) encryptedfile.length()];
} catch (Throwable t) {
    // OutOfMemoryError is not an Exception, so catch Throwable here
    Toast.makeText(getActivity(), t.getMessage(), Toast.LENGTH_LONG).show();
}

GCP Speech-to-Text - Java API not working

I have a sample .webm file recorded using MediaRecorder in the Chrome browser. When I use the Google Speech Java client to get a transcription for the video, it returns an empty transcription. Here is what my code looks like:
SpeechSettings settings = null;
Path path = Paths.get("D:\\scrap\\gcp_test.webm");
byte[] content = null;
try {
    content = Files.readAllBytes(path);
    settings = SpeechSettings.newBuilder().setCredentialsProvider(credentialsProvider).build();
} catch (IOException e1) {
    throw new IllegalStateException(e1);
}
try (SpeechClient speech = SpeechClient.create(settings)) {
    // Build the recognition request; the audio content is sent inline
    RecognitionConfig config = RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setLanguageCode("en-US")
            .setUseEnhanced(true)
            .setModel("video")
            .setEnableAutomaticPunctuation(true)
            .setSampleRateHertz(48000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(ByteString.copyFrom(content)).build();
    // RecognitionAudio audio = RecognitionAudio.newBuilder().setUri("gs://xxxx/gcp_test.webm").build();
    // Use blocking call for getting the audio transcript
    RecognizeResponse response = speech.recognize(config, audio);
    List<SpeechRecognitionResult> results = response.getResultsList();
    for (SpeechRecognitionResult result : results) {
        SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
        System.out.printf("Transcription: %s%n", alternative.getTranscript());
    }
} catch (Exception e) {
    e.printStackTrace();
    System.err.println(e.getMessage());
}
If I use the same file on https://cloud.google.com/speech-to-text/ and upload it in the demo section, it works fine and shows the transcription. I am clueless about what's going wrong here. I inspected the request sent by the demo, and I am sending the exact same set of parameters, but that didn't work. I also tried uploading the file to Cloud Storage, but that gave the same result (no transcription).
After trial and error (and looking at the JavaScript samples), I was able to solve the issue. The audio sent to the API should be in FLAC format; I was sending the video file (webm) as-is to Google Cloud. The demo on the site extracts the audio stream using the JavaScript Audio API and then sends the data in base64 format, which is why it works there.
Here are the steps that I executed to get the output.
Used FFMPEG to extract audio stream into FLAC format from webm.
ffmpeg -i sample.webm -vn -acodec flac sample.flac
The extracted file should be made available either via Cloud Storage or sent as a ByteString.
Set the appropriate model when calling the Speech API (for an English-language video the video model works, while for French command_and_search did). I don't have a logical reason for this; I realised it after trial and error with the demo on the Google Cloud site.
I got results with the FLAC-encoded file.
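Putting steps 2 and 3 together, the config from the question changes roughly like this (a sketch; the sample rate must match what ffmpeg actually produced):
RecognitionConfig config = RecognitionConfig.newBuilder()
        .setEncoding(AudioEncoding.FLAC)   // step 1: audio extracted as FLAC
        .setLanguageCode("en-US")
        .setModel("video")                 // step 3: "command_and_search" worked for French
        .setSampleRateHertz(48000)         // must match the extracted file's rate
        .build();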
Sample code that prints each recognized word with its timestamps:
public class SpeechToTextSample {
    public static void main(String... args) throws Exception {
        try (SpeechClient speechClient = SpeechClient.create()) {
            String gcsUriFlac = "gs://yourfile.flac";
            RecognitionConfig config =
                    RecognitionConfig.newBuilder()
                            .setEncoding(AudioEncoding.FLAC)
                            .setEnableWordTimeOffsets(true)
                            .setLanguageCode("en-US")
                            .build();
            RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUriFlac).build(); // for large files
            OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
                    speechClient.longRunningRecognizeAsync(config, audio);
            while (!response.isDone()) {
                System.out.println("Waiting for response...");
                Thread.sleep(1000);
            }
            // Performs speech recognition on the audio file
            List<SpeechRecognitionResult> results = response.get().getResultsList();
            for (SpeechRecognitionResult result : results) {
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcription: %s%n", alternative.getTranscript());
                for (WordInfo wordInfo : alternative.getWordsList()) {
                    System.out.println(wordInfo.getWord());
                    System.out.printf(
                            "\t%s.%s sec - %s.%s sec\n",
                            wordInfo.getStartTime().getSeconds(),
                            wordInfo.getStartTime().getNanos() / 100000000,
                            wordInfo.getEndTime().getSeconds(),
                            wordInfo.getEndTime().getNanos() / 100000000);
                }
            }
        }
    }
}
GCP supports different languages; I have used "en-US" for my example.
Refer to the language support documentation for the full list.

PDFBox creating Sound object with link/reference to external mp3 or wav file

I am writing a utility application using the open-source, Java-based PDFBox to take a PDF file containing a hyperlink that opens an mp3 file and replace that link with a sound object.
I chose the PDFBox API since it appears mature enough to work with Sound objects. I can read the PDF file and find the hyperlink with the mp3 reference, but I am not able to replace it with a sound object. I created the Sound object and associated it with the action, but it does not work. I think I am missing some important part of how to create a Sound object using PDActionSound. Is it possible to refer to an external wav file using the PDFBox API?
for (PDPage pdPage : pages) {
    List<PDAnnotation> annotations = pdPage.getAnnotations();
    for (PDAnnotation pdAnnotation : annotations) {
        if (pdAnnotation instanceof PDAnnotationLink) {
            PDAnnotationLink link = ((PDAnnotationLink) pdAnnotation);
            PDAction action = link.getAction();
            if (action instanceof PDActionLaunch) {
                PDActionLaunch launch = ((PDActionLaunch) action);
                String fileInfo = launch.getFile().getFile();
                if (fileInfo.contains(".mp3")) {
                    /* create Sound object referring to external mp3 */
                    // something like
                    PDActionSound actionSound = new PDActionSound(soundStream);
                    // set the ActionSound on the link
                    link.setAction(actionSound);
                }
            }
        }
    }
}
How do I create the sound object (PDActionSound) and add it to the link successfully?
Speaking of mature, that part has never been used, and now that I've had a closer look at the code, I think some work remains to be done... Please try this; I created it with PDFBox 2.0 after reading the PDF specification:
PDSimpleFileSpecification fileSpec = new PDSimpleFileSpecification(new COSString("/C/dir1/dir2/blah.mp3")); // see "File Specification Strings" in the PDF spec
COSStream soundStream = new COSStream();
soundStream.createOutputStream().close();
soundStream.setItem(COSName.F, fileSpec);
soundStream.setInt(COSName.R, 44100); // put the actual sample rate here
PDActionSound actionSound = new PDActionSound();
actionSound.getCOSObject().setItem(COSName.getPDFName("Sound"), soundStream);
link.setAction(actionSound); // reassign the new action to the link annotation
Edit: as the above didn't work, here's an alternative solution, as requested in the comments. The file is embedded. It works only with .WAV files, and you have to know their details. About half a second is lost at the beginning. The sound you should hear is "I am Al Bundy". I tried with MP3 and didn't succeed; while googling, I found some texts saying that only "old" formats (wav, aif, etc.) are supported. I did find another way to play sounds ("Renditions") that even worked with embedded mp3 in another product, but the generated structure in the PDF is even more complex.
COSStream soundStream = new COSStream();
OutputStream os = soundStream.createOutputStream(COSName.FLATE_DECODE);
URL url = new URL("http://cd.textfiles.com/hackchronii/WAV/ALBUNDY1.WAV");
InputStream is = url.openStream();
// FileInputStream is = new FileInputStream(".....WAV");
IOUtils.copy(is, os);
is.close();
os.close();
// See p. 506 in PDF spec, Table 294
soundStream.setInt(COSName.C, 1); // channels
soundStream.setInt(COSName.R, 22050); // sampling rate
//soundStream.setString(COSName.E, "Signed"); // The encoding format for the sample data
soundStream.setInt(COSName.B, 8); // The number of bits per sample value per channel. Default value: 8
// soundStream.setName(COSName.CO, "MP3"); // doesn't work
PDActionSound actionSound = new PDActionSound();
actionSound.getCOSObject().setItem(COSName.getPDFName("Sound"), soundStream);
link.setAction(actionSound);
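After reassigning the action, the document has to be saved for the change to take effect; a minimal sketch, assuming document is the loaded PDDocument and the output path is arbitrary:
document.save(new File("with-sound.pdf")); // persist the modified link annotation
document.close();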
Update 9.7.2016:
We discussed this on the PDFBox mailing list, and thanks to Gilad Denneboom we know two more things:
1) Adobe Acrobat only lets you select WAV or AIF files
2) code by Gilad Denneboom that uses MP3SPI to convert MP3 to raw PCM:
private static InputStream getAudioStream(String filename) throws Exception {
    File file = new File(filename);
    AudioInputStream in = AudioSystem.getAudioInputStream(file);
    AudioFormat baseFormat = in.getFormat();
    AudioFormat decodedFormat = new AudioFormat(
            AudioFormat.Encoding.PCM_UNSIGNED,
            baseFormat.getSampleRate(),
            baseFormat.getSampleSizeInBits(),
            baseFormat.getChannels(),
            baseFormat.getChannels(),
            baseFormat.getSampleRate(),
            false);
    return AudioSystem.getAudioInputStream(decodedFormat, in);
}
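To combine this with the embedding code above, you can feed the decoded stream into the Sound stream and take the parameters from the decoded format (a sketch; the mp3 path is a placeholder):
AudioInputStream pcm = (AudioInputStream) getAudioStream("sample.mp3"); // placeholder path
AudioFormat fmt = pcm.getFormat();
COSStream soundStream = new COSStream();
OutputStream out = soundStream.createOutputStream(COSName.FLATE_DECODE);
IOUtils.copy(pcm, out);
pcm.close();
out.close();
soundStream.setInt(COSName.C, fmt.getChannels());          // channels
soundStream.setInt(COSName.R, (int) fmt.getSampleRate());  // sampling rate
soundStream.setInt(COSName.B, fmt.getSampleSizeInBits());  // bits per sample per channel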

Custom streaming implementation

I'm trying to implement my own version of streaming. I'm sending byte arrays over a websocket. Once I get the first message, I write it to a temporary file and use Android's MediaPlayer to play that file. For the first message everything works fine: I turn the byte array into an mp3 and audio comes out. However, I'm not sure how to keep writing to the file every time a new message comes in.
Some example code:
File test;
FileOutputStream fos;
MediaPlayer mediaPlayer;
FileInputStream MyFile;
Every time a message comes through, this code gets run:
try {
    if (fos == null) {
        test = File.createTempFile("TCL", "mp3", getCacheDir());
        fos = new FileOutputStream(test);
        fos.write(bytearray);
        mediaPlayer = new MediaPlayer();
        MyFile = new FileInputStream(test);
        mediaPlayer.setDataSource(MyFile.getFD());
        mediaPlayer.prepare();
        if (!mediaPlayer.isPlaying()) {
            mediaPlayer.start();
        }
    } else {
        fos.write(bytearray);
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
I thought I could just keep writing incoming byte[]s to the file, but that doesn't seem to be working. Any advice would be appreciated.
What you're trying to do (play the audio in a file that keeps growing indefinitely) is not supported by MediaPlayer. Instead, look into decoding the audio yourself and sending the raw PCM data to AudioTrack. It's a lot more work, but AudioTrack is the easiest way to progressively play a stream of audio data.
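A minimal sketch of the AudioTrack approach, assuming each incoming message has already been decoded to 16-bit PCM at a known sample rate (the decoding step itself, e.g. via MediaCodec, is omitted):
int sampleRate = 44100; // assumed; must match the decoded stream
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();

// For each websocket message, after decoding it to PCM (pcmChunk is assumed):
// write() blocks until the chunk is queued, which paces playback
track.write(pcmChunk, 0, pcmChunk.length);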

sending video file to browser over websocket

I want to send a video file from a server written in java to a web browser client.
The socket connection works fine and I have no trouble sending text.
The library I'm using for the websocket server is https://github.com/TooTallNate/Java-WebSocket
This is the code for sending the file:
public void sendFile(WebSocket conn, String path)
{
    try
    {
        File file = new File(path);
        byte[] data = new byte[(int) file.length()];
        DataInputStream stream = new DataInputStream(new FileInputStream(file));
        stream.readFully(data);
        stream.close();
        conn.send(data);
..snip catch statements..
Here is my JavaScript code for receiving the file:
function connect()
{
    conn = new WebSocket('ws://localhost:8887');
    conn.onopen = function() { alert("Connection Open"); };
    conn.onmessage = function(evt) {
        if (evt.data instanceof Blob) { readFile(evt); }
        else { alert(evt.data); }
    };
    conn.onclose = function() { alert('connection closed'); };
}
function readFile(file_data)
{
    var video = document.getElementById('area');
    video.src = window.URL.createObjectURL(file_data.data);
}
..skip to html element for playing the file..
<video id='area' controls="controls"></video>
I want to be able to receive the file in the browser and play it.
The error I get while trying to send a webm video file to Firefox is:
HTTP "Content-Type" of "application/octet-stream" is not supported. Load of media resource blob:794345a5-4b6d-4585-b92b-3acb51612a6c failed.
Is it possible to receive a video file from a websocket and play it?
Am I implementing something wrong?
The video element requires the right content type; the Blob delivered by the websocket comes with a generic one, and it seems (to me) there is no way to set it server-side or client-side.
Fortunately, Blob has a slice(start, end, contentType) method:
var rightBlob = originalBlob.slice(0, originalBlob.size, 'video/webm');
video.src = window.URL.createObjectURL(rightBlob); // hand the retyped blob to the video element
