Android - demodulate FSK using Goertzel - Java

I am using the code from https://stackoverflow.com/questions/23432398/audio-recorder-in-android-process-the-audio-bytes to capture the mic audio, but I am writing the data to a ByteArrayOutputStream. After I finish recording I want to demodulate the captured signal using the Goertzel algorithm. The FSK signal consists of two frequencies, 800 Hz for '1' and 400 Hz for '0', and each bit is modulated over 100 samples. I am using this Goertzel class: http://courses.cs.washington.edu/courses/cse477/projectwebs04sp/cse477m/code/public/Goertzel.java and I am trying to use a bin size of 150.
Here is what I am trying to do. The code, after I finish the recording:
private void stopRecording()
{
    if (recorder != null)
    {
        isRecording = false;
        recorder.stop();
        recorder.release();
        recorder = null;
        recordingThread = null;

        int BlockSize = 150;
        float HighToneFrequency = 800;
        float LowToneFrequency = 400;
        byte[] byteArrayData = ByteArrayAudioData.toByteArray();

        /*final AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                8000, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, byteArrayData.length,
                AudioTrack.MODE_STATIC);
        audioTrack.write(byteArrayData, 0, byteArrayData.length);
        audioTrack.play();*/

        double[] daOriginalSine = convertSample2Sine(byteArrayData);
        int i = 0;
        while (i < daOriginalSine.length)
        {
            double t1 = testSpecificFrequency(i, HighToneFrequency, BlockSize, daOriginalSine);
            double t2 = testSpecificFrequency(i, LowToneFrequency, BlockSize, daOriginalSine);
            i += BlockSize;
        }
    }
}
And the function testSpecificFrequency:
private double testSpecificFrequency(int startIndex, float ToneFreq, int BlockSize, double[] sample)
{
    Goertzel g = new Goertzel(RECORDER_SAMPLERATE, ToneFreq, BlockSize, false);
    g.initGoertzel();
    for (int j = startIndex; j < startIndex + BlockSize; j++)
    {
        g.processSample(sample[j]);
    }
    double res = Math.sqrt(g.getMagnitudeSquared());
    return res;
}
I just tried to see what the results would be by sending 800 Hz to the constructor and afterwards 400 Hz, but I don't really know how to proceed from this point.
Any ideas?
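A minimal sketch of one way to proceed, assuming a hard decision of "whichever tone is stronger wins". Note that the loop above steps by BlockSize = 150 while each bit spans 100 samples, so the analysis windows drift across bit boundaries; stepping by the bit length (samplesPerBit below is my name, not from the original code) keeps them aligned:

// Hedged sketch, not the original code: decode one bit per 100-sample block
// by comparing the Goertzel magnitudes of the two tones.
int samplesPerBit = 100;                  // from the modulation scheme above
StringBuilder bits = new StringBuilder();
for (int i = 0; i + samplesPerBit <= daOriginalSine.length; i += samplesPerBit) {
    double high = testSpecificFrequency(i, HighToneFrequency, samplesPerBit, daOriginalSine);
    double low  = testSpecificFrequency(i, LowToneFrequency,  samplesPerBit, daOriginalSine);
    bits.append(high > low ? '1' : '0');  // 800 Hz encodes '1', 400 Hz encodes '0'
}

Finding where the first bit starts also matters (for example with a known preamble); otherwise each 100-sample window may straddle two bits.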

Related

Merge/Mux multiple mp4 video files on Android

I have a series of mp4 files saved on the device that need to be merged together to make a single mp4 file.
video_p1.mp4 video_p2.mp4 video_p3.mp4 > video.mp4
The solutions I have researched, such as the mp4parser framework, rely on deprecated code.
The best solution I could find is using a MediaMuxer and MediaExtractor.
The code runs, but my videos are not merged (only the content in video_p1.mp4 is displayed, and it is in landscape orientation, not portrait).
Can anyone help me sort this out?
public static boolean concatenateFiles(File dst, File... sources) {
    if ((sources == null) || (sources.length == 0)) {
        return false;
    }
    boolean result;
    MediaExtractor extractor = null;
    MediaMuxer muxer = null;
    try {
        // Set up MediaMuxer for the destination.
        muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // Copy the samples from MediaExtractor to MediaMuxer.
        boolean sawEOS = false;
        //int bufferSize = MAX_SAMPLE_SIZE;
        int bufferSize = 1 * 1024 * 1024;
        int frameCount = 0;
        int offset = 100;
        ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
        long timeOffsetUs = 0;
        int dstTrackIndex = -1;
        for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
            int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);
            // Set up MediaExtractor to read from the source.
            extractor = new MediaExtractor();
            extractor.setDataSource(sources[fileIndex].getPath());
            // Set up the tracks.
            SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                extractor.selectTrack(i);
                MediaFormat format = extractor.getTrackFormat(i);
                if (dstTrackIndex < 0) {
                    dstTrackIndex = muxer.addTrack(format);
                    muxer.start();
                }
                indexMap.put(i, dstTrackIndex);
            }
            long lastPresentationTimeUs = 0;
            int currentSample = 0;
            while (!sawEOS) {
                bufferInfo.offset = offset;
                bufferInfo.size = extractor.readSampleData(dstBuf, offset);
                if (bufferInfo.size < 0) {
                    sawEOS = true;
                    bufferInfo.size = 0;
                    timeOffsetUs += (lastPresentationTimeUs + 0);
                }
                else {
                    lastPresentationTimeUs = extractor.getSampleTime();
                    bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
                    bufferInfo.flags = extractor.getSampleFlags();
                    int trackIndex = extractor.getSampleTrackIndex();
                    if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
                        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
                    }
                    extractor.advance();
                    frameCount++;
                    currentSample++;
                    Log.d("tag2", "Frame (" + frameCount + ") " +
                            "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                            " Flags:" + bufferInfo.flags +
                            " TrackIndex:" + trackIndex +
                            " Size(KB) " + bufferInfo.size / 1024);
                }
            }
            extractor.release();
            extractor = null;
        }
        result = true;
    }
    catch (IOException e) {
        result = false;
    }
    finally {
        if (extractor != null) {
            extractor.release();
        }
        if (muxer != null) {
            muxer.stop();
            muxer.release();
        }
    }
    return result;
}
public static int getNumberOfSamples(File src) {
    MediaExtractor extractor = new MediaExtractor();
    int result;
    try {
        extractor.setDataSource(src.getPath());
        extractor.selectTrack(0);
        result = 0;
        while (extractor.advance()) {
            result++;
        }
    }
    catch (IOException e) {
        result = -1;
    }
    finally {
        extractor.release();
    }
    return result;
}
I'm using this library for muxing videos: ffmpeg-android-java
gradle dependency:
implementation 'com.writingminds:FFmpegAndroid:0.3.2'
Here's how I use it in my project to mux video and audio in Kotlin: VideoAudioMuxer
So basically it works like ffmpeg in the terminal, but you pass your command to a method as an array of strings, along with a listener:
ffmpeg.execute(arrayOf("-i", videoPath, "-i", audioPath, "$targetPath.mp4"), object : ExecuteBinaryResponseHandler() {
You'll have to search for how to merge videos in ffmpeg and convert the command into an array of strings for the argument you need.
You could probably do almost anything, since ffmpeg is a very powerful tool.
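For the merge itself, a hedged sketch in Java of what a concat call could look like against this library's execute(String[], ...) API; the paths and the list file are my assumptions, not from the question, and the binary must already have been loaded with ffmpeg.loadBinary(...):

import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;

// list.txt holds one line per part, e.g.:
//   file '/sdcard/video_p1.mp4'
//   file '/sdcard/video_p2.mp4'
//   file '/sdcard/video_p3.mp4'
String[] cmd = {"-f", "concat", "-safe", "0", "-i", "/sdcard/list.txt",
        "-c", "copy", "/sdcard/video.mp4"};
try {
    FFmpeg ffmpeg = FFmpeg.getInstance(context);    // context: your Activity or Application
    ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override public void onSuccess(String message) { Log.d("ffmpeg", "merged"); }
        @Override public void onFailure(String message) { Log.e("ffmpeg", message); }
    });
} catch (FFmpegCommandAlreadyRunningException e) {
    // this library runs only one ffmpeg invocation at a time
}

With -c copy the three parts must share codec parameters; dropping it makes ffmpeg re-encode, which is slower but tolerant of mismatched sources.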

How To Send Data Using Headset Port

Hello, I want to send data through the headset port from my phone to a handmade clock, as a challenge. So I decided to use different audio frequencies as different messages. For example, on the left channel 1 kHz means Hour, 1.1 kHz means Minute, and 1.2 kHz means Day; on the right channel 1 kHz means 1, 1.1 kHz means 2, and so on. Now I have this to make sound:
public class SetTimeAndDay_Activity extends Activity {
    private final int duration = 1; // seconds
    private final int sampleRate = 16000;
    private final int numSamples = duration * sampleRate;
    private final double sample[] = new double[numSamples];
    private final double freqOfTone = 1000; // hz
    private final byte generatedSnd[] = new byte[2 * numSamples];
    Handler handler = new Handler();
    int Hour, Minute, Day;

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        setContentView(R.layout.settime_layout);
        //<editor-fold desc="Get Real Time">
        final Calendar real = Calendar.getInstance();
        Hour = real.get(Calendar.HOUR_OF_DAY);
        Minute = real.get(Calendar.MINUTE);
        Day = real.get(Calendar.DAY_OF_WEEK);
        //</editor-fold>
        //<editor-fold desc="Preset View">
        TextView timetxt = findViewById(R.id.SetTime_Time_Txt);
        TextView datetxt = findViewById(R.id.SetTime_Day_Txt);
        ProgressBar progressBar = findViewById(R.id.SetTime_progressBar);
        timetxt.setText(Hour + ":" + Minute);
        datetxt.setText(getResources().getTextArray(R.array.WDay)[Day]);
        progressBar.setMax(100);
        progressBar.setProgress(1);
        //</editor-fold>
        // Use a new thread, as this can take a while
        final Thread thread = new Thread(new Runnable() {
            public void run() {
                genTone();
                handler.post(new Runnable() {
                    public void run() {
                        playSound();
                    }
                });
            }
        });
        thread.start();
    }

    void genTone() {
        // fill out the array
        for (int i = 0; i < numSamples; ++i) {
            sample[i] = Math.sin(2 * Math.PI * i / (sampleRate / freqOfTone));
        }
        // convert to 16 bit pcm sound array
        // assumes the sample buffer is normalised.
        int idx = 0;
        for (final double dVal : sample) {
            // scale to maximum amplitude
            final short val = (short) ((dVal * 32767));
            // in 16 bit wav PCM, first byte is the low order byte
            generatedSnd[idx++] = (byte) (val & 0x00ff);
            generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
        }
    }

    void playSound() {
        final AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                sampleRate, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, numSamples,
                AudioTrack.MODE_STATIC);
        audioTrack.write(generatedSnd, 0, generatedSnd.length);
        audioTrack.play();
    }
}
But there are some problems!
First, I cannot make the tone shorter than one second.
Second, I don't know when I can send the next message; that is, how do I tell that a message has finished playing?
Next, when I assign the tone to the front-left channel, it plays from the right channel too.
And last, I cannot detect whether anything is plugged into the headset port.
Please help me in any way, and please don't downvote.
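On the channel problem: the code above opens the track with CHANNEL_OUT_MONO, and a mono stream is rendered to both ears, which would explain the left tone appearing on the right. A hedged sketch of a stereo version, where the PCM frames interleave left and right samples and a playback marker signals when the message has finished (durationMs, leftHz and rightHz are hypothetical parameters, not from the original code):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Hedged sketch (methods would live inside the activity): left and right
// channels carry different tones in one interleaved 16-bit PCM buffer.
short[] genStereoTone(int sampleRate, int durationMs, double leftHz, double rightHz) {
    int frames = sampleRate * durationMs / 1000;       // works for fractions of a second
    short[] pcm = new short[frames * 2];               // interleaved: L, R, L, R, ...
    for (int i = 0; i < frames; i++) {
        pcm[2 * i]     = (short) (32767 * Math.sin(2 * Math.PI * leftHz  * i / sampleRate));
        pcm[2 * i + 1] = (short) (32767 * Math.sin(2 * Math.PI * rightHz * i / sampleRate));
    }
    return pcm;
}

void playStereo(final short[] pcm, int sampleRate) {
    final AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            pcm.length * 2,                            // buffer size in bytes
            AudioTrack.MODE_STATIC);
    track.write(pcm, 0, pcm.length);
    track.setNotificationMarkerPosition(pcm.length / 2);   // marker in frames = end of tone
    track.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
        @Override public void onMarkerReached(AudioTrack t) {
            // the message has finished playing; safe to send the next one
        }
        @Override public void onPeriodicNotification(AudioTrack t) { }
    });
    track.play();
}

Computing the frame count from milliseconds also lifts the one-second floor, and for the last problem, registering a receiver for Intent.ACTION_HEADSET_PLUG is the usual way to learn whether something is plugged into the jack.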

Android wave file

I am working on my final project, and for it I need to read a wav file, which I have managed to do.
When I run the code on desktop Java it takes less than 1 second, but when I run it on a Nexus 5 it takes almost a minute!
Copying sb.get(i) into original_signal[i] is what takes all the time:
for (int i = 0; i < 1710080; i++) {
    original_signal[i] = (sb.get(i));
}
Please, I need help.
Thanks!!!
public static void jjjj() {
    String filepath = Environment.getExternalStorageDirectory().getAbsolutePath() + "/ilia.wav";
    Wave wave = new Wave(filepath);
    byte[] arr = wave.getBytes();
    System.out.println();
    wave.length();
    ByteBuffer bb = ByteBuffer.wrap(arr);
    System.out.println(bb.capacity());
    bb.order(ByteOrder.LITTLE_ENDIAN);
    ShortBuffer sb = bb.asShortBuffer();
    original_signal = new double[1710080];
    // double firstSample;
    // THIS FOR LOOP TAKES A LOT OF TIME
    for (int i = 0; i < 1710080; i++) {
        original_signal[i] = (sb.get(i));
    }
    System.out.println("sss");
}
I solved the issue:
Wave wave = new Wave(filepath);
double[] original_signal = wave.getNormalizedAmplitudes();
System.out.println(wave.getWaveHeader());
Parameters.Fs = wave.getWaveHeader().getSampleRate();
return original_signal;
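As a hedged alternative that keeps the raw short samples, a single bulk get() from the ShortBuffer avoids most of the per-element call overhead, a likely culprit for the slowdown on the device:

// Hedged sketch: bulk-copy the samples instead of calling sb.get(i)
// 1,710,080 times, then widen to double in a plain array loop.
short[] raw = new short[sb.remaining()];
sb.get(raw);                                  // one bulk read from the buffer
double[] originalSignal = new double[raw.length];
for (int i = 0; i < raw.length; i++) {
    originalSignal[i] = raw[i];
}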

Java openCV video processing frames/s

I have a task to write a program with 1 camera, 1 Kinect, a lot of video processing, and then controlling a robot.
This code just shows the captured video frames without processing, but I only get approximately 20 frames/s. The same simple frame-display program in Matlab gave me 29 frames/s. I was hoping to gain some speed in Java, but it doesn't look like that. Am I doing something wrong? If not, how can I increase the speed?
public class Video implements Runnable {
    //final int INTERVAL = 1000; /// you may use interval
    IplImage image;
    CanvasFrame canvas = new CanvasFrame("Web Cam");

    public Video() {
        canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
    }

    @Override
    public void run() {
        FrameGrabber grabber = new VideoInputFrameGrabber(0); // 1 for next camera
        int i = 0;
        try {
            grabber.start();
            IplImage img;
            int g = 0;
            long start2 = 0;
            long stop = System.nanoTime();
            long diff = 0;
            start2 = System.nanoTime();
            while (true) {
                img = grabber.grab();
                if (img != null) {
                    // cvFlip(img, img, 1); // l-r = 90_degrees_steps_anti_clockwise
                    // cvSaveImage((i++) + "-aa.jpg", img);
                    // show image on window
                    canvas.showImage(img);
                }
                g++;
                if (g % 200 == 0) {
                    stop = System.nanoTime();
                    diff = stop - start2;
                    double d = (float) diff;
                    double dd = d / 1000000000;
                    double dv = dd / g;
                    System.out.printf("frames = %.2f\n", 1 / dv);
                }
                //Thread.sleep(INTERVAL);
            }
        } catch (Exception e) {
        }
    }

    public static void main(String[] args) {
        Video gs = new Video();
        Thread th = new Thread(gs);
        th.start();
    }
}
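A hedged first step before optimizing: time grabber.grab() on its own, without canvas.showImage(), to see whether the camera capture or the Swing rendering is the 20 fps ceiling. A sketch of what that could look like, reusing the names from the code above and placed inside run() after grabber.start():

// Hedged sketch: measure capture-only throughput. If this reports ~30 fps,
// the display path, not the camera, is the bottleneck.
long t0 = System.nanoTime();
int frames = 0;
while (frames < 200) {
    if (grabber.grab() != null) {
        frames++;                         // capture only, no rendering
    }
}
double fps = frames / ((System.nanoTime() - t0) / 1e9);
System.out.printf("grab-only fps = %.2f%n", fps);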

Video buffer delay with MJPG Feed

I am trying to delay my live MJPEG video feed by 10 seconds.
I am trying to modify this code, but I am unable to incorporate the MJPEG URL.
It keeps saying 'The constructor CaptureMJPEG(String, int, int, int) is undefined' when I try to put the URL in.
The original line said:
capture = new CaptureMJPEG(this, capture_xsize, capture_ysize, capture_frames);
I changed it to:
capture = new CaptureMJPEG ("http:/url.com/feed.mjpg", capture_xsize, capture_ysize, capture_frames);
import processing.video.*;
import it.lilik.capturemjpeg.*;

Capture myCapture;
CaptureMJPEG capture;
VideoBuffer monBuff;
int display_xsize = 800; // display size
int display_ysize = 600;
int capture_xsize = 320; // capture size
int capture_ysize = 240;
int delay_time = 10; // delay in seconds
int capture_frames = 20; // capture frames per second

void setup() {
  size(display_xsize, display_ysize, P3D);
  // Warning: VideoBuffer must be initiated BEFORE capture- or movie-events start
  monBuff = new VideoBuffer(delay_time * capture_frames, capture_xsize, capture_ysize);
  capture = new CaptureMJPEG("http:/url.com/feed.mjpg", capture_xsize, capture_ysize, capture_frames);
}

void captureEvent(Capture capture) {
  capture.read();
  monBuff.addFrame(capture);
}

void draw() {
  PImage bufimg = monBuff.getFrame();
  PImage tmpimg = createImage(bufimg.width, bufimg.height, RGB);
  tmpimg.copy(bufimg, 0, 0, bufimg.width, bufimg.height, 0, 0, bufimg.width, bufimg.height);
  tmpimg.resize(display_xsize, display_ysize);
  image(tmpimg, 0, 0);
}

class VideoBuffer
{
  PImage[] buffer;
  int inputFrame = 0;
  int outputFrame = 0;
  int frameWidth = 0;
  int frameHeight = 0;

  /*
    parameters:
      frames - the number of frames in the buffer (fps * duration)
      width  - the width of the video
      height - the height of the video
  */
  VideoBuffer(int frames, int width, int height)
  {
    buffer = new PImage[frames];
    for (int i = 0; i < frames; i++)
    {
      this.buffer[i] = new PImage(width, height);
    }
    this.inputFrame = frames - 1;
    this.outputFrame = 0;
    this.frameWidth = width;
    this.frameHeight = height;
  }

  // return the current "playback" frame.
  PImage getFrame()
  {
    int frr;
    if (this.outputFrame >= this.buffer.length)
      frr = 0;
    else
      frr = this.outputFrame;
    return this.buffer[frr];
  }

  // Add a new frame to the buffer.
  void addFrame(PImage frame)
  {
    // copy the new frame into the buffer.
    System.arraycopy(frame.pixels, 0, this.buffer[this.inputFrame].pixels, 0,
                     this.frameWidth * this.frameHeight);
    // advance the input and output indexes
    this.inputFrame++;
    this.outputFrame++;
    // wrap the values..
    if (this.inputFrame >= this.buffer.length)
    {
      this.inputFrame = 0;
    }
    if (this.outputFrame >= this.buffer.length)
    {
      this.outputFrame = 0;
    }
  }
}
Reading the reference docs:
https://bytebucket.org/nolith/capturemjpeg/wiki/api/index.html
These are the only two constructors:
CaptureMJPEG(PApplet parent, String url)
Creates a CaptureMJPEG without HTTP Auth credential
CaptureMJPEG(PApplet parent, String url, String username, String password)
Creates a CaptureMJPEG with HTTP Auth credential
So the first argument must always point to your Processing applet instance, and the size and frame-rate arguments are not part of this API at all:
capture = new CaptureMJPEG(this, "http:/url.com/feed.mjpg");
