Hello, I want to send data from my phone to a handmade clock through the headset port, as a challenge. I decided to use different audio frequencies as different messages. For example, on the left channel 1 kHz means "hour", 1.1 kHz means "minute", 1.2 kHz means "day"; on the right channel 1 kHz means 1, 1.1 kHz means 2, and so on. Right now I have this to make the sound:
public class SetTimeAndDay_Activity extends Activity {
private final int duration = 1; // seconds
private final int sampleRate = 16000;
private final int numSamples = duration * sampleRate;
private final double sample[] = new double[numSamples];
private final double freqOfTone = 1000; // hz
private final byte generatedSnd[] = new byte[2 * numSamples];
Handler handler = new Handler();
int Hour, Minute, Day;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
requestWindowFeature(Window.FEATURE_NO_TITLE);
setContentView(R.layout.settime_layout);
//<editor-fold desc="Get Real Time">
final Calendar real = Calendar.getInstance();
Hour = real.get(Calendar.HOUR_OF_DAY);
Minute = real.get(Calendar.MINUTE);
Day = real.get(Calendar.DAY_OF_WEEK);
//</editor-fold>
//<editor-fold desc="Preset View">
TextView timetxt = findViewById(R.id.SetTime_Time_Txt);
TextView datetxt = findViewById(R.id.SetTime_Day_Txt);
ProgressBar progressBar= findViewById(R.id.SetTime_progressBar);
timetxt.setText(Hour + ":" + Minute);
datetxt.setText(getResources().getTextArray(R.array.WDay)[Day]);
progressBar.setMax(100);
progressBar.setProgress(1);
//</editor-fold>
// Use a new thread as this can take a while
final Thread thread = new Thread(new Runnable() {
public void run() {
genTone();
handler.post(new Runnable() {
public void run() {
playSound();
}
});
}
});
thread.start();
}
void genTone(){
// fill out the array
for (int i = 0; i < numSamples; ++i) {
sample[i] = Math.sin(2 * Math.PI * i / (sampleRate/freqOfTone));
}
// convert to 16 bit pcm sound array
// assumes the sample buffer is normalised.
int idx = 0;
for (final double dVal : sample) {
// scale to maximum amplitude
final short val = (short) ((dVal * 32767));
// in 16 bit wav PCM, first byte is the low order byte
generatedSnd[idx++] = (byte) (val & 0x00ff);
generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
}
}
void playSound(){
final AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
sampleRate, AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT, generatedSnd.length, // buffer size is in bytes; the PCM array holds 2 bytes per sample
AudioTrack.MODE_STATIC);
audioTrack.write(generatedSnd, 0, generatedSnd.length);
audioTrack.play();
}
}
But there are some problems!
First, I cannot make the tone shorter than one second.
Second, I don't know when I can send the next message; in other words, how can I tell that a message has finished playing?
Next, when I tell it to play only on the front-left channel, the tone comes out of the right channel too.
And finally, I cannot detect whether anything is plugged into the headset port.
Please help in any way you can, and please don't downvote.
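For reference, here is a rough sketch (not part of the original code) of how some of these points could be handled: making the duration a double allows tones shorter than one second; mono PCM is duplicated onto both outputs, so keeping the left and right channels distinct requires interleaved stereo data; a playback-position marker signals when the tone has finished so the next message can be sent; and AudioManager can report whether a wired headset is plugged in. Method and variable names below are illustrative, not from the original, and the usual android.media / android.content imports are assumed.
void playStereoTone(double leftHz, double rightHz, double seconds) {
    final int sampleRate = 16000;
    final int numFrames = (int) (seconds * sampleRate);  // duration is a double, so e.g. 0.25 s works
    final byte[] pcm = new byte[numFrames * 4];           // 2 channels * 2 bytes per 16-bit sample

    int idx = 0;
    for (int i = 0; i < numFrames; i++) {
        short left  = (short) (Math.sin(2 * Math.PI * leftHz  * i / sampleRate) * 32767);
        short right = (short) (Math.sin(2 * Math.PI * rightHz * i / sampleRate) * 32767);
        // interleaved little-endian 16-bit PCM: L, R, L, R, ...
        pcm[idx++] = (byte) (left & 0xff);  pcm[idx++] = (byte) ((left >> 8) & 0xff);
        pcm[idx++] = (byte) (right & 0xff); pcm[idx++] = (byte) ((right >> 8) & 0xff);
    }

    final AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            pcm.length, AudioTrack.MODE_STATIC);
    track.write(pcm, 0, pcm.length);

    // fires once the last frame has been rendered
    track.setNotificationMarkerPosition(numFrames);
    track.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
        @Override public void onMarkerReached(AudioTrack t) {
            // the tone has finished playing; it is now safe to send the next message
        }
        @Override public void onPeriodicNotification(AudioTrack t) { }
    });
    track.play();
}

// e.g. in onCreate(): check whether anything is plugged into the headset jack
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
boolean headsetPlugged = am.isWiredHeadsetOn(); // or register a receiver for Intent.ACTION_HEADSET_PLUG
To keep a tone on the left channel only, pass 0 for rightHz (which produces all-zero samples on the right), or fill the right samples with zeros directly.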
Related
Intent i = getIntent();
Bundle bundle = i.getExtras();
if (bundle != null) {
String url = i.getStringExtra("movieUrl");
urlArray = url.trim().split(",");
urlLength = urlArray.length;
tempString = urlArray[loop].toString();
mVideoView.setVideoPath(tempString);
mVideoView.setMediaController(new MediaController(this));
mVideoView.requestFocus();
// Show progressbar
progressDialog.show();
}
My .ts file is 12 MB in size, but it only buffers 4 MB per segment. How do I increase that?
You can set the buffer size by accessing the MediaPlayer object.
mVideoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
@Override
public void onPrepared(MediaPlayer mp) {
//mp.setVideoQuality(MediaPlayer.VIDEOQUALITY_LOW);
//mp.setPlaybackSpeed(1.0f);
mp.setBufferSize(1024*1024*4);//4MB buffer size
}
});
You can check the library for more info.
/**
 * The buffer to fill before playback starts; the default is 1024*1024 bytes.
 *
 * @param bufSize buffer size in bytes
 */
public native void setBufferSize(long bufSize);
You can also use the following buffer size calculator. It tells you how many seconds of video you should buffer (source); from that you can calculate the buffer size in bytes.
// buffer padding in sec.
// should be at least twice as long as the keyframe interval and fps, e.g.:
// keyframe interval of 30 at 30fps --> min. 2 sec.
public static int BUFFER_PADDING = 3;
// videoLength in sec., videoBitrate and bandwidth in kBits/Sec
public static int calculate(int videoLength, int videoBitrate, int bandwidth) {
int bufferTime;
if (videoBitrate > bandwidth) {
bufferTime = (int) Math.ceil(videoLength - videoLength / (videoBitrate / bandwidth));
} else {
bufferTime = 0;
}
bufferTime += BUFFER_PADDING;
return bufferTime;
}
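A quick worked example of those two steps, with made-up numbers (a 60-second clip encoded at 2000 kbit/s, watched over a 1000 kbit/s connection):
int bufferSeconds = calculate(60, 2000, 1000);         // 60 - 60/(2000/1000) = 30, plus 3 s padding = 33
long bufferBytes = bufferSeconds * 2000L * 1000 / 8;   // seconds * bitrate(kbit/s) * 1000 / 8 ≈ 8.25 MB
mp.setBufferSize(bufferBytes);                         // inside onPrepared(), as shown above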
I am using this code https://stackoverflow.com/questions/23432398/audio-recorder-in-android-process-the-audio-bytes to capture the mic audio, but I am writing the data to a ByteArrayOutputStream. After I finish the recording I want to demodulate the captured signal using the Goertzel algorithm. The FSK signal consists of two frequencies, 800 Hz for '1' and 400 Hz for '0', and each bit is modulated over 100 samples. I am using this Goertzel class: http://courses.cs.washington.edu/courses/cse477/projectwebs04sp/cse477m/code/public/Goertzel.java and I am trying to use a bin size of 150.
Here is what I am trying to do; this code runs after I finish the recording:
private void stopRecording()
{
if(recorder != null)
{
isRecording= false;
recorder.stop();
recorder.release();
recorder = null;
recordingThread = null;
int BlockSize = 150;
float HighToneFrequency = 800;
float LowToneFrequency = 400;
byte[] byteArrayData = ByteArrayAudioData.toByteArray();
/*final AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
8000, AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT, byteArrayData.length,
AudioTrack.MODE_STATIC);
audioTrack.write(byteArrayData, 0, byteArrayData.length);
audioTrack.play();*/
double[] daOriginalSine = convertSample2Sine(byteArrayData);
int i = 0;
while(i < daOriginalSine.length)
{
double t1 = testSpecificFrequency(i, HighToneFrequency,BlockSize, daOriginalSine);
double t2 = testSpecificFrequency(i, LowToneFrequency,BlockSize, daOriginalSine);
i+=BlockSize;
}
}
}
And the function testSpecificFrequency:
private double testSpecificFrequency(int startIndex, float ToneFreq, int BlockSize, double[] sample)
{
Goertzel g = new Goertzel(RECORDER_SAMPLERATE, ToneFreq, BlockSize, false);
g.initGoertzel();
for(int j=startIndex ; j<startIndex+BlockSize ; j++)
{
g.processSample(sample[j]);
}
double res= Math.sqrt(g.getMagnitudeSquared());
return res;
}
I just tried to see what the results would be by sending 800 Hz to the constructor and afterwards 400 Hz, but I don't really know how to proceed from this point.
Any ideas?
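Not from the original post, but one way to proceed from here is to compare the two magnitudes block by block and let the stronger tone decide each bit. Since each bit is modulated over 100 samples, a block size of 100 keeps the Goertzel blocks aligned with the bit boundaries (the sketch below assumes the capture starts on a bit boundary):
// Sketch only: reuses the question's testSpecificFrequency() with a block size of 100
private String demodulate(double[] samples) {
    final int SAMPLES_PER_BIT = 100;           // one bit = 100 samples in this scheme
    StringBuilder bits = new StringBuilder();
    for (int i = 0; i + SAMPLES_PER_BIT <= samples.length; i += SAMPLES_PER_BIT) {
        double high = testSpecificFrequency(i, 800, SAMPLES_PER_BIT, samples);
        double low  = testSpecificFrequency(i, 400, SAMPLES_PER_BIT, samples);
        bits.append(high > low ? '1' : '0');   // the stronger tone wins
    }
    return bits.toString();
}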
I wrote an Android app that plays multi-track audio files and it works flawlessly in the emulator. On the device, it plays for a few seconds and then starts skipping and popping every few seconds. If I continuously tap the screen in the dead space of the app, the skipping stops, and it comes back about 5 seconds after I stop tapping. I presume this has something to do with thread priority, but I log the thread priority in the play loop and it never changes.
I'm hoping that somebody can tell me either:
a hack to simulate a screen tap every second, so that I can run a beta test without the app skipping, or
a way to debug activity/thread priority, since the behaviour suggests my thread's priority is being lowered even though the value I log never changes.
Here is how the player code is executed:
private class DecodeOperation extends AsyncTask<Void, Void, Void> {
@Override
protected Void doInBackground(Void... values) {
AudioTrackPlayer.this.decodeLoop();
return null;
}
@Override
protected void onPreExecute() {
}
@Override
protected void onProgressUpdate(Void... values) {
}
}
Here is the relevant player code:
private void decodeLoop()
{
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;
// extractor gets information about the stream
extractor = new MediaExtractor();
try {
extractor.setDataSource(this.mUrlString);
} catch (Exception e) {
mDelegateHandler.onRadioPlayerError(AudioTrackPlayer.this);
return;
}
MediaFormat format = extractor.getTrackFormat(0);
String mime = format.getString(MediaFormat.KEY_MIME);
// the actual decoder
codec = MediaCodec.createDecoderByType(mime);
codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
codec.start();
codecInputBuffers = codec.getInputBuffers();
codecOutputBuffers = codec.getOutputBuffers();
// get the sample rate to configure AudioTrack
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
Log.i(LOG_TAG,"mime "+mime);
Log.i(LOG_TAG,"sampleRate "+sampleRate);
// create our AudioTrack instance
audioTrack = new AudioTrack(
AudioManager.STREAM_MUSIC,
sampleRate,
AudioFormat.CHANNEL_OUT_5POINT1,
AudioFormat.ENCODING_PCM_16BIT,
AudioTrack.getMinBufferSize (
sampleRate,
AudioFormat.CHANNEL_OUT_5POINT1,
AudioFormat.ENCODING_PCM_16BIT
),
AudioTrack.MODE_STREAM
);
// start playing, we will feed you later
audioTrack.play();
extractor.selectTrack(0);
// start decoding
final long kTimeOutUs = 10000;
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean sawInputEOS = false;
boolean sawOutputEOS = false;
int noOutputCounter = 0;
int noOutputCounterLimit = 50;
while (!sawOutputEOS && noOutputCounter < noOutputCounterLimit && !doStop) {
//Log.i(LOG_TAG, "loop ");
noOutputCounter++;
if (!sawInputEOS) {
inputBufIndex = codec.dequeueInputBuffer(kTimeOutUs);
bufIndexCheck++;
// Log.d(LOG_TAG, " bufIndexCheck " + bufIndexCheck);
if (inputBufIndex >= 0) {
ByteBuffer dstBuf = codecInputBuffers[inputBufIndex];
int sampleSize =
extractor.readSampleData(dstBuf, 0 /* offset */);
//Log.d(LOG_TAG, "SampleLength = " + String.valueOf(sampleSize));
long presentationTimeUs = 0;
if (sampleSize < 0) {
Log.d(LOG_TAG, "saw input EOS.");
sawInputEOS = true;
sampleSize = 0;
} else {
presentationTimeUs = extractor.getSampleTime();
}
// can throw illegal state exception (???)
codec.queueInputBuffer(
inputBufIndex,
0 /* offset */,
sampleSize,
presentationTimeUs,
sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
if (!sawInputEOS) {
extractor.advance();
}
}
else
{
Log.e(LOG_TAG, "inputBufIndex " +inputBufIndex);
}
}
int res = codec.dequeueOutputBuffer(info, kTimeOutUs);
if (res >= 0) {
//Log.d(LOG_TAG, "got frame, size " + info.size + "/" + info.presentationTimeUs);
if (info.size > 0) {
noOutputCounter = 0;
}
int outputBufIndex = res;
ByteBuffer buf = codecOutputBuffers[outputBufIndex];
final byte[] chunk = new byte[info.size];
buf.get(chunk);
buf.clear();
audioTrack.write(chunk,0,chunk.length);
if(this.mState != State.Playing)
{
mDelegateHandler.onRadioPlayerPlaybackStarted(AudioTrackPlayer.this);
}
this.mState = State.Playing;
codec.releaseOutputBuffer(outputBufIndex, false /* render */);
if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
Log.d(LOG_TAG, "saw output EOS.");
sawOutputEOS = true;
}
} else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
codecOutputBuffers = codec.getOutputBuffers();
Log.d(LOG_TAG, "output buffers have changed.");
} else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
MediaFormat oformat = codec.getOutputFormat();
Log.d(LOG_TAG, "output format has changed to " + oformat);
} else {
Log.d(LOG_TAG, "dequeueOutputBuffer returned " + res);
}
}
Log.d(LOG_TAG, "stopping...");
relaxResources(true);
this.mState = State.Stopped;
doStop = true;
// attempt reconnect
if(sawOutputEOS)
{
try {
AudioTrackPlayer.this.play();
return;
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
if(noOutputCounter >= noOutputCounterLimit)
{
mDelegateHandler.onRadioPlayerError(AudioTrackPlayer.this);
}
else
{
mDelegateHandler.onRadioPlayerStopped(AudioTrackPlayer.this);
}
}
Have you monitored the CPU frequency while your application is running? The CPU governor is probably scaling the CPU up on touch and scaling it back down on a timer. Raising your background thread's priority to THREAD_PRIORITY_DEFAULT will probably fix the issue; the default priority for an AsyncTask is quite low and not appropriate for audio.
You could also increase the size of the AudioTrack's buffer to some multiple of the value returned by getMinBufferSize; that method only returns the minimum possible buffer for the class to operate and does not guarantee smooth playback.
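A rough sketch of both suggestions (the 4x multiplier is only an example; tune it for your stream):
@Override
protected Void doInBackground(Void... values) {
    // lift this worker above AsyncTask's default background priority
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_DEFAULT);
    AudioTrackPlayer.this.decodeLoop();
    return null;
}

// ...and give AudioTrack more headroom than the bare minimum:
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_5POINT1, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_5POINT1, AudioFormat.ENCODING_PCM_16BIT,
        4 * minBuf,   // some multiple of the minimum, not the minimum itself
        AudioTrack.MODE_STREAM);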
I have a task to write a program that uses one camera and one Kinect, does a lot of video processing, and then controls a robot.
This code just displays the captured video frames without any processing, but I only get about 20 frames/s. The same simple frame-display program in Matlab gave me 29 frames/s. I was hoping to gain some speed in Java, but it doesn't look that way. Am I doing something wrong? If not, how can I increase the speed?
public class Video implements Runnable {
//final int INTERVAL=1000;///you may use interval
IplImage image;
CanvasFrame canvas = new CanvasFrame("Web Cam");
public Video() {
canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
}
@Override
public void run() {
FrameGrabber grabber = new VideoInputFrameGrabber(0); // 1 for next camera
int i=0;
try {
grabber.start();
IplImage img;
int g = 0;
long start2 = 0;
long stop = System.nanoTime();
long diff = 0;
start2 = System.nanoTime();
while (true) {
img = grabber.grab();
if (img != null) {
// cvFlip(img, img, 1);// l-r = 90_degrees_steps_anti_clockwise
// cvSaveImage((i++)+"-aa.jpg", img);
// show image on window
canvas.showImage(img);
}
g++;
if(g%200 == 0){
stop = System.nanoTime();
diff = stop - start2;
double d = (float)diff;
double dd = d/1000000000;
double dv = dd/g;
System.out.printf("frames = %.2f\n",1/dv);
}
//Thread.sleep(INTERVAL);
}
} catch (Exception e) {
e.printStackTrace(); // don't swallow grabber errors silently
}
}
public static void main(String[] args) {
Video gs = new Video();
Thread th = new Thread(gs);
th.start();
}
}
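One thing worth trying (a sketch; whether the camera honours these requests depends on the driver, and many webcams top out around 30 fps regardless of the code): ask the grabber for an explicit capture size and frame rate before start(), since smaller frames are cheaper to grab and draw.
FrameGrabber grabber = new VideoInputFrameGrabber(0);
grabber.setImageWidth(640);   // request a smaller capture size
grabber.setImageHeight(480);
grabber.setFrameRate(30);     // a request only; the driver may ignore it
grabber.start();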
I am trying to delay my live mjpeg video feed by 10 seconds.
I am trying to modify this code, but I am unable to incorporate the MJPEG URL.
It keeps saying 'The constructor CaptureMJPEG(String, int, int, int) is undefined' when I try to put the URL in.
The original line said:
capture = new CaptureMJPEG(this, capture_xsize, capture_ysize, capture_frames);
I changed it to:
capture = new CaptureMJPEG ("http:/url.com/feed.mjpg", capture_xsize, capture_ysize, capture_frames);
import processing.video.*;
import it.lilik.capturemjpeg.*;
Capture myCapture;
CaptureMJPEG capture;
VideoBuffer monBuff;
int display_xsize = 800; // display size
int display_ysize = 600;
int capture_xsize = 320; // capture size
int capture_ysize = 240;
int delay_time = 10; // delay in seconds
int capture_frames = 20; // capture frames per second
void setup() {
size(display_xsize,display_ysize, P3D);
// Warning: VideoBuffer must be initiated BEFORE capture- or movie-events start
monBuff = new VideoBuffer(delay_time*capture_frames, capture_xsize,capture_ysize);
capture = new CaptureMJPEG ("http:/url.com/feed.mjpg", capture_xsize, capture_ysize, capture_frames);
}
void captureEvent(Capture capture) {
capture.read();
monBuff.addFrame( capture );
}
void draw() {
PImage bufimg = monBuff.getFrame();
PImage tmpimg = createImage(bufimg.width,bufimg.height,RGB);
tmpimg.copy(bufimg,0,0,bufimg.width,bufimg.height,0,0,bufimg.width,bufimg.height);
tmpimg.resize(display_xsize,display_ysize);
image( tmpimg, 0, 0 );
}
class VideoBuffer
{
PImage[] buffer;
int inputFrame = 0;
int outputFrame = 0;
int frameWidth = 0;
int frameHeight = 0;
/*
parameters:
frames - the number of frames in the buffer (fps * duration)
width - the width of the video
height - the height of the video
*/
VideoBuffer( int frames, int width, int height )
{
buffer = new PImage[frames];
for(int i = 0; i < frames; i++)
{
this.buffer[i] = new PImage(width, height);
}
this.inputFrame = frames - 1;
this.outputFrame = 0;
this.frameWidth = width;
this.frameHeight = height;
}
// return the current "playback" frame.
PImage getFrame()
{
int frr;
if(this.outputFrame>=this.buffer.length)
frr = 0;
else
frr = this.outputFrame;
return this.buffer[frr];
}
// Add a new frame to the buffer.
void addFrame( PImage frame )
{
// copy the new frame into the buffer.
System.arraycopy(frame.pixels, 0, this.buffer[this.inputFrame].pixels, 0, this.frameWidth * this.frameHeight);
// advance the input and output indexes
this.inputFrame++;
this.outputFrame++;
// wrap the values..
if(this.inputFrame >= this.buffer.length)
{
this.inputFrame = 0;
}
if(this.outputFrame >= this.buffer.length)
{
this.outputFrame = 0;
}
}
}
Reading the reference docs:
https://bytebucket.org/nolith/capturemjpeg/wiki/api/index.html
These are the only two constructors listed:
CaptureMJPEG(PApplet parent, String url)
Creates a CaptureMJPEG without HTTP Auth credential
CaptureMJPEG(PApplet parent, String url, String username, String password)
Creates a CaptureMJPEG with HTTP Auth credential
So the first argument must always point to your Processing applet (PApplet) instance, for example:
capture = new CaptureMJPEG (this, "http:/url.com/feed.mjpg", capture_xsize, capture_ysize, capture_frames);