I am using the Android Equalizer API to create a high-pass filter, but even if I set every band to -1500 it does not seem to have any effect. The audio plays fine, just without any EQ. Here is my code:
private void attachEq(int audioSessionId) {
    Equalizer eq = new Equalizer(100, audioSessionId);
    short[] levelRange = eq.getBandLevelRange(); // band level range in millibels
    short minLvl = levelRange[0];
    short maxLvl = levelRange[1];
    eq.setBandLevel((short) 4, minLvl);
    eq.setBandLevel((short) 3, minLvl);
    eq.setBandLevel((short) 2, minLvl);
    eq.setBandLevel((short) 1, minLvl);
    eq.setBandLevel((short) 0, minLvl);
}
I am getting the audio session ID with
at.getAudioSessionId()
where at is an already initialized AudioTrack. As I said, the AudioTrack plays fine, but the equalizer doesn't seem to have any effect.
Edit: Do I have to set the band levels before I call at.play() or after? I am doing it before at.play() and it doesn't seem to work.
I figured it out! I wasn't calling eq.setEnabled(true).
Now it works!
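For reference, here is what the fixed method looks like (same priority and minimum level as above; looping over getNumberOfBands() is just a tidier way to hit every band):
private void attachEq(int audioSessionId) {
    Equalizer eq = new Equalizer(100, audioSessionId);
    short minLvl = eq.getBandLevelRange()[0];
    // Pull every band down to its minimum level.
    for (short band = 0; band < eq.getNumberOfBands(); band++) {
        eq.setBandLevel(band, minLvl);
    }
    eq.setEnabled(true); // without this the equalizer stays bypassed
}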
Struggling with Android MediaCodec, I'm looking for a straightforward way to change the resolution of a video file on Android.
For now I'm trying a single-threaded transcoding method that does all the work step by step so I can understand it well. At a high level it looks as follows:
public void TranscodeVideo()
{
    // Extract
    MediaTrack[] tracks = ExtractTracks(InputPath);
    // Decode
    MediaTrack videoTrack = tracks.Where(o => o.IsVideo).FirstOrDefault();
    MediaTrack rawVideoTrack = DecodeTrack(videoTrack);
    // Edit?
    // ResizeVideoTrack(rawVideoTrack);
    // Encode
    MediaFormat newFormat = MediaHelper.CreateVideoOutputFormat(videoTrack.Format);
    MediaTrack encodedVideoTrack = EncodeTrack(rawVideoTrack, newFormat);
    // Mux
    encodedVideoTrack.Index = videoTrack.Index;
    tracks[Array.IndexOf(tracks, videoTrack)] = encodedVideoTrack;
    MuxTracks(OutputPath, tracks);
}
Extraction works fine, returning one track with audio only and one track with video only. Muxing works fine, combining the two previous tracks again. Decoding works, but I don't know how to verify it; the raw frames on the track weigh much more than the originals, so I assume it's right.
Problem
The encoder's input buffer is smaller than the raw frames, and its size is also tied to the configured encoding format, so I assume I need to resize the frames in some way, but I can't find anything useful. Am I correct about this? Am I missing something? What is the right way to resize raw video frames? Any help? :S
PS
Maybe you will notice that I'm using C# (Xamarin.Android) for more fun, but the underlying API is of course Java.
I'm using ByteBuffers, not Surfaces, because it seems easier. Using Surfaces will be my next step; any advice is welcome.
I know that the single-threaded process is highly inefficient, but it keeps things simple. Connecting the decoder output buffer directly to the encoder input buffer will be another next step.
I dug through the PhilLab, Grafika and Bigflake examples, but nothing there seems very useful for me.
I'm avoiding ffmpeg on Android.
Thank you everyone for your time.
Going off of the comment above, here is how to implement libVLC.
Add this to your app root's build.gradle:
allprojects {
    repositories {
        ...
        maven {
            url 'https://jitpack.io'
        }
    }
}
Add this to your dependent app's build.gradle:
dependencies {
    ...
    implementation 'com.github.masterwok:libvlc-android-sdk:3.0.13'
}
Here is an example of loading an RTSP stream in an activity:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.camera_stream_layout);

    // Get URL
    this.rtspUrl = getIntent().getExtras().getString(RTSP_URL);
    Log.d(TAG, "Playing back " + rtspUrl);

    this.mSurface = findViewById(R.id.camera_surface);
    this.holder = this.mSurface.getHolder();

    ArrayList<String> options = new ArrayList<>();
    options.add("-vvv"); // verbosity
    // Add VLC transcoder options here
    this.libvlc = new LibVLC(getApplicationContext(), options);
    this.holder.setKeepScreenOn(true);
    //this.holder.setFixedSize();

    // Create media player
    this.mMediaPlayer = new MediaPlayer(this.libvlc);
    this.mMediaPlayer.setEventListener(this.mPlayerListener);

    // Set up video output
    final IVLCVout vout = this.mMediaPlayer.getVLCVout();
    vout.setVideoView(this.mSurface);

    // Size the video to fit the app screen
    DisplayMetrics displayMetrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
    ViewGroup.LayoutParams videoParams = this.mSurface.getLayoutParams();
    videoParams.width = displayMetrics.widthPixels;
    videoParams.height = displayMetrics.heightPixels;
    vout.setWindowSize(videoParams.width, videoParams.height);
    vout.addCallback(this);
    vout.attachViews();

    final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
    // Use m.addOption(...) to add VLC transcoder options here
    this.mMediaPlayer.setMedia(m);
    this.mMediaPlayer.play();
}
The documentation for the VLC transcoder options is here:
https://wiki.videolan.org/Documentation:Streaming_HowTo_New/
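For illustration, per-media options are added with Media.addOption() before setMedia(); the option strings below are examples following the VLC command-line syntax from that wiki, not something tested here:
final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
m.addOption(":network-caching=1500"); // buffer 1.5 s of network data
m.addOption(":rtsp-tcp");             // force RTSP over TCP
this.mMediaPlayer.setMedia(m);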
You are right, the input buffer size of the encoder is smaller because it expects input of the specified dimensions. The encoder, like the name suggests, only encodes.
I read your question as more of a "why" than a "how" question, so I'll only point you to where you'll find the whys.
The decoded frame is a YUV image (I suggest quickly skimming through the Wikipedia article), usually NV21 if I'm not mistaken, though it might differ from device to device. For the resizing I suggest you use a library, as every plane of the image needs to be scaled down differently and a library usually takes care of the filtering. Check out libYUV. If you are interested in the actual resizing algorithms, check out this, and for implementations, this.
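To make the plane layout concrete, here is a minimal, untested sketch of a nearest-neighbor NV21 downscale in plain Java (a real implementation should use libYUV, which also filters properly):
// NV21 = full-resolution Y plane followed by an interleaved V/U plane at half resolution.
public static byte[] scaleNv21(byte[] src, int srcW, int srcH, int dstW, int dstH) {
    byte[] dst = new byte[dstW * dstH * 3 / 2];
    // Y plane: one byte per pixel.
    for (int y = 0; y < dstH; y++) {
        int sy = y * srcH / dstH;
        for (int x = 0; x < dstW; x++) {
            dst[y * dstW + x] = src[sy * srcW + x * srcW / dstW];
        }
    }
    // V/U plane: srcW bytes per row, srcH / 2 rows, two bytes (V, U) per 2x2 pixel block.
    int srcOff = srcW * srcH, dstOff = dstW * dstH;
    for (int y = 0; y < dstH / 2; y++) {
        int sy = y * (srcH / 2) / (dstH / 2);
        for (int x = 0; x < dstW / 2; x++) {
            int sx = x * (srcW / 2) / (dstW / 2);
            dst[dstOff + y * dstW + 2 * x]     = src[srcOff + sy * srcW + 2 * sx];     // V
            dst[dstOff + y * dstW + 2 * x + 1] = src[srcOff + sy * srcW + 2 * sx + 1]; // U
        }
    }
    return dst;
}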
If you are not required to handle the decoding and encoding with ByteBuffers, I suggest using a Surface, as you already mentioned. It has multiple benefits over decoding to ByteBuffers.
It is more memory efficient, as there is no copy between the native buffer and an app-allocated buffer; the native buffers are simply swapped from and to the surface.
If you plan to render the frame, be it for resizing or displaying, it can be done by the device's graphics processor. On how to do that, check out Bigflake's DecodeEditEncode test.
I hope this answers some of your questions.
I am working on a simple music app on Android, and I have tried adding EnvironmentalReverb and PresetReverb to a MediaPlayer (WAV and M4A formats), but the reverb doesn't apply: there is no change when the audio plays. I have checked whether my device supports the reverb using the code below, and it does. I have looked at similar questions on Stack Overflow, but there isn't an answer that works.
final AudioEffect.Descriptor[] effects = AudioEffect.queryEffects();
// Determine available/supported effects
for (final AudioEffect.Descriptor effect : effects) {
    Log.d("Effects", effect.name.toString() + ", type: " + effect.type.toString());
}
The code used for EnvironmentalReverb and PresetReverb is below
First try
EnvironmentalReverb eReverb = new EnvironmentalReverb(1, 0);
eReverb.setReverbDelay(85);
eReverb.setEnabled(true);
mMediaPlayer.attachAuxEffect(eReverb.getId());
mMediaPlayer.setAuxEffectSendLevel(1.0f);
Second try
PresetReverb mReverb = new PresetReverb(1, 0);
mReverb.setPreset(PresetReverb.PRESET_LARGEROOM);
mReverb.setEnabled(true);
mMediaPlayer.attachAuxEffect(mReverb.getId());
mMediaPlayer.setAuxEffectSendLevel(1.0f);
Both setEnabled(true) calls return 0 (SUCCESS), but neither has any audible effect. Can someone please point me in the right direction? I am not sure what is wrong with the implementation.
Answering my own question so it can be helpful for someone else.
I wasn't able to get the PresetReverb to work. The EnvironmentalReverb, however, was working; to confirm it, I had to add seek bars for the room level and the reverb level so I could alter them in real time.
EnvironmentalReverb eReverb = new EnvironmentalReverb(0, 0);
mMediaPlayer.attachAuxEffect(eReverb.getId());
mMediaPlayer.setAuxEffectSendLevel(1.0f);
I enabled the reverb on a button click and then used the seek bars to change the room level and the reverb level.
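The wiring looked roughly like this (a sketch rather than my exact code; the view id is made up, the millibel range comes from the EnvironmentalReverb documentation, and eReverb is assumed to be a field):
SeekBar roomLevelBar = (SeekBar) findViewById(R.id.room_level); // hypothetical id
roomLevelBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar bar, int progress, boolean fromUser) {
        // Map progress onto the room level range of -9000..0 millibels.
        eReverb.setRoomLevel((short) (-9000 + progress * 9000 / bar.getMax()));
    }
    @Override public void onStartTrackingTouch(SeekBar bar) {}
    @Override public void onStopTrackingTouch(SeekBar bar) {}
});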
I'm trying to make a desktop audio recorder application in Java, and I found JSyn, which looks good, but I don't know how to make good use of it.
I don't know how to process the audio data (e.g. analyzing the frequency domain or sample amplitudes, filtering out microphone noise, making a voice sound male or female, things like that), and I want to make those things "customizable".
The user guide says I can write a custom unit generator which extends UnitFilter, but I don't know what those values mean (double[] input, double[] output, start, limit...).
Is it possible to use "AudioSpectrumListener" with this?
It's already tough enough for this Java noob to figure out how to record sound from the microphone:
Synthesizer synth = JSyn.createSynthesizer();
LineIn micIn = new LineIn();
synth.add(micIn);
synth.start(44100, AudioDeviceManager.USE_DEFAULT_DEVICE, 2, AudioDeviceManager.USE_DEFAULT_DEVICE, 2);
File file = new File("record.wav");
try {
    WaveRecorder recorder = new WaveRecorder(synth, file);
    micIn.output.connect(0, recorder.getInput(), 0);
    micIn.output.connect(1, recorder.getInput(), 1);
    recorder.start();
    System.out.println("start recording");
    Thread.sleep(5000);
    System.out.println("stop recording");
    recorder.stop();
    recorder.close();
    synth.stopUnit(micIn);
    synth.stop();
    ...
After this, I don't know how to get at the recorded data, or how to make the sound louder or quieter, and the program doesn't exit by itself after I close the recorder and stop the synthesizer.
What did I miss stopping or closing?
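On louder/quieter: one possibility (an untested sketch using JSyn's Multiply unit, not something from this thread) is to route the signal through a gain stage instead of connecting LineIn straight to the recorder:
// Scale the mic signal before it reaches the recorder.
Multiply gain = new Multiply();
synth.add(gain);
micIn.output.connect(0, gain.inputA, 0);
gain.inputB.set(0.5); // 0.5 = quieter, 2.0 = louder (may clip)
gain.output.connect(0, recorder.getInput(), 0);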
//---
Nah, I'm just going to use TargetDataLine and process the data the way the answer to this other audio data question does.
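For anyone following the same route, a minimal TargetDataLine capture looks something like this (a sketch; the format values are assumptions and exception handling is omitted):
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;

AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false); // 16-bit stereo PCM
TargetDataLine line = AudioSystem.getTargetDataLine(fmt);      // throws LineUnavailableException
line.open(fmt);
line.start();
byte[] buf = new byte[4096];
int n = line.read(buf, 0, buf.length); // blocks until the buffer is filled
// ... analyze or filter buf[0..n) here ...
line.stop();
line.close();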
I am trying this code:
AudioManager audioManager = (AudioManager)getApplication().getSystemService(Context.AUDIO_SERVICE);
audioManager.setMode(AudioManager.MODE_IN_CALL);
audioManager.setSpeakerphoneOn(true);
And then:
Ringtone r = RingtoneManager.getRingtone(getApplicationContext(), Uri.parse("url.mp3"));
r.play();
But my app doesn't play any sound.
How can I solve my problem?
According to the OP, the best answer can be found using the MediaPlayer (a link with an example is given in the comments section of this answer).
-- Previous Edits --
I haven't tested this, so forgive me if it is buggy, but I think it may work better to set the default value for the ringtone and then call that default value. I haven't had a chance to test the code, but it should look something like the following.
To route the audio to your earpiece:
private AudioManager audioManager;
audioManager = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
audioManager.setMode(AudioManager.MODE_IN_CALL);
audioManager.setSpeakerphoneOn(false);
Then check out AudioTrack; it might be the way to go for what you want, since Ringtones have default actions based on Android's native processing. It should be something like:
InputStream in = getResources().openRawResource(R.raw.user_mp3); // openRawResource takes a resource id, not a String
int buffersize = AudioTrack.getMinBufferSize(11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM);
byte[] sound = convertStreamToByteArray(in); // helper that reads the whole stream
in.close();
audio.write(sound, 0, sound.length);
audio.play();
But be sure to set your mode back to normal with your AudioManager when you are done. I think this should work. There is also the deprecated AudioManager.ROUTE_EARPIECE constant; you might want to check how it has been replaced.
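For example, once playback is finished:
// Restore the default audio routing.
audioManager.setMode(AudioManager.MODE_NORMAL);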
Again, didn't have time to test this, just typed it up on the fly. Let me know if you find an error.
My original "ringtone" style output:
Uri soundPath = Uri.parse("uri_link_for_mp3");
RingtoneManager.setActualDefaultRingtoneUri(getApplicationContext(),
        RingtoneManager.TYPE_RINGTONE, soundPath);
Uri ringtone = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_RINGTONE); // match the type set above
Ringtone r = RingtoneManager.getRingtone(getApplicationContext(), ringtone);
r.play();
Had to type this up kind of quick, might have missed something just not thinking of it. There is a good link for this though: Setting Ringtone in Android
I'm planning on writing an application for Android 2.1 that changes the song every minute (through what I hope exists in Android: a "next" command) for whatever application is currently using the audio device.
So if I already have Spotify running in the background, playing music, can my program make it skip to the next track?
Let me know if I was unclear about anything.
Thanks in advance!
I know this is a bit of an old question, but it took me some time to find something other than what is mentioned here.
There is a workaround: broadcasting the media button action. There is one catch: the receiver can recognize whether the broadcast came from the system or from another app, so it can ignore non-system broadcasts.
Intent i = new Intent(Intent.ACTION_MEDIA_BUTTON);
synchronized (this) {
    i.putExtra(Intent.EXTRA_KEY_EVENT, new KeyEvent(KeyEvent.ACTION_DOWN, KeyEvent.KEYCODE_MEDIA_NEXT));
    sendOrderedBroadcast(i, null);
    i.putExtra(Intent.EXTRA_KEY_EVENT, new KeyEvent(KeyEvent.ACTION_UP, KeyEvent.KEYCODE_MEDIA_NEXT));
    sendOrderedBroadcast(i, null);
}
There's no universal audio transport API for music applications, so you'd need to see if the music applications you're targeting publicly expose service bindings or intents. If not, you won't be able to do this.
Just posted a relevant answer here.
Using the AudioManager's dispatchMediaKeyEvent() method with a defined KeyEvent worked for me on the latest SDK.
The system music homescreen widget sends this intent for the built-in music player:
final ComponentName serviceName = new ComponentName(context,
        MediaPlaybackService.class);
intent = new Intent(MediaPlaybackService.NEXT_ACTION);
intent.setComponent(serviceName);
pendingIntent = PendingIntent.getService(context,
        0 /* no requestCode */, intent, 0 /* no flags */);
views.setOnClickPendingIntent(R.id.control_next, pendingIntent);
But it looks like this might take some hackery to implement from outside the music app's own package, because MediaPlaybackService only accepts explicit Intents and isn't accessible from the outside. This thread seems to indicate it's possible with a bit of hackery, though.
But even then, as Roman said, not every music player will respect that Intent. You'll have to check with Spotify/Pandora/Last.fm themselves and see if they have any available intents to bind like that.
It looks like it's possible to use AudioManager to inject media keys.
Here is a snippet from another question:
this.mAudioManager = (AudioManager) this.context.getSystemService(Context.AUDIO_SERVICE);
long eventtime = SystemClock.uptimeMillis();
KeyEvent downEvent = new KeyEvent(eventtime, eventtime, KeyEvent.ACTION_DOWN, KeyEvent.KEYCODE_MEDIA_NEXT, 0);
mAudioManager.dispatchMediaKeyEvent(downEvent);
KeyEvent upEvent = new KeyEvent(eventtime, eventtime, KeyEvent.ACTION_UP, KeyEvent.KEYCODE_MEDIA_NEXT, 0);
mAudioManager.dispatchMediaKeyEvent(upEvent);
In the same way you can inject the play/pause button and some others.
I've tested it from a background service controlling YouTube, and it worked on Android 6.