Can notification.setSound() be used with multiple audio sources? - java

Expected: A different audio file plays for each if statement
Actual: The first audio file plays for all notifications. The notification text updates correctly, so the if conditions are being matched as expected.
Is there a way to have the correct audio file play?
if (alarm.getMINS().equals("0")) {
    alarmSound = audioAlert;
    text = context.getString(R.string.message);
} else if (alarm.getMINS().equals("2")) {
    alarmSound = audioReminder;
    text = context.getString(R.string.reminder);
} else if (alarm.getMINS().equals("5")) {
    alarmSound = audioImportant;
    text = context.getString(R.string.important);
}
notification.setSound(alarmSound, audioAttributes);
...
.setContentText(text)

Setting a sound on a per-notification basis has been deprecated since Android 8.0 (API 26). Sounds are now set on the notification channel, and each channel has exactly one sound, so you need a separate channel for each sound.
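A minimal sketch of the channel-per-sound approach (API 26+), reusing audioAlert, audioReminder, and audioAttributes from the question; the channel ids, names, and icon are placeholders:

// A sketch, not the poster's code: one NotificationChannel per sound.
// Create the channels once, e.g. in Application.onCreate().
void createChannels(Context context, Uri audioAlert, Uri audioReminder,
                    AudioAttributes audioAttributes) {
    NotificationManager nm = context.getSystemService(NotificationManager.class);

    NotificationChannel alerts =
            new NotificationChannel("alerts", "Alerts", NotificationManager.IMPORTANCE_HIGH);
    alerts.setSound(audioAlert, audioAttributes);
    nm.createNotificationChannel(alerts);

    NotificationChannel reminders =
            new NotificationChannel("reminders", "Reminders", NotificationManager.IMPORTANCE_DEFAULT);
    reminders.setSound(audioReminder, audioAttributes);
    nm.createNotificationChannel(reminders);
}

// When posting, choose the channel (and therefore the sound) instead of calling setSound():
Notification n = new Notification.Builder(context, "reminders")
        .setSmallIcon(R.drawable.ic_alarm) // placeholder icon
        .setContentText(text)
        .build();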

Related

NFC stops working after writing on iCODE tags, Android 6.0

I'm developing an Android application to read and write different NFC tags. I've encountered a problem with two specific tags, iCODE SLI X and iCODE SLI S. After I write information to the tag, I'm not able to do any other action; it looks like NFC stops working correctly, because if I restart it, it will actually read the tag. This does not happen if I use another tag type such as MIFARE Classic 1K. The Android version is 6.0.
On the other hand, if I try the application on another device with Android 6.1 or 7.0 (exact same code), iCODE SLI X and iCODE SLI S work fine, but MIFARE Classic 1K does not.
Besides trying different code samples, I have also tried two applications on these devices. With "NFC Tools" you can see exactly the same problems that I have in my application. "TagWriter" from NXP is the only application that works like a charm with all types of tags.
Here is the code I'm using to write the information to the tag:
@Override
protected void onNewIntent(Intent intent) {
    if (NfcAdapter.ACTION_TAG_DISCOVERED.equals(intent.getAction())) {
        Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
        if (tag != null) {
            try {
                Ndef ndef = Ndef.get(tag);
                NdefRecord text1 = new NdefRecord(NdefRecord.TNF_WELL_KNOWN,
                        youstring1.getBytes(Charset.forName("US-ASCII")),
                        null,
                        youstring1.getBytes());
                NdefRecord text2 = new NdefRecord(NdefRecord.TNF_WELL_KNOWN,
                        youstring2.getBytes(Charset.forName("US-ASCII")),
                        null,
                        youstring2.getBytes());
                NdefRecord[] records = {text1, text2};
                NdefMessage message = new NdefMessage(records);
                if (ndef != null) {
                    NdefMessage ndefMesg = ndef.getCachedNdefMessage();
                    if (ndefMesg != null) {
                        ndef.connect();
                        ndef.writeNdefMessage(message);
                        ndef.close();
                    }
                } else {
                    NdefFormatable ndefFormatable = NdefFormatable.get(tag);
                    if (ndefFormatable != null) {
                        // initialize tag with new NDEF message
                        try {
                            ndefFormatable.connect();
                            ndefFormatable.format(message);
                        } finally {
                            try {
                                ndefFormatable.close();
                            } catch (Exception e) {
                                // ignore close failures
                            }
                        }
                    }
                }
            } catch (FormatException | IOException ue) {
                // errors are silently swallowed here
            }
        }
    }
}
I can't understand what I'm possibly doing wrong ...
I managed to understand what was wrong with my application, so I'm posting the answer myself. Here is the thing:
When I try to write the information to the tag, I first check whether the tag is formatted for the Ndef technology; if not, I use NdefFormatable to format it.
The strange thing is that a certain tag supports NdefFormatable on some devices but not on others (I'm not sure whether this is related to the NFC chip itself or the OS version). This was causing NFC to misbehave or stop working entirely after I tried to use NdefFormatable.
What I do now is use a function that returns the technologies available on the tag. Depending on the result, I use NdefFormatable or NfcV (for iCODE tags) to read from or write to the tag, as sketched below.
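A minimal sketch of such a function, based on Tag.getTechList(); the method name and the fallback order are my own assumptions, not the poster's exact code:

import android.nfc.Tag;
import android.nfc.tech.Ndef;
import android.nfc.tech.NdefFormatable;
import android.nfc.tech.NfcV;
import java.util.Arrays;
import java.util.List;

// Returns the technology class name to use for this tag, or null if none fits.
private static String pickTechnology(Tag tag) {
    List<String> techs = Arrays.asList(tag.getTechList());
    if (techs.contains(Ndef.class.getName())) {
        return Ndef.class.getName();            // already NDEF-formatted: safe to write NDEF
    }
    if (techs.contains(NfcV.class.getName())) {
        return NfcV.class.getName();            // iCODE SLI X / SLI S: use raw ISO 15693 commands
    }
    if (techs.contains(NdefFormatable.class.getName())) {
        return NdefFormatable.class.getName();  // this device reports it can format the tag
    }
    return null;                                // unsupported tag
}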

JavaFX audio doesn't seem to be playing

I am fairly new to JavaFX and recently wanted to play audio from an MP3 file rather than a WAV. From what I can tell I am doing things correctly, and I don't get any errors, but I also don't hear any sound.
I will post the parts of my code that matter below. If I'm missing something, please let me know. Thanks.
try {
    URL sound = getClass().getResource("/resources/origin.mp3");
    Media hit = new Media(sound.toExternalForm());
    musicPlayer = new MediaPlayer(hit);
    musicPlayer.setVolume(1.0);
} catch (Exception e) {
    System.out.println("whoops: " + e);
}
checkMusic();
Check Music Method:
public void checkMusic() {
    if (music)
        musicPlayer.setAutoPlay(true);
    else
        musicPlayer.stop();
}
I also tried musicPlayer.play() as well.
EDIT
And yes, I am sure the code within the if statement runs; I have checked it with println calls, and they print. The music boolean is just a settings toggle for the program/game.
Instead of
Media hit = new Media(sound.toExternalForm());
try this:
final Media media = new Media(sound.toString());
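For context, a self-contained sketch of the suggested change; keeping the MediaPlayer in a field also matters, since a player that is only referenced locally can be garbage-collected before it ever plays (a common cause of silent failure). The class and method names here are illustrative:

import java.net.URL;
import javafx.scene.media.Media;
import javafx.scene.media.MediaPlayer;

public class MusicController {
    // Field reference keeps the player from being garbage-collected mid-playback.
    private MediaPlayer musicPlayer;

    public void startMusic() {
        URL sound = getClass().getResource("/resources/origin.mp3");
        Media media = new Media(sound.toString());
        musicPlayer = new MediaPlayer(media);
        musicPlayer.setVolume(1.0);
        musicPlayer.setAutoPlay(true);
    }
}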

Manually add a song to MediaStore as a music track

I want to create a music player that can download a song online and add it to the MediaStore. I'm using DownloadManager and allowing MediaScanner to scan the file when the download completes.
DownloadManager.Request request ....
request.allowScanningByMediaScanner();
...
downloadManager.enqueue(request);
It works fine on Android 5.0 and above.
But the song was encoded with a codec (Opus) that is not supported on Android versions below Lollipop, so MediaScanner doesn't add the file to the MediaStore.
That's my problem: my app can play Opus files, but the song doesn't exist in the MediaStore after it has been downloaded, so my app can't find it.
How can I force MediaScanner to add the downloaded file to MediaStore.Audio as a music track? If that is not possible, how can I manually add the song to MediaStore.Audio after the download completes:
public class BroadcastDownloadComplete extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (intent.getAction().equals("android.intent.action.DOWNLOAD_COMPLETE")) {
            //addSongToMediaStore(intent);
        }
    }
}
From the source code here, we can see that the final implementation of the scanner has two steps for scanning an audio file. If either of these two steps fails, the audio file will not be inserted into the media provider.
Step 1: check the file extension
static bool FileHasAcceptableExtension(const char *extension) {
    static const char *kValidExtensions[] = {
        ".mp3", ".mp4", ".m4a", ".3gp", ".3gpp", ".3g2", ".3gpp2",
        ".mpeg", ".ogg", ".mid", ".smf", ".imy", ".wma", ".aac",
        ".wav", ".amr", ".midi", ".xmf", ".rtttl", ".rtx", ".ota",
        ".mkv", ".mka", ".webm", ".ts", ".fl", ".flac", ".mxmf",
        ".avi", ".mpeg", ".mpg"
    };
    static const size_t kNumValidExtensions =
        sizeof(kValidExtensions) / sizeof(kValidExtensions[0]);
    for (size_t i = 0; i < kNumValidExtensions; ++i) {
        if (!strcasecmp(extension, kValidExtensions[i])) {
            return true;
        }
    }
    return false;
}
More extensions have been added since Android 5.0. The common container for the Opus codec is Ogg, and the .ogg extension was already accepted before Android 5.0. Assuming your audio file's extension is .ogg, the scanning process passes this step.
Step 2: retrieve the metadata
Once the first step passes, the scanner needs to retrieve the media's metadata for the later database insertion. I think the scanner does the codec-level checking at this step.
sp<MediaMetadataRetriever> mRetriever(new MediaMetadataRetriever);
int fd = open(path, O_RDONLY | O_LARGEFILE);
status_t status;
if (fd < 0) {
    // couldn't open it locally, maybe the media server can?
    status = mRetriever->setDataSource(path);
} else {
    status = mRetriever->setDataSource(fd, 0, 0x7ffffffffffffffL);
    close(fd);
}
if (status) {
    return MEDIA_SCAN_RESULT_ERROR;
}
On Android versions before 5.0, the scanner may fail at this step: because there is no built-in Opus codec support, setDataSource will fail, and the media file won't be added to the media provider.
Suggested solution
Because we know the audio file would be added to
MediaStore.Audio.Media.EXTERNAL_CONTENT_URI
we can do the database operation manually. If you want your audio file to stay consistent with the other audio files in the database, you have to retrieve all the metadata yourself. Since you can play the Opus file, I think it should be easy to retrieve the metadata.
// retrieve more metadata, duration etc.
ContentValues contentValues = new ContentValues();
contentValues.put(MediaStore.Audio.AudioColumns.DATA, "/mnt/sdcard/Music/example.opus");
contentValues.put(MediaStore.Audio.AudioColumns.TITLE, "Example track");
contentValues.put(MediaStore.Audio.AudioColumns.DISPLAY_NAME, "example");
// more columns should be filled from here
Uri uri = getContentResolver().insert(MediaStore.Audio.Media.EXTERNAL_CONTENT_URI, contentValues);
Log.d(TAG, uri.toString());
After that, your app can find the audio file.
getContentResolver().query(MediaStore.Audio.Media.EXTERNAL_CONTENT_URI...
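For instance, a hedged sketch of that lookup; the projection and selection here are illustrative only, matching the DISPLAY_NAME inserted above:

Cursor cursor = getContentResolver().query(
        MediaStore.Audio.Media.EXTERNAL_CONTENT_URI,
        new String[] { MediaStore.Audio.Media._ID, MediaStore.Audio.Media.TITLE },
        MediaStore.Audio.Media.DISPLAY_NAME + "=?",
        new String[] { "example" },
        null);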
You can use MediaScannerConnection to ask Android to scan a file to be included as media. You'll want to use the scanFile() static method.
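A minimal sketch of that call; the path and MIME type are placeholders, and, as discussed above, scanners on pre-5.0 devices may still reject Opus content:

MediaScannerConnection.scanFile(
        context,
        new String[] { "/mnt/sdcard/Music/example.opus" },
        new String[] { "audio/ogg" },
        new MediaScannerConnection.OnScanCompletedListener() {
            @Override
            public void onScanCompleted(String path, Uri uri) {
                Log.d(TAG, "scanned " + path + " -> " + uri);
            }
        });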

Custom streaming implementation

I'm trying to implement my own version of streaming. I'm sending byte arrays over a websocket. When the first message arrives, I write it to a temporary file and use Android's MediaPlayer to play that file. For the first message everything works fine: I turn the byte array into an MP3 and audio comes out. However, I'm not really sure how to keep writing to the file each time a new message comes in.
some example code
File test;
FileOutputStream fos;
MediaPlayer mediaPlayer;
FileInputStream MyFile;
Every time a message comes through, this code gets run:
try {
    if (fos == null) {
        test = File.createTempFile("TCL", "mp3", getCacheDir());
        fos = new FileOutputStream(test);
        fos.write(bytearray);
        mediaPlayer = new MediaPlayer();
        MyFile = new FileInputStream(test);
        mediaPlayer.setDataSource(MyFile.getFD());
        mediaPlayer.prepare();
        if (!mediaPlayer.isPlaying()) {
            mediaPlayer.start();
        }
    } else {
        fos.write(bytearray);
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
I thought I could just keep writing incoming byte[]s to the file, but that doesn't seem to work. Any advice would be appreciated.
What you're trying to do (play the audio in a file that keeps growing indefinitely) is not supported by MediaPlayer. Instead, look into decoding the audio yourself and sending the raw PCM data to AudioTrack. It's a lot more work, but AudioTrack is the easiest way to progressively play a stream of audio data.
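A rough sketch of the AudioTrack route, assuming the websocket messages have already been decoded to 16-bit PCM; the sample rate and channel mask below are assumptions that must match the decoder's output:

int sampleRate = 44100; // assumption: must match the decoder output
int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);
track.play();

// Then, for each decoded chunk arriving from the websocket
// (write() blocks until the bytes are queued for playback):
track.write(pcmChunk, 0, pcmChunk.length);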

Route MIC input through headset and AUDIO output through phone's speaker

Is there a way to route microphone input through the headset while simultaneously routing audio output through the smartphone's speaker?
I've been looking around for hours now, and I saw that it's clearly impossible on iOS, but what about Android?
I'm using a Samsung Galaxy S4.
Here is the part of my code that routes the audio output through the speaker even if the headset is plugged in:
AudioManager speakerOutput = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
speakerOutput.setMode(AudioManager.MODE_IN_CALL);
speakerOutput.setSpeakerphoneOn(true);
But when I then tried to control my app with the headset's microphone, nothing happened. So I tried with the smartphone's microphone, and that obviously works.
It doesn't seem like any of the other AudioManager.MODEs route only the audio output while leaving the mic input on the headset. So what can I do now, short of building a new kernel, modifying drivers, the HAL, etc.?
Thanks, C.
There is no way to loop back from the headset mic to the speaker on Android without changing the HAL or the driver. If you can change the hardware, it could work.
If you use AudioManager.STREAM_RING, the audio should be routed to both the loudspeaker and the wired headset, just as when the ringtone is played for an incoming call (remove the setMode and setSpeakerphoneOn calls).
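A minimal sketch of that suggestion (soundUri is a placeholder); the stream type must be set before prepare():

MediaPlayer player = new MediaPlayer();
player.setAudioStreamType(AudioManager.STREAM_RING); // ring stream: speaker + wired headset
player.setDataSource(context, soundUri);
player.prepare();
player.start();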
Yes, it's possible. I successfully did it a few days ago. There are two solutions:
Solution 1:
Suppose you are in normal mode with a headset connected. We just need to call the setPreferredDevice function to set our preferred audio input/output device. The input source already comes from the headset, so we only need to route the audio output to the phone speaker. Here is my sample code using MediaPlayer:
@RequiresApi(Build.VERSION_CODES.P)
private fun playAudioFromPhoneSpeaker(context: Context, audioResId: Int) {
    val mMediaPlayer: MediaPlayer = MediaPlayer.create(context, audioResId)
    val mCurrentDevice: AudioDeviceInfo? = mMediaPlayer.preferredDevice
    val mDeviceSpeaker: AudioDeviceInfo? = getDeviceOutputSpeaker(context)
    mDeviceSpeaker?.let { speaker ->
        if (mCurrentDevice != speaker) {
            val result: Boolean = mMediaPlayer.setPreferredDevice(speaker)
            Log.i("PreferredDevice", "is set as speaker: $result")
        }
    }
    mMediaPlayer.start()
    mMediaPlayer.setOnCompletionListener { mediaPlayer ->
        mediaPlayer.release()
    }
}
@RequiresApi(Build.VERSION_CODES.M)
private fun getDeviceOutputSpeaker(context: Context): AudioDeviceInfo? {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val mAudioDeviceOutputList = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
    mAudioDeviceOutputList?.filterNotNull()?.forEach { speaker ->
        if (speaker.type == AudioDeviceInfo.TYPE_BUILTIN_SPEAKER) {
            return speaker
        }
    }
    return null
}
The positive thing about this solution is that we don't need to think about the input source. We only change the audio output source, and it has no impact on the input source.
The problem is that MediaPlayer's setPreferredDevice only exists since Android 9 (API 28), so this solution is only supported from Android 9 onward.
Solution 2:
In the second solution we turn on the device speaker via AudioManager. After the speaker is turned on, both the audio input and output sources move to the device's internal speaker and mic, so we then have to route the mic back to the headset. Sample code:
private val mAudioRecord: AudioRecord? = null

private fun recordAudioFromHeadphone(context: Context) {
    // Step 1: Turn on speaker
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    audioManager.isSpeakerphoneOn = true
    // Step 2: Init AudioRecord (TODO for you)
    // Step 3: Route mic to headset
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
        setPreferredInputMic(getWiredHeadPhoneMic(context))
    }
}
@RequiresApi(Build.VERSION_CODES.M)
private fun setPreferredInputMic(mAudioDeviceInfo: AudioDeviceInfo?) {
    val result: Boolean? = mAudioRecord?.setPreferredDevice(mAudioDeviceInfo)
}
@RequiresApi(Build.VERSION_CODES.M)
private fun getWiredHeadPhoneMic(context: Context): AudioDeviceInfo? {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    // Query input devices, since we are looking for a microphone.
    val mAudioDeviceInputList = audioManager.getDevices(AudioManager.GET_DEVICES_INPUTS)
    mAudioDeviceInputList?.filterNotNull()?.forEach { device ->
        if (device.type == AudioDeviceInfo.TYPE_WIRED_HEADSET) {
            return device
        }
    }
    return null
}
The positive thing about this solution is that it is supported from Android 6. The negative is that it is a little more complex, and you have to manage the voice input source yourself.
AudioRecord, MediaRecorder, and AudioTrack also support this kind of solution, as sketched below.
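For example, the same routing idea applied to an AudioTrack (API 23+); the variable names here are illustrative:

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_OUTPUTS)) {
    if (device.getType() == AudioDeviceInfo.TYPE_BUILTIN_SPEAKER) {
        audioTrack.setPreferredDevice(device); // route this track's playback to the speaker
        break;
    }
}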
Thanks
