I want to create a music player that can download a song online and add it to the MediaStore. I'm using DownloadManager and letting MediaScanner scan the file when the download completes.
DownloadManager.Request request ....
request.allowScanningByMediaScanner();
...
downloadManager.enqueue(request);
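For context, a fuller version of this request setup might look like the following sketch (the URL and file name are placeholders):
DownloadManager.Request request = new DownloadManager.Request(
        Uri.parse("https://example.com/song.opus")); // placeholder URL
request.setDestinationInExternalPublicDir(
        Environment.DIRECTORY_MUSIC, "example.opus"); // placeholder file name
// ask MediaScanner to pick the file up once it is downloaded
request.allowScanningByMediaScanner();
request.setNotificationVisibility(
        DownloadManager.Request.VISIBILITY_VISIBLE_NOTIFY_COMPLETED);
long downloadId = downloadManager.enqueue(request);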
It works fine on Android 5.0 and above.
But the song is encoded with a codec (Opus) that is not supported below Lollipop, so MediaScanner doesn't add the file to the MediaStore.
That's my problem: my app can play Opus, but the song doesn't exist in the MediaStore after it has been downloaded, so my app can't find it.
How can I force MediaScanner to add the downloaded file to MediaStore.Audio as a music track? If that's not possible, how can I manually add the song to MediaStore.Audio after the download completes:
public class BroadcastDownloadComplete extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (intent.getAction().equals("android.intent.action.DOWNLOAD_COMPLETE")) {
            //addSongToMediaStore(intent);
        }
    }
}
From the source code here, we can see that the final implementation of the scanner takes two steps to scan an audio file. If either step fails, the audio file will not be inserted into the media provider.
Step 1: check the file extension
static bool FileHasAcceptableExtension(const char *extension) {
    static const char *kValidExtensions[] = {
        ".mp3", ".mp4", ".m4a", ".3gp", ".3gpp", ".3g2", ".3gpp2",
        ".mpeg", ".ogg", ".mid", ".smf", ".imy", ".wma", ".aac",
        ".wav", ".amr", ".midi", ".xmf", ".rtttl", ".rtx", ".ota",
        ".mkv", ".mka", ".webm", ".ts", ".fl", ".flac", ".mxmf",
        ".avi", ".mpeg", ".mpg"
    };
    static const size_t kNumValidExtensions =
        sizeof(kValidExtensions) / sizeof(kValidExtensions[0]);
    for (size_t i = 0; i < kNumValidExtensions; ++i) {
        if (!strcasecmp(extension, kValidExtensions[i])) {
            return true;
        }
    }
    return false;
}
More extensions have been added since Android 5.0. The common container for the Opus codec is Ogg, and that extension was already accepted before Android 5.0. Assuming your audio file's extension is .ogg, scanning succeeds at this step.
Step 2: retrieve metadata
After the first step passes, the scanner needs to retrieve the media's metadata for the later database insertion. I think the scanner does the codec-level checking at this step.
sp<MediaMetadataRetriever> mRetriever(new MediaMetadataRetriever);
int fd = open(path, O_RDONLY | O_LARGEFILE);
status_t status;
if (fd < 0) {
    // couldn't open it locally, maybe the media server can?
    status = mRetriever->setDataSource(path);
} else {
    status = mRetriever->setDataSource(fd, 0, 0x7ffffffffffffffL);
    close(fd);
}
if (status) {
    return MEDIA_SCAN_RESULT_ERROR;
}
On Android versions before 5.0, the scanner may fail at this step: lacking built-in Opus support, setDataSource eventually fails, and the media file is never added to the media provider.
Suggested solution
Because we know the audio file would be added to
MediaStore.Audio.Media.EXTERNAL_CONTENT_URI
we can do the database operation manually. If you want your audio file to stay consistent with the other audio files in the database, you have to retrieve all the metadata yourself. Since you can already play the Opus file, retrieving the metadata should be easy.
// retrieve more metadata, duration etc.
ContentValues contentValues = new ContentValues();
contentValues.put(MediaStore.Audio.AudioColumns.DATA, "/mnt/sdcard/Music/example.opus");
contentValues.put(MediaStore.Audio.AudioColumns.TITLE, "Example track");
contentValues.put(MediaStore.Audio.AudioColumns.DISPLAY_NAME, "example");
// more columns should be filled from here
Uri uri = getContentResolver().insert(MediaStore.Audio.Media.EXTERNAL_CONTENT_URI, contentValues);
Log.d(TAG, uri.toString());
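To feed this insert from the receiver shown in the question, you can resolve the downloaded file's local path from the DownloadManager. Below is a minimal sketch of the hypothetical addSongToMediaStore() helper, using only standard DownloadManager columns:
private void addSongToMediaStore(Context context, Intent intent) {
    long id = intent.getLongExtra(DownloadManager.EXTRA_DOWNLOAD_ID, -1);
    DownloadManager dm = (DownloadManager) context.getSystemService(Context.DOWNLOAD_SERVICE);
    Cursor cursor = dm.query(new DownloadManager.Query().setFilterById(id));
    if (cursor != null && cursor.moveToFirst()) {
        // file:// URI of the completed download
        String localUri = cursor.getString(
                cursor.getColumnIndexOrThrow(DownloadManager.COLUMN_LOCAL_URI));
        cursor.close();
        ContentValues values = new ContentValues();
        values.put(MediaStore.Audio.AudioColumns.DATA, Uri.parse(localUri).getPath());
        values.put(MediaStore.Audio.AudioColumns.TITLE, "Example track");
        values.put(MediaStore.Audio.AudioColumns.IS_MUSIC, true);
        // fill the remaining columns from your own metadata here
        context.getContentResolver().insert(
                MediaStore.Audio.Media.EXTERNAL_CONTENT_URI, values);
    }
}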
After that, your app can find the audio file:
getContentResolver().query(MediaStore.Audio.Media.EXTERNAL_CONTENT_URI...
You can use MediaScannerConnection to ask Android to scan a file to be included as media. You'll want to use the scanFile() static method.
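A minimal sketch of that call (path and MIME type are placeholders); note that on pre-Lollipop devices this still won't help for Opus, since the scan fails at the codec step described above:
MediaScannerConnection.scanFile(
        context,
        new String[]{"/mnt/sdcard/Music/example.opus"}, // placeholder path
        new String[]{"audio/ogg"},                      // placeholder MIME type
        new MediaScannerConnection.OnScanCompletedListener() {
            @Override
            public void onScanCompleted(String path, Uri uri) {
                // uri is null if the scanner could not index the file
                Log.d(TAG, "scanned " + path + " -> " + uri);
            }
        });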
I am using ExoPlayer in a RecyclerView to render a list of videos. The RecyclerView is rendered in an activity, and I know the list of video URLs before opening the activity.
I want to preload or cache the videos before going to that activity. The videos are usually less than a minute long, so I am not looking for a smooth-streaming solution. I just want the videos to be in the cache before the activity opens, so that once the RecyclerView appears the videos start playing without any buffering, just like on TikTok.
I found a way to cache already-played videos using LocalCacheDataSourceFactory:
MediaSource videoSource = new ExtractorMediaSource(
        Uri.parse(mediaUrl),
        new LocalCacheDataSourceFactory(context, 100 * 1024 * 1024, 5 * 1024 * 1024),
        new DefaultExtractorsFactory(), null, null);
This only lets me cache videos that have already been played, not preload or precache them.
I found this Medium article from the ExoPlayer team, but no example integration for my specific requirement: article
Check out this article. I have written a step-by-step guide to preloading/precaching videos using ExoPlayer 2.
In your case you must call the caching method recursively, or in a loop, inside a background thread.
Video preloading/precaching using Exoplayer 2 in Android
I have achieved this in my own app. Follow these steps:
Add this video caching dependency:
implementation 'com.danikula:videocache:2.7.1'
Set up the dependency like this; it is advised that you do this in the Application class:
private HttpProxyCacheServer proxy;

public static HttpProxyCacheServer getProxy(Context context) {
    AppController app = (AppController) context.getApplicationContext();
    return app.proxy == null ? (app.proxy = app.newProxy()) : app.proxy;
}

private HttpProxyCacheServer newProxy() {
    return new HttpProxyCacheServer(this);
}

// Note that you need to have a single instance of HttpProxyCacheServer.
// Note the library is traditionally meant to let you cache a video so ExoPlayer
// does not re-buffer it every single time you scroll to a recycler item.
Then set up your ExoPlayer like this:
a. Maintain a single instance of HttpProxyCacheServer; you can initialize it in your constructor or onCreate():
HttpProxyCacheServer proxyServer;
public ViewRecommendedShortVideosAdapter(Context context) {
proxyServer = AppController.getProxy(context);
}
b. In your method that plays the ExoPlayer file:
String streamingURL = shortVideo.getStreamingURL();
String proxyURL = proxyServer.getProxyUrl(streamingURL);
// create the media item from the proxy URL returned by your proxyServer instance
MediaItem mediaItem = MediaItem.fromUri(proxyURL);
exoPlayer.setMediaItem(mediaItem);
exoPlayer.prepare();
exoPlayer.setPlayWhenReady(true);
Note: the above still only gives you caching of a video once it has been played, not caching of yet-to-play videos.
To cache yet-to-play videos, do this:
a. You can cache only the next video in the recycler data ArrayList (this helps save the user's internet costs), or you can cache the whole list (I don't recommend this, because of the data costs to users).
b. To cache the next video, create a method such as prepareNextVideo() containing the logic to cache the next video. Call it from the method that plays the current media item (for example, right after exoPlayer.setPlayWhenReady(true)).
c. The code in prepareNextVideo():
private void prepareNextVideo(int currentPosition) {
    try {
        // you can add an explicit bounds check here, but the try/catch handles it too
        ShortVideo shortVideo = videosArrayList.get(currentPosition + 1);
        // we are using our proxy URL again
        URL url = new URL(proxyServer.getProxyUrl(shortVideo.getStreamingURL()));
        InputStream inputStream = url.openStream();
        byte[] buffer = new byte[1024];
        while (inputStream.read(buffer) != -1) {
            // nothing to do here; reading the stream is what fills the cache
        }
        inputStream.close();
    } catch (Exception e) {
        // couldn't fetch the next video; ignore and let normal buffering handle it
        e.printStackTrace();
    }
}
Finally, simply call this method to buffer the next video. You can also put it in a loop and run it asynchronously if you want to buffer all the videos.
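Since prepareNextVideo() performs network I/O, it must not run on the main thread. A minimal sketch of dispatching it to a background executor (the executor field and playVideoAt() are assumptions for illustration, not part of the library):
private final ExecutorService cacheExecutor = Executors.newSingleThreadExecutor();

private void playVideoAt(int position) {
    // ...set up and play the current media item as shown above...
    cacheExecutor.execute(() -> prepareNextVideo(position)); // caches position + 1
}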
I have a sample .webm file recorded using MediaRecorder in the Chrome browser. When I use the Google Speech Java client to get a transcription for the video, it returns an empty transcription. Here is what my code looks like:
SpeechSettings settings = null;
Path path = Paths.get("D:\\scrap\\gcp_test.webm");
byte[] content = null;
try {
    content = Files.readAllBytes(path);
    settings = SpeechSettings.newBuilder().setCredentialsProvider(credentialsProvider).build();
} catch (IOException e1) {
    throw new IllegalStateException(e1);
}
try (SpeechClient speech = SpeechClient.create(settings)) {
    // Builds the recognition request
    RecognitionConfig config = RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setLanguageCode("en-US")
            .setUseEnhanced(true)
            .setModel("video")
            .setEnableAutomaticPunctuation(true)
            .setSampleRateHertz(48000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(ByteString.copyFrom(content)).build();
    // RecognitionAudio audio = RecognitionAudio.newBuilder().setUri("gs://xxxx/gcp_test.webm").build();
    // Use blocking call for getting audio transcript
    RecognizeResponse response = speech.recognize(config, audio);
    List<SpeechRecognitionResult> results = response.getResultsList();
    for (SpeechRecognitionResult result : results) {
        SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
        System.out.printf("Transcription: %s%n", alternative.getTranscript());
    }
} catch (Exception e) {
    e.printStackTrace();
    System.err.println(e.getMessage());
}
If I use the same file on https://cloud.google.com/speech-to-text/ and upload it in the demo section, it works fine and shows the transcription. I am clueless about what's going wrong here. I inspected the request the demo sends and am sending the same set of parameters, but that didn't work. Trying to upload the file to Cloud Storage instead gave the same result (no transcription).
After some trial and error (and looking at the JavaScript samples), I was able to solve the issue. The serialized audio should be in FLAC format. I was sending the video file (webm) as-is to Google Cloud, whereas the demo on the site extracts the audio stream using the JavaScript Audio API and then sends that data in base64 to make it work.
Here are the steps I executed to get the output:
Used FFmpeg to extract the audio stream from the webm into FLAC format:
ffmpeg -i sample.webm -vn -acodec flac sample.flac
The extracted file should be made available either via Cloud Storage or sent as a ByteString.
Set the appropriate model when calling the Speech API (for an English-language video the video model works, while for French command_and_search did). I don't have a logical explanation for this; I realised it through trial and error with the demo on the Google Cloud site.
I got results with the FLAC-encoded file.
Sample code that prints words with timestamps:
public class SpeechToTextSample {
    public static void main(String... args) throws Exception {
        try (SpeechClient speechClient = SpeechClient.create()) {
            String gcsUriFlac = "gs://yourfile.flac";
            RecognitionConfig config =
                    RecognitionConfig.newBuilder()
                            .setEncoding(AudioEncoding.FLAC)
                            .setEnableWordTimeOffsets(true)
                            .setLanguageCode("en-US")
                            .build();
            // setUri is preferred for large files
            RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUriFlac).build();
            OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
                    speechClient.longRunningRecognizeAsync(config, audio);
            while (!response.isDone()) {
                System.out.println("Waiting for response...");
                Thread.sleep(1000);
            }
            // Performs speech recognition on the audio file
            List<SpeechRecognitionResult> results = response.get().getResultsList();
            for (SpeechRecognitionResult result : results) {
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcription: %s%n", alternative.getTranscript());
                for (WordInfo wordInfo : alternative.getWordsList()) {
                    System.out.println(wordInfo.getWord());
                    System.out.printf(
                            "\t%s.%s sec - %s.%s sec\n",
                            wordInfo.getStartTime().getSeconds(),
                            wordInfo.getStartTime().getNanos() / 100000000,
                            wordInfo.getEndTime().getSeconds(),
                            wordInfo.getEndTime().getNanos() / 100000000);
                }
            }
        }
    }
}
GCP supports many languages; I have used "en-US" in my example.
Please refer to the language support documentation for the full list.
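For example, based on the trial-and-error note above, a French recording might use a config along these lines (a sketch, not an official recommendation):
RecognitionConfig frenchConfig = RecognitionConfig.newBuilder()
        .setEncoding(AudioEncoding.FLAC)
        .setLanguageCode("fr-FR")
        .setModel("command_and_search") // the model that worked for French in my tests
        .build();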
I am looking for some help or an example for performing a resumable upload to Google Drive using the new v3 REST API in Java.
I know there is a low-level description here: Upload files | Google Drive API. But at the moment I am not willing to deal with these low-level requests if there is a simpler method (like the former MediaHttpUploader, which is deprecated now...).
What I currently do is:
File fileMetadata = new File();
fileMetadata.setName(name);
fileMetadata.setDescription(...);
fileMetadata.setParents(parents);
fileMetadata.setProperties(...);
FileContent mediaContent = new FileContent(..., file);
drive.files().create(fileMetadata, mediaContent).execute();
But for large files this isn't good if the connection is interrupted.
I've created an implementation of that recently. It will create a new file in your Drive folder and return its metadata when the task succeeds. While uploading, it will also update the listener with upload info. I added comments to make it self-explanatory:
public Task<File> createFile(java.io.File yourFile, MediaHttpUploaderProgressListener uploadListener) {
    return Tasks.call(mExecutor, () -> {
        // Generates an input stream with your file content to be uploaded
        FileContent mediaContent = new FileContent(yourFileMimeType, yourFile);
        // Creates an empty Drive file
        File metadata = new File()
                .setParents(parents)
                .setMimeType(yourFileMimeType)
                .setName(yourFileName);
        // Builds up the upload request
        Drive.Files.Create uploadFile = mDriveService.files().create(metadata, mediaContent);
        // This will handle the resumable upload
        MediaHttpUploader uploader = uploadFile.getMediaHttpUploader();
        // choose your chunk size and it will automatically divide parts
        uploader.setChunkSize(MediaHttpUploader.MINIMUM_CHUNK_SIZE);
        // according to Google, this enables gzip in future versions (optional)
        uploader.setDisableGZipContent(false);
        // important, this enables resumable upload
        uploader.setDirectUploadEnabled(false);
        // listener to be updated
        uploader.setProgressListener(uploadListener);
        return uploadFile.execute();
    });
}
And make your Activity implement MediaHttpUploaderProgressListener so you have real-time updates on the upload progress:
@Override
public void progressChanged(MediaHttpUploader uploader) {
    String sizeTemp = "Uploading"
            + ": "
            + Formatter.formatShortFileSize(this, uploader.getNumBytesUploaded())
            + "/"
            + Formatter.formatShortFileSize(this, totalFileSize);
    runOnUiThread(() -> textView.setText(sizeTemp));
}
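Putting it together, a call site could look like this (driveHelper is a hypothetical wrapper exposing createFile(); the Activity passes itself as the progress listener):
// assuming this Activity implements MediaHttpUploaderProgressListener
driveHelper.createFile(localFile, this)
        .addOnSuccessListener(driveFile ->
                Log.d(TAG, "Uploaded, file id: " + driveFile.getId()))
        .addOnFailureListener(e -> Log.e(TAG, "Upload failed", e));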
To calculate the progress percentage, you can simply do (note the cast, to avoid integer division):
double percentage = (double) uploader.getNumBytesUploaded() / totalFileSize;
Or use this one:
uploader.getProgress()
It gives you the fraction of bytes uploaded so far, between 0.0 (0%) and 1.0 (100%). But be sure to have your content length specified, otherwise it will throw an IllegalArgumentException.
I'm trying to create AudioPlayers (as per the Native-Audio NDK sample) but without using the AssetManager, as the files to be played are downloaded dynamically and hence not packaged as an asset.
So I was wondering if it is possible to pass a FileDescriptor from Java to the JNI layer, to be used as in the sample JNI code below (without using AssetManager):
// open asset as file descriptor
off_t start, length;
int fd = AAsset_openFileDescriptor(asset, &start, &length);
assert(0 <= fd);
AAsset_close(asset);

// configure audio source
SLDataLocator_AndroidFD loc_fd = {SL_DATALOCATOR_ANDROIDFD, fd, start, length};
SLDataFormat_MIME format_mime = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
SLDataSource audioSrc = {&loc_fd, &format_mime};
Unfortunately, on the Java side FileDescriptor is a class, not an int (SLint32) as used by SLDataLocator_AndroidFD.
The struct definition is below:
/** File Descriptor-based data locator definition, locatorType must be SL_DATALOCATOR_ANDROIDFD */
typedef struct SLDataLocator_AndroidFD_ {
    SLuint32 locatorType;
    SLint32 fd;
    SLAint64 offset;
    SLAint64 length;
} SLDataLocator_AndroidFD;
Any help would be appreciated!
Not sure if this answers the question exactly, but here's a bit of code I found for reading files from the SD card. If you're downloading the files to external storage, this should work for you:
SLchar path[] = "/sdcard/audio/my_audio.mp3";
SLDataLocator_URI loc_uri = {SL_DATALOCATOR_URI, path};
SLDataFormat_MIME format_mime = {SL_DATAFORMAT_MIME, NULL,
                                 SL_CONTAINERTYPE_UNSPECIFIED};
SLDataSource audioSrc = {&loc_uri, &format_mime};
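Alternatively, to address the file-descriptor part of the question directly: a common approach is to obtain a raw int fd on the Java side with ParcelFileDescriptor and pass it through JNI into SLDataLocator_AndroidFD. A sketch of the Java side (the native method name is hypothetical):
java.io.File file = new java.io.File(context.getFilesDir(), "my_audio.mp3"); // placeholder
ParcelFileDescriptor pfd = ParcelFileDescriptor.open(
        file, ParcelFileDescriptor.MODE_READ_ONLY);
// detachFd() transfers ownership, so the native side becomes responsible for close(fd)
int fd = pfd.detachFd();
nativeCreateAudioPlayer(fd, 0, file.length()); // hypothetical native method taking (fd, offset, length)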
I have a slightly strange question: this time everything works, but I can't understand why.
AFAIK it's possible to mount more than one SD card, and everything gets mounted under the /mnt directory (is that true?).
On my device there is only one SD card, mounted at /mnt/sdcard, and in my application I open files from it. I'm using the following code:
private void open() {
    // get file extension
    String extension = "";
    int dotIndex = downloadedFile.lastIndexOf('.');
    if (dotIndex != -1) {
        extension = downloadedFile.substring(dotIndex + 1);
    }

    // create an intent
    Intent intent = new Intent(android.content.Intent.ACTION_VIEW);
    Uri data = Uri.fromFile(new File(downloadedFile));
    String type = MimeTypeMap.getSingleton().getMimeTypeFromExtension(extension);
    if (type == null || type.length() == 0) {
        // if there is no acceptable mime type
        type = "application/octet-stream";
    }
    intent.setDataAndType(data, type);

    // get the list of the activities which can open the file
    List<ResolveInfo> resolvers = context.getPackageManager()
            .queryIntentActivities(intent, PackageManager.MATCH_DEFAULT_ONLY);
    if (resolvers.isEmpty()) {
        (new AlertDialog.Builder(context)
                .setMessage(R.string.AttachmentUnknownFileType)
                .setNeutralButton(R.string.NeutralButtonText, null)
                .create()).show();
    } else {
        context.startActivity(intent);
    }
}
The downloadedFile variable actually has a value like file:///sdcard/mydir/myfile.txt, yet the code works. Why? How does Android know that /sdcard/... is the same as /mnt/sdcard/...?
And the main question: what happens if the SD card is mounted to another directory (for example /mnt/another-sd/ or even /media/sd)? And if more than one SD card is mounted, how does Android decide which card to use?
Thank you for any help! Have a good day!
It's simple: Android configures the mounts via a settings file at phone boot, so if there are more SD cards, Android will simply prefer one of them to expose as
/sdcard/
So when the mount settings change, your code simply stops working; you can only hope the settings stay untouched.
Every company that produces Android smartphones uses the "sdcard" path, and even custom ROMs use it.
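Rather than relying on a hard-coded /sdcard path, you can ask the framework for the current primary external storage root. A small sketch (the file name is a placeholder):
// resolves to /mnt/sdcard, /sdcard, or whatever the device actually uses
File externalRoot = Environment.getExternalStorageDirectory();
File downloaded = new File(externalRoot, "mydir/myfile.txt"); // placeholder
Uri data = Uri.fromFile(downloaded);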