I am using ExoPlayer version 2.9.6 to stream videos. Recently I added a download-video feature to my app using ExoPlayer's built-in download manager. The problem is that downloading consumes far more network data than expected. For an 18 MB video (the size determined by checking the size of ExoPlayer's download folder after the download), it uses about 240 MB of network data. I am using DASH to stream videos. Does anyone know why this is happening?
Storage is often reported in megabytes and network usage in megabits.
That would mean your 18-megabyte file would need 144 megabits of network transmission. If you add the networking overhead, you may find this explains much of the difference you are seeing.
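For reference, the arithmetic behind that conversion (a trivial sketch, assuming decimal units):

long fileBytes = 18L * 1000 * 1000;          // 18 MB reported on disk
long megabits  = fileBytes * 8 / 1_000_000;  // = 144 megabits on the wire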
400 Bad Request
Max file size is 32000000 bytes. File "WEB-INF/lib/gwt-user.jar" is 32026261 bytes.
I've been deploying this app for years without issues, and this file (gwt-user.jar) has been part of the deployment (it has not been updated in 2 years). Does anybody have any ideas as to what could have changed?
There were no recent changes to file size limits in App Engine. According to the official documentation, the limit for each uploaded file is 32 megabytes.
Deployments
An application is limited to 10,000 uploaded files per version. Each file is limited to a maximum size of 32 megabytes. Additionally, if the total size of all files for all versions exceeds the initial free 1 gigabyte, then there will be a $0.026 per GB per month charge.
I would suggest that you:
Make sure the WAR file contains only the essential libraries required for the application to start.
Use Blobstore for deployment of your App Engine app containing other dependencies (split up the necessary libraries) link.
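As a quick pre-deployment check, you can flag any bundled jar that exceeds the 32,000,000-byte per-file limit (a minimal sketch; the relative path assumes an exploded WAR layout):

import java.io.File;

File lib = new File("WEB-INF/lib");
File[] jars = lib.listFiles();
if (jars != null) {
    for (File jar : jars) {
        // App Engine rejects any single file larger than 32,000,000 bytes.
        if (jar.length() > 32_000_000L) {
            System.out.println(jar.getName() + " exceeds the limit: " + jar.length() + " bytes");
        }
    }
}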
I have a Vert.x web service that will occasionally download large ZIP files from AWS S3. After downloading, the archive is unzipped and the individual files are re-uploaded to AWS S3. The web service is hosted as a t2.large (8 GB memory) instance in AWS Elastic Beanstalk. The Java application is currently configured with between 2 and 4 GB of heap space, and the ZIP files will be at most 10 GB in size (but most will be closer to 2-4 GB).
When the application tries to download ZIP files >2GB in size, either the initial download of the ZIP file or the re-upload of individual files always fails with a stack trace similar to the following:
Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1895825439, max: 1908932608)
After doing some research, it appears that Vert.x uses Netty to speed up network I/O, which in turn uses direct memory to improve download performance. The direct memory doesn't seem to be freed quickly enough, which leads to out-of-memory errors like the one above.
The simplest solution would be to increase the instance size to a t2.xlarge (16 GB) and allocate more direct memory at runtime (e.g. via -XX:MaxDirectMemorySize), but I'd like to explore other solutions first. Is there a way to programmatically force Netty to free direct memory after it's no longer in use? Is there additional Vert.x configuration I can add that might alleviate this problem?
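For reference, if a transfer goes through Vert.x's own HttpClient, one way to bound how much direct memory is in flight is the Pump helper, which applies backpressure between the response stream and the file (a sketch assuming Vert.x 3.x; the host and paths are placeholders):

import io.vertx.core.Vertx;
import io.vertx.core.file.AsyncFile;
import io.vertx.core.file.OpenOptions;
import io.vertx.core.http.HttpClient;
import io.vertx.core.streams.Pump;

Vertx vertx = Vertx.vertx();
HttpClient client = vertx.createHttpClient();
vertx.fileSystem().open("download.zip", new OpenOptions().setWrite(true), open -> {
    AsyncFile file = open.result();
    client.getNow("example-bucket.s3.amazonaws.com", "/big.zip", response -> {
        // The pump reads from the response only as fast as the file accepts
        // writes, bounding how many Netty direct buffers are held at once.
        Pump.pump(response, file).start();
        response.endHandler(v -> file.close());
    });
});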
Please check this:
github.com/aws/aws-sdk-java-v2/issues/1301
"We have identified an issue within the SDK where it could cause excessive buffer usage and eventually OOM when using the s3 async client to download a large object to a file. The fix #1335 is available in 2.7.4. Could you try with the latest version? Feel free to re-open if you continue to see the issue."
After installing the application on a testing device, the size of the app increases as I use it daily. But I would like to limit my application's size to, let's say, 100 MB. That means new data must overwrite old data when the app size reaches 100 MB; i.e., the app size must never exceed 100 MB.
This is possible, although it would be a little harder, as you'd have to do two things simultaneously:
Learn the exact app size after installation
Constantly monitor and control the app cache and storage
You may have noticed already that the .apk or .aab (bundle) size is far less than the actual size of the app after it's installed on the device. But you can use the Java reflection API and PackageManager to get it with a few lines of code at runtime:
PackageManager pm = getPackageManager();
// getPackageSizeInfo is a hidden API, so it must be invoked via reflection,
// and the IPackageStatsObserver AIDL files must be copied into the project.
// It was removed in Android 8.0 (API 26), where StorageStatsManager replaces it.
Method getPackageSizeInfo = pm.getClass().getMethod(
        "getPackageSizeInfo", String.class, IPackageStatsObserver.class);
getPackageSizeInfo.invoke(pm, "com.android.mms", new IPackageStatsObserver.Stub() {
    @Override
    public void onGetStatsCompleted(PackageStats pStats, boolean succeeded)
            throws RemoteException {
        Log.i(TAG, "Size of the installed app: " + pStats.codeSize);
    }
});
This is the code you'd want to run just after installation; save the value, as you'll use it later to calculate the size on disk by combining it with the cache and data you persist.
Secondly, each time you are going to persist something, whether using SharedPreferences, a database (SQLite), or a file, you'd have to count the bytes of the data saved and add them to the app size recorded after installation.
Also, for precision, I'd suggest saving everything to files and avoiding databases and SharedPreferences, as it's much easier and more consistent to get an exact file size. Once your file sizes plus the post-installation app size exceed 100 MB, you can tell the user that no more data can be saved and some space has to be freed, etc. A sketch of that bookkeeping follows below.
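A minimal sketch of the budget check (MAX_APP_BYTES, directorySize, and canWrite are hypothetical names; installedAppBytes is the value saved from the PackageStats snippet above):

import java.io.File;

static final long MAX_APP_BYTES = 100L * 1024 * 1024; // the 100 MB budget

// Recursively sum the size of everything the app has written to disk.
static long directorySize(File dir) {
    long total = 0;
    File[] children = dir.listFiles();
    if (children != null) {
        for (File f : children) {
            total += f.isDirectory() ? directorySize(f) : f.length();
        }
    }
    return total;
}

// Call before each write; filesDir is typically getFilesDir().
static boolean canWrite(long installedAppBytes, long incomingBytes, File filesDir) {
    return installedAppBytes + directorySize(filesDir) + incomingBytes <= MAX_APP_BYTES;
}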
I hope you get the idea.
Can we limit our application size after installation? No.
App size is one of the biggest factors that can affect your app’s install and uninstall metrics, so it’s important to regularly monitor and understand how you can reduce your app’s download and install sizes. Since the two sizes are related, here’s how they’re different from each other:
Download size: The size of your app that users download on Google Play. When an app has a larger download size, it takes longer to download.
Size on device (at install time): The amount of space required to install your app. Since apps are compressed when they're downloaded, install sizes can be larger than download sizes. When an app has a larger install size, more space is required on a user's device to complete installation. After the app is opened, its size on disk varies depending on app usage.
I am building an application that downloads files from around 100 IP cameras in parallel over the network throughout the day. The files on the cameras are one minute long and around 1.5 to 2.5 MB in size. Each file takes around 10 to 20 seconds to download. The server I am using has a four-core processor with 8 GB of RAM. The task is to download one file from each camera per minute (60 files per hour), 24x7.
First I tried creating as many threads as there are cameras. This worked well for a relatively small number of cameras, but as the numbers increased, I found that the number of files downloaded per camera was very low. On some days, downloading was missed entirely for certain cameras. This, I thought, might be because I am creating one thread per camera, i.e. around 100 threads, and some of the threads might not be getting the CPU at all. So I tried a second method.
I kept the number of threads the same as the number of cores, i.e. 4, so all the cameras are handled by only four threads. This was done to ensure that no cameras are missed. The code uses Java's Executor framework with a pool of four threads. As intended, files got downloaded from all the cameras. But now the number of files downloaded per camera is very low: in an hour, the server can download only around 15 files from each camera, while it is supposed to download 60 (as each file is one minute long). This can be attributed to the fact that each thread handles 25 cameras and downloads from them sequentially: each thread serially takes up one of its 25 cameras and finishes downloading from it before taking on the next, and each file takes around 15 seconds on average to download.
The network bandwidth is very high on the server side, but the network is slow on the camera side, at around 256 to 512 Kbps.
How should I optimize the code or the server configuration so as to download all the files from the cameras? Moreover, the files on the cameras are created in real time, so I have only a single minute to download a file from every camera; otherwise, files will get skipped.
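For what it's worth, since each download blocks on the camera's slow link rather than on the CPU, the pool can be sized for concurrent connections instead of cores. A sketch (cameraIps and downloadLatestFile are hypothetical):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One worker per camera is fine here: the threads spend almost all their
// time blocked on network I/O, not competing for the four cores.
ExecutorService pool = Executors.newFixedThreadPool(100);
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

scheduler.scheduleAtFixedRate(() -> {
    for (String cameraIp : cameraIps) {
        pool.submit(() -> downloadLatestFile(cameraIp)); // blocks 10-20 s per file
    }
}, 0, 1, TimeUnit.MINUTES);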
I have a Struts 1 web application that needs to upload fairly large files (>50 MB) from the client to the server. I'm currently using its built-in org.apache.struts.upload.DiskMultipartRequestHandler to handle the HTTP POST multipart/form-data requests. It's working properly, but it's also very slow, uploading at about 100 KB per second.
Downloading the same large files from server to client is more than 10 times faster. I don't think this is just the difference between the upload and download speeds of my ISP, because transferring the file to the same server with a simple FTP client takes less than a third of the time.
I've looked at replacing the built-in DiskMultipartRequestHandler with the newer org.apache.commons.fileupload package, but I'm not sure how to modify this to create the MultipartRequestHandler that Struts 1 requires.
Barend commented below that there is a 'bufferSize' parameter that can be set in web.xml. I increased the buffer size to 100 KB, but it didn't improve performance. Looking at the implementation of DiskMultipartRequestHandler, I suspect its performance is limited because it reads the stream one byte at a time looking for the multipart boundary characters.
Is anyone else using Struts to upload large files?
Has anyone customized the default DiskMultipartRequestHandler supplied with Struts 1?
Do I just need to be more patient while uploading the large files? :-)
The page StrutsFileUpload on the Apache wiki contains a bunch of configuration settings you can use. The one that stands out for me is the default buffer size of 4096 bytes. If you haven't already, try setting this to something much larger (but not excessively large, as a buffer is allocated for each upload). A value of 2 MB seems reasonable. I suspect this will improve the upload rate a great deal.
Use Apache Commons FileUpload;
it gives you more flexibility for uploading files. You can configure the maximum upload file size as well as a temporary file location for swapping files to disk (this improves performance).
Please visit this link: http://commons.apache.org/fileupload/using.html
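For reference, here is roughly how that configuration looks in code (a sketch, assuming commons-fileupload 1.x; the threshold, temp directory, and size limit are illustrative):

import java.io.File;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

// Keep small parts in memory; swap anything larger to a temp directory.
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(1024 * 1024);            // in-memory threshold: 1 MB
factory.setRepository(new File("/tmp/uploads"));  // temp location for big parts

ServletFileUpload upload = new ServletFileUpload(factory);
upload.setFileSizeMax(100L * 1024 * 1024);        // max size per uploaded file

// Throws FileUploadException on malformed requests.
List<FileItem> items = upload.parseRequest(request);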