400 Bad Request
Max file size is 32000000 bytes. File "WEB-INF/lib/gwt-user.jar" is 32026261 bytes.
I've been deploying this app for years without issues and this file (gwt-user.jar) has been part of this deployment (it has not been updated for 2 years). Anybody have any ideas as to what could have changed?
There have been no recent changes related to file size in App Engine. According to the official documentation, each uploaded file is limited to 32 megabytes.
Deployments
An application is limited to 10,000 uploaded files per version. Each file is limited to a maximum size of 32 megabytes. Additionally, if the total size of all files for all versions exceeds the initial free 1 gigabyte, then there will be a $0.026 per GB per month charge.
I would suggest the following:
Make sure the WAR file contains only the essential libraries required for the application to start (a quick way to spot oversized files is sketched after this list).
Use Blobstore to hold the other dependencies of your App Engine app (split up the necessary libraries); see this link.
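For the first suggestion, here is a minimal sketch (plain Java, nothing App Engine-specific) that walks an exploded WAR directory and flags every file over the 32 MB per-file limit; the directory path and class name are placeholders:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Hypothetical helper: report every file in the exploded WAR that exceeds the
// 32 MB per-file deployment limit quoted in the error message above.
public class OversizedFileCheck {
    private static final long LIMIT_BYTES = 32_000_000L;

    public static void main(String[] args) throws IOException {
        Path warDir = Paths.get(args.length > 0 ? args[0] : "target/myapp-war"); // placeholder path
        try (Stream<Path> files = Files.walk(warDir)) {
            files.filter(Files::isRegularFile)
                 .filter(p -> p.toFile().length() > LIMIT_BYTES)
                 .forEach(p -> System.out.printf("%s is %d bytes (over the limit)%n",
                         warDir.relativize(p), p.toFile().length()));
        }
    }
}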
Related
I am using ExoPlayer version 2.9.6 to stream videos. Recently I added a download-video feature to my app using ExoPlayer's built-in download manager. The problem with the download is that it consumes more data while downloading. For an 18 MB video (the size is determined by checking the size of ExoPlayer's download folder after the download), it takes about 240 mb of network data. I am using DASH to stream videos. Anyone know why this is happening?
Storage is often reported in megabytes and network usage in megabits.
That would mean your 18 megabyte file needs 144 megabits of network transmission. If you add the networking overhead, you may find this explains much of the difference you are seeing.
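As a quick illustration of the conversion (the overhead factor below is an assumed, purely illustrative figure, not something measured):

public class UnitsCheck {
    public static void main(String[] args) {
        double fileMegabytes = 18.0;               // size reported on disk
        double fileMegabits = fileMegabytes * 8.0; // 1 byte = 8 bits -> 144 megabits
        double overheadFactor = 1.1;               // assumed ~10% protocol overhead (illustrative only)
        System.out.printf("%.0f MB on disk ~= %.0f Mb on the wire, ~%.0f Mb with overhead%n",
                fileMegabytes, fileMegabits, fileMegabits * overheadFactor);
    }
}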
Is there any limit on loading data into Google BigQuery from a local file via the API?
The Google BigQuery documentation mentions that, for the web UI, a local file must be <= 10 MB and 16,000 rows. Does the same limit apply to the API?
There is no BigQuery API for loading local files. Local file loads go through the bq command-line tool or the web UI, and I believe what happens when you do this is that the file is simply uploaded to GCS on your behalf, after which a normal load job from GCS is run through the API; you can see it clearly in the UI. But because Google wants a reasonable user experience from the web UI and the bq command, much stricter additional limits apply to uploading "local" files.
My recommendation is to go the GCS path to load big files (a rough sketch follows below)
(https://cloud.google.com/bigquery/docs/loading-data-cloud-storage)
An important point: it is free (compare with streaming, where you will pay for the streamed data).
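Here is a rough sketch of such a load job using the google-cloud-bigquery Java client; the project comes from the default credentials, and the dataset, table, and bucket names are placeholders:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class LoadFromGcs {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        TableId tableId = TableId.of("my_dataset", "my_table");   // placeholder dataset/table
        String sourceUri = "gs://my-bucket/big-file-*.csv";       // wildcard URIs are allowed

        LoadJobConfiguration config = LoadJobConfiguration.newBuilder(tableId, sourceUri)
                .setFormatOptions(FormatOptions.csv())
                .setAutodetect(true)                              // or supply an explicit schema
                .build();

        // Create the load job and block until it finishes.
        Job job = bigquery.create(JobInfo.of(config)).waitFor();
        if (job == null || job.getStatus().getError() != null) {
            System.err.println("Load failed: "
                    + (job == null ? "job no longer exists" : job.getStatus().getError()));
        } else {
            System.out.println("Load job completed");
        }
    }
}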
The limits are as follows (from https://cloud.google.com/bigquery/quotas):
Load jobs per table per day — 1,000 (including failures)
Maximum columns per table — 10,000
Maximum size per load job — 15 TB across all input files for CSV, JSON, and Avro
Maximum number of files per load job — 10 Million total files including all files matching all wildcard URIs
CSV and JSON — 4 GB compressed file, 5 TB uncompressed
There are no special limits for local file uploads; the 10 MB and 16,000-row limit applies only to the UI. But I don't recommend uploading huge local files.
Open Search Server is crashing while crawling files. OSS is running as a daemon on an Ubuntu box. This is a production server with 64 GB of RAM and 12 cores, crawling about 20 GB of files on an extremely fast NAS that it mounts. 2 GB of memory is allotted to OSS. The largest file that should get crawled is about 1.3 GB. There are 5 mp4 files that are all over 1 GB.
Usually at some point during the crawl process, OSS will become completely unresponsive. Restarting OSS fixes the problem. Today I monitored a crawl, which usually uses one or two cores at a time. When it crashed it was maxing out all 12 cores. Total memory usage on the server was fine, but I'm not sure how much OSS was using.
We've looked at the oss log files and there's not a single error that happens before each crash, but there are two errors that are pretty common in the logs:
WARN: org.apache.cxf.jaxrs.utils.JAXRSUtils - Both com.jaeksoft.searchlib.webservice.crawler.database.DatabaseImpl#run and com.jaeksoft.searchlib.webservice.crawler.database.DatabaseImpl#run are equal candidates for handling the current request which can lead to unpredictable results
WARN: root - Low memory free conditions: flushing crawl buffer
We have one index that handles all files. It is based on the file crawler template—the only changes are:
An extra analyzer that uses 4 regex replaces.
An extra field that copies the url field and uses the analyzer from the previous item.
We added one disk location, which has all the files.
We join another index in our query.
When we are able to crawl, querying the index works fine afterwards. I think maybe the crashes only happen if there's a search query on the index during the crawl, but I haven't been able to confirm that yet.
Attempts to deploy my application to App Engine have failed because of the hard limit on uploaded files, i.e. 10,000.
My application uses external libraries and constants in 2 other languages.
GWT.runAsync blocks have been placed in the necessary positions in the project.
The following compile-time options are used:
-localWorkers 3 -XfragmentCount 10
But the problem is that when I upload the project to App Engine I get the following exception:
java.io.IOException: Applications are limited to 10000 files, you have 34731
I am aware that I can cut down on the file count by reducing cross-browser compatibility or by reducing locales, but that won't be a practical approach for deployment.
So please suggest some alternatives.
Another thing I wish to mention is that the project extensively uses VerticalPanel/HorizontalPanel/FlexTable/DialogBox in most of its screens. I am not sure if this has something to do with the problem.
I'm afraid this will happen to me as well. I had that problem in the middle of the project, so I limited the browsers to Chrome and Firefox. But when I really have to deploy, this could be an issue.
An application is limited to 10,000 uploaded files per version. Each file is limited to a maximum size of 32 megabytes. Additionally, if the total size of all files for all versions exceeds the initial free 1 gigabyte, then there will be a $0.13 per GB per month charge.
https://developers.google.com/appengine/docs/quotas#Deployments
A solution could be to deploy each language as a separate application, if your data is not shared between the languages.
Sounds like you might also be deploying all of your GWT classes along with your application.
When I was a heavy App Engine user, I made sure to jar all my uploaded classes (and not include any non-shared GWT code). You might want to run $ find . -name "*.class" | wc -l to count how many class files you are sending.
Jarring up your classes beforehand will turn 15,000 class files into 1 jar file.
It just sucks to make huge jars since you'll need to redeploy the whole jar on every change. Better to have lots of small jars. ;)
What I did was put all the GWT-generated files into a ZIP and serve them with a servlet.
To optimize things a bit, I put every file in memcache after unzipping it.
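A rough sketch of that approach on App Engine is below; the ZIP location (/WEB-INF/gwt-files.zip) and the assumption that the servlet is mapped to a prefix such as /gwt/* are mine, not the original poster's exact code:

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ZippedGwtFileServlet extends HttpServlet {
    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Assumes a prefix mapping such as /gwt/*, so the path info names the file inside the ZIP.
        String name = req.getPathInfo().substring(1);
        byte[] content = (byte[]) memcache.get(name);
        if (content == null) {
            content = readFromZip(name);
            if (content == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            // Cache the unzipped entry; values must stay under memcache's per-entry size limit.
            memcache.put(name, content);
        }
        resp.setContentType(getServletContext().getMimeType(name));
        resp.getOutputStream().write(content);
    }

    // Scan the bundled ZIP for the requested entry and return its bytes, or null if absent.
    private byte[] readFromZip(String name) throws IOException {
        try (InputStream in = getServletContext().getResourceAsStream("/WEB-INF/gwt-files.zip");
             ZipInputStream zip = new ZipInputStream(in)) {
            for (ZipEntry entry; (entry = zip.getNextEntry()) != null; ) {
                if (entry.getName().equals(name)) {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[8192];
                    for (int n; (n = zip.read(buf)) > 0; ) {
                        out.write(buf, 0, n);
                    }
                    return out.toByteArray();
                }
            }
        }
        return null;
    }
}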
I have a Struts 1 web application that needs to upload fairly large files (>50 MBytes) from the client to the server. I'm currently using its built-in org.apache.struts.upload.DiskMultipartRequestHandler to handle the HTTP POST multipart/form-data requests. It's working properly, but it's also very very slow, uploading at about 100 KBytes per second.
Downloading the same large files from server to client is more than 10 times faster. I don't think this is just the difference between the upload and download speeds of my ISP, because using a simple FTP client to transfer the file to the same server takes less than 1/3 of the time.
I've looked at replacing the built-in DiskMultipartRequestHandler with the newer org.apache.commons.fileupload package, but I'm not sure how to modify this to create the MultipartRequestHandler that Struts 1 requires.
Barend commented below that there is a 'bufferSize' parameter that can be set in web.xml. I increased the size of the buffer to 100 KBytes, but it didn't improve the performance. Looking at the implementation of DiskMultipartRequestHandler, I suspect that its performance could be limited because it reads the stream one byte at a time looking for the multipart boundary characters.
Is anyone else using Struts to upload large files?
Has anyone customized the default DiskMultipartRequestHandler supplied with Struts 1?
Do I just need to be more patient while uploading the large files? :-)
The page StrutsFileUpload on the Apache wiki contains a bunch of configuration settings you can use. The one that stands out for me is the default buffer size of 4096 bytes. If you haven't already, try setting this to something much larger (but not excessively large as a buffer is allocated for each upload). A value of 2MB seems reasonable. I suspect this will improve the upload rate a great deal.
Use Apache Commons FileUpload; it gives more flexibility for uploading files. You can configure the maximum upload file size and a temporary file location for swapping the file to disk (this improves performance).
Please see http://commons.apache.org/fileupload/using.html for details. A rough sketch follows.
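Here is a minimal Commons FileUpload sketch as a plain servlet (not wired into Struts' MultipartRequestHandler interface); the size threshold, maximum size, and directory paths are illustrative values only:

import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.File;
import java.util.List;

public class LargeUploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        try {
            DiskFileItemFactory factory = new DiskFileItemFactory();
            factory.setSizeThreshold(1024 * 1024);            // keep up to 1 MB in memory
            factory.setRepository(new File("/tmp/uploads"));  // temp swap location (illustrative)

            ServletFileUpload upload = new ServletFileUpload(factory);
            upload.setSizeMax(100L * 1024 * 1024);            // reject requests over 100 MB

            // Parse the multipart request and write each file item to its destination.
            List<FileItem> items = upload.parseRequest(req);
            for (FileItem item : items) {
                if (!item.isFormField()) {
                    item.write(new File("/data/uploads", item.getName()));  // destination is illustrative
                }
            }
            resp.setStatus(HttpServletResponse.SC_OK);
        } catch (Exception e) {
            resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
        }
    }
}

The key performance point is that Commons FileUpload reads the stream through an internal buffer rather than byte by byte, which is exactly the limitation suspected in DiskMultipartRequestHandler above.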