Bug #6662

Metacat fails large-file upload

Added by Matt Jones over 5 years ago. Updated over 5 years ago.


Metacat seems to have a hard limit set on file upload size, at least for the DataONE MN.create() API. I attempted to call create() on a 4GiB file, which produced the error below in the logs.

Looking into the code for Metacat 2.4.2, it appears the size limit is hardcoded on line 677 of

MultipartRequestResolver mrr =
new MultipartRequestResolver(tmpDir.getAbsolutePath(), 1000000000, 0);

To fix this, we should set a reasonable limit that accommodates typical multi-gigabyte files. At a minimum, the value should be configurable rather than hard-coded.
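A minimal sketch of the configurable approach. The property name `application.sizeLimit` and the helper class are hypothetical illustrations, not Metacat's actual code; the default mirrors the old hardcoded 1000000000 value.

```java
import java.util.Properties;

public class UploadLimitConfig {
    // Default mirrors the old hardcoded limit; a negative value would mean unlimited.
    static final long DEFAULT_MAX_UPLOAD_BYTES = 1_000_000_000L;

    /** Read the max upload size from a Properties object, falling back to the default. */
    public static long maxUploadBytes(Properties props) {
        String raw = props.getProperty("application.sizeLimit");
        if (raw == null || raw.trim().isEmpty()) {
            return DEFAULT_MAX_UPLOAD_BYTES;
        }
        return Long.parseLong(raw.trim());
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        System.out.println(maxUploadBytes(props)); // prints 1000000000 (the default)
        props.setProperty("application.sizeLimit", "-1");
        System.out.println(maxUploadBytes(props)); // prints -1
    }
}
```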

The resulting error was:
org.dataone.service.exceptions.ServiceFailure: Could not resolve multipart files: the request was rejected because its size (1000001678) exceeds the configured maximum (1000000000)
at edu.ucsb.nceas.metacat.restservice.D1ResourceHandler.collectMultipartFiles(
at edu.ucsb.nceas.metacat.restservice.MNResourceHandler.putObject(
at edu.ucsb.nceas.metacat.restservice.MNResourceHandler.handle(
at edu.ucsb.nceas.metacat.restservice.D1RestServlet.doPost(
at javax.servlet.http.HttpServlet.service(
at javax.servlet.http.HttpServlet.service(
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
at org.apache.catalina.core.ApplicationFilterChain.doFilter(
at edu.ucsb.nceas.metacat.restservice.D1URLFilter.doFilter(
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
at org.apache.catalina.core.ApplicationFilterChain.doFilter(
at org.apache.catalina.core.StandardWrapperValve.invoke(
at org.apache.catalina.core.StandardContextValve.invoke(
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(
at org.apache.catalina.core.StandardHostValve.invoke(
at org.apache.catalina.valves.ErrorReportValve.invoke(
at org.apache.catalina.valves.AccessLogValve.invoke(
at org.apache.catalina.core.StandardEngineValve.invoke(
at org.apache.catalina.connector.CoyoteAdapter.service(
at org.apache.coyote.ajp.AjpProcessor.process(
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$


#1 Updated by Matt Jones over 5 years ago

  • % Done changed from 0 to 90
  • Status changed from New to In Progress

Removed the hardcoding by adding a config parameter in commit r9094. Needs testing.

We should also discuss an alternative strategy: rather than imposing a fixed hard limit, use the HTTP Content-Length header to determine how large the maximum upload needs to be for each request. This might open a DoS vector, though.
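A sketch of what the Content-Length strategy might look like. The declared length would come from the standard servlet call `HttpServletRequest.getContentLengthLong()` (which returns -1 when the header is absent); the absolute cap guarding against the DoS concern is a hypothetical value, not anything in Metacat.

```java
public class ContentLengthSizing {
    // Hypothetical values; in practice both would come from configuration.
    static final long DEFAULT_LIMIT_BYTES = 1_000_000_000L;            // old hardcoded value
    static final long ABSOLUTE_CAP_BYTES = 100L * 1024 * 1024 * 1024;  // 100 GiB DoS guard

    /**
     * Derive a per-request limit from the declared Content-Length
     * (pass the result of request.getContentLengthLong(); -1 means "not declared").
     */
    public static long perRequestLimit(long declaredContentLength) {
        if (declaredContentLength < 0) {
            // No declared length: fall back to a fixed default rather than unlimited.
            return DEFAULT_LIMIT_BYTES;
        }
        return Math.min(declaredContentLength, ABSOLUTE_CAP_BYTES);
    }

    public static void main(String[] args) {
        System.out.println(perRequestLimit(-1));                       // prints 1000000000
        System.out.println(perRequestLimit(4L * 1024 * 1024 * 1024));  // prints 4294967296
    }
}
```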

#2 Updated by Matt Jones over 5 years ago

Although the value is no longer hardcoded in the source, it is still limited to 2GB because MultipartRequestResolver takes an int for the max size, and Java's Integer.MAX_VALUE is 2147483647. Setting a higher value in the config file results in a parsing error. Changes to MultipartRequestResolver will be needed to allow larger uploads.
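The 2GB ceiling can be demonstrated in isolation: a 4 GiB limit fits in a long, but cannot be parsed as an int, which is why the configured value is rejected before the resolver is ever constructed.

```java
public class IntLimitDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE); // prints 2147483647, just under 2 GiB

        // A 4 GiB byte count is fine as a long...
        long fourGiB = 4L * 1024L * 1024L * 1024L;
        System.out.println(fourGiB); // prints 4294967296

        // ...but parsing the same value as an int fails, mirroring the
        // config-file parsing error described above.
        try {
            Integer.parseInt("4294967296");
            System.out.println("parsed (unexpected)");
        } catch (NumberFormatException e) {
            System.out.println("parse failed");
        }
    }
}
```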

#3 Updated by Matt Jones over 5 years ago

  • Target version changed from 2.5.0 to 2.4.3

#4 Updated by Matt Jones over 5 years ago

  • % Done changed from 90 to 100
  • Status changed from In Progress to Closed

It turns out that setting the value to -1 allows uploads of unlimited size (limited only by memory/disk). I refactored Metacat to move the size limit into the configuration file, and tested it with a setting of -1: I was able to successfully upload a 4GB file from the R client. See
