org.apache.commons.fileupload.FileUploadBase and SizeLimitExceededException - file-upload

I am trying to upload a file in my application. The file size is 2055 KB. After uploading the file, the following exception is thrown:
04-Feb-2016 15:42:41.141 INFO [http-nio-8084-exec-78] com.opensymphony.xwork2.util.logging.commons.CommonsLogger.info Unable to find 'struts.multipart.saveDir' property setting. Defaulting to javax.servlet.context.tempdir
04-Feb-2016 15:42:41.203 WARNING [http-nio-8084-exec-78] com.opensymphony.xwork2.util.logging.commons.CommonsLogger.warn Request exceeded size limit!
org.apache.commons.fileupload.FileUploadBase$SizeLimitExceededException: the request was rejected because its size (2104281) exceeds the configured maximum (2097152)
I am using the Struts framework.

Struts2 lets you configure the limits for uploaded file(s).
The Struts 2 default.properties file defines several settings that affect the behavior of file uploading. You may find it necessary to change these values. The names and default values are:
struts.multipart.parser=jakarta
struts.multipart.saveDir=
struts.multipart.maxSize=2097152
Note that the default value has no trailing 0: 2097152 bytes is 2 MB, whereas the increased value below is 20971520 (20 MB).
You can raise the request limit by setting the constant:
<constant name="struts.multipart.maxSize" value="20971520" />
Please remember that struts.multipart.maxSize is the size limit of the whole request, which means that when you upload multiple files, the sum of their sizes must stay below struts.multipart.maxSize.
There is also a limit on each individual file, which you can change by overriding the interceptor parameter in the action config:
<interceptor-ref name="fileUpload">
<param name="maximumSize">20971520</param>
</interceptor-ref>
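Putting both settings together, a struts.xml sketch might look like the following (the package, action, class, and result names are placeholders, not taken from the question):
<struts>
    <!-- Limit for the whole multipart request (20 MB) -->
    <constant name="struts.multipart.maxSize" value="20971520" />

    <package name="default" extends="struts-default">
        <action name="upload" class="com.example.UploadAction">
            <!-- Limit for each individual uploaded file (20 MB) -->
            <interceptor-ref name="fileUpload">
                <param name="maximumSize">20971520</param>
            </interceptor-ref>
            <interceptor-ref name="basicStack"/>
            <result name="success">/result.jsp</result>
        </action>
    </package>
</struts>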
More information about file upload can be found on the docs page.

Related

Issue with Universal Forwarder forwarding logs to index

I have installed the Splunk Universal Forwarder on Windows. I have one static log file (JSON) on the system that needs to be monitored, and I have configured it in the inputs.conf file.
I see only the system/application and security logs being sent to the indexer, whereas the static log file is not seen.
I ran "splunk list inputstatus" and checked:
C:\Users\Administrator\Downloads\test\test.json
file position = 75256
file size = 75256
percent = 100.00
type = finished reading
So this means the file is being read properly.
What can be the issue that I don't see the test.json logs on the Splunk side? I tried checking index=_internal on the indexer but was not able to figure out what is causing the issue, and I checked a few blogs on the Internet as well. Can anyone please help with this?
inputs.conf stanza:
[monitor://C:\Users\Administrator\Downloads\data test\test.json]
disabled = 0
index = test_index
sourcetype = test_data

How to read values dynamically from a file for a property in updateAttribute?

I added some custom properties in the 'updateAttribute' processor using the '+' button. For example, I declared a property 'DBConnectionURL' and gave it the value 'jdbc:mysql://localhost:3306/test'. Then, in the 'DBCPConnectionPool' controller service, I simply used the value '${DBConnectionURL}' for the 'Database Connection URL' property. But I gave the value for the 'DBConnectionURL' property manually. I want a way to feed the value dynamically from a file, so that I just need to change the value in the file and the value for 'DBConnectionURL' changes based on what is present in the file. Is there a way to do it?
Rishab,
You have to use the NiFi variable registry.
In conf/nifi.properties, you can add the configuration below to dynamically update a value in your data flow:
nifi.variable.registry.properties=./dynamic.properties
You can define your variables in that dynamic.properties file; it should be present in the conf directory.
For example, if the dynamic.properties file contains the value below:
DBCPURL= jdbc://<host>:<port>
you can use it in your data flow as ${DBCPURL}.
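Applied to the original question, the same dynamic.properties file could carry the JDBC URL, assuming the DBConnectionURL name declared in the flow:
DBConnectionURL=jdbc:mysql://localhost:3306/test
The 'Database Connection URL' property of DBCPConnectionPool can then stay as ${DBConnectionURL}.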
Note: You have to restart the NiFi service if you change any configuration in conf/nifi.properties; otherwise your changes will not take effect in the data flow.
Feel free to accept this as the answer if it worked for you.

Size limit on ContentVersion object in Salesforce

I was trying to create and insert a ContentVersion object in Salesforce Lightning (for file upload) using the following code:
ContentVersion v = new ContentVersion();
v.versionData = EncodingUtil.base64Decode(content);
v.title = fileName;
v.pathOnClient = fileName;
insert v;
This works fine for smaller files, but when I try loading a file that is just 750 KB, the above operation fails (the actual allowed size could be even less).
Is there any limit on the size of the files that can be uploaded using the above code?
As per a similar question on the Salesforce Stack Exchange.
From Base Lightning Components Considerations:
When working with type="file", you must provide your own server-side logic for uploading files to Salesforce. [...]
Uploading files using this component is subject to regular Apex controller limits, which is 1 MB. To accommodate file size increase due to base64 encoding, we recommend that you set the maximum file size to 750 KB. You must implement chunking for file size larger than 1 MB. Files uploaded via chunking are subject to a size limit of 4 MB.
The Base64 encoding is pushing the file size past the Maximum HTTP POST form size (the size of all keys and values in the form) limit of 1 MB. Or at least this seems like the applicable limit here.
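As a rough check of that guidance: base64 encoding inflates data by about 4/3, so a 750 KB file grows to roughly 750 × 4/3 ≈ 1000 KB once encoded, which lands right at the 1 MB Apex controller limit. That presumably explains both the recommended 750 KB ceiling and why the upload here fails around that size.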
Instead you will need to go with an embedded Visualforce page, as used in How to Build a Lightning File Uploader Component. This gets you up to the Maximum file size for a file uploaded using a Visualforce page limit of 10 MB. Just remember to keep the file processing to a minimum before the heap size limit catches up with you.

How to create an HTTP request that contains multiple FileHeaders?

I am trying to test an upload service that supports uploading multiple files, and I found this:
golang POST data using the Content-Type multipart/form-data
which explains how to create a request that uploads a single file, but I need to upload multiple files. Is there a simple way to create that kind of request?
Update:
Please check lines 38 and 39 in the linked post, which support HTML5 multiple-file uploading:
line 38 files := m.File["myfiles"]
line 39 for i, _ := range files {
It seems that a single name needs to be set for multiple file headers to simulate HTML5 multiple-file uploading.
For each file, call CreateFormFile to create the header for the file. Call Write on the writer returned from CreateFormFile one or more times to write data to the file. When done with all files, close the multipart writer.
The top answer in the linked question uploads two files, one named "image" and one named "key". The data for the "image" is copied from a file. The data for "key" is simply the bytes "KEY".
The field name is the first argument to CreateFormFile. If you want to upload multiple files with the same name, use the same name each time you call CreateFormFile.
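A minimal sketch of that approach, assuming the server reads m.File["myfiles"] as in the question; the URL, field name, and file names below are placeholders:
package main

import (
    "bytes"
    "fmt"
    "io"
    "log"
    "mime/multipart"
    "net/http"
    "os"
)

// buildMultipartRequest builds a POST request whose body contains every file
// in paths under the same form field name, mirroring an HTML5
// <input type="file" multiple> upload.
func buildMultipartRequest(url, fieldName string, paths []string) (*http.Request, error) {
    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    for _, p := range paths {
        f, err := os.Open(p)
        if err != nil {
            return nil, err
        }
        // One part per file, all sharing the same field name.
        part, err := w.CreateFormFile(fieldName, p)
        if err != nil {
            f.Close()
            return nil, err
        }
        if _, err := io.Copy(part, f); err != nil {
            f.Close()
            return nil, err
        }
        f.Close()
    }
    // Closing the writer appends the terminating boundary.
    if err := w.Close(); err != nil {
        return nil, err
    }
    req, err := http.NewRequest("POST", url, &body)
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", w.FormDataContentType())
    return req, nil
}

func main() {
    req, err := buildMultipartRequest("http://localhost:8080/upload", "myfiles", []string{"a.txt", "b.txt"})
    if err != nil {
        log.Fatal(err)
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}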

File Object from Camel-exchange body is null in Camel FTP

The route definition is simple, based on the FTP2 component:
Endpoint[sftp://server.com:22//path/to/file/?consumer.delay=900000&password=xxxxxx&username=user]
I am trying to read a file from an FTP folder.
JAXBContext jaxBContext = JAXBContext.newInstance(ObjectFactory.class);
Unmarshaller unmarshaller = jaxBContext.createUnmarshaller();
File authBatchFile = exchange.getIn().getBody(File.class);
AuthorizationFeed batchAuthFeed = (AuthorizationFeed) JAXBIntrospector
.getValue(unmarshaller.unmarshal(authBatchFile));
The exchange has everything it should have:
Body [Body is file based: RemoteFile[fileName.txt]]
The header also shows the size of the file: CamelFileLength=81612. However, I am getting the exception below just after the exchange trace.
java.lang.IllegalArgumentException: The value for the "java.io.File" parameter cannot be null.
at com.ibm.xml.xlxp2.jaxb.unmarshal.AbstractUnmarshallerImpl.reportNullParameter(AbstractUnmarshallerImpl.java:180)
at com.ibm.xml.xlxp2.jaxb.unmarshal.AbstractUnmarshallerImpl.unmarshal(AbstractUnmarshallerImpl.java:72)
at com.wellpoint.clihub.hie.um.camel.processor.BatchCFFProcessor.process(BatchCFFProcessor.java:47)
I found the solution by adding &localWorkDirectory=/tmp to the route definition.
That way Camel doesn't treat the body as a remote file and hands it over as a java.io.File. I think Camel should incorporate that as a default feature when dealing with remote files. Per their documentation:
The route above is ultra efficient as it avoids reading the entire file content into memory. It will download the remote file directly to a local file stream. The java.io.File handle is then used as the Exchange body.
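For reference, a minimal sketch of the adjusted route; the host, path, credentials, and delay are the ones from the question, /tmp is the suggested local work directory, and the route class name is a placeholder:
import java.io.File;

import org.apache.camel.builder.RouteBuilder;

public class BatchFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sftp://server.com:22//path/to/file/"
                + "?username=user&password=xxxxxx"
                + "&consumer.delay=900000"
                + "&localWorkDirectory=/tmp")
            .process(exchange -> {
                // With localWorkDirectory set, the consumer downloads the
                // remote file to /tmp first, so this no longer returns null.
                File authBatchFile = exchange.getIn().getBody(File.class);
                // ... unmarshal authBatchFile with JAXB as in the question
            });
    }
}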