Request Time Out Error while uploading to Amazon S3 with InputStream

I am trying to upload a file to Amazon S3 using an InputStream. My code is as follows, and I am getting a Request Timeout error. The file is very small, around 1 MB.
ObjectMetadata metadata = new ObjectMetadata();
Long contentLength = Long.valueOf(IOUtils.toByteArray(fis).length);
metadata.setContentLength(contentLength);
try {
    s3Handler.putObject(new PutObjectRequest(bucketName, s3key, fis, metadata));
} catch (AmazonServiceException ase) {
    s3ExceptionHandler.processAmazonServiceException(ase);
} catch (AmazonClientException ace) {
    s3ExceptionHandler.processAmazonClientException(ace);
}
Request Timeout:
Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
Jan 30, 2013 10:15:42 AM javacode.S3ExceptionHandler processAmazonServiceException
SEVERE: HTTP Status Code: 400
Jan 30, 2013 10:15:42 AM javacode.S3ExceptionHandler processAmazonServiceException
SEVERE: AWS Error Code: RequestTimeout
It was working fine when I was using a File instead of an InputStream, but the problem is that I only have an InputStream available. Please help.

See this question: amazon s3 upload file time out. Reset your stream after computing the content length:
Long contentLength = Long.valueOf(IOUtils.toByteArray(fis).length);
fis.reset();
s3Handler.putObject(new PutObjectRequest(bucketName, s3key, fis, metadata));
It works for me.
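Note that reset() only works if the stream supports mark/reset; a plain FileInputStream does not and will throw an IOException. The underlying problem is that IOUtils.toByteArray(fis) already consumed the stream, so putObject has nothing left to send and S3 eventually closes the idle connection with RequestTimeout. A safer variant is to buffer the bytes once and upload a ByteArrayInputStream, which is always resettable. This is only a sketch against the v1 SDK, reusing s3Handler, bucketName, s3key, and fis from the question:
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import org.apache.commons.io.IOUtils;

// Read the source stream exactly once; fis is exhausted afterwards.
byte[] bytes = IOUtils.toByteArray(fis);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(bytes.length);
// ByteArrayInputStream supports mark/reset, so the SDK can replay the
// body if the connection stalls and the request is retried.
InputStream uploadStream = new ByteArrayInputStream(bytes);
s3Handler.putObject(new PutObjectRequest(bucketName, s3key, uploadStream, metadata));
This buffers the whole object in memory, which is a reasonable trade-off for a 1 MB file but not for large ones.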

Related

S3 file upload failing

I am using the S3 Transfer Manager from AWS SDK for Java v2 to upload a file to an AWS S3 bucket, and it is failing with the following exception:
software.amazon.awssdk.core.exception.SdkClientException: Failed to send the request: Failed to write to TLS handler
The following is the code snippet:
StaticCredentialsProvider credentials = StaticCredentialsProvider.create(
        AwsBasicCredentials.create(access_key_id, secret_access_key));
S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
        .credentialsProvider(credentials).region(region).build();
S3TransferManager s3tm = S3TransferManager.builder().s3Client(s3AsyncClient).build();
UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
        .putObjectRequest(req -> req.bucket("bucket").key("key"))
        .source(Paths.get("/file_to_upload.txt")).build();
FileUpload upload = s3tm.uploadFile(uploadFileRequest);
upload.completionFuture().join();
What am I missing here? Please advise.
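One way to narrow this down, sketched below rather than a confirmed fix: "Failed to write to TLS handler" is raised by the AWS CRT HTTP layer, so temporarily swapping the CRT client for the default Netty-based async client shows whether the CRT/TLS layer itself (native libraries, proxy, JVM TLS settings) is at fault. The names reuse those from the snippet above:
// Hypothetical isolation test: same Transfer Manager call, but backed
// by the standard Netty-based async client instead of the CRT client.
S3AsyncClient nettyClient = S3AsyncClient.builder()
        .credentialsProvider(credentials)
        .region(region)
        .build();
S3TransferManager tm = S3TransferManager.builder().s3Client(nettyClient).build();
tm.uploadFile(uploadFileRequest).completionFuture().join();
If this version succeeds, the problem is specific to the CRT client rather than to the request or the credentials.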

I get an exception when trying to read a file from MinIO with the Amazon SDK

I am trying to use MinIO as a local Amazon S3 server. I started the MinIO server on my computer, created a test bucket, and uploaded one file, Completed.jpg. I now have this file in MinIO and can download it via the link http://localhost:9000/minio/testbucket/Completed.jpg. But when I try to read this file from Java, I get an exception. I wrote this test:
@Test
public void readObject() {
    ClientConfiguration clientConf = PredefinedClientConfigurations.defaultConfig()
            .withProtocol(Protocol.HTTPS).withMaxErrorRetry(1);
    BasicAWSCredentials awsCredentials = new BasicAWSCredentials("minioadmin", "minioadmin");
    AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard()
            .withClientConfiguration(clientConf)
            .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:9000/minio", "us-east-1"));
    AmazonS3 amazonS3 = builder.build();
    S3Object object = amazonS3.getObject(new GetObjectRequest("testbucket", "Completed.jpg"));
    assertNotNull(object);
}
And this is the exception:
com.amazonaws.services.s3.model.AmazonS3Exception: All access to this bucket has been disabled. (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled; Request ID: /minio/testbucket/Completed.jpg; S3 Extended Request ID: 4a46a947-6473-4d53-bbb3-a4f908d444ce)
, S3 Extended Request ID: 4a46a947-6473-4d53-bbb3-a4f908d444ce
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1359)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5052)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4998)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1486)
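A likely cause, inferred from the endpoint rather than confirmed in the question: http://127.0.0.1:9000/minio is the MinIO web-console path, not the S3 API root, so the SDK signs requests against the wrong path and MinIO rejects them with AllAccessDisabled. Here is a sketch of a configuration that typically works against a local MinIO, reusing the credentials and bucket from the test:
// Endpoint without the /minio suffix, HTTP to match the local server,
// and path-style access, since MinIO does not resolve
// virtual-host-style bucket URLs (bucket.host) out of the box.
ClientConfiguration clientConf = PredefinedClientConfigurations.defaultConfig()
        .withProtocol(Protocol.HTTP).withMaxErrorRetry(1);
AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(clientConf)
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("minioadmin", "minioadmin")))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "http://127.0.0.1:9000", "us-east-1"))
        .withPathStyleAccessEnabled(true)
        .build();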

Amazon S3 Extended Request ID null

I'm trying to upload a file to an Amazon bucket using the S3 service. I am getting the exception below when connecting through the company's proxy server.
AWSCredentials credentials = new BasicSessionCredentials(
        uploadLocation.getAccessKeyId(), uploadLocation.getSecretAccessKey(),
        uploadLocation.getSessionToken());
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setProxyHost(proxyAddress);
clientConfiguration.setProxyPort(Integer.parseInt(proxyPort));
clientConfiguration.setProtocol(Protocol.HTTP);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withClientConfiguration(clientConfiguration)
        .withRegion("us-east-1")
        .build();
PutObjectResult putObjectResult = s3Client.putObject(new PutObjectRequest(
        uploadLocation.getBucket(), uploadLocation.getObject_key(), file));
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1592)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1257)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1029)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:741)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4227)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4174)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1722)
Please note that it works perfectly when the proxy is not used, in which case the ClientConfiguration object is not required.
I don't have much visibility into the S3 bucket configuration, as a different vendor company is using Amazon S3 as their storage system.
Any suggestions, please?
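Two hedged observations, since the bucket configuration is not visible: a Request ID of null usually means the 403 was generated before the request ever reached S3, that is, by the proxy itself, and Protocol.HTTP sends the request in clear text, which many corporate proxies block for S3 traffic. If the proxy requires authentication, the v1 ClientConfiguration can carry those credentials; proxyUser and proxyPass below are placeholders, not values from the question:
ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setProxyHost(proxyAddress);
clientConfiguration.setProxyPort(Integer.parseInt(proxyPort));
// Placeholder credentials for an authenticating proxy.
clientConfiguration.setProxyUsername(proxyUser);
clientConfiguration.setProxyPassword(proxyPass);
// Keep TLS end to end; the proxy then only sees a CONNECT tunnel.
clientConfiguration.setProtocol(Protocol.HTTPS);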

Unexpected server response (0) while retrieving pdf

We are specifically getting this error when using an Amazon EC2 instance. The configuration on the AWS instance is Tomcat 7, Ubuntu 16.04, and 8 GB of memory. It occurs when the user tries to view a PDF file. In our application we have functionality where the user can view a PDF file in the browser but cannot download it. The PDF file is on the same server. We are using a minimal CORS configuration. We have tried it locally with Ubuntu and it works fine.
Code snippet:
var fileSplitContent = fileName.split(".");
if ($('#viewImageOnlyForm')[0] != undefined && $('#viewPdfOnlyForm')[0] != undefined) {
    if (fileSplitContent[fileSplitContent.length - 1].toLowerCase() != "pdf") {
        $('#imageSource').val(requestURL + $.param(inputData));
        $('#viewImageOnlyForm').submit();
    } else {
        var requestURL = "rest/file/getCapitalRaiseFile?";
        $('#pdFSource').val(requestURL + $.param(inputData));
        $('#viewPdfOnlyForm').submit();
    }
} else {
    // pop up download attachment dialog box
    downloadIFrame.attr("src", requestURL + $.param(inputData));
}
Jan 04, 2017 5:07:31 AM org.glassfish.jersey.server.ServerRuntime$Responder writeResponse
SEVERE: An I/O error has occurred while writing a response message entity to the container output stream.
org.glassfish.jersey.server.internal.process.MappableException: org.apache.catalina.connector.ClientAbortException: java.net.SocketException: Connection reset
Caused by: org.apache.catalina.connector.ClientAbortException: java.net.SocketException: Broken pipe (Write failed)
Depending on where you access the document, this can be because you have a download manager installed in your browser. These sometimes cause problems, so take a look at your extensions and try disabling the download manager extension in your browser.

Files downloaded from Amazon S3 using Knox and Node.js are corrupt

I'm using knox to access my Amazon S3 bucket for file storage. I'm storing all kinds of files, mostly MS Office documents and PDFs, but they could be binary or any other kind. I'm also using Express 4.13.3 and busboy with connect-busboy for streaming support; uploads are handled with busboy and streamed directly to S3 via knox, avoiding having to write them to local disk first.
The files upload fine (I can browse and download them manually using Transmit) but I'm having problems downloading.
To be clear, I don't want to write the file to local disk; instead I keep it in an in-memory buffer. Here's the code I'm using to handle the GET request:
// instantiate a knox object
var s3client = knox.createClient({
    key: config.AWS.knox.key,
    secret: config.AWS.knox.secret,
    bucket: config.AWS.knox.bucket,
    region: config.AWS.region
});
var buffer = undefined;
s3client.get(path + '/' + fileName)
    .on('response', function(s3res) {
        s3res.setEncoding('binary');
        s3res.on('data', function(chunk) {
            buffer += chunk;
        });
        s3res.on('end', function() {
            buffer = new Buffer(buffer, 'binary');
            var fileLength = buffer.length;
            res.attachment(fileName);
            res.append('Set-Cookie', 'fileDownload=true; path=/');
            res.append('Content-Length', fileLength);
            res.status(s3res.statusCode).send(buffer);
        });
    }).end();
The file downloads to the browser (I'm using John Culviner's jquery.fileDownload.js), but what is downloaded is corrupt and can't be opened. As you can see, I'm using Express's .attachment to set the MIME type header and .append for the additional headers (using .set instead makes no difference).
When the file downloads in Chrome I see the message 'Resource interpreted as Document but transferred with MIME type application/vnd.openxmlformats-officedocument.spreadsheetml.sheet:' (for an Excel file), so express is setting the header correctly, and the size of the file downloaded matches that I see when examining the bucket.
Any ideas what's going wrong?
It looks like the contents may not be sent to the browser as binary. Try something like the following:
if (s3res.headers['content-type']) {
    res.type(s3res.headers['content-type']);
}
res.attachment(fileName);
s3res.setEncoding('binary');
s3res.on('data', function(data) {
    res.write(data, 'binary');
});
s3res.on('end', function() {
    res.end(); // end the response; res.send() here would try to set headers again
});
It will also send the data one chunk at a time as it comes in, so it should be a bit more memory efficient.