java.io.FileNotFoundException (Read-only file system) when uploading files to S3

I am trying to create CSV files from a list of maps and upload them to an S3 bucket through a Lambda function. Following is the code:
public void createCSV(List<Map<String, AttributeValue>> changedRecords, Context context, String tableName)
        throws IOException {
    Calendar calendar = Calendar.getInstance();
    SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss");
    String outputName = tableName + "_" + formatter.format(calendar.getTime()) + ".csv";
    List<String> headers = changedRecords.stream().flatMap(map -> map.keySet().stream()).distinct()
            .collect(Collectors.toList());
    try (FileWriter writer = new FileWriter(outputName, true)) {
        for (String string : headers) {
            writer.write(string);
            writer.write(",");
        }
        writer.write("\r\n");
        for (Map<String, AttributeValue> lmap : changedRecords) {
            for (Entry<String, AttributeValue> string2 : lmap.entrySet()) {
                writer.write(string2.getValue().getS());
                writer.write(",");
            }
            writer.write("\r\n");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    s3.putObject(new PutObjectRequest("bucket_name", "data/" + outputName, outputName));
}
I am getting the following FileNotFoundException:
java.io.FileNotFoundException: data_20200227192207.csv (Read-only file system)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
    at java.io.FileWriter.<init>(FileWriter.java:78)
    at com.amazonaws.lambda.demo.PLMLambda.createCSV(PLMLambda.java:84)
    at com.amazonaws.lambda.demo.PLMLambda.handleRequest(PLMLambda.java:54)
    at com.amazonaws.lambda.demo.PLMLambda.handleRequest(PLMLambda.java:1)
    at lambdainternal.EventHandlerLoader$PojoHandlerAsStreamHandler.handleRequest(EventHandlerLoader.java:178)
    at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:906)
    at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:341)
    at lambdainternal.AWSLambda.<clinit>(AWSLambda.java:63)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:114)

Change the line:
    try (FileWriter writer = new FileWriter(outputName, true)) {
to
    try (FileWriter writer = new FileWriter("/tmp/" + outputName, true)) {
In Lambda you can only write to the /tmp directory, so the file has to be created there, and the upload call then has to read it back from the same /tmp path.
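A minimal sketch of how the end of the method could look after that change, assuming the AmazonS3 client field s3 from the question (the bucket name and key prefix are just the placeholders used above). Note that handing PutObjectRequest a java.io.File is what uploads the file's contents; the String overload used in the question is the redirect-location constructor in the v1 SDK:

    import java.io.File;
    import java.io.FileWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import com.amazonaws.services.s3.model.PutObjectRequest;

    // inside createCSV(...), after building outputName:
    String outputPath = "/tmp/" + outputName;   // Lambda functions may only write under /tmp
    try (FileWriter writer = new FileWriter(outputPath, true)) {
        // ... write the header line and rows exactly as in the question ...
    }
    // upload the temp file, then delete it so warm containers don't fill /tmp over time
    s3.putObject(new PutObjectRequest("bucket_name", "data/" + outputName, new File(outputPath)));
    Files.deleteIfExists(Paths.get(outputPath));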

If your CSV content is not huge (nowhere near the multi-gigabyte range), you can simply use a StringWriter instead of a FileWriter in AWS Lambda and then put the string directly to S3.
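For example, a minimal sketch of that in-memory approach, again assuming the same s3 client and the v1 SDK (bucket name and key are placeholders):

    import java.io.ByteArrayInputStream;
    import java.io.StringWriter;
    import java.nio.charset.StandardCharsets;
    import com.amazonaws.services.s3.model.ObjectMetadata;

    StringWriter writer = new StringWriter();
    // ... write the same header line and rows into `writer` ...
    byte[] csvBytes = writer.toString().getBytes(StandardCharsets.UTF_8);

    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(csvBytes.length);
    metadata.setContentType("text/csv");
    // stream the in-memory bytes straight to S3, no local file involved
    s3.putObject("bucket_name", "data/" + outputName, new ByteArrayInputStream(csvBytes), metadata);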

Related

How to get InputStream from MultipartFormDataInput?

I'm trying to save a PDF in WildFly. I'm using the RESTEasy MultipartFormDataInput provided with WildFly 20.0.1, but it doesn't work.
This is what I have:
public static Response uploadPdfFile(MultipartFormDataInput multipartFormDataInput) {
    // local variables
    MultivaluedMap<String, String> multivaluedMap = null;
    String fileName = null;
    InputStream inputStream = null;
    String uploadFilePath = null;
    try {
        Map<String, List<InputPart>> map = multipartFormDataInput.getFormDataMap();
        List<InputPart> lstInputPart = map.get("poc");
        for (InputPart inputPart : lstInputPart) {
            // get filename to be uploaded
            multivaluedMap = inputPart.getHeaders();
            fileName = getFileName(multivaluedMap);
            if (null != fileName && !"".equalsIgnoreCase(fileName)) {
                try {
                    // write & upload file to UPLOAD_FILE_SERVER
                    // here I have the error: Unable to find a MessageBodyReader for media type:
                    // application/pdf
                    inputStream = inputPart.getBody(InputStream.class, InputStream.class);
                    uploadFilePath = writeToFileServer(inputStream, fileName);
                } catch (Exception e) {
                    e.printStackTrace();
                }
                // close the stream
                inputStream.close();
            }
        }
    } catch (IOException ioe) {
        ioe.printStackTrace();
    } finally {
        // release resources, if any
    }
    return Response.ok("File uploaded successfully at " + uploadFilePath).build();
}
I'm testing with Postman, using an HTTP POST; in the request body I send form-data with a file field and select file.pdf.
When I send the request, I get a RuntimeException from this call:
    inputStream = inputPart.getBody(InputStream.class, null);
I get:
    java.lang.RuntimeException: RESTEASY007545: Unable to find a MessageBodyReader for media type: application/pdf and class type org.jboss.resteasy.util.Base64$InputStream
At the moment I am saving the file by receiving it in Base64, but I think MultipartFormDataInput is the correct way.
Thanks for your support.
I solved this by changing the InputStream import from org.jboss.resteasy.util.Base64.InputStream to java.io.InputStream.

A task was canceled Exception when trying to upload file to S3 bucket

A "task was canceled" exception is thrown when I'm trying to call fileTransferUtility.UploadAsync to upload a file to S3. I'm using .NET Core 2.0. What is it that I'm doing wrong in the code below?
Is it something to do with a timeout? If so, how do I set the timeout for the S3 bucket, or do I have to set some properties on the S3 bucket?
Below is my controller code:
public class UploadController : Controller
{
    private IHostingEnvironment _hostingEnvironment;
    private AmazonS3Client _s3Client = new AmazonS3Client(RegionEndpoint.APSoutheast1);
    private string _bucketName = "fileupload"; // this is my Amazon bucket name
    private static string _bucketSubdirectory = String.Empty;
    private string uploadWithKeyName = "testFile";

    public UploadController(IHostingEnvironment environment)
    {
        _hostingEnvironment = environment;
    }

    [HttpPost("UploadExcelData")]
    public async Task PostExcelData()
    {
        var files = Request.Form.Files;
        var stringVal = Request.Form.Keys;
        long size = files.Sum(f => f.Length);
        foreach (var formFile in files)
        {
            if (formFile.Length > 0)
            {
                var filename = ContentDispositionHeaderValue
                    .Parse(formFile.ContentDisposition)
                    .FileName
                    .TrimStart().ToString();
                filename = _hostingEnvironment.WebRootPath + $@"\uploads" + $@"\{formFile.FileName}";
                size += formFile.Length;
                using (var fs = System.IO.File.Create(filename))
                {
                    formFile.CopyTo(fs);
                    fs.Flush();
                } // these code snippets save the uploaded files to the project directory
                await UploadToS3(filename); // this is the method to upload the saved file to S3
            }
        }
        // return Ok();
    }

    public async Task UploadToS3(string filePath)
    {
        try
        {
            TransferUtility fileTransferUtility = new TransferUtility(_s3Client);
            string bucketName;
            if (_bucketSubdirectory == "" || _bucketSubdirectory == null)
            {
                bucketName = _bucketName; // no subdirectory, just the bucket name
            }
            else
            {
                // subdirectory and bucket name
                bucketName = _bucketName + @"/" + _bucketSubdirectory;
            }
            // 1. Upload a file; the file name is used as the object key name.
            await fileTransferUtility.UploadAsync(filePath, bucketName, uploadWithKeyName).ConfigureAwait(false);
            Console.WriteLine("Upload 1 completed");
        }
        catch (AmazonS3Exception s3Exception)
        {
            Console.WriteLine(s3Exception.Message, s3Exception.InnerException);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Unknown error", ex.Message);
        }
    }
}
I forgot to pass the credentials:
    private AmazonS3Client _s3Client = new AmazonS3Client(DynamoDbCRUD.Credentials.AccessKey, DynamoDbCRUD.Credentials.SecretKey, RegionEndpoint.APSoutheast1);
This line works fine.

Read resource file from inside SonarQube Plugin

I am developing a plugin using org.sonarsource.sonarqube:sonar-plugin-api:6.3. I am trying to access a file in my resources folder. Reading it works fine in unit tests, but when the plugin is deployed as a jar into SonarQube, it cannot locate the file.
For example, I have the file Something.txt in src/main/resources. Then, I have the following code
private static final String FILENAME = "Something.txt";
String template = FileUtils.readFile(FILENAME);
where FileUtils.readFile would look like
public String readFile(String filePath) {
    try {
        return readAsStream(filePath);
    } catch (IOException ioException) {
        LOGGER.error("Error reading file {}, {}", filePath, ioException.getMessage());
        return null;
    }
}

private String readAsStream(String filePath) throws IOException {
    try (InputStream inputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(filePath)) {
        if (inputStream == null) {
            throw new IOException(filePath + " is not found");
        } else {
            return IOUtils.toString(inputStream, StandardCharsets.UTF_8);
        }
    }
}
This question is similar to reading a resource file from within a jar. I have also tried /Something.txt and Something.txt; neither works. If I put the file Something.txt in the classes folder of the SonarQube installation, the code works.
Try this:
    File file = new File(getClass().getResource("/Something.txt").toURI());
    BufferedReader reader = new BufferedReader(new FileReader(file));
    String something = IOUtils.toString(reader);
You should not use getContextClassLoader(). See: Short answer: never use the context class loader!
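Note that new File(...toURI()) only works while the resource sits on the plain filesystem; once the plugin is packaged as a jar, the resource URI is a jar: URI and cannot be turned into a File. A minimal sketch that works in both cases, still avoiding the context class loader and reusing the commons-io IOUtils dependency already used above:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import org.apache.commons.io.IOUtils;

    public class ResourceReader {
        public String readSomething() throws IOException {
            // resolved against this class's own class loader, so it also works inside the plugin jar
            try (InputStream in = getClass().getResourceAsStream("/Something.txt")) {
                if (in == null) {
                    throw new IOException("/Something.txt is not found");
                }
                return IOUtils.toString(in, StandardCharsets.UTF_8);
            }
        }
    }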

Inserting an image into MongoDB from Java results in a strange error

I have the following code for saving an image in MongoDB:
public static void insertImage() throws Exception {
    String newFileName = "mkyong-java-image";
    File imageFile = new File("c:\\JavaWebHosting.png");
    GridFS gfsPhoto = new GridFS(db, "photo");
    GridFSInputFile gfsFile = gfsPhoto.createFile(imageFile);
    gfsFile.setFilename(newFileName);
    gfsFile.save();
}
I got this code from this link:
link for code
But when I use it, I get the following error and I do not know how to fix it. Can anyone help?
Exception in thread "main" java.lang.NullPointerException
    at com.mongodb.gridfs.GridFS.<init>(GridFS.java:97)
For more explanation, the error occurs at exactly this line:
    GridFS gfsPhoto = new GridFS(db, "photo");
Update:
Here is the code for creating the DB connection:
public static DB getDBConnection() {
    // If it's not connected to the database, make connection
    if (db == null) {
        initialize();
        makeConnections();
    }
    return db;
}

private static void makeConnections() {
    MongoCredential credential = MongoCredential.createMongoCRCredential(dbUser, dbName, dbPass.toCharArray());
    MongoClient mongoClient;
    try {
        mongoClient = new MongoClient(new ServerAddress(dbHost, Integer.parseInt(dbPort)), Arrays.asList(credential));
        db = mongoClient.getDB(dbName);
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
}
Update:
String newFileName = "mkyong-java-image";
File imageFile = new File("D:/1.jpg");
db = MongoDB.getDBConnection();
collection = db.getCollection("test");
// create a "photo" namespace
GridFS gfsPhoto = new GridFS(db, "photo");
// get image file from local drive
GridFSInputFile gfsFile = gfsPhoto.createFile(imageFile);
// set a new filename for identification purposes
gfsFile.setFilename(newFileName);
// save the image file into mongoDB
gfsFile.save();
// print the result
DBCursor cursor = gfsPhoto.getFileList();
while (cursor.hasNext()) {
    System.out.println(cursor.next());
}
// get image file by its filename
GridFSDBFile imageForOutput = gfsPhoto.findOne(newFileName);
// save it into a new image file
imageForOutput.writeTo("D:\\JavaWebHostingNew.jpg");
// remove the image file from mongoDB
// gfsPhoto.remove(gfsPhoto.findOne(newFileName));
System.out.println("Done");

Uploading a directory as a zipped file from Elastic MapReduce to S3

I would like to upload a directory from an EMR local file system to S3 as a zipped file.
Is there a better way to approach this than the method I'm currently using?
Would it be possible to return a ZipOutputStream as a Reducer output?
Thanks
zipFolderAndUpload("target", "target.zip", "s3n://bucketpath/");
static public void zipFolderAndUpload(String srcFolder, String zipFile, String dst) throws Exception {
    // Zips a directory
    FileOutputStream fileWriter = new FileOutputStream(zipFile);
    ZipOutputStream zip = new ZipOutputStream(fileWriter);
    addFolderToZip("", srcFolder, zip);
    zip.flush();
    zip.close();
    // Copies the zipped file to the S3 filesystem
    InputStream in = new BufferedInputStream(new FileInputStream(zipFile));
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(dst + zipFile), conf);
    OutputStream out = fs.create(new Path(dst + zipFile));
    IOUtils.copyBytes(in, out, 4096, true);
}
static private void addFileToZip(String path, String srcFile, ZipOutputStream zip) throws Exception {
    File folder = new File(srcFile);
    if (folder.isDirectory()) {
        addFolderToZip(path, srcFile, zip);
    } else {
        byte[] buf = new byte[1024];
        int len;
        FileInputStream in = new FileInputStream(srcFile);
        zip.putNextEntry(new ZipEntry(path + "/" + folder.getName()));
        while ((len = in.read(buf)) > 0) {
            zip.write(buf, 0, len);
        }
        in.close();
        zip.closeEntry();
    }
}
static private void addFolderToZip(String path, String srcFolder, ZipOutputStream zip) throws Exception {
    File folder = new File(srcFolder);
    for (String fileName : folder.list()) {
        if (path.equals("")) {
            addFileToZip(folder.getName(), srcFolder + "/" + fileName, zip);
        } else {
            addFileToZip(path + "/" + folder.getName(), srcFolder + "/" + fileName, zip);
        }
    }
}
The approach you are taking looks fine. If you find that it is too slow because it is single-threaded, you can create your own Hadoop OutputFormat implementation that writes to zip files.
One thing you have to be careful of is that the java.util.zip ZipOutputStream in older Java SE releases (before Java 7) does not support Zip64, which means it cannot produce ZIP files larger than 4 GB. There are other Java ZIP implementations that do, such as TrueZIP.
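If the local temp file itself becomes a bottleneck, a related option is to zip straight into the Hadoop FileSystem output stream so the archive is written to the destination in a single pass. A minimal sketch under that assumption, reusing the addFolderToZip helper from the question (the method name is illustrative only):

    import java.net.URI;
    import java.util.zip.ZipOutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    static public void zipFolderDirectlyToS3(String srcFolder, String dst) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        // open the destination object and zip straight into it, no local temp file
        try (ZipOutputStream zip = new ZipOutputStream(fs.create(new Path(dst)))) {
            addFolderToZip("", srcFolder, zip);   // helper from the question above
        }
    }

Here dst would be the full object path, i.e. the s3n:// bucket path plus the archive name.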