Vert.x upload file: BodyHandler methods not working - Kotlin

I want to upload avatars. My endpoints are set up with OpenAPI3RouterFactory (from an api.yaml file).
api.yaml:
  #other endpoints
  /api/v1/upload-avatar:
    post:
      summary: Uploading avatars endpoint
      operationId: upload-avatar
      tags:
        - sign up
        - registration
      #other ones
HandlerVerticle.kt:
//OpenApi3Router is set
//other endpoints
routerFactory.addHandlerByOperationId("upload-avatar",
    BodyHandler.create()
        .setDeleteUploadedFilesOnEnd(true)
        .setUploadsDirectory("mp-upload")
        .setMergeFormAttributes(true))
routerFactory.addHandlerByOperationId("upload-avatar", { routingContext ->
    val fileUploadSet = routingContext.fileUploads()
    val fileUploadIterator = fileUploadSet.iterator()
    while (fileUploadIterator.hasNext()) {
        val fileUpload = fileUploadIterator.next()
        val uploadedFile = vertx.fileSystem().readFileBlocking(fileUpload.uploadedFileName())
        try {
            val fileName = URLDecoder.decode(fileUpload.fileName(), "UTF-8")
            vertx.fileSystem().writeFileBlocking(fileName, uploadedFile)
            routingContext.response().end()
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }
})
//other routes
When I upload an image, the upload succeeds, but on the server side Vert.x creates an upload directory other than the one I configured, and it doesn't delete the temporary uploaded files. Can anyone help me?

Not sure whether this should be considered a bug or intentional behaviour, but OpenAPI3RouterFactoryImpl.getRouter() always overrides the BodyHandler, no matter what you set before calling it.
I opened a new issue for now: https://github.com/vert-x3/vertx-web/issues/860
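Until that is fixed upstream, one pragmatic workaround (a minimal sketch, written in Java; the same calls work from Kotlin) is to accept the factory-installed BodyHandler and clean up the temporary upload yourself inside the operation handler. Newer vertx-web releases also added a RouterFactory#setBodyHandler method that lets you hand the factory your own configured handler, so check whether your version has it.
// imports assumed: io.vertx.core.file.FileSystem, io.vertx.ext.web.FileUpload,
// java.net.URLDecoder, java.nio.charset.StandardCharsets
// Workaround sketch: persist the avatar, then delete the temporary upload
// explicitly, since setDeleteUploadedFilesOnEnd(true) never reaches the
// BodyHandler that the router factory actually installs.
routerFactory.addHandlerByOperationId("upload-avatar", routingContext -> {
    FileSystem fs = vertx.fileSystem();
    for (FileUpload upload : routingContext.fileUploads()) {
        String target = URLDecoder.decode(upload.fileName(), StandardCharsets.UTF_8);
        fs.copyBlocking(upload.uploadedFileName(), target);  // keep the avatar
        fs.deleteBlocking(upload.uploadedFileName());        // remove the temp file
    }
    routingContext.response().end();
});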

Related

Use Ktor as a pipe for simultaneously fetching and responding with a file

I have a Ktor backend which serves as a broker between the frontend client and an external REST API. I want Ktor to fetch the chunks of a file from the REST API and, as it receives these chunks, pass them on to the client without having to temporarily store the entire file. The file can be very large, which is why the only option is to stream it.
I have something like this in my code so far, but it doesn't seem to work correctly:
get("/file") {
val uri = "/rest-api"
downloadFileClient.prepareGet(uri).execute {response ->
call.respondOutputStream(ContentType.Application.Pdf, HttpStatusCode.OK, producer = {response.bodyAsChannel()})
}
}
You can respond with an object of the OutgoingContent.ReadChannelContent class, which can use the client's response as its source:
get("/file") {
val uri = "/rest-api"
downloadFileClient.prepareGet(uri).execute { response ->
val channel = response.bodyAsChannel()
call.respond(object : OutgoingContent.ReadChannelContent() {
override fun readFrom(): ByteReadChannel = channel
override val status: HttpStatusCode = HttpStatusCode.OK
override val contentType: ContentType = ContentType.Application.Pdf
})
}
}

AmazonS3: Getting warning: S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but the S3ObjectInputStream doesn't seem to get closed this way:
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah blah
}
I also tried the code below, explicitly closing the stream, but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah
    s3ObjectInputStream.close();
    s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium; sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
Read the rest of the data from the input stream so the connection can be reused.
Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file.
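To add a sketch for option #2 (not part of the original answer; it reuses the s3Client, bucket and key variables from the question and the AWS SDK for Java v1): read only what you need, then abort the stream so the rest of a large object is never downloaded.
// Inside a method that declares "throws IOException"; s3Client, bucket and key
// are the same variables used in the question.
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key))) {
    S3ObjectInputStream content = s3object.getObjectContent();
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(content, StandardCharsets.UTF_8));
    System.out.println(reader.readLine()); // first line
    System.out.println(reader.readLine()); // second line
    // Deliberately skip the rest of the object: abort() drops the HTTP
    // connection instead of draining it, at the cost of not reusing that
    // connection for the next request.
    content.abort();
}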
Following option #1 of Chirag Sejpal's answer, I used the statement below to drain the S3AbortableInputStream and ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem, and the following class helped me:
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {

    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
    // same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    try {
        // Read from stream as necessary
    } catch (Exception e) {
        // Handle exceptions as necessary
    } finally {
        while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
            // Read the rest of the stream
        }
    }
    // The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in Lambda is limited to 512 MB, and if the Lambda context is reused for a new invocation, the /tmp space may already be half full. So, reading the S3 objects and writing all the files to the /tmp directory (as I was doing), I ran out of disk space somewhere along the way. The Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream had been read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space; we only have 512 MB.
2) If the second execution causes the problem, attack the root cause: it's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution has finished.
In Java, here is what I did, which successfully resolved the problem.
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
    try {
        // All work here
        return "Success"; // placeholder: return whatever the elided work produces
    } catch (Exception e) {
        logger.error("Error {}", e.toString());
        return "Error";
    } finally {
        deleteAllFilesInTmpDir();
    }
}
private void deleteAllFilesInTmpDir() {
    Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
    try {
        if (Files.exists(path)) {
            deleteDir(path.toFile());
            logger.info("Successfully cleaned up the tmp directory");
        }
    } catch (Exception ex) {
        logger.error("Unable to clean up the tmp directory");
    }
}

public void deleteDir(File dir) {
    File[] files = dir.listFiles();
    if (files != null) {
        for (final File file : files) {
            deleteDir(file);
        }
    }
    dir.delete();
}
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
        .standard()
        .withRegion("your-region")
        .withCredentials(
                new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
        .build();
Create an Amazon S3 TransferManager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
        .withS3Client(amazonS3Client)
        .build();
Create a temporary file at /tmp/{your-s3-key} so that we can write the downloaded object into it:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
try {
    file.getParentFile().mkdirs(); // Create the parent directories of the temporary file (the key may contain slashes)
    file.createNewFile();          // Create the temporary file itself
} catch (IOException e) {
    e.printStackTrace();
}
Then we download the file from S3 using the TransferManager client:
// Note that on this line the S3 object is downloaded into the temporary file we created
Download download = transferManagerClient.download(
        new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 object has been successfully transferred into the temporary file, we can get an InputStream for it:
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it.
file.delete();

Can't upload files in spring boot

I've been struggling with this for the past 3 days now; I keep getting the following exception when I try to upload a file in my Spring Boot project.
org.springframework.web.multipart.support.MissingServletRequestPartException: Required request part 'file' is not present
I'm not sure if it makes a difference, but I am deploying my application as a WAR onto WebLogic.
Here is my controller:
@PostMapping
public AttachmentDto createAttachment(@RequestParam(value = "file") MultipartFile file) {
    logger.info("createAttachment - {}", file.getOriginalFilename());
    AttachmentDto attachmentDto = null;
    try {
        attachmentDto = attachmentService.createAttachment(new AttachmentDto(file, 1088708753L));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return attachmentDto;
}
(Screenshot: the multipart beans I can see in Spring Boot Actuator.)
(Screenshot: the request payload as seen in Chrome.)
A name attribute matching @RequestParam("file") is required on the file input:
<input type="file" class="file" name="file"/>
You can try using @RequestPart, because it uses an HttpMessageConverter that takes into consideration the 'Content-Type' header of the request part.
Note that the @RequestParam annotation can also be used to associate the part of a "multipart/form-data" request with a method argument supporting the same method argument types. The main difference is that when the method argument is not a String, @RequestParam relies on type conversion via a registered Converter or PropertyEditor while @RequestPart relies on HttpMessageConverters taking into consideration the 'Content-Type' header of the request part. @RequestParam is likely to be used with name-value form fields while @RequestPart is likely to be used with parts containing more complex content (e.g. JSON, XML).
Spring Documentation
Code:
@PostMapping(consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public AttachmentDto createAttachment(@RequestPart("file") MultipartFile file) {
    logger.info("Attachment - {}", file.getOriginalFilename());
    try {
        return attachmentService.createAttachment(new AttachmentDto(file, 1088708753L));
    } catch (final IOException e) {
        logger.error("Error creating attachment", e);
    }
    return null;
}
You are using multipart to send files, so there is not much configuration needed to get the desired result.
I had the same requirement and my code runs fine:
@RestController
@RequestMapping("/api/v2")
public class DocumentController {

    private static String bucketName = "pharmerz-chat";
    // private static String keyName = "Pharmerz" + UUID.randomUUID();

    @RequestMapping(value = "/upload", method = RequestMethod.POST, consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    public String uploadFileHandler(@RequestParam("name") String name,
                                    @RequestParam("file") MultipartFile file) throws IOException {

        /******* Printing all the possible parameters from @RequestParam *************/
        System.out.println("*****************************");
        System.out.println("file.getOriginalFilename() " + file.getOriginalFilename());
        System.out.println("file.getContentType() " + file.getContentType());
        System.out.println("file.getInputStream() " + file.getInputStream());
        System.out.println("file.toString() " + file.toString());
        System.out.println("file.getSize() " + file.getSize());
        System.out.println("name " + name);
        System.out.println("file.getBytes() " + file.getBytes());
        System.out.println("file.hashCode() " + file.hashCode());
        System.out.println("file.getClass() " + file.getClass());
        System.out.println("file.isEmpty() " + file.isEmpty());

        /**
         BUSINESS LOGIC
         Write code to upload file where you want
         *****/

        return "File uploaded";
    }
}
None of the above solutions worked for me. When I dug deeper, I found that Spring Security was the main culprit: even though I was sending the CSRF token, I kept getting "POST not supported", and inspecting the network tab in Chrome's developer tools showed that the actual response status was 403 Forbidden. I added the upload mapping to the ignored CSRF mappings in my Spring Security configuration (a sketch follows the property list below), and then it worked without any other flaw; I don't know why Spring Security would not let me post multipart data otherwise. Some mandatory settings that need to be present in the application.properties file are as follows:
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=10MB
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=10MB
spring.http.multipart.enabled=true
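For reference, here is a minimal sketch of what adding the upload mapping to the ignored CSRF mappings can look like, using the classic WebSecurityConfigurerAdapter style; the /api/attachments/** pattern is an assumption and should be replaced with your actual upload path:
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Keep CSRF protection for the rest of the application, but skip it
        // for the multipart upload endpoint (the pattern is an assumption).
        http.csrf().ignoringAntMatchers("/api/attachments/**");
        // ... other authorization rules go here ...
    }
}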

Trying to save DStream checkpoints to a location on Amazon S3

I want to save checkpoints to a location on Amazon S3. This is the relevant part of my Scala code on the DStream; using the format below, I get the following error:
Exception in thread "main" java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
Code:
val creatingFunc = { () =>
  // Create a StreamingContext
  val ssc = new StreamingContext(sc, Seconds(batchIntervalSeconds))

  val ggsnLines = ssc.fileStream[LongWritable, Text, TextInputFormat]("C:\\Users\\Mbazarganigilani\\Documents\\RA\\GGSN\\Files1", filterF, false)
  val ccnLines = ssc.fileStream[LongWritable, Text, TextInputFormat]("C:\\Users\\Mbazarganigilani\\Documents\\RA\\CCN\\Files1", filterF, false)
  val probeLines = ssc.fileStream[LongWritable, Text, TextInputFormat]("C:\\Users\\Mbazarganigilani\\Documents\\RA\\Probe\\Files1", filterF, false)

  val ggssnArrays = ggsnLines.map(x => (x._1, x._2.toString())).filter(!_._2.contains("ggsnIPAddress")).map(x => (x._1, x._2.split(",")))
  ggssnArrays.foreachRDD(s => {
    s.collect().take(10).foreach(u => println(u._2.mkString(",")))
  })

  ssc.remember(Minutes(1)) // To make sure data is not deleted by the time we query it interactively
  ssc.checkpoint("s3n://probecheckpoints/checkpoints")

  println("Creating function called to create new StreamingContext")
  newContextCreated = true
  ssc
}
def main(args: Array[String]): Unit = {
  // the minRememberDuration is set to read the previous files from the directory
  // Kryo class serialization needs to be enabled for the fileStream

  if (stopActiveContext) {
    StreamingContext.getActive.foreach { _.stop(stopSparkContext = false) }
  }

  // Get or create a streaming context
  val hadoopConfiguration: Configuration = new Configuration()
  hadoopConfiguration.set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
  hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "AKIAIOPSJVBDTEUHUJCQ")
  hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "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")

  //val ssc = StreamingContext.getActiveOrCreate(creatingFunc)
  val ssc = StreamingContext.getActiveOrCreate("s3n://probecheckpoints/SparkCheckPoints", creatingFunc, hadoopConfiguration, false)

  if (newContextCreated) {
    println("New context created from currently defined creating function")
  } else {
    println("Existing context running or recovered from checkpoint, may not be running currently defined creating function")
  }

  // Start the streaming context in the background.
  ssc.start()

Uploading a file in a non-blocking manner without using gridFSBodyParser(gridFS)

The plugin play-reactivemongo offers an easy way to upload a file:
def upload = Action(gridFSBodyParser(gridFS)) { request =>
  val futureFile: Future[ReadFile[BSONValue]] = request.body.files.head.ref
  futureFile.map { file =>
    // do something
    Ok
  }.recover { case e: Throwable => InternalServerError(e.getMessage) }
}
Unfortunately this solution doesn't suit me because:
I would like only my DAO layer to depend on reactive-mongo.
I need to save the file only if a user is authenticated (with SecureSocial) and use some user's properties as checks and metadata.
If no user is authenticated the request body shouldn't be parsed at all (see also this question).
It would be something along the lines of:
def upload = SecuredAction { request =>
  val user = request.user
  val enumerator = ??? // an enumerator obtained from parsing the request body
  myDAO.saveFile(user, enumerator)
}

object myDAO {
  def saveFile(user: User, enumerator: Enumerator[Array[Byte]]) = {
    ...
    val fileToSave = DefaultFileToSave(...)
    gridfs.save(enumerator, fileToSave)
    ...
  }
}
Unfortunately it seems there is no way to get an enumerator from the parsing of the request body. The only way seems to be to provide the Action with a parser and an Iteratee that will be fed with the body being parsed.
I couldn't figure out how to achieve this in a reactive way (without using a temporary file or storing the body in memory). Is it at all possible?
Actually, you might consider not using the GridFS built-in parser at all:
val gfs = new GridFS(db)

// the controller method; Authenticated here is a custom object extending ActionBuilder
def upload = Authenticated.async(parse.multipartFormData) { request =>
  ...
  request.body.file("photo") match {
    // handle error cases
    ...
    case Some(photo) =>
      val fileToSave = DefaultFileToSave(photo.filename, photo.contentType)
      // some more operations here; basically you only need photo.ref.file
      val enumerator = Enumerator(Image(photo.ref.file).fitToWidth(120).write)
      gfs.save(enumerator, fileToSave) map {
        // handle responses and stuff
        ...
      }
  }
}
}