Uploading a file in ColdFusion - coldfusion-9

We would like to know if there is a way to upload a file using ColdFusion. Our requirement is to upload a CSV file: initially we move the CSV data into a temporary table, and then we write and call a proc that kicks off the actual upload process.

The simple answer is: yes. ColdFusion makes this quite a simple process. Just read the CF9 docs concerning cffile (http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-738f.html)
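As a rough sketch of the whole flow (the form-field name, datasource, temp-table and procedure names below are placeholders, not from the docs): cffile handles the upload, cfloop reads the CSV line by line into the temporary table, and cfstoredproc kicks off the processing.

<!--- Upload the CSV to a scratch directory --->
<cffile action="upload"
        filefield="csvFile"
        destination="#getTempDirectory()#"
        nameconflict="makeunique"
        result="uploadResult">

<!--- Load each CSV line into the temporary table --->
<cfloop file="#uploadResult.serverDirectory#/#uploadResult.serverFile#" index="line">
    <cfset cols = listToArray(line, ",")>
    <cfquery datasource="myDsn">
        INSERT INTO tmp_csv_upload (col1, col2)
        VALUES (<cfqueryparam value="#cols[1]#" cfsqltype="cf_sql_varchar">,
                <cfqueryparam value="#cols[2]#" cfsqltype="cf_sql_varchar">)
    </cfquery>
</cfloop>

<!--- Call the proc that kicks off the actual upload process --->
<cfstoredproc procedure="usp_process_csv_upload" datasource="myDsn" />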

Related

Compressing a PDF to ZIP and saving it in a SQL table

I'm getting a Base64-encoded file sent by another application to my application, and I need to decode it in SQL, turn it into a *.zip, and then store it as a zip in the proper column of a table.
The decoding part is done. Now I'm stuck on the step where I need to turn that file into a ZIP.
The question is: is there any way to do the compression using SQL alone?
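If the database is SQL Server 2016 or later, the closest pure-SQL option is COMPRESS(), with one big caveat: it emits a GZip stream, not a true .zip archive, so producing a real .zip still requires SQLCLR or application code. A sketch with hypothetical table and column names:

-- Decode the incoming Base64 payload to VARBINARY via the XML type
DECLARE @base64 VARCHAR(MAX) = '...';  -- Base64 string from the other application
DECLARE @raw VARBINARY(MAX) =
    CAST(N'' AS XML).value('xs:base64Binary(sql:variable("@base64"))', 'VARBINARY(MAX)');

-- COMPRESS() (SQL Server 2016+) gzips the bytes; dbo.Documents/FileZip are hypothetical
INSERT INTO dbo.Documents (FileZip) VALUES (COMPRESS(@raw));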

Save a PDF file loaded in an iFrame to the database after editing in Oracle APEX

I am trying to save a PDF file that is loaded in an iFrame after signing it; I am using PSPDFKit standalone in Oracle APEX version 190200.
I need to save the file in the database instead of downloading it.
How can I get the file and save it in the database through an AJAX callback?
You can use instance.exportPDF() to get the PDF as an ArrayBuffer. Then you can convert the ArrayBuffer to a Blob and send it to the server. Hopefully, this should solve your issue.
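A sketch of that flow (the endpoint URL and form-field name are placeholders; wire it to your own APEX AJAX callback):

instance.exportPDF().then(function (arrayBuffer) {
    // Wrap the exported bytes in a Blob so they can be posted as a file
    var blob = new Blob([arrayBuffer], { type: "application/pdf" });
    var form = new FormData();
    form.append("file", blob, "signed.pdf");
    // POST to your AJAX callback / ORDS endpoint (URL is a placeholder)
    fetch("/your/ajax/callback", { method: "POST", body: form });
});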
I would also suggest reaching our support directly. We offer blazing-fast assistance, and the questions are handled directly by the Web team: https://pspdfkit.com/support/request/.

Generate a Large File in S3 with .NET

I would like to generate a big file (several TB) in a special format using my C# logic and persist it to S3. What is the best way to do this? I could launch an EC2 node, write the big file to EBS, and then upload it from EBS into S3 using the S3 .NET client library.
Can I instead stream the file content directly to S3 as I generate it in my code, until generation is done, and avoid the out-of-memory issues that come with such a large file? I can see the code below helps with a stream, but it sounds like the stream has to be filled up in advance. I obviously cannot hold that amount of data in memory, and I also do not want to save it to disk as a file first.
// "ms" is a Stream (e.g. a MemoryStream) that must already hold the full payload
PutObjectRequest request = new PutObjectRequest();
request.WithBucketName(BUCKET_NAME);
request.WithKey(S3_KEY);
request.WithInputStream(ms);
s3Client.PutObject(request);
What is my best bet for generating this big file and streaming it to S3 as I generate it?
You can certainly upload any file up to 5 TB; that's the limit. I recommend using the streaming and multipart PUT operations. Uploading a 1 TB file could easily fail partway through, and you'd have to do it all over, so break it up into parts as you store it. Also be aware that if you need to modify the file, you have to download it, modify it, and re-upload it. If you plan on modifying the file at all, I recommend trying to split it up into smaller files.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/UploadingObjects.html
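As a sketch of that advice with the AWS SDK for .NET multipart API (the bucket, key, and chunk generator are placeholders): generate a buffer at a time, upload each buffer as one part, and complete the upload at the end, so the full file never sits in memory or on disk.

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class S3StreamingUpload
{
    static async Task UploadGeneratedFileAsync(IAmazonS3 s3, string bucket, string key)
    {
        var init = await s3.InitiateMultipartUploadAsync(
            new InitiateMultipartUploadRequest { BucketName = bucket, Key = key });
        var etags = new List<PartETag>();
        int partNumber = 1;

        // GenerateNextChunk() stands in for your C# generation logic; each call
        // should return at least 5 MB (the S3 minimum part size) and null when done.
        for (byte[] chunk = GenerateNextChunk(); chunk != null; chunk = GenerateNextChunk())
        {
            using (var ms = new MemoryStream(chunk))
            {
                var part = await s3.UploadPartAsync(new UploadPartRequest
                {
                    BucketName = bucket,
                    Key = key,
                    UploadId = init.UploadId,
                    PartNumber = partNumber,
                    InputStream = ms,
                    PartSize = chunk.Length
                });
                etags.Add(new PartETag(partNumber++, part.ETag));
            }
        }

        await s3.CompleteMultipartUploadAsync(new CompleteMultipartUploadRequest
        {
            BucketName = bucket,
            Key = key,
            UploadId = init.UploadId,
            PartETags = etags
        });
    }

    static byte[] GenerateNextChunk() => null; // stub: replace with the real generator
}

One sizing note: S3 allows at most 10,000 parts per upload, so for a multi-TB object the part size has to be well above the 5 MB minimum (for example, roughly 1 GB parts for a 5 TB file).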

Write files to S3 through Java

I have a program that takes input from S3, generates a text file, and then sends it to the mapper class. I am unable to write the file to S3, from where the mapper can read it later. Now, I realize that we cannot write files to S3 directly, so I am trying to upload the generated text file to S3 using copyFromLocalFile(). However, I get a NullPointerException on the following line:
fs.copyFromLocalFile(true, new Path(tgiPath), mapIP);
I am creating the text file in the main function, so I am not sure where exactly it is being created. The only cause of the NullPointerException that I can think of is that the text file is not being written to the local disk. So my question is: how do I write files to the local disk? If I just specify the file's name when creating it, where is it created, and how do I access it?
Have a look at Jets3t
This seems to be exactly what you need.
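For reference, a minimal Jets3t upload sketch (the credentials, bucket, and file name are placeholders):

import java.io.File;
import org.jets3t.service.S3Service;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.model.S3Object;
import org.jets3t.service.security.AWSCredentials;

public class S3Upload {
    public static void main(String[] args) throws Exception {
        AWSCredentials creds = new AWSCredentials("ACCESS_KEY", "SECRET_KEY");
        S3Service s3 = new RestS3Service(creds);
        // S3Object(File) reads the local file and computes its MD5 hash
        S3Object obj = new S3Object(new File("output.txt"));
        s3.putObject("my-bucket", obj);
    }
}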
Jets3t is awesome, but I am using Google App Engine, and it doesn't work there because of threading limitations.
I banged my head against the wall until I came up with a solution that worked on App Engine by combining a bunch of existing libraries: http://socialappdev.com/using-amazon-s3-with-google-app-engine-02-2011

Difficulty testing a CSV upload, because Rails thinks it should use the file to populate test data

I have a case where I want to test a file upload, but I'm having problems breaking a circle of pain.
We check that the uploaded file is a CSV file, so the file must have the extension .csv.
To test, we need to use ActionDispatch::TestProcess#fixture_file_upload, and we place the test file in the test/fixtures/files directory.
Now when I try to run a test, Rails sees this CSV file in the subfolder of fixtures and thinks it should use the file to populate some table. Which is wrong.
Is there any way I can tell Rails not to try to use this file for populating a table?
I can't rename the extension, as that is what I'm trying to test.
I can't move the file out of fixtures, as then fixture_file_upload won't work.
I can't seem to tell Rails to leave this file alone when populating fixture data.
:(
Well, the answer I've come up with so far is to move the files out of the fixtures directory and to prepend the equivalent of "../" to the filename (in the fixture_file_upload parameter) to make it back down a level.
Not exactly a good fix, so I would still like to hear whether it's possible to tell Rails not to load the CSV file as table data.
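For what it's worth, that workaround looks roughly like this (the test/files directory, route, and params are hypothetical):

require "test_helper"

class CsvUploadTest < ActionDispatch::IntegrationTest
  include ActionDispatch::TestProcess

  test "accepts a csv upload" do
    # The file lives in test/files, NOT test/fixtures, so the fixture
    # loader never sees it; "../" backs out of the fixture path.
    file = fixture_file_upload("../files/sample.csv", "text/csv")
    post imports_path, params: { import: { file: file } }
    assert_response :success
  end
end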