Rails 3 deferred FTP upload - ruby-on-rails-3

I need a way to implement deferred FTP uploads, to different servers, in a Rails3 application. This will be the scenario:
The user builds a folder full of files and subfolders, with a simple Rails3 CMS (DONE)
When the user finishes his work, he clicks a deploy button.
The system takes control and stores the user's request.
The system then gives control back to the user, so he can work on other stuff.
At the same time the system initiates 10 FTP uploads of the same folder.
When an upload ends it will store its status somewhere.
The user can see the deployment status, at any time, by going on a specific page.
Uploaded folders will range from 600 MB to 1 GB in size. They will contain PNG images, small MP4 movies, and XML files.
The web server and all the FTP servers will be on the same network, same subnet. No need for extra security for now.
I'm completely new to asynchronous or delayed jobs. The application will have just one or two users: no need to handle a lot of deploy requests at the same time.
How can I accomplish this task? If you need more info please ask in the comments.

Once you have delayed_job set up, you can mark a method to run in the background while you go about your business. In this case, the deploy method will always run in the background, because it is set by handle_asynchronously.
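For reference, a typical delayed_job setup on Rails 3 (assuming the ActiveRecord backend) looks something like this:
# Gemfile
gem 'delayed_job_active_record'

# then create the jobs table and start a worker:
#   rails generate delayed_job:active_record
#   rake db:migrate
#   rake jobs:work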
class UploadStatus < ActiveRecord::Base
  def deploy
    # write your FTP loop here
    # periodically update this model in the db with the status
  end
  handle_asynchronously :deploy
end
Now, you can just call @upload_status.deploy and it will run in the background.
You could also write a job method, but I think that it makes more sense in an ActiveRecord class because you will be updating the deploy status.
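To flesh out the deploy method, here is a minimal sketch of the FTP loop using Net::FTP from the standard library. The host list, credentials, and the local_folder, remote_root, and completed_servers attributes are all hypothetical placeholders for whatever your CMS actually stores:
require 'net/ftp'
require 'find'

class UploadStatus < ActiveRecord::Base
  # Hypothetical list of your target FTP hosts (you mention 10).
  SERVERS = %w[ftp1.example.local ftp2.example.local]

  def deploy
    SERVERS.each do |host|
      Net::FTP.open(host, 'user', 'secret') do |ftp|
        # Walk the local folder and mirror it on the server.
        Find.find(local_folder) do |path|
          remote = path.sub(local_folder, remote_root)
          if File.directory?(path)
            begin
              ftp.mkdir(remote)
            rescue Net::FTPPermError
              # Directory probably exists already; ignore.
            end
          else
            ftp.putbinaryfile(path, remote)
          end
        end
      end
      # Record progress so the status page can display it
      # (completed_servers is a hypothetical integer column).
      update_attribute(:completed_servers, completed_servers + 1)
    end
  end
  handle_asynchronously :deploy
end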

Related

How to save PDF from HTML in Azure Functions

I'm developing an application which will have a web crawler for some sites.
The application will trigger an Azure Function by URL, where the crawler will start its work.
So far, so good, but we'll have to save some evidence that the crawler passed through the site. We're thinking of saving a PDF of the page the crawler visited, but since Azure Functions doesn't have GDI+, this won't work with Selenium or PhantomJS.
A different approach could be to download the HTML content and somehow save this HTML string (with all its JS and CSS dependencies) as a PDF file.
I'd like a library that can work with Azure Functions to take a screenshot of some URL (or HTML string) and save it to PDF.
Thanks.
Unfortunately the App Service sandbox, whose rules Azure Functions lives by, is going to block most GDI+ API calls. We have had success with one third-party library (ByteScout) for some PDF generation needs, but I think in your case that type of operation is explicitly blocked. You can find more details here: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#win32ksys-user32gdi32-restrictions
There is no workaround that I'm aware of because at the end of the day most of these solutions are relying on GDI+ in the underlying OS (directly or indirectly).
Your only real option is to offload that workload to a virtual machine without the API restriction. That could take the form of a dedicated VM, or something like an Azure Container Instance whose life-cycle you can manage more dynamically as needed. We do something similar today: we have a message queue being monitored on a VM, and our Azure Function drops the request into the queue for processing.
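As a rough sketch of that queue hand-off (shown in Ruby with the azure-storage-queue gem purely for illustration; a C# function would more likely use a queue output binding), with hypothetical account details and queue name:
require 'azure/storage/queue'

# The VM-side worker polls this same queue and runs the actual
# PDF/screenshot generation outside the sandbox.
queue = Azure::Storage::Queue::QueueService.create(
  storage_account_name: 'myaccount',
  storage_access_key:   ENV['AZURE_STORAGE_KEY']
)

queue.create_queue('pdf-requests')
queue.create_message('pdf-requests', { url: 'https://example.com' }.to_json)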

First time mass-emailing using Sendgrid with rails

I am running a Rails app on Heroku and would like to send an email to 160 users. This is the first time I am doing this, so I would like to verify whether the method below will lead to a successful outcome.
SendGrid is all set up, and I have a controller that executes the following:
@users = User.all
@users.each do |u|
  Email.send_email(u).deliver
end
I am assuming that since the number of recipients is relatively low I would be able to get by without using delayed_job or some other background processing.
SendGrid actually makes it easy to send out emails without having to use a background worker. You can do it using the X-SMTPAPI header and setting an array of email addresses in the to field. For example:
X-SMTPAPI: {
  "to": ["john.doe@example.com", "jackson@example.com", "freddy@example.com"]
}
In this example, each of these three emails will receive a separate copy of the email. No background workers, no complexity.
There's a gem called sendgrid that does a good job of adding some useful helpers to action mailer. Have a look at the "multiple recipients" section of the README
https://github.com/stephenb/sendgrid
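If you'd rather not add the gem, you can also set the header by hand in an ActionMailer mailer; a minimal sketch (the mailer and method names here are placeholders):
class Email < ActionMailer::Base
  default from: 'noreply@example.com'

  def bulk_announcement(users)
    # SendGrid fans this single message out to every address in the
    # X-SMTPAPI "to" array; recipients never see each other.
    headers['X-SMTPAPI'] = { to: users.map(&:email) }.to_json
    mail(to: 'noreply@example.com', subject: 'Announcement')
  end
end

# One delivery call covers all 160 users:
# Email.bulk_announcement(User.all).deliver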
I would advise investing the time in some background processing anyway, as this could be hit or miss depending on the emailing service.

How to work with Dragonfly's before_serve block to watermark pdf files

I'm trying to serve PDF files that are watermarked with a user's email address and a timestamp at download time, using the Dragonfly gem, and I'm having a bit of trouble working with the before_serve block. What I'm not able to work out is how to use the job object Dragonfly passes to the block, and how to apply a custom processor. Calling process on the job object directly doesn't appear to run the processor:
app.configure do |config|
  config.server.before_serve do |job, env|
    user = # user record grabbed from database
    job.process(:watermark, user: user)
  end
end
However, if I call process on the Dragonfly object returned by the Rails model, the PDF is processed correctly, but I'm not sure how to instruct Dragonfly on what to actually send to the browser:
app.configure do |config|
  config.server.before_serve do |job, env|
    user = # user record grabbed from database
    report = # report grabbed from database
    report.pdf.process(:watermark, user: user)
  end
end
So in both cases the same file is returned to the browser: the original, non-watermarked version. Maybe I'm trying to get Dragonfly to do something it's not supposed to do in a before_serve block? My alternative implementation would be to block direct file downloads and do all of this in a Rails controller instead. I'd like to use Dragonfly's before_serve block if possible, as I've already added all of the authentication within the block to make sure users are allowed to download the file.
Thanks
After further investigation it would appear the best solution is to block all direct file access in the before_serve block and use a standard rails controller to watermark the file and present it for download.
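A minimal sketch of such a controller, assuming a hypothetical Report model with a pdf accessor and the watermark processor from the question (Dragonfly jobs expose the processed content via data):
class ReportsController < ApplicationController
  def download
    report = Report.find(params[:id])

    # Run the custom processor explicitly; process returns a new job
    # rather than mutating the attachment in place.
    job = report.pdf.process(:watermark, user: current_user)

    send_data job.data,
              filename: "report-#{report.id}.pdf",
              type: 'application/pdf',
              disposition: 'attachment'
  end
end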

How to update file upload messages using backbone?

I am uploading multiple files using javascript.
After I upload the files, I need to run several processing functions.
Because of the processing time that is required, I need a UI on the front telling the user the estimated time left of the entire process.
Basically I have 3 functions:
/upload - this is an endpoint for uploading the files
/generate/metadata - this is the next endpoint that should be triggered after /upload
/process - this is the last endpoint. Should be triggered after /generate/metadata
This is basically how I expect the screen to look.
Information such as the percentage remaining and time left should be displayed.
However, I am unsure whether to let the server supply that information, or to do a hackish estimate solely in JavaScript.
I would also need to update the screen with messages such as:
"Currently uploading" if I am at function 1,
"Generating metadata" if I am at function 2,
"Processing..." if I am at function 3.
Function 2 only occurs after the successful completion of 1.
Function 3 only occurs after the successful completion of 2.
I am already using q.js promises to handle some parts of this, but the code has gotten scarily messy.
I recently came across Backbone, and it allows a structured way to handle single-page-app behavior, which is what I wanted.
I have no problems with the server side returning JSON responses for the success or failure of the endpoints.
I was wondering what would be a good way to implement this using Backbone.js.
You can use a "progress" file or DB entry which stores the state of the backend process. Have your backend process periodically update this file. For example, write this to the file:
{"status": "Generating metadata", "time": "3 mins left"}
After the user submits the files, have the frontend start polling a backend progress endpoint using a simple ajax call and setTimeout. The progress function will simply open this file, grab the JSON-formatted status info, and then update the frontend progress bar.
You'll probably want the ajax call to be attached to your model(s). Have your frontend view watch for changes to the status and update accordingly (e.g. a progress bar).
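On the server side (sketched here in Ruby on a Rails-style backend; the file location, controller, and helper names are placeholders), the two pieces are a writer the background steps call and a tiny endpoint for the poller:
class ProgressController < ApplicationController
  PROGRESS_FILE = Rails.root.join('tmp', 'progress.json')

  # Called by each background step (/upload, /generate/metadata,
  # /process) as it starts, so the frontend sees the current state.
  def self.write_status(status, time_left)
    File.write(PROGRESS_FILE, { status: status, time: time_left }.to_json)
  end

  # Polled by the frontend via ajax + setTimeout.
  def show
    render json: File.read(PROGRESS_FILE)
  end
end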
Long Polling request:
Polling request for updating Backbone Models/Views
Basically, when you upload a file you will assign a "FileModel" to each given file. The FileModel will start a long-polling request every N seconds, until it gets the status "complete".

Gaining Root Access w/ Elevated Helper & SMJobBless

I'm working on something that needs to install files periodically into a folder in /Library.
I understand that in the past I could have used one of the Authenticate methods but those have since been deprecated in 10.7.
What I've understood from my reading so far:
I should create a helper that somehow gets authenticated and have that helper do all of the moving tasks. I've taken a look at some of the sample code, including some involving XPC and one called Elevator, but I'm a bit confused.
A lot of it seems to deal with setting up some sort of client/server model, but I'm not sure how this would translate into actually installing my files into the correct directories. Most of the examples just pass strings around.
My question simply: How can I create my folder in /Library programmatically and periodically write files to it while only prompting the user for a password ONCE and never again? I'm really not sure how to approach this and there doesn't seem to be much documentation.
You are correct that there isn't much documentation for this. You'll basically write another app, the helper app, which will get installed with SMJobBless(). Not surprisingly, the tricky part here is the code signing. The least obvious part for me was that the SMAuthorizedClients and SMPrivilegedExecutables entries in the Info.plist files of each app depend on the identity/certificate that you used to sign the app. There is also a trick with the compiler/linker for getting the Info.plist file compiled into the helper tool, which will be a single executable file rather than a bundle.
Once you get the helper app up and running, you have to devise a way to communicate with it, since these are two different processes. XPC is one option, perhaps the easiest. XPC is typically used with server processes, but what you are using here is the communication side of XPC only. Basically it passes dictionaries back and forth between the two apps. Create a standard format for the dictionary. I used @"action", @"source", and @"destination", with 3 different action values: @"filemove", @"filecopy", and @"makedirectory". Those are the 3 things my helper app can do, and I can easily add more if necessary.
The helper app will basically set up the XPC connection and event-handler machinery, then wait for a connection and commands. The commands will just be a dictionary, so you check for the appropriate keys/values and do whatever is needed.
I can provide more details and code if you need more help, but this question is 9 months old so I don't want to waste time giving you details you've already figured out.