Google transfer service error notification - amazon-s3

I've been looking everywhere and can't seem to find the answer. I set up a file transfer service between an S3 bucket and a Google Storage bucket. I know I can see the error messages by clicking on the file transfer, but I want to access the log so I can set up an email notification when an error occurs. Where can I find the log? Or is there another way to set up this email notification?

Google's Transfer Service does not currently have any mechanism for email/pubsub/etc. notifications about the progress of a job or errors it encounters.
Until such a feature exists, I think the closest available solution would be something based on the access logs or notifications directly from GCS or S3 (but that would include other traffic on the bucket, not just Transfer Service). E.g., for errors encountered when writing objects to GCS, you could analyze the access logs or the object change notifications.
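As a rough illustration of the access-log route (a sketch only, in Python with the google-cloud-storage client): it assumes access logging has already been enabled on the data bucket, the bucket names, log prefix, and SMTP details are placeholders, and the log field names should be double-checked against the GCS access-log documentation.

```python
# Sketch: scan GCS access logs for failed requests and email an alert.
# Assumes access logging is already enabled on the data bucket
# (e.g. `gsutil logging set on -b gs://my-log-bucket gs://my-data-bucket`).
# Bucket names, log prefix, and SMTP details below are placeholders.
import csv
import io
import smtplib
from email.message import EmailMessage

from google.cloud import storage

LOG_BUCKET = "my-log-bucket"          # bucket GCS writes usage logs to
LOG_PREFIX = "my-data-bucket_usage"   # default log-object prefix for the data bucket

def find_failed_requests():
    """Return (object, status) pairs for log entries with non-2xx status codes."""
    client = storage.Client()
    failures = []
    for blob in client.list_blobs(LOG_BUCKET, prefix=LOG_PREFIX):
        reader = csv.DictReader(io.StringIO(blob.download_as_text()))
        for row in reader:
            status = row.get("sc_status", "")
            if status and not status.startswith("2"):
                failures.append((row.get("cs_object", ""), status))
    return failures

def send_alert(failures):
    """Email a plain-text list of failed requests (SMTP host is a placeholder)."""
    msg = EmailMessage()
    msg["Subject"] = f"{len(failures)} failed requests on my-data-bucket"
    msg["From"] = "alerts@example.com"
    msg["To"] = "ops@example.com"
    msg.set_content("\n".join(f"{obj}: HTTP {status}" for obj, status in failures))
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    failed = find_failed_requests()
    if failed:
        send_alert(failed)
```

Run on a schedule (cron, etc.), this gives you a crude email alert until a proper notification feature exists, though it will also pick up failures from other traffic on the bucket.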

Related

Amazon SES Deliver to S3 bucket action fails sometimes

I have configured receipt rules on Amazon SES with a Deliver to S3 bucket action. I noticed, however, that this step sometimes fails and the email does not get delivered to S3. I tried sending it with an attachment that was 8 KB and that was fine, but when sending with an attachment of 115 KB it fails. The docs say the maximum size allowed is 40 MB, so I'm not sure what the problem is.
Does anyone have any clues, or know how I can debug SES rule failures?

Red5 stream audio from Azure Storage or Amazon S3

I'm wondering if it's possible to stream audio in Red5 from files stored in Azure. I am aware of how to manipulate the playback path via a custom file name generator (IStreamFilenameGenerator); our legacy Red5 webapp uses it. It would seem to me, though, that this path needs to be on the local Red5 server; is this correct?
I studied the example showing how to use Amazon S3 for file persistence and playback (https://goo.gl/7IIP28), and while the file recording + upload makes perfect sense, I'm just not seeing how the playback file name that is returned is streamed from S3. Tracing the StringBuilder appends/inserts, it looks like the filename is going to end up as something like {BucketLocation}/{SessionID}/{FileKey} ... this led me to believe that bucket.getLocation() on Line 111 was returning an HTTP/S endpoint URL, and Red5 would somehow be able to use it. I wrote a console app to test what bucket.getLocation() returned, and it only returns null for US servers, and EU for Europe. So I'm not even sure where/how this accesses S3 for direct playback. Am I missing something?
Again, my goal is to access files stored in Azure, but I figured the above Amazon S3 example would have given me a hint.
I totally understand that you cannot record directly to Azure or S3, the store locally + upload makes sense. What I am failing to see is how to stream directly from a blob cloud storage. If anyone has suggestions, I would greatly appreciate it.
Have you tried using Azure Media Services? I believe looking at their documentation will be a good start for your scenario.

Using Mule how to pull a file from an FTP site in response to an incoming VM event?

When I get a triggering event on an inbound VM queue I want to pull a file from an FTP site.
The problem is the flow needs to be triggered by the incoming VM message not the FTP file's availability.
I cannot figure out how to have what is essentially two inputs. I considered using a content enricher, but it seems to call an outbound endpoint. The Composite Source can have more than one input, but it runs when any one of its message sources triggers it, not when all of them have.
I am setting up an early alert resource monitoring of FTP, file systems, databases, clock skew, trading partner availability, etc. Periodically I would like to read a custom configuration file that tells what to check and where to do it and send a specific request to other flows.
Some connectors, like File and FTP, do not lend themselves to being triggered by an outside event. The database connector will let me select on the fly, but there is no analog for File and FTP.
It could be that I am just thinking about it in the wrong light, but I am a little stumped. I tried having the VM event trigger a script that starts a flow with an initial state of “stopped”, and that flow pulls from an FTP site, but VM seems to not play well with starting and stopping flows, and it begins to feel like a 'cluttered' solution.
Thank you,
- Don
For this kind of scenario, you should use the Mule requester module.
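Roughly, the requester module lets you fetch a resource (file, FTP, JMS, etc.) in the middle of a flow, so the VM message stays the trigger. A rough Mule 3 XML sketch of the idea follows; the endpoint names, FTP details, and the exact requester attributes are assumptions to verify against the module's documentation.

```xml
<!-- Rough sketch only: FTP host/credentials, queue path, and requester-module
     attributes are placeholders/assumptions to check against the module docs. -->
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
      xmlns:ftp="http://www.mulesoft.org/schema/mule/ftp"
      xmlns:mulerequester="http://www.mulesoft.org/schema/mule/mulerequester"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd
        http://www.mulesoft.org/schema/mule/ftp http://www.mulesoft.org/schema/mule/ftp/current/mule-ftp.xsd
        http://www.mulesoft.org/schema/mule/mulerequester http://www.mulesoft.org/schema/mule/mulerequester/current/mule-mulerequester.xsd">

    <mulerequester:config name="muleRequesterConfig"/>

    <!-- Global FTP endpoint the requester pulls from on demand -->
    <ftp:endpoint name="sourceFtp" host="ftp.example.com" port="21"
                  user="ftpuser" password="secret" path="/outbox"/>

    <flow name="pullFileOnVmEvent">
        <!-- The incoming VM message is the trigger, not the file's availability -->
        <vm:inbound-endpoint path="resource.check" exchange-pattern="one-way"/>
        <!-- Fetch a file from the FTP endpoint in the middle of the flow -->
        <mulerequester:request config-ref="muleRequesterConfig" resource="sourceFtp"/>
        <logger level="INFO"
                message="Pulled #[message.inboundProperties['originalFilename']] from FTP"/>
    </flow>
</mule>
```

This keeps the VM queue as the only message source and avoids starting and stopping a second flow.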

Can I easily limit which files a user can download from an Amazon S3 server?

I have tried looking for an answer to this but I think I am perhaps using the wrong terminology so I figure I will give this a shot.
I have a Rails app where a company can have an account with multiple users, each with various permissions etc. Part of the system will be the ability to upload files, and I am looking at S3 for storage. Is there a way to say that users from Company A can only download the files associated with that company?
I get the impression I can't, unless I restrict the downloads to my deployment server's IP range (which will be Heroku) and then feed the files through a controller and a send_file() call. This would work, but then I am reading data from S3 to Heroku and then back to the user, versus directly from S3 to the user.
If I went with the send_file method, could I close off my S3 bucket to the outside world and have my Heroku app send the file directly?
A less secure idea I had was to create a unique slug for each file and store it under that name to prevent random guessing of file names, e.g. http://mys3server/W4YIU5YIU6YIBKKD.jpg. This would be quick and dirty but not 100% secure.
Amazon S3 buckets support policies for granting or denying access based on different conditions. You could probably use those to protect your files from different user groups. Have a look at the policy documentation to get an idea of what is possible. After that, you can switch over to the AWS Policy Generator to generate a valid policy depending on your needs.
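As a hedged sketch of the idea (Python with boto3; the bucket name, key prefix, and IAM ARN are made-up placeholders, and it assumes each company can be mapped to an IAM user or role): store each company's files under its own key prefix and attach a policy that only lets that company's principal read that prefix.

```python
# Sketch: one key prefix per company, plus a bucket policy that only lets that
# company's IAM principal read its prefix. Bucket name, prefix, and ARN are
# placeholders; this assumes each company maps to an IAM user or role.
import json
import boto3

BUCKET = "my-app-uploads"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CompanyAReadsOnlyItsOwnFiles",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/company-a"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/company-a/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

If your users are app-level accounts rather than IAM principals, the policy approach alone won't distinguish them, and you would be back to proxying through your controller or something similar.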

Correct Server Schema to upload pictures in Amazon Web Services

I want to upload pictures to AWS S3 from the iPhone. Every user should be able to upload pictures, but each user's pictures must remain private to them.
My question is very simple. Since I have no real experience with servers I was wondering which of the following two approaches is better.
1) Use some kind of token vending machine system to grant the user access to S3 so the app can upload directly.
2) Send the picture to a servlet on EC2 and have the virtual server place it in S3.
Edit: I would also need to retrieve the pictures. Should I do that directly or through the servlet?
Thanks in advance.
Personally, I don't think it's a good idea to use a token vending machine to upload the data directly from the iPhone, because it's much harder to control the access privileges, etc. If you have the chance, use EC2 and a servlet, but that will add costs to your solution.
Also, when dealing with S3 you need to take into consideration that some files are not available right after you save them. Look at this answer from the S3 FAQ.
For retrieving data directly from S3, you will need to deal with the privileges issue again. Check the access model for S3, but again, it's probably easier to manage access to non-public files via the servlet. The good news is that there is no data transfer charge for data transferred between EC2 and S3 within the same region.
Another important point in favor of the latter solution is the high performance in handling load and the network speeds within the Amazon ecosystem. With direct uploads, the client would have to handle complex asynchronous operations such as multipart uploads instead of focusing on the presentation and rendering of the image.
The servlet hosted on EC2 would be way more powerful than what you can do on your phone.
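To make the servlet option more concrete, here is a minimal server-side sketch (written in Python with boto3 rather than a Java servlet, purely for brevity; the bucket name and key scheme are placeholders): the phone talks only to the application server, which holds the S3 credentials and keeps each user's pictures under their own key prefix.

```python
# Minimal sketch of the server-side upload/retrieve idea (option 2): the phone
# never sees S3 credentials; the server stores each user's pictures under their
# own key prefix. Bucket name and key scheme are placeholders, and a real
# deployment would add authentication and input validation.
import uuid
import boto3

BUCKET = "my-app-user-pictures"
s3 = boto3.client("s3")

def store_picture(user_id: str, image_bytes: bytes) -> str:
    """Write the uploaded picture under the user's prefix and return its key."""
    key = f"{user_id}/{uuid.uuid4().hex}.jpg"
    s3.put_object(Bucket=BUCKET, Key=key, Body=image_bytes, ContentType="image/jpeg")
    return key

def fetch_picture(user_id: str, key: str) -> bytes:
    """Return the picture only if the key belongs to the requesting user."""
    if not key.startswith(f"{user_id}/"):
        raise PermissionError("key does not belong to this user")
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
```

The same per-user prefix check covers retrieval, which is why routing downloads through the server keeps the privilege handling simple.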