My org uses Tumbleweed Secure File Transport to transfer files to different locations.
I have a requirement to move files to S3 but am not sure whether this is possible using Tumbleweed.
The way my org currently does it is to SFTP the files across to an EC2 instance, which then transfers them to S3.
Does anyone know if Tumbleweed can send files directly to S3?
Thanks in advance.
Uploading to S3 is secure, and there are numerous tools that implement the API: the AWS CLI, the SDKs, and third-party applications. Even if Tumbleweed doesn't support it, you can find a tool that does.
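For example, a minimal upload with the Python SDK (boto3) could look like the sketch below; the bucket name and file path are just placeholders.

    import boto3

    # Uses credentials from the environment or ~/.aws/credentials
    s3 = boto3.client("s3")

    # Upload a local file to a (hypothetical) bucket and key
    s3.upload_file("report.csv", "my-example-bucket", "incoming/report.csv")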
I am creating an Angular 6 frontend application. My backend APIs are written in .NET. Assume the application is similar to https://www.amazon.com/.
My question relates only to deploying the frontend portion on AWS. A large number of users, with a variable traffic pattern, is expected on my portal. I thought of using AWS Elastic Beanstalk as a PaaS web server.
Can AWS S3/ELB be used instead of PaaS Elastic Beanstalk without any limitations?
I'm not 100% sure what you mean by combining an Elastic Load Balancer with S3. I think you may be confused as to the purpose of the ELB, which is to distribute requests across multiple servers (e.g. NodeJS servers); it cannot be used with S3, which is already highly available.
There are numerous options when serving an Angular app:
You could serve the files using a NodeJS app, but unless you are doing server-side rendering (using Angular Universal), I don't see the point, because you are just serving static files (files that don't get stitched together by a server, as they would with PHP). It is more complicated to deploy and maintain a server, even using Elastic Beanstalk, and it is probably difficult to get the same performance as you could with the other setups below.
What I suspect most people would do is configure an S3 bucket to host and serve the static files of your Angular app (https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html). You basically configure your domain name to resolve to the S3 bucket's URL. This is extremely cheap, as you are not paying for a server that runs constantly; you only pay the small storage cost plus a data transfer fee that is directly proportional to your traffic.
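As a rough sketch of what that setup involves (the bucket and file names here are made up), you could enable website hosting and upload the built Angular files with boto3:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-angular-site"  # hypothetical bucket name

    # Enable static website hosting; serving index.html as the error document
    # keeps Angular's client-side routing working on deep links.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "index.html"},
        },
    )

    # Upload one of the built files with the correct content type.
    s3.upload_file(
        "dist/index.html", bucket, "index.html",
        ExtraArgs={"ContentType": "text/html"},
    )

In practice you would upload the whole dist/ folder, e.g. with aws s3 sync.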
You can further improve on the S3 setup by creating a CloudFront distribution that uses your S3 bucket as its origin (the location it gets files from). When you configure your domain name to resolve to your CloudFront distribution, a user's request no longer goes straight to the S3 bucket (which could be in a region on the other side of the world and therefore slower); instead it is directed to the closest "edge location", which is much nearer to your user and checks whether the files are cached there first. It is basically a global content delivery network for your files. This is a bit more expensive than S3 on its own. See https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/.
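If you go the CloudFront route, the distribution can also be created programmatically. The sketch below uses boto3 with made-up names and leaves out details like certificates and domain aliases:

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")
    origin_domain = "my-angular-site.s3.amazonaws.com"  # hypothetical bucket endpoint

    cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": str(time.time()),  # any unique string
            "Comment": "Angular app distribution",
            "Enabled": True,
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "angular-s3-origin",
                    "DomainName": origin_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "angular-s3-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "ForwardedValues": {
                    "QueryString": False,
                    "Cookies": {"Forward": "none"},
                },
                "MinTTL": 0,
            },
        }
    )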
We want to store our files somewhere on a storage server, and some of them need to be password protected. S3 is a very good option since:
it can be password protected.
we can access it programmatically (say, we can upload or download files from Java).
Although the storage is cheap, the download/upload price on S3 is not that cheap, so we are looking for alternatives. One option is to use our own servers. Is there any way to simulate similar behavior with a personal server?
You can consider using this: https://www.minio.io/
It's an object storage server that is compatible with Amazon S3.
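Since MinIO speaks the S3 API, existing S3 code should mostly work by pointing the client at your own server. Here is a sketch with boto3, where the endpoint, credentials, and bucket name are placeholders for your own deployment (the same endpoint override exists in the AWS SDK for Java):

    import boto3

    # Point the standard S3 client at a self-hosted MinIO server instead of AWS.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",     # your MinIO endpoint
        aws_access_key_id="MINIO_ACCESS_KEY",     # credentials you configured
        aws_secret_access_key="MINIO_SECRET_KEY",
    )

    s3.upload_file("report.pdf", "private-files", "report.pdf")
    s3.download_file("private-files", "report.pdf", "/tmp/report.pdf")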
I have been using the Transmit FTP program to access my Amazon S3 storage buckets. I've just been reading on here that this isn't very secure.
I'm not a command-line person, as you can probably tell, so what would be the best way for me to access my S3 storage on my Mac?
I'm using it to store image files that I make available for download on my website.
Thanks
FTP isn't secure, but it sounds like you are confusing this fact with the fact that you are using a multiprotocol client to access S3, and that client also happens to support FTP.
S3 cannot be directly accessed using the FTP protocol, so the client you are using can't actually be accessing S3 using FTP... hence, your security-related concern appears to be unfounded.
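The client is talking to S3's HTTPS API under the hood. If you ever want to hand out time-limited download links for those images without any client at all, a presigned URL is one option; here is a small boto3 sketch with a made-up bucket and key:

    import boto3

    s3 = boto3.client("s3")

    # Generate a link that allows downloading this object for one hour.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-image-bucket", "Key": "photos/cat.jpg"},
        ExpiresIn=3600,
    )
    print(url)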
I'm trying to transfer buckets between Amazon S3 accounts. I see there is s3cmd for Unix and CloudBerry Explorer for Windows. I have tested both, but I'm not sure whether the transfer (both accounts are in the same region) is server-side or client-side. Can this be done server-side?
s3cmd
CloudBerry Explorer
From this question: Best way to move files between S3 buckets?
I am also checking this: http://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectUsingRuby.html, since I like Ruby :)
Edit: Also, in CloudBerry Explorer I have checked the server-side transfer option, and I am using the sync option, but I am still not sure whether this is entirely server-side or partly client-side.
Yes, if these tools use the PUT Object - Copy operation, the file duplication is done entirely server-side.
You can read up on the details in the API docs.
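For example, a server-side copy with boto3 issues that same Copy call, so the object data never travels through your machine. The bucket and key names below are placeholders, and objects larger than 5 GB need a multipart copy instead:

    import boto3

    s3 = boto3.client("s3")

    # The duplication happens entirely within S3; only the API request leaves your machine.
    s3.copy_object(
        CopySource={"Bucket": "source-bucket", "Key": "backups/db.dump"},
        Bucket="destination-bucket",
        Key="backups/db.dump",
    )

For a cross-account copy, the credentials used need read access on the source bucket and write access on the destination bucket.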
I'm hoping to reach someone with experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server, and on this server we have an automatic 404 redirect through Apache so that, if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image.
We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's Cloud Files), and I'm wondering if anyone has had any success replicating this behavior on a cloud storage service, and if so, how they did it.
The Amazon instances are just like normal hosted server instances once they are up and running, so your Apache configuration could presumably be identical to what you currently have.
Your only issue will be where to store the images. The Amazon Elastic Block Store makes it easy to mount a persistent drive (whose snapshots are stored in S3). You could store all your images on such a volume and use it with your Apache instance.