I'm developing a website on EC2, with a LAMPP server in the default /opt/lampp folder. The thing is that I store all the images related to the website, including users' profile images, there (/opt/lampp/htdocs), and I doubt this is the most efficient way. I keep links to the images in my MySQL server.
I actually have no idea what Amazon EBS and Amazon S3 are; how can I utilize them?
EBS is like an external USB hard drive. You can easily access its content from the filesystem (e.g. mounted under /mnt/).
S3 is more like API-based cloud storage. You'll have much more work to integrate it into your system.
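To make the difference concrete, here is a minimal sketch of the same write done both ways, using the AWS SDK for Java v1 (bucket name and paths are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class StorageComparison {
    public static void main(String[] args) throws IOException {
        // EBS: the volume is mounted like a local disk, so plain file I/O works
        Files.write(Paths.get("/mnt/data/avatar.png"), new byte[]{ /* image bytes */ });

        // S3: every operation goes through the HTTP API via an SDK client
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.putObject("my-site-images", "avatars/avatar.png", new File("/mnt/data/avatar.png"));
    }
}
```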
You have a pretty good summary here:
http://www.differencebetween.net/technology/internet/difference-between-amazon-s3-and-amazon-ebs/
Google has a lot of info about this.
We're currently moving our setup from a few VPSes to DigitalOcean. Our setup includes a staging site that has a replica of our live DB; however, all emails are caught using MailCatcher, and it has its own storage location for assets, as we don't want to delete all production assets when we delete a user on the staging system.
Therefore the question is: is there a way to replicate all the assets from one DO Space to another? Basically, the idea is that if someone uploads an image on the prod site, it will be uploaded to the prod media bucket and should be replicated to the stage media bucket. The same goes for deleting a file in the prod media bucket (though I could live without this).
The only option I've found to achieve this so far is to run Rclone on a VPS via a cron job, as described by DO here; however, I hope there is a more "elegant" solution.
I don't think Spaces can do this natively the way S3's bucket replication does, so rclone or something custom is your best bet; you can always open-source it to help the community.
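If you do go the custom route: Spaces speaks the S3 API, so a small scheduled job with the AWS SDK can do a naive one-way copy. A minimal sketch, assuming illustrative bucket names and the nyc3 endpoint, with credentials coming from the default provider chain:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class SpacesReplicator {
    public static void main(String[] args) {
        // DO Spaces is S3-compatible, so we only swap the endpoint
        AmazonS3 spaces = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://nyc3.digitaloceanspaces.com", "nyc3"))
                .build();

        // naive one-way sync: copy everything from prod to stage
        ObjectListing listing = spaces.listObjects("prod-media");
        while (true) {
            for (S3ObjectSummary obj : listing.getObjectSummaries()) {
                // assumes Spaces' S3 compatibility covers server-side copyObject
                spaces.copyObject("prod-media", obj.getKey(), "stage-media", obj.getKey());
            }
            if (!listing.isTruncated()) break;
            listing = spaces.listNextBatchOfObjects(listing);
        }
    }
}
```

A real job would compare ETags or timestamps instead of blindly re-copying everything; that incremental diffing is exactly what rclone gives you for free.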
Currently I'm using shared storage (Azure File Storage) to hold profile pictures, company logos, and some custom Python scripts uploaded by admins. My REST services run in a Docker Swarm cluster where all the nodes have access to the shared location. I save the files to that location, create a URL for each file, and serve it as a static resource through my nginx reverse proxy/load balancer (roughly the flow sketched below). Are there any drawbacks to this kind of design, and how can I make it better?
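For reference, the upload/serve flow is roughly this (paths and hostname are illustrative):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ProfilePictureStore {
    // the Azure file share is mounted at the same path on every swarm node
    private static final Path SHARE = Paths.get("/mnt/shared/profile-pics");

    public static String save(String userId, InputStream upload) throws Exception {
        Path target = SHARE.resolve(userId + ".png");
        Files.copy(upload, target, StandardCopyOption.REPLACE_EXISTING);
        // nginx serves /mnt/shared/profile-pics under this URL prefix
        return "https://static.example.com/profile-pics/" + userId + ".png";
    }
}
```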
There are several ways to access, store, and manipulate files in Azure File Storage using the REST API:
The Azure File service offers four resources: the storage account, shares, directories, and files. Shares provide a way to organize sets of files and can also be mounted as an SMB file share hosted in the cloud.
More info here
When it comes to the design, it will depend on what kind of concerns your customers may have: slow connectivity, whether they are going to need these files permanently, etc.
I want to upload pictures to AWS S3 from an iPhone app. Every user should be able to upload pictures, but each user's pictures must remain private to them.
My question is very simple. Since I have no real experience with servers, I was wondering which of the following two approaches is better.
1) Use some kind of token vending machine system to grant the app access to the S3 bucket so it can upload directly.
2) Send the picture to a servlet on EC2 and have the virtual server place it in S3 storage.
Edit: I would also need to retrieve the pictures; should I do it directly or through the servlet?
Thanks in advance.
Personally, I don't think it's a good idea to use a token vending machine to upload the data directly from the iPhone, because it's much harder to control the access privileges, etc. If you have the chance, use EC2 and a servlet, but that will add cost to your solution.
Also, when dealing with S3 you need to take into consideration that some objects are not available right after you save them. Look at this answer from the S3 FAQ.
For retrieving data directly from S3 you will need to deal with the privileges issue again. Check the access model for S3; but again, it's probably easier to manage access to non-public files via the servlet. The good news is that there is no data transfer charge for data transferred between EC2 and S3 within the same region.
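That said, if the app does talk to S3 directly, pre-signed URLs are the usual mechanism for granting scoped, expiring access to private objects. A minimal sketch with the AWS SDK for Java v1 (bucket and key are illustrative):

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUrls {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        Date expiry = new Date(System.currentTimeMillis() + 15 * 60 * 1000); // 15 minutes

        // URL the phone can PUT the picture to, without holding AWS credentials
        URL upload = s3.generatePresignedUrl(
                new GeneratePresignedUrlRequest("user-pictures", "users/42/pic.jpg")
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiry));

        // URL the phone can GET the (private) picture from
        URL download = s3.generatePresignedUrl(
                new GeneratePresignedUrlRequest("user-pictures", "users/42/pic.jpg")
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiry));

        System.out.println(upload);
        System.out.println(download);
    }
}
```

Your servlet could generate these URLs and hand them to the phone, combining the access control of option 2 with the direct transfer of option 1.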
Another important point in favor of the latter solution is performance: the servlet benefits from the load handling and network speeds inside the Amazon ecosystem, whereas with direct uploads the client would have to handle complex asynchronous operations such as multipart uploads instead of focusing on presenting and rendering the image.
The servlet hosted on EC2 would be way more powerful than what you can do on your phone.
In my application, users can upload videos, and I want to keep them on the file system rather than in the database. If I use Amazon Web Services (AWS) with only one EC2 instance and EBS (for storage), it's fine.
But if I use auto-scaling or multiple EC2 instances, an uploaded video gets saved on one of the instances (on its attached EBS volume). The next time the user logs in (or, if session stickiness is not there and the user's next request goes to another EC2 instance), how will it access his video?
What is the best solution to this problem? Is using S3 the only solution? For that I can't simply use java.io.File; I think I will then have to use the AWS SDK API to access the uploaded videos, but won't that be slower?
Is there any better solution?
I would use Amazon S3. It's fast, relatively cheap and works well if you are using other Amazon Web Services.
You could upload to the EC2 instance first and use Simple Workflow Service to transfer the files centrally and automatically. This is also useful if you want to re-encode the video to multiple bit rates later.
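On the java.io.File concern: the SDK call is just an HTTP request per operation, and within a region the EC2-to-S3 link is fast. For large files like videos, TransferManager in the AWS SDK for Java v1 handles multipart uploads for you. A minimal sketch (bucket name and path are illustrative):

```java
import java.io.File;

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class VideoUploader {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.defaultTransferManager();

        // splits large files into parallel multipart uploads automatically
        Upload upload = tm.upload("my-videos", "uploads/video-123.mp4",
                new File("/tmp/video-123.mp4"));
        upload.waitForCompletion();

        tm.shutdownNow();
    }
}
```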
You need to set up a NAS. It can be NFS or something like GlusterFS. The latter is modern and scales well...
We want to be able to have a folder that can securely serve images across a cluster of web servers. What's the best way to handle this with Amazon Web Services (AWS)? Amazon S3? Amazon Elastic Block Store (EBS)? Amazon CloudFront?
EDIT: Answer no longer needed...thanks.
I'm not sure what your main goal is, or whether you have read about the services you ask about, but I will try to explain them as far as I understand AWS and your choices:
S3 is a STORAGE (buckets holding objects, a sort of folder structure with metadata and access control per object)
EBS is a VOLUME (these are attached to an EC2 instance as an extra drive you can access like a local hard drive)
CloudFront is a WEB CACHE (you select which edge locations you want, point it at an S3 bucket, and Amazon replicates the content for you)
So we only need to figure out what you mean by "securely"; there are two options as I see it:
You can protect buckets in S3 or set access levels with accounts, e.g. "administrator access" only versus publicly readable (see the sketch after this list)...
You can store the data on an EBS volume and keep it there; then it is very secure and NOT public, but (I believe) shareable among the servers (I plan to check this out myself within the next week)
You cannot protect CloudFront data separately, as it's controlled by the bucket permissions from S3...
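On the first option, a minimal sketch of per-object access levels using canned ACLs in the AWS SDK for Java v1 (bucket and keys are illustrative):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;

public class BucketPermissions {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // keep an object private: only the owning account can read it
        s3.setObjectAcl("my-images", "internal/report.png", CannedAccessControlList.Private);

        // explicitly make a single object publicly readable
        s3.setObjectAcl("my-images", "public/logo.png", CannedAccessControlList.PublicRead);
    }
}
```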
Hope you can use this a little. I've not stated anything regarding SPEED or COST; that's for you to benchmark/test with your data requirements. :o)