What is the difference between S3 static web content and a web server? - amazon-s3

I am a little confused about choosing between a web server and S3 static web content. Which is better? Please explain the advantages and disadvantages of a web server, as well as of S3.

Amazon S3 is highly scalable. It supports some of the world's largest websites.
Data stored on Amazon S3 is replicated to multiple data centres, making the data highly resilient.
It is commonly used to store and serve static web content (e.g. style sheets, images, scripts) and is also used by companies around the world to store data for internal consumption (known as data lakes).
Compared to a web server, it is much simpler to use, more reliable, does not require engineering support (it's a managed service) and is likely to be cheaper, too.
However, please note that it only stores and serves static content. It cannot provide any back-end logic for applications. You would use a server or AWS Lambda functions to provide that functionality.
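To make the "simpler to use" point concrete, hosting a static site on S3 takes only a few CLI calls. This is a sketch: the bucket name and local path are hypothetical, and it assumes the AWS CLI is installed and configured with credentials.

```shell
# Create a bucket and enable static website hosting on it.
aws s3 mb s3://example-static-site
aws s3 website s3://example-static-site \
    --index-document index.html --error-document error.html

# Upload the site's static files and make them publicly readable.
aws s3 sync ./public s3://example-static-site --acl public-read
```

Note that on newer AWS accounts, public ACLs are blocked by default, so you may also need to relax the bucket's public-access block settings or serve the bucket through CloudFront instead.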

Related

Amazon Web Services actual storage procedure

What is the necessity of providing different storage types in AWS?
In which scenarios would you use S3/EBS/RDS in AWS?
Amazon S3 is an object store. It can store any number of objects (files), each up to 5 TB. Objects are replicated between Availability Zones (data centres) within a region. It is highly reliable, high-bandwidth, and fully managed, and it can also serve static websites without a server. It is a great place for storing backups or files that you want to share.
Amazon Elastic Block Store (EBS) is a virtual disk for Amazon EC2 instances, when a local file system is required.
Amazon RDS is a managed database. It installs and maintains a MySQL, PostgreSQL, Oracle or Microsoft SQL Server database as a managed service.
There are more storage services such as DynamoDB, which is a fully-managed NoSQL database service that provides fast and predictable performance with seamless scalability; DocumentDB that provides MongoDB-compatible storage; Amazon Neptune, which is a graph database; Amazon Redshift, which is a petabyte-scale data warehouse; ElastiCache, which is a fully-managed Redis or Memcached caching service; etc.
The world of IT has created many different storage options, each of which serves a different purpose or sweet-spot.
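To make the S3-versus-EBS distinction concrete: S3 objects are copied over an HTTP API from anywhere, while an EBS volume is attached to a single EC2 instance as a block device. The IDs, names, and paths below are hypothetical placeholders, and the commands assume the AWS CLI is configured with credentials.

```shell
# S3: object storage, accessed over HTTP from anywhere.
aws s3 cp backup.tar.gz s3://example-backups/backup.tar.gz

# EBS: block storage, attached to one EC2 instance as a disk.
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```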

GCP - CDN Server

I'm trying to architect a system on GCP for scalable web/app servers. My initial intention was to have one disk per web server group hosting the OS, and another hosting the source code, imagery, etc. My idea was to mount the OS disk on multiple VM instances so as to have exact clones of the servers, with one place to store PHP session files (so moving between different servers would be transparent and not cause problems).
The second idea was to mount a 2nd disk, containing the source code and media files, which would then be shared with 2 web servers, one configured as a CDN server and one with the main website and backend. The backend would modify/add/delete media files, and the CDN server would supply them to the browser when requested.
My problem arises when reading that Persistent Disk storage is only mountable on a single VM instance with read/write access; if it's needed on multiple instances, it can be mounted only with read-only access. I need one of the instances to have read/write access, with the others (possibly many) having read-only access.
Would you be able to suggest ways or methods on how to implement such a system on the GCP, or if it's not possible at all?
Unfortunately, it's not possible.
But you can create a Single-Node File Server and mount it as a read/write disk on other VMs.
GCP has documentation on how to create a Single-Node File Server.
An alternative to using persistent disks (which, as you said, allow only a single read/write mount or many read-only mounts) is to use Cloud Storage, which can be mounted through FUSE.
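As a sketch of the Cloud Storage + FUSE route (the bucket and mount names are hypothetical; this assumes gcsfuse is installed and the VMs have Cloud Storage access scopes): the backend VM mounts the bucket read/write, and the web/CDN instances mount the same bucket read-only.

```shell
# Create the shared bucket once.
gsutil mb gs://example-shared-assets

mkdir -p /mnt/assets

# On the backend VM: read/write mount.
gcsfuse example-shared-assets /mnt/assets

# On the web/CDN VMs: read-only mount of the same bucket.
gcsfuse -o ro example-shared-assets /mnt/assets
```

Keep in mind that gcsfuse layers file-system semantics over object storage, so latency is higher than a local disk; putting a CDN in front of the media files is still advisable.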

Amazon web services: Where to start

I am a recent grad and wanted to learn about building web applications using AWS. I have gone through the documentation and ran their sample Travel Log application successfully.
But I am still not clear about the terminology used. Can anyone explain to me the difference between Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), and Amazon SimpleDB in simple words?
I am looking to build a web app that has a sign-in page and lets people post some text. May I know which Amazon services would be required for me to build this app?
Thanks
Amazon Simple Storage Service (S3) is for storing static content, such as images, videos, or anything else you want to save. You can think of it as a hard drive for storage.
Amazon Elastic Compute Cloud (EC2) is basically your virtual machine; you can install whatever OS you want (Debian, Ubuntu, Fedora, CentOS, Windows Server, SUSE Enterprise). (If your application uses server-side processing, this will be its home.)
Amazon SimpleDB is a NoSQL database system that you can use for your applications; Amazon provides it as a service. If you want something more, you can install your own database on EC2, or use RDS for a database server (MySQL, for example).
If you want to know more, there are some books, like "Programming Amazon EC2", or see Amazon's screencasts at http://www.youtube.com/user/AmazonWebServices or its presentations at http://www.slideshare.net/AmazonWebServices
Amazon Simple Storage Service (Amazon S3)
Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs. It allows you to upload, store, and download any type of file up to 5 TB in size. This service allows subscribers to access the same systems that Amazon uses to run its own websites. The subscriber has control over the accessibility of the data, i.e. whether it is privately or publicly accessible.
Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Amazon SimpleDB
Amazon SimpleDB is a highly available NoSQL data store that offloads the work of database administration. Developers simply store and query data items via web services requests and Amazon SimpleDB does the rest.
Unbound by the strict requirements of a relational database, Amazon SimpleDB is optimized to provide high availability and flexibility, with little or no administrative burden. Behind the scenes, Amazon SimpleDB creates and manages multiple geographically distributed replicas of your data automatically to enable high availability and data durability. The service charges you only for the resources actually consumed in storing your data and serving your requests. You can change your data model on the fly, and data is automatically indexed for you. With Amazon SimpleDB, you can focus on application development without worrying about infrastructure provisioning, high availability, software maintenance, schema and index management, or performance tuning.
For more information, go through these:
https://aws.amazon.com/simpledb/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
https://www.tutorialspoint.com/amazon_web_services/amazon_web_services_s3.htm
Amazon S3 is used for storage of files. It is basically like the hard drives on your system, where you use C: or D: for your files. If you are developing an application, you can use S3 for storing static files or backup files.
Amazon EC2 is exactly like your physical machine. The only difference is that EC2 is in the cloud. You can install and run software and applications and store files exactly as you do on your physical machine.
Amazon SimpleDB is a database in the cloud. You can integrate it with your application and make queries against it.

Web Server being used as File Storage - How to improvise?

I am making a DR plan for a web application which is hosted on a production web server. That web server also acts as file storage, holding the feed upload files (used by the web application as input) and the report files (the output of the web application's processing). If the web server goes down, the file data is lost too, so I need to design a solution and give recommendations that eliminate this single point of failure.
I have thought of some recommendations, as follows:
1) Use a separate file server; however, that requires new resources.
2) Attach a data volume mounted on the web server, mapped to a network filer (network storage), which can be used to store the feeds and reports. In case the web server goes down, the network filer can be mounted and attached to the contingency web server.
3) There is one more web server, which is load balanced; however, it is not currently being used as file storage. If we implement a feature that regularly backs up the file data to that second, load-balanced web server, we can start using it in case the first web server goes down. The backup can be done through a backup script, a separate Windows service, or a scheduled job that runs the backup every night.
Please help me review the above or suggest new recommendations to help eliminate this single point of failure on the web server. It would be highly appreciated.
Regards
Kapil
I've successfully used Amazon's S3 to store the "output" data of web and non-web applications. Using a service like that is beneficial from the single-point-of-failure perspective because then any other instance of that web application, or a different type of client, on the same server or in a completely different datacenter still has access to the same output files. Another similar option is Rackspace's CloudFiles.
Both of these services are very redundant. You could use them as the backup and keep the primary storage on your server, or use them as the primary and keep a backup on your other web server. There are lots of options! Hope this info helps.
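As a sketch of the nightly backup script from option 3 (the directory names are hypothetical; a real deployment would copy to the second server over a network share or similar), a minimal incremental mirror in Python could look like this:

```python
import shutil
from pathlib import Path

def mirror(src: str, dst: str) -> list[str]:
    """Copy files from src into dst, skipping copies that are already up to date.

    Returns the relative paths of the files copied on this run.
    """
    src_root, dst_root = Path(src), Path(dst)
    copied = []
    for f in sorted(src_root.rglob("*")):
        if not f.is_file():
            continue
        rel = f.relative_to(src_root)
        target = dst_root / rel
        # Re-copy only if the target is missing or older than the source
        # (1-second tolerance absorbs filesystem timestamp precision).
        if not target.exists() or f.stat().st_mtime - target.stat().st_mtime > 1:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(rel))
    return copied
```

Scheduled nightly via cron or the Windows Task Scheduler, this keeps the second server's copy of the feeds and reports current without re-copying unchanged files.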

Web Service - Efficient way to transfer file to cloud platform storage

I have a service that requires some input files from the client. This service is run from a number of different cloud platforms (Amazon, Azure, etc.). Further, it requires that these input files be stored in their respective cloud platform's persistent storage. What is the best way to transfer those input files from the client to the service without requiring that the client knows about the specific cloud platform's details (e.g. secret keys)?
Given these requirements, the simple approach of the client transferring these files directly to the platform's storage (e.g. S3) isn't possible.
I could have the client call a service method and pass it the file blob to transfer up to the service, but that would require that the service receive the file and then do its own file transfer to the cloud platform's storage (in other words, do two file transfers for one file).
So, the question is, what is the best way to transfer a file to storage given that we don't know what cloud platform the service is running on?
In this setting, your proposed proxy solution seems to be the only one, and although the double transfer seems bad, in practice it's not that evil, as the cloud storage is commonly in the same datacenter as the compute nodes and is therefore very fast to access.
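A sketch of that proxy pattern, assuming each platform's SDK supplies a writable file-like adapter (a hypothetical abstraction): the service streams the client's upload straight through in fixed-size chunks, so the double transfer never buffers the whole file in memory, and the client never sees any storage credentials.

```python
import io

CHUNK_SIZE = 64 * 1024  # stream in 64 KiB chunks to keep memory use constant

def relay(client_stream, storage_writer, chunk_size: int = CHUNK_SIZE) -> int:
    """Pipe the client's upload into the platform's storage writer.

    client_stream:  any file-like object with .read(n), e.g. the request body.
    storage_writer: any file-like object with .write(bytes); each cloud
                    platform would provide its own adapter (hypothetical).
    Returns the number of bytes transferred.
    """
    total = 0
    while True:
        chunk = client_stream.read(chunk_size)
        if not chunk:
            break
        storage_writer.write(chunk)
        total += len(chunk)
    return total

# Example with in-memory streams standing in for the request body and storage:
n = relay(io.BytesIO(b"x" * 200_000), io.BytesIO())
```

Because the service only ever holds one chunk at a time, the proxy's memory cost stays constant regardless of file size, which keeps the two-transfer overhead down to bandwidth alone.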