Is it possible to use Amazon S3 for folders in a .net site - amazon-s3

Is it possible to use Amazon Simple Storage Service (S3) for folders & files on a .net site?
Background:
I have 200 websites and I would like to have a single common code base. Right now they are on a single dedicated server. I plan to move them to an EC2 server.
As you can see, some of the folders & files are not on S3 and some are.
Admin Panel - is a folder that requires authentication - is this an issue?
/Bin/ - contains DLLs - is this an issue?

EC2 is a normal Windows server, like your current dedicated server. You remote desktop into it, install whatever you need, set up IIS, etc.
S3, on the other hand, is just a storage service. Think of it like a big NAS device. So you can use it to serve your static content (possibly in conjunction with CloudFront), but the actual website (DLLs, .aspx pages, etc.) will have to be on EC2 in IIS.
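To illustrate the split, here is a minimal, language-agnostic sketch of the idea: static asset paths get rewritten to point at S3/CloudFront, while dynamic pages stay on the EC2-hosted site. The CDN domain and path prefixes are hypothetical, not from the question.

```python
# Hypothetical CloudFront (or S3) domain for static content; dynamic
# pages (.aspx, Admin Panel, /Bin/) remain on the EC2-hosted IIS site.
CDN_HOST = "https://d1234example.cloudfront.net"

# Hypothetical prefixes treated as static content.
STATIC_PREFIXES = ("/images/", "/css/", "/js/")

def asset_url(path: str) -> str:
    """Return a CDN URL for static content; leave dynamic paths untouched."""
    if path.startswith(STATIC_PREFIXES):
        return CDN_HOST + path
    return path

print(asset_url("/images/logo.png"))   # served from S3/CloudFront
print(asset_url("/Admin/login.aspx"))  # served by IIS on EC2
```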

Related

AWS S3 and AWS ELB instead of AWS Elastic beanstalk for SPA Angular 6 application

I am creating an Angular 6 frontend application. My backend APIs are written in .NET. Assume the application is similar to https://www.amazon.com/.
My question concerns only the deployment of the frontend portion on AWS. A large number of users, with a variable traffic pattern, is expected on my portal. I thought of using AWS Elastic Beanstalk as a PaaS web server.
Can AWS S3/ELB be used instead of PaaS Beanstalk without any limitations?
I'm not 100% sure what you mean by combining an Elastic Load Balancer with S3. I think you may be confused as to the purpose of the ELB: it distributes requests across multiple servers (e.g. Node.js servers), but it cannot be used with S3, which is already highly available.
There are numerous options when serving an Angular app:
You could serve the files from a Node.js app, but unless you are doing server-side rendering (using Angular Universal), I don't see the point: you are just serving static files (files that don't get stitched together by a server, as they would with PHP). It is more complicated to deploy and maintain a server, even using Elastic Beanstalk, and it is probably difficult to match the performance you could get with the other setups below.
What I suspect most people would do is configure an S3 bucket to host and serve the static files of your Angular app (https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html). You basically configure your domain name to resolve to the S3 bucket's URL. This is extremely cheap: you are not paying for a server that runs constantly, but only for the small storage cost plus a data transfer fee directly proportional to your traffic.
You can further improve on the S3 setup by creating a CloudFront distribution that uses your S3 bucket as its origin (the location it gets files from). When you configure your domain name to resolve to your CloudFront distribution, a user's request no longer fetches files from the S3 bucket (which could be in a region on the other side of the world, and so slower); instead it is directed to the closest "edge location", much nearer to your user, which first checks whether the files are cached there. It is basically a global content delivery network for your files. This is a bit more expensive than S3 on its own. See https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/.
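As a sketch of the S3 option, enabling static website hosting on the bucket can be done with one AWS CLI call (the bucket name is hypothetical). Pointing the error document at index.html routes unknown paths back to the app so Angular's client-side router can handle them:

```shell
# Hypothetical bucket name; requires configured AWS credentials.
aws s3 website s3://my-angular-app/ \
    --index-document index.html \
    --error-document index.html
```

This is a config fragment, not a full deployment; you would still sync your build output into the bucket and make its objects publicly readable.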

ImageResizer (as standalone image server) and backend image sources

Current setup: a large CMS with an "authoring" server and multiple "delivery" servers. Both the authoring and delivery servers are Windows servers running 64-bit Apache. The Apache web root is set up identically on each server: it points to a directory on a SAN. The delivery servers are load balanced via another Windows server running Apache. I just finished setting up the ImageResizer server as a standalone image server and wanted to know the best approach for giving the ImageResizer server access to the images it will serve.
authoring server - a.site.com
delivery server - d.site.com
image server - i.site.com
So I guess the question is: what is the best way to allow the ImageResizer server access to the images that are part of a large CMS site? The RemoteReader plugin? Setting up the IIS site with the same web root as the authoring and delivery servers? Any security issues with either approach? Any suggestions or alternate approaches?
Thank you!
If the SAN is well-behaved, SMB2 or later, and low-latency, you could mount the directory as a virtual folder within the IIS root (which you might want to keep separate).
For performance, it would be best to disable FCNMode (http://imageresizing.net/docs/v3/docs/fcnmode), particularly if you have lots of directories on the SAN.
Alternatively, you could use RemoteReader and point it to your Apache web servers, although you'll sacrifice the ability to update existing files (RemoteReader perma-caches everything to make performance reasonable).
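A hedged sketch of what the two suggestions might look like in Web.config. The signing key and domain are placeholders, and the usual `<configSections>` registration for the `<resizer>` section is omitted for brevity; the `fcnMode` attribute on `<httpRuntime>` requires ASP.NET 4.5 or later:

```xml
<configuration>
  <system.web>
    <!-- ASP.NET 4.5+: disable file change notifications for the SAN-backed folder -->
    <httpRuntime fcnMode="Disabled" />
  </system.web>
  <resizer>
    <!-- Hypothetical key/domain: RemoteReader fetches only from allowed hosts -->
    <remotereader signingKey="replace-with-a-long-random-key"
                  allowAllSignedRequests="true">
      <allow domain="d.site.com" />
    </remotereader>
  </resizer>
</configuration>
```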

Can i have 2 services (.svc) in one Azure WCFWebRole?

I have my Azure cloud solution with a WCFWebRole, and I created two services (.svc).
On localhost they both work great.
But when I publish my solution to Azure, only one of the .svc files is uploaded.
What am I missing?
I read a lot of threads about people combining all their service interfaces into one .svc file for some reason, but I see no point in that. If worst comes to worst I will divide the .svc files into two WebRoles (which would be a waste, and probably not even possible now that I think about it, because I have Castle Windsor and NHibernate configured on the WebRole, so lifestyles won't be preserved between the web roles).
It doesn't seem like a big deal having more than one .svc when working on localhost...
Thanks
Yes, you can have more than one .svc WCF service in a single WCF project assigned to a single Azure Cloud Services Web Role.
This GitHub project is a sample of such a project: once configured with storage credentials in ServiceConfiguration.Cloud.cscfg and deployed to a Windows Azure Cloud Service, it will answer requests to both Service1.svc?wsdl and Service2.svc?wsdl.
To verify whether your .svc files are included in the package, go to the bin\Release\app.publish directory under your cloud project and extract the .cspkg file (it is in ZIP format). Inside it you'll find a large .cssx file; extract it as well (it is also a ZIP). In there, look in the approot directory: you'll find the project files. The same files should appear under csx\Release\roles in your cloud project.
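Since both package formats are ZIP archives, a small stdlib sketch can automate the check. The function name and package path are hypothetical; point it at your extracted .cssx (or the .cspkg itself) and it lists any .svc entries:

```python
import zipfile

def svc_entries(package_path: str) -> list[str]:
    """List .svc files inside a .cspkg/.cssx (both are ZIP archives)."""
    with zipfile.ZipFile(package_path) as pkg:
        return [name for name in pkg.namelist()
                if name.lower().endswith(".svc")]
```

If only one service shows up here, the problem is in packaging (e.g. the file's Build Action / Copy to Output settings), not in the cloud environment.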
If the .svc files are indeed being uploaded and they're not executing in the cloud environment, check your WCF bindings and endpoints.
You may also activate Remote Desktop in a single development cloud instance and connect to the server to verify logs and events, and to inspect the application directories.

How can I create a shared /home dir across my Amazon EC2 servers

I have a cluster of EC2 servers spun up with Ubuntu 12.04. This will be a dev environment where several developers will be ssh-ing in. I would like to set it up so that the /home directory is shared across all four of these servers. I want to do this to A) ease the deployment of the servers, and B) make it easier on the devs so that everything in their home dir is available to them on all servers.
I have seen this done in the past with a NetApp network attached drive, but I can't seem to figure out how to create the equivalent using AWS components.
Does anyone have an idea of how I can create this same setup using Amazon services?
You'll probably need to have a server host an NFS share for the home directories. I'd try what's described in this Server Fault answer: https://serverfault.com/questions/19323/is-it-feasible-to-have-home-folder-hosted-with-nfs.
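As a rough config sketch (hypothetical subnet and hostname, and the NFS server packages must already be installed), the server exports /home and each client mounts it at boot:

```
# On the NFS server, in /etc/exports (hypothetical private subnet):
/home 10.0.0.0/24(rw,sync,no_subtree_check)
# ...then reload the export table:  sudo exportfs -ra

# On each client, in /etc/fstab (hypothetical server hostname):
nfs-server.internal:/home  /home  nfs  defaults  0  0
```

Keeping the traffic inside the VPC/security group matters here: NFS should not be reachable from the public internet.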

404 redirect with cloud storage

I'm hoping to reach someone with experience using a service like Amazon's S3. On my site we have a dedicated image server, and on this server we have an automatic 404 redirect through Apache so that, if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image.
We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's CloudFiles), and I'm wondering if anyone's had any success replicating this behavior on a cloud storage service and if so how they did it.
The Amazon instances are just like normal hosted server instances once they are up and running, so your Apache configuration could presumably be identical to what you currently have.
Your only issue will be where to store the images. Amazon Elastic Block Store makes it easy to attach a persistent volume to your instance (EBS snapshots are stored in S3). You could store all your images on such a volume and serve them with your Apache instance.
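In that Apache-on-EC2 setup, the 404 behavior carries over with one directive (the placeholder image path is hypothetical):

```apache
# Serve a placeholder image whenever a requested file is missing.
ErrorDocument 404 /images/not-available.png
```

If you later move the images to S3 static website hosting instead, its ErrorDocument setting provides the rough equivalent, serving a designated key for missing objects.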