How can I create a shared /home dir across my Amazon EC2 servers?

I have a cluster of EC2 servers running Ubuntu 12.04. This will be a dev environment where several developers will be ssh-ing in. I would like to set things up so that the /home directory is shared across all 4 of these servers. I want to do this to A) ease the deployment of the servers, and B) make it easier on the devs, so that everything in their home directory is available to them on all servers.
I have seen this done in the past with a NetApp network attached drive, but I can't seem to figure out how to create the equivalent using AWS components.
Does anyone have an idea of how I can create this same setup using Amazon services?

You'll probably need to have one server host an NFS share to store the home directories. I'd try the approach described in this Server Fault answer: https://serverfault.com/questions/19323/is-it-feasible-to-have-home-folder-hosted-with-nfs.
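As a rough sketch (the subnet, the 'nfs-server' hostname, and the layout are placeholders; adjust to your VPC), the NFS setup on Ubuntu 12.04 looks something like this:

    # --- On the server that holds the real /home (ideally on an EBS volume) ---
    sudo apt-get install nfs-kernel-server
    # Export /home to the cluster's subnet (10.0.0.0/24 is a placeholder range)
    echo '/home 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
    sudo exportfs -ra

    # --- On each of the other servers ---
    sudo apt-get install nfs-common
    # 'nfs-server' is a placeholder hostname for the exporting instance
    echo 'nfs-server:/home /home nfs defaults 0 0' | sudo tee -a /etc/fstab
    sudo mount -a

One thing to keep in mind: since the home directories now live in one place, each developer needs the same UID/GID on every server (either keep /etc/passwd in sync or use something like LDAP), or file ownership won't line up across the cluster.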

Related

Can I access a full list of cloudflare tunnels through their dashboard?

I have some tunnels created through the CLI on several local servers.
Their domain names are shown in Cloudflare DNS settings as "managed by a cloudflare tunnel".
However, in Access -> Tunnels I do not see their domain names listed.
Are CLI-created tunnels accessible anywhere in their Web GUI?
Tunnels created with the CLI do appear in the GUI, but the ingress hosts defined in the tunnel's config are not listed there.
If you want them in the GUI, you will need to migrate the configuration so that it is hosted on Cloudflare.
Bear in mind, the migration process is one way only (once hosted, you cannot reverse the process).
This shouldn't be a problem unless you're using free TLDs (like .tk, .ga, .ml, and the like), because those TLDs are excluded from the Cloudflare API, so you will not be able to manage them from the GUI.
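For reference, CLI-created tunnels can at least be inspected from the command line; a quick sketch (the config path shown is the cloudflared default and may differ on your servers):

    # List all tunnels on the account, including CLI-created ones
    cloudflared tunnel list

    # The ingress hosts for a CLI-managed tunnel live in the local config file,
    # which is why they don't show up in the dashboard
    cat ~/.cloudflared/config.yml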

Redis Configuration settings are changing dynamically after couple of hours

I installed Redis Server on a cloud machine (Ubuntu 18.04) backed by an SSD.
In the configuration file, I changed the dir to /temp and the dbfilename to dump.rdb.
I restarted the server and checked the runtime settings with CONFIG GET.
It showed the values I set in the redis.conf file.
After 6 hours, I checked again. The strange thing is that these values had changed to dir=/var/spool/cron
and dbfilename=root.
I am sure nobody attacked my server; it is behind our own VPN and not publicly accessible.
I then ran one more test: I installed a Docker container (Ubuntu 18.04) on that same cloud instance and repeated the test inside the container. There was no change in the runtime configuration after a couple of hours.
I also suspect the disk type matters: if the cloud machine is built with a magnetic HDD, Redis seems to work fine; if I build it with an SSD, Redis stops working after a couple of hours.
Can anybody help in this regard?
Thanks
I had a similar situation on my Redis server.
If your Redis server is accessible from the public network, it might be an attack.
In my case, I changed the default port of my Redis servers and added password protection.
After that, the same situation did not happen again.
Check the issue below on the Redis GitHub repository; you can find more information about your case there.
https://github.com/redis/redis/issues/3594
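For what it's worth, dir=/var/spool/cron plus a strange dbfilename appears to be the signature of the cron-write attack discussed in that issue: an attacker issues CONFIG SET over an unauthenticated connection so that Redis dumps its RDB file into the cron directory. A minimal hardening sketch for redis.conf (the port number and password are placeholders; the directives themselves are standard Redis settings):

    # /etc/redis/redis.conf -- minimal hardening against remote CONFIG SET
    bind 127.0.0.1                        # listen only on loopback (or your VPN interface)
    protected-mode yes                    # refuse external clients when unauthenticated
    requirepass your-long-random-secret   # placeholder: choose your own
    port 6380                             # non-default port (example value)

    # Optionally disable CONFIG entirely so dir/dbfilename cannot be rewritten
    rename-command CONFIG ""

After restarting, you can verify the settings still hold with redis-cli -p 6380 -a your-long-random-secret CONFIG GET dir (which will fail if you renamed CONFIG away, and that failure is rather the point).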

How to use packer to build AMI without SSH

I would like to use Packer to build AMIs where SSH is not running. This will be for immutable infrastructure. We will be building base/golden images and then building more streamlined images from the base image, but ultimately I don't want SSH or any other means of remote access to the image. Can Packer do this?
I'm not sure about Packer's ability to do this. However, you could use AWS Security Groups to control SSH access to your EC2 instances after they've been launched from your AMIs.
Just create a Security Group with no ingress rules (security groups deny all inbound traffic by default) and launch your EC2 instances into it.
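A minimal sketch with the AWS CLI (the VPC, AMI, group names, and IDs are placeholder values):

    # A new security group has no ingress rules, and security groups deny all
    # inbound traffic by default -- so nothing can SSH in.
    aws ec2 create-security-group \
        --group-name no-ingress \
        --description "No inbound access" \
        --vpc-id vpc-0abc123

    # Launch an instance from the baked AMI into that group
    aws ec2 run-instances \
        --image-id ami-0def456 \
        --instance-type t3.micro \
        --security-group-ids sg-0123456789abcdef0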

Is it possible to use Amazon S3 for folder in a .net site

Is it possible to use Amazon Simple Storage Service (S3) for folders & files on a .net site?
Background:
I have 200 websites and I would like to have a single common code base. Right now they are on a single dedicated server. I plan to move them to an EC2 server.
In my planned layout, some of the folders & files would be on S3 and some would not.
Admin Panel - a folder that requires authentication - is this an issue?
/Bin/ - contains DLLs - is this an issue?
An EC2 instance is a normal Windows Server, just like your current dedicated server. You remote desktop into it, install whatever you need, set up IIS, etc.
S3, on the other hand, is just a storage service. Think of it like a big NAS device. You can use it to serve your static content (possibly in conjunction with CloudFront), but the actual website (DLLs, .aspx pages, etc.) will have to live on EC2 in IIS.
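As a small illustration of that split (the bucket name is a placeholder), you could push the static assets up with the AWS CLI and reference them from your pages:

    # Sync the site's static content to S3 and make it publicly readable
    aws s3 sync ./Content s3://my-sites-static --acl public-read

    # Pages then reference e.g.
    #   https://my-sites-static.s3.amazonaws.com/css/site.css
    # (or a CloudFront distribution in front of the bucket)

The /Bin/ DLLs and the authenticated Admin Panel stay on the EC2/IIS side; only public static files belong in the bucket.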

Memcached in a trusted shared environment?

We are a university IT organization that hosts all of the university's websites on several shared servers on our server-room floor. We have several VMs, each running its own Apache instance as a web server.
If we were going to set up a memcached server, is it feasible to use it as a shared instance?
If it is shared by several servers, or even by multiple web apps running on the same server, what's the best way to keep each app's cache entries separate? Prefix the keys?
Would each VM require its own memcached instance, or could we set up one memcached server and let our multiple VMs read from and write to it?
We wrote the bucket engine specifically to allow a large number of virtual memcached instances to run under a single process.
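If you do run one plain shared instance instead, the usual low-tech answer to the prefix question is exactly that: each app namespaces its own keys. A quick sketch using memcached's ASCII protocol (the IP, port, and memory size are example values):

    # Start one shared memcached instance on the cache host
    memcached -d -m 1024 -l 10.0.0.5 -p 11211

    # Each app prefixes its keys, e.g. 'blog:' vs 'portal:'
    # ('set <key> <flags> <ttl> <bytes>' followed by the value)
    printf 'set blog:user:42 0 300 5\r\nhello\r\n' | nc -q 1 10.0.0.5 11211
    printf 'get blog:user:42\r\n'                  | nc -q 1 10.0.0.5 11211

Bear in mind that memcached has no authentication or per-key access control, so prefixes only prevent accidental collisions; a shared instance assumes every app fully trusts every other app, which matches your "trusted shared environment" framing.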