Remotely start an Amazon EC2 instance via the API

Can someone elaborate on the details of how to remotely start an EC2 instance?
I have a Linux box set up locally, and would like to set up a cronjob on it to start an instance in Amazon EC2. How do I do that?
I've never worked with APIs. If there are ways to use APIs for this, can someone please explain how to do so...

Pretty simple.
Download the EC2 API tools; they come with a CLI.
Set EC2_PRIVATE_KEY and EC2_CERT as environment variables, pointing to the private key and certificate files that you generate from the EC2 console.
Then call ec2-reboot-instances instance_id [instance_id ...]
Done.
Refer: http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RebootInstances.html
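For the cron use case in the question, note that the tools also include ec2-start-instances, which starts a stopped instance and works the same way as ec2-reboot-instances. A minimal sketch, assuming the tools are installed and on the PATH; the key/cert paths and instance ID are placeholders:

    # Point the tools at the credentials generated from the EC2 console
    # (placeholder paths).
    export EC2_PRIVATE_KEY=/home/user/.ec2/pk-XXXX.pem
    export EC2_CERT=/home/user/.ec2/cert-XXXX.pem

    # Start (or reboot) an instance by ID; i-12345678 is a placeholder.
    ec2-start-instances i-12345678

    # Example crontab entry: run a wrapper script every weekday at 08:00.
    # 0 8 * * 1-5 /home/user/start-ec2.sh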
Edit 1
Do I download this directly onto my Linux box? And how do I access the EC2 API's CLI on the Linux box? Sorry to ask so many questions; I just need to know the detailed steps for this.
Yes. Download it from here
If you have unzipped the API tools in /home/naishe/ec2api, you can call /home/naishe/ec2api/bin/ec2-reboot-instances <instance_id>. Or even better, set the unzipped location as the environment variable EC2_API_HOME and append $EC2_API_HOME/bin to your system's PATH.
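For example, the lines below could go in ~/.bashrc (using the unzip location from the example above):

    # Make the EC2 API tools available without typing the full path.
    export EC2_API_HOME=/home/naishe/ec2api
    export PATH=$PATH:$EC2_API_HOME/bin

After that, ec2-reboot-instances <instance_id> resolves from anywhere.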
Also, try investing some time in the Getting Started doc, which is amazingly simple.


Mount S3 bucket as an NFS share on an EC2 instance

Long-time reader; I've usually been able to find the answers I've been looking for in existing posts, but this time I've not been able to.
I am essentially teaching myself AWS CDK from scratch. I've only just started with it, so not finding anything that helps me on my mission may be a result of not knowing enough yet to ask the right questions... so please bear with me.
Thus far I've used the AWS CDK with Python to create a stack which creates an S3 bucket, and also fires up an EC2 instance with an AWS file storage gateway AMI loaded on it (so running Amazon Linux). This deploys and runs fine; however, now I'd like to programmatically set up the S3 bucket to be accessed via an NFS share on the EC2 instance. From what I've seen I'd assumed it should be fairly trivial, but I keep getting a bit lost in documentation and internet hunts, and I'm not quite sure I'm looking in the right places or asking search engines the right questions.
It looks like I should be able to script something up to make it happen when the instance starts, using user-data, but I'm a bit lost. Is anyone able to throw me some crumbs to follow toward a good way of achieving this, or a better way of achieving what I want (which is basically accessing the S3 bucket contents as though they are files on an EC2 instance)? If it's trivial enough, just tell me how to do it.
Much appreciated :)
Dan
You are on the right track. user_data can be used for that.
I don't have full code to give you, as it's use-case specific (e.g. which OS are you using?), but the user_data would have to download and install s3fs:
s3fs allows Linux and macOS to mount an S3 bucket via FUSE. s3fs preserves the native object format for files, allowing use of other tools like the AWS CLI.
However, S3 is an object storage system, and it can't really be mounted on an instance the way you would mount NFS or EBS storage. But with s3fs-fuse you can mimic such behavior, and for some use cases it will be sufficient.
So what you can do is set up the user_data script through the console, verify that it works, and then basically copy and paste it into CDK. It's more of a trial-and-see approach, but this is the best way to learn.
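Purely as an illustration (not tested end to end), a user_data script for a Debian/Ubuntu-based instance could look like the sketch below; the bucket name my-bucket is a placeholder, and the instance is assumed to have an IAM role that grants access to the bucket. On Amazon Linux, s3fs-fuse typically has to be installed from EPEL or built from source instead:

    #!/bin/bash
    # Install s3fs-fuse (package name on Debian/Ubuntu).
    apt-get update && apt-get install -y s3fs
    # Create a mount point and mount the bucket using the instance's IAM role.
    mkdir -p /mnt/s3
    s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other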

How to configure Redis cache on local?

I have implemented a Redis cache with a .NET Core 2.1 application. Now the issue is that I only have the development connection string. I want to configure and test the Redis cache somehow on my local PC. I have read somewhere that it is possible using Chocolatey. Can anybody refer me to a link?
PS: When I tried to run the Redis cache from the development server using a VPN, it showed me a popup to select a "ResultBox.cs" file. So I created a new ResultBox.cs file and gave it the path, but when I call the rediscache.Get() method it opens ResultBox.cs and then nothing happens. Can anybody tell me what ResultBox.cs is for?
I have found a way to configure Redis locally using Chocolatey. Use this link. If you face MISCONF issues while testing with redis-cli, this link will be helpful.
You can run a local Redis Docker image. See this and this for reference.
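For reference, the Docker route is essentially a one-liner; assuming Docker is installed, something like this runs a local Redis on the default port (the container name is arbitrary):

    # Pull and run Redis, exposing the default port 6379 on localhost.
    docker run -d --name redis-local -p 6379:6379 redis
    # Quick smoke test inside the container; expect PONG.
    docker exec -it redis-local redis-cli ping

Your application can then point its connection string at localhost:6379.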

Can I use GUI/UI interface instead of command line on AWS Lightsail?

I just created an AWS Lightsail instance, which includes Node.js under Ubuntu. It was quick to set up, which looks cool.
However, I can only find command-line operation. I still cannot find a GUI, and it is uncomfortable to edit files through the command line.
Any idea how I can use a GUI on it?
Welcome to the world.
No, you cannot use a GUI to edit files on Amazon Lightsail instances directly. You can connect to the server using WinSCP and edit the web-root files from your machine.
To edit files outside the web root, you have to rely on the editors suggested by David J Eddy.
Well, it looks like I'm a bit late in answering this question, but yes, there is a way to edit files using a GUI.
To do this, install an FTP client on your own computer (not the Lightsail server). Popular examples include FileZilla and Cyberduck, and both have free versions.
Once downloaded, you just type in your server's IP address and link your .pem key file (which can be downloaded from lightsail.aws.amazon.com).
With this software, you can edit files on your instance and move them to and from the file structure on your own computer.
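The same key pair also works for plain SFTP from a terminal, if you prefer; a minimal sketch, where the key filename and the IP are placeholders and ubuntu is the default user on Ubuntu-based Lightsail instances:

    # Connect over SFTP using the instance's downloaded key pair.
    sftp -i ~/LightsailDefaultKey.pem ubuntu@<instance-ip>
    # Then transfer files, e.g.:
    #   get /home/ubuntu/app/index.js
    #   put index.js /home/ubuntu/app/index.js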
Nope. The Ubuntu flavors used in Lightsail are of the 'Server' variety. You may not like the idea of editing via the CLI but, honestly, learn it: Vim, Emacs, Nano, etc. Find an editor you can handle and learn it as well as you can. This will help you a lot later.

Is there an Ansible module for creating 'instance-store' based AMIs?

Creating AMIs from EBS-backed instances is exceedingly easy, but doing the same from an instance-store based instance seems like it can only be done manually using the CLI.
So far I've been able to bootstrap the creation of an 'instance-store' based server off of an HVM Amazon Linux AMI with Ansible, but I'm getting lost on the steps that follow... I'm trying to follow this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-instance-store-ami.html#amazon_linux_instructions
Apparently I need to store my x.509 cert and key on the instance, but which key is that? Is that...
one I have to generate on the instance with openssl,
one that I generate/convert from AWS,
one I generate with Putty, or
one that already exists in my AWS account?
After that, I can't find any reference to ec2-bundle-vol in Ansible. So I'm left wondering if the only way to do this is with Ansible's command module.
Basically, what I'm hoping to find out is: is there a way to easily create instance-store based AMIs using Ansible, and if not, can anyone outline the steps necessary to automate this? Thanks!
Generally speaking, Ansible AWS modules are meant to manage AWS resources by interacting with AWS HTTP API (ie. actions you could otherwise do in the AWS Management Console).
They are not intended to run AWS specific system tools on EC2 instances.
ec2-bundle-vol and ec2-upload-bundle must be run on the EC2 instance itself; they are not callable via the HTTP API.
I'm afraid you need to write a custom playbook / role to automate the process.
On the other hand, aws ec2 register-image is an AWS API call and corresponds to the ec2_ami Ansible module.
Unfortunately, that module doesn't seem to support registering an image from an S3 bucket.
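Purely as an illustration of what such a custom playbook would wrap (for instance via the command module), the manual steps from the AWS guide boil down to shell commands along these lines; the account ID, key/cert paths, and bucket name are placeholders:

    # On the instance: bundle the root volume (x.509 key/cert and AWS
    # account ID are placeholders).
    ec2-bundle-vol -k /tmp/pk.pem -c /tmp/cert.pem -u 123456789012 -r x86_64 -d /tmp
    # Upload the bundle to an S3 bucket.
    ec2-upload-bundle -b my-ami-bucket -m /tmp/image.manifest.xml \
        -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY
    # From anywhere with API access: register the AMI from the manifest.
    aws ec2 register-image --image-location my-ami-bucket/image.manifest.xml \
        --name "my-instance-store-ami" --virtualization-type hvm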

Cloud9 workspace using S3 bucket as source?

Given the popularity of hosting static sites from AWS S3 buckets it would be great to be able to do that from Cloud9 too.
Is there any way I can set up an FTP-based workspace that uses an S3 bucket as the source?
Transmit and other FTP apps have the ability to work directly with an S3 bucket. I did try setting up an FTP workspace in Cloud9 using the following:
Host: s3.amazonaws.com
Username: My-Access-Key
Password: My-Secret-Key
I know it was a long-shot and I have since read confirmation that Amazon doesn't allow simple FTP access to buckets like that.
Any ideas if this is possible?
FTP workspaces on Cloud9 are actually being phased out, so I'd recommend using the mounting feature described in this blog post to mount an FTP source: https://c9.io/site/blog/2014/12/ftp-sftp-mounting-beta
Unfortunately, S3 doesn't support the FTP protocol, so this would have to be a new feature. Luckily we're opening up our SDK to be able to implement features like this. If you're interested in contributing please email us via https://support.c9.io
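For what it's worth, since S3 speaks HTTP rather than FTP, the usual way to push a static site into a bucket from a shell is the AWS CLI; a minimal sketch, where my-bucket and ./site are placeholders:

    # Sync a local build directory to the bucket hosting the static site;
    # --delete removes remote files that no longer exist locally.
    aws s3 sync ./site s3://my-bucket --delete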
Codeanywhere (https://codeanywhere.com) does this now. However, you'll have to shell out $7 to $10/month for that capability.
But then again, like Cloud9 (which I'm a big fan of), you get a bunch of features on the Codeanywhere IDE.
I was disappointed when Cloud9 discontinued its efforts on S/FTP. Codeanywhere seems to be taking on the cloud/storage issue head on by handling cloud access to S3, FTP, SFTP, Google Drive and others.