Can I use an API such as Chef to automatically create, name, and set passwords for multiple servers? - api

I am new to this so forgive me for not understanding the lingo.
I have been using the Rackspace cloud control panel to build multiple virtual servers; I use them for maybe a couple of hours and then I delete them. I need these servers to all have specific and unique names such as "server1, server2, server3", etc. I also need them to have a specific password, unlike the randomly generated password that is assigned by default.
I have been creating each individual server manually (based on an image that's already set up), then I have to go back, reset the password, and reboot all of them. Doing each one manually is time consuming, and I'm sure there is an easier way. Please help me figure this out.
I've been doing some searching, but I haven't found anything very relevant to my problem; on top of that, I'm not too familiar with programming and such.
Basically what I'm looking to do is automatically create these servers with their appropriate names and passwords already built in from the start. I'm not sure if some sort of "API" is the answer, or if there's some sort of script that can be written, or both.
Any assistance is much appreciated.
thanks,
Chris
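(For reference, a minimal sketch of the kind of automation being asked about: the Rackspace Cloud Servers API is OpenStack Nova compatible, so servers can be created with a chosen name and admin password in one API call. The endpoint, token, image ID, flavor ID and password below are hypothetical placeholders, and the exact request fields should be checked against the current Cloud Servers API documentation.)

import requests

# Hypothetical values -- substitute your own region, tenant, token and IDs.
ENDPOINT = "https://<region>.servers.api.rackspacecloud.com/v2/<tenant_id>"
TOKEN = "<auth token obtained from the identity service>"
IMAGE_ID = "<id of the prepared image>"
FLAVOR_ID = "<flavor id>"
PASSWORD = "<the password every server should get>"

for i in range(1, 4):  # server1, server2, server3
    body = {"server": {"name": "server%d" % i,
                       "imageRef": IMAGE_ID,
                       "flavorRef": FLAVOR_ID,
                       "adminPass": PASSWORD}}
    resp = requests.post(ENDPOINT + "/servers",
                         headers={"X-Auth-Token": TOKEN},
                         json=body)
    resp.raise_for_status()
    print(resp.json()["server"]["id"])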

Related

What's elasticsearch and is it safe to delete logstash?

I have an internal Apache server for testing purposes, not client facing.
I wanted to upgrade the server to Apache 2.4, but there is no space left, so I was trying to delete some files on the server.
After checking file sizes, I found that the folder /var/lib/elasticsearch takes 80 GB of space. For example, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2015.12.08 already takes 60 GB. I'm not sure what Elasticsearch is. Is it safe if I delete this logstash data? Thanks!
Elasticsearch is a search engine, something like a NoSQL database, and it stores its data in indices. What you are seeing is the data of one index.
Probably someone was using the index around 2015, judging by the timestamp in its name.
I would just delete it.
I'm afraid that only you can answer that question. One use for Logstash + Elasticsearch is to help make sense out of system logs. That combination isn't normally set up by default, so I presume someone set it up at some time for some reason, and it has obviously done some logging. Only you can know whether it is still being used, or whether it is safe to delete.
As other answers pointed out, Elasticsearch is a distributed search engine, and I believe an earlier user was pushing application or system logs to this Elasticsearch instance using Logstash. If you can find the source application, check whether the log files are still there; if yes, then you can go ahead and delete your index. I highly doubt anyone still needs logs from 2015, but it is really your call to see what your application's archiving requirements are and then take the necessary action.
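If it does turn out to be safe, it is better to remove the index through Elasticsearch's REST API than to delete the files on disk. A minimal sketch in Python, assuming Elasticsearch is listening on its default localhost:9200:

import requests

ES = "http://localhost:9200"  # default local endpoint; adjust if yours differs

# List the indices with their sizes first, so you know exactly what is there.
print(requests.get(ES + "/_cat/indices?v").text)

# Then delete the old Logstash index once you are sure nothing needs it.
resp = requests.delete(ES + "/logstash-2015.12.08")
print(resp.text)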

CloudTrail RunInstances event, who actually provisioned EC2 instance when STS AssumeRole used?

My client is in need of an AWS spring cleaning!
Before we can terminate EC2 instances, we need to find out who provisioned them and ask whether they are still using the instance before we delete it. AWS doesn't seem to provide an out-of-the-box feature for reporting who the 'owner'/'provisioner' of an EC2 instance is; as I understand it, I need to parse through gobs of archived, zipped log files residing in S3.
Problem is, their automation is making use of STS AssumeRole to provision instances. This means the RunInstances event in the logs doesn't trace back to an actual user (correct me if I'm wrong, please please I hope I am wrong).
An AWS blog post tells the story of a fictional character, Alice, and her steps tracing a TerminateInstance event back to a user, which involves two log events: the TerminateInstance event and an AssumeRole event "somewhere around the time" of it that contains the actual user details. Is there a pragmatic approach one can take to correlate these two events?
Here's my POC that parses a CloudTrail log from S3:
import boto3
import gzip
import json

boto3.setup_default_session(profile_name=<your_profile_name>)
s3 = boto3.resource('s3')
s3.Bucket(<your_bucket_name>).download_file(<S3_path>, "test.json.gz")

with gzip.open('test.json.gz', 'r') as fin:
    file_contents = fin.read().replace('\n', '')

json_data = json.loads(file_contents)
for record in json_data['Records']:
    if record['eventName'] == "RunInstances":
        user = record['userIdentity']['userName']
        principalid = record['userIdentity']['principalId']
        for index, instance in enumerate(record['responseElements']['instancesSet']['items']):
            print "instance id: " + instance['instanceId']
            print "user name: " + user
            print "principalid " + principalid
However, the details are generic, since these roles are shared by many groups. How can I find, in a script, the details of the user before they assumed the role?
UPDATE: Did some research, and it looks like I can correlate the RunInstances event to an AssumeRole event by a shared 'accessKeyId', and that should show me the account name before it assumed the role. It is tricky, though: not all RunInstances events contain this accessKeyId, for example when 'invokedBy' was an autoscaling event.
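To illustrate that correlation idea, here is a hedged sketch building on the POC above; it assumes the relevant AssumeRole and RunInstances records are among the CloudTrail records already loaded, and that the field names follow the usual CloudTrail layout.

# Sketch: map the temporary access key issued by each AssumeRole event back to
# the original caller, then look that key up on each RunInstances event.
def who_ran_instances(records):
    issued_keys = {}
    for r in records:
        if r.get('eventName') == 'AssumeRole':
            creds = (r.get('responseElements') or {}).get('credentials') or {}
            key = creds.get('accessKeyId')
            if key:
                # userIdentity here still describes the original caller
                issued_keys[key] = r['userIdentity'].get('arn')
    results = []
    for r in records:
        if r.get('eventName') == 'RunInstances':
            key = r['userIdentity'].get('accessKeyId')
            results.append((r['eventTime'], issued_keys.get(key, 'unknown')))
    return results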
Direct answer:
For the solution you are proposing, you are unfortunately out of luck. Take a look at http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html#w28aac22b9b4b7b3b1. On the 4th row, it says that AssumeRole will record only the role identity for all subsequent calls.
I'd contact AWS support to make sure of this, as I might very well be mistaken.
What I would do in your case:
First, wait a couple of days in case someone has a better idea, or in case I am mistaken and AWS support answers with an out-of-the-box solution. Then:
1. Create an AWS Config rule that would delete all instances that have a certain tag. Then tell your developers to tag all instances that they are sure should be deleted; those will then get deleted.
2. Tag all the production instances, and the development instances that are still needed, with a tag of their own.
3. Run a script that tags all of the remaining untagged instances with a separate tag (a sketch follows this list). Double and triple check these instances.
4. Back up and turn off the instances tagged in step 3 (without deleting the instances).
5. If someone complains about something not being on, that means they missed an instance in step 1 or 2. Tag that instance correctly and turn it on again.
6. After a while (a week or so), delete the instances that are still stopped (keep the backups).
7. After a couple of months, delete the backups that were not restored.
Note that this isn't foolproof, as it leaves room for human error and possible downtime, so double and triple check, make a clone of the same environment and test on that (if you have a development environment that already has such a configuration, that would be the best scenario), take it slow so you can monitor everything, and be sure to keep backups of everything.
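For step 3, a hedged boto3 sketch of what such a tagging script might look like; the tag key 'cleanup-candidate' is just an example name, not anything that already exists in your account.

import boto3

ec2 = boto3.client('ec2')

# Tag every instance that currently has no tags at all with a marker tag,
# so they can be reviewed (and later stopped) as a group.
untagged = []
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        if not instance.get('Tags'):
            untagged.append(instance['InstanceId'])

if untagged:
    ec2.create_tags(Resources=untagged,
                    Tags=[{'Key': 'cleanup-candidate', 'Value': 'true'}])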
Good luck, and please tell me what your solution ended up being.
General guidelines for the future:
Note: The following points are very opinionated; they are general rules that I abide by because I find they save me a load of trouble from time to time. Read them, dismiss what you find unfit for you, and take the things that you find reasonable.
Don't use assume-role that often, as it obfuscates user access. If it is a script run on a developer's PC, let it run with their own username. If it is running on a server, keep it with the role it was created with. The amount of management is smaller that way, as you cut out the middle-man (the assume-role) and don't need to create roles anymore; just assign the permissions to the correct group/user. See below for the cases where I'd consider using assume-role a necessity.
Automate deletions. The first thing you should build is automation that keeps the AWS account as clean as possible, as this saves both money and debugging pain. Tags, and scripts that act on those tags, are very powerful tools. So if a developer needs an instance for a day to try out something new, they can create a tag that times the instance out, and a script cleans it up when the time comes. These are project-specific, and not everyone needs all of them, so assess what you need for your project and act on that.
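As an illustration of that tag-and-clean-up idea, here is a hedged boto3 sketch; the tag key 'expires-on' and its ISO date format are assumptions made for the example, not an existing convention.

import boto3
import datetime

ec2 = boto3.client('ec2')
today = datetime.date.today().isoformat()  # e.g. '2018-06-01'

# Find instances whose 'expires-on' tag date is in the past
# (ISO dates compare correctly as strings), then stop them.
expired = []
reservations = ec2.describe_instances(
    Filters=[{'Name': 'tag-key', 'Values': ['expires-on']}])['Reservations']
for reservation in reservations:
    for instance in reservation['Instances']:
        tags = {t['Key']: t['Value'] for t in instance.get('Tags', [])}
        if tags.get('expires-on') and tags['expires-on'] < today:
            expired.append(instance['InstanceId'])

if expired:
    ec2.stop_instances(InstanceIds=expired)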
What I'd recommend is giving the permissions to the users themselves in the development environment, as that makes tracing things back to their root, and finding the most knowledgeable person to solve a problem, easier. As for the production environment, everything should be automated anyway (creation when needed, deletion when no longer needed) and no one should have any write access to that account, ever.
As for assume-role, I only use it when I want to give read-only access to production logs on another account. Another case is something that really shouldn't be happening often, if at all, but that some users still need access to. So, as an extra layer of protection against the 'I did it by mistake', I make them switch role to do it, and never have a script that automatically switches roles and performs the action, in an attempt to make it as deliberate as possible (think deleting a database and such). Another case is accessing sensitive information (a credit-card database, etc.). Many more scenarios can occur, and here it comes down to your judgement.
Again, Good Luck.

How to keep my vb.net program relatively secure

I wrote a small program in VB.NET and I'm looking for a simple way to keep people from just copying the executable (without the installer) and running it on another machine or reverse engineering it. I understand that if people want the program badly enough they will figure out a way to get hold of it; I'm basically just looking for some kind of deterrent to keep our competitors from walking around and copying it.
Logan,
The bad news is that you cannot stop people from reverse engineering your desktop application. You have two options:
Create a web application instead. The code will run securely on your server.
Use Remote Desktop Services. This way you can install your program on your server and let the users use it via RDS. Here is an article that illustrates the concept and how to implement it on Microsoft Azure: https://technet.microsoft.com/en-us/library/cc755055.aspx
The standard approach is to create a license key that will only work on a specific machine and store it in the registry. This can be something as simple as:
When your app starts, get a unique machine ID (http://www.dreamincode.net/forums/topic/181408-get-unique-machine-ids/)
Perform a one-way hash on it
See if this value is stored in the registry
If it isn't, show a dialog displaying the unique machine ID and asking for the 'license'
Accept input of the license so they don't need to be asked again
You can calculate the one-way hash yourself for the computers that you want to run the software on.
This won't stop a determined hacker but it'll keep the 99.9% of people who can't hack your software honest.
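The flow described above, sketched in Python just to make the steps concrete; the registry key path is made up for the example, and a VB.NET implementation would follow the same shape using the .NET registry and hashing classes.

import hashlib
import uuid
import winreg  # Windows-only standard library module

REG_PATH = r"Software\MyCompany\MyApp"  # hypothetical key for the example

def machine_fingerprint():
    # uuid.getnode() returns the MAC address; a real scheme would mix in
    # more hardware identifiers, as the linked forum post discusses.
    return hashlib.sha256(str(uuid.getnode()).encode()).hexdigest()

def is_licensed():
    expected = machine_fingerprint()
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH)
        stored, _ = winreg.QueryValueEx(key, "license")
        return stored == expected
    except OSError:
        # No license stored yet: show the dialog with the machine ID
        # and ask the user to enter the license you computed for it.
        return False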

Get list of files and programs touched by AS400/iSeries service account

I am trying to get a list of the programs (RPG/CL/SQL) and files that a service account on the iSeries has touched. The idea is that, having this list, we can tie specific permissions (I know this will really complicate things) to the user account in order to achieve a more secure, application-specific service account. Is there any way to do this, and maybe get a report by running a command? Maybe there is a SQL statement?
Please excuse me if my terms are not appropriate; I am still new to the iSeries.
The audit journal will have what you are looking for... if it is so configured.
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzarl/rzarlusesecjnl.htm
The newest 7.1 TR includes stored procedures to allow easy read of journals.
https://www.ibm.com/developerworks/community/wikis/home/wiki/IBM%20i%20Technology%20Updates/page/DISPLAY_JOURNAL%20(easier%20searches%20of%20Audit%20Journal)
Charles
So, though Charles' answer is probably the one you should set up to get a thorough report, I wound up doing the following, as suggested by one of my peers.
Please note that my goal, though not properly explained as such, was to create an application-specific user/service account for a program. This is to avoid using one with many privileges and thus gain some security.
1. Go through the source code (in my case classic ASP) and jot down the names of all the procedures used by that program.
2. Create a CL program that outputs the program references to an output file. Then export the file's contents to Excel and massage them where necessary.
PGM
DSPPGMREF PGM(MYLIB/PGM001) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DSPPGMREF) OUTMBR(*FIRST *REPLACE)
DSPPGMREF PGM(MYLIB/PGM002) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DSPPGMREF) OUTMBR(*FIRST *ADD)
ENDPGM
I was told, however, that service program references cannot be displayed with DSPPGMREF, so the following was done for those.
PGM
ADDLIBLE LIB(ABSTRACT) POSITION(*LAST)
MONMSG MSGID(CPF0000)
WRKOBJR OBJ(SRVPGM01) OBJTYPE(*SRVPGM) OUTPUT(*OUTFILE) OUTFILE(MYLIB/WRKOBJR) MBROPT(*REPLACE)
WRKOBJR OBJ(SRVPGM02) OBJTYPE(*SRVPGM) OUTPUT(*OUTFILE) OUTFILE(MYLIB/WRKOBJR) MBROPT(*ADD)
WRKOBJR OBJ(SRVPGM03) OBJTYPE(*SRVPGM) OUTPUT(*OUTFILE) OUTFILE(MYLIB/WRKOBJR) MBROPT(*ADD)
ENDPGM
Thank you for all your help. I apologize that my answer is a little more specific than my question, but in the end this is what I wanted to achieve; I had to generalize in order to ask the question. I thought I'd post my answer anyway in case it helps someone in the future.

Setup Content Server

This is more of a strategy question than a 'help with code' question.
I have a content server and need to supply content to various shared hosting servers, and I would love to know how you would set this up. We would like the content to be 'cached' on those servers so that it appears static (and to reduce the load on the content server).
Any help would be appreciated. The basic question is how you would deliver, and then handle, the content for the shared hosting.
The usual way is to use rsync to keep the "slave" servers in sync with the "master" server. The slaves are then configured to simply serve the local files. rsync will make sure that files are copied efficiently and correctly (including time stamps and permissions). It will also make sure that clients don't see partial files, it will return useful error codes, and it handles many other issues that only someone who has been doing this for decades would think of.
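A hedged sketch of what the sync step might look like, driven from Python so it can be scheduled easily; the host names and paths are placeholders, and the rsync options shown are the common archive/compress/delete set rather than anything specific to your setup.

import subprocess

CONTENT_DIR = "/var/www/content/"  # trailing slash: sync the directory contents
SLAVES = ["web1.example.com", "web2.example.com"]  # the shared-hosting boxes

for host in SLAVES:
    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete removes files on the slave that no longer exist on the master.
    subprocess.check_call([
        "rsync", "-az", "--delete",
        CONTENT_DIR,
        "%s:%s" % (host, CONTENT_DIR),
    ])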