How can I extract the events that are present in Event Viewer programmatically? - Hyper-V

I have been working with a Hyper-V failover cluster for the past two months. During migration, some events are logged in the Event Viewer, including the node to which the failover happened. My question is: how can I extract these events from the Event Viewer programmatically, so that I can use them in my analysis of the failover cluster?

Get-WinEvent -LogName Microsoft-Windows-Hyper-V-VMMS-Admin | ?{$_.message.contains("storage migration")}
This pulls Windows events as full event objects whose message mentions storage migration.
Adjust the log name accordingly (see the "Full Name:" field in the Properties dialog of the relevant log) and specify the wording you are looking for.
Alternatively, you can filter by event ID if that suits you better:
Get-WinEvent -LogName Microsoft-Windows-Hyper-V-VMMS-Admin | ?{$_.id -eq 20927}
You will be able to parse the event further from there.
The -FilterXPath parameter will make the query more efficient, because the filtering happens in the event log query itself rather than in the PowerShell pipeline, but you might have to invest more time in that syntax.
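For example, a minimal sketch of the event ID filter from above rewritten with -FilterXPath (same log name as before; adjust the event ID to whatever you are interested in):
Get-WinEvent -LogName Microsoft-Windows-Hyper-V-VMMS-Admin -FilterXPath "*[System[EventID=20927]]"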

Related

BigQuery Error loading location is interrupting scheduled queries

A few days ago, I started receiving an error in my Scheduled Queries dashboard: "Error loading location europe-west8: BigQuery Data Transfer Service does not yet support location: europe-west8."
I'm in the US, so all 4 of my storage buckets are set to US (or a US region), and I have confirmed their locations.
The datasets are all in US, and the scheduled queries are all in region "us".
Since this error started, my BigQuery Scheduled Queries that append data to tables have stopped running.
Where can I change the setting that seems to be calling europe-west8?
You need to check the region of the dataset you are using. The destination table for your scheduled query must be in the same region as the data being queried.
You can see the locations where scheduled queries are supported here.
You specify a location for storing your BigQuery data when you create a dataset. After you create the dataset, the location cannot be changed, but you can copy the dataset to a different location, or manually move (recreate) the dataset in a different location.
You can see more information about how locations work in BigQuery here.
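For example, a quick way to confirm where a dataset actually lives is to check its location with the BigQuery Python client (a minimal sketch; the project and dataset names below are placeholders):
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
dataset = client.get_dataset("my-project.my_dataset")
# For the setup described in this question, this should print "US", not "europe-west8".
print(dataset.location)
The destination table of the scheduled query inherits this dataset's location, so this confirms both sides of the region check.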
EDIT
This is a known issue with the BigQuery UI; the engineering team is aware of it and working towards a solution, although so far there isn't a specific ETA. Feel free to star the issue to raise further awareness of it.
There are two possible workarounds you can try to circumvent this. More specifically:
Workaround #1: In the old UI, you can do it by clicking on "Disable editor tabs".
Workaround #2: In the Scheduled Query Editor, click the SCHEDULE dropdown and choose "Enable scheduled queries". An overlay appears with the "Enable scheduled queries" message box. Click anywhere on the screen to close the overlay, then click the SCHEDULE dropdown again, and the create/update options are there.
If you are running scheduled queries, check that the processing location is set to the location of your data source and that the destination table is in the correct location as well.
Check the docs about setting a query location: https://cloud.google.com/bigquery/docs/scheduling-queries

Repast - locate and track a specific agent using agent monitor

I have 500,000+ agents which are added to the context but not to the display. Is there an easy way for me to locate a specific agent (without displaying it) and track its property changes over time using the agent monitor, as shown below:
The probe panel is only available through displays, so you would need to be able to click on a specific agent. You might be able to use the agent table, available via the table icon in the toolbar, which provides a snapshot of all of the agents and their properties at a specific time.
If you know the ID of the agent you want to probe before the model is run, you could selectively log data from a single agent, or you could have a display with just a single agent by specifying in the style class that the returned shape is null for all agents except the agents you would like to see. That way it would at least be possible to show a few agents in the display and probe them.

How to display a status depending on the data flow position

Consider for example this modified Simple TCP sample program:
How can I display the current state of the program, such as
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is?
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier which you read, and update the status indicator, in a separate loop. The queue and notifier write functions have error terminals which can help you to enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in the right order so would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example the read loop could add a timestamp in front of each message so you could see how recent it was.
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
You should be able to find more information on any of the terms mentioned above (local variables, sequence structures, queues, notifiers, state machines) in the LabVIEW help or on the NI website.

CloudTrail RunInstances event, who actually provisioned EC2 instance when STS AssumeRole used?

My client is in need of an AWS spring cleaning!
Before we can terminate EC2 instances, we need to find out who provisioned them and ask if they are still using the instance before we delete them. AWS doesn't seem to provide out-of-the-box features for reporting who the 'owner'/'provisioner' of an EC2 instance is; as I understand it, I need to parse through gobs of archived, zipped log files residing in S3.
Problem is, their automation is making use of STS AssumeRole to provision instances. This means the RunInstances event in the logs doesn't trace back to an actual user (correct me if I'm wrong, please please I hope I am wrong).
An AWS blog post tells the story of a fictional character, Alice, and her steps tracing a TerminateInstances event back to a user, which involves two log events: the TerminateInstances event itself and an AssumeRole event "somewhere around the time" of it that contains the actual user details. Is there a pragmatic approach one can take to correlate these two events?
Here's my POC that parses a CloudTrail log from S3:
import boto3
import gzip
import json

# Use the AWS profile that has access to the CloudTrail bucket.
boto3.setup_default_session(profile_name=<your_profile_name>)
s3 = boto3.resource('s3')
s3.Bucket(<your_bucket_name>).download_file(<S3_path>, "test.json.gz")

# CloudTrail log files are gzipped JSON with a top-level "Records" array.
with gzip.open('test.json.gz', 'r') as fin:
    file_contents = fin.read().replace('\n', '')

json_data = json.loads(file_contents)
for record in json_data['Records']:
    if record['eventName'] == "RunInstances":
        user = record['userIdentity']['userName']
        principalid = record['userIdentity']['principalId']
        for index, instance in enumerate(record['responseElements']['instancesSet']['items']):
            print "instance id: " + instance['instanceId']
            print "user name: " + user
            print "principalid " + principalid
However, the details are generic, since these roles are shared by many groups. How can I find the details of the user from before they assumed the role, in a script?
UPDATE: I did some research and it looks like I can correlate the RunInstances event to an AssumeRole event by a shared 'accessKeyId', and that should show me the account name before it assumed a role. It's tricky, though: not all RunInstances events contain this accessKeyId, for example if 'invokedBy' was an autoscaling event.
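Here is a rough sketch of that correlation, building on the POC above. It assumes the matching AssumeRole record is in the same loaded json_data; in practice it may live in a different log file, so you would have to scan a wider set of records:
def find_assume_role_identity(records, access_key_id):
    # Find the AssumeRole call that issued this temporary access key;
    # its userIdentity is the caller's identity before the role was assumed.
    for record in records:
        if record.get('eventName') == 'AssumeRole':
            creds = (record.get('responseElements') or {}).get('credentials') or {}
            if creds.get('accessKeyId') == access_key_id:
                return (record.get('userIdentity') or {}).get('arn')
    return None

for record in json_data['Records']:
    if record['eventName'] == "RunInstances":
        key = (record.get('userIdentity') or {}).get('accessKeyId')
        if key:
            print(find_assume_role_identity(json_data['Records'], key))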
Direct answer:
For the solution you are proposing, you are unfortunately out of luck. You can take a look at http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html#w28aac22b9b4b7b3b1. On the 4th row, it says that AssumeRole will record only the role identity for all subsequent calls.
I'd contact AWS support to make sure of this, as I might very well be mistaken.
What I would do in your case:
First, wait a couple of days in case someone has a better idea, or in case I am mistaken and AWS support answers with an out-of-the-box solution.
Create an AWS Config rule that deletes all instances that have a certain tag. Then tell your developers to tag all instances that they are sure should be deleted, and those will get deleted.
Tag all the production instances and the still-needed development instances with a tag of their own.
Run a script that tags all of the untagged instances with a separate tag (see the boto3 sketch after this list). Double and triple check these instances.
Back up and turn off the instances tagged in step 3 (without deleting the instances).
If someone complains about something not being on, that means they missed an instance in step 1 or 2. Tag this instance correctly and turn it on again.
After a while (a week or so), delete the instances that are still stopped (keep the backups).
After a couple of months, delete the backups that were not restored.
Note that this isn't foolproof as it has the possibility of human error and possible downtime, so double and triple check, make a clone of the same environment and test on that (if you have a development environment that already has such a configuration, that would be the best scenario), take it slow to be able to monitor everything, and be sure to keep backups of everything.
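For the step that tags all of the untagged instances, here is a minimal boto3 sketch (the tag key and value are just an example convention, and pagination is ignored for brevity):
import boto3

ec2 = boto3.client('ec2')

# Collect every instance that currently has no tags at all.
untagged = []
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        if not instance.get('Tags'):
            untagged.append(instance['InstanceId'])

# Mark them so they can be reviewed, backed up and stopped later.
if untagged:
    ec2.create_tags(
        Resources=untagged,
        Tags=[{'Key': 'cleanup-review', 'Value': 'untagged'}]
    )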
Good luck, and please tell me what your solution ended up being.
General guidelines for the future:
Note: The following points are very opinionated; they are general rules that I abide by because I find they save me a load of trouble from time to time. Read them, dismiss what you find unfit for you, and take the things that you find reasonable.
Don't use AssumeRole that often, as it obfuscates user access. If it's a script run on a developer's PC, let it run with their own username. If it's running on a server, keep it with the role it was created with. There will be less management that way, as you cut out the middleman (the assumed role) and don't need to create roles anymore; just assign the permissions to the correct group/user. See below for when I'd consider using AssumeRole a necessity.
Automate deletions. The first thing you should build is automation that keeps the AWS account as clean as possible, as this saves both $$$ and debugging pain. Tags, and scripts that act on those tags, are very powerful tools. So if a developer needs an instance for a day to try out something new, he can add a tag that times the instance out, and a script then cleans it up when the time comes (a minimal sketch of such a cleanup script follows after this list). These are project-specific, and not everyone needs all of them, so assess what you need for your project and act on it.
What I'd recommend is giving the permissions to the users themselves in the development environment, as it makes tracing things to their root and finding the most knowledgeable person to solve them easier. As for the production environment, everything should be automated anyway (creation when needed and deletion when no longer needed) and no one should have any write access to that account, ever.
As for the assume-role, I only use it when I want to give read-only access to production logs on another account. Another case would be something that really shouldn't be happening that often, if at all, but that some users still need access to. So, as an extra layer of protection against the 'I did it by mistake', I make them switch role to do it, and never have a script that automatically switches roles and does the action, in an attempt to keep it as deliberate as possible (think deleting a database and such). Another thing would be accessing sensitive information (a credit-card database, etc.). Many more scenarios can occur, and there it comes down to your judgement.
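To illustrate the "tag that times the instance out" idea from the automation point above, here is a minimal boto3 sketch; the 'terminate-after' tag name and the YYYY-MM-DD date format are just an assumed convention, and pagination is again ignored for brevity:
import boto3
from datetime import datetime

ec2 = boto3.client('ec2')

# Find instances carrying a 'terminate-after' tag whose date has passed.
reservations = ec2.describe_instances(
    Filters=[{'Name': 'tag-key', 'Values': ['terminate-after']}]
)['Reservations']

expired = []
for reservation in reservations:
    for instance in reservation['Instances']:
        tags = {t['Key']: t['Value'] for t in instance.get('Tags', [])}
        deadline = datetime.strptime(tags['terminate-after'], '%Y-%m-%d').date()
        if deadline < datetime.now().date():
            expired.append(instance['InstanceId'])

if expired:
    ec2.terminate_instances(InstanceIds=expired)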
Again, Good Luck.

Event raised when Access object saved?

This question references some events raised by the VBIDE. I'm looking for an event I can hook that is raised whenever an Access object is saved (form, querydef, module, class module, etc.).
If such an event is unavailable, I'm looking for workarounds. A project-wide save event or a code module change event would be acceptable alternatives. Perhaps there is some creative way to be notified when one of the "msys" system tables is updated and, ideally, which row.
Worst-case scenario, it looks like I can iterate through CurrentDb.QueryDefs (checking .LastUpdated) or CurrentProject.AllForms/.AllModules/.AllReports (checking .DateModified) and just poll on some interval, but I would like to avoid that if possible.
There aren't any events that you can catch, but there is probably a better solution than polling the database objects.
The Database Window (that contains all of the tables, queries and other objects) will receive Windows messages when certain things happen in the User Interface. A quick look with Spy++ shows that the Database Window appears to receive a WM_ENABLE message when an object is saved. If you can trap that message using Win32, you might have the beginnings of a reliable "event".
Note that VBA UserForms can be used in Access Projects, but they don't appear in the Database Window, so that might be a problem.
Also, anything that programmatically changes/adds/deletes database objects might not trigger an automatic Database Window refresh or message.