How to create Azure policies against storage account logging - azure-storage

I would like to create a deployIfNotExists policy for storage account logs (enable them if they are not already enabled and, if possible, add a trigger to them). If that is not possible, I would at least like an audit policy on them. But this functionality does not appear to exist at this time. Is this possible, or does this process have to be "manual"?

There has to be an alias for this to be possible. I didn't find one on the storage resource, but among the log profile aliases I found this: "Microsoft.Insights/logProfiles/storageAccountId". I am not sure, however, if that is the same thing you are looking for. There is also this built-in policy that can help: https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json
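For illustration, an audit rule built on that alias could look like the following sketch, which merely assembles the policy rule as JSON. The alias is the one mentioned above; the surrounding structure is a plain Azure Policy audit rule, and you should verify the alias (e.g. with the Get-AzPolicyAlias cmdlet) before relying on it:

import json

# Hypothetical audit rule: flag log profiles with no storage account attached.
policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Insights/logProfiles"},
            {"field": "Microsoft.Insights/logProfiles/storageAccountId",
             "exists": "false"},
        ]
    },
    "then": {"effect": "audit"},
}

# Emit the rule so it can be pasted into a policy definition.
print(json.dumps(policy_rule, indent=2))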

Related

How to find the role after logging in to bigquery?

I already have access to Google Analytics provided by my client, and BigQuery has been configured for the project. But I want to know if I can create jobs. How do I find the role assigned to my ID?
I want to know if I can create jobs
Below is a simple way to check this:
Just open the Web UI and try to switch to the project of your interest.
a. If you do have it in the list of available projects – just select it and then run (just in case) some simple query (SELECT 1). If it runs successfully, you can create jobs in this project (because any query is in reality a job).
b. If it is not in the initial list – select “Display Project”, enter the project of your interest, and also check the “Make this my current project” box. If the result is successful – most likely you are again lucky and can create jobs in this project (but still – run some simple query to be 110% sure).
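If you'd rather check programmatically, the same test amounts to submitting a trivial query. A minimal sketch using the google-cloud-bigquery client library (the project id is a placeholder):

from google.cloud import bigquery

# If this succeeds, you can create jobs in the project; if you lack
# bigquery.jobs.create, the call fails with a 403 error.
client = bigquery.Client(project="your-project-id")
rows = client.query("SELECT 1").result()
print(list(rows))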
How do I find the role assigned to my ID
This would be more involved – you will need to use the respective IAM (Google Identity and Access Management) APIs.
For example, you can use the testIamPermissions() API, which allows you to test the Cloud IAM permissions of a user on a resource. It takes the resource URL and a set of permissions as input parameters, and returns the subset of those permissions that the caller is allowed.
The permission you should look for is bigquery.jobs.create, but you can pass this API a list of any permissions you want to check.
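A minimal sketch of that call against the Cloud Resource Manager API, using the google-api-python-client library (the project id is a placeholder):

from googleapiclient import discovery

# testIamPermissions() returns the subset of the supplied permissions
# that the caller actually holds on the resource.
crm = discovery.build("cloudresourcemanager", "v1")
response = crm.projects().testIamPermissions(
    resource="your-project-id",
    body={"permissions": ["bigquery.jobs.create"]},
).execute()

if "bigquery.jobs.create" in response.get("permissions", []):
    print("You can create BigQuery jobs in this project.")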

CloudTrail RunInstances event, who actually provisioned EC2 instance when STS AssumeRole used?

My client is in need of an AWS spring cleaning!
Before we can terminate EC2 instances, we need to find out who provisioned them and ask whether they are still using the instance before we delete it. AWS doesn't seem to provide an out-of-the-box feature for reporting who the 'owner'/'provisioner' of an EC2 instance is; as I understand it, I need to parse through gobs of archived, zipped log files residing in S3.
The problem is, their automation makes use of STS AssumeRole to provision instances. This means the RunInstances event in the logs doesn't trace back to an actual user (correct me if I'm wrong – please, please, I hope I am wrong).
An AWS blog post tells the story of a fictional character, Alice, and her steps tracing a TerminateInstance event back to a user, which involves two log events: the TerminateInstance event itself and an AssumeRole event "somewhere around the time" of it containing the actual user details. Is there a pragmatic approach one can take to correlate these two events?
Here's my POC that parses a CloudTrail log from S3:
import boto3
import gzip
import json

boto3.setup_default_session(profile_name=<your_profile_name>)

s3 = boto3.resource('s3')
s3.Bucket(<your_bucket_name>).download_file(<S3_path>, 'test.json.gz')

# CloudTrail log files are gzipped JSON; open in text mode so json can
# parse the stream directly (no need to strip newlines by hand).
with gzip.open('test.json.gz', 'rt') as fin:
    json_data = json.load(fin)

for record in json_data['Records']:
    if record['eventName'] == 'RunInstances':
        # userName is absent for assumed-role identities, hence .get()
        user = record['userIdentity'].get('userName', '<unknown>')
        principalid = record['userIdentity']['principalId']
        for instance in record['responseElements']['instancesSet']['items']:
            print("instance id: " + instance['instanceId'])
            print("user name: " + user)
            print("principalid " + principalid)
However, the details are generic, since these roles are shared by many groups. How can I find, in a script, the details of the user before they assumed the role?
UPDATE: I did some research, and it looks like I can correlate the RunInstances event to an AssumeRole event by a shared 'accessKeyId', which should show me the account name before it assumed the role. Tricky, though: not all RunInstances events contain this accessKeyId – for example, if 'invokedBy' was an autoscaling event.
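To illustrate that correlation idea, here is a rough sketch. The field names follow the CloudTrail record format; it assumes all records for the relevant time window (including the AssumeRole events) have already been loaded into one list:

def correlate_run_instances(records):
    # Map the accessKeyId issued by each AssumeRole call back to the
    # identity that made the call.
    assumed_keys = {}
    for r in records:
        if r.get("eventName") == "AssumeRole":
            creds = (r.get("responseElements") or {}).get("credentials", {})
            key_id = creds.get("accessKeyId")
            if key_id:
                assumed_keys[key_id] = r["userIdentity"].get("arn", "<unknown>")

    # Join RunInstances events to the originating identity via the key id.
    for r in records:
        if r.get("eventName") == "RunInstances":
            key_id = r["userIdentity"].get("accessKeyId")
            origin = assumed_keys.get(key_id, "<no matching AssumeRole>")
            print(r["eventID"], "provisioned by", origin)

As noted above, this breaks down when RunInstances carries no accessKeyId (e.g. 'invokedBy' autoscaling events).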
Direct answer:
For the solution you are proposing, you are unfortunately out of luck. Take a look at http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html#w28aac22b9b4b7b3b1. In the fourth row of the table there, it says that AssumeRole saves only the role identity for all subsequent calls.
I'd contact AWS Support to make sure of this, as I might very well be mistaken.
What I would do in your case:
First, wait a couple of days in case someone has a better idea, or in case I'm mistaken and AWS Support answers with an out-of-the-box solution. Then:
1. Create an AWS Config rule that deletes all instances carrying a certain tag. Tell your developers to tag every instance they are sure should be deleted; those will then get cleaned up.
2. Tag all production instances, and the development instances that are still needed, with a tag of their own.
3. Run a script that tags all of the remaining untagged instances with a separate tag (see the sketch after this list). Double- and triple-check these instances.
4. Back up and turn off the instances tagged in step 3 (without deleting them).
5. If someone complains about something not running, that means they missed an instance in step 1 or 2. Tag that instance correctly and turn it on again.
6. After a while (a week or so), delete the instances that are still stopped (keep the backups).
7. After a couple of months, delete the backups that were not restored.
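As a sketch of step 3, something along these lines could tag every instance that currently has no tags at all (the tag key and value are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Collect instances that carry no tags at all.
untagged = []
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if not instance.get("Tags"):
            untagged.append(instance["InstanceId"])

# Mark them for review rather than deleting anything outright.
if untagged:
    ec2.create_tags(
        Resources=untagged,
        Tags=[{"Key": "cleanup-status", "Value": "needs-review"}],
    )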
Note that this isn't foolproof – there's the possibility of human error and of downtime – so double- and triple-check, make a clone of the environment and test on that (if you have a development environment that already has such a configuration, that's the best scenario), take it slow so you can monitor everything, and be sure to keep backups of everything.
Good luck, and please tell me what your solution ended up being.
General guidelines for the future:
Note: the following points are very opinionated and are general rules that I abide by, as I find they save me a load of trouble from time to time. Read them, dismiss what you find unfit for you, and take the things you find reasonable.
Don't use AssumeRole that often, as it obfuscates user access. If it's a script run on a developer's PC, let it run with their own username. If it's running on a server, keep it with the role it was created with. The amount of management will be less that way, as you cut out the middle-man (the assume-role) and no longer need to create roles – just assign the permissions to the correct group/user. Take a look below for when I'd consider using AssumeRole a necessity.
Automate deletions. The first thing you should create is automation for keeping the AWS account as clean as possible, as this saves both money and debugging pain. Tags, and scripts that act on those tags, are very powerful tools. So if a developer needs an instance for a day to try out something new, they can add a tag that times the instance out, and a script cleans it up when the time comes. These are project-specific, and not everyone needs all of them, so assess what you need for your project and act on it.
What I'd recommend is giving permissions to the users themselves in the development environment, as it makes tracing things to their root, and finding the most knowledgeable person to solve them, easier. As for the production environment, everything should be automated anyway (creation when needed and deletion when no longer needed), and no one should have any write access to that account, ever.
As for AssumeRole, I only use it when I want to give read-only access to production logs on another account. Another case would be something that really shouldn't be happening often, if at all, but where some users still need access. So, as an extra layer of protection against the 'I did it by mistake', I make them switch role to do it, and never have a script that automatically switches roles and does the action – an attempt to make it as deliberate as possible (think deleting a database and such). Another case would be accessing sensitive information (a credit-card database, etc.). Many more scenarios can occur, and there it comes down to your judgement.
Again, good luck.

Using the Multi Tenant feature to configure permissions

I want to create access policies on each customer's folder:
DocumentLibrary/Customers/CustomerA
DocumentLibrary/Customers/CustomerB
...
DocumentLibrary/Customers/CustomerN
Then CustomerA (a tenant user) can't access the folders of the other customers.
I am thinking about doing this using "Tenants", and I would like to see an example.
The Multi-Tenant (MT) feature in Alfresco does not do what you ask for. In particular, tenants don't share the same document library; they are completely separated.
You could use MT to achieve complete separation of tenants. This separation would include not only documents but users, groups, permissions, everything you deploy in the Data Dictionary.
I recommend using a single (default) tenant and normal folder permissions if you just want to handle read/write permissions.
Before using Multi-Tenancy, pay close attention to the features you will be giving up, which are documented here.
The correct way to do what you are attempting to do is to simply use permissions. Give all of your customers access to the /customers folder. Then, in each customer-specific folder, break the ACL inheritance (Manage Permissions, Un-check Inherit Permissions), then assign the specific customer (perhaps using a group) to the folder with the appropriate access.
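If you'd rather script that than click through Share, recent Alfresco versions expose the same operation through the v1 REST API. A rough sketch with Python's requests library – the node id, group name, role, and credentials are all placeholders, and you should check the permissions body against the API Explorer of your Alfresco version:

import requests

# Break inheritance on a customer folder and grant one group access.
node_url = ("http://localhost:8080/alfresco/api/-default-/public"
            "/alfresco/versions/1/nodes/<customer-folder-node-id>")
body = {
    "permissions": {
        "isInheritanceEnabled": False,
        "locallySet": [
            {"authorityId": "GROUP_customerA",
             "name": "SiteCollaborator",
             "accessStatus": "ALLOWED"},
        ],
    }
}
requests.put(node_url, json=body, auth=("admin", "admin"))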
You might even consider using a separate Share site for each customer, which would make this easier.
The caveat to this is that if you are trying to hide all of your users and groups from each other, then really what you want are separate repositories, and that's what Multi-Tenancy provides, at the expense of the features I referenced at the top of the post.
If you go that route, you'll have to use the tenant console to create each customer-specific tenant. Your tenants will be separated into their own repositories. And you won't have a way to view all of your customer documents side-by-side without switching tenants.
Honestly, due to the stability of the multi-tenancy feature and the other features you have to give up, I'd be more inclined to use completely separate servers, even though that increases your maintenance burden. Alfresco doesn't even use their own multi-tenancy feature in their own cloud product.
You really should have no problems. MT is already there; you just need to enable it. There's no additional work you need to do to hide tenants from each other – that's the whole point of the feature.
http://docs.alfresco.com/5.1/concepts/mt-intro.html

Can you read/write to the registry without administrative permissions?

Essentially I need somewhere to store an expiration date for my software and do not want it to be accidentally deleted (the likelihood of anyone tampering with my software is relatively minimal). I thought about writing it to the registry; however, this appears to require administrative permissions. Is there any way to get around this issue?
Thanks
The better alternative would be to use Isolated Storage.
If you really need to modify the registry and you don't have sufficient privileges to do so, you would need to either ask for an administrator's credentials so you can temporarily elevate your privileges, or make a request to another process, such as a service, that is already running under an account with sufficient privileges.
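One detail worth noting: the question reads as .NET-flavoured, but the underlying point is OS-level. Per-user keys under HKEY_CURRENT_USER are normally writable without elevation; it is HKEY_LOCAL_MACHINE that requires administrative rights. A quick way to verify this, sketched with Python's standard winreg module (the key path and value are hypothetical):

import winreg  # Windows-only standard library module

# HKEY_CURRENT_USER lives in the user's own profile, so writing here
# normally needs no administrative privileges (unlike HKEY_LOCAL_MACHINE).
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\MyApp")
winreg.SetValueEx(key, "ExpiresOn", 0, winreg.REG_SZ, "2025-01-01")
winreg.CloseKey(key)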
If Isolated Storage is a bigger tool than you need for the job, another, simpler option would be to use an App.config setting. You can create a setting in your project's properties designer screen, and then read/write it via My.Settings.

TRAC, hide a project in available projects page depending on permissions

I have multiple projects in Trac. I'm using mod_wsgi, and the TRAC_ENV_PARENT_DIR variable in my WSGI script file points to the folder containing the folders of all these projects. Different users have access to different projects. When a user visits the Trac URL, she can see the listing containing all these projects, yet has no access to some of them.
Is there any way to show to a user only those projects this user has access to?
Please advise.
Preamble: I abhor security through obscurity. Your request could be read as cosmetics in web-site presentation. Don't aim at improved access control, because knowing a valid path will still give access to each Trac environment, depending on its settings. Of course, better navigation is a good reason.
Hiding folders depending on a user's permissions means you require authentication before granting access to TRAC_ENV_PARENT_DIR. This could be done with standard mechanisms that your web server supports. This is just the precondition.
As you say, you have some non-public Trac instances in your Trac environment folder collection. How complicated it is to identify all folders correctly depends on how much you want to spend on initial implementation vs. maintenance.
It should be trivial, but error-prone, to provide a list of either the public or the private directories – whatever is easier to maintain. Zero additional configuration would require opening each Trac environment and looking up user permissions. )** This sounds rather cumbersome and probably means a performance penalty for applications with a large user base and frequent access. You will at least want to work with a cached list if you go down this road.
You can't use Trac's auto-generated "Available projects" list; you'll have to deliver at least two versions of an index page – one for unauthenticated or unprivileged users and one for authenticated, privileged users.
For the sake of maintainability you'll want to consolidate configuration and permissions. For access to each Trac environment you could use trac.ini inheritance and a shared .htpasswd file. However, you can't inherit permissions, because these settings are stored inside the Trac db. You could give TracUserSyncPlugin a shot, but it seems not yet fit for production, or at least lacks feedback from all the happy users, if they exist.
)** While I'm not aware of dedicated documentation about this, there are actually several possibilities. Since permissions are stored in the Trac db, all of them involve reading/querying the permission db table. Its structure is documented along with all the other tables of the Trac db schema. To read it, you'll want to open the Trac environment(s) and then either run a direct query on the table (see an AccountManagerPlugin changeset for an example) or construct and query a PermissionCache object.
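For example, a minimal sketch that opens one environment and reads a user's effective permissions through Trac's own API (the environment path, username, and the permission treated as "visible" are placeholders):

from trac.env import open_environment
from trac.perm import PermissionSystem

env = open_environment("/var/trac/projects/example")
perms = PermissionSystem(env).get_user_permissions("alice")

# Decide visibility from whatever permission you treat as "may see this
# project at all", e.g. WIKI_VIEW or BROWSER_VIEW.
if "WIKI_VIEW" in perms:
    print("list this project in the index for alice")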
This may be an old question, but so far I've found the answers to be needlessly complex.
I think that, using the information stated here – http://trac.edgewall.org/wiki/TracInterfaceCustomization#ProjectList – one could build a template that checks users and permissions and then shows only the data it should.
In my case, I just needed to point the TRAC_ENV_INDEX_TEMPLATE variable to a blank HTML file, and that was enough for me.