AWS CloudTrail custom selector for Data events - amazon-s3

I would like to enable CloudTrail data events only for delete operations, and only for a couple of S3 buckets. I have the following advanced event selector:
[
  {
    "name": "Deletes CT selector",
    "fieldSelectors": [
      {
        "field": "eventCategory",
        "equals": [ "Data" ]
      },
      {
        "field": "resources.type",
        "equals": [ "AWS::S3::Object" ]
      },
      {
        "field": "eventName",
        "startsWith": [ "Delete" ]
      },
      {
        "field": "resources.ARN",
        "startsWith": [
          "arn:aws:s3:::bucket-1/",
          "arn:aws:s3:::bucket-2/",
          "arn:aws:s3:::bucket-3/"
        ]
      },
      {
        "field": "readOnly",
        "equals": [ "false" ]
      }
    ]
  }
]
When I test by deleting an object, I can see all the other related events before and after the delete, but there is no DeleteObject or DeleteObjects entry, no delete events at all. I am loading the logs into Athena and checking there, and I have also manually checked the gzipped JSON files generated by CloudTrail. No delete events.
Has anyone managed to set up this scenario in CloudTrail?

I thought I had a similar problem, using startsWith = DeleteObject instead of Delete.
After a while it turned out the events were just delayed (CloudTrail typically delivers data events to the log files within about 15 minutes of the activity). Working setup:
[
  {
    "name": "abc",
    "fieldSelectors": [
      {
        "field": "eventCategory",
        "equals": [ "Data" ]
      },
      {
        "field": "resources.type",
        "equals": [ "AWS::S3::Object" ]
      },
      {
        "field": "eventName",
        "startsWith": [ "DeleteObject" ]
      },
      {
        "field": "resources.ARN",
        "startsWith": [ "arn:aws:s3:::xxxxxx" ]
      }
    ]
  }
]
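For completeness, once the selector is ready it can also be attached to an existing trail programmatically. Below is a minimal boto3 sketch (not from the posts above; the trail name and bucket ARN are placeholders) that applies the same fields as the working selector, in the PascalCase form the PutEventSelectors API expects:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Sketch: attach an advanced event selector that records S3 object-level
# delete events for one bucket. "my-trail" and the ARN are placeholders.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    AdvancedEventSelectors=[
        {
            "Name": "Deletes CT selector",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
                {"Field": "eventName", "StartsWith": ["DeleteObject"]},
                {"Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::bucket-1/"]},
            ],
        }
    ],
)

Either way, expect the data events to show up in the delivered log files only after a delay of several minutes.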

Related

Run dynamic stored procedure from Azure Logic App

I want to run multiple stored procedures from a Logic App against an Azure SQL database, and I want the stored procedure names to be derived from a variable.
I have a variable with the values (API_test1_SP1, API_test2_SP1, API_test3_SP1).
In a loop, I want to run the stored procedures API_test1, API_test2 and API_test3.
In other words, I want to strip the _SP1 suffix from the variable values and run the resulting stored procedures (API_test1, API_test2, API_test3) against the Azure SQL database.
I tried the following expression, without luck:
#{concat(API_,slice(#{variables('variable_name')},1,lastIndexOf('_')))}
Is it possible to run stored procedures like this in a Logic App?
You can use the expression below to achieve this:
first(split(variables('Array')?[iterationIndexes('Until')],'_SP1'))
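To make the string handling concrete, this Python sketch shows the equivalent of that expression for each element (the values come from the question; the loop and print are only for illustration):

names = ["API_test1_SP1", "API_test2_SP1", "API_test3_SP1"]

for name in names:
    # same result as first(split(item, '_SP1')) in the Logic App expression
    sp_name = name.split("_SP1")[0]
    print(sp_name)  # API_test1, API_test2, API_test3

In a real flow you would pass that value as the procedure name to the SQL connector's Execute stored procedure action inside the loop; the Compose action below just demonstrates the computed value.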
To reproduce this, I used the following flow in my Logic App (screenshots of the designer view and the run results are omitted here). Below is the code view of my Logic App:
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "actions": {
      "Initialize_variable_-_array": {
        "inputs": {
          "variables": [
            {
              "name": "Array",
              "type": "array",
              "value": [
                "API_test1_SP1",
                "API_test2_SP1",
                "API_test3_SP1"
              ]
            }
          ]
        },
        "runAfter": {},
        "type": "InitializeVariable"
      },
      "Initialize_variable_-_loop": {
        "inputs": {
          "variables": [
            {
              "name": "loop",
              "type": "integer",
              "value": 0
            }
          ]
        },
        "runAfter": {
          "Initialize_variable_-_array": [ "Succeeded" ]
        },
        "type": "InitializeVariable"
      },
      "Until": {
        "actions": {
          "Compose": {
            "inputs": "@first(split(variables('Array')?[iterationIndexes('Until')],'_SP1'))",
            "runAfter": {},
            "type": "Compose"
          },
          "Increment_variable": {
            "inputs": {
              "name": "loop",
              "value": 1
            },
            "runAfter": {
              "Compose": [ "Succeeded" ]
            },
            "type": "IncrementVariable"
          }
        },
        "expression": "@equals(variables('loop'), length(variables('Array')))",
        "limit": {
          "count": 60,
          "timeout": "PT1H"
        },
        "runAfter": {
          "Initialize_variable_-_loop": [ "Succeeded" ]
        },
        "type": "Until"
      }
    },
    "contentVersion": "1.0.0.0",
    "outputs": {},
    "parameters": {},
    "triggers": {
      "manual": {
        "inputs": {
          "schema": {}
        },
        "kind": "Http",
        "type": "Request"
      }
    }
  },
  "parameters": {}
}

Why does Safari ignore manifest.json matches when Always Allowed

I am trying to make my WebExtension work with Safari.
Why do all content scripts get injected into every page regardless of what I set matches to in manifest.json?
{
  "name": "Search Engine Detector",
  "version": "1.0.0",
  "manifest_version": 2,
  "permissions": [ "*://*/*" ],
  "content_scripts": [ {
    "js": [ "js/Bing.js" ],
    "matches": [ "*://*.bing.com/*" ]
  }, {
    "js": [ "js/DuckDuckGo.js" ],
    "matches": [ "*://*.duckduckgo.com/*" ]
  }, {
    "js": [ "js/Google.js" ],
    "matches": [ "*://*.google.com/*" ]
  }, {
    "js": [ "js/Yahoo.js" ],
    "matches": [ "*://*.yahoo.com/*" ]
  } ]
}
To clarify, this only happens if I click 'Always Allow on Every Website' on install, or set 'For other websites' to Allow. Everything works fine if the configuration looks like this: (screenshot of the extension's per-website permission settings omitted).

Azure Policy (deployifnotexists) not behaving as expected

This is my first post here. What I'm trying to do in Azure is use deployIfNotExists for storage accounts when certain settings are not enabled. I've attached my code. What I want to do is this:
Check that secure transfer is enabled
Check that only TLS1_2 is allowed
Check the firewall
On the firewall, make sure Azure services are allowed through (e.g. NSG flow logs etc.)
If any of those conditions are not met, deploy them through the ARM template. What is catching me out is that I have intentionally put in bad settings to see it work, and the policy will not report them as non-compliant.
{
  "mode": "All",
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.Storage/storageAccounts"
    },
    "then": {
      "effect": "deployIfNotExists",
      "details": {
        "type": "Microsoft.Storage/storageAccounts",
        "roleDefinitionIds": [
          "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
        ],
        "existenceCondition": {
          "allOf": [
            {
              "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly",
              "equals": true
            },
            {
              "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
              "equals": "TLS1_2"
            },
            {
              "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction",
              "equals": "deny"
            },
            {
              "field": "Microsoft.Storage/storageAccounts/networkAcls.bypass",
              "contains": "AzureServices"
            }
          ]
        },
        "deployment": {
          "properties": {
            "mode": "incremental",
            "template": {
              "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
              "contentVersion": "1.0.0.0",
              "parameters": {
                "storageAccountName": {
                  "type": "String",
                  "metadata": {
                    "description": "storageAccountName"
                  }
                },
                "location": {
                  "type": "String",
                  "metadata": {
                    "description": "location"
                  }
                }
              },
              "variables": {},
              "resources": [
                {
                  "type": "Microsoft.Storage/storageAccounts",
                  "apiVersion": "2019-06-01",
                  "name": "[parameters('storageAccountName')]",
                  "location": "[parameters('location')]",
                  "properties": {
                    "minimumTlsVersion": "TLS1_2",
                    "networkAcls": {
                      "bypass": "AzureServices",
                      "defaultAction": "Deny"
                    },
                    "supportsHttpsTrafficOnly": true
                  }
                }
              ],
              "outputs": {}
            },
            "parameters": {
              "storageAccountName": {
                "value": "[field('Name')]"
              },
              "location": {
                "value": "[field('location')]"
              }
            }
          }
        }
      }
    }
  },
  "parameters": {}
}
Thanks everyone
Through further reading and talking with more experienced colleagues, I've determined that deployIfNotExists existence conditions are not meant to evaluate a resource's own settings.
By that I mean I cannot use deployIfNotExists to remediate a storage account's own settings (as above), but I could use it to deploy, say, diagnostic logging for a storage account. I am closing this question. I will try the append effect, and if I come up with anything good I'll loop it back into this question for keen eyes.
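As a side note, while reworking the policy it can help to see which storage accounts actually violate those four settings, independently of Policy evaluation. A rough Python sketch using the Azure management SDK (assumes azure-identity and azure-mgmt-storage are installed; the subscription ID is a placeholder, and enum string values may differ slightly between SDK versions):

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Sketch: flag storage accounts that miss any of the four settings the
# policy above checks. "<subscription-id>" is a placeholder.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

for account in client.storage_accounts.list():
    rules = account.network_rule_set
    compliant = (
        bool(account.enable_https_traffic_only)
        and "TLS1_2" in str(account.minimum_tls_version or "")
        and rules is not None
        and str(rules.default_action or "").lower().endswith("deny")
        and "azureservices" in str(rules.bypass or "").lower()
    )
    if not compliant:
        print(f"Non-compliant: {account.name}")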

Add Bootstrap Actions while creating EMR cluster from AWS Step Functions

I'm creating an EMR cluster from Step Functions using the code below:
"spinning_emr_cluster": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
"Parameters": {
"Name": "CombineFiles",
"VisibleToAllUsers": true,
"ReleaseLabel": "emr-5.29.0",
"Applications": [
{
"Name": "Spark"
}
],
"ServiceRole": "EMR_DefaultRole",
"JobFlowRole": "EMR_EC2_DefaultRole",
"LogUri": "s3://awsmssqltos3/emr_logs/",
"Instances": {
"KeepJobFlowAliveWhenNoSteps": true,
"InstanceFleets": [
{
"Name": "Master",
"InstanceFleetType": "MASTER",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m1.large"
}
]
},
{
"Name": "Slave",
"InstanceFleetType": "CORE",
"TargetOnDemandCapacity": 1,
"InstanceTypeConfigs": [
{
"InstanceType": "m1.large"
}
]
}
]
}
},
"ResultPath": "$.CreateClusterResult",
"Next": "lambda"
I want to add bootstrap actions while creating the cluster from AWS Step Functions. I have tried searching online but could not find any syntax for that.
"BootstrapActions": [
{
"Name": "CustomBootStrapAction",
"ScriptBootstrapAction": {
"Path": "",
"Args": []
}
}
]
Add the block above inside the Parameters block (as a sibling of Instances), with Path pointing to the S3 location of your bootstrap script and Args holding any arguments you want to pass to it.
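As a sanity check of the shape: the Step Functions elasticmapreduce:createCluster integration uses the same request structure as the EMR RunJobFlow API, so the BootstrapActions block can be tried directly with boto3 first. A minimal sketch (the bootstrap script path is a placeholder, not from the question):

import boto3

emr = boto3.client("emr")

# Sketch: create a cluster like the one in the question, plus one bootstrap
# action. The script path is a placeholder; point it at your own S3 object.
response = emr.run_job_flow(
    Name="CombineFiles",
    ReleaseLabel="emr-5.29.0",
    Applications=[{"Name": "Spark"}],
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": True,
        "InstanceFleets": [
            {
                "Name": "Master",
                "InstanceFleetType": "MASTER",
                "TargetOnDemandCapacity": 1,
                "InstanceTypeConfigs": [{"InstanceType": "m1.large"}],
            },
            {
                "Name": "Slave",
                "InstanceFleetType": "CORE",
                "TargetOnDemandCapacity": 1,
                "InstanceTypeConfigs": [{"InstanceType": "m1.large"}],
            },
        ],
    },
    BootstrapActions=[
        {
            "Name": "CustomBootStrapAction",
            "ScriptBootstrapAction": {
                "Path": "s3://my-bucket/bootstrap.sh",
                "Args": [],
            },
        }
    ],
)
print(response["JobFlowId"])

If that call succeeds, the same BootstrapActions object can be dropped into the Parameters block of the state above unchanged.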

IntelliJ IDEA, control Page Up / Page Down scroll size

I am not satisfied with the scrolling behaviour in IntelliJ for Page Up / Page Down. It doesn't feel right; it always feels as if I lose track of where I am.
Is it possible to adjust the scroll size of Page Up / Page Down? Perhaps to half a page or similar.
I took @yole's answer and implemented all of the actions he described in a separate plugin:
There is no way to control this through the settings. What you can do is write a plugin that performs scrolling in the way that you prefer. It's fairly easy: all you need to do is copy the existing PageUpAction/PageDownAction classes and the methods they call (EditorActionUtil.moveCaretPageUp/Down) to scroll by as much as you want.
This plugin implements the new actions "Partial Page Up" and "Partial Page Down", which let you scroll by a configurable fraction of the screen, adjustable in the usual IDEA settings dialog.
There is an installable version of the plugin in the official JetBrains plugin repository.
There is no way to control this through the settings. What you can do is write a plugin that performs scrolling in the way that you prefer. It's fairly easy: all you need to do is copy the existing PageUpAction/PageDownAction classes and the methods they call (EditorActionUtil.moveCaretPageUp/Down) to scroll by as much as you want.
Since many are asking about this: for Mac users this can instead be handled globally by mapping Page Up / Page Down to scrolling with the Karabiner application and adding the following complex rule:
{
  "description": "mmm.karabiner.page.up.down.to.scroll",
  "manipulators": [
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_if"
        }
      ],
      "from": { "key_code": "page_up" },
      "to": [ { "mouse_key": { "vertical_wheel": -51 } } ],
      "to_delayed_action": {
        "to_if_invoked": [ { "pointing_button": "button1" } ]
      },
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_unless"
        }
      ],
      "from": { "key_code": "page_up" },
      "to": [ { "mouse_key": { "vertical_wheel": -51 } } ],
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_if"
        }
      ],
      "from": {
        "key_code": "up_arrow",
        "modifiers": { "mandatory": [ "fn" ] }
      },
      "to": [ { "mouse_key": { "vertical_wheel": -51 } } ],
      "to_delayed_action": {
        "to_if_invoked": [ { "pointing_button": "button1" } ]
      },
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_unless"
        }
      ],
      "from": {
        "key_code": "up_arrow",
        "modifiers": { "mandatory": [ "fn" ] }
      },
      "to": [ { "mouse_key": { "vertical_wheel": -51 } } ],
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_if"
        }
      ],
      "from": { "key_code": "page_down" },
      "to": [ { "mouse_key": { "vertical_wheel": 51 } } ],
      "to_delayed_action": {
        "to_if_invoked": [ { "pointing_button": "button1" } ]
      },
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_unless"
        }
      ],
      "from": { "key_code": "page_down" },
      "to": [ { "mouse_key": { "vertical_wheel": 51 } } ],
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_if"
        }
      ],
      "from": {
        "key_code": "down_arrow",
        "modifiers": { "mandatory": [ "fn" ] }
      },
      "to": [ { "mouse_key": { "vertical_wheel": 51 } } ],
      "to_delayed_action": {
        "to_if_invoked": [ { "pointing_button": "button1" } ]
      },
      "type": "basic"
    },
    {
      "conditions": [
        {
          "bundle_identifiers": [ "^net.java.openjdk.cmd", "^com.jetbrains.intellij" ],
          "type": "frontmost_application_unless"
        }
      ],
      "from": {
        "key_code": "down_arrow",
        "modifiers": { "mandatory": [ "fn" ] }
      },
      "to": [ { "mouse_key": { "vertical_wheel": 51 } } ],
      "type": "basic"
    }
  ]
}
Also note: to get smooth scrolling, consider downloading the Mos application and adjusting its preferences if desired.
https://mos.caldis.me/
This might have other consequences on your Mac, so you may need to adjust other things, since Page Up/Page Down no longer pages but sends mouse scrolls instead.