Over time, many different people have created test transfers in our demo project and never cleaned them up. I would love to bulk-delete all transfers. Does anyone know of a way to do this?
I've built something similar on Cloud Workflows.
Essentially you need a workflow that does the steps for you: start with the BigQuery Data Transfer API and issue a few calls:
get a list of transfer configs
loop through them
issue a delete API call for each of them
https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest
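If a one-off script is enough, the same three steps can also be done directly with the Data Transfer Python client; a minimal sketch, assuming the google-cloud-bigquery-datatransfer package and default credentials (project id and location are placeholders):

# minimal sketch: list, loop, delete with the Python client
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()
parent = "projects/your-project-id/locations/us"  # placeholder project/location

for config in client.list_transfer_configs(parent=parent):
    print(f"deleting {config.display_name} ({config.name})")
    client.delete_transfer_config(name=config.name)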
My code below was written to clean up Cloud Workflows (you will need to adapt it a little with the Transfer Service API calls):
# cloud workflow to cleanup
main:
  steps:
    - initialize:
        assign:
          - project: "marton-data"
          - location: "us-central1"
    - getList:
        call: WorkflowsList
        args:
          project: ${project}
          location: ${location}
        result: items
    - loopItems:
        call: WorkflowListLoopItems
        args:
          items: ${items}
        result: res
    - final:
        return: ${res}

WorkflowsList:
  params: [project, location]
  steps:
    - list:
        call: googleapis.workflows.v1.projects.locations.workflows.list
        args:
          parent: ${"projects/" + project + "/locations/" + location}
          pageSize: 100
        result: listResult
    - documentFound:
        return: ${listResult.workflows}

WorkflowDelete:
  params: [name]
  steps:
    - delete:
        call: googleapis.workflows.v1.projects.locations.workflows.delete
        args:
          name: ${name}
        result: deleteResult
    - logStep:
        call: sys.log
        args:
          text: ${"Calling " + name + " \r\n Results returned " + json.encode_to_string(deleteResult)}
    - deleteDone:
        return: ${deleteResult}

WorkflowListLoopItems:
  params: [items]
  steps:
    - init:
        assign:
          - i: 0
          - result: ""
    - check_condition:
        switch:
          # check the index first so items[i] is never evaluated out of bounds
          - condition: ${i >= len(items)}
            next: exit_loop
          # skip the cleanup workflow itself so it does not delete itself
          - condition: ${items[i].name == "projects/marton-data/locations/us-central1/workflows/workflow_cleanup"}
            next: assign_loop
        next: iterate
    - iterate:
        steps:
          - process_item:
              call: WorkflowDelete
              args:
                name: ${items[i].name}
              result: result
    - assign_loop:
        assign:
          - i: ${i + 1}
        next: check_condition
    - exit_loop:
        return: ${result}

logMessage:
  params: [collection]
  steps:
    - log:
        call: http.post
        args:
          url: https://logging.googleapis.com/v2/entries:write
          auth:
            type: OAuth2
          body:
            entries:
              - logName: ${"projects/" + sys.get_env("GOOGLE_CLOUD_PROJECT_ID") + "/logs/workflow_logger"}
                resource:
                  type: "audited_resource"
                  labels: {}
                jsonPayload: ${collection}
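For the transfer cleanup specifically, the two list/delete subworkflows above would roughly become HTTP calls against the Data Transfer REST endpoints from the reference docs; a hedged sketch (the URL shapes and the transferConfigs response field follow the v1 REST API):

# hedged sketch: list/delete subworkflows adapted to the Data Transfer API
TransferConfigsList:
  params: [project, location]
  steps:
    - list:
        call: http.get
        args:
          url: ${"https://bigquerydatatransfer.googleapis.com/v1/projects/" + project + "/locations/" + location + "/transferConfigs"}
          auth:
            type: OAuth2
        result: listResult
    - found:
        return: ${listResult.body.transferConfigs}

TransferConfigDelete:
  params: [name]
  steps:
    - delete:
        call: http.delete
        args:
          url: ${"https://bigquerydatatransfer.googleapis.com/v1/" + name}
          auth:
            type: OAuth2
        result: deleteResult
    - done:
        return: ${deleteResult}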
Since I'm new to HA and also to Jinja2, my thoughts on the following issue are based on how I would realize it with PHP.
I'd like to build an automation that controls an RGB bulb to remind me of an open window after several timers have passed. Since I don't want to set specific values for RGB and brightness, these attributes are variables.
So the logic behind my intended automation should be:
Retrieve the current state and attributes of the lamp and store this information in an array to reset the lamp later on:
{{ states('light.bulb') }}
{{ state_attr('light.bulb', 'brightness') }}
{{ state_attr('light.bulb', 'rgb_color') }}
$initial_state = array();
$initial_state['state_on_off'] = {{ states('light.bulb') }};
$initial_state['rgb'] = {{ state_attr('light.bulb', 'rgb_color') }};
$initial_state['brightness'] = {{ state_attr('light.bulb', 'brightness') }};
some automation stuff like:
## trigger ###############################
trigger:
  - platform: state
    entity_id:
      - binary_sensor.lumi_lumi_sensor_magnet_aq2_opening_2
    to: "on"
    for:
      hours: 0
      minutes: 10
      seconds: 0
    id: BA
#######################################
## actions ###############################
action:
  - if:
      - condition: trigger
        id: BA
    then:
      - service: light.turn_on
        data:
          rgb_color:
            ## set fixed values
            - 67
            - 180
            - 252
          brightness_pct: 20
        target:
          entity_id: light.bulb
    alias: 1st stage
…
some more actions, and now set the lamp back to its initial attributes
…
action:
  - if:
      - condition: trigger
        id: BA
    then:
      - service:
          ## depending on the initial state #############
          if ($initial_state['state_on_off'] == 'on') {
              light.turn_on
          } else {
              light.turn_off
          }
          #######################################
        data:
          rgb_color:
            ## depending on the initial color #############
            $initial_state['rgb']
            #######################################
          brightness_pct:
            ## depending on the initial brightness #############
            $initial_state['brightness']
            #######################################
        target:
          entity_id: light.bulb
    alias: 3rd stage
Thanks in advance.
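In Home Assistant YAML itself, the PHP-style array can be replaced with automation-level variables, which are rendered once when the automation triggers; a hedged sketch of the capture and restore steps, assuming the entity and trigger from the question (the variable names are mine):

# hedged sketch: capture the initial attributes with automation variables
# (rendered when the automation triggers), restore them in the last stage
variables:
  initial_state: "{{ states('light.bulb') }}"
  initial_rgb: "{{ state_attr('light.bulb', 'rgb_color') }}"
  initial_brightness: "{{ state_attr('light.bulb', 'brightness') }}"
action:
  # ... reminder stages as in the question ...
  - if:
      - condition: template
        value_template: "{{ initial_state == 'on' }}"
    then:
      - service: light.turn_on
        target:
          entity_id: light.bulb
        data:
          rgb_color: "{{ initial_rgb }}"
          # the brightness attribute is 0-255, so use brightness, not brightness_pct
          brightness: "{{ initial_brightness }}"
    else:
      - service: light.turn_off
        target:
          entity_id: light.bulb
    alias: restore initial attributes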
/example:
  /{uriParams}:
    get:
      is: [defaultResponses, commonHeaders]
      uriParameters:
        uriParams:
          description: Example description uriParams
      body:
        application/json:
          example: !include examples.example.json
I would like to create a ruleset that checks both the example !include and the traits (defaultResponses, commonHeaders). What I have now is below, but the rules only work separately. (That is, if I have the "traits" and "example" rules in the same file, only "traits" works; if I delete the "traits" rule from the file, the "example" rule works.) I would like them to work together.
I am also trying to write a rule that checks that all fields have camelCase names, for example: "camelCase-exampleTwo".
provide-examples:
  message: Always include examples in request and response bodies
  targetClass: apiContract.Payload
  rego: |
    schema = find with data.link as $node["http://a.ml/vocabularies/shapes#schema"]
    nested_nodes[examples] with data.nodes as object.get(schema, "http://a.ml/vocabularies/apiContract#examples", [])
    examples_from_this_payload = { element |
      example = examples[_]
      sourcemap = find with data.link as object.get(example, "http://a.ml/vocabularies/document-source-maps#sources", [])
      tracked_element = find with data.link as object.get(sourcemap, "http://a.ml/vocabularies/document-source-maps#tracked-element", [])
      tracked_element["http://a.ml/vocabularies/document-source-maps#value"] = $node["#id"]
      element := example
    }
    $result := (count(examples_from_this_payload) > 0)

traits:
  message: common default
  targetClass: apiContract.EndPoint
  propertyConstraints:
    apiContract.ParametrizedTrait:
      core.name:
        pattern: defaultResponses

camel-case-fields:
  message: Use camelCase.
  targetClass: apiContract.EndPoint
  if:
    propertyConstraints:
      shacl.name:
        in: ['path']
  then:
    propertyConstraints:
      shacl.name:
        pattern: "^[a-z]+([A-Z][a-z]+)*$"
I am unable to get parameters into my Lambda function. If I hard-code the parameter values in the Lambda, it works fine. When I remove the parameter values from the Lambda function and invoke it from API Gateway or a Lambda test, it processes the default parameter values. Please help.
My Lambda function is:
import boto3
import time
import json

datetime = time.strftime("%Y%m%d%H%M%S")
stackname = 'myec2'

client = boto3.client('cloudformation')
response = client.create_stack(
    StackName=(stackname + '-' + datetime),
    TemplateURL='https://testnaeem.s3.amazonaws.com/ec2tags.yaml',
    Parameters=[
        {
            "ParameterKey": "MyInstanceName",
            "ParameterValue": " "
        },
        {
            "ParameterKey": "MyInstanceType",
            "ParameterValue": " "
        }
    ]
)

def lambda_handler(event, context):
    return(response)
My CloudFormation template is:
---
Parameters:
  MyInstanceType:
    Description: Instance type description
    Type: String
  MyInstanceName:
    Description: Instance type description
    Type: String
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: us-east-1a
      ImageId: ami-047a51fa27710816e
      InstanceType: !Ref MyInstanceType
      KeyName: miankeyp
      Tags:
        - Key: Name
          Value: !Ref MyInstanceName
        - Key: app
          Value: demo
Please help me figure out what changes are required in the Lambda function.
My test values are:
{
  "MyInstanceName": "demott",
  "MyInstanceType": "t2.micro"
}
I modified the code of your lambda function. Please check comments in the code for clarification:
import boto3
import time
import json

datetime = time.strftime("%Y%m%d%H%M%S")
stackname = 'myec2'

client = boto3.client('cloudformation')

def lambda_handler(event, context):
    print(event)  # to check what your event actually is;
    # it will be printed out in CloudWatch Logs for your function.
    # You have to check what the event actually looks like
    # and adjust event['MyInstanceName'] and event['MyInstanceType']
    # in the following code
    response = client.create_stack(
        StackName=(stackname + '-' + datetime),
        TemplateURL='https://testnaeem.s3.amazonaws.com/ec2tags.yaml',
        Parameters=[
            {
                "ParameterKey": "MyInstanceName",
                "ParameterValue": event['MyInstanceName']
            },
            {
                "ParameterKey": "MyInstanceType",
                "ParameterValue": event['MyInstanceType']
            }
        ]
    )
    return(response)
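One common gotcha, depending on how the function is wired up: a Lambda test event carries the keys at the top level, while an API Gateway proxy integration wraps them in a JSON string under body. A hedged sketch of a helper that handles both shapes (the helper name is mine):

import json

# hedged sketch: look for a key in a direct test event, a proxy-integration
# JSON body, or the query string, in that order
def get_parameter(event, key):
    if key in event:                                   # direct invocation / test event
        return event[key]
    body = json.loads(event.get('body') or '{}')       # API Gateway proxy: JSON body
    if key in body:
        return body[key]
    params = event.get('queryStringParameters') or {}  # API Gateway proxy: query string
    return params.get(key)

With that, event['MyInstanceName'] above becomes get_parameter(event, 'MyInstanceName').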
By the way, such a function behind API Gateway can spin up a lot of EC2 instances very quickly, so be aware of this.
I have a pipeline that is starting VMs and running some tests on them. For parameters, I use two strings: one to indicate the list of test cases or VM management operations and another for the list of node names.
Currently, the pipeline executes first on one node and then on the next. I need it to execute on all nodes simultaneously.
I tried using parallel, but I couldn't get it to coexist with the for loops.
Here is my script:
nodes = params._RNX_OS.split('¤')
for (String VMnode : nodes) {
    stage("Prepare environment") {
        build job: "TA_StartVM", parameters: [string(name: "_RNX_OS", value: VMnode), string(name: "_RNX_SNAPSHOT", value: "Configured")]
    }
    configs = params._RNX_STAGES.toString().split('¤')
    for (String config : configs) {
        switch (config) {
            case "Restart":
                stage("Restart VM") {
                    build job: 'TA_RestartVM', parameters: [string(name: "_RNX_OS", value: VMnode)]
                }
                break
            case ~/.*Start.*/:
                param = config.toString().split(':')
                snapshot = param[1]
                stage("Start VM") {
                    build job: 'TA_StartVM', parameters: [string(name: "_RNX_OS", value: VMnode), string(name: "_RNX_SNAPSHOT", value: snapshot)]
                }
                break
            case "Shutdown":
                stage("Shut down VM") {
                    build job: 'TA_ShutdownVM', parameters: [string(name: "_RNX_OS", value: VMnode)]
                }
                break
            case ~/.*Save.*/:
                param = config.toString().split(':')
                snapshot = param[1]
                stage("Save VM Snapshot") {
                    build job: 'TA_SaveVMSnapshot', parameters: [string(name: "_RNX_OS", value: VMnode), string(name: "_RNX_SNAPSHOT", value: snapshot)]
                }
                break
            default:
                stage("Run " + config + " Test") {
                    build job: 'TA_RunTest', parameters: [[$class: 'LabelParameterValue', name: 'node', label: VMnode], string(name: "_RNX_TESTCONF", value: config), string(name: "_RNX_OS", value: VMnode)], propagate: false
                }
                break
        }
    }
    stage("Test Results Table") {
        build job: 'TA_TestResultsTable', parameters: [[$class: 'LabelParameterValue', name: 'node', label: VMnode]], propagate: false
    }
    stage("Publish Test Results") {
        build job: 'TA_CopyTestResults', propagate: false
    }
    stage("Stop Slave") {
        build job: "TA_ShutdownVM", parameters: [string(name: "_RNX_OS", value: VMnode)]
    }
}
I solved it!
Found this very nice set of examples:
https://jenkins.io/doc/pipeline/examples/
The one I use is Parallel Multiple Nodes.
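For reference, a hedged sketch of how that pattern maps onto the node list from this pipeline (the branch naming and the placement of the per-node stages are my assumptions):

// hedged sketch of the jenkins.io "Parallel Multiple Nodes" pattern,
// adapted to the ¤-separated node list from the question
def nodes = params._RNX_OS.split('¤')
def branches = [failFast: false]
for (String n : nodes) {
    def VMnode = n  // capture the loop variable for the closure
    branches["node-${VMnode}"] = {
        stage("Prepare environment ${VMnode}") {
            build job: 'TA_StartVM', parameters: [string(name: '_RNX_OS', value: VMnode), string(name: '_RNX_SNAPSHOT', value: 'Configured')]
        }
        // ... the remaining per-node stages from the original script go here ...
    }
}
parallel branches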
For internal status logging in my Jenkins pipeline, I have created a "template" map that I want to use in multiple stages running independently in parallel:
def status = [
    a: '',
    b: [
        b1: '',
        b2: '',
        b3: ''
    ],
    c: [
        c1: '',
        c2: ''
    ]
]
I want to pass this status template to multiple functions/executors running in parallel. Inside the parallel branches I want to modify the status independently. See the following minimal example:
def status = [
    a: '',
    b: [
        b1: '',
        b2: '',
        b3: ''
    ],
    c: [
        c1: '',
        c2: ''
    ]
]

def label1 = "windows"
def label2 = ''

parallel firstBranch: {
    run_node(label1, status)
}, secondBranch: {
    run_node(label2, status)
},
failFast: true // or false

def run_node(label, status) {
    node(label) {
        status.b.b1 = env.NODE_NAME + "_" + env.EXECUTOR_NUMBER
        sleep(1)
        echo "env.NODE_NAME_env.EXECUTOR_NUMBER: ${status.b.b1}"
        // expected: env.NODE_NAME_env.EXECUTOR_NUMBER
        this.a_function(status)
        echo "env.NODE_NAME_env.EXECUTOR_NUMBER: ${status.b.b1}"
        // expected (still): env.NODE_NAME_env.EXECUTOR_NUMBER (of the current node)
        // is: env.NODE_NAME_env.EXECUTOR_NUMBERmore Info AND probably from the wrong node
    }
}

def a_function(status) {
    status.b.b1 += "more Info"
    echo "env.NODE_NAME_env.EXECUTOR_NUMBERmore Info: ${status.b.b1}"
    // expected: env.NODE_NAME_env.EXECUTOR_NUMBERmore Info
    sleep(time: 500, unit: 'MILLISECONDS')
    echo "env.NODE_NAME_env.EXECUTOR_NUMBERmore Info: ${status.b.b1}"
    // expected: env.NODE_NAME_env.EXECUTOR_NUMBERmore Info
}
Which results in
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Info
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Info
[firstBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0more Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0more Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Infomore Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBERmore Info:
LR-Z4933-39110bdb_0more Infomore Info
[secondBranch] env.NODE_NAME_env.EXECUTOR_NUMBER:
LR-Z4933-39110bdb_0more Infomore Info
Note that the status in the first branch is overwritten by the second branch, and the other way around.
How do I get independent status variables when passing them as a parameter to functions?
You could define the template map once; when you need multiple instances that you want to modify independently, clone the template map per instance.
Here is a short code snippet to show the example:
def template = [a: '', b: '']
def instancea = template.clone()
def instanceb = template.clone()
def instancec = template.clone()
instancea.a = 'testa'
instanceb.a = 'testb'
instancec.a = 'testc'
println instancea
println instanceb
println instancec
Of course, you can use a bigger map; the above is only for demonstration.
You are passing status by reference to the function. But even if you do a status.clone(), I suspect this isn't a deep copy of status. status.b probably still points to the same reference. You need to make a deep copy of status and send that deep copy to the function.
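A hedged sketch of such a deep copy for nested maps (plain recursive Groovy; in a Jenkinsfile a helper like this usually wants @NonCPS):

// hedged sketch: recursive deep copy so each branch gets its own nested maps
@NonCPS
def deepCopy(value) {
    if (value instanceof Map) {
        def copy = [:]
        value.each { k, v -> copy[k] = deepCopy(v) }
        return copy
    }
    return value
}

// usage: pass independent copies instead of one shared reference
parallel firstBranch: {
    run_node(label1, deepCopy(status))
}, secondBranch: {
    run_node(label2, deepCopy(status))
}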
I'm not sure a deep copy of a framework map is the right way to do this. You could just send an empty map [:] and let the called functions add the pieces to the map that they need. If you really need to pre-define the content of the map, then I think you should add a class and create new objects from that class.