Invalid or unexpected token in Kantu if condition - selenium

I'm using the Kantu web automation tool for the first time. Most of it is intuitive, but I'm now encountering an error when looping through a CSV. The relevant part of my script is:
{
    "Command": "echo",
    "Target": "Found customer with email ${emailAddress}",
    "Value": ""
},
{
    "Command": "echo",
    "Target": "Expected email name: ${!COL1}",
    "Value": ""
},
{
    "Command": "if",
    "Target": "${emailAddress} == \"${!COL1}@domain.com\"",
    "Value": ""
},
This produces the following log:
[info] Executing: | echo | Found customer with email ${emailAddress} | |
[echo] Found customer with email 70866223@domain.com
[info] Executing: | echo | Expected email name: ${!COL1} | |
[echo] Expected email name: 70866223
[info] Executing: | if | ${emailAddress} == "${!COL1}@domain.com" | |
[error] Error in runEval condition of if: Invalid or unexpected token
So you can see the variables ${emailAddress} and ${!COL1} are stored correctly, but my if condition is not evaluating correctly. I've also tried changing \"${!COL1}@domain.com\" to ${!COL1} + \"@domain.com\", with the same result.
I assume this has something to do with escape characters, but I can't find anything related in the docs. Any pointers appreciated.

The if expression is handled like in storeEval. To quote from one of the storeEval examples in the docs:
x="${myvar}"; x.length;
Note that our variable ${myvar} is converted to a text string before the Javascript EVAL is executed. Therefore ${myvar} has to be inside "..." like any other text.
So I'd say the reason your code fails on the if is that ${emailAddress} is not inside a string.
"${emailAddress}" == "${!COL1}@domain.com"
should work.
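In macro JSON, the corrected if step could look like this minimal sketch (only the if step changes from your script; the inner quotes just need escaping inside the JSON string):
{
    "Command": "if",
    "Target": "\"${emailAddress}\" == \"${!COL1}@domain.com\"",
    "Value": ""
},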

Set the location for CMake service files

I'm building a C/C++ project with CMake.
After running cmake --build ... I have a build folder in my project containing my binary plus some CMake service files like this:
.
|____CMakeLists.txt
|____build
| |____compile_commands.json
| |____CMakeFiles
| |____Makefile
| |____cmake_install.cmake
| |____CMakeCache.txt
| |____project.a
| |____.cmake
|____include
|____src
Is it possible to configure CMake to move all those files (except the actually built binaries) to some other place?
I can imagine something like:
.
|____CMakeLists.txt
|____build
| |____project.a
| |____.cmake
| | |____compile_commands.json
| | |____CMakeFiles
| | |____Makefile
| | |____cmake_install.cmake
| | |____CMakeCache.txt
| | |____etc
| | |____...
|____include
|____src
Forcing CMake to generate a specific file structure is not well supported. I recommend approaching the problem from the other end instead: determine the location where CMake outputs the binaries. You simply need to set some or all of the following variables:
CMAKE_ARCHIVE_OUTPUT_DIRECTORY
CMAKE_LIBRARY_OUTPUT_DIRECTORY
CMAKE_PDB_OUTPUT_DIRECTORY
CMAKE_RUNTIME_OUTPUT_DIRECTORY
CMake presets would be a convenient place to set this kind of info:
...
"configurePresets": [
    {
        ...
        "cacheVariables": {
            "CMAKE_RUNTIME_OUTPUT_DIRECTORY": {
                "type": "PATH",
                "value": "${sourceDir}/build_binaries"
            },
            "CMAKE_LIBRARY_OUTPUT_DIRECTORY": {
                "type": "PATH",
                "value": "${sourceDir}/build_binaries"
            },
            "CMAKE_ARCHIVE_OUTPUT_DIRECTORY": {
                "type": "PATH",
                "value": "${sourceDir}/build_binaries"
            },
            "CMAKE_PDB_OUTPUT_DIRECTORY": {
                "type": "PATH",
                "value": "${sourceDir}/build_binaries"
            }
        },
        ...
    },
    ...
]
...
This only provides default values though. The corresponding target properties (e.g. RUNTIME_OUTPUT_DIRECTORY) take precedence. Furthermore there's the possibility of the cache variables being shadowed by variables of the same name specified in your cmake files.
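As a sketch of that precedence (mytool here is a hypothetical target, not one from the question):
# The target property wins over the CMAKE_RUNTIME_OUTPUT_DIRECTORY default.
set_target_properties(mytool PROPERTIES
    RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/tools"
)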
Note: If you want to put the binaries for a project in a file structure suitable for deployment, you should be using install() functionality instead.
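For instance, a minimal install() rule could look like this (assuming the project.a target is called project; the destinations are only illustrative):
include(GNUInstallDirs)
install(TARGETS project
    RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}"
    LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}"
    ARCHIVE DESTINATION "${CMAKE_INSTALL_LIBDIR}"
)
The staged tree can then be produced with cmake --install build --prefix <deploy-dir>.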

Local changes on s3-cloudformation-template.json are overridden on amplify push

As part of my backend configuration, I need an S3 bucket whose objects are automatically expired after 1 day. I added the S3 bucket to my backend with amplify storage add, but the Amplify CLI is a bit limited in what can be configured for buckets.
So, after creating the bucket through Amplify, I opened s3-cloudformation-template.json and manually included a rule for object expiration:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "S3 resource stack creation using Amplify CLI",
    "Parameters": {...},
    "Conditions": {...},
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "DependsOn": [
                "TriggerPermissions"
            ],
            "DeletionPolicy": "Retain",
            "Properties": {
                "BucketName": {...},
                "NotificationConfiguration": {...},
                "CorsConfiguration": {...},
                "LifecycleConfiguration": {
                    "Rules": [
                        {
                            "Id": "ExpirationRule",
                            "Status": "Enabled",
                            "ExpirationInDays": 1
                        }
                    ]
                }
            }
        },
        ...
    }
}
After that, I issued an amplify status, where the change in the cloudformation template was detected:
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------------- | --------- | ----------------- |
| Storage | teststorage | Update | awscloudformation |
Finally, I issued an amplify push, but the command finished without any CloudFormation logs for this change, and with a new indication of No Change for the S3 storage:
✔ Successfully pulled backend environment dev from the cloud.
Current Environment: dev
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------------- | --------- | ----------------- |
| Storage | teststorage | No Change | awscloudformation |
Checking s3-cloudformation-template.json again, I noticed that the configuration I'd added before was overridden/removed during the push command:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "S3 resource stack creation using Amplify CLI",
    "Parameters": {...},
    "Conditions": {...},
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "DependsOn": [
                "TriggerPermissions"
            ],
            "DeletionPolicy": "Retain",
            "Properties": {
                "BucketName": {...},
                "NotificationConfiguration": {...},
                "CorsConfiguration": {...}
            }
        },
        ...
    }
}
So I'm pretty sure I'm making some mistake here, since I couldn't find other posts with this problem, but where is the mistake?
You need to "override" the S3 bucket configuration (Override Amplify-generated S3)...
From the command line type:
amplify override storage
This will show:
✅ Successfully generated "override.ts" folder at C:\myProject\amplify\backend\storage\staticData
√ Do you want to edit override.ts file now? (Y/n) · yes
Press Return to choose yes then update the override.ts file with the following:
import { AmplifyS3ResourceTemplate } from '@aws-amplify/cli-extensibility-helper'
import * as s3 from '@aws-cdk/aws-s3'
export function override(resources: AmplifyS3ResourceTemplate) {
    const lifecycleConfigurationProperty: s3.CfnBucket.LifecycleConfigurationProperty =
        {
            rules: [
                {
                    status: 'Enabled',
                    id: 'ExpirationRule',
                    expirationInDays: 1
                }
            ]
        }
    resources.s3Bucket.lifecycleConfiguration = lifecycleConfigurationProperty
}
Back to the command line, press Return to continue:
Edit the file in your editor: C:\myProject\amplify\backend\storage\staticData\override.ts
? Press enter to continue
Now update the backend using:
amplify push
And you're done.
Further information on the configuration options can be found in the interface LifecycleConfigurationProperty documentation.
Further information on what else can be overridden for the S3 bucket can be found in the class CfnBucket (construct) documentation.

Ansible if nested value doesn't exist in nested array

I'd like to make my Ansible EIP creation idempotent. In order to do that, I only want the task to run when a tag "Name" with the value "tag_1" doesn't exist.
However, I'm not sure how I could add this as a 'when' condition at the end of the task.
"eip_facts.addresses": [
{
"allocation_id": "eipalloc-blablah1",
"domain": "vpc",
"public_ip": "11.11.11.11",
"tags": {
"Name": "tag_1",
}
},
{
"allocation_id": "eipalloc-blablah2",
"domain": "vpc",
"public_ip": "22.22.22.22",
"tags": {
"Name": "tag_2",
}
},
{
"allocation_id": "eipalloc-blablah3",
"domain": "vpc",
"public_ip": "33.33.33.33",
"tags": {
"Name": "tag_3",
}
}
]
(Tags are added later) I'm looking for something like:
- name: create elastic ip
  ec2_eip:
    region: eu-west-1
    in_vpc: yes
  when: eip_facts.addresses[].tags.Name = "tag_1" is not defined
What is the correct method of achieving this? Bear in mind the value may be absent from that attribute across the entire array, not just in a single element.
Ok, I found a semi-decent solution
- name: Get list of EIP Name Tags
  set_fact:
    eip_facts_Name_tag: "{{ eip_facts.addresses | map(attribute='tags.Name') | list }}"
This extracts the Name tags and puts them into a list:
ok: [localhost] => {
    "msg": [
        "tag_1",
        "tag_2",
        "tag_3"
    ]
}
and then...
- debug:
    msg: "Hello"
  when: '"tag_1" in "{{ eip_facts_Name_tag }}"'
This will work. Beware, though: because the list is rendered into a string here, this is a substring search rather than an exact match. So if you searched for just 'tag', that would count as a hit too.
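A sketch that avoids the substring problem is to test membership against the list itself rather than its string rendering (reusing the task and fact names from above):
- name: create elastic ip
  ec2_eip:
    region: eu-west-1
    in_vpc: yes
  # Exact match: eip_facts_Name_tag is a list, so `in` tests membership.
  when: '"tag_1" not in eip_facts_Name_tag'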

backstopjs mismatch errors issue

I am new to BackstopJS; I installed it globally. My project directory structure looks like the following. I set up an instance of BackstopJS in my tests/backstopjs directory with backstop init:
The page I want to reference is index.html in my app directory:
My scenarios in the backstop.json file are the following:
"scenarios": [
{
"label": "My index test",
"url": "~/app/index.html",
"referenceUrl": "",
"readyEvent": "",
"readySelector": "",
"delay": 0,
"hideSelectors": [],
"removeSelectors": [],
"hoverSelector": "",
"clickSelector": "",
"postInteractionWait": "",
"selectors": [
".list-content"
],
"selectorExpansion": true,
"misMatchThreshold" : 0.1,
"requireSameDimensions": true
}
],
I am trying to target the list-content class on my index.html page.
The error I am getting is:
report | *** Mismatch errors found ***
COMMAND | Command `report` ended with an error after [0.089s]
COMMAND | Command `test` ended with an error after [32.08s]
And the report page result:
Is my url path completely wrong, or is it something else I have missed?
Sorry, my url path was wrong; I tried it out on a simpler project setup.
But from the screen dumps above, can someone suggest the proper relative path for my url setting, from my tests/backstopjs folder where backstop.json exists to my app folder where index.html exists?

When allowing errors in a BigQuery load I do not receive the bad record number

I'm using the BigQuery command line tool to upload these records:
{name: "a"}
{name1: "b"}
{name: "c"}
➜ ~ bq load --source_format=NEWLINE_DELIMITED_JSON my_dataset.my_table ./names.json
this is the result I get:
Upload complete.
Waiting on bqjob_r7fc5650eb01d5fd4_000001560878b74e_1 ... (2s) Current status: DONE
BigQuery error in load operation: Error processing job 'my_dataset:bqjob...4e_1': JSON table encountered too many errors, giving up.
Rows: 2; errors: 1.
Failure details:
- JSON parsing error in row starting at position 5819 at file:
file-00000000. No such field: name1.
when I use bq --format=prettyjson show -j <jobId> I get:
{
    "status": {
        "errorResult": {
            "location": "file-00000000",
            "message": "JSON table encountered too many errors, giving up. Rows: 2; errors: 1.",
            "reason": "invalid"
        },
        "errors": [
            {
                "location": "file-00000000",
                "message": "JSON table encountered too many errors, giving up. Rows: 2; errors: 1.",
                "reason": "invalid"
            },
            {
                "message": "JSON parsing error in row starting at position 5819 at file: file-00000000. No such field: name1.",
                "reason": "invalid"
            }
        ],
        "state": "DONE"
    }
}
As you can see, I receive an error which tells me in which row the error occurred: Rows: 2; errors: 1.
Now I'm trying to allow bad records by using max_bad_records:
➜ ~ bq load --source_format=NEWLINE_DELIMITED_JSON --max_bad_records=3 my_dataset.my_table ./names.json
here is what I receive:
Upload complete.
Waiting on bqjob_...ce1_1 ... (4s) Current status: DONE
Warning encountered during job execution:
JSON parsing error in row starting at position 5819 at file: file-00000000. No such field: name1.
when I use bq --format=prettyjson show -j <jobId> I get:
{
    .
    .
    .
    "status": {
        "errors": [
            {
                "message": "JSON parsing error in row starting at position 5819 at file: file-00000000. No such field: name1.",
                "reason": "invalid"
            }
        ],
        "state": "DONE"
    },
}
When I check, it actually uploads the good records to the table and ignores the bad record, but now I do not know which record had the error.
Is this a BigQuery bug?
Can it be fixed so that I also receive the record number when allowing bad records?
Yes, this is what max_bad_records does. If the number of errors is below max_bad_records, the load will succeed. The error message tells you the start position of the failed line, 5819, and the file name, file-00000000. The file name is different because you're doing an upload and load.
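If you want to map that byte position back to a line number locally, here is a small sketch (assuming the source is the local names.json from the question and that the reported offset counts bytes from the start of that file):
# Translate a byte offset reported by BigQuery into a 1-based line number.
offset = 5819  # position from the error message
with open("names.json", "rb") as f:
    prefix = f.read(offset)
# The data is newline-delimited, so count the newlines before the offset.
print("failing record is on line", prefix.count(b"\n") + 1)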
The previous "Rows: 2; errors: 1" means 2 rows are parsed and there is 1 error. It's not always the 2nd row in the file. A big file can be processed by many workers in parallel. Worker n starts processing at position xxxx, parsed two rows, and found an error. It'll also report the same error message and apparently 2 doesn't mean the 2nd row in the file. And it doesn't make sense for worker n to scan the file from beginning to find out which line it starts with. Instead, it'll just report the start position of the line.