airflow test mode xcom pull/push not working - testing

I am trying to test two tasks through the Airflow CLI test command.
The first task runs and automatically pushes its last console output to XCom, and I see the value "some value" in the Airflow GUI as expected.
When I run the second task via the Airflow CLI test command, I just get None as the return value. However, as I have read here: "How to test Apache Airflow tasks that uses XCom", it should work. The xcom_push is obviously working, so why does the xcom_pull not work?
Does someone have a hint on how to get this working?
provide_context is set to True.
Example code:
t1 = BashOperator(
    task_id='t1',
    bash_command='echo "some value"',
    xcom_push=True,
    dag=dag
)
t2 = BashOperator(
    task_id='t2',
    bash_command='echo {{ ti.xcom_pull(task_ids="t1") }}',
    xcom_push=True,
    dag=dag
)
Thanks!
Edit: when I run the DAG without test mode, the xcom_pull works fine.

As far as I know, "test" runs without saving anything to the metadata database, which is why the puller task returns None in test mode while the same code works when you actually run the DAG.
You can query the metadata database directly after testing the first task to verify this.
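For example, with the default SQLite metadata database (the ~/airflow/airflow.db path below is an assumption; adjust it to your sql_alchemy_conn setting), a quick check of the xcom table could look like this sketch:

import os
import sqlite3

# Inspect the xcom table after running "airflow test" on t1.
# The path is the stock SQLite location and may differ in your setup.
db_path = os.path.expanduser("~/airflow/airflow.db")
conn = sqlite3.connect(db_path)
rows = conn.execute(
    "SELECT dag_id, task_id, key FROM xcom WHERE task_id = ?", ("t1",)
).fetchall()
conn.close()

# In test mode you would expect no rows here, which is why t2's xcom_pull returns None.
print(rows)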

Context seems to be missing here: along with xcom_push=True, we need to use provide_context=True.
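For completeness, if the puller were a PythonOperator, the provide_context approach from this answer could look like the sketch below (Airflow 1.10-style imports; the task id t2_py is made up for illustration, while dag and t1 come from the question):

from airflow.operators.python_operator import PythonOperator

def pull_value(**context):
    ti = context["ti"]
    # Pulls what t1 pushed with xcom_push=True (default key 'return_value').
    value = ti.xcom_pull(task_ids="t1")
    print(value)
    return value

t2_py = PythonOperator(
    task_id="t2_py",
    python_callable=pull_value,
    provide_context=True,  # exposes 'ti' and the other context variables to the callable
    dag=dag,
)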

Related

Polarion: xUnitFileImport creates duplicate testcases instead of referencing existing ones

I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
    <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
    <project>myProject</project>
    <userAccountVaultKey>myKey</userAccountVaultKey>
    <maxCreatedDefects>10</maxCreatedDefects>
    <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
    <templateTestRunId>xUnit Build Test</templateTestRunId>
    <idRegex>(.*).xml</idRegex>
    <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
This works and I get my test results imported into a new test run, and new test cases are created. But if I run the import job multiple times (for each test run), it creates duplicate test case work items even though they have the same name, which leads to this situation:
Is there some way to tell the import job to link the existing test cases to the newly created test run instead of creating new ones?
What I have done so far:
Yes, I checked that the "custom field for test case id" in "Testing > Configuration" is configured.
Yes, I checked that the field value is really set in the created test case.
The current value in this field is e.g. ".Login", as I don't want the classnames in the report.
YES, I still get the same behaviour with the classname set.
In the scheduler I changed the job parameter for the group id because it wasn't filled. The new value is: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build test" to "xUnit Manual Test Upload"; this, however, did not lead to any visible change.
I changed the template status from draft to active. It had no effect on the behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads to the conclusion that no fields in the test cases interfere with referencing them.
After all the time I have invested now, researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone proves to me that this functionality is working.
I believe you have to set a custom field that identifies the test case with the xUnit file you're importing, so the importer can match the test case.
Try adding a custom field to the TestCase work item and selecting it here:
Custom Field for Test Case ID option in settings
If you're planning on creating test cases beforehand, note that the ID is formatted from the {classname}.{name} of a given case.
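To see which IDs a given report would produce, a small sketch like the one below (the file name e2e-results.xml is only a placeholder) prints the {classname}.{name} pairs from a JUnit-style XML file:

import xml.etree.ElementTree as ET

# List the {classname}.{name} IDs a JUnit/xUnit file would yield.
# An empty classname gives IDs like ".Login", as mentioned in the question.
tree = ET.parse("e2e-results.xml")
for case in tree.iter("testcase"):
    classname = case.get("classname", "")
    name = case.get("name", "")
    print(f"{classname}.{name}")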

sqlite3.OperationalError: When trying to connect to S3 Airflow Hook

I'm currently exploring implementing hooks in some of my DAGs. For instance, in one DAG, I'm trying to connect to S3 to send a CSV file to a bucket, which then gets copied to a Redshift table.
I have a custom module written which I import to run this process. I am currently trying to set up an S3Hook to run this process instead, but I'm a little confused about setting up the connection and how everything works.
First, I import the hook:
from airflow.hooks.S3_hook import S3Hook
Then I try to create the hook instance:
s3_hook = S3Hook(aws_conn_id='aws-s3')
Next, I try to set up the client:
s3_client = s3_hook.get_conn()
However, when I run the client line above, I receive this error:
OperationalError: (sqlite3.OperationalError)
no such table: connection
[SQL: SELECT connection.password AS connection_password, connection.extra AS connection_extra, connection.id AS connection_id, connection.conn_id AS connection_conn_id, connection.conn_type AS connection_conn_type, connection.description AS connection_description, connection.host AS connection_host, connection.schema AS connection_schema, connection.login AS connection_login, connection.port AS connection_port, connection.is_encrypted AS connection_is_encrypted, connection.is_extra_encrypted AS connection_is_extra_encrypted
FROM connection
WHERE connection.conn_id = ?
LIMIT ? OFFSET ?]
[parameters: ('aws-s3', 1, 0)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
I'm trying to diagnose the error, but the traceback is long. I'm a little confused about why sqlite3 is involved when I'm trying to use S3. Can anyone unpack this? Why is this error thrown when trying to set up the client?
Thanks
Airflow is not just a library - it's also an application.
To execute Airflow code you must have an Airflow instance running, which also means having a database with the needed schema.
To create the tables you must run airflow db init (or airflow initdb on Airflow 1.10).
Edit:
After the discussion in the comments: your issue is that you have a working Airflow application inside Docker, but your DAGs live on your local disk. Docker is a closed environment; if you want Airflow to recognize your DAGs, you must put the files into the DAG folder inside the container (for example by mounting it as a volume).
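Once the metadata database is initialized and an aws-s3 connection exists in the environment where the code actually runs (e.g. inside the container), the hook can be used directly. A rough sketch, with bucket and file names as placeholders:

from airflow.hooks.S3_hook import S3Hook

# Assumes the 'aws-s3' connection is defined in this environment's metadata DB.
s3_hook = S3Hook(aws_conn_id="aws-s3")
s3_hook.load_file(
    filename="/tmp/export.csv",   # local CSV produced by the DAG (placeholder path)
    key="staging/export.csv",     # destination object key (placeholder)
    bucket_name="my-bucket",      # placeholder bucket name
    replace=True,
)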

Click: Test click.group commands without running their code

I'm currently writing tests for my application and therefore I have to test some click.group commands I defined.
Let's say I defined them like this:
@click.group(cls=MyGroup)
@click.pass_context
def myapp(ctx):
    init_stuff()

@myapp.command()
@click.option('--myOption')
def foo(myOption: str) -> None:
    do_stuff()  # change some files, print, create other files
I know that I could use the CliRunner from click.testing. However, I just want to make sure that the command is called, but I DON'T WANT it to execute any code (for example via CliRunner.invoke()).
How could this be done?
I couldn't come up with a solution using mocking of foo, for example. Or do I have to execute code, let's say using the isolated_filesystem() that CliRunner provides?
So the question is: what would be the most efficient way to test my commands when they are defined as shown above?
Many thanks in advance.
You could add a --dry-run flag to your group or to some commands and save it inside the context; if the flag is enabled, do not execute any code. Then you can use CliRunner.invoke() with the --dry-run flag enabled and just check that your invocations have happened, without actually executing the code.
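A minimal sketch of that idea, with the flag stored on the context object and a test that still goes through CliRunner.invoke() (the group, command, and option names mirror the question; the assertions are illustrative):

import click
from click.testing import CliRunner

@click.group()
@click.option("--dry-run", is_flag=True, default=False)
@click.pass_context
def myapp(ctx, dry_run):
    ctx.ensure_object(dict)
    ctx.obj["dry_run"] = dry_run

@myapp.command()
@click.option("--myoption")
@click.pass_context
def foo(ctx, myoption):
    if ctx.obj["dry_run"]:
        click.echo(f"dry run: would process {myoption}")
        return
    # do_stuff() would run here on a real invocation

def test_foo_dry_run():
    runner = CliRunner()
    result = runner.invoke(myapp, ["--dry-run", "foo", "--myoption", "bar"])
    assert result.exit_code == 0
    assert "dry run" in result.output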

Maximo 7.6 Integration Automation Script

I'm trying to create an integration automation script for a PUBLISHED channel which updates a database field.
Basically, for WOACTIVITY I just want to set a field value to 1 on the work order when the PUBLISHED channel is triggered.
Any ideas or example scripts that anyone has or can help with, please? I just can't get it to work.
What about using a SET processing rule on the publish channel? Processing rules are evaluated every time the channel is activated, and a SET action will let you set an attribute for the parent object to a specified value. You can read more about processing rules here.
Adding a new answer based on experience, in case it helps anyone.
If you create an automation script for integration against a publish channel and then select External Exit or User Exit, there is an implicit variable irData that has access to the MBO being worked on. You can then use that MBO as you would in any other script. Note that because you're changing a record that is being integrated, you'll probably want a skip rule in your publish channel that skips records with your value already set, or you may run into an infinite publish --> update --> publish loop.
# irData is implicitly available in an integration (External Exit / User Exit) script
woMbo = irData.getCurrentMbo()    # the work order MBO being published
woMboSet = woMbo.getThisMboSet()
woMbo.setValue("FIELD", 1)        # set the target attribute to 1
woMboSet.save()                   # persist the change

Jenkins' EnvInject Plugin does not persist values

I have a build that uses the EnvInject plugin to set an environment variable.
A different job needs to scan the last good Jenkins build of that job and get the value of that environment variable.
This all works well, except that sometimes the variable disappears from the build history. It seems that after some time passes, when I look at the 'Environment variables' section in the build history, the injected value simply disappears.
How can I make this persist? Is this a bug, or part of the design?
If it makes any difference, the value of the injected variable is more than 1500 characters long and in the following format: 'component1=1.1.2;component2=1.1.3,component3=4.1.2,component4=1.1.1,component4=1.3.2,component4=1.1.4'
Looks like EnvInject and/or JobDSL have a bug.
Steps to reproduce:
Set up a job that runs this JobDSL:
job('run_deploy_mock') {
    steps {
        environmentVariables {
            env('deployedArtifacts', 'component1=1.0.0.2')
        }
    }
}
Run it and it will create a job called 'deploy_mock'.
Run the 'deploy_mock' job. After build #1 is done, go to the build details and check the 'Environmental Variables' section for an entry called 'component1'.
Run the JobDSL job again.
Check the 'Environmental Variables' section for 'deploy_mock' build #1. The 'component1' variable is now missing.
If I substitute the '=' for something else, it works as expected.
Created a Jenkins Jira issue.