I have Rundeck 2.6.11 and I'm trying to set the value of one of the options inside a job using the API.
I already have a valid token and I can perform GET requests,
for example:
curl --noproxy '*' -X GET "http://rundeck.domain.com:4440/api/18/projects?authtoken=##########"
And I get the data back.
But I couldn't find any way to set the value of one of the options in a job.
I also went through the API docs at http://rundeck.org/docs/api/ but didn't find a solution for my case.
Any ideas?
The Rundeck API doesn't have a direct call for updating job options, but you can update a job by updating its job definition.
Export the job definition via
GET /api/1/job/[ID]
You can specify format=xml or format=yaml
Rundeck - Exporting Jobs
Update the option in the job definition using your preferred programming language.
Import the job definition you've updated via
POST /api/1/project/[PROJECT]/jobs/import
By default uuidOption is set to preserve, which means you are updating an existing job with the same UUID.
Rundeck - Importing Jobs
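Putting the three steps together, here's a rough Python sketch of the export / edit / import round trip (the job UUID, project name, option values, and API version are placeholders; a real script should parse the YAML rather than string-replace):

import requests  # third-party HTTP client

RUNDECK = "http://rundeck.domain.com:4440"
HEADERS = {"X-Rundeck-Auth-Token": "##########"}   # your API token
JOB_ID = "job-uuid-here"                           # hypothetical job UUID

# 1. Export the job definition as YAML.
export = requests.get(f"{RUNDECK}/api/18/job/{JOB_ID}",
                      params={"format": "yaml"}, headers=HEADERS)
job_yaml = export.text

# 2. Change the option's default value (plain string replace for brevity;
#    use a YAML parser for anything real).
job_yaml = job_yaml.replace("value: old_default", "value: new_default")

# 3. Re-import the definition. uuidOption=preserve keeps the same UUID and
#    dupeOption=update overwrites the existing job.
requests.post(f"{RUNDECK}/api/18/project/MYPROJECT/jobs/import",
              params={"fileformat": "yaml", "dupeOption": "update",
                      "uuidOption": "preserve"},
              headers={**HEADERS, "Content-Type": "application/yaml"},
              data=job_yaml)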
I'm currently exploring implementing hooks in some of my DAGs. For instance, in one DAG, I'm trying to connect to S3 to send a CSV file to a bucket, which then gets copied to a Redshift table.
I have a custom module written which I import to run this process. I am currently trying to set up an S3Hook to handle this process instead, but I'm a little confused about setting up the connection and how everything works.
First, I import the hook
from airflow.hooks.S3_hook import S3Hook
Then I try to make the hook instance
s3_hook = S3Hook(aws_conn_id='aws-s3')
Next I try to set up the client
s3_client = s3_hook.get_conn()
However, when I run the client line above, I receive this error:
OperationalError: (sqlite3.OperationalError)
no such table: connection
[SQL: SELECT connection.password AS connection_password, connection.extra AS connection_extra, connection.id AS connection_id, connection.conn_id AS connection_conn_id, connection.conn_type AS connection_conn_type, connection.description AS connection_description, connection.host AS connection_host, connection.schema AS connection_schema, connection.login AS connection_login, connection.port AS connection_port, connection.is_encrypted AS connection_is_encrypted, connection.is_extra_encrypted AS connection_is_extra_encrypted
FROM connection
WHERE connection.conn_id = ?
LIMIT ? OFFSET ?]
[parameters: ('aws-s3', 1, 0)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
I'm trying to diagnose the error, but the traceback is long. I'm a little confused about why sqlite3 is involved here when I'm trying to use S3. Can anyone unpack this? Why is this error being thrown when trying to set up the client?
Thanks
Airflow is not just a library - it's also an application.
To execute Airflow code you must have an Airflow instance running, which also means having a database with the needed schema. The sqlite3 in your error is Airflow's default metadata database; the "no such table: connection" message means that database has never been initialized, so the connection table the hook queries doesn't exist.
To create the tables you must run airflow db init (airflow initdb on the older 1.x CLI).
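Once the metadata database exists, the connection table also needs an aws-s3 row before S3Hook(aws_conn_id='aws-s3') can resolve it. A minimal sketch of adding that connection programmatically, assuming Airflow 2.x with the Amazon provider installed (the credential values below are placeholders; the UI or the airflow connections add command works just as well):

from airflow.models import Connection
from airflow.utils.session import create_session

# Register an 'aws-s3' connection in the metadata DB so the hook can find it.
# The keys below are placeholders -- in practice they usually come from the
# environment, the UI, or a secrets backend.
conn = Connection(
    conn_id="aws-s3",
    conn_type="aws",
    login="MY_ACCESS_KEY_ID",          # hypothetical
    password="MY_SECRET_ACCESS_KEY",   # hypothetical
    extra='{"region_name": "us-east-1"}',
)

# create_session() commits on exit, persisting the row into the connection table.
with create_session() as session:
    session.add(conn)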
Edit:
After the discussion in the comments: your issue is that you have a working Airflow application inside Docker, but your DAGs are written on your local disk. Docker is a closed environment; if you want Airflow to recognize your DAGs you must move the files into the DAG folder inside the container (or mount your local DAG directory as a volume).
I have a setUp Thread Group and a normal Thread Group. JMeter runs the thread groups consecutively.
In the setUp group I select a username from a database and put it into a property with a JSR223 Sampler like this:
import org.apache.jmeter.util.JMeterUtils;
JMeterUtils.setProperty("schemaProp", vars.get("schemaVar_1"));
In the second Thread Group I have a JDBC Connection Configuration which accesses the property like this:
${__property(sshUserProp,)}
But it doesn't work.
LOG:
2021-12-16 08:54:58,802 DEBUG o.a.j.p.j.c.DataSourceElement: Driver: org.postgresql.Driver DbUrl: jdbc:postgresql://127.0.0.1:50539/JB7 User: sshUserProp
The JDBC Connection Configuration doesn't really see the property, although it is actually set, because I can access it with another JSR223 Sampler:
import org.apache.jmeter.util.JMeterUtils;
log.info ("------------------------- MESSAGE: " + JMeterUtils.getProperty("sshUserProp"))
LOG:
2021-12-16 08:54:58,961 INFO o.a.j.p.j.s.J.JSR223 Sampler: ------------------------- MESSAGE: 160116
I'm afraid it's not possible, at least not with JMeter 5.4.1. Looking into the JDBC Connection Configuration source code, the element is set up in the testStarted() function, and according to the JavaDoc:
Called just before the start of the test from the main engine thread. This is before the test elements are cloned. Note that not all the test variables will have been set up at this point.
So you need to determine the username beforehand, somewhere, somehow: e.g. by running another JMeter test and saving the username(s) into a file via the Flexible File Writer and then reading the username from the file using the __StringFromFile() function, or by passing the username via a -J command-line argument - this way your current approach with the __property() function will work.
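For example, with the username determined beforehand, you could start JMeter with it as a property (the value 160116 below is just the one from your log):
jmeter -n -t test.jmx -JsshUserProp=160116
and keep referencing it in the JDBC Connection Configuration as you do now, or with the shorter __P() form:
${__P(sshUserProp,)}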
We are using the Beta Scheduled query feature of BigQuery.
Details: https://cloud.google.com/bigquery/docs/scheduling-queries
We have a few ETL scheduled queries running overnight to optimize the aggregations and reduce query cost. They work well and there haven't been many issues.
The problem arises when the person who scheduled the query using their own credentials leaves the organization. I know we can do "update credential" in such cases.
I read through the documentation and also gave it a try, but couldn't really find whether we can use a service account instead of individual accounts to schedule queries.
Service accounts are cleaner, tie into the rest of the IAM framework, and are not dependent on a single user.
So if you have any additional information regarding scheduled queries and service accounts, please share.
Thanks for taking time to read the question and respond to it.
Regards
BigQuery Scheduled Query now does support creating a scheduled query with a service account and updating a scheduled query to use a service account. Will these work for you?
While it's not supported in the BigQuery UI, it's possible to create a transfer (including a scheduled query) using the Python GCP SDK for DTS, or from the BQ CLI.
The following is an example using the Python SDK:
r"""Example of creating TransferConfig using service account.
Usage Example:
1. Install GCP BQ python client library.
2. If it has not been done, please grant p4 service account with
iam.serviceAccout.GetAccessTokens permission on your project.
$ gcloud projects add-iam-policy-binding {user_project_id} \
--member='serviceAccount:service-{user_project_number}#'\
'gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com' \
--role='roles/iam.serviceAccountTokenCreator'
where {user_project_id} and {user_project_number} are the user project's
project id and project number, respectively. E.g.,
$ gcloud projects add-iam-policy-binding my-test-proj \
--member='serviceAccount:service-123456789#'\
'gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com'\
--role='roles/iam.serviceAccountTokenCreator'
3. Set environment var PROJECT to your user project, and
GOOGLE_APPLICATION_CREDENTIALS to the service account key path. E.g.,
$ export PROJECT_ID='my_project_id'
$ export GOOGLE_APPLICATION_CREDENTIALS=./serviceacct-creds.json'
4. $ python3 ./create_transfer_config.py
"""
import os
from google.cloud import bigquery_datatransfer
from google.oauth2 import service_account
from google.protobuf.struct_pb2 import Struct
PROJECT = os.environ["PROJECT_ID"]
SA_KEY_PATH = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
credentials = (
service_account.Credentials.from_service_account_file(SA_KEY_PATH))
client = bigquery_datatransfer.DataTransferServiceClient(
credentials=credentials)
# Get full path to project
parent_base = client.project_path(PROJECT)
params = Struct()
params["query"] = "SELECT CURRENT_DATE() as date, RAND() as val"
transfer_config = {
"destination_dataset_id": "my_data_set",
"display_name": "scheduled_query_test",
"data_source_id": "scheduled_query",
"params": params,
}
parent = parent_base + "/locations/us"
response = client.create_transfer_config(parent, transfer_config)
print response
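For the BQ CLI route mentioned above, something along these lines should work as well (the project, dataset and service account values are placeholders; check that your installed bq version supports --service_account_name):

bq mk --transfer_config \
  --project_id=my_project_id \
  --target_dataset=my_data_set \
  --display_name='scheduled_query_test' \
  --data_source=scheduled_query \
  --params='{"query":"SELECT CURRENT_DATE() as date, RAND() as val"}' \
  --service_account_name=my-sa@my_project_id.iam.gserviceaccount.com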
As far as I know, unfortunately you can't use a service account to directly schedule queries yet. Maybe a Googler will correct me, but the BigQuery docs implicitly state this:
https://cloud.google.com/bigquery/docs/scheduling-queries#quotas
A scheduled query is executed with the creator's credentials and
project, as if you were executing the query yourself
If you need to use a service account (which is great practice BTW), then there are a few workarounds listed here. I've raised a FR here for posterity.
This question is quite old, but I came across this thread while searching for the same thing.
Yes, it is possible to use a service account to schedule BigQuery jobs.
While creating the scheduled query, click on "Advanced options" and you will get the option to select a service account.
By default it uses the credentials of the requesting user.
Image from the BigQuery "create scheduled query" dialog (Advanced options section).
I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to get access to them in earlier stages like before_script or after_script?
I would like to access this information and add it directly to my test results. Having build information, test result information, and GitHub repo information in the same file would be great.
You can extract committer e-mail, author name, etc. to environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis you'd put this in the before_install or before_script stage.
The TRAVIS_COMMIT environment variable is provided by default.
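If you also want those values inside the test results file itself, here is a rough Python sketch of the same idea (the results.json name and structure are made up for illustration):

import json
import os
import subprocess

commit = os.environ.get("TRAVIS_COMMIT", "HEAD")

def git_show(fmt):
    """Return a single field of the commit via git log --pretty."""
    return subprocess.check_output(
        ["git", "log", "-1", commit, f"--pretty={fmt}"], text=True).strip()

build_info = {
    "committer_email": git_show("%cE"),
    "author_name": git_show("%aN"),
    "build_number": os.environ.get("TRAVIS_BUILD_NUMBER"),
    "repo": os.environ.get("TRAVIS_REPO_SLUG"),
}

# Merge the build info into a (hypothetical) test results file.
with open("results.json") as fh:
    results = json.load(fh)
results["build_info"] = build_info
with open("results.json", "w") as fh:
    json.dump(results, fh, indent=2)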
I am coding a custom module that is executed inside a pillar (to set a pillar variable) but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: it doesn't work, as it seems that execution modules do not have access to the shell environment of the salt command.
Using command-line parameters: I don't know if it is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: it doesn't work, as the execution module is executed during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
Reading from stdin: does not work from a custom module.
Using a file to read the info: I didn't even try this because it is not an option for me for security reasons. I don't want the information stored.
Any ideas whether and how this is possible to do?
Thanks a lot!
By:
a custom module that is executed inside a pillar (to set a pillar variable)
do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
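As a rough sketch of the module side (the file name and returned key below are hypothetical), the arguments configured above arrive as positional or keyword arguments of the ext_pillar function:

# /srv/salt/extension_modules/pillar/example_b.py (hypothetical path/name)

def ext_pillar(minion_id, pillar, *args, **kwargs):
    """Called by the master during pillar compilation.

    With '- example_a: some argument'  args   == ('some argument',)
    With the example_b list form       args   == ('argumentA', 'argumentB')
    With the example_c dict form       kwargs == {'keyA': 'valueA', 'keyB': 'valueB'}
    """
    # Whatever is returned here gets merged into the minion's pillar data.
    return {"external_param": args[0] if args else kwargs}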
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox