How to set up the alias parameter in the apscheduler config?

I have a question about setting up the config: can I set an alias parameter when using the configuration dict to set up executors and jobstores, the way the add_jobstore method accepts an alias parameter?
# scheduler add_jobstore
scheduler.add_jobstore(jobstore_type, alias=alias, **config)
When I set the alias in the config, I get TypeError: unexpected keyword argument 'alias'. Below is my configuration:
{
    "executors": {
        ...
    },
    "jobstores": {
        "apscheduler.jobstores.redis": {
            "class": "apscheduler.jobstores.redis:RedisJobStore",
            ...
            "socket_timeout": 5,
            "alias": "Test"  # the alias set here
        },
        "apscheduler.jobstores.mongo": {
            "class": "apscheduler.jobstores.mongodb:MongoDBJobStore",
            ...
            "minPoolSize": 20
        }
    }
}

I fixed the problem: the jobstore and executor alias should be declared at the beginning, as the key of each configuration block. The corrected configuration:
{
    "executors": {
        ...
    },
    "jobstores": {
        "redis": {
            "class": "apscheduler.jobstores.redis:RedisJobStore",
            ...
            "socket_timeout": 5
        },
        "mongo": {
            "class": "apscheduler.jobstores.mongodb:MongoDBJobStore",
            ...
            "minPoolSize": 20
        }
    }
}
Configured like this, the scheduler takes the jobstore keys, redis and mongo, as the alias values.
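For completeness, a minimal sketch of applying such a dict, assuming APScheduler 3.x with the redis and pymongo backends installed; the option values are placeholders:
from apscheduler.schedulers.background import BackgroundScheduler

# Aliases are the keys of the "jobstores" dict; there is no "alias" field.
config = {
    "jobstores": {
        "redis": {
            "class": "apscheduler.jobstores.redis:RedisJobStore",
            "socket_timeout": 5,
        },
        "mongo": {
            "class": "apscheduler.jobstores.mongodb:MongoDBJobStore",
        },
    },
}

scheduler = BackgroundScheduler()
scheduler.configure(**config)  # registers each store under its key as the alias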

OPA authorization policies with scopes and roles

I'm using Open Policy Agent as an authorization component together with OIDC-enabled apps.
I have input from the apps in the format:
{
    "token": {
        "scopes": [
            "read:books",
            "write:books"
        ]
    },
    "principal": {
        "roles": [
            "user",
            "moderator"
        ]
    },
    "context": {
        "action": "read",
        "resource": "books"
    }
}
Then I have data with access mapping in the format:
{
    "user": [
        "read:books"
    ],
    "moderator": [
        "read:books",
        "write:books"
    ],
    "administrator": [
        "read:books",
        "write:books",
        "read:store",
        "write:store"
    ]
}
And the policy currently looks like this:
package whatever.authz

context_scope := concat(":", [input.context.action, input.context.resource])

default allow = false

allow {
    token_has_context_scope
    principal_has_resource_access
}

token_has_context_scope {
    context_scope == input.token.scopes[_]
}

principal_has_resource_access {
    principal_role := input.principal.roles[_]
    context_scope == data[principal_role][_]
}
This produces the following error:
2 errors occurred:
policy.rego:16: rego_recursion_error: rule principal_has_resource_access is recursive: principal_has_resource_access -> principal_has_resource_access
policy.rego:7: rego_recursion_error: rule allow is recursive: allow -> principal_has_resource_access -> allow
It is the recursive lookup in the principal_has_resource_access rule that causes the error.
I need to check whether one of the principal's roles is allowed to access the resource specified by the context. Since roles is an array, I need to take the union of all access scopes in the data and see if one of them matches the context scope. What am I doing wrong in the policy?
The snippet can be found in the Rego Playground: https://play.openpolicyagent.org/p/KhovLRgMup
OPA stores all data under the data path, including policies and rules. There's no way for the compiler to know that the input you're providing isn't referencing the policy itself (i.e. data["whatever"]), which would be recursive. The easiest way to work around this is to use a top-level attribute for your data that differs from your policy (i.e. the package name), like this:
{
    "attributes": {
        "user": [
            "read:books"
        ],
        "moderator": [
            "read:books",
            "write:books"
        ],
        "administrator": [
            "read:books",
            "write:books",
            "read:store",
            "write:store"
        ]
    }
}
And update your policy to reference this:
context_scope == data["attributes"][principal_role][_]
Since data.attributes != data.whatever.authz there is no risk of recursion, and the compiler won't complain. You might want a better name than "attributes", but I'll leave that to you :)
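To illustrate the input/data split, here is a minimal sketch that exercises the rule through OPA's REST Data API, assuming an OPA server on the default localhost:8181 with the updated policy and the renamed data loaded:
import json
import urllib.request

# The question's OIDC-style request; "input" stays separate from "data".
payload = {"input": {
    "token": {"scopes": ["read:books", "write:books"]},
    "principal": {"roles": ["user", "moderator"]},
    "context": {"action": "read", "resource": "books"},
}}

# The Data API path mirrors the package: whatever.authz -> /v1/data/whatever/authz
req = urllib.request.Request(
    "http://localhost:8181/v1/data/whatever/authz/allow",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # e.g. {"result": true}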

Airflow BigQueryInsertJobOperator configuration

I'm having some issues converting from the deprecated BigQueryOperator to BigQueryInsertJobOperator. I have the task below:
bq_extract = BigQueryInsertJobOperator(
    dag="big_query_task",
    task_id='bq_query',
    gcp_conn_id='google_cloud_default',
    params={'data': Utils().querycontext},
    configuration={
        "query": {
            "query": "{% include 'sql/bigquery.sql' %}",
            "useLegacySql": False,
            "writeDisposition": "WRITE_TRUNCATE",
            "destinationTable": {"datasetId": bq_dataset}
        }
    })
This line in my bigquery_extract.sql query is throwing the error:
{% for field in data.bq_fields %}
I want to use 'data' from params, which calls a method that reads a .json file:
import json
from os import path

from airflow.configuration import conf
from airflow.models import Variable

class Utils():
    bucket = Variable.get('s3_bucket')
    _qcontext = None

    @property
    def querycontext(self):
        if self._qcontext is None:
            self.load_querycontext()
        return self._qcontext

    def load_querycontext(self):
        with open(path.join(conf.get("core", "dags"), 'traffic/bq_query.json')) as f:
            self._qcontext = json.load(f)
The bq_query.json has this format, and I need to use the nested bq_fields list values:
{
    "bq_fields": [
        { "name": "CONCAT(ID, '-', CAST(ID AS STRING))", "alias": "new_id" },
        { "name": "TIMESTAMP(CAST(visitStartTime * 1000 AS INT64))", "alias": "new_timestamp" },
        { "name": "TO_JSON_STRING(hits.experiment)", "alias": "hit_experiment" }
    ]
}
This file has the list I want to use in the query line mentioned above, but it's throwing this error:
jinja2.exceptions.UndefinedError: 'data' is undefined
There are two issues with your code.
First, params is not a supported field in BigQueryInsertJobOperator. See this post where I describe how to pass params to a SQL file when using BigQueryInsertJobOperator: How do you pass variables with BigQueryInsertJobOperator in Airflow
Second, if you get an error that your file cannot be found, make sure you set the full path of your file. I had to do this when migrating from local testing to the cloud, even though the file is in the same directory. You can set the path in the DAG config as in the example below (replace the path with your own):
with DAG(
    ...
    template_searchpath='/opt/airflow/dags',
    ...
) as dag:
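Putting both points together, a minimal sketch of a working DAG; the dag_id, start_date, schedule, and searchpath are assumptions for illustration:
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="big_query_task",  # hypothetical id
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    template_searchpath="/opt/airflow/dags",  # lets Jinja find 'sql/bigquery.sql'
) as dag:
    bq_extract = BigQueryInsertJobOperator(
        task_id="bq_query",
        gcp_conn_id="google_cloud_default",
        configuration={
            "query": {
                "query": "{% include 'sql/bigquery.sql' %}",
                "useLegacySql": False,
            }
        },
    )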

Generate "Instances" definition programmatically to create EMR cluster in StepFunctions

I have a case where I want to dynamically create an EMR cluster based on a user-defined configuration and execute a sequence of steps on it using AWS Step Functions.
For this, I am planning to provide the instance configuration as input to the Step Functions workflow.
Based on the StepFunctions-EMR Integration Documentation, the definition is the same as that of the RunJobFlow API.
However, when I try to generate the definition by serializing an instance of JobFlowInstancesConfig to JSON and passing it to the state machine as input, it throws an error:
The field 'Instances.KeepJobFlowAliveWhenNoSteps' is required but was missing
Here is the JSON generated after serialization:
{
    "instanceFleets": [
        {
            "instanceFleetType": "MAIN",
            "targetOnDemandCapacity": 1,
            "instanceTypeConfigs": [
                {
                    "instanceType": "m5.xlarge"
                }
            ]
        },
        {
            "instanceFleetType": "CORE",
            "targetOnDemandCapacity": 1,
            "instanceTypeConfigs": [
                {
                    "instanceType": "c5.2xlarge"
                }
            ]
        }
    ],
    "keepJobFlowAliveWhenNoSteps": true
}
I am passing this as input and accessing it in my Step Functions definition in the task below (where I expect the above definition to replace $.jobFlowInstancesConfig):
...
"GetCluster": {
    "Type": "Task",
    "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
    "Parameters": {
        "Name.$": "$.clusterName",
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.30.0",
        "Applications": [
            {
                "Name": "Spark"
            }
        ],
        "ServiceRole": "EMR_DefaultRole",
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "LogUri": "s3://my-aws-logs/elasticmapreduce/",
        "Instances.$": "$.jobFlowInstancesConfig"
    }
}
...
My suspicion is that this fails because Step Functions expects the field names to start with an upper-case letter.
Question: How do I programmatically generate the appropriate definition without resorting to string manipulation to build the JSON? Is there a straightforward way to serialize the above definition into one that works with Step Functions?
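If the casing suspicion is right, one workaround is a small recursive key transform over the SDK's camelCase JSON before passing it as input; a minimal sketch (the helper name is ours, not an AWS API):
import json

def pascal_case_keys(obj):
    # Upper-case the first letter of every dict key, recursing into dicts and lists.
    if isinstance(obj, dict):
        return {key[:1].upper() + key[1:]: pascal_case_keys(value)
                for key, value in obj.items()}
    if isinstance(obj, list):
        return [pascal_case_keys(item) for item in obj]
    return obj

# Example with a fragment of the serialized JobFlowInstancesConfig:
config = {"instanceFleets": [{"instanceFleetType": "CORE"}],
          "keepJobFlowAliveWhenNoSteps": True}
print(json.dumps(pascal_case_keys(config)))
# {"InstanceFleets": [{"InstanceFleetType": "CORE"}], "KeepJobFlowAliveWhenNoSteps": true}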

How can I modify a Vue Loader setting in Laravel Mix?

I'm using Laravel Mix, by the way... and in trying to use Vue Apollo, it says we need to add this to the webpack config:
{
    test: /\.vue$/,
    use: [
        {
            loader: 'vue-loader',
            options: {
                transpileOptions: {
                    transforms: {
                        dangerousTaggedTemplateString: true
                    }
                }
            }
        }
    ]
},
but that gives me a "Failed to mount component: template or render function not defined." error that I haven't been able to find a solution for, except for a thread somewhere on Google saying it's because vue-loader is applied twice.
So, what I'm trying now, which may fix this error, is to apply that dangerousTaggedTemplateString setting to the existing webpack configuration for .vue files.
Does anyone know how to do that?
Try this (untested); leave the mix.js line you mentioned untouched. Then on a new line:
mix.options({
    vue: {
        transpileOptions: {
            transforms: {
                dangerousTaggedTemplateString: true
            }
        }
    }
});

stylelint on create-react-app @import-normalize throws error

I followed this doc to add a CSS reset to my app:
https://create-react-app.dev/docs/adding-css-reset/#indexcss
But the editor still shows an "Unknown at rule" warning, even with this stylelint config in my package.json:
"stylelint": {
    "extends": "stylelint-config-recommended",
    "rules": {
        "at-rule-no-unknown": null
    }
}
How do I fix this problem? It is annoying...
To fix this warning you just need to add this to .vscode/settings.json inside your project (you can create the file if it doesn't already exist):
{
    "css.lint.unknownAtRules": "ignore"
}
Source: https://create-react-app.dev/docs/adding-css-reset/#indexcss
For VS Code:
To make VS Code recognise this custom CSS directive, you can provide custom data for VS Code's CSS Language Service, as described here: https://github.com/Microsoft/vscode-css-languageservice/blob/master/docs/customData.md
Create a CSS custom data set file with the following info. Place it at .vscode/custom.css-data.json relative to the project root.
{
    "version": 1.1,
    "properties": [],
    "atDirectives": [
        {
            "name": "@import-normalize",
            "description": "bring in normalize.css styles"
        }
    ],
    "pseudoClasses": [],
    "pseudoElements": []
}
Now, if you don't have one already, create a .vscode/settings.json file relative to the project root. Add a field with key "css.customData" and, as its value, the path to the custom data set. For example,
{
    "css.customData": ["./.vscode/custom.css-data.json"]
}
Now you will no longer get the "Unknown at rule" warning. When you hover over @import-normalize, you will see the description you set for it in custom.css-data.json.
@import-normalize is a non-standard at-rule. From the rule's documentation:
This rule considers at-rules defined in the CSS Specifications, up to and including Editor's Drafts, to be known.
However, the rule has an ignoreAtRules secondary option for exactly this use case, where you can list the non-standard at-rules you are using.
For example, in your package.json:
{
    "stylelint": {
        "extends": "stylelint-config-recommended",
        "rules": {
            "at-rule-no-unknown": [true, {
                "ignoreAtRules": ["import-normalize"]
            }]
        }
    }
}
Or within your .stylelintrc file:
{
    "extends": "stylelint-config-recommended",
    "rules": {
        "at-rule-no-unknown": [true, {
            "ignoreAtRules": ["import-normalize"]
        }]
    }
}