How can I assign the output of a BigQuery cell magic operation in Jupyter if it has parameters?
%%bq execute --query sql_my_query --to-dataframe
parameters:
- name: min_val
  type: STRING
  value: $min_val
- name: max_val
  type: STRING
  value: $max_val
I've tried placing a variable in front of the BQ magic (e.g. myvar = %%bq ...), using myvar << %%bq, and adding parentheses or braces around the entire expression, but nothing seems to work. Does anyone have any ideas?
There are no examples in the documentation either, except ones using the Python API, which seems a bit messy for something that should be fairly standard.
You can define the query in one cell using the magic command, then execute it and supply the parameter values in another cell without the magic:
# in the first cell
%%bq query --name day_extract_query
SELECT EXTRACT(DAY FROM @input_date) AS day
After that, execute using pure Python with no magic:
# then, in a second cell
query_params1 = [{ "name": "input_date",
"parameterType": { "type": "DATE" },
"parameterValue": { "value": "2019-01-03" } }]
day1 = day_extract_query.execute(query_params=query_params1).result()
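To get the DataFrame the question asked about, the result object should also convert directly. A minimal sketch, assuming Datalab's query result tables expose to_dataframe() (check your version):
# in a third cell, if you want a pandas DataFrame
df = day_extract_query.execute(query_params=query_params1).result().to_dataframe()
df.head()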
Related
I wanted to output JSON data not as an array object, so I made the changes mentioned in the Pentaho documentation, but the output is always an array, even for a single set of values. I am using PDI 9.1, and I tested using the ktr from the link below:
https://wiki.pentaho.com/download/attachments/25043814/json_output.ktr?version=1&modificationDate=1389259055000&api=v2
The statement below is from https://wiki.pentaho.com/display/EAI/JSON+output:
Another special case is when 'Nr. rows in a block' = 1. If used with an empty JSON block name, the output will look like:
{
  "name" : "item",
  "value" : 25
}
My output comes out like this:
{ "": [ {"name":"item","value":25} ] }
I have resolved it myself. I added another JSON Input step and defined the path as
$.wellDesign[0]
to get the array as a string object.
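For anyone following along, here is a minimal Python sketch of what that path selects, assuming a document shaped like the poster's (the field values are illustrative):
import json

# Hypothetical document with the same shape as the poster's data.
doc = {"wellDesign": [{"name": "item", "value": 25}]}

# $.wellDesign[0] selects the first element of the wellDesign array,
# so serialising it yields a bare object instead of a wrapped array.
first = doc["wellDesign"][0]
print(json.dumps(first))  # {"name": "item", "value": 25}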
I recently refactored my declarative pipeline into a scripted form. Even though everything seems to work fine, I have a problem coming from the initialization of a multiple-valued parameter.
In my declarative pipeline, I was using the following definition of a multiple-valued parameter (which was working as it should):
parameters {
    choice(choices: ['fix', 'major', 'minor', 'none'], description: "Increase version's number: MAJOR.MINOR.FIX", name: "VERSIONING")
}
I refactored it into this form for the scripted pipeline:
properties([
    parameters([
        choice(choices: ['fix\nmajor\nminor\nnone'], description: "Increase version's number: MAJOR.MINOR.FIX", name: "VERSIONING"),
    ]),
])
The problem is that I realised something wasn't working as it should, so I printed the variable's value with a sh """echo "Versioning parameter check:" ${params.VERSIONING}""" step and got this in the Jenkins console:
Versioning parameter check: false
That is both a value not in the list and of a different type (boolean instead of string).
Is there a way to implement multiple value parameter initialization in Jenkins scripted pipelines?
Why doesn't this directive work out of the box in the scripted pipeline, when it does in the declarative type?
Is this a bug or am I doing something wrong?
Your definition is absolutely fine. You just need to pass the choices as list items and not as \n-separated values.
properties([
    parameters([
        choice(choices: ['fix', 'major', 'minor', 'none'], description: "Increase version's number: MAJOR.MINOR.FIX", name: "VERSIONING"),
    ])
])
Try the other option for defining a choice parameter:
properties([
    parameters([
        [$class: 'ChoiceParameterDefinition',
         choices: 'fix\nmajor\nminor\nnone\n',
         name: 'VERSIONING',
         description: "Increase version's number: MAJOR.MINOR.FIX"
        ],
    ]),
])
We use the following for a choice parameter, so it looks like your own definition but without the brackets:
properties([
    parameters([
        choice(choices: 'fix\nmajor\nminor\nnone', description: "Increase version's number: MAJOR.MINOR.FIX", name: "VERSIONING"),
    ]),
])
If you want the default value to be empty, just add an empty first choice like this:
choice(choices: '\nfix\nmajor\nminor\nnone'
I am trying to insert a previously defined variable into a GraphQL query, but I'm not able to find any example of how to do that, other than creating variables outside of the query text and then making the request with variables.
For example, there is one problem for me in this part of the query:
queries: [{type: TERM, match: EQUAL, field: "fieldOne", value: "#(id)"},
{type: TERM, match: EQUAL, field: "fieldTwo", value: null}]
I want to insert the value #(id) only for the first object in the GraphQL query. Can anyone please provide an example or any suggestions on how to do that?
Alright, I was thinking it would be possible to directly replace text inside the query, and I found a solution in the Karate documentation:
queries: [{type: TERM, match: EQUAL, field: "fieldOne", value: "<id>"},
{type: TERM, match: EQUAL, field: "fieldTwo", value: null}]
Enclose id in the query text in angle brackets (<>), then replace it with the value stored in the variable id by calling:
* replace query.id = id
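The effect is plain text substitution on the query string; a rough Python equivalent of what replace does here (the values are illustrative):
# The <id> token in the query text is swapped for the variable's value.
query = 'queries: [{type: TERM, match: EQUAL, field: "fieldOne", value: "<id>"}]'
id_value = "some-id"  # hypothetical value held in the Karate variable `id`
print(query.replace("<id>", id_value))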
How can I conditionally test the output from an Execute SQL query action to make sure it returns some rows of data?
In my example below, if the query returns no rows I don't want it to send an email; I want to do something else. What is the test?
Thanks for your time
I tested this; if the query result has no rows, the response body will look like this:
{
  "OutputParameters": {},
  "ResultSets": {}
}
So you could add a Condition that checks whether @{body('Execute_a_SQL_query')['OutputParameters']} is equal to {}. If true, do the things you want. You could set this in Code view mode.
Below is the test result; hope this is what you want.
This will work in Execute a SQL query (V2).
What it does is take the ResultSets value and convert it to a string; this prevents a null error in the length function. Since an empty result set is {}, the length is 2, so if the length is 2 then the result is empty.
"expression": {
"and": [
{
"equals": [
"#length(string(body('Execute_a_SQL_query_(V2)')?['ResultSets']))",
2
]
}
]
}
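To see why 2 is the magic number: an empty object serialises to the two-character string {}. A quick Python analogue of the same check:
empty_result = {}               # shape of ResultSets when no rows come back
serialised = str(empty_result)  # "{}"
print(len(serialised) == 2)     # True -> the result set is empty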
I am using something similar to this in an Until condition, which runs until the length is zero. I guess you could do the same?
@equals(length(body('Execute_a_SQL_query')?['value']), 0)
I have a SQL column filled with JSON documents, one per row:
[{
  "ID": "TOT",
  "type": "ABS",
  "value": "32.0"
},
{
  "ID": "T1",
  "type": "ABS",
  "value": "9.0"
},
{
  "ID": "T2",
  "type": "ABS",
  "value": "8.0"
},
{
  "ID": "T3",
  "type": "ABS",
  "value": "15.0"
}]
How is it possible to transform it into tabular form? I tried Redshift's json_extract_path_text and JSON_EXTRACT_ARRAY_ELEMENT_TEXT functions, and I also tried json_each and json_each_text (on Postgres), but didn't get what I expected... any suggestions?
The desired results should appear like this:
T1   T2   T3    TOT
9.0  8.0  15.0  32.0
I assume you printed 4 rows. In PostgreSQL,
SELECT this_column->'ID'
FROM that_table;
will return a column of JSON values. Use ->> if you want a text column. More info here: https://www.postgresql.org/docs/current/static/functions-json.html
In case you are using some old PostgreSQL (before 9.3), this gets harder : )
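If it is easier to reshape client-side, here is a minimal Python sketch of the same pivot, assuming each row comes back as the JSON array string shown in the question:
import json

# One row's column value, as shown in the question.
raw = ('[{"ID":"TOT","type":"ABS","value":"32.0"},'
       '{"ID":"T1","type":"ABS","value":"9.0"},'
       '{"ID":"T2","type":"ABS","value":"8.0"},'
       '{"ID":"T3","type":"ABS","value":"15.0"}]')

# Pivot the array of objects into a single ID -> value mapping.
pivot = {item["ID"]: item["value"] for item in json.loads(raw)}
print(pivot)  # {'TOT': '32.0', 'T1': '9.0', 'T2': '8.0', 'T3': '15.0'}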
Your best option is to use COPY from JSON Format. This will load the JSON directly into a normal table format. You then query it as normal data.
However, I suspect that you will need to slightly modify the format of the file by removing the outer [...] square brackets and also the commas between records, e.g.:
{
  "ID": "TOT",
  "type": "ABS",
  "value": "32.0"
}
{
  "ID": "T1",
  "type": "ABS",
  "value": "9.0"
}
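A minimal Python sketch of that conversion, assuming the file fits in memory (the file names are illustrative):
import json

# Read the original file: one JSON array of records.
with open("input.json") as f:
    records = json.load(f)  # parsing discards the outer [...] brackets

# Write one object per line with no commas between records,
# the shape Redshift's COPY ... FORMAT AS JSON expects.
with open("output.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")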
If, however, your data is already loaded and you cannot re-load the data, you could either extract the data into a new table, or add additional columns to the existing table and use an UPDATE command to extract each field into a new column.
Or, in the very worst case, you can use one of the JSON Functions to access the information in a JSON field, but this is very inefficient for large requests (e.g. in a WHERE clause).