I have the below code in my GitHub repository:
INSERT INTO lookup_tables.view_da_status_lookup(status_id, object_type, status_name, status_type, ui_dropdown_flag, last_updt_ts)
VALUES
(1,'View','Validation success','Maker Validation',true,CURRENT_TIMESTAMP),
(2,'View','Validation Failed','Maker Validation',true,CURRENT_TIMESTAMP),
(3,'View','Approved','Checker Validation',true,CURRENT_TIMESTAMP);
The literal 'Maker Validation' is repeated in the DML, and SonarQube expects us to use a variable instead, which would mean wrapping the command in a PL/SQL construct.
This is the error I get:
Define a constant instead of duplicating this literal 1 times.
However, we do not want to implement PL/SQL for this. Is there a way to tell SonarQube to ignore the error?
At the following link, under "Exclude specific rules from specific files" (Analysis Scope > Issue Exclusions > Ignore Issues on Multiple Criteria), you can find:
You can prevent specific rules from being applied to specific files by combining one or more pairs of strings consisting of a rule key pattern and a file path pattern.
The key for this parameter is sonar.issue.ignore.multicriteria.
However, because it is a multi-value property, we recommend that it only be set through the UI.
For example, if you use Maven, the setting should look like this:
<properties>
  <sonar.issue.ignore.multicriteria>e1,e2</sonar.issue.ignore.multicriteria>
  <sonar.issue.ignore.multicriteria.e1.ruleKey>squid:S00100</sonar.issue.ignore.multicriteria.e1.ruleKey>
  <sonar.issue.ignore.multicriteria.e1.resourceKey>**/*Steps.java</sonar.issue.ignore.multicriteria.e1.resourceKey>
  <sonar.issue.ignore.multicriteria.e2.ruleKey>squid:S1118</sonar.issue.ignore.multicriteria.e2.ruleKey>
  <sonar.issue.ignore.multicriteria.e2.resourceKey>**/PropertyPlaceholderConfig.java</sonar.issue.ignore.multicriteria.e2.resourceKey>
</properties>
where e1 and e2 are local names for the exclusions, e1.ruleKey is the SonarQube rule ID, and e1.resourceKey is the path pattern of the file(s) to exclude.
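If you are not using Maven, the equivalent keys can also go into sonar-project.properties (or be set through the UI, as the docs recommend). A minimal sketch for a plain SQL file; the rule key plsql:S1192 (the "string literals should not be duplicated" rule in the PL/SQL analyzer) and the path pattern are assumptions, so take the exact values from the issue shown in your SonarQube project:
sonar.issue.ignore.multicriteria=e1
sonar.issue.ignore.multicriteria.e1.ruleKey=plsql:S1192
sonar.issue.ignore.multicriteria.e1.resourceKey=**/*.sql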
Hello, could anyone help me with how to handle this scenario? For example, I want to validate these 3 fields on my table, "symbol_type", "symbol_subtype" and "taker_symbol", and return a unique combination/result.
I tried to use this command, however it's not working properly in my test. I'm not sure if this is the correct syntax for my scenario. Your response is highly appreciated.
Expected result: these 3 fields should return a unique combination using dbt commands.
I'd recommend either:
use the generate_surrogate_key (docs) macro in the model, or
use the dbt_utils.unique_combination_of_columns (docs) generic test.
For the first case, you would need to define the following in the model:
select
    {{- dbt_utils.generate_surrogate_key(['symbol_type', 'symbol_subtype', 'taker_symbol']) }} as hashed_key_,
    (...)
from your_model
This would create a hashed value of the three columns. You could then use a unique test in your YAML file.
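A minimal sketch of that YAML, reusing the model and column names from the example above:
# your model's YAML file
version: 2

models:
  - name: your_model_name
    columns:
      - name: hashed_key_
        tests:
          - unique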
For the second case, you would only need to add the generic test in your YAML file as follows:
# your model's YAML file
version: 2

models:
  - name: your_model_name
    description: ""
    tests:
      - dbt_utils.unique_combination_of_columns:
          combination_of_columns:
            - symbol_type
            - symbol_subtype
            - taker_symbol
Both these approaches will let you check whether the combination of the three columns is unique over the whole model's output.
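With either approach, the check then runs as part of your normal test run, e.g. (assuming a dbt version recent enough to support --select):
dbt test --select your_model_name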
I have a question regarding the templating option for XML in OpenRefine. Is it possible to export data from two columns in a nested XML structure if both columns contain multiple values that need to be split first?
Here's an example to illustrate better what I mean. My columns look like this:
| Column1 | Column2 |
| --- | --- |
| https://d-nb.info/gnd/119119110;https://d-nb.info/gnd/118529889 | Grützner, Eduard von;Elisabeth II., Großbritannien, Königin |
| https://d-nb.info/gnd/1037554086;https://d-nb.info/gnd/1245873660 | Müller, Jakob;Meier, Anina |
Each semicolon-separated value in Column1 has a corresponding value in Column2, in the same order, and my desired output would look like this:
<rootElement>
  <recordRootElement>
    ...
    <edm:Agent rdf:about="https://d-nb.info/gnd/119119110">
      <skos:prefLabel xml:lang="zxx">Grützner, Eduard von</skos:prefLabel>
    </edm:Agent>
    <edm:Agent rdf:about="https://d-nb.info/gnd/118529889">
      <skos:prefLabel xml:lang="zxx">Elisabeth II., Großbritannien, Königin</skos:prefLabel>
    </edm:Agent>
    ...
  </recordRootElement>
  <recordRootElement>
    ...
    <edm:Agent rdf:about="https://d-nb.info/gnd/1037554086">
      <skos:prefLabel xml:lang="zxx">Müller, Jakob</skos:prefLabel>
    </edm:Agent>
    <edm:Agent rdf:about="https://d-nb.info/gnd/1245873660">
      <skos:prefLabel xml:lang="zxx">Meier, Anina</skos:prefLabel>
    </edm:Agent>
    ...
  </recordRootElement>
</rootElement>
(note: in my initial posting, the position of the root element was not indicated and it looked like this:
<edm:Agent rdf:about="https://d-nb.info/gnd/119119110">
<skos:prefLabel xml:lang="zxx">Grützner, Eduard von</skos:prefLabel>
</edm:Agent>
<edm:Agent rdf:about="https://d-nb.info/gnd/118529889">
<skos:prefLabel xml:lang="zxx">Elisabeth II., Großbritannien, Königin</skos:prefLabel>
</edm:Agent>
)
I managed to split the values separated by ";" for both columns like this
{{forEach(cells["Column1"].value.split(";"),v,"<edm:Agent rdf:about=\""+v+"\">"+"\n"+"</edm:Agent>")}}
{{forEach(cells["Column2"].value.split(";"),v,"<skos:prefLabel xml:lang=\"zxx\">"+v+"</skos:prefLabel>")}}
but I can't find out how to nest the split skos:prefLabel elements inside the edm:Agent element. Is that even possible? If not, I would work with separate columns or another workaround, but I wanted to check first whether there's a more direct way.
Thank you!
Kristina
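(A quick sketch of one possible GREL variant, in case it helps: it assumes GREL's forEachIndex control and that both columns always split into the same number of values, and it pairs each URI with the name at the same position:
{{forEachIndex(cells["Column1"].value.split(";"), i, v, "<edm:Agent rdf:about=\"" + v + "\">\n  <skos:prefLabel xml:lang=\"zxx\">" + cells["Column2"].value.split(";")[i] + "</skos:prefLabel>\n</edm:Agent>").join("\n")}}
)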
I am going to expand on the answer from RolfBly, using the Templating Exporter of OpenRefine.
I make the following assumptions:
There is some other column left of Column1 acting as a record-identifying column (see first screenshot).
The columns actually have proper names.
The columns URI and Name are the only columns with multiple values; otherwise we might produce empty XML elements with the following recipe.
We will use the information about records available via GREL to determine whether to write a <recordRootElement> or not.
Recipe:
Split first Name and then URI on the separator ";" via "Edit cells" => "Split multi-valued cells".
Go to "Export" => "Templating..."
In the prefix field use the value
<?xml version="1.0" encoding="utf-8"?>
<rootElement>
Please note that I skipped the namespace imports for edm, skos, rdf and xml.
In the row template field use the value:
{{if(row.index - row.record.fromRowIndex == 0, '<recordRootElement>', '')}}
<edm:Agent rdf:about="{{escape(cells['URI'].value, 'xml')}}">
<skos:prefLabel xml:lang="zxx">{{escape(cells['Name'].value, 'xml')}}</skos:prefLabel>
</edm:Agent>
{{if(row.index - row.record.fromRowIndex == row.record.rowCount - 1, '</recordRootElement>', '')}}
The row separator field should just contain a linebreak.
In the suffix field use the value:
</rootElement>
Disclaimer: If you're keen on using only OpenRefine, this won't be the answer you were hoping for. There may be ways in OR that I don't know of. That said, here's how I would do it.
Edit: The trick is to keep the URL and the literal side by side on one line. b2m's answer does just that: split going from right to left, not from left to right. You can then skip steps 2 and 3 to get the result in the image.
Split each column into 2 columns by the separator ";". You'll get 4 columns; 1 and 3 belong together, and 2 and 4 belong together. I'm assuming this will be the case consistently in your data.
Export 1 and 3 to a file, and export 2 and 4 to another file, in any convenient format, using the custom tabular exporter.
Concatenate those two files into one single file using an editor (I use Notepad++), or any other method you may prefer. Several ways to Rome here. The result in OR would be something like this.
You then have all sorts of options to put text strings in front, between and after your two columns.
In OR, you could use transform on column URL to build your XML using the below code
(note the \n for newline, that's probably just a line feed, you may want to use \r\n for carriage return + line feed if you're using Windows).
'<edm:Agent rdf:about="' + value + '">\n<skos:prefLabel xml:lang="zxx">' + cells.Name.value + '</skos:prefLabel>\n</edm:Agent>'
to get your XML in one column, like so
which you can then export using the custom tabular exporter again. Or instead you could use Add column based on this column in a similar manner, if you want to retain your URL column.
You could even do this in the editor without re-importing the file back into OR, but that's beyond the scope of this answer.
I have got a requirement saying that blob storage has multiple files with the names file_1.csv, file_2.csv, file_3.csv, file_4.csv, file_5.csv, file_6.csv, file_7.csv. From these I have to read only the files numbered 5 to 7.
How can we achieve this in an ADF/Synapse pipeline?
I have reproduced this in my lab; please see the repro steps below.
ADF:
Using the Get Metadata activity, get a list of all files.
(Parameterize the source file name in the source dataset to pass ‘*’ in the dataset parameters to get all files.)
Get Metadata output:
Pass the Get Metadata output child items to a ForEach activity:
@activity('Get Metadata1').output.childItems
Add an If Condition activity inside the ForEach and use the expression below as the true case, so that only the required files are copied to the sink (for names like file_5.csv, the digit is the character at zero-based index 5):
@and(greater(int(substring(item().name,5,1)),4),lessOrEquals(int(substring(item().name,5,1)),7))
When the If Condition is True, add copy data activity to copy the current item (file) to sink.
Source:
Sink:
Output:
I took a slightly different approach using a Filter activity and the endsWith function:
The filter expression is:
@or(or(endsWith(item().name, '_5.csv'),endsWith(item().name, '_6.csv')),endsWith(item().name, '_7.csv'))
Slightly different approaches, similar results, it depends what you need.
You can always do what @NiharikaMoola-MT suggested. But since you already know the range of the files (5-7), I suggest:
Declare two parameters for the lower and upper limits of the range.
Create a ForEach loop and use the two parameters to build a range [lowerlimit, upperlimit] as its items (see the expression sketch after these steps).
Create a parameterized dataset for the source.
Use the file number from the ForEach loop to create a dynamic file name expression like
@concat('file_',item(),'.csv')
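A sketch of the ForEach items expression for the range step above (assuming pipeline parameters named lowerlimit and upperlimit; note that ADF's range() takes a start index and a count rather than an end index):
@range(int(pipeline().parameters.lowerlimit), add(sub(int(pipeline().parameters.upperlimit), int(pipeline().parameters.lowerlimit)), 1))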
I am building a report with Microsoft SSRS (2012) with a multi-value parameter @parCode for the user to filter for certain codes. This works perfectly fine. Generally, my query looks like this:
SELECT ...
FROM ...
WHERE
TblCode.Code IN (@parCode)
ORDER BY...
The codes are of the following type (just an excerpt):
C73.0
C73.1
...
C79.0
C79.1
C79.2
Now, in addition to filtering for multiple of these codes, I would like to also be able to filter for sub-strings of the codes. Meaning, when the user enters (Example 1)
C79
for @parCode, the output should be
C79.0
C79.1
C79.2
So eventually the user should be able to enter (Example 2)
C73.0
C79
for @parCode, and the output would be
C73.0
C79.0
C79.1
C79.2
I managed to implement both functionalities separately, so either filtering for multiple "complete" codes or filtering for a sub-string of a code, but not both simultaneously.
I tried to do something like
...
WHERE
TblCode.Code IN (@parCode + '%')
ORDER BY...
but this screws up Example 2. On the other hand, if I try to work with LIKE or = instead of the IN statement, then I won't be able to make the parameter multi-valued.
Does anyone have an idea how to realize such functionality, or whether the IN statement paired with multi-valued parameters simply doesn't allow for it?
Thank you very much!
Assuming you are using SQL Server:
WHERE (
    TblCode.Code IN (@parCode)
    OR
    CASE
        WHEN CHARINDEX('.', TblCode.Code) > 0 THEN LEFT(TblCode.Code, CHARINDEX('.', TblCode.Code) - 1)
        ELSE TblCode.Code
    END IN (@parCode)
)
The first clause makes an exact match, so for your example it matches C73.0.
The second clause matches the characters before the dot, so it would get the values C79.0, C79.1, C79.2, etc.
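To see the second clause in isolation, you can evaluate the prefix extraction by hand in plain T-SQL:
SELECT LEFT('C79.1', CHARINDEX('.', 'C79.1') - 1);  -- returns 'C79', which matches the entered value C79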
Warning: Filtering using expressions would invalidate the use of an index on TblCode.Code
I am trying to use AWS Config Advanced Query to generate a report against a specific rule I have created.
SELECT
configuration.targetResourceId,
configuration.targetResourceType,
configuration.complianceType,
configuration.configRuleList
WHERE
configuration.configRuleList.configRuleName = 'aws_config-requiredtags-rule'
AND configuration.complianceType = 'NON_COMPLIANT'
Results look similar to this:
[
  {
    "configRuleName": "aws_config-requiredtags-rule",
    "configRuleArn": "arn:aws:config:us-east-2:123456789:config-rule/config-rule-dl6gsy",
    "configRuleId": "config-rule-dl6gsy",
    "complianceType": "COMPLIANT"
  },
  {
    "configRuleName": "aws_config-instanceinvpc-rule",
    "configRuleArn": "arn:aws:config:us-east-2:123456789:config-rule/config-rule-dc4f1x",
    "configRuleId": "config-rule-dc4f1x",
    "complianceType": "NON_COMPLIANT"
  }
]
While this query produces results, it evaluates my config rule filter and my compliance-type filter separately, so I am not getting only the results where 'aws_config-requiredtags-rule' itself is NON_COMPLIANT.
I am a pretty novice SQL user, but I hope there is a way for me to specify that I only want to see non-compliant results for a specific rule.
thanks,
This is a limitation of the AWS Config Service - and a pretty big one IMO. When you filter on properties within arrays, those filters are treated like OR operations instead of AND. There doesn't seem to be a good way of performing meaningful queries for individual rules.
From the docs:
When querying against multiple properties within an array of objects, matches are computed against all the array elements
...
The first condition configuration.configRuleList.complianceType = 'non_compliant' is applied to ALL elements in R.configRuleList, because R has a rule (rule B) with complianceType = ‘non_compliant’, the condition is evaluated as true. The second condition configuration.configRuleList.configRuleName is applied to ALL elements in R.configRuleList, because R has a rule (rule A) with configRuleName = ‘A’, the condition is evaluated as true. As both conditions are true, R will be returned.
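One possible workaround, sketched here under the assumption that client-side post-processing is acceptable: query on the rule name alone and re-apply the compliance filter afterwards, for example with the AWS CLI and jq (this assumes the Results entries returned by select-resource-config are JSON strings with the selected fields nested under "configuration", hence the fromjson):
aws configservice select-resource-config \
  --expression "SELECT configuration.targetResourceId, configuration.configRuleList WHERE configuration.configRuleList.configRuleName = 'aws_config-requiredtags-rule'" \
  | jq '.Results[] | fromjson | select(any(.configuration.configRuleList[]; .configRuleName == "aws_config-requiredtags-rule" and .complianceType == "NON_COMPLIANT"))'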