I'm using a third-party library, s7 Scheme, in my codebase; it consists of one header and one source file. I use this external code in two of my libraries. In the first library I need to compile the source of s7 and set some #defines, but in the second I only need to include the header, with a different set of defines.
Is there a way to create a single target that works for both scenarios, or do I need to create two different targets? The first target, which includes the source file, is defined like this:
add_library(third_party_scheme INTERFACE)
add_library(third_party::scheme ALIAS third_party_scheme)
target_sources(third_party_scheme
    INTERFACE
        "scheme/s7.c"
)
target_compile_definitions(third_party_scheme
    INTERFACE
        S7_EXPORT_LIB
        S7_OUTPUT_FUNCTION_FULL_STRING
)
So just create two targets.
add_library(third_party_scheme "scheme/s7.c")
target_compile_definitions(third_party_scheme PUBLIC
    S7_EXPORT_LIB
    S7_OUTPUT_FUNCTION_FULL_STRING
)
add_library(third_party_scheme2 "scheme/s7.c")
# no target_compile_definitions, or different ones.
do I need to create 2 different targets for this
Yes. You could wrap the creation in a function, like:
function(add_third_party_scheme_target name)
    add_library(${name} scheme/s7.c)
    target_compile_definitions(${name} PUBLIC
        ${ARGN}
    )
endfunction()
add_third_party_scheme_target(third_party_scheme
    S7_EXPORT_LIB
    S7_OUTPUT_FUNCTION_FULL_STRING
)
add_third_party_scheme_target(third_party_scheme2)
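Either way, each of your two libraries then links against whichever variant it needs; a minimal sketch (the consumer target and source names here are made up for illustration):

```cmake
# Hypothetical consumer that compiles s7.c with the first set of defines.
add_library(my_interpreter interp.cpp)
target_link_libraries(my_interpreter PRIVATE third_party_scheme)

# Hypothetical consumer that only needs the header, via the defines-free target.
add_library(my_embedder embed.cpp)
target_link_libraries(my_embedder PRIVATE third_party_scheme2)
```

Because the definitions are PUBLIC on the scheme targets, each consumer picks up exactly the defines of the variant it links.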
I have the following code in my GitHub repository:
INSERT INTO lookup_tables.view_da_status_lookup(status_id, object_type, status_name, status_type, ui_dropdown_flag, last_updt_ts)
VALUES
(1,'View','Validation success','Maker Validation',true,CURRENT_TIMESTAMP),
(2,'View','Validation Failed','Maker Validation',true,CURRENT_TIMESTAMP),
(3,'View','Approved','Checker Validation',true,CURRENT_TIMESTAMP);
The string 'Maker Validation' is repeated in the DML, and the analyzer expects us to use a variable, which would mean wrapping the command in a PL/SQL construct.
This is the error I get:
Define a constant instead of duplicating this literal 1 times.
However, we do not want to introduce PL/SQL. Is there a way to tell it to ignore the error?
In the SonarQube documentation, you can find:
Exclude specific rules from specific files
Analysis Scope > D. Issue > Exclusions > Ignore Issues on Multiple Criteria
You can prevent specific rules from being applied to specific files by
combining one or more pairs of strings consisting of a rule key
pattern and a file path pattern.
The key for this parameter is sonar.issue.ignore.multicriteria.
However, because it is a multi-value property, we recommend that it
only be set through the UI.
For example, if you use Maven, the setting should look like this:
<properties>
<sonar.issue.ignore.multicriteria>e1,e2</sonar.issue.ignore.multicriteria>
<sonar.issue.ignore.multicriteria.e1.ruleKey>squid:S00100</sonar.issue.ignore.multicriteria.e1.ruleKey>
<sonar.issue.ignore.multicriteria.e1.resourceKey>**/*Steps.java</sonar.issue.ignore.multicriteria.e1.resourceKey>
<sonar.issue.ignore.multicriteria.e2.ruleKey>squid:S1118</sonar.issue.ignore.multicriteria.e2.ruleKey>
<sonar.issue.ignore.multicriteria.e2.resourceKey>**/PropertyPlaceholderConfig.java</sonar.issue.ignore.multicriteria.e2.resourceKey>
</properties>
where e1 and e2 are local names for the exclusions, e1.ruleKey is the SonarQube rule ID, and e1.resourceKey is the path pattern for the files to exclude.
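Since your project is SQL rather than Java/Maven, the same keys can also go into sonar-project.properties. A sketch, with one assumption to verify: the rule key below (plsql:S1192, "String literals should not be duplicated") is my guess for the "Define a constant instead of duplicating this literal" message from the PL/SQL analyzer; check the exact key in your SonarQube rules UI.

```properties
# Ignore the duplicated-literal rule on all SQL files.
# Rule key is an assumption -- verify it in the SonarQube rules page.
sonar.issue.ignore.multicriteria=e1
sonar.issue.ignore.multicriteria.e1.ruleKey=plsql:S1192
sonar.issue.ignore.multicriteria.e1.resourceKey=**/*.sql
```

As the quoted documentation notes, multi-value properties like this are usually easier to manage through the UI, but the properties-file form works for keeping the exclusion in version control.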
I have a problem with a Terraform configuration which I really don't know how to resolve. I have written a module for policy assignments; the module takes as a parameter an object with 5 attributes. The question is whether it is possible to split the tfvars file into a folder structure, e.g.:
a main folder subscriptions -> folder_subscription_name -> some number of tfvars files, one with the configuration for each policy assignment.
Each file looks like this:
testmap = {
  var1 = "test1"
  var2 = "test2"
  var3 = "test3"
  var4 = "test4"
  var5 = "test5"
}
In the module I would like to iterate over all of the maps and combine them into a list of maps. Is this a good approach? How can I achieve that, or should I use something else, like Terragrunt?
Please give me some tips on the best way to achieve this. Basically, the goal is to avoid one insanely big tfvars file with a list of 100 maps, and instead split it into 100 configuration files, one per assignment.
Based on the comment on the original question:
The question is pretty simple: how to keep the input variables for each
resource in a separate file instead of keeping all of them in one very
big file
I would say you are aiming at different .tfvars files. For example, you could have a dev.tfvars and a prod.tfvars file.
To plan your deployment, you can pass those files with:
terraform plan -var-file=dev.tfvars -out=planoutput
To apply those changes:
terraform apply planoutput
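If you still want one file per assignment, note that Terraform cannot load .tfvars files from inside the configuration itself, so a different technique is needed: store each assignment as a JSON file and merge them with fileset() and jsondecode(). A sketch, with made-up folder and module names:

```hcl
locals {
  # Collect every per-assignment JSON file under the subscription folder
  # (the folder layout here is illustrative).
  assignment_files = fileset("${path.module}/subscriptions/my_subscription", "*.json")

  # Decode each file into one object, keyed by file name without the extension.
  assignments = {
    for f in local.assignment_files :
    trimsuffix(f, ".json") => jsondecode(file("${path.module}/subscriptions/my_subscription/${f}"))
  }
}

module "policy_assignment" {
  source   = "./modules/policy_assignment"
  for_each = local.assignments

  # each.value is one decoded object with the five attributes (var1..var5).
  settings = each.value
}
```

This keeps one small file per assignment in version control while the for_each produces one module instance per file, which matches the "100 files instead of one big tfvars" goal.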
I want to know where SimObject names like mem_ctrls, membus, and replacement_policy are set in gem5. After looking at the code, I understood that these names are used in stats.txt.
I have looked into the SimObject code files (py, cc, hh files). I printed all SimObject names by stepping through root.descendants() in Simulation.py and then searched for some of the names, like mem_ctrls, using VS Code, but could not find the place where these names are set.
for obj in root.descendants():
    print("object name: %s\n" % obj.get_name())
These names are the Python variable names from the configuration/run script.
For instance, from the Learning gem5 simple.py script...
from m5.objects import *
# create the system we are going to simulate
system = System()
# Set the clock frequency of the system (and all of its children)
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
# Set up the system
system.mem_mode = 'timing' # Use timing accesses
system.mem_ranges = [AddrRange('512MB')] # Create an address range
The names here will be system, clk_domain, and voltage_domain; mem_ranges holds AddrRange parameters, which are not SimObjects.
Note that only the SimObjects will have a name. The other parameters (e.g., integers, etc.) will not have a name.
You can see where this is set here: https://gem5.googlesource.com/public/gem5/+/master/src/python/m5/SimObject.py#1352
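The mechanism can be illustrated with a small, stand-alone sketch (this is not gem5 code, just an illustration of the idea): the parent's __setattr__ hook records the attribute name on the child, which is why `system.mem_ctrl = ...` ends up producing an object named mem_ctrl.

```python
class SimObjectSketch:
    """Toy stand-in for gem5's SimObject naming (not the real class)."""

    def __init__(self):
        # Bypass our own __setattr__ hook during setup.
        object.__setattr__(self, '_name', None)
        object.__setattr__(self, '_children', {})

    def __setattr__(self, attr, value):
        if isinstance(value, SimObjectSketch):
            # Assigning a child object: the attribute name becomes
            # the child's name, as in `system.mem_ctrl = MemCtrl()`.
            object.__setattr__(value, '_name', attr)
            self._children[attr] = value
        object.__setattr__(self, attr, value)

    def get_name(self):
        return self._name


system = SimObjectSketch()
object.__setattr__(system, '_name', 'system')  # the root is named explicitly
system.mem_ctrl = SimObjectSketch()
print(system.mem_ctrl.get_name())  # -> mem_ctrl
```

In gem5 itself the same bookkeeping happens in SimObject.__setattr__ (linked above), which is why you never find a literal string like "mem_ctrls" being assigned anywhere in the source.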
I'm building a tree in Graphviz and I can't seem to get the feature names to show up. I have defined a list with the feature names like so:
names = list(df.columns.values)
Which prints:
['Gender',
'SuperStrength',
'Mask',
'Cape',
'Tie',
'Bald',
'Pointy Ears',
'Smokes']
So the list is being created. Later, I export the tree like so:
export_graphviz(tree, out_file=ddata, filled=True, rounded=True, special_characters=False, impurity=False, feature_names=names)
But the final image still shows the features as X[0], X[1], etc.:
How can I get the actual feature names to show up? (Cape instead of X[3], etc.)
I can only imagine this has to do with passing the names as a plain list of the values. It works fine if you pass the columns directly:
export_graphviz(tree, out_file=ddata, filled=True, rounded=True, special_characters=False, impurity=False, feature_names=df.columns)
If needed, you can also slice the columns:
export_graphviz(tree, out_file=ddata, filled=True, rounded=True, special_characters=False, impurity=False, feature_names=df.columns[5:])
Is it possible to query for testcase results using project scoping?
The TestCaseResult object does not contain a project scope, and queries for testcaseresults seem to ignore project scoping.
So is there a way, for example, to query for all test case results in the last 14 days scoped under a particular project and its child projects?
In standard WSAPI, you can do:
((TestCase.Project.Name = "My Project") AND (CreationDate > "2013-01-07"))
or
((TestCase.Project.ObjectID = "12345678910") AND (CreationDate > "2013-01-07"))
on TestCaseResults, and it should provide you with project-scoped TCRs.
If you want to include child projects, you'd need to either run multiple queries or do some complicated OR'ing of the project clauses.