My package contains several libraries ("a" and "b"), and I am trying to define separate components for them, like this:
def package_info(self):
    self.cpp_info.components["CA"].libs = ["a"]
    self.cpp_info.components["CB"].libs = ["b"]
Nothing special, and in line with the documentation as far as I can tell. But when I create the package, Conan says "ConanException: MyTest/0.1 package_info(): self.cpp_info.components cannot be used with self.cpp_info global values at the same time"
I do not understand that. What does it mean? What am I doing wrong?
Your example is correct, but your actual recipe is mixing things and does not follow your example.
You cannot use self.cpp_info.libs and self.cpp_info.components together.
There is a warning about this in the cpp_info documentation.
Thus, you can use either:
def package_info(self):
    self.cpp_info.libs = ["foo"]
Or
def package_info(self):
    self.cpp_info.components["a"].libs = ["foo"]
But not mixed:
def package_info(self):
    self.cpp_info.libs = ["bar"]
    self.cpp_info.components["a"].libs = ["foo"]
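So if your real recipe also sets a global value somewhere (for example a leftover self.cpp_info.libs assignment or a tools.collect_libs() call), remove it and declare everything through the components. A minimal sketch for the "a"/"b" case from the question, assuming both libraries are packaged by the recipe:

def package_info(self):
    # Components only: nothing is assigned to the global self.cpp_info values
    self.cpp_info.components["CA"].libs = ["a"]
    self.cpp_info.components["CB"].libs = ["b"]
    # Per-component attributes are fine, e.g. a dependency between components:
    # self.cpp_info.components["CB"].requires = ["CA"]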
Consider the following pipeline:
example_gen = tfx.components.ImportExampleGen(input_base=_dataset_folder)

statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples'])

schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=True)

transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    module_file=os.path.abspath('preprocessing_fn.py'))

_trainer_module_file = 'run_fn.py'
trainer = tfx.components.Trainer(
    module_file=os.path.abspath(_trainer_module_file),
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=tfx.proto.TrainArgs(num_steps=10),
    eval_args=tfx.proto.EvalArgs(num_steps=6),
)

pusher = tfx.components.Pusher(
    model=trainer.outputs['model'],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory=_serving_model_dir)
    )
)

components = [
    example_gen,
    statistics_gen,
    schema_gen,
    transform,
    trainer,
    pusher,
]

_pipeline_data_folder = './simple_pipeline_data'
pipeline = tfx.dsl.Pipeline(
    pipeline_name='simple_pipeline',
    pipeline_root=_pipeline_data_folder,
    metadata_connection_config=tfx.orchestration.metadata.sqlite_metadata_connection_config(
        f'{_pipeline_data_folder}/metadata.db'),
    components=components)

tfx.orchestration.LocalDagRunner().run(pipeline)
Now, let's assume that once the pipeline is done, I would like to do something with the artifacts. I know I can query the ML Metadata store like this:
import ml_metadata as mlmd
connection_config = pipeline.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
print(store.get_artifact_types())
But this way, I have no idea which IDs belong to the current pipeline. Sure, I can assume that the largest IDs represent the current pipeline artifacts but that's not going to be a practical approach in production when multiple executions might try to work with the same metadata store concurrently.
So, the question is how can I figure out the artifact IDs that were just created by the current execution?
[UPDATE]
To clarify the problem, consider the following partial solution:
from tfx import types
from tfx.orchestration.metadata import Metadata

def get_latest_artifact(metadata_connection_config, pipeline_name: str, component_name: str, type_name: str):
    with Metadata(metadata_connection_config) as metadata:
        context = metadata.store.get_context_by_type_and_name('node', f'{pipeline_name}.{component_name}')
        artifacts = metadata.store.get_artifacts_by_context(context.id)
        artifact_type = metadata.store.get_artifact_type(type_name)
        latest_artifact = max([a for a in artifacts if a.type_id == artifact_type.id],
                              key=lambda a: a.last_update_time_since_epoch)
        artifact = types.Artifact(artifact_type)
        artifact.set_mlmd_artifact(latest_artifact)
        return artifact

sqlite_path = './pipeline_data/metadata.db'
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(sqlite_path)
examples_artifact = get_latest_artifact(metadata_connection_config, 'simple_pipeline',
                                        'SchemaGen', 'Schema')
Using get_latest_artifact function, I can get the latest artifact of a specific type from a specific pipeline. This will work even if two pipelines (with different names) create new artifacts concurrently. But it will fail when I try to extract the artifact of the "just finished" pipeline if multiple instances of the same pipeline are making changes to the store concurrently. That's because the function takes in the pipeline name as an input argument (as opposed to some pipeline unique ID).
I'm looking for a solution that works no matter how many different (or identical) pipelines work with the same store concurrently. At this point, I'm not sure if this can be done with MLMD. And if it cannot be done at the moment, I consider that a missing feature, and a very crucial one.
OK, this is the solution I found. When defining the pipeline's components, you should use the .with_id() method and give the component a custom ID. That way you can find it later on.
Here's an example. Let's say that I want to find the schema generated as part of the recently executed pipeline.
schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=True).with_id('some_unique_id')
Then, the same function I defined above can be used like this:
def get_latest_artifact(metadata_connection_config, pipeline_name: str, component_name: str, type_name: str):
    with Metadata(metadata_connection_config) as metadata:
        context = metadata.store.get_context_by_type_and_name('node', f'{pipeline_name}.{component_name}')
        artifacts = metadata.store.get_artifacts_by_context(context.id)
        artifact_type = metadata.store.get_artifact_type(type_name)
        latest_artifact = max([a for a in artifacts if a.type_id == artifact_type.id],
                              key=lambda a: a.last_update_time_since_epoch)
        artifact = types.Artifact(artifact_type)
        artifact.set_mlmd_artifact(latest_artifact)
        return artifact

sqlite_path = './pipeline_data/metadata.db'
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(sqlite_path)
examples_artifact = get_latest_artifact(metadata_connection_config, 'simple_pipeline',
                                        'some_unique_id', 'Schema')
Once your TFX pipeline completes the run, you can query the ML Metadata store using the code below.
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
Once the metadata is fetched, you can use the helper functions below to view the data from the MD store. The display_types() function renders the list of all stored ArtifactTypes. The display_artifacts() function lists all artifacts of a given artifact type and their URIs. The display_properties() function shows the execution properties of a given artifact.
Please refer to the MLMD tutorial for a detailed implementation of these functions.
import pandas as pd

def display_types(types):
    # Helper function to render dataframes for the artifact and execution types
    table = {'id': [], 'name': []}
    for a_type in types:
        table['id'].append(a_type.id)
        table['name'].append(a_type.name)
    return pd.DataFrame(data=table)

def display_artifacts(store, artifacts):
    # Helper function to render dataframes for the input artifacts
    table = {'artifact id': [], 'type': [], 'uri': []}
    for a in artifacts:
        table['artifact id'].append(a.id)
        artifact_type = store.get_artifact_types_by_id([a.type_id])[0]
        table['type'].append(artifact_type.name)
        table['uri'].append(a.uri.replace(base_dir, './'))
    return pd.DataFrame(data=table)

def display_properties(store, node):
    # Helper function to render dataframes for artifact and execution properties
    table = {'property': [], 'value': []}
    for k, v in node.properties.items():
        table['property'].append(k)
        table['value'].append(
            v.string_value if v.HasField('string_value') else v.int_value)
    for k, v in node.custom_properties.items():
        table['property'].append(k)
        table['value'].append(
            v.string_value if v.HasField('string_value') else v.int_value)
    return pd.DataFrame(data=table)
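For example, assuming the store object created above, you can list what the store knows about like this ('Schema' is the standard TFX artifact type name and is used here only for illustration):

# Render all ArtifactTypes registered in the metadata store
display_types(store.get_artifact_types())

# Render all artifacts of one type, e.g. the generated schemas
display_artifacts(store, store.get_artifacts_by_type('Schema'))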
Example code to get the latest pushed model execution properties.
# get all artifacts with ArtifactType PushedModel
pushed_models = store.get_artifacts_by_type("PushedModel")
# get the latest pushed model
pushed_model = pushed_models[-1]
# get execution properties for latest pushed model
display_properties(store, pushed_model)
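If you also need the location of that model on disk, the artifact's uri field points to the directory the Pusher wrote the model to; a small sketch using the same pushed_model object:

# Print the serving directory of the latest pushed model
print(pushed_model.uri)

# Or render it together with its type using the helper defined above
display_artifacts(store, [pushed_model])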
I am trying to extract the content of the [Documentation] section as a string for comparison with another part in a Python script.
I was told to use the Robot Framework API (https://robot-framework.readthedocs.io/en/stable/) to extract it, but I have no idea how.
However, I am required to work with version 3.1.2.
Example:
*** Test Cases ***
ATC Verify that Sensor Battery can enable and disable manufacturing mode
    [Documentation]    E1: This is the description of the test 1
    ...                E2: This is the description of the test 2
    [Tags]    E1 TRACE{Trace_of_E1}
    ...       E2 TRACE{Trace_of_E2}
I want to extract the string as:
E1: This is the description of the test 1
E2: This is the description of the test 2
Have a look at these examples. I did something similar to generate a test plan description. I tried to adapt my code to your requirements, and this could maybe work for you.
import os
import re
from robot.api.parsing import (
    get_model, get_tokens, Documentation, EmptyLine, KeywordCall,
    ModelVisitor, Token
)

class RobotParser(ModelVisitor):
    def __init__(self):
        # Attribute used to store the collected documentation text
        self.text = ''

    def get_text(self):
        return self.text

    def visit_TestCase(self, node):
        # The matched `TestCase` node is a block with `header` and
        # `body` attributes. `header` is a statement with familiar
        # `get_token` and `get_value` methods for getting certain
        # tokens or their value.
        for keyword in node.body:
            # skip statements that are not [Documentation]
            if keyword.get_value(Token.DOCUMENTATION) is None:
                continue
            self.text += keyword.get_value(Token.ARGUMENT)

    def visit_Documentation(self, node):
        # The matched "Documentation" node with its full value
        self.text += node.value + '\n'

    def visit_File(self, node):
        # Call `generic_visit` to visit also child nodes.
        return self.generic_visit(node)

if __name__ == "__main__":
    path = "../tests"
    for filename in os.listdir(path):
        if re.match(r".*\.robot", filename):
            model = get_model(os.path.join(path, filename))
            robot_parser = RobotParser()
            robot_parser.visit(model)
            text = robot_parser.get_text()
The code marked as the best answer didn't quite work for me and has a lot of redundancy, but it inspired me enough to get into the parsing and write it in a much more readable and efficient way that actually works as is. You just have to have your own way of generating and iterating through the filesystem and call the get_robot_metadata(filepath) function on each file (see the sketch after the code).
from robot.api.parsing import (get_model, ModelVisitor, Token)

class RobotParser(ModelVisitor):
    def __init__(self):
        self.testcases = {}

    def visit_TestCase(self, node):
        testcasename = node.header.name
        self.testcases[testcasename] = {}
        for section in node.body:
            if section.get_value(Token.DOCUMENTATION) is not None:
                documentation = section.value
                self.testcases[testcasename]['Documentation'] = documentation
            elif section.get_value(Token.TAGS) is not None:
                tags = section.values
                self.testcases[testcasename]['Tags'] = tags

    def get_testcases(self):
        return self.testcases

def get_robot_metadata(filepath):
    if filepath.endswith('.robot'):
        robot_parser = RobotParser()
        model = get_model(filepath)
        robot_parser.visit(model)
        metadata = robot_parser.get_testcases()
        return metadata
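For completeness, a small sketch of the kind of filesystem iteration mentioned above; the directory name is just an example:

import os

if __name__ == "__main__":
    suite_root = "./tests"  # example path, adjust to your repository layout
    for dirpath, _, filenames in os.walk(suite_root):
        for filename in filenames:
            if filename.endswith(".robot"):
                metadata = get_robot_metadata(os.path.join(dirpath, filename))
                print(metadata)  # {'<test name>': {'Documentation': '...', 'Tags': [...]}}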
This function will be able to extract the [Documentation] section from the testcase:
def documentation_extractor(testcase):
    documentation = []
    for setting in testcase.settings:
        if len(setting) > 2 and setting[1].lower() == "[documentation]":
            for doc in setting[2:]:
                if doc.startswith("#"):
                    # the start of a comment, so skip rest of the line
                    break
                documentation.append(doc)
            break
    return "\n".join(documentation)
I am trying to build a C++ app with glut using Bazel. It should work on both macOS and Linux. Now the problem is that on macOS it requires passing "-framework OpenGL", "-framework GLUT" to the linker flags, while on Linux I should probably do something like
cc_library(
    name = "glut",
    srcs = glob(["local/lib/libglut*.dylib", "lib/libglut*.so"]),
    ...
in glut.BUILD.
So the questions are:
1. How to provide platform-dependent linker options to cc_library rules in general?
2. And in particular how to link to glut in platform-independent way using bazel?
You can do this using the Bazel select() function. Something like this might work:
config_setting(
    name = "linux_x86_64",
    values = {"cpu": "k8"},
    visibility = ["//visibility:public"],
)

config_setting(
    name = "darwin_x86_64",
    values = {"cpu": "darwin_x86_64"},
    visibility = ["//visibility:public"],
)

cc_library(
    name = "glut",
    srcs = select({
        ":darwin_x86_64": [],
        ":linux_x86_64": glob(["local/lib/libglut*.dylib", "lib/libglut*.so"]),
    }),
    linkopts = select({
        ":darwin_x86_64": [
            "-framework OpenGL",
            "-framework GLUT"
        ],
        ":linux_x86_64": [],
    }),
    ...
)
Dig around in the Bazel GitHub repository; it has some good real-world examples of using select().
I had a similar problem, but with picking the right compiler depending on the platform, and @zlalanne's solution didn't work for me. After 2 days of frustration, I finally found the following solution:
config_setting(
    name = "darwin",
    constraint_values = ["@bazel_tools//platforms:osx"],
)

config_setting(
    name = "windows",
    constraint_values = ["@bazel_tools//platforms:windows"],
)
I didn't have any need for linux, but adding this to your BUILD file should work:
config_setting(
    name = "linux",
    constraint_values = ["@bazel_tools//platforms:linux"],
)
Use ":darwin", ":windows" and ":linux" in your selects and you should have a solution that works.
I'm using the VLAB MPC5xxx Toolbox.
I have a run script that is controlling my simulation, which loads the platform in the usual way, then runs it:
import vlab
import os
import sysc
image_path = os.path.join('o5e',
                          'firmware.open5xxxecu-e6009bbcfcd1',
                          'bin', 'o5e_dbg.elf')

vlab.load('mpc.mpc5674f.sim', args=['--testbench=o5e_testbench',
                                    "--image=%s" % image_path,
                                    "--debugger-config=GHS_MULTI",
                                    "--trace=+src:sc_report",
                                    ])

vcd_sink = vlab.trace.sink.vcd("mpc.mpc5674f.sim.vcd")
vlab.add_trace("mpc5674f.PBRIDGE.EDMA_B", sink=vcd_sink)

for i in range(32):
    vlab.add_trace("mpc5674f.PBRIDGE.ETPU.CH_OUT_A[%d]" % i, sink=vlab.trace.sink.console)

vlab.run(11, "ms", blocking=True)
vlab.exit()
I want to give this run script an argument to turn on tracing in the core, which I know you do by setting the tracing attribute on the core. And I know I can read the options of the script using python's optparse.
The problem I have is that you have to set the attribute before the end of elaboration, but the only place I can access the simulation before elaboration is in the testbench, and there seems to be no way to pass parameters (for example script arguments) to the testbench.
How should I pass the argument from my script into the testbench so that it conditionally turns the core tracing on or off?
So, I think the answer is:
1) Don't try to use the testbench for things that aren't to do with... a testbench
2) Use "phase breakpoints" to do things that need to happen at... specific phases in the simulation
e.g.:
import os
import vlab
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--core-instrumentation", dest="core_instrumentation",
                  action="store_true",
                  help="turn on core instrumentation")
(options, args) = parser.parse_args()

if options.core_instrumentation:
    vlab.add_phase_breakpoint("before_end_of_elaboration",
                              action=lambda bp: vlab.write_attribute("mpc5674f.Core0.log_filter", "+instr"))

image_path = os.path.join('o5e',
                          'firmware.open5xxxecu-e6009bbcfcd1',
                          'bin', 'o5e_dbg.elf')

vlab.load('mpc.mpc5674f.sim', args=['--testbench=o5e_testbench',
                                    "--image=%s" % image_path,
                                    "--debugger-config=GHS_MULTI",
                                    ])

vcd_sink = vlab.trace.sink.vcd("mpc.mpc5674f.sim.vcd")
vlab.add_trace("mpc5674f.PBRIDGE.EDMA_B", sink=vcd_sink)

for i in range(32):
    vlab.add_trace("mpc5674f.PBRIDGE.ETPU.CH_OUT_A[%d]" % i, sink=vlab.trace.sink.console)

vlab.run(11, "ms", blocking=True)
vlab.exit()
I want to create a program using Python 2.7 where:
1) There are three classes, namely Tagging, Commenting, and Posting
2) The self.content of both the Tagging and Commenting classes will be sent to the self.compact of the Posting class
class Tagging: # Handles Tagging - Create new Tags
    def __init__(self):
        self.content = []
        self.initialTag = ""

    def doTag(self): #Tag people
        self.initialTag = raw_input("Name to Tag: ")
        self.content.append(self.initialTag)
        #Tagging can only be done if the user created new post.

class Commenting: #Handles Commenting - Create new Comments
    def __init__(self):
        self.content = []
        self.initialComment = ""

    def doComment(self): #Commenting on Posts
        self.initialComment = raw_input("Comment: ")
        self.content.append(self.initialComment)
        #Commenting can only be done on Posts. No Post means no Comment. (Same goes to Tags)

class Posting: #Handles Posting - Create new Posts
    def __init__(self):
        self.content = [] #Content of the post
        self.initialPost = ""
        self.compact = [] #Post that contains the Post, Comments, and Tags
        #How do I do this?

    def doPost(self):
        self.initialPost = raw_input("Post: ")
        self.content.append(self.initialPost)
I tried having both class Tagging and class Commenting inherit from class Posting, but I think using inheritance just for one single variable of class Posting is illogical.
Can anyone suggest a better way?
An additional question: do class Tagging and class Commenting have an aggregation relationship to class Posting, or is it an association relationship (as defined by UML)?
How about this? Just example code in the "__main__" part:
class Tagging: # Handles Tagging - Create new Tags
    def __init__(self):
        self.content = []
        self.initialTag = ""

    def doTag(self): #Tag people
        self.initialTag = raw_input("Name to Tag: ")
        self.content.append(self.initialTag)
        #Tagging can only be done if the user created new post.

class Commenting: #Handles Commenting - Create new Comments
    def __init__(self):
        self.content = []
        self.initialComment = ""

    def doComment(self): #Commenting on Posts
        self.initialComment = raw_input("Comment: ")
        self.content.append(self.initialComment)
        #Commenting can only be done on Posts. No Post means no Comment. (Same goes to Tags)

class Posting: #Handles Posting - Create new Posts
    def __init__(self, TaggingContent, CommentingContent):
        self.content = [] #Content of the post
        self.initialPost = ""
        self.compact = TaggingContent + CommentingContent #Post that contains the Post, Comments, and Tags

    def doPost(self):
        self.initialPost = raw_input("Post: ")
        self.content.append(self.initialPost)

if __name__ == "__main__":
    T = Tagging()
    C = Commenting()
    ##Do stuff here with tagging and commenting....
    P = Posting(T.content, C.content)
    #Do stuff with posting
That way you have the content from Tagging and Commenting in compact from Posting. Or am I wrong about what you need?
If you want to ensure in OOP that a set of classes obeys a certain contract, you normally define an interface.
Python does not provide interfaces directly; instead it is common to use duck typing with something like isinstance or hasattr, which means that if your object has a content property you use it, and if not, you raise an error.
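For example, Posting could simply check at runtime that whatever it is given looks like a content provider; a small sketch in Python 2.7 syntax (collect_content is just an illustrative helper name):

def collect_content(*providers):
    # Duck typing: accept any object that has a 'content' attribute
    compact = []
    for provider in providers:
        if not hasattr(provider, 'content'):
            raise TypeError("%r does not provide 'content'" % provider)
        compact.extend(provider.content)
    return compact

# compact = collect_content(Tagging(), Commenting())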
Another possibility to emulate interfaces has been available since Python 2.6 by means of Abstract Base Classes.
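A minimal sketch of that approach with the abc module (Python 2 metaclass syntax; ContentProvider and get_content are just illustrative names):

from abc import ABCMeta, abstractmethod

class ContentProvider(object):
    # Abstract base class acting as the "interface" for Tagging/Commenting
    __metaclass__ = ABCMeta

    @abstractmethod
    def get_content(self):
        """Return the items this object contributes to a post."""

# Tagging and Commenting would subclass ContentProvider and implement
# get_content(), and Posting can then check isinstance(obj, ContentProvider).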
Hope this helps.