use local graph in noflo.asCallback - noflo

I'm trying to execute a graph located in a local graphs/ folder using the noflo.asCallback function.
I'm getting an error that the graph or component is not available with the base specified at the baseDir.
What is the correct way to use the asCallback function with a graph?

The issue was related to the package name of the project. The name I was using was noflo-test, so the graph was available as test/GraphName, without the noflo- prefix.

Related

Azure Synapse: Can I import modules from other notebooks in the same workspace?

In a normal Python project we might have a parent .py file that imports from a sibling or child folder/module (initialised by __init__.py, from .folder import module, etc.).
Is there a way to do this using Synapse notebooks within a given (ex. dev) workspace?
For example, I would like to create a notebook/Python module for logging - a wrapper to wrap functions. I don't want to have to copy-paste this module into 20 different notebooks.
Thanks!
Yes, you can use something called a notebook reference.
You can use the %run magic command to reference another notebook within the current notebook's context. All the variables defined in the referenced notebook are available in the current notebook. The %run magic command supports nested calls but does not support recursive calls; you will receive an exception if the statement depth is larger than five.
Example: %run //Notebook1 { "parameterInt": 1, "parameterFloat": 2.5, "parameterBool": true, "parameterString": "abc" }.
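As a minimal sketch of the logging use case from the question (the notebook name LoggingUtils and the decorator are made up for illustration), a shared notebook can define the wrapper once, and any consuming notebook pulls it in with %run:

# --- Cell in the shared notebook, e.g. "LoggingUtils" (hypothetical name) ---
import functools
import logging

def log_calls(func):
    """Log entry to and exit from the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("Entering %s", func.__name__)
        result = func(*args, **kwargs)
        logging.info("Leaving %s", func.__name__)
        return result
    return wrapper

# --- Cell in the consuming notebook ---
# %run //LoggingUtils
#
# @log_calls
# def load_data(path):
#     ...

Because %run executes the referenced notebook in the current context, log_calls is then available as if it had been defined locally.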

ANSYS Mechanical Workbench Scripting - Accessing Parameters

Ansys gurus,
My project is a static structural analysis using ANSYS Workbench Mechanical. I have created the parametrized geometry (via DesignModeler) and the material properties in Workbench, and used ACT scripting to configure the model. However, I can't find much information on how to access the parameters via ACT scripting.
I have confirmed that the geometric parameters are successfully created in the workbench, e.g.
ID    Parameter Name    Value    Unit
P1    diameter          50       um
The documentation LINK suggests that I can obtain a parameter using Analysis.GetParameter(); however, the following code didn't work for me and resulted in the error below.
Code:
STATIC_STRUCTURAL = ExtAPI.DataModel.AnalysisByName("Static Structural")
HEIGHT = STATIC_STRUCTURAL.GetParameter('height')
Error:
Property not found.
Do you have any suggestions on the cause of this error? Is it because the parameters were not imported from the Workbench "Project Schematic" into "Model", or is the code I used to retrieve the parameters incorrect? In either case, could you advise the correct method to access the parameters? Thank you!
hawkoli1987
If you want to access a parameter from the "Project Schematic" page you can create a list. If you then want to do something with it inside of Mechanical, you have to send the commands to your model:
# Access the geometric parameters defined in the Project Schematic
allParameters = Parameters.GetAllParameters()
heightParameter = None
for parameter in allParameters:
    print parameter.DisplayText
    if parameter.DisplayText == 'height':
        heightParameter = parameter

# Loop over all systems in the project
for system in GetAllSystems():
    # Get the Model container
    model = system.GetContainer(ComponentName="Model")
    system.Refresh()
    # Open the Model component in Mechanical
    model.Edit(Interactive=True)
    # Code to be sent to ANSYS Mechanical; make sure the string has no
    # leading spaces or tabs.
    cmd = '''
here goes your ACT script as a string
'''
    # Send the code and exit Mechanical
    model.SendCommand(Language='Python', Command=cmd)
    model.Exit()

print "Finished script execution."

how to concatenate the OutputPathPlaceholder with a string with Kubeflow pipelines?

I am using Kubeflow pipelines (KFP) with GCP Vertex AI pipelines. I am using kfp==1.8.5 (kfp SDK) and google-cloud-pipeline-components==0.1.7. Not sure if I can find which version of Kubeflow is used on GCP.
I am building a component (yaml) using Python, inspired by this GitHub issue. I am defining an output like:
outputs=[(OutputSpec(name='drt_model', type='Model'))]
This will be a base output directory on Cloud Storage for storing a few artifacts, like model checkpoints and the model itself.
I would like to keep one base output directory but add subdirectories depending on the artifact:
<output_dir_base>/model
<output_dir_base>/checkpoints
<output_dir_base>/tensorboard
but I didn't find how to concatenate the OutputPathPlaceholder('drt_model') with a string like '/model'.
How can I append extra folder structure like /model or /tensorboard to the OutputPathPlaceholder that KFP will set at run time?
I didn't realize at first that ConcatPlaceholder accepts both artifacts and strings. This is exactly what I wanted to achieve:
ConcatPlaceholder([OutputPathPlaceholder('drt_model'), '/model'])
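For context, here is a rough sketch of how this can be wired into a component definition, assuming the kfp==1.8 structures API; only the drt_model output comes from the question, while the component name, image, command and flag names are made up:

from kfp.components.structures import (
    ComponentSpec, OutputSpec, ContainerSpec, ContainerImplementation,
    OutputPathPlaceholder, ConcatPlaceholder,
)

# Hypothetical trainer component: one base 'drt_model' output, with
# sub-directories for the model, checkpoints and TensorBoard logs.
component_spec = ComponentSpec(
    name='Train DRT model',
    outputs=[OutputSpec(name='drt_model', type='Model')],
    implementation=ContainerImplementation(
        container=ContainerSpec(
            image='gcr.io/my-project/trainer:latest',  # placeholder image
            command=['python', 'train.py'],
            args=[
                '--model-dir',
                ConcatPlaceholder([OutputPathPlaceholder('drt_model'), '/model']),
                '--checkpoint-dir',
                ConcatPlaceholder([OutputPathPlaceholder('drt_model'), '/checkpoints']),
                '--tensorboard-dir',
                ConcatPlaceholder([OutputPathPlaceholder('drt_model'), '/tensorboard']),
            ],
        )
    ),
)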

How to create an op like conv_ops in tensorflow?

What I'm trying to do
I'm new to C++ and Bazel, and I want to make some changes to the convolution operation in TensorFlow, so I decided that my first step is to create an op just like it.
What I have done
I copied conv_ops.cc from //tensorflow/core/kernels and changed the name of the op registered in my new_conv_ops.cc. I also changed some of the function names in the file to avoid duplication. And here is my BUILD file.
As you can see, I copied the deps attribute of conv_ops from //tensorflow/core/kernels/BUILD. Then I used "bazel build -c opt //tensorflow/core/user_ops:new_conv_ops.so" to build the new op.
What my problem is
Then I got this error.
I tried to delete bounds_check and got the same error for the next dep. Then I realized that there is some problem with including header files from //tensorflow/core/kernels in //tensorflow/core/user_ops. So how can I properly create a new op exactly like conv_ops?
Adding a custom operation to TensorFlow is covered in the tutorial here. You can also look at actual code examples.
To address your specific problem, note that the tf_custom_op_library macro adds most of the necessary dependencies to your target. You can simply write the following:
tf_custom_op_library(
    name = "new_conv_ops.so",
    srcs = ["new_conv_ops.cc"],
)
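Once the target builds, the resulting shared object can be loaded from Python with tf.load_op_library. A rough sketch follows; the output path and the op name (NewConv2D / new_conv2d) are assumptions that depend on the REGISTER_OP name used in new_conv_ops.cc:

import tensorflow as tf

# Load the compiled custom-op library produced by the bazel target above.
new_conv_module = tf.load_op_library(
    'bazel-bin/tensorflow/core/user_ops/new_conv_ops.so')

# The generated Python wrapper is the snake_case form of the registered
# op name, e.g. REGISTER_OP("NewConv2D") -> new_conv_module.new_conv2d.
# output = new_conv_module.new_conv2d(input_tensor, filter_tensor,
#                                     strides=[1, 1, 1, 1], padding='SAME')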

How to write summaries for multiple runs in Tensorflow

If you look at the Tensorboard dashboard for the cifar10 demo, it shows data for multiple runs. I am having trouble finding a good example showing how to set the graph up to output data in this fashion. I am currently doing something similar to this, but it seems to be combining data from separate runs, and whenever a new run starts I see this warning on the console:
WARNING:root:Found more than one graph event per run.Overwritting the graph with the newest event
The solution turned out to be simple (and probably a bit obvious), but I'll answer anyway. The writer is instantiated like this:
writer = tf.train.SummaryWriter(FLAGS.log_dir, sess.graph_def)
The events for the current run are written to the specified directory. Instead of having a fixed value for the logdir parameter, just set a variable that gets updated for each run and use that as the name of a sub-directory inside the log directory:
writer = tf.train.SummaryWriter('%s/%s' % (FLAGS.log_dir, run_var), sess.graph_def)
Then just specify the root log_dir location when starting tensorboard via the --logdir parameter.
As mentioned in the documentation, you can specify multiple log directories when running TensorBoard. Alternatively, you can create multiple run subfolders inside the log directory to visualize the different runs on the same graphs.
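For reference, a minimal sketch of the per-run sub-directory approach described above, using the same legacy SummaryWriter API as the question (the log directory and run naming scheme are just examples):

import time
import tensorflow as tf

log_dir = '/tmp/cifar10_logs'                   # root log directory (example)
run_name = time.strftime('run_%Y%m%d_%H%M%S')   # one sub-directory per run

with tf.Session() as sess:
    # Each run gets its own writer and its own sub-directory, so
    # TensorBoard shows the runs side by side instead of merging them.
    writer = tf.train.SummaryWriter('%s/%s' % (log_dir, run_name), sess.graph_def)
    # ... during training: writer.add_summary(summary_str, step) ...
    writer.close()

# Then point TensorBoard at the parent directory:
#   tensorboard --logdir=/tmp/cifar10_logs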