Julia using package located in .julia/dev - module

I am a beginner in Julia, though I have experience with Python and some other languages. I realize this is probably a very simple beginner issue, but I fail to understand how it is supposed to work in Julia.
I want to create a Julia module. I saw recommendations to create it with PkgTemplates, so that is exactly what I have done. The package is located at the default path proposed by PkgTemplates: /home/username/.julia/dev/Keras2Flux.
I want to develop it with the Revise package because of the slow start-up time of the Julia REPL. However, I fail to import my module into the Julia REPL in the terminal.
So I cd to the directory mentioned above, run the julia command, and try using Keras2Flux. I get the error:
ERROR: ArgumentError: Package Keras2Flux not found in current path:
I tried both using Keras2Flux and using Keras2Flux.jl, and I also tried calling it from one level above in my directory structure (i.e. /home/username/.julia/dev). All attempts have the same problem.
What is wrong (and, more importantly, why?), and how do I fix it?
Current contents of the module (not really relevant to the question but still):
module Keras2Flux

import JSON
using Flux

export convert

function create_dense(config)
    in = config["input_dim"]
    out = config["output_dim"]
    dense = Dense(in, out)
    return dense
end

function create_dropout(config)
    p = config["p"]
    dropout = Dropout(p)
    return dropout
end

function create_model(model_config)
    layers = []
    for layer_config in model_config
        if layer_config["class_name"] == "Dense"
            layer = create_dense(layer_config["config"])
        elseif layer_config["class_name"] == "Dropout"
            layer = create_dropout(layer_config["config"])
        else
            println(layer_config["class_name"])
            throw("unimplemented")
        end
        push!(layers, layer)
    end
    model = Chain(layers...)  # splat so Chain receives the layers as separate arguments
end

function convert(filename)
    jsontxt = ""
    open(filename, "r") do f
        jsontxt = read(f, String)
    end
    model_params = JSON.parse(jsontxt)
    if model_params["keras_version"] == "1.1.0"
        create_model(model_params["config"])
    else
        throw("unimplemented")
    end
end

end

The problem is that using Keras2Flux only works when the package is part of an environment Julia can see. Simply cd-ing into the package folder does not activate its project, so the package is not on your load path and Julia cannot find it; you need to activate the package's project (or develop it into another environment) first. Here is a full recipe to get you going:
cd("/home/username/.julia/dev")
using Pkg
pkg"generate Keras2Flux"
cd("Keras2Flux")
pkg"activate ."
pkg"add JSON Flux"
# now copy-paste whatever you need into Keras2Flux/src/Keras2Flux.jl
using Revise
using Keras2Flux
# happy development!
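# Optional (assumption: standard Pkg workflow, not part of the original recipe):
# if you also want `using Keras2Flux` to work later from your default environment
# without activating the project, register the dev'ed package there once with
#   pkg"dev /home/username/.julia/dev/Keras2Flux"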


The application and the screen stop when running the code

I am a beginner in programming and I am finding it difficult to overcome a problem in an Android application that runs Python code.
The code works fine, but when I add while True, the screen freezes. What is the cause of the problem? Is there any suggestion about this problem? Thank you.
My Python script:
import sys
from os.path import dirname, join
# Note: "Python" below is the Chaquopy bridge class available inside the app;
# its import is assumed here and was not shown in the original snippet.

def main(CodeAreaData):
    file_dir = str(Python.getPlatform().getApplication().getFilesDir())
    filename = join(dirname(file_dir), 'file.txt')
    try:
        original_stdout = sys.stdout
        sys.stdout = open(filename, 'w', encoding='utf8', errors="ignore")
        exec(CodeAreaData)  # it will execute our code and save output in file
        sys.stdout.close()
        sys.stdout = original_stdout
        output = open(filename, 'r').read()
        sys.stdout.close()
    except Exception as e:
        sys.stdout = original_stdout
        output = e
    return str(output)
My Kotlin code:
```kotlin
val py = Python.getInstance()
// here we call our script with the name "myscript"
val pyobj = py.getModule("myscript") // give the python script name
// and call the main method inside the script, passing the data here
val obj = pyobj.callAttr("main", editor.text.toString())
println(obj)
```
If you add a while True statement with no exit condition, the screen is expected to freeze: it creates an infinite loop, so exec() never returns, main() never returns to the Kotlin caller, and the thread that called it stays blocked. A while loop only ends when its condition evaluates to false (or the loop breaks). Here is more info about that:
https://www.geeksforgeeks.org/how-to-use-while-true-in-python/
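To see why the caller hangs, here is a minimal, self-contained Python sketch, independent of Chaquopy: exec() only returns when the executed code finishes, so a loop with no exit condition blocks forever.
```python
# Minimal sketch: exec() returns only when the executed code finishes.
code_that_finishes = "i = 0\nwhile i < 3:\n    print(i)\n    i += 1"
code_that_hangs = "while True:\n    pass"

exec(code_that_finishes)   # prints 0, 1, 2 and returns
# exec(code_that_hangs)    # uncommenting this blocks the calling thread forever,
#                          # which is why main() never returns and the UI freezes
```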

Vertex AI Model Batch prediction, issue with referencing existing model and input file on Cloud Storage

I'm struggling to correctly set up a Vertex AI pipeline that does the following:
1. read data from an API, store it to GCS, and use it as input for batch prediction
2. get an existing model (video classification on Vertex AI)
3. create a batch prediction job with the input from point 1
As will be seen, I don't have much experience with Vertex Pipelines/Kubeflow, so I'm asking for help/advice; I hope it's just some beginner mistake.
This is the gist of the code I'm using as the pipeline:
from google_cloud_pipeline_components import aiplatform as gcc_aip
from kfp.v2 import dsl
from kfp.v2.dsl import component
from kfp.v2.dsl import (
    Output,
    Artifact,
    Model,
)

PROJECT_ID = 'my-gcp-project'
BUCKET_NAME = "mybucket"
PIPELINE_ROOT = "{}/pipeline_root".format(BUCKET_NAME)


@component
def get_input_data() -> str:
    # getting data from API, save to Cloud Storage
    # return GS URI
    gcs_batch_input_path = 'gs://somebucket/file'
    return gcs_batch_input_path


@component(
    base_image="python:3.9",
    packages_to_install=['google-cloud-aiplatform==1.8.0']
)
def load_ml_model(project_id: str, model: Output[Artifact]):
    """Load existing Vertex model"""
    import google.cloud.aiplatform as aip

    model_id = '1234'
    model = aip.Model(model_name=model_id, project=project_id, location='us-central1')


@dsl.pipeline(
    name="batch-pipeline", pipeline_root=PIPELINE_ROOT,
)
def pipeline(gcp_project: str):
    input_data = get_input_data()
    ml_model = load_ml_model(gcp_project)

    gcc_aip.ModelBatchPredictOp(
        project=PROJECT_ID,
        job_display_name=f'test-prediction',
        model=ml_model.output,
        gcs_source_uris=[input_data.output],  # this doesn't work
        # gcs_source_uris=['gs://mybucket/output/'],  # hardcoded gs uri works
        gcs_destination_output_uri_prefix=f'gs://{PIPELINE_ROOT}/prediction_output/'
    )


if __name__ == '__main__':
    from kfp.v2 import compiler
    import google.cloud.aiplatform as aip

    pipeline_export_filepath = 'test-pipeline.json'
    compiler.Compiler().compile(pipeline_func=pipeline,
                                package_path=pipeline_export_filepath)
    # pipeline_params = {
    #     'gcp_project': PROJECT_ID,
    # }
    # job = aip.PipelineJob(
    #     display_name='test-pipeline',
    #     template_path=pipeline_export_filepath,
    #     pipeline_root=f'gs://{PIPELINE_ROOT}',
    #     project=PROJECT_ID,
    #     parameter_values=pipeline_params,
    # )
    # job.run()
When I run the pipeline, it throws this exception while running the batch prediction:
details = "List of found errors: 1.Field: batch_prediction_job.model; Message: Invalid Model resource name.
so I'm not sure what could be wrong. I tried to load the model in a notebook (outside of a component) and it returns correctly.
The second issue I'm having is referencing the GCS URI output of a component as the batch job input.
input_data = get_input_data2()

gcc_aip.ModelBatchPredictOp(
    project=PROJECT_ID,
    job_display_name=f'test-prediction',
    model=ml_model.output,
    gcs_source_uris=[input_data.output],  # this doesn't work
    # gcs_source_uris=['gs://mybucket/output/'],  # hardcoded gs uri works
    gcs_destination_output_uri_prefix=f'gs://{PIPELINE_ROOT}/prediction_output/'
)
During compilation, I get the following exception: TypeError: Object of type PipelineParam is not JSON serializable, though I think this could be an issue with the ModelBatchPredictOp component.
Again, any help/advice is appreciated; I've been dealing with this since yesterday, so maybe I missed something obvious.
libraries I'm using:
google-cloud-aiplatform==1.8.0
google-cloud-pipeline-components==0.2.0
kfp==1.8.10
kfp-pipeline-spec==0.1.13
kfp-server-api==1.7.1
UPDATE
After comments, some research, and tuning, this works for referencing the model:
@component
def load_ml_model(project_id: str, model: Output[Artifact]):
    region = 'us-central1'
    model_id = '1234'
    model_uid = f'projects/{project_id}/locations/{region}/models/{model_id}'
    model.uri = model_uid
    model.metadata['resourceName'] = model_uid
and then I can use it as intended:
batch_predict_op = gcc_aip.ModelBatchPredictOp(
    project=gcp_project,
    job_display_name=f'batch-prediction-test',
    model=ml_model.outputs['model'],
    gcs_source_uris=[input_batch_gcs_path],
    gcs_destination_output_uri_prefix=f'gs://{BUCKET_NAME}/prediction_output/test'
)
UPDATE 2
Regarding the GCS path, a workaround is to define the path outside of the component and pass it in as an input parameter, for example (abbreviated):
@dsl.pipeline(
    name="my-pipeline",
    pipeline_root=PIPELINE_ROOT,
)
def pipeline(
        gcp_project: str,
        region: str,
        bucket: str
):
    ts = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    gcs_prediction_input_path = f'gs://{BUCKET_NAME}/prediction_input/video_batch_prediction_input_{ts}.jsonl'

    batch_input_data_op = get_input_data(gcs_prediction_input_path)  # this loads input data to the GCS path

    batch_predict_op = gcc_aip.ModelBatchPredictOp(
        project=gcp_project,
        model=training_job_run_op.outputs["model"],
        job_display_name='batch-prediction',
        # gcs_source_uris=[batch_input_data_op.output],
        gcs_source_uris=[gcs_prediction_input_path],
        gcs_destination_output_uri_prefix=f'gs://{BUCKET_NAME}/prediction_output/',
    ).after(batch_input_data_op)  # 'after' is needed so this runs once the input data is prepared, since get_input_data doesn't return anything
I'm still not sure why it doesn't work/compile when I return the GCS path from the get_input_data component.
I'm glad you solved most of your main issues and found a workaround for model declaration.
Regarding your input.output observation on gcs_source_uris: the reason is the way the function/class returns its value. If you dig into the classes and methods of google_cloud_pipeline_components, you will find that it implements a structure that allows you to use .outputs on the value returned by the called function.
If you go to the implementation of one of the pipeline components, you will find that it returns an output array from the convert_method_to_component function. So, in order to have that behaviour in your own class/function, it should return a value that can be accessed as an attribute. Below is a basic implementation of it.
class CustomClass():
    def __init__(self):
        self.return_val = {'path': 'custompath', 'desc': 'a desc'}

    @property
    def output(self):
        return self.return_val


hello = CustomClass()
print(hello.output['path'])
If you want to dig deeper, you can look at the following pages:
convert_method_to_component, the implementation that wraps a method into a pipeline component
Properties, the basics of the property decorator in Python.
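To make that point concrete, here is a plain-Python illustration (no KFP imports; FakeTask and its values are hypothetical) of why ml_model.outputs['model'] works in the updated pipeline: task objects expose their results as attributes rather than as plain return values.
```python
# Plain-Python illustration only; FakeTask and its values are hypothetical
# and not part of kfp or google_cloud_pipeline_components.
class FakeTask:
    def __init__(self):
        self._results = {
            'model': 'projects/my-gcp-project/locations/us-central1/models/1234',
            'gcs_path': 'gs://somebucket/file',
        }

    @property
    def output(self):
        # the single, unnamed output, like task.output
        return self._results['gcs_path']

    @property
    def outputs(self):
        # the named outputs, like task.outputs['model']
        return self._results


task = FakeTask()
print(task.output)             # gs://somebucket/file
print(task.outputs['model'])   # projects/my-gcp-project/locations/us-central1/models/1234
```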

Custom SPARQL functions in rdflib

What is a good way to hook a custom SPARQL function into rdflib?
I have been looking around in rdflib for an entry point for a custom function. I found no dedicated entry point, but found that rdflib.plugins.sparql.CUSTOM_EVALS might be a place to add one.
So far I have made an attempt with the code below. It seems "dirty" to me: I am calling a "hidden" function (_eval), and I am not sure I got all the argument updating correct. Beyond the custom_eval.py example code (which forms the basis for my code), I found little other code or documentation about CUSTOM_EVALS.
import rdflib
from rdflib.plugins.sparql.evaluate import evalPart
from rdflib.plugins.sparql.sparql import SPARQLError
from rdflib.plugins.sparql.evalutils import _eval
from rdflib.namespace import Namespace
from rdflib.term import Literal

NAMESPACE = Namespace('//custom/')
LENGTH = rdflib.term.URIRef(NAMESPACE + 'length')


def customEval(ctx, part):
    """Evaluate custom function."""
    if part.name == 'Extend':
        cs = []
        for c in evalPart(ctx, part.p):
            if hasattr(part.expr, 'iri'):
                # A function
                argument = _eval(part.expr.expr[0], c.forget(ctx, _except=part.expr._vars))
                if part.expr.iri == LENGTH:
                    e = Literal(len(argument))
                else:
                    raise SPARQLError('Unhandled function {}'.format(part.expr.iri))
            else:
                e = _eval(part.expr, c.forget(ctx, _except=part._vars))
                if isinstance(e, SPARQLError):
                    raise e
            cs.append(c.merge({part.var: e}))
        return cs
    raise NotImplementedError()


QUERY = """
PREFIX custom: <%s>
SELECT ?s ?length WHERE {
    BIND("Hello, World" AS ?s)
    BIND(custom:length(?s) AS ?length)
}
""" % (NAMESPACE,)

rdflib.plugins.sparql.CUSTOM_EVALS['exampleEval'] = customEval

for row in rdflib.Graph().query(QUERY):
    print(row)
So first off, I want to thank you for showing how you implemented a new SPARQL function.
Secondly, using your code, I was able to create a SPARQL function that compares two strings using the Levenshtein distance. It has been really insightful, and I wish to share it, since it holds additional documentation that could help other developers create their own custom SPARQL functions.
# Imports needed to introduce a new SPARQL function
import rdflib
from rdflib.plugins.sparql.evaluate import evalPart
from rdflib.plugins.sparql.sparql import SPARQLError
from rdflib.plugins.sparql.evalutils import _eval
from rdflib.namespace import Namespace
from rdflib.term import Literal

# Import for the custom function calculation
from Levenshtein import distance as levenshtein_distance  # python-Levenshtein==0.12.2


def SPARQL_levenshtein(ctx: object, part: object) -> object:
    """
    The first two variables retrieved from a SPARQL query are compared using the Levenshtein distance.
    The distance value is then stored in a Literal object and added to the query results.

    Example:

    Query:
        PREFIX custom: //custom/  # Note: this part references the custom function
        SELECT ?label1 ?label2 ?levenshtein WHERE {
            BIND("Hello" AS ?label1)
            BIND("World" AS ?label2)
            BIND(custom:levenshtein(?label1, ?label2) AS ?levenshtein)
        }

    Retrieve:    ?label1 ?label2
    Calculation: levenshtein_distance(?label1, ?label2) = distance
    Output:      Save distance in a Literal object.

    :param ctx: <class 'rdflib.plugins.sparql.sparql.QueryContext'>
    :param part: <class 'rdflib.plugins.sparql.parserutils.CompValue'>
    :return: <class 'rdflib.plugins.sparql.processor.SPARQLResult'>
    """
    # This part holds the basic implementation for adding new functions
    if part.name == 'Extend':
        cs = []

        # Information is retrieved, stored, and passed through a generator
        for c in evalPart(ctx, part.p):

            # Checks if the function holds an internationalized resource identifier.
            # This will check if any custom functions are added.
            if hasattr(part.expr, 'iri'):

                # From here the real calculations begin.
                # First we get the variable arguments, for example ?label1 and ?label2
                argument1 = str(_eval(part.expr.expr[0], c.forget(ctx, _except=part.expr._vars)))
                argument2 = str(_eval(part.expr.expr[1], c.forget(ctx, _except=part.expr._vars)))

                # Here it checks if it can find our levenshtein IRI (example: //custom/levenshtein).
                # Please note that IRI and URI are almost the same.
                # Earlier this has been defined with the following:
                #   namespace = Namespace('//custom/')
                #   levenshtein = rdflib.term.URIRef(namespace + 'levenshtein')
                if part.expr.iri == levenshtein:

                    # After finding the correct path for the custom SPARQL function, the evaluation can begin.
                    # Here the Levenshtein distance is calculated using ?label1 and ?label2 and stored as a Literal object.
                    # This object is then stored as an output value of the SPARQL query (example: ?levenshtein)
                    evaluation = Literal(levenshtein_distance(argument1, argument2))

                # Standard error handling and return statements
                else:
                    raise SPARQLError('Unhandled function {}'.format(part.expr.iri))
            else:
                evaluation = _eval(part.expr, c.forget(ctx, _except=part._vars))
                if isinstance(evaluation, SPARQLError):
                    raise evaluation
            cs.append(c.merge({part.var: evaluation}))
        return cs
    raise NotImplementedError()


namespace = Namespace('//custom/')
levenshtein = rdflib.term.URIRef(namespace + 'levenshtein')

query = """
PREFIX custom: <%s>
SELECT ?label1 ?label2 ?levenshtein WHERE {
    BIND("Hello" AS ?label1)
    BIND("World" AS ?label2)
    BIND(custom:levenshtein(?label1, ?label2) AS ?levenshtein)
}
""" % (namespace,)

# Save the custom function in the custom evaluation dictionary.
rdflib.plugins.sparql.CUSTOM_EVALS['SPARQL_levenshtein'] = SPARQL_levenshtein

for row in rdflib.Graph().query(query):
    print(row)
To answer your question, "What is a good way to hook a custom SPARQL function into rdflib?":
Currently I'm developing a class that handles RDF data, and I believe it might be best to put the following code into the __init__ function.
For example:
class ClassName():
    """DOCSTRING"""

    def __init__(self):
        """DOCSTRING"""
        # Save the custom function in the custom evaluation dictionary.
        rdflib.plugins.sparql.CUSTOM_EVALS['SPARQL_levenshtein'] = SPARQL_levenshtein
Please note that this SPARQL function will only work on the endpoint where it is implemented. Even though the SPARQL syntax in the query is correct, it is not possible to apply the function in SPARQL queries against databases like DBpedia. The DBpedia endpoint does not support this custom function (yet).
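As a possible alternative (a sketch, not part of the answer above): recent rdflib versions ship a dedicated registration hook, rdflib.plugins.sparql.operators.register_custom_function, which avoids touching CUSTOM_EVALS and the private _eval helper. Check that your rdflib version provides it; the registered function is called with the already-evaluated argument terms and should return an rdflib term.
```python
# Hedged sketch: requires an rdflib version that exposes register_custom_function.
import rdflib
from rdflib import Literal, URIRef
from rdflib.namespace import Namespace
from rdflib.plugins.sparql.operators import register_custom_function

NAMESPACE = Namespace('//custom/')
LENGTH = URIRef(NAMESPACE + 'length')

def length_func(s):
    # receives the evaluated argument (an rdflib term), returns a term
    return Literal(len(str(s)))

register_custom_function(LENGTH, length_func)

query = """
PREFIX custom: <%s>
SELECT ?s ?length WHERE {
    BIND("Hello, World" AS ?s)
    BIND(custom:length(?s) AS ?length)
}
""" % (NAMESPACE,)

for row in rdflib.Graph().query(query):
    print(row)
```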

How do I optionally set a platform/model attribute from my run script?

I'm using the VLAB MPC5xxx Toolbox.
I have a run script that is controlling my simulation, which loads the platform in the usual way, then runs it:
import vlab
import os
import sysc

image_path = os.path.join('o5e',
                          'firmware.open5xxxecu-e6009bbcfcd1',
                          'bin', 'o5e_dbg.elf')

vlab.load('mpc.mpc5674f.sim', args=['--testbench=o5e_testbench',
                                    "--image=%s" % image_path,
                                    "--debugger-config=GHS_MULTI",
                                    "--trace=+src:sc_report",
                                    ])

vcd_sink = vlab.trace.sink.vcd("mpc.mpc5674f.sim.vcd")
vlab.add_trace("mpc5674f.PBRIDGE.EDMA_B", sink=vcd_sink)
for i in range(32):
    vlab.add_trace("mpc5674f.PBRIDGE.ETPU.CH_OUT_A[%d]" % i, sink=vlab.trace.sink.console)

vlab.run(11, "ms", blocking=True)
vlab.exit()
I want to give this run script an argument to turn on tracing in the core, which I know you do by setting the tracing attribute on the core. And I know I can read the options of the script using Python's optparse.
The problem is that the attribute has to be set before the end of elaboration, and the only place I can access the simulation before elaboration is in the testbench, but there seems to be no way to pass parameters (for example, script arguments) to the testbench.
How should I pass the argument from my script into the testbench so that it conditionally turns the core tracing on or off?
So, I think the answer is:
1) Don't try to use the testbench for things that aren't really to do with the testbench.
2) Use "phase breakpoints" to do things that need to happen at specific phases in the simulation.
e.g.:
import os
import vlab
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--core-instrumentation", dest="core_instrumentation",
                  action="store_true",
                  help="turn on core instrumentation")
(options, args) = parser.parse_args()

if options.core_instrumentation:
    vlab.add_phase_breakpoint("before_end_of_elaboration",
                              action=lambda bp: vlab.write_attribute("mpc5674f.Core0.log_filter", "+instr"))

image_path = os.path.join('o5e',
                          'firmware.open5xxxecu-e6009bbcfcd1',
                          'bin', 'o5e_dbg.elf')

vlab.load('mpc.mpc5674f.sim', args=['--testbench=o5e_testbench',
                                    "--image=%s" % image_path,
                                    "--debugger-config=GHS_MULTI",
                                    ])

vcd_sink = vlab.trace.sink.vcd("mpc.mpc5674f.sim.vcd")
vlab.add_trace("mpc5674f.PBRIDGE.EDMA_B", sink=vcd_sink)
for i in range(32):
    vlab.add_trace("mpc5674f.PBRIDGE.ETPU.CH_OUT_A[%d]" % i, sink=vlab.trace.sink.console)

vlab.run(11, "ms", blocking=True)
vlab.exit()
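If more than one thing needs to happen before the end of elaboration, the same phase-breakpoint idea generalizes. Here is a sketch that collects conditional attribute writes and applies them from a single breakpoint; it only reuses vlab.add_phase_breakpoint and vlab.write_attribute as shown above, and the attribute path is the same placeholder, not verified against a real platform.
```python
import vlab
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--core-instrumentation", dest="core_instrumentation",
                  action="store_true", help="turn on core instrumentation")
(options, args) = parser.parse_args()

# collect every conditional attribute write in one place...
pending_writes = {}
if options.core_instrumentation:
    pending_writes["mpc5674f.Core0.log_filter"] = "+instr"

# ...and apply them all from a single phase breakpoint before the end of elaboration
def apply_pending_writes(bp):
    for attribute, value in pending_writes.items():
        vlab.write_attribute(attribute, value)

if pending_writes:
    vlab.add_phase_breakpoint("before_end_of_elaboration", action=apply_pending_writes)
```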

How do I export PSDs as PNGs with py-appscript?

I wrote a script to export PSDs as PNGs with rb-appscript. That's fine and dandy, but I can't seem to pull it off in py-appscript.
Here's the ruby code:
#!/usr/bin/env ruby
require 'rubygems'
require 'optparse'
require 'appscript'

ps = Appscript.app('Adobe Photoshop CS5.app')
finder = Appscript.app("Finder.app")
path = "/Users/nbaker/Desktop/"

ps.activate
ps.open(MacTypes::Alias.path("#{path}guy.psd"))
layerSets = ps.current_document.layer_sets.get

# iterate through all layers and hide them
ps.current_document.layers.get.each do |layer|
  layer.visible.set false
end
layerSets.each do |layerSet|
  layerSet.visible.set false
end

# iterate through all layerSets, make them visible, and create a PNG for them
layerSets.each do |layerSet|
  name = layerSet.name.get
  layerSet.visible.set true
  ps.current_document.get.export(:in => "#{path}#{name}.png", :as => :save_for_web,
                                 :with_options => {:web_format => :PNG, :png_eight => false})
  layerSet.visible.set false
end
And here's the apparently nonequivalent python code:
from appscript import *
from mactypes import *

# fire up photoshop
ps = app("Adobe Photoshop CS5.app")
ps.activate()

# open the file for editing
path = "/Users/nbaker/Desktop/"
f = Alias(path + "guy.psd")
ps.open(f)
layerSets = ps.current_document.layer_sets()

# iterate through all layers and hide them
for layer in ps.current_document.layers():
    layer.visible.set(False)

# ... and layerSets
for layerSet in layerSets:
    layerSet.visible.set(False)

# iterate through all layerSets, make them visible, and create a PNG for them
for layerSet in layerSets:
    name = layerSet.name()
    layerSet.Visible = True
    ps.current_document.get().export(in_=Alias(path + name + ".png"), as_=k.save_for_web,
                                     with_options={"web_format": k.PNG, "png_eight": False})
The only part of the python script that doesn't work is the saving. I've gotten all sorts of errors trying different export options and stuff:
Connection is invalid... General Photoshop error occurred. This functionality may not be available in this version of Photoshop... Can't make some data into the expected type...
I can live with just the Ruby script, but all of our other scripts are in Python, so it would be nice to be able to pull this off in Python. Thanks in advance, internet.
Bert,
I think I have a solution for you – albeit several months later. Note that I'm a Python noob, so this is by no means elegant.
When I tried out your script, it kept crashing for me too. However, by removing the "Alias" in the export call, it seems to be OK:
from appscript import *
from mactypes import *

# fire up photoshop
ps = app("Adobe Photoshop CS5.1.app")
ps.activate()

# open the file for editing
path = "/Users/ian/Desktop/"
f = Alias(path + "Screen Shot 2011-10-13 at 11.48.51 AM.psd")
ps.open(f)
layerSets = ps.current_document.layer_sets()

# no need to iterate
ps.current_document.layers.visible.set(False)
ps.current_document.layer_sets.visible.set(False)

# iterate through all layerSets, make them visible, and create a PNG for them
for l in layerSets:
    name = l.name()
    # print l.name()
    l.visible.set(True)
    # make its layers visible too
    l.layers.visible.set(True)
    # f = open(path + name + ".png", "w").close()
    ps.current_document.get().export(in_=(path + name + ".png"), as_=k.save_for_web,
                                     with_options={"web_format": k.PNG, "png_eight": False})
I noticed, too, that you iterate through all the layers to toggle their visibility. You can actually just issue them all a single command at once (see the sketch below); my understanding is that it's faster, since it doesn't have to keep issuing separate Apple Events.
The logic of my script isn't correct (with regard to turning layers on and off), but you get the idea.
Ian
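To illustrate the remark about batched commands, here is a small sketch that only uses calls already shown above; the batched form sends one Apple Event covering every layer instead of one event per layer.
```python
from appscript import app

ps = app("Adobe Photoshop CS5.1.app")

# slower: one Apple Event round-trip per layer
for layer in ps.current_document.layers():
    layer.visible.set(False)

# faster: a single Apple Event that targets every layer at once
ps.current_document.layers.visible.set(False)
```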