TableEditor fails to update correct range - traits

I came across behavior where a RangeEditor does not work properly inside a TableEditor.
from traits.api import *
from traitsui.api import *

class TableItem(HasTraits):
    r = Range(1, 6)

class Table(HasTraits):
    t = List(Instance(TableItem))

    def _t_default(self):
        return [TableItem()]

    traits_view = View(
        Item(name='t',
             editor=TableEditor(
                 columns=[
                     ObjectColumn(label='Number', editor=RangeEditor(mode='spinner'),
                                  name='r', editable=True)
                 ]
             ),
             height=250, width=250, show_label=False))

Table().configure_traits()
The resulting behavior is that the range can only be adjusted between 0 and 1. If mode='spinner' is not specified, it acts as if the range were a float between 0 and 1. Of course, in the above example, whenever the range is set to 0 an error is raised because the trait doesn't accept values outside the interval [1, 6].
This is very clearly a bug and probably won't ever be fixed as enaml moves forward. But is there a simple workaround?

I found a workaround:
from traits.api import *
from traitsui.api import *

class TableItem(HasTraits):
    r = Range(1, 6)
    _integer_value_one = Constant(1)
    _integer_value_six = Constant(6)

class Table(HasTraits):
    t = List(Instance(TableItem))

    def _t_default(self):
        return [TableItem()]

    traits_view = View(
        Item(name='t',
             editor=TableEditor(
                 columns=[
                     ObjectColumn(label='Number',
                                  editor=RangeEditor(mode='spinner',
                                                     high_name='_integer_value_six',
                                                     low_name='_integer_value_one'),
                                  name='r', editable=True)
                 ]
             ),
             height=250, width=250, show_label=False))

Table().configure_traits()
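Depending on the TraitsUI version, it may also be enough to give the column's RangeEditor explicit integer bounds instead of routing them through the auxiliary Constant traits. This is an untested sketch of that variant, using RangeEditor's low/high keyword arguments:
# Untested sketch: pass integer bounds directly to the column's editor
# instead of referring to Constant traits on the row object.
ObjectColumn(label='Number',
             name='r',
             editable=True,
             editor=RangeEditor(mode='spinner', low=1, high=6))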


ValueError: <module 'random' from 'C:...anaconda3\\lib\\random.py'> cannot be used to seed a numpy.random.RandomState instance

I am trying to perform automatic clustering with UMAP.
I am using the R wrapper function of UMAP, with all the requirements satisfied, but unfortunately I cannot set the seed in the umapr function. I tried to run the code:
hspace = {
    "n_neighbors": hp.choice('n_neighbors', range(3, 32)),
    "n_components": hp.choice('n_components', range(3, 32)),
    "min_cluster_size": hp.choice('min_cluster_size', range(2, 32)),
    "random_state": 42
}

label_lower = 10
label_upper = 100
max_evals = 25  # change it to 50 or 100 for extra steps as wished

import importlib
importlib.reload(utils)

%%time
from utils import *
best_params_use, best_clusters_use, trials_use = utils.bayesian_search(embeddings_st1,
                                                                       space=hspace,
                                                                       label_lower=label_lower,
                                                                       label_upper=label_upper,
                                                                       max_evals=max_evals)
and the error I get is:
ValueError: <module 'random' from 'C:...anaconda3\lib\random.py'> cannot be used to seed a numpy.random.RandomState instance
Can someone please help me solve this problem? I tried to open the 'random' file and couldn't fix it.
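No answer was posted, but the error message itself narrows things down: somewhere the Python random module object, rather than an integer, is being passed as random_state. UMAP validates its seed with scikit-learn's check_random_state (or an equivalent helper, as far as I can tell), which raises exactly this message for such a value. A minimal sketch that reproduces it:
import random
from sklearn.utils import check_random_state

# An int, None, or a numpy RandomState is a valid seed:
rs = check_random_state(42)

# Passing the `random` module itself reproduces the reported error:
check_random_state(random)
# ValueError: <module 'random' from '...'> cannot be used to seed a
# numpy.random.RandomState instance
So the thing to check is what utils.bayesian_search actually forwards as random_state: it needs to be an int (like the 42 in hspace), None, or a numpy RandomState, not the imported random module.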

Spark Scala Convert Sparse to Dense Feature

I have the following output showing the DataFrame in which I'm trying to one-hot encode a String column:
+---------+--------+------------------+-----------+--------------+----------+----------+-------------+------------------+---------------+-------------+
|longitude|latitude|housing_median_age|total_rooms|total_bedrooms|population|households|median_income|median_house_value|ocean_proximity| feature|
+---------+--------+------------------+-----------+--------------+----------+----------+-------------+------------------+---------------+-------------+
| -122.28| 37.81| 52.0| 340.0| 97.0| 200.0| 87.0| 1.5208| 112500.0| [NEAR BAY]|(5,[3],[1.0])|
| -122.13| 37.67| 40.0| 1748.0| 318.0| 914.0| 317.0| 3.8676| 184000.0| [NEAR BAY]|(5,[3],[1.0])|
| -122.07| 37.67| 27.0| 3239.0| 671.0| 1469.0| 616.0| 3.2465| 230600.0| [NEAR BAY]|(5,[3],[1.0])|
| -122.13| 37.66| 19.0| 862.0| 167.0| 407.0| 183.0| 4.3456| 163000.0| [NEAR BAY]|(5,[3],[1.0])|
As can be seen, the feature column is calculated from the ocean_proximity column. I now want to expand this feature column into a dense vector, and for that I tried something like this:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, OneHotEncoder}
import org.apache.spark.ml.feature.CountVectorizer
import org.apache.spark.mllib.linalg.Vector
import spark.implicits._
// Identify how many distinct values are in the OCEAN_PROXIMITY column
val distinctOceanProximities = dfRaw.select(col("ocean_proximity")).distinct().as[String].collect()
val oceanProximityAsArrayDF = dfRaw.withColumn("ocean_proximity", array("ocean_proximity"))
val countModel = new CountVectorizer().setInputCol("ocean_proximity").setOutputCol("feature").fit(oceanProximityAsArrayDF)
val transformedDF = countModel.transform(oceanProximityAsArrayDF)
transformedDF.show()
def columnExtractor(idx: Int) = udf((v: Vector) => v(idx))
val featureCols = (0 until distinctOceanProximities.size).map(idx => columnExtractor(idx)($"feature").as(s"${distinctOceanProximities(idx)}")) // ${...} so each column is named after the category, not the whole array
val toDense = udf((v:Vector) => v.toDense)
val denseDF = transformedDF.withColumn("feature", toDense($"feature"))
denseDF.show()
This however fails with the following message:
org.apache.spark.sql.AnalysisException: Cannot up cast `input` from struct<type:tinyint,size:int,indices:array<int>,values:array<double>> to struct<type:tinyint,size:int,indices:array<int>,values:array<double>>.
The type path of the target object is:
- root class: "org.apache.spark.mllib.linalg.Vector"
You can either add an explicit cast to the input data or choose a higher precision type of the field in the target object
at org.apache.spark.sql.errors.QueryCompilationErrors$.upCastFailureError(QueryCompilationErrors.scala:137)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveUpCast$$fail(Analyzer.scala:3438)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36$$anonfun$applyOrElse$198.applyOrElse(Analyzer.scala:3467)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36$$anonfun$applyOrElse$198.applyOrElse(Analyzer.scala:3445)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:318)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:323)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChild$2(TreeNode.scala:377)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$4(TreeNode.scala:438)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.immutable.List.map(List.scala:298)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:438)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:323)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:94)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116)
at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:85)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36.applyOrElse(Analyzer.scala:3445)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36.applyOrElse(Analyzer.scala:3441)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$3(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:221)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
It was quite annoying, but the actual error was caused by using the wrong import.
Instead of:
import org.apache.spark.mllib.linalg.Vector
Using this:
import org.apache.spark.ml.linalg.Vector
Solved the issue!

Custom SPARQL functions in rdflib

What is a good way to hook a custom SPARQL function into rdflib?
I have been looking around in rdflib for an entry point for custom functions. I found no dedicated entry point, but found that rdflib.plugins.sparql.CUSTOM_EVALS might be a place to add a custom function.
So far I have made an attempt with the code below. It seems "dirty" to me: I am calling a "hidden" function (_eval) and I am not sure I got all the argument updating correct. Beyond the custom_eval.py example code (which forms the basis for my code) I found little other code or documentation about CUSTOM_EVALS.
import rdflib
from rdflib.plugins.sparql.evaluate import evalPart
from rdflib.plugins.sparql.sparql import SPARQLError
from rdflib.plugins.sparql.evalutils import _eval
from rdflib.namespace import Namespace
from rdflib.term import Literal

NAMESPACE = Namespace('//custom/')
LENGTH = rdflib.term.URIRef(NAMESPACE + 'length')

def customEval(ctx, part):
    """Evaluate custom function."""
    if part.name == 'Extend':
        cs = []
        for c in evalPart(ctx, part.p):
            if hasattr(part.expr, 'iri'):
                # A function
                argument = _eval(part.expr.expr[0], c.forget(ctx, _except=part.expr._vars))
                if part.expr.iri == LENGTH:
                    e = Literal(len(argument))
                else:
                    raise SPARQLError('Unhandled function {}'.format(part.expr.iri))
            else:
                e = _eval(part.expr, c.forget(ctx, _except=part._vars))
            if isinstance(e, SPARQLError):
                raise e
            cs.append(c.merge({part.var: e}))
        return cs
    raise NotImplementedError()

QUERY = """
PREFIX custom: <%s>
SELECT ?s ?length WHERE {
    BIND("Hello, World" AS ?s)
    BIND(custom:length(?s) AS ?length)
}
""" % (NAMESPACE,)

rdflib.plugins.sparql.CUSTOM_EVALS['exampleEval'] = customEval

for row in rdflib.Graph().query(QUERY):
    print(row)
So first off, I want to thank you for showing how you implemented a new SPARQL function.
Secondly, by using your code I was able to create a SPARQL function that compares two strings using the Levenshtein distance. It has been really insightful, and I want to share it because it holds additional documentation that could help other developers create their own custom SPARQL functions.
# Imports needed to introduce a new SPARQL function
import rdflib
from rdflib.plugins.sparql.evaluate import evalPart
from rdflib.plugins.sparql.sparql import SPARQLError
from rdflib.plugins.sparql.evalutils import _eval
from rdflib.namespace import Namespace
from rdflib.term import Literal

# Import for the custom function calculation
from Levenshtein import distance as levenshtein_distance  # python-Levenshtein==0.12.2

def SPARQL_levenshtein(ctx: object, part: object) -> object:
    """
    The first two variables retrieved from a SPARQL query are compared using the Levenshtein distance.
    The distance value is then stored in a Literal object and added to the query results.

    Example:
        Query:
            PREFIX custom: //custom/    # Note: this prefix references the custom function
            SELECT ?label1 ?label2 ?levenshtein WHERE {
                BIND("Hello" AS ?label1)
                BIND("World" AS ?label2)
                BIND(custom:levenshtein(?label1, ?label2) AS ?levenshtein)
            }
        Retrieve:
            ?label1 ?label2
        Calculation:
            levenshtein_distance(?label1, ?label2) = distance
        Output:
            Save distance in a Literal object.

    :param ctx: <class 'rdflib.plugins.sparql.sparql.QueryContext'>
    :param part: <class 'rdflib.plugins.sparql.parserutils.CompValue'>
    :return: <class 'rdflib.plugins.sparql.processor.SPARQLResult'>
    """
    # This part holds the basic implementation for adding new functions
    if part.name == 'Extend':
        cs = []
        # Information is retrieved, stored and passed through a generator
        for c in evalPart(ctx, part.p):
            # Checks if the function holds an internationalized resource identifier.
            # This will check if any custom functions are added.
            if hasattr(part.expr, 'iri'):
                # From here the real calculations begin.
                # First we get the variable arguments, for example ?label1 and ?label2
                argument1 = str(_eval(part.expr.expr[0], c.forget(ctx, _except=part.expr._vars)))
                argument2 = str(_eval(part.expr.expr[1], c.forget(ctx, _except=part.expr._vars)))
                # Here it checks if it can find our levenshtein IRI (example: //custom/levenshtein).
                # Please note that IRI and URI are almost the same.
                # Earlier this has been defined with the following:
                #   namespace = Namespace('//custom/')
                #   levenshtein = rdflib.term.URIRef(namespace + 'levenshtein')
                if part.expr.iri == levenshtein:
                    # After finding the correct path for the custom SPARQL function the evaluation can begin.
                    # Here the Levenshtein distance is calculated using ?label1 and ?label2 and stored as a Literal object.
                    # This object is then stored as an output value of the SPARQL query (example: ?levenshtein)
                    evaluation = Literal(levenshtein_distance(argument1, argument2))
                # Standard error handling and return statements
                else:
                    raise SPARQLError('Unhandled function {}'.format(part.expr.iri))
            else:
                evaluation = _eval(part.expr, c.forget(ctx, _except=part._vars))
            if isinstance(evaluation, SPARQLError):
                raise evaluation
            cs.append(c.merge({part.var: evaluation}))
        return cs
    raise NotImplementedError()

namespace = Namespace('//custom/')
levenshtein = rdflib.term.URIRef(namespace + 'levenshtein')

query = """
PREFIX custom: <%s>
SELECT ?label1 ?label2 ?levenshtein WHERE {
    BIND("Hello" AS ?label1)
    BIND("World" AS ?label2)
    BIND(custom:levenshtein(?label1, ?label2) AS ?levenshtein)
}
""" % (namespace,)

# Save the custom function in the custom evaluation dictionary.
rdflib.plugins.sparql.CUSTOM_EVALS['SPARQL_levenshtein'] = SPARQL_levenshtein

for row in rdflib.Graph().query(query):
    print(row)
To answer your question: "What is a good way to hook a custom SPARQL function into rdflib?"
Currently I'm developing a class that handles RDF data, and I believe it might be best to implement the following code in the __init__ function.
For example:
class ClassName():
    """DOCSTRING"""

    def __init__(self):
        """DOCSTRING"""
        # Save the custom function in the custom evaluation dictionary.
        rdflib.plugins.sparql.CUSTOM_EVALS['SPARQL_levenshtein'] = SPARQL_levenshtein
Please note that this SPARQL function will only work on the endpoint where it is implemented. Even though the SPARQL syntax in the query is correct, it is not possible to apply the function in SPARQL queries sent to endpoints like DBpedia. The DBpedia endpoint does not support this custom function (yet).
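As a side note, newer rdflib releases (5.x and later, as far as I know) expose a dedicated hook, rdflib.plugins.sparql.operators.register_custom_function, which avoids touching CUSTOM_EVALS at all. A minimal sketch of the length example from the question, assuming that API is available in your rdflib version:
import rdflib
from rdflib import Literal, URIRef
from rdflib.plugins.sparql.operators import register_custom_function

LENGTH = URIRef('//custom/length')

def sparql_length(arg):
    # Called with the already-evaluated argument term; must return an rdflib term.
    return Literal(len(arg))

register_custom_function(LENGTH, sparql_length)

query = """
PREFIX custom: <//custom/>
SELECT ?s ?length WHERE {
    BIND("Hello, World" AS ?s)
    BIND(custom:length(?s) AS ?length)
}
"""
for row in rdflib.Graph().query(query):
    print(row)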

Elm seed for Random.initialSeed - prefer current time [duplicate]

This question already has answers here:
elm generate random number
(2 answers)
Closed 7 years ago.
What's a simple way to do this?
The documentation for Random.initialSeed says:
"A good way to get an unexpected seed is to use the current time."
http://package.elm-lang.org/packages/elm-lang/core/2.1.0/Random#initialSeed
After a ton of reading, I can only find "solutions" that are well beyond my understanding of Elm and Functional Programming. They also don't seem to be solutions to this problem.
I'm currently hardcoding:
Random.initialSeed 314
If you use a library, please include the name used to get it from elm package. I've seen a solution that says to use Native.now, but I can't figure out how to get that one.
Stack Overflow is suggesting this one, but I can't understand how to apply it to my use case: Elm Current Date
You can try case nelson's answer from How do I get the current time in Elm?
From elm repl:
> import Now
> import Random
> Now.loadTime |> round -- get current time in Int
1455406828183 : Int
> Now.loadTime |> round |> Random.initialSeed -- get the Seed
Seed { state = State 1560073230 678, next = <function>, split = <function>, range = <function> }
: Random.Seed
I also have the code on my repo here.
Note: don't forget "native-modules": true in elm-package.json.
Edit:
to try the code,
git clone https://github.com/prt2121/elm-backup.git
cd elm-backup/now
elm make Now.elm
add "native-modules": true in elm-package.json
elm repl
The simplest way I can think of is to use the Elm Architecture and the Effects.tick mechanism to initialise the seed with a time value.
Here is an example of how this works:
import Html exposing (..)
import Html.Events exposing (onClick)
import Random exposing (Seed, generate, int, initialSeed)
import Time exposing (Time)
import Effects exposing (Effects, Never)
import Task exposing (Task)
import StartApp

type alias Model = { seed : Seed, value : Int }

type Action = Init Time | Generate

init : (Model, Effects Action)
init = (Model (initialSeed 42) 0, Effects.tick Init)

modelFromSeed : Seed -> (Model, Effects Action)
modelFromSeed seed =
  let
    (value', seed') = generate (int 1 1000) seed
  in
    (Model seed' value', Effects.none)

update : Action -> Model -> (Model, Effects Action)
update action model =
  case action of
    Init time ->
      modelFromSeed (initialSeed (round time))

    Generate ->
      modelFromSeed model.seed

view : Signal.Address Action -> Model -> Html
view address model =
  div []
    [ text ("Current value: " ++ (toString model.value))
    , br [] []
    , button [onClick address Generate] [text "New Value"]
    ]

app : StartApp.App Model
app =
  StartApp.start
    { init = init
    , update = update
    , view = view
    , inputs = []
    }

main : Signal Html
main = app.html

port tasks : Signal (Task Never ())
port tasks = app.tasks

ComplexModel not available on the client

I just started using Spyne and tried to use a ComplexModel as a parameter for one method. I mostly followed the user_manager example from the sources with spyne<2.99, but I always get an error from the client.factory.create() call.
Example code that fails:
from spyne.application import Application
from spyne.decorator import rpc
from spyne.service import ServiceBase
from spyne.protocol.soap import Soap11
from spyne.model.primitive import String, Integer
from spyne.model.complex import ComplexModel

class DatosFac(ComplexModel):
    __namespace__ = 'facturamanager.datosfac'
    numero = String(pattern=r'[A-Z]/[0-9]+')

class FacturaService(ServiceBase):
    @rpc(String, DatosFac, _returns=Integer)
    def updateFacData(self, numero, data):
        # do stuff
        return 1

application = Application([FacturaService], 'facturaManager.service',
                          in_protocol=Soap11(validator='lxml'),
                          out_protocol=Soap11())

from spyne.server.null import NullServer
s = NullServer(application)
data = s.factory.create('DatosFac')
If you run this code you get:
Traceback (most recent call last):
File "spyner.py", line 25, in <module>
data = s.factory.create('DatosFac')
File "/Users/marc/.pyEnvs/default/lib/python2.7/site-packages/spyne/client/_base.py", line 30, in create
return self.__app.interface.get_class_instance(object_name)
File "/Users/marc/.pyEnvs/default/lib/python2.7/site-packages/spyne/interface/_base.py", line 114, in get_class_instance
return self.classes[key]()
KeyError: 'DatosFac'
(I used NullServer to make it easier to reproduce, but the same happens over Soap+Wsgi).
I am pretty much stuck at this, as I don't see what's essentially different between this code and the user_manager examples.
What am I doing wrong?
thanks,
marc
Thanks for providing a fully working example.
The difference is that the application's target namespace (tns) and the namespace of DatosFac are different.
Either do:
data = s.factory.create('{facturamanager.datosfac}DatosFac')
or remove __namespace__ from the DatosFac definition.
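A minimal sketch of both options against the example above (names are taken from the question; only the factory call or the class definition changes):
# Option 1: qualify the class name with its namespace.
data = s.factory.create('{facturamanager.datosfac}DatosFac')

# Option 2: remove the __namespace__ line from DatosFac so it falls under
# the application tns ('facturaManager.service'); the bare name then works:
# data = s.factory.create('DatosFac')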