neo4django mixin inheritance problems - authentication

Following up on my previous question, I tried to implement what I need.
The following is the content of a Django app's models.py:
from neo4django.db import models
from neo4django.auth.models import User as AuthUser

class MyManager(models.manager.NodeModelManager):
    def filterLocation(self, **kwargs):
        qs = self.get_query_set()
        if 'dist' in kwargs:
            qs = qs.filter(_where_dist=kwargs['dist'])
        elif 'prov' in kwargs:
            qs = qs.filter(_where_prov=kwargs['prov'])
        elif 'reg' in kwargs:
            qs = qs.filter(_where_reg=kwargs['reg'])
        return qs

class MyMixin(object):
    _test = models.BooleanProperty(default=True)
    _where_dist = models.StringProperty(indexed=True)
    _where_prov = models.StringProperty(indexed=True)
    _where_reg = models.StringProperty(indexed=True)

    search = MyManager()

    class Meta:
        abstract = True

class Activity(MyMixin, models.NodeModel):
    name = models.StringProperty()

class User(MyMixin, AuthUser):
    info = models.StringProperty()
I have several problems. The first is that MyMixin's attributes are not inherited:
>>> joe=User.objects.create(username='joe') # OK!
>>> joe
<User: joe>
>>> bill=User.objects.create(username='bill',_test=True)
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/manager.py", line 43, in create
return self.get_query_set().create(**kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/query.py", line 1296, in create
return super(NodeQuerySet, self).create(**kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/django/db/models/query.py", line 375, in create
obj = self.model(**kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/base.py", line 141, in __init__
super(NodeModel, self).__init__(*args, **kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/django/db/models/base.py", line 367, in __init__
raise TypeError("'%s' is an invalid keyword argument for this function" % kwargs.keys()[0])
TypeError: '_test' is an invalid keyword argument for this function
But create also fails to set User's own attributes!
>>> k=User.objects.create(username='kevin',info='The Best')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/manager.py", line 43, in create
return self.get_query_set().create(**kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/query.py", line 1296, in create
return super(NodeQuerySet, self).create(**kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/django/db/models/query.py", line 375, in create
obj = self.model(**kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/base.py", line 141, in __init__
super(NodeModel, self).__init__(*args, **kwargs)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/django/db/models/base.py", line 367, in __init__
raise TypeError("'%s' is an invalid keyword argument for this function" % kwargs.keys()[0])
TypeError: 'info' is an invalid keyword argument for this function
Neither the mixin's attributes nor User's own attributes exist on User.
If I derive in the reverse order:
class User(AuthUser, MyMixin):
they are present, but I don't think that is good practice; shouldn't core models go on the right?
Anyway, as we see below, Activity does not have this problem, as if AuthUser had stripped all the attributes (is that intended behavior?).
Meanwhile, the alternative creation method works:
>>> k=User(username='kevin',info='The Best')
>>> k.save()
>>> k
<User: kevin>
But with the other model, Activity, which inherits directly from NodeModel
(whereas User has an intermediate parent, AuthUser), things are better:
>>> a=Activity.objects.create(name="AA")
>>> a
<Activity: Activity object>
Several tests with simple NodeModel inheritance were fine;
the problems arise with multiple inheritance and mixins.
Another problem, with my NodeModelManager:
>>> User.search.filterLocation(dist="b")
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/tonjo/prj/tuned_prj/tuned_django/myapp/models.py", line 6, in filterLocation
qs = self.get_query_set()
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/manager.py", line 31, in get_query_
set
return NodeQuerySet(self.model)
File "/home/tonjo/venv/tuned/local/lib/python2.7/site-packages/neo4django/db/models/query.py", line 1222, in __init__
self._app_label = model._meta.app_label
AttributeError: 'NoneType' object has no attribute '_meta'
This one is beyond my comprehension ;)
MyManager worked fine in a previous test where I derived from a NodeModel child,
not from a mixin.

This is a pretty complicated question, but hopefully I can give you a pointer.
First- you need to understand that Django fields (and by extension neo4django properties) cooperate with the class on which they're defined. That's why they only work when defined on a Model (or, in neo4django, a NodeModel). There is no easy way to do multiple inheritance using Django models and fields- my mixin suggestion from your other question allows adding Python methods and attributes, but won't magically make Property or Field play nicely with object as a parent class.
If you really want to avoid duplication of property definitions in this situation, you have a few choices.
One is to use a shared superclass- but in this case, you can't, since one of your classes needs to inherit from neo4django.auth.models.User. This particular requirement will go away when neo4django supports Django 1.5+, which allows swappable user models.
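For models that don't need to inherit from AuthUser (like Activity), the shared-superclass approach would look roughly like the sketch below. This is only a sketch, assuming neo4django handles abstract base NodeModels the way Django handles abstract models; the name Localized is made up for illustration.

from neo4django.db import models

class Localized(models.NodeModel):
    """Abstract NodeModel holding the shared location properties."""
    _where_dist = models.StringProperty(indexed=True)
    _where_prov = models.StringProperty(indexed=True)
    _where_reg = models.StringProperty(indexed=True)

    search = MyManager()

    class Meta:
        abstract = True

class Activity(Localized):
    name = models.StringProperty()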
Most metaprogramming won't work easily, since Django and neo4django make use of metaclasses. That said, I'm sure you could hack around this with a clever class decorator or child metaclass- but I'm not sure you should from a sanity standpoint :)
Let me know how it goes- maybe I'm missing an easier approach.

Related

using pandas.read_csv, how can one process all errors, receive all non-error data?

Data which, for me, generates an exception instead of invoking the 'on_bad_lines' handler is at:
https://opencalaccess.org/misc/NAMES_CD.TSV
I have this:
bad_lines = list()
def bad_line_finder(x):
bad_lines.append(str(x))
return None
for file in os.listdir(dir):
bad_lines = list()
try:
for df in pd.read_csv(f"{dir}/{file}",
sep='\t',
on_bad_lines=bad_line_finder,
engine='python',
chunksize=1000):
print(f"\n{target}")
df.info()
print(f"Bad Lines: {bad_lines}")
bad_lines = list()
except:
print("EXCEPTION:")
traceback.print_exc()
and this works great. There are errors in the files and the handler deals with them so that I can keep track of them. Except, why do I still see this:
EXCEPTION:
Traceback (most recent call last):
File "/home/ray/Projects/opencalaccess-data/import.py", line 41, in <module>
for df in pd.read_csv(f"{dir}/{file}",
File "/home/ray/Projects/opencalaccess-data/.venv/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1698, in __next__
return self.get_chunk()
File "/home/ray/Projects/opencalaccess-data/.venv/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1810, in get_chunk
return self.read(nrows=size)
File "/home/ray/Projects/opencalaccess-data/.venv/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1778, in read
) = self._engine.read( # type: ignore[attr-defined]
File "/home/ray/Projects/opencalaccess-data/.venv/lib/python3.10/site-packages/pandas/io/parsers/python_parser.py", line 250, in read
content = self._get_lines(rows)
File "/home/ray/Projects/opencalaccess-data/.venv/lib/python3.10/site-packages/pandas/io/parsers/python_parser.py", line 1114, in _get_lines
new_rows.append(next(self.data))
_csv.Error: ' ' expected after '"'
What is the "on_bad_lines" option doing if it does not handle all of the bad lines? Which of them will it handle, and which will it not?
This is a government data source. There are format errors in the data that cannot be corrected by the agency, because they constitute the official record. So I must fix them myself. But which of them throw exceptions, and which do not?
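For what it's worth, the _csv.Error above is raised by the python engine's csv tokenizer (here, on an unbalanced quote) before the row ever reaches the bad-line machinery, so the on_bad_lines callback never sees it; the callback only receives rows that tokenize cleanly but have the wrong number of fields. One possible workaround, sketched below under the assumption that the stray double quotes in the TSV carry no meaning, is to disable quote handling entirely:

import csv
import pandas as pd

# With QUOTE_NONE the tokenizer treats '"' as an ordinary character, so it
# cannot fail on unbalanced quotes; malformed rows then fall through to the
# on_bad_lines callback instead of raising _csv.Error.
for df in pd.read_csv("NAMES_CD.TSV",
                      sep='\t',
                      engine='python',
                      quoting=csv.QUOTE_NONE,
                      on_bad_lines=bad_line_finder,
                      chunksize=1000):
    df.info()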

pyspark RDDs strip attributes of numpy subclasses

I've been fighting an unexpected behavior when attempting to construct a subclass of numpy ndarray within a map call to a pyspark RDD. Specifically, the attribute that I added within the ndarray subclass appears to be stripped from the resulting RDD.
The following snippets contain the essence of the issue.
import numpy as np

class MyArray(np.ndarray):
    def __new__(cls, shape, extra=None, *args):
        obj = super().__new__(cls, shape, *args)
        obj.extra = extra
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.extra = getattr(obj, "extra", None)

def shape_to_array(shape):
    rval = MyArray(shape, extra=shape)
    rval[:] = np.arange(np.product(shape)).reshape(shape)
    return rval
If I invoke shape_to_array directly (not under pyspark), it behaves as expected:
x = shape_to_array((2,3,5))
print(x.extra)
outputs:
(2, 3, 5)
But, if I invoke shape_to_array via a map to an RDD of inputs, it goes wonky:
from pyspark.sql import SparkSession
sc = SparkSession.builder.appName("Steps").getOrCreate().sparkContext
rdd = sc.parallelize([(2,3,5),(2,4),(2,5)])
result = rdd.map(shape_to_array).cache()
print(result.map(lambda t:type(t)).collect())
print(result.map(lambda t:t.shape).collect())
print(result.map(lambda t:t.extra).collect())
Outputs:
[<class '__main__.MyArray'>, <class '__main__.MyArray'>, <class '__main__.MyArray'>]
[(2, 3, 5), (2, 4), (2, 5)]
22/10/15 15:48:02 ERROR Executor: Exception in task 7.0 in stage 2.0 (TID 23)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/Cellar/apache-spark/3.3.0/libexec/python/lib/pyspark.zip/pyspark/worker.py", line 686, in main
process()
File "/usr/local/Cellar/apache-spark/3.3.0/libexec/python/lib/pyspark.zip/pyspark/worker.py", line 678, in process
serializer.dump_stream(out_iter, outfile)
File "/usr/local/Cellar/apache-spark/3.3.0/libexec/python/lib/pyspark.zip/pyspark/serializers.py", line 273, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/local/Cellar/apache-spark/3.3.0/libexec/python/lib/pyspark.zip/pyspark/util.py", line 81, in wrapper
return f(*args, **kwargs)
File "/var/folders/w7/42_p7mcd1y91_tjd0jzr8zbh0000gp/T/ipykernel_94831/2519313465.py", line 1, in <lambda>
AttributeError: 'MyArray' object has no attribute 'extra'
What happened to the extra attribute of the MyArray instances?
Thanks much for any/all suggestions
EDIT: A bit of additional info. If I add logging inside the shape_to_array function just before the return, I can verify that the extra attribute does exist on the MyArray object being returned. But when I attempt to access the MyArray elements of the RDD from the main driver, the attribute is gone.
After a night of sleeping on this, I remembered that I have often had issues with pyspark RDDs where the error message had to do with the return type not working with pickle.
I wasn't getting that error message this time because numpy.ndarray does work with pickle. BUT... the __reduce__ and __setstate__ methods of numpy.ndarray know nothing about the extra attribute added by the MyArray subclass. This is where extra was being stripped.
Adding the following two methods to MyArray solved everything.
def __reduce__(self):
    # append extra to the pickled state produced by ndarray
    mthd, cls, args = super().__reduce__()
    return mthd, cls, args + (self.extra,)

def __setstate__(self, args):
    # restore the ndarray state, then recover extra from the last element
    super().__setstate__(args[:-1])
    self.extra = args[-1]
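A quick way to sanity-check this without Spark is to run the same pickle round trip that the executors' serializer performs (a minimal check, assuming the two methods above have been added to MyArray):

import pickle

x = shape_to_array((2, 3, 5))
y = pickle.loads(pickle.dumps(x))   # the round trip Spark's serializer performs
print(type(y), y.shape, y.extra)    # extra now survives: (2, 3, 5)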
Thank you to anyone who took some time to think about my question.

keras.models.load_model() gives ValueError

I have saved the trained model and the weights as below.
model, history, score = fit_model(model, train_batches, val_batches, callbacks=[callback])
model.save('./model')
model.save_weights('./weights')
Then I tried to load the saved model in the following way:
if __name__ == '__main__':
    model = keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
    test_batches, nb_samples = test_gen(dataset_test_path, 32, img_width, img_height)
    predict, loss, acc = predict_model(model, test_batches, nb_samples)
    print(predict)
    print(acc)
    print(loss)
But it gives me an error. What should I do to overcome this?
Traceback (most recent call last):
File "test_pro.py", line 34, in <module>
model = keras.models.load_model('./model',compile= False,custom_objects={"F1Score": tfa.metrics.F1Score})
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
return saved_model_load.load(filepath, compile, options)
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 138, in load
keras_loader.load_layers()
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 379, in load_layers
self.loaded_nodes[node_metadata.node_id] = self._load_layer(
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 407, in _load_layer
obj, setter = revive_custom_object(identifier, metadata)
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 921, in revive_custom_object
raise ValueError('Unable to restore custom object of type {} currently. '
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements `get_config`and `from_config` when saving. In addition, please use the `custom_objects` arg when calling `load_model()`.
Looking at the source code for Keras, the error is raised when trying to load a model with a custom object:
def revive_custom_object(identifier, metadata):
  """Revives object from SavedModel."""
  if ops.executing_eagerly_outside_functions():
    model_class = training_lib.Model
  else:
    model_class = training_lib_v1.Model

  revived_classes = {
      constants.INPUT_LAYER_IDENTIFIER: (
          RevivedInputLayer, input_layer.InputLayer),
      constants.LAYER_IDENTIFIER: (RevivedLayer, base_layer.Layer),
      constants.MODEL_IDENTIFIER: (RevivedNetwork, model_class),
      constants.NETWORK_IDENTIFIER: (RevivedNetwork, functional_lib.Functional),
      constants.SEQUENTIAL_IDENTIFIER: (RevivedNetwork, models_lib.Sequential),
  }
  parent_classes = revived_classes.get(identifier, None)
  if parent_classes is not None:
    parent_classes = revived_classes[identifier]
    revived_cls = type(
        compat.as_str(metadata['class_name']), parent_classes, {})
    return revived_cls._init_from_metadata(metadata)  # pylint: disable=protected-access
  else:
    raise ValueError('Unable to restore custom object of type {} currently. '
                     'Please make sure that the layer implements `get_config`'
                     'and `from_config` when saving. In addition, please use '
                     'the `custom_objects` arg when calling `load_model()`.'
                     .format(identifier))
The method only works with custom objects of the types defined in revived_classes. As you can see, it currently handles only input layer, layer, model, network, and sequential custom objects.
In your code, you pass a tfa.metrics.F1Score class in the custom_objects argument, which is of type METRIC_IDENTIFIER and therefore not supported (probably because it doesn't implement the get_config and from_config functions, as the error output says):
keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
It's been a while since I last worked with Keras but maybe you can try and follow what was proposed in this other related answer and wrap the call to tfa.metrics.F1Score in a method. Something like this (adjust it to your needs):
def f1(y_true, y_pred):
    metric = tfa.metrics.F1Score(num_classes=3, threshold=0.5)
    metric.update_state(y_true, y_pred)
    return metric.result()

keras.models.load_model('./model', compile=False, custom_objects={'f1': f1})

Instance of has no member

Maybe I'm getting something basic confused, but I can't seem to work out how to fix this issue.
The interpreter gives me
Traceback (most recent call last):
File "python", line 153, in <module>
File "python", line 90, in finish
File "python", line 25, in registered
AttributeError: 'Marathon' object has no attribute 'runnerList'
These all seem to be the same issue. Surely the instance does have members; I'm not sure why it thinks it doesn't.
class Marathon:

    # Creator
    # -------

    runnersList = []
    timesList = []

    def __init__(self):
        """Set up this marathon without any runners."""

    # Inspectors
    # ----------
    # These are called anytime.

    def registered(self, runner):
        """Return True if runner has registered, otherwise False."""
        for item in self.runnerList:
            if item == runner:
                return True
            else:
                return False
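For what it's worth, the class attribute is defined as runnersList (with an 's'), while registered loops over self.runnerList, which is why the attribute lookup fails. A minimal sketch of the method with the names matched up (and without returning False after only the first comparison):

def registered(self, runner):
    """Return True if runner has registered, otherwise False."""
    return runner in self.runnersList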

Tensorflow Object Detection, error while generating tfrecord [TypeError: None has type NoneType, but expected one of: int, long]

When checking different solutions available on the net, most people (including datitran) pointed out that it might be a missing class or a misspelled class in the train CSV file. I'm not able to figure that out, since the labelling was done with labelImg, which saves the classes as XML, and xml_to_csv.py converts that to a CSV. I'm not sure under what circumstances I could have missed or misspelled a class.
Here's the error I am dealing with:
(OT) nisxxxxx#xxxxxxxx:~/Desktop/OD/models/research/object_detection$ python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=data/train.record
Traceback (most recent call last):
File "generate_tfrecord.py", line 192, in <module>
tf.app.run()
File "/home/nisxxxxx/Desktop/test_OD/OT/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "generate_tfrecord.py", line 184, in main
tf_example = create_tf_example(group, path)
File "generate_tfrecord.py", line 173, in create_tf_example
'image/object/class/label': dataset_util.int64_list_feature(classes),
File "/home/nishanth/Desktop/test_OD/models/research/object_detection/utils/dataset_util.py", line 26, in int64_list_feature
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
TypeError: None has type NoneType, but expected one of: int, long
Has anyone been able to solve this problem?
I am not sure how many classes you have used, but after the final else, try returning 0 instead of None. For example:
if row_label == 'red':
    return 1
elif row_label == 'orange':
    return 2
elif row_label == 'blue':
    return 3
else:
    return 0
Just change the label name to whatever you used when labelling the images with the labelImg tool.
def class_text_to_int(row_label):
    if row_label == 'raccon':
        return 1
    else:
        return None
Instead of 'raccon', put your label name, e.g. 'car'.