AttributeError: 'NoneType' object has no attribute 'value' - binary-search-tree

I'm currently working on a problem regarding Binary Search Trees. My problem is that I get the error "AttributeError: 'NoneType' object has no attribute 'value'" and I don't see what I can do about it. This code is in my BST class.
def _insert(self, data):
    if self.root.value == data:
        return False
I have a Node class and a BinarySearchTree class.
class _Node:
    def __init__(self, value):
        self.right_child = None
        self.left_child = None
        self.value = value

class BST:
    def __init__(self):
        self.root = None
I imagine it has something to do with the fact that self.root is set to None in my __init__ method. How can I fix this problem?

self.root is None, so when you access self.root.value there is no object to read value from.
Instead of initializing self.root = None, initialize self.root = _Node(None).
Then, while inserting, if you are inserting at the root, check whether root.value is None: if it is, store the value there; otherwise continue with the rest of your logic.
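A common alternative to the sentinel-node approach is to guard against the empty tree directly in the public insert method. A minimal self-contained sketch (the non-root branching logic is illustrative, not taken from the question):

```python
class _Node:
    def __init__(self, value):
        self.right_child = None
        self.left_child = None
        self.value = value

class BST:
    def __init__(self):
        self.root = None

    def insert(self, data):
        # Guard against an empty tree instead of touching self.root.value
        if self.root is None:
            self.root = _Node(data)
            return True
        return self._insert(self.root, data)

    def _insert(self, node, data):
        if node.value == data:
            return False  # duplicate value, nothing inserted
        if data < node.value:
            if node.left_child is None:
                node.left_child = _Node(data)
                return True
            return self._insert(node.left_child, data)
        if node.right_child is None:
            node.right_child = _Node(data)
            return True
        return self._insert(node.right_child, data)

tree = BST()
tree.insert(5)
tree.insert(3)
print(tree.insert(5))  # False: 5 is already in the tree
```

This way the None check happens exactly once, at the entry point, and the recursive helper can assume it always receives a real node.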

Related

TypeError: 'str' object is not callable. Invoke a class function using a dictionary value

# Class and function
class cf:
    def duplicate(self, **kwargs):
        ...
        return df[df[field].isnull()]

# There is a dictionary
r = {'field': 'con', 'function': 'cf.duplicate'}

# Trying to call the function by looping over the dictionary r
for a in r:
    a['function'](a['field'])  # this line raises "'str' object is not callable" -- and as checked, a['function'] is a string
I need to call the class method duplicate, passing it the field fetched from the dict, and get its output.
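The usual fix is to store (or resolve) the callable itself instead of calling a string. A minimal sketch with illustrative names standing in for the real DataFrame logic:

```python
# Hypothetical stand-in for the asker's class; the real duplicate()
# filters a DataFrame, which is stubbed out here.
class CF:
    def duplicate(self, field):
        return f"duplicated {field}"

cf = CF()

# Option 1: store the bound method itself rather than the string 'cf.duplicate'
r = {'field': 'con', 'function': cf.duplicate}
result = r['function'](r['field'])  # call the dict *value*, which is callable

# Option 2: if the method name must stay a string, resolve it with getattr
r2 = {'field': 'con', 'function': 'duplicate'}
result2 = getattr(cf, r2['function'])(r2['field'])
```

Either way, the thing in parentheses position must be a function object; indexing a dict only ever hands back whatever was stored, and a stored string stays a string.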

Declaring computed python-level property in pydantic

I have a class deriving from pydantic.BaseModel and would like to create a "fake" attribute, i.e. a computed property. The property built-in does not seem to work with Pydantic the usual way. Below is the MWE, where the class stores value and defines a read/write property called half with the obvious meaning. Reading the property works fine with Pydantic, but the assignment fails.
I know Pydantic is modifying low-level details of attribute access; perhaps there is a way to define computed field in Pydantic in a different way?
import pydantic

class Object(object):
    def __init__(self, *, value):
        self.value = value
    half = property(lambda self: .5 * self.value,
                    lambda self, h: setattr(self, 'value', h * 2))

class Pydantic(pydantic.BaseModel):
    class Config:
        extra = 'allow'
    value: float
    half = property(lambda self: .5 * self.value,
                    lambda self, h: setattr(self, 'value', h * 2))

o, p = Object(value=1.), Pydantic(value=1.)
print(o.half, p.half)
o.half = p.half = 2
print(o.value, p.value)
outputs (value=1. was not modified by assigning half in the Pydantic case):
0.5 0.5
4 1.0
I happened to be working on the same problem today. Officially it is not supported yet, as discussed here.
However, I did find the following example which works well:
from pydantic import BaseModel, validator

class Person(BaseModel):
    first_name: str
    last_name: str
    full_name: str = None

    @validator("full_name", always=True)
    def composite_name(cls, v, values, **kwargs):
        return f"{values['first_name']} {values['last_name']}"
Do make sure your derived field comes after the fields you want to derive it from; otherwise the values dict will not contain the needed entries (e.g. full_name comes after first_name and last_name, which are fetched from values).
Instead of using a property, here's an example which shows how to use pydantic.root_validator to compute the value of an optional field:
https://daniellenz.blog/2021/02/20/computed-fields-in-pydantic/
I've adapted this for a similar application:
class Section(BaseModel):
    title: constr(strip_whitespace=True)
    chunks: conlist(min_items=1, item_type=Chunk)
    size: typing.Optional[PositiveInt] = None
    role: typing.Optional[typing.List[str]] = []
    license: constr(strip_whitespace=True)

    @root_validator
    def compute_size(cls, values) -> typing.Dict:
        if values["size"] is None:
            values["size"] = sum([
                chunk.get_size()
                for chunk in values["chunks"]
            ])
        return values
In this case each element of the discriminated union chunks has a get_size() method to compute its size. If the size field isn't specified explicitly in the input (e.g., loaded from a JSON file), it gets computed.
I created a pip package that allows you to easily create computed properties.
Here you can check it out:
https://pypi.org/project/pydantic-computed/
Using the package, the example of getting half of a value would look like this:
from pydantic import BaseModel
from pydantic_computed import Computed, computed

class SomeModel(BaseModel):
    value: float
    value_half: Computed[float]

    @computed("value_half")
    def compute_value_half(value: float):
        return value / 2

how do you type instance variable caching in Sorbet?

I have code that looks like this (playground link):
# typed: strict
class A
  extend T::Sig

  sig { returns(T::Array[Integer]) }
  def compute_expensive
    [1, 2, 3]
  end

  sig { returns(T::Array[Integer]) }
  def expensive
    @expensive ||= T.let(compute_expensive, T::Array[Integer])
  end
end
This fails to typecheck, saying that:
editor.rb:12: The instance variable @expensive must be declared inside initialize or declared nilable https://srb.help/5005
    12 |  @expensive ||= T.let(compute_expensive, Integer)
          ^^^^^^^^^^
I've tried a couple things to get around this…
When I declare the type as T.nilable(Integer), Sorbet says that the return type does not match the sig. Fair.
When I declare the type in initialize as @expensive = nil, Sorbet says that nil does not type-check against the Integer declaration below. Also fair.
If I declare @expensive = [] in initialize, my assignment with ||= becomes unreachable.
I can of course write @expensive = compute_expensive if @expensive.empty? and then return @expensive, but I'm more interested in how Sorbet's type system can accommodate the ||= pattern.
This feels like a really common pattern in Ruby to me! How can I get Sorbet to type-check it for me?
A Playground Link right back to you.
So, really, using initialize is the important part here.
sig { void }
def initialize
  @expensive = T.let(nil, T.nilable(T::Array[Integer]))
end
Because the memoized value is still nil up until the point expensive is actually called, you have to allow for it to be nil, along with T::Array[Integer], which then necessitates adding the declaration to initialize to make the class sound.

Instantiate only unique objects of a class

I'm trying to create a class that only creates an instance if the arguments passed in during instantiation are a unique combination. If the combination of arguments have previously been passed in, then return the instance that has already been previously created.
I'd like for this class to be inherited by other classes so they inherit the same behavior. This is my first attempt at a solution,
The base/parent class to be inherited:
class RegistryType(type):
    def __init__(cls, name, bases, namespace, *args):
        cls.instantiated_objects = {}

class AdwordsObject(object, metaclass=RegistryType):
    api = AdWordsAPI()

    def __new__(cls, *args):
        object_name = '-'.join(args)
        if object_name in cls.instantiated_objects:
            return cls.instantiated_objects[object_name]
        else:
            obj = super(AdwordsObject, cls).__new__(cls)
            cls.instantiated_objects[object_name] = obj
            # cls.newt_connection.commit()
            return obj
And this is how it's being used in the child class:
class ProductAdGroup(AdwordsObject):
    # init method only called if object being instantiated hasn't already been instantiated
    def __init__(self, product_name, keyword_group):
        self.name = '-'.join([product_name, keyword_group])

    @classmethod
    def from_string(cls, name: str):
        arguments = name.split('-')
        assert len(arguments) == 2, 'Incorrect ad group name convention. ' \
                                    'Use: Product-KeywordGroup'
        ad_group = cls(*arguments)
        return ad_group
I've run the program with this setup, but it seems like a new dict is being created every time a ProductAdGroup() is created, so memory usage is exploding, even though the program returns the instance that had already been instantiated.
Is there any way to fix this?
Thanks!!!
Your code seems to be right; the only incorrect thing above is your assumption in the comment: your __init__ method is always called when instantiating the class, regardless of whether __new__ returned a previously created instance or not.
So if you create extra objects in your __init__ method, that may be the cause of your memory leak. However, if you bind these new objects to the instance (self), they should just override a previously created object bound to the same attribute, which would then be freed. In the code posted here, that happens with self.name. It may be that your real __init__ does more things and attaches new objects somewhere other than the instance (for example, appending them to a list). If your __init__ methods are just as shown, the cause of your memory growth is not evident in the code you supplied.
As extra advice, not related to the problem you describe: you don't need a metaclass for this at all.
Just check for the existence of a cls.instantiated_objects dict in the __new__ method itself. Not writing an unneeded metaclass will simplify your codebase, avoid metaclass conflicts if your class hierarchy evolves, and may even do away with your problem if there is more code in your metaclass than you are showing here.
The base class __new__ method can be rewritten something like this:
class AdwordsObject(object):
    def __new__(cls, *args):
        if not cls.__dict__.get("instantiated_objects"):
            cls.instantiated_objects = {}
        name = '-'.join(args)
        if name in cls.instantiated_objects:
            return cls.instantiated_objects[name]
        instance = super().__new__(cls)
        cls.instantiated_objects[name] = instance
        return instance
And there is no more need for a custom metaclass.
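A quick self-contained demo of that metaclass-free version, with the AdWords-specific parts stubbed out, showing that repeated arguments reuse the same instance (argument values below are invented for illustration):

```python
class AdwordsObject:
    def __new__(cls, *args):
        # Looking in cls.__dict__ (not via attribute lookup) gives each
        # subclass its own registry instead of sharing the parent's.
        if not cls.__dict__.get("instantiated_objects"):
            cls.instantiated_objects = {}
        name = '-'.join(args)
        if name in cls.instantiated_objects:
            return cls.instantiated_objects[name]
        instance = super().__new__(cls)
        cls.instantiated_objects[name] = instance
        return instance

class ProductAdGroup(AdwordsObject):
    def __init__(self, product_name, keyword_group):
        # Note: __init__ still runs on every call, even for a reused instance
        self.name = '-'.join([product_name, keyword_group])

a = ProductAdGroup('Shoes', 'Running')
b = ProductAdGroup('Shoes', 'Running')
c = ProductAdGroup('Shoes', 'Walking')
print(a is b, a is c)  # True False: same arguments reuse the same instance
```

The `cls.__dict__.get(...)` check is the detail that matters: plain `getattr` would find the parent class's dict through inheritance and make all subclasses share one registry.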

Where is the class Normal declared in edward's source code?

When using edward, we always write from edward.models import Normal, but I didn't find the declaration of Normal on GitHub.
Can anybody tell me where it is?
They are defined in edward/models/random_variables.py.
You import the Normal class like this:
from edward.models import Normal
This suggests looking in edward/models/__init__.py, which has this line:
from edward.models.random_variables import *
Looking in edward/models/random_variables.py we find this code:
import inspect as _inspect

from edward.models.random_variable import RandomVariable as _RandomVariable
from tensorflow.contrib import distributions as _distributions

# Automatically generate random variable classes from classes in
# tf.contrib.distributions.
_globals = globals()
for _name in sorted(dir(_distributions)):
    _candidate = getattr(_distributions, _name)
    if (_inspect.isclass(_candidate) and
            _candidate != _distributions.Distribution and
            issubclass(_candidate, _distributions.Distribution)):
        # to use _candidate's docstring, must write a new __init__ method
        def __init__(self, *args, **kwargs):
            _RandomVariable.__init__(self, *args, **kwargs)
        __init__.__doc__ = _candidate.__init__.__doc__
        _params = {'__doc__': _candidate.__doc__,
                   '__init__': __init__}
        _globals[_name] = type(_name, (_RandomVariable, _candidate), _params)
        del _candidate
This goes through the tensorflow.contrib.distributions module looking for classes derived from tensorflow.contrib.distributions.Distribution (ignoring other attributes like e.g. the __file__ member of the module, or the base Distribution class itself). For each one, it does a bit of hacking (which only affects the generated documentation) then executes this key line:
_globals[_name] = type(_name, (_RandomVariable, _candidate), _params)
The type() built-in function creates a new type, i.e. declares a new class. The second parameter is the tuple of base classes, which here is edward's RandomVariable class and the TensorFlow distribution class. Earlier it defined _globals to be globals(), a built-in function returning the dictionary of the module's variables. Therefore, in the case you're interested in, the line above is equivalent to the following:
from edward.models.random_variable import RandomVariable as EdRandVar
from tensorflow.contrib.distributions import Normal as TfNormal
Normal = type("Normal", (EdRandVar, TfNormal), {...})
Which in turn is equivalent to this (if you ignore the docstring stuff):
from edward.models.random_variable import RandomVariable as EdRandVar
from tensorflow.contrib.distributions import Normal as TfNormal

class Normal(EdRandVar, TfNormal):
    pass