Python 3.8 sys.getrefcount() returns 5 on first call - python-3.8

Why does this print 5? I'm using Python 3.8. I understand that sys.getrefcount() returns a value 1 greater than expected, but 5?
from sys import getrefcount

class Foo():
    def __del__(self):
        print('__del__() called')

print(getrefcount(Foo))  # 5

Interesting one! I used the following script to get the list of referrers:
import gc
import pprint
import sys

class Example:
    def __del__(self):
        print("__del__() is called")

if __name__ == "__main__":
    reference_count = sys.getrefcount(Example)
    print(f"Reference count is {reference_count}")
    pretty = pprint.PrettyPrinter(indent=4)
    for referrer in gc.get_referrers(Example):
        pretty.pprint(referrer)
Here is the output
➜ python3.8 reference_count.py
Reference count is 5
<attribute '__dict__' of 'Example' objects>
<attribute '__weakref__' of 'Example' objects>
(<class '__main__.Example'>, <class 'object'>)
{ 'Example': <class '__main__.Example'>,
'__annotations__': {},
'__builtins__': <module 'builtins' (built-in)>,
'__cached__': None,
'__doc__': None,
'__file__': 'temp.py',
'__loader__': <_frozen_importlib_external.SourceFileLoader object at 0x1006d4550>,
'__name__': '__main__',
'__package__': None,
'__spec__': None,
'gc': <module 'gc' (built-in)>,
'pprint': <module 'pprint' from '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pprint.py'>,
'pretty': <pprint.PrettyPrinter object at 0x1006e29d0>,
'reference_count': 4,
'referrer': <Recursion on dict with id=4301742016>,
'sys': <module 'sys' (built-in)>}
Since sys.getrefcount() returns a value 1 greater than expected, this matches the list of referrers above: 4 referrers plus the temporary reference held by the getrefcount() call itself gives 5.
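A small hedged check (assuming a plain module-level class like the one above) makes the relationship explicit: each referrer found by the garbage collector accounts for one reference, and getrefcount() adds one more for its own argument.

import gc
import sys

class Example:
    pass

# Each referrer the GC finds holds one reference; getrefcount() reports
# one more because its argument is itself a temporary reference.
# (Refcounts can also include references the GC does not track, so the
# two numbers are not guaranteed to match in general.)
print(len(gc.get_referrers(Example)))  # e.g. 4
print(sys.getrefcount(Example))        # e.g. 5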
Correction
The point to note in the question is that we never actually take the reference count of an Example instance, only of the class itself, so __del__ is never called. Here is a slightly different example.
...
reference_count = sys.getrefcount(Example())
...
for referrer in gc.get_referrers(Example()):
...
Here is the output
__del__() is called
Reference count is 1
__del__() is called
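For completeness, here is a small sketch (the variable name obj is just for illustration) showing what happens when the instance is bound to a name instead of being a temporary:

import sys

class Example:
    def __del__(self):
        print("__del__() is called")

obj = Example()
print(sys.getrefcount(obj))  # 2: the name 'obj' plus getrefcount()'s own argument
del obj                      # last reference gone, so __del__() runs here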

Related

numpy.random functions give constant values on manim

I recently realized that numpy's random functions always give the same values when used in a Scene, as in this example:
from manimlib.imports import *

def r():
    return np.random.rand(5)

class MyScene(Scene):
    def construct(self):
        print(r())
will give the same values over and over:
manim foo.py MyScene -p
...
[0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 ]
Looking at manim/manimlib/scene/scene.py shows that there is a configuration parameter random_seed which defaults to 0.
When a Scene is created, it seeds the random generators (random.seed and np.random.seed) with this value.
To get varying random values again, set it to None:
def r():
    return np.random.rand(5)

class MyScene(Scene):
    CONFIG = dict(random_seed=None)

    def construct(self):
        print(r())
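If you would rather not touch CONFIG, another option (only a sketch, assuming manim seeds NumPy's global generator as described above) is to reseed explicitly at the start of construct():

from manimlib.imports import *

class MyScene(Scene):
    def construct(self):
        # Reseed from OS entropy so each run differs, even though the
        # Scene was constructed with the default random_seed=0.
        np.random.seed(None)
        print(np.random.rand(5))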

How to safely subclass ndarray and get behavior consistent with ndarray - odd nanmin/max results?

I'm trying to subclass ndarray so that I can add some additional fields. When I do this, however, I get odd behavior in a variety of numpy functions. For example, nanmin now returns an object of the type of my new array class, whereas previously I'd get a float64. Why? Is this a bug in nanmin or in my class?
import numpy as np

class NDArrayWithColumns(np.ndarray):
    def __new__(cls, obj, columns=None):
        obj = obj.view(cls)
        obj.columns = tuple(columns)
        return obj

    def __array_finalize__(self, obj):
        if obj is None: return
        self.columns = getattr(obj, 'columns', None)

NAN = float("nan")
r = np.array([1., 0., 1., 0., 1., 0., 1., 0., NAN, 1., 1.])
print "MIN", np.nanmin(r), type(np.nanmin(r))
gives:
MIN 0.0 <type 'numpy.float64'>
but
>>> r = NDArrayWithColumns(r, ["a"])
>>> print "MIN", np.nanmin(r), type(np.nanmin(r))
MIN 0.0 <class '__main__.NDArrayWithColumns'>
>>> print r.shape
(11,)
Note the change in type, and also that str(np.nanmin(r)) shows a single value, not 11.
In case you're interested, I'm subclassing because I'd like to track column names in matrices of a single dtype (structured arrays and record arrays allow fields of varying types).
You need to implement the __array_wrap__ method that gets called at the end of ufuncs, per the docs:
def __array_wrap__(self, out_arr, context=None):
    print('In __array_wrap__:')
    print('   self is %s' % repr(self))
    print('   arr is %s' % repr(out_arr))
    # then just call the parent
    return np.ndarray.__array_wrap__(self, out_arr, context)
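As a concrete illustration (a sketch only, assuming the nanmin() reduction routes through __array_wrap__ on your NumPy version), one possible policy is to collapse 0-d results back to plain NumPy scalars so reductions no longer return the subclass:

import numpy as np

class NDArrayWithColumns(np.ndarray):
    def __new__(cls, obj, columns=None):
        obj = np.asarray(obj).view(cls)
        obj.columns = tuple(columns) if columns is not None else None
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.columns = getattr(obj, 'columns', None)

    def __array_wrap__(self, out_arr, context=None):
        if out_arr.ndim == 0:
            # Collapse 0-d results to an ordinary scalar (e.g. numpy.float64)
            # instead of wrapping them in this subclass.
            return out_arr[()]
        return np.ndarray.__array_wrap__(self, out_arr, context)

r = NDArrayWithColumns([1., 0., float("nan"), 1.], columns=["a"])
print(type(np.nanmin(r)))  # expected here: numpy.float64 rather than the subclass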

parent method to append vector to attribute of derived class

My goal is to create a method anotherVar in a class Delta that appends an array to an existing array, and which I can call from a derived class (in this case myClass1).
The code I have written here doesn't accomplish this. Where am I going wrong? Presumably it's my definition of anotherVar?
import numpy as np

class Delta(object):
    def anotherVar(self):
        return np.vstack(self)

class myClass1(Delta):
    def __init__(self, *myVars):
        self.__myArray = np.vstack(myVars)

    @property
    def myArray(self):
        return self.__myArray

someVars1 = [1, 2, 3]
someVars2 = [4, 5, 6]
someVars3 = [7, 8, 9]

myResult = myClass1(someVars1, someVars2, someVars2)
myResult.anotherVar = someVars3
print myResult.myArray
[[1 2 3]
[4 5 6]
[4 5 6]]
There are two issues with your original code:
1. You're rebinding the name anotherVar of Delta to a plain variable. Most likely, you wanted to call
myResult.anotherVar(someVars3)
rather than
myResult.anotherVar = someVars3
as the latter replaces the method anotherVar with the list someVars3.
2. When you use double underscores, you trigger name mangling. If it's merely to mark an attribute/method as "private", you shouldn't: any developer who sees a single leading underscore on an attribute will understand that it is liable to change and should not be depended on as part of the public API.
After changing two lines in Delta and changing the double underscores into single underscores, your code works as you expect:
import numpy as np

class Delta(object):
    def anotherVar(self, arr):
        self._myArray = np.vstack((self._myArray, arr))

class myClass1(Delta):
    def __init__(self, *myVars):
        self._myArray = np.vstack(myVars)

    @property
    def myArray(self):
        return self._myArray
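With those changes in place, the call site from the question becomes a method call rather than an assignment:

someVars1 = [1, 2, 3]
someVars2 = [4, 5, 6]
someVars3 = [7, 8, 9]

myResult = myClass1(someVars1, someVars2, someVars2)
myResult.anotherVar(someVars3)   # append a row instead of rebinding the method
print myResult.myArray
# [[1 2 3]
#  [4 5 6]
#  [4 5 6]
#  [7 8 9]]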

Changing how object appears in interpreter

Is there a way to change how an object appears when displayed at the Python interpreter? For example:
>>> test = myobject(2)
>>> test
'I am 2'
OR
>>> test = myobject(2)
>>> test
myobject(2)
Yes, you can provide a definition for the special __repr__ method:
class Test:
    def __repr__(self):
        return "I am a Test"
>>> a = Test()
>>> a
I am a Test
In a real example, of course, you would include some values from the object's data members.
The __repr__ method is described in the Python data model documentation.
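For the second style shown in the question, a common convention (this is only a sketch; myobject is the hypothetical class from the question) is to make __repr__ look like the constructor call that would recreate the object:

class myobject:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        # Mirror the constructor call, so the interpreter shows myobject(2)
        return "myobject(%r)" % self.value

>>> test = myobject(2)
>>> test
myobject(2)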

reindexObject fails during FileField to BlobField migration in Plone 4.0.7

I'm trying to migrate from Plone 3.3.5 to Plone 4.0.7 and I'm stuck on a step that converts all the FileFields to BlobFields.
The Plone upgrade script successfully converts all native FileFields, but I have several custom AT-based classes which have to be converted manually. I've tried two ways of doing the conversion, both of which lead me to the same error:
1. Using schemaextender as outlined in the Plone migration guide and a source code example.
2. Renaming all FileFields to BlobFields and then running this script:
from AccessControl.SecurityManagement import newSecurityManager
from AccessControl import getSecurityManager
from Products.CMFCore.utils import getToolByName
from zope.app.component.hooks import setSite
from Products.contentmigration.migrator import BaseInlineMigrator
from Products.contentmigration.walker import CustomQueryWalker
from plone.app.blob.field import BlobField
import transaction

admin = app.acl_users.getUserById("admin")
newSecurityManager(None, admin)
portal = app.plone
setSite(portal)

def find_all_types_fields(portal_catalog, type_instance_to_search):
    # Map each content class name to the list of its fields of the given type.
    output = {}
    searched = []
    for k in portal_catalog():
        kobj = k.getObject()
        if kobj.__class__.__name__ in searched:
            continue
        searched.append(kobj.__class__.__name__)
        for field in kobj.schema.fields():
            if isinstance(field, type_instance_to_search):
                if kobj.__class__.__name__ in output:
                    output[kobj.__class__.__name__].append(field.__name__)
                else:
                    output[kobj.__class__.__name__] = [field.__name__]
    return output

def produce_migrator(field_map):
    source_class = field_map.keys()[0]
    fields = {}
    for x in field_map.values()[0]:
        fields[x] = None

    class FileBlobMigrator(BaseInlineMigrator):
        '''Migrating ExtensionBlobField (which is still a FileField) to BlobField'''
        src_portal_type = source_class
        src_meta_type = source_class
        fields_map = fields

        def migrate_data(self):
            '''Unfinished'''
            for k in self.fields_map.keys():
                if k in self.obj.schema.keys():
                    print("***converting attribute:", k)
                    field = self.obj.getField(k).get(self.obj)
                    mutator = self.obj.getField(k).getMutator(self.obj)
                    mutator(field)

        def last_migrate_reindex(self):
            '''Unfinished'''
            self.obj.reindexObject()

    return FileBlobMigrator

def consume_migrator(portal_catalog, migrator):
    walker = CustomQueryWalker(portal_catalog, migrator, full_transaction=True)
    transaction.savepoint(optimistic=True)
    walker_status = walker.go()
    return walker.getOutput()

def migrate_blobs(catalog, migrate_type):
    all_fields = find_all_types_fields(catalog, migrate_type)
    import pdb; pdb.set_trace()  # drop into the debugger to inspect the field map
    for k in [{k: all_fields[k]} for k in all_fields]:
        migrator = produce_migrator(k)
        print consume_migrator(catalog, migrator)

catalog = getToolByName(portal, 'portal_catalog')
migrate_blobs(catalog, BlobField)
The problem occurs on the self.obj.reindexObject() line, where I receive the following traceback:
2011-08-09 17:21:12 ERROR Zope.UnIndex KeywordIndex: unindex_object could not remove documentId -1945041983 from index object_provides. This should not happen.
Traceback (most recent call last):
File "/home/alex/projects/plone4/eggs/Zope2-2.12.18-py2.6-linux-x86_64.egg/Products/PluginIndexes/common/UnIndex.py", line 166, in removeForwardIndexEntry indexRow.remove(documentId)
KeyError: -1945041983
> /home/alex/projects/plone4/eggs/Zope2-2.12.18-py2.6-linux-x86_64.egg/Products/PluginIndexes/common/UnIndex.py(192)removeForwardIndexEntry()
191 str(documentId), str(self.id)),
--> 192 exc_info=sys.exc_info())
193 else:
If I remove the line that triggers reindexing, the conversion completes successfully, but if I try to manually reindex the catalog later, every object that has been converted can no longer be found, and I'm a bit at a loss about what to do now.
The site has LinguaPlone installed; maybe it has something to do with this?
One option would be to run the migration without the reindexObject() call and do a "Clear and Rebuild" in the catalog ZMI Advanced tab after migrating.
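The same rebuild can also be triggered from a script at the end of the migration; this is only a sketch, assuming the standard Plone portal_catalog tool (whose clearFindAndRebuild() method backs the ZMI button) is available:

import transaction
from Products.CMFCore.utils import getToolByName

catalog = getToolByName(portal, 'portal_catalog')
catalog.clearFindAndRebuild()   # equivalent to "Clear and Rebuild" in the ZMI Advanced tab
transaction.commit()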