Where is the class Normal located in edward's source code?

When using edward, we always write from edward.models import Normal, but I didn't find the declaration of Normal on GitHub.
Can anybody tell me where it is?

They are defined in edward/models/random_variables.py.
You import the Normal class like this:
from edward.models import Normal
This suggests looking in edward/models/__init__.py, which has this line:
from edward.models.random_variables import *
Looking in edward/models/random_variables.py we find this code:
import inspect as _inspect

from edward.models.random_variable import RandomVariable as _RandomVariable
from tensorflow.contrib import distributions as _distributions

# Automatically generate random variable classes from classes in
# tf.contrib.distributions.
_globals = globals()
for _name in sorted(dir(_distributions)):
  _candidate = getattr(_distributions, _name)
  if (_inspect.isclass(_candidate) and
      _candidate != _distributions.Distribution and
      issubclass(_candidate, _distributions.Distribution)):
    # to use _candidate's docstring, must write a new __init__ method
    def __init__(self, *args, **kwargs):
      _RandomVariable.__init__(self, *args, **kwargs)
    __init__.__doc__ = _candidate.__init__.__doc__
    _params = {'__doc__': _candidate.__doc__,
               '__init__': __init__}
    _globals[_name] = type(_name, (_RandomVariable, _candidate), _params)
    del _candidate
This goes through the tensorflow.contrib.distributions module looking for classes derived from tensorflow.contrib.distributions.Distribution (ignoring other attributes, such as the module's __file__ member or the base Distribution class itself). For each one, it does a bit of docstring patching (which only affects the generated documentation) and then executes this key line:
_globals[_name] = type(_name, (_RandomVariable, _candidate), _params)
The type() built-in function creates a new type, i.e. declares a new class. The second parameter is the tuple of base classes, which here is edward's RandomVariable class and the TensorFlow distribution class. Earlier the code set _globals to globals(), a built-in function returning the dictionary of the module's variables. Therefore, in the case you're interested in, the line above is equivalent to the following:
from edward.models.random_variable import RandomVariable as EdRandVar
from tensorflow.contrib.distributions import Normal as TfNormal
Normal = type("Normal", (EdRandVar, TfNormal), {...})
Which in turn is equivalent to this (if you ignore the docstring stuff):
from edward.models.random_variable import RandomVariable as EdRandVar
from tensorflow.contrib.distributions import Normal as TfNormal
class Normal(EdRandVar, TfNormal):
    pass
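To see the trick in isolation, here is a minimal runnable sketch in plain Python (Mixin and _TfNormal are made-up stand-ins for edward's RandomVariable and tf.contrib.distributions.Normal; no TensorFlow required):
class Mixin(object):
    """Stand-in for edward's RandomVariable."""
    def __init__(self, *args, **kwargs):
        print("Mixin.__init__ called")

class _TfNormal(object):
    """Stand-in for tf.contrib.distributions.Normal."""

# Create a class named "Normal" whose bases are the mixin and the
# "distribution", then install it as a module-level global -- the same
# thing edward/models/random_variables.py does inside its for-loop.
globals()["Normal"] = type("Normal", (Mixin, _TfNormal), {})

n = Normal()           # prints "Mixin.__init__ called"
print(Normal.__mro__)  # (Normal, Mixin, _TfNormal, object)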

Can Kotlin extension functions be called without an import declaration?

Is it possible to call an extension function from another package without importing it?
Given an extension function:
package ext
fun Int.plusOne() = this + 1
Is there any way to call this function without importing the function first?
I can call non-extension functions without an import (ignore that the function does not need to be imported, just note that the syntax is valid):
val el: List<Int> = kotlin.emptyList()
I can instantiate classes without an import:
val str = java.lang.String("yo.")
But I have not yet found the equivalent for extensions (I know some examples are silly):
val i = 42
// All those are not valid syntax...
i.ext.plusOne()
ext.plusOne(i)
i.(ext.plusOne)()
i.(ext.plusOne())
ext.i.plusOne()
val pO = ext.plusOne
i.pO()
Bonus: Same question, but for extension properties.
Edit: To add to the list of invalid examples, even at places where the extension receiver is implicit, FQDNs are not allowed:
// Good:
import ext.plusOne
val four = with(3) { plusOne() }
// Unresolved reference:
val four = with(3) { ext.plusOne() }
No, according to the spec. A call can only take one of these forms:
A fully-qualified call without receiver: package.foo();
A call with an explicit receiver: a.foo();
An infix function call: a foo b;
An overloaded operator call: a + b;
A call without an explicit receiver: foo().
Notice that the "fully-qualified call" form, which is the only form that allows the use of package names, explicitly says "without receiver". However, your plusOne requires an Int as a receiver, and by definition all extension functions/properties require a receiver.
I also tried looking at callable references, in hopes of making a callable reference to plusOne using a fully qualified name, and then calling that callable reference. However, it turns out the syntax for those is even stricter.
Therefore, this cannot be done without modifying the ext package in some way, like adding a "wrapper" function.
After all, there is really no need for such a feature. Importing is not that hard - the IDE does it all for you these days. If you need to import two things with the same name, use an import alias:
import package1.extFunc as pack1ExtFunc
import package2.extFunc as pack2ExtFunc

Declaring computed python-level property in pydantic

I have a class deriving from pydantic.BaseModel and would like to create a "fake" attribute, i.e. a computed property. The property keyword does not seem to work with Pydantic in the usual way. Below is the MWE, where the class stores value and defines a read/write property called half with the obvious meaning. Reading the property works fine with Pydantic, but the assignment fails.
I know Pydantic modifies low-level details of attribute access; perhaps there is a way to define a computed field in Pydantic in a different way?
import pydantic

class Object(object):
    def __init__(self, *, value):
        self.value = value
    half = property(lambda self: .5 * self.value,
                    lambda self, h: setattr(self, 'value', h * 2))

class Pydantic(pydantic.BaseModel):
    class Config:
        extra = 'allow'
    value: float
    half = property(lambda self: .5 * self.value,
                    lambda self, h: setattr(self, 'value', h * 2))

o, p = Object(value=1.), Pydantic(value=1.)
print(o.half, p.half)
o.half = p.half = 2
print(o.value, p.value)
outputs (value=1. was not modified by assigning half in the Pydantic case):
0.5 0.5
4 1.0
I happened to be working on the same problem today. Officially it is not supported yet, as discussed here.
However, I did find the following example which works well:
from pydantic import BaseModel, validator

class Person(BaseModel):
    first_name: str
    last_name: str
    full_name: str = None

    @validator("full_name", always=True)
    def composite_name(cls, v, values, **kwargs):
        return f"{values['first_name']} {values['last_name']}"
Do make sure your derived field comes after the fields you want to derive it from, else the values dict will not contain the needed values (e.g. full_name comes after first_name and last_name that need to be fetched from values).
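For example, constructing the model fills in the derived field (a quick check, assuming pydantic v1, where always=True makes the validator run even when full_name is missing):
p = Person(first_name="Ada", last_name="Lovelace")
print(p.full_name)  # Ada Lovelace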
Instead of using a property, here's an example which shows how to use pydantic.root_validator to compute the value of an optional field:
https://daniellenz.blog/2021/02/20/computed-fields-in-pydantic/
I've adapted this for a similar application:
import typing

from pydantic import BaseModel, PositiveInt, conlist, constr, root_validator

class Section(BaseModel):
    title: constr(strip_whitespace=True)
    # Chunk is defined elsewhere in the application
    chunks: conlist(min_items=1, item_type=Chunk)
    size: typing.Optional[PositiveInt] = None
    role: typing.Optional[typing.List[str]] = []
    license: constr(strip_whitespace=True)

    @root_validator
    def compute_size(cls, values) -> typing.Dict:
        if values["size"] is None:
            values["size"] = sum([
                chunk.get_size()
                for chunk in values["chunks"]
            ])
        return values
In this case each element of the discriminated union chunks has a get_size() method to compute its size. If the size field isn't specified explicitly in serialization (e.g., input from a JSON file) then it gets computed.
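Here is a trimmed-down, self-contained variant of the same idea (the Chunk model and its get_size() below are simplified stand-ins I made up; assumes pydantic v1):
import typing

from pydantic import BaseModel, PositiveInt, root_validator

class Chunk(BaseModel):
    text: str

    def get_size(self) -> int:
        return len(self.text)

class Section(BaseModel):
    chunks: typing.List[Chunk]
    size: typing.Optional[PositiveInt] = None

    @root_validator
    def compute_size(cls, values) -> typing.Dict:
        # Only compute the size if it was not supplied explicitly.
        if values.get("size") is None:
            values["size"] = sum(chunk.get_size() for chunk in values["chunks"])
        return values

print(Section(chunks=[Chunk(text="ab"), Chunk(text="cde")]).size)  # 5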
I created a pip package that lets you easily create computed properties.
Here you can check it out:
https://pypi.org/project/pydantic-computed/
Using the package, the example of computing half of a value would look like this:
from pydantic import BaseModel
from pydantic_computed import Computed, computed

class SomeModel(BaseModel):
    value: float
    value_half: Computed[float]

    @computed("value_half")
    def compute_value_half(value: float):
        return value / 2
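Usage would then look like this (assuming the package behaves as its documentation describes):
model = SomeModel(value=10.0)
print(model.value_half)  # 5.0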

Instantiate only unique objects of a class

I'm trying to create a class that only creates an instance if the arguments passed in during instantiation are a unique combination. If the combination of arguments have previously been passed in, then return the instance that has already been previously created.
I'd like this class to be inherited by other classes so they get the same behavior. This is my first attempt at a solution.
The base/parent class to be inherited:
class RegistryType(type):
    def __init__(cls, name, bases, namespace, *args):
        cls.instantiated_objects = {}

class AdwordsObject(object, metaclass=RegistryType):
    api = AdWordsAPI()

    def __new__(cls, *args):
        object_name = '-'.join(args)
        if object_name in cls.instantiated_objects:
            return cls.instantiated_objects[object_name]
        else:
            obj = super(AdwordsObject, cls).__new__(cls)
            cls.instantiated_objects[object_name] = obj
            # cls.newt_connection.commit()
            return obj
And this is how it's being used in the child class:
class ProductAdGroup(AdwordsObject):
    # init method only called if object being instantiated hasn't already been instantiated
    def __init__(self, product_name, keyword_group):
        self.name = '-'.join([product_name, keyword_group])

    @classmethod
    def from_string(cls, name: str):
        arguments = name.split('-')
        assert len(arguments) == 2, 'Incorrect ad group name convention. ' \
                                    'Use: Product-KeywordGroup'
        ad_group = cls(*arguments)
        return ad_group
I've run the program with this setup, but it seems like a new dict is being created every time ProductAdGroup() is instantiated, so memory is exploding... even though the program returns the instance that had already been instantiated.
Is there any way to fix this?
Thanks!!!
Your code seems to be right - the only thing incorrect above is that your __init__ method will always be called when the class is instantiated, regardless of whether __new__ returned a previously created instance.
So, if you create extra objects in your __init__ method, that may be the cause of your memory leak - however, if you bind these new objects to the instance (self), they should just override a previously created object in the same place, which would then be freed. In the code posted here, that happens with self.name - it may be that your real __init__ does more things, and associates new objects with places other than the instance (like appending them to a list). If your __init__ methods are just as shown, the cause of your memory growth is not evident in the code you supply.
As extra advice, though not related to the problem you describe: you don't need a metaclass for this at all.
Just check for the existence of a cls.instantiated_objects dict in the __new__ method itself. Not writing an unneeded metaclass will simplify your codebase, avoid metaclass conflicts if your class hierarchy evolves, and may even do away with your problem if there is more code in your metaclass than you are showing here.
The base class __new__ method can be rewritten something like this:
class AdwordsObject(object):
    def __new__(cls, *args):
        if not cls.__dict__.get("instantiated_objects"):
            cls.instantiated_objects = {}
        name = '-'.join(args)
        if name in cls.instantiated_objects:
            return cls.instantiated_objects[name]
        instance = super().__new__(cls)
        cls.instantiated_objects[name] = instance
        return instance
And there is no more need for a custom metaclass.
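A quick check of that rewrite (with a trimmed-down ProductAdGroup; note that, as discussed above, __init__ still runs on every call):
class AdwordsObject(object):
    def __new__(cls, *args):
        # Look the cache up via cls.__dict__ so each subclass gets its
        # own dict instead of silently reusing the parent's.
        if not cls.__dict__.get("instantiated_objects"):
            cls.instantiated_objects = {}
        name = '-'.join(args)
        if name in cls.instantiated_objects:
            return cls.instantiated_objects[name]
        instance = super().__new__(cls)
        cls.instantiated_objects[name] = instance
        return instance

class ProductAdGroup(AdwordsObject):
    def __init__(self, product_name, keyword_group):
        self.name = '-'.join([product_name, keyword_group])

a = ProductAdGroup('Widget', 'Cheap')
b = ProductAdGroup('Widget', 'Cheap')
print(a is b)  # True: the second call returned the cached instance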

From a Jython class how to call parent method in Java class

Using the latest Jython 2.6 beta-1, I derive a custom class MyGame from an imported Java class Game, and I want to override the method render() in my subclass. Within that method I want to call the render() method of the parent (Java) class.
I tried three different versions of how to call this super method, but none work.
from com.badlogic.gdx import Game

class MyGame(Game):
    def render(self):
        # here I want to call super's render(), which takes no arguments;
        # but none of the following three options work.
        Game.render()         # error: expected 1 args; got 0
        Game.render(self)     # error: render() takes exactly 1 argument (2 given)
        self.super__render()  # error: render() takes exactly 1 argument (2 given)
Any ideas?
The built-in super allows you to call the parent class's render method. Note that the first argument to super should be the class you are calling from, not the parent:
from com.badlogic.gdx import Game

class MyGame(Game):
    def render(self):
        super(MyGame, self).render()
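The difference matters because super()'s first argument marks where the method lookup starts in the MRO. A pure-Python illustration (no Jython or libgdx required):
class Game(object):
    def render(self):
        print("Game.render")

class MyGame(Game):
    def render(self):
        # Correct: search the MRO *after* MyGame, which finds Game.render.
        super(MyGame, self).render()
        # super(Game, self).render() would start the search *after* Game,
        # land on object, and fail with an AttributeError.

MyGame().render()  # prints "Game.render"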

Does Jython class automatically call setter when inheriting Java interface?

I've looked through the Jython book on Jython.org and perused the Internet for some answers but I don't see anywhere that suggests the following (of what seems to me to be strange) behavior. I'm doing this using PyDev 1.5.7 in Eclipse 3.6.1 with Jython 2.5.3.
Does a Jython class that inherits from a Java interface with setters automatically call setVal when self.val = val is executed?
Here's the Java interface:
package com.me.mypackage;

import org.python.core.PyDictionary;

public interface MyInterface {
    public double getMaxBW();
    public boolean setMaxBW(double bw);
}
Here's the Jython class:
from com.me.mypackage import MyInterface

class MyClass(MyInterface):
    def __init__(self, maxBW):
        self.maxBW = maxBW

    def setMaxBW(self, maxBW):
        self.maxBW = maxBW

    def getMaxBW(self):
        return self.maxBW
When I instantiate the class, in the __init__ function:
setMaxBW gets called when self.maxBW = maxBW is run
This function call in turn runs self.maxBW = maxBW
This code again calls setMaxBW
This function call in turn runs self.maxBW = maxBW
Repeat forever
As a result of this infinite recursion, I get a RuntimeError after the maximum recursion depth has been reached.
One thought was that this was something nifty new-style Python classes were doing (I've spent most of my Python time with old-style classes) but this problem doesn't occur with pure Jython (in Eclipse or standalone from the command line) that doesn't inherit from the Java interface. I haven't tried the interface inheritance outside of Eclipse.
And now I reiterate my initial question, but in the context of my code: does a Jython class that inherits a Java interface with setters automatically call setMaxBW when self.maxBW = maxBW is executed?