Getting UndefVarError: new not defined when trying to define a struct with an inner constructor in Julia - oop

I'm new to Julia and trying out some code I copied from this article. It's supposed to be a way to do object-oriented programming (i.e. classes) in Julia:
using Lathe.stats: mean, std
struct NormalDistribution{P}
    mu::Float64
    sigma::Float64
    pdf::P
end

function NormalDistribution(x::Array)
    pdf(xt::Array) = [i = (i-μ) / σ for i in xt]
    return new{typeof(pdf)}(mean(x), std(x), pdf)
end
x = [5, 10, 15, 20]
dist = NormalDistribution(x)
However, when I run this with Julia 1.1.1 in a Jupyter notebook, I get this exception:
UndefVarError: new not defined
Stacktrace:
[1] NormalDistribution(::Array{Int64,1}) at ./In[1]:11
[2] top-level scope at In[1]:15
I found this documentation page on inner constructor methods which explains that they have
a special locally existent function called new that creates objects of the block's type.
(although the documentation for new linked above says it is a keyword).
It's possible I may have copied the code incorrectly but maybe someone could explain how to implement what the original author was proposing in the article. Also, I don't know how to debug in Julia yet so any pointers appreciated.

What the documentation page you linked to means to say is that the new keyword only exists within inner constructors (as opposed to outer constructors).
So either you go for an outer constructor, in which case you want to use the default constructor in order to actually create your new instance:
using Statistics
# First the type declaration, which comes with a default constructor
struct NormalDistribution{P}
    mu::Float64
    sigma::Float64
    pdf::P
end

# Another outer constructor
function NormalDistribution(x::Array)
    μ = mean(x)
    σ = std(x)
    pdf(xt::Array) = [(i-μ) / σ for i in xt]
    # This is a call to the constructor that was created for you by default
    return NormalDistribution(μ, σ, pdf)
end
julia> x = [5, 10, 15, 20]
4-element Array{Int64,1}:
5
10
15
20
julia> dist = NormalDistribution(x)
NormalDistribution{var"#pdf#2"{Float64,Float64}}(12.5, 6.454972243679028, var"#pdf#2"{Float64,Float64}(12.5, 6.454972243679028))
julia> dist.pdf([1, 2, 3])
3-element Array{Float64,1}:
-1.781572339255412
-1.626653005407115
-1.4717336715588185
Note that the definition of pdf in the original article is obviously problematic (if only because μ and σ are not defined). I modified it here so that it makes some sense.
One possible issue with this is that anyone can define a NormalDistribution instance with an inconsistent state:
julia> NormalDistribution(0., 1., x->x+1)
NormalDistribution{var"#11#12"}(0.0, 1.0, var"#11#12"())
This is why you might want to define an inner constructor instead. In that case Julia no longer provides you with a default constructor, but you get access to that special new function, which creates objects of the type you're defining:
# A type declaration with an inner constructor
struct NormalDistribution2{P}
    mu::Float64
    sigma::Float64
    pdf::P

    # The inner constructor is defined inside the type declaration block
    function NormalDistribution2(x::Array)
        μ = mean(x)
        σ = std(x)
        pdf(xt::Array) = [(i-μ) / σ for i in xt]
        # Use the `new` function to actually create the object
        return new{typeof(pdf)}(μ, σ, pdf)
    end
end
It behaves exactly the same way as the struct with an outer constructor, except that this time no default constructor is provided:
julia> dist2 = NormalDistribution2(x)
NormalDistribution2{var"#pdf#5"{Float64,Float64}}(12.5, 6.454972243679028, var"#pdf#5"{Float64,Float64}(12.5, 6.454972243679028))
julia> dist2.pdf([1, 2, 3])
3-element Array{Float64,1}:
-1.781572339255412
-1.626653005407115
-1.4717336715588185
# No default constructor provided
julia> NormalDistribution2(0., 1., x->x+1)
ERROR: MethodError: no method matching NormalDistribution2(::Float64, ::Float64, ::var"#9#10")
Stacktrace:
[1] top-level scope at REPL[13]:1

An inner constructor has to be defined inside of the struct definition body; this is the only place where new has any special meaning. Your constructor method is outside of the struct definition where new is just a regular (undefined) name.
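For example, here is a minimal sketch of that placement (the Foo type and its fields are made up for illustration):
struct Foo{P}
    x::Float64
    f::P
    # Inside the struct block, `new` refers to Foo's low-level constructor
    function Foo(x::Real, f::P) where {P}
        return new{P}(x, f)
    end
end
# Outside the struct block, `new` is just an ordinary, undefined name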

Ok, looks like you were very confused. That function is meant to be included inside the struct definition, as an inner constructor, before the final end. The problem is that the code is divided up and built upon slowly across the article, so only parts of the whole constructor are shown at any one time. If you are interested, and like my blog, I am about to post a video on outer constructors and inner constructors on my channel, https://youtube.com/playlist?list=PLCXbkShHt01seTlnlVg6O7f6jKGTguFi7 , it will be part 9 (I am editing it now); maybe watching this in video form will make it make more sense.

Related

F# and ILNumerics

I have just downloaded the latest version of ILNumerics, to be used in my F# project. Is it possible to leverage this library in F#? I have tried simple computations and it seems very cumbersome (in F#).
I would like to set up a constrained (or even unconstrained) optimization problem. The usual Rosenbrock function would do, and then I will use my own function. I am having a hard time even getting an Array defined. The only kind of array I could define was a RetArray, for example with this code:
let vector = ILMath.vector<float>(1.0, 2.0)
The compiler reports that vector is a RetArray; I think this is because it is returned from a function (i.e. ILMath.vector). If I define another similar vector, I can, e.g., sum vectors, simply writing, for example
let a = ILMath.vector<float>(1.0, 2.0)
let b = ILMath.vector<float>(3.2,2.2)
let c = a + b
and I get
RetArray<float> = seq [4.2; 4.2]
but if I try to retrieve the value of c again, for example by writing in FSI,
c;;
I get
Error: Object reference not set to an instance of an object.
What is the suggested way of using ILNumerics in F#? Is it possible to use the library natively in F#, or am I forced to call my F# code from a C# library to use the whole of ILNumerics? Beyond the problem cited, I have trouble understanding the very basic logic of ILNumerics when ported to F#.
For example, what would be the F# equivalent of the C# using scope as in the example code, as in:
using (ILScope.Enter(inData)) {
    ...
}
Just to elaborate a bit on brianberns' answer, there are a couple of things you could do to make it easier for yourself.
I would personally not go the route of defining a custom operator - especially one that overrides an existing one. Instead, perhaps you should consider using a computation expression to work with the ILMath types. This will allow you to hide a lot of the ugliness that comes when working with libraries that don't follow F# conventions (e.g. implicit type conversions).
I don't have access to ILMath, so I have just implemented these dummy alternatives in order to get my code to compile. If you have the real library, you should be able to skip this part, and the rest of the code should work as intended:
module ILMath =
    open System   // for the [<ParamArray>] attribute used below

    type RetArray<'t> = { Values: 't seq }
    and Array<'t> = { OtherValues: 't seq } with
        static member op_Implicit(x: RetArray<_>) = { OtherValues = x.Values }
        static member inline (+) (x1, x2) = { Values = (x1.OtherValues, x2.OtherValues) ||> Seq.map2 (+) }

    type ILMath =
        static member vector<'t>([<ParamArray>] vs : 't []) = { ILMath.Values = vs }
If you have never seen or implemented a computation expression before, you should check the documentation I referenced. Basically, it adds some nice syntactic sugar on top of some ugliness, in a way that you decide. My sample implementation adds just the let! (desugars to Bind) and return (desugars to Return) keywords.
type ILMathBuilder() =
    member __.Bind(x: ILMath.RetArray<'t>, f) =
        f(ILMath.Array<'t>.op_Implicit(x))
    member __.Return(x: ILMath.RetArray<'t>) =
        ILMath.Array<'t>.op_Implicit(x)

let ilmath = ILMathBuilder()
This should be defined and instantiated (the ilmath variable) at the top level. This allows you to write
let c = ilmath {
    let! a = vector(1.0, 2.0)
    let! b = vector(3.2, 2.2)
    return a + b
}
Of course, this implementation only supports a few things, and requires, for instance, that a value of type RetArray<'t> is always returned. Extending the ILMathBuilder type according to the documentation is the way to go from here.
The reason that the second access of c fails is that ILNumerics is doing some very unusual memory management, which automatically releases the vector's memory when you might not expect it. In C#, this is managed via implicit conversion from vector to Array:
// C#
var A = vector<int>(1, 2, 3); // bad!
Array<int> A = vector<int>(1, 2, 3); // good
F# doesn't have implicit type conversions, but you can invoke the op_Implicit member manually, like this:
open ILNumerics
open type ILMath // open static class - new feature in F# 5
let inline (!) (x : RetArray<'t>) =
    Array<'t>.op_Implicit(x)

[<EntryPoint>]
let main argv =
    let a = !vector<float>(1.0, 2.0)
    let b = !vector<float>(3.2, 2.2)
    let c = !(a + b)
    printfn "%A" c
    printfn "%A" c
    0
Note that I've created an inline helper function called ! to make this easier. Every time you create an ILNumerics vector in F#, you must call this function to convert it to an array. (It's ugly, I know, but I don't see an easier alternative.)
To answer your last question, the equivalent F# code is:
use _scope = Scope.Enter(inData)
...

Access to **private** variables of a class in Python

I understand that Python does not explicitly support private variables in a class. However, please consider the following program:
class AClass(object):
    def __init__(self, x):
        self.__x = x

class BClass(object):
    def __init__(self, x):
        self.__x = x
# _____________________________________________________________________________
aClass = AClass(10)
bClass = BClass(10)
aClass.__x = 15
print (aClass.__x)
##bClass.__x = 20
print (bClass.__x)
The program above will produce the following error:
AttributeError: 'BClass' object has no attribute '__x'
But if the second-to-last line of code is uncommented, it will execute without an error.
If someone can please clarify what appears to be an inconsistency and if there is a PEP that explains this behaviour, I would appreciate a pointer to it.
Best regards.
BD
It's because attribute names starting with a double underscore (dunder) are name mangled to "protect" them. If you examine the dictionary of bClass (with that commented-out assignment enabled), you'll see:
>>> print(bClass.__dict__)
{'_BClass__x': 10, '__x': 20}
The _BClass__x (I'll call this the object variable) was created by the object itself, hence its mangled name. The __x was created outside of the class (a), which is why it has a non-mangled name, and therefore why you can access it with just __x.
To access the object variable for both types, you can use:
print (aClass._AClass__x)
print (bClass._BClass__x)
But I'm not sure how reliable that is. I am sure that it's something you probably shouldn't be doing however, since it breaks encapsulation :-)
In fact, though I said the mangling was done by the object, I want to make sure you understand it's not done when an object is instantiated. The actual mangling happens when the code is compiled, which you can see if you disassemble:
>>> import dis
>>> dis.dis(AClass)
Disassembly of __init__:
3 0 LOAD_FAST 1 (x)
2 LOAD_FAST 0 (self)
4 STORE_ATTR 0 (_AClass__x)
6 LOAD_CONST 0 (None)
8 RETURN_VALUE
The STORE_ATTR bytecode actually knows to use a mangled name.
(a) And it is very much distinct from the object variable, as you'll find to your distress when you later try to use __x within a member function and find it hasn't been changed by your code outside :-)

Understanding how Julia modules can be extended

I'm struggling to understand how exactly modules can be extended in Julia. Specifically, I'd like to create my own LinearAlgebra matrix whose parent class is AbstractMatrix{T} and implement its functionality similar to how the Diagonal or UpperTriangular matrices are implemented in the actual LA package. If I could literally add my matrix to the original package, then I would, but for now I am content creating my own MyLinearAlgebra package that simply imports the original and extends it. Here's what I've got so far in MyLinearAlgebra.jl:
module MyLinearAlgebra

import LinearAlgebra
import Base: getindex, setindex!, size

export
    # Types
    LocalMatrix,
    SolutionVector,
    # Functions
    issymmetric,
    isdiag
    # Operators
    # Constants

include("SolutionVector.jl")
include("LocalMatrix.jl")

end
Focusing solely on LocalMatrix.jl now, I have:
"""
struct LocalMatrix{T} <: AbstractMatrix{T}
Block diagonal structure for local matrix. `A[:,:,s,iK]` is a block matrix for
state s and element iK
"""
struct LocalMatrix{T} <: AbstractMatrix{T}
data::Array{T,4}
function LocalMatrix{T}(data) where {T}
new{T}(data)
end
end
[... implement size, getindex, setindex! ... all working perfectly]
"""
issymmetric(A::LocalMatrix)
Tests whether a LocalMatrix is symmetric
"""
function issymmetric(A::LocalMatrix)
println("my issymmetric")
all(LinearAlgebra.issymmetric, [#view A.data[:,:,i,j] for i=1:size(A.data,3), j=1:size(A.data,4)])
end
"""
isdiag(A::LocalMatrix)
Tests whether a LocalMatrix is diagonal
"""
function isdiag(A::LocalMatrix)
println("my isdiag")
all(LinearAlgebra.isdiag, [#view A.data[:,:,i,j] for i=1:size(A.data,3), j=1:size(A.data,4)])
end
When I try to run this, however, I get
error in method definition: function LinearAlgebra.isdiag must be explicitly imported to be extended
OK not a problem, I can change the definition to function LinearAlgebra.isdiag() instead and it works. But if I also change the definition of the other function to function LinearAlgebra.issymmetric() and run a simple test I now get the error
ERROR: MethodError: no method matching issymmetric(::MyLinearAlgebra.LocalMatrix{Float64})
So I'm stumped. Obviously I have a workaround that lets me continue working for now, but I must simply be misunderstanding how Julia modules work, because I can't seem to distinguish between the two functions. Why does one need to be explicitly extended? Why does the other not? What even is the difference between them in this situation? What is the correct way here to extend a package's module? Thanks for any help.
You need to explicitly state that you are adding new methods to existing functions so it should be:
function LinearAlgebra.issymmetric(A::LocalMatrix)
    ...
end

function LinearAlgebra.isdiag(A::LocalMatrix)
    ...
end
The reason you are getting the error is most likely that you forgot to load LinearAlgebra in the code that is testing your package.
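For example, a sketch of what the test script would then need (the array shape here is arbitrary):
using MyLinearAlgebra
import LinearAlgebra            # without this, the name LinearAlgebra is unknown in the test code

A = LocalMatrix{Float64}(rand(2, 2, 3, 4))
LinearAlgebra.issymmetric(A)    # dispatches to the method defined for LocalMatrix
LinearAlgebra.isdiag(A)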
Note that your constructor also should be corrected:
struct LocalMatrix{T} <: AbstractMatrix{T}
    data::Array{T,4}
    function LocalMatrix(data::Array{T,4}) where {T}
        new{T}(data)
    end
end
With the current constructor you need to write LocalMatrix{Float64}(some_arr) instead of simply LocalMatrix(some_arr). Even worse, if you pass your constructor a 3-d array you will get a type conversion error, whereas with the syntax I am proposing you get no method matching LocalMatrix(::Array{Int64,3}), which is much more readable for users of your library.
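A quick illustration of the difference (hypothetical usage):
A = LocalMatrix(rand(2, 2, 3, 4))   # works: T is inferred as Float64 from the element type
B = LocalMatrix(rand(2, 2, 3))      # fails immediately with the readable "no method matching" error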

Julia - ERROR: cannot assign variable ImageAxes.data from module Main

I wish to run the example from here, but I get this error:
julia> using DataFrames, GLM
julia> data = DataFrame(X=[1,2,3], Y=[2,4,7])
ERROR: cannot assign variable ImageAxes.data from module Main
Stacktrace:
[1] top-level scope at none:0
Can someone help?
ImageAxes.jl defines a deprecated function data. You must have used this function before trying to assign a value to the data variable.
Now to understand what is going on consider the following example. I am using a fresh REPL session:
julia> sin = 1
1
julia> sin
1
julia> cos(1)
0.5403023058681398
julia> cos = 1
ERROR: cannot assign variable Base.cos from module Main
julia> log # it is enough to reference a function to have this situation - you do not have to call it
log (generic function with 19 methods)
julia> log = 1
ERROR: cannot assign variable Base.log from module Main
Notice that you could bind the value 1 to sin (although it is a standard function) because sin had not been referenced (e.g. called) in the session BEFORE the assignment. On the other hand, we called cos first, before trying to assign a value to the cos variable. This introduced cos into the global scope, and as cos is a function, rebinding the value assigned to cos is not allowed.
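So a simple way out of the original error is to restart the session and bind the DataFrame to a name that is not already exported by one of the loaded packages. A minimal sketch, assuming you are following the GLM example (the name df is arbitrary):
using DataFrames, GLM
df = DataFrame(X=[1,2,3], Y=[2,4,7])   # any name other than `data` avoids the clash
ols = lm(@formula(Y ~ X), df)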

Static variable vs class variable vs instance variable vs local variable

Before posting, I read more than a half dozen posts in a search from this site (python variables class static instance local)
https://stackoverflow.com/search?q=python+variables+class+static+instance+local
I bookmarked several of the results for future studying, but none of them seemed to clarify to me if my thinking is right or wrong, so I feel I may be missing something about the basics (see below)...
Are the terms 'class variable' and 'static variable' referring to the same thing? After about three Google searches, reading through ~6 articles per search that I could understand, I've reached a conclusion that class and static variables are the same thing. But, since I'm just learning the fundamentals of Python and OOP, this conclusion may be wrong so I want to find any flaws in my reasoning before I continue learning with the wrong mindset. In the code below:
class Variables():
    scVar = 3

    def __init__(self, a, b):
        self.iVar1 = a
        self.iVar2 = b

    def getLocalVar3(self):
        localVar1 = 17
        localVar2 = 100
        localVar3 = localVar1 + localVar2
        return localVar3
Is 'scVar' both a class variable and a static variable ('class' and 'static' variables being synonymns)?
The second question is to clarify my understanding of differentiating class variables, instance variables, and local variables. In the code above, I'm thinking that scVar is a class variable; iVar1 and iVar2 are instance variables; and localVar1, localVar2, and localVar3 are local variables. Is that correct to say, or is there something that I'm missing?
Thanks cirosantilli. That article that you linked to is one I haven't seen yet. I'm going to look that over. I wonder a bit about Python's point of view that there's not a distinction between class variables and instance variables. Is this point of view one that I should try to understand correctly right up front, as a beginner, or should I just keep the idea in mind and not worry too much about reconciling my current point of view with that one until I become more experienced? That question probably seems overly vague, and dependent upon my current level of understanding Python. Mostly, I've been running commands on class examples similar to what is in the original post; predicting the output before I press the [enter] key and then studying the output when it's not what I predicted. From that, I'm starting to get some grasp of how things work in Python. Basically, I've just recently started getting a glimpse of how OO works in Python - inching forward, slowly but surely.
"However the above are just conventions: from the language point of view there is no distinction between class variables and instance variables." -- Just to help me understand this better, does the part, 'from the language point of view ...' sort of imply that the Python documentation explains this point of view somewhere within it? If so, I'll re-read through the docs and look specifically for this part, and try to conform my thinking process to it. "... between class variables and instance variables." So, even though there's the difference in the way that class and instance variables can be seen and accessed by 'classobj's and 'instance's, Python's
point of view is that there is no distinction between them? I'm going to keep this idea in mind during future reading so that I can maybe get rid of some confusion on my part. After I run the *.py in the original post, I get the following outputs in IDLE, using Python 2.7.x (only the last line of the traceback error is included, for better readability):
>>> Variables.scVar
3
>>> Variables.iVar1
AttributeError: class Variables has no attribute 'iVar1'
>>> instance = Variables(5, 15)
>>> instance
<__main__.Variables instance at 0x02A0F4E0>
>>> Variables
<class __main__.Variables at 0x02A0D650>
>>> instance.scVar
3
>>> instance.iVar1
5
>>> instance2 = Variables(25, 35)
>>> instance2.scVar
3
>>> Variables.scVar = Variables.scVar * 100
>>> Variables.scVar
300
>>> instance.scVar
300
>>> instance2.scVar
300
>>> instance.scVar = 9999
>>> Variables.scVar
300
>>> instance.scVar
9999
>>> instance2.scVar
300
>>> type(Variables)
<type 'classobj'>
>>> type(instance)
<type 'instance'>
"However the above are just conventions: from the language point of view there is no distinction between class variables and instance variables." -- By using the code from the original post, is there maybe a sequence of commands that illustrates this point? I don't doubt that you know what you're talking about; I just find it difficult to reconcile my current way of thinking with the above statement. But I get the feeling that if I can start to see the difference between the two perspectives, that something important will 'click'.
As an afterthought to the last few sentences, I might be on to seeing things more along the same lines as your statement about 'no distinction between class variables and instance variables', but only if my following assumption is accurate... From the code in the original post (class Variables - 12 lines), are there just the two scopes of global and local involved in that program? Since I've just started to form conclusions about how it all fits together, I think that my limited understanding of scope might be what keeps me from fully grasping the idea that there's no distinction between class variables and instance variables. The only thing I can seem to make of it now is that (only maybe) - 'Python has no distinction between class variables and instance variables; but the differences between global and local scope might make it appear to a novice that there is a distinction between these two types of variables. I don't know, does that statement identify a potential 'hang up' that I could be having about it?
"Everything is an object, including classes and integers:" -- I've read this numerous times. So much so that I take it to be a core belief to understanding OO and Python, but it's not a concept in which I fully realize the implications of yet (I think).
class Foo():
    integer = 10
    float = 6.37
    string = 'hello'
    boolean = True
    idkyet = None

    def __init__(self):
        self.a = 'iv_a'
        self.b = 'iv_b'
        self.c = 'iv_c'

    def Func(self):
        self.g = 'g'
        h = 'h'
        i = 'i'
        return 'g' + 'h' + 'i'
>>> Foo
<class __main__.Foo at 0x02A1D650>
>>> type(Foo.integer)
<type 'int'>
>>> type(Foo.float)
<type 'float'>
>>> type(Foo.string)
<type 'str'>
>>> type(Foo.boolean)
<type 'bool'>
>>> type(Foo.idkyet)
<type 'NoneType'>
>>> type(Foo)
<type 'classobj'>
>>> import os
>>> type(os.getcwd() + '\\Test.py')
<type 'str'>
>>> type(os)
<type 'module'>
>>> f = Foo()
>>> type(f)
<type 'instance'>
>>> type(f.Func)
<type 'instancemethod'>
>>> type(f.Func())
<type 'str'>
>>> f.Func
<bound method Foo.Func of <__main__.Foo instance at 0x02A25AF8>>
>>> Foo.Func
<unbound method Foo.Func>
>>> type(f.a)
<type 'str'>
>>> type(Foo.a)
AttributeError: class Foo has no attribute 'a'
>>> type(Foo.self.a)
AttributeError: class Foo has no attribute 'self'
When I was about half way through this response, leaving off with the 'class Foo():' code above and the commands ran on it below that, I hit a snag and couldn't quite continue with the other follow-up question that I barely had in mind. So, I stepped away from the problem for awhile and started to read that 'cafepy...' link that you posted (by Shalabh Chaturvedi). That's really interesting. I had seen excerpts from that before but I hadn't read the whole thing, but it seems much more understandable now than it would have been just a week ago. I think I will read the whole thing. Don't mind the last half of this post (after the '***') because I still can't pinpoint exactly what I was trying to ask. ...everything is an object...mainly just a difference in object types???... < That is the note that I had jotted down when I almost had in mind how to frame the last question, but it never came to fruition. I'll have to wait until something else 'clicks' and I can see again what I had in mind.
I'll also keep in mind to stop and re-read if I glance across anything related to 'MRO', bound and unbound methods... I have been picking up just a bit of those three terms lately, in a way that it feels like they won't be too far in the future of my learning process.
I believe that static and class variables are commonly used as synonyms.
What you say about the variables is correct from the convention point of view: this is how you should think about them most of the time.
However the above are just conventions: from the language point of view there is no distinction between class variables and instance variables.
Python is not like C++ or Java.
Everything is an object, including classes and integers:
class C(object): pass
print id(C)
C.a = 1
assert C.__dict__['a'] == 1
There is no clear distinction between methods and instance variables: they are just attributes of an object.
Therefore, there is no language level distinction between instance variables and class variables: they are just attributes of different objects:
instance variables are attributes of the object (self)
class variables are attributes of the Class object.
The real magic happens in the order in which the . operator searches for attributes:
__dict__ of the object
__dict__ of the class of the object
MRO up to parent classes
You should read this great article before you get confused in the future.
Also beware of bound vs unbound methods.
EDIT: attempt to address further questions by the OP made in his post.
Wow that was large! I'll try to read everything, but for the future you should try to keep questions more concise. More code, less talk =). You'll get better answers.
should I just keep the idea in mind and not worry too much about reconciling my current point of view with that one until I become more experienced?": this is how I do things.
I do as I feel necessary. When necessity calls, or I can't take magic behaviour anymore, I learn.
sort of imply that the Python documentation explains this point of view somewhere within it?
I don't know about the docs, but the language itself works that way.
Of course, the language was designed to give the impression that syntax works just like in C++ in the common cases, and it adds a thin layer of magic to classes to make it look like so.
But, since that is not how it truly works, you cannot account for all (useful) behaviour by thinking only in terms of C++ class syntax.
By using the code from the original post, is there maybe a sequence of commands that illustrates this point?
I'm not sure it can be illustrated in a sequence of commands. The point is: classes are objects, and their attributes are searched by the dot operator (via the MRO) in the same way as the attributes of any other object:
class C(object):
    i_static = 0
    def __init__(self):
        self.i = 1

# i is in the __dict__ of object c
c = C()
assert c.__dict__['i'] == 1
assert c.i == 1

# dot finds i_static because MRO looks at class
assert c.__class__.__dict__['i_static'] == 0
assert c.i_static == 0

# i_static is in the __dict__ of object C
assert C.__dict__['i_static'] == 0
assert C.i_static == 0

# __eq__ is in the dict of type, which is the __class__ of C
# By MRO, __eq__ is found. `C,C` because of bound vs unbound.
assert C.__class__.__dict__['__eq__'](C, C)
assert C == C
are there just the two scopes of global and local involved in that program?
This is a point I don't know very clearly.
There is no global scope in Python, only module-level scope.
Then there is a new local scope inside functions.
The rest is how the . looks for attributes.
can't pinpoint exactly what I was trying to ask
Ask: can I find a difference in syntax between classes, integers or functions?
If you think you have found one, ask: hmmm, how can I make an object with certain attributes that behaves just like that thing which does not look like an object?
You should find an answer every time.
Example:
def f(): pass
class C(object): pass
AHA: f is different from c = C() because I can do f() but not c()!
But then, no, it is just that the f.__class__.__dict__['__call__'] attribute is defined for f, and can be found via MRO.
But we can do that for c too:
class C(object):
    def __call__(self): pass
and now we can do c().
So they were not different in that aspect.