I am currently working with the CUDArt package. The GitHub documentation includes the following snippet of code when loading a ptx module containing a custom CUDA C kernel:
md = CuModule("mycudamodule.ptx", false) # false means it will not be automatically finalized
(comment in original)
I am trying to understand what exactly this false option for finalizing means and when I would / would not want to use it. I came across this post on SO (What is the right way to write a module finalize method in Julia?), which quotes the Julia documentation:
finalizer(x, function)
Register a function f(x) to be called when there are no program-accessible references to x. The behavior of this function is unpredictable if x is of a bits type.
I don't really understand what this means though, or even whether the finalizing here is the same as that referred to in the CUDArt example. For example, it doesn't make sense to me to try to call a function on an argument x when that argument isn't accessible to the program - how could this even be possible? Thus, I would appreciate any help in clarifying:
What it means to "finalize" in Julia and
When I would/would not want to use it in the context of importing .ptx modules with CUDArt
I can't speak for CUDArt, but here is what finalize means in Julia: when the garbage collector detects that the program can no longer access the object, then it will run the finalizer, and then collect (free) the object. Note that the garbage collector can still access the object, even though the program cannot.
Here is an example:
julia> type X
a
end
julia> j = X(1) # create new X(1) object, accessible as j
julia> finalizer(j, println) # print the object when it's finalized
julia> gc() # suggest garbage collection; nothing happens
julia> j = 0 # now the original object is no longer accessible by the program
julia> gc() # suggest garbage collection
X(1) # object was collected... and finalizer was run
This is useful so that external resources (such as file handles or malloced memory) are freed if an object is collected.
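For instance, a finalizer can free C-allocated memory when its owner is collected. A minimal sketch in the same older Julia syntax as above (CBuffer is just an invented wrapper type):

type CBuffer
    ptr::Ptr{Void}
end

function CBuffer(nbytes::Integer)
    buf = CBuffer(Libc.malloc(nbytes))      # allocate raw C memory
    finalizer(buf, b -> Libc.free(b.ptr))   # free it once buf is no longer reachable
    return buf
end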
I cannot comment, but I would like to add this from the docs:
finalizer(f, x)
f must not cause a task switch, which excludes most I/O operations such as println. Using the @async macro (to defer context switching to outside of the finalizer) or ccall to directly invoke IO functions in C may be helpful for debugging purposes.
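With that newer argument order, a debugging finalizer that wants to print could therefore look something like this (a sketch; obj is whatever mutable object you are tracking):

finalizer(obj) do o
    @async println("finalizing ", o)   # defer the I/O to a task that runs outside the finalizer
end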
I would like to run code on n processes and have the logs from each process in a separate file.
I tried, naively, something like this:
from multiprocessing import Process
import logging

class Worker(Process):
    def __init__(self, logger_name, log_file):
        super().__init__()
        self.logger = logging.getLogger(logger_name)
        self.log_file = log_file
        self.logger.addHandler(logging.FileHandler(log_file))
        print("from init", self.logger, self.logger.handlers)

    def run(self) -> None:
        print("from run", self.logger, self.logger.handlers)

if __name__ == '__main__':
    p1 = Worker("l1", "log1")
    p1.start()
(tried in Python 3.9 and 3.11)
but for some reason, the handler is gone. This is the output:
from init <Logger l1 (WARNING)> [<FileHandler log1 (NOTSET)>]
from run <Logger l1 (WARNING)> []
Why is the FileHandler gone? Should I call addHandler within the run method -- is that the correct way?
I was trying to use this answer but couldn't really make it work.
For the moment, I have solved it by defining the handlers in run, but it seems like a dirty hack to me...
UPDATE: This happens on my MacBook Python installations. On a Linux server, I couldn't reproduce this. Very confusing.
In either case, the question is probably:
"Is this the correct way to log to files, with several copies of one
process?"
I found the reason for the observed behavior. It has to do with pickling of objects when they are transferred between Processes.
In the standard library's implementation of Logger, a __reduce__ method is defined. This method is used in cases where an object cannot be reliably pickled. Instead of trying to pickle the object itself, the pickle protocol uses the value returned by __reduce__. In the case of Logger, __reduce__ returns a function name (getLogger) and a string (the name of the Logger being pickled) to be used as an argument. During unpickling, the unpickling protocol makes a function call (logging.getLogger(name)); the result of that function call becomes the unpickled Logger instance.
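You can check this from a REPL (a small illustration; only the logger's name survives):

import logging

logger = logging.getLogger("l1")
print(logger.__reduce__())   # (<function getLogger at 0x...>, ('l1',))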
The original Logger and the unpickled Logger will have the same name, but perhaps not much else in common. The unpickled Logger will have the default configuration, whereas the original Logger will have any customization you may have performed.
In Python, Process objects do not share an address space. When a new Process is launched with the "spawn" start method (the default on Windows and macOS), its instance variables must somehow be "transferred" to the new process, and this is done by pickling/unpickling. (On Linux the default start method is "fork", which copies the parent's memory directly, which is why you could not reproduce the problem there.) In the example code, the instance variables declared in Worker.__init__ do indeed appear in the new Process, as you can verify by printing them in Worker.run. But under the hood Python has actually pickled and unpickled all of the instance variables, to make it look like they have magically migrated to the new Process. In the vast majority of cases this works just fine. But not necessarily if one of those instance variables defines a __reduce__ method.
A logging.FileHandler cannot, I suspect, be pickled since it uses operating system resources (a file). This is probably the reason (or at least one of the reasons) why Logger objects can't be pickled.
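Given all that, the approach you already fell back to is, I think, the right one rather than a hack: create the handler inside run(), in the child process, so that nothing un-picklable has to cross the process boundary. A sketch (same names as in the question):

from multiprocessing import Process
import logging

class Worker(Process):
    def __init__(self, logger_name, log_file):
        super().__init__()
        self.logger_name = logger_name   # plain strings pickle without trouble
        self.log_file = log_file

    def run(self) -> None:
        logger = logging.getLogger(self.logger_name)
        logger.addHandler(logging.FileHandler(self.log_file))
        logger.warning("hello from %s", self.name)

if __name__ == '__main__':
    p1 = Worker("l1", "log1")
    p1.start()
    p1.join()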
I'm writing a basic event handler in Lua which uses some code located in another module:
require "caves"
script.on_event({defines.events.on_player_dropped_item}, function(e)
caves.init_layer(game)
player = game.players[e.player_index]
caves.move_down(player)
end
)
but whenever the event is triggered I get the following error:
attempt to index global 'caves' (a nil value)
Why is this and how do I solve it?
You open up the module in question and see what it exports (which global variables are assigned and which locals are returned at the bottom of the file). Or pester the mod author to create an interface.
Lua's require(filename) only looks up a file filename.lua and runs it, which amounts to module initialization. If running the file returns anything, that value is stored in Lua's (not-so) hidden cache table used by require; if nothing is returned but there were no errors, the boolean true is stored there instead, to indicate that filename.lua has already been loaded. That same value is what ends up in the variable to the left of the equals sign in caves = require('caves').
Anything else is left up to the author's conscience.
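That cache is the package.loaded table, and you can inspect it yourself:

local caves = require("caves")
print(package.loaded["caves"])   -- the module's return value, or true if it returned nothing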
If inside the module file functions are written like this (two variants shown):
init_layer = function(game)
    -- do smth
end
function move_down(player)
    -- do smth
end
then after a call to require these functions end up in your global environment, overwriting any variables of yours with the same names.
If they are like this:
local init_layer = function(game)
    -- do smth
end
local function move_down(player)
    -- do smth
end
then you won't get them from outside.
Your code expects that the module is written as:
caves = {
    init_layer = function(game)
        -- do smth
    end
}
caves.move_down = function(player)
    -- do smth
end
This is the old way of writing modules; it has been moved away from, but is not forbidden. Many massive libraries like torch still use it, because you'd end up assigning them to the same-named globals anyway.
Кирилл's answer is relevant to the newer style:
local caves = {
    -- same as above
}
-- same as above
return caves
We cannot know more about this from here; the rest is up to you, and Lua scripts are de facto open source anyway.
Addendum: The event_handler is not part of the Lua language; it is something provided by the host program in which Lua is embedded, so the corresponding tag is redundant.
You should consult your software's documentation on what script.on_event does. In this particular case it likely does not matter, but in general a function that takes another function as an argument can dump it to a string and then try to load and run it in a different environment, without the upvalues and globals that the latter may reference.
require() does not create a global table automatically; it returns the module's value to wherever you call the function. To access the module through a variable, you have to assign it yourself:
local caves = require "caves"
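With the module bound that way, the handler from the question becomes (assuming caves.lua returns its table of functions):

local caves = require "caves"

script.on_event({defines.events.on_player_dropped_item}, function(e)
    caves.init_layer(game)
    local player = game.players[e.player_index]
    caves.move_down(player)
end)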
I'm working with the NativeCall interface.
The library is going to call my callback function a bunch of times.
That works fine. I can just declare my callback with the right
signature, pass it in as &callback and the library calls the sub just
fine.
It also has the capability to set a payload void *pointer to anything
I want, and it will include that in the call to my callback function.
Can I hide a Perl Str, for example, in the payload and successfully round trip it?
sub set_userdata(Pointer) returns int32 is native { ... }
sub set_callback(&callback(Pointer $userdata --> int32)) returns int32 is native { ... }
sub callback(Pointer $userdata) returns int32 {
my Str $mystring = ???
...
}
my Str $my-userdata-string;
set_userdata(???);
set_callback(&callback);
It seems like it could work with some incantation of binding, "is rw", nativecast() and/or .deref.
You can only use a native representation in such a case (such as CStruct, CArray, and CPointer), or alternatively a Blob. You are also responsible for ensuring that you keep a reference to the thing you pass as userdata alive from Perl 6's perspective also, so the GC doesn't reclaim the memory that was passed to the C function.
Memory management is the reason you can't pass any old Perl 6 object off to a C function: there's no way for the GC to know whether the object is still reachable through some C data structure it can't introspect. In a VM like MoarVM, objects are also moved around in memory over time as part of the garbage collection process, meaning that the C code could end up with an outdated pointer.
An alternative strategy is to not pass a pointer at all, but instead pass an integer and use that to index into an array of objects. (That's how the libuv binding inside of MoarVM tracks down the VM-level callbacks, fwiw.)
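A rough sketch of that indexing idea (names are mine; it assumes Pointer.new accepts an integer "address" and that prefix + turns it back into one):

use NativeCall;

my @userdata;   # kept alive on the Perl 6 side, so the GC cannot reclaim the objects

# Stash an object and encode its slot as a fake void* payload.
sub stash(Mu $obj --> Pointer) {
    @userdata.push($obj);
    Pointer.new(@userdata.elems);        # 1-based so slot 0 never looks like a NULL pointer
}

sub callback(Pointer $payload --> int32) {
    my $obj = @userdata[+$payload - 1];  # map the fake address back to the array slot
    note "recovered: ", $obj.gist;
    0;
}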
I got around this by just ignoring the userdata and making a new closure referencing the Perl object directly for every callback function. Since there is a new closure created every time I set the callback, I think this will leak memory over time.
The program is used in the context of MPI; it's written in Fortran with an MPI implementation.
I declare an array within a module, like
module var
    real, save :: arr(8)
end module
Then a subroutine like init is used to initialize the array arr.
In the main program unit, init is called first to initialize the array arr, and then another subroutine like algo is called to do some computations. At the beginning of subroutine algo, the value of arr is correct. During the computation, the value of arr changes weirdly on some processors, even though there is no code changing the value of arr, while on the other processors the value stays correct.
I have checked the code and I am pretty sure no code changes the value of arr during the computation.
By the way, the variables declared within the module var are numerous.
Thanks to all of you who gave suggestions. The error is due to an array element being accessed out of bounds. In my program there is a line of code that accesses the index-0 element of the array, like arr(0)=..., which is outside the range of a Fortran array (indices start at 1 by default). This out-of-bounds write changes the value of another variable defined in the module, like parm, which was quite unexpected to me.
Since you are using MPI, you must also broadcast the variable to all processors if the initialization is done on only one processor.
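For example, a sketch (assuming init fills arr only on rank 0 and that MPI_INIT has already been called):

use var     ! the module holding arr
use mpi
integer :: rank, ierr

call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
if (rank == 0) call init()                                         ! initialize on one processor only
call MPI_BCAST(arr, size(arr), MPI_REAL, 0, MPI_COMM_WORLD, ierr)  ! then copy arr to every rank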
I am getting "Segmentation Fault" error over and over again, while using my subroutines (I have put all of them in MODULEs) with a code written in Fixed Form Source (during fortran77 days).
The original make file (Linux platform) is a mess, it compiles only ".f" source, so I had to change the extensions of my files from ".f90" to ".f", and have left first 7 columns blank in my modules.
My modules extensively use whole array operations and operations on array-sections, and I declare the variables in F90 style, many of them are assumed-size arrays.
My question:- although the compiler compiles these modules (having whole-array/array-section operations) without any warning/error, however is this "segmentation fault" due to the usage of modules with whole-array/array-section operations (kept in .f files) in a legacy code?
For example, I have written the following code in the "algebra.f" module:
function dyad_vv(v1,v2) !dyadic product of two vectors
real*8, dimension(:)::v1,v2
real*8, dimension(size(v1,1),size(v2,1))::dyad_vv
integer i,j
do i=1,size(v1,1)
do j=1,size(v2,1)
dyad_vv(i,j)=v1(i)*v2(j)
end do
end do
end function
!==================================
function dot_mv(m,v) !dot product of a matrix and a vector
real*8, dimension(:,:)::m
real*8, dimension(:)::v
real*8, dimension(size(m,1))::dot_mv
integer i,j
do i=1,size(m,1)
dot_mv(i)=0.0000D0
do j=1,size(v,1)
dot_mv(i)=dot_mv(i)+m(i,j)*v(j)
end do
end do
end function
!==================================
function dot_vm(v,m) !dot product of a vector and a matrix
real*8, dimension(:)::v
real*8, dimension(:,:)::m
real*8, dimension(size(m,2))::dot_vm
integer i,j
do i=1,size(m,2)
dot_vm(i)=0.0000D0
do j=1,size(v,1)
dot_vm(i)=dot_vm(i)+v(j)*m(j,i)
end do
end do
end function
To expand a little on my already over-long comment:
Segmentation faults in Fortran programs generally arise from either (a) attempting to access an array element outside the array bounds, or (b) mismatching procedure actual arguments and dummy arguments.
Fortunately, intelligent use of your compiler can help you spot both these situations. For (a) you need to switch on run-time array bounds checking, and for (b) you need to switch on compile-time subroutine interface checking. Your compiler manual will tell you what flags you need to set.
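For instance, with gfortran or ifort (other compilers use different spellings):

gfortran -g -fcheck=bounds myprog.f    # (a) run-time array bounds checking
ifort -g -check bounds myprog.f        # the Intel equivalent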
One of the advantages of modern Fortran, and in particular of modules, is that you get procedure interface checking for free, as it were: the compiler takes care, at compile time, of checking that dummy and actual arguments match.
So I don't think your problem stems directly from writing modern Fortran in fixed-source form. But I do think that writing modern Fortran in fixed-source form to avoid re-writing your makefile and to avoid upgrading some FORTRAN77 is a sufficiently perverse activity that you will find it painful in the short run, and regret it in the long run as you continue to develop degraded code.
Face up to it, refactor now.