I am using Metal in my project, and I have encapsulated some of my kernels as functions, in much the same way as MetalPerformanceShaders does.
So each of my Metal kernels has an Objective-C class with a method like this:
- (void)encodeToCommandBuffer:(id<MTLCommandBuffer>)cmdBuffer
                 inputTexture:(id<MTLTexture>)inputTexture
                outputTexture:(id<MTLTexture>)outputTexture
                    inputSize:(TextureSize)inputSize
                   outputSize:(TextureSize)outputSize
{
    id<MTLComputeCommandEncoder> enc = [cmdBuffer computeCommandEncoder];
    [enc setComputePipelineState:_state];
    // set textures, buffers and other arguments on the encoder
    [enc dispatchThreadgroups:_threadgroupsPerGrid threadsPerThreadgroup:_threadsPerThreadgroup];
    [enc endEncoding];
}
The problem is that my code crashes with the assertion:
failed assertion A command encoder is already encoding to this command buffer
The issue is random and happens in different functions. The error description is self-explanatory, but what I am curious about is this: the crashes happen in my encodeToCommandBuffer methods. In the pipeline I also use image-processing functions from MetalPerformanceShaders, which are likewise invoked through an encodeToCommandBuffer method, and those do not crash.
So it is clear that my understanding of how an encodeToCommandBuffer method should be written is wrong. How do I need to modify the code? Do I need to check the cmdBuffer state somehow, to see that it is ready to produce a new encoder? And what if it's not? Do I need some sort of loop that waits until the buffer is ready?
OK, sorted out. My pipeline processes multiple instances in parallel, and I made a mistake in the code: the pipeline tried to push all instances through the same command buffer, which was not intended. Each instance needs its own command buffer; only one command encoder may be encoding into a given command buffer at a time, so a new encoder must not be created before the previous one has called endEncoding.
I would like to run code on n processes and have the logs from each process in a separate file.
Naively, I tried something like this:
from multiprocessing import Process
import logging

class Worker(Process):
    def __init__(self, logger_name, log_file):
        super().__init__()
        self.logger = logging.getLogger(logger_name)
        self.log_file = log_file
        self.logger.addHandler(logging.FileHandler(log_file))
        print("from init", self.logger, self.logger.handlers)

    def run(self) -> None:
        print("from run", self.logger, self.logger.handlers)

if __name__ == '__main__':
    p1 = Worker("l1", "log1")
    p1.start()
(tried with Python 3.9 and 3.11)
but for some reason, the handler is gone. This is the output:
from init <Logger l1 (WARNING)> [<FileHandler log1 (NOTSET)>]
from run <Logger l1 (WARNING)> []
Why is the FileHandler gone? Should I call addHandler within the run method instead -- is that the correct way?
I was trying to use this answer but couldn't really make it work.
For the moment, I have solved it by defining the handlers in run, but that seems like a dirty hack to me...
UPDATE: This happens with the Python installations on my MacBook. On a Linux server, I couldn't reproduce it. Very confusing.
In either case, the question is probably: "Is this the correct way to log to files, with several copies of one process?"
I found the reason for the observed behavior. It has to do with pickling of objects when they are transferred between Processes.
In the standard library's implementation of Logger, a __reduce__ method is defined. This method is used in cases where an object cannot be reliably pickled. Instead of trying to pickle the object itself, the pickle protocol uses the return value of __reduce__. In the case of Logger, __reduce__ returns a function (getLogger) and a string (the name of the Logger being pickled) to be used as its argument. During unpickling, the protocol makes the function call logging.getLogger(name); the result of that call becomes the unpickled Logger instance.
The original Logger and the unpickled Logger will have the same name, but perhaps not much else in common. The unpickled Logger will have the default configuration, whereas the original Logger will have any customization you may have performed.
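You can watch this mechanism at work directly; a minimal sketch (same-process round trip, any Python 3):

import logging
import pickle

logger = logging.getLogger("l1")
logger.addHandler(logging.FileHandler("log1"))

# Logger.__reduce__ tells pickle to recreate the object via getLogger(name),
# ignoring the handlers entirely
print(logger.__reduce__())  # (<function getLogger at 0x...>, ('l1',))

# In the same process, unpickling returns the very same registered object,
# so the handler appears to survive ...
clone = pickle.loads(pickle.dumps(logger))
print(clone is logger)  # True

# ... but a freshly spawned process has an empty logging registry, so there
# getLogger("l1") builds a brand-new Logger with default configuration and
# no handlers -- exactly what the question observed in run().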
In Python, Process objects do not share an address space. When a new Process is launched under the spawn start method (the default on Windows and, since Python 3.8, on macOS), its instance variables must somehow be "transferred" from one Process to the other. This is done by pickling/unpickling. (Under the fork start method, the default on Linux, the child inherits a copy of the parent's memory instead, with no pickling involved; that is why the behavior did not reproduce on the Linux server.) In the example code, the instance variables declared in the Worker.__init__ function do indeed appear in the new Process, as you can verify by printing them in Worker.run. But under the hood Python has actually pickled and unpickled all of the instance variables, to make it look like they have magically migrated to the new Process. In the vast majority of cases this works just fine. But not necessarily if one of those instance variables defines a __reduce__ method.
A logging.FileHandler cannot, I suspect, be pickled at all, since it holds operating system resources (an open file, plus an internal thread lock). This is probably the reason (or at least one of the reasons) why Logger objects are not pickled as ordinary objects in the first place.
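Given all that, creating the handlers in run is not a dirty hack; it is the standard approach. Only plain, safely picklable strings cross the process boundary, and the handler that owns OS resources is created in the process that actually uses it. A minimal sketch (same class names as in the question):

from multiprocessing import Process
import logging

class Worker(Process):
    def __init__(self, logger_name, log_file):
        super().__init__()
        # store only picklable configuration, not live logging objects
        self.logger_name = logger_name
        self.log_file = log_file

    def run(self) -> None:
        # configure logging in the child process, after unpickling
        logger = logging.getLogger(self.logger_name)
        logger.addHandler(logging.FileHandler(self.log_file))
        logger.warning("hello from %s", self.name)

if __name__ == '__main__':
    workers = [Worker(f"l{i}", f"log{i}") for i in range(3)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

This behaves identically under both the fork and spawn start methods, since nothing that matters has to survive pickling.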
I'm trying to add a new discrete distribution to PyMC3 (a Wallenius non-central hypergeometric) by wrapping Agner Fog's C++ implementation (https://www.agner.org/random/).
I have successfully put the relevant functions in a C++ extension and added broadcasting so that it behaves like SciPy's distributions. (For now, broadcasting is done in Python; I will later try the xtensor-python bindings for more performant vectorization in C++.)
I'm running into the following problem: when I instantiate an RV of the new distribution in a model context, I get "TypeError: an integer is required (got type FreeRV)" from the point where value is passed to the logp() function of the new distribution.
I understand that PyMC3 might need to connect RVs to the functions, but I find no way to cast them into something my new functions can work with.
Any hints on how to resolve this or general info on adding new distributions to PyMC3 or the internal workings of distributions would be extremely helpful.
Thanks in advance!
Jan
EDIT: I noticed that FreeRV inherits from Theano's TensorVariable, so I tried calling .eval(). This leads to another error, along the lines that no input is connected (I don't have the exact error message right now).
One thing which puzzles me is why logp is called at instantiation of the variable when setting up the model ...
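For what it's worth, the usual way to make a black-box (e.g. C++) log-probability callable from a PyMC3 logp is to wrap it in a Theano Op, so that it only ever sees concrete NumPy arrays when the graph is evaluated, never symbolic FreeRVs. A minimal sketch, in which wallenius_logpmf stands in for the (hypothetical) C++ extension function and the type signature is assumed:

import numpy as np
import theano.tensor as tt

class WalleniusLogp(tt.Op):
    # declare symbolic input/output types; Theano derives make_node from these
    itypes = [tt.lvector]  # integer-valued observations
    otypes = [tt.dvector]  # element-wise log-probabilities

    def perform(self, node, inputs, output_storage):
        (value,) = inputs  # here value is a plain NumPy integer array
        output_storage[0][0] = np.asarray(wallenius_logpmf(value), dtype=np.float64)

The distribution's logp would then return WalleniusLogp()(value) instead of calling the extension directly. This also touches the closing puzzle: PyMC3 builds and test-evaluates the logp graph once while the model is being set up (at the variables' test values), which is why logp is already called at instantiation.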
What does this warning mean? Is there any way we can avoid this warning? I tried to understand the message from the compiler code here but I couldn't.
frege> native sysin "java.lang.System.in" :: InputStream
native function sysin :: InputStream
3: note that the java expression java.lang.System.in is supposed to be constant.
I also tried the code below but got the same warning:
frege> native sysin "java.lang.System.in" :: MutableIO InputStream
native function sysin :: MutableIO InputStream
3: note that the java expression java.lang.System.in is supposed to be constant.
It is simply a reminder that the Java value could change over the lifetime of the program, while you, the programmer, assume its de facto immutability by using this notation.
In fact, one can reassign such fields at the Java level. In that case, Frege code could still return the previous value, which it may have cached somewhere. Or it could violate referential transparency, so that sysin would not mean the same thing everywhere.
If you need to make sure that you get the current value of a mutable field, you need to declare it as IO or ST.
This feature is intended as a relief for the cases where we know that a value won't change, so that we can write:
dosomething sysin
instead of
sysin >>= dosomething
This is used, for example, in frege.java.IO, where stdin, stdout and stderr are defined that way.
The warning cannot be suppressed, except by compiling with nowarn. This feature should simply not be used unless you're absolutely sure you're doing the right thing, that is, when a proper IO or ST action would produce the very same value every time.
I'm writing a primitive that takes in two agentsets and a command block. It needs to call a few functions, execute the command block in the current context, and then call another function. Here's what I have so far:
class WithContext(pushGraphContext: GraphContext => Unit, popGraphContext: api.World => GraphContext)
    extends api.DefaultCommand {

  override def getSyntax = commandSyntax(
    Array(AgentsetType, AgentsetType, CommandBlockType))

  def perform(args: Array[Argument], context: Context) {
    val turtleSet = args(0).getAgentSet.requireTurtleSet
    val linkSet = args(1).getAgentSet.requireLinkSet
    val world = linkSet.world
    val gc = new GraphContext(world, turtleSet, linkSet)
    val extContext = context.asInstanceOf[ExtensionContext]
    val nvmContext = extContext.nvmContext
    pushGraphContext(gc)
    // execute command block here
    popGraphContext(world)
  }
}
I looked at some examples that used nvmContext.runExclusively, but that looked like it's specifically for having a given agentset run the command block. I want the current agent (possibly the observer) to run it. Should I wrap nvm.agent in an agentset and pass that to nvmContext.runExclusively? If so, what's the easiest way to wrap an agent in an agentset? If not, what should I do?
Method #1
The quicker-but-arguably-dirtier method is to use runExclusiveJob, as demonstrated in (e.g.) the create-red-turtles command in https://github.com/NetLogo/Sample-Scala-Extension/blob/master/src/SampleScalaExtension.scala .
To wrap the current agent in an agentset, you can use agent.AgentSetBuilder. (You could also pass an Array[Agent] of length 1 to one of the ArrayAgentSet constructors, but I'd recommend AgentSetBuilder since it's less reliant on internal implementation details which are likely to change.)
Method #2
The disadvantage of method #1 is the slight constant overhead associated with creating and setting up the extra AgentSet, Job, and Context objects and directing execution through them.
Creating and running a separate job isn't actually how built-in commands like if and while work. Instead of making a new job, they remain in the current job and cause commands in a command block to run (or not run) by manipulating the instruction pointer (nvm.Context.ip) to jump into them or skip over them.
I believe an extension command could do the same. I'm not certain if it has been tried before, but I can't see any reason it wouldn't work.
Doing it this way would involve understanding more about NetLogo engine internals, as documented at https://github.com/NetLogo/NetLogo/wiki/Engine-architecture . You'd model your primitive after e.g. https://github.com/NetLogo/NetLogo/blob/5.0.x/src/main/org/nlogo/prim/etc/_if.java , including altering your implementation of nvm.CustomAssembled. (Note that prim._extern, which runs extension commands, delegates its assemble method to the wrapped command's own assemble method, so this should work.) In your assemble method, instead of calling done() at the end to terminate the job, you'd just allow execution to fall through.
I could try to construct an example that works this way, but it'd take me a couple hours; it's probably not worth me doing unless there's a real need.
I've made a large program that, among other things, opens and closes files and databases, and performs writes and reads on them. Since there is no such thing as exception handling in Go, and since I didn't really know about the defer statement and the recover() function, I applied error checking after every file open, read/write, database operation, etc. E.g.:
_, insert_err := stmt.Run(query)
if insert_err != nil {
    mylogs.Error(insert_err.Error())
    return db_updation_status
}
For this, I define db_updation_status as false at the beginning and do not set it to true until everything in the function has gone right.
I've done this in every function, after every operation which I believe could go wrong.
Do you think there's a better way to do this using defer-panic-recover? I read about these at http://golang.org/doc/articles/defer_panic_recover.html, but I can't quite see how to use them. Do these constructs offer something similar to exception handling? Am I better off without them?
I would really appreciate if someone could explain this to me in a simple language, and/or provide a use case for these constructs and compare them to the type of error handling I've used above.
It's handier to return error values -- they can carry more information (an advantage for the client/user) than a two-valued bool.
As for panic/recover: there are scenarios where their use is completely sane. For example, in a hand-written recursive descent parser, it's quite a PITA to "bubble up" an error condition through all the invocation levels. In this scenario, it's a welcome simplification if there's a deferred recover at the topmost (API) level, and one can report any kind of error at any invocation level using, for example
panic(fmt.Errorf("Cannot %v in %v", foo, bar))
If an operation can fail and returns an error, then checking that error immediately and handling it properly is idiomatic in Go; it's simple, and it makes it easy to verify that everything gets handled properly.
Don't use defer/recover for such things: the needed cleanup actions are hard to code, especially if things get nested.
The usual way to report an error to a caller is to return an error as an extra return value. The canonical Read method is a well-known instance; it returns a byte count and an error.
But what if the error is unrecoverable? Sometimes the program simply cannot continue.
For this purpose, there is a built-in function panic that in effect creates a run-time error that will stop the program (but see the next section). The function takes a single argument of arbitrary type—often a string—to be printed as the program dies. It's also a way to indicate that something impossible has happened, such as exiting an infinite loop.
http://golang.org/doc/effective_go.html#errors