About programming server actions in Odoo/OpenERP - openerp-7

I am developing server actions of the "Python code" type in Odoo 7. How can I find the list of available methods that I can call? Where is this documented?
Thanks!

Server actions are evaluated through the safe_eval method in openerp/tools/safe_eval.py. The __builtins__ dict contains the allowed methods. Two things should be noted:
__import__ is not your regular import; it is a redefinition that allows you to import _strptime and time only (see the sketch below).
globals is redefined to return locals instead, to keep things neatly sandboxed.
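For instance, a quick sketch of what the import restriction means inside safe_eval'd code (illustrative only, not from the Odoo source):
# Inside a "Python code" server action, only whitelisted imports work:
import time        # allowed by the redefined __import__
import _strptime   # allowed
# import os        # anything else raises ImportError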
The list of allowed methods, types, etc. is as follows:
__import__
True
False
None
str
unicode
globals
locals
bool
int
float
long
enumerate
dict
list
tuple
map
abs
min
max
sum
reduce
filter
round
len
repr
set
all
any
ord
chr
cmp
divmod
isinstance
range
xrange
zip
Exception
As far as I know, this is not documented anywhere.
Apart from these, you also have access to the following, as noted in the comments on the empty Python Code when you create a server action:
self: ORM model of the record on which the action is triggered
object: Record on which the action is triggered if there is one, otherwise None
pool: ORM model pool (i.e. self.pool)
cr: database cursor
uid: current user id
context: current context
time: Python time module
workflow: Workflow engine
And the undocumented ones (for these you need to check the source of openerp/addons/base/ir/ir_actions.py; see the sketch after this list):
datetime: Python datetime module
dateutil: Python dateutil module
user: browse record of the current user (contrast with uid, which is just the user id)
Warning: openerp.exceptions.Warning
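Putting these together, here is a minimal sketch of a "Python code" server action body using the context variables above (the model used and the check performed are hypothetical):
# Everything below runs through safe_eval; the variables used are
# injected by the server action evaluation context described above.
partner_obj = pool.get('res.partner')                   # any model via the pool
ids = partner_obj.search(cr, uid, [], context=context)  # old-style ORM call
if object and not object.name:                          # object may be None
    raise Warning('Record %d has no name' % object.id)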


Terraform module as "custom function"

It is possible to use a local module to return, let's say, some calculated output. But how can you pass parameters, so that each time you ask for the output value you get a different value according to the parameter (e.g. a different prefix)?
Is it possible to pass a resource to a module and enhance it with tags?
I can imagine that both cases are more likely a job for providers, but for some simple cases it might work. The best would be if they implemented some custom function that you would be able to call at will.
It is possible in principle to write a Terraform module that only contains "named values", which is the broad term for the three module features Input Variables (analogous to function arguments), Local Values (analogous to local declarations inside your function), and Output Values (analogous to return values).
Such a module would not contain any resource or data blocks at all and would therefore be a "computation-only" module, with essentially the same capabilities as a function in a functional programming language.
variable "a" {
type = number
}
variable "b" {
type = number
}
locals {
sum = var.a + var.b
}
output "sum" {
value = local.sum
}
The above example is contrived just to show the principle. A "function" this simple doesn't really need the local value local.sum, because its expression could just be written inline in the value of output "sum", but I wanted to show examples of all three of the relevant constructs here.
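For instance, the inlined equivalent (hypothetical, same result) would be:
output "sum" {
  value = var.a + var.b
}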
You would "call the function" by declaring a module call referring to the directory containing the file with the above source code in it:
module "example" {
source = "./modules/sum"
a = 1
b = 2
}
output "result" {
value = module.example.sum
}
I included the output "result" block here to show how you can refer to the result of the "function" elsewhere in your module, as module.example.sum.
Of course, this syntax is much more "chunky" than a typical function call, so in practice Terraform module authors will use this approach only when the factored out logic is significant enough to justify it. Verbosity aside though, you can include as many module blocks referring to that same module as you like if you need to call the "function" with different sets of arguments. Each call to the module can take a different set of input variable values and therefore produce a different result.
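For instance, a second hypothetical call to the same module with different arguments yields an independent result:
module "example2" {
  source = "./modules/sum"

  a = 3
  b = 4
}

output "result2" {
  value = module.example2.sum # 7, independent of module.example
}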

How to access gem5 stats from the Python script?

Is it possible to run the simulation for a certain amount of ticks, and then read the value of selected statistics from the Python config script?
Edit: a patch has been submitted at: https://gem5-review.googlesource.com/c/public/gem5/+/33176
As of 3ca404da175a66e0b958165ad75eb5f54cb5e772 it does not seem possible, but it is likely easy to implement.
We already have a loop that goes over all stats in the Python code under src/python/m5/stats/__init__.py, so Python stat objects are fully exposed and iterated there, but the actual stat value does not appear to be exposed to them, only the stat name:
def _dump_to_visitor(visitor, root=None):
    # Legacy stats
    if root is None:
        for stat in stats_list:
            stat.visit(visitor)

    # New stats
    def dump_group(group):
        for stat in group.getStats():
            stat.visit(visitor)
The visit method then causes the value to be written out to the stats file, but the Python side never sees the value itself.
However, visit is already exposed as a pybind11 C++ extension method, defined at src/python/pybind11/stats.cc:
py::class_<Stats::Info, std::unique_ptr<Stats::Info, py::nodelete>>(
m, "Info")
.def("visit", &Stats::Info::visit)
;
so you would likely need to expose the value there.
One annoyance is that each stat type that derives from Stats::Info has a different data representation, e.g. scalars return a double:
class ScalarInfo : public Info
{
  public:
    virtual Counter value() const = 0;
but vectors return an std::vector:
class VectorInfo : public Info
{
  public:
    virtual const VCounter &value() const = 0;
and so there is no common value() method on the base class due to the different return types; you'd just need to expose one per derived class.
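A hypothetical sketch of what exposing these could look like in src/python/pybind11/stats.cc, with one binding per derived class since the return types differ (the class names come from the gem5 source, but these bindings are assumed, not existing code):
// Assumed additions, mirroring the existing Info binding above.
py::class_<Stats::ScalarInfo, Stats::Info,
           std::unique_ptr<Stats::ScalarInfo, py::nodelete>>(
        m, "ScalarInfo")
    .def("value", &Stats::ScalarInfo::value)
    ;
py::class_<Stats::VectorInfo, Stats::Info,
           std::unique_ptr<Stats::VectorInfo, py::nodelete>>(
        m, "VectorInfo")
    .def("value", &Stats::VectorInfo::value)
    ;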
TODO: I couldn't see the value() method from Python yet, likely because the objects were still typed as the base class; this needs more investigation.
You can use a variation of gem5-stat-summarizer, which extracts selected gem5 statistics to a CSV file when you have multiple stats.txt files.

Serializing the current state of "Module Random" in OCaml's StdLib

I must have read the OCaml manual page on the standard library modules Random and Random.State half a dozen times (probably even more often) but I can't figure out how to serialize the current internal state of the PRNG.
Here's what I learned so far:
The modules Random and Random.State both operate on a state that is abstract / opaque from the outside.
Both modules offer two / three initializers, but functions exporting the current state ... I can't see them :(
What can I do? Help, please!
You can serialize (and de-serialize) the state using the Marshal module, e.g.,
let save_random_state out =
  Marshal.to_channel out (Random.get_state ()) []

let load_random_state inp =
  Random.set_state (Marshal.from_channel inp)
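For example, a minimal round trip using these helpers (the file name is hypothetical):
(* Save the state, draw a number, restore the state, draw again:
   both draws produce the same value. *)
let () =
  let oc = open_out_bin "random.state" in
  save_random_state oc;
  close_out oc;
  let x = Random.int 1000 in
  let ic = open_in_bin "random.state" in
  load_random_state ic;
  close_in ic;
  let y = Random.int 1000 in
  assert (x = y)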
But if you just want the Random module to generate the same sequences of pseudo-random numbers, then it is better to initialize it with the same state, i.e., use the same seed. For example, if you start your program with
let () = Random.set_state (Random.State.make [|42|])
your program will behave deterministically, as the Random module will always generate the same numbers.

Define method type for meta dynamically created method

I'm using graphql-ruby, and I'd really like to be able to type the dynamic methods that are created for things like arguments.
Small example:
class Test
  argument :first_argument, String
  argument :secondArgument, String, as: :second_argument, required: false

  def method
    puts first_argument.length  # this is okay
    puts second_argument.length # this is a problem, because it can be nil
  end
end
I've tried to define these by doing:
# ...
first_argument = T.let(nil, String)
second_argument = T.let(nil, T.nilable(String))
which doesn't seem to work. I also tried
#...
sig { returns(String) }
def first_argument; ""; end
sig { returns(T.nilable(String)) }
def second_argument; end
which works, but is not overly pretty. Is there a nicer way to go about this?
There is some nascent, experimental support for typing methods declared by metaprogramming like this: https://sorbet.org/docs/metaprogramming-plugins
In this case, you might define a plugin file like:
# argument_plugin.rb
# Sorbet calls this plugin with command line arguments similar to the following:
#   ruby --class Test --method argument --source "argument :first_argument, String"
# We only care about the source here, so we use ARGV[5].
source = ARGV[5]
/argument[( ]:([^,]*?), ([^,]*?)[) ]/.match(source) do |match_data|
  puts "sig {returns(#{match_data[2]})}" # write a sig that returns the type
  puts "def #{match_data[1]}; end"       # write an empty method with the right name
end
I've only included the "getter" for the argument here, but it should be simple to go ahead and write out the sig for the setter method as well. You'd also want to handle all variants of the argument method as I've only handled the one with Symbol, Type arguments. For what it's worth, I'm not sure if the "source" passed in to your plugin would be normalized with parens or not, so I've made the regex match either. I also suspect that this will not work if you pass in the symbol names as variables instead of literals.
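For instance, a hypothetical extension inside the same match block that also emits the setter:
  # Also write a setter sig and an empty setter, mirroring the getter above.
  puts "sig {params(value: #{match_data[2]}).returns(#{match_data[2]})}"
  puts "def #{match_data[1]}=(value); end"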
We then use a YAML file to tell Sorbet about this plugin.
# triggers.yaml
ruby_extra_args:
  # These options are forwarded to Ruby
  - '--disable-gems' # This option speeds up Ruby boot time. Use it if you don't need gems.
triggers:
  argument: argument_plugin.rb # This tells Sorbet to run argument_plugin.rb when it sees a call to `argument`
Run Sorbet and pass in the YAML config file as the argument for --dsl-plugins:
❯ srb tc --dsl-plugins triggers.yaml ... files to type check ...
I'd really like to be able to type the dynamic methods that are created for things like arguments
Sorbet doesn't support typing dynamic methods like that. But they do provide a T::Struct class that has similar functionality. I did something similar last week for my project and I'll describe what I did below. If T::Struct doesn't work for you, an alternative is writing some code to generate the Sigs that you'd write manually.
My approach is to use T::Struct as a wrapper for the “arguments” class. You can define args as props in a T::Struct as follows (see the sketch after this list):
const for arguments that don’t change
prop for arguments that may change
Use default to provide a default value when no value is given
Use T.nilable type for arguments that can be nil
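A minimal sketch of such a struct (the class and prop names are hypothetical):
class TestArguments < T::Struct
  const :first_argument, String                      # required, never reassigned
  prop :second_argument, T.nilable(String)           # may be nil
  prop :third_argument, String, default: 'fallback'  # used when no value is given
end

args = TestArguments.new(first_argument: 'hello')
args.first_argument.length    # fine: always a String
args.second_argument&.length  # must handle nil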
Building on top of the vanilla T::Struct, I also added support for “maybe”, which is for args that are truly optional and can be nil, i.e., when a value is not passed in, it should not be used at all. It is different from using nil as the default value, because when a value is passed in, it can be nil. If you’re interested in this “maybe” component, feel free to DM me.

invoking TCL C API's inside tcl package procedures

I am using the Tcl C API for my program, and I have read about it and created a test program similar to this C++ example.
But I have a problem with this example: when I use it in the shell (by loading it with load example.o), every input automatically invokes the API's interpreter and runs the command related to the input string.
But suppose I want the input to invoke a Tcl procedure that lives inside a package I require; this procedure would check the parameters, print another message, and only then invoke the related Tcl C API function (a kind of wrapper). How can I do that?
I read somewhere that the # symbol should be used for invoking an external program, but I just can't find where that was.
I will give a small example to make things clearer.
somepackage.tcl
proc dosomething { arg1 arg2 arg3 } {
    # check args here
    set temp [ ... ] ;# invoke the Tcl C API function here and put its result in temp
    return $temp
}
package provide ::somepackage 1.0
test.tcl
package require ::somepackage 1.0
load somefile.o ;# the object file which implements the Tcl C API command [doSomething 1 2 3]
...
But I have a problem with this example: when I use it in the shell (by loading it with load example.o), every input automatically invokes the API's interpreter and runs the command related to the input string.
Provided that your script snippets represent your actual implementation accurately, the problem is that your Tcl proc named doSomething is replaced by the C-implemented Tcl command once your extension is loaded. Procedures and commands live in the same namespace(s). If the loading order were reversed, the problem would remain the same.
I read that everything is evaluated by the Tcl interpreter, so in this case I should name the Tcl names of the C wrapper functions in a special way, for example cFunc. But I am not sure about this.
This is correct. You have to organise the C-implemented commands and their scripted wrappers so that their names do not conflict with one another. Some (basic) options:
Use two different Tcl namespaces, with same-named procedures (see the sketch after this list)
Apply some naming conventions to wrapper procs and commands (your cFunc hint)
If your API were provided as actual Itcl or TclOO objects, and the individual commands were the methods, you could use a subclass or a mixin to host refinements (using the super-reference, such as next in TclOO, to forward from the scripted refinement to the C implementations).
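For instance, a minimal sketch of the namespace option, assuming the C command was registered as ::impl::doSomething (both namespace names are hypothetical):
namespace eval ::somepackage {
    proc doSomething {arg1 arg2 arg3} {
        # check args here
        set temp [::impl::doSomething $arg1 $arg2 $arg3]
        return $temp
    }
    namespace export doSomething
}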
A hot-fix solution in your current setup, which is better replaced by some actual design, would be to rename or interp hide the conflicting commands:
Load the extension: load somefile.o
Hide the now-available command: interp hide {} doSomething
Define a scripted wrapper that calls the hidden original at some point:
For example:
proc doSomething {args} {
    # argument checking
    set temp [interp invokehidden {} doSomething {*}$args]
    # result checking
    return $temp
}