I have some heavy CSV tables that I would like to import in parallel with the @sync and @async macros.
Not being very familiar with this, I tried the following:
# import files
@sync @async begin
    df1 = CSV.File(libname*"df1.csv") |> DataFrame!
    df2 = CSV.File(libname*"df2.csv") |> DataFrame!
end
The task completes, but the data subsetting I do afterwards seems to be affected:
select!(df1, Not("Var1"))
ArgumentError: Column :Var1 not found in the data frame
PS: without the @sync macro the code works fine.
I am probably doing something wrong. Any idea would be helpful.
Thanks
@sync @async do not do anything in your code other than introducing a begin ... end block with its own local scope.
What happens here is that you create a new scope and never modify the global values of df1 and df2 - instead you are still seeing their old values.
If I/O is the bottleneck in your code, the correct code would be the following:
dfs = Vector{DataFrame}(undef, 2)
@sync begin
    @async dfs[1] = CSV.File(libname*"df1.csv") |> DataFrame!
    @async dfs[2] = CSV.File(libname*"df2.csv") |> DataFrame!
end
However, usually it is not I/O that is the issue but rather the CPU. In that case green threads are not much use and you need regular threads:
dfs = Vector{DataFrame}(undef, 2)
Threads.@threads for i in 1:2
    dfs[i] = CSV.File(libname*"df$i.csv") |> DataFrame!
end
Note that for this code to use multi-threading you need to set the JULIA_NUM_THREADS environment variable before starting Julia, e.g. on Windows:
set JULIA_NUM_THREADS=2
Coming from Python 3 to Julia, one would love to be able to write fast iterators as functions with produce/yield syntax or something like it.
Julia's macros seem to suggest that one could build a macro which transforms such a "generator" function into a Julia iterator.
[It even seems like you could easily inline iterators written in function style, which is a feature the Iterators.jl package also tries to provide for its specific iterators https://github.com/JuliaCollections/Iterators.jl#the-itr-macro-for-automatic-inlining-in-for-loops ]
Just to give an example of what I have in mind:
@asiterator function myiterator(as::Array)
    b = 1
    for (a1, a2) in zip(as, as[2:end])
        try
            @produce a1[1] + a2[2] + b
        catch exc
        end
    end
end

for i in myiterator([(1,2), (3,1), 3, 4, (1,1)])
    @show i
end
where myiterator should ideally create a fast iterator with as little overhead as possible. And of course this is only one specific example; ideally I would like to have something which works with all or almost all generator functions.
The currently recommended way to transform a generator function into an iterator is via Julia's Tasks, at least to my knowledge. However, they also seem to be way slower than pure iterators. For instance, if you can express your function with simple iterators like imap, chain and so on (provided by the Iterators.jl package), this seems to be highly preferable.
Is it theoretically possible in Julia to build a macro converting generator-style functions into flexible, fast iterators?
Extra-Point-Question: If this is possible, could there be a generic macro which inlines such iterators?
Some iterators of this form can be written like this:
myiterator(as) = (a1[1] + a2[2] + 1 for (a1, a2) in zip(as, as[2:end]))
This code can (potentially) be inlined.
To fully generalize this, it is in theory possible to write a macro that converts its argument to continuation-passing style (CPS), making it possible to suspend and restart execution, giving something like an iterator. Delimited continuations are especially appropriate for this (https://en.wikipedia.org/wiki/Delimited_continuation). The result is a big nest of anonymous functions, which might be faster than Task switching, but not necessarily, since at the end of the day it needs to heap-allocate a similar amount of state.
I happen to have an example of such a transformation here (in femtolisp though, not Julia): https://github.com/JeffBezanson/femtolisp/blob/master/examples/cps.lsp
This ends with a define-generator macro that does what you describe. But I'm not sure it's worth the effort to do this for Julia.
Python-style generators - which in Julia would be closest to yielding from Tasks - involve a fair amount of inherent overhead. You have to switch tasks, which is non-trivial and cannot straightforwardly be eliminated by a compiler. That's why Julia's iterators are based on functions that transform one typically immutable, simple state value into another. Long story short: no, I do not believe that this transformation can be done automatically.
After thinking a lot about how to translate Python generators to Julia without losing much performance, I implemented and tested a library of higher-level functions which implements Python-like/Task-like generators in continuation style: https://github.com/schlichtanders/Continuables.jl
Essentially, the idea is to regard Python's yield / Julia's produce as a function which we take from the outside as an extra parameter. I called it cont, for continuation. Look, for instance, at this reimplementation of a range:
crange(n::Integer) = cont -> begin
    for i in 1:n
        cont(i)
    end
end
You can sum up all the integers with the following code:
function sum_continuable(continuable)
    a = Ref(0)
    continuable() do i
        a.x += i
    end
    a.x
end

# which simplifies with the macro Continuables.@Ref to
@Ref function sum_continuable(continuable)
    a = Ref(0)
    continuable() do i
        a += i
    end
    a
end
sum_continuable(crange(4)) # 10
As you hopefully agree, you can work with continuables almost like you would work with generators in Python or with tasks in Julia. Using do notation instead of for loops is the one thing you have to get used to.
This idea takes you really far. The only standard method which is not purely implementable using this idea is zip. All the other standard higher-level tools work just as you would hope.
The performance is unbelievably better than Tasks' and even faster than Iterators' in some cases (notably, the naive implementation of Continuables.cmap is orders of magnitude faster than Iterators.imap). Check out the Readme.md of the github repository https://github.com/schlichtanders/Continuables.jl for more details.
EDIT: To answer my own question more directly: there is no need for an @asiterator macro, just use continuation style directly.
mycontinuable(as::Array) = cont -> begin
    b = 1
    for (a1, a2) in zip(as, as[2:end])
        try
            cont(a1[1] + a2[2] + b)
        catch exc
        end
    end
end

mycontinuable([(1,2), (3,1), 3, 4, (1,1)]) do i
    @show i
end
I am currently developing my own programming language. The codebase (in Lua) is composed of several modules, as follows:
The first, error.lua, has no dependencies;
lexer.lua depends only on error.lua;
prototypes.lua also has no dependencies;
parser.lua, instead, depends on all the modules above;
interpreter.lua is the fulcrum of the whole codebase. It depends on error.lua, parser.lua, and memory.lua;
memory.lua depends on functions.lua;
finally, functions.lua depends on memory.lua and interpreter.lua. It is required from inside memory.lua, so we can say that memory.lua also depends on interpreter.lua.
With "A depends on B" I mean that the functions declared in A need those declared in B.
The real problem, though, is when A depends on B which depends on A, which, as you can understand from the list above, happens quite frequently in my code.
To give a concrete example of my problem, here's what interpreter.lua looks like:
--first, I require the modules that DON'T depend on interpreter.lua
local parser, Error = table.unpack(require("parser"))
--(since error.lua is needed in the lexer, parser and interpreter modules alike,
--I only actually require it once in lexer.lua and then pass its result around)

--Then, I should require memory.lua. But since memory.lua and
--functions.lua need some functions from interpreter.lua to work, I just
--forward declare the variables those functions need, and then the functions themselves:

--forward declaration
local globals, new_memory, my_nil, interpret_statement

--functions I need to declare before requiring memory.lua
local function interpret_block()
    --uses interpret_statement and new_memory
end

local function interpret_expression()
    --uses new_memory, Error and my_nil
end

--Now I can safely require memory.lua:
globals, new_memory, my_nil = require("memory")(interpret_block, interpret_expression)
--(I'll explain why it returns a function to call later)

--Then I have to fulfill the forward declaration of interpret_statement:
function interpret_statement()
    --uses interpret_expression, new_memory and Error
end

--finally, the result is a function
return function()
    --uses parser, new_function and globals
end
The memory.lua module returns a function so that it can receive interpret_block and interpret_expression as arguments, like this:
--memory.lua
return function(interpret_block, interpret_expression)
    --declaration of globals, new_memory, my_nil
    return globals, new_memory, my_nil
end
Now, I got the idea of the forward declarations here and that of the functions-as-modules (like in memory.lua, to pass some functions from the requiring module to the required module) here. They're both great ideas, and I must say that they work well. But you pay in readability.
In fact, breaking the code into smaller pieces this time made my work harder than it would have been had I coded everything in a single file - which is impossible for me, because it's more than 1000 lines of code and I'm coding from a smartphone.
The feeling I have is that of working with spaghetti code, only on a larger scale.
So how could I solve the problem of my code being incomprehensible because some modules need each other to work (without making all the variables global, of course)? How would programmers in other languages solve this problem? How should I reorganize my modules? Are there any standard conventions for Lua modules that could help me with this problem?
If we look at your Lua files as a directed graph, where an edge points from a dependency to its usage, the goal is to modify your graph to be a tree or forest, since you intend to get rid of the cycles.
A cycle is a set of nodes which, traversed in the direction of the edges, leads back to the starting node.
Now, the question is how to get rid of cycles?
The answer looks like this:
Let's consider a node N and let {D1, D2, ..., Dm} be its direct dependencies. If there is no Di in that set that depends on N either directly or indirectly, then you can leave N as it is. In that case, the set of problematic dependencies is empty: {}.
However, what if you have a non-empty set, like {PD1, ..., PDk}?
You then need to analyze each PDi (for i between 1 and k) along with N, and determine the subset of each PDi that does not depend on N, and the subset of N that does not depend on any PDi. This way you can split N into N_base and N, and each PDi into PDi_base and PDi: N depends on N_base, as do all the PDi, and each PDi depends on its PDi_base along with N_base. In your case, for instance, the parts of interpreter.lua that memory.lua and functions.lua actually need could be moved into an interpreter_base.lua that all three modules require, breaking the cycle.
This approach minimizes the cycles in the dependency graph. However, it is quite possible that a set of functions {f1, ..., fl} exists in this group which cannot be migrated into _base as discussed, due to its dependencies, so some cycles remain. In that case you need to give the group in question a name, create a module for it, and migrate all those functions into that module.
I was profiling an application that does a lot of math operations on NMatrix matrices.
The application spends most of its time in the code below.
{add: :+, sub: :-, mul: :*, div: :/, pow: :**, mod: :%}.each_pair do |ewop, op|
  define_method("__list_elementwise_#{ewop}__") do |rhs|
    self.__list_map_merged_stored__(rhs, nil) { |l,r| l.send(op,r) }.cast(stype, NMatrix.upcast(dtype, rhs.dtype))
  end
  define_method("__dense_elementwise_#{ewop}__") do |rhs|
    self.__dense_map_pair__(rhs) { |l,r| l.send(op,r) }.cast(stype, NMatrix.upcast(dtype, rhs.dtype))
  end
  define_method("__yale_elementwise_#{ewop}__") do |rhs|
    self.__yale_map_merged_stored__(rhs, nil) { |l,r| l.send(op,r) }.cast(stype, NMatrix.upcast(dtype, rhs.dtype))
  end
end
In the comments above the code it says:
# Define the element-wise operations for lists. Note that the __list_map_merged_stored__ iterator returns a Ruby Object
# matrix, which we then cast back to the appropriate type. If you don't want that, you can redefine these functions in
# your own code.
I am not that familiar with the internals of NMatrix, but it seems as though the math operations are being executed in Ruby. Is there any way to speed up these methods?
We had written them in C/C++ originally, but it required some really complicated macros which were basically unmaintainable and buggy, and substantially increased compile time.
If you look in History.txt, you'll be able to find at what version we started writing the math operations in Ruby. You could use the prior code to override and put the element-wise operations (where you need speed) exclusively in C/C++.
However, you may run into problems getting those to work properly (without a crash) on matrices of dtype :object.
As a side note, the sciruby-dev Google Group (or the nmatrix issue tracker) might be a more appropriate place for a question like this one.
I am in the midst of trying to make the leap from Matlab to numpy, but I desperately need speed in my FFTs. Now, I know of pyfftw, but I don't know that I am using it properly. My approach goes something like this:
import numpy as np
import pyfftw
import timeit

pyfftw.interfaces.cache.enable()

def wrapper(func, *args):
    def wrapped():
        return func(*args)
    return wrapped

def my_fft(v):
    global a
    global fft_object
    a[:] = v
    return fft_object()

def init_cond(X):
    return my_fft(2.*np.cosh(X)**(-2))

def init_cond_py(X):
    return np.fft.fft(2.*np.cosh(X)**(-2))

K = 2**16
Llx = 10.
KT = 2*K
dx = Llx/np.float64(K)
X = np.arange(-Llx, Llx, dx)

global a
global b
global fft_object
a = pyfftw.n_byte_align_empty(KT, 16, 'complex128')
b = pyfftw.n_byte_align_empty(KT, 16, 'complex128')
fft_object = pyfftw.FFTW(a, b)

wrapped = wrapper(init_cond, X)
print min(timeit.repeat(wrapped, repeat=100, number=1))

wrapped_two = wrapper(init_cond_py, X)
print min(timeit.repeat(wrapped_two, repeat=100, number=1))
I appreciate that there are builder functions and also standard interfaces to the scipy and numpy fft calls through pyfftw. These have all behaved very slowly though. By first creating an instance of the fft_object and then using it globally, I have been able to get speeds as fast or slightly faster than numpy's fft call.
That being said, I am working under the assumption that wisdom is implicitly being stored. Is that true? Do I need to make that explicit? If so, what is the best way to do that?
Also, I think timeit is completely opaque. Am I using it properly? Is it storing wisdom as I call repeat? Thanks in advance for any help you might be able to give.
In an interactive (ipython) session, I think the following is what you want to do (timeit is very nicely handled by ipython):
In [1]: import numpy as np
In [2]: import pyfftw
In [3]: K = 2**16
In [4]: Llx = 10.
In [5]: KT = 2*K
In [6]: dx = Llx/np.float64(K)
In [7]: X = np.arange(-Llx,Llx,dx)
In [8]: a = pyfftw.n_byte_align_empty(KT, 16, 'complex128')
In [9]: b = pyfftw.n_byte_align_empty(KT, 16, 'complex128')
In [10]: fft_object = pyfftw.FFTW(a,b)
In [11]: a[:] = 2.*np.cosh(X)**(-2)
In [12]: timeit np.fft.fft(a)
100 loops, best of 3: 4.96 ms per loop
In [13]: timeit fft_object(a)
100 loops, best of 3: 1.56 ms per loop
In [14]: np.allclose(fft_object(a), np.fft.fft(a))
Out[14]: True
Have you read the tutorial? What don't you understand?
I would recommend using the builders interface to construct the FFTW object. Have a play with the various settings, most importantly the number of threads.
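As a rough sketch of what I mean (the array size and thread count here are arbitrary, and I'm reusing the n_byte_align_empty allocation style from your code):

import numpy as np
import pyfftw

# an aligned input array (the size is just for illustration)
a = pyfftw.n_byte_align_empty(2**17, 16, 'complex128')
a[:] = np.random.randn(2**17) + 1j*np.random.randn(2**17)

# builders inspects the array and hands back a ready-planned FFTW object;
# threads is the knob most worth playing with
fft_object = pyfftw.builders.fft(a, threads=4)
result = fft_object()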
The wisdom is not stored by default. You need to extract it yourself.
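Something like this (the filename is arbitrary):

import pickle
import pyfftw

# after planning your transforms, pull out the accumulated wisdom
wisdom = pyfftw.export_wisdom()  # a tuple of wisdom strings

with open('wisdom.pickle', 'wb') as f:
    pickle.dump(wisdom, f)

# in a later session, restore it before planning so the plans come back fast
with open('wisdom.pickle', 'rb') as f:
    pyfftw.import_wisdom(pickle.load(f))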
All your globals are unnecessary - the objects you want to change are mutable, so you can handle them just fine. fft_object always points to the same thing, so there is no problem with it not being a global. Ideally, you simply don't want that loop over ii. I suggest working out how to structure your arrays so that you can do all your operations in a single call.
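To sketch what I mean by a single call (the shapes are made up; the point is that many small transforms become one batched transform along an axis):

import numpy as np
import pyfftw

# 500 independent length-1024 FFTs, laid out as one 2D array
batch = pyfftw.n_byte_align_empty((500, 1024), 16, 'complex128')
batch[:] = np.random.randn(500, 1024)

# one planned object transforms every row in a single call
fft_batch = pyfftw.builders.fft(batch, axis=-1, threads=2)
out = fft_batch()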
Edit:
[edit edit: I wrote the following paragraph with only a cursory glance at your code, and clearly with it being a recursive update, vectorising is not an obvious approach without some serious cunning. I have a few comments on your implementation at the bottom though]
I suspect your problem is a more fundamental misunderstanding of how best to use a language like Python (or indeed Matlab) for numerical processing. The core tenet is: vectorise as much as possible. By this I mean reduce your Python calls to as few as possible. I can't see how to do that with your example unfortunately (though I've only thought about it for 2 mins). If that's still failing, think about Cython - though make sure you really want to go down that route (i.e. you've exhausted the other options).
Regarding the globals: don't do it that way. If you want to create an object with state, use a class (that is what they are for) or perhaps a closure in your case. A global is almost never what you want (I think I have at least one vaguely legitimate use for one in all my Python, and that's in the cache code in pyfftw). I suggest reading this nice SO question. Matlab is a crappy language - one of the many reasons for this is its crappy scoping facilities, which tend to lead to bad habits.
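To sketch the class idea against your snippet (FFTRunner is a name I've just made up; the pattern is the point):

import pyfftw

class FFTRunner(object):
    # holds the aligned arrays and the FFTW plan as instance state,
    # replacing the module-level globals
    def __init__(self, n):
        self.a = pyfftw.n_byte_align_empty(n, 16, 'complex128')
        self.b = pyfftw.n_byte_align_empty(n, 16, 'complex128')
        self.plan = pyfftw.FFTW(self.a, self.b)

    def __call__(self, v):
        self.a[:] = v
        return self.plan()

my_fft = FFTRunner(2*2**16)
# then: my_fft(2.*np.cosh(X)**(-2))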
You only need global if you want to modify a reference globally. I suggest reading a bit more about the Python scoping rules and what variables really are in python.
FFTW objects carry with them all the arrays you need, so you don't need to pass them around separately. Using the call interface carries almost no overhead (particularly if you disable the normalisation), either for setting or returning the values - if you're at that level of optimisation, I strongly suspect you've hit the limit (the caveat being that this may not quite be true for many, many very small FFTs, but at that point you want to rethink your algorithm to vectorise the calls to FFTW). If you find a substantial overhead in updating the arrays every time (using the call interface), this is a bug and you should submit it as such (and I'd be pretty surprised).
Bottom line, don't worry about updating the arrays on every call. This is almost certainly not your bottleneck, though make sure you're aware of the normalisation and disable it if you wish (it might slow things down slightly compared to raw accessing of the update_arrays() and execute() methods).
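If I remember the call interface correctly, disabling the normalisation on the inverse looks something like this (treat the keyword as an assumption to check against the docs):

import pyfftw

a = pyfftw.n_byte_align_empty(1024, 16, 'complex128')
b = pyfftw.n_byte_align_empty(1024, 16, 'complex128')
ifft_object = pyfftw.FFTW(b, a, direction='FFTW_BACKWARD')

# skip the internal 1/N scaling and fold it into your own constants instead
res = ifft_object(normalise_idft=False)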
Your code makes no use of the cache. The cache is only used when you're using the interfaces code, and reduces the Python overhead in creating new FFTW objects internally. Since you're handling the FFTW object yourself, there is no reason for a cache.
The builders code is a less constrained interface for getting an FFTW object. I almost always use the builders now (it's much more convenient than creating an FFTW object from scratch). The cases in which you want to create an FFTW object directly are pretty rare, and I'd be interested to know what they are.
Comments on the algorithm implementation:
I'm not familiar with the algorithm you're implementing. However, I have a few comments on how you've written it at the moment.
You're computing nl_eval(wp) on every loop, but as far as I can tell that's just the same as nl_eval(w) from the previous loop, so you don't need to compute it twice (but this comes with the caveat that it's pretty hard to see what's going on when you have globals everywhere, so I might be missing something).
Don't bother with the copies in my_fft or my_ifft. Simply do fft_object(u) (2.29 ms versus 1.67 ms on my machine for the forward case). The internal array update routine makes the copy unnecessary. Also, as you've written it, you're copying twice: c[:] means "copy into the array c", and the array you're copying into c is v.copy(), i.e. a copy of v (so two copies in total).
More sensible (and probably necessary) is copying the output into holding arrays (since that avoids clobbering interim results on subsequent calls to the FFTW object), though make sure your holding arrays are properly aligned. I'm sure you've noted that this is important, but copying the output also makes the code rather more understandable.
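Putting those two points together, a minimal sketch (v stands in for whatever you're transforming, and fft_object is the plan from your script):

import numpy as np

v = np.random.randn(2*2**16) + 0j   # stand-in input, sized like your KT
res = fft_object(v)                 # the call interface copies v into the plan's input itself
holding = res.copy()                # res aliases the plan's output array, so copy it out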
You can move all your scalings together. The 3 in the computation of wn can be moved inside my_fft in nl_eval. You can also combine this with the normalisation constant from the ifft (and turn that off in pyfftw).
Take a look at numexpr for the basic array operations. It can offer quite a bit of speed-up over vanilla numpy.
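For instance, the initial condition from your script in one fused, multi-threaded evaluation (no intermediate numpy temporaries):

import numpy as np
import numexpr as ne

X = np.arange(-10., 10., 20./2**16)
v = ne.evaluate("2. / cosh(X)**2")  # same as 2.*np.cosh(X)**(-2)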
Anyway take what you will from all that. No doubt I've missed something or said something incorrect, so please accept it with as much humility as I can offer. It's worth spending a little time working out how Python ticks compared to Matlab (in fact, just forget the latter).
I am trying to compute numerically the solutions for a system of many equations and variables (100+). So far I have tried three things:
I know that the vector of p(i) (which contains most of the endogenous variables) is decreasing. Thus I simply gave some starting points, and then increased (decreased) my guess when I saw that the specific p was too low (high). Of course this was always conditional on the others being fixed, which is not actually the case. This should eventually work, but it is neither efficient, nor is it obvious that it reaches a solution in finite time. It did work when reducing the system to 4-6 variables, though.
I could create 100+ loops around each other and use bisection for each loop. This would eventually lead me to the solution, but would take ages both to program (as I have no idea how to create n nested loops without actually writing them out - which is also bad because I would like to increase/decrease the number of variables easily) and to execute.
I tried fminsearch, but as expected for that vast number of variables - no way!
I would appreciate any ideas... Here is the code (this is the fminsearch version I tried):
This is the run file:
clear all
clc

% parameters
z = 1.2;
w = 20;
lam = 0.7;
tau = 1;
N = 1000;
t_min = 1;
t_max = 4;
M = 6;
a_min = 0.6;
a_max = 0.8;

t = zeros(1,N);
alp = zeros(1,M);
p = zeros(1,M);
p_min = 2;
p_max = 1;

for i = 1:N
    t(i) = t_min + (i-1)*(t_max - t_min)/(N-1);
end
for i = 1:M
    alp(i) = a_min + (i-1)*(a_max - a_min)/(M-1);
    p(i) = p_min + (i-1)*(p_max - p_min)/(M-1);
end

fun = @(p) david(p, z,w,lam,tau,N,M,t,alp);
p0 = p;
fminsearch(fun, p0)
And this is the program-file:
function crit = david(p, z,w,lam,tau,N,M,t,alp)
X = zeros(M,N);
pi = zeros(M,N);
C = zeros(1,N);
Xa = zeros(1,N);
Z = zeros(1,M);
rl = 0.01;
rh = 1.99;
EXD = 140;

while (abs(EXD) > 100)
    r1 = rl + 0.5*(rh-rl);
    for i = 1:M
        for j = 1:N
            X(i,j) = min(w*(1+lam), (alp(i) * p(i) / r1)^(1/(1-alp(i))) * t(j)^((z-alp(i))/(1-alp(i))));
            pi(i,j) = p(i) * t(j)^(z-alp(i)) * X(i,j)^(alp(i)) - r1*X(i,j);
        end
    end
    [C,I] = max(pi);
    Xa(1) = X(I(1),1);
    for j = 2:N
        Xa(j) = X(I(j),j);
    end
    EXD = sum(Xa) - N*w;
    if (abs(EXD) > 100 && EXD > 0)
        rl = r1;
    elseif (abs(EXD) > 100 && EXD < 0)
        rh = r1;
    end
end

Ya = zeros(M,N);
for j = 1:N
    Ya(I(j),j) = t(j)^(z-alp(I(j))) * X(I(j),j)^(alp(I(j)));
end
Yi = sum(Ya,2);

if (Yi(1) == 0)
    Z(1) = -50;
end
for j = 2:M
    if (Yi(j) == 0)
        Z(j) = -50;
    else
        Z(j) = (p(1)/p(j))^tau - Yi(j)/Yi(1);
    end
end

zz = sum(abs(Z))
crit = sum(abs(Z));
First of all my recommendation: use your brain.
What do you know about the function? Can you use a gradient approach, linearize the problem, or perhaps fix most of the variables? If not, think twice before deciding that you are really interested in all 100 variables, and perhaps simplify the problem.
Now, if that is not possible, read this:
If you found a way to quickly get a local optimum, you could simply wrap a loop around it to try different starting points and hope you will find a good optimum.
If you really need to make lots of loops (and a variable number of them), I suppose it can be done with recursion, but it is not easily explained.
If you just quickly want to make a fixed number of loops inside each other, the code for this can easily be generated in Excel (hint: loop variables can be called t1, t2, ...).
If you really need to evaluate a function at a lot of points, it is probably preferable to create all the points first using ndgrid and then evaluate them all at once. (Needless to say, this will not be a nice solution for 100 nontrivial variables.)