Nested "if" (AKA "switch") in Smalltalk (Pharo) - smalltalk

I need to populate the matrix (stored as an array of arrays) with some values. The matrix is a Jacobian for a simple diffusion problem and looks like this:
J(1,1) = 1, J(N,N)=0
and for 1<n<N:
J(n,n) = -2k/dx^2 - 2*c(n)
J(n,n-1)=J(n,n+1) = k/dx^2
the rest of the matrix entries are zeros.
So far I have this monstrosity:
(1 to: c size) collect: [ :n |
    (1 to: c size) collect: [ :m |
        n = 1 | (n = c size)
            ifTrue: [ m = n ifTrue: [ 1.0 ] ifFalse: [ 0.0 ] ]
            ifFalse: [ m = n
                ifTrue: [ -2.0 * k / dx squared - (2.0 * (c at: n)) ]
                ifFalse: [ m = (n - 1) | (m = (n + 1))
                    ifTrue: [ k / dx squared ]
                    ifFalse: [ 0.0 ] ] ] ] ]
Notice the nested "if-statements" (Smalltalk equivalents). This works. But, is there, perhaps, a more elegant way of doing the same thing? As it stands now, it is rather unreadable.

n := c size.
Matrix
    new: n
    tabulate: [:i :j | self jacobianAtRow: i column: j]
where
jacobianAtRow: i column: j
    n := c size.
    (i = 1 or: [i = n]) ifTrue: [^j = i ifTrue: [1.0] ifFalse: [0.0]].
    j = i ifTrue: [^-2.0 * k / dx squared - (2.0 * (c at: i))].
    (j = (i - 1) or: [j = (i + 1)]) ifTrue: [^k / dx squared].
    ^0.0
Basically, the general idea is this: whenever you find nested ifs, factor that piece of code out into a method of its own and transform the nesting into a cases-like enumeration that returns a value for every possibility.

For readability's sake I would consider sacrificing the extra O(n) time and avoiding IFs altogether (which might just make it even faster...).
J(N,N) = 0
J(1,1) = 1
and for 1<n<N:
J(n,n) = Y(n)
J(n,m-1) = J(n,m+1) = X
What this tells me is that the whole matrix looks something like this
( 1 X 0 0 0 )
( X Y X 0 0 )
( 0 X Y X 0 )
( 0 0 X Y X )
( 0 0 0 X 0 )
Which means that I can create the whole matrix with just zeros, and then change the diagonal and neighboring diagonals.
jNM := [ k / dx squared ].
jNN := [ :n | -2.0 * k / dx squared - (2.0 * (c at: n)) ].
n := c size.
m := Matrix
    new: n
    tabulate: [ :i :j | 0 ].
(1 to: n - 1) do: [ :i |
    m at: i at: i put: (jNN value: i).
    m at: i + 1 at: i put: jNM value.
    m at: i at: i + 1 put: jNM value.
].
m at: 1 at: 1 put: 1.
Note: I'm not familiar with the math behind this but the value for J(n,m-1) seems like a constant to me.
Note 2: I'm putting the values at i + 1 indexes, because I am starting at position 1;1, but you can start from the opposite direction and have i-1.
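As an aside (NumPy rather than Smalltalk, and only a sketch), the same fill-the-diagonals idea in a language with array slicing, following the spec from the question; c, k and dx are assumed to be a sequence and two numbers as above:

import numpy as np

def jacobian(c, k, dx):
    # start from zeros, then fill the three diagonals (spec from the question)
    c = np.asarray(c, dtype=float)
    n = len(c)
    J = np.zeros((n, n))
    i = np.arange(1, n - 1)                  # interior rows, 0-based
    J[i, i] = -2.0 * k / dx**2 - 2.0 * c[i]
    J[i, i - 1] = k / dx**2
    J[i, i + 1] = k / dx**2
    J[0, 0] = 1.0                            # J(1,1) = 1; J(N,N) stays 0
    return J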

What is the most optimal way to apply a kernel to a large array in F#?

I have the following code:
// reduce by 25x
let smallOutput = Array2D.init (output.GetLength(0) / 5) (output.GetLength(1) / 5) (fun _ _ -> Int32.MinValue)
let weights =
    array2D [|
        [| 1; 1; 1; 1; 1 |]
        [| 1; 3; 3; 3; 1 |]
        [| 1; 3; 5; 3; 1 |]
        [| 1; 3; 3; 3; 1 |]
        [| 1; 1; 1; 1; 1 |]
    |]
let weightsSum = 45
for y in [0 .. smallOutput.GetLength(0) - 1] do
    for x in [0 .. smallOutput.GetLength(1) - 1] do
        let mutable v = 0
        for i in [0 .. 4] do
            for j in [0 .. 4] do
                v <- v + weights.[j, i] * output.[y * 5 + j, x * 5 + i]
        smallOutput.[y, x] <- v / weightsSum
It takes a large matrix (16k x 16k) and reduces it by 25x while applying weights.
I understand I can try to do this in a Parallel.ForEach loop, but I am wondering if there is anything built into F# that would make this faster in the first place.
I don't think there's much you can do to optimize it further, short of doing the summation already when you initialize the smallOutput variable; i.e.
let smallOutput = Array2D.init (output.GetLength(0) / 5) (output.GetLength(1) / 5) (fun y x ->
    let mutable v = 0
    for i in [0 .. 4] do
        for j in [0 .. 4] do
            v <- v + weights.[j, i] * output.[y * 5 + j, x * 5 + i]
    v / weightsSum)
The thing is that you need to loop over all entries in the larger array; there's no way around that. If you know beforehand the structure of the weighting matrix, e.g. that it is symmetric in some way, you might be able to exploit that. Though to be honest, I'm not sure how much of an optimization that would yield.
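As an aside (NumPy rather than F#, and only a sketch of the idea): the same blockwise weighted reduction can be written without explicit loops by reshaping the image into 5x5 tiles and contracting them against the weights. Whether an analogous reshape-based formulation pays off in F# would need benchmarking.

import numpy as np

def reduce_25x(output, weights):
    # output: (H, W) ints with H, W divisible by 5; weights: (5, 5) ints
    h, w = output.shape
    tiles = output.reshape(h // 5, 5, w // 5, 5)   # tiles[y, j, x, i] == output[y*5 + j, x*5 + i]
    return np.einsum('yjxi,ji->yx', tiles, weights) // weights.sum()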

How can I solve exponential equation in Maxima CAS

I have function in Maxima CAS :
f(t) := (2*exp(2*%i*%pi*t) - exp(4*%pi*t*%i))/4;
here:
t is a real number between 0 and 1
function should give a point on the boundary of main cardioid of Mandelbrot set
How can I solve equation :
eq1:c=f(t);
(where c is a complex number)
?
Solve doesn't work
solve( eq1,t);
result is empty list
[]
Result of this equation should give real number t ( internal angle or rotation number ) from complex point c
EDIT: Thanks to the comment by #JosehDoggie,
I can draw the initial equation using:
load(draw)$
f(t):=(2*exp(%i*t) - exp(2*t*%i))/4;
draw2d(
key="main cardioid",
nticks=200,
parametric( 0.5*cos(t) - 0.25*cos(2*t), 0.5*sin(t) - 0.25*sin(2*t), t,0,2*%pi),
title="main cardioid of M set "
)$
or
draw2d(polar(abs(exp(t*%i)/2 -exp(2*t*%i)/4),t,0,2*%pi));
Similar image ( cardioid) is here
Edit2:
(%i1) eq1:c = exp(%pi*t*%i)/2 - exp(2*%pi*t*%i)/4;
(%o1) c = %e^(%i %pi t)/2 - %e^(2 %i %pi t)/4
(%i2) solve(eq1,t);
(%o2) [t = -%i*log(1 - sqrt(1 - 4*c))/%pi, t = -%i*log(sqrt(1 - 4*c) + 1)/%pi]
So :
f1(c):=float(cabs( - %i* log(1 - sqrt(1 - 4* c))/%pi));
f2(c):=float(cabs( - %i* log(1 + sqrt(1 - 4* c))/%pi));
but the results are not good.
Edit 3 :
Maybe I should start from this.
I have:
complex numbers c ( = boundary of cardioid)
real numbers t ( from 0 to 1 or sometimes from 0 to 2*pi )
function f which computes c from t : c= f(t)
I want to find function which computes t from c: t = g(c)
testing values:
t = 0, c = 1/4
t = 1/2, c = -3/4
t = 1/3, c = -0.125 + 0.649519052838329*%i
t = 2/5, c = -0.481762745781211 + 0.531656755220025*%i
t = 0.118033988749895, c = 0.346828007859920 + 0.088702386914555*%i
t = 0.618033988749895, c = -0.390540870218399 - 0.586787907346969*%i
t = 0.718033988749895, c = 0.130349371041523 - 0.587693986342220*%i
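These test values can be checked numerically. Here is a minimal sketch in Python (not Maxima), assuming the branch w = 1 - sqrt(1 - 4*c) of the quadratic w^2 - 2*w + 4*c = 0 (the root on the unit circle) and t = arg(w)/(2*%pi) mod 1:

from cmath import exp, phase, sqrt
from math import pi

def f(t):
    # the cardioid parametrization from the question
    return (2 * exp(2j * pi * t) - exp(4j * pi * t)) / 4

def g(c):
    # candidate inverse: w = exp(2j*pi*t) satisfies w**2 - 2*w + 4*c = 0
    w = 1 - sqrt(1 - 4 * c)
    return phase(w) / (2 * pi) % 1.0

for t in (0.0, 0.5, 1 / 3, 2 / 5, 0.118033988749895, 0.618033988749895):
    print(t, g(f(t)))   # g(f(t)) reproduces t (up to floating-point error)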
load("to_poly_solve") $
e: (2*exp(2*%i*%pi*t) - exp(4*%pi*t*%i))/4 - c $
s: to_poly_solve(e, t) $
s: maplist(lambda([e], rhs(first(e))), s) $ /* unpack arguments of %union */
ratexpand(s);
Outputs
(%o6) [%z7 - %i*log(1 - sqrt(1 - 4*c))/(2*%pi), %z9 - %i*log(sqrt(1 - 4*c) + 1)/(2*%pi)]

Sample without replacement

How to sample without replacement in TensorFlow? Like numpy.random.choice(n, size=k, replace=False) for some very large integer n (e.g. 100k-100M), and smaller k (e.g. 100-10k).
Also, I want it to be efficient and on the GPU, so other solutions like this with tf.py_func are not really an option for me. Anything which would use tf.range(n) or so is also not an option because n could be very large.
This is one way:
n = ...
sample_size = ...
idx = tf.random_shuffle(tf.range(n))[:sample_size]
EDIT:
I had posted the answer below but then read the last line of your post. I don't think there is a good way to do it if you absolutely cannot produce a tensor with size O(n) (numpy.random.choice with replace=False is also implemented as a slice of a permutation). You could resort to a tf.while_loop until you have unique indices:
n = ...
sample_size = ...
idx = tf.zeros(sample_size, dtype=tf.int64)
idx = tf.while_loop(
    # keep redrawing while the candidate still contains duplicate values
    lambda idx: tf.size(idx) > tf.size(tf.unique(idx)[0]),
    lambda idx: tf.random_uniform([sample_size], maxval=n, dtype=tf.int64),
    [idx])
EDIT 2:
About the average number of iterations in the previous method: if we call n the number of possible values and k the length of the desired vector (with k ≤ n), the probability that an iteration is successful is:
p = product((n - (i - 1)) / n for i in 1 .. k)
Since each iteration can be considered a Bernoulli trial, the average number of trials until the first success is 1 / p (proof here). Here is a function that calculates the average number of trials in Python for some k and n values:
def avg_iter(k, n):
    if k > n or n <= 0 or k < 0:
        raise ValueError()
    avg_it = 1.0
    for p in (float(n) / (n - i) for i in range(k)):
        avg_it *= p
    return avg_it
And here are some results:
+-------+------+----------+
|   n   |  k   | Avg iter |
+-------+------+----------+
|    10 |    5 |      3.3 |
|   100 |   10 |      1.6 |
|  1000 |   10 |      1.1 |
|  1000 |  100 |    167.8 |
| 10000 |   10 |      1.0 |
| 10000 |  100 |      1.6 |
| 10000 | 1000 |  2.9e+22 |
+-------+------+----------+
You can see it varies wildly depending on the parameters.
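As a quick sanity check (not part of the original post), avg_iter reproduces the table rows above, for example:

print(avg_iter(100, 1000))    # ~167.8   (the n=1000,  k=100  row)
print(avg_iter(1000, 10000))  # ~2.9e+22 (the n=10000, k=1000 row)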
It is possible, though, to construct a vector in a fixed number of steps, although the only algorithm I can think of is O(k²). In pure Python it goes like this:
import random

def sample_wo_replacement(n, k):
    sample = [0] * k
    # draw the i-th value from a range shrunk by the i draws made so far
    for i in range(k):
        sample[i] = random.randint(0, n - 1 - i)
    # shift each draw past the earlier draws so that all values end up distinct
    for i, v in reversed(list(enumerate(sample))):
        for p in reversed(sample[:i]):
            if v >= p:
                v += 1
        sample[i] = v
    return sample
random.seed(100)
print(sample_wo_replacement(10, 5))
# [2, 8, 9, 7, 1]
print(sample_wo_replacement(10, 10))
# [6, 5, 8, 4, 0, 9, 1, 2, 7, 3]
This is a possible way to do it in TensorFlow (not sure if the best one):
import tensorflow as tf

def sample_wo_replacement_tf(n, k):
    # First loop
    sample = tf.constant([], dtype=tf.int64)
    i = 0
    sample, _ = tf.while_loop(
        lambda sample, i: i < k,
        # This is ugly but I did not want to define more functions
        lambda sample, i: (tf.concat([sample,
                                      tf.random_uniform([1], maxval=tf.cast(n - tf.shape(sample)[0], tf.int64), dtype=tf.int64)],
                                     axis=0),
                           i + 1),
        [sample, i], shape_invariants=[tf.TensorShape((None,)), tf.TensorShape(())])
    # Second loop
    def inner_loop(sample, i):
        sample_size = tf.shape(sample)[0]
        v = sample[i]
        j = i - 1
        v, _ = tf.while_loop(
            lambda v, j: j >= 0,
            lambda v, j: (tf.cond(v >= sample[j], lambda: v + 1, lambda: v), j - 1),
            [v, j])
        return (tf.where(tf.equal(tf.range(sample_size), i), tf.tile([v], (sample_size,)), sample), i - 1)
    i = tf.shape(sample)[0] - 1
    sample, _ = tf.while_loop(lambda sample, i: i >= 0, inner_loop, [sample, i])
    return sample
And an example:
with tf.Graph().as_default(), tf.Session() as sess:
    tf.set_random_seed(100)
    sample = sample_wo_replacement_tf(10, 5)
    for i in range(10):
        print(sess.run(sample))
# [3 0 6 8 4]
# [5 4 8 9 3]
# [1 4 0 6 8]
# [8 9 5 6 7]
# [7 5 0 2 4]
# [8 4 5 3 7]
# [0 5 7 4 3]
# [2 0 3 8 6]
# [3 4 8 5 1]
# [5 7 0 2 9]
This is quite intensive on tf.while_loops, though, which are well known not to be particularly fast in TensorFlow, so I don't know how fast you can really get with this method without some kind of benchmarking.
EDIT 4:
One last possible method. You can divide the range of possible values (0 to n) in "chunks" of size c and pick a random amount of numbers from each chunk, then shuffle everything. The amount of memory that you use is limited by c, and you don't need nested loops. If n is divisible by c, then you should get about a perfect random distribution, otherwise values in the last "short" chunk would receive some extra probability (this may be negligible depending on the case). Here is a NumPy implementation. It is somewhat long to account for different corner cases and pitfalls, but if c ≥ k and n mod c = 0 several parts get simplified.
import numpy as np

def sample_chunked(n, k, chunk=None):
    chunk = chunk or n
    last_chunk = chunk
    parts = n // chunk
    # Distribute k among chunks
    max_p = min(float(chunk) / k, 1.0)
    max_p_last = max_p
    if n % chunk != 0:
        parts += 1
        last_chunk = n % chunk
        max_p_last = min(float(last_chunk) / k, 1.0)
    p = np.full(parts, 2)
    # Iterate until a valid distribution is found
    while not np.isclose(np.sum(p), 1) or np.any(p > max_p) or p[-1] > max_p_last:
        p = np.random.uniform(size=parts)
        p /= np.sum(p)
    dist = (k * p).astype(np.int64)
    sample_size = np.sum(dist)
    # Account for rounding errors
    while sample_size < k:
        i = np.random.randint(len(dist))
        while (dist[i] >= chunk) or (i == parts - 1 and dist[i] >= last_chunk):
            i = np.random.randint(len(dist))
        dist[i] += 1
        sample_size += 1
    while sample_size > k:
        i = np.random.randint(len(dist))
        while dist[i] == 0:
            i = np.random.randint(len(dist))
        dist[i] -= 1
        sample_size -= 1
    assert sample_size == k
    # Generate sample parts
    sample_parts = []
    for i, v in enumerate(np.nditer(dist)):
        if v <= 0:
            continue
        c = chunk if i < parts - 1 else last_chunk
        base = chunk * i
        sample_parts.append(base + np.random.choice(c, v, replace=False))
    sample = np.concatenate(sample_parts, axis=0)
    np.random.shuffle(sample)
    return sample
np.random.seed(100)
print(sample_chunked(15, 5, 4))
# [ 8 9 12 13 3]
A quick benchmark of sample_chunked(100000000, 100000, 100000) takes about 3.1 seconds on my computer, while I haven't been able to run the previous algorithm (the sample_wo_replacement function above) to completion with the same parameters. It should be possible to implement it in TensorFlow, maybe using tf.TensorArray, although it would require significant effort to get it exactly right.
Use the Gumbel-max trick here: https://github.com/tensorflow/tensorflow/issues/9260
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))
_, indices = tf.nn.top_k(logits + z, K)
indices are what you want. This trick is so easy!
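In other words (a NumPy sketch of the same trick, not taken from the linked issue): perturb the log-weights with Gumbel(0, 1) noise and keep the top k indices, which yields k distinct indices; with all-equal logits this reduces to a uniform sample without replacement, though like the answer below it materializes an O(n) tensor.

import numpy as np

def gumbel_top_k(logits, k):
    z = -np.log(-np.log(np.random.uniform(size=len(logits))))  # Gumbel(0, 1) noise
    return np.argpartition(logits + z, -k)[-k:]                # indices of the k largest values

idx = gumbel_top_k(np.zeros(1000), 10)   # uniform sample of 10 distinct indices out of 1000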
The following works fairly fast on the GPU, and I did not encounter memory issues when using n~100M and k~10k (using NVIDIA GeForce GTX 1080 Ti):
def random_choice_without_replacement(n, k):
    """equivalent to 'numpy.random.choice(n, size=k, replace=False)'"""
    return tf.math.top_k(tf.random.uniform(shape=[n]), k, sorted=False).indices
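A quick usage sketch (assuming TF 2.x eager execution; the sizes are just illustrative):

import tensorflow as tf

idx = random_choice_without_replacement(1000000, 10000)
print(idx.shape)                        # (10000,)
print(int(tf.size(tf.unique(idx).y)))   # 10000, i.e. all sampled indices are distinct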

Smalltalk Vandermonde-matrix

Long story short, it is a Vandermonde matrix and I have a problem running a loop over the second dimension of the array.
'add meg M-et majd N-et (enter kozotte)(az 1. sor az 1-es szam hatvanyai)' displayNl. "Hungarian prompt: enter M, then N (Enter in between); row 1 holds the powers of 1"
M := stdin nextLine asInteger.
N := stdin nextLine asInteger.
|tomb|
tomb := Array new: M.
x := 1.
y := 1.
a := M + 1.
b := N + 1.
x to: a do: [ :i|
tomb at:x put: (Array new: N) y to: b do: [ :j |
x at: y put: (x raisedTo: y - 1) ] ].
tomb printNl.
Here is a good way to create a matrix for which we have an expression of the generic entry aij:
Matrix class >> fromBlock: aBlock rows: n columns: m
    | matrix |
    matrix := self rows: n columns: m.
    matrix indicesDo: [:i :j | | aij |
        aij := aBlock value: i value: j.
        matrix at: i at: j put: aij].
    ^matrix
With the above method you can now implement
Matrix class >> vandermonde: anArray degree: anInteger
    ^self
        fromBlock: [:i :j | (anArray at: i) raisedTo: j - 1]
        rows: anArray size
        columns: anInteger + 1
EDIT
I just realized that in Pharo there is already a way to create a matrix from the expression of its aij; it is named rows:columns:tabulate:, so my answer reduces to:
Matrix class >> vandermonde: anArray degree: anInteger
    ^self
        rows: anArray size
        columns: anInteger + 1
        tabulate: [:i :j | (anArray at: i) raisedTo: j - 1]
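As an aside (NumPy, not Smalltalk): the same matrix is a one-liner with the np.vander built-in, which can be handy for cross-checking results; a sketch with made-up data:

import numpy as np

a = np.array([1, 2, 3, 4])                       # the points (anArray above)
degree = 3
V = np.vander(a, degree + 1, increasing=True)    # V[i, j] = a[i] ** j
print(V)
# [[ 1  1  1  1]
#  [ 1  2  4  8]
#  [ 1  3  9 27]
#  [ 1  4 16 64]]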

Haskell, function works when using numbers, but not with variables

I'm using ghci and I'm having a problem with a function for getting the factors of a number.
The code I would like to work is:
let factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]
It doesn't complain when I then hit enter, but as soon as I try to use it (with 66 in this case) I get this error message:
Ambiguous type variable 't0' in the constraints:
  (Integral t0)
    arising from a use of 'factors' at <interactive>:30:1-10
  (Num t0) arising from the literal '66' at <interactive>:30:12-13
  (RealFrac t0)
    arising from a use of 'factors' at <interactive>:30:1-10
Probable fix: add a type signature that fixes these type variable(s)
In the expression: factors 66
In the equation for 'it': it = factors 66
The following code works perfectly:
let factorsOfSixtySix = [x | x <- [1..truncate (66/2)], mod 66 x == 0]
I'm new to Haskell, and after looking up types and typeclasses, I'm still not sure what I'm meant to do.
Use div for integer division instead:
let factors n = [x | x <- [1.. n `div` 2], mod n x == 0]
The problem in your code is that / requires a RealFrac type for n while mod requires an Integral one. This is fine during definition, but then you cannot choose a type that satisfies both constraints.
Another option could be to truncate n before using mod, but it is more cumbersome. After all, you do not wish to call factors 6.5, do you? ;-)
let factors n = [x | x <- [1..truncate (n/2)], mod (truncate n) x == 0]
If you put a type annotation on this top-level bind (idiomatic Haskell), you get different, possibly more useful error messages.
GHCi> let factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]
GHCi> :t factors
factors :: (Integral t, RealFrac t) => t -> [t]
GHCi> let { factors :: Double -> [Double]; factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]; }
<interactive>:30:64:
    No instance for (Integral Double) arising from a use of `truncate'
    Possible fix: add an instance declaration for (Integral Double)
    In the expression: truncate (n / 2)
    In the expression: [1 .. truncate (n / 2)]
    In a stmt of a list comprehension: x <- [1 .. truncate (n / 2)]
GHCi> let { factors :: Integer -> [Integer]; factors n = [x | x <- [1..truncate (n/2)], mod n x == 0]; }
<interactive>:31:66:
    No instance for (RealFrac Integer) arising from a use of `truncate'
    Possible fix: add an instance declaration for (RealFrac Integer)
    In the expression: truncate (n / 2)
    In the expression: [1 .. truncate (n / 2)]
    In a stmt of a list comprehension: x <- [1 .. truncate (n / 2)]
<interactive>:31:77:
    No instance for (Fractional Integer) arising from a use of `/'
    Possible fix: add an instance declaration for (Fractional Integer)
    In the first argument of `truncate', namely `(n / 2)'
    In the expression: truncate (n / 2)
    In the expression: [1 .. truncate (n / 2)]
I am new to Haskell, so please forgive my boldness in offering an answer here, but recently I did this as follows:
factors :: Int -> [Int]
factors n = f' ++ [n `div` x | x <- tail f', x /= exc]
  where lim = truncate (sqrt (fromIntegral n))
        exc = ceiling (sqrt (fromIntegral n))
        f' = [x | x <- [1..lim], n `mod` x == 0]
I believe it's more efficient. You will notice the difference if you try, for example:
sum (factors 33550336)