Is there a MicroPython library for Adafruit's TLC5947?

I'm working on a Raspberry Pi Pico W based project, and I need to use a TLC5947 LED driver. The connection is SPI, which I'm told is pretty simple, but I tried to implement it myself and couldn't. Adafruit has a CircuitPython module, but it doesn't seem to translate (directly, at least) into MicroPython.
Do I need to keep researching how to make it myself, or is there a module already made?
My attempt is below. (I assume write or pwmbuffer is the problem, FWIW. Comments are copied directly from Adafruit's C++ version of the library for Arduino.)
from machine import Pin
import machine


class TLC5947:
    def __init__(self, clock: int = 2, data: int = 3, latch: int = 5):
        self.numdrivers = 1
        self.data = Pin(data, Pin.OUT)
        self.clock = Pin(clock, Pin.OUT)
        self.latch = Pin(latch, Pin.OUT)
        self.latch.low()
        self._spi = machine.SPI(0)
        # self.OE = OE
        self.pwmbuffer = [0] * (24 * 2 * self.numdrivers)  # memset(pwmbuffer, 0, 2 * 24 * n);
        # self.spi = machine.SPI(0)

    def write(self):
        self.latch.low()  # digitalWrite(_lat, LOW);
        # // 24 channels per TLC5974
        for c in range(24 * self.numdrivers - 1, -1, -1):  # for (int16_t c = 24 * numdrivers - 1; c >= 0; c--) {
            # // 12 bits per channel, send MSB first
            for b in range(11, -1, -1):  # for (int8_t b = 11; b >= 0; b--) {
                self.clock.low()  # digitalWrite(_clk, LOW);
                if self.pwmbuffer[c] & (1 << b):  # if (pwmbuffer[c] & (1 << b))
                    self.data.high()  # digitalWrite(_dat, HIGH);
                else:  # else
                    self.data.low()  # digitalWrite(_dat, LOW);
                self.clock.high()  # digitalWrite(_clk, HIGH);
            # }
        # }
        self.clock.low()  # digitalWrite(_clk, LOW);
        self.latch.high()  # digitalWrite(_lat, HIGH);
        self.latch.low()  # digitalWrite(_lat, LOW);

    def setLed(self, lednum, r, g, b):
        self.setPWM(lednum * 3, r)
        self.setPWM(lednum * 3 + 1, g)
        self.setPWM(lednum * 3 + 2, b)

    def setPWM(self, chan: int, pwm: int):
        if pwm > 4095:
            pwm = 4095
        try:
            self.pwmbuffer[chan] = pwm
        except:
            pass
Edit:
Got it. That repo refers to a folder structure like this:
project/
├── modules/
│   └── tlc5947-rgb-micropython/
│       ├── ...
│       └── micropython.mk
└── micropython/
    ├── ports/
    │   ├── ...
    │   └── stm32/
    └── ...
But I don't have anything like that. Mine is:
project/
|_ .vscode/
| |_ ...
|_ lib/
|_ code.py
|_ i2c_display.py
|_ tlc5947_ME.py
|_ .picowgo # Used by the Pico-W-Go VS Code extension to allow Pico programming in VS Code

Via Awesome MicroPython, I found https://gitlab.com/peterzuger/tlc5947-rgb-micropython - it looks fairly up-to-date.
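If the native C module route is more than you need, here is a rough, untested sketch of the same write sequence done with the Pico's hardware SPI peripheral instead of bit-banging. It is only an illustration, not Adafruit's driver: the class and method names are made up for this sketch, GP2/GP3 are assumed to be SPI0 SCK/TX (matching the pin numbers in the question), and the byte packing follows the 24-channel, 12-bit, MSB-first format of the C++ library.

from machine import Pin, SPI

class TLC5947_SPI:  # hypothetical name, for illustration only
    def __init__(self, spi_id=0, sck=2, mosi=3, latch=5, numdrivers=1):
        self.numdrivers = numdrivers
        self.latch = Pin(latch, Pin.OUT, value=0)
        # SPI mode 0 (clock idle low, data clocked on the rising edge) matches
        # the bit-bang sequence in the question.
        self.spi = SPI(spi_id, baudrate=1000000, polarity=0, phase=0,
                       sck=Pin(sck), mosi=Pin(mosi))
        self.pwmbuffer = [0] * (24 * numdrivers)  # one 12-bit value per channel

    def set_pwm(self, chan, pwm):
        self.pwmbuffer[chan] = min(max(pwm, 0), 4095)

    def write(self):
        # Pack the 12-bit values, last channel first, MSB first, into whole bytes
        # (24 * 12 = 288 bits = 36 bytes per driver, so nothing is left over).
        out = bytearray()
        acc = 0
        bits = 0
        for c in range(24 * self.numdrivers - 1, -1, -1):
            acc = (acc << 12) | self.pwmbuffer[c]
            bits += 12
            while bits >= 8:
                bits -= 8
                out.append((acc >> bits) & 0xFF)
        self.latch.low()
        self.spi.write(out)
        self.latch.high()  # pulse LAT to copy the shift register to the outputs
        self.latch.low()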

Related

What does mpn_invert_3by2 in mini-gmp do?

I really wonder about the answer to this question, so I used Python to calculate:

def inv(a):
    return ((1 << 96) - 1) // (a << 32)

Why is Python's result different from mpn_invert_limb's?
/* The 3/2 inverse is defined as
m = floor( (B^3-1) / (B u1 + u0)) - B
*/
B should be 2^32
And what is the use of mpn_invert_limb?
Python code:
def inv(a):
    return ((1 << 96) - 1) // (a << 32)

a = 165536
b = inv(a)
print(b & (2 ** 32 - 1))
C code:
int main()
{
    mp_limb_t a = 16636;
    mp_limb_t b;
    b = mpn_invert_limb(a);
    printf("a = %u, b = %u\n", a, b);
    printf("a = %X, b = %X\n", a, b);
    return 0;
}
Python output:
3522819686
C output:
a = 165536, b = 3165475657
a = 286A0, b = BCAD5349
Calling mpn_invert_limb only makes sense when your input is full-sized (has its high bit set). If the input isn't full-sized, the quotient would be too big to fit in a single limb, whereas in the full-sized case it's only 1 bit too big, hence the subtraction of B in the definition.
I actually can't even run with your input of 16636; I get a division by 0 because this isn't even half a limb. Anyway, if I replace that value by a << 17, then I get a match between your Python and C. This shifting to make the top bit be set is what mini-gmp does in its usage of the function.
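To illustrate that in Python, here is a sketch under the same assumptions as the question: 32-bit limbs, so B = 2^32, and u0 = 0, in which case the 3/2 inverse floor((B^3-1)/(B u1 + u0)) - B reduces to the 2/1 inverse floor((B^2-1)/u1) - B that mpn_invert_limb computes.

B = 1 << 32

def inv(a):
    # floor((B^3 - 1) / (B*u1 + u0)) with u0 = 0; masking with B - 1 drops the "+ B" term
    return (((1 << 96) - 1) // (a << 32)) & (B - 1)

a = 16636
v = a << 17          # 16636 has 15 bits, so << 17 sets bit 31: v is now "full-sized"
assert v >= B // 2
print(hex(v), hex(inv(v)))  # should agree with mpn_invert_limb(v) in C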

CUDA profiling - high shared transactions/access but low local replay rate

After running the Visual Profiler, guided analysis tells me that I'm memory-bound, and in particular that my shared memory accesses are poorly aligned/accessed - basically, every line where I access shared memory is marked as ~2 transactions per access.
However, I couldn't figure out why that was the case (my shared memory is padded/strided so that there shouldn't be bank conflicts), so I went back and checked the shared replay metric - and that says that only 0.004% of shared accesses are replayed.
So, what's going on here, and what should I be looking at to speed up my kernel?
EDIT: Minimal reproduction:
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule
import pycuda.gpuarray as gp
mod = SourceModule("""
typedef unsigned char ubyte;

__global__ void identity(ubyte *arr, int stride)
{
    const int dim2 = 16;
    const int dim1 = 64;
    const int dim0 = 33;

    int shrstrd1 = dim2;
    int shrstrd0 = dim1 * dim2;
    __shared__ ubyte shrarr[dim0 * dim1 * dim2];

    auto shrget = [shrstrd0, shrstrd1, &shrarr](int i, int j, int k) -> int {
        return shrarr[i * shrstrd0 + j * shrstrd1 + k];
    };

    auto shrset = [shrstrd0, shrstrd1, &shrarr](int i, int j, int k, ubyte val) -> void {
        shrarr[i * shrstrd0 + j * shrstrd1 + k] = val;
    };

    int in_x = threadIdx.x;
    int in_y = threadIdx.y;

    shrset(in_y, in_x, 0, arr[in_y * stride + in_x]);
    arr[in_y * stride + in_x] = shrget(in_y, in_x, 0);
}
""",
options=['-std=c++11'])
#Equivalent to identity<<<1, dim3(32, 32, 1)>>>(arr, 64);
identity = mod.get_function("identity")
identity(gp.zeros((64, 64), np.ubyte), np.int32(64), block=(32, 32, 1))
2 transactions per access, shared replay overhead 0.083. Decreasing dim2 to 8 makes the problem go away, which I also don't understand.
Partial answer: I had a fundamental misunderstanding of how shared memory banks work (namely, I pictured each bank as a contiguous block of around a thousand bytes), and so didn't realize that addresses wrap around the banks, so that too much padding meant that the 32 row elements might end up using each bank more than once.
Presumably, though, that conflict just didn't come up every time; from the numbers, it came up about 85 times per block.
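As a rough sanity check of that wrap-around idea (my own back-of-the-envelope model, not the profiler's: 32 banks, 4 bytes per bank, addresses interleaved across banks), you can count how many distinct 4-byte words from one warp land in the same bank. In the kernel above a warp has a fixed in_y and in_x = 0..31, so within the warp the byte offsets differ only by in_x * dim2:

from collections import Counter

def busiest_bank(dim2, banks=32, bank_width=4):
    # Distinct 4-byte words touched by one warp, and how they map onto banks.
    words = {(in_x * dim2) // bank_width for in_x in range(32)}
    per_bank = Counter(word % banks for word in words)
    return max(per_bank.values())

for dim2 in (16, 8):
    print(dim2, busiest_bank(dim2))  # words from one warp landing in the busiest bank

Under this simplified model, a row stride of 16 bytes folds the 32 threads onto only 8 banks, while a stride of 8 bytes spreads them over 16. It doesn't reproduce the profiler's exact transaction counts, but it shows how extra padding can make a single row hit the same bank more than once.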
I'll leave this here for a day in hopes of a more complete explanation, then close and accept this answer.

Sample without replacement

How to sample without replacement in TensorFlow? Like numpy.random.choice(n, size=k, replace=False) for some very large integer n (e.g. 100k-100M), and smaller k (e.g. 100-10k).
Also, I want it to be efficient and on the GPU, so other solutions like this with tf.py_func are not really an option for me. Anything which would use tf.range(n) or so is also not an option because n could be very large.
This is one way:
n = ...
sample_size = ...
idx = tf.random_shuffle(tf.range(n))[:sample_size]
EDIT:
I had posted the answer below but then read the last line of your post. I don't think there is a good way to do it if you absolutely cannot produce a tensor with size O(n) (numpy.random.choice with replace=False is also implemented as a slice of a permutation). You could resort to a tf.while_loop until you have unique indices:
n = ...
sample_size = ...

idx = tf.random_uniform([sample_size], maxval=n, dtype=tf.int64)
idx = tf.while_loop(
    lambda idx: tf.size(idx) > tf.size(tf.unique(idx)[0]),  # resample while there are repeats
    lambda idx: tf.random_uniform([sample_size], maxval=n, dtype=tf.int64),
    [idx])
EDIT 2:
About the average number of iterations in the previous method. If we call n the number of possible values and k the length of the desired vector (with k ≤ n), the probability that an iteration is successful is:
p = product((n - (i - 1)) / n for i in 1 .. k)
Since each iteration can be considered a Bernoulli trial, the average number of trials until the first success is 1 / p (proof here). Here is a function that calculates the average number of trials in Python for some k and n values:
def avg_iter(k, n):
    if k > n or n <= 0 or k < 0:
        raise ValueError()
    avg_it = 1.0
    for p in (float(n) / (n - i) for i in range(k)):
        avg_it *= p
    return avg_it
And here are some results:
+-------+------+----------+
| n | k | Avg iter |
+-------+------+----------+
| 10 | 5 | 3.3 |
| 100 | 10 | 1.6 |
| 1000 | 10 | 1.1 |
| 1000 | 100 | 167.8 |
| 10000 | 10 | 1.0 |
| 10000 | 100 | 1.6 |
| 10000 | 1000 | 2.9e+22 |
+-------+------+----------+
You can see it varies wildly depending on the parameters.
It is possible, though, to construct a vector in a fixed number of steps, although the only algorithm I can think of is O(k²). In pure Python it goes like this:
import random

def sample_wo_replacement(n, k):
    sample = [0] * k
    for i in range(k):
        # i values have already been drawn, so pick from a range of size n - i
        sample[i] = random.randint(0, n - 1 - i)
    for i, v in reversed(list(enumerate(sample))):
        for p in reversed(sample[:i]):
            if v >= p:
                v += 1
        sample[i] = v
    return sample

random.seed(100)
print(sample_wo_replacement(10, 5))
# [2, 8, 9, 7, 1]
print(sample_wo_replacement(10, 10))
# [6, 5, 8, 4, 0, 9, 1, 2, 7, 3]
This is a possible way to do it in TensorFlow (not sure if the best one):
import tensorflow as tf

def sample_wo_replacement_tf(n, k):
    # First loop
    sample = tf.constant([], dtype=tf.int64)
    i = 0
    sample, _ = tf.while_loop(
        lambda sample, i: i < k,
        # This is ugly but I did not want to define more functions
        lambda sample, i: (tf.concat([sample,
                                      tf.random_uniform([1], maxval=tf.cast(n - tf.shape(sample)[0], tf.int64), dtype=tf.int64)],
                                     axis=0),
                           i + 1),
        [sample, i], shape_invariants=[tf.TensorShape((None,)), tf.TensorShape(())])
    # Second loop
    def inner_loop(sample, i):
        sample_size = tf.shape(sample)[0]
        v = sample[i]
        j = i - 1
        v, _ = tf.while_loop(
            lambda v, j: j >= 0,
            lambda v, j: (tf.cond(v >= sample[j], lambda: v + 1, lambda: v), j - 1),
            [v, j])
        return (tf.where(tf.equal(tf.range(sample_size), i), tf.tile([v], (sample_size,)), sample), i - 1)
    i = tf.shape(sample)[0] - 1
    sample, _ = tf.while_loop(lambda sample, i: i >= 0, inner_loop, [sample, i])
    return sample
And an example:
with tf.Graph().as_default(), tf.Session() as sess:
    tf.set_random_seed(100)
    sample = sample_wo_replacement_tf(10, 5)
    for i in range(10):
        print(sess.run(sample))
# [3 0 6 8 4]
# [5 4 8 9 3]
# [1 4 0 6 8]
# [8 9 5 6 7]
# [7 5 0 2 4]
# [8 4 5 3 7]
# [0 5 7 4 3]
# [2 0 3 8 6]
# [3 4 8 5 1]
# [5 7 0 2 9]
This is quite intensive on tf.while_loops, though, which are well known not to be particularly fast in TensorFlow, so I wouldn't know how fast you can really get with this method without some kind of benchmarking.
EDIT 4:
One last possible method. You can divide the range of possible values (0 to n) in "chunks" of size c and pick a random amount of numbers from each chunk, then shuffle everything. The amount of memory that you use is limited by c, and you don't need nested loops. If n is divisible by c, then you should get about a perfect random distribution, otherwise values in the last "short" chunk would receive some extra probability (this may be negligible depending on the case). Here is a NumPy implementation. It is somewhat long to account for different corner cases and pitfalls, but if c ≥ k and n mod c = 0 several parts get simplified.
import numpy as np

def sample_chunked(n, k, chunk=None):
    chunk = chunk or n
    last_chunk = chunk
    parts = n // chunk
    # Distribute k among chunks
    max_p = min(float(chunk) / k, 1.0)
    max_p_last = max_p
    if n % chunk != 0:
        parts += 1
        last_chunk = n % chunk
        max_p_last = min(float(last_chunk) / k, 1.0)
    p = np.full(parts, 2)
    # Iterate until a valid distribution is found
    while not np.isclose(np.sum(p), 1) or np.any(p > max_p) or p[-1] > max_p_last:
        p = np.random.uniform(size=parts)
        p /= np.sum(p)
    dist = (k * p).astype(np.int64)
    sample_size = np.sum(dist)
    # Account for rounding errors
    while sample_size < k:
        i = np.random.randint(len(dist))
        while (dist[i] >= chunk) or (i == parts - 1 and dist[i] >= last_chunk):
            i = np.random.randint(len(dist))
        dist[i] += 1
        sample_size += 1
    while sample_size > k:
        i = np.random.randint(len(dist))
        while dist[i] == 0:
            i = np.random.randint(len(dist))
        dist[i] -= 1
        sample_size -= 1
    assert sample_size == k
    # Generate sample parts
    sample_parts = []
    for i, v in enumerate(np.nditer(dist)):
        if v <= 0:
            continue
        c = chunk if i < parts - 1 else last_chunk
        base = chunk * i
        sample_parts.append(base + np.random.choice(c, v, replace=False))
    sample = np.concatenate(sample_parts, axis=0)
    np.random.shuffle(sample)
    return sample

np.random.seed(100)
print(sample_chunked(15, 5, 4))
# [ 8 9 12 13 3]
A quick benchmark of sample_chunked(100000000, 100000, 100000) takes about 3.1 seconds on my computer, while I haven't been able to run the previous algorithm (the sample_wo_replacement function above) to completion with the same parameters. It should be possible to implement it in TensorFlow, maybe using tf.TensorArray, although it would require significant effort to get it exactly right.
Use the Gumbel-max trick here: https://github.com/tensorflow/tensorflow/issues/9260

z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))
_, indices = tf.nn.top_k(logits + z, K)

indices are what you want. This trick is so easy!
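For the uniform case in the question (no weights), a minimal sketch of the same trick, assuming the TF 1.x API names used in the snippet above; logits of all zeros gives every index equal weight:

import tensorflow as tf

n, k = 1000000, 100
logits = tf.zeros([n])                                            # equal weight for every index
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits), 0, 1)))   # Gumbel noise
_, indices = tf.nn.top_k(logits + z, k)                           # k distinct indices, i.e. without replacement

with tf.Session() as sess:
    print(sess.run(indices))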
The following works fairly fast on the GPU, and I did not encounter memory issues when using n~100M and k~10k (using NVIDIA GeForce GTX 1080 Ti):
def random_choice_without_replacement(n, k):
    """equivalent to 'numpy.random.choice(n, size=k, replace=False)'"""
    return tf.math.top_k(tf.random.uniform(shape=[n]), k, sorted=False).indices
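A quick usage sketch (my own example, assuming TF 1.x graph mode; under eager execution the call returns the indices directly):

import tensorflow as tf

idx_op = random_choice_without_replacement(100000000, 10000)
with tf.Session() as sess:
    idx = sess.run(idx_op)
print(idx.shape)  # (10000,)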

rjags error Invalid vector argument to ilogit

I'd like to compare a betareg regression vs. the same regression using rjags
library(betareg)

d = data.frame(p = sample(c(.1,.2,.3,.4), 100, replace = TRUE),
               id = seq(1,100,1))

# I am looking to reproduce this regression with jags
b = betareg(p ~ id, data = d,
            link = c("logit"), link.phi = NULL, type = c("ML"))
summary(b)
Below I am trying to do the same regression with rjags
#install.packages("rjags")
library(rjags)
jags_str = "
model {
  #model
  y ~ dbeta(alpha, beta)
  alpha <- mu * phi
  beta <- (1-mu) * phi
  logit(mu) <- a + b*id
  #priors
  a ~ dnorm(0, .5)
  b ~ dnorm(0, .5)
  t0 ~ dnorm(0, .5)
  phi <- exp(t0)
}"
id = d$id
y = d$p
model <- jags.model(textConnection(jags_str),
                    data = list(y = y, id = id))

update(model, 10000, progress.bar = "none")  # Burn-in for 10000 samples

samp <- coda.samples(model,
                     variable.names = c("mu"),
                     n.iter = 20000, progress.bar = "none")
summary(samp)
plot(samp)
I get an error on this line
model <- jags.model(textConnection(jags_str),
                    data = list(y = y, id = id))
Error in jags.model(textConnection(jags_str), data = list(y = y, id = id)) :
RUNTIME ERROR:
Invalid vector argument to ilogit
Can you advise
(1) how to fix the error
(2) how to set priors for the beta regression
Thank you.
This error occurs because you have supplied the id vector to the scalar function logit. In JAGS, inverse link functions cannot be vectorized, so you need to use a for loop to go through each element of id. To do this, I would probably add an additional element to your data list that denotes how long id is.
d = data.frame(p = sample(c(.1,.2,.3,.4), 100, replace = TRUE),
               id = seq(1,100,1),
               len_id = length(seq(1,100,1)))
From there you just need to make a small edit to your JAGS code.
for(i in 1:len_id){
  y[i] ~ dbeta(alpha[i], beta[i])
  alpha[i] <- mu[i] * phi
  beta[i] <- (1-mu[i]) * phi
  logit(mu[i]) <- a + b*id[i]
}
However, if you track mu it is going to be a matrix that is 20000 (# of iterations) by 100 (length of id). You are likely more interested in the actual parameters (a, b, and phi).

julia-lang define new operator |= or |>=

a += 1 is equivalent to a = a + 1
I would like to have a |>= √ or a |= √ equivalent to a = a |> √. Can I define these new operators?
The set of updating operators is hardcoded and currently limited to:
+= -= *= /= //= \= ^= ÷= %= <<= >>= >>>= |= &= ⊻= $=
The parser will automatically expand all of these to a = a op b. All of these operators, however, have well-defined meanings in Base and a different precedence than |>. You could shadow one of these definitions with your own meaning, but it'll be very surprising for anyone else who uses your code… and you yourself could be surprised by the precedence at times.
julia> const | = |>
|> (generic function with 1 method)
julia> a = 2
2
julia> a |= √
1.4142135623730951
I suppose you could make it a little better by only overriding the behavior for function arguments:
julia> >>>(x, y::Function) = y(x)
       >>>(x, y) = Base.:>>>(x, y)
>>> (generic function with 2 methods)
julia> a = 2
       a >>>= √
1.4142135623730951
julia> 0xf3 >>> 3 # The standard unsigned bit shift
0x1e