I'm trying to create a TensorFlow (2.0) variable like this:
c_init = tf.zeros_initializer()
c = tf.Variable(initial_value=c_init(shape=shape, dtype="float32"), trainable=True)
The shape variable is this:
shape=(49, 52, 26, 49, 6, 3, 31, 11, 24, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I'm getting this error message:
InvalidArgumentError: Too many dimensions [Op:Fill] name: zeros/
I did not know there was a limit on the number of dimensions, and I did not see anything about it in the TensorFlow documentation. Is there any way to get around this limitation?
The max number of dimensions is 254.
Get around this limitation?
If you don't mind, I do have to ask you a question:
Are you sure you have that many dimensions in your problem? I have seen people mistakenly use the size of a dimension as the number of dimensions. Are you sure you have so many dimensions of size 1?
Let's not forget that there is no inherent need to represent anything as tensors. We could solve any problem without a multi-dimensional data type. The reason this representation is used is that it allows certain linear algebra operations to be applied, and those are much faster than regular loops in more traditional code.
So, yes, you can get around this limitation, but you will need to do some soul-searching and figure out what kind of math operations you are planning to apply to this humongous tensor.
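If most of those dimensions have size 1, one hedged workaround is to create the variable with only the meaningful dimensions and add singleton axes back with a reshape only where an op actually needs them. A minimal sketch (the trailing-ones count below is illustrative, not the exact count from the question):

import tensorflow as tf

# Illustrative shape: 9 meaningful dimensions plus hundreds of size-1 ones.
full_shape = (49, 52, 26, 49, 6, 3, 31, 11, 24) + (1,) * 300  # >254 dims: fails
compact_shape = tuple(d for d in full_shape if d > 1)          # 9 dims: fine

# Size-1 dimensions add no elements, so the compact variable holds
# exactly the same data as the full-shape one would.
c_init = tf.zeros_initializer()
c = tf.Variable(initial_value=c_init(shape=compact_shape, dtype="float32"),
                trainable=True)

# If some op really needs a few singleton axes, expand just those on demand:
c_view = tf.reshape(c, compact_shape + (1, 1, 1))  # still well below 254 dims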
The following command
Fight.last.fight_logs.where(item_id: nil)
generates this SQL:
Fight Load (0.3ms) SELECT "fights".* FROM "fights" ORDER BY "fights"."id" DESC LIMIT $1 [["LIMIT", 1]]
FightLog Load (0.2ms) SELECT "fight_logs".* FROM "fight_logs" WHERE "fight_logs"."fight_id" = $1 AND "fight_logs"."item_id" IS NULL LIMIT $2 [["fight_id", 27], ["LIMIT", 11]]
and returns:
#<ActiveRecord::AssociationRelation [#<FightLog id: 30, fight_id: 27, attack: 0, block: 0, item_id: nil, user_id: 1, damage: 11.0, created_at: "2017-11-02 16:20:55", updated_at: "2017-11-02 16:20:57">, #<FightLog id: 31, fight_id: 27, attack: 0, block: 0, item_id: nil, user_id: 20, damage: 3.0, created_at: "2017-11-02 16:20:57", updated_at: "2017-11-02 16:20:57">, #<FightLog id: 33, fight_id: 27, attack: 0, block: 0, item_id: nil, user_id: 1, damage: 1.0, created_at: "2017-11-02 16:21:40", updated_at: "2017-11-02 16:21:40">, #<FightLog id: 32, fight_id: 27, attack: 0, block: 0, item_id: nil, user_id: 20, damage: 7.0, created_at: "2017-11-02 16:21:33", updated_at: "2017-11-02 16:21:40">, #<FightLog id: 34, fight_id: 27, attack: 0, block: 0, item_id: nil, user_id: 1, damage: 12.0, created_at: "2017-11-02 16:21:47", updated_at: "2017-11-02 16:21:48">, #<FightLog id: 35, fight_id: 27, attack: 0, block: 0, item_id: nil, user_id: 20, damage: 14.0, created_at: "2017-11-02 16:21:48", updated_at: "2017-11-02 16:21:48">]>
but
Fight.last.fight_logs.where.not(item_id: 1)
generates this SQL:
Fight Load (1.0ms) SELECT "fights".* FROM "fights" ORDER BY "fights"."id" DESC LIMIT $1 [["LIMIT", 1]]
FightLog Load (0.8ms) SELECT "fight_logs".* FROM "fight_logs" WHERE "fight_logs"."fight_id" = $1 AND ("fight_logs"."item_id" != $2) LIMIT $3 [["fight_id", 27], ["item_id", 1], ["LIMIT", 11]]
and returns:
#<ActiveRecord::AssociationRelation []>
How is this possible? What am I doing wrong?
You should account for NULL values in your query, since you have them in your database:
Fight.last.fight_logs.where('item_id != ? OR item_id IS NULL', 1)
This is just how SQL works:
select 1 != NULL;
+-----------+
| 1 != NULL |
+-----------+
| NULL |
+-----------+
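If you want to reproduce this outside Rails, here is a small sandbox check; it uses Python's built-in sqlite3 module purely for illustration, and the table and values are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fight_logs (item_id INTEGER)")
conn.executemany("INSERT INTO fight_logs VALUES (?)", [(None,), (None,), (2,)])

# item_id != 1 evaluates to NULL (not TRUE) for rows where item_id is NULL,
# so those rows are filtered out:
print(conn.execute(
    "SELECT COUNT(*) FROM fight_logs WHERE item_id != 1").fetchone())  # (1,)

# Adding the explicit NULL check, as in the query above, keeps them:
print(conn.execute(
    "SELECT COUNT(*) FROM fight_logs "
    "WHERE item_id != 1 OR item_id IS NULL").fetchone())               # (3,)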
You can look at this answer to clarify the issue.
Also, I would recommend avoiding default NULL values in your database; there is a nice answer about that as well. You can simply use default: 0, null: false in your case.
I am trying to compute the matrix z (defined below) in Python with NumPy.
Here's my current solution (using one for loop):
z = np.zeros((n, k))
for i in range(n):
    v = pi * (1 / math.factorial(x[i])) * np.exp(-1 * lamb) * (lamb ** x[i])
    numerator = np.sum(v)
    c = v / numerator
    z[i, :] = c
return z
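(For reference, reconstructing the formula from the code: each row of z holds the normalized mixture responsibilities

$$z_{ik} = \frac{\pi_k \, e^{-\lambda_k} \lambda_k^{x_i} / x_i!}{\sum_{j} \pi_j \, e^{-\lambda_j} \lambda_j^{x_i} / x_i!},$$

i.e. the E-step of EM for a Poisson mixture.)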
Is it possible to completely vectorize this computation? I need to do this computation for thousands of iterations, and matrix operations in NumPy are much faster than huge for loops.
Here is a vectorized version of E. It replaces the for-loop and scalar arithmetic with NumPy broadcasting and array-based arithmetic:
def alt_E(x):
    x = x[:, None]
    z = pi * (np.exp(-lamb) * (lamb**x)) / special.factorial(x)
    denom = z.sum(axis=1)[:, None]
    z /= denom
    return z
I ran em.py to get a sense of the typical sizes of x, lamb, pi, n and k. On data of this size,
alt_E is about 120x faster than E:
In [32]: %timeit E(x)
100 loops, best of 3: 11.5 ms per loop
In [33]: %timeit alt_E(x)
10000 loops, best of 3: 94.7 µs per loop
In [34]: 11500/94.7
Out[34]: 121.43611404435057
This is the setup I used for the benchmark:
import math
import numpy as np
import scipy.special as special
def alt_E(x):
    x = x[:, None]
    z = pi * (np.exp(-lamb) * (lamb**x)) / special.factorial(x)
    denom = z.sum(axis=1)[:, None]
    z /= denom
    return z

def E(x):
    z = np.zeros((n, k))
    for i in range(n):
        v = pi * (1 / math.factorial(x[i])) * \
            np.exp(-1 * lamb) * (lamb ** x[i])
        numerator = np.sum(v)
        c = v / numerator
        z[i, :] = c
    return z
n = 576
k = 2
x = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5])
lamb = np.array([ 0.84835141, 1.04025989])
pi = np.array([ 0.5806958, 0.4193042])
assert np.allclose(alt_E(x), E(x))
By the way, E could also be calculated using scipy.stats.poisson:
import scipy.stats as stats
pois = stats.poisson(mu=lamb)
def alt_E2(x):
    z = pi * pois.pmf(x[:, None])
    denom = z.sum(axis=1)[:, None]
    z /= denom
    return z
but this does not turn out to be faster, at least for arrays of this length:
In [33]: %timeit alt_E(x)
10000 loops, best of 3: 94.7 µs per loop
In [102]: %timeit alt_E2(x)
1000 loops, best of 3: 278 µs per loop
For larger x, alt_E2 is faster:
In [104]: x = np.random.random(10000)
In [106]: %timeit alt_E(x)
100 loops, best of 3: 2.18 ms per loop
In [105]: %timeit alt_E2(x)
1000 loops, best of 3: 643 µs per loop
I'm learning Bayesian data analysis, and I'm trying to replicate tutorials by Trond Reitan in Stan; they were originally written for WinBUGS.
Specifically, I have the following data and model:
weta.windata<-list(numdet=c(0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 1, 1, 2, 0, 3, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 1, 0, 3, 1, 1, 3, 1, 1, 2, 0, 2, 1, 1, 1, 1,0, 0, 0, 2, 0, 2, 4, 3, 1, 0, 0, 2, 0, 2, 2, 1, 0, 0, 1),
numvisit=c(4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 3, 3, 4, 4, 4, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4,4, 4, 4, 4, 4, 4, 4 ,4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3),
nsites=72)
model_string1="
data{
  int nsites;
  real<lower=0> numdet[nsites];
  real<lower=0> numvisit[nsites];
}
parameters{
  real<lower=0> p;
  real<lower=0> psi;
  int<lower=0> z[nsites];
}
model{
  p~uniform(0,1);
  psi~uniform(0,1);
  for(i in 1:nsites){
    z[i]~bernoulli(psi);
    p.site[i]~z[i]*p;
    numdet[i]~binomial(numvisit[i],p.site[i]);
  }
}
"
mcmc_samples <- stan(model_code=model_string1,
                     data=weta.windata,
                     pars=c("p","psi","z"),
                     chains=3, iter=30000, warmup=10000)
The context is detecting wetas in fields. There are 72 sites; researchers visited each site several times (numvisit) and recorded the number of visits on which wetas were found (numdet).
There is a latent variable z describing whether a site has wetas or not, psi is the probability that a site has wetas, and p is the detection rate.
The problem I have is that I cannot declare z as an integer array:
parameters or transformed parameters cannot be integer or integer array; found declared type int, parameter name=z
Problem with declaration.
However, if I set z to be real, that is,
real<lower=0> z[nsites];
the problem becomes that a real variable cannot be given a bernoulli distribution:
No matches for:
real ~ bernoulli(real)
I'm very new to stan. Forgive me if this question is very silly.
Stan doesn't support integer parameters or hacks to let you pretend real variables are integers. What it does support is marginalizing the integer variables out of the density. You can then reconstruct them with much more efficiency and much higher tail resolution.
The chapter in the manual on latent discrete parameters is the place to start. It includes an implementation of the CJS population models, which may be familiar. I implemented the Dorazio and Royle occupancy models as a case study, and Hiroki Ito translated the entire Kery and Schaub book to Stan. They're all linked under users >> documentation on the web site.
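To make the marginalization concrete for this occupancy model: summing the discrete z[i] out of the joint density leaves each site with the mixture likelihood psi * Binomial(y_i | n_i, p) + (1 - psi) * [y_i = 0], since an unoccupied site can only produce zero detections. A minimal numerical sketch of that marginal log-likelihood, in Python rather than Stan and with made-up values rather than fitted estimates:

import numpy as np
from scipy.stats import binom

def marginal_loglik(p, psi, numdet, numvisit):
    # Occupied sites contribute psi * Binomial(y | n, p); unoccupied sites
    # contribute (1 - psi) and are consistent only with zero detections.
    occupied = psi * binom.pmf(numdet, numvisit, p)
    unoccupied = (1 - psi) * (numdet == 0)
    return np.sum(np.log(occupied + unoccupied))

# Illustrative values only:
numdet = np.array([0, 0, 1, 2, 0, 3])
numvisit = np.array([4, 4, 4, 4, 3, 4])
print(marginal_loglik(p=0.3, psi=0.6, numdet=numdet, numvisit=numvisit))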
I ran into this mysterious error with ulam while working through practice problems in Statistical Rethinking. When you're constructing a list to pass to the data argument of ulam, be sure to use = rather than <- for assignment. If you don't, the list you construct won't have named components, and a missing name produces this error.
I'm building a graph that allows edges to be toggled on and off, so I need to be able to add and remove them repeatedly. I have noticed an error in the degrees of nodes attached to toggled edges. I've included an example.
My code:
allElements = cy.elements();
....
var allEdges = allElements.filter('edge');
var allNodes = allElements.filter('node');

for (var i = 0; i < 5; i++) {
    // DELETE
    var printThis = [];
    allNodes.filter(function(i, ele){
        printThis.push(ele.degree());
    });
    console.log(printThis);

    cy.remove(allEdges);
    cy.add(allEdges);
}
Returns:
[1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 6, 1, 2, 1, 1, 1, 36, 8, 3, 4, 4, 2, 1, 1, 1, 1, 1, 1, 2]
[1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 6, 1, 2, 1, 1, 1, 36, 8, 3, 4, 4, 2, 1, 1, 1, 1, 1, 1, 2]
[2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 12, 2, 4, 2, 2, 2, 72, 16, 6, 8, 8, 4, 2, 2, 2, 2, 2, 2, 4]
[3, 3, 3, 3, 3, 9, 3, 3, 3, 3, 3, 18, 3, 6, 3, 3, 3, 108, 24, 9, 12, 12, 6, 3, 3, 3, 3, 3, 3, 6]
[4, 4, 4, 4, 4, 12, 4, 4, 4, 4, 4, 24, 4, 8, 4, 4, 4, 144, 32, 12, 16, 16, 8, 4, 4, 4, 4, 4, 4, 8]
This shows that removing the edges after the first time doesn't decrease the degree of the nodes they're attached to.
How can I have cytoscape return the correct degree?
Thank you for notifying us of the issue. We will get a fix in for 2.0.3 -M
https://github.com/cytoscape/cytoscape.js/issues/360