Is it possible to pass custom linear operators (like CUSP's http://cusplibrary.github.io/classcusp_1_1linear__operator.html) to ViennaCL's solvers?
Thanks.
Yes, it is possible for iterative solvers. An example can be found in
examples/tutorial/matrix-free.cpp or here:
http://viennacl.sourceforge.net/doc/matrix-free_8cpp-example.html
I'm trying to learn how to use XLA for my models, and I'm looking at the official documentation here: https://www.tensorflow.org/xla#enable_xla_for_tensorflow_models. It documents two methods to enable XLA: 1) explicit compilation, by decorating your training function with @tf.function(jit_compile=True); 2) auto-clustering, by setting environment variables.
Since I'm using TensorFlow 1.15, not 2.x, I think the second approach is the same as using this statement:
config.graph_options.optimizer_options.global_jit_level = (
tf.OptimizerOptions.ON_1)
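For context, here is a minimal sketch of where that option sits in a TF 1.15 program (the graph contents are placeholders; only the config lines matter):

import tensorflow as tf  # TensorFlow 1.15

# Turn on auto-clustering (global JIT) for everything run under this session.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

a = tf.random.normal([1024, 1024])
b = tf.matmul(a, a)  # eligible for XLA clustering

with tf.Session(config=config) as sess:
    sess.run(b)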
You can also find info here: https://www.tensorflow.org/xla/tutorials/autoclustering_xla. It seems this is what they use in TF 2.x:
tf.config.optimizer.set_jit(True) # Enable XLA.
I think they are the same; correct me if I'm wrong.
OK, so for the first approach, I think the TF 1.15 equivalent is using
tf.xla.experimental.compile(computation)
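For reference, a minimal sketch of how that wrapper is typically called in TF 1.15 (the computation here is made up; compile returns the computation's outputs, which you then run in a session):

import tensorflow as tf  # TensorFlow 1.15

def computation(x, y):
    # The whole function body is compiled as a single XLA computation.
    return tf.matmul(x, y) + y

x = tf.random.normal([4, 4])
y = tf.random.normal([4, 4])
outputs = tf.xla.experimental.compile(computation, inputs=[x, y])

with tf.Session() as sess:
    print(sess.run(outputs))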
So, my question is: if I have used tf.xla.experimental.compile(computation) to compile my whole training function, is that equivalent to using
config.graph_options.optimizer_options.global_jit_level = (
tf.OptimizerOptions.ON_1)
? Does anybody know? Much appreciated.
According to this video from the TF team (2021), auto-clustering will automatically look for places to optimize. Nevertheless, due to its unpredictable behaviour, they recommend decorating tf.functions with @tf.function(jit_compile=True) over using out-of-the-box clustering.
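For completeness, the recommended TF 2.x form looks like this (jit_compile requires TF 2.5+; the function body is just an illustration):

import tensorflow as tf  # TensorFlow 2.5+

@tf.function(jit_compile=True)  # explicit XLA compilation of this function
def train_step(x, y):
    return tf.matmul(x, y) + y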
In case you want to use auto-clustering anyway, note that set_jit(True) is being deprecated; the preferred call now is tf.config.optimizer.set_jit('autoclustering').
How do you write a matrix multiplication function? It should take two matrices and output one.
The documentation on assemblyscript.org is pretty short. Float64Array is a valid type there, but that's 1D, so...
AssemblyScript's stdlib is modeled after JavaScript's stdlib, so there are no matrix operations. However, here is a library that might work for you: https://github.com/JustinParratt/big-mult
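If you only need something hand-rolled, the usual approach is to store each matrix in a flat, row-major Float64Array and index it as a[i * cols + k]. Here is a minimal sketch of that indexing, written in Python for illustration; the same loops carry over directly to AssemblyScript:

# C (n x p) = A (n x m) * B (m x p), all stored as flat row-major arrays.
def matmul(a, b, n, m, p):
    c = [0.0] * (n * p)
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i * m + k] * b[k * p + j]
            c[i * p + j] = s
    return c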
Let's say I'm optimizing Ax = b, where A is a matrix and x, b are vectors.
My question: is it possible to optimize only a subset of A? Specifically, a patch of A.
In other words, I would like to keep a subset of the parameters in A constant.
Is this possible in TensorFlow?
I thought about using tf.slice(), but it creates a new reference to the variable.
Thanks!
Unless I've misunderstood your question (or there's missing context), just define the parts of A you want to optimise over using tf.Variable(), and define the parts you don't using tf.constant().
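A minimal sketch of that idea in TF 2.x eager style (the shapes and the choice of patch, just the first row of A here, are made up):

import tensorflow as tf  # TF 2.x

a_patch = tf.Variable([[0.1, 0.2, 0.3]])                     # trainable part of A
a_fixed = tf.constant([[0.5, -1.0, 2.0], [1.5, 0.0, -0.5]])  # frozen part of A
x = tf.constant([[1.0], [2.0], [3.0]])
b = tf.constant([[1.0], [0.0], [2.0]])

opt = tf.keras.optimizers.SGD(learning_rate=0.05)
for _ in range(200):
    with tf.GradientTape() as tape:
        A = tf.concat([a_patch, a_fixed], axis=0)   # rebuild A each step
        loss = tf.reduce_sum(tf.square(tf.matmul(A, x) - b))
    grads = tape.gradient(loss, [a_patch])          # only the patch is updated
    opt.apply_gradients(zip(grads, [a_patch]))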
You can either use tf.stop_gradient or the var_list parameter of your optimizer.
See this answer for more details: https://stackoverflow.com/a/34478044/4554460
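For instance, a sketch of the tf.stop_gradient route (TF 2.x; the mask marking which entries of A are trainable is arbitrary here):

import tensorflow as tf

A_var = tf.Variable(tf.random.normal([4, 4]))
mask = tf.constant([[1.0] * 4] * 2 + [[0.0] * 4] * 2)  # rows 0-1 trainable, rows 2-3 frozen

x = tf.random.normal([4, 1])
b = tf.random.normal([4, 1])

with tf.GradientTape() as tape:
    # Gradients flow only through the masked (trainable) entries of A.
    A = mask * A_var + (1.0 - mask) * tf.stop_gradient(A_var)
    loss = tf.reduce_sum(tf.square(tf.matmul(A, x) - b))

grad = tape.gradient(loss, A_var)  # zero in the frozen rows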
I need a discrete distribution in TensorFlow.
But when I search the TensorFlow documentation, I can only find the normal distribution and so on.
In Theano, I often used the theano.tensor.shared_randomstreams.RandomStreams.choice method to generate a discrete distribution.
I also Googled this problem and found tf.contrib.distributions.DiscreteDistribution, but that is an abstract class; I cannot use it directly.
So, here is the question: how do I implement a discrete distribution in TensorFlow?
Thanks for your help.
You can make your own discrete 0-1 variable with something like tf.random.uniform([]) > 0.5; this can easily be extended to other discrete distributions.
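A sketch of that extension (uniform draws bucketed by a cumulative distribution to get an arbitrary categorical; the probabilities are placeholders):

import tensorflow as tf

probs = tf.constant([0.2, 0.5, 0.3])
u = tf.random.uniform([10])           # 10 uniform draws in [0, 1)
cdf = tf.cumsum(probs)                # [0.2, 0.7, 1.0]
# Each sample is the number of CDF entries the draw has passed.
samples = tf.reduce_sum(tf.cast(u[:, None] >= cdf[None, :], tf.int32), axis=1)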
Maybe one of these fits the bill?
ds = tf.contrib.distributions
ds.Bernoulli
ds.Binomial
ds.Categorical
ds.Deterministic
ds.OneHotCategorical
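For example, a short sketch of drawing samples from one of these in TF 1.x (the probabilities are placeholders):

import tensorflow as tf

ds = tf.contrib.distributions
cat = ds.Categorical(probs=[0.2, 0.5, 0.3])  # discrete distribution over {0, 1, 2}
samples = cat.sample(10)

with tf.Session() as sess:
    print(sess.run(samples))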
I am new to the Semantic Web Rule Language (SWRL) and I am writing some rules in order to calculate probabilities for discrete and continuous distributions.
I know that with SWRL I can do subtraction, addition, multiplication and division.
But what about exponentiation, summation, and the calculation of mathematical functions? Is there a way to do this in SWRL?
Just an example to frame my question:
For the triangular distribution, for example, we only need basic arithmetic (subtractions and divisions), but for the beta distribution we need exponentiation and the evaluation of the beta function.
Is there a way to do this in SWRL?
Thanks
The standard describes what math functions should be available, and these include exponentiation:
8.2. Math Built-Ins
…
swrlb:pow
Satisfied iff the first argument is equal to the result of the second argument raised to the third argument power.
There's no built in for the Beta function, though. You'd need to look into the reasoner that you're using and see whether you can implement additional mathematical builtins.
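For concreteness, the beta density shows why swrlb:pow alone is not enough: its denominator is the beta function B(α, β), which is an integral rather than a finite arithmetic expression:

f(x; \alpha, \beta) = \frac{x^{\alpha-1} (1-x)^{\beta-1}}{B(\alpha, \beta)},
\qquad
B(\alpha, \beta) = \int_0^1 t^{\alpha-1} (1-t)^{\beta-1} \, dt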
summation, calculation of mathematical functions
For summations, you may find the aggregate functions in SPARQL useful, but only if the terms you need to sum are available individually. You won't easily be able to express arbitrary sums like ∑_{i=1}^{n} i². You might find support for extension functions in SPARQL implementations, too.