Kotlin: Check if a Double value is only between 0.0 and 1.0 stepped by 0.1

What would be a simple/elegant solution to check whether a double value is only in the set of
{.0, .1, .., .9, 1.0}
values?
Right now I am doing
setOf(.0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1.0)
and checking whether the set contains the Double value.
Is there a simpler/more elegant solution?

You can do it with sequences.
fun contains(d: Double) = d in generateSequence(0.0) { it + 0.1 }.takeWhile { it <= 1.0 }
If you want to make it shorter, add a step function like the one that exists for Int ranges:
infix fun ClosedRange<Double>.step(step: Double): Sequence<Double> =
    generateSequence(start) { it + step }.takeWhile { it <= endInclusive }
fun contains(d: Double) = d in 0.0..1.0 step 0.1
Edit
As mentioned in a comment, a simple in check doesn't work because of floating-point inaccuracies in Double arithmetic. Therefore you can add your own checking function:
val acceptableAccuracy = 1e-15
infix fun Double.nearlyIn(sequence: Sequence<Double>) =
    sequence.any { this in (it - acceptableAccuracy..it + acceptableAccuracy) }
Then you need a few changes in the code above:
fun contains(d: Double) = d nearlyIn (0.0..1.0 step 0.1)
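For example, a quick sanity check using the definitions above (expected results noted in the comments):
val steps = 0.0..1.0 step 0.1 // Sequence<Double> built by the infix step function above
println(0.3 nearlyIn steps)   // true, even though 0.1 + 0.1 + 0.1 != 0.3 exactly
println(0.35 nearlyIn steps)  // false
println(1.0 nearlyIn steps)   // true: the accumulated 0.9999999999999999 is within the tolerance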

Since you're only really worried about the tenths position, I'd just shift it once and check for 0..10:
fun Double.isSpecial() = (this * 10.0) in (0..10).map(Int::toDouble)
Testing with play.kotlinlang.org:
fun main() {
    listOf(0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0).forEach(::checkSpecial)
    listOf(0.01, 0.11, 0.22, 1.01).forEach(::checkSpecial)
}

fun checkSpecial(value: Double) {
    println("$value isSpecial = ${value.isSpecial()}")
}
Outputs:
0.0 isSpecial = true
0.1 isSpecial = true
0.2 isSpecial = true
0.3 isSpecial = true
0.4 isSpecial = true
0.5 isSpecial = true
0.6 isSpecial = true
0.7 isSpecial = true
0.8 isSpecial = true
0.9 isSpecial = true
1.0 isSpecial = true
0.01 isSpecial = false
0.11 isSpecial = false
0.22 isSpecial = false
1.01 isSpecial = false
If you're less worried about elegance and more about performance, you could just do:
fun Double.isSpecial() = when (this) {
    0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 -> true
    else -> false
}
Which avoids allocating any sets or ranges entirely. If the range isn't dynamic, I'd just go with this, personally.

This will do it if you want to check steps of 0.1 for any double: multiply by 10 and check whether the result is an integer.
fun isSpecial(v: Double): Boolean {
    val y = v * 10
    return y == y.toInt().toDouble()
}
Unless you explicitly only want 0.0..1.0?
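If you do want to restrict it to 0.0..1.0 as well, the two checks combine naturally. A minimal sketch (isTenthStepIn01 is a made-up name):
fun isTenthStepIn01(v: Double): Boolean {
    val y = v * 10
    // Require both the 0.0..1.0 range and a whole number after shifting by one decimal place.
    return v in 0.0..1.0 && y == y.toInt().toDouble()
}
For example, isTenthStepIn01(0.7) is true, while isTenthStepIn01(0.75) and isTenthStepIn01(1.2) are false.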

Related

How can I measure the height and width of a text in Jetpack Compose Canvas?

I'm using Jetpack Compose Canvas to draw a division circle. The min value of the division circle is 20, and the max value is 120.
So I wrote the Code A below, and I get the result Image A as expected, except for the labels.
From the Image A, I find the labels 0, 20, 40 are in a good position, and the labels 60, 80, 100, 120 are not.
1: It seems that I need to measure the height and width of a text, then adjust the position of the text. If so, how can I measure the height and width of a text?
2: Is there another way to place these texts in the appropriate position without measuring the height and width of the text?
Code A
@Composable
fun setCanvas(maxCountList: MaxCountList<Double>) {
    Canvas(
        modifier = Modifier
    ) {
        val axisPaint = Paint()
        val textPaint = TextPaint()
        drawIntoCanvas {
            val orig = MyPoint(size.width / 2, size.height / 2)
            val temp = min(size.height, size.width)
            val radius = temp / 2 - 10
            it.drawCircle(Offset(x = 0f.toX(orig), y = 0f.toY(orig)), radius, axisPaint)
            val lineOffset = 5.0f
            val lineLength = 20.0f
            val labelOffset = 10.0f
            val point1 = radius - lineOffset
            val point2 = radius - lineOffset - lineLength
            val point3 = radius - lineOffset - lineLength - labelOffset
            (0..6).forEach { i ->
                val radians = Math.toRadians(225 - i * 45.0)
                val x1 = point1 * cos(radians).toFloat()
                val x2 = point2 * cos(radians).toFloat()
                val x3 = point3 * cos(radians).toFloat()
                val y1 = point1 * sin(radians).toFloat()
                val y2 = point2 * sin(radians).toFloat()
                val y3 = point3 * sin(radians).toFloat()
                it.drawLine(
                    Offset(x = x1.toX(orig), y = y1.toY(orig)),
                    Offset(x = x2.toX(orig), y = y2.toY(orig)),
                    axisPaint
                )
                val label = (i * 20).toString()
                it.nativeCanvas.drawText(label, x3.toX(orig), y3.toY(orig), textPaint)
            }
        }
    }
}

// Convert X to the new coordinate system
fun Float.toX(originCoordinate: MyPoint): Float {
    return originCoordinate.x + this
}

// Convert Y to the new coordinate system
fun Float.toY(originCoordinate: MyPoint): Float {
    return originCoordinate.y - this
}

class MyPoint(val x: Float, val y: Float)
Image A
you can use TextMeasurer.measure in jetpack compose 1.3.0-alpha2 onwards
you can check samples here
https://android-review.googlesource.com/c/platform/frameworks/support/+/2135315/4/compose/foundation/foundation/integration-tests/foundation-demos/src/main/java/androidx/compose/foundation/demos/text/DrawTextDemo.kt
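As a rough illustration of that API, here is a minimal sketch of measuring a label and centering it on a point (CenteredLabel is a hypothetical composable; the overloads shown come from androidx.compose.ui.text as of 1.3.0 and may differ in other versions):
import androidx.compose.foundation.Canvas
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.text.AnnotatedString
import androidx.compose.ui.text.drawText
import androidx.compose.ui.text.rememberTextMeasurer

@Composable
fun CenteredLabel() {
    val textMeasurer = rememberTextMeasurer()
    Canvas(modifier = Modifier.fillMaxSize()) {
        val label = "60"
        // Measure first, then shift by half the measured size so the
        // text is centered on the target point instead of hanging off it.
        val layout = textMeasurer.measure(AnnotatedString(label))
        val target = Offset(size.width / 2, size.height / 2)
        drawText(
            textLayoutResult = layout,
            topLeft = Offset(
                target.x - layout.size.width / 2f,
                target.y - layout.size.height / 2f
            )
        )
    }
}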

Why does this 1-dimensional Perlin Noise Generator Return Values > 1?

For educational purposes I want to implement the 1-dimensional Perlin Noise algorithm in Kotlin. I familiarized myself with the algorithm here and here.
I think I understood the basic concept, however my implementation can return values greater than 1. I expect the result of the call perlin(x) to be in the range 0 to 1. I can't figure out where I'm mistaken, so maybe someone can point me in the right direction. For simplicity I use simple linear interpolation instead of smoothstep or other advanced techniques for now.
class PerlinNoiseGenerator(seed: Int, private val boundary: Int = 10) {
    private var random = Random(seed)
    private val noise = DoubleArray(boundary) {
        random.nextDouble()
    }

    fun perlin(x: Double, persistence: Double = 0.5, numberOfOctaves: Int = 8): Double {
        var total = 0.0
        for (i in 0 until numberOfOctaves) {
            val amplitude = persistence.pow(i) // height of the crests
            val frequency = 2.0.pow(i) // number of crests per unit distance
            val octave = amplitude * noise(x * frequency)
            total += octave
        }
        return total
    }

    private fun noise(t: Double): Double {
        val x = t.toInt()
        val x0 = x % boundary
        val x1 = if (x0 == boundary - 1) 0 else x0 + 1
        val between = t - x
        val y0 = noise[x0]
        val y1 = noise[x1]
        return lerp(y0, y1, between)
    }

    private fun lerp(a: Double, b: Double, alpha: Double): Double {
        return a + alpha * (b - a)
    }
}
For example, if you were to use these randomly generated noise values
private val noise = doubleArrayOf(0.77, 0.02, 0.63, 0.74, 0.49, 0.22, 0.19, 0.76, 0.16, 0.08)
you would end up with an image like this:
where the green line is the calculated Perlin noise of 8 octaves with a persistence of 0.5. As you can see, the sum of all octaves at x=0, for example, is greater than 1. (The blue line is the first octave noise(x) and the orange one is the second octave 0.5 * noise(2x).)
What am I doing wrong?
Thanks in advance.
Note: I'm aware that the Simplex Noise algorithm is the successor of Perlin Noise, however for educational purposes I want to implement Perlin Noise first. I'm also aware that my boundary should be set to something in the magnitude of 256 but for simplicity I just used 10 for now.
I've been digging around and found this article which introduces a value to normalize the results returned by Perlin(x). Essentially the amplitudes are summed up and the total is divided by this value. This seems to make sense since we could have "bad luck" and have a y-value of 1.0 in the first octave, followed by a 0.5 in the next, etc. So dividing by the sum of the amplitudes (1.5 in this case with 2 octaves) seems reasonable to keep the values in the range 0 - 1.
However, I'm unsure if this is the preferred way, since none of the other resources use this technique.
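To make the bound explicit, a one-line geometric-series check with the values from the question (persistence p = 0.5, 8 octaves):
sum_{i=0}^{7} 0.5^i = (1 - 0.5^8) / (1 - 0.5) = 1.9921875
So the unnormalized total can legitimately approach 2, and dividing by this sum rescales the result back into 0..1.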
The modified code would look like this:
fun perlin(x: Double, persistence: Double = 0.5, numberOfOctaves: Int = 8): Double {
    var total = 0.0
    var amplitudeSum = 0.0 // used for normalizing results to 0.0 - 1.0
    for (i in 0 until numberOfOctaves) {
        val amplitude = persistence.pow(i) // height of the crests
        val frequency = 2.0.pow(i) // frequency (number of crests per unit distance) doubles per octave
        val octave = amplitude * noise(x * frequency)
        total += octave
        amplitudeSum += amplitude
    }
    return total / amplitudeSum
}

Bayes network clarification

I am trying to learn about Bayesian networks and I have a problem that I would like some clarification on.
Given the table
CPT
What would p(Aggression=high | Anger=Partly, Hostility=Yes) be? My answer is 0.5.
My thought process is that Anger and Hostility are dependent, so according to the info given, the probability of Anger=partly and Hostility=yes is 0.5.
Aggression is independent of the two, so it would just be P(Aggression) * 0.5 = 0.5.
Would this be a correct assumption?
Short answer: My value for p(Aggression=high | Anger=Partly, Hostility=Yes) is 100%.
If Aggression were independent of Hostility and Anger, it would not matter what evidence you have.
Then p(Aggression) would simply be whichever of the three values p(Agg=low), p(Agg=high), p(Agg=veryhigh) applies.
However, the 3×9 table specifies p(Agg | Hos, Ang), so Aggression is not independent of them.
I have tried to model your CPT (upper table) with the free software SamIam.
In doing so I've entered the values from the CPT for the Aggression node in SamIam.
For the priors, I am assuming someone who is angry 5% of the time, partly angry 15% of the time, and not angry 80% of the time; and hostile 10% of the time, partly hostile 30%, or not hostile 60% of the time.
See screenshots:
Table values for Aggression Node:
With Observed Evidence - Value of Aggression=High goes up to 100%:
I've also attached the SamIam file:
net
{
propagationenginegenerator1791944048146838126L = "edu.ucla.belief.approx.BeliefPropagationSettings#20ece334";
recoveryenginegenerator6944530267470113528l = "edu.ucla.util.SettingsImpl#49f77e1b";
node_size = (130.0 55.0);
huginenginegenerator3061656038650325130L = "edu.ucla.belief.inference.JoinTreeSettings#71a1d859";
}
node Aggression
{
states = ("Low" "High" "VeryHigh" );
position = (268 -263);
diagnosistype = "AUXILIARY";
DSLxSUBMODEL = "Root Submodel";
ismapvariable = "false";
ID = "variable2";
label = "Aggression";
DSLxEXTRA_DEFINITIONxDIAGNOSIS_TYPE = "AUXILIARY";
excludepolicy = "include whole CPT";
}
node Anger
{
states = ("no" "partly" "yes" );
position = (118 -48);
diagnosistype = "AUXILIARY";
DSLxSUBMODEL = "Root Submodel";
ismapvariable = "false";
ID = "variable0";
label = "Anger";
DSLxEXTRA_DEFINITIONxDIAGNOSIS_TYPE = "AUXILIARY";
excludepolicy = "include whole CPT";
}
node Hostility
{
states = ("No" "Partly" "Yes" );
position = (351 -46);
diagnosistype = "AUXILIARY";
DSLxSUBMODEL = "Root Submodel";
ismapvariable = "false";
ID = "variable1";
label = "Hostility";
DSLxEXTRA_DEFINITIONxDIAGNOSIS_TYPE = "AUXILIARY";
excludepolicy = "include whole CPT";
}
potential ( Aggression | Anger Hostility )
{
data = ((( 1.0 0.0 0.0 )
( 0.5 0.5 0.0 )
( 0.5 0.0 0.5 ))
(( 0.5 0.5 0.0 )
( 0.5 0.5 0.0 )
( 0.0 1.0 0.0 ))
(( 0.5 0.0 0.5 )
( 0.0 0.5 0.5 )
( 0.0 0.0 1.0 )));
}
potential ( Anger | )
{
data = ( 0.8 0.15 0.05 );
}
potential ( Hostility | )
{
data = ( 0.6 0.3 0.1 );
}
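For reference, the 100% can be read straight off the potential above without any inference, because both parents are observed. A sketch of the lookup (assuming the file's nesting order Anger × Hostility × Aggression):
Anger=partly selects the second outer block; Hostility=Yes selects its third row, ( 0.0 1.0 0.0 ); so
p(Aggression = Low, High, VeryHigh | Anger=partly, Hostility=Yes) = (0.0, 1.0, 0.0)
and p(Aggression=High | Anger=partly, Hostility=Yes) = 1.0. The priors on Anger and Hostility play no role once both are observed.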

Math adjustment of swipe factor

I cannot figure this out. So I'm swiping a card (tinder style) and I'm capturing swipePercent from 0.0 to 1.0... I want the animation on the next card to happen between 0.2 and 0.4.
So I need a variable swipePercentAdjusted that starts at 0.0 when swipePercent = 0.2 and then ramps up to 1.0 when swipePercent = 0.4.
I just can't figure this out.
Something like this:
if (swipePercent >= 0.2 && swipePercent <= 0.4) {
    CGFloat swipePercentAdjusted = (swipePercent - 0.2) / 0.2;
}
Examples:
swipePercent is 0.2 yields 0.0
swipePercent is 0.3 yields 0.5
swipePercent is 0.4 yields 1.0
The most brutal solution is just to remap the interval 0.2 - 0.4 to 0.0 - 1.0 (which is similar to rmaddy's solution).
if (swipePercent < 0.2)
    return 0.0;
if (swipePercent > 0.4)
    return 1.0;
return (swipePercent - 0.2) / (0.4 - 0.2);
Which can be implemented in a simple function:
double remapLinear(double low, double high, double value) {
    if (value < low)
        return 0.0;
    if (value > high)
        return 1.0;
    return (value - low) / (high - low);
}
However, sometimes this is too abrupt. In that case you can use a cosine-based transition:
double remapSmooth(double low, double high, double value) {
    if (value < low)
        return 0.0;
    if (value > high)
        return 1.0;
    double z = (value - low) / (high - low);
    return 0.5 - 0.5 * cos(z * M_PI);
}
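Translated to Kotlin as a quick check against the example values above (a minimal sketch of the same remapLinear logic):
fun remapLinear(low: Double, high: Double, value: Double): Double = when {
    value < low -> 0.0
    value > high -> 1.0
    else -> (value - low) / (high - low)
}

fun main() {
    // Prints roughly 0.0, 0.5 and 1.0 for 0.2, 0.3 and 0.4 (up to floating-point rounding).
    listOf(0.2, 0.3, 0.4).forEach { println("$it yields ${remapLinear(0.2, 0.4, it)}") }
}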

Singular Value Decomposition: Different results with Jama, PColt and NumPy

I want to perform Singular Value Decomposition on a large (sparse) matrix. In order to choose the best (most accurate) library, I tried replicating the SVD example provided here using different Java and Python libraries. Strangely, I am getting different results with each library.
Here's the original example matrix and its decomposed (U, S and VT) matrices:
A =
 2.0  0.0  8.0  6.0  0.0
 1.0  6.0  0.0  1.0  7.0
 5.0  0.0  7.0  4.0  0.0
 7.0  0.0  8.0  5.0  0.0
 0.0 10.0  0.0  0.0  7.0
U =
-0.54  0.07  0.82 -0.11  0.12
-0.10 -0.59 -0.11 -0.79 -0.06
-0.53  0.06 -0.21  0.12 -0.81
-0.65  0.07 -0.51  0.06  0.56
-0.06 -0.80  0.09  0.59  0.04
VT =
-0.46  0.02 -0.87 -0.00  0.17
-0.07 -0.76  0.06  0.60  0.23
-0.74  0.10  0.28  0.22 -0.56
-0.48  0.03  0.40 -0.33  0.70
-0.07 -0.64 -0.04 -0.69 -0.32
S (with the top three singular values) =
17.92   0      0
 0     15.17   0
 0      0      3.56
I tried using the following Java and Python libraries:
Java: PColt, Jama
Python: NumPy
Here are the results from each one of them:
Jama:
U =
 0.5423 -0.065  -0.8216  0.1057 -0.1245
 0.1018  0.5935  0.1126  0.7881  0.0603
 0.525  -0.0594  0.213  -0.1157  0.8137
 0.6449 -0.0704  0.5087 -0.0599 -0.5628
 0.0645  0.7969 -0.09   -0.5922 -0.0441
VT =
 0.4646 -0.0215  0.8685  8.0E-4 -0.1713
 0.0701  0.76   -0.0631 -0.6013 -0.2278
 0.7351 -0.0988 -0.284  -0.2235  0.565
 0.4844 -0.0254 -0.3989  0.3327 -0.7035
 0.065   0.6415  0.0443  0.6912  0.3233
S =
17.9184  0.0      0.0     0.0     0.0
 0.0    15.1714   0.0     0.0     0.0
 0.0     0.0      3.564   0.0     0.0
 0.0     0.0      0.0     1.9842  0.0
 0.0     0.0      0.0     0.0     0.3496
PColt:
U =
-0.542255   0.0649957  0.821617   0.105747   -0.124490
-0.101812  -0.593461  -0.112552   0.788123    0.0602700
-0.524953   0.0593817 -0.212969  -0.115742    0.813724
-0.644870   0.0704063 -0.508744  -0.0599027  -0.562829
-0.0644952 -0.796930   0.0900097 -0.592195   -0.0441263
VT =
-0.464617   0.0215065 -0.868509   0.000799554 -0.171349
-0.0700860 -0.759988   0.0630715 -0.601346    -0.227841
-0.735094   0.0987971  0.284009  -0.223485     0.565040
-0.484392   0.0254474  0.398866   0.332684    -0.703523
-0.0649698 -0.641520  -0.0442743  0.691201     0.323284
S =
(00) 17.91837085874625
(11) 15.17137188041607
(22) 3.5640020352605677
(33) 1.9842281528992616
(44) 0.3495556671751232
NumPy:
U =
-0.54225536  0.06499573  0.82161708  0.10574661 -0.12448979
-0.10181247 -0.59346055 -0.11255162  0.78812338  0.06026999
-0.52495325  0.05938171 -0.21296861 -0.11574223  0.81372354
-0.64487038  0.07040626 -0.50874368 -0.05990271 -0.56282918
-0.06449519 -0.79692967  0.09000966 -0.59219473 -0.04412631
VT =
-4.64617e-01  2.15065e-02 -8.68508e-01  7.99553e-04 -1.71349e-01
-7.00859e-02 -7.59987e-01  6.30714e-02 -6.01345e-01 -2.27841e-01
-7.35093e-01  9.87971e-02  2.84008e-01 -2.23484e-01  5.65040e-01
-4.84391e-01  2.54473e-02  3.98865e-01  3.32683e-01 -7.03523e-01
-6.49698e-02 -6.41519e-01 -4.42743e-02  6.91201e-01  3.23283e-01
S = 17.91837086  15.17137188  3.56400204  1.98422815  0.34955567
As can be noticed, the sign of each element in the Jama decomposed matrices (U & VT) is opposite to the ones in the original example. Interestingly, for PColt and NumPy only the signs of the elements in the last two columns are inverted. Is there any specific reason behind the inverted signs? Has someone faced similar discrepancies?
Here are the pieces of code which I used:
Java
import java.text.DecimalFormat;
import cern.colt.matrix.tdouble.DoubleMatrix2D;
import cern.colt.matrix.tdouble.algo.DenseDoubleAlgebra;
import cern.colt.matrix.tdouble.algo.decomposition.DenseDoubleSingularValueDecomposition;
import cern.colt.matrix.tdouble.impl.DenseDoubleMatrix2D;
import Jama.Matrix;
import Jama.SingularValueDecomposition;
public class SVD_Test implements java.io.Serializable {
    public static void main(String[] args) {
        double[][] data2 = new double[][] {
            { 2.0,  0.0, 8.0, 6.0, 0.0 },
            { 1.0,  6.0, 0.0, 1.0, 7.0 },
            { 5.0,  0.0, 7.0, 4.0, 0.0 },
            { 7.0,  0.0, 8.0, 5.0, 0.0 },
            { 0.0, 10.0, 0.0, 0.0, 7.0 }
        };
        DoubleMatrix2D pColt_matrix = new DenseDoubleMatrix2D(5, 5);
        pColt_matrix.assign(data2);
        Matrix j = new Matrix(data2);
        SingularValueDecomposition svd_jama = j.svd();
        DenseDoubleSingularValueDecomposition svd_pColt =
            new DenseDoubleSingularValueDecomposition(pColt_matrix, true, true);
        System.out.println("U:");
        System.out.println("pColt:");
        System.out.println(svd_pColt.getU());
        printJamaMatrix(svd_jama.getU());
        System.out.println("S:");
        System.out.println("pColt:");
        System.out.println(svd_pColt.getS());
        printJamaMatrix(svd_jama.getS());
        System.out.println("V:");
        System.out.println("pColt:");
        System.out.println(svd_pColt.getV());
        printJamaMatrix(svd_jama.getV());
    }

    public static void printJamaMatrix(Matrix inp) {
        System.out.println("Jama: ");
        System.out.println(String.valueOf(inp.getRowDimension()) + " X " + String.valueOf(inp.getColumnDimension()));
        DecimalFormat twoDForm = new DecimalFormat("#.####");
        StringBuffer sb = new StringBuffer();
        for (int r = 0; r < inp.getRowDimension(); r++) {
            for (int c = 0; c < inp.getColumnDimension(); c++)
                sb.append(Double.valueOf(twoDForm.format(inp.get(r, c)))).append("\t");
            sb.append("\n");
        }
        System.out.println(sb.toString());
    }
}
Python:
>>> import numpy
>>> numpy_matrix = numpy.array([[ 2.0, 0.0, 8.0, 6.0, 0.0],
[1.0, 6.0, 0.0, 1.0, 7.0],
[5.0, 0.0, 7.0, 4.0, 0.0],
[7.0, 0.0, 8.0, 5.0, 0.0],
[0.0, 10.0, 0.0, 0.0, 7.0]])
>>> u,s,v = numpy.linalg.svd(numpy_matrix, full_matrices=True)
Is there something wrong with the code?
Nothing is wrong: the SVD is unique only up to a sign change of the columns of U and V (i.e. if you change the sign of the i-th column of U and the i-th column of V, you still have a valid SVD: A = U*S*V^T). Different implementations of the SVD will give slightly different results: to check for a correct SVD you have to compute norm(A - U*S*V^T) / norm(A) and verify that it is a small number.
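To see why the sign flips are harmless, let D be a diagonal matrix whose entries are all ±1. Flipping matched columns of U and V replaces them by U*D and V*D, and a line of algebra shows the product is unchanged:
(U*D) * S * (V*D)^T = U * (D*S*D) * V^T = U * S * (D*D) * V^T = U*S*V^T = A
because D commutes with the diagonal S and D*D = I. This is exactly the difference between the Jama, PColt and NumPy outputs above.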
There is nothing wrong. The SVD resolves the column space and the row space of the target matrix into orthonormal bases in such a fashion as to align these two spaces and account for the dilations along the eigenvectors. The alignment angles may be unique, a discrete set, or a continuum as below.
For example, given two angles t and p and the target matrix (see footnote)
A = ( (1, -1), (2, 2) )
The general decomposition is
U = ( (0, exp[ i p ]), (-exp[ i t ], 0) )
S = sqrt(2) ( (2,0), (0,1) )
V* = ( 1 / sqrt( 2 ) ) ( (exp[ i t ], exp[ i t ]), (exp[ i p ], -exp[ i p ]) )
To recover the target matrix use
A = U S V*
A quick test of the quality of the answer is to verify the unit length of each column vector in both U and V.
Footnote:
Matrices are in row major format. That is, the first row vector in the matrix A is (1, -1).