SVG Pixel Compositing Operations in NumPy - numpy

I am using GIMP layer operations, which (AFAICT) map back to the SVG compositing operations (https://gitlab.gnome.org/GNOME/gegl/-/blob/master/operations/generated/src.c, https://www.w3.org/TR/SVGCompositing).
I am working with images loaded by OpenCV and manipulated with NumPy. I am most interested in implementing the GIMP (SVG?) "overlay" operation. Are there any libraries that do this already? If there isn't a library, how do I convert the SVG spec to NumPy? The overlay compositing is defined as:
if 2 × Dc <= 1
f(Sc,Dc) = 2 × Sc × Dc
otherwise
f(Sc,Dc) = 1 - 2 × (1 - Dc) × (1 - Sc)
X = 1
Y = 1
Z = 1
if 2 × Dca <= Da
Dca' = 2 × Sca × Dca + Sca × (1 - Da) + Dca × (1 - Sa)
otherwise
Dca' = Sa × Da - 2 × (Da - Dca) × (Sa - Sca) + Sca × (1 - Da) + Dca × (1 - Sa)
= Sca × (1 + Da) + Dca × (1 + Sa) - 2 × Dca × Sca - Da × Sa
Da' = Sa + Da - Sa × Da
Admittedly, I am having trouble decoding the SVG notation. (Why two = statements in the 2nd otherwise?)
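On the notation: the two "=" lines in the second otherwise branch are not two separate rules; the second line is just an algebraic expansion of the first (multiply the first out and the terms match). For fully opaque images (Sa = Da = 1), the premultiplied Dca' formula collapses to the simple f(Sc, Dc) form, which translates to NumPy directly. A minimal sketch under that opaque assumption:

```python
import numpy as np

def overlay(src, dst):
    """SVG/GIMP 'overlay' blend for fully opaque images.

    src, dst: float arrays with values in [0, 1] (Sc and Dc in the spec).
    With Sa = Da = 1, the premultiplied formula reduces to f(Sc, Dc).
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    return np.where(2.0 * dst <= 1.0,
                    2.0 * src * dst,                        # dark half
                    1.0 - 2.0 * (1.0 - dst) * (1.0 - src))  # light half
```

For OpenCV images (uint8, BGR), convert to floats first and back afterwards, e.g. `(overlay(src / 255.0, dst / 255.0) * 255).astype(np.uint8)`. Note that `np.where` evaluates both branches everywhere, which is harmless here because both expressions are defined for all inputs.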


Writing piecewise constraints in GAMS

I'm trying to solve the network problem below in GAMS Cplex. I have a piecewise constraint that depends on the node's role (whether it is an origin (o) node, an in-between node, or a destination (d) node).
How do I write these piecewise constraints? Or is there any way to write this 'manually' for equations 1 to 5?
In the program below, I've written:
eq1 represents node 1 as an origin node,
eq2, eq3, and eq4 represent nodes 2, 3, and 4 as in-between nodes,
eq5 represents node 5 as a destination node.
Set
i nodes /1,2,3,4,5/;
Alias(i,j);
Set
arc(i,j) arcs from node i to j
/1 .2
2 .1
1 .3
3 .1
1 .4
4 .1
2 .3
3 .2
2 .5
5 .2
3 .5
5 .3
4 .5
5 .4/;
Table c(i,j) population exposed from node i to node j
         1       2       3       4       5
1        0  105000   90000   65000       0
2   105000       0  100000       0   85000
3    90000  100000       0       0   80000
4    65000       0       0       0   55000
5        0   85000   80000   55000       0
;
Table l(i,j) distance from node i to node j
     1    2    3    4    5
1    0    5    8   10    0
2    5    0    2    0    7
3    8    2    0    0   11
4   10    0    0    0    8
5    0    7   11    8    0
;
Binary Variables
x(i,j)
y(i,j);
Positive Variables
v(i,j)
lambda(i,j);
Free Variables
w(i) node potential
z optimization solution;
Scalar
R very large number;
R = 10000000000000000;
Equations
sol optimization solution
eq1(i,j) constraint 1
eq2(i,j) constraint 2
eq3(i,j) constraint 3
eq4(i,j) constraint 4
eq5(i,j) constraint 5
eq6(i,j) constraint 6
eq7(i,j) constraint 7
eq8(i,j) constraint 8
eq9(i,j) constraint 9;
sol.. z =e= sum(arc(i,j),c(arc)*x(arc));
eq1(i,j).. x(1,2) - x(2,1) + x(1,3) - x(3,1) + x(1,4) - x(4,1) =e= 1;
eq2(i,j).. - x(1,2) + x(2,1) + x(2,3) - x(3,2) + x(2,5) - x(5,2) =e= 0;
eq3(i,j).. - x(1,3) + x(3,1) - x(2,3) + x(3,2) + x(3,5) - x(5,3) =e= 0;
eq4(i,j).. - x(1,4) + x(4,1) + x(4,5) - x(5,4) =e= 0;
eq5(i,j).. - x(2,5) + x(5,2) - x(3,5) + x(5,3) - x(4,5) + x(5,4) =e= -1;
eq6(i,j).. - y(i,j) + x(i,j) =l= 0;
eq7(i,j).. l(i,j) - w(i) + w(j) - v(i,j) + lambda(i,j) =e= 0;
eq8(i,j).. v(i,j) - R * (1 - x(i,j)) =l= 0;
eq9(i,j).. lambda(i,j) - R * (1 - (y(i,j) - x(i,j))) =l= 0;
Model contohTMB /all/;
Solve contohTMB using MIP Minimizing z;
Display "Solution values:", x.l, z.l;
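Incidentally, eq1–eq5 above are all instances of one flow-balance pattern: outflow minus inflow equals +1 at the origin, 0 at in-between nodes, and -1 at the destination, so in GAMS they could be collapsed into a single equation indexed over i with a supply parameter b(i). A quick numeric sketch of that balance structure (arc list copied from above, the helper name `balance` is mine, path chosen by hand):

```python
# Node-balance check: the balance at node i is (flow out of i) - (flow into i).
# With b = [1, 0, 0, 0, -1] (node 1 = origin, node 5 = destination),
# any single origin-to-destination path x must satisfy balance(x) == b.
arcs = [(1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (4, 1), (2, 3), (3, 2),
        (2, 5), (5, 2), (3, 5), (5, 3), (4, 5), (5, 4)]
b = [1, 0, 0, 0, -1]

def balance(x):
    """Outflow minus inflow at each node, given arc flows x."""
    out = [0] * 5
    for (i, j), flow in zip(arcs, x):
        out[i - 1] += flow   # arc leaves node i
        out[j - 1] -= flow   # arc enters node j
    return out

# The path 1 -> 2 -> 5 uses arcs (1,2) and (2,5):
x = [1 if a in [(1, 2), (2, 5)] else 0 for a in arcs]
print(balance(x) == b)   # True
```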

Yolov4 Darknet Training Error on Macbook Pro M1

I am creating a custom Yolov4 model to detect characters & digits in an image. I have installed Darknet on my Macbook M1 following this repo: https://github.com/AlexeyAB/darknet
The annotated dataset is ready for training. However, when the training begins, an error is shown saying that the GPU and OpenCV are not being used, and the training stops abruptly.
Here is the error in the terminal:
GPU isn't used
OpenCV isn't used - data augmentation will be slow
valid: Using default 'data/train.txt'
yolov4-obj
mini_batch = 4, batch = 64, time_steps = 1, train = 1
layer filters size/strd(dil) input output
0 conv 32 3 x 3/ 1 416 x 416 x 3 -> 416 x 416 x 32 0.299 BF
1 conv 64 3 x 3/ 2 416 x 416 x 32 -> 208 x 208 x 64 1.595 BF
2 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF
3 route 1 -> 208 x 208 x 64
4 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF
5 conv 32 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 32 0.177 BF
6 conv 64 3 x 3/ 1 208 x 208 x 32 -> 208 x 208 x 64 1.595 BF
......
......
Total BFLOPS 59.817
avg_outputs = 494379
Loading weights from yolov4.conv.137...
seen 64, trained: 0 K-images (0 Kilo-batches_64)
Done! Loaded 137 layers from weights-file
Learning Rate: 0.001, Momentum: 0.949, Decay: 0.0005
Detection layer: 139 - type = 28
Detection layer: 150 - type = 28
Detection layer: 161 - type = 28
Resizing, random_coef = 1.40
608 x 608
Create 64 permanent cpu-threads
mosaic=1 - compile Darknet with OpenCV for using mosaic=1
mosaic=1 - compile Darknet with OpenCV for using mosaic=1
OpenCV was installed with brew install opencv, and the M1 GPU works fine with TensorFlow, but for some reason it is simply not being picked up here.
Any help on this issue will be greatly appreciated.
Thank you in advance!

Keras (tf backend) memory allocation problems

I am using Keras with the Tensorflow backend.
I am facing a batch size limitation due to high memory usage.
My data is composed of 4 1D signals, treated with a sample size of 801 for each channel. The global sample size is 3204.
Input data:
4 channels of N 1D signals of length 7003
Input generated by applying a sliding window on the 1D signals
Giving input data of shape (N*6203, 801, 4)
N is the number of signals used to build one batch
My Model:
Input 801 x 4
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
MaxPooling 2 x 1
Conv2D 5 x 1, 20 channels
Flatten
Dense 2000
Dense 5
With my GPU (Quadro K6000, 12189 MiB) I can fit only N=2 without warning.
With N=3 I get a ran-out-of-memory warning.
With N=4 I get a ran-out-of-memory error.
It sounds like batch_size is limited by the space used by all the tensors.
Input 801 x 4 x 1
Conv 797 x 4 x 20
MaxPooling 398 x 4 x 20
Conv 394 x 4 x 20
MaxPooling 197 x 4 x 20
Conv 193 x 4 x 20
MaxPooling 96 x 4 x 20
Conv 92 x 4 x 20
Dense 2000
Dense 5
With a 1D signal of 7003 with 4 channels -> 6203 samples
Total = N*4224 MiB.
N=2 -> 8448 MiB fit in GPU
N=3 -> 12672 MiB work but warning: failed to allocate 1.10 GiB then 3.00 GiB
N=4 -> 16896 MiB fail, only one message: failed to allocate 5.89 GiB
Does it work like that? Is there any way to reduce the memory usage?
To give a time estimate: 34 batches run in 40 s, and my total N is 10^6.
Thank you for your help :)
Example with python2.7: https://drive.google.com/open?id=1N7K_bxblC97FejozL4g7J_rl6-b9ScCn
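The ~4224 MiB-per-N figure can be sanity-checked by summing the activation sizes listed above: each 1D signal yields 6203 windows, and each window's forward pass stores every layer output as float32. A quick calculation (shapes copied from the list above; this counts only forward activations, so weights, gradients, and workspace come on top):

```python
from math import prod

# Output shape of each layer for one input window, as listed above
shapes = [(801, 4, 1),                  # input
          (797, 4, 20), (398, 4, 20),   # conv, pool
          (394, 4, 20), (197, 4, 20),   # conv, pool
          (193, 4, 20), (96, 4, 20),    # conv, pool
          (92, 4, 20),                  # conv
          (2000,), (5,)]                # dense layers

elems_per_window = sum(prod(s) for s in shapes)   # 178,569 floats per window
windows_per_signal = 6203                         # 7003 - 801 + 1
bytes_per_float = 4                               # float32

mib_per_signal = elems_per_window * windows_per_signal * bytes_per_float / 2**20
print(round(mib_per_signal))   # ~4225, matching the ~4224 MiB per N above
```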

differential equation by maxima

I am new to Maxima, so I am really sorry if I ask a simple question. I have a differential equation,
(%i1) -(x-x/2*sinh(x/2)+'diff(y,x))*(1/y+'diff(y,x)*x/y^2)+(x-x^2/sinh(x/2)+x^2*cosh(x/2)/(4*(sinh(x/2))^2)+'diff(y,x)*x+'diff(y,x,2)*x^2)/y+y^2-1-0.9*(x-x^2/(2*sinh(x/2)))=0;
(%o1) (x^2*'diff(y,x,2) + x*'diff(y,x) - x^2/sinh(x/2)
       + x^2*cosh(x/2)/(4*sinh(x/2)^2) + x)/y
      + (x*sinh(x/2)/2 - 'diff(y,x) - x)*('diff(y,x)*x/y^2 + 1/y)
      + y^2 - 0.9*(x - x^2/(2*sinh(x/2))) - 1 = 0
(%i2) ode2(%,y,x);
rat: replaced -0.9 by -9/10 = -0.9
(%o2) false
What should I do?
The equation you have is nonlinear. Maxima's ode2 can only solve a limited variety of differential equations, and it appears your equation doesn't fall into any of the categories it can handle.
I don't know if there is another symbolic diff eq solver in Maxima that you can try. If a numerical solution is enough, take a look at rk (a Runge-Kutta implementation).
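To illustrate the numerical route in Python rather than Maxima (with a toy equation standing in for the one above, which would first have to be solved algebraically for y''): rewrite y'' = f(x, y, y') as a first-order system and hand it to a Runge-Kutta integrator, which is the same idea as Maxima's rk.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rewrite y'' = f(x, y, y') as a first-order system in u = [y, y'].
# Toy stand-in: y'' = -y with y(0) = 0, y'(0) = 1 (exact solution: sin x).
def rhs(x, u):
    y, dy = u
    return [dy, -y]   # [y', y'']

sol = solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0],
                rtol=1e-8, atol=1e-10, dense_output=True)

# sol.sol(x)[0] now approximates sin(x) on [0, pi]
```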

Formula for sum of odd or even or all numbers from x to y?

I think the title covers it.
Most of the formulae I've seen produce the sum of even or odd numbers from 1 to n. I expect I could work out how to generalise by subtracting the lower range from the higher range, e.g.:
For the sum of odds from 49 to 157:
(Sum of all odds -> 157) - (Sum of all odds -> 47).
What I've heard, though, is that there's a general formula from which all three problems fall out, in which you give the first and last numbers and the interval between them and you're done.
It isn't proving easy when I'm trying to write a program that can take user input of any two values, and provide an answer every time:
Sum of odds from 1 - 99?
Sum of odds from 2 - 98?
Sum of evens from 1 - 99?
Sum of evens from 2 - 99?
Sum of evens from 3 - 3?
etc, including the sum of all numbers from 1 - 99... etc.
I'm guessing this is not a hard question and somebody will have an easy solution for it?
How would you do it?
The sum of the odd numbers from 1 to 2n - 1 is n squared: e.g. 1 + 3 + 5 = 9, which is 3 squared; 1 + 3 + 5 + 7 = 16, which is 4 squared.
For a series not starting at 1, simply subtract the smaller square: (1 + 3) + 5 + 7 = 16,
and subtracting the bracketed terms gives 16 - 4, i.e. 4^2 - 2^2 = 12.
Generally, for a..b with a and b odd integers,
the sum is ((b + 1) / 2)^2 - ((a - 1) / 2)^2, both a and b included in the sum.
For all odds from a to b, regardless of whether a or b are odd or even:
((b + (b mod 2)) / 2)^2 - ((a - (a mod 2)) / 2)^2
For example, from 4 to 9:
((9 + (9 mod 2)) / 2)^2 - ((4 - (4 mod 2)) / 2)^2
= ((9 + 1) / 2)^2 - ((4 - 0) / 2)^2
= (10 / 2)^2 - (4 / 2)^2
= (5)^2 - (2)^2 = 25 - 4 = 21
To confirm, 5 + 7 + 9 = 21
The even case looks a bit more intimidating. It would be:
(((b - (b mod 2)) / 2)^2 + ((b - (b mod 2)) / 2)) - ((((a + (a mod 2)) - 2) / 2)^2 + (((a + (a mod 2)) - 2) / 2))
So, evens from 6 to 11 (6 + 8 + 10 = 24):
(((11 - (11 mod 2)) / 2)^2 + ((11 - (11 mod 2)) / 2)) - ((((6 + (6 mod 2)) - 2) / 2)^2 + (((6 + (6 mod 2)) - 2) / 2))
= (((11 - 1) / 2)^2 + ((11 - 1) / 2)) - ((((6 + 0) - 2) / 2)^2 + (((6 + 0) - 2) / 2))
= ((10 / 2)^2 + (10 / 2)) - (((6 - 2) / 2)^2 + ((6 - 2) / 2))
= ((5)^2 + 5) - ((4 / 2)^2 + (4 / 2))
= (25 + 5) - ((2)^2 + 2) = 30 - (4 + 2) = 24
For even numbers, the sum from 2 to n (n even) is (n / 2)^2 + n / 2,
so 2 + 4 + 6 + 8 = (8 / 2)^2 + (8 / 2) = 16 + 4 = 20.
So for a series a..b of even numbers, again with both a and b included, the sum is:
((b / 2)^2 + (b / 2)) - (((a - 2) / 2)^2 + ((a - 2) / 2))
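The general formula alluded to above is just the arithmetic-series sum: for first term a, last term l, and step d, there are n = (l - a)/d + 1 terms, and the total is n(a + l)/2. A short sketch covering all three cases (function names are my own):

```python
def arith_sum(first, last, step):
    """Sum of first, first+step, ..., last (last assumed reachable from first)."""
    if last < first:
        return 0                          # empty range
    n = (last - first) // step + 1        # number of terms
    return n * (first + last) // 2        # n times the average term

def sum_odds(x, y):
    a = x if x % 2 else x + 1             # first odd >= x
    b = y if y % 2 else y - 1             # last odd <= y
    return arith_sum(a, b, 2)

def sum_evens(x, y):
    a = x if x % 2 == 0 else x + 1        # first even >= x
    b = y if y % 2 == 0 else y - 1        # last even <= y
    return arith_sum(a, b, 2)

def sum_all(x, y):
    return arith_sum(x, y, 1)
```

This handles all the cases in the question, including degenerate ones like evens from 3 to 3 (an empty range, so 0).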