Problem evaluating iterated integral in SymPy - numpy

I'm teaching a course in Multivariate Calculus and decided to convert my notes from Sage to Jupyter using SymPy. I have rewritten nearly all my notes as Jupyter Notebooks, and I am very impressed by how I can use multiple cells like Mathematica, Markdown cells with LaTeX, and all the great features of matplotlib, NumPy, and SymPy.
I'm nearly done converting my sagelets to Python scripts on Colab and found a discrepancy.
This Sage code resolves as pi:
integral(integral(integral(1, z, x^2+y^2, 2-x^2-y^2),
                  y, -sqrt(1-x^2), sqrt(1-x^2)),
         x, -1, 1)
but this SymPy code resolves as -pi/2:
from sympy import Integral, symbols, sqrt

x, y, z = symbols('x y z')

Integral(1,
         (z, x**2 + y**2, 2 - x**2 - y**2),
         (y, -sqrt(1 - x**2), sqrt(1 - x**2)),
         (x, -1, 1)
         ).doit()
See #3 in this Jupyter Notebook:
https://colab.research.google.com/drive/1OlT9nfPG8TzoR_WpDavx-SAa07HLg3hV?usp=sharing
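For what it's worth, a quick numeric sanity check with SciPy (a sketch I put together just for this post, not part of the notebook; scipy.integrate.tplquad takes the integrand as func(z, y, x)) agrees with Sage's value of pi:

import numpy as np
from scipy.integrate import tplquad

# volume between z = x^2 + y^2 and z = 2 - x^2 - y^2 over the unit disk
val, err = tplquad(
    lambda z, y, x: 1.0,            # integrand
    -1, 1,                          # x limits
    lambda x: -np.sqrt(1 - x**2),   # y lower limit
    lambda x: np.sqrt(1 - x**2),    # y upper limit
    lambda x, y: x**2 + y**2,       # z lower limit
    lambda x, y: 2 - x**2 - y**2,   # z upper limit
)
print(val)  # ~3.14159..., matching the Sage result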
Shouldn't these be equal? What am I missing? Any help would be greatly appreciated as I've done A LOT of work on this course using SymPy and would like to use it in class this summer session!
Please help,
A. Jorge Garcia
Applied Math & CS
Nassau Community College
http://shadowfaxrant.blogspot.com
PS: Here's the SageCell version,
https://sagecell.sagemath.org/?z=eJzFk8FuwjAMhu9IvIMFB9qRTk1A07TrtE07cNuuSBkEGlGS4qRA-_RLQ1uQuDCGtlMsx_ns_5fTn-AbsJg-3scPoxg-J89sDB8os1TAu7JiiTw13U6fhvCy5WnOrQCu5v4OMxT2CWRdFpwHlJQkJiwkhTtGIdm7Yxx2OybRu6B3U2jPYbccg0FBykHT4seUdrYK52SzX0xIo-KArZvQo_DbYnsXyz1_e5A5unqe_ZQNiykjLHJR5KIKHJkN2oBWqZCcxFXviJ4a8deNL7XqOvrBzHEIr9JpsYmANTcG9MLHuZIWZvmXcNJ8YiHRWNAzy5WFXSIUYKJBGshQZxqt1IqnYLUvNpuco2hYc2ncq5ljoF77jEa5lKo19j-HaL_i6pKPuLoLareHZWWs39NmPf2yOmO_Adu6eJ4=&lang=sage

Related

How can I "try out" TensorFlow functions in IPython and see the answers?

Sometimes I really just want to interactively experiment with things like softmax(), or sigmoid() just to get a sense of how they behave. I'm struggling to be able to see the answer. Maybe I need to rewrite everything in numpy, but I hope not.
Example:
v = tf.sigmoid(tf.convert_to_tensor([0.123, 0.345]))
Now I have v, but heck if I can figure out how to see the values inside it. How can it be done?
If you are running TensorFlow 2.x:
import tensorflow as tf

v = tf.sigmoid(tf.convert_to_tensor([0.123, 0.345]))
v.numpy()   # eager execution: extract the values as a NumPy array
The output is:
array([0.5307113, 0.5854046], dtype=float32)
If you are running TensorFlow 1.x (graph mode):
with tf.Session() as sess:
    print(v.eval())
which prints:
[0.5307113 0.5854046]
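If you just want to sanity-check values like these without a session at all, the sigmoid is easy to reproduce in plain NumPy (a minimal sketch, not TensorFlow's implementation):

import numpy as np

def sigmoid(x):
    # logistic function: 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=np.float32)))

print(sigmoid([0.123, 0.345]))  # [0.5307113 0.5854046]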

Image similarity using Tensorflow or PyTorch

I want to compare two images for similarity. Since my purpose is to match a given image against a massive collection of images, I want to run the comparisons on GPU.
I came across the tf.image.ssim and tf.image.psnr functions, but I am unable to find any working examples. Solutions in PyTorch are also appreciated. Since I don't have a good understanding of CUDA and C, I am hesitant to try writing kernels in PyCUDA.
Would it help processing if I read the entire image collection in advance and stored it as TensorFlow Records?
Any guidance or solution would be greatly appreciated. Thank you.
Edit: I am matching images of the same size only. I don't want to do a mere histogram match; I want an SSIM or PSNR implementation for image similarity, so I am assuming similar images would be close in color, content, etc.
Check out the example on the tensorflow doc page (link):
im1 = tf.image.decode_png(tf.io.read_file('path/to/im1.png'))  # decode_png takes the file's bytes, not a path
im2 = tf.image.decode_png(tf.io.read_file('path/to/im2.png'))
print(tf.image.ssim(im1, im2, max_val=255))
This should work on recent versions of TensorFlow. On older (graph-mode) versions, tf.image.ssim returns an unevaluated tensor (print will not give you a value), but you can evaluate it with .eval() inside a session.
There is no built-in implementation of PSNR or SSIM in PyTorch. You can either implement them yourself or use a third-party package, like piqa, which I have developed.
Assuming you already have torch and torchvision installed, you can get it with
pip install piqa
Then, for the image comparison:
import torch
from torchvision import transforms
from PIL import Image
from piqa import PSNR, SSIM

im1 = Image.open('path/to/im1.png')
im2 = Image.open('path/to/im2.png')

transform = transforms.ToTensor()       # converts PIL images to [0, 1] float tensors
x = transform(im1).unsqueeze(0).cuda()  # add a batch dimension; .cuda() for GPU
y = transform(im2).unsqueeze(0).cuda()

psnr = PSNR()
ssim = SSIM().cuda()

print('PSNR:', psnr(x, y))
print('SSIM:', ssim(x, y))
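As a cross-check, PSNR is simple enough to compute by hand from the MSE; a minimal sketch, assuming inputs scaled to [0, 1] as ToTensor produces:

import torch

def psnr_manual(x, y, max_val=1.0):
    # PSNR = 10 * log10(max_val^2 / MSE)
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

print('PSNR (manual):', psnr_manual(x, y))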

Difference between matplotlib.contourf and Matlab's contourf() - odd sharp edges in matplotlib

I am a recent migrant from Matlab to Python and have recently worked with NumPy and Matplotlib. I recoded one of my scripts from Matlab, which employs Matlab's contourf function, into Python using matplotlib's corresponding contourf function. I managed to replicate the output in Python, except that the contourf plots are not exactly the same, for a reason that is unknown to me.
As I run the contourf function in matplotlib, I get an otherwise nice figure, but it has sharp edges on the contour levels at the top and bottom, which should not be there (see Figure 1 below, matplotlib output). When I export the arrays I used in Python to Matlab (i.e. exactly the same data set that was used to generate the matplotlib contourf plot) and use Matlab's contourf function, I get a slightly different output, without those sharp contour-level edges (see Figure 2 below, Matlab output). I used the same number of levels in both figures. In Figure 3 I have made a scatter plot of the same data, which shows that there are no such sharp edges in the data as shown in the contourf plot (I added contour lines just for reference).
An example data set can be downloaded through the Dropbox link given below. It contains three txt files: X, Y, Z. Each is a 500x500 array that can be used directly with contourf(), i.e. plt.contourf(X, Y, Z, ...). The code I used was
plt.contourf(X,Y,Z,10, cmap=plt.cm.jet)
plt.contour(X,Y,Z,10,colors='black', linewidths=0.5)
plt.axis('equal')
plt.axis('off')
Does anyone have an idea why this happens? I would appreciate any insight on this!
Cheers,
Jussi
Below are the details of my setup:
Python 3.7.0
IPython 6.5.0
matplotlib 2.2.3
[Figure 1: Matplotlib output]
[Figure 2: Matlab output]
[Figure 3: Matplotlib scatter plot]
[Link to data set]
The confusing thing about the Matlab plot is that its colorbar shows many more levels than are actually in the plot, so you don't see the actual intervals that are contoured.
You can achieve the same result in matplotlib by choosing 12 instead of 11 levels:
import numpy as np
import matplotlib.pyplot as plt

X, Y, Z = [np.loadtxt("data/roundcontourdata/{}.txt".format(i)) for i in list("XYZ")]

levels = np.linspace(Z.min(), Z.max(), 12)  # 12 boundaries -> 11 filled bands

cntr = plt.contourf(X, Y, Z, levels, cmap=plt.cm.jet)
plt.contour(X, Y, Z, levels, colors='black', linewidths=0.5)
plt.colorbar(cntr)
plt.axis('equal')
plt.axis('off')
plt.show()
So, in conclusion, both plots are correct and show the same data; only the automatically chosen levels differ. This can be circumvented by choosing custom levels to match the desired visual appearance.
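To see concretely why 12 enters here: np.linspace returns boundary values, and contourf fills the bands between consecutive boundaries, so N boundaries always give N-1 filled intervals. A tiny sketch:

import numpy as np

levels = np.linspace(0.0, 1.0, 12)
print(len(levels))      # 12 boundary values
print(len(levels) - 1)  # 11 filled bands between them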

How to get python to generate the tweedie deviance for xgboost?

Using statsmodels' GLM, the Tweedie deviance is included in the summary function, but I don't know how to do this for xgboost. Reading the API didn't help either.
In Python this is how you do it. Suppose predictions is the output of your gradient-boosted tree and real are the actual values. Then, using statsmodels, you would run:
import statsmodels.api as sm

# var_power is the Tweedie variance power p in Var(Y) = phi * mu**p;
# deviance(endog, mu) takes the observed values first, then the predictions
dev = sm.families.Tweedie(var_power=1.5).deviance(real, predictions)
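Alternatively, scikit-learn ships a Tweedie deviance metric (mean_tweedie_deviance, available since scikit-learn 0.22), which is convenient for scoring xgboost predictions directly; a minimal sketch with made-up numbers:

import numpy as np
from sklearn.metrics import mean_tweedie_deviance

real = np.array([1.5, 0.0, 2.3, 4.1])         # hypothetical actuals
predictions = np.array([1.2, 0.3, 2.0, 3.8])  # hypothetical model output (must be > 0 for 1 < power < 2)
print(mean_tweedie_deviance(real, predictions, power=1.5))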

Logarithmic scaling / colorbar in Julia using PyPlot (matplotlib)

I am using Julia 0.5 and the latest version of PyPlot.
I am plotting a 2D array using PyPlot.pcolor and it works pretty well. But now I have data that needs logarithmic scaling. I searched the web, and what I found was an example using
plt.pcolor(X, Y, Z1, norm=LogNorm(vmin=Z1.min(), vmax=Z1.max()), cmap='PuBu_r')
But since LogNorm seems to be a Python function, it doesn't work in Julia. Does anyone have an idea what I can hand over to norm= to get logarithmic scaling?
An example would be:
using PyPlot
A = rand(20,20)
figure()
PyPlot.pcolor(A, cmap="PuBu_r")
colorbar()
Matplotlib fields and methods can be accessed using the
matplotlib[:colors][:LogNorm]
syntax (i.e. for the corresponding matplotlib.colors.LogNorm object).
UPDATE: Thank you for your MWE. Based on that example, I managed to make it work like this:
PyPlot.pcolor(A, norm=matplotlib[:colors][:LogNorm](vmin=minimum(A), vmax=maximum(A)), cmap="PuBu_r")
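For reference, here is the plain-Python matplotlib version that the Julia call above mirrors (a sketch; the values are shifted to stay strictly positive, since LogNorm requires positive data):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

A = np.random.rand(20, 20) + 0.01  # LogNorm needs strictly positive values
plt.pcolor(A, norm=LogNorm(vmin=A.min(), vmax=A.max()), cmap="PuBu_r")
plt.colorbar()
plt.show()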