How can DWT be used in LSB substitution steganography

In steganography, the least significant bit (LSB) substitution method embeds the secret bits in place of bits of the cover medium, for example, image pixels. In some methods, the Discrete Wavelet Transform (DWT) of the image is taken and the secret bits are embedded in the DWT coefficients, after which the inverse transform is used to reconstruct the stego image.
However, the DWT produces floating-point coefficients, while the LSB substitution method requires integer values. Most papers I've read use the 2D Haar wavelet, yet they aren't clear on their methodology. I've seen the transform defined in terms of low- and high-pass filters (floating-point transforms), or as the sum and difference of pixel pairs, or as their average and difference, etc.
More explicitly: in the forward or the inverse transform (not necessarily in both, depending on the formulas used), floating-point numbers will eventually appear. I can't have them in the coefficients, because the substitution won't work, and I can't have them in the reconstructed pixels, because the image requires integer values for storage.
For example, let's consider a pair of pixels, A and B, as a 1D array. The low-frequency coefficient is defined by the sum, s = A + B, and the high-frequency coefficient by the difference, d = A - B. We can then reconstruct the original pixels with B = (s - d) / 2 and A = s - B. However, after any bit twiddling with the coefficients, s - d may no longer be even, and float values will emerge in the reconstructed pixels.
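A quick numeric illustration of the problem (a toy example of my own):
A, B = 5, 2
s, d = A + B, A - B  # s = 7, d = 3
d ^= 1               # flip the LSB of d to embed a secret bit -> d = 2
B_rec = (s - d) / 2  # 2.5: no longer an integer
A_rec = s - B_rec    # 4.5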
For the 2D case, the 1D transform is applied separately to the rows and the columns, so eventually a division by 4 will occur somewhere. This can result in values with fractional parts .00, .25, .50 and .75. I've come across only one paper which addresses this issue; the rest are very vague in their methodology and I struggle to replicate them. Yet the DWT has been widely implemented for image steganography.
My question is: since some of the literature I've read hasn't been enlightening, how can this be possible? How can one use a transform which introduces float values when the whole steganography method requires integers?

One solution that has worked for me is using the Integer Wavelet Transform (IWT), which some also refer to as a lifting scheme. For the Haar wavelet, I've seen it defined as:
s = floor((A + B) / 2)
d = A - B
And for inverse:
A = s + floor((d + 1) / 2)
B = s - floor(d / 2)
All the values throughout the whole process are integers. It works because the formulas retain information about both the even and odd parts of the pixels/coefficients, so no information is lost to rounding down. For example, with A = 5 and B = 2, we get s = floor(7/2) = 3 and d = 3; flipping the LSB of d gives d = 2, and the inverse yields A = 3 + floor(3/2) = 4 and B = 3 - floor(2/2) = 2, still integers. Even if one modifies the coefficients and then takes the inverse transform, the reconstructed pixels will still be integers.
Example implementation in Python:
import numpy as np

def _iwt(array):
    # Forward 1D integer Haar transform along axis 0 (assumes an even number
    # of rows): floor-averages (approximation) go in the top half,
    # differences (detail) in the bottom half.
    output = np.zeros_like(array)
    nx, ny = array.shape
    x = nx // 2
    for j in range(ny):
        output[0:x, j] = (array[0::2, j] + array[1::2, j]) // 2
        output[x:nx, j] = array[0::2, j] - array[1::2, j]
    return output

def _iiwt(array):
    # Inverse of _iwt: A = s + floor((d + 1) / 2), B = A - d.
    output = np.zeros_like(array)
    nx, ny = array.shape
    x = nx // 2
    for j in range(ny):
        output[0::2, j] = array[0:x, j] + (array[x:nx, j] + 1) // 2
        output[1::2, j] = output[0::2, j] - array[x:nx, j]
    return output

def iwt2(array):
    # 2D transform: along the rows, then along the columns (via transposes).
    return _iwt(_iwt(array.astype(int)).T).T

def iiwt2(array):
    return _iiwt(_iiwt(array.astype(int).T).T)
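A quick round-trip check (my own sketch, assuming an even-sized grayscale image): modified coefficients stay recoverable and the stego pixels stay integers:
img = np.random.randint(0, 256, (8, 8))
coeffs = iwt2(img)
coeffs[4:, 4:] ^= 1                         # flip the LSBs of the HH band
stego = iiwt2(coeffs)                       # all integers
assert np.array_equal(iwt2(stego), coeffs)  # coefficients survive the round trip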
Some languages already have built-in functions for this purpose. For example, Matlab provides lwt2() and ilwt2() for the 2D lifting-scheme wavelet transform:
els = {'p',[-0.125 0.125],0};
lshaarInt = liftwave('haar','int2int');
lsnewInt = addlift(lshaarInt,els);
[cAint,cHint,cVint,cDint] = lwt2(x,lsnewInt); % x is your image
xRecInt = ilwt2(cAint,cHint,cVint,cDint,lsnewInt);
An article example where the IWT was used for image steganography is Raja, K.B. et al. (2008), "Robust image adaptive steganography using integer wavelets".

Related

Projecting a vector in a given plane using numpy

Using numpy, how can I do an orthogonal projection of, for example, the vector np.array([0.3,0.5,0.2]) onto the plane 3x+2y-2z=0?
EDIT:
I think one may simply use numpy.linalg.lstsq to find the orthogonal projection?
Your hyperplane is defined by the set of x such that <a,x>=0, where a is a vector orthogonal to the plane. In your example,
a = (3,2,-2).
The projection of a point p onto the hyperplane is the point p_proj such that p - p_proj is orthogonal to the plane. This means that it is parallel to a, or in other words p - p_proj = lambda*a. So
p_proj = p - lambda*a. (1)
Since p_proj is in the hyperplane, <p_proj,a> = 0, so taking the inner product of (1) with a gives
lambda = <p,a>/<a,a>.
Substituting back into (1), you get
Projection(p) = p_proj = p - (<p,a>/<a,a>)*a
which can be done easily in numpy using np.dot(v_1,v_2) wherever we encounter <v_1,v_2>:
import numpy as np

def projection(p, a):
    # lambda is a reserved word in Python, hence lambda_val
    lambda_val = np.dot(p, a) / np.dot(a, a)
    return p - lambda_val * a
(Note that this is essentially a single Gram-Schmidt orthogonalization step.)
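Applied to the vector from the question, as a quick check of my own:
p = np.array([0.3, 0.5, 0.2])
a = np.array([3.0, 2.0, -2.0])
p_proj = projection(p, a)
print(p_proj)             # ~[0.0353, 0.3235, 0.3765]
print(np.dot(p_proj, a))  # ~0, i.e. the result lies in the plane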

Projection of fisheye camera model by Scaramuzza

I am trying to understand the fisheye model by Scaramuzza, which is implemented in Matlab, see https://de.mathworks.com/help/vision/ug/fisheye-calibration-basics.html#mw_8aca38cc-44de-4a26-a5bc-10fb312ae3c5
The backprojection (uv to xyz) seems fairly straightforward according to the following equation:
[x, y, z] = lambda * [u, v, a0 + a2*rho^2 + a3*rho^3 + a4*rho^4], where rho = sqrt(u^2 + v^2)
However, how does the projection (from xyz to uv) work? In my understanding, we get a rather complex set of equations. Unfortunately, I can't find any details on that.
Okay, I believe I now fully understand it after analyzing the functions of the (Windows) calibration toolbox by Scaramuzza, see https://sites.google.com/site/scarabotix/ocamcalib-toolbox/ocamcalib-toolbox-download-page
Method 1 found in file "world2cam.m"
For the projection, use the same equation as above. In the projection case, the equation has three known variables (x, y, z) and three unknown variables (u, v and lambda). We first substitute lambda with rho by realizing that
u = x/lambda
v = y/lambda
rho=sqrt(u^2+v^2) = 1/lambda * sqrt(x^2+y^2) --> lambda = sqrt(x^2+y^2) / rho
After that, we have the unknown variables (u,v and rho)
u = x/lambda = x / sqrt(x^2+y^2) * rho
v = y/lambda = y / sqrt(x^2+y^2) * rho
z / lambda = z /sqrt(x^2+y^2) * rho = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4
As you can see, the last equation has only one unknown, namely rho. Thus, we can solve it easily using e.g. the roots function in Matlab. However, a solution does not always exist, nor is it necessarily unique. After solving for rho, calculating u and v is very simple using the equations above.
This procedure needs to be performed separately for each point (x, y, z) and is thus rather computationally expensive for a whole image.
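For illustration, a minimal Python sketch of Method 1 (my own translation, not the toolbox code; a0, a2, a3, a4 stand for the calibration polynomial coefficients, and picking the smallest positive real root is an assumption on my part):
import numpy as np

def solve_rho(m, a0, a2, a3, a4):
    # Solve a4*rho^4 + a3*rho^3 + a2*rho^2 - m*rho + a0 = 0,
    # where m = z / sqrt(x^2 + y^2).
    roots = np.roots([a4, a3, a2, -m, a0])  # highest power first
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return min(real) if real else None      # a valid solution may not exist

def project_point(x, y, z, a0, a2, a3, a4):
    r = np.hypot(x, y)
    rho = solve_rho(z / r, a0, a2, a3, a4)
    if rho is None:
        return None
    return x / r * rho, y / r * rho         # (u, v)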
Method 2 found in file "world2cam_fast.m"
The last equation has the form rho(x,y,z). However, if we define m = z / sqrt(x^2+y^2) = tan(90°-theta), the solution depends on only one variable, i.e., rho(m).
Instead of solving this equation rho(m) for every new m, the authors "plot" the function for several values of m and fit an 8th-order polynomial to these points. Using this polynomial, they can then compute an approximate value of rho(m) much more quickly.
This becomes clear, because "world2cam_fast.m" makes use of ocam_model.pol, which is calculated in "undistort.m". "undistort.m" in turn makes use of "findinvpoly.m".
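A rough sketch of that precomputation, reusing solve_rho from the sketch above (my reading of the approach, with an arbitrary sampling range; findinvpoly.m actually works with the angle theta rather than m):
ms = np.linspace(-10, 10, 1000)  # sample the working range of m
rhos = []
for m in ms:
    rho = solve_rho(m, a0, a2, a3, a4)
    rhos.append(rho if rho is not None else np.nan)
rhos = np.array(rhos)
ok = ~np.isnan(rhos)
invpol = np.polyfit(ms[ok], rhos[ok], 8)  # the 8th-order fit
# Afterwards, rho(m) is approximated cheaply by np.polyval(invpol, m).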

Coefficients of 2D Chebyshev series in numpy.polynomial.chebyshev

I understand that chebvander2d and chebval2d return the Vandermonde matrix and fitted values for 2D inputs, and chebfit returns the coefficients for 1D-input series, but how do I get the coefficients for 2D-input series?
Short answer: it looks to me like this is not yet implemented. The whole 2D-polynomial part seems more like a draft with some stub functions (as of June 2020).
Long answer (I came looking for the same thing, so I dug a little deeper):
First of all, this applies to all of the polynomial classes, not only Chebyshev, so you also cannot fit an "ordinary" polynomial (power series). In fact, you cannot even construct one.
To understand the programming problem, let me recap what a 2D polynomial looks like as a math formula, using an example polynomial of degree 2:
p(x, y) = c_00 + c_10 x + c_01 y + c_20 x^2 + c_11 xy + c_02 y^2
here the indices of c refer to the powers of x and y (the sum of the exponents must be <= the degree).
First thing to notice is that, for degree d, there are (d+1)(d+2)/2 coefficients.
They could be stored in the upper-left triangle of a matrix or in a 1D array, e.g. arranged as in the formula above.
The documentation of functions like numpy.polynomial.polynomial.polyval2d implies that numpy expects the matrix variant: p(x, y) = sum_i,j c_i,j * x^i * y^j.
Side note: it may be confusing that the row index i (the "y coordinate" of the matrix) is used as the exponent of x, not y; maybe the roles of i and j should be switched if this is eventually implemented, or at least there should be a note in the documentation.
This leads to the core problem: the data structure for the 2D coefficients is not defined anywhere; only indirectly, like above, it can be guessed that a matrix should be used. But compared to a 1D array this is a waste of space, and evaluation of the polynomial takes two nested loops instead of just one. Also: does the matrix have to be initialized with np.zeros or do the implemented functions make sure that the lower right part is never touched so that np.empty can be used?
If the whole (d+1)^2 matrix were used, as the polyval2d function doc suggests, the degree of the polynomial would actually be 2d (if c_d,d != 0).
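Evaluation with the matrix layout does work, though. A quick check of my own against a degree-2 example:
import numpy as np
from numpy.polynomial import polynomial as P

# p(x, y) = 1 + 2x + 3y + 4x^2 + 5xy + 6y^2, stored as c[i, j] * x^i * y^j
# with the lower-right triangle zeroed out:
c = np.array([[1., 3., 6.],
              [2., 5., 0.],
              [4., 0., 0.]])
print(P.polyval2d(2.0, 3.0, c))  # 114.0 = 1 + 4 + 9 + 16 + 30 + 54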
To test this, I wanted to construct a numpy.polynomial.polynomial.Polynomial (yes, three times polynomial) and check its degree:
import numpy as np
import numpy.polynomial.polynomial as poly

coef = np.array([
    [5.00, 5.01, 5.02],
    [5.10, 5.11, 0.  ],
    [5.20, 0.  , 0.  ],
])
polyObj = poly.Polynomial(coef)  # raises ValueError
print(polyObj.degree())
This gave a ValueError: Coefficient array is not 1-d before the print statement was reached. So while polyval2d expects a 2D coefficient array, it is not (yet) possible to construct such a polynomial, at least not manually like this. With this insight, it is not surprising that there is no function (yet) that computes a fit for 2D polynomials.

Zoom in on np.fft2 result

Is there a way to choose the x/y output axis range from np.fft.fft2?
I have a piece of code computing the diffraction pattern of an aperture. The aperture is defined on a 2k x 2k pixel array. The diffraction pattern is basically the inner part of the 2D FT of the aperture. np.fft.fft2 gives me an output array the same size as the input, but with some preset range of the x/y axes. Of course I can zoom in using the image viewer, but I have already lost detail. What is the solution?
Thanks,
Gert
import numpy as np
import matplotlib.pyplot as plt
r= 500
s= 1000
y,x = np.ogrid[-s:s+1, -s:s+1]
mask = x*x + y*y <= r*r
aperture = np.ones((2*s+1, 2*s+1))
aperture[mask] = 0
plt.imshow(aperture)
plt.show()
ffta= np.fft.fft2(aperture)
plt.imshow(np.log(np.abs(np.fft.fftshift(ffta))**2))
plt.show()
Unfortunately, much of the speed and accuracy of the FFT come from the outputs being the same size as the input.
The conventional way to increase the apparent resolution in the output Fourier domain is by zero-padding the input: np.fft.fft2(aperture, [4 * (2*s+1), 4 * (2*s+1)]) tells the FFT to pad your input to be 4 * (2*s+1) pixels tall and wide, i.e., make the input four times larger (sixteen times the number of pixels).
Begin aside I say "apparent" resolution because the actual amount of data you have hasn't increased, but the Fourier transform will appear smoother because zero-padding in the input domain causes the Fourier transform to interpolate the output. In the example above, any feature that could be seen with one pixel will be shown with four pixels. Just to make this fully concrete, this example shows that every fourth pixel of the zero-padded FFT is numerically the same as every pixel of the original unpadded FFT:
# Generate your `ffta` as above, then
N = 2 * s + 1
Up = 4
fftup = np.fft.fft2(aperture, [Up * N, Up * N])
relerr = lambda dirt, gold: np.abs((dirt - gold) / gold)
print(np.max(relerr(fftup[::Up, ::Up] , ffta))) # ~6e-12.
(That relerr is just a simple relative error, which you want to be close to machine precision, around 2e-16. The largest error between every 4th sample of the zero-padded FFT and the unpadded FFT is 6e-12 which is quite close to machine precision, meaning these two arrays are nearly numerically equivalent.) End aside
Zero-padding is the most straightforward way around your problem. But it does cost you a lot of memory. And it is frustrating because you might only care about a tiny, tiny part of the transform. There's an algorithm called the chirp z-transform (CZT, or colloquially the "zoom FFT") which can do this. If your input is of size N (for you, 2*s+1) and you want just M samples of the FFT's output evaluated anywhere, it will compute three Fourier transforms of size N + M - 1 to obtain the desired M samples of the output. This would solve your problem too, since you can ask for M samples in the region of interest, and it wouldn't require prohibitively much memory, though it would need at least 3x more CPU time. The downside is that a solid implementation of CZT isn't in Numpy/Scipy yet: see the scipy issue and the code it references. Matlab's CZT seems reliable, if that's an option; Octave-Forge has one too, and the Octave people usually try hard to match/exceed Matlab.
But if you have the memory, zero-padding the input is the way to go.
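As a concrete sketch of the padding route (the window size is an arbitrary choice of mine), pad, shift, and crop the central region of interest:
N = 2 * s + 1  # original size, with s and aperture as defined above
Up = 4         # upsampling factor
win = 256      # half-width of the region of interest (arbitrary)
fftup = np.fft.fftshift(np.fft.fft2(aperture, [Up * N, Up * N]))
center = (Up * N) // 2
zoom = fftup[center - win:center + win, center - win:center + win]
# plt.imshow(np.log(np.abs(zoom)**2)) now shows only the zoomed-in center.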

np.fft.fft off by a factor of 1000 (fitting a powerspectrum)

I'm trying to make a power spectrum from an experimental dataset which I am reading in, and then to fit it to a theoretical curve. Everything is working fine and I'm not getting errors, except for the fact that my curve keeps differing from the data by a factor of 1000, and I have absolutely no idea what the problem could be. I've asked a few people, but to no avail. (I hope that you guys will be able to help.)
Anyway, I'm pretty sure that it's not the units, as they were triple-checked by me and two others. Basically, I need to fit a power spectrum to an equation using the least squares method.
I can't post the whole code, as it's rather long and a bit messy, but this is the Fourier part. (I added comments for all arrays and variables which are not declared in the snippet.)
# Calculate stuff
Nm = 10**-6     # micro to meter
KbT = 4.10E-21  # Joule
T = 297.        # K
l = zvalue * Nm  # meter (zvalue is declared earlier)
meany = np.mean(cleandatay * Nm)  # meter (cleandatay is the array read in from a csv at the start)
SDy = sum((cleandatay * Nm - meany)**2) / len(cleandatay)  # meter^2
FmArray[0][i] = (KbT * l) / SDy  # N
#print FmArray[0][i]
print(float(i * 100 / len(filelist)))  # how many % done?
# fourier
dt = cleant[1] - cleant[0]  # timestep
N = len(cleandatay)  # same for cleant, the corresponding times to cleandatay
Here is where the Fourier part starts: I take the FFT and turn it into a power spectrum. Then I calculate the corresponding frequency steps with the array freqs.
fouriery = np.fft.fft(cleandatay * (10**-6))
fourierpower = (np.abs(fouriery))**2
fourierpower = fourierpower[1:N//2]  # remove the 0th datapoint and the negative freqs
fourierpower = fourierpower * dt     # *dt to account for the timestep
freqs = (1. + np.arange((N // 2) - 1.)) / 50.
# Least squares method
eta = 8.9E-4    # Pa*s
Rbead = 0.5E-6  # meter
constant = 2 * KbT / (3 * eta * pi * Rbead)
omega = 2 * pi * freqs  # rad/s
Wcarray = 2. * pi * np.arange(0, 30, 0.02003)  # 0.02 = 30/len(freqs)
ChiSq = np.zeros(len(Wcarray))
for k in range(0, len(Wcarray)):
    Py = constant / (Wcarray[k]**2 + omega**2)
    ChiSq[k] = sum((fourierpower - Py)**2)
    pylab.loglog(omega, Py)
    print(k * 100 / len(Wcarray))  # progress in %
index = np.where(ChiSq == min(ChiSq))
cutoffw = Wcarray[index]
Pygoed = constant / (Wcarray[index]**2 + omega**2)
print(cutoffw)
print(constant)
print(min(ChiSq))
pylab.loglog(omega, ChiSq)
So I have no idea what could be going wrong; I think it's the FFT, as nothing else can really go wrong.
Below is the picture I get when I plot all the fit lines against the spectrum. As you can see, it is off by about 1000 (actually exactly 1000, as this leaves a least-squares residue of 10^-22, but I can't just randomly multiply without knowing why).
Just to elaborate on the picture: the green dots are the FFT spectrum, the lines are the fits, the red dot is where it thinks the cutoff frequency is, and the blue line is the chi-squared fit, looking for the lowest value.
Take a look at the documentation for the FFT that you are using. Many FFT implementations introduce a scaling factor, usually N (the number of samples); multiplying the result by 1/N scales it back in line. (You said that the result is 1000 too high... could it be that you are using a 1024-point FFT?)
Your library FFT routine might include a scale factor of 1/sqrt(n).
Check the documentation for the FFT you used, as the proportion of the scale factor allocated between the FFT and the IFFT is arbitrary.
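For numpy specifically, the convention is easy to check: the forward transform is unnormalized and the 1/N factor sits in the inverse. A quick sanity check of my own:
import numpy as np

N = 1024
x = np.ones(N)
X = np.fft.fft(x)
print(X[0].real)               # 1024.0 -> the forward FFT carries a factor of N
print(np.fft.ifft(X)[0].real)  # 1.0    -> the 1/N normalization lives in the inverse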