Size of object in Blender not correct

Why is the length 1221.21' and not 192' as expected? I am using Ubuntu 21.10 Linux and Blender 3.0, and I am scripting with Python.
As can be seen from the Scene Properties on the right, the units are set to 'Imperial', not metric, so they should be in inches and feet. I am also using the orthographic view, not perspective.
import bpy
tall = 7
inchesinfeet = 12
#horizontal pieces
length = 192 #x
width = 4 #y
height = 2 #z
#vertical pieces
thick = 2 #x
wide = 2 #y
ceiling = tall*inchesinfeet #z
oncenter = 18
# bottom
bpy.ops.mesh.primitive_cube_add(size=2, enter_editmode=False, align='WORLD', location=(length/2, 0, 0), scale=(length, 1, 1))
# sides left first
print("length = " + str(length))
'''
# unfinished fragments (syntax incomplete, kept commented out):
bpy.ops.mesh.primitive_cube_add(enter_editmode=False, align='WORLD', location=(width, depth,) (tall*inchesinfeet+2*height)), scale=(width, depth, ))
bpy.ops.mesh.primitive_cube_add(enter_editmode=False, align='WORLD', location=(2 + 1*oncenter, 4, 98), scale=(depth,
'''
And the terminal output is as expected, too.

I tried your script and I see the size of the cube as below. I think it is as expected, given scale=(length, 1, 1) and size=2 in the script.
To see the size of an object, press 'N' and check the Dimensions panel, as in this screenshot (bottom right: we can see 384, 2, 2, i.e. the size of 2 multiplied by the scale of 192, 1, 1):

Why is the length 1221.21' and not 192' as expected?
Because the operator bpy.ops.mesh.primitive_cube_add does not support imperial units: it creates your object using the 'None' unit system, which in Blender is the same as meters.
Remember the Blender notation in the interface, e.g. for 1.02:
1.02 is None
1.02m is Meter
1.02' is Imperial
Could you provide 1.02' to the operator? No, the operator does not support this notation.
REF: Blender Mesh Operator: bpy.ops.mesh.primitive_cube_add
Programmatically, whatever unit system is in use, you have to provide the measurements (size, scale, ...) of your object in 'None' units.
So, you have to convert from feet to 'None'.
If you want a cube of 192':
1' = 0.3048m
Convert 192' to 'None' (i.e. meters): 192 × 0.3048 = 58.5216
Create your object using 58.5216 as the parameter (size, scale, whatever).
That's all.
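A minimal sketch of those steps, applied to the cube from the question (note that size=2 in the original call doubles every scale value, so size=1 is used here to keep the scale equal to the final dimensions):

import bpy

FEET_TO_METERS = 0.3048                    # 1' = 0.3048 m

length_ft = 192
length_none = length_ft * FEET_TO_METERS   # 58.5216 in 'None'/meter units

# With size=1 the scale values are the final dimensions in 'None' units,
# displayed as 192' long under the Imperial unit system.
bpy.ops.mesh.primitive_cube_add(size=1, enter_editmode=False, align='WORLD',
                                location=(length_none / 2, 0, 0),
                                scale=(length_none, 1, 1))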
Some explanation:
If you create a cube 'by hand', i.e. through the interface, you see that the Size parameter of your object is suffixed with the unit notation of the system currently in use (e.g. None = "", Meter = m, Imperial = ').
Through the interface, you can use the unit notation (m or ') according to your needs, and whatever unit system is in use, you can enter different unit notations to specify the object size, even a mix of them (with some limitations).
So, you can enter '1.08m' in the size field even if you use the Imperial unit system; Blender will convert it automatically.
When you use the bpy operator, you cannot specify the unit notation the way you can through the interface.
So, the default 'None' (or meter) is used.
The 'Unit settings' are a way to:
display the same object size using different unit scales
use a default unit system as a parameter through the interface.
But they are NOT a way to compute using a default unit, because the operator does not support unit notation, and all of the vertex vectors are in 'None'/meter units. To display what is behind the scenes on a modified default cube:
import bpy

print("Unit System In Use: " + bpy.context.scene.unit_settings.system)
for item in bpy.data.objects:
    print(item.name)
    if item.type == 'MESH':
        # vertex coordinates are always stored in 'None'/meter units
        for vertex in item.data.vertices:
            print(vertex.co)
could output something like:
Unit System In Use: IMPERIAL
Camera
Cube
<Vector (3.3311, 1.3453, 1.0000)>
<Vector (1.0000, 1.0000, -1.0000)>
<Vector (1.0000, -1.0000, 1.0000)>
<Vector (1.0000, -1.0000, -1.0000)>
<Vector (-1.0000, 1.0000, 1.0000)>
<Vector (-1.0000, 1.0000, -1.0000)>
<Vector (-1.0000, -1.0000, 1.0000)>
<Vector (-1.0000, -1.0000, -1.0000)>
The first vector displays the vertex coordinate, which is located at:
10.9287ft, 4.41385ft, 3.28084ft
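Those feet values are simply the stored meter coordinates divided by 0.3048; for example (a quick check, not part of the original output):

# Imperial display value = stored 'None'/meter coordinate / 0.3048
print([c / 0.3048 for c in (3.3311, 1.3453, 1.0)])   # ~[10.93, 4.41, 3.28] ft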


Matplotlib draws values that equal to zero in the array as non zero values

I draw a map of a parameter with matplotlib and cartopy, using the cartopy Mercator CRS (I also tried AlbersEqualArea; the result is the same).
The array (2D, since it's a mesh) has some float values like 1.1 to 5.65 etc., but the main part is 0.000000000000000000e+00 values.
So, for
levels = [0.1, 0.3, 0.5, 1, 1.5, 2, 3, 5, 7, 10, 20, 30, 40, 50]
cntr = ax.contourf(lons, lats, array, levels=levels, cmap=cmap, norm=norm, transform=ccrs.PlateCarree(), extend=ext, zorder=1)
I get a map where all of the float zero values are drawn in blue, which according to the scale is the 0.3 value (this doesn't depend on the values given in levels).
Three things help to generate the map normally:
Changing the CRS: PlateCarree is OK, Mercator is not, and Albers is also not.
Changing the maximum northern latitude (76 works for Mercator, 78 doesn't), but for Albers this doesn't help.
Adding array = np.where(array < 0.3, float(0), array), which works for all of them (shown in context below).
It also can't be fixed by changing the extend or norm parameters (I tried all variants).
The question is: what sort of bug is this, where is the problem, and how can it be fixed completely?
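For reference, a minimal self-contained sketch of the third workaround in context, with synthetic stand-in data (the real lons, lats and array come from the original script):

import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# synthetic stand-in grid: mostly exact zeros, one patch of real values
lons, lats = np.meshgrid(np.linspace(20, 60, 100), np.linspace(40, 75, 80))
array = np.zeros_like(lons)
array[30:50, 40:60] = 3.0

levels = [0.1, 0.3, 0.5, 1, 1.5, 2, 3, 5, 7, 10, 20, 30, 40, 50]

# workaround 3: clamp everything below the first contour level to exact zero
array = np.where(array < 0.3, 0.0, array)

ax = plt.axes(projection=ccrs.Mercator())
cntr = ax.contourf(lons, lats, array, levels=levels,
                   transform=ccrs.PlateCarree(), extend='max', zorder=1)
ax.coastlines()
plt.show()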

Projecting a vector in a given plane using numpy

Using numpy, how can I do an orthogonal projection of, for example, the vector np.array([0.3,0.5,0.2]) onto the plane 3x+2y-2z=0?
EDIT:
I think one may simply use numpy.linalg.lstsq to find the orthogonal projection?
Your hyperplane is defined by the set of x such that <a,x> = 0, where a is a vector orthogonal to the plane. In your example,
a = (3, 2, -2).
The projection of a point p onto the hyperplane is the point p_proj such that p - p_proj is orthogonal to the plane. This means that it is parallel to a, or in other words p - p_proj = lambda*a. So
p_proj = p - lambda*a. (1)
Since p_proj is in the hyperplane, <p_proj, a> = 0, so taking the inner product with a in equality (1) gives
lambda = <p, a>/<a, a>.
Substituting back into (1), you get
Projection(p) = p_proj = p - (<p, a>/<a, a>)*a,
which can be done easily in numpy, using np.dot(v_1, v_2) wherever we encounter <v_1, v_2>:

import numpy as np

def projection(p, a):
    # lambda = <p, a> / <a, a>
    lambda_val = np.dot(p, a) / np.dot(a, a)
    return p - lambda_val * a
(Note that this is essentially one step of Gram-Schmidt orthogonalization.)
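Applied to the vector from the question, using the projection function above (here lambda = <p,a>/<a,a> = 1.5/17):

p = np.array([0.3, 0.5, 0.2])
a = np.array([3.0, 2.0, -2.0])   # normal of the plane 3x + 2y - 2z = 0
p_proj = projection(p, a)
print(p_proj)                    # ~[0.0353, 0.3235, 0.3765]
print(np.dot(p_proj, a))         # ~0, so p_proj lies in the plane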

How to fill a line in 2D image along a given radius with the data in a given line image?

I want to fill a 2D image along its polar radius; the data are stored in an image where each row or column corresponds to the radius in the target image. How can I fill the target image efficiently, for example with iradius or some similar function? I would rather avoid a pixel-by-pixel operation.
Are you looking for something like this?
number maxR = 100                                   // number of radius samples
image rValues := realimage("I(r)", 4, maxR)         // 1D radial profile I(r)
rValues = 10 + trunc(100 * random())                // fill profile with test data
image plot := realimage("Ring", 4, 2*maxR, 2*maxR)  // 2D target image
rValues.ShowImage()
plot.ShowImage()
plot = rValues.warp(iradius, 0)                     // look up I(r) at each pixel's radius
You might also want to check out the relevant example code in the F1 help documentation of GMS itself.
Explaining warp a bit:
plot = rValues.warp(iradius,0)
Assigns values to plot based on a value lookup in rValues.
For each pixel in plot, a coordinate position in rValues is computed and the value is simply looked up. If the computed coordinate is non-integer, bilinear interpolation between the 4 closest points is used.
In the example, the two 'formulas' for the coordinate calculation are simply x' = iradius and y' = 0, where iradius is an expression computed from the coordinate in plot, for convenience.
You can feed any expression into the parameters of warp(), and the command is closely related to just using the square-bracket notation for addressing values. In fact, the only difference is that warp performs bilinear interpolation of the values instead of truncating the coordinates to integer values.
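For readers without GMS, a rough NumPy sketch of the same lookup idea (an analogue of the warp call above, not the DM-script command itself): each output pixel's distance from the image centre is used as a fractional index into the 1D profile, with linear interpolation between the two nearest entries.

import numpy as np

max_r = 100
r_values = 10 + np.trunc(100 * np.random.rand(max_r))   # test profile I(r)

size = 2 * max_r
y, x = np.indices((size, size))
radius = np.hypot(x - size / 2, y - size / 2)           # analogue of iradius

# fractional lookup with linear interpolation, clamped at the profile edge
r0 = np.clip(np.floor(radius).astype(int), 0, max_r - 1)
r1 = np.clip(r0 + 1, 0, max_r - 1)
frac = radius - np.floor(radius)
ring = (1 - frac) * r_values[r0] + frac * r_values[r1]  # the 2D ring image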

What is the right way to resize using NVIDIA NPP to exact destination dimensions?

I'm trying to use NVIDIA NPP to experiment with some image resizing routines. I want to resize to exact dimensions, but all of NPP's resize functions take scale factors for the X and Y dimensions, and I could not see any API that takes the destination dimensions directly.
As an example, this is one API:
NppStatus nppiResizeSqrPixel_8u_C1R(const Npp8u * pSrc, NppiSize oSrcSize, int nSrcStep, NppiRect oSrcROI, Npp8u * pDst, int nDstStep, NppiRect oDstROI, double nXFactor, double nYFactor, double nXShift, double nYShift, int eInterpolation);
I realize one way could be to find the appropriate scale factor for the destination dimension, but we don't know exactly how the API decides the destination ROI based on the scale factor (since it is floating-point math). We could reverse the calculation in the jpegNPP sample to find the scale factor, but the API itself does not make any guarantees, so I'm not sure how safe that is. Any ideas what the possibilities are?
As a side question, the API also takes the two params nXShift and nYShift, but the documentation just says "Source pixel shift in x-direction". I'm not exactly clear on what shift is being talked about here. Do you have an idea?
If I wanted to map the whole SRC image to the smaller rectangle in the DST image as shown in the image below, I would use xFactor = yFactor = 0.5, xShift = 0.5*DST.width and yShift = 0.
Mapping src to half size destination image
In other words, the pixel at (x,y) in the SRC is mapped to the pixel (x',y') in the DST as
x' = xFactor * x + xShift
y' = yFactor * y + yShift
In this case, both the source and dest ROI could be the entire support of the respective images.
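Following that mapping, a small sketch of how one might derive parameters for an exact destination size (Python is used here just for the arithmetic; the actual call stays in C/CUDA, and this assumes the full source ROI is mapped onto the full destination ROI):

# x' = xFactor * x + xShift and y' = yFactor * y + yShift (see above),
# so mapping the full source onto the full destination with no offset gives:
def npp_resize_params(src_w, src_h, dst_w, dst_h):
    x_factor = dst_w / src_w              # nXFactor
    y_factor = dst_h / src_h              # nYFactor
    return x_factor, y_factor, 0.0, 0.0   # nXShift = nYShift = 0

print(npp_resize_params(1920, 1080, 640, 480))   # (0.333..., 0.444..., 0.0, 0.0)

Whether rounding inside the API can still drop or add a border pixel is not guaranteed by the documentation, as noted in the question.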

How can DWT be used in LSB substitution steganography

In steganography, the least significant bit (LSB) substitution method embeds the secret bits in place of bits from the cover medium, for example, image pixels. In some methods, the Discrete Wavelet Transform (DWT) of the image is taken and the secret bits are embedded in the DWT coefficients, after which the inverse transform is used to reconstruct the stego image.
However, the DWT produces float coefficients, and for the LSB substitution method integer values are required. Most papers I've read use the 2D Haar wavelet, yet they aren't clear on their methodology. I've seen the transform defined in terms of low-pass and high-pass filters (float transforms), or as the sum and difference of pairs of values, or the average and mean difference, etc.
More explicitly, float numbers will eventually appear either in the forward or the inverse transform (though not necessarily in both, depending on the formulas used). I can't have them in the coefficients because the substitution won't work, and I can't have them in the reconstructed pixels because the image requires integer values for storage.
For example, let's consider a pair of pixels, A and B as a 1D array. The low frequency coefficient is defined by the sum, i.e., s = A + B, and the high frequency coefficient by the difference, i.e., d = A - B. We can then reconstruct the original pixels with B = (s - d) / 2 and A = s - B. However, after any bit twiddling with the coefficients, s - d may not be even anymore and float values will emerge for the reconstructed pixels.
For the 2D case, the 1D transform is applied separately for the rows and the columns, so eventually a division by 4 will occur somewhere. This can result in values with float remainders .00, .25, .50 and .75. I've only come across one paper which addresses this issue. The rest are very vague in their methodology and I struggle to replicate them. Yet, the DWT has been widely implemented for image steganography.
My question is, since some of the literature I've read hasn't been enlightening, how can this be possible? How can one use a transform which introduces float values, yet the whole steganography method requires integers?
One solution that has worked for me is using the Integer Wavelet Transform, which some also refer to as a lifting scheme. For the Haar wavelet, I've seen it defined as:
s = floor((A + B) / 2)
d = A - B
And for the inverse:
A = s + floor((d + 1) / 2)
B = s - floor(d / 2)
All the values throughout the whole process are integers. The reason it works is because the formulas contain information about both the even and odd parts of the pixels/coefficients, so there is no loss of information from rounding down. Even if one modifies the coefficients and then takes the inverse transform, the reconstructed pixels will still be integers.
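A quick worked example of why the round trip is exact, even when the sum is odd (a small check, not from the original formulas):

# forward lifting step with A = 7, B = 2
s = (7 + 2) // 2        # 4, i.e. floor((A + B) / 2)
d = 7 - 2               # 5, i.e. A - B

# inverse step recovers A and B exactly, despite the floored division
A = s + (d + 1) // 2    # 4 + 3 = 7
B = s - d // 2          # 4 - 2 = 2

The parity bit that the floor drops from s is carried by d (A + B and A - B always have the same parity), so nothing is lost; and because the inverse maps any integer pair (s, d) back to integers, the reconstruction stays integer even after the coefficients' LSBs are modified.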
Example implementation in Python:
import numpy as np

def _iwt(array):
    # 1D integer Haar lifting along axis 0, column by column
    output = np.zeros_like(array)
    nx, ny = array.shape
    x = nx // 2
    for j in range(ny):
        output[0:x, j] = (array[0::2, j] + array[1::2, j]) // 2  # s
        output[x:nx, j] = array[0::2, j] - array[1::2, j]        # d
    return output

def _iiwt(array):
    # inverse of _iwt: recover the even/odd samples from (s, d)
    output = np.zeros_like(array)
    nx, ny = array.shape
    x = nx // 2
    for j in range(ny):
        output[0::2, j] = array[0:x, j] + (array[x:nx, j] + 1) // 2  # A
        output[1::2, j] = output[0::2, j] - array[x:nx, j]           # B = A - d
    return output

def iwt2(array):
    # 2D transform: apply the 1D lifting along both axes (even dimensions required)
    return _iwt(_iwt(array.astype(int)).T).T

def iiwt2(array):
    return _iiwt(_iiwt(array.astype(int).T).T)
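A quick round-trip check (any integer image with even dimensions reconstructs exactly):

img = np.arange(16).reshape(4, 4)
coeffs = iwt2(img)
assert np.array_equal(iiwt2(coeffs), img)   # lossless round trip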
Some languages already have built-in functions for this purpose. For example, Matlab has lwt2() and ilwt2() for the 2D lifting-scheme wavelet transform:
els = {'p', [-0.125 0.125], 0};            % extra primal lifting step
lshaarInt = liftwave('haar', 'int2int');   % integer-to-integer Haar lifting
lsnewInt = addlift(lshaarInt, els);
[cAint, cHint, cVint, cDint] = lwt2(x, lsnewInt);   % x is your image
xRecInt = ilwt2(cAint, cHint, cVint, cDint, lsnewInt);
An example of an article where IWT was used for image steganography is Raja, K.B. et al. (2008), 'Robust image adaptive steganography using integer wavelets'.