Scan small areas at maximum resolution (up to 6400 dpi)

Problem statement: I want to scan an image at the maximum scanner resolution (6400 dpi on an Epson V850). This is partly possible from the Epson software's "Professional Mode", provided that the scan area is limited to 21000 x 30000 pixels.
I'm OK with this limitation; I could simply scan small squares of the full area (at max resolution), then "stitch" them together afterwards.
I want to automate this, so I'm attempting to use pyinsane / SANE.
The issue is: the maximum resolution I can set is 1200, as you can see from the properties reported by pyinsane:
dps_optical_xres=6400 ([])
dps_optical_yres=6400 ([])
resolution=300 ([50, 75, 100, 125, 150, 175, 200, 225, 250, 275, 300, 325, 350, 375, 400, 425, 450, 475, 500, 525, 550, 575, 600, 625, 650, 675, 700, 725, 750, 775, 800, 825, 850, 875, 900, 925, 950, 975, 1000, 1025, 1050, 1075, 1100, 1125, 1150, 1175, 1200])
xres=300 ([50, 1200, 1])
yres=300 ([50, 1200, 1])
optical_xres=6400 ([])
optical_yres=6400 ([])
So the question is: how do I override this setting so I am able to scan small areas at 6400dpi?
Again, using the EPSON Scan software I can scan at 6400dpi, provided the scanned area is small.
I know the limit exists for memory reasons, but it doesn't feel right that I can't adjust the scan area and resolution together, just like the Epson software allows.
The problems with using the Epson software are that A) I can't automate the process, and B) I can't select an arbitrary scan area in terms of top-left to bottom-right coordinates.
I'm surprised that there is no definitive answer on this yet. Let's try to have one once and for all, for posterity!
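The tiling arithmetic behind the "scan small squares, then stitch" plan is straightforward. A minimal sketch (only the 6400 dpi and the 21000 x 30000 pixel cap come from the question; the page size and overlap margin are hypothetical):

```python
import math

# Per-scan limits reported by the Epson software: 21000 x 30000 px at 6400 dpi.
DPI = 6400
MAX_W_PX, MAX_H_PX = 21000, 30000

def tile_grid(width_in, height_in, overlap_in=0.1):
    """Split a width_in x height_in (inches) area into overlapping tiles
    that each stay under the per-scan pixel cap. Returns (cols, rows,
    tile_width_in, tile_height_in). The overlap margin helps stitching."""
    tile_w = MAX_W_PX / DPI            # ~3.28 in per tile horizontally
    tile_h = MAX_H_PX / DPI            # ~4.69 in per tile vertically
    cols = math.ceil((width_in - overlap_in) / (tile_w - overlap_in))
    rows = math.ceil((height_in - overlap_in) / (tile_h - overlap_in))
    return cols, rows, tile_w, tile_h

# E.g. an A4-ish 8.27 x 11.69 in area fits in a 3 x 3 grid of tiles:
print(tile_grid(8.27, 11.69))
```

Each (col, row) cell then gives the top-left/bottom-right coordinates to request for one scan pass.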

First of all, be aware that Pyinsane2 is not maintained anymore. Its replacement is Libinsane. (I'm the author of both.)
The maximum of 1200 dpi comes from the constraints on xres and yres: xres=300 ([50, 1200, 1]) and yres=300 ([50, 1200, 1]) (resolution is just an alias for those two options, created by Pyinsane2).
Based on what you say, my guess is that you can get this constraint to move to higher values by first setting the scan area to a smaller one (see tl-x, tl-y, br-x, br-y). However, after that, I don't think Pyinsane2 will reload the constraint on resolution correctly, so the maximum will remain 1200 dpi (whereas Libinsane should reload it correctly).
By the way, just to be pedantic: if you have options like dps_optical_xres or optical_xres, you are not using Pyinsane2 on top of SANE (Linux), but Pyinsane2 on top of WIA2 (Windows).

For Linux there is ImageScan v3, which has a command-line interface.
I didn't try ImageScan v3, but scanimage (SANE) on Ubuntu worked without problems at 3200 dpi.

React Native Mi Scale Weight Data

I am trying to get weight data from a Mi Scale V2. I am getting service data like this: "serviceData": {"0000181b-0000-1000-8000-00805f9b34fb": "BiTlBwcZFgsYAAAmAg=="} (5.15 kg). I decoded the base64 string to an array like this: [66, 105, 84, 108, 66, 119, 99, 90, 70, 103, 115, 89, 65, 65, 65, 109, 65, 103, 61, 61], but I cannot retrieve the correct result. How can I get the weight data?
The UUID 0000181b-0000-1000-8000-00805f9b34fb belongs to the pre-defined Body Composition Service (BCS). You can download the specification from here.
It should have the two characteristics Body Composition Feature and Body Composition Measurement.
The Features characteristic shows you the features supported by your scale, the measurement characteristic returns the actual measurement.
Take a look at this answer where I explain the process of decoding a sample weight measurement.
UUIDs with the format 0000xxxx-0000-1000-8000-00805f9b34fb are officially adopted Bluetooth SIG UUIDs and can be looked up online.
If you look at the following URL:
https://www.bluetooth.com/specifications/assigned-numbers/
there is a document with the title "16-bit UUIDs". I can see from that document that 0x181B is the Body Composition GATT Service.
According to the "Body Composition Service 1.0" document at:
https://www.bluetooth.com/specifications/specs/
there should be a Body Composition Feature (0x2A9B) and a Body Composition Measurement (0x2A9C) characteristic available for that service.
It will be the Body Composition Measurement characteristic that will contain the weight value.
A generic Bluetooth Low Energy scanning and exploration tool like nRF Connect can be useful when exploring and understanding the data on a device.
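To illustrate the decoding pitfall in the question: the service-data string is base64 text, so it must be base64-decoded into raw bytes first (the array in the question is just the ASCII codes of the base64 characters). A sketch, assuming the commonly reverse-engineered Mi Scale frame layout; the field offsets and any weight scaling (e.g. /200 for kg) are assumptions from community documentation, not from the BCS specification:

```python
import base64
import struct

service_data = "BiTlBwcZFgsYAAAmAg=="
raw = base64.b64decode(service_data)        # 13 raw bytes, not 20 char codes

# Assumed layout (community reverse engineering, hypothetical offsets):
#   bytes 0-1  : control/unit flags
#   bytes 2-8  : timestamp (year uint16 LE, month, day, hour, minute, second)
#   bytes 9-10 : impedance (uint16 LE)
#   bytes 11-12: raw weight (uint16 LE); scale according to the unit flags
flags, year, month, day, hour, minute, sec, impedance, weight_raw = \
    struct.unpack("<HHBBBBBHH", raw)

print(year, month, day, hour, minute, sec)  # the timestamp decodes sensibly
print(weight_raw)                           # divide per the unit flags
```

The timestamp coming out as a plausible date is a good sanity check that the byte layout (and the base64 step before it) is right; the remaining work is applying the correct unit scaling to the raw weight.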

Is the optimal binary code for these frequencies unambiguous?

I need to create a Huffman code for these frequencies: 1300, 1900, 2000, 2400, 3000, 3300, 3400, 3900, 4900, 7000, 7200, 9900.
MySolution
My question is: is the optimal binary code for these frequencies unambiguous?
The tree is unambiguous, since there are no ties for which two lowest frequencies to merge at each step. However, the binary codes are certainly not unambiguous: you don't have to assign 0 to the left branch and 1 to the right. You can swap those for any subset of the eleven internal nodes you like, resulting in 2^11 = 2048 different optimal binary codes.
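To see this concretely, here is a small sketch that runs the standard Huffman construction over the twelve frequencies with heapq and reports the code lengths. The lengths are the same in every optimal code, even though the 0/1 labeling is not (twelve leaves mean eleven merges, hence eleven internal nodes to swap):

```python
import heapq
from itertools import count

freqs = [1300, 1900, 2000, 2400, 3000, 3300, 3400, 3900, 4900, 7000, 7200, 9900]

def huffman_lengths(freqs):
    """Return {frequency: code length} for the optimal Huffman tree."""
    tick = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(f, next(tick), {f: 0}) for f in freqs]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)   # merge the two lowest weights...
        fb, _, b = heapq.heappop(heap)
        merged = {k: d + 1 for k, d in {**a, **b}.items()}  # ...one level deeper
        heapq.heappush(heap, (fa + fb, next(tick), merged))
    return heap[0][2]

lengths = huffman_lengths(freqs)
print(lengths)
print(sum(f * lengths[f] for f in freqs))   # total weighted code length
```

Running it shows code lengths of 2 to 5 bits, and at no step do two candidate weights tie, which is exactly why the tree shape is forced.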

Passing array of images to compute shader

I am currently working on a project using the draft for compute shaders in WebGL 2.0 [draft].
Nevertheless, I do not think my question is WebGL-specific; it is more of a general OpenGL problem. The goal is to build a pyramid of images to be used in a compute shader. Each level should be a square of 2^(n-level) x 2^(n-level) values (down to 1x1) and contain simple integer/float values...
I already have the values needed for this pyramid, stored in different OpenGL images, each with its corresponding size. My question is: how would you pass such an image array to a compute shader? I have no problem restricting the number of images passed to, let's say, 14 or 16, but it needs to be at least 12. If there is a mechanism to store the images in a texture and then use textureLod, that would solve the problem, but I could not get my head around how to do that with images.
Thank you for your help. I am pretty sure there is an obvious way to do this, but I am relatively new to OpenGL in general.
Passing in a "pyramid of images" is a normal thing to do in WebGL. It's called a mipmap.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, 128, 128, 0, gl.RED, gl.FLOAT, some128x128FloatData);
gl.texImage2D(gl.TEXTURE_2D, 1, gl.R32F, 64, 64, 0, gl.RED, gl.FLOAT, some64x64FloatData);
gl.texImage2D(gl.TEXTURE_2D, 2, gl.R32F, 32, 32, 0, gl.RED, gl.FLOAT, some32x32FloatData);
gl.texImage2D(gl.TEXTURE_2D, 3, gl.R32F, 16, 16, 0, gl.RED, gl.FLOAT, some16x16FloatData);
gl.texImage2D(gl.TEXTURE_2D, 4, gl.R32F, 8, 8, 0, gl.RED, gl.FLOAT, some8x8FloatData);
gl.texImage2D(gl.TEXTURE_2D, 5, gl.R32F, 4, 4, 0, gl.RED, gl.FLOAT, some4x4FloatData);
gl.texImage2D(gl.TEXTURE_2D, 6, gl.R32F, 2, 2, 0, gl.RED, gl.FLOAT, some2x2FloatData);
gl.texImage2D(gl.TEXTURE_2D, 7, gl.R32F, 1, 1, 0, gl.RED, gl.FLOAT, some1x1FloatData);
The 2nd argument is the mip level, followed by the internal format, followed by the width and height of that mip level.
You can access any individual texel from a shader with texelFetch(someSampler, integerTexelCoord, mipLevel) as in
uniform sampler2D someSampler;
...
ivec2 texelCoord = ivec2(17, 12);
int mipLevel = 2;
float r = texelFetch(someSampler, texelCoord, mipLevel).r;
But you said you need 12, 14, or 16 levels. 16 levels means a top level of 2^(16-1) = 32768, i.e. 32768x32768 texels. At 4 bytes per R32F texel that's 4 GiB for level 0 alone (about 5.3 GiB with the full mip chain), so you'll need a GPU with enough space and you'll have to pray the browser lets you allocate that much.
You can save some memory by allocating your texture with texStorage2D and then uploading data with texSubImage2D.
You mentioned using images. If by that you mean <img> tags or Image objects, you can't get float or integer data from those; they are generally 8-bit-per-channel values.
Of course, rather than using mip levels, you could also arrange your data into a texture atlas.

Asymmetrical and inaccurate output from Mali-400MP GPU

I have the following simple fragment shader:
precision highp float;
void main()
{
    gl_FragColor = vec4(0.15, 0.15, 0.15, 0.15);
}
I'm rendering to texture using frame buffer object.
When reading back values from the frame buffer I get the following:
38, 38, 38, 38, 39, 39, 39, 39, 38, 38, 38, 38, 39, etc.
0.15 * 255 = 38.25, so I expect to get 38 uniformly for all pixels, which I do get on my desktop GPU (Intel 4000) and on Tegra 3.
I'll be glad if someone can shed some light on this issue.
It is critical for anyone doing GPGPU on mobile devices, as the Mali-400MP is used in the Samsung Galaxy S2, S3, and S3 Mini.
It looks like your output is being dithered: the quantization error left over from one pixel is carried over to the next, where it eventually gets rounded up. Remember that GL_DITHER is on by default in OpenGL, so try calling glDisable(GL_DITHER).
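As a rough illustration of the effect (a simplified 1-D error-diffusion model, not Mali's actual dither pattern): quantizing a constant 38.25 while carrying the rounding error forward produces runs of 38 broken up by 39, instead of a uniform 38:

```python
def dither_row(value, n):
    """Quantize a constant float value n times, carrying the leftover
    error into the next pixel (simplified error diffusion)."""
    out, err = [], 0.0
    for _ in range(n):
        v = value + err
        q = int(v)        # truncate, as a framebuffer store would
        err = v - q       # leftover error feeds the next pixel
        out.append(q)
    return out

print(dither_row(38.25, 8))  # 0.15 * 255; prints [38, 38, 38, 39, 38, 38, 38, 39]
```

The exact run lengths differ from Mali's output (real hardware uses an ordered dither matrix, not per-row diffusion), but the average over the row still comes out to 38.25, which is the whole point of dithering.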

Graph a series of planes as a solid object in Mathematica

I'm trying to graph a series of planes as a solid object in Mathematica. I first tried to use the RangePlot3D options as well as the fill options to graph the 3D volume, but was unable to get a working result.
The graphic I'm trying to create will show the deviation between the z-axis and the radius from the origin of a 3D cuboid. The current equation I'm using is this:
Plot3D[Evaluate[{Sqrt[(C[1])^2 + x^2 + y^2]} /.
C[1] -> Range[6378100, 6379120]], {x, -1000000,
1000000}, {y, -1000000, 1000000}, AxesLabel -> Automatic]
(output for more manageable range looks as follows)
Here C[1] is the original z-value at each plane, and the result of this equation is z + (r - z) for any point on the x,y plane.
However, this method is incredibly inefficient. Because this will be used to model large objects with original z-values of >6,000,000 and heights above 1000, Mathematica is unable to graph thousands of planes and render them responsively.
Additionally, because the Range of C[1] only includes integer values, there are discontinuities between these planes.
Is there a way to rewrite this using different Mathematica functionality that will generate a 3D plot that is both a reasonable load on my system and a smooth object?
Second, what can I do to improve performance? When computing the above input for >30 minutes, Mathematica was only utilizing about 30% CPU and 4 GB of RAM, with a light load on my graphics card as well. This is only about twice as much as Chrome is using right now on my system.
I attempted to enable CUDALink, but it wouldn't enable properly. Would this offer a performance boost for this type of processing?
For Reference, my system build is:
16 GB RAM
Intel i7-4770K running at stock settings
Nvidia GeForce GTX 760
256 GB Samsung SSD
Plotting a million planes and hoping they become a 3D solid seems unlikely to succeed.
Perhaps you could adapt something like this
Show[Plot3D[{Sqrt[6^2+x^2+y^2], Sqrt[20^2+x^2+y^2]}, {x, -10, 10}, {y, -10, 10},
AxesLabel -> Automatic, PlotRange -> {{-15, 15}, {-15, 15}, All}],
Graphics3D[{
Polygon[Join[
Table[{x, -10, Sqrt[6^2 + x^2 + (-10)^2]}, {x, -10, 10, 1}],
Table[{x, -10, Sqrt[20^2 + x^2 + (-10)^2]}, {x, 10, -10, -1}]]],
Polygon[Join[
Table[{-10, y, Sqrt[6^2 + (-10)^2 + y^2]}, {y, -10, 10, 1}],
Table[{-10, y, Sqrt[20^2 + (-10)^2 + y^2]}, {y, 10, -10, -1}]]],
Polygon[Join[
Table[{x, 10, Sqrt[6^2 + x^2 + 10^2]}, {x, -10, 10, 1}],
Table[{x, 10, Sqrt[20^2 + x^2 + 10^2]}, {x, 10, -10, -1}]]],
Polygon[Join[
Table[{10, y, Sqrt[6^2 + 10^2 + y^2]}, {y, -10, 10, 1}],
Table[{10, y, Sqrt[20^2 + 10^2 + y^2]}, {y, 10, -10, -1}]]]}]]
What that does is plot the top and bottom surfaces and then construct four polygons, each connecting the top and bottom surfaces along one side. One caution: if you look very, very closely you will see that, because they are polygons, the edges of the four faces are made up of short line segments rather than parabolas, so they do not perfectly join your two paraboloids; there can be tiny gaps or tiny overlaps. This may or may not matter for your application.
That graphic displays in a fraction of a second on a machine that is a fraction of yours.
Mathematica does not automatically parallelize computations onto multiple cores.
CUDA programming is a considerably bigger challenge than turning the link on.
If you can simply define each face of your solid and combine them with Show, then I think you will have a much greater chance of success.
Another way:
xyrange = 10
cmin = 6
cmax = 20
RegionPlot3D[
Abs[x] < xyrange && Abs[y] < xyrange &&
cmin^2 < z^2 - ( x^2 + y^2) < cmax^2 ,
{x, -1.2 xyrange, 1.2 xyrange}, {y, -1.2 xyrange, 1.2 xyrange},
{z, cmin, Sqrt[ cmax^2 + 2 xyrange^2]}, MaxRecursion -> 15,
PlotPoints -> 100]
This is nowhere near as fast as Bill's approach, but it may be useful if you plot a more complicated region. Note that RegionPlot3D does not work for your original example because the volume is too small compared to the plot range.