Many of you may know the classic Windows Pipes screensaver. Does anyone have any idea how this was programmed, in 3D or 2D? No real code necessary - just an overall explanation of the algorithm would be great.
This is a screenshot from Chrome. In Chrome it is done very simply:
file_util::AppendToPath(&path, L"sspipes.scr");
CreateProcess(NULL, ...
Or, in other words: open the Windows Pipes screensaver file and run it.
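For example, a hypothetical standalone sketch of that approach (this is not the Chromium code above; the System32 path and the /s full-screen switch are assumptions about a stock Windows install that still ships sspipes.scr):

/* Hypothetical sketch: launch the classic 3D Pipes screensaver.
 * Assumes sspipes.scr is present in System32 (true up to Windows XP);
 * the /s switch asks a .scr to run full screen. */
#include <windows.h>

int main(void)
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    /* CreateProcessW may modify the command-line buffer, so it must be writable. */
    wchar_t cmd[] = L"C:\\Windows\\System32\\sspipes.scr /s";

    if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}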
Source of xscreensaver's version is: xscreensaver-4.16/hacks/glx/pipes.c in the xscreensaver-4.16.tar.bz2 (or other version of the same package). Online version of the file.
UPDATE: How it works: it uses OpenGL for the rendering.
Each tube addition is drawn as a cylinder plus a sphere:
/* Cylinder wall: one quad strip, two vertices (front and back ring) per angular step. */
glBegin(GL_QUAD_STRIP);
for (an = 0.0; an <= 2.0 * M_PI; an += M_PI / 12.0) {
    glNormal3f((COSan_3 = cos(an) / 3.0), (SINan_3 = sin(an) / 3.0), 0.0);
    glVertex3f(COSan_3, SINan_3, one_third);
    glVertex3f(COSan_3, SINan_3, -one_third);
}
glEnd();
Rotation in space is done by glRotatef before glBegin. All rotations are 90 degrees only.
The end sphere is a GLU quadric object:
quadObj = gluNewQuadric();
gluQuadricDrawStyle(quadObj, (GLenum) GLU_FILL);
gluSphere(quadObj, radius, 16, 16);
gluDeleteQuadric(quadObj);
Bends need considerably more code to draw (see the function myElbow).
To keep pipes from intersecting, a 3D array of flags ("this point of space contains a pipe") is used. All pipes have integer coordinates and run parallel to the axes. Perspective projection is handled by the 3D library (OpenGL/Direct3D).
The main function with the logic is draw_pipes.
It draws a sphere, selects a random direction, and the pipe run begins. At every step there is a random chance (about 20% probability) of bending. The neighboring cells are also checked at every step to prevent collisions. If there is no free space to continue a pipe, or the pipe is long enough (which may be random too), it stops and a new pipe begins from a random point.
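In pseudo-C, that growth loop boils down to something like this (a minimal sketch with made-up names and grid size, not the actual pipes.c code; the real hack also ends a pipe after a random length):

/* Minimal sketch of the pipe-growing loop described above (hypothetical
 * names, not the real pipes.c code). occupied[] marks grid cells that
 * already contain pipe. */
#include <stdlib.h>

#define GRID 16
static char occupied[GRID][GRID][GRID];

/* The six axis-aligned directions a pipe may travel in. */
static const int dirs[6][3] = {
    { 1, 0, 0 }, { -1, 0, 0 },
    { 0, 1, 0 }, { 0, -1, 0 },
    { 0, 0, 1 }, { 0, 0, -1 }
};

static void grow_pipe(int x, int y, int z)
{
    int d = rand() % 6;                    /* initial direction */

    occupied[x][y][z] = 1;
    /* draw_start_sphere(x, y, z); */

    for (;;) {
        int tries, nx = x, ny = y, nz = z;

        if (rand() % 100 < 20)             /* ~20% chance to bend */
            d = rand() % 6;

        /* Try up to six random directions to find a free neighbouring cell. */
        for (tries = 0; tries < 6; tries++) {
            nx = x + dirs[d][0];
            ny = y + dirs[d][1];
            nz = z + dirs[d][2];
            if (nx >= 0 && nx < GRID && ny >= 0 && ny < GRID &&
                nz >= 0 && nz < GRID && !occupied[nx][ny][nz])
                break;
            d = rand() % 6;                /* blocked: bend and retry */
        }
        if (tries == 6)
            return;                        /* boxed in: stop; caller starts a new pipe */

        occupied[nx][ny][nz] = 1;
        /* draw_cylinder(x, y, z, d);  plus an elbow/sphere where the pipe bent */
        x = nx; y = ny; z = nz;
    }
}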
It was done using OpenGL (back when MS was excited about OpenGL on Windows). While I can't speak authoritatively about the rest (I'm not sure I've ever seen the source code), it looks like a pretty straightforward matter of choosing a direction (up, down, left, right, forward, backward) and a distance, with some bounds to keep it all in a cube.
The pipe has some particular diameter, and you can select a bitmap to be textured onto the pipe if you want. If you don't use a texture, it can/will choose colors. It's old enough that I believe it's written to use only the 20 (16?) colors defined by Windows as the basic palette normally supported on almost any graphics adapter -- but it's been quite a while since mainstream hardware was nearly that restricted.
This is more a maths conundrum than specifically coding but I'm sure I'll need help there too.
I am writing some code to place a MicroStation cell at a chosen scale and then to use MicroStation commands to stretch part of the cell to another location. The coding was complete, I thought, as it worked in my test environment but in testing with some users it broke immediately. It took some time to establish the reason and it is caused by the on screen view having a rotation applied.
My code relies on capturing the user-placed position for the start of the cell placement, retrieved as a Point3d. I then need to select another point at a set distance along X and Y from that point, adjusted by factoring in the chosen scale ("CellScale As Integer"), so my second position is defined as:
PosFlood.X=Pos1.X + (0.35 * CellScale)
PosFlood.Y=Pos1.Y + (0.007 * CellScale)
But this calculation is wrong when the view is rotated. I have been able to retrieve the rotation angle and store it as "ViewAngle As Double", but I don't know what formula to use to calculate my second position. I believe I need to use sin and cos, but all my searching has come up short.
Hopefully there is a maths wizard out in cyber space who can put me on the right path?
Thanks - Mark
MicroStation is 3D, and using angles is not a good way to deal with rotation. A common approach in 3D math is to use a rotation matrix, which is fully supported by MicroStation VBA with the Matrix3D class.
You can get the rotation matrix of a MicroStation view using View.Rotation. You will find many methods that operate on a Matrix3D in the MicroStation VBA help, including building a scaling matrix and multiplying matrices (e.g. scaling matrix × rotation matrix) to obtain the transform for your cell.
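If you do want the raw sin/cos form for the simple case, here is a sketch of the underlying 2D math only (written in C for brevity; PosFlood, Pos1, CellScale and ViewAngle reuse the names from the question, ViewAngle is assumed to be in radians, and the sign of the angle may need flipping depending on MicroStation's rotation convention - in VBA you would normally let a Matrix3D do this instead):

/* Sketch of the underlying 2D rotation only (hypothetical, not MicroStation
 * API code). Convert ViewAngle with angle * PI / 180 if it is in degrees. */
#include <math.h>

typedef struct { double X, Y; } Point2d;

Point2d place_flood(Point2d Pos1, double CellScale, double ViewAngle)
{
    /* Offset expressed along the unrotated view axes, as in the question. */
    double dx = 0.35  * CellScale;
    double dy = 0.007 * CellScale;

    /* Rotate the offset by the view angle before adding it to the anchor point. */
    Point2d PosFlood;
    PosFlood.X = Pos1.X + dx * cos(ViewAngle) - dy * sin(ViewAngle);
    PosFlood.Y = Pos1.Y + dx * sin(ViewAngle) + dy * cos(ViewAngle);
    return PosFlood;
}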
I currently have 16 tiles, with individual images that make up 1 big map. I pan by transforming right at the beginning before any actual drawing with this:
GL.Translate(G_.Pan(0), G_.Pan(1), 0)
Then I zoom by doing this:
GL.Ortho(-G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, -G_.Size * 1.5 ^ G_.ZoomFactor, -1, 1)
G_.Size is a constant that only varies on startup depending on parameters, zoom factor ranges from -1 to -13
What I want to be able to do is check whether one of the 16 tiles is within the visible area, so I can stop drawing them when they are not on screen. I had found some quite complex methods for doing it, but they were 3D and seemed like a lot of work for something that should be simple. I would have thought it would be something like just checking whether a point is within the bounds of the visible area, but I have no idea how to get the visible area.
Andon M. Coleman already suggested that you implement projection volume culling (a generalized form of frustum culling). This is, however, outside the scope of OpenGL. You must understand that OpenGL is not a "magical" scene graph that does scene management and the like. It's a mere drawing API; what it does is put shaded, textured points, lines or triangles on the screen, and that's it. The rest is up to you, or the libraries you choose to implement it.
In the case of projection volume culling you're testing whether a given piece of geometry intersects the volume defined by the planes that form the borders of the projection volume. Your projection matrix defines those planes: specifically, it transforms view-space vertex positions into the [-1;1]×[-1;1]×[-1;1] range of perspective-divided clip space. So by inverting the projection matrix and unprojecting the corners of that [-1;1]³ cube through it, you determine the limiting planes of the projection volume in view space.
You then use that information to intersect your quads with the volume to see if they cross it, i.e. are in any way visible.
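For the purely 2D orthographic setup in the question this collapses to a single axis-aligned rectangle test. A rough sketch (in C for brevity; size, zoomFactor, panX/panY stand in for G_.Size, G_.ZoomFactor and G_.Pan, and each tile's world-space bounding box is assumed to be known):

/* Sketch of 2D "frustum" culling for the ortho setup in the question. */
#include <math.h>
#include <stdbool.h>

typedef struct { double minX, minY, maxX, maxY; } Rect;

/* World-space rectangle that ends up inside the ortho volume after
 * GL.Translate(panX, panY, 0) followed by GL.Ortho(-s, s, s, -s, -1, 1). */
static Rect visible_rect(double size, double zoomFactor,
                         double panX, double panY)
{
    double s = size * pow(1.5, zoomFactor);
    Rect r = { -s - panX, -s - panY, s - panX, s - panY };
    return r;
}

/* Standard AABB overlap test: draw a tile only when this returns true. */
static bool rects_overlap(Rect a, Rect b)
{
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY;
}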
I have a webcam directly over a chicken nest. This camera takes images and uploads them to a folder on a server. I'd like to detect if an egg has been laid from this image.
I'm thinking the best method would be to compare the contrast as the egg will be much more reflective than the straw nest. (The camera has Infrared so the image is partly grey scale)
I'd like to do this in .NET if possible.
Try resizing your image to a smaller size, maybe 10 × 10 pixels. This averages out any small, distracting details.
' Requires Imports System.Drawing and Imports System.Drawing.Drawing2D
Const N As Integer = 10
Dim newImage As New Bitmap(N, N)
Dim fromCamera As Image = Nothing ' Get image from camera here
Using gr As Graphics = Graphics.FromImage(newImage)
    gr.SmoothingMode = SmoothingMode.HighSpeed
    gr.InterpolationMode = InterpolationMode.Bilinear
    gr.PixelOffsetMode = PixelOffsetMode.HighSpeed
    gr.DrawImage(fromCamera, New Rectangle(0, 0, N, N))
End Using
Note: you do not need high quality, but you do need good averaging. You may have to experiment with different quality settings.
Since a pixel now covers a large area of your original image, a bright pixel is very likely part of an egg. It might also be a good idea to compare the brightness of the brightest pixel to the average image brightness, since that would reduce problems caused by global illumination changes.
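For illustration, a minimal sketch of that brightest-versus-average test (shown in C for brevity; the gray values are assumed to have been read out of the 10 × 10 thumbnail already, and the 1.5 ratio is an arbitrary starting threshold you would tune against real images):

/* Sketch of the "brightest pixel vs. average brightness" test. */
#include <stdbool.h>

#define N 10
#define RATIO 1.5   /* hypothetical: brightest must be 50% above the mean */

static bool looks_like_egg(const double gray[N * N])
{
    double sum = 0.0, brightest = 0.0;
    for (int i = 0; i < N * N; i++) {
        sum += gray[i];
        if (gray[i] > brightest)
            brightest = gray[i];
    }
    double mean = sum / (N * N);
    return mean > 0.0 && brightest > RATIO * mean;
}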
EDIT (in response to comment):
Your code is well structured and makes sense. Here are some thoughts:
Calculate the gray value from the color value with:
Dim grayValue = c.R * 0.3 + c.G * 0.59 + c.B * 0.11
... instead of comparing the three color components separately. The different weights reflect the fact that we perceive green more strongly than red, and red more strongly than blue. Again, we do not want a beautiful thumbnail; we want good contrast. Therefore, you might want to do some experiments here as well. Maybe it is sufficient to use only the red component. Depending on lighting conditions, one color component might yield better contrast than the others. I would recommend making the gray conversion part of the thumbnail creation and writing the thumbnails to a file or to the screen. This would allow you to play with the different settings (size of the thumbnail, resizing parameters, color-to-gray conversion, etc.) and to compare the intermediate results visually. Creating a bitmap (bmp) with the end result is a very good idea.
The Using statement does the Dispose() for you. It does so even if an exception occurs before End Using (there is a hidden Try/Finally involved).
I'm trying to draw about 4,000-10,000 segments using NSBezierPath on every drawRect of an NSView (about a 300×300 pixel box). This is very resource-heavy and takes a long time to draw (relatively speaking).
Can someone suggest a substitute for this? I've tried using a single NSBezierPath for 1000 segments at a time, but it's still too resource heavy.
I'm looking for any possible alternatives. I'm sure OpenGL would be faster, but I don't know if I have to learn a new platform in order to do what I need. I'm open to suggestions.
Not an answer, just test results
I did a simple experiment with Mathematica. This experiment gives an absolute upper bound for your time, since I used no optimization, no GPU, an interpreted language, etc. So I think an improvement of much more than one order of magnitude is achievable.
Results:
Generating a list of 10,000 Bézier curves:
b = Table[
      {Hue[RandomReal[]],
       BezierCurve@RandomReal[{0, 300}, {4, 2}]}, {10000}];
is very quick, because Mathematica does not evaluate anything at this point.
Now rendering:
h1 = AbsoluteTime[]; Print@Graphics[b]; h2 = AbsoluteTime[]; Print[h2 - h1];
Time spent 11.8 secs
PS: The intention is just to establish a timing baseline to frame expectations.
The Orange book, section 16.2, lists implementing diffuse lighting as:
uniform vec3 lightPos;

void main()
{
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec4 V = gl_ModelViewMatrix * gl_Vertex;
    vec3 L = normalize(lightPos - V.xyz);
    gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
    gl_Position = ftransform();
}
However, when I run this, the lighting changes when I move my camera.
On the other hand, when I change
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
to
vec3 N = normalize(gl_Normal);
I get diffuse lighting that works like the fixed pipeline.
What is this gl_NormalMatrix, what did removing it do... and is this a bug in the Orange Book, or am I setting up my OpenGL code improperly?
[For completeness, the fragment shader just copies the color]
OK, I hope there's nothing wrong with answering your question after over half a year? :)
So there are two things to discuss here:
a) What should the shader look like
You SHOULD transform your normals by the modelview matrix - that's a given. Consider what would happen if you didn't: your modelview matrix can contain some kind of rotation, so your cube would be rotated, but the normals would still point in the old direction! This is clearly wrong.
So: when you transform your vertices by the modelview matrix, you should also transform the normals. Your normals are vec3, not vec4, and you're not interested in translations (normals only contain direction), so you can just multiply your normal by mat3(gl_ModelViewMatrix), which is the upper-left 3×3 submatrix.
Then: this is ALMOST correct, but still a bit wrong - the reasons are well described on Lighthouse3D - go have a read. Long story short: instead of mat3(gl_ModelViewMatrix), you have to multiply by its inverse transpose.
And OpenGL 2 is very helpful and precalculates this for you as gl_NormalMatrix. Hence, the correct code is:
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
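For the curious, here is roughly what that precalculation amounts to on the CPU (a sketch, not any particular library's code): for a 3×3 matrix, transpose(inverse(M)) is just the cofactor matrix of M divided by det(M).

/* Sketch: what gl_NormalMatrix amounts to -- the inverse transpose of the
 * upper-left 3x3 of the modelview matrix. Column-major storage, as OpenGL
 * uses it. No guard against a singular (det == 0) matrix. */

#define M(r, c) mv[(c) * 4 + (r)]   /* 4x4 modelview, column-major */
#define N(r, c) n[(c) * 3 + (r)]    /* 3x3 result, column-major    */

void normal_matrix(const float mv[16], float n[9])
{
    /* Cofactors of the upper-left 3x3 of the modelview matrix. */
    float c00 =  M(1,1) * M(2,2) - M(1,2) * M(2,1);
    float c01 = -(M(1,0) * M(2,2) - M(1,2) * M(2,0));
    float c02 =  M(1,0) * M(2,1) - M(1,1) * M(2,0);
    float c10 = -(M(0,1) * M(2,2) - M(0,2) * M(2,1));
    float c11 =  M(0,0) * M(2,2) - M(0,2) * M(2,0);
    float c12 = -(M(0,0) * M(2,1) - M(0,1) * M(2,0));
    float c20 =  M(0,1) * M(1,2) - M(0,2) * M(1,1);
    float c21 = -(M(0,0) * M(1,2) - M(0,2) * M(1,0));
    float c22 =  M(0,0) * M(1,1) - M(0,1) * M(1,0);

    /* Determinant expanded along the first row. */
    float det = M(0,0) * c00 + M(0,1) * c01 + M(0,2) * c02;

    /* transpose(inverse(M)) = cofactor matrix / det. */
    N(0,0) = c00 / det;  N(0,1) = c01 / det;  N(0,2) = c02 / det;
    N(1,0) = c10 / det;  N(1,1) = c11 / det;  N(1,2) = c12 / det;
    N(2,0) = c20 / det;  N(2,1) = c21 / det;  N(2,2) = c22 / det;
}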
b) But it's different from fixed pipeline, why?
The first thing which comes to my mind is that "something's wrong with your usage of fixed pipeline".
I'm not really keen on the fixed pipeline (long live shaders!), but as far as I can remember, when you specify your lights via glLightfv(GL_LIGHT0, GL_POSITION, ...), the position is transformed by the modelview matrix that is current at that moment. It was easy (at least for me :)) to make the mistake of specifying the light position (or light direction for directional lights) in the wrong coordinate system.
I'm not sure I remember correctly how that worked back then, since I use GL3 and shaders nowadays, but let me try... what was the state of your modelview matrix? I think it just might be possible that you specified the directional light direction in object space instead of eye space, so that your light rotates together with your object. IDK if that's relevant here, but make sure to pay attention to that when using the fixed pipeline; see the sketch after the list below. That's a mistake I remember making often when I was still using GL 1.1.
Depending on the modelview state, you could specify the light in:
eye (camera) space,
world space,
object space.
Make sure which one it is.
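For reference, a minimal fixed-function sketch of how the modelview state at glLightfv() time decides that (standard GL 1.x calls; the light values here are made up):

/* Sketch: the modelview matrix in effect when glLightfv() is called decides
 * which space the light position/direction is interpreted in. */
#include <GL/gl.h>

void set_light(void)
{
    /* w = 0 -> directional light; w = 1 -> positional light. */
    const GLfloat lightDir[4] = { 0.0f, 0.0f, 1.0f, 0.0f };

    /* Eye space: identity modelview, so the direction is fixed relative to the camera. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glLightfv(GL_LIGHT0, GL_POSITION, lightDir);

    /* World space: call glLightfv AFTER loading the camera (view) transform,
     * but BEFORE any per-object transforms. */

    /* Object space (usually a mistake): call glLightfv after the object's own
     * transforms, so the light rotates/moves together with the object. */
}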
Huh.. I hope that makes the topic more clear for you. The conclusions are:
always transform your normals along with your vertices in your vertex shaders, and
if it looks different from what you expect, think about how you specify your light positions. (Maybe you want to multiply the light position vector in a shader too? The remarks about light-position coordinate systems still hold.)