I am implementing a forward renderer with DirectX 10. I want it to handle an unlimited number of lights so I can later compare its performance with a deferred renderer. The algorithm I am using is basically: for every object, for every light -> set light, draw object. Using additive blending, I render the object once per light, summing the contribution of every light on it. Everything works using additive blending and disabling depth writes.
The problem with this simple approach is that different objects get blended together (because depth writes are disabled), while I want a single object to be blended with the different light contributions on it but still obscure the objects behind it. How can I do this? Is a Z pre-pass the solution? Any suggestion will be very appreciated. Thanks.
These are the blending and depth/stencil states I use in my HLSL effect:
DepthStencilState NoDepthWritesDSS
{
    DepthEnable = true;
    DepthWriteMask = Zero;

    StencilEnable = true;
    StencilReadMask = 0xff;
    StencilWriteMask = 0xff;

    FrontFaceStencilFunc = Always;
    FrontFaceStencilPass = Incr;
    FrontFaceStencilFail = Keep;

    BackFaceStencilFunc = Always;
    BackFaceStencilPass = Incr;
    BackFaceStencilFail = Keep;
};

BlendState BlendingAddBS
{
    AlphaToCoverageEnable = false;
    BlendEnable[0] = true;

    SrcBlend = ONE;
    DestBlend = ONE;
    BlendOp = ADD;

    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;

    RenderTargetWriteMask[0] = 0x0F;
};
There are several options for handling multiple lights. If you want to implement it using multiple passes, a depth pre-pass is your best option (you then draw the lighting passes again using a LESS_EQUAL comparison in your depth state).
This approach will most likely be quite inefficient with a high number of lights/objects, though.
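As a rough sketch, in the same effect-file state syntax as above (the state names here are made up for illustration), the pre-pass writes depth normally and the lighting passes test against it without writing:

DepthStencilState DepthPrePassDSS
{
    // Pass 0: lay down depth only (also mask color writes in the blend state).
    DepthEnable = true;
    DepthWriteMask = All;
    DepthFunc = Less;
};

DepthStencilState LightingPassDSS
{
    // Lighting passes: LESS_EQUAL accepts fragments that match the stored
    // depth exactly, so only the frontmost surface accumulates light.
    DepthEnable = true;
    DepthWriteMask = Zero;
    DepthFunc = Less_Equal;
};

With this, occluded objects fail the depth test during the additive passes, so they no longer bleed through.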
I recommend this article, which explains how to render several lights and includes different interesting implementations. The compute-shader tile version will not work in DirectX 10, but the geometry-sprite version can easily be ported (I have a DX9 version of it).
If you still want forward rendering, there is also the light-indexed technique; an implementation example is here.
I can't get the panning to work in NAudio.
Here is my code:
void Play(double Amp, double Left, double Right)
{
    BBeats = new binaural_beats();
    BBeats.Amplitude = Amp;
    BBeats.Amplitude2 = Amp;
    BBeats.Frequency = Left;
    BBeats.Frequency2 = Right;
    BBeats.Bufferlength = 44100 * 2 * 3; // will play for 3 sec

    waveout = new WaveOut();
    WaveChannel32 temp = new WaveChannel32(BBeats);
    temp.PadWithZeroes = false;
    temp.Pan = 0.0f;
    waveout.Init(temp);
    waveout.Play();
}
I tried 0.0F, 1.0F, and 100F, but it is not working.
I want it to play completely from one speaker (one channel) and not from the other.
I just spent the entire night with the same problem, and the solution was in a whole different place than expected. I tried using Pan, PanningSampleProvider, and MultiplexingWaveProvider to get control over the pan, but I could only hear a minor change in sound, not really a pan; on my output meters I could see maybe a 10% variation.
Now I must translate from Danish, so it might not be 100% accurate: under your sound devices in Windows, select the playback device you are using, open its properties, go to the enhancements tab, and tick "Disable all sound effects". BAM, 100% control over the pan.
I guess Windows has some kind of auto-level algorithm between stereo channels enabled by default; I don't know why, or what it is supposed to do.
The Pan setting on WaveChannel32 goes from -1.0 (left channel only) to 1.0 (right channel only).
Or for more control over panning strategies, look at the PanningSampleProvider class.
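For example, a minimal change to the Play method from the question (only the Pan value differs from the original code):

WaveChannel32 temp = new WaveChannel32(BBeats);
temp.PadWithZeroes = false;
temp.Pan = -1.0f;   // -1.0f = left channel only, 1.0f = right channel only
waveout.Init(temp);
waveout.Play();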
I had the same problem. I tried to use PanningSampleProvider (NAudio), but it didn't work. It turned out the cause was a Windows system setting: just turn off mono audio in the sound settings.
Here is my source code:
var _audioFile = new AudioFileReader("E://CShap/Test/speaker.wav");
var monofile = new StereoToMonoSampleProvider(_audioFile);
var panner = new PanningSampleProvider(monofile);
panner.PanStrategy = new SquareRootPanStrategy();
panner.Pan = -1.0f; // pan fully left
WaveFileWriter.CreateWaveFile16("E://CShap/Test/speaker_resampler_L.wav", panner);
I am developing a game using iOS SpriteKit. I am trying to make an object in this game that will pull things towards it, with the force getting greater as objects come closer to it; think of a magnet or a black hole. I've been having a difficult time figuring out which properties to change to get this node's physicsBody to attract other nodes as they pass by.
In iOS 8 and OS X 10.10, SpriteKit has SKFieldNode for creating forces that apply to bodies in an area. This is great for things like buoyancy, area-specific gravity, and "magnets".
Watch out, though — the magneticField you get from that class is probably not what you want for the kind of "magnets" gameplay you might be looking for. A magnetic field behaves as per real-world physics at the micro level... that is, it deflects moving, charged bodies. What we usually think of as magnets — the kind that stick to your fridge, pick up junked cars, or make a hoverboard fly — is a higher-level effect of that force.
If you want a field that just attracts anything (or some specific things) nearby, a radialGravityField is what you're really after. (To attract only specific things, use the categoryBitMask on the field and the fieldBitMask on the bodies it should/shouldn't interact with.)
If you want a field that attracts different things more or less strongly, or attracts some things and repels others, the electricField is a good choice. You can use the charge property of physics bodies to make them attracted or repelled (negative or positive values) or more or less strongly affected (greater or less absolute value) by the field.
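For example, a minimal sketch of a radial-gravity "black hole" (iOS 8+); the strength, falloff, region, and bitmask values are illustrative choices, not canonical:

import SpriteKit

let blackHole = SKFieldNode.radialGravityField()
blackHole.strength = 3.0
blackHole.falloff = 2.0                    // pull grows as bodies get closer
blackHole.region = SKRegion(radius: 200)   // limit the field's reach
blackHole.categoryBitMask = 0x1 << 1
scene.addChild(blackHole)

// Only bodies whose fieldBitMask overlaps the field's category are affected.
orbitingNode.physicsBody?.fieldBitMask = 0x1 << 1

Here scene and orbitingNode stand in for whatever scene and affected node your game already has.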
Prior to iOS 8 and OS X 10.10, SpriteKit's physics simulation doesn't include these kinds of forces.
That doesn't keep you from simulating it yourself, though. In your scene's update: method you can find the distances between bodies, calculate a force on each that grows as that distance shrinks (scaled by whatever strength of magnetic field you're simulating), and apply forces to each body.
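A rough sketch of that manual approach, inside an SKScene subclass with an attractor node property (the node names, the strength constant, and the inverse-square falloff are illustrative, not from the original answer):

override func update(_ currentTime: TimeInterval) {
    enumerateChildNodes(withName: "coin") { node, _ in
        guard let body = node.physicsBody else { return }
        let dx = self.attractor.position.x - node.position.x
        let dy = self.attractor.position.y - node.position.y
        let distance = max(sqrt(dx * dx + dy * dy), 1.0)  // avoid divide-by-zero
        // Inverse-square falloff: the pull grows as the body gets closer.
        let magnitude: CGFloat = 10_000 / (distance * distance)
        body.applyForce(CGVector(dx: magnitude * dx / distance,
                                 dy: magnitude * dy / distance))
    }
}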
Yes, you can create a magnetic force in SpriteKit:
-(void)didSimulatePhysics
{
    [self updateCoin];
}

-(void)updateCoin
{
    [self enumerateChildNodesWithName:@"coin" usingBlock:^(SKNode *node, BOOL *stop) {
        // Move the coin from right to left.
        CGPoint position = node.position;
        position.x -= 10;
        node.position = position;

        // Apply the magnetic force between the hero and the coin.
        [self applyMagnetForce:(SKSpriteNode *)node];

        if (node.position.x < -100)
            [node removeFromParent];
    }];
}

-(void)applyMagnetForce:(SKSpriteNode *)node
{
    if (gameViewController.globalStoreClass.magnetStatus)
    {
        // If the coin is inside the area of the magnetic force...
        if (node.position.x < 400)
        {
            node.physicsBody.allowsRotation = NO;
            // Just for fun:
            node.physicsBody.linearDamping = 10;
            node.physicsBody.angularVelocity = 10 * 10;
            // ...pull it toward _gameHero, the attracting object.
            [node.physicsBody applyForce:CGVectorMake((10 * 10) * (_gameHero.position.x - node.position.x),
                                                      (10 * 10) * (_gameHero.position.y - node.position.y))];
        }
    }
}
Remember that both the hero's and the coin's bodies need to be dynamic.
Well, it seems like now you can, since Apple introduced SKFieldNode in iOS 8.
You can use the following code snippets to do what you're looking for, though this version doesn't have attraction and repulsion properties.
Body code:
let node = SKSpriteNode(imageNamed: "vortex")
node.name = "vortex"
node.position = position
node.run(SKAction.repeatForever(SKAction.rotate(byAngle: CGFloat.pi, duration: 1)))
node.physicsBody = SKPhysicsBody(circleOfRadius: node.size.width / 2)
node.physicsBody?.isDynamic = false
node.physicsBody?.categoryBitMask = CollisionTypes.vortex.rawValue
node.physicsBody?.contactTestBitMask = CollisionTypes.player.rawValue
node.physicsBody?.collisionBitMask = 0
addChild(node)
Upon contact with that black-hole body:
func playerCollided(with node: SKNode) {
    if node.name == "vortex" {
        player.physicsBody?.isDynamic = false
        isGameOver = true
        score -= 1

        let move = SKAction.move(to: node.position, duration: 0.25)
        let scale = SKAction.scale(to: 0.0001, duration: 0.25)
        let remove = SKAction.removeFromParent()
        let sequence = SKAction.sequence([move, scale, remove])

        player.run(sequence) { [unowned self] in
            self.createPlayer()
            self.isGameOver = false
        }
    }
}
I have some code to move a randomly oriented 3D seismic line forwards or backwards, similar to the general intersection player. It worked perfectly in Petrel 2011; however, it seems to have broken once I updated to 2012. The issue is that the normal direction of the line seems to change by a few decimals when I try to set a new facet. Below is some example code:
SeismicLine3D seismicLine3D = ...;
double distance = ...;
Direction3 direction = ...;
Direction3 normal = ...;
Facet facet = seismicLine3D.Intersection.Facets.ElementAt(0);
Vector3 offset = Vector3.Multiply(distance, direction.NormalizedVector);
Point3 point = Point3.Add(facet.Plane.DefiningPoint, offset);
Plane3 plane = new Plane3(point, normal);
Facet newFacet = new Facet(plane, new Plane3[] {});
IEnumerable<Facet> facets = new Facet[] {newFacet};
using (ITransaction transaction = DataManager.NewTransaction())
{
transaction.Lock(seismicLine3D);
try { seismicLine3D.Intersection.Facets = facets; }
finally { transaction.Commit(); }
}
// BAD!!!
// seismicLine3D.Intersection.Facets.ElementAt(0).Plane.Normal != normal;
Does anyone know what may have changed between Petrel 2011 and 2012 to cause this? Also, does anyone know of a possible work-around?
Edit:
The change in normal orientation is very noticeable when viewing in any toggle window. You will see slight "glitches" in the visualization as the line moves.
The issue is due to rounding during double -> float and float -> double conversions. The algorithm slightly modifies its input seismic line at each iteration, so the computed normal is slightly different each time because of this rounding.
Converting the normalized normal to float first improves the precision of the algorithm a bit, but the best workaround so far is to store the first normal and use it at each iteration.
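A sketch of that workaround, reusing the types and transaction code from the question (the caching field, the method wrapper, and the assumption that Plane.Normal is a Direction3 are additions for illustration):

// Cache the normal from the very first facet and reuse it, instead of
// reading back a value that has been through float round-trips.
private Direction3? _initialNormal;

private void MoveLine(SeismicLine3D seismicLine3D, double distance, Direction3 direction)
{
    Facet facet = seismicLine3D.Intersection.Facets.ElementAt(0);
    if (_initialNormal == null)
        _initialNormal = facet.Plane.Normal;

    Vector3 offset = Vector3.Multiply(distance, direction.NormalizedVector);
    Point3 point = Point3.Add(facet.Plane.DefiningPoint, offset);
    Plane3 plane = new Plane3(point, _initialNormal.Value);

    Facet newFacet = new Facet(plane, new Plane3[] {});
    IEnumerable<Facet> facets = new Facet[] { newFacet };
    using (ITransaction transaction = DataManager.NewTransaction())
    {
        transaction.Lock(seismicLine3D);
        try { seismicLine3D.Intersection.Facets = facets; }
        finally { transaction.Commit(); }
    }
}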
Cheers,
Priya
I'm searching for a program which detects the border of an image; for example, I have a square and the program detects its X/Y coordinates.
Example:
(example image: http://img709.imageshack.us/img709/1341/22444641.png)
This is a very simple edge detector, suitable for binary images. It just calculates the differences between horizontally and vertically adjacent pixels, like image.pos[1,1] = image.pos[1,1] - image.pos[1,2], and the same for vertical differences. Bear in mind that you also need to normalize the result to the 0..255 range.
But! If you just need a program, use Adobe Photoshop.
Code written in C#.
public void SimpleEdgeDetection()
{
    // Only 8-bit grayscale images are supported; check before locking bits.
    if (image.PixelFormat != PixelFormat.Format8bppIndexed)
        return;

    BitmapData data = Util.SetImageToProcess(image);
    unsafe
    {
        byte* ptr1 = (byte*)data.Scan0;
        byte* ptr2;
        int offset = data.Stride - data.Width;
        int height = data.Height - 1;
        int px;
        for (int y = 0; y < height; y++)
        {
            ptr2 = ptr1 + data.Stride;   // pixel one row below ptr1
            for (int x = 0; x < data.Width; x++, ptr1++, ptr2++)
            {
                // Horizontal difference plus vertical difference.
                px = Math.Abs(ptr1[0] - ptr1[1]) + Math.Abs(ptr1[0] - ptr2[0]);
                if (px > Util.MaxGrayLevel) px = Util.MaxGrayLevel;
                ptr1[0] = (byte)px;
            }
            ptr1 += offset;
        }
    }
    image.UnlockBits(data);
}
Method from the Util class:
static public BitmapData SetImageToProcess(Bitmap image)
{
    if (image != null)
        return image.LockBits(
            new Rectangle(0, 0, image.Width, image.Height),
            ImageLockMode.ReadWrite,
            image.PixelFormat);
    return null;
}
If you need more explanation of the algorithm, just ask, with more information and without being so general.
It depends what you want to do with the border. If you just want the pixel values along the edge of a region, use a connected-components algorithm; you must know the value of the region before using it. It will walk around the border and collect the outside of the region. If you are trying to detect just the outside lines, take the gradient of the image; it will reveal where the lines are. To do this, convolve the image with an edge-detection filter such as Prewitt or Sobel.
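As a hedged illustration of that convolution step (a standalone sketch; the byte[,] grayscale layout is an assumption made for brevity):

static byte[,] SobelMagnitude(byte[,] src)
{
    int h = src.GetLength(0), w = src.GetLength(1);
    var dst = new byte[h, w];
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
        {
            // 3x3 Sobel responses in x and y.
            int gx = -src[y - 1, x - 1] - 2 * src[y, x - 1] - src[y + 1, x - 1]
                     + src[y - 1, x + 1] + 2 * src[y, x + 1] + src[y + 1, x + 1];
            int gy = -src[y - 1, x - 1] - 2 * src[y - 1, x] - src[y - 1, x + 1]
                     + src[y + 1, x - 1] + 2 * src[y + 1, x] + src[y + 1, x + 1];
            // Approximate gradient magnitude, clamped to the gray range.
            dst[y, x] = (byte)Math.Min(255, Math.Abs(gx) + Math.Abs(gy));
        }
    return dst;
}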
You can use any image-processing library, such as OpenCV, which is available for C++ and Python. You should look for edge-detection functions such as Canny edge detection. Of course, this would require some diving into image processing.
The example image you gave should be straightforward to detect; how noisy/varied are the images going to be?
A shape-recognition algorithm might help you out, provided the shape has a solid border of some kind and the background colour is solid.
From the sounds of it, you just want a blob extraction algorithm. After that, the lowest/highest values for x/y will give you the coordinates of the corners.
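For the corner coordinates specifically, a minimal sketch over a binarized image (the byte[,] representation and the nonzero-is-foreground convention are assumptions):

using System.Drawing;

static Rectangle FindBounds(byte[,] binary)
{
    int h = binary.GetLength(0), w = binary.GetLength(1);
    int minX = w, minY = h, maxX = -1, maxY = -1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (binary[y, x] > 0)   // foreground pixel
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
    // The blob's bounding box; its corners are the X/Y coordinates sought.
    return Rectangle.FromLTRB(minX, minY, maxX + 1, maxY + 1);
}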
In my GLUT application I'm simulating a plane with the camera. When the plane's speed is low, I intend to have the nose start to point towards the ground as the camera falls. My first instinct was to just change the pitch until it pointed downwards at -90 degrees. However, I can't just change the pitch, because if the plane is tilted on its side or upside down then it would not be changing direction towards the ground.
Now I'm trying to do a rough simulation of this by shifting the lookAt.y value downwards. To do this, I am trying to get all the current camera coordinates that I use to set the camera (eye.x, eye.y, eye.z, look.x, look.y, look.z, up.x, up.y, up.z), then call set again with the modified values.
I've been working with Camera.cpp and Camera.h to control my camera functions; they can be found here.
After adding methods to get all the values, only the eye values are actually updated when various camera motions are made. I guess my question is: how do I retrieve these values?
The glLoadMatrix call is in this function:
void Camera::setModelViewMatrix(void)
{
    // Load the model-view matrix with the existing camera values.
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    look.x = u.y; look.y = v.y; look.z = n.y;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
Is there a way to get the eye, lookAt, and up values from the matrix here? Or should I do something else to get these values?
Thanks in advance for your help.
The camera class you link to is not an actual OpenGL class, but it should be simple enough to work with.
The function quoted just takes the current values of the camera object and sends them to OpenGL. If you look at the camera's set function, you can see how the program calculates the values it actually stores.
The eye value is stored directly. The lookAt value is just the value of (eye - n), by vector math. The up value is the hardest, but if I remember my vector math correctly, I believe that up = (n cross u).
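A hedged sketch of what such getters could look like, assuming the member names (eye, u, v, n) and the Point3/Vector3 types from that Camera class; these accessors are not part of the original class:

Point3 Camera::getEye() const
{
    return eye;
}

Point3 Camera::getLookAt() const
{
    // n points from the look point back toward the eye, so any point along
    // -n works as a look-at target; this one is one unit in front of the eye.
    return Point3(eye.x - n.x, eye.y - n.y, eye.z - n.z);
}

Vector3 Camera::getUp() const
{
    // v is the camera's up axis (equivalently, up = n cross u).
    return v;
}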