I have some code to move a randomly oriented 3D seismic line forwards or backwards, similar to the general intersection player. It worked perfectly in Petrel 2011; however, it seems to have broken once I updated to 2012. The issue is that the normal direction of the line changes by a few decimals whenever I set a new facet. Below is some example code...
SeismicLine3D seismicLine3D = ...;
double distance = ...;
Direction3 direction = ...;
Direction3 normal = ...;
Facet facet = seismicLine3D.Intersection.Facets.ElementAt(0);
Vector3 offset = Vector3.Multiply(distance, direction.NormalizedVector);
Point3 point = Point3.Add(facet.Plane.DefiningPoint, offset);
Plane3 plane = new Plane3(point, normal);
Facet newFacet = new Facet(plane, new Plane3[] {});
IEnumerable<Facet> facets = new Facet[] { newFacet };
using (ITransaction transaction = DataManager.NewTransaction())
{
    transaction.Lock(seismicLine3D);
    try { seismicLine3D.Intersection.Facets = facets; }
    finally { transaction.Commit(); }
}
// BAD: the facet's normal no longer matches the one we just set.
// seismicLine3D.Intersection.Facets.ElementAt(0).Plane.Normal != normal;
Does anyone know what may have changed between Petrel 2011 and 2012 to cause this? Also, does anyone know of a possible work-around?
Edit:
The change in normal orientation is very noticeable when viewing in any toggle window: you will see slight "glitches" in the visualization as the line moves.
The issue is due to rounding during double -> float and float -> double conversions. The algorithm slightly modifies its input seismic line at each iteration, so the computed normal drifts a little each time because of this rounding.
Converting the normalized normal to float first improves the precision a bit, but the best workaround so far is to store the first normal and reuse it at each iteration.
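A minimal sketch of that workaround, using the same Ocean API types as above (MoveLine, hasInitialNormal, and initialNormal are my own names, added for illustration):

// Capture the normal once and reuse it on every move, instead of
// reading it back from the (float-rounded) facet each iteration.
private bool hasInitialNormal;
private Direction3 initialNormal;

private void MoveLine(SeismicLine3D seismicLine3D, double distance, Direction3 direction)
{
    Facet facet = seismicLine3D.Intersection.Facets.ElementAt(0);
    if (!hasInitialNormal)
    {
        initialNormal = facet.Plane.Normal;
        hasInitialNormal = true;
    }
    Vector3 offset = Vector3.Multiply(distance, direction.NormalizedVector);
    Point3 point = Point3.Add(facet.Plane.DefiningPoint, offset);
    Plane3 plane = new Plane3(point, initialNormal); // stored normal, not the rounded one
    Facet newFacet = new Facet(plane, new Plane3[] {});
    using (ITransaction transaction = DataManager.NewTransaction())
    {
        transaction.Lock(seismicLine3D);
        try { seismicLine3D.Intersection.Facets = new Facet[] { newFacet }; }
        finally { transaction.Commit(); }
    }
}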
Cheers,
Priya
Following the example on the website: https://vega.github.io/editor/#/examples/vega-lite/interactive_bar_select_highlight
I want to programmatically set the selections via signals. I realize that I could emulate a click by doing the following
VEGA_DEBUG.view.signal("select_tuple", {"unit":"","fields":[{"type":"E","field":"_vgsid_"}],"values":[1]})
However, I cannot proceed to select another one, e.g., the shift-select of the second:
VEGA_DEBUG.view.signal("select_tuple", {"unit":"","fields":[{"type":"E","field":"_vgsid_"}],"values":[2]})
This makes sense, since only shift-click accumulates the state.
I tried modifying the accumulated signal
VEGA_DEBUG.view.signal("select", {"_vgsid_":[1,2],"vlMulti":{"or":[{"_vgsid_":1},{"_vgsid_":2}]}})
However, this does not help. Is this not possible? I understand that a custom solution may be possible in hand-rolled vega, as opposed to that compiled from vega-lite.
Thanks.
You just need to set VEGA_DEBUG.view.signal("select_toggle", true) before adding the new selection!
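For example, a sketch against the same debug view as in the question (the runAsync calls are my addition, to force the dataflow to re-evaluate between updates):

// First selection, as in the question.
VEGA_DEBUG.view.signal("select_tuple", {"unit":"","fields":[{"type":"E","field":"_vgsid_"}],"values":[1]});
await VEGA_DEBUG.view.runAsync();
// Pretend shift is held down, then add the second tuple to the selection.
VEGA_DEBUG.view.signal("select_toggle", true);
VEGA_DEBUG.view.signal("select_tuple", {"unit":"","fields":[{"type":"E","field":"_vgsid_"}],"values":[2]});
await VEGA_DEBUG.view.runAsync();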
After much research, I made this example of how to change the Vega-Lite brush programmatically:
https://observablehq.com/#john-guerra/update-vega-lite-brush-programmatically
Using @koaning's example from this Stack Overflow question, I figured out that you can change the brush by updating "brush_y" (assuming your selection is called brush), or change the selection using "brush_tuple" (which doesn't seem to update the brush mark):
viewof chart = {
  const brush = vl.selectInterval("brush").encodings("y");
  const base = vl
    .markBar()
    .select(brush)
    .encode(
      vl.x().count(),
      vl.y().fieldQ("Horsepower"),
      vl.color().if(brush, vl.value("steelblue")).value("gray")
    )
    .height(maxY);
  return base.data(data).render();
}
update = {
  // From https://codepen.io/keckelt/pen/bGNQPYq?editors=1111
  // brush_y -> brush_tuple -> brush
  // Updates in pixel coordinates
  chart.signal("brush_y", [by0, maxY / 2]);
  await chart.runAsync();
}
Crossposting here in case it might be useful for anyone else.
I can't get the panning to work in NAudio.
Here is my code:
void Play(double Amp, double Left, double Right)
{
    BBeats = new binaural_beats();
    BBeats.Amplitude = Amp;
    BBeats.Amplitude2 = Amp;
    BBeats.Frequency = Left;
    BBeats.Frequency2 = Right;
    BBeats.Bufferlength = 44100 * 2 * 3; // will play for 3 sec
    waveout = new WaveOut();
    WaveChannel32 temp = new WaveChannel32(BBeats);
    temp.PadWithZeroes = false;
    temp.Pan = 0.0f;
    waveout.Init(temp);
    waveout.Play();
}
I tried 0.0f, 1.0f, and 100f, but it is not working.
I want it to play entirely from one speaker and not from the other one, or from one channel and not the other channel.
I just spent the entire night with the same problem, and the solution was in a whole different place than expected. I tried using Pan, PanningSampleProvider, and MultiplexingWaveProvider to obtain control over the pan, but I could only hear a minor change in sound, not really a pan. On my output meters, I could see maybe 10% variation.
Now I must translate from Danish, so it might not be 100% accurate. But under your sound devices in Windows, select the playback device you are using, open Properties, go to the Enhancements tab, and tick "Disable all sound effects". BAM, 100% control over the pan.
I guess Windows has some kind of auto-level algorithm between stereo channels enabled by default; I don't know why, or what it is supposed to do.
The Pan setting on WaveChannel32 goes from -1.0 (left only) to 1.0 (right only).
Or, for more control over panning strategies, look at the PanningSampleProvider class.
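Applied to the code in the question, the WaveChannel32 route is a one-line change (a minimal sketch, reusing the BBeats source from above):

WaveChannel32 temp = new WaveChannel32(BBeats);
temp.PadWithZeroes = false;
temp.Pan = -1.0f; // -1.0f = left speaker only, 1.0f = right only, 0.0f = centre
waveout.Init(temp);
waveout.Play();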
I had the same problem. I tried to use PanningSampleProvider (NAudio), but it didn't work. I found out the cause was a Windows system setting: just turn off Mono audio in the Sound settings.
Here is my source code:
var _audioFile = new AudioFileReader("E://CShap/Test/speaker.wav");
// PanningSampleProvider needs a mono input, so mix the stereo file down first.
var monofile = new StereoToMonoSampleProvider(_audioFile);
var panner = new PanningSampleProvider(monofile);
panner.PanStrategy = new SquareRootPanStrategy();
panner.Pan = -1.0f; // pan fully left
WaveFileWriter.CreateWaveFile16("E://CShap/Test/speaker_resampler_L.wav", panner);
I am struggling to solve this problem.
In my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface, and to keep looking at the object's center of mass while it moves.
Does anyone have a clue on how to achieve this?
Thanks for the help!
If you are happy to use (or prefer) basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; } // >= avoids a floating-point equality test that may never be true
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep the object's centre fixed in view
}
Of course, your buttons would modify what the desiredAngle is (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as the camera moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening', and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support, but I have never looked into it.
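For example, with the separate tween.js library (an assumption on my part; it is commonly paired with Three.js rather than built in), easing cameraAngle instead of stepping it at a fixed orbitSpeed could look like this:

// Ease cameraAngle to desiredAngle over one second.
const state = { angle: cameraAngle };
new TWEEN.Tween(state)
    .to({ angle: desiredAngle }, 1000)
    .easing(TWEEN.Easing.Quadratic.InOut)
    .onUpdate(function () {
        camera.position.x = Math.cos(state.angle) * orbitRange;
        camera.position.y = Math.sin(state.angle) * orbitRange;
        camera.lookAt(myObject.position);
    })
    .start();
// ...and call TWEEN.update() in your render/animate loop.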
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.
I am implementing a forward renderer with DirectX 10. I want it to handle an unlimited number of lights so that I can later compare its performance with a deferred renderer. The algorithm I am using is basically: for every object, for every light -> set the light, draw the object. Using additive blending, I render the object once per light, summing the contribution of every light on it. Everything works using additive blending and disabling depth writes.
The problem I have is that, with this simple approach, different objects get blended together (because depth writes are disabled), while I just want a single object to be blended with the different light contributions on it, but still to obscure the objects behind it. How can I do this? Is a Z pre-pass the solution? Any suggestion will be very appreciated. Thanks.
These are the blending and depth/stencil states I use in my HLSL effect:
DepthStencilState NoDepthWritesDSS
{
DepthEnable = true;
DepthWriteMask = Zero;
StencilEnable = true;
StencilReadMask = 0xff;
StencilWriteMask = 0xff;
FrontFaceStencilFunc = Always;
FrontFaceStencilPass = Incr;
FrontFaceStencilFail = Keep;
BackFaceStencilFunc = Always;
BackFaceStencilPass = Incr;
BackFaceStencilFail = Keep;
};
BlendState BlendingAddBS
{
AlphaToCoverageEnable = false;
BlendEnable[0] = true;
SrcBlend = ONE;
DestBlend = ONE;
BlendOp = ADD;
SrcBlendAlpha = ZERO;
DestBlendAlpha = ZERO;
BlendOpAlpha = ADD;
RenderTargetWriteMask[0] = 0x0F;
};
There are several options for handling multiple lights. If you want to implement it using multipass rendering, a depth pre-pass is your best option (you then draw again using a LESS_EQUAL comparison on your depth state).
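In effect-file terms, the two depth/stencil states could look roughly like this (a sketch; the state names are mine). The pre-pass fills the depth buffer, then each additive light pass tests against it read-only:

// Pre-pass: fill the depth buffer only (disable color writes for this
// pass, e.g. via RenderTargetWriteMask[0] = 0x00 in its blend state).
DepthStencilState DepthPrePassDSS
{
    DepthEnable = true;
    DepthWriteMask = All;
    DepthFunc = Less;
};
// Light passes: depth read-only; LESS_EQUAL accepts fragments lying
// exactly at the pre-pass depth, so only the front-most surface blends.
DepthStencilState LightPassDSS
{
    DepthEnable = true;
    DepthWriteMask = Zero;
    DepthFunc = Less_Equal;
};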
This approach will most likely be quite inefficient with a high number of lights/objects, though.
I recommend this article, which explains how to render several lights and has several interesting implementations. The compute-tile version will not work in DirectX 10, but the geometry-sprite version can be ported easily (I have a DX9 version of it).
If you still want forward rendering, there is also the light-indexed technique; there is an implementation example here.
In my GLUT application I'm simulating a plane with the camera. When the plane's speed is low, I intend to have the nose start to point towards the ground as the camera falls. My first instinct was to just change the pitch until it pointed downwards at -90 degrees. However, I can't just change the pitch, because if the plane is tilted on its side or upside down, it would not be changing direction towards the ground.
Now I'm trying to do a rough simulation of this by shifting 'lookAt.y' downwards. To do this, I am trying to get all the current camera coordinates that I use to set the camera (eye.x, eye.y, eye.z, look.x, look.y, look.z, up.x, up.y, up.z), and then call set again with the new, modified values.
I've been working with Camera.cpp and Camera.h to control my camera functions. They can be found here
After adding methods to get all the values, only the eye values are actually updated when various camera motions are made. I guess my question is: how do I retrieve these values?
The glLoadMatrix call is in this function:
void Camera::setModelViewMatrix(void)
{   // load the modelview matrix with the existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    look.x = u.y; look.y = v.y; look.z = n.y;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
Is there a way to get the 'eye', 'lookAt', and 'up' values from the matrix here, or should I do something else to get these values?
Thanks in advance for your help!
The camera class you link to is not an actual OpenGL class, but it should be simple enough to work with.
The function quoted just takes the current values of the camera object and sends them to OpenGL. If you look at the camera's set function, you can see how the program calculates the values it actually stores.
The eye value is stored directly. The lookAt value is just the value of (eye - n), by vector math. The up value is the hardest, but if I remember my vector math correctly, I believe that up = (n cross u).
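As a sketch, accessors added to that Camera class could look like this (the method names are mine; this assumes the class's usual convention that n points from the look-at point back toward the eye, with v = n x u as the camera's up):

// Hypothetical accessors for the Camera class linked above.
Point3 Camera::getEye() const { return eye; } // stored directly

Point3 Camera::getLookAt() const
{
    // n points from the look-at point back toward the eye, so look = eye - n.
    return Point3(eye.x - n.x, eye.y - n.y, eye.z - n.z);
}

Vector3 Camera::getUp() const
{
    // up is the cross product n x u, written out component by component.
    return Vector3(n.y * u.z - n.z * u.y,
                   n.z * u.x - n.x * u.z,
                   n.x * u.y - n.y * u.x);
}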