Computation of fault interpretation and polyline intersections - Ocean

I am planning to build functionality that can test whether a borehole crosses a fault. My first idea was to make a workstep component that takes a Borehole and a Fault Interpretation as input and returns the number of intersections. I have already made a workstep that checks whether a fault interpretation intersects a surface. The core of this function is the following:
// Use the project's coordinate reference system and SI units
ICoordinateReferenceSystem inputCRS = PetrelProject.PrimaryProject.CoordinateReferenceSystem;
SpatialUnitsPolicy unitsPolicy = SpatialUnitsPolicy.AllDataInSI;
SpatialContext spatialCtx = new SpatialContext(inputCRS, unitsPolicy);

ISurfaceIntersectionService sis = CoreSystem.GetService<ISurfaceIntersectionService>(arguments.Surface);

// Count the intersections between the surface and every fault polyline
foreach (FaultInterpretationPolyline p in arguments.Fault.GetPolylines())
{
    IEnumerable<PolylineSurfaceIntersection> intersections =
        sis.GetSurfacePolyLineIntersection(arguments.Surface, p.Polyline);
    foreach (PolylineSurfaceIntersection intersection in intersections)
    {
        arguments.NumberOfIntersections++;
    }
}
The above works fine, and I was thinking I could make something along the same lines to compute the intersection between a polyline (well trajectory) and a surface generated from the collection of polylines representing the fault interpretation. The key question is: is there a way to get or generate a surface from a collection of polylines? The fault interpretation can be displayed as a (triangulated) surface; is this surface accessible from the API? The surface returned from the API must be usable as an argument to ISurfaceIntersectionService. If this is not possible through the Ocean API, could the user prepare the fault interpretation up front, making surfaces from the fault interpretations? Or maybe there is a completely different approach to solve the above in an efficient way?

The problem you will have is the creation of the surface. Currently you can only create a RegularHeightFieldSurface, which is a surface whose points lie on a regular lattice. A fault interpretation will not normally fit this model, as its points are not picked on a regular lattice. Therefore creating a surface from a set of fault interpretation picks is the problem.


Redeclaring two medium packages in one system component

I am new to Modelica and don't have much experience with it, but I have the basics. I am trying to model a microfluidic network. The network consists of two sources, water and oil, controlled by two valves. The flows of the two media meet at a T-junction and then go into a tank or chamber. I don't care about the fluid properties of the mixture, because that is not my purpose. My question is: how do I redeclare two medium packages (water and oil) in one system component, such as the T-junction or a tank, in order to simulate the system? In my real model the two media never meet, because each medium passes through the channels at a different time.
I attached the model with this message. Here's the link.
https://www.dropbox.com/s/yq6lg9la8z211uc/twomediumsv2.zip?dl=0
Thanks for the help.
I don't think you can redeclare a medium during simulation. In your case (where you don't need the mixing of the two fluids) you could create a new medium, for instance called OilWaterMixture, extending from Modelica.Media.Interfaces.PartialMedium.
If you look into the code of PartialMedium you'll see that it contains a lot of partial ("empty") functions that you should fill in in your new medium package. For example, in OilWaterMixture you should extend the function specificEnthalpy_pTX to return the specific enthalpy of the water/oil mixture given by the mass fraction vector X. This could be done by adding the following function to the OilWaterMixture package:
redeclare function extends specificEnthalpy_pTX "Return specific enthalpy"
  import Oil = Modelica.Media.Incompressible.Examples.Essotherm650;
  import Water = Modelica.Media.Water.StandardWater;
protected
  SpecificEnthalpy h_oil;
  SpecificEnthalpy h_water;
algorithm
  h_oil := Oil.h_pT(p, T);
  h_water := Water.specificEnthalpy_pT(p, T);
  // Modelica arrays are 1-indexed: X[1] = oil fraction, X[2] = water fraction
  h := X[1]*h_oil + X[2]*h_water;
end specificEnthalpy_pTX;
The mass fraction vector X is defined in PartialMedium; in OilWaterMixture you must specify that it has two elements.
Again, since you are not going to actually use the mixing properties, only the mass fraction vectors {0,1} or {1,0}, the simple linear mixing equation should be adequate.
When you use OilWaterMixture in the various components, the error log will tell you which medium functions they need. So you probably don't need to extend all the partial functions in PartialMedium.

Generate OPEN surface mesh from a set of 3D points

I have a set of points on an OPEN surface in 3D space.
I have identified a subset of points which lie on the boundary.
I want to generate a triangulation of those points which gives me an open surface and keeps my selected points on the boundary.
All references I found deal with (sometimes?) closed surfaces, e.g., CGAL.
See examples below.
In addition, some CGAL algorithms require oriented normals at each point, which I do not have.
Is there an available algorithm and code for this? (either CGAL Advancing_front_surface_reconstruction, properly handled, or any other)
See also this and this.
Example 1
I compiled and ran the example reconstruction_surface_mesh.cpp from examples/Advancing_front_surface_reconstruction out of the box (it uses the file half.xyz as input data points), and I obtained a closed surface.
I would like to get rid of the few triangles that close the surface.
I tried adding an extra point at the end of half.xyz, and I got an open surface.
So far, with what I tested, I do not know:
How to indicate an open surface.
How to indicate which vertices lie on the boundary.
If this set is non-empty (and it should have at least three vertices), that would imply an open surface.
Ideally, one would have a workflow which works without manual intervention.
Example 2
I compiled and ran the example boundaries.cpp out of the box (it also uses the file half.xyz as input data points).
The output is:
0 outliers:
Boundaries:
boundary
0.178269 0.438589 0.129521
0.0795598 0.419465 0.244812
0.0549683 0.377617 0.3119
-0.0295721 0.360972 0.329075
-0.111332 0.334417 0.342617
-0.186667 0.2953 0.346683
-0.2719 0.16555 0.375017
-0.336304 0.117058 0.339323
-0.393517 0.0775 0.285917
-0.421419 -0.126854 0.215271
-0.395217 -0.214417 0.20015
-0.354783 -0.2953 0.170767
-0.237067 -0.395867 0.172233
-0.178246 -0.438588 0.129553
0.0227767 -0.4873 0.0700833
0.220338 -0.438589 -7.23321e-06
0.293 -0.395867 0
0.36025 -0.334417 0
0.418077 -0.258382 6.0303e-05
0.46025 -0.17265 0
0.484417 -0.0425167 -0.0763333
0.485067 0.03875 -0.0782667
0.471547 0.117058 -0.076827
0.44605 0.197567 -0.0700833
0.4092 0.27125 -0.0433167
0.364885 0.329645 0
0.313633 0.377617 0.0441167
0.2509 0.41425 0.0879333
I did not find out how to use this to automatically remove the triangles that keep my target boundary vertices from lying on the boundary.
Moreover, the output seems to be the list of boundary points, without the "spurious" triangles (I am not sure). I would like to be able to specify this list.
The CGAL advancing front reconstruction algorithm does generate open surfaces in general.
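For example, here is a minimal sketch along the lines of the CGAL examples (reconstruction_surface_mesh.cpp / boundaries.cpp): passing a priority functor that rejects any candidate facet whose perimeter exceeds a bound keeps the advancing front from bridging large gaps, so holes and the outer boundary stay open. The input file name and the bound value 0.5 are assumptions; the bound has to be tuned to your point spacing.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Advancing_front_surface_reconstruction.h>
#include <algorithm>
#include <array>
#include <cmath>
#include <fstream>
#include <iterator>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point_3;
typedef std::array<std::size_t, 3> Facet; // indices into the point vector

// Priority functor: a candidate facet whose perimeter exceeds 'bound'
// is never created, so the front cannot close large gaps.
struct Perimeter {
    double bound;
    Perimeter(double b) : bound(b) {}
    template <typename AdvancingFront, typename Cell_handle>
    double operator()(const AdvancingFront& adv, Cell_handle& c, const int& index) const {
        if (bound == 0) // no filtering: default advancing-front priority
            return adv.smallest_radius_delaunay_sphere(c, index);
        double d = std::sqrt(CGAL::squared_distance(c->vertex((index + 1) % 4)->point(),
                                                    c->vertex((index + 2) % 4)->point()));
        if (d > bound) return adv.infinity();
        d += std::sqrt(CGAL::squared_distance(c->vertex((index + 2) % 4)->point(),
                                              c->vertex((index + 3) % 4)->point()));
        if (d > bound) return adv.infinity();
        d += std::sqrt(CGAL::squared_distance(c->vertex((index + 1) % 4)->point(),
                                              c->vertex((index + 3) % 4)->point()));
        if (d > bound) return adv.infinity();
        return adv.smallest_radius_delaunay_sphere(c, index);
    }
};

int main() {
    std::ifstream in("half.xyz"); // assumed input file
    std::vector<Point_3> points;
    std::copy(std::istream_iterator<Point_3>(in), std::istream_iterator<Point_3>(),
              std::back_inserter(points));
    std::vector<Facet> facets;
    Perimeter priority(0.5); // assumed bound; tune to your data
    CGAL::advancing_front_surface_reconstruction(points.begin(), points.end(),
                                                 std::back_inserter(facets), priority);
    // Any facet edge without a neighbouring facet across it is a boundary edge.
    return 0;
}

Note this does not let you force specific vertices onto the boundary; it only prevents the front from closing the surface over gaps in the data.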

Does the ordering of mesh elements change from run to run for a constrained triangulation under CGAL?

I iterate over finite_vertices, finite_edges and finite_faces after generating a constrained Delaunay triangulation with Lloyd optimization. I am on VS2012 using CGAL 4.12 in release mode. I see that for a given case the finite_vertices list is repeatable (as is the vertex list under finite_faces); however, the ordering of the edges in finite_edges seems to change from run to run.
for (auto eit = cdtp.finite_edges_begin(); eit != cdtp.finite_edges_end(); ++eit)
{
    const auto isConstrainedEdge = cdtp.is_constrained(*eit);
    auto& cFace = *(eit->first);
    auto cwVert = cFace.vertex(cFace.cw(eit->second));
    auto ccwVert = cFace.vertex(cFace.ccw(eit->second));
    // ... use cwVert / ccwVert ...
}
I use the above code snippet to extract the vertex list, and the pair of vertices reported for a given edge changes from run to run.
Any help resolving this is appreciated, as I am looking for consistent behavior in the code. My triangulation involves many line constraints on a two-dimensional domain.
I was told it is likely deterministic in practice, but there is no guarantee of order. IIRC the documentation says the traversal order is not guaranteed, so I think it's best to assume the iterators' traversal is not deterministic and could change.
You could use any of the _info extensions to embed information into the face, edge, etc. (a hash, perhaps?) which you could then check against to detect a change.
In my use case, I wanted to traverse the mesh in parallel and OpenMP didn't support the iterators. So I hold a vector of the Face_handles in memory, which I can then easily index over. In conjunction with the _info data, you could use this to build a vector of edges, faces, etc. with a guaranteed order, using unique information in the ->info() field.
Another _info example.
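For instance, here is a minimal sketch of the _info approach for edges (the type names are assumptions; adapt them to your own traits, face base, and the _plus_2 class if that is what you use): give each vertex a stable index, describe each edge by its two vertex indices, and sort.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Triangulation_vertex_base_with_info_2.h>
#include <algorithm>
#include <utility>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_vertex_base_with_info_2<unsigned, K> Vb;
typedef CGAL::Constrained_triangulation_face_base_2<K> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<K, Tds> CDT;

std::vector<std::pair<unsigned, unsigned>> ordered_edges(CDT& cdt) {
    // Give every vertex a stable index via its info() field.
    unsigned idx = 0;
    for (auto vit = cdt.finite_vertices_begin(); vit != cdt.finite_vertices_end(); ++vit)
        vit->info() = idx++;

    // Describe each edge by its two vertex indices, smaller index first.
    std::vector<std::pair<unsigned, unsigned>> edges;
    for (auto eit = cdt.finite_edges_begin(); eit != cdt.finite_edges_end(); ++eit) {
        const auto& face = *(eit->first);
        unsigned a = face.vertex(face.cw(eit->second))->info();
        unsigned b = face.vertex(face.ccw(eit->second))->info();
        edges.emplace_back(std::min(a, b), std::max(a, b));
    }

    // Sorting gives a canonical order that does not depend on the
    // triangulation's internal edge traversal.
    std::sort(edges.begin(), edges.end());
    return edges;
}

Since you observed that finite_vertices is repeatable, indexing the vertices this way is stable across runs; if it were not, you could instead assign the info() values at insertion time.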

Custom EQ AudioUnit on iOS

The only effect AudioUnit on iOS is the "iTunes EQ", which only lets you use EQ presets. I would like to use a customized EQ in my audio graph.
I came across this question on the subject and saw an answer suggesting using this DSP code in the render callback. This looks promising and people seem to be using it effectively on various platforms. However, my implementation has a ton of noise even with a flat EQ.
Here's my 20 line integration into the "MixerHostAudio" class of Apple's "MixerHost" example application (all in one commit):
https://github.com/tassock/mixerhost/commit/4b8b87028bfffe352ed67609f747858059a3e89b
Any ideas on how I could get this working? Any other strategies for integrating an EQ?
Edit: Here's an example of the distortion I'm experiencing (with the EQ flat):
http://www.youtube.com/watch?v=W_6JaNUvUjA
In the code in EQ3Band.c, the filter coefficients are used without being initialized. The init_3band_state method initializes just the gains and frequencies, but the coefficients themselves (es->f1p0 etc.) are not initialized and therefore contain garbage values. That might be the reason for the bad output.
This code seems wrong in more than one way.
A digital filter is normally represented by the filter coefficients, which are constant; the filter's inner state history (since in most cases the output depends on history); and the filter topology, which is the arithmetic used to calculate the output given the input and the filter (coefficients + state history). In most cases, and certainly when filtering audio data, you expect to get zeros at the output if you feed zeros to the input.
The problems in the code you linked to:
The filter coefficients are changed in each call to the processing method:
es->f1p0 += (es->lf * (sample - es->f1p0)) + vsa;
The input sample is usually multiplied by the filter coefficients, not added to them. It doesn't make any physical sense - the sample and the filter coeffs don't even have the same physical units.
If you feed in 0's, you do not get 0's at the output, just some values which do not make any sense.
I suggest you look for other code; the alternative is debugging this code, which would be harder.
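For reference, here is a minimal sketch of how a digital filter is usually structured (a generic direct form I biquad, not a drop-in replacement for the 3-band EQ code): the coefficients are set once at design time and never touched by the processing loop; only the state history changes per sample.

// Direct form I biquad. Coefficients are fixed after filter design;
// process() only updates the state history.
struct Biquad {
    double b0, b1, b2, a1, a2;  // coefficients: constant while filtering
    double x1 = 0, x2 = 0;      // input history  (state)
    double y1 = 0, y2 = 0;      // output history (state)

    double process(double x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;        // shift the state; coefficients untouched
        y2 = y1; y1 = y;
        return y;               // zero input and zero state give zero output
    }
};

With b0 = 1 and all other coefficients 0 the filter is an exact pass-through, which is a useful sanity check: a "flat" setting should be transparent, not noisy.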
In addition, you'd benefit from reading about digital filters:
http://en.wikipedia.org/wiki/Digital_filter
https://ccrma.stanford.edu/~jos/filters/

Unity 3D Physics

I'm having trouble with physics in Unity 3D. I want my ball to bounce off walls and go in another direction. When the ball hits the wall it just bounces straight back. I have tried changing the direction to be orthogonal to the direction it hits the wall, but it doesn't change direction. Because of this, the ball just keeps hitting the wall and bouncing straight back.
Secondly, sometimes the ball goes through the wall. The walls have box colliders, while the ball has a sphere collider. They all have Continuous Dynamic as the collision detection mode.
Here's a link to a similar thread:
http://forum.unity3d.com/threads/22063-I-shot-an-arrow-up-in-the-air...?highlight=shooting+arrow
Personally, I would code the rotation using LookAt as GargarathSunman suggests in this link, but if you want to do it with physics, you'll probably need to build the javelin in at least a couple of parts, as the others suggest in the link, and add different drag and angular drag values to each part, perhaps density as well. If you threw a javelin in a vacuum, it would never land point down, because air drag plays such an important part (all things fall at the same rate regardless of mass, thank you Sir Isaac Newton). It's a difficult simulation for the physics engine to get right.
Maybe try to get the contact point between your sphere and your wall, then take your rigidbody's velocity and reflect it about the collision point's normal.
An example of a script to do that (put this script on a wall with a collider):
C# script:
using UnityEngine;

public class WallBumper : MonoBehaviour
{
    private Vector3 _revertDirection;
    public int speedReflectionVector = 2;

    /***********************************************
     * name : OnCollisionEnter
     * return type : void
     * Makes every GameObject with a Rigidbody bounce against this platform
     ***********************************************/
    void OnCollisionEnter(Collision e)
    {
        ContactPoint cp = e.contacts[0];
        // Reflect the incoming velocity about the contact normal
        // (Vector3.Reflect gives the same result for cp.normal and -cp.normal)
        _revertDirection = Vector3.Reflect(e.rigidbody.velocity, cp.normal * -1);
        e.rigidbody.velocity = _revertDirection.normalized * speedReflectionVector;
    }
}
I recently had an issue with a rocket going through targets due to speed, and even with continuous dynamic collision detection I couldn't keep this from happening a lot.
I solved it using the script "DontGoThroughThings" posted on wiki.unity3d.com. It raycasts between the previous and current positions and then ensures the frame ends with the colliders touching, so that the OnTrigger messages fire. It has worked every time since, and it's just a matter of attaching the script, so it's super easy to use.
I think the physics answer is, as others have suggested, to use multiple components with different drag, although typically you only want a single Rigidbody on the parent. Instead of setting the direction with transform.LookAt, you could calculate it using Quaternion.LookRotation from the rigidbody.velocity, then use Vector3.Angle to find out how far off you are. The greater the angle difference, the more force should be applied, via Rigidbody.AddTorque. Maybe use Sin(angleDifference) * a constant, so less torque is applied as you approach the proper rotation.
Here is some code I used on my rocket, although you'll have to substitute some things, as I was pointing toward a fixed target and you'll want to use your velocity.
var rotationDirection = Quaternion.LookRotation(lockedTarget.transform.position - this.transform.position);
var anglesToGo = Vector3.Angle(this.transform.rotation.eulerAngles, rotationDirection.eulerAngles);
if (anglesToGo > RotationVelocity)
{
    var rotationDirectionToMake = (rotationDirection * Quaternion.Inverse(this.transform.rotation)).eulerAngles.normalized * RotationVelocity;
    transform.Rotate(rotationDirectionToMake);
}