I have to model a "seven-body-mechanism" in Modelica:
The initial angles are given:
Starting with the left side (K5 and K7):
The Modelica Model:
Is it possible to model, for example, K5 as one BodyShape and just specify its center of mass?
Where can I set the initial angles for K5 and K7? In the model "revolute2" it is possible to set only one "phi_start".
Which models should I use for the fixed points B and O? There is this parameter: "Position vector from world frame to frame_b, resolved in world frame".
Edit: I think I can fix the problem with the two different angles - I just added another revolute:
The next problem I have: how do I model the revolute joint where K5 and K4 meet? Should I also use two revolutes there? And how do I model the fixed points B and O? A is fixed to the origin, but I am not sure which position vectors to use for B and O.
I always get the error "all forces cannot be uniquely calculated".
Thank you very much for your help
Have a look at the Modelica.Mechanics.MultiBody.Examples.Loops.PlanarLoops_analytic example; it contains an example of the K4, K5, K6 and K7 mechanism. In that mechanism, set the start value of the Revolute.
Well, the crucial part of the mechanism is the connection between O and B (a planar four-bar linkage), which can be solved using e.g. Modelica.Mechanics.MultiBody.Joints.Assemblies.JointRRR, as demonstrated in Modelica.Mechanics.MultiBody.Examples.Loops.PlanarLoops_analytic.
The binary members K5-K4 and K7-K6 are principally the same and don't change the degrees of freedom of the above-mentioned planar four-bar linkage. So they have to be modelled in the same way (which means revolute2 and revolute6 must each be instantiated twice in your model) and connected similarly to the four-bar linkage once it is properly parameterized and initialized.
Optionally, you can model the mechanism using the PlanarMechanics library.
When I try the bokeh segmentation effect using body-pix#1.0.0, it detects/segments the person (A) in front of the camera. If another person (B) is standing behind, away from A, B is blurred out. If person B comes very close to the contour of A, then person B is also detected. This is the preferred behaviour.
Now when I try body-pix#2.0.0, both person A and person B are detected even though I am using the segmentPerson API. Please note, person B is standing far away from person A, yet both are detected. The advantage I see with 2.0 is that the contour of the detected person is much more accurate and smoother than in 1.0, which had a gap in the contour where the bokeh effect was missing. In 2.0 the contour is more accurate, but multiple people are detected. Is there any parameter I could tweak to restrict detection to a single person and keep the smoother contour?
Thanks
For those who want to know the answer. Source: https://github.com/tensorflow/tfjs/issues/2547
If you want to use BodyPix 2.0 to only blur just a subset of people (e.g. the large people), a quick way would be to use BodyPix 2.0's Multi-Person Segmentation API: https://github.com/tensorflow/tfjs-models/tree/master/body-pix#multi-person-segmentation.
This method returns an array of PersonSegmentation objects. In your case it will be an array of two PersonSegmentation objects: one for Person A and one for Person B.
You could then remove certain people (in your case Person B) from that array and pass the resulting array (with only 1 element: Person A) to the drawBokehEffect https://github.com/tensorflow/tfjs-models/tree/master/body-pix#bodypixdrawbokeheffect.
To automate this process for other cases (3 or more people):
Each PersonSegmentation object has a .pose field that contains the 2D coordinates (in image pixel space) of the person's 17 keypoints. These can be used to compute the smallest bounding box for each person. The bounding-box area can then be used as a criterion to remove small people from the image.
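As a rough sketch of that filtering (TypeScript; the function names blurAllButLargestPerson and keypointBoxArea are my own, and the body-pix calls assume the 2.0 API linked above):

import * as bodyPix from '@tensorflow-models/body-pix';

// Area of the smallest axis-aligned box around a person's 17 keypoints.
function keypointBoxArea(pose: { keypoints: { position: { x: number; y: number } }[] }): number {
  const xs = pose.keypoints.map(k => k.position.x);
  const ys = pose.keypoints.map(k => k.position.y);
  return (Math.max(...xs) - Math.min(...xs)) * (Math.max(...ys) - Math.min(...ys));
}

async function blurAllButLargestPerson(image: HTMLImageElement | HTMLVideoElement, canvas: HTMLCanvasElement) {
  const net = await bodyPix.load();
  const people = await net.segmentMultiPerson(image); // one PersonSegmentation per person
  // Keep only the person with the largest keypoint bounding box (Person A).
  const kept = [...people]
    .sort((a, b) => keypointBoxArea(b.pose) - keypointBoxArea(a.pose))
    .slice(0, 1);
  bodyPix.drawBokehEffect(canvas, image, kept, 15 /* backgroundBlurAmount */, 3 /* edgeBlurAmount */);
}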
I am new to Modelica, and I don't have much experience with it, but I know the basics, of course. I am trying to model a microfluidic network. The network consists of two sources, water and oil, controlled by two valves. The two media meet at a T-junction and then flow into a tank or chamber. I don't care about the fluid properties of the mixture because that is not my purpose. My question is: how do I redeclare two medium packages (water and oil) in one system component, such as the T-junction or a tank, in order to simulate the system? In my real model the two media never actually meet, because each medium passes through the channels at a different time.
I attached the model with this message. Here's the link.
https://www.dropbox.com/s/yq6lg9la8z211uc/twomediumsv2.zip?dl=0
Thanks for the help.
I don't think you can redeclare a medium during simulation. In your case (where you don't need the mixing of the two fluids) you could create a new medium, for instance called OilWaterMixture, extending from Modelica.Media.Interfaces.PartialMedium.
If you look into the code of PartialMedium you'll see that it contains a lot of partial ("empty") functions that you should implement in your new medium model. For example, in OilWaterMixture you should extend the function specificEnthalpy_pTX to return the specific enthalpy of your water/oil mixture for a given composition (the mass fraction vector X). This could be done by adding the following function to the OilWaterMixture package:
// Short package aliases at the level of the OilWaterMixture package:
package Oil = Modelica.Media.Incompressible.Examples.Essotherm650;
package Water = Modelica.Media.Water.StandardWater;

redeclare function extends specificEnthalpy_pTX "Return specific enthalpy"
protected
  SpecificEnthalpy h_oil "Specific enthalpy of the oil fraction";
  SpecificEnthalpy h_water "Specific enthalpy of the water fraction";
algorithm
  h_oil := Oil.h_pT(p, T);
  h_water := Water.specificEnthalpy_pT(p, T);
  // Modelica arrays are 1-based: X[1] is the oil fraction, X[2] the water fraction
  h := X[1]*h_oil + X[2]*h_water;
end specificEnthalpy_pTX;
The mass fraction vector X is defined in PartialMedium, and in OilWaterMixture you must specify that it has two elements.
Again, since you are not going to actually use the mixing properties, but only the mass fraction vectors {0,1} or {1,0}, the simple linear mixing equation should be adequate.
When you use OilWaterMixture in the various components, the error log will tell you which medium functions they need. So you probably don't need to extend all the partial functions in PartialMedium.
I am using the Envelope_3 package of CGAL-4.9.1 and I need to compute an upper envelope where the resulting envelope diagram (Envelope_diagram_2<EnvTraits>) could have edges of three different types:
segments
rays
parabolic arcs (conic arcs)
The three provided models of EnvelopeTraits_3 are not enough for this.
I therefore need to create my own EnvTraits (which has to be a model of the concept EnvelopeTraits_3).
For now, I have made something like the already provided Env_sphere_traits_3<ConicTraits> model, which gives me both parabolic arcs and segments (I just use straight arcs).
The problem arises because I also need to be able to use rays. How could I do this? Is there a traits class that I can extend (just as I'm doing right now with Arr_conic_traits_2) that provides X_monotone_curve_2s that can be of the three types I need?
I found the Arr_polycurve_traits_2 class, hoping that it would allow subcurves of different types to be stored, but it actually only allows polycurves whose pieces are all of the same kind (linear, Bézier, conic, circular...).
What you need is a model of the EnvelopeTraits_3 concept and of the ArrangementOpenBoundaryTraits_2 concept. Among all the traits classes provided by the "2D Arrangements" package, only instances of the templates Arr_linear_traits_2, Arr_rational_function_traits_2, and Arr_algebraic_segment_traits_2 are models of the latter concept.
I suggest that you develop something like Env_your_object_traits_3<AlgebraicTraits_2>, where the template parameter AlgebraicTraits_2 can be substituted with an instance of Arr_algebraic_segment_traits_2.
Efi
I have two data models which are represented by the following classes:
1) ImagesSet - an object that owns 2DImages. Each 2DImage has its own position (origin (3DPoint), x- and y-axes (3DVector), and dimensions along the x and y axes (in pixels)), but all images share the same pixel size (in mm, for example) and the same angle between the x and y axes (90 degrees).
This object has following methods(in pseudo code):
AddImage(2DImage);
RemoveImage(ImageIndex);
number GetNumberOfImages();
2DImage Get2DImage(ImageIndex);
2) 3DImage - an object that is similar to the first, but with the following restriction:
it can store only 2D images with the same x- and y-axes and the same dimensions along the x and y axes.
Is it correct in this case to derive 3DImage from ImagesSet?
From my point of view 3DImage "is a" ImagesSet (but with small restrictions)
Could I apply the Liskov substitution principle here?
In this case, if we try to add an image with different x and y axes, the AddImage method will either throw an exception or return an error.
Thanks in advance,
Sergey
I agree with maxim1000 that LSP will be violated because the derived class adds restrictions that are not present in the base class. If you take a close look at your description, you will notice that the question can be turned upside down: can ImagesSet derive from 3DImage?
Your situation is somewhat similar to the ellipse-circle problem. Which one derives from the other? Is a circle an ellipse with a constraint, or is an ellipse a circle with an additional radius? The point is that both are wrong. If you constrain an ellipse to equal radii, then a client which attempts to set different values would receive an error.
Otherwise, if we say that an ellipse is just a less constrained circle, we get a more subtle mistake. Suppose that shapes may not breach the boundaries of the screen. Now suppose that a circle is replaced with an ellipse. Depending on which coordinate was tested, the shape might break out of the screen area without any change to the client code. That is an exact violation of LSP.
The conclusion is: circle and ellipse are separate classes, and 3DImage and ImagesSet are separate classes.
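For concreteness, here is a minimal sketch of that subtle case (TypeScript; the class and function names are invented for illustration):

class Ellipse {
  constructor(public rx: number, public ry: number) {}
  setRx(rx: number) { this.rx = rx; }
  setRy(ry: number) { this.ry = ry; }
}

class Circle extends Ellipse {
  // The circle invariant forces both radii to change together...
  setRx(r: number) { this.rx = r; this.ry = r; }
  setRy(r: number) { this.rx = r; this.ry = r; }
}

// Client code written against Ellipse silently misbehaves for a Circle:
function fitToScreen(e: Ellipse) {
  e.setRx(16);
  e.setRy(9);
  // For a Circle, rx is now 9, not the 16 the client just set - the LSP violation.
}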
Maybe it's just me, but whenever I hear "derive or not derive", my first reaction is "not derive" :)
Two reasons in this case:
LSP is violated exactly because of those "small restrictions". As long as you have an AddImage in your base class which allows adding an image with any orientation, 3DImage is not an ImagesSet. There will be no way for algorithms to state that they need this feature (and comments are not a good place :) ), so you'll have to rely on run-time checks. It's still possible to program this way, but it will be one more overhead for developers.
Whenever you create an abstraction, it's important to understand why exactly it's created. With derivation you implicitly create an abstraction - the interface of 3DImage. Instead, it's better to create this abstraction explicitly: create an interface class, list there the methods useful for algorithms able to work on both data structures, and make both ImagesSet and 3DImage implement that interface, possibly adding some other methods.
P.S.
And likely AddImage will become one of those added methods - different in ImagesSet and 3DImage, but that depends...
Dear maxim1000 and sysexpand,
Thanks for the answers. I agree with you. It is clear now that LSP is violated and in this case I can't derive 3DImage from ImagesSet.
I need to redesign the solution in the following way:
2DImage will contain:
2DDimensions
PixelSize(in mm)
PixelData
2DImageOrientated will be derived from 2DImage and will contain new data:
3DPoint origin,
3DVector x- and y-axes
I will create pure interface IImagesSet:
number GetNumberOfImages()
RemoveImage(ImageIndex)
2DImageOrientated Get2DImage(ImageIndex)
ImagesSet will be derived from IImagesSet and will contain the following:
vector<2DImageOrientated>
Add2DImage(2DImageOrientated)
number GetNumberOfImages()
RemoveImage(ImageIndex)
2DImageOrientated Get2DImage(ImageIndex)
3DImage will also be derived from IImagesSet and will contain the following:
vector<2DImageOrientated>
Add2DImage(2DImage)
SetOrigin(3DPoint)
SetXAxis(3DVector)
SetYAxis(3DVector)
number GetNumberOfImages()
RemoveImage(ImageIndex)
2DImageOrientated Get2DImage(ImageIndex)
In this case I think LSP is not violated.
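A minimal sketch of that interface-based design (TypeScript for brevity; type and member names follow the outline above, and the bodies are only illustrative):

type Point3D = { x: number; y: number; z: number };
type Vector3D = Point3D;

// Plain 2D image: dimensions, pixel size, pixel data (details elided).
interface I2DImage { widthPx: number; heightPx: number; pixelSizeMm: number; }

// 2DImageOrientated adds spatial placement on top of the plain image.
interface I2DImageOrientated extends I2DImage {
  origin: Point3D;
  xAxis: Vector3D;
  yAxis: Vector3D;
}

// Common interface: deliberately no Add method, since its precondition
// differs between the two containers.
interface IImagesSet {
  getNumberOfImages(): number;
  removeImage(index: number): void;
  get2DImage(index: number): I2DImageOrientated;
}

class ImagesSet implements IImagesSet {
  private images: I2DImageOrientated[] = [];
  add2DImage(img: I2DImageOrientated) { this.images.push(img); } // any orientation allowed
  getNumberOfImages() { return this.images.length; }
  removeImage(index: number) { this.images.splice(index, 1); }
  get2DImage(index: number) { return this.images[index]; }
}

class Image3D implements IImagesSet {
  private images: I2DImageOrientated[] = [];
  private origin: Point3D = { x: 0, y: 0, z: 0 };
  private xAxis: Vector3D = { x: 1, y: 0, z: 0 };
  private yAxis: Vector3D = { x: 0, y: 1, z: 0 };
  setOrigin(p: Point3D) { this.origin = p; }
  setXAxis(v: Vector3D) { this.xAxis = v; }
  setYAxis(v: Vector3D) { this.yAxis = v; }
  // Takes a plain 2DImage: the shared placement comes from the container itself.
  add2DImage(img: I2DImage) {
    this.images.push({ ...img, origin: this.origin, xAxis: this.xAxis, yAxis: this.yAxis });
  }
  getNumberOfImages() { return this.images.length; }
  removeImage(index: number) { this.images.splice(index, 1); }
  get2DImage(index: number) { return this.images[index]; }
}

Algorithms that only read or remove images can take an IImagesSet and work with either container; only the code that inserts images needs the concrete class, so neither Add precondition leaks into the shared interface.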
I have a projectile that I would like to pass through specific coordinates at the apex of its path. I have been using a superb equation that giogadi outlined here, plugging the velocity values it produces into Chipmunk's cpBodyApplyImpulse function.
The equation has one drawback that I haven't been able to work around: it only works when the coordinates I want to hit have a y value higher than the cannon (where my projectile starts). This means that I can't shoot at a downward angle.
Can anybody help me find a suitable equation that works no matter where the target is in relation to the cannon?
As pointed out above, there isn't any way to make the apex lower than the height of the cannon (without making gravity work backwards). However, it is possible to make the projectile pass through a point below the cannon; the equations are all here. The equation you need to solve is:
angle = arctan((v^2 [+-] sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x))
where you choose a velocity v and plug in the x and y positions of the target, assuming the cannon is at (0,0). The [+-] means you can choose either root. If the argument of the square root is negative (an imaginary root), you need a larger velocity. So, if you are "in range", you have two possible angles for any particular velocity (except at maximum range, the 45-degree case, where the two roots give the same answer).
I suspect one trajectory will tend to 'look' much more sensible than the other, but that's something to play around with once you have something working. You may want to stick with the apex grazing code for the cases where the target is above the cannon.
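A minimal sketch of that equation in code (TypeScript; the function and parameter names are mine):

// Returns the two launch angles (radians) whose trajectory passes through
// (x, y) relative to the cannon, or null when the target is out of range at speed v.
function launchAngles(v: number, x: number, y: number, g = 9.81): [number, number] | null {
  const disc = v ** 4 - g * (g * x * x + 2 * y * v * v);
  if (disc < 0) return null; // imaginary root: a larger velocity is needed
  const root = Math.sqrt(disc);
  // The [-] root gives the flatter direct shot, the [+] root the lobbed one.
  return [Math.atan2(v * v - root, g * x), Math.atan2(v * v + root, g * x)];
}

// Example: target 10 m away and 2 m below the cannon, fired at 15 m/s.
console.log(launchAngles(15, 10, -2)?.map(a => (a * 180) / Math.PI));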