How to apply a force or line load at an angle in ABAQUS - finite-element-analysis

A question about ABAQUS (it may seem very basic): how do I apply a concentrated force or line load at an angle in ABAQUS? ABAQUS only gives the option to specify components in the X, Y, Z directions for a concentrated force. Is resolving the force into components the only option? Can anyone comment?

It's a simple matter of understanding that force is a vector quantity. Given the known angle(s), calculate the components in the coordinate system of your choice.
You might really be asking, "If I know that my force is normal to a surface in its local coordinate system, how do I calculate its components in the global (x, y, z) system?"
If that's your real question, it's just a vector transformation from the surface-normal coordinate system to the global (x, y, z) system. The surface coordinate system should be (n, t, z), where n is the unit vector normal to the surface, z is the unit vector out of the plane, and t is the unit vector tangential to the surface defined by the cross product t = z × n.
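For the common case of a force of known magnitude applied at an angle measured in the X-Y plane, resolving it is just basic trigonometry. Here is a minimal Python sketch (the function name and angle convention are my own, not from the question) that produces the components to enter for the concentrated force:

import math

def resolve_force(magnitude, angle_deg):
    """Resolve a planar force given at an angle from the global X axis
    (measured in the X-Y plane) into Cartesian components."""
    theta = math.radians(angle_deg)
    fx = magnitude * math.cos(theta)
    fy = magnitude * math.sin(theta)
    fz = 0.0  # the force is assumed to lie in the X-Y plane
    return fx, fy, fz

# e.g. a 100 N force at 30 degrees from the X axis
print(resolve_force(100.0, 30.0))  # (86.60..., 50.0, 0.0)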

Concentrated forces are applied in the nodal coordinate system. You can use *TRANSFORM on the nodes. For example, to apply a radial load away from the Z axis (lifted from http://www-h.eng.cam.ac.uk/help/programs/fe/abaqus/faq68/abaqusf7.html):
*TRANSFORM, TYPE=C, NSET=CID1
0., 0., 0., 0., 0., 1.
**
*NSET, NSET=CID1
1, 2, 3, 4, 5
**
**
** radial force
**
*CLOAD, OP=NEW
1, 1, 1.
2, 1, 1.
3, 1, 1.
4, 1, 1.
5, 1, 1.
In general, though, you'll still need vector components to define the rotated coordinate system for *TRANSFORM. But if your angles are uniform in a cylindrical or spherical coordinate system, or many nodes share the same angle with different loads, this will save a lot of tedium.
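If instead you want a rectangular transform rotated by a known angle about the global Z axis, you still have to supply the points defining the local axes. A small Python sketch (my own helper, under the assumption that the *TRANSFORM, TYPE=R data line takes a point on the local 1-axis followed by a point in the local 1-2 plane) to generate that data line:

import math

def rectangular_transform_points(angle_deg):
    """Points a and b for a rectangular transform whose local 1-axis is
    rotated by angle_deg about the global Z axis (my own helper)."""
    t = math.radians(angle_deg)
    a = (math.cos(t), math.sin(t), 0.0)   # point on the local 1-axis
    b = (-math.sin(t), math.cos(t), 0.0)  # point in the local 1-2 plane
    return a, b

a, b = rectangular_transform_points(30.0)
print(", ".join(f"{v:.6f}" for v in (*a, *b)))  # data line for *TRANSFORM, TYPE=R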

Related

Find the furthest y cartesian coordinate for 6 DoF robot in joint coordinates

I've got a robotic arm with 6 DoF. I constrain the x and z Cartesian coordinates and the orientation to be exactly specified. I would like to get the joint coordinates at the Cartesian position [x, y_max, z], where y_max is the maximum y coordinate reachable by the end-effector of the robotic arm.
For example:
I set x to 0.5 and z to 1.0, and I want to find joint coordinates such that, after forward kinematics, the robot's end-effector is at Cartesian coordinates [0.5, maximum reachable coordinate, 1.0].
I know that if I know the Cartesian position and orientation I can find joint coordinates by inverse kinematics and check that the end-effector is at the desired coordinates by forward kinematics. But what if I don't know one of the Cartesian axes, and how far the robot can move along it depends on the robot itself? As far as I know, inverse kinematics can be solved analytically or numerically, but to solve it I need the full target frame.
Moreover, I would like the orientation to depend on the y coordinate (for example, I would like to guarantee that the end-effector is always looking at coordinates [0.5, 0, 0]).
You could use numerical task-based inverse kinematics with tasks such as:
Orientation: the orientation you have specified
Position in (x, z): the coordinates you have specified
Position in y: something very far away
The behavior of a task-based approach (with proper damping) when a target is not feasible is to "stretch" the robot as far as it can without violating its constraints. An example with a humanoid robot and three tasks shows exactly this behavior.
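As a rough sketch of the numerical machinery (not the humanoid example, and not mc_rtc's API), a single damped least-squares step over the stacked tasks could look like this, where each task contributes a Jacobian and an error vector:

import numpy as np

def dls_step(jacobians, errors, damping=1e-2, gains=None):
    """One damped least-squares step over stacked kinematic tasks: returns
    the joint velocity dq that best reduces all task errors at once.
    Here the tasks would be orientation, (x, z) position, and a y target
    pushed very far away."""
    if gains is None:
        gains = [1.0] * len(errors)
    J = np.vstack(jacobians)                               # stack task Jacobians
    e = np.concatenate([k * err for k, err in zip(gains, errors)])
    # dq = J^T (J J^T + lambda^2 I)^-1 e; the damping keeps dq bounded
    # even when a target (e.g. "y very far away") is unreachable.
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, e)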
(for example I would like to guarantee that end-effector is always looking at coordinates [0.5, 0, 0])
This should be possible with a proper task as well. For example, in C++ the mc_rtc framework has a LookAtTask to keep a frame looking at a desired point.

Using cytoscape to generate bezier curve - which parameters should I use?

I'm really confused by cytoscape's parameters and how they map onto Bezier curve generation.
I have several columns of evenly spaced points. I'd like to draw curves that look like this between the points:
http://cubic-bezier.com/#.5,.01,.48,.99
So far I've tried a wide variety of different arguments but none have gotten me very close. I'd appreciate any suggestions.
First, if you want Beziers that aren't automatically bundled, you need to use curve-style: unbundled-bezier.
A control point for an edge is specified relative to the source and target nodes. So you can think of the control points as living in a coordinate system with dimensions (w, d). This maintains the relative shape of the edge as the source and target move.
The w dimension is progress from the source node position (w = 0), going directly towards the target node position (w = 1).
The d dimension is the perpendicular distance away from the line segment formed straight from w = 0 to w = 1.
From the docs:
This is w : control-point-weights : A series of values that weights control points along a line from source to target, e.g. 0.25 0.5 0.75. A value usually ranges on [0, 1], with 0 towards the source node and 1 towards the target node — but larger or smaller values can also be used.
This is d : control-point-distances : A series of values that specify for each control point the distance perpendicular to a line formed from source to target, e.g. -20 20 -20.
So you just specify [w1, w2, w3, ..., wn] and [d1, d2, d3, ..., dn] with (wi, di) specifying a particular control point.
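Putting that together, here is a minimal edge-style sketch, written as a Python dict in the shape a cytoscape.js stylesheet entry takes (the selector and numbers are my own example values):

# A stylesheet entry mirroring the cytoscape.js style object, shown as a
# Python dict for illustration.
edge_style = {
    "selector": "edge",
    "style": {
        "curve-style": "unbundled-bezier",
        # two control points: one a quarter of the way along, one three
        # quarters of the way along the source -> target line
        "control-point-weights": "0.25 0.75",
        # pushed 40px to opposite sides of that line, giving an S-shaped curve
        "control-point-distances": "-40 40",
    },
}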

How to convert relative GPS coordinates to a "local custom" x, y, z coordinate?

Let's say I know two persons are standing at GPS locations A and B. A is looking at B.
I would like to know B's (x, y, z) coordinates relative to A, where the +y axis is the direction towards B (since A is looking at B) and +z points vertically to the sky (therefore +x is A's right-hand side).
I know how to convert a GPS coordinate to UTM, but in this case a coordinate-system rotation and translation also seem to be needed. I am going to work out the calculation, but before that, is there some code to look at?
I think this must be handled by many applications, but I could not find anything so far.
1. Convert both points to 3D Cartesian.
GPS suggests WGS84, so see How to convert a spherical velocity coordinates into cartesian.
2. Construct the transform matrix with your desired axes.
See Understanding 4x4 homogenous transform matrices. You need 3 perpendicular unit vectors. Y is the view direction, so
Y = normalize(B-A);
One of the axes will most likely be the up vector, so you can use the approximation
Z = normalize(A);
and as the origin you can use point A directly. Now just exploit the cross product to create X perpendicular to both, and then make Y perpendicular to X and Z (so up stays up). For more info see Representing Points on a Circular Radar Math approach.
3. Transform B to B' by that matrix.
Again, the QA linked in #1 shows how to do it; it is a simple matrix/vector multiplication.
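Here is a compact numpy sketch of steps 2 and 3 (my own helper, assuming A and B have already been converted to Cartesian, e.g. ECEF, coordinates):

import numpy as np

def local_frame_coords(A, B):
    """Coordinates of B in a frame at A with +y towards B, +z along the
    local up direction and +x to A's right. A and B are 3-vectors already
    converted to Cartesian (e.g. ECEF) coordinates."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    Y = B - A
    Y /= np.linalg.norm(Y)          # view direction
    Z = A / np.linalg.norm(A)       # approximate up (away from Earth's centre)
    X = np.cross(Y, Z)
    X /= np.linalg.norm(X)          # right-hand side of A
    Y = np.cross(Z, X)              # re-orthogonalise so up stays up
    R = np.vstack([X, Y, Z])        # rows are the local axes in global coords
    return R @ (B - A)              # B expressed in the local frame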

C-Support Vector Classification Comprehension

I have a question regarding a code snippet which I found in a book.
The author creates two categories of sample points. Next, the author fits a model and plots the SVC model on top of the "blobs".
This is the code snippet:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
# create 50 separable points
X, y = make_blobs(n_samples=50, centers=2,
                  random_state=0, cluster_std=0.60)
# fit the support vector classifier model
clf = SVC(kernel='linear')
clf.fit(X, y)
# plot the data
fig, ax = plt.subplots(figsize=(8, 6))
point_style = dict(cmap='Paired', s=50)
ax.scatter(X[:, 0], X[:, 1], c=y, **point_style)
# format plot (format_plot is a helper defined earlier in the book)
format_plot(ax, 'Input Data')
ax.axis([-1, 4, -2, 7])
# get contours describing the model: a 10x10 grid covering the plot area
xx = np.linspace(-1, 4, 10)
yy = np.linspace(-2, 7, 10)
xy1, xy2 = np.meshgrid(xx, yy)
# signed distance of every grid point to the separating hyperplane
Z = np.array([clf.decision_function([t])
              for t in zip(xy1.flat, xy2.flat)]).reshape(xy1.shape)
line_style = dict(levels=[-1.0, 0.0, 1.0],
                  linestyles=['dashed', 'solid', 'dashed'],
                  colors='gray', linewidths=1)
ax.contour(xy1, xy2, Z, **line_style)
The result is the following plot: the two blobs with the separating line (solid) and its margins (dashed).
My question now is: why do we create "xx" and "yy" as well as "xy1" and "xy2"? After all, we want to show the SVC "function" for the X and y data, and if we pass xy1 and xy2, as well as Z (which is also created from xy1 and xy2), to ax.contour, there is no connection to the data with which the SVC model was trained... is there?
Can anybody explain this to me, please, or recommend a simpler way to do this?
Thanks for your answers.
I'll start with short, broad answers. ax.contour() is just one way to plot the separating hyperplane and its "parallel" planes. You can certainly plot it by calculating the plane directly, as in this example.
To answer your last question: in my opinion, this is already a relatively simple (in math and logic) and easy (in coding) way to plot your model. It is especially useful when your separating hyperplane is not mathematically easy to describe (such as with a polynomial or RBF kernel for non-linear separation), as in this example.
To address your second question and comments, and to answer your first question: yes, you're right, xx, yy, xy1, xy2 and Z have only a very limited connection to your (simulated blobs of) data. They are used solely for drawing the hyperplanes that describe your model.
That should answer your questions. But please allow me to give some more details here in case others are not as familiar with the topic as you are. The only connection between your data and xx, yy, xy1, xy2, Z is:
xx, yy, xy1 and xy2 sample an area surrounding the simulated data. Specifically, the simulated data are centered around 2; xx sets the limits to (-1, 4) and yy sets the limits to (-2, 7). You can inspect the "meshgrid" with ax.scatter(xy1, xy2).
Z is a calculation over all the sample points in the "meshgrid". It is the normalized distance from each sample point to the separating hyperplane, and it provides the levels for the contour plot.
ax.contour then uses the "meshgrid" and Z to plot the contour lines. Here are some key points:
xy1 and xy2 are both 2-D arrays specifying the (x, y) coordinates of the surface. They list the sample points of the area row by row.
Z is a 2-D array with the same shape as xy1 and xy2. It defines the level at each point so that the program can "understand" the shape of the 3-dimensional surface.
levels = [-1.0, 0.0, 1.0] indicates that there are 3 curves (straight lines in this case) to draw at the corresponding levels. In relation to SVC, level 0 is the separating hyperplane, and levels -1 and 1 mark the margin on either side of the maximum-margin separating hyperplane (violated by at most a slack ζi).
linestyles = ['dashed', 'solid', 'dashed'] indicates that the separating hyperplane is drawn as a solid line and the two margin planes on either side are drawn as dashed lines.
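As an aside, the per-point list comprehension in the book's snippet can be replaced by a single vectorized call that produces the same Z (a minor variation of mine, not from the book):

Z = clf.decision_function(np.c_[xy1.ravel(), xy2.ravel()]).reshape(xy1.shape)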
Edit (in response to the comment):
Mathematically, you might expect the decision function to be a sign function that tells us whether a point is class 0 or class 1, as you said. However, when you check the values in Z, you will find that they are continuous. decision_function(X) works so that the sign of the value indicates the classification, while the absolute value is the "distance of the samples X to the separating hyperplane", which (roughly) reflects the confidence of the predicted classification. This is critical for plotting the model. If Z were categorical, you would get contour lines that fill an area like a mesh rather than a single contour line per level. It would look like the colormesh in the example, but you won't see that with ax.contour(), since that is not correct behaviour for a contour plot.
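A quick check (my own addition, not from the book) that makes the sign/distance behaviour visible:

d = clf.decision_function(X)      # continuous signed distances, one per training point
print(d[:5])                      # continuous values, not just 0/1 labels
# the sign of the distance recovers the predicted class label
print(np.all(clf.predict(X) == clf.classes_[(d > 0).astype(int)]))  # True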

GLKView GLKMatrix4MakeLookAt description and explanation

For the modelviewMatrix I understand how to form translation and scale matrices. But I am unable to understand how to form the view matrix using GLKMatrix4MakeLookAt. Can anyone explain how it works and how to choose values for its parameters (eye, center, up X Y Z)?
GLK_INLINE GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
GLKMatrix4MakeLookAt creates a viewing matrix (in the same way gluLookAt does, in case you look at other OpenGL code). As the parameters suggest, it considers the position of the viewer's eye, the point in space they're looking at (e.g., a point on an object), and the up vector, which specifies which direction is "up" (e.g., pointing towards the sky). The viewing matrix generated is the combination of a rotation matrix (composed of a set of orthonormal basis vectors) and a translation.
Logically, the matrix is constructed in a few steps:
1. Compute the line-of-sight vector: the normalized vector going from the eye's position to the point you're looking at, the center point.
2. Compute the cross product of the line-of-sight vector with the up vector, and normalize the resulting vector.
3. Compute the cross product of the vector from step 2 with the line-of-sight vector to complete the orthonormal basis.
4. Create a 3x3 rotation matrix by setting the first row to the vector from step 2, the middle row to the vector from step 3, and the bottom row to the negated, normalized line-of-sight vector.
These steps produce a rotation matrix that rotates the world coordinate system into eye coordinates (a coordinate system where the eye is located at the origin and the line of sight is down the -z axis). The final viewing matrix is computed by appending a translation by the negated eye position, which moves the eye from its world-coordinate position to the origin of eye coordinates.
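Here is a small numpy sketch mirroring those steps (my own illustration, not GLKit's actual implementation or memory layout):

import numpy as np

def make_look_at(eye, center, up):
    """Build a 4x4 view matrix following the steps above."""
    eye, center, up = (np.asarray(v, float) for v in (eye, center, up))
    f = center - eye
    f /= np.linalg.norm(f)                 # 1. line-of-sight vector
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                 # 2. side vector = line-of-sight x up
    u = np.cross(s, f)                     # 3. completes the orthonormal basis
    R = np.array([s, u, -f])               # 4. rows: side, up, negated line-of-sight
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = -R @ eye                    # translation moves the eye to the origin
    return M

# eye at (0, 0, 5) looking at the origin with +y up
print(make_look_at([0, 0, 5], [0, 0, 0], [0, 1, 0]))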
Here's a related question showing the code of GLKMatrix4MakeLookAt, and here's a question with more detail about eye coordinates and related coordinate systems: What exactly are eye space coordinates?