Contact pressure representation in Abaqus - scripting

The question concerns extracting contact pressure (CPRESS) from an .odb file.
The issue comes down to three observations.
Imagine we have a simple 3D contact model in Abaqus/CAE:
1. If we make a contour plot of CPRESS on the deformed shape in the Visualization module, we get one value of CPRESS per node. We get the same (one value per node) if we request XY data from the field output for all frames. This all seems fine because, as far as I know, Abaqus/CAE averages surface output such as CPRESS to make it available as nodal output.
2. If we use the "Probe values" tool to examine the CPRESS value at a node, we get four values for that node. This still seems fine because, I suppose, it shows the values before averaging.
3. If we request the CPRESS values from the command window using this script:
odb.steps['step_name'].frames[frame_number].fieldOutputs['CPRESS'].getSubset(region='node_path').values
the length of this vector of CPRESS values at a single node ranges from 1 to 6 depending on the chosen node, and the number of values obtained this way has no apparent connection with the number obtained via method 2.
So the trick is that I can't understand how the vector of CPRESS values at a node is formed.
I found very little information about this topic in the Abaqus manual.
Hope somebody can help.
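For reference, this is essentially how I'm pulling the values (a sketch in Abaqus Python; the odb, step, and node-set names are placeholders, and note that getSubset expects an odb region object such as a node set rather than a string path):

from odbAccess import openOdb

# Placeholder names throughout; adjust to the actual model.
odb = openOdb('job.odb')
frame = odb.steps['step_name'].frames[-1]

# getSubset takes an odb region object (here a node set), not a string.
node_set = odb.rootAssembly.instances['PART-1-1'].nodeSets['CONTACT_NODES']
cpress = frame.fieldOutputs['CPRESS'].getSubset(region=node_set)

for v in cpress.values:
    # Each FieldValue carries the node label plus the element it came from.
    print('node %s, element %s: %s' % (v.nodeLabel, v.elementLabel, v.data))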

Probe Values extracts the CPRESS values for the whole element: it shows the face number and the node IDs together with their corresponding values.
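To see the same per-face breakdown in a script, one can group the raw FieldValue objects by node label (a sketch, reusing the placeholder odb and step names from above):

from collections import defaultdict
from odbAccess import openOdb

odb = openOdb('job.odb')
frame = odb.steps['step_name'].frames[-1]

# Group the raw CPRESS values by node label: the list per node holds the
# per-element contributions that Probe Values reports and that the contour
# plot averages into a single nodal number.
per_node = defaultdict(list)
for v in frame.fieldOutputs['CPRESS'].values:
    per_node[v.nodeLabel].append((v.elementLabel, v.data))

for label in sorted(per_node):
    print('node %s: %s' % (label, per_node[label]))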

What is requirement coverage in testing?

I was looking at GraphWalker, which is a model-based testing tool. It creates a model as a directed graph and uses a generator and a stop condition to walk that graph, something like:
random(edge_coverage(100))   // walks the graph randomly until all edges have been visited (100%)
random(vertex_coverage(100)) // walks the graph randomly until all vertices have been visited (100%)
There is another stop condition called requirement_coverage; usage: random(requirement_coverage(100)).
The description on the website says:
requirement_coverage( an integer representing percentage of desired requirement coverage )
The stop criterion is a percentage number. When, during execution, the percentage of traversed requirements is reached, the test is stopped. If a requirement is traversed more than once, it still counts as 1 when calculating the percentage coverage.
What exactly are those traversed requirements?
This might be a somewhat belated answer, but here is what I have found:
https://github.com/GraphWalker/graphwalker-project/wiki/Requirements
Basically, you can use the REQTAG keyword on your vertices, mapping each to some external requirement document reference (e.g. REQTAG: requirement1); GraphWalker collects these requirements and applies the stop condition in random(requirement_coverage(x)) against them.
So when vertices are marked with the requirement tag, using random(requirement_coverage(50)) would stop the walk once the visited vertices cover half of the tagged requirements.
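The calculation itself boils down to counting unique requirement tags; here is a small Python sketch of the stop condition as described above (function and variable names are mine, not GraphWalker's):

def requirement_coverage(traversed_tags, all_tags):
    # A requirement traversed more than once still counts only once,
    # so just the set of unique tags matters.
    return 100.0 * len(set(traversed_tags)) / len(all_tags)

# E.g. with 4 tagged requirements, a target of 50 is reached as soon
# as 2 distinct tags have been visited, however often each was hit.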

Compute Maya Output Attr From Previous Frame's Outputs

Does Maya allow one to compute output attributes at frame N using the output attributes calculated at frame N-1 as inputs? With the proviso that at (e.g.) frame 0 we don't look at the previous frame but use some sort of initial condition; negative frames would be calculated by looking forward in time.
E.g. the translate of the ball at frame N is computed to be the translate of the ball at frame N-1 plus 1 cm; at frame zero the ball is given an initial translate of zero.
The DataBlock has a setContext function, but the docs appear to forbid using it for 'timed evaluation'. I could hit the attribute plugs directly and get a value at a different time, but that would mean using inputs from outside the data block.
Is the Maya dependency API essentially timeless, only allowing calculation from the state at the current time? Is the only solution to use animation curves, which are also essentially timeless (their input state of keyframes remains the same regardless of the time)?
A simple node connection is supposed to be updated on demand, i.e. for the 'current' frame. It's supposed to be ahistorical: you should be able to jump to a given frame directly and get a complete evaluation of the scene state without history.
If you need offset values you can use a frameCache node to access a different point in the value stream. You connect the attribute you want to lag to the frameCache's 'stream' plug, and then connect either the 'future' or 'past' attribute to the plug on your node. The offset is applied by specifying the index value for the connection; e.g. frameCache1.past[5] is 5 frames behind the value that was fed into the frameCache.
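In script form the wiring might look like this (a sketch using maya.cmds; 'ball' and 'follower' are hypothetical node names):

import maya.cmds as cmds

cache = cmds.createNode('frameCache', name='ballYCache')

# Feed the value stream to be lagged into the cache...
cmds.connectAttr('ball.translateY', cache + '.stream')

# ...and read it back 5 frames in the past, as described above.
cmds.connectAttr(cache + '.past[5]', 'follower.translateY')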
You can also do this in a less performant but more flexible way by using an expression node. The expression can poll an attribute value at a particular time by calling getAttr with the -t flag to specify the time. This is much slower to evaluate, but it lets you apply any arbitrary logic to the time offset you want.
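The same sampling is available from Python, for example (hypothetical node name; inside an expression node the MEL equivalent is getAttr -t (frame - 1) ball.translateY):

import maya.cmds as cmds

# Sample an attribute one frame behind the current time.
now = cmds.currentTime(query=True)
prev_y = cmds.getAttr('ball.translateY', time=now - 1)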

Can't run k-means with SPSS Modeler 16

I'm using IBM SPSS Modeler 16.0 to analyze data with four fields, all of which are retrieved from a database as strings and converted to numbers with a replace node using to_number(). When I connect my node to the K-Means node to create clusters from that data, I get an error (I'm running a French version; this is a translation of the error):
Type not sufficiently specified for field 'MyField1'
Type not sufficiently specified for field 'MyField2'
Type not sufficiently specified for field 'MyField3'
Type not sufficiently specified for field 'MyField4'
I have tried almost everything but I can't get rid of this error. Can anyone help me figure this out?
Many thanks.
You will need to instantiate the input fields used by the k-means model.
You do this by adding a 'Type' node before the modeling node and after any field-operation node that computes or changes any of the fields used as input to the model.
In the 'Type' node, make sure to click the "Read Values" button, or make the proper selections for each field; this is what instantiates the fields.
This step is required not only for the k-means model but for most (if not all) of the modeling nodes.

Edges records not showing up in OrientDB

I discovered OrientDB recently and have been playing a little with this tool over the past few weeks. However, I noticed today that something seemed to be wrong whenever I added an edge between two vertices: the edge record is not present if I run a query such as SELECT FROM E; this just returns an empty set. In spite of this, the relationship is visible as a property on the vertices, and queries like SELECT IN() FROM V do work.
This poses an issue: if I can't access the edge record directly, I can't add more properties to it, and even if I could, I wouldn't be able to see the changes. I thought this could be a design decision for some reason, but the GratefulDeadConcerts example database doesn't seem to have this problem.
I'll illustrate my question with an example:
Let's create a graph database in OrientDB from scratch and name it "Test". We'll create a couple of vertices:
CREATE VERTEX SET TEST=123
CREATE VERTEX SET TEST=456
Let's assume the #rids of these vertices are #9:0 and #9:1 respectively, as we haven't changed anything from the default settings. Let's create an edge between them:
CREATE EDGE FROM #9:0 TO #9:1
Now, let's take a look at the output of the query SELECT FROM V:
orientdb {Test}> SELECT FROM V
----+----+----+----+----
# |#RID|TEST|out_|in_
----+----+----+----+----
0 |#9:0|123 |#9:1|null
1 |#9:1|456 |null|#9:0
----+----+----+----+----
2 item(s) found. Query executed in 0.005 sec(s).
Everything looks right so far. However, the output of the query SELECT FROM E is simply 0 item(s) found. Query executed in 0.016 sec(s). If we execute SELECT IN() FROM V we get the following:
orientdb {Test}> SELECT IN() FROM V
----+-----+----
# |#RID |IN
----+-----+----
0 |#-2:1|[0]
1 |#-2:2|[1]
----+-----+----
2 item(s) found. Query executed in 0.005 sec(s).
From this, I assume that the edges are created in cluster number -2, even though the default cluster for the class E is 10 and I haven't added any other clusters. I suspect this has something to do with the problem, but I'm not sure how to fix it. I have tried adding new clusters to the class E and creating the edges in the new cluster, but to no avail; I keep getting exactly the same result.
So my question is, how do I make edges records show up in OrientDB?
I'm using OrientDB Community 1.7-RC2 and have tried this in two different machines, one Windows 7 and another one Debian Wheezy.
Extracted from https://github.com/orientechnologies/orientdb/wiki/Troubleshooting#why-i-cant-see-all-the-edges:
OrientDB, by default, manages edges as "lightweight" edges if they have no properties. This means that if an edge has no properties, it is not stored as a physical record. But don't worry, your edge is still there, just encoded in a separate data structure. For this reason, if you execute a SELECT FROM E, no edges (or fewer edges than expected) are returned. It's extremely rare to need the list of edges, but if this is your case you can disable this feature by issuing this command once (at the cost of a slowdown and a bigger database size):
alter database custom useLightweightEdges=false
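Alternatively, since lightweight edges only apply to edges without properties, an edge created with at least one property is stored as a regular record and will show up in SELECT FROM E. For example (the property name here is arbitrary):

CREATE EDGE FROM #9:0 TO #9:1 SET since = 2014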

How to distinguish between master data and calculated interpolated data?

I'm receiving a bunch of vectors of data points for a fixed set of time points; below is an example of a vector with a value per time point:
1D:2
2D:
7D:5
1M:6
6M:6.5
But alas, a value is not available for every time point. All vectors are stored in a database, and a trigger calculates the missing values by interpolation, or possibly a more advanced algorithm. Somehow I want to be able to tell which data points have been calculated and which were originally delivered to us. Of course I can add a flag column to the table indicating whether a value is a master value or a calculated one, but I'm wondering whether there is a more sophisticated way. We probably won't need to make this determination on a regular basis, so CPU cycles are not an issue, either for determination or for insertion.
The example above shows some nice-looking numbers, but in reality they would look more like 3.1415966533.
The database used for storage is Oracle 10.
cheers.
Could you deactivate the trigger temporarily?
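For what it's worth, the flag-column approach mentioned in the question is straightforward in Oracle (a sketch with hypothetical table and column names; the gap-filling trigger marks the rows it inserts):

-- 'M' = master (delivered) value, 'C' = calculated/interpolated value
ALTER TABLE vector_points ADD (source CHAR(1) DEFAULT 'M' NOT NULL);

-- Inside the gap-filling trigger/procedure:
INSERT INTO vector_points (vector_id, time_point, data_value, source)
VALUES (:vector_id, '2D', :interpolated_value, 'C');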