Keep physics object upright using engines placed on it

So, I'm writing an auto-pilot, sorta, for a 3D physics object.
The player is allowed to place as many engines as they wish, wherever they wish. The auto-pilot is then supposed to keep this object upright using the engines, controlling each engine's thrust from 0 to 100%. So if the object is tilting to the right, it should fire all engines on the right a bit more. If it's tilting forward, but only slightly to the right, all engines in the direction it's tilting should fire more, so the object gets upright again.
How would I go about doing this?

determine motor usage
You have to make a list of what each of your motors will be used for, so divide all motors into groups by direction. You should have motors for left, right, up, down, forward and backward. The problem is that you can be missing some ... in that case you are stuck and cannot complete the task.
The group sorting is not easy for non-axis-aligned, arbitrarily placed motors. To simplify it, just take the dot product of the motor direction and the group direction: the maximal values belong to motors of that group, and the minimal (negative) values belong to the opposite direction. So tag each motor with a group, and try to select them so that each group has at least one motor.
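A quick sketch of that dot-product grouping in Python (the motor list, the coordinate convention with z up and y forward, and the group directions are all placeholder assumptions):

import numpy as np

# hypothetical motors: each one has a unit thrust direction in object-local space
motors = [
    {"name": "m0", "dir": np.array([0.9, 0.1, 0.0])},
    {"name": "m1", "dir": np.array([-1.0, 0.0, 0.0])},
    {"name": "m2", "dir": np.array([0.1, 0.0, 0.99])},
]

groups = {
    "right":   np.array([1.0, 0.0, 0.0]),  "left":     np.array([-1.0, 0.0, 0.0]),
    "up":      np.array([0.0, 0.0, 1.0]),  "down":     np.array([0.0, 0.0, -1.0]),
    "forward": np.array([0.0, 1.0, 0.0]),  "backward": np.array([0.0, -1.0, 0.0]),
}

for m in motors:
    # assign the group whose direction has the largest dot product with the motor direction
    m["group"] = max(groups, key=lambda g: float(np.dot(m["dir"], groups[g])))

missing = [g for g in groups if not any(m["group"] == g for m in motors)]
# a non-empty 'missing' list means some direction has no motor and cannot be controlled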
control
Just use any type of regulation (P, PI, PID, ...) to maintain the position; it should be pretty straightforward. For example, for the motors in the x group with P (proportional) regulation:
thrust_x = c0 + c1 * (object_x - wanted_x)
where c1 is the response constant; play with it to achieve the wanted response. Too big will cause oscillations, too small will cause slow reactions. c0 compensates for external force fields like gravity. Both c0 and c1 depend on the group strength, the object mass, etc.
If you also need to control orientation, then you just have to add more groups.
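A bare-bones sketch of such a P regulator for one group in Python (the numbers and the clamping to 0-100% thrust are illustrative assumptions):

def group_thrust(current, wanted, c0, c1):
    # proportional regulation: thrust = c0 + c1 * (current - wanted), clamped to 0..1
    thrust = c0 + c1 * (current - wanted)
    return max(0.0, min(1.0, thrust))

# e.g. the 'up' group: c0 ~ hover thrust against gravity, c1 negative so that
# sitting below the wanted height increases thrust
up = group_thrust(current=9.5, wanted=10.0, c0=0.40, c1=-0.2)   # -> 0.5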
[notes]
The motor usage can change over time (the object can rotate), so either convert the desired position into the object's local coordinate system, or recompute the groups once in a while.

Related

Detect breakdown voltage in an AC waveform

I need to monitor an AC voltage waveform and record the RMS value when the breakdown happens. I roughly know how to acquire the data from videos I have watched; however, it is difficult for me to produce a solution that reads the breakdown voltage value. Ideally, I would also take a screenshot along with the breakdown voltage value.
In case you are not familiar with this topic: when a breakdown happens, the voltage drops immediately to zero. So what I need is to measure the voltage just before it falls to zero and, if possible, take a screenshot. This is an image of a normal waveform (black) with a breakdown one (red).
Naive solution*:
Take the data and get the Y values (this would depend on the datatype you have, which would depend on how you acquire the data).
Find the breakdown point by iterating over the values and maintaining a couple of flags (I would probably say track "got higher than X" and once that's true, track "got lower than Y").
From that, I would just say take the last N points (Get Array Subset) and get the array max. Or just track the maximum value as you run.
Assuming you have the graph in a control, you can just right click and select Create>>Invoke Node>>Export Image.
I would suggest playing with this in a VI with static data, which you can repeatedly run to check how your code behaves.
*I don't know the problem domain and am not overly familiar with the various analysis VIs that ship with LV, so there are quite possibly more efficient ways of doing this.
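Not LabVIEW, but here is a rough sketch of the same flag-and-maximum logic in Python, assuming the waveform is already available as an array of samples (the threshold and window values are placeholders to tune):

import numpy as np

def breakdown_voltage(samples, high_thresh=100.0, low_thresh=5.0, window=1000):
    # samples: 1-D array of voltage readings
    armed = False
    for i, v in enumerate(samples):
        if not armed and abs(v) > high_thresh:
            armed = True                            # "got higher than X"
        elif armed and abs(v) < low_thresh:         # "got lower than Y" -> breakdown
            start = max(0, i - window)
            return float(np.max(np.abs(samples[start:i])))   # peak of the last N points
    return None                                     # no breakdown detected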

what does latDist mean in sumo traci.vehicle.changeSublane(vehID, latDist)?

I want to know more about the latDist argument of traci.vehicle.changeSublane(vehID, latDist) than what SUMO says in "https://sumo.dlr.de/docs/TraCI/Change_Vehicle_State.html#lane_change_mode_0xb6". Does it have an interval? Are the values it takes a measure of distance? Does it have threshold values? What do we mean when using, for instance, "3.00" as latDist?
In SUMO's sublane model every vehicle has a continuous lateral position, meaning it can be positioned freely within the boundaries of the edge, occupying one or more sublanes. This means a "lane change" is nothing more than a lateral movement. To make it independent of the actual sublane width (which has little relevance in reality), the offset to change by is given in meters, not in lane (or sublane) numbers. So an offset of 3.0 means: move 3 meters to the left (in a right-hand driven network).
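A minimal usage sketch in Python, assuming a running SUMO simulation started with the sublane model enabled and a vehicle with ID "veh0" already inserted (the config file name and vehicle ID are placeholders):

import traci

# start SUMO with a lateral resolution so the sublane model is active
traci.start(["sumo", "-c", "scenario.sumocfg", "--lateral-resolution", "0.8"])

for step in range(100):
    traci.simulationStep()
    if step == 50:
        # move "veh0" 3.0 m to the left relative to its current lateral position
        # (a negative latDist moves it to the right in a right-hand network)
        traci.vehicle.changeSublane("veh0", 3.0)

traci.close()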

Similarity matching algorithm

I have products with different details in different attributes, and I need to develop an algorithm to find the products most similar to the one I'm searching for.
For example, if a product has:
Weight: 100lb
Color: Black, Brown, White
Height: 10in
Conditions: new
Others can have different colors, weight, etc. I then need to do a search where the most similar products are returned first. For example, if everything matches but the color is only Black and White and not Brown, that's a better match than another product that is only Black but not White or Brown.
I'm open to suggestions as the project is just starting.
One approach, for example, would be to restrict each attribute (weight, color, size) to a limited set of options, so I can build a binary representation. Then I have something like this for each product:
Colors Weight Height Condition
00011011000 10110110 10001100 01
Then if I do an XOR between the product's binary representation and my search, I can calculate the number of set bits to see how similar they are (all zeros would mean exact match).
The problem with this approach is that I cannot index that on a database, so I would need to read all the products to make the comparison.
Any suggestions on how I can approach this? Ideally I would like to have something I can index on a database so it's fast to query.
Further question: also if I could use different weights for each attribute, it would be awesome.
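A tiny sketch of that XOR idea in Python, assuming each product has already been encoded into a fixed-width bit string (the encodings below are made up):

def hamming_similarity(a: int, b: int) -> int:
    # number of differing bits between two encoded products; 0 means exact match
    return bin(a ^ b).count("1")

# hypothetical encodings: Colors | Weight | Height | Condition packed into one integer
product = 0b00011011000_10110110_10001100_01
query   = 0b00011001000_10110110_10001100_01

print(hamming_similarity(product, query))   # -> 1, one attribute bit differs

As noted above, the catch is that this only works as a linear scan over all products, which is hard to index in a typical database.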
You basically need to come up with a distance metric to define the distance between two objects. Calculate the distance from the object in question to each other object, then you can either sort by minimum distance or just select the best.
Without some highly specialized algorithm based on the full data set, the best you can do is a linear time distance comparison with every other item.
You can estimate the nearest by keeping sorted lists of certain fields such as Height and Weight and cap the distance at a threshold (like in broad phase collision detection), then limit full distance comparisons to only those items that meet the thresholds.
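One possible shape for such a weighted distance metric, sketched in Python (the per-attribute distance functions, normalisation ranges and weights are assumptions you would tune):

def color_distance(a: set, b: set) -> float:
    # Jaccard distance on colour sets: 0.0 if identical, 1.0 if nothing in common
    return 1.0 - len(a & b) / len(a | b) if (a or b) else 0.0

WEIGHTS = {"weight": 1.0, "color": 2.0, "height": 1.0, "condition": 3.0}

def product_distance(p, q):
    # weighted sum of per-attribute distances; smaller means more similar
    d  = WEIGHTS["weight"]    * abs(p["weight"] - q["weight"]) / 100.0
    d += WEIGHTS["color"]     * color_distance(p["color"], q["color"])
    d += WEIGHTS["height"]    * abs(p["height"] - q["height"]) / 10.0
    d += WEIGHTS["condition"] * (0.0 if p["condition"] == q["condition"] else 1.0)
    return d

products = [
    {"weight": 100, "color": {"black", "white"}, "height": 10, "condition": "new"},
    {"weight": 120, "color": {"black"},          "height": 12, "condition": "used"},
]
query = {"weight": 100, "color": {"black", "brown", "white"}, "height": 10, "condition": "new"}

best = min(products, key=lambda p: product_distance(p, query))   # linear scan, as noted above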
What you want to do is a perfect use case for Elasticsearch and other similar search-oriented databases. I don't think you need to hack around with bitmasks etc.
You would typically keep your primary data in your existing database (SQL/Cassandra/Mongo/etc., anything works) and copy the things that need searching into Elasticsearch.
What you are describing is very similar to BK-trees. A BK-tree is a search tree with some metric associated with its keys. The most common use of this tree is string correction with the Levenshtein or Damerau-Levenshtein distance. It is not a static data structure, so it supports future insertions of elements.
When you search for an exact element (or insert one), you walk through the nodes of the tree and follow the link whose weight equals the distance between the node's key and your element. If you want to find similar objects, you follow several links simultaneously, namely those whose weights satisfy your distance constraints. (Maybe it can even be done with A* to quickly find the single most similar object.)
Simple example of BK-tree (from the second link)
BOOK
├─(1)─ BOOKS
│      └─(2)─ BOO
└─(4)─ CAKE
       ├─(1)─ CAPE
       └─(2)─ CART
Your metric should be Hamming distance (count of differences between bit representations of two objects).
BUT! Is it a good idea to compare two integers by the count of differing bits in their representations? With Hamming distance, HD(10000, 00000) == HD(10000, 10001), i.e. the difference between the numbers 16 and 0 equals the difference between 16 and 17. Is that really what you need?
BK-tree with details:
https://hamberg.no/erlend/posts/2012-01-17-BK-trees.html
https://nullwords.wordpress.com/2013/03/13/the-bk-tree-a-data-structure-for-spell-checking/
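A minimal BK-tree sketch in Python using that bit-count Hamming distance as the metric (an illustration of the idea, not a tuned implementation):

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

class BKTree:
    def __init__(self, distance=hamming):
        self.distance = distance
        self.root = None                      # node = (key, {edge_distance: child})

    def add(self, key):
        if self.root is None:
            self.root = (key, {})
            return
        node = self.root
        while True:
            d = self.distance(key, node[0])
            if d in node[1]:
                node = node[1][d]             # descend along the edge with this distance
            else:
                node[1][d] = (key, {})
                return

    def search(self, key, max_dist):
        # return all stored keys within max_dist of key
        results, stack = [], ([self.root] if self.root else [])
        while stack:
            node_key, children = stack.pop()
            d = self.distance(key, node_key)
            if d <= max_dist:
                results.append(node_key)
            # triangle inequality: only edges in [d - max_dist, d + max_dist] can lead to matches
            for edge, child in children.items():
                if d - max_dist <= edge <= d + max_dist:
                    stack.append(child)
        return results

tree = BKTree()
for encoded in (0b10110, 0b10111, 0b00001, 0b11000):
    tree.add(encoded)
print(tree.search(0b10110, max_dist=1))       # keys within Hamming distance 1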

How to group nearby latitude and longitude locations stored in SQL

I'm trying to analyse data on cycle accidents in the UK to find statistical black spots. Here is an example of the data from another website: http://www.cycleinjury.co.uk/map
I am currently using SQLite to store ~100k lat/lon locations. I want to group nearby locations together; this task is called cluster analysis.
I would like to simplify the dataset by ignoring isolated incidents and instead only showing the origin of clusters where more than one accident has taken place in a small area.
There are 3 problems I need to overcome.
Performance - How do I ensure that finding nearby points is quick? Should I use SQLite's implementation of an R-tree, for example?
Chains - How do I avoid picking up chains of nearby points?
Density - How do I take cyclist population density into account? There is a far greater density of cyclists in London than in, say, Bristol, therefore there appears to be a greater number of black spots in London.
I would like to avoid 'chain' scenarios like this:
Instead I would like to find clusters:
London screenshot (I hand drew some clusters)...
Bristol screenshot - Much lower density - the same program run over this area might not find any black spots if relative density was not taken into account.
Any pointers would be great!
Well, your problem description reads exactly like the DBSCAN clustering algorithm (Wikipedia). It avoids chain effects in the sense that it requires clusters to contain at least minPts objects.
As for the differences in densities across areas, that is what OPTICS (Wikipedia) is supposed to solve. You may need to use a different way of extracting clusters though.
Well, ok, maybe not 100% - you may want single hotspots, not areas that are "density connected". Thinking of an OPTICS plot, I figure you are only interested in small but deep valleys, not in large valleys. You could probably use the OPTICS plot and scan for local minima of "at least 10 accidents".
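A short DBSCAN sketch with scikit-learn and a geodetic (haversine) distance, in case you want to experiment; the eps and min_samples values are placeholders to tune, and the coordinates are toy data standing in for the rows you would read out of SQLite:

import numpy as np
from sklearn.cluster import DBSCAN

# toy [lat, lon] data in degrees; the last point is an isolated incident
accident_coords = np.array([[51.5072, -0.1276],
                            [51.5073, -0.1274],
                            [51.5071, -0.1277],
                            [53.4808, -2.2426]])

coords_rad = np.radians(accident_coords)      # the haversine metric expects radians
earth_radius_m = 6_371_000.0
eps_m = 50.0                                  # neighbourhood radius of 50 m

db = DBSCAN(eps=eps_m / earth_radius_m,       # eps is an angular distance here
            min_samples=3,                    # at least 3 accidents to count as a black spot
            metric="haversine",
            algorithm="ball_tree").fit(coords_rad)

print(db.labels_)                             # -1 marks noise (isolated incidents)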
Update: Thanks for the pointer to the data set. It's really interesting. So I did not filter it down to cyclists, but right now I'm using all 1.2 million records with coordinates. I've fed them into ELKI for analysis, because it's really fast, and it actually can use the geodetic distance (i.e. on latitude and longitude) instead of Euclidean distance, to avoid bias. I've enabled the R*-tree index with STR bulk loading, because that is supposed to help to get the runtime down a lot. I'm running OPTICS with Xi=.1, epsilon=1 (km) and minPts=100 (looking for large clusters only). Runtime was around 11 Minutes, not too bad. The OPTICS plot of course would be 1.2 million pixels wide, so it's not really good for full visualization anymore. Given the huge threshold, it identified 18 clusters with 100-200 instances each. I'll try to visualize these clusters next. But definitely try a lower minPts for your experiments.
So here are the major clusters found:
51.690713 -0.045545 a crossing on A10 north of London just past M25
51.477804 -0.404462 "Waggoners Roundabout"
51.690713 -0.045545 "Halton Cross Roundabout" or the crossing south of it
51.436707 -0.499702 Fork of A30 and A308 Staines By-Pass
53.556186 -2.489059 M61 exit to A58, North-West of Manchester
55.170139 -1.532917 A189, North Seaton Roundabout
55.067229 -1.577334 A189 and A19, just south of this, a four lane roundabout.
51.570594 -0.096159 Manor House, Piccadilly Line
53.477601 -1.152863 M18 and A1(M)
53.091369 -0.789684 A1, A17 and A46, a complex construct with roundabouts on both sides of A1.
52.949281 -0.97896 A52 and A46
50.659544 -1.15251 Isle of Wight, Sandown.
...
Note, these are just random points taken from the clusters. It may be sensible to compute e.g. cluster center and radius instead, but I didn't do that. I just wanted to get a glimpse of that data set, and it looks interesting.
Here are some screenshots, with minPts=50, epsilon=0.1, xi=0.02:
Notice that with OPTICS, clusters can be hierarchical. Here is a detail:
First, your example is quite misleading. You have two different sets of data, and you don't control the data. If it appears in a chain, then you will get a chain out.
This problem is not exactly suitable for a database. You'll have to write code or find a package that implements this algorithm on your platform.
There are many different clustering algorithms. One, k-means, is an iterative algorithm where you look for a fixed number of clusters. k-means requires a few complete scans of the data, and voila, you have your clusters. Indexes are not particularly helpful.
Another, which is usually appropriate on slightly smaller data sets, is hierarchical clustering -- you put the two closest things together, and then build the clusters. An index might be helpful here.
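A minimal k-means sketch with scikit-learn, to give an idea of the shape of such code (the cluster count and the toy lat/lon points are placeholders; note that plain k-means uses Euclidean distance on the raw coordinates, which is only a rough approximation over small areas):

import numpy as np
from sklearn.cluster import KMeans

points = np.array([[51.51, -0.12], [51.52, -0.13],
                   [53.48, -2.24], [53.47, -2.25],
                   [55.95, -3.19]])                 # toy lat/lon data

km = KMeans(n_clusters=3, n_init=10).fit(points)    # k must be chosen up front
print(km.labels_)                                   # cluster id per point
print(km.cluster_centers_)                          # one centre per cluster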
I recommend though that you peruse a site such as kdnuggets in order to see what software -- free and otherwise -- is available.

Chess Checkmate Algorithm Complexity

I was wondering what the complexity of a graph search algorithm would be for determining a checkmate in chess in Big O notation.
There are 16 pieces on each side. The first move has 16 possibilities for the pawns alone and another 4 for the knights; the second move has about the same. After this the list of possibilities grows to an uncomputable level.
The best chess engines in the world use 'most probable' graph searches.
This wikipedia article is very useful: http://en.wikipedia.org/wiki/Game_complexity
"Allis also estimated the game-tree complexity to be at least 10123, "based on an average branching factor of 35 and an average game length of 80". As a comparison, the number of atoms in the observable universe, to which it is often compared, is estimated to be between 4×1079 and 1081."
The answer is that the algorithm would have to evaluate all the possible moves of the remaining chess pieces (N). Since it only goes through each piece once, the complexity is O(N) (linear).
The king has at most eight moves, and it takes constant time to verify whether the king is threatened after each move, plus the case where the king stays put (and another piece moves). So it's constant time.
If you just want to check if a given board contains a checkmate, then you could do something like this:
determine the set of squares your king could move to (the kingset: up to 8 squares in all directions, minus squares occupied by your own pieces and squares off the board)
iterate over all enemy pieces and determine the squares they attack. If any of those squares are in your kingset, remove them. Also maintain a boolean indicating whether your king is under attack.
it is checkmate if your kingset ends up empty and the king is under attack
The number of pieces does play a role: if you have an arbitrarily sized board with n pieces, it does matter. In that case, the bottleneck is checking for every piece whether it attacks a given position, because another piece could block the attack. A simple implementation could do it in quadratic time.
By maintaining the positions of the pieces in a set optimised for add() and contains(), you could get this down to linear in n (although the size of the board will also influence this), so I guess linear complexity is feasible.
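A small Python sketch of the test described above, leaning on the python-chess library for the attack bookkeeping (that library choice is mine, not the answer's). Note it only implements the simplified "king is attacked and has no safe square" test; full checkmate detection also has to consider blocking the check or capturing the attacker, which board.is_checkmate() does handle:

import chess

def king_trapped_and_attacked(board: chess.Board) -> bool:
    color = board.turn
    king_sq = board.king(color)

    # 1) squares the king could move to: neighbours not occupied by own pieces
    king_set = {sq for sq in chess.SQUARES
                if chess.square_distance(sq, king_sq) == 1
                and (board.piece_at(sq) is None or board.piece_at(sq).color != color)}

    # 2) note whether the king is attacked, and drop every square an enemy piece attacks
    under_attack = board.is_attacked_by(not color, king_sq)
    king_set = {sq for sq in king_set if not board.is_attacked_by(not color, sq)}

    # 3) "checkmate" in the simplified sense: attacked and nowhere safe to go
    return under_attack and not king_set

board = chess.Board("7k/6Q1/6K1/8/8/8/8/8 b - - 0 1")             # toy mating position
print(king_trapped_and_attacked(board), board.is_checkmate())     # True True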