There is a crossing sensor at point A for a bicycle. When the bicycle passes point A (the sensor senses the object and then does not), a light turns on for 20 seconds. If the bicycle then goes in reverse and passes through point A again, nothing must happen. (Again, the sensor only senses something briefly and then stops sensing once the bicycle has passed.)
The difficult part is that the sensor changes from true to false very quickly because the bicycle moves away. If the bicycle stayed on the sensor, it would be easier. The way back is another difficulty, because the bicycle doesn't stay on the sensor there either.
Any suggestions? I would appreciate any help. Note: my skills in LabVIEW are quite mediocre, though I would like to learn more.
Using two sensors at Point A, one in front of the other, would give you the bicycle direction which would allow you to determine when the bicycle is in reverse. This assumes that the sensors have a read response fast enough to distinguish between the front and back sensor.
I'm not sure it's possible.
The only solution I can think of would require that the reverse and forward speeds of the bicycle are different.
If their speeds are different you can try making a determination based on how long the sensor is activated.
However, you run into the problem where all bicyclists may not pass the sensor at the same speed.
The best solution would be to use two sensors and check what order they are activated in.
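As a rough sketch of that order-of-activation idea (in C for readability; the read_sensor_1/read_sensor_2/light_on_for_seconds calls are hypothetical placeholders, and in LabVIEW the same logic would map onto a small state machine driven by the two digital inputs):

```c
#include <stdbool.h>

/* Hypothetical hardware accessors -- replace with your own I/O. */
bool read_sensor_1(void);   /* first sensor in the forward direction  */
bool read_sensor_2(void);   /* second sensor, a little further along  */
void light_on_for_seconds(int seconds);

/* Call this periodically, fast enough to catch both sensors' pulses. */
void update_direction_logic(void)
{
    static bool prev1 = false, prev2 = false;
    static int  first_hit = 0;       /* 0 = none yet, 1 or 2 = which sensor fired first */
    static bool seen1 = false, seen2 = false;

    bool s1 = read_sensor_1();
    bool s2 = read_sensor_2();

    /* Rising edges: remember which sensor triggered first. */
    if (s1 && !prev1) { seen1 = true; if (first_hit == 0) first_hit = 1; }
    if (s2 && !prev2) { seen2 = true; if (first_hit == 0) first_hit = 2; }

    /* Pass complete: both sensors were hit and both are clear again. */
    if (seen1 && seen2 && !s1 && !s2) {
        if (first_hit == 1)          /* forward pass: turn the light on */
            light_on_for_seconds(20);
        /* first_hit == 2: reverse pass, do nothing */
        first_hit = 0;
        seen1 = seen2 = false;
    }

    prev1 = s1;
    prev2 = s2;
}
```

You would probably also want a timeout that clears the state if only one sensor fires and the bicycle backs away before reaching the other one.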
I know they are not accessible now; Nest can clearly see this data. I have 2 additional sensors (the thermostat on the 1st floor, 1 sensor in the basement, 1 sensor on the 2nd floor) and want to log/measure and potentially even control based on those, and the Nest app is quite poor at this (i.e., you can only change which sensor is used 4 times a day, at specific times).
In real-life usage, where I am on different floors at different times of day (which don't correspond with the canned choices), it would be nice to be able to read these and potentially control based on them. In some cases I want to just reduce the temperature difference between the areas; in others I might want to make sure my kids are comfortable during naps. Without access to the sensor data it's quite useless...
It is clear that the sensors aren't currently supported by the API, but this is where it redirects me to ask questions! Any ideas?
EDIT: OK, so it looks like you can technically read the sensor data, but only when the sensor is set as the current sensor. I listed this and other findings so far here: https://ssprod.me/blog/2022/11/14/a-closer-look-at-the-nest-thermostat/
I am new to the world of FreeRTOS, and I have to do a project that consists of an automatic alcohol dispenser that also measures temperature. The parts/sensors of my project are:
DHT22 for temperature (I know it's not ideal, but it's the only one that I have).
HC-SR04 for distance measurement (ultrasound).
16x2 I2C display to show the temperature.
Buzzer to make sound.
Servo to dispense alcohol.
The idea of the project is that when someone comes within 15 cm of the device, the temperature is displayed on the screen, the servo moves and can dispense alcohol, and the buzzer makes a little sound.
As I understand it, I have to create a task for each activity. One to measure temperature and possibly send that information to a queue, another to read the queue and display it on the screen, another to make the sound with the buzzer, another to measure distance with the ultrasound, and another to move the servo.
This is how I was asked to do it, but my question is what is the best way to organize the tasks?
How do I make it so that ...
first the distance is measured,
then the temperature is measured,
then it is shown on the display,
the servo is moved and the sound is made?
What is the best way to communicate between tasks (when one task measures less than 15 cm, tell another task to measure the temperature, then show it on the display, move the servo, and make the sound)?
I would like to see how you think about it and it would help me a lot to know.
I’m very new to the subject and I’m having a hard time working out which way is best. I would appreciate simple solutions that don't involve complicated stuff, as I'm struggling with FreeRTOS.
This seems like a fairly simple system, as all work can be done sequentially (i.e. one thing happens after another). You certainly don't need to use dedicated tasks for activities which are done sequentially. In fact, the simplest architecture by far is to have a single task, running in a loop, doing everything. I strongly suggest you start with that approach and build something that works.
Then after you have something that works sequentially in a single task, re-consider your options. It might be the perfect architecture, it might need minor adjustments. You'll be in a much better position to judge.
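If it helps, here is a minimal sketch of that single-task approach. Only the FreeRTOS calls are real API; the driver functions (read_distance_cm, read_temperature_c, lcd_show_temperature, servo_dispense, buzzer_beep) are hypothetical placeholders for your own code, and the thresholds and delays are just guesses for you to tune:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical driver functions -- replace with your own implementations. */
float read_distance_cm(void);          /* HC-SR04  */
float read_temperature_c(void);        /* DHT22    */
void  lcd_show_temperature(float t);   /* 16x2 I2C display */
void  servo_dispense(void);
void  buzzer_beep(void);

static void dispenser_task(void *params)
{
    (void)params;
    for (;;) {
        if (read_distance_cm() < 15.0f) {
            float t = read_temperature_c();
            lcd_show_temperature(t);
            servo_dispense();
            buzzer_beep();
            /* Simple cooldown so one hand doesn't trigger it repeatedly. */
            vTaskDelay(pdMS_TO_TICKS(3000));
        }
        vTaskDelay(pdMS_TO_TICKS(100)); /* poll the distance ~10x per second */
    }
}

void start_dispenser(void)
{
    xTaskCreate(dispenser_task, "dispenser", 2048, NULL,
                tskIDLE_PRIORITY + 1, NULL);
    /* vTaskStartScheduler() is normally called once from main(). */
}
```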
I have a set of 12 IMUs mounted on the end effector of my robot arm, which I read using a microcontroller to determine its movement. With my controller I can read two sensors simultaneously using direct memory access. After acquiring the measurements I would like to fuse them to make up for the sensor error and generate a more reliable reading than a single sensor could give.
After some research, my understanding is that I can use a Kalman filter to reach my desired outcome, but I still have the problem that the sensor values all carry different time stamps: since I can only read two at a time, even if a pair is synchronized perfectly, the next pair will have a different time stamp, if only in the µs range.
Now, I know controls engineering principles, but I am completely new to the topic of sensor fusion, and Google presents me with too many results to find a solution in a reasonable amount of time.
Hence my question: can anybody point me in the right direction with a keyword I should search for, or literature I should work through, to better understand this topic, please?
Thank you!
The topic you are dealing with is not an easy one. Have a look at multi-rate Kalman filters.
The idea is that you design a different Kalman filter for each combination of sensors you can read at the same time, use each filter when you have data from those sensors, and pass the system state between the various filters.
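To make that idea concrete, here is a deliberately over-simplified scalar sketch: one shared state and covariance are predicted forward in time and then corrected by whichever sensor (or DMA sensor pair) has just delivered data, each with its own measurement noise. A real IMU fusion would of course use state vectors and matrices, and the noise values here are placeholders:

```c
/* Minimal scalar illustration of the "one shared state, many update steps"
 * idea behind multi-rate Kalman filtering. */
typedef struct {
    double x;   /* state estimate           */
    double P;   /* estimate covariance      */
    double Q;   /* process noise per second */
} kf_state_t;

/* Predict forward by dt seconds (here a trivial constant-state model). */
static void kf_predict(kf_state_t *kf, double dt)
{
    kf->P += kf->Q * dt;
}

/* Measurement update with sensor-specific noise R.  Called only when that
 * sensor (or sensor pair) actually delivered a sample, at its own rate
 * and with its own time stamp. */
static void kf_update(kf_state_t *kf, double z, double R)
{
    double K = kf->P / (kf->P + R);     /* Kalman gain       */
    kf->x += K * (z - kf->x);           /* correct the state */
    kf->P *= (1.0 - K);                 /* shrink covariance */
}

/* Usage: every time a DMA transfer for sensors i and j completes,
 *   kf_predict(&kf, dt_since_last_update);
 *   kf_update(&kf, measurement_i, R_i);
 *   kf_update(&kf, measurement_j, R_j);
 * The same state is carried across all sensor pairs. */
```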
I need to build the AI for an opponent in an arcade style fighting game, very similar to Mortal Kombat.
I don't want to use random moves for the computer, but I would like to have an AI that is harder to beat.
Where can I start looking for resources? Do you know of any implementations of this sort of project?
Think about how you play the game.
Ask yourself, under what conditions would I perform certain attacks? When would I block? What do I do when I have low health? When my opponent has low health? Do I become more aggressive in one situation over the other? When is it best to use long range versus short range?
Etc.
An AI like this usually just follows a bunch of if/else/then rules, with some randomness added in.
You want it to react quickly, so heavier approaches (A*, alpha-beta search, etc.) won't be as useful.
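For example, something along these lines (the game-state fields, actions, and probabilities are made up, just to show the shape of such a rule set):

```c
#include <stdlib.h>

/* Hypothetical game state -- adapt the fields to your engine. */
typedef struct {
    int   my_health, opp_health;   /* 0..100                        */
    float distance;                /* distance to the opponent      */
    int   opp_attacking;           /* 1 while the opponent attacks  */
} fight_state_t;

typedef enum { DO_BLOCK, DO_LIGHT_ATTACK, DO_HEAVY_ATTACK,
               DO_PROJECTILE, DO_RETREAT } ai_action_t;

/* A plain if/else rule set with a little randomness mixed in. */
ai_action_t choose_action(const fight_state_t *s)
{
    int r = rand() % 100;                 /* 0..99, the "dice roll"       */

    if (s->opp_attacking && r < 70)       /* usually block incoming hits  */
        return DO_BLOCK;

    if (s->my_health < 25)                /* low health: play defensively */
        return (r < 60) ? DO_RETREAT : DO_PROJECTILE;

    if (s->distance > 3.0f)               /* far away: zone or close in   */
        return (r < 50) ? DO_PROJECTILE : DO_LIGHT_ATTACK;

    if (s->opp_health < 25)               /* finish the opponent off      */
        return DO_HEAVY_ATTACK;

    return (r < 30) ? DO_HEAVY_ATTACK : DO_LIGHT_ATTACK;
}
```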
There is also an approach based on statistics: count the attacks in each direction. You can add logic that records how many times the human opponent has attacked you from each direction and use that to predict their future attacks.
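A minimal sketch of that counting idea (the direction indices are hypothetical; map them onto whatever attack categories your game uses):

```c
/* Count how often the opponent attacks from each direction and predict the
 * most frequent one.  Directions are hypothetical indices
 * (0 = high, 1 = mid, 2 = low, for example). */
#define NUM_DIRECTIONS 3

static int attack_counts[NUM_DIRECTIONS];

void record_attack(int direction)
{
    if (direction >= 0 && direction < NUM_DIRECTIONS)
        attack_counts[direction]++;
}

/* Returns the direction the opponent has used most so far; the AI can then
 * pre-emptively block or counter that direction. */
int predict_next_attack(void)
{
    int best = 0;
    for (int d = 1; d < NUM_DIRECTIONS; d++)
        if (attack_counts[d] > attack_counts[best])
            best = d;
    return best;
}
```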
I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to "clean" the incoming GPS signal in real time to improve its accuracy. Also, after the fact and not necessarily in real time, I would like to "lock" individuals' GPS fixes to positions along a road network. Do you have any techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what is a really large figure? Sure, 600 mph is fast, but not if you're on Concorde. While you are looking for a value which is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations. You might want to calculate the variance of the data over a small local window and then check whether the z-score of your current data point exceeds some threshold; if so, exclude it.
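A rough sketch of that windowed z-score test, applied to speed values (the window size and threshold are arbitrary assumptions you would tune to your data):

```c
#include <math.h>
#include <stdbool.h>

#define WINDOW 16   /* size of the local window of recent speed samples */

/* Returns true if the new speed value looks like an outlier compared with
 * the recent history, using a z-score test over a small sliding window. */
bool is_speed_outlier(double new_speed, double threshold)
{
    static double window[WINDOW];
    static int    count = 0, next = 0;

    bool outlier = false;
    if (count >= WINDOW) {
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < WINDOW; i++) mean += window[i];
        mean /= WINDOW;
        for (int i = 0; i < WINDOW; i++) {
            double d = window[i] - mean;
            var += d * d;
        }
        var /= WINDOW;
        double sd = sqrt(var);
        if (sd > 1e-9 && fabs(new_speed - mean) / sd > threshold)
            outlier = true;
    }

    /* Only let non-outliers influence what "ordinary" means. */
    if (!outlier) {
        window[next] = new_speed;
        next = (next + 1) % WINDOW;
        if (count < WINDOW) count++;
    }
    return outlier;
}
```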
One note: you should use 3 as the minimum satellites, not 0. A GPS needs at least three sources to calculate a horizontal location. Every GPS I have used includes a status flag in the data stream; less than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time? Perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
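If averaging is the route you take, one simple (and admittedly crude) sketch is to collapse each run of near-stationary fixes into a single averaged point; the metres-per-degree conversion and the radius below are rough assumptions, which is fine at the few-metre scale of someone standing still:

```c
#include <math.h>

typedef struct { double lat, lon; } fix_t;

/* Rough metres-per-degree conversion; adequate for a "standing still"
 * cluster a few metres across. */
static double approx_distance_m(fix_t a, fix_t b)
{
    double dlat = (a.lat - b.lat) * 111320.0;
    double dlon = (a.lon - b.lon) * 111320.0 * cos(a.lat * 3.14159265358979 / 180.0);
    return sqrt(dlat * dlat + dlon * dlon);
}

/* Collapse runs of fixes that stay within radius_m of the running average
 * into a single averaged point.  Returns the new number of fixes. */
int collapse_stationary(fix_t *fixes, int n, double radius_m)
{
    int out = 0;
    for (int i = 0; i < n; ) {
        fix_t avg = fixes[i];
        int   count = 1;
        while (i + count < n &&
               approx_distance_m(avg, fixes[i + count]) < radius_m) {
            /* incremental average of the cluster */
            avg.lat += (fixes[i + count].lat - avg.lat) / (count + 1);
            avg.lon += (fixes[i + count].lon - avg.lon) / (count + 1);
            count++;
        }
        fixes[out++] = avg;
        i += count;
    }
    return out;
}
```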
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath and how they show up in your incoming GPS data stream, e.g. the HDOP values in the GSA line of NMEA 0183. In my own GPS controller software, I allow user-specified rejection criteria against a range of QA-related parameters.
I also tend to work on a moving window principle in this regard, where you can consider rejecting data that represents a spike based on surrounding data in the same window.
Read the position fix indicator to see if the signal is valid (it's a field in the $GPGGA sentence if you parse raw NMEA strings). If it's 0, ignore the message.
Besides that you could look at the combination of HDOP and the number of satellites if you really need to be sure that the signal is very accurate, but in normal situations that shouldn't be necessary.
Of course it doesn't hurt to do some sanity checks on GPS signals:
latitude between -90..90;
longitude between -180..180 (or E..W, N..S, 0..90 and 0..180 if you're reading raw NMEA strings);
speed between 0 and 255 (for normal cars);
distance to the previous measurement (based on lat/lon) matches roughly with the indicated speed;
time difference with the system time not larger than x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) );
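Pulling those checks together, a sketch could look like the following; the gps_fix_t fields and the HDOP cutoff are assumptions to adapt to whatever your parser produces:

```c
#include <stdbool.h>

/* Hypothetical decoded fix -- adapt the fields to your parser's output. */
typedef struct {
    int    fix_quality;     /* 0 = invalid (GGA fix-quality field)  */
    double lat, lon;        /* decimal degrees                      */
    double speed_kmh;       /* reported speed                       */
    double hdop;            /* horizontal dilution of precision     */
    int    num_satellites;
} gps_fix_t;

/* Basic plausibility checks along the lines described above. */
bool fix_looks_sane(const gps_fix_t *f)
{
    if (f->fix_quality == 0)                        return false;
    if (f->num_satellites < 3)                      return false;
    if (f->lat < -90.0  || f->lat > 90.0)           return false;
    if (f->lon < -180.0 || f->lon > 180.0)          return false;
    if (f->speed_kmh < 0.0 || f->speed_kmh > 255.0) return false;
    if (f->hdop > 10.0)                             return false;  /* tune this */
    return true;
}
```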
To do map matching, you basically iterate through your road segments and check which segment is the most likely one for your current position, direction, speed, and possibly previous GPS measurements and matches.
If you're not doing a realtime application, or if a delay in feedback is acceptable, you can even look into the 'future' to see which segment is the most likely.
Doing all that properly is an art by itself, and this space here is too short to go into it deeply.
It's often difficult to decide with 100% confidence which road segment somebody is on. For example, if there are 2 parallel roads that are equally close to the current position, it's a matter of creative heuristics.
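For what it's worth, the core nearest-segment step can be as simple as the sketch below (working in projected metre coordinates). Anything farther than max_dist_m from every segment is left unmatched, so it can be flagged for the manual drag-and-snap GUI the question mentions; the heuristics for direction, speed, and parallel roads would layer on top of this:

```c
typedef struct { double x, y; } pt_t;        /* projected coordinates (m) */
typedef struct { pt_t a, b; } segment_t;     /* one road edge             */

/* Squared distance from point p to segment s (projection clamped to the
 * segment's endpoints). */
static double dist2_point_segment(pt_t p, segment_t s)
{
    double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    double len2 = dx * dx + dy * dy;
    double t = 0.0;
    if (len2 > 0.0) {
        t = ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2;
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
    }
    double px = s.a.x + t * dx - p.x;
    double py = s.a.y + t * dy - p.y;
    return px * px + py * py;
}

/* Return the index of the closest road segment, or -1 if none is within
 * max_dist_m (those fixes can be flagged for manual snapping). */
int match_to_network(pt_t p, const segment_t *segs, int n, double max_dist_m)
{
    int    best = -1;
    double best_d2 = max_dist_m * max_dist_m;
    for (int i = 0; i < n; i++) {
        double d2 = dist2_point_segment(p, segs[i]);
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}
```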