Trouble programming Mindstorms NXT with RobotC

I am having trouble controlling my Mindstorms NXT robot with RobotC. I want my robot to move forward on a table, and when it reaches the end, the ultrasonic sensor, which is facing down, will determine whether it is at the edge by measuring how far away the ground is. When the ultrasonic sensor finds that it is at the edge, the robot will back away from the edge, turn around, and go the other way.
Here is my code:
#pragma config(Sensor, S1, , sensorSONAR)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

task main()
{
  int Ultrasonic;
  Ultrasonic = SensorValue[S1];
  while (true)
  {
    if (Ultrasonic > 10)
    {
      motor[motorB] = -100;
      motor[motorC] = -100;
      wait1Msec(2000);
      motor[motorB] = 100;
      wait1Msec(2000);
    }
    if (Ultrasonic <= 10)
    {
      motor[motorB] = 50;
      motor[motorC] = 50;
      wait1Msec(5000);
    }
  }
}

The problem with this program is that you only ever read from the ultrasonic sensor once. Here's what happens when you run your program:
1. The variable Ultrasonic is created, and the current sensor value is assigned to it.
2. The program checks the value stored in Ultrasonic.
3. The program does something based on that stored value.
4. The program checks the same stored value again and does the same thing, over and over.
...
To fix this problem, all you need to do is move the reading of the ultrasonic sensor into the while loop, so that the NXT continually checks the value of the sensor. Here's what a modified version would look like:
task main()
{
  int Ultrasonic;
  while (true)
  {
    Ultrasonic = SensorValue[S1];
    if (Ultrasonic > 10)
    {
      // ... Do stuff here...
    }
    if (Ultrasonic <= 10)
    {
      // ... Do more stuff here...
    }
  }
}
In fact, you could make this code even "cleaner" by combining the two checks into a single "if... else..." statement: the program tests one condition and takes one branch if it is true, and the other branch otherwise. Just replace the line if (Ultrasonic <= 10) with else, as sketched below.
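For completeness, a minimal sketch of that combined loop (same placeholder comments as above):

while (true)
{
  Ultrasonic = SensorValue[S1];
  if (Ultrasonic > 10)
  {
    // ... Do stuff here...
  }
  else
  {
    // ... Do more stuff here...
  }
}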

Related

Sample MPU6050 inside Interrupt Service Routine

I'm trying to sample an MPU6050 over I2C at a fixed rate (200 Hz). I want to avoid polling as much as possible, but I can't figure out a way to do this.
I've set TIM7 to fire interrupts at 200 Hz by means of the PSC/ARR registers. I've also set LD2 to toggle while the ISR executes.
void TIM7_conf(void){
    RCC->APB1ENR |= RCC_APB1ENR_TIM7EN;   // enable TIM7 clock
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;  // enable GPIOA clock
    GPIOA->MODER |= GPIO_MODER_MODE5_0;   // set PA5 as output
    // set timer prescaler and auto-reload value
    TIM7->PSC = 8400 - 1;
    TIM7->ARR = 50 - 1;
    TIM7->DIER |= TIM_DIER_UIE;           // enable update interrupt
    NVIC_EnableIRQ(TIM7_IRQn);
    TIM7->CR1 |= TIM_CR1_CEN;
}
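As a sanity check (assuming the usual 84 MHz APB1 timer clock on an STM32F4 Nucleo-style board, which the LD2-on-PA5 wiring suggests): 84 MHz / 8400 / 50 = 200 Hz, so the update interrupt should indeed fire at the intended rate.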
I've also written a function to compute the angle of the MPU from the accelerations. This function is called from another one whose job is to start the communication with the sensor and retrieve the data from it:
float CalcTheta(float* Ax, float* Ay, float* Az){
    float AySq = (*Ay) * (*Ay);
    float AzSq = (*Az) * (*Az);
    return (atan2(-(*Ax), sqrt(AySq + AzSq))) * 180 / M_PI;
}
And the overall one:
float angle_filt(void){
    MPU6050_Read_RawData(&Accel_Raw, &Gyro_Raw);
    MPU6050_Read_ScaledData(&Accel_Scaled, &Gyro_Scaled);
    float Acx = Accel_Scaled.x - biasA_x;
    float Acy = Accel_Scaled.y - biasA_y;
    float Acz = Accel_Scaled.z + biasA_z;
    theta_acc = CalcTheta(&Acx, &Acy, &Acz);
    float Gcy = Gyro_Scaled.y - bias_angle;
    return filt_angle = 0.9 * (filt_angle + Gcy * dt / 1000) + 0.1 * theta_acc;
}
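The last line is a standard complementary filter, filt_angle = a * (filt_angle + Gcy * dt / 1000) + (1 - a) * theta_acc with a = 0.9: the integrated gyro rate dominates over short time scales, while the accelerometer angle slowly corrects the drift.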
Finally, I wrote the TIM7_IRQHandler like this:
void TIM7_IRQHandler(void)
{
    TIM7->SR = 0;               // clear status register
    GPIOA->ODR ^= GPIO_ODR_OD5;
    angle_filt();
}
If I execute the ISR in this way, I get the error:
Program received signal SIGINT, Interrupt. 0x08002c8c in I2C_WaitOnFlagUntilTimeout (hi2c=0x2000008c <i2cHandler>, Flag=1048578, Status=SET, Timeout=25, Tickstart=508) at ../Drivers/STM32F4xx_HAL_Driver/Src/stm32f4xx_hal_i2c.c:7199 7199 while (__HAL_I2C_GET_FLAG(hi2c, Flag) == Status)
If I call angle_filt() first, the LED never turns on. Is this the correct way to sample a sensor at a fixed rate over I2C? Should I share the whole code with you?
Thanks in advance for your help!
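If the blocking HAL I2C calls are the problem (they spin on HAL_GetTick(), which may not advance inside an interrupt that pre-empts SysTick), one common pattern is to keep the handler minimal and defer the bus transaction to the main loop. A sketch only, reusing the names and initialisation from the question:

volatile uint8_t sample_due = 0;    // set by the timer ISR, cleared in main

void TIM7_IRQHandler(void)
{
    TIM7->SR = 0;                   // clear the update flag
    GPIOA->ODR ^= GPIO_ODR_OD5;     // toggle LD2 as a heartbeat
    sample_due = 1;                 // just mark that a sample is due
}

int main(void)
{
    // ... clock, GPIO, I2C and TIM7 init as in the question ...
    while (1)
    {
        if (sample_due)
        {
            sample_due = 0;
            angle_filt();           // blocking I2C happens outside the ISR
        }
    }
}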

Raspberry Pi Pico locks up when I try to use interrupts

I'm trying to use encoders to track the movement of three wheels on a robot, but as soon as any of the motors move, the robot "locks up": it stops responding to commands, stops printing to the serial monitor, and just keeps spinning its wheels until I turn it off. I cut out everything except the code to track one encoder and tried turning the wheel by hand to pin down the problem, but it still locked up. Even more strangely, it now starts spinning one of the wheels even though I've removed any code that should do that, even by mistake.
I used the Arduino IDE to program the Pico since I've got no familiarity with Python, but I can't find any information or troubleshooting tips for using interrupts with the Pico that don't assume you're using MicroPython.
Here's the simplified code I'm using to try to find the problem. All it's meant to do is keep track of how many steps the encoder has made and print that to the serial monitor every second. I've tried removing the serial output and lighting up LEDs instead, but that didn't help.
int encA = 10;
int encB = 11;
int count = 0;
int timer = 0;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  attachInterrupt(digitalPinToInterrupt(encA), readEncoder, RISING);
  timer = millis();
}

void loop() {
  // put your main code here, to run repeatedly:
  if (timer - millis() > 5000) {
    Serial.println(count);
    timer = millis();
  }
}

void readEncoder() {
  int bVal = digitalRead(encB);
  if (bVal == 0) {
    count--;
  }
  else {
    count++;
  }
}
Does the mapping function digitalPinToInterrupt work for the Pi Pico?
Can you try just using the interrupt number that corresponds to the pin?
attachInterrupt(9, readEncoder, RISING); // or the number 0-25 which maps to that pin
https://raspberrypi.github.io/pico-sdk-doxygen/group__hardware__irq.html
You may have the wrong pin-to-encoder mapping in your example (maybe you incorrectly copied and pasted?). The interrupt is attached to encA, but the handler reads encB:
attachInterrupt(digitalPinToInterrupt(encA), readEncoder, RISING);
void readEncoder() {
  int bVal = digitalRead(encB); ...}
There is similar code on GitHub that you could modify and try instead.
https://github.com/jumejume1/Arduino/blob/master/ROTARY_ENCODER/ROTARY_ENCODER.ino
It might help you find a solution.
Also,
https://www.arduino.cc/reference/en/libraries/rpi_pico_timerinterrupt/
The interrupt number corresponds to the pin (unless you have reassigned it or disabled it) so for pin 11 the code can be:
attachInterrupt(11, buttonPressed, RISING);
This works:
volatile bool buttonPress = false;     // shared with the ISR, so declared volatile
unsigned long buttonTime = 0;          // to prevent debounce

void setup() {
  Serial.begin(9600);
  pinMode(11, INPUT_PULLUP);
  attachInterrupt(11, buttonPressed, RISING);
  // can be CHANGE or LOW or RISING or FALLING or HIGH
}

void loop() {
  if (buttonPress) {
    Serial.println(F("Pressed"));
    buttonPress = false;
  } else {
    Serial.println(F("Normal"));
  }
  delay(250);
}

void buttonPressed() {
  // Set timer to work for your loop code time
  if (millis() - buttonTime > 250) {
    // button press ok
    buttonPress = true;
  }
  buttonTime = millis();
}
See: https://raspberrypi.github.io/pico-sdk-doxygen/group__hardware__irq.html for disable, enable etc.
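Applying the same pattern back to the encoder sketch from the question (a sketch only, keeping the question's pin assignments GP10/GP11, making the shared counter volatile, and fixing the elapsed-time comparison; whether you need INPUT or INPUT_PULLUP depends on your encoder's outputs):

const int encA = 10;
const int encB = 11;
volatile long count = 0;            // shared with the ISR, so volatile
unsigned long timer = 0;

void setup() {
  Serial.begin(9600);
  pinMode(encA, INPUT);             // or INPUT_PULLUP, depending on the encoder
  pinMode(encB, INPUT);
  attachInterrupt(digitalPinToInterrupt(encA), readEncoder, RISING);
  timer = millis();
}

void loop() {
  if (millis() - timer > 1000) {    // note: millis() - timer, not timer - millis()
    Serial.println(count);
    timer = millis();
  }
}

void readEncoder() {
  // B low at A's rising edge -> one direction, B high -> the other
  if (digitalRead(encB) == LOW) {
    count--;
  } else {
    count++;
  }
}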

How to optimize the code for reading SPI on an Arduino in slave mode

Background (not important):
I am working on a project to integrate a Bluetooth module into a Pioneer car radio. I understand perfectly well that it would be easier to buy a new one =) but that's not interesting. A side product of the project was an Arduino-based adapter for the resistor-ladder buttons, which the Pioneer did not understand. The same adapter also controls the Bluetooth board, so it can skip tracks forward and backward (there is no button on the steering wheel for pause). Now I want the Bluetooth to turn on only in AUX mode. The problem is that the current mode can only be determined by reading the signal on the SPI bus of the source-switching chip. I was able to read this data using an Arduino Nano. I do not have a logic analyzer, but I doubt it would tell me much more than I already see.
The actual question:
By trial and error, I found byte sequences that indicate that a particular mode has started, for example:
10110011
1
111
1000000
I'm sure I'm doing it wrong, but at least I get repeatable results. However, when I try to detect these sequences with if statements, the Nano is not fast enough and it starts missing data from the chip.
#include "SPI.h"
bool flag01, flag02, flag03, flag11, flag12, flag13, flag31, flag32, flag33;
void setup (void)
{
Serial.begin(9600);
pinMode(MISO, OUTPUT);
SPCR |= _BV(SPE);
SPI.attachInterrupt();
}
// Вызываем функцию обработки прерываний по вектору SPI
// STC - Serial Transfer Comlete
ISR(SPI_STC_vect)
{
// Получаем байт из регистра данных SPI
byte c = SPDR;
Serial.println(c, BIN);
if (c == 0b1) {
Serial.println("1 ok");
flag11 = true;
} else {
flag11 = false;
}
if (c == 0b11 && flag11) {
Serial.println("11 ok");
flag12 = true;
} else {
flag12 = false;
flag11 = false;
}
if (c == 0b1100000 && flag11 && flag12) {
Serial.println("1100000 ok");
flag13 = true;
} else {
flag13 = false;
flag12 = false;
flag11 = false;
}
}
void loop(void)
{}
I am scared to look at this code myself, but I cannot think of anything better. I have heard about using some kind of buffer, but I don't know how to fit one into this solution. The data packets are framed by the CS signal dropping, and I can't figure out how to determine the beginning and end of a packet from the commands, so that I can write it into a buffer or array and only then go through it with a comparison.
I will be grateful if someone can at least point me in the right direction.
There is also an ESP8266, but it has a limit of 32 bits per data packet in slave mode and I do not know how to get around that correctly.
So, the actual question:
How can I optimize the code so that the Arduino has time to process the data and I can catch the pattern?
Perhaps implementing reading of data of arbitrary length on the ESP8266, or at least padding it to the required length, would help me. But I still can't figure that out with the spi.slave library.
First, you should keep your ISR as short as possible; certainly don't use Serial prints inside the ISR.
Secondly, if you don't know exactly how long the data is, you need a buffer to capture the data, and you should try to determine the data length before you try to analyze it.
volatile uint8_t byteCount = 0;
volatile bool dataReady = false;
byte data[32];

// SPI interrupt routine
ISR (SPI_STC_vect)
{
  if (byteCount < sizeof(data))   // don't run past the end of the buffer
    data[byteCount++] = SPDR;
  dataReady = true;
}

void setup (void)
{
  // your SPI and Serial setup code
}

void loop (void)
{
  // for determining the data stream length
  if (dataReady) {
    Serial.println(byteCount);
    dataReady = false;
  }
}
Once you know how long the data stream is (let's assume it is 15 bytes long), you can change your sketch slightly to capture the data and analyze it.
volatile uint8_t byteCount = 0;
volatile bool dataReady = false;
byte data[32];

// SPI interrupt routine
ISR (SPI_STC_vect)
{
  if (byteCount < sizeof(data))
    data[byteCount++] = SPDR;
  if (byteCount == 15)
    dataReady = true;
}

void loop (void)
{
  if (dataReady) {
    dataReady = false;
    // do your analysis here
    byteCount = 0;   // reset for the next packet
  }
}
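Since the question mentions that packets are framed by the CS line dropping, another option (a sketch only, assuming the switching chip raises CS between packets and that CS is wired to the Nano's SS pin, D10) is to use the CS level itself to mark the end of a packet instead of a hard-coded length:

volatile uint8_t byteCount = 0;
byte data[32];

// SPI interrupt routine: just store each byte
ISR (SPI_STC_vect)
{
  if (byteCount < sizeof(data))
    data[byteCount++] = SPDR;
}

void loop (void)
{
  // when CS (SS, D10) is high again, the packet is complete
  if (digitalRead(SS) == HIGH && byteCount > 0) {
    noInterrupts();              // take a consistent snapshot
    uint8_t len = byteCount;
    byteCount = 0;
    interrupts();
    // compare data[0..len-1] against the known sequences here
  }
}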

Microsoft Kinect and background/environmental noise

I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, but in a home dev environment there is obviously not much background noise compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications with the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers and unexpected objects in the background? I do expect that the space between the Kinect sensor and the user will usually be free of interference; what I am most mindful of right now is the background noise.
While I am aware that the Kinect does not track well under direct sunlight (either on the sensor or the user), are there certain lighting conditions or other external factors I need to factor into the code?
The answers I am looking for are:
What kind of issues can arise in a live environment?
How did you code or work your way around them?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (sunlight or artificial lighting)
Kinect has an infrared sensor that can be confused by direct light. This sensor should not be hit directly by any light source. You can emulate such an environment at your home/office by playing with an ordinary laser pointer and torches.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
public static Body Default(this IEnumerable<Body> bodies)
{
    Body result = null;
    double closestBodyDistance = double.MaxValue;

    foreach (var body in bodies)
    {
        if (body.IsTracked)
        {
            var position = body.Joints[JointType.SpineBase].Position;
            var distance = position.Length();

            if (result == null || distance < closestBodyDistance)
            {
                result = body;
                closestBodyDistance = distance;
            }
        }
    }

    return result;
}
3) Use the tracking IDs to distinguish different players
Each player has a TrackingID property. Use that property when players interfere or move at random positions. Do not use that property as an alternative to face recognition though.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;

void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);

            var bodies = _bodies.Where(b => b.IsTracked).ToList();

            if (bodies != null && bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
            {
                _trackingID1 = bodies[0].TrackingId;
                _trackingID2 = bodies[1].TrackingId;
                // Alternatively, specify body1 and body2 according to their distance from the sensor.
            }

            Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
            Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();

            if (first != null)
            {
                // Do something...
            }

            if (second != null)
            {
                // Do something...
            }
        }
    }
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters

double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too

if (distance > MAX_DISTANCE)
{
    // Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
    // Prompt the player to move farther.
}
else
{
    // Player is at the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();

userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();

void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
    // A new user has entered the scene. Get the ID from the e param.
}

void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
    // A user has left the scene. Get the ID from the e param.
}
6) Have a visual clue of which player is tracked
If there are a lot of people surrounding the player, you may need to show on-screen who is tracked. You can highlight the depth frame bitmap or use Microsoft's Kinect Interactions.
This is an example of removing the background and keeping the player pixels only.
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people and Kinect may confuse some of their joints (for example, Kinect may extend your legs to the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution would be to have a simple carpet where you expect people to stand. A carpet would also act as an indication of the proper distance, so you would provide a better user experience.
I created an application for home use like yours before, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors that I would never have anticipated in a controlled environment. However, it did help me, because it led me to add some interesting adjustments to my code, which are centered around human detection only.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which led to my application switching between the user and an inanimate object, causing it to lose track of the user and their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human body, measured in head units. Below is the code for how I did this (SDK version 1.8, C#):
bool PersonDetected = false;
double[] humanRatios = { 1.0, 4.0, 2.33, 3.0 };
/* Array indexes
 * 0 - Head (shoulder to head)
 * 1 - Leg length (foot to knee to hip)
 * 2 - Width (shoulder to shoulder center to shoulder)
 * 3 - Torso (hips to shoulder)
 */
const double MaximumDiff = 0.2; // I used .2 for my MaximumDiff

....

double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.Head]);

currentRatios[0] = 1.0;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft]) + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter]) + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;

int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
    double diff = currentRatios[i] - humanRatios[i];
    if (Math.Abs(diff) <= MaximumDiff)
        correctProportions++;
}

if (correctProportions >= 2)
    PersonDetected = true;
Another method I had success with was summing the squared distances between consecutive joints. I found that non-human detections had more variable summed distances, whereas humans were more consistent. I learned the threshold using a one-dimensional support vector machine (I found users' summed distances were generally less than 9).
// in AllFramesReady or SkeletalFrameReady
Skeleton data;
...
float lastPosX = 0; // trying to detect false-positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;

foreach (Joint joint in data.Joints)
{
    // add the distance squared
    diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
    diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
    diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);

    lastPosX = joint.Position.X;
    lastPosY = joint.Position.Y;
    lastPosZ = joint.Position.Z;
}

if (diff < 9) // this is what my svm learned
    PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if the Kinect switched the two users it was tracking to other people, my application would crash because of the sudden changes in the data. To counter this, I kept track of both each player's skeletal index and their player ID. To learn more about how I did this, see Kinect user Detection.
Add adjustable parameters to adapt to varying situations
Where I was presenting, the same tilt angle and other basic Kinect parameters (like near mode) did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job.
Expect people to do stupid things
The next time I presented, I had an adjustable tilt, and you can guess whether someone burned out the Kinect's motor. Anything that can be broken on a Kinect, someone will break. Leaving a warning in your documentation will not be sufficient. You should add cautionary checks on the Kinect's hardware to make sure people don't get upset when they break something inadvertently. Here is some code checking whether the user has used the motor more than 20 times in two minutes.
int motorAdjustments = 0;
DateTime firstAdjustment;
...
// in motor adjustment code
if (motorAdjustments == 0)
    firstAdjustment = DateTime.Now;
++motorAdjustments;

if (motorAdjustments < 20)
{
    // adjust the tilt
}
else
{
    DateTime timeCheck = firstAdjustment;
    if (DateTime.Now > timeCheck.AddMinutes(2))
    {
        // reset all variables
        motorAdjustments = 1;
        firstAdjustment = DateTime.Now;
        // adjust the tilt
    }
}
I would note that all of these were issues for me with the first version of the Kinect, and I don't know how many of them have been solved in the second version, as I sadly haven't gotten my hands on one yet. However, I would still implement some of these techniques, if only as back-ups, because there will be exceptions, especially in computer vision.

GPS altitude print to LCD

I am trying to get my Arduino Uno to display the altitude from a GPS module, without any other data. I am still learning the code, but I've run into a problem: I can't find what command is used to pull the altitude out of the GPS string. I know the module is pulling data successfully, as I ran the example code from http://learn.parallax.com/kickstart/28500 and it read the first bit of the string, though I moved on to trying to get the altitude before getting it to scroll the whole string.
I am using a basic 16x2 LCD display, and the display I have working fine.
The end goal of this project is a GPS/gyroscope altimeter that can record altitude and temperature to an SD card, deploy a parachute at apogee (15,000 ft), and deploy a larger parachute at 1,000 ft.
Here is the code I am using for the altitude; I've marked the section I can't figure out. (I'm probably just missing a term, or I might have really messed something up.)
Any help would be appreciated, have a great day.
#include <SoftwareSerial.h>
#include "./TinyGPS.h" // Special version for 1.0
#include <LiquidCrystal.h>

TinyGPS gps;
SoftwareSerial nss(0, 255); // Yellow wire to pin 6
LiquidCrystal lcd(7, 8, 9, 10, 11, 12);

void gpsdump(TinyGPS &gps);
bool feedgps();

void setup() {
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
  // initialize the serial communications:
  Serial.begin(9600);
  Serial.begin(115200);
  nss.begin(4800);
  lcd.print("Reading GPS");
  lcd.write(254); // move cursor to beginning of first line
  lcd.write(128);
  lcd.write(" "); // clear display
  lcd.write(" ");
}

void loop() {
  bool newdata = false;
  unsigned long start = millis();
  while (millis() - start < 5000) { // Update every 5 seconds
    if (feedgps())
      newdata = true;
  }
  gpsdump(gps);
}

// Get and process GPS data
void gpsdump(TinyGPS &gps) {
  // problem area
  float falt, flat, flon;
  unsigned long age;
  gps.f_get_position(&flat, &flon);
  inline long altitude (return _altitude);
  long _altitude
  ;lcd.print(_altitude, 4);
} // end problem area

// Feed data as it becomes available
bool feedgps() {
  while (nss.available()) {
    if (gps.encode(nss.read()))
      return true;
  }
  return false;
}
lcd.print(x, 4) prints in base 4. Did you want that, or do you want ordinary base 10 (decimal)?
Secondly, where do you expect _altitude to come from? It's uninitialized. There's also an uninitialized falt, and a weird inline long altitude line which doesn't mean anything.
You might be better off learning C++ first, in a desktop environment. Debugging an embedded device is a lot harder, and you're still producing quite a few bugs.
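For reference, a minimal sketch of what the problem area might look like instead, assuming the TinyGPS build being used provides f_altitude() like the stock TinyGPS library (altitude in meters as a float; for a float argument, the second parameter of lcd.print() is the number of decimal places rather than the base):

// Get and process GPS data
void gpsdump(TinyGPS &gps) {
  float flat, flon;
  unsigned long age;
  gps.f_get_position(&flat, &flon, &age);

  float falt = gps.f_altitude();   // altitude above sea level, in meters

  lcd.clear();
  lcd.print("Alt: ");
  lcd.print(falt, 1);              // one digit after the decimal point
  lcd.print(" m");
}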