NXT 2.0: Combine line following and obstacle avoidance?

I have seen code samples that do line following and code samples that do obstacle avoidance. But are there any code samples that combine both?

One of the bits of code I built for an assignment does this. It follows the rules of the Australian RoboCup Rescue.
So if you look at navigateBottle() and followLine() at this link, Click here for the code, that should help you understand how you can do this.
It uses two light sensors and a sonar sensor: the robot bounces between the two light sensors to stay on the line, while the sonar checks whether an object is in front of it.
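In case it helps to see the control flow without digging through the link, here is a minimal sketch of that bounce-and-avoid loop in plain C. The helper functions and thresholds are hypothetical stand-ins for whatever NXT API you use (NXC, leJOS, RobotC), not code from the assignment:

    /* Hypothetical hardware helpers -- supply these for your platform. */
    int  readLeftLight(void);
    int  readRightLight(void);
    int  readSonarCm(void);
    void drive(int leftPower, int rightPower);
    void avoidObstacle(void);   /* e.g. arc around and re-find the line */

    #define DARK_THRESHOLD 40   /* light reading below this = on the line */
    #define OBSTACLE_CM    15   /* sonar distance that triggers avoidance */

    void followLineWithAvoidance(void)
    {
        for (;;) {
            if (readSonarCm() < OBSTACLE_CM) {  /* obstacle check runs every */
                avoidObstacle();                /* pass and pre-empts the    */
                continue;                       /* line following            */
            }
            if (readLeftLight() < DARK_THRESHOLD)
                drive(20, 60);   /* left sensor on the line: bounce left */
            else if (readRightLight() < DARK_THRESHOLD)
                drive(60, 20);   /* right sensor on the line: bounce right */
            else
                drive(60, 60);   /* line between the sensors: go straight */
        }
    }

The key point is simply that the sonar test sits at the top of the loop, so avoidance always wins over line following.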
Sam

Related

mediapipe KNIFT template matching example: using own pics does not work properly - how does the example actually work?

I followed the example steps to create my own Android app from the KNIFT template matching example, like the 3-dollar-bill example on the MediaPipe website. Did anyone of you build this and know how it really works? I can't find clear documentation.
My approach to running my own example, as suggested by MediaPipe: I have three example pics in my folder (and I did all the build steps with them), and they are indeed detected and framed... but not as often and not as correctly as in the dollar bill example (which works fine for me; every bill is detected, framed, and labeled as expected).
Also, my labeling doesn't work properly. Sometimes it works partly; sometimes something is labeled but incorrectly. What does the framework do with my pics and labels, and how can I optimize my own example?
Any help is appreciated...
regards, fabian

Resistive current measurement by novizon

Can anyone provide a VI file for resistive current measurement matching the following block diagram?
It is impossible to translate this image into code, since it is not a LabVIEW code snippet but just a plain image, and you are missing the code for the sub-VI and the Express VIs.
My suggestion would be to contact Novizon or any of the co-authors of the paper this image comes from on ResearchGate and hope they are willing to supply you with the VIs.

Writing an MP in ZIMPL to be solved with SCIP

This might be a quite basic question; however, I did not find any suggestions so far.
I am running the SCIP Optimization Suite on OS X and everything runs well so far. Now I want to start modeling my first mathematical program in ZIMPL; however, I do not know how to start.
The user's guide only describes how to load existing .zpl files, not how to create new ones.
Do you have any suggestions or any further threads dealing with that task?
Kind Regards
In the zimpl package root, there's an example directory. The .zpl files in example are a great starting point for writing your own zimpl inputs. Also, zimpl's author wrote this pdf that walks through the process of writing a formulation to solve a Sudoku puzzle using zimpl.
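If it helps to see the shape of a .zpl file before opening the examples, here is a tiny made-up LP showing the basic ZIMPL building blocks (set, param, var, objective, constraint); all names and numbers are invented:

    # toy.zpl - a made-up toy LP to show the basic ZIMPL building blocks
    set I := { 1 .. 3 };                  # index set
    param c[I] := <1> 2, <2> 3, <3> 1;    # cost coefficients
    var x[I] >= 0;                        # nonnegative decision variables

    minimize cost: sum <i> in I : c[i] * x[i];
    subto demand:  sum <i> in I : x[i] >= 10;

Running zimpl toy.zpl turns this into an .lp file, and if your SCIP build includes the ZIMPL reader, scip -f toy.zpl should read it directly.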

How can my VHDL code and MicroBlaze co-exist?

Well, my problem started when I had my VHDL code up and running on my Spartan-3A but needed to send and receive data between it and the PC.
I need my VHDL code, so I went for a MicroBlaze structure. The problem is that I can't understand how my VHDL code and MicroBlaze will co-exist at the same time, because every time I program the FPGA with SDK it deletes my VHDL off the FPGA, and vice versa with ISE. I don't want to use custom peripherals unless that is the only solution.
Some people just tell me to use the MicroBlaze HDL files produced by EDK. OK, but then aren't I using an unprogrammed MicroBlaze?
And do I need to go through all of this just to be able to communicate with my VHDL code from the PC? (No, I can't use RS-232, since I need a speed of 56 Mbit/s.)
So here is what I don't understand:
1. If you implement MicroBlaze through hardware (HDL from EDK into ISE), isn't it then an unprogrammed processor?
2. People tell me I can let MicroBlaze and my VHDL code see each other through GPIO. Again, how would I implement GPIO, how do I connect it to both MicroBlaze and my VHDL code, and how do I program MicroBlaze while it is in hardware in this situation?
Please, any help; it's kind of a mess.
It is not that difficult, but unfortunately Xilinx documentation is not that clear.
What you need to do, after you are done with your MicroBlaze code and feel comfortable with it, is create a new project in ISE (or use the one you already have), then add a new file to the project; but instead of adding a VHDL or Verilog file, you add the system file from EDK.
After you have added your XPS project into ISE, you need to do some manual work to make things work for you.
Here is a list of things that need to be done:
You have to create a UCF file that includes all the constraints from EDK.
You have to make sure that you have enough space inside your FPGA for both the EDK system and your own code.
Synthesize and implement your design using the project in ISE.
Program your FPGA with the bit file generated by ISE.
To communicate between the MicroBlaze and your own code, there are many different ways; the easiest is to use a GPIO block in your MicroBlaze system and connect those signals to your own code in your top-level wrapper.
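On the software side that comes down to a few lines of C against the xgpio driver. A rough sketch, assuming a dual-channel xps_gpio core; the XPAR_* device ID macro name comes from your generated xparameters.h, so check the exact name there:

    #include "xparameters.h"  /* generated by EDK: device IDs, base addresses */
    #include "xgpio.h"        /* Xilinx GPIO driver */

    int main(void)
    {
        XGpio gpio;

        /* The device ID macro depends on your system; look it up in
           xparameters.h (XPAR_XPS_GPIO_0_DEVICE_ID is a guess). */
        XGpio_Initialize(&gpio, XPAR_XPS_GPIO_0_DEVICE_ID);

        /* Direction mask: 0 = output bit, 1 = input bit. */
        XGpio_SetDataDirection(&gpio, 1, 0x00000000); /* ch 1 drives your logic */
        XGpio_SetDataDirection(&gpio, 2, 0xFFFFFFFF); /* ch 2 reads your logic  */

        for (;;) {
            u32 status = XGpio_DiscreteRead(&gpio, 2);   /* from your VHDL */
            XGpio_DiscreteWrite(&gpio, 1, status + 1);   /* back to your VHDL */
        }
    }

On the hardware side, the GPIO core's external ports (GPIO_IO_I/GPIO_IO_O) are the signals you wire to your own VHDL entity in the top-level wrapper.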
You may be able to find some useful information in the lab documents and lab material on the following Xilinx page:
Xilinx EDK interface class
Accessing the GPIO is pretty simple; you can use the information on this page to get you started:
Reading DIP Switch with MicroBlaze
You may also find this document and the related files very useful. It is not for your board, but it covers exactly what you are asking about:
Avnet MB tutorial document
I hope this is clear enough.

webrtc AEC algorithm

I have made a piece of software that uses the WebRTC DSP libraries (AEC, NS, AGC, VAD). Now I need to know which algorithm each one uses so I can write my Master's thesis, but I can't find any information about that.
Does someone know the algorithms these libraries use, especially the acoustic echo cancellation? For example NLMS, which I know is commonly used, but I don't know whether WebRTC uses it too.
I've tried to work out the algorithm by looking into the source code, but I don't understand it well enough.
Thanks in advance!
I've just successfully used the standalone WebRTC AECM module on Android, and here are some tips:
1. The most important thing is the "delay"; you can find its definition in:
..\src\modules\audio_processing\include\audio_processing.h
quote:
Sets the |delay| in ms between AnalyzeReverseStream() receiving a far-end frame and ProcessStream() receiving a near-end frame containing the corresponding echo. On the client-side this can be expressed as
delay = (t_render - t_analyze) + (t_process - t_capture)
where:
t_analyze is the time a frame is passed to AnalyzeReverseStream() and t_render is the time the first sample of the same frame is rendered by the audio hardware.
t_capture is the time the first sample of a frame is captured by the audio hardware and t_process is the time the same frame is passed to ProcessStream().
If you want to use the AECM module in standalone mode, be sure you follow this documentation strictly.
2. AudioRecord and AudioTrack sometimes block (due to minimized buffer sizes), so when you calculate the delay, don't forget to add the blocking time to it.
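Putting tips 1 and 2 together, the bookkeeping looks roughly like this in plain C; the four timestamps are ones you capture yourself in your render and capture paths (the names are just taken from the quote above):

    #include <stdint.h>

    /* Timestamps in ms, captured by your own audio code:
     *   t_analyze - far-end frame passed to AnalyzeReverseStream()
     *   t_render  - first sample of that frame hits the speaker
     *   t_capture - first sample of a near-end frame hits the mic
     *   t_process - that near-end frame passed to ProcessStream()  */
    int compute_echo_delay_ms(int64_t t_analyze, int64_t t_render,
                              int64_t t_capture, int64_t t_process,
                              int track_block_ms, int record_block_ms)
    {
        /* The formula quoted from audio_processing.h above. */
        int delay = (int)((t_render - t_analyze) + (t_process - t_capture));

        /* Tip 2: AudioTrack/AudioRecord blocking time also delays
           the signal, so fold it in as well. */
        return delay + track_block_ms + record_block_ms;
    }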
3. If you don't know how to compile the AECM module, you may want to learn the Android NDK first; the module source path is
..\src\modules\audio_processing\aecm
By the way, these blog posts may help a lot with native development and debugging:
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-development/
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-debugging/
Hope this may help you.
From code inspection of the AGC algorithm in WebRTC, it closely matches the description in http://www.ti.com/lit/wp/spraal1/spraal1.pdf
It's based on NLMS, but has a variable step length (mu).
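For reference, the textbook NLMS update this refers to looks like the sketch below. This is the generic algorithm, not WebRTC's actual implementation (which works block-wise in the frequency domain and, as noted, varies mu):

    #include <stddef.h>

    /* One NLMS step: w <- w + (mu / (||x||^2 + eps)) * e * x.
     * x holds the last n far-end samples, d is the current near-end
     * (microphone) sample. Returns e = d - w.x, the echo-cancelled output. */
    double nlms_step(double *w, const double *x, size_t n,
                     double d, double mu, double eps)
    {
        double y = 0.0, energy = 0.0;
        for (size_t i = 0; i < n; i++) {
            y      += w[i] * x[i];   /* filter output: estimated echo */
            energy += x[i] * x[i];   /* input power normalizes the step */
        }
        double e = d - y;            /* residual after echo removal */
        double step = mu * e / (energy + eps);
        for (size_t i = 0; i < n; i++)
            w[i] += step * x[i];     /* weight update */
        return e;
    }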