Machine learning with continuous training/model updates (TensorFlow or something else)

I am wondering whether there are projects/examples with any machine learning library (TensorFlow, etc.) that can do continuous training, to simulate something like an animal or pet.
What do I mean by animal/pet?
Let's assume I have this hardware robot.
Inputs:
Touch sensor, which returns a number from 0 to 255 depending on the touching force.
Microphone.
Webcam.
Outputs:
Moving module, which can move forward/backward and left/right. Let's say just a simple wheel system with 4 input pins: if I send +5 V (binary 1) to pin 1 it goes forward, to pin 2 it goes backward, to pin 3 left, and to pin 4 right.
Speakers.
Everything is connected to a central computer (a Raspberry Pi, or, if that doesn't have enough CPU/memory, a Microsoft Surface Pro with a 4-core i7 3+ GHz CPU and 32 GB of RAM).
The idea is to connect the hardware inputs mentioned above to the inputs of a neural network and the outputs to the outputs of the neural network, and to impose these conditions:
Minimize bad feelings and maximize good.
If the touch sensor returns a number greater than 128, that is a bad feeling (pain); if it returns less than 127, that is a good feeling (petting). If the battery is below 20%, that is a bad feeling. A loud noise from the microphone is a bad feeling. In programming terms: three variables to minimize and one to maximize.
When I connect it all together and switch it on, I will train it like a baby: show it some pictures, say something, pet it for good work, etc. I will show it where the charger is (maybe I will use a wireless charger so that it can charge by itself). I understand that this will take a long time, maybe years.
My problem is that most of the examples I have found so far work like this: train first, then use the already-trained neural network, or use a network pre-trained by others. I could not find an example of continuous training and usage of a neural network at the same time.
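For concreteness, here is a minimal sketch of the kind of loop I have in mind, assuming TensorFlow/Keras. The sensor, reward, and motor details are hypothetical placeholders, and the surrogate loss is deliberately naive; the point is only that the network is trained and used in the same loop:

    import numpy as np
    import tensorflow as tf

    # Tiny policy network: 3 sensor values in, 4 motor-pin activations out.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(3,)),
        tf.keras.layers.Dense(4, activation="sigmoid"),
    ])
    optimizer = tf.keras.optimizers.Adam(1e-3)

    def read_sensors():
        # Placeholder: normalized touch force, noise level, battery charge.
        return np.random.rand(1, 3).astype("float32")

    def reward(sensors):
        touch, noise, battery = sensors[0]
        # Soft touch is good; pain, loud noise and low battery are bad.
        return (float(touch < 0.5) - float(touch >= 0.5)
                - float(noise) - float(battery < 0.2))

    while True:  # act, observe, update -- forever
        sensors = read_sensors()
        with tf.GradientTape() as tape:
            action = model(sensors)
            # Naive surrogate loss: reinforce the current behaviour when
            # feelings are good, discourage it when they are bad.
            loss = -reward(sensors) * tf.reduce_mean(tf.math.log(action + 1e-6))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        # drive_motors(action.numpy()[0] > 0.5)  # hypothetical motor interface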
Questions:
Is it possible to implement this with current machine learning technologies/libraries (TensorFlow, etc.)? Let's consider only the software part first, assuming I have unlimited hardware.
If it is not possible, then why?
If it is possible, links to examples or a description of the general approach would be very helpful.
If it is possible, what hardware would be needed?
P.S. Of course I do not expect it to be as smart as a human, or even as a dog or cat. Maybe like a fly or mosquito :)
Also, I would like a high-level answer without going very deep into details such as how to implement the moving module. And everything as simple as possible.

Related

GNU Radio and bladeRF on Raspberry Pi (simple FSK system)

I am having a problem porting a GNU Radio setup from a PC (Windows 10, USB 3) to a Raspberry Pi 2 (USB 2). USB bandwidth and CPU should not be a problem, I think (only around 30% utilization while running). Essentially it looks like the RPi is 'pausing' during transmission, while the PC is not. The receiver is running on a PC in both cases. I am including a picture of what I see after the FSK demod when running the transmitter on the PC vs. the Pi (circled 'pause' area), as well as a picture of my (admittedly sloppy) flowgraph. Any help/tips are greatly appreciated. [Images: GNU Radio flowgraph; received signals]
Edit: It appears it may actually be processing limitations. Switching from 9400 baud to 2400 baud makes the issue go away. If anyone has experience with GNU Radio... am I doing anything overly inefficient, or should I just drop the comm rate?
The first thing I would do would be to lower your sample rates.
You don't need 1.5 Ms/s if you are going to keep only the lowest 32 kHz in your low-pass filter.
Then you could do the same for your second stage after the quadrature demod if that's not enough (by the way, the sample rate of your second low-pass filter does not seem to match the actual sample rate of that stage, which is still 1.5 Ms/s if I'm not mistaken).
Anyway, GNU Radio uses a lot of processing power, so try not to use a sampling rate way above what you actually need ;)
In your case, you could cut the incoming sample rate down to 64 ks/s (say 80 for safety). Eighteen times fewer samples to process might do the trick :)
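As a rough sketch of what that looks like in the GNU Radio Python API (the numbers match the rates discussed above; your flowgraph will differ):

    from gnuradio.filter import firdes, fir_filter_ccf

    samp_rate = 1.5e6   # incoming rate from the SDR
    decim = 18          # 1.5 Ms/s / 18 ~ 83 ks/s, comfortably above 64 k

    # Design the low-pass taps at the incoming rate, then let the filter
    # itself decimate, so every downstream block runs 18x slower.
    taps = firdes.low_pass(1.0, samp_rate, 32e3, 8e3)
    lpf = fir_filter_ccf(decim, taps)
    # Blocks after this one (quadrature demod, clock recovery, ...) now
    # see samp_rate / decim samples per second.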

Using a GPU both as video card and GPGPU

Where I work, we do a lot of numerical computations and we are considering buying workstations with NVIDIA video cards because of CUDA (to work with TensorFlow and Theano).
My question is: should these computers come with another video card to handle the display and free the NVIDIA for the GPGPU?
I would appreciate it if anyone knows of hard data on using a video card for display and GPGPU at the same time.
Having been through this, I'll add my two cents.
It is helpful to have a dedicated card for computations, but it is definitely not necessary.
I have used a development workstation with a single high-end GPU for both display and compute. I have also used workstations with multiple GPUs, as well as headless compute servers.
My experience is that doing compute on the display GPU is fine as long as demands on the display are typical for software engineering. In a Linux setup with a couple of monitors, web browsers, text editors, etc., I use about 200 MB for display out of the 6 GB of the card -- so only about 3% overhead. You might see the display stutter a bit during a web page refresh or something like that, but the throughput demands of the display are very small.
One technical issue worth noting for completeness is that the NVIDIA driver, GPU firmware, or OS may have a timeout for kernel completion on the display GPU (run NVIDIA's 'deviceQueryDrv' to see the driver's "run time limit on kernels" setting). In my experience (on Linux), with machine learning, this has never been a problem since the timeout is several seconds and, even with custom kernels, synchronization across multiprocessors constrains how much you can stuff into a single kernel launch. I would expect the typical runs of the pre-baked ops in TensorFlow to be two or more orders of magnitude below this limit.
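If you want to check that flag programmatically rather than through deviceQueryDrv, a quick sketch using PyCUDA (assuming it is installed):

    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        # Nonzero means the driver will kill long-running kernels on this
        # GPU -- typically the one driving a display.
        timeout = dev.get_attribute(cuda.device_attribute.KERNEL_EXEC_TIMEOUT)
        print("GPU %d (%s): run time limit on kernels = %s"
              % (i, dev.name(), "Yes" if timeout else "No"))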
That said, there are some big advantages to having multiple compute-capable cards in a workstation (whether or not one is used for display). Of course there is the potential for more throughput (if your software can use it). However, the main advantage, in my experience, is being able to run long experiments while concurrently developing new ones.
It is of course feasible to start with one card and then add one later, but make sure your motherboard has lots of room and your power supply can handle the load. If you decide to have two cards, with one being a low-end card dedicated to display, I would specifically advise against having the low-end card be a CUDA-capable card lest it get selected as a default for computation.
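One way to guard against that, sketched here for TensorFlow (the index is whatever your compute card enumerates as), is to hide the display card from CUDA before anything initializes:

    import os
    # Must be set before TensorFlow/CUDA initializes in this process.
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # index of the compute card

    import tensorflow as tf
    # With a recent TensorFlow, this should now list only the compute GPU.
    print(tf.config.list_physical_devices("GPU"))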
Hope that helps.
In my experience it is awkward to share a GPU card between numerical computation tasks and driving a video monitor. For example, there is limited memory available on any GPU, which is often the limiting factor in the size of a model you can train. Unless you're doing gaming, a fairly modest GPU is probably adequate to drive the video. But for serious ML work you will probably want a high-performance card. Where I work (Google) we typically put two GPUs in desk-side machines when one is to be used for numerical computation.

Is it possible to have CAN on Arduino without extra hardware?

I would like to have an Arduino operating in a CAN network. Does software that implements the OSI network layers exist for Arduino? I would imagine detecting the HI/LOW levels with GPIO/ADC and sending signals to the network with a DAC. It would be nice to have that without any extra hardware attached. I don't mind having the terminating resistor required by the CAN network, though.
By Arduino I mean any of them. My intention is to keep the development environment.
If such software does not exist, is there any technical obstacle to it, such as limited flash size (again, I don't mean a particular board with a certain ATmega chip)?
You can write a bit-banging CAN driver, but it has many limitations.
First, there's the timing: it's hard to achieve the bit timing and also the arbitration.
You will be able to get 10 kbit/s, or perhaps even 50 kbit/s, but it consumes a huge amount of your CPU time.
And the code itself is a pain.
You have to calculate the CRC on the fly (easy), but implementing the collision detection and all the timing parameters is not easy.
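For reference, the CRC really is the easy bit. A sketch of the CAN CRC-15 (polynomial 0x4599, per the Bosch spec), written in Python for readability though on an AVR you would code the same thing in C:

    def can_crc15(bits):
        """Compute the CAN CRC-15 over a sequence of 0/1 bits
        (stuff bits excluded), from SOF through the data field."""
        crc = 0
        for bit in bits:
            crc_next = bit ^ ((crc >> 14) & 1)
            crc = (crc << 1) & 0x7FFF
            if crc_next:
                crc ^= 0x4599
        return crc

    # Example over an arbitrary bit fragment.
    print(hex(can_crc15([0, 1, 1, 0, 1, 0, 0, 1])))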
I once did this for a company, and it was a really bad idea.
Better buy a chip for 1 Euro and be happy.
There are several CAN bus shield boards available (e.g. this and this), and that would be a far better solution. It is not just a matter of the controller chip; the bus interface, line drivers, and power all need to be considered. If you have the resources and skills, you can of course create your own board or bread-board it for less.
Even if you bit-bang it via GPIO, you would need some hardware mods, I believe, to handle bus contention detection, and it would be very slow and might not interoperate well with "real" CAN controllers on the bus.
If your aim is to communicate between devices of your own design rather than off-the-shelf CAN devices, then you don't need CAN for that; something proprietary will suffice, and a UART will perform faster than a bit-banged CAN implementation.
I don't think such software exists. The CAN bus is more complex than, for example, I2C. Basically, you would have to implement the functionality of both the CAN controller and the CAN transceiver. See this thread for more details (in German).
Alternatively, you could use one of the CAN shields. Another option would be to use a BeagleBone with a suitable CAN cape.
Also take a look at AVR-CAN.

Send and receive data through the power network

I'm not interested in a hardware solution; I want to know about software that could "read" a modulated signal received through the power supply - some sort of low-level driver that would access the power signal at a convenient place and demodulate it.
Is there a way to receive a signal from the computer's power supply? I'm interested in an API or library that would allow the computer to be seen as a node in a Power Line Communication network and receive data directly through the power cable, without the need for a converter. Is there any active research in this field?
Edit:
There is software that reads, monitors, and displays internal component voltages - the DC voltage after it has been converted and filtered by the power supply. What I need now is a method of data encoding that would be invariant to that conversion and filtering, so that the original signal embedded in the AC is present in some form within the converted DC signal.
This is not possible, as described in the question. Yes, with extra hardware you can do it. No, with the standard hardware in a PC, you could not.
As others have noted, among other problems, the only information you can get from a generic PC is a bit of voltage info for the CPU. It's not going to give a picture of the AC signal, nor any signal modulated on top of it. You'll be watching a few highly regulated DC signals deep inside the computer, probably converted at a relatively low rate too. Almost by definition, if you could see external information on any of those signals, your machine is already suffering a hardware failure and chances are the CPU will be crashing soon...
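To make that concrete: on Linux, the closest thing to "reading the power supply" from software is the hardware-monitor telemetry, a handful of slow, regulated DC readings. A sketch (hwmon paths vary by board, and the files may not exist at all):

    import glob

    # Each in*_input file holds one regulated DC rail reading, in
    # millivolts, sampled far too slowly and filtered far too cleanly
    # to carry any mains-side signal.
    for path in sorted(glob.glob("/sys/class/hwmon/hwmon*/in*_input")):
        with open(path) as f:
            print(path, f.read().strip(), "mV")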
*blink* No...
Edit: I mean, there is the possibility of using the power lines as network cables, but only with special adapters, and it is designed just for home networks.
Edit 2: You can't read something from the power supply of a computer... it's not designed for that. You would have to create your own component/adapter for this.
Am I misreading this? Wouldn't this be a pure hardware solution?
This is highly improbable without adding some hardware.
You see, the power supplies in a regular PC are switching power supplies which effectively decouple the AC input from the supplied DC voltage needed on the PC side. The AC side just basically provides power that fuels the high-speed power switching circuitry.
Also, a DC signal, by definition, doesn't provide a signal per se: it is a "static" power level (and yes the power level does vary a bit in the time domain but not as an easy to leverage function).
Yes, there can be an AD (analog-to-digital) monitoring chip on the PC side to read the voltage of the DC supplied to the motherboard, etc., but that doesn't mean there is a signal left that can be harvested: the original power-line "signal" may have been through enough filters that there isn't any "signal" left to be processed.
Lastly, one needs to consider that power supply design varies from company to company; this fact will undoubtedly affect any possible design of a communication solution.
What you describe is possible, but unfortunately you need an adapter to convert the signal running on the power lines into sensible network traffic.
The power line acts as a physical medium, and is thus at the lowest level of the OSI stack. Conversion from the electrical signal to sensible network traffic requires a hardware adapter, just as for an Ethernet adapter. Your computer is unable to understand this traffic, since its power supply was not built to transmit that information. But note that you can easily find an adapter, and it will work the same as an Ethernet adapter, that is, it will be accessible through the standard BSD socket library.
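In other words, once such an adapter is in place, the power-line hop is invisible to your code; a plain socket works exactly as it would over Ethernet (the host and port here are hypothetical):

    import socket

    # Hypothetical peer on the far side of the power-line segment.
    with socket.create_connection(("192.168.1.50", 9000), timeout=5) as s:
        s.sendall(b"hello over the power line\n")
        print(s.recv(1024))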
This is ENTIRELY possible, although you would need to either buy or build some hardware to make it happen. In addition, the software solution would be very, very complex.
The computer's power supply would be out of the picture for the most part. You need to read data straight from the wall with as little extraneous noise as possible. From the electrical engineering perspective, this is a very thoroughly covered topic. In the end, all you're really doing is an analog to digital conversion, and the rest keeps your circuit from being fried.
The software solution would basically be eliminating random noise and looking for embedded signals. The math behind analog signal analysis is very complex; you can spend a few semesters in college covering the topic, and the rest of your career trying to master it. If you're good at it, there's a cushy job for you on Wall Street predicting the stock market.
And that only covers reading incoming signals. Transmitting is a whole 'nother sport.
Now, it also sounds like you might be interested in a hack. That is...
You could buy a commercial off-the-shelf power-line Ethernet adapter and tear it apart. They have two prongs that plug into a standard wall outlet. You could remove these and wire them to the INSIDE of a power supply. To do that, you'd have to tear apart a power supply as well, which is incredibly dangerous, and I hereby warn you and anyone else to NEVER attempt this. The entire Ethernet adapter could be tucked into the power supply, and you could basically have an Ethernet port on the surface of your power supply (either inside or outside the computer). Simply wire that to a standard Ethernet adapter and voila(!): you have nothing but a power cable connecting your computer to the wall outlet, AND you magically have Ethernet!
Note that there also has to be another power-line Ethernet adapter somewhere else for you to establish a network and make the whole project useful.
How can you read modulated data from the power supply? You are talking about volts and ohms, and apart from a possible electric shock, which would be just shocking :), there is nothing there for software to read. There are specialized electrical plugs with Ethernet jacks in them that you can use instead.
I would hazard a guess that this is totally transparent, as per Adrien Plisson's answer, i.e. you would have all of the OSI layers and it would be no different. You can write code to read from the sockets.
AFAIK, no company that produces these electrical plugs would ever open up the API, for competition reasons. The technology is still in its early stages, and adoption is low because it is obviously very expensive (120 euros here in my country for a pair of them) and does not deliver the quoted speed: a 100 Mbps power plug may get maybe 85 Mbps due to varying situations and phenomena with the power (think surges, brownouts, interference).
My 2 cents.
Hope this helps,
Best regards,
Tom.

What are some ideas for an embedded and/or robotics project?

I'd like to start messing around programming and building something with an Arduino board, but I can't think of any great ideas on what to build. Do you have any suggestions?
I show kids who have never programmed or done any electronics before how to make a simple 'phototrope', a light-sensitive robot, in about a day. It costs under £30 (GBP) including the Arduino, electronics, and off-the-shelf mechanics. If folks really get into mobile robots, the initial project can grow and grow (which I feel is part of the fun).
There are international robot competitions which require relatively simple mechanics to get started, e.g. in the UK http://www.tic.ac.uk/micromouse/toh.asp
Ultimate performance requires specially built machines (for lightness), but folks can get creditable results with an Arduino Nano, the right electronics, and a couple of good motors.
A line-following robot is the classic mobile robot project. The track can be as simple as electrical tape. Pololu have some fun videos about their near-Arduino 3PI robot. The sensors are about £1, and there are a bunch of simple motor+gearbox kits from lots of places for under £10. Add a few pounds for motor control, and you have autonomous robot mechanics in need of programming! Add an infrared remote receiver (about £1), and you can drive it around using your TV remote. Add a small solar cell, use an Arduino analogue input to measure its voltage, and it can find the sun. With a bit more electronics, it can 'feed' itself. And so it gets more sophisticated. Each step might be no more than a few hours to a few days of effort, and you'll find new problems to solve and learn from.
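The control law behind such a line follower fits in a few lines. A sketch in Python for clarity (on the Arduino itself this would be the same logic in C inside loop()):

    def motor_speeds(left_on_line, right_on_line, base=0.6, turn=0.3):
        """Bang-bang steering: slow the wheel on the side whose sensor
        sees the line, so the robot turns back onto the tape."""
        left = base - (turn if left_on_line else 0.0)
        right = base - (turn if right_on_line else 0.0)
        return left, right

    # Simulated readings as the line drifts under the left sensor.
    for sensors in [(False, False), (True, False), (True, True), (False, True)]:
        print(sensors, "->", motor_speeds(*sensors))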
IMHO, the most interesting (low-cost) competitions are maze-solving robots. The international competition rules require the robot to explore a walled maze, usually using infrared sensors, and calculate its optimal route. The challenges include keeping track of the current position to near-millimetre accuracy, dealing with the real world's unpredictably noisy environment, and trading off straight-line speed against shortest-distance cornering.
All that in 16 KB of program memory and 1 KB of RAM, with real-time interrupt handling (as many as 100K interrupts/second for some motor systems), sensor sampling, motor speed control, and maze solving, makes for an interesting programming challenge. (You might make it 'easy' with 32 KB of program memory and 2 KB of RAM :-)
I'm working on a 'constrained' robot challenge (based on Arduino) so that robot performance is mainly about programming rather than having a big budget.
Start small and build up to something more complex. Control servos. Blink LEDs. Debounce inputs. Read analog sensors. Display text on an LCD. Then put it together.
Despite the name, I like the "Evil Genius" book for PIC microcontrollers because of the small, easily digestible projects that tend to build on one another. It is, of course, aimed at PIC programmers rather than the Arduino, but the material covered will be useful no matter what you're developing on.
I know Arduino is trendy right now, but I also like the Teensy++ development board because of its low price-point ($24), breadboard-compatible PCB, relatively high pin count, Linux development environment, USB connectivity, and not needing a programmer. Worth considering for smaller projects.
If you come up with something cool, let me know. I need an excuse to do something fun :)
Bicycle-related ideas:
theft alarm (perhaps with radio link to a base station which is connected to a PC by Ethernet)
fancy trip computer (with reed switch or opto sensor on wheel)
integrate with a GPS telematics unit (trip logging) with Ethernet/USB download of logged data to PC. Also has an interesting PC programming component--integrate with Google Maps.
Other ideas:
Clock with automatic time sync from:
GPS receiver
FM radio signal with embedded RDS data with CT code
Digital radio (DAB+)
Mobile phone tower (would it require a subscription and SIM card for this receive-only operation?)
NTP server via:
Ethernet
WiFi
ZigBee (with a ZigBee coordinator that gets its time from e.g. Ethernet or GPS)
Mains electricity smart meter via ZigBee (I'm interested now that smart meters are being introduced in Victoria, Australia; not sure if the smart meters broadcast the time info though, and whether it requires authentication)
Metronome
Instrument tuner
This reverse-geocache puzzle box was an awesome Arduino project. You could take this to the next step, e.g. have a reverse-geocache box that gives out a clue only at a specific location, and then using physical clues found at that location coupled with the next clue from the box, determine where to go for the next step.
You could do one of the firefighting robot competitions. We built a robot in university for my bachelor's final project, but didn't have time to enter the competition. Plus the robot needed some polish anyway... :)
Video here.
Mind you, this was done with a Motorola HC12 and a C compiler, and most components outside the microcontroller board were made from scratch, so it took longer than it should have. It should be much easier with prefab components.
Path finding/obstacle navigation is typically a good project to start with. If you want something practical, take a look at how iRobot vacuums the floor and come up with a better scheme.
It depends on your background and whether you want practical or cool. On the practical side, a remote control could be a simple starting point. It's got buttons and lights but isn't too demanding.
For a cool project maybe a Simon-style memory game or anything with lights & noises (thinking theremin-style).
I don't have many suggestions, beyond perhaps something like a line-follower robot, but I can help you with some links for inspiration:
Arduino tutorials
Top 40 Arduino Projects of the Web
20 Unbelievable Arduino Projects
I'm currently developing plans to automate my 30 year old model train layout.
A POV device could be fun to build (just Google 'POV Arduino'). POV means persistence of vision.