Camera with AI recognition

I need to build a device with AI object recognition that works 24 hours a day.
Writing the code is not the problem, but I don't know whether there are any existing products (camera + main unit) on the market that I can buy (around $25) and write the code for, or whether I have to build it somehow.
When a given object is recognised, an alarm should be activated.
Do you have any idea which direction I should go in? Or do you know an existing product I can buy?
Thanks in advance
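For what it's worth, the software side is a small loop regardless of the hardware you end up with. A minimal sketch, assuming a Linux board (e.g. a Raspberry Pi) with a USB camera and OpenCV installed; the bundled Haar face cascade stands in for whatever object you actually want to detect, and trigger_alarm() is a hypothetical placeholder:
import cv2

# OpenCV's bundled Haar face cascade stands in for "the given object";
# swap in your own detector or a trained DNN model.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # first attached camera

def trigger_alarm():
    # Hypothetical placeholder: on a Raspberry Pi this could drive a
    # GPIO pin wired to a buzzer or a siren relay.
    print("ALARM: target object detected")

while True:
    ok, frame = cap.read()
    if not ok:
        continue  # camera hiccup; keep the 24/7 loop alive
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0:
        trigger_alarm()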

Related

Unreal 4 - Vehicle from scratch - Vehicle does not move

Can anyone help me with a basic UE4 vehicle project?
(I can't get any help anywhere on the Unreal forums.)
Here is the link to the zipped project (~1 MB):
https://answers.unrealengine.com/storage/attachments/221022-blenderimportvehicle.zip
No matter what I do, I can't get the car moving.
I enabled the inputs in the car BP, and in the project BP I've set the car pawn to be automatically possessed... still nothing.
Please help me on this one; it's my first shot with Unreal, and so far this AAA engine seems really buggy (I use v4.18).
thanks
OK, so I am a bit disappointed by the general attitude; if one considers this a bad question, just say so.
I found a solution anyway:
my scales were not right in Blender. I had to set the units to cm, and then it worked fine.
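For anyone hitting the same wall: Unreal works in centimetres, so the Blender scene units have to match before export. A hedged sketch of the unit change via Blender's Python console (property names from the bpy API; the same change can also be made in the Scene > Units panel):
import bpy

# Unreal expects centimetres; make 1 Blender unit = 1 cm before exporting.
scene = bpy.context.scene
scene.unit_settings.system = 'METRIC'
scene.unit_settings.scale_length = 0.01  # scale relative to metres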

How to make Pepper robot move randomly and then go to its charging station

Intro:
I have created an application that works well. The problem is that my Pepper robot runs this application while standing in one place. I managed to get it moving in intervals with ALNavigation.explore(), but that does not seem to be the smoothest way, since it mostly circles around itself and then only moves a little. Also, when Pepper gets below 15% battery, I want it to go find its charging station. I did that successfully in autonomous life, but it does not work while my application is open. I added ALRecharge.goToStation() to my application to fix this, but sometimes it works and sometimes it doesn't.
Questions:
1) How do I make Pepper smoothly "walk" around the room and then stop when someone speaks to it?
2) How do I add the Recharger app inside my application so they work together, or should I implement that myself?
3) How do I make sure Pepper finds the charging station even if it cannot see it from where it is standing?
Does anyone have examples of this, where they made Pepper "live" in a room and also used the Pepper charging station?
Thanks
When you ask your Pepper to go and recharge, the charging station has to be in view (i.e. roughly less than 3 metres away).
If not, he won't find it.
What I would suggest is to use the map created in the background during the ALNavigation exploration to send Pepper near his charging station; then you can start the ALRecharge.goToStation() method.
So the easiest way is to turn Pepper on while he is on his charger (or just restart NAOqi): after exploring, you only have to send him to world position (0, 0, 0) and then ask him to recharge.
If you don't want to use navigation to move, you could also use the WorldRobotPosition to send him back to position (0, 0, 0) manually.
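A minimal sketch of that flow with the Python qi SDK (the IP is a placeholder, and it assumes exploration was started at the charger so the map origin is the station):
import qi

session = qi.Session()
session.connect("tcp://<pepper-ip>:9559")  # placeholder address

navigation = session.service("ALNavigation")
recharge = session.service("ALRecharge")

# Map frame (0, 0) is wherever exploration started -- ideally the charger.
navigation.navigateToInMap([0.0, 0.0, 0.0])

# With the station now in view (within ~3 m), let ALRecharge dock.
recharge.goToStation()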
Alexandre's solution is a good one.
If you create a map through the explore method in ALNavigation, you could also feed random in-map targets to the navigateToInMap method, in order to navigate around quite smoothly.
You can then decide to stop the navigation when you detect someone, with ALFaceDetection or ALPeoplePerception.
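Something like this, sketched with the Python qi SDK (placeholder IP; the target range and the ALMemory key check are assumptions to illustrate the idea):
import random
import qi

session = qi.Session()
session.connect("tcp://<pepper-ip>:9559")  # placeholder address

navigation = session.service("ALNavigation")
memory = session.service("ALMemory")
people = session.service("ALPeoplePerception")
people.subscribe("Wander")  # start the people-detection extractor

def someone_visible():
    # ALPeoplePerception publishes the currently visible people in ALMemory.
    return bool(memory.getData("PeoplePerception/VisiblePeopleList"))

while not someone_visible():
    # Random target within a few metres of the map origin (range assumed).
    x = random.uniform(-2.0, 2.0)
    y = random.uniform(-2.0, 2.0)
    navigation.navigateToInMap([x, y, 0.0])  # blocks until reached or failed

people.unsubscribe("Wander")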
If you use ALNavigation, you can build a map and use it to move Pepper around:
from naoqi import ALProxy  # classic Python SDK
ALNavigation = ALProxy("ALNavigation", "<pepper-ip>", 9559)  # placeholder address
# Best to start exploring near the charging station, so that map coordinate (0, 0) is the charger
ALNavigation.explore(5.0)  # exploration radius in metres
path = ALNavigation.saveExploration()
ALNavigation.loadExploration(path)
ALNavigation.startLocalization()
OK, now you are localized.
You can get the current position of your robot with
ALNavigation.getRobotPositionInMap()
It returns an array with the robot's pose and a confidence value.
Create a file somewhere on your robot and store the coordinates there, e.g. {"charger": [0, 0]}, if you have multiple coordinates to save.
If you want to move smoothly, you can use ALNavigation.navigateToInMap(coord), but it will not be perfectly smooth.
What works better is to chain several ALMotion.moveToward(x, y, theta, configuration) calls and set the robot's velocity yourself.
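For example (a sketch, reusing the proxy pattern above; "MaxVelXY" is one of ALMotion's move-configuration keys):
from naoqi import ALProxy
ALMotion = ALProxy("ALMotion", "<pepper-ip>", 9559)  # placeholder address

# Normalized velocities: 30% forward, no strafe, no rotation,
# with the move configuration capping the translational speed.
ALMotion.moveToward(0.3, 0.0, 0.0, [["MaxVelXY", 0.3]])
# ...later, stop the base:
ALMotion.stopMove()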

Modify an .asm file with a VB.NET application at run time

I'm working on a personal project, and I decided to make a kind of theremin, but driven by code instead of radio frequencies.
I found out how to make sound with asm on this site (very helpful).
I also have a sensor that sends me a voltage depending on the distance of my hand, and I can read it with a VB.NET application (HID).
Now my question: is it possible to manipulate an .asm file at run time with VB.NET, so I can make a sound with asm depending on the value my sensor gives me?
I have been looking for information on Google, but it seems I'm not typing the right keywords.
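For what it's worth, the usual alternative to rewriting and reassembling the .asm each time is to keep the tone routine fixed and feed it a frequency parameter. The core mapping is trivial; a hedged sketch in Python (winsound is Windows-only, and read_voltage() plus the voltage range are hypothetical stand-ins for your HID code):
import winsound  # Windows-only tone generation

V_MIN, V_MAX = 0.0, 5.0   # assumed sensor output range (volts)
F_MIN, F_MAX = 220, 1760  # map it across two octaves of pitch

def read_voltage():
    return 2.5  # hypothetical stand-in for the real HID read

while True:
    v = min(max(read_voltage(), V_MIN), V_MAX)
    freq = int(F_MIN + (v - V_MIN) / (V_MAX - V_MIN) * (F_MAX - F_MIN))
    winsound.Beep(freq, 100)  # 100 ms tone at the mapped pitch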

Can someone explain what a filter chain is in GPUImage in simple words?

I realize GPUImage is well documented and there are a lot of instructions on how to use it on the main GitHub page. However, it fails to explain what a filter chain is - what is addTarget? What's missing is a simple diagram showing what needs to be added to what. Is it always GPUImageView (source?) -> add target -> [filter]? I'm sorry if this sounds daft, but I fail to follow the correct sequence given there are so many ways of using it. To me, it sounds like you're connecting it the other way round (like saying: connect the socket to the TV). Why not add the filter to the source? I'm trying to use it, but I get lost in all the addTargets. Thanks!
You can think of it as a series of inputs and outputs. Look in the GPUImage framework project to see which classes are inputs (typically filters) and which are outputs (image view, movie writer, etc.). Every target feeds the next target in the chain.
Example:
GPUImageMovie -> GPUImageSepiaFilter -> GPUImageMovieWriter
A movie is sent to the sepia filter, which performs its job; the movie with the sepia filter applied is sent to the movie writer, and the movie writer then exports a movie with the sepia filter applied.
To help visualize what's going on, any node-editor program uses this scheme. Think of calling addTarget: as drawing one of the connections between nodes.
A Google image search for "node editor" will give you plenty of images to help picture what adding targets does.
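If it helps, here is a toy model of the pattern in Python (not GPUImage's real classes, just the shape of the idea): each node pushes its output to every target that was added to it, which is why you add the filter to the source rather than the other way round.
class Node:
    # Toy stand-in for a GPUImage source/filter/output.
    def __init__(self, name):
        self.name = name
        self.targets = []

    def addTarget(self, node):  # mirrors GPUImage's naming
        self.targets.append(node)

    def newFrame(self, frame):
        frame = self.process(frame)
        for target in self.targets:
            target.newFrame(frame)  # push downstream

    def process(self, frame):
        return frame  # a real filter would transform the frame here

movie, sepia, writer = Node("movie"), Node("sepia"), Node("writer")
movie.addTarget(sepia)     # the source feeds the filter...
sepia.addTarget(writer)    # ...and the filter feeds the output
movie.newFrame("frame-1")  # one frame flows through the whole chain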

How do I create a new logic brick for blender?

I am thinking of making a new logic brick (or ten) to contribute, but I need a template or an idea of where to start. I want to make a 6DOF actuator and sensor first, which can trigger based on rotation targets or distance limits etc., and an actuator that can remove a 6DOF target or change it to a new position or object.
I am making an open-source 3D puzzle game with limited in-game ads, but I need to make a few logic bricks, for me and for the community...
There are no coding tutorials regarding BGE Game logic that I'm aware of, but here are some pointers for the code:
The game logic parts are mostly in:
https://svn.blender.org/svnroot/bf-blender/trunk/blender/source/gameengine/GameLogic/
You'll see that sensors implement the ISensor interface. Browse through a few different sensors to see how they work. Blender has NDOF device support, so NDOF events already exist (they are handled by our GHOST layer: https://svn.blender.org/svnroot/bf-blender/trunk/blender/intern/ghost/intern/GHOST_NDOFManager.h ). You could create a new manager like the mouse manager (see the GameLogic directory for the different managers).
With this information you should be able to get started. Read the existing code carefully, and you'll be able to find what you need.
You can use the Mouse actuator commit as a template for adding a new actuator.