I am thinking of contributing a new logic brick (or ten), but need a template or an idea of where to start. First I want to make a 6DOF sensor and actuator: a sensor that can trigger based on rotation targets or distance limits etc., and an actuator that can remove a 6DOF target or change it to a new position, or to a new object and position.
I am making an open source 3D puzzle game with limited in-game ads, but need to make a few logic bricks for myself and the community.
There are no coding tutorials regarding BGE Game logic that I'm aware of, but here are some pointers for the code:
The game logic parts are mostly in:
https://svn.blender.org/svnroot/bf-blender/trunk/blender/source/gameengine/GameLogic/
You'll see that sensors implement the ISensor interface. Browse through a few different sensors to see how they work. Blender has NDOF device support, so NDOF events already exist (they are handled by our GHOST layer: https://svn.blender.org/svnroot/bf-blender/trunk/blender/intern/ghost/intern/GHOST_NDOFManager.h ). You could create a new manager modeled on the mouse manager (see the GameLogic directory for the different managers).
With this information you should be able to get started. Read the existing code carefully and you'll be able to find what you need.
You can use the Mouse actuator commit as a template for adding a new actuator.
I am trying to create a synchronized USRP source block in GNU Radio, consisting of multiple B210 USRP devices. Language: C++.
From what I have found I need to:
Instantiate multiple multi_usrp_sptr objects, since each B210 requires one and multiple B210 devices cannot be addressed through a single sptr
Use external frequency and PPS sources - an option that can be selected from block or set programmatically
Synchronize re/tuning to achieve repeatable phase offset between nodes - this can be achieved using timed commands API https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD
Synchronize sample streams using the time_spec property of the issue_stream command
The problem is: how should I insert these timed commands and set the stream's time_spec in a GNU Radio block or in the gr-uhd libs?
I looked into the gr-uhd folder where the sink/source code resides and found functions that could be altered.
Unfortunately I don't know how to copy or export this library so I can make these modifications, compile it, and insert my custom blocks into GNU Radio, because gr-uhd seems to be built in and compiled when GNU Radio is installed.
I attempted copying the library and then building it, but that didn't succeed, so that's apparently not the way. Should I add my own source block via gr_modtool and insert only the commands I need?
Staying compatible with UHD and its functions would be advantageous, so that apart from adding a few lines I don't have to write the source from scratch.
Please advise.
Edit
Experimental flowchart, based on Marcus Müller's suggestion:
[Image: experimental USRP synchronization flowgraph]
The problem is: how should I insert these timed commands and set the stream's time_spec in a GNU Radio block or in the gr-uhd libs?
For a USRP sink: add tags to the streams containing dictionaries with the correct command times. The GNU Radio API docs have information on what these dictionaries need to look like. The time field is what you need to set to an appropriate value.
For a USRP source: use set_start_time on the uhd_usrp_source block, and use the same dictionaries described above to issue commands such as tuning and gain setting at a coordinated time.
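As a rough illustration (not a drop-in solution): a command's value is a dictionary mapping command names to values, plus a time entry. The sketch below builds such a dictionary as a plain Python dict; in a real flowgraph you would convert it to a PMT (e.g. with pmt.to_pmt) and attach it as a stream tag or send it to the block's command message port. The key names freq, gain, and time follow the gr-uhd command interface, but treat the exact conversion details as an assumption to verify against your GNU Radio version.

```python
def make_timed_command(center_freq_hz, gain_db, full_secs, frac_secs):
    """Build a gr-uhd style command dictionary (plain Python form).

    In a flowgraph this dict would be converted to a PMT and attached as a
    stream tag on the uhd_usrp_sink input, or passed to the block's
    command message port.
    """
    return {
        "freq": center_freq_hz,          # retune request
        "gain": gain_db,                 # gain change request
        "time": (full_secs, frac_secs),  # when to apply it (device time)
    }

cmd = make_timed_command(100e6, 30.0, 42, 0.5)
```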
I was trying to find a proper way of synchronizing the USRPs via tags.
There are a few issues I came across in this approach:
Timed commands require knowledge of the current moment in device time, which is obtained via usrp.get_time_now(). Even if I asked the USRP to report its time through tags, I would have to somehow extract it from the output (make some kind of loop with proper triggering) (source: https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD). Alternatively, everything could be planned in absolute terms rather than as offsets. I have seen an approach that resets the device's sense of time on each PPS edge (setting it to 0.0); command times within the range 0.0-1.0 s would then be acceptable, and the loop for reading the time and inserting it into commands would become redundant.
I didn't find a way to create these dicts in GR via stock blocks, to make the solution scalable, without writing a few lines of code in a textbox or writing an OOT block
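The PPS-reset idea above reduces to simple arithmetic: if the device clock is zeroed on every PPS edge, a command only needs a fractional time inside the next one-second window, with a safety margin so the command reaches the device before its execution time. A minimal sketch of that scheduling logic follows; the margin value is an arbitrary assumption.

```python
def command_time_after_pps(offset_s, margin_s=0.1):
    """Pick a command time inside the 0.0-1.0 s window following a PPS
    edge, assuming the device clock is reset to 0.0 on every PPS.

    offset_s: desired offset after the PPS edge.
    margin_s: minimum lead time so the command arrives before it must run.
    Raises ValueError if the request cannot fit in the window.
    """
    t = offset_s
    if t < margin_s:
        t = margin_s  # too early: the command might arrive after its time
    if t >= 1.0:
        raise ValueError("command time must fall within the 1 s PPS window")
    return t
```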
In the end there is so little information about which kind of solution is most appropriate (PDUs? events? are tags still relevant in GR?), and the docs are so scarce, that after some mailing I decided to add a simple class that inherits from the main top_block.py; after instantiation of the top_block it calls a few functions to synchronize the devices. This kind of solution is not the most flexible one, and the parent top_block class has to be called through the inheriting one, but it provides an easy programming interface.
Soon I will add an example of the code used in the inheriting class, just in case.
If there is any more neat, dynamic or scalable solution please let me know or point me to sources.
Intro:
I have created an application that works well. The problem is that my Pepper robot runs this application while standing in one place. I managed to get it moving at intervals with ALNavigation.explore(), but that doesn't seem to be the smoothest way, since it mostly circles around itself and then only moves a little. Also, when Pepper gets below 15% battery, I want it to go find its charging station. I did this successfully under autonomous life, but it does not work while my application is open. I added ALRecharge.goToStation() to my application to fix this, but sometimes it works and sometimes it doesn't.
Questions:
1) How to make Pepper smoothly "walk" around in the room and then stop when someone is speaking to Pepper?
2) How do I integrate the Recharge app into my application so they work together, or should I implement this myself in my application?
3) How do I make sure Pepper finds the charging station even if Pepper cannot see it from where it is standing?
Does anyone have examples of this, perhaps where they made Pepper "live" in the room and also used the Pepper charging station?
Thanks
When you ask your Pepper to go recharge, the charging station has to be in view (i.e. roughly less than 3 meters away).
If not, he won't find it.
What I would suggest is to use the map created in the background during the ALNavigation exploration to send Pepper near his charging station; then you can start the ALRecharge.goToStation() method.
So the easiest way is to turn your Pepper on while he is on his charger (or just restart NAOqi); after exploring, you then just have to ask him to go to world position (0, 0, 0) and then ask him to go to recharge.
If you don't want to use navigation to move, you could also use the WorldRobotPosition to send him manually back to position (0, 0, 0).
Alexandre's solution is a good one.
If you create a map through the explore method in ALNavigation, you could also feed random in-map targets to the navigateToInMap method, in order to navigate around quite smoothly.
You can then decide to stop the navigation when you detect someone, with ALFaceDetection or ALPeoplePerception.
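A minimal sketch of that idea, with the NAOqi calls left out: pick random targets inside the explored map's bounds, and stop wandering as soon as a person is detected. The map bounds and the person-detection flag below are stand-ins for what ALNavigation and ALPeoplePerception would actually provide.

```python
import random

def next_wander_target(x_min, x_max, y_min, y_max, person_detected):
    """Return a random (x, y) target inside the map bounds, or None when a
    person is detected and the robot should stop and interact instead.

    In a real application the bounds would come from the explored map and
    person_detected from ALPeoplePerception / ALFaceDetection events; the
    chosen target would then go to ALNavigation.navigateToInMap([x, y, 0]).
    """
    if person_detected:
        return None
    x = random.uniform(x_min, x_max)
    y = random.uniform(y_min, y_max)
    return (x, y)
```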
If you use ALNavigation, you can make a map and use it to move Pepper around:
ALNavigation.explore()
#The best is to start exploration near the charging station, so the coordinates (0,0) will be the charging station
path = ALNavigation.saveExploration()
ALNavigation.loadExploration(path)
ALNavigation.startLocalization()
Ok, now you are localized.
You can get the current position of your robot with
ALNavigation.getRobotPositionInMap()
It returns an array with the position of the robot and the confidence.
Create a file somewhere on your robot and store the coordinates in it, e.g. {"charger": [0, 0]}, if you have multiple coordinates to save.
If you want to move smoothly, you can use ALNavigation.navigateToInMap(coord), but it will not be really smooth.
What could be better is to chain multiple ALMotion.moveToward(x, y, theta, configuration) calls and set the robot's velocity yourself.
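For illustration only: moveToward expects normalized velocity fractions in [-1, 1], so a target offset in the robot frame has to be turned into clamped fractions before being passed in. Everything here except the moveToward signature itself is a made-up sketch; the gain value is an arbitrary assumption to tune on the robot.

```python
def toward_velocities(dx, dy, dtheta, gain=0.5):
    """Convert a target offset in the robot frame into normalized velocity
    fractions for ALMotion.moveToward(x, y, theta, configuration), whose
    arguments must lie in [-1, 1]. gain scales how aggressively the robot
    chases the target; clamping keeps the command in the legal range.
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    return (clamp(gain * dx), clamp(gain * dy), clamp(gain * dtheta))
```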
I've encountered a problem with my game project lately. When I load more than a certain number of textures (around 1000) at startup, I receive an error regarding memory (because I have to use 32-bit in XNA). I am self-taught and so have only basic knowledge of the "correct" ways to program a game. The project itself is getting very big, and although I try to compress images together etc., I will have to use more than 1000 textures throughout the project.
My question, then, is: how can I load textures at points in time other than at startup in XNA, using VB.NET? (I'm not using classes at all that I'm aware of, so I hope to stay away from that if possible.)
Thanks for any feedback on this!
/Christian
I cannot comment, so I'll just put my idea here. Disclaimer: I am also a self-taught developer, so the algorithm I'm using isn't proven to be the standard or anything like that. Hope it helps!
I use a single static class called GContent, which is loaded first thing when the game starts. It holds lists of all the textures, sounds and sprite fonts in the game. So, anywhere in my code I can call GContent.Texture("texture_folder\\texture_name") (and similarly for sounds and sprite fonts). Before this function loads a Texture2D, it checks its list of textures and tries to return the texture with the requested name. If it finds the right texture, it returns it from the list. If not, it uses Content.Load(textureFullPath) (by full path I don't mean "C:\Users\....") to load the texture, gives it a name (Texture2D.Name) equal to the textureFullPath parameter, adds the texture to the list, and then returns it. So the longer you play my game, the more textures get loaded, without loading all the assets at the start. You could also keep an array of strings naming all the textures used by a single level, map or main menu; that way you could easily write a function that takes a List(Of String) and loads or unloads all the textures of one map/level/menu.
So, my answer is pretty much: have a static class with lists of assets, and load/unload assets from wherever in the game you want!
Also, if my answer helped you, please accept it as the answer :)
So the answer seems a lot easier than I expected, if I'm doing it right at the moment(?).
I simply load the content needed for all "Worlds" at the beginning of the game (in Protected Overrides Sub LoadContent()) and then call Dispose() and Content.Load() depending on which World is loaded later (in any Sub I choose):
TextureName = Content.Load(Of Texture2D)("")
TextureName.Dispose()
If there isn't any problem I'm not yet aware of, this seems to do the trick and does not leave me with the memory error at startup.
Thank you Davor Mlinaric and Monset for helping me along
I realize GPUImage is well documented and there are a lot of instructions on how to use it on the main GitHub page. However, it fails to explain what a filter chain is: what is addTarget? What's missing is a simple diagram showing what needs to be added to what. Is it always GPUImageView (source?) -> addTarget -> [filter]? I'm sorry if this sounds daft, but I fail to follow the correct sequence, given there are so many ways of using it. To me it sounds like you're connecting things the other way round (like saying: connect the socket to the TV). Why not add the filter to the source? I'm trying to use it but I get lost in all the addTargets. Thanks!
You can think of it as a series of inputs and outputs. Look in the GPUImage framework project to see which classes are inputs (typically filters) and which are outputs (image views, movie writers, etc.). Each target feeds the next target in the chain.
Example:
GPUImageMovie -> GPUImageSepiaFilter -> GPUImageMovieWriter
A movie will be sent to the sepia filter that will perform its job, the movie with a sepia filter applied will be sent to the movie writer, then the movie writer will export a movie with a sepia filter applied.
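The chaining above can be mimicked in a few lines of Python (illustrative names only, not GPUImage's actual classes): each node holds a list of targets, and processed output is pushed to every target registered with add_target, just like the source -> filter -> writer chain described above.

```python
class Node:
    """Minimal processing node: push transformed output to every target."""

    def __init__(self, transform=lambda frame: frame):
        self.transform = transform
        self.targets = []
        self.received = []            # what reached this node (for sinks)

    def add_target(self, target):     # analogous to GPUImage's addTarget:
        self.targets.append(target)

    def process(self, frame):
        out = self.transform(frame)
        self.received.append(out)
        for t in self.targets:        # push downstream: source -> ... -> sink
            t.process(out)

# movie -> sepia filter -> movie writer
movie = Node()
sepia = Node(lambda frame: frame + "+sepia")
writer = Node()
movie.add_target(sepia)
sepia.add_target(writer)
movie.process("frame1")
```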
To help visualize what's going on, think of a node editor program: these typically use the same scheme, and calling addTarget: creates one of the connections between nodes.
A Google image search for "node editor" will give you plenty of images to help picture what adding targets does.
I am trying to flash the very first U-Boot binary (uboot.bin) into the blank NOR flash of a brand new board, which has a Marvell Armada 370 SoC (ARM), using Tera Term (XMODEM/YMODEM/ZMODEM).
When I compile U-Boot, I get two binaries: uboot-uart.bin and uboot.bin.
What is the difference between two binaries?
I have been instructed to make some dip switch changes and then load uboot-uart.bin first into the prototype board.
From the manual I understand that the DIP switch setting adds "Boot from UART" to the boot source list.
I am new to embedded development and want to learn more about this from a U-Boot perspective. Where can I learn about this?
I would also like to know what these XMODEM, YMODEM and ZMODEM things are.
And I would also like to learn how to customize U-Boot for a custom board using the Marvell Armada 370 SoC (ARM).
I would be happy if someone can point to good resources.
XMODEM itself is quite a simple protocol, meant to send files over a serial link; it is explained in detail here.
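To make the protocol concrete, here is a sketch of how a single XMODEM packet (the classic checksum variant) is framed: a SOH byte, the block number and its complement, 128 data bytes (conventionally padded with 0x1A), and a one-byte arithmetic checksum. XMODEM-CRC replaces the final byte with a 16-bit CRC; this sketch shows only the checksum flavour.

```python
SOH = 0x01  # start-of-header byte for 128-byte XMODEM packets

def xmodem_packet(block_num, data):
    """Frame one XMODEM (checksum variant) packet.

    block_num: 1-based block counter, wraps at 256.
    data: up to 128 payload bytes; shorter blocks are padded with 0x1A.
    """
    if len(data) > 128:
        raise ValueError("XMODEM data blocks carry at most 128 bytes")
    payload = bytes(data) + b"\x1a" * (128 - len(data))
    blk = block_num & 0xFF
    checksum = sum(payload) & 0xFF            # simple arithmetic checksum
    return bytes([SOH, blk, 0xFF - blk]) + payload + bytes([checksum])
```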
Most Marvell ARM chips from the last couple of years offer the possibility to upload a binary via UART using the XMODEM protocol. There are two ways to do that:
By sending a special sequence to the chip during boot-up (which can be done without any changes to the bootstrap options).
By setting up the bootstrap options accordingly (via DIP switches in your case).
In both cases the chip will then initiate an XMODEM download. Tera Term should have an option to upload files via the XMODEM protocol; IIRC it is available under File/Transfer/XModem/Send.
Now just send your uboot-uart.bin file to the Armada 370 (which will take some time). The SoC will then boot the file just as if it had been loaded from NAND or any other source.
The only difference between your uboot-uart.bin and uboot.bin is most probably the special header that has to be put in front of the actual U-Boot binary: it contains the boot device type the image is meant for, the address in memory where the image should be loaded, and a lot of board-specific settings. The exact structure and content are usually explained in the excellent datasheets from Marvell.
For customizing U-Boot I can only suggest digging into the code provided by Marvell and changing it to match your own board. You'll find the board-specific files under boards/Marvell.