What is a sensor in the context of the Blender game engine?

What's a sensor in the context of the Blender game engine?
Is it something that connects an input to an output?
Is it something that listens for various events?
Is it something that controls an object or part of the scene?
Or all three of them?
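For context: in the BGE's logic-brick model, a sensor is the part that listens for an event (a key press, a collision, a timer tick, ...) and pulses the controllers wired to it; the controller then decides what to do, and an actuator acts on the object. A minimal sketch of a Python controller polling a sensor, assuming a Keyboard sensor named "Keyboard" is wired to it in the Logic Editor:

```python
import bge

def main():
    # This controller runs when the wired sensor pulses; the sensor itself
    # only listens for the event, it does not act on the object.
    cont = bge.logic.getCurrentController()
    sensor = cont.sensors["Keyboard"]    # name assigned in the Logic Editor
    if sensor.positive:                  # True while the event is firing
        cont.owner.worldPosition.x += 0.1

main()
```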

Related

In UE5, how exactly can I create a cinematic camera which follows a vehicle with view from driver seat?

How exactly can I drive a vehicle in UE5 and have a camera in the driver view/seat?
I'm attempting to use the Unreal Engine 5 Sample city demo, specifically the vehicle test map.
From what I understand, this needs a cinematic camera that would be offset in some way? What are the steps required to get a camera set up to track a vehicle, and to test the above scenario? I could use SampleCity, or even just start with a basic template if there are instructions for getting a camera set up that way.
In play mode (press the green play button), your actor just gets in and drives the car.
To animate it, go through Sequencer; it will play, but to record it you need Take Recorder active. Add a CineCamera and position it at the driver's POV. Either:
1. add the cam in the Blueprint and check 'Expose to Cinematics', or
2. add the cam to the scene, then add it to Sequencer and animate it via its Transform to move with the car, setting movement from point A to point B on the Sequencer timeline. You can also move the cam along a spline or rail track etc. for more complex shot movement. (A scripted version of the attach-to-vehicle setup is sketched below.)
That's the process, just google for the details.
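If you'd rather script that setup than click through the editor, here is a rough sketch using UE5's editor Python scripting. Assumptions: the Python Editor Script Plugin is enabled, the vehicle actor is currently selected in the level, and the driver-seat offset values are placeholders to tune for your vehicle mesh.

```python
import unreal

# Grab the vehicle: assumed to be the currently selected actor in the level.
vehicle = unreal.EditorLevelLibrary.get_selected_level_actors()[0]

# Spawn a CineCameraActor and snap it onto the vehicle's root.
camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor, unreal.Vector(0, 0, 0))
camera.attach_to_actor(
    vehicle,
    "",                                      # no socket, attach to root
    unreal.AttachmentRule.SNAP_TO_TARGET,    # location rule
    unreal.AttachmentRule.SNAP_TO_TARGET,    # rotation rule
    unreal.AttachmentRule.KEEP_WORLD,        # scale rule
    False)

# Nudge the camera into the driver's seat; these offsets are guesses
# and depend on the vehicle mesh.
camera.set_actor_relative_location(unreal.Vector(40.0, -35.0, 120.0),
                                   False, False)
```

Once the camera rides the vehicle, you can still add it to Sequencer and record the drive with Take Recorder exactly as above.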

How to display the screen of one player to all other players in Godot (GDScript)?

I'm using the Godot engine to develop a multiplayer LAN WiFi game. At some point the game will give a player a task to solve; the task is a mini-game that has some random aspects. One player should control and solve this task while the other players will just be watching and should not be able to control anything. So I want to know: how can I display exactly what's happening on that player's screen to the rest of the players?
What I would recommend is to record that player's game screen and just stream the recording to the other players live; this is going to take some bandwidth, though (https://github.com/henriquelalves/GodotRecorder).
Also, when sending a screen recording, each frame is just a matrix array, or a PoolByteArray in Godot, I think. A generic sketch of that send loop is below.
Another way is to send the player's movement and location and set the spectators' cameras to that exact location. You could also just use two cameras and a split screen to see the current player and the other player.
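To make the bandwidth trade-off concrete, here is an engine-agnostic Python sketch of the streaming idea: grab the controlling player's frame each tick, compress it, length-prefix it, and push it to every spectator's socket. In Godot the frame bytes would come from the viewport texture; the `grab_frame` placeholder below stands in for that.

```python
import socket
import zlib

FRAME_W, FRAME_H = 320, 180    # stream at low resolution to save bandwidth

def grab_frame() -> bytes:
    # Placeholder: in the real game this would be the viewport's RGB pixel
    # buffer (in Godot, roughly the data of get_viewport().get_texture()).
    return bytes(FRAME_W * FRAME_H * 3)

def send_frame(spectator: socket.socket) -> None:
    raw = grab_frame()
    packet = zlib.compress(raw)    # compress before sending
    # Length-prefix the packet so the receiver knows where each frame ends.
    spectator.sendall(len(packet).to_bytes(4, "big") + packet)
```

Even compressed, a frame stream is far heavier than the alternative mentioned above of replicating the player's position and re-rendering the scene on each spectator's machine.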

Implementing a 3D Minimap for VR in Unreal Engine

I am working on a VR tower defense game. I want to place my towers on a small map of the field and at the same time show the field/units/towers on the small map, in 3D. Like this:
http://halo.bungie.net/images/games/Halo3ODST/imagery/screenshots/H3ODST_PreparetoDropCinematic.jpg
The map would be something like a small clone of the field. Is there a way to do this with a camera etc., so that my minimap is just a re-render/clone of the field?
Sorry if this is the wrong place, but the Unreal Engine Forum is not working at the moment.
I don't know of a simple camera projection to 3D (the camera-based map approach is about scene capture onto 2D textures).
You can do a fake/tricky camera implementation, though: just show a normal camera projection and put it into the world, then let the actual camera position depend on where the user's location is in relation to the map. You can cover this up with particle effects so it doesn't look like some TV that is following you...
Depending on the complexity of your game and how you like the solution above, it might be better to implement a calculated model: keep miniature models of everything you want to show on the map, and then let a map class project every "object that is shown on map", in its miniature version, onto the map object (see the sketch below).
Of course this requires tons of work compared to just setting up a camera, but you get a lot better control over how the map looks and what it can do (you could add functionality, control options, special views, etc.).
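As a rough illustration of that calculated-model approach (all names here are invented for the sketch), the core of the map class is just an offset-and-scale mapping from field space into the map's local space, applied to every tracked actor each frame:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

class MiniMap:
    """Projects world-space positions onto a tabletop-scale map."""

    def __init__(self, field_origin: Vec3, scale: float):
        self.field_origin = field_origin  # world-space corner of the field
        self.scale = scale                # e.g. 0.01 for a 1:100 miniature

    def to_map_space(self, world_pos: Vec3) -> Vec3:
        # Offset into the field, then shrink onto the map.
        return Vec3(
            (world_pos.x - self.field_origin.x) * self.scale,
            (world_pos.y - self.field_origin.y) * self.scale,
            (world_pos.z - self.field_origin.z) * self.scale,
        )
```

Each frame you would place every miniature proxy at the map object's position plus `to_map_space(real_actor_position)`, which is what gives you full control over what the map shows.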

LCD panel interfacing with beaglebone Black

I am trying to interface a cheap LCD panel to the BBB.
So basically I am making my own LCD7 cape, but without the EEPROM & I2C stuff.
So far I have successfully written a device tree overlay, loaded it, and fried an LCD panel... well, without any smoke.
The problem is, after checking the LCD7 cape made by CircuitCo, I noticed this IC between the Beagle and the LCD:
74AVC32T245
I don't really understand why it's there.
Here is the open-source design of the LCD7 cape; the transceiver is on page 21:
http://www.openhacks.com/uploadsproductos/beaglebone-lcd7-reva2-srm.pdf
Any help regarding how to interface LCD panels is very much appreciated.
On page 20 of that document, section 5.2.2 (Non-Inverting Bus Transceiver) explains everything. The chip is meant for voltage level translation, in case the LCD and the MCU operate at different levels. But on the BeagleBone LCD7 cape no translation is required, so it's just a buffer; I don't think it should matter in the code implementation. The document does say "its two power rails are both 3.3V", so you should observe that.

How to map kinect skeleton data to a model?

I have set up a Kinect device and written a simple program that reads the stream into a QImage using OpenNI 2.0. I have set up skeleton tracking with NiTE 2.0, so I have access to the coordinates of all 15 joints. I have also set up a simple scene using SceniX. The hand coordinates provided by the skeleton tracking are being used to draw 2 boxes to represent the hands.
I would like to bind the whole skeleton to a (rigged) model, and can't seem to find any good tutorials. Anyone have any idea how I should proceed?
Depending on your requirements, you could look at something like this for the Unity engine: https://www.assetstore.unity3d.com/en/#!/content/10693
There is also a plugin for the Unreal 4 engine called Kinect 4 Unreal, from Opaque Multimedia.
But if you have to write it all by hand yourself, I have done something similar using OpenGL.
I used Assimp (http://assimp.sourceforge.net/) to be able to load animated Collada models, and OpenNI with NiTE for skeletal tracking. I then took the rotation data from the NiTE skeleton and applied it to the corresponding bones of my rigged mesh, overwriting the rotation values of the animation. Don't use positional data: it will stretch your bones and distort the mesh.
There are many sources of free 3D models, like TF3DM.com. I myself used a custom rig for my models so they would suit my code, so you might look into using Blender and how to rig a model.
Also remember that the NiTE skeleton has no joint for the pelvis, and that NiTE joints don't inherit their parents' rotation, contrary to the bones in a rigged model. A sketch of the retargeting loop is below.
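In pseudocode-ish Python, the retargeting loop described above might look like the sketch below. The joint-to-bone table and the types are illustrative, not any real NiTE or engine API; a real implementation also has to convert NiTE's orientations (which don't inherit the parent's rotation) into each bone's parent-relative space.

```python
# Hypothetical mapping from NiTE joint names to the rig's bone names.
NITE_TO_RIG = {
    "torso": "spine",
    "left_shoulder": "upper_arm.L",
    "left_elbow": "forearm.L",
    # ... one entry per tracked joint (15 in NiTE, none for the pelvis)
}

def apply_skeleton(frame_orientations, rig_bones):
    """frame_orientations: joint name -> quaternion from the tracker.
    rig_bones: bone name -> bone object with a writable .rotation."""
    for joint, bone_name in NITE_TO_RIG.items():
        if joint in frame_orientations:
            # Overwrite the animation's rotation with the tracked one.
            # Do NOT write joint positions: that stretches the bones
            # and distorts the mesh.
            rig_bones[bone_name].rotation = frame_orientations[joint]
```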
I hope this gives you something to go on.
You can try DigitalRune; they have examples of binding a rigged model to joints, and they have documented some of them. Try http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/155/Research-Augmented-Reality-with-Microsoft-Kinect.aspx
Also, you would need to know how to animate a model in Blender and export it to XNA or to your working graphics framework. E.g.: http://www.codeproject.com/Articles/230540/Animating-single-bones-in-a-Blender-3D-model-with#SkinningSampleProject132