LCD panel interfacing with BeagleBone Black - embedded

I am trying to interface a cheap LCD panel to the BBB.
Basically I am making my own LCD7 cape, but without the EEPROM and I2C stuff.
So far I have successfully written a device tree overlay, loaded it, and fried an LCD panel... well, without any smoke.
The problem is that after checking the LCD7 cape made by CircuitCo I noticed this IC between the BeagleBone and the LCD:
74AVC32T245
I don't really understand why it's there.
Here is the open-source design of the LCD7 cape; the transceiver is on page 21:
http://www.openhacks.com/uploadsproductos/beaglebone-lcd7-reva2-srm.pdf
Any help regarding how to interface LCD panels is very much appreciated.

On page 20 of that document, section 5.2.2 (Non-Inverting Bus Transceiver) explains everything. The chip is there for voltage level translation, in case the LCD and the MCU operate at different levels. But on the BeagleBone LCD7 cape no translation is required, so it is just acting as a buffer, and I don't think it should matter in the code implementation. The document does say "its two power rails are both 3.3V", so you should observe that.

Related

STM32 External LED blink

I am trying to blink an external LED using STM32CubeIDE and Proteus.
while (1)
{
    HAL_GPIO_TogglePin(LED_GPIO_Port, LED_Pin);
    HAL_Delay(100);
}
The LED doesn't blink
I am assuming that you have configured the pin correctly as an output with no pull-up or pull-down resistor. In that case you will need to terminate the LED to ground instead of 3.3V.
When there is no pull-up or pull-down resistor, the pin is neither high nor low; it is in a "Z-state" (high impedance). So when you toggle the pin from low to high there is no potential difference between the pin and the LED, so no current flows; and when the pin goes from high to low, the diode property of the LED (it only allows current to flow in one direction) doesn't let current flow back towards the board.
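For reference, here is a minimal sketch of how the pin would typically be configured as a push-pull output with the HAL. LED_GPIO_Port and LED_Pin are the CubeMX-style names from the question; the clock-enable macro assumes the LED sits on port C, so adapt it to your wiring:

/* Minimal sketch: configure LED_Pin as a push-pull output, no pull-up/pull-down.
 * Assumes a CubeMX-generated project where LED_Pin and LED_GPIO_Port are defined
 * in main.h and the LED is on port C - change the clock macro for another port. */
GPIO_InitTypeDef GPIO_InitStruct = {0};

__HAL_RCC_GPIOC_CLK_ENABLE();                 /* enable the GPIO port clock first */

GPIO_InitStruct.Pin   = LED_Pin;
GPIO_InitStruct.Mode  = GPIO_MODE_OUTPUT_PP;  /* push-pull output */
GPIO_InitStruct.Pull  = GPIO_NOPULL;          /* no internal pull resistor */
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
HAL_GPIO_Init(LED_GPIO_Port, &GPIO_InitStruct);

With the LED's anode on the pin through a series resistor and its cathode to ground, the LED lights when the pin is high; wired the other way round (cathode to the pin, anode to 3.3V) the logic is simply inverted.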
The problem is solved.
First, the polarity of the LED was wrong.
Second, the Blue Pill library in Proteus doesn't support the STM32F103C8 that I selected in STM32CubeIDE; STM32F103C6 should be selected instead.

Make your own mouse driver

I have a mouse from Speedlink that is able to do a lot of things, like changing the colours of its LEDs, but I can only do those things with the software provided by Speedlink.
Is it possible to write your own software that controls the LED lights of the mouse?
Yes, but you would need the hardware specifications to know what has to be sent to the mouse for it to accept the commands you're looking for. Usually these things are not published or readily accessible.
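If you do get hold of (or reverse-engineer, e.g. by sniffing the vendor tool's USB traffic with Wireshark/usbmon) the report format, the commands are usually sent as HID output or feature reports. Below is a rough sketch using the hidapi library; the vendor/product IDs and the report bytes are purely hypothetical placeholders, not Speedlink's actual protocol:

/* Rough sketch with hidapi (https://github.com/libusb/hidapi).
 * The VID/PID and the report contents are hypothetical placeholders;
 * the real values have to come from the mouse's (reverse-engineered) protocol. */
#include <hidapi/hidapi.h>
#include <stdio.h>

int main(void)
{
    if (hid_init() != 0)
        return 1;

    hid_device *dev = hid_open(0x1234, 0x5678, NULL);   /* placeholder VID/PID */
    if (!dev) {
        fprintf(stderr, "mouse not found\n");
        return 1;
    }

    /* hypothetical feature report: report ID 0x07 followed by R, G, B values */
    unsigned char report[] = { 0x07, 0xFF, 0x00, 0x80 };
    if (hid_send_feature_report(dev, report, sizeof(report)) < 0)
        fprintf(stderr, "failed to send report\n");

    hid_close(dev);
    hid_exit();
    return 0;
}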
I bought two Microsoft Basic Optical mice, identical to one which performed the functions I needed really well. The first one would grab and flip the grid, with the object staying in one position within the grid, with a right click of the mouse. The Blender 2.79 3D modelling app is what I am using the mice for. I plugged the two new mice into two computers to try them out; they would NOT do the grid grabbing and flipping, though they would perform the other functions. The grid grabbing function is important so that you can move the scene and inspect the model, as if you were walking around a solid object in the real world.

Interfacing Reed switch with beaglebone Black

I have a BeagleBone Black and a reed switch. Linux 14.04 is running on the BeagleBone Black.
I want to interface this reed switch with the BeagleBone Black so that I can read its status from a C program. I'm a newbie to hardware and don't know how to interface it. Does anyone have an idea or suggestion?
Basically, I want to use this to detect whether a door is closed or open. :)
Configure a GPIO pin as input.
You will need a hardware interface circuit between the GPIO pin and the reed switch (typically just a pull-up or pull-down resistor, with the switch pulling the pin to the opposite level).
Poll the GPIO pin for high or low.
Perform the intended action based on the reed switch state (see the sketch below).
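As a rough illustration of the polling step, here is a minimal sketch that reads the switch through the sysfs GPIO interface. GPIO 60 (header pin P9_12) is only an example; use whichever pin you actually wire, and export and configure it first:

/* Minimal sketch: poll a reed switch on a GPIO via sysfs.
 * Assumes the pin was prepared beforehand, e.g.:
 *   echo 60 > /sys/class/gpio/export
 *   echo in > /sys/class/gpio/gpio60/direction
 * GPIO 60 corresponds to header pin P9_12 and is only an example. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    while (1) {
        FILE *fp = fopen("/sys/class/gpio/gpio60/value", "r");
        if (!fp) {
            perror("open gpio");
            return 1;
        }
        int value = fgetc(fp);
        fclose(fp);

        if (value == '0')
            printf("door closed\n");   /* which level means "closed" depends on your wiring */
        else
            printf("door open\n");

        sleep(1);                      /* simple 1 s polling interval */
    }
}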
HTH

Unity3d external camera frame rate

I am working on a live augmented reality application. So far I have worked on many AR applications for mobile devices.
Now I have to get the video signal from a Panasonic P2. The camera is a European version. I capture the signal with an AJA Io HD box, which is connected via FireWire to a Mac Pro. So far everything works great - just not in Unity.
When I start the preview in Unity, the framebuffer of the AJA ControlPanel jumps to a frame rate of 59.94 fps. I guess this is because of a preference in Unity. Because the camera is the European version, I cannot switch it to 59.94 fps or 29.47 fps.
I checked all settings in Unity, but couldn't find anything...
Is there any way to change the frame rate at which Unity captures from an external camera?
If you're polling the camera from Unity's Update() function then you will be under the influence of VSync, which limits frame processing to 60 FPS.
You can switch VSync off by going to Edit > Project Settings > Quality and setting the V Sync Count option to "Don't Sync".

Kinect: How to obtain a skeleton from back view?

Why would you ever want something like this?
I want to track a single user who is mounted above the ground in a horizontal position. The user is facing downwards to allow free movement of the legs and arms - think of swimming, for example.
I mounted the Kinect at the ceiling facing downwards, so I have a free view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (you're usually taller than you are wide).
Therefore the user is seen from the back, rotated by 90°. It is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same way, but I excluded it here because it won't allow you to change the source code and cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing the Kinect on Linux. So:
Which class generates the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z - if you know where I could find this.
EDIT: As I just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the header:
How can I get a user skeleton from a rotated back view?