Issue implementing capacitive touch on a PIC18F26K40 microcontroller using the mTouch library - embedded

I am currently working on a PIC microcontroller, the PIC18F26K40. I want to use the CVD (Capacitive Voltage Divider) technique to implement a capacitive touch button, and I am using the mTouch library from MCC (MPLAB Code Configurator) for that. I am following the Microchip documentation (links are here and here), but the PIC is not detecting the touch. Here is my main method:
void main(void)
{
    // Initialize the device
    SYSTEM_Initialize();

    INTERRUPT_GlobalInterruptEnable();
    INTERRUPT_PeripheralInterruptEnable();

    LED_TRIS = OUTPUT;

    while (1)
    {
        if (MTOUCH_Service_Mainloop())
        {
            /* Button API */
            if (MTOUCH_Button_isPressed(0))
                LED_LAT = HIGH;
            else
                LED_LAT = LOW;
        }
    }
}
I have some doubts:
What is differential CVD?
What is a driven shield? Do I need it?
Do I need to use two analog channels?
I have worked with and tested the CTMU mode of the PIC18F26K22. Is there any way to use CVD the way I used CTMU?
If you have any solutions, with or without the library, please let me know.
I am attaching some screenshots of my MCC configuration. Please go through them.
Help needed!
Note:
MPLAB X IDE: v5.50
Analog pin used for sensing: RB0
Programmer: PICkit 3

I came across your post whilst researching another issue with PIC mTouch on the same MCU you're using (or rather, the low-voltage version, the PIC18LF26K40). I am also using MPLAB X v5.50 with the mTouch plugin. Just wanted you to know that I was able to set up a single touch button with no problem on this chip; it actually compiled and worked on the first try. So you're on the right track!
You do not need to use the "driven shield" that is presented as an mTouch output (it is for improving signal integrity later, when you're concerned about such things; see the various app notes on this). I used the "CS" (capacitive sensor) output only and it works fine.
This video helped me get started: https://www.youtube.com/watch?v=CCW3g9RqpZk
Hope this helps a bit.

Related

HoloLens application stopping at splash screen

I'm working on a Unity project for HoloLens that uses the camera to capture pictures, sends them to a photo recognition API and displays the result. The project works perfectly fine in Unity, but not on the emulator/HoloLens.
Unfortunately, I wrote a lot of code at once, so I don't know at what point this problem started. The problem shows up after building the project and running it on the HoloLens/emulator in Debug mode. On the HoloLens, I see the starting window (the one you see after you open any application). After I place it, I see End showing splash screen. in the Output window in Visual Studio, and it just doesn't go any further (but it doesn't freeze either, it just does nothing).
I don't know where it's coming from, since no exceptions are thrown, but I suspect the camera is the cause. Earlier, I had to comment out this line of code:
transform.position = Camera.main.ScreenToWorldPoint(new Vector3((CameraManager.Resolution.width * .5f), (CameraManager.Resolution.height * .5f), 10));
because the function ScreenToWorldPoint was throwing the following exception:
Screen position out of view frustum (screen pos 0.000000, 0.000000, 10.000000) (Camera rect 0 0 0 0)
As you can see, it says that the Camera rect's size is 0. I even tried directly logging the camera's dimensions to make sure (Debug.Log(Camera.main.pixelWidth + ", " + Camera.main.pixelHeight)), and sure enough, they were (0, 0) on the HoloLens/emulator.
I made sure that the webcam is supported and that my camera settings are all set, but that didn't help either.
So I'm not sure if that's the cause of the problem or simply a symptom. And I can't start anywhere, since neither the Output nor the Error window shows anything wrong. Any help or suggestions would be greatly appreciated.
Thanks for reading!
Edit: Here's the entire output log from beginning to end.
Edit 2: I don't know if this is significant, but when I paused execution (in Visual Studio), it always seemed to be at Build/ProjectName/App.cs => Line 78:
[MTAThread]
static void Main(string[] args)
{
    var app = new App();
    CoreApplication.Run(app); // <===== Here
}
You might want to check any of your Start() methods. You might have some code that is CPU intensive; even if it runs smoothly in Unity, that doesn't mean it will run easily on HoloLens, since its CPU is not powerful.
Also, to avoid any camera problems, make sure to use the Camera prefab from this repository:
https://github.com/Microsoft/MixedRealityToolkit-Unity
Those are just some thoughts, hope it helps!
Turns out I didn't enable "Virtual Reality Supported" under Other Settings in Player Settings. It's really dumb, but I hope this helps someone.

How to fake a keyboard event to a widget in Qt5

I have made a custom virtual keyboard widget for my kiosk application, and now comes the time when I want it to produce fake keyboard events and feed them to a QLineEdit of my choice.
I do the following:
// target is the QWidget to receive the events
// k is the Qt::Key (keycode) I want to send (testing with an 'A')
Qt::Key k = Qt::Key_A;
if (0 != target) {
    // According to the docs these will be freed once posted
    QKeyEvent *press   = new QKeyEvent(QKeyEvent::KeyPress,   (int)k, 0);
    QKeyEvent *release = new QKeyEvent(QKeyEvent::KeyRelease, (int)k, 0);
    // Give the target focus just to be sure it is available for input
    target->setFocus();
    // Post the events (queue them up and let the target consume them
    // when the event loop gets around to it)
    QCoreApplication::postEvent(target, press);
    QCoreApplication::postEvent(target, release);
}
I see the target widget receive focus, but no letters are typed into the input field as I would expect. What am I doing wrong? Which of my assumptions are wrong?
PS: I know that this could be solved by using an existing virtual keyboard, or at least by using the platform interface as is done in this post. In our approach we have decided to build the keyboard into the application to obtain full control over the UX and keyboard design.
Thanks!
Since no one stepped up, I will try to provide some closure.
It turns out that Qt5 comes with a library of testing facilities called testlib. It has all sorts of goodies to facilitate easy creation, management and running of unit tests for Qt applications. Among these facilities there is a set of functions for sending fake events, such as fake typing of text, mouse clicks, etc. It is quite comprehensive and covers many use cases. Since it is used internally by the Qt developers to test Qt itself, it is also production-proven code.
I simply copied what I needed from there.
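If you would rather use testlib directly instead of copying code out of it, here is a minimal sketch of what it gives you (this assumes QT += testlib in the .pro file, and lineEdit stands in for whichever widget you want to drive):
#include <QApplication>
#include <QLineEdit>
#include <QtTest/QtTest>   // QTest::keyClick / QTest::keyClicks

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QLineEdit lineEdit;   // placeholder target widget
    lineEdit.show();

    // Simulate a single key press + release delivered to the widget...
    QTest::keyClick(&lineEdit, Qt::Key_A);

    // ...or type a whole string, one key click per character.
    QTest::keyClicks(&lineEdit, "hello");

    return app.exec();
}
Unlike the hand-rolled QKeyEvent in the question, these helpers also fill in the text of the event, which (as far as I can tell) is what QLineEdit actually inserts.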

How do you prevent Nvidia's 3D Vision from kicking in? (How to call NVAPI from .NET)

I would normally post this on Nvidia's developer forums, but they are still offline after last month's hacking. If anyone knows of another good community for these types of questions, I would love to know about it.
I am developing some software designed to test human vision. Right now I have two tests written: one that presents a stereoscopic image using Nvidia's 3D Vision 2 glasses, and another that displays letters on the screen, similar to the charts you see in an eye doctor's exam room. My problem is that during the "chart test" the 3D Vision is getting triggered and causing the screen to look dim. I can enable and disable 3D Vision through the Nvidia control panel between running the different tests, but that is a less than elegant solution. I am using DX9 and Visual Basic to develop my code. In order to trigger the 3D in the stereoscopic test I am using the NV_STEREO_IMAGE_SIGNATURE method that is described here. Basically, the method involves making a backbuffer that is twice the width of the screen, plus an extra column of pixel data in the middle where you insert a special signature that tells the video card that it is a stereo image and that the left half of the backbuffer should be displayed to the left eye and the right half to the right eye. I don't do any of that in my code for the "chart test", but 3D Vision is still getting triggered and I can't figure out why. Is there a way to tell the video card to temporarily disable the 3D Vision functionality in code?
Thanks
I think I may have found the solution to my own question. Nvidia supplies a library called NVAPI that can be statically linked and contains calls such as NvAPI_Stereo_Enable and NvAPI_Stereo_Disable. Downloads and info on NVAPI can be found here. I will edit this post with an example of the actual code when I have completed the solution.
EDIT:
Because C# / Visual Basic do not allow static linking of .lib files, I had to create a Visual C++ project to wrap the NVAPI.lib file. After referencing the lib file and adding the include header files to my wrapper project, I wrote the following code.
#include "stdafx.h"
#include "nvapi.h"
public ref class NvApiWrapper
{
public:
static bool NvApiWrapper_Initialize(){
if (NvAPI_Initialize() == 0){
return true;
} else {
return false;
}
}
static bool NvApiWrapper_Stereo_Enable(){
if (NvAPI_Stereo_Enable() == 0){
return true;
} else {
return false;
}
}
static bool NvApiWrapper_Stereo_Disable(){
if (NvAPI_Stereo_Disable() == 0){
return true;
} else {
return false;
}
}
};
I was only interested in those two stereo methods, so I did not add anything else, but there are lots of other interesting methods that can be called. You must call NvAPI_Initialize() before any other calls.
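For completeness, here is a rough sketch of how the wrapper ends up being called from the managed side (shown as C++/CLI here; the same calls work from C# or Visual Basic once the wrapper assembly is referenced, and the two Run...Test() calls are just placeholders for your own test code):
if (NvApiWrapper::NvApiWrapper_Initialize())
{
    // Keep 3D Vision from kicking in during the 2D test:
    NvApiWrapper::NvApiWrapper_Stereo_Disable();
    // RunChartTest();    // placeholder for the 2D chart test

    // Re-enable stereo before the stereoscopic test:
    NvApiWrapper::NvApiWrapper_Stereo_Enable();
    // RunStereoTest();   // placeholder for the stereo test
}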

Setting sound output/input

I've been looking all over the web, but I don't know if it is possible: can a Cocoa Mac OS X app change the sound input/output device? If so, how come?
can a Cocoa Mac OS X app change the sound input/output device?
Yes, by setting the relevant Audio System Object property.
If so, how come?
Probably because the user might want to change the default input or output device from within an application, rather than having to jump over to the Sound prefpane before and after or use the Sound menu extra.
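In case a concrete example helps, here is a minimal sketch of setting that property with the HAL's C API (AudioObjectSetPropertyData on the system audio object); the helper name is hypothetical, and newDeviceID is whichever AudioDeviceID you want to make the default:
#include <CoreAudio/CoreAudio.h>

// Hypothetical helper: make the given device the default output device.
// Use kAudioHardwarePropertyDefaultInputDevice for the input side.
OSStatus setDefaultOutputDevice(AudioDeviceID newDeviceID)
{
    AudioObjectPropertyAddress address = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    return AudioObjectSetPropertyData(kAudioObjectSystemObject, &address,
                                      0, NULL,                       // no qualifier
                                      sizeof(newDeviceID), &newDeviceID);
}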
I know this is an old post, but I've been struggling these days trying to find a way of changing the sound input/output device from code, and I finally found out how to do it. In case someone else runs into the same problem, here's the answer!
There's a command-line utility called SwitchAudio-OSX (https://code.google.com/p/switchaudio-osx/) that allows you to switch the audio source from the terminal. It is open source, and you can find the latest version here: https://github.com/deweller/switchaudio-osx.
Anyway, you can use these lines to change the sound input/output device:
UInt32 propertySize = sizeof(UInt32);
AudioHardwareSetProperty(kAudioHardwarePropertyDefaultInputDevice, propertySize, &newDeviceID); // To change the input device
AudioHardwareSetProperty(kAudioHardwarePropertyDefaultOutputDevice, propertySize, &newDeviceID); // To change the output device
AudioHardwareSetProperty(kAudioHardwarePropertyDefaultSystemOutputDevice, propertySize, &newDeviceID); // To change the system output device
Where newDeviceID is an instance of AudioDeviceID and represents the ID of the device you want to select. Also, a list of all available devices can be obtained using this code:
AudioDeviceID dev_array[64];
UInt32 propertySize = sizeof(dev_array);   // size of the buffer going in, bytes actually written coming out
AudioHardwareGetProperty(kAudioHardwarePropertyDevices, &propertySize, dev_array);
int numberOfDevices = (propertySize / sizeof(AudioDeviceID));
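If it helps, here is a rough sketch (using the same legacy AudioHardware calls as above, so expect deprecation warnings on newer SDKs) of turning that list into the AudioDeviceID for a device with a given name; deviceIDForName is just a hypothetical helper name:
#include <CoreAudio/CoreAudio.h>
#include <string.h>

// Look up a device by its human-readable name, e.g. "Built-in Output".
AudioDeviceID deviceIDForName(const char *wantedName)
{
    AudioDeviceID dev_array[64];
    UInt32 propertySize = sizeof(dev_array);
    AudioHardwareGetProperty(kAudioHardwarePropertyDevices, &propertySize, dev_array);
    int numberOfDevices = (int)(propertySize / sizeof(AudioDeviceID));

    for (int i = 0; i < numberOfDevices; i++) {
        char name[256];
        UInt32 nameSize = sizeof(name);
        AudioDeviceGetProperty(dev_array[i], 0, false,
                               kAudioDevicePropertyDeviceName, &nameSize, name);
        if (strcmp(name, wantedName) == 0)
            return dev_array[i];
    }
    return kAudioDeviceUnknown;   // no device with that name
}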

What's the modern equivalent of GetNextEvent in Cocoa?

I'm porting an archaic C++/Carbon program to Objective-C and Cocoa. The current version uses asynchronous USB reads and GetNextEvent to read the data.
When I try to compile this in Objective-C, GetNextEvent isn't found because it's in the Carbon framework.
Searching Apple support yields nothing of use.
EDIT TO ADD:
OK, so what I'm trying to do is run a document scanner over USB. I have set up the USBDeviceInterface and the USBInterfaceInterface (who came up with THAT name???) and I call (*usbInterfaceInterface)->WritePipeTO() to ask the scanner to scan. I believe this works; at least the flatbed light moves across the page...
Then I try to use (*usbInterfaceInterface)->ReadPipeAsyncTO() to read the data. I give this function a callback function, USBDoneProc().
The general structure is:
StartScan()
WaitForScan()
StartScan() calls WritePipeTO() and ReadPipeAsyncTO().
WaitForScan() has this:
while (deviceActive) {
    EventRecord event;
    GetNextEvent(0, &event);
    if (gDataPtr != saveDataPtr) {   // more data, bump up the timeout
        timeoutTicks = TickCount() + 60 * 60;
        saveDataPtr = gDataPtr;
    }
    if (TickCount() > timeoutTicks) {
        deviceActive = false;
    }
}
Meanwhile, USBDoneProc() increments gDataPtr to point at the end of the data that has been read so far. It gets called several times during the asynchronous read, automatically by the callback mechanism, as far as I can tell.
If I take the GetNextEvent() call out of the WORKING code, USBDoneProc doesn't get called until the asynchronous read pipe times out.
So it looks to me like I need something to give control back to the event handler so that the USB read interrupts can actually interrupt and make USBDoneProc get called...
Does that make any sense?
thanks.
I suppose the nearest thing to a Cocoa equivalent would be -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]. But bear in mind that GetNextEvent is archaic even for Carbon. The preferred way of handling events is the "don't call us, we'll call you" scheme, where the app calls NSApplicationMain or RunApplicationEventLoop and events are dispatched to you.
EDIT to add: Does your app have a normal event loop? If so, maybe WaitForScan could start a Carbon timer and return to the event loop. Each time the timer fires, do what you did in the WaitForScan loop.
There is a USB HID library, hidapi, that works on Mac and Windows.
http://www.signal11.us/oss/hidapi/
Maybe this could be of help to you?
It works fine (I can list the connected USB devices and connect/write/read to a device);
however, if a USB device is connected/disconnected during the runtime of the application, I don't see the newly connected/disconnected devices.
See: https://github.com/signal11/hidapi/issues/14
If I add the following code to hidapi, then hidapi detects the new USB devices.
#include <Carbon/Carbon.h>

void check_apple_events() {
    printf("check_apple_events\n");
    RgnHandle cursorRgn = NULL;
    Boolean gotEvent = TRUE;
    EventRecord event;
    while (gotEvent) {
        gotEvent = WaitNextEvent(everyEvent, &event, 0L, cursorRgn);
    }
}
I need to compile this on OS X 10.5 because it uses Carbon instead of Cocoa.
I am currently looking at how to transform this to Cocoa.
You are also trying to move your code to Cocoa, right?
Let me know if you find out; I'll post it here if I get it.
Regards,
David
Have you considered throwing the whole thing out and using Image Kit's new-in-10.6 scanner support instead? Even if it's custom, writing a TWAIN driver for it might be easier (and is certainly better) than trying to twist Cocoa into a GetNextEvent shape.