I'm mainly focused on how this example uses WndProc as a friend function... I'm a little confused about how it works, and I'm just trying to figure out if and how this would work with more than one window.
http://www.uta.fi/~jl/pguibook/api2oo.html
Yes, it will work with more than one window, because it stores a pointer to the C++ object in the window data of the corresponding HWND:
Window *wPtr;
...
SetWindowLongPtr(hWnd, 0, (LONG_PTR) wPtr);
and the global WndProc then retrieves that pointer and calls the object's methods through it:
wPtr = (Window*) ::GetWindowLongPtr(hWnd, 0);
wPtr->WndProc(message, wParam, lParam);
(Note that the original code uses SetWindowLong, and hence won't work in a 64-bit program - I've changed the code above to use SetWindowLongPtr.)
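To make the multi-window case concrete, here is a minimal sketch of the same idea. It is not the tutorial's exact code: it uses a static member function instead of a global friend, and GWLP_USERDATA instead of reserved window bytes, but the routing from HWND back to the owning C++ object is the same, so any number of windows can share one window procedure.

#include <windows.h>

class Window {
public:
    bool Create(HINSTANCE hInst, const wchar_t* title) {
        RegisterWindowClass(hInst);
        // Pass 'this' as the creation parameter so WM_NCCREATE can see it.
        HWND hWnd = CreateWindowExW(0, L"SketchWindow", title, WS_OVERLAPPEDWINDOW,
                                    CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                                    nullptr, nullptr, hInst, this);
        if (hWnd) ShowWindow(hWnd, SW_SHOWNORMAL);
        return hWnd != nullptr;
    }

private:
    HWND m_hWnd = nullptr;

    // Per-object message handler: each window instance keeps its own state.
    LRESULT HandleMessage(UINT msg, WPARAM wParam, LPARAM lParam) {
        switch (msg) {
        case WM_DESTROY: PostQuitMessage(0); return 0;
        default:         return DefWindowProcW(m_hWnd, msg, wParam, lParam);
        }
    }

    // The one shared window procedure: recover the object stored for this HWND.
    static LRESULT CALLBACK StaticWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam) {
        if (msg == WM_NCCREATE) {
            auto* cs = reinterpret_cast<CREATESTRUCTW*>(lParam);
            auto* created = static_cast<Window*>(cs->lpCreateParams);
            created->m_hWnd = hWnd;
            SetWindowLongPtrW(hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(created));
        }
        auto* self = reinterpret_cast<Window*>(GetWindowLongPtrW(hWnd, GWLP_USERDATA));
        return self ? self->HandleMessage(msg, wParam, lParam)
                    : DefWindowProcW(hWnd, msg, wParam, lParam);
    }

    static void RegisterWindowClass(HINSTANCE hInst) {
        static bool registered = false;
        if (registered) return;
        WNDCLASSW wc{};
        wc.lpfnWndProc   = StaticWndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = L"SketchWindow";
        RegisterClassW(&wc);
        registered = true;
    }
};

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int) {
    Window a, b;                        // two independent windows, one WndProc
    a.Create(hInst, L"First window");
    b.Create(hInst, L"Second window");
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}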
I've noticed that when a control such as an AutoSuggestBox has a callback, the callback is executed both when a user interacts with the control and when my code changes the associated value.
For example, if I set the TextChanged property on an AutoSuggestBox, the function is called even when my code sets the Text property to an initial value.
This is causing problems in my application in the form of both bugs and unnecessary function calls. You may be wondering how the code came to be in this state -- the answer is, I don't know. The project was handed off to me from another developer and I've been tasked to fix a number of bugs.
Although I can individually hunt down all the places in the code where this happens and temporarily remove the callback, I'm wondering if there is an easier way, for example a property I can set on the control that says, "don't call the callbacks when it is the code making a change rather than the UI".
For the AutoSuggestBox.TextChanged event, the reason for the change is provided in AutoSuggestBoxTextChangedEventArgs, so you can check it and return early for programmatic changes:
private void AutoSuggestBox_TextChanged(AutoSuggestBox sender, AutoSuggestBoxTextChangedEventArgs args)
{
    if (args.Reason == AutoSuggestionBoxTextChangeReason.ProgrammaticChange)
    {
        return;
    }
    // other code
}
For other controls, you will need to handle this case by case, for example by checking a similar property on the event args where one exists, or by setting a flag while your own code updates the value.
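For controls whose event args carry no such "reason" property, one common workaround is a guard flag that your code sets while it is the one changing the value. Below is a rough sketch of that pattern in C++/WinRT (the answer above is C#, but the idea is language-agnostic). The type and member names here are made up for illustration, and it assumes the control raises its changed event synchronously when the property is set, which you should verify for the control in question.

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.UI.Xaml.Controls.h>
#include <winrt/Windows.UI.Xaml.Controls.Primitives.h>

using namespace winrt;
using namespace Windows::UI::Xaml::Controls;
using namespace Windows::UI::Xaml::Controls::Primitives;

// Hypothetical helper owning a Slider; not from the original project.
struct VolumeSection
{
    Slider m_slider{ nullptr };            // assigned from the XAML tree elsewhere
    bool m_updatingFromCode{ false };

    void SetVolumeFromCode(double value)
    {
        m_updatingFromCode = true;         // the next ValueChanged comes from code
        m_slider.Value(value);             // raises ValueChanged
        m_updatingFromCode = false;
    }

    void OnValueChanged(Windows::Foundation::IInspectable const&,
                        RangeBaseValueChangedEventArgs const&)
    {
        if (m_updatingFromCode)
            return;                        // ignore changes made by our own code
        // ... react to a genuine user change here ...
    }
};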
Best regards.
I'm trying to create a custom TitleBar for my game. I got everything else to work like I wanted, but I can't get it to minimize without using the standard buttons from the Windows frame.
I have a Game class with these in the constructor:
'Having this here or in the constructor doesn't make any difference
Public AsWindow As Windows.Forms.Form = CType(Windows.Forms.Form.FromHandle(Me.Window.Handle), Windows.Forms.Form)

Public Sub New()
    Window.AllowUserResizing = False
    Window.IsBorderless = True
End Sub

Public Sub Minimize()
    AsWindow.WindowState = System.Windows.Forms.FormWindowState.Minimized
End Sub
Calling Minimize causes a System.NullReferenceException on AsWindow, regardless of whether I assign AsWindow at runtime or during initialization.
Note that I have added System.Windows.Forms as a reference.
Things I've tried:
This is where I got the initial code for the AsWindow field.
And here I tried to send a Message to minimize the window.
The problem you're facing is that the MonoGame API (which is exactly like XNA) doesn't expose a direct way of controlling the window. This kinda makes sense, because MonoGame supports platforms that don't have a concept of a "window" so to speak.
However, it's usually possible to get access to the underlying platform-specific code and work around these issues. It's important to keep in mind, however, that this approach will be platform-specific and might break if MonoGame changes under the hood.
Last I checked, under the hood MonoGame uses SDL on the Windows desktop platform. This may not be true anymore, but I do have some code in an old project that gives the general idea. My code is in C# but it should point you in the right direction.
The first step is to import the method you want from SDL2.dll. This is not managed code, so you can't reference it like you normally would in a .NET project.
[DllImport("SDL2.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern void SDL_MaximizeWindow(IntPtr window);
Once you have access to the method you can call it by passing in the window handle.
SDL_MaximizeWindow(Window.Handle);
Obviously, this is code to maximize the window (that's the only code I had handy), but the minimize version follows the same pattern: SDL2 also exports SDL_MinimizeWindow, which you can import and call the same way.
When responding to an event in a TextBox using C++/WinRT, I need to call ScrollViewer.ChangeView(). The trouble is, nothing happens when the call executes, and I expect that is because the code is running on the wrong thread at that moment; I have read this is the cause of ChangeView() producing no visible result. It appears that the proper course is to use CoreDispatcher.RunAsync to update the scroller on the UI thread. The example code for this is provided only in C# and managed C++, however, and it is a tricky matter to figure out how this would look in plain C++. At any rate, I am not getting it. Does anyone have an example of the proper way to call a method on the UI thread in C++/WinRT? Thanks.
[UPDATE:] I have found another method that seems to work, which I will show here, though I am still interested in an answer to the above. The other method is to create an IAsyncOperation that boils down to this:
IAsyncOperation<bool> ScrollIt(h,v, zoom){
co_await m_scroll_viewer.ChangeView(h,v,zoom);
}
The documentation entry Concurrency and asynchronous operations with C++/WinRT: Programming with thread affinity in mind explains how to control which thread runs certain code. This is particularly helpful in the context of asynchronous functions.
C++/WinRT provides helpers winrt::resume_background() and winrt::resume_foreground(). co_await-ing either one switches to the respective thread (either a background thread, or the thread associated with the dispatcher of a control).
The following code illustrates the usage:
IAsyncOperation<bool> ScrollIt(double h, double v, float zoom)
{
    co_await winrt::resume_background();
    // Do compute-bound work here.

    // Switch to the foreground thread associated with m_scroll_viewer
    // (winrt::resume_foreground for a CoreDispatcher is declared in winrt/Windows.UI.Core.h).
    co_await winrt::resume_foreground(m_scroll_viewer.Dispatcher());

    // Execute GUI-related code.
    bool scrolled = m_scroll_viewer.ChangeView(h, v, zoom);

    // Optionally switch back to a background thread.

    // Return an appropriate value.
    co_return scrolled;
}
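As a usage note (my own sketch, not part of the documentation sample): an event handler can itself be written as a fire-and-forget coroutine so it can co_await ScrollIt without blocking the UI thread. The class, handler, and argument values here are placeholders.

winrt::fire_and_forget MainPage::OnSomethingChanged(
    winrt::Windows::Foundation::IInspectable const& /*sender*/,
    winrt::Windows::UI::Xaml::RoutedEventArgs const& /*args*/)
{
    // Await the IAsyncOperation<bool>; the coroutine resumes when ScrollIt finishes.
    bool scrolled = co_await ScrollIt(0.0, 10000.0, 1.0f);
    (void)scrolled;    // ChangeView's return value reports whether the scroll was accepted
}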
I'm porting an archaic C++/Carbon program to Obj-C and Cocoa. The current version uses asynchronous USB reads and GetNextEvent to read the data.
When I try to compile this in Objective C, GetNextEvent isn't found because it's in the Carbon framework.
Searching Apple support yields nothing of use.
EDIT TO ADD:
Ok, so what I'm trying to do is run a document scanner through USB. I have set up the USBDeviceInterface and the USBInterfaceInterface (who came up with THAT name???) and I call (*usbInterfaceInterface)->WritePipeTO() to ask the scanner to scan. I believe this works. At least the flatbed light moves across the page...
Then I try to use (*usbInterfaceInterface)->ReadPipeAsyncTO() to read data. I give this function a callback function, USBDoneProc().
The general structure is:
StartScan()
WaitForScan()
StartScan() calls the WritePipeTO and the ReadPipeAsyncTO
WaitForScan() has this:
while (deviceActive) {
    EventRecord event;
    GetNextEvent(0, &event);
    if (gDataPtr != saveDataPtr) {    // more data arrived: push the timeout out
        timeoutTicks = TickCount() + 60 * 60;
        saveDataPtr = gDataPtr;
    }
    if (TickCount() > timeoutTicks) {
        deviceActive = false;
    }
}
Meanwhile, USBDoneProc advances gDataPtr to the end of the data that we've read so far. It gets called several times during the asynchronous read, automatically as the callback fires, as far as I can tell.
If I take out the GetNextEvent() call in the WORKING code, USBDoneProc doesn't get called until the asynchronous read pipe times out.
So it looks to me like I need something to give control back to the event handler so that the USB read interrupts can actually interrupt and make USBDoneProc get called...
Does that make any sense?
thanks.
I suppose the nearest thing to a Cocoa equivalent would be -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]. But bear in mind that GetNextEvent is archaic even for Carbon. The preferred way of handling events is the "don't call us, we'll call you" scheme, where the app calls NSApplicationMain or RunApplicationEventLoop and events are dispatched to you.
EDIT to add: Does your app have a normal event loop? If so, maybe WaitForScan could start a Carbon timer and return to the event loop. Each time the timer fires, do what you did in the WaitForScan loop.
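Here is a rough, untested sketch of that timer idea, for what it's worth. The globals mirror the ones in the question (their exact types are assumed), and ScanFinished() is a placeholder for whatever should happen when the scan completes or times out:

#include <Carbon/Carbon.h>

extern UInt8 *gDataPtr;                 /* advanced by USBDoneProc (from the question) */
static UInt8 *gSaveDataPtr  = NULL;
static UInt32 gTimeoutTicks = 0;
static EventLoopTimerRef gScanTimer = NULL;

static void ScanFinished(void) { /* placeholder: whatever used to follow WaitForScan() */ }

static void ScanTimerProc(EventLoopTimerRef timer, void *userData)
{
    (void)userData;
    if (gDataPtr != gSaveDataPtr) {     /* more data arrived: push the timeout out */
        gTimeoutTicks = TickCount() + 60 * 60;
        gSaveDataPtr  = gDataPtr;
    }
    if (TickCount() > gTimeoutTicks) {  /* ~60 s with no new data: give up */
        RemoveEventLoopTimer(timer);
        gScanTimer = NULL;
        ScanFinished();
    }
}

static void StartWatchingScan(void)
{
    gTimeoutTicks = TickCount() + 60 * 60;
    gSaveDataPtr  = gDataPtr;
    /* Fire every 50 ms; between firings the normal event loop keeps running,
       so the asynchronous USB callback (USBDoneProc) gets a chance to run. */
    InstallEventLoopTimer(GetMainEventLoop(),
                          kEventDurationMillisecond * 50,
                          kEventDurationMillisecond * 50,
                          NewEventLoopTimerUPP(ScanTimerProc),
                          NULL,
                          &gScanTimer);
}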
There is a USB HID library, hidapi, that works on Mac and Windows.
http://www.signal11.us/oss/hidapi/
Maybe this could be of help to you?
It works fine (I can list the connected USB devices and connect/write/read to a device); however, if a USB device is connected or disconnected during the runtime of the application, I don't see the newly connected/disconnected devices.
See: https://github.com/signal11/hidapi/issues/14
If I add the following code to hidapi, then hidapi detects the new USB devices.
#include <Carbon/Carbon.h>

void check_apple_events() {
    printf("check_apple_events\n");
    RgnHandle cursorRgn = NULL;
    Boolean gotEvent = TRUE;
    EventRecord event;
    while (gotEvent) {
        gotEvent = WaitNextEvent(everyEvent, &event, 0L, cursorRgn);
    }
}
I need to compile this on OS X 10.5 because it uses Carbon instead of Cocoa.
I am currently looking at how to transform this to Cocoa (one possible Carbon-free direction is sketched after this post).
You are also trying to move your code to Cocoa, right?
Let me know if you find out; I'll post it here if I get it.
regards,
David
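Not a definitive answer, but one Carbon-free direction to try: the WaitNextEvent loop above mainly matters because it gives the thread's run loop a chance to deliver IOHIDManager's device-matching and removal callbacks. If hidapi scheduled those callbacks on the thread that called hid_init (an assumption worth checking against the hidapi source), pumping that thread's CFRunLoop should have the same effect without Carbon. An untested sketch:

#include <CoreFoundation/CoreFoundation.h>

/* Run whatever run-loop sources are already pending on the current thread,
   then return. Call this from the same thread that called hid_init() before
   calling hid_enumerate() again. */
void check_usb_events(void)
{
    SInt32 result;
    do {
        result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, TRUE);
    } while (result == kCFRunLoopRunHandledSource);
}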
Have you considered throwing the whole thing out and using Image Kit's new-in-10.6 scanner support instead? Even if it's custom, writing a TWAIN driver for it might be easier (and is certainly better) than trying to twist Cocoa into a GetNextEvent shape.
I'm curious, what role does the int main function play in a Cocoa program? Virtually all of the sample code I've been looking at has only the following code in main.m:
#import <Cocoa/Cocoa.h>
int main(int argc, char *argv[])
{
return NSApplicationMain(argc, (const char **) argv);
}
What exactly is this doing, and where does the program actually start stepping through commands? It seems my conceptions need readjustment.
Since a Cocoa program starts like any other, the entry point as far as the operating system is concerned is main. However, the Cocoa architecture is built so that the processing of your program actually starts from NSApplicationMain, which is responsible for loading your application's initial window and starting up the event loop used to process GUI events.
Apple has a very in-depth discussion of this under the Cocoa Fundamentals Guide: The Core Application Architecture on Mac OS X.
If you want to learn how control passes from "launch this" to the main() function, the execve man page has the details. You would also want to read about dyld. main() is a part of the Unix standard. Every single program that you can run effectively has a main().
As others have mentioned, NSApplicationMain passes control to Cocoa. The documentation is quite specific as to what it does.
One interesting note: NSApplicationMain doesn't actually ever return. That is, if you were to separate the call to NSApplicationMain from the return in your main function and put code in between, that code would never be executed.
main() is the entry point for your program.
When you run your program that is the first function called. Your program ends when you exit that function.
Also note that this does not come from Objective-C. This is simple C.
Have a look at
Wikipedia's page on it
The value returned from main is returned by the process to the operating system when the process is done.
The shell stores the value returned by the last process, and you can get it back with $?:
> ls
a b c
> echo $?
0
> ls x
x: No such file or directory
> echo $?
1
ls is an application like anything else.
You can use the return value to chain multiple processes together using shell script or anything else that can execute a process and check for return value.
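As a tiny illustration (not from the thread), here is a C program whose main returns 1; running it and then echo $? prints 1, exactly like the ls example above.

#include <stdio.h>

int main(void)
{
    printf("pretending something went wrong\n");
    return 1;    /* the shell sees this value as the exit status */
}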
I'm wondering where the code begins executing (like why does an NSView subclass execute and draw without me explicitly calling it?) and if I'm not supposed to stick my main loop in int main(), where does it go?
In an Xcode project you have a main.m file that contains the 'int main' function. You won't actually find the code that calls the NSView drawing explicitly; that code is hidden deep within an iPhone or Mac OS X framework. Just know that there is an event loop hidden deep within your 'int main' that checks for changes so that it knows when to update your view. You don't need to know where this event loop is; it's not useful information, since you can override methods or create and assign delegates that do things when these events happen.
To get a better answer, you'll need to explain what you mean by a 'main loop' that you wanted to put inside the 'int main' function.
It's just weird to me coming off a little experience in C++. It looks unnatural that the main function would be so empty.
You can encapsulate a billion lines of code into one function and put it into 'int main'. Don't be deceived by a main that has only a few lines; that is done on purpose. Good programming practice teaches us to keep code in specific containers so that it is well organized. Apple chose to make the "real" launch point of their iPhone apps this single line of code inside the main.m file:
int retVal = UIApplicationMain(argc, argv, nil, @"SillyAppDelegate");
From that one piece of code, an app's delegate is launched and won't return control to the main function until it is done.