ID2D1DCRenderTarget vs ID2D1HwndRenderTarget - gpu

I'm trying to port some GDI/GDI+ code to Direct2D, but I'm still a little confused about which type of render target is better to use (DC or Hwnd), because I see different performance depending on whether or not I use the GPU. In particular, I found the following problems:
If I use the DCRenderTarget I cannot use hardware acceleration (or the Default render target type), because I get repeated access violations. This does not happen if I use the HwndRenderTarget.
If I use the HwndRenderTarget, in general everything is fine, but if I have many windows (such as buttons), I lose focus on the main window, which then no longer receives key-press messages; and if I use the GPU, performance drops a lot and depends strongly on the number of active targets (which does not happen if I use software rendering).
Has anyone encountered the same problems? Can you recommend something about it?
Thanks a lot!

In general, if you want Direct2D to interoperate with GDI, you should use ID2D1DCRenderTarget, otherwise, use ID2D1HwndRenderTarget.
I don't quite understand what you said about the performance. Do you mean the performance was bad when your main window lost focus? If that's the case, you can track the window state and stop rendering while the window is inactive.
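For example, a minimal sketch of that idea (not from the original answer; the Render() stub and the g_isActive flag are illustrative names) that pauses drawing with an ID2D1HwndRenderTarget while the window is deactivated:

#include <windows.h>

static bool g_isActive = true;   // illustrative flag: is the window activated?

static void Render()
{
    // Stub: this is where you would call BeginDraw()/EndDraw() on the
    // ID2D1HwndRenderTarget.
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ACTIVATE:
        // WA_INACTIVE means the window has just been deactivated.
        g_isActive = (LOWORD(wParam) != WA_INACTIVE);
        return 0;
    case WM_PAINT:
        if (g_isActive)
            Render();                 // only draw while the window is active
        ValidateRect(hwnd, nullptr);  // otherwise just clear the paint request
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}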

General question: Adding new test code to embedded system

This may be a bit off topic, but I am preparing for an exam on real-time systems, and I have been browsing the book and the Internet for an answer to a problem.
Basically I wonder whether adding test code may change the real-time behavior of an embedded system, and whether it may also introduce new errors.
Does anyone know the answer to this, or can you refer me to some reading material on it?
Your question is too general, so I guess the default answer would be "it depends". But considering the possibilities as an exercise of logic and thought: yes, it surely can!
There are many schemes available to guarantee the 'real-timeness' of an embedded system. For example, one can have a pre-emptive, timer-based ISR to service the real-time task. In such a case, your test code might well not affect the 'real-timeness'. But if the testing takes too long and the context switches are not pre-emptive, you could get into trouble.
Then again, it depends on what you're testing and how you're testing it. Your test code can possibly mess with the timers, interrupts or the memory of the system. The possibilities for messing things up if you're not careful are endless.
Having an OS underneath will prevent some errors, but again, depending on how it works, you may or may not be saved from bad 'test code'.
Yes, when you add code (test, diagnostic, statistics) it may change the real-time behavior. Whether it actually does depends on the design, the implementation and the CPU power. You also have more lines of code, so the probability of errors may increase. But I wouldn't say "it will introduce errors"; rather, it can introduce errors.
Yes it can. See How can adding data to a segment in flash memory screw up a program's timing? for an example of how even adding non-executable code can adjust timing enough to screw up a system.
Yes, changing your code base can totally change its timing. Consider dumping some debug output to a serial port: it takes time to call that function and format the data, and if the function is synchronous, to wait for the data to go out. That kind of thing definitely changes the system's timing behavior.
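As a rough, desktop-side illustration of that point (a sketch only; printf stands in for a blocking serial write, and the loop sizes are arbitrary), you can time the same work with and without the debug output:

#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;

    // Run the "real" work alone.
    auto t0 = clock::now();
    volatile long sum = 0;
    for (int i = 0; i < 100000; ++i)
        sum += i;
    auto plain = clock::now() - t0;

    // Run the same work with a synchronous debug print every 10000 iterations.
    t0 = clock::now();
    sum = 0;
    for (int i = 0; i < 100000; ++i) {
        sum += i;
        if (i % 10000 == 0)
            std::printf("progress %d\n", i);
    }
    auto instrumented = clock::now() - t0;

    std::printf("plain: %lld us, instrumented: %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(plain).count(),
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(instrumented).count());
}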

Supporting multi-monitors

I want to provide multi-monitor support in my application.
In the past I have had the simplistic view that multi-monitor support is simply the lack of open multi-monitor related bugs. If it seems to work on a multi-monitor setup, then it supports multi-monitors, right?
But I would like to create some clear requirements about this.
What are the basic requirements I need to adhere to, in order to satisfy most users' expectations so that they might say "yes this application supports multi-monitors"?
For example an obvious requirement is that all windows/messageboxes/tooltips etc must open on the same monitor that the application is on. And any children of those windows must open on the same monitor as their parent.
Can you think of any more? Are there any guidelines about this anywhere?
If the application was last used in a multi-monitor setup and is started up again with the additional monitors unplugged, make your application sense this and move all dialogs and working areas back onto the main screen.
(Eclipse doesn't do this and it annoys the hell out of me)
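On Windows, a minimal sketch of that check might look like the following (MonitorFromWindow with MONITOR_DEFAULTTONULL reports whether the window still intersects a live monitor; the fallback simply snaps it to the nearest work area):

#include <windows.h>

void EnsureWindowOnAMonitor(HWND hwnd)
{
    // Returns NULL when the window no longer intersects any attached monitor.
    if (MonitorFromWindow(hwnd, MONITOR_DEFAULTTONULL) != nullptr)
        return;  // still visible somewhere, nothing to do

    HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(mon, &mi);

    RECT wr;
    GetWindowRect(hwnd, &wr);
    SetWindowPos(hwnd, nullptr,
                 mi.rcWork.left, mi.rcWork.top,
                 wr.right - wr.left, wr.bottom - wr.top,
                 SWP_NOZORDER | SWP_NOACTIVATE);
}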
What makes me very happy is an application that remembers where all of its windows are (be they on the main monitor or second monitor) and re-displays everything in the same layout when I start up the application.
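A hedged sketch of that behavior on Windows (the file name is illustrative; GetWindowPlacement/SetWindowPlacement preserve the position even when it lies on a secondary monitor):

#include <windows.h>
#include <cstdio>

void SavePlacement(HWND hwnd)
{
    WINDOWPLACEMENT wp = { sizeof(wp) };
    if (GetWindowPlacement(hwnd, &wp)) {
        if (FILE* f = std::fopen("window.pos", "wb")) {
            std::fwrite(&wp, sizeof(wp), 1, f);
            std::fclose(f);
        }
    }
}

void RestorePlacement(HWND hwnd)
{
    WINDOWPLACEMENT wp;
    if (FILE* f = std::fopen("window.pos", "rb")) {
        if (std::fread(&wp, sizeof(wp), 1, f) == 1 && wp.length == sizeof(wp))
            SetWindowPlacement(hwnd, &wp);
        std::fclose(f);
    }
}

Combining this with a check like the one in the previous answer keeps the restored window from ending up on a monitor that is no longer attached.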
Couple of nasties to watch out for from personal experience (with GDI and Direct3D9 anyway):
When the two monitors have different bit depths, and the user expands the window to cover both monitors or drags it from one to the other... well, it can get ugly if your app wasn't expecting it. It used to be much more of a headache back when systems might mix 8-bit palettized displays with higher bit-depth RGB ones (although be aware that such configurations are still out there for certain specialized applications).
Direct3D windows created on one "D3D device" may need attention if you drag them to another display (wholly or partially). Depending on how you set up the device, the window either displays nothing on the other display, or does display content but with a massive amount of CPU-processing overhead (enough to kill high frame rates). (This was true on XP, anyway.)
I always thought this was an OS responsibility. It seems that way on the Mac. Alas, I've run into lots of apps in Windows that don't behave very well on dual monitors, though I've tended to blame that on the video card.
My biggest complaint is apps that insist on using an MDI interface only. MDIs are great if you have limited screen space, but they get quite annoying when you actually have lots of screen space and want floating palettes/documents separate from each other.

How to save a program's progress, and resume later?

You may know of a lot of programs, e.g. some password-cracking programs, that we can stop while they're running, and when we run the program again (with or without the same input) they are able to continue from where they left off. I wonder what kind of technique those programs use?
[Edit] I am writing a program based mainly on recursive functions. As far as I know, it is incredibly difficult to save such state in my program. Is there any technique that somehow saves the stack contents, function calls and data involved in my program, so that when it is restarted it can run as if it hadn't been stopped? This is just a concept I have in mind, so please forgive me if it doesn't make sense...
It's going to be different for every program. For something as simple as, say, a brute-force password cracker, all that would really need to be saved is the last password tried. For other apps you may need to store several data points, but that's really all there is to it: saving and loading the minimum amount of information needed to reconstruct where you were.
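A minimal sketch of that idea (the file name and candidate format are illustrative; the work loop writes the last candidate it finished, and a restart reads it back):

#include <fstream>
#include <string>

// Read the last candidate we got to, or an empty string on a fresh start.
std::string load_checkpoint()
{
    std::ifstream in("checkpoint.txt");
    std::string last;
    std::getline(in, last);
    return last;
}

// Overwrite the checkpoint with the candidate we just finished trying.
void save_checkpoint(const std::string& candidate)
{
    std::ofstream out("checkpoint.txt", std::ios::trunc);
    out << candidate << '\n';
}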
Another common technique is to save an image of the entire program state. If you've ever played with a game console emulator that can save state, this is how they do it. A similar technique exists in Python with pickling. If the environment is stable enough (i.e. no varying pointers) you simply copy the entire app's memory state into a binary file. When you want to resume, you copy it back into memory and begin running again. This gives you near-perfect state recovery, but whether or not it's at all possible is highly environment/language dependent. (For example: most C++ apps couldn't do this without help from the OS, or unless they were built VERY carefully with this in mind.)
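A much more modest variant of the snapshot idea that does work in plain C++ (assuming the whole state fits in one trivially copyable struct with no pointers; the names here are illustrative):

#include <cstdio>

struct CrackerState {
    long long attempts;    // how many candidates we have tried so far
    char      current[64]; // the candidate we are currently working on
};

bool save_state(const CrackerState& s)
{
    FILE* f = std::fopen("state.bin", "wb");
    if (!f) return false;
    bool ok = std::fwrite(&s, sizeof(s), 1, f) == 1;
    std::fclose(f);
    return ok;
}

bool load_state(CrackerState& s)
{
    FILE* f = std::fopen("state.bin", "rb");
    if (!f) return false;
    bool ok = std::fread(&s, sizeof(s), 1, f) == 1;
    std::fclose(f);
    return ok;
}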
Use Persistence.
Persistence is a mechanism through which the life of an object extends beyond the program's execution lifetime.
Store the state of the objects involved in the process on the local hard drive using serialization.
Implement Persistent Objects with Java Serialization
To achieve this, you need to continually save state (i.e. where you are in your calculation). This way, if you interrupt the program, when it restarts it will know it was in the middle of a calculation, and where it was in that calculation.
You also probably want to have your main calculation in a separate thread from your user interface - this way you can respond to "close / interrupt" requests from your user interface and handle them appropriately by stopping / pausing the thread.
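A sketch of that structure (the work and checkpoint functions are illustrative stubs; the point is the atomic stop flag checked between work units):

#include <atomic>
#include <thread>

std::atomic<bool> g_stop{false};

static void do_one_unit(int /*unit*/)     { /* one slice of the real calculation */ }
static void save_checkpoint(int /*unit*/) { /* persist progress to disk */ }

void worker(int start_unit, int total_units)
{
    int unit = start_unit;
    for (; unit < total_units && !g_stop.load(); ++unit) {
        do_one_unit(unit);
        if (unit % 100 == 0)
            save_checkpoint(unit);   // periodic checkpoint
    }
    save_checkpoint(unit);           // final checkpoint on pause/exit
}

int main()
{
    std::thread t(worker, 0, 1000000);
    // ... the user interface loop would live here; on a close request:
    g_stop.store(true);
    t.join();
}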
For Linux, there is a project named CRIU, which supports process-level save and resume. It is much like hibernating and resuming the OS, but at the granularity of individual processes. It also supports container technologies, specifically Docker. Refer to http://criu.org/ for more information.

Protection against automation

One of our next projects is supposed to be a MS Windows based game (written in C#, with a winform GUI and an integrated DirectX display-control) for a customer who wants to give away prizes to the best players. This project is meant to run for a couple of years, with championships, ladders, tournaments, player vs. player-action and so on.
One of the main concerns here is cheating, as a player would benefit dramatically if he were able to, for instance, let a custom-made bot play the game for him (more in terms of strategy decisions than in terms of playing many hours).
So my question is: what technical possibilities do we have to detect bot activity? We can of course track the number of hours played, analyze strategies to detect anomalies and so on, but as far as this question is concerned, I would be more interested in knowing details like:
how to detect if another application makes periodical screenshots?
how to detect if another application scans our process memory?
what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
is it possible to detect if another application requests information about controls in our application (position of controls, etc.)?
what other ways exist in which a cheater could gather information about the current game state, feed it to a bot, and send the determined actions back to the client?
Your feedback is highly appreciated!
I wrote d2botnet, a .NET Diablo 2 automation engine, a while back, and something you can add to your list of things to watch out for is malformed/invalid/forged packets. I assume this game will communicate over TCP. Packet sniffing and forging are usually the first way (online) games get automated. I know Blizzard would detect malformed packets, something I tried to stay away from doing in d2botnet.
So make sure you detect invalid packets. Encrypt them. Hash them. Do something to make sure they are valid. If you think about it, if someone knows exactly what every packet sent back and forth means, they don't even need to run the client software, which then makes any process-based detection a moot point. So you can also add in some sort of packet-based challenge-response that your client must know how to respond to.
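One common way to do the "hash them" part (a minimal sketch using OpenSSL's one-shot HMAC() helper, which is deprecated in newer OpenSSL but still available; the shared key and packet layout are illustrative, and a real protocol would also need sequence numbers or nonces to stop replays):

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstddef>
#include <vector>

// Append an HMAC-SHA256 tag to the payload so the server can reject packets
// forged or tampered with by anything that doesn't know the key.
std::vector<unsigned char> seal_packet(const unsigned char* payload, size_t len,
                                       const unsigned char* key, size_t key_len)
{
    unsigned char tag[EVP_MAX_MD_SIZE];
    unsigned int tag_len = 0;
    HMAC(EVP_sha256(), key, static_cast<int>(key_len), payload, len, tag, &tag_len);

    std::vector<unsigned char> packet(payload, payload + len);
    packet.insert(packet.end(), tag, tag + tag_len);   // wire format: payload || tag
    return packet;
}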
Just an idea: what if the 'cheater' runs your software in a virtual machine (like VMware) and takes screenshots of that window? I doubt you can defend against that.
You obviously can't defend against the 'analog gap' either, e.g. the cheater's system taking external screenshots with a high-quality camera, but I guess that's only a theoretical issue.
Maybe you should investigate chess sites. There is a lot of money in chess, they don't like bots either - maybe they have come up with a solution already.
The best protection against automation is to not have tasks that require grinding.
That being said, the best way to detect automation is to actively engage the user and require periodic CAPTCHA-like tests (except without the image and so forth). I'd recommend utilizing a database of several thousand simple one-off questions that get posed to the user every so often.
However, based on your question, I'd say your best bet is to not implement the anti-automation features in C#. You stand very little chance of detecting well-written hacks/bots from within managed code, especially when all the hacker has to do is go into ring 0 to avoid detection via any standard method. I'd recommend a Warden-like approach (a downloadable module that you can update whenever you feel like it) combined with a kernel-mode driver that hooks all of the Windows API functions and watches them for "inappropriate" calls. Note, however, that you're going to run into a lot of false positives, so you should not base your banning system purely on the automated data. Always have a human look it over before banning.
A common method of listening to keyboard and mouse input in an application is setting a windows hook using SetWindowsHookEx.
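For reference, this is roughly what such a hook looks like (a sketch; note that the same API is also useful defensively, since the LLKHF_INJECTED flag marks key events synthesized by SendInput rather than typed by hand):

#include <windows.h>

static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT* k = reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
        if (k->flags & LLKHF_INJECTED) {
            // This key event was generated programmatically, not by the keyboard.
        }
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

int main()
{
    HHOOK hook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc,
                                  GetModuleHandle(nullptr), 0);
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {  // low-level hooks need a message loop
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(hook);
}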
Vendors usually try to protect their software during installation so that hackers won't automate it and crack/find a serial for their application.
Google the term: "Key Loggers"...
Here's an article that describes the problem and methods to prevent it.
I have no deeper understanding of how PunkBuster and similar software work, but this is the way I'd go:
Intercept calls to the API functions that handle the memory stuff, like ReadProcessMemory, WriteProcessMemory and so on.
You'd detect if your process is involved in the call, log it, and trampoline the call back to the original function.
This should work for the screenshot taking too, but you might want to intercept the BitBlt function.
Here's a basic tutorial concerning the function interception:
Intercepting System API Calls
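If writing a full trampoline is overkill, a much simpler (and easily fooled) related heuristic, plainly not the interception technique above, is to check whether those APIs have already been detoured inside your own process, e.g. by looking for a JMP at their entry point. A rough sketch, x86/x64-specific and by no means a robust anti-cheat:

#include <windows.h>
#include <cstdio>

// Returns true if the function's first bytes look like a JMP, which often
// means someone has installed an inline hook/trampoline on it.
bool looks_hooked(const char* module, const char* function)
{
    HMODULE mod = GetModuleHandleA(module);
    if (!mod) return false;
    const unsigned char* p =
        reinterpret_cast<const unsigned char*>(GetProcAddress(mod, function));
    if (!p) return false;
    return p[0] == 0xE9 ||                          // jmp rel32
           (p[0] == 0xFF && (p[1] & 0x38) == 0x20); // jmp r/m (FF /4)
}

int main()
{
    std::printf("ReadProcessMemory hooked: %d\n",
                looks_hooked("kernel32.dll", "ReadProcessMemory"));
    std::printf("BitBlt hooked: %d\n",
                looks_hooked("gdi32.dll", "BitBlt"));
}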
You should look into what goes into Punkbuster, Valve Anti-Cheat, and some other anti-cheat stuff for some pointers.
Edit: What I mean is, look into how they do it; how they detect that stuff.
I don't know the technical details, but the Internet Chess Club's BlitzIn program seems to have integrated program-switching detection. That's of course for detecting people running a chess engine on the side and not directly applicable to your case, but you may be able to extrapolate the approach to something like "if process X takes more than Z% CPU time over the next Y cycles, it's probably a bot running" (a rough sketch of such a check follows this answer).
That, in addition to a "you must not run anything else while playing the game to be eligible for prizes" clause in the contest rules, might work.
Also, a draconian "we may decide at any time, for any reason, that you have been using a bot and disqualify you" rule helps with the heuristic approach above (it's used in prized ICC chess tournaments).
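As referenced above, a rough sketch of such a CPU-usage check (the PID is illustrative, a real implementation would enumerate processes first, and this doesn't normalise for the number of cores):

#include <windows.h>
#include <cstdio>

// Sample another process's CPU time twice and estimate how busy it was
// during the interval (1.0 means roughly one core fully busy).
double cpu_fraction(HANDLE process, DWORD interval_ms)
{
    auto to64 = [](const FILETIME& ft) {
        return (static_cast<unsigned long long>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
    };

    FILETIME create, exit_, k1, u1, k2, u2;
    GetProcessTimes(process, &create, &exit_, &k1, &u1);
    Sleep(interval_ms);
    GetProcessTimes(process, &create, &exit_, &k2, &u2);

    unsigned long long used = (to64(k2) + to64(u2)) - (to64(k1) + to64(u1));
    return static_cast<double>(used) / (interval_ms * 10000.0);  // FILETIME is in 100 ns units
}

int main()
{
    DWORD pid = 1234;  // hypothetical PID of the suspicious process
    if (HANDLE h = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid)) {
        std::printf("approx CPU fraction: %.2f\n", cpu_fraction(h, 1000));
        CloseHandle(h);
    }
}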
All these questions are easily solved by rule 1 above:
* how to detect if another application makes periodical screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (position of controls, etc.)?
I think a good way to make the crackers' job harder is to keep the only authoritative copies of the game state on your servers, only sending updates to and receiving updates from the clients. That way you can embed client validation in the communication protocol itself (i.e. that the client hasn't been cracked and thus the detection rules are still in place). That, plus actively monitoring for new, unusual behavior, might get you close to where you want to be.

Can I control register allocation in g++?

I have a highly optimized piece of C++, and making even small changes in places far from the hot spots can hurt performance by as much as 20%. After deeper investigation it turned out to be (probably) due to slightly different registers being used in the hot spots.
I can control inlining with the always_inline attribute, but can I control register allocation?
If you really want to mess with the register allocation then you can force GCC to allocate local and global variables in certain registers.
You do this with a special variable declaration like this:
register int test_integer asm ("EBX");
This works for other architectures as well; just replace EBX with a target-specific register name.
For more info on this I suggest you take a look at the gcc documentation:
http://gcc.gnu.org/onlinedocs/gcc-4.3.3/gcc/Local-Reg-Vars.html
My suggestion, however, is not to mess with the register allocation unless you have very good reasons for it. If you allocate some registers yourself, the allocator has fewer registers to work with and you may end up with code that is worse than the code you started with.
If your function is so performance-critical that you get 20% performance differences between compiles, it may be a good idea to write that thing in inline assembler.
EDIT: As strager pointed out, the compiler is not forced to use the register for the variable. It's only forced to use the register if the variable is used at all. E.g. if the variable does not survive an optimization pass, it won't be used. Also, the register can be used for other variables as well.
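Put together, a small hedged example of the extension described above (GCC-specific; "ebx" is an x86 register, and GCC only really guarantees the placement when the variable is used as an extended-asm operand; newer C++ standards complain about the register keyword, so compile as C or with an older GNU C++ mode such as -std=gnu++14):

// Ask GCC to keep the accumulator in EBX for this function.
int sum(const int* data, int n)
{
    register int acc asm("ebx") = 0;  // local register variable (GCC extension)
    for (int i = 0; i < n; ++i)
        acc += data[i];
    return acc;
}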
In general the register keyword is simply ignored by all modern compilers. The only exception is the (relatively) recent addition of an error if you attempt to take the address of a variable you've marked with the register keyword.
I've experienced this sort of pain as well, and eventually found the only real way around it was to look at the output assembly to try to determine what is causing gcc to go off the deep end. There are other things you can do, but it depends on exactly what your code is trying to do. I was working in a very, very large function with a large amount of computed-goto mayhem, in which minor (seemingly innocuous) changes could cause catastrophic performance hits. If you're doing something similar there are a few things you can do to try to mitigate the problem, but the details are somewhat icky so I'll forgo discussing them here unless they're actually relevant.
It depends on the processor you are using. Or should I say: yes, you can with the register keyword, but this is frowned upon unless you are using a simple processor with no pipelining and a single core. These days GCC can do a far better job of register allocation than you can. Trust it.