Unreal Engine Pawn Possession issues when using Mixed Reality UX Tools Plugin

I am having issues with possessing a pawn when I start the game: when I enable the Mixed Reality UX Tools plugin, it breaks my Player Controller setup.
This is my simple scene:
2 Pawns I have created over a chess board
I can easily set up possession when I start up the scene.
Player Controller that works without the plugin:
This works fine and possesses one of the MR Pawns.
But once I enable the plugin, the view changes from the left to the right:
Left view: before the plugin, the controller possesses my pawn. Right view: after enabling the plugin, a new pawn is created.
I assume this is because the plugin spawns its own pawn with interactive elements when the game starts. Is there a way to override this behavior of the plugin?

This is likely caused by the input simulation feature in UXTools: it sets a view target in order to emulate the HMD position, which can interfere with the expected camera view from your pawn. We intend to address this in the next release so that input simulation behaves more like a true HMD in the VR preview. In the meantime, there are a few things you can try to work around the issue:
Disable input simulation: Under Project Settings > Platforms > Windows Mixed Reality you can disable the feature altogether. This means you have to play on-device in order to test interactions with UXTools.
Use a camera component on the pawn: this will override the view target defined by input simulation and force the camera to the pawn's location (see the sketch below).
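
For the second workaround, here is a minimal C++ sketch (the class name AMRChessPawn is hypothetical; you can just as well add the component to a Blueprint pawn): giving the possessed pawn its own UCameraComponent makes the player controller render from the pawn rather than from the view target set by input simulation.

    // MRChessPawn.h -- hypothetical pawn with its own camera component
    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Pawn.h"
    #include "Camera/CameraComponent.h"
    #include "MRChessPawn.generated.h"

    UCLASS()
    class AMRChessPawn : public APawn
    {
        GENERATED_BODY()

    public:
        AMRChessPawn();

    protected:
        // Having this camera on the possessed pawn makes the engine use the
        // pawn's view instead of the view target set by UXTools input sim.
        UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "Camera")
        UCameraComponent* Camera;
    };

    // MRChessPawn.cpp
    #include "MRChessPawn.h"
    #include "Components/SceneComponent.h"

    AMRChessPawn::AMRChessPawn()
    {
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
        Camera = CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(RootComponent);
    }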

Related

How do we get Qt to render to memory rather than a device?

I have an application that uses Qt 5.6 for various purposes and that runs on an embedded device. Currently I have it rendering via eglfs to a Linux frame buffer on an attached display but I also want to be able to grab the data and send it to a single-color LED display unit (a device will either have that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for 1 bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject that into the Qt display "stack", but I cannot find any good examples on how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method used to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?
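
For the "capture the graphics scene" step, one possible starting point (a sketch only, assuming a QWidget-based UI; it does not address eglfs specifics or the tearing/synchronisation question, and the LED-driver call is hypothetical) is to render the top-level widget into an offscreen QImage and convert it to 1 bpp:

    #include <QImage>
    #include <QPainter>
    #include <QWidget>

    // Render the current contents of a top-level widget into a monochrome
    // QImage suitable for a 1-bpp display.
    QImage grabMonochrome(QWidget *topLevel)
    {
        QImage frame(topLevel->size(), QImage::Format_RGB32);
        frame.fill(Qt::black);

        QPainter painter(&frame);
        topLevel->render(&painter);   // draw the whole widget tree offscreen
        painter.end();

        // Convert to 1 bpp; ThresholdDither avoids grey-looking dither patterns.
        return frame.convertToFormat(QImage::Format_Mono,
                                     Qt::MonoOnly | Qt::ThresholdDither);
    }

    // Hypothetical usage with your LED driver (Qt 5.6: byteCount(), not sizeInBytes()):
    //   QImage frame = grabMonochrome(mainWindow);
    //   ledPanel->write(frame.constBits(), frame.byteCount());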

Identify the monitor with the browser window in FireBreath

I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). The native libraries display UI screens. To improve usability when the user has a multi/extended-monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier to the monitor with the browser window can be retrieved, that can be passed down to the native components which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that info to identify the correct monitor in the Windows platform.
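
For reference, the Windows-side check described above can be done with plain Win32 calls once you have the plugin's rect in screen coordinates (a sketch only; the surrounding FireBreath plumbing and how you obtain the rect are assumed):

    #include <windows.h>

    // Given the plugin's bounding rect in screen coordinates, return the
    // monitor it sits on; falls back to the nearest monitor otherwise.
    HMONITOR monitorForPluginRect(long left, long top, long right, long bottom)
    {
        RECT rc = { left, top, right, bottom };
        return MonitorFromRect(&rc, MONITOR_DEFAULTTONEAREST);
    }

    // If you have the plugin's HWND instead, MonitorFromWindow(hwnd,
    // MONITOR_DEFAULTTONEAREST) is even more direct, and GetMonitorInfo()
    // yields that monitor's bounds for positioning the native UI.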
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model for this to work on Mac. I have tried the following event/drawing model combinations without much success.
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Please also share if there are other approaches to achieve the same thing: identifying the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
The other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find this a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround and am replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript window object has two properties, screenX and screenY, that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended-monitor setup, these are absolute coordinates with respect to the full extended desktop. We can use these values to determine which monitor holds the browser window (if the X coordinate is beyond the primary monitor's width, the browser is essentially on the extended monitor). DOM properties can be retrieved from FireBreath as explained in the following link (a minimal sketch follows it):
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
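
A rough sketch of the idea from inside the plugin (based on the DOM-access pattern in the FireBreath documentation linked above; the exact host/window plumbing in your plugin may differ, and primaryMonitorWidth is a hypothetical value you would obtain per platform):

    #include "BrowserHost.h"
    #include "DOM/Window.h"

    // Read window.screenX from the hosting browser window via the DOM.
    int getBrowserScreenX(const FB::BrowserHostPtr& host)
    {
        FB::DOM::WindowPtr window = host->getDOMWindow();
        return window->getProperty<int>("screenX");
    }

    // Hypothetical usage: if the browser window's X position is beyond the
    // primary monitor's width, assume it lives on the extended monitor.
    // bool onExtendedMonitor = getBrowserScreenX(m_host) >= primaryMonitorWidth;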

unit testing custom UIView

I have a graphics application using the CoreGraphics framework. While I have unit test suites for the model files, I can't seem to understand how to create a unit test for my custom UIView.
My goal is to set basic properties of the view and check the result of the draw functions. Although my setters call:
[self setNeedsDisplay];
my drawRect: method is not called from the unit test, although it is called in the real application.
Is there a way to draw in a unit test project? What are the best practices/tools for testing UI projects?
Thanks
A simple way to test drawing is to first make the drawing look the way you want (so no TDD). Then make a test which renders the drawing into a PNG. Use a #if conditional to switch the test code between capturing a baseline PNG, and comparing against that PNG.
Rendering can change slightly in new OS versions. So stick with a single OS for baseline image testing. Grab a new baseline when you need to.
With this kind of testing in place, you can then refactor your drawing code. If it renders the same image, your latest changes are good. If there's a difference, you have to make an eyeball decision about whether or not to accept the change. (And if you do, capture a new baseline.)
Edit: These days, instead of doing all that by hand, I'd use iOSSnapshotTestCase. When there is a test failure, instead of just giving a "something went wrong" result, it saves the image. This way, you can do a side-by-side comparison without having to manually launch your app and navigate to the view in question. It also simulates rendering to different devices in different orientations, which is really great for checking autolayout.
In general unit testing views isn't the right thing to do. Unit testing is meant to validate discrete atoms of logic, and if you're factoring your code properly there should be very little logic in your views.
A more successful approach might be to use the UIAutomation framework (or your automation tool of choice). This allows you to automate simulated user interaction while the app is actually running (either in the simulator or on device). UIAutomation has functions (the various view methods, and captureRectWithName()) that allow you to find and screenshot specific views. You can then hook it up to, for example, ImageMagick's command line compare tool to validate that you're drawing the correct thing.
I personally think you should test any rendering your view does using XCUI tests. However, any business logic that produces that custom rendering, and any maths or other logic you have added, should be tested in isolation from any other class.
So your UI tests will check that the view is rendered, while your unit tests validate any maths or business logic it contains.

Create custom templates in an iOS app

How do I create custom templates in an iOS app containing a UIImageView, a UITextView, and many other views, so that the user can select any one template and start editing it?
There is a well-known library floating around for this kind of usage: iOS BoilerPlate.
It is intended to provide a base of code to start with
It is not intended to be a framework
It is intended to be modified and extended by the developer to fit their needs
It includes solid third-party libraries if needed to not reinvent the wheel
What it includes:
HTTP requests and an image cache (both in-memory and disk-based)
UITableViews and UITableViewCells: fast scrolling, async images, pull-down-to-refresh, swipeable cells,...
A built-in browser so your users don't leave your application when they browse to a certain URL
Maps and locations: directions between two points, autocomplete a location, etc.

Two native AIR windows from a single AIR app?

I'm building in FlashDevelop as pure AS3.
I'm looking at building a kiosk that uses two screens; it's used to administer tests, so one screen shows the test and the second the controls for administering the test. I have played with a single wide app, but it's not very elegant, and I would really like to run full screen on each screen. Is it possible to have one AIR app spawn two native AIR windows? A secondary question: is it possible to detect multiple screens and target a specific screen to go full screen on? Even something as simple as checking the window size to detect this would work; I'm just not sure I can move the window, or whether the low-level API will go full screen on that screen. I could not find any examples of this in the docs.
What docs did you look into? I found it right away.
You'll need the Screen class if you want information on the screens that are connected to the PC. And here's some documentation on using it.
To create new windows, just instantiate a new NativeWindow class and call activate() on it when you're done configuring.
There's a lot of other useful stuff for you in the flash.display package. All the AIR stuff is marked with a little AIR icon. I have to admit that it would have been easier to find if they had put these classes in a separate AIR package.