SDL2: Share renderer between multiple windows - rendering

I have a set of images, and I need to show them on different displays. So I create two windows and two renderers. But some images may need to be shown on several displays, and if a texture was created with rendererOne and drawn with rendererTwo, the program crashes.
If I create the texture at runtime every time I need to show it, the FPS drops.
What is the best way to solve this? Can I share a renderer between windows (on different displays)? Or can I share a texture between different renderers?
P.S. I could tag the image names like "Image1.one.two.png" or "Image2.one.png" and so on, and create two copies of Image1 and one copy of Image2, but that is a very clumsy approach and requires a lot of RAM.
P.P.S. I don't use OpenGL directly.

I solved this problem with lazy initialization of the texture. I keep the SDL_Surface around, and whenever I need to show the texture with a particular renderer I check it first:
if (m_texture == nullptr || !m_texture->CompatibleWithRenderer(renderer))
{
    // Drop the texture built for the previous renderer and rebuild it
    // from the stored surface for the renderer that is about to draw it.
    delete m_texture;
    m_texture = new Texture(renderer, m_surface);
}
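For comparison, here is a minimal self-contained sketch of the same lazy-initialization idea using plain SDL2 calls; the LazyTexture struct and its Draw method are my own illustration (not the original wrapper class), assuming one cached texture per image:

#include <SDL.h>

// Hypothetical helper: keeps the SDL_Surface in RAM and (re)creates the GPU
// texture lazily for whichever renderer is about to draw it.
struct LazyTexture
{
    SDL_Surface  *surface = nullptr;   // pixel data, renderer-independent
    SDL_Texture  *texture = nullptr;   // GPU copy, tied to one renderer
    SDL_Renderer *owner   = nullptr;   // renderer the texture was created for

    void Draw(SDL_Renderer *renderer, const SDL_Rect *dst)
    {
        // Rebuild the texture on first use or when a different renderer asks.
        if (texture == nullptr || owner != renderer)
        {
            if (texture != nullptr)
                SDL_DestroyTexture(texture);
            texture = SDL_CreateTextureFromSurface(renderer, surface);
            owner   = renderer;
        }
        SDL_RenderCopy(renderer, texture, nullptr, dst);
    }
};

If an image is shown on both displays every frame, destroying and recreating the texture on each switch would defeat the purpose; in that case it may be cheaper to cache one texture per renderer, for example in a small map keyed by the SDL_Renderer pointer.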

Related

Custom MapIcon image wrong size in UWP app

I've got a UWP app with a map view. For some of my MapIcons, I need to set a custom image. Others use the default image. My custom image is approximately the same size as the default image. Additionally, I generated different sizes for different screen scales, and named them accordingly (for example, MyIcon.scale-200.png, etc).
I tested this by running the app on my computer, a Surface Pro, and setting the scaling to different values in the Settings app. It seems to work. As I choose larger scales, I get larger custom MapIcons, and the custom icon is similar in size to the default icon.
However, my customers report that it is not working correctly when I distribute my app. They are sending me screenshots that show the custom MapIcon either much larger or noticeably smaller than the default one. I can't reproduce or explain these results.
What could cause this?

Can you set a predetermined orientation in a Qt app?

Can you set a vertical orientation in a QML app? If so, how?
I have searched various sites trying to solve this problem. I found a C++ solution, but I need the equivalent piece of code in QML.
I'm using the "Qt Quick Application - Empty" project template, with Qt 5.10.1.
Thank you.
You're looking for Screen.orientationUpdateMask.
Once the mask is set, Screen.orientation will contain the current orientation of the screen. You can read more about the Screen QML type here. Of course, the orientation in this case is reported by the accelerometer.
If you want to be able to go back and forth between portrait and landscape without the accelerometer, while keeping the logic in QML, you will need to use the Transform, Scale and Rotation QML types. I wouldn't recommend this approach.
One alternative to using Transform would be to use two different views altogether, which might not be good for maintainability, especially if you want to support all four orientations.
If you want to force the orientation no matter what, you can do it in the manifest file as you would normally without Qt.

updating camera helper projection matrix (three.js)

I'm switching a camera between two exported dummies from 3dsmax by setting it to use their matrixWorld properties.
camera_foreground.matrixWorld = dummy_shot1.matrixWorld;
camera_foreground.updateProjectionMatrix();
This works great but the camera_helper that I've created doesn't inherit the matrix changes.
It doesn't allow me to run updateProjectionMatrix() on the helper itself. I've tried parenting the helper to the original camera. I've also tried to set the helper.matrixWorld to the same dummy_shot1.matrixWorld. What would be the best way to get the helper to update along with the camera it's created for/from?
You can update the frustum of a camera helper with THREE.CameraHelper.update().
Given two cameras, camera1 and camera2, you can switch the CameraHelper transformation from camera1 to camera2 like this:
cameraHelper.camera = camera2;
cameraHelper.matrix = camera2.matrixWorld;
cameraHelper.update();
Note
Another solution, which might be more convenient for you, would be to create one THREE.CameraHelper for each camera and switch the currently displayed helper with:
camera1.helper.visible = false;
camera2.helper.visible = true;
In addition, THREE.Layers can also be used to control the currently displayed helper.
3dsmax cameras and camera animation aren't supported by the 3dsmax Collada exporter. To get around this I was exporting dummies (from 3dsmax to three.js) that had been parented to those cameras. I had worried that the dummies would come through without their animation (since their parents, the 3dsmax cameras, are ignored).
The dummies seemed to be coming through fine, since camera location/rotation worked when copying their matrices to three.js cameras. The problem arose when trying to get the helpers to do the same (apparent when viewing the scene via a debug camera).
It seems camera helper objects in three.js don't play nicely with the matrix of these particular dummies (probably because in 3dsmax they are inheriting from a parent). What's strange is that the cameras, as mentioned above, work fine.
To get around the issue I used a great MAXScript (http://www.breidt.net/scripts/mb_collapse.mcr) to copy/bake all the parent keyframe data from the 3dsmax camera onto its dummy, so it no longer inherits. The Collada-friendly dummy was then exported to three.js, where both the camera and its helper work great.
Thanks Neeh for the help and questions. It was while rebuilding a test scene to upload that I noticed the new dummies (not parented to cameras) worked fine with helpers.

How to directly manipulate texels in OpenGL ES?

I want to use OpenGL ES to scale and display an image on the screen. The image will be updated about 20 times per second, so the idea was to paint directly into the texture. Scaling should be done by the graphics card, while my application guarantees that the pixels are already in the correct format. My application needs to manipulate the image on a pixel-by-pixel basis. Due to the architecture of the application I would like to avoid calls like settexel(x, y, color) and instead write directly into memory.
Is it possible to directly access a texture in (the graphics card's?) memory and change it pixel-wise?
If not, is it possible to use something like settexel(x,y,color) to change a texture?
Thanks for any help!
OK, after asking some colleagues at my company, I found out that there is no clean way to access graphics memory directly (solution 1) or to access main memory from within a shader (solution 2).
Thus, I will keep the pixels in main memory and upload the changed regions into graphics memory with glTexSubImage2D.
Thanks to everybody who helped me with this!
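For illustration, a minimal sketch of that upload path, assuming an OpenGL ES 2.0 context, a full-size RGBA pixel buffer in main memory, and a caller-provided tightly packed staging buffer (all names here are my own, not from the original post):

#include <GLES2/gl2.h>
#include <string.h>

// Assumed application state: the texture and the CPU-side pixel buffer it mirrors.
static GLuint         g_texture   = 0;
static int            g_texWidth  = 0;
static int            g_texHeight = 0;
static unsigned char *g_pixels    = nullptr;  // g_texWidth * g_texHeight * 4 bytes

// Upload one dirty rectangle from g_pixels into the texture. OpenGL ES 2.0 has
// no GL_UNPACK_ROW_LENGTH, so the rectangle is first copied into a tightly
// packed staging buffer of at least w * h * 4 bytes.
void UploadDirtyRect(int x, int y, int w, int h, unsigned char *staging)
{
    for (int row = 0; row < h; ++row)
    {
        const unsigned char *src = g_pixels + ((y + row) * g_texWidth + x) * 4;
        memcpy(staging + row * w * 4, src, (size_t)w * 4);
    }

    glBindTexture(GL_TEXTURE_2D, g_texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, staging);
}

The full-size allocation happens once with glTexImage2D; after that, only the changed regions go through glTexSubImage2D, which is the approach described above.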

NSScreenNumber changes (randomly)?

In my application I need to distinguish between different displays, which I do by using the NSScreenNumber key of the deviceDescription dictionary provided by NSScreen. So far everything has worked flawlessly, but now all of a sudden I sometimes get a different screen ID for my main screen (it's a laptop and I haven't attached a second screen in months; it's always the same hardware). The ID used to be 69676672, but now most of the time I get 2077806975.
At first I thought I might be misinterpreting the NSNumber somehow, but that doesn't seem to be the case; I also checked with the CGMainDisplayID() function and I get the same value. What is even weirder is that some of Apple's applications still seem to get the old ID: e.g. the desktop image is referenced in its config file by screen ID, and when updating the desktop image, Apple's desktop image app uses the "correct" (= old) ID.
I am starting to wonder if there might have been a change in a recent update (10.7.1 or 10.7.2) that led to the change, has anybody else noticed something similar or had this issue before?
Here is the code that I use:
// This is in an NSScreen category
- (NSNumber *)uniqueScreenID {
    return [[self deviceDescription] objectForKey:@"NSScreenNumber"];
}
And for getting an int:
// Assuming screen points to an instance of NSScreen
NSLog(@"Screen ID: %i", [[screen uniqueScreenID] intValue]);
This is starting to get frustrating, appreciate any help/ideas, thanks!
For Macs that have both integrated and discrete graphics (such as MacBook Pro models with on-board Intel graphics and a separate graphics card), the display ID can change when the system automatically switches between the two. You can disable "Automatic graphics switching" in the Energy Saver preferences pane to test whether this is the cause of your screen-number changes (when disabled, the machine always uses the discrete graphics card).
On such systems, which graphics hardware is in use at a particular time depends on the applications that are currently running and their needs. I believe any use of OpenGL by an application causes a switch to the discrete graphics card, for instance.
If you need to notice when such a switch occurs while your application is running, you can register a callback (CGDisplayRegisterReconfigurationCallback) and examine the changes that occur (kCGDisplayAddFlag, kCGDisplayRemoveFlag, etc.). If you're trying to match a display to one previously used/encountered, you would need to go beyond just comparing display IDs.
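A minimal sketch of registering such a callback via the CoreGraphics C API; the names DisplayReconfigured and RegisterDisplayCallback are my own, and error handling is omitted:

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>

// Invoked by the system whenever the display configuration changes, e.g. when
// automatic graphics switching activates the other GPU.
static void DisplayReconfigured(CGDirectDisplayID displayID,
                                CGDisplayChangeSummaryFlags flags,
                                void *userInfo)
{
    if (flags & kCGDisplayAddFlag)
        printf("Display %u added\n", (unsigned)displayID);
    if (flags & kCGDisplayRemoveFlag)
        printf("Display %u removed\n", (unsigned)displayID);
    // To re-match a display across such changes, compare more than the ID,
    // e.g. CGDisplayVendorNumber(), CGDisplayModelNumber() and
    // CGDisplaySerialNumber() for the affected display.
}

// Call once during application startup.
void RegisterDisplayCallback(void)
{
    CGDisplayRegisterReconfigurationCallback(DisplayReconfigured, NULL);
}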