Old-school computer graphics sometimes produced animations (cycles and fades) without actually redrawing anything to video memory, purely by updating the color palette.
Is it possible to do this in an animated gif? That is, optimise (reduce file-size of) the gif by only providing a single frame of (significant) raster content, but have each (delayed) animation frame update colour values in the (global) palette?
The short answer is no.
According to the existing standard, every GIF frame that contains a local palette must also carry its own image data to be displayed with that palette; otherwise the local palette serves no purpose.
One possible solution is to define your own GIF Application Extension block (as Netscape did with its looping extension) to store additional palettes and their time delays. Presumably, those extension blocks would appear after the frames whose data they affect.
The downside of this approach is that no decoder except your own would support palette cycling unless your block type somehow became a new de facto standard.
Nevertheless, your handcrafted GIFs would remain valid for all other GIF decoders (albeit without any palette cycling), as the standard requires them to silently ignore any GIF Application Extensions with IDs unknown to them.
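For illustration, here is a minimal Python sketch of what emitting such a block could look like. Everything application-specific is invented: the "PALCYCLE"/"1.0" identifier and the payload layout (a two-byte delay in 1/100 s followed by flat RGB palette bytes) exist in no standard; only the extension framing (0x21 0xFF introducer, 11-byte identifier block, length-prefixed sub-blocks, 0x00 terminator) comes from the GIF89a spec:

    import struct

    def palette_cycle_block(palette_rgb, delay_cs):
        # 0x21 = Extension Introducer, 0xFF = Application Extension label,
        # 0x0B = size of the 8-byte identifier plus 3-byte auth code.
        # "PALCYCLE" / "1.0" are made up for this sketch.
        header = b"\x21\xFF\x0B" + b"PALCYCLE" + b"1.0"
        payload = struct.pack("<H", delay_cs) + bytes(palette_rgb)
        out = bytearray(header)
        # GIF sub-blocks carry at most 255 bytes each, length-prefixed
        for i in range(0, len(payload), 255):
            chunk = payload[i:i + 255]
            out.append(len(chunk))
            out.extend(chunk)
        out.append(0x00)  # block terminator
        return bytes(out)

A decoder that doesn't recognise the identifier simply skips the sub-blocks, which is exactly the silent-ignore behaviour described above.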
My app has a toolbar which is normally 64 pixels high. On OS X (with a retina display) the toolbar's height still equals 64 (logical) pixels.
If I pass a 64x64 bitmap when creating a wxBitmapButton I get a blurry image (which is expected), so I somehow need to pass a 128x128 bitmap.
When I do, it's just shown cropped, without proper scaling. So how can I use wxBitmapButton to show a high-quality bitmap?
I know this is a rather old question by now, but there is finally a good answer to it if you're using the latest version of wxWidgets from Git, or 3.1.6+ once it has been released.
The answer consists of using wxBitmapBundle, which is basically a smart container for bitmaps to be used at different resolutions/DPI scale factors. In the simplest case, which is sufficient under Mac, you just need to create a bundle from the two bitmaps, to be used at normal (100%) and high (200%) DPI, using wxBitmapBundle::FromBitmaps(bmpNormal, bmpHigh), and pass this object to wxBitmapButton::SetBitmap().
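In wxPython terms (wx.BitmapBundle is wrapped from wxPython 4.2 on), a minimal sketch could look like the following; the file names and frame are placeholders:

    import wx

    app = wx.App()
    frame = wx.Frame(None, title="Bitmap bundle demo")

    # Placeholder assets: 64x64 for 100% DPI, 128x128 for 200% DPI
    bmp_normal = wx.Bitmap("button_64.png")
    bmp_high = wx.Bitmap("button_128.png")

    # The bundle picks the right bitmap for the window's DPI scale factor
    bundle = wx.BitmapBundle.FromBitmaps([bmp_normal, bmp_high])

    button = wx.BitmapButton(frame, wx.ID_ANY)
    button.SetBitmap(bundle)

    frame.Show()
    app.MainLoop()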
This also works with wxToolBar tools, wxStaticBitmap and other (although not yet all) classes. The bitmap bundle can also be created from resources, which is especially convenient under Mac, as you can just have normal.png and normal@2x.png files in the Resources subdirectory of your application bundle (where "bundle" is an unfortunately overloaded term that has nothing to do with wxBitmapBundle).
wxWidgets 3.1 claims to fix the Windows high-DPI issues. It does work, but I see a blurry UI: fonts and bitmaps look stretched.
I went through https://learn.microsoft.com/en-us/windows/desktop/hidpi/high-dpi-desktop-application-development-on-windows
I made the manifest changes to mark my application DPI-aware; that removed the blur, but the application layout went wrong: everything became smaller, to the point of an unusable UI.
Note: the issue is more pronounced on 3K and 4K systems. Hardcoded pixel sizes do not scale (e.g. a 400px-wide button, a 500px-wide panel, etc.).
wxWidgets gives you a (relatively simple) way to make your application work in high DPI, but doesn't -- and can't -- do it automatically for you. In particular, only sizer-based layouts without hardcoded pixel sizes will work correctly, and you do need to provide your own higher-definition artwork.
Concerning the existing pixel values, the simplest (even though not really the best) way to make them work better is to put FromDIP() calls around them.
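For example, in wxPython terms (the widget and sizes here are placeholders), a hardcoded 400px-wide button becomes:

    import wx

    app = wx.App()
    frame = wx.Frame(None, title="DPI demo")
    panel = wx.Panel(frame)

    # Before: size=wx.Size(400, 40) means 400 physical pixels everywhere.
    # After: FromDIP() scales the value by the window's DPI factor,
    # so it stays 400 logical pixels on a 200% display.
    button = wx.Button(panel, label="OK", size=panel.FromDIP(wx.Size(400, 40)))

    frame.Show()
    app.MainLoop()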
Also note that you don't need to do anything special for pixel values in XRC, they're already interpreted as being resolution-independent pixels and are scaled according to the DPI automatically.
I have a tvtk SceneEditor for a mayavi scene in my traitsui application. When defining this editor, I asked for a size of 500x500.
Because of the mayavi toolbar, the scene editor itself understands that it has a smaller size than this under normal circumstances:
>>> self.scene.scene_editor.get_size()
wx.Size(500, 468)
And when the window is enlarged so that the editor takes up much more space, it understands this too:
>>> self.scene.scene_editor.get_size()
wx.Size(500, 781)
However, if the window is made much smaller, the editor refuses to take up less space (even though the contents could easily be rescaled), because it was specified to take up 500x500. The window can still be resized, but part of the display is simply cut off until it is enlarged again.
I want to know if there is a way to a) ask how much of the editor is currently displayed on the screen (instead of the minimum size the editor is willing to display) and b) make the editor default to 500x500 but be willing to ask its contents to rescale themselves if it is made smaller.
I am using the wxpython backend.
edit: It is also important that the scene is contained within a layout='split' Group -- after exploring this question and running into the enormous number of sizers that wx generates when adding traitsui widgets, I realized this might matter as well.
After hacking around for a few days, I have an answer to part a) (though it is hacky, not necessarily general-purpose, and may be specific to the layout='split' layout).
self.scene.scene_editor.control.Parent.Parent.Size describes the size of the viewport in which the editor is being displayed, at least with layout='split'; I can easily imagine it being different for other layout types (which I didn't test).
Armed with this information, I managed to write a wrapper that resizes the editor to be at least this small before making snapshots.
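For reference, a stripped-down sketch of that wrapper. It is hedged: it assumes the wx backend with layout='split' (the Parent.Parent chain), and that the scene editor exposes the pyface Scene get_size/set_size/save methods:

    def save_snapshot(scene, filename):
        # Parent.Parent is specific to layout='split' on the wx backend
        editor = scene.scene_editor
        visible = editor.control.Parent.Parent.Size  # the visible viewport
        size = editor.get_size()
        # shrink the editor to the visible area so the snapshot isn't cropped
        editor.set_size((min(size.width, visible.width),
                         min(size.height, visible.height)))
        editor.save(filename)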
More gory details:
The sizer associated with self.scene.scene_editor.control isn't the right one, probably because that sizer refers to the window containing the editor, whereas the containing splitter widget manages the window's size itself and is allowed to "cover" that window, ignoring its sizer.
I found that the process of adding traitsui widgets defines so many boxes (assuming a nontrivial number of widgets) that the result is an enormous glut of wx Windows, with their associated wx Sizers, all of which depend on their child widgets to determine the current size, so it is extremely difficult to control things like this. This is probably why enaml exists: so that these constraints can be explicitly specified by the user. Maybe someday I will convert my program to use enaml, but that seems like a lot of effort for something that currently works pretty well.
I still don't have an answer to part b), so I am leaving this question open. It would be nice if there were a way to tell the mess of wx sizers at construction time that resizing this window below its initial size is allowed, but there probably isn't one.
We've now got 4 resolutions to support, and my app needs at least 6 full-screen background images to be pretty. I don't want to break the bank on megabytes of images.
I see guides online about loading PDFs as images and about custom SVG libraries, but no discussion of the practical trade-offs.
Here's the question: considering rendering speed and file size, what is the best way to use vector images in iOS? And in addition, are there any practical caching or other considerations one should make in real-world app development?
Something to consider for simple graphics, such as the type of thing used for backgrounds, etc., is just to render them at runtime using CG.
For example, in one of our apps, instead of including the typical repeating background tile image in all the required resolutions, we draw it once into a CGPatternRef, then convert it to a UIColor, at which point things become simple.
We still use graphic files for complex things, but for anything that's simple in nature, we just render it at runtime and cache the result, so we get resolution independence without gobs of image files. It's also made maintenance quite a bit easier.
I am writing a simple video messenger-like application, and therefore I need to get frames of some compromise size: small enough to fit into the available bandwidth, but without distorting the captured image.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I would just like to get "raw bitmaps".)
The problem is that these new iSight cameras give me huge frames.
Luckily, these frame-capturing classes (QTCaptureVideoPreviewOutput) provide a setPixelBufferAttributes method that lets me specify what kind of frames I would like to get. If I am lucky enough to guess a frame size the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk) and, most likely, a non-proportional one.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes." Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings and let the user try them to see what works? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, but not an iSight. I assume not every user is using an iSight either, and I am sure that even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, capture a few frames, see what sizes it generates, and at least then I would have the proportions? This looks like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes; more likely you are setting how the frames are processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size. So, I would not say it's a hack.
A more natural way would be to make your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the hard, way.