Set window size of QuickLook plugin (Objective-C)

I'm building a QuickLook plugin and want to change the width of the window that pops up when the user hits the spacebar.
I've read there are two keys in the project's Info.plist where height and width are customisable, but even when I change those values I can't get the preview window to the size I want.
I don't know what else to try. Any ideas?
Thanks!

Thought I'd dig a little into this. I have not tried any of the following suggestions, so nobody get their hopes up. I'll assume you're using the generator callback:
OSStatus (*GeneratePreviewForURL)(
    void *thisInterface,
    QLPreviewRequestRef preview,
    CFURLRef url,
    CFStringRef contentTypeUTI,
    CFDictionaryRef options
);
Before anything else, you might manually check the options dictionary argument and verify that the kQLPreviewPropertyWidthKey and kQLPreviewPropertyHeightKey keys are indeed mapped to the desired CFNumber values.
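For example, a quick sanity check inside the generator might look like this (just a sketch that logs the hints; it assumes nothing beyond the documented width/height keys):

#include <CoreFoundation/CoreFoundation.h>
#include <QuickLook/QuickLook.h>

OSStatus GeneratePreviewForURL(void *thisInterface, QLPreviewRequestRef preview,
                               CFURLRef url, CFStringRef contentTypeUTI,
                               CFDictionaryRef options)
{
    // Log whatever width/height hints QuickLook actually passed in, if any.
    if (options != NULL) {
        CFNumberRef width  = (CFNumberRef)CFDictionaryGetValue(options, kQLPreviewPropertyWidthKey);
        CFNumberRef height = (CFNumberRef)CFDictionaryGetValue(options, kQLPreviewPropertyHeightKey);
        if (width  != NULL) CFShow(width);
        if (height != NULL) CFShow(height);
    }
    // ... generate the preview as usual ...
    return noErr;
}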
Referring to each of these properties, the Apple QuickLook programming guide says:
Note that this property is a hint; Quick Look might set the width
automatically for some types of previews. The value must be
encapsulated in a CFNumber object.
(Edit: If your preview representation is flexible, you might try finding a preview type for which QuickLook honors your size hints, as per the statement above. Just a thought.)
Running nm on the QuickLook framework binary revealed some undocumented kQLPreviewProperty… constants as well as the aforementioned width and height keys. One that caught my attention was kQLPreviewPropertyAutoSizeKey. Recalling Apple's statement that it may ignore the hints and set the size automatically, this might be significant. Following the convention in QuickLook.framework/Headers/QLBase.h, you might try declaring
extern const CFStringRef kQLPreviewPropertyAutoSizeKey;
Then you could try associating a CFNumber 0 with that property key in the preview properties dictionary (sketched below). There are other undocumented keys of note, such as kQLPreviewPropertyAttributesKey.
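Purely as an experiment (the key is undocumented, and pdfData/kUTTypePDF are placeholders for whatever data and UTI your generator actually returns), that might look like:

extern const CFStringRef kQLPreviewPropertyAutoSizeKey; // undocumented; found via nm

// Speculative: associate CFNumber 0 with the auto-size key when handing
// the preview data back to Quick Look.
int zero = 0;
CFNumberRef autoSize = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &zero);
CFStringRef keys[]   = { kQLPreviewPropertyAutoSizeKey };
CFTypeRef   values[] = { autoSize };
CFDictionaryRef properties = CFDictionaryCreate(kCFAllocatorDefault,
                                                (const void **)keys,
                                                (const void **)values, 1,
                                                &kCFTypeDictionaryKeyCallBacks,
                                                &kCFTypeDictionaryValueCallBacks);
QLPreviewRequestSetDataRepresentation(preview, pdfData, kUTTypePDF, properties);
CFRelease(properties);
CFRelease(autoSize);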
Back to the Info.plist you mentioned: about the QLPreviewWidth and QLPreviewHeight keys, Apple says:
This number gives Quick Look a hint for the width (in points) of
previews. It uses these values if the generator takes too long to
produce the preview. (emphasis added)
This is where someone makes the terrible suggestion of calling sleep() in your generator. But I'm perplexed as to why Apple would make honoring the size hints dependent on generator latency.
Edit: Also note the above statement says the Info.plist hints must be expressed in points (not pixels), a unit dependent on the user's screen resolution.

Recently I was developing a Quick Look plugin myself which uses HTML+CSS, and I faced the same problem.
The solution for me was to test the plugin not from within Xcode with qlmanage as the executable, but instead to try the real .qlgenerator from my user library (copy the built generator into ~/Library/QuickLook, then run qlmanage -r to reload the generators).
When invoking the generator from my user library, the Quick Look window was resized exactly the way I specified in the *-Info.plist.

I've run into the same problem, and can offer some clues: in my case I'm generating an image Quick Look preview for my custom file format. I create the preview context to draw the preview into using
CGContextRef QLPreviewRequestCreateContext(QLPreviewRequestRef preview,
                                           CGSize size, Boolean isBitmap,
                                           CFDictionaryRef properties);
The curious thing is that if I set isBitmap to true, Quick Look adjusts the preview panel size to the size specified for the context (up to a certain size, at least). But if you set isBitmap to false, it seems to disregard the context size and instead always shows a full-size preview panel with the vector-graphics image scaled to cover the entire panel.
So, if you use a bitmap graphical preview context, it seems the preview panel will be set to the size of the context you specify. However, I haven't found any way to set the size of the panel when using a vector graphic preview context (which is what I want).
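For reference, the bitmap path that respects the requested size might look roughly like this (the size and the drawing code are placeholders):

CGSize size = CGSizeMake(600.0, 400.0); // desired preview size (placeholder)
CGContextRef ctx = QLPreviewRequestCreateContext(preview, size, true /* isBitmap */, NULL);
if (ctx != NULL) {
    // ... draw the preview into ctx with CoreGraphics ...
    QLPreviewRequestFlushContext(preview, ctx);
    CGContextRelease(ctx);
}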

In gym, how should we implement the environment's render method so that Monitor's produced videos are not black?

How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use the gym's wrapper Monitor. This wrapper writes (every once in a while, how often exactly?) to a folder some .json files and an .mp4 file, which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also printing it with matplotlib's imshow). So, maybe Monitor doesn't produce those videos from the render method's return value. So, how exactly does Monitor produce those videos?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired framerate, of course; I don't know if every framerate will work, though.
Then I changed my render method. Depending on the mode parameter: if it is rgb_array, it returns a three-dimensional NumPy array, which is just a "numpyed" PIL.Image (np.asarray(im), with im being a PIL.Image).
If mode is human, just print the image or do something to show your environment in the way you like it.
As an example, my code is
def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so I can't help with those.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link that helped me with that.
As a final side note, video saving with a gym Monitor wrapper does not work due to a mis-indentation bug (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do what this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)

CoreText equivalent for [NSFont fontWithName]

In our app we are storing fonts for custom objects as strings containing the font name (as retrieved from the font panel) and instantiate fonts using
+ (NSFont *)fontWithName:(NSString *)fontName size:(CGFloat)fontSize
Now when loading or importing files we'd like to check whether the font name will actually resolve on the current system.
Due to certain design restrictions we need to perform the check in plain old C/C++, i.e. these days using CoreText.
However, there doesn't seem to be an equivalent CoreText API call that works the way fontWithName:size: does:
CTFontCreateWithName always returns a font, as it uses a fallback strategy.
CTFontCopyFullName appears to use different names than what is accepted/used by NSFont.
So basically the question is:
How can we use CoreText functionality to check whether a font exists on the current system and the font can be instantiated successfully using [NSFont fontWithName:] later..?
It appears that CTFontCopyPostScriptName() will always return a valid result for any string that is understood by [NSFont fontWithName:], and vice versa.
At least in my experiments across OS X 10.8-10.10, using both valid and made-up font names, the results of CTFontCopyPostScriptName() could always be used to verify that a font name will be valid for NSFont.
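A minimal sketch of that check as I read this answer (the round-trip comparison, the case-insensitive compare, and the helper name FontNameResolves are my own assumptions):

#include <CoreText/CoreText.h>

// Returns true if fontName resolves to an actual font rather than a fallback,
// by comparing the created font's PostScript name against the requested name.
static bool FontNameResolves(CFStringRef fontName)
{
    CTFontRef font = CTFontCreateWithName(fontName, 12.0, NULL);
    if (font == NULL) return false;
    CFStringRef psName = CTFontCopyPostScriptName(font);
    bool matches = (psName != NULL &&
                    CFStringCompare(psName, fontName,
                                    kCFCompareCaseInsensitive) == kCFCompareEqualTo);
    if (psName != NULL) CFRelease(psName);
    CFRelease(font);
    return matches;
}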
Edit #1
Some experimentation shows that it's a good idea to also add a fallback with CTFontCopyFamilyName() to be on the safe side.
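That fallback might slot into the same helper above, before the font is released (again, just a sketch):

// Also accept the name if it matches the font's family name.
CFStringRef familyName = CTFontCopyFamilyName(font);
bool familyMatches = (familyName != NULL &&
                      CFStringCompare(familyName, fontName,
                                      kCFCompareCaseInsensitive) == kCFCompareEqualTo);
if (familyName != NULL) CFRelease(familyName);
matches = matches || familyMatches;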

Resizing in Dropzone.js?

Is there an example of the resize function for dropzone.js? I don't really understand how it works, it says:
"Resize is the function that gets called to create the resize information. It gets the file as first parameter and must return an object with srcX, srcY, srcWidth and srcHeight and the same for trg*. Those values are going to be used by ctx.drawImage()."
But I don't really get how to use it. So far I'm resizing the images on server-side, but I'd like to do it client-side and I think this might help. Any other solutions using dropzone.js if not this one?
I believe the built-in resize function in Dropzone.js is what's used to create the thumbnail, not to resize the photo client-side, per se. There might be some way you could leverage it by setting the thumbnail dimensions to what you want to save to the server and overwriting the file being saved with the thumbnail, but I'd have to hack about for a spell to offer you any code suggestions for that.

UILabels, Text Flow and Layouts

Consider the following: I have paragraph data being sent to a view, which needs to be placed over a background image that has fixed elements at the top and the bottom (Fig. 1).
[Fig. 1: layout diagram]
My thought was to split this into four labels (Fig. 1, example 2). My question here is how I can get the text to flow through labels 1-4, given that labels 1, 2 & 3 are of fixed height. I assumed here that label 3 should be populated prior to label 4, hence the layout in the attached diagram.
Can someone suggest the best way of doing this with maybe an example?
Thanks
Wish I could help more, but I think I can at least point you in the right direction.
First, your idea seems very possible, but it would involve lots of text-size calculations that would be ugly and might not produce ideal results. The way I see it working is a binary search, testing portions of your string with sizeWithFont: until you get the best guess for how much of the string will fit into the label at that size and still look "right". Then you have to actually break up the string and track it in pieces... it just seems wrong.
In iOS 6 (which unfortunately doesn't apply to you right now, but I'll post it as a potential benefit to others), you could probably use one UILabel and an NSAttributedString. There would be a couple of options to go with here (I haven't done it, so I'm not sure which would be the best), but it seems that if you could format the page with HTML, you can initialize the attributed string that way.
From the docs:
You can create an attributed string from HTML data using the initialization methods initWithHTML:documentAttributes: and initWithHTML:baseURL:documentAttributes:. The methods return text attributes defined by the HTML as the attributes of the string. They return document-level attributes defined by the HTML, such as paper and margin sizes, by reference to an NSDictionary object, as described in “RTF Files and Attributed Strings.” The methods translate HTML as well as possible into structures of the Cocoa text system, but the Application Kit does not provide complete, true rendering of arbitrary HTML.
An alternative here would be to just use the available attributes, setting line indents and such according to the image size. I haven't worked with attributed strings at this level, so the best references would be the developer videos and the programming guide for NSAttributedString: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/AttributedStrings/AttributedStrings.html#//apple_ref/doc/uid/10000036-BBCCGDBG
For lesser versions of iOS, you'd probably be better off becoming familiar with CoreText. In the end you'll be rewarded with a better looking result, reusability/flexibility, the list goes on. For that, I would start with the CoreText programming guide: https://developer.apple.com/library/mac/#documentation/StringsTextFonts/Conceptual/CoreText_Programming/Introduction/Introduction.html
Maybe someone else can provide some sample code, but I think just looking through the docs will give you less of a headache than trying to calculate 4 labels like that.
EDIT:
I changed the link for CoreText
You have to go with CoreText: create your attributed string and a CTFramesetter with it.
Then you can get a CTFrame for each of your text boxes and draw it in your graphics context, as sketched below.
https://developer.apple.com/library/mac/#documentation/Carbon/Reference/CTFramesetterRef/Reference/reference.html#//apple_ref/doc/uid/TP40005105
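A rough sketch of that flow (boxRects, attrString and context are placeholders for your four label rectangles, your NSAttributedString and your CGContextRef; remember that CoreText draws in flipped coordinates on iOS):

// One framesetter for the whole string; pour the text into successive boxes,
// each frame continuing where the previous one's visible range ended.
CTFramesetterRef framesetter =
    CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);
CFIndex start = 0;
for (int i = 0; i < 4; i++) {
    CGPathRef path = CGPathCreateWithRect(boxRects[i], NULL);
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                                CFRangeMake(start, 0), path, NULL);
    CTFrameDraw(frame, context);
    CFRange visible = CTFrameGetVisibleStringRange(frame);
    start = visible.location + visible.length;
    CFRelease(frame);
    CGPathRelease(path);
}
CFRelease(framesetter);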
You can also use a UIWebView.

Printing without an NSView

Currently I'm writing an app for OSX which will eventually need to be ported to iOS.
The data that needs to be printed is being drawn via CoreGraphics into a PDF context - that is working perfectly.
I've been reading the Apple dev documentation on printing in both iOS and OSX, and, ironically, it actually seems printing from iOS will be easier.
On iOS, UIPrintInteractionController's printingItem property can take an NSData object containing PDF data and print that. Looks like it should be fairly straight-forward.
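For comparison, my understanding is that the iOS side would be roughly this (a sketch; pdfData stands in for the NSData holding the generated PDF):

UIPrintInteractionController *controller = [UIPrintInteractionController sharedPrintController];
UIPrintInfo *printInfo = [UIPrintInfo printInfo];
printInfo.outputType = UIPrintInfoOutputGeneral; // reasonable default for mixed text/graphics
controller.printInfo = printInfo;
controller.printingItem = pdfData; // NSData containing the PDF
[controller presentAnimated:YES completionHandler:nil];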
OSX, on the other hand, looks like it requires using the NSPrintOperation class, but it seems the only way to get data into an instance is via an NSView (+printOperationWithView: or +printOperationWithView:printInfo:).
Seeing as the content is already formatted and paginated, it seems rather pointless to have to re-draw the PDF data into something like an NSView.
Could there possibly be another way of achieving this that I've missed?
This code is by no means complete, but for anyone who comes across this later, this is basically how you can print directly from an NSData stream:
#define kMimeType @"application/pdf"
#define kPaperType @"A4"

- (void)printData:(NSData *)incomingPrintData {
    CFArrayRef printerList; // will soon be an array of PMPrinter objects
    PMServerCreatePrinterList(kPMServerLocal, &printerList);

    PMPrinter myPrinter;
    // iterate over printerList and determine which one you want; assign it to myPrinter

    PMPrintSession printSession;
    PMPrintSettings printSettings;
    PMCreateSession(&printSession);
    PMCreatePrintSettings(&printSettings);
    PMSessionDefaultPrintSettings(printSession, printSettings);

    CFArrayRef paperList;
    PMPrinterGetPaperList(myPrinter, &paperList);
    PMPaper usingPaper;
    // iterate over paperList to set usingPaper to the desired paper

    PMPageFormat pageFormat;
    PMCreatePageFormatWithPMPaper(&pageFormat, usingPaper);

    CGDataProviderRef dataProvider =
        CGDataProviderCreateWithCFData((CFDataRef)incomingPrintData);
    PMPrinterPrintWithProvider(myPrinter, printSettings, pageFormat,
                               (CFStringRef)kMimeType, dataProvider);
}
(via Core Printing Reference)
Beware that this code lacks memory management, so you will need to use the PMRetain() and PMRelease() functions as well as the CoreFoundation memory-management functions; see the sketch below.
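For completeness, the cleanup might look roughly like this (my sketch, following the usual Create/Get ownership rules):

// Objects returned by Create functions are owned by the caller.
CGDataProviderRelease(dataProvider);
PMRelease(pageFormat);
PMRelease(printSettings);
PMRelease(printSession);
CFRelease(printerList); // from PMServerCreatePrinterList
// paperList comes from a Get function, so it is not released here.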
If anyone can tell me how to get data from the OSX print dialogue into a form I can use in this method, I'll accept their answer instead of this one. That is, without using Carbon functions.