UILabels, Text Flow and Layouts - Objective-C

Consider the following: I have paragraph data being sent to a view, and it needs to be placed over a background image that has fixed elements at the top and the bottom (Fig. 1).
Fig1.
My thought was to split this into four labels (Fig. 1, example 2). My question here is how I can get the text to flow through labels 1 to 4, given that labels 1, 2 and 3 are of fixed height. I assumed here that label 3 should be populated before label 4, hence the layout in the attached diagram.
Can someone suggest the best way of doing this with maybe an example?
Thanks

Wish I could help more, but I think I can at least point you in the right direction.
First, your idea seems very possible, but it would involve lots of text-size calculations that would be ugly and might not produce ideal results. The way I see it working is a binary search, testing portions of your string with sizeWithFont: until you find the longest piece that fits the label at that size and still looks "right". Then you have to actually break up the string and track it in pieces... it just seems wrong.
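To make that concrete, here is a rough sketch of the binary search over prefix lengths (the method name is my own invention, it uses the pre-iOS-7 sizeWithFont: API mentioned above, and a real version would also need to snap the cut point back to a word boundary):

// Returns the length of the longest prefix of `text` that fits inside `labelSize`.
// Sketch only: assumes UIKit is imported and ignores word boundaries.
- (NSUInteger)longestPrefixOfString:(NSString *)text
                       thatFitsSize:(CGSize)labelSize
                           withFont:(UIFont *)font
{
    NSUInteger low = 0, high = text.length;
    while (low < high) {
        NSUInteger mid = (low + high + 1) / 2;                 // bias upward so we make progress
        NSString *candidate = [text substringToIndex:mid];
        CGSize needed = [candidate sizeWithFont:font
                              constrainedToSize:CGSizeMake(labelSize.width, CGFLOAT_MAX)
                                  lineBreakMode:NSLineBreakByWordWrapping];
        if (needed.height <= labelSize.height) {
            low = mid;                                         // still fits, try a longer prefix
        } else {
            high = mid - 1;                                    // too tall, shorten
        }
    }
    return low;
}

You would run this once per fixed-height label, chop the returned prefix off the string, and hand the remainder to the next label, which is exactly the bookkeeping that gets ugly.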
In iOS 6 (which unfortunately doesn't apply to you right now, but I'll post it as a potential benefit to others), you could probably use one UILabel and an NSAttributedString. There are a couple of options to go with here (I haven't done it, so I'm not sure which would be best), but it seems that if you could format the page with HTML, you could initialize the attributed string that way.
From the docs:
You can create an attributed string from HTML data using the initialization methods initWithHTML:documentAttributes: and initWithHTML:baseURL:documentAttributes:. The methods return text attributes defined by the HTML as the attributes of the string. They return document-level attributes defined by the HTML, such as paper and margin sizes, by reference to an NSDictionary object, as described in “RTF Files and Attributed Strings.” The methods translate HTML as well as possible into structures of the Cocoa text system, but the Application Kit does not provide complete, true rendering of arbitrary HTML.
An alternative here would be to just use the available attributes, setting line indents and such according to the image size. I haven't worked with attributed strings at this level, so the best reference would be the developer videos and the programming guide for NSAttributedString: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/AttributedStrings/AttributedStrings.html#//apple_ref/doc/uid/10000036-BBCCGDBG
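For instance, a small sketch of what that could look like on iOS 6 (the 80-point indent is an arbitrary placeholder for whatever your image needs, and paragraphText/label are assumed to exist already):

// One UILabel whose attributed text carries the indents via a paragraph style.
NSMutableParagraphStyle *style = [[NSMutableParagraphStyle alloc] init];
style.firstLineHeadIndent = 80.0;      // placeholder value, derive from the image size
style.headIndent = 80.0;
style.lineBreakMode = NSLineBreakByWordWrapping;

NSAttributedString *attrText =
    [[NSAttributedString alloc] initWithString:paragraphText
                                    attributes:@{ NSParagraphStyleAttributeName : style,
                                                  NSFontAttributeName : [UIFont systemFontOfSize:14.0] }];
label.numberOfLines = 0;
label.attributedText = attrText;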
For earlier versions of iOS, you'd probably be better off becoming familiar with Core Text. In the end you'll be rewarded with a better-looking result, reusability/flexibility, the list goes on. For that, I would start with the Core Text Programming Guide: https://developer.apple.com/library/mac/#documentation/StringsTextFonts/Conceptual/CoreText_Programming/Introduction/Introduction.html
Maybe someone else can provide some sample code, but I think just looking through the docs will give you less of a headache than trying to calculate 4 labels like that.
EDIT:
I changed the link for CoreText

You have to go with Core Text: create your attributed string and a CTFramesetter with it.
Then you can get a CTFrame for each of your text boxes and draw it in your graphics context.
https://developer.apple.com/library/mac/#documentation/Carbon/Reference/CTFramesetterRef/Reference/reference.html#//apple_ref/doc/uid/TP40005105
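A rough sketch of that flow, drawing one attributed string across several boxes (this is my own illustration, not the answerer's code: the method name is arbitrary, the boxes are passed as NSValue-wrapped CGRects in Core Text's flipped coordinate space, and error handling is omitted):

#import <CoreText/CoreText.h>

- (void)drawString:(NSAttributedString *)attrString
           inBoxes:(NSArray *)boxValues     // NSValue-wrapped CGRects, one per text box
           context:(CGContextRef)ctx
{
    // Core Text draws in a flipped coordinate system compared to UIKit.
    CGContextSetTextMatrix(ctx, CGAffineTransformIdentity);
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    CTFramesetterRef framesetter =
        CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)attrString);

    CFIndex start = 0;
    for (NSValue *boxValue in boxValues) {
        CGPathRef path = CGPathCreateWithRect([boxValue CGRectValue], NULL);
        CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(start, 0), path, NULL);
        CTFrameDraw(frame, ctx);

        // Start the next box where this one left off.
        CFRange visible = CTFrameGetVisibleStringRange(frame);
        start = visible.location + visible.length;

        CFRelease(frame);
        CGPathRelease(path);
        if (start >= (CFIndex)attrString.length) break;   // all text placed
    }
    CFRelease(framesetter);
}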
You can also use a UIWebView

In gym, how should we implement the environment's render method so that Monitor's produced videos are not black?

How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use the gym's wrapper Monitor. This wrapper writes (every once in a while, how often exactly?) to a folder some .json files and an .mp4 file, which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also printing it with matplotlib's imshow). So, maybe Monitor doesn't produce those videos from the render method's return value. So, how exactly does Monitor produce those videos?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired framerate, of course; I don't know whether every framerate will work, though.
Then I changed my render method. Depending on the input parameter mode: if it is rgb_array, the method returns a three-dimensional NumPy array, which is just a 'numpyed' PIL.Image (np.asarray(im), with im being a PIL.Image).
If mode is human, just display the image or do something else to show your environment the way you like it.
As an example, my code is
# assumes `import numpy as np` and `import matplotlib.pyplot as plt` at module level
def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix (the three-dimensional array described above).
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so I can't help with those other ways.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link, which helped me with that.
As a final side note, video saving with a gym Monitor wrapper does not work because of a mis-indentation bug (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do what this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)

MapKit Search Bar Autocomplete iOS 8

I'm trying to implement autocomplete on a search bar on a map using MapKit. I found this:
https://github.com/chenyuan/SPGooglePlacesAutocomplete
It works and is totally perfect, except that it uses UISearchDisplayController, which has been deprecated in iOS 8 and replaced by UISearchController. Is there a way around it, or a simpler way than the one mentioned above?
Thanks in Advance
Please try this new repo: https://github.com/hkellaway/HNKGooglePlacesAutocomplete, which is being actively maintained.
Apple provides a full autocomplete for the entire English language (and others), but if you want to implement your own autocomplete it's not too difficult: you simply need the range of words or phrases that you want to suggest and a way of ranking them in order of frequency of use.
I've implemented a simple autocomplete in one of my projects that centers around a PredictionString class and an AutopredictCoordinator class.
The PredictionString has an NSString property and a float property which relates to the string's frequency of use by the user. The AutopredictCoordinator then holds an array of prediction strings and responds to requests for the most likely completion of any given string.
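As a very rough sketch of how such classes might be laid out (the property and method names below are my own guesses, not the actual code from that project):

#import <Foundation/Foundation.h>

@interface PredictionString : NSObject
@property (nonatomic, copy) NSString *text;
@property (nonatomic, assign) float frequency;   // how often the user has used this string
@end

@implementation PredictionString
@end

@interface AutopredictCoordinator : NSObject
@property (nonatomic, strong) NSMutableArray *predictions;   // array of PredictionString
- (NSString *)bestCompletionForPrefix:(NSString *)prefix;
@end

@implementation AutopredictCoordinator
- (NSString *)bestCompletionForPrefix:(NSString *)prefix
{
    PredictionString *best = nil;
    for (PredictionString *candidate in self.predictions) {
        if ([candidate.text hasPrefix:prefix] &&
            (best == nil || candidate.frequency > best.frequency)) {
            best = candidate;
        }
    }
    return best.text;   // nil if nothing matches the prefix
}
@end

The coordinator's array can then back whatever UI you use to show the suggestions.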

Set window size of QuickLook Plugin

I'm building a QuickLook plugin. I want to change the width of the window that pops up when the user hits the spacebar.
I've read there are two keys in the project's Info.plist file where height and width are customisable, but even if I change those values I can't get the preview window to the size I want.
I don't know what else to try. Any idea?
Thanks!
Thought I'd dig a little on this. I have not tried any of the following suggestions, so nobody get their hopes up. I'll assume you're using the generator callback:
OSStatus (*GeneratePreviewForURL)(
void *thisInterface,
QLPreviewRequestRef preview,
CFURLRef url,
CFStringRef contentTypeUTI,
CFDictionaryRef options
);
Before anything else, you might manually check the options dictionary argument and verify that the kQLPreviewPropertyWidthKey and kQLPreviewPropertyHeightKey keys are indeed mapped to the desired CFNumber values.
Referring to each of these properties, the Apple QuickLook programming guide says:
Note that this property is a hint; Quick Look might set the width
automatically for some types of previews. The value must be
encapsulated in a CFNumber object.
(Edit: If your preview representation is flexible, you might try finding a preview type for which QuickLook honors your size hints, as per the statement above. Just a thought.)
Running nm on the QuickLook framework binary revealed some undocumented kQLPreviewProperty-- constants as well as the aforementioned width and height keys. One that caught my attention was kQLPreviewPropertyAutoSizeKey. Recalling Apple's statement about ignoring the hints to set the size automatically, this might be significant? Following the convention in QuickLook.framework/Headers/QLBase.h, you might try declaring
extern const CFStringRef kQLPreviewPropertyAutoSizeKey;
Then you could try associating a CFNumber 0 with that property key in the options dictionary. There are other undocumented keys of note, such as kQLPreviewPropertyAttributesKey.
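Purely as an illustration of that idea (not something I've verified), the generator could build a properties dictionary carrying the size hints plus the hand-declared auto-size key and pass it back with the preview data; the 800x600 values are placeholders:

#include <QuickLook/QuickLook.h>

extern const CFStringRef kQLPreviewPropertyAutoSizeKey;   // undocumented, declared by hand

OSStatus GeneratePreviewForURL(void *thisInterface, QLPreviewRequestRef preview,
                               CFURLRef url, CFStringRef contentTypeUTI,
                               CFDictionaryRef options)
{
    int width = 800, height = 600, autoSize = 0;          // placeholder values
    CFNumberRef widthRef  = CFNumberCreate(NULL, kCFNumberIntType, &width);
    CFNumberRef heightRef = CFNumberCreate(NULL, kCFNumberIntType, &height);
    CFNumberRef autoRef   = CFNumberCreate(NULL, kCFNumberIntType, &autoSize);

    const void *keys[]   = { kQLPreviewPropertyWidthKey, kQLPreviewPropertyHeightKey,
                             kQLPreviewPropertyAutoSizeKey };
    const void *values[] = { widthRef, heightRef, autoRef };
    CFDictionaryRef properties = CFDictionaryCreate(NULL, keys, values, 3,
                                                    &kCFTypeDictionaryKeyCallBacks,
                                                    &kCFTypeDictionaryValueCallBacks);

    // ... build the preview data, then hand `properties` back, e.g. via
    // QLPreviewRequestSetDataRepresentation(preview, data, contentTypeUTI, properties);

    CFRelease(properties);
    CFRelease(widthRef);
    CFRelease(heightRef);
    CFRelease(autoRef);
    return noErr;
}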
Back to the Info.plist you mentioned, Apple says about those keys QLPreviewWidth and QLPreviewHeight:
This number gives Quick Look a hint for the width (in points) of
previews. It uses these values if the generator takes too long to
produce the preview. (emphasis added)
This is where someone makes the terrible suggestion of calling sleep() in your generator. But I'm perplexed as to why Apple would make following the size hints dependent on the generator latency. (?)
Edit: Also note the above statement says the Info.plist hints must be expressed in points (not pixels), a unit dependent on the user's screen resolution.
Recently I was developing a Quick Look Plugin myself which uses HTML+CSS and faced the same problem.
The solution for me was to test the plugin not within Xcode with qlmanage as the executable, but instead to try the real .qlgenerator from my user library.
When invoking the generator from my user library, the Quick Look window was resized exactly the way I specified in the *-Info.plist.
I've run into the same problem, and may offer some clues: In my case I'm generating an image quick look preview for my custom file format. I initiate the preview context to draw my preview into using
CGContextRef QLPreviewRequestCreateContext(QLPreviewRequestRef preview, CGSize size, Boolean isBitmap, CFDictionaryRef properties);
The curious thing is that if I set isBitmap to true, quick look adjusts the preview panel size to the size specified for the context (up to a certain size at least). But if you set isBitmap to false, it seems to disregard the context size and instead always shows a full size preview panel with the vector graphics image scaled to cover the entire panel.
So, if you use a bitmap graphical preview context, it seems the preview panel will be set to the size of the context you specify. However, I haven't found any way to set the size of the panel when using a vector graphic preview context (which is what I want).
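For reference, a minimal sketch of the bitmap path described above (the 600x400 size and the drawing step are placeholders):

// Ask Quick Look for a bitmap context of a fixed size, draw into it, then flush it back.
CGSize previewSize = CGSizeMake(600.0, 400.0);
CGContextRef ctx = QLPreviewRequestCreateContext(preview, previewSize, true /* isBitmap */, NULL);
if (ctx) {
    // ... draw the preview image into ctx with Core Graphics calls ...
    QLPreviewRequestFlushContext(preview, ctx);
    CGContextRelease(ctx);
}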

Showing past entries on UITextField

I am writing an app in which I have two UITextFields...
Starting Text
Destination Text
Now, once I enter values and hit search (or whatever function I want to call), I want to reuse these values. The app should record and cache them, and show them while typing or upon a button click. Is it possible to show them just like dictionary word suggestions show up? And which is preferable, a table view or a picker view? If there is another option, please let me know.
Definitely use a UITableView. A UIPickerView is used modally most of the time and not for optional suggestions.
A table view is used in Safari and in lots of other apps.
As for how to cache the data, you have lots of options. It also depends on how much data you expect.
One easy way would be to simply use NSArray. You can very easily write an NSArray to disk in a plist file and read it back when you need it.
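For example, a minimal sketch of that plist approach (the file name and the text-field property names are just placeholders):

// Load previously saved entries (empty on first launch), add the new ones, save again.
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docsDir stringByAppendingPathComponent:@"RecentSearches.plist"];

NSMutableArray *recents = [NSMutableArray arrayWithContentsOfFile:path];
if (recents == nil) {
    recents = [NSMutableArray array];
}
[recents addObject:self.startingTextField.text];      // placeholder property names
[recents addObject:self.destinationTextField.text];
[recents writeToFile:path atomically:YES];

When the user starts typing, filter that array (for example with an NSPredicate) and show the matches in the table view.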
Or you could use Core Data, if you expect lots of data and still want high performance. It will be a lot more difficult though to get used to that API if you've never tried it before. Basically you'll need a simple model with one entity called something like SearchEntry that has a single property text. Then you keep adding new instances to your managed object context and can easily filter the existing values.

Create a new object

I want to create a new object so I can instantiate and use it several times.
For example, if I want to create an object that has a label and a button inside, how do I do that? I created a new NSObject subclass, but it has nothing inside; so how do I build everything from scratch, given that (for example) a view controller had a viewDidLoad (obviously, because it has a view)?
thanks!
Your questions lead me to think that you're really just starting out. There's nothing wrong with that, but rather than trying to summarize several megabytes of documentation in a few paragraphs, I'm just going to point you to the iOS Starting Point. I think that you're just trying to create a container that can hold other UI components? If so, use a UIView for that. However, don't jump in and try to get something specific done without first reading through some of the Getting Started documents -- you'll just end up back here, and we'll just point you back to the docs. You might like the Your First iOS Application guide, as that lets you get your feet wet but explains things along the way.
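If it helps to see the shape of it, here is a minimal sketch of that kind of reusable container view (the class name, frames, and usage are arbitrary examples):

#import <UIKit/UIKit.h>

// A reusable view that owns a label and a button.
@interface LabelButtonView : UIView
@property (nonatomic, strong) UILabel *label;
@property (nonatomic, strong) UIButton *button;
@end

@implementation LabelButtonView
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        _label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, frame.size.width, 30)];
        _button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        _button.frame = CGRectMake(0, 40, frame.size.width, 44);
        [self addSubview:_label];
        [self addSubview:_button];
    }
    return self;
}
@end

// Usage, e.g. from a view controller's viewDidLoad:
// LabelButtonView *box = [[LabelButtonView alloc] initWithFrame:CGRectMake(20, 20, 280, 100)];
// [self.view addSubview:box];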