Custom iOS address book: need advice on data structures and performance (Objective-C)

I am currently developing a VoIP app and I am really stuck with the address book.
Because of the custom design, the native address book does not fit in my app. Besides, I want to add some extra data not present in the native address book. But this leads to some problems, which I've separated into two sections:
1. Data structures:
In a section of my app I need to show the user all his address book contacts with additional information (whether the contact also has the app installed and is online, for example).
Right now I'm getting all the info from the Address Book API and loading it directly into an array (which is accessed by tableView:cellForRowAtIndexPath:), but not displaying the custom information I was talking about. I don't know if it's worthwhile to store all the address book info in an SQLite database (where I'd be able to add the extra information easily) or if I should store only that extra information in a file or something.
The biggest problem with storing it in a database is that contact pictures are large enough to make the database waste a lot of memory. I thought of storing only a reference (the ABRecordID) and then gathering the related info from the address book instead of the database, but Apple's Address Book API documentation says the ABRecordID is not guaranteed to remain the same, so my data could end up attached to the wrong contact.
Any idea?
2. Performance:
The second big problem with this custom address book is that iOS table views are very 'manual' compared to Android's, for example. You need to have the data stored somewhere so that when tableView:cellForRowAtIndexPath: gets called you can return it. You can also load the data inside this method, but that makes it very slow.
The problem here is that preloading all the data in memory is dangerous, because a person may have 40 contacts or 2,000 (and maybe a picture for each of them, which consumes far more memory). If the iOS device runs out of memory, the system will kill the app. The database approach has no memory problems, but running a query for each cell as it appears is unacceptably slow.
Again, I need ideas for this. I can't find a good tradeoff between performance and memory consumption.
Please don't ask for code, because I'm not allowed to post it. I'd really appreciate your advice. Thank you in advance!

Data structures:
Along with the record ID you should store the name, phone number and email address - nothing else in your data store. If one of the three values changes and the other two remain the same, update the changed value. The record ID can change for many contacts at once during a restore of a device, but the name, email and phone won't. If the user changes a name, email or phone number, they won't do it across many contacts at once. Once in a while you'll end up with a record ID that doesn't match the stored email and phone (the contact may have changed employers, say); in that case show a list of close matches and ask the user to select one.
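A minimal sketch of that matching logic on iOS, assuming ARC and a hypothetical MyStoredContact model with recordID, name and phone properties:

#import <AddressBook/AddressBook.h>

ABRecordRef MYResolveContact(ABAddressBookRef addressBook, MyStoredContact *stored) {
    // Fast path: the saved ABRecordID still points at a person whose name matches.
    ABRecordRef person = ABAddressBookGetPersonWithRecordID(addressBook, stored.recordID);
    if (person) {
        NSString *name = CFBridgingRelease(ABRecordCopyCompositeName(person));
        if ([name isEqualToString:stored.name]) return person;
    }
    // Slow path: the ID changed (e.g. after a restore). Search by name and
    // verify against the stored phone number.
    NSArray *matches = CFBridgingRelease(
        ABAddressBookCopyPeopleWithName(addressBook, (__bridge CFStringRef)stored.name));
    for (id candidate in matches) {
        ABRecordRef rec = (__bridge ABRecordRef)candidate;
        ABMultiValueRef phones = ABRecordCopyValue(rec, kABPersonPhoneProperty);
        if (!phones) continue;
        for (CFIndex i = 0; i < ABMultiValueGetCount(phones); i++) {
            NSString *phone = CFBridgingRelease(ABMultiValueCopyValueAtIndex(phones, i));
            if ([phone isEqualToString:stored.phone]) {
                CFRelease(phones);
                return rec; // Name and phone agree - re-adopt this record.
            }
        }
        CFRelease(phones);
    }
    return NULL; // No confident match - show the close-match picker instead.
}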
As far as someone having thousands of contacts goes, I would use paging. Load 100 or 200 at a time into an array, with the row currently displayed in the table view at the middle of your array. Once the user scrolls 20-30 records, refresh the records in your array from the address book. You're going to spend a lot of time re-reading data just to keep the collection up to date, but you should be able to hold quite a few records as long as you're not keeping the user images in memory; for those, let the table view drive you. Get the image and assign it to the cell when you're notified that the cell is about to become visible. Even then I would put a short wait before loading the image, because if the user is scrolling fast the cell will just fly by, you'll be notified that the cell scrolled out, and you can skip (or release) the image data. If the user is scrolling slowly, the short wait will pass and the image will show up for each cell.
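A minimal sketch of the delayed image load, assuming ARC plus a hypothetical recordIDForIndexPath: helper and addressBook property on the controller:

- (void)tableView:(UITableView *)tableView
  willDisplayCell:(UITableViewCell *)cell
forRowAtIndexPath:(NSIndexPath *)indexPath {
    ABRecordID recordID = [self recordIDForIndexPath:indexPath]; // hypothetical
    // Tag the cell; if the user scrolls fast, the cell is reused (and the
    // tag changed) before the delayed block fires, so we skip the load.
    cell.tag = recordID;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.3 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        if (cell.tag != recordID) return; // Cell scrolled out and was reused.
        ABRecordRef person = ABAddressBookGetPersonWithRecordID(self.addressBook, recordID);
        if (!person || !ABPersonHasImageData(person)) return;
        NSData *data = CFBridgingRelease(
            ABPersonCopyImageDataWithFormat(person, kABPersonImageFormatThumbnail));
        cell.imageView.image = [UIImage imageWithData:data];
        [cell setNeedsLayout];
    });
}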
I don't know how much metadata you're planning to store in the objects wrapping the contacts, but you should create two tables for the contact object: one with 3-4 indexed columns to allow fast querying, and a second holding the rest, loaded only when the user views the contact in a detail view. You can't fit much into a table view cell anyway, unless you're on the iPad.
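For example, the split could look like this in SQLite (hypothetical table and column names, using the SQLite C API directly):

#import <sqlite3.h>

static void MYCreateContactTables(sqlite3 *db) {
    const char *sql =
        // Hot table: small, indexed columns read for every table view cell.
        "CREATE TABLE IF NOT EXISTS contact ("
        "  record_id INTEGER PRIMARY KEY,"
        "  name      TEXT,"
        "  phone     TEXT,"
        "  email     TEXT);"
        "CREATE INDEX IF NOT EXISTS contact_name ON contact(name);"
        // Cold table: everything else, loaded only for the detail view.
        "CREATE TABLE IF NOT EXISTS contact_detail ("
        "  record_id INTEGER PRIMARY KEY REFERENCES contact(record_id),"
        "  notes     TEXT,"
        "  extra     BLOB);";
    sqlite3_exec(db, sql, NULL, NULL, NULL);
}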
Hope that helps.

How to Collect dijit/form/combobox Selected Values in Repeat Control

An XPage is used to display the number of points a person has collected, and the number of points remaining (see below).
I have a repeat control which gets a collection of documents meeting specific criteria. The last column in the control contains five dijit/form/comboboxes, which are displayed or hidden according to the number of fields on each document that contain data.
The layout contains gift cards worth a certain number of points, and the person can select how many of each gift card they want, e.g. with columns:
Company | Available in Values of | Points Required | Quantity Requested
The Quantity Requested column contains the dijit/form/comboboxes. As the person selects values in a combobox, I want the number of points remaining to be recalculated.
The onChange event of the dijit/form/comboboxes calls a function in an output script which calls an RPC, which in turn calls an SSJS function. The SSJS function cycles through the documents displayed in the repeat control, gathering the points-required information. I then wanted it to also grab the Quantity Requested. I understand from a previous posting that, because of the way the dijit/form/combobox is rendered, I can only get the value using CSJS with dijit.byId, perhaps putting the value in a hidden field and retrieving it from there.
I can't seem to wrap my head around how I will do this when the repeat control makes it possible for there to be many instances of combobox1, combobox2, etc.
The XPage is not bound to a form, because all the items are just calculated on the fly and then discarded.
What is the best way to do this?
The JSON RPC service can't interact with any changes made in the browser, see https://www.intec.co.uk/json-rpc-service-component-tree-manipulation-openlog/. This could be the cause of your problems.
You may be able to get around it by triggering a partial refresh (POST) before calling the JSON RPC. In theory that might work, because the component tree (server-side map of the XPage) would get updated by the partialRefreshPost and the updates picked up by the JSON RPC. It's possible though that the Restore View picks up a version of the XPage other than the one for the browser, I don't know. I've never investigated that.
It's been a while since I've worked with server-side JavaScript; I have been doing it the managed-bean way with ActionListeners. If you have the data in the UI, can you avoid server-side processing and do it client-side?
You can also use the client-side XSP object, e.g. XSP.setSubmittedValue, to have a key/value pair sent with your POST request to the server side. You can only have one, but it can be JSON or any other value you set from client-side JavaScript.
I figured out how to do this. If anyone wants the code, let me know and I'll provide it.

Is there a way to get Address Book contact IDs from Sync Services contact IDs?

When getting the modified contacts from Sync Services through the applyChange:forEntityName:remappedRecordIdentifier:formattedRecord:error: method, the IDs in the address book are of the form 2C13E20E-6B24-4090-81FA-7A1E8B28119B, and even though some IDs of this kind are present in the ISyncChange * object, those are not actual contact IDs that can be found in the address book...
Is there a way to find out from Sync Services what a certain contact's ID is in the Address Book?
The reason for asking is that when saving large pictures for contacts in the Address Book, Sync Services does not save those pictures in their internal data storage. Therefore, contacts that have been modified or added with a large picture will be returned by Sync Services without the picture, basically offering incomplete information.
I need to get the Address Book ID, so that I can look up the contact's picture in ~/Library/Application Support/Address Book/Images/
Thanks!
It's a bad idea to rely on the Address Book ID relating to an image in ~/Library/Application Support/Address Book/Images/ - you'd be better off finding an API that provides the data you want to work with, because you aren't guaranteed that the image will be there now, or later (after an upgrade, this could all change!).
After a small amount of research, it appears that the API you want is documented here: http://developer.apple.com/library/mac/#documentation/UserExperience/Conceptual/AddressBook/Tasks/AccessingData.html#//apple_ref/doc/uid/20001023-103617
It's a little unwieldy because you need to understand the ABImageClient protocol and provide a callback, but I don't think it's that bad. This approach is much better than reading the image files directly: it's the Apple-sanctioned way of getting this data and you won't have to worry about it breaking in the future.
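For illustration, here's a minimal sketch of that callback API on OS X, assuming ARC; MYImageFetcher and fetchImageForPerson: are hypothetical names:

#import <Cocoa/Cocoa.h>
#import <AddressBook/AddressBook.h>

// Hypothetical fetcher that adopts the ABImageClient protocol.
@interface MYImageFetcher : NSObject <ABImageClient>
- (void)fetchImageForPerson:(ABPerson *)person;
@end

@implementation MYImageFetcher

- (void)fetchImageForPerson:(ABPerson *)person {
    // Returns immediately with a tag; the image data arrives asynchronously
    // in the ABImageClient callback below.
    NSInteger tag = [person beginLoadingImageDataForClient:self];
    NSLog(@"started image load, tag %ld", (long)tag);
}

// ABImageClient callback; data is nil if the person has no image.
- (void)consumeImageData:(NSData *)data forTag:(NSInteger)tag {
    if (data) {
        NSImage *image = [[NSImage alloc] initWithData:data];
        // ... hand the image to whatever requested this tag ...
        (void)image;
    }
}

@end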

How to decide whether to split up a VB.Net application and, if so, how to split it up?

I have 2 1/2 years' experience of VB.Net, mostly self-taught, so please bear with me if I still seem rather noobish and don't know some of the basics. I would recommend you grab a cup of tea before starting on this, as it appears to have got quite long...
I currently have a rather large application (VB.Net website) of over 15,000 lines of code at the last count. It does not do retail or anything particularly complex like that - it is literally just a wholesale viewing website with an admin frontend, a catalogue / catalogue management system and a pageview system.
I don't really know much about how .Net applications work in the background - whether they are all loaded on the same thread or if each has its own thread... I just know how to code them, or at least like to think I do... :-)
Basically my application is set up as follows:
There are two different areas - the customer area and the administration frontend.
The main part of the customer frontend is the Catalogue. The MasterPage will load a list of products but that's all, and this is common to all the customer frontend pages.
I tend to work on only one or several parts of the application at a time before uploading the changes. So, for example, I may alter the hierarchy of the Catalogue and change the Catalogue page to match the hierarchy change whilst leaving everything else alone.
The pageview database is getting really quite large and so it is getting rather slow when the application is first requested due to the way it works.
The application timeout is set to 5 minutes. I don't know how to change it; I have even asked about it on here, and I seem to remember the solution was quite complex and I was advised not to change it. But if a customer requests the application 5 minutes after the last page view, the application reloads from scratch, which means a very slow page load after any period of more than 5 minutes of inactivity.
I am not sure if this needs consideration to determine how best to split the application up, if at all, but each part of the catalogue system is set up as follows:
A Manager class at the top level, which is used by the admin frontend to add, edit and remove items of the specified type, and by the customer frontend to retrieve a list of items of the specified type. For example the "RangeManager" will contain a list of product "Ranges" and will be used to interact with these from the customer frontend.
An Item class, for example Range, which contains a list of Attributes. For example Name, Description, Visible, Created, CreatedBy and so on. The form for adding / editing loops through these to display relevant controls for the administrator. For example a Checkbox for BooleanAttribute.
An Attribute class, which can be of type StringAttribute, BooleanAttribute, IntegerAttribute and so on. There are also custom Attributes (not just datatypes) such as RangeAttribute, UserAttribute and so on. These are given a data field which is used to get a piece of data specific to the item it is contained in when it is first requested. Basically the Item is given a DataRow which is stored and accessed by Attributes only when they are first requested.
When an item is requested from a specific manager, the manager will loop through all the items in the database and create a new instance of the item class. For example, when a Range is requested from the RangeManager, the RangeManager will loop through all of the DataRows in the Ranges table and create a new instance of Range for each one. As stated above, it simply creates a new instance with the DataRow rather than loading all the data into it there and then; the Attributes themselves fetch the relevant data from the DataRow as and when they're first requested.
It just seems a tad stupid, in my mind, to recompile and upload the entire application every time I fix a minor bug or a spelling mistake in the code-behind (for example if I set the text of a Label dynamically). With things as they are now, a fix / change to the Catalogue page may mean a customer trying to view the Contact page, which is in no way related to the Catalogue page apart from having the same MasterPage, cannot do so because the DLL is being uploaded.
Basically my question is, given my current situation, how would people suggest I change the architecture of the application by way of splitting it into multiple applications? I mean would it be just customer / admin, or customer / admin and pageviews, or some other way? Or not at all? Are there any other alternatives which I have not mentioned here? Could web services come in handy here? Like split the catalogue itself into a different application and just have the masterpage for all the other pages use a web service to get the names of the products to list on the left hand side? Am I just way WAY over-complicating things? Judging by the length of this question I probably am, and it wouldn't be the first time... I have tried to keep it short, but I always fail... :-)
Many thanks in advance, and sorry if I have just totally confused you!
Regards,
Richard
15000 LOC is not really all that big.
It sounds like you are not pre-compiling your site for publishing. You may want to read this: http://msdn.microsoft.com/en-us/library/1y1404zt(v=vs.80).aspx
Recompiling and uploading the application is the best way to do it. If all you are changing is your markup, that can be uploaded individually (e.g. changing some html layout in an aspx page).
I don't know what you mean here by application timeout, but if your app domain recycles every 5 minutes, then that doesn't seem right at all. You should look into this.
Also, if you find yourself working on various different parts of the site (i.e. many different changes) but needing to deploy only some items in isolation, then you should look into how you are using your source control tools (you are using one, aren't you?). Look into something like Git and branching/merging.
Start by reading:
Application Architecture Guide

Address Book contacts in Core Data

What’s considered ‘best practice’ when saving Address Book contacts in Core Data?
I’m writing an iPhone App, based on Core Data, where I need to save and recall Address Book contacts as part of the data model.
In the UI I plan to present a screen where the user can pick a contact from the current Address Book, create a new contact to store in the Address Book, or just create a ‘one-off’ contact with no saved record, local to the App only. These contacts are tracked in the context of the orders they have made, and not all contacts will require saving outside the App itself.
It feels ‘wrong’ to copy the data from the Address Book if using an existing entry, but not sure what to do if an Address Book record is edited or deleted.
I only need to track name and photo for the purposes of the App, so my gut reaction is to store the ABRecordID and, because these can apparently change(!), the first and last name, and to update the local record only when the Address Book entry changes (how do I track that?).
Or can you store an ABRecordRef directly? (I imagine they aren't persistent?)
I’ve done some searching on Google, and here, but can’t find any code samples or discussion on the integration of Core Data and Address Book in this manner; just lots of stuff on each in isolation.
Any one with some experience/gotchas on this subject point them out, or point me in the direction of some more reading?
Thanks.
Andy W
I would store the ABRecordID and then handle the situation where it changes, although I have not personally seen a case where it changes except when the user deletes all data and restores from another source (moving from MobileMe to Google, for example).
See Apple's online documentation on how to handle changing IDs and what to store.
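A minimal sketch of reacting to external changes on iOS; MYReconcileContacts stands in for your own hypothetical re-matching logic:

#import <AddressBook/AddressBook.h>

// Hypothetical: walk the Core Data store and re-match each saved ABRecordID
// against the address book by first/last name.
static void MYReconcileContacts(ABAddressBookRef addressBook) { /* ... */ }

// Note: this callback can arrive on an arbitrary thread.
static void MYAddressBookChanged(ABAddressBookRef addressBook,
                                 CFDictionaryRef info, void *context) {
    ABAddressBookRevert(addressBook); // Drop cached state, pick up the changes.
    MYReconcileContacts(addressBook);
}

void MYStartWatchingAddressBook(void) {
    // iOS 6+ creation API; permission and error handling omitted.
    ABAddressBookRef addressBook = ABAddressBookCreateWithOptions(NULL, NULL);
    ABAddressBookRegisterExternalChangeCallback(addressBook, MYAddressBookChanged, NULL);
}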

How to skip known entries when syncing with Google Reader?

For writing an offline client to the Google Reader service, I would like to know how best to sync with the service.
There doesn't seem to be official documentation yet and the best source I found so far is this: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
Now consider this: with the information from above I can download all unread items, I can specify how many items to download, and using the atom ID I can detect duplicate entries that I already downloaded.
What's missing for me is a way to specify that I just want the updates since my last sync.
I can say: give me the 10 latest entries (parameters n=10 and r=d). If I specify the parameter r=o (date ascending), then I can also specify the parameter ot=[time of last sync], but only then, and the ascending order doesn't make sense when I just want to read some items rather than all of them.
Any idea how to solve this without downloading all items again and just rejecting duplicates? Not a very economical way of polling.
Someone proposed that I only request the unread entries. But to make that solution work in such a way that Google Reader will not offer these entries again, I would need to mark them as read. In turn that would mean that I need to keep my own read/unread state on the client, and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
Cheers,
Mariano
To get the latest entries, use the standard from-newest-date-descending download, which will start from the latest entries. You will receive a "continuation" token in the XML result, looking something like this:
<gr:continuation>CArhxxjRmNsC</gr:continuation>
Scan through the results, pulling out anything new to you. You should find that either all results are new, or everything up to a point is new, and all after that are already known to you.
In the latter case, you're done, but in the former you need to find the new stuff older than what you've already retrieved. Do this by using the continuation to get the results starting from just after the last result in the set you just retrieved by passing it in the GET request as the c parameter, e.g.:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?c=CArhxxjRmNsC
Continue this way until you have everything.
The n parameter, a count of the number of items to retrieve, works well with this, and you can change it as you go. If the frequency of checking is user-set, and thus could be very frequent or very rare, you can use an adaptive algorithm to reduce network traffic and your processing load (see the sketch below). Initially request a small number of the latest entries, say five (add n=5 to the URL of your GET request). If all are new, in the next request, where you use the continuation, ask for a larger number, say 20. If those are still all new, either the feed has a lot of updates or it's been a while, so continue on in groups of 100 or whatever.
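Here's a minimal sketch of that loop in Objective-C, assuming hypothetical fetchAtomAtURL: and storeNewEntriesFromAtom: helpers (the latter stores what's new and returns YES once it sees an entry you already have):

- (void)syncNewItems {
    NSInteger n = 5;          // Start small and grow while everything is new.
    NSString *continuation = nil;
    BOOL sawKnownItem = NO;
    while (!sawKnownItem) {
        NSMutableString *url = [NSMutableString stringWithFormat:
            @"http://www.google.com/reader/atom/user/-/state/com.google/reading-list?n=%ld",
            (long)n];
        if (continuation) [url appendFormat:@"&c=%@", continuation];
        NSString *atom = [self fetchAtomAtURL:[NSURL URLWithString:url]]; // hypothetical

        // Stop once an entry's atom ID is already in our database.
        sawKnownItem = [self storeNewEntriesFromAtom:atom]; // hypothetical

        // Pull the continuation token for the next, older page.
        NSRange open  = [atom rangeOfString:@"<gr:continuation>"];
        NSRange close = [atom rangeOfString:@"</gr:continuation>"];
        if (open.location == NSNotFound || close.location == NSNotFound) break;
        NSUInteger start = NSMaxRange(open);
        continuation = [atom substringWithRange:NSMakeRange(start, close.location - start)];

        n = MIN(n * 4, 100); // Adaptive batch size: 5, 20, 80, 100, ...
    }
}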
However, and correct me if I'm wrong here, you also want to know, after you've downloaded an item, whether its state changes from "unread" to "read" due to the person reading it using the Google Reader interface.
One approach to this would be (sketched in code after the list):
1. Update the status on Google of any items that have been read locally.
2. Check and save the unread count for the feed. (You want to do this before the next step, so that you guarantee that new items have not arrived between your download of the newest items and the time you check the read count.)
3. Download the latest items.
4. Calculate your read count, and compare it to Google's. If the feed has a higher read count than you calculated, you know that something's been read on Google.
5. If something has been read on Google, start downloading read items and comparing them with your database of unread items. You'll find some items that Google says are read but your database claims are unread; update these. Continue doing so until you've found a number of these items equal to the difference between your read count and Google's, or until the downloads get unreasonable.
6. If you didn't find all of the read items, c'est la vie; record the number remaining as an "unfound unread" total, which you also need to include in your next calculation of the local number you think are unread.
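A minimal sketch of steps 1-6, where every helper method and the unfoundUnreadCount property are hypothetical:

- (void)reconcileReadState {
    [self pushLocallyReadItemsToGoogle];                    // 1. sync local reads up
    NSInteger remoteUnread = [self fetchRemoteUnreadCount]; // 2. save Google's count
    [self downloadLatestItems];                             // 3. pull the newest items
    NSInteger localUnread = [self localUnreadCount];        // 4. compare the counts:
    NSInteger readOnGoogle = localUnread - remoteUnread;    //    items read on Google
    NSInteger found = 0;
    while (found < readOnGoogle) {                          // 5. hunt for them
        NSArray *batch = [self fetchNextReadItemsBatch];
        if (batch.count == 0) break;                        // downloads got unreasonable
        found += [self markEntriesReadLocally:batch];       // returns how many changed
    }
    self.unfoundUnreadCount = readOnGoogle - found;         // 6. carry the remainder over
}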
If the user subscribes to a lot of different blogs, it's also likely that he labels them extensively, so you can do this whole thing on a per-label basis rather than for the entire feed. That should help keep the amount of data down, since you won't need to do any transfers for labels where the user didn't read anything new on Google Reader.
This whole scheme can be applied to other statuses, such as starred or unstarred, as well.
Now, as you say, this
...would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
True enough. Neither keeping a local read/unread state (since you're keeping a database of all of the items anyway) nor marking items read in Google (which the API supports) seems very difficult, so why doesn't this work for you?
There is one further hitch, however: the user may mark something he has already read as unread on Google. This throws a bit of a wrench into the system. My suggestion, if you really want to try to handle this, is to assume that the user will in general touch only more recent items, and to download the latest couple of hundred or so items every time, checking the status of all of them. (This isn't all that bad; downloading 100 items took me anywhere from 0.3 s for 300 KB to 2.5 s for 2.5 MB, albeit on a very fast broadband connection.)
Again, if the user has a large number of subscriptions, he's probably also got a reasonably large number of labels, so doing this on a per-label basis will speed things up. I'd suggest, actually, that not only do you check on a per-label basis, but that you also spread out the checks, checking a single label each minute rather than everything once every twenty minutes. You can also do this "big check" for status changes on older items less often than the "new stuff" check, perhaps once every few hours, if you want to keep bandwidth down.
This is a bit of a bandwidth hog, mainly because you need to download the full article from Google merely to check its status. Unfortunately, I can't see any way around that in the API docs we have available. My only real advice is to minimize the checking of status on non-new items.
The Google Reader API hasn't been officially released yet; when it is, this answer may change.
Currently, you would have to call the API and disregard items already downloaded, which, as you said, isn't terribly efficient, as you will be re-downloading items every time even if you already have them.