Before I write my data to Core Data I call
[self deleteAllObjects:@"MyEntity"];
But this seems to iterate through every single object and delete them one by one, which seems a bit slow. I was wondering if there is a better / quicker way of doing this.
I have checked out the Core Data notes on the developer site, but that's the only function I can find for deleting entries from an entity.
Iterating is indeed expensive, and there are quicker ways. You can check these answers that have been discussed on SO before here, here and here.
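For what it's worth, the usual quick wins are to fetch the objects as faults so no attribute data is loaded, or, on newer SDKs, to use NSBatchDeleteRequest. A minimal sketch, assuming an NSManagedObjectContext in a context variable:

    // Quicker than fetching full objects: fetch faults only, then delete.
    NSFetchRequest *fetch = [NSFetchRequest fetchRequestWithEntityName:@"MyEntity"];
    fetch.includesPropertyValues = NO;   // faults only, no attribute data loaded

    NSError *error = nil;
    NSArray *objects = [context executeFetchRequest:fetch error:&error];
    for (NSManagedObject *object in objects) {
        [context deleteObject:object];
    }
    [context save:&error];

    // On iOS 9+ / OS X 10.11+, NSBatchDeleteRequest deletes directly in the
    // persistent store without loading anything into memory at all:
    NSBatchDeleteRequest *batch = [[NSBatchDeleteRequest alloc]
        initWithFetchRequest:[NSFetchRequest fetchRequestWithEntityName:@"MyEntity"]];
    [context executeRequest:batch error:&error];

Note that the batch delete bypasses the context, so any MyEntity objects already loaded in memory won't know they were deleted.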
I am building a simple custom headless CMS with React to save data in Fauna via API Gateway and Lambda. To list my posts in the admin, I would like to get the data from my collection sorted by a date value.
When I create a new index to do this, I expected to get the same data/structure that is in the default index that is created. However, what I've found is that it returns only the data explicitly defined in the index without any keys to describe what values are present.
I asked this question without the context before and got a great response, but I would like to know more generally what the best and most performant practice would be to accomplish this in Fauna. I have not discovered a way to sort data outside of creating an index.
This default behavior is counterintuitive to me. It seems there should be a simpler way to return the data in reversed order. I would love to know why this is the default behavior. I'm sure there are good reasons for it rationalized by folks who are much smarter than I am. Thanks for any guidance.
There is indeed a very good reason for this. In contrast to many other databases, FaunaDB took the decision not to let you do inefficient things, in order to save you from unpleasant surprises. When you sort data in a database, typically one of two things happens:
There is an index defined, because you knew you were going to sort, you care about performance and you thought about it in advance. The index is used for the sort.
You forgot about the index, or your data was so small that you didn't care; the query engine will still do the sort, but in a horribly inefficient way.
If you end up in the second case and you do this on massive data, you might have a performance problem. If that database is auto-scaling and pay-as-you-go then OK, no problem, the database should be able to handle it; but since it's pay-as-you-go, it'll be expensive.
The same goes for reversing a sort order. Maybe a database has a clever way to reverse a sort with the index, but it might just as well ignore the index and do something super inefficient, like running over the complete dataset to the end and then reading backwards.
To avoid nasty pricing surprises, most things that require an index to be done efficiently are simply not possible without defining that index in advance.
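For example, with the JavaScript driver the index for the question above can be defined once with the date as a reversed value, and each hit mapped back to its full document to restore the keyed structure the default index returns. A sketch, assuming a posts collection with a data.date field (all names are illustrative):

    const faunadb = require('faunadb');
    const q = faunadb.query;
    const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });

    // One-time setup: the date is the first value and is reversed, so the
    // index reads back newest-first; the ref is kept so we can fetch the doc.
    async function createPostsByDate() {
      return client.query(q.CreateIndex({
        name: 'posts_by_date_desc',
        source: q.Collection('posts'),
        values: [
          { field: ['data', 'date'], reverse: true },
          { field: ['ref'] },
        ],
      }));
    }

    // Listing posts for the admin: page through the index and map each ref
    // back to its full document.
    async function listPosts() {
      return client.query(
        q.Map(
          q.Paginate(q.Match(q.Index('posts_by_date_desc'))),
          q.Lambda(['date', 'ref'], q.Get(q.Var('ref')))
        )
      );
    }

Because the values are stored pre-sorted in the index, reading them back in that order costs nothing extra at query time.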
Is that the answer you were looking for?
First, I am an objective-c newbie. Just thought I would get that out of the way ;)
I am trying to handle objects but I'm a bit confused about the best way to go about doing so. Let me put this into a bit of context:
I have a preference area where a user can add a new Foo to the app. Once the input fields are validated it should spawn a new object of type Foo (according to my Foo class). The user could have anywhere from 1 to 100 of these in the app. What is the best way of keeping track of all of these? How can I create them in the code and keep track of them?
I bet that made no sense, but I have tried to explain it the best I can. Please feel free to ask for more details.
Thanks in advance for any help
Oh, I thought you said lots. :-) I was already planning an explanation of the flyweight pattern when I read 'up to 100'. You can just put these in an array.
It depends somewhat on what you want to do with them. To just keep them in RAM, you can store pointers to those objects in an NSArray (or NSMutableArray), or, if you need to be able to find them with a key, use an NSDictionary (or NSMutableDictionary). To save them so that they persist even after your app exits, you can write them to a file (plist, SQLite, Core Data, ...) and load them again next time.
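A minimal sketch of the in-RAM option, assuming a Foo class of your own (names are illustrative):

    // Somewhere that lives as long as the objects should, e.g. a controller:
    NSMutableArray *foos = [NSMutableArray array];

    // When the input fields validate, create and track a new Foo:
    Foo *foo = [[Foo alloc] init];
    [foos addObject:foo];   // the array retains it; no other bookkeeping needed

    // If you need to find one again by a key rather than by position:
    NSMutableDictionary *foosByName = [NSMutableDictionary dictionary];
    [foosByName setObject:foo forKey:@"myFirstFoo"];
    Foo *found = [foosByName objectForKey:@"myFirstFoo"];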
I'm currently looking for the best way to save data in my iPhone application; data that will persist between opening and closing of the application. I've looked into archiving using a NSKeyedArchiver and I have been successful in making it work. However, I've noticed that if I try to save multiple objects, they keep getting overwritten every time I save. (Essentially, the user will be able to create a list of things he/she wants, save the list, create a few more lists, save them all, then be able to go back and select any of those lists to load at a future date.)
I've heard about SQLite, Core Data, or using .plists to store multiple arrays of data that will persist over time. Could someone point me in the best direction to save my data? Thanks!
Core Data is very powerful and easy to use once you get over the initial learning curve. Here's a good tutorial to get you started - clicky
As an easy and powerful alternative to CoreData, look into ActiveRecord for Objective-C. https://github.com/aptiva/activerecord
I'd go with NSKeyedArchiver. Sounds like the problem is you're not organizing your graph properly.
You technically have a list of lists, but you're only saving the inner-nested list.
You should be adding each list to a "super" list, and then archiving the super-list.
CoreData / SQL seems a bit much from what you described.
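A sketch of that super-list idea, assuming the lists hold NSCoding-friendly objects (file name and contents are illustrative):

    // One top-level array holds every list the user has saved.
    NSArray *groceries = [NSArray arrayWithObjects:@"Milk", @"Eggs", nil];
    NSArray *wishList  = [NSArray arrayWithObjects:@"Boat", nil];
    NSMutableArray *allLists = [NSMutableArray array];
    [allLists addObject:groceries];
    [allLists addObject:wishList];

    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                          NSUserDomainMask, YES) objectAtIndex:0];
    NSString *path = [docs stringByAppendingPathComponent:@"lists.archive"];

    // Archive the super-list; nothing is overwritten because every list
    // lives under the one root object.
    [NSKeyedArchiver archiveRootObject:allLists toFile:path];

    // Later, restore every list in one shot:
    NSMutableArray *restored =
        [[NSKeyedUnarchiver unarchiveObjectWithFile:path] mutableCopy];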
Also, you can try this framework. It's very simple and easy to use.
It's based on the ActiveRecord pattern and allows you to use migrations, relationships, validations, and more.
It uses sqlite3 only, without Core Data, but you don't need to write raw SQL or create tables manually.
Just describe your iActiveRecord models and enjoy.
You want to check out this tutorial by Ray Wenderlich on getting started with Core Data. It's short and goes over the basics of Core Data.
Essentially you only want to look at plists if you have a small amount of data to store: a simple list of settings or preferences. Anything larger than that and it breaks down, specifically around performance. There is a great video on iTunes U where the developers at LinkedIn describe their performance metrics between plists and Core Data.
Archiving works, but it is going to be a lot of work to store and retrieve your data, and it puts the performance challenge on your back, so I wouldn't go there. I would use Core Data. It's extremely simple to get started with, and if you understand the objects in this Stack Overflow question then you know everything you need to get going.
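To give a flavour of how simple it is once the stack is set up, here is a minimal sketch, assuming an entity named List with a name attribute and a managed object context in context (all names are illustrative):

    // Inserting a new List and saving:
    NSManagedObject *list = [NSEntityDescription
        insertNewObjectForEntityForName:@"List" inManagedObjectContext:context];
    [list setValue:@"Groceries" forKey:@"name"];

    NSError *error = nil;
    if (![context save:&error]) {
        NSLog(@"Save failed: %@", error);
    }

    // Fetching every saved List back:
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"List"];
    NSArray *lists = [context executeFetchRequest:request error:&error];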
I am parsing some XML and storing the result in a plist, which I save to a file. I later frequently use that plist to search, add/remove stuff and then save it back.
Now, I don't have any problem with this, everything works fine; I'm just wondering if there's a better/more efficient/faster way of doing this?
About the plist: an array of 200 dictionaries with 150 entries each. Some of those entries are arrays themselves, with sub-dictionaries of 50-100 entries... (you get the point)
Thanks.
Unless you are running into performance problems I would suggest not worrying about it and just focusing on getting the rest of the app completed. Premature optimization is the root of all evil (someone had to say it, right?).
If you decide that the time has come to make that part of your app as efficient as possible, then we would need to see the actual code you are using to determine whether there are more efficient ways to do it. Considering your description of the plist, I would guess that if there were anything incredibly inefficient in your strategy and/or implementation, you would already be running up against it with respect to performance.
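For reference, the typical save/load pattern looks like the sketch below; the one cheap win worth knowing about is that the binary plist format is smaller and faster to parse than XML. Paths and data are illustrative:

    NSArray *records = [NSArray arrayWithObject:
        [NSDictionary dictionaryWithObject:@"value" forKey:@"key"]];

    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                          NSUserDomainMask, YES) objectAtIndex:0];
    NSString *path = [docs stringByAppendingPathComponent:@"records.plist"];

    // Save: binary format is smaller and faster to parse than XML.
    NSError *error = nil;
    NSData *data = [NSPropertyListSerialization dataWithPropertyList:records
                                                               format:NSPropertyListBinaryFormat_v1_0
                                                              options:0
                                                                error:&error];
    [data writeToFile:path atomically:YES];

    // Load it back with mutable containers so you can add/remove and re-save.
    NSData *onDisk = [NSData dataWithContentsOfFile:path];
    NSMutableArray *loaded = [NSPropertyListSerialization
        propertyListWithData:onDisk
                     options:NSPropertyListMutableContainers
                      format:NULL
                       error:&error];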
I have a reasonable number of records in an Azure Table that I'm attempting to do some one-time data encryption on. I thought that I could speed things up by using a Parallel.ForEach. Also, because there are more than 1K records and I don't want to mess around with continuation tokens myself, I'm using a CloudTableQuery to get my enumerator.
My problem is that some of my records have been double encrypted and I realised that I'm not sure how thread safe the enumerator returned by CloudTableQuery.Execute() is. Has anyone else out there had any experience with this combination?
I would be willing to bet that Execute returning a thread-safe IEnumerator implementation is highly unlikely. That said, this sounds like yet another case for the producer-consumer pattern.
In your specific scenario I would have the original thread that called Execute read the results off sequentially and stuff them into a BlockingCollection<T>. Before you start doing that, though, you want to start a separate Task that will control the consumption of those items using Parallel.ForEach. Now, you will probably also want to look into using the GetConsumingPartitioner method of the ParallelExtensions library in order to be most efficient, since the default partitioner will create more overhead than you want in this case. You can read more about this from this blog post.
An added bonus of using BlockingCollection<T> over a raw ConcurrentQueue<T> is that it offers the ability to set bounds, which can help block the producer from adding more items to the collection than the consumers can keep up with. You will of course need to do some performance testing to find the sweet spot for your application.
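A sketch of that shape, assuming the ParallelExtensionsExtras sample library is referenced for GetConsumingPartitioner; MyEntity, query and Encrypt stand in for your actual types:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Bounded so the single producer can't race ahead of the consumers.
    var pending = new BlockingCollection<MyEntity>(boundedCapacity: 1000);

    // Consumer side: parallel encryption, fed from the collection.
    var consumers = Task.Factory.StartNew(() =>
        Parallel.ForEach(pending.GetConsumingPartitioner(),
                         entity => Encrypt(entity)));

    // Producer side: only this thread ever touches the (probably not
    // thread-safe) enumerator that Execute() returns.
    foreach (var entity in query.Execute())
        pending.Add(entity);        // blocks when the bound is reached

    pending.CompleteAdding();       // tell the consumers we're done
    consumers.Wait();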
Despite my best efforts I've been unable to replicate my original problem. My conclusion is therefore that it is perfectly OK to use Parallel.ForEach loops with CloudTableQuery.Execute().