Calling deleteRecord() too often or too fast at a time? - ember-data

I want to delete all records of the model "Article" (around five of them). I'm doing it like this:
CMS.ArticlesController = Em.ArrayController.extend
  deleteAll: ->
    @get("content").forEach (article) ->
      article.deleteRecord()
However, while executing, it fails after three articles with:
Uncaught TypeError: Cannot call method 'deleteRecord' of undefined
It works though when using a little delay:
CMS.ArticlesController = Em.ArrayController.extend
  deleteAll: ->
    @get("content").forEach (article) ->
      setTimeout( (->
        article.deleteRecord()
      ), 500)
Why is that?
(I'm using Ember.js RC1 and Ember Data revision 11 together with the ember-localstorage-adapter by @rpflorence, but I don't think that matters since I didn't call commit() yet...)
Update 1
Just figured out it also works with Ember.run.once...
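For reference, this is roughly what I mean; a minimal, untested sketch that schedules each deletion with Ember.run.once so it runs after the iteration instead of during it:
CMS.ArticlesController = Em.ArrayController.extend
  deleteAll: ->
    @get("content").forEach (article) ->
      # defer the deletion out of the forEach loop
      Ember.run.once article, "deleteRecord"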
Update 2
I opened a GitHub issue: https://github.com/emberjs/data/issues/772

As discussed on GitHub, the forEach() loop breaks because removing items shifts the indices while iterating.
The solution:
"Copy" it in another array using toArray():
#get("content").toArray().forEach(article) ->
article.deleteRecord()
A nicer approach, if there were a function like forEachInReverse, would be to loop backwards, so that removing items doesn't affect the indices the loop has yet to visit.

I still had issues with the above answer. Instead, I used a reverse for loop:
for (var i = items.length - 1; i >= 0; i--) {
  items.objectAt(i).destroyRecord(); // or deleteRecord()
}
This destroys each item without disrupting the index.


How to get just the most recent of all documents

In Sanity Studio you get a nice list of the most recent version of all your documents: if there is a draft you get that, if not, you get the published one.
I need the same list for a few filters and scripts. The following GROQ does the job, but it is not very fast and does not work in the new API (v2021-03-25).
*[
  _type == $type &&
  !defined(*[_id == "drafts." + ^._id])
]._id
A way around the breaking changes in the API is to use length() == 0 in place of !defined(), but that makes an already slow query 10-20x slower.
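That is, something along these lines (sketched here for illustration, not necessarily the exact query):
*[
  _type == $type &&
  length(*[_id == "drafts." + ^._id]) == 0
]._id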
Does anyone know a way of making filters that consider only the latest version?
Edit: An example of where I need this is when I want to see all documents without any categories. Regardless of whether it is the published document or the draft that has no categories, it shows up in a normal filter. So if you add categories but don't want to publish immediately, the no-categories list becomes confusing. ;-)
100x improvement on API v2021-03-25 🥳
The only way I was able to solve this with speed was to first make a projection of the sub-query so it doesn't run once for every non-draft. Then I thought, why not project both sets and then figure out the overlap, and that was even faster! It runs more than 10x faster than what was possible on API v1 and 100x faster than any suggestion for the new API.
{
  'drafts': *[ _type == $type && _id in path("drafts.**") ]._id,
  'published': *[ _type == $type && !(_id in path("drafts.**")) ]._id,
}
{
  'current': published[ !("drafts." + @ in ^.drafts) ] + drafts
}
First I get both drafts and non-drafts and "store" them in this projection, sort of like a variable 😉
Then I start with my non-drafts (published)
And filter out any that have a counterpart in my drafts "variable"
Lastly I add all drafts to my list of filtered non-drafts
Overall I think you're on the right track. Some ideas to help you out:
Drafts are always fresher and newer than published documents, so if a given doc's _id is in path("drafts.**"), that's already the most recently updated one.
Knowing the above allows you to skip the defined(*[_id == ...]) part of the query for drafts, speeding up your execution
As drafts are already included, we can exclude published documents with a draft (defined(*[_id == "drafts." + ^._id][0]))
Notice I added a [0] to the end of the query to pick only the first element that matches. This will improve performance slightly.
For getting only documents that have no categories, use count(categoriesField) < 1
Order documents with | order(_updatedAt desc) to get the freshest documents first
And paginate your request to reduce the payload and speed things up.
Here's a sample query applying these principles (I haven't run it, so you may have to make some adjustments):
*[
  _type == $type &&
  // Assuming you only want those without categories:
  count(categories) < 1 &&
  (
    // Is either a draft -> drafts are always fresher
    _id in path("drafts.**") ||
    // Or a published document with no draft
    !defined(*[_id == "drafts." + ^._id][0])
    // 👆 with the check above we're ensuring only
    // published documents run the expensive defined query
  )
]
// Order by last updated
| order(_updatedAt desc)
// Paginate for faster queries
[$paginationStart..$paginationEnd]
// Get only the _id, assuming that's what you want
._id
Hope this helps 🙌

Using deepstream List for tens of thousands unique values

I wonder whether it's a good or bad idea to use deepstream record.getList for storing a lot of unique values, for example emails or any other unique identifiers. The main purpose is to be able to answer quickly whether we already have, say, a user with a given email (email in use), or another record identified by a specific unique field.
I did a few experiments today and ran into two problems:
1) When I tried to populate the list with a few thousand values I got
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
and my deepstream server went down. I was able to fix it by giving the server's node process more memory with this flag:
--max-old-space-size=5120
It doesn't look right, but it allowed me to create a list with more than 5000 items.
2) That wasn't enough for my tests, so I pre-created a list with 50000 items by putting the data directly into the RethinkDB table, and got another issue when getting or modifying the list:
RangeError: Maximum call stack size exceeded
I was able to fix it with another flag:
--stack-size=20000
It helps, but I believe it's only a matter of time before one of those errors appears in production once the list grows large enough. I don't really know whether it's a Node.js, JavaScript, deepstream or RethinkDB issue. All of this made me think that I'm using deepstream List the wrong way. Please let me know. Thank you in advance!
Whilst you can use lists to store arrays of strings, they are actually intended as collections of record names: the actual data would be stored in the record itself, and the list would only manage the order of the records.
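A minimal sketch of that intended pattern, assuming the usual deepstream client API (the 'users/…' record names here are purely illustrative):
// store the actual data in a record with a deterministic name...
var userRecord = ds.record.getRecord( 'users/jane@example.com' );
userRecord.set({ email: 'jane@example.com', name: 'Jane' });

// ...and keep only the record names in the list
var userList = ds.record.getList( 'users' );
userList.addEntry( 'users/jane@example.com' );

// "is this email already in use?" then becomes a lookup against the entries
var inUse = userList.getEntries().indexOf( 'users/jane@example.com' ) !== -1;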
Having said that, there are two open GitHub issues to improve performance for very long lists, by sending more efficient deltas and by introducing a pagination option.
Interesting results regarding memory though; that's definitely something that needs to be handled more gracefully. In the meantime you could drastically improve performance by combining updates into one:
var myList = ds.record.getList( 'super-long-list' );

// Sends 10,000 messages
for( var i = 0; i < 10000; i++ ) {
  myList.addEntry( 'something-' + i );
}

// Sends 1 message
var entries = [];
for( var i = 0; i < 10000; i++ ) {
  entries.push( 'something-' + i );
}
myList.setEntries( entries );

phalcon querybuilder total_items always returns 1

I build a query via createBuilder(), and when executing it (getQuery()->execute()->toArray()) I get 10946 elements. I want to paginate it, so I pass the builder to:
$paginator = new \Phalcon\Paginator\Adapter\QueryBuilder(array(
    "builder" => $builder,
    "limit"   => $limit,
    "page"    => $current_page
));
$limit is 25 and $current_page is 1, but when doing:
$page = $paginator->getPaginate();
$page->total_items;
it returns 1.
Is that a bug or am I missing something?
UPD: it seems that when counting items it uses the generated SQL with the limit applied. Whatever the limit is, limit divided by items per page always equals 1. I might be mistaken.
UPD2: A colleague helped me figure this out. The bug is in the query Phalcon produces: the count() over the GROUP BY counts the grouped elements. So a workaround looks like:
$dataCount = $builder->getQuery()->execute()->count();
$page->next = $page->current + 1;
$page->before = $page->current - 1 > 0 ? $page->current - 1 : 1;
$page->total_items = $dataCount;
$page->total_pages = ceil($dataCount / 100);
$page->last = $page->total_pages;
I know this isn't much of an answer, but this is most likely a bug. The great guys at Phalcon took on a massive job that is too big to do properly in their little free time, and things like PHQL, Volt and other big but non-core components do not receive as much attention as we'd like. Also, given that most of the time in the past 6 months was spent on v2, there are nearly 500 open bugs about stuff like that, and counting. I came across considerable issues in the ORM, Volt, Validation and Session, which in the end made me stick to other, less cool but more proven, solutions. When v2 comes out I'm sure all attention will be on the bug list and testing; until then we are mostly on our own. Given that it's all C right now, only a few enthusiasts get involved; with v2 this will also change.
If this is the only problem you are hitting, the best approach is to update your query to get the information you need yourself without getPaginate().
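For example, something along these lines; a rough, untested sketch that counts and slices manually instead of relying on getPaginate() (the variable names follow the question):
// count the real number of rows yourself (works around the GROUP BY count issue)
$total = $builder->getQuery()->execute()->count();

// fetch only the current page by applying limit/offset to the builder
$items = $builder
    ->limit($limit, ($current_page - 1) * $limit)
    ->getQuery()
    ->execute();

$total_pages = (int) ceil($total / $limit);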

How to clear datatable filters?

I'm using custom filtering for my datatable using the method:
$.fn.dataTableExt.afnFiltering.push("custom filter function");
This adds a filter to my datatable.
The problem is that when I use ajax to create another datatable object, this filter persists and is applied to that other table, which should have nothing to do with it. How do I clear the filter, or bind it to the first datatable only?
If you can push onto $.fn.dataTableExt.afnFiltering, it means it's an array. So when you receive your data, you can remove the filter reference from this array by using:
delete $.fn.dataTableExt.afnFiltering[index or key];
which will set the element to undefined,
or by using the splice method in JavaScript:
$.fn.dataTableExt.afnFiltering.splice(index, 1);
which will remove the element from the array.
so
var index = $.fn.dataTableExt.afnFiltering.indexOf("custom filter function");
$.fn.dataTableExt.afnFiltering.splice(index,1);
should resolve your problem
(see Javascript - remove an array item by value for more details, since indexOf is not supported by IE < 9)
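As for binding the filter to the first datatable only, one option (a sketch of my own, not taken from the answers here) is to check the settings object that DataTables passes to each filter function and bail out for every other table; the 'first-table' id is just illustrative:
$.fn.dataTableExt.afnFiltering.push(function (settings, data, dataIndex) {
  // only apply this filter to the table it was created for
  if (settings.nTable !== document.getElementById('first-table')) {
    return true; // leave rows of other tables untouched
  }
  // the actual custom filter logic goes here, e.g.:
  return parseFloat(data[3]) > 100;
});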
If you are going to use version 1.10+ of DataTables in the future, the search plug-in documentation is shown below:
Search plug-in development
To reset the filter for version 1.10+, simply add either of the following:
$.fn.dataTable.ext.search = [];
$.fn.dataTable.ext.search.pop();
and after this block you can add:
table.draw();
$.fn.dataTableExt.afnFiltering.pop();
credit to @DrewT
As mentioned by @kthorngren, there is no built-in way of tracking if, or how many, custom searches are active.
If you are sure that only one custom search is active, a
$.fn.dataTableExt.afnFiltering.pop();
will work, but there is a big BUT:
$.fn.dataTable.ext.search is an array which contains the search settings for custom search and for searchPanes.
Erasing this array with $.fn.dataTable.ext.search = []; or two pop()'s, although only one custom search is active, will break searchPanes.
e.g. if you have three panes active, this would mean:
$.fn.dataTable.ext.search[0] -> SearchPane Col1
$.fn.dataTable.ext.search[1] -> SearchPane Col2
$.fn.dataTable.ext.search[2] -> SearchPane Col3
$.fn.dataTable.ext.search[3] -> Custom Search -> safe to delete
$.fn.dataTable.ext.search[4] -> Custom Search -> safe to delete
The following code does the job in my case:
let lenOfSearchPanes = dt.settings()[0]._searchPanes.c.columns.length;
let lenOfSearchArr = $.fn.dataTable.ext.search.length;
let diff = lenOfSearchArr - lenOfSearchPanes;

if (diff > 0) {
  $.fn.dataTable.ext.search = $.fn.dataTable.ext.search.slice(0, -diff);
}

Passing options to JS function with Amber

I'm trying to write the equivalent of:
$( "#draggable" ).draggable({ axis: "y" });
in Amber Smalltalk.
My guess was: '#draggable' asJQuery draggable: {'axis' -> 'y'} but that's not it.
Not working on vanilla 0.9.1, but working on master (at least as of two months ago) is:
'#draggable' asJQuery draggable: #{'axis' -> 'y'}
and afaict this is the recommended way.
P.S.: #{ 'key' -> val. 'key2' -> val } is the syntax for inline creation of a HashedCollection, which is implemented (since the aforementioned fix from two months ago) so that only public (a.k.a. enumerable) properties are the HashedCollection keys. Before the fix, all the methods were enumerable as well, which prevented using it naturally in place of JavaScript objects.
herby's excellent answer points out the recommended way to do it. Apparently, there is now Dictionary-literal support (see his comment below). Didn't know that :-)
Old / Alternate way of doing it
For historical reasons, or for users not using the latest master version, this is an alternative way to do it:
options := <{}>.
options at: #axis put: 'y'.
'#draggable' asJQuery draggable: options.
The first line constructs an empty JavaScript object (it's really a JSObjectProxy).
The second line puts the string "y" in the slot "axis". It has the same effect as:
options.axis = "y"; // JavaScript
Lastly, draggable: is invoked with options passed as the parameter.
Array-literals vs Dictionaries
What you were doing didn't work because in modern Smalltalk (Pharo/Squeak/Amber) curly brackets are used for array literals, not for object literals as in JavaScript.
If you evaluate (print-it) this in a Workspace:
{ #element1. #element2. #element3 }.
You get:
a Array (#element1 #element2 #element3)
As a result, something that looks like a JavaScript object literal is in reality an Array of Associations. To illustrate, here is a snippet with the results of print-it on the right:
arrayLookingLikeObject := { #key1 -> #value1. #key2 -> #value2. #key3 -> #value3}.
arrayLookingLikeObject class. "==> Array"
arrayLookingLikeObject first class. "==> Association"
arrayLookingLikeObject "==> a Array (a Association a Association a Association)"
I wrote about it here:
http://smalltalkreloaded.blogspot.co.at/2012/04/javascript-objects-back-and-forth.html