The database I am trying to pull data from has approximately 50,000 documents. Currently it takes around 90 seconds for an iOS or Android device to query the data and display it in a view. My code is posted below. Is there something I could be doing differently to speed this up? Thanks for any tips.
function updateAllPoliciesTable() {
    try {
        var db = Alloy.Globals.dbPolicyInquiry;
        var view = db.getView("AllRecordsByInsured");
        var vec = view.getAllEntriesBySQL("Agent like ? OR MasterAgent like ?", [Ti.App.agentNumber, Ti.App.agentNumber], true);
        var ve = vec.getFirstEntry();
        var data = [];
        while (ve) {
            var unid = ve.getColumnValue("id");
            var row = Ti.UI.createTableViewRow({
                unid : unid,
                height : '45dp',
                rowData : ve.getColumnValue("Insured") + " " + ve.getColumnValue("PolicyNumber")
            });
            var viewLabel = Ti.UI.createLabel({
                color : '#333',
                font : {
                    fontSize : '16dp'
                },
                text : toTitleCase(ve.getColumnValue("Insured")) + " " + ve.getColumnValue("PolicyNumber"),
                left : '10dp'
            });
            row.add(viewLabel);
            data.push(row);
            ve = vec.getNextEntry();
        }
        //Ti.API.log("# of policies= " + data.length);
        if (data.length == 0) {
            var row = Ti.UI.createTableViewRow({
                title : "No policies found"
            });
            data.push(row);
        }
        $.AllPoliciesTable.setData(data);
        Alloy.Globals.refreshAllPolicies = false;
        Alloy.Globals.loading.hide();
    } catch (e) {
        DTG.exception("updateAllPoliciesTable -> ", e);
    }
}
Create an index on the appropriate table; that should speed things up.
The SQLite table for your view should be named "view_AllRecordsByInsured".
Create an index for that table; see the SQLite documentation on "CREATE INDEX" for more details.
To execute the appropriate SQL, you could use the DTGDatabase class like this:
var sqldb = new DTGDatabase(Alloy.Globals.dbPolicyInquiry.localdbname);
sqldb.execute("CREATE INDEX IF NOT EXISTS idx_agent ON view_AllRecordsByInsured (Agent, MasterAgent)");
If that does not give enough speed, look at full text search for SQLite databases.
Here is some example code regarding full text indexes to give you a starting point:
CREATE VIRTUAL TABLE ft_view__mobile_companies_ USING fts4(id, customername, customercity)
INSERT INTO ft_view__mobile_companies_(id, customername, customercity) SELECT id, customername, customercity FROM view__mobile_companies_
To query the index you need to execute SQL with the MATCH operator (see the SQLite documentation). In one app I have well over 100,000 records synchronized from a Domino view, and searching with a full-text search in SQLite works instantly.
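For illustration, a MATCH query against the full-text table created above could look like this (the search term is only a placeholder):
SELECT id, customername, customercity
FROM ft_view__mobile_companies_
WHERE ft_view__mobile_companies_ MATCH 'acme*'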
Well, unlike big database engines, the SQLite database engine is more limited, and so are the devices that it's run on.
What I would try is to check the query that pulls the data: are you using indexes in your table? Do you use them in the query? Are there unnecessary joins or pulls?
If you fail to tweak the query, you might consider a mobile NoSQL solution - I know there are some on the Appcelerator Marketplace - and check whether one suits your needs and speeds things up.
I am trying to add columnSummary to my table using Handsontable, but it seems that the function does not fire. The stretchH value gets set properly, but the table does not react to the columnSummary option:
this.$refs.hot.hotInstance.updateSettings({
    stretchH: 'all',
    columnSummary: [
        {
            destinationRow: 0,
            destinationColumn: 2,
            reversedRowCoords: true,
            type: 'custom',
            customFunction: function(endpoint) {
                console.log("TEST");
            }
        }
    ]
}, false);
I have also tried with type:'sum' without any luck.
Thanks for all help and guidance!
columnSummary cannot be changed with updateSettings: GH #3597
You can set columnSummary settings at the initialization of Handsontable.
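A minimal sketch of doing that at initialization, reusing the summary settings from the question (the container element and data here are placeholders):
var container = document.getElementById('example'); // placeholder element
var hot = new Handsontable(container, {
    data: [[1, 2, 10], [4, 5, 20]], // placeholder data
    colHeaders: true,
    stretchH: 'all',
    columnSummary: [
        {
            destinationRow: 0,
            destinationColumn: 2,
            reversedRowCoords: true,
            type: 'sum' // or 'custom' with a customFunction, as in the question
        }
    ]
});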
One workaround would be to manage your own column summary, since the Handsontable one could give you some headaches. You could add one additional row to put your arithmetic in, but it is messy (it needs a fixed row count and does not work with filtering and sorting operations). Still, it could work well under some circumstances.
In my humble opinion, though, a column summary has to be fully functional, so we need to keep our summary row out of the table data. What comes to mind is to take the above-mentioned additional row out of the table data "area", but that would force us to make the out-of-table row always look as if it still were part of the table.
So I thought that, instead of having a new row, we could simply add our column summary to the column header:
Here is a working JSFiddle example.
Once the Handsontable table is rendered, we need to iterate through the columns and set our column summary right in the table cell HTML content:
for (var i = 0; i < tableConfig.columns.length; i++) {
    var columnHeader = document.querySelectorAll('.ht_clone_top th')[i];
    if (columnHeader) { // Just to be sure column header exists
        var summaryColumnHeader = document.createElement('div');
        summaryColumnHeader.className = 'custom-column-summary';
        columnHeader.appendChild(summaryColumnHeader);
    }
}
Now that our placeholders are set, we have to update them with some arithmetic results:
var printedData = hotInstance.getData();

for (var i = 0; i < tableConfig.columns.length; i++) {
    var summaryColumnHeader = document.querySelectorAll('.ht_clone_top th')[i].querySelector('.custom-column-summary'); // Get back our column summary for each column
    if (summaryColumnHeader) {
        var res = 0;
        printedData.forEach(function(row) { res += row[i]; }); // Sum all the data stored under that column
        summaryColumnHeader.innerText = '= ' + res;
    }
}
This piece of code may be called any time it is needed:
var hotInstance = new Handsontable(/* ... */);

setMySummaryHeaderCalc(); // When the Handsontable table is printed

Handsontable.hooks.add('afterFilter', function(conditionsStack) { // When the Handsontable table is filtered
    setMySummaryHeaderCalc();
}, hotInstance);
Feel free to comment; I could improve my answer.
I am currently updating a back-end project to .NET Core and am having performance issues with my LINQ queries.
Main Queries:
var queryTexts = from text in _repositoryContext.Text
                 where text.KeyName.StartsWith("ApplicationSettings.")
                 where text.Sprache.Equals("*")
                 select text;

var queryDescriptions = from text in queryTexts
                        where text.KeyName.EndsWith("$Descr")
                        select text;

var queryNames = from text in queryTexts
                 where !(text.KeyName.EndsWith("$Descr"))
                 select text;

var queryDefaults = from defaults in _repositoryContext.ApplicationSettingsDefaults
                    where defaults.Value != "*"
                    select defaults;
After getting these IQueryables I run a foreach loop in another context to build my DTO model:
foreach (ApplicationSettings appl in _repositoryContext.ApplicationSettings)
{
    var applDefaults = queryDefaults.Where(c => c.KeyName.Equals(appl.KeyName)).ToArray();

    description = queryDescriptions.Where(d => d.KeyName.Equals("ApplicationSettings." + appl.KeyName + ".$Descr"))
                                   .FirstOrDefault()?
                                   .Text1 ?? "";

    var name = queryNames.Where(n => n.KeyName.Equals("ApplicationSettings." + appl.KeyName)).FirstOrDefault()?.Text1 ?? "";

    // Do some stuff with data and return DTO Model
}
In my old project this part executed in about 0.45 s; now it takes about 5-6 s.
I thought about using compiled queries, but I learned that these don't support returning IEnumerable yet. I also tried to avoid the Contains() method, but that didn't improve performance either.
Could you take a short look at my queries and maybe refactor them or give some hints on how to make one of the queries faster?
Note that _repositoryContext.Text has, compared to the other sets, by far the most entries (about 50,000), because of translations.
queryNames, queryDefaults, and queryDescriptions are all queries, not collections, and you are running them in a loop. Try loading them outside of the loop.
E.g. load queryNames into a dictionary:
var queryNames = from text in queryTexts
                 where !(text.KeyName.EndsWith("$Descr"))
                 select text;

var queryNamesByName = queryNames.ToDictionary(n => n.KeyName);
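Inside the loop you can then do in-memory lookups instead of issuing a database query per iteration; here is a rough sketch based on the code from the question (only the name lookup is shown, and the defaults and descriptions could be materialized the same way):
foreach (ApplicationSettings appl in _repositoryContext.ApplicationSettings)
{
    // Dictionary lookup instead of a database round trip per iteration.
    queryNamesByName.TryGetValue("ApplicationSettings." + appl.KeyName, out var nameText);
    var name = nameText?.Text1 ?? "";

    // Do some stuff with data and return DTO Model
}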
One can also write queries like the one below:
var profile = "developer";
var lstUserNames = alreadyUsed.Where(x => x.Profile == profile).ToList();
You can also use ForEach, like this:
lstUserNames.ForEach(x =>
{
    // do your stuff
});
I'm currently using IndexedDB to store offline data of some records in a sales store. The store has columns such as id, shopname, and lastsaledate. I would like some help performing the same operation as the following SQL statement using IndexedDB:
SELECT MAX(LastSaleDate) FROM Sales;
Any suggestions?
Ensure you have an index for the lastsaledate property, e.g. when upgrading the database do:
store.createIndex('by_lastsaledate', 'lastsaledate');
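For context, a minimal sketch of where that call could live; the database name, version, and key path here are just placeholders:
var open = indexedDB.open('salesdb', 2); // placeholder name/version
open.onupgradeneeded = function (e) {
    var db = open.result;
    // Create the store on first run, otherwise grab it from the upgrade transaction.
    var store = e.oldVersion < 1
        ? db.createObjectStore('records', { keyPath: 'id' })
        : open.transaction.objectStore('records');
    store.createIndex('by_lastsaledate', 'lastsaledate');
};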
When querying, use a reverse cursor ('prev') and null range (i.e. all records):
var transaction = db.transaction('records', 'readonly'); // assumes 'db' is an open IDBDatabase
var store = transaction.objectStore('records');
var index = store.index('by_lastsaledate');
var request = index.openCursor(/*query*/ null, /*direction*/ 'prev');
request.onsuccess = function() {
    var cursor = request.result;
    if (cursor) {
        console.log('max date is: ' + cursor.key);
    } else {
        console.log('no records!');
    }
};
I know variants of this question have been asked before (even by me), but I still don't understand a thing or two about this...
It was my understanding that one could retrieve more documents than the 128 default setting by doing this:
session.Advanced.MaxNumberOfRequestsPerSession = int.MaxValue;
And I've learned that a WHERE clause should be an ExpressionTree instead of a Func, so that it's treated as Queryable instead of Enumerable. So I thought this should work:
public static List<T> GetObjectList<T>(Expression<Func<T, bool>> whereClause)
{
    using (IDocumentSession session = GetRavenSession())
    {
        return session.Query<T>().Where(whereClause).ToList();
    }
}
However, that only returns 128 documents. Why?
Note, here is the code that calls the above method:
RavenDataAccessComponent.GetObjectList<Ccm>(x => x.TimeStamp > lastReadTime);
If I add Take(n), then I can get as many documents as I like. For example, this returns 200 documents:
return session.Query<T>().Where(whereClause).Take(200).ToList();
Based on all of this, it would seem that the appropriate way to retrieve thousands of documents is to set MaxNumberOfRequestsPerSession and use Take() in the query. Is that right? If not, how should it be done?
For my app, I need to retrieve thousands of documents (that have very little data in them). We keep these documents in memory and use them as the data source for charts.
** EDIT **
I tried using int.MaxValue in my Take():
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
And that returns 1024. Argh. How do I get more than 1024?
** EDIT 2 - Sample document showing data **
{
    "Header_ID": 3525880,
    "Sub_ID": "120403261139",
    "TimeStamp": "2012-04-05T15:14:13.9870000",
    "Equipment_ID": "PBG11A-CCM",
    "AverageAbsorber1": "284.451",
    "AverageAbsorber2": "108.442",
    "AverageAbsorber3": "886.523",
    "AverageAbsorber4": "176.773"
}
It is worth noting that since version 2.5, RavenDB has an "unbounded results API" to allow streaming. The example from the docs shows how to use this:
var query = session.Query<User>("Users/ByActive").Where(x => x.Active);

using (var enumerator = session.Advanced.Stream(query))
{
    while (enumerator.MoveNext())
    {
        User activeUser = enumerator.Current.Document;
    }
}
There is support for standard RavenDB queries and Lucene queries, and there is also async support.
The documentation can be found here. Ayende's introductory blog article can be found here.
The Take(n) function will only give you up to 1024 by default. However, you can change this default in Raven.Server.exe.config:
<add key="Raven/MaxPageSize" value="5000"/>
For more info, see: http://ravendb.net/docs/intro/safe-by-default
The Take(n) function will only give you up to 1024 by default. However, you can use it together with Skip(n) to page through all the results:
RavenQueryStatistics stats;
var points = new List<T>();
var nextGroupOfPoints = new List<T>();
const int ElementTakeCount = 1024;
int i = 0;
int skipResults = 0;

do
{
    nextGroupOfPoints = session.Query<T>().Statistics(out stats).Where(whereClause)
        .Skip(i * ElementTakeCount + skipResults).Take(ElementTakeCount).ToList();
    i++;
    skipResults += stats.SkippedResults;
    points = points.Concat(nextGroupOfPoints).ToList();
}
while (nextGroupOfPoints.Count == ElementTakeCount);

return points;
RavenDB Paging
The number of requests per session is a separate concept from the number of documents retrieved per call. Sessions are short-lived and are expected to have only a few calls issued over them.
If you are getting more than 10 of anything from the store (even less than the default 128) for human consumption, then something is wrong, or your problem requires different thinking than hauling a truckload of documents out of the data store.
RavenDB indexing is quite sophisticated. There is a good article about indexing here and about facets here.
If you need to perform data aggregation, create a map/reduce index which results in aggregated data, e.g.:
Index:
// Map
from post in docs.Posts
select new { post.Author, Count = 1 }

// Reduce
from result in results
group result by result.Author into g
select new
{
    Author = g.Key,
    Count = g.Sum(x => x.Count)
}
Query:
session.Query<AuthorPostStats>("Posts/ByUser/Count").Where(x => x.Author == authorName).ToList(); // authorName is a placeholder
You can also use a predefined index with the Stream method. You may use a Where clause on indexed fields.
var query = session.Query<User, MyUserIndex>();
// or, filtering on an indexed field:
var query = session.Query<User, MyUserIndex>().Where(x => !x.IsDeleted);

using (var enumerator = session.Advanced.Stream<User>(query))
{
    while (enumerator.MoveNext())
    {
        var user = enumerator.Current.Document;
        // do something
    }
}
Example index:
public class MyUserIndex : AbstractIndexCreationTask<User>
{
    public MyUserIndex()
    {
        this.Map = users =>
            from u in users
            select new
            {
                u.IsDeleted,
                u.Username,
            };
    }
}
Documentation: What are indexes?
Session : Querying : How to stream query results?
Important note: the Stream method will NOT track objects. If you change objects obtained from this method, SaveChanges() will not be aware of any change.
Other note: you may get the following exception if you do not specify the index to use.
InvalidOperationException: StreamQuery does not support querying dynamic indexes. It is designed to be used with large data-sets and is unlikely to return all data-set after 15 sec of indexing, like Query() does.
I'm studying CouchDB and I'm picturing a worst-case scenario:
for each document type I need 3 views, and this application can generate 10 thousand document types.
By "document type" I mean the structure of the document.
After insertion of a new document, CouchDB makes 3*10K calls to view functions searching for the right document type.
Is this true?
Is there a smarter solution than creating a database for each doc type?
Document example (assume that no documents share the same structure; in this example the data is under different keys):
[
    {
        "_id": "1251888780.0",
        "_rev": "1-582726400f3c9437259adef7888cbac0",
        "type": "sensorX",
        "value": { "valueA": "123" }
    },
    {
        "_id": "1251888780.0",
        "_rev": "1-37259adef7888cbac06400f3c9458272",
        "type": "sensorY",
        "value": { "valueB": "456" }
    },
    {
        "_id": "1251888780.0",
        "_rev": "1-6400f3c945827237259adef7888cbac0",
        "type": "sensorZ",
        "value": { "valueC": "789" }
    }
]
Views example (in this example, only one per doc type):
"views": {
    "sensorX": {
        "map": "function(doc) { if (doc.type == 'sensorX') emit(null, doc.value.valueA) }"
    },
    "sensorY": {
        "map": "function(doc) { if (doc.type == 'sensorY') emit(null, doc.value.valueB) }"
    },
    "sensorZ": {
        "map": "function(doc) { if (doc.type == 'sensorZ') emit(null, doc.value.valueC) }"
    }
}
The results of the map() function in CouchDB are cached the first time you request the view; after that, only new or changed documents have to be mapped. Let me explain with a quick illustration:
1. You insert 100 documents to CouchDB.
2. You request the view. Now the 100 documents have the map() function run against them and the results are cached.
3. You request the view again. The data is read from the indexed view data; no documents have to be re-mapped.
4. You insert 50 more documents.
5. You request the view. The 50 new documents are mapped and merged into the index with the old 100 documents.
6. You request the view again. The data is read from the indexed view data; no documents have to be re-mapped.
I hope that makes sense. If you're concerned about a big load being generated when a user requests the view after lots of new documents have been added, you could have your import process call the view (to re-map the new documents) and have the user's request for the view include stale=ok, as in the sketch below.
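For illustration, assuming a database named sensors and a design document named app (both placeholders), the import-side warm-up request and the user-facing request could look like this:
# import process: triggers indexing of any new documents
curl 'http://localhost:5984/sensors/_design/app/_view/sensorX'
# user request: served from the existing index without waiting for re-indexing
curl 'http://localhost:5984/sensors/_design/app/_view/sensorX?stale=ok'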
The CouchDB book is a really good resource for information on CouchDB.
James has a great answer.
It looks like you are asking the question "what are the values of documents of type X?"
I think you can do that with one view:
function(doc) {
    // _view/sensor_value
    var val_names = { "sensorX": "valueA"
                    , "sensorY": "valueB"
                    , "sensorZ": "valueC"
                    };
    var value_name = val_names[doc.type];
    if (value_name) {
        // e.g. "sensorX" -> "123"
        // or "sensorZ" -> "789"
        emit(doc.type, doc.value[value_name]);
    }
}
Now, to get all values for sensorX, you query /db/_design/app/_view/sensor_value with the parameter ?key="sensorX". CouchDB will return all values for sensorX, which come from the documents' value.valueA field. (For sensorY, they come from value.valueB, etc.)
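A concrete request could look like this (host and database name are placeholders; the key is JSON, so its quotes are URL-encoded as %22):
curl 'http://localhost:5984/db/_design/app/_view/sensor_value?key=%22sensorX%22'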
Future-proofing
If you might have new document types in the future, something more general might be better:
function(doc) {
    if (doc.type && doc.value) {
        emit(doc.type, doc.value);
    }
}
That is very simple, and any document will work as long as it has a type and a value field. Next, to get the valueA, valueB, etc. out of the view rows, just do that on the client side, as in the sketch below.
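A small sketch of that client-side step, assuming resp is the parsed JSON response of the view query:
// Map each sensor type to the field name that holds its value.
var valNames = { "sensorX": "valueA", "sensorY": "valueB", "sensorZ": "valueC" };
resp.rows.forEach(function(row) {
    // row.key is the doc type, row.value is the doc's "value" object
    var val = row.value[valNames[row.key]];
    console.log(row.key + " -> " + val);
});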
If using the client is impossible, use a _list function.
function(head, req) {
    // _list/sensor_val
    //
    start({'headers': {'Content-Type': 'application/json'}});

    // Updating this will *not* cause the map/reduce view to re-build.
    var val_names = { "sensorX": "valueA"
                    , "sensorY": "valueB"
                    , "sensorZ": "valueC"
                    };

    var row;
    var doc_type, val_name, doc_val;
    while (row = getRow()) {
        doc_type = row.key;
        val_name = val_names[doc_type];
        doc_val = row.value[val_name];
        send("Doc " + row.id + " is type " + doc_type + " and value " + doc_val);
    }
}
Obviously use send() to send whichever format you prefer for the client (such as JSON).