A LINQ query returns 17,000 records. An OData controller Get() returns this query, but it takes more than 6 minutes. How can I reduce the time?
You can try adding paging logic to your query. The idea is to fetch only a set number of records per request instead of all of them, which should reduce your query time.
Note that this solution only works if your client is also able to support the paging logic.
Refer to these articles for possible implementations:
ODATA:
https://learn.microsoft.com/en-us/aspnet/web-api/overview/odata-support-in-aspnet-web-api/supporting-odata-query-options#server-paging
SQL:
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/paging-through-a-query-result
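For example, with ASP.NET Web API OData, server-driven paging can be turned on directly on the controller action. This is only a rough sketch: the Record entity, RecordsController, and _db context are hypothetical, and older Web API OData versions use [Queryable(PageSize = ...)] instead of [EnableQuery].
using System.Linq;
using System.Web.OData;   // [EnableQuery]; older versions use [Queryable] from System.Web.Http.OData

public class RecordsController : ODataController
{
    private readonly MyDbContext _db = new MyDbContext();   // hypothetical EF context

    // PageSize caps how many records each response contains; the response
    // includes a nextLink that the client follows to fetch the next page.
    [EnableQuery(PageSize = 100)]
    public IQueryable<Record> Get()
    {
        // Returning IQueryable lets the paging be translated into the database
        // query instead of materializing all 17,000 rows on every call.
        return _db.Records;
    }
}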
I am using TypeORM for SQL Server in my application. When I pass a native query like connection.query("select * from user where id = 1"), the performance is really good; it takes less than 0.5 seconds.
If we use the findOne or QueryBuilder method, it takes around 5 seconds to get a response.
On further debugging, we found that passing the value directly into the query like this,
return getConnection("vehicle").createQueryBuilder()
    .select("vehicle")
    .from(Vehicle, "vehicle")
    .where("vehicle.id = '" + id + "'").getOne();
is faster than
return getConnection("vehicle").createQueryBuilder()
    .select("vehicle")
    .from(Vehicle, "vehicle")
    .where("vehicle.id = :id", { id: id }).getOne();
Is there any optimization we can do to fix the issue with the parameterized query?
I don't know TypeORM, but the difference seems clear to me. In one case you query the database for the whole table and filter it locally; in the other you send the filter to the database, and it filters the data before sending it back to the client.
Depending on the size of the table, this has a big impact. Consider picking one record out of 10 million: just the time to transfer the data to the local machine is 10 million times slower.
By this I mean if I am running a query that will return a million results, can I return a thousand at a time and append, much like how an AJAX call will return results and append to HTML?
You could write your own with the TOP clause in MSSQL or LIMIT in MySQL. Alter your query to return a maximum of 1,000 records, and keep a running start index.
Appending to HTML is a separate problem. Perhaps your best option is to write a function that gets paged data on the server, and call it multiple times from the client, as in the sketch below.
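For illustration only (table and column names such as dbo.Results, Id, and Name are made up), a TOP-based page fetch with a running start index could look like this in ADO.NET:
using System.Collections.Generic;
using System.Data.SqlClient;

// Fetches the next page of at most 1,000 rows after the last id the client has seen.
static List<string> GetPage(SqlConnection conn, int lastSeenId)
{
    const string sql =
        @"SELECT TOP 1000 Id, Name
          FROM dbo.Results
          WHERE Id > @lastSeenId   -- running start index from the previous page
          ORDER BY Id";

    List<string> page = new List<string>();
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@lastSeenId", lastSeenId);
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                page.Add(reader.GetString(1));   // Name column
        }
    }
    return page;
}
The client keeps the last Id it received and passes it back as the start index for the next call.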
We're developing an application based on Neo4j and PHP with about 200k nodes, where every node has a property like type='user' or type='company' to denote a specific entity of our application. We need to get the count of all nodes of a specific type in the graph.
We created an index for every entity (users, companies) which holds the nodes of that type. So the users index holds 130K nodes, and the rest are in companies.
With Cypher we are querying like this:
START u=node:users('id:*')
RETURN count(u)
And the result is:
Returned 1 row. Query took 4080 ms
The server has the default configuration with a few tweaks, but 4 seconds is too slow for our needs. Bear in mind that the database will grow by 20K nodes a month, so we need this query to perform very well.
Is there any other way to do this, maybe with Gremlin, or with some other server plugin?
I'll cache those results, but I want to know if it is possible to tweak this.
Thanks a lot and sorry for my poor English.
Finally, using Gremlin instead of Cypher, I found the solution.
g.getRawGraph().index().forNodes('NAME_OF_USERS_INDEX').query(
    new org.neo4j.index.lucene.QueryContext('*')
).size()
This method uses the Lucene index to get an "approximate" row count.
Thanks again to all.
Mmh,
this is really about the performance of that Lucene index. If you just need this single query most of the time, why not keep an integer with the total count on some node somewhere and update it together with the index insertions? For good measure, run an update with the query above on it every night.
You could instead keep a property on a specific node up to date with the number of such nodes, where updates are done guarded by write locks:
Transaction tx = db.beginTx();
try {
    ...
    ...
    // Take a write lock on the counting node so concurrent increments are not lost.
    tx.acquireWriteLock( countingNode );
    countingNode.setProperty( "user_count",
        ((Integer) countingNode.getProperty( "user_count" )) + 1 );
    tx.success();
} finally {
    tx.finish();
}
If you want the best performance, don't model your entity categories as properties on the node. Instead, do it like this:
company1-[:IS_ENTITY]->companyentity
Or, if you are using 2.0:
company1:COMPANY
The second would also allow you to automatically update your index in a separate background thread, by the way; IMO one of the best new features of 2.0.
The first method should also prove more efficient, since making a "hop" generally takes less time than reading a property from a node. It does, however, require you to create a separate index for the entities.
Your queries would look like this:
v2.0
MATCH (company:COMPANY)
RETURN count(company)
v1.9
START entity=node:entityindex(value='company')
MATCH company-[:IS_ENTITY]->entity
RETURN count(company)
I'm using RiaServices to populate a grid using an EntityQuery.
Since my database has millions of rows, I want to query only the current page, but also to bring the total number of rows for paging purposes.
Ex: 100 rows total
entityQuery.Skip(0).Take(10); //for the first page
entityQuery.IncludeTotalCount = true;
That brings me 10 rows, and loadOperation.TotalEntityCount = 100. Perfect.
But imagine this:
Ex: 100 rows total
entityQuery.Where(p => p.Id >= 1 && p.Id <= 50).Skip(0).Take(10); //with filter now
entityQuery.IncludeTotalCount = true;
That brings me 10 rows, and loadOperation.TotalEntityCount = 100 (I need 50!)
Here is the problem: for paging purposes, I need the total number of entities satisfying my filter, not all of them.
Is it possible to change the query for "IncludeTotalCount" or should I forget about TotalEntityCount and query the server two times?
Cheers,
André Carlucci
RIA Services handles total count requests by stripping off the skip/take paging directives (as you'd expect) and passing the unpaged query to the protected virtual DomainService.Count method. I recommend overriding this method and setting a breakpoint so you can verify that the correct count query is being passed to your service. If you're using the EF DomainService, the base implementation of Count will simply do a query.Count(), so things should be behaving as you expect - I'm not sure yet why they aren't. What type of DomainService are you using?
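As a sketch of that suggestion (the PersonDomainService, MyEntities, and Person names here are made up; the Count override itself is the real extension point), the breakpoint would go in an override like this:
public class PersonDomainService : LinqToEntitiesDomainService<MyEntities>
{
    public IQueryable<Person> GetPeople()
    {
        return this.ObjectContext.People;
    }

    // Called with the unpaged query (skip/take stripped, filters preserved)
    // whenever the client sets IncludeTotalCount = true.
    protected override int Count<T>(IQueryable<T> query)
    {
        // Break here to inspect the query that produces TotalEntityCount.
        return base.Count(query);
    }
}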
I have a SQL query that can bring back a large number of rows via a DataReader. At the moment I query the DB, transform the result set into a List(Of T), and data-bind the grid to the list.
This can occasionally result in a timeout due to the size of the dataset.
I currently have a three-tier setup whereby the UI acts on the list of objects in the business layer.
Can anyone suggest the best approach to implementing lazy loading in this scenario? Or is there some other way of implementing this cleanly?
I am currently using Visual Studio 2005, .NET 2.0
EDIT: How would paging be used in this instance?
LINQ to SQL seems to make sense in your situation.
Otherwise, if for any reason you don't want to use LINQ to SQL (e.g. you are on .NET 2.0), consider writing an iterator that reads the DataReader and converts each row to the appropriate object:
IEnumerable<MyObject> ReadDataReader(IDataReader reader) {
    // Each row is materialized only when the caller asks for the next element.
    while (reader.Read())
        yield return FetchObject(reader);
}
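A hypothetical usage on .NET 2.0, assuming the FetchObject helper and open data reader from the snippet above (the grid variable stands in for whatever control you bind), might pull only the first 1,000 rows and stop reading:
List<MyObject> page = new List<MyObject>();
foreach (MyObject item in ReadDataReader(reader))
{
    page.Add(item);
    if (page.Count == 1000)
        break;   // stop enumerating; the remaining rows are never pulled from the reader
}
grid.DataSource = page;
grid.DataBind();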
Do you need to bring back all the data at once? You could consider paging.
Paging might be your best solution. If you are using SQL Server 2005 or greater, there is a new feature for this, ROW_NUMBER():
WITH MyThings AS
(
    SELECT ThingID, DateEntered,
           ROW_NUMBER() OVER (ORDER BY DateEntered) AS RowNumber
    FROM dbo.Things
)
SELECT *
FROM MyThings
WHERE RowNumber BETWEEN 50 AND 60;
There is an example by David Hayden which is very helpful in demonstrating the SQL.
This method decreases the number of records returned, reducing the overall load time. It does mean that you will have to do a bit more to track where you are in the sequence of records, but it is worth the effort; see the sketch below.
The standard paging technique requires everything to come back from the database and then be filtered in the middle tier or client tier (code-behind); this method reduces the records to a more manageable subset.
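For illustration (the helper name and page-size handling are invented), the row range can be parameterized so the caller only has to track the current page number:
using System.Data.SqlClient;

// Returns a reader over one page of rows; the caller advances pageNumber as the user pages.
static SqlDataReader GetThingsPage(SqlConnection conn, int pageNumber, int pageSize)
{
    const string sql =
        @"WITH MyThings AS
          (
              SELECT ThingID, DateEntered,
                     ROW_NUMBER() OVER (ORDER BY DateEntered) AS RowNumber
              FROM dbo.Things
          )
          SELECT ThingID, DateEntered
          FROM MyThings
          WHERE RowNumber BETWEEN @startRow AND @endRow;";

    SqlCommand cmd = new SqlCommand(sql, conn);
    cmd.Parameters.AddWithValue("@startRow", (pageNumber - 1) * pageSize + 1);
    cmd.Parameters.AddWithValue("@endRow", pageNumber * pageSize);
    return cmd.ExecuteReader();
}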