Large data set load - server side or client side - SQL

Has anyone out there implemented large SQL stored-procedure data returns to AG Grid without waiting many minutes for the data to load on screen?
I'm struggling to get this implemented without having to wait and wait.
When I say large data, I'm talking about 50,000+ records.
Any help would be awesome.
Thanks

We're hooking through APIs to SQL.
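One approach that may help, sketched below under some assumptions: page on the server and let AG Grid's server-side or infinite row model request blocks on demand, instead of returning all 50,000+ rows in one call. The controller, the dbo.Orders table and columns, and the connection string are hypothetical placeholders, and the request shape would need to match what your grid actually sends.

// Minimal sketch only: assumes ASP.NET Core and Microsoft.Data.SqlClient, with a
// hypothetical dbo.Orders table and connection string. Adapt to your schema and
// to the startRow/endRow request AG Grid's server-side row model sends.
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private const string ConnectionString =
        "Server=.;Database=Sales;Integrated Security=true;TrustServerCertificate=true";

    // GET api/orders?startRow=0&endRow=100
    [HttpGet]
    public async Task<IActionResult> GetPage(int startRow = 0, int endRow = 100)
    {
        var pageSize = Math.Max(1, endRow - startRow);
        var rows = new List<object>();

        await using var conn = new SqlConnection(ConnectionString);
        await conn.OpenAsync();

        // Return only the requested window instead of the full result set.
        const string sql = @"
            SELECT OrderId, CustomerName, OrderDate, Total
            FROM dbo.Orders
            ORDER BY OrderId
            OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY;";

        await using var cmd = new SqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@Offset", startRow);
        cmd.Parameters.AddWithValue("@PageSize", pageSize);

        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            rows.Add(new
            {
                OrderId = reader.GetInt32(0),
                CustomerName = reader.GetString(1),
                OrderDate = reader.GetDateTime(2),
                Total = reader.GetDecimal(3)
            });
        }

        return Ok(rows);
    }
}

With this shape, the grid only ever waits for one page at a time, and the stored procedure or query can be tuned for the paged access pattern rather than for a single 50,000-row dump.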

Related

Processing 5M Records from Database to SFDC using MuleSoft as ETL

We are using MuleSoft as the transport layer between the DB and SFDC, and there is a use case to do a one-time data migration from the DB to SFDC.
The DB has over 5M records and they want to push everything to SFDC. Just to mention, Mule is a standalone server running on a Windows VM.
To be more specific, I would like to know how to retrieve the 5M records from the DB (using Mule 4). Should I fetch only 100K at a time, or just pull everything and set the Batch Block Size to 10K? What is the best way to do it?
Does anyone have ideas on how to do this faster? I appreciate your thoughts on this.
Thank you
Use pagination. 100K or as many as you can per page.
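Mule 4 has its own batch job and paging support, so the sketch below is only meant to illustrate the keyset-pagination idea behind that answer in general-purpose code, not a Mule implementation. The dbo.Records table, the Id key, and the page size are assumptions.

// Illustration of keyset pagination only (not Mule-specific): pull the 5M rows
// in fixed-size pages keyed on the last Id seen, so no single fetch loads
// everything into memory. Table and column names are assumptions.
using Microsoft.Data.SqlClient;

const int pageSize = 100_000;
long lastId = 0;
var connectionString = "Server=.;Database=Source;Integrated Security=true;TrustServerCertificate=true";

while (true)
{
    var page = new List<(long Id, string Payload)>();

    await using (var conn = new SqlConnection(connectionString))
    {
        await conn.OpenAsync();
        await using var cmd = new SqlCommand(
            @"SELECT TOP (@PageSize) Id, Payload
              FROM dbo.Records
              WHERE Id > @LastId
              ORDER BY Id;", conn);
        cmd.Parameters.AddWithValue("@PageSize", pageSize);
        cmd.Parameters.AddWithValue("@LastId", lastId);

        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            page.Add((reader.GetInt64(0), reader.GetString(1)));
    }

    if (page.Count == 0)
        break;                 // no more rows to migrate

    // Push this page to the target system (SFDC) in smaller batches here.
    lastId = page[^1].Id;      // resume after the last key we saw
}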

How to pause a Web API? Is it even possible?

We are facing an odd issue.
We have two parts
1. Windows task to update database
2. Web API using same database to provide search results
We want to pause the API while the Windows task is updating the database, so search results won't be partial or incorrect.
Is it possible to pause API requests while the database is being updated? The database update takes about 10-15 seconds.
When you say "pause", what do you expect to happen to callers? It seems like you are choosing to give them errors instead of incomplete data.
If possible, your database updates should be wrapped in a transaction so consumers get current, complete data until the transaction is committed. Then, the next call will have updated and complete data.
I would hope that transactional processing would also help you recover from errors in your updates. What happens now if something fails part way through an update?
This post may help you: How to Decide to use Database Transactions
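As a rough illustration of that transaction suggestion (the dbo.SearchIndex and dbo.StagingIndex tables and the SQL text are made up), the Windows task could wrap its writes like this so the API only ever reads committed, complete data:

// Sketch of wrapping the Windows task's updates in a single transaction so the
// API never reads a half-finished update. The tables and SQL are hypothetical;
// the catch block rolls everything back on failure.
using Microsoft.Data.SqlClient;

var connectionString = "Server=.;Database=Search;Integrated Security=true;TrustServerCertificate=true";

using var conn = new SqlConnection(connectionString);
conn.Open();

using var tx = conn.BeginTransaction();
try
{
    using (var clear = new SqlCommand("DELETE FROM dbo.SearchIndex;", conn, tx))
        clear.ExecuteNonQuery();

    using (var rebuild = new SqlCommand(
        "INSERT INTO dbo.SearchIndex (Term, DocumentId) SELECT Term, DocumentId FROM dbo.StagingIndex;",
        conn, tx))
        rebuild.ExecuteNonQuery();

    tx.Commit();   // readers see the old data until this point
}
catch
{
    tx.Rollback(); // a failed update leaves the previous data intact
    throw;
}

Whether readers see the old or new data during the update then depends on the isolation level on the API side; snapshot isolation or READ COMMITTED with row versioning avoids blocking them for the 10-15 seconds the update takes.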
If the API knows when this task is starting, you can have the thread sleep for 10 seconds by calling:
System.Threading.Thread.Sleep(10000)

BigQuery web UI is unresponsive & eventually crashes

When I click on "Details" to see a preview of the data in the table, the web UI locks up and I see the following errors. I've tried refreshing and restarting my browser, but it doesn't help.
Yes, we (the BQ team) are aware of performance issues when viewing very repeated rows in the BigQuery web UI. The public genomics tables are known to tickle these performance issues since individual rows of their table are highly repeated.
We're considering a few methods of fixing this, but the simplest would probably be to default to the JSON display of rows for problematic tables, and allow switching to the tabular view with a "View it at your own risk!"-style warning message.
It took a little time for me too, but it eventually loaded in the UI (1 min 40 sec).
I think it is because of how the table data is presented in the native BQ UI in Preview mode.
As you may have noticed, it is shown in a sort of hierarchical way.
I noticed this slowness for heavy tables (in terms of row size and/or hierarchical structure) when this was introduced. And by the way, only one row is shown for this particular table because of this.
Of course this is just my guess - it would be great to hear from the Google team!
Meanwhile, when I use an internal application that uses the same APIs to preview table data, I don't see any slowness at all (10 rows in 3 sec), which supports my guess above.

What's the maximum speed required between server and client?

Help please,
I have an Access database with about 40 users placed around Australia, connected to a SQL Server backend based in Sydney. Performance is good most of the time but occasionally it deteriorates. The biggest complaint from users is that after updating a record it takes a second or two to move to the next line in a datasheet subform. The setup is a master form, with a datasheet-style subform for entering order lines.
I have noted when these complaints start, PING times from the local PC to the SQL Server can get above 150ms and up to 300ms, when the norm is around 30-50ms. Is this a problem? Is the PING time a good reference for speed? Should 150ms still be acceptable?
My next issue is, we are wanting to move the SQL Server to the US. It would appear the best PING times I get are around 220ms. I have tested the connection and the lag on my forms is really bad. Has anyone ever had to connect to a SQL Server in the US from Australia? Can it be done? Should I be looking at a different platform?
Any help appreciated. Thanks. CE.
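Ping time is a fair proxy here, because every round trip your Access forms make to the linked SQL Server tables pays that latency, and at 220 ms per round trip the lag adds up quickly. If you want to sample it from a user's PC over time rather than eyeballing a few pings, a quick sketch (the hostname is a placeholder):

// Quick latency sampler: sends a handful of ICMP pings to the SQL Server host
// and prints the round-trip times, so you can compare "good" vs "bad" periods.
// Replace sqlserver.example.com with your server's name or IP.
using System.Net.NetworkInformation;

var host = "sqlserver.example.com";
using var ping = new Ping();

for (var i = 0; i < 10; i++)
{
    var reply = ping.Send(host, timeout: 2000);
    Console.WriteLine(reply.Status == IPStatus.Success
        ? $"Reply from {reply.Address}: {reply.RoundtripTime} ms"
        : $"Ping failed: {reply.Status}");
    Thread.Sleep(1000);
}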

Best practice for inserting and querying data from memory

We have an application that takes real-time data and inserts it into a database. It is online for 4.5 hours a day. We insert data second by second into 17 tables. The user may at any time query any table for the latest second's data and some records in the history...
Handling the feed and insertion is done using a C# console application...
Handling user requests is done through a WCF service...
We figured out that insertion is our bottleneck; most of the time is taken there. We invested a lot of time trying to fine-tune the tables and indices, yet the results were not satisfactory.
Assuming that we have sufficient memory, what is the best practice for inserting data into memory instead of into the database? Currently we are using DataTables that are updated and inserted every second.
A colleague of ours suggested another WCF service, instead of the database, between the feed handler and the WCF user-requests handler. The WCF mid-layer is supposed to be TCP-based and keeps the data in its own memory. One may say that the feed handler could deal with user requests itself instead of having a middle layer between two processes, but we want to separate things so that if the feed handler crashes we are still able to provide the user with the current records.
We are limited in time and want to move everything to memory in a short period. Is having a WCF service in the middle of two processes a bad thing to do? I know that the requests add some overhead, but all three processes (feed handler, in-memory database (WCF), user-request handler (WCF)) are going to be on the same machine and bandwidth will not be much of an issue.
Please assist!
I would look into creating a cache of the data (such that you can also reduce database selects), and invalidate data in the cache once it has been written to the database. This way, you can batch up calls to do a larger insert instead of many smaller ones, but keep the data in-memory such that the readers can read it. Actually, if you know when the data goes stale, you can avoid reading the database entirely and use it just as a backing store - this way, database performance will only affect how large your cache gets.
Invalidating data in the cache will be based either on its having been written to the database or on its having gone stale, whichever comes last, not first.
The cache layer doesn't need to be complicated, however it should be multi-threaded to host the data and also save it in the background. This layer would sit just behind the WCF service, the connection medium, and the WCF service should be improved to contain the logic of the console app + the batching idea. Then the console app can just connect to WCF and throw results at it.
Update: the only other thing to say is invest in a profiler to see if you are introducing any performance issues in code that are being masked. Also, profile your database. You mention you need fast inserts and selects - unfortunately, they usually trade-off against each other...
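A rough sketch of that cache idea, assuming a thread-safe in-memory store plus a background loop that flushes pending records to the database in batches. The Tick type, the five-second flush interval, and the saveBatchToDatabase delegate are all placeholders.

// Sketch of the suggested cache layer: writers add records to a thread-safe
// in-memory structure that readers can query immediately, while a background
// loop drains the pending queue and writes it to the database in batches.
using System.Collections.Concurrent;

public record Tick(string Symbol, DateTime Timestamp, decimal Value);

public class TickCache
{
    private readonly ConcurrentDictionary<string, Tick> _latest = new();
    private readonly ConcurrentQueue<Tick> _pendingWrites = new();

    public void Add(Tick tick)
    {
        _latest[tick.Symbol] = tick;     // readers always see the newest value
        _pendingWrites.Enqueue(tick);    // queued for the next database flush
    }

    public Tick? GetLatest(string symbol) =>
        _latest.TryGetValue(symbol, out var tick) ? tick : null;

    // Runs in the background; batches many small inserts into one larger write.
    public async Task FlushLoopAsync(Func<IReadOnlyList<Tick>, Task> saveBatchToDatabase,
                                     CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            try { await Task.Delay(TimeSpan.FromSeconds(5), token); }
            catch (OperationCanceledException) { break; }

            var batch = new List<Tick>();
            while (_pendingWrites.TryDequeue(out var tick))
                batch.Add(tick);

            if (batch.Count > 0)
                await saveBatchToDatabase(batch);
        }
    }
}

The WCF service would hold one instance of this cache, serve reads from _latest, and hand the batches to whatever insert routine you already have, so database write speed only affects how far behind the flush runs, not how quickly readers get data.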
What kind of database are you using? MySQL has a MEMORY storage engine which would seem to be suited to this sort of thing.
Are you using DataTable with DataAdapter? If so, I would recommend that you drop them completely. Insert your records directly using DBCommand. When users request reports, read data using DataReader, or populate DataTable objects using DataTable.Load(IDataReader).
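A hedged illustration of that suggestion (the dbo.Ticks table, its columns, and the connection string are made up): insert directly with a parameterised SqlCommand, and load results into a DataTable via a data reader only when a user asks for a report.

// Sketch of the "skip the DataAdapter" suggestion: write rows with a plain
// parameterised command, and read them back through a DataReader into a
// DataTable only on request. Table and column names are made up.
using System.Data;
using Microsoft.Data.SqlClient;

public static class FeedStore
{
    private const string ConnectionString =
        "Server=.;Database=Feed;Integrated Security=true;TrustServerCertificate=true";

    public static void InsertTick(string symbol, DateTime timestamp, decimal value)
    {
        using var conn = new SqlConnection(ConnectionString);
        conn.Open();

        using var cmd = new SqlCommand(
            "INSERT INTO dbo.Ticks (Symbol, Ts, Value) VALUES (@Symbol, @Ts, @Value);", conn);
        cmd.Parameters.AddWithValue("@Symbol", symbol);
        cmd.Parameters.AddWithValue("@Ts", timestamp);
        cmd.Parameters.AddWithValue("@Value", value);
        cmd.ExecuteNonQuery();
    }

    public static DataTable GetLatest(string symbol, int rows)
    {
        using var conn = new SqlConnection(ConnectionString);
        conn.Open();

        using var cmd = new SqlCommand(
            "SELECT TOP (@Rows) Symbol, Ts, Value FROM dbo.Ticks WHERE Symbol = @Symbol ORDER BY Ts DESC;",
            conn);
        cmd.Parameters.AddWithValue("@Rows", rows);
        cmd.Parameters.AddWithValue("@Symbol", symbol);

        using var reader = cmd.ExecuteReader();
        var table = new DataTable();
        table.Load(reader);   // DataTable.Load(IDataReader) as mentioned above
        return table;
    }
}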
Storing data in memory carries the risk of losing data in the case of crashes or power failures.