I have a dataset of 4,000 records that I list with *ngFor in Angular 2, but it slows my application's performance down terribly. If I use smaller data, like 500 or 1,000 records, it works much better. Kindly suggest what I should do to cope with this.
Thanks
When you have a huge dataset, you can use the ngx-virtual-scroller or ngx-infinite-scroll npm modules:
https://www.npmjs.com/package/ngx-virtual-scroller
https://www.npmjs.com/package/ngx-infinite-scroll
I'm currently using a 10% sample of a very large dataset (10 variables, over 300 million rows), which amounts to over 200 GB of data when the full dataset is stored in .dta format. Stata is able to handle operations like egen, collapse, merging, etc. in a reasonable amount of time for the 10% sample when using Stata-MP on a UNIX server with ~50 GB of RAM and multiple cores.
However, now I want to move on to analyzing the whole sample. Even if I use a machine that has enough RAM to hold the dataset, simply generating a variable takes ages. (I suspect that background operations are causing Stata to run into virtual memory.)
The problem is also very amenable to parallelization, i.e., the rows in the dataset are independent of each other, so I can just as easily think about the one large dataset as 100 smaller datasets.
Does anybody have any suggestions for how to process/analyze this data or can give me feedback on some suggestions I currently have? I mostly use Stata/SAS/MATLAB so perhaps there are other approaches that I am simply unaware of.
Here are some of my current ideas:
Split the dataset up into smaller datasets and use informal parallel processing in Stata. I can run my cleaning/processing/analysis on each partition and then merge the results afterwards without having to store all the intermediate parts.
Use SQL to store the data and also perform some of the data manipulation, such as aggregating over certain values. One concern here is that some tasks that Stata can handle fairly easily, such as comparing values across time, won't work so well in SQL. I'm also already running into performance issues when running some queries on a 30% sample of the data, though perhaps I'm not indexing correctly, etc. Shard-Query seems like it could help with this, but I have not researched it too thoroughly yet.
R also looks promising, but I'm not sure if it would solve the problem of working with this enormous amount of data.
Thanks to those who have commented and replied. I realized that my problem is similar to this thread. I have re-written some of my data manipulation code in Stata into SQL and the response time is much quicker. I believe I can make large optimization gains by correctly utilizing indexes and using parallel processing via partitions/shards if necessary. After all the data manipulation has been done, I can import that data via ODBC in Stata.
Since you are familiar with Stata, there is a well-documented FAQ about large datasets in Stata, "Dealing with Large Datasets", which you might find helpful.
I would clean the data via columns: split them up, run any column-specific cleaning routines, and merge them back in later.
Depending on your machine resources, you should be able to hold the individual columns in multiple temporary files using tempfile. Taking care to select only the variables or columns most relevant to your analysis should reduce the size of your set quite a lot.
I am making a quiz and I am expecting a lot of players for it. The quiz has a fixed number of questions. The quiz system can either fetch individual questions one by one from the MySQL database and display them to the user, or it can fetch all the questions when the user logs in and display them one by one. Can the second method significantly reduce the load on the server caused by the large number of SQL queries? I am talking about 500-600 users playing the game simultaneously.
The only way to know for sure is to benchmark both methods.
For what it's worth, I've recently improved the throughput of a performance-critical piece of code ten-fold by replacing a large number of small SQL queries with a small number of large queries.
A well indexed database should have no problems at all with single queries, which is what I would prefer. Bonus: those small queries will remain in the server's query cache and thus be delivered way faster if requested again.
It depends on the numbers. Are you displaying all the questions, or are you picking a random 5 out of 1,000?
If the number of questions is small, fetching them in a single query will be better.
If the number of questions is large, fetching them in a single query may use a lot of memory, so fetching them one-by-one will be better.
If the number of questions is small, it would make little difference either way. And if the number of questions is large, fetching in a single query is more efficient since you'll avoid most of the back-and-forth network latencies.
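For illustration, here is a minimal PHP/mysqli sketch of the fetch-everything-at-login approach (the questions table, its columns, and the connection details are made up for this example):

$db = new mysqli('localhost', 'user', 'pass', 'quiz');

// One round trip at login: fetch every question for this quiz...
$result = $db->query("SELECT id, question_text FROM questions WHERE quiz_id = 1");
$questions = $result->fetch_all(MYSQLI_ASSOC);   // then keep them in the session

// ...instead of one query per question during the game, e.g.
// "SELECT id, question_text FROM questions WHERE id = ?" repeated 15 times.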
Besides the 15 SELECT queries, each user's answer is an individual INSERT query. The best case for the DB server is two batches of queries per user: one SELECT and one multi-row INSERT (sketched below).
In theory you may receive 500-600 inserts per second; in practice there will be fewer.
In any case, you should avoid a large number of queries, because the INSERT locks carry overhead.
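A minimal sketch of that batching in PHP/mysqli (the answers table and its column names are made up for illustration):

// Build one VALUES tuple per answer, escaping user input first
$rows = array();
foreach ($answers as $questionId => $answer) {
    $rows[] = sprintf("(%d, %d, '%s')",
        $userId, $questionId, $db->real_escape_string($answer));
}

// One multi-row INSERT instead of 15 separate ones:
// a single round trip and a single lock acquisition
$db->query("INSERT INTO answers (user_id, question_id, answer) VALUES "
    . implode(', ', $rows));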
How many queries on one webpage is good for performance? Say the page is a home page that is viewed many times.
And how about this:
$sql1 = mysql_query("SELECT * FROM a", $db1);
while ($row = mysql_fetch_assoc($sql1)) {
    // one query against database 2 for every row of a
    // ('id' is assumed to be a's key column here)
    $sql2 = mysql_query("SELECT * FROM b WHERE aid='" . $row['id'] . "'", $db2);
    $b = mysql_fetch_assoc($sql2);
}
Is it good? Actually, I could combine $sql1 and $sql2 with an INNER JOIN, but the problem is that $sql1 queries data from database 1 and $sql2 queries data from database 2, and I use Parallels Plesk Panel, which doesn't allow me to add the same database user to multiple databases.
If I use this code on my website, is it good? Or is there another way to do this?
Thanks...
Actually, you have two questions in one:
a general one and a particular one.
Both have obvious answers, in my opinion.
How many queries in one webpage is good performance?
There is no direct connection between the number of queries and performance. Database setup, architecture, and tuning are responsible for performance.
The number of queries should be dictated by the database architecture alone. Use as many queries as you need. Don't reduce the number of queries at any cost; do it only in pursuit of performance.
is it good?
Does it matter if you have no choice?
And another, unspoken question:
Should I be concerned about this code snippet performance?
Should you?
Do you have any performance issues at the moment?
If not, why worry at all? Why worry about this particular snippet and not any other?
If yes, you have to profile your code first.
Then build your optimization strategy based on the profiling results. It may be the number of queries; it may be proper indexing, clustering, or a server upgrade.
Don't shoot blindly. Take sensible steps.
I like to keep mine under 12.
In all seriousness though, that's pretty meaningless. If, hypothetically, there were a reason for you to have 800 queries in a page, then you could go ahead and do it. You'll probably find that the number of queries per page is simply dependent on what you're doing, though in normal circumstances I'd be surprised to see over 50 (though these days it can be hard to realise just how many you're doing if you abstract your DB calls away).
Slow queries matter more
I used to be frustrated with a certain piece of PHP-based forum software that ran 35 queries per page and was really slow, but that was a long time ago, and I now know that the reason that particular installation ran slowly had nothing to do with the 35 queries. Only one or two of those queries took most of the time; it just had a couple of really slow queries, which were fixed by well-placed indexes.
I think that identifying and fixing slow queries should come before identifying and eliminating unnecessary queries, as it can potentially make a lot more difference.
Consider even that 20 fast queries might be significantly quicker than one slow query - number of queries does not necessarily relate to speed. Sometimes, you can reduce load and speed up a page by splitting a slow query into multiple queries.
Try caching
There are various ways to cache parts of your application that can really cut down on the number of queries you run without reducing functionality. Tools like memcached make this trivially easy these days, and they run really fast. This can also improve performance a lot more than reducing the number of queries.
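As an illustration, with PHP's Memcached extension a cached query looks roughly like this (the key name, query, and 60-second TTL are arbitrary choices for the sketch):

$cache = new Memcached();
$cache->addServer('localhost', 11211);

$topics = $cache->get('front_page_topics');
if ($topics === false) {                     // cache miss: query MySQL once
    $result = $db->query("SELECT id, title FROM topics ORDER BY updated DESC LIMIT 20");
    $topics = $result->fetch_all(MYSQLI_ASSOC);
    $cache->set('front_page_topics', $topics, 60);   // keep for 60 seconds
}
// every later request within that minute skips the database entirely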
If queries are really unnecessary, and the performance really is making a difference, then remove/combine them
Just consider looking for slow queries and optimizing them, or caching their results, first.
Measure it.
For the specific case outlined above, I'd combine to a join if possible.
In general, multiple queries per request is pretty normal.
Many sites run tens of queries per request and they are fairly performant.
Use a load tester like Apache Bench. (If you have Apache installed, type ab to see the parameters.)
I just had the same problem here.
The problem is that you run a query inside a loop. If table a has 10 rows, that makes 10 queries; if table a has 100 rows, it will make 100 queries. So the more rows table a has, the worse it gets.
The solution is to collect the results into arrays and use the appropriate foreach loops to display the same thing with only 2 queries, as sketched below. I found this site, which is really clear on the topic.
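A rough sketch of the two-query version, staying with the legacy mysql_* functions from the question and assuming table a is keyed by an id column:

// Query 1: fetch all rows of a and remember their ids
$rowsA = array();
$ids = array();
$sql1 = mysql_query("SELECT * FROM a", $db1);
while ($row = mysql_fetch_assoc($sql1)) {
    $rowsA[] = $row;
    $ids[] = (int)$row['id'];
}

// Query 2: fetch all matching rows of b in one go and group them by aid
$rowsB = array();
if ($ids) {
    $sql2 = mysql_query("SELECT * FROM b WHERE aid IN (" . implode(',', $ids) . ")", $db2);
    while ($row = mysql_fetch_assoc($sql2)) {
        $rowsB[$row['aid']][] = $row;
    }
}

// Display: plain foreach loops, no queries inside
foreach ($rowsA as $a) {
    $related = isset($rowsB[$a['id']]) ? $rowsB[$a['id']] : array();
    foreach ($related as $b) {
        // render $a together with its related row $b here
    }
}

Since the two tables live on separate connections, this IN() approach also works where the cross-database INNER JOIN from the question does not.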
I was researching CMSes to use and ran into a review of vBulletin 4.0, which was using about 200 queries on one page load.
I was then worried.
Further research led me to look at how many queries other sites use, and I found that some forum software, such as Invision Power Board and phpBB, uses as few as 6 or 8 queries.
Currently, my site uses about 25 to 40 queries.
Should I be worried?
Don't be worried about the number of queries.
Be worried about:
Pages loading too slowly
The SQL being too complicated to maintain.
Clarification:
SQL being too complicated can come from either too many queries OR a few queries that are very complicated (lots of joins and subqueries, etc.).
If you aim for something, aim for 3 reads and 1 write per HTTP hit.
While these are arbitrary numbers (they are actually taken from Advanced PHP Programming), they emphasize two ideas:
the number of SQL round trips per HTTP call should be low, certainly under 10
there is a difference between reads and writes, and the ratio should favour reads, since writes create contention
Also remember that not all reads are equal: the 3 reads should be highly optimized reads, not full table scans with 4-5 outer joins...
It depends. The more you hit the DB, the more load you have. Just some things to look for: if you need to display values from several different tables, you will probably need to run several queries. If you only have a couple of users and you know you're not going to have lots of data, it probably doesn't matter.
Some things to consider:
Are you running the same query multiple times per page load? If you can reuse the result, do it (see the sketch after this list).
Are you running a query-per-result of another query? If so, maybe allow the DB to do the join and only do one pull.
If your page is slow from hitting the db too much, look at memcached.
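For the first point, a tiny per-request memoization wrapper is often enough. This is a sketch with made-up names, using mysqli:

// Reuse the result of identical queries within a single page load
function cached_query(mysqli $db, $sql) {
    static $results = array();
    if (!isset($results[$sql])) {
        $res = $db->query($sql);
        $results[$sql] = $res ? $res->fetch_all(MYSQLI_ASSOC) : array();
    }
    return $results[$sql];
}

$a = cached_query($db, "SELECT * FROM settings");  // hits MySQL
$b = cached_query($db, "SELECT * FROM settings");  // served from memory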
You might try refactoring your code over time to decrease the number of round trips to SQL Server. One way to do this is caching: for example, data you need frequently can be loaded when the application starts and then grabbed from the cache when it is needed.
Another approach is to de-normalize your data into tables that are specifically designed to give you the data your site needs in fewer queries.
Also consider whether some of those queries (those you use to populate lookup values, for instance) can be cached. That way, if the same query is called on multiple pages, or each time you move from one group of records to another, the database isn't hit again to run exactly the same query. I remember one time we were trying to determine why a site was so slow when the stored proc it ran was very fast, and we found using a profiler that it was being sent over and over and over again when it didn't need to be.
You can cache all those queries with vBulletin. Look at pbnation.com: they have over a million visitors a day and only around 3-4 queries per page load. Everything else is cached in memcached.
I am trying to create a Lucene index of around 2 million records. The indexing time is around 9 hours.
Could you please suggest how to increase performance?
I wrote a terrible post on how to parallelize a Lucene Index. It's truly terribly written, but you'll find it here (there's some sample code you might want to look at).
Anyhow, the main idea is that you chunk up your data into sizable pieces, and then work on each of those pieces on a separate thread. When each of the pieces is done, you merge them all into a single index.
With the approach described above, I'm able to index 4+ million records in approx. 2 hours.
Hope this gives you an idea of where to go from here.
Apart from the writing side (merge factor) and the computation aspect (parallelizing), this is sometimes due to the simplest of reasons: slow input. Many people build a Lucene index from a database. Sometimes you find that a particular query for this data is too complicated and too slow to actually return all the (2 million?) records quickly. Try running just the query and writing the results to disk; if that alone is still on the order of 5-9 hours, you've found your place to optimize (the SQL).
The following article really helped me when I needed to speed things up:
http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
I found that document construction was our primary bottleneck. After optimizing data access and implementing some of the other recommendations, I was able to substantially increase indexing performance.
The simplest way to improve Lucene's indexing performance is to adjust the value of IndexWriter's mergeFactor instance variable. This value tells Lucene how many documents to store in memory before writing them to disk, as well as how often to merge multiple segments together.
http://search-lucene.blogspot.com/2008/08/indexing-speed-factors.html