Using the Linux perf tool to measure the number of times the CPU has to access main memory (RAM)

As I understand it, the perf tool can read the hardware counters available on a processor to provide performance information. For example, I know I can use L1-dcache-load-misses to measure the number of times the L1 cache does not have the requested data.
I want to find out how many times my CPU has to access DRAM while running my program. Using perf list | grep dram throws up hundreds of counters for which I cannot find any information.
So, which event should I use to measure the number of times DRAM has been accessed?

There are a lot of events labelled with "dram", but none of those is the one that you want... :-(
For Intel processors, it is not possible to count all of the DRAM traffic using the performance counters in the core.
The good news is that the performance counters in the memory controllers are accurate in all of the Intel systems that I have tested. The bad news is that they have different names for processors with the "client" uncore and those with the "server" uncore.
On a Xeon Gold or Xeon Platinum server system, the events can be found using:
perf list | grep -i cas
which returns:
uncore_imc_0/cas_count_read/ [Kernel PMU event]
uncore_imc_0/cas_count_write/ [Kernel PMU event]
uncore_imc_1/cas_count_read/ [Kernel PMU event]
uncore_imc_1/cas_count_write/ [Kernel PMU event]
uncore_imc_2/cas_count_read/ [Kernel PMU event]
uncore_imc_2/cas_count_write/ [Kernel PMU event]
uncore_imc_3/cas_count_read/ [Kernel PMU event]
uncore_imc_3/cas_count_write/ [Kernel PMU event]
uncore_imc_4/cas_count_read/ [Kernel PMU event]
uncore_imc_4/cas_count_write/ [Kernel PMU event]
uncore_imc_5/cas_count_read/ [Kernel PMU event]
uncore_imc_5/cas_count_write/ [Kernel PMU event]
The corresponding "perf stat" command is very long, but straightforward to construct.
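As a sketch of what such a command might look like (./my_program is a placeholder for your workload; uncore events are counted system-wide, so -a is required and traffic from other processes is included):
perf stat -a \
  -e uncore_imc_0/cas_count_read/,uncore_imc_0/cas_count_write/ \
  -e uncore_imc_1/cas_count_read/,uncore_imc_1/cas_count_write/ \
  -e uncore_imc_2/cas_count_read/,uncore_imc_2/cas_count_write/ \
  -e uncore_imc_3/cas_count_read/,uncore_imc_3/cas_count_write/ \
  -e uncore_imc_4/cas_count_read/,uncore_imc_4/cas_count_write/ \
  -e uncore_imc_5/cas_count_read/,uncore_imc_5/cas_count_write/ \
  ./my_program
Each CAS corresponds to one 64-byte cache-line transfer at the DRAM, so multiplying the counts by 64 gives bytes read or written (recent perf versions pick up the unit/scale from sysfs and report the totals in MiB directly).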
I don't have access to any processors with the "client" uncore to test, but documents like https://www.intel.com/content/dam/www/public/us/en/documents/manuals/6th-gen-core-family-uncore-performance-monitoring-manual.pdf indicate that the memory controller performance counters are available in those products as well.

(This doesn't fully answer your question; hopefully someone else with more memory-profiling experience will answer. The events I mention are present on Skylake-client; IDK about other CPUs.)
On a CPU without L4 eDRAM cache, you can count L3 misses. e.g. mem_load_retired.l3_miss for loads. (But that might count 2 loads to the same line as two separate misses, even though they both wait for the same LFB to fill, so actually just one access seen by the DRAM.)
And it won't count DRAM access driven by HW prefetch. Also, that's only counting loads, not write-backs of dirty data after stores.
The offcore_response events are super complex because they consider the possibility of multi-socket systems and snooping other sockets, local vs. remote RAM, and so on. Not sure if there's one single event with dram in its name that does what you want. Also, the offcore_response events divide up between demand_code_rd, demand_data_rd, demand_rfo (store misses), and other.
There is offcore_requests.l3_miss_demand_data_rd to count demand data loads (not prefetches) that miss in L3.
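For a single process, a hedged example combining those core-side events might be (event names are Skylake-specific; check perf list on your own CPU, and ./my_program is again a stand-in for your workload):
perf stat -e mem_load_retired.l3_miss,offcore_requests.l3_miss_demand_data_rd ./my_program
Keep the caveats above in mind: this counts demand loads only, not hardware prefetches or dirty write-backs, so it will not match the memory-controller counts exactly.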

Related

Time Complexity for Spark/Distributed Algorithms

If we have the time complexity below for some sequential algorithm, how can we express this time complexity for the same algorithm implemented in Spark (a distributed version), assuming that we have 1 master node and 3 worker nodes in the cluster?
Similarly, how can we express O(n^2) time complexity for Spark algorithm?
Moreover, how can we express Space Complexity in HDFS with replication factor 3?
Thanks in advance!!!!
Let's ignore orchestration and communication time (which is often not realistic; for example, when sorting the whole dataset, the operation cannot simply be "split" across different partitions).
Let's make another convenient assumption: the data is perfectly partitioned among the 3 partitions, so every node holds n/3 of the data.
That said, I think we can consider an O(n^2) algorithm as the sum of three O((n/3)^2) partial computations running in parallel (hence a final O((n/3)^2)). This goes similarly for any other complexity (O(n^2 log n) will be O((n/3)^2 log(n/3))).
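As a concrete sanity check (assuming the cost is exactly c * n^2 for some constant c, and ignoring all overheads as stated above):
T_sequential(n) = c * n^2
T_per_worker(n) = c * (n/3)^2 = c * n^2 / 9
So the wall-clock time drops by roughly a factor of 9 with 3 workers, but formally O((n/3)^2) = O(n^2), since constant factors are discarded in big-O notation.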
As for the replication factor in Hadoop, given the assumptions above, since the operations will be executed in parallel among replicas (which are different from partitions), the complexity will be the same as an execution on a single "replica".

Select statement low performance on simple table

Using Management Studio, I have a table with the following six columns on my SQL Server:
FileID - int
File_GUID - nvarchar(258)
File_Parent_GUID - nvarchar(258)
File Extension - nvarchar(50)
File Name - nvarchar(100)
File Path - nvarchar(400)
It has a primary key on FileID.
This table has around 200M rows.
If I try to process the full data, I receive a memory error.
So I have decided to load it in partitions, using a SELECT statement for every 20M rows, splitting on the FileID number.
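For reference, each batch looks roughly like this (the boundary values are just illustrative):
SELECT *
FROM Dim_TFS_File
WHERE FileID > 0 AND FileID <= 20000000;   -- next batch: 20000001 to 40000000, and so on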
These selects take forever; the retrieval of rows is extremely slow, and I have no idea why. There are no calculations whatsoever, just a pull of data using a SELECT.
When I ran the query analyzer I see:
Select cost = 0%
Clustered Index Cost = 100%
Do you guys have any idea why this could be happening, or maybe some tips that I can apply?
My query:
Select * FROM Dim_TFS_File
Thank you!!
Monitor the query while it's running to see if it's blocked or waiting on resources. If you can't easily see where the bottleneck is while monitoring the database and client machines, I suggest you run a couple of simple tests to help identify where you should focus your efforts. Ideally, run the tests with no other significant activity and a cold cache.
First, run the query on the database server and discard the results. This can be done from SSMS with the discard results option (Query-->Query Options-->Results-->Grid-->Discard Results after execution). Alternatively, use a Powershell script like the one below:
$connectionString = "Data Source=YourServer;Initial Catalog=YourDatabase;Integrated Security=SSPI;Application Name=Performance Test";
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString);
$command = New-Object System.Data.SqlClient.SqlCommand("SELECT * FROM Dim_TFS_File;", $connection);
$command.CommandTimeout = 0;
$sw = [System.Diagnostics.Stopwatch]::StartNew();
$connection.Open();
[void]$command.ExecuteNonQuery(); #this will discard returned results
$connection.Close();
$sw.Stop();
Write-Host "Done. Elapsed time is $($sw.Elapsed.ToString())";
Repeat the above test on the client machine. The elapsed time difference reflects data transfer network overhead. If the client machine test is significantly faster than the application, focus your efforts on the app code. Otherwise, take a closer look at the database and network. Below are some random notes that might help remediate performance issues.
This trivial query will likely perform a full clustered index scan. The limiting performance factors on the database server will be:
CPU: Throughput of this single-threaded query will be limited by the speed of a single CPU core.
Storage: The SQL Server storage engine will use read-ahead reads during large scans to fetch data asynchronously so that data will already be in memory by the time it is needed by the query. Sequential read performance is important to keep up with the query.
Fragmentation: Fragmentation will result in more disk head movement against spinning media, adding several milliseconds per physical disk IO. This is typically a consideration only for large sequential scans on single-spindle or low-end local storage arrays, not SSDs or enterprise-class SANs. Fragmentation can be eliminated by reorganizing or rebuilding the clustered index. Be sure to specify MAXDOP 1 for rebuilds for maximum benefit.
SQL Server streams results as fast as they can be consumed by the client app, but the client may be constrained by network bandwidth and latency. It seems you are returning many GB of data, which will take quite some time. You can reduce bandwidth needs considerably with different data types (a narrower version of the table is sketched after these notes). For example, assuming the GUID-named columns actually contain GUIDs, using uniqueidentifier instead of nvarchar will save about 80 bytes per row over the network and on disk. Similarly, use varchar instead of nvarchar if you don't actually need Unicode characters, to cut the data size in half.
Client processing time: The time to process 20M rows by the app code will be limited by CPU and code efficiency (especially memory management). Since you ran out of memory, it seems you are either loading all rows into memory or have a leak. Even without an outright out of memory error, high memory usage can result in paging and greatly slow throughput. Importantly, the database and network performance is moot if the app code can't process rows as fast as data are returned.
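For illustration only, a narrower version of the table (assuming the GUID columns really hold GUIDs and the text columns don't need Unicode; the table name and underscored column names are adapted for the sketch) might look like this:
CREATE TABLE dbo.Dim_TFS_File_Narrow   -- hypothetical name, for comparison only
(
    FileID           int              NOT NULL PRIMARY KEY,
    File_GUID        uniqueidentifier NOT NULL,   -- 16 bytes vs. 2 bytes per character for nvarchar(258)
    File_Parent_GUID uniqueidentifier NULL,
    File_Extension   varchar(50)      NULL,       -- varchar stores 1 byte per character instead of 2
    File_Name        varchar(100)     NULL,
    File_Path        varchar(400)     NULL
);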

SQL Server completely slows down after occupying full memory

On our server with 32 GB RAM we have an instance of SQL Server running that is capped with max Memory at 80%.
Everything works fine when the memory utilization is low. See screenshot below
But as time passes, after 3-4 days SQL Server will utilize the full RAM (80% of the total).
During this 3-4 days we make no changes to the server, but day by day it keeps eating more RAM.
When it reaches the Max Limit the entire performance goes for a toss, and we face query timeouts on our website. It takes many seconds to execute the same query that was executing within milliseconds.
At this point we have no choice but to restart the entire server, and things return to normal. (Restarting only the SQL Server service does not work.)
This will work for a week or so, after which we have to restart it again.
I have read online that SQL Server does not release memory, but that this is simply how SQL Server functions and does not affect performance.
In my case it does, and performance suffers.
Is there a memory leak? Or a stored proc consuming lots of memory and never releasing it? If so how do I debug it?
By default SQL Server consumes all memory if uncapped, and uses all the memory available to it when capped. This is normal. You will also have to ensure SQL Server is the only application on the box (which is recommended), and try to cap memory as per best practices.
I would start troubleshooting using the approach below.
1. Start by finding the top memory-consuming queries and see if you can reduce their memory usage. Memory-consuming queries can be found with the query below.
SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1),
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time
Now that you have found the queries causing high memory usage, you need to fine-tune them to see if you can reduce that usage.
For example, a query may be doing many reads due to an unsuitable index, or your IO device may have an issue that causes the buffer pool to be flushed many times.
2. You can also find the top components that use memory, which gives you an understanding of how RAM is spent:
SELECT TOP(20) [type], [name], SUM(single_pages_kb) AS [SPA Mem, Kb]
FROM sys.dm_os_memory_clerks
GROUP BY [type], [name]
ORDER BY SUM(single_pages_kb) DESC;
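-- Note: on SQL Server 2012 and later, single_pages_kb was replaced by pages_kb in sys.dm_os_memory_clerks.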
If the buffer pool consistently uses the most memory, I would not worry about that, but if it is CACHESTORE_SQLCP then you might have many ad-hoc queries filling up the cache store, which is bad.
One part of the investigation leads to another, so you will have to troubleshoot based on the leads, as there is no one-click solution.
Side Note: Not Recommended:
One of our dev instances used to face the same issue, so instead of doing all the tuning work we used to run the command below, which effectively releases memory immediately. This is not at all recommended for a production instance, as it flushes the plans stored in the cache and you may face some CPU pressure.
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
References:
sql-server-memory-related-queries
sql-server-find-most-expensive-queries-using-dmv
You need to find the most expensive queries and review their code; many times the query code is not effectively optimized. You will need to do performance tuning.

What is the difference/relationship between extent and allocation unit?

Can you explain the difference (or relationship) between 'Extent' and 'Allocation Unit' in SQL Server?
The allocation unit is basically just a set of pages. It can be small (one page) or large (many, many pages). It has a metadata entry in sys.allocation_units and is tracked by an IAM chain. The most common use of allocation units is the 3 well-known AUs of a rowset: IN_ROW_DATA, ROW_OVERFLOW_DATA and LOB_DATA.
An extent is any 8 consecutive pages that start at a page ID divisible by 8. SQL Server IO is performed in an extent-aware fashion: ideally an entire extent is read in at once and an entire extent is written out at once. This is subject to the current state of the buffer pool; for details see How It Works: Bob Dorr's SQL Server I/O Presentation. Extents are usually allocated together, so all pages of an extent belong to the same allocation unit. But since this would lead to over-allocation for small tables, a special type of extent is the so-called 'mixed' extent, in which each page can belong to a separate allocation unit. For details see Inside The Storage Engine: GAM, SGAM, PFS and other allocation maps.
So as you can see, the concepts are related, but very different. Perhaps you should explain a bit what problem you're trying to solve, or why you are interested in these concepts; then perhaps we can elaborate.
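If it helps to see them side by side, a quick illustrative query against the metadata (using a placeholder table name) is:
SELECT OBJECT_NAME(p.object_id) AS table_name,
       p.index_id,
       au.type_desc,                          -- IN_ROW_DATA, ROW_OVERFLOW_DATA or LOB_DATA
       au.total_pages,
       au.total_pages / 8 AS whole_extents    -- 8 pages per extent
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
  OR (au.type = 2       AND au.container_id = p.partition_id)
WHERE p.object_id = OBJECT_ID('dbo.MyTable');  -- replace with your table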
Each object (be it an index or a heap) has a number of partitions (1 to 15,000). Each partition can have three different allocation units: the HoBT (heap or b-tree, also known as "the hobbit"), where your actual data is stored; the LOB allocation unit for the LOB types; and the SLOB allocation unit for row-overflow data.
Pages belong to a certain allocation unit. All pages belong to an extent - a group of 8 pages. While the individual pages can belong to different allocation units, in a uniform extent they'll always belong to the same object, whereas a mixed extent contains pages for different objects and potentially different allocation units.

Do relational databases provide a feasible backend for a process historian?

In the process industry, lots of data is read, often at a high frequency, from several different data sources, such as NIR instruments as well as common instruments for pH, temperature, and pressure measurements. This data is often stored in a process historian, usually for a long time.
Due to this, process historians have different requirements than relational databases. Most queries to a process historian require either time stamps or time ranges to operate on, as well as a set of variables of interest.
The workload is many frequent INSERTs, many SELECTs, few or no UPDATEs, and almost no DELETEs.
Q1. Are relational databases a good backend for a process historian?
A very naive implementation of a process historian in SQL could be something like this.
+------------------------------------------------+
| Variable |
+------------------------------------------------+
| Id : integer primary key |
| Name : nvarchar(32) |
+------------------------------------------------+
+------------------------------------------------+
| Data |
+------------------------------------------------+
| Id : integer primary key |
| Time : datetime |
| VariableId : integer foreign key (Variable.Id) |
| Value : float |
+------------------------------------------------+
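Spelled out as DDL (a direct rendering of the sketch above, nothing more):
CREATE TABLE Variable
(
    Id   integer      NOT NULL PRIMARY KEY,
    Name nvarchar(32) NOT NULL
);

CREATE TABLE Data
(
    Id         integer  NOT NULL PRIMARY KEY,
    Time       datetime NOT NULL,
    VariableId integer  NOT NULL REFERENCES Variable (Id),
    Value      float    NOT NULL
);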
This structure is very simple, but probably slow for normal process historian operations, as it lacks "sufficient" indexes.
But, for example, if the Variable table were to consist of 1,000 rows (a rather optimistic number), and data for all these 1,000 variables were sampled once per minute (also an optimistic number), then the Data table would grow by 1,440,000 rows per day. Let's continue the example and estimate that each row takes about 16 bytes, which gives roughly 23 megabytes per day, not counting additional space for indexes and other overhead.
23 megabytes as such perhaps isn't that much, but keep in mind that the numbers of variables and samples in the example were optimistic and that the system will need to be operational 24/7/365.
Of course, archiving and compression comes to mind.
Q2. Is there a better way to accomplish this? Perhaps using some other table structure?
I work with a SQL Server 2008 database that has similar characteristics; heavy on insertion and selection, light on update/delete. About 100,000 "nodes" all sampling at least once per hour. And there's a twist; all of the incoming data for each "node" needs to be correlated against the history and used for validation, forecasting, etc. Oh, there's another twist; the data needs to be represented in 4 different ways, so there are essentially 4 different copies of this data, none of which can be derived from any of the other data with reasonable accuracy and within reasonable time. 23 megabytes would be a cakewalk; we're talking hundreds-of-gigabytes to terabytes here.
You'll learn a lot about scale in the process, about what techniques work and what don't, but modern SQL databases are definitely up to the task. The system I just described? It's running on a 5-year-old IBM xSeries with 2 GB of RAM and a RAID 5 array, and it performs admirably; nobody has to wait more than a few seconds for even the most complex queries.
You'll need to optimize, of course. You'll need to denormalize frequently, and maintain pre-computed aggregates (or a data warehouse) if that's part of your reporting requirement. You might need to think outside the box a little: for example, we use a number of custom CLR types for raw data storage and CLR aggregates/functions for some of the more unusual transactional reports. SQL Server and other DB engines might not offer everything you need up-front, but you can work around their limitations.
You'll also want to cache - heavily. Maintain hourly, daily, weekly summaries. Invest in a front-end server with plenty of memory and cache as many reports as you can. This is in addition to whatever data warehousing solution you come up with if applicable.
One of the things you'll probably want to get rid of is that "Id" key in your hypothetical Data table. My guess is that Data is a leaf table - it usually is in these scenarios - and this makes it one of the few situations where I'd recommend a natural key over a surrogate. The same variable probably can't generate duplicate rows for the same timestamp, so all you really need is the variable and timestamp as your primary key. As the table gets larger and larger, having a separate index on variable and timestamp (which of course needs to be covering) is going to waste enormous amounts of space - 20, 50, 100 GB, easily. And of course every INSERT now needs to update two or more indexes.
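Concretely, and only as a sketch of that suggestion, the hypothetical Data table from the question would become:
CREATE TABLE Data
(
    VariableId integer  NOT NULL REFERENCES Variable (Id),
    Time       datetime NOT NULL,
    Value      float    NOT NULL,
    PRIMARY KEY (VariableId, Time)   -- natural key; no surrogate Id and no second covering index needed
);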
I really believe that an RDBMS (or SQL database, if you prefer) is as capable for this task as any other if you exercise sufficient care and planning in your design. If you just start slinging tables together without any regard for performance or scale, then of course you will get into trouble later, and when the database is several hundred GB it will be difficult to dig yourself out of that hole.
But is it feasible? Absolutely. Monitor the performance constantly and over time you will learn what optimizations you need to make.
It sounds like you're talking about telemetry data (time stamps, data points).
We don't use SQL databases for this (although we do use SQL databases to organize it); instead, we use binary streaming files to capture the actual data. There are a number of binary file formats that are suitable for this, including HDF5 and CDF. The file format we use here is a proprietary compressible format. But then, we deal with hundreds of megabytes of telemetry data in one go.
You might find this article interesting (links directly to Microsoft Word document):
http://www.microsoft.com/caseStudies/ServeFileResource.aspx?4000003362
It is a case study from the McLaren group, describing how SQL Server 2008 is used to capture and process telemetry data from Formula One race cars. Note that they don't actually store the telemetry data in the database; instead, it is stored in the file system, and the FILESTREAM capability of SQL Server 2008 is used to access it.
I believe you're headed down the right path. We have a similar situation where I work. Data comes from various transport/automation systems across various industries such as manufacturing, automotive, etc. Mainly we deal with the big 3: Ford, Chrysler, GM. But we've also had a lot of data coming in from customers like CAT.
We ended up extracting the data into a database, and as long as you properly index your tables, keep updates to a minimum and schedule maintenance (rebuild indexes, purge old data, update statistics), I see no reason for this to be a bad solution; in fact I think it is a good solution.
Certainly a relational database is suitable for mining the data after the fact.
Various nuclear and particle physics experiments I have been involved with have explored several points on this spectrum: from not using an RDBMS at all, through storing just the run summaries (or the run summaries plus the slowly varying environmental conditions) in the DB, all the way to cramming every bit collected into the DB (though it was staged to disk first).
When and where the data rate allows, more and more groups are moving towards putting as much data as possible into the database.
IBM Informix Dynamic Server (IDS) has a TimeSeries DataBlade and RealTime Loader which might provide relevant functionality.
Your naïve schema records each reading 100% independently, which makes it hard to correlate across readings, both for the same variable at different times and for different variables at (approximately) the same time. That may be necessary, but it makes life harder when dealing with subsequent processing. How much of an issue that is depends on how often you will need to run correlations across all 1000 variables (or even a significant percentage of them, where "significant" might be as small as 1% and would almost certainly be reached by 10%).
I would look to combine key variables into groups that can be recorded jointly. For example, if you have a monitor unit that records temperature, pressure and acidity (pH) at one location, and there are perhaps a hundred of these monitors in the plant that is being monitored, I would expect to group the three readings plus the location ID (or monitor ID) and time into a single row:
CREATE TABLE MonitorReading
(
MonitorID INTEGER NOT NULL REFERENCES MonitorUnit,
Time DATETIME NOT NULL,
PhReading FLOAT NOT NULL,
Pressure FLOAT NOT NULL,
Temperature FLOAT NOT NULL,
PRIMARY KEY (MonitorID, Time)
);
This saves having to do self-joins to see what the three readings were at a particular location at a particular time, and uses about 20 bytes instead of 3 * 16 = 48 bytes per row. If you are adamant that you need a unique ID integer for the record, that increases to 24 or 28 bytes (depending on whether you use a 4-byte or 8-byte integer for the ID column).
Yes, a DBMS is appropriate for this, although not the fastest option. You will need to invest in a reasonable system to handle the load though. I will address the rest of my answer to this problem.
It depends on how beefy a system you're willing to throw at the problem. There are two main limiters for how fast you can insert data into a DB: bulk IO speed and seek time. A well-designed relational DB will perform at least 2 seeks per insertion: one to begin the transaction (in case the transaction cannot be completed), and one when the transaction is committed. Add to this the additional seeks needed to find your index entries and update them.
If your data are large, then the limiting factor will be how fast you can write data. For a hard drive, this will be about 60-120 MB/s. For a solid state disk, you can expect upwards of 200 MB/s. You will (of course) want extra disks for a RAID array. The pertinent figure is storage bandwidth AKA sequential I/O speed.
If writing a lot of small transactions, the limitation will be how fast your disk can seek to a spot and write a small piece of data, measured in IO per second (IOPS). We can estimate that it will take 4-8 seeks per transaction (a reasonable case with transactions enabled and an index or two, plus some integrity checks). For a hard drive, the seek time will be several milliseconds, depending on disk RPM. This will limit you to several hundred writes per second. For a solid state disk, the seek time is under 1 ms, so you can write several THOUSAND transactions per second.
When updating indices, you will need to do about O(log n) small seeks to find where to update, so the DB will slow down as the record counts grow. Remember that a DB may not write in the most efficient format possible, so data size may be bigger than you expect.
So, in general, YES, you can do this with a DBMS, although you will want to invest in good storage to ensure it can keep up with your insertion rate. If you wish to cut costs, you may want to roll data older than a specific age (say, 1 year) into a secondary, compressed archive format.
EDIT:
A DBMS is probably the easiest system to work with for storing recent data, but you should strongly consider the HDF5/CDF formats someone else suggested for storing older, archived data. It is a flexible and widely supported kind of format that provides compression and VERY efficient storage of large time series and multi-dimensional arrays. I believe it also provides some methods of indexing the data. You should be able to write a little code to fetch from these archive files if the data is too old to be in the DB.
There is probably a data structure that would be more optimal for your given case than a relational database.
Having said that, there are many reasons to go with a relational DB including robust code support, backup & replication technology and a large community of experts.
Your use case is similar to high-volume financial applications and telco applications. Both are frequently inserting data and frequently doing queries that are both time-based and include other select factors.
I worked on a mid-sized billing project that handled cable bills for millions of subscribers. That meant an average of around 5 rows per subscriber times a few million subscribers per month in the financial transaction table alone. That was easily handled by a mid-size Oracle server using (now) 4 year old hardware and software. Large billing platforms can have 10x that many records per unit time.
Properly architected and with the right hardware, this case can be handled well by modern relational DB's.
Years ago, a customer of ours tried to load an RDBMS with real-time data collected from monitoring plant machinery. It didn't work in a simplistic way.
Are relational databases a good backend for a process historian?
Yes, but. It needs to store summary data, not details.
You'll need a front-end based on in-memory structures and flat files. Periodic summaries and digests can be loaded into an RDBMS for further analysis.
You'll want to look at Data Warehousing techniques for this. Most of what you want to do is to split your data into two essential parts ---
Facts. The data that has units. Actual measurements.
Dimensions. The various attributes of the facts -- date, location, device, etc.
This leads you to a more sophisticated data model (a minimal DDL sketch follows the outline below).
Fact: Key, Measure 1, Measure 2, ..., Measure n, Date, Geography, Device, Product Line, Customer, etc.
Dimension 1 (Date/Time): Year, Quarter, Month, Week, Day, Hour
Dimension 2 (Geography): location hierarchy of some kind
Dimension 3 (Device): attributes of the device
Dimension *n*: attributes of each dimension of the fact
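A minimal DDL sketch of that fact/dimension split (all table and column names are illustrative, and the hourly grain of the fact table is deliberately simplified):
CREATE TABLE DimDate
(
    DateKey integer NOT NULL PRIMARY KEY,   -- e.g. one row per hour
    Year    integer NOT NULL,
    Quarter integer NOT NULL,
    Month   integer NOT NULL,
    Week    integer NOT NULL,
    Day     integer NOT NULL,
    Hour    integer NOT NULL
);

CREATE TABLE DimDevice
(
    DeviceKey integer      NOT NULL PRIMARY KEY,
    Name      nvarchar(64) NOT NULL,
    Location  nvarchar(64) NOT NULL
);

CREATE TABLE FactReading
(
    DateKey   integer NOT NULL REFERENCES DimDate (DateKey),
    DeviceKey integer NOT NULL REFERENCES DimDevice (DeviceKey),
    Measure1  float   NOT NULL,
    Measure2  float   NULL,
    PRIMARY KEY (DateKey, DeviceKey)   -- one row per device per hour in this simplified grain
);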
You may want to look at KDB. It is specifically optimized for this kind of usage: many inserts, few or no updates or deletes.
It isn't as easy to use as a traditional RDBMS though.
The other aspect to consider is what kind of selects you're doing. Relational/SQL databases are great for doing complex joins dependent on multiple indexes, etc. They really can't be beaten for that. But if you're not doing that kind of thing, they're probably not such a great match.
If all you're doing is storing per-time records, I'd be tempted to roll your own file format ... even just output the stuff as CSV (groans from the audience, I know, but it's hard to beat for wide acceptance)
It really depends on your indexing/lookup requirements, and your willingness to write tools to do it.
You may want to take a look at a Stream Data Manager System (SDMS).
While not addressing all your needs (long-term persistence), sliding windows over time and rows and frequently changing data are their points of strength.
Some useful links:
Stanford Stream Data Manager
Stream Mill
Material about Continuous Queries
AFAIK major database makers all should have some kind of prototype version of an SDMS in the works, so I think it's a paradigm worth checking out.
I know you're asking about relational database systems, but those are unicorns. SQL DBMSs are probably a bad match for your needs because no current SQL system (that I know of) provides reasonable facilities to deal with temporal data. Depending on your needs, you might or might not have another option in specialized tools and formats; see e.g. rrdtool.