If my source data is on Server 1, the cube is on Server 2, and I manually initiate cube/dimension processing in BIDS on my local machine, where exactly does processing occur?
With the traditional MOLAP storage mode, the query (for raw data) will be executed on Server 1, while the aggregation calculations will be done by the server that hosts the SSAS instance (Server 2).
ROLAP mode creates indexed views on the source system and queries those views, so the 'calculations' are performed by the source database engine (Server 1).
With HOLAP mode it depends on the query: the aggregations are stored in the AS database (calculated by the AS engine during processing), while raw data is accessed from the source system (during a drill-down, for example).
Basically you can say: whatever is stored in the AS database was calculated by the AS engine.
For more information about storage modes see http://msdn.microsoft.com/en-us/library/ms174915.aspx
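For illustration, this is roughly the kind of indexed view ROLAP builds and queries on the source server (Server 1). The table and column names below are made up, and SalesAmount is assumed to be NOT NULL since indexed views do not allow SUM over a nullable expression:

CREATE VIEW dbo.v_SalesAgg
WITH SCHEMABINDING
AS
SELECT DateKey,
       ProductKey,
       SUM(SalesAmount) AS SalesAmount,
       COUNT_BIG(*)     AS RowCnt   -- COUNT_BIG(*) is required for an indexed view with GROUP BY
FROM dbo.FactSales
GROUP BY DateKey, ProductKey;
GO

-- Materializing the view lets the source database engine (Server 1) answer aggregate queries from it
CREATE UNIQUE CLUSTERED INDEX IX_v_SalesAgg ON dbo.v_SalesAgg (DateKey, ProductKey);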
In our current system, we have a lot of ECC tables replicated to SAP HANA with SDI (Smart Data Integration). Replication tasks can be real-time or on demand, but sometimes a replication task comes too late and the data in the replicated table is very different from the source table.
What would be the best approach in SAP HANA to check these delta values?
- The ERP system uses a DB2 database
- DB2LogReaderAdapter is used to read the DB2 database tables
- The remote source is created in the cloud (virtual table)
- There are about 260 replication tasks
- Replication tasks contain only one object
- Replication tasks are based on virtual tables
- The biggest issue faced right now is latency in the remote source tables (delta values)
There is no easy/straightforward way to "check" delta values here.
The 260 replication tasks are processed independently of each other, regardless of how changes were grouped into transactions in the source system.
That means that if tables A and B are updated in the same transaction but replicated to HANA in separate tasks, the data will be written to HANA in separate transactions, and the data in HANA will lag behind the source system.
Usually this difference should only last a relatively short time (maybe a few seconds), but of course, if you run aggregation queries and want to see currently valid sums etc., this leads to wrong data.
One way to deal with this is to implement the queries in a way that takes this into account, e.g. by filtering on data that was changed at least half an hour ago and excluding anything newer.
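As a rough sketch (the schema, table, and column names are assumptions, including the CHANGED_AT timestamp that the replicated table would need to carry):

-- Only aggregate rows that were changed more than 30 minutes ago,
-- so rows still "in flight" between replication tasks are excluded
SELECT MATNR,
       SUM(MENGE) AS TOTAL_QTY
FROM   REPL_SCHEMA.EKPO_REPL
WHERE  CHANGED_AT <= ADD_SECONDS(CURRENT_TIMESTAMP, -1800)
GROUP  BY MATNR;

The 30-minute cutoff is just an example; pick a window that comfortably covers the replication lag you actually observe.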
Note that since the replication via LogReader is de-coupled from the source system's transaction processing, this problem of "lagging data" is conceptually built in and cannot be generally avoided.
All one can do is reduce the extent of the lag and cope with the differences in the upstream processing.
This very issue is one of the reasons for why remote data access is usually preferred over replication for cases like operational reporting.
And if you do need data loading (e.g. to avoid additional load on the source system), then an ETL/ELT approach into data stores (DWH/BW-like) gives the situation a lot more structure.
In fact, the current S/4 HANA & BW/4 HANA setups usually use a combination of scheduled data loads and ad-hoc fetching of new data via operational delta queues from the source system.
Lars,
If we need to replicate data from ECC on Oracle to a HANA instance, should we use SLT (because of cluster tables, for example), or does SDI already cover all the functionality that SLT provides?
Regards, Chris
I have a multidimensional cube in SSAS and a big underlying SQL Server table (more than 100 GB) on the same machine. I have just found the SSAS performance counters in Performance Monitor, and I can see the number of rows per second that SSAS is reading from SQL Server.
But what I don't understand is how those rows are sent from SQL Server to SSAS, because when I look at the Network tab of Resource Monitor, I just do not see that amount of data being transferred. sqlservr.exe is not sending the amount of data I would expect to see.
I believe SSAS will use the shared memory transport protocol since SQL Server is on the same machine as SSAS, so it wouldn't be using the NIC or network bandwidth.
You can run the following on the SQL Server during cube processing to double-check that the net_transport column shows shared memory.
SELECT * FROM sys.dm_exec_connections
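If you want to narrow that down, something along these lines should work (the program_name filter is an assumption; adjust it to whatever the SSAS processing connection actually reports):

SELECT c.session_id,
       c.net_transport,      -- expect 'Shared memory' when SQL Server and SSAS share the machine
       s.program_name
FROM   sys.dm_exec_connections c
JOIN   sys.dm_exec_sessions s ON s.session_id = c.session_id
WHERE  s.program_name LIKE '%Analysis Services%';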
Typically, in an on-premises SQL Server ETL workflow via SSIS, we load data from anywhere into staging tables and then apply validation and transformations to load/merge it into downstream data warehouse tables.
My question is whether we should do something similar on Azure, with a set of staging tables and downstream tables in an Azure SQL database, or use an Azure storage area as staging and move data from there into the final downstream tables via ADF.
As wild as it may seem, we also have a proposal to have a separate staging database and downstream database, between which we move data using ADF.
There are different models for doing data movement pipelines and no single one is perfect. I'll make a few comments on the common patterns I see in case that will help you make decisions on your application.
For many data warehouses where you are trying to stage in data and create dimensions, there is often a process where you load the raw source data into some other database/tables as raw data and then process it into the format you want to insert into your fact and dimension tables. That process is complicated by the fact that data may arrive late or be corrected on a later day, so these systems are often designed with partitioned target fact tables, which allows re-processing of a partition's worth of data (e.g. a day) without having to reprocess the whole fact table (a sketch of this pattern follows below). Furthermore, the transformation process on that staging table may be intensive if the data arrives in a form far from how you want to represent it in your DW.

In on-premises systems, these staging tables are often handled in a separate database (potentially on the same SQL Server) to isolate them from the production system. It is also sometimes the case that these staging tables can be re-created from the original source data (CSV files or similar), so they are not the store of record for that source material. This allows you to consider using simple recovery mode on that database (which reduces the log IO requirements and recovery time compared to full recovery).

While not every DW uses full recovery mode for the processed DW data (some do a dual load to a second machine instead, since the pipeline is there), the ability to use full recovery plus physical log replication (AlwaysOn Availability Groups) in SQL Server gives you the flexibility to create a disaster recovery copy of the database in a different region of the world. (You can also do query read scale-out on that server if you would like.) There are variations on this basic model, but a lot of on-premises systems have something like this.
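As a minimal sketch of the partitioned-fact-table idea (all object names are made up; the partition-level TRUNCATE requires SQL Server 2016 or later):

CREATE PARTITION FUNCTION pf_FactDate (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-01-02', '2024-01-03');

CREATE PARTITION SCHEME ps_FactDate
    AS PARTITION pf_FactDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    SaleDate   date          NOT NULL,
    ProductKey int           NOT NULL,
    Amount     decimal(18,2) NOT NULL
) ON ps_FactDate (SaleDate);

-- Re-process one day without touching the rest of the fact table:
-- empty just that day's partition, then reload it from the staging table
TRUNCATE TABLE dbo.FactSales WITH (PARTITIONS (3));   -- partition holding 2024-01-02
INSERT INTO dbo.FactSales (SaleDate, ProductKey, Amount)
SELECT SaleDate, ProductKey, Amount
FROM   staging.FactSales_Raw                          -- hypothetical staging table
WHERE  SaleDate = '2024-01-02';

Switching partitions in and out with ALTER TABLE ... SWITCH is a common variation when the reload needs to happen outside the fact table itself.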
When you look at SQL Azure, there are some similarities and some differences that matter when considering how to set up an equivalent model:
You have full recovery on all user databases (but tempdb is in simple recovery). You also have quorum commit of your changes to N replicas (as in Availability Groups) when using vCore or Premium databases, which matters a fair amount because you often have a more generic network topology in public cloud systems vs. a custom system you build yourself. In other words, log commit times may be slower than on your current system. For batch systems this does not necessarily matter too much, but you need to be careful to use large enough batch sizes so that your application is not waiting on the network all the time. Given that your staging table may also be in a SQL Azure database, you need to be aware that it also has quorum commit, so you may want to consider which data needs to stay around day-over-day (stays in the SQL Azure DB) vs. which can go into tempdb for lower latencies and be re-created if lost.
There is no intra-database resource governance model today in SQL Azure (other than elastic pools, which are partial and target a different use case than DW). So having a separate staging database is a good idea, since it isolates your production workload from the processing in the staging database. You avoid noisy-neighbor issues where your primary production workload is impacted by the processing of the day's data you want to load.
When you provision machines for an on-premises DW, you often buy a sufficiently large storage array/SAN that can host your workload and potentially many others (consolidation scenarios). The Premium/vCore databases in SQL Azure are set up with local SSDs (with Hyperscale being the new addition, which gives you a cross-machine scale-out model that is a bit like a SAN in some regards). So you would want to think through the IOPS required for your production system and your staging/loading process. You have the ability to scale each of these up/down to better manage your workload and costs (unlike a CAPEX purchase of a large storage array, which is made up front and then you tune workloads to fit into it); see the sketch below.
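For example, scaling the staging database up for the load window and back down afterwards is a single statement per change (the database name and service objective names are just placeholders):

-- Run against the logical server (e.g. from master); the change is applied asynchronously
ALTER DATABASE [StagingDB] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_16');   -- scale up before the load

ALTER DATABASE [StagingDB] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_4');    -- scale back down afterwards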
Finally, there is also a SQL DW offering that works a bit differently than SQL Azure - it is optimized for larger DW workloads and has scale-out compute with the ability to scale that up/down as well. Depending on your workload needs, you may want to consider that as your eventual DW target if that is a better fit.
To get to your original question: can you run a data load pipeline on SQL Azure? Yes, you can. There are a few caveats compared to your existing experiences on-premises, but it will work. To be fair, there are also people who just load from CSV files or similar directly without using a staging table. Often they don't do as many transformations, so YMMV based on your needs.
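For that direct-from-file route on Azure SQL Database, a rough sketch (the external data source, target table, and file path are assumptions you would have set up and chosen yourself):

-- Load a blob-hosted CSV straight into a table; DATA_SOURCE must point to an
-- external data source created earlier against the storage account
BULK INSERT dbo.FactSales_Staging
FROM 'daily/sales_2024-01-02.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
      FORMAT = 'CSV',
      FIRSTROW = 2);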
Hope that helps.
I have enabled the Query Store for two of my databases (acceptance and production), which are both running on the same instance of SQL Server 2016 Standard Edition.
The Query Store records query history on the acceptance database, but on the production database it does not record any data.
The two databases are configured identically, with the exception of mirroring that is only enabled for the production database. The mirroring mode used is "High safety with automatic failover (synchronous)".
The Query Store feature was introduced to monitor performance and is still evolving. There are certain known limitations around it.
As of now, it does not work on read-only databases (including read-only AG replicas). Since readable secondary replicas are read-only, the query store on those secondary replicas is also read-only, which means runtime statistics for queries executed on those replicas are not recorded into the query store.
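To see why the production database is not recording anything, it is worth checking the store's actual state in that database, for example:

-- Run in the affected database; if actual_state_desc is READ_ONLY,
-- readonly_reason indicates why the store stopped collecting
SELECT actual_state_desc,
       desired_state_desc,
       readonly_reason
FROM   sys.database_query_store_options;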
Known Limitations of Query Store
- No information on who ran a query or from which program, since Query Store provides no data related to the application name.
- Query Store cannot be enabled for system databases like master or tempdb.
- Lack of control: multiple DBAs could change its settings.
- Data is stored in the PRIMARY filegroup.
- Data is captured at the statement level; batch-level data is not available.
- The Query Wait Stats Store is only available starting with SQL Server 2017.
- Arbitrary values are not allowed for INTERVAL_LENGTH_MINUTES (the statistics collection interval); only 1, 5, 10, 15, 30, 60, or 1440 minutes are accepted, as in the example below.
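For example, setting the collection interval to one of the permitted values (the database name is a placeholder):

ALTER DATABASE [Production]
SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 60);   -- only 1, 5, 10, 15, 30, 60 or 1440 are accepted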
I'm in a satellite office that needs to pull some data from our main office for display on our intranet. We use MS SQL Server in both locations and we're planning to create a linked server in our satellite office pointing to the main office. The connection between the two is a VPN tunnel I believe (does that sound right? What do I know, I'm a programmer!)
I'm concerned about generating a lot of traffic across a potentially slow connection. We will be getting access to a SQL view on the main office's server. It's not a lot of data (~500 records) once the select query has run, but the view is huge (~30000 records) without a query.
I assume running a query on a linked server will bring back only the results over the wire (and not the entire view to be queried locally). In that case the major bottleneck is most likely the connection itself assuming the view is indexed, etc. Are there any other gotchas or potential bottlenecks (maybe based on the way I structure queries) that I should be aware of?
From what you explained, your connection is likely to be the bottleneck.
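One thing you can do to make sure only the filtered result set crosses the link is to push the query to the remote server explicitly with OPENQUERY (the linked server name, view, and filter below are made up):

-- The query inside OPENQUERY runs entirely on the main office server;
-- only the matching rows (~500 in your case) come back over the VPN
SELECT *
FROM OPENQUERY(MAINOFFICE,
               'SELECT CustomerID, OrderDate, Amount
                FROM   dbo.vw_IntranetData
                WHERE  OrderDate >= ''20240101''');

With a plain four-part name query the optimizer will usually remote the filter as well, but OPENQUERY makes it explicit.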
You might also consider caching data at the satellite location.
The decision will depend on the following:
- how many rows and how often data are updated in the main database
- how often you need to load the same data set at satellite location
Two edge examples:
Data is static or relatively static (inserts only in the main DB), and users at the satellite location often query the same data again and again. In this case it would make sense to cache the data locally at the satellite location.
Data is volatile, with a lot of updates and/or deletes. Users at the satellite location rarely query the data, and when they do, it is always with a different WHERE condition. In this case it doesn't make sense to cache; if the connection is slow and there are frequent changes, you might end up never being in sync with the main DB.
Another advantage of caching is that you can implement data compression, which will alleviate the effects of the slow connection.
If you choose to cache at the local location there are a lot of options, but that, I believe, would be another topic.
[Edit]
About compression: you can use compressed transaction log shipping. In SQL Server 2008, backup compression is supported in Enterprise Edition only; in SQL Server 2008 R2 it is available starting with Standard Edition. See http://msdn.microsoft.com/en-us/library/bb964719.aspx.
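On versions/editions that support it, the compression can be requested directly when the log backup is taken (the database name and path are made up):

-- Take a compressed log backup for shipping to the satellite office
BACKUP LOG MainOfficeDB
TO DISK = N'D:\LogShip\MainOfficeDB_20240102.trn'
WITH COMPRESSION, INIT;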
Alternatively, you can implement custom compression before you ship the transaction logs, using any compression library you like.