Azure Synapse Link Append Only Option - azure-synapse

We need help with Azure Synapse Link for Dataverse with Azure Data Lake. When adding or managing tables, the advanced settings offer an Append only option (Yes or No) for each table. If I choose Yes, the partition strategy automatically becomes Year; if I choose No, it becomes Month.
We would like to understand the pros and cons of choosing Yes versus No. The documentation does not provide much information either.
https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-data-lake
The link above provides only the information below; please help us with more details.
The Count column shows the number of rows written. When Append only is set to No, this is the total number of records. When Append only is set to Yes, this is the total number of changes. The Append only and Partition strategy columns show the usage of different advanced configurations.
Any comments or pointers would be appreciated.

Related

Taking snapshot of SQL tables

I have a set of referential tables with different schemas which we use as reference data when integrating files. The reference data can be modified from the GUI.
The requirement is that I need to create a snapshot of the data whenever there are changes. For example, users should be able to see which referential data was used on a particular date.
Option 1: Historize all the tables overnight every day, tagged with the date. When users want to see the data used on a particular date, we can easily query the corresponding history table. Since users don't change the data every day, this approach will make the database bigger day by day.
Option 2: Historize only the rows that have been modified, together with the modification date, and use views to fetch the data for particular days. But this way I need to write many views, since the schema differs between tables.
If you know of the best way I can use, I would appreciate it if you share your knowledge.
Thanks,
Not sure if possible, but:
Option 3: Create/edit OnInsert/OnUpdate/OnDelete triggers that write the changed rows to a "history table" together with a timestamp.
To get the admin data used on day "X", just filter on the timestamp.
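A minimal T-SQL sketch of the trigger idea, assuming a hypothetical reference table dbo.Currency (Code, Rate); every table and column name here is a placeholder for your real schema:

```sql
-- History table: one row per change, with the change type and a timestamp.
CREATE TABLE dbo.Currency_History (
    Code       varchar(10)   NOT NULL,
    Rate       decimal(18,6) NOT NULL,
    ChangeType char(1)       NOT NULL,  -- 'I' = new image, 'D' = old image
    ChangedAt  datetime      NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Currency_History
ON dbo.Currency
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows after the change (fires for INSERT and the new image of UPDATE).
    INSERT INTO dbo.Currency_History (Code, Rate, ChangeType)
    SELECT Code, Rate, 'I' FROM inserted;
    -- Rows before the change (fires for DELETE and the old image of UPDATE).
    INSERT INTO dbo.Currency_History (Code, Rate, ChangeType)
    SELECT Code, Rate, 'D' FROM deleted;
END;
```

To reconstruct the data used on day "X", take the latest 'I' row per key with ChangedAt on or before "X".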
Another option (again, not sure if possible) is to add "start_dt/end_dt" columns to the admin tables and have the processes look up only the rows that were active at the relevant time.
Sérgio

How to store daily/monthly snapshots on Google BigQuery?

We need to store daily and monthly snapshots of some of our databases.
It's not a backup; we need to keep the data so we can analyze it later and see how it evolves over time.
We still don't know exactly what sort of queries we will need in two months. To start, we need to track the evolution of our user base, so we will save daily snapshots of users and other related collections.
We are thinking of putting everything in Google BigQuery; it's easy to load data into it and easy to query that data.
We will create some tables, one for each set of data we need, with all the needed columns plus an extra one containing the date on which the extraction process ran.
We will use this column to group the data by day, month, and so on.
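A query over such a table might look like this (the table and column names are hypothetical; COUNTIF is BigQuery standard SQL):

```sql
-- Track the user base per daily snapshot, grouped on the extraction-date
-- column added by the snapshot process.
SELECT
  snapshot_date,
  COUNT(*)           AS total_users,
  COUNTIF(is_active) AS active_users
FROM `myproject.analytics.users_snapshot`
GROUP BY snapshot_date
ORDER BY snapshot_date;
```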
An alternative approach would be to create a dataset for each set of data, and a new table every time we take a snapshot.
I honestly don't know which of the two is better, or whether there are better options.
It's difficult to say which is best for you since I don't know your needs or cost requirements.
However, with the "create some tables, one for each set of data we need, with all the needed columns, plus an extra one that will contain the date on which the extraction process was done" method, you could run queries that show how your users have changed over time. For example, for a particular time slice you could compute the average activity of a particular user.
Probably a bit late, but for future readers: you are probably looking for date-partitioned tables. They correspond exactly to this use case, and there is a straightforward example on the documentation page.
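A sketch of that approach, with illustrative names: a table partitioned on the snapshot-date column, so each day's extract lands in its own partition and date-filtered queries scan only the days they need.

```sql
-- One partition per day, keyed on the snapshot date column.
CREATE TABLE `myproject.analytics.users_daily`
(
  user_id       STRING,
  is_active     BOOL,
  snapshot_date DATE
)
PARTITION BY snapshot_date
OPTIONS (description = 'Daily user snapshots, one partition per day');

-- Filtering on the partition column prunes all other partitions.
SELECT COUNT(*)
FROM `myproject.analytics.users_daily`
WHERE snapshot_date = '2021-06-01';
```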
You can now create table snapshots in BigQuery.
You can only use the bq command line tool for now.
See here -> https://cloud.google.com/bigquery/docs/table-snapshots-create#creating_table_snapshots
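Per that page, the invocation looks roughly like this (project, dataset, and table names are placeholders; --expiration is in seconds, here 7 days):

```
bq cp --snapshot --no_clobber \
  --expiration=604800 \
  myproject:mydataset.users \
  myproject:mydataset.users_snapshot_20210601
```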

delete data information

One of my colleagues deleted records from a table 15 days ago. I could not find out who deleted those records. I want to know the machine, the username, and the modification date, in SQL Server 2005. How can I get this information? Please suggest.
Thanks,
Mailam
You can't, unless you have the appropriate columns and/or history tables and/or features like CDC enabled.
By default, there is no built-in automatic mechanism that records data changes.
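For future reference, enabling CDC looks like this. Note that CDC was introduced in SQL Server 2008, so it is not an option on the SQL Server 2005 instance in the question; the database and table names below are placeholders.

```sql
-- Requires SQL Server 2008+ (Enterprise/Developer edition).
USE MyDatabase;
GO
-- Enable CDC at the database level.
EXEC sys.sp_cdc_enable_db;
GO
-- Enable CDC for one table; changes are then logged to a change table.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;  -- NULL = no gating role required to read changes
```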

archiving the table : searching for the best way

There is a table which has 80,000 rows.
Every day I clone this table to another log table with a name like 20101129_TABLE, where the prefix changes according to the date.
As you can calculate, that adds up to about 2,400,000 rows every month.
Please advise on saving space, keeping the service fast, and the other advantages and disadvantages: how should I think about creating the best archive or log?
It is a table that holds account info: branch code, balance, etc.
It is quite tricky to answer your question since you are a bit vague on some important facts:
How often do you need the archived tables?
How free are you in your design-choices?
If you don't need the archived data often and you are free in your design, I'd copy the data into an archive database. That gives you the option of storing the database on a separate disk (cost efficiency), and you can have a separate backup schedule for that database as well.
You could also store all the data in one table with just an additional column like ArchiveDate datetime. But I think this really depends on how you plan to access the data later.
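The single-archive-table idea can be sketched like this; dbo.Accounts and its columns are placeholders for the real accounts table:

```sql
-- One archive table instead of one cloned table per day.
CREATE TABLE dbo.Accounts_Archive (
    BranchCode  varchar(10)   NOT NULL,
    AccountNo   varchar(20)   NOT NULL,
    Balance     decimal(18,2) NOT NULL,
    ArchiveDate datetime      NOT NULL
);
GO
-- Nightly job: append today's image, stamped with today's date at midnight
-- (the DATEADD/DATEDIFF trick truncates the time part on SQL Server 2005).
INSERT INTO dbo.Accounts_Archive (BranchCode, AccountNo, Balance, ArchiveDate)
SELECT BranchCode, AccountNo, Balance,
       DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)
FROM dbo.Accounts;
```

Querying "the accounts as of date X" is then a simple WHERE ArchiveDate = @X, with no dynamic table names.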
Consider TABLE PARTITIONING (MSDN) - it is designed for exactly this kind of scenario. Not only can you spread data across partitions (and map partitions to different disks), you can also keep all the data in the same table and let MSSQL do the hard work in the background (choosing which partition to use based on the select criteria, etc.).
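A minimal sketch of monthly partitioning on the archive date (SQL Server 2005+ Enterprise; the names, boundary dates, and filegroup mapping are illustrative):

```sql
-- Boundary values define monthly ranges; RANGE RIGHT puts each boundary
-- date in the partition to its right.
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2010-11-01', '2010-12-01', '2011-01-01');
GO
-- Map every partition to PRIMARY here; in practice you could map each
-- to its own filegroup on a separate disk.
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);
GO
-- The table is created directly on the partition scheme.
CREATE TABLE dbo.Accounts_Log (
    BranchCode  varchar(10)   NOT NULL,
    Balance     decimal(18,2) NOT NULL,
    ArchiveDate datetime      NOT NULL
) ON psMonthly (ArchiveDate);
```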

SQL 2005 DB Partitioning for SharePoint

Background
I have a massive DB for a SharePoint site collection. It is 130 GB and growing at 10 GB per month. 100 GB of the 130 GB is in one site collection, and 30 GB of that is the version table. There is only one site collection - this is by design.
Question
Am I able to partition a database (SharePoint) using SQL 2005's data partitioning features (creating multiple data files)?
Is it possible to partition a database that is already created?
Has anyone partitioned a SharePoint DB? Will I encounter any issues?
You would have to create a partition scheme and rebuild the table on that partition scheme. SQL 2005 can only partition on a single column, so you would have to have a column in the DB that:
behaves fairly predictably, so you don't get a large skew in the amount of data in each partition;
is (IIRC) a numeric or datetime value.
In practice it's easiest if the column is monotonically increasing - you can create a series of partitions (automatically or manually) and the system will fill them up as it reaches the range definitions.
A date (perhaps the date the document was entered) would be ideal. However, you may or may not have a useful column on the large table. M.S. tech support would be the best source of advice for this.
The partitioning should be transparent to the application (again, you need a column with appropriate behaviour to use as a partition key).
Unless you are lucky enough to have a partition key column that is also used as a search predicate in the most common queries, you may not get much query-performance benefit from the partitioning. An example of a column that works well is a date column in a data warehouse. However, your SharePoint application may not make extensive use of this sort of query.
Mauro,
Is there no way you can segment the data at the SharePoint level?
I.e., you may have multiple "sites" using a single (SQL) content database.
You could migrate site data to a new content database, which would allow you to reduce the data in that large content database and then shrink the data files.
It would also help you manage your obvious continued growth.
James.