Adding new data to the end of a table using Power Query - SQL

I've got my query that pulls data from SQL Server 2012 into Excel using Power Query. However, when I change the date range in my query, I'd like to pull the new data into my table and store it below the previously pulled data without deleting it. So far all I've found is the refresh button, which will rerun my query with the new dates but replace the old data. Is there a way to accomplish this? I'm using it to build an automated QA testing program that will compare this period to last. Thank you.

This sounds like incremental load. If your table doesn't exceed about 1.1 million rows, you can use the technique described here: http://www.thebiccountant.com/2016/02/09/how-to-create-a-load-history-or-load-log-in-power-query-or-power-bi/
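The linked post builds a self-referencing "load history" query entirely in Power Query. Since the data here already comes from SQL Server 2012, another option (not from the linked post, just a sketch) is to do the appending on the SQL side: keep an append-only history table, insert each period's rows into it, and let Power Query simply read that table, so a plain refresh shows the old and new periods together. All table, column, and variable names below are invented for illustration:

-- Hypothetical append-only history table; every pull adds rows, nothing is replaced.
CREATE TABLE dbo.QaResultsHistory (
    LoadedAt    DATETIME2     NOT NULL DEFAULT SYSDATETIME(),  -- when this batch was appended
    PeriodStart DATE          NOT NULL,
    PeriodEnd   DATE          NOT NULL,
    SomeMeasure DECIMAL(18,2) NULL
);

-- Run once per period with the new date range; earlier rows stay untouched.
DECLARE @PeriodStart DATE = '2016-01-01',  -- illustrative dates only
        @PeriodEnd   DATE = '2016-01-31';

INSERT INTO dbo.QaResultsHistory (PeriodStart, PeriodEnd, SomeMeasure)
SELECT @PeriodStart, @PeriodEnd, s.SomeMeasure
FROM dbo.SourceTable AS s
WHERE s.TransactionDate >= @PeriodStart
  AND s.TransactionDate <  DATEADD(DAY, 1, @PeriodEnd);

Power Query then just points at dbo.QaResultsHistory, and the period-over-period comparison runs against the accumulated rows.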

Related

How to add new row data to a Power BI table on refresh

On our SQL Server, our org has a table that contains a current instance of records. I need to query that table and append the output row(s) to a Power BI data table.
I have researched doing this in Power Automate with the “Add Rows to a dataset” step. Unfortunately, I cannot find a way to use the aforementioned SQL query as the payload.
Has anyone else encountered this use case? Is there an alternative way to continuously add rows to a table based on a SQL query?
I would start with this stock template:
https://powerautomate.microsoft.com/en-us/templates/details/ab50f2c1faa44e149265e45f72575a61/add-sql-server-table-rows-to-power-bi-data-set-on-a-recurring-basis/
There are a few ways:
Incremental refresh: https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-overview
Duplicate removal: download the whole DB, then remove the duplicates
Create a SQL-side VIEW that does the same thing, and use that VIEW on the Power BI side (a sketch follows below)
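As a rough illustration of the last option, the duplicate removal can be pushed into a view on the SQL side and the view imported into Power BI. All object names below are invented, and "keep the latest row per key" is only an assumption about what "current instance" means here:

-- Hypothetical SQL-side view that keeps only the latest row per RecordId,
-- so Power BI can import it like an ordinary table. Names are illustrative.
CREATE VIEW dbo.vLatestRecords
AS
SELECT RecordId, Status, UpdatedAt
FROM (
    SELECT RecordId, Status, UpdatedAt,
           ROW_NUMBER() OVER (PARTITION BY RecordId ORDER BY UpdatedAt DESC) AS rn
    FROM dbo.Records
) AS x
WHERE rn = 1;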

IBM SPSS How to import a Custom SQL Database Query

I am looking to see if the capability exists to have a custom SQL query (written in SSMS) imported into SPSS (Statistical Package for the Social Sciences). I would want to build syntax that runs this query as my new dataset, which I can then use to continue my scripted analysis. I see the basic capability to query one table from a SQL Server, but I would like to create a query that joins many tables. I anticipate the query will be a bit complex, with many joins and perhaps data transformations.
Has anybody had experience or a solution to this situation?
I know I could take the query and make a table of it that SPSS can then connect to, but my data changes daily, and I would need a job in another application to refresh this table before my SPSS syntax pulls it. I would like to eliminate that first step by just having the query that grabs the data at the beginning of my syntax.
Ultimately I am looking to build out my SPSS syntax and schedule it in the Production Facility to run daily.
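One possible way around the "refresh job" concern, offered here only as a sketch: a view is evaluated at query time, so wrapping the multi-join query in a view gives the basic database import a single table-like object that always reflects the current data, with no separate refresh step. All table and column names below are invented for illustration:

-- Hypothetical view wrapping a multi-join query; a database import can then
-- read it like a single table. Names are illustrative.
CREATE VIEW dbo.vAnalysisDataset
AS
SELECT c.CustomerId,
       c.Region,
       o.OrderDate,
       o.Amount,
       p.ProductCategory
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.CustomerId = c.CustomerId
JOIN dbo.Products  AS p ON p.ProductId  = o.ProductId;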

Tableau Incremental Refresh Using SQL Query

I have a question regarding incremental refresh with SQL Query on a Tableau Server.
My plan was the following:
Run the query for data up to yesterday (i.e. 20/7/2021). After this I will have the full extract up to that date.
The next day (22/7/2021), I will build a flow that will do this: each day it will run the query for the previous day (21/7/2021) and UNION the data with the extract. That way, I will have the incremental extract using the SQL query.
Problem:
For that procedure, I must use the output extract that the flow will produce.
I tried this procedure on my local machine, but Tableau Prep gives me the following error.
What's the best solution to approach this problem? Is there a better way?
I also attach the full Flow.
Thank you in advance.
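For reference, a minimal sketch of the daily query such a flow could run, written as T-SQL; the table and column names are assumptions, and the date functions would need adjusting for other databases. It selects only the previous day's rows, which are then UNIONed onto the existing extract:

-- Hypothetical incremental query: pull only yesterday's rows.
SELECT t.TransactionId,
       t.TransactionDate,
       t.Amount
FROM dbo.Transactions AS t
WHERE t.TransactionDate >= CAST(DATEADD(DAY, -1, GETDATE()) AS DATE)
  AND t.TransactionDate <  CAST(GETDATE() AS DATE);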

Fastest way to convert a very large SQL Server table

We are redesigning a very large (~100 GB, partitioned) table in SQL Server 2012.
For that we need to convert data from the old (existing) table into the newly designed table on the production server. The new table is also partitioned. Rows are only appended to the table.
The problem is a lot of users work on this server, and we can do this conversion process only in chunks and when the server is not under heavy load (a couple of hours a day).
I wonder if there is a better & faster way?
This time we will finish the conversion process in a few days (and then switch our application to use the new table), but what would we do if the table was 1 TB? Or 10 TB?
PS. More details on the current process:
The tables are partitioned based on the CloseOfBusinessDate column (DATE). Currently we run this query when the server is under low load:
INSERT INTO
    NewTable
    ...
SELECT ... FROM
    OldTable -- this SELECT involves XML parsing and CROSS APPLY
WHERE
    CloseOfBusinessDate = @currentlyMigratingDate -- the date of the partition currently being migrated
Every day about 1M rows from the old table get converted into 200M rows in the new table.
When we finish the conversion process we will simply update our application to use NewTable.
To everybody who took the time to read the question and tried to help me: I'm sorry, I didn't have enough details myself. It turns out the query that selects data from the old table and converts it is VERY slow (thanks to @Martin Smith I decided to check the SELECT query). The query involves parsing XML and uses CROSS APPLY. I think the better way in our case would be to write a small application that simply loads data from the old table for each day, converts it in memory, and then uses Bulk Copy to insert it into the new table.
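For anyone hitting something similar, a cheap first check is to run the SELECT on its own, without the INSERT, and compare timings. In T-SQL that could look like the template below; it is a template rather than a runnable statement, since the column list is elided in the question:

SET STATISTICS TIME ON;   -- report CPU and elapsed time
SET STATISTICS IO ON;     -- report logical/physical reads

SELECT ... FROM           -- the same XML-parsing / CROSS APPLY projection as above
    OldTable
WHERE
    CloseOfBusinessDate = @currentlyMigratingDate;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;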

Why does my SSIS package take so long to execute?

I am fairly new to creating SSIS packages. I have the following SQL Server 2008 table, called BanqueDetailHistoryRef, containing 10,922,583 rows.
I want to extract the rows that were inserted on a specific date (or dates) and insert them into a table on another server. I am trying to achieve this through an SSIS package whose data flow looks like this:
OLEDB Source (the table with the 10 million+ records) --> Lookup --> OLEDB Destination
On the Lookup transformation I have specified the following query:
SELECT * FROM BanqueDetailHistoryRef WHERE ValueDate = '2014-01-06';
It takes around one second to run in SQL Server Management Studio, but the described SSIS package takes a really long time to run (about an hour).
What is causing this? Is this the right way to achieve my desired results?
You didn't show how your OLEDB Source component was set up, but looking at the table names I'd guess you are loading the whole 10 million+ rows in the OLEDB Source and then using the Lookup to filter out only the ones you need. This is needlessly slow.
You can remove the Lookup completely and filter the rows in the OLEDB Source using the same query you had in the Lookup, for example:
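A sketch of what that source query could look like, with the OLE DB Source's data access mode set to "SQL command"; the literal date from the question can instead be a ? parameter mapped to a package variable holding the date to extract:

-- Filter applied directly in the OLE DB Source ("SQL command" data access mode),
-- so only the matching rows enter the data flow.
SELECT *
FROM dbo.BanqueDetailHistoryRef
WHERE ValueDate = ?;   -- map ? to a package variable holding the date, e.g. 2014-01-06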