Initialization of the data source failed - Excel 2016 - Power Pivot

I'm trying to refresh a query in Excel 2016 (a new install) and I get the above error. I've looked around; the problem seems quite common, but none of the answers seems to fit my issue.
In Excel, I have a couple of tabs of data in Excel tables. I use Get & Transform to import these tables into Power Query, from which I generate 4 further tables of data that are loaded to the Data Model. I then create 3 relationships and generate 3 pivot tables, with a single slicer to operate them.
When I close Excel, reopen the workbook and select "Refresh All", I get the error:
Initialization of the data source failed.
Check the database server or contact your database administrator. Make sure the
external database is available, and then try the operation again. If you see
this message again, create a new data source to connect to the database.
The data source is the Excel workbook itself. I tried re-creating the Power Query queries and so on, but to no avail.
Running Repair on Power Pivot also didn't work.
Given that it's a new install of Excel 2016, which includes Power Query and Power Pivot as standard, I'm not sure where to look next.
Any help much appreciated.

I ran a repair on my installed version of Excel 2010, and that seems to have solved the issue for me. I've seen cases where, if the user has multiple versions of Excel installed, library references can get broken, resulting in this error.

Related

SSAS Tabular Model. Column disappears after SSAS service restart

SSAS Version: 14.0.226.1
Visual Studio Version: 4.7.02558
Issue: once the model is deployed to the server, it is processed without any errors. But if the SSAS server is rebooted, one of the dimensions throws an error while processing: it just loses one of its columns. Here is the error that I get (Failed to save modifications to the server. Error returned: 'The 'Global_Code_SKU' column does not exist in the rowset.'):
The model contains 2 dimensions and a fact table with 632 million rows in it. Could the fact table size be the issue? Maybe the dictionary is too big?
How I fix it: by deploying the model again without partitions and roles, just metadata, and that fixes the issue. However, sometimes the servers are rebooted without notification, so the processing job fails the next day (it runs once a day).
Is there anything I can consider to fix this? I searched for a while but haven't found a solution.
There was a hidden character right before the first symbol in one of the names, so after comparing the binary representations of the two strings we found that we just had to recreate the table, and that solved the problem.
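If anyone needs to do the same check, here's a minimal T-SQL sketch of the idea, assuming the source is a SQL Server table (dbo.YourSourceTable is a placeholder): comparing the raw bytes of the name exposes a hidden leading character that the visible text hides.

-- Compare the expected column name with what the source table actually contains;
-- a hidden character shows up as extra bytes in the VARBINARY value.
SELECT name,
       DATALENGTH(name)                            AS ActualByteLength,
       CONVERT(VARBINARY(200), name)               AS ActualBytes,
       CONVERT(VARBINARY(200), N'Global_Code_SKU') AS ExpectedBytes
FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.YourSourceTable');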
Some suggestions to try:
After the reboot, connect to the SSAS server using SSMS, right-click the database in question and choose Script -> Script Database As. Is the column Global_Code_SKU still there? Is it hidden? Is it available in the source?
What datatype is the Global_Code_SKU? I've had problems with columns with similar values being auto-identified by SSAS as binary and therefore excluded from the load.
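One more way to check, sketched here on the assumption that the model is at compatibility level 1200 or higher: run a DMV query against the Tabular instance from SSMS and see whether the column survived the reboot and what type it was given.

-- Lists the columns the deployed model currently contains, with data type
-- and visibility; look for Global_Code_SKU after the restart.
SELECT [TableID], [ExplicitName], [ExplicitDataType], [IsHidden]
FROM $SYSTEM.TMSCHEMA_COLUMNS;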

Automation to pull data into Excel from SQL

I have a report that I generate on a weekly basis. I have the code written in SQL, and I then pull all the data into Excel's data model.
I then create pivot tables and dashboards in Excel from that data.
The SQL code creates a new table of the same name every time and deletes the older version of the table. There isn't any way for me to just append the new data, as the report is run from the very start each time, not just on the new data.
I wish to automate this process of refreshing my dashboard from the data I produce in SQL. Is there a way to do so?
Currently I create a new table in SQL, import the data into Excel's data model and then recreate the dashboard.
I am not even sure if this is possible. Any help would be greatly appreciated!
Solved!
After some digging, I was able to find a feature that Excel's data model supports.
Instead of making a connection directly to a SQL Server Table, you can create a connection by writing a SQL Query.
This way, even if you delete and recreate the table when updating it, as long as the name remains the same, Excel's data model can pull the data from the table just by hitting Refresh.
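As a sketch (the table and column names below are invented), the connection's command text is just a query, so each refresh re-runs it against whatever table currently carries that name:

-- Used as the connection's command text in Excel instead of pointing at a table.
-- The weekly job can drop and recreate dbo.WeeklyReport; Refresh keeps working
-- as long as the table name and column names stay the same.
SELECT ReportDate, Region, Product, Revenue
FROM dbo.WeeklyReport;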

UNION ALL with Excel file as data source

I have the following problem.
I have several Excel files in one folder, each containing the data for one country.
I want to pull all of that into one Excel report.
As the content of the source files changes daily, I guess the best way to do that is an import via a SQL statement using UNION ALL.
However, the problem is that MS Query only allows me to access one file at a time. Is there a workaround for that problem?
Maybe create a data model and use DAX?
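For reference, the sort of statement I have in mind would look roughly like this (the file paths and sheet names are only examples, and I'm not sure MS Query will even accept the syntax):

SELECT * FROM [Excel 12.0 Xml;HDR=YES;Database=C:\Reports\Germany.xlsx].[Data$]
UNION ALL
SELECT * FROM [Excel 12.0 Xml;HDR=YES;Database=C:\Reports\France.xlsx].[Data$]
UNION ALL
SELECT * FROM [Excel 12.0 Xml;HDR=YES;Database=C:\Reports\Spain.xlsx].[Data$]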
This sounds like a job for Power Query, a free add-in from Microsoft for Excel 2010 and Excel 2013, and built into Excel 2016 as "Get and Transform" in the Data ribbon.
You can create individual queries against the different Excel files in the folder, then create a query that appends all the previous queries into one table, which can be loaded to the Excel data model or to a worksheet table for further processing.
The queries can be refreshed with a click when the data has changed.

SSIS package completed successfully but data is not getting loaded into Excel Destination

Inside a Data Flow task, I have an OLE DB source, a Data Conversion task and an Excel destination.
I can see data moving from the OLE DB source to Excel through the Data Conversion task.
I switched on the data viewer and could see rows flowing through.
I replaced the Excel destination with a flat file, and the flat file gets loaded with the data.
But if my destination is Excel, I am not able to see any data in that Excel file. The total row count is around 600,000, and my destination is Excel 2007 (.xlsx).
I am running the package in 32-bit mode.
Can anyone please help me out?
Thank you so much in advance.
Keep the worksheet row limits in mind: the legacy .xls format tops out at 65,536 rows, while .xlsx (Excel 2007 and later) allows 1,048,576 rows (see Microsoft's Excel specifications and limits). With around 600,000 rows, double-check that the Excel connection manager really is set to the Excel 2007 (.xlsx) format; if it falls back to the older Excel 97-2003 format, the data simply won't fit.
In case you haven't already checked, page/scroll down to the end of the spreadsheet to confirm the data hasn't just been appended below rows that previously held data.
Carl's answer is probably the right fit, but I thought I'd share this just in case. I had a similar outcome while developing an SSIS package today.
I tried to transfer data to an Excel sheet that previously had data in the first 1400 rows. I deleted the data in the Excel sheet prior to running the package. The package ran to completion (all green) and said it wrote 1400 rows.
Went back to check the file but there was nothing. Made some tweaks to the package and ran it a few more times with the same result.
Upon closer inspection of the destination Excel sheet, I found that the data actually did get over to the Excel sheet but it didn't start until row 1401...even though there was nothing in rows 1-1400. Did some research but found no solutions that would be worth the time. I ended up just exporting the data to a new file.
Go to the registry key that matches the redistributable version you have installed:
Excel 2016:
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Office\16.0\Access Connectivity Engine\Engines\Excel
Excel 2010:
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel
and change TypeGuessRows from 8 to 0.

SQL Server 2000: how to automate importing data from Excel

Say the source data comes in Excel format; below is how I import it:
1. Converting to CSV format via MS Excel
2. Roughly finding bad rows/columns by inspecting the file
3. Backing up the table that needs to be updated, in SQL Query Analyzer
4. Truncating the table (may need to drop foreign key constraints as well)
5. Importing data from the revised CSV file in SQL Server Enterprise Manager
6. If there's an error such as duplicate columns, checking the original CSV and removing them
I was wondering how to make this procedure more efficient at every step. I have some ideas, but they're not complete.
For steps 2 and 6: use scripts that can check automatically and print out all the bad row/column data, so it's easier to remove all the errors at once (a rough sketch of what I mean is below, after these ideas).
For steps 3 and 5: is there any way to automatically update the table without manually going through the import steps?
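For example, for steps 2 and 6, something along these lines, assuming the CSV is first loaded into a staging table and the duplicates are repeated key values (dbo.Staging_Import and KeyColumn are placeholders):

-- List every key value that appears more than once in the staging table,
-- so the offending rows can be fixed in the source CSV before the real import.
SELECT KeyColumn, COUNT(*) AS Occurrences
FROM dbo.Staging_Import
GROUP BY KeyColumn
HAVING COUNT(*) > 1
ORDER BY Occurrences DESC;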
Could the community advise, please? Thanks.
I believe in SQL Server 2000 you still have DTS (Data Transformation Services) as part of Enterprise Manager. Using that, you should be able to create a workflow that does all of these steps in sequence. I believe it can natively import Excel as well. You can run everything from SQL queries to VBScript, so there's pretty much nothing you can't do.
I used to use it for these kinds of bucket-brigade jobs all the time.
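If you'd rather drive it from plain T-SQL (inside a DTS Execute SQL task or a scheduled job), a rough sketch of the backup/truncate/reload steps might look like this; it assumes the Jet OLE DB provider is available on the server and ad hoc queries are allowed, and the table and file names are placeholders:

-- Step 3: keep a copy of the current contents
-- (drop dbo.MyTable_Backup first if it already exists)
SELECT * INTO dbo.MyTable_Backup FROM dbo.MyTable;

-- Step 4: clear the table (drop or disable foreign key constraints first if needed)
TRUNCATE TABLE dbo.MyTable;

-- Step 5: reload straight from the Excel file via the Jet OLE DB provider
INSERT INTO dbo.MyTable
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Imports\source.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]');

DTS can then run this batch on a schedule, or each step can be its own task in the workflow.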