I am using VS2019 for SSAS Tabular Model development and have imported a table from a CSV. The source CSV has undergone a change (a new column has been added). When I process my table in VS2019, it processes successfully; however, I am unable to see the new column introduced in the source CSV. I went to Table Properties and did a Refresh Preview but could not see the new column. I closed and restarted the solution and re-processed the table, but no luck! I remember that in VS2017 we used to add the column by going into Table Properties and selecting the new column, but things seem to be different in VS2019. Any help would be appreciated.
I'm assuming you used Get Data / Power Query to import the CSV. Unfortunately, this generates a Power Query Csv.Document function call that hard-codes the number of columns present when the query was generated. This parameter isn't exposed through the usual Power Query UI.
If you use the Advanced Editor or turn on the Formula Bar (View menu), you will see that a parameter like Columns=10 was generated, usually in your Source step.
It currently seems safe to delete that parameter by editing the code; it will then always pull back all the columns presented. Or, if you prefer, you can edit the number of columns, as described in this blog post:
https://prathy.com/2016/08/how-to-add-extra-columns-to-an-existing-power-bi-file-which-using-csv-data-source/
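For illustration, here is roughly what the generated Source step looks like and what the edit changes (the file path and the other option values are hypothetical; only the Columns parameter matters here):

    // Generated step: locked to the 10 columns present at design time
    Source = Csv.Document(File.Contents("C:\Data\MyTable.csv"),
        [Delimiter=",", Columns=10, Encoding=65001, QuoteStyle=QuoteStyle.None])

    // Edited step: with Columns removed, all columns in the file are pulled back
    Source = Csv.Document(File.Contents("C:\Data\MyTable.csv"),
        [Delimiter=",", Encoding=65001, QuoteStyle=QuoteStyle.None])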
I'm new to building SSIS packages; in fact, this is my first package. I need to pull data from a database view on an Azure managed instance to an on-prem SQL Server. I have built out the data flow and all. I'm moving data from a database view into another database table, but the destination table has a column that the source doesn't have, hence my destination mapping view looks like the attached image. How do I fix this, or what are my options?
If this column needs to stay empty and you don't have it in the source, your best and only option is to leave it like this. The destination simply ignores it, so no information will be fed into that column. That will work.
If you need to populate it with information such as the current date, you can add a Derived Column box between your source and destination in your Data Flow, where you can add the current date or more columns that come from variables, for example.
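For example, the Derived Column transformation could be configured like this to stamp each row with the current date (the column name is just an illustration):

    Derived Column Name:  LoadDate
    Expression:           GETDATE()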
It is self-explanatory that "ignore (optional)" means the mapping for those columns can be ignored, and if you want columns to be mapped to a calculated column, you can do it using the Derived Column SSIS component (Reference).
As per your use case, try to use the OLE DB component instead of the ADO.NET component to optimize performance for a relatively large data set.
Does anyone know if there is a way to import a spreadsheet into Report Builder 2.0 and then have my dataset make calculations against it?
This might seem like a novice question, as my limited experience with Report Builder does not help.
The reason I want to do this is so that my main dataset doesn't have to run the query that works out averages across hundreds of thousands of records, as it takes ages to run. By keeping the benchmark average data static, I would run my query and do the calculations in Report Builder, which would make it a hundred times faster.
Thank you for your time in advance
You may be able to overcome this by adding Calculated Fields to your dataset (DS). I am assuming that your static data can be related to your dataset by using at least one existing field. Using the Switch function, you can populate your calculated fields. Switch “evaluates a list of expressions and returns an Object value corresponding to the first expression in the list that is True.”
You can use the function like this:
=Switch(Fields!DsField1.Value = 2, "Your Value1", Fields!DsField1.Value = 5, "Your Value2", Fields!DsField1.Value = 10, "Your Value3", ...)
If you have any condition that needs to be checked, you can add it before the Switch statement like this:
=IIF(Fields!DsField20.Value <> 1000, Switch(Fields!DsField1.Value = 2, "Your Value1", Fields!DsField1.Value = 5, "Your Value2", Fields!DsField1.Value = 10, "Your Value3", ...), Nothing)
You can have your values in an Excel sheet to make the creation of the formula easier. Simply create your formula in the first row, copy the row down to extend your formula, and cover all your values. Then from Excel simply copy and paste the column of data into your calculated field(s).
Here's an example of my Excel formula. This is the best I could do, as I could not paste the sample here. You can copy and paste these and replace them with your own values.
In cell A2:  2
In cell B2:  YourValue1
In cell C2:  YourOtherValue1
In cell D2:  YourOtherOtherValue1
In cell E2:  YourOtherOtherOtherValue1
In cell F2:  ="Fields!DsField1.Value ="&A2&","&""""&B2&""""&","
In cell G2:  ="Fields!DsField1.Value ="&$A2&","&""""&C2&""""&","
In cell H2:  ="Fields!DsField1.Value ="&$A2&","&""""&D2&""""&","
In cell I2:  ="Fields!DsField1.Value ="&$A2&","&""""&E2&""""&","
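Each of those formulas builds one condition/value pair of the Switch argument list. With the sample values above, cell F2 evaluates to the text:

    Fields!DsField1.Value =2,"YourValue1",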
Sorry if there is anything I have missed; I did this in a rush.
Report Builder doesn't have anywhere you can 'import' the spreadsheet to, except one of the databases you are querying from. And Excel isn't a supported data source for SSRS; however, it might be possible to add a report data source that uses an ODBC DSN pointing to the appropriate Excel file (I haven't tried it).
But I can foresee some problems with this approach: it may get upset under multiple users, and I expect you may find the file gets locked so you can't update it very easily.
An approach that might work could be to upload the static data into an Access database (as that is supported via the OLE DB Jet provider) and reference that as a data source; but the best approach is always going to be importing the static data into a table in your main database and using that.
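For reference, a Jet OLE DB connection string for such an Access database might look like this (the file path is purely illustrative):

    Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\StaticBenchmarks.mdb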
We get weekly data files (flat files) from our vendor to import into SQL, and at times the column names change or new columns are added.
What we have currently is an SSIS package that imports the columns that have been defined. Since we've assigned the mapping, SSIS only throws an error when a column is absent. However, when a new column is added (apart from the existing ones), it doesn't get imported at all, as it is not mapped. This is a concern for us.
What we'd like is to get the list of all the columns present in the flat file so that we can check whether any new columns are present before we import the file.
I am relatively new to SSIS, so detailed help would be much appreciated.
Thanks!
Exactly how to code this will depend on the rules for the flat file layout, but I would approach it by writing a Script Task that reads the flat file with a StreamReader and looks at the columns, which are hopefully named in the first line of the file.
However, about all you can do if the columns have changed is send an alert. I know of no way to dynamically change your data transformation task to accommodate new columns; it will have to be edited to handle them. And frankly, if all you're going to do is send an alert, you might as well just use the error handler to do it and save yourself the trouble of pre-reading the column list.
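Here is a minimal sketch of what the Main method of such a Script Task might look like, assuming the file path and the expected comma-separated header are passed in through hypothetical package variables User::FilePath and User::ExpectedHeader (Dts and ScriptResults come from the ScriptMain class that SSIS generates):

    using System;
    using System.IO;
    using System.Linq;

    public void Main()
    {
        // Read only the first line of the flat file; it should contain the column names.
        string filePath = Dts.Variables["User::FilePath"].Value.ToString();
        string headerLine;
        using (var reader = new StreamReader(filePath))
        {
            headerLine = reader.ReadLine() ?? string.Empty;
        }

        // Compare the columns found against the expected list and collect any new ones.
        var expected = Dts.Variables["User::ExpectedHeader"].Value.ToString()
                          .Split(',').Select(c => c.Trim());
        var actual = headerLine.Split(',').Select(c => c.Trim());
        var newColumns = actual.Except(expected, StringComparer.OrdinalIgnoreCase).ToList();

        // About all we can do with a changed layout is raise an alert.
        if (newColumns.Count > 0)
        {
            Dts.Events.FireWarning(0, "Header check",
                "New columns found: " + string.Join(", ", newColumns), string.Empty, 0);
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }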
I agree with the answer provided by @TabAlleman. SSIS can't natively handle dynamic columns (and neither can your SQL destination).
May I propose an alternative? You can detect a change in headers without using a C# Script Task. One way to do this would be to create a flat file connection that reads the entire row as a single column. Use a Conditional Split to discard anything other than the header row. Save that row to a Recordset object. Any change? Send an email.
The "Get Header Row" DataFlow would look like this. Row Number if needed.
The Control Flow level would look like this. Use a ForEach ADO RecordSet object to assign the header row value to an SSIS variable CurrentHeader..
Above, the precedence constraints (the fx icons) with the expressions
@[User::ExpectedHeader] == @[User::CurrentHeader]
@[User::ExpectedHeader] != @[User::CurrentHeader]
determine whether you load data or send the email.
Hope this helps!
I have worked for banking clients, and for banks, randomly adding columns to a DB is not possible due to federal requirements and rules. That said, I get that yours is not a federally regulated business. So here are some steps.
This is not a code issue but more one of soft skills and working with other teams (yours and your vendor's).
Steps you can take are:
(1) Agree on a solid column structure that you always require, because for newer columns the older data rows will carry NULL.
(2) If a new column is going to be sent by the vendor, you or your team needs to make the DDL/DML changes to the table where the data will be inserted, of course with the correct data type.
(3) Document this change in the data dictionary, as over time you or another team member will analyze this data and will want to know the use of each attribute or column.
(4) Long-term, you do not want to keep changing the table structure monthly because one of your many vendors decided to change the way they send you data. Some clients push back very aggressively, others not so much.
If a third-party tool is an option for you, check out CozyRoc's Data Flow Task Plus. It handles variable columns in sources.
SSIS cannot make the columns dynamic. One thing I always do is use a Script Task to read the first and last lines of a file.
If the first line is not the expected list of CSV columns, I mark the file as errored and continue or fail as required.
Headers are obviously important, but so are footers: files can, through any unknown issue, be partially built, so requesting that the header also be placed at the end of the file gives you a double check.
I also do not know if SSIS can do this dynamically, but it never ceases to amaze me how people add/change order of columns and assume things will still work.
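A minimal sketch of that first-and-last-line check (the file path and expected header are illustrative):

    using System;
    using System.IO;
    using System.Linq;

    // File.ReadLines streams the file, so even a large feed is not loaded into memory at once.
    string path = @"C:\Feeds\weekly.csv";
    string header = File.ReadLines(path).First();
    string footer = File.ReadLines(path).Last();

    // The footer doubling as a copy of the header guards against partially built files.
    bool fileOk = header.Equals("Col1,Col2,Col3", StringComparison.OrdinalIgnoreCase)
               && footer.Equals(header, StringComparison.OrdinalIgnoreCase);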
1- SSIS does not provide dynamic source and destination mapping, but some third-party components, such as Data Flow Task Plus, support this feature.
2- We can achieve this using an SSIS Script Task.
3- If the header is correct, process further with the migration; otherwise, fail the package before the DFT executes.
4- Read the header line using the Script Task and store it in an array or list object.
5- Then compare those array values to user-defined variables, declared earlier, containing the default column names.
6- If the values match exactly, progress further; otherwise, fail the package.
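Step 6 might look like this inside the Script Task's Main method, assuming the current and expected headers are held in hypothetical variables User::CurrentHeader and User::ExpectedHeader:

    // Fail the package before the DFT runs if the header does not match exactly.
    bool headerOk = string.Equals(
        Dts.Variables["User::CurrentHeader"].Value.ToString(),
        Dts.Variables["User::ExpectedHeader"].Value.ToString(),
        StringComparison.Ordinal);

    Dts.TaskResult = headerOk ? (int)ScriptResults.Success : (int)ScriptResults.Failure;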
I've made a dataset using the dataset designer, and I'm trying to add a column to reflect changes made to the database (added a column, nothing fancy). Is there a way to 'refresh' the dataset schema from the datasource without deleting my adapter (and all the methods and queries I've created)?
I know it's been a while since you posted, but as I was having the same problem and figured out how to do this, I reckoned I'd post the solution that worked for me.
Right-click on the dataset object you want to update (on the strip at the bottom of your view pane).
Select "Edit in DataSet Designer".
In the DataSet Designer, right-click on the header of the table you want to add a column to.
Select "Configure..."; this will bring up the SQL statement that is used to draw values into the dataset for this table.
Edit the SQL to include the column you want in your dataset's table and click Finish, i.e., include your column's name in the select statement's list.
Close the DataSet Designer, then go to any control (in my case it's a DataGridView), click on the tasks arrow (top right-hand corner next to the handle) and select "Add Column".
Select the newly created column from the list of databound columns and click "Add".
Select "Edit Columns" from the task menu.
Move the column to the correct position (it will always be placed as the last column in your grid, and you may not want it to be the last column).
Voila! I know it's hardly snappy, but it beats the hell out of deleting the dataset and then fixing up all the coding errors that come up... also, after doing it a few times it'll be like second nature (I hope).
regards
P.S. I am working in VS2010.
I had to just delete the adapter and the table. It's rather annoying, but I guess there really isn't a way around it. Maybe in VS2010 or later versions of .NET.
I have a problem creating a dynamic report in SSRS. My problem is:
In a table I have stored SQL scripts in a column called SQLScripts. If you execute these SQL scripts, you get a different number of columns for each script.
I have one report with buttons for these scripts, for example test1, test2, and so on. If you press the test1 button, it should take the test1 SQL script and display the report with the appropriate columns from that script.
I can't create individual reports for each test; there are plenty of them. Are there any options for me to solve this problem?
The only way I've been able to get this to work so far is:
Each report has 2 datasets.
ReportData
DataHeaders
The "DataHeaders" need to have the proper name of the datafields in "ReportData". Be careful since SSRS replaces blanks and special characters with "_"
Now, create a table (or matrix) and drag the DataHeaders as the Columns of your report. (This should be a grouped column). If you run it at this point, you'll see all your columns without any data. Now comes the magic:
Create another report that takes a "DataField" parameter. Create another table or matrix within this report and set its dataset property to "ReportData". In the DATA cell for the table, set the expression to =Fields(Parameters!DataField.Value).Value
Now go back to your first report. Right click and insert a subreport. Right click on the subreport and select "Subreport Properties". Under general, select the second report you created to be used as the subreport. Under parameters, select the DataField parameter and set its value to something like =Fields!DataField.Value
In my case I did some formatting in this expression to fix the above mentioned issue with spaces and special characters, since my stored procedure was initially used in ASP.NET and this was just a proof of concept.
Also in my experience the performance isn't great. In fact it was kinda slow, though I haven't had a chance to switch it to use a shared dataset, which I suspect would help a bit. Please let me know if you find a better solution.
I have not found a way to do this completely dynamically. Here is a similar question with some possible solutions:
How do i represent an unknown number of columns in SSRS?
You basically need to first create a 'master dataset' from the other datasets that are based on your multitude of SQL scripts. The master dataset should contain the data to be presented in its simplest form, i.e., in a simple list format.
Finally, go to the toolbar in SSRS and drag a 'Matrix' into the report. A matrix acts similarly to a pivot table in Excel or a crosstab query in Access and will display whatever is in the dataset.