ForEach Loop Container does not pass variables to Data Flow Task

I have a ForEach Loop Container (FELC) with a Data Flow Task (DFT) in my SSIS package. In the DFT there is an OLE DB Source whose SQL command comes from the variable "DateToSelect". This variable has two parameters - StartDate and EndDate - which I pass in from the FELC. The goal of the package is to load a table on SQL Server for certain periods.
The problem is this: the package runs without errors, but no data is loaded into the table. I have set breakpoints and watches to check whether my variables are changing - and they really do - but still nothing is loaded.
Does anybody have an idea what is wrong?
One more detail I have noticed: the package loads data only for April, while the period runs from January to April (or from February to April), and if I change the period to January through March, again no data is loaded.
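For reference, a minimal sketch of the statement that User::DateToSelect might be expected to evaluate to on a single FELC iteration (table, columns and dates here are all hypothetical). Checking what the variable actually evaluates to at runtime, and whether the end boundary is inclusive or exclusive, is a good first step:
-- Hypothetical shape of the DateToSelect command for one iteration (January):
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE OrderDate >= '20180101'   -- StartDate for this iteration
  AND OrderDate <  '20180201';  -- EndDate, exclusive upper bound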

Related

Can I automate an SSIS package that requires user input?

I've been developing a data pipeline in SSIS on an on-premises VM during my internship, and was tasked with gathering data from Marketo (see https://www.marketo.com/ ). The package runs without error: it starts with a truncate-table Execute SQL Task, followed by 5 Data Flow Tasks that gather data from different sources within Marketo and move it to staging tables within SQL Server, and concludes with an Execute SQL Task that loads the processing tables with only the new data.
The problem I'm having: my project lead wants this process automated to run daily. There are plenty of resources online showing how to automate an SSIS package, but my package requires user input for the Marketo source - a time frame from which to gather the data.
Is it possible to automate this package to run daily even though user input is required? I was thinking there may be a way to increment the start and end dates by one day on each run (so the start date could be 2018-07-01 and the end date 2018-07-02, each incrementing daily), to make the package run by itself. Thank you in advance for any help!
As you are automating your extract, this suggests that you have a predefined schedule on which to pull that data. From this schedule, you should be able to work out your start and end dates based on the date that the package was run.
In SSIS there are numerous ways to achieve this, depending on the data source and your connection method. If you are using a Script Task, you can simply calculate the required dates in your .NET code. Another alternative is to use variables whose value is calculated as the result of an expression, such as:
DATEADD("Month", -1, GETDATE())
Assuming you schedule your extract to run on the first day of the month, the expression above would return the first day of the previous month.
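The same idea works for the daily window described in the question: derive both dates from the run date. A minimal sketch in T-SQL, assuming the package runs each morning for the previous day (equivalent SSIS variable expressions would combine DATEADD and GETDATE in the same way):
-- Rolling one-day window derived from the run date.
DECLARE @EndDate   date = CAST(GETDATE() AS date);      -- today
DECLARE @StartDate date = DATEADD(DAY, -1, @EndDate);   -- yesterday

SELECT @StartDate AS StartDate, @EndDate AS EndDate;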

PENTAHO 7.1 - Generating a large number of different reports by script

On my Pentaho CE 7.1 installation I often need to generate a large number of reports (*.prpt) with different attributes.
For example, I have a report that shows data for a day, and I need to generate those reports for each day since September 2017.
Is there any way to create a script that would execute those *.prpt files one by one for each day from September 2017 until now?
I have been checking the API in the official Pentaho documentation, but there does not seem to be such an option. Perhaps there is some kind of hack, like passing the parameters in the URL?
Create your *.prpt with the Report Designer and use a parameter to select one day in your data.
Then open PDI and build a transformation whose first step generates a date starting from 2017-09-10, and pass this date to a Pentaho Reporting Output step. Then do what you need with the report output (mail it, save it in the pentaho-solutions directory, ...).
There is a very similar use case in the samples directory shipped with Pentaho Data Integration; it is named Pentaho Reporting Output Example.ktr.
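If it is easier to drive the loop from SQL (for example via a Table Input step), here is a hedged sketch of generating one row per day since September 2017, assuming a SQL Server source; other databases have similar constructs:
-- One row per day from a start date up to today, e.g. to feed the step
-- that runs the report once per date.
DECLARE @Start date = '20170901';   -- first day to report on (adjust as needed)

WITH Days AS (
    SELECT @Start AS ReportDate
    UNION ALL
    SELECT DATEADD(DAY, 1, ReportDate)
    FROM Days
    WHERE ReportDate < CAST(GETDATE() AS date)
)
SELECT ReportDate
FROM Days
OPTION (MAXRECURSION 0);  -- the series is longer than the default 100 levels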

SSIS - using VB to dynamically create an OLE DB Source where the SQL source is too dynamic

I'm very new to SSIS, but this week I have been completely buried in tutorials on how to use it. This has been very fruitful so far, but now I'm stuck on what seems a more complicated task.
I have two source SQL tables, which are dynamically created using the T-SQL PIVOT function - so the number of columns in each will vary.
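For context, a hedged sketch of the kind of dynamic PIVOT that produces such a table (all object names here are hypothetical):
-- A dynamic PIVOT whose output columns depend on the data, which is why
-- the resulting table's column set varies. Assumes dbo.TABLEA was dropped first.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Build the column list from the distinct pivot values.
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(Category)
    FROM (SELECT DISTINCT Category FROM dbo.SourceData) AS c
    FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT RowKey, ' + @cols + N'
INTO dbo.TABLEA
FROM (SELECT RowKey, Category, Amount FROM dbo.SourceData) AS s
PIVOT (SUM(Amount) FOR Category IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;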
I feed a list of criteria into a ForEach Loop, which is set to use an ADO enumerator to go through this data set, and a Script Task uses an IF statement to set the string variable SQL_DOWNLOAD_STRING to point to one of the two potential tables: TABLEA or TABLEB.
I tried using the standard OLE DB Source in the data flow, but the associated metadata was generated from TABLEA at build time. I then set the SQL statement to be based on the variable SQL_DOWNLOAD_STRING, and when I execute, the metadata of the source has changed, since it is obviously a different table. That said, even if it were the same table, there is no guarantee that the columns would be the same.
So my question is - how do I create the OLE DB Source dynamically so that it reconfigures itself to the new structure of the table?
Then I'm trying to export the data to an Excel workbook (which, again, I can do for a static source), but in this instance the mapping would need to change too, and I imagine I'll need to rebuild the destination table to match the source from above.
If I can get a working example, finishing the project should be pretty straightforward from there - examples for this sort of thing seem pretty limited though :(
Please help! (PS I'm working in VB)
Thanks

SQL: Tracking changes to a table that gets truncated every day (and re-pulled from a different server)

I have a table that is a replica of a table from a different server.
Unfortunately I don't have access to the transaction information; all I have is the table that shows the "as is" information, and an SSIS package that replicates the table on my server every day (the table gets truncated, and new information is pulled every night).
Everything has been fine so far, but I now want to start tracking what has changed, i.e. I want to know if a new row has been inserted or the value of a column has changed.
Is this something that could be done easily?
I would appreciate any help.
The SQL version is SQL Server 2012 SP1 | Enterprise
If you want to do this for a particular table, you can use a Slowly Changing Dimension (SCD) transform in the SSIS data flow, which will keep the history records in a separate table,
or
you can enable Change Data Capture (CDC) on that table. CDC monitors every DML operation on the table and records each modified row in a system table.
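A hedged sketch of the CDC route (database and table names here are hypothetical; CDC also relies on SQL Server Agent jobs to harvest changes from the log). One caveat for this scenario: TRUNCATE TABLE fails on a CDC-enabled table, so the nightly truncate would have to become a DELETE.
-- Enable CDC for the database (requires sysadmin), then for the table (db_owner).
USE MyDatabase;
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'ReplicatedTable',
    @role_name     = NULL;   -- NULL = no gating role required
-- Changes are then written to cdc.dbo_ReplicatedTable_CT and can be read
-- via the generated cdc.fn_cdc_get_all_changes_dbo_ReplicatedTable function.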

SSAS dimension source table changed - how to propagate changes to analysis server?

Sorry if the question isn't phrased very well but I'm new to SSAS and don't know the correct terms.
I have changed the name of a table and its columns. I am using said table as a dimension for my cube, so now the cube won't process. Presumably I need to make updates in the analysis server to reflect changes to the source database?
I have no idea where to start - any help gratefully received.
Thanks
Phil
Before going into the details of how to amend the cube, have you considered creating a view with the same name as the old table which maps the new column names to the old?
Cube processing should then pick this up transparently.
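As a hedged illustration (all table and column names here are hypothetical):
-- Recreate the old table name as a view over the renamed table, aliasing the
-- renamed columns back to the names the cube expects.
CREATE VIEW dbo.DimCustomer            -- old table name referenced by the cube
AS
SELECT CustKey  AS CustomerKey,        -- new column AS old column
       CustName AS CustomerName
FROM dbo.DimCustomerRenamed;           -- new table name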
EDIT
There are quite a lot of variations on how to amend SSAS - it depends on your local set-up.
If your cube definition is held in source control (which it should ideally be), you need to check the cube definition out and amend it from there.
If your definition exists only on the server you need to open it from the server:
Open Business Intelligence Development Studio (BIDS) - typically on the Windows Start menu under Programs > Microsoft SQL Server 2005.
Go to File > Open > Analysis Services Database
Select your server/database and click OK.
Once you have the project open in BIDS, you can amend the Data Source View to switch to the new table.
These instructions are based on the principle that it's going to be easier to alias the new table to look like the old in the DSV, since this means fewer changes within the cube definition.
Open the Data Source View from the Solution Explorer - there should be only one.
Locate the table you need to change in the DSV
Right-click on the table and select Replace Table > With New Named Query
Replace the existing query with a query against the new table, with the new columns aliased to the old names:
SELECT ~new column name~ AS ~old column name~
FROM ~new_table~
Once the new query has been set, deploy the changes:
If you use source control, check in and deploy the project to the target server.
If you opened the cube definition from the server, select File > Save All
Finally, re-process the cube.