I have a calculation script for an ASO cube that copies data from one version to another. In the POV section of the script I am using the MDX CrossJoin function to create a set of tuples:
POV "Crossjoin({Filter([Accounts].Members, IsLeaf([Accounts].CurrentMember))},
Crossjoin ({[D100]},
Crossjoin ({[2014]},{[USD]})))"
SourceRegion "Crossjoin({[ACTL]}, {[EOP]})" ;
However, when I execute this with MaxL I get the following error:
MaxL Shell completed with error
ERROR - 1300033 - Upper-level members, for example [AC0001], are not allowed in argument [POV]. Select a level-0 member.
ERROR - 1241190 - Custom Calculation terminated with Essbase error 1300033 in POV.
I am using the Filter function to keep only the level-0 members of my Accounts dimension in the POV section, yet it still appears to return parent-level members. I also tried [Accounts].Levels(0).Members,
but that gave the same error.
Can anyone help me figure out where I am going wrong?
Calculation scripts are typically used to calculate the accounts that are derived from the input accounts within a single Application.Database. A calculation script relies on consolidation operators (+) or member formulas, which is usually a BSO concept.
Assuming you are trying to extract a portion of data from one ASO Application.Database and load it into another Application.Database, one option is to use a report script, with syntax similar to the example below, to extract Level 0 members only. You can right-click the report script in Essbase Administration Services (EAS) and execute it. Report scripts are usually much easier to write than MDX. You mention filtering down to Level 0 members in your POV; note that ASO databases always need to be loaded at Level 0, and the data then rolls up to the higher levels in the outline.
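For example, something along these lines. The Accounts selection mirrors your POV, but the other dimension names (Scenario, Years, Currency, View, Entity) are only guesses based on your member names and will need to match your actual outline:
{TABDELIMIT}
{ROWREPEAT}
{SUPMISSINGROWS}
{SUPBRACKETS}
{SUPCOMMAS}
{NOINDENTGEN}
{DECIMAL 2}
<PAGE ("Scenario", "Years", "Currency", "View")
"ACTL" "2014" "USD" "EOP"
<COLUMN ("Entity")
"D100"
<ROW ("Accounts")
<LEV ("Accounts", 0)
!
The brace commands produce a tab-delimited, missing-row-suppressed extract with the row member repeated on every line, which makes the file straightforward to map in a load rules file.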
To load the text file extract, build a rules file and validate it in EAS (you can also use EAS to load directly from relational database tables). Select "File" - "Open Data File". Once the file validates, you can use EAS to load it into the other Application.Database. Once the process works in EAS, you can automate it with MaxL and run it from the command prompt.
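A MaxL script for that automation could look roughly like this; the login details and the application, database, report script, and rules file names are placeholders, not your actual objects:
/* extract level-0 data with the report script, then load it into the target database */
login admin 'password' on localhost;
export database SrcApp.SrcDb using server report_file 'lev0exp' to data_file 'lev0.txt';
import database TgtApp.TgtDb data from server data_file 'lev0.txt' using server rules_file 'ldlev0' on error write to 'load.err';
logout;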
If you are trying to copy or mirror an entire Application.Database, right-click the application in Essbase Administration Services and select Copy.
I need to understand under what circumstances protoPayload.resourceName appears in the Log Explorer with the full table path, i.e. projects/<project_id>/datasets/<dataset_id>/tables/<table_id>, as shown in the example below.
The entries below were generated by a Cloud Composer DAG running a KubernetesPodOperator that executes some dbt commands on some models. Based on this, I have a log sink linked to Pub/Sub for further processing.
As seen in the image, the resourceName value appears as:
projects/gcp-project-name/datasets/dataset-name/tables/table-name
I have shaded the actual values of the project ID, dataset ID, and table name.
I can't run a similar DAG job with the KubernetesPodOperator against test tables owing to environment restrictions, so I tried running some UPDATE and INSERT queries in the BigQuery editor. In that case the value of protoPayload.resourceName comes out as:
projects/gcp-project-name/jobs/bxuxjob_
I tried the same queries from a Composer DAG using BigQueryInsertJobOperator. In that case the value of protoPayload.resourceName comes out as:
projects/gcp-project-name/jobs/airflow_<>_
Here is my question: which operation(s) in BigQuery will give me the protoPayload.resourceName that I am expecting, i.e.:
projects/<project_id>/datasets/<dataset_id>/tables/<table_id>
I have an ODBC connection to an AWS MySQL database instance. It's extremely frustrating that the Excel UI appears to require me to run the query twice.
First, I have to run the query like this:
After this runs and returns a limited number of rows (second image below), I then have to run it again to load the data into Excel.
My question is: is there any way to skip step 1 or step 2, so that I can enter my query and have it load directly into the workbook?
I'm not sure I understand the problem. You are configuring a query connection. The first execution returns a preview and the "Transform data" option (in case you want to tailor the query further); the second execution loads the data. From that point on the query is set up, and it only needs to be configured once.
To get new or changed data, you just do a "Refresh All", or configure the connection to refresh automatically when the workbook is opened.
If you are adding the query to many workbooks, you could set one up and then script the query substitution.
I am using a SELECT statement in an Excel source to import only specific columns from an Excel file.
But I am wondering: is it possible to select the data in such a way that I ask for, say, the column named Column_1, but if that column does not exist in the Excel file it falls back to the column named Column_2? Currently, if Column_1 is missing, the data flow task fails.
Use a Script Task and write .NET code to read the Excel file and check whether Column_1 is present. If the column is not present, use Column_2 as the input. A Script Component in the data flow can then act as the source.
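A rough sketch of that check, assuming the ACE OLE DB provider is installed, the worksheet is named Sheet1, and the file path and chosen column name travel through package variables named User::ExcelFilePath and User::SourceColumn (all placeholder names):
// C# inside the Script Task's ScriptMain; requires using System; using System.Data;
// using System.Data.OleDb; and using System.Linq;
public void Main()
{
    string excelPath = Dts.Variables["User::ExcelFilePath"].Value.ToString();
    string connStr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + excelPath +
                     ";Extended Properties='Excel 12.0 Xml;HDR=YES'";

    bool hasColumn1;
    using (var conn = new OleDbConnection(connStr))
    {
        conn.Open();
        // Read the worksheet's column metadata (the header row).
        DataTable cols = conn.GetOleDbSchemaTable(
            OleDbSchemaGuid.Columns,
            new object[] { null, null, "Sheet1$", null });

        hasColumn1 = cols.Rows.Cast<DataRow>().Any(
            r => string.Equals(r["COLUMN_NAME"].ToString(), "Column_1",
                               StringComparison.OrdinalIgnoreCase));
    }

    // Fall back to Column_2 when Column_1 is missing.
    Dts.Variables["User::SourceColumn"].Value = hasColumn1 ? "Column_1" : "Column_2";
    Dts.TaskResult = (int)ScriptResults.Success;
}
The downstream source (a Script Component, or an Excel source driven by a variable) can then read User::SourceColumn to decide which column to pull.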
SSIS is metadata-based and does not support dynamic metadata; however, you can use a Script Component, as @nitin-raj suggested, to handle all known source columns. There is a good post below on how it can be done.
Dynamic File Connections
If you have many such files with varying columns, it is better to create a custom component. However, you cannot have dynamic metadata even with a custom component; the set of columns must be known to SSIS up front.
If the list of columns keeps changing and you cannot know the expected columns in advance, you are better off handling the entire thing in C#/VB.NET using a Script Task in the control flow.
As a best practice, because SSIS metadata is static, any data quality and formatting issues in the source files should be corrected before the SSIS data flow task runs.
I have seen this situation before, and there is a fairly simple fix. At the beginning of your SSIS package, use a File System Task to create a copy of the source Excel file, and then run a C# script or a PowerShell script to fix the columns: if Column_1 does not exist, it is either added at the appropriate spot in the Excel file or, if the column name is simply wrong, it is corrected.
As a result, you will not need to refresh your SSIS metadata every time it fails. This is a standard data standardization practice.
The easiest way is to add two data flow tasks, one for each Excel source SELECT statement, and use precedence constraints to execute the second data flow only when the first one fails.
The disadvantage of this approach is that if the first data flow task fails for any other reason, the second one will still run. You would need some more advanced error handling to check whether the error was actually thrown because of the missing column.
But if I had a similar situation, I would use a Script Task to check whether the column exists and build the SQL command dynamically. Note that this SQL command must always return the same metadata (you must use aliases).
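Roughly like this; the variable names (User::HasColumn1, User::ExcelQuery), the alias SourceValue, and the sheet name are placeholders:
// C# inside a Script Task: build the Excel SELECT so the data flow metadata never changes.
public void Main()
{
    // Assume an earlier Script Task step set User::HasColumn1 from the file's header row.
    bool hasColumn1 = (bool)Dts.Variables["User::HasColumn1"].Value;

    string sourceColumn = hasColumn1 ? "[Column_1]" : "[Column_2]";

    // Alias the column so the Excel Source always exposes the same output column name.
    Dts.Variables["User::ExcelQuery"].Value =
        "SELECT " + sourceColumn + " AS SourceValue FROM [Sheet1$]";

    Dts.TaskResult = (int)ScriptResults.Success;
}
The Excel Source is then set to "SQL command from variable" pointing at User::ExcelQuery, so whichever physical column exists, the data flow always sees a column called SourceValue.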
Helpful links
Overview of SSIS Precedence Constraints
Working with Precedence Constraints in SQL Server Integration Services
Precedence Constraints
I am new to Informatica. I am using Informatica 10.1.0, and I have created a workflow like the one below.
How can I make this workflow execute the Informatica repository SQL below and fail the workflow if the count is greater than 0?
select count(*) as cnt
from REP_TASK_INST_RUN
where workflow_run_id = (select max(workflow_run_id) from OPB_WFLOW_RUN where WORKFLOW_NAME = 'wf_Load_Customer_Transactions')
and RUN_STATUS_CODE <> 0
You have shared the Workflow Manager view. In the Informatica Designer, you can create a mapping with your table as the source. In the Source Qualifier, add the query as an override, and load the result into a designated target. After that, you can create the workflow for your mapping and run it.
https://www.guru99.com/mappings-informatica.html
The above link should be a good reference.
Once you have a functional workflow, you can add a Control task for the above check, so that the workflow fails if the count of target rows is < 1.
Design an Informatica mapping:
- The Source Qualifier (SQ) contains the query you provided, and its output is passed to an Expression transformation. Create a mapping variable that stores this count.
- Within the workflow, use the post-session workflow variable assignment to copy the mapping variable into a workflow variable.
- Create an Assignment task (or a link condition) that checks the value of this workflow variable; if the count is > 0, use a Control task to fail the workflow, as outlined below.
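The pieces fit together roughly like this; the variable names are placeholders, and CNT is the port fed by the cnt column of your query:
-- Expression transformation: push the count into a mapping variable
SETVARIABLE($$FAILED_TASK_CNT, CNT)
-- Session task > Components tab > post-session on success variable assignment
$$WF_FAILED_TASK_CNT = $$FAILED_TASK_CNT
-- Link condition from the session (or Assignment task) to the Control task
$$WF_FAILED_TASK_CNT > 0
-- Control task option
Fail Top-Level Workflow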
One way would be to create a mapping with your query inside a SQL transformation. Set it up to write to either a flat file or a table in the database, and add a Filter transformation so that the count is written to the target only if it is greater than 0.
Then, in the workflow, set up a session for that mapping and link it to a Control Task that fails the workflow when $TgtSuccessRows is >= 1 (i.e., a row was written because the count was greater than 0).
You can create a dummy session to hold your query, then link it to the next session in the workflow. On the link you can put a condition such as $count = 0, so the next session runs only when the count is 0.
I've been developing a data pipeline in SSIS on an on-premises VM during my internship and was tasked with gathering data from Marketo (see https://www.marketo.com/). The package runs without error: it starts with a truncate-table Execute SQL Task, followed by five data flow tasks that gather data from different sources within Marketo and move it into staging tables in SQL Server, and concludes with an Execute SQL Task that loads the processing tables with only the new data.
The problem I'm having: my project lead wants this process automated to run daily. I have found plenty of resources online that show how to automate an SSIS package, but my package requires user input for the Marketo source, which needs a time frame from which to gather data.
Is it possible to automate this package to run daily even though user input is required? I was thinking there may be a way to increment the start and end dates by one day on each run (so the start date could be 2018-07-01 and the end date 2018-07-02, each moving forward one day), so the package can run by itself. Thank you in advance for any help!
Since you are automating your extract, you presumably have a predefined schedule on which to pull the data. From that schedule, you should be able to work out your start and end dates based on the date the package runs.
In SSIS there are numerous ways to achieve this, depending on the data source and your connection method. If you are using a Script Task, you can simply calculate the required dates in your .NET code (see the sketch after the expression example below). Another alternative is to use variables whose value is the result of an expression, such as:
DATEADD("Month", -1, GETDATE())
Assuming you schedule your extract to run on the first day of the month, the expression above returns the first day of the previous month; for a daily run, DATEADD("Day", -1, GETDATE()) would give you yesterday's date in the same way.
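If you prefer the Script Task route, a minimal sketch could look like this; the package variable names User::StartDate and User::EndDate are assumptions and should match however your Marketo source reads its time frame:
// C# inside the Script Task's ScriptMain: compute a rolling one-day window.
public void Main()
{
    DateTime runDate = DateTime.Today;

    // Pull the previous full day each time the package runs.
    Dts.Variables["User::StartDate"].Value = runDate.AddDays(-1);
    Dts.Variables["User::EndDate"].Value = runDate;

    Dts.TaskResult = (int)ScriptResults.Success;
}
Scheduling the package through SQL Server Agent (or any other scheduler) then gives you the daily run without any manual input.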