Could anyone please help fix this expression?
Under the Filter tab I am trying to select values from a multi-value parameter,
where I have to add the value 'C160' if the left part of the column value is 'C15' and, in the same manner,
I have to ignore the value 'C160' if the column value is 'C16'.
--
My expression follows the pattern IIF(condition, true_part, false_part):
=IIF(Parameters!Site.Value="C15",Parameters!Site.Value AND "C160",(IIF(Parameters!Site.Value="C16",Parameters!Site.Value AND NOT "C160",Parameters!Site.Value)))
Error: Failed to evaluate the FilterValues.
Thanks
If I understand your requirements correctly, this should do the trick. You can set the filter expression to an expression - it doesn't just have to be a column value.
EXPRESSION (Text)
=Left(Fields!Column.Value,3)
OPERATOR
[=]
VALUE
=Parameters!Site.Value
EDIT
You could add a calculated field to the dataset like so:
Field Name Field Source
========== ============
Site2 =IIF(Fields!Site.Value="C160","C15",LEFT(Fields!Site.Value,3))
And have your Site parameter use the following Available Values:
Label Value
===== =====
C15 C15
C16 C16
Your filter would then look like this:
EXPRESSION (Text)
=Fields!Site2.Value
OPERATOR
[=]
VALUE
=Parameters!Site.Value
HOWEVER
my two pence worth
It seems like business rules are driving this requirement. I would strongly recommend AGAINST calculating business rules in report expressions. Business rules change over time and it can be cumbersome to change rdl files. Better to have one central location where rules like this are stored. Lookup tables or UDFs are ideal, or at worst a stored procedure or embedded query will do.
It is much better to put this in the dataset if you have the ability to change it. You then have the ability to annotate the code which creates this grouping to include a reason WHY C160 is thought to be the same as C15.
Related
I want to create 3 new columns with names referring to some date variables, and it is not possible to write them like this. The first column name should be YEAR2022, the 2nd YEAR2021 and the 3rd YEAR2020.
Can you please give me an idea how to write this?
select column1*2 as CONCAT('YEAR',YEAR(DATEADD(YY,0,GETDATE()))),
column1*3 as CONCAT('YEAR',YEAR(DATEADD(YY,-1,GETDATE()))),
column1*4 as CONCAT('YEAR',YEAR(DATEADD(YY,-2,GETDATE()))) FROM table1
The error that I get is:
Incorrect syntax near 'YEAR'.
As I mentioned in my comment, an alias cannot be an expression; it has to be a literal. As such, the expressions you have tried to use are not allowed and generate syntax errors.
Normally, this sort of requirement is a sign of a design flaw, or that you're trying to do something in the SQL layer that should be done in the presentation layer. I'm going to assume the latter here. As a result, you should instead use static names for the columns, and then control the column names in your presentation layer, so that when they are presented to the end user they have the names you want (for today that would be YEAR2022, YEAR2021 and YEAR2020). Your query would then just look like this:
select column1*2 AS ThisYear,
column1*3 AS YearPrior,
column1*4 AS Year2Prior
FROM dbo.table1;
How you change the names of the columns in your presentation layer is a completely different question (we don't even know what language you are using to write your application). If you want to ask about that, I suggest asking a new question (only if after you've adequately researched the problem and failed to find a solution), so that we can help you there.
Note that though you can achieve a solution via dynamic SQL, I would strongly suggest it is the wrong solution here, which is why I haven't given an answer providing a demonstration.
In a Mapping Data Flow activity, I have a bunch of tables coming from an unprocessed area of a storage account, and I aim to select only some of these columns for the next more-processed-area. In selecting the columns, I need to translate the column names to something more intuitive and/or lowercase the name. I intend to do this using parameters so I only need to change it in one spot if I need to make adjustments.
I managed the "easy" part - whitelisting relevant column names and making these lower case. But suppose I want to rename the columns according to a dictionary where column "abc" becomes "def" and "ghi" becomes "jkl". I am trying to do this in a Derived Column Transformation using a column pattern. I've made a map parameter (which I'm not sure is correct syntax):
['abc'->'def', 'ghi' -> 'jkl']
I think I need to find the index of the matching key in the translation map and then replace it with the correct index in the values array, but it doesn't seem like there's an easy way to extract the index from the functions available at https://learn.microsoft.com/en-us/azure/data-factory/data-flow-expression-functions.
This is what I have so far, partially pseudo-code (index):
replace($$,find(keys($translation),#item == $$),values($translation)[*index*(keys($translation),#item == $$)])
I've been stuck on this for too long, so I was hoping someone could give me some ideas on how to proceed.
Any help would be much, much appreciated.
I created a simple Data Flow to test.
Then I tested several expressions in a DerivedColumn transformation:
1. In a column pattern, the following expression doesn't work: replace($$,find(keys($translation),toString(#item) == $$),values($translation)[mapIf(keys($translation),toString(#item) == $$,#index)[1]]). Through the expression mapIf(keys($translation), 1 == 1, concat($$, $$)), I found that $$ doesn't work inside the mapIf() function (it returns abc and ghi, while the expected values are abcabc and ghighi). I'm not sure whether this is a bug or whether the ADF team designed it this way.
2. Then I skipped the column pattern and just added columns to try: replace(col1,find(keys($translation),toString(#item) == col1),values($translation)[mapIf(keys($translation),toString(#item) == col1,#index)[1]]) and
replace(col2,find(keys($translation),toString(#item) == col2),values($translation)[mapIf(keys($translation),toString(#item) == col2,#index)[1]])
This returns the correct values.
Conclusion:
Don't use a column pattern; just add columns, then use this expression: replace(columnName,find(keys($translation),toString(#item) == columnName),values($translation)[mapIf(keys($translation),toString(#item) == columnName,#index)[1]])
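For readers trying to follow what that expression actually does, here is the same lookup logic sketched in Python (illustrative only: keys(), values() and mapIf() are ADF expression functions, and the list names below are stand-ins for keys($translation) and values($translation)):

```python
# Hypothetical stand-ins for keys($translation) and values($translation).
keys = ["abc", "ghi"]
values = ["def", "jkl"]

def rename(column_name: str) -> str:
    """Replace a matching key in the column name with its mapped value."""
    for i, key in enumerate(keys):
        if key == column_name:  # plays the role of mapIf(..., #item == columnName)
            return column_name.replace(key, values[i])
    return column_name          # no match: keep the original name
```

So rename("abc") gives "def", while an unmapped column name passes through unchanged.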
I want to describe a problem that we are facing in our project regarding input parameter filtering.
Problem:
We have 5 input parameters in our SAP HANA view with the default value '*', to make it possible to select all values.
Now, when we select data from this HANA view into our table function using script, we pass the input parameter values using the "PLACEHOLDER" statement, but with this statement '*' is not working (it returns no results).
More importantly, if I hard-code the value as '*', it shows the data correctly, but if I use a variable (that holds the '*' value), it shows no data.
For example:
For the plant (WERKS) filter, if I use the constant '*', it gives me all data.
For the plant (WERKS) filter, if I use a variable (ZIN_WERKS) that holds the '*' value passed from the input screen of the final view, it gives me no data.
I checked that the variable is correctly filled with the '*' value, but there is still no data, which we are not able to understand.
An additional question: should we always give the default value '*' for input parameters? Because if it is blank or empty, it always filters on blank values, and the value help can also not be generated.
Have you ever encountered these issues? They seem like very basic points in SAP HANA…
We would really appreciate any help/hints regarding these issues.
This is indeed a question that has been asked already. The point here is that you seem to want to mimic the selection behaviour from SAP Netweaver based applications in your HANA models.
One difference to consider here is that the wildcard character on SQL databases like HANA is not * but %.
Also, the wildcard search only works when your model uses the LIKE comparison, not with = (equals), >, <, or any other range comparison.
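If the '*' convention is coming from users of ABAP-style selection screens, one option is to translate the pattern before handing the value to the PLACEHOLDER clause. A minimal sketch of that translation (assuming you can pre-process the value in your calling code; in ABAP patterns * matches any sequence and + a single character, while SQL LIKE uses % and _):

```python
def to_sql_like(pattern: str) -> str:
    """Translate an ABAP-style wildcard pattern into a SQL LIKE pattern."""
    # '*' (any sequence) -> '%', '+' (any single character) -> '_'
    return pattern.replace("*", "%").replace("+", "_")
```

With this, to_sql_like("*") returns "%", which LIKE matches against any non-NULL value, so the "select everything" default keeps working.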
In short: if you want to have this specific behaviour just like in SAP Netweaver, you will have to build your own scripted view and explicitly test for which parameters had been provided and which are "INITIAL".
One useful feature for this scenario is the APPLY_FILTER() function in SQLScript, that allows to apply dynamic filters in information models.
More on that can be found in the modelling guide.
Hope you can help me out and inspire me here.
I am looking to create a search function that searches through a DataGridView column looking for key words and then filters based on the result. I have done this already and it works fine. However, this is based on the user entering strings into the textbox, separating them with commas. An example would be:
A DataGridViewColumn with 3 values
ERROR_QUEUE_MAY05
ERROR_QUEUE_MAY06
ORDER_QUEUE_JAN01
The User then enters the search criteria
ORDER, 06
And the result would be
ERROR_QUEUE_MAY06
ORDER_QUEUE_JAN01
As you can see, the filter has used an OR to filter the column.
I want the user to be able to use brackets () and also AND statements, as you would in a SQL statement, so they could use, for example:
("ORDER" AND "MAY") OR "03"
This would filter the results to show anything with ORDER and MAY in the title, or 03.
Has anyone done anything like this in the past, or have any ideas about going about it?
Thanks All
One approach is to scan the string from left to right, then recursively call the evaluator every time parentheses are encountered. After a recursive call, replace the parenthesized expression with "true" or "false". If you intend to give AND higher priority than OR, you can handle that recursively as well.
I have a POCO class with the ResultColumn attribute set, and when I do a Single&lt;Entity&gt;() call, my result column isn't mapped. I've set my column to be a result column because its value should always be generated by the SQL column's default constraint. I don't want this column to be inserted or updated from the business layer. What I'm trying to say is that my column's type is a simple SQL data type and not a related entity type (I've seen ResultColumn used mostly on those).
Looking at code I can see this line in PetaPoco:
// Build column list for automatic select
QueryColumns = ( from c in Columns
where !c.Value.ResultColumn
select c.Key
).ToArray();
Why are result columns excluded from the automatic select statement? As I understand it, their nature is to be read-only, so they should be used in selects only. I can see the reasoning when a column is actually a related entity type (complex). OK, but then we should have a separate attribute like ComputedColumnAttribute that would always be returned in selects but never used in inserts or updates…
Why did the PetaPoco team decide to omit result columns from selects, then?
How am I supposed to read result columns?
I can't answer why the creator did not add them to auto-selects, though I would assume it's because your particular use-case is not the main one that they were considering. If you look at the examples and explanation for that feature on their site, it's more geared towards extra columns you bring back in a join or calculation (like maybe a description from a lookup table for a code value). In these situations, you could not have them automatically added to the select because they are not part of the underlying table.
So if you want to use that attribute, and get a value for the property, you'll have to use your own manual select statement rather than relying on the auto-select.
Of course, the beauty of using PetaPoco is that you can easily modify it to suit your needs, by either creating a new attribute, like you suggest above, or modifying the code you showed to not exclude those fields from the select (assuming you are not using ResultColumn in other join-type situations).