Dynamically Modify Internal Table Values by Data Type - abap

This is a similar question to the one I posted last week.
I have an internal table based on a dictionary structure with a format similar to the following:
+---------+--------+---------+--------+---------+--------+-----+
| column1 | delim1 | column3 | delim2 | column5 | delim3 | ... |
+---------+--------+---------+--------+---------+--------+-----+
| value1  |        | value1  |        | value1  |        | ... |
| value2  |        | value2  |        | value2  |        | ... |
| value3  |        | value3  |        | value3  |        | ... |
+---------+--------+---------+--------+---------+--------+-----+
The delim* columns are all of type delim, and the typing of the non-delimiter columns is irrelevant (assuming none of them is also of type delim).
The data in this table is obtained in a single statement:
SELECT * FROM <table_name> INTO CORRESPONDING FIELDS OF TABLE <internal_table_name>.
Thus, I have a completely full table except for the delimiter values, which are determined by user input on the selection screen (that is, we cannot rely on them always being a comma or any other common delimiter).
I'd like to find a way to dynamically set all of the values of type delim to some input for every row.
Obviously I could just hardcode the delimiter names and loop over the table setting all of them, but that's not dynamic. Unfortunately I can't bank on a simple API.
What I've tried (this doesn't work, and it's such a bad technique that I felt dirty just writing it):
DATA lt_fields TYPE TABLE OF rollname.

SELECT fieldname FROM dd03l
  INTO TABLE lt_fields
  WHERE tabname = '<table_name>'
    AND as4local = 'A'
    AND rollname = 'DELIM'.

LOOP AT lt_output ASSIGNING FIELD-SYMBOL(<fs>).
  LOOP AT lt_fields ASSIGNING FIELD-SYMBOL(<fs2>).
    <fs>-<fs2> = '|'.
  ENDLOOP.
ENDLOOP.
Once again, I'm not set in my ways and would switch to another approach altogether if I believe it's better.

Although I still believe you're barking up the wrong tree with the entire approach, you have been pointed in the right direction both here and in the previous question: use ASSIGN COMPONENT to resolve the component by name at runtime instead of the invalid <fs>-<fs2> syntax.
LOOP AT lt_output ASSIGNING FIELD-SYMBOL(<fs>).
  LOOP AT lt_fields ASSIGNING FIELD-SYMBOL(<fs2>).
    ASSIGN COMPONENT <fs2> OF STRUCTURE <fs> TO FIELD-SYMBOL(<fs3>). " might just as well continue with the write-only naming conventions
    IF sy-subrc = 0.
      <fs3> = '|'.
    ENDIF.
  ENDLOOP.
ENDLOOP.

Logic behind show/hide rows in charts

I am confused about the logic behind the showing and hiding of rows in QlikView/Qlik Sense charts. Here is what I thought was the case:
If, for some row, the value of a dimension is NULL, and for that dimension "Suppress NULL" is on (QV) or "Include NULLs" is off (QS), then the row is not shown.
If, for some row, all its expressions/measures are zero or NULL, and the object-level setting "Suppress Zero Values" is on (QV) or "Include Zero Values" is off (QS), then the row is not shown.
All other rows are shown.
However, I ran into a confusing example of a measure which causes rows to disappear even though I have suppress zero values off / include zero values on. Here is a small script with some sample customers and their consultations:
customer:
LOAD * INLINE [
custcode,descr
C1,pan1
C2,pan2
C3,pan3
];
consultation:
LOAD * INLINE [
custcode,grp,val,x
C2,eye,sth1,1
C2,age,20,1
C3,legs,sth2,1
C3,skin,sth5,1
C3,age,20,1
C3,age,30,1
];
As you can see, custcode C1 has no consultation lines. I proceed to create a straight table with custcode as dimension and sum(x) as measure. Here is what I get:
+----------+--------+
| custcode | sum(x) |
+----------+--------+
| C1       | 0      |
| C2       | 2      |
| C3       | 4      |
+----------+--------+
Everything is fine until now. Sure enough, I haven't suppressed zero values: if I had, the C1 row would be removed. Also, note that no aggr is needed, for whatever reason.
Now, let's add a set analysis to that measure to only sum x for grp='age':
sum({<grp={'age'}>}x)
This hides row C1 from sight:
+----------+-----------------------+
| custcode | sum({<grp={'age'}>}x) |
+----------+-----------------------+
| C2       | 1                     |
| C3       | 2                     |
+----------+-----------------------+
Question 1: Why does set analysis hide the row in this case?
Adding an additional measure with a constant value of 1 changes nothing, so we can be sure this has nothing to do with the zero-values settings.
Now, let us add this measure:
aggr(min(0),[custcode])
The row comes back, even though the new measure is NULL:
+----------+-----------------------+-------------------------+
| custcode | sum({<grp={'age'}>}x) | aggr(min(0),[custcode]) |
+----------+-----------------------+-------------------------+
| C1       | 0                     | -                       |
| C2       | 1                     | -                       |
| C3       | 2                     | -                       |
+----------+-----------------------+-------------------------+
Now, about aggr, here are two strong reasons why I think it should not be necessary:
If the set-analysis measure were wrong and needed to include aggr in some way, that would still be no reason for the engine to hide the rows - it would just return NULL for having an invalid formula. Besides, this measure actually works correctly, as we can see.
custcode is the field which creates the association between the two tables. But this doesn't seem to be what causes rows to unhide; in fact, I get the same result even with aggr(min(0),[]):
+----------+-----------------------+-----------------+
| custcode | sum({<grp={'age'}>}x) | aggr(min(0),[]) |
+----------+-----------------------+-----------------+
| C1       | 0                     | -               |
| C2       | 1                     | -               |
| C3       | 2                     | -               |
+----------+-----------------------+-----------------+
Question 2: Why does this strange aggr measure unhide the row?
Question 1:
When you only have the one expression and add the set analysis, it is like telling Qlik to select the set value.
(The original answer illustrated this with two screenshots: picture 1 with no selections, picture 2 with the selection applied.)
So it's not that the answer is NULL; it is that the associative engine has reduced that data out of the data set based on the selection / set rule.
The aggr() should definitely not be necessary there. The dimensionality of the chart will take care of the aggregation across the dimension; aggr() is only needed when you want an aggregation that is not controlled by the dimensions.
I do not understand what your aggr(min(0),[]) is trying to achieve, and I don't get the same result as your table. The expression just creates NULLs because it can't evaluate.
If you want to see all members of the dimension, you should tick "Show all values" on the dimensions tab rather than trying to change the expressions.

Recursive self join over file data

I know there are many questions about recursive self joins, but they mostly assume a hierarchical data structure as follows:
ID | Value | Parent id
-----------------------------
But I was wondering if there is a way to do this in a specific case that I have, where I don't necessarily have a parent id. My data will look like this when I initially load the file:
ID | Line |
-------------------------
1 | 3,Formula,1,2,3,4,...
2 | *,record,abc,efg,hij,...
3 | ,,1,x,y,z,...
4 | ,,2,q,r,s,...
5 | 3,Formula,5,6,7,8,...
6 | *,record,lmn,opq,rst,...
7 | ,,1,t,u,v,...
8 | ,,2,l,m,n,...
Essentially, it's a CSV file where each row in the table is a line in the file. Lines 1 and 5 identify an object header, and lines 3, 4, 7, and 8 identify the rows belonging to that object. An object header line can hold only 40 attributes, which is why the object is broken up across multiple sections in the CSV file.
What I'd like to do is take the table, separate out the record # column, and join it with itself multiple times so it achieves something like this:
ID | Line |
-------------------------
1 | 3,Formula,1,2,3,4,5,6,7,8,...
2 | *,record,abc,efg,hij,lmn,opq,rst
3 | ,,1,x,y,z,t,u,v,...
4 | ,,2,q,r,s,l,m,n,...
I know it's probably possible; I'm just not sure where to start. My initial idea was to create a view that separates out the first and second columns, and use that view as a way of joining repeatedly on those two columns. However, I have some problems:
I don't know how many sections will occur in the file for the same object.
The file can contain other objects as well, so joining on the first two columns would be problematic if you have something like:
ID | Line |
-------------------------
1 | 3,Formula,1,2,3,4,...
2 | *,record,abc,efg,hij,...
3 | ,,1,x,y,z,...
4 | ,,2,q,r,s,...
5 | 3,Formula,5,6,7,8,...
6 | *,record,lmn,opq,rst,...
7 | ,,1,t,u,v,...
8 | ,,2,l,m,n,...
9 | ,4,Data,1,2,3,4,...
10 | *,record,lmn,opq,rst,...
11 | ,,1,t,u,v,...
In the above case, my plan could join rows from the Data object in row 9 with the first rows of the Formula object by matching the record value of 1.
UPDATE
I know this is somewhat confusing. I tried doing this with C# a while back, but I basically had to write a recursive descent parser for the specific file format, and it simply took too long because I had to get the data into the database afterwards, which was too much for Entity Framework. It was taking hours just to convert one file, since these files are excessively large.
Either way, @Nolan Shang has the closest result to what I want. The only difference is this (sorry for the bad formatting):
+----+------------+--------------------------+----------------------------------+
| ID | header     | x                        | value                            |
+----+------------+--------------------------+----------------------------------+
| 1  | 3,Formula, | ,1,2,3,4,5,6,7,8         | 3,Formula,1,2,3,4,5,6,7,8        |
| 2  | ,,         | ,1,x,y,z,t,u,v           | ,1,x,y,z,t,u,v                   |
| 3  | ,,         | ,2,q,r,s,l,m,n           | ,2,q,r,s,l,m,n                   |
| 4  | *,record,  | ,abc,efg,hij,lmn,opq,rst | *,record,abc,efg,hij,lmn,opq,rst |
| 5  | ,4,        | ,Data,1,2,3,4            | ,4,Data,1,2,3,4                  |
| 6  | *,record,  | ,lmn,opq,rst             | ,lmn,opq,rst                     |
| 7  | ,,         | ,1,t,u,v                 | ,1,t,u,v                         |
+----+------------+--------------------------+----------------------------------+
I agree that it would be better to export this to a scripting language and do it there. This will be a lot of work in TSQL.
You've intimated that there are other possible scenarios you haven't shown, so I obviously can't give a comprehensive solution. I'm guessing this isn't something you need to do quickly on a repeated basis. More of a one-time transformation, so performance isn't an issue.
One approach would be to do a LEFT JOIN to a hard-coded table of the possible identifying sub-strings like:
3,Formula,
*,record,
,,1,
,,2,
,4,Data,
Looks like it pretty much has to be human-selected and hard-coded because I can't find a reliable pattern that can be used to SELECT only these sub-strings.
Then you SELECT from this artificially-created table (or derived table, or CTE) and LEFT JOIN to your actual table with a LIKE to get all the rows that use each of these values as their starting substring, strip out the starting characters to get the rest of the string, and use the STUFF..FOR XML trick to build the desired Line.
How you get the ID column depends on what you want. For instance, in your second example, I don't know what ID you want for the ,4,Data,... line: do you want 5 because that's the next number in the results, or 9 because that's the ID of the first occurrence of that sub-string? Code accordingly. If you want 5, it's a ROW_NUMBER(). If you want 9, you can add an ID column to the artificial table you created at the start of this approach.
BTW, there's really nothing recursive about what you need done, so if you're still thinking in those terms, now would be a good time to stop. This is more of a "Group Concatenation" problem.
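A minimal sketch of that approach, with everything reduced to toy data (the @prefixes and @lines names, and the shortened sample rows, are placeholders invented here):
-- Hand-picked prefixes that identify each logical group of lines.
DECLARE @prefixes TABLE (prefix varchar(20));
INSERT INTO @prefixes VALUES ('3,Formula,'), (',,1,');

DECLARE @lines TABLE (ID int, Line varchar(8000));
INSERT INTO @lines VALUES
    (1, '3,Formula,1,2,3,4'), (3, ',,1,x,y,z'),
    (5, '3,Formula,5,6,7,8'), (7, ',,1,t,u,v');

-- For each prefix, gather the remainder of every matching line
-- and glue the pieces back together with the STUFF..FOR XML trick.
SELECT p.prefix + STUFF((
           SELECT ',' + SUBSTRING(l.Line, LEN(p.prefix) + 1, 8000)
           FROM @lines AS l
           WHERE l.Line LIKE p.prefix + '%'
           ORDER BY l.ID
           FOR XML PATH('')
       ), 1, 1, '') AS Line
FROM @prefixes AS p;
Rows 1 and 5 collapse into 3,Formula,1,2,3,4,5,6,7,8, and rows 3 and 7 into ,,1,x,y,z,t,u,v.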
Here is a sample, though it differs somewhat from what you need. That is because I use the value up to the second comma as the group header, so ,,1 and ,,2 are treated as the same group; if you can use a parent id to indicate a group, that would be better:
DECLARE @testdata TABLE(ID int, Line varchar(8000))

INSERT INTO @testdata
SELECT 1,'3,Formula,1,2,3,4,...' UNION ALL
SELECT 2,'*,record,abc,efg,hij,...' UNION ALL
SELECT 3,',,1,x,y,z,...' UNION ALL
SELECT 4,',,2,q,r,s,...' UNION ALL
SELECT 5,'3,Formula,5,6,7,8,...' UNION ALL
SELECT 6,'*,record,lmn,opq,rst,...' UNION ALL
SELECT 7,',,1,t,u,v,...' UNION ALL
SELECT 8,',,2,l,m,n,...' UNION ALL
SELECT 9,',4,Data,1,2,3,4,...' UNION ALL
SELECT 10,'*,record,lmn,opq,rst,...' UNION ALL
SELECT 11,',,1,t,u,v,...'

;WITH t AS(
    -- header = everything up to and including the second comma;
    -- data   = the rest of the line, with the trailing ',...' stripped
    SELECT *, REPLACE(SUBSTRING(t.Line, LEN(c.header)+1, LEN(t.Line)), ',...', '') AS data
    FROM @testdata AS t
    CROSS APPLY(VALUES(LEFT(t.Line, CHARINDEX(',', t.Line, CHARINDEX(',', t.Line)+1)))) c(header)
)
SELECT MIN(ID) AS ID, t.header, c.x, t.header + STUFF(c.x, 1, 1, '') AS value
FROM t
OUTER APPLY(SELECT ','+tb.data FROM t AS tb WHERE tb.header = t.header FOR XML PATH('')) c(x)
GROUP BY t.header, c.x
+----+------------+------------------------------------------+-----------------------------------------------+
| ID | header | x | value |
+----+------------+------------------------------------------+-----------------------------------------------+
| 1 | 3,Formula, | ,1,2,3,4,5,6,7,8 | 3,Formula,1,2,3,4,5,6,7,8 |
| 3 | ,, | ,1,x,y,z,2,q,r,s,1,t,u,v,2,l,m,n,1,t,u,v | ,,1,x,y,z,2,q,r,s,1,t,u,v,2,l,m,n,1,t,u,v |
| 2 | *,record, | ,abc,efg,hij,lmn,opq,rst,lmn,opq,rst | *,record,abc,efg,hij,lmn,opq,rst,lmn,opq,rst |
| 9 | ,4, | ,Data,1,2,3,4 | ,4,Data,1,2,3,4 |
+----+------------+------------------------------------------+-----------------------------------------------+
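As a side note, on SQL Server 2017 or later the same group concatenation can be written with STRING_AGG instead of the STUFF..FOR XML trick; a sketch against the same @testdata table and header logic:
-- SQL Server 2017+ variant: STRING_AGG replaces STUFF..FOR XML.
WITH t AS(
    SELECT td.ID,
           c.header,
           REPLACE(SUBSTRING(td.Line, LEN(c.header) + 1, LEN(td.Line)), ',...', '') AS data
    FROM @testdata AS td
    CROSS APPLY(VALUES(LEFT(td.Line, CHARINDEX(',', td.Line, CHARINDEX(',', td.Line) + 1)))) c(header)
)
SELECT MIN(ID) AS ID,
       header,
       header + STRING_AGG(data, ',') WITHIN GROUP (ORDER BY ID) AS value
FROM t
GROUP BY header;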

Converting a number value in a field to a word

I have a SQL query that I'm building and need to convert a number to a word.
The field is titled Type and the values are 1 or 2. I need to convert the 1 to display as Problem and the 2 to display as Resolution.
How would I go about doing this? I built this as an expression in SQL Server Data Tools, but we are going in a different direction and need to add it to the query instead and display the report another way.
Thanks!
I'd recommend you have a second table with your ID/Name combination as a lookup and do a JOIN.
That way, as new types come in, you only have to change the name, and not the code.
That said, if you do it inline, the CASE syntax would be:
CASE WHEN Type = 1 THEN 'Problem'
WHEN Type = 2 THEN 'Resolution' END
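In context that would sit in the SELECT list; a quick sketch, where the Tickets table name is a placeholder since the question doesn't show the real one:
SELECT t.ID,
       CASE WHEN t.Type = 1 THEN 'Problem'
            WHEN t.Type = 2 THEN 'Resolution'
       END AS TypeName   -- NULL for any other Type value
FROM Tickets AS t;       -- 'Tickets' is a placeholder table name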
You've essentially built the first portion of a normalized database; you just haven't completed the second portion. The numbers 1 and 2 can be foreign keys to a second data table, linking to that table's unique auto-incremented ID field:
+-------+ +----+-------------+
| Type | | ID | Description |
+-------+ +----+-------------+
| 1 | | 1 | Problem |
| 2 | | 2 | Resolution |
| 2 | +----+-------------+
| 1 |
+-------+
With this schema, you can then query the data like so:
SELECT `table2`.`description`
FROM `table2`
INNER JOIN `table1`
ON `table1`.`type` = `table2`.`id`
WHERE `table1`.`type` = 1
This allows you to add more IDs and Descriptions to your second data table without having to rewrite a bunch of code.
Try the option below; note it only works with the 2 types from the question:
DECLARE @Type int = 1

SELECT
    CASE @Type WHEN 1 THEN 'Problem'
               ELSE 'Resolution'
    END AS Result

SET @Type = 2

SELECT
    CASE @Type WHEN 1 THEN 'Problem'
               ELSE 'Resolution'
    END AS Result

How to select with bitwise flag values in SQL

I have two tables in a SQL Server DB. One table BusinessOperations has various information about this business object, the other table OperationType is purely a bitwise flag table that looks like this:
| ID | Type | BitFlag |
| 1 | Basic-A | -2 |
| 2 | Basic | -1 |
| 3 | Type A | 0001 |
| 4 | Type B | 0002 |
| 5 | Type C | 0004 |
| 6 | Type D | 0008 |
| 7 | Type E | 0016 |
| 8 | Type F | 0032 |
The BitFlag column is a varchar column; the bitflags were inserted as strings like '0001'. In the BusinessOperations table, there's a column that the application using these tables updates based on what is selected in the application's UI. As an example, I have one operation with the Basic, Type A, and Type B types selected; the column value in BusinessOperations is 3.
Based on this, I am trying to write a query which will show me something like this:
| ID | Name | Description | OperationType |
| 1 | Test | Test | Basic, Type A, Type B |
Here is the actual layout of the BusinessOperations table (Basic-A and Basic are bit columns):
| ID | Name | Description | Basic-A | Basic | OperationType |
| 1 | Test | Test | 0 | 1 | 3 |
There is nothing that relates these two tables to each other, so I cannot perform a join. I am very inexperienced with bitwise operations and am at a loss on how exactly to structure my SELECT query, which is being used to analyze this data. I feel like it needs a STUFF or CASE, but I don't know how I can get it to show the type names rather than just the resultant BitFlag. What I have so far is simply:
SELECT ID, Name, Description, OperationType
FROM OperationType
ORDER BY ID
Since you're storing the flag in OperationType as a VARCHAR, the first thing you need to do is CONVERT or CAST the string to a number so we can do proper bitwise comparisons. I'm slightly unfamiliar with SQL Server, but you may need to remove the leading zeroes before the cast. Thus, the OperationType column in our desired SQL will look something like:
CONVERT(INT, BitFlag)
Then, comparing that to our OperationType column would look something like
CONVERT(INT, BitFlag) & OperationType
The full query would look something like (forgive my lack of SQL Server expertise again):
SELECT bo.ID, bo.Name, bo.Description, ot.Type
FROM BusinessOperations AS bo
JOIN OperationType AS ot
ON CONVERT(INT, ot.BitFlag) & OperationType <> 0
The above query will effectively get you a list of the OperationTypes, one row per matching type. If you absolutely need them on one line, see other answers to learn how to emulate something like GROUP_CONCAT in SQL Server; a sketch follows below. Disclaimer: joining on a bitmask gives no guarantee of performance.
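For illustration, a hedged sketch of that one-line variant using the STUFF..FOR XML pattern (it also skips the negative legacy flags, which are discussed next):
-- One row per operation, with all matching type names concatenated.
SELECT bo.ID, bo.Name, bo.Description,
       STUFF((
           SELECT ', ' + ot.Type
           FROM OperationType AS ot
           WHERE CONVERT(INT, ot.BitFlag) > 0                      -- skip the legacy -1/-2 rows
             AND CONVERT(INT, ot.BitFlag) & bo.OperationType <> 0  -- bitwise match
           ORDER BY ot.ID
           FOR XML PATH('')
       ), 1, 2, '') AS OperationTypes
FROM BusinessOperations AS bo;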
The other problem this answer does not solve is that of your legacy Basic and Basic-A fields. Personally, I'd do one of two things:
Remove them from the OperationType table and have the application tack the two on, based on the Basic and Basic-A columns as appropriate.
Put Basic and Basic-A as their own, positive flags in the OperationType table, and have the application populate the legacy columns as well as the OperationType column as appropriate.
As Aaron Bertrand has said in the comments, this really isn't an issue for Bitmasking at all. Having a many-many table that associates BusinessOperations.ID to OperationType.ID would solve all your problems in a much better way.
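A sketch of what that junction table could look like (the table name is an invention here, and it assumes ID is the key of each existing table):
-- One row per (operation, type) pair replaces the bitmask entirely.
CREATE TABLE BusinessOperationType (
    BusinessOperationID INT NOT NULL REFERENCES BusinessOperations(ID),
    OperationTypeID     INT NOT NULL REFERENCES OperationType(ID),
    PRIMARY KEY (BusinessOperationID, OperationTypeID)
);
Getting the list of types per operation then becomes a plain JOIN instead of a bitwise one.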
In the BusinessOperations table, the Basic-A and Basic fields are bit fields, which is just another way of saying the value can only be 1 or 0 - think of it like a boolean True/False. So, in your query you can check each of those to determine whether to include 'Basic-A' and 'Basic' or not.
The OperationType is probably an id which you can look up in the OperationType table to get the Type and BitFlag. Without understanding your data completely, it looks as if you could do a join for that part. Hopefully that is in the right general direction; if not, let me know.

How to get numbers arranged right to left in sql server SELECT statements

When performing SELECT statements that include number columns (prices, for example), the output is always left-aligned, which reduces readability. Therefore I'm looking for a method to right-align the output of number columns.
I already tried to use something like
SELECT ... SPACE(15-LEN(A.Nummer))+A.Nummer ...
FROM Artikel AS A ...
which gives close results, but depending on the font, not quite. An alternative would be to replace SPACE() with REPLICATE('_',...), but I don't really like the underscores in the output.
Besides that, this formula will fail on numbers with more than 15 digits, so I searched for a way to find the maximum length of the entries to make it safer, like
SELECT ... SPACE(MAX(A.Nummer)-LEN(A.Nummer))+A.Nummer ...
FROM Artikel AS A ...
but this does not work due to the aggregate character of the MAX-function.
So, what's the best way to achieve the right-justified order for the number-columns?
Thanks,
Rainer
To get your problem with the list box solved, have a look at this link: http://www.lebans.com/List_Combo.htm
I strongly believe that this type of adjustment should be made in the UI layer and not mixed in with data retrieval.
But to answer your original question, I have created a SQL Fiddle:
MS SQL Server 2008 Schema Setup:
CREATE TABLE dbo.some_numbers(n INT);
Create some example data:
INSERT INTO dbo.some_numbers
SELECT CHECKSUM(NEWID())
FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))X(x);
The following query uses the OVER() clause to specify that the MAX() is to be applied over all rows. The > and < that the result is wrapped in are just for illustration purposes and not required for the solution.
Query 1:
SELECT '>'+
SPACE(MAX(LEN(CAST(n AS VARCHAR(MAX))))OVER()-LEN(CAST(n AS VARCHAR(MAX))))+
CAST(n AS VARCHAR(MAX))+
'<'
FROM dbo.some_numbers SN;
Results:
| COLUMN_0 |
|---------------|
| >-1486993739< |
| > 1620287540< |
| >-1451542215< |
| >-1257364471< |
| > -819471559< |
| >-1364318127< |
| >-1190313739< |
| > 1682890896< |
| >-1050938840< |
| > 484064148< |
This query does a straight cast, to show the difference:
Query 2:
SELECT '>'+CAST(n AS VARCHAR(MAX))+'<'
FROM dbo.some_numbers SN;
Results:
| COLUMN_0 |
|---------------|
| >-1486993739< |
| >1620287540< |
| >-1451542215< |
| >-1257364471< |
| >-819471559< |
| >-1364318127< |
| >-1190313739< |
| >1682890896< |
| >-1050938840< |
| >484064148< |
With this query you still need to change the display font to a monospaced font like Courier New; otherwise, as you have noticed, the result will still be misaligned.
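As an aside, when a fixed maximum width is acceptable, T-SQL's STR() function right-justifies numbers directly; a sketch against the same table, with the width of 15 chosen arbitrarily:
-- STR() pads the number with spaces on the left up to the given width.
-- Values that do not fit in 15 characters come back as asterisks.
SELECT '>' + STR(n, 15) + '<'
FROM dbo.some_numbers AS SN;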