Error in MODIFY TABLE clause in ABAP code

I get an error at the MODIFY TABLE clause. What is wrong here?
I suspect it has something to do with having a unique key, colb.
DATA : BEGIN OF line1,
cola TYPE i,
colb TYPE i,
END OF line1.
DATA mytable1 LIKE HASHED TABLE OF line1 WITH UNIQUE KEY colb.
DO 4 TIMES.
line1-cola = sy-index.
line1-colb = sy-index ** 2.
INSERT line1 INTO TABLE mytable1.
ENDDO.
line1-colb = 80.
**MODIFY TABLE mytable1 FROM line1 TRANSPORTING colb
where (colb > 2) and (cola < 5).**
LOOP AT mytable1 INTO line1.
WRITE :/ line1-cola, line1-colb.
ENDLOOP.
Error:
".", "ASSIGNING <fs>", "REFERENCE INTO data-reference", or "ASSIGNING
<fs> CASTING" expected after "COLB".
Note: Error line is in bold. The error is shown in red.

This has been in the documentation for a very long time:
You may not use a key field as a TRANSPORTING field with HASHED or
SORTED tables.
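A minimal workaround sketch, assuming the snippet above is meant to turn the row inserted with colb = 16 into one with colb = 80: delete the old row and insert a new one with the changed key.
" remove the row with the old key, then insert it again with the new key value
READ TABLE mytable1 INTO line1 WITH TABLE KEY colb = 16.
IF sy-subrc = 0.
  DELETE TABLE mytable1 WITH TABLE KEY colb = 16.
  line1-colb = 80.
  INSERT line1 INTO TABLE mytable1.
ENDIF.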

@vwegert is right, you can't change the key values in HASHED and SORTED tables. On the other hand, your error is syntactical. If you change:
MODIFY TABLE mytable1 FROM line1 TRANSPORTING colb where colb > 2 and cola < 5.
to
MODIFY mytable1 FROM line1 TRANSPORTING colb where colb > 2 and cola < 5. "'TABLE' word omitted
it is also a syntax error; however, SAP will show you the error more clearly:
You cannot change the search key using "MODIFY". "COLB" is contained
in the table key of "MYTABLE1".
Check the documentation: when specifying a condition (i.e. including a WHERE) in a MODIFY statement, you must not use the word TABLE.
If you still want to modify the key field, then declare the internal table as STANDARD, like this:
DATA mytable1 LIKE STANDARD TABLE OF line1 WITH KEY colb.
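With that declaration the WHERE variant compiles, and since the key fields of a standard table may be changed, something like this then works:
" change colb of all rows matching the condition to the value in line1
line1-colb = 80.
MODIFY mytable1 FROM line1 TRANSPORTING colb WHERE colb > 2 AND cola < 5.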
Hope it helps.

Related

Inserting calculated column into existing table

I have a column with a string of text inside of it that varies in length. I want to return the first 4 characters in the string starting from position 1 and output them into a table only if there is a direct match. So if I want to return just PDFG and not return ADHR or any other combination then the below code works just fine for me.
use DB
select substring(description, 1, 4) as newcol
from table1
where substring(description, 1, 4) like '%ABCD%'
However, I would like to persist this calculated column into an existing table, so something like this:
use DB
alter table table1
add newcol as ("and then the rest of the code above")
I am not sure how to reorder my code to fit the new query; any help is appreciated.
Here is some sample data:
PDFG_2013 AHSDHDF
ADHR_2310 ADGDGEE
DATW_5142 NFBSAEE
The output from this should be stored in newcol within an existing table called table1. The only value in newcol from the sample data should be PDFG.
Use a case expression to return the first 4 characters if matching, otherwise null.
Alter table table1
add newcol as case when description like 'ABCD%' then substring(description, 1, 4) end
Or, even simpler:
Alter table table1
add newcol as case when description like 'ABCD%' then 'ABCD' end
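If the value should be physically stored rather than computed on every read, SQL Server also lets you mark the computed column as PERSISTED (allowed here because the expression is deterministic); a sketch:
-- persisted computed column: the value is written to disk and kept up to date
Alter table table1
add newcol as (case when description like 'ABCD%' then substring(description, 1, 4) end) persisted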

OUTPUT INTO fails due to invalid columns name

I am trying to make two inserts one after another like:
INSERT INTO tbl_tours (TimeFrom)
OUTPUT inserted.tourId, DispatchingId, TimeFrom, TimeTo INTO tbl_tourData (tour_fk, dispatchId, timeFrom, timeTo)
SELECT TimeFrom
FROM #tmpTable
SELECT * FROM tbl_tours
SELECT * FROM tbl_tourData
But I get an error:
Msg 207 Level 16 State 1 Line 13
Invalid column name 'DispatchingId'.
Msg 207 Level 16 State 1 Line 13
Invalid column name 'TimeFrom'.
Msg 207 Level 16 State 1 Line 13
Invalid column name 'TimeTo'.
You can check full code at this fiddle:
https://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=c10f9886bcfb709503007f18b24eabfd
How to combine these inserts?
The output clause can only refer to columns that are inserted. So this works:
INSERT INTO tbl_tours (TimeFrom)
output inserted.tourId, inserted.TimeFrom into tbl_tourData(tour_fk, timeFrom)
SELECT TimeFrom FROM #tmpTable;
Here is the revised db<>fiddle.
If you want additional information, you need to join back to another source.
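For example, a two-step sketch of that join-back (the @new table variable, its column types, and the assumption that TimeFrom uniquely identifies a row in #tmpTable are mine, not from the original code):
-- hypothetical staging table variable; adjust the column types to your schema
DECLARE @new TABLE (tourId INT, TimeFrom DATETIME);

INSERT INTO tbl_tours (TimeFrom)
OUTPUT inserted.tourId, inserted.TimeFrom INTO @new (tourId, TimeFrom)
SELECT TimeFrom FROM #tmpTable;

-- join back to #tmpTable to pick up the columns that were not inserted
INSERT INTO tbl_tourData (tour_fk, dispatchId, timeFrom, timeTo)
SELECT n.tourId, t.DispatchingId, n.TimeFrom, t.TimeTo
FROM @new n
JOIN #tmpTable t ON t.TimeFrom = n.TimeFrom;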
When you do an insert ... output, the "output" part can only output whatever was inserted by the "insert" part. You can't reference data from the "inserting" table.
You do insert into tbl_tours(TimeFrom). So you're only inserting a single column - the TimeFrom column, and the tour_id column will be automatically inserted, so that's available too. But then you try to use 4 columns in the output list. Where would these extra two columns come from?
One way to do this in a single step is to use the merge statement, which can get data from the "inserting" source, not just the "inserted" table. Since you know you always want to do an insert, you can join on 1 = 0:
merge tbl_tours
using #tmpTable tmp on 1 = 0
when not matched then
insert (TimeFrom)
values (tmp.TimeFrom)
output inserted.tourId,
tmp.dispatchingId,
inserted.timeFrom, -- or tmp.timeFrom, doesn't matter which
tmp.TimeTo
into tbl_tourData (tour_fk, dispatchId, timeFrom, timeTo);
I should add: This is only possible because you don't actually have a foreign key defined from tbl_tourData to tbl_Tours. You probably do intend to have one given your column name. An output clause can't output into a table with a foreign key (or a primary key with a foreign key to it), so this approach won't work at all if you ever decide to actually create that foreign key. You'll have to do it in two steps. Either per Gordon's answer (insert and join), or by creating a whole new temp table matching the schema of tbl_tourData, outputting everything into that using merge, and then dumping the second temp table into the real tbl_tourData.
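A rough sketch of that last variant (the #stage temp table is hypothetical; it simply mirrors tbl_tourData's columns and has no foreign key, so OUTPUT INTO is allowed):
-- create an empty staging table with the same columns as tbl_tourData
SELECT TOP (0) tour_fk, dispatchId, timeFrom, timeTo
INTO #stage
FROM tbl_tourData;

MERGE tbl_tours
USING #tmpTable tmp ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (TimeFrom) VALUES (tmp.TimeFrom)
OUTPUT inserted.tourId, tmp.DispatchingId, inserted.TimeFrom, tmp.TimeTo
INTO #stage (tour_fk, dispatchId, timeFrom, timeTo);

-- second step: copy the staged rows into the real table
INSERT INTO tbl_tourData (tour_fk, dispatchId, timeFrom, timeTo)
SELECT tour_fk, dispatchId, timeFrom, timeTo FROM #stage;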

Modify a db_table type table of a db

I created a database table with ID, firstname, lastname.
I created the following program:
data: db_table type table of ztabletest. "Create my db data
select * from z6148tabletest into table db_table. "Fill my db data
data: modifiedLine type z6148tabletest. "Create my new line
modifiedLine-firstname = 'hey'.
modifiedLine-lastname = 'test'.
Now I want to modify the line in my db table index 2.
So I'm trying to do something like:
modify ztabletest from table db_table values modifiedLine at index 2.
I don't understand the logic for modifying.
To insert something I just do:
insert INTO ztabletest VALUES modifiedLine.
So here the logic is simple because I add the values to my table.
Can you explain the logic of modifying a line?
A database table has no "index". The order of the table rows is unspecified. When you do a SELECT without an ORDER BY, the database can give you the rows in whatever order it feels like. Most SQL databases tend to always return the same order, but that is for their convenience, not for yours. SAP HANA in particular tends to be very moody in this regard.
But what database tables do have is a primary key. The primary key can be thought of as a unique identifier of each table row. So when you make the primary key a number, you can simulate an index pretty well. I assume that this is the purpose of the field "ID" in your database table and that you therefore marked it as "key" when you defined the table.
INSERT adds a new line when no line with the same key values exists. When there already is one, it fails with sy-subrc = 4.
modifiedLine-id = 2.
INSERT ztabletest FROM modifiedLine.
UPDATE changes an existing table line with the same key values. When no line with these primary key values exists in the table, it fails with sy-subrc = 4.
modifiedLine-id = 2.
UPDATE ztabletest FROM modifiedLine.
or alternatively the more "traditional SQL"-like syntax with SET and WHERE:
UPDATE ztabletest
  SET firstname = 'hey'
      lastname = 'test'
  WHERE id = 2.
MODIFY is the combination of INSERT and UPDATE (also known as an "upsert"). It checks if the line is already there. When it's there, it modifies the line. When it isn't, it inserts it.
modifiedLine-id = 2.
MODIFY ztabletest FROM modifiedLine.
Which is basically a shorthand for:
modifiedLine-id = 2.
UPDATE ztabletest FROM modifiedLine.
IF sy-subrc = 4.
INSERT ztabletest FROM modifiedLine.
ENDIF.
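If by "index 2" you mean the row that currently sits at index 2 of the internal table db_table, a minimal sketch (assuming a release that supports inline declarations) is to change that row in the internal table and then write it back; the database still finds the row via its primary key ID, not via any index:
" change row 2 of the internal table, then write that row back to the database
READ TABLE db_table INDEX 2 ASSIGNING FIELD-SYMBOL(<row>).
IF sy-subrc = 0.
  <row>-firstname = 'hey'.
  <row>-lastname  = 'test'.
  MODIFY ztabletest FROM <row>.   " upsert by primary key
ENDIF.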

How to make massive selection SAP ABAP

I am doing a massive selection from the database with the intention of saving it to the application server or a local directory.
Since the table has loads of entries, I first tried it this way:
SELECT * FROM db PACKAGE SIZE iv_package
INTO CORRESPONDING FIELDS OF TABLE rt_data
WHERE cond IN so_cond
AND cond1 IN so_cond1.
SAVE(rt_data).
ENDSELECT.
This resulted in a dump, with the following message:
Runtime Errors: DBIF_RSQL_INVALID_CURSOR
Exception: CX_SY_OPEN_SQL_DB
I tried doing an alternative way as well:
OPEN CURSOR WITH HOLD s_cursor FOR
SELECT * FROM db
WHERE cond IN so_cond
AND cond1 IN so_cond1.
DO.
FETCH NEXT CURSOR s_cursor INTO TABLE rt_data PACKAGE SIZE iv_package.
SAVE(rt_data).
ENDDO.
This also resulted in a dump with the same message.
What is the best approach to this scenario?
The cursor becomes invalid because the save inside the open SELECT ... ENDSELECT loop (writing to the frontend or application server) causes a database commit, which closes the open cursor. So decouple the two steps: first collect the primary key ranges in packages, then read and save the actual data package by package, outside of any open cursor.
TYPES:
  BEGIN OF key_package_type,
    from TYPE primary_key_type,
    to   TYPE primary_key_type,
  END OF key_package_type.
TYPES key_packages_type TYPE STANDARD TABLE OF key_package_type WITH EMPTY KEY.
DATA key_packages TYPE key_packages_type.

* select only the primary keys, in packages
SELECT primary_key_column FROM db
  WHERE cond IN @condition AND cond1 IN @other_condition
  ORDER BY primary_key_column
  INTO TABLE @DATA(key_package) PACKAGE SIZE @package_size.

  INSERT VALUE #( from = key_package[ 1 ]-primary_key_column
                  to   = key_package[ lines( key_package ) ]-primary_key_column )
         INTO TABLE key_packages.
ENDSELECT.

* select the actual data by the primary key packages
LOOP AT key_packages INTO DATA(key_range).
  SELECT * FROM db
    WHERE primary_key_column >= @key_range-from
      AND primary_key_column <= @key_range-to
    INTO TABLE @DATA(result_package).
  save_to_file( result_package ).
ENDLOOP.
If your table has a compound primary key, i.e. multiple columns such as {MANDT, GJAHR, BELNR}, simply replace the types of the from and to fields with structures and adjust the column list in the first SELECT and the WHERE condition in the second SELECT appropriately.
If you have a range containing only option = 'EQ' records, or one of the conditions has a foreign key, you can simply start looping before you do the SELECT. That reduces the size of the result table and moves the method call out of the open cursor.
OPTION = 'EQ'
Here you just loop over the range:
LOOP AT so_cond ASSIGNING FIELD-SYMBOL(<cond>).
  SELECT * FROM db
    INTO CORRESPONDING FIELDS OF TABLE rt_data
    WHERE cond = <cond>-low
      AND cond1 IN so_cond1.
  save( rt_data ).
ENDLOOP.
Foreign Key
Looping over the range is not possible in this case, since you cannot easily resolve the other options like CP. But you can get each value the range selects from the foreign key table of cond. Then you loop over the resulting table and do the SELECT statement inside, like above.
SELECT cond FROM cond_foreign_keytab
  WHERE cond IN @so_cond
  INTO TABLE @DATA(cond_values).
LOOP AT cond_values ASSIGNING FIELD-SYMBOL(<cond>).
  SELECT * FROM db
    INTO CORRESPONDING FIELDS OF TABLE rt_data
    WHERE cond = <cond>
      AND cond1 IN so_cond1.
  save( rt_data ).
ENDLOOP.

sql query to truncate columns which are above specified length

I have the following table in postgres:
create table table1 (col1 character varying, col2 character varying);
My table has the following data:
col1        col2
Questions   Tags Users
Value1      Value2 Val
I want to find the length of col1 and col2, and when the length of the values of column 1 and column 2 exceeds 6, I want to truncate them and discard the rest, i.e. I want my final table to look like the following:
col1     col2
Questi   Tags U
Value1   Value2
Actually, the reason I want to do this is that when I create an index on table1, I get the following error:
ERROR: index row size 2744 exceeds maximum 2712 for index "allstrings_string_key"
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
I know I can do this by importing the values into some programming language and then truncating them there. Is there some way to achieve the same using an SQL query in Postgres?
Couldn't you just update them to contain only strings of at most 6 characters?
I am no Postgres pro, so this is probably not the best method, but it should do the job anyway:
UPDATE table1 SET col1 = SUBSTRING(col1, 1, 6) WHERE LENGTH(col1) > 6;
UPDATE table1 SET col2 = SUBSTRING(col2, 1, 6) WHERE LENGTH(col2) > 6;
I'd suggest that you actually follow the advice from Postgres, rather than changing your data. Clearly, that column with a 2k character long string shouldn't be indexed -- or not with a btree index anyway.
If the idea behind the index is searching, use full text search instead:
http://www.postgresql.org/docs/current/static/textsearch.html
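For instance, a sketch of such an index (assuming the default english configuration fits the data; the expression in the query has to match the indexed expression, and the index name is made up):
-- GIN full-text index instead of a btree on the raw string
create index table1_col1_fts on table1 using gin (to_tsvector('english', col1));
select * from table1 where to_tsvector('english', col1) @@ to_tsquery('english', 'questions');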
If the idea behind the need is for sorting, use a functional index instead. For instance:
create index tbl_sort on tbl ((substring(col from 1 for 20)));
Then, instead of ordering by col, order by substring(col from 1 for 20).
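The other part of the hint, a functional index over an MD5 hash of the value, keeps the index entry small and supports equality lookups; a sketch (the index name is made up):
-- index a hash of the value instead of the value itself (equality lookups only)
create index table1_col1_md5 on table1 (md5(col1));
select * from table1 where md5(col1) = md5('full string you are looking for');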
Have you tried changing the type of the column to CHAR instead of VARCHAR?
ALTER TABLE table1
ALTER COLUMN col1 SET DATA TYPE CHAR(6),
ALTER COLUMN col2 SET DATA TYPE CHAR(6)
If you need the column to remain variable length, you can specify a limit (character varying with a length limit is standard SQL; it is the unconstrained character varying that is a PostgreSQL extension):
ALTER TABLE table1
ALTER COLUMN col1 SET DATA TYPE CHARACTER VARYING(6),
ALTER COLUMN col2 SET DATA TYPE CHARACTER VARYING(6)