Getting first row of non-index internal tables - abap

I have a variable typed ANY TABLE, and it has to be, since it could hold a STANDARD, SORTED, or HASHED table, and I need to get the first line of that table.
Index access such as
READ TABLE itab INDEX 1 or itab[ 1 ]
is not possible with that type. Is there an elegant way to get the first line?
My way isn't elegant:
LOOP AT itab ASSIGNING <ls_line>.
EXIT.
ENDLOOP.
Googling turned up a similar question without a satisfying answer.

Your question doesn't quite make sense, because you cannot unambiguously define what "first" means in your task.
Your generic variable can accept any table. If it is a hashed table, it is organized by its key fields; most often it resembles a database table, if the DB table has the same key.
If it is an index table (standard or sorted), its order is determined by the table index, which has nothing to do with field order or the key. If the table has been manipulated (INSERT, UPDATE, DELETE), that order will differ from both the natural sort order and the DB sort order.

The meaning of the "first line" of a hashed internal table is a little bit subjective, as a hashed internal table is usually to be accessed with a key value, not a position.
If you mean the first line "in the order in which [the lines] were inserted in the table, and by the sort order used after the statement SORT [if any]", there's no solution better than the one proposed in the question:
TYPES ty_hashed_table TYPE HASHED TABLE OF string WITH UNIQUE KEY table_line.
DATA(hashed_table) = VALUE ty_hashed_table( ( `World` ) ( `Hello` ) ).
LOOP AT hashed_table ASSIGNING FIELD-SYMBOL(<line>).
EXIT.
ENDLOOP.
ASSERT <line> = `World`.
If "first line" means the line with a given component of the hashed internal table containing the lowest value, you may define a secondary sorted key:
TYPES ty_hashed_table TYPE HASHED TABLE OF string WITH UNIQUE KEY table_line
WITH NON-UNIQUE SORTED KEY by_table_line COMPONENTS table_line
##TABKEY[PRIMARY_KEY][BY_TABLE_LINE].
DATA(hashed_table) = VALUE ty_hashed_table( ( `World` ) ( `Hello` ) ).
ASSIGN hashed_table[ KEY by_table_line INDEX 1 ] TO FIELD-SYMBOL(<line>).
ASSERT <line> = `Hello`.

What happens when you insert rows in a SERIAL column with existing values?

I had to import a large CSV file into a database, and one column must be a unique ID for a purchase. I set the type of the column to SERIAL (yes, I know it's not actually a type) but since I already had some data in there with their own "random" purchase IDs I'm not sure about what will happen when I insert new rows.
Will the purchase ID take the values that are not already in use? Will it start after the biggest existing ID? Will it start at 1 and not care about if a value is already in use?
The underlying SEQUENCE does not care about the values you inserted (explicitly providing values for the serial column overrides the default), so you have to adjust it manually to avoid duplicate key errors:
SELECT setval(pg_get_serial_sequence('tbl', 'id'), max(id)) FROM tbl;
'tbl' and 'id' being the names of table and column respectively.
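To make the behaviour concrete, here is a minimal sketch; the table and column names match the statement above, and the concrete ids are only illustrative:
CREATE TABLE tbl (id serial PRIMARY KEY, note text);
-- Explicit ids bypass the sequence entirely:
INSERT INTO tbl (id, note) VALUES (1, 'imported'), (2, 'imported');
-- The sequence has not moved, so the next default insert would try id = 1
-- and fail with a duplicate key error:
-- INSERT INTO tbl (note) VALUES ('new');
-- Re-sync the sequence to the current maximum, then default inserts work again:
SELECT setval(pg_get_serial_sequence('tbl', 'id'), max(id)) FROM tbl;
INSERT INTO tbl (note) VALUES ('new');  -- gets id = 3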
Related:
How to reset postgres' primary key sequence when it falls out of sync?
How to copy both structure and contents of PostgreSQL table, but duplicate sequences?

How to construct an sqlite table that assigns and returns IDs for any name?

I would like to have an sqlite table that maps names into unique IDs. I can create this table in the following way:
CREATE TABLE name_to_id (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)
With a select statement I can get the row containing a needed name and get from this row the corresponding ID.
The problem appears if I try to get ID for a name that is not yet in the table. The expected behavior in this case is that the new name will be added and its newly generated ID will be returned. I have two possible solutions/implementations of that.
The first solution is trivial:
We check if name is in the table.
If not we insert a row with the name.
We select the row with the name and read the needed ID from that row.
I do not like this solution because it can happen that the first process checks whether the name is in the table, sees that it is not there, and meanwhile another process adds the name; the first process then tries to add the same name again.
The second solution seems to be better:
For any name we use insert if not exist.
We select from the table the row containing the name and get its ID.
Is the second solution optimal or there are better solutions?
The normal way to avoid duplicate entries in a table is to create a unique constraint. The database will then check for you whether the record is already there and fail the insert if so. That should be the best option in terms of reliability and performance.
Next, the SQLite FAQ suggests using the function last_insert_rowid() to fetch the ID instead of running a second query. It is actually the very first question in the FAQ ;)
In pseudocode, the first solution looks like this:
cursor = db.execute("SELECT id FROM name_to_id WHERE name = ?", name)
if cursor.has_some_row:
id = cursor["id"]
else:
db.execute("INSERT INTO name_to_id(name) VALUES(?)", name)
id = db.last_insert_rowid
and the second like this:
db.execute("INSERT OR IGNORE INTO name_to_id(name) VALUES(?)", name)
cursor = db.execute("SELECT id FROM name_to_id WHERE name = ?", name)
id = cursor["id"]
The first solution requires a transaction around both commands, but this would be a good idea for the second solution, too, to avoid the overhead of multiple implicit transactions.
The second solution requires a unique constraint on name, but that would be a good idea for the first solution, too, both for correctness and to speed up the name lookups.
Both solutions use two SQL statements and have similar speed.
(The second looks up the row twice, but that data is cached.)
So there isn't anything obvious that makes one better than the other.
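As a concrete sketch, here is the schema with the recommended unique constraint and the second approach written as plain SQLite; the name 'alice' is just an example value:
CREATE TABLE name_to_id (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL UNIQUE
);
-- Second approach, both statements in one transaction:
BEGIN;
INSERT OR IGNORE INTO name_to_id(name) VALUES ('alice');
SELECT id FROM name_to_id WHERE name = 'alice';
COMMIT;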

Yes/No column with at most one yes

I have a table which contains different versions of objects, e.g. Object A version 1, A version 2, B version 24, etc. One column stores the foreign key to the object, another stores the version number. It is obvious that these in combination should be unique, and that is easy to implement with a unique index on both.
However, I want to be able to keep track of which version is the current one with an IsCurrent Yes/No column. The current version is not necessarily the one with the highest number. The problem here is that there is no way to define an index which is unique for yes values but allows many nos.
I find a lot of results when searching for this problem but none of them appear to work in Access. I have tried a "hack" in which I create a calculated column to use in a unique index which is -1 if current is true and the PK otherwise, but Access does not allow you to index calculated columns.
Is there any way to do this?
There is a trick, but you must allow only "yes/null" values for the isCurrent column - "yes" means "this row is current", and "null" otherwise.
This can be done with a validation rule: [isCurrent]="yes" Or [isCurrent] Is Null
Then create a composite, unique, ignore-nulls index on the id + isCurrent fields, and allow nulls.
In table design view, open the Indexes dialog and set Unique = Yes and Ignore Nulls = Yes for that index.
This prevents inserting two rows with the same id and "yes" in the isCurrent column, but allows many rows with the same id and null in the isCurrent column.
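For reference, roughly the same index can be created with Access DDL instead of the designer; the table and column names below are placeholders for your own:
CREATE UNIQUE INDEX idx_one_current
ON ObjectVersions (ObjectID, isCurrent)
WITH IGNORE NULL;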

can I insert a copy of a row from table T into table T without listing its columns and without primary key error?

I want to do something like this:
INSERT INTO T SELECT * FROM T WHERE Column1 = 'MagicValue' -- (multiple rows may be affected)
The problem is that T has a primary key column and so this causes an error as if trying to set the primary key. And frankly, I don't want to set the primary key either. I want to create entirely new rows with new primary keys but the rest of the fields being copied over from the original rows.
This is supposed to be generic code applicable to various tables. Well, so if there is no nice way of doing this, I will just write code to dynamically extract column names, construct the list etc. But maybe there is? Am I the first guy trying to create duplicate rows in a database or something?
I'm assuming by "Primary Key" you mean identity or guid data types that auto-assign or auto-increment.
Without some very fancy dynamic SQL, you can't do what you are after. If you want to insert everything but the identity field, you need to specify fields.
If you want to specify a value for that field, you need to specify all the fields in the SELECT and in the INSERT AND turn on IDENTITY_INSERT.
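In other words, the non-dynamic version looks like the sketch below; the column names are placeholders, and a generic version would have to build the column list dynamically, e.g. from INFORMATION_SCHEMA.COLUMNS:
-- Id is the identity column and is deliberately left out of both lists:
INSERT INTO T (Column1, Column2, Column3)
SELECT Column1, Column2, Column3
FROM T
WHERE Column1 = 'MagicValue';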
You don't gain anything from duplicating a row in a database (given that you aren't trying to set the primary key). It would be wiser, and would avoid problems, to have another column called "Amount" or something similar.
something like
UPDATE T SET Amount = Amount + 1 WHERE Column1 = 'MagicValue'
or, if it can increase by more than 1, for example by the number of rows involved:
Update T SET Amount = Amount * 2 WHERE Column1 = 'MagicValue'
I'm not sure what you're trying to do exactly but if the above doesn't work for what you're doing I think your design requires a new table and insert it there.
EDIT: Also, as mentioned under your comments, a generic insert doesn't really make sense. Imagine: for this to work you need the same number of fields, and they will hold the same values, suggesting that they should also have the same names (even if that isn't strictly required). It would basically be the same table structure twice.

Linked List in SQL

What's the best way to store a linked list in a MySQL database so that inserts are simple (i.e. you don't have to re-index a bunch of stuff every time) and such that the list can easily be pulled out in order?
Use Adrian's solution, but instead of incrementing by 1, increment by 10 or even 100. Then insertions can be placed at half of the difference between the two neighbouring positions without having to update everything below the insertion point. Pick a number large enough to handle your average number of insertions; if it's too small you'll have to fall back to updating all rows with a higher position during an insertion.
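A quick sketch of what that looks like with gaps of 100; the table and column names follow the position-column answer further down, and the concrete positions are only illustrative:
-- Existing rows sit at positions 100, 200, 300, ...
-- Insert between 200 and 300 without touching any other row:
INSERT INTO linked_list (my_value, position)
VALUES ('between', (200 + 300) / 2);  -- lands at 250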
Create a table with two self-referencing columns, PreviousID and NextID. If the item is the first thing in the list, PreviousID will be null; if it is the last, NextID will be null. The SQL will look something like this:
create table tblDummy (
PKColumn int not null,
PreviousID int null,
DataColumn1 varchar(50) not null,
DataColumn2 varchar(50) not null,
DataColumn3 varchar(50) not null,
DataColumn4 varchar(50) not null,
DataColumn5 varchar(50) not null,
DataColumn6 varchar(50) not null,
DataColumn7 varchar(50) not null,
NextID int null
)
Store an integer column in your table called 'position'. Record a 0 for the first item in your list, a 1 for the second item, etc. Index that column in your database, and when you want to pull your values out, sort by that column.
alter table linked_list add column position integer not null default 0;
alter table linked_list add index position_index (position);
select * from linked_list order by position;
To insert a value at index 3, modify the positions of rows 3 and above, and then insert:
update linked_list set position = position + 1 where position >= 3;
insert into linked_list (my_value, position) values ("new value", 3);
A linked list can be stored using recursive pointers in the table. This is very much the same way hierarchies are stored in SQL, using the recursive association pattern.
You can learn more about it here (Wayback Machine link).
I hope this helps.
The simplest option would be creating a table with a row per list item, a column for the item position, and columns for other data in the item. Then you can use ORDER BY on the position column to retrieve in the desired order.
create table linked_list
( list_id integer not null
, position integer not null
, data varchar(100) not null
);
alter table linked_list add primary key ( list_id, position );
To manipulate the list just update the position and then insert/delete records as needed. So to insert an item into list 1 at index 3:
begin transaction;
update linked_list set position = position + 1 where position >= 3 and list_id = 1;
insert into linked_list (list_id, position, data)
values (1, 3, "some data");
commit;
Since operations on the list can require multiple commands (eg an insert will require an INSERT and an UPDATE), ensure you always perform the commands within a transaction.
A variation of this simple option is to have position incrementing by some factor for each item, say 100, so that when you perform an INSERT you don't always need to renumber the position of the following elements. However, this requires a little more effort to work out when to increment the following elements, so you lose simplicity but gain performance if you will have many inserts.
Depending on your requirements other options might appeal, such as:
If you want to perform lots of manipulations on the list and not many retrievals, you may prefer to have an ID column pointing to the next item in the list instead of using a position column. Then you need iterative logic when retrieving the list in order to get the items in order. This can be relatively easily implemented in a stored proc (see the recursive CTE sketch after this list).
If you have many lists, a quick way to serialise and deserialise your list to text/binary, and you only ever want to store and retrieve the entire list, then store the entire list as a single value in a single column. Probably not what you're asking for here though.
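For the next-pointer variant mentioned above, here is a hedged sketch of in-order retrieval using a recursive CTE (MySQL 8.0 or later); the table and column names are assumptions, not taken from the answers:
WITH RECURSIVE ordered AS (
    -- head of the list: the row that no other row points to
    SELECT id, data, next_id, 1 AS position
    FROM   items
    WHERE  id NOT IN (SELECT next_id FROM items WHERE next_id IS NOT NULL)
    UNION ALL
    -- follow the next_id pointer one step at a time
    SELECT i.id, i.data, i.next_id, o.position + 1
    FROM   items i
    JOIN   ordered o ON i.id = o.next_id
)
SELECT id, data FROM ordered ORDER BY position;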
This is something I've been trying to figure out for a while myself. The best way I've found so far is to create a single table for the linked list using the following format (this is pseudo code):
LinkedList(
key1,
information,
key2
)
key1 is the starting point. key2 is a foreign key linking to key1 of the next row. So your rows will link something like this:
row 1
key1 = 0,
information = 'hello'
key2 = 1
key1 is the primary key of row 1. key2 is a foreign key pointing to key1 of row 2.
row 2
key1 = 1,
information = 'wassup'
key2 = null
key2 of row 2 is set to null because it doesn't point to anything.
When you first enter a row into the table, you'll need to make sure key2 is set to null or you'll get an error. After you enter the second row, you can go back and set key2 of the first row to the primary key of the second row.
This makes it easiest to enter many entries at a time, then go back and set the foreign keys accordingly (or build a GUI that does that for you).
Here's some actual code I've prepared (all of it was tested on MSSQL; you may want to do some research for the version of SQL you are using!):
createtable.sql
create table linkedlist00 (
key1 int primary key not null identity(1,1),
info varchar(10),
key2 int
)
register_foreign_key.sql
alter table dbo.linkedlist00
add foreign key (key2) references dbo.linkedlist00(key1)
*I put them into two separate files because it has to be done in two steps; MSSQL won't let you do it in one step, because the table doesn't exist yet for the foreign key to reference.
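The insert-then-link workflow described earlier would look roughly like this against that table; the identity values 1 and 2 are what the inserts would produce on an empty table:
-- insert rows first with key2 left NULL (the FK would reject a key1 that
-- doesn't exist yet), then link them up afterwards:
insert into dbo.linkedlist00 (info, key2) values ('hello', NULL);  -- key1 = 1
insert into dbo.linkedlist00 (info, key2) values ('wassup', NULL); -- key1 = 2
update dbo.linkedlist00 set key2 = 2 where key1 = 1;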
A linked list is especially powerful for one-to-many relationships. If you've ever wanted to make an array of foreign keys, this is one way to do it! You can make a primary table that points to the first row in the linked-list table, and then, instead of the "information" field, use a foreign key to the desired information table.
Example:
Let's say you have a Bureaucracy that keeps forms.
Let's say they have a table called file cabinet
FileCabinet(
Cabinet ID (pk)
Files ID (fk)
)
Each row contains a primary key for the cabinet and a foreign key for the files. These files could be tax forms, health insurance papers, field trip permission slips, etc.
Files(
Files ID (pk)
File ID (fk)
Next File ID (fk)
)
this serves as a container for the Files
File(
File ID (pk)
Information on the file
)
this is the specific file
There may be better ways to do this, depending on your specific needs; the example just illustrates possible usage.
There are a few approaches I can think of right off, each with differing levels of complexity and flexibility. I'm assuming your goal is to preserve an order in retrieval, rather than requiring storage as an actual linked list.
The simplest method would be to assign an ordinal value to each record in the table (e.g. 1, 2, 3, ...). Then, when you retrieve the records, specify an order-by on the ordinal column to get them back in order.
This approach also allows you to retrieve the records without regard to membership in a list, but allows for membership in only one list, and may require an additional "list id" column to indicate to which list the record belongs.
A slightly more elaborate, but also more flexible, approach would be to store information about membership in a list or lists in a separate table. The table would need three columns: the list id, the ordinal value, and a foreign key pointing to the data record. Under this approach, the underlying data knows nothing about its membership in lists, and can easily be included in multiple lists.
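A minimal sketch of that membership table; the names are assumptions rather than anything from the answer:
create table list_membership (
    list_id   int not null,
    ordinal   int not null,
    record_id int not null,  -- foreign key to the data table
    primary key (list_id, ordinal)
);
-- retrieve list 1 in order:
select record_id from list_membership where list_id = 1 order by ordinal;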
This post is old but I'm still going to give my $.02. Updating every record in a table or record set sounds like a crazy way to solve ordering, and the amount of indexing is also crazy, but it sounds like most have accepted it.
The solution I came up with to reduce updates and indexing is to create two tables (in most use cases you don't sort all the records in just one table anyway): table A holds the records of the list being sorted, and table B groups them and holds a record of the order as a string. The order string represents an array that can be used to order the selected records either on the web server or in the browser layer of a web application.
Create Table A (
Id int primary key identity(1,1),
Data varchar(10) not null,
B_Id int
)
Create Table B (
Id int primary key Identity(1,1),
GroupName varchar(10) not null,
[Order] varchar(max) null
)
The format of the order string should be id, position and some separator to split() your string by. In the case of jQuery UI, the .sortable('serialize') function outputs a POST-friendly order string for you that includes the id and position of each record in the list.
The real magic is the way you choose to reorder the selected list using the saved ordering string. This will depend on the application you are building. Here is an example, again using jQuery, to reorder the list of items: http://ovisdevelopment.com/oramincite/?p=155
https://dba.stackexchange.com/questions/46238/linked-list-in-sql-and-trees suggests a trick of using a floating-point position column for fast inserts and ordering.
It also mentions the specialized SQL Server hierarchyid feature.
I think it's much simpler to add a created column of datetime type and a position column of int; duplicate positions are then allowed, and if the SELECT statement uses ORDER BY position, created DESC, your list will be fetched in order.
Increment the SERIAL 'index' by 100, but manually add intermediate values with an 'index' equal to (Prev + Next) / 2. If you ever saturate a gap of 100, reorder the index back to multiples of 100.
This should maintain the sequence with the primary index.
A list can be stored by having a column contain the offset (list index position); an insert in the middle then means incrementing the offsets of everything after the new item and then doing the insert.
You could implement it like a double-ended queue (deque). To support fast push/pop/delete (if the ordinal is known) and retrieval, you would have two data structures: one with the actual data and another with the number of elements ever added over the history of the key. Trade-off: this method would be slower for any insert into the middle of the linked list, O(n).
create table queue (
primary_key,
queue_key,
ordinal,
data
)
You would have an index on queue_key+ordinal
You would also have another table which stores the number of rows EVER added to the queue...
create table queue_addcount (
primary_key,
add_count
)
When pushing a new item to either end of the queue (left or right) you would always increment the add_count.
If you push to the back you could set the ordinal...
ordinal = add_count + 1
If you push to the front you could set the ordinal...
ordinal = -(add_count + 1)
update
add_count = add_count + 1
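A hedged sketch of the push-to-back case in plain SQL, assuming primary_key in queue_addcount identifies the queue (here queue 1) and that column types are filled in as needed:
start transaction;
update queue_addcount set add_count = add_count + 1 where primary_key = 1;
-- ordinal = new add_count, i.e. the old add_count + 1
insert into queue (queue_key, ordinal, data)
select 1, add_count, 'new item'
from queue_addcount
where primary_key = 1;
commit;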
This way you can delete anywhere in the queue/list and it would still return in order and you could also continue to push new items maintaining the order.
You could optionally rewrite the ordinal to avoid overflow if a lot of deletes have occurred.
You could also have an index on the ordinal to support fast ordered retrieval of the list.
If you want to support inserts into the middle, you would need to find the ordinal the item needs to be inserted at, insert with that ordinal, and then increment every ordinal following that insertion point by one. Also increment the add_count as usual. If the ordinal is negative you could instead decrement all of the earlier ordinals to do fewer updates. This would be O(n).