Identifying the key field of an SAP table while using JCo3

I am using JCo3. While working with BAPIs, I get the tables that are part of them. While reading the metadata of these tables, I want to know which field is the primary key field of the table.
This is important for me when writing persistence-related code in Java.
Edit:
In fact, I am interested in all BAPIs, for example BAPI_PO_CREATE1, BAPI_GOODSMVT_CANCEL, etc.
The idea is to make this part of the base classes so that the key is identified automatically. I would also like to understand the exceptions, if any.

I found the function module "DDIF_FIELDINFO_GET" useful for getting field-level metadata. Each field's metadata (the DFIES rows it returns) includes a KEYFLAG that indicates whether the field is a key field.
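For example, here is a minimal JCo3 sketch (the destination name MY_DEST and the wrapping class are placeholders for your own setup):

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;
import java.util.ArrayList;
import java.util.List;

public class KeyFieldLookup {

    // Returns the names of the key fields of the given dictionary table.
    public static List<String> getKeyFields(String tableName) throws JCoException {
        JCoDestination dest = JCoDestinationManager.getDestination("MY_DEST");
        JCoFunction fn = dest.getRepository().getFunction("DDIF_FIELDINFO_GET");
        fn.getImportParameterList().setValue("TABNAME", tableName);
        fn.execute(dest);

        // DFIES_TAB holds one DFIES row per field; KEYFLAG = 'X' marks key fields.
        JCoTable fields = fn.getTableParameterList().getTable("DFIES_TAB");
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < fields.getNumRows(); i++) {
            fields.setRow(i);
            if ("X".equals(fields.getString("KEYFLAG"))) {
                keys.add(fields.getString("FIELDNAME"));
            }
        }
        return keys;
    }
}

For a BAPI table parameter, the name to pass as TABNAME is the DDIC structure of that table parameter, which you can read from the table's record metadata.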
Good Luck!!

SQL Server database design with foreign keys

I have the following partial database design:
All the tables are dependent on each other: the table bvd_docflow_documents depends on the table bvd_docflow_subdocuments, and the table bvd_docflow_subdocuments depends on bvd_docflow_subsets. So I thought I could be smart and use foreign keys on every table (with ON DELETE CASCADE). However, the foreign keys keep drilling down the further I go into the tables.
The problem is that there is no point in the table bvd_docflow_documents having a reference to the docflow_documentset_id PK/FK. Is there a way (and maybe my design is crappy) to have only the table directly above it in an FK relationship, rather than all the tables above it?
Edit:
More explanation:
In the bvd_docflow_subsets table, information is stored about the objects used to create documents. There is a relation between that table and the bvd_docflow_subdocuments table, which stores master data about all the documents for a subset (docflow_subset_id is in both tables). This is the link between those two tables.
Going further down, there is also the table bvd_docflow_documents, which contains the actual document data. The link between bvd_docflow_documents and bvd_docflow_subdocuments is bvd_docflow_subdocument_id.
On every table I have a foreign key defined, so that when data is removed from a table, all the data linked to it is also removed.
However, when we look at the bvd_docflow_documents table, it has the foreign keys from all the other tables (docflow_subset_id and docflow_documentset_id), and there is the problem. The only foreign key needed for the bvd_docflow_documents table is docflow_subdocument_id and no other.
Edit 2
I have changed my design further and removed information that I don't need after initial import of the data.
See the following link for the (total) database design:
https://sqldbm.com/Project/SQLServer/Share/_AUedvNutCEV2DGLJleUWA
The tables subsets, subdocuments and documents have a many-to-many relationship, so I thought a table in between those three, documents_subdocuments, was the way to go, where I define all the different keys for those tables.
I am not used to designing the database first and then building it. But there is a first time for everything, and I am trying to make a database that follows standards and uses the power of SQL Server the correct way.
I'll address the bottom-most table and ignore the rest for the most part.
But first some comments. Your schema is simply a model of a system. To provide feedback, one must understand this "system" and how it actually works in order to evaluate your model. In addition, it is important to understand your entities and your reasons for choosing them and modelling them in the specified manner. Without that understanding, all of this is guessing based on experience.
And another comment. Slapping an identity column into every table is just lazy modelling IMO. Others will disagree, but you need to also enforce all natural keys. Do you have natural keys? It is rare not to have any. Enforce those that do exist.
And one last comment. Stop the ridiculous pattern of prepending the column names with the table names. And you should really think long and hard about using very long table names. Given what you have, I sense you need a schema for your docflow stuff.
For the documents table, your current PK makes no sense. Again, you've slapped an identity column into the table. By itself, this column is a key for the table. The inclusion of any other columns does not make the key any more "unique" - that inclusion is logical nonsense. Following your pattern, you would designate the identity column as the primary key. But ...
According to your image, the documents table is related to one and only one subdocument. You added a foreign key to that table - which matches the image. You also added additional columns and foreign keys to the "higher" tables. So now a document "points" to a specific subdocument. It also points to a specific subset - which may have no relationship to the subdocument. The same thought applies to the other FK. I doubt that this is logically correct. So why do these columns (and related FKs) exist? Perhaps this is the result of premature optimization - which everyone knows is the root of all evil in coding. Again, it is impossible to know if this is "right" or even "useful" for your model.
To answer your question "... is there a way", the answer is obviously yes. You remove the columns you are complaining about (see the sketch below). You added them - why? Is this perhaps a problem with the tool you are using?
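As a sketch of what that leaves (T-SQL with simplified, assumed names; a plain parent-child chain): each table references only its direct parent, and ON DELETE CASCADE still propagates deletes through the whole chain.

CREATE TABLE subsets (
    subset_id int IDENTITY PRIMARY KEY
);

CREATE TABLE subdocuments (
    subdocument_id int IDENTITY PRIMARY KEY,
    subset_id int NOT NULL
        REFERENCES subsets (subset_id) ON DELETE CASCADE
);

CREATE TABLE documents (
    document_id int IDENTITY PRIMARY KEY,
    -- no subset_id needed here: deleting a subset cascades to
    -- subdocuments, which in turn cascades to documents
    subdocument_id int NOT NULL
        REFERENCES subdocuments (subdocument_id) ON DELETE CASCADE
);

Deleting a row from subsets then removes its subdocuments and, through them, their documents, without documents carrying keys to every ancestor.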
And some last comments. There is nothing special about "varchar(50)". Perhaps this is a placeholder that will be updated later. It may also be another sign of laziness. And generally speaking, columns with names like "type" and "code" tend to be foreign keys to "lookup" tables - because people like to add, modify, or remove these sorts of categorization values over time. I'm also concerned about the column name overlap among the tables. "Location" exists in multiple tables, as do action_code and action_id. And a column named "id" (action_id) suggests a lookup to another table - is it? Should it be? Is there a relationship between action_id and action_code? From a distance it is impossible to answer any of these questions.
But designing a database is more art than science. Sometimes you just need to create something, populate it with some sample data, and then determine if it works for your needs. Everyone will get something wrong in the first try. That is expected; that is how you learn. The most difficult part is actually completing your first attempt.

Why use sql tags in structs in some Go libs like gorm?

Well, I know the necessity of tags in structs in Go and how they are accessed via reflection. But I have searched and could not find a reliable answer to why I should use sql tags in a struct when writing structs for SQL results. I have explored a lot of sample code, and people use sql:"index" and sql:"primary_key" in their structs.
Now, I have done the indexing at the database layer; isn't that enough? Do I have to use sql:"index" too to get the best results? Likewise, since I have defined the primary key attribute in the database, do I have to specify sql:"primary_key" as well?
My code seems to work fine without those. I just want to know their benefits and usages.
I think you are referring to an ORM library like gorm.
In that case, metadata like sql:"primary_key" or sql:"index" will just tell the ORM to create a key or an index when setting up the tables or migrating them.
A couple of examples in gorm could be: indexes, primary keys, foreign keys, many2many relations, or, when trying to adapt an existing schema to your gorm models, setting the column type explicitly. For example:
type Address struct {
    ID       int
    Address1 string         `sql:"not null;unique"` // set field as not nullable and unique
    Address2 string         `sql:"type:varchar(100);unique"`
    Post     sql.NullString `sql:"not null"`
}
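With gorm v1 you would then let the migration pick those tags up (db here is an assumed *gorm.DB handle):

// Issues CREATE TABLE "addresses" with the NOT NULL / UNIQUE / varchar(100)
// constraints taken from the tags above.
db.AutoMigrate(&Address{})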
It depends on the package you are using and your use case. Is the database-level definition enough for CRUD? Almost always, unless the package says otherwise, which is rare but possible. A few packages sometimes do under-the-hood magic that may give rise to bugs. If you are aware of these behaviours, or are quite explicit in your code, you will probably avoid them.
Indexing tags mostly allow you to use the package's migration tools, which translate your model declarations into SQL queries (CREATE statements). So if you always want to do this yourself, you probably needn't bother adding such tags.
But you may run into a bug if your package expects a tag. For example, in the case of gorm, the Model method takes a struct pointer as input. If this struct has a field named ID, gorm uses it as the primary key; that is, if ID has a value of 4, it will add WHERE id=4 automatically. If your struct has an ID field, you needn't even add a primary_key tag for it to be treated as one. This behaviour may cause issues when you have both a non-primary-key ID field and another field that you are actually using as the primary key. Another example for gorm is this. A possible behaviour can also be checking the nullable property and throwing an error if an INSERT statement involves a NOT NULL field getting a NULL value.
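A short sketch of that implicit convention (gorm v1 behaviour as described above; the struct and values are made up):

// A field named ID is picked up as the primary key by convention,
// so no primary_key tag is needed here:
type User struct {
    ID   uint
    Name string
}

// Given user := User{ID: 4}, a call like
//     db.Model(&user).Update("name", "new name")
// generates an UPDATE with WHERE id=4 automatically, purely because
// the field is named ID.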
On a different note, adding tags to your structs can be considered good practice, since they document the properties of the corresponding columns in the DB.

SQL database design: storing the type of a row

I am designing a database that contains a table reference, with a column type that is one of several predefined values (e.g., book, movie, magazine, etc.). I intend the range of possible values to expand over time (e.g., if I realize that I missed the academic_paper type, I want to be able to add it).
The easiest solution would seem to be to simply store a string representing the type in the table. But this sounds like it would result in a lot of wasted space.
The other solution I thought of is creating a new table reference_types, which the type column references via a foreign key. This seems to have the added benefit of ensuring valid values through the foreign key (so that I won't accidentally mistype a "magzine" somewhere in my code) and possibly allowing faster queries for all media of a certain type (since integer comparisons should be much faster than string comparisons), but it would also slow my application down a bit, as joins would be required whenever I need the reference type, and it would probably complicate logic because of those extra joins.
What are your thoughts on schema design for this problem?
Your second solution is the correct one. Create a secondary table to store your reference types and link them using a foreign key.
For further reading on this subject the search term you'd want to use is 'database normalisation'.
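A minimal sketch of that layout (T-SQL flavour; the names are illustrative):

CREATE TABLE reference_types (
    reference_type_id int IDENTITY PRIMARY KEY,
    name varchar(50) NOT NULL UNIQUE  -- 'book', 'movie', 'magazine', ...
);

CREATE TABLE reference (
    reference_id int IDENTITY PRIMARY KEY,
    reference_type_id int NOT NULL
        REFERENCES reference_types (reference_type_id)
);

Adding a new type such as academic_paper is then just an INSERT into reference_types, and the foreign key guarantees that no row can carry a misspelled type.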
Create the reference_types table. And in your reference table, use the integer key but also add a reference_type_name field.
You can then query the reference table to get the integer key and print its name when needed without performing a join to the other table, and still use the reference_types table for other operations; just keep the type names in both tables in sync.
I know it sounds redundant, but it's really the fastest way to do a simple query by integer key and have it all together.
It depends. If you will want to add some other information to the reference types, then use the second approach. If not, use the first one, because it's faster and the information stored is only a string (you can always SELECT DISTINCT to retrieve your types). Read this article for more info.

Read config data from E071K

I am entertaining a 'config documenter' feature, which will document a given request by finding its tasks in E070 and then the keys in E071K.
E071K only has the table name and the key (the key consists of the concatenated key fields).
Does anybody know of a function module or class to read the table entries linked to the keys found in E071K?
This would help me tremendously.
Cheers,
T.
...you do realize that the key can be a singular *, meaning that the entire table will be read, and in some cases this will probably blow your documenter apart? ;-)
You'll also know that, when an entry ending in a * is transported, the entries in the target system corresponding to the partial key are removed before the import, so you won't be able to document the real changes without examining both the source and the target. I'm just reiterating this for the sake of other poor souls who will try this out.
Other than that, there are a few function modules that can read the contents of the tables involved under certain conditions - namely SRTT_TABLE_GET_BY_KEYLIST - but be aware that you'll have to deal with all kinds of table oddities - non-char fields, cluster tables, the whole works - for yourself.

Hibernate and IDs

Is it possible in Hibernate to have an entity where some IDs are assigned and some are generated?
For instance:
Some objects have an ID between 1 and 10000 that is generated outside of the database, while some entities come in with no ID and need one generated by the database.
You could use 'assigned' as the id generation strategy, but you would have to give the entity its id before you saved it to the database. Alternatively, you could build your own implementation of org.hibernate.id.IdentifierGenerator to provide the id in the manner you've suggested.
I have to agree with Cade Roux, though; doing so seems like it would be much more difficult than using a built-in increment, uuid, or other form of id generation.
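For reference, a sketch of that custom-generator route (Hibernate 5-style API, which may differ in your version; the class name is made up, and the sequence fallback still has to be configured on the generator):

import java.io.Serializable;

import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.id.enhanced.SequenceStyleGenerator;

// Uses the id already set on the entity if there is one; otherwise falls
// back to the sequence configured for this generator.
public class AssignedOrGeneratedId extends SequenceStyleGenerator {

    @Override
    public Serializable generate(SharedSessionContractImplementor session, Object entity) {
        Serializable assignedId = session.getEntityPersister(null, entity)
                .getIdentifier(entity, session);
        return assignedId != null ? assignedId : super.generate(session, entity);
    }
}

Wired up via @GenericGenerator on the entity's id property, this gives the 'assigned when present, generated otherwise' behaviour the question asks for.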
I would avoid this and simply have an auxiliary column for information about the source of the object, and a column for the external identifier (assuming the external identifier is an important value you want to keep track of).
It's generally a bad idea to use columns for mixed purposes - in this case to infer from the nature of a surrogate key the source of an object.
Use any generator you like, and make sure it can start at an offset (when you use a sequence, you can initialize it accordingly).
For all other entities, call setId() before you insert them. Hibernate will only generate an id if the id property is 0. Note that you should first insert objects with ids into the DB and then work with them; there is a lot of code in Hibernate that expects the object to be in the DB when id != 0.
Another solution is to use negative ids for entities that come with an id. This will also make sure that there are no collisions when you insert a new object.
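A short sketch of that flow (Book, session, and the id values are illustrative):

// An entity arriving with an externally assigned id: set it before saving.
Book imported = new Book();
imported.setId(4711L);  // e.g. a value from the assigned 1-10000 range
session.save(imported);

// An entity without an id: leave the id property at its default so the
// configured generator assigns one (its offset should start above 10000).
Book created = new Book();
session.save(created);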