Good morning fellow programmers,
I'm trying to read a table from a "distant" system in SAP (ABAP).
Using the RFC_READ_TABLE function returns the fields table properly, but not the data.
DATA: options TYPE TABLE OF rfc_db_opt WITH HEADER LINE.
DATA: fields  TYPE TABLE OF rfc_db_fld WITH HEADER LINE.
DATA: data    TYPE TABLE OF tab512 WITH HEADER LINE.

CALL FUNCTION 'RFC_READ_TABLE'
  DESTINATION xxxx          " name of the RFC connection
  EXPORTING
    query_table = 'BUT100'  " just for testing purposes
  TABLES
    options     = options   " contains filters etc.
    fields      = fields    " contains the table structure
    data        = data.     " contains the table data

LOOP AT data.
  WRITE: / data.
ENDLOOP.                    " this doesn't show anything either
If I run this code in the debugger, I get the table fields, but the data table is always empty.
I'm pretty new to ABAP, so I thought maybe someone here has an idea why my data table ends up empty.
I've also tried tables other than BUT100, but the result is always the same. Thank you very much in advance! Best regards and stay healthy! ;) Nico
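For what it's worth, RFC_READ_TABLE raises exceptions (e.g. NOT_AUTHORIZED or DATA_BUFFER_EXCEEDED in the remote system) that leave DATA empty if they are not handled. A minimal sketch of the call with exception handling and an OPTIONS filter follows; the WHERE condition is purely illustrative:

options-text = 'PARTNER LIKE ''%'''.   " illustrative WHERE condition
APPEND options.

CALL FUNCTION 'RFC_READ_TABLE'
  DESTINATION xxxx
  EXPORTING
    query_table          = 'BUT100'
  TABLES
    options              = options
    fields               = fields
    data                 = data
  EXCEPTIONS
    table_not_available  = 1
    table_without_data   = 2
    option_not_valid     = 3
    field_not_valid      = 4
    not_authorized       = 5
    data_buffer_exceeded = 6
    OTHERS               = 7.
IF sy-subrc <> 0.
  WRITE: / 'RFC_READ_TABLE failed, sy-subrc =', sy-subrc.
ENDIF.

Checking sy-subrc after the call at least tells you whether the remote system rejected the request rather than returning zero rows.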
I have a MAT-file called 'settings' which contains a variable S and a variable T, which is a table. From this T, which is a table nested inside the MAT-file, I would like to create another table, e.g. T1, that includes only certain variables from the original T, by adding or removing variables.
What I did is the following:
Settings = load('settings_20211221.mat'); % load the data file; the table T with the data is nested in the settings MAT-file
S = Settings.S;
T = Settings.T;
I see that MATLAB recognizes T as a table, because size(T) and head(T) work. However, it is proving very hard to continue and create my own table afterwards.
1)
T1 = readtable('T')
Error using readtable (line 498) Unable to find or open 'T'. Check the path and filename
or file permissions.
Question 1: I do not understand why I cannot read table T, unless it has to do with the fact that it is nested and I am missing something. My impression was that I had specified the table and could therefore apply the readtable function to it.
After this error, I decided to simply create a duplicate of the T1 table called 'Table', in case for some reason I had no permission to manipulate the original T. I want to remove a lot of variables from the table, and I figured the easiest thing to do would be to specify the ranges of variables corresponding to the columns I want to remove.
Removing variables from the newly created table 'Table'.
T1 = removevars(T1, (2:8)) % specifying one range between 2 and 8 works
Table = removevars(Table, [24 25 26]) % using a numeric array to indicate the individual positions of the variables I want to remove works
Then I wanted to specify the ranges all in one go either by using () or [] to be more efficient and did the following:
Table = removevars(Table, (25:28), (30-62))
Table = removevars(Table,[25:28], [30-62])
I always got the following error: 'Error using tabular/removevars. Too many input arguments.'
Question 2: How can I specify multiple ranges of numbers corresponding to the table columns/variables I want to remove?
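For reference, removevars takes exactly two arguments (the table and the variables to remove), which is why passing each range separately fails. Multiple ranges can instead be concatenated into a single index vector; the indices here are illustrative:

Table = removevars(Table, [25:28, 30:62]) % one concatenated vector covering both ranges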
Alternatively, I thought I could specify the variables I want to remove using strings, but I got the error message below even though both the 'flip_angle' and 'SubID' columns did exist in my table.
Table = removevars(Table,{'flip_angle', 'SubID'})
Error using tabular/subsasgnParens (line 230)
Unrecognized table variable name 'flip_angle'.
Sometimes I tried to specify multiple strings corresponding to the names of the variables I wanted to remove (e.g., 20 strings), and then Matlab would return an error for 'too many input arguments'.
Question 3: How are variables removed using strings?
Question 4: Is there a more efficient way to create a new table from the original T file by indexing the variables I want to include in some other way?
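As a sketch of one alternative (the variable names here are just examples): instead of removing the unwanted variables, a new table can be built by indexing into T with only the variables to keep, either by name or by position:

keep = {'SubID', 'flip_angle'};   % names of the variables to keep (illustrative)
T1 = T(:, keep);                  % all rows, only the listed variables

% positional indexing works the same way:
T1 = T(:, [1, 9:23, 29]);

Parenthesis indexing on a table returns a table, so T1 keeps the table type and variable names.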
I want to understand why I get these error codes, so any help would be much appreciated!
Thank you!
I am trying to use VBA to populate a bunch of tables based on the values of fields in a main table. The main table has a field "Sample Name" which is linked to a lookup field in each of the tables I am trying to populate. In the main table the data type of Sample Name is dbText (10); however, I'm noticing that the data type of the lookup field based on Sample Name is dbLong (4). This causes problems for my code: when I try to add new records to each table and set the value of the lookup field to a corresponding value from Sample Name (stored in a string), I receive a data type conversion error.
Is there a reason the lookup field has a different data type than its source table? Is the lookup field storing some sort of index like the key from the main table and simply displaying the corresponding value from Sample Name?
Additional background:
The block of code (within a Case statement) that throws the error is below. At this point, revlitho has been defined as a recordset, the "Sample Name" field is the lookup field in question, and sampName is the string variable storing the corresponding Sample Name from the main table. It is also worth noting that at the time of the error, revlitho.Fields("Sample Name").Value returns Null; I am unsure whether this is the default for empty fields in a recordset:
Case "Standard Image Reverse Lithography"
revlitho.AddNew
revlitho.Fields("Sample Name").Value = sampName
revlitho.Update
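If the lookup field really stores the main table's numeric key (which the dbLong type suggests), one hedged workaround is to translate sampName into that key first, for example with DLookup. The table name "MainTable" and key field "ID" below are assumptions; substitute the real names:

Dim sampKey As Variant
' look up the numeric key of the main-table record whose Sample Name matches sampName
sampKey = DLookup("ID", "MainTable", "[Sample Name] = '" & sampName & "'")

If Not IsNull(sampKey) Then
    revlitho.AddNew
    revlitho.Fields("Sample Name").Value = sampKey   ' store the key, not the text
    revlitho.Update
End If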
I'm new to PDI and Kettle, and what I thought was a simple experiment to teach myself some basics has turned into a lot of frustration.
I want to check a database to see if a particular record exists (i.e. vendor). I would like to get the name of the vendor from reading a flat file (.CSV).
My first hurdle is selecting only the vendor name from the 8 fields in the CSV.
The second hurdle is how to use that vendor name as a variable in a database query.
My third issue is what type of step to use for the database lookup.
I tried a dynamic SQL query, but I couldn't determine how to build the query using a variable, or how to pass the desired value to the variable.
The database table (VendorRatings) has 30 fields, one of which is vendor. The CSV also has 8 fields, one of which is also vendor.
My best effort was to use a dynamic query using:
SELECT * FROM VENDORRATINGS WHERE VENDOR = ?
How do I programmatically assign the desired value to "?" in the query? Specifically, how do I link the output of a specific field from Text File Input to the "vendor = ?" SQL query?
The best practice is a Stream lookup step. For each record in the main flow (VendorRatings), look up the vendor details (the lookup fields) in the reference file (the CSV), based on its identifier (possibly its number or name, or firstname+lastname).
First "hurdle": once the path of the CSV file is defined, press the Get fields button.
It will take the first line as the header to learn the field names, and scan the first 100 (customizable) records to determine the field types.
If the names are not on the first line, uncheck Header row present, press the Get fields button, and then change the names in the panel.
If there is more than one header row or other complexities, use the Text file input step instead.
The same is valid for the lookup step: use the Get lookup fields button and delete the fields you do not need.
Given that:
There is at most one vendorrating per vendor.
You have to do something if there is no match.
I suggest the following flow:
Read the CSV and, for each row, look up in the table (i.e. the lookup table is the SQL table rather than the CSV file), and put a default value on no match. I suggest something really visible like "--- NO MATCH ---".
Then, in case of no match, a filter redirects the flow to the alternative action (here: insert into the SQL table), and the two flows are merged back into the downstream flow.
I have problems with records in my database. I have a template with about 260,000 records, and each record has three identification columns that determine the record's time period and location: one for year, one for month, and one for region. The specific item is then identified by TagName and Description. The problem I am having is that when people entered data into this database, they entered different descriptions for the same device; I know this because the tag name is the same. Can I write code that will go through the database, find the items with the same tag name, and use one of the descriptions to replace the ones that differ, to get a more uniform database? Also, some devices do not have tag names, so we would want to avoid the "" case.
Also, moving forward, I have added more columns to the database to allow more information to be retrieved. Is there a way I can backfill the data to older records once I know they have the same tag name and description, after the database is cleaned up? Thanks in advance; the information is much appreciated.
I assume this will have to be done with VBA of some sort: modify records by looking for the first record with that description and using a variable to assign that description to all the other items with the same tag name? I am just not sure of the correct VBA syntax for this. I assume a similar method would be used for the backfilling process?
Your question is rather broad and multifaceted, so I'll answer key parts in steps:
The Problem I am having is when someone entered data into this
database they entered different description for the same device, I
know this because the tag name is the same.
While you could fix up those inconsistencies easily enough with a bit of SQL code, it would be better to avoid those inconsistencies being possible in the first place:
Create a new table, let's call it 'Tags', with TagName and TagDescription fields, and with TagName set as the primary key. Ensure both fields have their Required setting set to True and Allow Zero Length set to False.
Populate this new table with all possible tags - you can do this with a one-off 'append query' in Access jargon (INSERT INTO statement in SQL).
Delete the tag description column from the main table.
Go into the Relationships view and add a one-to-many relation between the two tables, linking the TagName field in the main table to the TagName field in the Tags table.
As required, create a query that aggregates data from the two tables.
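The one-off append query in the second step might look like the following in Access SQL. The names MainTable and Description are assumptions; First() keeps one arbitrary description per tag:

INSERT INTO Tags (TagName, TagDescription)
SELECT TagName, First(Description) AS TagDescription
FROM MainTable
WHERE TagName Is Not Null AND TagName <> ""
GROUP BY TagName;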
Also some devices do not have tag names so we would want to avoid the
"" Case.
In Access, the concept of an empty string ("") is different from the concept of a true blank or 'null'. As such, it would be a good idea to replace all empty strings (if there are any) with nulls:
UPDATE MyTable SET TagName = Null WHERE TagName = '';
You can then set the TagName field's Allow Zero Length property to False in the table designer.
Also moving forward into the future I have added more columns to the
database to allow for more information to be retrieved
Think less in terms of more columns and more in terms of more tables.
I assume that this will have to be done with VBA of some sort to modify records
Either VBA, SQL, or the Access query designers (which create SQL code behind the scenes). In terms of being able to crunch through data the quickest, SQL is best, though pure VBA (and in particular, using the DAO object library) can be easier to understand and follow.
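A hedged DAO sketch of the clean-up pass described in the question: walk the records sorted by tag and carry the first description forward. The table and field names (MainTable, TagName, Description) are assumptions:

Dim db As DAO.Database, rs As DAO.Recordset
Dim curTag As String, keepDesc As String

Set db = CurrentDb
Set rs = db.OpenRecordset( _
    "SELECT TagName, Description FROM MainTable " & _
    "WHERE TagName Is Not Null AND TagName <> '' " & _
    "ORDER BY TagName", dbOpenDynaset)

Do While Not rs.EOF
    If rs!TagName <> curTag Then
        curTag = rs!TagName          ' first record of a new tag:
        keepDesc = Nz(rs!Description) ' remember its description
    ElseIf Nz(rs!Description) <> keepDesc Then
        rs.Edit                      ' later record with a different description
        rs!Description = keepDesc
        rs.Update
    End If
    rs.MoveNext
Loop

rs.Close

An equivalent UPDATE query in pure SQL would be faster, but the loop is easier to follow and to extend to the backfilling case.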
Hi, I have a table that was designed by a lazy developer who did not create it in third normal form. He saved arrays in the table instead of using an M:M relation, and the application is running, so I cannot change the database schema.
I need to query the table like this:
SELECT * FROM myTable
WHERE usergroup = 20
where the usergroup field contains data like this: 17,19,20, but it could also be only 20 or only 19.
I could search with LIKE:
SELECT * FROM myTable
WHERE usergroup LIKE 20
but in this case it would also match fields that contain 200, for example.
Anybody any idea?
thanx
Fix the bad database design.
A short-term fix is to add a related table with the correct structure. Add a trigger that parses the info in the old field into the related table on insert and update. Then write a script to parse out the existing data. Now you can properly query, but you haven't broken any of the old code. Then you can search for the old code and fix it. Once you have done that, change how data is inserted or updated in the original table to use the new table, and drop the old column.
Write a table-valued user-defined function (UDF in SQL Server, I am sure it will have a different name in other RDBMS) to parse the values of the column containing the list which is stored as a string. For each item in the comma-delimited list, your function should return a row in the table result. When you are using a query like this, query against the results returned from the UDF.
Write a function to convert a comma delimited list to a table. Should be pretty simple. Then you can use IN().
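A sketch of such a function in T-SQL, assuming SQL Server (on 2016+ the built-in STRING_SPLIT already does this); the function and table names are illustrative:

CREATE FUNCTION dbo.SplitList (@list varchar(8000))
RETURNS @result TABLE (value int)
AS
BEGIN
    DECLARE @pos int = CHARINDEX(',', @list);
    WHILE @pos > 0
    BEGIN
        -- take everything before the next comma as one list item
        INSERT INTO @result (value) VALUES (CAST(LEFT(@list, @pos - 1) AS int));
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        SET @pos = CHARINDEX(',', @list);
    END;
    -- last (or only) item, after the final comma
    INSERT INTO @result (value) VALUES (CAST(@list AS int));
    RETURN;
END;

-- then the original query becomes:
SELECT * FROM myTable
WHERE 20 IN (SELECT value FROM dbo.SplitList(usergroup));

Because the comparison is against whole parsed values, a usergroup of '200' no longer matches a search for 20.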