SLT error: No columns found for table <table name>

Our SLT (SAP Landscape Transformation) configuration fails.
Source: DB2, destination: DB2.
The error message is:
No columns found for table db2inst1.ASDF
What are possible solutions?

Make sure that the database user has the required access rights as described in the security guide.
When using DB2, make sure to select DB6 (not DB2) as the database system and provide the correct tablespace name. (In SAP terms, DB6 denotes DB2 for Linux, UNIX, and Windows, while DB2 denotes DB2 for z/OS.)
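As a minimal sketch of the permission check on a DB2 LUW source (SLTUSER is a hypothetical replication user; the security guide lists the authoritative set of rights):
-- hypothetical user name; the table is the one from the error message
GRANT SELECT ON SYSCAT.TABLES TO USER SLTUSER;
GRANT SELECT ON SYSCAT.COLUMNS TO USER SLTUSER;
GRANT SELECT ON DB2INST1.ASDF TO USER SLTUSER;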

Related

Hive "Show Tables" Fails with MetaException

Using Hive 2.3.7 on AWS EMR (5.33.1), I have created a database which shows correctly when calling show databases;. I then create a table, which seems to work correctly (no exceptions). When I call describe <table>; it correctly returns the name and schema of the table. However, when I run show tables; the following error is returned:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException
Exception thrown when executing query :
SELECT A0.TBL_NAME,A0.TBL_NAME AS NUCORDER0 FROM TBLS A0 LEFT OUTER JOIN DBS B0 ON
A0.DB_ID = B0.DB_ID WHERE B0.`NAME` = ? AND LOWER(A0.TBL_NAME) LIKE '_%' ESCAPE '\' ORDER BY NUCORDER0)
If anyone can shed any light on this issue it would be really appreciated.
I have googled around and found nothing of any use.
EDIT: show tables in <schema>; returned the same result
EDIT 2: This issue was solved by updating the EMR to emr-6.4.0. I have no great insight into the issue beyond what is mentioned here.
I think your metastore database has been corrupted or has bad data. I would take a backup, and then see if you can restore a previous backup. I would connect to the database directly, look at those tables, and see if anything looks out of the ordinary. If you find a bad table entry, don't delete it directly; try using DROP TABLE (via Hive) to remove it so that integrity is preserved. If you have to, you can delete entries in the database directly, since you have a backup and could restore the tables.
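As a hedged illustration of that direct inspection, the TBLS and DBS tables referenced by the failing query above can be queried read-only against the backing MySQL/MariaDB metastore:
-- list databases and tables as the metastore sees them
SELECT DB_ID, NAME FROM DBS;
SELECT TBL_ID, DB_ID, TBL_NAME FROM TBLS;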
The Hive metastore uses DataNucleus (https://www.datanucleus.org/) for all CRUD against the metastore database. DataNucleus emits the LIKE escape character as '\\', i.e. a backslash escaped by another backslash; but MariaDB running with NO_BACKSLASH_ESCAPES treats '\\' as two literal characters instead of one escaped backslash, so the generated ESCAPE clause is invalid.
You can see the sql_mode values here: https://mariadb.com/kb/en/sql-mode/#sql_mode-values.
Get rid of NO_BACKSLASH_ESCAPES from the mode and it should be all right.
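A minimal sketch of the change on MariaDB (the replacement mode list is only an example; keep whatever other modes your server already runs with, and persist the change in my.cnf):
-- check whether NO_BACKSLASH_ESCAPES is present
SELECT @@GLOBAL.sql_mode;
-- re-set the mode list without it
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';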
Try specifying the schema whose tables you want to see:
show tables in schema_name;

Error linking MS Access to SAP HANA database

I am trying to link an Access 2016 DB to tables in an SAP HANA database using ODBC. When I try to link to one of the tables, I get:
" '_SYS_BIC_XYZ_PUBLISHED_Customer_Service_Tran/CVC_SERVICE_ORDER_ACTUAL_COST_REV' is not a valid name. Make sure that it does not include invalid characters or punctuation and that it is not too long. "
I'm able to connect to all the other tables, but this one is giving me grief. I suspect it's because of the long name, but I cannot change the table name in the SAP HANA source.
I found this article:
http://oakleafblog.blogspot.com/2010/07/linking-microsoft-access-2010-tables-to.html
but I still cannot change the table in SAP HANA itself. Is there any other way to fix this error?
I see two options here:
1. Create a synonym for the view and give the synonym a shorter name.
2. Create a view with a shorter name that projects the calculation view with the long name.
In your question, you refer to the DB object _SYS_BIC_XYZ_PUBLISHED_Customer_Service_Tran/CVC_SERVICE_ORDER_ACTUAL_COST_REV as a table. That's likely incorrect, as the naming (_SYS_BIC-schema and CVC_ prefix) indicates that this is in fact a calculation view.
This also means you will only be able to read from this view, not change the data in it.
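A hedged sketch of both options, assuming the calculation view lives in the _SYS_BIC schema under the name from the error message, and that MYSCHEMA is a schema you own (both placeholders to adapt):
-- option 1: a synonym with a shorter name
CREATE SYNONYM "MYSCHEMA"."SERVICE_ORDER_COST" FOR
"_SYS_BIC"."XYZ_PUBLISHED_Customer_Service_Tran/CVC_SERVICE_ORDER_ACTUAL_COST_REV";
-- option 2: a wrapper view with a shorter name
CREATE VIEW "MYSCHEMA"."SERVICE_ORDER_COST_V" AS
SELECT * FROM "_SYS_BIC"."XYZ_PUBLISHED_Customer_Service_Tran/CVC_SERVICE_ORDER_ACTUAL_COST_REV";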

ORA-31655 when using VERSION=10.2 with expdp

I'm trying to export a table with Oracle Data Pump, running on an Oracle 12c instance. The schema has a table called KAT.
When I do the export with:
expdp USER/PASS directory=exp dumpfile=dump.dmp logfile=kat.log TABLES=KAT
everything works as expected.
When I try the following (to be able to import the data into an Oracle 10g database), I get this error:
expdp USER/PASS directory=exp dumpfile=dump.dmp logfile=kat.log TABLES=KAT VERSION=10.2
ORA-39166: Object USER.KAT was not found.
ORA-31655: no data or metadata objects selected for job
Why? Any ideas?
The most likely issue is that your table uses features that exist in 12c but not in 10.2. I get the exact same error message when trying to export a table with a virtual column (a feature introduced in 11.1) from a 12c database:
No VERSION (i.e. COMPATIBLE): works
VERSION=11.2 or 11.1: works
VERSION=10.2: ORA-39166 error.
It could be a feature of the table itself, or of one of its indexes or constraints. Check the table's DDL.
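A hedged way to check, assuming USER is the schema owner from the question (run in SQL*Plus or SQLcl):
-- dump the table DDL and look for 12c-only features
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'KAT', 'USER') FROM dual;
-- virtual columns are a common culprit for VERSION=10.2 exports
SELECT column_name, virtual_column FROM all_tab_cols
WHERE owner = 'USER' AND table_name = 'KAT';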

IBM DB2 - Can't Set Schema

I am trying to use the SET SCHEMA command. However, it does not appear to be working; I get an error message. I am able to use the schema if I qualify each table as Schema.Tablename, but this is tedious. I am connected to the database, and all the schema properties appear in my schemas folder.
The error message is below:
------------------------------ Commands Entered ------------------------------
SET SCHEMA RSBALANCE;
------------------------------------------------------------------------------
SET SCHEMA RSBALANCE
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0805N Package "NULLID.SQLC2H20 0X41414141415A425A" was not found.
SQLSTATE=51002
The syntax for DB2 is (Info Center link):
SET SCHEMA = 'YOUR_SCHEMA'
If you're using the Command Line Processor (which it appears you are by the error message), you have to use double-quotes (it does matter!):
SET SCHEMA = "YOUR_SCHEMA"
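For example, from an OS shell with the CLP (MYDB is a placeholder database name; the schema is the one from the question):
db2 connect to MYDB
db2 "SET SCHEMA RSBALANCE"
db2 "VALUES CURRENT SCHEMA"
The last statement should echo RSBALANCE back if the schema switch took effect.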
Information Center has documentation on the SQL0805N error.
This is the relevant course of action:
If the DB2 utility programs need to be rebound to the database, the database administrator can accomplish this by issuing one of the following CLP commands from the bnd subdirectory of the instance, while connected to the database:
For the DB2 utilities:
db2 bind @db2ubind.lst blocking all grant public
For the CLI:
db2 bind @db2cli.lst blocking all grant public
Turns out that my machine was missing an update from IBM. Installing it allowed the command from bhamby's answer to work properly.
Thank you all for your input.

Copying a table from one database to another

I am trying to archive some of my tables into another database on the same server. However, the INSERT INTO...SELECT...FROM statement gives me an error (SQLSTATE=42704). The table exists in the second database.
Can anyone help with this?
It's not clear from your question what version of DB2 is being used. I'll presume that it's the Linux, Unix & Windows version. You look to be using federation to link the two databases.
Does the SELECT part of your query work from LS2DB001? It's worth trying to pin down which database you have the issue with.
Presuming that the problem is on LS2DB001: if the user the federated link is defined with has permissions on the base tables in the query, check also that they have permissions on the system catalog tables. Without those, the query cannot be parsed and validated.
We've cracked it! The following script works: the LOAD runs without having to COMMIT between batches of copied rows, and the 'Transaction Log full...' problem is solved as well.
CONNECT TO LS2DB001;
EXPORT TO "C:\temp\TIN_TRIGGER_OUT.IXF" OF IXF
MESSAGES "C:\temp\TIN_TRIGGER_OUT.EXM"
SELECT * FROM LS2USER.TIN_TRIGGER_OUT;
CONNECT RESET;
CONNECT TO LQIFCOLD;
LOAD FROM "C:\temp\TIN_TRIGGER_OUT.IXF" OF IXF
MESSAGES "C:\temp\TIN_TRIGGER_OUT.IMM"
INSERT INTO LS2USER.TIN_TRIGGER_OUT COPY NO INDEXING MODE AUTOSELECT;
COMMIT;
CONNECT RESET;
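As a hedged follow-up check: LOAD can leave the target table in a pending state, which is visible in the catalog (table names from the script above):
-- STATUS 'N' means normal; 'C' means check pending (resolve with SET INTEGRITY)
SELECT TABSCHEMA, TABNAME, STATUS
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'LS2USER' AND TABNAME = 'TIN_TRIGGER_OUT';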
I found this on http://www.connx.com/products/connx/Connx%208.6%20UserGuide/CONNXCDD32D/DB2_SQL_States.htm:
42704 Undefined object or constraint name. Revise SQL syntax and retry.
For more help, try to be more specific, e.g. paste the full SQL statement, the table schema, etc.
You can do
SELECT 'INSERT INTO tblxxxx (fld1, fld2) VALUES (' || fld1 || ',' || fld2 || ',' ... || ')'
FROM tblxxxxxx
then copy the result as a text script and execute it in the other DB.
The best way to do this would be to create a custom script. Depending on the size of the tables (how many records), you could either select all of the data into memory and then loop over it, inserting into a copy of the table that you create first, or you could export the data as a CSV file (or some other text-based file) and then loop over that to insert the data into the other table.
If you do not have some sort of formal backup procedures that could do this already, this would be your best bet.
Note: some DB2 databases, such as those on an iSeries, do not actually have "databases"; they have libraries. With the right user profile you can access two libraries at the same time, joining tables from them together or doing a
create table library/newFilename as
(select * from originallibrary/originalfilename) with data
But this only applies to the iSeries, I believe.
I'm writing this response as another answer so I have more space.
I can only suggest breaking the steps down into their components and working through them to see where the error is occurring. Again, I'm assuming you're using federation:
a) In your FROM db, connecting as the user you're using for the federated link, does your select work?
b) In your TO db, using the link, does the select work? (see the sketch after this list)
c) In your TO db, using the link via a stored proc, does the select work?
d) In your TO db, using an INSERT...values(x,y,z), can you insert into the table?
e) In your TO db, via a stored proc, using INSERT...values(x,y,z), can you insert?
Without more information, this is the best line of attack I can suggest.
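For step (b), a hedged sketch of testing the link from the TO db through a federated nickname (LS2SERVER is a placeholder for your federated server definition):
-- create a nickname for the remote table and run a trivial query through it
CREATE NICKNAME LS2USER.TIN_TRIGGER_OUT_NK FOR LS2SERVER.LS2USER.TIN_TRIGGER_OUT;
SELECT COUNT(*) FROM LS2USER.TIN_TRIGGER_OUT_NK;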