SQL Power Architect to compare two data models

I need to compare the current data model with the old data model.
I am using SQL Power Architect to do the comparison. I was able to configure the connections for accessing the database, and the connection tests successfully.
(I am using Amazon Redshift as the source for this.)
But when I expand the children, I get the list of table objects associated with the connection, and when I try the Compare Data Model option, I see the error below.
Please help me resolve it.
Caused by: ca.sqlpower.sqlobject.SQLObjectException: relationship.populate
    at ca.sqlpower.sqlobject.SQLRelationship.fetchExportedKeys(SQLRelationship.java:740)
    at ca.sqlpower.sqlobject.SQLTable.populateRelationships(SQLTable.java:731)
    at ca.sqlpower.sqlobject.SQLTable.populateImpl(SQLTable.java:1337)
    at ca.sqlpower.sqlobject.SQLObject.populate(SQLObject.java:186)
    ... 4 more
Caused by: org.postgresql.util.PSQLException: Unable to determine a value for MaxIndexKeys due to missing system catalog data.
    at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getMaxIndexKeys(AbstractJdbc2DatabaseMetaData.java:64)
    at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getImportedExportedKeys(AbstractJdbc2DatabaseMetaData.java:3196)
    at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getExportedKeys(AbstractJdbc2DatabaseMetaData.java:3584)
    at ca.sqlpower.sql.jdbcwrapper.DatabaseMetaDataDecorator.getExportedKeys(DatabaseMetaDataDecorator.java:388)
    at ca.sqlpower.sqlobject.SQLRelationship.fetchExportedKeys(SQLRelationship.java:735)
    ... 7 more

You are using the wrong JDBC driver. Please follow the steps below:
- Download the Amazon Redshift JDBC driver from the AWS website.
- Configure the JDBC driver in SQL Power Architect from the Connection Manager.
- Go to JDBC Driver -> select PostgreSQL -> under Add JARs, add the downloaded jar -> configure the driver class name -> click OK.
- Go back to the Connection Manager.
- Select the appropriate connection, choose Edit, and test the connection.
- You should see the downloaded jar configured.
- Now you can add the data objects to SQL Power Architect.
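A quick way to confirm the jar you configured really is the Redshift driver is to load it from a small standalone Java program. This is only a sketch: the driver class name below is the one shipped in the Redshift JDBC 4.2 jar (older jars use com.amazon.redshift.jdbc41.Driver), and the cluster endpoint, database, and credentials are placeholders you must replace.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RedshiftConnectionTest {
        public static void main(String[] args) throws Exception {
            // Driver class from the Redshift JDBC 4.2 jar; adjust if your jar differs.
            Class.forName("com.amazon.redshift.jdbc42.Driver");
            // Placeholder cluster endpoint, database name, and credentials.
            String url = "jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev";
            try (Connection conn = DriverManager.getConnection(url, "awsuser", "password")) {
                System.out.println("Connected with: " + conn.getMetaData().getDriverName());
            }
        }
    }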

OutputDataConversionError.TypeConversionError writing to Azure SQL DB using Stream Analytics from IoT Hub

I have wired up a Stream Analytics job to take data from an IoT Hub and write it to Azure SQL Database.
I am running into an issue with one input field, a date/time value like '2019-07-29T01:29:27.6246594Z', which always seems to result in an OutputDataConversionError.TypeConversionError:
[11:59:20 AM] Source 'eventssqldb' had 1 occurrences of kind 'OutputDataConversionError.TypeConversionError' between processing times '2019-07-29T01:59:20.7382451Z' and '2019-07-29T01:59:20.7382451Z'.
Input data sample (sourceeventtime is the problem; other datetime fields also fail):
{
  "eventtype": "gamedata",
  "scoretier": 4,
  "aistate": "on",
  "sourceeventtime": "2019-07-28T23:59:24.6826565Z",
  "EventProcessedUtcTime": "2019-07-29T00:13:03.4006256Z",
  "PartitionId": 1,
  "EventEnqueuedUtcTime": "2019-07-28T23:59:25.7940000Z",
  "IoTHub": {
    "MessageId": null,
    "CorrelationId": null,
    "ConnectionDeviceId": "testdevice",
    "ConnectionDeviceGenerationId": "636996260331615896",
    "EnqueuedTime": "2019-07-28T23:59:25.7670000Z",
    "StreamId": null
  }
}
The target field in Azure SQL DB is datetime2 and the incoming value can be converted successfully by Azure SQL DB using a query on the same server.
I've tried a bunch of different techniques, including CAST in Stream Analytics and changing the compatibility level of the Stream Analytics job, all to no avail.
Testing the query on a dump of the data in Stream Analytics results in no errors either.
I have the same data writing to Table Storage without problems, but I need to change to Azure SQL DB to enable shorter automated Power BI refresh cycles.
I have tried multiple Stream Analytics jobs and can recreate the error each time with Azure SQL DB.
It turns out that this appears to have been a cached error message displayed in the Azure Portal.
On further investigation through the detailed logs, it appears another value, one too long for its target SQL DB field (i.e. it would have been truncated), was the actual source of the failure. Resolving this removed the error.
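To separate the two suspects, it can help to replay the sample row against the target table directly over JDBC. The sketch below assumes a hypothetical table dbo.gamedata with a datetime2 column and a short varchar column; the server, credentials, and column names are placeholders. The timestamp's 7 fractional digits fit datetime2(7) exactly, so the insert only fails if a string value exceeds its column's declared length.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.time.OffsetDateTime;

    public class ReplaySampleRow {
        public static void main(String[] args) throws Exception {
            // Placeholder server and credentials.
            String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                    + "databaseName=yourdb;encrypt=true";
            try (Connection conn = DriverManager.getConnection(url, "youruser", "yourpassword");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO dbo.gamedata (sourceeventtime, aistate) VALUES (?, ?)")) {
                // datetime2(7) holds all 7 fractional digits, so this value converts cleanly.
                ps.setObject(1, OffsetDateTime.parse("2019-07-28T23:59:24.6826565Z").toLocalDateTime());
                // A string longer than the column's declared length would fail here;
                // the truncation, not the datetime, is the culprit.
                ps.setString(2, "on");
                ps.executeUpdate();
            }
        }
    }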

Can't create ORC external tables on HAWQ PXF

I'm using Pivotal HAWQ with Ambari, and now I'm trying to run some queries over ORC Hive tables with HAWQ.
Previously I was able to create the external tables in psql using SELECT * FROM hcatalog.hive-db-name.hive-table-name distributed randomly;
But now every time I get the error:
Exception report message java.lang.Exception: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.
Can you provide some help on how to get past this?
I believe you have missed a step: updating your pxf-profiles.xml file, which is required after upgrading to HDB 2.2. Please see the instructions listed here:
http://hdb.docs.pivotal.io/220/hdb/install/install-ambari.html#post-install-212-req

Talend Open Studio: Load input files into database

I have an empty SQLite database. Next to that, I have 6 input files (delimited, Excel, JSON, XML).
Now, all I want to do is load the input files into the empty database.
I tried to connect one input file to the DB and just run it. That didn't work (the DB doesn't have anything in it, and I suspect that is the problem).
Then I tried to connect an input file to a tMap, define the table there, define the schema, and connect the tMap to the DB (tSQLiteOutput).
When I try to run it, I receive the following error:
Starting job ProductDemo_Load at 16:46 15/11/2015.
[statistics] connecting to socket on port 3843
[statistics] connected
Exception in component tSQLiteOutput_1
java.sql.SQLException: no such table:
    at org.sqlite.DB.throwex(DB.java:288)
    at org.sqlite.NativeDB.prepare(Native Method)
    at org.sqlite.DB.prepare(DB.java:114)
    at org.sqlite.PrepStmt.<init>(PrepStmt.java:37)
    at org.sqlite.Conn.prepareStatement(Conn.java:231)
    at org.sqlite.Conn.prepareStatement(Conn.java:224)
    at org.sqlite.Conn.prepareStatement(Conn.java:213)
    at workshop_test.productdemo_load_0_1.ProductDemo_Load.tFileInputExcel_1Process(ProductDemo_Load.java:751)
    at workshop_test.productdemo_load_0_1.ProductDemo_Load.runJobInTOS(ProductDemo_Load.java:1672)
    at workshop_test.productdemo_load_0_1.ProductDemo_Load.main(ProductDemo_Load.java:1529)
[statistics] disconnected
Job ProductDemo_Load ended at 16:46 15/11/2015. [exit code=1]
I see there's something wrong with the import, but what exactly?
What should I do in order to successfully load the data from the input files into the database?
I did the exact steps from this little tutorial:
Talend Job: load data into database.
Most Talend output components have a "Create table if not exists" option. Did you check this in your tSQLiteOutput component? The error indicates that when Talend inserts data into the empty database, it cannot find the table because it does not exist. So you need to tell Talend to create the table first.
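If you prefer to create the target table yourself rather than rely on that option, the equivalent step looks like the sketch below. The database path, table name, and columns are hypothetical; match them to the schema you defined in the tMap.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateTargetTable {
        public static void main(String[] args) throws Exception {
            Class.forName("org.sqlite.JDBC"); // same SQLite driver Talend uses
            // Hypothetical database path; point this at the file your job uses.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:C:/data/productdemo.db");
                 Statement st = conn.createStatement()) {
                // Hypothetical table and columns; this is what the
                // "Create table if not exists" option does before the first insert.
                st.executeUpdate("CREATE TABLE IF NOT EXISTS product ("
                        + "id INTEGER PRIMARY KEY, "
                        + "name TEXT, "
                        + "price REAL)");
            }
        }
    }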

Pentaho Kettle is not working for Vertica DB

I need to parse a CSV file and write the data to a Vertica database. The issue is that I get an error when I create a Vertica database connection in Spoon. The error appears at the end of this post.
I tried copying the following two JAR files and adding them to libext/jdbc:
vertica-jdbc-4.1.14.jar and vertica-jdk5-6.1.2-0.jar
But the above didn't help. I am looking for pointers!
Error:
Error connecting to database [Vertica Dev] : org.pentaho.di.core.exception.KettleDatabaseException:
Error occured while trying to connect to the database
Exception while loading class
com.vertica.jdbc.Driver
org.pentaho.di.core.exception.KettleDatabaseException:
Error occured while trying to connect to the database
Exception while loading class
com.vertica.jdbc.Driver
at org.pentaho.di.core.database.Database.normalConnect(Database.java:366)
The two JAR files you copied are from two different versions of Vertica and do not expose the same driver class.
vertica-jdk5-6.1.2-0.jar exposes com.vertica.jdbc.Driver, whereas version 4 exposes com.vertica.Driver.
The error message thus makes it obvious that Pentaho is looking for com.vertica.jdbc.Driver (hence version 5). If it fails, it is probably because the version 4 JAR is loaded first.
Try deleting only the version 4 JAR from libext/jdbc, keep version 5, and restart Pentaho.
On a side note, this class name is hardcoded in Pentaho, so if you really need to use the version 4 JAR and feel adventurous, you just need to get the Pentaho source, update VerticaDatabaseMeta.java, and recompile.
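If you want to verify which class each jar actually exposes before deleting anything, a small probe like this settles it (run it with exactly one of the Vertica jars on the classpath at a time):

    public class VerticaDriverProbe {
        public static void main(String[] args) {
            // The two class names mentioned above: version 5 vs. version 4.
            for (String cls : new String[] {"com.vertica.jdbc.Driver", "com.vertica.Driver"}) {
                try {
                    Class.forName(cls);
                    System.out.println(cls + " -> found");
                } catch (ClassNotFoundException e) {
                    System.out.println(cls + " -> not on classpath");
                }
            }
        }
    }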

Connecting to Oracle11g database from Websphere message broker 6

I am trying a simple insert from a WebSphere Message Broker 6 compute node.
The data source name provided in the broker's odbc.ini file is specified in the compute node's data source property, and I have written the following ESQL code:
DECLARE tableName CHARACTER 'MYTABLE';
DECLARE myValue CHARACTER 'TESTVALUE';
-- MYCOLUMN stands in for the actual column name
INSERT INTO Database.{tableName} (MYCOLUMN) VALUES (myValue);
The connection URL is provided in tnsnames.ora. The URL is a cluster URL, which points to 3 database instances.
When I run the query, I get an exception in the trace saying that the table or view does not exist.
But when I connect to the DB using any of the 3 direct URLs, I am able to see the table.
Note: the database is Oracle 11g.
Can anyone explain what is happening?
The problem was that my application was using the same DSN as my broker, and when the broker was created, the username and password provided pointed to a different schema, one that does not contain the tables for my application.
The solution was to create a new DSN and use mqsisetdbparams to point it at the correct schema.
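For reference, mqsisetdbparams associates credentials with a DSN per broker; the general form (the broker name, DSN, and credentials below are placeholders) is:

    mqsisetdbparams MYBROKER -n MYDSN -u schemauser -p schemapassword

The broker then opens MYDSN with those credentials, and since Oracle resolves unqualified table names against the connected user's schema, pointing the DSN at the right user is what makes the table visible again.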