Ignite database: Create schema: AssertionError - sql

I want to create a schema using ignite as an in memory database. So I do the following:
try (Statement statement = connection.createStatement()) {
    statement.executeQuery("CREATE SCHEMA my_schema");
}
But I'm getting this error:
Exception in thread "sql-connector-#38%null%" java.lang.AssertionError
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1341)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1856)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1852)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2293)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1860)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:188)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:122)
at org.apache.ignite.internal.processors.odbc.SqlListenerNioListener.onMessage(SqlListenerNioListener.java:152)
at org.apache.ignite.internal.processors.odbc.SqlListenerNioListener.onMessage(SqlListenerNioListener.java:44)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
And I have no idea what that means. I need to create the schema since I'm writing unit tests for my SQL statements, and the original table also has a schema:
my_schema.my_table
And I can't replace the table name just for unit test purposes.
EDIT
I have to mention that Ignite calls this a schema. In my opinion it is just a database name, but CREATE DATABASE my_database does not work either.

The CREATE SCHEMA command is not supported in Ignite as of yet.
If you create a few caches, each of them will be assigned its own schema with a matching name. There is also a schema named PUBLIC, which holds all caches created with the CREATE TABLE command.
The subset of DDL commands currently available in Ignite is described here:
https://apacheignite-sql.readme.io/docs/ddl
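A minimal sketch of that workaround, assuming a plain Java test setup: instead of CREATE SCHEMA, start a node and create a cache whose SQL schema is set to the name the statements under test expect. The cache, class and schema names below are illustrative, not from the original post.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class MySchemaTestSupport {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Each cache gets its own SQL schema; pinning it to MY_SCHEMA lets
            // statements referencing my_schema.<table> resolve against this cache.
            CacheConfiguration<Long, MyTable> cfg = new CacheConfiguration<>("my_table_cache");
            cfg.setSqlSchema("MY_SCHEMA");
            cfg.setIndexedTypes(Long.class, MyTable.class); // exposes the value type as a SQL table
            ignite.getOrCreateCache(cfg);
        }
    }

    // Placeholder value class; real fields would carry @QuerySqlField annotations.
    static class MyTable { }
}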

Related

Setting transactional-table properties results in external table

I am creating a managed table via Impala as follows:
CREATE TABLE IF NOT EXISTS table_name
STORED AS parquet
TBLPROPERTIES ('transactional'='false', 'insert_only'='false')
AS ...
This should result in a managed table which does not support HIVE-ACID.
However, when I run the command I still end up with an external table.
Why is this?
I found out in the Cloudera documentation that omitting the EXTERNAL keyword when creating the table does not mean that the table will definitely be managed:
When you use EXTERNAL keyword in the CREATE TABLE statement, HMS stores the table as an external table. When you omit the EXTERNAL keyword and create a managed table, or ingest a managed table, HMS might translate the table into an external table or the table creation can fail, depending on the table properties.
Thus, setting transactional=false and insert_only=false leads to an external table in the interpretation of the Hive Metastore.
Interestingly, setting only TBLPROPERTIES ('transactional'='false') is completely ignored and will still result in a managed table (with transactional=true).
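If it helps, here is a hedged sketch for double-checking what the metastore actually created: run DESCRIBE FORMATTED over JDBC and look at the "Table Type:" row (MANAGED_TABLE vs EXTERNAL_TABLE). The HiveServer2-style URL, port and table name are assumptions about your setup, not taken from the question.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckTableType {
    public static void main(String[] args) throws Exception {
        // HiveServer2-compatible endpoint; adjust host/port/auth for your cluster.
        String url = "jdbc:hive2://localhost:21050/default;auth=noSasl";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("DESCRIBE FORMATTED table_name")) {
            while (rs.next()) {
                String label = rs.getString(1);
                if (label != null && label.trim().startsWith("Table Type")) {
                    String value = rs.getString(2);
                    // Prints e.g. "Table Type: EXTERNAL_TABLE"
                    System.out.println(label.trim() + " " + (value == null ? "" : value.trim()));
                }
            }
        }
    }
}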

Liquibase unable to execute schema (Snowflake) with mixed case eg. (This_Schema)

I have tried using the Liquibase tool for our Snowflake DB. It all works when the schema name is in all capitals (UPPERCASE). But Liquibase is not picking up any of my schemas with mixed case, e.g. (This_Schema).
I have tried putting this, but it didn't help.
<defaultSchemaName>This_Schema</defaultSchemaName>
POM.XML configuration example:
<driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
<url>jdbc:snowflake://${env.SNOWFLAKE_ACCOUNT}.eu-central-1.snowflakecomputing.com/?db=${env.SNOWFLAKE_DB}&schema=${env.SNOWFLAKE_SCHEMA}&warehouse=${env.SNOWFLAKE_WH}&role=${env.SNOWFLAKE_ROLE}</url>
<username>${env.SNOWFLAKE_USERNAME}</username>
<password>${env.SNOWFLAKE_PASSWORD}</password>
Error setting up or running Liquibase: liquibase.exception.DatabaseException: SQL compilation error:
[ERROR] Schema 'LIQUIBASE_DB.THIS_SCHEMA' does not exist. [Failed SQL: CREATE TABLE THIS_SCHEMA.DATABASECHANGELOGLOCK (ID INT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP_NTZ, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))]
NOTE: "This_Schema" is the name of my schema as it is showing here, but upon executing liquibase update this automatically changes to UPPERCASE value as in error above.
Found this comment in the README file from the liquibase snowflake extension.
The Snowflake JDBC drivers implementation of
DatabaseMetadata.getTables() hard codes quotes around the catalog,
schema and table names, resulting in queries of the form:
show tables like 'DATABASECHANGELOG' in schema "sample_db"."sample_schema"
This results in the DATABASECHANGELOG table not being found, even
after it has been created. Since Snowflake stores catalog and schema
names in upper case, the getJdbcCatalogName returns an upper case
value.
Could this explain your problems?...
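To see whether this is what's happening, a small sketch (using the same Snowflake JDBC driver as in the POM, with placeholder connection details) reproduces the case-folding behaviour outside Liquibase: a schema created with quotes keeps its mixed case, while an unquoted reference is folded to upper case and fails to resolve.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SnowflakeCaseCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder account/credentials; the real values come from the POM properties.
        String url = "jdbc:snowflake://<account>.eu-central-1.snowflakecomputing.com/?db=LIQUIBASE_DB";
        try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
             Statement st = conn.createStatement()) {
            // Quoted identifier: the mixed-case name "This_Schema" is stored exactly as written.
            st.execute("CREATE SCHEMA IF NOT EXISTS \"This_Schema\"");
            // Works, because the quoted reference matches the stored name:
            st.execute("CREATE TABLE IF NOT EXISTS \"This_Schema\".DEMO (ID INT)");
            // An unquoted reference such as THIS_SCHEMA.DEMO is folded to upper case by
            // Snowflake and fails with "Schema ... does not exist" -- the same error
            // Liquibase reports when it upper-cases the schema name.
        }
    }
}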

Hibernate - Just want to query table but it tries to create the table as well

I am trying to create a simple Hibernate example which reads some articles from a database. The following code shows me the article description from the (already existing) table Art(icle). But it also tries to create the table "Art" when openSession is called. I just want to read from the existing table, so why does it try to create the article table before showing the existing entries?
sessionObj = buildSessionFactory().openSession();
Query<Art> query = sessionObj.createQuery("from Art", Art.class);
for (Art a : query.getResultList()) {
    System.out.println(a.getDesc());
}
Which values do you use in your configuration?
From the docs you can use:
hibernate.hbm2ddl.auto
Automatically validates or exports schema DDL to the database when the
SessionFactory is created. With create-drop, the database schema will
be dropped when the SessionFactory is closed explicitly.
validate | update | create | create-drop
From this:
1) validate: validate the schema; makes no changes to the database.
2) update: update the schema.
3) create: creates the schema, destroying previous data.
4) create-drop: drops the schema at the end of the session.
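In your case you most likely want validate (or no hbm2ddl setting at all), so Hibernate only checks the existing Art table instead of exporting DDL for it. A minimal sketch of setting this programmatically, assuming you build the SessionFactory yourself and Art is a mapped entity:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class ReadOnlySessionFactory {
    static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // Validate the schema only; never emit CREATE/ALTER statements.
        cfg.setProperty("hibernate.hbm2ddl.auto", "validate");
        cfg.addAnnotatedClass(Art.class);
        return cfg.buildSessionFactory();
    }
}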
Hope that helps

Apache Ignite: how to insert into table with IDENTITY key (SQL Server)

I have a table in SQL Server where the primary key is autogenerated (identity column), i.e.
CREATE TABLE TableName
(
table_id INT NOT NULL IDENTITY (1,1),
some_field VARCHAR(20),
PRIMARY KEY (table_id)
);
Since table_id is an autogenerated column, when I implemented the SqlFieldsQuery INSERT clause I did not set any argument for table_id:
sql = new SqlFieldsQuery("INSERT INTO TableName (some_field) VALUES (?)");
cache.query(sql.setArgs("str"));
However at runtime I get the following error:
Exception in thread "main" javax.cache.CacheException: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO TableName (some_field) VALUES (?), params=["str"]]
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:807)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:765)
...
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO TableName (some_field) VALUES (?), params=["str"]]
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1324)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1815)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1813)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2293)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1820)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:795)
... 5 more
Caused by: class org.apache.ignite.IgniteCheckedException: Key is missing from query
at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlanBuilder.createSupplier(UpdatePlanBuilder.java:331)
at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlanBuilder.planForInsert(UpdatePlanBuilder.java:196)
at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlanBuilder.planForStatement(UpdatePlanBuilder.java:82)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.getPlanForStatement(DmlStatementsProcessor.java:438)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:164)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsDistributed(DmlStatementsProcessor.java:222)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1321)
... 11 more
This is how I planned to implement the insertion, because getting the max table_id from the cache, incrementing it and inserting seemed more tedious. I thought I could omit table_id from the insert and let SQL Server generate the primary key, but it doesn't seem to work like this.
Can you please tell me how this should typically be implemented in Ignite? I checked the ignite-examples, but unfortunately they are too simple (i.e. fixed keys only, like 1 or 2).
Moreover, how does Ignite support the use of sequences?
I am using ignite-core 2.2.0. Any help is appreciated! Thank you.
It's true that, as of now, auto-increment fields are not supported.
As an option, you could generate IDs manually, for example via Ignite's ID generator.
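A minimal sketch of that approach, using IgniteAtomicSequence as the ID generator and supplying the key explicitly in the INSERT (the sequence and method names here are illustrative):
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ManualIdInsert {
    static void insert(Ignite ignite, IgniteCache<?, ?> cache, String someField) {
        // Cluster-wide sequence; created on first use with an initial value of 1.
        IgniteAtomicSequence seq = ignite.atomicSequence("table_id_seq", 1, true);
        long id = seq.getAndIncrement();
        // Supply the key explicitly so Ignite can store the entry in the cache
        // before the update is propagated to the CacheStore.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "INSERT INTO TableName (table_id, some_field) VALUES (?, ?)");
        cache.query(qry.setArgs(id, someField)).getAll();
    }
}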
Ignite doesn't support identity columns [1] yet.
It may be non-obvious, but Ignite's SQL layer is built on top of a key-value store, which can be backed by a CacheStore. Your SQL query will never go through to the CacheStore as is.
Ignite internals will execute your query and save the data in the cache; only then is the update propagated to the CacheStore, which creates a new SQL query for your SQL Server.
So Ignite needs the identity column value (which is actually a key) to be known before the data is saved in the cache.
[1] https://issues.apache.org/jira/browse/IGNITE-5625

What do the SchemaAutoAction values mean?

I'm getting back into NHibernate and I've noticed a new configuration property being used in examples: SchemaAutoAction. I can't seem to find documentation on what the various settings mean. The settings, and my guesses as to what they mean, are:
Recreate -- Drop and recreate the schema every time
Create -- If the schema does not exist create it
Update -- issue alter statements to make the existing schema match the model
Validate -- Blow up if the schema differs from the model
Is this correct?
SchemaAutoAction is the same as the schema-action mapping attribute.
As per docs:
The new 'schema-action' is set to none, this will prevent NHibernate
from including this mapping in its schema export, it would otherwise
attempt to create a table for this view
Similar, but not quite. The SchemaAutoAction is analogous to the configuration property hbm2ddl.auto, and its values are:
Create: always create the database when a session factory is created;
Validate: when a session factory is created check if the database matches the mappings and throw an exception otherwise;
Update: when a session factory is created issues DDL commands to update the database if it doesn't match the mappings;
Recreate: always creates the database and drops it when the session factory is disposed.