Hive alter table add column fails due to large number of partitions

I have a table that has more than 300k partitions. When I try to add a new column like below, it runs for many hours and then fails. The metastore RDS is on MySQL and the partitions table has more than 5 million rows. Has anyone encountered this?
alter table tablea add columns(col1 string) cascade;
Error message:
at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:638)
at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3590)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:390)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:474)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:490)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_environment_context(ThriftHiveMetastore.java:1689)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_environment_context(ThriftHiveMetastore.java:1673)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table_with_environmentContext(HiveMetaStoreClient.java:375)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table_with_environmentContext(SessionHiveMetaStoreClient.java:322)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
at com.sun.proxy.$Proxy34.alter_table_with_environmentContext(Unknown Source)

I ended up writing a for loop iterating over each partition and executing
alter table tablea add columns(col1 string)
This seems to be the safest way to do it.
Considering the number of partitions, attempting to execute CASCADE at the table level results in unpredictable behavior, not to mention the time it takes to complete.
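For illustration, a sketch of that loop as plain HiveQL, assuming a single partition column dt (hypothetical; substitute your real partition spec):
-- enumerate the partitions first
SHOW PARTITIONS tablea;
-- then issue one partition-level alter per partition returned
ALTER TABLE tablea PARTITION (dt='2019-01-01') ADD COLUMNS (col1 string);
ALTER TABLE tablea PARTITION (dt='2019-01-02') ADD COLUMNS (col1 string);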

Related

Caused by: org.hsqldb.HsqlException: integrity constraint violation: unique constraint or index violation; UK_ENNVENCLPTVNEV6AF table: Table Name

Test cases are failing after updating the Spring Boot version and HSQLDB; they were working perfectly before.
I tried different HSQLDB versions, but with no positive result.

Can't update from a generated diff changelog

First of all, I'm using an npm library called liquibase to run Liquibase within Node.
So, I'm trying to use diffChangeLog to compare two databases and generate a changelog of their differences. The changelog is generated normally, but when I try to run an update using it, Liquibase responds with a validation error. Here's the output:
node_modules/liquibase/lib/liquibase-4.0.0/liquibase --changeLogFile=examples/common/migration.xml --url="jdbc:mysql://127.0.0.1:3306/mySchema?useJvmCharsetConverters=true" --username=migration --password=migration --classpath=lib/migrator/drivers/mysql-connector-java-8.0.22.jar --changeLogFile=examples/DatabaseDiff/diffChangeLog.xml update
Liquibase Community 4.0.0-beta1 by Datical
Starting Liquibase at 17:54:32 (version 4.0.0-beta1 #6 built at 2020-04-20 18:23+0000)
Unexpected error running Liquibase: Validation Failed:
16 changes have validation failures
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-1::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-2::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-3::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-4::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-5::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-6::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-7::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-8::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-9::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-10::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-11::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-12::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-13::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-14::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-15::joao (generated)
columns is empty, examples/DatabaseDiff/diffChangeLog.xml::1618001652047-16::joao (generated)
For more information, please use the --logLevel flag
And here is part of the changelog that is mentioned in the output:
<changeSet author="joao (generated)" id="1618001652047-2">
<createTable tableName="FeaturedProducer"/>
</changeSet>
<changeSet author="joao (generated)" id="1618001652047-3">
<createTable tableName="PreVerification"/>
</changeSet>
<changeSet author="joao (generated)" id="1618001652047-4">
<createTable tableName="Producer"/>
</changeSet>
I also tried to use the SQL format for the generated changelog (named changeLog.mysql.sql), but it produces invalid SQL, for example:
CREATE TABLE USER ();
This raises a syntax error from MySQL. Which is fair: I've tried running this query manually and the error is the same.
My guess is that Liquibase is not letting me create tables with no columns, but why?
I don't understand what's going on and there are not many results on Google.
Can anyone give me a hand?!
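For what it's worth, MySQL does require at least one column in a CREATE TABLE statement, so the generated empty statement can never run; a minimal valid counterpart (with a hypothetical column) would be:
CREATE TABLE `USER` (id BIGINT);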
Well, after a while we found the problem.
To debug the issue I spun up two local databases with Docker and tried the same operation; it worked just fine.
So it occurred to us that the reason we were getting this error could be user permissions. I've seen some people mention that Liquibase can emit unusual errors when the user has insufficient permissions.
So I gave my user all permissions for that schema and it worked! Actually, we just moved on to another error, but it was definitely progress!
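For reference, a minimal sketch of such a grant in MySQL, reusing the migration user and mySchema database from the command above (the '%' host pattern is an assumption):
-- give the migration user full rights on the schema being diffed
GRANT ALL PRIVILEGES ON mySchema.* TO 'migration'@'%';
FLUSH PRIVILEGES;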
The error was just like this:
liquibase.exception.LiquibaseException: liquibase.command.CommandExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0
at liquibase.integration.commandline.CommandLineUtils.doDiffToChangeLog(CommandLineUtils.java:250)
at liquibase.integration.commandline.Main.doMigration(Main.java:1285)
at liquibase.integration.commandline.Main$1.lambda$run$0(Main.java:314)
at liquibase.Scope.lambda$child$0(Scope.java:149)
at liquibase.Scope.child(Scope.java:160)
at liquibase.Scope.child(Scope.java:148)
at liquibase.Scope.child(Scope.java:127)
at liquibase.Scope.child(Scope.java:173)
at liquibase.Scope.child(Scope.java:177)
at liquibase.integration.commandline.Main$1.run(Main.java:313)
at liquibase.integration.commandline.Main$1.run(Main.java:169)
at liquibase.Scope.child(Scope.java:160)
at liquibase.Scope.child(Scope.java:134)
at liquibase.integration.commandline.Main.run(Main.java:169)
at liquibase.integration.commandline.Main.main(Main.java:148)
Caused by: liquibase.command.CommandExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0
at liquibase.command.AbstractCommand.execute(AbstractCommand.java:24)
at liquibase.integration.commandline.CommandLineUtils.doDiffToChangeLog(CommandLineUtils.java:248)
... 14 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0
at liquibase.diff.output.changelog.DiffToChangeLog.print(DiffToChangeLog.java:193)
at liquibase.diff.output.changelog.DiffToChangeLog.print(DiffToChangeLog.java:86)
at liquibase.command.core.DiffToChangeLogCommand.run(DiffToChangeLogCommand.java:63)
at liquibase.command.AbstractCommand.execute(AbstractCommand.java:19)
... 15 more
Caused by: java.lang.RuntimeException: java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0
at liquibase.diff.output.changelog.DiffToChangeLog$1.run(DiffToChangeLog.java:176)
at liquibase.Scope.lambda$child$0(Scope.java:149)
at liquibase.Scope.child(Scope.java:160)
at liquibase.Scope.child(Scope.java:148)
at liquibase.Scope.child(Scope.java:127)
at liquibase.diff.output.changelog.DiffToChangeLog.print(DiffToChangeLog.java:119)
... 18 more
Caused by: java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0
at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:459)
at liquibase.change.ColumnConfig.<init>(ColumnConfig.java:154)
at liquibase.change.AddColumnConfig.<init>(AddColumnConfig.java:16)
at liquibase.diff.output.changelog.core.MissingIndexChangeGenerator.fixMissing(MissingIndexChangeGenerator.java:70)
at liquibase.diff.output.changelog.ChangeGeneratorChain.fixMissing(ChangeGeneratorChain.java:48)
at liquibase.diff.output.changelog.ChangeGeneratorFactory.fixMissing(ChangeGeneratorFactory.java:95)
at liquibase.diff.output.changelog.DiffToChangeLog.generateChangeSets(DiffToChangeLog.java:279)
at liquibase.diff.output.changelog.DiffToChangeLog.printNew(DiffToChangeLog.java:203)
at liquibase.diff.output.changelog.DiffToChangeLog$1.run(DiffToChangeLog.java:125)
... 23 more
Really bad logging btw...
This MissingIndexChangeGenerator.fixMissing(MissingIndexChangeGenerator.java:70) part caught my attention. Surfing the web I found barely anything, so we thought: let's check the source code for that line. On their GitHub, we found the exact line of the exception and everything became clearer.
That error meant that Liquibase failed to retrieve some table indexes.
Taking a look at our database we were reminded that some indexes were actually foreign keys to tables on another schema. And the user we were using didn't have permission on that other schema. We gave the user all permissions for that second schema too, and BAM, it finally worked!
The lesson here is: Liquibase has really bad logging... Oh, and also, never give up!

ORC files with Hive: java.io.IOException: Two readers

I have an ACID Hive table, with files in ORC format. When attempting a compaction, I end up with the following error: Task: ... exited : java.io.IOException: Two readers for ... The full error is as follows:
2019-06-03 07:01:05,357 ERROR [IPC Server handler 2 on 41085] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1558939181485_29861_m_000001_0 - exited : java.io.IOException: Two readers for {originalWriteId: 143, bucket: 536870912(1.0.0), row: 3386, currentWriteId 210}: new [key={originalWriteId: 143, bucket: 536870912(1.0.0), row: 3386, currentWriteId 210}, nextRecord={2, 143, 536870912, 3386, 210, null}, reader=Hive ORC Reader(hdfs://HdfsNameService/tbl/delete_delta_0000209_0000214/bucket_00001, 9223372036854775807)], old [key={originalWriteId: 143, bucket: 536870912(1.0.0), row: 3386, currentWriteId 210}, nextRecord={2, 143, 536870912, 3386, 210, null}, reader=Hive ORC Reader(hdfs://HdfsNameService/tbl/delete_delta_0000209_0000214/bucket_00000, 9223372036854775807)]
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.ensurePutReader(OrcRawRecordMerger.java:1171)
at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:1126)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:2402)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:964)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:941)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
This table is created and updated by MERGEing Avro files into an ORC table, hence the bunch of deltas, both delete_delta and delta.
I have many other such tables which do not have this issue. This table has nothing out of the ordinary, is actually quite small (<100k rows, 2.5M on disk), and was updated about 100 times in the last month (20k rows updated, 5M of update data). The DDL is:
CREATE TABLE `contact_group`(
  `id` bigint,
  `license_name` string,
  `campaign_id` bigint,
  `name` string,
  `is_system` boolean,
  `is_test` boolean,
  `is_active` boolean,
  `remarks` string,
  `updated_on_utc` timestamp,
  `created_on_utc` timestamp,
  `deleted_on_utc` timestamp,
  `sys_schema_version` int,
  `sys_server_ipv4` bigint,
  `sys_server_name` string,
  `load_ts` timestamp)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://HdfsNameService/dwh/vault/contact_group'
TBLPROPERTIES (
  'bucketing_version'='2',
  'last_modified_by'='hive',
  'last_modified_time'='1553512639',
  'transactional'='true',
  'transactional_properties'='default',
  'transient_lastDdlTime'='1559522011')
This happens every few months. As everything else (select, merge) works, the fix is usually to create a second table (create table t as select * from contact_group) and switch the tables, but I would like to find the real underlying reason.
The only reference I found about my error is in the code itself, which does not help me much.
This is on HDP 3.1, with Hive 3.
In my case, I could not resolve the problem using the solution suggested by #ShuBham ShaRma.
After looking at #cang_yun's finding, I tried deleting one of the bucket files (bucket_00001) and was able to run select statements on that table again. I'm not sure this is the right way to do it, but it worked in my case.
I have faced the problem too. Using orc-tools I scanned all files under delete_delta and found that the rows are the same across these files (for example, there are 7 rows in bucket_00000, and the same 7 rows in the other file bucket_00001). So the key (originalTransaction-bucket-rowId-currentWriteId) will be the same when iterating over the next bucket file.
Another fix is to create the table with explicit bucketing, which may avoid the problem.
In my case it was caused by user error: two tables referring to the same HDFS directory. When creating the table, I set the location name and accidentally copied the same directory to another table.
My program then performed changes on both transactional tables, resulting in delta files that could not be resolved.
Here is the summary of the observed issue for one of our users:
The table fails during the fetch task operation from disk; it got corrupted with duplicate key identifiers in the delete_delta files (https://issues.apache.org/jira/browse/HIVE-22318). There is a temporary workaround to read the table by setting hive.fetch.task.conversion=none, but this would not help compaction or any fetch task operations to succeed.
Steps performed to create a backup of the table:
1. Connect with beeline and set the property below in the session:
set hive.fetch.task.conversion=none;
Now you'll be able to run select statements over the mentioned table.
2. Run the statement below to create a backup of the table:
create table <backup_tbl_name> as select * from <problem_tbl>;
3. Once you have the backup ready, log out of the session and check the backup table without setting any property (check the count and table consistency from a data-quality perspective):
select * from <backup_tbl_name>;
To restore the original from the backup table:
4. Drop the problem table and replace it with the backup table:
drop table <problem_tbl>;
alter table <backup_tbl_name> rename to <original_tbl_name>;
Note: To avoid this issue in the future, create the table with a bucketing column in the DDL.
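As a sketch of that recommendation (the table name is hypothetical, and bucketing on id with 4 buckets is an assumption, not from the original thread):
-- rebuild of the DDL above with an explicit bucketing clause
CREATE TABLE `contact_group_bucketed`(
  `id` bigint,
  `license_name` string,
  `campaign_id` bigint,
  `name` string,
  `is_system` boolean,
  `is_test` boolean,
  `is_active` boolean,
  `remarks` string,
  `updated_on_utc` timestamp,
  `created_on_utc` timestamp,
  `deleted_on_utc` timestamp,
  `sys_schema_version` int,
  `sys_server_ipv4` bigint,
  `sys_server_name` string,
  `load_ts` timestamp)
CLUSTERED BY (`id`) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');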

Liquibase failing while attempting to drop non-existent TMP_% tables during generateChangeLog

Running
liquibase generateChangeLog > genChgLog.txt
with the following as my liquibase.properties,
classpath=C:\Program Files (x86)\MySQL\MySQL Connector J\mysql-connector-java-8.0.16.jar
driver=com.mysql.cj.jdbc.Driver
url=jdbc:mysql://{thisisnottheproblem,Iguarantee}
username={it'sright}
password={it'sright}
referenceUrl=jdbc:mysql://{thisisnottheproblem,Iguarantee}
referenceUsername={it'sright}
referencePassword={it'sright}
changeLogFile=databaseChangeLogSchema.mysql.sql
diffTypes=tables,columns,views,primaryKeys,indexes,foreignKeys,sequences,data
logLevel=debug
I am consistently getting the likes of this, output to the genChgLog.txt file:
Starting Liquibase at Wed, 15 May 2019 15:37:32 CDT (version 3.6.3 built at 2019-01-29 11:34:48)
Unexpected error running Liquibase: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Unknown table 'TMP_CTAWHBNCQVQMHSUU' [Failed SQL: DROP TABLE TMP_CTAWHBNCQVQMHSUU]
liquibase.exception.LiquibaseException: liquibase.command.CommandExecutionException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Unknown table 'TMP_CTAWHBNCQVQMHSUU' [Failed SQL: DROP TABLE TMP_CTAWHBNCQVQMHSUU]
at liquibase.integration.commandline.CommandLineUtils.doGenerateChangeLog(CommandLineUtils.java:279)
at liquibase.integration.commandline.Main.doMigration(Main.java:1058)
at liquibase.integration.commandline.Main.run(Main.java:199)
at liquibase.integration.commandline.Main.main(Main.java:137)
Caused by: liquibase.command.CommandExecutionException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Unknown table 'TMP_CTAWHBNCQVQMHSUU' [Failed SQL: DROP TABLE TMP_CTAWHBNCQVQMHSUU]
at liquibase.command.AbstractCommand.execute(AbstractCommand.java:24)
at liquibase.integration.commandline.CommandLineUtils.doGenerateChangeLog(CommandLineUtils.java:277)
... 3 common frames omitted
Caused by: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Unknown table 'TMP_CTAWHBNCQVQMHSUU' [Failed SQL: DROP TABLE TMP_CTAWHBNCQVQMHSUU]
at liquibase.snapshot.jvm.ForeignKeySnapshotGenerator.snapshotObject(ForeignKeySnapshotGenerator.java:223)
at liquibase.snapshot.jvm.JdbcSnapshotGenerator.snapshot(JdbcSnapshotGenerator.java:66)
. . .
at liquibase.command.core.GenerateChangeLogCommand.run(GenerateChangeLogCommand.java:46)
at liquibase.command.AbstractCommand.execute(AbstractCommand.java:19)
... 4 common frames omitted
Caused by: liquibase.exception.DatabaseException: Unknown table 'TMP_CTAWHBNCQVQMHSUU' [Failed SQL: DROP TABLE TMP_CTAWHBNCQVQMHSUU]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:356)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:57)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:125)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:109)
at liquibase.database.core.MySQLDatabase.hasBugJdbcConstraintsDeferrable(MySQLDatabase.java:294)
at liquibase.snapshot.jvm.ForeignKeySnapshotGenerator.snapshotObject(ForeignKeySnapshotGenerator.java:188)
... 25 common frames omitted
Caused by: java.sql.SQLSyntaxErrorException: Unknown table 'TMP_CTAWHBNCQVQMHSUU'
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:782)
at com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:666)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:352)
... 30 common frames omitted
For more information, please use the --logLevel flag
Each time I run it, the name of the phantom TMP_ table changes (although it always begins with TMP_ followed by a string of apparently random characters). I don't know where it's getting these non-existent TMP_ tables that it wants to drop, but... is there some way to make it only attempt to drop them if they exist? Of potential note: it works fine if my only diffType is "tables", "data", or if I have both "tables" and "data" as diffTypes... otherwise, failure.
I don't have all the details, but it appears this can happen when using a database user with insufficient permissions. I talked with someone who said that using a root-level database user fixed this issue for them. Judging from the hasBugJdbcConstraintsDeferrable frame in the trace, Liquibase appears to create and then drop a randomly named TMP_ table to probe for a MySQL JDBC constraints bug; if the CREATE fails (for example, for lack of CREATE permission), the cleanup DROP then hits a non-existent table.
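A hedged first check under that theory (user, host, and schema names below are placeholders for your own):
-- see what the Liquibase user is actually allowed to do
SHOW GRANTS FOR 'liquibase_user'@'%';
-- and, as a root-level user, widen its rights on the target schema
GRANT ALL PRIVILEGES ON target_schema.* TO 'liquibase_user'@'%';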

Ignite database: Create schema: AssertionError

I want to create a schema using Ignite as an in-memory database. So I do the following:
try (Statement statement = connection.createStatement()) {
    statement.executeQuery("CREATE SCHEMA my_schema");
}
But I'm getting the error:
Exception in thread "sql-connector-#38%null%" java.lang.AssertionError
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1341)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1856)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1852)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2293)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1860)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:188)
at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:122)
at org.apache.ignite.internal.processors.odbc.SqlListenerNioListener.onMessage(SqlListenerNioListener.java:152)
at org.apache.ignite.internal.processors.odbc.SqlListenerNioListener.onMessage(SqlListenerNioListener.java:44)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
And I have no idea what that means. I need to create the schema since I'm writing unit tests for my SQL statements and the original table also has a schema:
my_schema.my_table
And I can't replace the table name just for unit test purposes.
EDIT
I have to mention that Ignite calls this a schema. In my opinion it is just a database name, but CREATE DATABASE my_database also does not work.
The CREATE SCHEMA command is not supported in Ignite as of yet.
If you create a few caches, each of them will be assigned its own schema with a matching name. There is also a schema named PUBLIC, where all caches created with the CREATE TABLE command live.
The subset of DDL commands currently available in Ignite is described here:
https://apacheignite-sql.readme.io/docs/ddl
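For example (a minimal sketch with hypothetical table and column names; Ignite's CREATE TABLE requires a PRIMARY KEY), a table created over JDBC lands in the PUBLIC schema and can be addressed with that prefix:
CREATE TABLE my_table (id INT PRIMARY KEY, name VARCHAR);
SELECT * FROM PUBLIC.my_table;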