H2 schema disappears after connection is closed

After setting up a schema in an H2 in-memory database for unit testing, the unit tests relying on that schema could not find it.
import java.sql.DriverManager
Class.forName("org.h2.Driver")
val setupConn = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;MODE=PostgreSQL", "sa", "")
val setupStmt = setupConn.createStatement
// setup schema at the beginning of our test
setupStmt.execute("CREATE SCHEMA IF NOT EXISTS my_test_schema AUTHORIZATION sa;")
setupStmt.execute("GRANT ALL ON SCHEMA my_test_schema TO sa;")
setupStmt.execute("CREATE TABLE IF NOT EXISTS my_test_schema.my_test_table (test_id VARCHAR(255), test_column VARCHAR(255));")
setupStmt.executeQuery("select * from my_test_schema.my_test_table")
// res4: java.sql.ResultSet = rs3: org.h2.result.LocalResultImpl#3eb10d62 columns: 2 rows: 0 pos: -1
// this seems to work correctly ^^^
setupStmt.close
setupConn.close
// now run our test using the schema we just set up
val conn = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;SCHEMA=my_test_schema;MODE=PostgreSQL", "sa", "")
val stmt = conn.createStatement
stmt.executeQuery("select * from my_test_table where test_id = '1'")
// org.h2.jdbc.JdbcSQLSyntaxErrorException: Schema "MY_TEST_SCHEMA" not found; SQL statement:
// SET SCHEMA my_test_schema [90079-200]
// ^^^^ something has gone horribly wrong

You can simply add ;DB_CLOSE_DELAY=-1 to the JDBC URL; then there is no need to keep an active connection open.
https://h2database.com/html/commands.html#set_db_close_delay
If you use a recent version of H2, you may also want to add ;DATABASE_TO_LOWER=TRUE for better compatibility with PostgreSQL; the PostgreSQL compatibility mode by itself doesn't imply this setting.
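A minimal sketch of that approach, assuming the same database name as above (DB_CLOSE_DELAY=-1 keeps the in-memory database alive even after its last connection is closed):
import java.sql.DriverManager
Class.forName("org.h2.Driver")
val url = "jdbc:h2:mem:test_data_metrics;MODE=PostgreSQL;DB_CLOSE_DELAY=-1"
val setupConn = DriverManager.getConnection(url, "sa", "")
setupConn.createStatement.execute("CREATE SCHEMA IF NOT EXISTS my_test_schema AUTHORIZATION sa")
setupConn.close() // safe now: the database sticks around until the JVM exits (or an explicit SHUTDOWN)
// a later connection still finds the schema
val conn = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;SCHEMA=my_test_schema;MODE=PostgreSQL", "sa", "")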

This is embarrassing, but I didn't realize that when I closed the connection to my in-memory database, the database would dry up and blow away. This seems obvious in retrospect. The solution is to keep the first connection to the database open throughout testing.
import java.sql.DriverManager
Class.forName("org.h2.Driver")
val setupConn = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;MODE=PostgreSQL", "sa", "")
val setupStmt = setupConn.createStatement
// setup schema at the beginning of our test
setupStmt.execute("CREATE SCHEMA IF NOT EXISTS my_test_schema AUTHORIZATION sa;")
setupStmt.execute("GRANT ALL ON SCHEMA my_test_schema TO sa;")
setupStmt.execute("CREATE TABLE IF NOT EXISTS my_test_schema.my_test_table (test_id VARCHAR(255), test_column VARCHAR(255));")
setupStmt.executeQuery("select * from my_test_schema.my_test_table")
// res4: java.sql.ResultSet = rs3: org.h2.result.LocalResultImpl#3eb10d62 columns: 2 rows: 0 pos: -1
// DON'T CLOSE THE CONNECTION YET!
//setupStmt.close
//setupConn.close
val conn = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;SCHEMA=my_test_schema;MODE=PostgreSQL", "sa", "")
val stmt = conn.createStatement
stmt.executeQuery("select * from my_test_table where test_id = '1'")
// res5: java.sql.ResultSet = rs4: org.h2.result.LocalResultImpl#293e66e4 columns: 2 rows: 0 pos: -1
// ^^^^ huzzah!
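If the tests run under a test framework, one way to keep that first connection pinned for the whole run is a suite-level fixture. A rough sketch, assuming ScalaTest (the suite name and the sample test are made up):
import java.sql.{Connection, DriverManager}
import org.scalatest.BeforeAndAfterAll
import org.scalatest.funsuite.AnyFunSuite

class SchemaSuite extends AnyFunSuite with BeforeAndAfterAll {
  // one connection pinned for the whole suite keeps the in-memory DB alive
  private var keepAlive: Connection = _

  override def beforeAll(): Unit = {
    Class.forName("org.h2.Driver")
    keepAlive = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;MODE=PostgreSQL", "sa", "")
    val st = keepAlive.createStatement
    st.execute("CREATE SCHEMA IF NOT EXISTS my_test_schema AUTHORIZATION sa")
    st.execute("CREATE TABLE IF NOT EXISTS my_test_schema.my_test_table (test_id VARCHAR(255), test_column VARCHAR(255))")
    st.close()
  }

  override def afterAll(): Unit = keepAlive.close() // the in-memory database is discarded here

  test("schema is visible from a fresh connection") {
    val conn = DriverManager.getConnection("jdbc:h2:mem:test_data_metrics;SCHEMA=my_test_schema;MODE=PostgreSQL", "sa", "")
    try assert(!conn.createStatement.executeQuery("select * from my_test_table").next())
    finally conn.close()
  }
}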

Related

JPA: using @ElementCollection with @OrderColumn throws 'duplicate key value violates unique constraint'

First of all, I'm new to Spring and JPA, so sorry for the rudimentary question.
I'm building a server that stores location points, using Spring Boot + JPA + Docker + PostgreSQL / Kotlin.
The idea is that the server receives client calls and stores locations periodically,
so I'm using @ElementCollection together with @Embeddable to store the location items.
But I get an exception from my Spring test code:
Hibernate:
insert
into
pos_info_pos_list
(pos_info_id, pos_list_order, accuracy, event_time, geo_lati, geo_long)
values
(?, ?, ?, ?, ?, ?)
2022-11-12 22:07:34.963 WARN 25880 --- [ main] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 23505
2022-11-12 22:07:34.963 ERROR 25880 --- [ main] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: duplicate key value violates unique constraint "pos_info_pos_list_pkey"
Detail: Key (pos_info_id, pos_list_order)=(1, 0) already exists.
I'll explain the table structure below:
PosInfo (one), PosData (many)
one-to-many relation
I want to use an order column for performance, and to cap the posList size (MAX_POS_DATA_SIZE = 200).
@Entity
data class PosInfo(
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    var id: Long? = null
) {
    @ElementCollection(fetch = FetchType.EAGER, targetClass = PosData::class)
    @OrderColumn
    val posList: MutableList<PosData> = mutableListOf()

    fun addPosData(posData: PosDataDto) {
        while (posList.size >= MAX_POS_DATA_SIZE) {
            posList.removeFirst()
        }
        val newData = PosData(posData.geoLati, posData.geoLong, posData.eventTime, posData.accuracy)
        posList.add(newData)
    }
}
PosData table
@Embeddable
data class PosData(
    @Column
    val geoLati: String,
    @Column
    val geoLong: String,
    @Column
    val eventTime: Long,
    @Column
    val accuracy: Int,
)
The Spring test code is below. First of all, it inserts the maximum number of posData entries, then adds one more:
@Test
fun addPathMax() {
    val dummyPosData = PosDataDto("", "", System.currentTimeMillis(), 0)
    val dummyPosData2 = PosDataDto("yyyy", "eeeee", System.currentTimeMillis(), 0)
    val id = "KSH"
    service.tryAddUser(id, "")
    val userInfo = service.getUserInfo(id)
    assertThat(userInfo).isNotNull
    val posIndex = userInfo!!.posIndex
    val posInfo = service.getPosInfo(posIndex)
    assertThat(posInfo).isNotNull
    for (i in 0 until MAX_POS_DATA_SIZE) {
        posInfo!!.addPosData(dummyPosData)
    }
    service.updatePosInfo(posInfo!!)
    println("Next Input Check KSH_TEST")
    val posInfo2 = service.getPosInfo(posIndex)
    posInfo2!!.addPosData(dummyPosData2)
    service.updatePosInfo(posInfo2!!)
}
service.updatePosInfo is annotated with @Transactional; it just calls the CrudRepository save method.
But I get the duplicate key error again and again.
Q1. Shouldn't pos_list_order be 'last existing + 1', since the first element of the previous data was removed and the new data was appended? Why is it '0'?
// Key (pos_info_id, pos_list_order)=(1, 0) already exists.
Q2. Is this structure a poor fit for periodically updating and storing location data? (Using @ElementCollection; should I use @OneToMany instead?)
To be honest, I tried @OneToMany before, but I gave up because I was tired of fixing strange build errors and came back to @ElementCollection, which I thought would be simpler.
Thank you in advance for all the helpful comments.
===========================
What I have already tried:
@OneToMany with mappedBy, but it produced many errors, and when I tried to insert one more value it deleted all rows and re-inserted everything plus the new one.
@ElementCollection looks simple, but it raised the duplicate key exception again and again.
I also tried specifying the collection table explicitly:
@CollectionTable(
    name = "pos_data",
    joinColumns = [JoinColumn(name = "pos_info_id")]
)
JpaRepository.save followed by flush doesn't work either;
same result, and I don't know why. Really sad.
I got a solution.
This problem was caused by my poor understanding of @Transactional.
It's fixed with the annotation below:
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Rollback(false)
@Test
fun addPathMax() {
    val dummyPosData = PosDataDto("", "", System.currentTimeMillis(), 0)
    val dummyPosData2 = PosDataDto("yyyy", "eeeee", System.currentTimeMillis(), 0)
    val id = "KSH"
    service.tryAddUser(id, "")
    val userInfo = service.getUserInfo(id)
    assertThat(userInfo).isNotNull
    val posIndex = userInfo!!.posIndex
    val posInfo = service.getPosInfo(posIndex)
    assertThat(posInfo).isNotNull
    for (i in 0 until Constants.MAX_POS_DATA_SIZE) {
        posInfo!!.addPosData(dummyPosData)
    }
    service.updatePosInfo(posInfo!!)
    println("Next Input Check KSH_TEST")
    val posInfo2 = service.getPosInfo(posIndex)
    posInfo2!!.addPosData(dummyPosData2)
    service.updatePosInfo(posInfo2!!)
}
I thought that because the service was already annotated with @Transactional, the persistence context would be flushed to the database between calls, but it was not.

How can I automatically infer schemas of CSV files on S3 as I load them?

Context
Currently I am using Snowflake as a data warehouse and AWS S3 as a data lake. The majority of the files that land on S3 are in the Parquet format. For these, I am using a new, limited feature of Snowflake (documented here) that automatically detects the schema from the Parquet files on S3, which I can use to generate a CREATE TABLE statement with the correct column names and inferred data types. This feature currently only works for Apache Parquet, Avro, and ORC files. I would like to find a way that achieves the same objective for CSV files.
What I have tried to do
This is how I currently infer the schema for Parquet files:
select generate_column_description(array_agg(object_construct(*)), 'table') as columns
from table (infer_schema(location=>'${LOCATION}', file_format=>'${FILE_FORMAT}'))
However, if I try specifying the FILE_FORMAT as CSV, that approach fails.
Other approaches I have considered:
Converting all files that land on S3 to Parquet (this involves more code and infra setup, so it wouldn't be my top choice, especially since I'd like to keep some files in their native format on S3).
Having a script (using libraries like Pandas in Python, for example) that infers the schema for files in S3 (this also involves more code, and would be odd in that Parquet files are handled in Snowflake while non-Parquet files are handled by a script on AWS).
Using a Snowflake UDF to infer the schema. I haven't fully considered my options there yet.
Desired Behaviour
As a new CSV file lands on S3 (in a pre-existing STAGE), I would like to infer the schema and be able to generate a CREATE TABLE statement with the inferred data types. Preferably, I would like to do that within Snowflake, since the aforementioned schema-inference feature already lives there. Happy to add further information if needed.
UPDATE: I modified the SP that infers data types in untyped (all string type columns) tables and it now works directly against Snowflake stages. The project code is available here: https://github.com/GregPavlik/InferSchema
I wrote a stored procedure to assist with this; however, its only goal is to infer the data types of untyped columns. It works as follows:
Load the CSV into a table with all columns defined as varchars.
Call the SP with a query against the new table (main point is to get only the columns you want and limit the row count to keep type inference times reasonable).
Also in the SP call is the DB, schema, and table for the old and new locations -- old with all varchar and new with the inferred types.
The SP will then infer the data types and create two SQL statements. One statement will create the new table with the inferred data types. One statement will copy from the untyped (all varchar) table to the new table with appropriate wrappers such as try_multi_timestamp(), a UDF that extends try_to_timestamp() to try various common formats.
I meant to extend this so that it didn't require the untyped (all varchar) table at all, but haven't gotten around to it. Since it's come up here, I may circle back and update the SP with that capability. You can specify a query that reads directly from the stage, but you'd have to use $1, $2... with aliases for the column names (or else the DDL will try to create column names like $1). If the query runs directly against a stage, for the old DB, schema, and table, you could put in whatever because that's only used to generate an insert from select statement.
-- This shows how to use on the Snowflake TPCH sample, but could be any query.
-- Keep the row count down to reduce the time it takes to infer the types.
call infer_data_types('select * from SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.LINEITEM limit 10000',
'SNOWFLAKE_SAMPLE_DATA', 'TPCH_SF1', 'LINEITEM',
'TEST', 'PUBLIC', 'LINEITEM');
create or replace procedure INFER_DATA_TYPES(SOURCE_QUERY string,
DATABASE_OLD string,
SCHEMA_OLD string,
TABLE_OLD string,
DATABASE_NEW string,
SCHEMA_NEW string,
TABLE_NEW string)
returns string
language javascript
as
$$
/****************************************************************************************************
* *
* DataType Classes
* *
****************************************************************************************************/
class Query{
constructor(statement){
this.statement = statement;
}
}
class DataType {
constructor(db, schema, table, column, sourceQuery) {
this.db = db;
this.schema = schema;
this.table = table;
this.sourceQuery = sourceQuery
this.column = column;
this.insert = '"#~COLUMN~#"';
this.totalCount = 0;
this.notNullCount = 0;
this.typeCount = 0;
this.blankCount = 0;
this.minTypeOf = 0.95;
this.minNotNull = 1.00;
}
setSQL(sqlTemplate){
this.sql = sqlTemplate;
this.sql = this.sql.replace(/#~DB~#/g, this.db);
this.sql = this.sql.replace(/#~SCHEMA~#/g, this.schema);
this.sql = this.sql.replace(/#~TABLE~#/g, this.table);
this.sql = this.sql.replace(/#~COLUMN~#/g, this.column);
}
getCounts(){
var rs;
rs = GetResultSet(this.sql);
rs.next();
this.totalCount = rs.getColumnValue("TOTAL_COUNT");
this.notNullCount = rs.getColumnValue("NON_NULL_COUNT");
this.typeCount = rs.getColumnValue("TO_TYPE_COUNT");
this.blankCount = rs.getColumnValue("BLANK");
}
isCorrectType(){
return (this.typeCount / (this.notNullCount - this.blankCount) >= this.minTypeOf);
}
isNotNull(){
return (this.notNullCount / this.totalCount >= this.minNotNull);
}
}
class TimestampType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "timestamp";
this.insert = 'try_multi_timestamp(trim("#~COLUMN~#"))';
this.sourceQuery = SOURCE_QUERY;
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
class IntegerType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "number(38,0)";
this.insert = 'try_to_number(trim("#~COLUMN~#"), 38, 0)';
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
class DoubleType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "double";
this.insert = 'try_to_double(trim("#~COLUMN~#"))';
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
class BooleanType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "boolean";
this.insert = 'try_to_boolean(trim("#~COLUMN~#"))';
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
// Catch all is STRING data type
class StringType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "string";
this.totalCount = 1;
this.notNullCount = 0;
this.typeCount = 1;
this.minTypeOf = 0;
this.minNotNull = 1;
}
}
/****************************************************************************************************
* *
* Main function *
* *
****************************************************************************************************/
var pass = 0;
var column;
var typeOf;
var ins = '';
var newTableDDL = '';
var insertDML = '';
var columnRS = GetResultSet(GetTableColumnsSQL(DATABASE_OLD, SCHEMA_OLD, TABLE_OLD));
while (columnRS.next()){
pass++;
if(pass > 1){
newTableDDL += ",\n";
insertDML += ",\n";
}
column = columnRS.getColumnValue("COLUMN_NAME");
typeOf = InferDataType(DATABASE_OLD, SCHEMA_OLD, TABLE_OLD, column, SOURCE_QUERY);
newTableDDL += '"' + typeOf.column + '" ' + typeOf.syntax;
ins = typeOf.insert;
insertDML += ins.replace(/#~COLUMN~#/g, typeOf.column);
}
return GetOpeningComments() +
GetDDLPrefixSQL(DATABASE_NEW, SCHEMA_NEW, TABLE_NEW) +
newTableDDL +
GetDDLSuffixSQL() +
GetDividerSQL() +
GetInsertPrefixSQL(DATABASE_NEW, SCHEMA_NEW, TABLE_NEW) +
insertDML +
GetInsertSuffixSQL(DATABASE_OLD, SCHEMA_OLD, TABLE_OLD) ;
/****************************************************************************************************
* *
* Helper functions *
* *
****************************************************************************************************/
function InferDataType(db, schema, table, column, sourceQuery){
var typeOf;
typeOf = new IntegerType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
typeOf = new DoubleType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
typeOf = new BooleanType(db, schema, table, column, sourceQuery); // May want to do a distinct and look for two values
if (typeOf.isCorrectType()) return typeOf;
typeOf = new TimestampType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
typeOf = new StringType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
return null;
}
/****************************************************************************************************
* *
* SQL Template Functions *
* *
****************************************************************************************************/
function GetCheckTypeSQL(insert, sourceQuery){
var sql =
`
select count(1) as TOTAL_COUNT,
count("#~COLUMN~#") as NON_NULL_COUNT,
count(${insert}) as TO_TYPE_COUNT,
sum(iff(trim("#~COLUMN~#")='', 1, 0)) as BLANK
--from "#~DB~#"."#~SCHEMA~#"."#~TABLE~#";
from (${sourceQuery})
`;
return sql;
}
function GetTableColumnsSQL(dbName, schemaName, tableName){
var sql =
`
select COLUMN_NAME
from ${dbName}.INFORMATION_SCHEMA.COLUMNS
where TABLE_CATALOG = '${dbName}' and
TABLE_SCHEMA = '${schemaName}' and
TABLE_NAME = '${tableName}'
order by ORDINAL_POSITION;
`;
return sql;
}
function GetOpeningComments(){
return `
/**************************************************************************************************************
* *
* Copy and paste into a worksheet to create the typed table and insert into the new table from the old one. *
* *
**************************************************************************************************************/
`;
}
function GetDDLPrefixSQL(db, schema, table){
var sql =
`
create or replace table "${db}"."${schema}"."${table}"
(
`;
return sql;
}
function GetDDLSuffixSQL(){
return "\n);";
}
function GetDividerSQL(){
return `
/**************************************************************************************************************
* *
* The SQL statement below this attempts to copy all rows from the string table to the typed table. *
* *
**************************************************************************************************************/
`;
}
function GetInsertPrefixSQL(db, schema, table){
var sql =
`\ninsert into "${db}"."${schema}"."${table}" select\n`;
return sql;
}
function GetInsertSuffixSQL(db, schema, table){
var sql =
`\nfrom "${db}"."${schema}"."${table}" ;`;
return sql;
}
//function GetInsertSuffixSQL(db, schema, table){
//var sql = '\nfrom "${db}"."${schema}"."${table}";';
//return sql;
//}
/****************************************************************************************************
* *
* SQL functions *
* *
****************************************************************************************************/
function GetResultSet(sql){
cmd1 = {sqlText: sql};
stmt = snowflake.createStatement(cmd1);
var rs;
rs = stmt.execute();
return rs;
}
function ExecuteNonQuery(queryString) {
var out = '';
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
rs = stmt.execute();
}
function ExecuteSingleValueQuery(columnName, queryString) {
var out;
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
try{
rs = stmt.execute();
rs.next();
return rs.getColumnValue(columnName);
}
catch(err) {
if (err.message.substring(0, 18) == "ResultSet is empty"){
throw "ERROR: No rows returned in query.";
} else {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
}
return out;
}
function ExecuteFirstValueQuery(queryString) {
var out;
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
try{
rs = stmt.execute();
rs.next();
return rs.getColumnValue(1);
}
catch(err) {
if (err.message.substring(0, 18) == "ResultSet is empty"){
throw "ERROR: No rows returned in query.";
} else {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
}
return out;
}
function getQuery(sql){
var cmd = {sqlText: sql};
var query = new Query(snowflake.createStatement(cmd));
try {
query.resultSet = query.statement.execute();
} catch (err) {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
return query;
}
$$;
Have you tried STAGES?
Create two stages: one with no header and the other with a header; see the examples below.
Then a bit of SQL and voilà, your DDL.
The only issue: you need to know the number of columns in order to put the correct number of t.$'s.
If someone could automate that, we'd have an almost automatic DDL generator for CSVs.
Obviously, once you have the SQL statement, just add the create or replace table to the front and your table is nicely created with all the names from the CSV.
:-)
-- create or replace stage CSV_NO_HEADER
URL = 's3://xxx-x-dev-landing/xxx/'
STORAGE_INTEGRATION = "xxxLAKE_DEV_S3_INTEGRATION"
FILE_FORMAT = ( TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"' )
-- create or replace stage CSV
URL = 's3://xxx-xxxlake-dev-landing/xxx/'
STORAGE_INTEGRATION = "xxxLAKE_DEV_S3_INTEGRATION"
FILE_FORMAT = ( TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' )
select concat('select t.$1 ', t.$1, ',t.$2 ', t.$2,',t.$3 ', t.$3, ',t.$4 ', t.$4,',t.$5 ', t.$5,',t.$6 ', t.$6,',t.$7 ', t.$7,',t.$8 ', t.$8,',t.$9 ', t.$9,
',t.$10 ', t.$10, ',t.$11 ', t.$11,',t.$12 ', t.$12 ,',t.$13 ', t.$13, ',t.$14 ', t.$14 ,',t.$15 ', t.$15 ,',t.$16 ', t.$16 ,',t.$17 ', t.$17 ,' from #xxxx_NO_HEADER/SUB_TRANSACTION_20201204.csv t') from
--- CHANGE TABLE ---
#xxx/SUB_TRANSACTION_20201204.csv t limit 1;

Converting StructType to Avro Schema, returns type as Union when using databricks spark-avro

I am using Databricks spark-avro to convert a DataFrame schema into an Avro schema. The returned Avro schema fails to have default values. This is causing issues when I am trying to create a GenericRecord out of the schema. Can anyone help with the right way of using this function?
Dataset<Row> sellableDs = sparkSession.sql("sql query");
SchemaBuilder.RecordBuilder<Schema> rb = SchemaBuilder.record("testrecord").namespace("test_namespace");
Schema sc = SchemaConverters.convertStructToAvro(sellableDs.schema(), rb, "test_namespace");
System.out.println(sc.toString());
System.out.println(sc.getFields().get(0).toString());
String schemaString = sc.toString();
sellableDs.foreach(
    (ForeachFunction<Row>) row -> {
        Schema scEx = new Schema.Parser().parse(schemaString);
        GenericRecord gr;
        gr = new GenericData.Record(scEx);
        System.out.println("Generic record Created");
        int fieldSize = scEx.getFields().size();
        for (int i = 0; i < fieldSize; i++) {
            // System.out.println(row.get(i).toString());
            System.out.println("field: " + scEx.getFields().get(i).toString() + "::" + "value:" + row.get(i));
            gr.put(scEx.getFields().get(i).toString(), row.get(i));
            //i++;
        }
    }
);
This is the df schema:
StructType(StructField(key,IntegerType,true), StructField(value,DoubleType,true))
This is the avro converted schema:
{"type":"record","name":"testrecord","namespace":"test_namespace","fields":[{"name":"key","type":["int","null"]},{"name":"value","type":["double","null"]}]}
The problem is that the SchemaConverters class does not include default values as part of the schema creation. You have two options: modify the schema to add default values before creating the record, or fill the record with some value before building it (it could actually be the values from your row), for example null. This is an example of how to create a record using your schema:
import org.apache.avro.generic.GenericRecordBuilder
import org.apache.avro.Schema
var schema = new Schema.Parser().parse("{\"type\":\"record\",\"name\":\"testrecord\",\"namespace\":\"test_namespace\",\"fields\":[{\"name\":\"key\",\"type\":[\"int\",\"null\"]},{\"name\":\"value\",\"type\":[\"double\",\"null\"]}]}")
var builder = new GenericRecordBuilder(schema);
for (i <- 0 to schema.getFields().size() - 1 ) {
builder.set(schema.getFields().get(i).name(), null)
}
var record = builder.build();
print(record.toString())
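For the first option (adding defaults to the schema itself), a rough sketch is to rebuild the record by hand with Avro's SchemaBuilder; note that null has to be the first branch of each union for a null default to be legal. The field names below simply mirror the converted schema above:
import org.apache.avro.SchemaBuilder

// optionalInt/optionalDouble build a union of ["null", type] with a null default
val schemaWithDefaults = SchemaBuilder.record("testrecord").namespace("test_namespace")
  .fields()
  .optionalInt("key")
  .optionalDouble("value")
  .endRecord()
print(schemaWithDefaults.toString)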

H2 database create alias for function in package in schema

In my code I call stored procedure like this (and it works perfectly):
{ ? = call schema.package.function(?) }
I need to call it like this because the JDBC connection is set to another schema.
But for now I can't test it, because the H2 database doesn't support packages. If I change the database name in my JDBC URL to the one I require and delete "schema" from the call, everything is OK while testing.
@Test
fun test() {
    val session = em.entityManager.unwrap(Session::class.java)
    session.doWork {
        val st = it.createStatement()
        st.execute("create schema if not exists mySchema")
        st.execute("create alias mySchema.myPackage.myFunction for " + // the error happens here
            "\"${this.javaClass.name}.myFunction\"")
    }
    val response = dao.myFunction("1")
    // test stuff
}
How can I change my test? Right now it gives me a syntax error.

How to create database schema using slick?

I have tried
val schemas = addresses.schema
val setup = schemas.create
val db = Database.forConfig("h2disk")
Await.result(db.run(setup), Duration.Inf)
but, apparently, it is not working. Here are some logs
[error] Caused by: org.h2.jdbc.JdbcSQLException: Schema "apps" not found; SQL statement:
[error] create table "apps"."t_address" ("name" VARCHAR,"domain" VARCHAR,"t_address_id" VARCHAR NOT NULL PRIMARY KEY) [90079-196]
[error] at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
[error] at org.h2.message.DbException.get(DbException.java:179)
[error] at org.h2.message.DbException.get(DbException.java:155)
[error] at org.h2.command.Parser.getSchema(Parser.java:688)
[error] at org.h2.command.Parser.getSchema(Parser.java:694)
We can try
val schemas = addresses.schema
val setup = DBIO.seq(sqlu"""create schema apps;""", schemas.create)
val db = Database.forConfig("h2disk")
Await.result(db.run(setup), Duration.Inf)
Note: the schema name is case-sensitive for some DBMSs; e.g., H2 will automatically convert an unquoted schema name like apps to APPS.
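Since Slick quotes identifiers in the DDL it emits (create table "apps"."t_address" ...), one way to keep the two in step on H2 is to quote the schema name in the raw statement as well. A rough sketch of that variant (untested, assuming H2's default identifier handling):
val schemas = addresses.schema
val setup = DBIO.seq(
  sqlu"""create schema if not exists "apps" """, // quoted, so it matches Slick's quoted lowercase DDL
  schemas.create
)
val db = Database.forConfig("h2disk")
Await.result(db.run(setup), Duration.Inf)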
I had to splice the schema name as a literal value to make it work, prefixing the variable with # (Slick's #$ interpolation rather than a bind variable):
src: https://scala-slick.org/doc/3.3.3/sql.html#splicing-literal-values
val schemaName = "something"
val schemas = Cases(schemaName).schema
val setup = DBIO.seq(
sqlu"""create schema #${schemaName} AUTHORIZATION postgres""",
// create table schemas
schema.createIfNotExists
//add default data
...
// add rights
...
)
All the tables are defined like this:
class Cases(_tableTag: Tag, schemaName: String) extends profile.api.Table[CasesRow](_tableTag, Some(schemaName), "cases") {
....
}
def Cases(schema: String) = new TableQuery(tag => new Cases(tag,schemaName = schema))