NiFi generates wrong column name for insert constraint - sql

I use NiFi 1.13.2 to build an ETL process between Oracle and PostgreSQL.
There is an ExecuteSQL processor for retrieving data from Oracle and a PutDatabaseRecord processor for inserting data into a PostgreSQL table. The PutDatabaseRecord processor is configured with the INSERT_IGNORE option. The key column in both tables is named DOC_ID, but during the insert NiFi for some reason generates a mistaken column name, as can be seen in the following line: ON CONFLICT (DOCID) DO NOTHING
Here is the whole error:
Failed to put Records to database for StandardFlowFileRecord[uuid=7ff8189a-2685-4f
9a-bab6-d0bc9b4f7ae0,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1623310567664-311, container=default, section=311], offset=604245, length=610377],offset=211592,name=7ff8189a-2685-4f9a-bab6-d0bc9b4f7ae0,size=6106].
Routing to failure.: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO src.rtl_sales(doc_id, complete, out_sale_sum_disc, kpp_num, org_id, kpp_status, im) VALUES (1830335807, '2020-06-12 +03', '530.67'::numeric, 565900, 62, 4, NULL
) ON CONFLICT (DOCID) DO NOTHING was aborted: ERROR: column "docid" does not exist
Here is the table in PostgreSQL:
Here is part of the FlowFile from the queue:
What am I doing wrong, or is it NiFi?

OK, so the fix is to set Translate Field Names -> False in PutDatabaseRecord:
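For illustration, here is a minimal sketch of the statement shape PutDatabaseRecord emits for INSERT_IGNORE, run against SQLite (whose upsert syntax matches PostgreSQL's ON CONFLICT clause). The table is a simplified stand-in for src.rtl_sales; once the conflict target is spelled doc_id rather than DOCID, the duplicate insert is silently ignored:

```python
import sqlite3

# Simplified stand-in for the PostgreSQL target table src.rtl_sales.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rtl_sales (doc_id INTEGER NOT NULL UNIQUE, org_id INTEGER)")
conn.execute("INSERT INTO rtl_sales (doc_id, org_id) VALUES (1830335807, 62)")

# With the conflict-target column spelled correctly, a second insert with the
# same key is silently skipped -- the behaviour INSERT_IGNORE is meant to give.
conn.execute(
    "INSERT INTO rtl_sales (doc_id, org_id) VALUES (1830335807, 99) "
    "ON CONFLICT (doc_id) DO NOTHING"
)
rows = conn.execute("SELECT doc_id, org_id FROM rtl_sales").fetchall()
print(rows)  # [(1830335807, 62)] -- original row kept, duplicate ignored
```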

Related

Attempting to insert data into a SQL Server table with no primary key or clustered index intermittently comes up with strange bulk load error

We have a stored procedure in SQL Server which inserts data from a staging table into a fact table. This is first joined onto various dimension tables to get their id columns.
Intermittently, it will throw the following error:
('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot bulk load. The bulk data stream was incorrectly specified as sorted or the data violates a uniqueness constraint imposed by the target table. Sort order incorrect for the following two rows: primary key of first row: (305603, 0xedd9f90001001a00), primary key of second row: (245634, 0x0832680003003200). (4819) (SQLExecDirectW)')
(The two rows mentioned in the error message usually change each time the error occurs)
The fact table doesn't have any primary key or clustered index, although it does have a few non-clustered indexes, so the error message is really confusing me.
The dimension tables DO have primary keys, but the fact table has no foreign keys related to them.
I have searched for hours on the internet but don't seem to have found a solution.
Does anybody know why I would receive such an error in this case? Any help would be appreciated. We are using an Azure SQL database.
I am aware that the database is badly designed however it was created before my time.
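Not a fix, but since the message complains about a uniqueness violation in the incoming data stream, one thing worth ruling out is that the dimension-table joins are fanning out and producing duplicate key tuples in the staging batch. A minimal pre-flight check (the batch contents and key position are made up for illustration; the key values echo the ones from the error message):

```python
from collections import Counter

# Hypothetical staging batch: (key, payload) rows about to be bulk-loaded.
batch = [
    (305603, "row-a"),
    (245634, "row-b"),
    (305603, "row-c"),  # same key appears twice -- a join fan-out symptom
]

# Count occurrences of each key and keep the ones seen more than once.
dupes = [key for key, n in Counter(row[0] for row in batch).items() if n > 1]
print(dupes)  # [305603]
```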

Flyway: Can't insert the value NULL in 'installed_on', table 'schema_version' column does not allow nulls. INSERT fails

Using flyway-core:4.1.2 for database migration. Added a new DDL file for Flyway to execute. Flyway executes the DDL correctly and makes the corresponding changes to tables and columns. (We're adding a table and altering some previous columns in the new DDL.) But Flyway fails to register this attempt in the schema_version table; I get the following error:
Current version of schema [dbo]: 2.1
Unable to insert row for version '3.0' in metadata table [dbo].[schema_version]
Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.internal.dbsupport.FlywaySqlException:
Message : Cannot insert the value NULL into column 'installed_on', table 'dbo.schema_version'; column does not allow nulls. INSERT fails.
Flyway successfully executes the DDL; however, it fails to log it to the schema_version table due to the NULL in installed_on. Any help will be greatly appreciated. Thanks in advance.
In my case the error was that the table flyway_schema_history had the column installed_on defined as DATETIME NOT NULL, while it should have been DATETIME DEFAULT GETDATE() NOT NULL.
The issue was resolved when I manually altered the column to include the default value definition.
My company has a number of databases which were created over the last 3 years, and I have noticed that the oldest and the youngest of them have the column set properly, while the ones from around 1.5 years ago have the column defined without the default. Perhaps it was a bug in some older versions of Flyway?
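The fix described above can be sketched in miniature with SQLite standing in for SQL Server (CURRENT_TIMESTAMP standing in for GETDATE()): a NOT NULL column with no default rejects an insert that omits it, while the same column with a default is filled in automatically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The broken shape: NOT NULL with no default -- an insert omitting the
# column fails, just like Flyway's metadata insert did.
conn.execute(
    "CREATE TABLE schema_version_bad (version TEXT, installed_on TIMESTAMP NOT NULL)"
)
try:
    conn.execute("INSERT INTO schema_version_bad (version) VALUES ('3.0')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # NOT NULL constraint failed

# The fixed shape: a default supplies the timestamp when it is omitted.
conn.execute(
    "CREATE TABLE schema_version_ok (version TEXT, "
    "installed_on TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP)"
)
conn.execute("INSERT INTO schema_version_ok (version) VALUES ('3.0')")
row = conn.execute("SELECT version, installed_on FROM schema_version_ok").fetchone()
print(row)  # ('3.0', '<current timestamp>')
```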

Apache Ignite: how to insert into table with IDENTITY key (SQL Server)

I have a table in SQL Server where the primary key is autogenerated (identity column), i.e.
CREATE TABLE TableName
(
table_id INT NOT NULL IDENTITY (1,1),
some_field VARCHAR(20),
PRIMARY KEY (table_id)
);
Since table_id is an autogenerated column, I do not set any argument for table_id when building the SqlFieldsQuery INSERT:
sql = new SqlFieldsQuery("INSERT INTO TableName (some_field) VALUES (?)");
cache.query(sql.setArgs("str"));
However at runtime I get the following error:
Exception in thread "main" javax.cache.CacheException: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO TableName (some_field) VALUES (?), params=["str"]]
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:807)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:765)
...
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO TableName (some_field) VALUES (?), params=["str"]]
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1324)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1815)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1813)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2293)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1820)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:795)
... 5 more
Caused by: class org.apache.ignite.IgniteCheckedException: Key is missing from query
at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlanBuilder.createSupplier(UpdatePlanBuilder.java:331)
at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlanBuilder.planForInsert(UpdatePlanBuilder.java:196)
at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlanBuilder.planForStatement(UpdatePlanBuilder.java:82)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.getPlanForStatement(DmlStatementsProcessor.java:438)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:164)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsDistributed(DmlStatementsProcessor.java:222)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1321)
... 11 more
This is how I planned to implement the insertion, because it seemed more tedious to get the max table_id from the cache, increment it and insert. I thought I could omit table_id from the insert and let SQL Server generate the primary key, but it doesn't seem to work like this.
Can you please tell me how this should typically be implemented in Ignite? I checked the ignite-examples, but unfortunately they are too simple (fixed keys only, like 1 or 2).
Moreover, how does Ignite support the use of sequences?
I am using ignite-core 2.2.0. Any help is appreciated! Thank you.
It's true that, as of now, autoincrement fields are not supported.
As an option, you could generate IDs manually, for example via Ignite's ID generator.
Ignite doesn't support identity columns [1] yet.
It may be non-obvious, but Ignite's SQL layer is built on top of a key-value store which can be backed by a CacheStore. Your SQL query will never go through to the CacheStore as is.
Ignite internals will execute your query and save the data in the cache, and only then is the update propagated to the CacheStore, which creates a new SQL query for your SQL Server.
So Ignite needs the identity column value (actually the key) to be known before the data is saved in the cache.
[1] https://issues.apache.org/jira/browse/IGNITE-5625
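A minimal sketch of the "generate IDs manually" option above: the client produces the key itself before the INSERT, so Ignite never has to wait for the database to assign one. A thread-safe local counter stands in here for a cluster-wide generator such as Ignite's IgniteAtomicSequence (the class and names below are illustrative, not Ignite API):

```python
import itertools
import threading

class IdGenerator:
    """Thread-safe monotonic ID source, standing in for a cluster-wide
    sequence such as Ignite's IgniteAtomicSequence."""

    def __init__(self, start=1):
        self._counter = itertools.count(start)
        self._lock = threading.Lock()

    def next_id(self):
        with self._lock:
            return next(self._counter)

gen = IdGenerator()
table_id = gen.next_id()
# The INSERT then supplies the key explicitly instead of omitting it:
#   "INSERT INTO TableName (table_id, some_field) VALUES (?, ?)"
#   with args (table_id, "str")
print(table_id)  # 1
```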

Delete data from a table that is not reorganized (in DB2)

I have a table that needs to be reorganized using the reorg command in DB2. However, due to the large size of the table, I am unable to do so. So I plan to delete all data from the table and then do a reorg if required.
Is there any possible way for doing so?
I already used
DELETE FROM TABLE-NAME
and
TRUNCATE TABLE SCHEMA-NAME.TABLE-NAME IMMEDIATE
But both of the above queries aren't working; they give the following error, which is the reorg-pending one.
Error: DB2 SQL Error: SQLCODE=-668, SQLSTATE=57016, SQLERRMC=7;DB2ADMIN.CERTIFICATE_TAB, DRIVER=3.50.152
SQLState: 57016
ErrorCode: -668

Synchronization Bulk Insertion Exception

Can anyone please help me out? I am syncing SQL Server to a client using the MS Sync Framework. I am getting this exception at syncOrchestrator.Synchronize():
Failed to execute the command 'BulkInsertCommand' for table ‘xyz'; the transaction was rolled back. Ensure that the command syntax is correct.
Inner Exception is:
Violation of PRIMARY KEY constraint 'PK__#B1CF471__ECD9228532A93525'. Cannot insert duplicate key in object 'dbo.#changeTable'. The duplicate key value is (0). The data for table-valued parameter \"#changeTable\" doesn't conform to the table type of the parameter. The statement has been terminated.
There is a stored procedure that uses the object 'dbo.#changeTable' above, but I don't know what to do with it.