Ignite SQL with Calcite query hint merge

So I am playing around with the new Calcite engine in Ignite, trying to get to grips with some of the new possibilities, and I've naturally hit a puzzle. I'm not sure if anyone has suggestions (or whether this should work at all...)
So this is what I am trying to do (more or less...). It fails with the error: SQL Error [1001] [42000]: Failed to validate query. java.lang.NullPointerException: rowType.
create table tmp ("request_key_" varchar(36) primary key, "request_name_" varchar(128));
merge into tmp d using (
    select /*+ QUERY_ENGINE('calcite') */
        cast('00cbf724-a767-4d4f-ab24-a4131023a537' as varchar(36)) as "request_key_",
        cast('test merge' as varchar(128)) as "request_name_"
) s on d."request_key_" = s."request_key_"
when matched then update set
    "request_name_" = s."request_name_"
when not matched then insert (
    "request_key_"
    ,"request_name_"
) values (
    s."request_key_"
    ,s."request_name_"
);
Is this workable at all, or is there a missing tweak required? I suspect the issue is simply with how the query engine hint is being passed, but I can't quite figure out how this should work (or do I need to initialise the JDBC driver with the Calcite engine?)
It doesn't work with a table-to-table merge either, so I am pretty sure it's the query hint that's the problem...
This is on Ignite 2.13...
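For reference, a minimal sketch of the two engine-selection mechanisms as I understand them from the Ignite 2.13 docs (the host and port in the JDBC URL are placeholders):
-- per-query hint, placed directly after the outermost SELECT keyword
select /*+ QUERY_ENGINE('calcite') */ * from tmp;
-- per-connection selection via the thin JDBC driver, so no hint is needed:
-- jdbc:ignite:thin://127.0.0.1:10800?queryEngine=calcite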

Related

Create temp table in Azure Databricks and insert lots of rows

Here's the end result of what I'm trying to do, because I think that I'm making it needlessly complicated.
I want to query data where UPC_ID IN (VERY LONG LIST OF UPCS). Like, 20k lines.
I thought that perhaps the easiest way to do this would be to create a temporary table, and insert the lines 1000 at a time (and then use that table for the WHERE condition.)
When I try to run
CREATE TABLE #TEMP_ITEM (UPC_ID BIGINT NOT NULL)
I get
[PARSE_SYNTAX_ERROR] Syntax error at or near '#' line 1, pos 13
The list of UPCs comes from a spreadsheet, and there's no shared attributes where I can just SELECT INTO or generate the list using anything that already exists in the database.
I know that I'm missing something painfully stupid here, but I am stuck. Help?
You are almost there...
I hope this is in Databricks....
Basically you cannot insert into a temp table directly, but you can simulate one with a temporary view built from UNIONed select statements...
create or replace temporary view TEMP_ITEM as
select 1 as UPC_ID
UNION
select 2 as UPC_ID
...
...
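Once the view exists, you can use it for the WHERE condition, e.g. (assuming, hypothetically, a data table named items):
select * from items
where UPC_ID in (select UPC_ID from TEMP_ITEM);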
Please refer to the below link for further details...
Is it possible to insert into temporary table in spark?
Hope this helps...

Counting affected rows using Groovy SQL

Suppose I have the following SQL table definition:
create table test(
id number(5) primary key
);
Furthermore, I'd like to execute INSERT and DELETE statements on said table by using Groovy's SQL support.
Given the following code, which executes an INSERT statement:
Sql db = new Sql(some_dataSource)
String query = "insert into test (id) values (999)"
def result = db.execute(query)
Can I know, inside this code, how many rows were inserted into test (in this case, 1)?
Is the result variable useful in determining this?
I've tried looking at Groovy's SQL support documentation here and here, but the examples provided only seem to count rows over SELECT statements.
Thanks!
You can get the number of rows inserted with Sql.getUpdateCount(). Example:
db.execute(query)
println db.updateCount
The return value from execute is a boolean indicating whether the statement was successfully executed.
The API docs for Sql.execute() have a complete explanation.
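As a side note, if all you need is the count, groovy.sql.Sql also offers executeUpdate, which returns the affected-row count directly; a minimal sketch:
// using the Sql instance from the example above;
// executeUpdate returns the number of rows affected as an int
int rows = db.executeUpdate("insert into test (id) values (999)")
assert rows == 1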

Getting the inserted rows to update another table

I have a table which stores records that need to be inserted into another database. Once these values are inserted, I then need to mark the records as processed to prevent them from being re-processed.
DECLARE @InsertedValues TABLE (
[ITEMNMBR] nchar(31),
[ITEMDESC] nchar(101),
[ITMSHNAM] nchar(15),
[ITMGEDSC] nchar(11),
[UOMSCHDL] nchar(11),
[ALTITEM1] nchar(31),
[ALTITEM2] nchar(31),
[USCATVLS_1] nchar(11),
[USCATVLS_2] nchar(11),
[USCATVLS_3] nchar(11),
[USCATVLS_6] nchar(11),
[ABCCODE] int,
[ROW_ID] int
)
-- INSERT NEW INVENTORY ITEMS INTO DB
INSERT INTO TABLE1..IV00101 (ITEMNMBR,ITEMDESC,ITMSHNAM,ITMGEDSC,UOMSCHDL,ALTITEM1,ALTITEM2,USCATVLS_1,USCATVLS_2,USCATVLS_3,USCATVLS_6,ABCCODE)
OUTPUT
INSERTED.[ITEMNMBR],
INSERTED.[ITEMDESC],
INSERTED.[ITMSHNAM],
INSERTED.[ITMGEDSC],
INSERTED.[UOMSCHDL],
INSERTED.[ALTITEM1],
INSERTED.[ALTITEM2],
INSERTED.[USCATVLS_1],
INSERTED.[USCATVLS_2],
INSERTED.[USCATVLS_3],
INSERTED.[USCATVLS_6],
INSERTED.[ABCCODE],
U.[ROW_ID] INTO @InsertedValues
SELECT U.[ITEMNMBR],U.[ITEMDESC],U.[ITMSHNAM],U.[ITMGEDSC],U.[UOMSCHDL],U.[ALTITEM1],U.[ALTITEM2],U.[USCATVLS_1],U.[USCATVLS_2],U.[USCATVLS_3],U.[USCATVLS_6],U.[ABCCODE]
FROM
DYNAMICS..TABLE2 AS U
WHERE
U.[ProcessedFlag] = 0 AND
U.[Action] = 'I' AND
U.[DestinationCompany] = 'COMPANY1' AND
U.[DestinationTable] = 'IV00101'
As it stands, this query doesn't work: it complains about the U.[ROW_ID] column in the OUTPUT clause, which makes sense. So my problem is: how do I get hold of the rows that were inserted, so that I can then run the following query?
UPDATE R
SET [ProcessedFlag] = 1, [ProcessedDateTime] = GETDATE()
FROM @InsertedValues AS U
INNER JOIN DYNAMICS..TABLE2 AS R ON U.[ROW_ID] = R.[ROW_ID]
I'd consider using eConnect, since messing with GP tables directly is not a good idea (though inserting into IV00101 should be OK since it's the inventory master... but still!)
What version of GP are you using? GP10 and GP2010 support web services with methods that allow you to insert an inventory item; otherwise you can use eConnect and provide XML files to the eConnect entry point, which it will process. It also provides validation and error handling. You can use message queuing too if need be.
Are you trying to do an import from your own holding table into the GP tables or something like that?
I do plenty of GP and integration where I work :)
It's not possible to get the number of updated rows with standard SQL, but probably any database allows you to do it. But it won't be that easy to help you if you don't tell us which RDBMS you are using and where you are calling the SQL from: a script executed in some db client application, or an application you're developing in T-SQL, PL/SQL, PL/pgSQL, Java, PHP, C/C++, C#, VB or whatever language, probably using a db library you should also name.
UPDATE DYNAMICS..TABLE2
SET [ProcessedFlag] = 1, [ProcessedDateTime] = GETDATE()
WHERE
DYNAMICS..TABLE2.[ProcessedFlag] = 0 AND
DYNAMICS..TABLE2.[Action] = 'I' AND
DYNAMICS..TABLE2.[DestinationCompany] = 'COMPANY1' AND
DYNAMICS..TABLE2.[DestinationTable] = 'IV00101'
Just update the same set of records you selected in the first place.
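For completeness, another common workaround (not mentioned above, so treat this as a sketch with cut-down two-column tables): unlike INSERT ... OUTPUT, the OUTPUT clause of a MERGE may reference source-table columns, so the source ROW_ID can be captured alongside the inserted values.
DECLARE @Inserted TABLE (ROW_ID int, ITEMNMBR nchar(31));
MERGE INTO TABLE1..IV00101 AS T
USING (
    SELECT [ROW_ID], [ITEMNMBR]
    FROM DYNAMICS..TABLE2
    WHERE [ProcessedFlag] = 0 AND [Action] = 'I'
) AS U
ON 1 = 0 -- never matches, so every source row takes the NOT MATCHED branch
WHEN NOT MATCHED THEN
    INSERT ([ITEMNMBR]) VALUES (U.[ITEMNMBR])
OUTPUT U.[ROW_ID], INSERTED.[ITEMNMBR] INTO @Inserted ([ROW_ID], [ITEMNMBR]);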
Just a suggestion: you should use identity columns when you have this kind of scenario, since @@IDENTITY/SCOPE_IDENTITY() then become easy to use.
Anyway, I'd suggest using a trigger if this table doesn't receive multiple simultaneous insertions, as triggers have a few disadvantages of their own.
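A quick sketch of the identity suggestion (the holding table here is hypothetical):
CREATE TABLE #Holding (ROW_ID int IDENTITY(1,1), ITEMNMBR nchar(31));
INSERT INTO #Holding (ITEMNMBR) VALUES (N'ITEM-001');
SELECT SCOPE_IDENTITY() AS LastRowId; -- identity value generated in this scope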

How to append data from one table into another where one column is XML, with SQL?

Actually both tables are the same, and I just need to merge the data. The problem is that one column is defined with an XML schema, which is the same in both tables, and for my query I am getting this error from SQL Server Management Studio:
"Implicit conversion between XML types constrained by different XML schema collections is not allowed. Use the CONVERT function to run this query."
Help me write this query.
I have something like this:
INSERT INTO table1
SELECT * FROM table2
WHERE id NOT IN (select id from table1);
Without more info on your table structure and the XML schemas, I'm not sure how much assistance I can be. That said, there's an article that discusses this exact problem here:
http://sqlblogcasts.com/blogs/martinbell/archive/2010/11/08/Using-XML-Schemas.aspx
And his example of using the CONVERT function to overcome exactly this problem is as follows:
INSERT INTO [dbo].[Test_ProductModel_Content]( [CatalogDescription] )
SELECT CONVERT(XML, [CatalogDescription] )
FROM AdventureWorks2008.Production.ProductModel
WHERE [CatalogDescription] IS NOT NULL ;
GO
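Adapted to the INSERT above, that might look like the following (hedged: xml_col and the two-column list are placeholders, since the real table structure wasn't posted, and SELECT * can't be combined with a per-column CONVERT):
INSERT INTO table1 (id, xml_col)
SELECT id, CONVERT(XML, xml_col)
FROM table2
WHERE id NOT IN (SELECT id FROM table1);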
Hope that helps, if not post more information and I'm sure someone can help you out.

Very simple SQL query on varchar fields with sqlite

I created a table with this schema using sqlite3:
CREATE TABLE monitored_files (
    file_id INTEGER PRIMARY KEY,
    file_name VARCHAR(32767),
    original_relative_dir_path VARCHAR(32767),
    backupped_relative_dir_path VARCHAR(32767),
    directory_id INTEGER
);
Now I would like to get all the records where original_relative_dir_path is exactly equal to . (without the quotes). What I did is this:
select * from monitored_files where original_relative_dir_path='.';
The result is no records, even though the table contains just this record:
1|'P9040479.JPG'|'.'|'.'|1
I read up on the web and I see no mistakes in my syntax... I also tried using LIKE '.', but still no results. I'm not an expert in SQL, so maybe you can see something wrong?
Thanks!
I see no problem with the statement. I created the table that you described, did an INSERT with the same values that you provided, and then ran the query, also querying without a WHERE clause. No problems encountered, so I suspect that when you execute your SELECT you may not be connected to the correct database.
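For what it's worth, the test described can be reproduced in the sqlite3 shell roughly like this (the database file name is hypothetical):
-- run inside: sqlite3 test.db, after the CREATE TABLE from the question
INSERT INTO monitored_files VALUES (1, 'P9040479.JPG', '.', '.', 1);
SELECT * FROM monitored_files WHERE original_relative_dir_path = '.';
-- expected output row: 1|P9040479.JPG|.|.|1 (no quotes around the values)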