Why is the Apache Ignite response empty when using a SQL query?

I am trying to create a cache in Ignite using a SQL query:
CacheConfiguration<?, ?> cacheCfg = new CacheConfiguration<>(DUMMY_CACHE_NAME);
IgniteCache<?, ?> cache = igniteInstance.getOrCreateCache(cacheCfg);
cache.query(new SqlFieldsQuery(
"CREATE TABLE IF NOT EXISTS MY_TABLE (id varchar, value varchar, PRIMARY KEY(id)) " +
"WITH \"template=replicated\", \"DATA_REGION=" + IgniteCacheConfiguration.DEFAULT_REGION + "\""));
Then I am putting some data into the cache using the key-value API:
IgniteCache<String, String> cache = ignite.cache("SQL_PUBLIC_MY_TABLE");
cache.put("1","example");
I know the data is stored successfully: I can retrieve it through the key-value API and I see that the cache size is correct. But when I try to retrieve the data with SQL,
SELECT * FROM "PUBLIC".MY_TABLE
for example using DBeaver, I get an empty result.
Is this just how Ignite works, or is some additional configuration needed?

By default, Ignite wraps the key and value in a class. You can tell it not to do that with the wrap_key and wrap_value parameters, like this:
create table MY_TABLE (id varchar, value varchar, PRIMARY KEY(id)) with "wrap_key=false,wrap_value=false";
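Combined with the table from the question, that would look roughly like this (a sketch only: the DATA_REGION parameter from the question is omitted for brevity, and the key-value put is assumed to happen exactly as in the question):
CREATE TABLE IF NOT EXISTS MY_TABLE (id varchar, value varchar, PRIMARY KEY(id))
WITH "template=replicated,wrap_key=false,wrap_value=false";
-- after cache.put("1", "example") on the SQL_PUBLIC_MY_TABLE cache,
-- the row should now show up for SQL clients such as DBeaver:
SELECT * FROM "PUBLIC".MY_TABLE;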

Related

Cannot COPY into nonexistent table when table exists

So, I have a table nba_schedule, which is created below. When I try to copy data from an S3 CSV file into the table using COPY, I receive this error: InternalError_: Cannot COPY into nonexistent table newsletter_schedule.
I'm thinking it's because this is all taking place in the same transaction, which is what I'm expected to do here. Also, the Redshift variables are located in an env file; I'm not sharing the code that loads that in.
import logging
import psycopg2

redshift_table = 'nba_schedule'
# Connect to Redshift
conn_string = "dbname={} port={} user={} password={} host={}".format(
    redshift_dbname, redshift_port, redshift_user, redshift_password, redshift_host)
conn = psycopg2.connect(conn_string)
cursor = conn.cursor()
logging.info("Creating newsletter_schedule table in Redshift")
sql = f"""DROP TABLE IF EXISTS {schema + "." + redshift_table}"""
cursor.execute(sql)
sql = f"""CREATE TABLE IF NOT EXISTS {schema + "." + redshift_table} (
Date DATE,
Player_Name VARCHAR(255),
Player_Nickname VARCHAR(13),
Player_No VARCHAR(13),
Points VARCHAR(255),
Rebounds VARCHAR(255),
Assists VARCHAR(255),
Blocks VARCHAR(1),
3PM VARCHAR(1),
3PA VARCHAR(1),
FGM VARCHAR(50),
FGA VARCHAR(255),
three_percent VARCHAR(50),
fg_percent VARCHAR(50)
)
"""
cursor.execute(sql)
sql =f"""
COPY newsletter_schedule
FROM 's3://random_sample_data/nba_redshift/{s3_file}'
CREDENTIALS 'aws_iam_role=arn:aws:iam::4254514352:role/SampleRole'
DELIMITER ','
IGNOREHEADER 1
EMPTYASNULL
QUOTE '"'
CSV
REGION 'us-east-1';
"""
cursor.execute(sql)
conn.commit()
Any thoughts?
My first thought is that the CREATE TABLE has the schema explicitly defined, but the COPY command is without the schema defined, just the table name. Now I don't know what schema you are using or what the search path is for this user on Redshift, but it seems like you should check that this isn't just a schema search-path issue.
What happens if you use schema.table in the COPY command? (this debug path is easier to explain than describing how to evaluate the user's search path)
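A schema-qualified COPY might look like the sketch below, where my_schema stands in for whatever your schema variable holds, the {s3_file} placeholder is kept from the question, and the target is written as the table the CREATE TABLE in the snippet actually creates:
COPY my_schema.nba_schedule
FROM 's3://random_sample_data/nba_redshift/{s3_file}'
CREDENTIALS 'aws_iam_role=arn:aws:iam::4254514352:role/SampleRole'
DELIMITER ','
IGNOREHEADER 1
EMPTYASNULL
QUOTE '"'
CSV
REGION 'us-east-1';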
There are other more subtle ways this could be happening but I've learned to look at the simple causes first - they are easier to rule out and more often than not the root cause.

Failed to execute query. Error: String or binary data would be truncated in table 'dummy_app.dbo.user_info', column 'uid'

I have problem inserting values in my SQL server database on Azure, I am getting the following error:
Failed to execute query. Error: String or binary data would be truncated in table 'dummy_app.dbo.user_info', column 'uid'. Truncated value: 'u'.
The statement has been terminated.
I don't understand where I am wrong. I just created the server and I am trying to experiment, but I can't fix this.
if not exists (select * from sysobjects where name='user_info' and xtype='U')
create table user_info (
uid varchar unique,
name varchar,
email varchar
)
go;
INSERT INTO dbo.user_info(uid, name, email) VALUES('uids', 'name', 'email') go;
Creating the table works fine; the only thing that doesn't work is the second command, the INSERT.
I suspect that the reason is that you haven't defined a length for varchar, so it defaults to a length of 1. Therefore your value gets truncated.
Set a varchar length to something like varchar(200) and you should be good to go.
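For example (a sketch: the lengths are arbitrary and should be sized for your actual data, and the existing table has to be dropped first, because the IF NOT EXISTS guard would otherwise keep the old one-character columns):
DROP TABLE IF EXISTS dbo.user_info;
CREATE TABLE dbo.user_info (
    uid   varchar(50) UNIQUE,
    name  varchar(200),
    email varchar(200)
);
INSERT INTO dbo.user_info (uid, name, email) VALUES ('uids', 'name', 'email');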
This looks like the CREATE portion of your statement for the table doesn't include a length for varchar, so you'd have to specify a length such as varchar(50), since the default is 1. Refer to the official MS docs in the link, in the remarks.
docs.microsoft.com
Also, here is the syntax for the CREATE TABLE in Azure which might be helpful as well.
Syntax of Azure CREATE TABLE

What is the right way to get Avro-files containing JSON into table-structure on Snowflake?

I've been struggling to get my data from Azure Event Hub into a SQL table on the Snowflake platform. I just can't wrap my head around how to do it properly if I have to transform the data multiple times. My data is in the body of the Avro file.
I just started using Snowflake. So far I've tried to follow this tutorial on the subject, but it doesn't actually save the JSON-formatted body anywhere in the video. So far I've tried something like this:
CREATE DATABASE IF NOT EXISTS MY_DB;
USE DATABASE MY_DB;
CREATE OR REPLACE TABLE data_table(
"column1" STRING,
"column2" INTEGER,
"column3" STRING
);
create or replace file format av_avro_format
type = 'AVRO'
compression = 'NONE';
create or replace stage st_capture_avros
url='azure://xxxxxxx.blob.core.windows.net/xxxxxxxx/xxxxxxxxx/xxxxxxx/1/'
credentials=(azure_sas_token='?xxxxxxxxxxxxx')
file_format = av_avro_format;
copy into avro_as_json_table(body)
from(
select(HEX_DECODE_STRING($1:Body))
from @st_capture_avros
);
copy into data_table("Column1", "Column2", "Column3" )
from(
select $1:"jsonKeyValue1", $1:"jsonKeyValue2", $1:"jsonKeyValue3"
from avro_as_json_table
);
This doesn't work, as it produces the error "SQL compilation error: COPY statement only supports simple SELECT from stage statements for import". I know I should use INSERT INTO in the last statement instead of COPY, but my question is more about how I would eliminate the redundant avro_as_json_table from the equation.
Rather than using
copy into avro_as_json_table(body)
from ...
try
INSERT INTO avro_as_json_table(body)
from ...
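If you also want to get rid of the intermediate avro_as_json_table, one possible shape is to do the decode and parse in a subquery over the stage and INSERT straight into the target table. This is only a sketch: it assumes the Body field holds a hex-encoded JSON string, as in the question, and that the stage's default av_avro_format is applied when selecting from it.
insert into data_table("column1", "column2", "column3")
select
    j:"jsonKeyValue1"::string,   -- JSON keys as used in the question
    j:"jsonKeyValue2"::integer,
    j:"jsonKeyValue3"::string
from (
    select parse_json(hex_decode_string($1:Body)) as j
    from @st_capture_avros
);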

BIGINT in SPARK SQL

We have configured Spark SQL (1.3.2) to work on top of Hive, and we use Beeline to create the tables.
I was trying to create a table with the BIGINT datatype. However, I see that the table is getting created with the INT datatype when I use the command below:
CREATE TEMPORARY TABLE cars (blank bigint)
USING com.databricks.spark.csv
OPTIONS (path "cars.csv", header "false")
However, when I use the command below, I am able to create a table with the bigint datatype:
CREATE TABLE cars(blank bigint)
Can you let me know how I can create a table with the BIGINT datatype using the first method?
Is it because of this?
"Integral literals are assumed to be INT by default, unless the number exceeds the range of INT in which case it is interpreted as a BIGINT, or if one of the following postfixes is present on the number."
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-IntegralTypes(TINYINT,SMALLINT,INT,BIGINT)
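For reference, those postfixes look like this in HiveQL (a small illustration of the quoted rule only, not a fix for the spark-csv table definition):
SELECT 100  AS default_int,     -- integral literal is INT by default
       100L AS forced_bigint,   -- L postfix forces BIGINT
       100S AS forced_smallint, -- S postfix forces SMALLINT
       100Y AS forced_tinyint;  -- Y postfix forces TINYINT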

Unable to create table in hive

I am creating a table in Hive like this:
CREATE TABLE SEQUENCE_TABLE(
SEQUENCE_NAME VARCHAR2(225) NOT NULL,
NEXT_VAL NUMBER NOT NULL
);
But the result is a parse exception: it is unable to read VARCHAR2(225) NOT NULL.
Can anyone guide me on how to create a table like the one above, and on any other process to provide a path for it?
There's no such thing as a VARCHAR2 type, a field width, or a NOT NULL clause in Hive.
CREATE TABLE SEQUENCE_TABLE (SEQUENCE_NAME string, NEXT_VAL bigint);
Please read this for CREATE TABLE syntax:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTable
Anyway, Hive is "SQL-like" but it's not SQL. I wouldn't use it for things such as a sequence table, as you don't have support for transactions, locking, keys, and everything you are familiar with from Oracle (though I think that in newer versions there is simple support for transactions, updates, deletes, etc.).
I would consider using a normal OLTP database for whatever you are trying to achieve.
The only option you have here is something like:
CREATE TABLE SEQUENCE_TABLE(SEQUENCE_NAME String,NEXT_VAL bigint) row format delimited fields terminated by ',' stored as textfile;
PS: Again, it depends on the types of data you are going to load into Hive.
Use the following syntax:
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.] table_name
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[ROW FORMAT row_format]
[STORED AS file_format]
And Example of hive create table
CREATE TABLE IF NOT EXISTS employee ( eid int, name String,
salary String, destination String)
COMMENT 'Employee details'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;