Cloned Snowflake table, why is every column in quotes? - formatting

I have a table of varchar values. When I copy this table by cloning it, every varchar value in the clone has quotation marks around it.
For example, 12/8/2017 becomes "12/8/2017" and Finance becomes "Finance".
I am wondering (a) why this happened, and (b) whether there is any way to fix it.

So I tried to think of a scenario where this might happen, and I came up with this:
CREATE OR REPLACE TABLE demo_db.public.employees
(emp_id number,
first_name varchar,
last_name varchar
);
-- Populate the table with some seed records.
Insert into demo_db.public.employees
values(100,'"John"','"Smith"')
(200,'Sam','White'),
(300,'Bob','Jones'),
(400,'Linda','Carter');
SELECT * FROM demo_db.public.employees;
CREATE OR REPLACE TABLE demo_db.public.employees_clone
CLONE employees;
From demo: https://community.snowflake.com/s/article/cloning-in-snowflake
You may notice that I had to wrap the quoted values in ' ' for the INSERT statement to accept the data. I tried the INSERT below against the cloned table and received an error.
INSERT INTO demo_db.public.employees_clone VALUES(500,""'Mike'"",'Jones');
However this worked:
INSERT INTO demo_db.public.employees_clone VALUES(500,'"Mike"','Jones');
The SELECT * from the clone returned the same quoted values. Checking the column types:
desc table demo_db.public.employees_clone;
The type was still varchar; the string simply contained a literal " character.
Try DESC to see what happened. My guess is that the original table was loaded with the strings already wrapped in double quotes, or that whatever tool you are reading the data with is adding the quotes. Either way, please share the original data, or a sample of it, with support. If you are in the community portal, please see: https://support.snowflake.net/s/article/How-to-Get-Access-to-the-Case-Console-in-the-Lodge-Community
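If the double quotes really are stored in the data itself, one possible cleanup is to strip them with TRIM. This is only a sketch against the demo tables above, and it assumes every value is wrapped the same way:
-- Remove a leading/trailing " from the two varchar columns of the clone.
UPDATE demo_db.public.employees_clone
SET first_name = TRIM(first_name, '"'),
    last_name  = TRIM(last_name, '"');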

Related

I am inserting data from multiple tables into one, and I want the last column to be DBName so I know where each row originated. How do I do that?

I am creating the table below, and for BOTName I want to insert the name of the DB the data came from. I have no problem inserting the actual data; I am just trying to figure out how to label the rows so I know where the data originated.
Create Table #temp1
(
ID Integer,
RunDate DateTime,
BOTName TEXT
)
Just googled a bunch, but I could not find anything specific to my ask.
It depends on your particular syntax, but if you're using T-SQL you can simply add DB_NAME() to your insert statement:
INSERT INTO #temp1 (ID, RunDate, BOTName) VALUES
(1, CURRENT_TIMESTAMP, DB_NAME())
DB_NAME() returns the current database context:
USE master;
SELECT DB_NAME()
returns 'master'
It might also be worth pointing out that the TEXT data type is deprecated and you should use NVARCHAR(x) instead:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=sql-server-ver16
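If the data really does live in several databases, a hypothetical sketch of how DB_NAME() records the origin is to run the same insert from each database context (SalesDB, FinanceDB, and SourceTable are made-up names; the temp table survives the USE switches because it is session-scoped):
USE SalesDB;
INSERT INTO #temp1 (ID, RunDate, BOTName)
SELECT s.ID, CURRENT_TIMESTAMP, DB_NAME()   -- records 'SalesDB'
FROM dbo.SourceTable AS s;

USE FinanceDB;
INSERT INTO #temp1 (ID, RunDate, BOTName)
SELECT s.ID, CURRENT_TIMESTAMP, DB_NAME()   -- records 'FinanceDB'
FROM dbo.SourceTable AS s;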

Insert various value

I have a table like this.
create table help(
id number primary key,
number_s integer NOT NULL);
I had to set the value 0 for id 1 through id 915. I solved that in a simple way:
update help set number_s=0 where id<=915;
That one was easy.
Now I have to set numbers (different for every row) from id 915 to the last row.
I was doing
update help set number_s=51 where id=916;
update help set number_s=3 where id=917;
There are more than 1,000 rows to be updated; how can I do it quickly?
When I have had this kind of problem before, I used a sequence to auto-increment a value such as id, for example:
insert into help(id,number_s) values (id_sequence.nextval,16);
insert into help(id,number_s) values (id_sequence.nextval,48);
and so on. But that cannot be used here, because the ids start from 915 and not from 1. How can I do this quickly? I hope the problem is clear.
Since you have your ids and numbers in a file with a simple structure, it's a fairly small number of rows, and assuming this is something you're going to do once, honestly what I would do is pull the file into Excel, use the text functions to build the 1,000 insert statements, and cut and paste them wherever.
If those assumptions are incorrect, you could (1) use sqlldr to load this file into a temporary table and (2) run an update on your help table based on the rows in that temporary table.
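A rough sketch of step (2), assuming the file has already been loaded by sqlldr into a staging table tmp_seq(id, number_s):
-- Copy each staged number_s onto the matching help row.
UPDATE help h
SET h.number_s = (SELECT t.number_s FROM tmp_seq t WHERE t.id = h.id)
WHERE EXISTS (SELECT 1 FROM tmp_seq t WHERE t.id = h.id);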
As mentioned in the previous answer, and given your comment that the data is in a file on your system, you can use an external table / SQL*Loader to achieve the result.
Here is a demo:
-- Create an external table pointing to your file
CREATE TABLE "EXT_SEQUENCES" (
"ID" number ,
"number_s" number
)
ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER
DEFAULT DIRECTORY "<directory name>" ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
BADFILE 'bad_file.txt'
LOGFILE 'log_file.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' MISSING FIELD VALUES ARE NULL
) LOCATION ( '<file name>' )
) REJECT LIMIT UNLIMITED;
-- Now update your help table
MERGE INTO help H
USING ext_sequences E
ON ( H.id = E.id )
WHEN MATCHED THEN
UPDATE SET H.number_s = E.number_s;
Note: You need to change the access parameters of the external table according to your actual data in the file.
Hope you will get proper direction now.
Cheers!!

Is it OK to have separate column in Audit table to store column name to reflect what changes were made

Is it good practice to store the column name to indicate what changes were made to the data in the parent table, i.e. the change that triggered the audit?
Ex :-
create table employee
(
emp_id character varying(10),
fname character varying(30),
lname character varying(30),
tel_no character varying(15)
);
create table aud_employee
(
emp_id character varying(10),
fname character varying(30),
lname character varying(30),
tel_no character varying(15),
aud_col_changed character varying(100)
);
--
insert into employee values('215','Mark','Cooper','222-458-254');
This will also insert a record into the audit table through a trigger, with a null value in the aud_col_changed column.
Now when I update the same record:
update employee set tel_no='255-458-254' where emp_id='215';
An audit record would also be created for this update, so the audit table should now contain another row with the value 'tel_no' in the aud_col_changed column.
If multiple columns are changed at once, the column names would be comma-separated in the same field.
If this is the right approach, could you please describe the ways of achieving it?
Please note that the table on which I am trying to implement this approach has around 18 columns, of which 6-7 are JSON.
Your method is likely to be fine -- you should specify what you want to do with the audit table.
Personally, I would rather have an audit table structured in one of the following ways:
One row per column changed, with the old value and the new value.
One row per row changed, with all the columns appearing twice, once for the old value and once for the new value.
In other words, I usually want to see both the old and new values together.
The first method is tricky when dealing with columns that have different types. The second is tricky when you want to modify the structure of the table.
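For illustration, the first option could look something like this; the table and column names are made up, and old/new values are stored as plain text:
-- One row per changed column, keeping the old and new value side by side.
create table aud_employee_change
(
emp_id character varying(10),
column_name character varying(100),
old_value character varying(1000),
new_value character varying(1000),
changed_at timestamp default current_timestamp
);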
I did some more research and found that if we want to store the column name, the audit data needs to be written through a function. In that function we need to check each value passed against NOT NULL; if a value turns out to be not null, we hard-code the corresponding column name and assign it to a variable. Each additional NOT NULL value found appends its hard-coded column name to the main variable, until all the values passed to the function have been checked.
This will definitely degrade database performance, and making it run after every update is obviously not preferable.
Hence, I would prefer not to use the aud_col_changed column.

SQL Server - Select INTO statement stored in sys.tables

I know how to find the CREATE statement for a table in SQL Server, but is there any place that stores the actual SQL code if I use SELECT INTO ... to create a table? If so, how do I access it?
I see two ways of creating tables with SELECT INTO.
First: if you know the schema, you can declare a table variable and perform an INSERT ... SELECT into it.
Second: You can create a temp table:
SELECT * INTO #TempTable FROM Customer
There are some limitations with the second choice:
- You need to drop the temp table afterwards.
- If there is a VARCHAR column and the maximum number of characters in that given SELECT is, say, 123 characters, and you later try to insert rows with more characters into the temp table, it will throw an error.
My recommendation is to always declare the table before using it; it makes the intention clear and increases readability.
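A minimal sketch of the first approach, assuming a Customer table with these two columns exists:
-- Declare a table variable with an explicit schema, then load it.
DECLARE @Customers TABLE
(
CustomerID INT,
CustomerName NVARCHAR(100)
);

INSERT INTO @Customers (CustomerID, CustomerName)
SELECT CustomerID, CustomerName
FROM Customer;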

Setting field size (per column) while generating table in Access

I am trying to export my database as a .dbf by using a VBA script, but the .dbf requires the table to have certain values for the column size.
When I leave the columns as they are in Access, I get an error saying
field will not fit in record
How can I set the column size for each column separately? Preferably while generating the table, so I don't have to do it manually every time I generate a new table with queries.
And where do I set them? (in a query or in SQL?)
Thanks in advance!
Edit:
I have made sure that it's the field size value that is giving me the error. I changed all the field size values manually by opening the table in Design View.
So now the second part of my question is becoming more crucial: whether or not it is possible to set the field size while generating the table.
Edit2:
I am currently using SQL in a query to create the table as follows:
SELECT * INTO DB_Total
FROM Tags_AI_DB;
After the initial DB_Total is made, I use several Insert into queries to add other rows:
INSERT INTO DB_TOTAL
SELECT a.*
FROM Tags_STS_ENA_DB AS a
LEFT JOIN DB_TOTAL AS b
ON a.NAME = b.NAME
WHERE b.NAME IS NULL;
If I set the column values in the DB_Total table while generating it with the Select into query, will they still have those values after using the Insert Into queries to insert more rows?
Edit3:
I decided (after a few of your suggestions and some pointers from colleagues) that it would be better to first make my table and afterwards update this table with queries.
However, it seems like I have run into a dead end with Access. This is the code I am using:
CREATE TABLE DB_Total ("NAME" char(79),"TYPE" char(16), "UNIT" char(31),
"ADDR" char(254), "RAW_ZERO" char(11), "RAW_FULL" char(11), "ENG_ZERO" char(11),
"ENG_FULL" char(11), "ENG_UNIT" char(8), "FORMAT" char(11), "COMMENT" char(254),
"EDITCODE" char(8), "LINKED" char(1), "OID" char(10), "REF1" char(11), "REF2" char(11),
"DEADBAND" char(11), "CUSTOM" char(128), "TAGGENLINK" char(32), "CLUSTER" char(16),
"EQUIP" char(254), "ITEM" char(63), "HISTORIAN" char(6),
"CUSTOM1" char(254), "CUSTOM2" char(254), "CUSTOM3" char(254), "CUSTOM4" char(254),
"CUSTOM5" char(254), "CUSTOM6" char(254), "CUSTOM7" char(254), "CUSTOM8" char(254))
These are all the columns required for me to make a DBF file that is accepted by the application we are using it with.
You'll understand my sadness when this generated the following error:
Record is too large
Is there anything I can do to make this table work?
UPDATE
The maximum record size for Access 2007 is around 2kB (someone will no doubt correct that value).
When you create a CHAR(255) column it will use 255 bytes of space regardless of what is in the field.
By contrast, VARCHARs do not use up space (beyond what is needed to define them) until you put something in the field; they grow dynamically.
Changing the CHAR(x)s to VARCHAR(x)s will shrink the length of your table to within the permitted values. Be aware that you may run into trouble if a row you are trying to insert is larger than the 2kB limit.
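For example, the same definition with the CHAR(x)s swapped for VARCHAR(x)s (the column list is copied unchanged from the statement above):
CREATE TABLE DB_Total ("NAME" VARCHAR(79),"TYPE" VARCHAR(16), "UNIT" VARCHAR(31),
"ADDR" VARCHAR(254), "RAW_ZERO" VARCHAR(11), "RAW_FULL" VARCHAR(11), "ENG_ZERO" VARCHAR(11),
"ENG_FULL" VARCHAR(11), "ENG_UNIT" VARCHAR(8), "FORMAT" VARCHAR(11), "COMMENT" VARCHAR(254),
"EDITCODE" VARCHAR(8), "LINKED" VARCHAR(1), "OID" VARCHAR(10), "REF1" VARCHAR(11), "REF2" VARCHAR(11),
"DEADBAND" VARCHAR(11), "CUSTOM" VARCHAR(128), "TAGGENLINK" VARCHAR(32), "CLUSTER" VARCHAR(16),
"EQUIP" VARCHAR(254), "ITEM" VARCHAR(63), "HISTORIAN" VARCHAR(6),
"CUSTOM1" VARCHAR(254), "CUSTOM2" VARCHAR(254), "CUSTOM3" VARCHAR(254), "CUSTOM4" VARCHAR(254),
"CUSTOM5" VARCHAR(254), "CUSTOM6" VARCHAR(254), "CUSTOM7" VARCHAR(254), "CUSTOM8" VARCHAR(254))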
Previous
The way to specify column lengths when generating the table is to use a CREATE TABLE statement instead of a SELECT * INTO.
CREATE TABLE DB_Total
(
Column1Name NVARCHAR(255) --Use whatever datatype and length you need
,Column2Name NUMERIC(18,0) --Use whatever datatype and length you need
,...
) ;
INSERT INTO DB_Total
....
If you use a SELECT * INTO statement, SQL will use whatever field lengths and types it finds in the existing data.
It is also better practice to list the column names in your insert statement, so instead of
INSERT INTO DB_TOTAL
SELECT a.*
You should put:
INSERT INTO DB_Total
(
Column1Name
,Column2Name
,...
)
SELECT a.Column1Name
,a.Column2Name
,...
FROM ...
WHERE ... ;
In Edit2, you indicated your process starts with a "make table" (SELECT INTO) query which creates DB_Total and loads it with data from Tags_AI_DB. Then you run a series of "append" (INSERT) queries to add data from other tables.
Now your problem is that you need specific field size settings for DB_Total, but it is impossible to define those sizes with a "make table" query.
I think you should create DB_Total one time and set the field sizes as you wish. Do that manually with the table in Design View, or execute a CREATE TABLE statement if you prefer.
Then forget about the "make table" query and use only "append" queries to add the data.
If the issue is that this is a recurring operation and you want to discard previous data before importing the new, execute DELETE FROM DB_Total instead of DROP TABLE DB_Total. That will allow you to preserve the structure of the (now empty) DB_Total table so you needn't fiddle with setting the field sizes again.
Seems to me the only potential issue then might be if the structure of the source tables changes. If that happens, revise the structure of DB_Total so that it's compatible again.
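A rough sketch of that recurring cycle, using the tables from the question:
-- Clear out the previous run but keep DB_Total's structure and field sizes.
DELETE FROM DB_Total;

-- Reload with append queries only (explicit column lists, as recommended
-- above, are omitted here for brevity).
INSERT INTO DB_Total
SELECT a.*
FROM Tags_AI_DB AS a;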