Generating a decrementing ID while inserting data on a Teradata table - sql

I'm trying to insert data from a query (or a volatile table) into another table which has an id column (type SMALLINT with a NOT NULL constraint) that should be unique, on Teradata using Teradata SQL Assistant. The current MIN(id) is -5, and I need to insert the new data with lower ids.
This is a simple example:
table a
id | aa     | bb
---+--------+-------
-3 | text   | text_2
-5 | text_3 | text_4
and the data I should insert is, for example:
aa     | bb
-------+--------
text_5 | text_6
text_7 | text_8
text_9 | text_10
so the result should look like:
id | aa     | bb
---+--------+--------
-3 | text   | text_2
-5 | text_3 | text_4
-6 | text_5 | text_6
-7 | text_7 | text_8
-8 | text_9 | text_10
I tried creating a volatile table with a generated identity column defined to start at -5 and increment by -1 with NO CYCLE.
But I get this error:
Expected something like a name or a unicode delimited identifier or a cycle keyword between an integer and ','
Is there any other way to do it, please?
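One possible alternative that sidesteps the identity-column syntax entirely (a sketch only, not tested; target_table and source_table are hypothetical names for your target and source) is to derive the new ids from the current minimum with ROW_NUMBER() during the INSERT ... SELECT:

-- Sketch: each new row gets MIN(id) - 1, MIN(id) - 2, ... in the order of s.aa
INSERT INTO target_table (id, aa, bb)
SELECT CAST(m.min_id - ROW_NUMBER() OVER (ORDER BY s.aa) AS SMALLINT) AS id,
       s.aa,
       s.bb
FROM source_table s
CROSS JOIN (SELECT MIN(id) AS min_id FROM target_table) m;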

Related

How to call a column named "group" in Snowflake?

I have a table in Snowflake with the following structure:
| id | group  | subgroup |
|----|--------|----------|
| 1  | verst  | burg     |
| 2  | travel | plane    |
| 3  | rest   | bet      |
I need to call only the column "group", so I tried the following code:
select t2.group
from table as t2
but the following error arises
SQL compilation error: syntax error line 1 at position 7 unexpected 'group'. syntax error line 2 at position 0 unexpected 'from'.
I have also tried using:
select group
from table as t2
select "group"
from table as t2
but I always get the same error.
I know I can call the whole table using * but the real table where I get this data from has many more columns and we want to display this data in a dashboard. Additionally, I am not the owner of the table since it is filled by a microservice, so I cannot change the column names and I can't modify the microservice process.
I would appreciate any suggestion.
Given the table could not have been created without double quotes, you need to know how it was created in order to know how to refer to the column. Which is to say, if the create code was CREATE TABLE awsome ("GrOuP" string); then you will need to type "GrOuP".
There is also a session setting to ignore case inside double quotes that might help; see QUOTED_IDENTIFIERS_IGNORE_CASE.
But by default identifiers are upper case, thus try "GROUP".
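For example, assuming the column was created without quotes (and is therefore stored upper-case as GROUP), the session setting could be used like this (a sketch; the table name my_table is hypothetical):

ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;
-- with the setting on, the quoted identifier is resolved case-insensitively
SELECT "group" FROM my_table;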
Putting group in double quotes worked fine when I tried it:
create or replace temporary table foo ( "group" string );
insert into foo values ('Hello world.');
select "group" from foo;

Adding column to sqlite database and distribute rows based on primary key

I have some data elements containing a timestamp and information about Item X sales related to this timestamp.
e.g.
timestamp | items X sold
----------+-------------
    1     |      10
    4     |      40
    7     |      20
I store this data in an SQLite table. Now I want to add to this table, especially when I get data about another item Y.
The item Y data might or might not have different timestamps, but I want to insert this data into the existing table so that it looks like this:
timestamp | items X sold | items Y sold
----------+--------------+-------------
    1     |      10      |      5
    2     |     NULL     |     10
    4     |      40      |     NULL
    5     |     NULL     |      3
    7     |      20      |     NULL
Later on, additional sales data (columns) must be added following the same scheme.
Is there an easy way to accomplish this with SQLite?
In the end I want to fetch data by timestamp and get an overview of which items were sold at that time. Most examples cover the use case of adding a complete row (one record), or a complete column if it perfectly matches the other columns.
Or is SQLite the wrong tool altogether? Should I rather use CSV or Excel?
(Using Python's sqlite3 package to create and manipulate the DB.)
Thanks!
Dynamically adding columns is not a good design. You could add them using
ALTER TABLE your_table ADD COLUMN the_column_name TEXT
The column, for existing rows, would be populated with NULLs, although you could specify a DEFAULT value, in which case the existing rows would be populated with that value.
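For instance, the DEFAULT variant might look like this (a sketch using the placeholder names above):

-- existing rows get 0 in the new column instead of NULL
ALTER TABLE your_table ADD COLUMN the_column_name INTEGER DEFAULT 0;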
e.g. the following demonstrates the above :-
DROP TABLE IF EXISTS soldv1;
CREATE TABLE IF NOT EXISTS soldv1 (timestamp INTEGER PRIMARY KEY, items_sold_x INTEGER);
INSERT INTO soldv1 VALUES(1,10),(4,40),(7,20);
SELECT * FROM soldv1 ORDER BY timestamp;
ALTER TABLE soldv1 ADD COLUMN items_sold_y INTEGER;
UPDATE soldv1 SET items_sold_y = 5 WHERE timestamp = 1;
INSERT INTO soldv1 VALUES(2,null,10),(5,null,3);
SELECT * FROM soldv1 ORDER BY timestamp;
resulting in the first query returning :-

timestamp | items_sold_x
----------+-------------
    1     |     10
    4     |     40
    7     |     20

and the second query returning :-

timestamp | items_sold_x | items_sold_y
----------+--------------+--------------
    1     |      10      |       5
    2     |     NULL     |      10
    4     |      40      |     NULL
    5     |     NULL     |       3
    7     |      20      |     NULL
However, as stated, the above is not considered a good design as the schema is dynamic.
You could alternatively manage an equivalent of the above with the addition of either a new column (that is also part of the primary key) or by prefixing/suffixing the timestamp with a type.
Consider, as an example, the following :-
DROP TABLE IF EXISTS soldv2;
CREATE TABLE IF NOT EXISTS soldv2 (type TEXT, timestamp INTEGER, items_sold INTEGER, PRIMARY KEY(timestamp,type));
INSERT INTO soldv2 VALUES('x',1,10),('x',4,40),('x',7,20);
INSERT INTO soldv2 VALUES('y',1,5),('y',2,10),('y',5,3);
INSERT INTO soldv2 VALUES('z',1,15),('z',2,5),('z',9,25);
SELECT * FROM soldv2 ORDER BY timestamp;
This has replicated, data-wise, your original data and additionally added another type ('z', which in the first design would have been the column items_sold_z) without having to change the table's schema (and without the additional complication of needing an UPDATE rather than an INSERT, as was required when applying timestamp 1 / items_sold_y = 5).
The result from the query being :-

type | timestamp | items_sold
-----+-----------+-----------
 x   |     1     |    10
 y   |     1     |     5
 z   |     1     |    15
 y   |     2     |    10
 z   |     2     |     5
 x   |     4     |    40
 y   |     5     |     3
 x   |     7     |    20
 z   |     9     |    25
Or is SQLite the wrong tool altogether? Should I rather use CSV or Excel?
SQLite is a valid tool. What you then do with the data can probably be done as easily as in Excel (perhaps more simply), and probably much more simply than trying to process the data in CSV format.
For example, say you wanted the total items sold per timestamp and how many types were sold then :-
SELECT timestamp,
       count(items_sold) AS number_of_item_types_sold,
       sum(items_sold) AS total_sold
FROM soldv2
GROUP BY timestamp
ORDER BY timestamp;
would result in :-

timestamp | number_of_item_types_sold | total_sold
----------+---------------------------+-----------
    1     |             3             |     30
    2     |             2             |     15
    4     |             1             |     40
    5     |             1             |      3
    7     |             1             |     20
    9     |             1             |     25

Changing a column type from integer to string

Using PostgreSQL, what's the command to migrate an integer column type to a string column type?
Obviously I'd like to preserve the data, by converting the old integer data to strings.
You can convert from INTEGER to CHARACTER VARYING out of the box; all you need is an ALTER TABLE query changing the column type:
SQL Fiddle
PostgreSQL 9.3 Schema Setup:
CREATE TABLE tbl (col INT);
INSERT INTO tbl VALUES (1), (10), (100);
ALTER TABLE tbl ALTER COLUMN col TYPE CHARACTER VARYING(10);
Query 1:
SELECT col, pg_typeof(col) FROM tbl
Results:
| col | pg_typeof         |
|-----|-------------------|
| 1   | character varying |
| 10  | character varying |
| 100 | character varying |
I suggest a five-step process:
1. Create a new string column; name it temp for now. See http://www.postgresql.org/docs/9.3/static/ddl-alter.html for details.
2. Populate the string column with something like update myTable set temp = cast(intColumn as text); see http://www.postgresql.org/docs/9.3/static/functions-formatting.html for more interesting number->string conversions.
3. Make sure everything in temp looks the way you want it.
4. Remove your old integer column. Once again, see http://www.postgresql.org/docs/9.3/static/ddl-alter.html for details.
5. Rename temp to the old column name. Again: http://www.postgresql.org/docs/9.3/static/ddl-alter.html
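Put together, the steps might look like this (a sketch only; myTable and intColumn are the placeholder names from the list above):

ALTER TABLE myTable ADD COLUMN temp TEXT;               -- 1. new string column
UPDATE myTable SET temp = CAST(intColumn AS TEXT);      -- 2. copy/convert the data
-- 3. inspect: SELECT intColumn, temp FROM myTable;
ALTER TABLE myTable DROP COLUMN intColumn;              -- 4. drop the old integer column
ALTER TABLE myTable RENAME COLUMN temp TO intColumn;    -- 5. rename temp to the old name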
This assumes you can perform the operation while no clients are connected (offline). If you need to make this (drastic) change on an online table, take a look at setting up a new table with triggers for live updates, then swapping to the new table in an atomic operation; see ALTER TABLE without locking the table?

SQL - keep values with UPDATE statement

I have a table "news" with 10 rows and cols (uid, id, registered_users, ....) Now i have users that can log in to my website (every registered user has a user id). The user can subscribe to a news on my website.
In SQL that means: I need to select the table "news", find the row with the uid (from the news item), and insert the user id (from the current user) into the column "registered_users".
INSERT INTO news (registered_users)
VALUES (user_id)
The INSERT statement has no WHERE clause, so I need an UPDATE statement instead.
UPDATE news
SET registered_users=user_id
WHERE uid=post_news_uid
But if more than one user subscribes to the same news item, the old user id in "registered_users" is lost...
Is there a way to keep the current values after an SQL UPDATE statement?
I use PHP (MySQL). The goal is this:
table "news" row 5 (uid) column "registered_users" (22,33,45)
--- 3 users have subscribed to the news with the uid 5
table "news" row 7 (uid) column "registered_users" (21,39)
--- 2 users have subscribed to the news with the uid 7
It sounds like you are asking to insert a new user, to change a row in news from:
5 22,33
and then user 45 signs up, and you get:
5 22,33,45
If I don't understand, let me know. The rest of this solution is an excoriation of this approach.
This is a bad, bad, bad way to store data. Relational databases are designed around tables that have rows and columns. Lists should be represented as multiple rows in a table, and not as string concatenated values. This is all the worse, when you have an integer id and the data structure has to convert the integer to a string.
The right way is to introduce a table, say NewsUsers, such as:
create table NewsUsers (
    NewsUserId int identity(1, 1) primary key,
    NewsId int not null,
    UserId int not null,
    CreatedAt datetime default getdate(),
    CreatedBy varchar(255) default suser_sname()
);
I showed this syntax using SQL Server. The column NewsUserId is an auto-incrementing primary key for this table. The column NewsId is the news item (5 in your first example). The column UserId is the user id that signed up. The columns CreatedAt and CreatedBy are handy columns that I put in almost all my tables.
With this structure, you would handle your problem by doing:
insert into NewsUsers (NewsId, UserId)
    select 5, <userid>;
You should create an additional table to map users to the news items they have registered for, like:
create table user_news (user_id int, news_id int);
that looks like:
----------------
| News | Users |
----------------
|  5   |  22   |
|  5   |  33   |
|  5   |  45   |
|  7   |  21   |
| ...  | ...   |
----------------
Then you can use multiple queries to first retrieve the news_id and the user_id, store them in variables (depending on what language you use), and then insert them into user_news.
The advantage is that finding all users of a news item is much faster, because you don't have to parse every single id string like "(22, 33, 45)".
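For example (a sketch using the user_news table above):

-- user 45 subscribes to news item 5
insert into user_news (user_id, news_id) values (45, 5);
-- all subscribers of news item 5, no string parsing needed
select user_id from user_news where news_id = 5;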
It sounds like you want to INSERT with a SELECT statement - INSERT with SELECT
Example:
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1
WHERE tbl_temp1.fld_order_id > 100;

How to select a table dynamically with HSQLDB and Hibernate?

I have a table with references to other tables. Stored are the table name and the entity id.
Like this:
ref_table
id | table_name | refId
-------+------------+-------
1 | test | 6
2 | test | 9
3 | other | 5
Now I am trying to formulate an SQL function that returns the correct entities from the correct tables. Something like:
SELECT * FROM resolveId(3)
I would expect to get the entity with the id "5" from the table "other". Is this possible? I would guess I can do it with a stored procedure (CREATE FUNCTION). The function would have to inspect the "ref_table" and return the name of the table to use in the SQL statement ... but how exactly?
If you want to use the resulting entities in select statements or joins, you should use CREATE FUNCTION with RETURNS TABLE ( .. ).
There is a limitation in HSQLDB routines which disallows dynamically creating SQL. Therefore the body of the CREATE FUNCTION may include a CASE or IF ELSE block that switches to a pre-defined SELECT statement based on the input value (1, 2, 3, ..).
The details of CREATE FUNCTION are documented here:
http://hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#N12CC4
There is one example for an SQL function with RETURNS TABLE.
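A rough sketch of what such a function could look like (assuming, purely for illustration, that both referenced tables test and other have columns id and name; see the guide above for the exact routine syntax):

CREATE FUNCTION resolveId(p_ref_id INT)
  RETURNS TABLE (id INT, name VARCHAR(100))
  READS SQL DATA
  BEGIN ATOMIC
    DECLARE v_table VARCHAR(128);
    DECLARE v_id INT;
    -- look up which table and which entity id the reference points to
    SET v_table = (SELECT table_name FROM ref_table WHERE id = p_ref_id);
    SET v_id = (SELECT refId FROM ref_table WHERE id = p_ref_id);
    -- switch to a pre-defined SELECT per table, since dynamic SQL is not allowed
    IF v_table = 'test' THEN
      RETURN TABLE (SELECT id, name FROM test WHERE id = v_id);
    ELSEIF v_table = 'other' THEN
      RETURN TABLE (SELECT id, name FROM other WHERE id = v_id);
    END IF;
  END

The function could then be queried with something like SELECT * FROM TABLE(resolveId(3)).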