Column added to table but does not show up - sql

I'm new to Postgres. I created a database and a table, then I added columns to the table:
ALTER TABLE NDQ01
ADD COLUMN Date, date
To verify, I did:
\d+ ndq01
and the output is:
Table "public.ndq01"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+------+-----------+----------+---------+---------+--------------+-------------
My new column name "Date" does not show up. What did I do wrong?
Thanks very much.
UPDATE:
This is the error from Putty:
dbfinance01-# alter table ndq01 add column Date date;
ERROR: syntax error at or near "alter"
LINE 2: alter table ndq01 add column Date date;

Looks like the alter table did not succeed. Try removing the comma:
alter table NDQ01 add column Date date;
Check out \h alter table in psql for more information.
To clarify based on other answers: according to the current docs, date is not a reserved word in Postgres and can be used as a column name (tested on version 11.2):
=# \d+ ndq01;
Table "public.ndq01"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+------+-----------+----------+---------+---------+--------------+-------------
date | date | | | | plain | |
SQL Key Words docs
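A side note beyond the answers above: unquoted identifiers are folded to lowercase in Postgres, so even after the corrected ALTER the column will be listed as date, not Date. If you really want the mixed-case name you have to quote it, and then quote it in every later query as well:
-- Quoted identifiers keep their case, but must be quoted everywhere afterwards.
ALTER TABLE ndq01 ADD COLUMN "Date" date;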

Related

how to show comment in presto after creating a table?

I created a table like below with some comment
CREATE TABLE some_catalog.some_schema.tmp (
address VARCHAR COMMENT 'address',
name VARCHAR COMMENT 'name'
)
COMMENT 'some comment'
;
How can users find that comment when they query the table? DESCRIBE does not show it:
presto> describe some_catalog.some_schema.tmp;
Column | Type | Extra | Comment
---------+---------+-------+---------
address | varchar | | address
name | varchar | | name
(2 rows)
Also, what is the Extra column for?
I'm using https://prestosql.io/
Just write: SHOW CREATE TABLE some_catalog.some_schema.tmp
This will show you pretty much the same command you wrote when creating the table, so you will be able to see the comment as well.
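If you would rather read the comment back with an ordinary query, recent Presto releases also expose it through the system.metadata.table_comments table; this is only a sketch, so confirm the table exists in your version:
SELECT comment
FROM system.metadata.table_comments
WHERE catalog_name = 'some_catalog'
  AND schema_name = 'some_schema'
  AND table_name = 'tmp';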

What is the best way to change the type of a column in a SQL Server database, if there is data in said column?

If I have the following table:
| name | value |
------------------
| A | 1 |
| B | NULL |
Where at the moment name is of type varchar(10) and value is of type bit.
However, I want to change this table so that value is actually an nvarchar(3), and I don't want to lose any of the information during the change. In the end I want a table that looks like this:
| name | value |
------------------
| A | Yes |
| B | No |
What is the best way to convert this column from one type to another, and also convert all of the data in it according to a pre-determined translation?
NOTE: I am aware that if I was converting, say, a varchar(50) to varchar(200), or an int to a bigint, then I can just alter the table. But I require a similar procedure for a bit to a nvarchar, which will not work in this manner.
The best option is to ALTER bit to varchar and then run an update to change 1 to 'Yes' and 0 or NULL to 'No'
This way you don't have to create a new column and then rename it later.
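A minimal sketch of that in-place approach, with an illustrative table name MyTable; after the ALTER the bit values arrive as the strings '1' and '0':
-- Change the type first, then translate the values in place.
ALTER TABLE MyTable ALTER COLUMN value nvarchar(3) NULL;

UPDATE MyTable
SET value = CASE WHEN value = '1' THEN 'Yes' ELSE 'No' END;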
Alex K's comment to my question was the best.
Simplest and safest: add a new column, update it with the transform, drop the existing column, then rename the new column (the full sequence is sketched after the UPDATE below).
Transform each item with a simple:
UPDATE Table
SET temp_col = CASE
        WHEN value = 1 THEN 'yes'
        ELSE 'no'
    END
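For completeness, a sketch of the full add/update/drop/rename sequence described above; MyTable and temp_col are illustrative names:
-- 1. Add the new column alongside the old one.
ALTER TABLE MyTable ADD temp_col nvarchar(3) NULL;

-- 2. Populate it from the old bit column.
UPDATE MyTable
SET temp_col = CASE WHEN value = 1 THEN 'Yes' ELSE 'No' END;

-- 3. Drop the old column and take over its name.
ALTER TABLE MyTable DROP COLUMN value;
EXEC sp_rename 'MyTable.temp_col', 'value', 'COLUMN';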
You should be able to change the data type from a bit to an nvarchar(3) without issue. The values will just turn from a bit 1 to a string "1". After that you can run some SQL to update the "1" to "Yes" and "0" to "No".
I don't have SQL Server 2008 locally, but I did try this on 2012. Create a small table and test it first, and back up your data to be safe.

how to set column data type in db2 sql

I wanted to change the length of EN_NO from 21 to 16 in the SQL table TB_TRANSACTION. Below are my current column definitions.
sql command -
describe table tb_transaction
column | type schema | type name | length | scale | nulls
-------+-------------+-----------+--------+-------+------
EN_NO  | SYSIBM      | VARCHAR   |     21 |     0 | Yes
I tried this command, but it failed:
alter table tb_transaction alter column EN_NO set data type varchar(16)
Error message:
SQL0190N ALTER TABLE "EASC.TB_TRANSACTION" specified attributes for column
"EN_NO" that are not compatible with the existing column. SQLSTATE=42837
Any help would be appreciated.
You can increase the size of a column, but you cannot decrease it: data could be lost, which is why the system does not allow it.
If you still want to decrease the size, you need to drop the column and add it again.
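If the existing values fit into 16 characters (or can safely be truncated), a data-preserving variant of drop-and-re-add can be sketched as below; the new column name is illustrative, and DB2 typically requires a REORG after dropping a column:
-- Add a shorter column, copy the (truncated) values across, then swap it in.
ALTER TABLE tb_transaction ADD COLUMN en_no_new VARCHAR(16);

UPDATE tb_transaction SET en_no_new = SUBSTR(en_no, 1, 16);

ALTER TABLE tb_transaction DROP COLUMN en_no;

-- The table is left in reorg-pending state after DROP COLUMN.
CALL SYSPROC.ADMIN_CMD('REORG TABLE tb_transaction');

ALTER TABLE tb_transaction RENAME COLUMN en_no_new TO en_no;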

MySQL data version control

Is there any way to set up MySQL so that, every time a row is changed, a row is created in another table/database containing the original data (with a timestamp)?
If so, how would I go about doing it?
E.g.
UPDATE `live_db`.`people`
SET `live_db`.`people`.`name` = 'bob'
WHERE `id` = 1;
Causes this to happen before the update:
INSERT INTO `changes_db`.`people`
SELECT *
FROM `live_db`.`people`
WHERE `live_db`.`people`.`id` = 1;
And if you did it again it would result in something like this:
`live_db`.`people`
+----+-------+---------------------+
| id | name | created |
+----+-------+---------------------+
| 1 | jones | 10:32:20 12/06/2010 |
+----+-------+---------------------+
`changes_db`.`people`
+----+-------+---------------------+
| id | name | updated |
+----+-------+---------------------+
| 1 | billy | 12:11:25 13/06/2010 |
| 1 | bob | 03:01:54 14/06/2010 |
+----+-------+---------------------+
The live DB needs to have a created time stamp on the rows, and the changes DB needs to have a time stamp of when the live DB row was updated.
The changes DB will also have no primary keys and foreign key constraints.
I'm using InnoDB and MySQL 5.1.49 but can upgrade if required.
Use a Trigger
MySQL support for triggers started with MySQL version 5.0.2.
You can create a trigger:
DELIMITER $$
CREATE TRIGGER logtrigger BEFORE UPDATE ON live_db.people
FOR EACH ROW BEGIN
    INSERT INTO changes_db.people (id, name, updated) VALUES (OLD.id, OLD.name, NOW());
END$$
DELIMITER ;
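A quick way to sanity-check the trigger is to run the UPDATE from the question and look at the log table; this is only a usage sketch and assumes changes_db.people already exists with an updated column:
UPDATE `live_db`.`people`
SET `name` = 'bob'
WHERE `id` = 1;

# The pre-update row should now appear in the log with its timestamp.
SELECT * FROM `changes_db`.`people` WHERE `id` = 1;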
This is how I ended up doing it
DELIMITER |

# Create the log table
CREATE TABLE IF NOT EXISTS `DB_LOG`.`TABLE` LIKE `DB`.`TABLE`|

# Remove any auto increment (CHANGE takes bare column names, not qualified ones)
ALTER TABLE `DB_LOG`.`TABLE` CHANGE `PK` `PK` INT UNSIGNED NOT NULL|

# Drop the primary key
ALTER TABLE `DB_LOG`.`TABLE` DROP PRIMARY KEY|

# Create the trigger
DROP TRIGGER IF EXISTS `DB`.`update_TABLE`|
CREATE TRIGGER `DB`.`update_TABLE` BEFORE UPDATE ON `DB`.`TABLE` FOR EACH ROW
BEGIN
    INSERT INTO `DB_LOG`.`TABLE`
    SELECT `DB`.`TABLE`.*
    FROM `DB`.`TABLE`
    WHERE `DB`.`TABLE`.`PK` = NEW.`PK`;
END|

DELIMITER ;
Sorry to comment on an old post, but I was looking to solve this exact problem! Thought I would share this information.
This outlines a solution perfectly:
http://www.hirmet.com/mysql-versioning-records-of-tables-using-triggers

Making PostgreSQL a little more error tolerant?

This is sort of a general question that has come up in several contexts; the example below is representative but not exhaustive. I am interested in any ways of learning to work with Postgres on imperfect (but close enough) data sources.
The specific case -- I am using Postgres with PostGIS for working with government data published in shapefiles and xml. Using the shp2pgsql module distributed with PostGIS (for example on this dataset) I often get schema like this:
Column | Type |
------------+-----------------------+-
gid | integer |
st_fips | character varying(7) |
sfips | character varying(5) |
county_fip | character varying(12) |
cfips | character varying(6) |
pl_fips | character varying(7) |
id | character varying(7) |
elevation | character varying(11) |
pop_1990 | integer |
population | character varying(12) |
name | character varying(32) |
st | character varying(12) |
state | character varying(16) |
warngenlev | character varying(13) |
warngentyp | character varying(13) |
watch_warn | character varying(14) |
zwatch_war | bigint |
prog_disc | bigint |
zprog_disc | bigint |
comboflag | bigint |
land_water | character varying(13) |
recnum | integer |
lon | numeric |
lat | numeric |
the_geom | geometry |
I know that at least 10 of those varchars -- the fips, elevation, population, etc., should be ints; but when trying to cast them as such I get errors. In general I think I could solve most of my problems by allowing Postgres to accept an empty string as a default value for a column -- say 0 or -1 for an int type -- when altering a column and changing the type. Is this possible?
If I create the table before importing with the type declarations generated from the original data source, I get better types than with shp2pgsql, and can iterate over the source entries feeding them to the database, discarding any failed inserts. The fundamental problem is that if I have 1% bad fields, evenly distributed over 25 columns, I will lose 25% of my data since a given insert will fail if any field is bad. I would love to be able to make a best-effort insert and fix any problems later, rather than lose that many rows.
Any input from people having dealt with similar problems is welcome -- I am not a MySQL guy trying to batter PostgreSQL into making all the same mistakes I am used to -- just dealing with data I don't have full control over.
Could you produce a SQL file from shp2pgsql and do some massaging of the data before executing it? If the data is in COPY format, it should be easy to parse and change "" to "\N" (insert as null) for columns.
Another possibility would be to use shp2pgsql to load the data into a staging table where all the fields are defined as just 'text' type, and then use an INSERT...SELECT statement to copy the data to your final location, with the possibility of massaging the data in the SELECT to convert blank strings to null etc.
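A minimal sketch of that staging-table approach, with made-up table and column names; NULLIF turns an empty string into NULL so the cast to integer does not fail:
-- Staging table: everything as text, loaded by shp2pgsql / COPY.
CREATE TABLE places_staging (name text, elevation text, population text);

-- Final table with the types you actually want.
CREATE TABLE places (name text, elevation integer, population integer);

-- Convert while copying; blank strings become NULL instead of raising a cast error.
INSERT INTO places (name, elevation, population)
SELECT name,
       NULLIF(elevation, '')::integer,
       NULLIF(population, '')::integer
FROM places_staging;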
I don't think there's a way to override the behaviour of how strings are converted to ints and so on: possibly you could create your own type or domain, and define an implicit cast that was more lenient... but this sounds pretty nasty, since the types are really just artifacts of how your data arrives in the system and not something you want to keep around after that.
You asked about fixing it up when changing the column type: you can do that too, for example:
steve#steve#[local] =# create table test_table(id serial primary key, testvalue text not null);
NOTICE: CREATE TABLE will create implicit sequence "test_table_id_seq" for serial column "test_table.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "test_table_pkey" for table "test_table"
CREATE TABLE
steve#steve#[local] =# insert into test_table(testvalue) values('1'),('0'),('');
INSERT 0 3
steve#steve#[local] =# alter table test_table alter column testvalue type int using case testvalue when '' then 0 else testvalue::int end;
ALTER TABLE
steve#steve#[local] =# select * from test_table;
id | testvalue
----+-----------
1 | 1
2 | 0
3 | 0
(3 rows)
Which is almost equivalent to the "staging table" idea I suggested above, except that now the staging table is your final table. Altering a column type like this requires rewriting the entire table anyway: so actually, using a staging table and reformatting multiple columns at once is likely to be more efficient.
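Since the table is rewritten once per ALTER TABLE statement, several columns can also be converted in one statement. A sketch using column names from the shapefile schema above (the table name is illustrative, and the USING expressions should be adapted to the data):
ALTER TABLE places
    ALTER COLUMN elevation TYPE integer USING NULLIF(elevation, '')::integer,
    ALTER COLUMN population TYPE integer USING NULLIF(population, '')::integer;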