SQLite table with some rows missing a column

I have a table in a SQLite database that looks something like this, but with more columns and rows:
| Field1 | Field2 |
|---------|---------|
| A | 1 |
| B | 2 |
| C | |
What I need to do is run a SQL query like this:
SELECT * FROM <tablename> WHERE <conditions> ORDER BY Field2
The problem is, I'm getting the error: no such column: Field2
So now I've been asked to set all the missing values to 99. But when I run
UPDATE <tablename> SET Field2='99' WHERE Field2 IS NULL;
I get the same error. How do I fix this and update all those missing cells?
EDIT: I should also add that the missing values don't seem to be NULL: if I add a new column in my database GUI browser, all of its cells show as [NULL], but the cells in this column don't.

This turned out to be caused by a very subtle problem in the table:
Several of the column names (the ones that were causing me problems) ended in a newline (\n). Removing the newlines solved all my problems!
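For anyone hitting the same thing: listing the column names shows the stray newline, and the column can then be renamed with the broken name quoted. A minimal sketch (tablename is a placeholder, and the RENAME COLUMN form needs SQLite 3.25 or later):

PRAGMA table_info(tablename);  -- shows the column names exactly as stored, trailing newline included

ALTER TABLE tablename RENAME COLUMN "Field2
" TO "Field2";  -- the old name is quoted with its embedded newline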

Related

Updating a PostgreSQL column that contains dot (.) in its name

I need to update the value of a row, but the column name contains a dot.
I tried name.name but it didn't work, even though that seems to work in MySQL.
How can I do this in PostgreSQL? I swear I searched all over before creating this thread.
Thanks
UPDATE:
Thanks for the quick answers. I tried to use "" but this is the result:
ERROR: column "name.name" of relation "my_table" does not exist
My query:
update my_table set "name.name"='a081613e-2e28-4cae-9ff7-4eaa9c918352';
You can use "" around the column name
Wrap name with double quotation marks: "name.name"
UPD: You wrote: "Thanks for the quick answers, I tried to use "" but this is the result". Are you sure that is actually your case? Here is a quick demo:
psql (13.2)
Type "help" for help.
postgres=# CREATE DATABASE example_db;
CREATE DATABASE
postgres=# \c example_db
You are now connected to database "example_db" as user "postgres".
example_db=# CREATE TABLE example_table ("example.field" int);
CREATE TABLE
example_db=# \d example_table
Table "public.example_table"
Column | Type | Collation | Nullable | Default
---------------+---------+-----------+----------+---------
example.field | integer | | |
example_db=# SELECT "example.field" FROM example_table;
example.field
---------------
(0 rows)
example_db=# SELECT "example_table"."example.field" FROM example_table;
example.field
---------------
(0 rows)
example_db=#
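If the quoted name still fails with an error like the one above, the column may not be stored as exactly name.name (stray spaces or other invisible characters, much like the SQLite question above). A minimal sketch to check what is actually stored (assumes the table is my_table from the question):

SELECT column_name
FROM information_schema.columns
WHERE table_name = 'my_table';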

Use SSIS to split a single field value into multiple rows in a second table

So the situation is that I am writing an SSIS package to migrate data from an old database to a new one. In the old database we have a text column called comments that sometimes holds as much as 30 MB of text. Most of these are comment threads with timestamps. I would like to split the data on those timestamps, using a regex or some such thing, and move it to a second child table called comments, which also needs to reference the PK of the original record. Thanks!
So
Table1 [Profile]
PK | Comments
1 | '<timestamp> blah <timestamp> blah blah'
will turn into
Table1 [Profile]
PK | Comments
1 | ''
Table2 [Comments]
PK | FK | Comment
1 | 1 | '<timestamp> blah'
2 | 1 | '<timestamp> blah blah'
As wp78de suggested, I resolved this by creating a script task that modifies the output as it is copied.
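For reference, the child table implied by that layout might be defined something like this (a sketch only; the table and column names come from the illustration above, and the types are assumed):

CREATE TABLE Comments (
    PK INT IDENTITY(1,1) PRIMARY KEY,           -- surrogate key for each extracted comment
    FK INT NOT NULL REFERENCES Profile(PK),     -- points back to the original Profile record
    Comment NVARCHAR(MAX)                       -- one '<timestamp> ...' chunk per row
);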

How to call a column named "group" in Snowflake?

I have a table in Snowflake with the following structure:
| id | group  | subgroup |
|----|--------|----------|
| 1  | verst  | burg     |
| 2  | travel | plane    |
| 3  | rest   | bet      |
I need to call only the column "group", so I tried the following code:
select t2.group
from table as t2
but the following error arises
SQL compilation error: syntax error line 1 at position 7 unexpected 'group'. syntax error line 2 at position 0 unexpected 'from'.
I have also tried using:
select group
from table as t2
select "group"
from table as t2
but I always get the same error.
I know I can call the whole table using * but the real table where I get this data from has many more columns and we want to display this data in a dashboard. Additionally, I am not the owner of the table since it is filled by a microservice, so I cannot change the column names and I can't modify the microservice process.
I would appreciate any suggestion.
Given that the table could not have been created without double quotes around that column name, you need to know how it was created to know how to refer to the column. Which is to say, if the create code was CREATE TABLE awsome ("GrOuP" string); then you will need to type "GrOuP".
There is also a session setting that ignores case inside double quotes, which might help:
see QUOTED_IDENTIFIERS_IGNORE_CASE
But by default identifiers are upper case, so try "GROUP" first.
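A quick way to check how the name is actually stored (a sketch; awsome is the hypothetical table from the create statement above):

DESCRIBE TABLE awsome;  -- the output lists each column name with its exact stored casing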
Putting group in double quotes worked fine when I tried it:
create or replace temporary table foo ( "group" string );
insert into foo values ('Hello world.');
select "group" from foo;

Sequential update statements

When using multiple SETs on a single update query like
update table set col1=value1,col2=col1
is there an order of execution that decides the outcome when the same column appears both on the left and on the right of an equals sign? As far as I've tested so far, when a column is used on the right of an equals sign as a data source, the value used is the one from BEFORE that column gets a new value by appearing on the left of an equals sign elsewhere in the same UPDATE statement.
I believe that SQL Server always uses the old values when performing an UPDATE. This would best be explained by showing some sample data for your table:
col1 | col2
1 | 3
2 | 8
3 | 10
update table set col1=value1,col2=col1
At the end of this UPDATE, the table should look like this:
col1 | col2
value1 | 1
value1 | 2
value1 | 3
This behavior for UPDATE is part of the ANSI-92 SQL standard, as this SO question discusses:
SQL UPDATE read column values before setting
Here is another link which discusses this problem with an example:
http://dba.fyicenter.com/faq/sql_server/Using_Old_Values_to_Define_New_Values_in_UPDATE_Statements.html
You can assume that in general SQL Server puts some sort of lock on the table during an UPDATE, and uses a snapshot of the old values throughout the entire UPDATE statement.
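A classic way to see this snapshot behavior in action (a sketch; t, col1, and col2 are placeholder names) is the swap idiom, which only works because both right-hand sides read the pre-update values:

UPDATE t SET col1 = col2, col2 = col1;  -- swaps the two columns instead of making them equal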

What is the best way to change the type of a column in a SQL Server database, if there is data in said column?

If I have the following table:
| name | value |
------------------
| A | 1 |
| B | NULL |
Where at the moment name is of type varchar(10) and value is of type bit.
I want to change this table so that value is an nvarchar(3) instead, and I don't want to lose any of the information during the change. So I want to end up with a table that looks like this:
| name | value |
------------------
| A | Yes |
| B | No |
What is the best way to convert this column from one type to another, and also convert all of the data in it according to a pre-determined translation?
NOTE: I am aware that if I were converting, say, a varchar(50) to a varchar(200), or an int to a bigint, then I could just alter the table. But I need a similar procedure for converting a bit to an nvarchar, which will not work in this manner.
The best option is to ALTER the bit column to varchar and then run an update to change 1 to 'Yes' and 0 or NULL to 'No'.
This way you don't have to create a new column and then rename it later.
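A sketch of that approach (MyTable is a placeholder name; after the ALTER, the old bit values come across as the strings '1' and '0', and NULLs stay NULL):

ALTER TABLE MyTable ALTER COLUMN value nvarchar(3) NULL;

UPDATE MyTable
SET value = CASE WHEN value = '1' THEN 'Yes' ELSE 'No' END;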
Alex K's comment to my question was the best.
Simplest and safest; Add a new column, update with transform, drop existing column, rename new column
Transforming each item with a simple:
UPDATE [Table]
SET temp_col = CASE
                   WHEN value = 1 THEN 'Yes'
                   ELSE 'No'
               END;
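For completeness, the surrounding steps of that approach might look like this (a sketch; [Table] and temp_col follow the snippet above, and GO is the client batch separator so each schema change takes effect before the next statement runs):

ALTER TABLE [Table] ADD temp_col nvarchar(3) NULL;
GO
-- run the UPDATE from the snippet above here
GO
ALTER TABLE [Table] DROP COLUMN value;
EXEC sp_rename '[Table].temp_col', 'value', 'COLUMN';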
You should be able to change the data type from a bit to an nvarchar(3) without issue. The values will just turn from a bit 1 to a string "1". After that you can run some SQL to update the "1" to "Yes" and "0" to "No".
I don't have SQL Server 2008 locally, but I did try this on 2012. Create a small table and test it first, and take a backup of your data to be safe.