Inserting st_length function doesn't match row numbers - sql

I am new to SQL/Postgres. I am trying to run st_length on a geometry column and store the length in a cost column, but the following commands insert the data seemingly at random, not associated with the id column value as needed.
Command:
alter table planet_osm_roads add cost float;
insert into planet_osm_roads (cost)
select st_length(st_transform(way, 4326)::geography) from planet_osm_roads;
example result:
source   target   cost
30,749   30,750
30,751   30,752
7,552    30,385
                  7.6144929361
                  41.7331770846
                  85.3575622508
                  50.0921684238
3        4
                  111.5246694513
                  43.8658606368
I've left out the other columns as they aren't needed. The rows with 'source' and 'target' values are associated with a specific 'osm_id' value and have a null cost, but the commands don't associate the cost value with the row containing the linestring.
I would expect the cost value to end up in the same row as the linestring it was computed from. That is not what happens.

INSERT adds new rows. You want to UPDATE the table and SET cost WHERE some row matches:
ALTER TABLE planet_osm_roads ADD cost FLOAT;
UPDATE planet_osm_roads
SET cost = 7.6144929361
WHERE source = 30749;
(or WHERE target = something, or both)
It's possible to do this from a SELECT, but how depends on your schema and data; a sketch follows below. Depending on your version of Postgres, you may even be able to do it with a generated column - Add generated column to an existing table Postgres
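Since the cost is computed from the geometry in the same row, a minimal sketch (reusing the way column from your own commands) is to update every row in place rather than inserting:
-- Sketch only: compute each row's cost from its own geometry
UPDATE planet_osm_roads
SET cost = ST_Length(ST_Transform(way, 4326)::geography);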

Related

PostGIS st_point function returns null values for all records that do not meet criteria

I am trying to create geom values for new records coming from field devices that have latitude and longitude values in two fields (latitude, longitude), for a given project, using the following expression:
UPDATE public.data_mds_import
SET geom=(
SELECT
st_force3d(st_setsrid(st_point(public.data_mds_import.longitude, public.data_mds_import.latitude),4326))
WHERE PUBLIC.data_mds_import.id_jobcode=220 /* ENTER ID_JOBCODE HERE */
);
We have lots of field data coming in, and I need a statement where I can just change the jobcode value for each new set of field data that gets imported.
The problem is that for other jobs in the table where jobcode != 220, the function returns null values, i.e. it removes the geom value. I lose my geometry data, so I go from having several thousand records with a geometry value before the statement runs to only the records where jobcode = 220.
Why is this happening? The statement should be selecting only records where jobcode = 220, right? Instead it appears to select all records, returning the geometry where jobcode = 220 and null for every other record. No action would be better than null.
Should I use a CASE statement? If so, can someone please help with such a statement? I have tried several CASE statement variations and can't get one to execute.
Welcome to SO.
Your UPDATE has no outer WHERE clause, so it touches every row; for the rows where the correlated subquery finds no match (jobcode != 220), the subquery returns NULL, which is what wipes out the existing geometries.
If you're updating a single table, there is no need to use a subquery simply to update a column, and you don't need to repeat the schema and table names. Put the condition in the UPDATE's own WHERE clause instead.
This query selects only the records where id_jobcode = 220 and creates a geometry in the column geom from the values of latitude and longitude:
UPDATE public.data_mds_import
SET geom = ST_Force3D(
ST_SetSRID(
ST_Point(longitude, latitude),4326))
WHERE id_jobcode = 220;

Alter the data type of a column in MonetDB

How can I alter the type of a column in an existing table in MonetDB? According to the documentation the code should be something like
ALTER TABLE <tablename> ALTER COLUMN <columnname> SET ...
but then I am basically lost, because I do not know which SQL standard MonetDB follows here, and I get a syntax error. If this statement is not possible, I would be grateful for a workaround that is not too slow for large tables (on the order of 10^9 records).
Note: I ran into this problem while doing some bulk data imports from csv files into a table in my database. One of the columns is of type INT but the values in the file at some point exceed the INT limit of 2^31-1 (yes, the table is big) and so the transaction aborts. After I found out the reason for this failure, I wanted to change it to BIGINT but all versions of SQL code I tried failed.
This is currently not supported. However, there is a workaround:
An example table for this walkthrough; say we want to change the type of column b from integer to double:
create table a(b integer);
insert into a values(42);
1. Create a temporary column: alter table a add column b2 double;
2. Copy the original data into the temporary column: update a set b2 = b;
3. Remove the original column: alter table a drop column b;
4. Re-create the original column with the new type: alter table a add column b double;
5. Move the data from the temporary column to the new column: update a set b = b2;
6. Drop the temporary column: alter table a drop column b2;
7. Profit
Note that this will change the ordering of columns if there are more than one. However, this is only a cosmetic issue.
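Applied to the question's case (an INT column that needs to become BIGINT), the same sequence would look roughly like this; big_table and counts are made-up names for illustration:
alter table big_table add column counts2 bigint;   -- temporary column with the new type
update big_table set counts2 = counts;             -- copy the data across
alter table big_table drop column counts;
alter table big_table add column counts bigint;
update big_table set counts = counts2;
alter table big_table drop column counts2;
Expect the two full-table updates to be the slow part at ~10^9 rows.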

SQL Server Unique Composite Key of Two Field With Second Field Auto-Increment

I have the following problem, I want to have Composite Primary Key like:
PRIMARY KEY (`base`, `id`);
such that when I insert a row for a given base, the id is auto-incremented based on the previous id for the same base.
Example:
base id
A 1
A 2
B 1
C 1
Is there a way when I say:
INSERT INTO table(base) VALUES ('A')
to insert a new record with id 3 because that is the next id for base 'A'?
The resulting table should be:
base id
A 1
A 2
B 1
C 1
A 3
Is it possible to do this in the DB itself, since doing it programmatically could cause race conditions?
EDIT
The base currently represents a company and the id represents an invoice number. There should be auto-incrementing invoice numbers for each company, but there could be cases where two companies have invoices with the same number. Users logged in with a company should be able to sort, filter and search by those invoice numbers.
Ever since someone posted a similar question, I've been pondering this. The first problem is that DBs don't provide "partitionable" sequences (that would restart/remember based on different keys). The second is that the SEQUENCE objects that are provided are geared around fast access, and can't be rolled back (ie, you will get gaps). This essentially rules out using a built-in utility... meaning we have to roll our own.
The first thing we're going to need is a table to store our sequence numbers. This can be fairly simple:
CREATE TABLE Invoice_Sequence (base CHAR(1) PRIMARY KEY CLUSTERED,
invoiceNumber INTEGER);
In reality the base column should be a foreign-key reference to whatever table/id defines the business(es)/entities you're issuing invoices for. In this table, you want entries to be unique per issued-entity.
Next, you want a stored proc that will take a key (base) and spit out the next number in the sequence (invoiceNumber). The set of keys necessary will vary (ie, some invoice numbers must contain the year or full date of issue), but the base form for this situation is as follows:
CREATE PROCEDURE Next_Invoice_Number @baseKey CHAR(1),
                                     @invoiceNumber INTEGER OUTPUT
AS
BEGIN
  -- table variable to capture the number issued by the MERGE
  DECLARE @issued TABLE (invoiceNumber INTEGER);
  MERGE INTO Invoice_Sequence Stored
  USING (VALUES (@baseKey)) Incoming(base)
        ON Incoming.base = Stored.base
  WHEN MATCHED THEN UPDATE SET Stored.invoiceNumber = Stored.invoiceNumber + 1
  -- the first number for a new base starts at 1
  WHEN NOT MATCHED BY TARGET THEN INSERT (base, invoiceNumber) VALUES (@baseKey, 1)
  OUTPUT INSERTED.invoiceNumber INTO @issued;
  SELECT @invoiceNumber = invoiceNumber FROM @issued;
END;
Note that:
You must run this in a serializable transaction (see the sketch below)
The transaction must be the same one that's inserting into the destination (invoice) table.
That's right, you'll still get blocking per-business when issuing invoice numbers. You can't avoid this if invoice numbers must be sequential, with no gaps - until the row is actually committed, it might be rolled back, meaning that the invoice number wouldn't have been issued.
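A minimal sketch of such a call, assuming the destination table is named Invoice with columns base and invoiceNumber (as in the trigger below):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    DECLARE @invoiceNumber INTEGER;
    EXEC Next_Invoice_Number 'A', @invoiceNumber OUTPUT;
    INSERT INTO Invoice (base, invoiceNumber) VALUES ('A', @invoiceNumber);
    -- the sequence bump and the invoice row commit (or roll back) together
COMMIT TRANSACTION;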
Now, since you don't want to have to remember to call the procedure for the entry, wrap it up in a trigger:
CREATE TRIGGER Populate_Invoice_Number ON Invoice INSTEAD OF INSERT
AS
BEGIN
  -- assumes a single-row INSERT; pull the base out of the inserted pseudo-table
  DECLARE @base CHAR(1), @invoiceNumber INTEGER
  SELECT @base = base FROM inserted
  EXEC Next_Invoice_Number @base, @invoiceNumber OUTPUT
  INSERT INTO Invoice (base, invoiceNumber)
  VALUES (@base, @invoiceNumber)
END
(obviously, you have more columns, including others that should be auto-populated - you'll need to fill them in)
...which you can then use by simply saying:
INSERT INTO Invoice (base) VALUES('A');
So what have we done? Mostly, all this work was about shrinking the number of rows locked by a transaction. Until this INSERT is committed, there are only two rows locked:
The row in Invoice_Sequence maintaining the sequence number
The row in Invoice for the new invoice.
All other rows for a particular base are free - they can be updated or queried at will (deleting information out of this kind of system tends to make accountants nervous). You probably need to decide what should happen when queries would normally include the pending invoice...
You can use a trigger that fires before the insert and assigns the next value by taking max(id) filtered on the base, which is "A" in this case.
That gives you max(id) = 2; increment it to max(id) + 1 and push the new value into the id field before the insert.
I think this may help you:
MSSQL Triggers: http://msdn.microsoft.com/en-in/library/ms189799.aspx
Test Table
CREATE TABLE MyTable
( base CHAR(1),
id INT
)
GO
Trigger Definition
CREATE TRIGGER dbo.tr_Populate_ID
ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO MyTable (base,id)
SELECT i.base, ISNULL(MAX(mt.id),0) +1 AS NextValue
FROM inserted i left join MyTable mt
on i.base = mt.base
GROUP BY i.base
END
Test
Execute the following statement multiple times and you will see that the next value available in each group is assigned to id.
INSERT INTO MyTable VALUES
('A'),
('B'),
('C')
GO
SELECT * FROM MyTable
GO

Update a table and return both the old and new values

I'm writing a VB app that is scrubbing some data inside a DB2 database. In a few tables I want to update entire columns, for example an account number column. I am changing all account numbers to start at 1 and increment as I go down the list. I'd like to be able to return both the old account number and the new one, so I can generate some kind of report to reference and not lose the original values. I'm updating columns like so:
DECLARE @accntnum INT
SET @accntnum = 0
UPDATE accounts
SET @accntnum = accntnum = @accntnum + 1
GO
Is there a way for me to return both the original accntnum and the new one in one table?
DB2 has a really nifty feature where you can select data from a "data change statement". This was tested on DB2 for Linux/Unix/Windows, but I think that it should also work on at least DB2 for z/OS.
For your numbering, you might consider creating a sequence as well. Then your update would be something like:
CREATE SEQUENCE acct_seq
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO CYCLE
CACHE 24
;
SELECT accntnum AS new_acct, old_acct
FROM FINAL TABLE (
UPDATE accounts INCLUDE(old_acct INT)
SET accntnum = NEXT VALUE FOR acct_seq, old_acct = accntnum
)
ORDER BY old_acct;
The INCLUDE part creates a new column in the resulting table with the name and the data type specified, and then you can set the value in the update statement as you would any other field.
A possible solution is to add an additional column (let's call it oldaccntnum) and assign old values to that column as you do your update.
Then drop it when you no longer need it.
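A sketch of that approach, reusing the accounts/accntnum names from the question and the acct_seq sequence from the previous answer (swap in however you actually generate the new numbers):
ALTER TABLE accounts ADD COLUMN oldaccntnum INT;
-- both assignments see the pre-update row, so oldaccntnum captures the original value
UPDATE accounts
SET oldaccntnum = accntnum,
    accntnum = NEXT VALUE FOR acct_seq;
-- report from (oldaccntnum, accntnum), then: ALTER TABLE accounts DROP COLUMN oldaccntnum;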
Here's what I'd do:
- create a new table to track the changes, with columns identifying a unique key, the old value, the new value, and a timestamp;
- create a trigger on the accounts table to write the old and new values to the new table.
But, not knowing all the conditions, it may not be worth the trouble.
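If you do go that route, a rough sketch for DB2 LUW, with hypothetical names and assuming accounts has a unique key column called acct_key:
CREATE TABLE accntnum_audit (
    acct_key     INT NOT NULL,
    old_accntnum INT,
    new_accntnum INT,
    changed_at   TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
);
CREATE TRIGGER trg_accntnum_audit
    AFTER UPDATE OF accntnum ON accounts
    REFERENCING OLD AS o NEW AS n
    FOR EACH ROW
    -- record one audit row per updated account
    INSERT INTO accntnum_audit (acct_key, old_accntnum, new_accntnum)
    VALUES (o.acct_key, o.accntnum, n.accntnum);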

Adding a new column in a temporary table

I have a temporary table in a PostgreSQL function and I want to add a new VARCHAR column. Its value should depend on another column of the table, named "amount".
When the amount is positive I would like the value of the new column to be credit, and when the amount is negative it should be debit.
I have one more request: I want to round the value of the amount column to 2 decimal digits.
You want ALTER TABLE ... ADD COLUMN followed by an UPDATE.
I initially said ALTER TABLE ... ADD COLUMN ... USING but that was wrong on two counts: ADD COLUMN takes a DEFAULT, not a USING, and you can't do it in one pass because neither a DEFAULT expression nor a USING expression may refer to other columns.
So you must do:
ALTER TABLE tablename ADD COLUMN colname varchar;
UPDATE tablename SET colname = ( CASE WHEN othercol < 0 THEN 'Debit' ELSE 'Credit' END );
Think carefully about whether zero should be 'Debit' or 'Credit' and adjust the CASE accordingly.
For rounding, use round(amount,2). There isn't enough detail in your question for me to be sure how; probably by UPDATEing the temp table with UPDATE thetable SET amount = round(amount,2) but without the context it's hard to know if that's right. That statement irreversibly throws information away so it should only be used on a copy of the data.
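Putting both pieces together as a minimal sketch, with amount taken from your question and transactions/entry_type as made-up names for the temp table and the new column:
ALTER TABLE transactions ADD COLUMN entry_type varchar;
UPDATE transactions
SET entry_type = CASE WHEN amount < 0 THEN 'Debit' ELSE 'Credit' END,
    amount     = round(amount::numeric, 2);  -- rounds in place; the original precision is lost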