I need a calculated field based on each record in SQL Server 2000.
For example, there is a table like this:
col1  col2  calcfield
---------------------
1     2     col1+col2
3     5     col1*col2
I need a query that calculates the last field per record, e.g.:
1  2  3
3  5  15
Actually, this is a system that calculates a number for certain persons. Some parameters are stored in fields of a table, and another field stores how to calculate the number from those parameters (that is, the formula). For each person there are different parameters and a different formula. I want to design a query that extracts both the parameters and the calculated column directly from the table.
Is there any way to do this, and if so, what is the best and fastest way?
best regards
You just do the math and alias it. For example:
SELECT
    field1,
    field2,
    field1 + field2 AS CalcField
FROM table
If you need to do different calculations depending on the record, use a CASE expression:
SELECT
    field1,
    field2,
    CASE
        WHEN (some condition) THEN field1 + field2
        WHEN (some other condition) THEN field1 * field2
        ELSE (some default value or calculation)
    END AS CalcField
FROM table
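Since the formula itself is stored as text in each row, a CASE expression only helps when every possible formula is known up front. To evaluate arbitrary stored formulas you would need dynamic SQL, which SQL Server 2000 supports via EXEC with a string. A minimal sketch, assuming the table is named t and calcfield holds text such as col1+col2:

```sql
-- Build one SELECT per distinct stored formula and stitch them
-- together with UNION ALL, so every row is evaluated with its
-- own expression.
DECLARE @sql varchar(8000)
SET @sql = ''

SELECT @sql = @sql +
    CASE WHEN @sql = '' THEN '' ELSE ' UNION ALL ' END +
    'SELECT col1, col2, ' + calcfield + ' AS calc ' +
    'FROM t WHERE calcfield = ''' + calcfield + ''''
FROM (SELECT DISTINCT calcfield FROM t) AS d

EXEC (@sql)
```

Be aware that concatenating the formula text straight into the query makes this open to SQL injection if users can edit the formula column.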
Related
I have a small table tbl_a that is something like

id  fieldName  tableName
1   field1     tbl_1
2   field2     tbl_1
3   field3     tbl_2
and I want to be able to come up with a function or proc or something where I can specify the fieldId from tbl_a and then query the correct field and table from that. Something like
select * from my_function(3)
should end up being equivalent to
select field3 from tbl_2
I've been looking into dynamic SQL and user functions but can't seem to figure out how to feed the results of one query into another.
EDIT:
As @Larnu correctly surmised, there is a larger task hiding behind the one posed in the original question. The premise is this:
tblArchive stores the values of certain "static" fields (found in other tables) with a Date attached. If/when these fields are changed in their original table, then a record is inserted into tblArchive. More-or-less an audit table.
eg: in tbl_accounts, AdjustmentFactor field (fieldId=3) for accountId=1 changes from 1.0 to 0.5 on '2022-06-10'.
Insert into tblArchive (fieldId, accountId, date, value) values (3,1,'2022-06-10',0.5)
tblArchive was only created in 2019. I've been tasked with back-filling records from 2017 on. That is, to insert records that would have been inserted had tblArchive existed in 2017.
In order to backfill, I have to look into the real audit tables (for previous example this would be tblAccountsAudit for that particular fieldId).
The fields of interest and their respective tables are given in tblFields. tblFields would be tbl_a from the original question, and for the example given we'd have something like

id  fieldName         tableName
3   AdjustmentFactor  tbl_accounts
Assume also that the design is what it is and I have no power to overhaul the design/structure of the database.
It sounds like you need something like this.
CREATE FUNCTION myfunction (
    @key INT
)
RETURNS TABLE
AS
RETURN
    SELECT
        id,
        fieldName,
        tableName
    FROM
        tbl_a
    WHERE
        id = @key;
This will return the table you are after through a function.
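One caveat: a T-SQL function cannot execute dynamic SQL, so a function like the one above can only return the metadata row, not the result of querying tableName itself. To actually run the equivalent of select field3 from tbl_2, one option is a stored procedure with sp_executesql. A sketch using the table from the question (SelectFieldById is a made-up name):

```sql
CREATE PROCEDURE dbo.SelectFieldById
    @fieldId int
AS
BEGIN
    DECLARE @sql nvarchar(max);

    -- Look up which field/table the id maps to, quoting the names
    -- with QUOTENAME to guard against injection via tbl_a's contents.
    SELECT @sql = N'SELECT ' + QUOTENAME(fieldName) +
                  N' FROM ' + QUOTENAME(tableName)
    FROM tbl_a
    WHERE id = @fieldId;

    EXEC sp_executesql @sql;
END
```

EXEC dbo.SelectFieldById @fieldId = 3; would then behave like select field3 from tbl_2.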
I have two columns in a table (e.g. column1, column2), one INT and the other VARCHAR. I need to combine both in another column (e.g. column3), and I don't want to do it manually. Is there a way to fill this third column by combining the other two columns in a specific format using some SQL query?
Example:
column1  column2  column3
8        munson   munson, 8
23       gatine   gatine, 23
63       carbon   carbon, 63
Thanks,
If you want to do it on the fly (just query the three columns),
you can do:
SELECT column1, column2, CONCAT_WS(', ', column2, CAST(column1 AS TEXT)) AS column3 FROM table;
If you are trying to modify the original table to add a new column you can do something like:
UPDATE
    table
SET
    column3 = CONCAT_WS(', ', column2, CAST(column1 AS TEXT))
The previous snippet should work for PostgreSQL; note that CONCAT_WS takes the separator as its first argument. Other engines will have different syntax for updating a column.
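If the third column should stay in sync automatically rather than being backfilled once, PostgreSQL 12+ also offers generated columns. A sketch, assuming the table is named t:

```sql
-- column3 is recomputed by the database whenever column1 or
-- column2 changes; it cannot be written to directly.
ALTER TABLE t
    ADD COLUMN column3 text
    GENERATED ALWAYS AS (column2 || ', ' || column1::text) STORED;
```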
In SQL (I'm using postgres, but am open to other variations), is it possible to update a value based on a row location and a column name when the table doesn't have unique rows or keys? ...without adding a column that contains unique values?
For example, consider the table:
col1  col2  col3
1     1     1
1     1     1
1     1     1
I would like to update the table based on the row number or numbers. For example, change the values of rows 1 and 3, col2 to 5 like so:
col1  col2  col3
1     5     1
1     1     1
1     5     1
I can start with the example table:
CREATE TABLE test_table (col1 int, col2 int, col3 int);
INSERT INTO test_table (col1, col2, col3) values(1,1,1);
INSERT INTO test_table (col1, col2, col3) values(1,1,1);
INSERT INTO test_table (col1, col2, col3) values(1,1,1);
Now, I could add an additional column, say "id" and simply:
UPDATE test_table SET col2 = 5 WHERE id = 1
UPDATE test_table SET col2 = 5 WHERE id = 3
But can this be done just based on row number?
I can select based on row number using something like:
SELECT * FROM (
SELECT *, ROW_NUMBER() OVER() FROM test_table
) as sub
WHERE row_number BETWEEN 1 AND 2
But this doesn't seem to play well with UPDATE (at least in Postgres). Likewise, I have tried using subqueries and common table expressions, but again, I'm running into difficulties with the UPDATE aspect. How can I accomplish something like this pseudocode?: UPDATE <my table> SET <col name> = <new value> WHERE row_number IN (1, 3). This is trivial in other languages like R or Python (e.g., using pandas's .iloc). It would be interesting to know how to do this in SQL.
Edit: in my table example, I should have specified the column types to something like int.
This is one of the many instances where you should embrace the lesser evil that is Surrogate Keys. Whichever table has a primary key of (col1,col2,col3) should have an additional key created by the system, such as an identity or GUID.
You don't specify the data type of (col1,col2,col3), but if for some reason you're allergic to surrogate keys you can embrace the slightly greater evil of a "combined key", where instead of a database-created value your unique key field is derived from some other fields. (In this instance, it'd be something like CONCAT(col1, '-', col2, '-', col3) ).
Should neither of the above be practical, you will be left with the greatest evil of having to manually specify all three columns each time you query a record. Which means that any other object or table which references this one will need to have not one but three distinct fields to identify which record you're talking about.
Ideally, btw, you would have some business key in the actual data which you can guarantee by design will be unique, never-changing, and never-blank. (Or at least changing so infrequently that the db can handle cascade updates reasonably well.)
You may wind up using a surrogate key for performance in such a case anyway, but that's an implementation detail rather than a data modeling requirement.
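That said, if a one-off correction is truly unavoidable without adding a key, PostgreSQL exposes the system column ctid, which identifies a row's current physical location and can be matched against a ROW_NUMBER() subquery. A sketch against the test_table from the question; note that ctid is not stable across VACUUM FULL or row updates, and without an ORDER BY the row numbering itself is not guaranteed:

```sql
-- Number the rows, pick rows 1 and 3, and update them via their
-- physical row identifiers. Suitable only as a one-off fix.
UPDATE test_table
SET col2 = 5
WHERE ctid IN (
    SELECT ctid
    FROM (SELECT ctid, ROW_NUMBER() OVER () AS rn
          FROM test_table) AS numbered
    WHERE rn IN (1, 3)
);
```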
The following problem is simplified -
I have 3 tables, table1, mapping_table and table2.
table1 will include 3 columns: first_name, last_name and date.
table2 will include 4 columns: id (which gets its value from a sequence), first_name, last_name_in_germen and date.
mappingTable will include 2 columns (last_name and last_name_in_germen).
In addition -
date is nullable in table1 but has to have some value (like today's date) in table2.
The problems are:
The new table (table2) will have columns that exist in the original one (first_name), columns that need some basic transformation like mapping (last_name), a column that needs a default value (date), and of course the sequence (id).
I was thinking about using a procedure with a loop, but I don't know how to insert a row into the new table.
This sounds like a standard INSERT-SELECT with a join:
insert into table2 (id, first_name, last_name_in_germen, date)
select my_sequence.nextval,
       t1.first_name,
       m.last_name_in_germen,
       coalesce(t1.date, sysdate)  -- default to today when null
from table1 t1
join mappingTable m
  on t1.last_name = m.last_name
I have a SQL table with 20 to 30 columns that I need to search. I've set up full-text search so that I can run queries such as:
Select * from dbo.table1 where Contains(*,'asdf');
The problem is that I don't know which column actually contains 'asdf'. Is there a straightforward way to get the specific column(s)?
EDIT
The result I'm looking for would be similar to the following:
Record Number | ColumnFoundIn
5             | columnA
100           | columnB
244           | columnA
250           | columnF
The original table has a unique record number for each row, so I would like the record number and then the column where 'asdf' was found.
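CONTAINS(*, ...) does not report which column matched, so one workaround is to probe each full-text-indexed column separately and label the hits. A sketch, assuming the unique key column is named RecordNumber and the indexed columns are columnA, columnB, and so on:

```sql
-- One branch per searched column; a row matching in several
-- columns produces one output row per match, in the
-- "Record Number | ColumnFoundIn" shape asked for.
SELECT RecordNumber, 'columnA' AS ColumnFoundIn
FROM dbo.table1
WHERE CONTAINS(columnA, 'asdf')
UNION ALL
SELECT RecordNumber, 'columnB'
FROM dbo.table1
WHERE CONTAINS(columnB, 'asdf');
-- ...repeat one branch per remaining column.
```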