I am getting a blank value from SQL Server with this query:
SELECT TOP 1 Amount FROM PaymentDetails WHERE Id = '5678'
There is no matching row, which is why it returns blank, so I want it to return 0 when there is no row.
I already tried COALESCE, but it's not working.
How do I solve this?
You are selecting an arbitrary amount, so one method is aggregation:
SELECT COALESCE(MAX(Amount), 0)
FROM PaymentDetails
WHERE Id = '5678';
Note that if id is a number, then don't use single quotes for the comparison.
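For example, if Id is an integer column (an assumption; the question uses a quoted literal), the comparison would simply be:
SELECT COALESCE(MAX(Amount), 0)
FROM PaymentDetails
WHERE Id = 5678;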
To be honest, I would expect SUM() to be more useful than an arbitrary value:
SELECT COALESCE(SUM(Amount), 0)
FROM PaymentDetails
WHERE Id = '5678';
You can wrap the subquery in an ISNULL:
SELECT ISNULL((SELECT TOP 1 Amount FROM PaymentDetails WHERE Id = '5678' ORDER BY ????), 0) AS Amount;
Don't forget to add a column (or columns) to your ORDER BY as otherwise you will get inconsistent results when more than one row has the same value for Id. If Id is unique, however, then remove both the TOP and ORDER BY as they aren't needed.
You should never, however, use TOP without an ORDER BY unless you are "happy" with inconsistent results.
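If Id is unique, a simplified sketch of the same idea would be:
SELECT ISNULL((SELECT Amount FROM PaymentDetails WHERE Id = '5678'), 0) AS Amount;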
I want to use the result of a subquery as a column name in another query, since the data moves to a different column all the time and the subquery decides which column holds the current forecast data. My example:
select item,
item_type,
...
forcast_0 * 0.9 as finalforcast,
forcast_0 * 0.8 as newforcast
from sales_data
But the forcast_0 column is really the result (fore_column_name) of this subquery, and the result may change to forcast_1 or forcast_2:
select
fore_column_name
from forecast_history
where ...
Also, the forcast column will be used multiple times in the first query. How could I implement this?
Use your subquery as an inline table. Something like this:
select item,
item_type,
..
decode(fore_column_name, 'foo', 1, 2) * 0.9 as finalforcast,
decode(fore_column_name, 'foo', 1, 2) * 0.8 as newforcast
from sales_data,
(
select fore_column_name
from forecast_history
where ...
) inlineTable
I'm assuming here that the value from the sub-query will be the same for each row - so a quick cross-join will suffice. If the value will vary depending on the values in each row of the sales_data table, then some other type of join would be more appropriate.
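For instance, written with an explicit ANSI cross join and mapping the column names from the question (a sketch only; the forcast_0/forcast_1/forcast_2 names are assumed, and the where clause is left as an ellipsis):
select item,
       item_type,
       decode(fh.fore_column_name, 'forcast_0', forcast_0,
                                   'forcast_1', forcast_1,
                                                forcast_2) * 0.9 as finalforcast,
       decode(fh.fore_column_name, 'forcast_0', forcast_0,
                                   'forcast_1', forcast_1,
                                                forcast_2) * 0.8 as newforcast
from sales_data
cross join (select fore_column_name from forecast_history where ...) fh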
Quick link to decode - in case you aren't familiar with it.
SELECT firstpartno, nOccurrence, nMale, nFemale,
       COUNT(nMale) / CAST((SELECT SUM(nOccurrence) AS Expr1
                            FROM (SELECT COUNT(dbo.vw_Tally1.nMale) AS nOccurrence
                                  FROM dbo.vw_Split4) AS SumTally) AS decimal) AS nMProportion,
       COUNT(nFemale) / CAST((SELECT SUM(nOccurrence) AS Expr1
                              FROM (SELECT COUNT(dbo.vw_Tally1.nFemale) AS nOccurrence
                                    FROM dbo.vw_Split4 AS vw_Split4_1) AS SumTally_1) AS decimal) AS nFProportion
FROM dbo.vw_Tally1
GROUP BY firstpartno, nOccurrence, nMale, nFemale
If I understood your question, here's a solution for you:
SELECT
firstpartno
,nOccurrence
,nMale
,nFemale
,CASE WHEN SUM_nOccurrence.SUM_nOccurrenceMale = 0
THEN 0
ELSE COUNT(nMale)/SUM_nOccurrence.SUM_nOccurrenceMale
END AS nMProportion
,CASE WHEN SUM_nOccurrence.SUM_nOccurrenceFemale = 0
THEN 0
ELSE COUNT(nFemale)/SUM_nOccurrence.SUM_nOccurrenceFemale
END AS nFProportion
FROM
dbo.vw_Tally1
LEFT JOIN
(SELECT
CAST(SUM(nOccurrenceMale)AS decimal) AS SUM_nOccurrenceMale
,CAST(SUM(nOccurrenceFemale)AS decimal) AS SUM_nOccurrenceFemale
FROM (SELECT
COUNT(dbo.vw_Tally1.nMale) AS nOccurrenceMale
,COUNT(dbo.vw_Tally1.nFemale) AS nOccurrenceFemale
FROM dbo.vw_Split4 ) AS SumTally) SUM_nOccurrence
ON 1=1
GROUP BY
firstpartno
,nOccurrence
,nMale
,nFemale
,SUM_nOccurrence.SUM_nOccurrenceMale
,SUM_nOccurrence.SUM_nOccurrenceFemale
I hope this will help you
Good Luck :)
The query looks dubious to say the least. You select all records of table vw_Tally1. For each of these records you do the following:
You select COUNT(vw_Tally1.nMale) from table vw_Split4. This is COUNT(*) of vw_Split4 when vw_Tally1.nMale is not null, otherwise it is 0.
Then you sum this value, which makes no sense, as the sum of a single value is the value itself.
You do the same for nFemale.
At last you group by (firstpartno, nOccurrence, nMale, nFemale) and use the values found so strangely to calculate something. As you don't aggregate the found values, you get an arbitrary match per group, i.e. the DBMS takes one of the matching records. As nMale and nFemale are grouping columns, the values are constant for all records of the group, so that is no big problem, but a lot of useless work.
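A minimal illustration of that point (using only vw_Split4, with hypothetical data): the inner COUNT already collapses everything to a single row, so the outer SUM just returns that same value.
SELECT SUM(nOccurrence)
FROM (SELECT COUNT(*) AS nOccurrence FROM dbo.vw_Split4) AS SumTally;
-- returns the same value as:
SELECT COUNT(*) FROM dbo.vw_Split4;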
So to speed this up, first think about what you actually want to select. This looks like it will become a very simple select statement in the end. We can help you if you tell us what your tables contain, what result set you are after, what nMale and nFemale stand for, and what the primary keys or unique columns of the tables involved are.
I need to calculate the net total of a column, which sounds simple. The problem is that some of the values should be negative, as marked in a separate column. For example, the table below would yield a result of (4 + 3 - 5 + 2 - 2 = 2). I've tried doing this with subqueries in the SELECT clause, but it seems unnecessarily complex and difficult to expand when I start adding analysis for other parts of my table. Any help is much appreciated!
Sign Value
Pos 4
Pos 3
Neg 5
Pos 2
Neg 2
Using a CASE expression should work in most versions of SQL:
SELECT SUM( CASE
WHEN t.Sign = 'Pos' THEN t.Value
ELSE t.Value * -1
END
) AS Total
FROM YourTable AS t
Try this (note that IF() is MySQL syntax):
SELECT SUM(IF(Sign = 'Pos', Value, Value * (-1))) AS total FROM YourTable
I am adding up values from a single field in a table based on values from another field in the same table, using Oracle 11g as the database and SQL Developer as the user interface.
This works:
SELECT COUNTRY_ID, SUM(
CASE
WHEN ACCOUNT IN 'PTBI' THEN AMOUNT
WHEN ACCOUNT IN 'MLS_ENT' THEN AMOUNT
WHEN ACCOUNT IN 'VAL_ALLOW' THEN AMOUNT
WHEN ACCOUNT IN 'RSC_DEV' THEN AMOUNT * -1
END) AS TI
FROM SAMP_TAX_F4
GROUP BY COUNTRY_ID;
select a = sum(Value) where Sign like 'Pos'
select b = sum(Value) where Sign like 'Neg'
select total = a - b
This is a bit SQL-agnostic, since you didn't say which DB you are using, but it should be easy to adapt it to any DB out there.
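For example, a T-SQL adaptation might look like this (a sketch; YourTable is an assumed table name, and COALESCE guards against one of the sums being NULL):
declare @a decimal(18, 2), @b decimal(18, 2);
select @a = sum(Value) from YourTable where Sign = 'Pos';
select @b = sum(Value) from YourTable where Sign = 'Neg';
select coalesce(@a, 0) - coalesce(@b, 0) as total;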
I want to write a query that returns 3 results, followed by a blank result, followed by the next 3 results, and so on. So if my database had this data:
CREATE TABLE tbl (a integer, b integer, c integer, d integer);
INSERT INTO tbl (a,b,c,d)
VALUES (1,2,3,4),
(5,6,7,8),
(9,10,11,12),
(13,14,15,16),
(17,18,19,20),
(21,22,23,24),
(25,26,27,28);
I would want my query to return this
1,2,3,4
5,6,7,8
9,10,11,12
, , ,
13,14,15,16
17,18,19,20
21,22,23,24
, , ,
25,26,27,28
I need this to work for arbitrarily many rows in my select, with every three rows grouped together like this.
I'm running PostgreSQL 8.3.
This should work flawlessly in PostgreSQL 8.3
SELECT a, b, c, d
FROM (
    SELECT rn, 0 AS rk, (x[rn]).*
    FROM (
        SELECT x, generate_series(1, array_upper(x, 1)) AS rn
        FROM (SELECT ARRAY(SELECT tbl FROM tbl) AS x) x
    ) y
    UNION ALL
    SELECT generate_series(3, (SELECT count(*) FROM tbl), 3), 1, (NULL::tbl).*
    ORDER BY rn, rk
) z
Major points
Works for a query that selects all columns of tbl.
Works for any table.
For selecting arbitrary columns you have to substitute (NULL::tbl).* with a matching number of NULL columns in the second query (see the sketch after this list).
Assuming that NULL values are ok for "blank" rows.
If not, you'll have to cast your columns to text in the first SELECT and substitute '' for NULL in the second.
The query will be slow with very big tables.
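A sketch of that substitution, assuming you only want the four integer columns a, b, c, d from the question:
SELECT a, b, c, d
FROM (
    SELECT rn, 0 AS rk, (x[rn]).a, (x[rn]).b, (x[rn]).c, (x[rn]).d
    FROM (
        SELECT x, generate_series(1, array_upper(x, 1)) AS rn
        FROM (SELECT ARRAY(SELECT tbl FROM tbl) AS x) x
    ) y
    UNION ALL
    SELECT generate_series(3, (SELECT count(*) FROM tbl), 3), 1,
           NULL::integer, NULL::integer, NULL::integer, NULL::integer
    ORDER BY rn, rk
) z;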
If I had to do it, I would write a plpgsql function that loops through the results and inserts the blank rows. But you mentioned you had no direct access to the db ...
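For what it's worth, a minimal sketch of such a plpgsql function (the function name is hypothetical, and it is hard-wired to the four-column tbl from the question):
CREATE OR REPLACE FUNCTION tbl_with_blanks()
  RETURNS SETOF tbl AS
$$
DECLARE
    r tbl;
    i integer := 0;
BEGIN
    FOR r IN SELECT * FROM tbl LOOP
        RETURN NEXT r;
        i := i + 1;
        IF i % 3 = 0 THEN
            -- emit an all-NULL separator row after every third data row
            RETURN QUERY SELECT NULL::integer, NULL::integer, NULL::integer, NULL::integer;
        END IF;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM tbl_with_blanks();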
In short, no, there's not an easy way to do this, and generally, you shouldn't try. The database is concerned with what your data actually is, not how it's going to be displayed. It's not an appropriate scope of responsibility to expect your database to return "dummy" or "extra" data so that some down-stream process produces a desired output. The generating script needs to do that.
As you can't change your down-stream process, you could (read that with a significant degree of skepticism and disdain) add things like this:
Select Top 3
a, b, c, d
From
table
Union Select Top 1
'', '', '', ''
From
table
Union Select Top 3 Skip 3
a, b, c, d
From
table
Please, don't actually try to do that.
You can do it (at least on DB2 - there doesn't appear to be equivalent functionality for your version of PostgreSQL).
No looping needed, although there is a bit of trickery involved...
Please note that though this works, it's really best to change your display code.
Statement requires CTEs (although that can be re-written to use other table references), and OLAP functions (I guess you could re-write it to count() previous rows in a subquery, but...).
WITH dataList (rowNum, dataColumn) AS
    (SELECT CAST(CAST(:interval AS REAL) / (:interval - 1)
                 * ROW_NUMBER() OVER(ORDER BY dataColumn) AS INTEGER),
            dataColumn
     FROM dataTable),
blankIncluder (rowNum, dataColumn) AS
    (SELECT rowNum, dataColumn
     FROM dataList
     UNION ALL
     SELECT rowNum - 1, :blankDataColumn
     FROM dataList
     WHERE MOD(rowNum - 1, :interval) = 0
       AND rowNum > :interval)
SELECT *
FROM blankIncluder
ORDER BY rowNum
This will generate a list of those elements from the datatable, with a 'blank' line every interval lines, as ordered by the initial query. The result set only has 'blank' lines between existing lines - there are no 'blank' lines on the ends.