Set CASE statement to variable and then use that variable in GROUP BY clause - sql

I am using SQL Server 2008 and have a very large CASE statement that is also used in the GROUP BY clause. I would like to set the CASE statement to a variable in order to minimize code maintenance and maximize reuse. The problem is that I get this error:
Each GROUP BY expression must contain at least one column that is not an outer reference.
This CASE-derived column is not the only column referenced in the GROUP BY clause, so I'm not sure why I get this error.
I've searched the site but didn't find a problem quite like mine (surprisingly). So, how do I go about getting around this?
UPDATE: I have included the DB type. As far as adding the code for what I have, I'm not sure that would add anything but bulk as it is over 200 lines. It's not a complex statement at all. It just takes various country codes and maps them to their full country names. For instance, the U.S. has over 50 codes so I am using the CASE statement to consolidate them. This allows me to group my info by country.

The best way to do this is with a subquery:
select var, count(*)
from (select t.*,
             (case <nasty expressions go here>
              end) as var
      from t
     ) t
group by var
The error that you are getting is because a variable in the GROUP BY is effectively a constant. I'm not sure why the error message is not clearer.
And, in the event that you actually do want to include a constant in the group by for some reason (as I have had occasion to do), then a column helps:
group by (case when coalesce(col, '') = coalesce(col, '') then 'some constant' end)
At least in SQL Server 2008, the engine does not recognize the expression as a constant.
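For illustration, here is a minimal sketch of the subquery pattern applied to the country-code consolidation described in the question (the orders table and the country_code column are hypothetical):
select country, count(*) as total
from (select o.*,
             (case when country_code in ('US', 'USA', 'U.S.') then 'United States'
                   when country_code in ('UK', 'GB') then 'United Kingdom'
                   else country_code
              end) as country
      from orders o
     ) o
group by country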

Related

SQL LIMIT clause based on input parameter

I have been trying to find a solution for a LIMIT clause based on an input parameter from a JSON file. The current code looks somewhat like this:
With myJsonTable (JsonText)
as (
    Select JsonText
)
Select * from Data
Where ...
Limit
    Case
        When (Select JSON_VALUE(JsonText, '$."Amount"') From myJsonTable) is not null
        Then (Select JSON_VALUE(JsonText, '$."Amount"') From myJsonTable)
        Else 10000000
    End
Which I can't seem to get to work. The output I am getting is
Non-negative integer value expected in LIMIT clause
Is there a way to cast the SELECT so this works? Trying different SELECTs anywhere in the CASE clause caused the same error.
Exasol only allows constant expressions in the LIMIT clause, so it's not directly possible to specify a SELECT statement that references myJsonTable there.
However, you can work around this issue by using an approach similar to SQL query for top 5 results without the use of LIMIT/ROWNUM/TOP.
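A minimal sketch of that workaround, assuming the rows can be numbered by some column (id here is hypothetical) and that the JSON amount casts cleanly to a number:
select *
from (
    select d.*, row_number() over (order by d.id) as rn
    from Data d
    where ...
) sub
where rn <= (
    select coalesce(cast(JSON_VALUE(JsonText, '$."Amount"') as decimal(18,0)), 10000000)
    from myJsonTable
)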

Update from aggregate in same table if aggregate value wrong - SQL Server/Oracle/Firebird

I have a table with grouped tasks:
tt_plan_task_id is the id
records with tt_plantype=1 represent 'groups'
tasks in/under a group have a tt_group_id pointing to the tt_plan_task_id
there are tasks that don't belong to a group (tt_group_id is null)
groups nest multiple levels
I need to fix (update) the tt_fromdate field values for the group records if they do not match the min(tt_fromdate) from the underlying tasks (they always have a value).
To fix them all I could do
update tt_plan_task g
set tt_fromdate=
(select min(t.tt_fromdate) from tt_plan_task t
where (t.tt_group_id=g.tt_plan_task_id))
where (g.tt_plantype=1)
This statement avoids the UPDATE FROM syntax that I see in many (SQL Server) answers - Firebird does not support that.
There are 2 complications
I want to do the update only if g.tt_fromdate <> min(t.tt_fromdate), so I would have to add a reference to min(tt_fromdate) to the outer where.
I tried using an alias for the aggregate and referencing that but that got me nowhere (syntax errors)
SQL Server does not like the table alias in the update, but solutions like these use the UPDATE FROM syntax again ;-( How do I work around that then?
How do I tie 1. and 2. into my update statement so that it works?
As noted in the title, this needs to execute in SQL Server, Oracle, and Firebird
Note: Since groups can contain groups, the update should ideally be executed 'from the bottom up', i.e. deepest groups first.
But since this is just a rough correction for a corrupt database, doing one 'linear' pass over all groups is good enough.
To get around SQL Server's non-standard handling of table aliases in UPDATE, simply don't use any.
As to using the aggregate result in both the SET clause and the WHERE clause, I suppose the only way that works in all DBMSes is to write the aggregation query twice.
update tt_plan_task
set tt_fromdate =
(
select min(t.tt_fromdate)
from tt_plan_task t
where t.tt_group_id = tt_plan_task.tt_plan_task_id
)
where (tt_plantype=1)
and
(
tt_fromdate <>
(
select min(t.tt_fromdate)
from tt_plan_task t
where t.tt_group_id = tt_plan_task.tt_plan_task_id
)
);

SQL: Error, Expression services limit reached?

"Internal error: An expression services limit has been reached. Please look for potentially complex expressions in your query, and try to simplify them."
Has anyone seen this before and found a good workaround?
I managed to get around this issue by essentially splitting my SQL query into two parts: the first SELECT writes its results to a temp table, and the second SELECT reads from that temporary table and uses a lot of CROSS APPLY operators to calculate cascading computed columns.
This is an example of how the second part looks, but I'm using a lot more CROSS APPLYs to produce new columns which are calculations:
Select * from #tempTable
cross apply
(
select HmmLowestSalePrice =
round(((OurSellingPrice + 1.5) / 0.95) - (CompetitorsLowestSalePrice) + 0.08, 2)
) as HmmLowestSalePrice
cross apply
(
select checkLowestSP =
case
when adjust = 'No Room' then 'No Room'
when OrginalTestSalePrice >= CompetitorsLowestSalePrice then 'Minus'
when OrginalTestSalePrice < CompetitorsLowestSalePrice then 'Ok'
end
) as checkLowestSP
cross apply
(
select AdjustFinalNewTestSP =
case
when FinalNewTestShipping < 0 Then NewTestSalePrice - (FinalNewTestShipping)
when FinalNewTestShipping >= 0 Then NewTestSalePrice
end
) as AdjustFinalNewTestSP
cross apply
(
select CheckFinalSalePriceWithWP =
case
when round(NewAdminSalePrice, 2) >= round(wholePrice, 2) then 'Ok'
when round(NewAdminSalePrice, 2) < round(wholePrice, 2) then 'Check'
end
) as CheckFinalPriceWithWP
DROP TABLE #tempTable
My goal is to put this into a SQL report, and it works fine if there is only one user, as the #tempTable gets created and dropped in the same execution and the results are displayed in the report correctly. But in the future, if there are concurrent users, I'm concerned that they will be writing to the same #tempTable, which will affect the results.
I've looked at putting this into stored procedures but still get the error message above.
This issue occurs because SQL Server limits the number of identifiers and constants that can be contained in a single expression of a query. The limit is 65,535. The check on the number of identifiers and constants is performed after SQL Server expands all referenced identifiers and constants. In SQL Server 2005 and above, queries are internally normalized and simplified, and that includes * (asterisk) expansion, computed columns, etc.
To work around this issue, rewrite your query to reference fewer identifiers and constants in its largest expression. You must make sure that the number of identifiers and constants in each expression of the query does not exceed the limit. To do this, you may have to break the query down into more than one query and create a temporary intermediate result, for example as sketched below.
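A rough sketch of that pattern, with hypothetical table and column names (the first statement stages an intermediate result, the second builds the remaining calculations on top of it):
-- Stage the raw data and the simpler computed columns first
select OurSellingPrice,
       CompetitorsLowestSalePrice,
       round(((OurSellingPrice + 1.5) / 0.95) - CompetitorsLowestSalePrice + 0.08, 2) as HmmLowestSalePrice
into #stage
from dbo.Prices;

-- Build the remaining cascading calculations on the intermediate result
select s.*,
       case when s.HmmLowestSalePrice >= s.CompetitorsLowestSalePrice then 'Minus' else 'Ok' end as checkLowestSP
from #stage s;

drop table #stage;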
The same issue happened to me when we tried to change the database compatibility level to 150. It is not an issue when it is 140 or lower.
I just had this problem and fixed it by removing the UNIQUE index on my table. For some reason, that seems to trigger this error, although I cannot figure out why.
By the way, the same query does work with several other indexes.
What worked for me was replacing several COALESCE expressions with ISNULL wherever possible.
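For example, a simple two-argument case (ISNULL is T-SQL-specific and takes exactly two arguments, so this only works for plain fallbacks; SomeColumn and dbo.SomeTable are placeholders):
-- before
select coalesce(SomeColumn, 0) from dbo.SomeTable;
-- after
select isnull(SomeColumn, 0) from dbo.SomeTable;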

Recordset returns the correct number of rows but with all fields empty

I have the same copy of Access running in 3 cities right now. They work perfectly OK. They are 99% the same, with one minor difference: each of them has two views which use a different ODBC connection to a different city's DB (all these databases are SQL Server 2005). The views act as the data source for two very simple queries.
However, when I tried to make a new copy for a new city, I found that one of the simple internal queries returns the correct number of rows but all data are empty, while the other query functions correctly.
I checked the data of these two views; the data is correct.
The one causing the problem is like
Select * from View_Top where Name = "ABC"
when the recordset returns, even rs!Name gives me an empty string.
Please help
Well the query looks a little wrong to me, try using ' instead of " to delimit your ABC string...
Without the definition of VIEW_TOP it's hard to tell where your error is, but if you're getting rows but the columns are NULL I'm guessing that VIEW_TOP (or something it depends on) includes an OUTER JOIN and you're pulling the columns from the wrong side of the JOIN.
SELECT
acc.FIRM,
acc.OFFICE,
acc.ACCOUNT,
a.CONV_CODE,
a.OTHER_AMT AS AMOUNT,
a.TRANS_DATE,
a.DESCRIPTN,
a.JRNAL_TYPE
FROM AccTrans AS a LEFT OUTER JOIN ACC AS acc ON a.ACCNT_CODE = acc.ACCNT_CODE
WHERE
(acc.SUN_DB = 'IF1') AND
(ANAL_T0 <> '') AND
(a.TRANS_DATE < '20091022') AND
(a.JRNAL_TYPE = 'MATCH');
This is the definition of the view. Indeed, in Access I am able to view the result of this query; it has data. That's why I know the recordset returns the correct number of rows (by counting the loop in the code). Sorry for my mistakes; I use Account in the WHERE clause, so the SELECT statement should be like:
select Firm, Office, Account, Trans_Date.... from
view_top
where account = 'ABC'
The query returns the right number of rows, but all row data (even the Account field) are empty strings.
Then I found out that what really causes the problem is the AMOUNT field; if I omit the amount, everything works, though I don't understand why.
view_top definition
"Name, Account, AccountCode, Amount, Date...."
Select Statements:
Select Name, Account, AccountCode, Amount, Date
From View_Top Where Name = 'xxx'
I found that if I omit the Amount, everything works.
Though I still don't understand why.

Operation must use an updatable query. (Error 3073)

I have written this query:
UPDATE tbl_stock1 SET
tbl_stock1.weight1 = (
select (b.weight1 - c.weight_in_gram) as temp
from
tbl_stock1 as b,
tbl_sales_item as c
where
b.item_submodel_id = c.item_submodel_id
and b.item_submodel_id = tbl_stock1.item_submodel_id
and b.status <> 'D'
and c.status <> 'D'
),
tbl_stock1.qty1 = (
select (b.qty1 - c.qty) as temp1
from
tbl_stock1 as b,
tbl_sales_item as c
where
b.item_submodel_id = c.item_submodel_id
and b.item_submodel_id = tbl_stock1.item_submodel_id
and b.status <> 'D'
and c.status <> 'D'
)
WHERE
tbl_stock1.item_submodel_id = 'ISUBM/1'
and tbl_stock1.status <> 'D';
I got this error message:
Operation must use an updatable query. (Error 3073) Microsoft Access
But if I run the same query in SQL Server it will be executed.
Thanks,
dinesh
I'm quite sure the JET DB Engine treats any query with a subquery as non-updateable. This is most likely the reason for the error and, thus, you'll need to rework the logic and avoid the subqueries.
As a test, you might also try to remove the calculation (the subtraction) being performed in each of the two subqueries. This calculation may not be playing nicely with the update as well.
Consider this very simple UPDATE statement using Northwind:
UPDATE Categories
SET Description = (
SELECT DISTINCT 'Anything'
FROM Employees
);
It fails with the error 'Operation must use an updateable query'.
The Access database engine simply does not support the SQL-92 syntax using a scalar subquery in the SET clause.
The Access database engine has its own proprietary UPDATE..JOIN..SET syntax, but it is unsafe because, unlike a scalar subquery, it doesn't require values to be unambiguous. If values are ambiguous then the engine silently 'picks' one arbitrarily, and it is hard (if not impossible) to predict which one will be applied even if you were aware of the problem.
For example, consider the existing Categories table in Northwind and the following daft (non-)table as a target for an update (daft but simple to demonstrate the problem clearly):
CREATE TABLE BadCategories
(
CategoryID INTEGER NOT NULL,
CategoryName NVARCHAR(15) NOT NULL
)
;
INSERT INTO BadCategories (CategoryID, CategoryName)
VALUES (1, 'This one...?')
;
INSERT INTO BadCategories (CategoryID, CategoryName)
VALUES (1, '...or this one?')
;
Now for the UPDATE:
UPDATE Categories
INNER JOIN (
SELECT T1.CategoryID, T1.CategoryName
FROM Categories AS T1
UNION ALL
SELECT 9 - T2.CategoryID, T2.CategoryName
FROM Categories AS T2
) AS DT1
ON DT1.CategoryID = Categories.CategoryID
SET Categories.CategoryName = DT1.CategoryName;
When I run this I'm told that two rows have been updated, which is funny because there's only one matching row in the Categories table. The result is that the Categories row now has the '...or this one?' value. I suspect it was a race to see which value gets written to the table last.
The SQL-92 scalar subquery is verbose when there are multiple clauses in the SET and/or the WHERE clause matches the SET's clauses, but at least it eliminates ambiguity (plus a decent optimizer should be able to detect that the subqueries are close matches). The SQL-99 Standard introduced MERGE, which can be used to eliminate the aforementioned repetition, but needless to say Access doesn't support that either.
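For comparison, here is roughly what the MERGE form could look like in an engine that supports it, such as SQL Server (not valid in Access; table names reuse the example above):
MERGE INTO Categories AS tgt
USING BadCategories AS src
      ON src.CategoryID = tgt.CategoryID
WHEN MATCHED THEN
    UPDATE SET CategoryName = src.CategoryName;
Notably, with the duplicate CategoryID rows in BadCategories, SQL Server would reject this MERGE (a target row may not be updated more than once) instead of silently picking a value.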
The Access database engine's lack of support for the SQL-92 scalar subquery syntax is for me its worst 'design feature' (read 'bug').
Also note that the Access database engine's proprietary UPDATE..JOIN..SET syntax cannot be used with set functions ('totals queries' in Access-speak) at all. See Update Query Based on Totals Query Fails.
Keep in mind that if you copy over a query that originally had subqueries or summary queries as part of it, then even if you delete those queries and only have linked tables, the query will (mistakenly) act as if it still has non-updateable fields and will give you this error. Simply re-create the query as you want it; it is an insidious little glitch.
You are updating weight1 and qty1 with values that are in turn derived from weight1 and qty1 (respectively). That's why MS-Access is choking on the update. It's probably also doing some optimisation in the background.
The way I would get around this is to dump the calculations into a temporary table, and then update the first table from the temporary table.
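A rough sketch of that idea using the table names from the question (the temporary table tmp_stock_calc is hypothetical, and the join assumes one matching sales row per item). In Access these would be run as two separate queries:
SELECT c.item_submodel_id,
       (b.weight1 - c.weight_in_gram) AS new_weight1,
       (b.qty1 - c.qty) AS new_qty1
INTO tmp_stock_calc
FROM tbl_stock1 AS b INNER JOIN tbl_sales_item AS c
     ON b.item_submodel_id = c.item_submodel_id
WHERE b.status <> 'D' AND c.status <> 'D';

UPDATE tbl_stock1 INNER JOIN tmp_stock_calc
       ON tbl_stock1.item_submodel_id = tmp_stock_calc.item_submodel_id
SET tbl_stock1.weight1 = tmp_stock_calc.new_weight1,
    tbl_stock1.qty1 = tmp_stock_calc.new_qty1
WHERE tbl_stock1.item_submodel_id = 'ISUBM/1'
  AND tbl_stock1.status <> 'D';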
There is no error in the code; the error is thrown for the following reason.
Please check whether you have given read-write permission to the MS Access database file.
Is the folder where the database file is stored (say Folder1) read-only?
If the database (MS Access file) is stored in a read-only folder, the connection cannot be opened for writing when your application runs. Almost all files under C:\Program Files, for example, are set read-only, so changing the permission of the file or of its containing folder solves this problem.
In the query properties, try changing the Recordset Type to Dynaset (Inconsistent Updates)