SQL LIMIT clause based on input parameter

I have been trying to find a solution for a LIMIT clause based on an input parameter from a JSON file. The current code looks somewhat like this:
With myJsonTable (JsonText) as (
    Select JsonText
)
Select * from Data
Where ...
Limit
    Case
        WHEN (Select JSON_VALUE(JsonText, '$."Amount"') From myJsonTable) is not null
        THEN (Select JSON_VALUE(JsonText, '$."Amount"') From myJsonTable)
        ELSE (10000000)
    END
I can't seem to get this to work. The output I am getting is:
Non-negative integer value expected in LIMIT clause
Is there a way to cast the result of the select? Trying different selects anywhere in the CASE clause caused the same error.

Exasol only allows constant expressions in the LIMIT clause, so it's not directly possible to specify a select statement that references myJsonTable there.
However, you can work around this issue by using an approach similar to the one in "SQL query for top 5 results without the use of LIMIT/ROWNUM/TOP".
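For illustration, here is a minimal sketch of that workaround using ROW_NUMBER() instead of LIMIT, so the row count can come from myJsonTable. The JSON literal and the ordering column SomeSortColumn are placeholders for this sketch, not part of the original question:
With myJsonTable (JsonText) as (
    -- illustrative literal; in practice the JSON text comes from the file
    Select '{"Amount": 100}'
)
Select *
From (
    Select d.*,
           Row_Number() Over (Order By d.SomeSortColumn) as rn  -- any deterministic ordering
    From Data d
    -- Where ...
) sub
Where rn <= Coalesce(
    (Select Cast(JSON_VALUE(JsonText, '$."Amount"') as Decimal(18,0)) From myJsonTable),
    10000000
)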

Function which returns type runs multiple times

This is my first question here so sorry if I'm doing something wrong.
I have a function in PostgreSQL which returns a type and I want to display all fields from that type.
At first I was doing the following SQL:
SELECT (FC_FUNCTION(FIELD_A, FIELD_B, FIELD_C)).*
FROM TABLE
But I noticed that it was running way too slow. After checking, it looked like it was running the function again for each field of the type. Changing the SQL to the following not only returned the same results, but was way faster:
SELECT (X).*
FROM (SELECT FC_FUNCTION(FIELD_A, FIELD_B, FIELD_C) AS X FROM TABLE) A
Is this the correct way of doing it? It feels to me more like a workaround than a solution. Thanks!
This is documented:
[...] these two queries have the same result:
SELECT (myfunc(x)).* FROM some_table;
SELECT (myfunc(x)).a, (myfunc(x)).b, (myfunc(x)).c FROM some_table;
Tip
PostgreSQL handles column expansion by actually transforming the first form into the second. So, in this example, myfunc() would get invoked three times per row with either syntax. If it's an expensive function you may wish to avoid that, which you can do with a query like:
SELECT m.* FROM some_table, LATERAL myfunc(x) AS m;
Placing the function in a LATERAL FROM item keeps it from being invoked more than once per row. m.* is still expanded into m.a, m.b, m.c, but now those variables are just references to the output of the FROM item. (The LATERAL keyword is optional here, but we show it to clarify that the function is getting x from some_table.)
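A quick way to observe the difference is to add a RAISE NOTICE to a throwaway function. This is only a sketch; myfunc and some_table follow the names used in the documentation excerpt above, and some_table is assumed to have an integer column x:
-- Hypothetical function that logs each invocation:
CREATE FUNCTION myfunc(x int, OUT a int, OUT b int, OUT c int) AS $$
BEGIN
    RAISE NOTICE 'myfunc called with %', x;
    a := x + 1; b := x + 2; c := x + 3;
END;
$$ LANGUAGE plpgsql;

-- Per the tip above, this is expanded to one call per output column (three notices per row):
SELECT (myfunc(x)).* FROM some_table;

-- This invokes the function once per row (one notice per row):
SELECT m.* FROM some_table, LATERAL myfunc(x) AS m;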

Fastest way to check if any case of a pattern exists in a column using SQL

I am trying to write code that allows me to check if there are any cases of a particular pattern inside a table.
The way I am currently doing it is with something like:
select count(*)
from database.table
where column like (some pattern)
and seeing if the count is greater than 0.
I am curious to see if there is any way I can speed up this process, as this type of pattern finding happens in a loop in my query, and all I need to know is whether there is even one such case rather than the total number of cases.
Any suggestions will be appreciated.
EDIT: I am running this inside a Teradata stored procedure for the purpose of data quality validation.
Using EXISTS will be faster if you don't actually need to know how many matches there are. Something like this would work:
IF EXISTS (
SELECT *
FROM bigTbl
WHERE label LIKE '%test%'
)
SELECT 'match'
ELSE
SELECT 'no match'
This is faster because once it finds a single match it can return a result.
If you don't need the actual count, the most efficient way in Teradata will use EXISTS:
select 1
where exists
( select *
from database.table
where column like (some pattern)
)
This will return an empty result set if the pattern doesn't exist.
In terms of performance, a better approach is to:
Select the result set based on your pattern.
Limit the result set's size to 1.
Check whether a result was returned.
Doing this lets the database engine stop as soon as the first matching record is encountered, instead of scanning the whole table.
The actual query depends on the database you're using. In MySQL, it would look something like:
SELECT id FROM database.table WHERE column LIKE '%some pattern%' LIMIT 1;
In Oracle it would look like this:
SELECT id FROM database.table WHERE column LIKE '%some pattern%' AND ROWNUM = 1;
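Since the question mentions Teradata, the same early-exit idea there would use TOP. This is a sketch reusing the question's placeholder names:
SELECT TOP 1 id FROM database.table WHERE column LIKE '%some pattern%';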

Sub-Queries in Sybase SQL

We have an application which indexes data using user-written SQL statements. We place those statements within parentheses so we can limit the query to certain criteria. For example:
select * from (select F_Name from table_1)q where ID > 25
However, we have discovered that this format does not work on a Sybase database; it reports a syntax error around the parentheses. I've tried playing around on a test instance but haven't been able to find a way to achieve this result. I'm not directly involved in the development and my SQL knowledge is limited. I'm assuming the 'q' is there to give the subresult an alias for the application to use.
Does Sybase have a specific syntax? If so, how could this query be adapted for it?
Thanks in advance.
Sybase ASE is case sensitive with respect to all identifiers, so the query itself should work once the case matches.
As per @HannoBinder's query: select id from ... is not the same as select ID from ..., so make sure of the case.
Also make sure that the column ID is returned by the Q subquery so that it can be used in the WHERE clause.
If the table and column names are in upper case, the following query should work:
select * from (select F_NAME, ID from TABLE_1) Q where ID > 25

Set CASE statement to variable and then use that variable in GROUP BY clause

I am using SQL Server 2008 and have a very large CASE statement that is also used in the GROUP BY clause. I would like to assign the CASE statement to a variable in order to minimize code maintenance and maximize reuse. The problem is that I get this error:
Each GROUP BY expression must contain at least one column that is not an outer reference.
This CASE-derived column is not the only column referenced in the GROUP BY clause, so I'm not sure why I get this error.
I've searched the site but didn't find a problem quite like mine (surprisingly). So, how do I go about getting around this?
UPDATE: I have included the DB type. As far as adding the code for what I have, I'm not sure that would add anything but bulk as it is over 200 lines. It's not a complex statement at all. It just takes various country codes and maps them to their full country names. For instance, the U.S. has over 50 codes so I am using the CASE statement to consolidate them. This allows me to group my info by country.
The best way to do this is with a subquery:
select var, count(*)
from (select t.*,
             (case <nasty expressions go here>
              end) var
      from t
     ) t
group by var
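Applied to the country-code mapping described in the question, the same pattern might look like this (the table and column names are illustrative, not from the original post):
select country, count(*)
from (select t.*,
             (case when country_code in ('US', 'USA', 'U.S.A.') then 'United States'
                   when country_code in ('UK', 'GB') then 'United Kingdom'
                   else country_code
              end) as country
      from dbo.CountryData t
     ) t
group by country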
The error that you are getting is because the variable in the GROUP BY is a constant. I'm not sure why the error message is not clearer.
And, in the event that you actually do want to include a constant in the group by for some reason (as I have had occasion to do), then a column helps:
group by (case when coalesce(col, '') = coalesce(col, '') then 'some constant' end)
At least in SQL Server 2008, the engine does not recognize the expression as a constant.

SQL: Error, Expression services limit reached?

"Internal error: An expression services limit has been reached. Please look for potentially complex expressions in your query, and try to simplify them."
Has anyone seen this before and found a good workaround?
I managed to get around this issue by splitting my SQL query into two parts: the first SELECT query writes its results to a temp table, and the second SELECT statement reads from the temporary table and uses a lot of CROSS APPLY operators to calculate cascading computed columns.
This is an example of how the second part looks, but I'm using a lot more CROSS APPLYs to produce new columns which are calculations:
Select * from #tempTable
cross apply
(
select HmmLowestSalePrice =
round(((OurSellingPrice + 1.5) / 0.95) - (CompetitorsLowestSalePrice) + 0.08, 2)
) as HmmLowestSalePrice
cross apply
(
select checkLowestSP =
case
when adjust = 'No Room' then 'No Room'
when OrginalTestSalePrice >= CompetitorsLowestSalePrice then 'Minus'
when OrginalTestSalePrice < CompetitorsLowestSalePrice then 'Ok'
end
) as checkLowestSP
cross apply
(
select AdjustFinalNewTestSP =
case
when FinalNewTestShipping < 0 Then NewTestSalePrice - (FinalNewTestShipping)
when FinalNewTestShipping >= 0 Then NewTestSalePrice
end
) as AdjustFinalNewTestSP
cross apply
(
select CheckFinalSalePriceWithWP =
case
when round(NewAdminSalePrice, 2) >= round(wholePrice, 2) then 'Ok'
when round(NewAdminSalePrice, 2) < round(wholePrice, 2) then 'Check'
end
) as CheckFinalSalePriceWithWP
DROP TABLE #tempTable
My goal is to put this into a SQL report, and it works fine if there is only one user, as the #tempTable gets created and dropped within the same execution and the results are displayed in the report correctly. But if there are concurrent users in the future, will they be writing to the same #tempTable, which would affect the results?
I've looked at putting this into stored procedures but still get the error message above.
This issue occurs because SQL Server limits the number of identifiers and constants that can be contained in a single expression of a query. The limit is 65,535. The test for the number of identifiers and constants is performed after SQL Server expands all referenced identifiers and constants. In SQL Server 2005 and above, queries are internally normalized and simplified, and that includes * (asterisk) expansion, computed columns, etc.
To work around this issue, rewrite your query to reference fewer identifiers and constants in its largest expression. You must make sure that the number of identifiers and constants in each expression of the query does not exceed the limit. To do this, you may have to break the query down into more than one query and create a temporary intermediate result.
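As a rough sketch of that split (the source table and the column lists are illustrative; the placeholders follow the question's own CROSS APPLY examples):
-- First part: materialize the base columns once
SELECT OurSellingPrice,
       CompetitorsLowestSalePrice,
       <other base and computed columns>
INTO #tempTable
FROM dbo.Prices;

-- Second part: build the cascading calculations on top of the temp table
SELECT t.*,
       <further computed columns, e.g. via CROSS APPLY as shown above>
FROM #tempTable AS t;

DROP TABLE #tempTable;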
The same issue happened to me when we tried to change the database compatibility level to 150. It was not an issue at 140 or lower.
I just had this problem and fixed it by removing the UNIQUE index on my table. For some reason, that seems to trigger this error, although I cannot figure out why.
By the way, the same query does work with several other indexes.
What worked for me was replacing several COALESCE expressions with ISNULL wherever possible.
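For example (column and table names are illustrative), an expression like this:
SELECT COALESCE(UnitPrice, 0) AS UnitPrice FROM dbo.Prices;
becomes:
SELECT ISNULL(UnitPrice, 0) AS UnitPrice FROM dbo.Prices;
This likely helps because SQL Server rewrites COALESCE as an equivalent CASE expression, which repeats its arguments and so inflates the identifier count, whereas ISNULL does not. Note that ISNULL takes exactly two arguments, so a COALESCE over more than two values would have to be nested.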