I am trying to create a materialized view that requires slightly different filters between prod, dev, and qa.
We have a variables table that stores random ids, and I'm trying to find a way to store something like this in my variables table:
prod_filter_values = "(D.DEFID = 123 AND D.ATTRID IN (2, 3, 4)) OR
(D.DEFID = 3112 AND D.ATTRID IN (3, 30, 34, 23, 4)) OR
(D.DEFID = 379 AND D.ATTRID IN (3, 5, 8)) OR
(D.DEFID = 3076 AND D.ATTRID = 5);"
Then I'd do something like select * from variables_table where EVAL(prod_filter_values)
Is it possible?
Yes you can, as other answers have explained. However, a better way would be to make this data-driven: simply create tables in your various environments that hold the corresponding magic numbers, and join to them as required (see the sketch below).
A second way is to have different views for the different environments, with the numbers hard-coded there.
Anything that avoids building strings is going to be better for several reasons, including having code in one place, stable code, no security/injection problems, and no parse overhead.
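A minimal sketch of the data-driven approach, assuming a hypothetical env_filter table that holds each environment's DEFID/ATTRID pairs (all object names here are illustrative, not from the original post):

-- one row per (DEFID, ATTRID) pair the environment should include;
-- each environment's copy of this table carries its own magic numbers
CREATE TABLE env_filter (
    defid  NUMBER NOT NULL,
    attrid NUMBER NOT NULL,
    CONSTRAINT env_filter_pk PRIMARY KEY (defid, attrid)
);

-- e.g. in prod:
INSERT INTO env_filter VALUES (123, 2);
INSERT INTO env_filter VALUES (123, 3);
INSERT INTO env_filter VALUES (123, 4);

-- the materialized view then joins instead of hard-coding the predicate
CREATE MATERIALIZED VIEW mv_data AS
SELECT d.*
FROM   data_table d
JOIN   env_filter f
  ON   f.defid = d.defid
 AND   f.attrid = d.attrid;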
Yes. Look up dynamic SQL:
https://docs.oracle.com/cloud/latest/db112/LNPLS/dynamic.htm#LNPLS01102
Something like this:
EXECUTE IMMEDIATE 'select * from vars_table where ' || prod_filter_values;
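Note that EXECUTE IMMEDIATE on a SELECT needs somewhere to put the rows; in PL/SQL you would typically open a ref cursor over the assembled text instead. A sketch, assuming the predicate text is stored in a column of the variables table (the filter_text and name columns, and data_table, are illustrative):

DECLARE
    l_filter VARCHAR2(4000);
    l_cursor SYS_REFCURSOR;
BEGIN
    -- fetch the stored predicate text for this environment
    SELECT filter_text
    INTO   l_filter
    FROM   variables_table
    WHERE  name = 'prod_filter_values';

    -- concatenating text into SQL is the injection risk the other
    -- answer warns about; only do this with trusted, in-house data
    OPEN l_cursor FOR 'select * from data_table d where ' || l_filter;
    -- ... fetch from l_cursor as needed ...
END;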
Related
I have a JSON API payload containing tablename, columnlist - how to build a SELECT query from it using pypika?
So far I have been able to use a string columnlist, but I am not able to do advanced querying using functions, analytics, etc.
from pypika import Table, Query, functions as fn

def generate_sql(tablename, collist):
    table = Table(tablename)
    columns = [str(table) + '.' + each for each in collist]
    q = Query.from_(table).select(*columns)
    return q.get_sql(quote_char=None)

tablename = 'customers'
collist = ['id', 'fname', 'fn.Sum(revenue)']
print(generate_sql(tablename, collist))  #1

table = Table(tablename)
q = Query.from_(table).select(table.id, table.fname, fn.Sum(table.revenue))
print(q.get_sql(quote_char=None))  #2
#1 outputs
SELECT "customers".id,"customers".fname,"customers".fn.Sum(revenue) FROM customers
#2 outputs correctly
SELECT id,fname,SUM(revenue) FROM customers
You should not be trying to assemble the query in a string by yourself; that defeats the whole purpose of pypika.
In your case, where the table name and the columns arrive as text in a JSON object, you can use * to unpack those values from the collist and use the obj[key] syntax to get a table attribute by name from a string.
q = Query.from_(table).select(*(table[col] for col in collist))
# SELECT id,fname,fn.Sum(revenue) FROM customers
Hmm... that doesn't quite work for the fn.Sum(revenue). The goal is to get SUM(revenue).
This can get much more complicated from this point. If you are only sending column names that you know belong to that table, the above solution is enough.
But if you have complex SQL expressions, making reference to SQL functions or even different tables, I suggest you rethink your decision to send that as JSON. You might end up with something as complex as pypika itself, like a custom parser or whatever. In that case, your better option would be to change the format of your JSON response object.
If you know you only need to support a very limited set of capabilities, it could be feasible. For example, you can assume the following constraints:
all column names refer to only one table, with no joins or aliases
all functions will be prefixed by fn.
no fancy stuff like window functions, distinct, count(*)...
Then you can do something like:
from pypika import Table, Query, functions as fn
import re

tablename = 'customers'
collist = ['id', 'fname', 'fn.Sum(revenue / 2)', 'revenue % fn.Count(id)']

def parsed(cols):
    # leave fn.* calls untouched; prefix every other identifier with "table."
    pattern = r'(?:\bfn\.[a-zA-Z]\w*)|([a-zA-Z]\w*)'
    subst = lambda m: f"{'' if m.group().startswith('fn.') else 'table.'}{m.group()}"
    yield from (re.sub(pattern, subst, col) for col in cols)

table = Table(tablename)
env = dict(table=table, fn=fn)  # the only names visible to the eval'd expressions
q = Query.from_(table).select(*(eval(col, env) for col in parsed(collist)))
print(q.get_sql(quote_char=None))  #2
Output:
SELECT id,fname,SUM(revenue/2),MOD(revenue,COUNT(id)) FROM customers
I'm working with a query that is used by multiple services, but the number of results returned is different based on filtering.
To avoid copying and pasting the query, I was wondering if it was possible to pass in piece of sql into a sql parameter and it would work? I'm also open to alternative solutions.
EXAMPLE:
MapSqlParameterSource parameters = new MapSqlParameterSource();
parameters.addValue("filter", "and color = blue");
namedParameterJdbcTemplate.query("select * from foo where name = 'Joe' :filter", parameters, new urobjRowMapper());
It is very dangerous and fragile to let callers pass SQL to your program, because it opens you up to SQL injection - the very problem the parameters are there to prevent.
A better approach is to pre-code the filters in your query, and protect them by a special "selector" parameter:
SELECT *
FROM foo
WHERE name = 'Joe' AND
      (
        (:qselect = 1 AND color = 'blue')
        OR (:qselect = 2 AND startYear = 2021)
        OR (:qselect = 3 AND ...)
      )
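Each service then binds only the selector value rather than any SQL text. If one of the services needs the unfiltered rows, a catch-all branch can be added; a sketch, using :qselect = 0 as an assumed "no extra filter" convention:

SELECT *
FROM foo
WHERE name = 'Joe' AND
      (
        :qselect = 0  -- assumed convention: no extra filter, all of Joe's rows
        OR (:qselect = 1 AND color = 'blue')
        OR (:qselect = 2 AND startYear = 2021)
      )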
Apologies for posting a new question but I just can't think how to search for this question.
I'm creating a Crystal Report with multiple parameters and at the moment each one is connected by an ‘AND’ in the Report > Selection Formulas part of the report (not the SQL command part).
I haven’t fully authored the report and it contains lots of arrays to deal with multiple text values and wildcard searches but I think my question should be more around logic than the technical functions.
So…
Parameters are for things like product code, date range, country, batch number etc.
Currently the parameters I’m concerned with are Faults and keyword searches for complaints against products.
(Query 1) If all other parameters are set to default I can enter Fault Combination = ‘Assembly – Code’ and that gives me 17 records.
(Query 2) Entering keyword = ‘%unit%’ gives me 55 records.
The 2 parameters are connected by an AND, so if I use Fault Combination = ‘Assembly – Code’ and Keyword = ‘%unit%’ then I get 12 records. If I connect the two queries with OR, I still get 12 records.
If I compare the unique records, in Excel, between queries 1 & 2, there are 60 records with Fault Combination = ‘Assembly – Code’ OR keyword = ‘%unit%’ (which matches 17 + 55 - 12 overlapping = 60).
How can I write the parameter formula to get the 60 unique records with one query?
Many thanks!
Gareth
Edit - Code Added
This is the segment I'm concerned with. The arrays are defined earlier in the statement, and the '*' & '%' parts of the query below are just to deal with the different wildcard operators between SQL and Crystal. There are a lot of other parameters, but these 3 are the only ones that need the OR kind of connection.
Hope that helps!
(IF "%" LIKE array_fn2
THEN ((ISNULL({Command.FaultNoun})=TRUE) OR ({Command.FaultNoun} LIKE '*'))
ELSE IF {Command.RecordType} = 'Complaint'
THEN ({Command.FaultNoun} like array_fn2)
ELSE ((ISNULL({Command.FaultNoun})=TRUE) OR ({Command.FaultNoun} LIKE '*'))) AND
(IF "%" LIKE array_fa2
THEN ((ISNULL({Command.FaultAdjective})=TRUE) OR ({Command.FaultAdjective} LIKE '*'))
ELSE IF {Command.RecordType} = 'Complaint'
THEN ({Command.FaultAdjective} like array_fa2)
ELSE ((ISNULL({Command.FaultAdjective})=TRUE) OR ({Command.FaultAdjective} LIKE '*'))) AND
(IF ("%" LIKE array_k2) OR ({Command.RecordType} = 'Sale')
THEN ((ISNULL({Command.ActualStatements})=TRUE) OR ({Command.ActualStatements} LIKE '*')
OR (ISNULL({Command.ResultsAnalysis})=TRUE) OR ({Command.ResultsAnalysis} LIKE '*')
OR (ISNULL({Command.Observation})= TRUE) OR ({Command.Observation} LIKE '*'))
ELSE
({Command.ActualStatements} like array_k2) OR
({Command.ResultsAnalysis} like array_k2) OR
({Command.Observation} like array_k2))
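For clarity, the result being asked for corresponds to SQL along these lines (a sketch only; the table and column names are illustrative, not taken from the actual report):

SELECT DISTINCT r.*
FROM   complaint_records r
WHERE  r.fault_combination = 'Assembly - Code'   -- on its own: 17 records
   OR  r.keyword_text LIKE '%unit%'              -- on its own: 55 records
-- OR keeps the union of both sets: 17 + 55 - 12 overlapping = 60 unique records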
Currently I have a scenario that involves switching a synonym definition after the completion of a scheduled job. The job will create a table with an identifier of even or odd to correspond with the hour being even or odd. What we are currently doing is this:
odd_job:
    CREATE TABLE foo_odd AS SELECT ...;
    CREATE OR REPLACE SYNONYM foo_syn FOR foo_odd;
and
even_job:
    CREATE TABLE foo_even AS SELECT ...;
    CREATE OR REPLACE SYNONYM foo_syn FOR foo_even;
What is happening is that during normal production the foo_syn is in a locked state. So what we are looking for is a production capable way of swapping synonym definitions.
The question is how can we swap a synonym definition in a production level system with minimum user interruption in Oracle 10g?
From the comments
Does foo_syn have any dependent objects?
No. foo_syn is nothing more than a pointer to a table that I generate. That is, there are no procedures that need to be recompiled for this switch.
That sounds like a really strange thing to do. Can you explain a bit
what that switch is for/how it is used?
Sure. We have an application that interfaces with the database; the SQL that is executed from Java (the business logic queries) references foo_syn. Because of the dynamic nature of the data, the hourly swap is guaranteed to give new results, which matters as we try to get closer to real time. Prior to this it was a once-a-day-and-be-happy-with-it type of scenario.
The reasoning for the swap is that I do not want dynamic SQL (in terms of table names) to be part of my application queries. So the database does the switch to the newer data set without changing the name of the synonym that is referenced by my application.
If using dynamic SQL is distasteful to you (and I'll quickly point out that in my experience dynamic SQL has never proved to be a performance issue, but YMMV), then a UNION query might be what you're looking for - something like:
SELECT *
FROM EVEN_DATA_TABLE
WHERE TO_NUMBER(TO_CHAR(SYSDATE, 'HH')) IN (0, 2, 4, 6, 8, 10, 12)
UNION ALL
SELECT *
FROM ODD_DATA_TABLE
WHERE TO_NUMBER(TO_CHAR(SYSDATE, 'HH')) IN (1, 3, 5, 7, 9, 11)
This also eliminates the need to have a periodic job to change the synonym as it's driven off of SYSDATE.
This makes the assumption that the columns in EVEN_DATA_TABLE and ODD_DATA_TABLE are the same.
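If the application must keep referencing a single stable name, the UNION itself can sit behind an ordinary view, so no hourly DDL is needed at all. A sketch (the view name foo_syn_v is an assumption, and both tables must keep the same columns):

CREATE OR REPLACE VIEW foo_syn_v AS
SELECT *
FROM   even_data_table
WHERE  MOD(TO_NUMBER(TO_CHAR(SYSDATE, 'HH')), 2) = 0
UNION ALL
SELECT *
FROM   odd_data_table
WHERE  MOD(TO_NUMBER(TO_CHAR(SYSDATE, 'HH')), 2) = 1;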
Share and enjoy.
The solution that we came up with is as follows:
1) Define a function that will return which set of tables you should be looking at:
create or replace function which_synonym return varchar2 as
    to_return    varchar2(4) := NULL;
    is_valid     number := -1;
    current_time number := to_number(to_char(sysdate, 'HH'));
    is_odd       boolean := FALSE;
BEGIN
    if mod(current_time, 2) = 0 -- it is an even time slot
    then
        select success into is_valid
        from success_table
        where run = 'EVEN';
    else
        is_odd := TRUE;
        select success into is_valid
        from success_table
        where run = 'ODD';
    end if;
    if is_valid = 0 and is_odd
    then
        to_return := 'ODD';
    else
        to_return := 'EVEN';
    end if;
    return to_return;
END which_synonym;
De Morgan's laws omitted for conciseness.
2) Configure the application procedures to take advantage of this flipping:
a) Tokenize enumerated sql strings with a sequence that you want to match on:
select * from foo_&&&
b) Write the function that will replace this sequence:
public String whichSynonym(String sql)
{
    if (null == sql || "".equals(sql.trim()))
    {
        throw new IllegalArgumentException("Cannot process null or empty sql");
    }
    String oddEven = "";
    //removed boilerplate
    PreparedStatement statement = conn.prepareStatement("select which_synonym from dual");
    // run the query and read the single-row result
    ResultSet results = statement.executeQuery();
    while (results.next())
    {
        oddEven = results.getString(1);
    }
    return sql.replace("&&&", oddEven);
}
I have a TQuery (going through the BDE or a BDE-emulating component) that has been used to select either a single record or all records.
Traditionally this has been done as such:
select * from clients where (clientid = :clientid or :clientid = -1)
And then they would put a -1 in the field when they wanted the query to return all values. Going through this code, though, I have discovered that when they do this, the query does not use proper indexing on the table and only does a natural read.
Is there a best practices method for achieving this? Perhaps a way to tell a parameter to return all values, or must the script be modified to remove the where clause entirely when all values are desired?
Edit: This is Delphi 7, by the way (and going against Firebird 1.5 - sorry for leaving that out).
Since you use the deprecated BDE, that may be one more reason to migrate from the BDE to third-party solutions. AnyDAC (UniDAC, and probably others too; most are commercial libraries) has macros, which allow you to change the SQL command text dynamically, depending on the macro values. So your query may be written:
ADQuery1.SQL.Text := 'select * from clients {IF &clientid} where clientid = &clientid {FI}';
if clientid >= 0 then
  // to get a single record
  ADQuery1.Macros[0].AsInteger := clientid
else
  // to get all records
  ADQuery1.Macros[0].Clear;
ADQuery1.Open;
For queries with "optional" parameters I always use ISNULL (MSSQL) or NVL (Oracle), i.e.:
SELECT * FROM clients WHERE ISNULL(:clientid, clientid) = clientid
Setting the parameter to NULL then selects all records.
You also have to take care of NULL values in the table fields because NULL <> NULL. This you can overcome with a slight modification:
SELECT * FROM clients WHERE COALESCE(:clientid, clientid, -1) = ISNULL(clientid, -1)
I would use this:
SELECT * FROM CLIENTS WHERE clientid = :clientid or :clientid IS NULL
Using two queries is best:
if (clientid <> -1) then
begin
  DBComp.SQL.Text := 'select * from clients where (clientid = :clientid)';
  DBComp.ParamByName('clientid').Value := clientid;
end else
begin
  DBComp.SQL.Text := 'select * from clients';
end;
DBComp.Open;
...
Alternatively:
DBComp.SQL.BeginUpdate;
try
  DBComp.SQL.Clear;
  DBComp.SQL.Add('select * from clients');
  if (clientid <> -1) then
    DBComp.SQL.Add('where (clientid = :clientid)');
finally
  DBComp.SQL.EndUpdate;
end;
if (clientid <> -1) then
  DBComp.ParamByName('clientid').Value := clientid;
DBComp.Open;
...
Remy's answer may be re-formulated as a single query.
It may be better if you are going to prepare it once and then re-open it multiple times.
select * from clients where (clientid = :clientid)
and (:clientid is not null)
UNION ALL
select * from clients where (:clientid is null)
This just aggregates two distinct queries (with the same result shape) together, and the invariant conditions just turn one of them off.
Usage would be like this:
DBComp.Prepare;
...
DBComp.Close;
DBComp.ParamByName('clientid').Value := clientid;
DBComp.Open;
...
DBComp.Close;
DBComp.ParamByName('clientid').Clear;
DBComp.Open;
However, this query relies on the SQL server's optimizer being able to extract the query invariant (:clientid is [not] null) and enable/disable each branch completely. But then, your original query depends upon that too.
Why still use the obsolete FB 1.5? Won't FB 2.5.2 work better there?
I think your original query is formulated poorly.
select * from clients where (:clientid = -1) or ((clientid = :clientid) and (:clientid <> -1))
would probably be easier on the SQL server's optimizer. Yet I think FB could do a better job there. Try downloading a later FB and running your query in it, using IDEs like IBExpert or FlameRobin. Rearranging parentheses and changing -1 to NULL are obvious ideas to try.
Using the BDE is fragile now. It is not very fast, is limited in datatypes and connectivity (no FB/IB Events, for example), and would have all sorts of compatibility problems with Vista/Win7 and Win64. If FB/IB is your server of choice, consider switching to some modern component set:
(FLOSS) Universal Interbase by http://uib.sf.net (RIP all Delphi pages of http://Progdigy.com )
(FLOSS) ZeosLib DBO by http://zeos.firmos.at/
(propr) FIB+ by http://DevRace.com
(propr) IB Objects by http://IBobjects.com
(propr) AnyDAC by http://da-soft.com - sold to Embarcadero, not available for D7
(propr) IB-DAC/UniDAC http://DevArt.com
Also, it would be a good thing to show the table and index definitions, and the selectivity of those indices.