Set default value for WHERE clause for specific columns - sql

Suppose I've created a table like this:
CREATE TABLE stuff(
    a1 TEXT,
    a2 TEXT,
    . . .
    an TEXT,
    data TEXT
);
I need to make SELECT queries like this:
SELECT data FROM stuff WHERE
a1="..." and a2="..." and . . . and an="..."
selecting a specific value of data, with all a values specified.
There are a lot of a columns for which there is some value that I consider the default.
Is there a way to add some kind of statement to the table or the query that will use some default value for the a columns that are not explicitly constrained in the WHERE clause? For example, if I don't write a1="b" after WHERE, I should get only rows where a1 is equal to "b" rather than rows with any value, but if I write a1="c" I should get those rows instead.
The default is the same for all the a columns.
The best solution for me would be to bake the default into the table or the database file.
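To illustrate (with the default being "b" and, for simplicity, just columns a1 and a2, where "x" is some example value), I would like this query, which does not mention a1:
SELECT data FROM stuff WHERE a2="x";
to behave exactly as if I had written
SELECT data FROM stuff WHERE a1="b" and a2="x";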

Is there a way to add some kind of statement to the table or the query
that will use some default value for the a columns that are not explicitly
constrained in the WHERE clause?
Short answer: No.
Based on what you said here:
For example, if I don't write a1="b" after WHERE
I'm thinking you might be running this query often, maybe even manually, and you want to pass in some different values. If this is indeed the case, you can pass parameters in and use some variables to handle this.
What you can do is have a CASE statement in that WHERE to help. Consider the following:
SELECT *
FROM table
WHERE
    CASE
        WHEN %s IS NOT NULL THEN a1 = %s
        ELSE a1 = 'b'
    END
    [AND/OR] <other constraints here>
Now, the syntax of how you actually do this will vary based on what you're using to execute your queries. Are they being run from a Python program? Something in .NET? And so on.
Also, you don't have to stick strictly to the IS NOT NULL I used there; you could test for other values and do other things with them. Up to you.
EDIT:
To Shawn's point in the comments, if %s, the argument being passed in, is either NULL or a legitimate value, then ifnull() would be a cleaner alternative to this CASE. Example below:
WHERE
a1 = ifnull(%s, 'b')
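Applying that to the query from the question, a rough sketch might look like the following (assuming three a columns, a shared default of 'b', and a dialect such as SQLite or MySQL that provides ifnull(); %s stands for whatever placeholder your driver uses):
SELECT data
FROM stuff
WHERE a1 = ifnull(%s, 'b')
  AND a2 = ifnull(%s, 'b')
  AND a3 = ifnull(%s, 'b');
Pass NULL for any column you want left at its default, and a concrete value for any column you want to constrain explicitly.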

Related

PL SQL select count(*) giving wrong answer

I have a table consisting of 3 columns: system, module and block. The table is filled in a procedure which accepts system, module and block and then checks if the trio is in the table:
select count(*) into any_rows_found from logs_table llt
where system=llt.system and module=llt.module and block=llt.block;
If the table already has a row containing those three values, then they should not be written into the table, and if it doesn't have them, they should be written in. The problem is, if the table has values 'system=a module=b block=c' and I query for 'does the table have system=a module=d block=e', it returns yes, or, to be precise, any_rows_found=1. The value 1 only fails to appear when I send a trio that doesn't have one of its values in the table, for example: 'system=g module=h block=i'. What is the problem in my query?
Problem is in this:
where system = llt.system
Both systems are the same; it is as if you put where 1 = 1, so Oracle is kind of confused (thanks to you).
What to do? Rename the procedure's parameters to e.g. par_system so that the query becomes
where llt.system = par_system
Another option (worse, in my opinion) is to precede the parameter's name with the procedure name. If the procedure's name were e.g. p_test, then you'd have
where llt.system = p_test.system
From the documentation:
If a SQL statement references a name that belongs to both a column and either a local variable or formal parameter, then the column name takes precedence.
So when you do
where system=llt.system
that is interpreted as
where llt.system=llt.system
which is always true (unless it's null). It is common to prefix parameters and local variables (e.g. with p_ or l_) to avoid confusion.
So as @Littlefoot said, either change the procedure definition to make the parameter names different to the column names, or qualify the parameter names with the procedure name - which some people prefer but I find more cumbersome, and it's easier to forget and accidentally use the wrong reference.
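To make the renaming option concrete, a minimal sketch of the procedure might look like this (the procedure name check_and_log and the p_ prefix are assumptions; the logic follows the question's description of insert-if-missing):
CREATE OR REPLACE PROCEDURE check_and_log (
    p_system logs_table.system%TYPE,
    p_module logs_table.module%TYPE,
    p_block  logs_table.block%TYPE
) AS
    any_rows_found NUMBER;
BEGIN
    SELECT COUNT(*)
      INTO any_rows_found
      FROM logs_table llt
     WHERE llt.system = p_system
       AND llt.module = p_module
       AND llt.block  = p_block;

    -- only insert the trio if it is not already present
    IF any_rows_found = 0 THEN
        INSERT INTO logs_table (system, module, block)
        VALUES (p_system, p_module, p_block);
    END IF;
END;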
The root cause is the alias used for the table name.
where system=llt.system and module=llt.module and block=llt.block;
The table name alias in the select query and the input to the procedure have the same name (i.e. llt). You should consider renaming one of them.

Inserting new rows into table-1 based on constraints defined on table-2 and table-3

I want to append new rows to a table-1 d:\dl based on the equality constraint lower(rdl.subdir) = lower(tr.n1), where rdl and tr would be prospective aliases for f:\rdl and f:\tr tables respectively.
I get a "function name is missing )." message when running the following command in VFP9:
INSERT INTO d:\dl SELECT * FROM f:\rdl WHERE (select LOWER(subdir)FROM f:\rdl in (select LOWER(n1) FROM f:\tr))
I am using the in syntax, instead of the alias based equality statement lower(rdl.subdir) = lower(tr.n1) because I do not know where to define aliases within this command.
In general, the best way to get something like this working is to first make the query work and give you the results you want, and then use it in INSERT.
In general, in SQL commands you assign aliases by putting them after the table name, with or without the keyword AS. In this case, you don't need aliases because the ones you want are the same as the table names and that's the default.
If what you're showing is your exact code and you're running it in VFP, the first problem is that you're missing the continuation character between lines.
You're definitely doing too much work, too. Try this:
INSERT INTO d:\dl ;
    SELECT * ;
    FROM f:\rdl ;
    JOIN f:\tr ;
    ON LOWER(rdl.subdir) = LOWER(tr.n1)
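If you ever do want explicit (for instance, shorter) aliases, they go straight after the table names, as described above; a sketch with arbitrary alias names r and t:
INSERT INTO d:\dl ;
    SELECT * ;
    FROM f:\rdl r ;
    JOIN f:\tr t ;
    ON LOWER(r.subdir) = LOWER(t.n1)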

SQL: How to apply a function (stored procedure) within an UPDATE-clause to change values?

The following function deletes all blanks in a text or varchar column and returns the modified text/varchar as an int:
select condense_and_change_to_int(number_as_text_column) from mytable;
This exact query does work.
Though my goal is to apply this function to all rows of a column in order to consistently change its values. How would I do this? Is it possible with the UPDATE clause, or do I need to do this within a function itself? I tried the following:
UPDATE mytable
SET column_to_be_modiefied = condense_and_change_to_int(column_to_be_modiefied);
Basically I wanted to take the value of the current row, modify it, and save it to the column permanently.
I'd welcome all ideas on how to solve scenarios like these. I'm working with PostgreSQL (but also welcome more general solutions).
Is it possible with an update? Well, yes and sort-of.
From your description, the input to the function is a string of some sort. The output is a number. In general, numbers should be assigned to columns with a number type. The assumption is that the column in question is a number.
However, your update should work. The result will be a string representation of the number.
After running the update, you can change the column type, with something like:
alter table mytable alter column column_to_be_modiefied type int
    using column_to_be_modiefied::int;
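Putting the two steps together, a sketch of the whole flow might look like this (the USING clause with a ::int cast is an assumption about what the column holds once the blanks are condensed; PostgreSQL lets you run the DDL inside the same transaction):
begin;

update mytable
set column_to_be_modiefied = condense_and_change_to_int(column_to_be_modiefied);

alter table mytable
    alter column column_to_be_modiefied type int
    using column_to_be_modiefied::int;

commit;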

SQL WHERE is anything

I'm working on a database query via a search bar and would like it to sometimes yield all results (depending on what is inputted)
I know that for SELECT you can use * in order to select all columns. Is there similar SQL syntax: i.e. WHERE name IS * to essentially always be true?
Edit to clarify:
The nature of the clause is that a variable is used to set the name (I'm actually not able to change the clause, that was made clear). i.e. WHERE name IS [[inputName]] (inputName is decided by the search bar)
WHERE ISNULL(name, '') = ISNULL(name, '')
(assuming that 'name' is of a string type)
Just make the column reference itself. However, if this is the only goal of your query, why are you against omitting the WHERE clause?
If you want to return all results in a SQL statement, you can simply omit the WHERE clause:
SELECT <* or field names> FROM <table>;
You should use WHERE only when you want to filter your data on a certain field. In your case you just don't want to filter at all.
Actually, you don't need a WHERE clause at all in this situation. But if you insist, then you should write your predicate so it always returns true. This can be done in many ways:
Any predicate like:
WHERE 1=1
With column:
WHERE name = name OR name is null
With LIKE:
WHERE name LIKE '%' OR name is null
With passed parameter:
WHERE name = @name OR @name is null
You can think of more, of course. But I think you need the last one: pass NULL from the app layer if you want all rows.
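As a quick sketch of how the last version behaves (the table name people and the parameter syntax @name are just placeholders; your database or driver may use :name or ? instead):
-- @name = NULL  -> the second condition is true for every row, so all rows come back
-- @name = 'Bob' -> only rows where name = 'Bob'
SELECT *
FROM people
WHERE name = @name OR @name IS NULL;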

Dynamic Query in SQL Server

I have a table with 10 columns: col_1, col_2, ..., col_10. I want to write a select statement that will select a value from one of the rows and from one of these 10 columns. I have a variable that will decide which column to select from. Can such a query be written where the column name is dynamically decided by a variable?
Yes, using a CASE statement:
SELECT CASE @MyVariable
    WHEN 1 THEN [Col_1]
    WHEN 2 THEN [Col_2]
    ...
    WHEN 10 THEN [Col_10]
END
Whether this is a good idea is another question entirely. You should use better names than Col_1, Col_2, etc.
You could also use a string substitution method, as suggested by others. However, that is an option of last resort because it can open up your code to SQL injection attacks.
Sounds like a bad, denormalized design to me.
I think a better one would have the table as parent, with rows that contain a foreign key to a separate child table that contains ten rows, one for each of those columns you have now. Let the parent table set the foreign key according to that magic value when the row is inserted or updated in the parent table.
If the child table is fairly static, this will work.
Since I don't have enough details, I can't give code. Instead, I'll explain.
Declare a string variable, something like:
declare @sql varchar(5000)
Set that variable to be the completed SQL string you want (as a string, not actually querying... so you embed the column name you want using string concatenation).
Then call: exec(@sql)
All set.
I assume you are running purely within Transact-SQL. What you'll need to do is dynamically create the SQL statement with your variable as the column name and use the EXECUTE command to run it. For example:
EXECUTE('select ' + @myColumn + ' from MyTable')
You can do it with a T-SQL CASE statement:
SELECT 'The result' =
    CASE
        WHEN choice = 1 THEN col1
        WHEN choice = 2 THEN col2
        ...
    END
FROM sometable
IMHO, Joel Coehoorn's case statement is probably the best idea
... but if you really have to use dynamic SQL, you can do it with sp_executeSQL()
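A sketch of that route, under the assumption that the table is called MyTable with an Id key column (QUOTENAME protects the generated column name, and the row filter is still passed as a real parameter):
DECLARE @column_id int = 3, @row_id int = 42;   -- chosen at runtime
DECLARE @columnName sysname = N'Col_' + CAST(@column_id AS nvarchar(10));
DECLARE @sql nvarchar(max) =
    N'SELECT ' + QUOTENAME(@columnName) + N' FROM MyTable WHERE Id = @id';

EXEC sp_executesql @sql, N'@id int', @id = @row_id;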
I have no idea what platform you are using but you can use Dynamic LINQ pretty easily to do this.
var query = context.Table
    .Where( t => t.Id == row_id )
    .Select( "Col_" + column_id );
IEnumerator enumerator = query.GetEnumerator();
enumerator.MoveNext();
object columnValue = enumerator.Current;
Presumably, you'll know which actual type to cast this to depending on the column. The nice thing about this is you get the parameterized query for free, protecting you against SQL injection attacks.
This isn't something you should ever need to do if your database is correctly designed. I'd revisit the design of that element of the schema to remove the need to do this.