SQL: run only one of two statements if an internal condition is met

I don't know if anyone here knows the Tasker Android app, but I think everyone can follow what I'm trying to accomplish, because I will basically be talking about "raw" SQL code as it's written in most common languages.
First, this is what I want, roughly:
IF (SELECT * FROM ("january") WHERE ("day") = (19)) MATCHES [%records(#) = 1] END
ELSE
SELECT * FROM ("january") WHERE ("day") = (19) ORDER BY ("timea") DESC END
What I want to say above is: if the first part of the code (IF ... END) returns exactly one record matching the number 19 in the 'day' column, end execution there; but if more than one record is found, jump to the next part, after ELSE.
And if you are a Tasker user, you will understand the next (my current) setup:
A1: SQL Query [ Mode:Raw File:Tasker/Resources/Calendar Express/calendar_db Table:january Columns:day Query:SELECT * FROM ("january") WHERE ("day") = (19) Selection Parameters: Order By: Output Column Divider: Variable Array:%records Use Root:Off ]
A2: SQL Query [ Mode:Raw File:Tasker/Resources/Calendar Express/calendar_db Table:january Columns:day Query:SELECT * FROM ("january") WHERE ("day") = (19) ORDER BY ("timea") DESC Selection Parameters: Order By: Output Column Divider: Variable Array:%records Use Root:Off ] If [ %records(#) > 1 ]
An:...
So, as you can see, A1 always runs, without exception, putting the result in the variable array '%records()' (% is how Tasker identifies variables, like $ in other languages; it also uses parentheses rather than brackets for array indexing). Then, if the number of entries in the array is just one, A2 is skipped (because of its condition, %records(#) > 1), and the following actions are executed.
But if after running A1 the %records() array contains 3 entries, action A2 is executed, overwriting the previously set content of %records(). It will then contain the same number of records (3), but reordered.
Is it possible to do this in just one line of code? Thanks ;)

As 'sticky bit' replied in a comment above, I can just keep using only the second action, as the ORDER BY won't affect the output if there is only a single record. Solved!
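In other words, sorting a result set that contains a single row is a no-op, so one query covers both cases. As a minimal sketch, the single remaining action could run plain SQL like this (Tasker queries SQLite databases, where the quotes and parentheses from the rough version above are unnecessary):
SELECT * FROM january WHERE day = 19 ORDER BY timea DESC;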

Syntax error on WITH clause

I am working on a web app with some long-winded stored procedures, and I'm just trying to figure something out. I have extracted this part of the stored proc but can't get it to work. The guy who wrote it created alias after alias, and I just want to get one section running so I can work it out. It's complaining about the ending, but all the brackets seem to match. Thanks in advance.
F_Inputs is another stored procedure; the whole thing is referred to as BASE, and the result of this was being put into a temp table where it's all referred to as U. I am trying to break it down into separate sections.
;WITH Base AS
(
SELECT
*
FROM F_Inputs(1,1,100021)
),
U AS
(
SELECT
ISNULL(q.CoverPK,r.CoverPK) AS CoverPK,
OneLine,
InputPK,
ISNULL(q.InputName,r.InputName) AS InputName,
InputOrdinal,
InputType,
ParentPK,
InputTriggerFK,
ISNULL(q.InputString,r.InputString) AS InputString,
PageNo,
r.RatePK,
RateName,
Rate,
Threshold,
ISNULL(q.Excess,r.Excess) AS Excess,
RateLabel,
RateTip,
Refer,
DivBy,
RateOrdinal,
RateBW,
ngRequired,
ISNULL(q.RateValue,r.RateValue) AS RateValue,
ngClass,
ngPattern,
UnitType,
TableChildren,
TableFirstColumn,
parentRatePK,
listRatePK,
NewParentBW,
NewChildBW,
ISNULL(q.SumInsured,0) AS SumInsured,
ISNULL(q.NoItems,0) AS NoItems,
DisplayBW,
ReturnBW,
StringBW,
r.lblSumInsured,
lblNumber,
SubRateHeading,
TrigSubHeadings,
ISNULL(q.RateTypeFK,r.RateTypeFK) AS RateTypeFK,
0 AS ListNo,
0 AS ListOrdinal,
InputSelectedPK,
InputVis,
CASE
WHEN ISNULL(NewChildBW,0) = 0
THEN 1
WHEN q.RatePK is NOT null
THEN 1
ELSE RateVis
END AS RateVis,
RateStatus,
DiscountFirstRate,
DiscountSubsequentRate,
CoverCalcFK,
TradeFilter,
ngDisabled,
RateGroup,
SectionNo
FROM BASE R
LEFT JOIN QuoteInputs Q
ON q.RatePK = r.RatePK
AND q.ListNo = 0
AND q.QuoteId = 100021 )
Well, I explained the issue in the comments section already. I'm doing it here again, so future readers find the answer more easily.
A WITH clause is part of a query. It creates a view on-the-fly, e.g.:
with toys as (select * from products where type = 'toys') select * from toys;
Without the query at the end, the statement is invalid (and would not make much sense anyhow; if one wanted a permanent view for later use, one would use CREATE VIEW instead).
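Applied to the snippet in the question, the fix is to follow the two CTEs with a final query. A minimal sketch of the repaired shape (it just selects everything from U; the original procedure presumably went on to do something more specific):
;WITH Base AS
(
    SELECT * FROM F_Inputs(1, 1, 100021)
),
U AS
(
    SELECT ... -- the long column list from the question goes here
    FROM Base r
    LEFT JOIN QuoteInputs q
        ON q.RatePK = r.RatePK
        AND q.ListNo = 0
        AND q.QuoteId = 100021
)
SELECT * FROM U; -- the final query that was missing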

Perl: for (min .. max) uses random order, but I want it in order 0, 1, 2, ...

I am a total beginner to Perl, Oracle SQL and everything else, and I have to write a script to parse an Excel file and write the values into an Oracle database.
Everything is good so far. But it writes the rows in random order into the database.
for ($row_min .. $row_max) {...insert into db code $sheetValues[$_][col0] etc...}
I don't get why the rows are inserted in a random order.
And, obviously: how can I get them in order? excel_row 0 => db_row 0, and so on...
The values in the array are in order! The number of rows is dynamic.
Thanks for your help, I hope you got all the information you need.
Edit:
&parseWrite;
sub parseWrite {
my @sheetValues;
my $worksheet = $workbook->worksheet(0);
my ($row_min, $row_max) = $worksheet->row_range();
print "| Zeile $row_min bis $row_max |";
my ($col_min, $col_max) = $worksheet->col_range();
print " Spalte $col_min bis $col_max |<br>";
for my $row ($row_min .. $row_max) {
for my $col ($col_min .. $col_max) {
my $cell = $worksheet->get_cell ($row,$col);
next unless $cell;
$sheetValues[$row][$col] = $cell->value();
print $sheetValues[$row][$col] .
"(".$row."," .$col.")"."<br>";
}
}
for ($row_min .. $row_max) {
my $sql="INSERT INTO t_excel (
a,b,c,d,e
) VALUES (
'$sheetValues[$_][0]',
'$sheetValues[$_][1]',
'$sheetValues[$_][2]',
'$sheetValues[$_][3]',
'$sheetValues[$_][4]'
)";
$dbh->do($sql);
}
}
By "in order" I mean the order that my PL/SQL Developer 8.0.3 (provided by my company) shows for
SELECT * FROM t_excel;
(screenshot omitted)
But shell = (2,0), maggie = (0,0) and 13 = (1,0) in the array.
The rows are being inserted in the order you expect. I believe the mistaken assumption here is that SELECT will return rows in the same order they're inserted. This is not true. While implementations may make it seem like it does, SELECT has no default order. You're thinking a table is basically like a big list, INSERT is adding to the end of it, and SELECT just iterates through it. That's not a bad approximation, but it can lead you to make bad assumptions. The reality is that you can say little for sure about how a table is stored.
SQL is a declarative language, which means you tell the computer what you want. This differs from most other language types, where you tell the computer what to do. SELECT * FROM sometable says "give me all the rows and all their columns in the table". Since you didn't give an order, the database can return them in whatever order it likes. Contrast this with the procedural reading, "iterate through all the rows in the table", as if the table were some sort of list.
Most languages encourage you to take advantage of how data is stored. Declarative languages prevent you from knowing how data is stored.
If you want your SELECT to be ordered, you have to give it an ORDER BY.
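In this case the simplest fix is to make the desired order explicit in the data: store the spreadsheet row number in its own column and sort on it when reading back. A sketch, assuming a new column named excel_row (not in the original table) that the Perl loop fills with $_ on every INSERT:
ALTER TABLE t_excel ADD excel_row NUMBER;
-- after inserting the row number along with each row:
SELECT * FROM t_excel ORDER BY excel_row;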

Crystal Reports 8.5 showing multi-values from parameters on report footer

In Crystal Reports 8.5, I have set up a parameter for multiple range values, and the user enters 90654-90658A. Normally I would use Join(), but since this parameter holds ranges and not just text values, I have tried a few things, with no results.
Local NumberVar i;
Local NumberVar j;
Local StringVar param_values;
if 0 in {?CPT} then
"CPT #s: All CPTs"
else
(
for i := 1 to UBound ({?CPT}) do
for j := Minimum ({?CPT}[i]) to Maximum ({?CPT}[i]) do
param_values := param_values + "," + CStr (j, "#");
"CPT #s: " + Mid (param_values, 2)
)
This works fine for 90654-90658, but it fails when the user selects 90654-90658A.
Also, the selection criteria are not passed to SQL: the query sent to the server gives no indication that I am even asking for a WHERE clause. It should contain something like WHERE table.data >= '90654' AND table.data <= '90658A'.
I am lost as to where I am going wrong with this. Any help would be great; this is my first time seeking an answer on this site, but I have not received any help on this request so far.
Thanks
I tried a similar query with the Xtreme.mdb sample database, referencing the Customer table. I created a string range parameter that accepted multiple values (i.e. multiple ranges).
When I supplied it with two ranges, the following query was generated:
SELECT `Customer`.`Postal Code`
FROM `Customer` `Customer`
WHERE (
(`Customer`.`Postal Code`>='04000' AND `Customer`.`Postal Code`<='04999') OR
(`Customer`.`Postal Code`>='55000' AND `Customer`.`Postal Code`<='55999')
)
As you can see, Crystal Reports will build the necessary BETWEEN or >= <= statements.
In your situation, try:
( "0" IN {?CPT} OR {TABLE.FIELD} IN {?CPT} )
You could adapt your formula field to display the values of the parameter, if you want.
I do appreciate everyone's input, but I was able to work through the problem. For the record selection I put in the following:
{TABLE.FIELD} in CStr({#MinCPT}) to CStr({#MaxCPT})
This pulled the range after I created two formula fields, MinCPT and MaxCPT. Here is the Min formula (the Max one is the same with Maximum):
Left(ToText(Minimum({?CPT})), 2) & Mid(ToText(Minimum({?CPT})), 4, 3)
The report works fine now.
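With that record-selection formula in place, the SQL passed down to the server should contain the range predicate the question was missing, along these lines (the names below are the question's placeholders; exact quoting varies by driver):
SELECT table.data
FROM table
WHERE table.data >= '90654' AND table.data <= '90658A'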
Thanks Again.

Simple, but most likely an incorrect Join is giving a MAX_JOIN_SIZE error

Probably a simple SQL query, but I'm struggling as I'm still learning.
The following query runs fine:
SELECT NationalArea.*
FROM NationalArea
WHERE NationalArea.AreaCode = '01922'
This returns about 30 results.
This also runs fine:
SELECT DestinationNames.Name
FROM `DestinationNames`
WHERE DestinationNames.AreaCode = '01922'
This returns just the one row.
I am trying to run a query that joins the two, so that NationalArea provides the list of area codes and DestinationNames matches those area codes with the names of the towns. The query I have is as follows:
SELECT NationalArea.*, DestinationNames.Name
FROM NationalArea
JOIN DestinationNames
ON NationalArea.AreaCode=DestinationNames.AreaCode
WHERE NationalArea.AreaCode = '01922'
But I get the following error
1104 - The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay
Thanks in advance
You can display the current value with
SHOW VARIABLES LIKE '%MAX_JOIN_SIZE%';
You can change it with:
SET MAX_JOIN_SIZE = 100
Or skip the check entirely with (run this as a separate command before your query):
SET SQL_BIG_SELECTS = 1
But I would first work out why your join would examine more rows than that; it doesn't look like it should. The default value of max_join_size is 4294967295!
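If the optimizer's row estimate is the culprit, the usual cure is an index on the join column, so the join is no longer priced as a full cross product. A hedged sketch (the index names are made up, and this assumes no such indexes exist yet):
ALTER TABLE NationalArea ADD INDEX idx_na_areacode (AreaCode);
ALTER TABLE DestinationNames ADD INDEX idx_dn_areacode (AreaCode);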

What is the best way to run N independent column updates in PostgreSQL? What is the best way to do it in the SQL spec?

I'm looking for a more efficient way to run many column updates on the same table, like this:
UPDATE table
SET col = regexp_replace( col, 'foo', 'bar' )
WHERE col ~ 'foo';
Such that foo and bar will be a combination of 40 different regex replaces. I doubt even 25% of the dataset needs to be updated at all, but what I want to know is whether it is possible to cleanly achieve the following in SQL:
A single pass update
A single match of the regex, triggers a single replace
Not running all possible regexp_replaces if only one matches
Not updating all columns if only one needs the update
Not updating a row if no column has changed
I'm also curious: I know that in MySQL (bear with me)
UPDATE foo SET bar = 'baz'
has an implicit WHERE bar != 'baz' clause.
However, I know this doesn't exist in PostgreSQL. I think I could at least answer one of my questions if I knew how to skip a single row's update when the target columns weren't changed.
Something like
UPDATE table
SET col = *temp_var* = regexp_replace( col, 'foo', 'bar' )
WHERE col != *temp_var*
Do it in code. Open up a cursor, then: grab a row, run it through the 40 regular expressions, and if it changed, save it back. Repeat until the cursor doesn't give you any more rows.
Whether you do it that way or come up with the magical SQL expression, it's still going to be a row scan of the entire table, but the code will be much simpler.
Experimental Results
In response to criticism, I ran an experiment. I inserted 10,000 lines from a documentation file into a table with a serial primary key and a varchar column. Then I tested two ways to do the update. Method 1:
in a transaction:
opened up a cursor (select for update)
while reading 100 rows from the cursor returns any rows:
for each row:
for each regular expression:
do the gsub on the text column
update the row
This takes 1.16 seconds with a locally connected database.
Then the "big replace," a single mega-regex update:
update foo set t =
regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(regexp_replace(t,
E'\bcommit\b', E'COMMIT'),
E'\b9acf10762b5f3d3b1b33ea07792a936a25e45010\b',
E'9ACF10762B5F3D3B1B33EA07792A936A25E45010'),
E'\bAuthor:\b', E'AUTHOR:'),
E'\bCarl\b', E'CARL'), E'\bWorth\b',
E'WORTH'), E'\b<cworth@cworth.org>\b',
E'<CWORTH@CWORTH.ORG>'), E'\bDate:\b',
E'DATE:'), E'\bMon\b', E'MON'),
E'\bOct\b', E'OCT'), E'\b26\b',
E'26'), E'\b04:53:13\b', E'04:53:13'),
E'\b2009\b', E'2009'), E'\b-0700\b',
E'-0700'), E'\bUpdate\b', E'UPDATE'),
E'\bversion\b', E'VERSION'),
E'\bto\b', E'TO'), E'\b2.9.1\b',
E'2.9.1'), E'\bcommit\b', E'COMMIT'),
E'\b61c89e56f361fa860f18985137d6bf53f48c16ac\b',
E'61C89E56F361FA860F18985137D6BF53F48C16AC'),
E'\bAuthor:\b', E'AUTHOR:'),
E'\bCarl\b', E'CARL'), E'\bWorth\b',
E'WORTH'), E'\b<cworth@cworth.org>\b',
E'<CWORTH@CWORTH.ORG>'), E'\bDate:\b',
E'DATE:'), E'\bMon\b', E'MON'),
E'\bOct\b', E'OCT'), E'\b26\b',
E'26'), E'\b04:51:58\b', E'04:51:58'),
E'\b2009\b', E'2009'), E'\b-0700\b',
E'-0700'), E'\bNEWS:\b', E'NEWS:'),
E'\bAdd\b', E'ADD'), E'\bnotes\b',
E'NOTES'), E'\bfor\b', E'FOR'),
E'\bthe\b', E'THE'), E'\b2.9.1\b',
E'2.9.1'), E'\brelease.\b',
E'RELEASE.'), E'\bThanks\b',
E'THANKS'), E'\bto\b', E'TO'),
E'\beveryone\b', E'EVERYONE'),
E'\bfor\b', E'FOR')
The mega-regex update takes 0.94 seconds.
At 0.94 seconds compared to 1.16, it's true that the mega-regex update is faster, running in 81% of the time of doing it in code. It is not, however, a lot faster. And ye Gods, look at that update statement. Do you want to write that, or try to figure out what went wrong when Postgres complains that you dropped a parenthesis somewhere?
Code
The code used was:
def stupid_regex_replace
  sql = Select.new
  sql.select('id')
  sql.select('t')
  sql.for_update
  sql.from(TABLE_NAME)
  Cursor.new('foo', sql, {}, @db) do |cursor|
    until (rows = cursor.fetch(100)).empty?
      for row in rows
        for regex, replacement in regexes
          row['t'] = row['t'].gsub(regex, replacement)
        end
        sql = Update.new(TABLE_NAME, @db)
        sql.set('t', row['t'])
        sql.where(['id = %s', row['id']])
        sql.exec
      end
    end
  end
end
I generated the regular expressions dynamically by taking words from the file; for each word "foo", its regular expression was "\bfoo\b" and its replacement string was "FOO" (the word uppercased). I used words from the file to make sure that replacements did happen. I made the test program print out the regexes so you can see them. Each pair is a regex and the corresponding replacement string:
[[/\bcommit\b/, "COMMIT"],
[/\b9acf10762b5f3d3b1b33ea07792a936a25e45010\b/,
"9ACF10762B5F3D3B1B33EA07792A936A25E45010"],
[/\bAuthor:\b/, "AUTHOR:"],
[/\bCarl\b/, "CARL"],
[/\bWorth\b/, "WORTH"],
[/\b<cworth@cworth.org>\b/, "<CWORTH@CWORTH.ORG>"],
[/\bDate:\b/, "DATE:"],
[/\bMon\b/, "MON"],
[/\bOct\b/, "OCT"],
[/\b26\b/, "26"],
[/\b04:53:13\b/, "04:53:13"],
[/\b2009\b/, "2009"],
[/\b-0700\b/, "-0700"],
[/\bUpdate\b/, "UPDATE"],
[/\bversion\b/, "VERSION"],
[/\bto\b/, "TO"],
[/\b2.9.1\b/, "2.9.1"],
[/\bcommit\b/, "COMMIT"],
[/\b61c89e56f361fa860f18985137d6bf53f48c16ac\b/,
"61C89E56F361FA860F18985137D6BF53F48C16AC"],
[/\bAuthor:\b/, "AUTHOR:"],
[/\bCarl\b/, "CARL"],
[/\bWorth\b/, "WORTH"],
[/\b<cworth@cworth.org>\b/, "<CWORTH@CWORTH.ORG>"],
[/\bDate:\b/, "DATE:"],
[/\bMon\b/, "MON"],
[/\bOct\b/, "OCT"],
[/\b26\b/, "26"],
[/\b04:51:58\b/, "04:51:58"],
[/\b2009\b/, "2009"],
[/\b-0700\b/, "-0700"],
[/\bNEWS:\b/, "NEWS:"],
[/\bAdd\b/, "ADD"],
[/\bnotes\b/, "NOTES"],
[/\bfor\b/, "FOR"],
[/\bthe\b/, "THE"],
[/\b2.9.1\b/, "2.9.1"],
[/\brelease.\b/, "RELEASE."],
[/\bThanks\b/, "THANKS"],
[/\bto\b/, "TO"],
[/\beveryone\b/, "EVERYONE"],
[/\bfor\b/, "FOR"]]
If this were a hand-generated list of regexes, and not an automatically generated one, my question would still stand: which would you rather create or maintain?
For the skip-update, look at the built-in suppress_redundant_updates_trigger() function; see http://www.postgresql.org/docs/8.4/static/functions-trigger.html.
This is not necessarily a win - but it might well be in your case.
Or perhaps you can just add that implicit check as an explicit one?
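A sketch of both options with placeholder names (mytable, col); the trigger function is the built-in one from the linked documentation, and the WHERE variant simply repeats the replace expression, shown here for a single regex:
-- Option 1: a trigger that silently skips updates that change nothing
CREATE TRIGGER skip_redundant_updates
BEFORE UPDATE ON mytable
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();
-- Option 2: make the implicit check explicit in the statement itself
UPDATE mytable
SET col = regexp_replace(col, 'foo', 'bar')
WHERE col IS DISTINCT FROM regexp_replace(col, 'foo', 'bar');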