Excel-connected dbf database with null/empty values, SQL query not working

An old dbf file has been successfully ODBC-connected to Excel (2010). The following SQL query (through Microsoft Query) works as expected, giving a single result.
Microsoft Query (Excel 2010) code:
SELECT c.NDELNUM AS deliverynote, SUM(c.NQTY*a.CPREDEF2) AS Total
FROM DelCusL AS c, Article AS a
WHERE c.CREF = a.CREF AND ((c.NDELNUM=?))
GROUP BY c.NDELNUM
I am interested in also getting a single value, but the following returns NULL:
SELECT c.NDELNUM AS deliverynote, SUM(c.NQTY*(a.CPREDEF2+c.CPROP2)) AS Total
FROM DelCusL AS c, Article AS a
WHERE c.CREF = a.CREF AND ((c.NDELNUM=?))
GROUP BY c.NDELNUM
I guess this does not work because empty values are encountered in either a.CPREDEF2 or c.CPROP2. When an empty value is encountered, I'd like it to be treated as 0. I have tried the cast/value functions available in Microsoft Query, to little avail.
Any idea on converting EMPTY values to 0s so that the operation succeeds?
NQTY is a number and is always non-empty. CPREDEF2 is treated as VARCHAR and can be EMPTY; when EMPTY, it seems to be treated as NULL, judging from other tests. CPROP2 is an alternative value to CPREDEF2 and can also be EMPTY; when filled, it can be understood as a number (like CPREDEF2). Both CPREDEF2 and CPROP2 are treated as VARCHAR rather than numeric values, but each on its own is correctly multiplied and aggregated as a number. (It fails when I try to add them together before the aggregate function.)

Try to CAST before you add the values:
... * (CAST(a.CPREDEF2 AS DECIMAL(10,5))+CAST(c.CPROP2 AS DECIMAL(10,5))) ...
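If CAST alone still returns NULL because of the empty values, guarding each operand first may help. A minimal sketch, assuming the Visual FoxPro ODBC driver is in use (it accepts IIF(), ISNULL(), EMPTY(), and VAL() inside SQL; a different dbf driver may offer different functions):
SELECT c.NDELNUM AS deliverynote,
       SUM(c.NQTY * (IIF(ISNULL(a.CPREDEF2) OR EMPTY(a.CPREDEF2), 0, VAL(a.CPREDEF2))
                   + IIF(ISNULL(c.CPROP2) OR EMPTY(c.CPROP2), 0, VAL(c.CPROP2)))) AS Total
FROM DelCusL AS c, Article AS a
WHERE c.CREF = a.CREF AND ((c.NDELNUM=?))
GROUP BY c.NDELNUM
VAL() converts a character value to a number and yields 0 for an empty string, so the IIF() guards mainly cover the NULL case.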

Related

SQL Decode format numbers only

I want to format amounts in a salary format, e.g. 10000 becomes 10,000, so I use to_char(amount, '99,999,99'):
SELECT SUM(DECODE(e.element_name, 'Basic Salary', to_char(v.screen_entry_value, '99,999,99'), 0)) Salary,
       SUM(DECODE(e.element_name, 'Transportation Allowance', to_char(v.screen_entry_value, '99,999,99'), 0)) Transportation,
       SUM(DECODE(e.element_name, 'GOSI Processing', to_char(v.screen_entry_value, '99,999,99'), 0)) GOSI,
       SUM(DECODE(e.element_name, 'Housing Allowance', to_char(v.screen_entry_value, '99,999,99'), 0)) Housing
FROM   values v,
       values_types vt,
       elements e
WHERE  vt.value_type = 'Amount'
This gives an "invalid number" error, because not all values are numbers until value_type equals 'Amount', but I guess DECODE checks all values anyway, although as far as I know execution proceeds FROM, then WHERE, then SELECT. What's going wrong here?
You said you added decode(...), but it looks like you might have actually added sum(decode(...)).
You are converting your values to strings with to_char(v.screen_entry_value,'99,999,99'), so your decode() generates a string - the default 0 will be converted to '0' - giving you a value like '1,234,56'. Then you are aggregating those, so sum() has to implicitly convert those strings to numbers - and it is throwing the error when it tries to do that:
select to_number('1,234,56') from dual
will also get "ORA-01722: invalid number", unless you supply a similar format mask so it knows how to interpret it. You could do that, e.g.:
SUM(to_number(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0),'99,999,99'))
... but then it is perhaps more obvious that something strange is going on; and even if you did that, you would end up with a number, not a formatted string.
So instead of doing:
SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0))
you should format the result after aggregating:
to_char(SUM(DECODE(e.element_name,'Basic Salary',v.screen_entry_value,0)),'99,999,99')
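Putting the pieces together, the corrected query might look roughly like this (the join columns are assumptions, since the original query does not show its join conditions):
SELECT to_char(SUM(DECODE(e.element_name, 'Basic Salary', v.screen_entry_value, 0)), '99,999,99') AS salary,
       to_char(SUM(DECODE(e.element_name, 'Transportation Allowance', v.screen_entry_value, 0)), '99,999,99') AS transportation,
       to_char(SUM(DECODE(e.element_name, 'GOSI Processing', v.screen_entry_value, 0)), '99,999,99') AS gosi,
       to_char(SUM(DECODE(e.element_name, 'Housing Allowance', v.screen_entry_value, 0)), '99,999,99') AS housing
FROM   values v
JOIN   values_types vt ON vt.value_type_id = v.value_type_id -- hypothetical join column
JOIN   elements e ON e.element_id = v.element_id             -- hypothetical join column
WHERE  vt.value_type = 'Amount'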
Fiddle with dummy tables, data, and joins.

Query to ignore rows which have non hex values within field

Initial situation
I have a relatively large table (approx. 700,000 records) where an nvarchar field "MediaID" mostly contains media IDs in proper hexadecimal notation (as they should be).
Within my "sequential" query (each query depends on the output of the query before, this is all in pure T-SQL) I have to convert these hexadecimal values into decimal bigint values in order to do further calculations and filtering on these calculated values for the subsequent queries.
--> So far, no problem. The "sequential" query works fine.
Problem
Unfortunately, some of these media IDs contain non-hex characters, most probably because of typing errors by the people who added them, or import errors from the previous business system.
Because of these non-hex characters, the whole query fails (of course), because the conversion hits an error.
For my current purpose, such rows must be skipped/ignored, as they are clearly wrong and cannot be used (no media / data carriers with non-hex character IDs are in use with the current business system).
Manually editing the data is not an option, as there are too many errors and it is not clear what the data should be replaced with.
Challenge
To create a query which only returns records which have valid hex values within the media ID field.
(Unfortunately, my SQL skills are not enough to create the above query. Your help is highly appreciated.)
The relevant section of the larger query looks like this (xxxx is where your help comes in :-))
select
    pureMediaID
    , mediaID
    , CUSTOMERID
    , CONTRACT_CUSTOMERID
from
(
    select concat('0x', Replace(Ltrim(Replace(mediaID, '0', ' ')), ' ', '0')) AS pureMediaID
    --, CUSTOMERID
    , *
    from M_T_CONTRACT_CUSTOMERS
    where mediaID is not null
    and mediaID like '0%'
    and xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
) as inner1
EDIT: As requested, here are some good and some bad data:
Good:
4335463357
4335459809
1426427996
4335463509
4335515039
4335465134
4427370396
4335415661
4427369036
4335419089
004BB03433
004e7cf9c6
00BD23133
00EE13D8C1
00CCB5522C
00C46522C
00dbbe3433
Bad:
4564589+
AB6B8BFC.8
7B498DFCnm
DB218DFChb
d<tgfh8CFC
CB9E8AFCzj
B458DFCjhl
rytzju8DFC
BFCtdsjshj
DB9888FCgf
9BC08CFCyx
EB198DFCzj
4B628CFChj
7B2B8DFCgg
After I upgraded the compatibility level of the SQL instance to SQL 2016 (it was below 2012 before), I could use try_convert with the same syntax as the original convert function, as donPablo pointed out. With that, the query runs fully through, and every MediaID which is not a correct hex value gets nicely converted into a NULL value - really, really nice.
Exactly what I needed.
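For reference, the try_convert version of the hex-to-bigint step might look something like this (the original conversion isn't shown in the question, so the intermediate varbinary step is an assumption):
SELECT TRY_CONVERT(bigint, TRY_CONVERT(varbinary(8), pureMediaID, 1)) AS mediaIdDecimal
FROM ...
Style 1 expects the '0x' prefix that pureMediaID already carries, and rows with invalid hex simply come back as NULL instead of aborting the whole query.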
Unfortunately, the solution of Alice... didn't work out for me, as it was (strangely) also returning records which had the "+" character within them.
Edit: The added comment of Alice..., where you create a calculated field like this:
CASE WHEN "KEY" LIKE '%[^0-9A-F]%' THEN 0 ELSE 1 END AS xyz
and then filter in the next query like this:
where xyz = 1
also works with SQL instances with compatibility level < SQL 2012.
A great addition for people who still have to work with older SQL instances.
An option (although not ideal in terms of performance) is to check the characters in the MediaID through a CASE statement and a LIKE pattern (T-SQL's LIKE supports character classes, though not full regular expressions).
Hexadecimal values cannot contain characters other than A-F and the digits 0 to 9, so the row is valid only when no character falls outside that set:
CASE WHEN MediaID NOT LIKE '%[^0-9A-F]%' THEN 1 ELSE 0 END
I would recommend first writing a function that evaluates MediaID and checks whether it is hexadecimal, and then running the query for the conversion.
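A minimal sketch of such a function (the name and parameter length are assumptions):
CREATE FUNCTION dbo.fnIsHex (@value nvarchar(50))
RETURNS bit
AS
BEGIN
    -- 1 when every character is 0-9 or A-F (a-f listed for case-sensitive collations), else 0
    RETURN CASE WHEN @value NOT LIKE '%[^0-9A-Fa-f]%' THEN 1 ELSE 0 END;
END;
The query above could then filter with WHERE dbo.fnIsHex(mediaID) = 1.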

Why do I get different results depending on the function I use? (SQL Server)

I've been tasked with creating a report for my company. The report is generated from the results returned by the Stored Procedure spGenerateReport, which has multiple filters.
Inside the SP, this is how the filter is expected to work:
SELECT * FROM MyTable WHERE column1 IN (
'filters', 'for', 'this', 'report'
)
Entering the code above yields ~30000 rows in 9s. However, I want to be able to change my SP's filter by passing it a single argument (since I may use 1 or 2 or n filters), like so:
spGenerateReport 'Filters,for,this,report'
For this I have the user-defined function fnSplitString (yes, I do know that there is a STRING_SPLIT function, but I can't use it due to the lower compatibility level of my database), which splits a single string into a table, like so:
SELECT splitData FROM fnSplitString('Filters,for,this,report')
Returns:
splitData
------
Filters
for
this
report
Thus the final code in my SP is:
SELECT * FROM MyTable WHERE column1 IN (
SELECT * FROM fnSplitString('Filters,for,this,report')
)
However, this instead yields ~10000 rows in 60s. The time taken to complete the SP is odd but isn't too much of a problem; two thirds of my rows disappearing into the void, however, certainly is. The results only contain rows from the first couple of filters (for example, 'Filters' and 'for'); if I change the order of the arguments (e.g. fnSplitString('report,for,Filters,this')), I get a different number of rows, and only from the filters 'report', 'for', and 'Filters'! I don't understand why using the function returns different results than using the literal strings. Is there some inside gimmick that I'm not aware of?
PS - I'm sorry in advance for being bad at explaining myself, and for any grammar mistakes
You should definitely be getting the same results with both techniques. Something is wrong.
You haven't posted the fnSplitString code, but I suspect fnSplitString is not outputting the last string in the list, or maybe the last string in the list is being truncated before it reaches fnSplitString, so that no matches are found.
E.g. if the parameter going into your spGenerateReport stored procedure is varchar(20), then what reaches the function is 'Filters,for,this,rep', with the last bit truncated.
SSRS, for example, will truncate strings being passed into an SP instead of warning you with an error message.
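As a quick check, make sure the procedure's parameter is declared wide enough. A minimal sketch (the parameter name is hypothetical; the rest follows the question):
ALTER PROCEDURE spGenerateReport
    @filterList varchar(max) -- wide enough that 'Filters,for,this,report' is never silently cut short
AS
BEGIN
    SELECT * FROM MyTable
    WHERE column1 IN (SELECT splitData FROM dbo.fnSplitString(@filterList));
END;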

Coldfusion Query of Queries with Empty Strings

The query I start out with has 40,000 empty rows, which stems from a problem with the original spreadsheet it was taken from.
Using CF16 server
I would like to do a Query of Queries on a variably named 'key column'.
In my query:
var keyColumn = "Permit No.";
var newQuery = "select * from source where (cast('#keyColumn#' as varchar) <> '')";
Note: the casting comes from this suggestion
I still get all those empty fields in there.
But when I use "City" as the keyColumn, it works. How do the values in both those columns differ when they both say [empty string] on the query dump?
Is it a problem with column names? What kind of data are in those cells?
where ( cast('Permit No.' as varchar) <> '' )
The problem is the SQL, not the values. By enclosing the column name in quotes, you are actually comparing the literal string "P-e-r-m-i-t N-o-.", not the values inside that column. Since the string "Permit No." can never equal an empty string, the comparison always returns true. That is why the resulting query still includes all rows.
Unless it was fixed in ColdFusion 2016, QoQs do not support column names containing invalid characters such as spaces. One workaround is to use the "columnNames" attribute to specify valid column names when reading the spreadsheet. Failing that, another option is to take advantage of the fact that query columns are arrays and duplicate the data under a valid column name: queryAddColumn(yourQuery, "PermitNo", yourQuery["Permit No."]). (The latter option is less ideal because it may require copying the underlying data internally.)
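A sketch of that latter workaround applied to the question's code (variable names follow the question; how the SQL string is ultimately executed is left as in the original):
// copy the data under a QoQ-safe column name, then filter on the copy
queryAddColumn(source, "PermitNo", source["Permit No."]);
var keyColumn = "PermitNo";
var newQuery = "select * from source where (cast(#keyColumn# as varchar) <> '')";
Note that #keyColumn# is no longer wrapped in single quotes, so the generated SQL compares the column's values rather than a literal string.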

SQL Select to keep out fields that are NULL

I am trying to connect a FileMaker DB to a Firebird SQL DB both ways: import into FM and export back to the Firebird DB.
So far it works using the MBS plug-in, but FM 13 Pro cannot handle NULL.
That means that, for example, timestamp fields that are empty (NULL) produce a "0" value, which as a time is something like 01.01.1889 00:00:00.
So my idea was to simply ignore fields containing NULL.
But here my poor knowledge stops.
First I thought I could do this with WHERE, but that ignores whole record sets:
SELECT * FROM TABLE WHERE FIELD IS NOT NULL
Also I tried to filter it later on like this:
If (IsEmpty (MBS("SQL.GetFieldAsDateTime"; $command; "FIELD") ) = 0 ; MBS("SQL.GetFieldAsDateTime"; $command; "FIELD"))
With no result either.
This is a direct answer to halfbit's suggestion, which is correct, but not for this SQL dialect. To provide a replacement value when a field is NULL in a query, you need to use COALESCE(x, y): if x is NULL, y is used; if y is also NULL, the result is NULL. That's why I commonly use it like COALESCE(table.field, ''), so that a constant is always output if table.field happens to be NULL.
select COALESCE(null,'Hello') as stackoverflow from rdb$database
You can use COALESCE() with more than two arguments; I just used two for conciseness.
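Applied to the timestamp problem above, a minimal sketch (FIELD and TABLE are the asker's placeholders; the substitute date is an arbitrary assumption):
SELECT COALESCE(FIELD, CAST('1900-01-01 00:00:00' AS TIMESTAMP)) AS FIELD
FROM TABLE
This keeps every record while turning NULL timestamps into a recognizable constant that FileMaker can filter on.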
I don't know this particular SQL dialect, but
SELECT field1, field2, VALUE(field, 0), ... FROM TABLE
should help you:
VALUE() returns the first argument, i.e. your field, if it is NOT NULL, and the second argument if it is.