SQL Server logic to work out the rightmost position of a numeric field that has a sign OVER it designating positive or negative - sql

Example of the data in the CSV:
Column_header
000000025000{
000000007185E
The documentation I have says:
*The rightmost position of the numeric field will have a sign OVER it
designating positive or negative.
I don't understand how to write the logic to handle the symbol/number/letter so that I get the correct value.

I'd create a table (or view) with a static mapping of character to value, e.g.:
Symbol   Value
J        -1
A        +1
About the data rows themselves: it seems to me there is always a symbol at the end, so you can split the data into two columns, value and symbol...
I have no idea how the data are inserted, but it seems logically easy:
SELECT
    _YourValue_
    ,LEFT(_YourValue_, LEN(_YourValue_) - 1) AS Value
    ,RIGHT(_YourValue_, 1) AS Symbol
FROM _Whatever_
You can also cast to whatever data type is correct for those data.
Finally, you can join the tables and show/calculate whatever is needed.
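A minimal sketch of that approach, assuming a hypothetical source table raw_data(raw_value) and the mapping table above populated with your file's sign convention:
-- Sketch only: raw_data, raw_value and sign_map are assumed names; extend the mapping to your convention.
CREATE TABLE sign_map (symbol CHAR(1) PRIMARY KEY, sign INT);
INSERT INTO sign_map (symbol, sign) VALUES ('{', 1), ('E', 1), ('J', -1);

SELECT d.raw_value,
       CAST(LEFT(d.raw_value, LEN(d.raw_value) - 1) AS BIGINT) * m.sign AS signed_value
FROM raw_data d
JOIN sign_map m
  ON m.symbol = RIGHT(d.raw_value, 1);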

SELECT value,
       IF(value LIKE '%{%' OR value LIKE '%J%' OR value LIKE '%E%' OR value LIKE '%C%',
          CONCAT(SUBSTRING(value, 1, CHAR_LENGTH(value) - 1), '+'),
          CONCAT(SUBSTRING(value, 1, CHAR_LENGTH(value) - 1), '-')) AS new_value
FROM yourtablename
Output:
value            New Value
000000025000{    000000025000+
000000007185E    000000007185+
Add all the other characters that designate a positive value to the first parameter of the IF clause.
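The query above uses MySQL syntax (IF, CHAR_LENGTH, CONCAT); since the question is tagged SQL Server, a rough T-SQL sketch of the same idea follows. The IN (...) list only covers the two sample rows and is an assumption; extend it to the full set of positive sign characters in your file.
-- Sketch only: yourtablename/value are placeholders from the answer above.
SELECT value,
       LEFT(value, LEN(value) - 1)
         + CASE WHEN RIGHT(value, 1) IN ('{', 'E') THEN '+' ELSE '-' END AS new_value
FROM yourtablename;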

Related

SQL Decode format numbers only

I want to format amounts to salary format, e.g. 10000 becomes 10,000, so I use to_char(amount, '99,999,99')
SELECT SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0)) Salary,
SUM(DECODE(e.element_name,'Transportation Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Transportation,
SUM(DECODE(e.element_name,'GOSI Processing',to_char(v.screen_entry_value,'99,999,99'),0)) GOSI,
SUM(DECODE(e.element_name,'Housing Allowance',to_char(v.screen_entry_value,'99,999,99'),0)) Housing
FROM values v,
values_types vt,
elements e
WHERE vt.value_type = 'Amount'
This gives an "invalid number" error. Not all values are numbers until value_type equals 'Amount', but I guess DECODE checks all values anyway, although as far as I know execution begins with FROM, then WHERE, then SELECT. What is going wrong here?
You said you added decode(...), but it looks like you might have actually added sum(decode(...)).
You are converting your values to strings with to_char(v.screen_entry_value,'99,999,99'), so your decode() generates a string - the default 0 will be converted to '0' - giving you a value like '1,234,56'. Then you are aggregating those, so sum() has to implicitly convert those strings to numbers - and it is throwing the error when it tries to do that:
select to_number('1,234,56') from dual
will also get "ORA-01722: invalid number", unless you supply a similar format mask so it knows how to interpret it. You could do that, e.g.:
SUM(to_number(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0),'99,999,99'))
... but that perhaps makes it more obvious that something is strange, and even if you did, you would end up with a number, not a formatted string.
So instead of doing:
SUM(DECODE(e.element_name,'Basic Salary',to_char(v.screen_entry_value,'99,999,99'),0))
you should format the result after aggregating:
to_char(SUM(DECODE(e.element_name,'Basic Salary',v.screen_entry_value,0)),'99,999,99')
There is a fiddle with dummy tables, data and joins demonstrating this.
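Putting that together, a sketch of the corrected query; the join conditions are assumptions, since the original question does not show them:
-- Sketch only: the ON clauses are assumed join columns.
SELECT to_char(SUM(DECODE(e.element_name, 'Basic Salary', v.screen_entry_value, 0)), '99,999,99') AS salary,
       to_char(SUM(DECODE(e.element_name, 'Transportation Allowance', v.screen_entry_value, 0)), '99,999,99') AS transportation,
       to_char(SUM(DECODE(e.element_name, 'GOSI Processing', v.screen_entry_value, 0)), '99,999,99') AS gosi,
       to_char(SUM(DECODE(e.element_name, 'Housing Allowance', v.screen_entry_value, 0)), '99,999,99') AS housing
FROM   values v
JOIN   values_types vt ON vt.value_id  = v.value_id
JOIN   elements e      ON e.element_id = v.element_id
WHERE  vt.value_type = 'Amount';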

Adding column to table based on whether another column = a specific string

I want to add a column called "Sweep" that contains bools based on whether the "Result" was a sweep or not. So I want the value in the "Sweep" column to be True if the "Result" is '4-0' or '0-4' and False if it isn't.
This is a part of the table:
I tried this:
ALTER TABLE "NBA_finals_1950-2018"
ADD "Sweep" BOOL;
UPDATE "NBA_finals_1950-2018"
SET "Sweep" = ("Result" = '4-0' OR "Result" = '0-4');
But for some reason, when I run this code...:
SELECT *
FROM "NBA_finals_1950-2018"
ORDER BY "Year";
...only one of the rows (last row) has the value True even though there are other rows where the result is a sweep ('4-0' or '0-4') as shown in the picture below.
I don't know why this is happening but I guess there is something wrong with the UPDATE...SET code. Please help.
Thanks in advance.
NOTE: I am using PostgreSQL 13
This would occur if the strings are not really what they look like -- this is often due to spaces at the beginning or end. Or perhaps to hyphens being different, or other look-alike characters.
You just need to find the right pattern. Do so with a SELECT. This returns no rows:
select *
from "NBA_finals_1950-2018"
where "Result" in ('4-0', '0-4');
You can try:
where "Result" like '%0-4%' or
"Result" like '%4-0%'
But, this should do what you want:
where "Result" like '%4%' and
"Result" like '%0%'
because the numbers are all single digits.
You can incorporate this into the update statement.
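For example, a minimal sketch of that UPDATE with the looser pattern:
-- Sketch only: uses the looser LIKE pattern suggested above.
UPDATE "NBA_finals_1950-2018"
SET "Sweep" = ("Result" LIKE '%4%' AND "Result" LIKE '%0%');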
Note: double quotes are a bad idea. I would recommend creating tables and columns without escaping the names.

I am trying to compare two numbers using a SQL query, e.g. 123.45 and 12345 are the same if I ignore the decimal point, so they should come out in the output

I am trying to compare two strings using a SQL query. For example, in table A I have A123.45 and in table B I have A12345. These two strings are the same if I ignore the decimal point, so as output I want table A's value.
First, to avoid the XY problem, it's a little unclear to me why you'd want to do this in the first place - I'm not sure exactly why 123.45 should be equal to 12345. Definitely something to think about.
With that said, if you insist, you can do something like the following:
SELECT CASE WHEN REPLACE(CAST(floatingPointNumber AS varchar(50)), '.', '') = CAST(yourInteger AS varchar(50))
            THEN 1 ELSE 0 END
FROM YourTable
Obviously, floatingPointNumber is a float and yourInteger is an integer.
I'm not sure what platform you're using since you didn't tag it but I wrote/tested this in SQL Server. You can do something similar in Oracle/MySQL if that's what you're using.
Basically, what this is doing is casting both the floating point number and the integer to strings, removing the decimal from the floating point number, and comparing them. If they're equal, it returns 1; otherwise it returns 0.
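Applied to the two-table setup in the question, a sketch might look like this; TableA, TableB and the val columns are assumed names, since the real ones were not given:
-- Sketch only: table and column names are assumptions.
SELECT a.val
FROM TableA a
JOIN TableB b
  ON REPLACE(a.val, '.', '') = b.val;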

Automatically changing the data type with "Case When" in a Jasper query

I have data whose type is float. I would like to show the values as numeric/integer, but for just one case they should show as a float. At first I thought a CASE WHEN in the query would solve it, but it didn't.
I put this code:
CASE WHEN SUBSTRING (c.kode_hs,1,2) = '71' THEN CAST(c.brutto AS float) ELSE c.brutto END AS brutto,
CASE WHEN SUBSTRING (c.kode_hs,1,2) = '71' THEN CAST(c.netto AS float) ELSE c.netto END AS netto,
But it didn't turn out to be a float type.
I tried to modify the Jasper settings, but that only works for one data type.
There is another alternative I took: I changed the type to number, so that the CASE WHEN SUBSTRING(c.kode_hs,1,2) = '71' case worked out. But, unfortunately, when the value was large (had many characters) it was shown as a scientific number, e.g. 2E9, which is not acceptable in the report view. Is there any other solution for me? Thanks anyway
You could select the data from the database as it is, as a float, and do the trick in Jasper.
Set the type of the c.kode_hs field in JasperReports as Float, and the ExpressionClass of the text field in which you want to show the value as String. Then set the Text Field Expression as
$F{kode_hs}.toString().substring(0,2).equals("71")?new java.text.DecimalFormat("#0.00").format($F{brutto}):new java.text.DecimalFormat("##").format($F{brutto})
assuming $F{kode_hs} and $F{brutto} are the fields which hold the respective values.

SQL Server comma delimiter for money datatype

I import Excel files via SSIS to SQL-Server. I have a temp table to get everything in nvarchar. For four columns I then cast the string to money type and put in my target table.
In my temp table, one of those four columns (let's call it X) has a comma as the delimiter; the rest have a dot. Don't ask me why, I have everything in my SSIS set up the same.
In my Excel the delimiter is a comma as well.
So now in my target table I have everything in comma values but the X column now moves the comma two places to the right and looks like this:
537013,00 instead of 5370,13 which was the original cell value in the temp and excel column.
I was thinking this is a culture setup problem but then again it should or shouldn't work on all of these columns.
a) Why do I receive dot values in my temp table when my Excel displays comma?
b) how can I fix this? Can I replace the "," in the temp table with a dot?
UPDATE
I think I found the reason but not the solution:
In this X column in excel the first three cells are empty - the other three columns all start with 0. If I fill these three cells of X with 0s then I also get the dot in my temp table and the right value in my target table. But of course I have to use the Excel file as is.
Any ideas on that?
Try the code below. It checks whether the string value being converted to money is of numeric data type. If the string value is of numeric data type, then convert it to money data type, otherwise, return a NULL value. And it also replaces the decimal symbol and the digit grouping symbol of the string value to match the expected decimal symbol and digit grouping symbol of SQL Server.
DECLARE @MoneyString VARCHAR(20)
SET @MoneyString = '$ 1.000,00'
SET @MoneyString = REPLACE(REPLACE(@MoneyString, '.', ''), ',', '.')
SELECT CAST(CASE WHEN ISNUMERIC(@MoneyString) = 1
                 THEN @MoneyString
                 ELSE NULL END AS MONEY)
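Applied to the staging scenario in the question, a sketch might look like this; the temp-table, target-table and column names are assumptions, since they were not given:
-- Sketch only: #ImportTemp, TargetTable and ColumnX are assumed names.
INSERT INTO TargetTable (ColumnX)
SELECT CAST(CASE WHEN ISNUMERIC(REPLACE(REPLACE(ColumnX, '.', ''), ',', '.')) = 1
                 THEN REPLACE(REPLACE(ColumnX, '.', ''), ',', '.')
                 ELSE NULL END AS MONEY)
FROM #ImportTemp;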
As for the reason why you get a comma instead of a dot, I have no clue. My first guess would be cultural settings, but you already checked that. What about googling, did you get any results?
First, the "separator" in SQL is the decimal point; it's only Excel that is using the comma. You can change the formatting in Excel: format the Excel column as money and specify a decimal point as the separator. Then, in the SSIS import wizard, split out the transformation of the column so it imports to a money data type. It's a culture thing, but "delimiter" tends to be used in the context of signifying the end of one column and the start of the next (as in CSV).
HTH
Well, that's a longstanding problem with Excel. It uses the first 30 or so rows to infer the data type, which can lead to endless issues. I think your solution has to be to process everything as a string in the way Yaroslav suggested, or to supply an Excel template with predefined, formatted data-type columns into which the values are then inserted. It's a pain.