I'm not sure what to write in the subject line. The point is this: I want to average Speed01, Speed02, Speed03 and Speed04.
I want to create a new column that contains this average. I tried the following, but it did not work:
SELECT
    Table01.Test_No,
    Table01.Speed01,
    Table01.Speed02,
    Table01.Speed03,
    Table01.Speed04,
    AVG(Table01.Speed01, Table01.Speed02, Table01.Speed03, Table01.Speed04) AS "Average"
FROM
    Table01
The Speed values are not always all present: sometimes Speed02 has no number while the others do, sometimes Speed04 is missing, and sometimes only one column (for example, only Speed01) has data. It depends on whether the sensor managed to catch the speed of the test material.
It would be a big help if you could find a solution. I'm a newbie here.
THANK YOU ^^
AVG is a SQL aggregate function: it averages over rows, not across columns, so it is not applicable here. So simply do the math. Average is sum divided by count:
(SPEED01 + SPEED02 + SPEED03 +SPEED04)/4
To deal with missing values, use NULLIF or COALESCE:
(COALESCE(SPEED01, 0) + COALESCE(SPEED02, 0) + COALESCE(SPEED03, 0) + COALESCE(SPEED04, 0))
That leaves the denominator. You need to add 1 for every non-NULL value. For example:
(COALESCE(SPEED01/SPEED01,0) + COALESCE(SPEED02/SPEED02,0) + ...)
You can also use CASE, depending on the supported SQL dialect, to avoid the possible divide by zero when a speed is 0:
CASE WHEN SPEED01 IS NULL THEN 0 ELSE 1 END
OR you can normalize the data, extract all SPEEDs into a 1:M relation and use the AVG aggregate, avoiding all these issues. Not to mention the possibility to add a 5th measurement, then a 6th and so on and so forth!
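Putting the pieces together, here is a minimal sketch of the NULL-aware average, run through Python's sqlite3 for illustration (table and column names are taken from the question; SQLite accepts the same COALESCE/CASE/NULLIF constructs as ANSI SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Table01
                (Test_No INTEGER, Speed01 REAL, Speed02 REAL,
                 Speed03 REAL, Speed04 REAL)""")
conn.executemany("INSERT INTO Table01 VALUES (?, ?, ?, ?, ?)",
                 [(1, 10.0, 20.0, 30.0, 40.0),   # all four readings present
                  (2, 10.0, None, 30.0, None),   # two readings missing
                  (3, None, None, None, 50.0)])  # only Speed04 present

# Sum of the non-NULL speeds divided by the count of non-NULL speeds;
# NULLIF turns an all-NULL row into NULL instead of dividing by zero.
rows = conn.execute("""
    SELECT Test_No,
           (COALESCE(Speed01, 0) + COALESCE(Speed02, 0)
          + COALESCE(Speed03, 0) + COALESCE(Speed04, 0))
           / NULLIF((CASE WHEN Speed01 IS NULL THEN 0 ELSE 1 END)
                  + (CASE WHEN Speed02 IS NULL THEN 0 ELSE 1 END)
                  + (CASE WHEN Speed03 IS NULL THEN 0 ELSE 1 END)
                  + (CASE WHEN Speed04 IS NULL THEN 0 ELSE 1 END), 0) AS Average
    FROM Table01
    ORDER BY Test_No
""").fetchall()
print(rows)
```

Note the NULLIF(..., 0) in the denominator: it makes the average NULL rather than raising a divide-by-zero error when all four speeds are missing.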
Just add the columns and divide them by 4. To deal with the "missing" values, use COALESCE to treat NULL values as zero:
SELECT Test_No,
(coalesce(Speed01,0) + coalesce(Speed02,0) + coalesce(Speed03,0) + coalesce(Speed04,0)) / 4 as "Average"
FROM Table01;
You didn't mention your DBMS (Postgres, Oracle, ...), but the above is ANSI (standard) SQL and should run on nearly every DBMS.
As I understood your question, I assumed that Table01.Speed01, Table01.Speed03 and Table01.Speed04 are nullable and of type int, whereas Table01.Speed02 is nullable and of type nvarchar:
SELECT
Table01.Test_No,
(
ISNULL(Table01.Speed01, 0) +
CASE ISNUMERIC(Table01.Speed02) WHEN 0 THEN 0 ELSE CAST(Table01.Speed02 AS int) END +
ISNULL(Table01.Speed03, 0) +
ISNULL(Table01.Speed04, 0)
)/4 AS [AVG]
FROM Table01
I have a table with IDs stored as text; some contain letters, most do not, and each ID has associated points.
I would like to add points to a given range of IDs that ARE integers, giving them all the same number of points: 535.
I have found a way to SELECT the data I need using a subquery, but updating it appears to be another matter. I would love to get only the CAST-able rows without a subquery; however, since the cast errors out when it touches something that isn't a number, that doesn't seem to be possible.
select *
from (select idstring, amount
      from members
      where idstring ~ '^[0-9]+$') x
WHERE CAST(x.idstring AS BIGINT) >= 10137377001
  AND CAST(x.idstring AS BIGINT) <= 10137377100
What am I doing inefficiently in the above, and how can I update the records that I want to?
In a perfect world my statement would be as simple as:
UPDATE members SET amount = 535
WHERE idstring >= 10137377001
AND idstring <= 10137377100
But since idstring is stored as text and some entries contain letters, it complicates things significantly. TRY_CAST would be perfect here, but there is no direct equivalent in Postgres.
An example of the ids in the table might be
A52556B
36663256
6363632900B
3000525
etc.
You can use TO_NUMBER together with your regular expression predicate, like so:
UPDATE members
SET amount = 535
WHERE idstring ~ '^[0-9]+$'
AND to_number(idstring, '999999999999') BETWEEN 10137377001 AND 10137377100
Working example on dbfiddle
You can wrap the typecast and integer comparison in a CASE expression.
UPDATE members
SET amount = COALESCE(amount, 0) + 535
WHERE CASE WHEN idstring ~ '^[0-9]+$'
THEN idstring::BIGINT BETWEEN 10137377001 AND 10137377100
ELSE FALSE END;
Here I've assumed you might want to add 535 rather than set the amount explicitly to 535, based on what you said above; if my assumption is incorrect, then SET amount = 535 is just fine.
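A minimal sketch of the CASE-guarded UPDATE, run through Python's sqlite3 for illustration (SQLite has no built-in ~ operator, so a REGEXP function is registered by hand; in Postgres the queries above work as written, and the sample data is made up):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite translates "X REGEXP Y" into a call to the user function regexp(Y, X)
conn.create_function("regexp", 2,
                     lambda pat, s: re.search(pat, s) is not None)
conn.execute("CREATE TABLE members (idstring TEXT, amount INTEGER)")
conn.executemany("INSERT INTO members VALUES (?, ?)",
                 [("A52556B", 0), ("10137377050", 0),
                  ("10137377999", 0), ("6363632900B", 0)])

# The CASE guards the cast: non-numeric ids never reach the BETWEEN comparison
conn.execute("""
    UPDATE members
    SET amount = 535
    WHERE CASE WHEN idstring REGEXP '^[0-9]+$'
               THEN CAST(idstring AS INTEGER) BETWEEN 10137377001 AND 10137377100
               ELSE 0 END
""")
rows = conn.execute(
    "SELECT idstring, amount FROM members ORDER BY idstring").fetchall()
print(rows)
```

Only the purely numeric ID inside the range gets the 535 points; IDs containing letters, and numeric IDs outside the range, are left untouched.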
In a Hive table, how can I add the '-' sign to a field, but only for random records? If I use the syntax below, it changes all the records in the field to negative, but I want to change only random records to negative.
This is the syntax I used, which changed all the records to negative:
CAST(CAST(-1 AS DECIMAL(1,0)) AS DECIMAL(19,2))
*CAST(regexp_replace(regexp_replace(TRIM(column name),'\\-',''),'-','') as decimal(19,2)),
If you want to change random values to negative, why not use a case expression?
select (case when rand() < 0.5 then - column_name else column_name end)
Despite the string manipulation in your query, this assumes that the column is a number of some sort, because negating strings doesn't make much sense.
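The same idea can be sketched outside of Hive, assuming a numeric column: flip the sign of each value with probability p, exactly as the CASE WHEN rand() < 0.5 expression does per row. The function name and seed parameter here are made up for the sketch:

```python
import random

def randomly_negate(values, p=0.5, seed=None):
    """Flip the sign of each value with probability p, the same effect as
    Hive's CASE WHEN rand() < 0.5 THEN -col ELSE col END applied per row."""
    rng = random.Random(seed)  # seed only to make runs reproducible
    return [-v if rng.random() < p else v for v in values]

amounts = [12.50, 3.99, 100.00, 7.25]
print(randomly_negate(amounts, seed=42))  # magnitudes unchanged, signs random
```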
This is my first question here.
I am building an SQL query in which I need to verify that the version of object B is always lower than or equal to the version of object A. This is a link table; here is an example:
The query is :
SELECT *
FROM TABLE
WHERE B_VERSION <= A_VERSION
As you can see, it works for the first two rows, but not the third: AA0 is detected as smaller than H08 when it shouldn't be (after Z99 the next version number is AA0, so the <= operator no longer works).
So I would like to parse the versions and compare how many letters each contains, and only use the <= operator when both versions have the same number of letters.
I don't know how to do that in an SQL query, though, and I didn't find anything useful on Google either. Do you have a solution?
Thanks in advance
The key for solving this problem is the function PATINDEX. You can find more information here.
This query finds the first occurrence of a digit in each version and uses that position to split the value in two parts. The first, alphabetic part is left-padded with spaces, while the second, numeric part is left-padded with zeros ('0'). The same treatment is applied to both A_VERSION and B_VERSION.
Notice that in this example each part is assumed to be at most 5 characters, so this will cover versions ranging from A0 to ZZZZZ99999. Feel free to adjust as needed.
SELECT *
FROM TABLE
WHERE RIGHT(SPACE(5)
            + SUBSTRING(B_VERSION,
                        1,
                        PATINDEX('%[0-9]%', B_VERSION) - 1), 5)
      + RIGHT(REPLICATE('0', 5)
              + SUBSTRING(B_VERSION,
                          PATINDEX('%[0-9]%', B_VERSION),
                          LEN(B_VERSION)), 5)
   <= RIGHT(SPACE(5)
            + SUBSTRING(A_VERSION,
                        1,
                        PATINDEX('%[0-9]%', A_VERSION) - 1), 5)
      + RIGHT(REPLICATE('0', 5)
              + SUBSTRING(A_VERSION,
                          PATINDEX('%[0-9]%', A_VERSION),
                          LEN(A_VERSION)), 5)
If you are going to do this operation in many places, you might consider creating a function for this operation.
Hope this helps.
Many thanks! It helped a lot. However, I am using SQL Developer and cannot use PATINDEX there, so I found the equivalent, REGEXP_INSTR, which works very similarly.
I used this algorithm, which keeps only the rows where VERSION_B has fewer letters than VERSION_A, or where both have the same number of letters and VERSION_B <= VERSION_A:
WHERE
(REGEXP_INSTR(VERSION_B, '[0-9]') < REGEXP_INSTR(VERSION_A, '[0-9]')) OR
(REGEXP_INSTR(VERSION_B, '[0-9]') = REGEXP_INSTR(VERSION_A, '[0-9]') AND VERSION_B <= VERSION_A)
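The normalization both answers rely on can be sketched in a few lines of Python (the 5-character part width is the same assumption as in the answer above):

```python
def version_key(v: str) -> str:
    """Normalize a version like 'H08' or 'AA0' into a fixed-width sort key:
    the letter prefix left-padded with spaces, the digits left-padded with
    zeros, mirroring the PATINDEX/REGEXP_INSTR approach (each part is
    assumed to be at most 5 characters)."""
    i = 0
    while i < len(v) and not v[i].isdigit():
        i += 1  # find the first digit, like PATINDEX('%[0-9]%', v)
    return v[:i].rjust(5) + v[i:].rjust(5, "0")

print(version_key("H08") < version_key("AA0"))  # True: H08 sorts before AA0
```

A shorter letter prefix pads to more leading spaces, and a space sorts before any letter, so one-letter versions always come before two-letter versions, as required.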
My current subquery takes 26 seconds to run. I'm using it as part of another query, which altogether takes 2 minutes 25 seconds to return one month of data.
Is there a faster query for this? ProcedureID contains both alphabetic and numeric characters; I only want to pull the ProcedureIDs beginning with a numeric character.
SELECT DISTINCT
ProcedureID
FROM Transactions
WHERE Substring(ProcedureID,1,1) NOT LIKE '[A-z]%'
Run the query with the execution plan turned on; that should identify any indexes that may help. In addition, if you add a new one-character column populated with the first character of ProcedureID, and index it, you should get better performance querying on that column than with the substring() predicate you have been using.
First, the issue is unlikely to be the substring(). The performance hog is the select distinct.
You can simplify the logic. Something like:
SELECT DISTINCT ProcedureID
FROM Transactions
WHERE ProcedureID < 'A' or ProcedureID >= '{' -- 'z' + 1
or:
WHERE ProcedureId >= '0' AND ProcedureId < ':' -- '9' + 1
The magic characters '{' and ':' are simply the characters that follow 'z' and '9' in ASCII. They could be replaced by an expression such as CHR(ASCII('9') + 1) if you prefer.
However, this will probably have minimal effect on performance. An index on Transactions(ProcedureID) would help, because it covers the query/subquery.
If you really want help on the larger query, you should ask another question and provide the query that you really want optimized (or perhaps a representative simpler version).
EDIT:
You might actually find that a version like this is much faster with the right indexes:
SELECT p.ProcedureId
FROM Procedures p
WHERE p.ProcedureId >= '0' AND p.ProcedureId < ':' AND -- '9' + 1
EXISTS (SELECT 1 FROM Transactions t WHERE t.ProcedureId = p.ProcedureId);
This assumes that you have a table of which ProcedureId is the primary key.
Then, for performance, you want an index on Transactions(ProcedureId).
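Here is a quick sketch of the range-comparison trick through Python's sqlite3, showing it returns exactly the IDs that start with a digit (the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Transactions (ProcedureID TEXT)")
conn.executemany("INSERT INTO Transactions VALUES (?)",
                 [("123ABC",), ("A456",), ("123ABC",), ("999",)])

# ':' is the character immediately after '9', so the predicate is a plain
# range comparison that an index on ProcedureID could satisfy directly,
# unlike a substring() wrapped around the column.
rows = conn.execute("""
    SELECT DISTINCT ProcedureID
    FROM Transactions
    WHERE ProcedureID >= '0' AND ProcedureID < ':'
    ORDER BY ProcedureID
""").fetchall()
print(rows)  # only the IDs that start with a digit
```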
Try this:
WHERE IsNumeric(Substring(ProcedureID, 1, 1)) = 1
If you do this often enough, it might be worth creating a calculated column that contains just the first character of ProcedureID.
It seems you need a REGEXP rather than a LIKE:
SELECT DISTINCT
ProcedureID
FROM Transactions
WHERE Substring(ProcedureID,1,1) NOT REGEXP '^[A-Za-z]'
I need to work out the total in percentages:
Please feel free to ask me for anymore details.
=Sum(Fields!OutSLA.Value, "SLAPriority")/Sum(Fields!InSLA.Value, "SLAPriority")*100
I had this, but it didn't quite go to plan. I have very basic knowledge of this, sorry.
Sorry to post on this one again, but I'm getting weird percentages, e.g. -200 or -Infinity:
To do this at the report level, you can just use something like:
=1 - (Fields!OutSLA.Value / Fields!InSLA.Value)
I wouldn't multiply by 100 in the expression; I would just set the textbox Format property to the appropriate value, e.g. P0.
The above expression assumes a detail row. To apply it in a header or footer row, you would use something like:
=1 - (Sum(Fields!OutSLA.Value) / Sum(Fields!InSLA.Value))
I tried out the first formula in a basic report:
The results are slightly different from your example, but you can see the underlying figures in the last column and why they are rounded in the P0 column.
Edit after comment
You can prevent Infinity values by using an IIf statement to check for 0 InSLA totals, like:
=1 -(IIf(Sum(Fields!InSLA.Value) <> 0
, Sum(Fields!OutSLA.Value) / Sum(Fields!InSLA.Value)
, 0))
-200% is the correct value for that particular row based on your calculation: 1 - (3/1) = -2 = -200 %. What would you expect it to be?
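For reference, the guarded expression can be sketched as a small Python function (the names are made up; it mirrors the IIf logic above, so a zero InSLA total falls back to 1 instead of producing -Infinity):

```python
def sla_pct(out_sla: float, in_sla: float) -> float:
    # Mirrors =1 - IIf(Sum(InSLA) <> 0, Sum(OutSLA) / Sum(InSLA), 0)
    ratio = out_sla / in_sla if in_sla != 0 else 0.0
    return 1.0 - ratio

print(sla_pct(3, 1))  # the -200% row from the question: 1 - 3/1 = -2.0
print(sla_pct(5, 0))  # no division by zero: falls back to 1.0
```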
You can do this with CONVERT or with CAST. Note the multiplication by 1.0, which keeps OUTSLA/INSLA from being truncated by integer division when both columns are ints:
Convert:
Select
AssignedTo,
INSLA,
OUTSLA,
CONVERT(VARCHAR(50), CAST((1 - 1.0*OUTSLA/INSLA)*100 as int)) + '%' as Percentage
FROM SLATABLE
Cast:
Select
AssignedTo,
INSLA,
OUTSLA,
CAST(CAST((1 - 1.0*OUTSLA/INSLA)*100 as int) AS VARCHAR(50)) + '%' as Percentage
FROM SLATABLE