SELECT track_id, name, ROUND(bytes/1000000,6) AS megabytes
FROM track
LIMIT 5;
Result
track_id name megabytes
1 For Those About To Rock (We Salute You) 11
2 Balls to the Wall 5
3 Fast As a Shark 3
4 Restless and Wild 4
5 Princess of the Dawn 6
How do I get megabytes to show to 6 decimal places? The first one should be 11.170334.
I put 6 into the ROUND command.
Even if the bytes field holds precision, dividing an integer by an integer performs integer division, so the fractional part is discarded before ROUND ever sees it. The easiest way to make this statement work is to divide by a decimal literal:
SELECT track_id, name, ROUND(bytes/1000000.0,6) AS megabytes
FROM track
LIMIT 5;
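Equivalently, you can make the conversion explicit with a cast; a sketch of the same fix (SQLite syntax, which the track table and LIMIT suggest):
SELECT track_id, name, ROUND(CAST(bytes AS REAL) / 1000000, 6) AS megabytes
FROM track
LIMIT 5;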
A better approach, though, would be to do this formatting in the thing making the query, i.e. a web page, Excel, Access, etc.
I have a large Oracle table (millions of rows) with two columns of type NUMBER. I am trying to write a query to get the max number of decimal places on pl_pred (I expect this to be around 7 or 8). When I do a TO_CHAR on the column, extra decimal digits show up and the max comes out as 18, when I only see around 4-7 decimal places when selecting that column directly. Does anyone know why, as well as how to accurately assess the max number of decimal places? I need to transfer this data to SQL Server and was trying to come up with the right precision and scale values for the numeric data type.
select
pl.pl_pred as pred_number,
to_char(pl_pred) as pred_char,
length(to_char(pl_pred)) as pred_len
from pollution pl
where length(to_char(pl_pred)) > 15
Results:
PRED_NUMBER PRED_CHAR PRED_LEN
4.6328 "4.6327999999999987" 18
5.8767 "5.8766999999999996" 18
11.19625 "11.196250000000001" 18
13.566375 "13.566374999999997" 18
Table:
CREATE TABLE RHT.POLLUTION
(
LOCATION_GROUP VARCHAR2(20 BYTE),
PL_DATE DATE,
PL_PRED NUMBER,
PL_SE NUMBER
)
Update (again): I ran the same query in SQL Developer and got this, where the two values show up exactly the same. So that's interesting. I went back and was able to look at the raw data, and it does not match up, though. 4.6328 and 5.8767 are what I see. There are some longer ones, like 10.4820321428571. It's like it's treating NUMBER like a float, but I thought NUMBER in Oracle was exact.
"PRED_NUMBER" "PRED_CHAR" "PRED_LEN"
4.6327999999999987 "4.6327999999999987" 18
5.8766999999999996 "5.8766999999999996" 18
10.4820321428571 "10.4820321428571" 15
Raw data:
4.6328
5.8767
10.4820321428571
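For what it's worth, a minimal sketch of one way to measure the stored scale directly (assuming, as the results above suggest, that to_char with no format mask renders the full stored precision):
select max(length(to_char(pl_pred)) - instr(to_char(pl_pred), '.')) as max_scale
from pollution
where instr(to_char(pl_pred), '.') > 0;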
The title can be a little confusing, so let me explain the problem. I have a pipeline that loads new records daily. These records contain sales. The key is <date, location, ticket, line>. The data is loaded into a Redshift table and then exposed through a view that is read by another system. This system has a limit: its column for the ticket is a varchar(10), but the ticket is a string of 30 characters. If the system takes only the first 10 characters, it will generate duplicates. The ticket number can be a "fake" number; it doesn't matter if it isn't equal to the real one. So I'm thinking of adding a new column to the Redshift table that contains a progressive number. The problem is that I cannot use an identity column, because records belonging to the same ticket must have the same "progressive number". Then I will expose this new column (ticket_id) instead of the original one.
That is what I want:
day         location  ticket    line  amount  ticket_id
12/12/2020  67        123...GH  1     10      1
12/12/2020  67        123...GH  2     5       1
12/12/2020  67        123...GH  3     23      1
12/12/2020  23        123...GB  1     13      2
12/12/2020  23        123...GB  2     45      2
...         ...       ...       ...   ...     ...
12/12/2020  78        123...AG  5     100     153
The next day, when new data is loaded, I want to start with ticket_id 154, and so on.
Each row has a column which specifies the instant at which it was inserted; rows inserted on the same day have the same insert_time.
My solution is:
insert the records with ticket_id computed as a dense_rank. But each time I load new records (so each day) the ticket_id starts from one, so...
... update the rows just inserted, setting ticket_id = ticket_id + the max value found in the ticket_id column where insert_time != max(insert_time). A sketch of this two-step approach follows below.
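A sketch of that two-step approach (Redshift syntax; sales_target and sales_staging are stand-in table names, and the NVL covers the very first load, when no prior batch exists):
-- Step 1: insert today's batch; rows sharing a ticket key get the same rank.
INSERT INTO sales_target (day, location, ticket, line, amount, insert_time, ticket_id)
SELECT day, location, ticket, line, amount, GETDATE(),
       DENSE_RANK() OVER (ORDER BY day, location, ticket)
FROM sales_staging;

-- Step 2: shift the new batch past the highest pre-existing ticket_id.
UPDATE sales_target
SET ticket_id = sales_target.ticket_id + prev.max_id
FROM (SELECT NVL(MAX(ticket_id), 0) AS max_id
      FROM sales_target
      WHERE insert_time <> (SELECT MAX(insert_time) FROM sales_target)) prev
WHERE sales_target.insert_time = (SELECT MAX(insert_time) FROM sales_target);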
Do you think there is a better solution? It would be very nice if a hash function existed that takes <day, location, ticket> as input and returns a number of at most 10 characters.
So from the comments it sounds like you cannot add a dimension table to just look up the number (or 10 character string) that identifies each ticket, as this would be a data model change. A lookup table like that would likely be the best and most accurate way to do this.
You asked about a hash function to do this, and there are several. But first let's talk about hashes - they take strings of varying length and make a signature out of them. Since this process can significantly reduce the number of characters, there is a possibility that two different strings will generate the same hash. The longer the hash value, the lower the odds of such a collision, but the odds are never zero. Since you can only have 10 characters, this puts a floor on the odds of a hash collision.
The md5() function on Redshift will take a string and make a 32 character string (base 16 characters) out of it. md5(day::text || location || ticket::text) will make such a hash out of the columns you mentioned. This process can produce 16^32 possible different strings, which is a big number.
But you only want a string of 10 characters. The good news is that hash functions like md5() spread the differences between inputs across the whole output, so you can just pick any 10 characters to use. Doing this reduces the number of unique values to 16^10, or about 1.1 trillion - still a big number, but if you have billions of rows you could see a collision. One way to improve this would be to base64 encode the md5() output and then truncate to 10 characters. Doing this requires a UDF but improves the number of possible hashes to 64^10, about 1.1E18 - a million times larger. If you want the output to be an integer you can convert hex strings to integers with strtol(), but a 10 digit number only has 10 billion possible values.
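For concreteness, a sketch of the truncated-hash idea (Redshift syntax; the sales table name and the choice to keep the first 10 characters are assumptions):
-- ticket_id as the first 10 hex characters of an md5 over the natural key
SELECT day, location, ticket, line, amount,
       LEFT(MD5(day::text || location::text || ticket), 10) AS ticket_id
FROM sales;
If an integer is required instead, STRTOL(LEFT(MD5(day::text || location::text || ticket), 10), 16) converts those hex characters to an integer, with the reduced value space for numeric ids noted above.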
So if you are sure you want to use a hash, this is quite possible. Just remember what a hash does.
What I am trying to do is fairly simple. I just want to add a row number to a query. Since this is in Access, it is a bit more difficult than in other SQL dialects, but under normal circumstances it is still doable using solutions such as DCount or SELECT COUNT(*), example here: How to show row number in Access query like ROW_NUMBER in SQL or Access SQL how to make an increment in SELECT query
My Issue
My issue is I'm trying to add this counter to a multi-join query that orders by fields from numerous tables.
Troubleshooting
My code is a bit ridiculous (19 fields, seven of which are long expressions, from 9 different joined tables, and ordered by fields from 5 of those tables). To make things simple, I have a simplified example query below:
Example Query
SELECT DCount("*","Requests_T","[Requests_T].[RequestID]<=" & [Requests_T].[RequestID]) AS counter, Requests_T.RequestHardDeadline AS Deadline, Requests_T.RequestOverridePriority AS Priority, Requests_T.RequestUserGroup AS [User Group], Requests_T.RequestNbrUsers AS [Nbr of Users], Requests_T.RequestSubmissionDate AS [Submitted on], Requests_T.RequestID
FROM (((Requests_T
INNER JOIN ENUM_UserGroups_T ON ENUM_UserGroups_T.UserGroups = Requests_T.RequestUserGroup)
INNER JOIN ENUM_RequestNbrUsers_T ON ENUM_RequestNbrUsers_T.NbrUsers = Requests_T.RequestNbrUsers)
INNER JOIN ENUM_RequestPriority_T ON ENUM_RequestPriority_T.Priority = Requests_T.RequestOverridePriority)
ORDER BY Requests_T.RequestHardDeadline, ENUM_RequestPriority_T.DisplayOrder DESC , ENUM_UserGroups_T.DisplayOrder, ENUM_RequestNbrUsers_T.DisplayOrder DESC , Requests_T.RequestSubmissionDate;
If the code above tries to select a field from a table not included, I apologize - just trust the field comes from somewhere (lol, i.e. one of the other joins I excluded to simplify the query). A great example of this is the .DisplayOrder fields used in the ORDER BY expression. These are fields from a table that simply determines the "priority" of an enum. Example: Requests_T.RequestOverridePriority displays to the user as a combobox option of "Low", "Med", "High". So in a table, I assign numerical priorities of "1", "2", and "3" to these options, respectively. Thus when ENUM_RequestPriority_T.DisplayOrder DESC is called in the ORDER BY, all "High" priority requests display above "Med" and "Low". The same holds true for ENUM_UserGroups_T.DisplayOrder and ENUM_RequestNbrUsers_T.DisplayOrder.
I'd also prefer NOT to use DCount due to efficiency, and rather do something like:
(select count(*) from Requests_T t2 where t2.RequestID<=Requests_T.RequestID) as counter
Due to the "Order By" expression, however, my 'counter' doesn't actually count my resulting rows sequentially, since both of my examples are tied to the RequestID.
Example Results
Based on my actual query results, I've made an example result of the query above.
Counter Deadline Priority User_Group Nbr_of_Users Submitted_on RequestID
5 12/01/2016 High IT 2-4 01/01/2016 5
7 01/01/2017 Low IT 2-4 05/06/2016 8
10 Med IT 2-4 07/13/2016 11
15 Low IT 10+ 01/01/2016 16
8 Low IT 2-4 01/01/2016 9
2 Low IT 2-4 05/05/2016 2
The query is displaying my results in the proper order (those with the nearest deadline at the top, then those with the highest priority, then user group, then # of users, and finally, if all else is equal, sorted by submission date). However, my "Counter" values are completely wrong! The counter field should simply increment by 1 for each new row. Thus, if displaying a single request on a form for a user, I could say
"You are number: Counter [associated to RequestID] in the
development queue."
Meanwhile my results:
Aren't sequential (notice the first four display sequentially, but the final two rows don't)! Even though the final two rows are lower in priority than the records above them, they ended up with a lower Counter value simply because they had the lower RequestID.
They don't start at "1" and increment +1 for each new record.
Ideal Results
Thus my ideal result from above would be:
Counter Deadline Priority User_Group Nbr_of_Users Submitted_on RequestID
1 12/01/2016 High IT 2-4 01/01/2016 5
2 01/01/2017 Low IT 2-4 05/06/2016 8
3 Med IT 2-4 07/13/2016 11
4 Low IT 10+ 01/01/2016 16
5 Low IT 2-4 01/01/2016 9
6 Low IT 2-4 05/05/2016 2
I'm spoiled by PL/SQL and other software where this would be automatic, lol. This is driving me crazy! Any help would be greatly appreciated.
FYI - I'd prefer an SQL option over VBA if possible. VBA is very much welcome and will definitely get an upvote and my huge thanks if it works, but I'd like to mark an SQL option as the answer.
Unfortunately, MS Access doesn't have the very useful ROW_NUMBER() function that other dialects do. So we are left to improvise.
Because your query is so complicated and MS Access does not support common table expressions, I recommend you follow a two-step process. First, save the query you already wrote as IntermediateQuery. Then, write a second query called FinalQuery that does the following:
SELECT i1.field_primarykey, i1.field2, ... , i1.field_x,
       (SELECT COUNT(*) FROM IntermediateQuery i2
        WHERE i2.field_primarykey <= i1.field_primarykey) AS Counter
FROM IntermediateQuery i1
ORDER BY Counter
The unfortunate side effect is that the more data your query returns, the longer the inline subquery takes to calculate. However, this is the only way you'll get your row numbers. It does depend on having a primary key in the table. In this particular case, it doesn't have to be an explicitly defined primary key; it just needs to be a field or combination of fields that is completely unique for each record.
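One caveat: counting on the key alone reproduces the out-of-order Counter from the question whenever the display order differs from key order. If the number must follow the custom sort, a variation (a sketch only; sort_field is a hypothetical stand-in for the combined sort expression, which IntermediateQuery would need to expose as a column) is to count rows that sort at or before the current one:
SELECT i1.field_primarykey,
       (SELECT COUNT(*) FROM IntermediateQuery i2
        WHERE i2.sort_field < i1.sort_field
           OR (i2.sort_field = i1.sort_field
               AND i2.field_primarykey <= i1.field_primarykey)) AS Counter
FROM IntermediateQuery i1
ORDER BY Counter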
I have a table that contains a field defined as:
doses_given decimal(9,2)
that I want to multiply against this field:
drug_units_per_dose varchar(255)
So I did something like this:
CAST(ppr.drug_units_per_dose as decimal(9,2)) * doses_given dosesGiven,
However, looking at the data, I notice some odd characters:
select distinct(drug_units_per_dose) from patient_prescr
NULL
1
1-2
1-4
1.5
1/2
1/4
10
12
15
1½
2
2-3
2.5
20
2½
3
3-4
30
4
5
6
7
8
½
As you can see, I am getting some characters that cannot be CAST to decimal. On the web page these values render as a small ½ symbol.
Is there any way to replace the ½ character with .5 to accurately complete the multiplication?
The ½ symbol is character 189 in the Latin-1 code page, so to replace it:
CAST(REPLACE(ppr.drug_units_per_dose,char(189),'.5') as decimal(9,2)) * doses_given dosesGiven
You have a rather nasty problem. You have a field drug_units_per_dose that a normal human being would consider to be an integer or floating point number. Clearly, the designers of your database are super-normal people, and they understand a much wider range of options for this concept.
I say that partly tongue-in-cheek, but to make an important point. The column in the database does not represent a number, at least not in all cases. I would suggest that you create a translation table for drug_units_per_dose. It would have rows such as:
1     1
1/2   0.5
3-4   ??
I realize that you will have hundreds of rows, and a lot of them will look redundant because they will be "50, 50" and "100, 100". However, if you want to keep control of the business logic for turning these strings into numbers, then a lookup table seems like the sanest approach.
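A minimal sketch of that translation table (SQL Server syntax; the table and column names here are assumptions):
-- One row per distinct spelling; NULL where no single number makes sense (e.g. '3-4').
CREATE TABLE dose_unit_map (
    drug_units_per_dose varchar(255) PRIMARY KEY,
    units_numeric       decimal(9,2) NULL
);

SELECT m.units_numeric * p.doses_given AS dosesGiven
FROM patient_prescr p
LEFT JOIN dose_unit_map m
    ON m.drug_units_per_dose = p.drug_units_per_dose;
The LEFT JOIN keeps prescriptions whose spelling has no mapping yet, surfacing them as NULL instead of failing a CAST.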
CAST(prod.em_amccom_comset AS int) * invline.qtyinvoiced AS setcredits
Syntax: CAST(char_value AS int) * integer_value AS alias_name
I have a VARCHAR2 column that I want to sort numerically. 99% (or possibly even 100%) of the time it will contain numbers. I was looking around and found this solution. Quoting the source:
Remember that our goal is to sort the supplier_id field in ascending order (based on its
numeric value). To do this, try using
the LPAD function.
For example,
select * from supplier order by
lpad(supplier_id, 10);
This SQL pads the front of the
supplier_id field with spaces up to 10
characters. Now, your results should
be sorted numerically in ascending
order.
I've played around a little bit with this solution and it seems to be working (so far), but how does it work? Can anyone explain?
When sorting strings/varchars, the field is always sorted from left to right, character by character, like you would sort normal words.
That is why you have problems when sorting
1
2
3
10
11
20
which would be sorted as
1
10
11
2
20
3
But now, if you pad the values on the left (shown here with zeros for readability; padding with spaces as LPAD does in the example works the same way, because a space collates before any digit), you will have something like
001
002
003
010
011
020
which would sort correctly
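Putting it together, a short sketch (Oracle syntax, reusing the supplier table from the quoted source):
-- Lexicographic order: 1, 10, 11, 2, 20, 3
select supplier_id from supplier order by supplier_id;

-- Left-padded order: 1, 2, 3, 10, 11, 20
select supplier_id from supplier order by lpad(supplier_id, 10);
Two caveats: lpad truncates values longer than the pad length, so 10 must cover the longest supplier_id; and unlike to_number, this never raises an error on a non-numeric value, it just sorts it as text.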