Here it goes.
I have two tables, say Application and Report.
These tables have a common column (appId and externalAppId respectively) which can be joined to find unique values.
My problem is that when I join these two tables, I get values I don't really want.
Sample values
Application            Report
No   appId             ReportNo   ExternalAppId
1    123               1          123
2    456               2          00000123
3    789               3          321
So when I say Application.appId = Report.ExternalAppId in my WHERE condition, it returns the rows for both 123 and 00000123 from the Report table. The leading zeros are not taken into account in the join, but I need exact matches only; in this case, the first row alone.
I think the cause of the issue is that appId is a NUMBER while ExternalAppId is a VARCHAR, and I can't change that. Is there any workaround? I have seen regexes that can remove the leading zeros before matching, but I want to know if there is a better solution, i.e. can I specify that the join should only match values exactly?
Oracle can only compare two values of the same datatype. I can't stress this enough. In fact, most languages can compare two values only if they are of the same datatype. Relations in math are also defined over objects of the same type (so that you can define transitivity, reflexivity...). There's also the saying about apples and oranges: don't try to compare them.
So when you ask Oracle to compare two values of different datatypes, an implicit conversion takes place. In most cases you should avoid these conversions, since the rules for which datatype is chosen over the other can be quite complex and (as in this case) will often produce bugs, because you will incorrectly guess which type wins. You should rely on explicit conversions (conversions that you specify).
I assume that Application (appId) is a NUMBER and Report (ExternalAppId) is of type VARCHAR2. In this case Oracle chose to convert ExternalAppId to a NUMBER, and in NUMBER space 00000123 = 123, because numbers have no format.
You should instead have written your join condition as:
to_char(application.appId) = report.externalAppId
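For example, the full query might look like this (a sketch using the table and column names from the question):

SELECT a.appId, r.reportNo, r.externalAppId
FROM application a
JOIN report r ON TO_CHAR(a.appId) = r.externalAppId;

With both sides compared as VARCHAR2, '123' no longer matches '00000123', so only the exact match remains.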
I've been racking my brain here trying to figure out a way to achieve a solution to the following without external applications (such as Excel).
I'll set up the structure: we are using a 3rd party ERP that provides a nicely configured conversion system for product packaging types. I'm trying to create a query that will take all conversions for a given product and return them inline. Because the number of conversion records is indeterminate, the query would need to be recursive.
To keep things simple, let's use package quantities for this example. If a product can be shipped in [eaches, pairs, sets, packages, and cartons], the conversion table records would look something like this:
pkConvKey    fkProdID   childUnit   parentUnit   chPerParent
ConvRec001   Prod123    each        pair         2
ConvRec002   Prod123    pair        set          3
ConvRec003   Prod123    set         pack         7
ConvRec004   Prod123    pack        carton       24
Using the table above, I can determine how many pairs of Prod123 are contained in a carton by following the math:
24 packs per carton x 7 sets per pack x 3 pairs per set = 504 pairs per carton.
I could further multiply that by 2 to get the count of individual pieces in a carton (1,008). That's the idea behind the conversion table, but here's my actual problem.
I'd like to return a table of records where associated conversions are in-line, thusly:
fkProdID   unit1   unit2   qtyInUnit2   unit3   qtyInUnit3   unit4   qtyInUnit4   unit5    qtyInUnit5
Prod123    each    pair    2            set     3            pack    7            carton   24
Complicating the matter is that the unit types are unknown (arbitrary) values and there is no requirement to have a full, intact chain from unit A to unit Z. (For example, there might be a conversion record from each to pair, and another from set to pack, but not one from pair to set).
In this scenario, the select can't recursively link the records, and they would appear in the resulting table as two separate records - which is fine.
I have attempted to join the table to itself on t1.parentUnit = t2.childUnit, but that obviously doesn't work recursively.
I fear my only solution is to left join the table to itself over and over, as many as 20 times in the query, settling for NULL values where additional conversions do not exist; but then I would also have many duplicate rows (with incomplete conversion chains) to weed out.
Can this be done in a select query?
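For reference, the closest I can sketch is a recursive CTE (assuming SQL Server syntax and a hypothetical conversion table named conversions); it walks each chain, including broken ones, but still returns one row per conversion link rather than the in-line layout I'm after:

;WITH chain AS
(
    -- anchor: conversions whose childUnit is not itself produced by another conversion
    SELECT c.fkProdID, c.childUnit, c.parentUnit, c.chPerParent, 1 AS lvl
    FROM conversions c
    WHERE NOT EXISTS (SELECT 1 FROM conversions p
                      WHERE p.fkProdID = c.fkProdID
                        AND p.parentUnit = c.childUnit)
    UNION ALL
    -- recurse: the next link's child is the current link's parent
    SELECT n.fkProdID, n.childUnit, n.parentUnit, n.chPerParent, ch.lvl + 1
    FROM chain ch
    JOIN conversions n
      ON n.fkProdID = ch.fkProdID
     AND n.childUnit = ch.parentUnit
)
SELECT * FROM chain ORDER BY fkProdID, lvl;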
Thanks in advance!
-Dan
I have the below table TEST with a single column DATA
00001900-01-01Aseopenigaccount-RF RF-ADIT
00341900-02-01Aseopenigaccount-RASS RASS-ADIT
00761900-03-01Adminopenigaccount-RASS OPEN-System
I need the above DATA column split into the 4 columns below:
Code   Date         Description               ShortDesc
0000   1900-01-01   Aseopenigaccount-RF       RF-ADIT
0034   1900-02-01   Aseopenigaccount-RASS     RASS-ADIT
0076   1900-03-01   Adminopenigaccount-RASS   OPEN-System
@at9063, welcome to the community. As the comments indicate, you should provide a sample of your attempted solution in future questions. It would also be really helpful to provide any logical assumptions behind your dataset.
The solution below is based on the sample data you provided. The first two columns can be extracted by taking the first 4 characters and the following 10. The Description column starts at the 15th character and runs up to the first space. ShortDesc starts right after the first space.
SELECT LEFT(my_data, 4)                                      AS My_Code,
       SUBSTRING(my_data, 5, 10)                             AS my_date,
       -- from position 15 up to (but not including) the first space
       SUBSTRING(my_data, 15, CHARINDEX(' ', my_data) - 15)  AS My_Description,
       -- everything after the first space
       SUBSTRING(my_data, CHARINDEX(' ', my_data) + 1, LEN(my_data)) AS my_ShortDesc
FROM test
My client has a set of numeric data stored in a string field in a database. So of course it doesn't sort correctly. These rows sort like this:
105
3
44
When they should sort like this:
3
44
105
This is very much a legacy database and I can't change it at all. I also can't change the software that uses the database. The client doesn't own it or have the source code. It has never worked the way they want. However, there is an unused string field that I could use to sort on (only a small number of fields can be sorted on.)
What I would like to do is take the input data, derive a string from it, and store the new string in the unused field, such that when the data is sorted on this new data, the original data sorts correctly, i.e., numerically.
So, for an overly simplistic example, if the algorithm produced the following new data:
105 -> c
3 -> a
44 -> b
Then when the second column was sorted, the first column would look 'correct'.
The tricky bit is that when new rows are added to the database, they must also sort correctly, without having to regenerate the sort data for all rows. This is the part of the problem that has my brain in a twist. I'm not sure it's actually possible.
You can assume that the number will never be more than 5 'digits'.
I realize this is a total kludge, but since I can't change the system, I have to find a workaround rather than a quality solution. Welcome to the real world.
~~~~~~~~~~~~~~~~~~~~~~ S O L U T I O N ~~~~~~~~~~~~~~~~~~
I don't think this is an uncommon problem, so here are the results of Gordon's solution:
mysql> select * from t order by new;
+------+------------+
| orig | new |
+------+------------+
| 3 | 0000000003 |
| 44 | 0000000044 |
| 105 | 0000000105 |
+------+------------+
In most databases, you can just do:
order by cast(col as int)
This will convert the string representation to a number and use that for ordering. There is no need for an additional column. If you add one, I would recommend adding a numeric column to contain the actual value.
If you really want to store something in the unused field, then you can left pad the number. How to do this depends on the database, but here is one typical method:
update t
set unused = right(concat('0000000000', col), 10);
Not all databases support these two specific functions, but they all offer this basic functionality in some form.
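In MySQL, for instance, LPAD expresses the same padding in one call; a sketch against the same table:

update t
set unused = lpad(col, 10, '0');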
Try something like
SELECT column1 FROM table1 ORDER BY LENGTH(column1) ASC, column1 ASC
(Adjust the column and table name for your environment.)
This is a bit of a hack but works as long as the "numbers" in your string column are natural, non-negative numbers only.
If you are looking for a more sophisticated approach or algorithm, try searching for natural sort together with your DBMS.
Suppose I have a large table to store ranges of integers. I can do this with two fields:
start|end
10 |210 (represents 10 to 210)
5 |55 (represents 5 to 55)
(quick to select by end column), or:
start|length
10 | 200 (represents 10 to 210)
5 | 50 (represents 5 to 55)
(quick to select by length column).
What if sometimes I need to select by end, and sometimes by length, and both queries need to be fast? I could store both:
start|length|end
10 | 200 |210
5 | 50 |55
But then this is not normalised, everyone has to remember to update both fields, and it is just bad design.
I know I can select by start + length or end - start but for a very large table, isn't this extremely slow?
How can I query by calculated values quickly without storing redundant data - or should I just store the extra column?
Depending on the database type you are using, you might want to use a trigger to calculate the derived field. That way, they can never get out of synch.
This means that the field (length) could be re-calculated every time start or end changes.
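A minimal sketch of that, assuming MySQL syntax and a hypothetical table named ranges (end and length are backtick-quoted because they collide with keywords):

CREATE TRIGGER ranges_bi BEFORE INSERT ON ranges
FOR EACH ROW SET NEW.`length` = NEW.`end` - NEW.`start`;

CREATE TRIGGER ranges_bu BEFORE UPDATE ON ranges
FOR EACH ROW SET NEW.`length` = NEW.`end` - NEW.`start`;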
I'd store the length, but I'd make sure the calculation was done in my insert and update sprocs so that as long as everyone uses your sprocs there is no more overhead for them.
Unfortunately, neither of your target databases supports computed columns. I would do the following:
First, determine whether you really have a performance problem. It is true that WHERE end - start = ? will perform more slowly than WHERE length = ?, but you don't define what a "really big table" is in your application, nor what the required performance is. No need to optimize away a problem that may not exist.
Determine whether you can support any latency in your searches. If so, you can add the calculated column to the table but dedicate a separate task, running every five minutes, each hour, or whatever, to fill in the values.
In PostgreSQL you could consider a materialized view, which I believe are supported at the engine level. (See Catcall's comment, below).
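A sketch of that approach, assuming PostgreSQL 9.3 or later and a hypothetical ranges table ("end" needs quoting because it is a reserved word):

CREATE MATERIALIZED VIEW ranges_with_length AS
SELECT "start", "end", "end" - "start" AS length
FROM ranges;

-- refresh on whatever schedule your latency budget allows
REFRESH MATERIALIZED VIEW ranges_with_length;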
Finally, if all else fails, consider using a trigger to maintain the calculated column.
I've been beating my head on the desk trying to figure this one out. I have a table that stores job information, and reasons for a job not being completed. The reasons are numeric: 01, 02, 03, etc. You can have two reasons for a pending job, and if you select two reasons they are stored in the same column, separated by a comma. This is an example from the JOBID table:
Job_Number User_Assigned PendingInfo
1 user1 01,02
There is another table named Pending, that stores what those values actually represent. 01=Not enough info, 02=Not enough time, 03=Waiting Review. Example:
Pending_Num PendingWord
01 Not Enough Info
02 Not Enough Time
What I'm trying to do is query the database to give me all the job numbers, users, pending info, and pending reasons. I can break out the first value, but can't figure out how to do the second. Here's what my limited skills have produced so far:
select Job_number,user_assigned,SUBSTRING(pendinginfo,0,3),pendingword
from jobid,pending
where
SUBSTRING(pendinginfo,0,3)=pending.pending_num and
pendinginfo!='00,00' and
pendinginfo!='NULL'
What I would like to see for this example would be:
Job_Number   User_Assigned   PendingInfo   PendingWord       PendingInfo   PendingWord
1            User1           01            Not Enough Info   02            Not Enough Time
Thanks in advance
You really shouldn't store multiple items in one column if your SQL is ever going to want to process them individually. The "SQL gymnastics" you have to perform in those cases are both ugly hacks and performance degraders.
The ideal solution is to split the individual items into separate columns and, for 3NF, move those columns to a separate table as rows if you really want to do it properly (but baby steps are probably okay if you're sure there will never be more than two reasons in the short-medium term).
Then your queries will be both simpler and faster.
However, if that's not an option, you can use the afore-mentioned SQL gymnastics to do something like:
where find(',' || fld || ',', ',02,') > 0
assuming your SQL dialect has a string search function (find in this case, but I think charindex for SQLServer).
This ensures the compared value begins and ends with a comma (comma plus field plus comma), then looks for the specific desired value with commas on either side, so only a full sub-column match succeeds.
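In SQL Server, for example, that test against the jobid table above might look like:

where charindex(',02,', ',' + pendinginfo + ',') > 0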
If you can't control what the application puts in that column, I would opt for the DBA solution - DBA solutions are defined as those a DBA has to do to work around the inadequacies of their users :-).
Create two new columns in that table and make an insert/update trigger which will populate them with the two reasons that a user puts into the original column.
Then query those two new columns for specific values rather than trying to split apart the old column.
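A sketch of such a trigger, assuming SQL Server syntax (pending1 and pending2 are hypothetical names for the two new columns):

CREATE TRIGGER trg_jobid_split ON jobid
AFTER INSERT, UPDATE
AS
UPDATE j
SET pending1 = LEFT(i.pendinginfo, CHARINDEX(',', i.pendinginfo + ',') - 1),
    pending2 = CASE WHEN CHARINDEX(',', i.pendinginfo) > 0
                    THEN SUBSTRING(i.pendinginfo, CHARINDEX(',', i.pendinginfo) + 1, 10)
                    ELSE NULL END
FROM jobid j
JOIN inserted i ON i.job_number = j.job_number;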
This means that the cost of splitting is only paid on row insert/update, not on every single select, amortising that cost efficiently.
Still, my answer is to re-do the schema. That will be the best way in the long term in terms of speed, readable queries and maintainability.
I hope you are just maintaining this code and it's not a brand new implementation.
Please consider using a different approach with support tables like these:
JOBS TABLE
jobID | userID
--------------
1 | user13
2 | user32
3 | user44
--------------
PENDING TABLE
pendingID | pendingText
---------------------------
01 | Not Enough Info
02 | Not Enough Time
---------------------------
JOB_PENDING TABLE
jobID | pendingID
-----------------
1 | 01
1 | 02
2 | 01
3 | 03
3 | 01
-----------------
You can easily query these tables using JOINs or subqueries.
If you need backward compatibility in your software, you can add a view to achieve this.
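For instance, a sketch of the per-job/per-reason query, using the table and column names above:

SELECT j.jobID, j.userID, p.pendingID, p.pendingText
FROM jobs j
JOIN job_pending jp ON jp.jobID = j.jobID
JOIN pending p ON p.pendingID = jp.pendingID;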
I have tables like:
Events
---------
eventId int
eventTypeIds nvarchar(50)
...
EventTypes
--------------
eventTypeId
Description
...
Each Event can have multiple eventtypes specified.
All I do is write two procedures in my site code, not SQL code.
One procedure converts the table field (eventTypeIds) value, like "3,4,15,6", into a ViewState array, so I can use it anywhere in code.
The other procedure does the opposite: it collects whatever options were checked and converts them back into the comma-separated string for storage.
If changing the schema is an option (which it probably should be), shouldn't you implement a many-to-many relationship here so that you have a bridging table between the two items? That way, you would store the number and its wording in one table, jobs in another, and "failure reasons for jobs" in the bridging table...
Have a look at a similar question I answered here
;WITH Numbers AS
(
    -- one number per row in JobId (caps the character positions we can probe)
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS N
    FROM JobId
),
Split AS
(
    -- N lands on each position just after a comma (the string is prefixed
    -- with ',' for the test), so each match yields one PENDING_NUM
    SELECT JOB_NUMBER,
           USER_ASSIGNED,
           SUBSTRING(PENDING_INFO, Numbers.N,
                     CHARINDEX(',', PENDING_INFO + ',', Numbers.N) - Numbers.N) AS PENDING_NUM
    FROM JobId
    JOIN Numbers
      ON Numbers.N <= DATALENGTH(PENDING_INFO) + 1
     AND SUBSTRING(',' + PENDING_INFO, Numbers.N, 1) = ','
)
SELECT *
FROM Split
JOIN Pending ON Split.PENDING_NUM = Pending.PENDING_NUM
The basic idea is that you have to multiply each row as many times as there are PENDING_NUMs, then extract the appropriate part of the string.
While I agree with the DBA perspective of not storing multiple values in a single field, it is doable, as shown below, and can be practical for application logic and for some performance situations. Let's say you have 10,000 user groups, each having on average 1,000 members. You may want a user_groups table with columns such as groupID and membersID. Your membersID column could be populated like this:
(',10,2001,20003,333,4520,'), each number being a memberID, all separated by commas, with a comma also at the start and end of the data. Then your select would use LIKE '%,someID,%'.
If you cannot change your data ('01,02,03' or similar), say you want rows containing 01, you can still use select ... where col LIKE '01,%' OR col LIKE '%,01' OR col LIKE '%,01,%', which ensures it matches at the start, end or inside, while avoiding similar numbers (e.g. 101).
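When the stored value is wrapped in commas as described above, the three ORs collapse to a single predicate; a sketch assuming SQL Server-style + concatenation (|| in other dialects) and the jobid example:

select job_number, user_assigned
from jobid
where ',' + pendinginfo + ',' like '%,01,%'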