I have a table (a) that contains imported data, and one of its values needs to be joined to another table (b). In table b, that value is sometimes part of a comma-separated list, which is stored as a varchar. This is the first time I have dealt with a database column that contains multiple pieces of data. I didn't design it, and I don't believe it can be changed, although I believe it should be.
For example:
Table a:
column_1
12345
67890
24680
13579
Table b:
column_1
12345,24680
24680,67890
13579
13579,24680
So I am trying to join these tables together, based on this number and 2 others, but when I run my query, I'm only getting the row that contains 13579, and none of the rest.
Any ideas how to accomplish this?
Storing lists as comma-delimited data is a sign of bad design, particularly when storing ids, which are presumably integers in their native format.
Sometimes, this is necessary. Here is a method:
select *
from a
join b
    on ',' + b.column_1 + ',' like '%,' + cast(a.column_1 as varchar(255)) + ',%'
This will not perform particularly well, because the query will not take advantage of any indexes.
The idea is to put the delimiter (,) at the beginning and end of b.column_1, so every value in the column has a comma both before and after it. Then you can search for a.column_1 with commas appended on both sides. The commas ensure that 10 does not match 100.
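As a quick check of why the wrapping matters, here is a standalone snippet (separate from the query above) you can run to see the delimiters at work:

-- With delimiters, 10 matches only as a whole list element:
select case when ',10,100,' like '%,10,%' then 'match' else 'no match' end;   -- match
select case when ',100,1000,' like '%,10,%' then 'match' else 'no match' end; -- no match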
If possible, you should consider an alternative way to represent the data. If you know there are at most two values, you might consider having two columns in b. In general, though, you would have a "join" table, with a separate row for each pair.
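A minimal sketch of such a join table (the table and column names here are illustrative, not from the original schema):

-- One row per (b row, value) pair instead of a CSV list:
create table b_values (
    b_id     int not null,  -- key of the original row in b
    column_1 int not null   -- a single value per row
);

-- The join then becomes a plain, index-friendly equality:
select *
from a
join b_values bv on bv.column_1 = a.column_1;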
Related
I have a VARCHAR column category_text in a table that contains the tags of a stored notification. I have three tags, Query, Complaint, and Suggestion, and the column can have one or more values separated by commas. I am applying a filter, and the filter can have one or more values as well, in a comma-separated pattern.
Now what I want is to retrieve all the rows that contain at least one tag from the filter the user is applying. For instance, the user can select 'query,suggestion' as a filter, and the result would be all the rows that contain one of those tags, i.e. query or suggestion.
select
    t.category_text
from
    real_time_notifications t
where
    charindex('query', t.category_text) > 0
    or charindex('suggestion', t.category_text) > 0
order by
    t.id desc
Create a new table, like user_category (a user_id linking to the user table, plus the category), and create an index on both columns. It will speed up searching a lot and ease your future maintenance.
If you still insist on keeping the comma-separated column, create an inline function to split the string into records and then join against it to test for matches, as sketched below.
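A sketch of that split approach, assuming SQL Server 2016+ (where STRING_SPLIT is available) and a hypothetical @filter parameter holding the user's comma-separated filter:

declare @filter varchar(100) = 'query,suggestion';  -- hypothetical filter input

-- Match a row if any of its tags appears in the filter list:
select distinct t.id, t.category_text
from real_time_notifications t
cross apply string_split(t.category_text, ',') as tag
where ltrim(rtrim(tag.value)) in (
    select ltrim(rtrim(f.value))
    from string_split(@filter, ',') as f
)
order by t.id desc;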
I have two tables that store two sets of IDs which should be the same. When any of the four ID values is NULL, there's an issue in the front-end application. Which of the four values is NULL always varies, but there will always be one with the correct entry.
My question is: can I enter these four values into a temp table and then update all the NULL values from the column that actually has a value? As the column with the correct value changes all the time, it makes this harder.
Basically, I'm making a stored proc but can't figure this logic out.
It sounds like you just need to use coalesce to find the non-NULL value.
coalesce(table1.col1, table1.col2, table2.col1, table2.col2)
The only caveat is that if two columns have different non-NULL values, this expression returns the first one it finds (in the order you list the columns). But if that situation doesn't occur, or if you can specify which column to prefer when it does, this should work regardless of which combination of columns is NULL.
Use a table expression to join the two tables; then, from the result, update the columns that are missing data with the values from the ones that have them.
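A minimal sketch of that update, assuming SQL Server and hypothetical tables table1(id, col1, col2) and table2(id, col1, col2) joined on a shared id; a second, symmetric statement would repair table2:

-- COALESCE picks the first non-NULL of the four candidates, and both of
-- table1's columns are set from it. All names here are illustrative.
update t1
set t1.col1 = coalesce(t1.col1, t1.col2, t2.col1, t2.col2),
    t1.col2 = coalesce(t1.col1, t1.col2, t2.col1, t2.col2)
from table1 t1
join table2 t2
    on t2.id = t1.id;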
We have a huge table, and one of the columns contains queries, e.g. in row 1:
1. (((firstname:Adam OR firstname:Neil ) AND lastname:Lee) ) AND category:"Legal" AND type:Individual
and in row 2 of the same column:
2. (((firstname:Adam* OR firstname:Neil ) AND lastname:Lee) ) AND category:"Legal" AND type:Organization
Similarly, there are a few other types of query strings, which are eventually used to query external services.
The issue is that, based on certain criteria, I have to group rows and remove duplicates from this table.
There are a few rules that determine the grouping of strings in different rows. One of them is that if the firstname and lastname are the same, then the category and type values are ignored, so the two rows above would be grouped into one. There are around a million rows. Comparing strings and doing the grouping by hand does not look like an elegant solution. What could be the best possible solution using SQL?
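One possible shape for the name-based rule, a rough sketch assuming SQL Server, a hypothetical table queries(id, query_text), and that every row contains an ' AND category:' marker (everything before that marker is treated as the grouping key):

-- Key each row on the text before ' AND category:' (the name portion)
-- and keep one row per key. Table and column names are assumptions.
with keyed as (
    select id,
           query_text,
           row_number() over (
               partition by left(query_text, charindex(' AND category:', query_text) - 1)
               order by id
           ) as rn
    from queries
)
select id, query_text
from keyed
where rn = 1;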
In an Oracle database, in table A, there is a CLOB field called 'ID_CLOB', storing some IDs from another table B.
Example:
| ID_CLOB |
,15,16,17,18,19,21,23,24,25,30,32,33,
And here is my question: how can I tell from a SQL statement whether a number, say 15, is in the 'ID_CLOB' field?
Thanks in advance.
The situation:
I am actually working on a third-party application that comes with this DB schema. Think of the scenario this way: in table B, there is one person's information per row, and table A, let's assume, is a department table; each row is a department, and the CLOB field is used to store who is in that department.
If the format of the data in that field is guaranteed to have comma delimiters before and after each value with no spaces, then Oracle's INSTR function would find it:

SELECT * FROM A
WHERE INSTR(id_clob, ',15,') > 0;
This is not very efficient, though, and it is fragile: if there are spaces between the values and the commas, or if the first value is not preceded by a comma, or if the last value is not trailed by a comma, it will fail.
As others have pointed out, it would be better (if you can) to change the database design. In the real world, though, that is not always possible.
I would suggest building a new table -
person_department
-----------------
person_id
department_id
and struggling through the one-time parsing of your badly formatted data to put it into this structure.
Then you can query it easily.
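A sketch of that one-time parse, assuming Oracle 11g+, that table A has a department_id key column, and that each list fits in 4000 characters so it can be handled as VARCHAR2 via DBMS_LOB.SUBSTR:

-- Generate up to 1000 position numbers, pull the nth comma-separated
-- value from each CLOB, and insert one row per (person, department).
insert into person_department (person_id, department_id)
select to_number(regexp_substr(dbms_lob.substr(a.id_clob, 4000, 1),
                               '[^,]+', 1, gen.n)) as person_id,
       a.department_id
from A a
cross join (select level as n from dual connect by level <= 1000) gen
where regexp_substr(dbms_lob.substr(a.id_clob, 4000, 1),
                    '[^,]+', 1, gen.n) is not null;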
Instead of thinking up a query for your problem, you should rather redesign your DB. Your structure violates the first normal form (1NF) of database design.
I have an array of 50+ elements, each of which records how many hours were worked in a given week.
What is the proper way to store this information into a database table?
My initial idea was to use a delimiter, but the text is too large (280 characters) to fit.
Additionally, there seems to be something "wrong" with creating a table column for each element.
Ideas?
Array using delimiter (comma):
37.5,37.5,37.5,37.5,37.5,37.5,37.5,37.5,37.5,37.5, ...
The "proper" way is to store the array's contents as multiple rows in a whole other table, each with a foreign key referencing the record they belong to back in the first table. There may be other things that work for you, though.
[EDIT]: From the details you added, I'm guessing your array elements are the hours worked in each week, and you have 50+ of them because a year has 52-ish weeks. Assuming your current (main) table is called something like "employees," each row there should have a unique identifier for the employee record. Your new table might then be called "work_weeks" and consist of something like employee_id (matching the employee's id in the current table), week_number, and hours_worked.
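A minimal sketch of that schema; the employees table and all column names are assumptions based on the description above:

-- One row per employee per week, instead of a 50+-element array.
create table work_weeks (
    employee_id  int          not null references employees (id),
    week_number  int          not null,  -- 1 through 52-ish
    hours_worked decimal(5,2) not null,  -- e.g. 37.5
    primary key (employee_id, week_number)
);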
This seems like a one-to-many relationship. For this example, tableA is the "one" and tableBlammo is the "many".
tableA => column blammoId
tableBlammo => column blammoId, column data
One row in tableA joins to multiple rows in tableBlammo via the blammoId column.
Each row in tableBlammo has one element of the array in the data column.
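For example, reading one logical array back is then a simple join (assuming blammoId = 1 identifies the row in tableA):

-- Reassemble the array elements for a single tableA row:
select b.data
from tableA a
join tableBlammo b on b.blammoId = a.blammoId
where a.blammoId = 1;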