SQL query on a string

I have a database of users, and each user has a record like this.
I would like to query data like
CN=aaa, OU=Domain,OU=User, OU=bbbbbb,OU=Department, OU=cccc, OU=AUTO, DC=dddddd, DC=com
and I need to group all users that share the same OU=Department value.
How can I write the SELECT with a substring to search for a department?
My idea for the solution is to create another table that is like this:
---------------------------------------------------
ldapstring | society | site
---------------------------------------------------
"CN=aaa, OU=Domain,OU=User, OU=bbbbbb,OU=Department, OU=cccc, OU=AUTO, DC=dddddd, DC=com" | societyName1 | societySite1
and my idea is to compare the string against the entries in this new table using LIKE, but how can I retrieve the society and site when the LIKE match occurs?
Please help me

You could always do ColumnName LIKE '%OU=Department%'.
Regardless, I think this needs to be normalized into a better table, if possible. Multivalue columns should be avoided as much as possible.
If you aren't dealing with a database, the next best thing would be a regular expression.
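For the specific question of pulling back the society and site, here is a minimal sketch, assuming a users table with an ldapstring column and the lookup table above named society_lookup (all of these names are guesses based on the question); MySQL syntax:
SELECT s.society, s.site, COUNT(*) AS user_count
FROM users u
JOIN society_lookup s
  ON u.ldapstring LIKE CONCAT('%', s.ldapstring, '%')
GROUP BY s.society, s.site;
The join works whether the lookup rows hold full LDAP strings or just distinctive fragments such as the OU=Department part, but it compares every user against every lookup row, so normalizing the data as suggested above is still the better long-term fix.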

Maybe you should look into MySQL regular expressions. I, myself, have never used it, but just wanted to suggest it :-)
http://dev.mysql.com/doc/refman/5.1/en/regexp.html
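As an illustration only (the table and column names are assumptions), a REGEXP match can be anchored more precisely than a plain LIKE:
SELECT *
FROM users
WHERE ldapstring REGEXP 'OU=Department, *OU=[^,]+';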

Related

SQL - reverse like. Is it possible?

I have a db table containing these data
| id | curse_word |
----------------------
| 1 | d*mn |
| 2 | sh*t |
| 3 | as*h*le |
I am creating a website that behaves like an online forum where people can create discussion threads and talk about them. To be able to post you need to register. To register you need a forum username. We want to prevent usernames from containing curse words anywhere in them. These curse words are defined in our database.
To check for this using the table above, I have thought of using an SQL query with LIKE.
But what if the registered username is something like "somesh*ttyperson"? Since it contains the word sh*t, the username should not be allowed. So this is something like using an SQL query with a reverse LIKE.
I tried the following command below but it won't work:
select * from table where "somesh*ttyperson" LIKE curse_word
How can I make it work?
Although I'd give Tomalak's comment some consideration, here's a solution that might fit your needs:
SELECT COUNT(*) FROM curse_words
WHERE "somesh*ttyperson" LIKE CONCAT('%', curse_word, '%');
In this way you are actually composing a LIKE comparison term for each of the curse words by prepending and appending a % (e.g. %sh*t%).
LIKE might be a bit expensive to query if you plan on having millions of curse words but I think it's reasonable to assume you aren't.
All you have to do now is test for this result being strictly equal to 0 to let the nickname through, or forbid it otherwise.
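A minimal sketch of that final test, folding the zero check into the query itself (the hard-coded username stands in for whatever your application passes in):
SELECT CASE WHEN COUNT(*) = 0 THEN 1 ELSE 0 END AS username_allowed
FROM curse_words
WHERE 'somesh*ttyperson' LIKE CONCAT('%', curse_word, '%');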

Best way to query/visualize period data (SQL/BI)

Lately we have had a few cases where I had to build reports from one table like:
1|text|text
2|text|text
3|text|text
and another table
1|1.1.2017|text
1|1.2.2017|text
2|1.1.2017|text
2|1.2.2017|text
3|1.1.2017|and so on
result should be:
id | text | text | Jan | Feb | March | ...
1  | text | text | 2   | 1   | ...
2  | text | text | 2   | 1   | ...
3  | text | text | 1   | 1   | ...
My first question is whether there is a common way to do this. I have already built queries that do this, but perhaps not as efficiently as they could be. It seems like a very common business case, so maybe there are (standardized) techniques for this that I don't know yet.
Another question: the queried data will go into a BI tool later. So is it perhaps better (faster) to run the queries first, load the tables into the BI tool, and then manipulate the data as desired? Maybe someone has experience with this and could give me advice...
Thanks
Have a look at the answer to duplicate ID in sql select - except the first sentence.
In my experience it is best to do as you suggested: use SQL to group and aggregate the data as you want and then use a BI tool to present it.
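As a rough sketch of that grouping step using conditional aggregation, assuming the two tables are called master and detail with id and detail_date columns (the names are guesses from the sample data; MONTH() is MySQL syntax):
SELECT m.id, m.col1, m.col2,
       SUM(CASE WHEN MONTH(d.detail_date) = 1 THEN 1 ELSE 0 END) AS Jan,
       SUM(CASE WHEN MONTH(d.detail_date) = 2 THEN 1 ELSE 0 END) AS Feb
FROM master m
LEFT JOIN detail d ON d.id = m.id
GROUP BY m.id, m.col1, m.col2;
One CASE per month gives exactly the wide, one-row-per-id shape shown above, and the result can be handed to the BI tool as-is.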

SQL - Calculating columns using dynamic functions

I'm trying to create a set of data that I'm going to write out to a file. It's essentially a report composed of various fields from a number of different tables; some columns need to have some processing done on them, while others can just be selected.
Different users will likely want different processing performed on certain columns, and in the future I'll probably need to add additional functions for computed columns.
I'm considering the cleanest/most flexible approach to storing and using all the different functions I'm likely to need for these computed columns. I've got two ideas in my head, but I'm hoping there might be a much more obvious solution I'm missing.
For a simple, slightly odd example, a Staff table:
Employee | DOB | VacationDays
Frank | 01/01/1970 | 25
Mike | 03/03/1975 | 24
Dave | 05/02/1980 | 30
I'm thinking I'd either end up with a query like
SELECT NameFunction(Employee, optionID),
       DOBFunction(DOB, optionID),
       VacationFunction(VacationDays, optionID)
FROM Staff
With user defined functions, where the optionID would be used in a case statement inside the functions to decide what processing to perform.
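For illustration, one such function might look like this in T-SQL (the name, option IDs and output formats here are only a sketch; FORMAT needs SQL Server 2012+):
CREATE FUNCTION dbo.DOBFunction (@DOB date, @optionID int)
RETURNS varchar(30)
AS
BEGIN
    RETURN CASE @optionID
        WHEN 2 THEN FORMAT(@DOB, 'd MMMM yyyy')      -- e.g. 1 January 1970
        WHEN 3 THEN CONVERT(varchar(30), @DOB, 112)  -- e.g. 19700101
        ELSE CONVERT(varchar(30), @DOB, 103)         -- dd/mm/yyyy by default
    END
END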
Or I'd want to make the way the data is returned customisable using a lookup table of other functions:
ID | Name | Description
1 | ShortName | Obtains 3 letter abbreviation of employee name
2 | LongDOB | Returns DOB in format ~ 1st January 1970
3 | TimeStampDOB | Returns Timestamp for DOB
4 | VacationSeconds | Returns Seconds of vaction time
5 | VacationBusinessHours | Returns number of business hours of vacation
Which seems neater, but I'm not sure how I'd formulate the query, presumably using dynamic SQL? Is there a sensible alternative?
The functions will be used on a few thousand rows.
The closest answer I've found was in this thread:
Call dynamic function name in SQL
I'm not a huge fan of dynamic SQL, although in this case I think it might be the best way to get the result I'm after?
Any replies appreciated,
Thanks,
Chris
I would go for the second solution. You could even use real stored proc names in your lookup table.
create proc ShortName (
    @param varchar(50)
) as
begin
    select 'ShortName: ' + @param
end
go
declare @proc sysname = 'ShortName'
exec @proc 'David'
As you can see in the example above, the first parameter of exec (i.e. the procedure name) can be a parameter. This said with all the usual warnings regarding dynamic sql...
In the end, you should go with whichever is faster, so you should try both ways (and any other way someone might come up with) and decide after that.
I like the first option better, as long as your functions don't have extra selects against a table. You may not even need the user-defined functions if they are not going to be reused in a different report.
I prefer to use dynamic SQL only to improve a query's performance, such as adding dynamic ordering or adding/removing complex WHERE conditions.
But these are all subjective opinions, the best thing is try, compare, and decide.
Actually, this isn't a question of what's faster. It is a question of what makes the code cleaner, particularly for adding new functionality (new columns, new column formats, re-ordering them).
Don't think of your second approach as "using dynamic SQL", since that tends to have negative connotations. Instead, think of it as a data-driven approach. You want to build a table that describes the columns that users can get, and the formats. This is great! Users can then provide a list of columns, and you'll have a magical stored procedure that combines the information from the users with the information in your metadata table, and produces the desired result.
I'm a big fan of data-driven approaches, and dynamic SQL is the best SQL tool I've found so far for implementing them.
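As a very rough sketch of that data-driven build, assuming the lookup table above is extended with an Expression column holding the SQL fragment for each output column (every name here is illustrative; STRING_AGG needs SQL Server 2017+):
DECLARE @cols nvarchar(max);
SELECT @cols = STRING_AGG(Expression + ' AS [' + Name + ']', ', ')
FROM ReportColumns
WHERE ID IN (1, 2, 5);  -- the columns this user asked for
DECLARE @sql nvarchar(max) = 'SELECT ' + @cols + ' FROM Staff';
EXEC sp_executesql @sql;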

Adding columns dynamically to a View or return from Stored Procedure

I've found a lot of bits and pieces of this, but I can't put them together. This is basically the idea of the table, where Name is a varchar, Date is a datetime, and Number is an int:
Name | Date   | Number
A    | 1-2-11 | 15
B    | 1-2-11 | 8
A    | 1-1-11 | 5
I'd like to create a view that looks like this
Name | 1-2-11 | 1-1-11
A    | 15     | 5
B    | 8      |
At first I was using a temp table and appending each date row to it. I read on another forum that this approach is a major resource hog. Is that true? Is there a better way to do this?
I would combine dynamic SQL with a pivot as I mentioned in this answer.
You want to look into "cross-tab" or "pivot" statements. In SQL Server 2005 and up it's PIVOT, but the syntax varies between platforms.
This is a very complex subject, particularly since you want to add columns to a view as your data grows over time. Besides your platform's documentation, check out the myriad other SO posts on the subject.
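A minimal PIVOT sketch for the sample above, assuming SQL Server and a table named MyTable, and reading the dates as 2 Jan 2011 and 1 Jan 2011 (in practice the IN list would usually be built with dynamic SQL as new dates arrive):
SELECT Name, [2011-01-02], [2011-01-01]
FROM (
    SELECT Name, CONVERT(char(10), [Date], 120) AS DateKey, Number
    FROM MyTable
) AS src
PIVOT (SUM(Number) FOR DateKey IN ([2011-01-02], [2011-01-01])) AS p;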
If the date column is a known set then you can use pivot in some cases.
It is often faster to use dynamic sql BUT this can be very dangerous so be wary.
To really know what the best solution is for your problem we would need some more information -- how much data -- how much variation is expected in the different columns, etc.
However, it is true, both PIVOT and Dynamic SQL will be faster than a temp table.
I would do it with Access or Excel instead of T-SQL.

MySQL: select the closest match?

I want to show the closest related item for a product. So say I am showing a product and the style number is SG-sfs35s. Is there a way to select whatever product's style number is closest to that?
Thanks.
EDIT: To answer your questions: I definitely want to keep the first 2 letters, as that is the manufacturer code, but as for the part after the first dash, just whatever matches closest. So, for example, SG-sfs35s would match SG-shs35s much more closely than SG-sht64s. I hope this makes sense. Whenever I do LIKE product_style_number it only pulls the exact match.
There normally isn't a simple way to match product codes that are roughly similar.
A more SQL friendly solution is to create a new table that maps each product to all the products it is similar to.
This table would either need to be maintained manually, or a more sophisticated script can be executed periodically to update it.
If your product codes follow a consistent pattern (all the letters are the same for similar products, with only the numbers changing), then you should be able to use a regular expression to match the similar items. There are docs on this here...
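For example (a sketch only; the table and column names are assumptions), MySQL's REGEXP could keep the manufacturer prefix and the digits fixed while letting the middle letters vary:
SELECT product_style_number
FROM products
WHERE product_style_number REGEXP '^SG-[a-z]{3}35s$';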
It sounds like what you want is Levenshtein distance.
Unfortunately, there isn't a built-in Levenshtein function for MySQL, but some folks have come up with a user-defined function that does it (dead link).
You will probably want to do it as a stored procedure, as I expect that the algorithm may not be trivial.
For example, you may split the term at the -, so you have two parts. You do a LIKE query on each part and use that to make a decision.
You could just loop through, replacing the last character with "%" until you get at least one result, in your stored procedure.
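If such a Levenshtein UDF is installed, a usage sketch might look like this (the products table, the product_style_number column and the levenshtein() function name are all assumptions):
SELECT product_style_number
FROM products
WHERE product_style_number LIKE 'SG-%'  -- keep the manufacturer prefix fixed
ORDER BY levenshtein('SG-sfs35s', product_style_number)
LIMIT 1;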
Sounds like you need something like Lucene, though I'm not sure if that would be overkill for your situation. But it certainly would be able to do text searches and return the most similar results first.
If you need something more simple I would try to start by searching with the full product code, then if that doesn't work try to use wildcards/remove some characters until you return a result.
JD Isaacks.
This situation of yours is very simple to solve.
It's not like you need to use artificial intelligence like Google.
http://www.w3schools.com/sql/sql_wildcards.asp
Take a look at this manual at w3schools about wildcards to use with your SELECT code.
You will also need to create a new table with 3 columns: LeftCode, RightCode and WildCard.
Example:
Rows on Table:
LeftCode = SG | RightCode = 35s | WildCard = SG-s_s35s
LeftCode = SG | RightCode = 64s | WildCard = SG-s_t64s
SQL Code
If the user typed a code that matches row 1 of the table:
SELECT * FROM PRODUCTS WHERE CODE LIKE "$WildCard";
Where $WildCard is the PHP variable containing the column 3 of the new table.
I hope I helped, even 4 years late...