Create a Global Persistent List of Strings as Variable - sql

I'm using SQL Server 2008 R2, in MS SQL Server Management Studio. I've only ever done SELECTs and all the standard stuff, but I find myself frequently using the same lists of strings in different queries, and I'd like to be able to build a variable that holds them. I don't have the access rights to create a new table, otherwise I would just do that. Is this even possible?
Let's say I have a bunch of client numbers that I want to use to include only their client account data with a query, example:
SELECT * FROM SALES
WHERE CLIENTNUMBER IN ('123','456','789')
Is there a way to create a variable that will hold those 3 values, so that I can instead just say
SELECT * FROM SALES
WHERE CLIENTNUMBER IN #CLOTHING_CLIENTS
The list is longer than 3 client numbers, of course, and there are different categories, etc. I think it would be MUCH simpler to do as a separate table, but of course I don't have the ability to create new tables. I could do JOINs and the like too, but that's even more work than just putting in the client numbers each time.
I'm trying to simplify things and make it more readable for other people, not make it more efficient for the database or more "correct".

There are a couple of ways you can do this, involving temp tables or table variables. Try something like this:
declare @CLOTHING_CLIENTS table (ClientNumber varchar(20) not null);
-- Your list of values goes here
insert into @CLOTHING_CLIENTS (ClientNumber)
values ('123')
,('456')
,('789');
select * from Sales
where ClientNumber in (select ClientNumber from @CLOTHING_CLIENTS);
The CLOTHING_CLIENTS table variable can be used again anywhere in the same batch in which it was declared. This post does a good job of explaining the scope of table variables.

Good news: there are Global things in T-SQL!
An extension of Jeff Lewis's answer that makes things a bit more 'global' is to use the ## type of table.
Assuming you are allowed to create them, a ##Table is a temporary table that can be accessed by other connections and even from other databases. Just make sure you're on the same server.
So you can do this:
CREATE TABLE ##MyValues(A INT)
INSERT INTO ##MyValues(A) VALUES (1)
Once done you can go anywhere and do
SELECT * FROM ##MyValues
Now all you need to do is update your snippets to do things like
SELECT * FROM SALES AS S
INNER JOIN ##MyClientIDTable AS MCIT ON MCIT.CLIENTNUMBER = S.CLIENTNUMBER
Just make sure your ##MyClientIDTable has a CLIENTNUMBER column and the correct data in it.
Hope this helps a bit.


Creating a Table Using Previous Values (Iterative Process)

I'm completely new to Visual FoxPro (9.0) and I was having trouble creating a table which uses previous values to generate new values. What I mean by this is: I have a given table with two columns, age and probability of death. From this I need to create a survival table with the columns Age, l(x), d(x), q(x), m(x), L(x), T(x), and e(x), where:
l(x): Survivorship Function; Defined as l(x+1) = l(x) * EXP(-m(x))
d(x): Number of Deaths; Defined as l(x) - l(x+1)
q(x): Probability of Death; This is given to me already
m(x): Mortality Rate; Defined as -LN(1-q(x))
L(x): Total Person-Years of Cohorts in the Interval (x, x+1); Defined as l(x+1) + (0.5 * d(x))
T(x): Total Person-Years of all Cohorts in the Interval (x, N); Defined as SUM(L(x)) [From x, N]
e(x): Remaining Life Expectancy; Defined as T(x) / l(x)
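For reference, the recurrences above can be written as a plain loop before translating them into any particular tool. A minimal Python sketch (the q(x) values and the starting cohort size l(0) = 100000 are made-up assumptions, used only to show the order in which the columns must be computed):

```python
import math

# Made-up input: probability of death q(x) for ages 0..4 (q(x) is given data)
qx = [0.010, 0.002, 0.002, 0.003, 0.004]

lx = [100000.0]                               # assumed radix l(0)
mx = [-math.log(1 - q) for q in qx]           # m(x) = -LN(1 - q(x))
for m in mx:
    lx.append(lx[-1] * math.exp(-m))          # l(x+1) = l(x) * EXP(-m(x))

n = len(qx)
dx = [lx[i] - lx[i + 1] for i in range(n)]          # d(x) = l(x) - l(x+1)
Lx = [lx[i + 1] + 0.5 * dx[i] for i in range(n)]    # L(x) = l(x+1) + 0.5*d(x)
Tx = [sum(Lx[i:]) for i in range(n)]                # T(x) = SUM(L(x)) from x to N
ex = [Tx[i] / lx[i] for i in range(n)]              # e(x) = T(x) / l(x)
```

Note that because m(x) = -LN(1 - q(x)), the survivorship recurrence reduces to l(x+1) = l(x) * (1 - q(x)), which is a handy sanity check on the output.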
Now I'm not asking how to get all of these values, I just need help getting started and maybe pointed in the right direction. As far as I can tell, in VFP there is no way to point to a specific row in a data-table, so I can't do what I normally do in R and just make a loop. I.E. I can't do something like:
for (i in 1:length(given_table$Age))
{
new_table$mort_rate[i] <- -LN(1-given_table$death_prop[i])
}
It's been a little while so I'm not sure that's 100% correct anyway, but my point is I'm used to being able to create a table, and alter the values individually by pointing to a specific row and/or column using a loop with a simple counter variable. However, from what I've read there doesn't seem to be a way to do this in VFP, and I'm completely lost.
I've tried making a Cursor, populating it with dummy values and trying to update each value sequentially using a SCATTER NAME and SCAN/REPLACE approach, but I don't really understand what's happening or how to fine-tune each calculation/entry that I need. (This is the post I was referencing when I tried this: Multiply and subtract values in previous row for new row in FoxPro.)
So, how do I go about making a table that relies on iterative process to calculate subsequent values in Visual FoxPro? Are there any good resources that explain Cursors and the Scatter/Scan thing that I was trying (I couldn't find any resources that explained it in terms I could understand)?
Sorry if I've worded things poorly, I'm fairly new to programming in general. Thank you.
You absolutely can write a loop through an existing table in VFP. Use the SCAN command. However, if you're planning to add records to the same table as you go, you're going to run into some issues. Is that what you meant here? If so, my suggestion is to put the new records into a cursor as you create them and then APPEND them to the original table after you've processed all the records that were there when you started.
If you're putting records into a different table as you loop through the original, this is straightforward:
* Assumes you've already created the table or cursor to hold the result
SELECT YourOriginalTable && substitute in the alias/name for the original table
SCAN
* Do your calculations
* Substitute appropriately for YourNewTable and the two lists
INSERT INTO YourNewTable (<list of fields>) VALUES (<list of values>)
ENDSCAN
In the INSERT command, if you refer to any fields of the original table, you need to alias them, like this: YourOriginalTable.YourField, again substituting appropriately.
A bit late, but maybe it still helps.
The steps to achieve what you want are:
1. Close the tables, just in case (see CLOSE DATABASES).
2. Open the Age table (see USE in the VFP help).
3. Create the Survival table structure (see CREATE TABLE). For this you need to know the field type for each of your l(x), d(x), etc. values. Let's say you named the fields after your functions (i.e. lx, dx, etc.).
4. Select the Age table (see SELECT).
5. Loop through the Age table (see SCAN).
6. Pass each record into variables (see SCATTER).
7. Make your calculations starting from the Age table data (the variables), using the l(x), d(x), etc. formulas, and store the results in variables named after your Survival table fields, e.g. m.mx = -LOG(1 - m.qx) && see LOG; qx here stands for your probability-of-death field
Note: in these calculations you can use any mix of the Age table variables and the newly created variables.
8. After you have calculated all the fields for Survival, write them into the table (see APPEND BLANK and GATHER).
9. Close the tables (see CLOSE DATABASES).

Tableau - carry out join on IP Numbers SSMS

I am trying to put two tables into Tableau. One has IP addresses in dot format, i.e. 192.168.32.1, and the other has IP numbers corresponding to cities and postcodes etc. that I want to make available for visualisation.
The idea is to carry out the steps here (http://kb.tableau.com/articles/howto/mapping-ip-address-geocode-data) to do a join on the two tables, where the join converts the IP address in one table into a number that can then be compared to the number in the other table.
However, when I followed the steps in the guide, it ran for 40 minutes and then crashed.
Can anyone shed any light on this?
My tables are in Microsoft SQL Server Management Studio - I have also looked into using computed columns to do the same thing, but with no luck so far (I am very new to SQL and cannot work out how to save and then apply a function, as suggested here: https://www.stev.org/post/mssqlconvertiptobigint).
Preface: I'd suggest running the query below to see if it converts correctly and quickly (as a rule of thumb, aim for under 30 seconds) and go from there. That will tell you whether you're better off investing more time in SQL or in Tableau.
There are many approaches one could take; this is just my suggestion. You could consider writing a query that creates another table with the data already formatted. A stored procedure set to run in a job (or just a job) that refreshes the table every few minutes (or nightly, whatever you think is appropriate) would give you the base data in SQL. Then you could use Tableau to do the joins.
select [IP Address],
--add as many columns as you want from the base table to take the place of one of the tables you join to
[CodeForIPAddressIntegerFromYourHelpSite] as 'IPINT'
--converts IP Address to Integer - use the code from your help site
Into [IPIntegerConversion]
--this will create a permanent table automatically
from YourTableWithIPAddress
This method would get you a table that has both the IP Address and the IP Integer, letting you link between the two (you should be able to paste the code from their site over [CodeForIPAddressIntegerFromYourHelpSite]). Then you could set this up to run automatically in SQL Agent (which is actually very easy). If the query itself isn't expensive, you can paste it straight into the job; if you store the data already computed, the join will be more efficient.
I think this should get you close:
PARSENAME is a SQL Server function intended for splitting four-part object names, but it works just as well on dotted IP addresses. I am not an expert in IP stuff and got the basics from about 5 minutes of Google searching. You may have to reverse the order of the octets, but this is the basic query structure and it should be pretty fast.
select ip.ip
,cast(parsename(ip.ip,4) as bigint)*16777216 -- 2^24
+cast(parsename(ip.ip,3) as bigint)*65536 -- 2^16
+cast(parsename(ip.ip,2) as bigint)*256 -- 2^8
+cast(parsename(ip.ip,1) as bigint) ip4
,ipv4.*
from tableWithYourIPs ip
left join ipv4 on cast(parsename(ip.ip,4) as bigint)*16777216
+cast(parsename(ip.ip,3) as bigint)*65536
+cast(parsename(ip.ip,2) as bigint)*256
+cast(parsename(ip.ip,1) as bigint) between ipv4.start and ipv4.end
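As a sanity check on the arithmetic, here is the same conversion written out in Python (the example address is arbitrary):

```python
def ip_to_int(ip: str) -> int:
    """Dotted-quad IPv4 -> integer: octets weighted by 2^24, 2^16, 2^8, 2^0."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return a * 16777216 + b * 65536 + c * 256 + d

print(ip_to_int("192.168.32.1"))
```

Note that the result exceeds the 32-bit signed integer range as soon as the first octet is 128 or higher, which is why the SQL side should do the arithmetic in BIGINT.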
Make sure you apply the indexes the site recommends:
CREATE INDEX [ip_from] ON [ip2location].[dbo].[ip2location_db9]([ip_from]) ON [PRIMARY]
GO
CREATE INDEX [ip_to] ON [ip2location].[dbo].[ip2location_db9]([ip_to]) ON [PRIMARY]
GO
The conversion follows this logic:
http://blogs.lessthandot.com/index.php/datamgmt/datadesign/how-to-convert-ip-addresses-between-bigi/

SQL selecting results from multiple tables

Hello, I want to display results from unrelated tables where a text string exists in a column that is common to all tables in the database.
I can get the desired result with this:
SELECT *
FROM Table1
WHERE Title LIKE '%Text%'
UNION
SELECT *
FROM Table2
WHERE Title LIKE '%Text%'
However, my question is: is there a more efficient way to go about this, as I need to search dozens of tables? Thanks for any help you can give!
PS: the system I am using supports most dialects, but I would prefer to keep it simple with SQL Server, as that is what I am used to.
There is a SP script you can find online called SearchAllTables (http://vyaskn.tripod.com/search_all_columns_in_all_tables.htm).
When you call it, pass in the string; it will return the tables and columns containing it, as well as the full string.
You can modify it to work with other datatypes quite easily. It's a fantastic resource for tasks exactly like yours.

Building Query from Multi-Selection Criteria

I am wondering how others would handle a scenario like such:
Say I have multiple choices for a user to choose from.
Like, Color, Size, Make, Model, etc.
What is the best solution or practice for handling the build of your query for this scenario?
So what if they select 6 of the 8 possible colors, 4 of the 7 possible makes, and 8 of the 12 possible brands?
You could do dynamic OR statements or dynamic IN Statements, but I am trying to figure out if there is a better solution for handling this "WHERE" criteria type logic?
EDIT:
I am getting some really good feedback (thanks everyone)... one other thing to note is that some of the selections could be large, e.g. 40 selected out of a possible 46. Thanks again!
Thanks,
S
What I would suggest is creating a function that takes in a delimited list of makeIds, colorIds, etc. (probably ints, or whatever your key is) and splits them into a table for you.
Your SP will take in a list of makes, colors, etc as you've said above.
YourSP '1,4,7,11', '1,6,7', '6'....
Inside your SP you'll call your splitting function, which will return a table-
SELECT * FROM
Cars C
JOIN YourFunction(#models) YF ON YF.Id = C.ModelId
JOIN YourFunction(#colors) YF2 ON YF2.Id = C.ColorId
Then, if they select nothing they get nothing. If they select everything, they'll get everything.
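The splitting idea is language-agnostic. Here is a minimal Python sketch of "split the delimited lists, then keep only the rows whose keys appear in every list" (all ids and rows are invented for illustration):

```python
def split_ids(delimited: str) -> set[int]:
    """Split a comma-delimited id list ('1,4,7') into a set of ints."""
    return {int(part) for part in delimited.split(",") if part.strip()}

# Hypothetical rows: (car_id, model_id, color_id)
cars = [(1, 1, 1), (2, 4, 6), (3, 9, 7)]

models = split_ids("1,4,7,11")   # stands in for the '@models' parameter
colors = split_ids("1,6,7")      # stands in for the '@colors' parameter

# Equivalent of the two JOINs: keep cars matching both lists
matches = [car for car in cars if car[1] in models and car[2] in colors]
```

An empty list matches nothing, mirroring the "if they select nothing they get nothing" behavior of the joins.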
What is the best solution or practice for handling the build of your query for this scenario?
Dynamic SQL.
A single optional parameter represents two states: NULL/non-existent, or having a value. Each additional parameter doubles the number of possibilities, so n parameters give 2^n combinations: 2 yield 4, 3 yield 8, and so on. A single, non-dynamic query can contain all the possibilities but will perform horribly because of:
ORs
overall non-sargability
and inability to reuse the query plan
...when compared to a dynamic SQL query that constructs the query out of only the absolutely necessary parts.
The query plan is cached in SQL Server 2005+, if you use the sp_executesql command - it is not if you only use EXEC.
I highly recommend reading The Curse and Blessing of Dynamic SQL.
For something this complex, you may want a session table that you update when the user selects their criteria. Then you can join the session table to your items table.
This solution may not scale well to thousands of users, so be careful.
If you create dynamic SQL, it won't matter whether you use the OR approach or the IN approach; SQL Server will process the statements the same way (maybe with slight variation in some situations).
You may also consider using temp tables for this scenario. You can insert the selections for each criteria into temp tables (e.g., #tmpColor, #tmpSize, #tmpMake, etc.). Then you can create a non-dynamic SELECT statement. Something like the following may work:
SELECT <column list>
FROM MyTable
WHERE MyTable.ColorID in (SELECT ColorID FROM #tmpColor)
OR MyTable.SizeID in (SELECT SizeID FROM #tmpSize)
OR MyTable.MakeID in (SELECT MakeID FROM #tmpMake)
The dynamic OR/IN and the temp table solutions work fine if each condition is independent of the other conditions. In other words, if you need to select rows where ((Color is Red and Size is Medium) or (Color is Green and Size is Large)) you'll need to try other solutions.
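To make that caveat concrete, here is a small Python sketch (with invented rows) of the difference between OR-ing independent sets and matching dependent pairs:

```python
# Invented rows: (color, size)
items = [("Red", "Medium"), ("Green", "Large"), ("Red", "Large")]

# Independent conditions: a row matches if ANY selected set contains its value
colors, sizes = {"Red"}, {"Large"}
independent = [i for i in items if i[0] in colors or i[1] in sizes]

# Dependent conditions: only specific (color, size) combinations qualify
pairs = {("Red", "Medium"), ("Green", "Large")}
dependent = [i for i in items if i in pairs]
```

The independent filter accepts ("Red", "Large") even though that exact combination was never selected, which is exactly the case the OR/IN and temp-table solutions cannot express.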

SQL to search and replace in mySQL

I'm in the process of fixing a poorly imported database with issues caused by using the wrong database encoding, or something like that.
Anyway, coming back to my question: in order to fix these issues I'm using a query of this form:
UPDATE table_name SET field_name =
replace(field_name, 'search_text', 'replace_text');
And thus, if the table I'm working on has multiple columns, I have to call this query for each of the columns. And since there is not just one pair of things to find and replace, I have to call the query for each of these pairs as well.
So as you can imagine, I end up running tens of queries just to fix one table.
What I was wondering is whether there is a way to combine multiple find-and-replace operations into one query: say, look for this set of things and, if found, replace each with its counterpart from another set.
Or whether there is a way to make a query of the form shown above run, somehow recursively, for each column of a table, regardless of the columns' names or number.
Thank you in advance for your support,
titel
Let's try and tackle each of these separately:
If the set of replacements is the same for every column in every table that you need to do this on (or there are only a couple of patterns), consider creating a user-defined function that takes a varchar and returns a varchar, and simply calls replace(replace(input, 'search1', 'replace1'), 'search2', 'replace2'), nested as appropriate.
To update multiple columns at the same time you should be able to do UPDATE table_name SET field_name1 = replace(field_name1,...), field_name2 = replace(field_name2,...) or something similar.
As for running something like that for every column in every table, I'd think it would be easiest to write some code which fetches a list of columns and generates the queries to execute from that.
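That generation step can be sketched in a few lines of Python (the table and column names here are hypothetical; in MySQL the real column list would come from information_schema.columns):

```python
# Hypothetical table/column names; in practice fetch the column list
# from information_schema.columns for the table being fixed.
table = "my_table"
columns = ["title", "body", "summary"]
replacements = [("foo", "bar"), ("baz", "qux")]   # (search, replace) pairs

queries = []
for col in columns:
    expr = col
    for search, replace in replacements:          # nest one replace() per pair
        expr = f"replace({expr}, '{search}', '{replace}')"
    queries.append(f"UPDATE {table} SET {col} = {expr};")
```

Each generated statement nests one replace() call per search/replace pair, so one UPDATE per column covers all the pairs at once.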
I don't know of a way to automatically run a search-and-replace on each column, however the problem of multiple pairs of search and replace terms in a single UPDATE query is easily solved by nesting calls to replace():
UPDATE table_name SET field_name =
replace(
replace(
replace(
field_name,
'foo',
'bar'
),
'see',
'what'
),
'I',
'mean?'
)
If you have multiple replacements of different text in the same field, I recommend you create a table with the current values and what you want them replaced with. (It could be a temp table of some kind if this is a one-time deal; if not, make it a permanent table.) Then join to that table and do the update.
Something like:
update t1
set field1 = t2.newvalue
from table1 t1
join mycrossreferencetable t2 on t1.field1 = t2.oldvalue
Sorry, I didn't notice this is MySQL; the code above is what I would use in SQL Server. MySQL's syntax is a bit different (the JOIN goes before the SET clause: UPDATE table1 t1 JOIN ... ON ... SET t1.field1 = ...), but the technique is the same.
I wrote a stored procedure that does this. I use this on a per database level, although it would be easy to abstract it to operate globally across a server.
I would just paste this inline, but it would seem that I'm too dense to figure out how to use the markdown deal, so the code is here:
http://www.anovasolutions.com/content/mysql-search-and-replace-stored-procedure