I have a spreadsheet with 12,290 unique reference numbers that I need to find any payment transactions against.
For a handful I'd just do this manually, but for a large number like this, what would be the best way?
Can I reference a Notepad file in a query in any way?
Thanks,
This is a bit long for a comment.
Although you could put the values into a giant IN statement, I would recommend that you load the data into a separate table. That table can have the reference number as its primary key. You can then use a query (a join or exists) to get matching values.
This also has the nice side effect that you have the reference numbers in the database, so they can be used for another purpose -- or archived so you can keep track of what your application is doing.
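A minimal sketch of that approach, with hypothetical names (ReferenceNumbers for the staging table, Payments.RefNo for the transactions):

-- Staging table for the 12,290 reference numbers; names are illustrative.
CREATE TABLE ReferenceNumbers (
    RefNo VARCHAR(50) NOT NULL PRIMARY KEY
);

-- Matching transactions via a join...
SELECT p.*
FROM Payments AS p
INNER JOIN ReferenceNumbers AS r ON r.RefNo = p.RefNo;

-- ...or equivalently via EXISTS.
SELECT p.*
FROM Payments AS p
WHERE EXISTS (SELECT 1 FROM ReferenceNumbers AS r WHERE r.RefNo = p.RefNo);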
Building on @Gordon Linoff's answer.
To upload data there are various methods.
As in the main comments, i.e. writing a formula in Excel to build the statements, copying that out, and running the script in Management Studio. (I would keep the data in a permanent table to avoid doing the same in the future, if the list is not going to be changed often.) I wouldn't recommend this for 12,000 records, but for fewer than 100 records it is OK.
Using the Import/Export Wizard in Management Studio.
Create a table to hold the data, open it in Management Studio in Edit mode, and copy-paste the data into the table.
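On the side question of referencing a text file directly: a hedged sketch using BULK INSERT, assuming the numbers are saved one per line at a made-up path, and that the staging table already exists:

-- Path, table name, and terminator are placeholders; adjust to match the file.
BULK INSERT ReferenceNumbers
FROM 'C:\data\refnumbers.txt'
WITH (ROWTERMINATOR = '\n');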
I am using SQL Server 2016 Management Studio, and I have encountered something I haven't seen before; I probably just need some explanation, no technical coding involved.
I was working on a query and needed to find some tables. Usually I check Object Explorer and right-click a table directly to select the first 1000 records. I was joining my claims table with another table for an amounts-paid column, and I couldn't locate the second table.
I can use SELECT * to get the columns of the table, but it is not visible in Object Explorer. I'm not sure if I am blocked from viewing it, but this is my first time seeing this.
I am showing where my claims table ends; right after it is the Communication table.
Just for an official answer (answers were given in OP comments):
The "table" you can't find is actually a View and can be found in the Views subfolder under the Database.
This may be a simple one (whether it is possible or not). I have two identical tables with the same set of records. I need to find the best way to ensure that when one of the memo fields is modified, the modification gets carried to the identical record in the corresponding table. In many cases the field may hold hundreds of words. The only way I know of to potentially do this is an update query, but I am wondering: if the text within is really large, will it break? Or is there another way? Thanks, A
Since you're working with Access 2010 you can use an After Update data macro to copy the changes back to the main table:
For more information see
Create a data macro
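For comparison, the update-query route mentioned in the question would look roughly like this in Access SQL (table, key, and field names are placeholders). Memo fields hold up to 65,535 characters entered through the UI (far more programmatically), so a few hundred words won't break it:

UPDATE TableB INNER JOIN TableA
    ON TableB.RecordID = TableA.RecordID
SET TableB.Notes = TableA.Notes;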
Here is the problem: I am currently working with a Microsoft Access database that the previous employee created by just adding all the data into one table (yes, all the data into one table). There are about 186 columns in that one table.
I am now responsible for dividing each category of data into its own table. Everything is going fine although progress is too slow. Is there perhaps an SQL command that will somehow divide each category of data into its proper table? As of now I am manually looking at the main table and carefully transferring groups of data into each respective table along with its proper IDs making sure data is not corrupted. Here is the layout I have so far:
Note: I am probably one of the very few at my campus with database experience.
I would approach this as a classic normalisation process. Your single, hugely wide table should contain all of the entities within your domain, so as long as you understand the domain you should be able to normalise the structure until you're happy with it.
To create your foreign-key lookups, run DISTINCT queries against the columns you're going to remove, and then add the key values back in.
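A hedged sketch of that step in Access SQL, using a made-up Department column as the example:

-- Build the lookup table from the distinct values.
SELECT DISTINCT Department INTO tblDepartment FROM tblMaster;

-- Add an AutoNumber key to tblDepartment in Design view, then write the new
-- key back into a temporary DepartmentID column on the wide table:
UPDATE tblMaster INNER JOIN tblDepartment
    ON tblMaster.Department = tblDepartment.Department
SET tblMaster.DepartmentID = tblDepartment.DepartmentID;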
It sounds like you know what you're doing already. Are you just looking for reassurance that you're on the right track? (Which it looks like you are.)
Good luck though, and enjoy it - it sounds like a good little piece of work.
I've got about 25 tables that I'd like to update with random data that's picked from a subset of data. I'd like the data to be picked at random but meaningful -- like changing all the first names in a database to new first names at random. So I don't want random garbage in the fields, I'd like to pull from a temp table that's populated ahead of time.
The only way I can think of to do this is with a loop and some dynamic sql.
insert pick-from names into temp table with id field
foreach table name in a list of tables:
    build dynamic sql that updates all first-name fields to a name picked at random from the temp table, based on rand() * max(id) from the temp table
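Roughly, that sketch in T-SQL (assuming SQL Server 2008 or later; the table list and name list are placeholders):

-- Names to pick from, keyed by a dense identity.
CREATE TABLE #Names (Id INT IDENTITY(1,1) PRIMARY KEY, FirstName VARCHAR(50));
INSERT INTO #Names (FirstName) VALUES ('Alice'), ('Bob'), ('Carol');

DECLARE @MaxId INT = (SELECT MAX(Id) FROM #Names);

-- Hypothetical list of tables that each have a FirstName column.
DECLARE @Tables TABLE (TableName SYSNAME);
INSERT INTO @Tables VALUES ('dbo.Customers'), ('dbo.Employees');

DECLARE @t SYSNAME, @sql NVARCHAR(MAX);
DECLARE c CURSOR FOR SELECT TableName FROM @Tables;
OPEN c;
FETCH NEXT FROM c INTO @t;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- NEWID() forces a fresh random pick per row; plain RAND() is evaluated
    -- only once per statement and would give every row the same name.
    SET @sql = N'UPDATE ' + @t + N' SET FirstName = '
             + N'(SELECT n.FirstName FROM #Names AS n '
             + N'WHERE n.Id = ABS(CHECKSUM(NEWID())) % '
             + CAST(@MaxId AS NVARCHAR(10)) + N' + 1)';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM c INTO @t;
END;
CLOSE c;
DEALLOCATE c;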
But anytime I think "loop" in SQL I figure I'm doing something wrong.
The database in question has a lot of denormalized tables in it, so that's why I think I'd need a loop (the first name fields are scattered across the database).
Is there a better way?
Red Gate have a product called SQL Data Generator that can generate fake names and other fake data for testing purposes. It's not free, but they have a trial so you can test it out, and it might be faster than trying to do it yourself.
(Disclaimer: I have never used this product, but I've been very happy with some of their other products.)
I wrote a stored procedure to do something like this a while back. It is not as good as the Red Gate product and only does names, but if you need something quick and dirty, you can download it from
http://www.joebooth-consulting.com/products/
The script name is GenRandNames.sql
Hope this helps
Breaking the 4th wall a bit by answering my own question.
I did try this as a sql script. What I learned is that SQL pretty much sucks at random. The script was slow and weird -- functions that referenced views that were only created for the script and couldn't be made in tempdb.
So I made a console app.
Generate your random data; easy to do with the Random class (just remember to use only one instance of Random).
Figure out which columns and table names you'd like to update via a script that looks at information_schema (see the query sketch after this list).
Get the IDs for all the tables that you're going to update, if possible (and wow will it be slow if you have a large table that doesn't have any good PKs).
Update each table 100 rows at a time. Why 100? No idea. Could be 1000. I just picked a number. Dictionary is handy here: pick a random ID from the dict using the Random class.
Wash, rinse, repeat. I updated about 2.2 million rows in an hour this way. Maybe it could be faster, but it was doing many small updates so it didn't get in anyone's way.
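The information_schema step mentioned above might look like this; the column-name pattern is just a guess for illustration:

-- Find every table with a first-name-like string column.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%first%name%'
  AND DATA_TYPE IN ('varchar', 'nvarchar', 'char', 'nchar');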
I'm working with a SQL Server 2000 database that likely has a few dozen tables that are no longer accessed. I'd like to clear out the data that we no longer need to be maintaining, but I'm not sure how to identify which tables to remove.
The database is shared by several different applications, so I can't be 100% confident that reviewing these will give me a complete list of the objects that are used.
What I'd like to do, if it's possible, is to get a list of tables that haven't been accessed at all for some period of time. No reads, no writes. How should I approach this?
MSSQL2000 won't give you that kind of information. But a way you can identify which tables ARE used (and then deduce which ones are not) is to use SQL Profiler to capture all the queries that go to the database. Configure Profiler to record the results to a new table, then check the queries saved there to find all the tables (and views, stored procedures, etc.) that your applications use.
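Once the trace has run for a while, a quick check on any suspect table could look like this (the trace table name is a placeholder):

-- Counts captured statements mentioning the table; zero hits after a long
-- trace is a hint, not proof, that the table is unused.
SELECT COUNT(*) AS Hits
FROM dbo.ProfilerTrace
WHERE TextData LIKE '%SuspectTable%';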
Another way you might check for writes is to add a new last-write datetime column to every table, plus a trigger that updates that column every time there's an insert or an update. But keep in mind that if your apps do queries of the type
select * from ...
then they will receive a new column and that might cause you some problems.
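A hedged sketch of that trigger for one table (table, key, and column names are made up; each table would need its own copy):

-- Add the audit column, then maintain it from an insert/update trigger.
ALTER TABLE dbo.Orders ADD LastWrite DATETIME NULL;
GO
CREATE TRIGGER trg_Orders_LastWrite ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
    SET LastWrite = GETDATE()
    FROM dbo.Orders AS o
        INNER JOIN inserted AS i ON i.OrderID = o.OrderID;
END;

The trigger's own UPDATE won't re-fire it unless the database has RECURSIVE_TRIGGERS enabled, but it does double the write work on every modification.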
Another suggestion for tracking tables that have been written to is to use Red Gate SQL Log Rescue (free). This tool dives into the log of the database and will show you all inserts, updates and deletes. The list is fully searchable, too.
It doesn't meet your criteria for researching reads into the database, but I think the SQL Profiler technique will get you a fair idea as far as that goes.
If you have lastupdate columns you can check for the writes; there is really no easy way to check for reads. You could run Profiler, save the trace to a table, and check in there.
What I usually do is rename the table by prefixing it with an underscore; when people start to scream, I just rename it back.
If by "not used" you mean your application has no more references to the tables in question and you are using dynamic SQL, you could do a search for the table names in your app; if they don't appear, blow the tables away.
I've also output all sprocs, functions, etc. to a text file and done a search for the table names there. If a name is not found, or is found only in procedures that will need to be deleted too, blow it away.
It looks like using the Profiler is going to work. Once I've let it run for a while, I should have a good list of used tables. Anyone who doesn't use their tables every day can probably wait for them to be restored from backup. Thanks, folks.
Probably too late to help mogrify, but for anybody doing a search: I would search for all objects using the table in my code, then in SQL Server by running this:
SELECT DISTINCT '[' + OBJECT_NAME(id) + ']'
FROM syscomments
WHERE text LIKE '%MY_TABLE_NAME%'