In an MS Access 2010 database, I have a massive table that I need to shrink in order to make it usable. I am only interested in a subset of the records, so I want to select all the data I care about and insert it into another table that has exactly the same fields. The problem is that the table has MANY fields, and it would be error-prone to list them all explicitly. Is there some way to simply select all fields and insert into all fields without listing each one explicitly? If so, how do I change the following code to accomplish this?
INSERT INTO massivetable_destination (*)
SELECT * FROM massivetable_source
WHERE State='MS';
I may be misunderstanding you, but if the tables are in the same Access database, it seems you could do the following steps and let the IDE do all of the heavy lifting for you.
Right click your massive table and select copy.
Right click in the object explorer area and select paste.
Optional - rename the copied table.
Run a delete query on the copied table, removing all records that you do not want. The delete query would look like the following:
DELETE *
FROM MyCopiedTable
WHERE State <> 'MS';
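The copy-and-prune steps above can be sketched end to end in code. This uses Python's built-in sqlite3 as a stand-in for the Access database (in Access itself you would do steps 1-3 through the UI, as described); the table names come from the question and the answer, while the columns and rows are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Stand-in for the massive source table (just a few columns for illustration).
cur.execute("CREATE TABLE massivetable_source (id INTEGER, State TEXT, Val TEXT)")
cur.executemany("INSERT INTO massivetable_source VALUES (?, ?, ?)",
                [(1, "MS", "a"), (2, "TX", "b"), (3, "MS", "c")])

# Steps 1-3: "copy" the table (structure and data) under a new name,
# without ever listing the columns explicitly.
cur.execute("CREATE TABLE MyCopiedTable AS SELECT * FROM massivetable_source")

# Step 4: run the delete query against the copy.
cur.execute("DELETE FROM MyCopiedTable WHERE State <> 'MS'")
con.commit()
```

The source table is left untouched; only the copy is pruned down to the records of interest.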
I use a SELECT statement to fill an internal table with a large amount of records. I am new to ABAP and OpenSQL. I know how cursors work and why I need them in this case, but I can't seem to find any good examples that show a correct implementation of them. This is the code I am working with:
TYPES: BEGIN OF lty_it_ids,
         iteration_id TYPE dat_itr_id,
       END OF lty_it_ids.

DATA: lt_it_ids            TYPE STANDARD TABLE OF lty_it_ids,
      lt_records_to_delete TYPE STANDARD TABLE OF tab_01p.

SELECT 01r~iteration_id
  INTO TABLE lt_it_ids
  FROM tab_01r AS 01r INNER JOIN tab_01a AS 01a
    ON 01r~iteration_id = 01a~iteration_id
  WHERE 01a~collection_id = i_collection_id.

IF lt_it_ids IS NOT INITIAL.
  SELECT * FROM tab_01p
    INTO CORRESPONDING FIELDS OF TABLE lt_records_to_delete
    FOR ALL ENTRIES IN lt_it_ids
    WHERE iteration_id = lt_it_ids-iteration_id
      AND collection_id = i_collection_id.

  IF lt_records_to_delete IS NOT INITIAL.
    DELETE tab_01p FROM TABLE lt_records_to_delete.
  ENDIF.
ENDIF.
In the first SELECT statement I fill a small internal table with values that correspond to the index of a larger table. With these indexes I can search the larger table faster to find all the entries I want to DELETE. It is the second SELECT statement that fills a large internal table (a few million rows). All the records in this internal table (lt_records_to_delete) should then be deleted from the database table.
In what way can I introduce a cursor to this code so it selects and deletes records in smaller batches?
There is a good example in the documentation. I am not entirely sure why you need to read the entries before deleting them, but there might be a good reason that you neglected to mention (for example, logging the values). For the process you are implementing, be aware of the following warning in the documentation:
If write accesses are made on a database table for which a database cursor is open, the results set is database-specific and undefined. Avoid this kind of parallel access if possible.
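In ABAP the batching itself would be done with OPEN CURSOR ... FETCH NEXT CURSOR ... PACKAGE SIZE. The sketch below shows the same select-keys-then-delete-in-packages idea in Python against SQLite, since that is easy to run; the table name mirrors the question, the data is invented, and BATCH_SIZE plays the role of PACKAGE SIZE. Fetching all of the doomed row keys first, then deleting in chunks, also sidesteps writing to the table while a cursor is still open on it (the warning quoted above).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab_01p (iteration_id INTEGER, collection_id INTEGER, payload TEXT)")
con.executemany("INSERT INTO tab_01p VALUES (?, ?, ?)",
                [(i % 5, 1, "x") for i in range(1000)])

iteration_ids = [1, 2, 3]   # stands in for lt_it_ids
BATCH_SIZE = 100            # plays the role of PACKAGE SIZE

# Fetch only the keys of the rows to delete, not the full records.
placeholders = ",".join("?" * len(iteration_ids))
rowids = con.execute(
    f"SELECT rowid FROM tab_01p"
    f" WHERE iteration_id IN ({placeholders}) AND collection_id = ?",
    iteration_ids + [1]).fetchall()

# Delete in small packages, committing per batch to keep transactions short.
for start in range(0, len(rowids), BATCH_SIZE):
    con.executemany("DELETE FROM tab_01p WHERE rowid = ?",
                    rowids[start:start + BATCH_SIZE])
    con.commit()
```

If you do need to inspect or log the records before deleting them, the place to do it is inside the per-batch loop, so only one package is held in memory at a time.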
It's annoying to preview all columns all the time (especially when a table has a lot of them), and even worse to have to recreate column filters after every restart of SQL Developer.
I can't see any option there to save them.
Does anyone have a workaround for this?
My version is 4.1.0.17.
List the required columns in the SELECT query. Compose the final query with only the columns you want in the select list. Save the query locally as a .sql file with a proper name of your choice. From next time, open this file in the SQL Worksheet. This is applicable if you use the same query quite often.
Alternatively, you could create a new table from the existing table, specifying first the columns you want displayed and keeping the remaining columns towards the end:
CREATE TABLE table_new AS
SELECT col1, col2, ...
FROM table_old;

DROP TABLE table_old;

RENAME table_new TO table_old;
I have a table with 5 fields that are all the same. They each can hold a reference to a row from another table with relationships. I want to update all of these fields at the same time on a row, but with a randomly selected row from the table for each field (with no duplicates). I am not sure how in access SQL you can update a lookup/relationship field like this. Any advice is greatly appreciated.
Simple answer is that you can't, not as it appears you would like to anyway. The closest thing possible would be to create an Insert query with parameters, and then feed in your 5 values using VBA. Since you will have to use VBA anyway, you may as well go the whole hog and conduct the entire process with Recordsets.
But that's not the fiddly part, (relatively speaking) selecting your source values is. What you will need to do is open a Recordset on your source table, and hook it up to your random-no-duplicates logic in order to select your 5 record references, then you open up a Recordset on your destination table, and drop them in the appropriate fields.
This tutorial will get you started on Recordsets: http://www.utteraccess.com/wiki/index.php/Recordsets_for_Beginners
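A rough sketch of that Recordset logic, using Python with SQLite rather than VBA/DAO (the table and column names here are invented for illustration): random.sample provides the random-no-duplicates selection, and a single UPDATE drops the five references into one destination row.

```python
import random
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical source table of referenceable rows, and a destination row
# with five reference fields (stand-ins for the lookup/relationship fields).
con.execute("CREATE TABLE source (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO source VALUES (?)", [(i,) for i in range(1, 21)])
con.execute("""CREATE TABLE dest (
    id INTEGER PRIMARY KEY,
    ref1 INTEGER, ref2 INTEGER, ref3 INTEGER, ref4 INTEGER, ref5 INTEGER)""")
con.execute("INSERT INTO dest (id) VALUES (1)")

# The "random-no-duplicates" step: pick 5 distinct ids from the source table.
ids = [r[0] for r in con.execute("SELECT id FROM source")]
picks = random.sample(ids, 5)   # sample() never repeats a value

# Drop them into the five reference fields of one destination row.
con.execute("UPDATE dest SET ref1=?, ref2=?, ref3=?, ref4=?, ref5=? WHERE id=1",
            picks)
con.commit()
```

In Access the same two phases apply: one Recordset (or query) supplies the five distinct picks, and a second Recordset writes them into the destination row's fields.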
We receive a data feed from our customers, and we get roughly the same schema each time, though it can change on the customer end as they are using a 3rd-party application. When we receive the data files, we import the data into a staging database with a table for each data file (students, attendance, etc.).

We then want to compare that data to the data already in the database for that customer and see what has changed from the previous run (either a column has changed, or a whole row was possibly deleted). We then want to write the updated values or deleted rows to an audit table so we can go back later and see what data changed between imports. We don't want to update the data itself; we only want to record what's different between the two datasets. We will then delete all the data from the customer database and import the data exactly as-is from the new data files without changing it (this directive has been handed down and cannot change).

The big problem is that I need to do this dynamically, since I don't know exactly what schema I'm going to get from our customers: they can make customizations to their tables. I need to be able to dynamically determine what tables there are in the destination, and their structure, and then look at the source and compare the values to see what has changed in the data.
Additional info:
There are no ID columns on source, though there are several columns that can be used as a surrogate key that would make up a distinct row.
I'd like to be able to do this generically for each table without having to hard-code values in, though I might have to do that for the surrogate keys for each table in a separate reference table.
I can use either SSIS, SPs, triggers, etc., whichever would make more sense. I've looked at all, including tablediff, and none seem to have everything I need or the logic starts to get extremely complex once I get into them.
Of course any specific examples anyone has of something like this they have already done would be greatly appreciated.
Let me know if there's any other information that would be helpful.
Thanks
I've worked on a similar problem and used a series of metadata tables to dynamically compare datasets. These metadata tables described which datasets need to be staged and which combination of columns (and their data types) serve as the business key for each table.
This way you can dynamically construct a SQL query (e.g., with an SSIS script component) that performs a full outer join to find the differences between the two.
You can join your own metadata with SQL Server's metadata (using sys.* or INFORMATION_SCHEMA.*) to detect whether the columns still exist in the source and the data types are as you anticipated.
Redirect unmatched meta data to an error flow for evaluation.
This way of working is very risky, but can be done if you maintain your meta data well.
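A minimal sketch of the idea, using Python with SQLite (PRAGMA table_info standing in for SQL Server's INFORMATION_SCHEMA, with hypothetical staging/current student tables): our own metadata names the business key, the engine's metadata confirms those key columns exist, and the comparison query is built dynamically from what remains. Older SQLite versions lack FULL OUTER JOIN, so changed rows and deleted rows are found with a key join plus a left anti-join instead.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE staging_students (student_id TEXT, name TEXT, grade TEXT);
CREATE TABLE current_students (student_id TEXT, name TEXT, grade TEXT);
INSERT INTO staging_students VALUES ('s1','Ann','A'), ('s2','Bob','C');
INSERT INTO current_students VALUES ('s1','Ann','B'), ('s3','Cy','A');
""")

# Our own "metadata table": which columns form each table's business key.
business_keys = {"students": ["student_id"]}

# Join our metadata with the engine's metadata: do the key columns still exist?
actual_cols = [r[1] for r in con.execute("PRAGMA table_info(staging_students)")]
missing = [c for c in business_keys["students"] if c not in actual_cols]
assert not missing, f"business key columns missing from source: {missing}"

# Dynamically build the comparison from the metadata.
key = business_keys["students"][0]
non_key = [c for c in actual_cols if c not in business_keys["students"]]
diff = " OR ".join(f"s.{c} IS NOT c.{c}" for c in non_key)  # null-safe compare

# Changed rows: key exists on both sides, but some non-key column differs.
changed = con.execute(f"""
    SELECT s.{key} FROM staging_students AS s
    JOIN current_students AS c ON s.{key} = c.{key}
    WHERE {diff}
""").fetchall()

# Deleted rows: key exists in the current data but not in the new staging data.
deleted = con.execute(f"""
    SELECT c.{key} FROM current_students AS c
    LEFT JOIN staging_students AS s ON c.{key} = s.{key}
    WHERE s.{key} IS NULL
""").fetchall()
```

The changed and deleted row keys are exactly what would be written to the audit table before the wholesale reload.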
If you want to compare two tables to see what is different, the keyword is EXCEPT:
select col1,col2,... from table1
except
select col1,col2,... from table2
this gives you everything in table1 that is not in table2.
select col1,col2,... from table2
except
select col1,col2,... from table1
this gives you everything in table2 that is not in table1.
Assuming you have some kind of useful, durable primary key on the two tables: a key that appears in both difference sets is a change, a key that appears only in the first set is an insert, and a key that appears only in the second set is a delete.
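The two EXCEPT queries above can be run as-is (here via Python's sqlite3; the table contents are invented) to see how the difference sets classify rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER, val TEXT);
CREATE TABLE table2 (id INTEGER, val TEXT);
INSERT INTO table1 VALUES (1,'a'), (2,'b');
INSERT INTO table2 VALUES (1,'a'), (2,'x'), (3,'c');
""")

# everything in table1 that is not in table2
in1_not_in2 = con.execute(
    "SELECT id, val FROM table1 EXCEPT SELECT id, val FROM table2 ORDER BY id"
).fetchall()

# everything in table2 that is not in table1
in2_not_in1 = con.execute(
    "SELECT id, val FROM table2 EXCEPT SELECT id, val FROM table1 ORDER BY id"
).fetchall()

# id 1 is in neither set: unchanged. id 2 is in both difference sets: a change.
# id 3 is only in the second set: a delete (present before, absent now).
```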
I would like to know if there's a way to add a column to a SQL Server table after it's created, in a specific position.
Thanks.
You can do that in Management Studio. You can examine how this is accomplished by generating the SQL script BEFORE saving the change. Basically it's achieved by:
removing all foreign keys
creating a new table with the added column
copying all data from the old into the new table
dropping the old table
renaming the new table to the old name
recreating all the foreign keys
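The steps above can be sketched as a script (here against SQLite via Python; SQL Server would additionally need the foreign-key drop/recreate steps, and the table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, price REAL);
INSERT INTO orders VALUES (1, 9.99);
""")

# Goal: insert a part_no column BETWEEN id and price.
con.executescript("""
CREATE TABLE orders_new (id INTEGER, part_no TEXT, price REAL);
INSERT INTO orders_new (id, price) SELECT id, price FROM orders;
DROP TABLE orders;
ALTER TABLE orders_new RENAME TO orders;
""")
```

The existing data survives the round trip, and the new column sits in the desired position (with NULLs until it is populated).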
In addition to all the other responses, remember that you can reorder and rename columns in VIEWs. So, if you find it necessary to store the data in one format but present it in another, you can simply add the column on to the end of the table and create a single table view that reorders and renames the columns you want to show. In almost every circumstance, this view will behave exactly like the original table.
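A small sketch of that view trick (SQLite via Python; names invented): the new column is stored last in the table, but the view presents it second, under a friendlier name.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, price REAL, part_no TEXT);
INSERT INTO orders VALUES (1, 9.99, 'P-100');

CREATE VIEW orders_v AS
SELECT id, part_no AS PartNumber, price
FROM orders;
""")

cur = con.execute("SELECT * FROM orders_v")
cols = [d[0] for d in cur.description]   # column order as the view presents it
row = cur.fetchone()
```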
The safest way to do this is:
Create your new table with the correct column order
Copy the data from the old table.
Drop the Old Table.
The only safe way of doing that is creating a new table (with the column where you want it), migrating the data, dropping the original table, and renaming the new table to the original name.
This is what Management Studio does for you when you insert columns.
As others have pointed out, you can do this by creating a temp table, moving the data, dropping the original table, and then renaming the other table. This is a stupid thing to do, though. If your table is large, it could be very time-consuming, and users will be locked out during the process. This is something you NEVER want to do to any table in production.

There is absolutely no reason to ever care what order the columns are in a table, since you should not be relying on column order anyway (what if someone else did this same stupid thing?). No queries should use SELECT * or ordinal positions to get columns. If you are doing this now, the code is broken and needs to be fixed immediately, as the results will not always be as expected. For instance, if you do insert a column where you want it and someone else is using SELECT * for a report, suddenly the PartNumber is showing up in the spot that used to hold the Price.

By doing what you want to do, you may break much more than you fix by putting the column where you personally want it. Column order in tables should always be irrelevant. You should not be doing this every time you want columns to appear in a different order.
With SQL Server Management Studio you can open the table in design view and drag and drop the column wherever you want.
As Kane says, it's not possible in a direct way. You can see how Management Studio does it by adding a column in the design mode and checking out the change script.
If the column is not in the last position, the script basically drops the table and recreates it, with the new column in the desired position.
In databases, table columns don't have a meaningful order. Write a proper SELECT statement and create a view.
No.
Basically, SSMS behind the scenes will copy the table, constraints, etc., drop the old table, and rename the new one.
The reason is simple: columns are not meant to be ordered (nor are rows), so you're always meant to list which columns you want in a result set (SELECT * is a bit of a hack).