Copy and Paste Rows into Same SQL Table with Different Values

I wrote an application for resident housing at a college. In one of the tables (rooms) I have a list of all the rooms and their current/max occupancy. Now, I've added a new column called "semester" and set all of the existing rows to have a semester value of "fall." Now I want to copy and paste all of these rows into the table but change the semester value to "spring." The result should be twice as many rows as I started with - half with fall in the semester value and half with spring. What's the best way to accomplish this?

INSERT INTO rooms
(roomname, current_occupancy, max_occupancy, semester)
SELECT roomname, current_occupancy, max_occupancy,'spring'
FROM rooms
WHERE [semester]='fall'
(assuming names for your room and occupancy columns)

Use a temp table to make it simple no matter how many columns are involved:
SELECT * INTO #ROOMS FROM ROOMS;
UPDATE #ROOMS SET SEMESTER='spring';
INSERT INTO ROOMS SELECT * FROM #ROOMS;

Insert Into Rooms
Select col1, col2, col3, 'Spring' as Semester -- select each column in order except 'Semester', pass it in literally as 'Spring'
From rooms where
Semester = 'Fall'

Well, if you're just trying to do this inside SQL Server Management Studio, you could copy the table, run an UPDATE command to set the semester to spring on the cloned table, then use the wizard to append the data from the cloned table to the existing table.
If you know a programming language you could pull all of the data, modify the semester, then insert the data into the existing table.
Note: The other answers are a much better way of achieving this.
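For reference, those clone-and-append steps can also be written directly in T-SQL; it's essentially the temp-table answer above with a permanent clone (rooms_clone is an illustrative name):

SELECT * INTO rooms_clone FROM rooms;
UPDATE rooms_clone SET semester = 'spring';
INSERT INTO rooms (roomname, current_occupancy, max_occupancy, semester)
SELECT roomname, current_occupancy, max_occupancy, semester FROM rooms_clone;
DROP TABLE rooms_clone;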

Related

Copy record from one Oracle SQL table to another with extra columns

This is part of a larger Java program, but I'm looking for a way in Oracle SQL to copy a record from one table to another. The original table has 45 columns. The second table is an archive table of the original table.
EDIT - It has the same 45 columns, but also has a NewKey column - created in the archive table using
("NEWKEY" NUMBER GENERATED ALWAYS as IDENTITY(START with 1 INCREMENT by 1)
and an archive_date column
"ARCHIVEDATE" DATE DEFAULT CURRENT_TIMESTAMP
Is there a way to do a query a la
INSERT INTO Archive_Table A
SELECT * (plus NEWKEY, ARCHIVEDATE) FROM Original Table O
WHERE O.CUSTKEY = passed_param;
where the only parameter passed is the CUSTKEY? Once a record is copied from the original table, it will then be deleted from the original table.
ARCHIVEDATE is a date field, so it should default to SYSDATE, not CURRENT_TIMESTAMP. This eliminates a conversion for every insert.
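If the column was already created with the CURRENT_TIMESTAMP default shown above, it can be switched in place with a one-line ALTER (a sketch, assuming the archive table is named ARCHIVE_TABLE):

ALTER TABLE archive_table MODIFY (archivedate DATE DEFAULT SYSDATE);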
As others have said, just list out the columns:
insert into archivetable (custkey, cola, colb, colc)
select custkey, cola, colb, colc
from originaltable
where custkey = passedparam;
There is no need to include NEWKEY or ARCHIVEDATE as they will be initialized on the insert.
If you had started typing those 45 columns half an hour ago (when you posted the question), your code would already be up and running.
Anyway: using an asterisk (*) in SELECT is probably not the best idea. What benefit do you expect? Saving a few keystrokes? If you explicitly name all the columns, you know exactly which value goes into which column. And what if a new column is added to one of those two tables? INSERT INTO ... SELECT * won't work any longer.
What you ask can be done with some dynamic SQL (so, within PL/SQL) by querying user_tab_columns and composing the INSERT INTO statement, paying attention to the differences (the identity and timestamp columns). But by the time you're done coding and testing, as I said, straightforwardly naming all those columns would have been finished ages ago.
If I were you, I'd do exactly that: name all 45 columns.
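That said, here is roughly what the dynamic-SQL route could look like, as a hedged PL/SQL sketch (table names follow the question; the CUSTKEY value is illustrative):

DECLARE
  l_cols    VARCHAR2(4000);
  l_custkey NUMBER := 12345;  -- illustrative; would be the passed parameter
BEGIN
  -- Build the column list from the dictionary; NEWKEY and ARCHIVEDATE are
  -- absent from the original table, so their defaults apply on insert.
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO l_cols
    FROM user_tab_columns
   WHERE table_name = 'ORIGINAL_TABLE';

  EXECUTE IMMEDIATE
    'INSERT INTO archive_table (' || l_cols || ') ' ||
    'SELECT ' || l_cols || ' FROM original_table WHERE custkey = :1'
    USING l_custkey;
END;
/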

Advice on removing records completely from a database

I am looking for some advice on the best way to remove multiple records (approximately 3,000) completely from a database. I have been assigned the job of removing old records from our database for GDPR reasons.
However, this is a database I do not have much knowledge of, and there is no documentation, ERDs, etc. on how the tables are joined together.
I managed to work out which tables need to have records removed to completely remove details from the database; there are about 24 of them.
I have a list of ID numbers which need to be removed, so I was thinking of creating a temporary table with the list of IDs and then creating a stored procedure to loop through the temporary table. Then, for each of the 24 tables, check whether it contains records connected to the ID number and, if so, delete them.
Does anyone know if there is any better way of removing these records??
I would use a table variable and union all:
declare @ids table (id int primary key)
insert into @ids (id)
select 1 union all
select 2 union all
...
select 3000
delete from table_name where id in
(select id from @ids)
Obviously just change the numbers to the actual ids.
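With the IDs collected in one place, the cleanup is then one DELETE per table. A sketch of what that could look like (the table and column names here are illustrative, not from the question):

delete from table_one where person_id in (select id from @ids)
delete from table_two where person_id in (select id from @ids)
-- ...repeat for each of the 24 tables, deleting child tables before their
-- parents so foreign key constraints are satisfied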

Transpose to Count columns of Boolean values on Access SQL

Ok, so I have a Student table that has 6 fields, (StudentID, HasBamboo, HasFlower, HasAloe, HasFern, HasCactus) the "HasPlant" fields are boolean, so 1 for having the plant, 0 for not having the plant.
I want to find the average number of plants that a student has. There are hundreds of students in the table. I know this could involve transposing of some sort and of course counting the boolean values and getting an average. I did look at this question SQL to transpose row pairs to columns in MS ACCESS database for information on Transposing (never done it before), but I'm thinking there would be too many columns perhaps.
My first thought was using a for loop, but I'm not sure those exist in SQL in Access. Maybe a SELECT/FROM/WHERE/IN type structure?
Just hints on the logic and some possible reading material would be greatly appreciated.
You could just get individual totals per category:
SELECT COUNT(*) FROM STUDENTS WHERE HasBamboo
add them all up, and divide by
SELECT COUNT(*) FROM STUDENTS
(a single-query version is sketched after the design note below)
It's not a great database design, though... Better normalized would be:
Table Students; fields StudentID, StudentName
Table Plants; fields PlantID, PlantName
Table OwnedPlants; fields StudentID, PlantID
The last table then stores a record for each student that owns a particular plant, and you could easily add new information in the right place (apartment number to Students, Latin name to Plants, date acquired to OwnedPlants) without completely redesigning the table structure and adding lots of fields (DateAcquiredBamboo, DateAcquiredFlower, etc.).
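With the existing single-table design, the average can also be computed in one query. A minimal Access SQL sketch, assuming Yes/No fields store True as -1 (which ABS maps to 1):

SELECT (SUM(ABS(HasBamboo)) + SUM(ABS(HasFlower)) + SUM(ABS(HasAloe))
      + SUM(ABS(HasFern)) + SUM(ABS(HasCactus))) / COUNT(*) AS AvgPlants
FROM Students;

With the normalized design, the same figure reduces to counting rows in OwnedPlants and dividing by the number of students.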

How to add dates to database records for trending analysis

I have a SQL Server database table that contains a few thousand records. These records are populated by PowerShell scripts on a weekly basis. These scripts basically overwrite last week's data, so the table only holds information pertaining to the previous week. I would like to take a copy of that table's data each week and add a date column with that day's date beside each record. I need this so I can do trend analysis in the future.
Unfortunately, I don't have access to the PowerShell scripts to edit them. Is there any way I can accomplish this using MS SQL server or some other way?
You can do the following. Create a table that will contain the clone plus the date. Insert the results from your original table, along with the date, into the clone table. From your description you don't need a WHERE clause, because the original table is wiped each week and only holds new data. After the initial table creation there is no need to create it again; you'll simply run the insert piece. Obviously the below is very basic and is just to provide you the framework.
CREATE TABLE yourTableClone
(
col1 int,
col2 varchar(5), ...
col5 date
)
insert into yourTableClone
select *, getdate()
from yourOriginalTable
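If the copy needs to run unattended each week, it can be scheduled as a SQL Server Agent job. A hedged sketch (job name, database name, and schedule are illustrative):

EXEC msdb.dbo.sp_add_job @job_name = N'WeeklySnapshot';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'WeeklySnapshot',
    @step_name = N'copy rows',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',
    @command = N'insert into yourTableClone select *, getdate() from yourOriginalTable;';
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'WeeklySnapshot',
    @name = N'Every Monday',
    @freq_type = 8,              -- weekly
    @freq_interval = 2,          -- on Monday
    @freq_recurrence_factor = 1, -- every 1 week
    @active_start_time = 10000;  -- 01:00:00
EXEC msdb.dbo.sp_add_jobserver @job_name = N'WeeklySnapshot';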

Suggested techniques for storing multiple versions of SQL row data

I am developing an application that is required to store previous versions of database table rows to maintain a history of changes. I am recording the history in the same table but need the most current data to be accessible by a unique identifier that doesn't change with new versions. I have a few ideas on how this could be done and was just looking for some ideas on the best way of doing this or whether there is any reason not to use one of my ideas:
1. Create a new row for each row version, with a field to indicate which row is the current one. The drawback is that each new version gets a different primary key, so any references to the old version will not return the current version.
2. When data is updated, the old row version is duplicated to a new row, and the new version replaces the old row. The current row can then be accessed by the same primary key.
3. Add a second table with only a primary key, and add a column to the original table that is a foreign key to the new table's primary key. Use the same method as option 1 for storing multiple versions, and create a view which finds the current version via the new table's primary key.
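For concreteness, a minimal T-SQL sketch of what option 3 could look like (all names here are illustrative, not from the question):

create table entity (
    entity_id int identity(1,1) primary key  -- stable id that never changes
)
create table entity_version (
    version_id int identity(1,1) primary key,
    entity_id int not null references entity(entity_id),
    data varchar(100)
)
go
create view entity_current as
select v.entity_id, v.data
from entity_version v
where v.version_id = (select max(version_id)
                      from entity_version
                      where entity_id = v.entity_id)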
PeopleSoft uses (used?) "effective dated records". It took a little while to get the hang of it, but it served its purpose. The business key is always extended by an EFFDT column (effective date). So if you had a table EMPLOYEE[EMPLOYEE_ID, SALARY] it would become EMPLOYEE[EMPLOYEE_ID, EFFDT, SALARY].
To retrieve the employee's salary:
SELECT e.salary
FROM employee e
WHERE employee_id = :x
AND effdt = (SELECT MAX(effdt)
FROM employee
WHERE employee_id = :x
AND effdt <= SYSDATE)
An interesting application was future-dating records: you could give every employee a 10% increase effective Jan 1 next year, and pre-populate the table a few months beforehand. When SYSDATE crosses Jan 1, the new salary comes into effect. It was also good for running historical reports: instead of using SYSDATE, you plug in a date from the past in order to see the salaries (or exchange rates, or whatever) as they would have been reported if run at that time.
In this case, records are never updated or deleted, you just keep adding records with new effective dates. Makes for more verbose queries, but it works and starts becoming (dare I say) normal. There are lots of pages on this, for example: http://peoplesoft.wikidot.com/effective-dates-sequence-status
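That future-dated raise, for example, could be pre-populated like this (a sketch in Oracle SQL; the date literal and the 10% factor are illustrative):

INSERT INTO employee (employee_id, effdt, salary)
SELECT employee_id, DATE '2026-01-01', salary * 1.10
FROM employee e
WHERE effdt = (SELECT MAX(effdt)
               FROM employee
               WHERE employee_id = e.employee_id
               AND effdt <= SYSDATE);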
#3 is probably best, but if you wanted to keep the data in one table, I suppose you could add a datetime column populated with now() for each new row; then you could at least sort by date descending and take the first row.
Overall, though, "multiple versions" needs more detail on what you want to achieve, functionally as much as programmatically.
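A sketch of that lookup in SQL Server flavor (TOP rather than LIMIT; the table and column names are illustrative):

declare @key varchar(50) = 'key1'  -- illustrative business key
select top (1) *
from versioned_table
where business_key = @key
order by created_at desc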
Have you considered using AutoAudit?
AutoAudit is a SQL Server (2005, 2008) Code-Gen utility that creates
Audit Trail Triggers with:
Created, CreatedBy, Modified, ModifiedBy, and RowVersion (incrementing INT) columns to table
Insert event logged to Audit table
Updates old and new values logged to Audit table
Delete logs all final values to the Audit table
view to reconstruct deleted rows
UDF to reconstruct Row History
Schema Audit Trigger to track schema changes
Re-code-gens triggers when Alter Table changes the table
For me, history tables are always separate. So, definitely, I would go with that; but why create some complex versioning scheme where you need to look at the current production record? In reporting, that results in nasty unions that are really unnecessary.
Table has a primary key and who cares what else.
TableHist has these columns: an incrementing int/bigint primary key; the history written date/time; who wrote it; a record type (I, U, or D for insert, update, delete); the PK from Table as an FK on TableHist; and then all the remaining columns from Table, with the same names.
If you create this history table structure and populate it via triggers on Table, you will have all versions of every row in the tables you care about and can easily determine the original record, every change, and the deletion records as well. AND if you are reporting, you only need to use your historical tables to get all of the information you'd like.
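A hedged T-SQL sketch of that structure ("Table" stands in for the audited table; the Id and Data columns are illustrative):

create table TableHist (
    HistId bigint identity(1,1) primary key,
    HistDate datetime not null default getdate(),
    HistBy sysname not null default suser_sname(),
    RecordType char(1) not null,  -- I, U, or D
    Id int not null,              -- PK of the audited table
    Data varchar(100) null        -- ...plus the rest of the audited columns
)
go
create trigger trg_Table_Hist on [Table]
after insert, update, delete as
begin
    set nocount on
    -- inserts, and the new side of updates
    insert into TableHist (RecordType, Id, Data)
    select case when exists (select 1 from deleted) then 'U' else 'I' end,
           i.Id, i.Data
    from inserted i
    -- deletes log the final values
    insert into TableHist (RecordType, Id, Data)
    select 'D', d.Id, d.Data
    from deleted d
    where not exists (select 1 from inserted)
end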
create table table1 (
Id int identity(1,1) primary key,
[Key] varchar(max),
Data varchar(max)
)
go
create view view1 as
with q as (
select [Key], Data, row_number() over (partition by [Key] order by Id desc) as 'r'
from table1
)
select [Key], Data from q where r=1
go
create trigger trigger1 on view1 instead of update, insert as begin
insert into table1
select [Key], Data
from (select distinct [Key], Data from inserted) a
end
go
insert into view1 values
('key1', 'foo')
,('key1', 'bar')
select * from view1
update view1
set Data='updated'
where [Key]='key1'
select * from view1
select * from table1
drop trigger trigger1
drop table table1
drop view view1
Results:

Key   Data
key1  foo

Key   Data
key1  updated

Id  Key   Data
1   key1  bar
2   key1  foo
3   key1  updated
I'm not sure if the distinct is needed.