One of our staff members made a change on the live DB that wiped the address details of circa 3,000 users. I have attached yesterday's backup to a new DB, meaning I have a table of all the correct addresses on a different database.
Is there an easy way I can essentially copy this table from the backup DB to the live DB? I will of course test it first, as I wish this user had. I don't want to restore from the backup, as it would erase a lot of data in other tables.
I have attached a backup of the DB in order to extract the records needed.
I probably should have added this, but the entire table hasn't been erased, just a number of records within it. The table has
SerialNum, FirstName, LastName, Address 1, PostCode etc.
The SerialNum, FirstName, and LastName are all still fine; it's just the address fields that were erased by a bad data import.
You want something like this:
INSERT INTO ProdDB.dbo.[TableName]
SELECT *
FROM RestoredDB.dbo.[TableName] r
WHERE NOT EXISTS (SELECT 1 FROM ProdDB.dbo.[TableName] p WHERE p.ID = r.ID)
Of course you'll need to adjust that to use the real primary key and table name, and you may also need to enable IDENTITY_INSERT (SET IDENTITY_INSERT ... ON) if the table has an identity column.
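Given your clarification that the rows still exist and only the address columns were blanked, an UPDATE from the restored copy fits the problem better than an INSERT. A minimal sketch, matching rows on SerialNum and guessing at the address column names (adjust to your real schema):

-- Restore only the wiped address fields, matching rows on SerialNum
UPDATE p
SET p.[Address 1] = r.[Address 1],
    p.PostCode = r.PostCode
FROM ProdDB.dbo.[TableName] AS p
INNER JOIN RestoredDB.dbo.[TableName] AS r
    ON p.SerialNum = r.SerialNum
WHERE p.[Address 1] IS NULL OR p.[Address 1] = '';

Test it in a transaction against a copy first and check the affected row count before running it on the live DB.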
My company has a large Access database which lists every user that has ever existed for a particular client of ours. This database is curated manually. I have been asked to delete 500+ users. This means I have to modify three columns for each user:
Status (must be changed to "deleted")
Date Deleted (to current date)
Date Revised (to current date)
Obviously, I don't want to have to Ctrl+F and change these fields manually for over 500 users. What is the easiest way to go about doing this more quickly?
I have the list of users that need to be deleted in Excel. I tried to create a query that shows all of these users in one table so that I don't have to sort through the users who don't need to be modified. It looked something like this:
SELECT UserID, Status, [Date Deleted], [Date Revised]
FROM [database name]
WHERE UserID = 'a'
OR UserID = 'b'
-- (and then 500+ more OR conditions, one for each UserID)
ORDER BY UserID;
I figured if I can at least do this, at least I'll have all the users I need to edit in front of me so that I don't have to Ctrl+F. But this didn't work, because it exceeded the 1,024 character limit in Access. Any ideas for how I can accomplish this?
Don't attempt to write 500+ UserID values into your SQL statement. Instead, import the Excel list as a table into your Access database. Then you can use that list of UserID values to identify which rows of your main table should be updated.
UPDATE MainTable AS m
SET m.Status = 'deleted', m.[Date Deleted] = Date(), m.[Date Revised] = Date()
WHERE m.UserID IN (SELECT UserID FROM tblFromExcel)
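If Access complains that the operation must use an updateable query (Access often treats an UPDATE containing a subquery as read-only), an INNER JOIN version of the same update usually works; tblFromExcel here is just whatever name you gave the imported list:

UPDATE MainTable AS m
INNER JOIN tblFromExcel AS t ON m.UserID = t.UserID
SET m.Status = 'deleted', m.[Date Deleted] = Date(), m.[Date Revised] = Date();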
I have a table joined from two other tables. I would like this table to stay updated with entries in the other two tables.
First Table is "employees"
I am using the ID, Last_Name, and First_Name.
And the second Table is "EmployeeTimeCardActions"
using columns ID, ActionTime, ActionDate, ShiftStart, and ActionType.
ID is the common column that the join was created on.
Because I usually get comments saying I did not include enough information: I don't think an exact code sample is needed, and I believe I have included everything necessary. If there is a good reason to include more, I will; I just try to keep as little company information public as possible.
Sounds like you're duplicating data across tables, which is not a smart idea at all. You can update data in one table when a row is updated in another via triggers, but that is a TERRIBLE approach here. If you want to display data joined from two tables, the right approach is a SQL VIEW, which always shows the current data from the underlying tables.
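A minimal sketch using the column names you listed (the view name is just an example):

CREATE VIEW EmployeeTimeCards AS
SELECT e.ID, e.Last_Name, e.First_Name,
       t.ActionTime, t.ActionDate, t.ShiftStart, t.ActionType
FROM employees AS e
INNER JOIN EmployeeTimeCardActions AS t ON t.ID = e.ID;

You then query the view like a table (SELECT * FROM EmployeeTimeCards); since it is evaluated at query time, it can never go stale.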
I just got stuck on a problem where there are two ways of solving it.
Let me first explain the case.
I have a DB table consisting of some columns, say id, name, address, priority. Here name and address are not unique, but name + address + priority is unique.
The input provided to me is a name and a list of addresses. What I have to do is arrange the name and addresses in my DB table in the same order as given in the input.
There are two ways of solving it:
1. Select on the basis of name and address, build update queries for the rows whose data changed, and execute them.
2. Delete the rows matching name and address from the table and insert the data with the new priority.
I know that one update is faster than a delete + insert, but in this case there is an extra select query too.
My intuition is that the 1st method will be faster, but I don't have any technical details to back that up.
Am I missing something?
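For reference, a minimal sketch of the two options using the columns you described (id, name, address, priority); the table name and values are hypothetical:

-- Option 1: update only the rows whose priority changed
UPDATE name_addresses
SET priority = 2
WHERE name = 'Alice' AND address = '12 High St' AND priority <> 2;

-- Option 2: delete the name's rows and re-insert with the new priorities
DELETE FROM name_addresses WHERE name = 'Alice';
INSERT INTO name_addresses (name, address, priority)
VALUES ('Alice', '12 High St', 2);

Option 2 skips the initial SELECT but rewrites every row and churns the unique index on (name, address, priority), while option 1 writes only what changed; measuring both against your actual data is the only reliable way to decide.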
I have an ODBC database that I've linked to an Access table. I've been using Access to generate some custom queries/reports.
However, this ODBC database changes frequently, and I'm trying to discover where the discrepancy is coming from. (There are hundreds of thousands of records to go through, but I can easily filter them down to what I'm concerned about.)
Right now I've been manually pulling the data each day, exporting to Excel, counting the totals for each category I want to track, and logging in another Excel file.
I'd rather automate this in Access if possible, but haven't been able to get my head around it yet.
I've already linked the ODBC databases I'm concerned with, and can generate the query I want to generate.
What I'm struggling with is how to capture this daily and then log that total so I can trend it over a given time period.
If the data were constant, this would be easy for me to understand and do. However, the data can change daily.
EX: This is a database of work orders. Work orders(which are basically my primary key) are assigned to different departments. A single work order can belong to many different departments and have multiple tasks/holds/actions tied to it.
Work Order 0237153-03 could be assigned to Department A today, but then could be reassigned to Department B tomorrow.
These work orders also have "ranking codes" such as Priority A, B, C. These too can be changed at any given time. Today Work Order 0237153-03 could be priority A, but tomorrow someone may decide that it should actually be Priority B.
This is why I want to capture all available data each day (The new work orders that have come in overnight, and all the old work orders that may have had changes made to them), count the totals of the different fields I'm concerned about, then log this data.
Then repeat this everyday.
The question you ask is very vague, so here is a general answer.
You are counting the items you get from a database table.
It may be that you don't need to actually count them every day: if the table in the database stores all the data for every day, you simply need to create a query that counts the items per day directly from the table.
You are right that this would be best done in Access.
You might not even need the "log the counts in another table" step, though.
It seems you are quite new to Access, so you might benefit from these links: videos numbered 61 and 70 here, and also video 7 here.
These will help, or buy a book / use web resources.
PART 2.
If you have to bodge it because you can't get the ODBC database to use triggers/data macros to log a history, you could store a history yourself like this... BUT you have to do it EVERY day.
0. On day 1, take a full copy of the ODBC data as YOURTABLE. Add a field "DumpNumber" and set it to 1 in every row.
1. Link to the ODBC data every day.
2. Join from YOURTABLE to the ODBC table and find any records that have changed (i.e. test just the fields you want to monitor and check whether any of them have changed); a sketch follows this list.
3. Append these changed records to YOURTABLE with a new DumpNumber value of 2. This value MUST always increment with each daily dump!
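A hedged sketch of steps 2 and 3 as a single Access append query. YOURTABLE and DumpNumber are from the steps above; ODBCTable, WorkOrderID, Department and Ranking are placeholder names for your linked table, primary key and monitored fields:

INSERT INTO YOURTABLE (WorkOrderID, Department, Ranking, DumpNumber)
SELECT o.WorkOrderID, o.Department, o.Ranking, 2 AS DumpNumber
FROM ODBCTable AS o
INNER JOIN YOURTABLE AS y ON y.WorkOrderID = o.WorkOrderID
WHERE y.DumpNumber = 1
AND (y.Department <> o.Department OR y.Ranking <> o.Ranking);

The hardcoded 2 is only right on day 2; in practice you would use the current maximum DumpNumber plus one, compare against each key's latest row rather than DumpNumber 1, and run a second append (a LEFT JOIN where y.WorkOrderID IS NULL) to pick up brand-new work orders.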
4. You can now write SQL to get the most recent record for each primary key:
SELECT *
FROM Mytable
INNER JOIN
(
SELECT PrimaryKeyFields, MAX(DumpNumber) AS MAXDumpNumber
FROM Mytable
GROUP BY PrimaryKeyFields
) AS T1
ON T1.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND T1.MAXDumpNumber = Mytable.DumpNumber
You can compare the most recent records with any previous records, i.e. to get the previous dump.
Note that simply changing the last line of the above SQL to the following will NOT work (unless you always keep every record, so no dump number is ever skipped for a key!):
AND T1.MAXDumpNumber - 1 = Mytable.DumpNumber
5. Use something like this to get the previous row instead:
SELECT *
FROM Mytable
INNER JOIN
(
SELECT Mytable.PrimaryKeyFields
, MAX(Mytable.DumpNumber) AS MAXDumpNumber
FROM Mytable
INNER JOIN
(
SELECT PrimaryKeyFields
, MAX(DumpNumber) AS MAXDumpNumber
FROM Mytable
GROUP BY PrimaryKeyFields
) AS TabLatest
ON TabLatest.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND TabLatest.MAXDumpNumber <> Mytable.DumpNumber
-- Note that the <> is VERY important: it excludes each key's latest row,
-- so the MAX that remains is the previous dump
GROUP BY Mytable.PrimaryKeyFields
) AS T1
ON T1.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND T1.MAXDumpNumber = Mytable.DumpNumber
Create queries 4 and 5 as MS Access named queries (or SQL Server views) and then treat them like tables to do the comparison.
Make sure you have indexes created on the PK fields and DumpNumber, and make the combination unique - this will speed things up.
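A hedged sketch of that comparison, assuming you save query 4 as qryLatest and query 5 as qryPrevious (example names) and that Ranking is one of the monitored fields:

SELECT qryLatest.PrimaryKeyFields,
qryPrevious.Ranking AS OldRanking,
qryLatest.Ranking AS NewRanking
FROM qryLatest
INNER JOIN qryPrevious
ON qryPrevious.PrimaryKeyFields = qryLatest.PrimaryKeyFields
WHERE qryPrevious.Ranking <> qryLatest.Ranking;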
Finish it in time for Christmas... and flag this as an answer!
I am having trouble figuring out how to pull a value from a secondary table, to use as selection criteria on a per-record basis.
I am working with Crystal Reports 2011 on Windows 7, over an ODBC connection to an Oracle 11g database.
I am creating an employee directory that utilizes information from two locations:
table: TEACHERS
view: PVSIS_CUSTOM_TEACHERS
The teachers table is set up with your predictable fields: id, lastname, firstname, telephone, address, city, state, zip, etc. etc. etc.
The view has the following fields available:
TEACHERID
FIELD_NAME
TEXT_VALUE
The database application I am using allows me to create "custom fields" that are related back to the main teachers table. In truth, the fields I am creating are actually stored in a separate table, but are then accessible through the PVSIS_CUSTOM_TEACHERS view. Since the database application allows for any number of "custom fields" to be created, the view can have any number of records in it that can be tied back to the records within the teachers table.
There are MANY custom fields that have been created, but for the purposes of my current project, only 3 of them matter:
empDirSupRecord
empDirSupPhone
empDirSupAddr
The view for my personal teacher record would look like this:
TeacherID  Field_Name       Text_Value
1          empDirSupRecord
1          empDirSupPhone   1
1          empDirSupAddr    1
1          AnotherField     another_value
1          YetAnotherField  yetanother_value
(This would indicate that I've asked for my phone and address to be suppressed, but would still want my name to be included in the directory)
These fields will each contain a '1' if the user has asked that their phone number or address not be published, or if we need to suppress the entire record altogether.
When I first started my report, I pulled both the table and the view into the Database Expert and linked them with teachers.id = pvsis_custom_teachers.teacherid. However, this causes each teacher's name to print once for every record with their teacher id in the view. Since that's not the behavior I want, I removed the view from the Database Expert and tried using SQL Expression fields to retrieve the contents of the custom fields. This is where I'm currently stuck: I need to write the SQL so that it selects the correctly named field for each teacher record as the record is being processed by the report.
Currently, my sql expressions statement is written as:
(SELECT text_value FROM pvsis_custom_teachers WHERE field_name = 'empDirSupRecord' AND teacherid = '1')
What I need to do is figure out how to get the report to intelligently select the record for teacherid = (whatever teacherid is currently being processed). I'm not sure if SQL Expression fields are the way to go to accomplish this, so am definitely open to alternate suggestions if my current approach will not work.
Thanks for taking a look. :-)
You're almost there with the SQL Expression. You can refer back to the main query with double-quoted field names. So what you're looking for is:
case when "teachers"."id" is null then null
else (SELECT max(text_value)
FROM pvsis_custom_teachers
WHERE field_name = 'empDirSupRecord' AND teacherid = "teachers"."id")
end
Note that CR will likely complain without the null check and use of max(), since it wants to be sure that only a scalar will ever be returned.
The alternative, and likely less performance-intensive, way to do this is to join the table and view as you first described. Then you can group by {teacher.id} and keep track of each field name in the view via variables. This will require more work and more formulas, though. Something like this, for example:
//Place this formula in the Group Header
whileprintingrecords;
stringvar empDirSupRecord:="";
//Place this formula in the Details section
whileprintingrecords;
stringvar empDirSupRecord;
if {pvsis_custom_teachers.field_name} = 'empDirSupRecord'
then empDirSupRecord:={pvsis_custom_teachers.text_value}
//Place this formula in the Group Footer
whileprintingrecords;
stringvar empDirSupRecord;