I have two SQL tables: TableA and LookupTable. TableA stores LookupTableID as a foreign key, along with some other fields. LookupTable stores LookupTableID (its primary key), PharmacyName, PharmacyAddress, PostCode, PharmacyTelephone, PharmacyFax, PharmacyEmail, etc.
The data from TableA is displayed in an ASP.NET form. The form has two buttons for each row of the grid: a Print button and a Preview button. When the user clicks the Print button, a PDF document is produced containing the pharmacy information from LookupTable (for the LookupTableID in TableA) along with some other information from TableA. In addition, the Printed field in TableA is set to true, and a new record is created in TableA. The only difference between the printed record and the new record is in the StartDate, EndDate and Printed fields: in the new record, StartDate becomes the printed record's EndDate + 1 day, EndDate becomes the printed record's EndDate + 7 days, and Printed is false.
When the user clicks the Preview button, a PDF document is produced containing the same pharmacy information from LookupTable (for the LookupTableID in TableA) along with some other information from TableA. In this case the system does not create a new record or set Printed to true.
The issue I have with the above database design is this: if one of the columns in LookupTable, let's say PostCode, is amended and the user clicks the Preview button for a historical/printed record, then the preview PDF will display the up-to-date information, i.e. the new post code rather than the post code at the time of printing. I know this is expected behaviour. However, I would like to display historical records as they were printed, and up-to-date information only for records that have not yet been printed.
One workaround I have is to treat LookupTable as a library table from which the user selects a pharmacy, and then copy everything (all fields from the lookup table) across to TableA for the selected pharmacy. This would create a lot of duplicate data in the database. Is there a better way to achieve my objective? Any help would be appreciated.
This seems like a needlessly complicated and inflexible way of taking a snapshot of your data every time it is printed. A much better alternative would be to version both tables and create a third table containing just the TableA ID and the date it was printed. There is no need to make a copy of the data. To reproduce the data as it was on the date of any particular print, just use the print date to query the versioned data as of that date. You would get back the data as it appeared on that date, no matter how much it has changed since then.
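For illustration, the print-log table could be as small as this (a sketch only; the table and column names here are mine, not from the question):

CREATE TABLE PrintLog (
    PrintLogID int IDENTITY(1,1) PRIMARY KEY,
    TableAID   int      NOT NULL,                  -- FK to the printed TableA row
    PrintedAt  datetime NOT NULL DEFAULT GETDATE()
);

To reproduce a print, join this to the versioned tables and pick the version row that was valid as of PrintedAt.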
One answer I have given for a similar situation is here:
https://dba.stackexchange.com/questions/114580/best-way-to-design-a-database-and-table-to-keep-records-of-changes/114738#114738
Hope it helps.
For historical data there are (at least) two approaches:
1. Store your information with a ValidUntil date; the row with a NULL ValidUntil is the current one. Older data can be found with something similar to:
DECLARE @test TABLE(id INT, ValidUntil DATETIME);
INSERT INTO @test VALUES(1, {d '2015-01-01'})
    ,(2, {d '2015-02-01'})
    ,(3, {d '2015-03-01'})
    ,(4, NULL);                                -- NULL marks the current row

DECLARE @ValidOn DATETIME = {d '2015-01-02'}; -- set any value you like, even NULL

WITH HelperDates AS
(
    SELECT ISNULL(@ValidOn, GETDATE()) AS ValidOn
          ,(SELECT MAX(ValidUntil) FROM @test) AS MaxDate
)
SELECT TOP 1 tbl.*
FROM @test AS tbl
CROSS JOIN HelperDates AS hd
WHERE hd.MaxDate IS NULL            -- no closed rows exist at all
   OR hd.ValidOn > hd.MaxDate       -- date is after every closed row: the NULL (current) row sorts first
   OR hd.ValidOn <= tbl.ValidUntil  -- rows still valid on the requested date
ORDER BY tbl.ValidUntil ASC;        -- earliest matching ValidUntil = the row valid at that time
2. Store the necessary data at the moment your document is created. Best in my eyes is an XML column saved together with the created document. Especially with printouts I'd prefer this approach.
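A minimal sketch of that idea, assuming the tables from the question, an added SnapshotXml column, and a hypothetical @PrintedRowID parameter (TableAID is also an assumed name for TableA's primary key):

ALTER TABLE TableA ADD SnapshotXml xml NULL;

UPDATE a
SET SnapshotXml = (SELECT l.*
                   FROM LookupTable AS l
                   WHERE l.LookupTableID = a.LookupTableID
                   FOR XML PATH('Pharmacy'), TYPE)  -- freeze the pharmacy row as XML
FROM TableA AS a
WHERE a.TableAID = @PrintedRowID;                   -- hypothetical key/parameter

The Preview code can then read SnapshotXml for printed records and fall back to the live LookupTable row for records that have not been printed yet.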
Good luck in finding the best approach!
Related
I have a table and want to know the best way to see if there have been any data changes to one particular column, and if so, I would want to get an email alert, if that is possible.
My idea is to create a base table that only has this column with the data; let's say it's called 'mytable'. The column is 'Reporting_code'.
Then every day I would do a SELECT DISTINCT for this column in the main table. If a value is new or changed, it would be displayed:
SELECT DISTINCT reporting_code
FROM Prod_table
WHERE reporting_code NOT IN (SELECT reporting_code FROM mytable)
But is there a better way to be doing this?
I am trying to retrieve data from tickets that meet search matches. The relevant bits of data here are that a ticket has a name, and any number of comments.
Currently I'm matching a search against the ticket name like so:
JOIN freetexttable(Tickets,TIC_Name,'Test ') s1
ON TIC_PK = s1.[key]
Where the [key] from the full text catalog is equal to TIC_PK.
This works well for me, and gives me access to s1.rank, which is important for me to sort by.
Now my problem is that this method won't work for searching comments, because the key in the comment catalog is the comment PK, and that doesn't give me any information I can use to link back to the ticket.
I'm very perplexed about how to go about searching multiple descriptions and still getting a meaningful rank.
I'm pretty new to full-text search and might be missing something obvious. Here's my current attempt at getting what I need:
WHERE TIC_PK IN (
    SELECT DES_TIC_FK
    FROM freetexttable(TicketDescriptions, DES_Description, 'Test Query') AS t
    JOIN TicketDescriptions AS a ON t.[key] = a.DES_PK
    GROUP BY DES_TIC_FK
)
This gets me tickets with comments that match the search, but I don't think it's possible to sort by the rank data freetexttable returns with this method.
To search the name and comments at the same time and get the most meaningful rank you should put all of this info into the same table -- a new table -- populated from your existing tables via an ETL process.
The new table could look something like this:
CREATE TABLE TicketsAndDescriptionsETL (
    TIC_PK int,
    TIC_Name varchar(100),
    All_DES_Descriptions varchar(max),
    CONSTRAINT PK_TicketsAndDescriptionsETL PRIMARY KEY (TIC_PK)
)
GO

-- KEY INDEX (the full-text key) is required; it points at the unique index behind the PK
CREATE FULLTEXT INDEX ON TicketsAndDescriptionsETL (
    TIC_Name LANGUAGE 'English',
    All_DES_Descriptions LANGUAGE 'English'
)
KEY INDEX PK_TicketsAndDescriptionsETL
GO
Schedule this table to be populated either via a SQL job, triggers on the Tickets and TicketDescriptions tables, or some hook in your data layer. For tickets that have multiple TicketDescriptions records, combine the text of all of those comments into the All_DES_Descriptions column.
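The combining step could look something like this (a sketch using the classic FOR XML PATH concatenation; on SQL Server 2017+ STRING_AGG would be simpler; DES_TIC_FK and DES_Description are the names from the question):

INSERT INTO TicketsAndDescriptionsETL (TIC_PK, TIC_Name, All_DES_Descriptions)
SELECT t.TIC_PK,
       t.TIC_Name,
       STUFF((SELECT ' ' + d.DES_Description
              FROM TicketDescriptions AS d
              WHERE d.DES_TIC_FK = t.TIC_PK
              FOR XML PATH(''), TYPE).value('.', 'varchar(max)')
             , 1, 1, '')                       -- all of a ticket's comments as one string
FROM Tickets AS t;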
Then run your full text searches against this new table.
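For example, a ranked search against both columns might look something like this (a sketch; FREETEXTTABLE accepts a list of indexed columns, or * for all of them):

SELECT etl.TIC_PK, etl.TIC_Name, s.[RANK]
FROM freetexttable(TicketsAndDescriptionsETL, (TIC_Name, All_DES_Descriptions), 'Test Query') AS s
JOIN TicketsAndDescriptionsETL AS etl ON etl.TIC_PK = s.[KEY]
ORDER BY s.[RANK] DESC;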
While this approach does add another cog to the machine, there's really no other way to perform full text searches across multiple tables and generate one rank.
I have an ODBC database that I've linked to an Access table. I've been using Access to generate some custom queries/reports.
However, this ODBC database changes frequently and I'm trying to discover where the discrepancy is coming from. (hundreds of thousands of records to go through, but I can easily filter it down into what I'm concerned about)
Right now I've been manually pulling the data each day, exporting to Excel, counting the totals for each category I want to track, and logging in another Excel file.
I'd rather automate this in Access if possible, but haven't been able to get my head around it yet.
I've already linked the ODBC databases I'm concerned with, and can build the query I want.
What I'm struggling with is how to capture this daily and then log that total so I can trend it over a given time period.
If the data were constant, this would be easy for me to understand and do. However, the data can change daily.
EX: This is a database of work orders. Work orders (which are basically my primary key) are assigned to different departments. A single work order can belong to many different departments and have multiple tasks/holds/actions tied to it.
Work Order 0237153-03 could be assigned to Department A today, but then could be reassigned to Department B tomorrow.
These work orders also have "ranking codes" such as Priority A, B, C. These too can be changed at any given time. Today Work Order 0237153-03 could be priority A, but tomorrow someone may decide that it should actually be Priority B.
This is why I want to capture all available data each day (The new work orders that have come in overnight, and all the old work orders that may have had changes made to them), count the totals of the different fields I'm concerned about, then log this data.
Then repeat this everyday.
The question you ask is very vague, so here is a general answer.
You are counting the items you get from a database table.
It may be that you don't need to actually count them every day: if the table stores the data for every day, you simply need one query that counts the items for each day held in the table.
You are right that this would best be done in Access.
You might not even need the "log the counts in another table" step, though.
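A generic sketch of such a query (the table and field names are placeholders, not from your database):

SELECT DateField, COUNT(*) AS ItemCount
FROM SomeTable
GROUP BY DateField
ORDER BY DateField;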
It seems you are quite new to Access, so you might benefit from some tutorial videos, a book, or other web resources.
PART 2.
If you have to bodge it because you can't get the ODBC database to use triggers/data macros to log a history, you could store a history yourself like this... BUT you have to do it EVERY day.
0. On day 1, take a full copy of the ODBC data as YOURTABLE. Add a field "DumpNumber" and set it to 1 in every row.
1. Link to the ODBC data every day.
2. Join from YOURTABLE to the ODBC table and find any records that have changed (i.e. test just the fields you want to monitor and check whether any of them have changed; see the sketch after step 3).
3. Append these changed records to YOURTABLE with a new DumpNumber value (2 on day 2, and so on). This value MUST always increment!
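A rough sketch of the step-2 comparison (all names here are placeholders; in practice you would compare against the most recent dump only):

SELECT Live.*
FROM ODBCTable AS Live
LEFT JOIN YOURTABLE AS Hist
       ON  Live.WorkOrderID = Hist.WorkOrderID
       AND Live.Department  = Hist.Department
       AND Live.Priority    = Hist.Priority
WHERE Hist.WorkOrderID IS NULL;   -- no exact match = a new or changed record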
4. You can now write SQL to get the most recent record for each primary key:
SELECT Mytable.*
FROM Mytable
INNER JOIN
(
    SELECT PrimaryKeyFields, MAX(DumpNumber) AS MAXDumpNumber
    FROM Mytable
    GROUP BY PrimaryKeyFields
) AS T1
ON  T1.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND T1.MAXDumpNumber    = Mytable.DumpNumber
You can compare the most recent records with any previous records, e.g. to get the previous dump. Note that simply changing the last line of the above SQL to
AND T1.MAXDumpNumber - 1 = Mytable.DumpNumber
will NOT work (unless you always keep every record!).
5. Use something like this to get the previous row:
SELECT Mytable.*
FROM Mytable
INNER JOIN
(
    -- for each key: the highest DumpNumber that is NOT the latest one
    SELECT Mytable.PrimaryKeyFields
          ,MAX(Mytable.DumpNumber) AS MAXDumpNumber
    FROM Mytable
    INNER JOIN
    (
        -- the latest dump number per key
        SELECT PrimaryKeyFields
              ,MAX(DumpNumber) AS MAXDumpNumber
        FROM Mytable
        GROUP BY PrimaryKeyFields
    ) AS TabLatest
    ON  TabLatest.PrimaryKeyFields = Mytable.PrimaryKeyFields
    AND TabLatest.MAXDumpNumber   <> Mytable.DumpNumber
    -- Note that the <> is VERY important
    GROUP BY Mytable.PrimaryKeyFields
) AS T1
ON  T1.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND T1.MAXDumpNumber    = Mytable.DumpNumber
Create 4 and 5 as MS Access named queries (or SQL Server views) and then treat them like tables to do comparisons.
Make sure you have indexes created on the PK fields and the DumpNumber, and that they are unique; this will speed things up.
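For example (a sketch; the names are placeholders, as above):

CREATE UNIQUE INDEX idx_Mytable_Key_Dump ON Mytable (PrimaryKeyFields, DumpNumber);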
Finish it in time for Christmas... and flag this as an answer!
Quick Version: I have 4 tables (TableA, TableB, TableC, TableD) identical in design. TableC is a complete History of TableA & B. I want to periodically update TableC with new data from TableA & B. TableD contains a copy of the row most recently transferred from A/B to C. I need to select all records from TablesA/B that are more recent than the record in TableD. Any advice?
Long Version: I'm trying to ETL (Extract, Transform, Load) some information from a few different tables into some other tables for quicker, easier reporting... kind of like a data warehouse, but within the same database (don't ask).
Basically we want to record and report on system performance. Oracle keeps logs for this in the tables flows_030100.wwv_flow_activity_log1$ and flows_030100.wwv_flow_activity_log2$ - I believe these tables are filled and cleared every two weeks or so...
I have created a table:
CREATE TABLE dw_log_hist AS
SELECT * FROM flows_030100.wwv_flow_activity_log WHERE 1=0;
and filled it with the current information:
INSERT INTO dw_log_hist
SELECT *
FROM flows_030100.wwv_flow_activity_log1$;

INSERT INTO dw_log_hist
SELECT *
FROM flows_030100.wwv_flow_activity_log2$;
HOWEVER, these log files record EVERY click in the APEX screens. As such, they are continually growing.
I want to periodically update my DW_Log_Hist table with only new information (I am fully aware my history table will grow to be ridiculously sized but I'll deal with that later).
Unfortunately, these tables have no primary key, so I've had to create another table to store marker records that will tell me the latest logs I copied over -_-
CREATE TABLE dw_log_temp AS
SELECT * FROM flows_030100.wwv_flow_activity_log
WHERE time_stamp = (SELECT MAX (time_stamp)
FROM flows_030100.wwv_flow_activity_log2$)
NOW THEN after all that waffle... this is what I need your help with:
Does anyone know whether one of the log tables (wwv_flow_activity_log1$ or wwv_flow_activity_log2$) always has the latest logs? Is it a case of log1$ filling up, log2$ filling then log1$ being overwritten with log2$ so that log2$ always has the latest data? Or do they both fill up and then get filled up again?
Can anyone advise how I would go about populating the DW_Log_Hist table using the DW_Log_Temp marker records?
Conceptually it would be something like:
insert everything into dw_log_hist from activity_log1$ and activity_log2$ where the time_stamp is > (time_stamp of the record in dw_log_temp)
Super sorry for such a long post.
Got the answer :-)
A chap on Reddit helped me realise my over-complication...
insert into dw_log_hist
select *
from flows_030100.wwv_flow_activity_log1$
where time_stamp > (select max(time_stamp)
from dw_log_hist)
union
select *
from flows_030100.wwv_flow_activity_log2$
where time_stamp > (select max(time_stamp)
from dw_log_hist)
Hurrah! Always feel like such an idiot when you see the simple answer...
I am trying to create "feeds" in SQL that contain "items", and when I am getting all the feeds (SELECT * FROM Feeds), I want to order them by when they were last updated (the last time an item was added to the feed). The "item" has a publish date column.
So far my query looks like this:
SELECT
F.FeedID,
F.Title,
F.Link,
F.Language,
F.Copyright,
F.Subtitle,
F.Author,
F.Summary,
F.OwnerName,
F.OwnerEmail,
F.ImageURL,
F.Category
FROM Feeds F
LEFT JOIN Items I
ON F.FeedID = I.FeedID
ORDER BY I.PublishDate DESC
Somehow I want to order the joined items so that only the most recent item is joined to each feed. Is this possible? Or should I just add a "last updated" column to the "feeds" table?
You need to add a last updated column of type DateTime, and set the value appropriately when inserting or updating rows, as your needs dictate. That is to say, set its value depending on whether you want the most recently updated item or the item which was most recently added (updated versus inserted), as they may differ. You can then order by this new column.
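A minimal sketch of that suggestion, using the names from the question (LastUpdated is the assumed new column and @FeedID a hypothetical parameter):

ALTER TABLE Feeds ADD LastUpdated datetime NULL;

-- set it in whatever code path inserts an item (or in a trigger on Items)
UPDATE Feeds SET LastUpdated = GETDATE() WHERE FeedID = @FeedID;

-- the feed listing then becomes
SELECT * FROM Feeds ORDER BY LastUpdated DESC;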
You cannot use ORDER BY on the data as shown to find the last modified item, as PublishDate is (almost certainly) the date the item was published, not the date the row was added to the database.
Date created and date updated are very important things to track in a database.
An ERD (entity relationship diagram) shows how the database is laid out:
http://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model
The details of both tables will help in clearing up what you are trying to accomplish.