How to find the user name or ID of whoever updated another staff member's address - SQL

This is my first post on this forum and I hope I will get an answer.
I have very limited information about my database.
My question is this:
I want to know who updated the address of another staff member. The update was certainly made from the Java-based application, but I have been told that my database has an audit schema in which I can find the user name of whoever made the update.
However, I don't know which table holds this information, as we have around 1,000+ tables in the database.
Could you please help me find the exact table where this information is stored?

Aija, this is a difficult question to answer because there are so many possibilities, but we may be able to help you narrow it down. Tables like this often start with audit, history, change, etc., or have one of those words appended to the end of the name of the table they are tracking, e.g. audit_personnel or personnel_change. You say you have 1,000+ tables. That is a lot, but I have worked with bigger, and it is still feasible to go through the list of table names one by one. When databases get this big, naming standards come into play; have a look at the way the table names are put together and you will be able to narrow down your search a lot.

Thanks for your input.
I have gone through all the tables whose names start or end with "audit". I found one audit trail; within it there are multiple tables, but I was not able to find the information I expected.
I am also not sure whether these tables fall under my own schema/privileges or under SYS or some other user.

Another option is a single audit-control table. In this style, the table has four major components: first, an identifier for the data being changed, which will be something like the table and field names and maybe a record id; second, the original value; third, the new value; and fourth, the who and when of the change. If this is the style, you will need to know which table it is that you want to track. Then you will need something like "select * from [audit table] where [audit table].[monitored table] = [target table]".
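For illustration only, assuming a generic audit table named audit_trail with table_name, column_name, old_value, new_value, changed_by and changed_on columns (these names are assumptions, not your actual schema), the query might look like:

SELECT changed_by, changed_on, old_value, new_value
FROM   audit_trail
WHERE  table_name  = 'STAFF'      -- the monitored table (assumed name)
AND    column_name = 'ADDRESS'    -- the changed column (assumed name)
ORDER BY changed_on DESC;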

Related

Question about creating table, relation and cardinality

I have a problem creating a database (I am still learning). I need to create a database for a company that generates a report like the one in the image.
Report to reply
I have the tables created as follows, but I have a problem relating t_stack to t_tracking_number.
DB model and Relationship
The part that I don't know how to do is creating the t_stack table so that it registers an id_stack and an id_driver and then lets me insert many tracking_numbers. At the moment, according to the image, I have to register an id_stack and an id_driver for every tracking_number.
If someone can give me an idea of how to do it, or if I must change something in the database, it doesn't matter. Sorry if I didn't make myself clear; this is my first project and I want to do my best. Thank you.
You have to add an id (ID_stack) to your t_stack table as an identity (auto-increment) primary key. Then you can have all the tracking_numbers and combinations you need.
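As a minimal sketch of one way to read this advice (SQL Server-style IDENTITY syntax; the table and column names are assumptions based on the question):

CREATE TABLE t_stack (
    id_stack  INT IDENTITY(1,1) PRIMARY KEY,  -- auto-incrementing surrogate key
    id_driver INT NOT NULL                    -- FK to the driver table
);

CREATE TABLE t_tracking_number (
    tracking_number VARCHAR(50) PRIMARY KEY,
    id_stack        INT NOT NULL REFERENCES t_stack (id_stack)  -- many tracking numbers per stack
);

With this layout you register the stack (and its driver) once, then insert one t_tracking_number row per tracking number, all pointing at the same id_stack.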

Custom user defined database fields, what is the best solution?

To keep this as short as possible I'm going to use an example.
So let's say I have a simple database that has the following tables:
company - ( "idcompany", "name", "createdOn" )
user - ( "iduser", "idcompany", "name", "dob", "createdOn" )
event - ( "idevent", "idcompany", "name", "description", "date", "createdOn" )
Many users can be linked to a single company, as can multiple events, and many events can be linked to a single company. All companies, users and events have the columns shown above in common. However, what if I wanted to give my customers the ability to add custom fields to both their users and their events for any unique extra information they wish to store? These extra fields would be on a company-wide basis, not a per-record basis (so a company adding a custom field to their users would add it to all of their users, not just one specific user). The custom fields also need to be searchable and reportable, ideally automatically with some sort of report wizard. Considering the database is expected to have lots of traffic as well as lots of custom fields, what is the best solution for this?
My current research and findings in possible solutions:
To have generic placeholder columns such as "custom1", "custom2" etc.
** This is not viable as there will eventually be too many custom columns and there will be too many NULL values stored in the database
To have three tables per current table, e.g. user, user-custom-field, user-custom-field-value. The user table stays the same; the user-custom-field table contains the information about the new field, such as its name and data type; and the user-custom-field-value table contains the value of the custom field.
** This one is more of a contender, if it were not for its complexity and table-size implications. I think it will be impossible to avoid a user-custom-field table if I want to report on these fields automatically, as that is where I would have to store the information on how to report on them. However, in order to pull almost any data you would have to do a huge number of joins on the user-custom-field-value table, and you are now storing column data as rows, which in a database expected to have a lot of traffic as well as a lot of custom fields would soon cause a problem. (A rough sketch of this three-table layout appears at the end of this question.)
Create a new user and event table for each new company that is added to the system, removing the company id from within those tables and instead using it in the table name (e.g. user56, 56 being the company id). Then allow the user to trigger DB commands that add the new custom columns to their tables, giving them the power to decide whether a column has a default value, auto-increments, etc.
** Every time I have seen this solution it has been instantly shut down by people saying it would be unmanageable, as you would eventually get thousands of tables. However, nobody really explains what they mean by unmanageable. Firstly, as far as my understanding goes, more tables can actually be more efficient and produce faster search times because each table is much smaller. Secondly, yes, I understand that making common table changes would be difficult, but all you would have to do is run a script that changes the tables for each company. Finally, I actually see benefits to this method: it would separate company data, making it impossible for one company to accidentally access another's data via a potential bug, and it would potentially give the ability to back up and restore company data individually. If someone could elaborate on why this is perceived as a bad idea, it would be appreciated.
Convert fully or partially to a NoSQL database.
** Honestly, I have no experience with schemaless databases and don't really know how dynamic user-defined fields on a per-record basis would work (although I know it's possible). If someone could explain the implications of the switch, the differences in queries, and the potential benefits, that would be appreciated.
Create a JSON column in each table that requires extra fields. Then add the extra fields into that JSON object.
** The issue I have with this solution is that it is nearly impossible to filter data via the custom columns. You would not be able to report on these columns, and until you have received and processed them you don't really know what is in them.
Finally, if anyone has a solution not mentioned above, or any thoughts or disagreements with any of my notes, please share them, as this is all I have been able to find or figure out for myself.
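For reference, here is a rough sketch of the three-table layout from the second option above (underscores instead of hyphens so the names are valid SQL identifiers; all names and types are illustrative assumptions, and "user" is quoted because it is a reserved word in some databases):

CREATE TABLE user_custom_field (
    idfield   INT PRIMARY KEY,
    idcompany INT NOT NULL,           -- the company that defined the field
    name      VARCHAR(100) NOT NULL,  -- label shown in forms and reports
    data_type VARCHAR(20)  NOT NULL   -- e.g. 'text', 'number', 'date'
);

CREATE TABLE user_custom_field_value (
    iduser  INT NOT NULL,             -- FK to the user table
    idfield INT NOT NULL,             -- FK to user_custom_field
    value   VARCHAR(255),             -- stored as text, cast as needed
    PRIMARY KEY (iduser, idfield)
);

-- Pulling a single custom field for reporting already needs a join per field:
SELECT u.name, v.value
FROM   "user" u
JOIN   user_custom_field_value v ON v.iduser  = u.iduser
JOIN   user_custom_field       f ON f.idfield = v.idfield
WHERE  f.name = 'Shirt size';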
A typical solution is to have a JSON (or XML) column that contains the user-defined fields. This would be an additional column in each table.
This is the most flexible. It allows:
New fields to be created at any time.
No modification to the existing table to do so.
Any reasonable type of field, including types not readily available in SQL (e.g. arrays).
On the downside:
There is no validation of the fields.
Some databases support JSON but do not support indexes on them.
JSON is not "known" to the database for things like foreign key constraints and table definitions.
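A minimal sketch of this approach, assuming PostgreSQL and its jsonb type (the custom field names in the JSON are made up for illustration):

ALTER TABLE "user" ADD COLUMN custom_fields jsonb NOT NULL DEFAULT '{}';

-- Merge in company-defined extras without touching the schema again:
UPDATE "user"
SET    custom_fields = custom_fields || '{"shirt_size": "L"}'
WHERE  iduser = 42;

-- A GIN index makes the custom fields searchable:
CREATE INDEX idx_user_custom_fields ON "user" USING GIN (custom_fields);

SELECT iduser, name
FROM   "user"
WHERE  custom_fields @> '{"shirt_size": "L"}';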

What is the best method of logging data changes and user activity in an SQL database?

I'm starting a new application and was wondering what the best method of logging is. Some tables in the database will need to have every change recorded, and the user that made the change. Other tables may just need to have the last modified time recorded.
In previous applications I've used different methods to do this but want to hear what others have done.
I've tried the following:
Add a "modified" date-time field to the table to record the last time it was edited.
Add a secondary table just for recording changes in a primary table. Each row in the secondary table represents a changed field in the primary table. So one record update in the primary could create several records in the secondary table.
Add a table similar to no. 2, but one that records edits across three or four tables, referencing the table each change relates to in an additional field.
What methods do you use and recommend?
Also, what is the best way to record deleted data? I never like the idea that a user can permanently delete a record from the DB, so usually I have a boolean 'deleted' field which is set to true when the record is deleted, and the record is then filtered out of all queries at the model level. Any other suggestions on this?
Last one: what is the best method for recording user activity? At the moment I have a table which records logins/logouts/password changes etc., and depending on what the action is, gives it a code such as 1, 2, 3, etc.
Hope I haven't crammed too much into this question. Thanks.
I know it's a very old question, but I wanted to add a more detailed answer, as this is the first link I got when googling about DB logging.
There are basically two ways to log data changes:
on application server layer
on database layer.
If you can, just use logging on the application-server side. It is much clearer and more flexible.
If you need to log on the database layer you can use triggers, as @StanislavL said. But triggers can slow down your database performance and limit you to storing the change log in the same database.
You can also look at transaction log monitoring.
For example, in PostgreSQL you can use the logical replication mechanism to stream changes in JSON format from your database to anywhere.
In a separate service you can receive, handle and log the changes in any form and in any database (for example, just put the JSON you got into Mongo).
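As a rough illustration of the database side of that idea (PostgreSQL with the built-in test_decoding output plugin; a real setup would more likely use a JSON-producing plugin or a logical replication subscriber):

-- Requires wal_level = logical in postgresql.conf.
SELECT pg_create_logical_replication_slot('audit_slot', 'test_decoding');

-- Later, read the changes that have accumulated on the slot:
SELECT lsn, xid, data
FROM   pg_logical_slot_get_changes('audit_slot', NULL, NULL);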
You can add triggers to any tracked table to listen for insert/update/delete. In the triggers, just check the NEW and OLD values and write them to a special table with columns:
table_name
entity_id
modification_time
previous_value
new_value
user
It's hard to figure out the user who made the change, but it is possible if you add a changed_by column to the table you listen to.
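A minimal sketch of such a trigger in PostgreSQL (PL/pgSQL) syntax, tracking updates to an address column on an assumed staff table ("user" is a reserved word, so the who-column is called changed_by here):

CREATE TABLE change_log (
    table_name        text,
    entity_id         bigint,
    modification_time timestamptz DEFAULT now(),
    previous_value    text,
    new_value         text,
    changed_by        text
);

CREATE OR REPLACE FUNCTION log_address_change() RETURNS trigger AS $$
BEGIN
    IF NEW.address IS DISTINCT FROM OLD.address THEN
        INSERT INTO change_log (table_name, entity_id, previous_value, new_value, changed_by)
        VALUES ('staff', OLD.id, OLD.address, NEW.address, NEW.changed_by);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER staff_address_audit
BEFORE UPDATE ON staff
FOR EACH ROW EXECUTE FUNCTION log_address_change();   -- EXECUTE PROCEDURE on PostgreSQL 10 and older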

Having two tables for capturing data at a specific moment

I'm creating an application which will hold curricula vitae (CVs).
The user should be able to:
create different work information entries for use with different CVs
(name of work, start date, end date, ...)
A CV will have many WorkInformations.
A WorkInformation belongs to many CVs.
However, when a user changes a WorkInformation outside the scope of a CV, I don't want it to change within the existing CVs.
Is it correct to have an extra table with the same information?
It's supposed to create a new "workinformation" from a copy of a "workinformation_that_shouldent.."
Or is there any other approach I should look into? I'm open to all suggestions; I'm new to designing relational databases.
No, I don't think you should have a different workinformation table.
Instead, you should have the CV point to a work information record. When the work information record changes outside the CV world, then create a new version of the record. That way, all work information records are in the same table. The ones that CVs refer to remain the same.
You can keep track of different versions of the same record in more than one way. A simple way is to have versions refer back to the base work information record, with another field having the version number.
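A rough sketch of that versioning idea (all names and types here are assumptions):

CREATE TABLE work_information (
    id         INT PRIMARY KEY,
    base_id    INT,                       -- the original record this row is a version of (NULL for version 1)
    version    INT NOT NULL DEFAULT 1,
    work_name  VARCHAR(200),
    start_date DATE,
    end_date   DATE
);

CREATE TABLE cv_work_information (
    cv_id               INT NOT NULL,
    work_information_id INT NOT NULL REFERENCES work_information (id),  -- pins the CV to one specific version
    PRIMARY KEY (cv_id, work_information_id)
);

Editing a work information entry outside a CV then means inserting a new row with version + 1 rather than updating in place, so existing CVs keep pointing at the version they were created with.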
By the way, I find it unusual that a work information record would be referred to by multiple CVs.

Designing a database schema for a Job Listing website?

For a school project I'm making a simple Job Listing website in ASP.NET MVC (we got to choose the framework).
I've thought about it awhile and this is my initial schema:
JobPostings
+---JobPostingID
+---UserID
+---Company
+---JobTitle
+---JobTypeID
+---JobLocationID
+---Description
+---HowToApply
+---CompanyURL
+---LogoURL
JobLocations
+---JobLocationID
+---City
+---State
+---Zip
JobTypes
+---JobTypeID
+---JobTypeName
Note: the UserID will be linked to a Member table generated by a MembershipProvider.
Now, I am extremely new to relational databases and SQL so go lightly on me.
What about naming? Should it be just "Description" under the JobPostings table, or should it be "JobDescription" (same with other columns in that main table). Should it be "JobPostingID" or just "ID"?
General tips are appreciated as well.
Edit: The JobTypes are fixed for our project, there will be 15 job categories. I've made this a community wiki to encourage people to post.
A few thoughts:
Unless you know a priori that there is a limited list of job types, don't split that into a separate table;
Just use "ID" as the primary key on each table (we already know it's a JobLocationID, because it's in the JobLocations table...);
I'd drop the 'Job' prefix from the fields in JobPostings, as it's a bit redundant.
There's a load of domain-specific info that you could include, like salary ranges, and applicant details, but I don't know how far you're supposed to be going with this.
Job Schema http://gfilter.net/junk/JobSchema.png
I split Company out of Job Posting, as this makes maintaining the companies easier.
I also added a cross-reference (XREF) table that can store the relationship between companies and locations. You can set up a row for each company office, and have a very simple way to find "alternative job locations for this company".
This should be a fun project...good luck.
EDIT: I would add Created and LastModifiedBy (referring to a UserID) columns. These are great columns for general housekeeping.
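A possible sketch of that company/location cross-reference, reusing the JobLocations table from the question (the Companies table and its columns are assumptions based on the diagram description, not the actual linked schema):

CREATE TABLE Companies (
    CompanyID  INT PRIMARY KEY,
    Name       VARCHAR(200) NOT NULL,
    CompanyURL VARCHAR(500),
    LogoURL    VARCHAR(500)
);

CREATE TABLE CompanyLocations (
    CompanyID     INT NOT NULL REFERENCES Companies (CompanyID),
    JobLocationID INT NOT NULL REFERENCES JobLocations (JobLocationID),
    PRIMARY KEY (CompanyID, JobLocationID)   -- one row per company office
);

-- "Alternative job locations for this company" becomes a simple join:
SELECT l.City, l.State, l.Zip
FROM   CompanyLocations cl
JOIN   JobLocations l ON l.JobLocationID = cl.JobLocationID
WHERE  cl.CompanyID = 42;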
Looks good to me. I would also recommend adding Created, LastModified and Deleted columns to the user-updateable tables for future proofing.
Make sure you explicitly define your primary and foreign keys as well in your schema.
What about naming? Should it be just "Description" under the JobPostings table, or should it be "JobDescription" (same with other columns in that main table)? Should it be "JobPostingID" or just "ID"?
Personally, I specify generic-sounding fields like "ID" and "Description" with prefixes as you suggest. It avoids confusion about what the id/description applies to when you write queries later on (and saves you the trouble of aliasing them).
I'd recommend folding the data you're going to store in JobLocations back into the main table. It's OK to have a table for states and another for countries, but I doubt you want a table that contains every city/state/country pair; you really don't gain anything from it. What happens if someone goes in and edits their location? You'd have to check that no other job listing points to the location before editing it, or else create a new location and point to that instead.
My usual pattern is to store the address and city as text on the record, with an FK to a state table.
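A small sketch of that pattern, using assumed names:

CREATE TABLE States (
    StateID INT PRIMARY KEY,
    Code    CHAR(2) NOT NULL   -- e.g. 'CA'
);

CREATE TABLE JobPostings (
    JobPostingID INT PRIMARY KEY,
    Address      VARCHAR(200),                      -- free text stored with the record
    City         VARCHAR(100),                      -- free text stored with the record
    StateID      INT REFERENCES States (StateID)    -- the state is the only lookup
    -- ... plus the other JobPostings columns from the question ...
);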