Oracle drop user check - SQL

DB: Oracle11gR2
OS: Linux
I want to drop the USER1 Oracle user, which has already been locked for a few weeks now.
I can run "drop user USER1 cascade;" to drop the user, but before dropping it I want to confirm that nobody else is using, or has used, its objects since the account was locked.
How can I verify in Oracle that nobody has accessed USER1's objects in the last month or so?
Is there a database query/view available that we can use to make sure it's safe to run the DROP command?
Thanks

Ideally, you would have enabled auditing of accesses on the various objects when you locked the account and left that in place for however long you would need to feel comfortable. A month may be sufficient but there may be quarterly or annual processes as well that you need to consider.
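If you do decide to enable auditing now and wait, a minimal sketch for 11gR2 standard auditing might look like this (the table name is a placeholder, and setting AUDIT_TRAIL requires an instance restart if it isn't already enabled):

    -- Assumes standard auditing with AUDIT_TRAIL not yet enabled; changing it needs a restart
    ALTER SYSTEM SET audit_trail = db SCOPE = SPFILE;

    -- Audit every access to a USER1 object (repeat per object; "some_table" is a placeholder)
    AUDIT SELECT, INSERT, UPDATE, DELETE ON user1.some_table BY ACCESS;

    -- After the waiting period, review who touched what
    SELECT username, os_username, obj_name, action_name, timestamp
    FROM   dba_audit_object
    WHERE  owner = 'USER1'
    ORDER  BY timestamp;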
Assuming that you didn't enable auditing at the time and don't want to enable auditing now and wait another month, there are less complete approaches that you may be able to use (with the understanding that those approaches are going to provide less certainty).
You can query v$segment_statistics joined to v$statname to look at a variety of statistics about the table segments. "db block gets" and "consistent gets", for example, would show you how many times some process did a current or a consistent read on a block in a table. But it won't tell you what did the reads-- the background job that gathers statistics, for example, might read the data from the table. Those tables should accumulate data since the database was last restarted which may be significantly longer or shorter than the time period that you're interested in. You can get a list of the available statistics in the Oracle documentation to fine-tune exactly what you want to look for.
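A hedged sketch of that kind of check (the exact statistic names vary by version, so this simply lists everything recorded against USER1's segments):

    -- Everything recorded against USER1's segments since the last restart
    -- (includes reads done by background jobs such as statistics gathering)
    SELECT object_name, object_type, statistic_name, value
    FROM   v$segment_statistics
    WHERE  owner = 'USER1'
    AND    value > 0
    ORDER  BY object_name, statistic_name;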
You can query dba_hist_seg_stat rather than v$segment_statistics. That will break out the statistics by time period so it will tell you when reads and writes happened. But it won't tell you who did them. That also requires that you be licensed to use the AWR (otherwise querying the table may violate your license and create an issue if you're ever audited).
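A sketch of that kind of AWR query, assuming you are licensed for it (DBID and instance filters omitted for brevity):

    -- Per-snapshot activity against USER1's segments over the last 30 days
    SELECT sn.begin_interval_time,
           o.object_name,
           s.logical_reads_delta,
           s.db_block_changes_delta,
           s.physical_reads_delta
    FROM   dba_hist_seg_stat     s
    JOIN   dba_hist_seg_stat_obj o  ON o.obj#     = s.obj#
                                   AND o.dataobj# = s.dataobj#
    JOIN   dba_hist_snapshot     sn ON sn.snap_id = s.snap_id
    WHERE  o.owner = 'USER1'
    AND    sn.begin_interval_time > SYSTIMESTAMP - INTERVAL '30' DAY
    ORDER  BY sn.begin_interval_time, o.object_name;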
You can look at dba_dependencies to see if any objects depend on objects owned by the user in question. But that will only work for stored objects (views, procedures, etc.). It won't capture information about SQL statements that are submitted from applications or ad hoc queries issued by users.
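For example:

    -- Stored objects owned by other schemas that depend on USER1's objects
    SELECT owner, name, type, referenced_name, referenced_type
    FROM   dba_dependencies
    WHERE  referenced_owner = 'USER1'
    AND    owner <> 'USER1';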
If you don't want to enable auditing and wait an appropriate period, you may be better served by revoking privileges on the USER1 objects from whatever roles/users have them, rather than dropping the objects outright. That way, if something blows up from lack of privileges, it's relatively easy to restore the privilege without getting the object(s) back from backup. You could also create a trigger that fires on "permission denied" errors and tells you where the request was coming from.
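A rough sketch of that last idea using an AFTER SERVERERROR trigger; the log table and its columns are made up for illustration, and the ORA-01031 check covers the "insufficient privileges" error:

    -- Hypothetical log table for illustration
    CREATE TABLE priv_error_log (
        logged_at TIMESTAMP,
        db_user   VARCHAR2(30),
        os_user   VARCHAR2(100),
        machine   VARCHAR2(100),
        module    VARCHAR2(100)
    );

    CREATE OR REPLACE TRIGGER trg_log_insufficient_privs
    AFTER SERVERERROR ON DATABASE
    DECLARE
        PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
        IF ORA_IS_SERVERERROR(1031) THEN   -- ORA-01031: insufficient privileges
            INSERT INTO priv_error_log (logged_at, db_user, os_user, machine, module)
            VALUES (SYSTIMESTAMP,
                    SYS_CONTEXT('USERENV', 'SESSION_USER'),
                    SYS_CONTEXT('USERENV', 'OS_USER'),
                    SYS_CONTEXT('USERENV', 'HOST'),
                    SYS_CONTEXT('USERENV', 'MODULE'));
            COMMIT;
        END IF;
    END;
    /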

Related

SQL Server sp_clean_db_free_space

I just want to ask if I can use the stored procedure sp_clean_db_free_space as part of preventive maintenance?
What are the pros and cons of this built-in stored procedure? I'd just like to hear from someone who already uses it.
Thank you
You would typically not need to explicitly run this, unless you have specific reasons.
Quoting from SQL Server Books Online's page for sp_clean_db_free_space:
Delete operations from a table or update operations that cause a row to move can immediately free up space on a page by removing references to the row. However, under certain circumstances, the row can physically remain on the data page as a ghost record. Ghost records are periodically removed by a background process. This residual data is not returned by the Database Engine in response to queries. However, in environments in which the physical security of the data or backup files is at risk, you can use sp_clean_db_free_space to clean these ghost records.
Notice that SQL Server already has a background process to achieve the same result as sp_clean_db_free_space.
The only reason you might wish to explicitly run sp_clean_db_free_space is if there is a risk that the underlying database files or backups can be compromised and analysed. In such cases, any data that has not yet been swept up by the background process can be exposed. Of course, if your system has been compromised in such a way, you probably also have bigger problems on your hands!
Another reason might be that you have a time-bound requirement that deleted data should not be retained in any readable form. If you have only a general requirement that is not time-bound, then it would be acceptable to wait for the regular background process to perform this automatically.
The same page also mentions:
Because running sp_clean_db_free_space can significantly affect I/O activity, we recommend that you run this procedure outside usual operation hours.
Before you run sp_clean_db_free_space, we recommend that you create a full database backup.
To summarize:
You'd use the sp_clean_db_free_space stored procedure only if you have a time-bound requirement that deleted data should not be retained in any readable form.
Running sp_clean_db_free_space is IO intensive.
Microsoft recommend a full database backup prior to this, which has its own IO and space requirements.
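If you do decide to run it, a minimal hedged example (the database name and backup path are placeholders; the optional @cleaning_delay adds a pause between cleaning batches):

    -- Take a full backup first, as the documentation recommends
    BACKUP DATABASE SalesDb TO DISK = N'D:\Backups\SalesDb_full.bak';
    GO
    -- Then clean residual ghost data, ideally outside usual operating hours
    EXEC sp_clean_db_free_space @dbname = N'SalesDb', @cleaning_delay = 2;
    GO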
Take a look at this related question on dba.stackexchange.com: https://dba.stackexchange.com/questions/11280/how-can-i-truly-delete-data-from-a-sql-server-table-still-shows-up-in-notepad/11281

Table with multiple foreign keys -- only one not null

I'm trying to design a system where an administrator will have to approve changes to the data and perform various other administrative tasks -- add a user, add an admin, etc.
My idea is to have a notification table that contains these notifications, but the problem is that a notification can be of any of the previously mentioned types, i.e. its data is stored in one of many tables. Here is a picture to describe my current plan -- note, I'm sure it's not a proper ER diagram.
[ER-style diagram of the proposed notification and pending tables]
Also, the data goes into a pending table, that reflects the table it will eventually wind up in, provided the data is approved -- it's a staging ground of sorts. So, a pending_user is a user that is not in the user table. And as you can see the user table, amongst others, is not shown here, but one can use their imagination.
I'm concerned that the multiple NULL values in the pending table will have adverse effects that I'm not totally aware of, such as increased space usage and possibly increased query times. Also, I'm not sure how I'll implement the retrieval of these notifications. My naive approach is to select the first X notifications, analyze the rows to find the non-null column, retrieve the appropriate data, and then load all the data into a response.
Is there a more straight forward pattern for this type of problem?
Thanks in advance for any help.
I think the traditional way is to provide various levels of access/read/write rights to users. These access rights define what actions a user can and cannot perform. In this traditional approach, if a user has access to a certain function, he can perform it without further approval.
Also, traditionally there are some kind of audit logs that contain a trace of all important changes to the data. With such logs it would be possible to know who made a change (and when).
If you need to build a two-stage system, where a change has to go through an approval, I'd add a flag column to each important table that would indicate that values in the given row are not final and have to be approved. The table would store all historical changes to the data and with the help of this flag the system would know which variant is the latest approved version and which variant is pending and waiting for approval.
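A minimal sketch of that flag-column idea, with made-up table and column names (SQL Server-flavoured syntax for illustration):

    -- Each important table keeps every submitted version plus an approval flag
    CREATE TABLE app_user (
        user_id         INT IDENTITY PRIMARY KEY,
        user_name       NVARCHAR(100) NOT NULL,
        email           NVARCHAR(255) NOT NULL,
        approval_status CHAR(1) NOT NULL DEFAULT 'P',   -- 'P' pending, 'A' approved, 'R' rejected
        submitted_at    DATETIME NOT NULL DEFAULT GETDATE(),
        approved_by     INT NULL,
        approved_at     DATETIME NULL
    );

    -- The administrator's "notification" list is simply the pending rows
    SELECT user_id, user_name, submitted_at
    FROM   app_user
    WHERE  approval_status = 'P'
    ORDER  BY submitted_at;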
I would not try to make a single universal table that would hold data related to changes in many different tables. Each table is different and approval process for each table is likely to be different. I doubt that you'll have more than a dozen entities that are important enough to go through this approval process.

How to get SQL executed or transaction history on a Table (AS400) DB2 for IBM i

I have an issue in our database (AS400 - DB2): in one of our tables, all the rows were deleted. I do not know if it was a program or SQL that a user executed. All I know is that it happened at around 3am. I did check for any scheduled jobs at that time.
We managed to get the data back from backups, but I want to investigate what deleted the records, or which user did.
Are there any logs on the AS400 for physical tables to check what SQL was executed, and when, against a specified table? This will help me determine what caused this.
I tried checking in System i Navigator but could not find any logs... Is there a way of getting transactional data on a table using System i Navigator or the green screen? And can I get the SQL that executed in that timeframe?
Any help would be appreciated.
There was no mention of how the time was inferred/determined, but for lack of journaling, I would suggest a good approach is to immediately gather information about the file and member: DSPOBJD for both *SERVICE and *FULL, DSPFD for *ALL, DMPOBJ, and perhaps even a copy of the row for the TABLE from the catalog [to include the LAST_ALTERED_TIMESTAMP / ALTEREDTS column of SYSTABLES, or the based-on field DBXATS from QADBXREF]. Gathering those is worthwhile almost only if done before any other activity [especially before any recovery activity], and can help establish the time of the event and perhaps allude to what the event was; most timestamps reflect only the most recent activity against the object [rather than acting as a historical log], so any recovery activity is likely to cause the loss of any timestamps that would reflect the prior event/activity.
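For the catalog piece specifically, a quick hedged example (library and table names are placeholders):

    -- Capture the catalog row for the affected table before any recovery work;
    -- the timestamp reflects only the most recent activity against the object
    SELECT TABLE_SCHEMA, TABLE_NAME, LAST_ALTERED_TIMESTAMP
    FROM   QSYS2.SYSTABLES
    WHERE  TABLE_SCHEMA = 'MYLIB'
    AND    TABLE_NAME   = 'MYTABLE';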
Even if there was no journal for the file and nothing in the plan cache, there may have been [albeit unlikely] an active SQL Monitor. An active monitor should be visible somewhere in the iNav GUI as well. I am not sure of the visibility of a monitor that may have been active in a prior time-frame.
Similarly despite lack of journaling, there may be some system-level object or user auditing in effect for which the event was tracked either as a command-string or as an action on the file.member; combined with the inferred timing, all audit records spanning just before until just after can be reviewed.
Although there may have been nothing in the scheduled jobs, the History Log (DSPLOG) since that time may show jobs that ended, or [perhaps shortly] prior to that time may show jobs that started, which are more likely to have been responsible. In my experience, the name of the job is often indicative; for example, the job name matching the file name, perhaps only because the request was submitted from PDM. Any spooled [or otherwise still available] joblogs could be reviewed for possible references to the file and/or member name; perhaps a completion message for a CLRPFM request.
If the action may have been from a program, the file may be recorded as a reference object, such that output from DSPPGMREF may reveal programs with the reference, and any [service] program that is an SQL program could have its embedded SQL statements revealed with PRTSQLINF; the last-used dates for those programs could be reviewed for possible matches. Note: module and program sources can also be searched, but there is no way to know into what name they were compiled, or into what they may have been bound if created only temporarily for the purpose of binding.
Using System i Navigator, expand Databases. Right click on your system database. Select SQL Plan Cache-> Show Statements. From here, you can filter based on a variety of criteria.
This is not sure-fire, but often saves me some time. Using System i Navigator, right-click on the table and choose Index Advisor. If you're lucky, one or more indexes are advised. If so, sort by date last advised and right click on the index with the newest date and select Show Statements... In that dialog box, either sort by date to help narrow things down or just scroll through the statements to find the one you're interested in. Right-click it and select Work with SQL Statement and there you go.

Should I create separate SQL Server database for each user?

I am working on Asp.Net MVC web application, back-end is SQL Server 2012.
This application will provide billing, accounting, and inventory management. Users will create an account by signing up, just like http://www.quickbooks.in. Each user will create some masters and various transactions. There is no limit; a user can create unlimited records in the database.
I want to keep database performance stable under heavy data loads. I am maintaining proper indexing and primary keys, but there could be a heavy load on the database per user.
So, should I create a separate database for each user, or should I maintain one database keyed by UserID, adding a UserID column to each table and partitioning based on UserID?
I am not an expert in SQL Server, so please provide suggestions with clear specifics.
Please let me know if any information is missing.
A DB per user is what happens when customers need to be able to pack up and leave, taking the actual database with them. Think of a self-hosted WordPress website. Or when there are incredible risks to one user accidentally seeing another user's data, so it's safer to rely on the server's security model than to rely on remembering to add the UserID filter to all your queries. I can't imagine a scenario like that, but who knows -- maybe if the privacy laws allowed for jail time, I would rather have the data partitioned by security rules than rely on carefully written WHERE clauses.
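For reference, the shared-database approach the question describes boils down to something like this (names are illustrative; every query has to remember the tenant filter):

    -- One database; every tenant-owned table carries a UserID column
    CREATE TABLE dbo.Invoice (
        InvoiceID   INT IDENTITY PRIMARY KEY,
        UserID      INT NOT NULL,               -- owning user/tenant
        InvoiceDate DATE NOT NULL,
        Amount      DECIMAL(18, 2) NOT NULL
    );
    CREATE INDEX IX_Invoice_UserID ON dbo.Invoice (UserID, InvoiceDate);

    -- Forgetting this WHERE clause is exactly the risk described above
    SELECT InvoiceID, InvoiceDate, Amount
    FROM   dbo.Invoice
    WHERE  UserID = 42;   -- the current user's id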
If you do go database-per-user, creating a new user will be 10x more effort. While INSERT, UPDATE and so on stay the same from version to version, the syntax for database creation, user creation, permission granting and so on evolves enough to break those scripts with each SQL Server version upgrade.
Also, this will multiply your migration headaches by the number of users. Let's say you have 5000 users and you need to add some new columns, change a column's data type, update a trigger, and so on. Instead of running that change script once, you need to run it 5000 times.
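A hedged sketch of what that looks like in practice, assuming a hypothetical 'UserDb_%' naming convention and a made-up dbo.Invoice table in every per-user database:

    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR FOR
        SELECT name FROM sys.databases WHERE name LIKE N'UserDb_%';

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Apply the same schema change in every per-user database
        SET @sql = N'USE ' + QUOTENAME(@db) + N';
                     ALTER TABLE dbo.Invoice ADD DueDate date NULL;';
        EXEC (@sql);
        FETCH NEXT FROM dbs INTO @db;
    END
    CLOSE dbs;
    DEALLOCATE dbs;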
Per-user DBs also probably waste disk space. Each of those databases is going to have its own transaction log sitting idle, taking up at least the minimum log space.
As for load, if collectively your 5000 users are doing 1 billion inserts, updates and so on per day, my intuition tells me that it's going to be faster on one database, unless there is some sort of contention issue (everyone reading and writing to the same pages of the same table at the same time). Each database has machine resources (probably threads and memory) doing per-database housekeeping, so those extra DBs can't be free.
Anyhow, the best thing to do is to simulate the two architectures and use a random data generator to simulate load and see how they perform.
It's not an easy answer to give.
First, there is logical design to be considered. Then you have integrity, security, management and performance (in this very order).
A database is a logical unit of data, self contained. Ideally, you should be able to take a database, move it to another instance, probably change the connection strings and be running again.
All the constraints are database-level. No foreign keys can exist referencing some object outside the database.
So, try thinking in these terms first.
How would you reliably prevent one user from messing up another user's data? Keep in mind that it's just a matter of time before someone opens an Excel sheet and fires up queries against the database, bypassing your application. Row-level security in SQL Server is something you don't want to deal with.
Multiple databases mean that all management tasks should be scripted out and executed on all databases. Yes, there is some overhead to it, but once you set it up it's just a matter of monitoring. If a database goes suspect, it's a single customer down, not all of them. You can even have different versions for different customers if each customer has its own database. Additionally, if you roll out an upgrade, you can do it per customer, so the impact will be much smaller.
Performance is the least relevant factor here. Of course, it really depends on how many customers and how much data, but proper indexing will solve these issues. Scale-out is much easier with multiple databases.
BTW, partitioning, as you mentioned it, is never a performance booster, it's simply a management feature, allowing for faster loading and evicting of data from a table.
I'd probably put each customer in a separate database, but it's up to you eventually to make the decision for yourself. Hope I've helped some with this.

Access SQL that triggered the trigger from within the trigger (Sybase)

Is there a way to access the SQL that triggered a trigger from within the trigger? I've managed to get it by joining to the master..monProcessSQLText MDA table, but this only works for users with mon_role and I don't want to give that to everyone. Is there a global variable I've missed?
I'm trying to log all the updates run against a table so I can trace it back to an IP address and username.
This is with ASE 12.5.
If you are trying to
"log all the updates run against a table so I can trace it back to an IP address and username"
then a trigger is definitely the wrong way to go about it. Triggers were not designed for that, and there are other ASE facilities that were. It is not about the table; it is about security and monitoring in general.
Sybase Auditing.
It takes a bit of setting up, and has much less overhead than the MDA tables; but most important, it was designed for auditing (MDA was not). And there are no coding requirements such as there are for MDA. It is highly configurable; the idea is to capture only what you need, and no more.
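A hedged sketch, assuming auditing has already been installed and enabled on the server, and run from the database that owns the table (the table name is a placeholder):

    -- Audit updates and deletes against the table, for all logins
    sp_audit "update", "all", "mytable", "on"
    go
    sp_audit "delete", "all", "mytable", "on"
    go
    -- Audit records land in the sysaudits_0N tables in the sybsecurity database,
    -- which can be queried later to see who did what, when, and from where.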
Monitoring.
I would not recommend the MDA tables, but since you have them in place, have enabled monitoring, and have accepted the 22% overhead for capturing SQL text... The info is very transient. In order to use it for any relevant purpose, such as yours, you need to write a capture-and-store mechanism, archiving all required info to an archive database. This has to be done on an ongoing basis, and completely independently of any trigger, etc. You can also filter on the fly to reduce the volume of data stored (warning: it is huge), purge data over 7 days old, and so on. It is a little project in itself, which is why such tools are commercially available from 3rd parties.
Once either of these facilities is in place, then, separately, whenever you wish to inquire about who updated a table, when, and from where, all you need to do is inspect the archive. Nothing to do with a trigger, or difficulties getting the info from a trigger, or giving admin privileges to ordinary users.
Also, it needs to be appreciated that you do not have normal security in place, the tables are being updated directly by users; thus direct update permissions have been granted to either specific users, or worse, all users. The consequence is, there is no way of knowing who is updating the table, and who is breaking the data or referential integrity.
The secure method is to place the entire transaction in a stored proc, thus eliminating the possibility of incomplete transactions (as well as improving execution speed); and to grant permissions on the procs, not the tables, thus eliminating direct updates. Over time, you may wish to implement security in the server, so that the consequences do not have to be chased down and closed one by one, a process with no finite end.
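A minimal sketch of that approach (proc, table, column and role names are made up):

    -- Wrap the update in a proc and grant execute on the proc, not update on the table
    create procedure dbo.upd_account_balance
        @account_id  int,
        @new_balance money
    as
    begin
        update dbo.account
        set    balance    = @new_balance,
               updated_by = suser_name()   -- record who ran the proc
        where  account_id = @account_id
    end
    go

    revoke update on dbo.account from public
    go
    grant execute on dbo.upd_account_balance to app_role
    go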
As far as Auditing goes, if security were in place, then the auditing burden is also substantially reduced: you need to audit stored proc executions only. Otherwise, you need to audit all updates to all tables.