In PostgreSQL, how to get all users who are logged into a session, with their IP address and the query they run whenever they access the database? - sql

I am using PostgreSQL. About five people will be using the same database. I want to see who is running which query, both from the HeidiSQL tool and from a web application.
I tried using the pg_stat_activity view to get the details, but it returns only one row per connection, showing only that machine's current query.

To log who was connected, use log_connections:
https://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-CONNECTIONS
To log which statements were executed, use log_statement:
https://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-STATEMENT
pg_stat_activity shows connected sessions only:
https://www.postgresql.org/docs/current/static/monitoring-stats.html
One row per server process, showing information related to the current
activity of that process, such as state and current query.
https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW
You might also be interested in https://github.com/pgaudit/pgaudit and https://www.postgresql.org/docs/current/static/pgstatstatements.html
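A minimal sketch of a pg_stat_activity query that shows who is connected from where and what they are running (column names per current PostgreSQL versions; you may need superuser or the pg_read_all_stats role to see other users' query text):

```sql
-- One row per backend: user, client IP, and the current (or last) query.
SELECT pid,
       usename          AS username,
       client_addr      AS ip_address,
       client_hostname,
       application_name,
       state,
       query            AS current_or_last_query,
       query_start
FROM   pg_stat_activity
WHERE  datname = current_database()
ORDER  BY query_start DESC NULLS LAST;
```

Note that this is a snapshot of current activity only; for a running history of every statement you need log_statement or pg_stat_statements as mentioned above.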

SQL Server query to show last activity time for current session

Back end: SQL Server 2017 Express
Front End: Microsoft Access 2019 with ODBC linked tables to the SQL Server database
Objective: To detect inactivity of say 30 minutes in the current session, and then exit from Access
I would like a SQL Server query (to be called from the front-end Access database, via a timer) which will return the date/time of the last SQL statement (e.g. select/insert/update/delete) for the current session, so that the Access application can exit after a defined period of inactivity.
So far, I have looked at sp_who, sp_who2, sysprocesses, dm_exec_connections, dm_db_index_usage_stats and dm_exec_sessions.
Whilst these return useful-looking columns such as LastBatch, the problem is that the act of querying the database updates the returned value. For instance, if I run sp_who2 and look at the row for my SPID, the value of LastBatch is always the same as GETDATE().
I know that the options above would work if I was monitoring another session (SPID) but I'm looking for a way to find the time of last activity (excluding sp_who2 etc) for my own session.
Any suggestions?
You are correct that the last request time is the time of the request requesting the last request time. (Had to write that.) Maybe you should be detecting if Access is idle instead? Searching "automatically close access if idle" has promising results.
https://learn.microsoft.com/en-us/office/vba/access/concepts/miscellaneous/detect-user-idle-time-or-inactivity
https://www.iaccessworld.com/set-program-close-automatically/
https://www.tek-tips.com/faqs.cfm?fid=1432
If you absolutely have to do the check in SQL Server, then perhaps use a job that checks connections. It can kill connections that have been idle for a while without an open transaction. I don't know whether the Access connections will be easy to identify, as not all apps provide their name, and I also don't know how the Access front end will respond to being killed. I'd be leery of doing this.
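A sketch of what such a check might look like, using sys.dm_exec_sessions. The 30-minute threshold is illustrative, and note the caveat from the question: this must run from a separate monitoring session or SQL Agent job, because querying from the session you are checking resets that session's own activity time.

```sql
-- Find user sessions idle for 30+ minutes (run from a separate session).
SELECT s.session_id,
       s.login_name,
       s.program_name,
       s.last_request_end_time,
       DATEDIFF(MINUTE, s.last_request_end_time, GETDATE()) AS idle_minutes
FROM   sys.dm_exec_sessions AS s
WHERE  s.is_user_process = 1
       AND s.session_id <> @@SPID
       AND DATEDIFF(MINUTE, s.last_request_end_time, GETDATE()) >= 30;
-- To disconnect one of them: KILL <session_id>;  -- use with care
```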

I am not able to see the QH_CLIENT_WORKSTATION_NAME column; it is null. I'm querying from Aginity connected to a Netezza server.

I would like to start monitoring our system closely to see who ran a query and at what time.
Currently, in the tables from the HIST DB, we can see the query text, username, date, time, and client IP. What we are more interested in is the client host machine name.
When we run a query requesting the client hostname, the output comes back as unknown.
Below is the query that we are running to get our required information:
SELECT *
FROM NZ_QUERY_HISTORY
Is there anything else that we can look at or implement to be able to see the client machine name?
FYI: When we run show session all; we do in fact see the client host machine.
We built our own view on top of the query history database when we started using Netezza a few years ago, and it does not include the DNS name (hostname) of the client. I guess we left it out because it is empty most of the time. Another guess is that our DNS setup doesn't allow reverse DNS lookups for all IP addresses from the Netezza host.
Instead we rely on:
the client IP address and Netezza username
the username & IP address on the client machine
In combination that is quite powerful.
Furthermore we add a bit of 'pre-SQL' to the connection configuration of the client tools we use (SAS, PowerCenter, Business Objects, etc.) and put as much info as we can into the 4 'client_application_*' variables. See here for syntax: https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.dbu.doc/r_dbuser_set.html
For PowerCenter we add the workflow, session and other names...
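Such pre-SQL might look roughly like the following. The exact session variable names are on the IBM SET syntax page linked above; the names and values here are illustrative assumptions, not verified Netezza identifiers:

```sql
-- Hypothetical pre-SQL placed in a client tool's connection settings,
-- tagging the session so query history shows where it came from.
SET CLIENT_APPLICATION_NAME = 'powercenter:wf_load_sales';
SET CLIENT_USERID = 'etl_batch';
SET CLIENT_WORKSTATION_NAME = 'etl-node-01';
```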

Talend's tOracleInput does not read data

My colleague created a project in Talend to read data from Oracle database.
I used his project and so I have his Job context with connection parameters to Oracle DB and Talend successfully connects on that computer.
I've created a trivial job composed of two components: tOracleInput, which should read the data, and tLogRow, which should redirect the output to Talend's console.
The problem is that when I start the job, no data is output to the console; instead of the rows-per-second counter I just see the Starting... status.
Could it be a connection issue, an inappropriate Java version on my computer, or something else?
The Starting... status means the query is being executed. A simple query against the database usually takes only a few seconds, because Oracle starts returning rows before it has completed a full table scan. You keep this streaming behavior with joins and filters, but not with GROUP BY or ORDER BY.
On the other hand, if you are querying a view, executing a complex query, or simply using DISTINCT, execution can take several minutes, because Oracle then has to build the full result set on the database side before returning any records.
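The contrast can be sketched with two queries (the table and column names are made up for illustration):

```sql
-- Oracle can stream matching rows as it finds them, so tLogRow starts
-- printing almost immediately:
SELECT order_id, status FROM orders WHERE status = 'OPEN';

-- DISTINCT (like GROUP BY / ORDER BY) forces Oracle to materialize the
-- full result set first, so the job sits at "Starting..." until done:
SELECT DISTINCT status FROM orders ORDER BY status;
```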

Find what application is connected with what login to what database, what table and what columns

Is there a script that shows the current activity
at the application -> login -> database -> table -> column level?
I have used
sp_who2, sp_who2 'Active', sysprocesses
Activity Monitor
Audit
Profiler
Triggers
Extended Events
and couldn't get column-level data for connections. I was able to get the SQL statements, table name, database, instance, application, and login name, but I couldn't get column names.
The reason I am trying to track all usage is to re-architect the database.
Any help is appreciated.
sp_who2 and sp_who are the ones I have always used to get that kind of information. You can also check sys.sysprocesses to see the processes running on an instance of SQL Server.
If you want the columns involved in the queries, then consider using SQL Server tracing.
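For the statement-level part (everything except column names), a sketch using the DMVs might look like this; column-level detail would still require tracing or Extended Events:

```sql
-- Current requests with login, application, database, and SQL text.
SELECT s.session_id,
       s.login_name,
       s.program_name,
       DB_NAME(r.database_id) AS database_name,
       t.text                 AS sql_text
FROM   sys.dm_exec_sessions AS s
JOIN   sys.dm_exec_requests AS r
       ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  s.is_user_process = 1;
```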

How to determine an Oracle query without access to source code?

We have a system with an Oracle backend to which we have access (though possibly not administrative access) and a front end to which we do not have the source code. The database is quite large and not easily understood - we have no documentation. I'm also not particularly knowledgeable about Oracle in general.
One aspect of the front end queries the database for a particular set of data and displays it. We have a need to determine what query is being made so that we can replicate and automate it without the front end (e.g. by generating a csv file periodically).
What methods would you use to determine the SQL required to retrieve this set of data?
Currently I'm leaning towards the use of an EeePC, Wireshark and a hub (installing Wireshark on the client machines may not be possible), but I'm curious to hear any other ideas and whether anyone can think of any pitfalls with this particular approach.
Clearly there are many methods. The one that I find easiest is:
(1) Connect to the database as SYS or SYSTEM
(2) Query V$SESSION to identify the database session you are interested in.
Record the SID and SERIAL# values.
(3) Execute the following commands to activate tracing for the session:
exec sys.dbms_system.set_bool_param_in_session( *sid*, *serial#*, 'timed_statistics', true )
exec sys.dbms_system.set_int_param_in_session( *sid*, *serial#*, 'max_dump_file_size', 2000000000 )
exec sys.dbms_system.set_ev( *sid*, *serial#*, 10046, 5, '' )
(4) Perform some actions in the client app
(5) Either terminate the database session (e.g. by closing the client) or deactivate tracing ( exec sys.dbms_system.set_ev( sid, serial#, 10046, 0, '' ) )
(6) Locate the udump folder on the database server. There will be a trace file for the database session showing the statements executed and the bind values used in each execution.
This method does not require any access to the client machine, which could be a benefit. It does require access to the database server, which may be problematic if you're not the DBA and they don't let you onto the machine. Also, identifying the proper session to trace can be difficult if you have many clients or if the client application opens more than one session.
Start with querying Oracle system views like V$SQL, V$SQLAREA and V$SQLTEXT.
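For example, a quick look at recently executed statements (assuming your user can see the v$ views):

```sql
-- Recently active statements in the shared pool, most recent first.
-- sql_text here is truncated; join V$SQLTEXT for the full statement.
SELECT sql_id,
       parsing_schema_name,
       executions,
       last_active_time,
       sql_text
FROM   v$sql
ORDER  BY last_active_time DESC;
```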
Which version of Oracle? If it is 10+ and you have administrative access (sysdba), then you can relatively easily find executed queries through Oracle Enterprise Manager.
For older versions, you'll need access to the views tuinstoel mentioned in his answer.
You can get the same data through TOAD for Oracle, which is quite a capable piece of software, but expensive.
Wireshark is indeed a good idea, it has Oracle support and nicely displays the whole conversation.
A packet sniffer like Wireshark is especially interesting if you don't have admin access to the database server but do have access to the network (for instance because there is port mirroring on the Ethernet switch).
I have used these instructions successfully several times:
http://www.orafaq.com/wiki/SQL_Trace#Tracing_a_SQL_session
"Though possibly not administrative access" - someone should have administrative access, probably whoever is responsible for backups. At the very least, I expect you'd have a user with root/Administrator access to the machine on which the Oracle database is running. That administrator should be able to log in with the
"SQLPLUS / AS SYSDBA" syntax, which gives full access (and can be quite dangerous). root could 'su' to the oracle user and do the same.
If you really can't get admin access then as an alternative to wireshark, if your front-end connects to the database through an Oracle client, look for the file sqlnet.ora. You can set trace_level_client, trace_file_client and trace_directory_client and get it to log the Oracle network traffic between the client and database server.
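A client-side sqlnet.ora fragment for this might look like the following (the directory path and file prefix are illustrative; trace levels range from OFF up to SUPPORT, the most verbose):

```
# sqlnet.ora on the client machine - logs Oracle Net traffic,
# including the SQL sent to the server.
TRACE_LEVEL_CLIENT = SUPPORT
TRACE_DIRECTORY_CLIENT = /tmp/oratrace
TRACE_FILE_CLIENT = client_sql
```

Remember to set the level back to OFF afterwards; SUPPORT-level traces grow quickly.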
However, it is possible that the client calls a stored procedure and retrieves the data as output parameters or a ref cursor, in which case you may not see the query being executed through that mechanism. If so, you will need admin access to the db server, and trace as per Dave Costa's answer.
A quick and dirty way to do this, if you can catch the SQL statement(s) in the act, is to run this in SQL*Plus:
set verify off lines 140 head on pagesize 300
column sql_text format a65
column username format a12
column osuser format a15
break on username on sid on osuser
select s.username, s.sid, s.osuser, t.sql_text
from v$sqltext_with_newlines t, v$session s
where t.address = s.sql_address
and t.hash_value = s.sql_hash_value
order by s.sid, t.piece
/
You need access to those v$ views for this to work. Generally that means connecting as SYSTEM.