I have an application which is based on a Postgres database, and I need to be able to examine the requests the application sends to the database.
I want to have Postgres log all of the queries it receives somewhere that I can examine them in order to rebuild some of its functionality in another application.
Can someone recommend a simple way to log the queries Postgres receives on a Windows operating system?
Thanks,
Craig
Edit postgresql.conf for your PostgreSQL server, and either change log_statement to 'all' or change log_min_duration_statement to 0.
After the change you have to reload the PostgreSQL configuration, and the queries will be logged to the PostgreSQL log file.
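For example, a minimal sketch of the relevant postgresql.conf lines (the values are illustrative; note that changing logging_collector requires a full restart rather than a reload):

log_statement = 'all'          # log every statement
logging_collector = on         # capture output into log files
log_directory = 'pg_log'       # relative to the data directory

To reload, run SELECT pg_reload_conf(); from psql, or restart the PostgreSQL service from the Windows Services console.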
There are some features in our LOB application that allow users to define their own queries to retrieve data for reports and listings within the app. The problem we are encountering is that sometimes the queries they have written are really heavy (and sometimes erroneous) and cause massive load on the server.
Removing these features is out of the question, but I'm wondering if there is a way to create some type of sandbox within SQL Server so that the queries they execute are only allotted a certain amount of resources, therefore not giving them the chance to cause any damage to anyone else using the system. Any ideas?
Resource Governor has been mentioned in the comments above already (a minimal sketch is included at the end of this answer). One other solution I can think of is using SQL Server AlwaysOn Availability Groups.
The last place I worked had this kind of setup. There is a primary server which takes in all the transactions that write to the database, with a secondary in case the primary fails. Added to this, we also had read-only replicas in the availability group.
The main purpose of this is that, in the event your main server goes down, you are automatically transferred to another replica. When you connect your application to the database, you connect it to the availability group rather than a specific server, so if a server goes down you are automatically transferred to a secondary server instead. However, it can also be used to optimise application functionality that just needs read-only access by taking load off the primary server.
For any functionality that we knew only needed read-only access, we could connect to the availability group and add ApplicationIntent=ReadOnly to the connection string, which means we're using the read-only replica rather than the primary, leaving the primary for regular transactions. (IIRC, by default the primary will accept any read/write connection, so you have to configure the primary not to accept read-only connections.)
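For example (the listener and database names here are hypothetical):

Server=tcp:MyAgListener,1433;Database=AppDb;ApplicationIntent=ReadOnly;Integrated Security=SSPI;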
Anyway, the jumping-off point for reading up on this is here: https://msdn.microsoft.com/en-us/library/ms190202.aspx
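As for Resource Governor, a minimal hedged sketch of what a sandbox pool might look like (the pool, group, and login names are hypothetical; the classifier function must live in master, and the feature requires Enterprise edition):

CREATE RESOURCE POOL ReportPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 25);
GO
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- route the reporting login into the limited pool; everyone else is unaffected
    IF SUSER_SNAME() = N'report_user'
        RETURN N'ReportGroup';
    RETURN N'default';
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO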
The latest Windows 10 1903 update has a built-in Windows Sandbox feature, in which you can run SQL Server inside its own sandbox. I don't think SQL Server itself has a built-in sandbox environment, as that would be practically impossible to manage on a normal Windows server that isn't using sandboxing, if you know what I mean.
I have users entering data in SharePoint (running on SQL Server), but my application to view that data will be an Oracle APEX app running on Oracle, obviously. How do I have the data pushed into the Oracle DB automatically?
First off, are you sure that you need to replicate the data to Oracle? Oracle Heterogeneous Services allows you to create a database link in Oracle that connects to a non-Oracle database using ODBC (assuming you use the Transparent Gateway for ODBC which is free). Your APEX application could then query and report on data that is in SQL Server by issuing queries that run over the database link. Tim Hall has a good article (though it's a bit dated and some of the components have been renamed, the general approach is still the same) on configuring Heterogeneous Services.
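A hedged sketch of what the link and a query over it might look like (the link name, credentials, and TNS alias for the gateway are all hypothetical):

CREATE DATABASE LINK sqlserver_link
  CONNECT TO "sqlserver_user" IDENTIFIED BY "password"
  USING 'dg4odbc_tns_alias';

SELECT * FROM "SomeTable"@sqlserver_link;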
If you do need to replicate the data, you can create materialized views in Oracle that query the objects in SQL Server using the database link you created with Heterogeneous Services and schedule those materialized views to refresh on a regular basis. The materialized views will need to do a complete refresh, though, which means that every row will need to be copied from SQL Server to Oracle every time there is a refresh. That generally limits the frequency with which you can realistically have refreshes happen. If you need the data to be replicated to the Oracle database and you need to send incremental changes so that the Oracle side doesn't lag too far behind, you can use Streams from a non-Oracle database to an Oracle database but that involves a lot more work.
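A hedged sketch of such a materialized view, reusing the hypothetical link above and refreshing hourly:

CREATE MATERIALIZED VIEW sharepoint_data_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1/24
AS SELECT * FROM "SomeTable"@sqlserver_link;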
In SQL Server you can set up linked servers that allow you to view data from other DBs. You might see if Oracle has something similar, if not the same. Alternatively, you could use SQL Server Integration Services (SSIS) to push the data over to an Oracle table. Unfortunately I only know how to set up linked servers in SQL Server, and I don't have enough experience with SSIS to tell you how to do that, but those are the first two options I can think of that you might explore further.
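For the linked-server route from the SQL Server side, a minimal hedged sketch (the link name, TNS alias, and table names are hypothetical, and it assumes the Oracle OLE DB provider is installed):

EXEC sp_addlinkedserver
    @server = N'ORACLE_LINK',
    @srvproduct = N'Oracle',
    @provider = N'OraOLEDB.Oracle',
    @datasrc = N'ORCL';

SELECT * FROM ORACLE_LINK..SCHEMA_NAME.TABLE_NAME;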
Here's a link I found that might be helpful as well: http://www.dba-oracle.com/t_connecting_sql_server_oracle.htm
There's no way to do it "automatically" that I know of that will work across DBMSs. ETL tools like SQL Server Integration Services might help, but there's going to be a loading delay (as it will have to poll for changes). You could build some update triggers on the SharePoint database tables, but that's going to turn into a support nightmare.
I am using Oracle SQL Dev 2.1.1.64
I work with an application that uses an Oracle database for storage.
Is there any way in SQL Dev to monitor and log all the insert commands that are "coming" from the web application into the database? Can you tell me how to do that?
audit insert table by <web-application-user> by access
should get you started.
Be sure to set the parameters audit_trail and audit_file_dest as you need them.
After that, you find the operations either in sys.aud$ or in the directory specified by audit_file_dest.
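A hedged sketch of the full sequence (the username is hypothetical; audit_trail is a static parameter, so the instance needs a restart after changing it):

ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;
-- restart the instance, then:
AUDIT INSERT TABLE BY web_app_user BY ACCESS;
-- review the captured statements:
SELECT username, action_name, timestamp
  FROM dba_audit_trail
 WHERE action_name = 'INSERT';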
There is also fine-grained auditing (FGA), into which you might take a look, but from your question it would seem to be overkill.
You can write a trigger for the tables you want to monitor. If you are only interested in the insert queries coming from the web application, you can check in the trigger for a specific username/schema accessing the table, and use your web application's username for that check.
Alternatively you can also use Oracle's AUDIT feature. It requires a little bit of Oracle Database Administration knowledge to implement though...
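A minimal sketch of such a trigger; the table, columns, and username are hypothetical:

CREATE OR REPLACE TRIGGER trg_orders_log_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  -- only record rows inserted by the web application's account
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'WEB_APP_USER' THEN
    INSERT INTO orders_insert_log (order_id, inserted_by, inserted_at)
    VALUES (:NEW.order_id, 'WEB_APP_USER', SYSTIMESTAMP);
  END IF;
END;
/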
You could query v$sql, but you would need the relevant grants to be able to do this.
For long-running sessions you can also monitor progress using v$session_longops.
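For example (hedged; the parsing schema name is hypothetical, and command_type 2 denotes INSERT statements):

SELECT sql_text, executions
  FROM v$sql
 WHERE parsing_schema_name = 'WEB_APP_USER'
   AND command_type = 2;

SELECT sid, opname, sofar, totalwork
  FROM v$session_longops
 WHERE sofar < totalwork;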
Hope this helps.
Create a trigger that writes to a journaling table whenever a change of data in the table happens (insert, update, delete).
Before delete, after insert, after update triggers are what you want.
It won't specifically log only the web application, but if you log the user making the change you will be able to filter on that when viewing the data.
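A hedged sketch of such a journaling trigger; the tables and columns are hypothetical, and the three timings are collapsed into a single AFTER trigger here for brevity:

CREATE OR REPLACE TRIGGER trg_orders_journal
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW
DECLARE
  v_op VARCHAR2(6);
BEGIN
  v_op := CASE
            WHEN INSERTING THEN 'INSERT'
            WHEN UPDATING  THEN 'UPDATE'
            ELSE 'DELETE'
          END;
  -- record who changed what, and when, for later filtering
  INSERT INTO orders_journal (operation, order_id, changed_by, changed_at)
  VALUES (v_op, COALESCE(:NEW.order_id, :OLD.order_id), USER, SYSTIMESTAMP);
END;
/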
I've written an application that connects to a system over the network and logs events from that system to a SQL Server database.
I need to test the behaviour of the application when the SQL Server goes down. Is there a way to kill just the one database on a SQL Server system without affecting the others?
If not, is there a way to simulate the SQL Server going down?
It shouldn't matter but the app is written in Java.
You can use sqlcmd to set the database in single-user mode or detach the database using T-SQL. This will simulate the database going offline in a controlled fashion, but not the server going down in an uncontrolled fashion, which could perhaps be the more useful test.
Extending @bzlm's answer:
USE master
GO
ALTER DATABASE YourDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
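To bring the database back to normal afterwards (same hypothetical database name):

ALTER DATABASE YourDB SET MULTI_USER
GO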
You can also take the database offline:
ALTER DATABASE YourDatabase SET OFFLINE
GO
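And to bring it back online once the test is done:

ALTER DATABASE YourDatabase SET ONLINE
GO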
In addition to the other answers: You might even want to test different failure modes.
The other answers simulate the DB going down, while the server the DB runs on stays up.
You might for example also want to simulate a network failure or a server crash; this could probably be done by altering the network settings on the app server, or just by pulling its (network) plug.
Of course, whether this makes sense depends on your app.
I simply stop the SQL Server service.
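For example, from an elevated command prompt (assuming the default instance; named instances use a service name of the form MSSQL$InstanceName):

net stop MSSQLSERVER

and net start MSSQLSERVER to bring it back up.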
I am using SQL Server 2008 Enterprise for development. In the SQL Server logs I find items like:
2009-09-20 19:54:33.55 spid53 Starting up database 'DummyOrderDB'.
My confusion is, I thought we could only start/stop a database server instance (the contained databases being started/stopped when the containing database server instance starts/stops). Can we start/stop a database without touching the database server instance? I did not find such a menu option in SSMS.
Thanks in advance,
George
That is an auto-close database. Auto-close databases are 'closed' when not in use, and each time a user opens one, it runs a short recovery and the text above is displayed. SQL Server Express creates databases with auto-close ON by default. To turn off the auto-close behavior, run:
ALTER DATABASE <dbname> SET AUTO_CLOSE OFF;
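To see which databases currently have auto-close enabled (a hedged check using the standard catalog view):

SELECT name, is_auto_close_on
  FROM sys.databases
 WHERE is_auto_close_on = 1;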
Yes, we can. Of course, starting and stopping databases only makes sense when the server itself is started (that helps ;-) ), but each individual database has to be, say, initialized before it can be used in earnest. Also, when you detach a database, it first shuts down (which ensures data integrity, clean-up, and so on are taken care of).