Tracking changes in a table - SQL

I'm working with PostgreSQL. I want to save audit records to a file (spreadsheet, Word document, etc.). That is, I have a web application, and any change (insert, delete, update) that occurs in the app is recorded in an audit log table. But there are a number of tables in the database and each table has more than 5000 rows, so it is difficult to keep the audit log as a table (bulk data). Instead I want to save the audit log as a file from PostgreSQL. How can I implement this?
Thank you.

Veena, I worked with PostgreSQL a couple of years back. To my knowledge, to configure a PostgreSQL database as a standalone audit log database, or to save the audit store, follow these steps: first gather the database information, then create the audit store schema, configure a PostgreSQL data source for CA SiteMinder, point the Policy Server to the database, and finally restart the Policy Server.
You can create the logging schema so that the PostgreSQL database can store audit logs.
To create the audit log schema, open sm_postgresql_logs.sql in a text editor and copy the contents of the entire file. Start a SQL client, such as psql, and log in as the user who administers the Policy Server database. Select the database instance from the database list, paste the schema from sm_postgresql_logs.sql into the query window, and execute the query.
The audit log store schema is created in the database.
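If you prefer to run the file directly rather than pasting it, psql can also execute it in one step from the command line; the host, user, and database names below are placeholders, not values from the SiteMinder documentation:
# load the audit store schema from the file (connection details are placeholders)
psql -h db_host -U policy_admin -d audit_store -f sm_postgresql_logs.sql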
Hope this will help you.
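On the original question of getting the audit log out as a file: plain PostgreSQL can export a table to a flat file with COPY (or with psql's \copy if the server account cannot write to the target path). This is only a minimal sketch; the table name and file path are placeholders, not part of the SiteMinder schema:
-- export the audit table to a CSV file on the database server
-- (table name and file path are placeholders)
COPY audit_log TO '/var/lib/postgresql/audit_log.csv' WITH (FORMAT csv, HEADER true);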

Related

Imported SQL Server Database Username

When importing a SQL Server database from another machine using a .BAK file through the restore option, the process appears to be successful and completes with the database now in the list within SQL Server Management Studio. But there is one thing I need clarified, please.
When the Restore dialog is importing the database, a row of fields is displayed, including "Name", "Component", "Type", etc. The last field is the username. Is this field simply showing the owner of the database where it originated, or is this value used with any relevance in the imported database?
I looked in database_name >> Security >> Users in SSMS, but the user shown in the restore process is not listed.
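If it helps to compare, one way to check which server principal actually owns the restored database (as opposed to the users inside it) is to query sys.databases; the database name below is a placeholder:
-- show the owner (server principal) of the restored database
SELECT name, SUSER_SNAME(owner_sid) AS database_owner
FROM sys.databases
WHERE name = 'YourRestoredDatabase';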

Azure Machine Learning Write output to Azure SQL Database

I am using Azure Machine Learning to cluster data.
The input data is from an Azure SQL Database, and it works fine.
At the end of everything I want to write the output to a table in the same Azure SQL Database, but I get this error:
Error: Error 1000: AFx Library library exception:
Sql encountered an error: Login failed for user
Does anyone have any idea?
Thank you very much!
Please follow the instructions and examine the examples provided here to properly use the Export Data module to save ML data to an Azure SQL Database.
How to Export Data to an Azure SQL Database
Add the Export Data module to your experiment. You can find this module in the Data Input and Output group in the experiment items list in Azure Machine Learning Studio.
Connect it to the module that produces the data that you want to export to Azure SQL DB.
For Data destination, select Azure SQL Database. This option supports Azure SQL Data Warehouse as well.
Set the following options specific to Azure SQL Database or Azure SQL Data Warehouse.
Database server name
Type the server name that is generated by Azure. Typically it has the form <generated_identifier>.database.windows.net.
Database name
Type the name of a database on the server you just specified. The database must already exist; Export Data cannot create it.
Server user account name
Type the user name of an account that has access permissions for the database.
Server user account password
Provide the password for the specified user account.
Comma-separated list of columns to be saved
Type the names of the columns in the experiment that you want to write to the database.
Data table name
Type the name of the table where data will be stored.
For Azure SQL Database, if the table does not exist, it will be created. For Azure SQL Data Warehouse, the table must already exist and have the correct schema, so be sure to create it in advance (a sketch follows these steps).
Comma-separated list of datatable columns
Type the names of the columns as you wish them to appear in the destination table. The columns should correspond in order with the column names that you list in Comma-separated list of columns to be saved.
If you are writing to Azure SQL Data Warehouse, the column names must match those already in the destination table schema.
Number of rows written per SQL Azure operation
Indicate how many rows should be written to the destination table in each batch. By default, the value is set to 50, which is the default batch size for Azure SQL Database. However, you should increase this value if you have a large number of rows to write.
TIP:
For Azure SQL Data Warehouse, we recommend that you set this value to 1. If you use a larger batch size, the size of the command string that is sent to Azure SQL Data Warehouse can exceed the allowed string length, causing an error.
If you don't want to write new results each time you run the experiment, select the Use cached results option. If there are no other changes to module parameters, the experiment will write the data the first time the module is run, and thereafter not perform writes.
However, a write will always be performed if any parameters have been changed in Export Data that would change the results.
Run the experiment.
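As noted in the data table name step above, an Azure SQL Data Warehouse destination table has to exist before the experiment writes to it. A minimal sketch of pre-creating one follows; the table and column names are made-up placeholders and would have to match the columns your experiment exports:
-- pre-create the destination table so Export Data can write into it
-- (names and types are placeholders; they must line up with the exported columns)
CREATE TABLE dbo.ClusterResults
(
    CustomerId          INT   NOT NULL,
    ClusterLabel        INT   NOT NULL,
    DistanceToCentroid  FLOAT NULL
);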
Found the issue!
I needed to create a specific user with this SQL code:
CREATE USER AMLApplicationUser WITH PASSWORD = '************';
and then add the user to these roles on the database I want to write to:
ALTER ROLE db_datareader ADD MEMBER AMLApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER AMLApplicationUser;
I guess the datawriter role alone would be enough, but I needed datareader too.
So in conclusion, it seems that the database admin role can be used to read data from AML, but not to write data to the database.
Thank you for your help!

Azure SQL, Copy most of a database into an existing one (not new one) same server

I know I can clone DB into a new one with
CREATE DATABASE Database1_copy AS COPY OF Database1;
(https://learn.microsoft.com/en-us/azure/sql-database/sql-database-copy-transact-sql)
and this works flawlessly, except that in Azure the DB properties are managed by the Azure portal, so I am trying to find a way to copy most of the schema/resources/data into an EXISTING DB.
would be great for:
CLONE DATABASE Database_test AS COPY OF Database_production
[Even my first approach was to "clone" the entire DB; in fact a few tables on the destination DB should be kept, so a better approach would be to CLONE EVERYTHING EXCEPT ('table1','table2'). I actually plan to achieve this by scripting the few tables needed on the destination DB and overwriting them after the import, but the best solution would be the other one.]
You can do this in several ways:
Through the Azure Portal
Open your database in the Azure Portal (https://portal.azure.com)
In the overview blade of your database select the "copy" option
Fill in the parameters, including the server in which you would like the copy
Using a SQL Server client and connecting to the server
Open your SQL Server blade in Azure
Select the "Firewall" option
Click on "Add client IP"
Connect to your database with your connection string and your favorite client, could be SSMS
Execute your sql query to clone the database in the same server
-- Copy a SQL database to the same server
-- Execute on the master database.
-- Start copying.
CREATE DATABASE Database1_copy AS COPY OF Database1;
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-copy-transact-sql
The above SQL statement works perfectly fine as expected in Azure SQL Database.
Important Notes:
Log on to the master database (System Databases) using the server-level principal login or the login that created the database you want to copy.
Logins that are not the server-level principal must be members of the dbmanager role in order to copy databases (a sketch follows these notes).
Use an updated version of SQL Server Management Studio.
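Regarding the dbmanager note, this is roughly what granting that role to a non-admin login looks like; run it in the master database of the logical server, and treat the login/user name as a placeholder:
-- run in the master database of the Azure SQL logical server
-- (the login/user name is a placeholder)
CREATE USER copy_operator FOR LOGIN copy_operator;
ALTER ROLE dbmanager ADD MEMBER copy_operator;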

How to attach database in SQL Server 2008 R2 without index.idx file

I have a database test (test.mdf, test_ldf.log and test_idx.idx). By mistake I deleted the test_idx.idx file, which was over 20 GB.
I have used all kinds of recovery software but can't recover the file.
Unfortunately that database is not backed up, and now SQL Server 2008 R2 doesn't let me attach the database.
Is there any way to attach the database test.mdf file without index file?
Error when I try to attach database:
Unable to open the physical file ".idx". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". (Microsoft SQL Server, Error: 5120)
Create a dummy database (the name is not important) with the same file structure, i.e. a data file, a log file and an index file.
Take the index file and the database offline like so:
alter database <dbname> modify file(name = '<name of your index file>', offline)
alter database <dbname> set offline
You'll not be able to recover the offline file (to my knowledge), so don't give it the same logical name as your original file - you'll see why in a moment.
Now overwrite the dummy database's files with what you have left of your broken database (obviously keep copies somewhere safe!).
Bring the database back online:
alter database <dbname> set online
At this point your data should be accessible.
Create a new filegroup/file with the same logical names as your original and rebuild the indexes (a rough sketch follows).
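A rough sketch of that last step, assuming the original logical file name was test_idx; the database, filegroup, path, table, index and column names are all placeholders, and whether the rebuild succeeds depends on the state of the old index metadata:
-- re-create a filegroup and a file using the original logical index file name
ALTER DATABASE test ADD FILEGROUP IndexFG;
ALTER DATABASE test ADD FILE
(
    NAME = 'test_idx',
    FILENAME = 'C:\SQLData\test_idx.idx',
    SIZE = 100MB
) TO FILEGROUP IndexFG;

-- rebuild a lost nonclustered index onto the new filegroup
CREATE INDEX IX_SomeTable_SomeColumn
    ON dbo.SomeTable (SomeColumn)
    WITH (DROP_EXISTING = ON)
    ON IndexFG;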
Good Luck!

Data Replication - SQL Server 2008

Good day
I needed to create a data replication between two databases. I created the Local Publication with one table for testing purposes. I then created the Local Subscription and it worked 100%. I tested it and the data gets updated. I then started to add more tables to the Local Publication that I created. I noticed that the new tables did not pull through to the new database through the Local Subscription I created. Do I need to create a new Subscription for the updates? Do I need to delete the current Subscription or is there another way that I can just update the Current Subscription?
Thanks
Ruan
I got this description from this article: http://www.mssqltips.com/sqlservertip/2502/limit-snapshot-size-when-adding-new-article-to-sql-server-replication/
You must start the Snapshot Agent, but check that the already-replicated tables are not marked for reinitialization, because in that case data from the old tables will be transferred once more (a rough sketch follows).
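For reference, the T-SQL side of that approach looks roughly like this (publication, schema and table names are placeholders); setting allow_anonymous and immediate_sync to false is what keeps the snapshot limited to the newly added article:
-- keep the snapshot limited to new articles only
EXEC sp_changepublication @publication = N'MyPublication', @property = N'allow_anonymous', @value = 'false';
EXEC sp_changepublication @publication = N'MyPublication', @property = N'immediate_sync', @value = 'false';

-- add the new table to the existing publication
EXEC sp_addarticle @publication = N'MyPublication', @article = N'NewTable',
     @source_owner = N'dbo', @source_object = N'NewTable';

-- refresh existing subscriptions and generate a snapshot for the new article only
EXEC sp_refreshsubscriptions @publication = N'MyPublication';
EXEC sp_startpublication_snapshot @publication = N'MyPublication';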