I need to remove all permissions from all reports in SSRS 2008 and leave only one group. Is there any way to perform this task via a script in PowerShell, VB, or T-SQL?
I can see 2 ways of doing it:
The recommended (supported) way
Go through all reports and restore the parent security.
This can take a lot of time depending on the number of reports you have.
The unsupported way
This should do what you want without too much work, but is quite risky.
Backup your ReportServer DB (important)
Apply the permissions you want on the root in the web interface
Go into the Catalog table and look for the PolicyID of the corresponding entry (it should be the first row, with almost all other columns NULL, and PolicyRoot = 1)
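If you'd rather not hunt for it by hand, a query along these lines should surface it (a sketch; it assumes the standard ReportServer catalog, where the root folder is the only row with a NULL ParentID):
SELECT [ItemID], [Path], [PolicyID]
FROM [dbo].[Catalog]
WHERE [ParentID] IS NULL AND [PolicyRoot] = 1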
Execute the following query:
update [dbo].[Catalog] set [PolicyID] = <YourRootPolicyID>
(Optional) Clean the PolicyUserRole table, which maps a user to a role and a policy:
delete from [dbo].[PolicyUserRole] where [PolicyID] <> <YourRootPolicyID>
(Optional) Clean the Policies table, which holds the list of policies (= security settings):
delete from [dbo].[Policies] where [PolicyID] <> <YourRootPolicyID>
All your items will now have the same security settings.
I'm trying to automate the initialising of a SQL DB on Azure. For some (lookup) tables, data needs to be copied from a source DB into the new DB each time it is initialised.
To do this I execute a query containing
SELECT * INTO [target_db_name]..[my_table_name] FROM [source_db_name].dbo.[my_table_name]
At this point an exception is thrown telling me that
Reference to database and/or server name in 'source_db_name.dbo.my_table_name'
is not supported in this version of SQL Server.
Having looked into this, I've found that it's now possible to reference another Azure SQL DB provided it has been configured as an external data source. [here and here]
So, in my target DB I've executed the following statement:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
CREATE DATABASE SCOPED CREDENTIAL cred
WITH IDENTITY = '<username>',
SECRET = '<password>';
CREATE EXTERNAL DATA SOURCE [source_db_name]
WITH
(
TYPE=RDBMS,
LOCATION='my_location.database.windows.net',
DATABASE_NAME='source_db_name',
CREDENTIAL= cred
);
CREATE EXTERNAL TABLE [dbo].[my_table_name](
[my_column_name] BIGINT NOT NULL
)
WITH
(
DATA_SOURCE = [source_db_name],
SCHEMA_NAME = 'dbo',
OBJECT_NAME = 'my_table_name'
)
But the SELECT INTO statement still yields the same exception.
Furthermore, a simple SELECT * FROM [source_db_name].[my_table_name] yields the exception "Invalid object name 'source_db_name.my_table_name'".
What am I missing?
UPDATE
I've found the problem: CREATE EXTERNAL TABLE creates what appears to be a table in the target DB. To query this, the source DB name should not be used. So where I was failing with:
SELECT * FROM [source_db_name].[my_table_name]
I see that I should really be querying
SELECT * FROM [my_table_name]
It looks like you might need to define that external table, according to what appears to be the correct syntax:
CREATE EXTERNAL TABLE [dbo].[source_table](
...
)
WITH
(
DATA_SOURCE = source_db_name
);
The three-part name approach is unsupported, except through elastic database query.
Now, since you're creating an external table, the query can treat the external table as an object native to your [target_db]. This allows you to write the query SELECT * FROM [my_table_name], as you figured out from your edits. From the documentation, it is important to note that "This allows for read-only querying of remote databases." So this table object is not writable, but your question only mentioned reading from it to populate a new table.
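One wrinkle: because the external table lives in the target DB, it occupies whatever name you give it there. If you still want the original SELECT INTO to produce a local [my_table_name], one option is to create the external table under a different name and copy from it. A sketch, where [my_table_name_src] is a hypothetical name you'd use in the CREATE EXTERNAL TABLE statement instead:
SELECT *
INTO [dbo].[my_table_name]
FROM [dbo].[my_table_name_src];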
As promised, here's how I handle database deploys for SQL Server. I use the same method for on-prem, Windows Azure SQL Database, or SQL on a VM in Azure. It took a lot of pain, trial and error.
It all starts with SQL Server Data Tools, SSDT
If you're not already using SSDT to manage your database as a project separate from your applications, you need to. Grab a copy here. If you are already running a version of Visual Studio on your machine, you can get a version of SSDT specific for that version of Visual Studio. If you aren't already running VS, then you can just grab SSDT and it will install the minimal Visual Studio components to get you going.
Setting up your first Database project is easy! Start a new Database project.
Then, right click on your database project and choose Import -> Database.
Now, you can point at your current development copy of your database and import its schema into your project. This process will pull in all the tables, views, stored procedures, functions, etc. from the source database. When you're finished, you'll see a project structure like the one described below.
There is a folder for each schema imported, as well as a security folder for defining the schemas in your database. Explore these folders and look through the files created.
You will find that all the scripts created are CREATE scripts. This is important to remember for managing the project. You can now save your new solution, and then check it into your current source control system. This is your initial commit.
Here's the new thought process for managing your database project. As you need to make schema changes, you will come into this project and change these CREATE statements to define the state you want the object to be in. You always write CREATE statements, never ALTER statements, in your schema. Check out the example below.
Updating a table
Let's say we've decided to start tracking changes on our dbo.ETLProcess table. We will need columns to track CreatedDateTime, CreatedByID, LastUpdatedDateTime, and LastUpdatedByID. Open the dbo.ETLProcess file in the dbo\Tables folder and you'll see the current version of the table looks like this:
CREATE TABLE [dbo].[ETLProcess] (
[ETLProcessID] INT IDENTITY (1, 1) NOT NULL
, [TenantID] INT NOT NULL
, [Name] NVARCHAR (255) NULL
, [Description] NVARCHAR (1000) NULL
, [Enabled] BIT DEFAULT ((1)) NOT NULL
, CONSTRAINT [PK_ETLProcess__ETLProcessID_TenantID]
PRIMARY KEY CLUSTERED ([ETLProcessID], [TenantID])
, CONSTRAINT [FK_ETLProcess_Tenant__TenantID]
FOREIGN KEY ([TenantID])
REFERENCES [dbo].[Tenant] ([TenantID])
);
To record the change we want to make, we simply add the new columns to the table definition like this:
CREATE TABLE [dbo].[ETLProcess] (
[ETLProcessID] INT IDENTITY (1, 1) NOT NULL
, [TenantID] INT NOT NULL
, [Name] NVARCHAR (255) NULL
, [Description] NVARCHAR (1000) NULL
, [Enabled] BIT DEFAULT ((1)) NOT NULL
, [CreatedDateTime] DATETIME DEFAULT(GETUTCDATE())
, [CreatedByID] INT
, [LastUpdatedDateTime] DATETIME DEFAULT(GETUTCDATE())
, [LastUpdatedByID] INT
, CONSTRAINT [PK_ETLProcess__ETLProcessID_TenantID]
PRIMARY KEY CLUSTERED ([ETLProcessID], [TenantID])
, CONSTRAINT [FK_ETLProcess_Tenant__TenantID]
FOREIGN KEY ([TenantID])
REFERENCES [dbo].[Tenant] ([TenantID])
);
I didn't add any foreign keys to the definition, but if you wanted to create them, you would add them below the Foreign Key to Tenant. Once you've made the changes to the file, save it.
The next thing you'll want to get in the habit of is building your database project to make sure it's valid. In the programming world, you'd run a test build to make sure your code compiles. Here, we do something very similar. From the main menu, hit Build -> Build Database1 (the name of our database project).
The output window will open and tell you if there are any problems with your project. This is where you'll see things like Foreign keys referencing tables that don't yet exist, bad syntax in your create object statements, etc. You'll want to clean these up before you check your update into source control. You'll have to fix them before you will be able to deploy your changes to your development environment.
Once your database project builds successfully and it's checked in to source control, you're ready for the next change in process.
Deploying Changes
Earlier I told you it was important to remember all your schema statements are CREATE statements. Here's why: SSDT gives you two ways to deploy your changes to a target instance. Both of them use these CREATE statements to compare your project against the target. By comparing the two CREATE statements, SSDT can generate the ALTER statements needed to bring a target instance up to date with your project.
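For the dbo.ETLProcess change above, the generated change script would contain something along these lines (a sketch; the exact script SSDT emits will name the default constraints and may differ in detail):
ALTER TABLE [dbo].[ETLProcess]
    ADD [CreatedDateTime]     DATETIME DEFAULT (GETUTCDATE()) NULL
      , [CreatedByID]         INT NULL
      , [LastUpdatedDateTime] DATETIME DEFAULT (GETUTCDATE()) NULL
      , [LastUpdatedByID]     INT NULL;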
The two options for deploying these changes are a T-SQL change script, or dacpac. Based on the original post, it sounds like the change script will be most familiar.
Right click on your database project and choose Schema Compare.
By default, your database project will be the source on the left. Click Select target on the right, and select the database instance you want to "upgrade". Then click Compare in the upper left, and SSDT will compare the state of your project with the target database.
You will then get a list of all the objects in your target database that are not in the project (in the DROP section), a list of all objects that are different between the project and target database (in the ALTER Section), and a list of objects that are in your project and not yet in your target database (in the ADD section).
Sometimes you'll see changes listed that you don't want to make (changes in the casing of your object names, or the number of parentheses around your default constraints, for example). You can deselect changes like that. Other times you won't be ready to deploy certain changes to that target yet; you can deselect those as well. All items left checked will either be changed in the target database, if you choose Update, or added to your change script, if you hit the "Generate Script" icon.
Handling lookup data in your Database Project
Now we're finally to your original question, how do I deploy lookup data to a target database. In your database project you can right click on the project in Solution Explorer and choose Add -> New Item. You'll get a dialog box. On the left, click on User Scripts, then on the right, choose Post-Deployment Script.
By adding a script of this type, SSDT knows you want to run this step after any schema changes. This is where you will enter your lookup values; as a result, they're included in source control!
Now here's a very important note about these post-deployment scripts. You need to be sure any T-SQL you add here will work if you run the script against a new database, against an existing database, or 100 times in a row. As a result of this requirement, I've taken to including all my lookup values in MERGE statements. That way I can handle inserts, updates, and deletes.
Before committing this file to source control, test it in all three scenarios above to be sure it won't fail.
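For example, a post-deployment script for a hypothetical dbo.Status lookup table might look like this (the table and its values are purely illustrative):
-- Safe to run on a new database, an existing database, or 100 times in a row.
MERGE INTO [dbo].[Status] AS target
USING (VALUES
      (1, N'New')
    , (2, N'In Progress')
    , (3, N'Complete')
) AS source ([StatusID], [Name])
    ON target.[StatusID] = source.[StatusID]
WHEN MATCHED AND target.[Name] <> source.[Name] THEN
    UPDATE SET [Name] = source.[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([StatusID], [Name]) VALUES (source.[StatusID], source.[Name])
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
Because the statement compares the table against the full list of values every time it runs, inserts, updates, and deletes are all handled in one idempotent step.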
Wrapping it all up
Moving from making changes directly in your target environments to using SSDT and source controlling your changes is a big step in the maturation of your software development life-cycle. The good news is it makes you think about your database as part of the deployment process in a way that is compatible with continuous integration/continuous deployment methods.
Once you get used to the new process, you can then learn how to add a dacpac generated from SSDT into your deployment scripts and have the changes pushed at just the right time in your deployment.
It also frees you from the SELECT INTO issue in your original problem.
We have an Oracle database and we have been running into problems with our build and install procedures: when we update the table schema (add or modify columns, triggers, etc.), the change doesn't always get deployed to all the instances.
Right now we handle schema updates by putting notes in the install steps for the build to run ALTER TABLE commands, etc. But these always assume you are going from the last build (i.e., build 3 is installed and we are going to 4). If 1 is installed, there might be alter scripts going from 1 to 2, then 2 to 3, then 3 to 4. So this is a giant pain of a manual process that we often mess up, missing an alter along the way.
Is there an easy way to do a "create or replace" on a table without dropping it and losing data? Essentially we want to compare the current table to what it should be and update it. We do not want to back up the table, drop it, create it, and then restore it.
"Essentially we want to compare the current table to what it should be and update it"
Assuming you have a good source version that you want to use to update the other instances, you can use Toad's schema compare (you need the DBA Admin module or Toad Xpert Edition) and generate the scripts needed to update a single table, a set of tables, or whatever list of objects you choose.
I would say that the scripts should still be checked/verified before running against the target instance. Some changes may be best handled in a different way (rename a column vs drop/create for example). So be careful.
One more note that others will probably bring up is that this problem shows definite holes in your company's change management process (which is a much bigger topic than this question).
In an effort to maintain versions of the databases we have in our CMDB, I have to obtain the versions of some databases deployed to our servers by a third party company.
Is there a system table, view or procedure that allows me to view information regarding recent deployments (code changes from an update script) to a SQL database?
You have three options.
First, you can build your own logging based on a table and a DDL trigger that logs each change to any procedure, table, and so on.
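A minimal sketch of that approach (the table and trigger names are illustrative, and the event list can be extended to whatever object types you care about):
CREATE TABLE dbo.DDLChangeLog (
      LogID      INT IDENTITY(1,1) PRIMARY KEY
    , EventType  NVARCHAR(100)
    , ObjectName NVARCHAR(256)
    , LoginName  NVARCHAR(256)
    , EventDate  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    , EventXml   XML
);
GO
CREATE TRIGGER trg_LogDdlChanges
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    DECLARE @e XML = EVENTDATA();  -- details of the DDL statement that fired the trigger
    INSERT INTO dbo.DDLChangeLog (EventType, ObjectName, LoginName, EventXml)
    VALUES (
          @e.value('(/EVENT_INSTANCE/EventType)[1]',  'NVARCHAR(100)')
        , @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)')
        , @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'NVARCHAR(256)')
        , @e
    );
END;
GO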
Second, you can track changes using one of these sys catalog views:
select * from sys.all_sql_modules -- Get the sourcecode of each proc (and track it)
select * from sys.objects -- Get information which object is modified at which date
Third, you can reverse engineer recent changes by reading the trace log of the SQL Server itself and filtering for drop/create events (requires sysadmin permission).
-- Get the current server trace file
select *
from fn_trace_getinfo(NULL)
where property=2
and traceid = 1
-- Copy value from the query above and paste it here
select *
from fn_trace_gettable('[PASTE PATH HERE!]', -1)
where EventClass IN(46,47) -- Create/Drop Object
Hopefully one of these solutions is helpful for you.
By the way, another idea, if your workflow allows it: just use SSDT to create deployment packages and keep track of your changes.
Best regards,
Ionic
I have a Rails 3.2 multi-tenant subdomain based app which I'm trying to migrate over to PostgreSQL's schemas (each account getting its own schema -- right now all of the accounts use the same tables).
So, I'm thinking I need to:
Create a new DB
Create a schema for each Account (its id) and the tables under them
Grab all the data that belongs to each account and insert it into the new DB under the schema of said account
Does that sound correct? If so, what's a good way of doing that? Should I write a ruby script that uses ActiveRecord, plucks the data, then inserts it (pretty inefficient, but should get the job done) into the new DB? Or does Postgres provide good tools for doing such a thing?
EDIT:
As Craig recommended, I created schemas in the existing DB. I then looped through all of the Accounts in a Rake task, copying the data over with something like:
Account.all.each do |account|
PgTools.set_search_path account.id, false
sql = %{INSERT INTO tags SELECT DISTINCT "tags".* FROM "tags" INNER JOIN "taggings" ON "tags"."id" = "taggings"."tag_id" WHERE "taggings"."tagger_id" = #{admin.id} AND "taggings"."tagger_type" = 'User'}
ActiveRecord::Base.connection.execute sql
#more such commands
end
I'd do the conversion with SQL personally.
Create the new schemas in the same database as the current one for easy migration, because you can't easily query across databases with PostgreSQL.
Migrate the data using appropriate INSERT INTO ... SELECT queries. To do it without having to disable any foreign keys, you should build a dependency graph of your data. Copy the data into tables that depend on nothing first, then tables that depend on them, and so on.
You'll need to repeat this for each customer schema, so consider creating a PL/PgSQL function that uses EXECUTE ... dynamic SQL to:
Create the schema for a customer
Create the tables within the schema
Copy data in the correct order by looping over a hard-coded array of table names, doing:
EXECUTE 'INSERT INTO '||quote_ident(newschema)||'.'||quote_ident(tablename)||' SELECT * FROM oldschema.'||quote_ident(tablename)||' WHERE customer_id = '||quote_literal(customer_id)||';';
where newschema, tablename and customer_id are PL/PgSQL variables.
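A rough sketch of such a function, assuming the existing data lives in the public schema, that every copied table has a customer_id column to filter on, and that the hard-coded table list is already in dependency order (all assumptions to adjust; FOREACH needs PostgreSQL 9.1+):
CREATE OR REPLACE FUNCTION convert_customer(customer_id integer) RETURNS void AS $$
DECLARE
    newschema text := 'customer_' || customer_id;
    tablename text;
BEGIN
    EXECUTE 'CREATE SCHEMA ' || quote_ident(newschema);
    -- Parents before children so foreign keys are satisfied as the rows are copied.
    FOREACH tablename IN ARRAY ARRAY['accounts', 'tags', 'taggings'] LOOP
        EXECUTE 'CREATE TABLE ' || quote_ident(newschema) || '.' || quote_ident(tablename)
             || ' (LIKE public.' || quote_ident(tablename) || ' INCLUDING ALL)';
        EXECUTE 'INSERT INTO ' || quote_ident(newschema) || '.' || quote_ident(tablename)
             || ' SELECT * FROM public.' || quote_ident(tablename)
             || ' WHERE customer_id = ' || quote_literal(customer_id);
    END LOOP;
END;
$$ LANGUAGE plpgsql;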
You can then invoke that function from SQL. While you could just do SELECT convert_customer(c.id) FROM customer c GROUP BY c.id, I'd probably do it from an external control script so each customer's work gets done and committed individually, avoiding the need to start again from scratch if the second-to-last customer conversion fails.
For bonus crazy points it's even possible to define triggers on the main customer schema's tables that replicate changes to already-migrated customers over to the copy of their data in the new schema, so they can keep using the system during the migration. I'd avoid that unless the migration was just too big to do without downtime, as it'd be a nightmare to test and you'd still need the triggers to throw an error on access by customer id x while the migration of x's data was actually in-progress, so it wouldn't be fully transparent.
If you're using different login users for different customers (strongly recommended) your function can also:
REVOKE rights on the schema from public
GRANT limited rights on the schema to the user(s) or role(s) who'll be using it
REVOKE rights from public on each table created
GRANT the desired limited rights on each table to the user(s) and role(s)
GRANT the needed rights on any sequences used by those tables. This is required even if the sequence is created by a SERIAL pseudo-column.
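In SQL those steps might look roughly like this (a sketch; the schema name tenant_42, the role tenant_42_user, and the exact privilege list are all assumptions to adjust):
REVOKE ALL ON SCHEMA tenant_42 FROM PUBLIC;
GRANT USAGE ON SCHEMA tenant_42 TO tenant_42_user;
REVOKE ALL ON ALL TABLES IN SCHEMA tenant_42 FROM PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA tenant_42 TO tenant_42_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA tenant_42 TO tenant_42_user;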
That way all your permissions are consistent and you don't need to go and change them later. Remember that your webapp should never log in as a superuser.
I need to create an audit to track all CRUD events for all the tables in a database.
I have more than 100 tables in the DB; is there a way to create the specification so that it includes all the tables in the DB?
P.S.: I am using SQL Server 2008.
I had the same question. The answer is actually simpler than expected and doesn't need a custom C# app to generate lots of SQL to cover all the tables. Example SQL below. The important point was to specify the DATABASE scope and the public principal for INSERT/UPDATE/DELETE.
USE [master]
GO
CREATE SERVER AUDIT [CancerStatsAudit]
TO FILE
( FILEPATH = N'I:\CancerStats\Audit\'
,MAXSIZE = 128 MB
,MAX_ROLLOVER_FILES = 64
,RESERVE_DISK_SPACE = OFF
)
WITH
( QUEUE_DELAY = 1000
,ON_FAILURE = CONTINUE
,AUDIT_GUID = '5a0a18cf-fe42-4171-ad01-5e19af9e27d1'
)
ALTER SERVER AUDIT [CancerStatsAudit] WITH (STATE = ON)
GO
USE [CancerStats]
GO
CREATE DATABASE AUDIT SPECIFICATION [CancerStatsDBAudit]
FOR SERVER AUDIT [CancerStatsAudit]
ADD (INSERT ON DATABASE::[CancerStats] BY [public]),
ADD (UPDATE ON DATABASE::[CancerStats] BY [public]),
ADD (DELETE ON DATABASE::[CancerStats] BY [public]),
ADD (EXECUTE ON DATABASE::[CancerStats] BY [public]),
ADD (DATABASE_OBJECT_CHANGE_GROUP),
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON)
GO
NB: DATABASE_OBJECT_CHANGE_GROUP and SCHEMA_OBJECT_CHANGE_GROUP are not needed for auditing INSERT, UPDATE and DELETE - see additional notes below.
Additional notes:
The example above also includes the DATABASE_OBJECT_CHANGE_GROUP and the SCHEMA_OBJECT_CHANGE_GROUP. These were included since my requirement was to also track CREATE/ALTER/DROP actions on database objects. It is worth noting that the documentation is wrong for these.
https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-action-groups-and-actions
The above page states DATABASE_OBJECT_CHANGE_GROUP tracks CREATE, ALTER and DROP. This is not true (I've tested in SQL Server 2016); only CREATE is tracked, see:
https://connect.microsoft.com/SQLServer/feedback/details/370103/database-object-change-group-audit-group-does-not-audit-drop-proc
In fact, to track CREATE, ALTER and DROP, use SCHEMA_OBJECT_CHANGE_GROUP. Despite the above learn.microsoft.com documentation page suggesting this only works for schemas, it actually works for objects within the schema as well.
Change Data Capture
You can use the Change Data Capture (CDC) mechanism provided by SQL Server 2008.
http://msdn.microsoft.com/en-us/library/bb522489.aspx
Note that this only covers the Create, Update, and Delete parts of CRUD, not Read.
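Enabling it is a couple of system stored procedure calls; a sketch (the schema and table names are placeholders):
EXEC sys.sp_cdc_enable_db;  -- run once in the target database
GO
EXEC sys.sp_cdc_enable_table
      @source_schema = N'dbo'
    , @source_name   = N'MyTable'
    , @role_name     = NULL;  -- NULL: no gating role required to read the change data
GO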
Triggers and Audit tables
Even for 100 tables, you can use a single script that will generate the audit tables and the necessary triggers. Note that this is not a very good mechanism: it will slow things down, because control is not returned until the trigger execution completes.
I found a way to create the Database Audit Specification:
I wrote C# code that dynamically generated the SQL statement for all the tables and all the actions I needed, and executed the resulting string.
Frankly, the wizard provided is no help at all if you are creating a Database Audit Specification for more than a couple of tables.
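For what it's worth, the same generation can be done in pure T-SQL instead of C#; a sketch that builds per-table action clauses from sys.tables (you'd still paste the output into a CREATE DATABASE AUDIT SPECIFICATION statement like the one shown above):
SELECT 'ADD (INSERT, UPDATE, DELETE ON OBJECT::' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' BY [public]),'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;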