Primary key and unique constraints are disabled after import into Azure SQL

Many months ago we were running our website (ASP.NET C#) on premises, and we decided to migrate the site and its database to Azure (a Web App and Azure SQL). Everything appeared to migrate successfully and we encountered minimal issues.
We've just spotted, however, that the import of our SQL database into Azure SQL seems to have disabled all of our constraints, including primary key and unique constraints.
I was wondering if anyone has encountered this before and what can be done to fix the issue?
The idea I'm working with right now (not sure if it would work) is:
1. Export the database from Azure SQL using SqlPackage.exe with VerifyExtraction=false. Hopefully that works, because otherwise the export fails while trying to verify the schema.
2. Import it into my local SQL Server.
3. Run EXEC sp_msforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL' (I believe sp_msforeachtable is not available in Azure SQL).
4. If that works and the constraints are re-enabled properly, import the database back into Azure SQL, assuming the import won't disable them again.
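If it helps, step 3 can be approximated directly in Azure SQL by generating the statements from sys.tables instead of using sp_msforeachtable. A minimal sketch (run the SELECT, then execute its output as a second step; note this only re-enables foreign key and check constraints, not primary keys):

-- Generate one WITH CHECK CHECK CONSTRAINT statement per user table.
SELECT 'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
     + ' WITH CHECK CHECK CONSTRAINT ALL;'
FROM sys.tables AS t
ORDER BY t.name;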
Any help or ideas are appreciated!

"Everything was migrated successfully and we encountered minimal issues."
I doubt that's actually true. It sounds like the import failed to add primary keys and other constraints during the restore, which is actually a pretty big deal.
If your database is small (think fewer than 10 million rows total across all tables), you could write a script to add the primary keys and clustered indexes in place using ALTER TABLE statements.
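A minimal sketch of that, assuming a hypothetical dbo.Orders table keyed on OrderID (the column must already be NOT NULL):

-- Adds the missing primary key and creates its clustered index in one step.
ALTER TABLE dbo.Orders
ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);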
If your database is any bigger than that, you'll need to either re-import it from scratch, paying attention to the error logs this time, or build new tables, select all your data into them, drop the existing tables, and rename the new tables to the old names.
Which approach is best for you really depends on how many tables you have and how many rows are in them.
SSMS has some great tools and shortcuts for this sort of thing. Right-click the table, choose "Script Table as", then "CREATE To", then "New Query Editor Window", and voila, you have a CREATE TABLE statement ready to go. Add your constraints to it, rename the table to [Tablename]2, run it, and voila! You've just created a new table with a primary key constraint. Select everything out of the existing table into your new table, drop the old table, rename your new table back to the old table's name, and bam, you're done.
If you need to do this 20 times, no big deal. If you need to do this 2000 times, you can script these operations into one long query.
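A minimal sketch of the rebuild-and-rename pattern, again using a hypothetical dbo.Orders table:

-- 1. Create the replacement table with the constraint already in place.
CREATE TABLE dbo.Orders2 (
    OrderID int NOT NULL CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
    CustomerName nvarchar(100) NOT NULL
);

-- 2. Copy the data across.
INSERT INTO dbo.Orders2 (OrderID, CustomerName)
SELECT OrderID, CustomerName
FROM dbo.Orders;

-- 3. Swap the tables.
DROP TABLE dbo.Orders;
EXEC sp_rename 'dbo.Orders2', 'Orders';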

Related

How to store and manage a SQL schema, with the ability to update and insert data

I am wondering about the best way (or any way) to manage a database schema. I have a SQL file with a bunch of statements like CREATE TABLE Users ( id SERIAL PRIMARY KEY ... ); which represents the schema for my database.
I have Postgres installed on my dev machine but am having trouble syncing the changes I make to the schema with the local database. Currently I just drop the entire database and re-run the schema file against it.
I figure there has to be a better way that I don't know about. I will also need to be able to set up a database for production once the project stabilizes, and obviously dropping tables in production won't work.
Suggestions?

Refreshing Oracle database tables after initial copy is made

I have a production and a development database (on different systems, of course). Many months ago I copied the production database to the development system using exp/imp. Since then there have been quite a few changes in the production database that I would like to copy down to the development database. I'd rather not wipe out the development database and start over, because of data I've had to add to it.
My original thought was to use MERGE INTO to copy the new records, but that apparently requires me to write a statement for every table and list all fields of every table. We're talking hundreds of tables and thousands of fields here. Not a pretty solution.
Is there an easier way?
Why not use the TABLE_EXISTS_ACTION=APPEND parameter of impdp to append the new data to the existing tables? Duplicate keys will error off, but the rest of the data should still import; the results will be a bit messy. Prior to running the import, TRUNCATE any tables in test where you can just bring over the entire production table. Disable foreign keys, and re-enable them after the import.
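A minimal sketch of the command (the connection, directory object, and dump file names are assumptions):

# Append rows from the dump into tables that already exist in dev.
impdp dev_user/password@DEVDB directory=DATA_PUMP_DIR dumpfile=prod_export.dmp table_exists_action=append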
Another option: create a database link and generate INSERT ... SELECT statements for all tables, inserting only the rows that are not already in the existing test tables. You'll probably also want to disable foreign keys before running them and re-enable them when done.
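A minimal sketch of that approach for one hypothetical table (customers, keyed on customer_id); the link name, account, and connect string are assumptions:

-- One-time setup: a link from dev pointing at prod.
CREATE DATABASE LINK prod_link
  CONNECT TO readonly_user IDENTIFIED BY secret
  USING 'PROD';

-- Pull across only the rows that dev doesn't already have.
INSERT INTO customers
SELECT p.*
FROM customers@prod_link p
WHERE NOT EXISTS (
  SELECT 1 FROM customers c WHERE c.customer_id = p.customer_id
);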

Can I use postgres_fdw without foreign tables defined?

I have a production database, PRODdb1, with a read-only user account. I need to query this database (a SELECT statement) and insert the results into a secondary database named RPTdb1. I originally planned to just create a temp table in PRODdb1 from my SELECT, but permissions are the issue.
I've read about dblink and postgres_fdw, but is either of these a solution for my issue? I wouldn't be creating foreign tables, because my SELECT joins many tables from PRODdb1, so I'm not sure whether postgres_fdw would still be an option for my use case.
Another option would be any means of getting the results of the SELECT into a .CSV file or something. My main blocker is that I only have a read-only user to work with, and there's no way around that.
The simple answer is no: you cannot use postgres_fdw without defining foreign tables in RPTdb1. This should not be much of an issue, though, since it is quite easy to create them.
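A minimal sketch of the setup on RPTdb1 (the server name, host, schema, and credentials are all assumptions); IMPORT FOREIGN SCHEMA creates the foreign table definitions for you in one shot, so joining many tables doesn't mean writing many definitions by hand:

-- Run on RPTdb1 as a user allowed to create extensions and servers.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER prod_server
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'proddb1.example.com', port '5432', dbname 'PRODdb1');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER prod_server
  OPTIONS (user 'readonly_user', password 'secret');

-- Mirror prod's public schema as foreign tables in a local schema.
CREATE SCHEMA IF NOT EXISTS prod;
IMPORT FOREIGN SCHEMA public
  FROM SERVER prod_server
  INTO prod;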
I am in much the same boat as you. We use a 3rd party product (based on Postgres 9.3) for our production database and the user roles we have are very restrictive (i.e. read-only access, no replication, no ability to create triggers/functions/tables/etc).
I believe that postgres_fdw has the functionality you are looking for, with one caveat: your local reporting server needs to be running PostgreSQL version 10 (or 9.6 at a minimum). We currently use 9.3 on our local server, and while simple queries work beautifully, anything more complicated takes forever, because the FDW in 9.3 pulls all of the data in the table across before it can do JOINs or even apply the WHERE clause.
Version 9.6: pushes down JOINs to the remote database before returning results.
Version 10: pushes down aggregates such as COUNT and SUM to the remote database before returning results.
(I am not sure which version added the ability to push down WHERE clauses to the remote DB, but I know it was not possible in 9.5.)
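One way to check what your server actually pushes down is EXPLAIN VERBOSE: the "Remote SQL" line of the plan shows the exact query shipped to the remote side. A sketch with hypothetical foreign tables:

-- If the join was pushed down, Remote SQL will contain the whole JOIN.
EXPLAIN VERBOSE
SELECT o.order_id, c.name
FROM prod.orders o
JOIN prod.customers c ON c.customer_id = o.customer_id
WHERE o.created_at >= '2018-01-01';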
We are in the process of upgrading our local server to version 10 this week. I can try to keep you updated with our progress, feel free to do the same.

MS Access 2007 - After importing tables, recordsets are no longer updateable

We have an in-house program at the company I work for, and inside of MS Access we link all of our tables to our three databases. However, in order to create new routes for students, someone needs an isolated copy of our program to work with that won't impact the actual database.
After deleting the linked tables, importing them all locally, and saving the .mdb, I can no longer change values inside most forms. For example:
A drop-down menu with a list of possible route codes for a student appears. Usually you can select one; now you no longer can, and in the bottom left-hand corner you see "This recordset is not updateable".
I'm a bit new at this, but I can't imagine why importing the tables would break anything. I wouldn't expect any key violations to occur (like I might when linking tables), or anything of that nature. If anyone can point me in the right direction it would be much appreciated! Thanks!
Access can handle a compound primary key on a linked table or view. The important thing, of course, is to make sure you're telling Access the right fields to use. It's common for Access not to get the primary key info when you're linking to a view (perhaps the ODBC driver doesn't pass that info along?), but I bet it can happen with tables too.
Here's what happens with a linked view: if you open it in design view, you can see that there are no primary keys, which means the view isn't editable (the Add New button is greyed out).
So I run this command to tell Access to use a compound primary key on the linked view:
CurrentDb.Execute "Create Unique Index PrimaryKey On View_AssnsWithSorterField([Serial No], AssignmentDate)"
If you open the linked view in design view again, you can see the compound primary key, and now the Add New button appears.
You said that you tried to add a compound primary key but Access wouldn't allow it, which suggests there's something about the table's data or structure that prevents it from using that key. How about creating an empty copy of the table in the backend database, then trying to link to that? If that still doesn't work, then maybe there's a unique constraint, or a trigger, or something else on the table that stops Access from making it updateable. Conversely, if it links fine and you can add/edit records in the linked blank test table, then it must be the data in the real table that's causing the issue.
Sometimes if you cut a copy of the table down into smaller chunks (maybe start with just the primary key columns) and gradually rebuild it piece by piece, you eventually hit on the thing that stops Access from letting it be updateable.
One other possibility, of course, is a problem with the driver itself. The ODBC drivers for SQL Server and Oracle are good quality and can handle compound primary keys, but I've used junky drivers for obscure databases that couldn't handle complex things like subqueries and union queries, even though you could do those things in the database itself; the driver was only written to handle basic SELECT/INSERT/UPDATE/DELETE statements and that was it.

Create a database from another database?

Is there an automatic way in SQL Server 2005 to create a database from several tables in another database? I need to work on a project and I only need a few tables to run it locally, and I don't want to make a backup of a 50 gig DB.
UPDATE
I tried Tasks -> Export Data in Management Studio, and while it created a new database with the tables I wanted, it did not copy over any table metadata, i.e., no PK/FK constraints and no identity data (even with "Preserve Identity" checked).
I obviously need these for it to work, so I'm open to other suggestions. I'll try that database publishing tool.
I don't have Integration Services available, and the two SQL Servers cannot directly connect to each other, so those are out.
Update of the Update
The Database Publishing Tool worked. The SQL it generated was slightly buggy, so a little hand-editing was needed (it tried to reference nonexistent triggers), but once I fixed that I was good to go.
You can use the Database Publishing Wizard for this. It will let you select a set of tables with or without the data and export it into a .sql script file that you can then run against your other db to recreate the tables and/or the data.
Create your new database first. Then right-click on it and go to the Tasks sub-menu in the context menu. You should have some kind of import/export functionality in there. I can't remember exactly since I'm not at work right now! :)
From there, you will get to choose your origin and destination data sources and which tables you want to transfer. When you select your tables, click on the advanced (or options) button and select the check box called "preserve primary keys". Otherwise, new primary key values will be created for you.
I know this method can hardly be called automatic, but why don't you use a few simple SELECT INTO statements?
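For example, a sketch with hypothetical database and table names, assuming both databases live on the same instance (note that SELECT INTO copies columns and data, but not constraints, indexes, or identity settings):

-- Creates ProjectDB.dbo.Users with the source table's columns and rows, but no keys or indexes.
SELECT *
INTO ProjectDB.dbo.Users
FROM BigProductionDB.dbo.Users;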
Because I'd have to reconstruct the schema, constraints and indexes first. That's the part I want to automate... getting the data is the easy part.
Thanks for your suggestions everyone, looks like this is easy.
Integration Services can help accomplish this task. The tool provides advanced data-transformation capabilities, so you will be able to get the exact subset of data that you need from the large database.
Assuming the data is needed for testing or debugging, you might consider applying Row Sampling to reduce the amount of data exported.
1. Create a new database.
2. Right-click on it.
3. Choose Tasks -> Import Data.
4. Follow the instructions.