Azure recently announced that they are discontinuing the Easy Tables and Easy APIs features of App Service. I've used both of these extensively and want to continue to do so.
This article explains that all the existing functionality is still there, you just have to set it up manually yourself. Now, I'm relatively comfortable with creating basic SQL tables myself, but Azure Easy Tables have some special properties that I don't know how to create.
When you create an Azure Easy Table, it creates an id column with auto-generated values that look like dc405ef6-6c40-465d-ba8a-00e1ad86d5e4 - I know how to make an auto-incrementing id column, but not one like that. It also has columns for createdAt and updatedAt, of type datetimeoffset, which get filled in automatically.
What are the CREATE TABLE commands to replicate this?
Also, I'm not sure if there's something special I have to do so that deleted rows don't show up in my OData queries.
What are the CREATE TABLE commands to replicate this?
You can set the column's default value to the NEWID() function, and it will insert a GUID as the value for that column.
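Putting that together with defaults for the timestamp columns, a minimal sketch might look like this (table and column names are examples, not necessarily the exact schema Easy Tables generates):

```sql
CREATE TABLE dbo.TodoItem
(
    id        UNIQUEIDENTIFIER NOT NULL
              CONSTRAINT DF_TodoItem_id DEFAULT NEWID()
              CONSTRAINT PK_TodoItem PRIMARY KEY,
    [text]    NVARCHAR(MAX)    NULL,
    createdAt DATETIMEOFFSET   NOT NULL
              CONSTRAINT DF_TodoItem_createdAt DEFAULT SYSDATETIMEOFFSET(),
    updatedAt DATETIMEOFFSET   NOT NULL
              CONSTRAINT DF_TodoItem_updatedAt DEFAULT SYSDATETIMEOFFSET(),
    deleted   BIT              NOT NULL
              CONSTRAINT DF_TodoItem_deleted DEFAULT 0
);
GO

-- SQL Server has no "on update" default, so updatedAt needs a trigger:
CREATE TRIGGER dbo.trg_TodoItem_touch ON dbo.TodoItem
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET    updatedAt = SYSDATETIMEOFFSET()
    FROM   dbo.TodoItem AS t
    JOIN   inserted AS i ON t.id = i.id;
END;
GO
```

For soft deletes, set deleted = 1 instead of deleting the row, and filter those rows out yourself (for example with WHERE deleted = 0 in whatever your OData layer queries).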
In my application (a C# application using Entity Framework and a SQL Database), I need to create a daily task to update/insert data from a third-party application (both applications use a SQL Server database). For efficiency's sake, I am looking for a way to determine which records have been modified since the previous day and thus import only those records.
I know I could add a modified_on column to the source table and create a trigger to update that column when something changes on that record, but that would require me to make changes to the third-party application's database schema, which I want to avoid.
There's the change tracking feature, but it's of limited use to you because you're using EF, which makes the way the data has to be queried awkward. You may be able to use it somehow, but I doubt it would be elegant.
Way easier is to indeed change the schema, but to add only a single column of type rowversion. That binary datatype (loaded as byte[] in EF) is special and gets a larger value every time something (such as the third-party application) updates the row. No triggers needed. You can keep track of the largest value you've already processed and then query only the rows with a larger value, as sketched below.
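A sketch of that approach, assuming a hypothetical source table and a one-row watermark table:

```sql
-- Add the rowversion column once (SQL Server populates and maintains it automatically):
ALTER TABLE dbo.SourceTable ADD RowVer ROWVERSION;

-- Remember the highest value processed by the last import, e.g. in a one-row table:
DECLARE @last BINARY(8);
SELECT @last = LastRowVersion FROM dbo.ImportWatermark;

-- Only rows touched since the last import:
SELECT *
FROM dbo.SourceTable
WHERE RowVer > @last;

-- After a successful import, advance the watermark:
UPDATE dbo.ImportWatermark
SET LastRowVersion = (SELECT MAX(RowVer) FROM dbo.SourceTable);
```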
In addition to the change tracking suggested by John in another answer, you could consider setting up temporal tables.
You can run queries against the temporal tables to identify the changed records and pull them accordingly from the main table.
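A minimal sketch of a system-versioned temporal table (requires SQL Server 2016 or later; all names are assumptions):

```sql
CREATE TABLE dbo.Customer
(
    CustomerId INT           NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

-- Rows inserted or updated in the last day: ValidFrom of the current row is the
-- time of its last modification. Deleted rows end up in dbo.CustomerHistory.
SELECT CustomerId
FROM dbo.Customer
WHERE ValidFrom >= DATEADD(DAY, -1, SYSUTCDATETIME());
```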
I need to audit DDL changes made to a database. Those changes need to be replicated in many other databases at a later time. I found here that one can enable DDL triggers to keep track of DDL activities, and that works great for create table and drop table operations, because the trigger gets the T-SQL that was executed, and I can happily store it somewhere and simply execute it on the other servers later.
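For reference, the kind of database-scoped DDL trigger I mean looks roughly like this (the log table and trigger names are just examples):

```sql
CREATE TABLE dbo.DdlLog
(
    LogId       INT IDENTITY(1,1) PRIMARY KEY,
    EventTime   DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    EventType   NVARCHAR(100) NULL,
    CommandText NVARCHAR(MAX) NULL
);
GO

CREATE TRIGGER trg_LogDdl ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @e XML = EVENTDATA();
    -- Store the event type and the T-SQL that was executed.
    INSERT INTO dbo.DdlLog (EventType, CommandText)
    VALUES
    (
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
    );
END;
```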
The problem I'm having is with alter operations: when a column name is changed from Management Studio, the event that is produced doesn't contain any information about columns! It just says the table was locked... What's more, if many columns are changed at once (say, column foo => oof and column bar => rab), the event is fired only once!
My poor man's solution would be to have a table to store the structure of the table that's going to be altered, before and after the alter operation. That way, I could compare both structures and figure out what happened to which column(s).
But before I do that, I was wondering if it is possible to do it using some other feature from SQL Server that I have overlooked, or maybe there's a better way. How would you go about this?
There is a product meant for doing just that (I wrote it).
It monitors scripts that contain DDL changes, recording who wrote them and when, together with their effect on performance, and it gives you the ability to easily copy them as one deployment script. For what you asked, the free version is sufficient.
http://www.seracode.com/
There is no special feature in SQL Server for your need. You can use triggers, but they require a lot of T-SQL coding to work properly. A fast solution would be a third-party tool, but those aren't free. Please take a look at this answer regarding the third-party tools: https://stackoverflow.com/a/18850705/2808398
I'm working on the design of a relational database. It has several tables, and there are multiple users at the application level. I need to know when changes are made to a certain record of a certain table: by which user, at what time, and what actually changed. There is a table for saving the users' information, and this table is also included in this behavior.
How should I design this in the SQL database so I can let users see which one of them made these changes?
What you want is Wiki-like versioning. Basically, for every table whose versions you want to keep, you'll want to create at least a copy of that table with the fields you mentioned added (the user id and when the change was made). That's probably all there is to it, as long as you only need to track changes. Then, upon an edit, you just create a backup of the current row in that copied table and put the new one in the actual table. This way you can (hopefully) add the versioning without having to touch existing presentation code.
It gets a little trickier if you need to record additional actions like the creation of new rows and deletions.
If you need a code example, just have a look under the hood of some wiki like https://mediawiki.org/
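A minimal sketch of the copy-on-edit idea, assuming a hypothetical dbo.Article table (all names are examples):

```sql
-- History copy of the table, plus who/when columns:
CREATE TABLE dbo.ArticleHistory
(
    HistoryId INT IDENTITY(1,1) PRIMARY KEY,
    ArticleId INT           NOT NULL,
    Title     NVARCHAR(200) NOT NULL,
    Body      NVARCHAR(MAX) NULL,
    ChangedBy INT           NOT NULL,                      -- application user id
    ChangedAt DATETIME2     NOT NULL DEFAULT SYSDATETIME()
);
GO

-- On edit: back up the current row first, then apply the new values.
CREATE PROCEDURE dbo.UpdateArticle
    @ArticleId INT,
    @Title     NVARCHAR(200),
    @Body      NVARCHAR(MAX),
    @UserId    INT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.ArticleHistory (ArticleId, Title, Body, ChangedBy)
    SELECT ArticleId, Title, Body, @UserId
    FROM dbo.Article
    WHERE ArticleId = @ArticleId;

    UPDATE dbo.Article
    SET Title = @Title, Body = @Body
    WHERE ArticleId = @ArticleId;
END;
```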
For starters you can look at SQL Server's version tracking mechanisms (row versioning or change tracking). After that you can look at the SQL Server Audit feature. I think SQL Server Audit would be the best fit for your needs.
On the other hand, if you want to build ad-hoc versioning yourself, do NOT reach for triggers. Imagine it: you would have to create triggers on every table for inserts, updates and deletes. That IS bad practice.
I think ad-hoc versioning should be avoided (it degrades performance and is difficult to support), but if it cannot be avoided, I would certainly use CONTEXT_INFO to track the current application user. Then I would build something that reads the table's schema, picks up changes via SQL Server's change tracking mechanisms, and stores them in a tablename, changeduser, changedtime, column, prevValue, newValue style. I would not replicate each and every table just to record the changes.
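A sketch of the CONTEXT_INFO part (the user name is an assumption; the application would set it at the start of each session or request):

```sql
-- Application side: stash the current application user in the session.
DECLARE @user VARBINARY(128) = CAST('jsmith' AS VARBINARY(128));
SET CONTEXT_INFO @user;

-- Audit side: read it back when recording a change. The value is zero-padded
-- to 128 bytes, so trim trailing zero bytes before comparing it to anything.
SELECT CAST(CONTEXT_INFO() AS VARCHAR(128)) AS CurrentAppUser;
```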
I'm creating the front-end for a project and I made a copy of the back-end database from the company's server and put it on my computer. I needed to make some changes (a few new tables and two new columns in an existing table) for security roles and other things so I duplicated the copied database and made my changes on the new one.
I want to deploy my project to the company's server now, but we need to modify the original back-end database. I need to generate a SQL script that finds the changes between the old database and my newer database and can be run on the old database to create the new tables and columns. The script should retain the data from the old database and NOT add any junk/testing data I created in my new database.
By the way, I'm using SQL Server 2008 R2 and the old database on the server is on 2005. I've been looking around for utilities to use and found tablediff. However, it looks like it will copy the data and I can't see an argument on the information page to toggle this.
I'm sure it's simple but I'm not really sure how to do this. Any help would be appreciated. Thanks.
By far the solution I trust most to handle schema comparisons is Red Gate's SQL Compare:
http://www.red-gate.com/products/sql-development/sql-compare/
It has a companion called Data Compare which is designed specifically for data. You can grab the free trial to see if it does what you need in this case.
There are other options as well; for example, SQL Server Data Tools has this functionality, though I haven't tested it to a degree that would let me compare feature sets, performance, etc.
I've also blogged about why you want to use a tool and just pay for this functionality, rather than solve it programmatically yourself. The post also mentions a variety of alternatives if budget is a primary blocker:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have a SQL server that has a database for each client, and we have hundreds of clients. So imagine the following: database001, database002, database003, ..., database999. We want to combine all of these databases into one database.
Our thought is to add a siteId column with values 001, 002, 003, ..., 999.
We are exploring options to make this transition as smoothly as possible. And we would LOVE to hear any ideas you have. It's proving to be a VERY challenging problem.
I've heard of a technique where you would create a view that matches the old structure and then filters by site.
Any ideas guys?
Create a client database id for each of the client databases. You will use this id to keep the data logically separated. This is the "site id" concept, but you can use a derived key (identity field) instead of manually creating these numbers. Create a table that has database name and id, with any other metadata you need.
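A sketch of that mapping table (names are assumptions):

```sql
CREATE TABLE dbo.ClientDatabase
(
    SiteId       INT IDENTITY(1,1) PRIMARY KEY,  -- the derived key used as the site id
    DatabaseName SYSNAME   NOT NULL UNIQUE,
    MigratedAt   DATETIME2 NULL
);

INSERT INTO dbo.ClientDatabase (DatabaseName)
VALUES (N'database001'), (N'database002'), (N'database003');
```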
The next step would be to create an SSIS package that gets the ID for the database in question and adds it to the tables whose data has to be logically separated. You can then run that same package over each database, looking up the ID for the database in question.
After the data has its unique id and has been imported, you will have to alter your apps to fit the new schema (actually before, or you are pretty much screwed).
If you want to do this in steps, you can create views or functions in the different "databases" so the old client can still hit the client's data, even though it has been moved. This step may not be necessary if you deploy with some downtime.
The method I propose is fairly flexible and can be applied to one client at a time, depending on your client application deployment methodology.
Why do you want to do that?
You can read about Multi-Tenant Data Architecture and also listen to SO #19 (around 40-50 min) about this design.
The "site-id" solution is what's done.
Another possibility that may not work out as well (but is still appealing) is multiple schemas within a single database. You can pull common tables into a "common" schema and leave the customer-specific stuff in customer-specific schemas. In some database products, however, each schema is -- effectively -- a separate database. In other products (Oracle and DB2, for example) you can easily write queries that work across multiple schemas.
Also note that -- as an optimization -- you may not need to add a siteId column to EVERY table.
Sometimes you have a "contains" relationship. It's a master-detail FK, often defined with a cascade delete so that the detail cannot exist without the parent. In this case, the children don't need a siteId because they don't have an independent existence.
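A sketch of that case (table names are examples); only the parent carries the site id, and the detail rows inherit it through the FK:

```sql
CREATE TABLE dbo.Orders
(
    OrderId INT NOT NULL PRIMARY KEY,
    SiteId  INT NOT NULL
);

CREATE TABLE dbo.OrderLines
(
    OrderId    INT NOT NULL
        REFERENCES dbo.Orders (OrderId) ON DELETE CASCADE,
    LineNumber INT NOT NULL,
    Quantity   INT NOT NULL,
    PRIMARY KEY (OrderId, LineNumber)   -- no SiteId needed here
);
```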
Your first step will be to determine if these databases even have the same structure. Even if you think they do, you need to compare them to make sure they do. Chances are there will be some that are customized or missed an upgrade cycle or two.
Now, depending on the number of clients and the number of records per client, your tables may get huge. Are you sure this will not create a performance problem? At any rate, you may need to take a fresh look at indexing. You may need a much more powerful set of servers and may also need to partition by client anyway for performance.
Next, yes, each table will need a site id of some sort. Further, depending on your design, you may have primary keys that are no longer unique. You may need to redefine all primary keys to include the siteid. Always index this field when you add it.
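A sketch of that kind of key change (table and constraint names are assumptions; any foreign keys referencing the old key would have to be adjusted too):

```sql
-- Add the site id, then widen the primary key to include it:
ALTER TABLE dbo.Invoices ADD SiteId INT NOT NULL CONSTRAINT DF_Invoices_SiteId DEFAULT 0;
ALTER TABLE dbo.Invoices DROP CONSTRAINT PK_Invoices;
ALTER TABLE dbo.Invoices ADD CONSTRAINT PK_Invoices PRIMARY KEY (SiteId, InvoiceId);
```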
Now all your queries, stored procs, views and udfs will need to be rewritten to ensure that the siteid is part of them. Pay particular attention to any dynamic SQL. Otherwise you could be showing client A's information to client B. Clients don't tend to like that. We brought a client from a separate database into the main application one time (when they decided they didn't want to keep paying for a separate server). The developer missed just one place where client_id had to be added. Unfortunately, that sent emails to every client concerning this client's proprietary information, and to make matters worse, it was a nightly process that ran in the middle of the night, so it wasn't discovered until the next day. (The developer was very lucky not to get fired.) The point is: be very, very careful when you do this and test, test, test, and then test some more. Make sure to test all the automated behind-the-scenes stuff as well as the UI.
What I was explaining in Florence towards the end of last year applies if you have to keep the database names and the logical layer of the database the same for the application. In that case you'd do the following:
Collapse all the data into consolidated tables in one master, consolidated database (hereafter referred to as the consolidated DB).
Those tables would have to have an identifier like SiteID.
Create the new databases with the existing names.
Create views with the old table names that query the tables in the consolidated DB but filter on SiteID, giving you row-level security (see the sketch after these steps).
Set up the databases for cross-database ownership chaining so that the service accounts can't "accidentally" query the base tables in the consolidated DB. Access must happen through the views or through stored procedures and other constructs that will enforce row-level security. Now, if it's the same service account for all sites, you can avoid the cross DB ownership chaining and assign the rights on the objects in the consolidated DB.
Rewrite the stored procedures to either handle the change (since they now refer to views and don't know to hit the base tables and include SiteID) or use INSTEAD OF triggers on the views to intercept update requests and put the appropriate site-specific information into the base tables.
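A minimal sketch of the view step above, assuming a consolidated table ConsolidatedDB.dbo.Customer with a SiteID column; the literal 42 stands in for this site's id, which in practice would come from configuration rather than being hard-coded:

```sql
-- Created inside the per-site database that keeps the old name:
CREATE VIEW dbo.Customer
AS
SELECT c.CustomerId, c.Name, c.Email
FROM ConsolidatedDB.dbo.Customer AS c
WHERE c.SiteID = 42;   -- only this site's rows are visible through the view
```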
If the data is large, you could look at using a partitioned view. This would simplify your access code, as all you'd have to maintain is the view; however, if the data is not large, just add a column to identify the customer.
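A sketch of a partitioned view (all names and the per-site member tables are assumptions); the CHECK constraint on the partitioning column is what lets SQL Server prune the member tables:

```sql
CREATE TABLE dbo.ClientData_Site1
(
    SiteId  INT NOT NULL CHECK (SiteId = 1),
    OrderId INT NOT NULL,
    PRIMARY KEY (SiteId, OrderId)
);

CREATE TABLE dbo.ClientData_Site2
(
    SiteId  INT NOT NULL CHECK (SiteId = 2),
    OrderId INT NOT NULL,
    PRIMARY KEY (SiteId, OrderId)
);
GO

-- Queries with WHERE SiteId = ... touch only the matching member table.
CREATE VIEW dbo.ClientData
AS
SELECT SiteId, OrderId FROM dbo.ClientData_Site1
UNION ALL
SELECT SiteId, OrderId FROM dbo.ClientData_Site2;
```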
Depending on what the data is and your security requirements, the threat of cross-contamination may be a show stopper.
Assuming you have considered this and deem it "safe enough", you may need or want to create views or impose some other access control to prevent customers from seeing each other's data.
IIRC a product called "Trusted Oracle" had the ability to partition data based on such a key (about the time Oracle 7 or 8 was out). The idea was that any given query would automagically have "and sourceKey = #userSecurityKey" (or some such) appended. The feature may have been rolled into later versions of the popular commercial product.
To expand on Gregory's answer, you can also make a parent SSIS package that calls the package doing the actual moving from within a Foreach Loop container.
The parent package queries a config table and puts the result in an object variable. The Foreach Loop then uses that recordset to pass variables to the child package, such as the database name and any other details the package might need.
Your table could list all of your client databases and have a flag to mark when you are ready to move them. This way you are not sitting around running the SSIS package on 32,767 databases. I'm hooked on the Foreach Loop in SSIS.
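A sketch of the configuration table the parent package could query (names are assumptions):

```sql
CREATE TABLE dbo.MigrationConfig
(
    DatabaseName SYSNAME   NOT NULL PRIMARY KEY,
    ServerName   SYSNAME   NOT NULL,
    ReadyToMove  BIT       NOT NULL DEFAULT 0,
    MovedAt      DATETIME2 NULL
);

-- The Execute SQL Task feeding the Foreach Loop would run something like:
SELECT DatabaseName, ServerName
FROM dbo.MigrationConfig
WHERE ReadyToMove = 1
  AND MovedAt IS NULL;
```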