I want to create an Azure Function with a queue trigger that, when it fires, connects to a SQL DB, gets a record and updates it.
How do I handle making sure the SQL connection queries the right database, e.g. staging DB vs production DB?
Do I need to have two instances of the same Azure Function? One that has its connection string set in application settings to point to the staging DB and the other set to point at the production DB? Surely not?!
Every article I can find talks about your local.settings.json and production, which is fine. But in the real world we might have local, testing, staging and production.
I can pass through the environment as part of the queued message that comes into my Azure function, but surely there is a more elegant way and I'm missing something?
I think this depends on your solution design, size and deployment strategy. Here are 3 options:
Option 1 (our solution):
We are using Azure Functions at a large scale across 4 environments (DEV, TEST, STAGE, PROD).
Therefore, we've created a separate function app for each environment, each with the right connection string in its application settings for that stage.
Option 2:
Another possibility would be to create different deployment slots with slot-specific settings; then you could just use one function with different settings per slot.
Option 3:
You could create parameterized settings (for example, one connection string setting per environment) and decide at runtime which one to use.
How can I duplicate my instance in SQL Server?
I want to have a test instance on the same server just for testing purposes, instead of running all tests against the real data.
Is it possible to create a new instance and copy all the same data and users with permissions to the new instance?
Or is there any other way, other than a VM? My DB is running and in use by other users, but I want to set up my test environment without disturbing them. And all that on the same server.
You could either install a new instance by starting the installer again or simply use the same instance and restore a backup of your prod database to a test database.
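If you go the same-instance route, here is a minimal sketch of restoring a prod backup as a separate test database; the database name, backup path, file locations and logical file names below are made up, so check yours first with RESTORE FILELISTONLY:
-- List the logical file names inside the backup (fill the MOVE clauses from this)
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb_prod.bak'
GO
-- Restore the production backup as a new test database on the same instance,
-- moving the data/log files so they don't collide with the production files
RESTORE DATABASE [MyDb_Test]
FROM DISK = N'D:\Backups\MyDb_prod.bak'
WITH MOVE N'MyDb' TO N'D:\Data\MyDb_Test.mdf',
     MOVE N'MyDb_log' TO N'D:\Data\MyDb_Test_log.ldf',
     RECOVERY, STATS = 10
GO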
without disturbing other users
This requirement alone means you will have to run your instance on another machine or VM. You cannot expect to maintain an instance on a server without certain things affecting the server as a whole, and any other instance running on it (e.g. restarts for patching or troubleshooting).
If you have the resources, there is no reason not to just put it on another VM, but that all depends on what you want to test (e.g. unit, integration or performance testing).
With regards to duplicating your server, you can utilize dbatools. The Start-SqlMigration command would do the work of bringing over the major parts. To make the process easiest, it helps to make sure your new SQL Server instance has the same drive configuration.
Yes, you can do it. Just create a new instance, and then restore your prod database on that instance. You might need to create the users there.
The following might help with creating logins and mapping them to users in the DB.
USE [master]
GO
-- Create the server-level login (MUST_CHANGE forces a password reset at first sign-in)
CREATE LOGIN [myDBUser] WITH PASSWORD=N'myPassword' MUST_CHANGE, DEFAULT_DATABASE=[myDB], CHECK_EXPIRATION=ON, CHECK_POLICY=ON
GO
USE [myDB]
GO
-- Re-map the restored database user to the new login (fixes the orphaned user)
EXEC sp_change_users_login @Action = 'Update_One', @UserNamePattern = 'myDBUser', @LoginName = 'myDBUser'
You can automate this work using dbatools' Start-SqlMigration PowerShell cmdlet.
However, I would warn against running both the production & the test instances on the same physical hardware, as you will be starving the production instance of resources.
I'm working on a project as an outsourced developer where I don't have access to the testing and production servers, only the development environment.
To deploy changes I have to create SQL scripts containing the changes to make on each server for the feature I wish to deploy.
Examples:
When I make each change on the database, I save the script to a folder, but sometimes this is not enough, because I once sent a script to alter a view but forgot to include new tables that I had created for another feature.
Another situation would be changing a table via the SSMS GUI and forgetting to create a script with the changed or new columns, and later having to send a script to update the table in testing.
Since some features can be sent for testing and others straight to production (example: queries to feed Excel files), it's hard to keep track of what I have to send to each environment.
Since the deployment team just executes the scripts I send them to update the database, how can I manage/keep track of changes to a SQL Server database without a compare tool?
[Edit]
The current tools that i use are SSMS, VS 2008 Professional and TFS 2008.
I can tell you how we at xSQL Software do this using our tools:
The deployment team has an automated process that takes a schema snapshot of the staging and production databases and dumps the snapshots nightly on a share that the development team has access to.
Every morning the developers have up-to-date schema snapshots of the production and staging databases available. They use our Schema Compare tool to compare the dev database with the staging/production snapshot and generate the change scripts.
Note: to take the schema snapshot you can either use the Schema Compare tool or our Schema Compare SDK.
I'd say you could keep structural copies of the test and production databases as additional development databases, and make a point of always applying the change there whenever you send something.
On these databases you can establish DDL triggers that capture all DDL events and write them into a table with getdate() attached. With that you should be able to track changes pretty easily, and a simple compare will also be easier to apply.
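A minimal sketch of such a DDL trigger; the log table name and columns are my own invention, adjust as needed:
-- Table that receives one row per DDL event, with getdate() attached
CREATE TABLE dbo.DDLChangeLog (
    EventDate    datetime      NOT NULL DEFAULT (getdate()),
    EventType    nvarchar(100) NULL,
    ObjectName   nvarchar(256) NULL,
    TSQLCommand  nvarchar(max) NULL,
    LoginName    nvarchar(256) NULL
)
GO
-- Database-scoped trigger that fires for every DDL statement and logs it
CREATE TRIGGER trg_LogDDL ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml = EVENTDATA();
    INSERT INTO dbo.DDLChangeLog (EventType, ObjectName, TSQLCommand, LoginName)
    VALUES (@e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'),
            @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(256)'));
END
GO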
Look into Liquibase, especially the SQL format, and see if that gives you what you want. I use it for our database and it's great.
You can store all your objects in separate scripts, but when you do a Liquibase "build" it will generate one SQL script with all your changes in it. The really important part is getting your Liquibase configuration to put the objects in the correct dependency order; that is, tables get created before foreign key constraints, for one example.
http://www.liquibase.org/
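For reference, a rough sketch of what a changeset looks like in Liquibase's formatted-SQL style; the author/id and table here are made up:
--liquibase formatted sql

--changeset jdoe:create-customer-table
CREATE TABLE dbo.Customer (
    CustomerId int NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);
--rollback DROP TABLE dbo.Customer;
Each changeset is recorded in Liquibase's changelog table, so running the build against testing or production only applies the changesets that environment hasn't seen yet.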
I have two databases on separate servers (dev and production). I need to move my data from dev to production, from multiple tables, without affecting the pre-existing data on production. Any idea if SQL Manager supports something like this, or am I going to have to write a script for it?
My situation in detail:
I have a tool which allows me to create surveys for my company. The tool is located on dev and also on production. Since I don't want to add test data to my production DB, I am using the dev version of the tool to create my surveys and test them locally. The tool is tied to a few tables in my DB, such as surveys, questions, answers, results, etc.
My current setup: when I am done with a survey and it is ready to launch, I have to use the production version of the tool to manually redo, on production, all of the work I previously did. This is not ideal at all, not only because of the time I have to spend doing it, but also because of the risk of making mistakes during the manual copying.
What I need to do:
Those tables that I mentioned above already have production data in them and are available for my company to use. When I create a new survey I need to transfer only the specific records of the new survey (from all tables) from dev to production, without affecting anything that was there before.
Use Import and Export Data
Or add the DEV server as a linked server on your PROD server and then use INSERT/SELECT statements.
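A rough sketch of the linked-server route, assuming invented names for the linked server, source database, tables and key column (DEVSRV, SurveyDb, dbo.Surveys, SurveyId):
-- On the PROD server: register the dev server as a linked server (one-time setup)
EXEC sp_addlinkedserver @server = N'DEVSRV', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'dev-sql-host'
GO
-- Copy only the rows belonging to one new survey, skipping anything already in prod
DECLARE @SurveyId int = 42;   -- the survey being promoted

INSERT INTO dbo.Surveys (SurveyId, Title, CreatedOn)
SELECT s.SurveyId, s.Title, s.CreatedOn
FROM DEVSRV.SurveyDb.dbo.Surveys AS s
WHERE s.SurveyId = @SurveyId
  AND NOT EXISTS (SELECT 1 FROM dbo.Surveys AS p WHERE p.SurveyId = s.SurveyId);

-- Repeat the same pattern for Questions, Answers, Results, etc., filtered by the same @SurveyId
If the key columns are IDENTITY columns you would also need SET IDENTITY_INSERT (or to re-map the ids), since the dev-generated values may already be taken in production.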
You can use a database compare tool; for SQL Server I use SQL Delta, which allows you to automatically create a script to run in the database you wish: http://www.sqldelta.com/
You're not going to find any out-of-the-box solutions for this, but there are tools that can help once you've got a clear idea of what you're trying to accomplish -- in detail. A little time spent at this point to make sure you're really clear on what you expect to have happen will pay huge dividends when you move to production.
The scenario you're describing sounds like you've got some configuration-type data in your database alongside your transactional, or domain data. In other words, you've got changes that need to be promoted from your development environment to production in order for your application to work properly. This isn't unusual, but you've got to be pretty deliberate and very careful when you set up a promotion plan for a scenario like this -- after all, you don't want to push test data to your production system along with your configuration changes. It's critical, therefore, to identify the tables you're going to push from dev to prod and make sure those are the only tables you're pushing in that direction.
You also mentioned something about "without affecting the pre-existing data on production". Can you tell us more about this (maybe an example)? Typically, you'd want to keep specific tables (by convention) set up to move changes in one direction only -- ie, from dev to prod. If you've got tables that need to contain merged changes, you're going to have to apply even more attention to getting this right, because you need to deal with merge errors -- what happens when you've got data to push and it's already present in the target database, for instance?
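For the case where some of the data is already present in the target, a hedged sketch using MERGE (SQL Server 2008 or later); the table, the staging copy of the dev data and the key names are invented:
-- Upsert one survey's rows: update the ones that already exist in prod, insert the rest
MERGE dbo.Questions AS target
USING (SELECT QuestionId, SurveyId, QuestionText
       FROM dbo.Questions_Staging
       WHERE SurveyId = 42) AS source
ON target.QuestionId = source.QuestionId
WHEN MATCHED THEN
    UPDATE SET QuestionText = source.QuestionText
WHEN NOT MATCHED BY TARGET THEN
    INSERT (QuestionId, SurveyId, QuestionText)
    VALUES (source.QuestionId, source.SurveyId, source.QuestionText);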
Once you've got a plan for what you actually want to move, some of the tools mentioned in other answers would probably work, or check out Redgate's tools (like SQL Data Compare) -- they make some really nice products to help with DB management tasks.
---- addendum ----
Based on edits to the question, here are a couple of additional thoughts:
(1) Allow your production surveys to have a "disabled" or "testing" mode, so you don't have to make your data changes in another environment. This allows you to move stuff from dev to production only when actual development changes exist.
(2) Define a "package" mechanism to move a survey from one environment to another. This would allow you to deal with merge conflicts, ID changes, etc., generically and reliably. As a bonus, this would allow you to also move a production survey back to dev for debugging and testing purposes.
I am working on a project which uses a relational database (SQL Server 2008). The local (on-premises) application both reads and writes to the database. I am working on a different front end for Azure (MVC2 Web Role), which will use the same data, but in a read only fashion. If I was deploying a traditional web app, I would use SQL Express to act as the local database, and deploy changes with updates to the application (the data changes very slowly) or via some sync system.
With Azure, the picture is a little cloudy (sorry, I had to). I can't seem to find any information to indicate if SQL Express will work inside of Web Roles, and if so, how to do it. Does anyone know if using SQL Express in an Azure web role is possible?
Other options I could do if forced: SQL CE or use SQL Azure. Both have a number of downsides, and are definitely less than perfect.
Thanks,
Erick
Edit
I think my scenario may not have been clear enough.
This data won't change between deployments, and is only accessed from within the Web Role; it is basically a static cache. The on-premises part is kind of a red herring, as it doesn't impact the data on the web role (aside from being its source). Basically, what I want to do is have a local data store/cache that I use existing T-SQL/DAL code with.
While I could use SQL Azure, it doesn't add anything, and if anything only adds additional overhead and failure points. I could also use a VM Role, but that is way too costly/complex.
In a perfect world, I would package the MDF into the cspkg (so it gets deployed with the app) and then use it locally from within the role. If there is no way to do this, then that is ok and I need to figure out the pros and cons of other solutions. We don't live in a perfect world. :)
You might be able to run SQL Express using a custom VHD, but you won't be able to rely on any data ever being present on that VHD. The VMs are completely reset when they reboot - there is no physical persistence across reboots.
If you wanted to, you might be able to locate your entire SQL Server installation in Azure blob storage.
However, in doing all of this, you'll only be able to have one worker/web role that can use that database. Remember: a SQL Server database can only be attached to one SQL Server at a time. If you want to scale out, you'll have to create new SQL Server instances for every web/worker role.
Outside of cost concerns, I can't think of anything that is in SQL Express that should be a show stopper for 99.9% of applications out there.
Adding to Jeremiah's answer: SQL Azure should give you nearly everything SQL Express does today, and you can use the Sync service to synchronize on-premise SQL Server with SQL Azure.
If you installed SQL Express into a VM role, you'd be consuming around $90 monthly just for that instance, plus blob storage (you'd want a Cloud Drive for durability). By definition, a VM Role (or any role) must support scale-out; if you were to scale to 2 instances for whatever reason, both instances would need their own copy of the database, so you'd need to create a blob snapshot for each instance.
Keep in mind, though, if you choose to install SQL Express in a VM: once you're at 2 instances, along with, say, 20GB per instance of blob storage, you're nearing $200 monthly and you're maintaining your VM's OS patches, SQL Express configuration and updates, failure recovery procedures, etc. In contrast, SQL Azure at 20GB, while costing the same $200, will offer better performance and works with the sync service, while completely removing any OS or database server management tasks from you.
To add to the already existing answers, and for anyone wondering if it's a good idea to run SQL Express in the cloud:
it does make sense as a temporary storage area. Consider this architectural approach:
say you're spinning up nodes to run jobs. Storing a gazillion calculation results in a local SQL Express on each node might be a good idea, so the node can provide the aggregated responses immediately when the job finishes. Transfer of the no-longer-hot results to an off-prem SQL Server for future reporting/etc. can be done afterwards. SQL Azure may not be optimal from a volume/latency/cost perspective for storing a gazillion results, and ATS will not always fit the bill, especially when relational data, performance or existing code are involved.
To expand on what David mentioned, you can register for SQL Azure Data Sync CTP2, which allows sync from SQL Server to SQL Azure, here: http://www.microsoft.com/en-us/SQLAzure/datasync.aspx
Make sure to use CTP2 though since CTP1 did not support SQL Server.
If it's a read only local cache - SQL CE 4 or SQLite.
Both have Entity Framework providers.
If you're writing to it - SQL Azure
We have 18 databases that should have identical schemas, but don't. In certain scenarios, a table was added to one, but not the rest. Or, certain stored procedures were required in a handful of databases, but not the others. Or, our DBA forgot to run a script to add views on all of the databases.
What is the best way to keep database schemas in sync?
For legacy fixes/cleanup, there are tools, like SQLCompare, that can generate scripts to sync databases.
For .NET shops running SQL Server, there is also the Visual Studio Database Edition, which can create change scripts for schema changes that can be checked into source control, and automatically built using your CI/build process.
SQL Compare by Red Gate is a great tool for this.
SQLCompare is the best tool that I have used for finding differences between databases and getting them synced.
To keep the databases synced up, you need to have several things in place:
1) You need policies about who can make changes to production. Generally this should only be the DBA (DBA team for larger orgs) and 1 or 2 backups. The backups should only make changes when the DBA is out, or in an emergency. The backups should NOT be deploying on a regular basis. Set database rights according to this policy.
2) A process and tools to manage deployment requests. Ideally you will have a development environment, a test environment, and a production environment. Developers should do initial development in the dev environment, and have changes pushed to test and production as appropriate. You will need some way of letting the DBA know when to push changes. I would NOT recommend a process where you holler to the next cube. Large orgs may have a change control committee and changes only get made once a month. Smaller companies may just have the developer request testing, and after testing is passed a request for deployment to production. One smaller company I worked for used Problem Tracker for these requests.
Use whatever works in your situation and budget, just have a process, and have tools that work for that process.
3) You said that sometimes objects only need to go to a handful of databases. With only 18 databases, probably on one server, I would recommend making each database match objects exactly. Only 5 DBs need usp_DoSomething? So what? Put it in every database. This will be much easier to manage. We did it this way on a 6-server system with around 250-300 DBs. There were exceptions, but they were grouped: databases on server C got this extra set of objects, databases on server L got this other set.
4) You said that sometimes the DBA forgets to deploy change scripts to all the DBs. This tells me that s/he needs tools for deploying changes. S/he is probably taking a SQL script, opening it in Query Analyzer or Management Studio (or whatever you use) and manually going to each database and executing the SQL. This is not a good long-term (or short-term) solution. Red Gate (makers of SQLCompare above) have many great tools; MultiScript looks like it may work for deployment purposes. I worked with a DBA who wrote his own tool in SQL Server 2000 using osql: it would take a SQL file and execute it on each database on the server. He had to execute it on each server, but it beat executing on each DB. I also helped write a VB.NET tool that would do the same thing, except it would also go through a list of servers, so it only had to be executed once (see the sketch after this list for the general idea).
5) Source Control. My current team doesn't use source control, and I don't have enough time to tell you how many problems this causes. If you don't have some kind of source control system, get one.
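As for the deployment tooling mentioned in point 4, here is a rough T-SQL sketch of the "run one script against every database" idea; the database filter and the sample statement being deployed are illustrative, not anyone's actual tool:
-- Run one deployment batch against every online user database on the server
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4            -- skip master, tempdb, model, msdb
      AND state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Switch context to the target database, then run the deployment statement there
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        IF OBJECT_ID(N''dbo.usp_DoSomething'', N''P'') IS NOT NULL
            DROP PROCEDURE dbo.usp_DoSomething;
        EXEC (N''CREATE PROCEDURE dbo.usp_DoSomething AS SELECT 1;'');';
    EXEC (@sql);
    FETCH NEXT FROM db_cursor INTO @db;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
The undocumented sp_MSforeachdb procedure can do something similar in one line, but a cursor like this keeps you in control of exactly which databases are included.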
I haven't got enough reputation to comment on the above answer but the pro version of SQL Compare has a scriptable API. Given that you have to replicate stuff to all of these databases you could use this to make an automated job to either generate the change scripts or to validate that the databases are all in sync. It's also not much more expensive than the standard version.
Aside from using database comparison tools, with 18 databases you should have a DBA, so enforce a policy that only the DBA can change tables at the database level by restricting access to CREATE and ALTER to the DBA only. On both your test and live databases. The dev database shouldn't have this, of course! Make the developers who have been creating or altering the schemas willy-nilly go via the DBA.
Create a single source-controlled DDL/SQL script for each release and only use it to update the databases. The diff tools can be useful but mainly for checking that you haven't made a mistake and getting out of trouble when the policies fail. Combine the DDL, SQL, and stored procedure scripts into a single script so that it's not easy to "forget" to run one of the scripts.
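In practice that single release script is easiest to manage if every change in it is guarded, so it can be re-run safely against any of the databases. A small sketch, with illustrative object names:
-- Guarded table creation: does nothing if the table already exists
IF OBJECT_ID(N'dbo.Widget', N'U') IS NULL
    CREATE TABLE dbo.Widget (
        WidgetId int NOT NULL PRIMARY KEY,
        Name nvarchar(100) NOT NULL
    );
GO
-- Guarded column addition
IF COL_LENGTH(N'dbo.Widget', N'CreatedOn') IS NULL
    ALTER TABLE dbo.Widget ADD CreatedOn datetime NOT NULL DEFAULT (getdate());
GO
-- Drop-and-recreate keeps stored procedures in sync without CREATE vs ALTER headaches
IF OBJECT_ID(N'dbo.usp_GetWidgets', N'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetWidgets;
GO
CREATE PROCEDURE dbo.usp_GetWidgets
AS
    SELECT WidgetId, Name, CreatedOn FROM dbo.Widget;
GO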
We have a tool called DB Schema Difftective that can compare and sync database schemas. With our other tool, DB MultiRun, you can easily deploy generated (sync) scripts to multiple DB servers (project-based).
I realize this post is old, but TurnKey is correct. If you are a developer working in a team environment, the best way to maintain a database schema for a large application is to make updates to a master schema in whatever source control system you use. Simply write your own scripting class and your database will be perfect every time.