Is it possible to run the AspnetBoilerplate CompanyName.ProjectName.Migrator per environment, and if so, how?
From what I can see, it only reads settings from appsettings.json and not from appsettings.{Environment}.json, for example. That is not workable for a CI/CD scenario where I plan to run the Migrator as part of the pipeline.
Any help or ideas would be appreciated.
Migrator gets the host connection string from its own appsettings.json file. Initially, it will be the same as the one in the .Web.Host project's appsettings.json. Make sure the connection string in the config file points to the database you want. After getting the host connection string, it first creates the host database and applies migrations if they haven't been applied yet. It then gets the connection strings of the tenant databases and runs migrations against those databases. It skips a tenant if it does not have a dedicated database, or if its database has already been migrated by another tenant (for databases shared between multiple tenants).
You can use this tool in development or in production to migrate databases on deployment, instead of EntityFramework's own tooling (which requires some configuration and can only work against a single database/tenant in one run).
You can refer to this document on connection strings.
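If you can edit the Migrator's configuration bootstrap, one option is to layer an environment-specific JSON file and environment variables on top of appsettings.json. The following is only a minimal sketch (the ASPNETCORE_ENVIRONMENT variable and the "Default" connection name are assumptions, not something the stock Migrator guarantees):

using System;
using System.IO;
using Microsoft.Extensions.Configuration;

// Hedged sketch: let appsettings.{Environment}.json and environment variables
// override appsettings.json for a Migrator-style console run.
var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false)
    .AddJsonFile($"appsettings.{environment}.json", optional: true)  // per-environment overrides
    .AddEnvironmentVariables()                                       // lets the CI/CD pipeline override anything
    .Build();

// "Default" is an assumed connection string name; use whatever your appsettings.json defines.
var connectionString = configuration.GetConnectionString("Default");

In a CI/CD run you would then set ASPNETCORE_ENVIRONMENT (or the individual settings) on the agent before invoking the Migrator.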
Related
In short, my scenario is to test a remote API to see whether anything has changed in the called endpoints, such as a parameter being removed.
To get this info I need to have a token.
My problem is that I can't store it in the database and use Windows credentials, because in Azure Pipelines the build agents have no access or connection to the database. And if I pass the token through pipeline variables, then I won't have the token when I run the code locally.
appsettings.json is stored in Git, so it is not safe.
Any idea on this?
Thanks!
UserSecrets + environment variables are the key here. appsettings.json is for non-sensitive configuration, but there is a concept called user secrets (see link below) that lets you keep an appsettings.json equivalent on your machine only, not in Git. Anything you specify there overrides or adds to whatever is in your appsettings.json.
If that information is also needed in production, then environment variables should be used. Instead of a file, individual configuration values can be specified or overridden using environment variables.
This only works if the ASP.NET Core host configuration is set up to read from all of these sources. The default setup from the template should accomplish this, but read the links below to make sure your setup works.
All the configuration and best practices can be found here.
Non-sensitive info/defaults: appsettings.json
Sensitive info for devs/dev-specific info: UserSecrets
Sensitive info for prod: Environment variables
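To make the layering concrete, here is a minimal sketch (not the template's exact code; the "ApiToken" key is an assumed name for the token mentioned above) of how those three sources are typically wired up, with later sources overriding earlier ones:

using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)  // non-sensitive defaults, committed to Git
            .AddUserSecrets<Program>()                         // dev-only; needs a <UserSecretsId> (run: dotnet user-secrets init)
            .AddEnvironmentVariables()                         // production/pipeline secrets
            .Build();

        // Locally the value comes from user secrets; in the Azure Pipeline it can be
        // supplied as an environment variable with the same name.
        var token = config["ApiToken"];
    }
}

Locally you would store the value with dotnet user-secrets set ApiToken <value>, and in the pipeline expose it as a secret variable mapped to an environment variable of the same name.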
I created the scaffolded project according to this tutorial. There it says:
For local development, the ASP.NET Core configuration system reads the connection string from the appsettings.json file.
What is the correct procedure for switching from this local database (an *.mdf file) to a "global" database (e.g. an MS SQL Server instance installed on the network)?
Scaffolding has nothing to do with whether the database is local or remote, assuming of course that they share the same schema. If the remote database schema is different, you need only rescaffold, which is what you will need to do for any change of schema.
All you really have is a connection string. The connection string just happens to point to a local database. To point at a remote database, you simply change the connection string to the information for that remote database. Local or remote means nothing as far as your app is concerned; it's just connecting to whatever you told it to connect to.
By default, the connection string lives in appsettings.json. To change it, you can either edit it directly in appsettings.json or override it using another config source such as environment-specific JSON (e.g. appsettings.Production.json), environment variables, command-line arguments, etc.
However, a connection string is going to contain sensitive information (user/pass), and it's therefore a secret. As such, you should not store any real database information in something like JSON, which is not encrypted and will be committed to source control. Instead, you should use a config source external to your app code (e.g. environment variables on the server) and preferably something where data is encrypted at rest (e.g. Azure Key Vault). In development, you can also use user secrets. It's just another JSON source, so it's still not encrypted. However, it's stored outside your project, and therefore will at least not end up in your source control.
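As a small illustration (the class and the "DefaultConnection" key are hypothetical; match whatever your appsettings.json defines), the consuming code never knows which source supplied the value:

using Microsoft.Extensions.Configuration;

public class OrdersRepository
{
    private readonly string _connectionString;

    public OrdersRepository(IConfiguration configuration)
    {
        // Returns ConnectionStrings:DefaultConnection from whichever source won the layering:
        // appsettings.json, appsettings.Production.json, user secrets, or the environment
        // variable ConnectionStrings__DefaultConnection set on the server.
        _connectionString = configuration.GetConnectionString("DefaultConnection");
    }
}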
I am using the Database connector component, with the Vault component to store the database credentials. As per the documentation of both components, I have created a different properties file for each environment to store that environment's encrypted credentials.
The following is the structure of my Mule project.
Now the problem with this structure is that I have to build a new deployable zip file whenever I have to update the database credentials for any environment.
I need a solution where I can keep all credentials encrypted and centralized, and where I don't have to create a new build every time the credentials are updated. We can afford to restart the server, but building a new zip and redeploying is really cumbersome.
The second problem with this approach is that a developer needs to know the production database credentials to update them in the properties file, which is also a security issue.
Please suggest an alternative approach to credentials management for Mule projects.
I'm going to recommend you do NOT try to change the secure solution provided to you by MuleSoft. To remove the need for packaging and deployment, you would have to move the properties files outside of the deployment, and that would be a huge risk. Regardless of where you store the property files within the deployment, if you change the files you have to package and re-deploy. The only way around your problem is to move the files outside of the deployment and store them securely. Mule has provided a solution which, while it may be cumbersome, secures these files first with encryption and secondly within the server container. You can move the property files out, but you have to provide a custom implementation and you will be taking on great risk to your protected resources.
Set a VM argument, e.g. -Denvironment.type=local, for your local machine in Anypoint Studio.
Read this variable wherever you are loading your properties file, so that the environment type is resolved dynamically, as below.
" location="classpath:properties/sample-app-${environment.type}.properties" doc:name="Secure Property Placeholder"/>
To set the environment type on your production server (or wherever you are running the Mule runtime), open <MULE_HOME>\conf\wrapper.conf and add the argument wrapper.java.additional.<n>=-Denvironment.type=production. If you already have wrapper.java.additional.<n> properties in this file, you may need to set the value of <n> appropriately, for example 13 or 14.
This way you don't need to generate different deployment artefacts for different environments, because the correct properties file is picked up via the environment-specific VM argument.
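For illustration with the placeholder values above (the index 13 is only an example; use the next free number in your wrapper.conf):

# VM argument in the Anypoint Studio run configuration (local machine):
-Denvironment.type=local

# Additional entry in <MULE_HOME>\conf\wrapper.conf on the server (production):
wrapper.java.additional.13=-Denvironment.type=production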
I have a very simple SSIS package that has 2 connections defined in the Connection Manager section: an MS Access data source and an MS SQL data source destination. All this package does is truncate a table in the SQL destination and import data from MS Access into the SQL table. This works as expected during development within VS2013.
Now, I also have enabled Package Configurations for the package and have a couple of XML Configuration files (1 for each Connection) in a folder on the root of the C: drive. The Configuration file connections differ based on the server where they reside, but the folder structure exists on both servers so the package can execute against the server from which it is run.
I've checked the box to enable Package Configurations and deployed the package to 2 different servers: 1 for Development and the other for QA. When I execute the package via the SSMS Integration Services package execution on my Development server, the package uses the Development table. But when I execute the same package in my QA environment, it also uses the Development table.
Since the Development connection is the one that is embedded in the package via the Connection Manager, it appears (presumably anyway) that the package is using the embedded connection and ignoring the configuration files.
I have also explicitly added the path to the configuration file in the Configurations section of the Execute Package Utility to see if it made any difference, but the results are the same. The configuration file is not acknowledged. So it again appears that the package is using the embedded connections defined in the Connection Managers.
I suppose I "may" be able to remove the connections from the package in the Connection Managers section, turn off validation during design time, and then deploy again in an effort to force the package to use the config files, but that doesn't seem like the way to go and is a hack at best, provided it would even work.
Not that I think it should make a difference but to provide more detail, here is a bit more concerning my Server Configuration:
Development - SQL 2014 [ServerName]
Quality Assurance - SQL 2014 [ServerName]\[InstanceName]
I don't recall ever having this issue before, hence my reason for posting.
OK, since I am working against a deadline, I was hoping to get an answer sooner rather than later. Since that wasn't the case, and because I've seen variations of this question before without a definitive answer (at least for this scenario), I performed some tests and am posting this for others who may also need this information.
The following conditions will cause configuration files to be ignored even if Package Configurations are enabled in an SSIS package. These findings are based on actual tests and confirmed for SQL Server 2014, although prior versions may behave the same way.
Disclaimer: these tests focused on the configuration files as they pertain to actual server connections (e.g. connection strings) and not any other variables, although it's conceivable that other values within the configuration file would also be affected.
Executing the package from within SSMS while connected to the Integration Services component and selecting Run Package: the noted behavior is that whatever connection value was in place prior to deployment to the server is the one that will be used, irrespective of the configuration files.
Note: this holds true even if configurations are added in the Configurations section prior to execution. Although there is a note that the configurations are not imported and cannot be edited, the fact is they were not used during testing either.
If a SQL Server Agent job step is of type SQL Server Integration Services Package and no configuration file references are actually added to the Configurations tab, the job will execute with whatever values were used during the last build in BIDS prior to deployment (the embedded values).
If multiple configuration files are used by the package but some are omitted from the Configurations tab of the job, the job will use the configuration files that are designated but will fall back to the last values used in development (the embedded values) for those that are not present in the context of the job.
Some of these behaviors are not very obvious, and I'd imagine it could be a frustrating puzzle for someone who, following the rules of most online tutorials on package configuration files, expected more straightforward results.
Identifying the root cause was a time-consuming testing task for me, and although I'm not an expert, I'm certainly far from a novice with SSIS.
At any rate, I hope this saves someone else hours of work and investigation.
I have read many articles here, but haven't quite found the solution. I have a SQL Server Express database that is used by my VB.NET application. I have packaged and deployed the application via an MSI file and everything works great except I cannot figure out how to include my database file with the package. I understand there are three general ways to do this (copy the files over manually, custom actions, and SQL scripts). I didn't need anything fancy here, just a quick way to put the DB on the client machine so my app can access it.
I decided copying over the DB manually was the quickest option. I tried putting it in the working directory and in the \DATA directory of the client's SQL Server Express install, but my app wouldn't connect. I also tried changing my connection in the project to .\SQLEXPRESS instead of [my_computer_name]\SQLEXPRESS followed by a rebuild of the deployment project and reinstall on the client machine, but no soup for me. Same issue. I tried changing the "UserInstance" property in the project to "True" but my project would not let me save that action.
Am I correct that a manual copy is the quickest and easiest way to get this done?
You should attach your file to the SQL Server instance:
CREATE DATABASE YourDatabaseName
ON (FILENAME = 'C:\your\data\directory\your_file.mdf'),
(FILENAME = 'C:\your\data\directory\your_file_Log.ldf')
FOR ATTACH;
You need to attach your database file to the running SQL Server on the client machine.
This can easily be done using this variation of the connection string stored in your configuration file (app.config or web.config):
Server=.\SQLExpress;AttachDbFilename=where_you_have_stored_the_mdf_file;
Database=dbname; Trusted_Connection=Yes;
Alternatively, you could use the |DataDirectory| substitution string.
This shortcut eliminates the need to hard-code the full path.
Using DataDirectory, you can have the following connection string:
Server=.\SQLExpress;AttachDbFilename=|DataDirectory|\yourfile.mdf;
Database=dbname; Trusted_Connection=Yes;
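For completeness, here is a hedged C# sketch of opening that connection from code (the VB.NET version uses the same connection string; file and database names are placeholders mirroring the example above):

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // |DataDirectory| resolves to whatever the app maps it to; for a desktop app
        // it is set explicitly here to the application's base directory.
        AppDomain.CurrentDomain.SetData("DataDirectory", AppDomain.CurrentDomain.BaseDirectory);

        var connectionString =
            @"Server=.\SQLExpress;AttachDbFilename=|DataDirectory|\yourfile.mdf;" +
            "Database=dbname;Trusted_Connection=Yes;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();  // SQL Server Express attaches the .mdf on open if it is not already attached
        }
    }
}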