I created a scaffolded project by following this tutorial.
It says:
For local development, the ASP.NET Core configuration system reads the connection string from the appsettings.json file.
What is the correct procedure for switching from this local database (*.mdf file) to a "global" database (e.g. a MS SQL Server instance installed on the network)?
Scaffolding has nothing to do with whether the database is local or remote, assuming of course that they share the same schema. If the remote database schema is different, you need only rescaffold, which is what you will need to do for any change of schema.
All you really have is a connection string. The connection string just happens to point to a local database. To point at a remote database, you simply change the connection string to the information for that remote database. Local or remote means nothing as far as your app is concerned; it's just connecting to whatever you told it to connect to.
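For illustration, here is a minimal sketch (the server name, database name, and login are hypothetical) showing that the app only ever asks the configuration system for a named connection string; pointing it at a network SQL Server is purely a matter of what that string contains:

```csharp
using Microsoft.Extensions.Configuration;

// Minimal sketch: the app resolves a named connection string from configuration;
// whether it points at LocalDB or a network SQL Server is invisible to this code.
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false)
    .Build();

var connectionString = config.GetConnectionString("DefaultConnection");

// Local LocalDB (.mdf), roughly what the tutorial's scaffolding produces:
//   Server=(localdb)\mssqllocaldb;Database=MyAppDb;Trusted_Connection=True
// Remote SQL Server on the network (hypothetical host, database, and login):
//   Server=sql01.mycompany.local;Database=MyAppDb;User Id=myapp_user;Password=<secret>
```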
By default, the connection string exists in appsettings.json. To change it, you can either edit it directly in appsettings.json or override it using another config source such as environment-specific JSON (e.g. appsettings.Production.json), environment variables, command-line arguments, etc.
However, a connection string is going to contain sensitive information (user/pass), and it's therefore a secret. As such, you should not store any real database information in something like JSON, which is not encrypted and will be committed to source control. Instead, you should use a config source external to your app code (e.g. environment variables on the server) and preferably something where data is encrypted at rest (e.g. Azure Key Vault). In development, you can also use user secrets. It's just another JSON source, so it's still not encrypted. However, it's stored outside your project, and therefore will at least not end up in your source control.
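As a concrete sketch of that layering (key names, server, and credentials are hypothetical), appsettings.json can hold a harmless placeholder while the real string comes from user secrets in development and an environment variable in production:

```csharp
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main()
    {
        // Development: keep the real string in user secrets (stored outside the project/repo):
        //   dotnet user-secrets init
        //   dotnet user-secrets set "ConnectionStrings:DefaultConnection" "Server=sql01;Database=MyAppDb;User Id=myapp_user;Password=<secret>"
        //
        // Production: supply it as an environment variable (':' becomes '__' in the name):
        //   ConnectionStrings__DefaultConnection=Server=sql01;Database=MyAppDb;User Id=myapp_user;Password=<secret>
        //
        // Later sources override earlier ones.
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddUserSecrets<Program>()      // development only; requires a UserSecretsId in the .csproj
            .AddEnvironmentVariables()
            .Build();

        var connectionString = config.GetConnectionString("DefaultConnection");
    }
}
```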
Related
Briefly, my scenario is to test a remote API for any changes in the called endpoints, such as a parameter being removed.
To get this info I need to have a token.
My problem is that I can't store it in the database and use Windows credentials, because in Azure Pipelines the build agents have no access or connection to the database. And if I pass the token through pipeline variables, then I won't have the token when I run the code locally.
appsettings.json is stored in Git, so it is not safe.
Any idea on this?
Thanks!
User secrets + environment variables are the key here. appsettings.json is for non-sensitive configuration, but there is a concept called user secrets (see link below) that lets you keep an appsettings.json equivalent just on your machine, outside of Git. Anything specified there overrides or adds to whatever is in your appsettings.json.
If that info is also needed in production, use environment variables: individual configuration values can be specified or overridden without any file.
This all works ONLY if the ASP.NET Core host configuration is set up to read from all of these sources. The default setup from the template does this, but read the links below to make sure your setup works.
All the configuration and best practices can be found here.
Non-sensitive info/defaults: appsettings.json
Sensitive info for devs/dev-specific info: UserSecrets
Sensitive info for prod: Environment variables
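A minimal sketch of how the token could be resolved, assuming the default host builder and a hypothetical configuration key named ApiTest:Token:

```csharp
using Microsoft.Extensions.Configuration;

// Host.CreateDefaultBuilder (the standard template) already layers these sources,
// later ones overriding earlier ones:
//   appsettings.json -> appsettings.{Environment}.json -> user secrets (Development only)
//   -> environment variables -> command-line arguments
//
// Locally:            dotnet user-secrets set "ApiTest:Token" "<your token>"
// In Azure Pipelines:  define the variable ApiTest__Token (env vars use '__' instead of ':')
public class TokenProvider
{
    private readonly IConfiguration _configuration;

    public TokenProvider(IConfiguration configuration) => _configuration = configuration;

    // The calling code never knows which source supplied the value.
    public string GetToken() => _configuration["ApiTest:Token"];
}
```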
Is it possible to run the AspnetBoilerplate CompanyName.ProjectName.Migrator per environment, and how?
From what I can see, it can only read settings from appsettings.json, not from appsettings.{Environment}.json for example. This is not workable for a CI/CD scenario where I plan to run the Migrator as part of the process.
Any help or idea would be appreciated.
Migrator gets the host connection string from its own appsettings.json file. In the beginning, it will be the same as the appsettings.json in the .Web.Host project. Be sure that the connection string in the config file points to the database you want. After getting the host connection string, it first creates the host database if it doesn't already exist and applies migrations. It then gets the connection strings of the tenant databases and runs migrations against those databases. It skips a tenant if it does not have a dedicated database or if its database has already been migrated by another tenant (for databases shared between multiple tenants).
You can use this tool on the development or on the production environment to migrate databases on deployment instead of EntityFramework's own tooling (which requires some configuration and can only work for a single database/tenant in one run).
You can refer to this document about connection strings.
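If the Migrator really only loads appsettings.json, one hedged workaround (assuming you can edit the Migrator's configuration bootstrap, i.e. wherever its ConfigurationBuilder is created, and that it uses the usual "Default" connection string name) is to layer an environment-specific file and environment variables on top:

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Sketch: pick the environment from ASPNETCORE_ENVIRONMENT (or any CI/CD variable)
// and layer an optional appsettings.{Environment}.json over the base file.
var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false)
    .AddJsonFile($"appsettings.{environment}.json", optional: true)
    .AddEnvironmentVariables()   // lets the pipeline override e.g. ConnectionStrings:Default
    .Build();

var hostConnectionString = configuration.GetConnectionString("Default");
```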
I am using the database connector component, with the vault component to store the database credentials. As per the documentation of both components, I have created a different properties file for each environment to store that environment's encrypted credentials.
Following is the structure of my Mule project:
The problem with this structure is that I have to build a new deployable zip file whenever I have to update the database credentials for any environment.
I need a solution where I can keep all credentials encrypted and centralized, and I don't have to create a build every time the credentials are updated. We can afford to restart the server, but building a new zip and deploying is really cumbersome.
A second problem with this approach is that a developer needs to know the production DB credentials to update them in the properties file, which is also a security issue.
Please suggest an alternative approach to credentials management for Mule projects.
I'm going to recommend you do NOT try to change the secure solution provided to you by MuleSoft. To avoid the need for packaging and deployment, you would have to move the properties files outside of the deployment, and that would be a huge risk. Regardless of where you store the property files within the deployment, if you change the files you have to package and re-deploy. The only way around that is to move the files outside of the deployment and store them securely. The solution Mule provides may be cumbersome, but it secures these files first with encryption and secondly within the server container. You can move the property files out, but you would have to provide a custom implementation and you would be assuming great risk to your protected resources.
Set a VM argument, e.g. -Denvironment.type=local, for your local machine in Anypoint Studio.
Read this variable wherever you load your properties file, so that the environment type is resolved dynamically, for example:
<secure-property-placeholder:config ... location="classpath:properties/sample-app-${environment.type}.properties" doc:name="Secure Property Placeholder"/>
In order to set the environment type on your production server (or wherever you run the Mule runtime), open conf\wrapper.conf and add the argument wrapper.java.additional.<n>=-Denvironment.type=production. If you already have additional properties in this file, set <n> to the next unused number, for example 13 or 14.
This way you don't need to generate different deployment artefacts for different environments, because the correct properties file is picked via the environment-specific VM argument.
So, a common practice these days is to put connection strings & passwords as environment variables to avoid their being placed into a file. This is all fine and dandy, but I'm not sure how to make this work when trying to set up a continuous deployment workflow with some configuration management tool such as Salt/Ansible or Chef/Puppet.
Specifically, I have the following questions for environments using the above-mentioned configuration management tools:
Where do you store connection strings/passwords/keys separate from codebases?
Do you keep those items in a code-repo of some type (git, etc.)?
Do you use some structure built-in to your tool?
How do you keep those same items secure?
Do you track changes/back-up these items, and if so, how?
In Chef you can
store passwords or API tokens in either encrypted data bags or using chef-vault. They are then decrypted while chef does the provisioning (with encrypted data bags using a shared secret, with chef-vault using the existing PKI of Chef client).
set environment variables when calling external software using the environment parameter of e.g. the execute resource.
Not sure what to write here -- I'd say you don't really manage them. This way you set the variables only for the command that needs them, not e.g. for the whole Chef run.
With puppet, the preferred way is probably to store the secrets in Hiera files, which are just plain YAML files. That means that all secrets are stored on the master, separate from the manifest files.
TrueCrypt virtual encrypted disks are cross-platform and independent of tooling. Mount one read-write to change the secrets in the files it contains, unmount it and then commit/push the encrypted disk image into version control. Mount it read-only for automation.
ansible-vault can be used to encrypt sensitive data files. A CI server like Jenkins however is not the safest place to store access credentials. If you add Hashicorp Vault and Ansible Tower/AWX, then you can provide a secure solution for several teams.
I wish to ship a SQL Server database file with my application. I am very, very new to SQL.
1) I do not know how to protect this file from being opened.
2) If this file is emailed, can anybody read it?
3) Is there any way to protect it the way an Access database can be password protected, so that even if it is emailed, no one can open it?
Thanks
Furqan
Regular SQL Server database files (.mdf, .ldf) aren't intended to be shipped with your application and installed locally - they are intended to be used on a SQL Server instance, running in a secure environment where typical users don't have physical access to the files per se.
As such, .mdf/.ldf files cannot really be protected by a password or anything like that - you can define users and their permissions, but that only applies to the permissions inside the database - not the database file(s) itself.
For your scenario, I guess you'd be better off with SQL Server Compact Edition - an in-process (just a bunch of DLLs), one-file-for-your-entire-database (*.sdf) kind of database - much closer to an Access replacement than the full-fledged SQL Server.
The documentation clearly states:
SQL Server Compact Edition was designed from the beginning assuming the user had access to the physical file. Without an additional security mechanism, the user could bypass your application and use tools such as MSQuery to view and edit the raw data. SQL Server Compact Edition supports the ability to password protect and encrypt the data file, thereby limiting access to your application which embeds the password. The password protection of the database file adds a layer of protection that travels with the file, making it harder to access the data in the event a rogue user obtains the file.
Read more about SQL Server Compact 3.5 and you might also want to check out the SQL Server Compact blog which discusses the latest developments (SQL Server Compact 4.0 is in testing right now).
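For illustration, a hedged sketch (file name and password are placeholders) of how a password-protected, encrypted .sdf file can be created with the System.Data.SqlServerCe API:

```csharp
using System.Data.SqlServerCe;

// The password is embedded in the connection string; with "Encrypt Database=True"
// the engine encrypts the .sdf file using that password.
const string connectionString =
    "Data Source=MyApp.sdf;Password=<choose-a-strong-password>;Encrypt Database=True";

// Create the encrypted database file (throws if the file already exists).
using (var engine = new SqlCeEngine(connectionString))
{
    engine.CreateDatabase();
}

// Opening it later requires the same password in the connection string.
using (var connection = new SqlCeConnection(connectionString))
{
    connection.Open();
}
```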
1) Assign a username and password to it through SSMS or Enterprise Manager
2) No, because of [1]
3) Yes, because of [1]
Just to be doubly sure: take the database offline, zip-encrypt the .mdf file, and send it
If you ship it, then someone on the other end needs to open and install it.
You can protect the file using zip software with a password, and tell the other party the password over the phone.
1) Even if you protect it before it is attached to the SQL Server, the user will be able to get into the database once it is loaded and running on the server.
2) Possible, but why would you want to email a database file?
3) When you email it you can add it to an archive with password, like a password protected zip file.