I just started reading about DevOps on the Microsoft site and came across these 2 files. I'm confused about the process needing 2 files, with the YAML file using variables and the template using parameters... At the moment, I'm still unclear on why they have to separate variables and parameters?
They are talking about ARM Templates. While you can have parent/child templates, that is more complex, and I've never seen an example that splits variables and parameters into different files.
To clear up the confusion, ARM Templates have 5 main parts (a minimal skeleton is sketched after this list):
Variables: things you know at design time, such as a VNet address.
Parameters: things you only know at deployment time and don't want stored in the template itself, such as usernames and passwords.
Functions: things you compute within your template.
Resources: things you are provisioning.
Outputs: things you can reuse, such as the URL that results from provisioning an Azure Web App.
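For reference, a minimal, stripped-down skeleton showing where each part lives might look roughly like this (the parameter, variable, and output names are just illustrative placeholders):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": { "type": "securestring" }
  },
  "variables": {
    "vnetAddressPrefix": "10.0.0.0/16"
  },
  "functions": [],
  "resources": [],
  "outputs": {
    "webAppUrl": {
      "type": "string",
      "value": "[concat('https://', 'mysite', '.azurewebsites.net')]"
    }
  }
}

The resources array is left empty here; in a real template it holds the items being provisioned, and the outputs section typically references them.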
I decided to learn more about Terraform and see if I could replicate, using it, what I had done manually in the console. I set up two VMs: one that was publicly accessible and one that was not and had to be accessed through the first VM. These two VMs are almost identical, apart from the firewall rules.
In the interest of being DRY, I thought I'd create a module, so that I don't have to repeat all the options for the two VMs and just specify the differences. Since I wasn't sure about how to create a module, I checked the documentation and found the following:
When to write a module
[...]
We do not recommend writing modules that are just thin wrappers around single other resource types. If you have trouble finding a name for your module that isn't the same as the main resource type inside it, that may be a sign that your module is not creating any new abstraction and so the module is adding unnecessary complexity. Just use the resource type directly in the calling module instead.
Source: https://www.terraform.io/docs/modules/index.html#when-to-write-a-module
It makes sense to me that publishing a module that is just a wrapper around a single resource may not be that useful, but for internal use in your configuration it seems like a useful tool to keep your configuration DRY. If 9 out of 10 arguments are the same for all of your VMs, why wouldn't you create a module to hide the 9 common arguments from the main configuration so you don't repeat them?
As I am new to Terraform, I just want to make sure that I am not teaching myself bad practices.
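Concretely, the kind of module I have in mind would look roughly like this (AWS is used purely as an example here; the AMI, subnet, and security group references are placeholders, and the real VMs share more arguments than shown):

# modules/vm/variables.tf
variable "name" {}
variable "security_group_ids" {
  type = list(string)
}

# modules/vm/main.tf -- all the arguments shared by both VMs live here once
resource "aws_instance" "this" {
  ami                    = "ami-12345678"    # placeholder
  instance_type          = "t3.micro"
  subnet_id              = "subnet-12345678" # placeholder
  vpc_security_group_ids = var.security_group_ids

  tags = {
    Name = var.name
  }
}

# main.tf -- only the differences are spelled out per VM
module "bastion" {
  source             = "./modules/vm"
  name               = "bastion"
  security_group_ids = [aws_security_group.public.id]
}

module "internal" {
  source             = "./modules/vm"
  name               = "internal"
  security_group_ids = [aws_security_group.internal_only.id]
}

(The two security groups are assumed to be defined elsewhere in the configuration.)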
I am using IntelliJ to develop Java applications, which use YAML files for the app properties. These YAML files have some placeholder/template params like:
credentials:
  clientId: ${client.id}
  secretKey: ${secret.key}
My CI/CD pipeline takes care of substituting the actual value for these params (client.id and secret.key) based on the environment on which it is getting deployed.
I'm looking for something similar in IntelliJ: something where I configure some static/fixed values for the params (e.g. client.id and secret.key) within the IDE, and when I run locally using the IDE, those values are substituted into the YAML files before the run.
This would save me from having to put the placeholder params back into the YAML files each time I check in some other changes to my version control system.
There is no such feature in IDEA, because IDEA cannot auto-detect every possible known or unknown expression language or template macro that you could use in a YAML file. Furthermore, IDEA would have to create a context for those template files.
For IDEA, it's just a normal YAML file.
IDEA does have a language injection feature.
It can be used, for instance, to inject SQL into a Java string, or to inject any language into a YAML field.
This is a really nice feature and can help you rename SQL column names and so on, but it won't solve your particular problem, because you want to make that template "runnable" within a certain context where you define your variables.
My suggestion would be to write a small, simple program that does nearly the same thing as the template engine.
If you only need simple string replacements and no macro execution, this can be done with a regular expression.
If it's more complicated, I would use the same template engine that the "real" processor uses.
If you want further help, it would be good to know what your YAML processing pipeline looks like.
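For example, for the simple-replacement case, a tiny program along these lines would do (the file names and the source of the values are assumptions you would adapt to your setup):

import java.nio.file.*;
import java.util.Map;
import java.util.regex.*;

// Minimal sketch: replace ${...} placeholders in a YAML file with locally defined values
public class YamlPlaceholderFiller {
    public static void main(String[] args) throws Exception {
        // Local values; in practice these could come from environment variables or a local properties file
        Map<String, String> values = Map.of(
                "client.id", "local-client-id",
                "secret.key", "local-secret-key");

        String yaml = Files.readString(Path.of("application.yml"));
        Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(yaml);

        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String replacement = values.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);

        // Write a resolved copy that the local run picks up instead of the checked-in file
        Files.writeString(Path.of("application-local.yml"), out.toString());
    }
}

You could wire this up as a step in your IntelliJ run configuration so the resolved file is regenerated before each local run.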
I am looking at using a static site generator to generate up to hundreds of thousands of pages (on S3) of data-driven content from JSON or CSV files, with each page having an HTML form that posts to an external API. Is this a feasible undertaking?
It depends on your requirements, but at minimum you might even get away with a simple Node program that uses fs to read/write. Going up the complexity spectrum, you could use a Gulp setup. Going even further up the spectrum, you can use a static website generator to read/write your data files (but that's probably worth the trouble only if you already know the generator and/or you want to have a blog on S3 as well, driven by Markdown files, besides the hundreds of thousands of data-driven pages).
If going the simple Node script route, you would create your application in a JS file and run it from the command line with node. It would generate the thousands of pages locally, and then you would upload them to S3. You can either use standard callbacks or a fancier approach with promises (such as Bluebird). This way is the most manual, but you have the most control over the result.
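For illustration, a bare-bones version of such a script might look like this (the data shape, file names, and form target are made up):

// generate.js -- read a JSON data file and write one HTML page per record
const fs = require('fs');
const path = require('path');

const records = JSON.parse(fs.readFileSync('data/items.json', 'utf8'));
const outDir = 'dist';
fs.mkdirSync(outDir, { recursive: true });

for (const item of records) {
  const html = `<!doctype html>
<html><body>
  <h1>${item.title}</h1>
  <form action="https://api.example.com/submit" method="post">
    <input type="hidden" name="id" value="${item.id}">
    <button type="submit">Send</button>
  </form>
</body></html>`;
  fs.writeFileSync(path.join(outDir, `${item.id}.html`), html);
}

After it runs, the generated dist/ folder can be uploaded to S3, for example with aws s3 sync dist/ s3://your-bucket.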
For the record, you could whip up a script in any programming language you are proficient in, for example PHP. JavaScript is popular these days, which is why I'm assuming you would use JS.
If going the Gulp route, I imagine a custom task that reads the data files from their location, parses their contents into an array, and writes the results out to files.
If going the Hugo route, simply use the data-driven content feature, e.g. the getCSV function. You'll still need to work in the context of a website; this means the more you stray from the website setup, the more you'll have to fight Hugo.
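As a rough illustration, a Hugo template could iterate over a CSV roughly like this (the path and column positions are made up; getCSV takes a separator followed by the file path or URL):

{{ $rows := getCSV "," "data/items.csv" }}
{{ range $rows }}
  <h2>{{ index . 0 }}</h2>
  <p>{{ index . 1 }}</p>
{{ end }}

Note that this renders the rows within one page; generating a separate page per row takes more setup, which is part of the fighting-Hugo cost mentioned above.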
As I mentioned, the argument against a static website generator is that if you don't need the website part and only want to perform operations on data and write files, it might stand in your way.
Hugo is a good option for thousands of files because it's fast.
The solution also depends on whether your CSV files are going to change or this is a one-off thing, and on how much automation you need. The Gulp approach might be handy even if you go the Hugo route.
So, yes, it is a very feasible undertaking.
My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update 2 variables in my code which determine which database and which blob storage the environment uses. Up till now, I've juggled this value in a Resource variable, and pulled that value in code from My.Resources.DB for a library, and Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing 4 variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have 2 options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them to be used in VB code. That's preferable, but, at this point, I'm starting to doubt that this is possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper, if you're given 2 variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but this just passes them to my builder, so I have to know how to parse the call to MSBuild to make these useful.)
I can control picking between 2 static strings, now, via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user, and, if I can do that, I can just feed the variables for database and blob container via the same mechanism, and skip the pre-processor.
You've already found the way you can pass data from the MsBuild arguments directly into the code. An alternative is to use the Condition attribute in your project files to make certain property groups optional; it even allows you to include specific files conditionally. You can control conditions by passing /p:ConditionalProperty=value on the MsBuild command line. This at least ensures people use a set of values that make sense together.
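As a rough sketch of that approach (the DeployEnv property and the TESTENV constant are invented names for illustration), a pair of conditional property groups in the .vbproj could look like this and be selected from the command line:

<PropertyGroup Condition=" '$(DeployEnv)' == 'Test' ">
  <DefineConstants>DEBUG=False,TRACE=True,TESTENV=True</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition=" '$(DeployEnv)' == 'Production' ">
  <DefineConstants>DEBUG=False,TRACE=True,TESTENV=False</DefineConstants>
</PropertyGroup>

msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DeployEnv=Test

Your VB code can then branch on #If TESTENV Then, just as it does on DEBUG today.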
The problem is that when MsBuild is running in incremental mode, it is likely to not process your changes (as you've noticed). The reason for this is that the input files remain unchanged since the last build and are all older than the last generated output files.
To bypass this behavior you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you're ensured that when building that Configuration/Platform combination, incremental builds work as intended.
I do want to echo some of the comments from JerryM and Daniel Mann: some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration. The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.
Store your configuration data in the config files and use MsDeploy, specifying a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and actually allows your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time (see the short sketch after this list).
Create an installer project (such as with WiX) and merge the final configuration at install time (similar to the MsDeploy approach). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search & replace tokens, like the Replace Tokens task, Tokenization Task, or Colin's ALM Corner Build and Release Tasks, plus a whole bunch that specifically deal with versioning. Handling these things in the CI server also allows you to do a quick build locally at all times and do these relatively expensive steps on the build server (patching source code breaks incremental builds in MsBuild, because there are always newer input files).
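For the MsDeploy option, the SetParameters.xml file is just a flat list of name/value overrides applied at deploy time; a sketch with made-up values (the names must match the parameters declared for the package):

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="DatabaseConnectionString" value="Server=test-sql;Database=MyApp;..." />
  <setParameter name="BlobStorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=teststorage;..." />
</parameters>

You would keep one such file per environment and point the deploy step at the right one.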
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually it involves overriding the AssemblyInfo.cs (or AssemblyInfo.vb) file before compilation. Your code can then use reflection to read the value at runtime. You can use the AssemblyInformationalVersion to specify something like you do in the example above which contains .xxx or other text. It also ensures that the version displayed always reflects the information obtained when reading the file properties through Windows Explorer.
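For example, reading the informational version back at runtime in VB.NET could look roughly like this (BarStaticItemVersion is taken from the snippet in the question):

' Read the AssemblyInformationalVersion attribute of the running assembly via reflection
Dim asm = System.Reflection.Assembly.GetExecutingAssembly()
Dim attr = CType(Attribute.GetCustomAttribute(asm, GetType(System.Reflection.AssemblyInformationalVersionAttribute)), System.Reflection.AssemblyInformationalVersionAttribute)
If attr IsNot Nothing Then
    BarStaticItemVersion.Caption = String.Format("Version: {0}", attr.InformationalVersion)
End If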
So far I have two short questions:
1) What precisely are the benefits of creating a custom nature?
2) Is it possible to somehow programmatically read the files in [project]/.settings or [workspace]/.metadata/.plugins?
I'm using Eclipse Helios (3.6).
Ad 1. I've read that you can't have two natures of the same set, and that you can use a nature to associate certain perspectives/tools (e.g. a builder) with a project, but is there anything else I can't do easily without a nature? E.g. I can easily add a builder by modifying an IProject variable.
Ad 2. I tried to find a way to read project-specific settings or plugin settings but failed. No specs, different file types, inconsistent XML tags... Is it at all possible without parsing them manually?
Thanks for your help!
Paweł
Think of a nature as a flag. All project-related functionality in Eclipse is triggered by natures. Project properties pages, context menu items, etc. appear based on the presence of natures. Third parties can check for the presence of a nature to tell whether the project is of a certain "type". A nature also has install/uninstall methods. This gives you a convenient place to implement all actions that need to happen on the project when your technology is enabled. Why is that convenient? Because a third party can simply add the nature without knowing what else needs to be configured, and your code takes care of the rest.
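For example, the class behind a nature (registered via the org.eclipse.core.resources.natures extension point) might look roughly like this; the builder ID is a made-up example:

import org.eclipse.core.resources.ICommand;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IProjectDescription;
import org.eclipse.core.resources.IProjectNature;
import org.eclipse.core.runtime.CoreException;

public class MyNature implements IProjectNature {
    public static final String BUILDER_ID = "com.example.myBuilder"; // hypothetical builder id

    private IProject project;

    public void configure() throws CoreException {
        // Called when the nature is added: a convenient place to attach your builder
        IProjectDescription desc = project.getDescription();
        ICommand command = desc.newCommand();
        command.setBuilderName(BUILDER_ID);
        ICommand[] oldSpec = desc.getBuildSpec();
        ICommand[] newSpec = new ICommand[oldSpec.length + 1];
        System.arraycopy(oldSpec, 0, newSpec, 0, oldSpec.length);
        newSpec[oldSpec.length] = command;
        desc.setBuildSpec(newSpec);
        project.setDescription(desc, null);
    }

    public void deconfigure() throws CoreException {
        // Called when the nature is removed: undo whatever configure() did
    }

    public IProject getProject() {
        return project;
    }

    public void setProject(IProject project) {
        this.project = project;
    }
}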
Plugins write to the [project]/.settings or [workspace]/.metadata/.plugins locations in different ways. The file formats are never documented, as they aren't meant to be manipulated directly. Some plugins re-use the common ProjectScope and InstanceScope classes to read/write the data. Some read/write on their own. I would start with what information you are trying to read, figure out which plugin it belongs to, and then see if there is public API in that plugin for accessing that information. Reading these settings directly is almost never going to be the correct approach.
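If the data you need happens to be stored through the preferences mechanism, reading it via the scope classes looks roughly like this (the qualifier and key are placeholders for the owning plugin's ID and setting name):

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.ProjectScope;
import org.eclipse.core.runtime.preferences.IEclipsePreferences;
import org.eclipse.core.runtime.preferences.InstanceScope;

public class SettingsReader {

    // Reads [project]/.settings/<qualifier>.prefs, if the owning plugin uses ProjectScope
    public static String readProjectSetting(IProject project) {
        IEclipsePreferences prefs = new ProjectScope(project).getNode("com.example.some.plugin");
        return prefs.get("someKey", "default-value");
    }

    // Reads workspace-wide preferences stored under [workspace]/.metadata/.plugins/org.eclipse.core.runtime/.settings
    public static String readWorkspaceSetting() {
        IEclipsePreferences prefs = new InstanceScope().getNode("com.example.some.plugin");
        return prefs.get("someKey", "default-value");
    }
}

But again, prefer the owning plugin's public API when one exists.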