Is there a way to extract rendered preset information from CMake?

I have a CMakePresets.json file which makes use of inheritance and macro expansion.
Here is an excerpt; in reality I use multiple versions of "Foo":
{
  "configurePresets": [
    {
      "name": "default",
      "hidden": true,
      "generator": "Unix Makefiles",
      "binaryDir": "cmake-build-${presetName}",
      "environment": {
        "PATH": "/opt/foo/$env{FOO_VERSION}/bin:$penv{PATH}",
        "LD_LIBRARY_PATH": "/opt/foo/$env{FOO_VERSION}.0/lib"
      },
      "cacheVariables": {
        "FOO_VERSION": "$env{FOO_VERSION}"
      }
    },
    {
      "name": "debug-foo1",
      "inherits": "default",
      "environment": { "FOO_VERSION": "1" }
    },
    {
      "name": "release-foo1",
      "inherits": "debug-foo1",
      "cacheVariables": { "CMAKE_BUILD_TYPE": "Release" }
    }
  ],
  "buildPresets": [
    {
      "name": "debug-foo1",
      "configurePreset": "debug-foo1"
    },
    {
      "name": "release-foo1",
      "configurePreset": "release-foo1"
    }
  ]
}
Now assume I select the preset release-foo1. This would render the following variables, among others:
binaryDir = "cmake-build-release-foo1"
FOO_VERSION = "1"
LD_LIBRARY_PATH = "/opt/foo/1.0/lib"
Is there a way to query these results for a given preset? For example, given release-foo1, I want to know the resulting binaryDir.
Of course I could parse the JSON myself, but that seems tedious, especially because of the cross-references and substitutions that CMake performs.

You can use the -N flag to print the computed presets info without running the configure or generate steps. After modifying the presets file in your question to work, I created an empty CMakeLists.txt next to it and ran this test (*** stands in for my PATH):
$ cmake --preset=release-foo1 -N
Preset CMake variables:
CMAKE_BUILD_TYPE="Release"
FOO_VERSION="1"
Preset environment variables:
FOO_VERSION="1"
LD_LIBRARY_PATH="/opt/foo/1.0/lib"
PATH="/opt/foo/1/bin:***"
This is good enough for debugging your presets. If you need to go further, you could process the output with standard Unix command-line tools such as awk, grep, or cut.
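For example, something like this sed one-liner should print just the rendered LD_LIBRARY_PATH for the preset (a rough sketch based on the output format above; adapt the variable name and quoting to taste):
$ cmake --preset=release-foo1 -N | sed -n 's/^ *LD_LIBRARY_PATH="\(.*\)"$/\1/p'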
I think this is the best you can do for now (CMake <=3.21) since there's nothing else in the command line documentation or the file API for IDEs that I could find.

Building on Alex Reinking's answer, you can get the binaryDir (and other parts) with the -N option if you tweak the preset a little bit.
You need to add an environment variable and reference it for binaryDir.
See this tweaked example from the CMake documentation:
{
  "version": 2,
  "cmakeMinimumRequired": {
    "major": 3,
    "minor": 20,
    "patch": 0
  },
  "configurePresets": [
    {
      "name": "default",
      "displayName": "Default Config",
      "description": "Default build using Ninja generator",
      "generator": "Ninja",
      "binaryDir": "$env{BUILD_DIR}",
      "cacheVariables": {
        "FIRST_CACHE_VARIABLE": {
          "type": "BOOL",
          "value": "OFF"
        },
        "SECOND_CACHE_VARIABLE": "ON"
      },
      "environment": {
        "BUILD_DIR": "${sourceDir}/build/default",
        "MY_ENVIRONMENT_VARIABLE": "Test",
        "PATH": "$env{HOME}/ninja/bin:$penv{PATH}"
      },
      "vendor": {
        "example.com/ExampleIDE/1.0": {
          "autoFormat": true
        }
      }
    },
    {
      "name": "ninja-multi",
      "inherits": "default",
      "displayName": "Ninja Multi-Config",
      "description": "Default build using Ninja Multi-Config generator",
      "generator": "Ninja Multi-Config"
    }
  ],
  "buildPresets": [
    {
      "name": "default",
      "configurePreset": "default"
    }
  ],
  "testPresets": [
    {
      "name": "default",
      "configurePreset": "default",
      "output": {"outputOnFailure": true},
      "execution": {"noTestsAction": "error", "stopOnFailure": true}
    }
  ],
  "vendor": {
    "example.com/ExampleIDE/1.0": {
      "autoFormat": false
    }
  }
}
This will give you:
$ cmake --preset default -N
Preset CMake variables:
FIRST_CACHE_VARIABLE:BOOL="OFF"
SECOND_CACHE_VARIABLE="ON"
Preset environment variables:
=>BUILD_DIR="D:/tmp/presets/build/default"<=
MY_ENVIRONMENT_VARIABLE="Test"
PATH="..."

Related

How to use react-native-config to set an environment variable in nx?

I want to use react-native-config to set environment variables with nx. Is it possible to combine these two packages to achieve the goal, or should I just use nx's environment variable settings? Any suggestion is appreciated. Thanks.
I want to use @nrwl/workspace:run-commands to run my custom script.
"customCommand": {
"executor": "nx:run-commands",
"options": {
"command": "react-native run-android"
}
}
When I run the command on the command line:
npx nx run employee:make
it shows the error below:
TypeError: Cannot read properties of undefined (reading 'options')
It's not clear whether you want to run this in libs or apps; I assume you are trying to run the command in libs. Your project.json probably looks like this:
{
  "$schema": "../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "libs/assets/src",
  "projectType": "library",
  "tags": [],
  "targets": {
    "lint": {
      "executor": "@nrwl/linter:eslint",
      "outputs": ["{options.outputFile}"],
      "options": {
        "lintFilePatterns": ["libs/assets/**/*.{ts,tsx,js,jsx}"]
      }
    },
    "test": {
      "executor": "@nrwl/jest:jest",
      "outputs": ["coverage/libs/assets"],
      "options": {
        "jestConfig": "libs/assets/jest.config.ts",
        "passWithNoTests": true
      }
    }
  }
}
and add your command:
{
  "$schema": "../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "libs/assets/src",
  "projectType": "library",
  "tags": [],
  "targets": {
    "customCommand": {
      "executor": "nx:run-commands",
      "options": {
        "command": "react-native run-android"
      }
    },
    "lint": {
      "executor": "@nrwl/linter:eslint",
      "outputs": ["{options.outputFile}"],
      "options": {
        "lintFilePatterns": ["libs/assets/**/*.{ts,tsx,js,jsx}"]
      }
    },
    "test": {
      "executor": "@nrwl/jest:jest",
      "outputs": ["coverage/libs/assets"],
      "options": {
        "jestConfig": "libs/assets/jest.config.ts",
        "passWithNoTests": true
      }
    }
  }
}
run your command with:
nx run assets:customCommand
If the command is not found, run the command below and try again:
nx reset
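If the end goal is to drive react-native-config, note that it selects its env file via the ENVFILE environment variable, so you should be able to combine the two by setting that variable for the invocation. A sketch, assuming a hypothetical .env.staging file:
ENVFILE=.env.staging nx run assets:customCommand
Environment variables set on the command line are inherited by the child process that nx:run-commands spawns, so react-native run-android should see ENVFILE.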

Change syntax highlighting of embedded code based on a previous line's keyword?

I'm trying to write a TextMate grammar for a VS Code language extension. Take the following example:
(lang=css attribute2=something-else)
"""
.css-class {
  background: gray;
}
"""
The (...) part is an "attributes" section, and the """ ... """ is a code section. I'm trying to highlight everything in the code section according to the lang attribute.
The problem is, they are two distinct sections where one might be present without the other in other parts of the file. For example, you can have attributes without the code block.
In the grammars section of package.json I have
"embeddedLanguages": {
"meta.embedded.block.css": "css",
"meta.embedded.block.javascript": "javascript"
}
In the tmLanguage.json file I have both patterns in the repository property.
"attributes": {
"begin": "\\(",
"end": "\\)",
"captures": {
"0": {
"name": "punctuation.definition.annotation punctuation.section.group punctuation.section.parens"
}
},
"patterns": [
{
"begin": "[a-zA-Z_][a-zA-Z0-9_\\.-]*",
"beginCaptures": {
"0": {
"name": "entity.other.attribute-name"
}
},
"end": "(?=\\s*+[^=\\s])",
"patterns": [
{
"begin": "=",
"beginCaptures": {
"0": {
"name": "punctuation.separator.key-value"
}
},
"end": "(?<=[^\\s=])(?!\\s*=)|(?=/?>)",
"patterns": [
{
"match": "([^0-9-.\\s='\"][^\\s='\")]*)",
"name": "string.unquoted.html"
},
{
"match": "=",
"name": "invalid.illegal.unexpected-equals-sign"
},
{
"include": "#strings"
},
{
"include": "#number"
}
]
}
]
}
]
},
"fenced-code": {
"begin": "\\(.*lang=(css|javascript).*\\)\\s*(\"\"\")",
"beginCaptures": {
"1": {
"name": "string.quoted.triple"
}
},
"end": "\"\"\"",
"endCaptures": {
"0": {
"name": "string.quoted.triple"
}
},
"contentName": "meta.embedded.block.$1",
"patterns": [
{
"include": "source.css"
}
]
}
I have a third pattern not shown where I'm using these together by including them in its patterns array. They seem to be mutually exclusive though. I can have the attributes, and I can have a code block if I start the pattern at """, but if I start the code pattern with \\(.*lang=(css|javascript).*\\)\\s*(\"\"\") to capture the lang attribute, the attributes stop getting highlighted.
Is this even possible? I've never worked with TextMate grammars outside of a VS Code theme and VS Code doesn't seem to have deep documentation on the more "advanced" (I guess) things like this.
I tried using VS Code's HTML grammar for its uses of embedded code, but I don't think the HTML one needs to swap syntax based on something in a previous line.
Update
The fenced-code pattern below keeps the attributes highlighting while also giving the code block a dynamic contentName property, i.e. "meta.embedded.block.$2" becomes meta.embedded.block.css when "css" is found in the attributes.
"fenced-code": {
"begin": "(\\(.*lang=(css|javascript).*\\))\\s*(\"\"\")",
"beginCaptures": {
"1": {
"patterns": [
{
"include": "#attributes"
}
]
},
"3": {
"name": "string.quoted.triple"
}
},
"end": "\"\"\"",
"endCaptures": {
"0": {
"name": "string.quoted.triple"
}
},
"contentName": "meta.embedded.block.$2",
"patterns": [
{
"include": "source.css"
},
{
"include": "source.js"
}
]
}
However, there are two things wrong so far:
1. It only works when the opening """ is on the same line as the attributes section.
2. fenced-code's patterns array doesn't seem to allow a dynamic include, i.e. "patterns": [{"include": "source.$2"}]. I'm not sure if including the different languages as I did above will work.
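Regarding the second point: capture substitutions such as $2 only work in fields like name and contentName, not in include, so a dynamic include is unlikely to work. A common workaround (sketched below, assuming only css and javascript need to be supported) is to duplicate the rule once per language:
"fenced-code-css": {
  "begin": "(\\(.*lang=css.*\\))\\s*(\"\"\")",
  "beginCaptures": {
    "1": { "patterns": [ { "include": "#attributes" } ] },
    "2": { "name": "string.quoted.triple" }
  },
  "end": "\"\"\"",
  "endCaptures": { "0": { "name": "string.quoted.triple" } },
  "contentName": "meta.embedded.block.css",
  "patterns": [ { "include": "source.css" } ]
},
"fenced-code-javascript": {
  "begin": "(\\(.*lang=javascript.*\\))\\s*(\"\"\")",
  "beginCaptures": {
    "1": { "patterns": [ { "include": "#attributes" } ] },
    "2": { "name": "string.quoted.triple" }
  },
  "end": "\"\"\"",
  "endCaptures": { "0": { "name": "string.quoted.triple" } },
  "contentName": "meta.embedded.block.javascript",
  "patterns": [ { "include": "source.js" } ]
}
Both rules can be included wherever the combined fenced-code pattern was included; only the one whose lang value matches will fire. The first problem remains either way, because TextMate patterns are matched one line at a time, so the begin regex cannot span lines.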

Is there a way to get Step Functions input values into EMR step Args

We are running batch spark jobs using AWS EMR clusters. Those jobs run periodically and we would like to orchestrate those via AWS Step Functions.
As of November 2019, Step Functions supports EMR natively. When adding a step to the cluster, we can use the following config:
"Some Step": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
"Parameters": {
"ClusterId.$": "$.cluster.ClusterId",
"Step": {
"Name": "FirstStep",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
"spark-submit",
"--class",
"com.some.package.Class",
"JarUri",
"--startDate",
"$.time",
"--daysToLookBack",
"$.daysToLookBack"
]
}
}
},
"Retry" : [
{
"ErrorEquals": [ "States.ALL" ],
"IntervalSeconds": 1,
"MaxAttempts": 1,
"BackoffRate": 2.0
}
],
"ResultPath": "$.firstStep",
"End": true
}
Within the Args list of the HadoopJarStep, we would like to set arguments dynamically. For example, if the input of the state machine execution is:
{
  "time": "2020-01-08",
  "daysToLookBack": 2
}
The strings in the config starting with "$." should be replaced accordingly when executing the State Machine, and the step on the EMR cluster should run command-runner.jar spark-submit --class com.some.package.Class JarUri --startDate 2020-01-08 --daysToLookBack 2. But instead it runs command-runner.jar spark-submit --class com.some.package.Class JarUri --startDate $.time --daysToLookBack $.daysToLookBack.
Does anyone know if there is a way to do this?
Parameters allow you to define key-value pairs. Since the value for the "Args" key is an array, you won't be able to dynamically reference a specific element in the array; you would need to reference the whole array instead, for example "Args.$": "$.Input.ArgsArray".
So for your use case, the best way to achieve this would be to add a pre-processing state before calling this state. In the pre-processing state you can either call a Lambda function and format your input/output in code, or, for something as simple as adding a dynamic value to an array, use a Pass state to reformat the data; then, inside your task state's Parameters, you can use JSONPath to get the array that you defined in the pre-processor. Here's an example:
{
  "Comment": "A Hello World example of the Amazon States Language using Pass states",
  "StartAt": "HardCodedInputs",
  "States": {
    "HardCodedInputs": {
      "Type": "Pass",
      "Parameters": {
        "cluster": {
          "ClusterId": "ValueForClusterIdVariable"
        },
        "time": "ValueForTimeVariable",
        "daysToLookBack": "ValueFordaysToLookBackVariable"
      },
      "Next": "Pre-Process"
    },
    "Pre-Process": {
      "Type": "Pass",
      "Parameters": {
        "FormattedInputsForEmr": {
          "ClusterId.$": "$.cluster.ClusterId",
          "Args": [
            { "Arg1": "spark-submit" },
            { "Arg2": "--class" },
            { "Arg3": "com.some.package.Class" },
            { "Arg4": "JarUri" },
            { "Arg5": "--startDate" },
            { "Arg6.$": "$.time" },
            { "Arg7": "--daysToLookBack" },
            { "Arg8.$": "$.daysToLookBack" }
          ]
        }
      },
      "Next": "Some Step"
    },
    "Some Step": {
      "Type": "Pass",
      "Parameters": {
        "ClusterId.$": "$.FormattedInputsForEmr.ClusterId",
        "Step": {
          "Name": "FirstStep",
          "ActionOnFailure": "CONTINUE",
          "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args.$": "$.FormattedInputsForEmr.Args[*][*]"
          }
        }
      },
      "End": true
    }
  }
}
You can use the States.Array() intrinsic function. Your Parameters becomes:
"Parameters": {
"ClusterId.$": "$.cluster.ClusterId",
"Step": {
"Name": "FirstStep",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args.$": "States.Array('spark-submit', '--class', 'com.some.package.Class', 'JarUri', '--startDate', $.time, '--daysToLookBack', '$.daysToLookBack')"
}
}
}
Intrinsic functions are documented here, but I don't think the documentation explains the usage very well. The code snippets provided in the Step Functions console are more useful.
Note that you can also do string formatting on the args using States.Format(). For example, you could construct a path using an input variable as the final path segment:
"Args.$": "States.Array('mycommand', '--path', States.Format('my/base/path/{}', $.someInputVariable))"

AWS CodeBuild - cached DOWNLOAD_SOURCE taking too long

I am trying to improve the build speed of one of my projects on CodeBuild. The project uses a GitHub source provider, and local source caching is enabled.
The first time I ran the build it took 103 seconds. I ran it again immediately after the first one finished, expecting it to run in a few seconds due to source caching, but it took 60 seconds.
What am I missing here? Is the cache not working? If it is working, why does it take that long on the second run?
Thanks
Project Details:
{
  "projectsNotFound": [],
  "projects": [
    {
      "environment": {
        "computeType": "BUILD_GENERAL1_LARGE",
        "imagePullCredentialsType": "SERVICE_ROLE",
        "privilegedMode": true,
        "image": "111669150171.dkr.ecr.us-east-1.amazonaws.com/***********/ep-build-env:latest",
        "environmentVariables": [
          {
            "type": "PLAINTEXT",
            "name": "NEXUS_URI",
            "value": "http://***************"
          },
          {
            "type": "PLAINTEXT",
            "name": "REGISTRY",
            "value": "111669150171.dkr.ecr.us-east-1.amazonaws.com/*********"
          }
        ],
        "type": "LINUX_CONTAINER"
      },
      "timeoutInMinutes": 60,
      "name": "StorefrontApi",
      "serviceRole": "arn:aws:iam::111669150171:role/CodeBuild-ECRReadOnly",
      "tags": [],
      "artifacts": {
        "type": "NO_ARTIFACTS"
      },
      "lastModified": 1571227097.581,
      "cache": {
        "type": "LOCAL",
        "modes": [
          "LOCAL_DOCKER_LAYER_CACHE",
          "LOCAL_SOURCE_CACHE",
          "LOCAL_CUSTOM_CACHE"
        ]
      },
      "vpcConfig": {
        "subnets": [
          "subnet-fd7f958b"
        ],
        "vpcId": "vpc-71e3f414",
        "securityGroupIds": [
          "sg-19b65e6c",
          "sg-9e28e9f9"
        ]
      },
      "created": 1571082681.262,
      "sourceVersion": "refs/heads/ep-mysql",
      "source": {
        "buildspec": "version: 0.2\n\nphases:\n build:\n commands:\n - env\n - cd extensions\n - mvn --settings $CODEBUILD_SRC_DIR_DEVOPS_WINE/pipelines/storefront/build-war/settings.xml --projects storefront/ext-storefront-webapp -am -DskipAllTests clean install\n\nartifacts:\n secondary-artifacts:\n storefront-war:\n base-directory: $CODEBUILD_SRC_DIR/extensions/storefront/ext-storefront-webapp/target\n files:\n - \"*.war\"\n\ncache:\n paths:\n - '/root/.m2/**/*'\n - '/root/.npm/**/*'",
        "insecureSsl": false,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "location": "https://github.com/*****************.git",
        "gitCloneDepth": 1,
        "type": "GITHUB",
        "reportBuildStatus": false
      },
      "badge": {
        "badgeEnabled": false
      },
      "queuedTimeoutInMinutes": 480,
      "secondaryArtifacts": [],
      "logsConfig": {
        "s3Logs": {
          "status": "DISABLED",
          "encryptionDisabled": false
        },
        "cloudWatchLogs": {
          "status": "ENABLED"
        }
      },
      "secondarySources": [
        {
          "insecureSsl": false,
          "gitSubmodulesConfig": {
            "fetchSubmodules": false
          },
          "location": "https://github.com/*****************.git",
          "sourceIdentifier": "DEVOPS_WINE",
          "gitCloneDepth": 1,
          "type": "GITHUB",
          "reportBuildStatus": false
        }
      ],
      "encryptionKey": "arn:aws:kms:us-east-1:111669150171:alias/aws/s3",
      "arn": "arn:aws:codebuild:us-east-1:111669150171:project/StorefrontApi",
      "secondarySourceVersions": [
        {
          "sourceVersion": "refs/heads/staging",
          "sourceIdentifier": "DEVOPS_WINE"
        }
      ]
    }
  ]
}
Apparently, at the time of writing, CodeBuild does not use the native git client to fetch the source from GitHub. I understand that the CodeBuild team has an internal feature request to move from whatever they're using to the native git client to improve performance.
Does your repository, by chance, have lots of large files in its history? You can use this answer for a command to run to analyze your repository.
If you have lots of large files in your history and you're able to remove them, you can then use a tool like BFG Repo Cleaner to rewrite history. That should speed up the DOWNLOAD_SOURCE phase.
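For example, something along these lines lists the biggest objects in history and then strips oversized blobs (a sketch only: the 50M threshold and the my-repo.git mirror-clone name are placeholders, and BFG rewrites history, so coordinate the follow-up force-push with your team):
$ git rev-list --objects --all \
    | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
    | sort -k3 -n | tail -20
$ bfg --strip-blobs-bigger-than 50M my-repo.git
$ cd my-repo.git && git reflog expire --expire=now --all && git gc --prune=now --aggressive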
Also, if you have a dedicated support plan with AWS, you should reach out to your TAM to upvote the feature request to move to native git for GitHub source downloads.

Create env fails when using a daemonset to create processes in Kubernetes

I want to deploy a piece of software onto nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deploys successfully, the env variables do not exist at all.
What should I do to solve this problem?
Thanks
DaemonSets have to run Docker containers; you can't have non-containerized programs run as DaemonSets. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Kubernetes only launches containers.
Also, in your manifest file I see a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the "full" manifest, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field.
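For comparison, here is a minimal sketch of what a valid DaemonSet pod template looks like, with the agent packaged into a container image (the image name below is a placeholder) and the same secret-backed env entries moved under containers:
"template": {
  "metadata": {
    "labels": { "app": "uniagent" }
  },
  "spec": {
    "containers": [
      {
        "name": "uniagent",
        "image": "your-registry/uniagent:latest",
        "env": [
          {
            "name": "SECRET_USERNAME",
            "valueFrom": { "secretKeyRef": { "name": "key-secret", "key": "uniagentuser" } }
          },
          {
            "name": "SECRET_PASSWORD",
            "valueFrom": { "secretKeyRef": { "name": "key-secret", "key": "uniagenthash" } }
          }
        ]
      }
    ]
  }
}
The software itself has to be baked into that image; the kubelet will not launch a bare process for you.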