Set the position for service CMake files - cmake

I'm building a C++ project with CMake.
After running cmake --build ... I have a build folder in my project containing my binary plus some CMake service files, like this:
.
|____CMakeLists.txt
|____build
| |____compile_commands.json
| |____CMakeFiles
| |____Makefile
| |____cmake_install.cmake
| |____CMakeCache.txt
| |____project.a
| |____.cmake
|____include
|____src
Is it possible to configure CMake to move all those files (except the actually built binaries) to some other place?
I can imagine something like:
.
|____CMakeLists.txt
|____build
| |____project.a
| |____.cmake
| |____compile_commands.json
| |____CMakeFiles
| |____Makefile
| |____cmake_install.cmake
| |____CMakeCache.txt
| |____etc
| |____...
|____include
|____src

Trying to force CMake to generate a specific file structure is not well supported. I recommend approaching the problem from the other end instead: determine the location where CMake outputs the binaries. You simply need to set some or all of the following variables:
CMAKE_ARCHIVE_OUTPUT_DIRECTORY
CMAKE_LIBRARY_OUTPUT_DIRECTORY
CMAKE_PDB_OUTPUT_DIRECTORY
CMAKE_RUNTIME_OUTPUT_DIRECTORY
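If you prefer setting them directly in the project rather than in a preset, a minimal sketch could look like this (the bin subdirectory name and the mylib target are illustrative, not from the question):

```cmake
cmake_minimum_required(VERSION 3.15)
project(example LANGUAGES CXX)

# Collect all built artifacts under <build>/bin; service files
# (CMakeCache.txt, CMakeFiles, ...) stay in the build root.
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)  # static libraries (.a/.lib)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)  # shared libraries
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)  # executables and DLLs
set(CMAKE_PDB_OUTPUT_DIRECTORY     ${CMAKE_BINARY_DIR}/bin)  # MSVC .pdb files

add_library(mylib STATIC src/lib.cpp)
```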
CMake presets would be a convenient place to set this kind of info:
...
"configurePresets": [
{
...
"cacheVariables": {
"CMAKE_RUNTIME_OUTPUT_DIRECTORY": {
"type": "PATH",
"value": "${sourceDir}/build_binaries"
},
"CMAKE_LIBRARY_OUTPUT_DIRECTORY": {
"type": "PATH",
"value": "${sourceDir}/build_binaries"
},
"CMAKE_ARCHIVE_OUTPUT_DIRECTORY": {
"type": "PATH",
"value": "${sourceDir}/build_binaries"
},
"CMAKE_PDB_OUTPUT_DIRECTORY": {
"type": "PATH",
"value": "${sourceDir}/build_binaries"
}
},
...
},
...
]
...
This only provides default values, though. The corresponding target properties (e.g. RUNTIME_OUTPUT_DIRECTORY) take precedence. Furthermore, there's the possibility of the cache variables being shadowed by normal variables of the same name set in your CMake files.
Note: If you want to put the binaries for a project in a file structure suitable for deployment, you should be using install() functionality instead.
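As a sketch of that install() approach (assuming a target named project, matching the project.a above; the destination paths are illustrative):

```cmake
# Stage deployable artifacts with install() instead of trying to
# relocate CMake's service files.
install(TARGETS project
        RUNTIME DESTINATION bin    # executables
        LIBRARY DESTINATION lib    # shared libraries
        ARCHIVE DESTINATION lib)   # static libraries
install(DIRECTORY include/ DESTINATION include)
```

Running cmake --install build --prefix /some/deploy/dir then copies only the built artifacts into the deployment layout, leaving the build tree untouched.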

Related

Local changes on s3-cloudformation-template.json are overridden on amplify push

As part of my backend configuration, I need an S3 bucket whose objects automatically expire after 1 day. I included the S3 bucket in my backend with amplify storage add, but the Amplify CLI is a bit limited in what can be configured for buckets.
So, after creating the bucket through Amplify, I opened s3-cloudformation-template.json and manually included a rule for object expiration:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "S3 resource stack creation using Amplify CLI",
"Parameters": {...},
"Conditions": {...},
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DependsOn": [
"TriggerPermissions"
],
"DeletionPolicy" : "Retain",
"Properties": {
"BucketName": {...},
"NotificationConfiguration": {...},
"CorsConfiguration": {...},
"LifecycleConfiguration": {
"Rules": [
{
"Id": "ExpirationRule",
"Status": "Enabled",
"ExpirationInDays": 1
}
]
}
}
},
...
}
After that, I issued an amplify status, where the change in the cloudformation template was detected:
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------------- | --------- | ----------------- |
| Storage | teststorage | Update | awscloudformation |
Finally, I issued an amplify push but the command finished without any cloudformation logs for this change, and a new indication of No Change for the S3 storage:
✔ Successfully pulled backend environment dev from the cloud.
Current Environment: dev
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------------- | --------- | ----------------- |
| Storage | teststorage | No Change | awscloudformation |
Checking s3-cloudformation-template.json again, I noticed that the configuration I'd added before was overridden/removed during the push command:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "S3 resource stack creation using Amplify CLI",
"Parameters": {...},
"Conditions": {...},
"Resources": {
"S3Bucket": {
"Type": "AWS::S3::Bucket",
"DependsOn": [
"TriggerPermissions"
],
"DeletionPolicy" : "Retain",
"Properties": {
"BucketName": {...},
"NotificationConfiguration": {...},
"CorsConfiguration": {...}
}
},
...
}
So I'm pretty sure I'm making some mistake here, since I couldn't find other posts with this problem. Where is the mistake?
You need to "override" the s3 bucket configuration (Override Amplify-generated S3)...
From the command line type:
amplify override storage
This will show:
✅ Successfully generated "override.ts" folder at C:\myProject\amplify\backend\storage\staticData
√ Do you want to edit override.ts file now? (Y/n) · yes
Press Return to choose yes then update the override.ts file with the following:
import { AmplifyS3ResourceTemplate } from '@aws-amplify/cli-extensibility-helper'
import * as s3 from '@aws-cdk/aws-s3'
export function override(resources: AmplifyS3ResourceTemplate) {
const lifecycleConfigurationProperty: s3.CfnBucket.LifecycleConfigurationProperty =
{
rules: [
{
status: 'Enabled',
id: 'ExpirationRule',
expirationInDays: 1
}
]
}
resources.s3Bucket.lifecycleConfiguration = lifecycleConfigurationProperty
}
Back to the command line, press Return to continue:
Edit the file in your editor: C:\myProject\amplify\backend\storage\staticData\override.ts
? Press enter to continue
Now update the backend using:
amplify push
And you're done.
Further information on the configuration options can be found at interface LifecycleConfigurationProperty
Further information on what else can be overridden for the S3 bucket can be found at class CfnBucket (construct)

How to get started with Aurelia UX

So, I'm trying to get aurelia-ux to work but am unable to.
There is a demo application, but that one is based on 0.3 while 0.6.1 exists, and it seems quite some stuff has changed.
The latest npm package for aurelia-ux is 0.3.0, so I guess that is not the one to use. The package @aurelia-ux/components seems to be the one to use; for that one, a 0.6.1 package exists.
So I added to my package.json:
"dependencies": {
"#aurelia-ux/components": "^0.6.1",
And to my aurelia.json:
{
"name": "aurelia-ux",
"path": "../node_modules/#aurelia-ux/components/dist/amd",
"main": "index",
"resources": ["**/*.{css,html}"]
}
And this gives me this in my build output:
Tracing aurelia-ux...
------- File not found or not accessible ------
| Location: C:/Work/Dat/AuFront/src/@aurelia-ux/button.js |
| Requested by: C:\Work\Dat\AuFront\node_modules\@aurelia-ux\components\dist\amd\index.js
| Is this a package? Make sure that it is configured in aurelia.json and that it is not a Node.js package
So this looks like some kind of path reference problem? Is there someone with a working example?
So, the trick is to include this npm package:
@aurelia-ux/components
And in the Aurelia.json add this:
{
"name": "#aurelia-ux/core",
"path": "../node_modules/#aurelia-ux/core/dist/amd",
"main": "index",
"resources": ["**/*.{css,html}"]
},
And then every component you want to use, like so:
{
"name": "#aurelia-ux/button",
"path": "../node_modules/#aurelia-ux/button/dist/amd",
"main": "index",
"resources": ["**/*.{css,html}"]
},
Then, use them in your main.ts like this:
.plugin('@aurelia-ux/core')
.plugin('@aurelia-ux/button')
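Pulling it together, the wiring in main.ts might look roughly like this (a sketch; it assumes @aurelia-ux/core and @aurelia-ux/button are the packages registered in aurelia.json as above):

```typescript
import { Aurelia } from 'aurelia-framework';

export function configure(aurelia: Aurelia) {
  aurelia.use
    .standardConfiguration()
    .plugin('@aurelia-ux/core')     // core must be registered first
    .plugin('@aurelia-ux/button');  // one .plugin() call per component used

  aurelia.start().then(() => aurelia.setRoot());
}
```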

Unbundling a pre-built JavaScript file built using browserify

I have a third-party library, non-uglified, which was bundled using Browserify. Unfortunately, the original sources are not available.
Is there a way to unbundle it into different files/sources?
You should be able to 'unbundle' the pre-built Browserify bundle using browser-unpack.
It will generate JSON output like this:
[
{
"id": 1,
"source": "\"use strict\";\r\nvar TodoActions = require(\"./todo\"); ... var VisibilityFilterActions = require(\"./visibility-filter\"); ...",
"deps": {
"./todo": 2,
"./visibility-filter": 3
}
},
{
"id": 2,
"source": "\"use strict\";\r\n ...",
"deps": {}
},
{
"id": 3,
"source": "\"use strict\";\r\n ...",
"deps": {}
},
...
]
It should be reasonably straightforward to transform the JSON output into source files that can be required. Note that the mappings of the require literals (like "./todo") are in deps: the module required as "./todo" corresponds to the source with an id of 2.
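As a sketch of that transformation (the module-&lt;id&gt;.js naming is just an illustrative convention, not part of browser-unpack; it assumes the [{id, source, deps}] shape shown above):

```javascript
// Turn browser-unpack's JSON array back into separate module sources,
// rewriting each require literal to the extracted file for its module id.
function unbundleToFiles(modules) {
  const files = {};
  for (const mod of modules) {
    let source = mod.source;
    for (const [literal, depId] of Object.entries(mod.deps)) {
      // Escape regex metacharacters in the require literal (e.g. the dot in "./todo").
      const escaped = literal.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
      source = source.replace(
        new RegExp(`require\\((['"])${escaped}\\1\\)`, 'g'),
        `require('./module-${depId}')`
      );
    }
    files[`module-${mod.id}.js`] = source;
  }
  return files;
}
```

Writing each entry of the returned object to disk (e.g. with fs.writeFileSync) yields requirable files whose dependency names line up again.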
There is also a browserify-unpack tool - which writes the contents as files - but I've not used it.

Query Extensionless File using Apache Drill

I imported data into Hadoop using Sqoop 1.4.6. Sqoop imports and saves the data in HDFS in an extensionless file, but in CSV format. I used Apache Drill to query the data from this file but got a Table not found error. In the storage plugin configuration, I even put null, blank (""), and space (" ") in extensions, but was not able to query the file. However, I was able to query the file when I renamed it to have an extension. Putting any extension in the configuration file works, just not a null extension: I could query the file saved in CSV format but with the extension 'mat' or anything else.
Is there any way to query the extensionless files?
You can use a default input format in the storage plugin configuration to solve this problem. For example:
select * from dfs.`/Users/khahn/Downloads/csv_line_delimit.csv`;
+-------------------------+
| columns |
+-------------------------+
| ["hello","1","2","3!"] |
. . .
Change the file name to remove the extension and modify the plugin config "location" and "defaultInputFormat":
{
"type": "file",
"enabled": true,
"connection": "file:///",
"workspaces": {
"root": {
"location": "/Users/khahn/Downloads",
"writable": false,
"defaultInputFormat": "csv"
},
Query the file that has no extension.
0: jdbc:drill:zk=local> select * from dfs.root.`csv_line_delimit`;
+-------------------------+
| columns |
+-------------------------+
| ["hello","1","2","3!"] |
. . .
I have the same experience. First, I imported 1 table from oracle to hadoop 2.7.1 then query via drill. This is my plugin config set through web UI:
{
"type": "file",
"enabled": true,
"connection": "hdfs://192.168.19.128:8020",
"workspaces": {
"hdf": {
"location": "/user/hdf/my_data/",
"writable": false,
"defaultInputFormat": "csv"
},
"tmp": {
"location": "/tmp",
"writable": true,
"defaultInputFormat": null
}
},
"formats": {
"csv": {
"type": "text",
"extensions": [
"csv"
],
"delimiter": ","
}
}
}
then, in the Drill CLI, query like this:
USE hdfs.hdf;
SELECT * FROM `part-m-00000`;
Also, in the Hadoop file system, when I cat the content of 'part-m-00000', the format below is printed on the console:
2015-11-07 17:45:40.0,6,8
2014-10-02 12:25:20.0,10,1

VSCode appends taskName to msbuild automatically

In Visual Studio Code (VSCode) I've created a task to build my C++ project. The build process is based on Visual Studio 12.0 project files created by CMake. It provides configurations for Release/Debug/... modes, and I want to create a separate task for each configuration.
Problem: VSCode appends the taskName to msbuild automatically. My tasks.json file looks like:
{
"version": "0.1.0",
"command": "msbuild",
"args": ["${cwd}/build/PROJECTNAME.sln",
"/property:GenerateFullPaths=true"],
"taskSelector": "/t:",
"tasks": [
{
"taskName": "build release",
"args": ["/p:Configuration=Release"],
"problemMatcher": "$msCompile"
},
{
"taskName": "build debug",
"args": ["/p:Configuration=Debug"],
"problemMatcher": "$msCompile"
}]
}
The argument /t:${taskName} seems to be appended to msbuild automatically. If I add the parameter /t:Build to a task's args manually, it gives me the error that two targets are specified for msbuild. Removing the taskSelector variable does not help. The only way I get it running is to set all taskName variables to Build, but then I cannot distinguish between different tasks in the task selector.
Any ideas how to solve this?
PS: is there a reference of possible parameters for the tasks.json file, other than those provided in the example file and on the official documentation site?
We have a work item to support suppressTaskName on the task description. If this gets implemented, would it solve your problem?
After playing around with msbuild command-line arguments, I've figured out a workaround. It is not nice and only works in some cases, but for me it is fine. Hopefully a better solution can be implemented in later versions of VSCode.
The idea: add a dummy argument to msbuild that has no meaningful effect. I've tried two versions: (1) add the preprocess command-line switch, which creates a file whose content you can ignore, i.e. "taskSelector": "/pp:"; (2) add a dummy property to msbuild that accepts any argument, like /p:DefineConstants=....
The final tasks.json file looks like:
{
"version": "0.1.0",
"command": "msbuild",
"args": ["${cwd}/build/PROJECTNAME.sln",
"/property:GenerateFullPaths=true"],
// a dummy taskSelector to overcome a restriction in msbuild
// (1) "taskSelector": "/pp:",
// (2) ...
"taskSelector": "/p:DefineConstants=taskName_",
"tasks": [
{
"taskName": "build_release",
"args": ["/t:Build", "/p:Configuration=Release"],
"problemMatcher": "$msCompile"
},
{
"taskName": "build_debug",
"args": ["/t:Build", "/p:Configuration=Debug"],
"problemMatcher": "$msCompile"
}
]}
You can only use task names without spaces, but this is OK, since one can now distinguish between different tasks.
Maybe other build systems like grunt have a similar dummy parameter that can be set to the taskName without changing the build process.
I'm open to better solutions to this problem.