Reduce duplication (save complicated transforms for later use) in VS Code snippets - variables

Is there a way to create custom variables inside a VS Code snippet?
I have snippets like this one, where I create a singleton based on the name of a file and its folder.
Here's the snippet:
"Service": {
"prefix": "singletonByPath",
"body": [
"class ${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/upcase}$2/g}${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g} {",
" $0",
"}",
"",
"export const ${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/downcase}$2/g}${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g} = new ${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/upcase}$2/g}${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g}();",
""
],
"description": "Create an exported singleton instance and a class based on the filename and path"
},
So, when the snippet is triggered in a path like '..../customers/service.ts', you will get this result:
class CustomersService {
}
export const customersService = new CustomersService();
The problem is that I have duplicated, long, hard-to-read regexes, and I would like to extract them into variables (or mirrors without tab stops).
I would even prefer to have these variables in a "snippet-global" location so that I can use them in multiple snippets.
Is it possible to somehow reduce this duplication?

There are a couple of things you can do to simplify your snippet, although there is no built-in way to save "variables" of pre-defined snippet parts.
Here is a simplification of your code:
"Service": {
"prefix": "singletonByPath",
"body": [
// "class ${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/upcase}$2/g}${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g} {",
"class ${1:${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/upcase}$2/g}}${2:${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g}} {",
" $0",
"}",
"",
// "export const ${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/downcase}$2/g}${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g} = new ${TM_DIRECTORY/.*[^\\w]([a-z])(\\w+)$/${1:/upcase}$2/g}${TM_FILENAME_BASE/([a-z])(\\w+)/${1:/upcase}$2/g}();",
"export const ${1/(\\w+)/${1:/downcase}/}$2 = new $1$2();",
""
],
"description": "Create an exported singleton instance and a class based on the filename and path"
}
Note the use of ${1:${TM_DIRECTORY...}} and likewise ${2:${TM_FILENAME_BASE...}}.
This effectively sets $1 to the result of the TM_DIRECTORY transform and $2 to the result of the TM_FILENAME_BASE transform, and those "variables" can then be used elsewhere in the snippet simply by referring to $1 and $2.
Those "variables" can even be transformed themselves as in the ${1/(\\w+)/${1:/downcase}/} transform in the last line.
The last line of your snippet then becomes simply:
"export const ${1/(\\w+)/${1:/downcase}/}$2 = new $1$2();",
You will have to tab a couple of times because those "variables" are now tabstops, and the final transform won't be applied until you tab past it, but that is a small price to pay for such a simplification.
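To see the placeholder-as-variable technique in isolation, here is a minimal, hypothetical snippet (the names are illustrative, not from the answer) that sets $1 from a variable once, then both mirrors and transforms it. In a file named util.js it expands to: const util = init(); console.log('UTIL', util);
"LogPair": {
    "prefix": "logpair",
    "body": [
        "const ${1:${TM_FILENAME_BASE}} = init();",
        "console.log('${1/(\\w+)/${1:/upcase}/}', $1);"
    ]
}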
There are other simplifications to your snippet that aren't "variable-related":
"body": [
"class ${1:${TM_DIRECTORY/.*[\\/\\\\](.*)/${1:/capitalize}/}}${2:${TM_FILENAME_BASE/(.*)/${1:/capitalize}/}} {",
" $0",
"}",
"",
"export const ${1/(\\w+)/${1:/downcase}/g}$2 = new $1$2();",
""
],
You can use the capitalize transform. Also note that this last body works with both Windows and Linux path separators.


Is there a good way to import .json file content using a .sql script file?

We have a SQL project in Visual Studio with several .sql script files in it to populate data to Microsoft SQL Server.
Some of the .sql scripts import JSON data, which is currently defined as a raw string inside the .sql files. I was thinking of extracting the JSON to separate files. What I found are examples like this:
SELECT * FROM OPENROWSET (BULK 'C:\sampledata.txt', SINGLE_CLOB) AS importData
but they all use a full path to the source file, which is ridiculous and will probably fail if run on another machine with a different folder structure.
Is there a way to embed .json files into a SQL project and reference them from a .sql file in a more flexible way, to guarantee that the compiled script will work?
I've looked into this myself, and I don't think it'll work.
The OPENROWSET BULK method of reading data from a file only accepts a full path.
You could try and get around this with a pre/post-deployment script and some SQLCMD variables, but then you run into the next issue.
OPENROWSET will only work if the file can be accessed from the machine running the query. If you run this against a remote server, you'll have to jump through some hoops to give it access to your data file.
Also, if you want to do this from an Azure SQL database, you'll need to store the data in Azure Blob Storage.
OPENROWSET documentation
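For completeness, the SQLCMD-variable workaround mentioned above might look like this in a post-deployment script. This is only a sketch: $(DataPath) is a hypothetical SQLCMD variable you would define in the project and supply at deploy time, and the path is still resolved on the machine running the query.
DECLARE @json NVARCHAR(MAX);
SELECT @json = BulkColumn
FROM OPENROWSET(BULK '$(DataPath)\sampledata.json', SINGLE_CLOB) AS importData;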
Here's the best recommendation I can provide. Assuming that your primary goals are (a) to keep the code legible and (b) to get a clearer diff when updating the JSON data, create a separate scalar-valued function for each of your JSON strings. Each function simply returns your raw JSON string. Not exactly creative, but it lets you format your JSON for maximum legibility while cleaning up the scripts that actually use this data.
e.g. Instead of having one file that looks like this:
DECLARE @json NVARCHAR(MAX) = N'[{"attr":"value 1"},{"attr":"value 2"},{"attr":"value 3"},{"attr":"value 4"},{"attr":"value 5"},{"attr":"value 6"},{"attr":"value 7"},{"attr":"value 8"},{"attr":"value 9"},{"attr":"value 10"}]'
--do stuff with @json
Have two files, one:
CREATE FUNCTION GetJson()
RETURNS NVARCHAR(MAX)
AS
BEGIN
    RETURN N'
    [
        {
            "attr": "value 1"
        },
        {
            "attr": "value 2"
        },
        {
            "attr": "value 3"
        },
        {
            "attr": "value 4"
        },
        {
            "attr": "value 5"
        },
        {
            "attr": "value 6"
        },
        {
            "attr": "value 7"
        },
        {
            "attr": "value 8"
        },
        {
            "attr": "value 9"
        },
        {
            "attr": "value 10"
        }
    ]
    '
END
and two:
DECLARE @json NVARCHAR(MAX) = dbo.GetJson()
--do stuff with @json
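As a usage sketch, the consuming script could then shred the JSON into rows with OPENJSON (available in SQL Server 2016+; the column name here comes from the example data):
DECLARE @json NVARCHAR(MAX) = dbo.GetJson();
SELECT attr
FROM OPENJSON(@json) WITH (attr NVARCHAR(100));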

How to trace the data that is going through caches and DRAM memory in gem5?

--debug-flags Cache,DRAM shows addresses and sizes, but sometimes I just need to see the actual data being sent.
I know that this might produce large logs, but that is fine as I'm restricting my area of interest well via --debug-start and -m/--debug-break (I used a hack here to simply end the simulation at a given tick).
https://gem5-users.gem5.narkive.com/VUAhxc7J/how-can-i-trace-data-of-cache mentions using CommMonitor. It is a bit annoying to have to modify the run script, but that's also a valid solution. It would be good to give a minimal example here that patches, say, se.py to add it, and that shows how to view its output.
I also have a DPRINTF patch that seems to help, which I'll try to publish. Here's a sketch:
@@ -385,14 +386,17 @@ void
Packet::print(std::ostream &o, const int verbosity,
const std::string &prefix) const
{
- ccprintf(o, "%s%s [%x:%x]%s%s%s%s%s%s", prefix, cmdString(),
+ ccprintf(o, "%s%s [%x:%x]%s%s%s%s%s%s D=%s", prefix, cmdString(),
getAddr(), getAddr() + getSize() - 1,
req->isSecure() ? " (s)" : "",
req->isInstFetch() ? " IF" : "",
req->isUncacheable() ? " UC" : "",
isExpressSnoop() ? " ES" : "",
req->isToPOC() ? " PoC" : "",
- req->isToPOU() ? " PoU" : "");
+ req->isToPOU() ? " PoU" : "",
+ flags.isSet(STATIC_DATA|DYNAMIC_DATA) ?
+ mem2hex_string(getConstPtr<const char>(), getSize())
+ : "");
}
and then implement mem2hex_string as shown at: C++ read binary file and convert to hex
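A minimal sketch of such a helper, under the assumption that it simply hex-encodes the packet's data buffer (matching the D=%s field in the patch above):
// Sketch: convert a raw byte buffer to a hex string.
#include <cstdio>
#include <string>

std::string
mem2hex_string(const char *data, unsigned size)
{
    std::string out;
    out.reserve(2 * size);
    char byte[3];
    for (unsigned i = 0; i < size; ++i) {
        std::snprintf(byte, sizeof(byte), "%02x",
                      static_cast<unsigned char>(data[i]));
        out += byte;
    }
    return out;
}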
Edit: I managed to add a CommMonitor by hacking se.py, but I could not see any memory-value output in any files, nor anything in its code that would produce it; the only thing I could see was new stats being added, so I'm not sure it can help at all.

Karate - how to access an array element by UUID during a 'retry until' statement

I have an endpoint which returns this JSON response:
{
  "jobs": [
    {
      "name": "job1",
      "id": "d6bd9aa1-0708-436a-81fd-cf22d5042689",
      "status": "pending"
    },
    {
      "name": "job2",
      "id": "4fdaf09f-51de-4246-88fd-08d4daef6c3e",
      "status": "pending"
    }
  ]
}
I would like to repeatedly call GET on this endpoint until the job I care about ("job2") has a "status" of "completed", but I'd like to check this using a UUID stored in a variable from a previous call.
i.e. by doing something like this:
#NB: code for previous API call is executed
* def uuidVar = response.jobRef
#NB: uuidVar equates to '4fdaf09f-51de-4246-88fd-08d4daef6c3e' for this scenario
* configure retry = { count: 5, interval: 10000 }
Given path /blah
And retry until response.jobs[?(#.id==uuidVar)].status == 'completed'
When method GET
Could anyone suggest the correct syntax for the retry until?
I've tried referencing the fantastic Karate docs & examples (in particular, js-arrays.feature) and some questions on SO (including this one: Karate framework retry until not working as expected), but sadly I haven't been able to get this working.
I also tried using karate.match here as suggested in the link above, but no cigar.
Apologies in advance if I am missing something obvious.
First, I recommend you read this answer on Stack Overflow; it is linked from the readme, actually, and is intended to be the definitive reference. Let me know if it needs to be improved: https://stackoverflow.com/a/55823180/143475
Short answer, you can't use JsonPath in the retry until expression, it has to be pure JavaScript.
While you can use karate.jsonPath() to bridge the worlds of JsonPath and JS, JsonPath can get very hard to write and comprehend. This is why I recommend using karate.filter() to do the same thing, but break the steps down into simple, readable chunks. Here is what you can try in a fresh Scenario. Hint: this is a good way to troubleshoot your code without making any "real" requests.
* def getStatus = function(id){ var temp = karate.filter(response.jobs, function(x){ return x.id == id }); return temp[0].status }
* def response =
"""
{
  "jobs": [
    {
      "name": "job1",
      "id": "d6bd9aa1-0708-436a-81fd-cf22d5042689",
      "status": "pending"
    },
    {
      "name": "job2",
      "id": "4fdaf09f-51de-4246-88fd-08d4daef6c3e",
      "status": "pending"
    }
  ]
}
"""
* def selected = '4fdaf09f-51de-4246-88fd-08d4daef6c3e'
* print getStatus(selected)
So if you have getStatus defined up-front, you can do this:
* retry until getStatus(selected) == 'completed'
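Putting it together with the retry configuration from your scenario, the whole flow would look something like this (a sketch, reusing the question's /blah path):
* configure retry = { count: 5, interval: 10000 }

Given path '/blah'
And retry until getStatus(selected) == 'completed'
When method get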
Note you can use multiple lines for a JS function if you don't like squeezing it all into one line, or even read it from a file.

Properly accessing cluster_config '__default__' values

I have a cluster.json file that looks like this:
{
    "__default__":
    {
        "queue": "normal",
        "memory": "12288",
        "nCPU": "1",
        "name": "{rule}_{wildcards.sample}",
        "o": "logs/cluster/{wildcards.sample}/{rule}.o",
        "e": "logs/cluster/{wildcards.sample}/{rule}.e",
        "jvm": "10240m"
    },
    "aln_pe":
    {
        "memory": "61440",
        "nCPU": "16"
    },
    "GenotypeGVCFs":
    {
        "jvm": "102400m",
        "memory": "122880"
    }
}
In my Snakefile, I have a few rules that try to access the cluster_config object in their params:
params:
    memory=cluster_config['__default__']['jvm']
But this gives me a KeyError:
KeyError in line 27 of home/bwubb/projects/Germline/S0330901/haplotype.snake:
'__default__'
Does this have something to do with '__default__' being a special object? It pretty-prints as a visually appealing dictionary, whereas the others are labeled OrderedDict, but when I look at the JSON they look the same.
If nothing is wrong with my JSON, should I refrain from accessing '__default__'?
The default values are accessed via the keyword "cluster", not "__default__".
See this example in the tutorial:
{
    "__default__" :
    {
        "account" : "my account",
        "time" : "00:15:00",
        "n" : 1,
        "partition" : "core"
    },
    "compute1" :
    {
        "time" : "00:20:00"
    }
}
The JSON listed above (from the linked tutorial) is the one being accessed in this example; it's unfortunate they are not on the same page. To access "time", J.K. uses the following call:
#!/usr/bin/env python3
import os
import sys

from snakemake.utils import read_job_properties

jobscript = sys.argv[1]
job_properties = read_job_properties(jobscript)

# do something useful with the threads
threads = job_properties["threads"]

# access a property defined in the cluster configuration file (Snakemake >=3.6.0)
time = job_properties["cluster"]["time"]

os.system("qsub -t {threads} {script}".format(threads=threads, script=jobscript))
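For the question's "jvm" value, the same pattern should apply, since "__default__" entries are merged into each job's "cluster" properties (a sketch):
# sketch: "jvm" comes from "__default__" unless a rule overrides it
jvm = job_properties["cluster"]["jvm"]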

Parsing JSON file using JSONKit

I am constructing a tuning fork app. The fork should allow up to 12 preset pitches.
Moreover, I wish to allow the user to choose a theme. Each theme will load a set of presets (it is not necessary to use all of them).
My configuration file would look something like this:
theme: "A3"
comment: "An octave below concert pitch (ie A4 440Hz)"
presets: {
A3 220Hz=220.0
}
// http://en.wikipedia.org/wiki/Guitar_tuning
theme: "Guitar Standard Tuning"
comment:"EADGBE using 12-TET tuning"
presets: {
E2=82.41
A2=110.00
D3=146.83
G3=196.00
B3=246.94
E4=329.63
}
theme: "Bass Guitar Standard Tuning"
comment: "EADG using 12-TET tuning"
presets: {
E1=41.204
A2=55.000
D3=73.416
G3=97.999
}
...which need to be extracted into some structure like this:
@interface Preset : NSObject
{
    NSString *label;
    double freq;
}
@end

@interface Theme : NSObject
{
    NSString *label;
    NSMutableArray *presets; // of Preset
}
@end

NSMutableArray *themes; // of Theme
How do I write my file using JSON? (I would like to require a minimum of typing on the part of the user -- how succinct can I get it? Could someone give me an example for the first theme?)
And how do I parse it into the structures using https://github.com/johnezang/JSONKit?
Here's a valid JSON example based on your thoughts:
[
    {
        "name": "Guitar Standard Tuning",
        "comment": "EADGBE using 12-TET tuning",
        "presets": {
            "E2": "82.41",
            "A2": "110.00",
            "D3": "146.83",
            "G3": "196.00",
            "B3": "246.94",
            "E4": "329.63"
        }
    },
    {
        "name": "Bass Guitar Standard Tuning",
        "comment": "EADG using 12-TET tuning",
        "presets": {
            "E1": "41.204",
            "A1": "55.000",
            "D2": "73.416",
            "G2": "97.999"
        }
    }
]
Read the file and parse it using JSONKit:
NSData *jsonData = [NSData dataWithContentsOfFile:path];
JSONDecoder *decoder = [[JSONDecoder alloc] initWithParseOptions:JKParseOptionNone];
NSArray *json = [decoder objectWithData:jsonData];
After that, you'll have to iterate over the json variable using a for loop.
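For example, here is a sketch of that loop, assuming Theme and Preset expose their fields as properties (the question shows only instance variables, so treat the property access as illustrative):
// Sketch: walk the parsed array into Theme/Preset objects.
NSMutableArray *themes = [NSMutableArray array];
for (NSDictionary *themeDict in json) {
    Theme *theme = [[Theme alloc] init];
    theme.label = [themeDict objectForKey:@"name"];
    theme.presets = [NSMutableArray array];
    NSDictionary *presetDict = [themeDict objectForKey:@"presets"];
    for (NSString *note in presetDict) { // fast enumeration yields the keys
        Preset *preset = [[Preset alloc] init];
        preset.label = note;
        preset.freq = [[presetDict objectForKey:note] doubleValue];
        [theme.presets addObject:preset];
    }
    [themes addObject:theme];
}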
Using the parser in your question, and assuming you have Simeon's string in an NSString variable, here's how to parse it:
#import "JSONKit.h"
id parsedJSON = [myJSONString objectFromJSONString];
That will give you a hierarchy of arrays and dictionaries that you can walk to build your Preset and Theme objects. In the above case, you would get an array of two dictionaries, each with a name, comment and presets key. The first two have NSString values, and the third (presets) has a dictionary as its value, with the note names as keys and the frequencies as values (as NSString objects).