Dojo 1.8 already defines AMD modules. For example, you can do things like this:
require(["dojo/_base/lang"], function (lang) {
    var ab = lang.mixin({a: 1}, {b: 2});
});
But how do I avoid getting an error when I attempt to import this module?
import lang = module("dojo/_base/lang");
Is this possible?
You will need a TypeScript definition file for lang. Assuming that lang.d.ts exists in the same directory as lang.js, this code:
import lang = module('dojo/_base/lang');
var ab = lang.mixin({a: 1}, {b: 2});
compiled with
tsc --module amd yourfile.ts
generates
define(["require", "exports", 'dojo/_base/lang'], function(require, exports, __lang__) {
var lang = __lang__;
var ab = lang.mixin({a: 1}, {b: 2});
}
If you don't want to match up the directory structures for whatever reason, do this instead, assuming lang.d.ts is in a subdirectory called 3rd that is a sibling of test.ts.
test.ts:
///<reference path="3rd/lang.d.ts"/>
import lang = module('dojo/_base/lang');
var ab = lang.mixin({a: 1}, {b: 2});
3rd/lang.d.ts:
declare module 'dojo/_base/lang' {
    ...
}
generates the same as above.
You can normally load modules via the Dojo loader; you don't have to use the import statement. But if you want to use import, you have to declare the module:
declare module "dojo/_base/lang" {
    export function ...
    export class ...
}
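For illustration, a minimal version of such a declaration covering only the mixin function used above might look like this (the loose signature is a sketch, not Dojo's full API):
declare module "dojo/_base/lang" {
    // mixin copies the properties of each source object onto dest and
    // returns dest; the any types are deliberately loose for illustration.
    export function mixin(dest: any, ...sources: any[]): any;
}
With that declaration in place, import lang = module("dojo/_base/lang") compiles and lang.mixin({a: 1}, {b: 2}) type-checks.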
I'm trying to recursively read a file in Deno using Deno.readDir, but the example they provide only reads the given folder:
for await (const entry of Deno.readDir(Deno.cwd())) {
  console.log(entry.name);
}
How can I make this recursive?
Deno's standard library includes a function called walk for this purpose. It's available in std/fs/walk.ts. Here's an example:
/Users/deno/so-74953935/main.ts:
import { walk } from "https://deno.land/std@0.170.0/fs/walk.ts";

for await (const walkEntry of walk(Deno.cwd())) {
  const type = walkEntry.isSymlink
    ? "symlink"
    : walkEntry.isFile
    ? "file"
    : "directory";
  console.log(type, walkEntry.path);
}
Running in the terminal:
% pwd
/Users/deno/so-74953935
% ls -AF
.vscode/ deno.jsonc deno.lock main.ts
% ls -AF .vscode
settings.json
% deno --version
deno 1.29.1 (release, x86_64-apple-darwin)
v8 10.9.194.5
typescript 4.9.4
% deno run --allow-read main.ts
directory /Users/deno/so-74953935
file /Users/deno/so-74953935/main.ts
file /Users/deno/so-74953935/deno.jsonc
file /Users/deno/so-74953935/deno.lock
directory /Users/deno/so-74953935/.vscode
file /Users/deno/so-74953935/.vscode/settings.json
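walk also accepts an options object for filtering what it yields; the fields below (exts, includeDirs, maxDepth) are from the standard library's WalkOptions, though the exact set may vary by std version:
import { walk } from "https://deno.land/std@0.170.0/fs/walk.ts";

// Only yield regular files ending in .ts, descending at most two levels.
for await (const entry of walk(Deno.cwd(), {
  exts: [".ts"],
  includeDirs: false,
  maxDepth: 2,
})) {
  console.log(entry.path);
}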
Alternatively, you can write your own async generator function that wraps around Deno.readDir:
(Do note that the example below joins the path and name, giving you strings such as /directory/name.txt.)
import { join } from "https://deno.land/std/path/mod.ts";

export async function* recursiveReaddir(
  path: string,
): AsyncGenerator<string, void> {
  for await (const dirEntry of Deno.readDir(path)) {
    if (dirEntry.isDirectory) {
      yield* recursiveReaddir(join(path, dirEntry.name));
    } else if (dirEntry.isFile) {
      yield join(path, dirEntry.name);
    }
  }
}

for await (const entry of recursiveReaddir(Deno.cwd())) {
  console.log(entry);
}
Or you can use recursive_readdir, a third-party Deno library made for this purpose.
For a code repository project in Palantir Foundry, I am struggling with re-using some of my transformation logic.
It seems almost trivial, but: is there a way to send an Input to a Transform that is not a dataset/dataframe reference?
In my case I want to pass in strings or lists/arrays.
This is my code:
from pyspark.sql import functions as F
from transforms.api import Transform, Input, Output


def my_computation(result, customFilter, scope, my_categories, my_mappings):
    scope_df = scope.dataframe()
    my_categories_df = my_categories.dataframe()
    my_mappings_df = my_mappings.dataframe()

    filtered_cat_df = (
        my_categories_df
        .filter(F.col('CAT_NAME').isin(customFilter))
    )
    # ... more logic
def generateTransforms(config):
    transforms = []
    for key, value in config.items():
        o = {}
        for outKey, outValue in value['outputs'].items():
            o[outKey] = Output(outValue)

        i = {}
        for inpKey, inpValue in value['inputs'].items():
            i[inpKey] = Input(inpValue)

        i['customFilter'] = Input(value['my_custom_filter'])

        transforms.append(Transform(my_computation, inputs=i, outputs=o))
    return transforms
config = {
    "transform_one": {
        "my_custom_filter": {
            "foo",
            "bar"
        },
        "inputs": {
            "scope": "/my-project/input/scope",
            "my_categories": "/my-project/input/my_categories",
            "my_mappings": "/my-project/input/my_mappings"
        },
        "outputs": {
            "result": "/my-project/output/result"
        }
    }
}
TRANSFORMS = generateTransforms(config)
The concrete question is: how can I pass the values from my_custom_filter into customFilter in the transformation function my_computation?
If I execute it like above, I get the error "TypeError: unhashable type: 'set'".
This looks like a Python issue; any chance you can point out which line is causing the error?
Reading through your code, I would guess it's this line:
i['customFilter'] = Input(value['my_custom_filter'])
Your Python logic is wrong: if we unpack your code, you're effectively making this call:
i['customFilter'] = Input({"foo", "bar"})
{"foo", "bar"} is a set literal (braces with no key: value pairs), and Input() expects a dataset path rather than a set, hence the "TypeError: unhashable type: 'set'". Plain values like strings or lists should not be wrapped in Input() at all; capture them in a closure instead, as shown below.
Edit, to answer the comment on how to create a Python transform that locks a variable in a closure:
from pyspark.sql import functions as F
from transforms.api import transform


def create_transform(inputs, outputs, my_other_var):
    @transform(**inputs, **outputs)
    def compute(input_foo, input_bar, output_foobar, ctx):
        df = input_foo.dataframe()
        # my_other_var is captured from the enclosing scope (the closure),
        # so it never needs to be wrapped in Input().
        df = df.withColumn("mycol", F.lit(my_other_var))
        output_foobar.write_dataframe(df)
    return compute
and now you can call this:
transforms.append(create_transform(inputs, outputs, "foobar"))
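Applied to the question's generateTransforms, a rough sketch of the closure approach could look like this; my_computation_factory is an invented helper name, and the inner function's parameter names must match the keys of the inputs and outputs dicts:
def my_computation_factory(inputs, outputs, custom_filter):
    # custom_filter is a plain list/set of strings held in the closure,
    # so it never passes through Input().
    @transform(**inputs, **outputs)
    def compute(result, scope, my_categories, my_mappings):
        filtered_cat_df = (
            my_categories.dataframe()
            .filter(F.col('CAT_NAME').isin(list(custom_filter)))
        )
        # ... more logic, then write to the output:
        result.write_dataframe(filtered_cat_df)
    return compute


def generateTransforms(config):
    transforms = []
    for key, value in config.items():
        outputs = {k: Output(v) for k, v in value['outputs'].items()}
        inputs = {k: Input(v) for k, v in value['inputs'].items()}
        transforms.append(
            my_computation_factory(inputs, outputs, value['my_custom_filter'])
        )
    return transforms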
I know it is very simple to do in OrientDB:
select * from asset where bucket.repository_name = 'my-repo-release';
but I need to get this list remotely, not in the local OrientDB console, so I need a Groovy script, and I can't find one anywhere.
Here is the example script:
https://github.com/sonatype-nexus-community/nexus-scripting-examples
It can be found in the "nexus-script-example" project:
import org.sonatype.nexus.repository.storage.Asset
import org.sonatype.nexus.repository.storage.Query
import org.sonatype.nexus.repository.storage.StorageFacet
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
def assetListFile = new File('/tmp/assetListFile.txt')
def request = new JsonSlurper().parseText("{\"repoName\":\"maven-releases\",\"startDate\":\"2016-01-01\"}")
assert request.repoName: 'repoName parameter is required'
assert request.startDate: 'startDate parameter is required, format: yyyy-mm-dd'
log.info("Gathering Asset list for repository: ${request.repoName} as of startDate: ${request.startDate}")
def repo = repository.repositoryManager.get(request.repoName)
StorageFacet storageFacet = repo.facet(StorageFacet)
def tx = storageFacet.txSupplier().get()
try {
    tx.begin()
    Iterable<Asset> assets = tx.findAssets(Query.builder().where('last_updated > ').param(request.startDate).build(), [repo])
    assets.each { Asset asset ->
        assetListFile << asset.name() + '\n'
    }
}
finally {
    tx.close()
}
All properties of an asset (object org.sonatype.nexus.repository.storage.Asset) are accessible, including asset.size().
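Since the goal is to get the list remotely, note that the script above only writes to a file on the server running Nexus. A sketch of a variation that instead returns the list in the script's response (assuming the usual Nexus script API behavior of returning the script's final value, and reusing the JsonOutput import already present above):
// Collect the names in memory, then return them as JSON so the remote
// caller receives the list in the script-run response body.
def assetNames = []
try {
    tx.begin()
    Iterable<Asset> assets = tx.findAssets(Query.builder().where('last_updated > ').param(request.startDate).build(), [repo])
    assets.each { Asset asset ->
        assetNames << asset.name()
    }
}
finally {
    tx.close()
}
return JsonOutput.toJson(assetNames)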
I need a workaround for using count.index inside a module block for some input variables. I have a habit of over-complicating problems, so maybe there's a much easier solution.
File/Folder Structure:
modules/
    main.tf
    ignition/
        main.tf
        modules/
            files/
                main.tf
            template_files/
                main.tf
End Goal: Create an Ignition file for each instance I'm deploying. Each Ignition file has instance-specific info like hostname, IP address, etc.
All of this code works if I use a static value or a variable without count.index. I need help coming up with a workaround for the address, gateway, and hostname variables specifically. If I need to process the count.index inside one of the child modules, that's totally fine; I can't seem to wrap my brain around that, though. I've tried null_data_source and null_resource blocks from the child modules to achieve that, but so far no luck.
Variables:
workers = {
  Lab1 = {
    "lab1k8sc8r001" = "192.168.17.100/24"
  }
  Lab2 = {
    "lab2k8sc8r001" = "192.168.18.100/24"
  }
}

gateway = {
  Lab1 = [
    "192.168.17.1",
  ]
  Lab2 = [
    "192.168.18.1",
  ]
}
From modules/main.tf, I'm calling the ignition module:
module "ignition_workers" {
source = "./modules/ignition"
virtual_machines = var.workers[terraform.workspace]
ssh_public_keys = var.ssh_public_keys
files = [
"files_90-disable-auto-updates.yaml",
"files_90-disable-console-logs.yaml",
]
template_files = {
"files_eth0.nmconnection.yaml" = {
interface-name = "eth0",
address = element(values(var.workers[terraform.workspace]), count.index),
gateway = element(var.gateway, count.index % length(var.gateway)),
dns = join(";", var.dns_servers),
dns-search = var.domain,
}
"files_etc_hostname.yaml" = {
hostname = element(keys(var.workers[terraform.workspace]), count.index),
}
"files_chronyd.yaml" = {
ntp_server = var.ntp_server,
}
}
}
From modules/ignition/main.tf I take the files and template_files variables to build the Ignition config:
module "ingition_file_snippets" {
source = "./modules/files"
files = var.files
}
module "ingition_template_file_snippets" {
source = "./modules/template_files"
template_files = var.template_files
}
data "ct_config" "fedora-coreos-config" {
count = length(var.virtual_machines)
content = templatefile("${path.module}/assets/files_ssh_authorized_keys.yaml", {
ssh_public_keys = var.ssh_public_keys
})
pretty_print = true
snippets = setunion(values(module.ingition_file_snippets.files), values(module.ingition_template_file_snippets.files))
}
I am not quite sure what you are trying to achieve, so I cannot give any detailed examples. But modules in Terraform do not support count or for_each yet (as of Terraform 0.12), so you also cannot use count.index inside a module block.
You might want to change your module to take lists/maps as input and create those lists/maps via for expressions, transforming them from other input variables.
You can combine for with if to create a filtered subset of your source list/map, as in:
[for s in var.list : upper(s) if s != ""]
I hope this helps you work around the missing count support.
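For instance, a for expression can build one map entry per virtual machine up front, so nothing inside the module needs count.index. A rough sketch against the question's variables (per_vm_network is an invented local name):
locals {
  # One entry per VM hostname, derived purely from var.workers and
  # var.gateway for the current workspace; no count.index required.
  per_vm_network = {
    for hostname, address in var.workers[terraform.workspace] : hostname => {
      hostname = hostname
      address  = address
      gateway  = var.gateway[terraform.workspace][0]
    }
  }
}
The module can then take that whole map as a single variable and iterate over it internally, for example with a counted resource indexing into keys(var.per_vm_network).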
Is it possible in Perl6 to find all installed modules whose file-name matches a pattern?
In Perl5 I would write it like this:
use File::Spec::Functions qw( catfile );

my %installed;
for my $dir ( @INC ) {
    my $glob_pattern = catfile $dir, 'App', 'DBBrowser', 'DB', '*.pm';
    map { $installed{$_}++ } glob $glob_pattern;
}
There is currently no way to get the original file name of an installed module. However, it is possible to get the module names:
sub list-installed {
    my @curs       = $*REPO.repo-chain.grep(*.?prefix.?e);
    my @repo-dirs  = @curs>>.prefix;
    my @dist-dirs  = |@repo-dirs.map(*.child('dist')).grep(*.e);
    my @dist-files = |@dist-dirs.map(*.IO.dir.grep(*.IO.f).Slip);

    my $dists := gather for @dist-files -> $file {
        if try { Distribution.new( |%(from-json($file.IO.slurp)) ) } -> $dist {
            my $cur = @curs.first: {.prefix eq $file.parent.parent}
            take $_ for $dist.hash<provides>.keys;
        }
    }
}
.say for list-installed();
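Since list-installed yields module names rather than file names, the Perl 5 glob can be approximated by grepping the result against a pattern; this usage line is a sketch:
# Keep only modules under App::DBBrowser::DB, mirroring the Perl 5 glob.
.say for list-installed().grep(/^ 'App::DBBrowser::DB::' /);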
see: Zef::Client.list-installed()