Cloudflare Wrangler "No KV Namespaces configured!" - cloudflare

I'm new to Cloudflare/Wrangler, but it seems like the documentation is missing something, as following the directions doesn't seem to work.
Starting from here:
https://developers.cloudflare.com/workers/wrangler/workers-kv/
I run the first command as wrangler kv:namespace create apikeys
🌀 Creating namespace with title "emailvalidator-apikeys"
✨ Success!
Add the following to your configuration file in your kv_namespaces array:
{ binding = "apikeys", id ="4818.........aa2c" }
I have a wrangler.toml file, and I add it to the kv_namespaces array:
kv_namespaces = [
  { binding = "apikeys", id = "4818.........aa2c" }
]
I attempt to add a key/value entry with Wrangler: wrangler kv:key put --binding=apikeys "MYKEY" "MYKEYVALUE"
✘ [ERROR] No KV Namespaces configured! Either use --namespace-id to upload directly or add a KV namespace to your wrangler config file.
What am I missing? I already validated in the Cloudflare dashboard that the namespace does exist, named emailvalidator-apikeys as the documentation leads me to expect.

Do you have your full wrangler.toml available? In TOML, a table header captures every bare key that follows it until the next header, which can cause issues where your kv_namespaces key isn't actually top-level or under an environment.
As an example:
name = "foo"
[triggers]
crons = ["* * * * *"]
kv_namespaces = [
  {...}
]
This wouldn't work, as kv_namespaces is now part of the triggers table, so you would want to move it above [triggers], or instead use the [[kv_namespaces]] array-of-tables syntax:
[[kv_namespaces]]
binding = "..."
id = "..."
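Putting it together, a minimal wrangler.toml that keeps the binding out of any other table might look like this (a sketch; the worker name and the truncated id are taken from the question, and the [triggers] table is only there to show where a header would otherwise capture the key):

```toml
name = "emailvalidator"

# Array-of-tables entries are position-independent, so this stays a
# top-level key even when other [table] headers appear above it.
[[kv_namespaces]]
binding = "apikeys"
id = "4818.........aa2c"

[triggers]
crons = ["* * * * *"]
```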

why read tsconfig.json using readConfigFile instead of directly requiring the path of tsconfig.json?

Upon investigating create-react-app's configuration, I found something interesting.
// config/modules.js
...
if (hasTsConfig) {
  const ts = require(
    resolve.sync("typescript", { basedir: paths.appNodeModules })
  );
  config = ts.readConfigFile(paths.appTsConfig, ts.sys.readFile).config;
  // Otherwise we'll check if there is jsconfig.json
  // for non TS projects.
} else if (hasJsConfig) {
  config = require(paths.appJsConfig);
}
...
Unlike jsconfig.json, which is read with a direct require(paths.appJsConfig), why does it use resolve.sync and ts.readConfigFile to read tsconfig.json?
...
if (hasTsConfig) {
  config = require(paths.appTsConfig);
  // Otherwise we'll check if there is jsconfig.json
  // for non TS projects.
} else if (hasJsConfig) {
  config = require(paths.appJsConfig);
}
...
If I change the code as above, the result is the same (at least the console output is the same).
There must be a reason why create-react-app uses such a complicated way to read the TypeScript config file.
Why is that?
The ts config reader is a bit smarter than simply reading and parsing a JSON file. There are two differences I can think of right now:
In tsconfig files you can use comments. JSON.parse will throw an exception, because a / is not an allowed character at an arbitrary position in JSON.
tsconfig files can extend each other. Simply parsing one JSON file ignores the extension chain, so you'd receive a config object that doesn't represent what TypeScript actually uses.

unexpected type error after terraform 0.12 migration

I am in the process of migrating my terraform 0.11 configuration to terraform 0.12.5.
The migration (using 0.12upgrade) went relatively smoothly, but then I encountered this error during the next plan
Error: Invalid value for module argument
on main.tf line 72, in module "foo":
72: subnet_ids = module.vpc.subnet_ids
The given value is not suitable for child module variable "subnet_ids"
defined at ../../modules/foo/main.tf:10,1-30: element 0: string
required.
The module foo has a (migrated) variable declaration subnet_ids that looks like this:
variable "subnet_ids" {
  type = list(string)
}
while the vpc module has an output declaration that is declared like this:
output "subnet_ids" {
  value = [aws_subnet.private.*.id]
}
It seems that if I relax the type constraint on the foo module, the error goes away.
However, is this the correct thing to do? After all, isn't the output of the vpc module actually a list of strings? How do I check the type of the vpc output value?
Update: relaxing the type constraint allows the first part of the validation to succeed, but merely causes problems for the consuming module when the variable is applied as per this output
Error: Incorrect attribute value type
on ../../modules/foo/main.tf line 350, in resource "aws_ecs_service" "api":
350: subnets = var.subnet_ids
Inappropriate value for attribute "subnets": incorrect set element type:
string required.
So the question is: what am I doing wrong when defining the output value? How do I ensure the output value is a list of strings, so that I don't get the original error? And how can I inspect the type of module.vpc.subnet_ids?
Turns out, I needed to change this:
output "subnet_ids" {
  value = [aws_subnet.private.*.id]
}
to this:
output "subnet_ids" {
  value = aws_subnet.private[*].id
}
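The extra brackets were the culprit: a splat expression already yields a list, so wrapping it in [ ... ] produces a single-element list whose element is itself a list. You can see this in terraform console (a sketch; the subnet ids shown are hypothetical):

```hcl
# In `terraform console`:
> aws_subnet.private[*].id
["subnet-0aaa", "subnet-0bbb"]     # a list of strings - matches list(string)

> [aws_subnet.private[*].id]
[["subnet-0aaa", "subnet-0bbb"]]   # element 0 is a list, not a string
```

This also explains the original message "element 0: string required": element 0 of the wrapped value was a whole list rather than a string.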
If you need to combine several subnet lists, you can flatten them:
subnet_ids = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
In the variables file, declare "subnet_ids" as a list:
variable "subnet_ids" {
  type    = list(string)
  default = []
}
Then in your module, reference the subnets as follows:
subnet_ids = [var.subnet_ids[0], var.subnet_ids[1]]

Drupal 8 custom modules - Installing additional configuration from YML files with a hook_update_N

I have a custom module which installs a specific set of configurations - these are all stored in the config/install folder, which means they are installed when the module is installed.
The configuration includes a content type, paragraphs, view modes, form modes, field storages and fields attached to both the content type and the paragraphs, etc. The idea is to use this module to install a 'feature' (a blog) and use it across multiple sites, as well as provide updates and extensions when we add more stuff to this feature.
Since the config/install folder is only imported when the module is first installed, you cannot add more configuration through it later, so I've been trying to find a way to import additional configuration files through an update hook. This is one that works:
<?php

use Symfony\Component\Yaml\Yaml;

/**
 * Installs the file upload element.
 */
function MODULE_NAME_update_8002() {
  // Is the flaw with this the fact that the order of loading configurations
  // now matters and is a little bit more difficult to deal with?
  // NOTE: YES. If, for example, you comment out the installing of the
  // field_storage for the field_cb_file, but try to add the field_cb_file to
  // the paragraph type, the update is successful and no errors are thrown.
  // This is basically me trying to re-create the Drupal configuration
  // management system, without the dependency checks, etc. What is the PROPER
  // way of importing additional configuration from a module through an update?
  // FIXME:
  $configs_to_install = [
    'paragraphs.paragraphs_type.cbsf_file_download',
    'field.storage.paragraph.field_cb_file',
    'field.field.paragraph.cbsf_file_download.field_cb_file',
    'field.field.paragraph.cbsf_file_download.field_cb_heading',
    'field.field.paragraph.cbsf_file_download.field_cb_icon',
    'field.field.paragraph.cbsf_file_download.field_cb_text',
    'core.entity_form_display.paragraph.cbsf_file_download.default',
    'core.entity_view_display.paragraph.cbsf_file_download.default',
  ];
  $active_storage = \Drupal::service('config.storage');
  foreach ($configs_to_install as $config_to_install) {
    $path = drupal_get_path('module', 'MODULE_NAME') . '/config/update_8002/' . $config_to_install . '.yml';
    $content = file_get_contents($path);
    $parsed_yml = Yaml::parse($content);
    $active_storage->write($config_to_install, $parsed_yml);
  }
}
However, there are flaws with this method: you have to order the configuration files correctly when they depend on each other, and any dependencies declared inside the config files are never checked.
Is there a way to utilise configuration management to import config properly, in this same, 'loop over the files' way? Or to point to a folder that contains all of the config files and install them?
EDIT: There are further issues with this method. Even if you've ordered the files correctly in terms of dependencies, no database tables are created: the configuration is simply written in as-is, and no other part of Drupal is made aware that new entities were created, so none of the hooks that would run if you created the entities through the Drupal GUI ever fire. Definitely not the recommended way of transferring more complex configuration.
I've pushed this a step further: there is a way to use the EntityTypeManager class to create / update configurations.
Two links helped me with this:
https://drupal.stackexchange.com/questions/164713/how-do-i-update-the-configuration-of-a-module
pwolanin's answer at the bottom provides a function that either updates the configuration if it exists, or creates it outright.
https://www.metaltoad.com/blog/programmatically-importing-drupal-8-field-configurations
The code on this page gives a clearer idea of what is happening: for each configuration you'd like to install, you run the YML file through the respective storage manager and then create the appropriate entity configuration, which creates all of the required DB tables.
What I ended up doing was:
I utilised a slightly modified version of pwolanin's code to create a generic config updater function:
function _update_or_install_config(string $prefix, string $update_id, string $module) {
  $updated = [];
  $created = [];
  /** @var \Drupal\Core\Config\ConfigManagerInterface $config_manager */
  $config_manager = \Drupal::service('config.manager');
  $files = glob(drupal_get_path('module', $module) . '/config/update_' . $update_id . '/' . $prefix . '*.yml');
  foreach ($files as $file) {
    $raw = file_get_contents($file);
    $value = \Drupal\Component\Serialization\Yaml::decode($raw);
    if (!is_array($value)) {
      throw new \RuntimeException(sprintf('Invalid YAML file %s', $file));
    }
    $type = $config_manager->getEntityTypeIdByName(basename($file));
    $entity_manager = $config_manager->getEntityManager();
    $definition = $entity_manager->getDefinition($type);
    $id_key = $definition->getKey('id');
    $id = $value[$id_key];
    /** @var \Drupal\Core\Config\Entity\ConfigEntityStorage $entity_storage */
    $entity_storage = $entity_manager->getStorage($type);
    $entity = $entity_storage->load($id);
    if ($entity) {
      $entity = $entity_storage->updateFromStorageRecord($entity, $value);
      $entity->save();
      $updated[] = $id;
    }
    else {
      $entity = $entity_storage->createFromStorageRecord($value);
      $entity->save();
      $created[] = $id;
    }
  }
  return [
    'updated' => $updated,
    'created' => $created,
  ];
}
I placed all of my YML files in the folder config/update_8002, then utilised this function in a hook_update_N function:
function MODULE_NAME_update_8002() {
  // _update_or_install_config() already globs over every YML file in the
  // update folder that matches the given prefix, so one call per prefix is
  // enough. Storages are installed before the fields that depend on them.
  _update_or_install_config('paragraphs.paragraphs_type', '8002', 'MODULE_NAME');
  _update_or_install_config('field.storage.paragraph', '8002', 'MODULE_NAME');
  _update_or_install_config('field.field.paragraph', '8002', 'MODULE_NAME');
  _update_or_install_config('core.entity_view_display.paragraph', '8002', 'MODULE_NAME');
  _update_or_install_config('core.entity_form_display.paragraph', '8002', 'MODULE_NAME');
}
Note that the _update_or_install_config function loops over all of the configs in the folder that match a specific prefix (and therefore a specific entity type), so you only need to pass the prefix, and every YML file of that configuration type will be included.

get the sys_file_metadata values using typo3 repository methods

I am building an extension in TYPO3 and got stuck at one point: I need the sys_file_metadata values along with the sys_file information. I am getting the sys_file information using repository methods, but not the metadata, such as title and description.
Can anyone help me find a repository method to fetch the metadata?
$storageRepository = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\\CMS\\Core\\Resource\\StorageRepository'); // create an instance of the storage repository
$storage = $storageRepository->findByUid(2); // get the file storage with uid 2
$folder = $storage->getFolder('/Audios/', false);
$files = $storage->getFilesInFolder($folder);
foreach ($files as $key => $value) {
  $array_file = $files[$key]->toArray();
  $uid = $array_file['uid'];
  $array['name'] = $array_file['name'];
  $array['extension'] = $array_file['extension'];
}
I found the answer. We can use the method
$value->_getMetaData();
To get a specific property, we can use
$value->getProperty('description');

DNN - Adding a secure folder

I'm currently coding a module where users can add secure folders.
But the Instance method requires an instance name parameter, and I've no idea what that means. Could someone explain it to me?
DotNetNuke.Services.FileSystem.SecureFolderProvider.Instance("Test2").AddFolder(txtFolderName.Text, new FolderMappingInfo
{
    PortalID = base.PortalId,
    MappingName = txtFolderName.Text
});
Any suggestions as to what I am doing wrong?
With some help from garethbh, I came up with this:
// Get the folder mapping
var folderMapping = FolderMappingController.Instance.GetFolderMapping(PortalId, "Secure");

// Add the folder and get back its folder information
var folder = FolderManager.Instance.AddFolder(new FolderMappingInfo
{
    FolderProviderType = folderMapping.FolderProviderType,
    FolderMappingID = 9,
    Priority = 2,
    PortalID = PortalId,
}, portalFilePath);
This works fine for me.
You need to pass in the name of your folder mapping provider type. If you search for usages of SecureFolderProvider's base class (FolderProvider), you'll see what you need.
Eg:
var folderMapping = FolderMappingController.Instance.GetFolderMapping(PortalId, "Secure");
if (folderMapping != null)
{
    SecureFolderProvider.Instance(folderMapping.FolderProviderType).AddFolder(folderPath, folderMapping);
}
I've never actually used the secure folder provider before, so I'm just guessing you need the one with the 'Secure' mapping name (you may want 'Database' instead, depending on your needs, or create your own folder provider). See the FolderMappings table in the database for the available types.
From the DNN wiki http://www.dnnsoftware.com/wiki/Page/Folder-Types and http://www.dnnsoftware.com/wiki/Page/Folder-providers