Cloudflare worker publish failing with 10019 error - cloudflare

We are trying to deploy a Cloudflare Worker using wrangler. While trying to publish it to a new zone, we are getting the error below:
2022-09-16 05:24:16.850 ⠤ Configuring routes...
2022-09-16 05:24:16.853 Successfully published your script to
2022-09-16 05:24:16.855 https://*/cfw/admin-api/* => creation failed: Code 10019: workers.api.error.invalid_route_script_missing
Here is the wrangler-template.toml
name = "cf_worker_api_interceptor"
type = "javascript"
routes = [ "https://*/cfw/admin-api/*" ]
account_id ="${CLOUDFLARE_ACCOUNT_ID}"
zone_id="${CLOUDFLARE_ZONE_ID}"
workers_dev = false
compatibility_date = "2021-11-15"
compatibility_flags = []
[build]
command = "npm run build"
[build.upload]
format = "modules"
dir = "dist"
main = "./index.mjs"
[vars]
PROXY_URL ="prod-domain.com/"
[env.production]
vars = { PROXY_URL ="prod-domain.com/" }
routes = [ "https://*/cfw/admin-api/*" ]
[env.development]
vars = { PROXY_URL ="dev-domain.com/" }
routes = [ "https://*/cfw/admin-api/*" ]
[env.qa]
vars = { PROXY_URL = "dev-domain.com/" } # Sample value
routes = [ "https://*/cfw/admin-api/*" ]
We are using wrangler publish --env development for publishing the worker.
What could be going wrong here?
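As background, wrangler evaluates route patterns against the zone referenced by zone_id, and Cloudflare's documented examples anchor the pattern to that zone's hostname (for example example.com/* or *.example.com/*) rather than a wildcard-only host. Purely as an illustration, with example.com standing in for whatever hostname the zone actually serves (an assumption, not something taken from the question), zone-scoped routes would look like this:
# Hypothetical sketch only; "example.com" is a placeholder for the hostname
# covered by CLOUDFLARE_ZONE_ID.
routes = [
  "example.com/cfw/admin-api/*",
  "*.example.com/cfw/admin-api/*"
]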

how to use react-monaco-editor together with worker-loader?

Describe the bug
react-monaco-editor cannot be used together with worker-loader.
To Reproduce
Create a new TypeScript app with CRA and run a minimal react-monaco-editor demo (everything is fine).
Install worker-loader, add the config in config-overrides.js, and start the app.
example repo to reproduce
ERROR in ./node_modules/monaco-editor/esm/vs/editor/editor.worker.js
Module build failed (from ./node_modules/worker-loader/dist/cjs.js):
TypeError: Cannot read properties of undefined (reading 'replace')
at getDefaultChunkFilename (/Users//Documents/test/my-project/node_modules/worker-loader/dist/utils.js:23:24)
at Object.pitch (/Users//Documents/test/my-project/node_modules/worker-loader/dist/index.js:61:108)
Child vs/editor/editor compiled with 1 error
assets by status 1.27 KiB [cached] 1 asset
./node_modules/monaco-editor/esm/vs/language/json/json.worker.js 39 bytes [not cacheable] [built] [1 error]
ERROR in ./node_modules/monaco-editor/esm/vs/language/json/json.worker.js
Module build failed (from ./node_modules/worker-loader/dist/cjs.js):
TypeError: Cannot read properties of undefined (reading 'replace')
at getDefaultChunkFilename (/Users//Documents/test/my-project/node_modules/worker-loader/dist/utils.js:23:24)
at Object.pitch (/Users//Documents/test/my-project/node_modules/worker-loader/dist/index.js:61:108)
Child vs/language/json/jsonWorker compiled with 1 error
Details of my config-overrides.js:
const webpack = require('webpack');
const MonacoWebpackPlugin = require('monaco-editor-webpack-plugin');
module.exports = function override(config, env) {
  config.plugins.push(
    new MonacoWebpackPlugin({
      languages: ['json']
    })
  );
  config.stats = {
    children: true
  };
  config.module.rules.push(
    {
      test: /\.worker\.js$/,
      use: {
        loader: 'worker-loader',
        options: {
          inline: 'fallback',
          filename: '[contenthash].worker.js',
        },
      },
    },
    {
      test: /\.worker\.ts$/,
      use: [
        {
          loader: 'worker-loader',
          options: {
            inline: 'fallback',
            filename: '[contenthash].worker.js',
          },
        },
        'ts-loader',
      ],
    },
  );
  return config;
};
Environment (please complete the following information):
OS: macOS
Browser: Chrome
Bundler: webpack 5 (CRA)
[ ] I will try to send a pull request to fix this issue.
I have solved it. It does not seem to be a problem with react-monaco-editor or monaco-editor.
The problem is between worker-loader and monaco-editor-webpack-plugin.
I temporarily updated my worker-loader config to match workers in my src folder only, which solved the problem.
It would be better to figure out how to configure this in monaco-editor-webpack-plugin, because it builds files containing monaco-editor's workers without a content hash.
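For reference, the temporary workaround described above could look roughly like this in config-overrides.js. This is only a sketch; the include path assumes the app's own workers live under src/ (the MonacoWebpackPlugin setup is unchanged and omitted here):
const path = require('path');

// Sketch: restrict worker-loader to the app's own worker files so it no longer
// processes monaco-editor's workers inside node_modules.
module.exports = function override(config, env) {
  const appSrc = path.resolve(__dirname, 'src'); // assumption: app workers live in src/
  config.module.rules.push(
    {
      test: /\.worker\.js$/,
      include: appSrc,
      use: {
        loader: 'worker-loader',
        options: { inline: 'fallback', filename: '[contenthash].worker.js' },
      },
    },
    {
      test: /\.worker\.ts$/,
      include: appSrc,
      use: [
        {
          loader: 'worker-loader',
          options: { inline: 'fallback', filename: '[contenthash].worker.js' },
        },
        'ts-loader',
      ],
    },
  );
  return config;
};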

Configuring Gas Price when running Code Coverage

I'm currently using the following code coverage tool found here. I'm trying to set the gas price to 0 when configuring ganache in the .solcover.js file. I've used the following:
module.exports = {
  client: require("ganache-cli"),
  providerOptions: { gasPrice: "0" }
};
and
module.exports = {
  client: require("ganache-cli"),
  providerOptions: { options: { gasPrice: "0" } }
};
When I run npx truffle run coverage, in my test I print out ganache like:
const g = require("ganache-cli");
.
.
.
console.log(g.provider().options.gasPrice);
But I still get the default hex value of 0x77359400 instead of 0. I'm not too sure where I'm going wrong.
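One thing that may be worth checking (an observation on my part, not something from the post): require("ganache-cli") followed by .provider() constructs a brand-new provider with default options, so printing its gasPrice will show ganache's default regardless of what .solcover.js passed to the client that the coverage tool actually launched. A hedged sketch of reading the gas price from the chain the tests are really connected to, assuming a standard Truffle test where web3 is injected:
// Sketch: ask the node the tests are actually talking to for its gas price,
// rather than inspecting a freshly constructed ganache provider.
contract("gas price check", () => {
  it("prints the effective gas price", async () => {
    const gasPrice = await web3.eth.getGasPrice(); // returns a string in wei
    console.log("effective gas price:", gasPrice);
  });
});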

Terraform Gives errors Failed to load plugin schemas

I have the code below, which I am using to create an S3 bucket and a CloudFront distribution in AWS through Terraform, but Terraform gives an error.
I am using the latest version of the Terraform CLI executable for Windows.
main.tf
Please find the code of the main.tf file below:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.70.0"
    }
  }
}
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "${var.aws_region}"
}
resource "aws_s3_bucket" "mybucket" {
  bucket = "${var.bucket_name}"
  acl    = "public-read"
  website {
    redirect_all_requests_to = "index.html"
  }
  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["*"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
  policy = <<EOF
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.bucket_name}/*"
    }
  ]
}
EOF
}
resource "aws_cloudfront_distribution" "distribution" {
  origin {
    domain_name = "${aws_s3_bucket.mybucket.website_endpoint}"
    origin_id   = "S3-${aws_s3_bucket.mybucket.bucket}"
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }
  default_root_object = "index.html"
  enabled             = true
  custom_error_response {
    error_caching_min_ttl = 3000
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
  }
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${aws_s3_bucket.mybucket.bucket}"
    forwarded_values {
      query_string = true
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }
  # Restricts who is able to access this content
  restrictions {
    geo_restriction {
      # type of restriction, blacklist, whitelist or none
      restriction_type = "none"
    }
  }
  # SSL certificate for the service.
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
Please find the error message below:
Error: Failed to load plugin schemas
│
│ Error while loading schemas for plugin components: Failed to obtain provider schema: Could not load the schema for provider registry.terraform.io/hashicorp/aws: failed to retrieve schema
│ from provider "registry.terraform.io/hashicorp/aws": Plugin did not respond: The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).GetProviderSchema call. The
│ plugin logs may contain more details...
Please help to resolve the issue; I'm new to Terraform.
P.S. This error was generated while running terraform plan.
I had faced the same issue, though the error was a little different; I was following the HashiCups provider example.
It happens when initialization fails to upgrade or detect corrupt cached providers, or when you have changed the provider version but terraform init finds the old version in the cache and decides to go with it. Delete the .terraform directory and the lock file, then initialize again with terraform init -upgrade.
If you're running it on an Apple M1 chip, you might also need to set this:
export GODEBUG=asyncpreemptoff=1;
https://discuss.hashicorp.com/t/terraform-aws-provider-panic-plugin-did-not-respond/23396
https://github.com/hashicorp/terraform/issues/26104
I'm a beginner at Terraform and I had the same problem, so I hope this can help. I had the same "Failed to load plugin schemas" error and had made a mess of the .terraform.lock.hcl and .terraform files.
Here's what I did:
I created a new project at the path where I installed terraform and aws, which was at "desktop".
I don't know if that was necessary or not.
Then I created two new files, provider.tf and main.tf, and typed terraform init in the terminal.
After that, it automatically generates these files:
.terraform
.terraform.lock.hcl
I didn't change anything in them, then ran terraform plan in the terminal.
Here's my provider.tf:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  region                   = "us-east-1"
  shared_config_files      = ["~/.aws/config"]
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "terraform-user"
}
And here's main.tf:
resource "aws_vpc" "vpc" {
cidr_block = "10.123.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "vpc"
}
}
Here's another problem you may face too:
If, after you create provider.tf and main.tf, running terraform init in the terminal gives you "no change", you should save the files before running the commands in the terminal.
I had a similar problem today, where I was getting the same error with terraform plan, apply or destroy commands.
After looking for easy solutions in vain, I decided to run terraform init and it solved the error. I was able to run terraform destroy to successfully destroy 52 resources. Phew!
I was facing the same problem and for me the issue was that I was running terraform plan from /home where the partition mount point had "noexec" enabled.
You can simply run your Terraform from somewhere else or remove "noexec" from the current mount point:
Edit /etc/fstab with vi to remove the noexec flag, changing
/dev/mapper/VG00-LVhome /home ext4 defaults,noexec,nosuid
To
/dev/mapper/VG00-LVhome /home ext4 defaults,nosuid
And remount /home with mount -o remount /home
I hope it helps.
I also got this issue. I previously used Terraform's time_static resource (link here), ran an apply, then no longer needed it and removed it from my Terraform code, then tried to run a plan and got this error.
What fixed it for me was running terraform state list, finding the time_static resource in my Terraform state, running terraform state rm on that resource, deleting my .terraform directory, then running terraform init followed by terraform plan, and this worked.

PM2 environmental variables with vue

I have a web application using Vue, built with vue-cli for the front end, and I want to switch to PM2's static page serve. Everything is working fine, but I noticed PM2 can also declare the environment variables for the Vue app via the ecosystem.config.js file. Trying to set those in that file, I am not able to get it to work. Specifically, it's my Vue app API URL. It works fine if I keep my .env file in the directory with the same variable defined, but once I remove that, kill PM2, and restart the app, the app does not work, leading me to believe the env variables set in the ecosystem.config.js file are not working.
This is what I currently have in my ecosystem.config.js file:
module.exports = {
  "apps": [{
    "name": "AppName",
    "script": "serve",
    "exec_mode": "cluster",
    "instances": 2,
    "env_develop": {
      "PM2_SERVE_PATH": 'dist',
      "PM2_SERVE_PORT": 8084,
      "PM2_SERVE_SPA": 'true',
      "PM2_SERVE_HOMEPAGE": '/index.html',
      "NODE_ENV": 'develop',
      "VUE_APP_API_URL": "http://myappurl.com/api/",
    }
  }],
  "deploy": {
    "develop": {
      "user": "username",
      "key": "/home/user/.ssh/key.priv",
      "host": "localhost",
      "ref": "origin/develop",
      "repo": "gitrepourl",
      "path": "/home/user/FE",
      "pre-deploy-local": "",
      "post-deploy": "npm ci && npm run build && pm2 startOrReload ecosystem.config.js --env develop",
      "pre-setup": ""
    }
  }
}
Any insight on what I may be doing wrong?
Thanks
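One possible angle to check (an observation on my part, not part of the original question): Vue CLI inlines VUE_APP_* variables into the bundle at build time, so they need to be present in the environment of npm run build rather than only in the PM2 process that serves the static dist folder. A minimal sketch of the consuming side, assuming a standard Vue CLI app:
// Somewhere in the Vue app (sketch). At build time Vue CLI substitutes
// process.env.VUE_APP_API_URL with a literal string; the served dist/ bundle
// does not read it from the PM2 process environment at runtime.
const apiUrl = process.env.VUE_APP_API_URL;

export default {
  name: "ApiUrlProbe", // hypothetical component name, for illustration only
  created() {
    console.log("API base URL baked into the bundle:", apiUrl);
  },
};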

botium project in eclipse with multiple botium.json not working

I have set up a Botium project according to the directions given in https://chatbotsmagazine.com/5-steps-automated-testing-of-chatbots-in-eclipse-ef4c3dcaf233 and it's working fine with a single botium.json file.
But when I try to set up multiple connectors together, e.g.:
1) botium_dialog.json
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "jokes",
      "CONTAINERMODE": "dialogflow",
      "DIALOGFLOW_PROJECT_ID": "###",
      "DIALOGFLOW_CLIENT_EMAIL": "###",
      "DIALOGFLOW_PRIVATE_KEY": "###",
      "DIALOGFLOW_USE_INTENT": false
    }
  }
}
2) botium_watson.json
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "IBM Watson Conversation Sample",
      "SCRIPTING_UTTEXPANSION_MODE": "all",
      "SCRIPTING_FORMAT": "xlsx",
      "SCRIPTING_XLSX_STARTROW": 2,
      "SCRIPTING_XLSX_STARTCOL": 1,
      "CONTAINERMODE": "watson",
      "WATSON_USER": "#",
      "WATSON_PASSWORD": "#",
      "WATSON_WORKSPACE_ID": "#"
    }
  }
}
in the same project, but running one at a time using
mocha --reporter mochawesome --reporter-options
\"reportDir=reportsDialog,reportFilename=index.html,code=false\"
--convos ./spec/convo/dialog --config botium_dialog.json --exit spec "
it's giving the error:
Error: Capability 'CONTAINERMODE' missing
at BotDriver._getContainer (node_modules\botium-core\src\BotDriver.js:316:13)
at async.series (node_modules\botium-core\src\BotDriver.js:154:30)
The "--convos" and the "--config" command line parameters are actually for the Botium CLI, not for mocha. You either switch your test scripts to Botium CLI, or you configure Botium in a way to use several configuration files and several convo directories. My recommendation would be to pack each section in an own subdirectory - so you have a "botium_dialog" and a "botium_watson" directory, each with it's own package.json, botium.json, spec/convo folders etc.
With some configuration changes, it is also possible to use your current folder structure.
Add multiple spec files in the spec folder:
botium_dialog.spec.js:
const BotiumBindings = require('botium-bindings')
const bb = new BotiumBindings({ convodirs: [ './spec/convo/dialog' ] })
BotiumBindings.helper.mocha().setupMochaTestSuite({ bb })
botium_watson.spec.js:
const BotiumBindings = require('botium-bindings')
const bb = new BotiumBindings({ convodirs: [ './spec/convo/watson' ] })
BotiumBindings.helper.mocha().setupMochaTestSuite({ bb })
Add multiple test scripts to your package.json:
package.json:
...
"scripts": {
"test_dialog": "BOTIUM_CONFIG=botium_dialog.json mocha --reporter spec --exit spec/botium_dialog.spec.js",
"test_watson": "BOTIUM_CONFIG=botium_watson.json mocha --reporter spec --exit spec/botium_watson.spec.js"
}
...
Run both of the test scripts
For example:
npm run test_dialog
npm run test_watson
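A small portability note on the scripts above (an assumption about the environment, not something from the original answer): the BOTIUM_CONFIG=... prefix only works in POSIX shells. If the tests also need to run on Windows, the cross-env package (added as a dev dependency) sets the variable in a shell-independent way, e.g.:
"scripts": {
  "test_dialog": "cross-env BOTIUM_CONFIG=botium_dialog.json mocha --reporter spec --exit spec/botium_dialog.spec.js",
  "test_watson": "cross-env BOTIUM_CONFIG=botium_watson.json mocha --reporter spec --exit spec/botium_watson.spec.js"
}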