I am attempting to reference the assigned generatorName property when setting outputDir.
I tried referencing the generatorName property using the same syntax as other task properties (e.g. $buildDir), and also tried fully qualifying the property name as openApiGenerator.generatorName.
openApiGenerate {
verbose = false
generatorName = "html2" // assignment to property
inputSpec = "$buildDir/swagger/testing.yml".toString()
//outputDir = "$buildDir/generated".toString()
outputDir = "$buildDir/generated/$generatorName".toString() // fails
apiPackage = "org.openapi.example.api"
invokerPackage = "org.openapi.example.invoker"
modelPackage = "org.openapi.example.model"
// debugging code
println " buildDir: $buildDir".toString()
println " generatorName: $generatorName".toString() // this fails
}
The output from the debugging code shows the failure to reference the generatorName property:
> Configure project :
buildDir: C:\Users\jgunchy\repos\testingproject\build
generatorName: property(class java.lang.String, fixed(class java.lang.String, html2))
generatorName is a lazy Property<String> (Gradle's Provider API), not a plain string. You should be able to access the underlying string using .get(), like this:
openApiGenerate {
verbose = false
generatorName = "html2"
inputSpec = "$buildDir/swagger/testing.yml".toString()
outputDir = "$buildDir/generated/${generatorName.get()}".toString()
apiPackage = "org.openapi.example.api"
invokerPackage = "org.openapi.example.invoker"
modelPackage = "org.openapi.example.model"
}
Another option would be to use a project property rather than reading the extension's property directly. For instance, add to gradle.properties:
generatorName=html2
Then, your configuration would look like this:
openApiGenerate {
verbose = false
generatorName = project.ext.generatorName
inputSpec = "$buildDir/swagger/testing.yml".toString()
outputDir = "$buildDir/${project.ext.generatorName}".toString()
apiPackage = "org.openapi.example.api"
invokerPackage = "org.openapi.example.invoker"
modelPackage = "org.openapi.example.model"
}
$buildDir is a getter on the Project instance that returns a File, whose toString() happens to output the file path, which is why it behaves differently from the lazy Property.
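For illustration, here is a quick check you could drop inside the openApiGenerate block after generatorName has been assigned (a sketch using the Groovy DSL; the comments show what I would expect, not captured output):
// Sketch: inspect what each reference actually is
println buildDir.getClass().getName()      // java.io.File - its toString() is just the path
println generatorName.getClass().getName() // a Property<String> implementation - its toString() is not the value
println generatorName.get()                // html2 - the resolved String, safe to use in outputDir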
I have the following Terraform code snippet where I'm trying to use a self_link in the subnetwork's network argument that references the network resource.
main.tf
resource "google_compute_network" "demo-vpc-network" {
auto_create_subnetworks = "false"
delete_default_routes_on_create = "false"
name = var.GCP_COMPUTE_NETWORK_NAME
project = var.GCP_PROJECT_NAME
routing_mode = "REGIONAL"
}
resource "google_compute_subnetwork" "demo-subnet" {
ip_cidr_range = "10.200.0.0/24"
name = "kubernetes"
network = google_compute_network.vpc_network.self.link
private_ip_google_access = "false"
project = var.GCP_PROJECT_NAME
region = "us-west1"
}
However, I get the following error.
Error: Reference to undeclared resource
on main.tf line 77, in resource "google_compute_subnetwork" "demo-subnet":
77: network = google_compute_network.vpc_network.self.link
A managed resource "google_compute_network" "vpc_network" has not been
declared in the root module.
google_compute_network.vpc_network.self.link
won't work because google_compute_network.vpc_network doesn't exist.
It's easy to fix because google_compute_network.demo-vpc-network does exist.
Update: Also, as you've noted in your comment self-link (with a hyphen) won't work and needs to be self_link (with an underscore).
Here's the second resource block with the bug fixed:
resource "google_compute_subnetwork" "demo-subnet" {
ip_cidr_range = "10.200.0.0/24"
name = "kubernetes"
network = google_compute_network.demo-vpc-network.self_link
private_ip_google_access = "false"
project = var.GCP_PROJECT_NAME
region = "us-west1"
}
Alternatively, that original reference would resolve if the resource for the main network were declared as:
resource "google_compute_network" "vpc_network"
You could then still give the network its hyphenated name with the name argument:
name = "demo-vpc-network"
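A sketch of that alternative (only the resource label changes; the values are copied from the question):
resource "google_compute_network" "vpc_network" {
  name                            = "demo-vpc-network"
  auto_create_subnetworks         = "false"
  delete_default_routes_on_create = "false"
  project                         = var.GCP_PROJECT_NAME
  routing_mode                    = "REGIONAL"
}
resource "google_compute_subnetwork" "demo-subnet" {
  ip_cidr_range             = "10.200.0.0/24"
  name                      = "kubernetes"
  network                   = google_compute_network.vpc_network.self_link # the original reference now resolves
  private_ip_google_access  = "false"
  project                   = var.GCP_PROJECT_NAME
  region                    = "us-west1"
}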
Check here for more details
I'm using Tf 0.12. I have an s3 module that outputs a list of buckets, that I would like to use as an input for a cloudfront module that I've got.
The problem I'm facing is that when I run terraform plan/apply I get the following error: count.index is 0 | var.redirect-buckets is tuple with 1 element.
I've tried all kinds of splats, moving the count.index call around, to no avail. My sample code is below.
module.s3
resource "aws_s3_bucket" "redirect" {
count = length(var.redirects)
bucket = element(var.redirects, count.index)
}
module.s3.output
output "redirect-buckets" {
value = [aws_s3_bucket.redirect.*]
}
module.cdn.variables
...
variable "redirect-buckets" {
description = "Redirect buckets"
default = []
}
....
The error is thrown down here
module.cdn
resource "aws_cloudfront_distribution" "redirect" {
count = length(var.redirect-buckets)
default_cache_behavior {
// Line below throws the error, one amongst many
target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
....
//Another error throwing line
target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
Any help is greatly appreciated.
module.s3
resource "aws_s3_bucket" "redirects" {
for_each = var.redirects
bucket = each.value
}
Your variable definition for redirects needs to change to something like this:
variable "redirects" {
type = map(string)
}
module.s3.output:
output "redirect_buckets" {
value = aws_s3_bucket.redirects
}
module.cdn
resource "aws_cloudfront_distribution" "redirects" {
for_each = var.redirect_buckets
default_cache_behavior {
target_origin_id = "cloudfront-distribution-origin-${each.value.id}.s3.amazonaws.com"
}
Your variable definition for redirect-buckets needs to change to something like this (note the underscores; kebab-case names are going to behave strangely in some cases and are not worth it):
variable "redirect_buckets" {
type = map(object(
{
id = string
}
))
}
root module
module "s3" {
source = "../s3" // or whatever the path is
redirects = {
site1 = "some-bucket-name"
site2 = "some-other-bucket"
}
}
module "cdn" {
source = "../cdn" // or whatever the path is
redirects_buckets = module.s3.redirect_buckets
}
From an example perspective this is interesting, but you don't need to use outputs from S3 here, since you could just hand the cdn module the same map of redirects and use for_each on those.
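A sketch of that simplification (module paths and names are illustrative; the cdn module's variable becomes the same map(string) of redirects rather than the bucket objects):
# root module - define the map once and hand it to both modules
locals {
  redirects = {
    site1 = "some-bucket-name"
    site2 = "some-other-bucket"
  }
}
module "s3" {
  source    = "../s3"
  redirects = local.redirects
}
module "cdn" {
  source    = "../cdn"
  redirects = local.redirects
}
# module.cdn - iterate the same map; each.value is the bucket name
resource "aws_cloudfront_distribution" "redirects" {
  for_each = var.redirects
  default_cache_behavior {
    target_origin_id = "cloudfront-distribution-origin-${each.value}.s3.amazonaws.com"
  }
}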
There is a tool called Terragrunt which wraps Terraform and supports dependencies.
https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#dependencies-between-modules
I'm trying to use compiled proto models in my Kotlin code. The project is managed by Bazel, so I reproduced the problem with a simple "HelloWorld" project.
WORKSPACE
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
RULES_KOTLIN_VERSION = "9051eb053f9c958440603d557316a6e9fda14687"
http_archive(
name = "io_bazel_rules_kotlin",
sha256 = "c36e71eec84c0e17dd098143a9d93d5720e81b4db32bceaf2daf939252352727",
strip_prefix = "rules_kotlin-%s" % RULES_KOTLIN_VERSION,
url = "https://github.com/bazelbuild/rules_kotlin/archive/%s.tar.gz" % RULES_KOTLIN_VERSION,
)
load("#io_bazel_rules_kotlin//kotlin:kotlin.bzl", "kotlin_repositories", "kt_register_toolchains")
kotlin_repositories()
kt_register_toolchains()
http_archive(
name = "com_google_protobuf",
strip_prefix = "protobuf-master",
urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("#com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
BUILD
load("#io_bazel_rules_kotlin//kotlin:kotlin.bzl", "kt_jvm_library")
load("#rules_java//java:defs.bzl", "java_binary", "java_lite_proto_library", "java_proto_library")
load("#rules_proto//proto:defs.bzl", "proto_library")
package(default_visibility = ["//visibility:public"])
proto_library(
name = "clicklocation_proto",
srcs = ["ClickLocation.proto"],
)
java_proto_library(
name = "clicklocation_java_lite_proto",
deps = [":clicklocation_proto"],
)
kt_jvm_library(
name = "app_lib",
srcs = ["Main.kt"],
deps = [":clicklocation_java_lite_proto"]
)
java_binary(
name = "myapp",
main_class = "MyApp",
runtime_deps = [":app_lib"],
)
Proto file
syntax = "proto2";
package objectrecognition;
option java_package = "com.kshmax.objectrecognition.proto";
option java_outer_classname = "ClickLocationProtos";
message ClickLocation {
required float x = 1;
required float y = 2;
}
Main.kt
import com.kshmax.objectrecognition.proto.ClickLocationProtos

class MyApp {
    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            val location = ClickLocationProtos.ClickLocation.newBuilder()
            location.x = 0.1f
            location.y = 0.2f
            location.build()
        }
    }
}
I have done it as described in the protocolbuffers/protobuf repository examples.
But I got an error:
error: supertypes of the following classes cannot be resolved. Please
make sure you have the required dependencies in the classpath: class
com.kshmax.objectrecognition.proto.ClickLocationProtos.ClickLocation,
unresolved supertypes: com.google.protobuf.GeneratedMessageV3 class
com.kshmax.objectrecognition.proto.ClickLocationProtos.ClickLocationOrBuilder,
unresolved supertypes: com.google.protobuf.MessageOrBuilder class
com.kshmax.objectrecognition.proto.ClickLocationProtos.ClickLocation.Builder,
unresolved supertypes: com.google.protobuf.GeneratedMessageV3.Builder
What am I doing wrong?
The code that uses protobuf needs to depend on the protobuf library itself.
In theory, this could be exported by the rule, but since it doesn't work, I would add a dependency on protobuf directly, similarly to BuildEventServiceTest:
deps = [
    "@com_google_protobuf//:protobuf_java",
    ":my_foo_java_proto",
]
or use java_lite_proto_library with "@com_google_protobuf//:protobuf_java_lite", where the remote repository was defined in the WORKSPACE as:
http_archive(
    name = "com_google_protobuf",
    patch_args = ["-p1"],
    patches = ["@io_bazel//third_party/protobuf:3.11.3.patch"],
    patch_cmds = EXPORT_WORKSPACE_IN_BUILD_FILE,
    patch_cmds_win = EXPORT_WORKSPACE_IN_BUILD_FILE_WIN,
    sha256 = "cf754718b0aa945b00550ed7962ddc167167bd922b842199eeb6505e6f344852",
    strip_prefix = "protobuf-3.11.3",
    urls = [
        "https://mirror.bazel.build/github.com/protocolbuffers/protobuf/archive/v3.11.3.tar.gz",
        "https://github.com/protocolbuffers/protobuf/archive/v3.11.3.tar.gz",
    ],
)
(I'm not sure you need the patch, and there might be a more recent version; look at the example)
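Applied to the BUILD file from the question, a minimal sketch of the change (keeping the full java_proto_library; for the lite variant you would swap in java_lite_proto_library and @com_google_protobuf//:protobuf_java_lite):
kt_jvm_library(
    name = "app_lib",
    srcs = ["Main.kt"],
    deps = [
        ":clicklocation_java_lite_proto",
        # add the protobuf runtime so the generated classes' supertypes resolve
        "@com_google_protobuf//:protobuf_java",
    ],
)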
There is something that I am not understanding about Terraform variables. I am getting prompted for two variables when I run "terraform apply". I don't think I should be prompted for any, as I defined a terraform.tfvars. I am getting prompted for applicationNamespace and staticIpName, but I am not sure why. What am I misunderstanding?
I created a file (terraform.tfvars):
#--------------------------------------------------------------
# General
#--------------------------------------------------------------
cluster = "reddiyo-development"
project = "<MYPROJECTID>"
region = "us-central1"
credentialsLocation = "<MYCERTLOCATION>"
bucket = "reddiyo-terraform-state"
vpcLocation = "us-central1-b"
network = "default"
staticIpName = "dev-env-ip"
#--------------------------------------------------------------
# Specific To NODE
#--------------------------------------------------------------
terraformPrefix = "development"
mainNodeName = "primary-pool"
nodeMachineType = "n1-standard-1"
#--------------------------------------------------------------
# Specific To Application
#--------------------------------------------------------------
applicationNamespace = "application"
I also have a terraform script:
variable "cluster" {}
variable "project" {}
variable "region" {}
variable "bucket" {}
variable "terraformPrefix" {}
variable "mainNodeName" {}
variable "vpcLocation" {}
variable "nodeMachineType" {}
variable "credentialsLocation" {}
variable "network" {}
variable "applicationNamespace" {}
variable "staticIpName" {}
data "terraform_remote_state" "remote" {
backend = "gcs"
config = {
bucket = "${var.bucket}"
prefix = "${var.terraformPrefix}"
}
}
provider "google" {
//This needs to be updated to wherever you put your credentials
credentials = "${file("${var.credentialsLocation}")}"
project = "${var.project}"
region = "${var.region}"
}
resource "google_container_cluster" "gke-cluster" {
name = "${var.cluster}"
network = "${var.network}"
location = "${var.vpcLocation}"
remove_default_node_pool = true
# node_pool {
# name = "${var.mainNodeName}"
# }
node_locations = [
"us-central1-a",
"us-central1-f"
]
//Get your credentials for the newly created cluster so that microservices can be deployed
provisioner "local-exec" {
command = "gcloud config set project ${var.project}"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster} --zone ${var.vpcLocation}"
}
}
resource "google_container_node_pool" "primary_pool" {
name = "${var.mainNodeName}"
cluster = "${var.cluster}"
location = "${var.vpcLocation}"
node_count = "2"
node_config {
machine_type = "${var.nodeMachineType}"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append",
]
}
management {
auto_repair = true
auto_upgrade = true
}
autoscaling {
min_node_count = 2
max_node_count = 10
}
}
# //Reserve a Static IP
resource "google_compute_address" "ip_address" {
name = "${var.staticIpName}"
}
//Install Ambassador
module "ambassador" {
source = "modules/ambassador"
applicationNamespace = "${var.applicationNamespace}"
}
You can try to force it to read your variables by using:
terraform apply -var-file=<path_to_your_vars>
For reference, read below if anybody faces a similar issue.
"terraform.tfvars" is the default variable file name, from which Terraform will read variables.
If any other file name is used, it needs to be passed on the command line, e.g.: "terraform plan -var-file=whateverName.tfvars"
Also, the order in which Terraform loads variables:
Environment Variables
terraform.tfvars
terraform.tfvars.json
Any *.auto.tfvars or *.auto.tfvars.json files, in lexical order
Any -var and -var-file options on the command line, in the order they are provided
This used to work, but after some modifications by the other programmers it no longer does. I have this code in my Bootstrap:
protected function _initDatabase()
{
    $resource = $this->getPluginResource('multidb');
    $resource->init();
    Zend_Registry::set('gtap', $resource->getDb('gtap'));
    Zend_Registry::set('phpbb', $resource->getDb('phpbb'));
}
Upon loading, this error shows up:
Fatal error: Call to a member function init() on a non-object in
/var/www/gamebowl3/application/Bootstrap.php on line 105
My php.ini has this entry in its include_path:
.:/usr/share/php:/etc/apache2/libraries
and I can see that multidb.php is located in:
/etc/apache2/libraries/Zend/Application/Resource
Can somebody tell me what causes the error? Thanks!
I just found out that the problem was in application.ini. I added the newly-introduced multidb settings to the usual set of configs. Here they are:
;Gtap Database
resources.multidb.gtap.adapter = "PDO_MYSQL"
resources.multidb.gtap.host = "localhost"
resources.multidb.gtap.username = "root"
resources.multidb.gtap.password = "letmein1"
resources.multidb.gtap.dbname = "gtap"
resources.multidb.gtap.isDefaultTableAdapter = true
resources.multidb.gtap.default = true
;Forum Database
resources.multidb.phpbb.adapter = "PDO_MYSQL"
resources.multidb.phpbb.host = "localhost"
resources.multidb.phpbb.username = "root"
resources.multidb.phpbb.password = "letmein1"
resources.multidb.phpbb.dbname = "phpbb"
resources.multidb.phpbb.isDefaultTableAdapter = false
Also, make sure you have the latest Zend Framework Library and add it to PHP's include path. That should fix everything up.
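For example, a minimal sketch of prepending the library directory to the include path from the front controller (the path is taken from the question; adjust it to wherever your Zend/ directory actually lives):
// public/index.php - make sure the Zend Framework library is on the include path
set_include_path(implode(PATH_SEPARATOR, array(
    '/etc/apache2/libraries', // directory containing Zend/, per the question
    get_include_path(),
)));
require_once 'Zend/Application.php';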