Substitutions not occurring with Typesafe Config

When I use Config.withValue to create an updated config, substitutions are not re-evaluated, even with a call to resolve:
application.conf:
zooKeeperAddr = "localhost:2181"
zooKeeperAddr2 = ${zooKeeperAddr}
Application code:
val config = ConfigFactory.load()
  .withValue("zooKeeperAddr", ConfigValueFactory.fromAnyRef("abc"))
  .resolve
val zooKeeperAddr = config.getAnyRef("zooKeeperAddr")
val zooKeeperAddr2 = config.getAnyRef("zooKeeperAddr2")
println(s"zooKeeperAddr, zooKeeperAddr2 is $zooKeeperAddr, $zooKeeperAddr2")
I expect, of course, to see
zooKeeperAddr, zooKeeperAddr2 is abc, abc
But what I see instead is:
zooKeeperAddr, zooKeeperAddr2 is abc, localhost:2181
How can I get substitutions re-evaluated?
(The larger issue is that I'm trying to inject command-line arguments, in particular Twitter Module Flags, into the Typesafe Config. Perhaps there's a better way to achieve this goal?
My actual code is:
val config = flag.getAll(false).foldLeft(ConfigFactory.load()) {
  case (conf, f) if f.isDefined => conf.withValue(f.name, ConfigValueFactory.fromAnyRef(f.get.get))
  case (conf, _)                => conf
}.resolve
)

So I (the OP) ended up doing the following:
val config = flag.getAll(false).foldLeft(ConfigFactory.empty()) {
  case (conf, f) if f.isDefined => conf.withValue(f.name, ConfigValueFactory.fromAnyRef(f.get.get))
  case (conf, _)                => conf
}
  .withFallback(ConfigFactory.defaultOverrides())
  .withFallback(ConfigFactory.defaultApplication())
  .withFallback(ConfigFactory.defaultReference())
  .resolve
flag.getAll returns an Iterable[com.twitter.app.Flag]; for each flag that isDefined, we add it to an initially empty config (ConfigFactory.empty()).
Then we withFallback to, in order, the default overrides (the system properties), the application config (application.conf), and the default reference (which should include, I hope, all reference.confs in all jars).
withFallback, according to its documentation, "Returns a new value computed by merging this value with another, with keys in this value "winning" over the other one."
Finally, we resolve.
This seems to propagate the substitutions as I want, but I can't help but think the Config API provides an easier way to do this.
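For what it's worth, here is a minimal sketch of why the layered approach resolves the substitution while withValue-after-load does not: as far as I can tell, ConfigFactory.load() hands back an already-resolved config, so a later resolve has nothing left to substitute; merging the layers first and resolving once at the end lets the substitution see the override. (parseString is used here only to keep the sketch self-contained; the keys mirror the application.conf above.)

import com.typesafe.config.ConfigFactory

val overrides = ConfigFactory.parseString("zooKeeperAddr = abc")
val base = ConfigFactory.parseString(
  """
    |zooKeeperAddr = "localhost:2181"
    |zooKeeperAddr2 = ${zooKeeperAddr}
  """.stripMargin)

// Merge first (overrides win), then resolve once over the merged tree.
val merged = overrides.withFallback(base).resolve()
println(merged.getString("zooKeeperAddr2")) // prints "abc"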

Automatically determine values for Terraform Azure Private endpoint module

I need help with a Terraform module that I've created. It works perfectly, but I need to add some automation.
I created a module which creates multiple private endpoints, but I always have to fill in the variable values manually.
This is the module:
resource "azurerm_private_endpoint" "endpoint" {
for_each = try({ for endpoint in var.endpoints : endpoint.name => endpoint }, toset([]))
name = each.key
location = var.location
resource_group_name = var.resource_group_name
subnet_id = each.value.subnet_id
dynamic "private_service_connection" {
for_each = each.value.private_service_connection
content {
name = each.key
private_connection_resource_id = private_service_connection.value.private_connection_resource_id
is_manual_connection = false
subresource_names = var.subresource_name ### see values on : https://learn.microsoft.com/fr-fr/azure/private-link/private-endpoint-overview#private-link-resource
}
}
lifecycle {
ignore_changes = [
private_dns_zone_group
]
}
tags = var.tags
}
I need to have:
1 - For the private endpoint name: it should be generated automatically as "pendp-(the subresource_name value in lower case)-(my resource name, e.g. a MySQL server)".
2 - For the private connection name: the value should be generated automatically as "connection-(the subresource_name value in lower case)-(my resource name, e.g. a MySQL server)".
3 - Some automation to detect the subresource_name automatically (if I create a private endpoint for a blob, a MariaDB, or a MySQL server, the module should detect it).
terraform version:
terraform {
  required_version = "~> 1"
  required_providers {
    azurerm = "~> 3.0"
  }
}
The easiest way to combine values automatically would be to use the Terraform string join() function to join multiple strings together. For lower case strings, you can use the lower() function.
Some examples:
name = join("-", ["pendp", lower(var.subresource_name)])
...
name = join("-", ["connection", lower(var.subresource_name), lower(each.key)])
For your third rule, you want to use a conditional expression to determine if it's a blob, or mariadb, or mysqlserver.
In this example, we set an example_name local to some-blob-value if var.subresource_name starts with "blob", and to something-else otherwise:
locals {
  example_name = startswith(lower(var.subresource_name), "blob") ? "some-blob-value" : "something-else"
}
There are many options for writing a conditional that checks whether the value passed in matches what you expect and then determines a result from it. What exactly you want isn't clear from the question, but hopefully this gets you pointed in the right direction.
Terraform also has several helper functions that may help if you only need part of a string, such as startswith(), endswith(), or contains(), depending on your needs.

Scalding Unit Test - How to Write A Local File?

I work at a place where Scalding writes are augmented with a specific API to track dataset metadata. When converting from normal writes to these special writes, there are some intricacies with respect to Key/Value, TSV/CSV, Thrift ... datasets. I would like to verify that the binary file is the same prior to conversion and after conversion to the special API.
Given that I cannot share the specific API for the metadata-inclusive writes, I only ask: how can I write a unit test for the .write method on a TypedPipe?
implicit val timeZone: TimeZone = DateOps.UTC
implicit val dateParser: DateParser = DateParser.default
implicit def flowDef: FlowDef = new FlowDef()
implicit def mode: Mode = Local(true)

val fileStrPath = root + "/test"
println("writing data to " + fileStrPath)
TypedPipe
  .from(Seq[Long](1, 2, 3, 4, 5))
  // .map((x: Long) => { println(x.toString); System.out.flush(); x })
  .write(TypedTsv[Long](fileStrPath))
  .forceToDisk
The above doesn't seem to write anything to local (OSX) disk.
So I wonder if I need to use a MiniDFSCluster, something like this:
def setUpTempFolder: String = {
  val tempFolder = new TemporaryFolder
  tempFolder.create()
  tempFolder.getRoot.getAbsolutePath
}
val root: String = setUpTempFolder
println(s"root = $root")
val tempDir = Files.createTempDirectory(setUpTempFolder).toFile

val hdfsCluster: MiniDFSCluster = {
  val configuration = new Configuration()
  configuration.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, tempDir.getAbsolutePath)
  configuration.set("io.compression.codecs", classOf[LzopCodec].getName)
  new MiniDFSCluster.Builder(configuration)
    .manageNameDfsDirs(true)
    .manageDataDfsDirs(true)
    .format(true)
    .build()
}
hdfsCluster.waitClusterUp()
val fs: DistributedFileSystem = hdfsCluster.getFileSystem

val rootPath = new Path(root)
fs.mkdirs(rootPath)
However, my attempts to get this MiniCluster to work haven't panned out either - somehow I need to link the MiniCluster with the Scalding write.
Note: the Scalding JobTest framework for unit testing isn't going to work here, because the actual data written is sometimes wrapped in a bijection codec or set up with case class wrappers before the metadata-inclusive write APIs perform the writes.
Any ideas how I can write a local file (without using the Scalding REPL) with either Scalding alone or a MiniCluster? (If using the latter, I need a hint on how to read the file back.)
Answering my own question: there is an example of how to use a mini cluster for exactly this, reading and writing to HDFS. I will be able to cross-read my different writes and examine them. It lives in the tests for Scalding's TypedParquet type.
HadoopPlatformJobTest is an extension of JobTest that uses a MiniCluster.
With some hand-waving over detail in the link, the bulk of the code is this:
"TypedParquetTuple" should {
"read and write correctly" in {
import com.twitter.scalding.parquet.tuple.TestValues._
def toMap[T](i: Iterable[T]): Map[T, Int] = i.groupBy(identity).mapValues(_.size)
HadoopPlatformJobTest(new WriteToTypedParquetTupleJob(_), cluster)
.arg("output", "output1")
.sink[SampleClassB](TypedParquet[SampleClassB](Seq("output1"))) {
toMap(_) shouldBe toMap(values)
}
.run()
HadoopPlatformJobTest(new ReadWithFilterPredicateJob(_), cluster)
.arg("input", "output1")
.arg("output", "output2")
.sink[Boolean]("output2")(toMap(_) shouldBe toMap(values.filter(_.string == "B1").map(_.a.bool)))
.run()
}
}
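To cross-check the raw bytes written through the two code paths (the "hint how to read the file" part of the question), the standard Hadoop FileSystem API should be enough. A minimal sketch, assuming the fs and root values from the question's MiniDFSCluster setup, and assuming the output lands in a part-00000 file (the part-file name is an assumption):

import org.apache.hadoop.fs.Path
import scala.io.Source

// Open the file that the job wrote into the mini cluster and dump its lines.
val in = fs.open(new Path(root + "/test/part-00000")) // hypothetical output path
val lines =
  try Source.fromInputStream(in).getLines().toList
  finally in.close()
lines.foreach(println)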

Gatling use session.set and feed simultaneously

If I use .get("/***/quotes-${endPoint}/quotes?source=rtbp&userid=test&symbol=${pTypeSymbol}${authM}${pEqSymbol}"), then ${pEqSymbol} works, but ${pTypeSymbol} ends up as the literal string "${pEqSymbol}", which is incorrect. (That .get is just an example; the real one is in the code below.)
val getApiKeyScenario = scenario("getApiKey")
  .feed(getApiKeyData)
  .feed(pEqSymbolFeed)
  .feed(pOptionSymbol)
  .feed(pOtherSymbol)
  .exec(session => session
    .set("endPoint", "v1")
    .set("pTypeSymbol", "${pEqSymbol}")
    .set("authM", "&apikey=***********"))
  .exec(http("getApiKeyRequest")
    .get("/******/quotes-${endPoint}/quotes?source=rtbp&userid=test&symbol=${pTypeSymbol}${authM}")
    .check(status.is(200))
    .check(checkIf(doLogResponse) {
      bodyString.saveAs("pResponse")
    })
  )
  .doIf(doLogResponse) {
    logResponse()
  }
.doIf(doLogResponse) {
logResponse()
}
If I try .set("pTypeSymbol", pEqSymbolFeed.readRecords.head("pEqSymbol")), it ends up in a loop.
If I try .set("pTypeSymbol", s"${pEqSymbol.isDefined}"), I get "not found: value pEqSymbol".
If I try s"${pEqSymbol}", I also get "not found: value pEqSymbol".
The logs now show GET *******/quotes-v1/quotes?source=rtbp&userid=test&symbol=${pEqSymbol}&apikey=******
But it should be GET *******/quotes-v1/quotes?source=rtbp&userid=test&symbol=<my value from the feed>&apikey=******
Please read the official documentation:
This Expression Language only works on String values being passed to Gatling DSL methods. Such Strings are parsed only once, when the Gatling simulation is being instantiated.
For example queryParam("latitude", session => "${latitude}") wouldn’t work because the parameter is not a String, but a function that returns a String.
In your example, you don't need to copy pEqSymbol into pTypeSymbol, you can directly write:
.exec(session => session
  .set("endPoint", "v1")
  .set("authM", "&apikey=***********"))
.exec(http("getApiKeyRequest")
  .get("/******/quotes-${endPoint}/quotes?source=rtbp&userid=test&symbol=${pEqSymbol}${authM}")
)
But if you insist on copying, you have to use the Session API, e.g. inside an exec block:
.exec(session => session.set("pTypeSymbol", session("pEqSymbol").as[String]))

Why can I not pickle my case classes? What should I do to solve this manually next time?

Edit 2: Observations and questions
I am pretty sure, along with the commenter below, Justin, that the problem is due to an errant build.sbt configuration. However, this is the first time I have seen an errant build.sbt configuration that literally works for everything else except the picklers. Maybe that is because they use macros, and I as a rule avoid them.
Why would it matter whether Flow.merge is used vs Flow.map if the problem is with the build.sbt?
Suspicious build.sbt extract
lazy val server = project
  .dependsOn(sharedJvm, client)
Suspicious stack trace
So this is the top of the stack: it goes from a method I cannot find to the linking environment to the string encoding utils. Ok.
server java.lang.RuntimeException: stub
Huh? stub?
server at scala.sys.package$.error(package.scala:27)
server at scala.scalajs.runtime.package$.linkingInfo(package.scala:143)
server at scala.scalajs.runtime.package$.environmentInfo(package.scala:137)
HUH?
server at scala.scalajs.js.Dynamic$.global(Dynamic.scala:78)
???
server at boopickle.StringCodec$.encodeUTF8(StringCodec.scala:56)
Edit 1: My big and beautiful build.sbt might be the problem
What you cannot see is that I have organized, in my project folder:
JvmDependencies.scala which has regular Jvm dependencies
SjsDependencies.scala which has Def.settingsKeys of libraryDependencies on JsModuleIDs
WebJarDependencies.scala which has javascripts and css's
build.sbt
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  .configure(_.enablePlugins(ScalaJSPlugin))
  .settings(SjsDependencies.pickling.toSettingsDefinition(): _*)
  .settings(SjsDependencies.tagsAndDom.toSettingsDefinition(): _*)
  .settings(SjsDependencies.css.toSettingsDefinition(): _*)

lazy val sharedJvm = shared.jvm
lazy val sharedJs = shared.js

lazy val cmdlne = project
  .dependsOn(sharedJvm)
  .settings(
    libraryDependencies ++= (
      JvmDependencies.commandLine ++
      JvmDependencies.logging ++
      JvmDependencies.akka ++
      JvmDependencies.serialization
    )
  )

lazy val client = project
  .enablePlugins(ScalaJSPlugin, SbtWeb, SbtSass)
  .dependsOn(sharedJs)
  .settings(
    (SjsDependencies.shapeless ++ SjsDependencies.audiovideo ++ SjsDependencies.databind ++ SjsDependencies.functional ++ SjsDependencies.lensing ++ SjsDependencies.logging ++ SjsDependencies.reactive).toSettingsDefinition(),
    jsDependencies ++= WebjarDependencies.js,
    libraryDependencies ++= WebjarDependencies.notJs,
    persistLauncher in Compile := true
  )

lazy val server = project
  .dependsOn(sharedJvm, client)
  .enablePlugins(SbtNativePackager)
  .settings(
    copyWebJarResources := {
      streams.value.log("Copying webjar resources")
      val `Web Modules target directory` = (resourceManaged in Compile).value / "assets"
      val `Web Modules source directory` = (WebKeys.assets in Assets in client).value / "lib"
      final class UsefulFileFilter(acceptable: String*) extends FileFilter {
        // TODO ADJUST TO EXCLUDE JS MAP FILES
        import scala.collection.JavaConversions._
        def accept(file: File) = (file.isDirectory && FileUtils.listFiles(file, acceptable.toArray, true).nonEmpty) || acceptable.contains(file.ext) && !file.name.contains(".js.")
      }
      val `file filter` = new UsefulFileFilter("css", "scss", "sass", "less", "map")
      IO.createDirectory(`Web Modules target directory`)
      IO.copyDirectory(source = `Web Modules source directory`, target = `Web Modules target directory` / "script")
      FileUtils.copyDirectory(`Web Modules source directory`, `Web Modules target directory` / "style", `file filter`)
    },
    // run the copy after compile/assets but before managed resources
    copyWebJarResources <<= copyWebJarResources dependsOn (compile in Compile, WebKeys.assets in Compile in client, fastOptJS in Compile in client),
    managedResources in Compile <<= (managedResources in Compile) dependsOn copyWebJarResources,
    watchSources <++= (watchSources in client),
    resourceGenerators in Compile <+= Def.task {
      val files = ((crossTarget in (client, Compile)).value ** ("*.js" || "*.map")).get
      val mappings: Seq[(File, String)] = files pair rebase((crossTarget in (client, Compile)).value, ((resourceManaged in Compile).value / "assets/").getAbsolutePath)
      val map: Seq[(File, File)] = mappings.map { case (s, t) => (s, file(t)) }
      IO.copy(map).toSeq
    },
    reStart <<= reStart dependsOn (managedResources in Compile),
    libraryDependencies ++= (
      JvmDependencies.akka ++
      JvmDependencies.jarlocating ++
      JvmDependencies.functional ++
      JvmDependencies.serverPickling ++
      JvmDependencies.logging ++
      JvmDependencies.serialization ++
      JvmDependencies.testing
    )
  )
Edit 0: A very obscure chat thread has a guy saying what I am feeling (no, not "**** Scala", but this):
Mark Eibes @i-am-the-slime Oct 15 2015 09:37
@ochrons I'm still fighting. I can't seem to pickle anything anymore.
https://gitter.im/scala-js/scala-js/archives/2015/10/15
I have a rather simple requirement - I have one WebSocket route on an Akka HTTP server that is defined in AkkaServerLogEventToMessageHandler():
object AkkaServerLogEventToMessageHandler extends Directives {

  val sourceOfLogs =
    Source.actorPublisher[AkkaServerLogMessage](AkkaServerLogEventPublisher.props) map { event ⇒
      BinaryMessage(
        ByteString(
          Pickle.intoBytes[AkkaServerLogMessage](event)
        )
      )
    }

  def apply(): server.Route = {
    handleWebSocketMessages(
      Flow[Message].merge(sourceOfLogs)
    )
  }
}
This fits into a tiny set of routes in the most obvious way.
Now why is it that I cannot get boopickle, upickle, or prickle to serialize something as simple as this stupid case class?
sealed case class AkkaServerLogMessage(
  message: String,
  level: Int,
  timestamp: Long
)
No nesting
All primitive types
No generics
Only three of them
These all produced roughly the same error
Using all three of the common picklers to write
Using TextMessage instead of BinaryMessage and the corresponding upickle or prickle writeJs or whatever methods
Varying the case class down to nothing (nothing, as in no members)
Varying the input itself to the case class
Importing various permutations of Implicits and underscore stuff
... specifically, they gave me variations on the same stupid error (not the same error, but considerably similar)
server [ERROR] [04/21/2016 22:04:00.362] [app-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(app)] WebSocket handler failed with stub
server java.lang.RuntimeException: stub
server at scala.sys.package$.error(package.scala:27)
server at scala.scalajs.runtime.package$.linkingInfo(package.scala:143)
server at scala.scalajs.runtime.package$.environmentInfo(package.scala:137)
server at scala.scalajs.js.Dynamic$.global(Dynamic.scala:78)
server at boopickle.StringCodec$.encodeUTF8(StringCodec.scala:56)
server at boopickle.Encoder.writeString(Codecs.scala:338)
server at boopickle.BasicPicklers$StringPickler$.pickle(Pickler.scala:183)
server at boopickle.BasicPicklers$StringPickler$.pickle(Pickler.scala:134)
server at boopickle.PickleState.pickle(Pickler.scala:511)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1$Pickler$macro$1$2$.pickle(AkkaServerLogEventToMessageHandler.scala:35)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1$Pickler$macro$1$2$.pickle(AkkaServerLogEventToMessageHandler.scala:35)
server at boopickle.PickleImpl$.apply(Default.scala:70)
server at boopickle.PickleImpl$.intoBytes(Default.scala:75)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1.apply(AkkaServerLogEventToMessageHandler.scala:35)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1.apply(AkkaServerLogEventToMessageHandler.scala:31)
This worked
Not using Flow.merge (defeats the purpose - I want to keep sending output with the logs)
Using a static value
Other useless things
Appeal
Please let me know where and why I am stupid... I spent four hours on this problem today in different forms, and it is driving me nuts.
In your build.sbt, you have:
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  .configure(_.enablePlugins(ScalaJSPlugin))
Do not do this. You must not enable the Scala.js plugin on a cross-project, ever. This also adds it to the JVM side, which will wreak havoc. Most notably, this will cause %%% to resolve the Scala.js artifacts of your dependencies in the JVM project, and that is really bad. This is what causes your issue.
crossProject already adds the Scala.js plugin to the JS part, and only that one. So simply remove that enablePlugins line.
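In other words (a sketch only, reusing the settings from the question's build.sbt), the shared definition would become:

lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  .settings(SjsDependencies.pickling.toSettingsDefinition(): _*)
  .settings(SjsDependencies.tagsAndDom.toSettingsDefinition(): _*)
  .settings(SjsDependencies.css.toSettingsDefinition(): _*)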
Mystery solved. Thanks to @Justin du Coeur for pointing me in the right direction.
The reason boopickle in particular wasn't working was that, through the dependency chain, I was including both the Scala.js and the JVM version of boopickle in the server project.
I removed the server dependsOn for client and for sharedJs and also removed boopickle from the shared dependencies. Now it works.
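For concreteness, a hedged sketch of what the adjusted server project looks like after those removals (the settings block is unchanged from the build.sbt above and elided here):

lazy val server = project
  .dependsOn(sharedJvm) // no longer depends on the Scala.js `client` project
  .enablePlugins(SbtNativePackager)
  .settings(/* same settings as before */)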

Which OO programming style in R will be readable to a Python programmer?

I'm the author of the logging package on CRAN. I don't see myself as an R programmer, so I tried to make it as code-compatible with the Python standard logging package as I could, but now I have a question, and I hope it will give me the chance to learn some more R!
It's about hierarchical loggers. In Python I would create a logger and send it logging records:
l = logging.getLogger("some.lower.name")
l.debug("test")
l.info("some")
l.warn("say no")
In my R package, instead, you do not create a logger to which you send messages; you invoke a function where one of the arguments is the name of the logger. Something like:
logdebug("test", logger="some.lower.name")
loginfo("some", logger="some.lower.name")
logwarn("say no", logger="some.lower.name")
The problem is that you have to repeat the name of the logger each time you want to send it a logging message. I was thinking I might create a partially applied function object and invoke that instead, something like
logdebug <- curry(logging::logdebug, logger="some.lower.logger")
but then I would need to do so for all logging functions...
How would you R users approach this?
Sounds like a job for a reference class; see ?setRefClass, ?ReferenceClasses.
Logger <- setRefClass("Logger",
  fields = list(name = "character"),
  methods = list(
    log   = function(level, ...) { levellog(level, ..., logger = name) },
    debug = function(...) { log("DEBUG", ...) },
    info  = function(...) { log("INFO", ...) },
    warn  = function(...) { log("WARN", ...) },
    error = function(...) { log("ERROR", ...) }
  ))
and then
> basicConfig()
> l <- Logger$new(name="hierarchic.logger.name")
> l$debug("oops")
> l$info("oops")
2011-02-11 11:54:05 NumericLevel(INFO):hierarchic.logger.name:oops
> l$warn("oops")
2011-02-11 11:54:11 NumericLevel(WARN):hierarchic.logger.name:oops
>
This could be done with the proto package. It supports older versions of R (it's been around for years), so you would not have a problem of old vs. new versions of R.
library(proto)
library(logging)

Logger. <- proto(
  new = function(this, name)
    this$proto(name = name),
  log = function(this, ...)
    levellog(..., logger = this$name),
  setLevel = function(this, newLevel)
    logging::setLevel(newLevel, container = this$name),
  addHandler = function(this, ...)
    logging::addHandler(this, ..., logger = this$name),
  warn = function(this, ...)
    this$log(loglevels["WARN"], ...),
  error = function(this, ...)
    this$log(loglevels["ERROR"], ...)
)
basicConfig()
l <- Logger.$new(name = "hierarchic.logger.name")
l$warn("this may be bad")
l$error("this definitely is bad")
This gives the output:
> basicConfig()
> l <- Logger.$new(name = "hierarchic.logger.name")
> l$warn("this may be bad")
2011-02-28 10:17:54 WARNING:hierarchic.logger.name:this may be bad
> l$error("this definitely is bad")
2011-02-28 10:17:54 ERROR:hierarchic.logger.name:this definitely is bad
In the above we have merely layered proto on top of logging but it would be possible to turn each logging object into a proto object, i.e. it would be both, since both logging objects and proto objects are R environments. That would get rid of the extra layer.
See http://r-proto.googlecode.com for more info.
Why would you repeat the name? It would be more convenient to pass the log object directly to the function, i.e.
logdebug("test",logger=l)
# or
logdebug("test",l)
A bit like the way one would use connections in a number of functions. That seems more the R way of doing it, I guess.