Debugging terragrunt dependency block resulting in s3 permission error - amazon-s3

I'm trying to use a dependency block for the first time, but I get AWS S3 list-object permission-denied errors and am having trouble debugging the issue.
The setup is as follows, using an S3 backend for storing Terraform state:
A git repo containing the Terraform modules:
archive
s3_inventory
Instantiations of the above:
prod/eu/archive/terragrunt.hcl:
terraform {
  source = "git::ssh://git@my_server//archive?ref=v1.0.0"
}
include {
  path = find_in_parent_folders()
}
dependency "s3-inventory" {
  config_path = "../s3-inventory/"
}
prod/eu/s3-inventory/terragrunt.hcl:
terraform {
  source = "git::ssh://git@my_server//s3_inventory?ref=v1.0.0"
}
include {
  path = find_in_parent_folders()
}
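For context (this is not in the original question), the outputs of a dependency are normally consumed via an inputs block in the dependent module's terragrunt.hcl, along the lines of the sketch below, where the output name bucket_arn is purely hypothetical:
# prod/eu/archive/terragrunt.hcl (sketch only; the output name is made up)
inputs = {
  inventory_bucket_arn = dependency.s3-inventory.outputs.bucket_arn
}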
Running terragrunt apply in prod/eu/archive works just fine when I remove the dependency block from the hcl file. It fails when I add the dependency block in.
Running terragrunt output -json in prod/eu/s3-inventory also works just fine.
With debugging flags on I still don't seem to get enough info as to why it's failing.
terragrunt apply --terragrunt-log-level debug --terragrunt-debug in prod/eu/archive results in something like this:
...<omitted>...
DEBU[0000] Detected module /Users/tim.kersten/prod/eu/s3-inventory/terragrunt.hcl is already init-ed. Retrieving outputs directly from working directory. prefix=[/Users/tim.kersten/prod/eu/s3-inventory]
DEBU[0000] Running command: terraform output -json prefix=[/Users/tim.kersten/prod/eu/s3-inventory]
Failed to load state: AccessDenied: Access Denied
status code: 403, request id: ABC123DEF456GHI, host id: WW91J3JlIHRlcnJpYmx5IG5vc2UgZm9yIHRyeWluZyB0byBsb29rIGF0IG15IGhvc3QK
ERRO[0003] exit status 1
Something is clearly different, but the debugging options I set on terragrunt don't seem to give me enough info to understand what's different.
Anyone understand what's going on here?
Edit:
terragrunt version: 0.28.6
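A generic sanity check that might help narrow this down (not from the original post): confirm which AWS identity is in play, and fetch the dependency's outputs the same way the dependency block does, for example:
# which AWS identity is being used from this shell?
aws sts get-caller-identity
# read the dependency's outputs exactly as terragrunt does for the dependency block
terragrunt output -json --terragrunt-working-dir ../s3-inventory
If the second command fails with the same AccessDenied, the problem is with permissions on the s3-inventory state object rather than with the dependency block itself.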

Related

Deploy warning calling external JS function in `serverless.yml`

So, in my serverless.yml file I have this:
custom:
  my_attr: ${file(./serverless/get-custom-value.js):my_attr}
And in that file (located in ./serverless/get-custom-value.js) is this JavaScript code:
module.exports.my_attr = async function(slsArg) {
  const stage = slsArg.providers.aws.getStage()
  console.debug(`### stage: "${stage}".`)
  return stage
}
When running sls package -s {stage} or sls deploy -s {stage} (both of which succeed), I see this warning:
Serverless: Deprecation warning: Variables resolver reports following resolution errors:
- Cannot resolve variable at "custom.my_attr": Cannot resolve "my_attr" out of "get-custom-value.js": Resolved a JS function not confirmed to work with a new parser, falling back to old resolver
Yet, despite the warning it works exactly as expected…
It's a deprecation warning informing you that this resolver syntax is deprecated (and will become an error) in the next major version of the Serverless Framework, because the internal process for resolving variables has changed.
You can adopt the new resolver very simply: change the declaration in serverless.yml, modify the function arguments in get-custom-value.js, and then set
variablesResolutionMode: 20210326
in your serverless.yml to indicate that you have migrated. That instructs the Serverless Framework to use the new resolver, as indicated by the warning message.
The full overview is in the documentation
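As a rough illustration only (the exact shape of the context object passed to the JS function is described in the linked documentation; the property names below are assumptions to verify there), the migrated setup might look like:
# serverless.yml
variablesResolutionMode: 20210326
custom:
  my_attr: ${file(./serverless/get-custom-value.js):my_attr}
// ./serverless/get-custom-value.js (sketch: assumes the new resolver passes a
// context object that includes the parsed CLI options)
module.exports.my_attr = async ({ options }) => {
  const stage = options.stage || 'dev'
  console.debug(`### stage: "${stage}".`)
  return stage
}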

How to change the path for local backend state when using workspaces in terraform?

What is the expected configuration for using terraform workspaces with the local backend?
The local backend supports workspacing, but it does not appear you have much control over where the actual state is stored.
When you are not using workspaces you can supply a path parameter to the local backend to control where the state files are stored.
# Either in main.tf
terraform {
  backend "local" {
    path = "/path/to/terraform.tfstate"
  }
}
# Or as a flag
terraform init -backend-config="path=/path/to/terraform.tfstate"
I expected analogous functionality when using workspaces, i.e. that you would supply a directory for path and the workspaces would get created under that directory.
For example:
terraform workspace new first
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
terraform workspace new second
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
would result in the state
/path/to/terraform.tfstate.d/first/terraform.tfstate
/path/to/terraform.tfstate.d/second/terraform.tfstate
This does not appear to be the case however. It looks like the local backend ignores the path parameter and puts the workspace configuration in the working directory.
Am I missing something or are you unable to control local backend workspace state?
There is an undocumented setting for the local backend, workspace_dir, that solves this issue.
The documentation task is tracked here
terraform {
  backend "local" {
    workspace_dir = "/path/to/terraform.tfstate.d"
  }
}
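The same setting should also be accepted as a partial backend configuration on the command line, analogous to the path example in the question (a sketch; behaviour may vary between Terraform versions):
terraform init -backend-config="workspace_dir=/path/to/terraform.tfstate.d"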

JanusGraph Remote Graph NoSuchFieldError: V3_0 error

I am following this example:
https://github.com/JanusGraph/janusgraph/tree/master/janusgraph-examples/example-remotegraph
I would like to debug this project. I configured it (HBase + Solr) and started the JanusGraph server with this command:
$JANUSGRAPH_HOME/bin/gremlin-server.sh $JANUSGRAPH_HOME/conf/gremlin-server/gremlin-server.yaml
I passed this argument to IDEA via Run Configuration > Program Arguments
[Project Home]/conf/jgex-remote.properties
My jgex-remote.properties file is:
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
# cluster file has the remote server configuration
gremlin.remote.driver.clusterFile=[Project Home]/conf/remote-objects.yaml
# source name is the global graph traversal source defined on the server
gremlin.remote.driver.sourceName=g
and my remote-objects.yaml file includes:
hosts: [127.0.0.1]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0,
  config: {
    ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry]
  }
}
It tries to execute this line:
cluster = Cluster.open(conf.getString("gremlin.remote.driver.clusterFile"));
And throws this exception:
Exception in thread "main" java.lang.NoSuchFieldError: V3_0
    at org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0.<init>(GryoMessageSerializerV3d0.java:41)
    at org.apache.tinkerpop.gremlin.driver.ser.Serializers.simpleInstance(Serializers.java:77)
    at org.apache.tinkerpop.gremlin.driver.Cluster$Builder.<init>(Cluster.java:472)
    at org.apache.tinkerpop.gremlin.driver.Cluster$Builder.<init>(Cluster.java:469)
    at org.apache.tinkerpop.gremlin.driver.Cluster.getBuilderFromSettings(Cluster.java:167)
    at org.apache.tinkerpop.gremlin.driver.Cluster.build(Cluster.java:159)
    at org.apache.tinkerpop.gremlin.driver.Cluster.open(Cluster.java:233)
    at com.ets.dataplatform.init.RemoteGraphApp.openGraph(RemoteGraphApp.java:72)
    at com.ets.dataplatform.init.GraphApp.runApp(GraphApp.java:290)
    at com.ets.dataplatform.init.RemoteGraphApp.main(RemoteGraphApp.java:195)
The error is not meaningful to me.
Thanks in advance.
I would try to align your versions. I assume that you are using JanusGraph 0.2.0. If you look at the pom.xml for that version you'll see that it is bound to TinkerPop 3.2.6:
https://github.com/JanusGraph/janusgraph/blob/v0.2.0/pom.xml#L68
Change to that version in your application and see if the connection works. Taking that approach should not only fix your problem but also ensure that you don't run into other incompatibilities. That is not to say that you can't configure later versions of TinkerPop to work with 3.2.6, but it requires a bit more configuration and you have to be aware of minor changes that might affect how certain operations behave.
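For example, if your application declares the TinkerPop driver directly, pinning it to the version JanusGraph 0.2.0 is built against would look roughly like this in the pom.xml (a sketch; adjust to whichever TinkerPop artifacts you actually depend on):
<dependency>
  <groupId>org.apache.tinkerpop</groupId>
  <artifactId>gremlin-driver</artifactId>
  <version>3.2.6</version>
</dependency>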

F4 IDE gives "Invalid Uri scheme for local file" when running Fantom app

I started a very simple project using Xored's F4 IDE for Fantom. The first few times I ran it there was no error, but I started adding dependencies (fanbatis) and at some point the error below started showing up every time I run a test or a dummy Hello World app.
[23:44:18 22-Nov-13] [err] [pathenv] Cannot parse path: C:\dev\f4workspace\auth\bin\fan
sys::ArgErr: Invalid Uri scheme for local file: c:\dev\f4workspace\auth\bin\fan/
fan.sys.LocalFile.uriToFile (LocalFile.java:64)
fan.sys.File.make (File.java:26)
util::PathEnv.parsePath (PathEnv.fan:47)
fan.sys.List.each (List.java:555)
util::PathEnv.parsePath (PathEnv.fan:43)
util::PathEnv.make$ (PathEnv.fan:22)
util::PathEnv.make (PathEnv.fan:20)
java.lang.reflect.Method.invoke (Unknown)
fan.sys.Method.invoke (Method.java:559)
fan.sys.Method$MethodFunc.callList (Method.java:198)
fan.sys.Type.make (Type.java:246)
fan.sys.ClassType.make (ClassType.java:110)
fan.sys.Type.make (Type.java:236)
fan.sys.Sys.initEnv (Sys.java:447)
fan.sys.Sys.<clinit> (Sys.java:224)
fanx.tools.Fan.execute (Fan.java:28)
fanx.tools.Fan.run (Fan.java:298)
fanx.tools.Fan.main (Fan.java:336)
Hello, World!
It is more of a nuisance at the moment because the tests and the dummy app still run. I created another project, copying all the source code class by class and testing after each change, and the error never occurred. Any ideas, please?
That's an interesting issue!
tl;dr: you have an empty project 'auth' in your workspace. Either create a dummy class inside it, or go to Run -> Run Configurations, find your launch config, and uncheck the project without sources on the 'Projects' tab.
To keep your Fantom installation clean of workspace projects, F4 puts built pods into project/bin/fan/lib/fan. When F4 launches projects from the workspace, it uses PathEnv and builds FAN_ENV_PATH by joining the path to the Fantom installation with the bin/ folders of the projects in the workspace.
When the Fantom runtime analyzes FAN_ENV_PATH, it first interprets each entry as a native OS path; if that directory does not exist, it attempts to interpret it as a file URI. Here's the relevant part of the PathEnv source:
path.split(File.pathSep[0]).each |item|
{
  if (item.isEmpty) return
  dir := File.os(item).normalize
  if (!dir.exists) dir = File(item.toUri.plusSlash, false).normalize
  if (!dir.exists) { log.warn("Dir not found: $dir"); return }
The problematic code is item.toUri. On Mac OS X and Linux this is parsed as a URI without a scheme, with only a path, so if the directory does not exist this code just prints a warning to the console.
But on Windows, because of the drive letter in the path, the drive letter is interpreted as a scheme:
fansh> "C:\\Users".toUri { echo(path); echo(scheme) }
[\Users]
c
fansh> "/Users".toUri { echo(path); echo(scheme) }
[Users]
null
And then the File constructor fails, because it expects either the 'file' scheme or a null scheme:
public static java.io.File uriToFile(Uri uri)
{
  if (uri.scheme() != null && !uri.scheme().equals("file"))
    throw ArgErr.make("Invalid Uri scheme for local file: " + uri);
  return new java.io.File(uriToPath(uri));
}
I've created an issue so that F4 will automatically skip empty projects when building FAN_ENV_PATH: https://github.com/xored/f4/issues/25.
I thought the problem had something to do with the forward slash at the end of the path, as shown in this line of the error message:
Invalid Uri scheme for local file: c:\dev\f4workspace\auth\bin\fan/
However, I found that no such path existed. I manually created both the bin and the fan folders, and the error disappeared. To be honest, I don't really know why F4 needs and checks for that folder, because so far it hasn't written any files to it.

Apache Directory LDAP API - getting up and running

When I try to run a test using the Apache LDAP API, I get the following error. I set up a Maven project, and my pom.xml has many dependencies for the Apache Directory server and API artifacts. My code (copied and pasted from an example, just to get up and running so that I can explore) all builds fine. However, when I run it (as a JUnit test), I get the following....
Can anyone help me? Maybe even just provide an example of where the Apache LDAP API is being used successfully, and maybe give me a pom.xml with the correct dependencies as well? (The Apache LDAP API documentation seems to be out of date.)
I am currently starting the test using the embedded Apache Directory server, with the following...
@RunWith(FrameworkRunner.class)
@CreateLdapServer(transports =
{
    @CreateTransport(protocol = "LDAP"),
    @CreateTransport(protocol = "LDAPS") })
// disable changelog, for more info see DIRSERVER-1528
@CreateDS(enableChangeLog = false, name = "PasswordPolicyTest")
public class PasswordPolicyIT extends AbstractLdapTestUnit
{ .......etc }
Therefore, an alternative approach would be to tailor some of the tests to just connect to a local Directory Server instance that I have running on my machine. I assume this would stop the error messages that I am getting below. Again, if anyone could provide a code snippet there, it would be useful.
Many Thanks
> 2013-06-20 16:05:10 ERROR FrameworkRunner:287 - Problem locating LDIF
> file in schema repository Multiple copies of resource named
> 'schema/ou=schema/cn=apachemeta/ou=matchingrules/m-oid=1.3.6.1.4.1.18060.0.4.0.1.3.ldif'
> located on classpath at urls
> jar:file:/Users/rk/.m2/repository/org/apache/directory/api/api-ldap-client-all/1.0.0-M17/api-ldap-client-all-1.0.0-M17.jar!/schema/ou%3dschema/cn%3dapachemeta/ou%3dmatchingrules/m-oid%3d1.3.6.1.4.1.18060.0.4.0.1.3.ldif
> jar:file:/Users/rk/.m2/repository/org/apache/directory/shared/shared-ldap-schema-data/1.0.0-M7/shared-ldap-schema-data-1.0.0-M7.jar!/schema/ou%3dschema/cn%3dapachemeta/ou%3dmatchingrules/m-oid%3d1.3.6.1.4.1.18060.0.4.0.1.3.ldif
> jar:file:/Users/rk/.m2/repository/org/apache/directory/server/apacheds-all/2.0.0-M12/apacheds-all-2.0.0-M12.jar!/schema/ou%3dschema/cn%3dapachemeta/ou%3dmatchingrules/m-oid%3d1.3.6.1.4.1.18060.0.4.0.1.3.ldif
You need to exclude the shared-ldap-schema-data dependency from apacheds-all. Take a look at this comment
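Based on the artifacts shown in the error output above, the exclusion would look roughly like this in the pom.xml (versions copied from the classpath URLs in the error; adjust them to match your build):
<dependency>
  <groupId>org.apache.directory.server</groupId>
  <artifactId>apacheds-all</artifactId>
  <version>2.0.0-M12</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.directory.shared</groupId>
      <artifactId>shared-ldap-schema-data</artifactId>
    </exclusion>
  </exclusions>
</dependency>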