I have the following case:
I'm setting several environment variables in my serverless.yml file, like:
ONE_CLIENT_SECRET=${ssm:/one/key_one~true}
ONE_CLIENT_PUBLIC=${ssm:/one/key_two~true}
ANOTHER_SERVICE_KEY=${ssm:/two/key_one~true}
ANOTHER_SERVICE_SECRET=${ssm:/two/key_two~true}
Let's say I have around 10 environment variables. When I try to deploy, I get the following error:
An error occurred: SecureLambdaFunction - Lambda was unable to configure your environment variables because the environment variables you have provided exceeded the 4KB limit. String measured: JSON_WITH_MY_VARIABLES_HERE
So I cannot deploy. I have an idea of what the problem is, but I don't have a clear path to solving it, so my questions are:
1) How can I extend the 4 KB limit?
2) Assuming my variables are set using SSM (I'm using the EC2 Parameter Store to save them; this one is more for the Serverless team or someone who knows the topic): how does it work behind the scenes?
- When I run sls deploy, does it fetch the values and include them in the .zip file (this is what I think it does, I just want to clarify), or does it fetch the values when the Lambdas execute? I'm asking because I can go to the AWS Lambda console and see them set there.
Thanks!
After digging into this more deeply, I came to the following conclusion:
Using the pattern ONE_CLIENT_SECRET=${ssm:/one/key_one~true} means that the Serverless framework downloads the values at deploy time and embeds them into the project. This is where the problem comes from: you can see this after uploading the project, since your variables end up in plain text in the Lambda console.
My solution was to use a middy middleware to load the SSM values when the Lambda executes. This means you need to structure your project so that no code runs until the variables are available, and you need a good strategy for caching the variables (cold start); otherwise it will add time to every execution.
The 4 KB limit cannot be changed, and after reading about it, that seems obvious.
So, short story: if you run into this problem, find the mix of middleware-loaded and embedded values that works best for you.
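For illustration, here is a minimal sketch of that runtime-loading idea for a Python handler, using boto3 with a module-level cache so parameters are only fetched on a cold start. The parameter path and names are placeholders; the original solution used middy, which does the equivalent for Node.js handlers.

import boto3

ssm = boto3.client('ssm')
_cache = {}

def get_param(name):
    # fetch a SecureString parameter once, then reuse it on warm invocations
    if name not in _cache:
        response = ssm.get_parameter(Name=name, WithDecryption=True)
        _cache[name] = response['Parameter']['Value']
    return _cache[name]

def handler(event, context):
    one_client_secret = get_param('/one/key_one')  # placeholder path
    # ... use the value here instead of reading it from an environment variable
    return {'statusCode': 200}

This keeps the secrets out of the Lambda environment variables entirely, so the 4 KB limit no longer applies to them.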
I have a job that might fail with a specific configuration. What I want to do is let it run once and, if it fails, rerun it with a slightly different configuration.
I found the attempt parameter, but I can't find a way to access it outside the resources tag...
Do you know how to access it, or is there any alternative?
The attempt counter is passed to the jobscript as the argument '--attempt <int>' (in my case the jobscript is a wrapper script in Python),
so you can access it, for example, with:
sys.argv[sys.argv.index('--attempt')+1]
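To connect this back to the original goal (run with one configuration first and, on a retry, with a slightly different one), the wrapper script can branch on that value. A minimal sketch, with the configuration names made up for illustration (the job is only retried if Snakemake is invoked with something like --restart-times 1):

import sys

# Snakemake's jobscript passes the retry counter as '--attempt <int>'
attempt = int(sys.argv[sys.argv.index('--attempt') + 1])

# first attempt: the default configuration; later attempts: the fallback one
config_flag = '--mode=default' if attempt == 1 else '--mode=fallback'
print(f'attempt {attempt}: running with {config_flag}')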
I would like to specify Singularity bind paths inside the Snakefile (i.e. the Snakemake script) and not via the command line. I believe this could be done somehow via their API, by from snakemake import something, etc. How do I achieve this?
Broadly speaking, how do we supply options/arguments to Snakemake via its API from within a Snakefile?
I made a pipeline that does a couple of things, and one of those things is downloading samples. Starting 30 downloads at the same time is a waste of resources, so I wanted to limit the number of parallel downloads without always passing --resources parallel_downloads=1 on the command line. I noticed that snakemake.workflow exists when a Snakefile is executed, and there I added this as a resource:
workflow.global_resources.update({'parallel_downloads': 1})
I have no experience with Singularity, so I don't fully understand what you want. But my guess is that this is stored somewhere in the workflow object and you can change it there.
P.S. This is not at all an official API, nor guaranteed to work between versions.
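For context, here is a minimal sketch of how this can look in a Snakefile; only the global_resources line comes from the answer above, and the rule and URL are made up for illustration:

# Snakefile
workflow.global_resources.update({'parallel_downloads': 1})

rule download_sample:
    output:
        'data/{sample}.fastq.gz'
    resources:
        parallel_downloads=1   # at most one download job runs at a time
    shell:
        'wget -O {output} https://example.com/{wildcards.sample}.fastq.gz'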
I would like to know how it is possible to use different data sets at runtime when executing tests in various environments. I have read the documentation, but I am unable to find the best solution for this scenario.
Requirement: execute a test in the QA environment and then execute the same test in SIT, but use different data in the request, e.g. customerIds. The reason for this is that the data setup in each environment is very different.
I would appreciate it if you could propose the best solution for this scenario.
The documentation explains how to do this: https://github.com/intuit/karate#environment-specific-config
Then you can simply specify the environment when launching Karate:
mvn test -DargLine="-Dkarate.env=e2e"
And all your tests will be able to use the variables you've defined for the specified environment.
Edit: another hint: in your config file, specify the path of a data file. Then, depending on your env, you will be able to read a different file containing all your data.
Edit after your comment:
Let's say you defined two environments, "qa" and "prod".
For every piece of data that differs between the two, simply create two files: myFile-qa.json and myFile-prod.json.
Now, in your tests, when you want to read a file, just use read('myFile-' + env + '.json'). Just like that, you read the correct file for your defined environment.
I'm trying to get acquainted with test automation using the Microsoft TFS API.
I've created a program which runs my test set; it uses code similar to what is described here, e.g.:
var testRun = _testPoint.Plan.CreateTestRun(false);
testRun.DateStarted = DateTime.Now;
// ...
testRun.Save();
I believe this forces the tests to start as soon as any agent can run them, instead of being delayed to a certain time. Am I wrong? Anyway, it works all right.
But I was told by my lead that the task should be started each time new input files are copied to a certain folder (on the network, I think, or perhaps in TFS).
So I'm searching for a way to trigger the tests on some condition, but so far without any luck. Probably I'm missing the proper keywords.
I only found something vaguely related here, but they seem to say it is not possible to do properly.
So are there any facilities in TFS / MTM, or any ways or approaches, to achieve my goal? Thanks in advance for any hints / links.
You would need to write a system service (or similar) that uses a file system watcher. Then, when the files change, you can run your code above.
There is no built-in feature in TFS to watch a folder for changes.
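For illustration only, here is the general shape of such a watcher, sketched in Python with simple polling (the folder path and interval are placeholders); in a .NET service you would more likely use System.IO.FileSystemWatcher and then call the CreateTestRun code shown above when new files appear:

import os
import time

watch_dir = r'\\server\share\input'   # placeholder for the watched folder
seen = set(os.listdir(watch_dir))

while True:
    time.sleep(30)                     # poll interval
    current = set(os.listdir(watch_dir))
    for name in current - seen:
        print('new input file:', name)
        # kick off the test run here (the CreateTestRun code above)
    seen = current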
This is regarding my development stage and the practice of testing all the JS before releasing it.
Unfortunately, we have some hardcoded references in our code, which is why there is no way for me to test a new version of test.js on the stage server; you only see the effects when it goes live.
Now, I know I should use relative paths etc., but I was wondering if there is a Firefox plugin that could substitute http://remote.site/test.js with /dev_path/to/test.js during page load?
I have also tried using the hosts file for this purpose, but it doesn't work in my scenario, as I only need to remap this one reference and not the whole domain.
Is there anything stopping you from changing the hard-coded references? That's really the easiest answer to your problem.
Run a find-and-replace on your files to change the absolute links to relative ones. As long as the site hierarchy is the same for development and production, there shouldn't be any problems.
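If it helps, that find-and-replace can be a one-off script; here is a rough sketch in Python, where the directory, file pattern, and URL are placeholders taken from the question:

import pathlib

for path in pathlib.Path('site').rglob('*.html'):
    text = path.read_text(encoding='utf-8')
    # turn the absolute reference into a root-relative one
    path.write_text(text.replace('http://remote.site/test.js', '/test.js'), encoding='utf-8')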