In Serverless Framework, is there a way to update environment variables populated by SSM variables at lambda runtime? - serverless-framework

I'm using serverless.yml ssm variables (AWS Parameter Store) to set lambda environment variables. They're looked up and set at Serverless deploy time. I'd like the environment variables to be up to date at lambda runtime, so that I can change them in Parameter Store without re-deploying. Is there a way to achieve this with Serverless?
P.S. I know I could achieve this by looking them up in my lambda code instead of relying on Serverless to set them. I'd just like to know if Serverless has this capability.

Call the SSM client from your code:
const SSM = require('aws-sdk/clients/ssm')
const getParameter = async (paramName) => {
  const client = new SSM()
  try {
    const { Parameter } = await client
      .getParameter({ Name: paramName })
      .promise()
    // assumes the parameter value is stored as a JSON string
    return Parameter.Value ? JSON.parse(Parameter.Value) : null
  } catch (e) {
    console.error(e)
    return null
  }
}
You can kick this off outside of your handler export so the lookup runs only on cold start (which, depending on traffic, can be far less often than on every invoke). Note that top-level await only works in ES modules, so with CommonJS it's safest to store the promise and await it inside the handler:
const fooPromise = getParameter('foo') // kicked off once per container, on cold start

module.exports = async (event, context) => {
  const foo = await fooPromise
  console.log(foo)
  // ...
}
But as far as redeploying goes, it can be extremely fast. I personally split things into multiple services, and redeploying a lambda is a separate process from, say, DynamoDB setup, so it goes quickly.

Environment variables are designed to be set at deployment because they pertain to the environment of the Lambda. If you need values that change regularly, there are many other options; DynamoDB stands out as the natural solution to this problem. If all you need is a key-value store (which is what environment variables are), then DynamoDB absolutely excels: it's very fast and considerably cheaper than SSM for the same job.
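If you go down that path, here is a minimal sketch using the AWS SDK's DynamoDB DocumentClient (the config table and its name/value attributes are hypothetical):
const { DocumentClient } = require('aws-sdk/clients/dynamodb')

const db = new DocumentClient()

// Fetches one configuration value per call; results could be cached per container if reads get hot.
const getConfig = async (name) => {
  const { Item } = await db
    .get({ TableName: 'config', Key: { name } })
    .promise()
  return Item ? Item.value : null
}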

Related

Redis StackExchange LuaScripts with parameters

I'm trying to use the following Lua script with the C# StackExchange.Redis library:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", KEYS[1])
    if current == 1 then
        redis.call(""expire"", KEYS[1], KEYS[2])
        return 1
    else
        return current
    end";
Whenever I'm evaluating the script "as a string", it works properly:
var incrementValue = await Database.ScriptEvaluateAsync(LuaScriptToExecute,
    new RedisKey[] { key, ttlInSeconds });
If I understand correctly, each time I invoke the ScriptEvaluateAsync method, the script is transmitted to the redis server, which is not very efficient.
To overcome this, I tried using the "prepared script" approach, by running:
_setCounterWithExpiryScript = LuaScript.Prepare(LuaScriptToExecute);
...
...
var incrementValue = await Database.ScriptEvaluateAsync(_setCounterWithExpiryScript,
    new[] { key, ttlInSeconds });
Whenever I try to use this approach, I receive the following error:
ERR Error running script (call to f_7c891a96328dfc3aca83aa6fb9340674b54c4442): @user_script:3: @user_script: 3: Lua redis() command arguments must be strings or integers
What am I doing wrong?
What is the right approach in using "prepared" LuaScripts that receive dynamic parameters?
If I look at the documentation: no idea.
If I look at the unit tests on GitHub, it looks really easy.
(By the way, is your ttlInSeconds really a RedisKey and not a RedisValue? You are accessing it through KEYS[2] - shouldn't that be ARGV[1]? Anyway...)
It looks like you should rewrite your script to use named parameters instead of positional arguments:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", @myKey)
    if current == 1 then
        redis.call(""expire"", @myKey, @ttl)
        return 1
    else
        return current
    end";
// We should load the script onto every server in the redis cluster.
// Even when we don't have a cluster, there will simply be one EndPoint, one iteration etc...
var prepared = LuaScript.Prepare(LuaScriptToExecute);
_myScripts = _redisMultiplexer.GetEndPoints()
    .Select(endpoint => _redisMultiplexer.GetServer(endpoint))
    .Where(server => server != null)
    .Select(server => prepared.Load(server))
    .ToArray();
Then just execute it with an anonymous type as the parameter:
foreach (var setCounterWithExpiryScript in _myScripts)
{
    var incrementValue = await Database.ScriptEvaluateAsync(
        setCounterWithExpiryScript,
        new {
            myKey = (RedisKey)key, // the implicit string -> RedisKey conversion also works
            ttl = (RedisKey)ttlInSeconds
        }
    ); // .ConfigureAwait(false); // ? ;-)
    // when ttlInSeconds is a value and not a key, just don't cast it to RedisKey:
    /*
    var incrementValue = await Database.ScriptEvaluateAsync(
        setCounterWithExpiryScript,
        new {
            myKey = (RedisKey)key,
            ttl = ttlInSeconds
        }
    ).ConfigureAwait(false);
    */
}
Warning:
Please note that Redis blocks all other commands while executing a script. Your script looks super easy (you sometimes save one round trip to redis, when current != 1), so I have a feeling that this script will be counterproductive at greater-than-trivial scale. Just do one or two calls from C# and don't bother with this script.
First of all, Jan's comment above is correct.
The script line that updates the key's TTL should be redis.call("expire", KEYS[1], ARGV[1]).
Regarding the issue itself, after searching for similar issues in the StackExchange.Redis GitHub repository, I found that prepared Lua scripts do not work really well in cluster mode.
Fortunately, it seems that "loading the scripts" isn't really necessary.
The ScriptEvaluateAsync method works properly in cluster mode and is sufficient (caching-wise).
More details can be found in the following GitHub issue.
So in the end, using ScriptEvaluateAsync without "preparing the script" did the job.
As a side note regarding Jan's comment above that this script isn't needed and could be replaced with two C# calls: the script is actually quite important, since the operation has to be atomic (it implements a rate-limiter pattern).
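For completeness, a rough sketch of the final working call with those fixes applied (the key passed as KEYS[1], the TTL as ARGV[1], no script preparation; the script is slightly simplified to always return the counter):
private const string LuaScriptToExecute = @"
    local current = redis.call('incr', KEYS[1])
    if current == 1 then
        redis.call('expire', KEYS[1], ARGV[1])
    end
    return current";

// Per the discussion above, no Prepare/Load step is needed; the plain call is sufficient caching-wise.
var incrementValue = (long) await Database.ScriptEvaluateAsync(
    LuaScriptToExecute,
    new RedisKey[] { key },            // KEYS[1]
    new RedisValue[] { ttlInSeconds }  // ARGV[1]
);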

Effector: how to reset all domain stores before each test?

I want to reset all domain stores before each test case. Is there some way to do it with Effector?
There is no such API in effector. You can create a separate event and subscribe every store in the domain to it:
const resetForm = createEvent()
formDomain.onCreateStore(store => store.reset(resetForm))
But in general you shouldn't manually reset stores in tests.
Prefer the Fork API instead:
https://effector.dev/docs/api/effector/fork - docs
https://dev.to/effector/the-best-part-of-effector-4c27 - article
Example:
test('stuff', async () => {
  // create a new forked scope, which is completely independent
  const scope = fork({
    // apply modifications like initial store values in this scope
    values: [[$myStore, "value"], [$myOtherStore, 0]], // changed value in $myStore specifically for this scope
    handlers: [[myFx, mockHandler]] // changed effect handler to a mock one for this scope
  });
  // launch the event or effect which triggers the logic we want to test;
  // we do it only in our forked scope
  await allSettled(startEvent, {
    scope,
    params: /* params of startEvent */
  })
  // check the states of stores in this scope after all calculations have ended
  expect(scope.getState($myStore)).toEqual(...)
})

Convert snake_case to camelCase field names in apollo-server-express

I'm new to GraphQL and Apollo Server, though I have scoured the documentation and Google for an answer. I'm using apollo-server-express to fetch data from a 3rd-party REST API. The REST API uses snake_case for its fields. Is there a simple way or Apollo Server canonical way to convert all resolved field names to camelCase?
I'd like to define my types using camel case like:
type SomeType {
  id: ID!
  createdTime: String
  updatedTime: String
}
but the REST API returns objects like:
{
  "id": "1234",
  "created_time": "2018-12-14T17:57:39+00:00",
  "updated_time": "2018-12-14T17:57:39+00:00"
}
I'd really like to avoid manually normalizing field names in my resolvers, e.g.
Query: {
  getObjects: () => new Promise((resolve, reject) => {
    apiClient.get('/path/to/resource', (err, response) => {
      if (err) {
        return reject(err)
      }
      resolve(normalizeFields(response.entities))
    })
  })
}
This approach seems error prone, given that I expect the amount of resolvers to be significant. It also feels like normalizing field names shouldn't be a responsibility of the resolver. Is there some feature of Apollo Server that will allow me to wholesale normalize field names or override the default field resolution?
The solution proposed by @Webber is valid.
It is also possible to pass a fieldResolver parameter to the ApolloServer constructor to override the default field resolver provided by the graphql package.
const snakeCase = require('lodash.snakecase')

const snakeCaseFieldResolver = (source, args, contextValue, info) => {
  return source[snakeCase(info.fieldName)]
}

const server = new ApolloServer({
  fieldResolver: snakeCaseFieldResolver,
  resolvers,
  typeDefs
})
See the default field resolver in the graphql source code
I'd imagine you can place the normalizeFields function inside a GraphQL middleware, right before the results are returned to the client side. Something like graphql-middleware.
A middleware would be a good centralized location to put your logic, so you don't need to add the function each time you have a new resolver.
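For illustration, a rough sketch of that idea with the graphql-middleware package (normalizeFields is the same hypothetical helper from the question, here applied once around every resolver):
const { applyMiddleware } = require('graphql-middleware')

// Wraps every resolver and camelCases whatever it returns.
const camelizeResult = async (resolve, parent, args, context, info) => {
  const result = await resolve(parent, args, context, info)
  return normalizeFields(result)
}

const schemaWithMiddleware = applyMiddleware(schema, camelizeResult)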
If you are using Knex.js, I highly recommend using an ORM such as Objection.js (https://vincit.github.io/objection.js/). An ORM has lots of very useful features that make querying easier in Node.js, including a function called knexSnakeCaseMappers which, when passed into the Knex configuration, automatically converts snake_case table and column names to camelCase before they ever reach your server. Thus your entire server can be written in camelCase, matching your GraphQL schema and your client code. Learn more here.
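A minimal sketch of that setup (the connection details are placeholders):
const Knex = require('knex')
const { knexSnakeCaseMappers } = require('objection')

const knex = Knex({
  client: 'pg',
  connection: process.env.DATABASE_URL,
  // maps snake_case identifiers in the database to camelCase in code, and back
  ...knexSnakeCaseMappers()
})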

Hapi server methods vs server.app.doSomething

I am writing a hapi js plugin, and was wondering what's the difference between the two ways of exposing methods that other plugins can use.
Method 1:
server.method("doSomething", function () {
  // Something
});
Method 2:
server.app.doSomething = function () {
  // Something
};
In the first approach, the function can later be called as server.methods.doSomething(), while with the second approach it is called as server.app.doSomething().
So why would I use one way instead of another?
Looking at the API docs, it sounds like server.methods is intended for functions and server.app for app settings/configuration. My guess is you should stick with server.method if you want to expose server-level methods to be used in your plugins.
server.methods
An object providing access to the server methods where each server method name is an object property.
var Hapi = require('hapi');
var server = new Hapi.Server();

server.method('add', function (a, b, next) {
  return next(null, a + b);
});

server.methods.add(1, 2, function (err, result) {
  // result === 3
});
server.app
Provides a safe place to store server-specific run-time application data without potential conflicts with the framework internals. The data can be accessed whenever the server is accessible. Initialized with an empty object.
var Hapi = require('hapi');
var server = new Hapi.Server();
server.app.key = 'value';

var handler = function (request, reply) {
  return reply(request.server.app.key);
};

Scoping in embedded groovy scripts

In my app, I use Groovy as a scripting language. To make things easier for my customers, I have a global scope where I define helper classes and constants.
Currently, I need to run the script (which builds the global scope) every time a user script is executed:
context = setupGroovy();
runScript( context, "global.groovy" ); // Can I avoid doing this step every time?
runScript( context, "user.groovy" );
Is there a way to set up this global scope once and just tell the embedded script interpreter: "Look here if you can't find a variable"? That way, I could run the global script once.
Note: Security is not an issue here but if you know a way to make sure the user can't modify the global scope, that's an additional plus.
Shamelessly stolen from groovy.codehaus:
The most complete solution for people who want to embed groovy scripts into their servers and have them reloaded on modification is the GroovyScriptEngine. You initialize the GroovyScriptEngine with a set of CLASSPATH-like roots that can be URLs or directory names. You can then execute any Groovy script within those roots. The GSE will also track dependencies between scripts so that if any dependent script is modified the whole tree will be recompiled and reloaded.
Additionally, each time you run a script you can pass in a Binding that contains properties that the script can access. Any properties set in the script will also be available in that binding after the script has run. Here is a simple example:
/my/groovy/script/path/hello.groovy:
output = "Hello, ${input}!"
import groovy.lang.Binding;
import groovy.util.GroovyScriptEngine;
String[] roots = new String[] { "/my/groovy/script/path" };
GroovyScriptEngine gse = new GroovyScriptEngine(roots);
Binding binding = new Binding();
binding.setVariable("input", "world");
gse.run("hello.groovy", binding);
System.out.println(binding.getVariable("output"));
This will print "Hello, world!".
Found: here
Would something like that work for you?
A simple solution is to use the code from groovy.lang.GroovyShell: you can precompile the script like so:
GroovyCodeSource gcs = AccessController.doPrivileged( new PrivilegedAction<GroovyCodeSource>() {
    public GroovyCodeSource run() {
        return new GroovyCodeSource( scriptCode, fileName, GroovyShell.DEFAULT_CODE_BASE );
    }
} );
GroovyClassLoader loader = AccessController.doPrivileged( new PrivilegedAction<GroovyClassLoader>() {
    public GroovyClassLoader run() {
        return new GroovyClassLoader( parentLoader, CompilerConfiguration.DEFAULT );
    }
} );
Class<?> scriptClass = loader.parseClass( gcs, false );
That was the expensive part. Now use InvokerHelper to bind the compiled code to a context (with global variables) and run it:
Binding context = new groovy.lang.Binding();
Script script = InvokerHelper.createScript( scriptClass, context );
script.run();
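Tying this back to the original question, a rough sketch of how the pieces could combine: compile global.groovy and user.groovy once (globalGcs and userGcs here are hypothetical GroovyCodeSource objects built as above), run the global script a single time, and reuse its Binding for every user-script run. The shared-Binding strategy is an assumption, not something from the answers above.
// Compile both scripts once, up front (the expensive part).
Class<?> globalClass = loader.parseClass( globalGcs, false );
Class<?> userClass = loader.parseClass( userGcs, false );

// Run the global script a single time to populate the shared scope.
Binding shared = new groovy.lang.Binding();
InvokerHelper.createScript( globalClass, shared ).run();

// Every user-script run sees the variables global.groovy put into the binding.
InvokerHelper.createScript( userClass, shared ).run();
Note that this does not stop user scripts from modifying the shared global scope.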