Is it a good idea to run integration tests on my CI using serverless offline?
I am on AWS and I want to test the Lambda <-> SQS integration.
My Lambda is invoked through API Gateway, which I know serverless-offline emulates.
const fetch = require('node-fetch') // or rely on the global fetch in Node 18+
const JEST_SLS_OFFLINE_URL = 'http://localhost:3000' // Default sls offline url

describe('Version endpoint', () => {
  const fetchUser = async () => {
    const url = `${JEST_SLS_OFFLINE_URL}/user/123`
    const response = await fetch(url)
    return response.text()
  }

  test('Should fetchUser', async () => {
    expect(await fetchUser()).toBe('')
  })
})
An alternative is to spin up a new serverless function on AWS (for every PR), which is quite resource-intensive.
I don't think there's a right or wrong answer here; people develop their own preferences with this, but in my opinion, yes, it's fine. It's what I do for integration tests. I also spin up a Docker container with DynamoDB. For SQS, however, you're going to have to emulate it. For integration tests I spin up a simple server and mock the calls/responses; I do this for SQS, SNS, Cognito and a few other services that are either not available in serverless-offline or don't provide the kind of testing experience I want. You can check out one of my answers on mocking Cognito; the same process applies to every AWS service, which is very handy.
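For illustration, here is a minimal sketch of that mock-server approach (not my exact code; the port and the /__messages inspection route are made up for the example). It assumes your handler builds its SQS client with an overridable endpoint, e.g. new AWS.SQS({ endpoint: 'http://localhost:4576' }):

const express = require('express');
const crypto = require('crypto');

const app = express();
const sentMessages = []; // captured so tests can assert on them later

// SQS actions arrive as form-encoded POST bodies
app.use(express.urlencoded({ extended: false }));

// Match any path, since the SDK posts to the queue URL's path
app.post('*', (req, res) => {
  if (req.body.Action === 'SendMessage') {
    sentMessages.push(req.body.MessageBody);
  }
  // Minimal XML response the AWS SDK will accept; the MD5 must match the
  // message body or the SDK rejects the response.
  const md5 = crypto.createHash('md5').update(req.body.MessageBody || '').digest('hex');
  res.type('application/xml').send(
    '<SendMessageResponse><SendMessageResult>' +
    '<MessageId>00000000-0000-0000-0000-000000000000</MessageId>' +
    `<MD5OfMessageBody>${md5}</MD5OfMessageBody>` +
    '</SendMessageResult></SendMessageResponse>');
});

// Let tests inspect what the Lambda sent
app.get('/__messages', (req, res) => res.json(sentMessages));

app.listen(4576, () => console.log('mock SQS listening on http://localhost:4576'));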
Related
I'm working on a project that uses GCP App Engine. In dev it prints out errors like:
2020-09-20 15:07:24 dev[development] Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
    at GoogleAuth.getApplicationDefaultAsync (/workspace/node_modules/google-auth-library/build/src/auth/googleauth.js:161:19)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at runNextTicks (internal/process/task_queues.js:66:3)
    at listOnTimeout (internal/timers.js:518:9)
    at processTimers (internal/timers.js:492:7)
    at async GoogleAuth.getClient (/workspace/node_modules/google-auth-library/build/src/auth/googleauth.js:503:17)
    at async GrpcClient._getCredentials (/workspace/node_modules/google-gax/build/src/grpc.js:108:24)
    at async GrpcClient.createStub (/workspace/node_modules/google-gax/build/src/grpc.js:229:23)
Keep in mind this is development mode, but it is running on the GCP App Engine infrastructure; it is not running on localhost. I'm viewing the logs with the command:
gcloud app logs tail -s dev
According to the GCP authentication docs (https://cloud.google.com/docs/authentication/production#cloud-console):

If your application runs inside a Google Cloud environment that has a default service account, your application can retrieve the service account credentials to call Google Cloud APIs.
I checked my App Engine service accounts, and I have a default service account which is active (screenshot redacted).
So I guess my question is: if I have an active default service account, and my application is supposed to automatically use that service account's credentials when it makes API calls, why am I seeing this authentication error? What am I doing wrong?
Edit: here's the code that is printing out errors:
// `Datastore` comes from the @google-cloud/datastore package;
// `EntityKey` and the `data` model layer are defined elsewhere in the project.
import { Datastore } from '@google-cloud/datastore';

async function updateCrawledOnDateForLink (crawlRequestKey: EntityKey, linkKey: EntityKey): Promise<void> {
  try {
    // A new client is constructed on every call - see the discussion below
    const datastore = new Datastore();
    const crawlRequest = await datastore.get(crawlRequestKey)
    const brand = crawlRequest[0].brand.replace(/\s/g, "")
    await data.Link.update(
      +linkKey.id,
      { crawledOn: new Date() },
      null,
      brand)
  } catch (error) {
    console.error('updateCrawledOnDateForLink ERROR:', `CrawlRequest: ${crawlRequestKey.id}`, `Link: ${linkKey.id}`, error.message)
  }
}
My prediction is that creating a new Datastore() each time is the problem, but let me know your thoughts.
The fact that removing new Datastore() from a couple of functions solves the issue indicates that the problem is not with App Engine authentication but with Datastore, which confirms the documentation you shared.
I believe the issue is that Datastore loses track of the credentials when you create multiple instances of it in your code, for reasons that aren't clear.
Since you mentioned in the comments that you don't really need multiple instances of Datastore in your code, the solution to your problem is to use a single instance in a global variable and use that variable in your functions.
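To make that concrete, here's a minimal sketch of the fix, assuming the standard @google-cloud/datastore Node.js client (the function body is abbreviated from the question):

const { Datastore } = require('@google-cloud/datastore');

// Create the client once at module scope and reuse it in every function,
// instead of calling `new Datastore()` per invocation.
const datastore = new Datastore();

async function updateCrawledOnDateForLink (crawlRequestKey, linkKey) {
  const [crawlRequest] = await datastore.get(crawlRequestKey);
  // ...rest of the function unchanged, using the shared `datastore`...
}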
I have an ExpressJS project with several routes
var express = require('express');
var path = require('path');

// Route modules defined elsewhere in the project (paths assumed)
var car = require('./routes/car');
var bike = require('./routes/bike');
var bus = require('./routes/bus');
var train = require('./routes/train');

var app = express(); // express() is a factory, not a constructor

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
app.use(express.static(path.join(__dirname, 'public')));

app.use('/car', car);
app.use('/bike', bike);
app.use('/bus', bus);
app.use('/train', train);

app.get('/', function (req, res) {
  res.render('layout', { title: 'app example' });
});

module.exports = app;
I have deployed with ClaudiaJS to AWS Lambda and the deployment seems to work.
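For reference, the usual Claudia flow for an Express app (per Claudia's Express tutorial; the module name and region are placeholders) is roughly:

npm install claudia --global
claudia generate-serverless-express-proxy --express-module app
claudia create --handler lambda.handler --deploy-proxy-api --region us-east-1

The --deploy-proxy-api flag is supposed to create a greedy proxy resource in API Gateway that forwards all paths to the Lambda, which is what a multi-route Express app needs.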
After that, I am configuring AWS API Gateway to invoke different resource paths in the Lambda function. What I have found is that it works properly for the root path '/', but when I try to invoke a different resource path from API Gateway I get this error in API Gateway:
"You do not have permission to perform this action"
Additionally I get this message in the Lambda Function:
"The API with ID XXXXXXXXX does not include a resource with path /car having an integration arn:aws:lambda:myzone:XXXXXXXXXXXXX:function:functioname on the GET method."
Is this currently possible with ClaudiaJS, or is a configuration with multiple resource paths even supported for Lambda functions? Any experience?
Update 1: this seems to be possible for AWS Lambdas. See here: Is it possible to connect API gateway with node routes in AWS lambda? Not sure if ClaudiaJS can manage this use case.
Update 2: ClaudiaJS confirms in their support group (https://gitter.im/claudiajs/claudia) that deploying a multi-route ExpressJS app to a single AWS Lambda is possible with their product, and refers me to https://livebook.manning.com/#!/book/serverless-apps-with-node-and-claudiajs/chapter-13/v-5/167. So it looks like a config/invocation error on my side.
Update 3: I managed to invoke two routes successfully:
app.get('/test', function (req, res) {
  res.send('Hello World test!');
});

app.get('/', function (req, res) {
  res.send('Hello World!');
});
I'm getting {"message": "Internal server error"} for a third route, which accesses MongoDB on EC2. It looks like a permissions issue.
Finally, this issue had nothing to do with ClaudiaJS.
I just needed to use the internal IP of the EC2 instance instead of the external one, as described here: Invalid permission from Lambda to MongoDB in EC2.
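In other words, the fix was only in the connection string. A hypothetical before/after with placeholder addresses:

// Before: the EC2 instance's external/public hostname - fails from Lambda
// var url = 'mongodb://ec2-203-0-113-10.compute-1.amazonaws.com:27017/mydb';

// After: the instance's internal (private) IP - works
var url = 'mongodb://172.31.0.12:27017/mydb'; // placeholder private IP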
I am dealing with some legacy applications and want to use Amazon AWS API Gateway to mitigate some of the drawbacks.
Application A, is able to call URLs with parameters, but does not support HTTP basic AUTH. Like this:
https://example.com/api?param1=xxx&param2=yyy
Application B is able to handle these calls and respond. BUT application B needs HTTP basic authentication.
The question is now, can I use Amazon AWS API Gateway to mitigate this?
The idea is to create an API of this style:
http://amazon-aws-api.example.com/api?authcode=aaaa&param1=xxx&param2=yyy
Then Amazon should check whether the authcode is correct, call Application B's API with all remaining parameters using a stored username+password, and pass the result back to Application A.
I could also pass username + password as parameters, but I guess using a long authcode and storing the (rather short) password at Amazon is more secure. One could also use a changing authcode like the ones used in two-factor authentication.
Path to a solution:
I created the following AWS Lambda function based on the HTTPS template:
'use strict';
const https = require('https');

exports.handler = (event, context, callback) => {
  // `event` carries the request options (hostname, path, auth - see the
  // test event below); https.get() calls req.end() automatically.
  const req = https.get(event, (res) => {
    let body = '';
    res.setEncoding('utf8');
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => callback(null, body));
  });
  req.on('error', callback);
};
If I use the Test function and provide it with this event it works as expected:
{
  "hostname": "example.com",
  "path": "/api?param1=xxx&param2=yyy",
  "auth": "user:password"
}
I suppose the best way from here is to use API Gateway to provide an interface like:
https://amazon-aws-api.example.com/api?user=user&pass=pass&param1=xxx&param2=yyy
Since the query parameters of an HTTPS request are encrypted in transit and are not stored in Lambda, this method should be pretty secure.
The question now is how to connect API Gateway to the Lambda.
You can achieve the scenario you mention with AWS API Gateway. However, it won't be a plain proxy integration; you'll need a Lambda function that forwards the request after doing the transformation.
If the credentials are fixed credentials used to invoke the API, you can store them in Lambda environment variables, encrypted with an AWS KMS key.
However, if credentials are sent for each user (e.g. logged into the application from a web browser), the drawback of this approach is that you need to store usernames and passwords and retrieve them later. Storing passwords, even encrypted, is discouraged. In that case it's better to pass the credentials through (while still doing the transformations) rather than storing and retrieving them in between.
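As a rough sketch of the fixed-credentials variant (the variable name BASIC_AUTH is a placeholder; this assumes the value was encrypted with a KMS key and decrypts it with the aws-sdk v2 client):

const AWS = require('aws-sdk');
const kms = new AWS.KMS();

let cachedAuth; // decrypt once, reuse across warm invocations

exports.handler = async (event) => {
  if (!cachedAuth) {
    const { Plaintext } = await kms.decrypt({
      CiphertextBlob: Buffer.from(process.env.BASIC_AUTH, 'base64'),
    }).promise();
    cachedAuth = Plaintext.toString('utf8'); // e.g. "user:password"
  }
  // ...forward the request to Application B with `auth: cachedAuth`...
};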
I need to create a RESTful API to expose a Windows application as a service. My first step is to create a simple REST API that returns a string, and then connect it to Amazon API Gateway.
I already launched a Windows Server instance, installed Node.js and created a simple API like this:
var express = require('express');
var app = express();

app.get('/test', function (req, res) {
  console.log('response');
  res.end('response');
});

var server = app.listen(8080, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('Example app listening at http://%s:%s', host, port);
});
I tested it by opening http://localhost:8080/test and it works perfectly.
The thing is, now I have to connect it to Amazon API Gateway, but I haven't found clear documentation on how to do that. I have to use the "HTTP Proxy" integration option, but how do I get an "Endpoint URL"? All the tutorials take for granted that I already have that URL, but I don't.
1. Go to the EC2 console.
2. Look for your instance.
3. In the description of the instance, find its public IP.
4. Make sure its security group has the right permissions, otherwise you will not be able to connect to it.
5. Use the instance's public IP in API Gateway.

In production use a more robust configuration, but for testing purposes you should be good.
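For example, with the test route from the question and a placeholder public IP, the Endpoint URL you paste into API Gateway would look like:

http://203.0.113.10:8080/test

(203.0.113.10 stands in for your instance's public IP; the security group must allow inbound traffic on port 8080.)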
We are rewriting our web application in Ember.js. We use our REST API, and the API uses OAuth 2.0 for authentication. Now we are trying to use Ember Simple Auth (https://github.com/simplabs/ember-simple-auth) and we also tried torii (https://github.com/Vestorly/torii), but it seems both need an AMD loader or Ember CLI. Unfortunately we are using neither. I would like to know what people are using for OAuth 2.0 authentication. Thanks in advance.
Update:
I downloaded Ember Simple Auth 0.7.0 from the distribution, but how do I configure it with my Ember application? I tried to use it like this, but it didn't work:
Ember.Application.initializer({
  name: 'authentication',
  after: 'simple-auth',
  initialize: function (container, application) {
    var applicationRoute = container.lookup('route:application');
    var session = container.lookup('simple-auth-session:main');
    // handle the session events
    session.on('sessionAuthenticationSucceeded', function () {
      applicationRoute.transitionTo('index');
    });
  }
});

var ApplicationRouteMixin = requireModule('simple-auth/mixins/application-route-mixin')['default'];
And in my route like this:
App.ApplicationRoute = Ember.Route.extend(SimpleAuth.ApplicationRouteMixin, {});
@marcoow do you have any example?
Ember Simple Auth also has a distribution that exports a global (SimpleAuth) - download here: https://github.com/simplabs/ember-simple-auth/releases/tag/0.7.0
You should really use Ember CLI of course though...
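As a minimal sketch of the globals setup (script paths are placeholders): load the distribution build with plain script tags after Ember, then use the SimpleAuth global directly instead of requireModule():

// index.html loads the builds with plain script tags, e.g.:
//   <script src="ember.js"></script>
//   <script src="ember-simple-auth.js"></script>

// No AMD loader needed - the distribution exposes a SimpleAuth global,
// so the mixin comes straight off it:
App.ApplicationRoute = Ember.Route.extend(SimpleAuth.ApplicationRouteMixin, {});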