This is my first time trying out the Serverless Framework. I'm trying to use the local DynamoDB web shell to do some inspection, but I've realised I can't list tables or show a list of records.
Web shell example:
var params = {
  TableName: 'stocks-table-dev',
};
dynamodb.scan(params, function(err, data) {
  if (err) ppJson(err); // an error occurred
  else ppJson(data); // successful response
});
The above call fails with an HTTP 413 (Payload Too Large) error.
But aws cli works fine: aws dynamodb scan --table-name=stocks-table-dev --endpoint-url='http://localhost:8000'
I start the web shell with the command sls dynamodb start.
Prior to that, I used the following commands to install the plugin:
npm install --save-dev serverless-dynamodb-local
sls dynamodb install
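For reference, the plugin is wired up in serverless.yml roughly like this (a minimal sketch; the stage and port are the defaults I'm assuming):

plugins:
  - serverless-dynamodb-local

custom:
  dynamodb:
    stages:
      - dev
    start:
      port: 8000
      inMemory: true
      migrate: true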
Am I supposed to use the web shell for inspection?
Is there some configuration to be done for the web shell to work?
I'm trying to provision an EMR cluster with a bootstrap action. I can see the stdout log and it finishes fine.
The last action installs boto3.
Installing collected packages: jmespath, python-dateutil, botocore, s3transfer, boto3
Successfully installed boto3-1.18.28 botocore-1.21.28 jmespath-0.10.0 python-dateutil-2.8.2 s3transfer-0.5.0
However after that EMR fails with "On the master instance, application provisioning failed". See log below.
I think this might be due to what I install in the bootstrap action (Java 11, Python 3.7, etc.). However, if I run the same script manually via SSH after the EMR cluster has been provisioned, everything works fine. Is there any way to execute the bootstrap action after all applications have been installed?
Error log: from provision-node/apps-phase/0/60c849d6-ca64-486d-8b4a-4c60201b168f/
2021-08-25 15:01:07,025 ERROR main: Encountered a problem while provisioning
com.amazonaws.emr.node.provisioner.puppet.api.PuppetException: Unable to complete transaction and some changes were applied.
at com.amazonaws.emr.node.provisioner.puppet.api.ApplyCommand.handleExitcode(ApplyCommand.java:74)
at com.amazonaws.emr.node.provisioner.puppet.api.ApplyCommand.call(ApplyCommand.java:56)
at com.amazonaws.emr.node.provisioner.bigtop.BigtopPuppeteer.applyPuppet(BigtopPuppeteer.java:73)
at com.amazonaws.emr.node.provisioner.bigtop.BigtopDeployer.deploy(BigtopDeployer.java:22)
at com.amazonaws.emr.node.provisioner.NodeProvisioner.provision(NodeProvisioner.java:25)
at com.amazonaws.emr.node.provisioner.workflow.NodeProvisionerWorkflow.doWork(NodeProvisionerWorkflow.java:196)
at com.amazonaws.emr.node.provisioner.workflow.NodeProvisionerWorkflow.work(NodeProvisionerWorkflow.java:101)
at com.amazonaws.emr.node.provisioner.Program.main(Program.java:30)
There is a way to run post-provisioning (second stage) bootstrap actions on an EMR cluster. It's a bit of a hack, and it works like this.
As your last bootstrap action, you copy the script you actually want to run after applications like Hadoop or Spark are installed from S3 onto the node, and launch it as a background process.
That background process waits in a loop until the node is fully provisioned, then runs the code you wanted to run in the first place and exits.
Here's the code:
set_up_post_provisioning.sh
#!/bin/bash -x
# Fetch the real post-provisioning script and launch it in the background,
# then exit 0 immediately so this bootstrap action is considered finished.
aws s3 cp s3://path/to/bootstrap/scripts/post_provisioning.sh /home/hadoop/post_provisioning.sh &&
sudo bash /home/hadoop/post_provisioning.sh &
exit 0
post_provisioning.sh
#!/bin/bash
# Poll the instance-controller state file until node provisioning
# reports SUCCESSFUL, then run the real post-provisioning work.
while true
do
  NODEPROVISIONSTATE=$(sed -n '/localInstance [{]/,/[}]/{
/nodeProvisionCheckinRecord [{]/,/[}]/ {
/status: / { p }
/[}]/a
}
/[}]/a
}' /emr/instance-controller/lib/info/job-flow-state.txt | awk '{ print $2 }')

  if [[ "$NODEPROVISIONSTATE" == "SUCCESSFUL" ]]
  then
    sleep 10
    echo "Your code here"
    exit
  fi
  sleep 10
done
Make sure that only set_up_post_provisioning.sh is registered as an actual bootstrap action, since the next provisioning stage will not start until all bootstrap actions have finished.
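For completeness, registering the script as a bootstrap action from the CLI might look roughly like this (the cluster name, release label, instance settings, and S3 path are all placeholders):

aws emr create-cluster \
  --name "my-cluster" \
  --release-label emr-6.4.0 \
  --applications Name=Hadoop Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --bootstrap-actions Path=s3://path/to/bootstrap/scripts/set_up_post_provisioning.sh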
I hope it helps!
I have a bunch of tests set up with Jest that test Express server endpoints in the same repo. In order to accomplish this, I have Jest spin up an Express server in the beforeAll() method. When I run Jest with the --coverage flag, I get coverage information only for the scripts that were run in order to start Jest, and no reporting on the scripts that were triggered by hitting the endpoints. That makes sense, but is there a way to get coverage information on the endpoint code?
Snippet of test code:
const rp = require('request-promise')
const app = require('../app')       // assumed path to the Express app
const config = require('../config') // assumed source of config.PORT

describe('testFunctions', () => {
  beforeAll(done => {
    app.listen(config.PORT, () => {
      done()
    })
  })

  it('hit endpoint', async () => {
    const response = await rp({ uri: '/testFunctions', method: 'POST' })
    expect(response).toEqual('response')
  })
})
I'm trying to get a coverage report for all of the server code hit with the /testFunctions request.
This is the solution that worked for me. It required a bit of refactoring, but I think it was cleaner in the end. In my ecosystem, we are running a Parse Server as middleware with an Express server, but this could apply to anyone who needs to run a separate server with the tests.
So to get my server somewhere that nyc (the coverage reporting tool I'm using) could monitor, I abstracted the server initialization completely out of the Jest test suite and created a dedicated npm script to run it in an entirely separate process.
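The refactor itself essentially amounts to splitting app creation from app.listen, roughly like this (the file names here are assumptions, not the actual ones from my project):

// app.js - builds and exports the Express app without listening
const express = require('express')
const app = express()
// ... middleware, routes, Parse Server mounted as middleware ...
module.exports = app

// server.js - the entry point that `npm start` runs
const app = require('./app')
app.listen(process.env.PORT || 5000)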
NPM Script (package.json):
"scripts": {
"coverage": "nyc --silent npm start & ./listen_on_port_5000.sh && jest test/cloud/integration && kill $(lsof -t -i tcp:5000) && nyc report --reporter=html",
}
listen_on_port_5000.sh
# Block until something is listening on port 5000
while ! nc -z localhost 5000
do
  sleep 0.5
done
So how does this work?
nyc --silent npm start wraps the normal command we would run to start our server with Express and Parse; the prepended nyc with the --silent flag lets nyc run in the background and watch the server code.
Because we know that the server always starts on port 5000, we run the start script in the background and start a separate process running a shell script (listen_on_port_5000.sh) to wait for the server to boot up.
Once the listener script detects anything running on port 5000, the npm script moves onto the jest command (all while the Express server is up and running).
When Jest finishes running, the final script runs a kill script to close the server running on port 5000.
We then run a second nyc command to generate the report from the data collected in the first step. The generated report can be found in your project's directory under /coverage/lcov-report/ (or you can use a different coverage reporter).
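With all of that wired up, the whole pipeline runs with a single command:

npm run coverage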
I have a working Symfony 4.0.1 application running on PHP 7.1.14 (locally) that I would like to deploy to AWS Elastic Beanstalk using the EB CLI
I have a dist package of the application on my master git branch configured for production (vendor folder removed etc) that I am able to successfully deploy to Heroku. Now I need to deploy to AWS EB.
The AWS EB environment has already been set up (although I don't have access to the console). Some environment details are as follows:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.7.7
Tier: WebServer-Standard-1.0
At first, I was able to successfully deploy the application, but accessing the URL gave a 404 error for every page.
I did some googling and found a few articles describing the use of .config files. I have added one named 03_main.config with the following contents.
commands:
  300-composer-update:
    command: "export COMPOSER_HOME=/root && composer.phar self-update -n"
container_commands:
  300-run-composer:
    command: "composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"
  600-update-cache:
    command: "source .ebextensions/bin/update-cache.sh"
  700-remove-dev-app:
    command: "rm web/app_dev.php"
Deploying with this .config file gives the following deployment failure error:
ERROR: [Instance: i-0c5f61f41d55a18bc] Command failed on instance. Return code: 127 Output: /bin/sh: composer.phar: command not found. command 300-composer-update in .ebextensions/03-main.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I understand the purpose of .config files but do not understand what additional configuration is needed to get this Symfony app running.
I guess you should use the full path to composer, like below:
commands:
  100-update-composer:
    command: "export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n"
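If you're not sure where composer actually lives on the instance, you can check from an SSH session (locations vary by platform image):

which composer.phar || sudo find / -name "composer.phar" 2>/dev/null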
I'm attempting to use vsts-npm-auth to get the authentication token for our VSTS package repository. On my development machine I can run the commands
npm install -g vsts-npm-auth
vsts-npm-auth -config path-to-my\.npmrc
and it succeeds in providing me with an authentication token. I'm now trying to recreate this as a build step on VSTS, so I created the PowerShell script auth-vsts.ps1:
$npmrcFile = "$PSScriptRoot\path-to-my\.npmrc";
npm install -g vsts-npm-auth;
vsts-npm-auth -config $npmrcFile;
and added it as a PowerShell task. However, the task fails as follows:
2017-05-30T09:37:41.1082686Z ##[section]Starting: auth-vsts
2017-05-30T09:37:41.1092712Z ==============================================================================
2017-05-30T09:37:41.1092712Z Task : PowerShell
2017-05-30T09:37:41.1092712Z Description : Run a PowerShell script
2017-05-30T09:37:41.1092712Z Version : 1.2.3
2017-05-30T09:37:41.1092712Z Author : Microsoft Corporation
2017-05-30T09:37:41.1092712Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613736)
2017-05-30T09:37:41.1092712Z ==============================================================================
2017-05-30T09:37:41.1112679Z ##[command]. 'd:\a\1\s\auth-vsts.ps1'
2017-05-30T09:37:47.3792461Z C:\NPM\Modules\vsts-npm-auth -> C:\NPM\Modules\node_modules\vsts-npm-auth\bin\vsts-npm-auth.exe
2017-05-30T09:37:47.3792461Z C:\NPM\Modules
2017-05-30T09:37:47.3802239Z `-- vsts-npm-auth#0.25.0
2017-05-30T09:37:47.3802239Z
2017-05-30T09:37:47.3802239Z
2017-05-30T09:37:47.3802239Z vsts-npm-auth v0.25.0.0
2017-05-30T09:37:47.3802239Z -----------------------
2017-05-30T09:37:47.3802239Z Creating npmrcFile. Path: D:\a\1\s\.npmrc
2017-05-30T09:37:47.3802239Z Getting new credentials for source:https://our-domain/_packaging/SharedLib/npm/registry/, scope:vso.packaging_write vso.drop_write
2017-05-30T09:37:49.8729702Z Caught exception: The prompt option is invalid because the process is not interactive.
2017-05-30T09:37:49.8729702Z Parameter name: PromptType
2017-05-30T09:37:49.8729702Z Caught exception: The prompt option is invalid because the process is not interactive.
2017-05-30T09:37:49.8729702Z Parameter name: PromptType
2017-05-30T09:37:49.8729702Z Couldn't get an authentication token for //our-domain/_packaging/SharedLib/npm/registry/:_authToken.
2017-05-30T09:37:50.1769711Z ##[error]Process completed with exit code 1.
2017-05-30T09:37:50.1809715Z ##[section]Finishing: auth-vsts
The error gives no indication as to why it can't obtain the credentials. Any ideas why this might be?
I faced this issue while trying to execute it via Visual Studio Code's PowerShell terminal:
vsts-npm-auth -config .npmrc
But running the same command via a plain console solved the issue, and I was redirected to the authentication window.
I can only suggest that, due to internal limitations, PowerShell was unable to open another window.
The error did indicate why it cannot obtain the credentials:
The prompt option is invalid because the process is not interactive.
This is caused by the build agent not running in interactive mode, which means the credential dialog cannot be prompted. If you are using the Hosted Build Agent, the agent runs as a service and there isn't any way to change it to interactive mode.
However, the real issue is that if you want to use the feed in a build step, it does not make sense to prompt for credentials during the build, since the build step cannot enter them automatically. Unless there is some specific requirement in your environment, the general workflow is to commit the .npmrc file generated on your local machine to source control, so that npm can use the auth token in that file to install/publish packages to the VSTS feed.
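For illustration, the committed project-level .npmrc typically just points at the feed (using the placeholder URL from the question); the token itself is stored by vsts-npm-auth in your user-level .npmrc, not in source control:

registry=https://our-domain/_packaging/SharedLib/npm/registry/
always-auth=true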
Inside your project, you can open a terminal and run
vsts-npm-auth -F -C .npmrc
This command refreshes the npm token. Here I set two parameters: -F forces the refresh (if not set, the token is refreshed only if it has already expired), while -C fileName specifies the configuration file.
The VSTS authentication system sometimes authenticates the user by popping up a browser window. If the terminal you're running the command from is not interactive (e.g., an SSH terminal or the VS Code terminal), it won't be able to pop up that window, and the authentication will fail.
This worked for me:
npx vsts-npm-auth -config .npmrc
Most of our front-end development workflow is automated using gulp tasks. We're wondering if there is a way to create a gulp task for starting Redis.
Currently we're using redis-server, which we launch from the command line with redis-server. We'd like to be able to do something like gulp redis. What would this entail?
You could spawn a child process that starts up Redis. This basically just runs the shell command used to start your Redis instance, so you can pass options to it as well, just as you would when starting it from your terminal:
var gulp = require('gulp');
var child_process = require('child_process');

gulp.task('redis-start', function() {
  // exec runs the command in a shell and buffers stdout/stderr,
  // handing them to the callback when the process exits.
  child_process.exec('redis-server', function(err, stdout, stderr) {
    console.log(stdout);
    if (err !== null) {
      console.log('exec error: ' + err);
    }
  });
});
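Note that exec buffers output and only invokes its callback when the process exits, and redis-server normally keeps running until killed. If you want to see the server log stream live, a spawn-based variant (just a sketch) would be:

gulp.task('redis-start', function() {
  // spawn streams the child's output straight to this process's stdio
  var redis = child_process.spawn('redis-server', { stdio: 'inherit' });
  redis.on('close', function(code) {
    console.log('redis-server exited with code ' + code);
  });
});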
If you are using OS X you can install redis through Homebrew:
brew install redis
and adjust it to start during OS startup as described in the Homebrew formula:
To have launchd start redis at login:
ln -sfv /usr/local/opt/redis/*.plist ~/Library/LaunchAgents
Then to load redis now:
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.redis.plist
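You can then verify that Redis is up (redis-cli ships with the same Homebrew package):

redis-cli ping

which should reply with PONG.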
I think this is better and easier than inventing different hacks to start/stop Redis with gulp.