After I run a fresh Tron node with the command
docker run -it -p 9090:9090 --rm --name tron trontools/quickstart
I'm trying to get blocks:
curl -X POST http://127.0.0.1:9090/wallet/getblockbylimitnext -d '{"startNum": 1, "endNum": 2}'
But instead of an array, the response contains an empty object:
{}
How do I make the node work as described in the documentation here: https://tronprotocol.github.io/documentation-en/api/http/?
It should be a GET request. A corresponding pull request to fix the docs has been created.
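For reference, the same call as a GET with the same payload would look roughly like this (a sketch; double-check the parameters against the HTTP API docs):
curl -X GET http://127.0.0.1:9090/wallet/getblockbylimitnext -d '{"startNum": 1, "endNum": 2}'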
I want to use an ASP.NET Core 6 healthcheck as a docker healthcheck.
The docs state:
Containers that use images based on Alpine Linux can use the included wget in place of curl
But there is no guidance for that, and as usual getting the docker config "just right" is more of an art than a science.
How do I do this?
It's possible to specify a healthcheck via the docker run CLI, or in a docker-compose.yml. I prefer to do it in the Dockerfile.
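For reference, the same check can also be wired up at run time via the docker run CLI (a sketch; the image name is a placeholder, and the command mirrors the Dockerfile examples below):
docker run -d \
  --health-cmd "curl --fail http://localhost:80/healthz || exit 1" \
  --health-start-period 30s \
  --health-interval 5m \
  my-aspnet-image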
Configure
First note that the ASP.NET Core docker images by default expose port 80, not 5000 (so the docs linked in the question are incorrect).
This is the typical way using curl, for a non-Alpine image:
HEALTHCHECK --start-period=30s --interval=5m \
CMD curl --fail http://localhost:80/healthz || exit 1
But curl is unavailable in an Alpine image. Instead of installing it, use wget:
HEALTHCHECK --start-period=30s --interval=5m \
CMD wget --spider --tries=1 --no-verbose http://localhost:80/healthz || exit 1
The HEALTHCHECK options are documented in the Dockerfile reference.
The wget options are documented in the wget manual: --spider prevents downloading the page (similar to an HTTP HEAD), --tries=1 allows Docker to control the retry logic, and --no-verbose (instead of --quiet) ensures errors are logged by Docker so you'll know what went wrong.
Test
For full status:
$ docker inspect --format '{{json .State.Health }}' MY_CONTAINER_NAME | jq
Or:
$ docker inspect --format '{{json .State.Health }}' MY_CONTAINER_NAME | jq '.Status'
# "healthy"
$ docker inspect --format '{{json .State.Health }}' MY_CONTAINER_NAME | jq '.Log[].Output'
# "Connecting to localhost:80 (127.0.0.1:80)\nremote file exists\n"
I'm following the instructions to locally test a Lambda container image (https://docs.aws.amazon.com/lambda/latest/dg/images-test.html), but I am unable to do so.
I've created a sample project to reproduce it: https://gitlab.com/sunnyatticsoftware/sandbox/lambda-dotnet5-webapi (see the README for a step-by-step guide on its generation).
Basically I am using an Amazon dotnet template that generates an AWS Lambda function as a .NET 5 web api using containers.
The project itself is all good. The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:5.0
WORKDIR /var/task
COPY "bin/Release/net5.0/publish" .
Now I want to test it locally using the Amazon Lambda Runtime Interface Emulator (RIE) and these are the steps I follow:
Build project with dotnet build -c Release
Publish artifacts with dotnet publish -c Release
Build docker image with docker build -t lambda-dotnet .
Download the RIE with
mkdir -p ~/.aws-lambda-rie && curl -Lo ~/.aws-lambda-rie/aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && chmod +x ~/.aws-lambda-rie/aws-lambda-rie
I can see the emulator downloaded properly
ls -la ~/.aws-lambda-rie/aws-lambda-rie
-rw-r--r-- 1 diego.martin 1049089 8155136 Feb 22 14:32 /c/Users/diego.martin/.aws-lambda-rie/aws-lambda-rie
Run the emulator passing the lambda image
docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 --entrypoint /aws-lambda/aws-lambda-rie lambda-dotnet:latest
This is where I get the error:
12997dddc6e50aca3020527be30a1479eee9ceef412ab5009b99e9eb8cf1fa67
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "C:/Users/diego.martin/AppData/Local/Programs/Git/aws-lambda/aws-lambda-rie": stat C:/Users/diego.martin/AppData/Local/Programs/Git/aws-lambda/aws-lambda-rie: no such file or directory: unknown.
What am I missing? I am not specifying any ENTRYPOINT in the Dockerfile because I don't have one.
PS: The last step would be to send some lambda event to my container's function with
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
The Lambda Docker images for .NET already include the RIE, so the following is enough (see the repo for further details):
To build image
docker build -t lambda-dotnet:latest .
To run it
docker run -p 9000:8080 lambda-dotnet "LambdaDotNet5::LambdaDotNet5.LambdaEntryPoint::FunctionHandlerAsync"
And then to test it, I can use curl from a different terminal:
curl -vX POST http://localhost:9000/2015-03-31/functions/function/invocations -d @test_request.json --header "Content-Type: application/json"
In the test_request.json file I can have the JSON for the event I want to send to the Lambda.
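For completeness, here is a hedged sketch of what test_request.json might contain, assuming the LambdaEntryPoint expects an API Gateway REST-style proxy event (the path and header are placeholders, and the shape would differ for an HTTP API v2 payload):
cat > test_request.json <<'EOF'
{
  "httpMethod": "GET",
  "path": "/",
  "headers": { "Accept": "application/json" },
  "body": null,
  "isBase64Encoded": false
}
EOF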
I am trying to pull a project ID using the GitLab REST API v4, but when I issue the curl command, I get this error:
"jobs:test:script config should be a string or an array of strings"
The command is this one:
curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"
I tried to single quote it:
'curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"'
When I do that, the failure goes away, but the command is ignored.
So I tried to eval it like this:
eval - 'curl -k -H "PRIVATE-TOKEN: PRIVATE-TOKEN" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=$CI_PROJECT_NAME"'
When I do that, the failure is produced again:
"jobs:test:script config should be a string or an array of strings"
Any clue how I should issue the curl command? I think what is causing the failure is the colon within "PRIVATE-TOKEN: PRIVATE-TOKEN".
This worked for me
Declare job variables in the variables section, e.g.:
variables:
  PRIVATE_TOKEN: "TokenValue"
  PRIVATE_HEADER: "PRIVATE-TOKEN: ${PRIVATE_TOKEN}"
Then, under the script section of the CI file, use the curl command as follows:
script:
  - curl -k -H "${PRIVATE_HEADER}" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=${CI_PROJECT_NAME}"
Using the {} braces around the variable names made sure the ":" issue doesn't show up.
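To then pull just the project ID out of the JSON response, the curl can be piped through jq and the result stored, e.g. as extra script lines (a sketch; it assumes jq is available in the runner image and that the first search hit is the project you want):
  - PROJECT_ID=$(curl -k -s -H "${PRIVATE_HEADER}" "https://gitlab.nbg992.poc.dcn.telekom.de/api/v4/projects?search=${CI_PROJECT_NAME}" | jq -r '.[0].id')
  - echo "Project ID is ${PROJECT_ID}"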
I am trying to add a pass/fail status in Sauce Labs whenever I run an automated test, but I can't figure out how to do it. I use Behat with the Selenium driver. I read the documentation but it didn't help me.
I tried to use the Sauce Labs REST API guide and ran the following in my console:
curl -X PUT \
-s -d '{"passed":true}' \
-u https://USERNAME:APIKEY#saucelabs.com/rest/v1/users/USERNAME
But it doesn't work.
I think you need the session ID.
ownCloud uses:
curl -X PUT -s -d "{\"passed\": $PASSED}" -u $SAUCE_USERNAME:$SAUCE_ACCESS_KEY https://saucelabs.com/rest/v1/$SAUCE_USERNAME/jobs/$SAUCELABS_SESSIONID
see: https://github.com/owncloud/core/blob/master/tests/travis/start_ui_tests.sh#L235
and this ID is pulled from the URL: https://github.com/owncloud/core/blob/master/tests/ui/features/bootstrap/FeatureContext.php#L171
but there might be better ways of getting it.
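If pulling the session ID from the driver proves awkward, the classic Sauce Labs REST API can also list your most recent jobs, which is one way to find the job ID to mark (a sketch; this assumes the v1 REST endpoints, which may differ for newer accounts):
curl -s -u "$SAUCE_USERNAME:$SAUCE_ACCESS_KEY" "https://saucelabs.com/rest/v1/$SAUCE_USERNAME/jobs?limit=1"
The returned job ID can then be used in place of $SAUCELABS_SESSIONID in the PUT call above.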
I am doing an on-prem setup of OpenWhisk using a local CouchDB installation on Ubuntu 16.04, for which I downloaded the code from GitHub. I have followed all the steps of the setup; after the build, I have to run various playbooks.
When I run the playbook below with the command
ansible-playbook -i environments/local openwhisk.yml
I get this error:
"error": "The server is currently unavailable (because it is overloaded or down for maintenance).",
"code": 4
When I checked, I found it comes while executing installRouteMgmt.sh from /openwhisk/ansible/roles/routemgmt/files.
The part of the script which is throwing the error is:
echo Installing routemgmt package.
$WSK_CLI -i -v --apihost "$APIHOST" package update --auth "$AUTH" --shared no "$NAMESPACE/routemgmt" \
-a description "This experimental package manages the gateway API configuration." \
-p gwUser "$GW_USER" \
-p gwPwd "$GW_PWD" \
-p gwUrl "$GW_HOST" \
-p gwUrlV2 "$GW_HOST_V2"
where
APIHOST=172.17.0.1
AUTH=path to auth.whisk.system
WSK_CLI= wsk path
NAMESPACE= whisk.system
This error comes when the DB host value is not resolvable from the controller container, or when the DB the controller is trying to connect to has not been created in CouchDB. Mine was the second case; once the __subjects db was there, it was able to run.
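In case it helps with either scenario, the databases that actually exist in CouchDB (and whether the host is reachable at all) can be checked directly; a sketch, with host and credentials as placeholders for your own DB settings:
curl -s http://DB_USERNAME:DB_PASSWORD@DB_HOST:5984/_all_dbs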