I have a newbie Directus question:
How do I create an endpoint and attach it to my project?
I've tried to follow the docs, in vain:
Create a Directus project
npm init directus-project example-project
SQLite / Admin / Password
cd example-project; npx directus start
I can access the Directus admin at http://0.0.0.0:8055/admin/content
CTRL+C
Create an endpoint
cd .. (moving out of my Directus project), then npm init directus-extension
endpoint / demo-directus-endpoint / javascript
Modify the route from / to /hello in src/index.js (see the snippet after these steps)
cd demo-directus-endpoint; npm run build
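For reference, after that change the src/index.js produced by the JavaScript endpoint template looks roughly like this (a sketch matching the built output shown further down):
export default (router) => {
  router.get('/hello', (req, res) => res.send('Hello, World!'));
};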
Deploy the extension inside the Directus project
https://docs.directus.io/extensions/creating-extensions/
To deploy your extension, you have to move the output from the dist/ folder into your project's ./extensions/<extension-folder>/<extension-name>/ folder. <extension-folder> has to be replaced by the extension type in plural form (e.g. interfaces). <extension-name> should be replaced with the name of your extension.
cd ../example-project
mkdir ./extensions/endpoints/demo
cp -R ../demo-directus-endpoint/dist/index.js ./extensions/endpoints/demo
index.js looks like this:
"use strict";module.exports=e=>{e.get("/hello",((e,l)=>l.send("Hello, World!")))};
npx directus start
17:43:40 ✨ Loaded extensions: demo
17:43:40 ⚠️ PUBLIC_URL should be a full URL
17:43:40 ⚠️ Spatialite isn't installed. Geometry type support will be limited.
17:43:40 ✨ Server started at http://0.0.0.0:8055
Trying to GET the URL http://0.0.0.0:8055/hello:
curl http://0.0.0.0:8055/hello => {"errors":[{"message":"Route /hello doesn't exist.","extensions":{"code":"ROUTE_NOT_FOUND"}}]}
17:43:55 ✨ request completed GET 404 /hello 8ms
What do I need to do to get Hello, World! back when I curl http://0.0.0.0:8055/hello?
Thank you for your help.
Answer found: custom endpoints are mounted under the extension's folder name, so my route lives under /demo.
curl http://0.0.0.0:8055/demo/hello => Hello, World!
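To illustrate the prefix, it is simply the folder name under extensions/endpoints/, so copying the same dist/index.js into a differently named folder (hello-api below is only an example) would expose the route under that name instead:
mkdir -p ./extensions/endpoints/hello-api
cp ../demo-directus-endpoint/dist/index.js ./extensions/endpoints/hello-api/
npx directus start
curl http://0.0.0.0:8055/hello-api/hello   # => Hello, World!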
Related
I am a complete beginner, so I apologize in advance.
I have installed npm with these commands in the terminal:
1.curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
2.nvm install node
Then I start it like this:
http-server -a localhost
Starting up http-server, serving ./public
Available on:
http://localhost:8081
and I have an index.html in my Documents folder that I would like to display. I have tried just putting the whole path in the browser, like http://localhost:8081/Documents/testServer/index.html
But that doesn't work
You must install the tools inside the folder that contains the index.html file, as below.
First: Open the folder that contains index.html
Second: Install the tools
1.curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
2.nvm install node
Third: Open the live server at:
http://localhost:8081/Documents/testServer/index.html
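In other words, a minimal sketch (the ~/Documents/testServer path is only an assumption about where your index.html lives):
cd ~/Documents/testServer    # folder that contains index.html
http-server -a localhost     # serves this directory (or ./public if that subfolder exists)
# then open the URL it prints, e.g. http://localhost:8081/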
Currently I'm trying to build a container serving a VueJS application via Cloud Native Buildpacks (CNB).
I already have a working Dockerfile that builds the VueJS app in production mode and then copies the result into an nginx image, but I would like to try CNB.
So I created an empty VueJS project for testing via vue create vue-tutorial and am trying to do something like what is described at https://cli.vuejs.org/guide/deployment.html#heroku, but using CNB.
Does anyone know a working recipe for doing that with CNB?
P.S. Currently I'm trying to build it with
pack build spa --path . \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --buildpack gcr.io/paketo-buildpacks/nginx
but I get the following error (and I'm not sure I'm on the right track):
===> DETECTING
ERROR: No buildpack groups passed detection.
ERROR: Please check that you are running against the correct path.
ERROR: failed to detect: no buildpacks participating
ERROR: failed to build: executing lifecycle: failed with status code: 100
UPDATE
My current Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:1.19-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
We chatted about this in Slack, but I wanted to capture it here too:
pack build --buildpack heroku/nodejs --buildpack https://cnb-shim.herokuapp.com/v1/heroku-community/static yourimage
This command may do what you want. The static buildpack used in that example has not yet been converted to a Cloud Native Buildpack, but the shim may allow you to build a workable artifact. Then run your image with something like docker run -it -e PORT=5000 -p 5000:5000 yourimagename
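Putting the two steps together, a minimal sketch would be (yourimage is just a placeholder name):
pack build --buildpack heroku/nodejs \
  --buildpack https://cnb-shim.herokuapp.com/v1/heroku-community/static \
  yourimage
docker run -it -e PORT=5000 -p 5000:5000 yourimage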
I have a very basic integration configured for GitLab CI, but it fails almost at the beginning, when it has to clone the code.
My configuration is this:
image: node:latest

stages:
  - build
  - test

cache:
  paths:
    - node_modules/
    - dist/

build-prod:
  stage: build
  script:
    - npm install
    - npm run build-prod
  artifacts:
    paths:
      - node_modules/
      - dist/

test_with_karma:
  stage: test
  script: ng test
And the error that I get is this:
Running with gitlab-runner 11.7.0 (8bb608ff)
on fakehost 2eaf11ea
Using Docker executor with image node:latest ...
Pulling docker image node:latest ...
Using docker image sha256:8c67bfd7b95bdc535edc4a4144f5392b0f73efd6385fbcb47747d028d7059359 for node:latest ...
Running on runner-2eaf11ea-project-56-concurrent-0 via fakehost...
Cloning repository...
Cloning into '/builds/redacted/frontend'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@working-domain.com/redacted/frontend.git/': The requested URL returned error: 403
/bin/bash: line 65: cd: /builds/redacted/frontend: No such file or directory
ERROR: Job failed: exit code 1
What is the problem here?
Check if this is covered by gitlab-org/gitlab-ce issue 39469
YAY - it works for me. This problem seems to have multiple solutions.
The one that worked for me is #44855.
To summarize: being an Administrator on GitLab does not mean you have the "access" to do whatever you want in GitLab.
The "unable to access" permission check applies to the person who is logged into GitLab and running the job.
To fix the problem, the person / account running the job must be a member (Master) of the project.
This applies to private projects.
It is not necessary to make a private project public, even though that appears to fix the problem. GitLab suggests you must have HTTPS for the project to work, but you can use HTTP.
SOLUTION - add your account to the project, even if you are the Administrator.
And:
Conrad has described it correctly.
You need to have rights to the project to run a pipeline; however, as administrator, you can start any pipeline.
I've seen the case where a user who was an Admin in GitLab could push his commit from the command line, although theoretically he had no rights to the project - and the pipeline failed.
This inconsistency needs to be fixed: either an Admin user should not be able to push/start a pipeline when he has no rights for it, or he should automatically be granted all rights to all projects. I'd prefer the first, because it separates GitLab administration from project rights. Sometimes I prefer not having full rights, just like working as non-root under Linux.
I have a working Symfony 4.0.1 application running on PHP 7.1.14 (locally) that I would like to deploy to AWS Elastic Beanstalk using the EB CLI
I have a dist package of the application on my master git branch configured for production (vendor folder removed etc) that I am able to successfully deploy to Heroku. Now I need to deploy to AWS EB.
The AWS EB environment has already been set up (although I don't have access to the console). Some environment details are as follows:
Platform: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.7.7
Tier: WebServer-Standard-1.0
At first, I was able to successfully deploy the application, but accessing the URL gave a 404 error for every page.
I did some googling and found a few articles describing the use of .config files. I have added one named 03_main.config with the following contents.
commands:
  300-composer-update:
    command: "export COMPOSER_HOME=/root && composer.phar self-update -n"

container_commands:
  300-run-composer:
    command: "composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"
  600-update-cache:
    command: "source .ebextensions/bin/update-cache.sh"
  700-remove-dev-app:
    command: "rm web/app_dev.php"
Deploying with this .config file gives the following deployment failure error:
ERROR: [Instance: i-0c5f61f41d55a18bc] Command failed on instance. Return code: 127 Output: /bin/sh: composer.phar: command not found. command 300-composer-update in .ebextensions/03-main.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I understand the purpose of .config files but do not understand what additional configuration is needed to get this Symfony app running.
I guess you should use the full path to composer, like below:
  100-update-composer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n
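Applied to the config above, a minimal sketch of the relevant parts of 03_main.config would be as follows (the /usr/bin/composer.phar path is an assumption; check where composer.phar actually lives on your instance):
commands:
  100-update-composer:
    command: "export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update -n"

container_commands:
  300-run-composer:
    command: "export COMPOSER_HOME=/root && /usr/bin/composer.phar install --no-dev --optimize-autoloader --prefer-dist --no-interaction"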
In the official docs we can see:
# docker build github.com/creack/docker-firefox
It works fine for me. docker-firefox is a repository with a Dockerfile in its root dir.
Then I want to build a Redis image at the exact version 2.8.10:
# docker build github.com/docker-library/redis/tree/99c172e82ed81af441e13dd48dda2729e19493bc/2.8.10
2014/11/05 16:20:32 Error trying to use git: exit status 128 (Initialized empty Git repository in /tmp/docker-build-git067001920/.git/
error: The requested URL returned error: 403 while accessing https://github.com/docker-library/redis/tree/99c172e82ed81af441e13dd48dda2729e19493bc/2.8.10/info/refs
fatal: HTTP request failed
)
I got the error above. What's the right format for building a Docker image from a GitHub repo?
docker build url#ref:dir
Git URLs accept context configuration in their fragment section,
separated by a colon :. The first part represents the reference that
Git will check out, this can be either a branch, a tag, or a commit
SHA. The second part represents a subdirectory inside the repository
that will be used as a build context.
For example, run this command to use a directory called docker in the
branch container:
docker build https://github.com/docker/rootfs.git#container:docker
https://docs.docker.com/engine/reference/commandline/build/
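Applying that syntax to the repository from the question, something like the following should check out that commit and use the 2.8.10 subdirectory as the build context (assuming that directory and its Dockerfile exist at that commit):
docker build https://github.com/docker-library/redis.git#99c172e82ed81af441e13dd48dda2729e19493bc:2.8.10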
The thing you specified as the repo URL is not a valid git repository. You will get an error if you try
git clone github.com/docker-library/redis/tree/99c172e82ed81af441e13dd48dda2729e19493bc/2.8.10
The valid URL for this repo is github.com/docker-library/redis. So you may want to try the following:
docker build github.com/docker-library/redis
But this will not work either. To build from GitHub, docker requires a Dockerfile in the repository root; however, this repo doesn't provide one. So I suggest you simply clone the repo and build the image using a local Dockerfile (see the sketch below).
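A minimal sketch of that local route (it assumes the 2.8.10/ subdirectory with its Dockerfile exists at the commit you check out):
git clone https://github.com/docker-library/redis.git
cd redis
git checkout 99c172e82ed81af441e13dd48dda2729e19493bc   # commit from the question
docker build -t redis:2.8.10 2.8.10/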
One can use the following example, which sets up a CentOS 7 container for testing the ORC file format. Make sure to escape the # sign:
$ docker build https://github.com/apache/orc.git\#:docker/centos7 -t orc-centos7