Getting "AccessRules: Account does not have the right to perform the operation" when using Postman to hit the ejabberd register API

What version of ejabberd are you using?
17.04
What operating system (version) are you using?
ubuntu 16.04
How did you install ejabberd (source, package, distribution)?
package
What did not work as expected? Are there error messages in the log? What
was the unexpected behavior? What was the expected result?
I used Postman to make an HTTP request to the ejabberd register API. ejabberd is set up, and the admin interface is running properly at http://localhost:5280/admin.
The URL of the HTTP request is http://localhost:5280/api/register
Body -
{
  "user": "bob",
  "host": "example.com",
  "password": "SomEPass44"
}
Header - [{"key":"Content-Type","value":"application/json","description":""}]
Response -
{
  "status": "error",
  "code": 32,
  "message": "AccessRules: Account does not have the right to perform the operation."
}
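For reference, the same request can be reproduced outside Postman with curl (same URL, body, and header as above; the trailing guard just keeps the snippet from aborting when no server is listening):

```shell
# Same request as in Postman, as a curl call.
URL="http://localhost:5280/api/register"
PAYLOAD='{"user":"bob","host":"example.com","password":"SomEPass44"}'
# Succeeds only when ejabberd is actually listening on port 5280.
curl -s -X POST "$URL" -H "Content-Type: application/json" -d "$PAYLOAD" || true
```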
I searched a lot and figured out that it requires some changes in the ejabberd.yml file. My yml file is available at the link attached.
THIS LINK CONTAINS YML FILE
Any help would be great.

In the config file /opt/ejabberd/conf/ejabberd.yml, find the api_permissions section and change the who and what values of "public commands". Compare your config with the one in this post:
http://www.centerofcode.com/configure-ejabberd-api-permissions-solve-account-not-right-perform-operation-issue/
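As a rough sketch of the kind of change meant here (the exact who rule depends on your setup; the ip rule below only allows calls from localhost, and listing register under what is the part that unblocks this error):

```yaml
api_permissions:
  "public commands":
    who:
      - ip: "127.0.0.1/8"
    what:
      - register
      - unregister
```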

Related

The error "EOF" occurs on MinIO console login

I am trying to set up secure access to a stand-alone MinIO server using a Docker container. I copied the private.key and public.crt files to /root/.minio/certs. The ports are mapped like this: 9000:443 and 9001:9001.
When I access an image uploaded to MinIO through HTTPS, it works well. But when I try to log in to the MinIO web console, I get the simple error message "EOF". Here is the capture image of the console.
It is the message returned from the API https://{$my_domain}:9001/api/v1/login; the full response is as follows.
{
  "code": 500,
  "detailedMessage": "EOF",
  "message": "invalid Login"
}
Any idea to solve this error?
I had the same error. The reason was that I had forgotten to change
MINIO_SERVER_URL=http... to MINIO_SERVER_URL=https...
I hope it helps.
I found the cause of the problem myself. I had forgotten to change the server URL in the env_file, so it remained localhost:
MINIO_SERVER_URL=http://localhost:9000
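In other words, the env file the container reads should point the console at the public HTTPS URL instead of localhost. A sketch with a hypothetical domain (MINIO_BROWSER_REDIRECT_URL is only needed if the console sits behind a reverse proxy; both names and values here must be adapted to your deployment):

```shell
# env_file for the MinIO container; my-domain.example is a placeholder.
MINIO_SERVER_URL=https://my-domain.example:9000
MINIO_BROWSER_REDIRECT_URL=https://my-domain.example:9001
```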

How to deploy Strapi to an Apache cPanel

I'm setting up a Strapi install in my Apache cPanel (WHM on CentOS 7), and can't find a proper way to deploy it. I've managed to get it running, but when I try to access the dashboard (/admin), it just shows the index page (the one in public/index).
Is this the proper way to deploy Strapi to an Apache server?
Is the "--quickstart" setting only for testing purposes, or can this be used in Production? If so, what are the pre-deployment steps I need to take?
This is for a simple project that requires easy to edit content that will be grabbed via API manually from another cPanel installation.
Reading through the Strapi docs, I could only find deployment information about Heroku, Netlify and other third-party services such as these, nothing on hosting it yourself on Apache/cPanel.
I've tried setting up a "--quickstart" project locally, getting it working, and then deploying via Bitbucket Pipelines. After that, I just went into the cPanel terminal and started it, though the aforementioned problem occurs: I can't access the admin dashboard.
Here's my server.json configuration:
Production
{
  "host": "api.example.com",
  "port": 1337,
  "production": true,
  "proxy": {
    "enabled": false
  },
  "cron": {
    "enabled": false
  },
  "admin": {
    "autoOpen": false
  }
}
Development
{
  "host": "localhost",
  "port": 1337,
  "proxy": {
    "enabled": false
  },
  "cron": {
    "enabled": false
  },
  "admin": {
    "autoOpen": false
  }
}
There are no console errors, nor 404s when trying to access it.
Edit
Regarding deployment with the --quickstart setting:
there are many features (mainly related to searching) that don't work properly with SQLite (due to its lack of proper index support), not to mention possible slowness due to the disk speed and raw IOPS of the disk.
A suggestion on how to implement:
Respectfully, to deploy Strapi you likely need to do one of the following:
1. build a Docker container for it
2. make a script to deploy it
3. use SSH and do it manually
4. use a CI/CD platform and a script to deploy it
In summary:
Strapi is not your typical "copy the files and start Apache" flat-file system. Strapi itself is designed to run as a service, similar to Apache/Nginx/MySQL etc. They are all services (though Strapi does need Apache/Nginx/Traefik in front of it to handle SSL via proxying).
If you see the index page when you visit /admin, it's because the admin panel has not been built.
Please run yarn build before starting your application.
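A minimal sketch of the build-then-start sequence, run from the Strapi project root (the guard only keeps the snippet from aborting outside a real project):

```shell
# Build the admin panel, then start Strapi as a service.
export NODE_ENV=production
# Without this build step, /admin falls back to public/index.
yarn build 2>/dev/null || echo "run this inside a Strapi project with yarn installed"
# yarn start   # then start (or restart) the application
```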

Implementing the Local Passport Strategy for the Hyperledger Composer REST server: "Cannot GET /auth/local", "status": 404

I'm trying to implement the Local Passport Strategy for Hyperledger Composer Rest Server.
To achieve this, I did the following:
First, I installed passport-local by running the following command:
npm install -g passport-local
In my home folder I created a file called "envvars.txt" with the following content:
COMPOSER_PROVIDERS='{
  "local": {
    "provider": "local",
    "module": "passport-local",
    "usernameField": "username",
    "passwordField": "password",
    "authPath": "/auth/local",
    "callbackURL": "/auth/local/callback",
    "successRedirect": "/",
    "failureRedirect": "/",
    "setAccessToken": true,
    "callbackHTTPMethod": "post"
  }
}'
Then, in order to set the environment variable COMPOSER_PROVIDERS, I ran the following command:
source envvars.txt
After that I started the composer-rest-server using the following specifications:
When I went to localhost:3000/explorer, http-requests were blocked (as expected) because I was not authenticated.
So far so good.
But when I tried to go to address localhost:3000/auth/local (in order to authenticate), this was not possible ... the web browser gave me an error message, the beginning of which was as follows:
{"error":{"statusCode":404,"name":"Error","message":"Cannot GET /auth/local","status":404,"stack":"Error: Cannot GET /auth/local\n at raiseUrlNotFoundError
What went wrong here?
Any help would be much appreciated.
I think your problem is that you are not persisting the data. In envvars.txt you need to specify where to persist the data, because right now there is no database in which to store the user and password.
As explained in the official docs, you need to persist your data, for example in MongoDB.
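Concretely, the docs describe a COMPOSER_DATASOURCES environment variable for this. A sketch pointing the REST server at a local MongoDB (the host and database values here are assumptions, and the loopback-connector-mongodb module must also be installed):

```shell
# Requires: npm install -g loopback-connector-mongodb and a running MongoDB.
export COMPOSER_DATASOURCES='{
  "db": {
    "name": "db",
    "connector": "mongodb",
    "host": "localhost",
    "database": "composer"
  }
}'
```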

'Unauthorized' when pushing images to an SSL Artifactory Docker registry

I'm sorry if this topic is duplicated; I was not able to find anything similar to this problem.
Our Docker clients v17.X+ (Docker for Mac & Docker for Linux) are unable to push images to an SSL v2 registry, but are successfully authenticated for pushes to an insecure v2 registry (a CNAME) that serves the same machine. The output is always the same: unauthorized, even when I docker login correctly.
The weird thing is: with our old Docker clients (v1.6) we are able to log in and push Docker images to the secure v2 Docker registry without any problem, using the credentials file stored at ~/.dockercfg. My Nginx appears to be working just fine. Any ideas about what I'm missing here?
I'm attaching both credentials configuration files, if anyone wants to check:
Docker client: v.17
~/.docker/config.json
{
  "auths": {
    "https://secure-docker-registry.intranet": {
      "auth": "someAuth",
      "email": "somemail@gmail.com"
    }
  },
  "credsStore": "osxkeychain"
}
Obs: In Docker for Mac's case, I tried both with 'credsStore' and without it.
Obs2: Even when allowing anonymous users to push images, I still get unauthorized for this registry.
Obs3: The logs are not very clear about this problem.
Obs4: Artifactory is configured using an LDAP group.
Docker client: v.1.6.2
~/.dockercfg
{
  "secure-docker-registry.intranet": {
    "auth": "someAuth",
    "email": "somemail@gmail.com"
  },
  "insecure-docker-registry.intranet": {
    "auth": "someAuth",
    "email": "somemail@gmail.com"
  }
}
Artifactory Pro's version: 5.4.2

ETL Pull failing, error message giving mixed messages

When following the instructions at http://developer.gooddata.com/article/loading-data-via-api, I always get an HTTP 400 error:
400: Neither expected file "upload_info.json" nor archive "upload.zip" found (is accessible) in ""
When I HTTP GET the same path that I used for the HTTP PUT, the file downloads just fine.
Any pointers to what I'm probably doing wrong?
GoodData is going through a migration from AWS to RackSpace.
Try changing, in all GET/POST/PUT requests:
secure.gooddata.com to na1.secure.gooddata.com
secure-di.gooddata.com to na1-di.gooddata.com
You can check the datacenter where the project is located via the /gdc/projects/{projectId} resource (the "project.content.cluster" field).
For example:
https://secure.gooddata.com/gdc/projects/myProjectId:
{
  "project" : {
    "content" : {
      "cluster" : "na1",
      ....
For AWS this field has an empty value; "na1" means RackSpace.
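The mapping from the cluster field to the correct hosts can be sketched as a small shell helper (hostnames taken from the answer above; an empty cluster means the legacy AWS hosts):

```shell
# Pick API hosts based on the project's "cluster" value.
cluster="na1"    # value read from /gdc/projects/{projectId}
case "$cluster" in
  na1) api_host="na1.secure.gooddata.com"; di_host="na1-di.gooddata.com" ;;
  *)   api_host="secure.gooddata.com";     di_host="secure-di.gooddata.com" ;;
esac
echo "$api_host $di_host"
```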