ERROR/MainProcess consumer: Cannot connect to amqp:// (Docker Compose and Celery-RabbitMQ)

I'm trying to make Flask, Celery, and RabbitMQ work together using Docker Compose. I can get everything up and running, but the Celery client isn't able to connect to the RabbitMQ server. When the Celery worker starts, I get this message:
[2017-02-27 17:52:17,466: ERROR/MainProcess] consumer: Cannot connect to amqp://radmin:**#174.138.76.93:5672//: [Errno 104] Connection reset by peer.
Trying again in 2.00 seconds...
[2017-02-27 17:52:19,481: ERROR/MainProcess] consumer: Cannot connect to amqp://radmin:**#174.138.76.93:5672//: Socket closed.
Trying again in 4.00 seconds...
My folder structure for the project is:
├── docker-compose.yml
├── order
│   ├── Dockerfile
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── __pycache__
│   │   └── __init__.cpython-35.pyc
│   ├── requirements.txt
│   ├── main.py
├── celery
│   ├── Dockerfile
│   ├── __init__.py
│   ├── requirements.txt
│   └── tasks.py
├── proxy
│   ├── Dockerfile
│   └── order.conf
The order folder is where the main Flask server lives, and the celery folder holds my Celery tasks. I'm running a separate Celery worker, so that tasks submitted by main.py through
task = celery.send_task('mytasks.outputservice', args=[arg1, arg2])
are received by tasks.py, where the task is registered with @app.task(name='mytasks.outputservice').
I've implemented this locally and it works fine (the remote Celery tasks setup), so I suspect the issue is with the RabbitMQ and Celery connections.
My docker-compose file is:
version: '2.1'
services:
  # Application
  ordermanagement:
    restart: always
    build: ./ordermanagement
    expose:
      - "8080"
    ports:
      - "8080:8080"
    volumes:
      - /usr/src/app/static
    depends_on:
      - rabbit
      - worker
    #env_file: .env
    #command: /usr/local/bin/gunicorn -w 2 --timeout 90 --keep-alive 90 --bind :8080 scheduledpurchases:app
  # RabbitMQ
  rabbit:
    hostname: rabbit
    image: rabbitmq:management
    environment:
      - RABBITMQ_DEFAULT_USER=radmin
      - RABBITMQ_DEFAULT_PASS=radmin
    ports:
      - "5672:5672"   # we forward this port because it's useful for debugging
      - "15672:15672" # here we can access the rabbitmq management plugin
  # Nginx server
  proxy:
    restart: always
    build: ./proxy
    expose:
      - "80"
    ports:
      - "85:80"
    depends_on:
      - ordermanagement
    links:
      - ordermanagement:ordermanagement
  # Celery worker
  worker:
    build: ./ordermanagement_celery
    user: nobody
    depends_on:
      - rabbit
I've seen other questions related to Celery and RabbitMQ, but the majority of those problems were either resolved (IP address resolution) or were some other small issue.
If someone could give me a lead on how to resolve this, I'd appreciate it.
PS: I'm deploying directly to DigitalOcean servers from the terminal. I checked the IP for RabbitMQ (the Celery broker URL); it is the same as the docker-machine IP.
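Since the error log shows the worker dialing the droplet's public IP, one thing worth checking is that, inside the Compose network, containers reach each other by service name. A minimal sketch of building the broker URL against the rabbit service (the variable and env-var names here are illustrative, not from the question):

```python
import os

# Inside the Compose network the broker host should be the service name
# "rabbit" (as defined in docker-compose.yml), not the host's public IP.
user = os.environ.get("RABBITMQ_DEFAULT_USER", "radmin")
password = os.environ.get("RABBITMQ_DEFAULT_PASS", "radmin")
host = os.environ.get("RABBITMQ_HOST", "rabbit")

broker_url = f"amqp://{user}:{password}@{host}:5672//"
print(broker_url)

# With Celery installed, the worker app would then be created as:
# from celery import Celery
# app = Celery("tasks", broker=broker_url)
```

This only works when the worker runs as a Compose service on the same network as rabbit; a worker outside the Compose network would still need a routable address.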

failed to open the plugin manifest /.traefik.yml: open /.traefik.yml: no such file or directory

I'm new to Traefik. I'm trying to develop a plugin and connect it to my web server using Docker Compose, which I use for development. When I run Docker Compose I encounter this error:
/dev/echo-docker-compose $ docker-compose up --build
Starting echo-server ... done
Recreating traefik ... done
Attaching to echo-server, traefik
echo-server | Echo server listening on port 8080.
traefik | time="2022-03-18T07:40:11Z" level=info msg="Configuration loaded from flags."
traefik | time="2022-03-18T07:40:11Z" level=info msg="Traefik version 2.6.1 built on 2022-02-14T16:50:25Z"
traefik | time="2022-03-18T07:40:11Z" level=debug msg="Static configuration loaded {\"global\":{\"checkNewVersion\":true},\"serversTransport\":{\"maxIdleConnsPerHost\":200},\"entryPoints\":{\"traefik\":{\"address\":\":8080\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"udp\":{\"timeout\":\"3s\"}},\"web\":{\"address\":\":80\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"udp\":{\"timeout\":\"3s\"}}},\"providers\":{\"providersThrottleDuration\":\"2s\",\"docker\":{\"watch\":true,\"endpoint\":\"unix:///var/run/docker.sock\",\"defaultRule\":\"Host(`{{ normalize .Name }}`)\",\"swarmModeRefreshSeconds\":\"15s\"}},\"api\":{\"insecure\":true,\"dashboard\":true},\"log\":{\"level\":\"DEBUG\",\"format\":\"common\"},\"pilot\":{\"dashboard\":true},\"experimental\":{\"localPlugins\":{\"traefik-plugin-example\":{\"moduleName\":\"github.com/usergithub/traefik-plugin-example\"}}}}"
traefik | time="2022-03-18T07:40:11Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://doc.traefik.io/traefik/contributing/data-collection/\n"
traefik | 2022/03/18 07:40:11 traefik.go:79: command traefik error: 1 error occurred:
traefik | * failed to open the plugin manifest plugins-local/src/github.com/usergithub/traefik-plugin-example/.traefik.yml: open plugins-local/src/github.com/usergithub/traefik-plugin-example/.traefik.yml: no such file or directory
traefik |
traefik |
traefik exited with code 1
I think I can conclude that the cause of this error is that Traefik can't find the .traefik.yml file, which is quite obvious. But I already have the file in that directory.
This is my full directory structure:
dev/plugins-local/
└── src
└── github.com
└── traefik
└── traefik-plugin-example
├── plugin.go
├── plugin_test.go
├── .traefik.yml
├── go.mod
├── Dockerfile
├── Makefile
├── traefik
├── traefik-plugin.go
├── rules-plugin.go
dev/echo-docker-compose/
└── docker-compose.yml
└── cmd
└── ....
For some reason, Docker Compose can't seem to read .traefik.yml. But it's the same when I run ./traefik --configfile traefik-plugin.yml from dev/plugins-local: it throws the same error about not being able to read .traefik.yml.
Does anyone know how to fix this problem?
This is the relevant code:
On server repo
# dev/echo-docker-compose/.env
PLUGIN_MODULE=github.com/usergithub/traefik-plugin-example
PLUGIN_NAME=traefik-plugin-example
# dev/echo-docker-compose/docker-compose.yml
version: "3.3"
services:
  traefik:
    image: "traefik:v2.6"
    container_name: "traefik"
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--experimental.localPlugins.${PLUGIN_NAME}.moduleName=${PLUGIN_MODULE}"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  echo-server:
    image: "usergithub/echo-server"
    container_name: "echo-server"
    labels:
      - "traefik.enable=true"
      # The domain the service will respond to
      - "traefik.http.routers.echoserver.rule=Host(`echoserver.localhost`)"
      # Allow requests only from the predefined entry point named "web"
      - "traefik.http.routers.echoserver.entrypoints=web"
      - "traefik.http.routers.echoserver.middlewares=echoserver-demo"
      - "traefik.http.middlewares.echoserver-demo.plugin.${PLUGIN_NAME}.headers.DoesPluginWork=YES"
      - "traefik.http.routers.echoserver.tls.certresolver=default"
On plugin repo
# dev/plugins-local/traefik-plugin.go
pilot:
  token: "token"
api:
  dashboard: true
  insecure: true
experimental:
  localPlugins:
    traefik-plugin-example:
      moduleName: github.com/usergithub/traefik-plugin-example
entryPoints:
  http:
    address: ":8000"
    forwardedHeaders:
      insecure: true
providers:
  file:
    filename: rules-plugin.yaml
# dev/plugins-local/src/github.com/usergithub/traefik-plugin-example/.traefik.yml
displayName: Traefik Example Plugin
type: middleware
import: github.com/usergithub/traefik-plugin-example
summary: 'Write plugin'
testData:
  userAgent:
    - Firefox
    - Mozilla/5.0
According to the docs, "There must be a .traefik.yml file at the root of your project describing your plugin", and from what I see in your directory structure, you placed .traefik.yml in the src folder with several subfolders in between.
Putting the file in the right place should solve the problem. Please note that go.mod should also be at the root (and the plugin must use a git tag).
As an additional resource, you can check the demo plugin.
Also, for anyone reading, I must add a disclaimer: Traefik plugins are still an experimental feature and should be used with caution.
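Note also that the error log looks for the manifest under plugins-local/src/github.com/usergithub/traefik-plugin-example/.traefik.yml, while the directory listing in the question places the plugin under src/github.com/traefik/ instead. A layout matching the configured module name (a sketch based on the module name in the question, not a verified fix) would be:

```
dev/plugins-local/
└── src
    └── github.com
        └── usergithub
            └── traefik-plugin-example
                ├── .traefik.yml   # manifest at the plugin root
                ├── go.mod         # also at the plugin root
                ├── plugin.go
                └── plugin_test.go
```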

How to fix Conflicting distribution for my local apt repo?

I was trying to make an apt repo. I have this deb which is not architecture dependant and this is the structure of my repo:
.
├── dists
│ └── testing
│ ├── InRelease
│ ├── main
│ │ ├── Packages
│ │ └── Packages.gz
│ ├── Release
│ └── Release.gpg
├── KEY.gpg
└── pool
└── testing
└── main
└── s
└── savcli
└── savcli_0.0.1_all.deb
I add deb <uri-to-repo> testing main to my sources.list, and I also add the key. But when I run apt update I get these errors:
W: Conflicting distribution: <uri-to-repo> testing InRelease (expected testing but got )
W: Skipping acquire of configured file 'main/binary-amd64/Packages' as repository '<uri-to-repo> testing InRelease' does not seem to provide it (sources.list entry misspelt?)
W: Skipping acquire of configured file 'main/binary-i386/Packages' as repository '<uri-to-repo> testing InRelease' does not seem to provide it (sources.list entry misspelt?)
I'm not sure what's wrong or how I can fix this. I don't want to make a flat repo and add [trusted=yes]. So what have I done wrong?
It seems that you are missing configuration details in your Release file. You can see examples of what a Release file should look like at https://lists.debian.org/debian-mentors/2006/04/msg00294.html, and see "Configuring reprepro": https://wiki.debian.org/DebianRepository/SetupWithReprepro#Generating_OpenPGP_keys
I used apt-ftparchive to generate my Release file, using the following command:
apt-ftparchive \
-o APT::FTPArchive::Release::Origin="my app" \
-o APT::FTPArchive::Release::Label="my app" \
-o APT::FTPArchive::Release::Architectures="arm64" \
-o APT::FTPArchive::Release::Components="main" \
-o APT::FTPArchive::Release::Description="Apt repository for my app" \
-o APT::FTPArchive::Release::Codename="stable" \
-o APT::FTPArchive::Release::Suite="main" \
release ./dists/stable/ > ./dists/stable/Release
The Release file output will look something similar to:
Architectures: arm64
Codename: stable
Components: main
Date: Tue, 27 Dec 2022 14:53:28 +0000
Description: Apt repository for my app
Label: My App
Origin: My App
Suite: main
MD5Sum:
f948e3b9ecc3ee1bb89490eec5a897e8 197 Release
50101e65a457f7adfdb11be49f36e2e4 600 main/binary-arm64/Packages
ff153fcc8b9f9f49d0c917afd97bff72 454 main/binary-arm64/Packages.gz
SHA1:
3f9bdf152d1060d28faef385f22e2b3a39bdba95 197 Release
5648155184dabe68f8b01f09d4c70afee215f289 600 main/binary-arm64/Packages
7582505ccae55b2d624f8d3ae42ec5104ddad057 454 main/binary-arm64/Packages.gz
SHA256:
1abb7494951bbdeb04a5e0fc8124b35144a7d22b16c6716e18a140135328fa82 197 Release
afbb25d8792a6377c8b63b8fe3754419337eed319cfb546945cecc95d3207f3b 600 main/binary-arm64/Packages
7cfc6394fc938e0824c82ea15a03dfa1e72c5d344e8d85abeb4d317b0b643fc3 454 main/binary-arm64/Packages.gz
SHA512:
d8c327407e2eca79a58db5934ebbe617198778fa34b9a506dc6c9ac57f3f680658e8c069f15af58eb75a27596166b6e2ee6991861e05f5346ea503874ab2aa88 197 Release
e275bdc954cfbf05d525c0a1c94c709411caf84bc5dd8c9e888b78d6108c4d93f7a5b31f42466ed21b740e1e69a14784f79f2815d019faaa8b411f9a30562ea1 600 main/binary-arm64/Packages
d8ccc972408816d791130076a859249b822c19183b746137ee61d765134ef59ab9e72ce43c9755c11c8540dfb55f7d573796036137f4f8296f35d8cafb79b3b6 454 main/binary-arm64/Packages.gz
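For the testing layout shown in the question, the same approach might look like the following sketch (paths are hypothetical, run from the repository root, and it assumes apt-ftparchive and gpg are available). Setting Suite/Codename to testing is what addresses the "expected testing but got ()" warning, and placing Packages under main/binary-amd64/ matches the path apt says it cannot acquire:

```shell
# Generate the package index under the architecture-specific path apt expects
mkdir -p dists/testing/main/binary-amd64
apt-ftparchive packages pool/testing > dists/testing/main/binary-amd64/Packages
gzip -k -f dists/testing/main/binary-amd64/Packages

# Generate the Release file with Suite/Codename set to "testing"
apt-ftparchive \
  -o APT::FTPArchive::Release::Suite="testing" \
  -o APT::FTPArchive::Release::Codename="testing" \
  -o APT::FTPArchive::Release::Components="main" \
  -o APT::FTPArchive::Release::Architectures="amd64 i386" \
  release dists/testing > dists/testing/Release

# Re-sign: detached Release.gpg plus clearsigned InRelease
gpg --armor --detach-sign --output dists/testing/Release.gpg dists/testing/Release
gpg --clearsign --output dists/testing/InRelease dists/testing/Release
```

After regenerating, the stale InRelease/Release files from the original tree should be replaced by these, or apt will keep reading the old metadata.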

How to copy a whole directory recursively with an npm script on Windows 10 Powershell?

Right now I have the following tree:
test
├───1
│ package.json
│
└───2
└───src
│ asd.txt
│
└───asd
asd - Copy (2).txt
asd - Copy.txt
asd.txt
What I want is a script that, when run in dir 1, goes to dir 2 and copies the whole src dir recursively from there to dir 1. So in the end I would have a similar src in 1 as there is in 2.
When I cd to directory 1 and run npm run build:ui, which is defined in package.json as
"scripts": {
"build:ui": "cd ..\\2 && copy src ..\\1"
}
it starts doing kind of what I want, but not quite: it copies stuff from directory 2 to 1. The problem is that it doesn't copy the whole directory with all of its subdirectories and contents; instead it just copies the files directly inside 2/src/. In other words, here's what the tree looks like after the operation:
test
├───1
│ asd.txt
│ package.json
│
└───2
└───src
│ asd.txt
│
└───asd
asd - Copy (2).txt
asd - Copy.txt
asd.txt
So only the file asd.txt got copied.
Other configurations I have tried without success include:
"scripts": {
"build:ui": "cd ..\\2 && copy -r src ..\\1"
}
"scripts": {
"build:ui": "cd ..\\2 && Copy-Item -Recursive src ..\\1"
}
"scripts": {
"build:ui": "cd ..\\2 && cp -r src ..\\1"
}
...none of which are even valid.
Consider utilizing the xcopy command instead of copy, as it better suits your requirement.
Redefine your build:ui script in the scripts section of your package.json file as follows:
Scripts section of package.json:
"scripts": {
"build:ui": "xcopy /e/h/y/q \"../2/src\" \"./src\\\" > nul 2>&1"
}
Running:
When you cd to the directory named 1, (i.e. the directory that contains the package.json with the aforementioned build:ui script defined in it), and then run:
npm run build:ui
it will produce the resultant directory structure:
test
├── 1
│   ├── package.json
│   └── src
│   ├── asd
│   │   ├── asd - Copy (2).txt
│   │   ├── asd - Copy.txt
│   │   └── asd.txt
│   └── asd.txt
└── 2
└── src
├── asd
│   ├── asd - Copy (2).txt
│   ├── asd - Copy.txt
│   └── asd.txt
└── asd.txt
As you can see, the src folder inside folder 2, and all of its contents, has been copied to folder 1.
Explanation:
The following provides a detailed breakdown of the aforementioned xcopy command:
Options:
/e - Copy folders and subfolders, including empty folders.
/h - Copy hidden and system files and folders.
/y - Suppress prompt to confirm overwriting a file.
/q - Do not display file names while copying.
Notes:
Each pathname has been encased in JSON escaped double quotes, i.e. \"...\"
The ./src\\ part has a trailing backslash (\), which has been JSON escaped (\\), to inform xcopy that the destination is a directory. This also ensures the src directory is created if it doesn't already exist.
The > nul 2>&1 part suppresses the confirmation log that states how many files were copied.
Related information:
It's worth noting that on Windows, npm uses cmd.exe as the default shell for running npm scripts, regardless of the CLI tool you're using (e.g. PowerShell). You can verify this by using npm config to check the script-shell setting. For instance, run the following command:
npm config get script-shell
Edit:
If you want your resultant directory structure to be like this:
test
├── 1
│   ├── asd
│   │   ├── asd - Copy (2).txt
│   │   ├── asd - Copy.txt
│   │   └── asd.txt
│   ├── asd.txt
│   └── package.json
└── 2
└── src
├── asd
│   ├── asd - Copy (2).txt
│   ├── asd - Copy.txt
│   └── asd.txt
└── asd.txt
This time the contents of the src folder inside the folder named 2 have been copied to folder 1, but not the containing src folder itself.
Then you need to define your npm script as follows:
"scripts": {
"build:ui": "xcopy /e/h/y/q \"../2/src\" \".\" > nul 2>&1"
}
Note: the destination path has been changed from \"./src\\\" to \".\".
For something like this, I might use an approach similar to the one below.
Modify your npm script (build:ui) to call a PowerShell script (build.ui.ps1) that is located in the same directory as the package.json file.
"scripts": {
  "build:ui": "powershell -NoProfile -ExecutionPolicy Unrestricted -Command ./build.ui.ps1"
},
Create the aforementioned PowerShell script with the following contents.
param(
    $srcParentDir = '2',
    $srcDir = 'src',
    $srcDestDir = '1'
)
Set-Location (Get-Item $PSScriptRoot).Parent.FullName
Copy-Item -Path "$srcParentDir\$srcDir" -Destination $srcDestDir -Recurse
Run the npm script
npm run build:ui

NSIS - check if process exists (nsProcess not working)

For my NSIS uninstaller, I want to check if a process is running. FindProcDLL is not working under Windows 7 x64, so I tried nsProcess.
I've downloaded the version 1.6 from the website: http://nsis.sourceforge.net/NsProcess_plugin
If I compile nsProcessTest.nsi from the Example folder, I get the following errors:
Section: "Find process" ->(FindProcess)
!insertmacro: nsProcess::FindProcess
Invalid command: nsProcess::_FindProcess
Error in macro nsProcess::FindProcess on macroline 1
Error in script "C:\Users\Sebastian\Desktop\nsProcess_1_6\Example\nsProcessTest.nsi" on line 14 -- aborting creation process
This is line 14 of the example script:
${nsProcess::FindProcess} "Calc.exe" $R0
Does somebody know what is wrong? How can I check if a process is running with NSIS?
NSIS does not find the plug-in, so make sure you copied its files to the correct folder.
NSIS 2.x:
NSIS/
├── Include/
│ └── nsProcess.nsh
└── Plugins/
└── nsProcess.dll
NSIS 3.x:
NSIS/
├── Include/
│ └── nsProcess.nsh
└── Plugins/
├── x86-ansi/
│ └── nsProcess.dll
└── x86-unicode/
└── nsProcess.dll
The file inside Plugins\x86-unicode is nsProcessW.dll renamed to nsProcess.dll (blame the author for making it overly complicated!)
More generally, refer to How can I install a plugin? on the NSIS Wiki.
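Once the plug-in files are in the right folders, a minimal usage sketch could look like the following (using the same Calc.exe example as the question; per the nsProcess documentation, FindProcess writes 0 into the output variable when the process is found):

```nsis
!include "nsProcess.nsh"
!include "LogicLib.nsh"

Section "CheckProcess"
  ${nsProcess::FindProcess} "Calc.exe" $R0
  ${If} $R0 == 0
    MessageBox MB_OK "Calc.exe is running"
  ${Else}
    MessageBox MB_OK "Calc.exe is not running"
  ${EndIf}
  ${nsProcess::Unload}
SectionEnd
```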

Are multiple document roots to serve the same path possible?

Is it possible to have a directory structure on an Apache server similar to this:
└── www
├── root1
│   └── doc1.html
└── root2
└── doc2.html
such that doc1.html and doc2.html are both served on the same path?
(e.g. http://localhost/doc1.html and http://localhost/doc2.html both result in successful requests?)
Do it with aliasing. See https://httpd.apache.org/docs/2.2/mod/mod_alias.html#alias
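A sketch of what that aliasing could look like in httpd.conf (the filesystem paths here are hypothetical):

```apache
# Serve most files from root1...
DocumentRoot "/var/www/root1"

# ...and alias doc2.html from root2 into the same URL space, so both
# http://localhost/doc1.html and http://localhost/doc2.html resolve.
Alias /doc2.html /var/www/root2/doc2.html
```

Depending on the Apache version, root2 may also need an appropriate <Directory> access block (Order/Allow on 2.2, Require all granted on 2.4).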