Vite-proxy ECONNREFUSED with node v17+ - vue.js

I am using Node v18.12.1 and Vite v3.0.4. Below is my proxy config for connecting to a Node.js REST API from the Vue.js Vite dev server:
proxy: {
  "/api": {
    target: "http://localhost:3000",
    changeOrigin: true,
  }
}
After updating my Node version from v16, I now get this error from the Vite proxy:
[vite] http proxy error:
Error: connect ECONNREFUSED ::1:3000
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1300:16) (x3)
I have heard that since v17, Node favours IPv6 for localhost. How do I fix this?

Since Node v17, DNS lookup results are no longer reordered to put IPv4 first, so localhost can resolve to ::1 while your API only listens on 127.0.0.1. You could run the service you are trying to connect to on ::1 as well, or configure your proxy target to use the IPv4 address explicitly (http://127.0.0.1:3000).
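For example, applied to the proxy block from the question (a minimal sketch: the vite.config.js file name and the defineConfig wrapper are assumptions, only the proxy entries come from the question):

// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        // Point at the IPv4 loopback explicitly so the proxy no longer
        // depends on how "localhost" resolves under Node v17+.
        target: "http://127.0.0.1:3000",
        changeOrigin: true,
      },
    },
  },
});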

Related

Mercure keeps binding to port 80

I'm using the Mercure hub 0.13. Everything works fine on my development machine, but on my test server the hub keeps trying to bind to port 80, resulting in an error, as nginx is already running on port 80.
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm starting the hub with the following command:
MERCURE_PUBLISHER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_PUBLISHER_JWT_ALG=RS256 \
MERCURE_SUBSCRIBER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_SUBSCRIBER_JWT_ALG=RS256 \
./mercure run -config Caddyfile.dev
Caddyfile.dev is as follows:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    {$GLOBAL_OPTIONS}
}

{$SERVER_NAME:localhost:3000}

log

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins *
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
When I provide SERVER_NAME as an environment variable without a domain, SERVER_NAME=:3000, the hub actually starts on port 3000, but it runs in HTTP mode, which only allows anonymous subscriptions and is not what I need.
Server:
Operating System: CentOS Stream 8
Kernel: Linux 4.18.0-383.el8.x86_64
Architecture: x86-64
Full output when trying to start the Mercure hub:
2022/05/10 04:50:29.605 INFO using provided configuration {"config_file": "Caddyfile.dev", "config_adapter": ""}
2022/05/10 04:50:29.606 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile.dev", "line": 3}
2022/05/10 04:50:29.609 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022/05/10 04:50:29.610 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022/05/10 04:50:29.610 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003d6150"}
2022/05/10 04:50:29.627 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/05/10 04:50:29.628 INFO tls finished cleaning storage units
2022/05/10 04:50:29.642 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2022/05/10 04:50:29.643 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc0003d6150"}
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm a bit late, but I hope this will help someone.
As mentioned here, you can specify the http_port manually in your Caddy configuration file. The log line "enabling automatic HTTP->HTTPS redirects" shows why the hub grabs port 80: when a server name is configured, Caddy binds the HTTP port for those redirects, and that port defaults to 80.
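For example, in the global options block at the top of Caddyfile.dev (a sketch; 3001 is an arbitrary free port):

{
    {$GLOBAL_OPTIONS}
    # Serve plain HTTP (and the automatic redirects) on a port other
    # than :80, which nginx already occupies; 3001 is an example value.
    http_port 3001
}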

I can see live app on secured port 443 red5pro

I prepared an Ubuntu server following the docs. I created an SSL certificate for my domain and opened the required ports. I installed Red5 Pro into /usr/local/red5pro/ and the server runs fine. When I go to http://example.com:5080/ I can see the Red5 Pro home page, which is OK. But when I click on broadcast I get the message: "No suitable Publisher found. WebRTC & Flash not supported". OK, maybe that is because it is HTTP, not HTTPS. So I created a test index page in /var/www/test/index.html with a basic configuration like:
var config = {
    protocol: 'wss',
    host: 'example.com',
    port: 443,
    app: 'live',
    streamName: 'abccaccaa',
    rtcConfiguration: {
        iceServers: [{urls: 'stun:stun2.l.google.com:19302'}],
        iceCandidatePoolSize: 2,
        bundlePolicy: 'max-bundle'
    } // See https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/RTCPeerConnection#RTCConfiguration_dictionary
};
Now when I try to broadcast I get: WebSocket connection to 'wss://example.com/live/?id=abccaccaa' failed: Error during WebSocket handshake: Unexpected response code: 404
It looks like there is nothing at example.com/live, and I have not been able to figure out what is wrong for two days. Could someone give me advice, or suggest an alternative application to Red5 Pro?

DataStax Lifecycle Manager Failing to Install New Cluster

We are using Lifecycle Manager to distribute DSE to a new Cassandra cluster. The install fails no matter what we try. The log shows the following (IP addresses redacted):
Meld failed on name="node1_name" ssh-management-address="node1_ip" node-id="37292a99-f324-4967-803c-d9c50ab0b87b" job-id="d413b5ba-ba87-4de4-bac7-2efac334d2d4" stdout="503 Server Error: Connect failed for url: http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status event-resource=http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status
2017-07-10 15:53:43,925 - opsc-meld - ERROR - 503 Server Error: Connect failed for url: http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status event-resource=http://opscenter:8888/api/v1/lcm/internal/nodes/37292a99-f324-4967-803c-d9c50ab0b87b/status " stderr=""

getaddrinfo ENOTFOUND error when connecting to Mongolab over VPN

This only occurs when I'm on a VPN on my Windows 7 machine. Off the VPN, on any public or private Wi-Fi connection, everything runs fine.
C:\node\ultronnode2\node_modules\mongoose\node_modules\mongodb\lib\server.js:228
process.nextTick(function() { throw err; })
^
Error: getaddrinfo ENOTFOUND dsXXXXXX.mongolab.com
at errnoException (dns.js:44:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:94:26)
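The stack trace shows the failure happens in the DNS lookup itself (getaddrinfo), before any MongoDB connection is attempted. A minimal sketch to check what the VPN's resolver returns for the host (the dsXXXXXX placeholder is kept from the post; this diagnostic is an assumption, not part of the original question):

// check-dns.js - reproduce the lookup that the MongoDB driver performs
var dns = require('dns');

dns.lookup('dsXXXXXX.mongolab.com', function (err, address) {
    if (err) {
        // ENOTFOUND here means the VPN's DNS cannot resolve the host
        console.error('lookup failed:', err.code);
    } else {
        console.log('resolved to', address);
    }
});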
Any idea how I can connect my Express app to Mongolab over the VPN?
Thanks in advance.

RabbitMQ Management : webmachine error: path="/api/overview"

After I log in to the RabbitMQ management UI, I get the following error:
Got response code 500 with body
Internal Server Error
The server encountered an error while processing this request:
{error,{error,{badmatch,{error,nxdomain}},
[{rabbit_nodes,cluster_name_default,0},
{rabbit_nodes,cluster_name,0},
{rabbit_mgmt_wm_overview,to_json,2},
{webmachine_resource,resource_call,3},
{webmachine_resource,do,3},
{webmachine_decision_core,resource_call,1},
{webmachine_decision_core,decision,1},
{webmachine_decision_core,handle_request,2}]}}
I see the following error in the log file in /var/log/rabbitmq:
=ERROR REPORT==== 31-Oct-2014::06:20:40 ===
webmachine error: path="/api/overview"
{error,{error,{badmatch,{error,nxdomain}},
[{rabbit_nodes,cluster_name_default,0},
{rabbit_nodes,cluster_name,0},
{rabbit_mgmt_wm_overview,to_json,2},
{webmachine_resource,resource_call,3},
{webmachine_resource,do,3},
{webmachine_decision_core,resource_call,1},
{webmachine_decision_core,decision,1},
{webmachine_decision_core,handle_request,2}]}}
The workers are able to connect to the broker and are receiving messages, and the New Relic plugin for RabbitMQ also seems to be working fine. However, I am unable to log in through the management plugin. Any pointers would be helpful.
I had updated the hostname of the system, and that was causing the issue: the nxdomain in the error above means the node could no longer resolve its own hostname. See the link below:
https://groups.google.com/forum/#!msg/rabbitmq-users/9P-BAwGVHJU/fwOpZPJywwYJ
I added 127.0.0.1 'hostname' in /etc/hosts (a sketch of the entry follows the diagnostics below). That solved the management plugin problem. However, rabbitmqctl still showed the following error; restarting RabbitMQ solved the rabbitmqctl problem as well.
Listing queues ...
Error: unable to connect to node 'rabbit@<hostname>': nodedown
DIAGNOSTICS
===========
attempted to contact: ['rabbit@<hostname>']
rabbit@<hostname>:
* connected to epmd (port 4369) on <hostname>
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* suggestion: hostname mismatch?
* suggestion: is the cookie set correctly?
current node details:
- node name: <nodename>
- home dir: <homedir>
- cookie hash: <cookiehash>
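For reference, the /etc/hosts fix mentioned above amounts to an entry like this, where myhost is a stand-in for the machine's actual hostname (the output of the hostname command):

# /etc/hosts -- map the node's hostname to the IPv4 loopback
127.0.0.1   myhost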