Enable ejabberd web interface in MongooseIM

When I run bin/mongooseim start and MongooseIM is up, opening 127.0.0.1:5280/ in the browser gives me a white page. I think this means that ejabberd_cowboy (which has port 5280) is listening, but I don't get the web interface known from ejabberd.
In ejabberd.cfg I have these lines for Cowboy:
{listen,
[
%% HTTP endpoints
{ {{cowboy_port}}, ejabberd_cowboy, [
{num_acceptors, 10},
{max_connections, 1024},
{modules, [
%% Modules used here should also be listed in the MODULES section.
{"localhost", "/api", mongoose_api,
[{handlers,[mongoose_api_metrics,mongoose_api_users]}]},
{"_", "/http-bind", mod_bosh},
{"_", "/ws-xmpp", mod_websockets}
%% Uncomment to serve static files
%{"_", "/static/[...]", cowboy_static,
% {dir, "/var/www", [{mimetypes, cow_mimetypes, all}]}
%},
]}]},
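For reference, if the commented-out static-file handler were enabled, the uncommented entry would be a complete tuple like the following (a sketch using the same path and options as in the sample; note there must be no comma after the last element of the modules list):

{"_", "/static/[...]", cowboy_static,
    {dir, "/var/www", [{mimetypes, cow_mimetypes, all}]}}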

Related

auth_ldap.log is not appearing when auth_ldap.log = network in RabbitMQ

I've created a rabbitmq.conf and an advanced.config for RabbitMQ, intended to allow LDAP authentication with internal fallback. Because RabbitMQ is dumb and tries to use the installing user's AppData, which is a terrible design for a Windows service, I've also redirected locations with environment variables:
RABBITMQ_BASE = D:\RabbitMQData\
RABBITMQ_CONFIG_FILE = D:\RabbitMQData\config\rabbitmq.conf
RABBITMQ_ADVANCED_CONFIG_FILE = D:\RabbitMQData\config\advanced.config
The config locations appear to be working correctly as they are referenced in the startup information and cause no errors on startup.
rabbitmq.conf (trimmed to relevant portions)
auth_backends.1 = ldap
auth_backends.2 = internal
auth_ldap.servers.1 = domain.local
auth_ldap.use_ssl = true
auth_ldap.port = 636
auth_ldap.dn_lookup_bind = as_user
auth_ldap.log = network
log.dir = D:\\RabbitMQData\\log
log.file.level = info
log.file.rotation.date = $D0
log.file.rotation.size = 10485760
advanced.config
[
{rabbitmq_auth_backend_ldap, [
{ssl_options, [{cacertfile,"D:\\RabbitMQData\\SSL\\ca.pem"},
{certfile,"D:\\RabbitMQData\\SSL\\server_certificate.pem"},
{keyfile,"D:\\RabbitMQData\\SSL\\server_key.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, true}
]},
{user_bind_pattern, ""},
{user_dn_pattern, ""},
{dn_lookup_attribute, "sAMAccountName"},
{dn_lookup_base, "DC=domain,DC=local"},
{group_lookup_base,"OU=Groups,DC=domain,DC=local"},
{vhost_access_query, {in_group, "cn=RabbitUsers,OU=Groups,DC=domain,DC=local"}},
{tag_queries, [
{administrator, {in_group, "CN=RabbitAdmins,OU=Groups,DC=domain,DC=local"}},
{management, {in_group, "CN=RabbitAdmins,OU=Groups,DC=domain,DC=local"}}
]}
]}
].
I'm using auth_ldap.log = network, so there should be an ldap_auth.log file in my log directory which would help me troubleshoot, but it's not there. Why would this occur? I've not seen any documented settings for auth_ldap logging other than .log, so I would assume it would sit with the other logs.
I'm currently running into issues with LDAP, specifically the error LDAP bind error: "xxxx" anonymous_auth. As I'm using simple bind via auth_ldap.dn_lookup_bind = as_user, I should not be getting anonymous authentication. Without the detailed log, however, I can't get additional information.
Okay, it looks like I made two mistakes here:
Going back and re-reading, it seems I misinterpreted the documentation and believed auth_ldap.log was a separate file rather than just a setting. All the LDAP logging goes into the normal RabbitMQ log.
I had pulled Luke Bakken's config from https://groups.google.com/g/rabbitmq-users/c/Dby1OWQKLs8/discussion but the following lines ended up as:
{user_bind_pattern, ""},
{user_dn_pattern, ""}
instead of
    {user_bind_pattern, "${username}"},
    {user_dn_pattern, "${ad_user}"},
I had used a PowerShell script with a here-string to create the config file, which erroneously interpolated those placeholders as empty strings. Fixing that let me log on with "domain\username".
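For anyone hitting the same thing, the mechanics are worth spelling out: in PowerShell, a double-quoted here-string (@"..."@) expands ${...} expressions, while a single-quoted one (@'...'@) emits the text verbatim. A minimal sketch of the difference:

# Expanding here-string: ${ad_user} is read as a variable reference,
# and since $ad_user is undefined it expands to an empty string.
$expanding = @"
{user_dn_pattern, "${ad_user}"},
"@

# Literal here-string: the placeholder survives untouched.
$literal = @'
{user_dn_pattern, "${ad_user}"},
'@

$expanding   # -> {user_dn_pattern, ""},
$literal     # -> {user_dn_pattern, "${ad_user}"},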

vue/cli-plugin-pwa: server requests (with axios) aren’t cached by the service worker

I'm unsuccessfully trying to add PWA support to the project. Server requests aren't cached by the service worker, and request addresses aren't added to Cache Storage. Accordingly, offline mode doesn't work.
Project config:
vue spa
requests to the server using axios library
server responses don’t contain cache-control header
pwa is implemented using the standard vue plugin: @vue/cli-plugin-pwa
PWA config in vue.config.js:
module.exports = {
publicPath: '/',
pwa: {
name: 'Old Vehicles',
manifestOptions: {
name: "Old Vehicles",
display: "standalone",
scope: "/",
start_url: "/"
},
workboxPluginMode: 'GenerateSW',
workboxOptions: {
navigateFallback: '/index.html',
runtimeCaching: [{
urlPattern: new RegExp('^http'),
handler: 'NetworkFirst',
options: {
networkTimeoutSeconds: 2,
cacheName: 'api-cache',
cacheableResponse: {
statuses: [0, 200],
},
},
}]
}
}
};
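For reference, the server requests in question are ordinary axios calls made from component code, e.g. (a hypothetical endpoint; the RegExp('^http') pattern above is matched against the full request URL, so such calls fall under the api-cache rule):

import axios from 'axios';

// hypothetical endpoint; with the NetworkFirst rule above, a successful
// response should be copied into the 'api-cache' cache for offline use
axios.get('https://api.example.com/vehicles')
  .then((res) => console.log(res.data));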
PS: Also this: developer console in the browser -> Application tab -> Installability -> "Page does not work offline". The service worker registers successfully (except that requests aren't cached), and the manifest is recognized. Why does it show such a message?
I'm currently learning about PWAs too. What I read about this issue:
Axios uses XHR requests behind the scenes, not fetch requests.
Because workers run separately from the main thread, service workers are independent of the application they are associated with; as a consequence, synchronous XHR and localStorage cannot be used in a service worker.
So I guess you may need to use standard fetch requests.
The other answer is misleading.
Assuming the service worker is successfully registered, any (https) network request on the main thread (or web worker threads) will be sent via the service worker.
Axios runs on the main thread, not the service worker thread.
XHR requests from the main thread are treated the same as any other method (e.g. js: fetch(), html: <img src>, css: url()): all of them trigger the 'fetch' event in the service worker (presumably to be cached).
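To illustrate that point, a minimal service worker body (illustrative only, not the worker Workbox generates); every request from a controlled page lands in the same hook:

self.addEventListener('fetch', (event) => {
  // fires for fetch(), XHR (axios), <img>, CSS url() ... alike
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});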
The problem is quite likely to be because the network requests in the main thread are http, not https.
If possible, upgrade the server serving the resources from http to https and change the URLs used on the main thread. It is not possible to 'rewrite' the protocol in the service worker.
If you cannot change the code on the main thread for some reason (e.g. a legacy app with no source), then it should be possible to get the server that is sending the document that causes the request (i.e. the js file with fetch('http://.../')/XHR, the html file with <img src="http://.../">, the css file with url(http://.../)) to add a header:
Content-Security-Policy: upgrade-insecure-requests
More info: https://github.com/w3c/ServiceWorker/issues/813
and https://w3c.github.io/webappsec-upgrade-insecure-requests/#goals

ASP.NET Core Web Api route certain requests to microservice running on other port

I have created a SPA which has to communicate with various microservices in the backend.
For deployment and latency reasons, we have to make sure that the frontend only communicates with one endpoint, which then routes to the other microservices internally.
How can I make this work with ASP.NET Core? I looked at Ocelot, which provides something quite similar, but as I see in the documentation I have to configure the IP address/hostname under which the backend will be accessed from the client, and I don't know this information upfront, as it will be determined after deployment (and can be different for every machine, given this service will run on various Edge devices).
Can this be achieved using a simple routing middleware which looks for a certain path in the URL (e.g. /api/otherservice), sends an HTTP request to the responsible microservice (e.g. http://localhost:1234/api/otherservice) and returns the response to the caller?
I am using ASP.NET Core 2.1.
Update: I managed to get Ocelot running and applying the desired routing to the downstream microservice. However, the service where I use Ocelot serves routes itself (the web app frontend and some other backend API routes).
Does anybody know how I can tell Ocelot to fall back to the routes declared in the service it is running in when a route is not contained in ocelot.json (or, the other way round, tell ASP.NET Core to use Ocelot only for routes it can't resolve on its own)?
Update 2:
This is my ocelot config which results in an infinite loop:
{
"ReRoutes": [
{
"DownstreamPathTemplate": "/api/tool/{url}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 5200
}
],
"UpstreamPathTemplate": "/api/tool/{url}",
"UpstreamHttpMethod": [ "Get", "Post", "Put", "Delete" ]
},
{
"DownstreamPathTemplate": "/{everything}",
"DownstreamScheme": "http",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 5100
}
],
"UpstreamPathTemplate": "/{everything}",
"Priority": 0
}
]
}
Omitting the second (catch-all) route successfully routes to the tool route, but then Ocelot can't serve the routes provided by the ASP.NET Core service which contains the Ocelot middleware.
Maybe you want a catch-all ReRoute? Since this type of ReRoute has the lowest priority, it is only used if all other ReRoutes with higher priority fail:
https://ocelot.readthedocs.io/en/latest/features/routing.html#catch-all
{
"DownstreamPathTemplate": "/{url}",
"DownstreamScheme": "https",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 80,
}
],
"UpstreamPathTemplate": "/{url}",
"UpstreamHttpMethod": [ "Get" ]
}
If you don't want this default behavior, you can take a look at the Priority attribute:
https://ocelot.readthedocs.io/en/latest/features/routing.html#priority
{
"UpstreamPathTemplate": "/goods/{catchAll}"
"Priority": 0
}
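As for the update's infinite loop: the catch-all ReRoute points back at the very service hosting Ocelot, so every unmatched request re-enters the same pipeline. One approach people use instead of a catch-all (a hedged sketch, not tested here; it assumes Ocelot's UseOcelot() extension from Ocelot.Middleware, which returns a Task in the versions compatible with ASP.NET Core 2.1, and that ConfigureServices calls services.AddOcelot()) is to branch the pipeline so only downstream paths ever reach Ocelot, letting the service's own MVC routes handle the rest:

using Microsoft.AspNetCore.Builder;
using Ocelot.Middleware;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Only /api/tool/* is handed to Ocelot; the catch-all ReRoute
        // (and the loop it caused) is no longer needed.
        app.MapWhen(
            ctx => ctx.Request.Path.StartsWithSegments("/api/tool"),
            branch => branch.UseOcelot().Wait());

        // Everything else is served by this service's own pipeline.
        app.UseStaticFiles();
        app.UseMvc();
    }
}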

What is the Asterisk 13 configuration for enabling WebRTC?

I want to set up Asterisk 13, running on Ubuntu 16.04 on a local machine, to enable WebRTC. I'm testing with https://www.doubango.org/sipml5/ on Firefox.
I got the sipml5 client connected successfully to Asterisk, but when initiating a call it just says Call in Progress.
HTTP is enabled and bound to port 8088.
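(For reference, that usually corresponds to an /etc/asterisk/http.conf along these lines; a sketch of a typical setup, not my actual file:)

[general]
enabled=yes          ; built-in HTTP server on
bindaddr=0.0.0.0
bindport=8088        ; WebSocket endpoint then lives at ws://host:8088/ws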
This is sip.conf:
[web_rtc]
context=default
host=dynamic
secret=abc101
type=friend
transport=udp,ws,wss,tcp
encryption=yes
avpf=yes
force_avp=yes
icesupport=yes
directmedia=no
disallow=all
allow=opus
allow=ulaw
dtlsenable=yes
dtlsverify=fingerprint
dtlscertfile=/etc/asterisk/ast.pem
dtlscafile=/etc/asterisk/ast.pem
dtlssetup=actpass
rtcp_mux=yes
and this is extensions.conf:
[web_rtc]
exten => 100,1,Answer()
same => n,Playback(hello-world)
same => n,Hangup()
I have tested sipml5 on Ubuntu 18.04 with Asterisk 13 and it worked fine. I suggest adding a STUN server to your rtp.conf file (/etc/asterisk/rtp.conf) with the following line:
stunaddr=stun.l.google.com:19302
Then, in the live demo, configure your ICE servers in the expert mode section by adding [{ url: 'stun:stun.l.google.com:19302'}].
Anyway, I had some problems integrating sipml5 with Asterisk (muting or pausing the call), so I tried other libraries and finally decided on sip.js (https://sipjs.com/), which I recommend.
Hope it helps.
Edit: adding the sipml5 client stack configuration that I used in my JS script:
sipStack = new SIPml.Stack({
realm: 'example.com',
impi: 'sip_user',
impu: 'sip:sip_user@example.com:port',
password: 'super_secret_password',
websocket_proxy_url: 'wss://example.com:port/ws',
outbound_proxy_url: null,
ice_servers: "[{ url: 'stun:stun.l.google.com:19302'}]",
enable_rtcweb_breaker: true,
enable_early_ims: true,
enable_media_stream_cache: true,
sip_headers: [
{ name: 'User-Agent', value: 'IM-client/OMA1.0 sipML5-v1.2016.03.04' },
{ name: 'Organization', value: 'My_company' }
],
events_listener: { events: '*', listener: eventsListener }
});

How to ask the service worker to ignore requests matching a specific URL pattern in Polymer?

My application is built on Polymer v2 and uses the Firebase Auth service for authentication; specifically, I use the login-fire element. For a better experience on mobile devices, I chose to sign in with redirect.
In the "Network" tab of the DevTools (in Chrome) I see that a request containing the /__/auth/handler? pattern is sent to request authentication (for example, from Google, if that provider is used).
With the service worker enabled, this request is caught, and the response is the login page of my application. No login is attempted; the response comes from the service worker, and I get a Network Error from the Firebase API because of a timeout.
When I deploy the app without the service worker, the authentication process works and I can reach the app.
I have tried many ways to configure the service worker to ignore all requests to a URL with the /auth/ pattern, but I failed.
See the latest version of my config file below.
sw-precache-config.js
module.exports = {
globPatterns: ['**\/*.{html,js,css,ico}'],
staticFileGlobs: [
'bower_components/webcomponentsjs/webcomponents-loader.js',
'images/*',
'manifest.json',
],
clientsClaim: true,
skipWaiting: true,
navigateFallback: 'index.html',
runtimeCaching: [
{
urlPattern: /\/auth\//,
handler: 'networkOnly',
},
{
urlPattern: /\/bower_components\/webcomponentsjs\/.*.js/,
handler: 'fastest',
options: {
cache: {
name: 'webcomponentsjs-polyfills-cache',
},
},
},
{
urlPattern: /\/images\//,
handler: 'cacheFirst',
options: {
cacheableResponse: {
statuses: [0, 200],
},
},
},
],
};
Do you have a better solution? Do you notice what I missed?
Thank you for your help.
You can add this to your sw-precache-config.js file:
navigateFallbackWhitelist: [/^(?!\/auth\/)/],
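Note that the Firebase handler in the question lives under /__/auth/, which does not start with /auth/, so the lookahead may need to be /^(?!\/__\/auth\/)/ for the fallback to actually skip it.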
You should only whitelist the paths of your application, which should be known to you.
Everything you do not whitelist will not be served by the service worker.
navigateFallbackWhitelist: [/^\/news\//, /^\/msg\//, /^\/settings\//],
With this example, only news/*, msg/* and settings/* will be delivered; /auth/*, /api/*, ... will not be caught.