I'm using RabbitMqBundle in Symfony 4.
What I would like to achieve is to publish a message (a notification in my case) and, by routing key, choose whether to store the message in the DB, send it by email, or both.
I'm focused on the topic exchange, but I can't figure out how to reach this goal. Maybe I haven't completely understood the mechanics of RabbitMQ; I'm completely new to it.
This is my configuration
old_sound_rabbit_mq:
    connections:
        default:
            #url: '%env(RABBITMQ_URL)%'
            url: 'amqp://guest:guest@localhost:5672'
            vhost: '/'
            lazy: false
            connection_timeout: 3
            read_write_timeout: 3
    producers:
        notifications:
            connection: default
            exchange_options: {name: 'notifications', type: topic}
    consumers:
        store_notifications:
            connection: default
            exchange_options: {name: 'notifications', type: topic}
            queue_options:
                name: 'notifications'
                routing_keys:
                    - 'notification.store'
                    # - 'notification.*' # this will match everything
            callback: App\Consumer\Notification\DbHandler
        email_notifications:
            connection: default
            exchange_options: {name: 'notifications', type: topic}
            queue_options:
                name: 'notifications'
                routing_keys:
                    - 'notification.email'
            callback: App\Consumer\Notification\EmailHandler
In this case I can publish a message to only one of the routing keys: notification.store or notification.email.
I would like to have something like publish($msg, ['notification.store', 'notification.email']). I know a consumer can listen on multiple routing keys, including wildcards, but I can't figure out how to configure it.
Is this possible?
I think you can do it like this:
If you just want to store in the DB, use routing key notification.store.
If you just want to send an email, use routing key notification.email.
If you want to do both, use routing key notification.both.
Then bind your queues to the exchange with these routing keys:
store_notifications: [notification.store, notification.both]
email_notifications: [notification.email, notification.both]
This way, a message with routing key notification.store goes only to store_notifications, and one with notification.email goes only to email_notifications, but a message with routing key notification.both goes to both queues. You can then publish it with, e.g., $producer->publish($msg, 'notification.both').
Configuration (note that each consumer needs its own queue name here; if both consumers shared the queue name 'notifications', messages would be load-balanced between them instead of delivered to both):
consumers:
    store_notifications:
        connection: default
        exchange_options: {name: 'notifications', type: topic}
        queue_options:
            name: 'store_notifications'
            routing_keys:
                - 'notification.store'
                - 'notification.both'
        callback: App\Consumer\Notification\DbHandler
    email_notifications:
        connection: default
        exchange_options: {name: 'notifications', type: topic}
        queue_options:
            name: 'email_notifications'
            routing_keys:
                - 'notification.email'
                - 'notification.both'
        callback: App\Consumer\Notification\EmailHandler
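As an illustration of why this binding scheme works, here is a small self-contained JavaScript sketch of topic-exchange matching (simplified: '*' matches exactly one dot-separated word; RabbitMQ's '#', which matches zero or more words, is omitted for brevity):

```javascript
// Simplified topic-exchange matching: '*' matches exactly one dot-separated
// word; literal segments must match exactly. (RabbitMQ also supports '#'.)
function topicMatches(pattern, routingKey) {
  const p = pattern.split('.');
  const k = routingKey.split('.');
  if (p.length !== k.length) return false;
  return p.every((seg, i) => seg === '*' || seg === k[i]);
}

// The bindings from the configuration above.
const bindings = {
  store_notifications: ['notification.store', 'notification.both'],
  email_notifications: ['notification.email', 'notification.both'],
};

// Which queues receive a message published with the given routing key?
function queuesFor(routingKey) {
  return Object.keys(bindings).filter((q) =>
    bindings[q].some((b) => topicMatches(b, routingKey))
  );
}

console.log(queuesFor('notification.store')); // [ 'store_notifications' ]
console.log(queuesFor('notification.both'));  // [ 'store_notifications', 'email_notifications' ]
```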
Hope this helps.
I have a Laravel (Lumen) login API which generates a JWT using HS256. I then send my bearer token through an Envoy gateway, and Envoy responds that JWT verification failed.
On the official JWT decode site I could successfully decode and verify my bearer token. Here is where I generate my JWT:
{
    $payload = [
        'iss' => config('app.name'),         // issuer of the token
        'sub' => strval($user->ID),          // subject of the token
        'username' => $user->username,
        'iat' => time() - 500,               // time when the JWT was issued
        'exp' => time() + config('jwt.ttl'), // expiration time
        'alg' => 'HS256',
        'kid' => 'ek4Z9ouLmGnCoezntDXMxUwmjzNTBqptKNkfaqc6Ew8'
    ];
    $secretKey = 'helloworld'; // my base64url
    $jwtEnc = JWT::encode($payload, $secretKey, $payload['alg'], $payload['kid']);
    return $jwtEnc;
}
Here is my Envoy config:
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: edge
          http_filters:
          - name: envoy.filters.http.jwt_authn
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
              providers:
                provider1:
                  issuer: 'Lumen'
                  forward: true
                  local_jwks:
                    inline_string: '{"keys": [{"kty": "oct", "use": "sig", "kid": "ek4Z9ouLmGnCoezntDXMxUwmjzNTBqptKNkfaqc6Ew8", "k": "helloworld", "alg": "HS256"}]}' # "k" is here base64url
              rules:
              - match:
                  prefix: "/list"
                requires:
                  provider_name: "provider1"
          - name: envoy.filters.http.router
          route_config:
            virtual_hosts:
            - name: all_domains
              domains: ["*"]
              routes:
              - match:
                  prefix: "/api"
                route:
                  cluster: loginapi
  clusters:
  - name: loginapi
    connect_timeout: 5s
    load_assignment:
      cluster_name: loginapi
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 0.0.0.0
                port_value: 8080
The token is signed and verified with a symmetric algorithm (HS256).
The parameters of the symmetric key are provided in the form of a JSON Web Key in the local_jwks parameter of the Envoy configuration. The key value itself, in the parameter "k", is supposed to be stored in base64url format:
The "k" (key value) parameter contains the value of the symmetric (or other single-valued) key. It is represented as the base64url encoding of the octet sequence containing the key value.
(see RFC 7518, Section 6.4.1)
Base64url encoding is used here in order to support binary keys (i.e., keys in which every byte can take any value in the full range from 0 to 255) for signing.
When the key is used for signing and verification, it has to be decoded to its (potentially binary) raw form.
To stick with the simple example key "helloworld" (of course, just for illustration, not a real key): this key would have to be stored as "k":"aGVsbG93b3JsZA" (the base64url form of "helloworld") in the inline JWK in the configuration, and used in its unencoded form "helloworld" to sign the token. The receiving side likewise uses the base64url-decoded value of "k" to verify the signature.
Summary:
- create a binary key and base64url-encode it
- store the encoded key in the "k" parameter of the local_jwks parameter in the Envoy configuration
- decode the value of "k" to use it as the key to sign or verify the token
You can use the following website https://base64.guru/standards/base64url/encode to encode base64url.
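Alternatively, the encoding step can be done without a website. For example, in Node.js (assuming Node 15 or newer, which has the built-in 'base64url' encoding for Buffer):

```javascript
// Base64url-encode a key for the "k" member of a JWK.
// Node's 'base64url' encoding already omits the '=' padding, as JWK requires.
const key = 'helloworld'; // illustration only, not a real key
const encoded = Buffer.from(key, 'utf8').toString('base64url');
console.log(encoded); // aGVsbG93b3JsZA

// The verifier decodes "k" back to the raw bytes before checking the signature.
const decoded = Buffer.from(encoded, 'base64url').toString('utf8');
console.log(decoded); // helloworld
```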
In "k", put your secret (here, "tempo-secret") base64url-encoded.
My working config:
http_filters:
- name: envoy.filters.http.jwt_authn
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
    providers:
      provider1:
        issuer: 'jwt-issuer'
        forward: true
        local_jwks:
          inline_string: '{"keys":[{"kty":"oct","alg":"HS256","k":"dGVtcG8tc2VjcmV0"}]}'
        # from_headers:
        # - name: authorization
    rules:
    - match:
        prefix: "/"
      requires:
I'm trying to use GCP API Gateway to create a single endpoint for a couple of my backend services (A,B,C,D), each with their own path structure. I have the Gateway configured for one of the services as follows:
swagger: '2.0'
info:
  title: <TITLE>
  description: <DESC>
  version: 1.0.0
schemes:
- https
produces:
- application/json
paths:
  /service_a/match/{id_}:
    get:
      summary: <SUMMARY>
      description: <DESC>
      operationId: match_id_
      parameters:
      - required: true
        type: string
        name: id_
        in: path
      - required: true
        type: boolean
        default: false
        name: bool_first
        in: query
      - required: false
        type: boolean
        default: false
        name: bool_Second
        in: query
      x-google-backend:
        address: <cloud_run_url>/match/{id_}
        deadline: 60.0
      responses:
        '200':
          description: Successful Response
        '422':
          description: Validation
This deploys just fine. But when I hit the endpoint gateway_url/service_a/match/123, it gets routed to cloud_run_url/match/%7Bid_%7D?id_=123 instead of cloud_run_url/match/123.
How can I fix this?
Editing my answer, as I misunderstood the issue.
It seems like the { and } are leaking from your configuration as percent-encoded ASCII (%7B/%7D), so with
x-google-backend:
  address: <cloud_run_url>/match/{id_}
  deadline: 60.0
the backend never sees the actual ID.
So this looks like a path-substitution issue in your YAML file, and you can approach it the same way as in this thread about using path params.
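For completeness: the rewritten URL in the question (cloud_run_url/match/%7Bid_%7D?id_=123) is exactly what API Gateway's CONSTANT_ADDRESS path translation produces, since it keeps the backend address constant and moves path parameters into the query string. A sketch of a possible fix (hedged; verify against your own spec) is to drop the path template from the backend address and switch to APPEND_PATH_TO_ADDRESS:

```yaml
x-google-backend:
  address: <cloud_run_url>        # no /match/{id_} template here
  path_translation: APPEND_PATH_TO_ADDRESS
  deadline: 60.0
```

With APPEND_PATH_TO_ADDRESS the backend receives the full request path (here /service_a/match/123), so the backend route must accept that prefix, or the gateway path must be made to match the backend's path.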
I am looking to add SASL plaintext authentication to Banzai Cloud Kafka. I have added the following settings in my readOnlyConfig section:
readOnlyConfig: |
  auto.create.topics.enable=false
  cruise.control.metrics.topic.auto.create=true
  cruise.control.metrics.topic.num.partitions=1
  cruise.control.metrics.topic.replication.factor=2
  delete.topic.enable=true
  offsets.topic.replication.factor=2
  group.initial.rebalance.delay.ms=3000
  sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
  sasl.enabled.mechanisms=SCRAM-SHA-256
  listener.name.external.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
I have the following in the listener config:
listenersConfig:
  externalListeners:
  - type: "sasl_plaintext"
    name: "external"
    externalStartingPort: 51985
    containerPort: 29094
    accessMethod: LoadBalancer
  internalListeners:
  - type: "plaintext"
    name: "internal"
    containerPort: 29092
    usedForInnerBrokerCommunication: true
  - type: "plaintext"
    name: "controller"
    containerPort: 29093
    usedForInnerBrokerCommunication: false
    usedForControllerCommunication: true
When I try to connect a producer or consumer, Kafka returns an "Authentication/Authorization failed" error.
I am setting the following properties:
session.timeout.ms=60000
partition.assignment.strategy=org.apache.kafka.clients.consumer.StickyAssignor
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
Can anyone suggest what is wrong here?
I've taken the Dapr pub/sub How-To sample and tried to update it to use RabbitMQ.
I've pulled the rabbitmq:3 Docker image from Docker Hub, and it should be listening on amqp://localhost:5672.
I have created a new component file for RabbitMQ called rabbitmq.yaml and placed it in the .dapr/components directory. My component configuration for RabbitMQ is:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-rabbitmq
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: host
    value: amqp://localhost:5672
  - name: durable
    value: true # Optional. Default: "false"
  - name: deletedWhenUnused
    value: false # Optional. Default: "false"
  - name: ttlInSeconds
    value: 60
  - name: prefetchCount
    value: 0
My subscription is defined in subscription-rabbitmq.yaml, located in the same .dapr/components directory, and it looks as follows:
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: rabbitmq-subscription
spec:
  topic: deathStarStatus
  route: /dsstatus
  pubsubname: my-rabbitmq
scopes:
- app2
When I run the sample using the Redis Streams component and subscription, the node application receives the message and displays it within the output of the dapr run command:
dapr publish --topic deathStarStatus --pubsub pubsub --data '{ "message": "This is a test" }'
But when I publish a message to RabbitMQ, it reports "Event Published Successfully", yet the node app doesn't receive the message:
dapr publish --topic deathStarStatus --pubsub my-rabbitmq --data '{ "message": "This is a test" }'
Here is the node app and the command to run it:
dapr run --app-id app2 --app-port 3000 node app2.js
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000

// app.get('/dapr/subscribe', (req, res) => {
//   res.json([
//     {
//       pubsubname: "my-rabbitmq",
//       topic: "deathStarStatus",
//       route: "dsstatus"
//     }
//   ]);
// })

app.post('/dsstatus', (req, res) => {
  console.log(req.body);
  res.sendStatus(200);
});

app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
What am I doing wrong?
It turns out that a "topic" in the Dapr RabbitMQ component is really an exchange, not a queue.
When running "app2" with the RabbitMQ subscription, a queue was created with the app id prepended (e.g. {appid}-{queueName}), but the exchange was not created. I'm not sure whether this is by design or specific to my configuration.
I ended up creating an exchange called "deathStarStatus", binding it to my queue "app2-deathStarStatus", and everything worked.
Uncaught DOMException: Failed to construct 'RTCPeerConnection': Both username and credential are required when the URL scheme is "turn" or "turns".
I am getting this error. Here are the ICE servers I'm using:
var servers = {
  'iceServers': [
    {url: 'turn:numb.viagenie.ca'},
    {url: 'stun:stun01.sipphone.com'},
    {url: 'stun:stun.ekiga.net'},
    {url: 'stun:stun.fwdnet.net'},
    {url: 'stun:stun.ideasip.com'},
    {url: 'stun:stun.iptel.org'},
    {url: 'stun:stun.rixtelecom.se'},
    {url: 'stun:stun.schlund.de'},
    {url: 'stun:stun.l.google.com:19302'},
    {url: 'stun:stun1.l.google.com:19302'},
    {url: 'stun:stun2.l.google.com:19302'},
    {url: 'stun:stun3.l.google.com:19302'},
    {url: 'stun:stun4.l.google.com:19302'},
    {url: 'stun:stunserver.org'},
    {url: 'stun:stun.softjoys.com'},
    {url: 'stun:stun.voiparound.com'},
    {url: 'stun:stun.voipbuster.com'},
    {url: 'stun:stun.voipstunt.com'},
    {url: 'stun:stun.voxgratia.org'},
    {url: 'stun:stun.xten.com'},
    {
      url: 'turn:numb.viagenie.ca',
      credential: 'muazkh',
      username: 'webrtc@live.com'
    },
    {
      url: 'turn:192.158.29.39:3478?transport=udp',
      credential: 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
      username: '28224511:1379330808'
    },
    {
      url: 'turn:192.158.29.39:3478?transport=tcp',
      credential: 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
      username: '28224511:1379330808'
    }
  ]
};
Where is my fault? What can I do?
It's what the error message says. The first server in your list specifies no username or credential:
{url: 'turn:numb.viagenie.ca'},
You also repeat the same server further down, this time with credentials.
These also look like non-working TURN servers copy-pasted off the internet; free TURN servers are a myth.
Also, that is far too many servers. One or two STUN and/or TURN entries will do; too many slows down ICE gathering.
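A minimal valid shape, as a sketch (the TURN URL, username, and credential below are placeholders; substitute your own server), using the modern urls key:

```javascript
// Sketch: one STUN server plus one TURN server with credentials.
// 'urls' is the current standard key ('url' is deprecated).
const servers = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:your.turn.server:3478', // placeholder, use your own server
      username: 'your-username',          // required for turn:/turns:
      credential: 'your-credential'       // required for turn:/turns:
    }
  ]
};
// In a browser: const pc = new RTCPeerConnection(servers);
```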