Is there any provision to push messages between queues by changing the queue name dynamically in Spring Cloud Stream, e.g. @Input("suffix" + "QueueA") and @Input("suffix" + "QueueB")? - rabbitmq

Is there any provision to push messages between two queues by changing the queue name dynamically in Spring Cloud Stream, e.g. @Input("suffix" + "SampleQueueA") and @Input("suffix" + "SampleQueueB")?
This use case is with Spring Cloud Stream, using RabbitMQ as the messaging server.
I tried pushing messages into two different queues by dynamically changing the queue name.
# Input bindings used for testing
spring:
  rabbitmq:
    host: 127.0.0.1
    virtual-host: /defaultVH
    username: guest
    password: guest
  cloud:
    stream:
      bindings:
        ClientSampleQueueA:
          binder: rabbit-A
          contentType: application/x-java-object
          group: groupA
          destination: ClientSampleQueueA
        VendorSampleQueueA:
          binder: rabbit-A
          contentType: application/x-java-object
          group: groupA
          destination: VendorSampleQueueA
        # cloud.stream.bindings.input1.destination: customerId-1
        # spring.cloud.stream.bindings.input2.destination: customerId-2
      binders:
        rabbit-A:
          defaultCandidate: false
          inheritEnvironment: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: 127.0.0.1
                virtualHost: /vhA
                username: guest
                password: guest
                port: 5672
                connection-timeout: 10000
interface Sink {

    String INPUT1 = "ClientSampleQueueA";
    String INPUT2 = "VendorSampleQueueA";

    @Input(INPUT1)
    SubscribableChannel input1();

    @Input(INPUT2)
    SubscribableChannel input2();
}
@Bean(name = "sourceChannel")
public MessageChannel localChannel() {
    return new DirectChannel();
}

@Autowired
@Qualifier("sourceChannel")
private MessageChannel localChannel;
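For reference, a binding interface like Sink is normally activated with @EnableBinding; a minimal sketch, with an assumed application class name:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;

@EnableBinding(Sink.class)
@SpringBootApplication
public class StreamApplication {
    public static void main(String[] args) {
        // Binds input1()/input2() to the channels configured in the YAML above.
        SpringApplication.run(StreamApplication.class, args);
    }
}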
I want to resolve the queue dynamically from a method parameter.
private void sendMessage(Object body, Object contentType) {
    localChannel.send(MessageBuilder.createMessage(body,
            new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}

But what is subscribed to localChannel?
As I said in my comment:
spring.cloud.stream.bindings.ClientSampleQueueA.destination=commonDest
spring.cloud.stream.bindings.ClientSampleQueueA.group=ClientSampleQueueA
spring.cloud.stream.rabbit.bindings.ClientSampleQueueA.bindingRoutingKey=ClientSampleQueueA
spring.cloud.stream.bindings.VendorSampleQueueA.destination=commonDest
spring.cloud.stream.bindings.VendorSampleQueueA.group=VendorSampleQueueA
spring.cloud.stream.rabbit.bindings.VendorSampleQueueA.bindingRoutingKey=VendorSampleQueueA
and
spring.cloud.stream.bindings.sourceChannel.destination=commonDest
spring.cloud.stream.rabbit.bindings.sourceChannel.routingKeyExpression=headers.routeTo
and then
private void sendMessage(Object body, Object contentType) {
    localChannel.send(MessageBuilder.withPayload(body)
            .setHeader(MessageHeaders.CONTENT_TYPE, contentType)
            .setHeader("routeTo", "VendorSampleQueueA")
            .build());
}
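To see the routing end to end, here is a minimal sketch of two listeners on the bindings above; it assumes the Sink interface is active via @EnableBinding and that this class is registered as a Spring bean:
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.stereotype.Component;

@Component
public class RoutedListeners {

    // Receives messages routed with routing key ClientSampleQueueA.
    @StreamListener(Sink.INPUT1)
    public void onClient(Object payload) {
        System.out.println("commonDest.ClientSampleQueueA received: " + payload);
    }

    // Receives messages routed with routing key VendorSampleQueueA.
    @StreamListener(Sink.INPUT2)
    public void onVendor(Object payload) {
        System.out.println("commonDest.VendorSampleQueueA received: " + payload);
    }
}
With these properties the actual queue names are destination.group (commonDest.ClientSampleQueueA and commonDest.VendorSampleQueueA), each bound to the commonDest exchange with its own routing key, so a routeTo header of VendorSampleQueueA should reach only the second listener.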

Related

How to expose RSK node to an external network?

I am having problems exposing my RSK node to an external IP.
My startup command looks as follows:
java \
-cp $HOME/Downloads/rskj-core-3.0.1-IRIS-all.jar \
-Drsk.conf.file=/root/bitcoind-lnd/rsk/rsk.conf \
-Drpc.providers.web.cors=* \
-Drpc.providers.web.ws.enabled=true \
co.rsk.Start \
--regtest
This is my rsk.conf:
rpc {
    providers {
        web {
            cors: "*",
            http {
                enabled = true
                bind_address = "0.0.0.0"
                hosts = ["localhost", "0.0.0.0"]
                port: 4444
            }
        }
    }
}
The API is accessible from localhost, but from an external network I get error 400. How do I expose it to an external network?
You should add your external IP to hosts. Adding just 0.0.0.0 is not enough to indicate that all IPs are valid. Port forwarding also needs to be enabled for the port number that you have configured in rsk.conf, which in this case is the default value of 4444.
rpc {
    providers {
        web {
            cors: "*",
            http {
                enabled = true
                bind_address = "0.0.0.0"
                hosts = ["localhost", "0.0.0.0", "216.58.208.100"]
                port: 4444
            }
        }
    }
}
where 216.58.208.100 is your external IP
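To verify the node is reachable from outside, here is a minimal probe in plain Java; the IP and port are the example values above, and eth_blockNumber is a standard JSON-RPC method that RSK nodes serve:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class RskProbe {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://216.58.208.100:4444").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        String body = "{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // 200 with a hex block number means the RPC is exposed; a 400 here usually
        // means the caller's Host value is missing from the hosts list in rsk.conf.
        System.out.println("HTTP " + conn.getResponseCode());
        try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            while (sc.hasNextLine()) {
                System.out.println(sc.nextLine());
            }
        }
    }
}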

How to configure spring-cloud-gateway to use sleuth to log request/response body

I am looking to develop a gateway server based on spring-cloud-gateway:2.0.2-RELEASE and would like to utilize Sleuth for logging purposes. I have Sleuth running, since when I write to the log I see the Sleuth details (span id, etc.), but I am hoping to see the body of messages being logged automatically. Is there something I need to do to get Sleuth to log the request/response out of the box with Spring Cloud Gateway?
Here are the request headers that arrive at my downstream service:
headers:
{ 'x-request-foo': '2a9c5e36-2c0f-4ad3-926c-cb20d4428462',
forwarded: 'proto=http;host=localhost;for="0:0:0:0:0:0:0:1:51720"',
'x-forwarded-for': '0:0:0:0:0:0:0:1',
'x-forwarded-proto': 'http',
'x-forwarded-port': '80',
'x-forwarded-host': 'localhost',
'x-b3-traceid': '5bd33eb8050c7a32dfce6adfe68b06ca',
'x-b3-spanid': 'ba202a6d6f3e2893',
'x-b3-parentspanid': 'dfce6adfe68b06ca',
'x-b3-sampled': '0',
host: 'localhost:8080' },
Gradle file in the gateway service..
buildscript {
    ext {
        kotlinVersion = '1.2.61'
        springBootVersion = '2.0.6.RELEASE'
        springCloudVersion = 'Finchley.RELEASE'
    }
}
dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-sleuth:2.0.2.RELEASE"
        mavenBom 'org.springframework.cloud:spring-cloud-gateway:2.0.2.RELEASE'
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
dependencies {
    implementation('org.springframework.cloud:spring-cloud-starter-sleuth')
    implementation('org.springframework.cloud:spring-cloud-starter-gateway')
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    implementation("org.jetbrains.kotlin:kotlin-reflect")
    testImplementation('org.springframework.boot:spring-boot-starter-test')
}
and finally the application.yml file for the gateway service...
server:
  servlet:
    contextPath: /
  port: 80
spring:
  application:
    name: api.gateway.ben.com
  sleuth:
    trace-id128: true
    sampler:
      probability: 1.0
  cloud:
    gateway:
      routes:
      - id: admin-ui-2
        predicates:
        - Path=/admin-ui-2/echo/*
        filters:
        - SetPath=/fred
        - AddRequestHeader=X-Request-Foo, 2a9c5e36-2c0f-4ad3-926c-cb20d4428462
        - AddResponseHeader=X-Response-Foo, Bar
        uri: http://localhost:8080
logging:
  pattern:
    level: "[%X{X-B3-TraceId}/%X{X-B3-SpanId}] %-5p [%t] %C{2} - %m%n"
  level:
    org.springframework.web: DEBUG
Spring Cloud Gateway can already log the request and response; you just need to change your log level to TRACE.
logging:
  level:
    org.springframework: TRACE
Or, to be more precise, to log just the request/response:
logging:
  level:
    org.springframework.core.codec.StringDecoder: TRACE
Another option is to use a filter in Spring Cloud Gateway and log the request/response there with any logger, such as Log4j:
routeBuilder.route(id,
        r -> {
            return r.path(path).and().method(requestmethod).and()
                    .header(routmap.getRequestheaderkey(), routmap.getRequestheadervalue()).and()
                    .readBody(String.class, requestBody -> {
                        return true;
                    }).filters(f -> {
                        f.rewritePath(rewritepathregex, replacement);
                        f.prefixPath(perfixpath);
                        f.filter(logFilter);
                        return f;
                    }).uri(uri);
        });
You need to define this "logFilter" filter yourself; it contains the logging logic.
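A minimal sketch of what such a filter could look like (an assumption on my part, since the answer does not show it): it implements Spring Cloud Gateway's GatewayFilter contract and logs the request path and response status via SLF4J; capturing full bodies would require decorating the request/response.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

public class LogFilter implements GatewayFilter {

    private static final Logger log = LoggerFactory.getLogger(LogFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Log the inbound request before passing it down the chain.
        log.info("Request: {} {}", exchange.getRequest().getMethod(), exchange.getRequest().getURI());
        // Log the response status once the downstream call completes.
        return chain.filter(exchange)
                .then(Mono.fromRunnable(() ->
                        log.info("Response status: {}", exchange.getResponse().getStatusCode())));
    }
}
An instance of it (new LogFilter()) is what f.filter(logFilter) above expects.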

Masstransit cannot access host machine RabbitMQ from a docker container

I created a simple .NET Core console application with Docker support. The following MassTransit code fails to connect to the RabbitMQ instance on the host machine, but a similar implementation using RabbitMQ.Client is able to connect to it.
MassTransit throws:
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect failed: ctas@192.168.0.9:5672/ ---> RabbitMQ.Client.Exceptions.BrokerUnreachableException:
Host machine IP: 192.168.0.9
Using MassTransit:
string rabbitMqUri = "rabbitmq://192.168.0.9/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });
    cfg.ReceiveEndpoint(host,
        assetServiceQueue, e =>
        {
            e.Consumer<AddNewAssetReceivedConsumer>();
        });
});

bus.Start();
Console.WriteLine("Service Running.... Press enter to exit");
Console.ReadLine();
bus.Stop();
Using RabbitMQ Client
public static void Main()
{
    var factory = new ConnectionFactory();
    factory.UserName = "ctas";
    factory.Password = "ctas#123";
    factory.VirtualHost = "watcherindustry";
    factory.HostName = "192.168.0.9";

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "hello",
                             durable: false,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
        };
        channel.BasicConsume(queue: "hello",
                             autoAck: true,
                             consumer: consumer);

        Console.WriteLine(" Press [enter] to exit.");
        Console.ReadLine();
    }
}
Dockerfile:
FROM microsoft/dotnet:1.1-runtime
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "TestClient.dll"]
I created an example and was able to connect to my host using the preview package from MassTransit.
Start rabbitmq in docker and expose ports on the host
docker run -d -p 5672:5672 -p 15672:15672 --hostname my-rabbit --name some-rabbit rabbitmq:3-management
Build and run console app.
docker build -t dotnetapp .
docker run -d -e RABBITMQ_URI=rabbitmq://guest:guest@172.17.0.2:5672 --name some-dotnetapp dotnetapp
To verify you are receiving messages, run
docker logs some-dotnetapp --follow
you should see the following output
Application is starting...
Connecting to rabbitmq://guest:guest@172.17.0.2:5672
Received: Hello, World [08/12/2017 04:35:53]
Received: Hello, World [08/12/2017 04:35:58]
Received: Hello, World [08/12/2017 04:36:03]
Received: Hello, World [08/12/2017 04:36:08]
Received: Hello, World [08/12/2017 04:36:13]
...
Notes:
172.17.0.2 was the my-rabbit container's IP address, but you can replace it with your machine's IP address.
http://localhost:15672 is the RabbitMQ management console; log in with guest as username and password.
Lastly, portainer.io is a very useful application for visually inspecting your local Docker environment.
Thanks for the response. I managed to resolve this issue. My findings are as follows.
To connect to a RabbitMQ instance on another Docker container, both containers have to be connected to the same network. To do this:
Create a network:
docker network create -d bridge my_bridge
Connect both the app and RabbitMQ containers to that network:
docker network connect my_bridge <container name>
For the MassTransit URI, use the RabbitMQ container's IP on that network, or the container name.
To connect to the RabbitMQ instance of the host machine from an app in a Docker container, the MassTransit URI should include the machine name (I tried the IP; that did not work).
Try using the virtual host in the MassTransit configuration too; not sure why you decided to omit it.
var host = cfg.Host("192.168.0.9", "watcherindustry", hst =>
{
    hst.Username(userName);
    hst.Password(password);
});
Look at Alexey Zimarev's comment on your question. If your RabbitMQ runs in a container, then it should be in your docker-compose file, and you should use that entry in your endpoint definition to connect to RabbitMQ, because Docker creates an internal network that your source code stays agnostic of...
rabbitmq:
  container_name: "rabbitmq-yournode01"
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=yourusergoeshere
    - RABBITMQ_DEFAULT_PASS=yourpasswordgoeshere
    - RABBITMQ_DEFAULT_VHOST=vhost
  volumes:
    - rabbit-volume:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
In your app settings you should have something like:
"ConnectionString": "host=rabbitmq:5672;virtualHost=vhost;username=yourusergoeshere;password=yourpasswordgoeshere;timeout=0;prefetchcount=1",
And if you use EasyNetQ you could do:
_bus = RabbitHutch.CreateBus(_connectionString); // The one above
I hope it helps,
Juan

Unable to configure and run pithos.io using AWS Java SDK

I am trying to configure pithos.io on my server testmbr1.kabuter.com:8081:
Here is how I start pithos.io:
java -jar pithos-0.7.5-standalone.jar -f pithos.yaml
My pithos.yaml:
service:
  host: "0.0.0.0"
  port: 8081
logging:
  level: info
  console: true
  overrides:
    io.pithos: debug
options:
  service-uri: testmbr1.kabuter.com
  default-region: myregion
keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      master: true
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
bucketstore:
  default-region: myregion
  cluster: "45.33.37.148"
  keyspace: storage
regions:
  myregion:
    metastore:
      cluster: "45.33.37.148"
      keyspace: storage
    storage-classes:
      standard:
        cluster: "45.33.37.148"
        keyspace: storage
        max-chunk: "128k"
        max-block-chunk: 1024
cassandra:
  saved_caches_directory: "target/db/saved_caches"
  data_file_directories:
    - "target/db/data"
  commitlog_directory: "target/db/commitlog"
I am using the AWS Java SDK to connect. Below is my JUnit test:
@Test
public void testPithosIO() {
    try {
        ClientConfiguration config = new ClientConfiguration();
        config.setSignerOverride("S3SignerType");
        EndpointConfiguration endpointConfiguration = new EndpointConfiguration("http://testmbr1.kabuter.com:8081",
                "myregion");
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("AKIAIOSFODNN7EXAMPLE",
                "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("myregion")
                .withClientConfiguration(config)
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withEndpointConfiguration(endpointConfiguration).build();
        s3Client.createBucket("mybucket1");
        System.out.println(s3Client.getRegionName());
        System.out.println(s3Client.listBuckets());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My problems are:
1) I was getting:
com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to mybucket1.testmbr1.kabuter.com:8081 [mybucket1.testmbr1.kabuter.com/198.105.254.130, mybucket1.testmbr1.kabuter.com/104.239.207.44] failed: connect timed out
This was fixed by adding a mybucket1.testmbr1 CNAME record pointing to testmbr1.kabuter.com.
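As an aside, a possible alternative to creating a CNAME per bucket is path-style access (this assumes the AWS SDK for Java v1 client builder shown above); requests then go to testmbr1.kabuter.com:8081/mybucket1 instead of mybucket1.testmbr1.kabuter.com:8081:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withClientConfiguration(config)
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        // The endpoint configuration already carries the region, so withRegion is omitted.
        .withEndpointConfiguration(endpointConfiguration)
        // Path-style: bucket name in the path rather than in the hostname.
        .withPathStyleAccessEnabled(true)
        .build();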
2) While trying to create a bucket with s3Client.createBucket("mybucket1"), I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d98b7908-d11e-458a-be27-254b136f344a), S3 Extended Request ID: d98b7908-d11e-458a-be27-254b136f344a
How do I get this working? pithos.io seems to have limited documentation.
Any pointers?
Since my endpoint was using a non-standard port:
http://testmbr1.kabuter.com:8081
I had to define service-uri in pithos.yaml with the port as well:
service-uri: testmbr1.kabuter.com:8081
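With the port included, the signature the server computes should match the client's. As a quick re-check, the same calls from the question's JUnit test should now succeed:
s3Client.createBucket("mybucket1");
s3Client.listBuckets().forEach(b -> System.out.println(b.getName()));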

Terraform cannot connect Chef provisioner with ssh

I cannot get Terraform's SSH to connect via a private AWS key pair for Chef provisioning; the error looks to just be a timeout:
aws_instance.app (chef): Connecting to remote host via SSH...
aws_instance.app (chef): Host: 96.175.120.236:32:
aws_instance.app (chef): User: ubuntu
aws_instance.app (chef): Password: false
aws_instance.app (chef): Private key: true
aws_instance.app (chef): SSH Agent: true
aws_instance.app: Still creating... (5m30s elapsed)
Error applying plan:
1 error(s) occurred:
* dial tcp 96.175.120.236:32: i/o timeout
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Here is my Terraform plan; note the SSH settings. The key_name setting is set to my AWS key pair name, and ssh_for_chef.pem is the private key.
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
provider "aws" {
region = "us-east-1"
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
}
resource "aws_instance" "app" {
ami = "ami-88aa1ce0"
count = "1"
instance_type = "t1.micro"
key_name = "ssh_for_chef"
security_groups = ["sg-c43490e1"]
subnet_id = "subnet-75dd96e2"
associate_public_ip_address = true
provisioner "chef" {
server_url = "https://api.chef.io/organizations/xxxxxxx"
validation_client_name = "xxxxxxx-validator"
validation_key = "/home/user01/Documents/Devel/chef-repo/.chef/xxxxxxxx-validator.pem"
node_name = "dubba_u_7"
run_list = [ "motd_rhel" ]
user_name = "user01"
user_key = "/home/user01/Documents/Devel/chef-repo/.chef/user01.pem"
ssl_verify_mode = "false"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}
}
Any ideas?
I'm not sure if we had the same problem, since you didn't specify whether you were able to SSH to the instance.
In my case, I was running Terraform from within the VPC, and the connection was allowed by a security group, which can't be used with a public IP.
The solution is simple (but you will have to use the new conditional interpolations of Terraform v0.8.0):
Define this variable: variable "use_public_ip" { default = true }
Then, inside the connection section of the chef provisioner, add the following line:
host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
If you wish to use the public IP, set the variable to true; otherwise, set it to false.
I use this for AWS:
connection {
  user = "ubuntu"
  host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
}