Service Broker tutorial -- two SQL Server instances on one machine. How to route?

I am following the Completing a Conversation Between Instances tutorial from MSDN. Lesson 2: Creating the Initiator Database shows (at the end) how to create routes on the initiator side (shortened):
...
USE InstInitiatorDB;
CREATE ROUTE InstTargetRoute
WITH SERVICE_NAME =
N'//TgtDB/2InstSample/TargetService',
ADDRESS = N'TCP://MyTargetComputer:4022';
...
USE msdb;
CREATE ROUTE InstInitiatorRoute
WITH SERVICE_NAME =
N'//InstDB/2InstSample/InitiatorService',
ADDRESS = N'LOCAL';
and Lesson 3: Completing the Target Conversation Objects does the same on the target instance:
USE InstTargetDB;
CREATE ROUTE InstInitiatorRoute
WITH SERVICE_NAME =
N'//InstDB/2InstSample/InitiatorService',
ADDRESS = N'TCP://MyInitiatorComputer:4022';
...
USE msdb;
CREATE ROUTE InstTargetRoute
WITH SERVICE_NAME =
N'//TgtDB/2InstSample/TargetService',
ADDRESS = N'LOCAL';
However, the tutorial assumes that the SQL Server instances run on separate machines. How should I change the routing (or anything else) if the two SQL Server instances run on the same machine?

The two instances cannot share the listener port. In Lesson 1 you had this:
...
CREATE ENDPOINT InstTargetEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4022 )
...
and in Lesson 2 you had this:
...
CREATE ENDPOINT InstInitiatorEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4022 )
...
This will not work, as both instances are configured to listen on the same TCP port; one of them has to be different. Let's make the target listen on 4023 instead:
...
CREATE ENDPOINT InstTargetEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4023 )
...
The route from the initiator to the target then has to specify port 4023:
...
CREATE ROUTE InstTargetRoute
WITH SERVICE_NAME =
N'//TgtDB/2InstSample/TargetService',
ADDRESS = N'TCP://MyTargetComputer:4023';
...
Everything else stays the same.
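To verify which port each instance's Service Broker endpoint actually listens on, you can run a quick sanity check against sys.tcp_endpoints on both instances (not part of the tutorial, just a way to confirm the configuration):
-- Run on each instance; the SERVICE_BROKER row shows the endpoint's port and state
SELECT name, port, state_desc
FROM sys.tcp_endpoints
WHERE type_desc = 'SERVICE_BROKER';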

Related

Celery fanout signals with `send_task`

I'm trying to establish communication between different processes running Celery. I successfully sent tasks from one process to others using app.send_task on a Celery instance. I am now struggling to broadcast tasks through a fanout RabbitMQ exchange to all other instances (basically a publish-subscribe pattern for Celery).
It must be related to the routing across exchanges and queues, but I simply can't make it work.
This is the master emitter which broadcasts a task named signal through the default exchange of type fanout:
from celery import Celery
from kombu import Exchange, Queue

app = Celery('emitter',
             broker='pyamqp://test@localhost//',
             backend='db+sqlite:///results.db')

default_queue_name = 'default'
default_exchange_name = 'default'
default_routing_key = 'default'

default_exchange = Exchange(default_exchange_name, type='fanout')
default_queue = Queue(
    default_queue_name,
    default_exchange,
    routing_key=default_routing_key)

app.conf.task_queues = (
    default_queue,
)
app.conf.task_default_queue = default_queue_name
app.conf.task_default_exchange = default_exchange_name
app.conf.task_default_routing_key = default_routing_key

if __name__ == '__main__':
    app.send_task(name='signal', exchange='default')
To my understanding, the routing, queue, and exchange setup on the other app needs to be identical. Thus this is a very similar-looking piece of code, but defining a task that gets called:
from celery import Celery
from kombu import Exchange, Queue

app = Celery('CLIENT_A',
             broker='pyamqp://test@localhost//',
             backend='db+sqlite:///results.db')

default_queue_name = 'default'
default_exchange_name = 'default'
default_routing_key = 'default'

default_exchange = Exchange(default_exchange_name, type='fanout')
default_queue = Queue(
    default_queue_name,
    default_exchange,
    routing_key=default_routing_key)

app.conf.task_queues = (
    default_queue,
)
app.conf.task_default_queue = default_queue_name
app.conf.task_default_exchange = default_exchange_name
app.conf.task_default_routing_key = default_routing_key

@app.task(name='signal')
def signal():
    print('client_a signal')
    return 'signal'
The second client looks exactly the same as the first, except for the name and the print message:
# [...]
app = Celery('CLIENT_B', ...
# [...] identical to the part above

@app.task(name='signal')
def signal():
    print('client_b signal')
I'm starting both client workers with different node names (otherwise Celery will complain):
celery -A client_a worker -n node_a
celery -A client_b worker -n node_b
If I then run the emitter (the first piece of code), I see the signal task being executed alternately by client_a and client_b, but never by both, as I would like.
The RabbitMQ management interface looks as expected, with the default exchange defined as fanout, and the routing looks alright.
I'm not sure if I'm on the completely wrong track here, but that's what I imagined should be possible with correct routing.
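One explanation consistent with the behavior described: both workers consume from the same queue named 'default', and RabbitMQ round-robins deliveries among the consumers of a single queue regardless of the exchange type. For a fanout to reach every client, each client has to declare its own queue bound to the shared fanout exchange. A minimal sketch of that idea, with the per-client queue name made up for illustration:
from celery import Celery
from kombu import Exchange, Queue

app = Celery('CLIENT_A', broker='pyamqp://test@localhost//')

# Shared fanout exchange: every queue bound to it receives a copy of
# each published message (the routing key is ignored for fanout).
fanout_exchange = Exchange('default', type='fanout')

# Hypothetical per-client queue -- each client declares its own, so a
# broadcast lands in every client's queue instead of being round-robined.
app.conf.task_queues = (
    Queue('client_a_queue', fanout_exchange),
)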

Multiple conversations on Service Broker

Let's say I have two instances of the same app interacting with a backend service via Service Broker. How can each instance know to handle only conversations it initiated and ignore the rest? If I recall correctly, every RECEIVE will remove the message from the queue.
Here's an example:
-- Assume the SquareService returns the square of the number sent to it
-- Instance 1
BEGIN DIALOG @Conversation1
FROM SERVICE InitService
TO SERVICE 'SquareService'
ON CONTRACT (MyContract)
WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @Conversation1 MESSAGE TYPE MyMessageType('1');
-- Instance 2
BEGIN DIALOG @Conversation2
...;
SEND ON CONVERSATION @Conversation2 MESSAGE TYPE MyMessageType('2');
Now how should I write the RECEIVE statement so that Instance 1 correctly gets 1 back and Instance 2 gets 4?
You are already using a conversation group. Is this not sufficient for your needs when receiving the messages, i.e. using GET CONVERSATION GROUP and RECEIVE together?
You can read more about it here: http://technet.microsoft.com/en-us/library/ms166131%28v=sql.105%29.aspx
and also here: Sql Server Service Broker Conversation Groups
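A minimal sketch of that pattern, using the queue name assumed from the question:
-- Lock the next conversation group that has messages available...
DECLARE @cg uniqueidentifier;
WAITFOR (GET CONVERSATION GROUP @cg FROM InitQueue), TIMEOUT 5000;

-- ...then receive only the messages belonging to that group
RECEIVE message_type_name, message_body
FROM InitQueue
WHERE conversation_group_id = @cg;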
I'm assuming you have an InitQueue associated with your InitService. You can use a WHERE clause with RECEIVE to listen for messages on a specific conversation:
WAITFOR (RECEIVE @response = CONVERT(xml, message_body)
FROM InitQueue -- InitService set up to use InitQueue?
WHERE conversation_handle = @Conversation1);
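A slightly fuller sketch of the initiator side, with the variable declaration, a timeout, and cleanup added (these additions are mine, not from the original answer):
DECLARE @response xml;

-- Wait up to 5 seconds for the reply on this specific conversation
WAITFOR (
    RECEIVE @response = CONVERT(xml, message_body)
    FROM InitQueue
    WHERE conversation_handle = @Conversation1
), TIMEOUT 5000;

-- Each instance ends only the dialog it began
END CONVERSATION @Conversation1;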

What is this iptables log entry about?

I have a Rails application running on a server where I added some iptables rules to improve security. Now OmniAuth callbacks have stopped working. Every time I try to log in with any provider, I get this error in my application log:
Errno::ENETUNREACH (Network is unreachable - connect(2))
And this dropped packet gets logged to syslog:
IN=eth0 OUT= MAC=40:40:ea:31:ac:8d:64:00:f1:cd:1f:7f:08:00 SRC=66.220.147.99 DST=my_ip LEN=56 TOS=0x00 PREC=0x00 TTL=88 ID=0 DF PROTO=TCP SPT=443 DPT=37035 WINDOW=14480 RES=0x00 ACK SYN URGP=0
Can someone tell me what that entry in my syslog is about, and what kind of iptables rule is needed to allow it?
If needed, I can also add the rules I have applied so far.
EDIT:
The syslog line was incorrect, so I replaced it.
The answer to my original question was found at http://lists.debian.org/debian-user/2002/07/msg01187.html
IN = interface the packet came in on
OUT = interface used for sending the packet
MAC = MAC addresses for destination and source
SRC = IP of the sender
DST = IP of the receiver
LEN = length of the packet
TOS = type-of-service field
PREC = precedence
TTL = time to live (hop count of the packet)
ID = packet ID number
DF = don't-fragment bit
PROTO = the protocol
SPT = source port
DPT = destination port
WINDOW = TCP window size
RES = reserved bits
And then some TCP flags at the end of the row:
ACK = acknowledgment flag set
SYN = synchronize flag set (SYN+ACK together is the second step of the TCP handshake)
URGP = urgent pointer
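For the record: the logged packet has SPT=443 with SYN and ACK set, i.e. it is the reply to an outbound HTTPS connection the server itself opened (OmniAuth contacting the provider). Assuming the ruleset simply lacks a rule for return traffic, one plausible fix is to accept packets belonging to connections the server initiated:
# Allow return traffic for connections initiated by the server itself
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT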

How to monitor GlassFish thread pool via asadmin interface

I'm trying to use the asadmin interface to monitor a thread-pool on GlassFish 3.1.1. I'm executing the following command:
asadmin get -m server.network.my-listener.thread-pool.*
and I'm getting data back, but most of it has lastsampletime = -1 (so the related data is zero and worthless).
Note: I've also tried the REST interface, which I believe asadmin delegates to, and the JMX interface. Same problem: much of the data has lastsampletime = -1.
I've already turned monitoring to HIGH for all modules. What am I missing?
Redeploying my application seems to have been necessary for the monitoring to actually report values. Perhaps I interpreted the manual incorrectly, but it seems to suggest that a restart/redeploy wouldn't be required:
Oracle GlassFish Server 3.1 Administration Guide
Also, it is odd that the following shows there is no monitoring data:
asadmin get -m server.thread-pools.thread-pool.http-thread-pool.*
Instead, you must go through a specific network listener, like:
asadmin get -m server.network.http-listener-2.thread-pool.*
It also took me by surprise that enabling thread-pool monitoring IS NOT enough to see thread pool statistics. You must also enable http-service monitoring:
asadmin enable-monitoring
asadmin set server.monitoring-service.module-monitoring-levels.thread-pool=HIGH
asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH
That's all you should need to do:
1. Enable monitoring, set to HIGH, for the http-service module on the DAS, stand-alone instance, or cluster you want to monitor.
2. Deploy an app to the DAS, stand-alone instance, or cluster and make HTTP requests.
3. asadmin get -m *instancename*.network.*listener*.thread-pool.*
Looks like you are monitoring DAS, since you are using asadmin get -m server.network.my-listener.thread-pool.*.
I deployed a simple war to the DAS and made a bunch of HTTP requests. I see that corethreads-count and maxthreads-count have a last sample time of -1, and the remaining statistics have actual last sample times.
asadmin get -m "server.network.http-listener-1.thread-pool.*"
server.network.http-listener-1.thread-pool.corethreads-count = 0
server.network.http-listener-1.thread-pool.corethreads-description = Core number of threads in the thread pool
server.network.http-listener-1.thread-pool.corethreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.corethreads-name = CoreThreads
server.network.http-listener-1.thread-pool.corethreads-starttime = 1320764890444
server.network.http-listener-1.thread-pool.corethreads-unit = count
server.network.http-listener-1.thread-pool.currentthreadcount-count = 5
server.network.http-listener-1.thread-pool.currentthreadcount-description = Provides the number of request processing threads currently in the listener thread pool
server.network.http-listener-1.thread-pool.currentthreadcount-lastsampletime = 1320765351708
server.network.http-listener-1.thread-pool.currentthreadcount-name = CurrentThreadCount
server.network.http-listener-1.thread-pool.currentthreadcount-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadcount-unit = count
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 0
server.network.http-listener-1.thread-pool.currentthreadsbusy-description = Provides the number of request processing threads currently in use in the listener thread pool serving requests
server.network.http-listener-1.thread-pool.currentthreadsbusy-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.currentthreadsbusy-name = CurrentThreadsBusy
server.network.http-listener-1.thread-pool.currentthreadsbusy-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadsbusy-unit = count
server.network.http-listener-1.thread-pool.dotted-name = server.network.http-listener-1.thread-pool
server.network.http-listener-1.thread-pool.maxthreads-count = 0
server.network.http-listener-1.thread-pool.maxthreads-description = Maximum number of threads allowed in the thread pool
server.network.http-listener-1.thread-pool.maxthreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.maxthreads-name = MaxThreads
server.network.http-listener-1.thread-pool.maxthreads-starttime = 1320764890443
server.network.http-listener-1.thread-pool.maxthreads-unit = count
server.network.http-listener-1.thread-pool.totalexecutedtasks-count = 31
server.network.http-listener-1.thread-pool.totalexecutedtasks-description = Provides the total number of tasks, which were executed by the thread pool
server.network.http-listener-1.thread-pool.totalexecutedtasks-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.totalexecutedtasks-name = TotalExecutedTasksCount
server.network.http-listener-1.thread-pool.totalexecutedtasks-starttime = 1320764890444
server.network.http-listener-1.thread-pool.totalexecutedtasks-unit = count
Command get executed successfully.
To enable monitoring instantly, without a restart, use the enable-monitoring command:
enable-monitoring
enable-monitoring --modules jvm=LOW
enable-monitoring --modules thread-pool=HIGH
enable-monitoring --modules http-service=HIGH
enable-monitoring --modules jdbc-connection-pool=HIGH
The trick is that both the thread-pool and http-service modules must be set to HIGH to get monitoring info.
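If I read the enable-monitoring man page correctly, multiple modules can also be set in a single call with colon-separated module=level pairs (verify against your GlassFish version):
enable-monitoring --modules thread-pool=HIGH:http-service=HIGH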
For more info, see https://docs.oracle.com/cd/E26576_01/doc.312/e24928/monitoring.htm#GSADG00558

Extend existing Twisted Service with another Socket/TCP/RPC Service to get Service information

I'm implementing a Twisted-based heartbeat client/server combo, based on this example. It is my first Twisted project.
Basically it consists of a UDP listener (Receiver), which calls a listener method (DetectorService.update) on receiving packets. The DetectorService always holds a list of currently active/inactive clients (I extended the example a lot, but the core is still the same), making it possible to react to clients that seem disconnected for a specified timeout.
This is the source taken from the site:
UDP_PORT = 43278; CHECK_PERIOD = 20; CHECK_TIMEOUT = 15
import time

from twisted.application import internet, service
from twisted.internet import protocol
from twisted.python import log


class Receiver(protocol.DatagramProtocol):
    """Receive UDP packets and log them in the clients dictionary"""

    def datagramReceived(self, data, (ip, port)):
        if data == 'PyHB':
            self.callback(ip)


class DetectorService(internet.TimerService):
    """Detect clients not sending heartbeats for too long"""

    def __init__(self):
        internet.TimerService.__init__(self, CHECK_PERIOD, self.detect)
        self.beats = {}

    def update(self, ip):
        self.beats[ip] = time.time()

    def detect(self):
        """Log a list of clients with heartbeat older than CHECK_TIMEOUT"""
        limit = time.time() - CHECK_TIMEOUT
        silent = [ip for (ip, ipTime) in self.beats.items() if ipTime < limit]
        log.msg('Silent clients: %s' % silent)


application = service.Application('Heartbeat')

# define and link the silent clients' detector service
detectorSvc = DetectorService()
detectorSvc.setServiceParent(application)

# create an instance of the Receiver protocol, and give it the callback
receiver = Receiver()
receiver.callback = detectorSvc.update

# define and link the UDP server service, passing the receiver in
udpServer = internet.UDPServer(UDP_PORT, receiver)
udpServer.setServiceParent(application)

# each service is started automatically by Twisted at launch time
log.msg('Asynchronous heartbeat server listening on port %d\n'
        'press Ctrl-C to stop\n' % UDP_PORT)
This heartbeat server runs as a daemon in the background.
Now my problem:
I need to be able to run a script "externally" to print on the console the number of offline/online clients, which the Receiver gathers during its lifetime (self.beats). Like this:
$ pyhb showactiveclients
3 clients online
$ pyhb showofflineclients
1 client offline
So I need to add some kind of additional server (socket, TCP, RPC, it doesn't matter; the main point is that I'm able to build a client script with the behavior above) to my DetectorService, which allows connecting to it from outside. It should just give a response to a request.
This server needs access to the internal variables of the running DetectorService instance, so my guess is that I have to extend the DetectorService with some kind of additional service.
After some hours of trying to combine the DetectorService with several other services, I still have no idea of the best way to realize this behavior. So I hope somebody can give me at least an essential hint on how to start solving this problem.
Thanks in advance!!!
I think you already have the general idea of the solution here, since you already applied it to an interaction between Receiver and DetectorService. The idea is for your objects to have references to other objects which let them do what they need to do.
So, consider a web service that responds to requests with a result based on the beats data:
from twisted.web.resource import Resource

class BeatsResource(Resource):
    # It has no children; let it respond to the / URL for brevity.
    isLeaf = True

    def __init__(self, detector):
        Resource.__init__(self)
        # This is the idea - BeatsResource has a reference to the detector,
        # which has the data needed to compute responses.
        self._detector = detector

    def render_GET(self, request):
        limit = time.time() - CHECK_TIMEOUT
        # Here, use that data.
        beats = self._detector.beats
        silent = [ip for (ip, ipTime) in beats.items() if ipTime < limit]
        request.setHeader('content-type', 'text/plain')
        return "%d silent clients" % (len(silent),)
# Integrate this into the existing application
application = service.Application('Heartbeat')
detectorSvc = DetectorService()
detectorSvc.setServiceParent(application)
.
.
.
from twisted.web.server import Site
from twisted.application.internet import TCPServer
# The other half of the idea - make sure to give the resource that reference
# it needs.
root = BeatsResource(detectorSvc)
TCPServer(8080, Site(root)).setServiceParent(application)
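With that in place, the pyhb script from the question can be little more than an HTTP request. A hypothetical client, matching the Python 2 style of the example:
# pyhb_client.py -- hypothetical script that queries the heartbeat server
import urllib2

# Ask the web service added above for its silent-client count
response = urllib2.urlopen('http://localhost:8080/')
print response.read()  # e.g. "1 silent clients"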