I am using dynamic queues for testing with names like dynamicQueues/Foo, but in the web console I am seeing the queue names as dynamicQueues/Foo rather than just Foo.
Other code (not ours) uses the same dynamicQueues/Foo, but the queue name in the console is just Foo, so things are misaligned, so to speak.
I have followed the instructions here: http://activemq.apache.org/jndi-support.html
I am confused about whether the queue name reported in the web console should include dynamicQueues or not, and I am finding it hard to debug our problem as a result.
You should see Foo in the console window, yes.
This code will produce a message on FOO and show the queue as FOO in the web console (ActiveMQ 5.6.0):
Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,"org.apache.activemq.jndi.ActiveMQInitialContextFactory");
props.setProperty(Context.PROVIDER_URL,"tcp://127.0.0.1:61616");
javax.naming.Context ctx = new InitialContext(props);
ConnectionFactory cf = (ConnectionFactory)ctx.lookup("ConnectionFactory");
Connection conn = cf.createConnection();
Destination dest = (Destination)ctx.lookup("dynamicQueues/FOO");
Session s = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer prod = s.createProducer(dest);
prod.send(s.createTextMessage("Hello, World!"));
Are you sure you are using JNDI to look up the queue, and that you did not configure anything in jndi.properties?
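For reference, a static binding in jndi.properties looks like the sketch below (the JNDI name MyQueue here is hypothetical). If an entry like this is on the classpath, lookups resolve through it rather than through the dynamicQueues/ prefix:

java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url = tcp://127.0.0.1:61616
# binds the JNDI name "MyQueue" to the physical ActiveMQ queue "FOO"
queue.MyQueue = FOO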
(I can't respond to the comment above so sorry about answering here.)
This is my problem (in Scala).
This works; the queue is called FOO:
val destination = JmsConnectionFactory.initialContext.lookup("dynamicQueues/FOO").asInstanceOf[Destination]
val consumer = session.createConsumer(destination)
This doesn't; the queue is called dynamicQueues/FOO:
val queue = session.createQueue("dynamicQueues/FOO")
val consumer = session.createConsumer(queue)
Sigh. It makes sense, I guess: createQueue() uses its argument verbatim as the physical queue name, while the JNDI lookup strips the dynamicQueues/ prefix and returns a queue named just FOO.
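For reference, the same distinction as a minimal Java sketch, assuming a JNDI Context ctx and a JMS Session session like those in the earlier snippet:

// JNDI: the "dynamicQueues/" prefix is consumed by ActiveMQInitialContextFactory,
// so the physical queue is named just FOO.
Destination viaJndi = (Destination) ctx.lookup("dynamicQueues/FOO");

// Plain JMS: the string is used verbatim as the physical queue name,
// so this creates a queue literally named "dynamicQueues/FOO".
Queue viaCreateQueue = session.createQueue("dynamicQueues/FOO");

// Equivalent to the JNDI lookup above:
Queue sameAsJndi = session.createQueue("FOO");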
In one of my services I am adding to the queue:
RBlockingQueue<String> queue = redissonClient.getBlockingQueue("ABC");
queue.add(receivedTask.toString());
In the second service I connect to the same Redis instance and want to read/pop from the queue as soon as a new element is added by the first service, something like this:
RBlockingQueue<String> queue = redisClient.getRedissonClient().getBlockingDeque("ABC");
System.out.println("received: " + queue.poll(0, TimeUnit.SECONDS));
I was earlier using RTopic and it was working fine, but the use case has changed and I now have to use RQueue instead. I am not sure what I am doing wrong here.
Actually, I found what I was doing wrong.
I should be using subscribeOnElements(): it notifies the listener and polls the new element, which can then be accessed.
queue.subscribeOnElements((msg) -> {
    // handle the newly added element here
});
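Put together, a minimal sketch of the flow (assuming a single local Redis instance; the queue name ABC is taken from the question):

import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RBlockingQueue;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RBlockingQueue<String> queue = redisson.getBlockingQueue("ABC");

        // Second service: get handed each element as soon as it is added.
        queue.subscribeOnElements(msg -> System.out.println("received: " + msg));

        // First service: add an element; the listener above fires with "task-1".
        queue.add("task-1");

        TimeUnit.SECONDS.sleep(1); // give the asynchronous listener time to run
        redisson.shutdown();
    }
}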
My aim is to add only slave URIs, because the master is not available in my case. But the Lettuce library returns:
io.lettuce.core.RedisException: Master is currently unknown: [RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6382], role=SLAVE], RedisMasterSlaveNode [redisURI=RedisURI [host='127.0.0.1', port=6381], role=SLAVE]]
So the question is: is it possible to avoid this exception somehow, perhaps through configuration? Thank you in advance.
UPDATE: I forgot to say that after borrowing a connection from the pool I call connection.setReadFrom(ReadFrom.SLAVE) before running commands.
GenericObjectPoolConfig config = fromRedisConfig(properties);
List<RedisURI> nodes = new ArrayList<>(properties.getUrl().length);
for (String url : properties.getUrl()) {
    nodes.add(RedisURI.create(url));
}
return ConnectionPoolSupport.createGenericObjectPool(
        () -> MasterSlave.connect(redisClient, new ByteArrayCodec(), nodes), config);
The problem was that I tried to write data, which is possible only through the master node. So there is no problem with MasterSlave itself; reading data works perfectly.
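For illustration, a minimal read-only sketch of that setup (the node URIs are taken from the question; the key name some-key is hypothetical):

RedisClient client = RedisClient.create();
List<RedisURI> nodes = Arrays.asList(
        RedisURI.create("redis://127.0.0.1:6381"),
        RedisURI.create("redis://127.0.0.1:6382"));

StatefulRedisMasterSlaveConnection<String, String> connection =
        MasterSlave.connect(client, StringCodec.UTF8, nodes);
connection.setReadFrom(ReadFrom.SLAVE);

RedisCommands<String, String> commands = connection.sync();
String value = commands.get("some-key"); // reads are routed to a slave and work without a master
// commands.set("some-key", "x");        // writes need a reachable master and would fail here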
Some basic questions regarding Spark. Can we use Spark only in the context of processing jobs? In our use case we have a stream of position and motion data which we refine and save to Cassandra tables; that is done with Kafka and Spark Streaming. But for a web user who wants to view some report with some search criteria, can we use Spark (Spark SQL)? Or for this purpose should we restrict ourselves to CQL? If we can use Spark, how can we invoke Spark SQL from a web service deployed in a Tomcat server?
Well, you can do it by passing the query request via an HTTP address like:
http://yourwebsite.com/Requests?query=WOMAN
At the receiving point, the architecture will be something like:
Tomcat+Servlet --> Apache Kafka/Flume --> Spark Streaming --> Spark SQL inside a SS closure
In the servlet (if you don't know what a servlet is, better look it up) in the web application folder of your Tomcat, you will have something like this:
public class QueryServlet extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        // The query string (e.g. "query=WOMAN") is split into a choice and an argument
        String requestChoice = request.getQueryString().split("=")[0];
        String requestArgument = request.getQueryString().split("=")[1];

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("acks", "all");
        properties.setProperty("retries", "0");
        properties.setProperty("batch.size", "16384");
        properties.setProperty("auto.commit.interval.ms", "1000");
        properties.setProperty("linger.ms", "0");
        properties.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("block.on.buffer.full", "true");

        // Forward the request to the Spark Streaming application via Kafka, then close the producer
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        producer.send(new ProducerRecord<String, String>(requestChoice, requestArgument));
        producer.close();
    }
}
In the running Spark Streaming application (which needs to be running already in order to catch the queries; otherwise you know how long Spark takes to start), you need to have a Kafka receiver:
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(batchInt * 1000));

Map<String, Integer> topicMap = new HashMap<>();
topicMap.put("wearable", 1);

// Each element of the DStream is a (key, value) pair taken from the Kafka messages
JavaPairReceiverInputDStream<String, String> kafkaStream =
        KafkaUtils.createStream(jssc, "localhost:2181", "test", topicMap);
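A rough sketch of the Spark SQL part inside the streaming closure (this assumes Spark 1.6-style APIs to match the receiver above, and a hypothetical temporary table named positions that has already been registered):

kafkaStream.foreachRDD(rdd -> {
    SQLContext sqlContext = SQLContext.getOrCreate(rdd.context());
    for (Tuple2<String, String> request : rdd.collect()) {
        String argument = request._2(); // the argument sent by the servlet
        DataFrame result = sqlContext.sql(
                "SELECT * FROM positions WHERE category = '" + argument + "'");
        result.show(); // or write the result somewhere your web layer can read it back
    }
});

jssc.start();
jssc.awaitTermination();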
After this, what happens is:
You do a GET, putting the argument either in the GET body or in the query string.
The GET is caught by your servlet, which immediately creates, sends with, and closes a Kafka producer (it is possible to skip the Kafka step entirely and send the information to your Spark Streaming app some other way; see Spark Streaming receivers).
Spark Streaming runs your Spark SQL code like any other submitted Spark application, but it keeps running, waiting for further queries to come in.
Of course, in the servlet you should check the validity of the request, but this is the main idea, or at least the architecture I've been using.
I am very new to NServiceBus, and in one of our projects we want to accomplish the following:
Whenever table data is modified in SQL Server, construct a message and insert it into a SQL Server Service Broker queue.
Read the broker queue message using NServiceBus
Publish the message again as another event so that other subscribers can handle it.
Now it is point 2 that I do not have much clue about how to get done.
I have referred to the following posts, after which I was able to put a message into the broker queue, but I have been unable to integrate this with NServiceBus in our project, as the NServiceBus libraries used there are of an older version and many of the methods are deprecated. Using them with the current versions is getting very troublesome, or perhaps I was doing it in an improper way.
http://www.nullreference.se/2010/12/06/using-nservicebus-and-servicebroker-net-part-2
https://github.com/jdaigle/servicebroker.net
Any help on the correct way of doing this would be invaluable.
Thanks.
I'm using the current version of NServiceBus (5), VS2013 and SQL Server 2008. I created a Database Change Listener using this tutorial, which uses SQL Server Service Broker and SqlDependency to monitor the changes to a specific table. (NB: this may be deprecated in later versions of SQL Server.)
SqlDependency lets you use a broad selection of the basic SQL functionality, although there are some restrictions you need to be aware of. I modified the code from the tutorial slightly to provide better error information:
void NotifyOnChange(object sender, SqlNotificationEventArgs e)
{
    // Check for any errors
    if (@"Subscribe|Unknown".Contains(e.Type.ToString())) { throw _DisplayErrorDetails(e); }

    var dependency = sender as SqlDependency;
    if (dependency != null) dependency.OnChange -= NotifyOnChange;
    if (OnChange != null) { OnChange(); }
}

private Exception _DisplayErrorDetails(SqlNotificationEventArgs e)
{
    var message = "useful error info";
    var messageInner = string.Format("Type:{0}, Source:{1}, Info:{2}", e.Type.ToString(), e.Source.ToString(), e.Info.ToString());

    if (@"Subscribe".Contains(e.Type.ToString()) && @"Invalid".Contains(e.Info.ToString()))
        messageInner += "\r\n\nThe subscriber says that the statement is invalid - check your SQL statement conforms to specified requirements (http://stackoverflow.com/questions/7588572/what-are-the-limitations-of-sqldependency/7588660#7588660).\n\n";

    return new Exception(message, new Exception(messageInner));
}
I also created a project with a "database first" Entity Framework data model to allow me to do something with the changed data.
[The relevant part of] My NServiceBus project comprises two "Run as Host" endpoints, one of which publishes event messages; the second endpoint handles the messages. The publisher is set up with IWantToRunWhenBusStartsAndStops, which instantiates the DBListener and passes it the SQL statement I want to run as my change monitor. The OnChange event is given an anonymous function that reads the changed data and publishes a message:
// using statements

namespace Sample4.TestItemRequest
{
    public partial class MyExampleSender : IWantToRunWhenBusStartsAndStops
    {
        private string NOTIFY_SQL = @"SELECT [id] FROM [dbo].[Test] WITH(NOLOCK) WHERE ISNULL([Status], 'N') = 'N'";

        public void Start() { _StartListening(); }
        public void Stop() { throw new NotImplementedException(); }

        private void _StartListening()
        {
            var db = new Models.TestEntities();

            // Instantiate a new DBListener with the specified connection string
            var changeListener = new DatabaseChangeListener(ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString);

            // Assign the code within the braces to the DBListener's OnChange event
            changeListener.OnChange += () =>
            {
                /* START OF EVENT HANDLING CODE */

                // This uses LINQ against the EF data model to get the changed records
                IEnumerable<Models.TestItems> _NewTestItems = DataAccessLibrary.GetInitialDataSet(db);

                while (_NewTestItems.Count() > 0)
                {
                    foreach (var qq in _NewTestItems)
                    {
                        // Do some processing, if required
                        var newTestItem = new NewTestStarted() { ... set properties from qq object ... };
                        Bus.Publish(newTestItem);
                    }
                    // Because there might be a number of new rows added, I grab them in small batches until finished.
                    // Probably better to use RX to do this, but this will do for proof of concept
                    _NewTestItems = DataAccessLibrary.GetNextDataChunk(db);
                }

                // Restart the listener: it stops monitoring after each notification (see note below)
                changeListener.Start(string.Format(NOTIFY_SQL));

                /* END OF EVENT HANDLING CODE */
            };

            // Now everything has been set up.... start it running.
            changeListener.Start(string.Format(NOTIFY_SQL));
        }
    }
}
Important: The OnChange event firing causes the listener to stop monitoring; it is basically a single-shot notifier. After you have handled the event, the last thing to do is restart the DBListener. (You can see this in the line preceding the END OF EVENT HANDLING comment.)
You need to add a reference to System.Data and possibly System.Data.DataSetExtensions.
The project at the moment is still proof of concept, so I'm well aware that the above can be somewhat improved. Also bear in mind I had to strip out company specific code, so there may be bugs. Treat it as a template, rather than a working example.
I also don't know if this is the right place to put the code; that's partly why I'm on Stack Overflow today, to look for better examples of ServiceBus host code. Whatever the failings of my code, the solution works pretty effectively so far, and it meets your goals, too.
Don't worry too much about the ServiceBroker side of things. Once you have set it up, per the tutorial, SQLDependency takes care of the details for you.
The ServiceBroker Transport is very old and not supported anymore, as far as I can remember.
A possible solution would be to "monitor" the interesting tables from the endpoint code using something like a SqlDependency (http://msdn.microsoft.com/en-us/library/62xk7953(v=vs.110).aspx) and then push messages into the relevant queues.
I have a service with application scope, not transactional.
I have a service method which:
uses the injected dataSource to create a stored procedure call [using Sql.call{...}], then executes it and traverses the result set.
Based on the result set, I subdivide it into equal-sized chunks and process them in multiple threads.
Each thread tries to do Sql sql = new Sql(dataSource)
Here a deadlock occurs.
Why is that? Does dataSource not return a possibly new or an idle connection?
Try looking into GPars: it's a Groovy parallelization framework.
I ran into exactly the same issue. After hours of searching I found the solution.
In your DataSource.groovy config file you can set the parameters for connection pooling to the database.
I changed the minIdle, maxIdle and maxActive settings of the BasicDataSource (http://commons.apache.org/proper/commons-dbcp/apidocs/org/apache/commons/dbcp/BasicDataSource.html) so that my config file looks something like this:
dataSource {
    url = "jdbc:mysql://127.0.0.1/sipsy_dev?autoReconnect=true&zeroDateTimeBehavior=convertToNull"
    driverClassName = "com.mysql.jdbc.Driver"
    username = "sipsy_dev"
    password = "sipsy_dev"
    pooled = true
    properties {
        minEvictableIdleTimeMillis = 1800000
        timeBetweenEvictionRunsMillis = 1800000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        minIdle = 100
        maxIdle = 250
        maxActive = 500
        validationQuery = "SELECT 1"
    }
    dialect = 'org.hibernate.dialect.MySQL5InnoDBDialect'
}
When you are not in a transaction, you have to release the connection that GroovySQL picks up from the dataSource; otherwise the pool runs out of connections, and that is why it locks up.
Inside a transaction TransactionAwareDataSourceProxy will take care of sharing the connection and therefore releasing the connection from GroovySQL isn't required in that case. See http://jira.grails.org/browse/GRAILS-5454 for details.
This is a better way to use GroovySQL in Grails since the OpenSessionInView (OSIV) interceptor will take care of closing the connection and it will share the same database connection as Hibernate. This method works in both cases: inside transactions and outside transactions.
Sql sql = new Sql(sessionFactory.currentSession.connection())