I need advice on a design pattern for using RabbitMQ to select data from a database.
RabbitMQ looks like a very good solution for inserting and updating data in a database, but what about selecting data from the DB?
In my case I have a REST API module and a database module connected to MariaDB, and they communicate via queues.
REST API module -> Database module -> Maria DB
But I need to select configuration from the database via the database module. I can use RPC as a solution, but is there perhaps a better way?
Can you advise?
In general, some sort of RPC is the way to go.
However: the point of a queue (asynchronous tasks) is the opposite of a database select (return my data now). If direct database select requests are performing adequately, use them and avoid the extra complexity, or put some caching system in front of your configuration. This might not fit your system architecture and load needs, but it is simpler.
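If you do go the RPC route, the usual request/reply pattern over RabbitMQ (an exclusive reply queue plus a correlation id) is enough. Below is a minimal sketch of the REST API side using pika; the queue name config.requests and the JSON payload shape are assumptions, not something from your setup:

```python
# Minimal RPC-over-RabbitMQ client sketch (pika 1.x).
# Assumed: the database module consumes "config.requests" and replies with JSON.
import json
import uuid
import pika

class ConfigRpcClient:
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        self.channel = self.connection.channel()
        # Exclusive, server-named queue where the database module sends replies.
        result = self.channel.queue_declare(queue="", exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(
            queue=self.callback_queue,
            on_message_callback=self._on_response,
            auto_ack=True,
        )
        self.response = None
        self.corr_id = None

    def _on_response(self, ch, method, props, body):
        # Only accept the reply that matches our outstanding request.
        if props.correlation_id == self.corr_id:
            self.response = json.loads(body)

    def get_config(self, key):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange="",
            routing_key="config.requests",  # queue the database module listens on (assumed)
            properties=pika.BasicProperties(
                reply_to=self.callback_queue,
                correlation_id=self.corr_id,
            ),
            body=json.dumps({"key": key}),
        )
        while self.response is None:
            self.connection.process_data_events(time_limit=1)
        return self.response

client = ConfigRpcClient()
print(client.get_config("feature_flags"))
```

On the database module side you would consume config.requests, run the SELECT against MariaDB, and publish the result back to the reply_to queue with the same correlation_id.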
Related
Is there any way to communicate with a socket using SQL language? (Why?) Assume that I manually open SQL Server Management Studio, open a table, and then insert a record manually (by "manually" I want to emphasize the absence of any middleware in between). At this moment the business demands signaling the inserted record to another context (as either a notification or a report, e.g. a grid view).
The solution that I have in mind is to write the inserted record to a file and have another application monitor the file for changes (emphasizing again that I don't want to do this through a middleware at all), but this method is not a standard way to achieve this requirement and is more of a workaround.
Is there any standard way to signal changes using pure SQL Server syntax/features?
You can have a SQLCLR routine that calls out to "something" whenever a change happens. Where I work we use that for real-time streaming of data from SQL Server to RabbitMQ. In your case you would have to have a trigger on the table which calls the routine.
In our case we always change data through stored procedures, so our procedures call the SQLCLR routine.
You could also use Service Broker and External Activation. In our case we chose not to do it as the performance was not good enough.
If you want, I have a blog post about the SQL Server -> RabbitMQ integration using SQLCLR. Obviously it doesn't have to be RabbitMQ; we've also done it through socket connections, etc. So if you're interested, the post is here.
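To make the downstream side concrete, here is a minimal sketch (not taken from the blog post) of a consumer that reacts to such change events, assuming the SQLCLR routine publishes one JSON message per changed row to a queue named table_changes (queue name and payload shape are assumptions):

```python
# Sketch of the "other context" receiving change events from RabbitMQ (pika 1.x).
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="table_changes", durable=True)

def handle_change(ch, method, properties, body):
    event = json.loads(body)
    # React to the inserted/updated record here: refresh a grid,
    # push a notification, write a report row, etc.
    print("change received:", event)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="table_changes", on_message_callback=handle_change)
channel.start_consuming()
```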
Hope this helps!
Niels
I have a database connected to a website, and data from the website is inserted into that database. I need to transfer data from that database to another primary database (SQL Server) on another server in real time (with minimum latency).
I cannot use transactional replication in this case. What are the other alternatives to achieve this? Can I integrate data streams like Apache Kafka with SQL Server?
Without more detail it's hard to give a full answer. There's what's technically possible, and there's architecturally what actually makes sense :)
Yes, you can stream from an RDBMS to Kafka, and from Kafka to an RDBMS. You can use the Kafka Connect JDBC source and sink connectors. There are also CDC tools (e.g. Attunity, GoldenGate) that support integration with MS SQL and other RDBMSs.
BUT… it depends on why you want the data in the second database. Do you need an exact replica of the first? If so, DB-to-DB replication may be a better option. Kafka is a great option if you want to process the data elsewhere and/or persist it in another store. But if you just want MS SQL to MS SQL… Kafka itself may be overkill.
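For a sense of what the Kafka Connect route involves, here is a hedged sketch of registering a JDBC source connector against SQL Server through Connect's REST API; the host names, credentials, table and topic names are placeholders:

```python
# Register a Confluent JDBC source connector via the Kafka Connect REST API.
# All connection details below are placeholders, not a working configuration.
import json
import requests

connector = {
    "name": "mssql-website-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:sqlserver://source-host:1433;databaseName=website",
        "connection.user": "kafka_connect",
        "connection.password": "********",
        "mode": "incrementing",                # stream new rows by an incrementing id column
        "incrementing.column.name": "id",
        "table.whitelist": "orders",
        "topic.prefix": "website-",
    },
}

resp = requests.post(
    "http://connect-host:8083/connectors",     # Kafka Connect REST endpoint
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print(resp.json())
```

A matching JdbcSinkConnector registered the same way on the other side would then write the topic out to the second SQL Server.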
I just found out about Redis and I find the concept of key-value databases interesting.
I want to start using Redis but I don't quite understand how I would structure my project.
When I use MySQL, it's more like I have a backend written in Java/Python; clients make requests to my web application and my Java/Python code gets information from the database and sends it to the clients, or writes information from the clients into the database.
I would like to know how Redis is structured so I can start building applications with it. I would also appreciate any sample projects/templates (especially server-side).
Thanks
I want to start using Redis but I don't quite understand how I would structure my project.
You should first define the functionality of your project in order to figure out the requirements for the database structure.
When I use MySQL, it's more like I have a backend written in Java/Python; clients make requests to my web application and my Java/Python code gets information from the database and sends it to the clients, or writes information from the clients into the database.
Databases (especially Redis, which has a very trivial authentication system) shouldn't be exposed directly to clients, so the backend part, in your case Java or Python, is responsible for dealing with the data. I think this makes it identical or similar to what you are used to with MySQL.
I would like to know how Redis is structured so I can start building applications with it.
I would recommend first reading the fifteen-minute introduction to Redis data types and some general overview. Note, however, that Redis doesn't support a query language like SQL, which you might be used to from relational database systems; that could limit its usefulness depending on your project's needs.
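As a concrete starting point, here is a small sketch of how the backend layer could talk to Redis using redis-py; the key names and the data layout are assumptions for illustration:

```python
# Backend-side Redis usage sketch with redis-py.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_user(user_id, name, email):
    # A hash per user plays the role a row would play in MySQL.
    r.hset(f"user:{user_id}", mapping={"name": name, "email": email})
    # A set acts as a simple secondary "index" so we can list users
    # without a query language.
    r.sadd("users", user_id)

def get_user(user_id):
    return r.hgetall(f"user:{user_id}")

def cache_page(path, html, ttl_seconds=60):
    # A typical Redis job alongside a relational DB: short-lived cache entries.
    r.setex(f"cache:{path}", ttl_seconds, html)

save_user(1, "Alice", "alice@example.com")
print(get_user(1))
```

The hash-per-entity plus set-as-index layout is one common way to approximate rows and simple lookups without SQL.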
I'm currently developing a service for an app with WCF. I want to host the data on Windows Azure, and it should hold data from different users. I'm searching for the right design for my database. In my opinion there are only two possibilities:
Create a new database for every customer
Store a customer ID in every table (or in the main table when every table is connected via entities)
The first approach gives very good speed and isolation, but it's very expensive on Windows Azure (or am I misunderstanding the Azure pricing?). Also, I don't know how to configure a WCF service so that it always uses a different database per customer.
The second approach is slower and the isolation is poor, but it's easy to implement and cheaper.
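For illustration, a rough sketch of what I mean by the second approach, using sqlite3 only so the snippet is self-contained (with Azure SQL only the driver and connection string would change; the table and column names are made up):

```python
# Customer-id-per-table (shared database) sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,   -- tenant discriminator
        item        TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO orders (customer_id, item) VALUES (1, 'widget'), (2, 'gadget')")

def orders_for_customer(customer_id):
    # Every query in the service layer must filter on the customer id;
    # isolation depends on this filter never being forgotten.
    cur = conn.execute("SELECT id, item FROM orders WHERE customer_id = ?", (customer_id,))
    return cur.fetchall()

print(orders_for_customer(1))   # only customer 1's rows
```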
Now to my questions:
Is there any other way to get high isolation of data and also easy integration in a WCF service using Azure?
Which design should I use, and why?
You have two additional options: build multiple schema containers within a database (see my blog post about this technique), or even better use SQL Database Federations (you can use my open-source project called Enzo SQL Shard to access federations). The links I am providing give you access to other options as well.
In the end it's a rather complex decision that involves a trade-off between performance, security, and manageability. I usually recommend Federations, even though they have their own set of limitations, because they are a flexible multi-tenant option for the cloud with the option to filter data automatically. Check out the open-source project; you will see how to implement good separation of customer data independently of the physical storage.
I am working on a database for a monitoring application, and I got all the business logic sorted out. It's all well and good, but one of the requirements is that the monitoring data is to be completely stand-alone.
I'm using a local database on my web server to do some event handling and to cache notifications. Since there is one event row per system in my monitoring database, it's easy to just get the ID and query the monitoring data if needed, and since this is something only my web server uses, integrity can be enforced externally. Querying is not an issue either, as all the relationships are one-to-one, so it's very straightforward.
My problem comes with user administration. My original plan had it in yet another database (to meet the requirement of leaving the monitoring database alone), but I don't think I was thinking straight when I came up with that. I can get all the IDs of the systems a user has access to easily enough, but how can I then efficiently pass them to a query on the other database? Is there a solution for this? Building a chain of ORs seems like an ugly and buggy solution.
I assume this kind of problem isn't that uncommon. What do most developers do when they have to integrate different database servers? In any case, I am leaning towards just talking my employer into putting the user administration data in the same database, but I want to know if this kind of thing can be done.
There are a few ways to accomplish what you are after:
Use concepts like linked servers (SQL Server - http://msdn.microsoft.com/en-us/library/ms188279.aspx)
Individual connection strings within your front end driving the database layer (see the sketch at the end of this answer)
Use things like replication to duplicate the data
Also, the concept of multiple databases on a single database server instance seems like it would not violate your business requirements, and I would investigate that as a starting point, given the details you have provided.
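To illustrate the second option (individual connection strings): read the system IDs from the user-administration database, then pass them to the monitoring database as a parameterised IN list instead of a hand-built chain of ORs. This is only a hedged sketch; sqlite3 stands in for the real servers and all table/column names are made up:

```python
# Two separate connections, joined in application code via a parameterised IN list.
import sqlite3

users_db = sqlite3.connect("user_admin.db")      # user-administration database
monitor_db = sqlite3.connect("monitoring.db")    # stand-alone monitoring database

def systems_for_user(user_id):
    cur = users_db.execute(
        "SELECT system_id FROM user_systems WHERE user_id = ?", (user_id,)
    )
    return [row[0] for row in cur.fetchall()]

def monitoring_data(system_ids):
    if not system_ids:
        return []
    placeholders = ",".join("?" for _ in system_ids)   # one ? per id
    cur = monitor_db.execute(
        f"SELECT * FROM system_events WHERE system_id IN ({placeholders})",
        system_ids,
    )
    return cur.fetchall()

rows = monitoring_data(systems_for_user(42))
```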