My client wants to run arbitrary SQL SELECT queries on the backend database of our web app, for ad-hoc reporting purposes. The requests will be read-only. Suppose the choice of analysis tool is flexible, but might include Access or psql. Rather than exposing the database on the public Internet, I want to transmit SQL queries over HTTP.
Can I implement a web service that would allow database analysis tools to communicate with the database using a user's web app credentials? E.g. instead of the database connection string starting with postgres://, it would start with https://. Ideally I'm looking for a [de facto] standard way of doing this.
Related but different/unanswered:
Communicate Sql Server over http
Standards for queries over SOAP
I'm not aware of a standard for this. MK has a point: this sounds like a huge opportunity for a SQL injection attack. Services expose the results of database queries all the time, but they typically accept a handful of parameters and return a well-defined response. Giving a public user of the service carte blanche to run any query they want means you have to ensure they don't somehow sneak in a drop database or delete from table query, which can be tricky to defend against. All of that said, I've seen this pattern used for a private service to pool the connections that the database server is aware of, since database connections tend to be pretty expensive.
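If you do go this way, the usual shape is a thin HTTP endpoint that accepts the query text, runs it under a least-privilege account, and returns rows as JSON. Below is a minimal sketch, not a standard, in C# with ASP.NET Core minimal APIs and the Npgsql driver; the reporting_ro role (SELECT-only grants) and the connection details are assumptions for illustration.

using Npgsql;

var app = WebApplication.CreateBuilder(args).Build();

app.MapPost("/query", async (HttpRequest request) =>
{
    string sql = await new StreamReader(request.Body).ReadToEndAsync();

    // Defense in depth: connect as a role that only has SELECT grants,
    // so a smuggled DROP or DELETE fails at the database itself.
    await using var conn = new NpgsqlConnection(
        "Host=dbhost;Username=reporting_ro;Password=...;Database=appdb"); // assumed
    await conn.OpenAsync();

    // Belt and braces: force the transaction to be read-only as well.
    await using var tx = await conn.BeginTransactionAsync();
    await using (var ro = new NpgsqlCommand("SET TRANSACTION READ ONLY", conn, tx))
        await ro.ExecuteNonQueryAsync();

    var rows = new List<Dictionary<string, object?>>();
    await using var cmd = new NpgsqlCommand(sql, conn, tx);
    await using var reader = await cmd.ExecuteReaderAsync();
    while (await reader.ReadAsync())
    {
        var row = new Dictionary<string, object?>();
        for (int i = 0; i < reader.FieldCount; i++)
            row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
        rows.Add(row);
    }
    return Results.Json(rows); // transaction is disposed -> rolled back, fine for SELECTs
});

app.Run();

Even then you would still want authentication in front of the endpoint and a statement timeout; the point is only to shrink the blast radius of whatever SQL arrives.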
Related
I have to execute some queries in Firebird, but I need to hide the "query source" from viewing in mon$statements or any other log in the database.
That's because the query has some business rules that I can't expose to other people.
Is there any way to do it? Or some "trick" that I can use?
There is no way to do this. However, MON$STATEMENTS only shows your own queries, unless you are SYSDBA, the owner of the database, or a user with the RDB$ADMIN role (in which case you can see all queries). Other than MON$STATEMENTS, there is also the trace facility, which allows people with sufficient access to see queries (either on the server or through the Services API). People with insufficient access to the database can still see queries if they can see the network traffic between the application and the database server.
The only way is to not give any form of access to the database server to people who should not be able to see the queries. This can be done by hosting the application as a web application, or by putting a web service or some other form of middleware between the database and the real application, as sketched below.
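As a rough illustration of that middleware option (a hedged sketch, with ASP.NET Core minimal APIs and the Firebird ADO.NET provider assumed; the route, connection string, table, and query are placeholders I made up): the sensitive SQL lives only on the server, and report consumers get an HTTP endpoint instead of database credentials.

using FirebirdSql.Data.FirebirdClient;

var app = WebApplication.CreateBuilder(args).Build();

// Consumers call a named report; the proprietary SQL never leaves this
// process, and they hold no database login with which to read
// MON$STATEMENTS, run a trace, or sniff the database wire protocol.
app.MapGet("/reports/monthly-margin", () =>
{
    using var conn = new FbConnection(
        "DataSource=dbhost;Database=/data/app.fdb;User=report_svc;Password=..."); // assumed
    conn.Open();
    using var cmd = new FbCommand(
        "select count(*) from orders /* stand-in for the protected query */", conn);
    return Results.Ok(cmd.ExecuteScalar());
});

app.Run();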
I just found out about Redis and I find the concept of key-value databases interesting.
I want to start using Redis, but I don't quite understand how I would structure my project.
When I use MySQL, it's more like I have a backend written in Java/Python; clients make requests to my web application, and my Java/Python code gets information from the database and sends it to the clients, or writes information from clients into the database.
I would like to know how Redis is structured so I can start building applications with it. I would also appreciate any sample projects/templates (especially server-side).
Thanks
I want to start using Redis, but I don't quite understand how I would
structure my project.
You should first start with defining the functionality of your project in order to figure out the requirements for the database structure.
When I use MySQL, it's more like I have a backend written in
Java/Python; clients make requests to my web application, and my
Java/Python code gets information from the database and sends it to
the clients, or writes information from clients into the
database.
Databases (especially Redis, which has a very trivial authentication system) shouldn't be exposed directly to clients, so it's the backend part that is responsible for dealing with data, in your case Java or Python. I think this makes it identical or similar to what you are used to with MySQL.
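So the flow stays the same: clients talk to your backend, and only the backend talks to Redis. A minimal sketch in C# using the StackExchange.Redis client (the connection string and key names are just illustrative):

using StackExchange.Redis;

// The backend owns the Redis connection; clients never reach Redis directly.
var redis = ConnectionMultiplexer.Connect("localhost:6379");
IDatabase db = redis.GetDatabase();

// These calls live in your server-side request handlers,
// exactly where your MySQL reads/writes used to be.
db.StringSet("user:42:name", "Alice");
string name = db.StringGet("user:42:name"); // RedisValue converts to string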
I would like to know how Redis is structured so I can start building
applications with it.
I would recommend first reading the fifteen minute introduction to Redis data types and some general overview. Note, however, that Redis doesn't support a query language like SQL, which you might be used to from relational database systems; that could limit its usefulness depending on your project's needs.
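To make that concrete, here is roughly how a MySQL row and an ORDER BY query map onto Redis structures, again with StackExchange.Redis; the user schema is made up for illustration. The point is that you maintain your own "indexes" (here a sorted set) at write time instead of querying ad hoc:

using StackExchange.Redis;

var db = ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();

// One "row" becomes a hash keyed by entity id.
db.HashSet("user:42", new[] {
    new HashEntry("name", "Alice"),
    new HashEntry("signup_ts", 1700000000)
});

// No "SELECT ... ORDER BY signup_ts": instead, index at write time...
db.SortedSetAdd("users:by_signup", "user:42", 1700000000);

// ...and range over the index at read time (ten newest users).
RedisValue[] newest = db.SortedSetRangeByRank("users:by_signup", 0, 9, Order.Descending);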
My application connects to 3 SQL servers and 5 databases simultaneously. I need to show statuses {running|stopped|not found} on my status bar.
Any idea/code sample that I can use here? This code should not affect the speed of the application or add overhead to the SQL Server.
Buddhi
I think you should use the ServiceController class (with the constructor that takes both a service name and a machine name). You basically query the server where the SQL Server service resides and check its status.
The example below assumes your application is written in C#:
using System.ServiceProcess; // add a reference to System.ServiceProcess
ServiceController sc = new ServiceController("MSSQLSERVER", serverName); // serverName = target machine
string status = sc.Status.ToString(); // e.g. "Running" or "Stopped"
"This code should not affect the speed
of application or a overhead to SQL
server"
This is a Schroedinger's Cat scenario: in order to know the current status of a given remote service or process, you must serialize a message onto the network, await a response, deserialize the response, and act upon it, all of which requires some work and resources from every machine involved.
However, that work can be done in a background thread on the caller, and if it doesn't run too often, it may not impact the target server(s) in any measurable way.
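A sketch of that background polling (PeriodicTimer assumes .NET 6+; CheckStatus and UpdateStatusBar are hypothetical placeholders for your probe and your UI update):

// CheckStatus / UpdateStatusBar are placeholders for your own code.
var timer = new PeriodicTimer(TimeSpan.FromSeconds(30));
while (await timer.WaitForNextTickAsync())
{
    foreach (string server in new[] { "SQL01", "SQL02", "SQL03" })
    {
        // A slow or dead server only delays this loop, never the UI thread.
        string status = await Task.Run(() => CheckStatus(server));
        UpdateStatusBar(server, status); // marshal to the UI thread as needed
    }
}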
You can use SMO (SQL Server Management Objects) to connect to a remote server and do pretty much anything you can do through the SQL admin tools since they use SMO to work their magic too. It's a pretty simple API and can be very powerful in the right hands.
SMO does, unsurprisingly, require that you have appropriate rights to the boxes you want to monitor. If you don't/can't have sufficient rights, you might want to ask your friendly SQL admin team to publish a simple data feed exposing some of the data you need.
HTH.
There will be some overhead within your application when connecting (verifying the connection) or failing to connect (verifying no connection), but you can avoid waiting time by checking this asynchronously.
We use the following SQL query to check the status of a particular database:
SELECT 'myDatabase status is:' AS Description,
       ISNULL((SELECT state_desc
               FROM sys.databases WITH (NOLOCK)
               WHERE name = 'myDatabase'), 'Not Found') AS [DBStatus]
This should have very little overhead, especially when paired with best practices like running it on a background or asynchronous thread.
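For completeness, running that check from C# is a few lines of plain ADO.NET; the server name and security settings below are assumptions:

using System.Data.SqlClient;

using (var conn = new SqlConnection("Server=mySqlHost;Integrated Security=true")) // assumed
using (var cmd = new SqlCommand(
    "SELECT ISNULL((SELECT state_desc FROM sys.databases WHERE name = @db), 'Not Found')",
    conn))
{
    conn.Open();
    cmd.Parameters.AddWithValue("@db", "myDatabase");
    // state_desc is e.g. ONLINE, RESTORING, RECOVERING, OFFLINE
    string dbStatus = (string)cmd.ExecuteScalar();
}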
Full disclosure: I am the founder of Cotega.
If you are interested in a service to do this, ours allows for monitoring of SQL Server uptime and performance. In addition, you can set notifications for when the database is not available, performance degrades, or database size or user count issues occur.
In our environment we have our externally-facing Web servers off-site (same city). Then internally we have our SQL server, which is being called from the above Web servers. (We do have a SQL server off-site, but it's being used less and less now.)
I've heard that SQL calls (we're using Microsoft SQL Server) have (rather wasteful) network overhead associated with them, but outside of this question, I'm not finding much about that online.
Given the situation above, where we have:
off-site (external) Web servers querying a SQL server on-site (internal)
a 3-5+ MB connection between the off-site and on-site facilities
differing levels of caching on the Web server, due to need
does it make sense to continue with SQL calls, or is it more efficient to create a Web service on an on-site (internal) Web server that is then called by those off-site servers?
We're moving 100% to ASP.NET, so we'd probably go WCF for the services, since that's what we've used for third-parties, but I assume performance differences to be the same whichever language is used.
Thanks!
Edit 1: Attempting to clarify: I'm asking about the efficiency of SQL calls across a network connection, as opposed to using a Web service. I'm aware that adding a Web service that makes SQL calls is adding complexity, but if what the Web service returns is more efficient than what SQL returns (for the same data set) ... For example, the difference between sending XML and JSON.
To me, it seems that the SQL call would be much more efficient from a programming standpoint. I can directly query the data that I want/need for particular cases, instead of creating a service method for each, or having a method that returns more information than I need, so it can be used for multiple cases.
If 'using Web services to return data across the network is more efficient than directly calling SQL' is just marketing, and there's no practical impact, then so be it.
Looking at what Microsoft provides, it seems that 'only use a Web service if communicating with third-parties or if you're programming in something that does not easily allow for SQL access' is the suggestion/guideline.
I wouldn't have thought that using a web service would be a very fast solution either. The main benefit I would see in using a web service is that you wouldn't have to expose the SQL server directly to external networks. This would potentially make the web service option more secure.
I think you need to ask yourself a couple of questions such as:
Is the speed of data access actually a concern?
Is the slowest step in the data retrieval operation the transfer of data between the SQL Server and the web server, or is it actually the SQL query itself, or the processing of the results by the web server?
We want to distribute / synchronize data from our data warehouse (MS SQL Server) to external customers (also MS SQL Server). The connection has to be secure, because we are dealing with trusted data. Transmission of data from our system to the external client systems must be via HTTP/HTTPS.
In addition, it is possible that clients still run their systems with an older database schema, so tables and columns that already exist should be transmitted and non-existing ones should be ignored.
It's most likely that we will have large database updates, and the updates have to arrive in almost real time.
And it is definitely necessary that the data is stored in a client-side data warehouse / SQL database.
The whole process should also include good monitoring possibilities in case something goes wrong.
We started to develop our own .NET solution, but I thought exchanging data between different systems must be a fairly common problem.
Does anybody know about an existing solution which we can adapt to our scenario?
Any help is appreciated!
The problem is so common that it has a dedicated component in SQL Server: Service Broker. Rather than starting your own .NET thing, consider everything you would have to take care of: how are you going to handle downtime? Retries? Duplicates? Out-of-order delivery? Authentication of non-domain-joined computers? Routing for machines that change names? Service upgrades? Transactional consistency and rollbacks? Are you going to use DTC? You can look at the demo I gave at SQL Connections to see how you can easily scale SSB to a throughput of well over 1000 msgs/sec (1 KB payload) on commodity hardware.
The only requirement is that all participants must be at least SQL Server 2005 (there is no SSB in SQL Server 2000).
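To give a flavor of the sending side once the message types, contract, queues, and services have been set up (all the //Warehouse/... and //Client/... names below are placeholders I made up; the T-SQL is issued through ADO.NET):

using System.Data;
using System.Data.SqlClient;
using System.Text;

const string sendSql = @"
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Warehouse/ChangeSource]
        TO SERVICE '//Client/ChangeTarget'
        ON CONTRACT [//Warehouse/ChangeContract]
        WITH ENCRYPTION = ON;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Warehouse/RowChange] (@payload);";

using (var conn = new SqlConnection("Server=warehouse;Integrated Security=true")) // assumed
using (var cmd = new SqlCommand(sendSql, conn))
{
    conn.Open();
    cmd.Parameters.Add("@payload", SqlDbType.VarBinary, -1).Value =
        Encoding.Unicode.GetBytes("<row table=\"orders\" id=\"42\"/>");
    cmd.ExecuteNonQuery(); // retries, ordering and delivery are now SSB's problem
}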
Just use regular SQL connections over a secure VPN or an SSH tunnel. It should be very easy for your networking guys to set up.
For example, you can create a linked server. Then a scheduled SQL job could move the data:
-- TRUNCATE TABLE does not accept a four-part linked-server name,
-- so clear the remote table with DELETE instead
delete from targetserver.dbname.dbo.tablename

insert into targetserver.dbname.dbo.tablename
select a, b, c
from dbname.dbo.sourcetable
Since the linked server talks to your server over a VPN or SSH tunnel, all data is sent encrypted over the Internet.