FDW locking in PostgreSQL

Can anyone share their experience about FDW locking in PostgreSQL? Or where can I read more about it?
I read the docs at https://www.postgresql.org/docs/11/fdw-row-locking.html, but didn't find enough information about which kind of lock I get on my target tables if I run SELECT * FROM fdw_table.
In the usual case without FDW I get ACCESS SHARE, which conflicts with almost nothing. What do I get with FDW?
Is there a weak spot here?
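One way to observe this empirically is to run the foreign-table scan in one session and inspect pg_locks from another. A minimal sketch, assuming a postgres_fdw foreign table named fdw_table (with postgres_fdw, the local side takes only an ACCESS SHARE lock on the foreign-table object; the remote server runs the scan in its own remote transaction):
-- Session 1: start a transaction and scan the foreign table
BEGIN;
SELECT * FROM fdw_table;

-- Session 2: inspect the locks session 1 holds locally
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid <> pg_backend_pid();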

Related

Possible to create SQL Server stored procedure connecting to multiple servers?

My question concerns the ability to create stored procedures that connect to multiple servers. If anyone's not familiar with it, there's a :CONNECT command in SQLCMD mode that will switch where your query is being run. For example:
:CONNECT SERVERNAME
SELECT *
FROM Table
GO
This would run the query on the server where the table is stored, as opposed to using linked servers (which cause serious performance issues). Does anyone know whether it's possible (and how to achieve it) to create stored procedures that switch between servers? I keep getting various error messages when trying. Here is an example:
:CONNECT SERVERNAME
SELECT *
FROM Table
GO
:CONNECT SERVERNAME2
SELECT *
FROM Table
GO
This would connect to two different servers in the same query.
Thanks
UPDATE - 4.26.2018
All,
We've pretty much decided OPENQUERY is our best solution, at least for stored procedures. Unfortunately, we'll be limited by its syntax, but performance is MUCH better than using linked servers (which is what we're currently using). I appreciate everyone who's chimed in; your input was invaluable. If you wish to add anything else, please feel free to do so.
Thanks
Using linked servers with four-part naming will work. It can get ugly fast performance-wise, so be careful. If you do use linked servers, I'd recommend against referencing multiple servers (including the local server you're on) in the same SQL statement. It isn't terribly efficient: the engine basically breaks the query into local and remote parts, scrubs the data, and combines it locally before finishing. I've seen users execute queries spanning multiple servers and take down solid servers in the process. A four-part-name query looks like the sketch below.
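For reference, a minimal four-part-name example (the linked server, database, schema, and table names are placeholders):
-- Four-part naming: [server].[database].[schema].[table]
SELECT t.Id, t.ColumnName
FROM [LinkedServerName].[DatabaseName].[SchemaName].[TableName] AS t
WHERE t.ColumnName = 'SomeValue';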
Another option is the OPENQUERY() method. This still uses a linked server, but it sends the query to the other server, which processes the whole thing there and just ships the data back. This is typically faster and more efficient than four-part naming.
SELECT
opn.Id,
opn.ColumnName,
opn.AnotherColumnName
FROM OPENQUERY([LinkedServerName],
'SELECT
tbl.Id,
tbl.ColumnName,
tbl.AnotherColumnName
FROM DB.Schema.Object AS tbl
WHERE tbl.ColumnName = ''SomeValue'''
) AS opn
SSIS is your best bet here. It's fast, works well with data from multiple servers, and is not too difficult to learn (the basics, anyway).
So, a few options I've listed...
SSIS - Best option
OPENQUERY() - Better option
Linked Servers - "I guess it works" option

Use SQL or NoSQL?

I'm designing a system that checks a given website for security vulnerabilities. The system includes a client (a Firefox plugin) and a server. The server does all the scanning while the client just relays the results to the user. If a website is dangerous, it is blacklisted; otherwise it is whitelisted.
The system must hypothetically be able to handle several thousands of requests and updates to the database simultaneously.
Although the database is expected to have a very simple structure, I am still considering NoSQL because my understanding is that it can handle a greater volume of queries. Is this true? Which database technology is better suited to my system?
I suggest a NoSQL database.
In fact I've been working with two databases in the last few weeks, and searching the internet I found the differences between a NoSQL and a SQL database.
Practically speaking, you should use a NoSQL db if you have a lot of data to query. Keep in mind that data recovery after a database disaster is not guaranteed.
Instead, use a SQL database if your data MUST be permanent and you can't lose it. But query times will be longer, so it's not suggested if you have tons of data.
I understood, from what you wrote, that you need a lot of queries and you "can lose" the data (if you lose a website from the list, you'll just need to re-check it, right?).
So I suggest you go for a NoSQL db (I worked with MongoDB; it is the most famous worldwide).
If you consider NoSQL databases, you have to analyze your data to pick the right one.
For your use case I think you should look at document databases (like MongoDB) or, if you want really high performance, a key-value database like Redis or Riak.
With key-value databases you can only use the key to find the data you want.
With document databases you still have some kinds of queries to find the data.
For further information look at: http://nosql-database.org/

Oracle 8i trace of SQL statements

I am investigating a legacy app that uses an Oracle 8i database in a test environment, specifically trying to find out what tables are accessed for read, insert, update or delete when the user performs an app function.
What is the best/easiest way to do this? Can I simply get a list of all sql statements sent to the database? Can I see when stored procedures are called?
Having little experience with Oracle but getting help from a DBA, I'm thinking I should either use a trace or look at the redo log with LogMiner, but how?
Thanks!
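Since the question mentions tracing: a minimal sketch of enabling SQL trace for your own session on 8i (the trace file lands in USER_DUMP_DEST and can be formatted with tkprof afterwards; tracing the app's own session instead would use the well-known but undocumented DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, TRUE)):
-- Enable trace, exercise the app function, then disable it
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- ... perform the app function under test here ...

ALTER SESSION SET sql_trace = FALSE;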
What you could do is harvest the SQL statements from v$sql. If the statements are properly written - using bind variables - you should be able to catch most of them in a table this way. I currently have no running v8 instance at hand, but this should be possible.
In order to catch most of them, you will probably need to repeat the harvesting while the various workloads run on the database.
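A minimal sketch of such a harvesting query (the filters are just one reasonable choice; V$SQL truncates long statements, so V$SQLTEXT may be needed for the full text):
-- List recently parsed write statements, newest first
SELECT sql_text, executions, first_load_time
FROM v$sql
WHERE UPPER(sql_text) LIKE 'INSERT%'
   OR UPPER(sql_text) LIKE 'UPDATE%'
   OR UPPER(sql_text) LIKE 'DELETE%'
ORDER BY first_load_time DESC;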

How to track which tables the database wrote data to?

I use "Lexware Warenwirtschaft Premium 2014" (a well-known merchandise management software in Germany). It uses Sybase as a database. I connect to the database by using a ODBC connection(SQL Anywhere driver). The database has 800+ tables. For example when Lexware creates a new Article, it writes data into different tables.
Is there a way to track into which tables Lexware wrote data?
As an ad-hoc measure you could switch on ODBC tracing and then review the contents of the trace file.
http://support.microsoft.com/kb/274551 tells you how to do this from a Windows client, and you can find similar information for Linux/Unix and other clients.
You'd then have to parse the trace file to see which tables were written to. The first step would probably be to isolate all the SQLPrepare and SQLExecDirect calls, and check them for INSERT, UPDATE and other relevant Sybase statements.
Note that this is not something you'd want as an ongoing solution, just a way to find out what an ODBC client does if you do not have access to e.g. logging information on the database itself. Tracing slows down execution and would generate a very large trace file if you left it running for any significant period.
I don't think so. Whatever this program does behind the interface is hidden in its binaries and unreadable to humans, so you can't read the code to see which tables are altered.
You might be able to figure out which table was edited last, depending on the SQL server and its version.

Connecting to a remote Oracle database in SQL

I need to do some data migration between two Oracle databases on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire tables over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements against tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a database link, you can just select from the other database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
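A minimal sketch (host, port, service name, and credentials are placeholders; USING takes a TNS alias or an Easy Connect string):
CREATE DATABASE LINK remote_db
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING '//remotehost:1521/remoteservice';

-- Query the remote table through the link:
SELECT * FROM tablename@remote_db;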
It depends on whether it's a one-time thing or a recurring process, and on whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, what you're trying to accomplish is to copy a couple of tables from one DB to another. If the databases can reach one another, it's really simple: you could just create a DBLINK (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DBLINK for one of the tables and the local table as the receiver or sender. It's pretty straightforward, as the sketch below shows.
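A hedged sketch of the copy itself (table and link names are assumptions):
-- Pull rows from the remote database into an existing local table:
INSERT INTO local_table
SELECT * FROM source_table@remote_db;

-- Or create the local copy in one statement:
CREATE TABLE local_copy AS
SELECT * FROM source_table@remote_db;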
But if it's a one-time thing I would just move the tables with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
If it's something you need to maintain and keep updated, why not just add the DBLINK and use that on both sides? This will be dependent on network performance, though.
If this is a bit out of your depth or you can't create DBLINKs due to restrictions, SQL Developer has had a database copy option for a while, and you can go as far as copying individual tables, but it's very heavy on the system where it's run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).