Is it possible to output a detailed query result file? - sql

I am new to working with SQL and need to know if it is possible to produce a detailed query result file. I know you can get a results file, but it only contains info like "1 row(s) affected"; I need detailed info like:
"added row ID,Name,Surname; 1, John, Adams".

This is not a feature of SQL Server at this time. If you need this level of insight into your database changes, you could look at temporal tables or implement a custom logging solution (such as Created/Modified columns on the table, so that you can query the data to see when rows were created or changed).
It's hard to say what your options are without knowing the version of SQL Server you're using and what level of control you have over how the data is getting into the system, but these are at least a couple options.
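A minimal sketch of the Created/Modified-column idea mentioned above, assuming SQL Server; the table and column names are only for illustration:

CREATE TABLE dbo.Person (
    ID       int IDENTITY(1,1) PRIMARY KEY,
    Name     nvarchar(50) NOT NULL,
    Surname  nvarchar(50) NOT NULL,
    Created  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Modified datetime2 NULL
);

-- reconstruct what was added and when, e.g. within the last hour
SELECT ID, Name, Surname, Created
FROM dbo.Person
WHERE Created >= DATEADD(HOUR, -1, SYSUTCDATETIME());

For a single statement, the OUTPUT clause (e.g. OUTPUT inserted.ID, inserted.Name, inserted.Surname on the INSERT) can also echo the affected rows, though it won't produce an automatic log file.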

Related

Is there any way to get name of the first, the second, the third table in PostgreSQL?

I'm gathering information about a database by executing time-based SQL injection attacks (in a lab environment). I have discovered the current database user and the current database name, but I don't know how to get the names of the first, the second, the third [,...] tables in that current database. Is there any way to solve this problem?
I'm working with PostgreSQL, but if you know a way in another DBMS, please tell me; I would be very grateful!
You can list all tables in the current database with the \dt meta-command in psql. That only helps interactively, though; through plain SQL (which is all an injection gives you), you can query the information_schema.tables catalog view instead.
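For the first/second/third-table pattern in the question, LIMIT/OFFSET can page that view one name at a time; a minimal sketch, assuming the tables of interest live in the public schema:

SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name
LIMIT 1 OFFSET 0;  -- OFFSET 1 gives the second table, OFFSET 2 the third, ...

In a blind, time-based scenario the same query would be embedded as a subquery whose result is tested character by character.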

How do I copy data from one Azure database table to a different Azure database table and also convert data types?

I have to copy data from one table to another; the tables are held in two different databases within Azure. I did a quick search for answers to this, and whilst a query seems fairly straightforward, i.e.
INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, ref_no
FROM database2.dbo.table2
I encountered issues because I'm using Azure.
Msg 40515, Level 15, State 1, Line 16 Reference to database and/or
server name in 'database2.dbo.table2' is not supported in this version of
SQL Server.
The above issue led me to the Cross-Database Queries articles. My requirements are a little more complicated than some of the scenarios provided and I need some help in making it work.
I also need to convert some columns, such as ref_no, which is a 'string', to an 'int' and then copy the value into the 'serial' column.
My question is: what is the best way to create a script for this that allows me to reference both databases without any errors, copy the data, and convert the columns at the same time? I tried the simple route of exporting the data and importing it, editing the column mappings, but I found it wasn't very good and was causing problems all over the place.
Any guidance is appreciated on this.
You're getting this error because there's no linked server by default; you'll need to set one up in order to access the secondary database server. Here's a link about how to do it:
https://www.sqlshack.com/create-linked-server-azure-sql-database/
In terms of the transformation: it depends on many factors, e.g. the number of rows, the frequency, and so on.
Usually the best alternative is to use an external ETL tool such as SSIS or Azure Data Factory, because you can schedule its execution and get the status of each run.
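If both databases are Azure SQL Database, the usual route for the query itself is elastic query: define an external table locally that points at the remote table, after which the INSERT ... SELECT works unchanged. A sketch, with the credential values, column names, and column types assumed from the question; TRY_CAST turns unparseable serial values into NULL instead of raising an error:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL Db2Cred
WITH IDENTITY = '<sql login>', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE Db2Source WITH (
    TYPE = RDBMS,
    LOCATION = '<server>.database.windows.net',
    DATABASE_NAME = 'database2',
    CREDENTIAL = Db2Cred
);

-- local shell of the remote table (column types are assumptions)
CREATE EXTERNAL TABLE dbo.table2 (
    the_make  nvarchar(50),
    the_model nvarchar(50),
    the_type  nvarchar(50),
    ref_no    nvarchar(20)
) WITH (DATA_SOURCE = Db2Source);

INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, TRY_CAST(ref_no AS int)
FROM dbo.table2;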

Impala Show Tables

I know in Impala (and other databases) I can run both of the following:
SHOW DATABASES
SHOW TABLES
I also know I can add optional LIKE or IN arguments e.g. to show me all the tables in database Bananas I could write:
SHOW TABLES IN Bananas
What I really want to know is whether there is a way of returning all the tables across all databases without having to recurse through them (also showing the database name and the table name as separate fields).
I'll be running this via impala-shell, so I'd have to first return all the database names and then produce a script line per database to give me its tables.
It's not a problem to do this as such; I just can't help wondering whether there is a better way to end up with a single database-name/table-name result set.
Unfortunately not yet. Impala will eventually support this by exposing schema metadata as queryable tables (e.g. an ANSI INFORMATION_SCHEMA); IMPALA-1761 tracks that feature request.
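For reference, this is the shape of query that exposing schema metadata would enable; the ANSI INFORMATION_SCHEMA style below works today in databases such as MySQL and PostgreSQL, but not yet in Impala:

SELECT table_schema AS database_name, table_name
FROM information_schema.tables
ORDER BY table_schema, table_name;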

SSAS Multidimensional - Table Value Function as a Query for Partition

@GregGalloway was able to answer the question I should have asked. I am adding a more concise question here, while maintaining the original lengthy text.
How do I use a table-valued function as the query for a partition when the function is in a separate database from my fact and referenced dimensions?
Overview: I am building a SSAS multidimensional cube that is built off of a single fact table in our application's data warehouse, and want to use the result set from a table valued function as my fact table's partition query. We are using SQL Server (and SSAS) 2014
Condition: For each environment (Dev,Tst,Prd) there are 2 separate databases on the same server, one for the application data warehouse [DW_App], the other for custom objects [DW_Custom]. I cannot create any objects in [DW_App], but have a lot of freedom in [DW_Custom]
Background info: I have not been able to find much information on using a TVF and partitions in this way. My thinking is that it will help streamline future development by giving me a single place to update the SQL if/when I modify the fact table.
So in testing out my crazy idea of using a TVF as the query for my partitions I have run into a bit of a conundrum. I am able to use my TVF when I explicitly state the Database in my FROM clause.
SELECT * FROM [DW_Custom].[dbo].[CubePartition](@StartDate, @EndDate)
However, that will not work, because the cube will be deployed in multiple environments before production, and it needs to point to a different DB in each. So I tried adding a new data source, setting my partition query to point to the new data source, and then removing the database name, i.e.:
SELECT * FROM [dbo].[CubePartition](@StartDate, @EndDate)
I get an error:
The SQL syntax is not valid. The relational database returned the following error message: Deferred prepare could not be completed. Invalid object name 'dbo.CubePartition'
If I click through this error and the subsequent warnings that the cube will not be able to process if I continue, I am able to build and deploy the cube. However, I cannot process it, because I get an error that one of my dimensions does not exist.
Looking into the query that was generated, it is clear that it is querying my dimensions as well as the fact table, and those do not exist inside [DW_Custom], which explains that error perfectly well.
So I guess 2 questions:
Is it possible to query another DB (on the same server) from inside of an SSAS partition query?
If not, is there any way I can use a variable as the database name in the query, and update that variable based on the project configuration (Dev, Tst, Prd)?
Bonus question: is the reason I cannot find much about doing it this way that it is an obviously bad idea I am overlooking, and if so, why?
How about creating a second SSAS data source pointing to the DW_Custom database (or whatever it's called in the particular environment you're deploying to)? Then when you deploy from Dev to Prod, you need only change that connection string. When you create your partitions, specify the DW_Custom data source and enter the query without the database name:
SELECT * FROM [dbo].[CubePartition](@StartDate, @EndDate)
As long as the query plan for that table-valued function is efficient compared to a plain SELECT statement, then I don't see a problem with that.
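For what it's worth, this layout works because the TVF itself, living in [DW_Custom], can use three-part names to reach the fact table in [DW_App] on the same server. A hypothetical sketch of such an inline TVF (the fact table and date column names are assumptions):

CREATE FUNCTION dbo.CubePartition (@StartDate date, @EndDate date)
RETURNS TABLE
AS
RETURN
    SELECT f.*
    FROM [DW_App].dbo.FactTransactions AS f  -- cross-database, same server
    WHERE f.TransactionDate >= @StartDate
      AND f.TransactionDate <  @EndDate;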

Pentaho ETL : Database Join vs Table Input

I need to write database table data to a text file with some transformations.
There are two steps available to retrieve the data from the table, namely Table Input and Database Join. I don't see much difference between them except the "outer join?" option (correct me if I've understood wrongly). So which would be better to use?
Environment:
Database: Oracle
Pentaho Spoon: 5.3.* (Community Edition)
Thanks in advance.
The Table Input step in PDI is used to read data from your database tables. The query is executed once and returns the full result set. Check the wiki.
The Database Join step works slightly differently: it executes its query based on the data received from the previous step. For every incoming row, the parameters in the query are substituted and the query is run again. Check the wiki.
The choice between the two steps depends on your requirement:
If you need to fetch a data set from a database table, use the Table Input step - the best choice.
If you need to run the query against the database for every incoming row, use the Database Join step - the best choice.
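To make the per-row behaviour concrete: a Database Join step takes a query with ? placeholders, which PDI fills in from fields of each incoming row. A small sketch against Oracle, with made-up table and field names:

-- executed once per incoming row; the ? is bound to a field
-- (e.g. order_id) chosen in the step's parameter grid
SELECT status, last_updated
FROM orders
WHERE order_id = ?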
Hope it helps :)