I love SQL Server Management Studio's 'Find in files' capability -- searching specific file types along multiple pre-defined paths for specified strings. (I have a Perl-driven ASCII editor which does something very similar.) But I need a batch process to execute the same sort of argument-rich search for many, many different strings. I had assumed that once I ran a preliminary search, the query window would provide (along with the search results) the SQL query syntax executed to produce that search, which I could then use as a template for building a complex query or batch search. I see now that this 'Find in files' capability in Management Studio isn't a SQL query at all, but a process apparently internal to the software itself. I'm an avid SQL user, but not really a SQL programmer. Does anyone have a simple approach to accomplishing this sort of search in a 'scalable' SQL query?
As the question states:
Within Power BI, from 'Get Data from SQL Server' -> connecting to the SQL Server,
there are two options: Import and Advanced. With Advanced, you can write a SQL query to get the data; the default, Import, shows all the tables on the server and you can just ETL from a click.
What is the real difference?
If you are comfortable writing your own T-SQL SELECT statement, you can use it to bypass the Power Query Editor and send your desired statement straight to the SQL database. That is also handy if you already have code written out from a previous query or project, which you can just paste into the Advanced query window.
If you use the Power Query Editor to build your query step by step, you get a better view of what data is returned by the previous step(s), and you can apply data manipulations after seeing the data.
Power Query uses query folding, which means that your individual steps are analysed and, where possible, translated into a single SQL statement before being sent to the server.
That means that even if you don't speak T-SQL very well, you can still build efficient queries with the Query Editor, and if you feel you are an accomplished T-SQL developer, you can shortcut the Query Editor steps altogether. Of course that means that it is also possible to use "Advanced" and write clunky, inefficient T-SQL that performs slower than going through the Query Editor steps would.
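For illustration, a minimal sketch (the table and column names here are assumed): if your Query Editor steps keep three columns of a Sales table and filter on a date, query folding would send the server one combined statement along these lines, rather than pulling the whole table:

SELECT OrderID, CustomerID, Amount
FROM Sales
WHERE OrderDate >= '2020-01-01';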
In the end, it comes down to preference and familiarity. A seasoned DBA might just quickly write out a Select statement, a SQL rookie might prefer to click a few ribbon commands instead. The result can be identical in returned data and performance.
I have a stored procedure and a table in SQL Server 2014 Enterprise, with data in the table. Now I need the same table and data in PostgreSQL (pgAdmin 4).
Can anyone suggest how to migrate the data to PostgreSQL, or how to create a SQL script that I can run with psql?
Depending on how much data you have, you could script out the table and data. Then you could tweak the script as needed for PostgreSQL:
Right click on the SQL database > Tasks > Generate Scripts
On the "Choose Objects" screen, select your specific table then select "Next>"
On the "Set Scripting Options" screen, select "Advanced"
Find the option called "Types of data to script", then select "Schema and data" and select "OK"
Set the filename and continue through the dialog until the file is generated
Tweak the SQL script for any PostgreSQL-specific syntax
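As a rough sketch of the kind of tweaks involved (the table here is hypothetical): bracketed identifiers, IDENTITY columns, and GO batch separators in the generated T-SQL all need PostgreSQL equivalents:

-- Generated by SSMS (T-SQL):
CREATE TABLE [dbo].[Customers](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [nvarchar](100) NULL
)
GO

-- Rewritten for PostgreSQL:
CREATE TABLE customers (
    id serial NOT NULL,
    name varchar(100) NULL
);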
If there is a larger amount of data, you might look into some type of data transfer tool like SSIS.
Exporting the table structure and data as Josh Jay describes will likely require some fixes where the syntax doesn't match, but it should be doable, if tedious. Luckily, there are existing conversion tools available to help.
You could also try using a foreign data wrapper to map the tables in SQL Server to a running instance of PostgreSQL. Then it's just a matter of copying tables. Depends on your needs and where each database server is located relative to one another.
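A minimal sketch of the foreign data wrapper route, assuming the tds_fdw extension is available and using placeholder server, credential, and table names:

CREATE EXTENSION tds_fdw;

CREATE SERVER mssql_svr
    FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'mssql.example.com', port '1433', database 'MyDb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER mssql_svr
    OPTIONS (username 'some_user', password 'some_password');

CREATE FOREIGN TABLE mssql_customers (
    id integer,
    name varchar(100)
) SERVER mssql_svr OPTIONS (table_name 'dbo.Customers');

-- Copying the table is then a plain CREATE TABLE ... AS (or INSERT ... SELECT):
CREATE TABLE customers AS SELECT * FROM mssql_customers;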
The stored procedures will unfortunately be far more difficult to handle. While Oracle's PL/SQL language is substantially similar to PostgreSQL's PL/pgSQL, MS SQL Server/Sybase's Transact-SQL dialect is different enough to require rewrites. If the Transact-SQL procedures also access .NET objects, the migration may end up far more difficult still, as you reimplement dependencies or find logical equivalents.
Note: C# 3.5 application calling a SQL Server 2005 DB on a remote server.
I'm developing a two-step process.
1) I search a Windows Indexing Service for a list of files that contain a given word, such as "Bob".
2) I then need to retrieve a list of rows from a DOCUMENT table in a SQL DB by passing in the list of filenames from the Indexing Service.
At the moment I retrieve a list from the indexing service AND all rows from the DOCUMENT table, then filter them in code. This isn't practical as there are 10,000+ documents and the database is through a firewall.
I considered creating a query such as:
SELECT DocName FROM Documents WHERE DocName IN ({list of files from indexing service})
...but given that the list of files could run to thousands of entries, it won't work.
So, what's the best thing I can do? I don't want to query the DB for all 10,000+ rows and pass them back over the firewall (takes 10 minutes). I somehow need to pass in the list of filenames retrieved from the indexing service.
How would LINQ work in this scenario?
Any advice greatly appreciated.
If you had SQL Server 2008, you could use Table Valued Parameters, but for 2005, there's nothing quite as elegant.
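For reference, on 2008 and later the TVP route looks roughly like this sketch (the type, table, and column names are assumed):

CREATE TYPE dbo.FileNameList AS TABLE (DocName nvarchar(260) PRIMARY KEY);
GO

CREATE PROCEDURE dbo.GetDocumentsByFileName
    @Files dbo.FileNameList READONLY
AS
    -- Only rows whose DocName appears in the supplied TVP are returned
    SELECT d.*
    FROM Documents AS d
    JOIN @Files AS f ON f.DocName = d.DocName;
GO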
The simplest solution I can think of is:
Create a table in the database
Bulk Insert the results of your Indexing Service into the table
Join your query to this table to filter the results
Retrieve the filtered results
It's not a great solution, but I don't know that a great solution exists - that's why TVPs were created.
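A minimal T-SQL sketch of that staging-table approach (names assumed; from C# you would fill the table with SqlBulkCopy):

-- Staging table for the file names returned by the Indexing Service
CREATE TABLE dbo.SearchFileNames (DocName nvarchar(260) PRIMARY KEY);

-- ...bulk insert the file names here, e.g. via SqlBulkCopy...

-- Filter Documents by joining to the staging table
SELECT d.*
FROM Documents AS d
JOIN dbo.SearchFileNames AS s ON s.DocName = d.DocName;

-- Clear the staging table for the next search
TRUNCATE TABLE dbo.SearchFileNames;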
You can evaluate different solutions for this kind of "massive" operation; it may not be necessary to use LINQ. For example, try implementing a stored procedure on SQL Server that receives the list of file names as input and returns the list of documents.
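On SQL Server 2005 that usually means passing the file names as a single delimited string and splitting it server-side. A rough sketch, with the delimiter and all object names assumed:

CREATE PROCEDURE dbo.GetDocumentsByFileList
    @FileList nvarchar(max)  -- e.g. 'a.pdf|b.pdf|c.pdf'
AS
BEGIN
    DECLARE @Names TABLE (DocName nvarchar(260));
    DECLARE @pos int, @next int;

    -- Walk the delimited string and insert one row per file name
    SET @FileList = @FileList + '|';
    SET @pos = 1;
    SET @next = CHARINDEX('|', @FileList, @pos);
    WHILE @next > 0
    BEGIN
        INSERT INTO @Names (DocName)
            VALUES (SUBSTRING(@FileList, @pos, @next - @pos));
        SET @pos = @next + 1;
        SET @next = CHARINDEX('|', @FileList, @pos);
    END;

    SELECT d.*
    FROM Documents AS d
    JOIN @Names AS n ON n.DocName = d.DocName;
END;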
I opted for a solution similar to what Bazzz mentioned.
I've set up a nightly operation to copy the required fields from the database and set meta tags on the document files (PDFs). The meta data can then be used in the Indexing Service ;o)
This has proved to be a good solution for this instance, but otherwise what Hallainzil said would've been the best option, albeit painful on SQL Server 2005.
Is it possible to search and replace all occurrences of a string in all columns in all tables of a database? I use Microsoft SQL Server.
Not easily, though I can think of two ways to do it:
Write a series of stored procedures that identify all varchar and text columns of all tables, and generate individual update statements for each column of each table of the form "UPDATE foo SET BAR = REPLACE(BAR,'foobar','quux')". This will probably involve a lot of queries against the system tables, with a lot of experimentation -- Microsoft doesn't go out of its way to document this stuff. A sketch of the generation query follows this list.
Export the entire database to a single text file, do a search/replace on that, and then re-import the entire database. Given that you're using MS SQL Server, this is actually the easier approach. Microsoft created the Microsoft SQL Server Database Publishing Wizard for other reasons, but it makes a fine tool for exporting all of the tables of a SQL Server database as a text file containing pure SQL DDL and DML. Run the tool to export all of the tables for a database, edit the resulting file as you need, and then feed the file back to sqlcmd to recreate the database.
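For the first approach, a sketch of how those UPDATE statements could be generated from the INFORMATION_SCHEMA views ('foobar' and 'quux' are the placeholders from above; review the generated statements before running them, and note that text/ntext columns would first need converting, since REPLACE does not work on them directly):

SELECT 'UPDATE ' + QUOTENAME(c.TABLE_SCHEMA) + '.' + QUOTENAME(c.TABLE_NAME)
     + ' SET ' + QUOTENAME(c.COLUMN_NAME)
     + ' = REPLACE(' + QUOTENAME(c.COLUMN_NAME) + ', ''foobar'', ''quux'');'
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN INFORMATION_SCHEMA.TABLES AS t
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.DATA_TYPE IN ('varchar', 'nvarchar', 'char', 'nchar');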
Given a choice, I'd use the second method, as long as the DPW works with your version of SQL Server. The last time I used the tool, it met my needs (MS SQL Server 2000 / 2005) but it had some quirks when working with database Roles.
In MySQL, you can do it very easily like this:
update [table_name] set [field_name] = replace([field_name],'[string_to_find]','[string_to_replace]');
I have personally tested this successfully on a production server.
Example:
update users set vct_filesneeded = replace(vct_filesneeded,'.avi','.ai');
Ref: http://www.mediacollege.com/computer/database/mysql/find-replace.html
A good starting point for writing such a query is the "Search all columns in all the tables in a database for a specific value" stored procedure. The full code is at the link (not trivial, but copy/paste it and use it, it just works).
From there on it's relatively trivial to amend the code to do a replace of the found values.
What is a dynamic SQL query, and when would I want to use one? I'm using SQL Server 2005.
Here are a few articles:
Introduction to Dynamic SQL
Dynamic SQL Beginner's Guide
From Introduction to Dynamic SQL:
Dynamic SQL is a term used to mean SQL code that is generated programmatically (in part or fully) by your program before it is executed. As a result it is a very flexible and powerful tool. You can use dynamic SQL to accomplish tasks such as adding WHERE clauses to a search based on which fields are filled out on a form, or creating tables with varying names.
Dynamic SQL is SQL generated by the calling program. This can be through an ORM tool, or ad hoc by concatenating strings. Non-dynamic SQL would be something like a stored procedure, where the SQL to be executed is predefined. Not all DBAs will let you run dynamic SQL against their database, due to security concerns.
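For example, a minimal T-SQL sketch (table and column names assumed) that builds a WHERE clause at run time, using sp_executesql with a parameter rather than concatenating the value in, to avoid SQL injection:

DECLARE @sql nvarchar(max);
DECLARE @name nvarchar(100);
SET @name = N'Bob';  -- pretend this came from a form field

SET @sql = N'SELECT * FROM Customers WHERE 1 = 1';

-- Append a condition only if the form field was filled in
IF @name IS NOT NULL
    SET @sql = @sql + N' AND Name = @Name';

EXEC sp_executesql @sql, N'@Name nvarchar(100)', @Name = @name;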
A dynamic SQL query is one that is built as the program is running as opposed to a query that is already (hard-) coded at compile time.
The program in question might be running on the client or application server (it's debatable whether you'd still call that 'dynamic') or within the database server itself.