I am looking to see whether it is possible to import a custom SQL Server (SSMS) query into SPSS (Statistical Package for the Social Sciences). I want to build syntax that runs this query to produce my new dataset, which I can then use to continue my scripted analysis. I can see the basic capability to query one table from a SQL Server, but I would like to write a query that joins many tables. I anticipate the query will be fairly complex, with many joins and perhaps some data transformations.
Has anybody had experience with this, or found a solution?
I know I could take the query and materialize it as a table that SPSS can connect to, but my data changes daily, and I would need a job in another application to refresh that table before my SPSS syntax pulls it. I would like to eliminate that first step by simply having the query grab the data at the beginning of my syntax.
Ultimately I am looking to build out my SPSS syntax and schedule it in the Production Facility to run daily.
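For reference, SPSS command syntax can pass a free-form query to the database over ODBC with GET DATA /TYPE=ODBC. Below is only a rough sketch, assuming an ODBC DSN is already set up; the connection string, table and column names are placeholders, and the continuation style of the long /SQL string is worth checking against the GET DATA documentation for your SPSS version:

    GET DATA
      /TYPE=ODBC
      /CONNECT='DSN=MyWarehouse;UID=report_user;PWD=secret;'
      /SQL='SELECT o.order_id, o.order_date, c.customer_name ' +
           'FROM dbo.orders o ' +
           'JOIN dbo.customers c ON c.customer_id = o.customer_id ' +
           'WHERE o.order_date >= CAST(GETDATE() - 1 AS date)'.
    CACHE.
    EXECUTE.
    DATASET NAME DailyExtract.

Because the text of /SQL is sent to SQL Server as-is, the joins and any T-SQL transformations run on the server, and the result comes back as the active dataset at the top of the job, which can then be scheduled in the Production Facility like any other syntax file.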
As the question states:
Within Power BI, when using 'Get Data' -> 'SQL Server' to connect to the SQL Server, there are two options: the default import, and Advanced. With Advanced, you can write a SQL query to get the data; the default import instead shows all the tables on the server, and you can ETL from a click.
What is the real difference?
If you are comfortable writing your own T-SQL select statement, you can use it to bypass the Power Query editor and send your desired statement straight to the SQL database. That is also handy if you have code already written out from a previous query or project, which you can just paste into the Advanced query window.
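As a rough illustration (the table and column names here are invented), the kind of statement you might paste into the Advanced options window could look like this, and Power BI simply loads whatever the SELECT returns as one table:

    SELECT  s.SaleID,
            s.SaleDate,
            c.CustomerName,
            p.ProductName,
            s.Quantity * s.UnitPrice AS LineTotal
    FROM    dbo.Sales s
    JOIN    dbo.Customers c ON c.CustomerID = s.CustomerID
    JOIN    dbo.Products  p ON p.ProductID  = s.ProductID
    WHERE   s.SaleDate >= DATEADD(month, -12, GETDATE());

One caveat to keep in mind: Query Editor steps applied on top of a hand-written statement like this generally will not fold back to the server, so any heavy filtering is best done inside the statement itself.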
If you use the Power Query Editor to build your query step by step, you get a better view of what data is returned by the previous step(s), and you can apply data manipulations after sighting the data.
Power Query uses query folding, which means that your individual steps are analysed and, where possible, translated into a single efficient SQL statement before the query is sent to the server.
That means that even if you don't speak T-SQL very well, you can still build efficient queries with the Query Editor, and if you feel you are an accomplished T-SQL developer, you can shortcut the Query Editor steps altogether. Of course that means that it is also possible to use "Advanced" and write clunky, inefficient T-SQL that performs slower than going through the Query Editor steps would.
In the end, it comes down to preference and familiarity. A seasoned DBA might just quickly write out a Select statement, a SQL rookie might prefer to click a few ribbon commands instead. The result can be identical in returned data and performance.
I don't seem to find a way to write the output of a previous step in the flow into a SQL table using the SQL recipes. When I read the documentation, it seems both types of SQL recipe can only take a SQL dataset as input? This can't be right, as you would imagine you would want to create datasets in the flow and then commit them to a database.
https://doc.dataiku.com/dss/latest/code_recipes/sql.html
In the docs above, the input and output datasets are described as needing to be SQL datasets.
Indeed, it doesn't seem possible with a SQL recipe, which executes fully in the database.
That being said, you can probably use a sync recipe to load your non-SQL dataset into your SQL database first, so that you can then run a SQL recipe on it.
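For example, once the sync recipe has landed the data in the SQL connection, a downstream SQL query recipe is just the SQL you want to run against it. The table name below is only illustrative - DSS decides the actual table name of a managed dataset from the connection settings:

    -- SQL query recipe: reads the synced dataset, writes a new SQL dataset
    SELECT customer_id,
           COUNT(*)    AS nb_orders,
           SUM(amount) AS total_amount
    FROM   "MYPROJECT_orders_synced"
    GROUP BY customer_id;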
I have a stored procedure as well as a table in SQL Server 2014 Enterprise. I also have data in the table. Now I need the same table and data in PostgreSQL (pgAdmin 4).
Can anyone suggest an approach for migrating the data to PostgreSQL, or a way of creating a SQL script that I can then run with psql?
Depending on how much data you have, you could script out the table and data. Then you could tweak the script as needed for PostgreSQL:
Right click on the SQL database > Tasks > Generate Scripts
On the "Choose Objects" screen, select your specific table then select "Next>"
On the "Set Scripting Options" screen, select "Advanced"
Find the option called "Types of data to script", then select "Schema and data" and select "OK"
Set the filename and continue through the dialog until the file is generated
Tweak the SQL script for any PostgreSQL-specific syntax (see the example below)
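To give a rough idea of the kind of tweaks involved (the table here is invented, and the exact changes depend on your schema), a generated T-SQL block like the first one would typically be rewritten for PostgreSQL along the lines of the second:

    -- As generated by SQL Server (T-SQL)
    CREATE TABLE [dbo].[Customer](
        [CustomerID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
        [Name] [nvarchar](100) NOT NULL,
        [CreatedAt] [datetime] NOT NULL DEFAULT (GETDATE())
    );
    GO

    -- Rewritten for PostgreSQL
    CREATE TABLE customer (
        customer_id integer GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
        name        varchar(100) NOT NULL,
        created_at  timestamp NOT NULL DEFAULT now()
    );

In the INSERT portion of the script you would similarly drop the GO batch separators, the square brackets, the N'' prefixes on string literals, and any SET IDENTITY_INSERT statements.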
If there is a larger amount of data, you might look into some type of data transfer tool like SSIS.
Exporting the table structure and data as Josh Jay describes will likely require some fixes where the syntax doesn't match, but it should be doable, if tedious. Luckily, there are existing conversion tools available to help.
You could also try using a foreign data wrapper to map the SQL Server tables into a running instance of PostgreSQL. Then it's just a matter of copying tables. It depends on your needs and where each database server is located relative to the other.
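If you go the foreign data wrapper route, tds_fdw is the wrapper usually used for SQL Server. Very roughly, and with the host, credentials, table and column names as placeholders (the option names should be checked against the tds_fdw documentation for your version):

    CREATE EXTENSION tds_fdw;

    CREATE SERVER mssql_src
        FOREIGN DATA WRAPPER tds_fdw
        OPTIONS (servername 'mssql-host', port '1433', database 'MyDb');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER mssql_src
        OPTIONS (username 'report_user', password 'secret');

    CREATE FOREIGN TABLE customer_remote (
        customer_id integer,
        name        varchar(100)
    ) SERVER mssql_src OPTIONS (schema_name 'dbo', table_name 'Customer');

    -- copy the rows into a local PostgreSQL table
    CREATE TABLE customer AS SELECT * FROM customer_remote;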
The stored procedures will unfortunately be far more difficult to handle. While Oracle's PL/SQL language is substantially similar to PostgreSQL's PL/pgSQL, MS SQL Server/Sybase's Transact-SQL dialect is different enough to require rewrites. If the Transact-SQL routines also access .NET objects, the migration may end up far more difficult still, as you reimplement dependencies or find logical equivalents.
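To give a feel for the scale of the rewrite (a deliberately trivial example with invented table and column names), even a small Transact-SQL function changes shape noticeably in PL/pgSQL:

    -- Transact-SQL (SQL Server)
    CREATE FUNCTION dbo.OrderTotal (@OrderID int)
    RETURNS money
    AS
    BEGIN
        DECLARE @Total money;
        SELECT @Total = SUM(Quantity * UnitPrice)
        FROM dbo.OrderLines
        WHERE OrderID = @OrderID;
        RETURN @Total;
    END;

    -- PL/pgSQL (PostgreSQL)
    CREATE FUNCTION order_total(p_order_id integer)
    RETURNS numeric
    LANGUAGE plpgsql AS $$
    DECLARE
        v_total numeric;
    BEGIN
        SELECT SUM(quantity * unit_price) INTO v_total
        FROM order_lines
        WHERE order_id = p_order_id;
        RETURN v_total;
    END;
    $$;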
We have developed a system to replace manual work that they have been doing using many Excel files.
Is there a best practice for this data migration? I wanted to use a backend language like .NET to do the validation and insert into the tables, rather than using SQL to do the migration.
The total in Excel is around 12K rows, spread across many tables, so performance is not a big concern, and it is a one-time migration.
I would add a few calculated columns in Excel that would generate SQL Insert / Update scripts. Something like ="INSERT INTO table (column) VALUES ('"&A1&"');"
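If the cell values can themselves contain single quotes, it may be worth escaping them in the same formula, for example along these lines:

    ="INSERT INTO table (column) VALUES ('"&SUBSTITUTE(A1,"'","''")&"');"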
Then just copy the calculated column and run it through a SQL client. I used to have a macro that would run it directly from Excel through OLEDB, highlighting failed expressions and storing the SQL exceptions next to them.
That way the data can be easily tidied, corrected and SQL re-run as needed.
Right at the outset I'd like to say that I am NOT a Cognos guy, so I am completely removed from developing Cognos cubes / reports, or whatever you want to call them.
There are auto-generated Cognos queries - very badly written - that cause the Teradata (DBS 15.1.x) system to hog spool and CPU. I can tune them beautifully after I pull them out of DBQL. I want to know HOW I can implement custom queries that run periodically as batch reports, instead of Cognos auto-generating these queries.
E.g. you create a cube - it writes code behind it - and then you can open the code and write custom code that is equivalent to the original but performs a lot better. Then, when you open the cube again, it remembers there is a custom SQL statement and runs that instead of its own auto-generated SQL. This is just how I imagine one way it could work, but again - I am not a Cognos resource, so please don't flag me down for lack of knowledge. That is exactly what I am trying to get an idea about.
Thanks for bearing with me
In Framework Manager you can create one Query Subject with the complex query inside. Do not import the tables etc. Just create the Query Subject and put your query inside it.
You need to use a stored procedure to return your expected data and add it to the model.
Then, instead of using a couple of tables (and joins) in Cognos Report Studio, add one query and point it at your stored procedure. This way your Cognos report will execute the procedure instead of generating its own query (which may not be efficient in many cases).
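A very rough sketch of what such a procedure might look like on the Teradata side (the names are invented, and the DDL should be checked against your DBS version) - the point is simply that the hand-tuned SQL lives inside the procedure and Cognos only calls it:

    CREATE PROCEDURE rpt.daily_sales_report ()
    DYNAMIC RESULT SETS 1
    BEGIN
        DECLARE cur CURSOR WITH RETURN ONLY FOR
            SELECT s.sale_date,
                   r.region_name,
                   SUM(s.amount) AS total_amount
            FROM   sales_db.sales  s
            JOIN   sales_db.region r ON r.region_id = s.region_id
            GROUP BY s.sale_date, r.region_name;
        OPEN cur;
    END;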