I am trying to implement several Azure Logic Apps that query/update an Azure SQL Database. The queries return either a single value or a table with several rows. I prefer not to create stored procedures, but instead to use the 'Execute SQL Query' connector. My queries run fine in the Logic Apps, but I have not found a way to extract the output of the queries for use in subsequent steps, or to return it in an HTTP Response.
Can someone guide me on how this can be done for both single-value and table outputs?
If for some reason you don't want to create a stored procedure, or cannot do it, you can access your custom query results with an expression like this:
@body('Name_of_Execute_SQL_Query_step')?['resultsets']['Table1'][0]['NameOfYourColumn']
If you can't find the exact "path" to your data, run the Logic App and let it fail. Then check the failing step: under "Show raw outputs" you will be able to see the results of the Execute SQL Query step. For example:
{
  "OutputParameters": {},
  "ResultSets": {
    "Table1": [
      {
        "Date": "2018-05-28T00:00:00"
      }
    ]
  }
}
To access that date, you'd of course need to use:
@body('Name_of_Execute_SQL_Query_step')?['resultsets']['Table1'][0]['Date']
Stored Procedures are always better for many reasons, and their output can be reasonably well inferred by the connector. That's why Stored Procedure output lights up in the designer.
Execute SQL actions return 'untyped' content, which is why you don't see specific elements in the designer.
To use the Execute SQL output like a Stored Procedure output, you would have to define the JSON Schema yourself, and use the Parse JSON Action to light up the SQL output.
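For example, based on the sample output shown above, a minimal Parse JSON schema might look like the following (a sketch: the 'Date' column and its string type come from that sample, so adjust the property names and types to your own query):
{
  "type": "object",
  "properties": {
    "OutputParameters": { "type": "object" },
    "ResultSets": {
      "type": "object",
      "properties": {
        "Table1": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "Date": { "type": "string" }
            }
          }
        }
      }
    }
  }
}
Once the Parse JSON action runs, its properties appear as dynamic content in the designer, and you can return the whole table in an HTTP Response with an expression like @body('Parse_JSON')?['ResultSets']?['Table1'] (where 'Parse_JSON' stands for whatever you named that action).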
Hello, for demonstration purposes I have trimmed down my actual SQL query.
I have a SQL query
SELECT *
FROM dbdev.training.courses
where dbdev is my DEV database name. When I migrate to the TEST environment, I want my query to change dynamically to
SELECT *
FROM dbtest.training.courses
I tried using input parameters like {env: p('db_name')} and using them in the query as
SELECT * FROM :env.training.courses
or
SELECT * FROM (:env).training.courses
but neither of them worked. I don't want to keep my SQL query in a properties file.
Can you please suggest a way to write my SQL query dynamically based on environment?
The only alternative I can think of is to deploy separate JARs with different code for different environments.
You can set the value of the property to a variable and then use the variable with string interpolation. Input parameters are bound as values, not identifiers, so they cannot replace a schema or database name; that is why the :env attempts did not work.
Warning: creating dynamic SQL queries using any kind of string manipulation may expose your application to SQL injection security vulnerabilities.
Example:
#['SELECT * FROM $(vars.database default "dbtest").training.courses']
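A minimal sketch of how this could look in a Mule 4 flow, assuming a db_name property set per environment and a Database connector configuration named Database_Config (both names are placeholders):
<!-- Sketch only: 'Database_Config' and the 'db_name' property are placeholder names. -->
<set-variable variableName="database" value="#[p('db_name')]" />
<db:select config-ref="Database_Config">
    <db:sql>#["SELECT * FROM $(vars.database).training.courses"]</db:sql>
</db:select>
Since the interpolated value ends up directly in the SQL text, validate it against a known list of database names to address the injection warning above.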
Actually, you can do a completely dynamic or partially dynamic query using the MuleSoft DB connector.
Please see this repo:
https://github.com/TheComputerClassroom/dynamicSQLGETandPATCH
At a high level, this is a "Query Builder" where the code that builds the query is written in DataWeave 2. I'm about to post an update that allows joins between entities, too.
If you have questions, feel free to reply.
One way to do it is:
Create a variable before the DB connector, e.g. getTableName with the value ${env}.training.courses (the ${env} property placeholder is resolved per environment).
Then write the SQL query with string interpolation:
#["SELECT * FROM $(vars.getTableName)"]
I have a query like SELECT '16453842' AS ACCOUNT. I want to change this account number to a dynamic one. Can anyone tell me some possible ways to do it?
The scenario is that I would also like to accommodate other values for ACCOUNT, so that multiple values can be used.
The Node.js code uses that SQL in the code below, via carrierData.sql:
writeDebug({'shipment': shipment, 'carrier': carrier, 'message': 'reading sql file: ' + carrierData.sql});
fs.readFile(carrierData.sql, 'utf8', function (err, sql) {
    // ... the query read from the file is executed here
});
carrierData comes from a JSON file, whose sql property contains the path and name of the SQL file to use. Finally, the SQL file runs a query like the one below:
SELECT 'T' AS RECORD
, '16453842' AS ACCOUNT
and here lies my problem, as we have some additional ACCOUNT numbers as well that we would like to accommodate.
The service starts with node server.js, which calls the workers.js file containing the code I pasted above.
So please let me know what the possible ways to do this are.
Here are things to research (a short sketch follows below):
- Using bind variables. These are used for security, and for performance when a statement is executed multiple times.
- Using multiple values in IN clauses.
- Using executeMany() if you are loading data into the database.
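For the bind-variable and IN-list points, here is a minimal sketch using the node-oracledb driver (suggested by the executeMany() mention); the connection settings and the shipments table/column names are made-up placeholders:
import oracledb from 'oracledb';

async function getRecords(accounts: string[]) {
  // Placeholder credentials and connect string; use your own.
  const connection = await oracledb.getConnection({
    user: 'app_user',
    password: 'app_password',
    connectString: 'localhost/XEPDB1',
  });
  try {
    // One named bind per account value: values travel separately from
    // the SQL text, so the statement is safe from SQL injection.
    const binds: Record<string, string> = {};
    accounts.forEach((a, i) => { binds['acct' + i] = a; });
    const placeholders = Object.keys(binds).map((k) => ':' + k).join(', ');
    // 'shipments' and its columns are hypothetical names.
    const sql = 'SELECT record, account FROM shipments WHERE account IN (' + placeholders + ')';
    const result = await connection.execute(sql, binds);
    return result.rows;
  } finally {
    await connection.close();
  }
}
Because the bind list is built from the array, a varying set of ACCOUNT values works without editing the SQL file.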
I have a SQL script with multiple DROP & CREATE DDL statements (CREATE TABLE ... AS SELECT *), and I want to run them in one go. I am quite new to Informatica PowerCenter; can someone describe the process of using a SQL transformation for BigQuery in Informatica?
Sample query:
drop table if exists sellout.account_table;
CREATE TABLE sellout.account_table
AS
SELECT * FROM
sellout.account_src
WHERE
UPPER(account_name) IN ('RANDOM');
Similar to the above query, I have around 24 SQL statements in the script. I want to run them at once and later make them part of an Informatica job.
If the "PowerExchange Google BigQuery" server and client are installed and after executing the infasetup.bat(sh) validateandregisterallfeatures, the mappings would be opened/exported successfully.
Here are some FAQs that might be handy for you:
Q: Why are the output fields not seen in the SQL transformation?
A: The stored procedure selected in the SQL transformation must have output parameters declared. Otherwise it will not have output fields other than the default Return Code column.
Q: A set of columns is displayed as the result when running the stored procedure; however, you still do not see the same columns as output in the SQL transformation. Why?
A: The columns seen in the output might not be defined/declared as output parameters in the stored procedure. The procedure might have a 'SELECT * FROM'-style statement, which retrieves data when the procedure is run from a DB UI, and a similar result could be seen when the procedure is run programmatically.
However, to call the same procedure from the SQL transformation, explicitly declared output parameters must be present, because the transformation imports the procedure's metadata when it is selected. Unless you declare the output parameters explicitly in the procedure, they cannot be seen as output fields in the transformation.
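As an illustration, a procedure declared like the sketch below (generic SQL Server syntax; all object names are hypothetical, and other databases use different OUT-parameter syntax) would expose AccountName as an output field in the SQL transformation:
-- Sketch only: dbo.GetAccountName and dbo.accounts are placeholder names.
CREATE PROCEDURE dbo.GetAccountName
    @AccountId INT,                       -- appears as an input field
    @AccountName VARCHAR(100) OUTPUT      -- appears as an output field
AS
BEGIN
    SELECT @AccountName = account_name
    FROM dbo.accounts
    WHERE account_id = @AccountId;
END;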
Q: Is it necessary to have input/output parameters in the stored procedure to call it from the SQL transformation?
A: Yes, it is necessary if the procedure does not have default ones. These parameters appear as input/output fields in the SQL transformation; without them the mapping becomes invalid.
Q: I have a SELECT statement in the procedure; can the SQL transformation push its result to the next transformation?
A: Appropriate output parameters are required for this to work.
I'm interested to know if anyone has come across a tool that can generate TypeScript type definitions based on the expected result of a SQL query? That is to say, is there a CLI that accepts a SQL schema and .sql file and outputs a .ts file based on the expected result of the query?
Such a tool already exists for GraphQL queries and my team has found it extremely useful because it completely removes errors associated with hand-rolled type definitions.
Yes, PgTyped is a new tool that does that.
It allows you to generate TypeScript interfaces for raw SQL queries.
It works similarly to apollo-codegen, but instead of the gql tag you need to use a sql tag for your SQL queries.
It only supports PostgreSQL and is still in beta stage, but I am actively working on it and any contributions are welcome.
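For illustration, usage looks roughly like this (a sketch following PgTyped's documented pattern; the package entry point and generated interface names can differ between versions, so treat them as assumptions):
import { sql } from '@pgtyped/query';
// IGetUsersQuery is generated by the PgTyped CLI from this query file.
import { IGetUsersQuery } from './users.types';

// The sql tag marks the query for type generation; $minAge is a named parameter.
const getUsers = sql<IGetUsersQuery>`SELECT id, name FROM users WHERE age > $minAge`;

// Later, with a pg client: const users = await getUsers.run({ minAge: 18 }, client);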
sql-code-generator is another option.
It does:
- generating type definitions from SQL resources (e.g., tables, views, functions, procedures)
- generating type definitions from SQL queries (e.g., select * from table)
- generating typed functions that execute SQL queries from SQL queries (e.g., const sqlQueryFindAllUsersByName = async ({ input: InputType }): Promise<OutputType>)
I have a stored procedure which contains 3 input parameters and multiple SELECTs with INNER JOINs. I want to call the stored procedure in QlikView. I followed lots of tutorials, but I could not make it work.
I am using OLE DB, and I'm trying to call it as follows:
SQL CALL [DB NAME].[dbo].[ABC] @_EndTime = '2012-12-31 00:21:06.550', @_StartTime = '2012-12-31 00:21:06.550', @_Username = 'XYZ';
Is this correct? If not, what are the ways to call stored procedures from QlikView, and what permissions do I need for this?
I'm not sure whether you checked this thread (http://goo.gl/IiGD2), but it might be useful. A couple of things I noticed from it: an additional string, "(mode is write)", needs to be added to the connection string, and the "Open Databases in Read and Write mode" option needs to be activated in QlikView.
Also make sure that you have SQL rights to execute the procedure.
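Put together, the relevant part of the load script could look roughly like this (a sketch: the provider, server, database and parameter names are placeholders):
// The "(mode is write)" option is what allows SQL CALL to execute.
OLEDB CONNECT TO [Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb] (mode is write);
SQL CALL [myDb].[dbo].[ABC] @_Username = 'XYZ';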
Regards!
Stefan
A workaround could be to retrieve the three input variables from a table instead, and update this table from QlikView using a SQL INSERT.
It may be possible to run a stored procedure from QlikView, but it is not possible to pull any output from it. You should convert it to a function if you want to retrieve any data in QlikView.
Creating a materialized view is your best course of action, and it will give you better performance.