How to create a function in CFML within some SQL queries? - sql

I would like to create a function in CFML that takes 3 parameters (a number and two dates). The function will then run a few cfquery queries on that data, such as SELECT, UPDATE and INSERT... Any idea how to code this function? I'm a CFML newbie, so be nice.

What you're asking for is fairly basic, and reviewing the documentation for cffunction should be enough to get you started: http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7f5c.html

The Adobe docs have a section on how to write UDFs (user-defined functions). Probably best to start there:
http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=UDFs_03.html
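As a rough sketch of what such a function could look like (the function name, datasource, and table/column names below are made up for illustration; adjust them to your schema):

<cffunction name="processOrders" access="public" returntype="query" output="false">
    <cfargument name="customerID" type="numeric" required="true">
    <cfargument name="startDate" type="date" required="true">
    <cfargument name="endDate" type="date" required="true">

    <!--- SELECT the rows for this customer in the given date range --->
    <cfquery name="qOrders" datasource="myDSN">
        SELECT OrderID, OrderDate, Status
        FROM Orders
        WHERE CustomerID = <cfqueryparam value="#arguments.customerID#" cfsqltype="cf_sql_integer">
          AND OrderDate BETWEEN <cfqueryparam value="#arguments.startDate#" cfsqltype="cf_sql_date">
                            AND <cfqueryparam value="#arguments.endDate#" cfsqltype="cf_sql_date">
    </cfquery>

    <!--- UPDATE those rows --->
    <cfquery datasource="myDSN">
        UPDATE Orders
        SET Status = 'Processed'
        WHERE CustomerID = <cfqueryparam value="#arguments.customerID#" cfsqltype="cf_sql_integer">
          AND OrderDate BETWEEN <cfqueryparam value="#arguments.startDate#" cfsqltype="cf_sql_date">
                            AND <cfqueryparam value="#arguments.endDate#" cfsqltype="cf_sql_date">
    </cfquery>

    <cfreturn qOrders>
</cffunction>

You would call it like any other function, e.g. result = processOrders(42, "2012-01-01", "2012-06-30"). Note the use of cfqueryparam to pass the arguments into the SQL safely.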

Related

HIVE:: How To Mimic SQL Server Table-Valued Function

I am really new to the Hive space and am learning as I go. Currently I am using a SQL Server table-valued function that accepts several input parameters and returns a table of dates (invoicedate, duedate).
For example, I would pass in ('2017-01-01', 12, 30, 3) (date, duration, terms, interval) and the output would be something like:
'2017-01-01','2017-02-01'
'2017-04-01','2017-05-01'
'2017-09-01','2017-10-01'
'2017-10-01','2018-01-01'
First, is this feasible to do within the Hive environment? And second, if so, I'm thinking a UDTF would be the method. If anyone has any thoughts or can point me to an online example they have seen, I would greatly appreciate it.
What you want is a UDTF:
https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide+UDTF
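The UDTF itself has to be implemented in Java (see the developer guide above), but to give a sense of how it would be consumed from HiveQL, usage looks roughly like this (the invoice_schedule function name and its output columns are hypothetical):

SELECT t.invoicedate, t.duedate
FROM (SELECT 1 AS dummy) src
LATERAL VIEW invoice_schedule('2017-01-01', 12, 30, 3) t AS invoicedate, duedate;

The UDTF receives the arguments once and emits as many (invoicedate, duedate) rows as it likes, which is the same table-valued behaviour you have in SQL Server.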

How can I use the new UDF functionality to create a "Dynamic SQL statement"?

Is there a way to use a UDF in order to construct a SQL statement based on a template and input variables, and later run this query?
The documentation https://cloud.google.com/bigquery/user-defined-functions?hl=en says:
A UDF is similar to the "Map" function in a MapReduce: it takes a
single row as input and produces zero or more rows as output. The
output can potentially have a different schema than the input.
So your UDF receives just a single row.
Therefore, no - a UDF is not for the purpose you described in your question.
You might take a look at views - maybe that will suit you better:
https://cloud.google.com/bigquery/querying-data#views
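For illustration only (the dataset and table names here are invented), a view is essentially a saved SELECT that you can query like a table, so it can encapsulate query logic even though it does not take runtime parameters:

CREATE VIEW mydataset.recent_events AS
SELECT user_id, event_type, event_timestamp
FROM mydataset.events
WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY);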

View with parameters in BigQuery

We have a set of events (a kind of log) that we want to combine to get the current state. To improve performance/cost further, we would like to create snapshots (in order to not check all the events in history, but only those since the last snapshot). Logs and snapshots are tables with a date suffix.
This approach works OK in BQ, but we need to manually define the query every time. Is there any way to define a 'view' with parameters (e.g. dates for the table range query)? Or any plans to do something like that?
I know that there are some topics connected with TABLE_RANGE / QUERY in views (e.g. Use of TABLE_DATE_RANGE function in Views). Is there any new information on this subject?
That's a great feature request - but currently not supported. Please leave more details at https://code.google.com/p/google-bigquery/issues/list, the BigQuery team takes these requests very seriously!
As a workaround I wrote a small framework to generate complex queries with the help of Velocity templates. Just published it at https://github.com/softkot/gbq
Now you can use table functions (aka table-valued functions, TVFs) to achieve this. They are very similar to a view, but they accept a parameter. I've tested them and they really help to save a lot while keeping future queries simple, since the complexity lives inside the table function definition. It receives a parameter that you can then use inside the query for filtering.
This example is from the documentation:
CREATE OR REPLACE TABLE FUNCTION mydataset.names_by_year(y INT64)
AS
SELECT year, name, SUM(number) AS total
FROM `bigquery-public-data.usa_names.usa_1910_current`
WHERE year = y
GROUP BY year, name
Then you just query it like this:
SELECT * FROM mydataset.names_by_year(1950)
More details can be found in the official documentation.
You can also have a look at BigQuery scripting, which has been released in beta: https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting
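With scripting you can declare variables and reuse them across statements, which covers the "parameterized query" part of the question. A minimal sketch (the table and column names are made up):

DECLARE start_date DATE DEFAULT DATE '2021-01-01';
DECLARE end_date   DATE DEFAULT DATE '2021-01-31';

SELECT *
FROM `mydataset.events_snapshot`
WHERE event_date BETWEEN start_date AND end_date;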

Can a database function be called in the predicate of an LLBLGen query?

I want to use a table-valued database function in the where clause of a query I am building using LLBLGen Pro 2.6 (self-servicing).
SELECT * FROM [dbo].[Users]
WHERE [dbo].[Users].[UserID] IN (
SELECT UserID FROM [dbo].[GetScopedUsers] (@ScopedUserID)
)
I am looking into the FieldCompareSetPredicate class, but can't for the life of me figure out what the exact signature would be. Any help would be greatly appreciated.
ADDITION -
A better question would be "How can you interact with a table-valued function via LLBLGen Pro?" I do not see how to generate files/classes for it.
Yes. Use a DbFunctionCall to formulate an expression, and then use a FieldCompareExpressionPredicate to use it in a filter. See 'Calling a Database Function' in ... the manual! :)
http://www.llblgen.com/documentation/3.0/LLBLGen%20Pro%20RTF/hh_goto.htm#Using%20the%20generated%20code/gencode_dbfunctioncall.htm
Please post questions on our forums, it's easier to track them down :)

Database: Pipelined Functions

I am new to the concept of pipelined functions. I have some questions about them, from a database point of view:
What actually is a pipelined function?
What is the advantage of using a pipelined function?
What challenges are solved using a pipelined function?
Are there any optimization advantages of using a pipelined function?
Thanks.
To quote from "Ask Tom Oracle":
pipelined functions are simply "code you can pretend is a database table"
pipelined functions give you the (amazing to me) ability to
select * from PLSQL_FUNCTION;
anytime you think you can use it -- to select * from a function, instead of a table, it might be "useful".
As for advantages: a big advantage of using a pipelined function is that your function can return rows one by one, as opposed to building the entire result set in memory before returning it.
That gives the obvious optimization - memory savings for something that would otherwise return a big result set.
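A minimal Oracle PL/SQL sketch of the idea (the type and function names are made up); each PIPE ROW sends a row back to the caller immediately instead of buffering the whole collection:

CREATE TYPE number_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION first_n(n IN NUMBER) RETURN number_tab PIPELINED IS
BEGIN
  FOR i IN 1 .. n LOOP
    PIPE ROW (i);  -- row is streamed to the caller as soon as it is produced
  END LOOP;
  RETURN;
END;
/
-- Query the function as if it were a table:
SELECT * FROM TABLE(first_n(5));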
A fairly interesting example of using pipelined functions is here
What seems to be a good use of them is ETL (extract/transform/load) - for example see here