I'm reaching out to the experts as I have hit a wall with a recent project. I have created an SSIS package (2008 R2) that uses a script task to build a SQL statement; a variable (@month1) is used within the SQL statement to specify a look-back month in a membership table. I also want to use the @month1 variable as a counter for the For Loop container, to specify how many times to execute the query. The SQL query feeds a data flow task that appends these records to a table in a SQL Server database. The script task and data flow task work outside of the For Loop container with the initial value given for @month1, but I cannot figure out how to make the For Loop container update the @month1 counter variable so that the loop can use it as its counter and the generated SQL statement can use it as a condition. Does anyone have ideas or examples on how to do this?
** Update **
The For Loop container is the issue. The script task and data flow task work outside of the For Loop container: they use the initial value of @month1, create the dynamic SQL script, execute it, and transfer data from the source database server to the destination server. The issue is that when I place these steps inside the For Loop container, the container executes and turns green but never invokes the steps within it. This is why I think the container is not reading the @month1 variable, even though the variable is set at the package level. Any thoughts?
First of all, try setting the data flow task's DelayValidation property to True. If it still doesn't work, then instead of passing the variable as a parameter in the OLE DB Source, use expressions:
Create a variable of type string.
Change its EvaluateAsExpression property to True
Set its Expression property to something like:
"select * from table where column = " + (DT_STR, 50, 1252)@[User::month1]
In the OLE DB Source, set the data access mode to SQL command from variable and select this variable.
Be aware that the month1 variable may have been created twice with different scopes: click on the data flow task and check the Variables panel to see whether it shows an additional variable.
I appreciate everyone's responses, but it seems I tricked myself on this one. In looking for the most complicated issue, I overlooked the most simple and obvious one. The reason my For Loop was not executing the steps inside the container was that I had the initial value of @month1 set to 3 (intentionally) and wanted to loop until it reached -49. The For Loop keeps iterating while EvalExpression is TRUE and stops once it is FALSE, so the expression I had in there, @month1 <= -49, was already false on the first pass. It needed to be @month1 > -49, so that the loop stops as soon as the value falls to -49. I do this to myself more than I should admit; can't see the forest for the trees!
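For anyone hitting the same wall, the three For Loop container properties for this countdown end up looking roughly like this (a sketch, assuming month1 is an Int32 variable at package scope):
InitExpression: @month1 = 3
EvalExpression: @month1 > -49
AssignExpression: @month1 = @month1 - 1
The loop body runs while EvalExpression is TRUE, so the expression describes the condition for continuing, not the condition for stopping.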
I am currently using a function in SQL Server to get the max value of a certain column. I need this value to generate a specific number of dummy files as placeholders for flowfiles that are created later on.
Is there a way of calling this function via a NiFi processor?
With ExecuteSQL I always get an error like "unable to execute SQL select query" or 'the column "ab" was not found' when using select ab.functionname() (ab is the login name of the DB).
In SQL Server I can just use select ab.functionname() and get the desired results.
If there is no way to call this function, is there another way to create #flowfiles dummy files to reserve their place in the DB, so that no one else can insert into or use these IDs (not autoincrement, because that is not possible) while the flowfiles are being processed?
I tried using $flowfile.count and the Counter processor, but this did not solve the problem.
It should look like INSERT INTO table (id,nr) VALUES (max(id)+1, anynumber) for every flowfile; unfortunately, ExecuteSQL is not able to do this.
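In other words, per flowfile I would need something like this (a sketch; 42 stands in for anynumber, and the max has to be computed in the same statement because a plain VALUES list cannot contain an aggregate):
INSERT INTO table (id, nr)
SELECT MAX(id) + 1, 42
FROM table;
Under concurrent writers this still needs a lock or stricter isolation to guarantee the reserved ID is unique.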
I think this conversation can help you:
https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html
Gist:
You can use ExecuteScript or ExecuteProcess to call an appropriate script. For example, with ExecuteProcess just call the sqlplus command: set the command to "sqlplus" and set the command arguments to something like user_id/password@dbname @"script_path/someScript.sql". In someScript.sql you put something like:
execute spname(param)
You can also write your own processor :) Of course, that is more difficult and often unnecessary.
I have the following problem: I have a SQL file to execute with the DBI CPAN module in Perl.
I saw two solutions on this site that could solve my problem:
Read SQL file line by line
Read SQL file in one instruction
So, which one is better, and what is the real difference between the two solutions?
EDIT
It's for a library. I need to retrieve the output and the return code.
The kind of file passed might look like the following:
set serveroutput on;
set pagesize 20000;
spool "&1."
DECLARE
-- Retrieve the arguments
-- &2: FLX_REF, &3: SVR_ID, &4: ACQ_STT, &5: ACQ_LOG, &6: FLX_COD_DOC, &7: ACQ_NEL, &8: ACQ_TYP
VAR_FLX_REF VARCHAR2(100):=&2;
VAR_SVR_ID NUMBER(10):=&3;
VAR_ACQ_STT NUMBER(4):=&4;
VAR_ACQ_LOG VARCHAR2(255):=&5;
VAR_FLX_COD_DOC VARCHAR2(30):=&6;
VAR_ACQ_NEL NUMBER(10):=&7;
VAR_ACQ_TYP NUMBER:=&8;
BEGIN
INSERT INTO ACQUISITION_CFT
(ACQ_ID, FLX_REF, SVR_ID, ACQ_DATE, ACQ_STT, ACQ_LOG, FLX_COD_DOC, ACQ_NEL, ACQ_TYP)
VALUES
(TRACKING.SEQ_ACQUISITION_CFT.NEXTVAL, VAR_FLX_REF,
VAR_SVR_ID, sysdate, VAR_ACQ_STT, VAR_ACQ_LOG,
VAR_FLX_COD_DOC, VAR_ACQ_NEL, VAR_ACQ_TYP);
END;
/
exit;
I have another question to ask, again with the DBI Oracle module.
May I use the same code for a SQL file and for a control file?
(Example of a SQL*Loader control file)
LOAD DATA
APPEND INTO TABLE DOSSIER
FIELDS TERMINATED BY ';'
(
DSR_IDT,
DSR_CNL,
DSR_PRQ,
DSR_CEN,
DSR_FEN,
DSR_AN1,
DSR_AN2,
DSR_AN3,
DSR_AN4,
DSR_AN5,
DSR_AN6,
DSR_PI1,
DSR_PI2,
DSR_PI3,
DSR_PI4,
DSR_NP1,
DSR_NP2,
DSR_NP3,
DSR_NP4,
DSR_NFL,
DSR_NPG,
DSR_LTP,
DSR_FLF,
DSR_CLR,
DSR_MIM,
DSR_TIM,
DSR_NDC,
DSR_EMS NULLIF DSR_EMS=BLANKS "sysdate",
JOB_IDT,
DSR_STT,
DSR_DAQ "CASE WHEN :DSR_DAQ IS NOT NULL THEN SYSDATE ELSE NULL END"
)
Reading a table one row at a time is more complex, but it can use less memory, provided you structure your code to make use of the data per item rather than needing it all later.
Often you want to process each item separately (e.g. to do work on the data), in which case you might as well use the line-by-line approach to define your loop.
I tend to use the single-instruction approach by default, but as soon as I am concerned about the number of records (especially in long-running batch processes), or I need to loop through the data as the first task, then I read records one by one.
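If the records come from a query, the row-by-row pattern with DBI looks something like this (a minimal sketch; the DSN, credentials, and query are placeholders):
use strict;
use warnings;
use DBI;

# RaiseError turns DBI failures into exceptions.
my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'password', { RaiseError => 1 });

my $sth = $dbh->prepare('SELECT id, name FROM some_table');
$sth->execute;

# One row in memory at a time, instead of fetching the whole result set.
while ( my ($id, $name) = $sth->fetchrow_array ) {
    # ... do the per-record work here ...
}
$dbh->disconnect;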
In fact, the two answers you reference propose the same solution: read and execute line by line (though the first is clearer on the point). The second question also has an alternative answer for the case where the file contains a single statement.
If you don't execute the SQL line-by-line, it's very difficult to trap any errors.
"Line by line" only makes sense if each SQL statement is on a single line. You probably mean statement by statement.
Beyond that, it depends on what your SQL file looks like and what you want to do.
How complex is your SQL file? Could it contain things like this?
select foo from table where column1 = 'bar;'; --Get foo; it will be used later.
The simple way to read an SQL file statement by statement is to split by semicolons (or whatever the statement delimiter is). But this method will fail if you might have semicolons in other places, like comments or strings. If you split this statement by semicolons, you would try to execute the following four "commands":
select foo from table where column1 = 'bar;
';
--Get foo;
it will be used later.
Obviously, none of these are valid. Handling statements like this correctly is no simple matter. You have to completely parse SQL to figure out what the statements are. Unfortunately, there is no ready-made module that can do this for you (SQL::Script is a good start on an SQL file processing module, but according to the documentation it just splits on semicolons at this point).
If your SQL file is simple, not containing any statement delimiters within statements or comments; or if it is predictable in some other way (such as having one statement per line), then it is easy to split the file into statements and execute them one by one. But if you have to handle arbitrary SQL syntax, including cases such as above, this will be a complex task.
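For that simple case, a split-and-execute loop looks something like this (a minimal sketch, assuming no semicolons inside strings or comments; the file name and connection details are placeholders):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

# Slurp the whole file, then split naively on semicolons.
open my $fh, '<', 'script.sql' or die "Cannot open script.sql: $!";
my $sql = do { local $/; <$fh> };
close $fh;

for my $statement ( grep { /\S/ } split /;/, $sql ) {
    $dbh->do($statement);   # RaiseError lets you trap failures per statement
}

$dbh->commit;
$dbh->disconnect;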
What kind of task?
Do you need to retrieve the output?
Is it important to detect errors in any individual statement, or is it just a batch job that you can run and not worry about it?
If this is something that you can just run and forget about, you could just have Perl execute a system command, telling Oracle to process the file. This will be simpler than handling all of the statements yourself. But if you need to process the results or handle errors within Perl, doing it yourself statement by statement will be a necessity.
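For the run-and-forget route, the system command can be as simple as this (a sketch; it assumes sqlplus is on the PATH and the credentials are placeholders):
system('sqlplus', '-S', 'user/password@dbname', '@script.sql') == 0
    or die "sqlplus exited with status $?";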
Update: based on your response, you want to write a library that can handle arbitrary SQL statements. In that case, you definitely need to parse the SQL and execute the statements one at a time. This is do-able, but not simple. The possibility of BEGIN...END blocks means that you have to be able to correctly handle semicolons within a statement.
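For files like the one in your question, where each PL/SQL block ends with a line containing only a slash, one workable compromise is to split on that terminator instead of on semicolons (a sketch; it assumes the "/" always sits alone on its own line, as SQL*Plus requires):
# $sql holds the slurped file contents, as in the earlier sketch.
my @blocks = grep { /\S/ } split /^\s*\/\s*$/m, $sql;
$dbh->do($_) for @blocks;
Anything SQL*Plus-specific (set, spool, exit, &1-style substitution variables) still has to be stripped or emulated first, since DBI sends statements straight to the server.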
The SQL::Statement class of modules may be helpful.
I have a stored procedure which takes 3 input parameters and contains multiple SELECTs and INNER JOINs. I want to call the stored procedure from QlikView. I followed lots of tutorials, but I cannot make it work.
I am using OLE DB, and I'm trying to call it as follows:
SQL CALL [DB NAME].[dbo].[ABC] @_End-Time = '2012-12-31 00:21:06.550', @_Start-time = '2012-12-31 00:21:06.550',
@_Username = 'XYZ';
Is this correct? If not, what are the ways to call stored procedures from QlikView, and what permissions do I need for this?
I'm not sure whether you checked this thread (http://goo.gl/IiGD2), but it might be useful. A couple of things I noticed from it: there is an additional string that needs to be added to the connection string, "(mode is write)", and you also need to activate "Open Databases in Read and Write mode" in QlikView.
Also make sure that you have the SQL rights to execute.
Regards!
Stefan
A workaround could be to retrieve the three input variables from a table instead, and update that table from QlikView using a SQL INSERT.
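In the QlikView load script, that workaround could look roughly like this (a sketch; ParamTable, its columns, and the parameterless procedure variant are hypothetical, and it assumes the connection was opened with "(mode is write)" as mentioned in the other answer):
SQL INSERT INTO ParamTable (StartTime, EndTime, Username)
VALUES ('2012-12-31 00:21:06.550', '2012-12-31 00:21:06.550', 'XYZ');

SQL EXEC [DB NAME].[dbo].[ABC_FromParams];
The procedure would then read its three inputs from ParamTable instead of taking parameters.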
It may be possible to run a stored procedure from QlikView, but it is not possible to pull any output you get from it. You should convert it to a function if you want to retrieve any data into QlikView.
Creating a materialized view is your best course of action, and you will get better performance.
I have been researching a way to get the SQL statements that are built by a generated migration file. These extend Doctrine_Migration_Base. Essentially, I would like to save the SQL as change scripts.
The execution path leads me to Doctrine_Export, which has methods that build the SQL statements and execute them. I have found no way of asking for just the statements. The export methods found in Doctrine_Export operate only on Doctrine_Record models, not on migration scripts.
From the command line './doctrine migrate version#' the path goes:
Doctrine_Cli::run(cmd)
Doctrine_Task_Migrate::setArguments(args)
Doctrine_Task_Migrate::execute()
Doctrine_Migration::migrate(to)
Doctrine_Migration_Process -> Doctrine_Export: various create, drop, and alter methods with SQL equivalents
Has anyone tackled this before? I would really rather not change the Doctrine base files. Any help is greatly appreciated.
Could you make a dev server and do the migration on that, storing a SQL trace as you go? You don't have to keep the results, but you would get a list of every command.
Taking into account Rob Farley's suggestion, I modified:
Doctrine_Core::migrate
Doctrine_Task_Migrate::execute
When the execute method is called, the optional argument 'dryRun' is checked. If it is true, a 'Doctrine_Connection_Profiler' instance is created and the 'dryRun' value is passed on to the 'Doctrine_Core::migrate' method, where it causes the changes to be rolled back after the SQL statements have executed. When the method returns, the profiler is parsed, and non-empty SQL statements not containing 'migration_version' are saved and displayed in the terminal.
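The profiler part of the change boils down to something like this (a rough sketch against Doctrine 1.x, not the exact patch; the dry-run plumbing inside the two modified methods is only summarized in the comments, and $migrationsPath and $version are placeholders):
// Attach a profiler so every executed query is recorded.
$profiler = new Doctrine_Connection_Profiler();
$conn = Doctrine_Manager::connection();
$conn->setListener($profiler);

// The modified migrate also receives dryRun and rolls the transaction back.
Doctrine_Core::migrate($migrationsPath, $version);

// Dump every non-empty statement that is not version bookkeeping.
foreach ($profiler as $event) {
    $sql = $event->getQuery();
    if ($sql && strpos($sql, 'migration_version') === false) {
        echo $sql . ";\n";
    }
}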