Is there something like `SET UPDATE TASK LOCAL` for `CALL FUNCTION ... STARTING NEW TASK`? - abap

In order to force CALL FUNCTION ... IN UPDATE TASK to be executed synchronously in the same LUW after COMMIT WORK, there is a handy ABAP statement: SET UPDATE TASK LOCAL.
Is there something similar for CALL FUNCTION ... STARTING NEW TASK?
Background: I want to test some code which I cannot change, using database mocking with CL_OSQL_TEST_ENVIRONMENT. Unfortunately, the instance of the test environment is only available in the current LUW when executing unit tests. If there is an asynchronous function module call, all database accesses within it go directly to the database, because the test environment is no longer valid in the other LUW.
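For illustration, a minimal ABAP sketch of the difference (the function module names and parameter are placeholders):

DATA lv_key TYPE i VALUE 42.

" With SET UPDATE TASK LOCAL, update modules run in the current work process:
SET UPDATE TASK LOCAL.
CALL FUNCTION 'Z_MY_UPDATE_FM' IN UPDATE TASK
  EXPORTING iv_key = lv_key.
COMMIT WORK. " Z_MY_UPDATE_FM now executes synchronously in the same LUW

" There is no equivalent statement for asynchronous RFC:
CALL FUNCTION 'Z_MY_ASYNC_FM' STARTING NEW TASK 'TASK1'
  EXPORTING iv_key = lv_key.
" runs in a new work process and LUW, where the CL_OSQL_TEST_ENVIRONMENT
" test doubles are no longer active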

Related

Pentaho Carte How to pass parameter in Job Level

I am currently trying to develop a simple parameter-passing process using Pentaho and to execute the job from the web (Carte). I have a transformation and also a job. I can successfully pass the parameter if I execute it directly through the transformation: http://cluster:cluster@localhost:8080/kettle/executeTrans/?trans=/data-integration/PENTAHO_JOB/test_var.ktr&testvar=1234567
However, when I put the transformation in a job and execute it at the job level, I cannot get the parameter testvar, even though the job runs successfully. I also found out that there is no Get Variables step at the job level. I wonder if I can get the parameter testvar when executing from the job level in Carte?
http://cluster:cluster@localhost:8080/kettle/executeJob/?job=/data-integration/PENTAHO_JOB/test_var.kjb&testvar=1234567
@Raspi Surya:
It's working for me. You need to set the variable as a parameter at the job level. I used the below URL:
http://localhost:8080/kettle/executeJob/?job=C:\Data-Integration\carte-testing.kjb&file_name=C:\Data-Integration\file1.csv
See the attached screenshot.
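For reference, a parameter defined at the job level is stored in the .kjb file roughly like this (a sketch; file_name matches the URL above, the description is made up):

<parameters>
    <parameter>
        <name>file_name</name>
        <default_value/>
        <description>Path of the CSV file to process</description>
    </parameter>
</parameters>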

What is the use of logging menu in settings

What is the use / purpose of the logging menu in
Settings -> Technical -> Database structure -> Logging?
It seems that you can store all the log messages in that view (model ir.logging) as long as you use the parameter --log-db your_database_name when executing odoo-bin in the command line (or add log_db = your_database_name in your Odoo config file).
Check out Odoo 11 info about command line parameters: https://www.odoo.com/documentation/11.0/reference/cmdline.html
--log-db
logs to the ir.logging model (ir_logging table) of the specified database. The database can be the name of a database in the “current” PostgreSQL, or a PostgreSQL URI for e.g. log aggregation
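For example, either of the following should enable it (your_database_name is a placeholder):

# on the command line
odoo-bin -d your_database_name --log-db your_database_name

# or in the Odoo config file
[options]
log_db = your_database_name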
This is the theory, but honestly, I was not able to make it work, and I did not spend much time trying to find out why.
EDIT
As @CZoellner says, it seems that the log messages stored in the ir_logging table (the log messages you see when clicking on the menu item Settings -> Technical -> Database structure -> Logging) come only from scheduled actions. If you create a scheduled action which executes some Python code, you have the following variables available for use in your code:
env: Odoo Environment on which the action is triggered.
model: Odoo Model of the record on which the action is triggered; is a void recordset.
record: record on which the action is triggered; may be void.
records: recordset of all records on which the action is triggered in multi-mode; may be void.
time, datetime, dateutil, timezone: useful Python libraries.
log: log(message, level='info'): logging function to record debug information in ir.logging table.
Warning: Warning Exception to use with raise.
To return an action, assign: action = {...}.
If you use the log one, for example:
log('This message will be stored in ir_logging table', level='critical')
That log message and its details will be stored in the ir_logging table each time the scheduled action is executed (automatically or manually). This answers your question, but now I am wondering what the parameter --log-db is for, since I have tested it and these log messages are stored in ir_logging whether that parameter is set or not.
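Putting it together, the Python code of such a scheduled action could look like this (a sketch; env and log are the variables listed above, and res.partner is just an example model):

# runs in the Python code field of a scheduled action; env and log
# are injected by Odoo, so no imports are needed here
partner_count = env['res.partner'].search_count([])
log('Nightly check: %s partners in the database' % partner_count, level='info')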

DBD::Oracle, Cursors and Environment under mod_perl

Need some help, because I can't find any solution for my problems with DBD::Oracle.
So at first, this is the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company
The Apache web server was set up with a startup script which sets some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG)
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv)
We generally use the Perl module DBI with the driver DBD::Oracle to connect to our main database
Before we create a new DBI instance, we also set some Perl environment variables via %ENV: ORACLE_HOME and NLS_LANG
So far, this works fine. But now we are extending our system and need to connect to a remote database. Again, we are using DBI and DBD::Oracle. But for now there are some new conditions:
New connection must run in parallel to the existing one
TNSNAMES.ORA for the new connection is placed at a different location (not at $ORACLE_HOME.'/network/admin')
New database contents are provided by stored procedures, which we are fetching with DBD::Oracle and cursors (like explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors)
The stored procedures return object types and collection types, containing attributes of Oracle type DATE
To get these dates in a readable format, we set a new env variable $ENV{NLS_DATE_FORMAT}
To ensure the date format we additionally alter the session by alter session set nls_date_format ...
Okay, this works fine, too, but only if we make a new connection from the console. The new TNS location is found by the script, the connection can be established, and fetching data from the procedures via cursor works as well. All DATE types are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. First, the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because the location of our new TNSNAMES.ORA (published via $ENV{TNS_ADMIN}) is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem (assuming I create a dirty workaround for the first one) is that the date format published via $ENV{NLS_DATE_FORMAT} only works on the first level of our cursor select.
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects which contain date attributes. In the Apache context, the format published by NLS_DATE_FORMAT is not recognized. If I use a simple form of the example like this
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted correctly. So I think the nested structures are not formatted because $ENV{NLS_DATE_FORMAT} only takes effect in the console context, not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) under Apache and mod_perl. Maybe a problem with mod_perl itself?
I am at my wit's end. Maybe someone in the whole wide world has a solution ... and excuse my English :-) If you need further explanations, I will try to describe things more precisely.
If your problem is that changes to %ENV made while processing a request don't seem to be honoured, this is because mod_perl assumes you might be running multiple threads and therefore doesn't actually change the process environment when you change %ENV, so external libraries (like the Oracle client) and child processes don't see the change.
You can work around it by first using the prefork MPM, so there aren't any threading issues, and then making changes to the environment using Env::C instead of the %ENV hash.
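A minimal sketch, assuming the prefork MPM and that Env::C is installed (the path, date format and connection details are placeholders):

use strict;
use warnings;
use DBI;
use Env::C ();

# set the variables at the C level so the Oracle client library sees them;
# assigning to %ENV alone is not propagated to the real environment under mod_perl
Env::C::setenv('TNS_ADMIN',       '/path/to/new/tnsnames/dir', 1);
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS',     1);

my $dbh = DBI->connect('dbi:Oracle:REMOTE_DB', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });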

Asp.net database migration, what's the Down method for?

When I run add-migration, Up and Down methods are generated,
and I know that when I update the database (update-database), it runs the Up method.
What about the Down method?
When does it run? Is it for rollback, and how can I run it?
It's for when you want to "downgrade" the database to a previous migration state. You use it with the -TargetMigration flag of the Update-Database command. For example, if you have added the following migrations:
Initial
FirstMigration
SecondMigration (current state)
You can revert the database to the Initial migration state by:
Update-Database -TargetMigration:Initial
In which case the code in the Down() methods of the SecondMigration and FirstMigration classes will run.
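For illustration, a generated migration might look roughly like this (a sketch; the table and column are made up). Down() simply reverses whatever Up() did:

using System.Data.Entity.Migrations;

public partial class SecondMigration : DbMigration
{
    public override void Up()
    {
        // applied when migrating forwards to this migration
        AddColumn("dbo.Customers", "Phone", c => c.String());
    }

    public override void Down()
    {
        // applied when migrating back past this migration,
        // e.g. via Update-Database -TargetMigration:FirstMigration
        DropColumn("dbo.Customers", "Phone");
    }
}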

Asterisk with new functions

I created a func_odbc write function to record files in an SQL table:
[R]
dsn=connector
write=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES ('${ARG1}','${ARG2}','${ARG3}','${ARG4}')
prefix=M
and call it in the dialplan:
exten => _0X.,n,Set(M_R(${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})=)
When I execute it I get an error: ast_func_write: M_R Function not registered
Note: this is Asterisk on Windows.
The first thing I saw was that you are calling the function incorrectly... you need to be assigning values, not arguments. Try this:
func_odbc.conf:
[R]
dsn=connector
prefix=M
writesql=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES('${VAL1}','${VAL2}','${VAL3}','${VAL4}');
dialplan:
exten => _0X.,1,Set(M_R()=${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})
If that doesn't help you, continue on in my list :)
Make sure func_odbc.so is being loaded by Asterisk. (from the asterisk CLI: module show like func_odbc)... If it's not loaded, it can't "build" your custom odbc query function.
Make sure your DSN is configured in /etc/odbc.ini
Make sure that /etc/asterisk/res_odbc.conf is properly configured (see the sketch after this list)
Make sure you're calling the DSN by the right name (I see it happen all the time)
Enable verbose and debug in your Asterisk logging, do a logger reload, core set verbose 5, and core set debug 5, then try the call again. When the call finishes, review the log; you'll see much more output regarding what happened...
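For the res_odbc.conf check above, a sketch of what it typically looks like (DSN name, username and password are placeholders; the section name connector must match dsn=connector in func_odbc.conf):

; /etc/asterisk/res_odbc.conf
[connector]
enabled => yes
dsn => connector-dsn      ; name of a DSN section in /etc/odbc.ini
username => asterisk
password => secret
pre-connect => yes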
Regarding the answer from recluze... not to call you out here, but using a PHP AGI is serious overkill. The func_odbc function works just fine; why create more overhead and potential security issues by calling an external script (which itself has to run through an interpreter on top of everything)?
You should call the func_odbc function as ODBC_connector, where connector is the section name used in func_odbc.conf ([connector]). In the dialplan it would be called like this:
exten => _0X.,n,ODBC_connector(${arg1},${arg2})
I don't really understand the syntax you're trying to use, but how about using AGI (with PHP) for this? Just define your logic in a PHP script and call it from your dialplan as:
exten => _0X.,n,AGI(script-filename.php,${CUSER},${EXTEN},${DTIME})
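If you go the AGI route, the called script could look roughly like this (a hypothetical sketch using PDO; the table and DSN names are taken from the question, not verified):

#!/usr/bin/php -q
<?php
// script-filename.php -- arguments passed via AGI(...) arrive in $argv
// first consume the AGI environment lines Asterisk sends on stdin
while (($line = fgets(STDIN)) !== false && trim($line) !== '') {}

$caller = $argv[1];
$callee = $argv[2];
$dtime  = $argv[3];

$pdo  = new PDO('odbc:connector');  // DSN name assumed from the question
$stmt = $pdo->prepare(
    'INSERT INTO ast_records (caller, callee, dtime) VALUES (?, ?, ?)');
$stmt->execute([$caller, $callee, $dtime]);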