DBD::Oracle, Cursors and Environment under mod_perl - apache

I need some help, because I can't find any solution to my problems with DBD::Oracle.
First, the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company
The Apache web server was set up with a startup script which sets some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG)
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv)
We generally use the Perl module DBI with the driver DBD::Oracle to connect to our main database
Before we create a new instance of DBI, we also set some environment variables from Perl (%ENV): ORACLE_HOME and NLS_LANG.
So far, this works fine. But now we are extending our system and need to connect to a remote database, again using DBI and DBD::Oracle. This time there are some new conditions:
The new connection must run in parallel with the existing one
The TNSNAMES.ORA for the new connection is placed at a different location (not at $ORACLE_HOME/network/admin)
The new database contents are provided by stored procedures, which we fetch with DBD::Oracle and cursors (as explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors)
The stored procedures return object types and collection types containing attributes of Oracle type DATE
To get these dates in a readable format, we set a new environment variable, $ENV{NLS_DATE_FORMAT}
To ensure the date format, we additionally alter the session via ALTER SESSION SET NLS_DATE_FORMAT ... (see the sketch after this list)
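For illustration, a minimal sketch of that whole setup, following the cursor-binding pattern from the DBD::Oracle docs linked above; the TNS path, the alias remotedb and the procedure get_records are placeholders:
use strict;
use warnings;
use DBI;
use DBD::Oracle qw(:ora_types);

# point the Oracle client at the relocated TNSNAMES.ORA (placeholder path)
$ENV{TNS_ADMIN}       = '/path/to/new/tns';
$ENV{NLS_DATE_FORMAT} = 'YYYY-MM-DD HH24:MI:SS';

my $dbh = DBI->connect('dbi:Oracle:remotedb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

# belt and braces: force the date format on the session as well
$dbh->do(q{ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS'});

# fetch from the stored procedure through a ref cursor
my $sth = $dbh->prepare(q{BEGIN OPEN :cursor FOR SELECT * FROM TABLE(get_records); END;});
my $cursor;
$sth->bind_param_inout(':cursor', \$cursor, 0, { ora_type => ORA_RSET });
$sth->execute;

while (my @row = $cursor->fetchrow_array) {
    print join("\t", @row), "\n";
}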
Okay, this works fine, too. But only if we make the new connection from the console. The new TNS location is found by the script, the connection can be established, and fetching data from the procedures via cursor works as well. All DATE types are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. First, the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because the new TNSNAMES.ORA file, or rather its location (published via $ENV{TNS_ADMIN}), is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem (if I build a dirty workaround for the first one) is that the date format published via $ENV{NLS_DATE_FORMAT} only works on the first level of our cursor select:
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects containing date attributes. In the Apache context, the format published via NLS_DATE_FORMAT is not applied to them. If I use a simple form of the example like this
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted correctly. So I think the subordinate structures are not formatted because $ENV{NLS_DATE_FORMAT} only takes effect in the console context, not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) under Apache and mod_perl. Maybe a mod_perl problem?
I am at my wit's end. Maybe someone in the whole wide world has a solution ... and excuse my English :-) If you need further explanations, I will try to describe things more precisely.

If your problem is that changes to %ENV made while processing a request don't seem to be honoured, this is because mod_perl assumes you might be running multiple threads and doesn't actually change the process environment when you change %ENV, so external libraries (like the Oracle client) and child processes don't see the change.
You can work around it by first using the prefork MPM, so there are no threading issues, and then making changes to the environment using Env::C instead of the %ENV hash.
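For example, a minimal sketch of that workaround (Env::C writes to the real C-level environment, which is what the Oracle client library reads; the path and format are placeholders):
use Env::C;

# third argument 1 = overwrite an existing value
Env::C::setenv('TNS_ADMIN',       '/path/to/new/tns',      1);
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS', 1);

# connect only after the C-level environment is in place
my $dbh = DBI->connect('dbi:Oracle:remotedb', 'user', 'password');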

Related

Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried to enable the "Impersonate the logged on user" option on Databases, but with no success: all the queries still run on Impala as the superset user.
I'm trying to achieve the same! This will not completely answer the question, since it still does not work, but I want to share my research in order to maybe help another soul trying to use this instrument outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 that for whatever reason was never merged into the codebase, and I tried to copy and paste its code into my Superset version (1.1.0), but it didn't work. Adding some logs, I can see that the configuration with the impersonation is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter through which you can pass the impersonation configuration.
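For context, this is roughly what that looks like at the DB-API (impyla) level, outside of Superset; host, port and username here are placeholders, and I'm assuming impyla's cursor() accepts a configuration dict as described:
from impala.dbapi import connect

# placeholder connection details
conn = connect(host='impala-host', port=21050)

# the cursor constructor carries the impersonation setting
cursor = conn.cursor(configuration={'impala.doas.user': 'some_user'})
cursor.execute('SELECT 1')
print(cursor.fetchall())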
I managed to correctly (at least to my understanding) implement impersonation for the SQL lab part.
In sql_lab.py you have to add the following lines in the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                               username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bits in the get_sqla_engine def:
params = extra.get("engine_params", {})  # that was already there, just for you to find the line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there
# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this kind of works only for SQL Lab. For the dashboards there is an entirely different way of querying Impala that I have not yet found.
This means that queries for the dashboards are handled in a different way, and there isn't something like this:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you first need to understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation conf. Or something like that; however, it is not as simple (if we want to call this simple) as the sql_lab part. So the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you find another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla lib. A PR was opened with the code to change. You can apply the patch directly in the impyla source used by Superset. You have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this, and we do not know if it works with different accounts using the same Superset instance.

Set variables in Javascript job entry at root level

I need to set variables in the root scope in one job, to be used in a different job. The first job has a JavaScript job entry with the statements:
parent_job.setVariable("customers_full_path", "C:\\customers22.csv", "r");
true;
But the compilation fails with:
Couldn't compile javascript:
org.mozilla.javascript.EvaluatorException: Can't find method
org.pentaho.di.job.Job.setVariable(string,string,string). (#2)
How do I set a variable at the root level in a JavaScript job entry?
Sorry for the passive-aggressive tone, but:
I don't know if you are new to Pentaho, but the most common mistake for new users with previous programming knowledge is to be sort of 'addicted' to known methods, and thus to use JavaScript for functionality that is built into the tool. Both transformations (KTR) and jobs (KJB) have a step/entry for this; you can manipulate variables better in a KTR.
JavaScript steps slow down the flow considerably, so try to stay away from them as much as possible.
EDIT:
Reading this article, it seems the only thing you're doing wrong is the actual syntax of the command.
Correct usage:
parent_job.setVariable("name_of_variable", "desired value");
The command you described has 3 parameters when it should have 2 (note the error message: there is no Job.setVariable(string,string,string)), and the variable name comes first. If you have more than one variable to set, call the command once per variable. Try it out and see if it works.
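Applied to the example from the question, that would be (a sketch following the two-argument form above; the scope argument is simply dropped):
parent_job.setVariable("customers_full_path", "C:\\customers22.csv");
true;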

List all bamboo variables in inline script

I have a lot of Bamboo variables defined, due to the fact that I have a system with a lot of legacy and config in places where it does not belong. Getting rid of all this will take a while on the roadmap, so I need to find a way to auto-replace all these values.
To give a number: there are 8 customer config files, each with about 100 variables. Indeed, there was a maniac who added all of those in Bamboo because, as you might have guessed, most of them differ per environment.
At the moment I want to automate the deployment process, and all is going fine except for the fact that I need to replace 100 variables and I don't want to maintain them in my script itself all the time.
I am looking for a way to retrieve all the variables in an array, so I can just iterate over the keys and replace them in the config files.
echo "${bamboo.application.myvalue}" will replace the value as expected. The only problem is: how can I get all the keys under bamboo.*?
I tried it with the following commands:
printenv
env
declare
All of the above without success. How can I retrieve a list of all those variables in an inline script in Bamboo?
Thanks a lot
I think it is not possible to change the value of the variables on the fly. Instead, you can use the "Inject Bamboo variables" task in order to be able to change a variable's value.
This task reads a file to create the variables. So all you have to do is create this file with the values you need, and then use these variables.
E.g., creating the file from a PowerShell script:
$path = 'bambooVariaveis.properties'
$connectionstringX = 'connectionstring="Data Source=XXXX;"'
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding($False)
[System.IO.File]::WriteAllLines($path, $connectionstringX, $Utf8NoBomEncoding)
E.g., the Inject Bamboo variables task config (see the sketch below):
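A sketch of that task configuration, assuming the namespace inject (field names are from the Bamboo UI and may differ slightly between versions):
Path to properties file: bambooVariaveis.properties
Namespace: inject
Scope of variables: Result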
Using it (in a subsequent script task):
echo ${bamboo.inject.connectionstring}

How to use a config file to connect to database set by user

I have a program that runs a query and returns the results in a report viewer. The issue is that we have 10 locations, each with its own local database. What I'd like to do is have each location use the same program and utilize the App.config file to specify which database to connect to, depending on the location. This would prevent me from having to create 10 individual programs with separate database connections. I was thinking I could have 3 values in the app.config file: "Database", "login" and "password". Generally speaking, the databases are on the .30 address, so it would be nice to have them set the config file to the database server IP.
For example:
Location: 1
DatabaseIP: 10.0.1.30
Login: sa
Password: databasepassword
Is it possible to set something like this up using the app.config file?
You should take a look at resource files.
Originally they are intended for localization, but they should work for you as well.
Go to your project Properties and set up an Application Setting of type (Connection String) from the drop-down. This will result in an XML config file in your output directory, in which you can modify the connection string post-compile.
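A sketch of what that gives you, assuming a setting named LocationDb in a project whose root namespace is MyApp (both names are placeholders). The generated MyApp.exe.config contains a section like:
<connectionStrings>
    <add name="MyApp.Properties.Settings.LocationDb"
         connectionString="Data Source=10.0.1.30;Initial Catalog=SomeDb;User ID=sa;Password=databasepassword" />
</connectionStrings>
and the program reads it through the generated settings class:
// picks up whatever was edited into the .config post-compile
string connStr = MyApp.Properties.Settings.Default.LocationDb;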
I ended up using a simple XML file to do this. I used this site to accomplish it. I first wrote the XML on form load, then switched it to the read.

Asterisk with new functions

I created a func_odbc write function to insert records (recording filename, caller, callee, time) into an SQL table:
[R]
dsn=connector
write=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES ('${ARG1}','${ARG2}','${ARG3}','${ARG4}')
prefix=M
and set it in the dialplan:
exten => _0X.,n,Set(M_R(${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})= )
When I execute it I get an error: ast_func_write: M_R Function not registered
Note: this is Asterisk on Windows.
The first thing I saw was that you were calling the function incorrectly... you need to be assigning values, not passing arguments... try this:
func_odbc.conf:
[R]
dsn=connector
prefix=M
writesql=INSERT INTO ast_records (filename,caller,callee,dtime) VALUES('${VAL1}','${VAL2}','${VAL3}','${VAL4}');
dialplan:
exten => _0X.,1,Set(M_R()=${MIXMONITOR_FILENAME}\,${CUSER}\,${EXTEN}\,${DTIME})
If that doesn't help you, continue on in my list :)
Make sure func_odbc.so is being loaded by Asterisk (from the Asterisk CLI: module show like func_odbc). If it's not loaded, it can't "build" your custom ODBC query function.
Make sure your DSN is configured in /etc/odbc.ini (see the sketch after this list)
Make sure that /etc/asterisk/res_odbc.conf is properly configured
Make sure you're calling the DSN by the right name (I see it happen all the time)
Enable verbose and debug in your Asterisk logging, do a logger reload, core set verbose 5, core set debug 5, and then try the call again. When the call finishes, review the log; you'll see much more output about what happened.
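For reference, a minimal sketch of those two files for a DSN named connector (the driver, server and credentials are placeholders for whatever your database actually uses):
/etc/odbc.ini:
[connector]
Description = Asterisk records DB
Driver = MySQL
Server = localhost
Database = asterisk
Port = 3306
/etc/asterisk/res_odbc.conf:
[connector]
enabled => yes
dsn => connector
username => asterisk
password => secret
pre-connect => yes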
Regarding the answer from recluze: not to call you out here, but using a PHP AGI is serious overkill here. The func_odbc function works just fine; why create more overhead and potential security issues by calling an external script (which itself has to run on top of an interpreter)?
You should call the func_odbc function as "ODBC_connector", where connector is the section name used in the func_odbc.conf file ([connector]). In the dialplan it would be called like this:
exten=> _0x.,n,ODBC_connector(${arg1},${arg2})
I don't really understand the syntax you're trying to use, but how about using AGI (with PHP) for this? Just define your logic in a PHP script and call it from your dialplan as:
exten => _0X.,n,AGI(script-filename.php,${CUSER},${EXTEN},${DTIME})