Set the RequestResponseSerializer in ElasticClient - nest

We've seen a resurrection of this issue in a recent update of Elasticsearch (https://github.com/elastic/elasticsearch-net/issues/1937).
We set the SourceSerializer when creating the Client connection but that doesn't seem to help.
Debugging into it, I see that RequestResponseSerializer defaults to Nest.InternalSerializer. That JSON serializer has DateParseHandling set to DateParseHandling.DateTime, whereas we want DateParseHandling.DateTimeOffset. I suspect that this may be the cause of my problem.
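The Json.NET behaviour behind that suspicion can be illustrated in isolation (a minimal sketch using Newtonsoft.Json directly, not NEST; the timestamp is just an arbitrary example):

using Newtonsoft.Json;

// With DateParseHandling.DateTime, an ISO 8601 string with an offset is
// materialized as System.DateTime and the original offset is lost.
var asDateTime = JsonConvert.DeserializeObject<object>(
    "\"2018-05-01T10:00:00+02:00\"",
    new JsonSerializerSettings { DateParseHandling = DateParseHandling.DateTime });

// With DateParseHandling.DateTimeOffset, the same value comes back as a
// System.DateTimeOffset and the offset is preserved.
var asDateTimeOffset = JsonConvert.DeserializeObject<object>(
    "\"2018-05-01T10:00:00+02:00\"",
    new JsonSerializerSettings { DateParseHandling = DateParseHandling.DateTimeOffset });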
Is there a way to set RequestResponseSerializer to verify my theory?
ADDITION: I was able to verify my theory above by altering the NEST code directly. I edited the InternalSerializer::CreateSettings() method to include DateParseHandling = DateParseHandling.DateTimeOffset and that solved the issue.
Now how to set/modify this value for RequestResponseSerializer without modifying NEST code directly...

Turns out my issue was the same as https://github.com/elastic/elasticsearch-net/issues/3164, which appears to have been fixed in v6.2.0 (https://github.com/elastic/elasticsearch-net/pull/3278).
I was running v6.1.0.
I upgraded to v6.3.1 and all looks well.


Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried enabling the "Impersonate the logged in user" option on Databases, but with no success: all the queries still run on Impala as the superset user.
I'm trying to achieve the same thing! This won't completely answer the question, since it still doesn't work, but I want to share my research in order to maybe help another soul trying to use this tool outside very basic use cases.
I dug into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found PR https://github.com/apache/superset/pull/4699, which for whatever reason was never merged into the codebase, and tried copy-pasting its code into my Superset version (1.1.0), but it didn't work. Adding some logs, I can see that the impersonation configuration is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that impersonation happens when you create a cursor: there is a constructor parameter through which you can pass the impersonation configuration.
I managed to correctly (at least to my understanding) implement impersonation for the SQL lab part.
In sql_lab.py you have to add the following lines to the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                                username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bit in the get_sqla_engine def
params = extra.get("engine_params", {})  # that was already there, just for you to find the line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there

# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this only (kind of) works for SQL Lab. For the dashboards there is an entirely different way of querying Impala that I haven't figured out yet.
Queries for the dashboards are handled differently, and there is no equivalent of this:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you first need to understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation configuration. Or something like that; in any case it is not as simple (if we want to call this simple) as the sql_lab part. So the trick is to find out where the query is executed and create a cursor with the impersonation configuration there. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you find another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla lib. A PR was opened with the code to change. You can apply the patch directly to the impyla source used by Superset; you have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this and we do not know if it works with different accounts using the same superset instance.

SendKeys() method ignores some characters when sending to a text box

I moved my Selenium installation to a new server; since then, some tests that use logins no longer work.
After investigating, I found that the password field was being populated with an incorrect value, so the tests failed.
I'm trying to do the following:
_passWordTextBox.Clear();
_passWordTextBox.SendKeys("!!ä{dasd$352310!!!\\_XY>èà$£<?^^");
Here is how the field ends up populated after those lines: the "!" character is the only one missing. It worked on the previous server, and some other suspicious characters (like $ é à <) also work.
I've looked at locale settings (culture differences) between the servers.
From these characters sent in a Password string:
!"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
All of these worked correctly:
"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\ _ abcdefghijklmnopqrstuvwxyz{|}
Only these failed to be sent correctly:
!]^`~
I've also tried in other fields (such as a Description field) and see the same failure.
I've tried to see if the command was sent correctly to the selenium server, but the logs seem to suggest it worked:
08:05:35.850 DEBUG [ReverseProxyHandler.execute] - To upstream: {"value":["!\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~?"]}
This suggests the server receives the command correctly, but for some reason the driver or the server doesn't execute it properly.
Try this:
_passWordTextBox.SendKeys(@"!!ä{dasd$352310!!!\\_XY>èà$£<?^^");
Maybe it's because of the validations on the form field.
You can try using clipboard:
public static void SendValueFromClipboard(this IWebElement txtField, string value)
{
    Clipboard.SetText(value);
    txtField.SendKeys(OpenQA.Selenium.Keys.Control + "v");
}
This is written in C#; you will need to rewrite it in the language you are using.
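If you go this route, here is a hedged usage sketch (it assumes the extension method above sits in a static class whose namespace is in scope, that System.Windows.Forms is referenced for Clipboard, and that the test runs on an STA thread, which Clipboard.SetText requires):

// Assumes: using System.Windows.Forms;  (Clipboard)
//          using OpenQA.Selenium;       (IWebElement, Keys)
_passWordTextBox.Clear();
_passWordTextBox.SendValueFromClipboard("!!ä{dasd$352310!!!\\_XY>èà$£<?^^");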
After looking into multiple system settings, I discovered that both my piloting and executing machines had the same regional settings (Format: French (Switzerland), Keyboard: French (Switzerland)), so I didn't look any further at first.
While fiddling around, I discovered the "Language for non-Unicode programs" setting.
As it turns out, it was set to French (Switzerland) on the machine executing the tests. Changing it to English (UK) resolved the problem.
Probably a bug in chromedriver.
Your solution doesn't work for me, since I already have that setting set to English, but here's a solution I found, if anyone else is interested.
Just change your keyboard to ENG UK in the task bar.

getting a "need project id error" in Keen

When I run the following:
Keen.delete(:iron_worker_analytics, filters: [{:property_name => 'start_time', :operator => 'eq', :property_value => '0001-01-01T00:00:00Z'}])
I get this error:
Keen::ConfigurationError: Keen IO Exception: Project ID must be set
However, when I set the value, I get the following:
warning: already initialized constant KEEN_PROJECT_ID
iron.io/env.rb:36: warning: previous definition of KEEN_PROJECT_ID was here
Keen works fine when I run the app and load the values from an env.rb file, but from the console I cannot get past this.
I am using the ruby gem.
I figured it out. The documentation is confusing. Per the documentation:
https://github.com/keenlabs/keen-gem
The recommended way to set keys is via the environment. The keys you
can set are KEEN_PROJECT_ID, KEEN_WRITE_KEY, KEEN_READ_KEY and
KEEN_MASTER_KEY. You only need to specify the keys that correspond to
the API calls you'll be performing. If you're using foreman, add this
to your .env file:
KEEN_PROJECT_ID=aaaaaaaaaaaaaaa
KEEN_MASTER_KEY=xxxxxxxxxxxxxxx
KEEN_WRITE_KEY=yyyyyyyyyyyyyyy
KEEN_READ_KEY=zzzzzzzzzzzzzzz
If not, make a script to export the variables into your shell or put it before the command you use to start your server.
But I had to set it explicitly via Keen.project_id (which I found by inspecting Keen.methods).
It's sort of confusing, since from the docs I assumed I just needed to set the environment variables. Maybe I am misunderstanding the docs, but it was confusing, at least to me.
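For anyone else stuck in a console session, a minimal sketch of setting the keys explicitly on the Keen module (the key values are placeholders; I'm assuming the master key is what the delete call needs):

require 'keen'

Keen.project_id = 'aaaaaaaaaaaaaaa'
Keen.master_key = 'xxxxxxxxxxxxxxx'
Keen.write_key  = 'yyyyyyyyyyyyyyy'
Keen.read_key   = 'zzzzzzzzzzzzzzz'

Keen.delete(:iron_worker_analytics,
            filters: [{ :property_name => 'start_time',
                        :operator => 'eq',
                        :property_value => '0001-01-01T00:00:00Z' }])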

Asterisk dial command return dialed number

I'm looking for a variable that can tell me which number 'won' the call on a multi-target Dial command.
Example:
Dial(SIP/1000&SIP/1001&SIP/1002,30)
Set(the_unlucky_winner=${...})
I'm not getting anything from the ${DIALEDPEERx} variables. Sounds like these vars are broken but I don't know if this is what I should be using.
Ancient version 1.2.14 deployed at this site. All clients are SIP
Thanks anyone
The only realistic way to do that is to call via a Local channel, like FreePBX does (check the freepbx.org source), or to use a macro on answer (which I'm afraid doesn't work in 1.2).
Parse the contents of the CDR record for the call. One of the fields is dstchannel, which will hold a value like SIP/1002-9786b0b0.
Also keep in mind that the call variable stack is wiped on hangup, unless you have an "h" (hangup) extension defined for the context. So, you can most easily handle your post-call processing there.
Further Reading:
http://www.asteriskdocs.org/en/3rd_Edition/asterisk-book-html-chunk/asterisk-SysAdmin-SECT-1.html
Please Note:
if this answer turns out to solve your problem, please "accept" it for the benefit of others trying to solve the same problem later
Hi all, I have a solution to this problem. It works fine for both a normal dial and a multi-target dial.
In the dial string, add a macro; here I am adding a "followme" macro:
M(followme)
$agi->exec("dial", "SIP/6001@sip.example.com&SIP/6002@sip.example.com,rtTgM(followme)");
Then, after the call is answered, it will go to the context
[macro-followme]
In this context you write a script to get the connected call's information via
$dstchannel = $agi->get_variable("DIALEDPEERNUMBER");
The way I managed to do it is as follows
Dial(SIP/1000&SIP/1001&SIP/1002,30,M(whoanswered))
[macro-whoanswered]
exten => s,1,NoOp(${CHANNEL})
You will see that the actual extension that answered is contained in ${CHANNEL}.
If 1001 answered, the channel will be something like SIP/1001-00017cf1.
Just use the CUT function to cut it by / and -.
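As a sketch only (the exact dialplan below is my assumption of how that CUT step would look; CUT takes a variable name, so the channel is cut in two steps, and note that the M() macro runs on the answering channel, so any variable set here lives on that channel):

[macro-whoanswered]
exten => s,1,NoOp(Answered on channel ${CHANNEL})        ; e.g. SIP/1001-00017cf1
exten => s,n,Set(peer=${CUT(CHANNEL,-,1)})               ; -> SIP/1001
exten => s,n,Set(the_unlucky_winner=${CUT(peer,/,2)})    ; -> 1001
exten => s,n,NoOp(Answered by ${the_unlucky_winner})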

DBD::Oracle, Cursors and Environment under mod_perl

I need some help, because I can't find any solution for my problems with DBD::Oracle.
So, first, this is the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company
Apache web server was set up with a startup script which is setting some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG)
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv)
We are generally using the perl module DBI with driver DBD::Oracle to connect to our main database
Before we create a new instance of DBI we are setting some perl env variables, too (%ENV). We are setting ORACLE_HOME and NLS_LANG.
So far, this works fine. But now we are extending our system and need to connect to a remote database. Again, we are using DBI and DBD::Oracle. But for now there are some new conditions:
New connection must run in parallel to the existing one
TNSNAMES.ORA for the new connection is placed at a different location (not at $ORACLE_HOME.'/network/admin')
New database contents are provided by stored procedures, which we are fetching with DBD::Oracle and cursors (like explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors)
The stored procedures are returning object types and collection types, containing attributes of oracle type DATE
To get these dates in a readable format, we set a new env variable $ENV{NLS_DATE_FORMAT}
To ensure the date format, we additionally alter the session via alter session set nls_date_format ... (see the sketch below)
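For reference, a minimal sketch of those last two steps as we run them before fetching (the format string is only an example, and $dbh is the DBD::Oracle database handle):

$ENV{NLS_DATE_FORMAT} = 'YYYY-MM-DD HH24:MI:SS';   # example format only
$dbh->do(q{ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS'});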
Okay, this works fine, too, but only if we make the new connection from the console. The new TNS location is found by the script, the connection can be established, and fetching data from the procedures via cursors also works. All DATE types are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. First, the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because our new TNSNAMES.ORA file, or rather its location (published via $ENV{TNS_ADMIN}), is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem (if I create a dirty workaround for the first one) is that the date format published via $ENV{NLS_DATE_FORMAT} only works on the first level of our cursor select:
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects which contain date attributes. In the Apache context the format published by NLS_DATE_FORMAT is not applied to them. If I use a simple form of the example like this:
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted correctly. So I think the nested structures are not formatted because $ENV{NLS_DATE_FORMAT} only takes effect in the console context and not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) under Apache and mod_perl. Maybe a mod_perl problem?
I am at my wit's end. Maybe someone in the whole wide world has a solution ... and excuse my English :-) If you need further explanation, I will try to describe it more precisely.
If your problem is that changes to %ENV made while processing a request don't seem to be honoured, this is because mod_perl assumes you might be running multiple threads and doesn't actually change the process environment when you change %ENV, so external libraries (like the oracle client) or child processes don't see the change.
You can work around it by first using the prefork MPM, so there aren't any threading issues, and then making changes to the environment using Env::C instead of the %ENV hash.
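A minimal sketch of that workaround (assuming the prefork MPM and the Env::C module from CPAN; the TNS_ADMIN path, TNS alias, credentials and date format below are all placeholders):

use DBI;
use Env::C;

# Set the variables at the C level so the Oracle client library actually sees
# them; assigning to the Perl-level %ENV hash alone is not enough under mod_perl.
Env::C::setenv('TNS_ADMIN',       '/path/to/alternate/tns_admin', 1);  # placeholder path
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS',        1);  # example format

my ($username, $password) = ('user', 'secret');                        # placeholders
my $dbh = DBI->connect('dbi:Oracle:REMOTE_DB',                         # placeholder TNS alias
                       $username, $password, { RaiseError => 1 });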