Issues pulling change log using python - jira-rest-api

I am trying to query and pull changelog details using Python.
The code below returns the list of issues in the project:
issued = jira.search_issues('project= proj_a', maxResults=5)
for issue in issued:
    print(issue)
I am then trying to pass the values obtained above:
issues = jira.issue(issue, expand='changelog')
changelog = issues.changelog
projects = jira.project(project)
I get the error below when trying this:
JIRAError: JiraError HTTP 404 url: https://abc.atlassian.net/rest/api/2/issue/issue?expand=changelog
text: Issue does not exist or you do not have permission to see it.
Could anyone advise where I am going wrong, or what permissions I need?
Please note: if I pass a specific issue ID in the above code it works just fine, but I am trying to pass a list of issue IDs.

You can already receive all the changelog data from the search_issues() method, so you don't have to fetch the changelog by iterating over the issues and making another API call for each one. Check out the code below for examples of how to work with the changelog.
issues = jira.search_issues('project= proj_a', maxResults=5, expand='changelog')
for issue in issues:
    print(f"Changes from issue: {issue.key} {issue.fields.summary}")
    print(f"Number of Changelog entries found: {issue.changelog.total}")  # number of changelog entries (careful, each entry can have multiple field changes)
    for history in issue.changelog.histories:
        print(f"Author: {history.author}")  # person who made the change
        print(f"Timestamp: {history.created}")  # when the change happened
        print("\nListing all items that changed:")
        for item in history.items:
            print(f"Field name: {item.field}")  # field to which the change happened
            print(f"Changed to: {item.toString}")  # new value; item.to might be better in some cases depending on your needs
            print(f"Changed from: {item.fromString}")  # old value; the raw field is named 'from', which is a Python keyword, so use getattr(item, 'from') if you need it
        print()
    print()
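Note that issue.changelog.total counts history entries, not individual field changes. If you need the latter for a given issue from the loop above, a quick sketch:

field_changes = sum(len(history.items) for history in issue.changelog.histories)
print(f"Total individual field changes: {field_changes}")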
Just to explain what went wrong before when iterating over each issue: you have to use issue.key, not the issue resource itself. When you simply pass the issue, it isn't handled correctly as a parameter in jira.issue(). Instead, pass issue.key:
for issue in issues:
    print(issue.key)
    myIssue = jira.issue(issue.key, expand='changelog')

Reusing a variable across Scenarios in the same Karate feature file

Does Karate support defining a variable in one Scenario and reusing it in other Scenarios in the same feature file? I tried doing this but get an error. What's the best way to reuse variables within the same feature file?
Scenario: Get the request Id
    * url baseUrl
    Given path 'eam'
    When method get
    Then status 200
    And def reqId = response.teams[0].resourceRequestId

Scenario: Use the above generated Id
    * url baseUrl
    * print 'From the previous Scenario: ' + reqId
Error:
Caused by: javax.script.ScriptException: ReferenceError: "reqId" is not defined in <eval> at line number 1
Use a Background: section; steps in it run before every Scenario, so variables defined there are visible to each one.
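For illustration, a minimal sketch (baseUrl and the variable are placeholders; the commented line shows the run-once variant mentioned in the edit below):

Background:
    * url baseUrl
    * def commonId = 'tc01'
    # or, to initialize only once per feature:
    # * def setup = callonce read('classpath:setup.feature')

Scenario: First
    * print 'commonId is: ' + commonId

Scenario: Second
    * print 'commonId is still: ' + commonId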
EDIT: a variable defined in the Background: will be re-initialized for every Scenario, which is standard testing-framework "set up" behavior. You can use hooks such as callonce if you want the initialization to happen only once.
If you are trying to modify a variable in one Scenario and expect it to still have that modified value when the next Scenario starts, you have misunderstood the concept of a Scenario. Just combine your steps into one Scenario, because think about it: that is the "flow" you are trying to test, as the sketch below shows.
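Applied to the question, the combined flow would look something like this (a sketch reusing the path and field names from the question):

Scenario: Get the request Id and use it in the same flow
    * url baseUrl
    Given path 'eam'
    When method get
    Then status 200
    * def reqId = response.teams[0].resourceRequestId
    * print 'Using the Id further down the same flow: ' + reqId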
Each Scenario should be able to run stand-alone. In the future, the execution order of Scenario-s could even be random, or they could run in parallel.
Another way to explain this: if you comment out one Scenario, the others should continue to work.
Please don't think of a Scenario as a way to "document" the important parts of your test. You can always use comments (e.g. # foo bar). Some teams assume that each HTTP "end point" should live in a separate Scenario, but this is absolutely not recommended. Look at the Hello World example itself: it deliberately shows two calls, a POST and a GET!
You can easily re-use code using call so you should not be worrying about whether code-duplication will be an issue.
Also - it is fine to have some code duplication, if it makes the flow easier to read. See this answer for details - and also read this article by Google.
EDIT: here is another answer to a similar question: https://stackoverflow.com/a/59433600/143475

How to extract large amounts of data (more than 100 MB) from Snowflake into CSV

I am trying to export large amounts of data from Snowflake into a CSV. I saw a similar question, and the solution given was to "Run the query as part of a COPY INTO {location} command to an internal stage, and then use a GET command to pull it down locally."
I tried following the guide and ran the following, but received the error "SQL compilation error: syntax error line 4 at position 3 unexpected 'file_format'."
I am not sure how to fix this, or even if the first part of my syntax is correct. Can someone please help?
copy into @my_stage/result/data_ from (select *
from "IRIS"."PRODUCTION"."VW_ALL_IIS_LHJ"
where (RECIP_ADDRESS_COUNTY = 06065 or ADMIN_ADDRESS_COUNTY = 06065)
file_format=(TYPE='CSV');
[ HEADER = TRUE]
get @%my_stage/result/data.csv/;
I'm pretty sure the issue is that you're missing a closing parenthesis. Try:
copy into @my_stage/result/data_ from (select *
from "IRIS"."PRODUCTION"."VW_ALL_IIS_LHJ"
where (RECIP_ADDRESS_COUNTY = 06065 or ADMIN_ADDRESS_COUNTY = 06065))
file_format=(TYPE='CSV');
[ HEADER = TRUE]
get @%my_stage/result/data.csv/;
Sorry - I don't have a way to test this.
You are missing a parenthesis after the WHERE clause. You opened one parenthesis after the first FROM and another at the WHERE clause, but you only closed the WHERE one.
Also, AFAIK, you don't need to call GET if the stage was set up properly. The COPY INTO command will place the file in your stage, and you then retrieve it by accessing that stage in the normal way. So if you unloaded to an S3 bucket, you'd just access the file from S3 as if it were any other file.
Lastly, remember there are many useful parameters you can set in the FILE_FORMAT, such as RECORD_DELIMITER, COMPRESSION, and how to handle NULLs.
And remove the semicolon after the file_format clause: it ends the statement early, and HEADER is not a valid instruction on its own.
Also, you don't have to put HEADER = TRUE between square brackets. Brackets in the documentation just mean the parameter is optional.
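Putting those fixes together, the statements would look something like this (an untested sketch; the stage name comes from the question and the local download directory is a placeholder):

copy into @my_stage/result/data_ from (
    select *
    from "IRIS"."PRODUCTION"."VW_ALL_IIS_LHJ"
    where (RECIP_ADDRESS_COUNTY = 06065 or ADMIN_ADDRESS_COUNTY = 06065)
)
file_format = (TYPE = 'CSV')
header = TRUE;

get @my_stage/result/ file:///tmp/unload/;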

Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried enabling the "Impersonate the logged on user" option on Databases, but with no success: all the queries still run on Impala as the superset user.
I'm trying to achieve the same! This will not completely answer the question, since it still does not fully work, but I want to share my research in order to maybe help another soul trying to use this instrument outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 that for whatever reason was never merged into the codebase, and tried to copy and paste its code into my Superset version (1.1.0), but it didn't work. Adding some logs, I can see that the impersonation configuration is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter in which you can pass the impersonation configuration.
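For context, this is roughly what that looks like at the impyla level, outside Superset (a minimal sketch assuming a reachable Impala daemon; host, port, and user names are placeholders):

from impala.dbapi import connect

conn = connect(host='impala-host.example.com', port=21050)
# the 'impala.doas.user' entry is the impersonation setting discussed here
cursor = conn.cursor(configuration={'impala.doas.user': 'target_user'})
cursor.execute('SELECT 1')
print(cursor.fetchall())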
I managed to correctly (at least to my understanding) implement impersonation for the SQL lab part.
In sql_lab.py you have to add the following lines in the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                               username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bit in the get_sqla_engine def
params = extra.get("engine_params", {})  # that was already there, just for you to find the line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there
# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I said, this only works for SQL Lab. For the dashboards there is an entirely different way of querying Impala that I have not yet figured out.
This means that queries for the dashboards are handled in a different way, and there isn't something like this:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you first need to understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation configuration. Or something like that; however, it is not as simple (if we want to call this simple) as the sql_lab part. So, the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you find another way to solve it, or if this was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla lib. A PR was opened with the code changes; you can apply the patch directly to the impyla source used by Superset. You have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this, and we do not know if it works with different accounts using the same Superset instance.

How to run a TestCafe test with CLI that contains a metadata array

I have a test with the following metadata:
test.meta({ type: 'smoke', testcase: ['tc01', 'tc02'] });
The testcase metadata contains an array of IDs that I would like to use as a filter, so that the test runs for any of the values given on the command line interface:
testcafe --test-meta testcase=tc01
testcafe --test-meta testcase=tc02
These two command lines should both run the same test; however, they don't work. Is there another way to approach this?
Reading this discussion on TestCafe's GitHub page, it seems that metadata values have to be single strings: https://github.com/DevExpress/testcafe/issues/3267 The issue was then closed, and the last explanation was that such a feature request is not very clear because:
In addition, the particular case you are addressing is a bit tricky, what will be the user expected behaviour?, To match only when the array has all the passed values? or when the array is equal to that values (ie: don't have any extra one)?.
Such an option is not mentioned in the official documentation either: https://devexpress.github.io/testcafe/documentation/guides/basic-guides/organize-tests.html#specify-test-metadata
That leads me to believe you can't really achieve what you're asking for as of June 2020.
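One possible workaround (an untested sketch, not an official TestCafe feature): flatten the array into individual single-valued metadata keys, so the CLI's exact-match filter can target each ID:

test.meta({ type: 'smoke', tc01: 'true', tc02: 'true' })('my test', async t => {
    // test body here
});

Then either of these commands would match the same test:

testcafe chrome tests/ --test-meta tc01=true
testcafe chrome tests/ --test-meta tc02=true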

DokuWiki LDAP can't see any groups

We have just changed our domain after a protracted name change (the change actually happened two years ago!), and our DokuWiki installation has stopped being able to see any groups and memberships.
The config has been updated to reflect the new server and DCs, and login is working correctly; it is only the groups that aren't working.
$conf['auth']['ldap']['server'] = 'ldap://MYDC.mydomain.co.uk:389';
$conf['auth']['ldap']['binddn'] = '%{user}@mydomain.co.uk';
$conf['auth']['ldap']['usertree'] = 'dc=mydomain,dc=co,dc=uk';
$conf['auth']['ldap']['userfilter'] = '(userPrincipalName=%{user}@mydomain.co.uk)';
$conf['auth']['ldap']['mapping']['name'] = 'displayname';
$conf['auth']['ldap']['mapping']['grps'] = array('memberof' => '/CN=(.+?),/i');
$conf['auth']['ldap']['grouptree'] = 'dc=mydomain,dc=co,dc=uk';
$conf['auth']['ldap']['groupfilter'] = '(&(cn=*)(Member=%{dn})(objectClass=group))';
$conf['auth']['ldap']['referrals'] = '0';
$conf['auth']['ldap']['version'] = '3';
$conf['auth']['ldap']['debug'] = 1;
Obviously I have edited the domain name there, but for the life of me I can't see what's wrong here. It all worked fine yesterday on the old domain.
I should also state that this is an old version of DokuWiki that for various reasons I can't actually update.
The debug line gives me an "ldap search: success" line, but if I add "?do=check" onto any URL within the system I get "You are part of the groups" followed by nothing; it can't see any groups.
It's a massive pain as we have a pretty intricate ACL setup for the site, so it's not like I can just throw it open to all.
If anyone has any suggestions, no matter how obvious, please pass them on.
Solved it by changing the DokuWiki authentication plugin that was used; 'authad' is simpler to use and just works with what I'm doing.
As a side bonus it also means that I have finally been able to get the install upgraded to the current version.
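For reference, the switch amounts to something like this in the config (a sketch reusing the placeholder domain from the question; account_suffix, base_dn, and domain_controllers are the authad plugin's option names):

$conf['authtype'] = 'authad';
$conf['plugin']['authad']['account_suffix'] = '@mydomain.co.uk';
$conf['plugin']['authad']['base_dn'] = 'DC=mydomain,DC=co,DC=uk';
$conf['plugin']['authad']['domain_controllers'] = 'MYDC.mydomain.co.uk';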