Need to get issuelinks types with Jira python - jira-rest-api

I am writing a script with the Jira Python library and I have hit an obstacle.
I need to access one of the issue links under "is duplicated by", but I have no idea which attributes I can use.
I can get to the issuelinks field, but I can't go any further from there.
This is what I've got so far:
issue = jira.issue(ISSUE_NUM)   # this is the issue I am handling
link = issue.fields.issuelinks  # I've accessed the issuelinks field
if hasattr(link, "inwardIssue"):
    inwardIssue = link.inwardIssue
and from here I want to do something like:
if str(inwardIssue.type(?)) == "is duplicated by":
Inward issue links can be
is cloned by
is duplicated by
and so on.
How can I get the type of an inward issue link?

There seem to be a few types of issue links. So far I've seen Blocker, Cause, Duplicate and Reference.
In order to identify the type of an IssueLink, you can do the following:
issue = jira.issue(ISSUE_NUM)
all_issue_links = issue.fields.issuelinks
for link in all_issue_links:
    if link.type.name == 'Duplicate':
        inward_issue = link.inwardIssue
        # Do something with link
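If you need to match on the link description text itself (e.g. "is duplicated by") rather than on the type name, the link type also carries inward and outward descriptions. A minimal sketch, assuming the same issue object as above and the default English link descriptions:

# Sketch: link.type.inward holds the inward description text (e.g. "is duplicated by"),
# link.type.outward the outward one (e.g. "duplicates").
for link in issue.fields.issuelinks:
    if hasattr(link, "inwardIssue") and link.type.inward == "is duplicated by":
        inward_issue = link.inwardIssue
        print(inward_issue.key)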

Pyomo dynamic optimisation ERROR: variable that is not attached to an active block on the submodel being written

I am trying to set up a model for a dynamic optimisation problem with Pyomo.DAE.
I defined my state variable as well as its corresponding DerivativeVar (both indexed by m.Time). I then set up a simple constraint that expresses the relationship between the state and derivative variables in the simplest terms. Solving the problem with a dummy objective (so just testing the constraint), I get the following error:
ERROR: Model contains an expression (calc_my_state[0])
that contains a variable (derivative_var[0]) that is not
attached to an active block on the submodel being written
Here's an excerpt of what I wrote:
(.....)
m.state_var = Var(m.Time, initialize=0)
m.derivative_var = DerivativeVar(m.state_var, wrt=m.Time)

def calc_my_state(m, i):
    return m.derivative_var[i] == m.state_var[i] * 2

m.calc_my_state = Constraint(m.Time, rule=calc_my_state)
m.obj = Objective(expr=1)

opt = SolverFactory("glpk")
results = opt.solve(m)
I tried to reproduce the simple setup of a DAE in Pyomo, more or less copying and pasting lines from the Pyomo.DAE documentation.
I printed derivative_var.get_state_var() and it gives me the right state variable without error.
I also tried simple DAE examples that I found on the internet, and solving them with my solver settings worked fine as well.
What am I missing? I am grateful for any input. Thanks!
I found the missing link: I had not applied a discretization transformation. Once something like the following was added, the script ran without error:
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, wrt=m.Time)
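For reference, here is a minimal, self-contained sketch of the same toy model with the discretization step included (the time horizon, number of finite elements, and ConcreteModel setup are assumptions, not taken from the original script):

from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           SolverFactory, TransformationFactory)
from pyomo.dae import ContinuousSet, DerivativeVar

m = ConcreteModel()
m.Time = ContinuousSet(bounds=(0, 10))   # assumed time horizon
m.state_var = Var(m.Time, initialize=0)
m.derivative_var = DerivativeVar(m.state_var, wrt=m.Time)

def calc_my_state(m, i):
    return m.derivative_var[i] == m.state_var[i] * 2

m.calc_my_state = Constraint(m.Time, rule=calc_my_state)
m.obj = Objective(expr=1)

# Discretize before solving; without this step the DerivativeVar is never
# expanded into algebraic variables, which triggers the "not attached to an
# active block" error above.
TransformationFactory('dae.finite_difference').apply_to(m, wrt=m.Time, nfe=20)

SolverFactory('glpk').solve(m)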

How to list all topics created by me

How can I get a list of all topics that I created?
I think it should be something like
%SEARCH{ "versions[-1].info.author = '%USERNAME%" type="query" web="Sandbox" }%
but that returns 0 results.
With "versions[-1]" I get all topics, and with "info.author = '%USERNAME%'" a list of the topics where the last edit was made by me. Having a list of all topics where any edit was made by me would be fine, too, but "versions.info.author = '%USERNAME%'" again gives 0 results.
I’m using Foswiki-1.0.9. (I know that’s quite old.)
The right syntax would be
%SEARCH{ "versions[-1,info.author='%USERNAME%']" type="query" web="Sandbox"}%
But that does not perform well, especially on your old Foswiki install.
It is better to install DBCacheContrib and DBCachePlugin and use
%DBQUERY{"createauthor='%WIKINAME%'"}%
This plugin caches the initial author so that it does not have to retrieve the information from the revision system for every topic under consideration at query time.

Error: Not found: Dataset my-project-name:domain_public was not found in location US

I need to run a query against a dataset provided by a public project. I created my own project and added their dataset to my project. There is a table named domain_public. When I query this table I get this error:
Query Failed
Error: Not found: Dataset my-project-name:domain_public was not found in location US
Job ID: my-project-name:US.bquijob_xxxx
I am in a non-US country. What is the issue and how can I fix it?
EDIT 1:
I changed the processing location to asia-northeast1 (I am based in Singapore), but I get the same error:
Error: Not found: Dataset censys-my-projectname:domain_public was not found in location asia-northeast1
Here is a view of my project and the public project censys-io:
Please advise.
EDIT 2:
The query I typed, based on the Censys tutorial, is:
#standardsql
SELECT domain, alexa_rank
FROM domain_public.current
WHERE p443.https.tls.cipher_suite = 'some_cipher_suite_goes_here';
When I changed the FROM clause to:
FROM `censys-io.domain_public.current`
And the last line to:
WHERE p443.https.tls.cipher_suite.name = 'some_cipher_suite_goes_here';
It worked. Should I conclude that I should always use the full project_name.dataset.table form (if I'm using the correct terms) and report the typo to Censys? Or is this a special case for this project for some reason?
BigQuery can't find your data
How to fix it
Make sure your FROM location contains 3 parts
A project (e.g. bigquery-public-data)
A dataset (e.g. hacker_news)
A table (e.g. stories)
Like so
`bigquery-public-data.hacker_news.stories`
*note the backticks
Examples
Wrong
SELECT *
FROM `stories`
Wrong
SELECT *
FROM `hacker_news.stories`
Correct
SELECT *
FROM `bigquery-public-data.hacker_news.stories`
In the Web UI, click the Show Options button and then select your location under "Processing Location".
Specify the location in which the query will execute. Queries that run in a specific location may only reference data in that location. For data in US/EU, you may choose Unspecified to run the query in the location where the data resides. For data in other locations, you must specify the query location explicitly.
Update
As stated above, queries that run in a specific location may only reference data in that location.
Assuming that the censys-io.domain_public dataset has its data in the US, you need to specify US as the Processing Location.
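If you are querying from code instead of the Web UI, the same two points apply: fully qualify the table and pin the job to the dataset's location. A small sketch with the google-cloud-bigquery Python client (the US location and the column names are assumptions based on the question):

from google.cloud import bigquery

client = bigquery.Client(project="my-project-name")

query = """
    SELECT domain, alexa_rank
    FROM `censys-io.domain_public.current`
    LIMIT 10
"""

# The job must run in the same location as the dataset, so set it explicitly.
job = client.query(query, location="US")
for row in job.result():
    print(row.domain, row.alexa_rank)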
The problem turned out to be a wrong table name in the FROM clause.
The right FROM clause should be:
FROM `censys-io.domain_public.current`
While I was typing:
FROM domain_public.current
So the project name is required in the FROM clause, and the backticks are required because of the dash in the project name.
Make sure your FROM location contains 3 parts, as @stevec mentioned:
A project (e.g. bigquery-public-data)
A dataset (e.g. hacker_news)
A table (e.g. stories)
But in my case, I was using legacy SQL within the Google Apps Script editor, so you need to set useLegacySql to false, for example:
var projectId = 'xxxxxxx';
var request = {
  query: 'select * from project.database.table',
  useLegacySql: false
};
var queryResults = BigQuery.Jobs.query(request, projectId);
Check the exact case (upper or lower) and spelling of the table or view name.
Copy it from the table definition and your problem will be solved.
I was using FPL009_Year_Categorization instead of FPL009_Year_categorization (C instead of c) and getting the error "not found in location asia-south1".
I copied it with the exact case and the problem was resolved.
In your BigQuery console, go to the Explorer on the left pane, click the small three dots next to the dataset, and then select the query option from the list. This step confirms you have chosen the correct project and dataset. Then you can edit the query in the query pane on the right.
Maybe the dataset location was changed in the Create dataset options; it should be US or the default location.

Pymongo: insert_many() gives "TypeError: document must be instance of dict" for list of dicts

I haven't been able to find any relevant solutions to my problem when googling, so I thought I'd try here.
I have a program where I parse through folders for a certain kind of trace file and then save these in a MongoDB database, like so:
posts = function(source_path)

client = pymongo.MongoClient()
db = client.database
collection = db.collection

insert = collection.insert_many(posts)

def function(...):
    ....
    post = parse(trace)
    posts.append(post)
    return posts

def parse(...):
    ....
    post = {'Thing1': thing,
            'Thing2': other_thing,
            etc}
    return post
However, when I get to "insert = collection.insert_many(posts)", it returns an error:
TypeError: document must be an instance of dict, bson.son.SON, bson.raw_bson.RawBSONDocument, or a type that inherits from collections.MutableMapping
According to the debugger, "posts" is a list of about 1000 dicts, which should be valid input according to all of my research. If I construct a smaller list of dicts and call insert_many(), it works flawlessly.
Does anyone know what the issue may be?
Some more debugging revealed the issue: the "parse" function sometimes returned None rather than a dict. Easily fixed.
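A small defensive sketch for the same situation, assuming posts is the list built above: drop anything that is not a dict before calling insert_many(), so one bad parse result does not abort the whole batch.

# Keep only the entries that are actual dicts; parse() sometimes returns None.
valid_posts = [post for post in posts if isinstance(post, dict)]
if valid_posts:
    collection.insert_many(valid_posts)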

Cannot add new Workitems using TFS API

Hi, I am trying to add new work items to the TFS repository using the API, but when I validate the work item before it is saved, it returns an error. I previously got exceptions regarding the field definitions for a bug, namely Symptom, Steps to Reproduce and Triage (error code TF26027). The code snippet is shown below. Can anyone tell me what's wrong here?
switch (workItemType)
{
    case "Bug":
    {
        workItem.Title = values["Title"].ToString();
        workItem.State = values["State"].ToString();
        workItem.Reason = values["Reason"].ToString();
        workItem.Fields["Priority"].Value = values["Priority"].ToString();
        workItem.Fields["Severity"].Value = values["Severity"].ToString();
        //workItem.Fields["Triage"].Value = values["Triage"].ToString();
        workItem.Fields["Assigned To"].Value = values["Assigned To"].ToString();
        //workItem.Fields["Symptom"].Value = values["Symptom"].ToString();
        //workItem.Fields["Steps to Reproduce"].Value = values["Steps to Reproduce"].ToString();

        // Validate the work item fields.
        ArrayList result = workItem.Validate();

        // If any invalid fields are returned, report an error.
        if (result.Count > 0)
            MessageBox.Show("An error occurred while adding the Bug to the repository.");
        else
            workItem.Save();
    }
    break;
To find the available field definitions, you can iterate over the FieldDefinitions collection. The Name and ReferenceName properties are the values you can use to index into the collection.
the Field "Symptom" cannot be empty
Just reading the error message, it looks like you are defining a field called "somefield" in your work item. I'm thinking that you have some old code hanging around elsewhere, maybe above the code snippet you posted, where you are assigning a value to workItem.Fields["somefield"].
Old question, but hopefully this helps someone. The field name is "Repro Steps":
.Fields["Repro Steps"].Value