I am working on creating a report that contains the Defect ID, Defect Name, Creation Date, and current state of reopened defects, meaning all defects that were in the Reopened state at some point during their life cycle. The only way to tell whether a defect has ever been in the Reopened state is from the defect's revision history.
There isn't any report in Rally that currently supports this. If anyone can help us figure out how to create one, or give us a similar example, that would be great.
If you hit the new Lookback API (unreleased when Kyle first answered, now in open preview), you can query directly for snapshots (revisions) where the State was ever set to "Reopened". Alternatively, you can look for any instance where OpenedDate changed by querying for "_PreviousValues.OpenedDate": {$exists: true}.
You can find information on the LBAPI here. There is support for querying it in the App SDK 2.0's SnapshotStore. Note that SDK 2.0p6 (releasing soon) has some improvements.
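As a sketch of what such a Lookback API query might look like when built by hand (the host, endpoint path, and workspace OID below are illustrative assumptions, not values from this question), in Python:

```python
import json
from urllib.parse import urlencode

def build_lbapi_query(workspace_oid):
    """Build a Lookback API request URL for snapshots in which a defect
    was captured in the "Reopened" state. The host, path, and workspace
    OID here are placeholders for illustration."""
    find = {
        "_TypeHierarchy": "Defect",
        "State": "Reopened",  # snapshot taken while State was "Reopened"
    }
    fields = ["ObjectID", "FormattedID", "Name", "CreationDate", "State"]
    params = urlencode({
        "find": json.dumps(find),
        "fields": json.dumps(fields),
    })
    base = ("https://rally1.rallydev.com/analytics/v2.0/service/rally/"
            f"workspace/{workspace_oid}/artifact/snapshot/query.js")
    return f"{base}?{params}"

url = build_lbapi_query(12345)
```

Collapsing duplicates by ObjectID on the client side then gives you one row per defect that was ever reopened, with its current state fetched from WSAPI.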
I would use the Defects by Closer app as a starting point. It performs a similar function by searching through the revision history for who closed a defect. You should be able to modify it slightly to search the revision text for "OPENED DATE changed" rather than "CLOSED DATE added":
for (var j = 0; j < defect.RevisionHistory.Revisions.length; j++) {
    var revision = defect.RevisionHistory.Revisions[j];
    if (revision.Description.search("OPENED DATE changed") !== -1) {
        // Found a reopened defect
    }
}
For reference here is an example revision history entry from a reopened defect:
OPENED DATE changed from [Fri Jan 27 07:50:36 EST 2012] to [Fri Jan 27 07:51:00 EST 2012], STATE changed from [Closed] to [Open], CLOSED DATE removed [Fri Jan 27 07:50:50 EST 2012]
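Once you have matched a revision, you can also pull the transition details out of that text. A minimal sketch in Python (assuming the revision description always follows the bracketed format shown above):

```python
import re

# Example revision-history entry from a reopened defect (copied from above)
entry = ("OPENED DATE changed from [Fri Jan 27 07:50:36 EST 2012] "
         "to [Fri Jan 27 07:51:00 EST 2012], "
         "STATE changed from [Closed] to [Open], "
         "CLOSED DATE removed [Fri Jan 27 07:50:50 EST 2012]")

def is_reopened(description):
    """A revision indicates a reopen when the OPENED DATE changed."""
    return "OPENED DATE changed" in description

def reopen_details(description):
    """Extract the old/new opened dates and the state transition, if present."""
    opened = re.search(r"OPENED DATE changed from \[(.+?)\] to \[(.+?)\]",
                       description)
    state = re.search(r"STATE changed from \[(.+?)\] to \[(.+?)\]",
                      description)
    return {
        "opened_from": opened.group(1) if opened else None,
        "opened_to": opened.group(2) if opened else None,
        "state_from": state.group(1) if state else None,
        "state_to": state.group(2) if state else None,
    }

details = reopen_details(entry)
```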
For more information on writing apps check out the App SDK documentation on Rally's Developer Portal.
NOTE: You can view the source code for the Defects by Closer app here.
When calling GetReportsListAsync from my Xero app for a Tenant, I'm getting the InnerException "Sequence contains more than one matching element".
I used Xero's API Explorer to call the same Xero Accounting API > Reports Endpoint > Get Reports List operation for the same Tenant and can see that there is a double-up of the same GST Calculation (ReportType) Activity Statement (ReportName) for the same period of 1 Jan 2022 to 31 Jan 2022 (ReportDate).
How can there be two of the same report for the same Tenant for the same period? Which one is correct, so that I can take its ReportID and then call GetReportFromId?
Any help greatly appreciated.
I went to GitHub issues to raise a support ticket but thought of asking the question first to avoid noise.
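That exception is the classic symptom of a Single()-style lookup finding two matching elements. Until it is clear which duplicate is canonical, one way to see the problem concretely is to group the returned reports by their identifying fields; any group with more than one entry is exactly what makes such a lookup throw. A sketch in Python (the report dictionaries and field names below mirror the question, not the real Xero SDK types):

```python
from collections import defaultdict

# Hypothetical report summaries mirroring the duplicate seen in the API Explorer
reports = [
    {"ReportID": "a1", "ReportType": "GST Calculation",
     "ReportName": "Activity Statement",
     "ReportDate": "1 Jan 2022 to 31 Jan 2022"},
    {"ReportID": "b2", "ReportType": "GST Calculation",
     "ReportName": "Activity Statement",
     "ReportDate": "1 Jan 2022 to 31 Jan 2022"},
]

def find_duplicates(reports):
    """Group reports by (type, name, period); groups with more than one
    entry are the condition that makes a Single()-style lookup throw."""
    groups = defaultdict(list)
    for r in reports:
        key = (r["ReportType"], r["ReportName"], r["ReportDate"])
        groups[key].append(r)
    return {key: rs for key, rs in groups.items() if len(rs) > 1}

dupes = find_duplicates(reports)
```

In the C# client, switching from Single() to a deterministic pick (e.g. First() after an explicit ordering) avoids the crash, but it does not answer which ReportID Xero considers authoritative.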
This is what the docs say:
Omit the version completely or use "latest" to load the latest one (not recommended for production usage):
/npm/jquery@latest/dist/jquery.min.js
/npm/jquery/dist/jquery.min.js
According to the docs, we can either use latest or omit the version completely to load the latest one. But I'm seeing a difference:
With latest added (URL 1 - U1)
Example: https://cdn.jsdelivr.net/npm/@letscooee/web-sdk@latest/dist/sdk.min.js
It loads the last released version, which is cached for 24 hours. That means if we release v2 and v3 within 24 hours, the above URL will still serve v1.
The caching period is 1 week.
Without latest (URL 2 - U2)
Example: https://cdn.jsdelivr.net/npm/@letscooee/web-sdk/dist/sdk.min.js
When we omit latest completely, this loads the latest release immediately (i.e. v3), and the caching period is also 1 week.
I have requested the purge API as per their docs, but I believe this behaviour does not align with their docs.
I tried to Google the cause and read their docs three times. Am I missing something?
Edit 1
After reading Martin's answer, I did the following-
| Step Taken | Time | U1 | U2 |
| --- | --- | --- | --- |
| Purge cache | 12:39:00 UTC | Purged | Purged |
| See Age header | 12:40 UTC | 0 | 0 |
| See Date header | 12:40 UTC | Sun, 12 Sep 2021 12:40:25 GMT | Sun, 12 Sep 2021 12:40:31 GMT |
| Headers | 12:41:00 UTC | (screenshot) | (screenshot) |
| Result | 12:41:00 UTC | Points to latest release 0.0.3 | Points to latest release 0.0.3 |
| Publish new npm release 0.0.4 | 12:48:00 UTC | | |
| Refresh both the URLs | 12:49:00 UTC | Shows old release 0.0.3 | Shows latest release 0.0.4 |
The last step shows that I was wrong here. This is working as expected (i.e. U1 showing 0.0.3 only) as per the docs.
The caching time is the same in both cases - 12 hours at the CDN level and 7 days in the browser: cache-control: public, max-age=604800, s-maxage=43200
That doesn't necessarily mean both URLs will always return the same content because both the CDN and your browser calculate the expiration for each URL independently, based on when it was first retrieved, so the CDN may serve different versions for up to 12 hours after the release.
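As a small illustration of that arithmetic, the remaining CDN freshness for a cached response can be computed from the s-maxage directive (the CDN-level lifetime) minus the Age header (a minimal sketch; the hypothetical helper name is mine):

```python
import re

def remaining_cdn_ttl(cache_control, age_seconds):
    """Seconds the CDN may keep serving this cached response: the
    s-maxage lifetime minus how old the cached copy already is (Age)."""
    m = re.search(r"s-maxage=(\d+)", cache_control)
    s_maxage = int(m.group(1)) if m else 0
    return max(s_maxage - age_seconds, 0)

# The cache-control value quoted above
header = "public, max-age=604800, s-maxage=43200"

# Right after a purge, Age is 0, so a full 12 hours (43200 s) remain
ttl = remaining_cdn_ttl(header, 0)
```

Because each URL's Age counts from when that URL was first cached at that edge, two URLs resolving to the same package can sit at different points in this window, which is why one can serve 0.0.3 while the other already serves 0.0.4.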
Seems to me both links point to the same SDK URL.
The usual way to work with CDNs is to pin the version of the SDK, for example:
<script src="https://unpkg.com/three@0.126.0/examples/js/loaders/GLTFLoader.js"></script>
or to use a URL like the one below, which will always point to the latest version of the SDK:
<script src="https://cdn.rawgit.com/mrdoob/three.js/master/examples/js/loaders/GLTFLoader.js"></script>
I am using our enterprise's Splunk forwarder, which seems to be logging events in Splunk like this, making the logs a bit difficult to read:
{"log":"[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO rdt.damien.services.CaseServiceImpl CaseServiceImpl :: showCase :: Case Created \n","stream":"stdout","time":"2021-01-19T15:30:57.24005568Z"}
However, there are different orgs in our sibling enterprise whose Splunk logs look like the example below, which is far more readable. (There is no technical relation between us and them, so we can't leverage their tech support to triage this.)
[http-nio-8443-exec-7] 15 Jan 2021 21:08:49,511+0000 INFO DaoOImpl [{applicationSystemCode=dao-app, userId=ANONYMOUS, webAnalyticsCorrelationId=|}]: This is a sample log
Please note the difference in logs (mine vs. theirs):
{"log":"[https-jsse-nio-8443-exec-5]..
vs
[http-nio-8443-exec-7]...
Our enterprise team is struggling to determine what causes this. I checked my app.log, which looks fine (logged using Log4j) and doesn't contain the aforementioned {"log": ...} wrapper:
[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO
rdt.damien.services.CaseServiceImpl CaseServiceImpl:: showCase :: Case
Created
Could someone guide me as to where the problem or configuration lies that causes the Splunk forwarder to send logs to Splunk in the {"log": ...} format? I thought it had something to do with JSON vs. RAW sourcetypes, which I also don't fully understand; if that is the cause, what configs drive it?
Over the course of the investigation, I found that it is not Splunk doing this but rather the Docker container. Docker defaults to the json-file logging driver, which writes output under the /var/lib/docker/containers folder in files with a *-json.log suffix containing the logs in the {"log": ...} format.
I now need to figure out how to configure the Docker logging driver to write in a non-JSON format.
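To confirm the diagnosis, you can strip the json-file envelope from a raw log line and recover the original application event; the common fix is then to switch the driver (e.g. "log-driver": "local" or "journald" in /etc/docker/daemon.json) or to have the forwarder extract the log field. A minimal sketch of the unwrapping, using the example line from this question:

```python
import json

# A raw line as written by Docker's default json-file logging driver
# under /var/lib/docker/containers/ (copied from the question above)
raw = ('{"log":"[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC '
       'INFO rdt.damien.services.CaseServiceImpl CaseServiceImpl :: showCase :: '
       'Case Created \\n","stream":"stdout","time":"2021-01-19T15:30:57.24005568Z"}')

def unwrap(line):
    """Strip the json-file envelope and return the original app log line."""
    return json.loads(line)["log"].rstrip("\n")

plain = unwrap(raw)
```

If changing the driver is not an option, Splunk-side props/transforms that extract the log field achieve the same readable result without touching the container configuration.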
When using the Omniture Data Warehouse API Explorer ( https://developer.omniture.com/en_US/get-started/api-explorer#DataWarehouse.Request ), the following request produces a "Date_Granularity is invalid" response. Does anyone have experience with this? The API documentation ( https://developer.omniture.com/en_US/documentation/data-warehouse/pdf ) states that the following values are acceptable: "none, hour, day, week, month, quarter, year."
{
    "Breakdown_List":[
        "evar14",
        "ip",
        "evar64",
        "evar65",
        "prop63",
        "evar6",
        "evar16"
    ],
    "Contact_Name":"[hidden]",
    "Contact_Phone":"[hidden]",
    "Date_From":"12/01/11",
    "Date_To":"12/14/11",
    "Date_Type":"range",
    "Email_Subject":"[hidden]",
    "Email_To":"[hidden]",
    "FTP_Dir":"/",
    "FTP_Host":"[hidden]",
    "FTP_Password":"[hidden]",
    "FTP_Port":"21",
    "FTP_UserName":"[hidden]",
    "File_Name":"test-report",
    "Metric_List":[ ],
    "Report_Name":"test-report",
    "rsid":"[hidden]",
    "Date_Granularity":"hour"
}
Response:
{
    "errors":[
        "Date_Granularity is invalid."
    ]
}
Old question, just noticing it now.
Data Warehouse did not support the Hour granularity correctly until January 2013 (the error you saw was a symptom of this). It was then corrected for date ranges of less than 14 days. In the July 2013 maintenance release of v15, the 14-day limit should be gone, but I have not verified that myself.
As always, the more data you request, the longer the DW processing will take. So I recommend keeping ranges to a maximum of a month and uncompressed file sizes to under 1 GB, though I hear 2 GB should now be supported.
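For a longer period, that advice amounts to splitting one request into several month-sized ones. A minimal sketch of such chunking (the helper name and 31-day limit are illustrative choices, not part of the DW API):

```python
from datetime import date, timedelta

def month_chunks(start, end, max_days=31):
    """Split the inclusive range [start, end] into consecutive sub-ranges
    of at most max_days each, so every DW request stays small."""
    chunks = []
    cur = start
    while cur <= end:
        stop = min(cur + timedelta(days=max_days - 1), end)
        chunks.append((cur, stop))
        cur = stop + timedelta(days=1)
    return chunks

# e.g. a two-and-a-half-month range becomes three requests
chunks = month_chunks(date(2011, 12, 1), date(2012, 2, 14))
```

Each (start, stop) pair then becomes one request's Date_From/Date_To, and the resulting files can be concatenated downstream.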
If you still have issues please let us know.
Thanks C.
I have a friend working in a hospital, and he asked for an issue tracking system.
Currently they are using only email. I thought of Jira or Trac, but they use
"programming terms" like "bug" or "patch".
I don't want to spend time customizing; do you know any better solution or software?
I used Trac to implement issue tracking for managing the operations of a building. It is quite configurable, so I was able to conceal the software-bug-oriented wording without a lot of effort, through the admin interface.
I applied a trivial patch to Trac so that it displays boolean values as Y and N rather than the "computerese" 1 and 0, and also so that the false value is displayed as a blank. (This is better in a columnar report with boolean columns, where you just want to clearly see where the Y values are; they are harder to spot in a grid of Y's and N's.)
Here it is below. Everything else, I did easily through the admin interface.
Index: pyshared/trac/ticket/web_ui.py
===================================================================
--- pyshared.orig/trac/ticket/web_ui.py 2011-09-16 11:59:40.000000000 -0700
+++ pyshared/trac/ticket/web_ui.py 2011-09-16 12:11:31.000000000 -0700
@@ -1120,7 +1120,7 @@
elif type_ == 'checkbox':
value = ticket.values.get(name)
if value in ('1', '0'):
- field['rendered'] = value == '1' and _('yes') or _('no')
+ field['rendered'] = value == '1' and 'yes' or ''
# ensure sane defaults
field.setdefault('optional', False)
Index: pyshared/trac/ticket/query.py
===================================================================
--- pyshared.orig/trac/ticket/query.py 2011-09-16 14:36:51.000000000 -0700
+++ pyshared/trac/ticket/query.py 2011-09-16 14:37:10.000000000 -0700
@@ -294,9 +294,9 @@
val = datetime.fromtimestamp(int(val or 0), utc)
elif field and field['type'] == 'checkbox':
try:
- val = bool(int(val))
+ val = val == '1' and 'Y' or ''
except TypeError, ValueError:
- val = False
+ val = ''
result[name] = val
results.append(result)
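The two hunks above boil down to one rendering rule. As a standalone sketch of the same logic (modern Python, with a hypothetical helper name; the actual patch lives inside Trac's ticket rendering code):

```python
def render_checkbox(val, yes="Y", no=""):
    """Render a Trac checkbox field: show a marker only for true values,
    leave false or missing values blank so the Y's stand out in a grid."""
    try:
        return yes if int(val) else no
    except (TypeError, ValueError):
        # None or a non-numeric string counts as false, rendered blank
        return no

# One report column of raw checkbox values, as stored by Trac ('1'/'0')
row = [render_checkbox(v) for v in ("1", "0", None, "1")]
```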
I know you said that you didn't want to spend time customizing, but I am afraid you are just out of luck there. I doubt that any issue tracking system will give you exactly what you want without customizing (or without exorbitant fees). So I will still recommend JIRA.
JIRA is incredibly customizable. We use it in our organization for tracking issues of many types, from software issue tracking to vehicle reservations, building maintenance work orders, purchase requests, and more. We also have plans to customize it for our students (I work in a college) so they can request assistance with registration, submit feedback, and more.
JIRA is incredibly robust and, once you get the hang of it, not terrible to configure. I won't lie. Configuring JIRA, at first, is a chore and can be difficult to get a handle on. But there is a great book from O'Reilly called Jira Administration that helped me understand it all much better. And it is quite a small book (187 pages or so), so it is not filled with a bunch of fluff. It is just great and useful information.
We use JIRA Dashboards, Issue type customizations, notification schemes, permission schemes, issue type security, custom workflows, custom screens and forms, plugins, the Web Service APIs, and more. It really is a fantastic system.