Is there a way to access raw data stored in YouTrack?

In YouTrack reports, you can view issues by two fields, using the creation date as the y-axis and any other field as the x-axis. But when you do that, the graph shows the number of issues that are currently in the value shown on the x-axis. For example, if the x-axis is the state, you see the current states of the issues that were created in the date intervals on the y-axis. What I also want is to see the number of issues in each state chronologically: the states (or some other field) of the issues on May 21, 2021, not their current states but their states as of May 21.
I know that YouTrack keeps the state changes, their dates, and much other data like that, because in different reports I can see that YouTrack uses past data, but usually there is no way to download the data behind those reports.
I want to access all of that raw data. My plan is to create some reports that are not available in YouTrack's built-in reports, using R or Python. Is there a way to access that raw data, or a guideline for accessing it?

The way to access raw data in YouTrack is through the REST API. For example, you can request an issue's activity data to retrieve the history of changes applied to the issue. That way you can identify how things have changed chronologically.
I can see that YouTrack uses past data, but usually there is no way to download the data behind those reports.
Report data can be accessed via the API as well. The reports endpoint is api/reports; however, it is not documented, as it may be subject to change, and in that case backward compatibility can't be guaranteed. If you are fine with that, you can still use it. To see the exact request, check the network requests in your browser while a report loads.
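As a starting point for the R/Python plan, here is a minimal Python sketch that pulls one issue's change history from the activities endpoint of the REST API. The base URL, permanent token, and issue ID are placeholders, and the field syntax should be checked against your YouTrack version.

```python
# Minimal sketch: pull the change history (activities) of one issue from the
# YouTrack REST API, so state transitions can be reconstructed over time.
# BASE_URL, TOKEN and ISSUE_ID are placeholders; verify the endpoint and the
# "fields" syntax against your YouTrack version's API documentation.
import requests

BASE_URL = "https://youtrack.example.com"   # your instance
TOKEN = "perm:xxxx"                         # a permanent token
ISSUE_ID = "PROJ-123"

resp = requests.get(
    f"{BASE_URL}/api/issues/{ISSUE_ID}/activities",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    params={
        "categories": "CustomFieldCategory",
        "fields": "id,timestamp,author(login),field(name),added(name),removed(name)",
    },
)
resp.raise_for_status()

# Each activity item records which field changed, when, and the old/new values.
for item in resp.json():
    field = (item.get("field") or {}).get("name")
    print(item["timestamp"], field, item.get("removed"), "->", item.get("added"))
```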

Related

Tableau dynamic filtering and visualization

I have a few questions about Tableau and how dynamic it is:
1. Do changes to the data in a relational DB require a manual refresh, or are there refresh events or something else?
2. Can visualizations update at runtime, e.g. if we have some filters and select some of them, will the visualizations be updated?
3. A use case I have in mind: an API that accepts some parameters from users and passes them to Tableau for querying.
Tableau queries your data source directly. Changes in your source are visible whenever Tableau launches a query.
There is no "in memory" database to be refreshed, but there are high-performance extracts that can function as a buffer between your data source and Tableau, sometimes approaching in-memory performance.
A "new" query to get fresh data is sent when:
Something changes, for example the user filters a view (see question 2 :-) )
The workbook or dashboard is opened
The refresh button is hit
Another periodic refresh is launched
A programmatic event triggers a refresh
It is possible to launch a reload of the extract or a new query based on a trigger. You could create a command-line script on the server that is triggered by your source system to reload (using tabcmd, the Tableau command-line interface, or TabPy, the Tableau Python integration, among other things). Or you could use the API (see the sketch below).
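Purely as an illustration of the API route, here is a hedged Python sketch using the tableauserverclient package (a wrapper over Tableau's REST API). The server URL, credentials, and datasource name are placeholders; the same thing can be done from a shell script with tabcmd.

```python
# Hedged sketch: trigger an extract refresh programmatically via Tableau's
# REST API using tableauserverclient (pip install tableauserverclient).
# Server URL, credentials and the datasource name are placeholders.
import tableauserverclient as TSC

auth = TSC.TableauAuth("my_user", "my_password", site_id="my_site")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    all_datasources, _ = server.datasources.get()
    # Pick the datasource whose extract should be reloaded.
    target = next(ds for ds in all_datasources if ds.name == "SalesExtract")
    job = server.datasources.refresh(target)   # queues an extract refresh job
    print("Refresh job queued:", job.id)
```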
Yes, the visualizations will be updated each time you select a filter.
This behavior is very customizable: you have a lot of control over which filter refreshes which viz, even purely on the front end.

Retrieve Overwritten Saved Query in BigQuery

I accidentally overwrote a saved project query in BQ with a completely unrelated query. I can't find any documentation about retrieving overwritten queries or about any sort of version control. Has anyone done this as well and recovered their query?
Unfortunately, "Saved Query" is a UI-internal feature (see How to access “Saved Queries” programmatically?, and there is a corresponding feature request, REST API for Saved Queries), so we really have no way to manage or control these cases.
In the meantime, you can use the query history (either in the UI, via the respective API, or in Stackdriver) to locate a past run of that query and recreate/re-save it.
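For the API route, a minimal Python sketch with the google-cloud-bigquery client is below. The project ID is a placeholder, and you may need to widen the listing window to reach the run you are after.

```python
# Hedged sketch: walk the BigQuery job history via the API and print the SQL
# of recent query jobs, to locate the text of the overwritten saved query.
# The project ID is a placeholder; the same history is also visible in the UI
# and in Stackdriver.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

for job in client.list_jobs(all_users=True, max_results=200):
    if job.job_type == "query":
        print("----", job.created, job.user_email)
        print(job.query)
```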

Backfill Google Analytics in BigQuery

I'm looking for a workaround for the following issue. I hope someone can help.
I'm unable to backfill data in the ga_sessions_ table in BigQuery through product linking in GA, e.g. partition ga_sessions_20180517 is missing.
This specific view has already been linked before, and the Google documentation says that the historical load is only done once per view, hence the issue (https://support.google.com/analytics/answer/3416092?hl=en).
Is there any way to work around it?
Kind regards,
Martijn
You can use the Google Analytics Reporting API to get the data for that view. This method has a lot of restrictions, e.g. the data is sometimes sampled and only 7 dimensions can be exported in one call, but at least you will be able to fetch your data in a partitioned manner.
Documentation: see the Google Analytics Reporting API reference.
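A minimal Python sketch of such an export is below, assuming a service account with read access to the view. The key path and view ID are placeholders, and the date range matches the missing ga_sessions_20180517 partition; the dimensions and metrics shown are only examples.

```python
# Hedged sketch: pull one missing day from the Google Analytics Reporting API
# (v4) so it can be loaded into BigQuery yourself. The service-account key
# path and view ID are placeholders; note the 7-dimension limit and possible
# sampling mentioned above.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "key.json", scopes=["https://www.googleapis.com/auth/analytics.readonly"]
)
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "123456789",                       # the linked GA view
        "dateRanges": [{"startDate": "2018-05-17", "endDate": "2018-05-17"}],
        "dimensions": [{"name": "ga:date"}, {"name": "ga:sourceMedium"}],
        "metrics": [{"expression": "ga:sessions"}, {"expression": "ga:users"}],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"], row["metrics"][0]["values"])
```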
If you need a lot of dimensions/metrics in hit-level format, scitylana.com has a service that can provide this data historically.
If you have a clientId set in a custom dimension, the data quality is near perfect.
It also works without a clientId set.
You can get all history as available through the API.
You can get 100+ dimensions/metrics in one batch into BQ.

How to access results of Sonar metrics for use with applications like PowerPivot

I'm trying to run a number of applications with known failure rates through Sonar, with hopes of deciding which metrics are most valuable in determining whether a particular application will fail. Ultimately I'll be making some sort of algorithm that looks at the outputs of whatever metrics I'm using and generates a score from 1 to 100. I've got about 21 applications put through Sonar, and the results have been stored in a MySQL database. I originally planned to use PowerPivot to find relationships in the data, but it seems like the formatting of the tables doesn't lend itself well to that. Other questions on Stack Overflow have told me that Sonar's tables are unformatted and that I should instead use the Web Service API to get the information. I'm unfamiliar with the API and was unsuccessful in trying to do what I wanted by looking at Sonar's API documentation.
From an answer to another question:
http://nemo.sonarsource.org/api/timemachine?resource=org.apache.cxf:cxf&format=csv&metrics=ncloc,violations_density,comment_lines_density,public_documented_api_density,duplicated_lines_density,blocker_violations,critical_violations,major_violations,minor_violations
This looks very similar to what I'd like to have, except I'm only looking at each application once (I'm analyzing a sample of all the live applications on a grid), which means Timemachine isn't really what I'm looking for. Would it be possible to generate a similar table, except that instead of the stats for a particular application per date, it showed the statistics for an application and all of its classes, etc.?
If you're not familiar with the WS API, you can also create your own Sonar plugin to achieve whatever you want: it is written in Java, and it will execute on every analysis you run. This way, in the code of this custom plugin, you can do whatever you want: flush the metrics you need to an output file, push them into a third-party system, etc.
Just take a look at how to write a plugin (most probably you will create a Decorator). There are also concrete examples to get you started faster.
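If the web service API turns out to be enough after all, consuming it from Python is straightforward. The sketch below simply fetches and parses the timemachine CSV example quoted in the question; swap in your own Sonar server and whichever endpoint and metric list fit your case.

```python
# Minimal sketch: consume Sonar's web service API from Python. This fetches
# the CSV example from the question and parses it into dictionaries; other
# endpoints of the same web service API can be read the same way. The Nemo
# URL and metric list are taken verbatim from the question.
import csv
import io
import requests

url = ("http://nemo.sonarsource.org/api/timemachine"
       "?resource=org.apache.cxf:cxf&format=csv"
       "&metrics=ncloc,violations_density,comment_lines_density,"
       "public_documented_api_density,duplicated_lines_density,"
       "blocker_violations,critical_violations,major_violations,minor_violations")

resp = requests.get(url)
resp.raise_for_status()

for row in csv.DictReader(io.StringIO(resp.text)):
    print(row)
```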

Building a ColdFusion Application with Version Control

We have a CMS built entirely in house. I'm the new web developer guy with literally 4 weeks of ColdFusion experience. What I want to do is add version control to our dynamic pages, something like what WordPress does. When you modify a page in WordPress, it makes some database entries and keeps a copy of each page when you save it. So if you create a page and modify it 6 times, all in one day, you have 7 different versions to roll back to if necessary. Is there an easy way to do something similar in ColdFusion?
Please note I'm not talking about source control or version control of actual CFM files; all pages are generated dynamically on the backend using SQL.
Sure you can. Just stash the page content in another database table. You can do that with ColdFusion or via a trigger in the database.
One way (there are many) to do this is to add a column called "version" and a column called "live" in the table where you're storing all of your cms pages.
The "live" column is optional but might make things easier for you in some ways when starting out.
The "version" column tells you the revision number of a document in the CMS. By a process of elimination you could say the newest one (highest version number) is the latest and live one. However, you may sometimes need to override this and turn an old page live, which is what the "live" flag is for.
So when you click "edit" on a page, you would take the version that was clicked and copy it into a new, higher version number. It stays a draft until you click publish (at which time it's marked as live).
I hope that helps. This kind of approach should work okay with most schema designs, but I can't say for sure without seeing yours.
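Purely to illustrate the copy-on-edit / publish flow described above, here is a language-agnostic sketch using Python's built-in sqlite3 as a stand-in for the real ColdFusion + SQL setup. The table and column names (pages, page_id, version, live) are invented for the example.

```python
# Illustrative sketch of the copy-on-edit / publish flow: every edit creates a
# new, higher version as a draft; publishing flips the "live" flag.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE pages (
    page_id INTEGER, version INTEGER, live INTEGER, content TEXT,
    PRIMARY KEY (page_id, version))""")

def edit_page(page_id, new_content):
    """Copy the latest version of a page into a new, higher draft version."""
    (latest,) = db.execute(
        "SELECT COALESCE(MAX(version), 0) FROM pages WHERE page_id = ?",
        (page_id,)).fetchone()
    db.execute("INSERT INTO pages VALUES (?, ?, 0, ?)",
               (page_id, latest + 1, new_content))
    return latest + 1

def publish(page_id, version):
    """Mark one version live and clear the flag on every other version."""
    db.execute("UPDATE pages SET live = 0 WHERE page_id = ?", (page_id,))
    db.execute("UPDATE pages SET live = 1 WHERE page_id = ? AND version = ?",
               (page_id, version))

publish(1, edit_page(1, "first draft"))
publish(1, edit_page(1, "second revision"))
print(db.execute(
    "SELECT version, live, content FROM pages WHERE page_id = 1").fetchall())
```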
Jas' solution works well if most of the changes are to one field, for example the full text of a page of content.
However, if you have many fields, and people only tend to change one or two at a time, a new entry into the table for each version can quickly get out of hand, with many almost-identical versions in the history.
In this case, what I like to do is store the changes on a per-field basis in a ChangeHistory table. I include the table name, row ID, field name, previous value, new value, and who made the change and when.
This acts as a complete change history for any field in any table. I'm also able to view changes by record, by user, or by field.
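Again only for illustration, here is the same per-field ChangeHistory idea sketched in Python/sqlite3; the column names mirror the answer (table name, row ID, field name, previous value, new value, who, when) and are otherwise invented.

```python
# Sketch of a per-field change-history table: one row per changed field,
# recording old value, new value, who made the change and when.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ChangeHistory (
    table_name TEXT, row_id INTEGER, field_name TEXT,
    old_value TEXT, new_value TEXT, changed_by TEXT, changed_at TEXT)""")

def log_change(table_name, row_id, field_name, old_value, new_value, user):
    """Record one field-level change; unchanged fields are skipped."""
    if old_value == new_value:
        return
    db.execute("INSERT INTO ChangeHistory VALUES (?, ?, ?, ?, ?, ?, ?)",
               (table_name, row_id, field_name, old_value, new_value,
                user, datetime.now(timezone.utc).isoformat()))

log_change("pages", 42, "title", "Old title", "New title", "editor_user")
print(db.execute("SELECT * FROM ChangeHistory").fetchall())
```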
For realtime page generation from the database, your best bet is separate "live" and "versioned" tables, the reason being that keeping all data, live and versioned, in one table will negatively impact performance. So if page generation relies on a single SELECT query from the live table, you can easily version the result set using ColdFusion's Web Distributed Data eXchange format (WDDX) via the <cfwddx> tag. WDDX is a serialized data format that works particularly well with ColdFusion data (sort of like Python's pickle, albeit without the ability to deal with objects).
The versioned table could look like this:
PageID
Created
Data
where Data is the column storing the WDDX.
Note: you could also use the built-in JSON support for version serialization (serializeJSON and deserializeJSON), but cfwddx tends to be more stable.