Search for issues that start with xxx in Jira - API

This is my first time working with Jira and its API. My company wants me to fetch all "ASAPSD" issues, but I don't understand how to do that. The core problem is that I do not understand exactly how Jira works and how issues are "built" up.
The issue starts with "ASAPSD" followed by some random characters and numbers. For example "ASAPSD-334". How can I, with a GET request, get all issues that start with ASAPSD?

Basic information about Jira and projects
The first part (the prefix) is the Project Key, identifying the project/collection where all related issues are stored (in this case, ASAPSD may stand for ASAP Service Desk :-). A Jira instance usually contains several projects; other projects are intended to track different activities.
Searching for the project issues
You can search for any issues using the search function (available also via REST API).
First, log in to Jira and try to search for the issues manually yourself in the Issue Navigator (via the Issues top menu bar). There you'll find that you can search for all the issues via Basic search (Project is ASAPSD) or Advanced search (project = ASAPSD). The advanced search uses JQL (Jira Query Language).
You can then use this JQL in your REST API search method:
https://docs.atlassian.com/software/jira/docs/api/REST/latest/#api/2/search-search
Example
GET https://jira.yourdomain.com/rest/api/2/search?jql=project%3DASAPSD
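For illustration, a minimal sketch in Python (using the requests library), assuming a Server/Data Center instance at jira.yourdomain.com with basic auth - the hostname, username and password are placeholders:

import requests

# Hypothetical host and credentials - replace with your own instance and account.
resp = requests.get(
    "https://jira.yourdomain.com/rest/api/2/search",
    params={"jql": "project = ASAPSD"},
    auth=("your-username", "your-password"),
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])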
Notes
The output lists only a limited number of issues (usually 50) - to get more issues, you need to increase the limit (maxResults param) or paginate (startAt param) over the next results (see the sketch after these notes).
Using expand and fields params you can alter the output to get more/less information.
There are two types of Jira instances - on-premise Server/Data Center and Cloud. The REST API and its usage might differ slightly between them.
Alternatively, you can get a CSV export. When you search for issues in the Issue Navigator, there's an option to export the results to CSV. Save that URL and you can request it from your script via GET.
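To walk over all the results mentioned in the first note, a rough pagination sketch (same placeholder host and credentials as above), stepping through the issues 50 at a time with startAt:

import requests

# Placeholders - point these at your own instance and account.
base = "https://jira.yourdomain.com/rest/api/2/search"
auth = ("your-username", "your-password")

start_at, page_size = 0, 50
all_issues = []
while True:
    resp = requests.get(base, params={
        "jql": "project = ASAPSD",
        "startAt": start_at,
        "maxResults": page_size,
        "fields": "summary,status",  # request only the fields you need
    }, auth=auth)
    resp.raise_for_status()
    data = resp.json()
    all_issues.extend(data["issues"])
    start_at += page_size
    if start_at >= data["total"]:
        break

print(len(all_issues), "issues fetched")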

Related

Restrict access to a partially implemented API in production

We need to develop an API which takes a CSV file as input and persists its contents in a database. Using vertical slicing, we have split the requirement into 2 stories:
The first story has a partial implementation with no data validation.
The second story completes the use case by adding all validations.
Sprint 1 has the first story and sprint 2 has the second. After implementing the first story in sprint 1, we want to release it to production. However, we don't want to make the API accessible to the public, which would be a big security risk, as invalid data could be inserted into the database (story 1 ignores validation).
What is the best strategy to release story1 at the end of sprint1 while addressing such security concerns?
We tried disabling access via a toggle flag, such as ConfigCat. However, we don't want to implement something which is not required for the actual implementation.
Is there really such a risk that, within one sprint, someone may start using the API? And if you haven't added it to any documentation, how would they know of its existence?
But let's say it is possible - what about using a feature toggle? While the toggle is off, the endpoint returns null or even an HTTP error code. Then you turn the feature toggle on when you're ready for people to start using the endpoint.
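As a rough illustration of that idea, a minimal sketch in Python/Flask - the endpoint name and the environment-variable toggle are made up for this example, and in practice a managed toggle service such as ConfigCat would be queried instead of the env var:

import os
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical toggle: read once from the environment; a toggle service would be queried here.
CSV_IMPORT_ENABLED = os.environ.get("CSV_IMPORT_ENABLED", "false").lower() == "true"

@app.route("/import-csv", methods=["POST"])
def import_csv():
    if not CSV_IMPORT_ENABLED:
        abort(404)  # behave as if the endpoint does not exist yet
    # Story 1: accept the upload without validation (story 2 will add the validation).
    uploaded = request.files["file"]
    return {"rows_received": sum(1 for _ in uploaded.stream)}, 202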

Has anyone come across issue[]?

issue[] in UiPath.
So I downloaded an extension/package from UiPath GO! named "Jira Integrating Software". This package comes with several APIs to access Jira tickets.
I was working with one of these APIs, called "Search by JQL". JQL (Jira Query Language) comes in handy for advanced searching. The output type of this "Search by JQL" activity is issues[].
Now, when I iterate over this array, it gives me an output of "UiPath.JiraSoftware.Models.Issue".
You should use a For Each activity to iterate over the ResultArray array you got from Search by JQL.
The following is just pseudo code, but it should work like this. Maybe the name of the property is not IssueId - that was not fully stated in the documentation, but you can check it by inspecting the item in debug mode:
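A rough sketch of what the loop could look like (pseudo code only - the IssueId property name is a guess, as noted above; verify the real property names in debug mode):

For Each item In ResultArray            ' TypeArgument: UiPath.JiraSoftware.Models.Issue
    Log Message: item.IssueId           ' or item.Key / item.Id, whichever the model exposes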
You should also have a look at the official Jira UiPath page. That resource contains all of the concepts you will need.

How to create a searchable central repository of code documentation using DocFx

I'm looking to create a central repository for all of our published API documentation using DocFx. I have documentation auto-generated via my build (using TFS) and published through my release (using Octopus) just fine for multiple individual sites. However, I want to pull it all together in one location. The thinking is that through a parent site you could filter content in any of the individual sites without having to drill down into them. Do you have a recommendation on how to do this?
Also, within this same documentation repository I want to provide the capability to search by all of the meta data (project-level documentation) across the hundreds of projects in our portfolio. This will give our BA, DEV and QA teams easier access to what all our systems do. I like the "filtering" capability built into DocFx, but I'm wanting full-text search across all of the meta data. Do you have a recommendation for this functionality as well?
To change the location of the docfx output, edit the docfx.json file and specify the dest value. By default it is "dest": "_site". For more formatting guidance, reference: https://dotnet.github.io/docfx/tutorial/docfx.exe_user_manual.html.
Regarding full-text search, that is possible by simply ensuring the ExtractSearchIndex post-processor is invoked (in order to generate an index.json file of keywords) and that the global _enableSearch value is set to true in the docfx.json file. A snippet from that file would look like:
"postProcessors": [ "ExtractSearchIndex" ],
"globalMetadata": {
"_enableSearch": "true"
}
For your first question:
I think what you expect is something like the .NET API Browser. The source code behind that page is not open to the public, so you need to create the page yourself, by collecting the xrefmap.yml from the multiple sites and extracting the needed data into the page.
For your second question:
DocFX uses lunr.js to scan all the output files and generate an index file called index.json for later use by search. In your case, you want to limit the search scope to only the metadata you defined, which is also not supported by DocFX by default. You can, however, use lunr.js in your central place to search that metadata: create a specific index.json for each project first, and have the central place collect them for the search page.
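As a rough illustration of the "collect them in the central place" idea, a sketch in Python that merges several generated index.json files into one file for a central search page - the project URLs are placeholders, and the exact shape of the index entries should be checked against your own generated index.json:

import json
import urllib.request

# Placeholder locations of each project's generated search index.
project_indexes = [
    "https://docs.example.com/project-a/index.json",
    "https://docs.example.com/project-b/index.json",
]

merged = {}
for url in project_indexes:
    with urllib.request.urlopen(url) as resp:
        index = json.load(resp)
    root = url.rsplit("/", 1)[0]
    for key, entry in index.items():
        # Prefix hrefs with the project root so links keep working from the central site.
        entry["href"] = root + "/" + entry.get("href", key)
        merged[root + "/" + key] = entry

with open("index.json", "w", encoding="utf-8") as f:
    json.dump(merged, f, indent=2)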

Script to download Google web history

How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
feed:https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third-party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but performs the iteration of reading the entire history 1000 records at a time, converting it into a CSV string, and making the data downloadable at the touch of a button.
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer, you use my script, and it appears your history has caused it to break, please feel free to help me fix it and send me any updates to the code.
I tried GeekLad's system; unfortunately, two breaking changes have occurred: #1, the URL has changed (I modified and hosted my own copy), which led to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - Using Chrome, disable ALL security protocols.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally with whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js + jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
Every 60 seconds or so, Google will randomly force you to re-login, so this is not a start-and-walk-away process, BUT it does work, and if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected 1 page of data per second.
On moral grounds I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though apparently not blocked), and I recommend it only to sufficiently motivated individuals who can make it work for themselves. By my estimates it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900 ms or faster.
While this thing is going, DO NOT browse the rest of the web, because Chrome is running with no safeguards in place - most of them exist for a reason.
You can download your search logs directly from Google (in case downloading them using a script is not the primary purpose).
Steps:
1) Log in and go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you can find an icon for settings. There is a second option called "Download"; click on that.
3) Then click on "Create Archive", and Google will mail you the log within minutes.
Maybe, before issuing the request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
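For example, a minimal sketch in Python that sets a browser-like User-Agent on the request - the feed URL is the one from the question, the UA string is just an example, and the request may still fail if Google insists on a logged-in session:

import urllib.request

url = "https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss"
req = urllib.request.Request(url, headers={
    # Pretend to be a regular desktop browser; any common UA string will do.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
})
with urllib.request.urlopen(req) as resp:
    print(resp.read()[:500])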

Normal Google Custom Search

I'm writing an application that analyses search engine results.
With the Google Search API now being deprecated and limited to 1000 queries/day, they are forcing developers to move to the AJAX APIs and to use the Custom Search API to do a Google search.
The thing is, I don't need a Custom Search; I need a general search, not one that is filtered by site - OK, maybe filtered by USA/UK (google.com/google.co.uk).
Does anyone know how to just do a regular Google search using the AJAX APIs? Is the Custom Search the right thing to be using?
I don't want to hit the 1000/day limit using the old service but this is exactly what I need.
I did find: How do I create a CSE that searches the entire web?
http://www.google.com/support/customsearch/bin/answer.py?hl=en&answer=1210656
But by the sounds of it this will distort the search results.
Thank you.
OK. Here's how I think it is done.
1) Create a Custom Search Engine.
2) Add a site such as *.com. When this is created, go to the Advanced tab and download the context XML.
3) Remove the Background Label associated with the site.
4) Upload the XML to replace the previous context.
This seems to work just fine and is returning the same values as far as I can see.
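If you end up querying that engine programmatically, a minimal sketch in Python against the Custom Search JSON API - the API key and cx (engine ID) are placeholders for your own values, and the daily quota still applies:

import requests

params = {
    "key": "YOUR_API_KEY",    # placeholder: API key from the Google API console
    "cx": "YOUR_ENGINE_ID",   # placeholder: the ID of the engine configured above
    "q": "stack overflow",
    "gl": "uk",               # optional country bias, e.g. UK
}
resp = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["title"], item["link"])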
Yes, you are right *in theory, and this should let you get 100 results a day on the fly. Just this Saturday though, Google confirmed how here -
(* so far though, we can't get it working...)