How to use a parameter in Power BI to query data using an API?

I am a newbie with Power BI and I'm struggling with it.
I have a table that I load from Azure DevOps. One of the columns is the work item id, like this:
I want to get the data again from Azure DevOps using the API.
I created a Blank Query and, in the Advanced Editor, entered this script:
(param) =>
let
    Source = Json.Document(Web.Contents("https://dev.azure.com/TeamName/ProjectName/_apis/wit/workitems/" & Text.From(param) & "?api-version=7.0")),
    #"Converted to Table" = Record.ToTable(Source)
in
    #"Converted to Table"
and the result looks like this:
My expectation is to use the work item id from that table as the parameter in this script:
(param) =>
let
    Source = Json.Document(Web.Contents("https://dev.azure.com/TeamName/ProjectName/_apis/wit/workitems/" & Text.From(param) & "?api-version=7.0")),
    #"Converted to Table" = Record.ToTable(Source)
in
    #"Converted to Table"
so that I don't need to input the parameter manually, because the work item ids in that table change every day.
I duplicated the backlog item-221025 table, removed every column except the work item id, and then added a column that calls Query1, like this:
= Table.AddColumn(#"Removed Other Columns2", "Query1", each Query1([Work Item Id]))
I expanded the table and the result looks like this:
But my expectation is to get the data out of the Record, specifically the value of System.Reason. I want to create another column with the System.Reason value, lined up with the work item id.
Can anyone help me with this? I really appreciate it. Thank you so much.
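For what it's worth, one way to drill straight into that value in Power Query would be a sketch like the following. It assumes the standard Azure DevOps work item JSON shape, where System.Reason sits inside a nested fields record; only the URL pattern is taken from the question, the rest is an assumption:

```m
// A sketch, not tested against a live organization: assumes the work item
// JSON has a "fields" record containing "System.Reason".
(param) =>
let
    Source = Json.Document(Web.Contents("https://dev.azure.com/TeamName/ProjectName/_apis/wit/workitems/" & Text.From(param) & "?api-version=7.0")),
    // Drill into the nested record instead of flattening everything
    Reason = Source[fields][#"System.Reason"]
in
    Reason
```

Calling a function like this in the added column would give a plain text column of System.Reason values next to each work item id, instead of a Record that has to be expanded.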

Related

How to create scheduled query that writes to new table everyday?

I have a query that I want to run every day, but the results need to be populated to a new table each day due to the amount of data each table will contain (~10B rows/day).
Essentially, I want to write to a new table like my_database_name.my_table_name.my_results_{today's_date} each day.
I see the feature that allows creating a "Scheduled Query", but I don't see any option to write to a new table each day.
Is this possible to do in BigQuery? How can I achieve this?
You can use the run_date parameter in the destination table name to achieve this:
my_database_name.my_table_name.my_results_{run_date}
More detail can be found here:
https://cloud.google.com/bigquery/docs/scheduling-queries#available_parameters_2
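Concretely, the destination table setting of the scheduled query would use the template above. As a sketch (dataset and table names are the ones from the question), per the linked docs {run_date} resolves to the run's date in YYYYMMDD form:

```
# Destination table template for the scheduled query:
my_database_name.my_table_name.my_results_{run_date}

# e.g. the run on 2024-05-01 writes to:
my_database_name.my_table_name.my_results_20240501
```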

Querying ActiveRecord Syntax

I'm working on a task where I have to define a function that attaches something to a column in my DB. However, I'm quite new, and despite looking at the ActiveRecord documentation, I can't seem to grab the column I'm looking for.
For example, I have a table with many columns, including 'State' and 'Phase'. I was able to grab state with the following code:
CaseFileStatus.where(state: case_file.state).first
However, I can't manage to grab the 'Phase' column, as shown below.
CaseFileStatus.where(state: "case_file.phase")
CaseFileStatus Load (2.5ms) SELECT "case_file_statuses".* FROM "case_file_statuses" WHERE "case_file_statuses"."state" = $1 [["state", "case_file.phase"]]
=> []
I'm sure it's a super basic error, but how should I be structuring this query?
If you need to query records based on the phase column, you should do something like this:
CaseFileStatus.where(phase: case_file.phase)
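The reason the quoted version returned an empty array: in Ruby, "case_file.phase" inside quotes is just a string literal, so ActiveRecord searched the state column for the literal text case_file.phase. A database-free sketch of the difference (the CaseFile struct here is a stand-in for the real model):

```ruby
# Sketch of why the quoted version returns no rows: the string
# "case_file.phase" is sent to SQL as a literal value, not evaluated as Ruby.
CaseFile = Struct.new(:state, :phase)
case_file = CaseFile.new("open", "discovery")

# Quoted: the condition hash carries the literal text "case_file.phase".
quoted = { state: "case_file.phase" }

# Unquoted: Ruby calls case_file.phase first, so the hash carries "discovery".
unquoted = { phase: case_file.phase }

puts quoted[:state]    # the literal string, matched against the state column
puts unquoted[:phase]  # the record's actual phase value
```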

SQL Server: Remove substrings from field data by iterating through a table of city names

I have two databases, Database A and Database B.
Database A contains some data which needs to be placed in a table in Database B. However, before that can happen, some of that data must be “cleaned up” in the following way:
The table in Database A which contains the data to be placed in Database B has a field called “Desc.” Every now and then the users of the system put city names in with the data they enter into the “Desc” field. For example: a user may type in “Move furniture to new cubicle. New York. Add electric.”
Before that data can be imported into Database B the word “New York” needs to be removed from that data so that it only reads “Move furniture to new cubicle. Add electric.” However—and this is important—the original data in Database A must remain untouched. In other words, Database A’s data will still read “Move furniture to new cubicle. New York. Add electric,” while the data in Database B will read “Move furniture to new cubicle. Add electric.”
Database B contains a table which has a list of the city names which need to be removed from the “Desc” field data from Database A before being placed in Database B.
How do I construct a stored procedure or function which grabs the data from Database A, iterates through the Cities table in Database B, and, when it finds a city name in the "Desc" field, removes it while keeping the rest of the information in that field, producing a recordset which I can then use to populate the appropriate table in Database B?
I have tried several things but still haven't cracked it. Yet I'm sure this is probably fairly easy. Any help is greatly appreciated!
Thanks.
EDIT:
The latest thing I have tried to solve this problem is this:
DECLARE @cityName VARCHAR(50)

WHILE (SELECT COUNT(*) FROM ABCScanSQL.dbo.tblDiscardCitiesList) > 0
BEGIN
    SELECT @cityName = ABCScanSQL.dbo.tblDiscardCitiesList.CityName
    FROM ABCScanSQL.dbo.tblDiscardCitiesList

    SELECT JOB_NO, LTRIM(RTRIM(SUBSTRING(JOB_NO, (LEN(JOB_NO) - 2), 5))) AS LOCATION,
           JOB_DESC, [Date_End], REPLACE(JOB_DESC, @cityName, ' ') AS NoCity
    FROM fmcs_tables.dbo.Jobt
    WHERE JOB_NO LIKE '%loc%'
END
"Job_Desc" is the field which needs to have the city names removed.
This is a data quality issue. You can always make a copy of [description] in Database A and call it [cleaned_desc].
One simple solution is to write a function that does the following:
1 - Reads the data from [tbl_remove_these_words]. These are the phrases you want removed.
2 - Compares the input, @var_description, to the rows in the table.
3 - Upon a match, replaces it with an empty string.
This solution depends upon a cleansing table that you maintain and update.
Run an update query that feeds [description] through a call to [fn_remove_these_words] and sets [cleaned_desc] to the output.
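Steps 1-3 might be sketched as a T-SQL scalar function along these lines (a sketch only: the function and table names echo the post, and a cursor is just one way to loop over the cleansing table):

```sql
-- Sketch: strip every phrase in the cleansing table from the input text.
CREATE FUNCTION dbo.fn_remove_these_words (@var_description VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @word VARCHAR(50);

    DECLARE word_cursor CURSOR FOR
        SELECT CityName FROM ABCScanSQL.dbo.tblDiscardCitiesList;

    OPEN word_cursor;
    FETCH NEXT FROM word_cursor INTO @word;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Remove this phrase wherever it appears in the description.
        SET @var_description = REPLACE(@var_description, @word, '');
        FETCH NEXT FROM word_cursor INTO @word;
    END;

    CLOSE word_cursor;
    DEALLOCATE word_cursor;

    RETURN LTRIM(RTRIM(@var_description));
END;
```

The update step would then be something like UPDATE ... SET cleaned_desc = dbo.fn_remove_these_words([description]), leaving the original [description] untouched.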
Another solution is to look at products like the Melissa Data (DQ) product for SSIS, or Data Quality Services in the SQL Server stack, to give you an application framework for solving the problem.

using a lookup table on a form with Oracle Apex Item

I have an application that uses Oracle Apex 4.2. It has a form (a form and report on a table) that needs to display descriptions for columns on the table. For instance, there is a column on the table called fund which holds a numeric value (1 to 6). A separate table gives a description for each of these six values. Under Edit Page Item, under Source, I chose Source Type -> SQL Query.
I entered this query below:
SELECT DESCRIPTION FROM
"#OWNER#"."BU19ANT",
"#OWNER#"."FUNDCD"
WHERE ANTFUNDCD = CODE
where BU19ANT is the table used for this form,
FUNDCD is the name of the lookup table,
ANTFUNDCD and CODE are numeric fields on the respective tables, and DESCRIPTION is the value that I want to look up and display on the form.
This gives me the correct answer most of the time, but not all the time.
The key to the table (and the field used to link from the report to the form) is the Social Security Number. If I run this same query against the Oracle table, hard-coding the SS Number, I always get the correct answer.
This form has 5 lookups that work this way, and they all have the same problem.
I assumed that I don't need to include the Social Security Number as part of the query, since Apex already knows it.
But I tried to add it anyway and cannot figure out how to code it.
I tried
WHERE ANTSOCIALSECURITYNUMBER (column on the table) = P2_SOCIALSECURITYNUMBER (the item on this page)
but that gave this error:
ORA-00904: "P2_SOCIALSECURITYNUMBER ": invalid identifier
Is there some other way to code this? Or to say where the SS Number = the current record?
Or am I on the wrong track here?
Try :P2_SOCIALSECURITYNUMBER (for items in session state) or &P2_SOCIALSECURITYNUMBER. (for items on the page).
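Applied to the query from the question, the session-state bind would look something like this (a sketch; the table and column names are the ones given in the post):

```sql
SELECT description
FROM "#OWNER#"."BU19ANT",
     "#OWNER#"."FUNDCD"
WHERE antfundcd = code
  AND antsocialsecuritynumber = :P2_SOCIALSECURITYNUMBER
```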

Duplicate a record and its references in web2py

In my web2py application I have a requirement to duplicate a record and all its references.
For example:
a user has a product (sponserid is the user), and this product has many features stored in other tables (referencing the product id).
My requirement is that if another user copies this product, a new record is generated in the product table with a new product id and a new sponserid, and all the referencing table records are also duplicated with the new product id. Effectively, a duplicate entry is created in all the tables; the only changes are the product id and the sponserid.
The product table's fields will change over time, so I have to write a dynamic query.
I can write code like the below:
product = db(db.tbl_product.id == productid).select(db.tbl_product.ALL).first()
newproduct = db.tbl_product.insert(sponserid=newsponserid)
for field, value in product.iteritems():
    if field != 'sponserid':
        db(db.tbl_product.id == newproduct).update(field=value)
But I cannot refer to a field name like this in the update function.
Also, I would like to know if there is any better logic to achieve this requirement.
I would greatly appreciate any suggestions.
For the specific problem of using the .update() method when the field name is stored in a variable, you can do:
db(db.tbl_product.id==newproduct).update(**{field: value})
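The reason the **{field: value} unpacking is needed can be seen with plain Python keyword arguments, outside web2py entirely: update(field=value) passes the literal keyword name "field", while unpacking a dict passes the name stored in the variable. A minimal, database-free sketch:

```python
# Stand-in for the DAL's .update(); it just echoes the keywords it receives.
def update(**kwargs):
    return kwargs

field = "sponserid"
value = "new-user-42"

# Passing field=value sends the literal keyword "field" -- the wrong key.
print(update(field=value))        # {'field': 'new-user-42'}

# Unpacking a dict sends the name held in the variable.
print(update(**{field: value}))   # {'sponserid': 'new-user-42'}
```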
But an easier approach altogether would be something like this:
product = db(db.tbl_product.id==productid).select(db.tbl_product.ALL).first()
product.update(sponserid=newsponserid)
db.tbl_product.insert(**db.tbl_product._filter_fields(product))
The .update() method applied to the Row object updates only the Row object, not the original record in the db. The ._filter_fields() method of the table takes a record (Row, Storage, or plain dict) and returns a dict including only the fields that belong to the table (it also filters out the id field, which the db will auto-generate).