Generate dynamic hstore key calls in Prawn - ruby-on-rails-3

I have an hstore column that I'm using to build a table in Prawn (PDF builder). The data will consist of records for a given month. Since it is hstore, the keys used will likely change from day to day, so this needs to be dynamic.
I need to determine:
1. What unique keys are used that month
I created a helper to find the unique keys that were used in the month. These will be used as column headers.
keys(@users_logs)
# this returns an array like - ["XC", "PIC", "Mountain"]
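The helper itself isn't shown in the post; a minimal sketch of one way to write it, assuming each log record exposes its hstore column as properties, might be:

def keys(logs)
  # Collect every hstore key used across the logs, de-duplicated
  logs.flat_map { |log| log.properties.keys }.uniq
end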
The table will display a user's dutylog data for the month. For testing, if I explicitly call known hstore keys, the data displays correctly. But since it's hstore, I won't know what the table columns will be in production.
For testing, I call known hstore keys; this creates the Prawn table row data per duty log.
@users_logs.map do |dutylog|
  [ dutylog.properties["XC"],
    dutylog.properties["PIC"],
    dutylog.properties["Mountain"] ]
end
But since this is hstore, I won't know what keys to call in production, so I need to make the above iteration dynamic.
I tried, without success, to iterate over each dutylog entry, then iterate over each unique key and output one dutylog.properties[k] call per key, but this just output the array of keys themselves, because each returns its receiver rather than the values collected in the block. I tried using send() in the block, but that didn't help.
@users_logs.map do |dutylog|
  [ keys(@users_logs).each { |k| dutylog.properties[k] }.join(",") ]
end
Any ideas on how I could make the "dutylog.properties[k]" dynamic?

Took some head scratching, but it turned out to be quite easy: use map instead of each, so the block's return values are actually collected.
This will build the rows for the Prawn table:
def hstore_duty_log_rows
  # Header row of unique keys, followed by one row per duty log;
  # logs that lack a given key render "0" in that column.
  [keys(@users_logs)] +
    @users_logs.map do |dutylog|
      keys(@users_logs).map { |key| dutylog.properties.keys.include?(key) ? "#{dutylog.properties[key]}" : "0" }
    end
end
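From there the rows can go straight into a Prawn table. A rough usage sketch (the output filename and the header: option are my assumptions, not from the original post, and the prawn-table gem must be available):

require "prawn"
require "prawn/table"

Prawn::Document.generate("duty_logs.pdf") do |pdf|
  # The first row returned by hstore_duty_log_rows is the header row
  pdf.table(hstore_duty_log_rows, header: true)
end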

Related

fnUpdate updates wrong row - datatables

I am trying to update a row by passing its index.
http://live.datatables.net/raculubo/1/
But most of the time it replaces the wrong row.
The code is:
$(document).ready(function() {
  var table = $('#example').DataTable();
  var index = table.column(0).data().indexOf("Cedric Kelly");
  console.log("index2", index);
  table.row().data(["ax","by","dd"], index);
});
This is happening because of how you are sorting your data, leading to a difference between the "sort order" index and the "internal DataTables" index.
The table.column(0).data() function will return an array of names, as currently displayed in the table, taking into account sorting. In this scenario, the index of "Cedric Kelly" is therefore 1.
However, the internal unique index value stored by DataTables is actually 3 because that is the order provided to DataTables from your HTML code when the data was loaded for the very first time (where Cedric Kelly is the 4th record listed - so the index is 3).
This initial loading happens before data is sorted, and it is during this step that data indexes are assigned. Once assigned, they never change (unless you delete data).
Your data update function uses the value of 1 - thus updating the wrong row.
The fix for this is to tell DataTables to use the original loading order in the table.column(0).data() function:
var index = table.column(0, {order:'index'} ).data().indexOf("Cedric Kelly");
That directive {order:'index'} causes DataTables to use the original loading order. Now, the correct record will be updated because this index will now return 3 instead of 1.
You can see more details about this "selector modifier" syntax in the DataTables documentation.
Bear in mind that the correct syntax for updating a row is actually this:
table.row( index ).data(["ax","by","dd"]);
Finally, bear in mind that if you filter your data, then you are OK, since the default value used is search: 'none' - which means "do not take searching/filtering into account" when selecting the column data.

PeopleSoft CreateRowset with related display record

According to PeopleBooks, the CreateRowset function has the parameters {FIELD.fieldname, RECORD.recname}, which are used to specify the related display record.
I had tried to use it like the following (just for example):
&rs1 = CreateRowset(Record.User, Field.UserId, Record.UserName);
&rs1.Fill();
For &k = 1 To &rs1.ActiveRowCount
   MessageBox(0, "", 999999, 99999, &rs1(&k).UserName.Name.Value);
End-for;
(Record.User contains only UserId(key), Password.
Record.UserName contains UserId(key), Name.)
I cannot get the Value of UserName.Name, do I misunderstand the usage of this parameter?
Fill is the problem. From the doco:
Note: Fill reads only the primary database record. It does not read
any related records, nor any subordinate rowset records.
Having said that, it is the only way I know to bulk-populate a standalone rowset from the database, so I can't easily see a use for the field in the rowset.
Simplest solution is just to create a view, but that gets old very soon if you have to do it a lot. Alternative is to just loop through the rowset yourself loading the related fields. Something like:
For &k = 1 To &rs1.ActiveRowCount
   &rs1(&k).UserName.UserId.Value = &rs1(&k).User.UserId.Value;
   &rs1(&k).UserName.SelectByKey();
End-for;

How to create a view against a table that has record fields?

We have a weekly backup process which exports our production Google Appengine Datastore onto Google Cloud Storage, and then into Google BigQuery. Each week, we create a new dataset named like YYYY_MM_DD that contains a copy of the production tables on that day. Over time, we have collected many datasets, like 2014_05_10, 2014_05_17, etc. I want to create a data set Latest_Production_Data that contains a view for each of the tables in the most recent YYYY_MM_DD dataset. This will make it easier for downstream reports to write their query once and always retrieve the most recent data.
To do this, I have code that gets the most recent dataset and the names of all the tables that dataset contains from the BigQuery API. Then, for each of these tables, I fire a tables.insert call to create a view that is a SELECT * from the table I am looking to create a reference to.
This fails for tables that contain a RECORD field, from what looks to be a pretty benign column-naming rule.
For example, I have a table named AccountDeletionRequest, for which I issue this API call:
{
  'tableReference': {
    'projectId': 'redacted',
    'tableId': u'AccountDeletionRequest',
    'datasetId': 'Latest_Production_Data'
  },
  'view': {
    'query': u'SELECT * FROM [2014_05_17.AccountDeletionRequest]'
  }
}
This results in the following error:
HttpError: https://www.googleapis.com/bigquery/v2/projects//datasets/Latest_Production_Data/tables?alt=json returned "Invalid field name "__key__.namespace". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.">
When I execute this query in the BigQuery web console, the columns are renamed to translate the . to an _. I kind of expected the same thing to happen when I issued the create view API call.
Is there an easy way I can programmatically create a view for each of the tables in my dataset, regardless of their underlying schema? The problem I'm encountering now is for record columns, but another problem I anticipate is for tables that have repeated fields. Is there some magic alternative to SELECT * that will take care of all these intricacies for me?
Another idea I had was doing a table copy, but I would prefer not to duplicate the data if I can at all avoid it.
Here is the workaround code I wrote to dynamically generate a SELECT statement for each of the tables:
def get_leaf_column_selectors(dataset, table):
    schema = table_service.get(
        projectId=BQ_PROJECT_ID,
        datasetId=dataset,
        tableId=table
    ).execute()['schema']
    return ",\n".join([
        _get_leaf_selectors("", top_field)
        for top_field in schema["fields"]
    ])

def _get_leaf_selectors(prefix, field):
    if prefix:
        format = prefix + ".%s"
    else:
        format = "%s"
    if 'fields' not in field:
        # Base case: a leaf column; alias the dotted name to a legal one
        actual_name = format % field["name"]
        safe_name = actual_name.replace(".", "_")
        return "%s as %s" % (actual_name, safe_name)
    else:
        # Recursive case: a RECORD field, so recurse into its sub-fields
        return ",\n".join([
            _get_leaf_selectors(format % field["name"], sub_field)
            for sub_field in field["fields"]
        ])
We had a bug where you needed to select out the individual fields in the view and use an 'as' to rename the fields to something legal (i.e. names without '.' in them).
The bug is now fixed, so you shouldn't see this issue any more. Please ping this thread or start a new question if you see it again.

Is getting the General ID the same as getting the FormattedID in Rally?

I am trying to get the ID under "General" from a feature item in Rally. This is my query:
body = { "find" => {"_ProjectHierarchy" => projectID, "_TypeHierarchy" => "PortfolioItem/Feature"
},
"fields" => ["FormattedID","Name","State","Release","_ItemHierarchy","_TypeHierarchy","Tags"],
"hydrate" => ["_ItemHierarchy","_TypeHierarchy","Tags"],
"fetch"=>true
}
I am not able to get any value for FormattedID. I tried using "_UnformattedID", but it pulls up an entirely different value than the FormattedID. Any help would be appreciated.
LBAPI does not have a FormattedID field. You are correct to use _UnformattedID: it is the FormattedID without the prefix. For example, this query:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/1111/artifact/snapshot/query.js?find={"_ProjectHierarchy":2222,"_TypeHierarchy":"PortfolioItem/Feature","State":"Developing",_ValidFrom: {$gte: "2013-06-01TZ",$lt: "2013-09-01TZ"}},sort:{_ValidFrom:-1}}&fields=["_UnformattedID","Name","State"]&hydrate=["State"]&compress=true&pagesize:200
returns _UnformattedID values that correspond to FormattedID.
I noticed you are using both fields and fetch. Per LBAPI's documentation, it uses fields rather than fetch. If you want to get all fields, use fields=true.
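Applied to the query body above, that means dropping "fetch" in favor of "fields". A sketch of the corrected body (my rewrite, not from the original answer):

body = {
  "find" => {
    "_ProjectHierarchy" => projectID,
    "_TypeHierarchy" => "PortfolioItem/Feature"
  },
  "fields" => true, # LBAPI's way of requesting all fields
  "hydrate" => ["_ItemHierarchy", "_TypeHierarchy", "Tags"]
}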
As for the missing custom fields, make sure that the custom field value was set within the dates of the query.
Compare these almost identical queries: the first query does not return a custom field, the second query does.
Query #1:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/1111/artifact/snapshot/query.js?find={"_ProjectHierarchy":2222,"_TypeHierarchy":"PortfolioItem/Feature","State":"Developing",_ValidFrom: {$gte: "2013-06-01TZ",$lt: "2013-09-01TZ"}}}&fields=["_UnformattedID","Name","State","c_PiCustomField"]&hydrate=["State","c_PiCustomField"]
Query #2:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/11111/artifact/snapshot/query.js?find={"_ProjectHierarchy":2222,"_TypeHierarchy":"PortfolioItem/Feature","State":"Developing",__At: "current"}&fields=["_UnformattedID","Name","State","c_PiCustomField"]&hydrate=["State","c_PiCustomField"]
The first query uses time period: _ValidFrom: {$gte: "2013-06-01TZ",$lt: "2013-09-01TZ"}
The second query uses __At: "current"
Let's say I just created a new custom field on PortfolioItem. It is not possible to create a custom field on PortfolioItem/Feature, so the field is created on PI, but both queries still use "_TypeHierarchy":"PortfolioItem/Feature".
After I created this custom field, called PiCustomField, I set a value of that field for a specific Feature, F4.
The first query does not have a single snapshot that includes that field, because the field did not exist in the time period we look back over. We can't change the past.
The second query returns this field for F4. It does not return it for other Features because all other Features do not have this field set.

Rails Postgres hstore - how to omit a nil input value from hash

I have a form that displays inputs based on user preferences. I am storing the values as an hstore hash, since I don't know ahead of time exactly what the form inputs for each user will be. The problem I am running into is that even though a user has an input preference, they don't have to enter a value for it each time, which can result in :foo => "".
All the doc examples show you how to find records when you already know the key name. In my case, I don't know the key names; I need to find all the keys in the hash whose value is "".
Then, I should be able to do something like the docs show for each empty value:
person.destroy_key(:data, :foo).destroy_key(:data, :bar).save
avals(hstore) is likely what I need to use... How do you use avals with Rails?
Since hstore is just a Hash in Rails, you simply need to clean the hash before saving it.
...in the model:

before_save :remove_blanks

private

# Drop any hstore pairs whose value is blank before the record is saved
def remove_blanks
  self.hstore = self.hstore.reject { |k, v| v.blank? }
end

Replace 'hstore' with your hstore column name.
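A quick illustration of the effect, assuming a hypothetical Person model whose hstore column is named data (so the callback would assign self.data instead):

person = Person.new(data: { "foo" => "", "bar" => "7" })
person.save
person.data # => { "bar" => "7" } (the blank "foo" was rejected before save)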