Set individual edge widths in Vega edge bundling

I'm trying to use an edge bundle but was hoping to have the edge thickness determined by the edges themselves rather than the source node they originate from.
The example given in the docs is here:
https://vega.github.io/editor/#/examples/vega/edge-bundling
The two data sources are here:
https://github.com/vega/vega-datasets/blob/master/data/flare-dependencies.json
https://github.com/vega/vega-datasets/blob/master/data/flare.json
In this example, the thickness of the edges is determined by line 170 in the script where a value of 1.5 is assigned to 'strokeWidth'.
"encode": {
"enter": {
"interpolate": {"value": "bundle"},
"strokeWidth": {"value": 1.5}
},
I had hoped to use the "size" value in the flare.json input to tailor each width separately. However, the example builds its tree from flare-dependencies.json, and although the tree does pull in this value (visible in VEGA_DEBUG.view.data('dependencies')), I don't know how to access it in order to set the 'strokeWidth' of each edge.
Could you advise how I might do this?

I was provided an answer to this as follows:
First, add a formula transform that appends the size value (pulled in from flare.json) as a field 'strokeWidth', by inserting this at line 90 in the example:
{
  "type": "formula",
  "expr": "datum.size / 10000",
  "as": "strokeWidth"
},
Next, in the marks, set the strokeWidth value for each edge in the edge bundle to the 'strokeWidth' field just created, by writing this at what becomes line 176 after the above change:
"strokeWidth": {"field": "strokeWidth"}
After this, the diagram should render with edges of a thickness defined by that 'size' variable. Note that in this example, I had to scale the 'size' value in the original dataset by 10,000 to set the lines at a reasonable thickness.
In practice, I would scale the data prior to presenting it to Vega.
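An alternative to pre-scaling the data is to let Vega do the scaling: declare a linear scale over the size field and reference it from the mark. A rough sketch, assuming size resolves on the edge data the same way the 'strokeWidth' field reference above does (the domain and range values here are placeholders to tune):
"scales": [
  {
    "name": "widthScale",
    "type": "linear",
    "domain": {"data": "dependencies", "field": "size"},
    "range": [0.5, 5]
  }
],
...
"strokeWidth": {"scale": "widthScale", "field": "size"}
This keeps the raw size values intact and moves the "reasonable thickness" decision into the spec itself.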

Related

How to get the correct centroid of a BigQuery polygon with st_centroid

I'm having some trouble with the ST_CENTROID function in BigQuery. There is a difference between taking the centroid of a GEOGRAPHY column and taking it from the same column round-tripped through WKT. The table is generated using a bq load with a geography column and a newline_delimited_json file containing the polygons as WKT text.
Example:
select st_centroid(polygon) loc, st_centroid(st_geogfromtext(st_astext(polygon))) loc2, polygon from table_with_polygon
Result:
POINT(-174.333247842246 -51.6549479435566)
POINT(5.66675215775447 51.6549479435566)
POLYGON((5.666771 51.654721, 5.666679 51.655027, 5.666597 51.655017, 5.666556 51.655154, 5.666702 51.655171, 5.666742 51.655037, 5.666824 51.655046, 5.666917 51.654737, 5.666771 51.654721))
POINT(-174.367214581541 -51.645030856473)
POINT(5.63278541845948 51.645030856473)
POLYGON((5.632691 51.644997, 5.63269 51.644999, 5.63273 51.645003, 5.632718 51.645049, 5.632843 51.645061, 5.632855 51.645014, 5.632691 51.644997))
POINT(-174.37100400049 -51.6434992715399)
POINT(5.62899599950984 51.6434992715399)
POLYGON((5.629063 51.643523, 5.629084 51.643465, 5.629088 51.643454, 5.628957 51.643436, 5.628915 51.643558, 5.629003 51.64357, 5.629021 51.643518, 5.629063 51.643523))
POINT(-174.293340001044 -51.6424190026157)
POINT(5.70665999895557 51.6424190026157)
POLYGON((5.706608 51.642414, 5.706624 51.642443, 5.706712 51.642424, 5.706696 51.642395, 5.706608 51.642414))
POINT(-174.306209997018 -51.6603530009923)
POINT(5.69379000298176 51.6603530009923)
POLYGON((5.693801 51.660361, 5.693802 51.660346, 5.693779 51.660345, 5.693778 51.66036, 5.693801 51.660361))
POINT(-174.291766437718 -51.6499633041183)
POINT(5.70823356228228 51.6499633041183)
POLYGON((5.708187 51.649858, 5.708091 51.650027, 5.70828 51.650069, 5.708376 51.649899, 5.708187 51.649858))
POINT(-174.369405698681 -51.653769846544)
POINT(5.63059430131924 51.653769846544)
POLYGON((5.630653 51.653531, 5.630462 51.653605, 5.630579 51.653722, 5.630574 51.65373, 5.630566 51.653729, 5.630551 51.653759, 5.630559 51.65376, 5.630555 51.653769, 5.630273 51.653846, 5.630364 51.653974, 5.630787 51.653858, 5.630852 51.653728, 5.630653 51.653531))
...etc
Is this a bug or am I doing something wrong?
Update
Did some further digging, using Michael Entin's answer as a hint. It turns out that bq load with WKT does NOT use the smallest polygon by default, and there is no option in bq load to change this behaviour. The imported JSON is very large (OpenStreetMap data), so there is no easy way to convert it to GeoJSON.
To dig deeper into the actual value stored in the column, I did a
select st_asgeojson(polygon) from ...
which resulted in
{ "type": "Polygon", "coordinates": [ [ [5.598659, 51.65927], [5.598651, 51.659295], [5.598638, 51.659293], [5.598626, 51.65933], [5.598788, 51.659353], [5.598799, 51.659319], [5.598855, 51.659139], [5.598692, 51.65912], [5.598643, 51.659268], [5.598659, 51.65927] ], [ [180, 90], [180, 0], [180, -90], [-180, -90], [-180, 0], [-180, 90], [180, 90] ] ] }
So here the wrong orientation can be seen.
Looks like some or all of these polygons might have gotten inverted, and this produces antipodal centroids: POINT(-174.333247842246 -51.6549479435566) is antipodal to POINT(5.66675215775447 51.6549479435566) etc.
See BigQuery doc for details of what this means:
https://cloud.google.com/bigquery/docs/gis-data#polygon_orientation
There are two possible reasons and ways to resolve this (my bet is case 1):
1. The polygons should be small, but were loaded with incorrect orientation and thus became inverted - they are now complementary to the intended shapes, and larger than a hemisphere. Since you don't pass the oriented parameter to ST_GEOGFROMTEXT, that function fixes them by ignoring the orientation.
The correct solution is usually to load them as GeoJSON (this also avoids another issue with loading WKT strings: geodesic vs planar edges). Or, if all the edges are small and geodesic vs planar does not matter, replace the table geography with ST_GEOGFROMTEXT(ST_ASTEXT(polygon)) - see the sketch below.
2. The polygons should really be large, and were loaded with correct orientation. Then, when you don't pass the oriented parameter to ST_GEOGFROMTEXT, the function breaks them by ignoring the orientation.
If this is the case, you should pass TRUE as the second parameter to ST_GEOGFROMTEXT.
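A rough sketch of both fixes in SQL, assuming the table is my_dataset.table_with_polygon (a hypothetical name for illustration):
-- Case 1: rebuild each geography, ignoring the stored orientation so the
-- inverted polygons snap back to their small intended shapes
UPDATE my_dataset.table_with_polygon
SET polygon = ST_GEOGFROMTEXT(ST_ASTEXT(polygon))
WHERE TRUE;

-- Case 2: re-parse honoring the orientation (second parameter TRUE),
-- keeping genuinely larger-than-hemisphere polygons intact
SELECT ST_GEOGFROMTEXT(ST_ASTEXT(polygon), TRUE) AS polygon
FROM my_dataset.table_with_polygon;
Note that the case 1 rewrite re-interprets edges as geodesics, which the answer above already flags as safe only when the edges are small.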

How to extract this JSON into a table?

I have a SQL column filled with JSON documents, one per row:
[{
  "ID": "TOT",
  "type": "ABS",
  "value": "32.0"
},
{
  "ID": "T1",
  "type": "ABS",
  "value": "9.0"
},
{
  "ID": "T2",
  "type": "ABS",
  "value": "8.0"
},
{
  "ID": "T3",
  "type": "ABS",
  "value": "15.0"
}]
How is it possible to transform it into tabular form? I tried with Redshift's json_extract_path_text and JSON_EXTRACT_ARRAY_ELEMENT_TEXT functions; I also tried json_each and json_each_text (on Postgres), but didn't get what I expected... any suggestions?
desired results should appear like this:
T1   T2   T3    TOT
9.0  8.0  15.0  32.0
I assume what you printed is four rows, each holding one of those objects. In PostgreSQL,
SELECT this_column->'ID'
FROM that_table;
will return a column of JSON strings. Use ->> if you want a text column. More info here: https://www.postgresql.org/docs/current/static/functions-json.html
In case you were using some old Postgresql (before 9.3), this gets harder : )
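If instead each row holds the whole four-element array, as in the sample, here is a sketch for PostgreSQL 9.4+ that unpacks and pivots it, assuming a jsonb column this_column in that_table (use json_array_elements for a plain json column):
-- Unpack each document's array into one row per element, then pivot the
-- four IDs back into columns with filtered aggregates. Add a GROUP BY on
-- the table's key column to get one output row per source row.
SELECT
  max(elem->>'value') FILTER (WHERE elem->>'ID' = 'T1')  AS t1,
  max(elem->>'value') FILTER (WHERE elem->>'ID' = 'T2')  AS t2,
  max(elem->>'value') FILTER (WHERE elem->>'ID' = 'T3')  AS t3,
  max(elem->>'value') FILTER (WHERE elem->>'ID' = 'TOT') AS tot
FROM that_table
CROSS JOIN LATERAL jsonb_array_elements(this_column) AS elem;
This returns exactly the desired shape: T1, T2, T3 and TOT as columns.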
In Amazon Redshift, your best option is to use COPY from JSON format. This will load the JSON directly into a normal table format; you then query it as normal data.
However, I suspect that you will need to slightly modify the format of the file by removing the outer [...] square brackets and also the commas between records, eg:
{
  "ID": "TOT",
  "type": "ABS",
  "value": "32.0"
}
{
  "ID": "T1",
  "type": "ABS",
  "value": "9.0"
}
If, however, your data is already loaded and you cannot re-load the data, you could either extract the data into a new table, or add additional columns to the existing table and use an UPDATE command to extract each field into a new column.
Or, very worst case, you can use one of the JSON Functions to access the information in a JSON field, but this is very inefficient for large requests (eg in a WHERE clause).
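For that worst case, a sketch of the JSON-function route, assuming the raw array sits in a varchar column named js (a hypothetical name) and the elements always arrive in the order TOT, T1, T2, T3 - hard-coded array positions are fragile if that order ever changes:
-- Pull each element out of the array by position, then extract its
-- "value" field as text
SELECT
  json_extract_path_text(json_extract_array_element_text(js, 1), 'value') AS t1,
  json_extract_path_text(json_extract_array_element_text(js, 2), 'value') AS t2,
  json_extract_path_text(json_extract_array_element_text(js, 3), 'value') AS t3,
  json_extract_path_text(json_extract_array_element_text(js, 0), 'value') AS tot
FROM that_table;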

Get display value of structured DataTables cell

I'm following https://datatables.net/reference/option/columns.data ("data": { "_": "phone", "filter": "phone_filter", "display": "phone_display" }) to supply structured values for certain columns of the DataTables table; other columns are just simple:
{"filter": "1964486", "display": "Elite 2022 Tryout ('17-'18)", "_": 1964486}
It works fine: it displays the display value and searches by the filter value. But in certain places I need to programmatically obtain the full data structure (see above) from the cell. However, when I try to access it through the API (let's say we are talking about the first row's 6th column's cell data):
myTable.cell(0, 5).data()
This returns only 1964486 instead of the full structure. How can I access the display value?
render() can do that:
myTable.cell(0, 5).render('display')
https://datatables.net/reference/api/cell().render()
It can also return filter and sort values.
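For instance, a quick sketch against the same table and cell as above (the string arguments are DataTables' orthogonal data types):
var cell = myTable.cell(0, 5);
cell.data();             // 1964486 - the underlying "_" value
cell.render('display');  // "Elite 2022 Tryout ('17-'18)"
cell.render('filter');   // "1964486"
cell.render('sort');     // falls back to the "_" value, since no sort key was defined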

simpleCart(js) setting column widths

I've used this site for assistance many times but never had to ask a question, so I finally joined... anyway, I'm trying to set up the simplecart checkout and having some trouble formatting the cart output. I have my cart set to table, and by default the simpleCart_items class displays the table and its cells only as wide as they need to be to fit the data. I can change this by specifying all cells (columns) as a percentage of the whole table. Unfortunately, with 7 columns each only gets about 14%, and this is way too much for a 1 or 2 digit quantity and nowhere near big enough for all the characters in the item name without wrapping. What I want is a way to define a different width for each column. I have not found a way to do this, even with colgroup, but maybe I'm just not doing it right. Any and all help would be appreciated!
Okay, so this may or may not help. This block is located in the simpleCart.js file. I changed it to something simple.
cartColumns : [
  { view: function(item, column){
      return "<img src='"+item.get('image')+"' width='250px' height='250px'><br /><center><h3><span>"+item.get('name')+"</span></h3><p><a href='javascript:;' class='simpleCart_decrement'><button>-</button></a> "+item.get('quantity')+" <a href='javascript:;' class='simpleCart_increment'><button>+</button></a></p><p>$"+item.get('price')+"</p><p><a href='javascript:;' class='simpleCart_remove remove_icon'><img src='images/icon_x.png' /></a></p></center>";
    }
  }
],
You can change it to your own html
cartColumns : [
  { view: function(item, column){
      return " YOUR HTML HERE ";
    }
  }
],
The returned HTML MUST all be on ONE (1) line (JavaScript inserts a semicolon after a bare return, so a line break there breaks the function) or else it may not work.
Here are some of the values that are used:
image = item.get('image')
name = item.get('name')
quantity = item.get('quantity')
price = item.get('price')
you can look at all the values here
http://simplecartjs.org/documentation/displaying_cart_data
but the point is that you can make a cart and display it how you want, with your own HTML and CSS. I hope this helped.
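If you would rather keep the default multi-column table and just size each column individually, a plain CSS sketch along these lines may also work, assuming the cart renders as a regular <table> inside the .simpleCart_items container (adjust the selectors if the class sits on the table itself):
/* table-layout: fixed makes the browser honor declared widths
   instead of sizing columns to their content */
.simpleCart_items table { width: 100%; table-layout: fixed; }
.simpleCart_items td:nth-child(1) { width: 40%; } /* item name: wide */
.simpleCart_items td:nth-child(2) { width: 6%; }  /* quantity: narrow */
/* ...one rule per column, seven in total, summing to 100% */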

Azure HBase REST - simple get failing for colfam:col

This should be a very simple one (I've been searching for a solution all day - read a thousand and a half posts).
I put a test row in my HBase table in the hbase shell:
put 'iEngine','testrow','SVA:SourceName','Journal of Fun'
I can get the value for a column family using the REST API in DHC Chrome:
https://ienginemaster.azurehdinsight.net/hbaserest/iEngine/testrow/SVA
I can't seem to get it for the specific cell: https://ienginemaster.azurehdinsight.net/hbaserest/iEngine/testrow/SVA:SourceName - I get back a 400 error.
When successfully asking for just the family, I get back:
{
  "Row": [{
    "key": "dGVzdHJvdw==",
    "Cell": [{
      "column": "U1ZBOlNvdXJjZU5hbWU=",
      "timestamp": 1440602453975,
      "$": "Sm91cm5hbCBvZiBGdW4="
    }]
  }]
}
(The key, column, and $ fields are base64-encoded: dGVzdHJvdw== is testrow, U1ZBOlNvdXJjZU5hbWU= is SVA:SourceName, and Sm91cm5hbCBvZiBGdW4= is Journal of Fun.)
I tried replacing the encoded value for SVA:SourceName, and a thousand other things. I'm assuming I'm missing something simple.
Also, the following works:
hbase(main):012:0> get 'iEngine', 'testrow', 'SVA:SourceName'
COLUMN CELL
SVA:SourceName timestamp=1440602453975, value=Journal of Fun
1 row(s) in 0.0120 seconds
hbase(main):013:0>
I opened a case with Microsoft support and received confirmation that it is a bug (IIS and the colon separator not working together). They are working on a fix - they are slightly delayed as they decide on the "best" way to fix it.