React Native FusionCharts time-series chart: plotting multiple lines - react-native

I am using FusionCharts to render a time-series chart and want to plot multiple lines, but it always plots a single line. Is this possible, and if so, how?
[chart screenshot][1]
Here is the data I am using to create the FusionCharts chart:
data = [
  ["01-Feb-11", "Grocery", 8866],
  ["01-Feb-11", "Footwear", 984],
  ["02-Feb-11", "Grocery", 2174],
  ["02-Feb-11", "Footwear", 1109],
  ["03-Feb-11", "Grocery", 2084],
  ["03-Feb-11", "Footwear", 6526],
  ["04-Feb-11", "Grocery", 1503],
  ["04-Feb-11", "Footwear", 1007],
  ["05-Feb-11", "Grocery", 4928]
]
Schema:
[1]: https://i.stack.imgur.com/QwDwv.png
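A minimal sketch of a matching FusionTime schema (the column names here are assumptions inferred from the data above, not taken from the original post):

const schema = [
  { name: "Time", type: "date", format: "%d-%b-%y" }, // matches "01-Feb-11"
  { name: "Type", type: "string" },                   // "Grocery" / "Footwear"
  { name: "Sales Value", type: "number" }
];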

You can use the code below to render a multi-series chart:
yaxis: [
  {
    plot: "Sales Value",
    title: "Sale Value",
    format: {
      prefix: "$"
    }
  }
]
Here is a demo: https://jsfiddle.net/ug0af52o/
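If I remember the FusionTime behaviour correctly, the extra string column ("Type" in the sketch above) is what splits the measure into one plot per distinct value, so the chart draws two lines as long as that column stays in the data table. A rough sketch of the full dataSource, assuming the schema sketched above and the standard FusionTime DataStore API (adapt the component wiring to react-native-fusioncharts; see the fiddle for a working setup):

// Build a FusionTime data table from the flat rows and the schema.
const dataTable = new FusionCharts.DataStore().createDataTable(data, schema);

const dataSource = {
  chart: {},
  caption: { text: "Sales" }, // assumed caption, not from the original post
  yaxis: [
    {
      plot: "Sales Value",  // must match the number column name in the schema
      title: "Sale Value",
      format: { prefix: "$" }
    }
  ],
  data: dataTable
};

// <FusionCharts type="timeseries" width="100%" height="500"
//   dataFormat="json" dataSource={dataSource} />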

Related

booleanPointInPolygon is returning wrong values at the border

I have the following polygon displayed on a map:
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [-125.59984949809844, 45.262153541142055],
        [-64.97100463461506, 39.503280047917194],
        [-71.53494497281665, 25.360849581306127],
        [-121.81059696453559, 26.995032595715646],
        [-125.59984949809844, 45.262153541142055]
      ]
    ]
  },
  "properties": {}
}
Calling
console.log(booleanPointInPolygon([-98.65195, 49.42827], polygon)); //logs false
console.log(booleanPointInPolygon([-106.53965, 27.69895], polygon)); //logs true
when the expected output should be the opposite. I'm pretty sure my data is in the right form, [longitude, latitude], so I am wondering what's giving me the wrong output.
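One likely explanation, offered as an assumption rather than a confirmed diagnosis: booleanPointInPolygon does a planar test on raw [longitude, latitude] values, while a map that renders edges as geodesics bows edges this long poleward, so a point can appear inside the drawn shape yet lie outside the planar polygon Turf actually tests (and vice versa). A self-contained repro using the data above:

import booleanPointInPolygon from '@turf/boolean-point-in-polygon';

const polygon = {
  type: 'Feature',
  geometry: {
    type: 'Polygon',
    coordinates: [[
      [-125.59984949809844, 45.262153541142055],
      [-64.97100463461506, 39.503280047917194],
      [-71.53494497281665, 25.360849581306127],
      [-121.81059696453559, 26.995032595715646],
      [-125.59984949809844, 45.262153541142055]
    ]]
  },
  properties: {}
};

// Linear interpolation of the top edge at lon -98.65 gives lat ~42.7, so the
// first point (lat 49.43) is outside the planar polygon; the second point
// (lat 27.70) sits just above the planar bottom edge (~26.5), hence inside.
console.log(booleanPointInPolygon([-98.65195, 49.42827], polygon));  // false
console.log(booleanPointInPolygon([-106.53965, 27.69895], polygon)); // true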

Tileset union aggregation sum results are incorrect and depend on zoom levels

I'm trying to create summary census statistics at different zoom levels for a state in the USA (census block group, tract, county and state). I've found that the total population generated by union aggregation, with a concatenated key built from state, county, tract and census block group depending on zoom level, can be incorrect if I set the zoom thresholds too low for, e.g., the census block group level (i.e. wanting to show more detail at lower zoom levels).
For example, testing the recipe below for Rhode Island gave me a total population of 343,405 for the state instead of the correct number of 1,097,379, which I've double-checked. There were no errors shown in the tile service after using the tilesets API to upload. If I change the zoom thresholds to, e.g.,
>= 0 for STATEFP
>= 5 for COUNTYFP
>= 8 for TRACTCE
>= 10 for BLKGRPCE
then the total population for the state changes to the correct value, 1,097,379. What am I missing here? How can data in the geojson dataset just get ignored?
Thanks
David
{
  "version": 1,
  "layers": {
    "NAME": {
      "source": SOURCE,
      "minzoom": 0,
      "maxzoom": 10,
      "features": {
        "simplification": {
          "outward_only": true,
          "distance": 1
        },
        "attributes": {
          "set": {
            "key": [
              "concat",
              ["case", [">=", ["zoom"], 0], ["get", "STATEFP"], ""],
              ["case", [">=", ["zoom"], 3], ["get", "COUNTYFP"], ""],
              ["case", [">=", ["zoom"], 5], ["get", "TRACTCE"], ""],
              ["case", [">=", ["zoom"], 8], ["get", "BLKGRPCE"], ""]
            ]
          },
          "allowed_output": ["tpop", "STATEFP", "COUNTYFP", "TRACTCE", "BLKGRPCE"]
        }
      },
      "tiles": {
        "union": [
          {
            "group_by": ["key"],
            "aggregate": {
              "tpop": "sum"
            },
            "simplification": {
              "distance": 4,
              "outward_only": false
            }
          }
        ],
        "layer_size": 2500
      }
    }
  }
}
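For reference, the zoom thresholds quoted above that produced the correct total would slot into the key expression like this, with everything else in the recipe unchanged:

"key": [
  "concat",
  ["case", [">=", ["zoom"], 0], ["get", "STATEFP"], ""],
  ["case", [">=", ["zoom"], 5], ["get", "COUNTYFP"], ""],
  ["case", [">=", ["zoom"], 8], ["get", "TRACTCE"], ""],
  ["case", [">=", ["zoom"], 10], ["get", "BLKGRPCE"], ""]
]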

Query an array element in a JSONB object

I have a jsonb column called data in a table called reports. Here is what report.id = 1 looks like:
[
  {
    "Product": [
      {
        "productIDs": ["ABC1", "ABC2"],
        "groupID": "Food123"
      },
      {
        "productIDs": ["EFG1"],
        "groupID": "Electronic123"
      }
    ],
    "Package": [
      {
        "groupID": "Electronic123"
      }
    ],
    "type": "Produce"
  },
  {
    "Product": [
      {
        "productIDs": ["ABC1", "ABC2"],
        "groupID": "Clothes123"
      }
    ],
    "Package": [
      {
        "groupID": "Food123"
      }
    ],
    "type": "Wearables"
  }
]
and here is what report.id = 2 looks like:
[
  {
    "Product": [
      {
        "productIDs": ["XYZ1", "XYZ2"],
        "groupID": "Food123"
      }
    ],
    "Package": [],
    "type": "Wearable"
  },
  {
    "Product": [
      {
        "productIDs": ["ABC1", "ABC2"],
        "groupID": "Clothes123"
      }
    ],
    "Package": [
      {
        "groupID": "Food123"
      }
    ],
    "type": "Wearables"
  }
]
I am trying to get a list of all entries in the reports table where at least one element of the data array has both of the following:
type = Produce, AND
a groupID starting with Food on any element of its Product array.
So from the example above, the query should only return the first report, since
its first element has type = Produce, and
the groupID of the first element of that Product array starts with Food.
The second element is filtered out because its type is not Produce.
I am not sure how to write the AND condition on groupID. Here is what I have tried, which gets all entries whose type is Produce:
select * from reports r, jsonb_to_recordset(r.data) as items(type text) where items.type like 'Produce';
Sample structure and result: dbfiddle
select r.*
from reports r
cross join jsonb_array_elements(r.data) l1
cross join jsonb_array_elements(l1.value -> 'Product') l2
where l1 ->> 'type' = 'Produce'
and l2.value ->> 'groupID' ~ '^Food';
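Note that the two cross joins can return the same report row once per matching Product element; if each report should appear only once, use select distinct r.* or move the Product check into an EXISTS subquery.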

Getting the last datum in a Vega dataset

I have a data source A and I'd like to create a new data source B containing just the last element of A. What is the best way to do this in Vega?
This is relatively straightforward to do, although I am slightly confused by your use of "max" in the aggregation, since that isn't necessarily the last value.
Either way, here is my solution for obtaining the last value in a dataset, using this series of transforms:
"transform": [
  {
    "type": "window",
    "ops": ["row_number"],
    "as": ["row_number"]
  },
  {
    "type": "joinaggregate",
    "fields": ["row_number"],
    "ops": ["max"],
    "as": ["max_row_number"]
  },
  {
    "type": "filter",
    "expr": "datum.row_number == datum.max_row_number"
  }
]
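One caveat: row_number is assigned in the dataset's current order, so if "last" should mean last by some field rather than last-loaded, add a sort comparator to the window transform.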
I was able to get this working in the Vega Editor using the following:
{
  "$schema": "https://vega.github.io/schema/vega/v5.json",
  "data": [
    {
      "name": "source",
      "url": "https://raw.githubusercontent.com/vega/vega/master/docs/data/cars.json",
      "transform": [
        {
          "type": "filter",
          "expr": "datum['Horsepower'] != null && datum['Miles_per_Gallon'] != null && datum['Acceleration'] != null"
        }
      ]
    },
    {
      "name": "avg",
      "source": "source",
      "transform": [
        {
          "type": "aggregate",
          "groupby": ["Horsepower"],
          "ops": ["average"],
          "fields": ["Miles_per_Gallon"],
          "as": ["Avg_Miles_per_Gallon"]
        }
      ]
    },
    {
      "name": "last",
      "source": "avg",
      "transform": [
        {
          "type": "aggregate",
          "ops": ["max"],
          "fields": ["Horsepower"],
          "as": ["maxHorsepower"]
        },
        {
          "type": "lookup",
          "from": "avg",
          "key": "Horsepower",
          "fields": ["maxHorsepower"],
          "values": ["Horsepower", "Avg_Miles_per_Gallon"]
        }
      ]
    }
  ]
}
This produces a single row:

maxHorsepower  Horsepower  Avg_Miles_per_Gallon
230            230         16
I'd be interested to know if there are better ways, but this worked for me.

How to get multiple lines starting with a pattern and format them

I have a file which contains multiple lines, and I want only a few things from it. Below is the output I am getting from the server.
Output:
"az": "nova",
"cloud": "envvars",
"config_drive": "",
"created": "2016-08-19T17:21:24Z",
"flavor": {
"id": "4",
"name": "m1.large"
},
"hostId": "f714baee5967dc17e7d36c7b72eb92a4f1ab68d9782fa90a968ceae5",
"human_id": "dsc-test-3",
"id": "3f0a1188-c151-4e5e-9930-969d0423601b",
"image": {
"id": "7f4ad1f4-6fab-4978-b65a-ec4b9a407c5c",
"name": "mitel-dsc-7.9-9.15_3nic"
},
"interface_ip": "172.16.17.15",
"key_name": "key1",
"metadata": {},
"name": "dsc-test-3",
"networks": {
"dsc-InterInstance": [
"172.16.18.15"
],
"dsc-OAM": [
"172.16.16.20"
],
"dsc-sig": [
"172.16.17.15",
"10.10.72.15"
]
},
My intention is to extract only the part below.
Required:
"networks": {
"dsc-InterInstance": [
"172.16.18.15"
],
"dsc-OAM": [
"172.16.16.20"
],
"dsc-sig": [
"172.16.17.15",
"10.10.72.15"
]
}
Try this sed method:
sed -n '/networks.*{/,/}/p' fileName
Outputs:
"networks": {
"dsc-InterInstance": [
"172.16.18.15"
],
"dsc-OAM": [
"172.16.16.20"
],
"dsc-sig": [
"172.16.17.15",
"10.10.72.15"
]
}
Awk solution:
awk '/"networks"/,/}/' file.txt
This gives the output:
"networks": {
  "dsc-InterInstance": [
    "172.16.18.15"
  ],
  "dsc-OAM": [
    "172.16.16.20"
  ],
  "dsc-sig": [
    "172.16.17.15",
    "10.10.72.15"
  ]
},
The syntax used here is /start_regex/,/stop_regex/, which starts matching when a line matches start_regex and stops matching after a line matches stop_regex (so all the lines in between also get matched). Since no action is specified, the default {print} action is used.
Compared to your requirement, this outputs an extra , on the last line, since that comma is present in the input. If that is unacceptable, you could get rid of it using the action {sub("},","}");print}. Or by using sed as in the other answer.
... and the tagged grep solution as well:
$ tr '\n' _ <file | grep -o '[^_]*"networks"[^}]*}' | tr _ '\n'
"networks": {
"dsc-InterInstance": [
"172.16.18.15"
],
"dsc-OAM": [
"172.16.16.20"
],
"dsc-sig": [
"172.16.17.15",
"10.10.72.15"
]
}
i.e. change every \n to _, grep out the requested block, and put the \n back. _ may not be the best choice of substitute character, but it works in this particular case.
With GNU awk for multi-char RS and RT you can just use a regexp to describe the string you're looking for:
$ awk -v RS='"networks": {[^}]+}' 'RT{print RT}' file
"networks": {
"dsc-InterInstance": [
"172.16.18.15"
],
"dsc-OAM": [
"172.16.16.20"
],
"dsc-sig": [
"172.16.17.15",
"10.10.72.15"
]
}