I am learning how to implement a TensorFlow model on Android.
In this tutorial, the labels.txt and model.tflite files are placed in the assets folder:
https://blog.notyouraveragedev.in/android/image-classification-in-android-using-tensor-flow/
What should that labels.txt file be?
I have a file in the following format:
"1": "1 Cent,Australian dollar,australia",
"2": "2 Cents,Australian dollar,australia",
"3": "5 Cents,Australian dollar,australia",
"4": "10 Cents,Australian dollar,australia",
"5": "20 Cents,Australian dollar,australia",
"6": "50 Cents,Australian dollar,australia",
"7": "1 Dollar,Australian dollar,australia",
"8": "2 Dollars,Australian dollar,Australia",
Is that the labels.txt file, or is it something else?
Never mind, I found the answer.
It should contain just the plain class names, one per line.
e.g. 1 Cent,Australian dollar,Australia
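Applied to the mapping in the question, labels.txt would simply list one class name per line, in index order, with the numeric keys dropped:
1 Cent,Australian dollar,australia
2 Cents,Australian dollar,australia
5 Cents,Australian dollar,australia
10 Cents,Australian dollar,australia
20 Cents,Australian dollar,australia
50 Cents,Australian dollar,australia
1 Dollar,Australian dollar,australia
2 Dollars,Australian dollar,Australia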
Please help me with the following conversion. I have a pandas dataframe in the following format:
id             location
{ "0": "5",    "0": "Charlotte, North Carolina",
  "1": "5",    "1": "N/A",
  "2": "5",    "2": "Portland, Oregon",
  "3": "5",    "3": "Jonesborough, Tennessee",
  "4": "5",    "4": "Rockville, Indiana",
  "5": "5", }  "5": "Dallas, Texas",
and would like to convert this into the following format:
A header    Another header
"5"         "Charlotte, North Carolina"
"5"         "N/A"
"5"         "Portland, Oregon"
"5"         "Jonesborough, Tennessee"
"5"         "Rockville, Indiana"
"5"         "Dallas, Texas"
Please help
You can try this.
import pandas as pd
import re

df = pd.DataFrame([['{ "0": "5",', '"0": "Charlotte, North Carolina",'],
                   ['"1": "5",', '"1": "N/A",']],
                  columns=['id', 'location'])

# Use a regex to extract the integers in each id string and select the second one
df['id'] = df['id'].apply(lambda x: re.findall(r'\d+', x)[1])

# Split the string on ':', select the second part, and drop the trailing comma
df['location'] = df['location'].apply(lambda x: x.split(':')[1][:-1])
print(df)
Output:
id location
0 5 "Charlotte, North Carolina"
1 5 "N/A"
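Alternatively, the same extraction can be done with pandas' vectorized string methods; a sketch under the same assumptions about the input strings:
import pandas as pd

df = pd.DataFrame([['{ "0": "5",', '"0": "Charlotte, North Carolina",'],
                   ['"1": "5",', '"1": "N/A",']],
                  columns=['id', 'location'])

# Second number in each id string
df['id'] = df['id'].str.findall(r'\d+').str[1]
# Everything after the first colon, minus the trailing comma and padding
df['location'] = df['location'].str.split(':', n=1).str[1].str.rstrip(',').str.strip()
print(df)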
I need to extract the name values, whether it's Action or Adventure, from this column into a new column in pandas:
'[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}, {"id": 14, "name": "Fantasy"}, {"id": 878, "name": "Science Fiction"}]'
You want from_records:
import pandas as pd
data = [{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}, {"id": 14, "name": "Fantasy"}, {"id": 878, "name": "Science Fiction"}]
df = pd.DataFrame.from_records(data)
df
you get:
id name
0 28 Action
1 12 Adventure
2 14 Fantasy
3 878 Science Fiction
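If the column actually holds that list as a string (as the quotes in the question suggest), you need to parse it first; a sketch using ast.literal_eval, where the column name genres is hypothetical:
import ast
import pandas as pd

# Hypothetical frame whose 'genres' column holds the string from the question
df = pd.DataFrame({"genres": [
    '[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}]'
]})

# Parse each string into a list of dicts, then join the names into a new column
df["names"] = df["genres"].apply(
    lambda s: ", ".join(d["name"] for d in ast.literal_eval(s))
)
print(df["names"][0])  # Action, Adventure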
I have a data set that combines two temporal measurement series, with one row per measurement:
time: 1, measurement: a, value: 5
time: 2, measurement: b, value: false
time: 10, measurement: a, value: 2
time: 13, measurement: b, value: true
time: 20, measurement: a, value: 4
time: 24, measurement: b, value: true
time: 30, measurement: a, value: 6
time: 32, measurement: b, value: false
In a visualization using Vega-Lite, I'd like to combine the two measurement series and encode measurements a and b in a single visualization, not by simply layering their representations on a temporal axis but by representing their values in a single encoding spec.
Either measurement a's values need to be interpolated and added as a new field on the rows of measurement b,
eg:
time: 2, measurement: b, value: false, interpolatedMeasurementA: 4.6667
or the other way around, which leaves the question of how to interpolate a boolean: maybe the closest value by time, or simpler, the last value.
eg:
time: 30, measurement: a, value: 6, lastValueMeasurementB: true
I suppose this could be done either on the query side, in which case this question is about the InfluxDB Flux query language,
or on the visualization side, in which case it is about Vega-Lite.
There is no true linear interpolation scheme built into Vega-Lite (though the loess transform comes close), but you can achieve roughly what you want with a window transform.
Here is an example (view in editor):
{
  "data": {
    "values": [
      {"time": 1, "measurement": "a", "value": 5},
      {"time": 2, "measurement": "b", "value": false},
      {"time": 10, "measurement": "a", "value": 2},
      {"time": 13, "measurement": "b", "value": true},
      {"time": 20, "measurement": "a", "value": 4},
      {"time": 24, "measurement": "b", "value": true},
      {"time": 30, "measurement": "a", "value": 6},
      {"time": 32, "measurement": "b", "value": false}
    ]
  },
  "transform": [
    {
      "calculate": "datum.measurement == 'a' ? datum.value : null",
      "as": "measurement_a"
    },
    {
      "window": [
        {"op": "mean", "field": "measurement_a", "as": "interpolated"}
      ],
      "sort": [{"field": "time"}],
      "frame": [1, 1]
    },
    {"filter": "datum.measurement == 'b'"}
  ],
  "mark": "line",
  "encoding": {
    "x": {"field": "time"},
    "y": {"field": "interpolated"},
    "color": {"field": "value"}
  }
}
This first uses a calculate transform to isolate the values to be interpolated, then a window transform that computes the mean over adjacent values (frame: [1, 1]), and finally a filter transform to keep only the rows of measurement b, which now carry the interpolated values.
If you wanted to go the other route, you could do a similar sequence of transforms targeting the boolean value instead.
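If you would rather precompute this outside the visualization, e.g. in pandas before handing the data to Vega-Lite, the "last value" variant maps directly onto pandas' merge_asof; a sketch using the sample data from the question:
import pandas as pd

# Sample data from the question, split by measurement
a = pd.DataFrame({"time": [1, 10, 20, 30], "value": [5, 2, 4, 6]})
b = pd.DataFrame({"time": [2, 13, 24, 32], "value": [False, True, True, False]})

# For each row of a, attach the most recent b value at or before that time
merged = pd.merge_asof(
    a,
    b.rename(columns={"value": "lastValueMeasurementB"}),
    on="time",
    direction="backward",
)
print(merged)  # time=30 gets lastValueMeasurementB=True, matching the example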
I have a gigantic script that I would like to rewrite in an iterative way (a while or for loop) so that it becomes much shorter and easier to overview. It seems like it should be doable in SQL, but so far I have not succeeded. What I do now to make it work is a lot of SELECT statements that I UNION together into one table.
I want to iterate through the years, starting from 1995: while the year is lower than 2017, execute the query with the year as a variable.
So what I need is an iterative construct that fills in all the years in the following lines of code and combines all the results in one table. I will keep trying myself and will update the code if I make progress.
SELECT
regio, 1995 as year, sum("0") as "0", sum("1") as "1", sum("2") as "2", sum("3") as "3", sum("4") as "4", sum("5") as "5", sum("6") as "6", sum("7") as "7", sum("8") as "8", sum("9") as "9", sum("10") as "10"
FROM
source
where
year = 1995 OR "year-1" = 1995 OR "year-2" = 1995 OR "year-3" = 1995 OR "year-4" = 1995
group by
regio
UNION
SELECT
regio, 1996 as year, sum("0") as "0", sum("1") as "1", sum("2") as "2", sum("3") as "3", sum("4") as "4", sum("5") as "5", sum("6") as "6", sum("7") as "7", sum("8") as "8", sum("9") as "9", sum("10") as "10"
FROM
source
where
year = 1996 OR "year-1" = 1996 OR "year-2" = 1996 OR "year-3" = 1996 OR "year-4" = 1996
group by
regio
You would seem to want the following (this uses generate_series, so it assumes PostgreSQL or another database that supports it):
SELECT regio, g.yyyy as year, sum("0") as "0", sum("1") as "1",
sum("2") as "2", sum("3") as "3", sum("4") as "4",
sum("5") as "5", sum("6") as "6", sum("7") as "7",
sum("8") as "8", sum("9") as "9", sum("10") as "10"
FROM source CROSS JOIN
generate_series(1995, 2017) g(yyyy)
WHERE g.yyyy IN (year, "year-1", "year-2", "year-3", "year-4")
GROUP BY regio, g.yyyy;
I'm trying to use a combination of geopandas, pandas and Folium to create a polygon map that I can embed into a web page.
For some reason, it's not displaying.
The steps I've taken:
Grabbed a .shp file of parliamentary boundaries from the UK's Ordnance Survey (OS).
I've then used geopandas to change the projection to epsg=4326 and exported it as GeoJSON, which takes the following format:
{ "type": "Feature", "properties": { "PCON13CD": "E14000532", "PCON13CDO": "A03", "PCON13NM": "Altrincham and Sale West" }, "geometry": { "type": "Polygon", "coordinates": [ [ [ -2.313999519326579, 53.357408280545918 ], [ -2.313941776174758, 53.358341455420039 ], [ -2.31519699483377, 53.359035664493433 ], [ -2.317953152796459, 53.359102954309151 ], [ -2.319855973429864, 53.358581917200119 ],... ] ] ] } },...
Then what I'd like to do is combine this with a dataframe of constituencies, dty, in the following format:
constituency count
0 Burton 667
1 Cannock Chase 595
2 Cheltenham 22
3 Cheshire East 2
4 Congleton 1
5 Derbyshire Dales 1
6 East Staffordshire 4
import folium
mapf = folium.Map(width=700, height=370, tiles = "Stamen Toner", zoom_start=8, location= ["53.0219392","-2.1597434"])
mapf.geo_json(geo_path="geo_json_shape2.json",
data_out="data.json",
data=dty,
columns=["constituency","count"],
key_on="feature.properties.PCON13NM.geometry.type.Polygon",
fill_color='PuRd',
fill_opacity=0.7,
line_opacity=0.2,
reset="True")
The output from mapf looks like:
mapf.json_data
{'../../Crime_data/staffs_data92.json': [{'Burton': 667,
'Cannock Chase': 595,
'Cheltenham': 22,
'Cheshire East': 2,
'Congleton': 1,
'Derbyshire Dales': 1,
'East Staffordshire': 4,
'Lichfield': 438,
'Newcastle-under-Lyme': 543,
'North Warwickshire': 1,
'Shropshire': 17,
'South Staffordshire': 358,
'Stafford': 623,
'Staffordshire Moorlands': 359,
'Stoke-on-Trent Central': 1053,
'Stoke-on-Trent North': 921,
'Stoke-on-Trent South': 766,
'Stone': 270,
'Tamworth': 600,
'Walsall': 1}]}
Although the mapf.create_map() function successfully creates a map, the polygons don't render.
What debugging steps should I take?
@elksie5000, try mplleaflet; it is extremely straightforward.
pip install mplleaflet
Then, in a Jupyter/IPython notebook:
import mplleaflet
ax = geopandas_df.plot(column='variable_to_plot', scheme='QUANTILES', k=9, colormap='YlOrRd')
mplleaflet.show(fig=ax.figure)
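If you would rather stay with Folium, note that key_on in the original code points through the geometry; it should reference a single feature property. A sketch using Folium's current Choropleth API, with the paths and the dty frame taken from the question:
import folium

mapf = folium.Map(location=[53.0219392, -2.1597434], zoom_start=8,
                  width=700, height=370)

folium.Choropleth(
    geo_data="geo_json_shape2.json",       # GeoJSON exported earlier
    data=dty,                              # constituency/count dataframe
    columns=["constituency", "count"],
    key_on="feature.properties.PCON13NM",  # match on the constituency name only
    fill_color="PuRd",
    fill_opacity=0.7,
    line_opacity=0.2,
).add_to(mapf)

mapf.save("map.html")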