I'm doing some database exercises from LeetCode and want to test my code on my laptop using MySQL. I'm hoping for an easy way to import the data.
Here is the input data from LeetCode:
{"headers":{"insurance":["PID","TIV_2015","TIV_2016","LAT","LON"]},"rows":
{"insurance":[[1,224.17,952.73,32.4,20.2],[2,224.17,900.66,52.4,32.7],
[3,824.61,645.13,72.4,45.2],[4,424.32,323.66,12.4,7.7],
[5,424.32,282.9,12.4,7.7],[6,625.05,243.53,52.5,32.8],
[7,424.32,968.94,72.5,45.3],[8,624.46,714.13,12.5,7.8],
[9,425.49,463.85,32.5,20.3],[10,624.46,776.85,12.4,7.7],
[11,624.46,692.71,72.5,45.3],[12,225.93,933,12.5,7.8],
[13,824.61,786.86,32.6,20.3],[14,824.61,935.34,52.6,32.8]]}}
What is the data type?
This is a JSON string. JSON is a common data interchange format; here the "headers" object lists the column names for each table and the "rows" object holds the corresponding row values.
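If the goal is to get this into MySQL, one easy route is a small script that turns the headers/rows pairs into INSERT statements. A minimal sketch in Python, assuming the table (insurance) already exists with matching column types:

import json

# Paste the full string from LeetCode here (truncated for brevity).
raw = '{"headers":{"insurance":["PID","TIV_2015","TIV_2016","LAT","LON"]},"rows":{"insurance":[[1,224.17,952.73,32.4,20.2],[2,224.17,900.66,52.4,32.7]]}}'

data = json.loads(raw)
for table, columns in data["headers"].items():
    col_list = ", ".join(columns)
    for row in data["rows"][table]:
        values = ", ".join(str(v) for v in row)
        print(f"INSERT INTO {table} ({col_list}) VALUES ({values});")

Redirect the output to a file and run it with mysql < inserts.sql. Note that str(v) is only safe here because every value is numeric; string columns would need quoting and escaping.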
This is the data: https://github.com/sealneaward/nba-movement-data/blob/master/data/01.01.2016.CHA.at.TOR.7z
I am not sure how to convert this data into a dataframe (likely multiple dataframes). Can anyone help me or direct me to a good resource?
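Once the .7z is extracted you get a plain JSON file, so Python's json module plus pandas is a reasonable starting point. A rough sketch; the extracted file name and the "events" key are assumptions based on the SportVU tracking format this repository uses, so inspect the top-level keys first:

import json
import pandas as pd

# Extract the .7z first (e.g. with 7-Zip or the py7zr package).
with open("01.01.2016.CHA.at.TOR.json") as f:
    game = json.load(f)

print(game.keys())  # see which top-level structures exist

# One dataframe per nested list is usually the cleanest split.
events = pd.json_normalize(game["events"])
print(events.head())

Deeply nested lists (e.g. per-moment player coordinates) can be flattened the same way by pointing pd.json_normalize at the nested record path.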
I'm wondering whether Protobuf map types can be converted to BigQuery schema data types. I've searched for relevant issues and found [1], but it doesn't seem to answer my question.
[1] BigQuery: importing map's JSON data into a table
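For what it's worth, BigQuery has no native map type. The usual convention, and what BigQuery itself does when loading Avro maps, is a REPEATED RECORD with key and value fields. A sketch of what that could look like in a schema file (the field name attributes and the STRING types are illustrative only):

[
  {"name": "attributes", "type": "RECORD", "mode": "REPEATED",
   "fields": [
     {"name": "key", "type": "STRING", "mode": "REQUIRED"},
     {"name": "value", "type": "STRING", "mode": "NULLABLE"}
   ]}
]

You can then look entries up in queries by joining against UNNEST(attributes).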
Can someone please explain the purpose of providing a JSON schema file when loading a file into a BigQuery table with the bq command? What are the advantages?
Does this file help maintain data integrity by preventing columns from being swapped?
Regards,
Sreekanth
Specifying a JSON schema, instead of relying on auto-detect, ensures that you get the expected type for each column being loaded. If you have data that looks like this, for example:
1,'foo',true
2,'bar',false
3,'baz',true
Schema auto-detection would infer that the type of the first column is an INTEGER (a.k.a. INT64). Suppose, though, that you plan to load more data in the future that looks like this:
3.14,'foo',true
1.59,'bar',false
-2.001,'baz',true
In that case, you probably want the first column to have type FLOAT (a.k.a. FLOAT64) instead. If you provide a schema when you load the first file, you can specify a type of FLOAT for that column explicitly.
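Concretely, for the three-column data above the schema file might look like this (the column names are made up, since the sample rows don't name them):

[
  {"name": "measurement", "type": "FLOAT", "mode": "NULLABLE"},
  {"name": "label", "type": "STRING", "mode": "NULLABLE"},
  {"name": "flag", "type": "BOOLEAN", "mode": "NULLABLE"}
]

and you would pass it as the last argument to bq load, along the lines of:

bq load --source_format=CSV mydataset.mytable ./data.csv ./schema.json

(mydataset.mytable and the file names are placeholders.) As for the column-swap question: an explicit schema pins down the column names and types so they are not re-inferred per file, but for CSV the mapping is still positional, so it won't catch two same-typed columns being swapped in the source data.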
I'm trying to convert a shapefile I have to SQL format.
I've tried doing this using shp2pgsql, but, alas, this program doesn't read the SHAPEFILE.prj file, so I end up with coordinates in an inconvenient format.
Is there a way to convert shapefiles to SQL which respects their PRJ specification?
You may have data in one projection that you want to display or interact with in more familiar values, like longitude and latitude. For example, Planet OpenStreetMap uses a spherical Mercator projection and gives you values like this when you ask for the text representation:
cal_osm=# select st_astext(way) from planet_osm_point limit 3;
st_astext
-------------------------------------------
POINT(-13852634.6464924 4928686.75004766)
POINT(-13850470.0501262 4930555.55031171)
POINT(-13850160.8268447 4930880.61375574)
(3 rows)
You can use st_transform to convert to a more familiar representation, such as WGS 84 longitude/latitude (SRID 4326):
cal_osm=# select st_astext(st_transform(way, 4326)) from planet_osm_point limit 3;
st_astext
-------------------------------------------
POINT(-124.440334282677 40.4304093953086)
POINT(-124.42088938268 40.4431868953078)
POINT(-124.418111582681 40.4454091953076)
(3 rows)
A prj file is essentially a text file that contains the coordinate system information in the ESRI well-known text (WKT) format. Could you just write a program that uses shp2pgsql to convert the geometries and then store the associated WKT string from the prj?
Of note: WKT is an EPSG-accepted format for describing projected and geographic coordinate system information, but different authorities may use different names for projections or projection parameters: PostGIS might differ from Oracle, which might in turn differ from ESRI. So if you store the prj's WKT, make sure it goes in an esri_coordinate_system column, since PostGIS may follow a different naming convention for parameters.
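If you would rather have shp2pgsql tag the geometries with the correct SRID up front, one route is to resolve the prj's WKT to an EPSG code with GDAL and pass that code to shp2pgsql's -s flag. A rough sketch using the GDAL Python bindings (assuming they are installed; error handling omitted):

from osgeo import osr

# Read the ESRI-flavoured WKT out of the .prj file.
with open("SHAPEFILE.prj") as f:
    esri_wkt = f.read()

srs = osr.SpatialReference()
srs.ImportFromESRI([esri_wkt])     # ImportFromESRI expects a list of lines
srs.AutoIdentifyEPSG()             # try to match a known EPSG code
print(srs.GetAuthorityCode(None))  # e.g. 4326

Then shp2pgsql -s <that code> loads the geometries with their SRID set, and st_transform can take it from there.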
Also, in case you're interested, there is an open C++ FileGDB API that lets you access row information without an ArcGIS license. It's available in 32- and 64-bit builds for Windows and Linux:
http://resources.arcgis.com/content/geodatabases/10.0/file-gdb-api