I'm trying to create an animation of the population density of the Appalachian region from roughly 1790 to 2010 in decennial steps at the county level.
I've successfully created a choropleth for 2010 by modifying what was done in this tutorial by Nathan Yau. I've run into a few problems. For one, US county boundaries evolve rapidly over time so I can't use the same SVG file as in the tutorial. I think I need to do the following:
Obtain historical county boundaries as GIS files from here.
Convert GIS files into SVG files using Kartograph (after installing its numerous dependencies).
Obtain population data (with FIPS info) for each county in Appalachian region since 1790 from US census data.
Mimic what was done in the tutorial to create a choropleth for each decade and stitch them together into an animation.
This just seems insanely complicated for something so simple, and since I'm new to a lot of this, I'm not convinced I'll be able to get all of it to work. I guess my questions are the following:
Will the strategy I outlined work? Is there a better/simpler way to do what I'm trying to do?
As for getting the census data, this also seems harder than it has to be. I just want a simple .csv file with, say, FIPS code, county name, and population for a given year, yet the best I can find is something like this, with a link to the actual source in some arcane format.
Thanks for any help!
You can download tables of population data by county from the US Census here:
http://factfinder2.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk
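For what it's worth, once you have per-decade boundary files and population tables, the per-decade choropleth frames could also be produced with a short script instead of the Kartograph/SVG route. A rough Python sketch, assuming hypothetical inputs counties_<year>.shp and pop_<year>.csv sharing a FIPS column (file and column names are placeholders, not anything the census download provides in exactly this form):

```python
# Rough sketch, not the tutorial's workflow. Assumed (hypothetical) inputs per
# decade: a county boundary shapefile counties_<year>.shp and a table
# pop_<year>.csv with columns FIPS and POPULATION. Adjust names to your data.
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

for year in range(1790, 2020, 10):
    counties = gpd.read_file(f"counties_{year}.shp")
    pop = pd.read_csv(f"pop_{year}.csv", dtype={"FIPS": str})
    merged = counties.merge(pop, on="FIPS", how="left")

    # Density = population / area in km^2; use an equal-area projection for area.
    merged = merged.to_crs(epsg=5070)  # CONUS Albers equal-area
    merged["density"] = merged["POPULATION"] / (merged.geometry.area / 1e6)

    ax = merged.plot(column="density", cmap="YlOrRd", legend=True,
                     figsize=(10, 8), missing_kwds={"color": "lightgrey"})
    ax.set_axis_off()
    ax.set_title(f"Population density, {year}")
    plt.savefig(f"frame_{year}.png", dpi=150, bbox_inches="tight")
    plt.close()

# The frames can then be stitched into an animation, e.g. with ImageMagick:
#   convert -delay 80 frame_*.png appalachia.gif
```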
In 2007, when I was young and foolish and before I knew about Open Street Map, I started an urban historical map project. I was working in Illustrator, it was going to be an interactive Flash piece, and my process was to draw the maps first, with the thought that I'd label some, but not all, of the street later on.
As we know, Flash began to die around 2010, and I put the project away for a number of years. I picked it up again a couple of years ago and continued my earlier practice of just drawing streets and water features, this time with the intention of making it a conventional web map. Now I'm pretty close to finishing the drawing of a five-layer (1871, 1903, 1932, 1952 and 2016) historical map of a medium-sized city, though it still lacks labels.
My problem now is how to add large numbers of labels, many of them duplicates. There could be as many as 10,000 for all five layers, though as a practical matter I may have to settle for a smallish fraction of that number. Based on web searches I gather my workflow is unusual and that mine is therefore an unusual problem.
I've exported my maps and brought them into QGIS and played with the software a little. The process of adding labels to objects doesn't seem terribly efficient or user-friendly, but that's probably due to my unfamiliarity with the program.
So my question is this: Are there any tricks to speed up the painful process of adding large numbers of duplicate labels in either QGIS or ArcGIS? Since so many of the streets exist in all five layers, functionality like the ability to select multiple objects in different layers and edit their attributes simultaneously in the Attribute Table would be a godsend. (Doesn't seem possible.) So would the ability to copy the attributes from one object and paste them onto other objects. Or the ability to do either of these things in Illustrator via a plugin and then export the data along with the shapes to a GIS program.
Thanks for your help!
If I understand the issue correctly, I think there are several different solutions.
Typically, for a spatial layer in ArcGIS or QGIS, you define a labeling scheme once and it is applied across all features in the layer, whether there is 1 or 1 million. This assumes that each feature in the layer has one or more attributes in the layer's associated table to label from.
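In QGIS this can even be scripted from the Python console; a minimal sketch, assuming a layer named "streets_1871" with a "name" attribute (both names are placeholders for whatever your data actually uses):

```python
# Minimal PyQGIS sketch (QGIS 3, run in the Python console): label every feature
# in a layer from a single attribute. Layer and field names are placeholders.
from qgis.core import (QgsProject, QgsPalLayerSettings,
                       QgsVectorLayerSimpleLabeling)

layer = QgsProject.instance().mapLayersByName("streets_1871")[0]

settings = QgsPalLayerSettings()
settings.fieldName = "name"  # attribute column holding the street name

layer.setLabeling(QgsVectorLayerSimpleLabeling(settings))
layer.setLabelsEnabled(True)
layer.triggerRepaint()
```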
How are you converting the Illustrator vectors to a spatial layer? DXF?
You will likely have better/faster responses to this question by posting it to the GIS Stack exchange. https://gis.stackexchange.com/
I am interested in visualizing meteorological and climatological data.
Here we are talking about 2D/3D visualization for weather and climate elements:
Temperature
Pressure
Wind
We have used some tools previously, such as:
GrADS
Surfer (commercial software)
GIS Meteo (commercial software)
What other tools (preferably open source) would you suggest for that purpose nowadays?
I know you mentioned GrADS, but it was the tool I used most for developing weather products; it is a little more intuitive and resource-friendly than IDV when I was coding, and it has a generally good rate of development. You mentioned open source... did you know there is an OpenGrADS (http://opengrads.org/)? Most friends involved in weather product development use a combination of GrADS/OpenGrADS for much of their work. But I agree it doesn't produce knock-your-socks-off graphics.
Another commonly used free program is GEMPAK, also a Unidata product, though in my personal opinion it really seems to be becoming outdated.
And if you want to talk high-end graphics, you're going to pay more. http://moe.met.fsu.edu/~hrw22/movies/WIND_Katrina_2005-08-28_00Z.gif is a great video of Katrina that was produced by someone I knew using Amira. According to Wikipedia, you're looking at
"Cost: $4,000 USD + $800/year support (2009)... although now has much more ugly/complex pricing structure where each feature is priced separately (eg: Amira Mesh Option $360). I believe at NCMIR we pay ~$9000/year for five user-license." Ouch!
I don't have an open source tool to offer, but if you can get access to a Level-II data feed (Level-II is minimally post-processed radar data), a meteorologist friend and I use GR2Analyst. I would assume you know enough about weather data sources to be able to figure out how to set this up.
If you're looking for an open source (and free) tool that can do 2D and 3D, which also includes access to a wide variety of datasets (obs, model output, remote sensing - radar level 2 and 3, satellite, and more!), then you might want to check out the Unidata Integrated Data Viewer (IDV):
http://www.unidata.ucar.edu/software/idv/
Source code available here:
https://github.com/Unidata/IDV
The interface is a bit complex, but we have some YouTube screencasts to help people get up and running:
http://www.youtube.com/user/unidatanews/videos
If you'd like to see a video for a specific thing, we are taking requests :-) (email support-idv@unidata.ucar.edu). We do yearly training workshops as well, and those materials are available online here:
http://www.unidata.ucar.edu/software/idv/docs/workshop/
Cheers!
Sean
Panoply is a multiplatform desktop option if your data is available in formats such as NetCDF, HDF or GRIB.
The following text, extracted from its site, describes some of its capabilities:
Slice and plot geo-gridded latitude-longitude, latitude-vertical, longitude-vertical, or time-latitude arrays from larger multidimensional variables.
Slice and plot "generic" 2D arrays from larger multidimensional variables.
Slice 1D arrays from larger multidimensional variables and create line plots.
Combine two geo-gridded arrays in one plot by differencing, summing or averaging.
Plot lon-lat data on a global or regional map using any of over 100 map projections or make a zonal average line plot.
Overlay continent outlines or masks on lon-lat map plots.
Use any of numerous color tables for the scale colorbar, or apply your own custom ACT, CPT, or RGB color table.
Save plots to disk as GIF, JPEG, PNG or TIFF bitmap images or as PDF or PostScript graphics files.
Export lon-lat map plots in KMZ format.
Export animations as AVI or MOV video or as a collection of individual frame images.
Explore remote THREDDS and OpenDAP catalogs and open datasets served from them.
If you are interested in interactive visualization over the web, there are some options such as:
ncWMS: a web mapping server that reads NetCDF data and publishes it using the Web Map Service (WMS) standard.
GeoServer: another web mapping server, which has a plugin to read NetCDF data.
VTK (Visualization Toolkit) is an open source C++ library for 2D and 3D visualization that I use to visualize radar data in 3D.
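If you would rather script quick 2D plots yourself from the same NetCDF/GRIB files these tools read, a minimal Python sketch along these lines might work (the file name "air.nc" and variable name "air" are assumptions, and it needs xarray, matplotlib and cartopy installed):

```python
# Minimal sketch, assuming a NetCDF file "air.nc" with a temperature variable
# named "air" on a lat/lon grid (both names are placeholders).
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ds = xr.open_dataset("air.nc")
field = ds["air"].isel(time=0)  # first time step

ax = plt.axes(projection=ccrs.PlateCarree())
field.plot(ax=ax, transform=ccrs.PlateCarree(), cmap="coolwarm")
ax.coastlines()
plt.title("Surface air temperature")
plt.show()
```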
I might have gotten in over my head here, and am looking for any possible assistance, as I am really not familiar with writing code. If you can dumb down any possible answers, that'd be spectacular.
I created a Google Fusion Table that lists worldwide sea ports by city and country, and visualizes them on a map. I want to have the ability to type in an inland location and have the map mark the location, and advise the closest one or two seaports.
For example: I enter Richmond, VA as the location; the map marks Richmond, VA and advises that the Norfolk, VA and New York, NY seaports would be the closest.
I'm not sure where to begin to accomplish this. Is this too vague of a question? Any help provided will be greatly appreciated!
You can accomplish this using a bit of JavaScript code. The Fusion Tables Layer in the Google Maps API allows you to find the nearest n neighbors to a latitude, longitude coordinate. An example can be found here:
https://developers.google.com/fusiontables/docs/samples/nn_example
Here are the overall steps you would take to create the app:
Create an HTML page that has a search box plus a map with the Fusion Table Layer displaying the data from your table
When the user enters a search term, such as Richmond VA, you would geocode the string to get the lat/lon coordinate. You can use the Google Maps API geocoding service:
http://code.google.com/apis/maps/documentation/javascript/geocoding.html
When you get the lat/lon coordinate, use this to update the query sent to the Fusion Tables Layer (similar to the example above) to show the 2 nearest ports.
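Just to illustrate the nearest-neighbor step on its own, outside of the Fusion Tables Layer and the Maps API, here is a rough Python sketch using plain haversine distances (the port list and coordinates are illustrative placeholders; in the actual app the query coordinate would come from the geocoder):

```python
# Sketch of the "nearest n ports" step only: plain haversine distance over a
# small hard-coded port list. Ports and the query point (Richmond, VA,
# approximate) are placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

ports = [
    ("Norfolk, VA", 36.85, -76.29),
    ("New York, NY", 40.67, -74.04),
    ("Charleston, SC", 32.78, -79.92),
]

query_lat, query_lon = 37.54, -77.44  # Richmond, VA (from geocoding)
nearest = sorted(ports, key=lambda p: haversine_km(query_lat, query_lon, p[1], p[2]))[:2]
print(nearest)  # the two ports closest to the query point
```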
My name is John and I am a grad student at the University of Florida. As part of my research, one of my tasks is to create a piece of software that displays a map of the surrounding area, shows the current location (from a GPS), and draws a shapefile (as a boundary outline). I have not been able to find enough information to get on the right track, and would appreciate any assistance!
The project involves a large-scale robot that will be tele-operated in rough terrain, so this mapping and GPS software will need to work entirely offline, though the area of operation will be known in advance. A cost-effective means of doing this is strongly preferred (perhaps even a simple API, DLL library, or ActiveX control that could handle the task).
My initial guess is to use a geo-referenced image whose lat/long boundaries I know. From a GPS reading I would then treat the image as an XY plot of sorts and derive the current position on it. Obviously even this step can be a challenge, depending on what kind of image, map, KML file, etc. I can find and use.
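For what it's worth, the lat/long-to-pixel mapping for a north-up geo-referenced image with a known bounding box is just linear interpolation; a rough Python sketch (it ignores projection distortion, which may matter over a large area, and all numbers are made up for illustration):

```python
# Rough sketch of the lat/long-to-pixel mapping for a north-up geo-referenced
# image with a known bounding box. Plain linear interpolation only.
def latlon_to_pixel(lat, lon, bounds, width_px, height_px):
    """bounds = (min_lon, min_lat, max_lon, max_lat) of the image."""
    min_lon, min_lat, max_lon, max_lat = bounds
    x = (lon - min_lon) / (max_lon - min_lon) * width_px
    y = (max_lat - lat) / (max_lat - min_lat) * height_px  # image y grows downward
    return x, y

# Example: a 2000 x 1500 px image covering a 1 x 1 degree area
print(latlon_to_pixel(29.70, -82.40, (-82.5, 29.5, -81.5, 30.5), 2000, 1500))
```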
So I would appreciate any advice, suggestions, or comments.
I suggest you look online for reference source code for this kind of project and then modify it yourself; similar projects already exist on the Internet, and you can find them through search engines. Good luck!
I have a shapefile with a road network, and it seems like the roads are all listed as one big polyline. Is this typical? Is it possible to get a road network where the roads are listed individually and have names associated with them?
thanks,
Jeff
If someone sent me a shapefile of roads where all the roads were a single polyline, I would assume the person was playing a practical joke on me.
Typically, a useful shapefile of roads would at least be broken into a single line for each defined road, or, even better, into intersection-to-intersection network segments.
It's not a trivial task to split up a single polyline into a more useful multi-segmented shapefile.
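That said, a first pass at splitting is possible in Python with geopandas; a minimal sketch (file names are placeholders, and this only separates existing line parts, it does not break roads at intersections or attach names):

```python
# Minimal sketch with geopandas (reasonably recent version): explode a
# multi-part road geometry into one row per line part and write it back out.
import geopandas as gpd

roads = gpd.read_file("roads.shp")                            # placeholder name
parts = roads.explode(index_parts=False).reset_index(drop=True)
parts.to_file("roads_split.shp")
print(len(roads), "->", len(parts), "features")
```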
A quick Google search returns a couple of free shapefile editors, although I can't vouch for any of them. I use my company's own codebase, written in C# using Tatuk, for working with shapefiles.
http://www.nrdb.co.uk/nrdbview/
http://www.forestpal.com/blog/?p=21