I'm just starting to take an interest in visualization and I'd like to know where I can get my hands on some data, preferably real-world, to see what queries and graphics I can draw from it. It's more of a personal exercise to create some pretty-looking representations of that data.
After seeing this I wondered where the data came from and what else could be done with Wikipedia. Is there any way I can obtain data from, say, Wikipedia?
Also, could anyone recommend any good books? I don't trust the user reviews on the Amazon website :-)
You can download the raw Wikipedia data from http://download.wikimedia.org. There are many different views of the data available. The English Wikipedia is by far the largest database, and there isn't a current full dump available, but one is in progress. It will probably take months to finish and be available for download.
The most recent one was 18 GB compressed, which uncompressed to something like 2.5 TB.
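If it helps, here is a minimal sketch of how such a dump could be processed incrementally in Python (the filename is a placeholder; actual dump names on download.wikimedia.org vary by wiki and date):

    import bz2
    import xml.etree.ElementTree as ET

    # Placeholder filename; substitute the dump you actually downloaded.
    DUMP = "enwiki-pages-articles.xml.bz2"

    # Stream the compressed dump so it never has to fit in memory.
    with bz2.open(DUMP, "rb") as fh:
        for _, elem in ET.iterparse(fh):
            if elem.tag.endswith("title"):
                print(elem.text)          # page titles, one per line
            elem.clear()                  # discard parsed elements as we go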
A fantastic book is The Visual Display of Quantitative Information by Edward Tufte.
I'm a complete newbie in this matter so please ... be patient!
I have a similar (though more complex, I believe) problem to this (Save data from ArcGIS feature layer) with the comprehensive answer by AaronS.
I thank in advance whoever might be supportive.
My goal is to retrieve (via a time-based script) data for real-time processing from a meteo website (https://www.meteo3r.it/app/public/) based on ArcGIS. In particular, I need two types of data:
A) the most recent values of all measurements available AT A GIVEN METEO STATION (or ALL stations, which I will then filter)
B) two pixel maps with the radar measurements of rain intensity AND type of precipitation (rain, heavy rain, snow, hailstorm, ...) FOR ALL AVAILABLE GEOGRAPHICAL COORDINATES
Type A data seems to be available every 30 minutes with a 1-hour delay, while type B seems more frequent, with a 5-minute sampling interval and a 10-minute delay. Does everyone agree?
The Python code for turning the data into column vectors (thank you again, Aaron!) will certainly be useful once I am able to retrieve the data from the server. And that's exactly where I'm stuck.
For A), thanks to Aaron's explanation, I found this URL (https://www.meteo3r.it/dati/mappe/misure.geojson), which contains all the measurements for ALL stations at what I believe are the most recent values.
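For reference, this is roughly the retrieval I have in mind for that file (a sketch only; the property keys inside each feature are assumptions I still need to verify against the actual GeoJSON):

    import requests

    URL = "https://www.meteo3r.it/dati/mappe/misure.geojson"

    geojson = requests.get(URL, timeout=30).json()

    # Each feature should carry a point geometry (station position) and a
    # properties dict with the measurements; the exact keys need checking by hand.
    for feature in geojson.get("features", []):
        props = feature.get("properties", {})
        geometry = feature.get("geometry") or {}
        coords = geometry.get("coordinates", [None, None])
        lon, lat = coords[0], coords[1]
        print(lat, lon, props)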
B) is definitely tougher. I can't find where the radar data is stored or how to retrieve it. The only thing I found is that if you search for the word "radar", as Aaron did for "14001", the only file found is "66xxx.pbf". It's a partially readable binary file that I suspect (but am not sure) is related to sprite images (pixel drawings) that graphically show the radar measurement on the map.
Indeed, the website shows something like 10 successive sprite images to show a "dynamic motion" of the clouds.
I just need the data in numerical form (lat, long, intensity, type) for the most recent sprite available on the website.
Does anyone know how to do this?
Thanks a lot for helping me remove this roadblock.
I am working on a little personal project.
Ideally I would like to be able to programmatically run a Google search and get the result count. (My goal is to compare result counts across a large number (100,000+) of different phrases.)
Is there a free way to run a web search and compare the popularity of different texts, using Google, Bing, or whatever (the source is not really important)?
I tried Google, but it seems that for free I can only make 10 requests per day.
Bing is more permissive (5000 free requests per month).
Are there other tools or ways to freely get the result count for a particular phrase?
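For what it's worth, this is roughly the kind of call I have in mind against the Bing Web Search v7 API (the key is a placeholder and the response fields are taken from the v7 docs, so treat it as an untested sketch):

    import requests

    API_KEY = "YOUR_BING_SUBSCRIPTION_KEY"   # placeholder
    ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

    def result_count(phrase):
        """Return Bing's estimated result count for an exact phrase."""
        response = requests.get(
            ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
            params={"q": f'"{phrase}"'},      # quotes force an exact-phrase search
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("webPages", {}).get("totalEstimatedMatches", 0)

    print(result_count("hello world"))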
Thanks in advance.
There are several things you're going to need if you're seeking to create a simple search engine.
First of all you should read and understand where the field of information retrieval started with G. Salton's paper, or at least read the wiki page on the vector space model. It will require learning at least some undergraduate linear algebra. I suggest Gilbert Strang's MIT video lectures for this.
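To give a feel for the vector space model, here is a minimal sketch with toy documents (plain TF-IDF weighting and cosine similarity; not tied to Salton's exact formulation):

    import math
    from collections import Counter

    documents = [
        "the quick brown fox",
        "the lazy brown dog",
        "the quick dog barks",
    ]

    def tfidf_vectors(docs):
        """Represent each document as a TF-IDF weighted term vector."""
        tokenised = [doc.split() for doc in docs]
        vocab = sorted({term for doc in tokenised for term in doc})
        df = Counter(term for doc in tokenised for term in set(doc))
        n = len(docs)
        vectors = []
        for doc in tokenised:
            tf = Counter(doc)
            vectors.append([tf[t] * math.log(n / df[t]) for t in vocab])
        return vocab, vectors

    def cosine(a, b):
        """Cosine of the angle between two term vectors: the classic relevance score."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    vocab, vecs = tfidf_vectors(documents)
    print(cosine(vecs[0], vecs[1]))   # similarity between the first two documents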
You can then move on to the Brin/Page PageRank paper, which lays out the original concept behind the hyperlink matrix and quickly calculating eigenvectors for ranking, or read the wiki page.
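A minimal sketch of that eigenvector calculation (standard power iteration with a damping factor, run on a tiny made-up link graph):

    import numpy as np

    def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
        """Power iteration on the column-stochastic hyperlink matrix."""
        n = adjacency.shape[0]
        out_degree = adjacency.sum(axis=0)
        out_degree[out_degree == 0] = 1            # guard against dangling pages
        M = adjacency / out_degree                 # column j = out-link probabilities of page j
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            new_rank = (1 - damping) / n + damping * M @ rank
            if np.abs(new_rank - rank).sum() < tol:
                break
            rank = new_rank
        return rank

    # Tiny graph: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
    # Entry (i, j) is 1 when page j links to page i.
    A = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [1, 1, 0]], dtype=float)
    print(pagerank(A))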
You may also be interested in looking at the code for Apache Lucene.
To get into contemporary search algorithm techniques you need calculus and regression analysis to learn machine learning and deep learning, as current Google search has moved away from PageRank and utilizes these. This is partly because link farming enabled people to artificially engineer search results, and partly because of the huge amount of metadata that modern browsers and web servers allow to be collected.
EDIT:
For the webcrawler only portion I'd recommend WebSPHINX. I used this in my senior research in college in conjunction with Lucene.
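If you end up writing the crawler part yourself instead, the core loop is quite small. A rough standard-library-only sketch (breadth-first, with no politeness delays or robots.txt handling, so only a starting point):

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        """Collect the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, limit=20):
        """Breadth-first crawl: fetch a page, extract links, follow them."""
        seen, queue = {seed}, deque([seed])
        while queue and len(seen) < limit:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except Exception:
                continue
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return seen

    print(crawl("https://example.com"))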
I'm currently researching recommender systems and would like to know how other researchers acquire or generate test data to evaluate the systems' performance?
When I was working with recommender systems I had the exact same problem. I enjoyed the GroupLens dataset the most:
http://grouplens.org/node/12
You can download ratings given by users to movies.
Also, I described in my blog some datasets I found while researching:
http://girlincomputerscience.blogspot.com.br/2010/12/datasets.html
Hope it helps!
I don't know what field you're evaluating, but if it's movie recommendations, you could use the MovieLens data from GroupLens to start out with. (It seems like their site is temporarily down, but I'm sure it will be back up soon).
They have three sets of data - 100,000 votes (preferences), 1 million, and 10 million - and it seems like they're more or less the standard that everyone starts out with.
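As an illustration, loading the smallest of those sets tends to look something like this (this assumes the 100K release's tab-separated u.data layout of user, item, rating, timestamp; other releases use different file names and delimiters, so check the accompanying README):

    import pandas as pd

    # Assumed layout of the MovieLens 100K ratings file (u.data):
    #   user_id <tab> item_id <tab> rating <tab> timestamp
    ratings = pd.read_csv(
        "u.data",
        sep="\t",
        names=["user_id", "item_id", "rating", "timestamp"],
    )

    # Quick sanity checks: ratings per user and the overall mean rating.
    print(ratings.groupby("user_id").size().describe())
    print(ratings["rating"].mean())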
I'm working on the technical architecture for a content solution integration. The data from the solution provider runs to millions of rows and is normalised to 3NF. It is updated on a regular schedule (most likely daily) and is split down to a very granular level of atomicity.
I need to search and query this data, and my current inclination is to leave the normalised data alone and create a denormalised database from it (OLTP to OLAP). The 'transfer' can be a custom-built program that contains the necessary business logic in addition to the raw copying, and can be run on a set schedule as required. The denormalised database would then reduce the atomicity and allow the keyword searches and queries to run efficiently. I was looking at using Lucene.NET for the keyword work on the denormalised database.
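To make the transfer idea concrete, here is a rough sketch of what I am picturing (the table and column names are invented purely for illustration; the real job would carry the business logic and run on the agreed schedule):

    import sqlite3

    src = sqlite3.connect("normalised.db")      # stand-in for the provider's 3NF data
    dst = sqlite3.connect("denormalised.db")    # stand-in for the reporting/search database

    dst.execute("""
        CREATE TABLE IF NOT EXISTS article_search (
            article_id   INTEGER PRIMARY KEY,
            title        TEXT,
            author_name  TEXT,
            category     TEXT,
            body         TEXT
        )
    """)

    # Join the granular 3NF tables once during the transfer,
    # so keyword searches and reports no longer have to.
    rows = src.execute("""
        SELECT a.id, a.title, au.name, c.name, a.body
        FROM article a
        JOIN author au  ON au.id = a.author_id
        JOIN category c ON c.id = a.category_id
    """)

    dst.executemany(
        "INSERT OR REPLACE INTO article_search VALUES (?, ?, ?, ?, ?)",
        rows,
    )
    dst.commit()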
So before I sing loudly from the hills that this is the way forward, I wanted some expert opinion on this and on what the perceived "best practice" is. Is the method I have suggested the best way forward considering the data I will be provided? It was suggested that perhaps I could use a 'search engine' to search the normalised data. This scared the hell out of me, but raised the question: what search engine, and how?
Opinions, flames, bad language and help appreciated :)
I have built reporting databases and data warehouses based on data stored in normalized form. There is quite a bit of work involved in the transfer program (ETL). Given your description of the data feed, maybe some of that work has been done for you by the feeder.
Millions of rows isn't a lot, these days. You may be able to get away with report oriented views into the existing database. Try it and see.
The biggest benefit to building an OLAP oriented database is not speed. It's flexibility. "We love this report, but now we want to see it weekly and quarterly instead of monthly. Bam! Done!" "Can you break it down by marketing category instead of manufacturing category? Bam! Done!" And so on.
A reasonably normalized model (3NF/BCNF) provides the best average performance and the fewest modification anomalies for the largest number of scenarios. That's big, so I would start from there. As your requirements are fuzzy, it seems like the most sensible option.
Actually, the most sensible thing would be to go over the requirements until they are a bit more "crisp" ;)
Also, if you could get your hands on a few early extracts from your data provider, you could experiment with them and get a feel for the data distributions (not all people live in one country, and some countries hold more people than others; not all people have children, and the number of children per person varies greatly by country). This is a major point, and it is crucial that the optimizer can make good decisions.
Other than that, I agree with everything Walter said and also gave him my vote.
We're adding extra login information to an existing database record on the order of 3.85KB per login.
There are two concerns about this:
1) Is this too much on-the-wire data added per login?
2) Is this too much extra data we're storing in the database per login?
Given today's technology, are these valid concerns?
Background:
We don't have concrete usage figures, but we average about 5,000 logins per month. We hope to scale to larger customers; however, that would still be tens of thousands per month, not thousands per second.
In the US (our market) broadband has 60% market adoption.
Assuming you have ~80,000 logins per month, you would be adding ~ 3.75 GB per YEAR to your database table.
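A quick back-of-the-envelope check of that figure:

    logins_per_month = 80_000
    kb_per_login = 3.85
    gb_per_year = logins_per_month * kb_per_login * 12 / 1_000_000
    print(round(gb_per_year, 2))   # ~3.7 GB per year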
If you are using a decent RDBMS like MySQL, PostgreSQL, SQLServer, Oracle, etc... this is a laughable amount of data and traffic. After several years, you might want to start looking at archiving some of it. But by then, who knows what the application will look like?
It's always important to consider how you are going to be querying this data, so that you don't run into performance bottlenecks. Without those details, I cannot comment very usefully on that aspect.
But to answer your concern, do not be concerned. Just always keep thinking ahead.
How many users do you have? How often do they have to log in? Are they likely to be on fast connections, or damp pieces of string? Do you mean you're really adding 3.85K each time someone logs in, or per user account? How long do you have to store the data? What benefit does it give you? How does it compare with the amount of data you're already storing? (i.e. is most of your data going to be due to this new part, or will it be a drop in the ocean?)
In short - this is a very context-sensitive question :)
Given that storage and hardware are SOOO cheap these days (relatively speaking, of course), this should not be a concern. Obviously if you need the data then you need the data! You can use replication to several locations so that the added data doesn't need to travel as far over the wire (such as a server on the west coast and one on the east coast). You can manage your data by separating it by state to minimize the size of your tables (similar to what banks do: choose a state as part of the login process so that they look at the right data store). You can use horizontal partitioning to minimize the number of records per table and keep your queries speedy. There are lots of ways to keep large data optimized. Also check into Lucene if you plan to do lots of reads of this data.
In terms of today's average server technology it's not a problem. In terms of your server technology it could be a problem. You need to provide more info.
In terms of storage, this is peanuts, although you want to eventually archive or throw out old data.
In terms of network traffic, this is not much on the server end, but it will affect how quickly your website appears to load and function for a good portion of customers. Although many have broadband, someone somewhere will try it on EDGE or a modem, or while using BitTorrent heavily; your site will appear slow or malfunction altogether, and you'll get loud complaints all over the web. Does it matter? If your users really need your service, they can surely wait; if you are building the next Twitter, the increase in page load time is hardly acceptable.