US phone numbers - formatting

I'm building an app that uses phone numbers to perform different tasks, and recently I've had quite a few requests to implement it for the US market. Unfortunately, as I live in the UK, I don't have much knowledge of US phone number formats, and with so many US users on here I was hoping some of you could help.
I'm looking to obtain a list of sample phone numbers as they appear in the call log on your mobile phones. I'm trying to determine whether they come through in the format +1234567, +001234567, 001234567, 01234567, 1234567, 234567 etc., or perhaps the format can vary.
You're probably (and rightly) hesitant about posting phone numbers on the web, so feel free to change a few digits (I'm mainly interested in the first few digits and the overall format of the numbers).
The more numbers you can provide the better, thanks!

The following formats are common:
+12312322334
2312322334
(231) 232-2334
2322334
232-2334
The last two forms are unusual, though they may be encountered; the area code is implied to be local to the phone.
Note that some combinations are invalid: numbers never start with a "1" (that's the long-distance dialing prefix, optional on cell phones), and the "555" exchange prefix is reserved (which is why it's so commonly used in movies).
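If it helps, here is a rough Java sketch of normalising those variants down to the bare digits (the handling of the "00" international prefix and the fallback to shorter local forms are my assumptions, not a spec):

import java.util.regex.Pattern;

public class NanpNormalizer {
    private static final Pattern NON_DIGIT = Pattern.compile("\\D");

    // Reduce the formats listed above to the bare digits: 10 digits = area code + exchange + line,
    // 7 digits = a local form with the area code implied.
    public static String normalize(String raw) {
        String digits = NON_DIGIT.matcher(raw).replaceAll("");       // drop +, spaces, hyphens, parentheses
        if (digits.startsWith("00")) digits = digits.substring(2);    // international prefix as dialled from e.g. the UK
        if (digits.length() == 11 && digits.startsWith("1")) digits = digits.substring(1); // long-distance "1"
        return digits;
    }

    public static void main(String[] args) {
        for (String s : new String[] {"+1 231 232 2334", "(231) 232-2334", "232-2334"}) {
            System.out.println(normalize(s));
        }
    }
}

Running it on those three examples gives 2312322334, 2312322334 and 2322334.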

U.S. phone numbers have three parts: a three-digit area code, a three-digit exchange (prefix), and a four-digit line number. Generally, these are written in the format (234) 555-1234. If you are calling from the same area code as the person you are calling, you can often omit the area code (the (234) part). For landlines, you often need to dial a 1 first if you include the area code, but most cell phones don't require this.

As I say -- interesting q. Have you searched for something like "dirty north american phone format" or "how are north american phone numbers typically formatted"? Struck me as being something that has to be done often.
Google brings up this as an example: Phone number format provider. It has a) some example formats and b) some code that actually deals with dirty or non-standard formats, and reformats them ...

So -- from my comment I guess I'd strip spaces (and hyphens) to start with, but from then on assume that you've got a right-most part of the number, and that any missing left-most parts represent increasingly wider geographic areas.
In reverse -- if the assumption works, you can create your own sample numbers by taking a standard format number and chopping groups from the left hand side -- I think.
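To illustrate, a throwaway Java sketch that takes one full number and chops groups off the left (the 3-3-4 grouping is the standard NANP split; the 4-digit form at the end is really only extension-style dialling):

public class ChopDemo {
    public static void main(String[] args) {
        String full = "2312322334";              // area code + exchange + line number
        System.out.println("+1" + full);         // fully qualified
        System.out.println(full);                // 10-digit national form
        System.out.println(full.substring(3));   // drop the area code -> 7-digit local form
        System.out.println(full.substring(6));   // drop the exchange too -> 4 digits (unusual)
    }
}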

Related

Oracle String Conversion - Alpha String to Numeric Score, Fuzzy Match

I'm working with a lot of name data where the following events are happening:
In one stream the data is submitted as "Sung" and in the other stream as "Snug". My initial thought was to convert each character to a number so that the sums would be the same; that way, even if two characters are transposed, I'd be able to bucket these appropriately.
The other case is where one stream has "Lillly" as opposed to "Lilly" in the other stream. I'd like to figure out how to fuzzy match these so that I can identify them. I'm not sure if this is possible in Oracle.
I'm working with many millions of data points and trying to figure out how to write these classification buckets so that I can cut down the noise in my primary task: finding records that are truly different people as opposed to clerical errors.
Any thoughts would be very appreciated.
A common measure for such distance is called Levenshtein distance (Wikipedia here). This measures the "edit" distance between two strings -- number of edit operations needed to convert one into the other.
That's the good news. More good news is that Oracle even has an implementation in the UTL_MATCH package.
The bad news is that it is really, really expensive on millions of data points. Unfortunately, I cannot help you there so much. One idea is to determine which names are "close enough" because they already share a certain minimum number of characters.
Another method is to convert the strings to what they sound like. That is called soundex. You may be able to use the two together -- assuming your names are predominantly English (the soundex algorithm was popularised by its use at the US Census Bureau, so it works best on names common in America).
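Both are available straight from SQL, so from Java (or wherever your batch job runs) it is one round trip per pair. A quick sketch, assuming an Oracle 11g+ JDBC connection is already open (everything else here is illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NameSimilarity {
    // Compare two names using Oracle's built-in edit distance and soundex.
    public static void compare(Connection conn, String a, String b) throws Exception {
        String sql = "SELECT UTL_MATCH.EDIT_DISTANCE(?, ?), SOUNDEX(?), SOUNDEX(?) FROM dual";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, a);
            ps.setString(2, b);
            ps.setString(3, a);
            ps.setString(4, b);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                int distance = rs.getInt(1);
                boolean soundAlike = rs.getString(2).equals(rs.getString(3));
                System.out.println(a + " vs " + b + ": edit distance " + distance
                        + (soundAlike ? ", same soundex" : ", different soundex"));
            }
        }
    }
}

On the "Sung" / "Snug" example this comes out as an edit distance of 2 with matching soundex codes, which is exactly the kind of cheap pre-filter that can cut down a many-millions comparison.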

Explain, based on knowledge of the hardware, why "Certain floating-point values cannot be exactly represented inside the computer’s memory"?

In the second line of the program's output, notice that the value of 331.79, which is assigned to floatingVar, is actually displayed as 331.790009. The reason for this inaccuracy is the particular way in which numbers are internally represented inside the computer. You have probably come across the same type of inaccuracy when dealing with numbers on your calculator. If you divide 1 by 3 on your calculator, you get the result .33333333, with perhaps some additional 3s tacked on at the end. The string of 3s is the calculator's approximation to one-third. Theoretically, there should be an infinite number of 3s. But the calculator can hold only so many digits, thus the inherent inaccuracy of the machine. The same type of inaccuracy applies here: certain floating-point values cannot be exactly represented inside the computer's memory.
the above quote comes from Programming in Objective-C – 4th edition
And this post answered a little part of it, but it's not the kind of answer I'm looking for.
Will try to find another book about this later in the day.
Anyway if anyone would like to answer this question, thanks!
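For what it's worth, the effect in that quote is easy to reproduce; here is a minimal Java sketch (the same thing happens in C or Objective-C, since this comes from the IEEE 754 hardware format, not the language):

public class FloatDemo {
    public static void main(String[] args) {
        float f = 331.79f;
        // The nearest 32-bit float to 331.79 is 331.790008544921875,
        // because .79 has no finite expansion in base 2 (just as 1/3 has none in base 10).
        System.out.printf("%.6f%n", f);      // prints 331.790009, as in the book
        System.out.printf("%.20f%n", 0.1);   // doubles are inexact too: 0.10000000000000000555...
    }
}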

Swedish "personnummer" (personal identity number) in SQL

This is a specific instance of an old problem: How to store "numbers" (e.g. phone numbers, IP addresses, social security numbers) in SQL databases?
Background: In Sweden, Personal Identity Numbers ("personnummer") are extremely common: you use them when communicating with the government, the bank, your employer, etc. People born in Sweden are assigned one at birth. My immigrant friends lament the dark couple of weeks before they got a personnummer and could finally get a debit card and start looking for jobs.
My organization needs to store personnummer of our members. We have a SQL database for this. How should I store the data?
From Wikipedia, regarding the format of a personnummer:
The personal identity number consists of 10 digits and a hyphen. The first six correspond to the person's birthday, in YYMMDD form. They are followed by a hyphen. People over the age of 100 replace the hyphen with a plus sign. The seventh through ninth are a serial number. An odd ninth number is assigned to males and an even ninth number is assigned to females. Some county authorities, such as Stockholm, and some banks, have started using 12 digit numbers to allow YYYYMMDD. This format is also used on some Swedish ID-cards and on the Swedish European Health Insurance Cards but not on state-issued identity documents.
The tenth digit is a checksum which was introduced in 1967 when the system was computerized.
So, a personnummer could be "120101-3842" for a person born this year. It is also commonly formatted as "20120101-3842", both because of Y2K and because the "replace the hyphen with a plus sign" rule is not well known.
In a database column, I imagine I can:
Store it as a VARCHAR, formatted as "120101-3842", "20120101-3842" or "201201013842" (shaving off a byte by getting rid of the superfluous hyphen in the YYYYMMDD format).
Store the full YYYYMMDDXXXX as an INTEGER, which is too big for 32 bits but fits without problems in 64 bits.
There won't be any issues with leading zeroes in this case, and using a VARCHAR is almost twice the size. Unlike IP addresses, storing this number as an INTEGER does not make it harder to read for a human (i.e. "127.0.0.1" compared to 2130706433).
I appreciate the "strictness" of an INTEGER column but also feel that this might run into unseen issues.
EDIT: We have a real need to validate this input with the checksum et cetera, which requires doing math on the individual digits (multiplying, summing, etc.). Since the digits aren't really ... uh ... part of a quantity, but of the decimal formatting, it might make sense to consider it a VARCHAR after all.
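On the validation part: as far as I know the check digit is the standard Luhn algorithm (the same mod-10 scheme as credit card numbers) applied to the ten-digit, two-digit-year form, so whichever storage format you choose you can verify it on the way in; with the 12-digit form, drop the two century digits first. A Java sketch, assuming the input has already been reduced to ten digits (YYMMDDNNNC):

public class PersonnummerCheck {
    // Validate the Luhn check digit of a personnummer given as exactly ten digits (YYMMDDNNNC).
    public static boolean hasValidCheckDigit(String tenDigits) {
        if (!tenDigits.matches("\\d{10}")) return false;
        int sum = 0;
        for (int i = 0; i < 9; i++) {
            int product = (tenDigits.charAt(i) - '0') * (i % 2 == 0 ? 2 : 1); // weights 2,1,2,1,... from the left
            sum += product / 10 + product % 10;                               // add the digits of the product
        }
        return (10 - sum % 10) % 10 == tenDigits.charAt(9) - '0';
    }

    public static void main(String[] args) {
        // "120101-3842" from above, with the hyphen stripped:
        System.out.println(hasValidCheckDigit("1201013842"));  // true
    }
}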
Use a VARCHAR with a fixed length, because it is the simplest approach. And I don't think your organisation will be storing the numbers of all 9.5 million inhabitants, so is saving space really a design goal? :)
So, as I understand it, the hyphen / plus sign is only required for the format with the 2-digit year.
If I were you, I would convert to the 4-digit year format on the application side (and drop the hyphen), then store the resulting value as an integer. As you have stated, this will save space and will allow you to mathematically transform the values (although I imagine that on personal numbers this may be irrelevant).
I think the key here is that you should choose a single format rather than trying to manage two different formats in the database. This will also help with application consistency. When it comes to external applications that require one format or the other, you can place a transform in the transfer code.
On a side note, it should be fairly trivial to create a trigger that automatically converts the 2-digit year format (as long as you replace the hyphen / plus sign with the right century digits) to the 4-digit year format.
I would store the canonical form 201201013842 as a CHAR (rather than a VARCHAR).
The bottom line is that you do not control the semantics of the number (the Swedish authorities do). If at some point they decide to add non-numeric characters to the number (as the older format already does), you will be better equipped to deal with the change.
We have the same problem, and we currently store it as yyyyMMdd-xxxx, but if I were to redesign this today I would store the yyyyMMdd part in a date field, as that would handle the validation of the date; then I would store the other 4 digits in an nchar(4) and add a constraint to ensure it contains only digits.

How to simplify big numeric input from user? [Objective C]

I'm building a very basic iPhone app where the user will be able to enter or select a very large numeric cash value (usually in the thousands or millions).
At the moment I am using a simple text box entry with number pad selected.
I am going to use the example of a Football transfer fee as an analogy.
A transfer fee can be many millions and I really do not want the user mistyping zeros, or getting frustrated with the number of zeros they have to enter.
In addition, because the text box's numeric cash value is not displayed with any currency formatting, it is very unintuitive to know just how much you are entering.
In this thread I have a way of displaying big numbers on the screen; you'll also notice the numbers are formatted in chunks (i.e. 2.25m, 2m, 7.25m, etc.) -- it makes the process more streamlined and is more visually intuitive.
But what I am unsure about is how to make it easy for the user to enter big numbers without typing stupidly long strings of zeros every time.
Possible solution 1 -- Use a UIPickerView with 3+ segments for each of the units.
Problem -- it won't handle smaller numbers properly, and you may get odd-looking numbers like 1.15k which, although correct, is not what I want to display.
Possible solution 2 -- Use a +/- button to allow the user to simply increase/decrease the number in steps of 250 or 500. This is the simplest answer, but it's not as elegant as a UIPickerView.
If there is another way to do this, a way to simplify the input of large numbers from a user, I'd be interested.
You could add formatted output right above or below the text field. As they enter numbers, update the formatted field adding currency symbols, commas and decimals. Not the most elegant way to do this, but it would be simple to implement, and intuitive to the user.

Algorithm for almost similar values search

I have a Persons table in SQL Server 2008.
My goal is to find Persons who have almost similar addresses.
The address is described with columns state, town, street, house, apartment, postcode and phone.
Due to some differences specific to certain states (these are not US states) and the human factor (mistakes in addresses, etc.), addresses are not filled in following a single pattern.
Most common mistakes in addresses
Case sensitivity
Someone wrote "apt.", another one "apartment" or "ap." (although addresses aren't written in English)
Spaces, dots, commas
Differences in writing street names, like "Dr. Jones str." or "Doctor Jones street" or "D. Jon. st." or "Dr Jones st", etc.
The main problem is that data isn't in the same pattern, so it's really difficult to find similar addresses.
Is there any algorithm for this kind of issue?
Thanks in advance.
UPDATE
As I mentioned, the address is separated into different columns. Should I generate a string by concatenating the columns, or apply your steps to each column?
I assume I shouldn't concatenate the columns, but if I compare the columns separately, how should I organize it? Should I find similarities for each column and then union them, or intersect them, or something else?
Should I collect some statistics, or use some kind of learning algorithm?
Suggest approaching it thus:
Create word-level n-grams (a trigram/4-gram might do it) from the various entries
Do a many x many string comparison and cluster them by string distance. Someone suggested Levenshtein; there are better metrics for this kind of task: Jaro-Winkler distance and Smith-Waterman work better. A library such as SimMetrics would make life a lot easier
Once you have clusters of n-grams, you can resolve the whole string using the constituent subgrams i.e. D.Jones St => Davy Jones St. => DJones St.
Should not be too hard, this is an all-too-common problem.
Update: Based on your update above, here are the suggested steps
Concatenate your columns into a single string, perhaps by creating a db "view". For example:
create view vwAddress
as
select top 10000
state, town, street, house, apartment, postcode,
-- the ' ' separators keep the words apart for the n-gram / comparison step
state + ' ' + town + ' ' + street + ' ' + house + ' ' + apartment + ' ' + postcode as Address
from ...
Write a separate application (say in Java or C#/VB.NET) and use an algorithm like Jaro-Winkler to estimate the string distance for the combined address, to create a many x many comparison, and write the results into a separate table:
address1 | address n | similarity
You can use Simmetrics to get the similarity thus:
JaroWinkler objJw = new JaroWinkler();
double sim = objJw.GetSimilarity(address1, addressN);
You could also trigram it so that an address such as "1 Jones Street, Sometown, SomeCountry" becomes "1 Jones Street", "Jones Street Sometown", and so on....
and compare the trigrams (or even 4-grams) for higher accuracy.
Finally you can order by similarity to get a cluster of the most similar addresses and decide on an appropriate threshold. Not sure why you are stuck.
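A sketch of the pairwise step in Java, since SimMetrics started life as a Java library (package and method names here are from memory of the Java version, where the method is getSimilarity rather than GetSimilarity, so check them against the jar you actually use):

import java.util.List;

import uk.ac.shef.wit.simmetrics.similaritymetrics.JaroWinkler; // Java SimMetrics jar

public class AddressClustering {
    // Many x many comparison over the concatenated Address strings from the view above.
    public static void compareAll(List<String> addresses, float threshold) {
        JaroWinkler metric = new JaroWinkler();
        for (int i = 0; i < addresses.size(); i++) {
            for (int j = i + 1; j < addresses.size(); j++) {
                float sim = metric.getSimilarity(addresses.get(i), addresses.get(j));
                if (sim >= threshold) {
                    // in practice: INSERT INTO AddressSimilarity (address1, address2, similarity) VALUES (...)
                    System.out.printf("%.3f  %s | %s%n", sim, addresses.get(i), addresses.get(j));
                }
            }
        }
    }
}

Bear in mind that 10,000 rows means roughly 50 million pairs, so you will want the threshold (or some blocking on, say, postcode) to keep the output table manageable.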
I would try to do the following:
split up the address in multiple words, get rid of punctuation at the same time
check all the words for patterns that are typically written differently and replace them with a common name (e.g. replace apartment, ap., ... by apt, replace Doctor by Dr., ...)
put all the words back in one string alphabetically sorted
compare all the addresses using a fuzzy string comparison algorithm, e.g. Levenshtein
tweak the parameters of the Levenshtein algorithm (e.g. you want to allow more differences on longer strings)
finally do a manual check of the strings
Of course, the solution to keep your data 'in shape' is to have explicit fields for each of your characteristics in your database. Otherwise, you will end up doing this exercise every few months.
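To make the steps concrete, here is a Java sketch of the normalise-then-compare part (Apache Commons Text supplies the Levenshtein implementation; the little abbreviation map is obviously just a starting point):

import java.util.Arrays;
import java.util.Map;

import org.apache.commons.text.similarity.LevenshteinDistance;

public class AddressNormalizer {
    private static final Map<String, String> CANONICAL = Map.of(
            "apartment", "apt", "ap", "apt",
            "doctor", "dr", "street", "st", "str", "st");

    // Lower-case, strip punctuation, replace known variants, then sort the words alphabetically.
    public static String normalize(String address) {
        String[] words = address.toLowerCase().replaceAll("[.,;]", " ").trim().split("\\s+");
        for (int i = 0; i < words.length; i++) {
            words[i] = CANONICAL.getOrDefault(words[i], words[i]);
        }
        Arrays.sort(words);
        return String.join(" ", words);
    }

    public static void main(String[] args) {
        String a = normalize("Dr. Jones str. 5, apt. 3");
        String b = normalize("Doctor Jones street 5, apartment 3");
        int distance = new LevenshteinDistance().apply(a, b);
        System.out.println(a + " | " + b + " | distance " + distance); // distance 0 after normalisation
    }
}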
The main problem I see here is to exactly define equality.
Even if someone writes Jon. and another writes Jone. - you will never be able to say if they are the same (Jon = Jonathan, Joneson, Jonedoe, whatever ;).
I work in a firm where we have to handle exactly this problem - I'm afraid I have to tell you that this kind of checking of address lists for navigation systems is done "by hand" most of the time. Abbreviations are sometimes context dependent, and there are other things that make this difficult. Of course replacing strings etc. is done with Python - but telling you the MEANING of such an abbreviation can only be done by script in a few cases. ("St." -> Can be "Saint" or "Street". How to decide? Impossible... this is human work.)
Another big problem is, as you said: is "DJones" a street or a person? Or both? Which one is meant here? Is this DJones the same as Dr Jones, or the same as Don Jones? It's impossible to decide!
You can do some work with lists as presented in another answer here - but it will give you plenty of "false positives".
You have a postcode field!!!
So, why don't you just buy a postcode table for your country
and use that to clean up your street/town/region/province information?
I did a project like this in the last century. Basically it was a consolidation of two customer files after a merger, and it involved names and addresses from three different sources.
Firstly, as many posters have suggested, convert all the common words, abbreviations and spelling mistakes to a common form: "Apt.", "Apatment" etc. to "Apt".
Then look through the name and identify the first letter of the first name, plus the first surname. (Not that easy; consider "Dr. Med. Sir Henry de Baskerville Smythe".) But don't worry; where there are ambiguities just take both! So if you're lucky you get HBASKERVILLE and HSMYTHE. Now get rid of all the vowels, as that's where most spelling variations occur, so now you have HBSKRVLL HSMTH.
You would also get these strings from "H. Baskerville","Sir Henry Baskerville Smith" and unfortunately "Harold Smith" but we are talking fuzzy matching here!
Perform a similar exercise on the street, and apartment and postcode fields. But do not throw away the original data!
You now come to the interesting bit. First you compare each of the original strings and give, say, 50 points for each string that matches exactly. Then go through your "normalised" strings and give, say, 20 points for each one that matches exactly. Then go through all the strings and give, say, 5 points for each four-character-or-more substring they have in common. For each pair compared you will end up with some scores > 150, which you can consider a certain match, some scores less than 50, which you can consider not matched, and some in between which have some probability of matching.
You need some more tweaking to improve this, adding various rules like "subtract 20 points for a surname of 'Smith'". You really have to keep running and tweaking until you are happy with the resulting matches, but once you look at the results you get a pretty good feel for which score to consider a "match" and which are the false positives you need to get rid of.
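A rough Java sketch of that scoring, for one pair of fields (the point values are the ones above; the "four characters or more in common" rule is approximated here by counting shared 4-grams of the normalised strings):

import java.util.HashSet;
import java.util.Set;

public class FuzzyScore {
    // 50 for an exact match, 20 for a match after normalisation,
    // plus 5 for every distinct 4-character substring the normalised forms share.
    public static int score(String original1, String original2) {
        int points = 0;
        if (original1.equalsIgnoreCase(original2)) points += 50;

        String n1 = normalize(original1), n2 = normalize(original2);
        if (n1.equals(n2)) points += 20;

        Set<String> grams = new HashSet<>();
        for (int i = 0; i + 4 <= n1.length(); i++) grams.add(n1.substring(i, i + 4));
        for (String g : grams) if (n2.contains(g)) points += 5;
        return points;
    }

    // Upper-case and keep only the consonants, as suggested above (H. Baskerville -> HBSKRVLL).
    private static String normalize(String s) {
        return s.toUpperCase().replaceAll("[^B-DF-HJ-NP-TV-XZ]", "");
    }
}

Sum the scores over the name, street, apartment and postcode pairs and then apply the thresholds described above.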
I think the amount of data could affect what approach works best for you.
I had a similar problem when indexing music from compilation albums with various artists. Sometimes the artist came first, sometimes the song name, with various separator styles.
What I did was to count the number of occurrences in other entries with the same value, to make an educated guess whether it was the song name or an artist.
Perhaps you can use soundex or similar algorithm to find stuff that are similar.
EDIT: (maybe I should clarify that I assumed that artist names were more likely to be more frequently reoccurring than song names.)
One important thing that you mention in the comments is that you are going to do this interactively.
This allows you to parse user input and at the same time validate guesses on any abbreviations and correct a lot of mistakes (the way, for example, phone number entry works in some contact management systems - the system makes a best effort to parse and correct the country code, area code and the number, but ultimately the user is presented with the guess and has the chance to correct the input).
If you want to do it really well, then keeping databases/dictionaries of postcodes, towns, streets, abbreviations and their variations can improve data validation and pre-processing.
So, at the least you would have a fully qualified address. If you can do this for all the input, you will have all the data categorized, and matches can then be strict on certain fields and less strict on others, with a matching score calculated according to the weights you assign.
After you have consistently pre-processed the input then n-grams should be able to find similar addresses.
Have you looked at SQL Server Integration Services for this? The Fuzzy Lookup component allows you to find 'Near matches': http://msdn.microsoft.com/en-us/library/ms137786.aspx
For new input, you could call the package from .Net code, passing the value row to be checked as a set of parameters, you'd probably need to persist the token index for this to be fast enough for user interaction though.
There's an example of address matching here: http://msdn.microsoft.com/en-us/magazine/cc163731.aspx
I'm assuming that response time is not critical and that the problem is finding an existing address in a database, not merging duplicates. I'm also assuming the database contains a large number of addresses (say 3 million), rather than a number that could be cleaned up economically by hand or by Amazon's Mechanical Turk.
Pre-computation - Identify address fragments with high information content.
Identify all the unique words used in each database field and count their occurrences.
Eliminate very common words and abbreviations. (Street, st., appt, apt, etc.)
When presented with an input address,
Identify the most unique word and search (Street LIKE '%Jones%') for existing addresses containing those words.
Use the pre-computed statistics to estimate how many addresses will be in the results set
If the estimated results set is too large, select the second-most unique word and combine it in the search (Street LIKE '%Jones%' AND Town LIKE '%Anytown%')
If the estimated results set is too small, select the second-most unique word and combine it in the search (Street LIKE '%Aardvark%' OR Town LIKE '%Anytown%')
If the actual results set is too large/small, repeat the query, adding further terms as before.
The idea is to find enough fragments with high information content in the address which can be searched for to give a reasonable number of alternatives, rather than to find the most optimal match. For more tolerance to misspelling, trigrams, tetra-grams or soundex codes could be used instead of words.
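A Java sketch of the query-building part, assuming the word counts from the pre-computation step are already in a map (the table and column names are invented, only one column is searched for brevity, and real code should parameterise rather than splice strings):

import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AddressSearch {
    // Build a LIKE-based query from the rarest words of the input address.
    public static String buildQuery(List<String> inputWords, Map<String, Integer> wordCounts, int maxTerms) {
        List<String> rarestFirst = inputWords.stream()
                .filter(wordCounts::containsKey)
                .sorted(Comparator.comparingInt(wordCounts::get))
                .limit(maxTerms)
                .collect(Collectors.toList());
        if (rarestFirst.isEmpty()) throw new IllegalArgumentException("no known words in the input address");
        StringBuilder sql = new StringBuilder("SELECT * FROM Persons WHERE ");
        for (int i = 0; i < rarestFirst.size(); i++) {
            if (i > 0) sql.append(" AND ");
            sql.append("Street LIKE '%").append(rarestFirst.get(i)).append("%'");
        }
        return sql.toString();
    }
}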
Obviously if you have lists of actual states / towns / streets then some data clean-up could take place both in the database and in the search address. (I'm very surprised the Armenian postal service does not make such a list available, but I know that some postal services charge excessive amounts for this information.)
As a practical matter, most systems I see in use try to look up people's accounts by their phone number if possible: obviously whether that is a practical solution depends upon the nature of the data and its accuracy.
(Also consider the lateral-thinking approach: could you find a mail-order mail-list broker company which will clean up your database for you? They might even be willing to pay you for use of the addresses.)
I've found a great article.
By adding some DLLs as SQL user-defined functions, we can use string comparison algorithms from the SimMetrics library.
Check it out:
http://anastasiosyal.com/archive/2009/01/11/18.aspx
The possibilities for such variations are countless, and even if such an algorithm exists, it can never be foolproof. You can't have a spell checker for nouns, after all.
What you can do is provide a drop-down list of previously entered field values, so that users can select one if a particular name already exists.
It's better to have separate fields for each value, like apartment and so on.
You could throw all addresses at a web service like Google Maps (I don't know whether this one is suitable, though) and see whether they come up with identical GPS coordinates.
One method could be to apply the Levenshtein distance algorithm to the address fields. This will allow you to compare the strings for similarity.
Edit
After looking at the kinds of address differences you are dealing with, this may not be helpful after all.
Another idea is to use learning. For example you could learn, for each abbreviation and its place in the sentence, what the abbreviation means.
3 Jane Dr. -> Dr (in 3rd position (or last)) means Drive
Dr. Jones St -> Dr (in 1st position) means Doctor
You could, for example, use decision trees and have a user train the system. Probably a few examples of each use would be enough. You wouldn't be able to confidently classify single-letter abbreviations like the D. in D.Jones, which could be David Jones or Dr. Jones. But after a first level of translation you could look up a street index of the town and see if you can expand the D. into a street name.
Again, you would run each address through the decision tree before storing it.
It feels like there should be some commercial products doing this out there.
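As a much-simplified stand-in for the decision tree (one hand-written rule per abbreviation-and-position pair), something like this Java sketch already gets the two examples above right:

import java.util.Map;

public class AbbreviationExpander {
    // Key = abbreviation + "@" + coarse position (first / middle / last word of the address).
    private static final Map<String, String> RULES = Map.of(
            "dr@first", "Doctor",
            "dr@last", "Drive",
            "st@last", "Street");

    public static String expand(String address) {
        String[] words = address.toLowerCase().replace(".", "").split("\\s+");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < words.length; i++) {
            String pos = (i == 0) ? "first" : (i == words.length - 1) ? "last" : "middle";
            out.append(RULES.getOrDefault(words[i] + "@" + pos, words[i])).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(expand("3 Jane Dr."));   // 3 jane Drive
        System.out.println(expand("Dr. Jones St")); // Doctor jones Street
    }
}

A trained decision tree would just learn these rules (and their exceptions) from the examples instead of having them hand-coded.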
A possibility is to have a dictionary table in the database that maps all the variants to the 'proper' version of the word:
*Value* | *Meaning*
Apt. | Apartment
Ap. | Apartment
St. | Street
Then you run each word through the dictionary before you compare.
Edit: this alone is too naive to be practical (see comment).