How to delete comments and comments count from this hash - ruby-on-rails-3
How do I delete the comments and the comments count from this hash? I want the data without comments.
{"id"=>"149536368430_10151515647758431", "from"=>{"category"=>"Product/service", "name"=>"Swiffer", "id"=>"149536368430"}, "to"=>{"data"=>[{"category"=>"Media/news/publishing", "name"=>"Hispanicize", "id"=>"119781464743025"}, {"category"=>"Household supplies", "name"=>"P&G everyday", "id"=>"142671152446083"}]}, "message"=>"Swiffer will be at Hispanicize in Miami Wednesday-Saturday this week. If you are attending, come dust with us at the P&G everyday Lounge and take the Abuelita Test (or Grandmother Test!)", "message_tags"=>{"19"=>[{"id"=>"119781464743025", "name"=>"Hispanicize", "type"=>"page", "offset"=>19, "length"=>11}], "117"=>[{"id"=>"142671152446083", "name"=>"P&G everyday", "type"=>"page", "offset"=>117, "length"=>12}]}, "picture"=>"http://photos-d.ak.fbcdn.net/hphotos-ak-ash3/155668_10151515647728431_245572821_s.png", "link"=>"http://www.facebook.com/photo.php?fbid=10151515647728431&set=a.186758558430.126763.149536368430&type=1&relevant_count=1", "icon"=>"http://static.ak.fbcdn.net/rsrc.php/v2/yz/r/StEh3RhPvjk.gif", "actions"=>[{"name"=>"Comment", "link"=>"http://www.facebook.com/149536368430/posts/10151515647758431"}, {"name"=>"Like", "link"=>"http://www.facebook.com/149536368430/posts/10151515647758431"}], "privacy"=>{"value"=>""}, "type"=>"photo", "status_type"=>"added_photos", "object_id"=>"10151515647728431", "created_time"=>"2013-04-08T18:03:01+0000", "updated_time"=>"2013-04-08T18:22:37+0000", "shares"=>{"count"=>1}, "likes"=>{"data"=>[{"name"=>"Penny Denton", "id"=>"100000188001574"}, {"name"=>"Angie Altman", "id"=>"100002250974930"}, {"name"=>"Jeannise R. Clagett", "id"=>"100002279556425"}, {"name"=>"Andy Tappan", "id"=>"100000485732671"}], "count"=>128}, "comments"=>{"data"=>[{"id"=>"149536368430_10151515647758431_9285816", "from"=>{"name"=>"Melissa Rivera", "id"=>"600150684"}, "message"=>"Thats cute!! Love the idea!! :)", "created_time"=>"April 08 at 06:22PM"}], "count"=>4}, "publishedDate"=>"2013-04-08 18:22:37", "source"=>"fb"}
Any help? I will be very thankful!
my_hash = {"id"=>"149536368430_10151515647758431", "from"=>{"category"=>"Product/service", "name"=>"Swiffer", "id"=>"149536368430"}, "to"=>{"data"=>[{"category"=>"Media/news/publishing", "name"=>"Hispanicize", "id"=>"119781464743025"}, {"category"=>"Household supplies", "name"=>"P&G everyday", "id"=>"142671152446083"}]}, "message"=>"Swiffer will be at Hispanicize in Miami Wednesday-Saturday this week. If you are attending, come dust with us at the P&G everyday Lounge and take the Abuelita Test (or Grandmother Test!)", "message_tags"=>{"19"=>[{"id"=>"119781464743025", "name"=>"Hispanicize", "type"=>"page", "offset"=>19, "length"=>11}], "117"=>[{"id"=>"142671152446083", "name"=>"P&G everyday", "type"=>"page", "offset"=>117, "length"=>12}]}, "picture"=>"http://photos-d.ak.fbcdn.net/hphotos-ak-ash3/155668_10151515647728431_245572821_s.png", "link"=>"http://www.facebook.com/photo.php?fbid=10151515647728431&set=a.186758558430.126763.149536368430&type=1&relevant_count=1", "icon"=>"http://static.ak.fbcdn.net/rsrc.php/v2/yz/r/StEh3RhPvjk.gif", "actions"=>[{"name"=>"Comment", "link"=>"http://www.facebook.com/149536368430/posts/10151515647758431"}, {"name"=>"Like", "link"=>"http://www.facebook.com/149536368430/posts/10151515647758431"}], "privacy"=>{"value"=>""}, "type"=>"photo", "status_type"=>"added_photos", "object_id"=>"10151515647728431", "created_time"=>"2013-04-08T18:03:01+0000", "updated_time"=>"2013-04-08T18:22:37+0000", "shares"=>{"count"=>1}, "likes"=>{"data"=>[{"name"=>"Penny Denton", "id"=>"100000188001574"}, {"name"=>"Angie Altman", "id"=>"100002250974930"}, {"name"=>"Jeannise R. Clagett", "id"=>"100002279556425"}, {"name"=>"Andy Tappan", "id"=>"100000485732671"}], "count"=>128}, "comments"=>{"data"=>[{"id"=>"149536368430_10151515647758431_9285816", "from"=>{"name"=>"Melissa Rivera", "id"=>"600150684"}, "message"=>"Thats cute!! Love the idea!! :)", "created_time"=>"April 08 at 06:22PM"}], "count"=>4}, "publishedDate"=>"2013-04-08 18:22:37", "source"=>"fb"}
my_hash.reject { |key,value| key == 'comments' }
This returns a new hash without the comments key; my_hash itself is left unchanged.
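If you want to remove the key in place instead, or prefer a named helper, two common variants (a sketch; except comes from ActiveSupport, which Rails 3 ships with):

my_hash.delete('comments')         # mutating: removes the key from my_hash itself
clean = my_hash.except('comments') # non-mutating, via ActiveSupport's Hash#except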
Related
Search and delete lines before and after matching a pattern [closed]
The input contains calendar entries framed by { and },. I want to cut out all Mozilla-created entries, which look like the one below, and save the result in a new file.

{ "c_content" = "BEGIN:VCALENDAR PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN VERSION:2.0 BEGIN:VTIMEZONE .. END:VCALENDAR"; "c_name" = "0000000-0000-000-0000-00000000000.ics"; },

The content of one calendar entry is always 39 lines in total.

gawk 'BEGIN{RS=ORS="};"}/Mozilla.org\/NONSGM/;END{printf "\n"}' inputfile

This prints all matching entries, but I want the opposite result (like a grep -v).

Input sample: the second calendar entry, containing the line "PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN", should be filtered out.

{ "c_content" = "BEGIN:VCALENDAR PRODID:-//Inverse inc./SOGo 5.1.0//EN VERSION:2.0 BEGIN:VTIMEZONE TZID:Europe/Berlin LAST-MODIFIED:20210303T135712Z X-LIC-LOCATION:Europe/Berlin BEGIN:DAYLIGHT TZNAME:CEST TZOFFSETFROM:+0100 TZOFFSETTO:+0200 DTSTART:19700329T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU END:DAYLIGHT BEGIN:STANDARD TZNAME:CET TZOFFSETFROM:+0200 TZOFFSETTO:+0100 DTSTART:19701025T030000 RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20180518T085937Z LAST-MODIFIED:20180518T090431Z DTSTAMP:20180518T090432Z UID:005A3BF6-39A9-4771-8170-FD9E74AA818B SUMMARY:Firma Byom DTSTART;TZID=Europe/Berlin:20180518T130000 DTEND;TZID=Europe/Berlin:20180518T140000 CLASS:PUBLIC DESCRIPTION:Prospekt \U00DCbergabe SEQUENCE:1 TRANSP:OPAQUE BEGIN:VALARM ACTION:NONE TRIGGER;VALUE=DATE-TIME:19760401T005545Z END:VALARM END:VEVENT END:VCALENDAR"; "c_name" = "005A3BF6-39A9-4771-8170-FD9E74AA818B.ics"; },

{ "c_content" = "BEGIN:VCALENDAR PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN VERSION:2.0 BEGIN:VTIMEZONE TZID:W. Europe Standard Time BEGIN:STANDARD DTSTART:19701025T030000 RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU;BYHOUR=3;BYMINUTE=0 TZNAME:Mitteleurop\U00E4ische Zeit TZOFFSETFROM:+0200 TZOFFSETTO:+0100 END:STANDARD BEGIN:DAYLIGHT DTSTART:19700329T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU;BYHOUR=2;BYMINUTE=0 TZNAME:Mitteleurop\U00E4ische Sommerzeit TZOFFSETFROM:+0100 TZOFFSETTO:+0200 END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT CLASS:PUBLIC DTEND;VALUE=DATE:20220330 DTSTAMP:20220406T184433Z DTSTART;VALUE=DATE:20220329 PRIORITY:5 SEQUENCE:0 SUMMARY:This entry has to be filtered out TRANSP:TRANSPARENT UID:040000008200E00074C5B7101A82E008000000000015970CF649D801000000000000000 0100000001EAC086F1BE63E448C32EE561BCB4A1E X-MICROSOFT-CDO-BUSYSTATUS:FREE END:VEVENT END:VCALENDAR"; "c_name" = "040000008200E00074C5B7101A82E008000000000015970CF649D8010000000000000000100000001EAC086F1BE63E448C32EE561BCB4A1E.ics"; },

{ "c_content" = "BEGIN:VCALENDAR PRODID:-//Inverse inc./SOGo 5.1.0//EN VERSION:2.0 BEGIN:VTIMEZONE TZID:Europe/Berlin LAST-MODIFIED:20210303T135712Z X-LIC-LOCATION:Europe/Berlin BEGIN:DAYLIGHT TZNAME:CEST TZOFFSETFROM:+0100 TZOFFSETTO:+0200 DTSTART:19700329T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU END:DAYLIGHT BEGIN:STANDARD TZNAME:CET TZOFFSETFROM:+0200 TZOFFSETTO:+0100 DTSTART:19701025T030000 RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20200124T121710Z LAST-MODIFIED:20200310T145851Z DTSTAMP:20200310T145853Z UID:0085F167-6A6E-4D8D-BE15-F1CAEF2C6CD0 SUMMARY:Herr Mayer DTSTART;TZID=Europe/Berlin:20200127T083000 DTEND;TZID=Europe/Berlin:20200127T093000 CLASS:PUBLIC DESCRIPTION:Besuch SEQUENCE:0 TRANSP:OPAQUE END:VEVENT END:VCALENDAR"; "c_name" = "0085F167-6A6E-4D8D-BE15-F1CAEF2C6CD0.ics"; },
Two awk scripts to suggest.

Script 1, based on counting lines, assuming each record is 36 lines long:

awk '{--skip}/Mozilla.org/{skip=36}skip<1{print}' input.txt

Take care with the first record (2 header lines) and the last record (34 trailing lines).

Script 2, assuming the record separator is },, i.e. each record ends with },:

awk '!/Mozilla.org/{printf "%s",$0 RT}' RS="[[:space:]]*}," input.txt

Take care with the tail of the last record: does the file end with },?

Thanks a lot! Since the records are sadly not all 36 lines long, I tried script 2, but I do not get any output from it. When I omit the "!" I do get the Mozilla.org entries, so it seems that the inverse with "!" has no effect.
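Two things worth checking with script 2 (a sketch, under the assumption that one of these is the culprit): RT and a regex-valued RS are GNU awk extensions, so with mawk or BusyBox awk the script silently misbehaves, and in csh-style shells the ! can trigger history expansion even inside quotes. Spelling the negation with next avoids the ! entirely:

gawk '/Mozilla\.org\/NONSGML/{next} {printf "%s", $0 RT}' RS='[[:space:]]*},' inputfile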
How to get section heading of tables in wikipedia through API
How do I get the section headings for the individual tables (Xia dynasty (夏朝) (2070–1600 BC), Shang dynasty (商朝) (1600–1046 BC), Zhou dynasty (周朝) (1046–256 BC), etc.) of the List of Chinese monarchs on Wikipedia via the API? I use the code below to connect:

from pprint import pprint
import requests, wikitextparser

r = requests.get(
    'https://en.wikipedia.org/w/api.php',
    params={
        'action': 'query',
        'titles': 'List_of_Chinese_monarchs',
        'prop': 'revisions',
        'rvprop': 'content',
        'format': 'json',
    }
)
r.raise_for_status()
pages = r.json()['query']['pages']
body = next(iter(pages.values()))['revisions'][0]['*']
doc = wikitextparser.parse(body)
print(f'{len(doc.tables)} tables retrieved')
han = doc.tables[5].data()

doc.tables[6].data(), doc.tables[i].data(), etc. only return the table values, without their <h2> section headings. I would like the API to return a list of title strings corresponding to each of the 83 tables returned. Original website: https://en.wikipedia.org/wiki/List_of_Chinese_monarchs
I'm not sure why you are using doc.tables when it is the sections you are interested in. This works for me:

for i in range(1, 94):
    print(doc.sections[i].title.replace('[[', '').replace(']]', ''))

I get 94 sections though rather than 83, and while you can use len(doc.sections), this will include See also etc. There must be a more elegant way of removing the wikilinks.
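To pair each table with the heading of the section it sits in, rather than indexing sections blindly, you can walk the sections and read the tables inside each one. A sketch (it relies on Section objects inheriting .tables from WikiText; note that sections nest, so a table in a subsection also appears in its parent's .tables, and you may want to filter on section.level):

for section in doc.sections:
    if section.title is None:  # the lead section has no heading
        continue
    title = section.title.strip().replace('[[', '').replace(']]', '')
    for table in section.tables:
        print(title, '->', len(table.data()), 'rows')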
Amadeus API: list of all possible hotel "amenities"
In the Amadeus hotels API there are amenity choices, and the search results contain various possibilities as well. To make the amenities more user-readable, I'd like a FULL list of ALL the different possible amenities so that I can populate a database with each amenity code and its translations. For a client searching for hotels, values like ACC_BATHS and SAFE_DEP_BOX are not exactly reader-friendly... I'm referring to this:

{ "data": [ { "type": "hotel-offers", "hotel": { "type": "hotel", "cityCode": "MIA", ... "amenities": [ "HANDICAP_FAC", "ACC_BATHS", "ACC_WASHBASIN", "ACC_BATH_CTRLS", "ACC_LIGHT_

Where can I find a csv of all amenities?
I contacted Amadeus tech support and they answered with this list of 226 codes (you can copy it, it's CSV format: NAME_OF_AMENITY,amenity_code):

PHOTOCOPIER,BUS.2
PRINTER,BUS.28
AUDIO-VIS_EQT,BUS.37
WHITE/BLACKBOARD,BUS.38
BUSINESS_CENTER,BUS.39
CELLULAR_PHONE_RENTAL,BUS.40
COMPUTER_RENTAL,BUS.41
EXECUTIVE_DESK,BUS.42
LCD/PROJECTOR,BUS.45
MEETING_ROOMS,BUS.46
OVERHEAD_PROJECTOR,BUS.48
SECRETARIAL_SERVICES,BUS.49
CONFERENCE_SUITE,BUS.94
CONVENTION_CTR,BUS.95
MEETING_FACILITIES,BUS.96
24_HOUR_FRONT_DESK,HAC.1
DISABLED_FACILITIES,HAC.101
MULTILINGUAL_STAFF,HAC.103
WEDDING_SERVICES,HAC.104
BANQUETING_FACILITIES,HAC.105
PORTER/BELLBOY,HAC.106
BEAUTY_PARLOUR,HAC.107
WOMENS_GST_RMS,HAC.110
PHARMACY,HAC.111
120_AC,HAC.113
120_DC,HAC.114
220_AC,HAC.115
220_DC,HAC.117
BARBECUE,HAC.118
BUTLER_SERVICE,HAC.136
CAR_RENTAL,HAC.15
CASINO,HAC.16
BAR,HAC.165
LOUNGE,HAC.165
TRANSPORTATION,HAC.172
WIFI,HAC.178
WIRELESS_CONNECTIVITY,HAC.179
BALLROOM,HAC.191
BUS_PARKING,HAC.192
CHILDRENS_PLAY_AREA,HAC.193
NURSERY,HAC.194
DISCO,HAC.195
24_HOUR_ROOM_SERVICE,HAC.2
COFFEE_SHOP,HAC.20
BAGGAGE_STORAGE,HAC.201
NO_KID_ALLOWED,HAC.217
KIDS_WELCOME,HAC.218
COURTESY_CAR,HAC.219
CONCIERGE,HAC.22
NO_PORN_FILMS,HAC.220
INT_HOTSPOTS,HAC.221
FREE_INTERNET,HAC.222
INTERNET_SERVICES,HAC.223
PETS_ALLOWED,HAC.224
FREE_BREAKFAST,HAC.227
CONFERENCE_FACILITIES,HAC.24
HI_INTERNET,HAC.259
EXCHANGE_FAC,HAC.26
LOBBY,HAC.276
DOCTOR_ON_CALL,HAC.28
24H_COFFEE_SHOP,HAC.281
AIRPORT_SHUTTLE,HAC.282
LUGGAGE_SERVICE,HAC.283
PIANO_BAR,HAC.284
VIP_SECURITY,HAC.285
DRIVING_RANGE,HAC.30
DUTY_FREE_SHOP,HAC.32
ELEVATOR,HAC.33
EXECUTIVE_FLR,HAC.34
GYM,HAC.35
EXPRESS_CHECK_IN,HAC.36
EXPRESS_CHECK_OUT,HAC.37
FLORIST,HAC.39
CONNECTING_ROOMS,HAC.4
FREE_AIRPORT_SHUTTLE,HAC.41
FREE_PARKING,HAC.42
FREE_TRANSPORTATION,HAC.43
GAMES_ROOM,HAC.44
GIFT_SHOP,HAC.45
HAIRDRESSER,HAC.46
ICE_MACHINES,HAC.52
GARAGE_PARKING,HAC.53
JACUZZI,HAC.55
JOGGING_TRACK,HAC.56
KENNELS,HAC.57
LAUNDRY_SVC,HAC.58
AIRLINE_DESK,HAC.6
LIVE_ENTERTAINMENT,HAC.60
MASSAGE,HAC.61
NIGHT_CLUB,HAC.62
SWIMMING_POOL,HAC.66
PARKING,HAC.68
ATM/CASH_MACHINE,HAC.7
POOLSIDE_SNACK_BAR,HAC.72
RESTAURANT,HAC.76
ROOM_SERVICE,HAC.77
SAFE_DEP_BOX,HAC.78
SAUNA,HAC.79
BABY-SITTING,HAC.8
SOLARIUM,HAC.83
SPA,HAC.84
CONVENIENCE_STOR,HAC.88
PICNIC_AREA,HAC.9
THEATRE_DESK,HAC.90
TOUR_DESK,HAC.91
TRANSLATION_SERVICES,HAC.92
TRAVEL_AGENCY,HAC.93
VALET_PARKING,HAC.97
VENDING_MACHINES,HAC.98
TELECONFERENCE,MRC.121
VOLTAGE_AVAILABLE,MRC.123
NATURAL_DAYLIGHT,MRC.126
GROUP_RATES,MRC.141
INTERNET-HIGH_SPEED,MRC.17
VIDEO_CONF_FACILITIES,MRC.53
ACC_BATHS,PHY.102
BR/L_PRINT_LIT,PHY.103
ADAPT_RM_DOORS,PHY.104
ACC_RM_WCHAIR,PHY.105
SERV_SPEC_MENU,PHY.106
WIDE_ENTRANCE,PHY.107
WIDE_CORRIDORS,PHY.108
WIDE_REST_ENT,PHY.109
ACC_LIGHT_SW,PHY.15
ACC_WCHAIR,PHY.28
SERV_DOGS_ALWD,PHY.29
ACC_WASHBASIN,PHY.3
ACC_TOILETS,PHY.32
ADAPT_BATHROOM,PHY.38
HANDRAIL_BTHRM,PHY.38
ADAPTED_PHONES,PHY.39
ACC_ELEVATORS,PHY.42
TV_SUB/CAPTION,PHY.45
DIS_PARKG,PHY.50
EMERG_COD/BUT,PHY.57
HANDICAP_FAC,PHY.6
DIS_EMERG_PLAN,PHY.60
HEAR_IND_LOOPS,PHY.65
BR/L_PRNT_MENU,PHY.66
DIS_TRAIN_STAF,PHY.71
PIL_ALARMS_AVL,PHY.76
ACC_BATH_CTRLS,PHY.79
PUTTING_GREEN,REC.5
TROUSER_PRESS,RMA.111
VIDEO,RMA.116
GAMES_SYSTEM_IN_ROOM,RMA.117
VOICEMAIL_IN_ROOM,RMA.118
WAKEUP_SERVICE,RMA.119
WI-FI_IN_ROOM,RMA.123
CD_PLAYER,RMA.129
BATH,RMA.13
MOVIE_CHANNELS,RMA.139
SHOWER,RMA.142
OUTLET_ADAPTERS,RMA.159
BIDET,RMA.16
DVD_PLAYER,RMA.163
CABLE_TELEVISION,RMA.18
OVERSIZED_ROOMS,RMA.185
TEA/COFFEE_MK_FACILITIES,RMA.19
AIR_CONDITIONING,RMA.2
TELEVISION,RMA.20
ANNEX_ROOM,RMA.204
FREE_NEWSPAPER,RMA.205
HONEYMOON_SUITES,RMA.206
INTERNETFREE_HIGH_IN_RM,RMA.207
MAID_SERVICE,RMA.208
PC_HOOKUP_INRM,RMA.209
PC_IN_ROOM,RMA.21
SATELLITE_TV,RMA.210
VIP_ROOMS,RMA.211
CORDLESS_PHONE,RMA.25
CRIBS_AVAILABLE,RMA.26
ALARM_CLOCK,RMA.3
PHONE-DIR_DIAL,RMA.31
FAX_FAC_INROOM,RMA.38
FREE_LOCAL_CALLS,RMA.45
HAIR_DRYER,RMA.50
INTERNET-HI_SPEED_IN_RM,RMA.51
IRON/IRON_BOARD,RMA.55
KITCHEN,RMA.59
BABY_LISTENING_DEVICE,RMA.6
LAUNDRY_EQUIPMENT_IN_ROOM,RMA.66
MICROWAVE,RMA.68
MINIBAR,RMA.69
NONSMOKING_RMS,RMA.74
REFRIGERATOR,RMA.88
ROLLAWAY_BEDS,RMA.91
SAFE,RMA.92
WATER_SPORTS,RST.110
ANIMAL_WATCHING,RST.126
BIRD_WATCHING,RST.127
SIGHTSEEING,RST.142
BEACH_WITH_DIRECT_ACCESS,RST.155
SKI_IN/OUT,RST.156
TENNIS_PROFESSIONAL,RST.157
FISHING,RST.20
GOLF,RST.27
FITNESS_CENTER,RST.36
BEACH,RST.5
HORSE_RIDING,RST.61
INDOOR_TENNIS,RST.62
MINIATURE_GOLF,RST.67
BOATING,RST.7
TENNIS,RST.71
SCUBA_DIVING,RST.82
SKEET_SHOOTING,RST.85
SNOW_SKIING,RST.88
BOWLING,RST.9
VOLLEYBALL,RST.98
ELEC_GENERATOR,SEC.15
EMERG_LIGHTING,SEC.19
FIRE_DETECTORS,SEC.22
GUARDED_PARKG,SEC.34
RESTRIC_RM_ACC,SEC.39
EXT_ROOM_ENTRY,SEC.40
INT_ROOM_ENTRY,SEC.41
SMOKE_DETECTOR,SEC.50
ROOMS_WITH_BALCONIES,SEC.51
SPRINKLERS,SEC.54
FIRST_AID_STAF,SEC.57
SECURITY_GUARD,SEC.58
VIDEO_SURVEIL,SEC.62
EXTINGUISHERS,SEC.89
FIRE_SAFETY,SEC.9
FEMA_FIRE_SAFETY_COMPLIANT,SEC.93
FIRE_SAF_NOT_STANDARD,SEC.95
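To turn that list into the database you describe, splitting on whitespace is enough, since no entry contains spaces. A minimal sketch (assumes the list above was saved as amenities.txt; the prettified label is just a starting point for real translations):

# Build {raw_name: (code, readable_label)}, e.g. "SAFE_DEP_BOX" -> ("HAC.78", "Safe Dep Box")
lookup = {}
with open('amenities.txt') as f:
    for record in f.read().split():
        name, code = record.rsplit(',', 1)
        lookup[name] = (code, name.replace('_', ' ').title())

print(lookup['SAFE_DEP_BOX'])  # ('HAC.78', 'Safe Dep Box')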
According to the API reference, you can filter the offers by amenities: https://developers.amadeus.com/self-service/category/hotel/api-doc/hotel-search/api-reference I assume the multiple-select list in the amenities property contains all the items you need. EDIT: Unfortunately, I noticed that the response example contains additional values beyond those accepted as input, so the input list alone is not enough.
SQL: compare characters in two strings, count total identical
The overall situation is that I have two different systems, and in both systems I have customers. Unfortunately, both systems let you type in the business name freehand, so you end up with the example below.

Column A has a value of "St John Baptist Church"
Column B has a value of "John Baptist St Church"

What I need is a query that can compare the two columns and find the most closely matched values. From there I plan to write a web app where someone can go through and validate all of the entries. I would enter some examples of what I have tried, but honestly I don't even know if what I am asking for is possible. I would think it is; in this day and age I am sure I am not the first one to attempt this.
You could try to create a script, something like this PHP script, to help you:

$words = array();
$duplicates = array();

function _compare($value, $key, $array)
{
    global $duplicates;
    $diff = array_diff($array, $value);
    if (!empty($diff)) {
        $duplicates[$key] = array_keys($diff);
    }
    return $diff;
}

$mysqli = new mysqli('localhost', 'username', 'password', 'database');
$query = "SELECT id, business_name FROM table";
if ($result = $mysqli->query($query)) {
    while ($row = $result->fetch_object()) {
        // strip punctuation, then split the name into words
        $row->business_name = preg_replace('#[^\w\s]+#i', '', $row->business_name);
        $_words = explode(' ', $row->business_name);
        array_walk($words, '_compare', $_words);
        $words[$row->id] = $_words;
    }
    $result->close(); // close the result set after the loop, not inside it
}
$mysqli->close();

This is not tested, but you need something like this, because I don't think this is possible with SQL alone.

EDIT: Or you could research what the guys in the comments recommend: Levenshtein distance in T-SQL

Hope it helps, good luck!
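Since PHP ships a built-in levenshtein(), a simpler route is to fetch both customer lists and score every pair; sorting the words first makes it robust against the reordered names in the question. A sketch (assumes $systemA and $systemB are id => business_name arrays already fetched from the two databases):

function normalize($name) {
    $words = preg_split('/\s+/', strtolower(trim($name)));
    sort($words);                // "St John Baptist Church" and "John Baptist St Church"
    return implode(' ', $words); // both normalize to "baptist church john st"
}

foreach ($systemA as $idA => $nameA) {
    $bestId = null;
    $bestDist = PHP_INT_MAX;
    foreach ($systemB as $idB => $nameB) {
        $dist = levenshtein(normalize($nameA), normalize($nameB));
        if ($dist < $bestDist) {
            $bestDist = $dist;
            $bestId = $idB;
        }
    }
    // small distances are good candidates for the manual validation pass
    echo "$idA -> $bestId (distance $bestDist)\n";
}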
Can you view a friendship relationship's data (the "You and a friend" page) via the Facebook API? [duplicate]
How can I get the friendship details for two people? For example, on the web it would be: http://www.facebook.com/<my_id>?and=<friend_id> Is there any way I can do this with the Graph API? Furthermore, can I get specific items such as photos together, wall posts between us, etc.? (Not documented AFAIK, but many Graph API features aren't anyway...) EDIT: I think it should be possible with the Graph API. For example, getting family details (brother, sister, parents, etc.) is not documented, yet I am still able to do it.
You can simulate a friendship query by doing multiple FQL queries. For example, to get all the photos between two friends A and B, you can (see the sketch after this list):

1. Get all photos (p) from A
2. Get all tags of B in p (query on the photo_tag table)
3. Get all comments made by B on p (query on the comment table)

Repeat, this time selecting photos from B. You can apply the same concept to other things such as posts, likes, videos, etc.
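A sketch of the FQL for those steps (A_ID and B_ID are placeholders; table and column names are from the old FQL reference: photo, photo_tag, comment):

SELECT pid, src_big FROM photo WHERE owner = A_ID
SELECT pid FROM photo_tag WHERE subject = B_ID AND pid IN (SELECT pid FROM photo WHERE owner = A_ID)
SELECT fromid, text FROM comment WHERE fromid = B_ID AND object_id IN (SELECT object_id FROM photo WHERE owner = A_ID)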
Yes, I think you can do the same thing phillee answered with the Graph API instead of FQL:

1. Get the user's photos: https://graph.facebook.com/USERID/photos
2. Get each photo's tags: https://graph.facebook.com/PHOTOID/tags
3. Sort through the list of photo tags and grab all photos with the friend in them
4. Get each photo's comments: https://graph.facebook.com/PHOTOID/comments
5. Sort through the list of photo comments and grab all comments left by the friend

As the other answer said: rinse and repeat for all the data you want: https://graph.facebook.com/USERID/feed https://graph.facebook.com/USERID/posts etc.; see all connections here: http://developers.facebook.com/docs/reference/api/user/

For the interests, movies, activities, etc., just make an API call for both (https://graph.facebook.com/ONEUSER/music and https://graph.facebook.com/OTHERUSER/music) and find the intersection of the two sets of data (loop through each list and save all matches to a separate list of mutual Likes). There are no specific API calls for friendships, though, so you will have to build your own. My guess is that Facebook is doing exactly this on their Friendship pages :)

It should be just as easy with FQL as with the REST API... maybe even easier with FQL, since you can add WHERE conditions and get back just the data you need (SELECT * FROM comment WHERE fromid = FRIENDID) instead of sorting through the big list returned by the API (unless there is a way to add conditions to API request URLs?).
If I understand properly you want to "View Friendship" This type of query is not directly possible using the Graph API (at the moment). You would need to gather the information from each resource endpoint and then do some relating on your own part to "View Friendship"
This is what I have been using:

<?php
function main()
{
    global $Fb;
    $Fb = new Facebook(array('appId' => FB_API_ID, 'secret' => FB_API_SECRET));
    $logoutUrl = $Fb->getLogoutUrl();
    $loginUrl = $Fb->getLoginUrl();
    $user = fb_loguser($Fb);
    if ($user) {
        $txt = "<p align=\"center\">Logout</p>";
        $txt .= "<h3 align=\"center\">You</h3>";
        $access_token = $Fb->getAccessToken();
        $user_profile = $Fb->api('/me');
        $txt .= "<p align=\"center\"><img src=\"https://graph.facebook.com/" . $user . "/picture\"></p>";
        $txt .= fb_friends_list($access_token);
    }
}

function fb_loguser($facebook)
{
    global $Fb;
    $Fb = $facebook;
    $user = $Fb->getUser();
    if ($user) {
        try {
            $user_profile = $Fb->api('/me');
        } catch (FacebookApiException $e) {
            error_log($e);
            $user = null;
        }
    }
    return $user;
}

function fb_friends_list($access_token)
{
    global $Sess, $Fb;
    $friends = $Fb->api('/me/friends'); // returns an array: [data][0..n][name]/[id]
    $siz = sizeof($friends['data']);
    $txt = "<h3>Your FRIENDS on FACEBOOK</h3>";
    $txt .= "<table>";
    for ($i = 0; $i < $siz; $i++) {
        $fid = $friends['data'][$i]['id'];
        $src = "http://graph.facebook.com/" . $fid . "/picture";
        $txt .= "<tr><td><img src=\"" . $src . "\" /></td>";
        $txt .= "<td>" . $friends['data'][$i]['name'] . "</td>";
        $txt .= "<td>" . $fid . "</td></tr>";
    }
    $txt .= "</table>";
    return $txt;
}
?>

Call main()!
Getting the photos in which two (or more) users are tagged:

SELECT pid, src_big
FROM photo
WHERE pid IN (SELECT pid FROM photo_tag WHERE subject = me())
  AND pid IN (SELECT pid FROM photo_tag WHERE subject = FRIEND_ID)
  AND pid IN (SELECT pid FROM photo_tag WHERE subject = ANOTHER_FRIEND_ID)
...
http://bighnarajsahu.blogspot.in/2013/03/facebook-graph-api-to-get-user-details.html This is the complete code to get the friend list on Facebook.