When I set up a Branch.io link, I see that after the redirect the URL is something like
http://www.bbc.co.uk/?_branch_match_id=241114587404660876
What is the _branch_match_id parameter?
Is this added onto every link redirect or can it be omitted?
Alex from Branch.io here: the _branch_match_id is a unique ID we append to every link redirect as part of our matching algorithm. It allows us to track where traffic is coming from, so that we can identify each user again within the app after it opens or is installed. There is no way to remove it :)
Related
I have created a deep link successfully and configured Branch.io, but the problem is that my URL looks like: https://example.com/magiclink/token/*
As you can see, there is a * at the end of the URL: after "token/", a unique token is shared in place of the *. As far as I know, Branch.io gives us a static link to share with someone, but in my case the link cannot be static.
Can anyone help me achieve this?
Thanks in advance
If you just need to pass a unique token with each of your links, then you can append it to your Branch.io link as a link parameter,
e.g. https://www.example.com/magiclink?token=*
Now you can dynamically change the value of the token and it won't affect your link.
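A rough sketch of building such a link dynamically (the base link is the example above; the token value is a hypothetical one issued by your backend):

    // Appends the per-user token to the shared link as a query/link parameter.
    const baseLink = "https://www.example.com/magiclink";
    const token = "abc123"; // hypothetical token from your backend

    const link = new URL(baseLink);
    link.searchParams.set("token", token);
    console.log(link.toString()); // https://www.example.com/magiclink?token=abc123

Branch should then hand the appended parameter back to your app as link data after open/install, so the dynamic part never has to be baked into the link itself.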
I understand the og:url meta tag is the canonical URL for the resource in the Open Graph.
What strategies can I use if I wish to support 301 redirecting of the resource while preserving its place in the Open Graph? I don't want to lose my likes because I've changed the URLs.
Is the best way to do this to store the original URL of the content and refer to that? Are there any other strategies for dealing with this?
To clarify, I have a page:
/page1, with an og:url of http://www.example.com/page1
I now want to move it to
/page2, using a 301 redirect to http://www.example.com/page2
Do I have any options to avoid losing the likes and comments other than setting the og:url meta to /page1?
Short answer: you can't.
Once the object has been created on Facebook's side its URL in Facebook's graph is fixed - the Likes and Comments are associated with that URL and object; you need that URL to be accessible by Facebook's crawler in order to maintain that object in the future. (note that the object becoming inaccessible doesn't necessarily remove it from Facebook, but effectively you'd be starting over)
What I usually recommend here is (with examples http://www.example.com/oldurl and http://www.example.com/newurl):
On /newurl, keep the og:url tag pointing to /oldurl
Add an HTTP 301 redirect from /oldurl to /newurl
Exempt the Facebook crawler from this redirect
Continue to serve the meta tags for the page on http://www.example.com/oldurl if the request comes from the Facebook crawler.
No need to return any actual content to the crawler, just a simple HTML page with the appropriate tags
Thus:
Existing instances of the object on Facebook will, when clicked, bring users to the correct (new) page via your redirect
The Like button on the (new) page will still produce a like of the correct object (but at the old URL)
If you're moving a lot of URLs around or completely rewriting your URL scheme, you should use the new URLs for new articles/products/etc., but you'll need to keep the redirect in place if you want to retain likes, comments, etc. on the older content.
This also applies if you're changing domains.
The only problem here is maintaining the old URL -> new URL mapping somewhere in your code, but it's not technically difficult, just an additional thing to maintain in the future.
BTW, the Facebook crawler UA is currently facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
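A minimal sketch of the crawler exemption described above, assuming a Node/Express server and the example /oldurl -> /newurl mapping (the og:title shown is a placeholder):

    // A sketch only: Express is assumed; adapt the UA check and meta tags to your pages.
    import express from "express";

    const app = express();

    app.get("/oldurl", (req, res) => {
      const ua = req.get("User-Agent") ?? "";
      if (ua.includes("facebookexternalhit")) {
        // The Facebook crawler gets a bare page with the original Open Graph tags.
        res.send(
          '<!doctype html><html><head>' +
          '<meta property="og:url" content="http://www.example.com/oldurl" />' +
          '<meta property="og:title" content="Example title" />' +
          '</head><body></body></html>'
        );
      } else {
        // Everyone else gets the permanent redirect to the new location.
        res.redirect(301, "http://www.example.com/newurl");
      }
    });

    app.listen(3000);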
I'm having the same problem with my old sites. Domains are changing, admins want to change URLs for SEO, etc.
I came to the conclusion that it's best to have some sort of unique ID in the database just for Facebook, right from the beginning. For articles, for example, I have myurl.com/a/123 where 123 is the ID of the article.
The real URL is myurl.com/category/article-title. The article can then be put in a different category, renamed, etc., with extensive logic for 301 redirects behind it, but the basic FB identifier can stay the same forever.
Of course this is viable only when starting with a fresh site or when implementing FB comments for the first time.
Just an idea if you can plan ahead :) Let me know what you think.
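A minimal sketch of that idea, assuming an Express app and a hypothetical lookupArticleUrl() helper that maps an article ID to wherever the article currently lives:

    // A sketch only: /a/:id is the permanent, Facebook-facing identifier, and the
    // article's real page keeps og:url pointed at https://myurl.com/a/<id>, so likes
    // and comments stay attached to it however often the real URL changes.
    import express from "express";

    const app = express();

    async function lookupArticleUrl(id: string): Promise<string | null> {
      // Placeholder: replace with a real database lookup.
      return id === "123" ? "https://myurl.com/category/article-title" : null;
    }

    app.get("/a/:id", async (req, res) => {
      const target = await lookupArticleUrl(req.params.id);
      if (target) {
        res.redirect(301, target); // send visitors on to the current canonical URL
      } else {
        res.sendStatus(404);
      }
    });

    app.listen(3000);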
I have a website that, when you first go to it, just displays the normal domain, so /. When visitors use the form they get forwarded to, let's say, /question/DYNAMIC(question id).
So Google has no way to see these links.
Is there a way to tell Google about all of these links without entering them manually and without having to keep the list up to date, given that some questions might be removed at a later date?
Submit an XML sitemap
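A minimal sketch of such a sitemap endpoint, assuming an Express server and a hypothetical getQuestionIds() helper standing in for a query against your own database (example.com is a placeholder domain):

    import express from "express";

    const app = express();

    async function getQuestionIds(): Promise<string[]> {
      return ["101", "102", "103"]; // placeholder data
    }

    app.get("/sitemap.xml", async (_req, res) => {
      const ids = await getQuestionIds();
      const urls = ids
        .map((id) => `  <url><loc>https://example.com/question/${id}</loc></url>`)
        .join("\n");
      res.type("application/xml").send(
        '<?xml version="1.0" encoding="UTF-8"?>\n' +
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
        urls + "\n</urlset>"
      );
    });

    app.listen(3000);

Because the sitemap is generated per request, removed questions simply stop appearing in it; you register the /sitemap.xml URL once in Google Search Console and there is nothing to keep up to date by hand.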
The code for links posted within notes appears like so:
justiceclaus.com
whereas links everywhere else are explicitly nofollow, like this:
http://www.justiceclause.com/
I believe that every link provides some value, even if it is very little. It's not like it's going to pass the link juice that a link on the Facebook homepage would, but even nofollow and redirect links mean something.
Having some nofollow backlinks helps with ranking. I think to Google it basically looks like this: if a certain percentage of your backlinks are nofollow, you aren't a spammer. Beyond that it doesn't give you any link juice and doesn't pass any value or keywords.
On the other hand, links from Facebook normally bring in traffic, i.e. potential customers. Therefore there might be a different kind of value you're looking at here.
Some e-marketing tools claim to choose which web page to display based on where you were before. That is, if you've been browsing truck sites and then go to Ford.com, your first page would be the Ford Explorer.
I know you can get the immediately preceding page with HTTP_REFERER, but how do you know where they were 6 sites ago?
JavaScript: this should get you started: http://www.dicabrio.com/javascript/steal-history.php
There are more nefarious means too: http://ha.ckers.org/blog/20070228/steal-browser-history-without-javascript/
Edit: I wanted to add that although this works, it is a sleazy marketing technique and an invasion of privacy.
Unrelated but relevant: if you only want to look one page back and you can't get to the headers of a page, then document.referrer gives you the place a visitor came from.
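For completeness, a tiny client-side sketch of that (the truck check is just an illustration tied to the original question):

    // document.referrer is the URL of the page that linked here; it is an empty
    // string if the visitor typed the address or the Referer header was stripped.
    const cameFrom: string = document.referrer;
    if (cameFrom.includes("truck")) {
      console.log("Visitor arrived from a truck-related page:", cameFrom);
    }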
You can't access the values for the entries in browser history (neither client side nor server side). All you can do is to send the browser back or forward a number of steps. The entries of the history are otherwise hidden from programmatic access.
Also note that HTTP_REFERER won't be there if the user typed the address in the URL bar instead of following a link to your page.
The browser history can't be directly accessed, but you can compare a list of sites against the user's history. This is possible because the browser applies a different CSS style to a link that has been visited than to one that hasn't.
Using this style difference you can change the content of your pages with pure CSS, but in general JavaScript is used. There is a good article here about using this trick to improve the user experience by displaying only the RSS aggregator or social bookmarking links that the user actually uses: http://www.niallkennedy.com/blog/2008/02/browser-history-sniff.html
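For illustration, a minimal sketch of that :visited sniff. It assumes a stylesheet gives visited links a known colour, and note that browsers have deliberately broken this trick (getComputedStyle lies about :visited) since around 2010, so treat it as historical:

    // Assumes a rule like `a:visited { color: rgb(255, 0, 0); }` is in effect.
    const sitesToTest = ["http://www.example.com/", "http://www.ford.com/"];

    function looksVisited(href: string): boolean {
      const a = document.createElement("a");
      a.href = href;
      document.body.appendChild(a);
      // If the link picked up the :visited colour, the URL is (probably) in the history.
      const visited = getComputedStyle(a).color === "rgb(255, 0, 0)";
      a.remove();
      return visited;
    }

    console.log(sitesToTest.filter(looksVisited));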