Share a WhatsApp link with an IP address

I'm not sure if this is off-topic here, but I found other WhatsApp questions.
If I share a link that contains an IP address, like:
http://123.456.789.456/mystuff
WhatsApp turns only the numbers into a link (as if they were a phone number) and ignores everything else...
How can I format it so WhatsApp treats it as one whole link?

I understand this is not exactly a solution to the problem, but it might be beneficial for some people out there. All you have to do is use some sort of URL shortener,
for example https://bitly.com/.
It will create a shortened URL that can be shared on WhatsApp without worrying about any formatting issues.
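If you need to do the shortening programmatically rather than through the Bitly website, a small sketch like the one below might work. It assumes Bitly's v4 shorten endpoint and an access token of your own; the token is a placeholder and the long URL is the example from the question:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class UrlShortener
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // Placeholder: replace with your own Bitly access token.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "YOUR_BITLY_ACCESS_TOKEN");

            // The long URL containing the IP address that WhatsApp mangles.
            var payload = "{\"long_url\": \"http://123.456.789.456/mystuff\"}";

            var response = await client.PostAsync(
                "https://api-ssl.bitly.com/v4/shorten",
                new StringContent(payload, Encoding.UTF8, "application/json"));

            // The response JSON contains a "link" field with the shortened URL,
            // which can then be pasted into WhatsApp as-is.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }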

Related

Can I send an email with a hyperlink to a different domain (website) that doesn't match my email domain?

Bear with me because I'm still learning about this. My description will probably be long. And please talk to me like a newbie:
My plan is to get multiple domains and, through them, create multiple Google Workspace email accounts to send emails to the different prospects on my emailing list.
In the emails that I send out from these different email-domain accounts, I was planning to just add a link/hyperlink to my website for the recipients to click on and browse.
If I did that, would it ruin my reputation in the eyes of the ISPs? I fear that because my email domain doesn't match the website domain I'm hyperlinking to, it will lower my reputation score. My fear is that I'll get blacklisted, marked as spam, and many other bad things, if I do it this way.
Thank you in advance for putting up with the long description. It's the best way for me to explain the situation.
I saw an article saying that if my email domain and the domain hyperlinked in the email message don't match, it's fine. It just depends on the hyperlinked domain's reputation and on the hyperlink text I write (as long as it's not misleading).
But I am just making sure by writing my question here.
And I think there is another way that might work, if this is not recommended:
Have the hyperlinked domain (website) match my email domain by connecting those multiple domains (the ones used in my email accounts) to my primary domain (website).
Does that make sense?
Let me know if ANY OF THIS makes sense. Sorry again for my ignorance.

Google I'm Feeling Lucky URL

So, I've spent about 2 hours trying to get the I'm Feeling Lucky URL to work. It seems the URL doesn't like the periods in the search parameter, so does anyone have any potential tricks?
Search Value= 40.840.1/8Z
The first result in a regular Google search is the correct page.
Here's what I've tried:
http://www.google.com/search?btnI=I&q=40.840.1/8Z
http://www.google.com/search?btnI=I&q=40.840.1%2F8Z
http://www.google.com/search?btnI=I&q=40%2E840%2E1/8Z
http://www.google.com/search?btnI=I&q=40%2E840%2E1%2F8Z
http://www.google.com/search?btnI=I&q=40%2F840%2F1%2F8Z
(That one was actually pretty close)
http://www.google.com/search?btnI=I&q=40%20840%201%208Z
And all of the above surrounded in quotes (%22)
The problem is that the I'm Feeling Lucky aspect doesn't work: it finds the correct results, it just doesn't navigate to the first one. I'm open to alternatives besides the I'm Feeling Lucky URL parameters as well.
I'm trying to implement this in a .NET application that provides employees with resource information, which is best obtained from the manufacturers' websites. The trick is that the resources come from many different suppliers and the links need to be somewhat automatic; basically, I don't want whoever manages the software to have to update these links. To navigate, I'm simply using the Process.Start("http://www.example.com/") command, which opens the address in the default browser.
This post helped a lot by the way.
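A sketch of what the launching side might look like, letting Uri.EscapeDataString handle the percent-encoding instead of hand-writing the %2E/%2F sequences. The Process.Start call is the one mentioned above; the UseShellExecute flag is only needed on .NET Core / .NET 5+ for a URL to open in the default browser:

    using System;
    using System.Diagnostics;

    class LuckyLauncher
    {
        // Builds an "I'm Feeling Lucky" style URL from a raw search term and
        // opens it in the default browser.
        static void OpenFirstResult(string searchTerm)
        {
            // Uri.EscapeDataString percent-encodes reserved characters such as
            // '/' consistently (periods are unreserved and stay as-is).
            string url = "http://www.google.com/search?btnI=I&q="
                         + Uri.EscapeDataString(searchTerm);

            // UseShellExecute = true is required on .NET Core / .NET 5+ so the
            // URL is handed to the default browser; on .NET Framework this is
            // equivalent to the plain Process.Start("http://...") call above.
            Process.Start(new ProcessStartInfo(url) { UseShellExecute = true });
        }

        static void Main()
        {
            OpenFirstResult("40.840.1/8Z");
        }
    }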
I wasn't able to get any closer than your closest one.
But if it helps, here's an alternative way of writing the "I'm feeling lucky" URL.
http://google.com/search?q=haimer+usa+40%2F840%2F1%2F8Z&btnI
What I did to find the right URL was to navigate to google.com and then turn my internet connection off. I entered the search details and pressed submit. You can then see the URL in the address bar, but it doesn't redirect you to the first result. Copy that URL and you can see how Google treats your dots and other awkward characters.
So to recap:
Go to google.com
Turn your internet connection off
Enter search term
Press 'I'm feeling lucky'
Copy the URL from the address bar
You can create a Google Custom Search Engine of your own and either exclude certain sites or include specific sites only; use http://cse.google.com to do this.
There is a Stack Overflow tag for Google Custom Search.
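If you go the custom search route, one alternative to the btnI redirect is to call the Custom Search JSON API from the .NET application and open the first result yourself. A rough sketch, assuming you have created a CSE and obtained an API key and engine ID (both are placeholders below):

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    class LuckySearch
    {
        // Placeholders: replace with your own Custom Search API key and engine ID.
        const string ApiKey = "YOUR_API_KEY";
        const string EngineId = "YOUR_CSE_ID";

        static async Task Main()
        {
            string query = Uri.EscapeDataString("40.840.1/8Z");
            string url = "https://www.googleapis.com/customsearch/v1" +
                         $"?key={ApiKey}&cx={EngineId}&q={query}";

            using var client = new HttpClient();
            using var doc = JsonDocument.Parse(await client.GetStringAsync(url));

            // The JSON response lists results under "items"; take the first
            // link and open it in the default browser, like "I'm Feeling Lucky".
            string firstLink = doc.RootElement
                                  .GetProperty("items")[0]
                                  .GetProperty("link")
                                  .GetString();

            Process.Start(new ProcessStartInfo(firstLink) { UseShellExecute = true });
        }
    }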

Google duplicate content issue for social network applications

I am making a social network application where users will come and share posts, like on Facebook. But now I have some doubts. Let's say a user shares content by copying it from another site, and the same goes for images. Does the Google crawler consider it duplicate content or not?
If yes, then how can I tell the Google crawler, "don't consider it spam, it's a social networking site and the content is shared by the users, not by me"? Is there any way or any kind of technique that would help me?
Google might consider it to be duplicate content, in which case the search algorithm will choose one version, the one it believes to be the original or more important, and drop the other.
This isn't a bad thing per se, unless you see that most of your site's content is becoming duplicated.
You can use canonical URL declarations to do what you are describing, but I wouldn't advise it.
If your website belongs to one of these types, forum or e-commerce, it will not be punished for a duplicate content issue. I think a "social platform" is a type of forum.
If your pages are too similar, the result is that the two or more similar pages will split the click-through rate, traffic, etc., so their rank in the SERPs may not look good.
I suggest not using "canonical", because this instruction tells the crawlers not to crawl/count the page. If you use it, you will see the number of indexed pages decrease a lot in the webmaster tools.
Don't worry too much about the duplicate content issue. You can read this article: Google’s Matt Cutts: Duplicate Content Won’t Hurt You, Unless It Is Spammy

Confirming Source Is From QR Code Scan

I have this project where I need to know whether a visitor legitimately arrived from a QR code. The document.referrer value from a QR code scan shows up blank. I have looked at some answers suggesting putting a parameter in the query string (e.g. ?source=qr), but anyone could easily add that parameter to the URL and my code would believe it came from a QR code (e.g. www.project.com/check.page?source=qr). I have thought of adding code to make sure the visit is from a mobile phone / tablet as a secondary check, but many browsers have add-ons to fool websites.
Any suggestions would be greatly appreciated.
Thanks in advance.
I think the best solution for you is to create your regional QR codes pointing to:
Region 1) http://example.com/?qr=f61060194c9c6763bb63385782aa216f
Region 2) http://example.com/?qr=731417b947aa548528344fab8e0f29b6
Region 3) http://example.com/?qr=df189e7f7c8b89edd05ccc6aec36c36d
If the value of the qr parameter is anything other than f61060194c9c6763bb63385782aa216f, 731417b947aa548528344fab8e0f29b6 or df189e7f7c8b89edd05ccc6aec36c36d, then you can ignore it and assume the user didn't come from any QR code.
Of course, any user can remove the source parameter. But at least they can't add a valid one unless they really had access to the code.
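Server-side, the check can be a plain lookup against the known tokens. A minimal sketch in C# (the tokens are the ones from the example URLs above; the class and method names are made up for illustration):

    using System.Collections.Generic;

    static class QrSource
    {
        // The region tokens from the example URLs above; anything else is
        // treated as "not from a QR code".
        static readonly Dictionary<string, string> KnownTokens = new()
        {
            ["f61060194c9c6763bb63385782aa216f"] = "Region 1",
            ["731417b947aa548528344fab8e0f29b6"] = "Region 2",
            ["df189e7f7c8b89edd05ccc6aec36c36d"] = "Region 3",
        };

        // Returns the region if the qr parameter matches a known token,
        // or null if the parameter is missing or has been tampered with.
        public static string GetRegion(string qrParameter)
        {
            return qrParameter != null && KnownTokens.TryGetValue(qrParameter, out var region)
                ? region
                : null;
        }
    }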
...but anyone could easily add the parameter into the URL and my code would believe it is from a QR code
Well, anyone could also scan the QR code, view the link, and remove the source=qr from it.
Data collection is never 100% reliable. Users can change their browser's user agent, inject cookies with some strange values, open your page through a proxy server, and so on.
You could create your own device or app for scanning the QR code. If you read the post I've linked, you will see that this is a waste of time and resources.
So what is left is to build a solution that will work for most users. Appending a source=qr parameter to your URL seems to be the simplest option. You could also link to an entirely different domain and redirect the request, which would be harder to fake. But it will never be 100% accurate.
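The "link to an entirely different domain and redirect" idea could look roughly like the sketch below, here using an ASP.NET Core minimal API (the route and the target URL are placeholders). The point is that the scan is recorded server-side, so there is no source=qr parameter for a visitor to fake or strip:

    // Rough sketch (ASP.NET Core web project): the QR code points at a short
    // URL on a domain you control, the handler records the scan, then
    // forwards the visitor to the real page.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/qr/{token}", (string token) =>
    {
        // Record the scan server-side instead of trusting a query parameter
        // on the destination page.
        app.Logger.LogInformation("QR scan with token {Token}", token);
        return Results.Redirect("https://www.project.com/check.page");
    });

    app.Run();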

Serving IP Country based content, Subdomains and Google

I am designing a pretty big website that will target its industry on a global level. The site detects the country of each visitor's IP address in order to serve content relevant to that country. Basically, a lot of content will be restricted to visitors in a given country.
The concern I have is that Google doesn't seem to pay much attention to IP-based content, as I read here. They seem to think Google might implement better support for crawling IP-based content, but they aren't sure when, and the article is dated Nov 2011.
As a result, I have been considering ways to have Google crawl the site's IP-based content by using country codes, like us.site.com or site.com/us, while still detecting the visitor's country by IP and redirecting to the appropriate location. I'm not sure if it's just because I am a little strange at times, but I feel that the subdomain us.site.com seems tidier.
Considering that the Google spider ALSO seems to ignore subdomains when there is considerable duplicate content (which may be the case here, because a lot of the content is internationally available), what would you recommend?
Should I:
Stop being so darn OCD about us.site.com and use site.com/us?
Use subdomains, because perhaps while the spider ignores duplicate content on subdomains, it won't if there are more unique results? What about lists of results on my site, like a category page?
Take a gamble and stick to IP detection only, with no country codes in the URL, and hope for the best that Google will recognise different content being served on different IP ranges?
Thanks in advance
OK, so I found this, which is so far the best explanation I have come across. If you have any pointers, please feel free to comment.
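For reference, the IP-to-country lookup behind either URL scheme could look roughly like this on the server, assuming the MaxMind GeoIP2 .NET library and a local GeoLite2 country database (the database path and the site.com domain are placeholders):

    using System;
    using MaxMind.GeoIP2;              // assumption: the MaxMind.GeoIP2 NuGet package
    using MaxMind.GeoIP2.Exceptions;

    class CountryRedirect
    {
        // Maps a visitor's IP to a country code and builds the matching
        // country subdomain (us.site.com style), as described in the question.
        static string GetRedirectUrl(string visitorIp)
        {
            using var reader = new DatabaseReader("GeoLite2-Country.mmdb");
            try
            {
                string iso = reader.Country(visitorIp).Country.IsoCode ?? "us";
                return $"https://{iso.ToLowerInvariant()}.site.com/";
            }
            catch (AddressNotFoundException)
            {
                // Private or unknown addresses: fall back to a default edition.
                return "https://www.site.com/";
            }
        }

        static void Main()
        {
            Console.WriteLine(GetRedirectUrl("8.8.8.8"));
        }
    }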