I want to use Google Checkout and its Key/URL delivery system for digital content. I'm building a site where the product is digital content that can only be viewed once, so I need a way to generate a unique URL or key for the customer after they purchase the content.
I found the service http://www.quixly.com/, which looks like it does what I need. I was just wondering if anyone knows of a tutorial, guide, or better way of using Google Checkout with unique URLs, or if anyone has used Quixly successfully.
Google Checkout is easy enough to use; I just have no idea where to start with generating unique URLs.
Here are some ideas and server-side source code on how to implement unique URLs for downloads; hope these are helpful:
Creating unique URL/address for a resource to share - Best practices
How to generate unique URL variables that match to a db record?
http://www.ardamis.com/2009/06/26/protecting-multiple-downloads-using-unique-urls/
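The pattern behind all of those links is the same: once payment is confirmed, mint an unguessable token, store it against the purchase, and invalidate it the first time it is redeemed. Here is a minimal sketch in Python; the table, URL, and function names are made up for illustration, and a real handler would need to be wired into whatever confirms the Google Checkout order:

import secrets
import sqlite3

conn = sqlite3.connect("store.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS download_tokens (
        token      TEXT PRIMARY KEY,
        product_id INTEGER NOT NULL,
        used       INTEGER NOT NULL DEFAULT 0
    )
""")

def create_download_url(product_id: int) -> str:
    """Call this after payment is confirmed: mint an unguessable one-time token."""
    token = secrets.token_urlsafe(32)  # ~256 bits of randomness, URL-safe
    conn.execute(
        "INSERT INTO download_tokens (token, product_id) VALUES (?, ?)",
        (token, product_id),
    )
    conn.commit()
    # example.com is a placeholder for your own download endpoint
    return f"https://example.com/download?key={token}"

def redeem(token: str):
    """Return the product_id the first time a token is presented, None afterwards."""
    row = conn.execute(
        "SELECT product_id FROM download_tokens WHERE token = ? AND used = 0",
        (token,),
    ).fetchone()
    if row is None:
        return None  # unknown token, or content already viewed once
    conn.execute("UPDATE download_tokens SET used = 1 WHERE token = ?", (token,))
    conn.commit()
    return row[0]

The select-then-update here is not atomic, so a production version should do the check and the flag flip in a single statement or transaction to stop two simultaneous requests from both succeeding.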
One of my clients has a website that is entirely based on API content, i.e. content coming from a third-party website. He wants to do some SEO on that data. I wonder whether this is even possible, since the data is not available in his own database, and I think the Google crawler gets redirected to the third-party website when crawling such pages. We already asked the website owner for permission to store the API data on our end so we could do some SEO, but he refused our request.
It would be highly appreciated if you could suggest any other way that is not against the policies and guidelines.
Thank You
Vikas S.
Yes - with a huge BUT:
Google explains how parameters can be set within their Search Console (Google Webmaster) and how these can affect the crawler's behaviour.
@Nadeem Haddadeen is right about canonical links between duplicates. There's also an issue if you don't have consistent content when calling up the same parameters: that essentially makes your page un-indexable, as it's dynamic content. If you are dealing with dynamic content, you need to optimise a host page built around popular queries rather than trying to have the dynamic content rank on its own.
It's not recommended to take the same content and post it on your website; it's duplicate content and Google will penalize you for it.
If you still want to post it on your website, you have to make some changes to the original text before posting it, so that it looks original.
Also, if you want to keep it unchanged and avoid any penalties from Google, you have to add a link to the original article from your website, or add a cross-domain canonical link like the example below:
<link rel="canonical" href="https://example.com/original-article-url" />
https://angel.co/api/spec/startups
What would be the best approach for hitting every company that is listed on AngelList? My first guess would be to query every ID up to 250k (the number of companies on AngelList) using this endpoint: https://api.angel.co/1/startups/45435
There surely has to be a better way of doing this, though.
Yes, it is possible via their API, and the endpoint you mention in your question is the correct one. I have written a PHP component to achieve this; you can use this exporter application to download the startups' data for each country into a CSV file: AngelList Data Exporter
I hope this helps you.
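For reference, the brute-force ID walk described in the question would look roughly like the sketch below. It assumes the numeric-ID endpoint simply returns JSON per startup; the field names, the rate-limit pause, and the hidden-startup check are illustrative guesses rather than documented behaviour:

import csv
import time
import requests

ENDPOINT = "https://api.angel.co/1/startups/{}"  # endpoint from the question

def export_startups(max_id=250_000, out_path="startups.csv"):
    """Walk numeric IDs and write whatever the API returns into a CSV file."""
    with open(out_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["id", "name", "angellist_url"])
        for startup_id in range(1, max_id + 1):
            resp = requests.get(ENDPOINT.format(startup_id), timeout=10)
            if resp.status_code == 429:
                time.sleep(60)       # throttled: back off (a real exporter would retry this ID)
                continue
            if resp.status_code != 200:
                continue             # deleted or never-assigned ID
            data = resp.json()
            if data.get("hidden"):   # guessed field name for hidden/locked startups
                continue
            writer.writerow([data.get("id"), data.get("name"), data.get("angellist_url")])
            time.sleep(0.5)          # be polite; stay well under any rate limit

At one request every half second, 250k IDs would take well over a day, which is one reason exporting per country (as the exporter above does) or starting from an existing dataset is usually saner.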
Angel.co does not expose its API anymore, so you have to parse the website to get any data.
Also, a quick Google search will turn up a few websites that offer various datasets scraped from angel.co.
I’m developing an ecommerce web site in ASP.NET using SQL server 2008 database.
Most of my pages are database driven and all the content is gathered from a SQL Server.
Every product page is created dynamically from data coming from the database, hence every product’s page URL has a unique query string, containing a “product_id” variable.
Example: http://www.myecommence.com/products.aspx?product_id=1
I'd like to improve my Search Engine Optimization.
Dealing with a small number of products could be fine but what if I
had more than 1000 products, how could every product be crawled?
How does the google spider/bot know that a product_id with a
hypothetical number of 767 exists?
I’ve been googleing this, still I can’t understand how pages that
have absolutely no reference in the site or external sites can be
crawled? If this is possible the spider should know how to read the
website’s database tables, but I guess that this is not the case.
At this point since most of the pages and links are dynamic how
could they be indexed, the same thing applies to “user detail” pages
that are accessed via query string using a “user id=n”?
What I’m asking has probably been discussed before, but some points are still not clear to me.
I would advise using Mod Rewrite rules to make your URLs search engine friendly.
This is very important for Google.
As is a good category structure.
Eg:
domain.com/t-shirts/girls/star-wars-t-shirt/
is far better than
domain.com/products.aspx?product_id=1
Here is some info:
http://msdn.microsoft.com/en-us/library/ms972974.aspx
http://www.wrox.com/WileyCDA/Section/id-305997.html
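The rewrite rule is only half of it; you also need to generate the friendly path from your product data in the first place. A rough sketch of that slug-building step (the function names are made up, and the actual mapping back to products.aspx?product_id=n would live in your rewrite or routing configuration):

import re
import unicodedata

def slugify(text: str) -> str:
    """Turn 'Star Wars T-Shirt (Girls)' into 'star-wars-t-shirt-girls'."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    return re.sub(r"[^A-Za-z0-9]+", "-", text).strip("-").lower()

def product_path(category: str, subcategory: str, title: str) -> str:
    """Build the /t-shirts/girls/star-wars-t-shirt/ style path shown above."""
    return "/{}/{}/{}/".format(slugify(category), slugify(subcategory), slugify(title))

# product_path("T-Shirts", "Girls", "Star Wars T-Shirt")
#   -> "/t-shirts/girls/star-wars-t-shirt/"

Store the generated slug alongside the product so the URL stays stable even if the title is later edited.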
To answer your questions:
Dealing with a small number of products could be fine, but what if I had more than 1000 products? How could every product be crawled?
If you have a good sitemap / menu structure etc, it is likely that Google will crawl all your pages.
How does the google spider/bot know that a product_id with a hypothetical number of 767 exists?
Via crawling your site, via your sitemap, via the menu system on the site, etc. However, always remember: Google is not psychic; it cannot find a page unless you tell it how to find it or link to it.
I’ve been Googling this, but I still can’t understand how pages that have absolutely no reference on the site or on external sites can be crawled. If this were possible, the spider would have to know how to read the website’s database tables, but I guess that is not the case.
If you have no reference - you are doing something wrong. Improve your site structure.
At this point since most of the pages and links are dynamic how could they be indexed, the same thing applies to “user detail” pages that are accessed via query string using a “user id=n”?
There is nothing wrong with a dynamic URL per se, but again I would recommend implementing search-engine-friendly URLs via Mod Rewrite or similar - see the above resources.
Good luck,
Colin
Modern systems optimize for SEO by allowing either custom or automated URLs that remap to your ID-based URL pattern. This URL style allows for a fully custom, word-for-word product title or keyword/description, which carries more weight than a random ID number in the URL.
To ensure all individual pages are indexed, you generally benefit most from submitting (or at least making available) a sitemap XML file. More info from Google on generating one here:
https://code.google.com/p/googlesitemapgenerator/
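If you don't want to depend on that tool, generating the sitemap straight from the products table is only a few lines. A minimal sketch (the base URL is a placeholder; in practice you would also add <lastmod> entries and resubmit the file in Search Console when products change):

from xml.sax.saxutils import escape

def write_sitemap(product_ids, base_url="https://www.example.com", path="sitemap.xml"):
    """Write a minimal sitemap.xml with one <url> entry per product page."""
    with open(path, "w", encoding="utf-8") as fh:
        fh.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        fh.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for pid in product_ids:
            loc = escape(f"{base_url}/products.aspx?product_id={pid}")
            fh.write(f"  <url><loc>{loc}</loc></url>\n")
        fh.write("</urlset>\n")

# The IDs would normally come from the products table, e.g.:
# write_sitemap(range(1, 1001))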
Hope that gets you going in the right direction!
I hope my question is not too off-topic for Stack Overflow.
This is my website: http://www.rader.my
It's a car information website. The content is dynamic, so the Google crawler cannot find all the car specification pages on my site.
I created a sitemap with all my car URLs in it (for instance, http://www.rader.my/Details.php?ID=13 is the page for one car). I know I haven't made any mistakes in the .xml file's format and structure, but after submission Google indexed only one URL, which is my index.php.
I have also read about rel="canonical", but I don't think I should use it in my case, since all my pages ARE different, with different content; only the structure is the same.
Is there anything I missed? Why doesn't Google accept my URLs even though the contents are different? What can I do to fix this?
Thanks and regards,
Amin
I have a similar type of site. Google is good about figuring out dynamic sites. They'll crawl the pages and figure out the unique content as time goes on. Give it time.
You should do all the standard things:
Make sure each page has a unique H1 tag.
Make sure each page has substantial unique content
Unique keywords and description tags aren't as useful as they used to be but they can't hurt.
Cross-link internally. Create category pages that include links to all of one manufacturer and have each of the pages of that manufacturer link back to 'similar' pages.
Get links to your pages. Nothing helps getting indexed like external authority.
I would like to be able to generate custom bit.lys (http://bit.ly/thecakeisalie type things) through their API. This does not appear to be possible, but I thought I'd check; does anyone happen to know otherwise?
This unfortunately had to be removed for our free users due to ongoing abuse. All of the custom bitlinks on bit.ly are created in the same key space, so allowing automated creation there quickly leads to there being no sane options available for anybody else.
That being said, we have recently added the ability for our paid customers to create custom bitlinks if they are using a custom domain. In this case, our customers get their own key space, so creating custom bitlinks en masse isn't a problem.
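For anyone reading this later, with a branded custom domain the current v4 API exposes this roughly along the lines of the sketch below. This is a sketch only; the token, domain, keyword, and exact request shapes are assumptions to verify against bit.ly's current API documentation:

import requests

API = "https://api-ssl.bitly.com/v4"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

def create_custom_bitlink(long_url, keyword, branded_domain="yourbrand.co"):
    """Shorten a URL on a branded domain, then attach a custom keyword to it."""
    shorten = requests.post(f"{API}/shorten", headers=HEADERS,
                            json={"long_url": long_url, "domain": branded_domain})
    shorten.raise_for_status()
    bitlink_id = shorten.json()["id"]  # e.g. "yourbrand.co/2bXy1kQ"

    custom = requests.post(f"{API}/custom_bitlinks", headers=HEADERS,
                           json={"bitlink_id": bitlink_id,
                                 "custom_bitlink": f"{branded_domain}/{keyword}"})
    custom.raise_for_status()
    return custom.json()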
It was removed from the API. I, like many others, was trying to do this, and bit.ly's support replied by email saying it has been removed. Similar experiences are reported on their ApiDocumentation wiki.