What does the second percentage number mean in caniuse.com stats?

I want to know what percentage of users can use fit-content, so I searched for it and found this page.
However, what does the second number mean?

Related

Algorithm to decide if true based on 0% - 100% frequency threshold

Sorry if this is a duplicate question. I did a search but wasn't sure exactly what to search for.
I'm writing an app that performs a scan. When the scan is complete we need to decide if an item was found or not. Whether or not the item is found is decided by a threshold that the user can set: 0% of the time, 25% of the time, 50% of the time, 75% of the time or 100% of the time.
Obviously, if the user chooses 0% or 100% we can just use true/false, but I'm drawing a blank on how this should work for the other thresholds.
I assume I'd need to store and increase some value every time a monster is found.
Thanks for any help in advance!
As #nix points out, it sounds like you want to generate a random number and threshold it based on the percentage of the time you wish to have 'found' something.
You need to be careful that the range you select and the way you threshold it achieve the desired result, and that the random number generator you use is actually uniform over that range. When dealing in percentages, an obvious approach is to generate one of 100 uniformly distributed options, e.g. 0-99, and check that the number is less than your percentage.
A quick check shows this behaves correctly at the edges: you will never get a number less than 0, so 0% never fires; you will always get a number less than 100, so 100% always fires; and there are 50 options (0-49) less than 50 out of 100 options (0-99), so 50% fires half the time, as expected.
A subtly different approach, given that the user can only choose thresholds in 25% increments, would be to generate numbers in the range 0-3 and return true if the number is less than the percentage / 25. If you were to store the user's selection as a number from 0-4 (0: 0%, 1: 25% .. 4: 100%) this might be even simpler.
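A minimal sketch of both approaches, in Python for brevity (the function names are placeholders; the same logic ports directly to Objective-C):

import random

def found_percent(threshold_percent):
    # threshold_percent is 0, 25, 50, 75 or 100.
    # randrange(100) is uniform over 0-99, so 0% never fires
    # and 100% always does.
    return random.randrange(100) < threshold_percent

def found_quarters(selection):
    # Same idea with the user's choice stored as 0-4
    # (0: 0%, 1: 25%, ..., 4: 100%); randrange(4) is uniform over 0-3.
    return random.randrange(4) < selection

# Sanity check: call it many times and compare the observed
# rate to the requested percentage.
trials = 100000
hits = sum(found_percent(75) for _ in range(trials))
print(hits / trials)  # should be close to 0.75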
Various approaches to pseudo-random number generation in Objective-C are discussed here: Generating random numbers in Objective-C.
Note that mention is made of the uniformity of the random numbers potentially being sensitive to the range depending on the method you go with.
To be confident, you can always do some straightforward testing: call your function a large number of times, keep track of how often it returns true, and compare that to the desired percentage.
Generate a random number between 0 and 99. If the number is less than the threshold, an item is found. Otherwise, no item is found.

Getting all Twitter Follows (ids) with Groovy?

I was reading an article here and it looks like he is grabbing the IDs by the 100s. I thought it was possible to grab 5000 at a time?
The reason I'm asking is that some profiles have far more followers, and you wouldn't have enough API calls to fetch them all in one hour if you grabbed them 100 at a time.
So is it possible to grab 5000 IDs each time? If so, how would I do this?
GET statuses/followers, as shown in that article, has been deprecated, but it used to return batches of 100.
If you're trying to get follower IDs, you would use GET followers/ids instead. This does return batches of up to 5000, and should just require you to change the URL slightly (see the example URL at the bottom of the documentation page).
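For illustration, here is a rough sketch of the cursor paging in Python (the endpoint, cursor semantics and field names are from the followers/ids documentation of that era; authentication is omitted, and 'session' is assumed to be a requests.Session already carrying valid OAuth credentials). The same loop is straightforward to write in Groovy:

import requests

def fetch_all_follower_ids(screen_name, session):
    # Page through GET followers/ids in batches of up to 5000.
    url = "https://api.twitter.com/1.1/followers/ids.json"
    ids, cursor = [], -1          # -1 requests the first page
    while cursor != 0:            # next_cursor of 0 means no more pages
        resp = session.get(url, params={
            "screen_name": screen_name,
            "count": 5000,
            "cursor": cursor,
        })
        data = resp.json()
        ids.extend(data["ids"])
        cursor = data["next_cursor"]
    return ids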

What's the limit of google transliteration?

I've used the Google Transliteration API experimentally. It works fine, but I've noticed that it allows only five words at a time. Is there any method for sending more words, and is there any daily limit? If I have 100 words, will I have to send them in sets of five and then join the results?
100k characters per day for version 2.
The developer console allows you to apply for higher limits (which may cost money, depending on your needs): https://code.google.com/apis/console/
Looks like there is also a method for making more than just individual words transliteratable: https://developers.google.com/transliterate/v1/getting_started#makeTransliteratable
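If you are stuck sending five words per request, the batching and rejoining is simple enough; a hedged Python sketch, where transliterate_batch stands in for whatever call your client actually makes:

def chunks(words, size=5):
    # Split a word list into groups of at most 'size'.
    for i in range(0, len(words), size):
        yield words[i:i + size]

def transliterate_all(words, transliterate_batch):
    # transliterate_batch is a placeholder: it is assumed to take
    # and return a list of up to five words.
    result = []
    for batch in chunks(words):
        result.extend(transliterate_batch(batch))
    return result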

Youtube API problem - when searching for playlists, start-index does not work past 100

I have been trying to get the full list of playlists matching a certain keyword. I have discovered, however, that using a start-index past 100 brings back the same set of results as start-index=1. It does not matter what the max-results parameter is: still the same results. The total number of results reported, however, is well above 100, so it cannot be that the query matched only 100 results.
What might the problem be? Is it a quota of some sort or any other authentication restriction?
As an example, the following query brings back the same result set whether you use start-index=1, start-index=101, start-index=201, etc.:
http://gdata.youtube.com/feeds/api/playlists/snippets?q=%22Jan+Smit+Laura%22&max-results=50&start-index=1&v=2
Any idea will be much appreciated!
I made an interface for my site, and the way I avoided this problem was to do one query for a large number of results, then store them. Your web page can then break up the stored results and present them however is needed.
For example, if someone searches across more than 100 videos, do the search and collect the results, but only present the first group, say ten. Then, when the person wants to see the next ten, serve them from the list you stored rather than doing a new query.
Not only does this make paging faster, but it cuts down on the constant queries to the YouTube database.
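A minimal sketch of the idea in Python (fetch_playlists is a placeholder for the actual gdata query):

class CachedSearch:
    # Run one large query up front, then serve pages from memory.
    def __init__(self, fetch_playlists, query, max_results=100):
        # fetch_playlists is assumed to return a list of result items.
        self.results = fetch_playlists(query, max_results)

    def page(self, number, per_page=10):
        # Return page 'number' (1-based) of the stored results.
        start = (number - 1) * per_page
        return self.results[start:start + per_page]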
Hope this makes sense and helps.

How to implement a Digg-like algorithm?

How to implement a website with a recommendation system similar to stackoverflow/digg/reddit? I.e., users submit content and the website needs to calculate some sort of "hotness" according to how popular the item is. The flow is as follows:
Users submit content
Other users view and vote on the content (assume 90% of the users only view content and 10% actively vote up or down on content)
New content is continuously submitted
How do I implement an algorithm that calculates the "hotness" of a submitted item, preferably in real time? Are there any best practices or design patterns?
I would assume that the algorithm takes the following into consideration:
When an item was submitted
When each vote was cast
When the item was viewed
E.g., an item that gets a constant trickle of votes would stay somewhat "hot", while an item that receives a burst of votes when it is first submitted will jump to the top of the "hotness" list but then fall as the votes stop coming in.
(I am using a MySQL+PHP but I am interested in general design patterns).
You could use something similar to the Reddit algorithm, the basic principle of which is that you compute a value for a post based on the time it was posted and its score. What's neat about the Reddit algorithm is that you only need to recompute the value when the score of a post changes. When you want to display your front page, you just get the top n posts from your database based on that value. As time goes on, newer posts naturally receive higher values, so you don't have to do any special processing to remove items from the front page.
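A Python sketch close to the formula in Reddit's open-sourced ranking code; treat it as an approximation rather than the canonical implementation:

from math import log10

def hot(ups, downs, posted_unix_seconds):
    score = ups - downs
    # Votes count logarithmically: the first 10 matter as much
    # as the next 100.
    order = log10(max(abs(score), 1))
    sign = (score > 0) - (score < 0)  # 1, 0 or -1
    # The 45000-second (12.5 hour) divisor means a post needs
    # roughly 10x the votes to keep pace with one posted
    # 12.5 hours later.
    return round(sign * order + posted_unix_seconds / 45000, 7)

The value only changes when the vote count does, so you can store it in an indexed column and recompute it in the vote handler.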
On my own site, I assign each entry a unique integer from a monotonically increasing series (newer posts get higher numbers). Each up vote increases the number by one, and each down vote decreases it by one (you can tweak these values, of course). Then, simply sort by the number to display the 'hottest' entries.
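A minimal sketch of that scheme in Python (names are illustrative):

import itertools

_ranks = itertools.count()  # monotonically increasing series

def new_entry(title):
    # Newer posts start above everything submitted before them.
    return {"title": title, "rank": next(_ranks)}

def vote(entry, up):
    entry["rank"] += 1 if up else -1

def hottest(entries, n=10):
    # Display order is just a sort on the single rank number.
    return sorted(entries, key=lambda e: e["rank"], reverse=True)[:n]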
I developed a social bookmarking site, Sites Favoritos, and used a more complex algorithm:
First, votes are finite: a user only has a limited number of votes, and that number depends on the user's points. To earn points, each user must add links that receive positive votes.
Then, users can cast a vote of -3, -2, -1, 1, 2 or 3 on each link. As votes are limited, each user will vote only on the links they really care about.
To prevent users from voting only on links from one particular user, forming support groups, the points each vote adds to a link depend on the ratio between the voter's total votes and their votes on links from that link's owner. If you always vote on the same user's links, your votes lose value.
Votes lose value with time.
New links from users who have no points (new users) start at 0 points. New links from established users start with points that depend on the user's points, ranging from +3 down with no lower bound: links from users with negative points start with negative points, and links from users with positive points start with positive points.
Users also get a random number of points when their links are voted on: positive votes give positive points, negative votes give negative points.
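Purely as an illustration of the ratio rule above, here is one possible reading in Python; the formula and names are guesses, not the site's actual code:

def vote_weight(votes_on_this_owner, total_votes_by_voter):
    # The more of your votes that go to a single owner's links,
    # the less each of those votes is worth. Illustrative only.
    if total_votes_by_voter == 0:
        return 1.0
    concentration = votes_on_this_owner / total_votes_by_voter
    return 1.0 - concentration  # all votes on one owner -> weight 0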
Paul Graham wrote an essay on what he learned in developing Hacker News. The emphasis is more on the people/interactions he was trying to attract/create than on the algorithm per se, but still well worth a read. For example, he discusses the different outcomes when stories bubble up from the bottom (HN) versus exploding to the top (Digg) of the front page. (Although from what I've seen of HN, it looks like stories explode to the top there also).
He offers this quote:
The key to performance is elegance, not battalions of special cases.
which in light of the purported algorithm for generating the HN front page:
(p - 1) / (t + 2)^1.5
where
p = an article's points and
t = time from submission of article
might be a good starting point.
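As a quick sketch (Python; t is usually quoted in hours):

def hn_score(points, hours_since_submission):
    # (p - 1) / (t + 2)^1.5: subtracting 1 discounts the
    # submitter's own vote, and the 1.5 exponent makes age
    # eventually win out over points.
    return (points - 1) / (hours_since_submission + 2) ** 1.5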
I implemented an SQL version of Reddit's ranking algorithm for a video aggregator like so:
SELECT id, title
FROM videos
ORDER BY
    -- log-scaled net vote total, keeping its sign
    LOG10(ABS(cached_votes_total) + 1) * SIGN(cached_votes_total)
    -- plus a time term so newer videos rank higher
    + (UNIX_TIMESTAMP(created_at) / 300000) DESC
LIMIT 50
*cached_votes_total* is updated by a trigger whenever a new vote is cast. It runs fast enough on our current site, but I am planning on adding a ranking value column and updating it with the same trigger as the *cached_votes_total* column. After that optimization, it should be fast enough for almost any size of site.