Yeah but I for one won't do coding on a system where I'm typing on the screen. IDEs massively benefit from keyboard shortcuts because you can leave your hands on the keyboard.
You can prevent it from advancing by keeping your cursor inside the carousel body. That doesn't really make up for the fact that a carousel is a poor UX choice, but it's a handy trick for carousels that advance too fast, and it works on most implementations.
Edit: Now that I think about it, using the cursor hover to prevent the carousel from advancing seems like an anti-pattern. If most users are like me, they will move their cursor out of the way of the carousel to prevent obscuring the content. I have seen users who like to trace and/or select the current text they are reading, so maybe it works better for them.
Am I correct in saying that the shuffle algorithm is flawed? It looks to me like it swaps each card iteratively with a random card anywhere in the deck. This means that some cards are likely to be shuffled more times than others, and some arrangements end up more common than others.
For those interested, here is the reasoning from a combinatorics perspective. There are 52! different arrangements of a deck of cards. To see this, note that in any arrangement there are 52 locations for the Ace of Spades; once that choice is made, there are 51 locations for the next card to end up in, and so on until the last card must go in the only open location. Hence there are 52 factorial arrangements of cards into a deck.
Shuffling by swapping each card with any of the 52 spots produces 52^52 (i.e. 52 * 52 * 52 * ...) equally likely swap sequences. How can this be when there are only 52! (i.e. 52 * 51 * 50 * ...) possible arrangements? The answer is that many of those swap sequences end up with the cards in the same order. This follows from the pigeonhole principle, since 52^52 > 52!
Furthermore, we know that 52^52 is not a multiple of 52! (to see this, note that, for example, 11 divides 52! but not 52^52). Therefore, some arrangements generated by the bad shuffle must occur more often than others, and consequently the bad shuffle algorithm does not produce a "fair" shuffle.
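The bias is easy to see with a 3-card deck, which is small enough to enumerate every equally likely swap sequence. Here is a quick sketch (the `shuffle_with` helper is hypothetical, just for the enumeration):

```python
import itertools
from collections import Counter

def shuffle_with(deck, swaps):
    """Apply a fixed sequence of (i, j) swaps to a copy of the deck."""
    deck = list(deck)
    for i, j in swaps:
        deck[i], deck[j] = deck[j], deck[i]
    return tuple(deck)

n = 3  # a 3-card "deck" keeps the enumeration tiny: 3^3 = 27 vs 3! = 6

# Bad shuffle: swap each position with ANY of the n positions -> n^n sequences.
naive = Counter(
    shuffle_with(range(n), enumerate(js))
    for js in itertools.product(range(n), repeat=n)
)

# Fisher-Yates: swap position i only with a position in [i, n) -> n! sequences.
fair = Counter(
    shuffle_with(range(n), enumerate(js))
    for js in itertools.product(*(range(i, n) for i in range(n)))
)

print(sorted(naive.values()))  # [4, 4, 4, 5, 5, 5] -- 27 can't split evenly over 6
print(sorted(fair.values()))   # [1, 1, 1, 1, 1, 1] -- perfectly uniform
```

Exactly as the divisibility argument predicts: 3^3 = 27 is not a multiple of 3! = 6, so the naive shuffle lands some orderings 5 times out of 27 and others only 4, while Fisher-Yates (restricting each swap to the unshuffled remainder) hits every ordering exactly once.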
While I don't doubt your mental narrative or dismiss the value of evidence, if you think about the parent's assertion ("US and pop-culture bias") a bit longer, it's obviously correct.
I agree with the parent poster that the article is US pop-culture centric. I just wanted to point out that the data isn't, necessarily, strictly US specific. I also felt that the parent comment was a bit snarky given that this website has a large US audience, and US music and pop-culture are popular around the world.
In which case, I should voice my opposition to the suggestion that we non-USians (e.g. I'm an Australian in China) should communicate (even in our own language) with hat tipped to US popular culture because (inertia of Colosseum-fawning masses).
Here's a contrary view: I believe that intelligent people tend to respect and encourage diversity because it's both more interesting for them ("are we nought but latter-day curios for the coming AI overlord?") and because many fields of science (chiefly biology) show us strength in heterogeneity. The parent's comment was, I believe, offered in this spirit.
This answer gets more awesome the more I think about it.
Initially, I was going to reply that it was crass. Then, I thought about it and grinned.
But I think that flakmonkey's point is that yes, if you're poor, you'll be stuck in traffic at least as long as you are now. More likely, you'll be stuck in traffic far longer, because the people who can afford to pay for priority will. Given that freeways are built with public funds, that probably shouldn't be the case.
Time redistribution. People with more time will be required to pull over to the side of the road and wait for some time interval to redistribute their share of excess time to people in a hurry. It's part of the self driving car program and will be administered by the Federal Dept of Transportation. It's only fair.
If you want to validate a bearer token, multiple backends in different datacenters need to share a datastore that replicates at least somewhat quickly. If you use HMAC-signed URLs, you only need to replicate the API keys used to compute the signature, and the consistency guarantees required are lower than for a bearer token. In principle, the method is sound - AWS uses a similar approach for signed URLs that allow access to S3 resources.
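The idea can be sketched in a few lines. This is a minimal illustration, not AWS's actual scheme: the key name and the `path|expires` message format are made up, and a real implementation would also sign the HTTP method, query parameters, etc.

```python
import hashlib
import hmac
import time

# Hypothetical shared key: each datacenter only needs a copy of this key,
# not access to a replicated session store.
SECRET = b"replicated-api-key"

def sign(path, expires):
    """HMAC over the path and expiry; any backend holding SECRET can recompute it."""
    msg = f"{path}|{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(path, expires, signature, now=None):
    """Stateless check: recompute the HMAC and confirm the URL hasn't expired."""
    now = time.time() if now is None else now
    if int(expires) <= now:
        return False
    return hmac.compare_digest(sign(path, expires), signature)

expires = int(time.time()) + 300
token = sign("/files/report.pdf", expires)
print(verify("/files/report.pdf", expires, token))              # True
print(verify("/files/other.pdf", expires, token))               # False: path tampered
print(verify("/files/report.pdf", expires, token, now=expires)) # False: expired
```

Note that `verify` touches no shared state: any datacenter with a copy of `SECRET` can accept or reject the URL on its own, which is exactly the replication win over bearer tokens.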
I guess it's a question of where you store application state. Keep it on the server, and you have to replicate the database that stores it. Keep session state on the client and pass it to the server with each request, and you can hit any datacenter - at the potential cost of getting the security wrong.
That 30% faster figure is because the iPhone is rendering the scene at a lower resolution than many of the phones with larger, 1080p screens. Look at the next graph where each phone is rendering a 1080p scene to see the difference.
The GPU in the PS4 is, essentially, an Nvidia 7900-class GPU. Anandtech recently did a comparison of old PC GPUs to current SOCs and the results are surprisingly close. I'd imagine Snapdragon 800 and Oscar close the gap considerably. http://anandtech.com/show/6877/the-great-equalizer-part-3/3
edit: I'm stuck in the past. This comment is about the PS3, not the PS4.
The rate at which a battery charges is independent of its capacity. Assume the new tech is 10x less energy dense but has a 10x faster charge rate. A new battery with capacity equal to the Model S battery would charge 10x faster but occupy 10x the volume, since it is less energy dense. A new battery occupying the same volume as the Model S pack would hold 10x less energy, but it would charge to full capacity 100x faster than the Model S.