Wednesday, 29 October 2014

Fractional data types

There's likely a good solution to this already, but it eludes me: why are there no fraction data types at the same basic level as int, char, and float?

A lot of statistical operations, like matrix inversions, are computationally very expensive. Part of the reason they take so long for large matrices is all the division that's going on. Compared to addition and multiplication, division of floating-point numbers is hard for computers: it has to be done with a form of binary long division, which involves a subtraction and a multiplication for each bit.

However, division of fractions is simply cross-multiplication, which is cheap.
Consider a double-length float in a 32-bit system: it has 64 bits, 52 of which store the actual number (the mantissa), 11 of which store how large the number is (the exponent), and 1 for the plus or minus sign.

Modelled on this, a 64-bit fraction data type could have two 26-bit mantissae, 11 bits for the magnitude, and 1 for the sign.

Note that fractions are not unique representations of numbers. 18/12 is 6/4 is 3/2. We can exploit that to use only a single sign and magnitude for the fraction as a whole.

Conversion from fraction to double is a simple division, so we can't avoid division entirely, but we could do intermediate steps as fractions.
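Here's a toy illustration of the arithmetic in R (just pairs of integers standing in for a real low-level type; the function names are made up for this sketch):

# a fraction is stored as c(numerator, denominator)
frac_divide = function(a, b)   # (a[1]/a[2]) / (b[1]/b[2]) by cross-multiplication
{
 c(a[1] * b[2], a[2] * b[1])   # no floating-point division needed
}

frac_to_double = function(a)   # converting back costs one real division
{
 a[1] / a[2]
}

x = c(3, 4)              # 3/4
y = c(5, 8)              # 5/8
z = frac_divide(x, y)    # (24, 20), i.e. 24/20 = 6/5
frac_to_double(z)        # 1.2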

Conversion from double to fraction is harder. It can be done in an arbitrary manner by setting the denominator to 1, but in the above system, 26 bits of precision are lost and the whole denominator is wasted storing '1'.

Is there a way to convert a double to a fraction that minimizes loss? Can it be done cheaply enough to make conversion worthwhile? My uninformed hunch is that converting x to x^1.5 / x^0.5 would work, but square roots are still a bit costly, and I have no idea if it's particularly good at preserving precision.

Any ideas?

Sunday, 26 October 2014

SQL - An Introduction in 25 Minutes

Twice a year, the stats grad students from UBC and from SFU meet at the Harbour Centre downtown and have a one day seminar of student talks. Rather than present research, I gave a 25 minute introduction to SQL.

Specifically, it's a few slides on what SQL is, and some example queries using data from Major League Baseball's PITCHf/x system via the pitchRx package in R, and a package called sqldf. sqldf interprets queries on R data frames that are stored in local memory or in .csv files. This package is very handy for writing practice queries when you don't have access to an external database, and it's faster than setting up one on localhost through a WAMP server.

The example queries in the talk were a showcase of common clauses in a select query, and a simple join. Below are the PDF slides and the TeX code for the presentation if you wish to use it as a template. You won't be able to compile it without the images, so be sure to comment them out first.

SQL - An Introduction in 25 Minutes, PDF Slides, TeX Code
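To give a flavour of sqldf for anyone who hasn't seen it, here's a quick sketch using R's built-in mtcars data frame and a couple of made-up data frames, rather than the PITCHf/x data from the talk:

library(sqldf)   # install.packages("sqldf") if needed

# a typical select query with GROUP BY: average fuel economy by cylinder count
sqldf("SELECT cyl, AVG(mpg) AS avg_mpg, COUNT(*) AS n
       FROM mtcars
       GROUP BY cyl
       ORDER BY cyl")

# a simple join on two small, made-up data frames
teams = data.frame(team_id = c(1, 2), city = c("Vancouver", "Burnaby"), stringsAsFactors = FALSE)
players = data.frame(name = c("Alice", "Bob", "Carol"), team_id = c(1, 2, 1), stringsAsFactors = FALSE)
sqldf("SELECT p.name, t.city
       FROM players p JOIN teams t ON p.team_id = t.team_id")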

Saturday, 25 October 2014

Some fan-made Dominion cards.

Playwright (3+)
Action
You may overpay for this card. Gain one coin token for each 1 you overpay.
+1 Card, +1 Coin Token

This is a foil to the Guilds card Masterpiece, which is a poor card on its own, but allows you to overpay to put a Silver into your deck for each amount overpaid.

It's called Playwright because coin token cards traditionally are people, and it seems plausible for a playwright to produce one amazing work and then be only marginally valuable after. What I like about Masterpiece is that it's a viable buy for 7 coins, the 'deadzone', without being board-defining.

Playwright fixes the deadzone problem doubly: it gives you something worth buying for 7, and it gives you enough coin tokens to prevent issues in the future or to set up for a megaturn.



Dig Site
Victory/Treasure (0*)
This card cannot be bought normally.
You may gain this card when you trash a victory card.
1 VP, +1 Money.


Museum
Victory/Action (0*)
You may gain this card when you trash a card worth 6 or more.
3 VP, +1 Action, +1 VP chip.


Dig Site is intended to give some purchasing power when you trash an Estate or Overgrown Estate, or to beef up Silk Road without much drawback. Museum is a late game vehicle to convert Golds into Victory Points, and it's not a dead card if it ends up in your hand.

Farmland combos brilliantly with either of these cards.

These cards push the bounds of the original game a bit because there are usually at least ten non-basic cards that can be bought in a game. If there's a card with a special gain condition, like Prize or Madman, it's placed in the game as an attachment to an existing card (Tournament and Hermit, respectively). These could be attached to an... Archaeologist card?

The creator of Dominion has said he's not interested in making more cards because the game is complex enough with 220+ cards across all the sets. However, I feel like there's still room for expansion in the single-player mode that's playable online, much like how there are special cards and conditions that only exist in Hearthstone's Naxxramas fights.

The Value of Choice in Game Design

Choice has value, but it's not always possible to quantify. This allows game mechanics to be recycled. Consider the following three cards in Magic: The Gathering.

1. Lightning Bolt - A spell that deals 3 damage to a creature or player, once, at any time.
2. Kird Ape - A creature with 2 attack and 3 defence as long as a certain easily attainable condition is met, and worse otherwise.
3. Vexing Devil - A creature with 4 attack and 3 defence, but the opposing player may choose to take 4 damage immediately instead of having this creature enter play.


 

All of the cards have the same requirements to be played (one red mana). The three cards became tournament legal in different years, but all three are considered better than the typical card. In short, all three of these cards are highly interchangeable in a deck. So which one is the best?

Case I:

If the opposing player allows Vexing Devil to come into play, it has more attack than a Kird Ape would, so it's better than using a Kird Ape in that situation.

Case II:

If the opposing player does NOT allow Vexing Devil to enter play, then the opposing player takes 4 damage. This is more than the 3 damage that a Lightning Bolt would deal.

In either case, it seems like Vexing Devil performs better than either of the other cards. So what makes these three cards fairly comparable? The opponent's choice does.

From the perspective of the player of Vexing Devil, they will always end up with the less favourable of the two cases, because the opponent chooses. If either of those cases were only as good as the alternative cards mentioned, the alternative card should be used instead. So, to make Vexing Devil viable, both cases must be better than what the alternative cards offer.

From the opponent's perspective, they can choose to let the creature come into play only when they can deal with it in a way that is less costly than taking 4 damage.

In short, the value of choice must be priced into the relative strength of game elements. This is what makes the Queen in chess strictly better than either the Rook or the Bishop. The Queen can only make moves that either the Rook or the Bishop can do, but having the choice to do either is what gives her her power.

 

Consider one more card:

Forked Bolt – Deals 2 damage to target creature or player, OR 1 damage to each of 2 such targets.

This card is also considered above-par, and has the same mana requirement as the other three. It's arguably weaker than Lightning Bolt, but not by much, despite dealing less total damage. Why? Because the player of this card has the additional choice of splitting the damage.

Crowd Science idea

Video blog that explains the rationale behind a poorly studied treatment (e.g. binaural beats, neurofeedback, RISUG) and how it could be tested. Then the ad revenue from the videos, plus crowdfunding, is used to actually perform the described test and analysis, and the results are blogged. Idea credit to Rachel Lipson, necessity credit to NSERC.

Binaural beats as a treatment for insomnia may be a good place to start. There is an open-source Android app that I could edit to administer a placebo. The placebo would be a real binaural beat that fades into a monaural tone after a short time. The app could use some ID number, like the SIM card's, to determine who gets the placebo, and could record how much it was used. A validated survey instrument for insomnia could be attached to the app, and voila! I have my data recording done.

The only costs would be licensing for the validated instrument and maybe hiring an app developer. If it was a success, it could spring into more expensive treatments like neurofeedback.

Penny Words

If poetic words like "exhilarated" are considered five-dollar words, then words like "phone", "brought", "debt", "subpoena", and "ghoti" must be one-cent words. They're the pennies of the language. We take the extra time to exchange them solely because of convention, misplaced nostalgia, and a lack of imagination.

Spelling is important for clear and efficient communication, but if that is the reason for conventional spelling, then the words mentioned above are SPELLED WRONG!

Other than being used to the traditional spellings already, would you have any harder a time understanding me if I had written "fone", "brot", "det", "supeena", and "fish"? Are these spellings any more awkward in accents removed from the Queen's English? I doubt it, but I want to hear your input.

Much like how pennies make cash worse and push people toward alternative payment methods, penny words make English worse and push people toward alternative means of communication (or, more specifically, they make English less motivating to adopt).

From a native English speaker's perspective, I enjoy the advantage of my tongue being the standard dialect of business and science and would like to keep it that way. From a teacher and statistician's perspective, I dislike penny words because they disobey the principle of parsimony.

It's not enough to teach phonetic spelling because that would hamstring learners like English As She Is Spoke did. However, I imagine it would be possible to instead make a standard phonetic spelling an acceptable set of alternative spellings in English. I further imagine that when faced with an alternative as an equal, some of the original spellings of these words will become archaic and obsolete.

How does this sound for a plan?
First, crowdbuild a database of traditional and phonetic spellings of words. In the first phase, spellings could be resolved by a voting system weighted by credentials and experience like IMDB's rating system. In a second phase, the spellings could be manually curated by established writers and English grads.

Next, build a translator plug-in that could convert digital text from standard to phonetic English, which, if we're limiting this to spelling and not syntax, should be a matter of direct replacements most if not all of the time. This is likely sufficient for webpages. Also, the translator need not be an all-or-nothing deal.

Each spelling replacement could have a priority score, and readers could choose to translate only the highest-priority words at first, making a progressively larger transformation of what they're reading. Difficult words that are rarely used or are already misspelled often because of their poor construction, like "subpoena", could have high priority, and ones that are more for phonetic purists, like "fone", could have low priority. Would there be many issues with back-translation if someone writing in phonetic English wanted to translate to traditional?
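A toy sketch of the direct-replacement idea in R (the word list, priority scores, and function here are all made up for illustration):

# a tiny spelling table: traditional spelling, phonetic spelling, priority (1 = highest)
spellings = data.frame(
 traditional = c("subpoena", "debt", "phone"),
 phonetic = c("supeena", "det", "fone"),
 priority = c(1, 2, 3),
 stringsAsFactors = FALSE
)

translate = function(text, max_priority)   # replace only words at or above the chosen priority
{
 keep = spellings[spellings$priority <= max_priority, ]
 for(i in seq_len(nrow(keep)))
 {
  text = gsub(paste0("\\b", keep$traditional[i], "\\b"), keep$phonetic[i], text, perl = TRUE)
 }
 text
}

translate("The subpoena arrived by phone; the debt was paid.", max_priority = 1)
# only "subpoena" becomes "supeena"; raising max_priority swaps in more phonetic spellings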

Next, this plug-in could be added to e-books, or phonetic translations of public domain books could be made available, again checked manually for quality by human readers to avoid the Kindled / Nookd situation that happened with some e-reader brands.

Once there were widely accepted books available in phonetic spelling, would it make the new spellings more acceptable to people?

Pica

Pica is a compulsion or habit to eat things that are not food, such as cigarette butts, dirt, and coins. Swallowing gum probably doesn't count because it's usually completely in your mouth when used as intended.

I had shared my idea with one or two of my intro psych profs at CNC. At least one encouraged me to pursue it as legit academic research in the future.

Filling in the blanks... The idea was that some quirk in the hypothalamus made it perceive a nutritional need that wasn't being met with real food, but could be addressed by whatever nonfood was being eaten. It would explain eating things that don't accumulate, like cigarettes or loose soil. It didn't explain coins, which would accumulate and be digested slowly. I imagined a nutritional bar that could contain the less dangerous nonfood substances that pica sufferers would otherwise be ingesting, hopefully as a safer alternative that would reduce cravings. It was to be called pica chew.

This is a real story and not just a lead up to that awful pun. Blame 2004 me for the name.

This idea and the way in which it was received played a role in my motivation to do graduate school, even though it ended up being in something much different. Thanks for reading and indulging this mental spring cleaning.

Remote Healthcare Workers for Outbreaks

My vision is of humanoid robots in hospitals being controlled remotely by healthcare workers through a VR system like the Oculus Rift. A camera and monitor system would allow face-to-face interaction. There was a video floating around of someone using a VR helmet and a curved pad that let the user walk or jog in place to control the on-screen character's navigation. The robot's hands could be controlled with a pair of force-feedback gloves that resist the user's fingers' movements with appropriate force when the robot hand is grasping something.

If information or extra workers were required to move quickly from one part of the hospital to another, they could switch which robot they were using. This would also further prevent the spread of infections from ward to ward because physical bodies could be contained to a floor, at least until samples were required to be moved.

American Psychometric

Eight discrete bars tastefully separated from each other, presenting itself as a bar graph of distinct categories instead of a histogram of continuous values. Labelled axes showing popularity values for each car colour on a brushed steel background. My god, it even has the “other” category!

The Research Potential of Twitch TV

What is Twitch?

Twitch is a live video streaming service marketed to video game and e-sports enthusiasts. It arose from a community of dedicated competitive head-to-head gamers and "speedrunners", who complete single-player games as fast as they can. Twitch, formerly Justin.tv, allows players to upload a live video feed of their screen, perhaps with some overlays like a timer or a camera of the player's face or hands. Twitch earns its revenue much the way traditional television does, with commercials that intersperse the play. For most players, who have 0-10 viewers at any given time, these show up for 15-30 seconds when a viewer starts watching a channel. Proven high-tier players (proven in terms of viewership, not in-game achievement) can enter into profit-sharing agreements with Twitch, and may play additional commercials to generate revenue for themselves and for Twitch. Many games have predictable moments of downtime, such as the restart of a speedrun attempt or a long loading screen, when an advertisement doesn't detract from the experience. These high-tier players can also offer monthly subscriptions that give subscribed viewers small cosmetic benefits and the removal of ads.


Why Twitch is a big deal, and why it's different.

Recently Twitch was purchased by Amazon for nearly $1 billion USD (or $1000 million USD if you're using European notation). Aside from being the web's everything store, Amazon also sells a portfolio of large-scale computing services, so Twitch wasn't an out-of-character purchase. Amazon has the hardware means to support the growing operation of receiving, compressing (often into multiple formats for viewers at different bandwidth levels), and sending video streams from thousands of players to, at the moment, roughly half a million viewers.

That is a LOT of bandwidth, and the senders and receivers could be anywhere and could change at any time. Logistically, that makes what Twitch does a lot more impressive than the feat of providing traditional television to 500,000 simultaneous viewers. With traditional television, a few hundred channel streams can be sent to a local station, which can copy those streams as needed for the short distance from the local station to the viewer. Bandwidth on the main lines does not increase as the number of viewers increases for traditional television. Youtube does what it can to get similar cost advantages. Youtube is a one-to-many system with a central repository of pre-uploaded videos. It has the advantage of knowing which videos are popular or becoming popular, and can use that to reduce its bandwidth costs by storing copies of popular video files with local internet service providers. I assume Netflix has a similar arrangement. Even with live streaming of large events, mainline bandwidth can be saved by branching if demand can be predicted, such as with playoff hockey games and Olympic events - in that order.

Twitch, however, with its many autonomous content providers and dispersed viewers, cannot predict where enough viewers of a given channel will be to take advantage of a local provider like traditional television can. It also can't store its main product, which is live, in a central repository. In short, that massive amount of bandwidth has to go through the internet in an ad-hoc fashion, which must make the per-viewer costs much higher than those of competing entertainment. Now that Amazon has put colossal gobs of money behind Twitch, it surely has some ideas to reduce these costs. Predictability may have made Youtube more efficient, but what could be predictable about video game streamers?

"I'm tired of dating responsible, burly, musical chef-lumberjacks. What would really seduce me is a lecture on video compression and the power law." - Nobody. Ever. :(

The popularity of games follows the power law. Lots of things follow the power law. You may have heard of this as the 80-20 rule, as in "80% of alcohol is consumed by 20% of the drinkers". For television it would be "80% of the revenue comes from 20% of the shows". Sometimes it's called the 90-10 rule for similar reasons (that's how I've heard it for the relative frequency of words in a language), but the basic principle is the same: there are a few things (channels, games, drunks) that far surpass all the others.

For Twitch, this works for games. It's almost 2am on a weeknight in Vancouver, Canada as I write this, which is a lull for Twitch. Three games - DOTA 2, League of Legends, and Hearthstone: Heroes of Warcraft - have 63,000, 40,000, and 33,000 viewers respectively. There are three games with 5,000-10,000 viewers at present, twelve games running between 1,000 and 5,000 viewers, fifty games between 100 and 1,000 viewers, and hundreds of games with 1-99 people watching. The viewership of channels within a game also follows the power law. 70% of Hearthstone's 33,000 viewers are watching a single channel, 15% are watching the next two most popular channels, and so on.
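To see roughly what the power law implies, here's a toy simulation in R (a Zipf-like distribution over 500 hypothetical channels, not Twitch's actual numbers):

# the k-th most popular channel gets weight proportional to 1/k
viewers = 1 / (1:500)
viewers = viewers / sum(viewers)   # convert to shares of total viewership

sum(viewers[1:100])   # share of the audience on the top 20% of channels, roughly 0.76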

Why would anyone care that viewers (and therefore, revenues and costs) follow the power law? Because it means that improving the efficiency of the streaming of only a handful of games can go a long way towards improving the efficiency of all the streaming that happens.

By improving efficiency, I'm referring specifically to reducing the number of bits of information that have to be routed across the internet to deliver a stream to a single viewer at a fixed quality. The most common way of doing this is with video compression codecs*. To upload and download something that's essentially thirty pictures per second, digital videos typically use codecs, which are like books of shorthand that are shared between the sender and the receiver. In text, you can think of the acronyms**, jargon, and in-jokes that you know (BC, GLM, Old Gregg) as your personal codec, because you can use them to communicate much more quickly with other people that know the same acronyms, et cetera.*** Computers do similar things with streaming videos: they exploit common knowledge to produce a smooth video without. having. to. send. the. entire. picture. for. every. frame. Among other things, codecs exploit the fact that most frames are similar to the ones just before them; most of the differences between frames are movement or animation. Even cuts within a scene share a colour palette, and there are usually transitions between scenes. This is why, when things get really fast-paced in a video, or if you're skipping around it wildly, the picture goes a little crazy and you'll see ghosts of previous things or strange colours and blocks.
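To make the "most frames are similar to the ones just before them" idea concrete, here's a toy frame-difference step in R; real codecs are vastly more sophisticated, but the principle of sending only what changed is the same:

# two consecutive "frames" as tiny greyscale matrices (values 0-255)
frame1 = matrix(100, nrow = 4, ncol = 4)
frame2 = frame1
frame2[2, 3] = 180   # a single pixel changed between frames

changed = which(frame2 != frame1, arr.ind = TRUE)   # positions that differ

# instead of resending all 16 pixels, send only the changed positions and their new values
cbind(changed, value = frame2[changed])   # one row: row 2, col 3, value 180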

Twitch already uses codecs, but I imagine that the ones it currently uses are designed for general video, or at best for video games in general. However, we already established that Twitch is primarily used to stream a handful of games. Each of these games has its own patterns that could be used to make codecs that work better for that specific game.

Here are two typical screen captures for a Hearthstone stream, taken half an hour apart. (From Hafu's channel: http://www.twitch.tv/itshafu Screencaps used without permission.)

This is a card game, so there are already a lot of static elements that a codec can use. Most of the outer rim is the same, save for a shading change, so a codec need only send "no change in this area" each frame. Likewise for most of the centre board space. Several interface elements have changed little between the two pictures and have minimal changes from frame to frame. Aside from the shading change and the movement in the live camera, the biggest changes in the play arena are the decorations in the corners. You can't tell from the static images, but the ruby in the moai statue's eye in the left screen glints regularly, and the water in the waterfall burbles down in an animated loop. Likewise, the spiky zeppelin in the right image floats around in a fixed pattern, and the smoke from the hut slowly billows.

Tor Norretranders would call this "exformation".

If the codec were specifically designed around Hearthstone, it could recognize decoration elements like the glinting ruby and the billowing smoke. Then, instead of Twitch's servers having to send the slight changes to the ruby and the smoke over time, they could send a much simpler "carry on" message, as if that part of the image were static. The receiving end, knowing it had a ruby or some smoke, could fill in the animation in the video without having it described explicitly by the server. Since a large portion of the viewers of this stream have a copy of Hearthstone and play it themselves, the codec could even draw upon the art assets of the game to create the animations semi-autonomously.

Other object recognition could be done to leverage the sort of repetition that isn't found in general video, but is found in video games. The dark gems in the lower right corner of each image are repeated. With access to the art assets, Twitch could send "draw four [dark gems] here" instead of the much longer "draw the following thousands of blue and black pixels". Without the assets, a more general signal could be sent to draw the pixels for one gem, and simply a repeat command for the next three.
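Here's a toy version of that in R, treating one row of the screen as a list of tiles and run-length encoding the repeats (just to illustrate the "draw four of these here" message, not how Twitch actually encodes anything):

# pretend one row of the screen is a vector of tile IDs
# (in reality each tile would be a block of pixels or a hash of one)
tiles = c("sky", "sky", "sky", "gem", "gem", "gem", "gem", "hut")

rle(tiles)   # run-length encoding
# lengths: 3 4 1
# values : "sky" "gem" "hut"
# i.e. "draw three sky tiles, four gems, one hut" instead of eight full tile descriptions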

Finally, object recognition could be used as a graphical form of automobile amputation autocomplete. See that "Soul of the Forest" card? If you knew every card in the game, you could cover up the bottom 85% of that card and still recognize it. A server at Twitch, streaming this to the 2,400 viewers that this channel had at the time, could save a lot of effort by recognizing that card and telling the viewers with art assets to draw that card in that position, rather than describing the card pixel-by-pixel to every viewer. It helps greatly that cards have many graphical characteristics in common, like the gem with the number in the upper left, which a server could use to recognize that the object in that portion of the screen is a card, where it should look for the rest of the card, and what else to look for.

Streaming is bandwidth intensive, and bandwidth has an electricity cost and a carbon footprint. Amazon could save a lot of money, provide a more reliable product, and offset carbon at no extra cost if it found some ways like these to take advantage of the predictable content that people are streaming on Twitch.

My conjecture is that Amazon is already working on such technology, which I'm calling Application Specific Video Codecs for now. But just in case they're not, Fabian, you know how to find me. ;)

To clarify, this is about more than just a codec for gaming; that's just the inspiration. Imagine a means of video streaming that's halfway between pre-rendering and streaming. It could also be used for sports programs with consistent overlay formats, and for shows with fancy transitions that get used a lot (and are typically hard to compress because they involve large movements).

It also seems like something a third party would develop and then sell the tailoring service to various streaming systems.


* I apologize for my inexact description of video compression and codecs. This is a blog post, not a textbook, and I am not yet an expert in this particular field.
** Acronyms and initialisms. Insufferable pedant.
*** et cetera. Something you should never end a sentence with. Also see "preposition".

One Thousand Words on Writing Better Surveys

1. Make sure your questions are answerable. Anticipate cases where questions may not be answerable. For example, a question about family medical history should include an 'unknown' response for adoptees and others who wouldn't know. If someone has difficulty answering a survey question, that frustration lingers and they may guess a response, guess future responses, or quit entirely. Adding a not-applicable or open ended 'other' question, or a means to skip a question are all ways to mitigate this problem.

2. Avoid logical negatives like 'not', 'against', or 'isn't' when possible. Some readers will fail to see the word 'not', and some will get confused by the logic and will answer a question contrary to their intended answer. If logical negatives are unavoidable, highlight them in BOLD, LARGE AND CAPITAL.

3. Minimize knowledge assumptions. Not everyone knows what initialisms like FBI or NHL stand for. Not everyone knows what the word 'initialism' means. Lower the language barrier by using as simple language as possible without losing meaning. Use full names like National Hockey League, or define them regularly if the terms are used very often.

4. If a section of your survey, such as demographic questions, is not obviously related to the context of the rest of the survey, preface that section with a reason why you are asking those questions. Respondents may otherwise resent being asked questions they perceive as irrelevant.

5. Each question comes at a cost. Don't include questions haphazardly or allow other researchers to piggyback questions onto your survey. Every increase in survey length increases the risk of missed or invalid answers. Engagement will drop off over time.

6. Be specific about your questions, don't leave them open to interpretation. Minimize words with context specific definitions like 'framework', and avoid slang and non-standard language. Provide definitions for anything that could be ambiguous. This includes time frames and frequencies. For example, instead of 'very long' or 'often', use '3 months' or 'five or more times per day'.

7. Base questions on specific time frames like 'In the past week how many hours have you...', as opposed to imagined time frames like 'In a typical week how many hours have you...'. The random noise involved in people doing that activity more or less than typical should balance out in your sample. Time frames should be long enough to include relevant events and short enough to recall.

8. For sensitive questions (drug use, trauma, illegal activity), start with the negative or less socially desirable answers first and move towards the milder ones. That gives respondents a comparative frame of reference that makes their own response seem less undesirable.

9. Pilot your questions on potential respondents. If the survey is for an undergrad course, have some undergrads answer and critique the survey before a full release. Re-evaluate any questions that get skipped in the pilot. Remember, if you could predict the responses you will get from a survey, you wouldn't need to do the survey at all.

10. Hypothesize first, then determine the analysis and data format you'll need, and THEN write or find your questions.

11. Some numerical responses, like age and income, are likely to be rounded. Some surveys ask such questions as categories instead of open-response numbers, but information is lost this way. There are statistical methods to mitigate both problems, but only if you acknowledge the problems first.

12. Match your numerical categories to the respondent population. For example, if you are asking the age of respondents in a university class, use categories like 18 or younger, 19-20, 21-22, 23-25, 26 or older. These categories would not be appropriate for a general population survey.

13. For pick-one category responses, including numerical categories, make sure no categories overlap (i.e. mutually exclusive), and that all possible values are covered (i.e. exhaustive.)

14. When measuring a complex psychometric variable (e.g. depression), try to find a set of questions that have already been tested for reliability on a comparable population (e.g. CES-D). Otherwise, consult a psychometrics specialist. Reliability refers to the degree to which responses to a set of questions 'move together', or measure the same thing. Reliability can be computed after the survey is done.

15. Ordinal answers in which a neutral answer is possible should include one. This prevents neutral people from guessing. However, not every ordinal answer will have a meaningful neutral response.

16. Answers that are degrees between opposites should be balanced. For each possible response, its opposite should also be included. For example, strongly agree / somewhat agree / no opinion / somewhat disagree / strongly disagree is a balanced scale.

17. Limit mental overhead - the amount of information that people need to keep in mind at the same time in order to answer your question. Try to limit the list of possible responses to 5-7 items. When this isn't possible, don't ask people to interact with every item. People aren't going to be able to rank 10 different objects 1st through 10th meaningfully, but they will be able to list the top or bottom 2-3. An ordered-response question rarely needs more than 5 levels from agree to disagree.

18. Layout matters. Place every response field unambiguously next to its most relevant text. For an ordinal-response question, make sure the ordering structure is apparent by lining up all the answers along one line or column of the page.

19. Randomize response order where appropriate. All else being equal, earlier responses in a list are chosen more often, especially when there are many items. To smooth out this bias, scramble the order of responses differently for each survey. This is only appropriate when responses are not ordinal. An example of an appropriate question: 'Which of the following topics in this course did you find the hardest?'

20. A missing value for a variable does not invalidate a survey. Even if the variable is used in an analysis, the value can be substituted with a set of plausible values by a method called imputation (a small sketch follows below). A missing value is not as bad as a guessed value, because the uncertainty can at least be identified.
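A minimal sketch of the idea in point 20, using the simplest kind of imputation (filling the missing value with a draw from the observed responses); real analyses usually repeat this to get multiple imputations, but the principle is the same:

# hypothetical survey responses: hours of sleep per night, with one missing value
hours = c(7, 6, 8, NA, 7, 5)

set.seed(1)   # for reproducibility
observed = hours[!is.na(hours)]
hours_imputed = hours
hours_imputed[is.na(hours)] = sample(observed, sum(is.na(hours)), replace = TRUE)

hours_imputed   # the NA is replaced by a plausible observed value
# repeating the imputation with different draws shows the uncertainty the missing value carries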

Main Source: Fink, Arlene (1995). "How to Ask Survey Questions" - The Survey Kit Vol. 2, Sage Publications.
Word count starts, inclusive, with the title and ends, not inclusive, with the signature. Counted 2013-08-23 by Word Count Tool

The Order of Data Contains Information, But How Much?


For some sampling designs, especially adaptive ones like those in Thompson and Seber (1996), the order of the values in the sample matters. Statistics derived from the tuple of observed values (3,10,4,9) may be different from statistics derived from the tuple (9,3,10,4). The order in the round parentheses implies the order in which these values were observed.

In other cases, like simple random sampling, only the observation values and not their order matters. In simple random sampling, the samples {3,10,4,9} and {9,3,10,4} are the same. The curly braces imply set notation, where the order doesn't matter.

The unordered dataset can be produced from the ordered data, but the converse isn't true. The implication is that ignoring the order of a tuple (ordered set) of numbers destroys some information about that tuple. The unordered set {3,10,4,9} contains less information than the ordered tuple (3,10,4,9), but this isn't obvious because both are described as four numbers. Other sufficient statistics, like the total 26 (a sufficient statistic for the simple mean), are obvious data reductions because they describe something using fewer than all of the values from the set or tuple. However, removing ordering doesn't reduce the dimensionality, so how does it reduce the data, and by how much?

 

Example one - Sample of 8 Binary Responses


We can show that some information is lost, and find a lower bound on how much, by describing an unordered set of numbers in fewer bits than a tuple of those same numbers could be described in. First, consider a data set where the only possible measured values are 0 and 1. The ordered data of n such observations (uncompressed, as we're making no assumptions about the data yet) require n bits to store.

Let a tuple of eight binary numbers, such as (0,1,1,0,0,1,0,1) come from a distribution of eight-tuples. There are 2^8 = 256 possible tuples that could come from this distribution. Let's assume the maximum entropy case, where all 256 tuples are equally likely to come from this distribution. In this case, tuples require an average of exactly 8 bits to store and transmit.

Now consider the unordered sets derived from these tuples. The set {0,1,1,0,0,1,0,1} is the same as {1,1,0,0,0,0,1,1} or {1,1,1,1,0,0,0,0}. It can be expressed without loss as the number of ones in the data set. That leaves only nine possibilities, ones = 0, 1, 2, 3, ..., 7, 8, and that information takes at most 4 bits to express. If each tuple was equally likely, then we could use a code like this to reduce the average transmission length to about 2.8 bits:


Output    Code    Length    Probability
4 ones    00      2         70/256
3 ones    010     3         56/256
5 ones    011     3         56/256
2 ones    100     3         28/256
6 ones    101     3         28/256
1 one     1100    4         8/256
7 ones    1101    4         8/256
0 ones    1110    4         1/256
8 ones    1111    4         1/256

Average length: Sum(bits X probability) = 716/256 = 2.796875, or about 2.8 bits
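A quick check of that number in R:

bits = c(2, 3, 3, 3, 3, 4, 4, 4, 4)   # code lengths from the table
ones = c(4, 3, 5, 2, 6, 1, 7, 0, 8)   # number of ones in the tuple
prob = choose(8, ones) / 256          # 70/256, 56/256, ... as above

sum(bits * prob)   # 2.796875, about 2.8 bits on average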

 

Example two - Sample of 10 Integer Responses

Consider a sample of ten values from a discrete uniform (min=0, max=255) distribution. All the values from 0 to 255 are equally probable. In an ordered data set, the shortest expected length in which data from this distribution can be expressed is 80 bits (8 bits of entropy per observation times 10 observations; see Cover and Thomas (2006)).

However, if this were an unordered set of integers, it can always be expressed in fewer than 80 bits. Here is one example of how:

X, the number of observed values 0-127 inclusive (4 bits, representing 0-10)
X observations within [0,127] with the most significant bit removed (7X bits)
10-X observations within [128,255] with the most significant bit removed. (70-7X bits)

74 bits in total.

If there are X observations from 0 to 127, all of them have 0 as their most significant bit. Since order doesn't need to be preserved, these X observations are stored first and the most significant bit is assumed to be 0 for each of them. The remaining 10-X observations must begin with a 1, so their most significant bit is redundant as well.
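Here's a sketch of that encoding in R, writing the "transmission" as a string of 0s and 1s just to count them (the helper functions are made up for illustration):

# write an integer as a fixed-width string of bits, most significant bit first
to_bits = function(x, width)
{
 bits = as.integer(intToBits(as.integer(x)))   # 32 bits, least significant first
 paste(rev(bits[1:width]), collapse = "")
}

encode_unordered = function(values)   # ten integers in 0..255, order ignored
{
 low = values[values <= 127]    # most significant bit is 0
 high = values[values >= 128]   # most significant bit is 1
 msg = c(to_bits(length(low), 4),                                  # X: 4 bits
         vapply(low, function(v) to_bits(v, 7), character(1)),     # 7 bits each, MSB dropped
         vapply(high, function(v) to_bits(v - 128, 7), character(1)))
 paste(msg, collapse = "")
}

sample_values = floor(runif(10) * 256)   # a 10-tuple, discrete unif[0,255]
nchar(encode_unordered(sample_values))   # 74 bits, versus 80 for the ordered tuple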

This doesn't work with a tuple where order matters, because knowing how many values are within [0,127] isn't enough; we also need to know where they are in the tuple. We would require a collection of ten zeroes and ones to describe the first bit of each value. As shown in the first example, this requires 10 bits, making the total length 80 bits and saving nothing.

For large data sets, further savings can be had by using 4, 8, or 16 bins and removing the first 2, 3, or 4 significant bits from each value, respectively. In signed integers (those that include negatives), the most significant bit is the sign, and it can be handled in exactly the same way.

Continuous data can be treated similarly if it is quantized to a fixed precision.

By ignoring the order of a collection of 10 integers drawn uniformly from 0 to 255, we destroy at least 6 bits of information, or 7.5%. We can also find an upper bound on the information destroyed by looking at the number of possible mappings.

Example: If all 10 values from 0 to 255 in the above example were unique, then one unordered set of numbers could be the unordered version of any one of 10!, or 3,628,800, tuples with equal likelihood. log2 of 10! is approximately 21.791, so the information lost from ignoring ordering cannot be more than 21.8 bits.

Also, not every element will be unique, so in many cases there are fewer than 10! tuples mapping to an unordered set. In the extreme case where all the values are the same, no information is lost by ignoring the order. Finding the distribution of the number of bits lost to ordering is tedious, but we can approximate it with a Monte Carlo simulation.

R-code (Version 2.14.1)

set.seed(12345)   # arbitrary seed for reproducibility
Nruns = 100000   # sets the number of runs
bitslost = rep(NA,Nruns)  # records the number of bits lost in each case
l2fac10 = log2(factorial(10))   # bits lost with all unique elements

for(k in 1:Nruns)  # for each run...
{
 tuple = floor(runif(n=10) * 256)    # generate a 10-tuple, discrete unif[0,255] 
 tab = table(tuple)   # get the number of instances of each value
 bitslost[k] = l2fac10 - log2(prod(factorial(tab)))   # calculates log2 of the number of tuples mapping to this unordered sample
}

#output
summary(bitslost)
table(bitslost)


I got the following results:

  Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  17.21   21.79   21.79   21.61   21.79   21.79


Bits Lost    Relative Frequency
17.206       0.00003
18.206       0.00008
18.791       0.00018
19.206       0.00165
19.791       0.00866
20.791       0.15426
21.791       0.83514


Steven K. Thompson and George A. F. Seber, 1996, Adaptive Sampling, Wiley.

Thomas M. Cover and Joy A. Thomas, 2006, Elements of Information Theory, Wiley, 2nd ed.

Daily Grind

(Archival, meaning salvaged from a previous blog of mine)

What if there were a button hidden down by the docks of a city, perhaps London, which, if pressed between 4:51pm and 4:52pm each day, would produce $100? Once it produces its first $100, it has to be pressed every day or else it will stop working forever.

 

Imagine someone activates this button and, by trying to recreate the conditions of payment, figures all this out and assumes the last condition (maybe because of something they read in the library).

 

This becomes the person's 'job': she makes her way to the button every day and presses it in a way discreet enough that nobody figures out what she's doing. Maybe all the other jobs she finds require her to be there until 5. Maybe this becomes a farce or a commentary on the rigidity of labour, when a lot of the jobs with this requirement have no obvious reason to have a schedule that includes 5pm other than workday tradition (transcriber comes to mind).

 

So the button becomes the job.

 

At first this seems fantastic. It's almost $3000 per month for doing no useful work. Sure it's a bit of a commute from where she lives, and close to rush hour too, but who cares?!

 

Then it hits. Every day. Every day she has to be there at the button, regardless of weather and traffic and crowding on the Underground. She decides to move closer into town to get to the button faster each day. Even then it takes almost two hours each day to get there and back, and back to whatever she was doing. It's also expensive to live in town, which is eating into her button payments.

 

It's harder to see her friends now that she's moved into the city. She's having a hard time meeting new people because she doesn't have a job to go to where she has to interact with people. It's hard to go out and meet new people, even though she's an interesting person, because she's afraid of being asked what she does for a living.

 

This isn't what she went to college for, and she's starting to feel a personal decay from her lack of self-actualization. She gets accustomed to drifting in neutral gear and it's eating away at her. It makes London, which has more to offer than almost any other city on earth to her, feel mundane and pointless. She needs a vacation, a vacation from doing nothing.

 

But she can't. The button must be pressed every day.

 

Does she let someone else in on her secret, to 'buttonsit' while she's away? Would she have to pay someone to do what she's been doing for free for months?

 

Then the story is left unresolved, because that's how reality works. We get stuck just like this, and the narrator has no right nor ability to tell you how to get out of your own ruts.

 

The title of the story would be "Nine to Five".