Sunday, 31 May 2015

First thoughts on narrative reporting

I just finished reading The Mayor of Aihara, a biography of a man named Aizawa from rural Japan, derived from the daily journal he kept from 1885 to 1925. It was the first history book I've read in six years.

I read it because I wanted to get a sense of how non-fiction is written outside of the sciences. The content was good, but it was the style I was looking for, which turned out to be 160 pages of narrative sandwiched between two 20-page blocks of analysis and commentary.

The introductory chapter discusses the limitations of the biography as a view into the life of the Japanese as a whole. It also gives some general context of the world that Aizawa lived in.

The next five chapters cover blocks of Aizawa's life. Events within each chapter are told mostly in chronological order. There is some jumping around to help organize the material into something like story arcs, except that it's almost all about one person.

In places, details that the biography's author couldn't possibly have known are included, such as the circumstances of Aizawa's birth and the reasoning behind the local police officer having a bicycle.

Sometimes the author injects his own interpretations, such as that Aizawa was mostly unaware of the plight of the tenant farmers in his village, and that he cared more about his family than was typical. (In the concluding chapter, some of these assumptions are justified.)

These aspects gave the biography more cohesion as a story, with clearer connections between cause and effect, than Aizawa's life likely had in reality. I didn't mind much because it made the material easier to read, track, and remember.

Still, the reporting is subjective, not just where necessary, but also where the author could improve the work's narrative quality without sacrificing much accuracy. Contrast that with scientific reporting: when a chemistry experiment is done properly, every scientist should produce identical lab notes, and the resulting reports should provide the same information regardless of who produced them. If someone else were to examine Aizawa's journal, even someone with the same background in Japanese history as the biography's author, they would produce a biography with different information.

This focus on narrative in the presentation of facts is perplexing at first, but the rationale is visible.

Friday, 8 May 2015

The end of jobs

A friend recently asked me if I foresee any chance of "jobs", or in his words "the economic trade of labour in return for payment", becoming so unsustainable that we as a society would abandon it. My response is below.



The term "chance" implies that I'm not certain about the unsustainability.

There will NEVER be enough meaningful jobs for everyone. Unemployment is only in a 'healthy' range of around 6-7% right now because the system is stretched to its utter limit to create jobs, often at the cost of getting meaningful work done.

First, self-employment is counted as a job in this statistic, as is part-time work. So the proportion of people who trade their labour for payment is likely a lot smaller than the official figures suggest.

There is also a large portion of jobs that simply shouldn't exist.

- Literally pointless jobs, like full-service gas pumps. Gas stations could be fully automated and behave like large vending machines.

- Parasitic jobs such as car salespeople, day traders, and arbitrageurs. I separate these from pointless jobs because they do perform a service, but only because legacy systems mandate that these services are necessary.

- Fields where the output of the field is only loosely related to the number of people working in it, such as marketing. From the perspective of companies, if half of all ads disappeared, the only real effect would be that each remaining ad became twice as effective. Likewise, the benefit to the consumer, knowledge of a product, would be just as large with perhaps only 10% of ads retained. In that sense, 90% of the work in advertising, from ad creation to posting billboards, is parasitic.

Then there are the jobs that won't be for much longer.

- Physical jobs that are due for automation, such as semi-truck driving.

- Small manufacturing jobs that can be simply replaced by on-demand 3D printing.

- Technical jobs that routinely get automated, like the way search engines have supplanted librarians.

- Many service jobs serve a fixed proportion of the population, such as teaching, haircutting, and child care. However, the populations of developed countries are flat, declining, or dependent upon immigration to maintain the growth that modern economics relies upon so dearly.

- Many resource-based jobs are at risk from better energy efficiency, better labour efficiency, and automated landfill harvesting and reclamation. Even agriculture is being turned upside down by cultured meat. With it, there goes shipping.

Finally, the work that NEEDS to be done, such as environmental restoration, medical services, and the development of space technology, simply doesn't work well under an exchange-for-payment system, because economically 'rational' people and corporations either won't or can't pay for it.

I would refer you to the 20-minute video "Humans Need Not Apply" for a compelling argument about why this is inevitable. My best resources on universal basic income and post-scarcity are the novels Accelerando and Red Mars, which touch on these topics.

Wednesday, 6 May 2015

Prelude to a FUSS

I apologize in advance if this one is incoherent, as most of it has come in fever dreams over the last couple days.

I want to make a FUSS. That is, a Formula-Unspecified System Solver. 

The FUSS would input a p * n matrix of X values and a 1 * n vector of y values, where n is the number of observations and p the number of predictor variables.

The FUSS would output a formula, such as y = log(a + b*x1 + c*sqrt(x2)), where the parameters a, b, and c are chosen to best fit y under that formula by some criterion such as the sum of squares*. The formula would be chosen by a simulated annealing method that starts with a basic formula and mutates it to find one that balances accuracy with simplicity.

Here are the mechanics as I've worked them out so far:

The R function nlm(f, p, ...) takes in a function f and a vector of initial parameters 'p', which gets fed into the function. nlm then estimates the values of the parameters in 'p' that produce the lowest output from 'f'. Usually, the output of 'f' is some value representing distance from an ideal, so nlm effectively estimates the best values of things for you.

The '...' represents additional variables that are fed into nlm(f, p, ...) but are not changed by nlm while it optimizes. So what would happen if those extra variables determined the formula that 'p' is optimized on?
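To make the nlm mechanics concrete, here's a hypothetical toy run of my own (the function f and its 'target' argument are not part of the FUSS): 'target' travels through the '...' and is held fixed while 'p' is optimized.

```r
## Minimize the squared distance between p and a fixed target.
## 'target' is passed through nlm's '...' and is never optimized.
f = function(p, target) sum((p - target)^2)

out = nlm(f, p = c(0, 0), target = c(3, -1))
out$estimate  ## approximately c(3, -1)
out$minimum   ## approximately 0
```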

For example:

FUSS_INNER = function(p, x, y, form){  ## Assume p has 3 entries, x is 2*n
   y_hat = rep(0, length(y))
   for(k in 1:length(y)){
      ## Start with a polynomial of exponents
      y_hat[k] = p[1] + p[2]*x[1,k]^form[1] + p[3]*x[2,k]^form[2]
   }
   ## Apply transformations
   if(form[3] == 1){ y_hat = log(y_hat) }
   if(form[4] == 1){ y_hat = sqrt(y_hat) }
   ## compute lack-of-fit
   lack_of_fit = sum( (y - y_hat)^2 )
   ## compute a complexity score based on the formula used
   complexity = 1
   ## 1 point for each non-trivial exponent (i.e. not 0 or 1)
   complexity = complexity + sum( !(form[1:2] %in% c(0,1)) )
   ## 1 point for each transformation applied
   complexity = complexity + sum( form[3:4] == 1 )
   ## return the lack-of-fit for nlm to minimize; the annealing
   ## step would use the complexity score separately
   lack_of_fit
}


For the above example formula of y = log(a + b*x1 + c*sqrt(x2)), this would be specified by the form vector c(1, 1/2, 1, 0), and would have a complexity of 3.
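As a sanity check, here is a hypothetical toy run of my own (simulated data, and a slimmed-down, vectorized FUSS_INNER that returns only the lack-of-fit). I use an untransformed formula, form = c(1, 1/2, 0, 0), so nlm never sees the log of a negative number during its search:

```r
## Slimmed-down FUSS_INNER: lack-of-fit only, vectorized over observations.
FUSS_INNER = function(p, x, y, form){
  y_hat = p[1] + p[2]*x[1,]^form[1] + p[3]*x[2,]^form[2]
  if(form[3] == 1){ y_hat = log(y_hat) }
  if(form[4] == 1){ y_hat = sqrt(y_hat) }
  sum((y - y_hat)^2)
}

set.seed(1)
n = 200
x = rbind(runif(n, 1, 5), runif(n, 1, 5))
form = c(1, 1/2, 0, 0)             ## y = a + b*x1 + c*sqrt(x2)
y = 2 + 3*x[1,] + 4*sqrt(x[2,])    ## true (a, b, c) = (2, 3, 4)

fit = nlm(FUSS_INNER, p = c(1, 1, 1), x = x, y = y, form = form)
fit$estimate  ## close to c(2, 3, 4)
```

Because the formula is fixed, the objective is just a sum of squares that is quadratic in p, so nlm recovers the parameters easily; the hard part is searching over 'form'.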

With a bigger FUSS_INNER function, more possibilities for mutation become available. There's a lot of housekeeping that I'm ignoring in this function because it's all very high level at the moment, and I would use apply() instead of for() to improve speed, but this is for demonstration purposes.
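For what the mutation step might look like, here is a hedged sketch of the outer annealing loop. All of the names (complexity_of, mutate_form, FUSS), the penalty weight, and the cooling schedule are my own placeholders, not worked-out design; it assumes an inner function that returns the lack-of-fit.

```r
## Complexity score as described above: 1 base point, plus 1 per
## non-trivial exponent and 1 per transformation applied.
complexity_of = function(form){
  1 + sum(!(form[1:2] %in% c(0, 1))) + sum(form[3:4] == 1)
}

## Lack-of-fit for a fixed formula; guards against invalid log/sqrt
## values so nlm always receives a finite number.
FUSS_INNER = function(p, x, y, form){
  y_hat = p[1] + p[2]*x[1,]^form[1] + p[3]*x[2,]^form[2]
  if(form[3] == 1){ y_hat = log(y_hat) }
  if(form[4] == 1){ y_hat = sqrt(y_hat) }
  if(any(!is.finite(y_hat))){ return(1e12) }
  sum((y - y_hat)^2)
}

## Mutate the formula: change one exponent or toggle one transformation.
mutate_form = function(form){
  i = sample(4, 1)
  if(i <= 2){ form[i] = sample(c(0, 1/2, 1, 2), 1) }
  else      { form[i] = 1 - form[i] }
  form
}

## Simulated annealing over formulas; 'penalty' trades accuracy
## against complexity and would need scaling to the data in practice.
FUSS = function(x, y, n_iter = 50, penalty = 1){
  form = c(1, 1, 0, 0)   ## start from a plain linear formula
  fit = nlm(FUSS_INNER, c(1, 1, 1), x = x, y = y, form = form)
  score = fit$minimum + penalty * complexity_of(form)
  for(i in 1:n_iter){
    temp = 1 / i         ## simple cooling schedule
    cand = mutate_form(form)
    cand_fit = nlm(FUSS_INNER, c(1, 1, 1), x = x, y = y, form = cand)
    cand_score = cand_fit$minimum + penalty * complexity_of(cand)
    ## Always accept improvements; accept worse formulas with a
    ## probability that shrinks as the temperature drops.
    if(cand_score < score || runif(1) < exp((score - cand_score) / temp)){
      form = cand; fit = cand_fit; score = cand_score
    }
  }
  list(form = form, p = fit$estimate, score = score)
}

set.seed(2)
x = rbind(runif(100, 1, 5), runif(100, 1, 5))
y = 2 + 3*x[1,] + 4*sqrt(x[2,])
res = FUSS(x, y)
res$form  ## with luck, drifts toward c(1, 1/2, 0, 0)
```

The accept-worse-with-shrinking-probability rule is the standard Metropolis step; without it, the search would get stuck at the first locally adequate formula.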
It's 2:30am in Montreal, and that's what the FUSS is about.

* maximum likelihood would be better, but let's start small.