On Employment

What have we worked for? What do we work for? What will we work for?

The Present

You know, I am one of the lucky ones. Right from the start, I lucked out - born male, white, and straight, I was endowed with the US social privilege trifecta. For nothing, no effort at all, I got to have it easy in many ways I most likely don’t even perceive. I was lucky, too, with economic privilege - my parents made my childhood easy and stable, and within the span of that childhood managed to move us from the ranks of middle class in Ukraine to the same in the US. Think that’s impressive? Get this: I got through a top tier college without a penny of debt - cue the tears of millions of my recently graduated peers.

So, big surprise, I ended up getting a degree (well, two) followed by a Software Engineering job which assured my ability to have a very comfortable life. I am 22, and many, too many, in the world cannot even hope for this much. But, insurmountable ethical queasiness about inequality aside, my life is good, right? Right. So then, it’s really a testament to the ingenuity of the human mind and spirit that I am not kicking back and enjoying life, satisfied and content. Oh no, that would make life all too easy and lacking difficult things to write lengthy quasi-essays about. No - in truth, I hope for yet more. Not just a job that pays well, but a job that I find interesting and enjoyable, and not just a job that I find interesting and enjoyable, but a job that I find, crucially, fulfilling. I have no idea what that even means yet, and cannot help but wonder about this drive within me. A comfortable life is nice, but something within me wants, more than that, a life that feels right - whatever that is.

Relax! I am not actually having an existential crisis regarding working at Oracle or elsewhere. See, I am doing this personal narrative intro bit, motivating this giant slab of text to get people into it. And, if you happen to not be a coworker of mine, how dare you read this?! Kidding. I sure do hope you found that piece of text amusing; obviously it was a calculated move to dispel the largely self-serious tone found here thus far.

Fun though it would be to go into how this connects to humans’ general inability to ever feel content, I can’t claim to be quite so profound. No, this drive for fulfilling work is hardly original these days - it is a common sentiment usually boiled down to the Steve Jobs-sanctioned phrase “Do What You Love”. This strand of thinking has become so conventionally accepted that Slate ran an impassioned critique against it that begins by stating that “There’s little doubt that ‘do what you love’ is now the unofficial work mantra for our time” and goes on to point out to lucky bastards like me that it is in fact “a secret handshake of the privileged and a worldview that disguises its elitism as noble self-betterment.” The Atlantic, likewise, has rebuked the concept with eloquent sick burns:

Today’s knowledge economy is defined by a duplicitous charm in its categorical refusal to see work as labor. Desk workers are everywhere encouraged to “love what you do”: to embody their company’s values, identify with its brand, and then celebrate its accomplishments through internal and external marketing. For these creative workers, it is a badge of honor to never be off the clock.

First of all, I have a secret handshake!? Second - is this actually true? I’ve already written of my fondness for work, so I won’t waste time whining about the articles’ simplistic characterization of me as a privileged workaholic brainwashed by The Man. But, me aside, is this more generally true? With regard to the ‘Millennial’ age group of people who were born sometime between the inception of the NES and that of The Matrix, the group to which I belong, it only takes a quick Google to find that “Millennials Work For Purpose, Not Paycheck”. Except, not really. Look up a piece backed by some numbers rather than fun anecdotes, and you find stuff like this:

Competitive pay is the single biggest contributor to job satisfaction for Millennials and non-Millennials alike, with 68% and 64% citing it as important or very important, respectively. The second largest factor for both was bonuses and merit-based rewards, at 55% and 56%, respectively. … Only one-fifth of each generational group felt that “making a positive difference in the world” had an impact on their satisfaction. And slightly fewer Millennials – 29% versus 31% of other workers – said that achieving work/life balance was a key contributor to their professional happiness. Same thing with “finding personal meaning in work,” where the numbers came out 14% to 17%, in favor of non-Millennials.

So much for that story. The likes of me may make up that 14-17%, but it’s clear Do What You Love is hardly a universal mantra. This is the US, and a proclivity for pragmatism is alive and well.

Anyway, what fun are percentages? For all I know they are entirely made up, since much like you it’s not like I bothered to go back and check the veracity of the study. In typical human fashion, what made me think about this is not numbers, but people. Specifically, the people I interacted with in college. Specifically, the small rank of people in college that, like me, found the time to work on projects not required by any class, simply because they felt like it. Or rather, specifically, that vast majority of people who were not in that rank - who did not go to hackathons, who did not have repositories full of half-finished side projects or tables covered by half a dozen breadboards, who really just generally enjoyed their major and wanted to get a job in that field.

Many in this great majority of people were about as lucky as me, but seemed to lack my yearning for yet more beyond a good job; it seems to me most enjoyed programming as a whole, but not so much as to do it in their free time, and they wanted little more out of a job than being paid well to do programming they enjoyed. To me, that’s not Do What You Love, but is more like Do Something You Enjoy. That is what I saw in the majority of CS majors who studied alongside me. And, despite my fabulously sheltered life, I am aware many more are not so lucky as to get to think even that way, that many have little choice but to embrace Do What You Can.

So then, despite the great social, cultural, artistic, and technological transformations of the past century, despite the supposed cultural acceptance of the DWYL idea, despite all that, I am left wondering if our collective cultural view of work is still about the same as ever - a means of trading time and energy for resources necessary for survival and, if lucky, leisure. Many today want the trade to be pleasant and the labor interesting, but still - has much really changed from a century ago?

The Past

Okay, fine, obviously much has changed (in the US, at least). Most good jobs now require at least a high school diploma, and by extension there are many more jobs that are interesting and require creativity. More people get to be artists, more people get to be engineers, more people get to be just about everything - except farmers and factory workers. More people now hold jobs that did not exist a century ago than work in those two fields, as the anti-DWYL Atlantic article nicely summarizes:

Over the past 50 years, the main source of employment in the United States has shifted from the manual to the mental—from doing things with one’s hands to delivering a service or a feeling. As Enrico Moretti points out in his 2013 book The New Geography of Jobs, any given American is now statistically more likely to work in a restaurant than a factory.

So yes, a lot has changed. But, that’s just the particulars of how people prepare for work and what it ends up being. What about why they work, what they work for? Well, it’s hard to say what a mere commoner such as myself thought about it back in the day, since no one kept blogs or posted any tweets. So, let us examine the opinions of those few past lucky bastards whose thoughts survived to this day; starting, of course, with Adam Smith. Turns out, the papa of capitalist economies certainly had a bleak answer to those questions:

“Adam Smith, the great theoretician of the capitalist economy, is much more explicit when, in a different context, he defines work as an activity requiring the worker to give up “his tranquility, his freedom, and his happiness.” Wages, according to Smith, are the reward the laborer receives for his or her sacrifices.”

It does not take a sociology or psychology or psychological sociology degree to see that this does not reflect the modern attitude towards work in the US. Yes, for some it may perhaps be accurate, but for great numbers of people (again, lucky people), their work is also their chief metric of success and source of pride. I can nitpick my peers’ zeal for their field all I want, but I think very few of them would say they take no pleasure or pride in their work. The funny thing is, despite US culture being the most encouraging of this sort of work-for-pride thinking, it actually follows the thoughts of a certain German who was no big fan of capitalism:

“Smith has no inkling whatever that the overcoming of obstacles is in itself a liberating activity—and that, further, the external aims become stripped of the semblance of merely external natural urgencies, and become posited as aims which the individual himself posits—hence as self-realization, objectification of the subject, hence real freedom, whose action is, precisely, labor.”

So, is work still seen through the mindset of Smith, or his great nemesis Marx? According to one NY Times opinion piece, Smith is still supreme: “Today, in factories, offices and other workplaces, the details may be different but the overall situation is the same: Work is structured on the assumption that we do it only because we have to.” And yet, the same paper only two weeks before published a scathing piece discussing how Amazon sought out high achievers and squeezed productivity and innovation out of them like human fruit while promising only the reward of great accomplishments rather than Google-esque utopian work conditions. And so we come to an annoyingly predictable conclusion - the world is grey, and it’s hard to say quite which color is predominant.

Rather than pretend to understand the ambiguous, let’s explore the larger truth this Amazon story speaks to - the emergence, over the past few decades, of this new subset of human endeavor chiefly defined by writing software, and its peculiarities. Perhaps uniquely outside of art and philosophy, this field is home to droves of big egos who think they can hit it big, or in the words of an entirely unironic TechCrunch editorial, “Do Great Things”:

“We who work in technology have nurtured an especially rare gift: the opportunity to effect change at an unprecedented scale and rate. Technology, community, and capitalism combine to make Silicon Valley the potential epicenter of vast positive change. We can tackle the world’s biggest problems and take on bold missions like fixing education, re-imagining energy distribution, connecting people, or even democratizing democracy.”

The core idea of that article is that with great power comes great responsibility, and those of us with the power of programming are like superheroes capable of fixing the world’s woes. A bit much, obviously - the world’s big problems were built with much effort, and software alone is unlikely to move those glaciers far. This sort of thinking has become so common and exaggerated that it has come to be satirized, most notably by the show Silicon Valley - ironically beloved by many in the Valley.

Still, to me the more notable subset of weirdos in the comp sci space is the huge army of people who practice the craft out of genuine enjoyment - me among them. How else could a site like GitHub, a sort of programmer social network notable as much for safely storing code as for its social features, have become a standard in this big space within a few years? And, furthermore, a host of websites has sprung up to enable people such as myself to code in their free time: Project Euler, TopCoder, HackerRank, DevPost, Hackaday, and more.

And that’s great! Clearly I am of the perspective that work should not be a sacrifice, but an investment - an investment in a pursuit, an idea, yourself. An investment not to change the world, not to be some hero, but just to do something you want. But, most likely, few are quite as invested in this notion as I am. The intrinsically motivated crowd who uses such sites must be small compared to the many more who just enjoy the work but still think of it chiefly as work. So, let’s summarize: by and large, employed work is still seen as labor, but compared to the past, many more people are lucky enough to have the freedom to take the Marxist view of work as a means of finding happiness.

The Future

So, some things have changed, attitude-wise, but as Slate and The Atlantic so eloquently pointed out, really only lucky bastards like me get to benefit from the changes while most still regard work as labor, not a labor of love. Fun as it was to reflect on our present and reminisce about our past, I clearly have to take the next step - ponder our future. Will things change yet more? Yes, of course. So, how?

The agent of change will clearly be, as before, technology. And that can only mean one thing - robots! Right? Having worked in robotics, I have seen oh so many pieces about the job-eradicating potential of that field - most not worth taking seriously. Robots suck at doing anything halfway intelligent, still, period, and the core problem that causes the fragility of such intelligent robotic systems - a lack of common sense - is not going to be solved any time soon. Why? Here’s a summary from “The problem with robots: If they only had a brain”:

In the 1960s and 70s, a great wave of enthusiasm for artificial intelligence (AI) in computers based on human thought processes crashed against our sheer ignorance of those processes. Engineers mostly stopped talking about “intelligence” and opted for signal processing algorithms and relational databases that appear to perform intelligently but only within highly scripted tasks - e.g. robots. That is why moving robots from the mindless assembly line into the real world usually produces pitiful, sometimes catastrophic results. If a robot encounters an unfamiliar object or even a familiar one in the wrong orientation, it is likely to proceed extremely cautiously or to damage the object or itself by handling it inappropriately. A human worker that inept would be fired immediately.

And so there are precious few examples of robots out there in the wild, doing things that humans do that also happen to be more complicated than endlessly executing the same repetitive motion in a caged factory. And, as was pointed out, relatively few jobs (in the US) are based on physical labor, and it’s even less clear that people would like awkward robots to replace their human counterparts in jobs requiring social interaction. The non-factory robots out there are for the most part in the vein of the Roomba or are more specialized tools meant for hospitals or film sets, and despite some interesting efforts, I would stake my measly reputation on the idea that it’ll still be decades before a significant share of US jobs is threatened with robotic automation.

I am being a little harsh here. Okay, quite harsh. Robotics as a field has actually been moving incredibly rapidly in the past decade towards intelligent automation that goes beyond the cute little Roomba. The clearest success story is Kiva Systems, which greatly improved the efficiency of warehouses and has led to more ambitious efforts such as those of Fetch Robotics. I suspect the next great success stories will come from agriculture, where companies such as Precision Hawk and Blue River Technology are paving the way towards data-driven decisions for farmers that could be executed more precisely by steel machinery rather than meaty muscle. And though I called the Baxter, Jibo and Savioke 'interesting', they are hugely promising for starting the push towards smarter cooperative robots in the factory, funner social robots in the home, and more pleasant robotics in the service industry respectively. I just don't expect these efforts to have widespread impact for decades yet, based on their slow start and the difficulty of robotics in general. Oh yes! And there are the self driving cars. Even I am not enough of a skeptic to deny the awesome impact that self driving cars will soon have.

But! That is not to say things are not set to change. Oh, they so are. But, the big bad boy who will bring about change is not robotics, at least not it alone. Rather, it’ll be our old economic grandpa, declining costs, our good old friend, increasing computing power, and our new hip friends, Big Data and Machine Learning. With the steady march of Moore’s Law yet unabated, computers have gotten fast enough, and algorithms good enough, that robotics can now figure out such complicated things as how to grasp items or move many-jointed arms toward arbitrary goals while avoiding collisions - things that used to be basically intractable. And this trend will continue until, decades from now, the long-foretold future of robotic automation quite inevitably gets here. But, since that is a while off, let’s talk about something else - how the exponential rise of computing power and improvement in algorithms has enabled another, and in the short term more significant, advance - the amazing feats accomplished by the field of Machine Learning in the past several years.

Machine Learning (ML) is really not as fancy as it sounds. Take a bunch of points on a 2D graph and find the nicest line (straight, hyperbolic, whatever you like) to fit through them, and you've got a form of basic machine learning - a general way to get some output from some input, learned from some set of examples of correct input and output. Do this line fitting trick a bunch, and make the Y of some lines become the X for some other lines, and maybe likewise make the X for some lines the Y for other lines, and you've got a 'neural net'. This is oversimplifying things, of course, but it captures the idea at the heart of the field. It is computer scientists' best attempt at equipping computers with humans' greatest strength - pattern detection and generalization of learned things to new situations. This is conceptually incredibly useful, since tasks that are very difficult or impossible to write programs for (say, figuring out whether an image has a cat or a cactus in it), but that we as humans can easily do, can suddenly be taught to the computer. Trouble is, really tough tasks like recognizing what people say or what is in an image did not work so well, so ML was very cool but largely not practical for real problems - until recently.
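To make that a bit more concrete, here is a minimal sketch in Python with numpy, on made-up toy data - a rough illustration of the idea rather than anything resembling real ML code. It first does the line-fitting trick directly, then chains two layers of "lines" into a tiny neural net whose slopes and intercepts get repeatedly nudged to better fit the examples:

```python
import numpy as np

# Toy examples of correct input and output: a noisy line y = 3x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 3 * x + 1 + 0.1 * rng.normal(size=(200, 1))

# Basic machine learning: least-squares line fitting.
X = np.hstack([x, np.ones_like(x)])              # add a column of ones for the intercept
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print("fitted line:", slope.item(), intercept.item())   # roughly 3 and 1

# A tiny 'neural net': the outputs of one set of lines (squashed by tanh)
# become the inputs of another set of lines, and all the slopes and
# intercepts are nudged over and over to reduce the error on the examples.
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)    # first layer of "lines"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # second layer of "lines"
lr = 0.1
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)                     # layer 1 outputs...
    pred = h @ W2 + b2                           # ...feed into layer 2
    err = pred - y
    # Gradient descent: work out how to adjust each slope and intercept.
    dW2, db2 = h.T @ err / len(x), err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1, db1 = x.T @ dh / len(x), dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_err = np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)
print("net's mean squared error:", float(final_err))
```

Real systems differ mostly in scale - far more "lines", far more layers, and far more data - which is exactly where the GPUs discussed next come in.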

So, what has been going on with ML recently? A LOT. Several key insights, the most significant being to use better computing power with GPUs and a better scaling algorithm, have led to massive gains in how well the techniques can be applied to large amounts of data; all this is nicely summarized by one of the architects of the movement, Andrew Ng. By the way, the seemingly fancy term Deep Learning basically equals slightly tweaked algorithms that have existed for decades, run on GPUs and with lots of data. As it became clear these algorithms can now be run at scales that can solve meaningful problems using GPUs, this whole Big Data trend also came about, and companies like Google and Facebook and many others started applying these research techniques to their absurd amounts of data. This confluence has led to many amazing research advancements in the past several years and has made the people responsible for these advancements very successful.

So, how does this relate to jobs again? One word: automation. Or, more correctly, computerisation. Computers have obviously already had a huge impact on how many jobs are done, and relegated many jobs to the past. But until now, they have largely just been excellent tools to be wielded by humans - hugely empowering, but not able to do much without being set in motion by the squishy brain of a human being. And the things we set in motion! Despite just being really fast calculators, computers have been used for so much more than computation - art, science, communication, exploration, and so much else. But, without a person at the helm, computers are still left to follow a very rigid set of steps and instructions, and it turns out we just can’t figure out instructions for things humans are really good at, like recognizing speech and finding cute cat pictures on the internet. Until now. Machine Learning is now empowering the hard silicon logic of computers to do things that used to be entirely within the domain of squishy human brains - and in some cases, empowering computers to do those things better.

Please, don’t take my word on it. What do I know, really? Take the word of academics who spent years crafting an excellent 2013 paper from Oxford that explored this idea in much detail, and ended up finding that 47% of US employment could be automated right out. Really - don’t take my word for it, read it. Or, at least read this excerpt:

Historically, computerisation has largely been confined to manual and cognitive routine tasks involving explicit rule-based activities (Autor and Dorn, 2013; Goos, et al., 2009). Following recent technological advances, however, computerisation is now spreading to domains commonly defined as non-routine. … Although the extent of these developments remains to be seen, estimates by MGI (2013) suggests that sophisticated algorithms could substitute for approximately 140 million full-time knowledge workers worldwide. Hence, while technological progress throughout economic history has largely been confined to the mechanisation of manual tasks, requiring physical labour, technological progress in the twenty-first century can be expected to contribute to a wide range of cognitive tasks, which, until now, have largely remained a human domain.

As the paper later points out, the development of ML will likewise expand the reach of robotics beyond the drudgery of repetitive industrial work:

Mobile robotics provides a means of directly leveraging ML technologies to aid the computerisation of a growing scope of manual tasks. The continued technological development of robotic hardware is having notable impact upon employment: over the past decades, industrial robots have taken on the routine tasks of most operatives in manufacturing. Now, however, more advanced robots are gaining enhanced sensors and manipulators, allowing them to perform non-routine manual tasks.

This sort of talk scares people. And perhaps it should - change is scary. And when thoughts of scary change come about, people who do not habitually read ML and robotics papers start to take notice. A fantastic YouTube video essay just on this topic has garnered 5 million views, and generated much fanfare on internet communities such as Reddit. But, fantastic though it is, it is also wrong. The video covers much of the same content as I have expressed above, quite eloquently, but completely overstates the degree of machine intelligence we can predict at present. Robot baristas, self-driving cars, factories with cooperative robots, news bots - all of that is likely to happen soon, or is happening. But robots capable of writing anything as conceptually complicated as this, or the sort of software that I write for work, or even moving with the agility of waiters - these are a long, long time off, and perhaps will never fully come to fruition. Like Elon Musk, Bill Gates, and Stephen Hawking, the writer makes predictions from current trends that I (as someone who does habitually read ML and robotics papers) find unrealistic and unreasonable.

Elon Musk and Bill Gates and Stephen Hawking seem like a pretty impressive bunch, so where do I get the nerve to call them out on being excessively worried about AI? Well, that would take a whole separate 5000-word quasi-essay to go into. So, let me instead call on the power of Ethos and present the sentiments Andrew Ng, someone actually leading those revolutionary advances in the field, has expressed:

“Those of us on the frontline shipping code, we’re excited by AI, but we don’t see a realistic path for our software to become sentient. There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”

So, it’s just too early to look that far into the future - it is entirely possible the capabilities of the present surge of AI systems will again plateau, and true cognitive intelligence will not become a reality. This plateau may, and likely will, be one with millions fewer jobs, and much more future-y with robots and the like all about, but it will not be a drastic future where all human endeavor is automated or humans are entirely done away with in favor of machines. There are strong arguments for why we should be generally aware of the progress of AI, but they do not make a compelling case that such a future is inevitable.

I really cannot reasonably get into those book-length explorations of the future. Suffice it to say that the presumption of persistent exponential returns from technology is not well supported - fundamental physical or conceptual limits may halt the progress of computational power and make an AI-driven utopia unfeasible. I grant you I still have not read either Kurzweil's or Bostrom's arguments in full, but I have not been convinced that we should be more than conscious of the possibility at present.

But, millions of jobs is not nothing! And so we finally turn back to the question: how will these economic and technological changes affect attitudes toward work? The answer, or my answer, is predictable: I hope that with diminishing numbers of routine somewhat-creative jobs, work as a whole will increasingly be seen as something that should be interesting, creative, and ultimately fulfilling. And eventually - eventually - the notion of how our economy works may have to be completely revised to account for the ability of intelligent robots to take care of most human work, and the dreams of Marx may finally be realized in some form of actual utopia where people work as a creative and personal pursuit rather than an economic transaction with society. But, in the near future, we can just hope that the dire existential straits immortalized in Office Space will be no more.

Back To The Present

But all of that is far off, and for all we know our world will be lucky enough to instead get the beautifully dreadful future of Blade Runner. So rather than dread or anticipate the changes of the future, we had better worry about our incredibly imperfect present. At the same time, we ought not forget the past, and appreciate how far we have come - people live longer, are happier, and are more educated than in the past. There are still relatively few people as lucky as me out there, but that’s changing, and as a rule of thumb most things are getting better. And me? I just finished writing this absurdly long think piece for no real reason beyond feeling like it - so it seems I am doing alright.