Book Review: Ghost Work

I had been planning to read Ghost Work by Mary Gray and Siddharth Suri for a long time. I bought the book when I attended a talk by Mary Gray at Data and Society about a year ago. Since then it had sat on my bookshelf. The general theme of the book is the unseen human labor that is necessary behind seemingly seamless automated systems. The core concept of the book is the “paradox of automation’s last mile.” It refers to the phenomenon whereby, as artificial intelligence becomes more advanced, it creates “temporary labor markets” to solve “unforeseen and unpredictable tasks.” Furthermore, the greater paradox of automation is that “the desire to eliminate human labor always generates new tasks for humans.” In other words, automation cannot, and will not in the foreseeable future, eliminate human labor; rather, it reconfigures work and reorganizes human input in the production process.

In the authors’ own words, on-demand service work is necessary because we do not know when machines need human input:

“As machines get more powerful and algorithms take over more and more problems, we know from past advances in natural language processing and image recognition that industries will continue to identify new problems to tackle. Thus, there is an ever-moving frontier between what machines can and cannot solve. We call this the paradox of automation’s last mile: as machines progress, the opportunity to automate something else appears on the horizon. This process constantly repeats, resulting in the expansion of automation through the perpetual creation and destruction of labor markets for new types of human labor. In other words, as machines solve more and more problems, we continue to identify needs for augmenting rather than replacing human effort. This paradox explains why on-demand services – commercial ventures that combine humans and APIs to source, schedule, and deliver access to goods and services – are more likely to dominate the future of work than AI and bots alone” (p. 176).

The authors interview workers in the United States and India, and bring their stories to us. They go behind the API curtain and reveal who the workers are, what their aspirations are, and why they work for on-demand platforms. The book is comparative on many levels. First, it compares workers in different countries: the US and India. Second, it compares the experiences of workers on multiple platforms: Amazon Mechanical Turk, Microsoft's internal on-demand platform, the Amara translation service, and LeadGenius. The diversity of platforms and countries allows us to see a picture of an emerging global on-demand labor market that performs millions of tasks of varying complexity. The book therefore enables us to see what is often hidden and abstract.

The authors then provide institutional background on why the on-demand labor market has become necessary in the digital revolution. They also show that the on-demand labor market is not a new system. Before the industrial revolution, many women and households took on the job of sewing the last buttons onto clothes before the textile industry figured out how to automate the process and confined the work within the walls of garment companies. This inclusion of non-institutional labor is an important part of the process of automation.

As they walk us through the lived experiences of their interview subjects, the reader recognizes the heterogeneity of human labor in this on-demand market. What becomes apparent is that since the market has no clear requirements for educational background, level of training, and so on, it allows for a labor force with different levels of training and education, coming from diverse racial, ethnic, and religious backgrounds. However, this heterogeneity of labor supply and the diverse forms of tasks also create inefficiency and friction, because workers have different skill levels and requesters/employers have to define the tasks themselves. The authors show that this system involves substantial transaction costs for both workers and employers. The two sides have to put in the time to find the right match and to explain to each other how to do the task as intended. Workers mitigate the problem of looking for tasks by creating social networks outside of the platforms, relying on social media and online forums to find the right tasks. As the authors outline this problem of transaction costs, I wonder whether building a well-thought-out communication platform for on-demand workers and employers would be a potential solution to the various transaction cost problems in this market. This would be a technical solution to the market's current inefficiencies.

One theme that I observe in this book, as in other gig economy books I have read in the past year, is that workers in this economy are subject to algorithmic arbitrariness. Workers are suspended or kicked off the platforms, sometimes randomly and sometimes according to rules that do not take their real-life situations into account, while they have no recourse and nowhere to complain. This shows the power of platforms over workers: workers, though important to platforms' profitability, are treated not as assets but as expendable numbers that can be eliminated at will.

Gray and Suri explain:

“The worst expression of algorithmic cruelty is disenfranchisement. Under the guise of safety, systems designers make it easy to block or remove an account in case a bad actor tries to cheat the system. This adversarial stance means that good workers are sometimes misinterpreted as shady players. Inevitably, mistakes are made. A worker changes an address, loses her internet connection, or shares an IP address with another worker. Each one of these things is a potential red flag. The algorithmic system sees the flag as a possible security threat and, with no one at the helm to distinguish friend from foe, the worker is penalized. The penalty may look like being blocked or suspended, or having an account deactivated. Again, in an ecosystem in which workers are seen as interchangeable, the system automatically eliminates what it deems bad apples. The sad irony is that even the best-intentioned and most seasoned workers can get caught in the dragnet” (p. 86).

Workers are dehumanized through the process of de-identification. MTurkers become lists of numbers. This reminds me of how Jewish prisoners were given numbers during the Holocaust. Giving a working human being a code to interact with is dehumanizing for both sides: the requesters and the MTurkers. The authors, though, qualify this statement by noting that for workers who come from discriminated-against classes (by gender, religion, etc.), not being identified by name or gender sometimes gives them an advantage.

In the end, I feel that the book presents a good narrative of what is going on in the tech economy. However, as a sociologist of work, one question that remains unanswered for me is the question of the “work process” among on-demand gig workers: Why do they work so hard for very little pay, and why don't they quit? What is the average tenure of a gig worker on an on-demand platform? The authors use the 80/20 Pareto rule to create a typology of three groups of workers. However, I want to know: among those who make on-demand work their full-time career, why do they work so hard for so little pay? Another question is why the authors do not call them gig workers. What, then, is the difference between gig work and on-demand work? Aren't they the same?

To the question of what keeps them in the game, the authors provide a partial answer: many of them are in it for the cognitive benefit. They learn new things and keep up their skills (most of these answers come from the Indian subjects). However, my sense is that because the book is not an ethnographic study, it can never quite get at the process by which workers rationalize the decision to remain in an exploitative labor scheme.

And what about their American counterparts? Why are they working so hard for little pay? The answers are either implicit or not satisfactory. Implicit in the sense that they work for various reasons: because the worker population is so heterogeneous, workers presumably have different reasons for working in this sector, and thus different reasons for staying. Is there anything about the on-demand aspect that keeps them there? Is there anything about the brand name (, or Microsoft) that makes them stay? These questions remain open.

Finally, as a methodology enthusiast, I find the book not transparent about its methodology. Who was involved in the interviewing process, who was contacted, who conducted the interviews, and so on: these pieces of information are absent. As mentioned earlier, because the book is not explicit about whether ethnography was involved at all, readers cannot really picture the embodied aspect of online, on-demand work.

Because I care so much about the reproducibility of research, the fact that the book lacks a methodological appendix makes me cringe. I know that it is written for a popular audience, but as a scholar, a researcher, a scientist, I want to know how many people they interviewed, how they interviewed them, how many in person, and how many remotely. How did they avoid positionality biases, being MSR employees, privileged, and at times employers of those ghost workers?

Overall, I agree with the authors that there is a global ghost work sector that is growing because of increasing demand for human-in-the-loop tasks from various tech companies. These workers operate outside the formal employment structure, are subject to the whims of the platforms, and are exploited by requesters because of platform design. However, I think the book leaves many questions unanswered, one of which is methodological and another theoretical.

Despite my many questions, the book is the starting point of a long-overdue conversation: who are the human workers who power machines, and how can we as a society protect them and enable their creativity for a better future? The book is both practical and hopeful that we will in fact continue to need humans in the loop. It also offers one practical solution, a city-level job training program, that I really like: supporting public education and letting residents take the college classes they want in order to benefit their work. A similar program enabled me to audit courses at Humboldt University, the Free University, and Goettingen University during my stay in Germany. It plugged me into the intellectual environments of those excellent public universities, and through those courses I also made long-lasting friendships. I'm all for investing in public universities and making their courses available to those whose taxes support such excellent public education.

Audm vs. Diversity in the Age of #BlackLivesMatter

While America is experiencing a social revolution led by Black Lives Matter activists, every individual and every institution is forced to pay attention to questions of diversity and inequality. At the same time, as I read news from different sources about Covid-19 and social justice, I find myself appreciating the good work that journalists do. I read about the troubles that journalists of color are going through. Newsrooms across the nation are grappling with the racial inequality conversation that the nation is having. I want to support their work, especially the good work of journalists of color.

Then I found Audm, an app that reads high-quality news articles aloud. The company was recently acquired by the New York Times. Some observers have said that this acquisition marked a turning point in the New York Times's approach to audio content and audio production. The New York Times has beefed up its audio content production. Its podcast The Daily is one of the most popular podcasts in the world. The news organization is now behaving more like a tech company than a newspaper company. Its data science department is staffed with some of the most well-known data scientists in the world. Its constant acquisition of startups makes it look like Amazon, a website of everything. I wonder at what point all of my news sources will come from some organization associated with the New York Times.

At first, I rejoiced at the idea that I could now listen to the highest-quality news content through a very cool app. It felt authentic and intimate, like listening to a podcast, while getting the most important information out there, written by the best journalists in the industry. Then, after listening to a few articles, I noticed a pattern: all of my news was read by white men, even when the piece was written by a brilliant writer of color or a female writer of color. This does not sound right to me. Instead of giving power to the writer while curating some of the most important content for readers, the app and its voice-over staff reproduce a kind of “audible inequality” in the voice-over industry. If there is diversity among the writers on the New York Times staff, I want diversity among the voice-over actors too.

Even second-generation Asian Americans born and raised on American soil have a distinct voice that differs from a middle-aged white man's. For example, a young Vietnamese writer would have a voice nourished by their migrant parents, who came to the US overwhelmingly in the aftermath of the Vietnam War. This person has been raised within a community still grappling with the idea that it is now a minority group in an increasingly diverse society. The young writer has been nourished on fish sauce and pho, on a history of the Vietnam War, and on growing up being told they are a model minority student. The voice this young individual produces is representative of all of those lived experiences. It is unique and distinct. I want to listen to an article written by a talented Vietnamese journalist and read by a talented Vietnamese voice-over actor. In that process, one intellectual work (the article) benefits two knowledge workers of color (the writer and the voice-over actor).

What is happening now is that the writer of color does the difficult work of producing a piece of intellectual work (the article), and it is read by a white middle-aged actor, who benefits from the writer's work and reproduces the stereotype that only white voice actors are talented, because only their voices are featured. This reality inflicts symbolic violence on the talented writer and reinforces the existing income and racial inequality that upholds the current economic structure. When audiences do not think critically about what they listen to, they gradually acquire the association that there are no talented voice actors of color out there. This is especially damaging for young people of color, who would not dare go into a field like voice-over because they have never seen anyone like them in it.

To conclude, I suggest that Audm, and by extension the New York Times, should diversify its cast of voice actors. If an article is written by a writer of color, it should be read by a voice actor of color. On a broader scale, the audio industry itself should diversify. There are plenty of opportunities for voice actors of color to contribute; they should be given roles and opportunities where appropriate. As of now, I will not subscribe to Audm. I don't think my money would be well spent here. I would rather read the articles written by talented writers of color and imagine how they would sound in my head than listen to the app, whose voices do not represent the real writers. I would then donate money directly to an artist of color on Patreon, where I know for sure that I am directly contributing to their creative work.

Visualizing Data as Discovery

I have been obsessed with data visualization lately. My go-to tool at this point is still R, which I have been told over and over again is not as versatile as Python. However, it is a matter of path dependence: I am used to figuring out how to ask the right questions in R in order to get the results I want.

With Python, the different steps I need to figure out to get those results are still black boxes to me. While writing this blog post, I realized that I really need to master Python this summer. I have gotten down the basics; it is now a matter of practice. So this is the right time for me to actually sit down, become really familiar with Python, and be able to produce work in it.

Back to the issue of data visualization. Today, I spent eight hours straight figuring out how to create a stacked chart in R. I had been trying to create it for quite a while; it started about four weeks ago when I promised my research partner that I would create a stacked chart figure for our text mining paper. I asked everyone I knew for help, and none of them delivered. Today, I finally rolled up my sleeves, sat down in front of my laptop, and figured out how to create the chart. My final result is not as clean as I would have liked. It is nowhere close to scientific journal quality. But the figure conveys the main idea, and that is sufficient for me to draw some conclusions from the data.

This is the figure that I produced after a day of trying to create it. It also took some serious conceptual understanding of what the figure actually represents. In other words, I learned both the technical skills and the conceptual understanding behind the process of creating it.

What I did was download a corpus of text from the podcasting subreddit, a community dedicated to creating podcasts. My goal was to create a stacked chart showing different topics over time, with topics represented by trigrams. Specifically, I calculated the top trigrams per month and charted them over time. Even though I downloaded all content from the subreddit, which started in 2010, I found that the trigram chart only became meaningful once I narrowed the date range to 2016–2020.
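The top-trigrams-per-month step can be sketched in a few lines of Python (a minimal illustration with made-up posts, not the actual pipeline I built in R):

```python
from collections import Counter, defaultdict

def top_trigrams_by_month(posts, k=3):
    """posts: iterable of (month, text) pairs; returns the k most
    frequent trigrams per month as {month: [(trigram, count), ...]}."""
    counts = defaultdict(Counter)
    for month, text in posts:
        tokens = text.lower().split()
        # Slide a window of three tokens across the post.
        for i in range(len(tokens) - 2):
            counts[month][" ".join(tokens[i:i + 3])] += 1
    return {month: c.most_common(k) for month, c in counts.items()}

# Toy stand-in for the subreddit corpus:
posts = [
    ("2019-04", "what mic should i buy for my podcast"),
    ("2019-04", "what mic should i use for recording at home"),
    ("2019-05", "weekly podcast discussion thread for new shows"),
]
print(top_trigrams_by_month(posts, k=2))
```

The monthly counts, normalized to proportions, are what the stacked chart then plots over time (for example with `geom_area` in ggplot2, or `stackplot` in matplotlib).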

The resulting figure shows that the subreddit started out accepting promotional podcasts, then became dominated by weekly podcast discussions and technical discussions (mics, mixers, audio interfaces). One topic that remains central over time is the different podcast distribution platforms (Apple Podcasts, Google, Spotify).

The overall topics concern the technical aspects of producing content and the main platforms for distributing episodes and finding shows. From these topics one can conclude that in the last five years of the previous decade, the podcasting community focused heavily on the technological side of the field. Technology matters on both sides: creation and consumption. Thus, it seems that the main driver of the podcasting field so far has been sheer technological development, for both content creation and content consumption. What surprised me in this exercise is that discussion of how to monetize a podcast does not show up at all in the top-trigrams-per-month analysis. This raises the question of whether making money from a podcast was ever a goal for podcasters.

After spending my weekend on this little project, I actually felt good about my product. I felt that I had spent a day building something and that, at the end of the day, I could see the result of what I built. This satisfying feeling made me recognize how much I appreciate coding. One computer scientist I follow wrote in his newsletter that code doesn't lie: you know the exact effects of all your actions, and when the final results are not attained, no amount of bullshitting will help.

More visualizations will come out of my work in the weeks ahead. So far, I am very happy with my progress in learning data visualization. The more I get into visualizing data, the more I understand the importance of using charts and graphs to understand the social world we live in.

Technology vs. Social Problems

Our society has entered a phase in which many people believe that more technological innovation can solve every problem. New York City is spending money on building more incubators and accelerators to speed up the creation of startups. Higher education institutions across the country are opening data science master's programs to channel a large chunk of the educated workforce into high tech. The speed at which technological innovations happen is astounding. But does more technological innovation solve persistent social problems such as inequality, racism, sexism, and radicalism?

Maybe not, and I would argue that the obsession with technological innovation is unhealthy.

First, I want to foreground that I am not advocating impeding technological innovation. I think innovations are necessary and inevitable for an advanced economy. What I am examining is the ideology that more technology is better, and that having more people study and work on technical problems would make society a better place. This ideology is flawed, one-sided, and could cause unintended consequences.

Second, I posit that more technically advanced developments might not solve the existing problems that are embedded in social institutions. These institutions have been created by humans to accommodate human flaws and human relations. No well-designed algorithm can account for all human eccentricities. Besides, social institutions also change, which creates a constant need for technology to change as well.

Third, the personnel who solve technological problems might not be the best people to solve social problems. They are certainly good at identifying technological problems, and they are solution-oriented in their approach. But the social world is full of intricate social relations that do not have one-to-one, direct relationships. A social world is composed of different social systems, and changing one system can provoke reactions from another. This requires big-picture thinking from people who understand this complex arena.

Fourth, I am not advocating that social scientists, who often conclude at the end of their research that “things are complicated,” be the sole problem solvers. But I think their understanding of the complex social world provides an important perspective that data scientists and computer scientists should learn from and collaborate with.

Currently our society has become obsessed with coming up with new technological innovations, but we are unwilling to check whether they would work in practice. More importantly, reaching for the most expensive, most sophisticated, most advanced tool out there just to demonstrate our prowess, before we have even asked whether the problem could be solved without any technical expertise, is laughable. Older methods are not irrelevant as long as they can provide a simple and elegant answer to a complex social problem.

I find the obsession with applying the newest tools, such as deep neural networks, to social problems particularly unhealthy. Sometimes practitioners of this approach do not even try to understand the social problem, to see where it comes from, or to ask how it might be solved without any technology. They want to quickly build a sophisticated deep neural network whose architecture comprises twenty different layers. Then they create a predictive model that spits out a few numbers and claim that their model is better than any existing model or method. Then, with their credentials, they go around convincing public policy makers, program managers, and others to implement their models, pointing to a baseline model whose results are not as impressive as theirs. At this point, as machine learning and predictive analytics catch up with the social world, I think people with decision-making power in any organization should at least equip themselves with basic knowledge of these new techniques, so they can judge whether the techniques are necessary and important for their organization. The social world must push back when the technical world encroaches on us. Not all technological advancement is necessary, and not all of it pushes toward progress.

In conclusion, to the question of whether more technological innovation solves social problems: well, it's complicated, and one needs to reflect on it more. But as of now, I believe that obsession with technological innovation without healthy reservations and pushback is dangerous.

Github for Social Scientists

The best thing about setting a goal to learn computational skills is the pleasure of figuring out the knowledge on my own. After the Summer Institute in Computational Social Science (2019) at Hunter College, I decided to improve my machine learning and data wrangling skills. Those skills are very difficult for me to learn in a structured classroom context, because there the instructor often teaches what is easy for them to teach rather than what one needs to advance as a researcher. As a colleague once said: “As a researcher, one should not have a hole in one's knowledge.” I decided that I do not want any hole in my data-cleaning skills. This time, the skills I want to acquire pertain to GitHub, a platform that hosts repositories (repos for short) for coding projects. It has become an essential platform for open science, software development, and scientific collaboration. Lately it is my go-to site for exploring how code in R and Python should be written. In many ways, I have become a passive consumer of code through its repos.

As mentioned many times before on this blog, I am a bit of a conservative learner. I want to see other people, preferably knowledgeable and experienced instructors, show me how to do things, and then I model their process, their logic, or the steps they take. I often go back to an authority figure to make sure that what I do is right. One can think of this as supervised learning, in which one follows well-defined steps laid out by an instructor. This is one way to learn new things, but it is a limited way, because what the instructor shows is only one way of doing things; there may be many others that the instructor does not know or does not have time to show. Ideally, as a researcher I want to switch to an unsupervised learning mode, in which I discover knowledge and learn to detect patterns myself. Yet given my conservative learning nature and my impatience with self-discovery, I chose to follow a GitHub 101 course on Coursera to save time.

The course is called Version Control with Git. Each lecture is a bite-sized three-to-five-minute video. This is very good, because as a baby in the world of collaborative coding, version control, and open-source software, I need detailed, step-by-step instructions for how to do things. The videos do precisely that: the instructor gives clear instructions for accomplishing specific tasks. I feel like I actually understand the meaning of each command line and what I want to accomplish. So far I have finished watching the first couple of weeks of the course materials. Besides watching the videos, I practice the command lines immediately, using Terminal on my own computer on a side project I have been working on for about half a year now: updating my own website. It has been a fun experience. I am now able to write basic command lines to change my website's interface and push the code to my repo. The feeling of making something work as expected is so satisfying. After this episode, I feel more confident using Terminal.
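For a sense of what those bite-sized lessons cover, the core workflow looks roughly like this (a minimal sketch; the repo name, file, and remote URL are made up for illustration):

```shell
# Create a repository and make a first commit.
cd "$(mktemp -d)"
git init -q demo-site && cd demo-site
echo "<h1>Hello</h1>" > index.html
git add index.html
git -c user.name="Me" -c user.email="me@example.com" commit -q -m "Add homepage"
git log --oneline   # history now shows the new commit

# Publishing to a hosted repo adds two more steps (placeholder URL):
#   git remote add origin https://github.com/<user>/demo-site.git
#   git push -u origin main
```

Each command maps onto one short video: staging with `git add`, recording a snapshot with `git commit`, and inspecting history with `git log`.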

What I have realized is that I really enjoy learning new things, and really enjoy school in the sense of acquiring knowledge. At the PhD level, I know that many classes are not really worth my time, but the idea of constantly acquiring new knowledge and skills excites me. This experience is just an example of why I keep going back to school and why I feel the urge to learn new skills.

This reminds me of the feeling I had after I passed the orals exam (a qualifying exam in my PhD program). It was weird. Once my committee said that it was the last exam of my academic career, and that I had passed, I felt I was finally accepted into the profession, in which I had always felt foreign. Now I feel that I partially belong, that I can start doing meaningful research and create knowledge that other people will consume and appreciate. Finally I am allowed to claim my title as a social scientist. Similarly, now that I know how to use simple Git commands, I feel one step closer to being able to call myself a data scientist. The next goal is to learn how to use LaTeX to write papers to submit to computational social science conferences.

Venture Labor vs. A Company of One

In my orals reading list for the sociology of work, two books, Venture Labor by Gina Neff and A Company of One by Carrie Lane, share many similarities that are worth a critical review essay. Both are about tech workers' experiences in the 2000s and the effects of the dot-com boom and bust on their personal lives. The authors look at workers' experiences in the so-called “New Economy.” The first book was written by an organizational sociologist, the other by a cultural anthropologist. Venture Labor asks why people left their jobs to work for high-tech companies that went bust after a short time, and why they exhibited entrepreneurial behavior in their jobs while not being entrepreneurs themselves. Among the many questions Carrie Lane asks in her book, the one that summarizes them all is how unemployed white-collar tech workers made sense of their unemployment. In other words, Neff focuses on the decision-making process before the dot-com bust, while Lane focuses on how individual workers coped with its effects.

To answer the questions they pose, both scholars conducted interviews and ethnography in two centers of the tech industry in the early 2000s: Neff did her fieldwork with startup workers in Silicon Alley in New York City, and Lane in Dallas, Texas. The different locations gave different contextual answers to seemingly similar research questions. In answering their questions, both employ cultural theory and cultural interpretation of their interview data. Neff uses a political economy approach to study the tech workers, while Lane is interested in how macro neoliberal ideology shapes how an individual sees the world. In many ways, their central concern is the relationship between the self and society in the realm of work.

Each author contributes to her respective field of study. Carrie Lane uses the concept of “a company of one” as a theoretical framework to explain how unemployed white-collar workers make sense of their unemployment, their financial loss, and their adjustment to life after layoff. Each individual employs career management to navigate their company of one, managing their own career trajectory independent of any company or corporation. Lane uses this concept to explain how her subjects make sense of the effects that unemployment had on their family lives and on their understanding of themselves as employable people, workers, and partners. The term suggests that each person is responsible for managing their own career. However, Lane also provides the reader with a nuanced understanding of the strategy; the term is full of contradictions.

On the one hand:

Career management is a life raft to which displaced tech workers can cling amid the roiling sea of insecure employment and prolonged joblessness. Seeing oneself as an independent company of one (rather than a discarded employee) can bolster the optimism and self-esteem of job seekers while providing them with tangible strategies for finding employment. Casting secure employment as a situation of foolish, emasculated dependency provides some protection against the emotional upheaval of an unexpected layoff or prolonged job search. Conceptualizing job seeking as just another kind of job allows tech workers to retain a resilient sense of self-worth and professional value in the prolonged absence of paid employment.

During the period of unemployment, individual workers lead their own job-search process, and they conceptualize it as another job. They are the leaders of their own lives; they manage their own company whose brand is themselves. This management technique helps them remain optimistic about their employable future. Lane shows throughout her book that it is a way for white-collar workers to justify their downwardly mobile career trajectories. It is also a way for workers to internalize the economic uncertainty of the New Economy.

But on the other hand,

[Career management] can just as reasonably be imagined as a stick with which they are beaten, and with which they beat themselves, as they try to stay afloat. To see it one way without also seeing the other is to sacrifice a fuller view of these job seekers and the cultural logic they inhabit. Despite tangible and often painful losses, the ultimate cost of career management lies in its naturalization of the losses.

Workers blame themselves for their economic losses, and sometimes for their inability to find a job whose financial reward matches what they had before. Career management assigns responsibility and agency to the workers instead of to the structure. Workers marshal every resource they can to get themselves out of their dire situation without making any structural claim. They barely ask for government support, or for government intervention in the joblessness problem. They go to job seekers’ networking events to find job leads and to network with other job seekers, hoping that social networks could help them out of their situation. Most eventually figure out that socializing with other job seekers will not get them a job, and they quit in the end. In other words, this cultural logic shifts responsibility away from corporations and government, and makes workers believe that they themselves are responsible for their financial security during economic downturns.

Similar to Lane, Neff also arrives at a cultural answer to her question. The term she coins is “venture labor,” which is what workers exhibit in driving innovation and company flexibility. It is “the investment of time, energy, human capital and other personal resources that ordinary employees make in the companies where they work.” Neff uses this concept to explain how companies and the tech industry were able to socialize economic risks. She argues:

This behavior is a part of a broader shift in society in which economic risk shifted away from collective responsibility toward individual responsibility. In the new economy, risk and reward took the place of job loyalty, and the dot-com boom helped glorify risks. Company flexibility was gained at the expense of employee security.

All workers exhibited entrepreneurial behaviors, regardless of whether they were entrepreneurs or not. Neff also traces the larger structural and cultural changes that enabled this particular kind of risk-taking behavior:

Three economic forces increased the level of economic risks people bore in the late 20th and early 21st centuries: the increasing “financialization” of the American economy; rapidly changing valuations of work, products, and services within the new economy; and the widespread diffusion of flexible work practices.

These forces came together at the beginning of the 21st century, and they enabled startup workers to stomach risk: they left their stable jobs to work for small startups in Silicon Alley. That is to say, Venture Labor examines how startup workers framed their risk-taking behavior. In the process, Neff identifies three different types of narratives that workers used to justify their risk taking. She calls them the creative, financial, and actuarial strategies of risk management.

Table 3.1 on page 94 summarizes three types of risk-management strategies:

[Image: summary of the three risk-management strategies from Table 3.1 in Venture Labor]

Workers employ these strategies to flexibly adjust to, understand, and rationalize the “economic uncertainty they face.” People who use the financial strategy “evaluate their companies for their potential as lucrative investments, accounting for risk in expressly financial terms, and actively assessing the financial potential of their labor as an investment in their companies.” Able to calculate their worth, value, and tangible contribution to their companies, these people fit the Internet-millionaire stereotype. People who employ the creative strategy emphasize that creative projects have risks associated with them; therefore, taking risks is part of their labor. Finally, those who use the actuarial strategy calculate the risks of each position they occupy and each project they work on. They hedge against risk so that they are protected even if the market crashes. Put differently, they do not invest everything in their company like the financial type, but diversify their work and positions in order to avoid risk.

By creating three ideal types of how tech workers conceptualize and frame risks, Gina Neff gives the concept “venture labor” contours and depth. These types help readers imagine how people actually perform this type of labor in real life, and how they would justify their behavior.

Even though each author approaches her subjects differently, calling them “a company of one” or “venture labor,” the two books have much in common. Both argue that during economic downturns, networking does not help workers much. When their networks become homogeneous (that is, when everyone in the network is like them: unemployed), networking will not help workers find a job. Both authors argue that economic downturns expose various assumptions that earlier network scholars made about the social power of networking.

In conclusion, the two books are great monographs on how workers in the tech industry make sense of their world. Lane writes a compelling story about how neoliberal ideology shapes workers’ sense-making. Neff analyzes the structural and cultural reasons why workers internalize entrepreneurial behaviors and thereby take on more risk. As the tech industry expands into every city, these two books should be must-reads in the sociology of work, tech and society, and cultural sociology.

Amazon’s Interface Change on Prime Pantry

Lately, because I was so stressed out by my orals exam, I started shopping more on Amazon Prime, particularly in the Prime Pantry section. Sometimes I would get a very good deal, and if the total purchase was more than $35, they would ship it to my apartment for free. That is a good deal because, honestly, groceries cost me more than $100 every week. Yes, groceries in New York are expensive. I acknowledge that Amazon does not treat its workers well, and that it is moving heavily in the direction of monopoly. Yet like many American consumers in the 21st century, I face a dilemma: if I choose not to make Amazon a behemoth, I should quit the platform; but since I was overly stressed and did not want shopping to become another source of anxiety, I reduced my various grocery trips to one Amazon shipment. It is a surprisingly common dilemma for American households that do not have the extra disposable income and extra time to shop locally. This dilemma has been a source of moral contention between me and myself, and between me and my partner. On the one hand, I feel guilty giving more money to Amazon. On the other hand, I am very tired of going to various stores in New York and feeling overcharged for grocery items. After a while, I resorted to a pure cost-benefit calculation: I would rather save the money than stress over whether my individual action makes me feel good morally. In other words, I keep shopping on Amazon on a weekly basis.

Last weekend, however, I noticed a difference on the Amazon Prime Pantry platform. When I searched for raisins, I could no longer see the cost associated with each item. The search results showed up as follows:

[Screenshot: Prime Pantry search results displayed without prices]

Before, one could search for an item and see the price immediately, so an individual shopper could compare brands, prices, and whether there was any discount on each item. Now all those benchmarks are gone. I can no longer see the price of each item. To find out how much each item costs, I now have to click on each one, remember its price, come back, click on the others, and then compare the prices. This is simply inconvenient for a shopper like me. It is possible that this is just a test of a new interface for Prime Pantry items only. However, it could also be a new strategy: maybe the company wants customers to spend more time on its website comparing prices, quality, and reviews.

What originally brought me to Amazon, namely information transparency, has become somewhat harder to get at. Instead of saving my shopping time, the lack of information on the first search page makes me feel like I need to spend more time on the website. Now it takes twice as long to shop for the same number of items. This inconvenience annoyed me for an afternoon, yet it was still not strong enough to drive me away from the platform. I wonder when I will leave this platform altogether.

Automation and the Pain of Eating Out – Example from La Guardia Airport

A couple of weeks ago, when I flew to Atlanta to visit friends, I took a flight out of La Guardia (LGA). Because La Guardia is a much smaller airport than JFK, it took a lot less time to wait in line for security, so I had a lot more time to spend inside the airport. However, since it is also a much smaller airport, there was a lot less to do. One thing I did have to do was feed myself, because the flight left around noon and I had not had breakfast that day.

Then I discovered that, other than a take-out deli-style restaurant, the restaurants at La Guardia had tablets for ordering. After spending some more time exploring a few gates, I noticed that these tablets were also installed in various waiting areas. They looked pretty much like this one:

[Image: ordering tablet at La Guardia]

Since I wanted something like an eat-out experience at the airport, I had no choice but to go into one of those restaurants where one must order with a tablet.

First, a waitress came out and greeted me: “How can I help you?”

I answered: “I need a table, and I prefer not to sit at the bar.”

Then she led me to a small table near the kitchen, away from the lobby that led to the various gates. In many ways, that was ideal because I could stay away from foot traffic. As soon as I sat down, she started to explain how the tablet worked, and how at the end I could pay with my credit card using the card-shaped reader next to the table. Suddenly, my dining-out experience had become more like practicing being a cashier.

I briefly worked as a waitress at a couple of restaurants in Berlin, and this dining experience reminded me of everything I did not like about the job. First, the tablet gave me a lot of options, and the waitress was not there to explain any of them. As a busy flight passenger, I did not have time to go through all the lunch options to figure out which one was best for me. Oftentimes when I eat out, I look at a few options on the menu that I like, and then ask the waiter or waitress which one I should choose, because it may be up to the chef that day to make something excellent, or it may depend on the quality of the ingredients that day. The server is often my best friend in deciding. In sociological terms, waiters and waitresses are cultural intermediaries in these instances: they shape my taste, and eventually my consumption and how I spend my money. In the case of the tablet restaurant, there was no cultural intermediary left to consult. I would be my own waitress, deciding for myself what would best suit my time, budget, and consumption. But the information the tablet gave me was incomplete at best. Therefore, as I scrolled through the options on the tablet, I instantly became overloaded with information and overwhelmed. My mind was paralyzed.

Furthermore, the tablet was up in front of me the entire time I was in the restaurant. After using it for about 10 minutes, I became irritated. At the time, I was trying to cut down my screen time because I was going through a digital minimalism experiment. My goal was to reduce screen time, cell phone time in particular. In another blog post, I argued that urbanites, especially in New York, are increasingly overexposed to screens. I call this process “ubiquitous computerization”: one is exposed to digital technology 24/7, even when one sleeps. Now my dining-out experience is not exempt from this process either. I am not a doctor, but I am aware that looking at a bright screen for a long time is not good for my eyes. Plus, I do not have the money to buy those blue-light-blocking glasses that could potentially help me handle more screen time while still protecting my eyes. So I finally decided to walk out of the restaurant, because I could not bear the aesthetics of the tablets and the blue light that attacked my eyes and drained my mental energy.


In many ways, I walked out because I was annoyed, irritated, and tired. At the time, I did not think at all about the health consequences of looking at a tablet’s bright screen. Yet reflecting on the experience now, I feel that I should be more aware of the health consequences of those devices.


However, my experience seems to be an extreme one. The popular media, particularly the New York Times, gave me the impression that most travelers enjoy having iPads installed all over La Guardia. The idea is that passengers can instantly access information about their flights and use their time at the airport to entertain themselves. However, the experience sounds awfully lonely to me. The iPadization of La Guardia seems to contribute even more to the alienation of the self from society. The man in the New York Times article is a businessman, and he seemed to enjoy using these tools for business purposes. Somehow the image of Ryan Bingham (played by George Clooney) in the movie Up in the Air came to my mind. Despite the convenience of business-class seats and the various amenities Ryan gets at every airport, he ultimately travels alone and feels lonely most of the time. If Ryan is the average, stereotypical traveler for airport designers, then maybe getting a meal via an iPad, and spending more time with an iPad than with anybody else, would be ideal. Instead of feeling lonely, he would be wired to the Internet, and maybe eventually he could chat with a female bot to pass his time and get a dose of emotional connection from an AI chatbot.

Regardless of my experience, and my opinion, the trend is clear. We are all wired, and connected now. Ubiquitous computerization is here to stay. We are connected 24/7 through personal laptops, cell phones, and now free amenities at public transportation stations, and travel hubs. Digitalization is now transforming every aspect of our life: work, play, and leisure.

Digital Decluttering or Digital Withdrawal?

A few weeks ago, I read and reviewed the book Digital Minimalism by Cal Newport. The author argues that in order to live a fuller life, one should optimize for non-digital activities. The book lays out the arguments for why minimal use of digital technology is important, and why less contact with cell phones and screens makes one happier. It also provides a roadmap for a decluttering experiment. That is precisely what I did: I followed the plan and cleansed the digital junk out of my cell phone and email. This blog post summarizes my decluttering experience.

First, I deleted almost all social media apps: Facebook, Twitter, Instagram. Those have been called “weapons of mass distraction.” Yet because I was never really addicted to those apps in the first place, I did not see any significant change. The app that gave me the biggest headache was the Mail app pre-installed on my iPhone. Before, I checked and answered emails every five minutes. Now, if I need to check email, I open my laptop and access it from my computer. Sometimes I still check email on my cell phone, but instead of opening the usual email app, I open the Safari browser and read emails from the website. This really slows down the process, and sometimes it discourages me from reading and answering emails altogether.

Then I went even further and did a deep cleaning of my mailbox. I unsubscribed from almost all the unimportant newsletters and mailing lists that gave me no essential information; they simply flooded my mailbox with unnecessary information. By freeing myself from constant emailing, and freeing my inbox from unnecessary information, I suddenly have much more free time. Theoretically, I can use this new free time to focus on my essential work, which is writing my research papers and reading for orals. Deep work needs deep concentration. However, what I am going through can only be described as “emptiness.” Before, I felt as busy as a businesswoman. Now I need to find a reason to have an email to answer. Information has stopped flooding my consciousness altogether. I have to go to the New York Times website just to read the news.

From behaving like a digital addict, I now suffer from digital withdrawal. I open my email 10 times a day and refresh the web browser every half hour, yet no new emails arrive. Before, I could never read through all the emails I received each day. Now nothing is coming at me, and I don’t know what to do about it. It feels like I am no longer pursued by people and companies. Suffering from digital withdrawal is not a comfortable feeling. I am still trying to fill the void of digital emptiness.

In Digital Minimalism, Cal Newport also suggests that after the deep cleaning, one can re-introduce digital technologies into one’s workflow and life activities. But one should only introduce essential tools that improve the quality of one’s life and work.

Newport suggests the following:

After the break, I determine what is important and add it back into my life. The three criteria for making sure a technology is critical and necessary for my well-being are:

  1. It serves something that I deeply value
  2. It is the best way to use technology to serve this value
  3. It has a role in my life that is constrained with a standard operating procedure that specifies when and how to use it.

Having thought carefully about these criteria, I did not re-install anything I had deleted. I only downloaded and installed a few new apps that restrict my cell phone use even further. For example, Moment gives me reports on how much time I spend on my phone. My goal now is to pick up the phone fewer than 25 times a day, and to spend less than 1.5 hours on it in total. I was surprised by how many times a day I picked up my phone before the Mail app was deleted: around 50–60 times, and I often spent more than 2.5 hours on various applications.

The experiment has not been entirely liberating, because I am still going through digital withdrawal. Sometimes I feel completely disconnected from the world, which is a very uncomfortable feeling. I recognize that I have a “fear of missing out” that I never thought I had. This revelation made me contemplate the relationship between the self and society with regard to digital technology. How has digital technology altered one’s understanding of oneself in society? This question needs some social theory to answer, and for now I will simply pose it without offering a concrete answer.

The experiment also nudges me to think more deeply about minimalism as a general philosophy. It basically means cutting the unessential things out of life in order to live a more fulfilling one. On a personal level, merely cutting down my digital consumption already frees up mental space, and I have more time to focus on the important activities that bring me joy, happiness, and success. I wonder whether this philosophy of “everything minimalism” can be applied to organizations and institutions. For example, could universities cut unnecessary, symbolic programs, such as huge football teams or new dormitories, so that students do not have to take on unnecessary debt for higher education? Can a society as a whole pursue a minimalist philosophy? Is it anti-capitalist, or is it simply a necessity in modern society?

Digital Declutter: App Cleansing

Last week I wrote a review of the book Digital Minimalism by Cal Newport, which gave me some ideas about how to make my life simpler so that I can focus on the things that matter in my professional and private life. To summarize, the main idea is relatively simple: whatever digital tools you don’t need, eliminate them, and do not regret leaving them out of your life. The big message is: do not internalize the fear of missing out. In Newport’s own words, the philosophy is defined as follows:

Digital Minimalism is a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value and then happily miss out on everything else.

Newport suggests three steps for decluttering one’s digital life:

  • Step 1: Define your technology rules
  • Step 2: Take a 30-day break
  • Step 3: Reintroduce technology

Before taking those steps, I needed to figure out what is important in my life at this point, and what my goals are. My conclusion is that I want to reclaim my time and attention for three important tasks: (1) to have enough time to study for my oral exams; (2) to be able to focus and be creative in my writing; and (3) to claim time for my hobby, running. With those three goals in mind, I started to work out each step in more detail.

In the first step, I defined for myself what the optional technologies are. They include Twitter, Facebook, and all the gossip news channels on the Internet. I deleted all the apps that take up too much of my attention. Almost all of them are social media apps, including Twitter, Facebook, and Skype. I recognized that if I really need these platforms, I can go to their websites and get the information there. Surprisingly, all the Amazon apps (Amazon Prime, Amazon Now, etc.) were also deleted. I had discovered in previous months that I spent way too much time on Amazon looking at consumer goods, and way too much money on unnecessary items, such as sweet almond oil or an extra pack of toothbrushes, just because they were on discount. Deleting Amazon helps me get back my precious time and preserve the little money in my savings account. What is even more surprising is the biggest culprit of them all: EMAIL! Email really makes me feel stupid, and “enslaved” to other people’s demands.

The act of deleting these apps already felt liberating. To declutter even further in the realm of email, I started unsubscribing from all the mailing lists that I do not read, which keep crowding my inbox and making me feel guilty for not reading them. Yet I know this is only temporary, because I will subscribe to many more in the future. That means I need to purge my email periodically; the act of decluttering should be repeated every once in a while, built into one’s workflow, perhaps every other month or once a quarter. Decluttering also means re-evaluating one’s priorities in life in relation to work and personal happiness.

Even though Newport suggests taking a 30-day break from technologies (those that do not critically affect one’s professional life), I think a two-week experiment can already tell whether his tips work. Therefore, I am giving myself a two-week trial to see if I can keep following his advice. This is only the first week, which means I am only halfway through step 2. In one week, I will report what happens over the next seven days, and which apps and digital platforms I think are necessary to re-introduce into my life.

Over the past week, the experiment has shown me that I have gained some independence from my iPhone. Without the email app on it, the phone serves a few simple functions: texting, calling, and sometimes reading the news. I feel less attached to it.

I especially need a period of uninterrupted time to study for my orals exam. This decluttering experience really helps me squeeze out some extra time for work, and gives me some peace of mind to focus on what is essential for my intellectual life.

What I find so powerful about Newport’s suggestion is that I am allowed to ignore unessential things, and that I can be unapologetic about ignoring them. This is the “POWER OF IGNORANCE.” One really is happier and more focused when one is ignorant of the trivial. In the age of info-glut, ignorance is a virtue. One does not need to know everything; Wikipedia needs to know everything. Human beings should live an examined life, feel productive, and be able to contribute to society. I feel empowered by this. I no longer pay attention to trivial things. I don’t feel like I need to know everything.

One question after deleting those apps is how to use my time productively. Newport suggests spending it on being unconnected, because unconnectedness is actually good for my mental and psychological well-being. For example, I can spend time alone, exploring my own thoughts, uninterrupted by other people’s thoughts and opinions. That is to say, I should seriously spend time giving credit to my own original thoughts. As a knowledge worker, I cannot agree more. He basically argues that my opinions and originality matter, and that in order to acknowledge their value, I need to spend more time with them and for them. Walking around the neighborhood, or alone in the park, is one way I can spend time on my own feet with my own thoughts.

This advice suddenly gives a different function to my daily walk and run. Normally I conceptualize running and walking as physical exercise, where the main focus is on my physical well-being. Hence, I often listen to a podcast or music to keep a fast-paced rhythm, or to avoid having to focus on my thoughts. Now Newport has changed this practice. He suggests redirecting the function of these exercises from the physical to the intellectual. I have stopped listening to podcasts or music when I run in Central Park, and when I take the long walk from home to City College to teach. Cal Newport mentions that he lived about a mile away from MIT when he was in grad school. To my amusement and surprise, I also live exactly one mile from City College. This makes the walk to the college all the more enjoyable, and more intellectual. This week, I spent that time thinking about what I would discuss with my students, how I could improve their learning experience, and how I could formulate the introduction to an academic paper I am writing. I have started to enjoy my walk all the more. Thinking does not require me to sit still at a cubicle with pen and paper; thinking can happen anywhere, as long as I can focus my attention. I had been doing it wrong all along, wasting my short attention span on listening to the millions of interesting podcasts on the Internet and taking my eye off the ball.

Overall, the experience has been delightful. By undertaking this two-week self-imposed experiment, I learned a lot more about myself, how I spend my time, and what my priorities at the moment are. When the goals became clearer, I could adjust how to achieve them, and how to cut down on unnecessary activities. In exactly one week, I will report how I feel after the entire experiment, and what I think about the digital minimalism philosophy.




Ubiquitous computerization: The Case of New York

If you have seen Black Mirror season 1, you probably still remember episode 2, in which the character Bing is surrounded by TV screens 24/7. He lives in a world where everybody is constantly surveilled and measured. It feels as if he were living in a reality TV show all the time.

[Image: still from Black Mirror, season 1, episode 2]

When I watched this episode for the first time, I wondered what it would feel like to be surrounded by TV screens 24/7. Later on, I learned the concept of “ubiquitous computing,” which refers to computing that appears anytime and anywhere, in any location and any format. In many ways, we are already experiencing ubiquitous computing with portable devices such as smartphones, the Apple Watch, and Fitbit. We are connected 24/7, and our personal data is collected constantly.

Then, on my way to work over the past few days, I realized that New York City is experiencing intensifying ubiquitous computerization. LinkNYC kiosks appear everywhere. They let New Yorkers connect to free Wi-Fi instantaneously, and the Internet speed is exceptionally fast. Who wouldn’t want to use that? Increasingly, New York City is investing in tech infrastructure and other social infrastructure in order to lure tech companies, so an intensification of technological infrastructure is inevitable.

The city is increasingly connected. Inside the subway system, one sees more and more screens.


So in addition to screen time from one’s computers, smartphones, and other electronic devices, one is now surrounded with more screens, planted by the city government in an attempt to modernize New York, and to make this city ever more connected.

Yet this attempt to make New York ever more connected should be met with healthy criticism. On the one hand, I see an intensification of technology in our daily life. On the other hand, I do not see how a cosmetic technological fix would make any meaningful change to an analog problem. Take the subway system. How would putting more screens inside subway stations fix an outdated train system that has not received new investment in decades? What is needed to fix the mass transit crisis in New York is not new screens, but solid, century-old mechanical engineering. New Yorkers deserve clean stations and safe, functioning trains. Those bare-minimum requirements have not been met for decades. Yet once the crisis broke out, the city, and particularly its IT department, wanted to beautify the stations by exposing New Yorkers to more screen time. I don’t see how more screen time would improve New Yorkers’ basic transportation needs.

This experience leads me to think more about how tech would be able to “disrupt” or “change” or “make the world a better place.” Without doubt, technology has enhanced our experience in various aspects of life. Yet it has not been able to replace material production, or core services such as transportation, which are still provided pretty much by humans. In other words, I question how the tech industry can substantially change the world if other infrastructures are not well designed and well implemented.

Another concern I have about ubiquitous computing is its impact on our psychological health. In the book Digital Minimalism, computer scientist Cal Newport makes the point that in order to protect our productivity and happiness, we should limit our screen time. It is an important idea for workers and parents nowadays. I am already connected 24/7 via my smartphone. I am also connected to the world through my laptop, which I often carry around in my purse. Now, even when I ride the subway to work or take a walk in my neighborhood, I can hypothetically be connected via LinkNYC or other screens. This ubiquitous connection makes me feel dizzy and stressed. No wonder that whenever I feel too stressed and want to unwind, I want to get away to the Catskills or somewhere else in upstate New York, where big screens give way to forests, creeks, and big sky. In other words, with ubiquitous computing, city residents are experiencing even more intense pressure on their mental and psychological well-being.

I wish there were more places in New York like Central Park where one can stay away from connectivity. In Manhattan, Central Park is the only place LinkNYC has not yet intruded upon. It is the last piece of land near where I live where I can still breathe fresh air and smell morning grass. The following map shows the coverage of LinkNYC.


When I first watched Black Mirror, I thought ubiquitous computing was science fiction. Only a couple of years later, with the increasing presence of devices in my surroundings, including voice assistants such as Alexa and Siri and smarter and smarter smartphones, I realize that the idea is becoming reality. Yet we do not really know what its unintended consequences for our well-being and happiness will be.

Hacker Culture Becoming Mainstream?

While writing reviews of two books, The Mastermind and Bad Blood, I noticed that the organizations described in them consciously employed hacking as their main way of solving problems. In The Mastermind, the entire enterprise that Paul Le Roux built was centered on hacker culture. He figured out various loopholes in the American health care system and exploited them to sell painkillers on the Internet. In Evan Ratliff’s words, Le Roux often hacked his way out of a situation:

Typical of Le Roux, the plan was a kind of hack. Just as he had exploited a hole in the American healthcare system to sell painkillers, he planned to take advantage of a dysfunctional government to exploit the resources it couldn’t harvest.

Sitting in his headquarters in the Philippines, he could mobilize more than 1,000 employees across the world to work for his many companies. He hacked his way into becoming a mastermind of the dark web. In Bad Blood, Elizabeth Holmes’s team of engineers and scientists at one point bought a Siemens industrial blood-testing machine, took it apart, and reverse-engineered it, hoping they could shrink the German-engineered device to a portable size. They wanted to hack their way to success.

Hacker culture started out as a subculture of people who enjoyed “the intellectual challenge of creatively overcoming limitations of software systems to achieve novel and clever outcomes.” That is to say, it was originally confined largely to software engineers and computer scientists. Since the 1990s, however, tech companies have been growing very fast and drawing in a huge swath of creative workers. Companies such as Google and Facebook encourage their workers to play while at work. They have become powerful organizations whose work culture is revered and influential. From the periphery of the economy in Silicon Valley, they now influence politics, media, culture, and education. In other words, their work culture is increasingly becoming mainstream. Hacker culture is no longer a subculture.

The tech industry is gradually spreading across the United States and around the globe. Companies such as Google and Facebook no longer sit only in the Bay Area; they have offices all over the world. They are moving to the main streets of every city. In New York, they are closer to Wall Street than to universities. Their workers now bring their creative vibe with them to every corner of the city. In a few years, I wonder whether schools and universities will adopt this culture as a way to move forward in the twenty-first century. Will schools organize “hack” events where students identify loopholes in some system, be it the school system, the food system, or the healthcare system, and then think about how they could monetize their hack by devising a solution that manipulates the loopholes they identified? It seems the more loopholes, the better for hacker culture. Will regulators and system designers learn from those hacks, and work with hackers to identify and solve problems?

Hacker culture no longer seems to be contained in the tech world. It is spreading outside that confinement and exerting its influence in other industries. Put another way, hacker culture is becoming mainstream.

Book Review: The Mastermind by Evan Ratliff

Sometimes I ask myself: why are people, especially men, obsessed with action movies and thrillers? What is it in hyper-violent scenes that mesmerizes the masculine mind? I recently found myself completely captivated by one such case, that of Paul Le Roux, through the mesmerizing book The Mastermind by Evan Ratliff. The book hooked me from the very beginning, when it opens with the mysterious murder of Catherine Lee, a real estate agent in the Philippines. Ratliff then introduced me to the world of criminals, law enforcement, and the trans-border pharmaceutical and drug business. Even though it is not a work of fiction, I felt I was being introduced to a world I would never experience in real life. I felt the urgency of each scene, the severity of each action the characters took. The question of why one gets captivated by thrillers became less mysterious to me: one’s adrenaline gets pumped up as the story unravels.

Similar to how John Carreyrou reconstructs the case of Theranos in Bad Blood, Evan Ratliff tells the story of how Paul Le Roux made money selling painkillers online to American customers, and how his organization was brought down. The two books are similar in telling the story of an organization that became hugely successful in a short period of time and was then dismantled because of its illegal practices and its disregard for customers’ health. Yet Theranos was a legitimate company with an unrealistic product, while Paul Le Roux’s RX Limited was illegal by design. The case of RX Limited cautions readers about the dark web, where technologically advanced criminals can manipulate the Internet and exploit users’ weaknesses and legal loopholes to make money.

Paul Le Roux created a sophisticated network of companies, called RX Limited, that spanned many countries. He exploited loopholes in the American healthcare system to sell painkillers online to American customers. Le Roux was not an American passport holder: he was born in Zimbabwe and raised briefly in South Africa. He ran his entire operation from his home and office in the Philippines, where law enforcement could be bribed and politicians could be bought.

How could a South African man operating from the Philippines run a huge network of pharmacies and call centers and manipulate doctors into selling painkillers to American consumers? He used the Internet. He recruited doctors, pharmacies, and employees, and advertised to customers, via the Internet. This one-man empire was so expansive and impressive that everyone had to agree he really was a mastermind.

If Paul Le Roux’s greed had stopped at selling painkillers to Americans, his case would not have been so newsworthy. “He made money from pharmacies, and then he decided that he wanted to make more money, fast,” one of his employees confessed. He wanted to be bigger and to operate further into the realm of the illegal. Having earned millions of dollars from selling painkillers to Americans, he became internationally notorious by expanding into other criminal ventures such as arms dealing, narcotics trafficking, and killing his own subordinates. These deeds were enough to make him one of the most infamous criminals of the twenty-first century. Not surprisingly, the American Drug Enforcement Administration (DEA) was after him.

What was most surprising about his story was that after being captured by the DEA, he turned informant. In other words, he willingly cooperated with the US government so that his subordinates could be captured as well. Evan Ratliff argues that this event really changed how Paul Le Roux was tried, and it set this case apart from others. Normally, law enforcement captures subordinates, hoping gradually to reach the boss at the top of the pyramid. In this case, the pyramid was reversed: the drug enforcers captured the boss first and then used him to capture the underlings. By doing so, the government created an air of secrecy around the case, which made Evan Ratliff’s reporting both valuable and unexpected. In court, defendants and their lawyers often quoted his reporting as if it were the truth, because the boss was under the American government’s protection and unavailable as a witness. In other words, the man who created the entire empire, the one who knew the most about the case, could not testify. Everybody had to rely on hypotheses and a journalist’s reconstruction of the story to make their case. This is a perverse dilemma.

The book reads like a detective story with many surprising twists and turns. It starts with the story of an online painkiller operation, moves through the mission-impossible exploits of a criminal kingpin, and finally ends by exposing problems in the American criminal justice system. As a sociologist, I read it with an eye toward social problems, and the book is filled with social and structural problems to keep my mind engaged. There is the problem of regulating the Internet. There is the question of transnational organization. Closer to home is the question of what the American criminal justice system actually does. Does it want to protect its own citizens, or does it want to fish international drug lords out of their waters?

In short, the book is full of thrilling details, which could make it a blockbuster movie. All the elements of lust, greed, violence, and betrayal in this book promise moviegoers a mesmerizing and unsettling experience. More importantly, after finishing it, one has to ask questions about the rise of the Internet, the fragmented nature of organization in a global economy, and the logic and purpose of law enforcement. Those are difficult questions to answer, but the book successfully raises them for the reader.


Book Review: Digital Minimalism

Struggling with my Facebook addiction has been one of my problems for a long time. I remember joining Facebook sometime after high school. It was useful because it helped me connect with friends and find long-lost connections. Increasingly, it became a tool my parents used to keep updated on where I went and what I did without having to ask. In college, however, I started to become aware of my addiction to Facebook. During exams, I would deactivate my account. At that time I did not even have a smartphone, so I only checked my Facebook feed on a computer. Still, it was a huge struggle. Then I read Deep Work by Cal Newport, which suggests that in order to be a knowledge worker, one needs to stay away from digital distractions, including social media and email. They are only there to distract; they do not contribute to the baseline of intense intellectual work. That was brilliant advice. I started turning off my cell phone during classes and during work. Yet I was still hooked on Facebook until this very winter, when I decided not to check it for weeks in a row. Purposely not checking Facebook has helped me reclaim time to do something more productive. Then Cal Newport published a new book: Digital Minimalism.

The book is popular nonfiction, yet Cal Newport uses evidence from the most recent research on online communication and behavioral psychology to back his arguments. It starts with the observation that “social media is the new tobacco.” Put differently, social media are addictive, and that is not unintentional: Newport claims that tech companies deliberately encourage this behavioral addiction in users. The two mechanisms by which these companies keep users on their platforms are intermittent positive reinforcement and the drive for social approval. One lies in how the platform is designed, while the other has to do with the fact that, as social beings, we yearn for social approval. Social media companies exploit this combination, using designs and algorithms that keep us spending countless hours on their platforms. This mindless time spent on these platforms and other digital distractions is bad for our productivity, happiness, and social relations, Newport claims. Therefore we have to reclaim our happiness, freedom, and productivity by systematically rethinking our relationship with digital communication tools.

Newport’s solution is “digital minimalism,” which is both a philosophy and a practice. He writes:

Digital Minimalism is a philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value and then happily miss out on everything else.

In other words, one needs to be mindful of what is essential and what is not. The online world offers more distractions than anyone wants. Therefore, one needs to be selective about what one does online, both on a computer and on a smartphone.

Why is digital over-consumption bad?

Because it affects our psychological well-being and destroys high-quality communication. Citing various psychological and behavioral research, Newport argues that digital over-consumption and digital communication via text messages, Facebook, and Twitter increase our anxiety levels. One could say that we live in an age of ultra-anxiety because of the ubiquity of the Internet: everywhere one turns, there is a device connected to it. We now have to make a conscious decision to stay offline. Being constantly connected is not good for human psychology.

It also affects the quality of our communication and friendships. Having one or two friends with whom one can rant for hours about a bad relationship, a bad boss, or the traffic is better than being connected to 1,000+ “friends” on Facebook whose attention to you lasts less than 30 seconds. It is a trade-off between quality and quantity. It seems counter-intuitive that in order to be happier, one needs to limit one’s circle of friends. Newport argues that it is a natural process: once one holds friendship to this higher standard, the number of such people drops naturally.

The more conversation you want to have, the fewer friends you will have: conversation-centric communication requires sacrifices. If you adopt this philosophy, the number of people for whom you can uphold this standard will almost certainly be significantly less than the total number of people you can follow, retweet, “like,” and occasionally leave a comment for on social media, or ping with the occasional text. Once you no longer count the latter activities as meaningful interaction, your social circle will seem at first to contract.

Because communication is such an essential human need, we figured out long ago how to do it well. Communication technology has profoundly changed how we communicate, and thus changed our relationship with communication itself. If we want to reclaim our quality relationships, we need to reclaim ownership of our time and of the ways we communicate, because

You cannot expect an app dreamed up in a college dorm room, or among the Ping-Pong tables of a Silicon Valley incubator, to successfully replace the types of rich interactions to which we’ve painstakingly adapted over millennia.

At the end of the day, we can participate freely on platforms such as Google, Facebook, and Twitter because these platforms sell our attention. This attention economy wants to extract as much time from our days as possible.

Our time = Their money

Cal Newport doesn’t go quite so far as to critique capitalism, or the form of capitalism these tech giants are creating. I will discuss that topic in a future post, when I review The Age of Surveillance Capitalism by Shoshana Zuboff. In that book, Zuboff analyzes the new economic logic of our time, in which this quest for attention has given rise to a new form of capitalism. Cal Newport only suggests that we contain our own behavior and not give away our precious time and happiness to the tech giants.

He then advocates spending time alone, contemplating life, and not being distracted by anything, including books, digital technology, and even friends and family. This, according to him, is essential to a good life. He argues that throughout history intellectuals such as Aristotle, Schopenhauer, Kant, and Nietzsche did just that, and lived happy and productive lives. I think Newport is biased in this argument. As an academic and an author, his main activity is intellectual work. This is not necessarily true for other people, such as a housewife, a plumber, or a nurse. I agree that everyone needs downtime: maybe winding down after a long workday over a beer with friends. That is also fine by me. Besides, I don’t entirely buy the argument that Schopenhauer and Nietzsche actually lived good lives by constantly contemplating life on their own. It is a romantic interpretation of the intellectual’s ascetic life.

In short, by explicating what “digital minimalism” is, Newport advocates for a life without much distraction: a more balanced, happier life in which digital technology does not take charge. I agree with him that we need to declutter our use of technology and reclaim our time and attention for the people and activities that truly matter to us.

More Women’s Colleges Should Provide Computer Science Majors

The tech world does not have many women. There is pressure to hire more women at tech startups, yet the general perception remains that tech is still a man’s world. Sexism is rampant, and it is difficult to talk about. Fewer tech startup founders are women than men, and women are rarely in leadership positions. Women represent only 26% of professional computing occupations. When one searches the Internet for the topic of women and tech, one sees titles such as “Where have all women gone” or “4 Reasons why you might fail to attract women in tech.” These titles suggest that the lack of women in tech is an epidemic. It is treated as natural, yet it is man-made, and it deserves explanation. In this blog post, however, I would like to offer a remedy for the problem instead of analyzing it as I normally would a social phenomenon.

I think women’s colleges could solve part of the problem. Women’s colleges in the United States typically focus on liberal arts education. That is to say, their students become well-rounded. This is a very good thing: they can think broadly, they read well, they write well, and they dare to become leaders and dare to fail. Those are important skills and qualities in a leader, a worker, and a colleague. However, because of the focus on liberal arts, some colleges do not spend their resources on building a computer science department. My alma mater, Agnes Scott, is an example. The school has no in-house computer science department; instead it outsources the subject to nearby technical schools by offering a dual degree in engineering and computer science with Emory University, fifteen minutes away. This, I think, is a mistake my alma mater made for its students. Why? Because the dual degree is expensive for students, and without an in-house computer science department, students do not cultivate the sense that computer science is something they could learn; instead they come to see it as something they are not supposed to learn. Having a computer science department signals that the school supports women in the tech industry and is willing to train a workforce for this growing industry.

Curious to see whether other women’s colleges have made the same mistake as my alma mater, I checked whether they offer a computer science major at all. Following Forbes’s list of the ten best women’s colleges in the United States, I created the following table:

No. College Computer Science Major 
1 Barnard College Major, Minor
2 Bryn Mawr College Major, Minor
3 Cedar Crest College Not clear (possibly yes)
4 Mills College Major, Minor
5 Mount Holyoke College Major, Minor
6 Simmons College Major, Minor
7 Smith College Major, Minor
8 Spelman College Major, Minor
9 Sweet Briar College Major, Minor
10 Wellesley College Major, Minor

So the top ten women’s colleges do offer computer science as a major or minor. That is good news. But the question is whether the tech industry actually invests in these programs and has built a pipeline between them and its workforce. That is an empirical question that remains to be answered. If anyone has information about this, please let me know.

Why do I think the tech world should focus on women’s colleges when it wants to solve its women problem? This solution, though counter-intuitive, is I think necessary. In order to combat sexism and implicit bias in hiring and working in the tech world, young women should first be socialized in a safe environment where they see other women learning to code and program and becoming leaders of their future industry. Sociologist Daniel Kleinman (2009) argues that when there are explicit measures to promote female participation in the labor force in hard sciences such as astrophysics, women tend to fare better. That is to say, explicit measures can help combat implicit biases in male-dominated industries such as tech and astrophysics. Because the tech world has developed so fast, it lacks explicit measures to attract and retain women in its workforce. In other words, implicit biases run rampant there. Women’s colleges are one environment in which implicit bias against women is turned on its head: young women are socialized in an environment where they are trained in the skills necessary to become workers, thinkers, and leaders in this field. This should not be taken lightly by women’s colleges or by the tech industry.

In a nutshell, combating sexism and the lack of female workers in the tech industry is a multifaceted problem. My small contribution to this debate is the suggestion that women’s colleges should step up their game in training the next generation of workers and leaders in this field. The reasons are twofold. First, women’s colleges offer an environment where young women can stay away from the implicit biases that typical computer science programs often carry. Second, they train women not just to become engineers but to become well-rounded workers and leaders who will benefit the skewed world of the tech industry.


Public Sociology: Podcast as an Effective Medium

In the previous blog post, We are All Public Scholars Now, I argued that most sociologists agree that doing public sociology is desirable, and that the Internet has significantly lowered the barriers to entry for disseminating scholarly work and voicing expert opinion. I also raised various issues associated with Twitter as a platform for engaging with the publics: Twitter offers instantaneous access to public debates, but scholars can also get drawn into polarized debates because of network effects. In this blog post, I would like to focus on another platform where sociologists can engage in public sociology: the podcast.

The five main ways that I have seen scholars engage with the public, and disseminate their work are as follows:

1. Public lectures

2. Traditional media (newspaper, talk shows, popular books)

3. Blog

4. Social Media (Tweet, Facebook, etc.)

5. Podcast.

Among those five categories, the first two are “conventional.” Scholars have given public lectures and talked to traditional media since the inception of the university as an institution. Increasingly, I have seen scholars write popular rather than academic books to engage well-read audiences who are not necessarily academically oriented. Even when writing scholarly books, they try to eliminate academic phrasing such as “As XYZ writes” or “XYZ argues”: rigid academic language that does not flow in normal conversation. For example, Richard Ocejo, in his latest book Masters of Craft, tried to “break the frame of writing academically” and avoid “the academic shorthand.” These practices challenged him “to explain our concepts in other language and not rely on what we take for granted” (Scholars’ Conversation: Richard Ocejo). The line between scholarly writing and popular or creative nonfiction writing is increasingly blurred. This shows that scholars have incorporated the idea of public-facing sociology into their knowledge production process.

The last three categories emerged only with the rise of the Internet. Blogging has been a popular way to engage with the blogosphere, and increasingly scholars use Twitter to disseminate their work. Because scholars have long used the first two routes to engage with their various publics, established protocols govern them: a scholar needs credentials establishing expertise in a certain field, and various gatekeepers, such as TV managers, anchors, and other network personnel, can facilitate or prevent a scholar’s dissemination of their work and engagement with the public. With the rise of the Internet, and the decreasing barriers to entry on platforms such as blogs, social media, and podcasts, scholars can now disseminate their work quickly and cost-effectively. They can avoid the middleman problem and stay away entirely from institutional gatekeepers who might not want them to voice their opinions.


Sociologists Arlene Stein and Jessie Daniels, in their book Going Public, give timely advice to social scientists about how to become more of a public scholar, engaging different publics using digital technologies. In the book, they emphasize that writing concisely, clearly, and without jargon is the foremost requirement for a scientist engaging with the public. They then explore various digital technologies and how they change the way scholars work. They give detailed descriptions of how to start and maintain a scholarly blog or a Twitter account, and discuss whether Facebook is a problematic platform for a scholarly presence. However, they do not mention podcasts at all as a tool for communicating with a wider audience. This omission is telling. Social scientists have written blog posts and op-eds for the New York Times, yet few of them maintain a podcast through which they could communicate directly with a wider audience. Why don’t we embrace this medium?

One reason is the time cost of maintaining a well-run podcast. It takes a lot of time and work to create and sustain one. Even though social scientists have not spoken out about burnout as much as physicians have, they are nevertheless burdened with administrative work on top of heavy teaching loads and research. Maintain a podcast? No, thank you very much. Just the idea of starting a podcast, building an audience, and keeping it running well is overwhelming for any busy scholar.

Still, there are some good sociology podcasts out there, such as The Annex and Thinking Allowed. Both are exceptionally well run by veteran sociologists in the English-speaking world; I wrote a review of the two podcasts on this blog about a year ago, and one can find it here. SozioPod from Germany also does a great job of bridging the communication gap between social scientists and the public on social issues in the German-speaking world. These examples show that beyond cost-benefit considerations, there must be another reason podcasting is a difficult market for social scientists to crack. It could be inherent in the graduate training we all receive.

When writing a blog post, a New York Times piece, or a popular book, or simply tweeting, the main skill one needs is effective written communication. In contrast, podcasting requires a completely different skill set: storytelling, conversational rapport with the host, some humor. In other words, the podcasting repertoire is completely different from the scholarly repertoire. To become a good scholar, one mostly needs to think and write. Even though teaching is part of a professor’s work, it is not the main criterion on which one is evaluated as a scholar. The image of a socially awkward professor still comes to mind when one thinks of a serious scholar. Most of us are introverts who, as children, read more than we played outside with our peers. During graduate training, we become even more introverted because of the solitary nature of our work. Podcasting requires more than knowledge: one needs to step outside one’s comfortable introvert zone to talk to an audience and maintain a long-term connection with it.

In a nutshell, one does not get trained in grad school to become a good podcaster. This explains why most social scientists have chosen to become public scholars on platforms such as blogs and social media: there is a “skill consonance” between being a scholar and maneuvering those platforms, while there is a “skill dissonance” between being a scholar and being a good podcaster. Therefore, even though the cost of entering the podcasting world has dropped significantly, scholars have not yet moved into this space in large numbers to engage wider audiences.




We are All Public Scholars Now

Sociologists have been talking about doing public sociology for almost two decades. Since Michael Burawoy’s 2004 ASA Presidential Address, they have been trying various ways to engage the public (broadly defined). Burawoy states that “the challenge of public sociology is to engage multiple publics in multiple ways.” In other words, we have different publics to engage. One target audience is the scientific community at large: we have to do scientific work that is theoretically rigorous and empirically rich, and whose findings can withstand the test of falsification. Another target audience is the community in which we do research. More often than not, sociologists give voice to underprivileged, disadvantaged groups that are difficult to reach; our research must first and foremost benefit them. A third group is policy makers, who might listen to what we have to say about their work and how to strengthen it with our tools. And lastly there is the general public.

Most sociologists are convinced that we should communicate with people outside of academia. The rise of fake news, increasing attacks against public intellectuals from far-right activists, and America’s general anti-intellectual culture urgently call for sociologists’ involvement in public debates. In the decade and a half since Burawoy’s address, sociologists have been taking up the call. Some use blogs as a platform to engage with the wider audience. Some tweet. Some talk to the mainstream media. Some write popular books instead of academic books. The goal is to reach as wide and as far as technology allows. In this particular blog post, I would like to address a few issues that arise when professional sociologists attempt to engage with the wider public via digital tools such as social media and blogging. In other words, I am taking up the question of whether digital technology has enabled sociologists to become better public intellectuals. What are the advantages and disadvantages? What should one be aware of when using various digital tools to popularize one’s own opinions and scholarly work?

Before going into the topic of social media and scholarly work, I would like to address the different approaches toward social media within the academe. Academia is an established institution where different generations of scholars do research, teach generations of students, and train new scholars. Because of this heterogeneity, the reception of digital media has varied, especially when it comes to using digital media to disseminate one’s own work. In general, there are four groups: (1) digital natives, (2) digital embracers (or early adopters), (3) digital opportunists, and (4) digital rejecters. These four categories constitute a spectrum: everyone can be placed somewhere between a digital native and a digital rejecter.

Digital natives are those who came of age during the digital revolution. Most people born around the birth of the World Wide Web in 1991 are considered digital natives. They take for granted that they can find the answer to almost anything on the Internet, and they spend much of their childhood and adulthood learning how to take advantage of the Internet and contributing to its hegemony. This generation also came of age not questioning much about their privacy online while sharing their personal data via Snapchat, Facebook, etc. Almost everyone has a Facebook, Instagram, or Snapchat account. They freely share their personal experiences on the Internet.

Digital embracers, or early adopters, are those who came of age before the World Wide Web revolution. Since they were early adopters of technology, they understand the advantages of the Internet and seamlessly incorporate digital media and technology into their work and daily life. However, because they came of age before the Internet revolution, they experienced what it meant to have privacy and to separate Internet life from personal life. In other words, they do not necessarily share everything on the Internet, from their debutante ball pictures to their baby shower. The Internet might be a place to share work, but not life. There is a separation between work and life, and between real life and the Internet.

Digital opportunists are those who are not frequent Internet users, who go in and out of the Internet as they see fit. In other words, their relationship with the Internet is instrumental: they only use it when they need it. They don’t really contribute to digital culture or help it spread.

Digital rejecters are those who reject the Internet. In other words, individuals in this category refuse to acknowledge the advantages provided by the Internet, or they insist that without the Internet they could do their work and run their lives as usual.

Sociologists who fall under the first three categories might use the Internet to engage with the public. Many have written blogs to bring sociological reasoning and methods to the blogosphere. One of the most successful sociologists using this method is Philip Cohen of the University of Maryland. His blog Family Inequality is widely circulated and respected. There are other blogs run and maintained by sociologists that also enjoy a wide readership. I myself use my blog as a place to jot down my thoughts, think about social problems that I experience, and brainstorm research ideas. In other words, I think out loud via blogging. As a young scholar whose professional identity is not yet formed, I feel my engagement with readers in the blogosphere is rather limited because I have not yet claimed expertise in any particular sub-field.

Recently I have seen more and more sociologists use Twitter as a platform to disseminate their work and engage with the public. In one professionalization lecture on writing articles that I attended, the speaker encouraged everyone to use Twitter and let the world know when their article is published. Her advice stopped at article dissemination. Other senior colleagues who are more adept at tweeting suggest that I engage in public discussions on Twitter and connect with other scholars on the platform. Many have embraced its effectiveness in creating public discourse around a social issue, and how quickly one’s tweet might get the attention of the entire community and the Internet. However, others have raised the issue of being attacked by the far-right online when discussing controversial topics such as white supremacy.

Being a public sociologist on Twitter is very different from being a public sociologist on a blog. The blog format is rather static, and in the blogosphere writers and readers communicate in a more traditional way. On Twitter, by contrast, information is disseminated quickly, and attacks come quickly too.

The four types of scholars have different relationships with the Internet, particularly when it comes to privacy. I treat leisure activity and family information as private information, and work as public information for public sociologists. Given these proxies, I came up with the following table, which summarizes how the different groups treat their private lives and their work online.

                      Work             Leisure (family)
Digital Native        Online           Online
Digital Embracer      Online           Partly online
Digital Opportunist   Partly online    Partly online or nothing online
Digital Rejecter      Nothing online   Nothing online

Clearly, digital natives, having been raised with the Internet, put much of their information online before they became public sociologists. That means they had established an online identity before the Internet became the place where they disseminate their work. For them, therefore, the Internet contains a mixed bag of personal and public information. For example, most people born after 1991 have a Facebook page with pictures from their college years, partying at frat houses with their buddies. A decade later, they become young assistant professors. Students can Google their names and find their college partying pictures. How would students take their public sociology about sexual assault on campus seriously after seeing that they, too, participated in that culture in college? The Internet has the potential to undermine one’s credentials.

Digital embracers seem able to separate the two spheres more clearly. They experienced a world without the Internet and know what it means not to put too much personal information online. I have met various digital embracers who strictly use the Internet for their professional identity. From Google alone, nobody can tell whether they are married or have children.

Similarly digital opportunists only put enough information on the Internet so long as it benefits their professional work. In other words, they are selective in choosing the type of information to put out for the public.

Finally digital rejecters do not want to have any of their information being circulated online.

Twitter makes the dissemination of scholarly work even more complicated now that journalists scout Twitter for information. Under the current pressure, when most newspapers no longer make money, fewer journalists actually go into the field, and more of them go to the Internet. Sociologist Angèle Christin, in her 2017 article “Algorithms in Practice,” shows that journalists are required to use predictive analytics software to see how their articles fare with online readers. This creates a situation where journalists are subject to the whims of social media readership while writing their articles. One now sees journalists go directly to Twitter for politicians’, experts’, and celebrities’ opinions on a matter rather than calling them up and asking critical questions about the issue at hand. Taking someone’s tweet at face value, and sometimes out of context, can be dangerous. For one, a tweet going viral can amplify a rather trivial point. Second, whether a tweet goes viral is a function of Twitter’s algorithm and Twitter’s public. No one knows what Twitter’s algorithm prefers. But Facebook’s scandal regarding the 2016 election is telling: it is clear that Facebook’s algorithm prefers sentimental posts and downplays posts that deal with important social issues. Social scientists studying social media and society have learned that these platforms can have a polarizing effect on the American populace. This fact has implications for the scientist who wants to disseminate their work on Twitter. If Twitter prefers to spread sentimental tweets, and if the scientist wants to reach a wider audience, they might choose to write more sentimental tweets. This creates a vicious circle in which the scientist is caught up in contentious debates that are not necessarily productive for the scientific community or the public.

Social scientists who disseminate their scholarly work on Twitter should be mindful of two aspects: the social network effect and Twitter’s algorithm. The social network effect as a concept is rather ambiguous. Some define it to mean that one’s behavior can be predicted if one’s friends’ behaviors are known (What is the Social Network Effect? – YouTube). Twitter uses this principle in designing its algorithm, which might predict whether you want to read certain kinds of tweets, or disseminate certain kinds of information, based on your prior tweeting behavior and your network information.
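To make the idea concrete, here is a minimal, hypothetical sketch of this kind of network-based prediction: a user’s behavior is guessed from a simple majority vote over their friends’ known behaviors. The function name and data are invented for illustration only; real recommendation algorithms are, of course, far more complex than this.

```python
# Toy illustration of the "social network effect": predict a user's
# behavior from the observed behaviors of their friends via majority vote.
# This is a hypothetical sketch, NOT Twitter's actual algorithm.

def predict_engagement(friends_engaged):
    """Predict whether a user will engage with a topic, given a list of
    booleans recording whether each of their friends engaged with it."""
    if not friends_engaged:
        return False  # no network information: default to no engagement
    # Predict engagement if a strict majority of friends engaged.
    return sum(friends_engaged) > len(friends_engaged) / 2

# A user whose friends mostly engaged is predicted to engage too.
print(predict_engagement([True, True, False, True]))  # True
print(predict_engagement([False, False, True]))       # False
```

Even this toy version shows why the approach is double-edged: the prediction depends entirely on who your friends are, so a homogeneous network keeps feeding you more of the same.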

In conclusion, social media and the Internet have made it easier to engage in public sociology. However, how one chooses to engage with one’s audience is subject to various factors, including the type of platform and the topic at hand. As a digital native, I am more concerned with how to draw a line between public and private information than with how to disseminate my work. In other words, the challenge is not whether to take advantage of the Internet to become a public sociologist, but how to transition from using the Internet for play to using it for work.


Why do Advisors Mentor Graduate Students at All?

One of the enduring puzzles of graduate school in the social sciences is why it is so unstructured that graduate students feel lost all the time. On the one hand, there are many brilliant minds concentrated in a small institution (the university). On the other hand, these brilliant minds cannot get through a graduate program in a timely manner, despite having a host of other brilliant minds who are supposed to be their supporters and advocates.

One answer to this puzzle is that advisors do not do a good job of mentoring their students and showing them how to graduate on time with flying colors. Students are thus left alone to figure out how to become members of the academe. In the process, they accumulate stress, debt, and the frustration of unrealized dreams.

Many critics of graduate school training have written about this phenomenon, both describing it and prescribing solutions. Karen Kelsky, in her book The Professor Is In, decries that as a community, scholars, advisors, and professors fail to mentor the next generation. She therefore came up with a market-oriented solution: creating her own for-profit consulting business to guide novice scholars (graduate students) through the process. Karen Kelsky’s example demonstrates the academe’s failure to build an accountable system whereby advisors are required to guide their students. As of now, the student-advisor relationship is relatively informal: the student can ask for the advisor’s help and consultation as the advisor sees fit. There is no formal mechanism stating that the advisor must do such and such, or else their salary will be docked. In a sense, one can blame the tenure system for facilitating this phenomenon, whereby students in graduate school have to do most of the work while advisors can do whatever they see fit with their scholarly agenda, administrative ambitions, and personal lives. Despite the success of Karen Kelsky’s business, I am not entirely sure that market solutions can solve this issue.

This mentor-mentee expectation mismatch is not unique to the social sciences; other fields experience it as well. For example, Philip Guo, an Assistant Professor in Cognitive Science, published an eBook after his PhD called The Ph.D. Grind, documenting his doctoral experience in computer science at Stanford. One would assume that since he was at one of the best computer science departments in the world, came in with funding, and Stanford is one of the most endowed universities in the world, his experience would be less grueling and less frustrating than most. However, Guo shows that even though he had a head start in the PhD game, he became frustrated with the isolating experience of grad school. It became even more frustrating when the advisor-advisee relationship did not work out.

Most of Guo’s experience was trial and error, where his research ideas did not work out or his chosen advisor would not help him progress in his program. However, as he matured as a scholar, he also started to experience the joy of being a researcher who had ownership of his innovative ideas and conducted ground-breaking research. One of those moments happened at Microsoft Research in Seattle, where he worked as a summer intern with more senior scholars. The difference between Microsoft and Stanford was that at Microsoft he worked with a mentor who checked on his progress on a weekly basis. This experience single-handedly changed his view of research. At the end of the day, one needs to put one’s own labor into the research product, and there must be a feedback loop to register one’s progress and see whether the eventual goal will be reached. At Stanford, where everyone, grad students and professors alike, worked in silos, he did not feel any progress. At Microsoft Research, he felt there was a formal structure between mentor and mentee, a binding social contract to contribute to a research project. That was the moment he recognized how one could see the end of an open-ended research project, and how to work with a hands-on advisor.

There are two points about Guo’s advisor-advisee experience at Microsoft Research, in comparison to that at Stanford, that one should pay attention to. One is the changing structure of knowledge production in the United States (Kleinman & Vallas, 2001). Guo’s experience shows that industry has adopted the academic model of knowledge production. That is, companies give knowledge workers (researchers) flexibility to work on projects they are compelled to do, without the service requirements that come with tenure. In the meantime, universities have adopted corporate practices in measuring the output of knowledge workers (professors and graduate students). This changing structure suggests that researchers in industry increasingly experience flexibility, while researchers in academia increasingly experience constraints and market pressures. From the standpoint of work/life balance and work satisfaction, researchers in industry seem to have gained, while researchers in academia have experienced more control and stress. The advisor-advisee relationship in industry can thus work out well because people there are, in general, less stressed, less concerned with administrative tasks, and more focused on their research, so more productive and meaningful relationships can be formed and sustained.

The debate about the advisor-advisee relationship often asks why advisors don’t do more for their students. This question presupposes that it is in the advisor’s interest to mentor and guide their students well. But what if the incentive structure works against students’ interests? That is, what if there is no incentive whatsoever for advisors to mentor their students? The question then becomes: why should any advisor take the time to mentor their students at all? When one thinks about the tenure system, one sees that there is absolutely no incentive to spend time mentoring graduate students. It is not part of one’s tenure package. As a scholar, one is evaluated only on how much one has published. Knowledge output matters more than who was involved in the process; the advisor-advisee relationship is not part of the equation.

Jeffrey Sallaz’s recent article on the labor process in a post-Fordist labor regime is very illuminating in explaining why advisors in graduate school tend not to mentor their students.


Basically, he shows that in the Fordist regime, workers enjoyed responsible autonomy, experiencing work through the learning game and the reward game: workers learned what was expected of them in the job, and if they fulfilled those expectations, they were rewarded. In the current, post-Fordist regime, workers experience only the learning game, without the reward. Workers constantly learn new things but get no reward even if they do the job well or master the rules of the game. This comparison is illuminating if one conceptualizes a professor as a worker in a regime of permanent pedagogy. Professors constantly have to work with new students, some of whom have amazing ideas and some of whom fail to produce any interesting research. Yet professors are required to sign onto a social contract that says they should advise these students without receiving any concrete reward. Logically, they choose to minimize their mentoring time in order to move on to the next student or to focus on their own research projects. Helping a novice researcher with lots of questions yields no reward in the short run. In other words, it is very much down to the advisor’s good nature to mentor a student, because there is no requirement that they be a good mentor. On the contrary, the system is built such that they should spend as little time mentoring students as possible, because they will not reap any tangible rewards.

Do I have any solutions to the advisor-advisee problem in graduate school? Somewhat. One is a restructuring of academia so that professors do not have to overdo service and can focus on being good academics, which means producing good research and training the next generation. I believe that if professors were not overworked and stressed out by excessive teaching and service requirements, they would have more time for their graduate students. The other solution is an incentive system built into each program to facilitate student-faculty collaboration. This would improve the graduate school experience for students while giving advisors the necessary incentives to spend more time mentoring the next generation of scholars.

Post-Fordist Work Floor

One of the great things about preparing for my orals exam is reading sociological literature that I would never have read if it were left to me to decide what is interesting. Every once in a while, I stumble upon a research paper whose analysis and conclusions ring true to my ears. “The Transformation of Work Revisited” by Steve Vallas and John Beck is one of those papers.

They study the organization of work in relation to technological change. The authors challenge flexibility theory as it applies to manual workers. The theory suggests that in the new era of technological development, manual workers are required to have a higher level of skill; that there should be “an expansion of craft discretion, presaging a synthesis of mental and manual functions within the automated plant;” and that there should be a shift from bureaucratic control to organizational commitment as the principle undergirding the new structure of work.

To test whether these claims hold, they conducted research at four paper mills across the country. What they found is very illuminating. In contrast to what post-Fordist theorists predict about the empowering effects of automation on manual laborers, Vallas and Beck found that Fordist principles stubbornly remain the organizing principles of work in paper mills. Workers are not really empowered; they experience a higher level of control by managers and engineers.

The paragraph that rings true the most to me is the following:

Standardization of decisions. A further way in which shopfloor life has been reconfigured centers on the ways in which analytic functions and decision-making powers have been distributed. Recall that post-Fordist theory expects the process of work restructuring to reallocate a portion of these tasks downward, blurring or even transcending the traditional division between mental and manual labor. We find little evidence of such a trend. Instead, our research indicates that the dominant tendency has involved a pattern of tightened constraints upon manual workers’ judgement rather than the “relaxation of constraints” that flexibility theory foresees. (p. 352)

Once a new technology is introduced to the workplace, work relations are reconfigured. In the case study, power falls into the hands of those who know how to operate the new machines. There is no trickle-down effect in which manual workers start to take charge of the machine. Instead, they experience more constraints, and their knowledge becomes less valuable.

This analysis is similar to what we hear nowadays about how computers frustrate doctors, whose authority is threatened by programmers who know little about curing patients but teach them how to operate information machines. The following excerpt from the New Yorker’s “Why Doctors Hate Their Computers” captures this sentiment well:

On a sunny afternoon in May, 2015, I joined a dozen other surgeons at a downtown Boston office building to begin sixteen hours of mandatory computer training. We sat in three rows, each of us parked behind a desktop computer. In one month, our daily routines would come to depend upon mastery of Epic, the new medical software system on the screens in front of us. … Our trainer looked younger than any of us, maybe a few years out of college, with an early-Justin Bieber wave cut, a blue button-down shirt, and chinos. Gazing out at his sullen audience, he seemed unperturbed. I learned during the next few sessions that each instructor had developed his or her own way of dealing with the hostile rabble. One was encouraging and parental, another unsmiling and efficient. Justin Bieber took the driver’s-ed approach: You don’t want to be here; I don’t want to be here; let’s just make the best of it.

Physicians are leaving their profession because of burnout, computerization, and the McDonaldization of care. Their prestige and authority are constantly threatened by new technological innovations. Flexibility theory would predict that technology empowers doctors, improving patient outcomes as they take advantage of the inevitable changes. However, life is messy, and technological advancement has not been a smooth process. Many digital adoptions in the health care industry have proven provincial and created more bureaucracy. For example, each hospital buys a different Electronic Health Record (EHR) system. If you have dental work done at a dentist’s office, some of your health information lives in their system, but it might never be merged into the electronic health record at your general practitioner’s office. This proliferation of systems has created a digital nightmare for doctors instead of an easy way to document their patients’ health.

Again, Vallas and Beck’s paper is a classic in the sense that it asks an important question: how does work organization change when a new technology is introduced to a workplace? The question is more pertinent today than ever because of the speed at which technology has been changing. How will AI change work relations in firms big and small? Where can one study this change? Should companies have in-house AI experts, or should they contract such services out to another company? These are important financial and political questions that every company has to deal with in an age of data abundance.

Boys in White: Physicians Then and Now

The most fulfilling aspect of preparing for the oral exams (the second qualifying exams in my PhD) is going back in time and reading the most important sociological works that have influenced generations of researchers. The book that I am reading now is Boys in White by Howard Becker and colleagues. It is about medical students and their becoming doctors: that is, their training in medical school. It uses sociological and social-psychological theory to examine student culture and how medical students change during medical school. The book was partly motivated by the desire to reform medical education in the middle of the last century; it was reported that medical interns in hospitals were overworked. The authors were right to study how medical students collectively think about and experience their education: in order to reform the institution, their experience and perspective should be taken into account. In a nutshell, this book is about medical students and their culture in the institutional context of medical school.

There are many takeaways from the book. The most important one for me is that the authors used the concept of “perspective,” operationalized as a way to understand students’ experiences during their training. Students exhibited different perspectives at different stages of training. Most were highly motivated: the training took a long time, but at the end they would be initiated into one of the highest-status professions in the United States. In other words, the profession has long been associated with prestige. The authors showed how students internalized this prestige and created boundaries between themselves, as doctors in training, and related professions such as technicians and researchers.

While reading the book, I kept thinking about the contemporary issues that medical doctors have to deal with. For one, the health care industry in the United States is undergoing serious crises. The insurance companies are almost all private, and many people don’t have insurance to visit doctors. Doctors do not spend much time with patients, because their time is structured to secure their paychecks from insurance companies rather than the quality of care their patients receive. Another issue is that, in the effort to digitize patients’ health records, doctors now spend more time dealing with computers than with real patients. Furthermore, most EHRs (electronic health records) are fragmented because different companies provide the services that host these records. In terms of health data, it is a hot mess that doctors are required to participate in for the common good of their profession, yet the efficiency and transparency of their work are monitored, and oftentimes monetized, by other institutional actors.

A recent article in the New Yorker titled “Why Doctors Hate Their Computers” by Atul Gawande cites a study:

A 2016 study found that physicians spent about two hours doing computer work for every hour spent face to face with a patient—whatever the brand of medical software.

That is, they spent twice as much time documenting patients’ health conditions as actually working with patients to improve their health. It is not the doctors’ fault, according to the article, but the fault of the computer systems that many hospitals have installed to keep up with technological advancement in the field. In many ways, doctors are now required to do “secretarial work,” documenting their patients’ health records, which is not relevant to their medical practice and which they were not trained to do in medical school. Many would like to believe that technology makes one more productive. In this case, technology eats doctors’ time and could potentially create mental blocks, burnout, and other work-related health issues.

Physicians in the United States are leaving their profession because of high levels of burnout. In a recent New York Times op-ed, Dr. Siddhartha Mukherjee quotes another study:

In one study, 42 percent of doctors reported feeling burned out. The worst affected were obstetricians, internists and intensivists — doctors in subspecialties that require work in emergency-oriented conditions, and those who confront frequent lawsuits, and those who require constant documentation, surveillance and billing. The least affected were plastic surgeons and ophthalmologists — doctors who inhabit procedure- or skill-oriented domains. The most common reasons listed for burning out were the overwhelming strains of bureaucracy and paperwork, the vast quantity of time spent at work and a lack of respect from administrators and employers. Lack of adequate compensation was fifth on the list. Among doctors, too, it seemed, resilience and survivorship tracked along the same essential dimensions: meaning, mastery and autonomy.

Again, the amount of paperwork and bureaucracy created by both technology and insurance companies is driving doctors to the edge. As in other industries, the ballooning of a management class whose interests differ from the doctors’ also causes a huge problem. In many ways, I see similarities between medical doctors and college professors, whose jobs are inherently people-oriented, yet administrators and managers push them to do extra bureaucratic work that contributes little to the quality of their work but helps to sustain the institution, increase its prestige, and preserve its interests. In so many ways, I can see these two professions losing their prestige, power, and voice, and oftentimes suffering from stupid decisions collectively made by their managers.

Another issue that I think most severely affects young physicians is school debt. Many of them leave medical school with up to half a million dollars in debt. Part of this ballooning debt has to do with the process of financialization of everything in the US. Physicians, like lawyers, now have to choose their specialties based on the potential income each specialty promises to pay them in the future once debt is taken into account. Circling back to Becker and his colleagues’ work, I wonder how this quintessentially late-capitalist feature of professional education has affected physicians’ lives and their perspective.

On the one hand, I see the beginning of the erosion of the medical profession's prestige: medical school has become more of a financial decision than a calling. On the other hand, I can see the promise of high returns to the career choice once the digital health revolution matures. At this point, I am eager to see both the transformations and the setbacks that technological innovations and financial instruments bring to the profession.

Book Review: 10 Arguments for Deleting Your Social Media Accounts by Jaron Lanier

After the 2016 presidential election in the United States and the Cambridge Analytica scandal in 2018, many people started leaving social media, including Facebook, Twitter and Instagram, en masse. However, social media culture, like hook-up culture on campus, affects everyone regardless of whether they opt in or not. I myself have thought about quitting social media many times, yet I never successfully made the transition. I simply have too many accounts, and my life is too reliant on social media. My web of accounts is so entangled with my life that if I deleted them all, I would feel empty; I am afraid of that void. While looking for ways to rationalize the decision to be less connected in this networked world, I picked up Ten Arguments for Deleting Your Social Media Accounts by Jaron Lanier to learn how he justifies deleting all social media accounts.

In a nutshell, the book “argues in ten ways that what has become suddenly normal – pervasive surveillance and constant, subtle manipulation – is unethical, cruel, dangerous, and inhumane.” In other words, Lanier suggests that the system of social media accounts, taken as a whole, has become “unethical, cruel, dangerous and inhumane.” Therefore, one should not participate in it or support its existence and reproduction.

The ten arguments are summarized on the back cover as follows:

Argument one:  You are losing your free will.

Argument two: Quitting social media is the most finely targeted way to resist the insanity of our times.

Argument three: Social media is making you into an asshole.

Argument four: Social media is undermining truth.

Argument five: Social media is making what you say meaningless.

Argument six: Social media is destroying your capacity for empathy.

Argument seven: Social media is making you unhappy.

Argument eight: Social media doesn’t want you to have economic dignity.

Argument nine: Social media is making politics impossible.

Argument ten: Social media hates your soul.

He works through the arguments one by one using the term BUMMER, which stands for “Behaviors of Users Modified, and Made into an Empire for Rent.” It is “a machine, a statistical machine that lives in the computing clouds.” There are two main parts to his definition of BUMMER: the modification of users’ behaviors, and rent-seeking.

How do social media companies modify users’ behaviors? This question leads Lanier to give a brief overview of what behaviorism is and how this approach became so influential in social media companies. In brief, behaviorism is a scientific movement that studies ways to train animals and humans. It arose before computers. Behaviorists focus on the environment in which certain behaviors are produced and reproduced. The implication is that when the environment changes, the behavior changes too.

What is rent-seeking? This economic term describes activity that increases one’s share of existing wealth without creating new wealth. Such behavior can harm the economy because it allocates available resources poorly.

In many ways, social media companies seek rent by offering a free platform for users to exchange information while altering their behaviors via algorithmic manipulation. Since users’ behaviors can be manipulated via these platforms, they can also be manipulated by other actors such as their social networks, bots, or foreign intelligence agencies during elections. The one sharing place on the Internet that Lanier believes has not been colonized by corporate interests is podcasting. I share his view, and have blogged about the democratizing effect of podcasting, where individual broadcasters can reach their audience directly instead of going through distribution channels known to be biased and dominated by a certain group of people. Lanier suggests that it is possible to corrupt the podcast space, but that, given current technology, it is very difficult.

I buy into many of the arguments Lanier brings up to convince each individual to quit social media. From a sociological point of view, Lanier is setting up a system of arguments to show the detrimental effects of social media on each individual in a society. It is also harmful to society at large when each individual is easily manipulated.

On the macro level, Lanier is right that social media as a whole has done more harm than good to society. Yet, on a personal level, I feel conflicted about deleting even one account at a time. For example, I belong to the Facebook generation. Everyone keeps in touch with their friends (childhood friends, college friends, backpacking friends, etc.) on Facebook. It is a casual place to strike up a conversation. If I closed Facebook permanently, I wouldn’t know what my friends are up to, and keeping in touch with them would be more difficult. Even my parents follow me on Facebook to get a glimpse of what I do. My Twitter account, meanwhile, is used explicitly for academic purposes, such as following eminent public sociologists whose ideas and insights are relevant to my work. If I got rid of this channel, I would feel as if I no longer knew what my field is talking about. The fear of missing out is taking over my thought processes. And should I trust Lanier at all, given that he never had a social media account to begin with?

As a scientist, I find that the book falls short because it presents only a rough sketch of ten arguments without much substantial evidence. Call me dogmatic if you will, but I would prefer rigorous research to tease out each of the ten arguments that Lanier makes. He presents many theories, hypotheses, insider information, and sometimes good stories. These hypotheses could be tested in the real world. For example, Argument Five states that “social media is making what you say meaningless.” The logic is that when everyone can broadcast their own opinion, the meaning of what any one person says decreases significantly. From a neoclassical economic point of view, this makes sense: when the supply of words and messages increases, their price (here, meaning) should fall. But how can I see this in real life? Is there a way to quantify meaning? How do I know that social media is the main factor causing the quality of the conversations and messages I broadcast to decrease? Or is this a general trend in an info-glut society, with social media just one of many tools that inundate each individual with information? There are too many confounding factors to make a conclusive statement about the effects of social media on meaning. That said, I still agree with Lanier that social media plays a decisive role in eroding real and meaningful conversations.

In conclusion, the little book Ten Arguments for Deleting Your Social Media Accounts gives social media users many reasons to leave these platforms, at least for a brief period of time. As an insider, a computer scientist, and someone who cares about the effects of digital technology on society, Lanier offers many insights worth appreciating. As a social scientist, I think this book contains many valuable hypotheses to test. That is to say, one can use this book as a guide to an extensive research agenda examining the effects of social media on society.

Book Review: Bad Blood by John Carreyrou

About two months ago, the news media were full of the Theranos scandal, in which the one-time unicorn startup had to shut down and liquidate because its founder, Elizabeth Holmes, was indicted for wire fraud and conspiracy. Theranos was a health technology company that tried to “disrupt” the health care industry by designing a blood-testing device that used only a small amount of blood. This failure challenged a promise that Silicon Valley tech startups have been making all along: technology can solve many things, fast. It raised many questions about Silicon Valley’s fake-it-till-you-make-it, play-fast-and-loose culture. A couple of weeks ago, while I was attempting to learn about AI technology and its social and political implications, a social scientist mentioned to me that she was listening to an audiobook called Bad Blood to get a more nuanced understanding of the scandal. The book piqued my interest. I immediately requested it, gave it a read, and now feel that I have a better understanding of how Silicon Valley works.

What is Bad Blood about?

Originally, I thought this work of nonfiction would follow the traditional path of investigative journalism, asking questions such as: who is Elizabeth Holmes, how did she come up with her startup idea, how did she make it a unicorn, and why and how did it fail? In many ways, it is a case study of how Silicon Valley created one of its sickest unicorns, one that meddled with the health care industry. On a closer read, I found my original expectation naïve. The book is about more than how Elizabeth Holmes rose to stardom and descended rapidly. It is detective work in which the author himself was involved in bringing a tech darling down from her unrealistic dream by exposing her lies and delusions. The book portrays Elizabeth Holmes as the charming, intelligent antagonist, and John Carreyrou, the WSJ journalist and author of the book, as the protagonist, the detective who does not appear until very late in the story. Yet his journalistic instinct helped him paint an accurate picture of Theranos, its embarrassingly mediocre technology, its sloppy management, and the delusional culture of Silicon Valley. The shift in narrative from Elizabeth Holmes and the people around her to how the journalist put the case together makes the story much more convincing.

The villain of the book is Elizabeth Holmes, the charming blond startup founder of Theranos. She dropped out of Stanford University in 2003 to found a company that promised a small device that could perform an array of blood tests using only one drop of blood. This promise was tempting to many investors and corporate partners, including a large swath of respectable venture capitalists, Walgreens, and at some point even the United States Defense Department. The book is structured chronologically. In many ways it is the biography of a company, although Elizabeth Holmes is a big part of it: she gave shape, form, contour, and character to her startup. A significant amount of space in the book is therefore dedicated to her complex character. The underlying question around Holmes is: why did she create what she created, and why did she lie the way she lied to everyone?

In the process of answering this question, the author portrays the other characters around her, and uses their words to portray Holmes. He is like a sculptor who gives contour and dimension to each character and shows the nature of the relationships between them. What is the most striking feature of this person? How can one bring to the reader’s mind a three-dimensional portrayal of this character? He paints Elizabeth as a horrible person, yet very smart, ruthless, and controlling. Here is Holmes from the point of view of an ex-employee and friend, describing how Elizabeth was falling under her boyfriend’s bad influence:

In her relentless drive to be a successful startup founder, she had built a bubble around herself that was cutting her off from reality. And the only person she was letting inside was a terrible influence (her boyfriend). How could her friend [Elizabeth] not see that? (p.80)

Why did Theranos fail? The company was riddled with sloppy corporate governance, bad management, and despotism. Holmes hired her college roommate, her boyfriend, and her incompetent brother as her closest people. She valued loyalty more than competence and expertise. That is no way to run a high-tech company. She might have been overcompensating: she was young and trying to assert her authority over her well-trained, brilliant employees. Her intelligence could not cover for her lack of training in the medical field. This insecurity manifested itself in turning each department into a silo, and in an obsession with trade-secret leaks that led her to surveil her employees.

She had a vision that she genuinely believed in and threw herself into realizing. But in her all-consuming quest to be the second coming of Steve Jobs amid the gold rush of the “unicorn” boom, there came a point when she stopped listening to sound advice and began to cut corners. Her ambition was voracious and it brooked no interference. If there was a collateral damage on her way to riches and fame, so be it (p. 299).

The book really makes the character of Elizabeth Holmes feel like somebody one could spot on the streets of Silicon Valley. She was loved by the press. She was the woman engineer that everybody wanted in the male-dominated world of the tech industry. Of her rise to stardom, the author writes:

Her journal interview had gotten some notice and there had also been a piece in Wired, but there was nothing like a magazine cover to grab people’s attention. Especially when that cover featured an attractive young woman wearing a black turtleneck, dark mascara around her piercing blue eyes, and bright red lipstick next to the catchy headline “THIS CEO IS OUT FOR BLOOD” (p. 208).

John Carreyrou was not afraid to criticize the press, including his own employer, the Wall Street Journal, for buying into Holmes’s promises early and bringing her from the periphery to the center of national attention. This growing publicity helped her raise money and draw powerful political and elite figures closer to her. Her company was valued far higher because of all those PR articles. This story makes clear the process whereby a startup gets more funding via the inflated image the press portrays; the press then gives it even more attention for its successful funding rounds. The startup swells with funding and flowery media images of itself. It is a vicious circle.

One great thing about the book is that the author makes chemical and engineering processes readable for a wide audience. What is high-tech suddenly feels within grasp. For example, when describing how a commercial blood analyzer might introduce errors, Carreyrou writes:

Alan had reservations about the dilution part. The Siemens analyzer already diluted blood samples when it performed its assays. The protocol Daniel and Sam had come up with meant that the blood would be diluted twice, once before it went into the machine and a second time inside it. Any lab director worth his salt knew that the more you tampered with a blood sample, the more room you introduced for error (p. 170).

My favorite aspect of the book is its language: very matter-of-fact. There is no overly flowery language; everything is straight to the point. It is long-form investigative journalism. It is about the truth. There are great sentences for sure, but stylish sentences are not the main focus of the book. Now I understand my teenage obsession with nonfiction: when my English was not great, I preferred reading for information, without having to guess what each symbol meant.

Even though Carreyrou writes in a matter-of-fact tone, his superb writing makes the book read like a movie. Each chapter rolls like a film sequence. He zooms his camera in on certain characters and their stories, then zooms out and reintroduces them later. Every character is presented to show the progression of the rise and fall of Theranos. The book also reads like urgent detective work; a sense of urgency seeps through it. Sometimes it feels like watching a thriller.

As a sociologist, I must say that the book is very sociological. The author paints the complex social network around Holmes and shows how it influenced her decision-making. Theranos’s downfall is a social failure, not just the failure of a female startup founder. Carreyrou argues that Silicon Valley’s fake-it-till-you-make-it culture contributed to it. Theranos was engaging in practices other tech giants had used with their products. For example, Apple, Microsoft, and Oracle were accused of “vaporware,” a term describing:

A new computer software or hardware that was announced with great fanfare only to take years to materialize, if it did at all. It was a reflection of the computer industry’s tendency to play it fast and loose when it came to marketing… Such over promising became a defining feature of Silicon Valley. The harm done to consumers was minor, measured in frustration and deflated expectations (p.296).

Holmes was just following in the footsteps of those she admired:

By positioning Theranos as a tech company in the heart of the Valley, Holmes channeled this fake-it-until-you-make-it culture, and she went to extreme lengths to hide the fakery  (p. 296).

However, she was wrong. She was working at the intersection of the tech industry and the healthcare industry, where no “vaporware” is tolerated because it messes with human life. Computers and human beings are not the same. Legislation is still trying to catch up with computers, but humankind has had thousands of years of dealing with illness. Life is invaluable, not disposable like an Apple computer.

When I first read Chapter 19, “The Tip,” I thought it looked like the appendix of a sociological monograph, and that the end was coming. I was wrong. The chapter was the plot twist: the first-person narrator entered the story. Carreyrou began narrating how he became involved in the project to bring down the unicorn. From then on, the story shifted to how the WSJ published the investigative articles that exposed the truth and alerted regulators to Theranos’s wrongdoing. This is where the hero, the protagonist of the book, is introduced.

While the book does a great job of telling the story of Theranos and showing how journalists can work with regulators and policymakers to bring harmful practices to light, its tight focus on Theranos and Elizabeth Holmes leaves many questions unanswered. For one, what is the fate of Theranos’s various early skeptics? For example, the Fuiszes, whom Theranos sued early on over a patent, saw their family wealth destroyed in an expensive legal battle. John Fuisz left his law firm because of the reputational damage caused by Elizabeth Holmes’s accusations. Carreyrou seems to leave them out of the picture altogether once his mission is accomplished: to reveal Holmes’s ruthless character. The author seems to deploy a lawyer’s tactic, discrediting the defendant’s character by showing the court how badly she has treated people all along. Yet what the others thought and felt after Theranos was liquidated is not discussed at all. The battle is won; there is no point in following up with the other witnesses.

Dropped characters aside, why did Theranos fail? It was the delusion and bad practices of Theranos, the author asserts:

Hyping your product to get funding while concealing your true progress and hoping that reality will eventually catch up to the hype continues to be tolerated in the tech industry. But it’s crucial to bear in mind that Theranos wasn’t a tech company in the traditional sense. It was first and foremost a health-care company. Its product wasn’t software but a medical device that analyzed people’s blood. As Holmes herself liked to point out in media interviews and public appearances at the height of her fame, doctors base 70 percent of their treatment decisions on lab results. They rely on lab equipment to work as advertised. Otherwise, patient health is jeopardized (p.297).

However, is this failure solely Holmes’s responsibility? As a sociologist, I see it as a problem of the tech industry when it tries to disrupt a more established industry, healthcare, where a patient’s life is at risk. This is a more systemic failure of Silicon Valley culture and business practices. The line between faking it and lying about it is often blurred, and companies routinely exploit legal loopholes to disrupt various industries. Holmes lived within the Silicon Valley universe and exercised her agency as she was expected to. She was following the herd, but she picked the wrong battle. The healthcare industry is not the same as the taxi industry or the mass media. One can lie about hardware that does not yet exist, but one cannot lie about whether a technology will harm a patient.

My takeaway after reading the book is that running a startup, or any business, is like running a marathon, not a sprint. One cannot force the race to go faster; it is a long training process in which prior preparation is required for a successful race. Another insight is that the quest to accelerate automation can bring harmful effects. Of course, automation can help with production and many aspects of life. But in healthcare, one must admit that human workers, particularly doctors and lab technicians, are remarkably accurate because of their long training. They have hunches and intuition, which a machine does not. Yes, humans make mistakes, but they also work wonders.


5 Things to Learn From A People’s Guide to AI

Recently I have become increasingly interested in the question “what is AI?” In a context where AI technology is becoming ubiquitous, I think an understanding of AI is important for everyone, including social scientists. While digging through Twitter, Google, and other social media platforms, I found A People’s Guide to AI, written by Mimi Onuoha and Mother Cyborg (Diana Nucera) and published by Allied Media Projects in 2018. As the title suggests, it is a 101 text on AI that does not scare people off with nomenclature and engineering jargon. It is a good starting point for a computer dummy like me. In this blog post, I will outline five important aspects of AI that I learned from the Guide.

First, AI stands for artificial intelligence. As a scientific term, it means the study of intelligent agents: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. In layman’s terms, AI is what we call it when a machine imitates human cognitive skills in problem solving. AI technology is becoming ubiquitous; it infiltrates almost every aspect of our daily lives. It should be thought of as “salt rather than its own food group” (p. 21). The authors created the salt metaphor to suggest that AI is a way of working with data, a method rather than a theory. It is sprinkled everywhere, but it is not a dish in itself. Put differently, “it is a number of different technologies that can be incorporated into almost any digital space to make things work more efficiently” (p. 21). One can think of the voice assistants on one’s phone, or Amazon’s Alexa.

Second, AI is not a new invention of the twenty-first century.

It has existed as a field of research since the early 1900s, and the ideas that inform it go back to centuries. But in recent times, the field has had a huge surge in popularity. Only in the last 10 years that we have seen the availability of huge amounts of information, computers that are powerful enough to make sense of that information, and people who know how to write code that can put both of those things together (p. 22).

Third, what distinguishes the recent development of AI (the 21st-century version) from the AI technology of the 20th century is machine learning, which is responsible for some of the larger innovations in AI. What, then, distinguishes AI from machine learning?

In AI, humans set the terms of the algorithm that a computer will use, while in machine learning the computers construct the algorithms themselves…. Pattern recognition is at the heart of the human brain; it is one of the traits that has been most helpful to us as a species. It’s also a skill that you use every day.  Machine learning programs can successfully recognize patterns and predict what likely comes next (p. 44).
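The distinction in the quote, humans writing the rule versus the program inferring it from examples, can be sketched in a few lines of toy Python. This illustration is my own, not from the Guide; the task and the midpoint heuristic are invented purely for clarity:

```python
# Toy contrast: a hand-coded rule vs. a rule inferred from labeled examples.
# Task: decide whether a number is "large" (label 1) or "small" (label 0).

# Classic AI approach: a human writes the rule directly.
def rule_based(x):
    return 1 if x > 50 else 0

# Machine-learning approach: the threshold is inferred from the data.
def learn_threshold(examples):
    """Pick the midpoint between the largest 0-example and the smallest 1-example."""
    zeros = [x for x, label in examples if label == 0]
    ones = [x for x, label in examples if label == 1]
    return (max(zeros) + min(ones)) / 2

data = [(10, 0), (30, 0), (45, 0), (60, 1), (80, 1), (95, 1)]
threshold = learn_threshold(data)

def learned(x):
    return 1 if x > threshold else 0

print(threshold)    # 52.5 -- inferred from the examples, not written by hand
print(learned(70))  # 1
```

In the first function a human set the terms of the algorithm (the cutoff of 50); in the second, the program constructed its own cutoff from the examples it was given.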

Fourth, one important area of machine learning is deep learning, a subfield of machine learning. Deep learning differs from machine learning in general through the idea of scale: deep learning requires more data so that programs can make more connections and perform more complex operations. In deep learning, what matters is not just that the programs can recognize patterns in data and predict what comes next, but that they get better at doing so while being trained on existing data. Deep learning is about the ability of certain computer programs to improve on their own. The programs seem to learn because they get better at their tasks over time, just as humans do when learning. Like algorithms in general, their goal is to present outputs, not necessarily the processes behind those outputs.
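That idea of a program getting better over time as it is trained can also be shown with a toy sketch (again my own, not the Guide's): a one-parameter model whose prediction error shrinks as it repeatedly adjusts itself against the data.

```python
# Toy sketch of "learning": a one-parameter model y = w * x, trained by
# gradient descent. Its error on the data shrinks as training proceeds.

def error(w, data):
    """Mean squared error of the model y = w * x on (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, steps=200, lr=0.01):
    w = 0.0  # start from a deliberately bad guess
    for _ in range(steps):
        # mean gradient of the squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge w in the direction that reduces the error
    return w

data = [(1, 3), (2, 6), (3, 9)]  # hidden rule: y = 3x

print(error(0.0, data))  # large error before any training
w = train(data)
print(round(w, 2))       # close to 3.0: the rule was inferred from the data
print(error(w, data))    # near zero after training
```

The program is never told the rule y = 3x; it simply gets better at its task over time, which is the sense of "learning" the Guide describes. Deep learning scales this same idea up to millions of parameters and much larger datasets.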

Fifth, some of the most interesting applications of AI are happening in hospitals. Some hospitals are trying to use machine learning to predict whether certain medical emergencies will happen to patients, by matching their medical data and symptoms with those of other patients who have had similar emergencies (p. 48). Another application has to do with artificial body parts. I have seen advertisements for prosthetic limbs everywhere in Europe. AI helps these limbs work more smoothly and powerfully because they can learn and adjust as their users wear them. Such prosthetics can even become smarter than flesh-and-blood limbs.

Despite the various advantages of AI development, one issue demands attention: “algorithmic accountability,” that is, the governance of AI. AI technology, and the understanding of it, are currently available to only a few. It is therefore subject to misuse, such as the spread of fake news during the 2016 US presidential election. More actors, including government officials, social scientists, philosophers, and workers, should be involved in the discussion about the development of AI. Given how rapidly this technology is developing, we have to work together to envision a different future of work, culture, and life.



Soziopod: Sociology Podcast from Germany

In the past, I have reviewed two sociology podcasts from the United States and the United Kingdom, namely The Annex Sociology and Thinking Allowed (Than 2017). Still an avid listener of both, I am constantly learning about new ideas and developments in my field on both sides of the Atlantic. Yet that previous blog post reveals that my consumption of sociological knowledge is very Anglo-American-centric: outside of what is available in English, I have almost never tried to read sociological work written in another language. While in Berlin, I discussed this issue with a good friend, Herrmann Königs, a sociologist in training at Humboldt University in Berlin. He suggested that I listen to a sociology podcast in German called Soziopod. I gave it a listen and was pleasantly surprised by its content, the quality of its debates, and the number of episodes available. This blog post summarizes my overall evaluation of the podcast.


One can find more information about the podcast here. According to Wikipedia, it is dedicated to sociological and philosophical topics and started in 2011. The podcast is, unfortunately, in German, which makes it inaccessible to many. Unlike the two podcasts mentioned above, which focus mainly on sociology and related social sciences, this podcast brings philosophy to the center of all social debates. This element in itself is very refreshing.

The podcast is hosted by Dr. Nils Köbel and Patrick Breitenbach. Dr. Köbel is a trained sociologist of children, youth, and religion, and Patrick Breitenbach is an expert in digital media. They make a good pair of hosts because both are invested in a variety of topics, and as a media expert Breitenbach can translate abstract concepts into layman’s language. Much of the time, the podcast avoids the sociological jargon that only insiders would understand. The purpose of the podcast is to make sociological knowledge accessible to everyone; Dr. Köbel has said that they try to present topics in a generally understandable way, to “bring Sociology to the streets, where it belongs.”

A typical episode lasts around one hour and is structured around a topic such as social inequality, migration, power, right-wing extremism, religion, or the Frankfurt School of social theory. That makes it a wealth of knowledge for anybody interested in social debates in Germany. Every once in a while, the hosts air a special episode in which they discuss an issue with a live audience, who can raise questions directly and sometimes debate with the two hosts. Sometimes they invite experts to comment on certain topics. Since its inception in 2011, the program has produced more than 70 episodes and a few public forums where the audience can interact with the hosts, and the hosts have published a book. This is quite impressive!

After the topic is introduced, the hosts define an important concept or concepts. They then introduce the different social theorists who have written about the topic and elaborate on how these theorists are in conversation with one another. More importantly, the discussions are situated in the context of contemporary Germany, which makes abstract scholarly debates relatable to daily life.

The discussions have a lot of pedagogical value. Over the course of an hour, one can learn many important social theory concepts and find appropriate examples that make sense of each one. The hosts often highlight theoretical concepts invented by German theorists such as Jürgen Habermas, Niklas Luhmann, or Theodor Adorno. I found these discussions fascinating because I have never really read these authors closely, nor used any of their work. What is even more intriguing is that the hosts relate sociological concepts to philosophical ones; in other words, they acknowledge sociology’s foundation in philosophy. When unearthing the genealogy of a particular term, one can often trace it back to a philosopher who wrote about similar topics. This contrasts with my current sociological training in the United States, where the field has moved quite far from philosophy and social theory.

Even though the podcast is a great pedagogical channel, as an American-trained student of sociology I cannot help but point out some of its shortcomings. First, its main topics would fall under the umbrellas of social theory or political sociology in American terms. According to the American Sociological Association’s website, social theory and political sociology are two of its 52 official sections. In other words, the podcast covers a very small fraction of all the sociological topics one could study.

Given its leaning toward social theory and philosophy, most discussions stay at an abstract level. They center on a topic, the relevant sociological concepts, and the different directions one could take to address it. What is barely discussed is empirical evidence that would test whether a theory actually holds on the ground. A typical episode is organized as follows:

  1. Definition of a concept
  2. How to operationalize the concept?
  3. Can one use the concept in a particular context in relation to the given topic?
  4. Who else has talked about the concept and this phenomenon since ancient philosophy?
  5. What else can we learn about the phenomenon?
  6. Is there any unresolved contradiction?

The hosts rarely cite new research conducted in contemporary Germany. They prefer to talk about big thinkers who came up with concepts meant to apply universally. There is almost no discussion of methodology and data, which in my opinion are the strengths of sociology. We are a pluralistic bunch of scientists who employ a variety of methods, theories, and data to study the social world. The podcast’s main focuses are concepts and argumentation. As a student of immigration, work, and the urban, I find the podcast lacking because those fields are, by definition, not its main focus. Because of its emphasis on theory, the podcast also pays too little attention to the lived experience of particular groups, which quintessentially showcases how people inhabit their environments and reveals their social worlds.

When I brought up my observation about the lack of discussion of empirical research, my friend Herrmann Königs commented that this illustrates what is valued and emphasized in sociological research and pedagogy in Germany. In his words: “German sociology emphasizes the intellectual history of a concept, and whether the concept could be applied universally.” We then went on to debate the question: is it necessary to learn about the historical context in which a concept arose in order to understand a contemporary social phenomenon? We could not reach a consensus on whether it is more productive to learn the intellectual history of a concept or to learn how to apply it to a contemporary situation. However, our discussion highlighted the differences in our training on the two sides of the Atlantic. American sociological training tends to emphasize the empirical; German training, the theoretical.

Due to their training, my German counterparts impress me with their expertise in the close reading of original texts and the logic of their argumentation. However, I find their focus on formal institutions such as the church, the school, and the state to be limiting. Sociologists can also study subcultures such as urban squatters, punk rock, Fusion (the German equivalent of Burning Man), the proliferation of yoga, and immigrant communities. Any of these marginal groups might one day become mainstream, and by studying them, sociologists can reveal social transformations.

One could object that I am being too American-centric, and that I cannot impose an agenda set by my profession on one side of the Atlantic onto the other. I am indeed an American-trained sociologist, but I also think that scholars on both sides of the Atlantic have much to learn from each other. German sociology provides the rigorous theoretical training that I wish American graduate programs offered. I would like to see students engage with theoretical texts from day one and learn how to read them properly, rather than, like me, being scared of social theory and opting for empirical research from the start. As a result, many sociology papers read as atheoretical to me. On the other hand, American pragmatism deserves much praise. With this pragmatic orientation, we look for the mechanisms behind why something is the case and use our sociological imagination to reveal them. Two papers I have read lately that showcase how mechanism-focused research can be done are “When Two Bodies Are (Not) a Problem” by Lauren Rivera (2017) and “All That Is Solid” by David Peterson (2015). They exemplify some of the best contemporary sociological research that American academia has to offer.

Another aspect I find unsatisfying is that the main (if not only) geographical focus of the podcast is Germany. It gives no airtime to other German-speaking countries such as Switzerland, Austria, and Liechtenstein. If these social concepts are so universally applicable, why are they not applied to other cultural and sociopolitical contexts? According to Jaeeun Kim (2017), the field of sociology is openly anti-area studies. In other words, American sociologists tend to study American society; German sociologists, German society. Against these odds, many sociologists do travel across nation-state boundaries to study particular social phenomena. Among the great books I have read in the past two years is Jaeeun Kim’s (2016) Contested Embrace, which examines migration from the Korean Peninsula and its diasporic politics in the 20th century. Another example is Kimberly Hoang’s (2015) Dealing in Desire, an excellent ethnography of the co-production of gender and capital in the sex marketplace of a globalizing Vietnam. Two growing subfields of sociological research are China studies and Asian studies. The 21st century has been dubbed the Asian Century, and it would be a mistake not to pay attention to this important region. In short, attending only to social phenomena within the geographical boundaries of the German nation puts German sociologists at a disadvantage in a world of increasing interdependence and interconnection.

In conclusion, Sociopod has provided me with a substantial vocabulary for talking with my sociology colleagues on this side of the Atlantic. If you are interested in social theory, political sociology, or pedagogy, you should give it a try. It is packed with bite-size discussions of theoretical knowledge, and its ability to reach a popular audience is inspiring. Bringing sociology to the street is a worthy goal, and it ought to be supported. In the context of the increasing emphasis on public sociology, I wish all academics would borrow some of the hosts’ techniques to bring sociological knowledge to a wider audience. Sociology indeed belongs to the street, and the knowledge of the profession should not be contained within the walls of the academy.




Bookstores as Cultural Institutions: Amazon Bookstore vs. The Strand

The summer before starting my PhD program, I backpacked through the UK. Of all the places I visited, I loved my alone time on the coast of Aberdeen in Scotland the most. The city embodied Scottish charm while offering me something familiar: huge American-style SUVs running alongside rows of granite houses. The coast was beautiful, and the water was ice cold. My favorite discovery in the old town was King’s College at the University of Aberdeen. That was where I discovered the magical feeling of reading and handling leather-bound books.

King's College at University of Aberdeen

I took a walking tour of the college and ended up spending two hours in a two-story lecture hall whose bookshelves covered the entire second floor. My attention was fixed on Treasure Island, by the famous Scottish author Robert Louis Stevenson. The book embodied what is old and good about British literature: a red, leather-bound novel covered with the dust of time. I imagined that generations of Scottish students must have studied in this lecture hall and taken this book home with them. Ever since, I have been obsessed with leather-bound books. I go for a leather-bound copy whenever I can afford one.


Treasure Island


Since coming back to the United States, the only place where I have found leather-bound books on sale is the Strand at Union Square. I stumbled upon it one day while hanging out with a friend. My first impression was that there were so many books that I could not possibly know what I wanted; the sheer number of them overwhelmed me. Yet the first shelf that drew me in was one stocked with leather-bound books. I took a book down and examined it from beginning to end. It did not have the magical feeling of an old book cared for by generations. It was a new book on sale. I can no longer remember its title, but I do remember one thing: it was printed in the United States and leather-bound in Switzerland. “Yes, of course,” I thought to myself, “the paper is too white, too thin; it must be printed in the United States.” The fact that the leather cover was bound in Switzerland puzzles me to this day. I sometimes wonder whether the United States simply never developed a leather-binding industry like Europe’s, or whether, because of industrialization and the de-skilling of certain trades, the market was never big in the United States to begin with. These are hypotheses to be tested. The fact remains that the leather-bound books I encountered were not bound in the United States. One rarely finds them at a chain bookstore like Barnes and Noble, or lately at Amazon’s physical bookstores.

In September this year, Amazon opened a physical bookstore on 34th Street and 5th Avenue, which is on my way to work every day. The storefront is not large, but its big, tall glass windows are enticing. One day I ventured inside to study what was sold there. The number of books it carried was not impressive for a physical bookstore; the Strand obviously has many more. Yet the Amazon bookstore still piqued my curiosity and attention. It sold very specific books. I still cannot figure out what algorithm decides which books are displayed, but a cursory look suggested that most of the books on display have many reviews “globally.” I wonder what “globally” means here. One of my favorite books on display was Fatelessness by Imre Kertész. It was first published in Hungarian; I read it in German and was happy to see that Americans read it too. Every book carries a little tag with a review and its star rating. I wonder whether the 90 reviews shown are aggregated globally or only from Amazon’s American market.

Fatelessness by Imre Kertesz

The store is well organized, and it does not overwhelm customers with a sheer mass of books. Yet somehow it does not feel warm or inviting to me. A sociologist friend once told me it is only “a place that tries to sell you stuff.” What does he really mean by that? Does he mean the Amazon bookstore is only a marketplace, one that functions ever more efficiently with assistance from technology? Everyone can rate and review books on Amazon, and these ratings and reviews in turn determine which books are displayed in Midtown Manhattan. Yet to me a bookstore is not only a marketplace where books are exchanged; it is also a cultural institution where knowledge and practices are transmitted.

Does the Amazon bookstore play this role of cultural carrier? No. The Strand does. The Strand facilitates social and human interaction with signs telling customers: if you do not know something, please ask. The chaotic display of books at the Strand invites confusion, and with it interaction between customers and staff. The organizing logic here is not only about assisting customers with their purchases but also about inviting conversations around books and cultural consumption between two social groups. Through those customer-staff interactions, both sides gain something, thus contributing to the maintenance and reproduction of reading culture.

My favorite corner at the Strand

My interactions with staff at Amazon were never unpleasant, but never fulfilling either. They were instrumental at best. I only needed staff for information about a particular book, which I could have looked up on my smartphone. I never bothered to ask for book suggestions. I never looked at Amazon staff with awe for the length of time they had worked there or for knowing everything the store carried. They have worked there a few months at best, a year at most; they have not acquired the kind of institutional knowledge that would make me trust their expertise. Besides, in order to reduce staff-customer interactions, Amazon places machines around the store so that customers can look up information themselves. I wonder whether one goes to a bookstore seeking a solitary experience. If a seasoned staff member could introduce me to books I have never read but might be interested in, that would be terrific. I would love to read science fiction from Poland, such as the works of Stanisław Lem, or a thriller by a Norwegian author. On my own, I would never discover those treasures unless a well-read staff member I trusted introduced me to them.

I would ask a staff member at the Strand for recommendations, but I cannot imagine doing so at the Amazon bookstore. It is not a place for discovery. It is a store whose layout facilitates instrumental interactions between people, and between humans and machines. It is efficient at facilitating a financial transaction, but not at all efficient at building up or sustaining a reading culture. At the end of the day, reading is a cultural activity. Books are cultural objects. Buying books is just one step, one small component of the entire enterprise, and the Amazon bookstore has not introduced anything that sustains this longer and bigger picture. Its vision for its bookstores is maximizing profit and minimizing human interaction. How would the expansion of Amazon affect book culture in the long run? I don’t know. I yearn for a place where a staff member could tell me precisely which edition a book is, and where it was printed and bound, instead of just telling me to look it all up on Amazon. When that is all I get, I leave a book-shopping trip feeling depressed.


Google Ad Algorithm Changes my Music Consumption Preference

Creative workers, writers, and artists all seem to have their favorite music playlists. They leave the music on while they work. My first-year PhD advisor confided that he liked listening to Mozart when writing a theoretical article. An anthropologist friend, whose office was next to mine when I worked as a research assistant in Göttingen, Germany, said that he listened to Bach when writing about ethnic conflicts; the St. Matthew Passion in particular helped with intense concentration. When asked, “What music do you listen to when writing a paper?” a computer scientist friend recently told me that he listened to Chopin. So it is not only social scientists who prefer classical music when writing, but computer scientists too. These three male scientists are not typical of all knowledge workers, and their musical tastes are by no means representative. Yet their answers are revealing about what one chooses to listen to when attending to abstract thought.

Following their advice, I listen to classical music for inspiration. I find that my writing and typing tempo change with the tempo of the piece I am listening to. When I write ethnographic field notes, for example, I prefer Norwegian or Swedish music. The fairytale feeling of Scandinavian folk music gives me the energy to really write “thick description.” However much one tries to be objective in writing down the details of a social event, one cannot avoid infusing each sentence with emotion and meaning. “Thick description,” to me, captures the feeling I had at the moment when I was immersed in a particular social context. Music does that for me: it helps me infuse my science with feeling and rhythm. In these intense periods of writing observational notes, I work best when I do not understand the lyrics of the songs. Lyrics are sometimes irrelevant because what matters more is the tempo; I can be totally oblivious to a song’s meaning.

I used to listen to random songs on YouTube, which seems to have everything one wants to hear. Lately, however, I have moved away from it. The primary reason is that too many commercials pop up in the middle of my music. They distract me from my writing, thinking, and reading. More annoyingly, this ad keeps bothering me:


Screen Shot 2017-11-07 at 10.40.19 PM

It is an advertisement aimed at potential applicants to Agnes Scott College, my alma mater. I love Agnes, so do not get me wrong. I had a great education and an unforgettable college experience. The friendships I made there are lifelong, and I still keep in touch with my former professors to ask for advice on life and academic issues. However, right now I do not want a second bachelor’s degree in anything; I am working on a doctorate. Doesn’t Google recognize that I am not one of those potential applicants? If I could afford a second degree at Agnes Scott, I would major in astrophysics, because I love Agnes’s observatory and planetarium; I would design a second major in artificial intelligence and minor in creative writing. That sounds crazy, doesn’t it? But who cares? If I could afford to pay for college again, I should take full advantage of the resources the school has to offer.

Of course, Google ads are more about capturing the right demographic group than the group with the right intentions. I still don’t understand why this ad targets me out of all the YouTube users out there. The information that I attended Agnes Scott has been public, at least on Facebook. If Google claims to know everything about me, it should have figured this out already. It should not have shown me a picture of myself to lure me into applying to the school again. Then again, it might not be Google’s fault; it might be Agnes Scott’s fault for uploading my picture without asking me, and without noting that it is me! My guess is that, as a target audience, I check the following boxes: age (16-30), race/ethnicity (Asian), gender (female), and a few other attributes. In any case, I find this ad more annoying than puzzling.

I feel flattered that my happy, chubby-faced picture is being used to show a happy place where women can obtain knowledge, follow their passions, remain sane, and be happy with what they are doing. Looking at the picture, I can recall whom I was hanging out with at that very moment. It makes me want to visit my friends who are currently living in the South, and it reminds me of eating gumbo, harvesting kumquats, and baking sweet potato pies in Alabama. However, I am not narcissistic enough to want to look at myself every time I go on YouTube. Now I use Amazon Listen to meet my music needs. Ever since the switch, I have been exposed to more interesting music that has nothing to do with what I watched or listened to on YouTube before. I listen to random indie music, classical music, musicals, and rock. Furthermore, there are no ads on Amazon Listen. While running away from, quite literally, an image of myself, I am running into more inspiring and interesting musical encounters.

The moral of the story is that Google ads changed my music consumption pattern simply because I hate them so much, even when the picture in the ad is of me.