11. Glenn Wallis. For Education




For Education

Explication is the annihilation of one mind by another…whoever teaches without emancipating, stultifies. —Jacques Rancière, The Ignorant Schoolmaster

Professors in the humanities view themselves as fostering crucial human capacities. These capacities, deemed necessary not merely for our flourishing, but for our very survival as a species, include: sound reasoning, critical thinking, engaged dialogue, creativity and innovation, analytical acumen, broad cultural knowledge, and empathic understanding of diverse worldviews. This statement from the Stanford University Humanities Center on “Why the Humanities Matter” is indicative of the general spirit:

Today, humanistic knowledge continues to provide the ideal foundation for exploring and understanding the human experience. Investigating a branch of philosophy might get you thinking about ethical questions. Learning another language might help you gain an appreciation for the similarities in different cultures. Contemplating a sculpture might make you think about how an artist’s life affected her creative decisions. Reading a book from another region of the world, might help you think about the meaning of democracy. Listening to a history course might help you better understand the past, while at the same time offer you a clearer picture of the future.1

In this Item, I would like to explore that might. For, I believe that the very nature of humanistic pedagogy—its very success or failure—hinges on which side of that might an instructor operates on. I will state at the outset that I believe a “taking sides” metaphor is valuable because it evokes the political consequences that, I further hold, are at stake. Thus, in brief, on the right side of the might divide lies a liberal-conservative-right politics; on the left side lies a libertarian-socialist-left politics. By far the greatest issue at stake in negotiating this might, and in humanities education generally, however, is what results from manifesting these two (largely implicit) politicized orientations in the classroom: the cultivation of actual human subjects—students, people, citizens—in the real world. We must, of course, include the professor here, for that role is formed in the same institutional apparatus as that of “student.” So, to convey the sense of what I intend by this right-left divide, it will be useful to explore it briefly in terms of the kinds of implied subjects embedded therein.

On the right

To the right of the might is a subject who assumes the inevitability of the current institutional, and, by extension, social-economic, status quo. Below is a statement to that effect from a professor at a liberal arts college. (We’ll call this person “Professor X.”) It comes from a recent Facebook discussion on the limits of critique in teaching religious studies in relation to the ostensibly much more beneficial project of fostering “meaning-making” among the students. (Indeed, this critique/meaning-making divide is a contentious issue that has animated humanities pedagogy virtually from its inception.) The statement I wish to highlight is a response to the following comment (slightly emended to eliminate extraneous references):

An instructor who engages in the “meaning-making” that seems to be called for in this thread (affirmative, unmolested by excess critique, spiritualized) is serving the perpetuation of the kinds of social formations that some of us are strenuously countering in the classroom: inequality, domination, paternalism, elitism, authoritarianism, and the tyranny of positivity.

I will return in a moment to the fact that that statement operates on the left side of the might. First, here is the (emended) response from Professor X on the right:

I probably don’t disagree with you on most of your critique of the university. However, the unspoken premises of my original post include: (a) that I want to keep my job, and learn to do it better; (b) that I don’t expect massive changes will suddenly be made to the university system in my lifetime; (c) that I am likely to continue to encounter the same general student demographic as I currently am, at least for the foreseeable future; and (d) that I will have to continue to meet, at least for the most part, assessment metrics and pedagogical expectations set by the institution. For better or worse, I’m looking for solutions and best practices within the context of this bounded field of contemporary academic [religious studies].

Why is that a response from the right? I imagine the reader thinking, all of that seems reasonable enough. I, however, contend that Professor X’s statement exemplifies the first condition of the ultimate failure of the humanities; namely, a professor and a student body operating firmly ensconced within the status quo. Professor X’s statement is, in fact, an excellent illustration of how an Althusserian Ideological State Apparatus functions: Society is run through with alienating values and relations (inequality, authoritarianism, elitism, etc.); the university as institution absorbs those values and replicates those relations via, for instance, “assessment metrics,” “pedagogical expectations,” and a none-too-subtle compulsion for professors to remain within the “bounded field” of their discipline; professors, wishing to keep their jobs (i.e., who have been successfully “interpellated” by the economic “law”), obediently accept that this is just the way it is, and so reflexively embody the values and reinforce the alienating relations in the classroom vis-à-vis flesh and blood students; these students exit the university as properly inculcated citizens, ready to carry it all forward still further via their relations to one another and to society at large.

In the most basic sense, the statement augurs the defeat of the humanities in that it creates a subject as spectator. The institution is a kind of immutable theater of action wherein professor and students enact a drama of learning that has been prescripted, to a decisive extent, by “assessment metrics and pedagogical expectations set by the institution” and by the “bounded field” of our “disciplines” (an apt metaphor). It is an institution, we are further asked to believe, that will not be fundamentally changing any time soon. It is crucial to note that the construction of these metrics and fields themselves is driven by the demands of a social-economic system that is in turn driven, not by the personal interests of the professors or students per se,2 nor by “the public interest,” but by that of the primary “stakeholders” in that system: the wealthy ruling class.3 (Hence, the inexorable rise of STEM along with the pervasive hand-wringing about “the death of the humanities.”) Professor and student are not active agents in such an institution; they are quite literally its passive subjects, absorbing, reinforcing, and replicating the dominant ideology. The university is thus a static spectacle that demands of its subjects only that they adequately perform their duties as spectators. As this indicates, the spectacle is primarily a social relation: a machinic regulation of human power dynamics and hierarchies that are mediated by the already given image of the university.

This concept of spectacle originated with Guy Debord in The Society of the Spectacle. It is certainly not irrelevant to my argument that the student occupation at Paris University Nanterre on March 22, 1968, was directly inspired by this text and its warnings to the passive spectator. That occupation, driven by resistance to the issues mentioned in the quote above (inequality, paternalism, etc.), in turn, fueled the insurrection of May 1968. Debord’s Thesis 16 is one of many examples of how this idea of the spectacle is useful to my argument: “The spectacle subjugates living people to itself to the extent that the economy has totally subjugated them. It is no more than the economy developing for itself. It is the true reflection of the production of things, and the false objectification of the producers.”4 It is no coincidence that item a in the rationale for Professor X’s stance is “I want to keep my job.” Again, while this desire is understandable, it is not as self-evidently justifiable as it is apparently intended to be. It is only obviously justifiable if Professor X, as “producer” in the economy of the university, quite consciously determines to acquiesce to the capitalist logic at work therein. Otherwise, short of conscious allegiance to the economic system that arguably creates the very conditions of that which the humanities aims to contravene (authoritarianism, elitism, racism, etc.), item a, far from a self-evident justification, only reveals the extreme power of economic capture at work in the formation of the university professor subject. Professor X’s item a is a confession of the deep embodiment of that logic at work as a subjugating force in the life of the university. Taken in context with the additional items, furthermore, it reads as a confession to the “false objectification” of Professor X as an alienated producer in the machinery of the economic Golem that is modern-day capitalism.
Without belaboring the point, it should not be difficult to see that the student is equally such a “producer,” in that “student” indexes yet another location of “the economy developing for itself.” Indeed, the very language around studentship is run through with capitalist logic, placed unabashedly front and center for all to see.5

So, the first salvo from right of the might is acceptance—tacit or overt—of the current status quo. I consider this a move from the political right because such acceptance is a key feature of the liberal-conservative political-economic nexus known as neoliberalism. One of neoliberalism’s founding slogans, popularized by Conservative British prime minister Margaret Thatcher in the 1980s, is TINA—There Is No Alternative! Since no better system of production and exchange than market capitalism exists, there will certainly be, as Professor X prophesies, “no massive changes” to the institutions that circulate within that marketplace, including the university. Hence, the neoliberal logic continues, we must fashion for ourselves meaning-making narratives that enable us mere spectators to retain our ability to function, indeed, even come to thrive, within the inevitable state of the situation. To be truly effective, such narratives should be woven with the threads (and tell-tale signs) of neoliberal subjectivity: vulnerability, resilience, adaptivity, self-help, feigned positivity.

It will be useful to explore this link to neoliberalism more closely. The orthodox definition of neoliberalism sees it as “a theory of political economic practices proposing that human well-being can be advanced by the development of entrepreneurial freedoms within an institutional framework characterized by private property rights, individual liberty, unencumbered markets, and free trade.”6 If you are privileged and wealthy, all that probably sounds quite fine to you. If you are neither particularly privileged nor wealthy, you are more likely to agree with critics of neoliberalism, who argue that it is nothing less than “the ideology at the root of all our problems”7—massive financial inequality, poverty, entrenched patriarchy and misogyny, institutional racism and bigotry, authoritarianism, environmental degradation, international strife and warfare. Be all that as it may, for our purposes a more salient approach to the topic is to consider neoliberalism as “a theory and practice of subjectivity.” For:

we cannot understand how neoliberalism is able to function as a socioeconomic program…without addressing how it problematizes human subjectivity. It is the interpretive capacities through which human beings reflect upon the nature of their world, their relations with themselves, each other, and their environments that are seen as being of crucial issue for the legitimization of neoliberal practices of government.8

How, then, does neoliberalism, and the acquiescence to it as the status quo, problematize human subjectivity? And, directly to our purpose, what role does the university play in perpetuating neoliberal ideology by aiding in the creation of its subject? In brief, the subject “inculcated through neoliberal discourses” and university participation is a “resilient, humble, and disempowered being that lives a life of permanent ignorance and insecurity.” This plight follows from embodied acceptance of the unchangeability of the institution and of the unknowability of the very conditions for that change. It should be clear that this position entails humility and powerlessness, or humility in the face of powerlessness. That being the case, our only recourse is to develop resilience, the ability to keep going, and, in doing so, to keep it going.

Estragon: I can’t go on like this.

Vladimir: That’s what you think…We are not saints, but we have kept our appointment [with the status quo]. How many people can boast as much?

Estragon: Billions.9

We are, furthermore, seeing here a subject who has come to denigrate the “hubris of ideals of autonomy.” This is perhaps the most damning disposition of status-quoist subjectivity. It is damning not because it belies what is perhaps the very defining tenet of the humanities—belief in the individual as an autonomous, eminently rational agent—but because it entails a defeatist cynicism bordering on a kind of gaslighting abusiveness toward arguments for real change. Such arguments are dismissed by the status-quoist as “unrealistic,” “too idealistic,” “too radical,” and, most common of all, “impossible!” (I will return to this feature of the argument later.) Finally, and perhaps most significantly, this is “not a subject that can conceive of changing the world, its structure and conditions of possibility.” Rather, it sees as its only hope the necessity of adapting as effectively as possible to the unchanging way things are.

In the following section, I will present an alternative approach—the one from the left—to that of the university professor who is “looking for solutions and best practices within the context” just described—a context embedded in a liberal-conservative-right politics. And in the final section, I will present a teaching model that aligns with that alternative. In this section, I have argued that we can never avoid the political in the classroom (or indeed in our scholarship). That is, how and what we teach, our negotiation of authority vis-à-vis our students, our relationship to the very institution of higher education, and so on, always extrapolates out into a politicized social formation. The formation outlined thus far is that of a spectacle arising spectre-like out of the demands of a capitalist logic that insists on profit over people, and on its own and its shareholders’ interests before those of “the public,” much less those of diverse individuals. The extrapolated consequences suggested here, I believe, point toward a dangerous complicity with the prevailing, profoundly dehumanizing, status quo.

On the left

To the left of the might is “a subject capable of conceiving the transformation of its world and the power relations it finds itself subject to.”10 As I hope to have shown, it is this very failure of the imagination to conceive of alternatives to the current state of the situation that marks the liberal-conservative-right side of our divide. A major hindrance is the seeming incapacity of the status-quoist subject to conceive of anything but macro transformation. If, like Professor X, you “don’t expect [that] massive changes will suddenly be made to the university system in [your] lifetime,” then what choice do you have but to “look for solutions and best practices within the context” of your currently “bounded field”? The left side of the divide extrapolates out into a libertarian-socialist-left formation precisely because it refuses to adhere to these boundaries, whether disciplinary, institutional, or indeed worldly. It, furthermore, adamantly rejects the status-quoist dogma that only once “massive changes” to the institution, or indeed to the world at large, have occurred can meaningful collective transformation take shape. As I will show in the following section, the key to such change lies in a practice called prefiguration, or the direct, viable, immediately lived micro realization of a classroom, university, and world envisioned by the humanities. First, it will be necessary to be more specific about what I assume the humanities necessarily to be.

Etymologists tell us that the word “education” stems from two Latin roots: educare, to train, to mold; and educere, to draw out, to lead out.11 The former suggests a regime, whereby the learner is fashioned into a quite specific social subject, one predetermined by a given ideological formation. Pedagogically, this view suggests exercises like passive listening to lectures, memorization of data, and displays of successful inculcation via tests. Significantly, as we will see in the next section, it entails operating within a particular “bounded field” or scholarly discipline. The term educere suggests rather an inherently emancipatory practice, whereby the learner is being led out of, for instance, precisely such restrictive formations. Pedagogically, this practice entails fostering the development of critical and analytical skills. It also entails the eschewal of bounded fields. I imagine it goes without saying that I hold the latter version to describe what happens on the left side of our might, while the former version describes what counts as “education” on the right side (though I will suggest a better term for it in the final section). I will say more in the following section about actual pedagogy. Here, I want to briefly sketch a view of the humanities that would necessarily foster this practice of educere.

Returning to the Stanford University Humanities Center, we find what can serve as a sine qua non definition of the humanities: “Through exploration of the humanities we learn how to think creatively and critically, to reason, and to ask questions.”12 This definition assumes the Enlightenment values that, in fact, gave rise to the modern university. That is, it assumes what, prior to the spread of the Enlightenment, were three radical—indeed, impossible!—ideas. First, people are capable of reasoning. This means that we are no longer dependent on priests, masters, kings—or professors—to determine what is best for us. Second, if we are capable of reasoned thought about what to do and how to live, then we are capable of action based on that thinking. This means that we are no longer dependent on priests, masters, kings—or professors—to determine how we should act. Third, if we are capable of sound reasoning and of action based on that reasoning, then we are capable of fashioning a better world (institution, situation, etc.) than the one we currently inhabit. We are no longer dependent on the prescript of the spectacle to dictate the terms of a state of affairs.13 This is the view of education promoted by actual libertarian-socialists, such as Noam Chomsky, who argues that “The highest goal in life is to inquire and create. The purpose of education from that point of view is just to help people to learn on their own.”14 It is not difficult to intuit the danger to the status quo lurking in such a view. For, if critique and creativity are the lifeblood of the humanities, then an education in that vein creates a subject at odds with the indoctrinational impulse of educare. And if, as Chomsky says, the parameters of what constitutes an education are determined by the individual, then the very concept of a “bounded field” must be dispensed with. How, then, should we proceed?
With this question, we can turn to a specific example of how the reader might realize such an emancipatory education, even within the neoliberal classroom. The example involves the libertarian-socialist, or specifically anarchist, idea of prefiguration. This idea in turn assumes deeply counter-intuitive images of the teacher or professor, the learner or student, the “field” or subject of study, and the very purpose of education. The general example that I will provide is that of “unlearning.” Specifically, in the final section, I will discuss Jacques Rancière’s concept of “intellectual emancipation,” first articulated in his 1987 book, The Ignorant Schoolmaster.

The ethics of prefiguration

The first appearance of the concept of prefiguration that I intend here is found in André Gorz’s 1968 New Left Review article “The Way Forward.”15 Not insignificantly, Gorz’s piece was published just after the Situationist-inspired Paris insurrection referred to earlier. With visions of imminent revolution dancing in their heads, activists and intellectuals alike were asking how best to prepare for the looming new state of the situation. Gorz suggested a strategy in which the most committed revolutionary figures—the “vanguard party” in Leninist parlance—“prefigures the proletarian State, and reflects for the working class its capacity to be a ruling class.” Such “reflection” is precisely the educative strategy of prefiguration because it mirrors, at the immediate micro level of revolutionary association, the very forms and relations it desires to see manifested in society at large. This idea is in fact a central strategy of libertarian-socialist thought going back to the First International. Again, it figures in a manner that is not insignificant to my argument. I am referring to the famous falling out between the “authoritarian” wing comprised of the supporters of Karl Marx, and the “libertarian” wing of Mikhail Bakunin’s supporters at the London conference in 1871. We glimpse the nature of the divide from a comment by Bakunin:

[Society] can and should reorganize itself, not from the top down according to an ideal plan dressed up by wise men or scholars nor by decrees promulgated by some dictatorial power or even by a national assembly…[but] from the bottom up, by the free association or federation of workers.16

Like the professors on the right of our might divide, Marx and his followers reflexively assumed the necessity of a paternalistic approach to leadership, which further assumed that certain guru-like figures are best equipped to determine the way forward for all involved. Bakunin rightly predicted that Marx’s eventual “dictatorship of the proletariat,” if ever realized, would end with tyrannical new masters, the “red bourgeoisie,” merely replacing the tyrannical old masters. For Bakunin, this outcome was prefigured not only in Marx’s theory of revolution, but in his very demeanor: “the instinct of liberty is lacking in him; he remains, from head to foot, an authoritarian.”17 An obvious question that arises here is: on what grounds should we expect a free society to arise out of a hierarchical process? Indeed, this was precisely the question posed in the circular written by the Bakunin camp in the aftermath of the split with the “authoritarian” socialists:

How could one expect an egalitarian and free society to emerge out of an authoritarian organisation! It is impossible. The International, embryo of the future human society, must be, from now on, the faithful image of our principles of liberty and federation, and must reject from within any principle tending toward authority, toward dictatorship.18

Risking what may seem like overreach to some readers, I use these examples from revolutionary politics for several reasons. The most basic reason, as I mentioned earlier, is that I hope to convince the reader that there are real, potentially large scale, social implications to what occurs in the seemingly inconsequential and ostensibly apolitical environment of the college classroom. Indeed, I can rephrase to my purpose this crucial statement from the 1871 circular: “The future society should be nothing else than the universalization of the organization that the classroom forms for itself. We must therefore strive to make this organization as close as possible to our ideal.”19 Do we desire a society that has realized equality, that rejects unjustified institutionalized power and authority, that is anti-racist, that is feminist, pro-LGBTQ, environmentally friendly, and more? If so, we must use the classroom as a mirror of this society. We must create “an ethically consistent relationship between the means and ends,” in Cindy Milstein’s words, wherein our values align to our practice and our practice to “the new society before it is fully in place.”20

As we learn from the division between the two leading figures of socialism at the First International, Marx and Bakunin, however, this prefigured ideal will meet resistance from actors who appear to be our logical allies. Paternalistic professors, even left-leaning ones, will insist that students simply are not yet capable of such an approach. Hence, the necessity of an intervention that is both radical and ethical. It is radical because it realizes “the new society” at its very root—in the unfolding interactions of lived human associations; in our case, in the university classroom. This approach stands in distinction to one that uses the levers of university governance to achieve its aim. Again, a political correspondence suggests itself. One of the most definitive differences between leftist and liberal approaches to change is captured in Bakunin’s strategy calling for “direct economical struggle against capitalism, without interfering in the political parliamentary agitation.”21 Presumably, Professor X says “I don’t expect massive changes will suddenly be made to the university system in my lifetime” because the professor is all-too-aware of the absurdly slow movement of the college’s manager-heavy bureaucratic machinery, as well as the “seemingly endemic cowardice and personally petty antipathies” operating therein.22 And yet, apart from leaving academia, there is no alternative for those on the liberal-conservative-right of the might than to work within the “parliamentary” system of the institution—its faculty senate, its committees, its strategic plan, its top-heavy leadership hierarchy, and so forth. This is, in fact, a defining feature of liberal-conservative-right politics, namely, the strategy of reform from within. Actors on the libertarian-socialist-left adamantly reject this strategy as ineffectual, status-quoist, accommodationist, and hence ultimately futile.
In its place it injects precisely an immediate, lived, prefigurative practice—direct educational struggle against the neoliberal university. This strategy, as Milstein tells us, is ethical, because we enact our classroom practice even if, I would hasten to add, “the new society” has no realistic chance of ever coming into place. That is, the driving imperative behind my prefigurative classroom is simply that I hold it to be right and just; hence, ethics demands that I enact it whether or not it ultimately “contains within it the forward surge of an achievement which can be anticipated” in society at large.23 This uncertainty of ultimate outcome tempers any claim to a necessarily “successful” or even affirmative prefiguration. It entails rather a clear-eyed ethics that does not flinch from the undead ghoul that is the political-economic catastrophe haunting the university. It is an ethics rooted as deeply in anxiety, hopelessness, impossibility, and mourning as it is in justice, passion, insistence, and utopian yearning.


So, what might such a prefigurative classroom look like? In this final section, I would like to present what I believe is a realizable example of an “emancipated” classroom. The broad concept with which I am working, derived from radical education theory, is called “unlearning.” We can get a rough sense of the spirit of unlearning by returning to the statement with which I opened this essay: “Contemplating a sculpture might make you think about how an artist’s life affected her creative decisions;” but not if the terms of that contemplation have been predetermined by the fields of “Art,” “Aesthetics,” or “Art History.” “Listening to a history course might help you better understand the past;” but not if that history is bounded by “History.” And so on. The claim here is that what takes place on the right of the might is a regimen of learning. It will become clear as I proceed that “learning” is educare, something wholly distinct from “education,” which is educere. Learning occurs within what Professor X refers to as a “bounded field.” It has as its goal the replication and perpetuation of a predetermined program of “knowledge.” This bounded field is permeated by “disciplinary” procedures and judgements concerning, for example, standards, rigor, interpretation, skills, and, most crucially, explication. Explication is the very lifeblood of learning. It is the conduit of the life-giving cells of an aptly termed corpus, or “body of knowledge.” Explication is also, I contend, the pedagogical ally of a liberal-conservative-right worldview and politics. Not least of all, it is the death of education. Hence, the necessity of unlearning.

Jacques Rancière’s concept of “intellectual emancipation,” first articulated in his 1987 book, The Ignorant Schoolmaster, is an exemplary strategy for, negatively, countering learning, and for, positively, implementing education. Rancière expresses the issue at the heart of the matter when he writes: “Explication is the annihilation of one mind by another…whoever teaches without emancipating, stultifies.” This statement presents a starkly divided pathway, one that will likely strike many readers as based on a ridiculous overstatement. It claims that if you, as teacher, employ the very tactic that constitutes our very notion of what it is to teach—explanation—then you are engaged in the destruction of your students’ intelligence. The statement claims that every time you open your mouth to explain the contents of your bounded field, you are expanding your students’ capacity for stupidity. In political terms, the statement suggests that explicative teaching supports the creation of a passive subject content to engage the spectacle, while emancipatory teaching encourages a courageous subject fit for resistance and creative innovation. All of this obviously suggests a practice that calls for a drastically, indeed radically, different vision of education from the one circulating in our current institutions of higher education. Because it assumes that the classroom must prefigure—must itself manifest, reveal, and actualize—a world devoid of neoliberal detritus (inequality, patriarchy, poverty, racism, etc.)—such a classroom obviously has no need for an instructor-who-is-supposed-to-know, that guru-like figure who possesses the requisite wisdom for professing before the unschooled student body, and leading them to the “meaning” encoded within the guru’s “bounded field.” Indeed, the emancipatory instructor in a prefigurative classroom will be unrecognizable to anyone who is able to function contentedly as a “professor” (an explicator) in higher education.

What, then, is emancipatory teaching? First of all, this model contains within it what I imagine to be an insurmountably objectionable feature to many readers: elimination of the master explicator. Why such a drastic move? Because the explicator is constituted through the logic of two intelligences: inferior vs. superior, ignorant vs. knowing, professor vs. student. Readers may be asking themselves: And what’s wrong with that? Some people are more intelligent and better educated than others. Professors have doctorates after all! So, why shouldn’t they determine the shape of the bounded field?

I will now tell a wholly implausible—but true!—story. In The Ignorant Schoolmaster, Rancière relates the tale of one Joseph Jacotot (1770-1840). In 1818, Jacotot had been invited by the King of the Netherlands to lecture in French literature at Louvain. Thinking it would amount to a protracted vacation after the tumult surrounding the return of the Bourbons to power (Jacotot had been a minister under the Convention), he accepted. What he found instead of rest and relaxation, however, was an exhilarating intellectual adventure. For, Jacotot knew no Flemish and his students knew no French. Determined to engage the students nonetheless, Jacotot gave careful thought to the matter. He concluded that, in the first instance, “the minimal link of a thing in common had to be established between himself and them.”24 It just so happened that a French-Flemish bilingual edition of Fénelon’s Télémaque was coming out in nearby Belgium. This would do. He had the book delivered to his students, and, through an interpreter, asked them to refer to the Flemish text only as a means to understand the French. He had them work hard at it. He provided the environment for education, but they did all the work. Those students who had the self-motivation to persist to the end were then asked to write, in French, a detailed account of Télémaque. Given the counter-intuitive nature of the experiment, and its seemingly obvious fate as abject failure, the results were nearly impossible to grasp. Rancière quotes an early commentator on the experiment:

[Jacotot] expected horrendous barbarisms, or maybe a complete inability to perform. How could these young people, deprived of explanation, understand and resolve the difficulties of a language entirely new to them? No matter! He had to find out where the route opened by chance had taken them, what had been the results of that desperate empiricism. And how surprised he was to discover that the students, left to themselves, managed this difficult step as well as many French could have done! Was wanting all that was necessary for doing? Were all people virtually capable of understanding what others had done and understood?25

Deprived of explanation. Left to themselves. These are keys to understanding what Jacotot would come to call his method of “universal education.” Specifically, Rancière identifies two mechanisms of explicative teaching which, when removed, entail the emancipated intelligence that is an accompanying goal of the method. The two mechanisms are extensiveness and progressiveness. I will detail them in a moment. First, a few more words to highlight the professor-figure who, I believe, represents the biggest obstacle to implementing this pedagogy. Am I, along with Rancière, justified in considering this figure an agent of stupidity, a stultifying explicator? Look at this account from The Ignorant Schoolmaster and see if you don’t recognize a familiar figure.

The stultifier is not an aged obtuse master who crams his students’ skulls full of poorly digested knowledge, or a malignant character mouthing half-truths in order to shore up his power and the social order. On the contrary, he is all the more efficacious because he is knowledgeable, enlightened, and of good faith. The more he knows, the more evident to him is the distance between his knowledge and the ignorance of the ignorant ones. The more he is enlightened, the more evident he finds the difference between groping blindly and searching methodically, the more he will insist on substituting the spirit of the letter, the clarity of explications for the authority of the book. Above all, he will say, the student must understand, and for that we must explain better. Such is the concern of the enlightened pedagogue: does the little one understand? He doesn’t understand. I will find new ways to explain it to him, ways more rigorous in principle, more attractive in form—and I will verify that he has understood.26

The master explicator operates by deploying the countless examples derived from the “bounded field” of tradition, typically as recorded in authoritative books—historical texts, canonical works, scripture and commentary, contemporary textbooks and scholarly tomes—books from which the professor-explicator offers no escape. In any case, the crucial point is this: the explication, derived as it is from the master’s superior knowledge, wisdom, and experience, always prevails over the insights of the inferior intelligence: that of the student. For Rancière, something vile and pernicious is seething beneath this stupid-making practice of prioritizing one intelligence over another: the perpetuation of social inequality. For inequality operates within, and thereby strengthens and perpetuates, “the very framework within which we get educated and acquire knowledge.” The explicative classroom is the very place where “our intellectual capacity comes into agreement with the inequality of the social order.”27

This framework can be dismantled through application of the following formula: Everything is in everything; learn something and relate it to all the rest.28 Here’s how that works. The ray of light that kills the intelligence-sucking vampire of explication is precisely that something. The logic of two intelligences lives and breathes in the shared delusion that the student cannot learn something.

As long as you are before “something,” you are before an opaque particularity which has its reason outside itself. You are before an opaque fragment of an unknown totality. You cannot learn anything unless you understand its connection to the whole of which it is a fragment.29

The logic of two intelligences holds that any given conceptual something is only a minute fragment of a greater “totality.” This totality is nothing other than what Professor X calls “the bounded field.” The bounded field represents a hermeneutic circle, the whole of which is known only to the professor-as-explicator. It is within this circle that the professor derives power, for there is virtually no end to the totality of a bounded field. The student cannot understand something without knowing how it connects to the whole of the bounded field. Only the professor knows how to link the part to the whole. A crucial element in this logic is the fact that the whole of the bounded field is itself “unpresentable,” and therefore “must be presupposed as inherent to the power of making the links, to the capacity of those who know how to know.”30 It is here that inequality, in both its pedagogical and social forms, shows its distorted face. For, the capacity to know the whole can only be demonstrated before an unequal intelligence. The professor’s intelligence ranges exultant over the transparent space of the bounded field. The student’s intelligence shrivels, cramped, “enclosed in the relation of a private—idiotic—mind to particular things.”31 This is the principle of extensiveness.

Contained within the principle of extensiveness is another feature that many readers, I imagine, will recognize as a self-evident necessity for learning to occur: progressiveness. The principle of extensiveness holds that the professor’s knowledge will always range far beyond the student’s. Might it be possible for the student to close this distance? Yes. But it takes time. The time it takes is bound to a quite particular progression, the specific steps of which only the professor-explicator has knowledge. The logic of the principle of progressiveness is this: learn this something, then this something, then this something. The bounded field may be knowledgeably traversed only in a definite progression, a progression determined ab aeterno by the bounded field and divined exclusively by its professor-explicator agent. Where extensiveness operates spatially, progressiveness operates temporally: the bounded field may be finally circumnavigated only once the proper time has unfolded. Thus, the professor not only has command over the full range of proper connections to be made within the hermeneutic circle of the bounded field, but also “knowledge of the progression according to which the ignoramus is able to make this or that step in his travels,” extending over a protracted period of “learning.”32

So the logic of explication calls for the principle of a regression ad infinitum: there is no reason for the redoubling of reasonings ever to stop. What brings an end to the regression and gives the system its foundation is simply that the explicator is the sole judge of the point when the explication is itself explicated. He is the sole judge of that, in itself, dizzying question: has the student understood the reasonings that teach him to understand the reasonings?…The master’s secret is to know how to recognize the distance between the taught material and the person being instructed, the distance also between learning and understanding. The explicator sets up and abolishes this distance—deploys it and reabsorbs it in the fullness of his speech.33

I said that the framework of inequality can be dismantled through application of the emancipatory formula Everything is in everything; learn something and relate it to all the rest. I am also arguing that the exercise of this formula in the classroom enables a prefiguration of a just world. Entailed in these claims are the additional ones that the formula disables the hypnotic seduction of the spectacle and, in so doing, strikes a blow against the capture of education by the prevailing system of dehumanizing neoliberal values. Most importantly, all of this allows the lineaments of a new subject to emerge: one creative enough to imagine, and courageous enough to act toward, a just society. So, now we must ask: what is the right way forward that is revealed by the non-explicative professor?

Immediately, Jacotot retorts, there is no right way forward! Proclamations of a right method merely replicate the logic of two intelligences at work in the pedagogy of explication within a bounded field. All attempts at formulating a right way “boast of knowing how to know.”34 Since there is no right way of knowing, there can be neither accredited explicator nor stultified student. Everything is in everything; learn something and relate it to all the rest. Each “bounded field” is an illusion. It is a hallucination conjured up by medieval scholastics to defend their precious religious dogma against the rising tide of secular pluralism, and perpetuated by the petty departmental politics of the modern managerial university. Against the “bounded field” the emancipatory classroom facilitator assumes that everything is in everything, the whole is everywhere. This very essay before your eyes or in your hands is:

a whole from which you can discover your own capacity of making an infinite number of connections, hence your capacity of making links and wholes in general. The only condition of those operations is an “opinion:” the opinion of the equality of intelligence: the opinion that there is only one intelligence and that the master and the student are only two speaking beings, two travellers weaving their path in the forest of things and signs.35

Cannot every one of us gather ample evidence that no explicator, no professor, no writer or speaker, can control the connections and links made in the process of another’s hearing or reading or thinking? Why, then, the charade of order, of correctness and control? Why, then, the insistence on the proper negotiation of extensiveness and progressiveness within a bounded field? Why, then—even with all of the current celebration of interdisciplinarity—the perpetuation of the pedagogical fetish fantasizing the value, meaning, and wisdom of departmentalized humanities fields? In short, why the epistemological discipline on the right of our might, that of liberal-conservative-right learning, rather than the epistemological anarchism of libertarian-socialist-left education? A greater question presents itself: why would an educator even desire such control, entailing as it does the shrivelling of imagination and the proscription of potentially idiosyncratic insight? And finally the looming question: why would an educator choose to function as an agent of the dehumanizing, debilitating yet seemingly unchallenged opinion known as inequality?

Un-explaining in general means undoing the opinion of inequality. Undoing it means undoing the links that it has tightened everywhere between the perceptible and the thinkable. On the one hand, the un-explanatory method unties the stitches of the veil that the explanatory system has spread on everything; it restores the things that this system caught in its nets to their singularity and makes them available to the perception and the intelligence of anybody. On the other hand it returns their opacity, their lack of evidence, to the modes of presentation and argumentation which were supposed to cast light on them. By so doing it substitutes a community of equal speaking beings for the distribution of the positions opposing the learned to the ignoramuses.37


1 http://shc.stanford.edu/why-do-humanities-matter.

2 Katerina Kolozova writing about Marx’s notion of workers’ interest, notes that interest “is not an idea in the sense of ‘causa finalis.’ It is not a purpose. It does not have a ‘meaning’ per se. It does not require ‘wisdom,’ ‘superior knowledge,’ or education to know what one’s interest is.” See Katerina Kolozova, Toward a Radical Metaphysics of Socialism: Marx and Laruelle (Brooklyn: Punctum Books, 2015), 3.

3 See, for instance, Salon, “The Interests of the Wealthy: How the rich control politicians—even more than you think.” Political scientist Michael Jay Barber, discussing his research into the issue, says: “What we found, when we looked senator-by-senator, was that the opinions of donors and the behavior of senators are very closely aligned, whereas the opinions of the typical voter in a senator’s state were not nearly as closely connected.” https://tinyurl.com/y8cyw7s3.

4 Guy Debord, The Society of the Spectacle. Emphases added for clarity. https://tinyurl.com/pv4k9nq.

5 The most basic function of capitalist logic is of course the creation of a commodity out of either concrete or abstract entities. “Education” is a human abstraction that can only be packaged and sold for a massive profit when materialized as, for instance, a college or university. Within this context, we can see many other functions of capitalist logic at work (terms in quotes are actual commerce- and finance-oriented higher-ed verbiage): the student as “customer” whose “business” must be earned through the college’s “delivering” of a stylized “student experience” and “retained” (tuition paid) until completion of the “credential” (degree); “business plans” and “strategic plans” compiled by the college’s various “stakeholders;” celebration of perpetual “expansion,” as in the construction of new buildings, parking lots, student housing, gyms, stadiums, bookstores, cafes, etc.; constant innovation, such as the creation of new programs, certificates, and courses, the mostly unnecessary changes in learning management systems and email providers, and the interminable updates of computers, phones, and other technology; the selling of “credits” tied to time “employed” in the classroom (the “credit hour,” “extra credit,” “points earned” and “points deducted”); the system of rewards and punishments known as “grading” (ranking, classifying, distinguishing) ensuing from “competition” with other students; the presence of a factory-like taskmaster (the professor) setting the “terms” of the course via a “contract” (the prospectus-like “agreement” called a syllabus), and incessantly inspecting, probing, assessing, and evaluating student “performance” through “tests;” college rankings, viewed by “buyers” (potential students and their parents) as indicative of an institution’s “stock value.” I am just skimming the surface here, and could continue. But let’s finally mention the most blatant form of economic subjugation bearing on the student: debt.
As Noam Chomsky says: “Students who acquire large debts putting themselves through school are unlikely to think about changing society. When you trap people in a system of debt, they can’t afford the time to think. Tuition fee increases are a ‘disciplinary technique,’ and, by the time students graduate, they are not only loaded with debt, but have also internalized the ‘disciplinarian culture.’ This makes them efficient components of the consumer economy.” See also “Noam Chomsky on Student Debt and Education,” https://chomsky.info/20130227. For more on the concept of capitalist logic, see Ronald Edsforth, “On the Definition of Capitalism and Its Implications for the Current Global Political-Economic Crisis,” https://tinyurl.com/yagcctvd.

6 David Chandler and Julian Reid, The Neoliberal Subject: Resilience, Adaptation and Vulnerability (London: Rowman and Littlefield, 2016), 2. There is ample literature on how the university fits into this theory. See, for instance, Sheila Slaughter and Gary Rhoades, “The Neo-Liberal University,” New Labor Forum, no. 6 (Spring-Summer, 2000): 73-79.

7 George Monbiot, “Neoliberalism—the ideology at the root of all our problems,” The Guardian, https://tinyurl.com/hkbom5x.

8 The Neoliberal Subject, 2.

9 Samuel Beckett, Waiting for Godot. https://tinyurl.com/ybb9wxsw.

10 The Neoliberal Subject, 4.

11 See, for instance, Randall V. Bass and J. W. Good, “Educare and Educere: Is a Balance Possible in the Educational System?” The Educational Forum, Volume 68, Number 2, Winter 2004: 161-168.

12 http://shc.stanford.edu/why-do-humanities-matter.

13 See Cindy Milstein, Anarchism and Its Aspirations (Oakland: AK Press, 2010), 17-18.

14 https://tinyurl.com/ydex2vzd.

15 Uri Gordon, “Prefigurative Politics between Ethical Practice and Empty Promise,” Political Studies 2018, Vol. 66(2), 526.

16 Gordon, “Prefigurative Politics,” 529.

17 Mikhail Bakunin, “Reflections on Marx and Engels.” https://tinyurl.com/y9q2ette.

18 Gordon, “Prefigurative Politics,” 529.

19 Gordon, “Prefigurative Politics,” 528-529. The original reads: “The future society should be nothing else than the universalization of the organisation that the International has formed for itself. We must therefore strive to make this organisation as close as possible to our ideal.”

20 Cindy Milstein, Anarchism and Its Aspirations (Oakland: AK Press, 2010), 68.

21 Peter Kropotkin in his seminal article, “Anarchism,” The Encyclopaedia Britannica, 1910. https://www.marxists.org/reference/archive/kropotkin-peter/1910/britannica.htm.

22 L.O. Aranye Fradenburg and Eileen A. Joy, “Unlearning: A Duologue,” in Aidan Seery and Éamonn Dunne, eds., The Pedagogics of Unlearning (Brooklyn: punctum books, 2016), 172. The quote is from Joy’s section of the “duologue.”

23 This is Ernst Bloch discussing his quite relevant concept of “concrete utopia”; in Gordon, “Prefigurative Politics,” 533.

24 Jacques Rancière, The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation, trans. Kristin Ross (Stanford University Press, 1991 [1987]), 2.

25 Rancière, The Ignorant Schoolmaster, 2.

26 Rancière, The Ignorant Schoolmaster, 7-8.

27 Rancière, “Un-What?” 26.

28 Rancière, “Un-What?” 27.

29 Rancière, “Un-What?” 27.

30 Rancière, “Un-What?” 27.

31 Rancière, “Un-What?” 27. Rancière is playing on the ancient Greek sense of the word idiōtēs, which simply denoted “a private person.” Later Latin usage extends the meaning to “ignorant, uneducated.”

32 Rancière, “Un-What?” 28.

33 Rancière, The Ignorant Schoolmaster, 4-5.

34 Rancière, “Un-What?” 29.

35 Rancière, “Un-What?” 29.

36 On “epistemological anarchism,” see Paul Feyerabend, Against Method (London: Verso, 1993 [1975]).

37 Rancière, “Un-What?” 35.

10. Kaitlin Smith. Calling All Inspired Intellectuals


Calling All Inspired Intellectuals 

Kaitlin Smith is the founder of Wild Mind Collective. The collective comprises several formats, including a podcast, website, blog, and Facebook page. Its mission is to counter the toxic effects of the neoliberal-consumerist-industrial complex that counts as today’s academia, a state of affairs that “poses significant problems in both the lives of individuals and in broader communities hungry for the contributions of visionary thinkers rendered meek and self-doubting through academic socialization.” The following piece first appeared on the Wild Mind Collective blog.




As conditions within colleges and universities around the U.S. grow ever more dire for many knowledge workers, it is no longer terribly controversial to regard academia as a space of chronic disempowerment and emotional abuse for nearly everyone who enters its professional socialization process. This is particularly so for those inspired to amplify marginalized perspectives, deliver biting social critique, or resurface traditions of contemplation that contrast with the logic of mechanistic scholarly production.


This blog entry chronicles my experience walking without a map in response to these warnings as an inspired intellectual—someone with an unquenchable love of ideas and an unwavering commitment to personal authenticity in pursuing them. Though I have hesitated to launch this project with an extended discussion of my own experience, I believe that things will not change for this community until we find the courage to marry intellection with vulnerability in open dialogue and await no one’s authorization.

It all started for me after watching Noam Chomsky’s Manufacturing Consent documentary in high school. After giving some of his writing a cursory read and feeling inspired by his courage and intelligence, I concluded that I, too, wanted to become a professor-radical public intellectual. Perhaps needless to say, I did not understand that those things overlap only very infrequently. Despite that, this goal was reinforced at picturesque Swarthmore College, where I was groomed to continue along this trajectory and did so until a fortuitous trip to New York City brought my journey to a grinding halt. During a research trip to the Schomburg Center for Research in Black Culture in Harlem, I began to perceive the dehumanizing levels of abstraction inherent in the research process as an engine of personal estrangement and the whole experience as a precursor to the isolation that awaited me as a professional academic woman. I had previously read plenty of harrowing stories about the particular experiences of many scholars of color, the unique challenges faced by academic women of childbearing age, and the consequences of the hiring freeze that had been occurring at the time. There was something about riding the New York subway every day amid droves of strangers that amplified my manufactured estrangement and revealed that inertia would continue to push me along that path unless and until I forcibly stopped it. Suddenly, the prospect of launching into a highly consuming career in which I would not be able to speak candidly, would be undermined in pursuing partnership and motherhood, and would be made perpetually precarious felt like a choice that would quickly be made for me if I did not decide to hit the brakes. Though I did put on the brakes, I would soon learn that forging my own path—without clear models or support—was not at all easy.

My realizations prompted me to halt the process of applying to doctoral programs and, instead, begin work in healing arts— psychotherapy and modalities that nurture the mind-body-spirit. Like the tender places within some humanistic disciplines, this is a domain where narratives can be rewritten at the level of the self and the community against all apparent odds. Through a combination of primarily self-directed study in healing arts and master’s-level training and practice in psychodynamic psychotherapy, I gained rich experience working one-on-one with clients. Unsurprisingly, however, it was not long before my “intellectual DNA” resurfaced to call me back to neglected parts of myself. Throughout graduate study and clinical training, I remained the person who had many more questions than could be answered, who was unsatisfied with the mainstream paradigm of treatment, and perpetually frustrated by the politics of academic and medical knowledge production more generally. Rather than feeling at peace during client sessions, I desired a platform from which to share my ideas and engage others in dialogue. I also wanted the freedom to speak openly about my own experiences in the first person (not only a no-no within scholarly writing but also in the psychoanalytic / psychodynamic tradition in which I was trained).

What truly prompted me to re-evaluate this new direction, however, was work with supervisory figures who diagnosed my intellectualism and lack of apparent anger as pathological for a black woman. They asserted that I was “too cerebral” for the clinical profession and communicated that, in general, critical thinking beyond a certain threshold is simply not valued within the field. Though I did not know this at the outset, the anti-intellectualism present within the field is a known problem amongst many of its defectors, making the field a particularly poor destination for someone with my innate proclivities. Scenarios like these seem to be among the worst nightmares of many academics I have known. The threat of such experiences beyond the ivory tower seems to be the glue that keeps a great many people in line amid severe cognitive dissonance. Though I certainly wouldn’t wish these particular experiences upon anyone, I know firsthand how difficult it is for socially marginalized people who think critically and speak courageously to work under people who are simultaneously intimidated by us and empowered to reinforce their dominant position. The opportunity to, at least seemingly, not have a boss and to engage primarily with colleagues who demonstrate some measure of sociopolitical awareness seems to be among the major selling points of the academic career for people who meet this profile. When your self-expression disturbs stereotypical renderings of your community, finding a career where you can gain genuine respect for being yourself can feel like an impossible task. As it turns out, this seeming impossibility led me back into the lion’s den.

Despite my longstanding misgivings, it was this experience of feeling deeply pathologized, coerced, and misappropriated that led me to take my next career steps within academia after all. Wasting away in a field in which some of my greatest assets were being received as liabilities seemed like an unconscionable, unsustainable waste of time, health, and spirit. As I examined the examples and role models around me, I struggled to imagine conducting the work that felt like mine without the organizing framework of a doctoral program. I knew that I wanted to write, speak, and conduct events related to the ritual of education, academic knowledge production, and the bankruptcy of symbolic culture alone as a driver of social transformation. I was also inspired by existing lines of inquiry within Africana philosophy, indigenous thought, and ecophilosophy as I reflected upon these issues. Though the social vision behind my ideas seemed to run counter to the purpose of most traditional departments within the U.S., my private thinking was that I wouldn’t necessarily become a professor, for all of the reasons that had given me pause years prior. In my imagination, I would become some sort of hybrid of public intellectual and entrepreneur once I emerged sufficiently prepared and adorned with the PhD.

As you might imagine, however, things within my doctoral program at UChicago hardly went according to plan. One of the primary faculty members I had intended to work with ended up taking a role at another institution without informing me, and the advisor who was ultimately assigned to me was chronically unresponsive. I was irritated that I was wasting days, weeks, and months arguing with colleagues and professors about the merits of a project that was subsequently published in a similar form by someone else roughly a year later, to positive critical reception. Though I may have found a more congenial home within an interdisciplinary department or one with a more overt activist stance, the basic qualms that stopped me from taking this path earlier resurfaced with much greater clarity. Beyond the obvious challenges that afflict almost everyone on the job market in the humanities and humanistic social sciences, I found that my abstract misgivings became more concrete as I learned more about my professors’ real lives. Between the constant relocations, the lack of significant relationships, and the lack of agency to speak their truths on campus and off, I simply could not imagine committing to that life-long marathon. In addition, I began to experience some significant health challenges that were difficult to address on my meager fellowship stipend and that were severely aggravated by conditions of chronic stress. It became clear that choosing myself and choosing to live out my interior academic fantasy were mutually exclusive paths. After a summer of tough deliberation, I chose the former and have never looked back.

Though I had not been sure how I would weather the unique challenges of the academic socialization process before I started, I failed to understand how the process fundamentally changes people and how it would press me to change, too. While some people may consider this a character flaw, I believe that my unrelenting stubbornness and inability to tolerate conditions of servitude saved me from what may have become many years and decades of suffering in service to an idealized career vision that is increasingly askew from reality. The notion that one will simply get the degree and get out promptly, unchanged and unscathed, is an irrelevant pipe dream for so very many people. The fanciful equation in which the existing me + PhD will invariably = a better version of myself just doesn’t balance out. The self who endures the process of domestication may very well be markedly different in mind, body, and spirit than its predecessor (and not necessarily in a manner that the person would elect to repeat). What would happen if we could begin claiming that latter state of completion and self-authorization in the here and now?

Despite my early exposure to the horror stories of too many committed and impassioned people within this industry, my and others’ willingness to thrust ourselves into this toxic system suggests the need for a new space that helps inspired intellectuals build meaningful bodies of work and care for themselves irrespective of academic affiliation. Regardless of the slew of digital opinion pieces warning us of impending danger and abject precarity, it is extremely difficult to act upon such knowledge without a clear sense of viable alternatives and a supportive community to bear witness.

I have launched the Wild Mind Collective website because I believe that this void poses significant problems in both the lives of individuals and in broader communities hungry for the contributions of visionary thinkers rendered meek and self-doubting through academic socialization. Whether someone makes their professional home in academia, beyond it, or somewhere in-between, I want to live in a world in which all inspired intellectuals feel empowered to deliver authentic bodies of work within their chosen domains. I believe that our world desperately needs this sea change and we do, too.

If this issue has touched you or someone you know and you would like to be part of this conversation, I hope that you will tell me about your experience in the comments, share this post with a friend, and join my mailing list.

08. Glenn Wallis. Inciting Change Through Courageous Thought + Action


Podcast interview at Wild Mind Collective

In this episode, Kaitlin Smith, creator of Wild Mind Collective, interviews Glenn Wallis, a scholar of Buddhist Studies and Founder and Director of Incite Seminars—a series of animated humanities seminars that agitate personal awareness and incite social engagement amongst the general public in Philadelphia.
Here, we discuss:

(1) His journeys within and beyond educational institutions
(2) Intrinsic barriers to the creation of new kinds of subjects (people) within academic training
(3) How political concerns have reshaped his scholarly work
(4) How the unexamined assumptions of Buddhism, mindfulness, and psychology undermine the liberation they claim to promote
(5) How he is creating space for inciting, public dialogues beyond the ivory tower

To listen, click image below.


06. Colman McCarthy. Anarchism, Education, and the Road to Peace


Click icon for pdf file.

First published in Amster et al. (eds.), Contemporary Anarchist Studies: An Introductory Anthology of Anarchy in the Academy (London: Routledge, 2009). Reprinted here with permission of the author. May not be reprinted without similar permission.

Colman McCarthy is a former Washington Post columnist. He has taught courses in peace studies for over twenty years at numerous colleges and high schools. He is also the founder and director of the Center for Teaching Peace. His essays have appeared in The New Yorker, Reader’s Digest, and the Catholic Worker. He was awarded the El-Hibri Peace Education Prize (2010), the Olender Peacemaker Award (1996), and the Pax Christi Peace Teacher Award (1993).

Anarchism, Education, and the Road to Peace

One of the major draws on the US lecture circuit some one hundred years ago was Prince Peter Kropotkin. In October 1897, the revered “father” of modern anarchism, who was born to nobility in Moscow in 1842, addressed the National Geographic Society in Washington. In New York City he lectured to audiences of 2,000 people. In Boston, large crowds at Harvard and other sites heard him speak on the ideas found in his classic works, Mutual Aid; Fields, Factories and Workshops; Law and Authority; The Spirit of Revolt; and The Conquest of Bread.

Admission was 15 cents, sometimes a quarter, or else free so that (as Kropotkin desired) “ordinary workers” would be able to attend. Kropotkin came back to America for another tour in 1901. In Chicago, Jane Addams, the director of Hull House who would win the Nobel Peace Prize in 1931, was his host. Emma Goldman (who believed that “organized violence” from the “top” creates “individual violence” at the “bottom”) and Clarence Darrow praised him then, as would Lewis Mumford, Ashley Montagu, and I.F. Stone years later. The prince, a serene and kindly activist-philosopher and the antithesis of the wild-eyed bomb throwers who commonly come to mind when anarchism is mentioned in polite or impolite company, enjoyed packed houses at a time when the military muscles of American interventionism were being flexed with great fervor. In 1896, Marines were dispatched to Corinto, Nicaragua under the guise of protecting US lives and property during a revolt. In 1898, Marines were stationed at Tientsin and Peking, China to ensure the safety of Americans caught in the conflict between the dowager empress and her son. The following year, Marines were sent to Bluefields, Nicaragua to keep their version of the peace. Then it was back to China, ordered there by the McKinley administration to protect American interests during the Boxer Rebellion.

Political Washington couldn’t fail to notice that Kropotkin was on the loose, going from one podium to another denouncing the favored form of governmental coercion, the military:

Wars for the possession of the East, wars for the empire of the sea, wars to impose duties on imports and to dictate conditions to neighboring states, wars against those “blacks” who revolt! The roar of the cannon never ceases in the world, whole races are massacred, the states of Europe spend a third of their budget on armaments; and we know how heavily these taxes fall on the workers.

Unfortunately, we don’t know, or choose not to know. If we did, the lives and thoughts of nineteenth- and twentieth-century anarchists would be as discussed and studied in schools as those of the politicians who raise the funds for wars and the militarists who are paid to do the killing. After Kropotkin’s second lecture tour, with the crowds growing larger and the prince’s message growing bolder, Congress took action: in 1903 it passed a law forbidding anarchists to enter the country. In a letter to Emma Goldman, Kropotkin described an addled and anxious America that “throws its hypocritical liberties overboard, tears them to pieces—as soon as people use those liberties for fighting that cursed society.”

In the courses on pacifism and nonviolence that I’ve been teaching in law school, university, and high school classes since 1982, students get full exposure to Kropotkin. In the first minutes of the semester, I cite the Russian’s counsel to students: “Think about the kind of world you want to live and work in. What do you need to build that world? Demand that your teachers teach you that.” Hidebound by the required three-credit courses that current curricula impose, and a bit unsteady on exactly how to pursue the art of demanding, only a few students are up to acting on Kropotkin’s call. For me, it’s a victory if students make demands on themselves and dive into Kropotkin on their own, inching a bit closer to a theoretical understanding of anarchy.

To get their minds in motion, I ask students what word they first think of when anarchy is mentioned. “Chaos,” they answer, “anarchy is chaos.” But consider the chaos we already have. What about the 40-odd wars or conflicts currently raging on the world’s known and unknown battlefields? Isn’t it chaotic that between 35,000 and 40,000 people die every day of hunger or preventable diseases? Doesn’t economic chaos prevail when large numbers of the world’s poor earn less than $1 a day? Isn’t environmental chaos looming as the climate warms? Aren’t America’s prisons, which house mentally ill or drug-addicted inmates who need to be treated more than stashed, scenes of chaos? That is the real chaos occurring in the world today. Anarchists aren’t causing it; rather (it might be said) they are trying to prevent it. Meanwhile, lawmaking legislatures instruct citizens, raised to be faithful law-abiders, on what the public good is: Laws. Laws. Laws. They make us more “civilized,” say our law-making betters. The problem is, laws are made by people, and people are often wrong, so why place your faith in wrong-headedness?

The root word of anarchy is arch, Greek for rule. A half-dozen archs are in play. Monarchy: the royals rule. Patriarchy: the fathers rule. Oligarchy: the rich few rule. Gynarchy: women rule. Stretching it a bit, there is Noah’s-archy: the animals rule. (Pardon the pun. No, wait. Don’t pardon it. A certain strain of anarchists, I fear, tends to brood, so a laugh now and again can be useful.) And then we arrive at anarchy, where no one rules. Fright and fear creep into students’ minds, especially those who suspect that anarchists are high-energy people with chronic wild streaks. With no rules, no laws, and no governments, what will happen? The question is speculative, but instead of fantasizing about pending calamities that might happen, think about the calamities that are happening now: war, poverty, and the degradations of violence sanctioned by political power and laws. Indeed, as Kropotkin himself once warned:

We are so perverted by an education which from infancy seeks to kill in us the spirit of revolt, and to develop that of submission to authority; we are so perverted by this existence under the ferrule of a law, which regulates every event in life—our birth, our education, our development, our love, our friendship—that, if this state of things continues, we shall lose all initiative, all habit of thinking for ourselves. Our society seems no longer able to understand that it is possible to exist otherwise than under the reign of law, elaborated by a representative government and administered by a handful of rulers. And even when it has gone so far as to emancipate itself from the thralldom, its first care has been to reconstitute it immediately.

Extending these points, on November 17, 1921, Mohandas Gandhi wrote in his journal:

Political power means the capacity to regulate national life through national representatives. If national life becomes so perfect as to become self-regulated, no representation becomes necessary. There is then a state of enlightened anarchy. In such a state everyone is his own ruler. He rules himself in such a manner that he is never a hindrance to his neighbor. In the ideal state, therefore, there is no political power because there is no state.

The solution to the dilemma, at least in the anarchism to which I subscribe, is to remember that we either legislate to fear or educate to goodness. Law-abiding citizens are fear-abiding citizens, who fear being caught when a law is broken or disobeyed. Fined. Shamed. Punished. When a child is educated to goodness, beginning in a family where the adults have a talent or two for solving their conflicts without physical or emotional violence, he or she is exposed to lessons of kindness, cooperation, and empathy that lead to what might be called “the good life.”

Anarchists, especially when they dress in all-black and mass-migrate to protests at the World Bank or International Monetary Fund conclaves, don’t do much to persuade the public to sign on when they shout epithets at the hapless bureaucrats and papercrats crawling into work. The verbal violence serves mostly to reinforce the perception that anarchists are generally violent, conjuring the age-old image of the bomb-thrower. It’s true enough that anarchists have thrown bombs in isolated demonstrations, although we know that the greater threat comes from the bomb-droppers (beginning with the two atomic bombs dropped on the Japanese people, and the 35 more tested in the Marshall Islands during the late 1940s and early 1950s – not to mention US bombings in the last 60 years of China, Korea, Guatemala, Indonesia, Cuba, Congo, Peru, Laos, Vietnam, Cambodia, Grenada, Libya, El Salvador, Nicaragua, Panama, Iraq, Afghanistan, Yugoslavia, and Yemen, to name a few, constituting what Martin Luther King, Jr. once called “the world’s greatest purveyor of violence”). To me, and to counter the violence of the state, anarchism needs to be twinned with pacifism. Violent anarchism is self-defeating, and bangs its head into the truth once stated by Hannah Arendt in her essential work On Violence: “Violence, like all action, changes the world, but the most probable change is to a more violent world.”

And yet, if any creed is less understood than anarchism, it is pacifism. The uneducated equate it with passivity. The really uneducated pair it with appeasement. Among the latter is the late Michael Kelly, whose column “Pacifist Claptrap” ran on the Washington Post op-ed page on September 26, 2001:

Organized terrorist groups have attacked Americans. These groups wish the Americans not to fight. The American pacifists wish the Americans not to fight. If the Americans do not fight, the terrorists will attack America again…The American pacifists, therefore are on the side of future mass murders of Americans. They are objectively pro-terrorist.

A week later he was back with more, in a column arguing that pacifists are liars, frauds, and hypocrites whose position is “evil.” Kelly, whose shrillness matched his self-importance, was regrettably killed in Iraq in April 2003, reporting on a US invasion that he avidly and slavishly promoted.

The pacifist position on countering terrorism was more astutely articulated by Archbishop Desmond Tutu in a lecture on February 24, 2002, at St. Paul’s Cathedral in Boston: “The war against terrorism will not be won as long as there are people desperate with disease and living in poverty and squalor. Sharing our prosperity is the best weapon against terrorism.” Instead of sharing its wealth, however, the United States government hoards it. Among the top 25 industrial nations, it ranks 24th in the percentage of its GNP devoted to foreign aid.

Furthermore, pacifists are routinely told that nonviolent conflict resolution is a noble theory, but asked: where has it worked? Had the questioners paid even slight attention these past years, the answer would be obvious: in plenty of places, as the following recent examples illustrate.

  • On February 26, 1986, a frightened Ferdinand Marcos, once a ruthless dictator and a US-supported thug hailed by Jimmy Carter as a champion of human rights, fled the Philippines for exile in Hawaii. A three-year nonviolent revolt, staged by nuns, students, and workers trained by Gene Sharp of the Albert Einstein Institution in Boston, had brought Marcos down.
  • On October 5, 1988, Chile’s despot and another US favorite, General Augusto Pinochet, was driven from office after five years of strikes, boycotts and other forms of nonviolent resistance. A Chilean organizer who led the demand for free elections said: “We didn’t protest with arms. That gave us more power.”
  • On August 24, 1989, in Poland, the Soviet puppet regime of General Wojciech Jaruzelski fell, peacefully ceding power to a coalition government created by the Solidarity labor union, which had used nonviolent strategies against the communist dictatorship for a decade. Few resisters were killed in the struggle. Poland’s example spread, and the collapse of the Soviet Union soon followed; the daring deeds of Lech Walesa, the Nobel Peace Prize winner, and of the nonviolent Poles on the barricades with him were instrumental in bringing about this change.
  • On May 10, 1994, former political prisoner Nelson Mandela became the president of South Africa. It was not armed combat that ended white supremacy. It was the moral force of organized nonviolent resistance that made it impossible for the racist government to control the justice-demanding population.
  • On April 1, 2001, in Yugoslavia, Serbian police arrested Slobodan Milosevic for his crimes while in office. In the two years that a student-led protest rallied citizens to defy the dictator, not one resister was killed by the government. The tyrant died during his trial in The Hague.
  • On November 23, 2003, the bloodless “revolution of the roses” toppled Georgian president Eduard Shevardnadze. Unlike the civil war that marked the power struggles of the 1990s, no deaths or injuries occurred when tens of thousands of Georgians took to the streets of Tbilisi in the final surge to oust the government.

Twenty-five years ago who would have thought that any of these examples would be possible? Yet they happened. Ruthless regimes, backed by torture chambers and death squads, were driven from power by citizens who had no guns, tanks, bombs, or armies. They had an arsenal far superior to weapons of steel: weapons of the spirit. These were on display in the early 1940s when Hitler’s Nazi army invaded Denmark. Led by a defiant King Christian X, the Danes organized strikes, boycotts, and work stoppages, and either hid Jews in their homes or helped them flee to Sweden or Norway. Of this resistance, an historian quoted in the landmark 2000 film A Force More Powerful observed that

Denmark had not won the war but neither had it been defeated or destroyed. Most Danes had not been brutalized, by the Germans or each other. Non-violent resistance saved the country and contributed more to the Allied victory than Danish arms ever could have done.

Only one member of Congress voted against US entry into the Second World War: Jeannette Rankin, a pacifist from Montana who had come to the House of Representatives in 1916, four years before the 19th Amendment gave women the vote. “You can no more win a war than win an earthquake,” she famously said before casting her vote. Public reaction was so virulent that Rankin had to be given 24-hour police protection. One of her few allies that year was Helen Keller, the deaf and sightless Socialist who spoke in Carnegie Hall in New York:

Strike against war, for without you no battles can be fought. Strike against manufacturing shrapnel and gas bombs and all other tools of murder. Strike against preparedness that means death and misery to millions of human beings. Be not dumb obedient slaves in an army of destruction. Be heroes in an army of construction.

Students leaning toward anarchism and pacifism often ask how the principles of both can be personalized. I suggest starting with where you spend your money. Deny it to any company that despoils the earth. Deny it to any seller of death, whether Lockheed Martin (the country’s largest weapons maker) or the sub-contractors scattered in small towns across the land. Deny it to the establishment media that asks few meaningful questions and questions few meaningless answers. In short, “live simply so others may simply live,” which is perhaps the purest form of anarchy.

In my own life, I’ve tried to do it by keeping a cruelty-free vegan diet, consuming no alcohol, caffeine, or nicotine, and getting around Washington mostly on a trusty Raleigh three-speed bicycle. Is any machine more philosophically suited to anarchism than a bicycle? Is there an easier way to practice anarchism than joyriding on two wheels? Being street smart, which means being totally considerate of other travelers and pedaling safely, I think of all the useless laws the anarchist-cyclist can break: riding through red lights, stop signs, one-way signs, all the while getting a feel for outdoor life and its weathers, those balms cut off by windshields.

Speaking experientially—meaning 35 years and more than 70,000 miles of motion by leg-power—I’ve become an autophobe. In the clog of traffic, when car owners are penned like cattle on a factory farm and torture themselves in massive tie-ups, I remember some lines by Daniel Behrman in his minor 1973 classic from Harper’s Magazine, “The Man Who Loved Bicycles”:

The bicycle is a vehicle for revolution. It can destroy the tyranny of the automobile as effectively as the printing press brought down despots of flesh and blood. The revolution will be spontaneous, the sum total of individual revolts like my own. It may already have begun.

William Saroyan likewise wrote, in his introduction to the 1981 anthology The Noiseless Tenor, that “the bicycle is the noblest invention of [hu]mankind.” Amen to that, but only if you add that anarchism is a close second.

05. Judith Suissa. What is Anarchist Education?

Click image to view the video presentation.


Judith Suissa is Professor of Philosophy of Education at the Institute of Education, University College London. She is the author and editor of numerous publications, including Anarchist Education: A Philosophical Perspective (scroll down for reviews) and Education, Philosophy, and Well-being. Judith writes of her research interests:

I am interested in the intersection between political ideas and educational practice. I am particularly concerned to challenge the narrow focus on state schooling characteristic of so much educational philosophy, theory and research, and to explore the underlying political and moral assumptions of pedagogical relationships outside the arena of institutional forms of education. These include parent-child relationships, educational experiments that challenge the state system, and informal education. My research draws on political and moral philosophy, with a particular focus on anarchist theory, questions of social justice, the control of education, utopian theory, social change, and the role of the state.

For a succinct take, start here: “Anarchy in the Classroom,” New Humanist, vol. 120, no. 5 (September/October 2005).

You can read more about Judith Suissa’s important and increasingly urgent work here.

03. Para-Academic Handbook

Click image.






NOTES ON THE PREFIX ⋅ Alexandra M. Kokoli

A LESSON FROM WARWICK ⋅ The Provisional University

HIGHER DEGREE (UN)CONSCIOUSNESS ⋅ Emma Durden, Eliza Govender, Sertanya Reddy

NO MORE STITCH-UPS! ⋅ Charlotte Cooper









02. Gary Hall. The Uberfication of the University



Click on icon for pdf file.

Portions of The Uberfication of the University appeared in an earlier version as “The Uberfication of the University,” Discover Society, July 30, 2016, http://discoversociety.org/.

The Uberfication of the University by Gary Hall is licensed under a Creative Commons Attribution 4.0 International License.

Published by the University of Minnesota Press, 2016

111 Third Avenue South, Suite 290

Minneapolis, MN 55401-2520


The University of Minnesota is an equal-opportunity educator and employer.


Fuck off and die—and not in that order.

—Then London mayor BORIS JOHNSON, speaking to a London taxi driver during a row over Uber, June 5, 2015




The Sharing Economy

Platform Capitalism

The Reputation Economy

The Microentrepreneur of the Self

The Para-academic

The Artrepreneur

Affirmative Disruption




AT FIRST the 2008 financial crisis looked as if it was going to constitute a major threat to the long-term viability of neoliberalism. Viewed from our current vantage point, however, it seems merely to have given the champions of the free market an opportunity to carry out with increased intensity their program of privatization, deregulation, and reduction to a minimum of the state, public sector, and welfare system. The result is a condition we can describe as postwelfare capitalism.

The Uberfication of the University explores what neoliberalism’s further weakening of the social is likely to mean for the future organization of labor by examining data and information companies associated with the emergence of the corporate sharing economy. It focuses on the sharing economy because it is here that the implications for workers of such a shift to a postwelfare capitalist society are most apparent today. This is a society in which we are encouraged to become not just what Michel Foucault calls entrepreneurs of the self but microentrepreneurs of the self, acting as if we are our own, precarious, freelance microenterprises in a context in which we are being steadily deprived of employment rights, public services, and welfare support. Witness the description one futurologist gives of how the nature of work will change, given that 30 to 80 percent of all jobs are predicted to disappear in the next twenty years as a result of developments in automation and advanced robotics: “You might be driving Uber part of the day, renting out your spare bedroom on Airbnb a little bit, renting out space in your closet as storage for Amazon or housing the drone that does delivery for Amazon.”1

The book analyzes the implications of this transformation to a postwelfare capitalist society for the organization of labor largely through the prism of those who work and study in the university. It does so partly because academics, researchers, and students are now being encouraged to become microentrepreneurs of themselves and of their own lives—so even so-called good jobs are being affected—but mainly because the university provides one of the few spaces in postindustrial society where the forces of contemporary neoliberalism’s anti–public sector regime are still being overtly opposed, to a certain extent at least.2 It follows that such changes in the way labor is organized will be all the more powerfully and visibly marked in the case of the publicly funded and legally nonprofit university system. Indeed if, as research reveals, being an academic is one of the most desired jobs in Britain today, it may be precisely because this occupation is seen as offering a way of living, of being alive, that is not just about consuming and working and very little else.3

Notes to Preface

  1. Rohit Talwar, futurologist and CEO of Fast Future Research, quoted in Nicola Slawson, “Today’s Pupils ‘Could Still Be Working at 100,’” Guardian, October 7, 2015, 5.
  2. Higher education is not the only place where neoliberalism is, for the moment, being resisted. There are those who place faith in the ability of the unions to form a counterhegemonic block, while in the United Kingdom, the National Health Service and the BBC are still publicly owned institutions delivering much-valued public services—although the latter is more in the position of holding back the tide than actively resisting, being even more of an establishment institution than the university in some respects, many of its governors, managers, and employees having the same ideas about politics, business, and the world as conservative neoliberals.
  3. “YouGov research reveals that the most desired jobs in Britain are not what you might expect; they are not even the most reliably well paid ones. . . . Being an author is the number one most desired job in Britain. Not only would the most people like to be one (60%), the smallest percentage would not like to be one (32%). The only other jobs preferred by a majority are equally as bookish: librarian (54%) and academic (51%).” Will Dahlgreen, “Bookish Britain: Literary Jobs Are the Most Desirable,” YouGov UK, February 15, 2015, https://yougov.co.uk/news/2015/02/15/bookish-britain-academic-jobs-are-most-desired/.

The Sharing Economy

TALK ABOUT BEING CAREFUL WHAT YOU WISH FOR: a recent survey of university vice-chancellors in the United Kingdom identifies a number of areas of innovation with the potential to reshape higher education. Among them are “uses of student data analytics for personalized services” (the number one innovation priority for 90 percent of vice-chancellors); “uses of technology to transform learning experiences” (massive open online courses [MOOCs] and mobile virtual learning environments [VLEs]); “anytime-anywhere learning” (leading to the demise of lectures and timetables); and “student-driven flexible study modes” (“multiple entry points” into programs, bringing about an end to the traditional academic year).1 Responding to this survey, an editorial in the academic press laments that “the UK has world-leading research universities, but what it doesn’t have is a higher education equivalent of Amazon or Google, with global reach and an aggressive online strategy.”2 Yet one wonders whether any of those proclaiming the merits of such disruptive innovation have ever stopped to consider what a higher education institution emulating the expansionist ambitions of U.S. companies like Amazon and Google would actually mean for those currently employed in universities.

We can see the impact such aggressive, global, for-profit technology companies have on the organization of labor by looking at information and data analytics businesses associated with the sharing economy. Emerging from the mid-2000s onward, the sharing economy is a socioeconomic ecosystem that supplies individuals with information that makes access to things like ridesharing and sofa surfing possible on a more efficient, expanded basis. Indeed, because of the emphasis placed on the cooperative sharing and renting of preowned and unused goods, the activities and services of the sharing economy are frequently held to be very different from, or even to offer an alternative to, those provided through private, state, or public channels. As such, the sharing economy is portrayed as a means of bringing community values back into the ways in which people consume. It is also said to help address environmental issues resulting from the depletion of the planet’s resources, for example, by reducing the carbon footprint of transport. Yet the sharing economy is just part of a much larger socioeconomic ecosystem, one that is dominated by the use of computing and satellite technology to coordinate workforces and create global transnational supply chains and that enables just-in-time manufacturing through the production of low-wage labor and the exploitation of outsourced workers. Given this, it is almost as if the sharing economy has been devised to take the edge off some of the harsher aspects of life in advanced, postindustrial capitalist society, including those that have been generated in the name of austerity: unemployment, precarity, increasing income inequality, large discrepancies in property ownership, high levels of debt, and low levels of class mobility.

Certain aspects of the sharing economy, however, are also helping to enact a significant societal shift. It is a shift in which state-regulated service intermediaries, such as taxi companies and hotels, are replaced by information and data management intermediaries, such as the start-ups Uber (an app that enables passengers to use their cell phones to hail a ride with a taxi, rideshare, or private car) and Airbnb (a community marketplace for renting out private lodging and other kinds of accommodation that, like Uber, was founded in San Francisco).3 Of course, it is important to acknowledge that the sharing economy is made up of a variety of different economic arrangements, many of which are not directly involved in the replacement of state-regulated service intermediaries. These arrangements embrace for-profit, nonprofit, and collaborative structures too: those associated with fair trade collectives, freecycling networks, peer-to-peer file sharing, and the Occupy movements, for example. Even the information and data management intermediaries of the sharing economy—which include BlaBlaCar, Liquid, and Zaarly, among many others too numerous to mention—are not all the same. Each has its own specific features, characteristics, and spheres of operation within the larger ecosystem of the sharing economy. Nevertheless, rather than sharing activities, goods, and services in a fair and resilient fashion that enables a more direct exchange between the parties involved by cutting out the unnecessary middlemen, what most of these for-profit start-ups are doing is corporatizing and selling cheap and easy-to-access assets that are underutilized. In the case of Uber and Airbnb, still the two best-known examples, these assets take the form of seats in vehicles and rooms in properties that are otherwise occupied on an infrequent and temporary basis. 
In other words, they are idle resources it has up until now been difficult for capital to commodify and whose value from an entrepreneurial point of view has therefore been wasted.

For some, this move away from state-regulated service intermediaries such as taxi companies and hotels toward for-profit businesses is part of market capitalism’s increasing co-option and rebranding of the “true” community values of the sharing economy. Even if this form of economy is presented as a revival of community spirit, it actually has very little to do with sharing access to goods, activities, and services and everything to do with selling this access. (Many people insist on referring to it not as the sharing economy but as the renting economy for just this reason.) The sharing economy thus does hardly anything to challenge inequality and injustice. At best, it is capable of providing an additional or alternative source of income in what many are experiencing as economically straitened times. (It is significant that Uber and Airbnb were both founded around the time of the financial crisis, in 2009 and 2008, respectively.) For others, these technology start-ups are simply innovating too quickly for the politicians and lawmakers to keep pace—a situation regarded as likely to have highly disruptive social consequences if it continues unconstrained.4

But this societal shift from state-regulated service intermediaries to information and data analytics intermediaries also provides us with a means of understanding some of the ways in which neoliberalism has been able to advance with its program of privatization, deregulation, and reduction to a minimum of the role played by the state and public sector even after the crash of 2008. The important point to note in this respect is that, by avoiding preemptive state regulation, these profit-driven sharing economy businesses are operating according to a postwelfare model of capitalism. Here there are few legislative protections for workers and hardly any possibilities for establishing trade unions or other forms of collective agency, action, or means of generating the kind of solidarity that might be able to challenge this state of affairs. This set of circumstances often leaves those who provide services by means of the platforms of information and data management intermediaries laboring for less than the minimum wage and without a host of workers’ rights (a bundle of rights being what employment is, after all). The list of lost benefits is a long one. As Mike Bulajewski notes, it includes “the right to have employers pay social security, disability, and unemployment insurance taxes, the right to family and medical leave, workers’ compensation protection, sick pay, retirement benefits, profit sharing plans, protection from discrimination on the basis of race, color, religion, sex, age or national origin, or wrongful termination for becoming pregnant, or reporting sexual harassment or other types of employer wrongdoing.”5 All of this goes a long way to explain why in the March 2015 budget, the British government declared that it is planning to make the United Kingdom a global center for the sharing economy.6 (Uber was declared legal in the United Kingdom in October 2015.)

Notes to The Sharing Economy

  1. PA Consulting Group, Lagging Behind: Are UK Universities Falling Behind in the Global Innovation Race?, June 18, 2015, http://www.paconsulting.com/our-thinking/higher-education-report-2015/#here.
  2. John Gill, “Losing Our Place in the Vanguard?,” Times Higher Education, June 18, 2015, 5, https://www.timeshighereducation.co.uk/opinion/losing-our-place-vanguard.
  3. Evgeny Morozov, “What You Whistle in the Shower: How Much for Your Data?,” Le Monde Diplomatique, English edition, August 2014, posted on the nettime mailing list by Patrice Riemens, August 24, 2014, https://www.mail-archive.com/nettime-l@mail.kein.org/msg02769.htm.
  4. See Sebastian Olma, “Never Mind the Sharing Economy: Here’s Platform Capitalism,” Institute of Network Cultures, October 16, 2014, http://networkcultures.org/mycreativity/2014/10/16/never-mind-the-sharing-economy-heres-platform-capitalism/; Yochai Benkler, “Challenges of the Shared Economy,” World Economic Forum, February 24, 2015, https://www.youtube.com/watch?v=mBF-GFDaCpE.
  5. Mike Bulajewski, “The Cult of Sharing,” Mrteacup: A Blog of Philosophical Reflections and Speculations, August 5, 2014, http://www.mrteacup.org/post/the-cult-of-sharing.html.
  6. “Support for the Sharing Economy,” in H. M. Treasury, Budget 2015, section 1.193, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/416330/47881_Budget_2015_Web_Accessible.pdf.

Platform Capitalism

IT IS NO SURPRISE that one of the other names associated with this aspect of the sharing economy is “platform capitalism.” As Sebastian Olma points out, building on the work of Sascha Lobo, this is the term given to the “generic ‘ecosystem’” in which a software-driven environment such as eBay or TaskRabbit, rather than simply acting as a marketplace for connecting customers to companies, is “able to link potential customers to anything and anyone, from private individuals to multinational corporations.” In platform capitalism, then, “everyone can become a supplier for all sorts of products and services at the click of a button.”1 Indeed, if neoliberalism can be understood as trying to introduce market rationality into all areas of society, the for-profit sector of the sharing economy appears as almost the neoliberal ideal. It creates a situation in which members of the general population not only aspire to own their own homes—the vision the conservative prime minister Margaret Thatcher sold to the British working classes in the 1980s with the right-to-buy scheme (itself designed as a means of reducing the role played by the state in the form of collectively owned social housing)2—but also have the opportunity to become private capitalist entrepreneurs and economic agents themselves. And in the case of Airbnb, one way in which they can do so is precisely by trading otherwise underutilized space in their now privately owned homes. As the company’s cofounder and CEO, Brian Chesky, proclaims, previously, “only businesses could be trusted, or people in your local community. Now, that trust has been democratized—any person can act like a brand. . . . It means that people all over a city, in 60 seconds, can become microentrepreneurs.”3

The information and data management intermediaries of the sharing economy may create jobs, then, but “it’s a new kind of job,” as Chesky readily acknowledges. “Maybe it’s like a 21st-century job,” he suggests. Or maybe, given the lack of workers’ rights and degree of externalized risk, it’s like a very old kind of job: a Victorian, nineteenth-century job.4 For these companies, and the microentrepreneurs who labor for them—and who in the past would have been known as employees—are operating in an open market that is relatively free from the ability of state regulators, the labor movement, and trade unions not only to put a limit on the maximum hours those employed in these new kinds of jobs work in a day or week but also to specify the minimum wage they should receive, the number of days off they need, and the paid holidays and free weekends they are entitled to.5 It is as if many of these means of taking a break from work and having some downtime are now to be provided by the for-profit sharing economy businesses themselves—along with other companies with an investment in the management of information and data (and the aggressive avoidance of tax), such as Apple and Google (or Alphabet, as the search engine’s parent company is now called)—through their ability to save people’s time and complete tasks for them. To provide just one example that is today already commonplace, consider the linking of users’ electronic diaries to their e-mail accounts and sending of automatic calendar appointment and “to-do” list reminders to them on their cell phones. (From June 2012, the Reminders app has come with every Apple device running iOS 5 or above, while the ColorNote app for Android has been downloaded 80 million times since it was launched in 2009.) It won’t be too long before phones and watches are scheduling meetings and sending replies for people, with driverless cars that can be summoned by smartwatch predicted to be just fifteen years away. 
The “taptic engine” feature of the Apple Watch means users don’t even have to spend time taking their phones out of their pockets to know they’ve received a message: the watch just gives them a gentle tap on the wrist. Given that a study finds the average user picks up her phone eighty-five times a day, amounting to approximately one-third of the time she is awake, this represents a not inconsiderable saving.6 Yet a question can be raised as to whether these companies and their products are saving people’s time or whether they are actually enabling them to work even more. A survey of fifteen hundred senior staff released by the Chartered Management Institute in 2016, for instance, reveals that they spend a total of twenty-nine days a year working on smartphones and tablets outside of working hours. That’s the equivalent of most employees’ annual holiday allowance. No wonder the mode of production we appear to be moving toward has been described as not quite capitalism as it is classically understood but as “something worse.”7 Or that some tech workers are temporarily unplugging from electronic communications and putting their phones into flight mode as a means of gaining relief from the stress of having to answer e-mails and check Twitter constantly (and concentrate all the better on developing the systems that so control them and everyone else).8 Digital detox has even become something of a luxury status symbol among celebrities, with the actor Eddie Redmayne admitting recently that he has swapped his smartphone for an old-fashioned handset that doesn’t have an Internet connection.

Production and control, profit and risk, are not shared in this sector of the economy at all, then. It is the decentralized networks of users who benefit from the greater convenience and reduced prices afforded by the sharing economy and who help to build the platform by providing the aggregated input, data, and attention value that function to generate a market. Noticeably, these users do not form a social community in the manner they do on other kinds of digital platforms: Wikipedia, for example, or even Facebook. And this is despite the fact that the technology companies concerned often employ the language of grassroots movements when addressing them: when Uber tries to mobilize its app-enabled users to protest against attempts by the state and other representatives of the “old economy” (trade unions, city mayors, cab companies) to regulate it, for instance. What is more, because they never really own the products and services they are purchasing, these users are more susceptible to questionable practices on the part of the sharing economy’s microentrepreneurs than would ordinarily be the case in a state-regulated market.

Meanwhile, it is the owners of the information and data management intermediaries who take the profits generated by financializing, corporatizing, and exploiting the “sharing” of goods and services between the users and microentrepreneurs that these owners enable by turning this exchange into a market. Not surprisingly, the former tend to be well-funded professional entrepreneurs, as opposed to the more amateur microentrepreneurs who do a lot of the actual labor. The owners—who are small in number, any wealth generated thus being concentrated in the hands of relatively few—also centrally control the platform, software, algorithm, data, and associated ecosystem, deciding on pricing and wage levels, work allocation, standards, conditions, and preferred user and laborer profiles. This means that who does and does not get to work as a microentrepreneur in this economy is not delimited by state legislation or organized union activity designed to protect workers from being exploited and discriminated against. In fact, research on the sharing economy shows that a certain “homophily” occurs, by which it is often “similar ‘types’ of people [who] provide and use these services (in terms of class, education and race),” especially when a rating system is employed.9 Uber, for example, enables both customers and drivers to rate one another and suspends drivers if their scores are not high enough. There is also reported to be a regular scarcity of female “drivers for hire” in many of the cities across the world in which Uber operates.

Finally, it is the often quite isolated microentrepreneurs (who can now be potentially “any person” rather than a specific set of formally contracted employees) who labor to provide these services in the market created by the platform on a freelance, low-paid, on-demand, and precarious basis; who take the risks associated with having lost their rights, benefits, and protection as employees in this “gig economy,” as it is sometimes known; and who, depending on the particular platform, often face “increased surveillance, deskilling, casualization, and intensification” of their labor too.10 Hence former U.S. secretary of labor Robert Reich’s description of this economic model as less of a sharing economy and more of a “share-the-scraps economy”: “the big money goes to the corporations that own the software. The scraps go to the on-demand workers.”11

Of course, this four-part structure consisting of customers, owners, labor intermediaries, and workers has long been a feature of advanced capitalism. For twenty years and more, companies have been downsizing their role as employers by outsourcing work to independent contractors, freelancers, and temps, thus reducing costs by circumventing labor laws that establish minimum standards, with all the attendant consequences for staff income, conditions, rights, benefits, and pensions. After all, companies cannot be held responsible for people if they are not officially contracted as their employees. The dispute at the National Gallery in London over the outsourcing of visitor services is thus merely one of the more recent examples of this long-standing practice, the (failed) attempt to casualize academic work at Warwick University by outsourcing hourly paid staff to a company called Teach Higher another. What makes the corporate sharing economy significantly different in this respect is the following:

  1. The intermediaries are no longer agencies for outsourced labor. Such agencies have been replaced by data-driven platforms or apps, making it difficult for workers to negotiate for better pay and conditions—you can’t argue very easily with the logic of an algorithm.
  2. The customers and laborers are subject to monitoring, surveillance, and control on an individual, finely grained basis facilitated by the development of GPS-enabled location services and networked mobile media (smartphones, tablets, etc.).
  3. Workers are not a coherent group of formally contracted employees (even if they are often managed as though they are) but can now be anyone. It is a state of affairs that has the effect of turning all of us into potential self-employed economic agents, as anyone can rent out spare capacity in her home or car.12

Notes to Platform Capitalism

  1. Olma, “Never Mind the Sharing Economy.” See also Sascha Lobo, “Die Mensch-Maschine: Auf dem Weg in die Dumpinghölle,” Der Spiegel, September 3, 2014, http://www.spiegel.de/netzwelt/netzpolitik/sascha-lobo-sharing-economy-wie-bei-uber-ist-plattform-kapitalismus-a-989584.html. For more on the politics of platforms, see the “Platform Politics” issue of Culture Machine 13 (2014), http://www.culturemachine.net/index.php/cm/issue/view/25.
  2. Whereas 42 percent of the U.K. population lived in a council house in 1979, today that number is less than 8 percent.
  3. Brian Chesky, interview by Rik Kirkland in “The Future of Airbnb in Cities,” McKinsey & Company, November 2014, http://www.mckinsey.com/Insights/Travel_Transportation/The_future_of_Airbnb_in_cities?cid=other-eml-alt-mip-mck-oth-1411. It is worth noting that Chesky’s last point is not strictly correct: more often than not, you need to have an asset—be it property, a car, or time—to be in a position to “share.”
  4. Ibid.
  5. What is more, like many neoliberal businesses, these information and data management intermediaries continue to have a parasitical relationship with the state at the same time as they argue against it and its regulation. For example, while Chesky argues for the modernization—which in this context is nearly always a code word for neoliberalization—of “outdated” state legislation and laws that restrict what is possible when a person acts like a brand, he nevertheless goes on to acknowledge of Airbnb that “the reason it’s grown so fast is, unlike traditional businesses, we don’t have to pour concrete. The infrastructure and the investment was already made by cities a generation ago. And so all of a sudden, all you needed was the internet.” Ibid. It is an Internet, one might add, which was itself the product of state funding for research, education, and technological development, namely, that behind CERN.
  6. “How We Use Our Smartphones Twice as Much As We Think,” Lancaster University, October 29, 2015, http://www.lancaster.ac.uk/news/articles/2015/how-we-use-our-smartphones-twice-as-much-as-we-think/; Sally Andrews, David A. Ellis, Heather Shaw, and Lukasz Piwek, “Beyond Self-Report: Tools to Compare Estimated and Real-World Smartphone Use,” PLoS One, October 28, 2015, http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0139004.
  7. McKenzie Wark, “Digital Labor and the Anthropocene,” DIS Magazine, http://dismagazine.com/disillusioned/discussion-disillusioned/70983/mckenzie-wark-digital-labor-and-the-anthropocene/.
  8. My thanks to Kathleen Fitzpatrick for this last point.
  9. Moira McGregor, Barry Brown, and Mareike Glöss, “Disrupting the Cab: Uber, Ridesharing, and the Taxi Industry,” Journal of Peer Production, no. 6 (January 2015), http://peerproduction.net/issues/issue-6-disruption-and-the-law/essays/disrupting-the-cab-uber-ridesharing-and-the-taxi-industry/.
  10. Ibid.
  11. Robert Reich, “The Share-the-Scraps Economy,” Robert Reich (blog), February 22, 2015, http://robertreich.org/post/109894095095.
  12. As we shall see in what follows, it is a state of affairs that also threatens to take us beyond even the level of potential disruption associated with what we might call the Warwick University–Teach Higher model of higher education. In this model, it is universities, not learners, who are purchasing casualized teaching services from intermediaries such as Teach Higher on behalf of students as consumers. It is also universities in this model that, with the encouragement of for-profit corporation education providers such as Pearson, are “unbundling” their different functions to be able to contract each of them out separately to agencies with the aim of using competition to improve efficiency. My thanks to John Holmwood for reminding me of the importance of this Warwick University–Teach Higher model to any account of the future of higher education.


IN THE INTERESTS OF CAPITAL, the for-profit sharing economy can therefore be seen to be involved in the process whereby each of us is being transformed into a dispersed, atomized, precarious, freelance microentrepreneur. That said, concerns about the sharing economy are all too easy to push to the back of our minds when we’re trying to find an inexpensive place to stay for a weekend break or calling a taxi to take us home from a friend’s place late at night. Many women especially consider Uber to be safer than a minicab, with its unknown driver (although there have been complaints that the company could do more to ensure the safety of female passengers). Uber also has the advantage of costing less than a licensed taxi and being easier and more convenient than both. With Uber you can track your vehicle as it approaches in real time and so be sure you are getting into the right one. Others appreciate the freedom from having to deal with cash that Uber’s frictionless digital payment system provides. It is only when we begin to think about these information and data management intermediaries from the point of view of a worker rather than a user, and consider their potential to disrupt our own sphere of employment—with the associated consequences for our job security, income, sick pay, retirement benefits, pensions, and, as we shall see, subjectivities—that the full implications of the shift to a socially weakened form of capitalism they are helping to enact are really brought home. So what is the potential effect of this transformation in the organization of labor on higher education?

In April 2015, LinkedIn, the social networking platform for professionals based in Mountain View, California, spent $1.5 billion purchasing Lynda.com (also based in California), a supplier of online consumer-focused courses. Although it does not address the sharing economy specifically, a report of this deal published shortly afterward in the U.S. Chronicle of Higher Education under the title of “How LinkedIn’s Latest Move May Matter to Colleges” was quick to draw attention to its potential implications for higher education.1 Of course, with its University Pages and University Rankings Based on Career Outcomes,2 LinkedIn already has enough data to be able to provide the kind of detailed analysis of which institutions and courses are launching graduates into which jobs and long-term career trajectories that no single traditional university can hope to match—and that’s before its purchase of Lynda.com. But what the piece in the Chronicle made clear is that, with LinkedIn’s imminent transition into being both a social network and an actual provider of education, such data could easily be used to develop a successful information and data intermediary business model for higher education: if not next year, then certainly in the near future, and if not by LinkedIn, then by some other for-profit technology company (Uber or Academia.edu, say, the latter having a business plan that depends on its ability to exploit data flows related to research).3 Such a model would be based on providing “transparent” information on a finely grained basis to employers, students, funding agencies, governments, and policy makers. 
This information would indicate which of the courses, classes, and possibly even teachers on any such educational “sharing economy” platform are better at enabling students from a given background to obtain a particular academic degree classification or other educational credential or qualification, make the successful transition to a desirable job or career, reach the top of a given profession in a particular town, city, or country, and so achieve a high level of job satisfaction, security, salary, income, and earning capacity over a specific period or even a lifetime. You know the kind of thing: if you liked reading this book by this author, then you might also like taking this undergraduate course at X college, and this master’s course at Y, and applying for this starter post at company Z.

It doesn’t end there. The Chronicle article also detailed how, in 2014, LinkedIn bought a company called Bright. Bright has developed algorithms enabling it to match posts with applicants according to the latter’s particular achievements, competencies, and skill sets. And it wouldn’t be too difficult for a for-profit business, with the kind of data LinkedIn now has the potential to gather, to do much the same for employers and students—right down to the level of their salary expectations, extracurricular activities, “likes,” or even reputational standing and degree of trustworthiness. This business could charge a fee for doing so, just as online dating agencies make a profit from introducing people with compatible personalities and interests as deduced by algorithms. It could then charge a further fee for making this ultra-detailed information and data available on a live basis in real time—something that would no doubt be highly desirable in today’s “flexible economy,” where many employers want to be able to draw from a pool of part-time, hourly paid, zero-hours and no-contract workers who are available “on tap,” often at extremely short notice.4 Moreover, feeding all the data gathered back into the system would mean the courses, curricula, and class content of any such educational data and information intermediary, along with their cost, could be continuously refined and so made highly responsive to student and employer needs at local, national, and international levels. More ominously still, given that it would be able to control the platform, software, data, and associated ecosystem, it is clear that such a platform capitalist higher education business would also have the power to decide who could be most easily seen and found in any such alternative market for education, much as Google does with its page ranking. 
(In April 2015, the European Commission decided that Google had a case to answer regarding the possible abuse of its dominance of search through “systematically” awarding greater prominence to its own ads.)

Perhaps understandably, following all the furor over MOOCs, the Chronicle’s analysis of LinkedIn’s acquisition of Lynda.com shied away from arriving at any overly pessimistic conclusions as to what all this may mean for higher education and its system of certification and credentialing. Nevertheless, if a company like LinkedIn made the decision to provide this level of finely grained information and data for its own unbundled, relatively inexpensive online courses (and perhaps any other nontraditional for-profit education providers that sign up with them), but not for those offered by its more expensive market competitors in the public, nonprofit sector, it would surely have the potential to be at least as disruptive as Coursera, Udacity, FutureLearn, and others have proven to be to date, if not considerably more so. For the kind of information about degrees and students’ final destinations, and the ability to react to market changes, that any traditional bricks-and-mortar institution is capable of providing on its own would appear extremely unsophisticated, limited, and slow to compile by comparison. And lest the adoption by a for-profit sharing economy business of such an aggressive stance toward public universities seem unlikely, it is worth noting that Google maintains its dominance of search in much the same way. In the words of its chief research guru, Peter Norvig, the reason Google has a 90 to 95 percent share of the European market for search is not that it has better algorithms than Yahoo! and Bing but rather that “it just has more data.”5 Indeed, one of the great myths about neoliberalism is that it strives to create competition on an open market. Yet, as the venture capitalist Peter Thiel, cofounder of PayPal and early Facebook investor, emphasizes in his book Zero to One, what neoliberal businesses actually want is to be a monopoly: to be so dominant in their areas of operation that they in fact escape the competition and become a market of one. 
“Competition,” as Thiel puts it elsewhere, “is for losers.”6 As if to testify to this belief, LinkedIn was itself bought by Microsoft in mid-2016 for $26.2 billion.

Of course, as a consequence of neoliberalism’s program of privatization, deregulation, reduction to a minimum of the public sector, and insistence that even publicly funded universities operate like businesses and embrace a lot of the same practices and value systems as for-profit corporations (despite the fact that many are registered charities and therefore have education, not profit generation, as their primary function), large numbers of those who work in higher education already have temporary, fixed-term, part-time, hourly paid, zero-hour, and other forms of contingent positions that make it difficult for them to offer much by way of resistance to the erosion of their academic freedom and economic security. According to the American Federation of Teachers, “76% of the total faculty workforce is now in non-permanent posts and 70% of these are part time.”7 Meanwhile a report by the University and College Union (UCU) finds that “54% of all academic staff, and 49% of teaching staff in UK universities are employed on insecure contracts,”8 with a UCU survey of twenty-five hundred casualized staff identifying one-third of those in universities as already experiencing difficulty paying household bills, while as many as one-fifth have problems finding enough money to buy food.9 Yet if something along the lines of the preceding scenario regarding the development of a successful information and data intermediary business model for higher education does come to pass, it will without doubt have the effect of further disrupting the public, nonprofit university system—only this time by means of a profit-driven company operating according to a postwelfare capitalist philosophy, just as Uber is currently disrupting state-regulated taxi companies and Airbnb the state-regulated hotel industry. Increasing numbers of university workers will thus find themselves in a situation not dissimilar to that facing many cab drivers today. 
Instead of operating in a sector regulated by the state, they will have little choice but to sell their cheap and easy-to-access courses to whoever is prepared to pay for them in the “alternative” sharing economy education market created by platform capitalism. They too will become atomized, freelance microentrepreneurs in business for themselves. And as such, they will experience all the problems of deprofessionalization, precarity (in the sense of being unable to control or even anticipate their own future), and continuous performance monitoring by networked surveillance technologies that such an economy brings. Is this what vice-chancellors and university presidents actually want?

Notes to Uber.edu

  1. Goldie Blumenstyk, “How LinkedIn’s Latest Move May Matter to Colleges,” Chronicle of Higher Education, April 17, 2015, http://chronicle.com/article/How-LinkedIn-s-Latest-Move/229441/. Although Blumenstyk does not discuss the sharing economy, the speculative scenario that follows was inspired in part by this article’s reflections on some of the possible implications for higher education of LinkedIn’s acquisition of Lynda.com.

  2. http://blog.linkedin.com/2013/08/19/introducing-linkedin-university-pages/; https://www.linkedin.com/edu/rankings/us/undergraduate.

  3. For one (uncritical) account of how the technology—including mobile apps, online assessments, and a blockchain system for recording all aspects of each transaction—already exists to make such an HE platform a reality, see “Uber-U Is Already Here,” May 6, 2016, http://teachonline.ca/tools-trends/exploring-future-education/uber-u-already-here. For more on Academia.edu, see Gary Hall, “What Does Academia.edu’s Success Mean for Open Access: The Data-Driven World of Search Engines and Social Networking,” Ctrl-Z: New Media Philosophy 5 (2015), http://www.ctrl-z.net.au/journal/?slug=issue-5, and Janneke Adema and Gary Hall, eds., Really, We’re Helping to Build This . . . Business: The Academia.edu Files (London: Open Humanities Press, 2016), http://liquidbooks.pbworks.com/w/page/11135951/FrontPage.
  4. Alessandro Gandini reports that “a recent survey conducted by the Freelancers Union in partnership with Elance-oDesk, a major digital marketplace for contractors and freelancers worldwide, shows how 53 million Americans were generating some or their entire income earned in 2013 from freelancing, making up 34% of the entire American workforce.” Alessandro Gandini, “Digital Work: Self-Branding and Social Capital in the Freelance Knowledge Economy,” Marketing Theory, October 1, 2015.
  5. Peter Norvig, speaking to Tim O’Reilly, quoted in Tim O’Reilly, “A Few Thoughts on the Nexus One,” Radar, January 5, 2010, http://radar.oreilly.com/2010/01/the-nexus-one-vs-iphone.html.
  6. Peter Thiel, “Competition Is for Losers,” Wall Street Journal, September 12, 2014, http://www.wsj.com/articles/peter-thiel-competition-is-for-losers-1410535536.
  7. See Mary O’Hara, “‘I Feel Guilty Spending My Money on Food. That’s How Low My Income Is,’” Guardian, November 17, 2015, 40.
  8. University and College Union, Precarious Work in Higher Education: A Snapshot of Insecure Contracts and Institutional Attitudes, April 14, 2016, https://www.ucu.org.uk/media/7995/Precarious-work-in-higher-education-a-snapshot-of-insecure-contracts-and-institutional-attitudes-Apr-16/pdf/ucu_precariouscontract_hereport_apr16.pdf.
  9. University and College Union, Making Ends Meet: The Human Cost of Casualisation in Post-Secondary Education, May 21, 2015, http://www.ucu.org.uk/media/7279/Making-ends-meet—the-human-cost-of-casualisation-in-post-secondary-education-May-15/pdf/ucu_makingendsmeet_may15.pdf.

The Reputation Economy

THE SHARING ECONOMY thus intensifies the neoliberal belief in the power of markets that are unregulated and underregulated by the state to improve the efficiency of society’s performance by using competition and consumer choice as a way of expressing public decisions—in this case, regarding the funding of higher education.1 If you are a poor teacher in this economy, the market forces you to take responsibility for being so and either improve or quit. State regulation is unnecessary—as is institutional intervention. The same applies if you teach a subject that proves to be unpopular with students. (Any anger or critique is directed accordingly: inward onto the self rather than outward onto social, political, or economic factors. It’s not the system of higher education that’s the problem; it’s me!) Indeed, the development of preemptive technologies means that in the future, the market may even be able to discipline and control you before you have done anything wrong—and, what’s more, without you knowing it’s doing so. Some employers are already rejecting job candidates based on the browser they use when electronically submitting their applications. This is because analysis of the relevant data has revealed to these companies that applicants who use a less common browser to do so statistically make for better employees. The Open University in the United Kingdom has even developed an algorithm capable of predicting a student’s final grade based on her performance during just the first week of a degree course. Significantly, it takes into account factors such as “how enthusiastically students participate in online learning forums to improve their results.”2

It is not hard to see where a situation of the kind sketched here is likely to lead. Among the finely grained data gathered by any such platform capitalist higher education business will almost certainly be student ratings of the performance of individual teachers as compared to their peers: not just the kinds of evaluations students already provide of their professors concerning how easy it is to get good feedback, marks, and grades in their classes, for example, or the extent to which they are always on and available to answer student questions and respond rapidly to queries by e-mail, Facebook, or text message, no matter what time of day or night it is, but also how good they are at adapting to the social and emotional needs of students—even how upbeat, friendly, and fun they are (in the way Uber drivers are expected to be chatty).

According to research commissioned by the UCU in 2013, increasing numbers of U.K. academics are experiencing problems of mental health. No doubt belonging to a profession that attracts overachievers does not help. Nor does the fact that performance indicators encouraging self-monitoring, self-assessment, and self-comparison are more or less built into the career trajectory. Academics learn very early on just what kind of student evaluation and feedback is needed, and how many books, journal articles, and grants are required in their particular field, if they are to get that first full-time job, acquire tenure, achieve promotion, rise to chair—and what is more, they learn to accept this state of affairs as the norm. Now that continual benchmarking in terms of “excellence” has been introduced, academics are constantly asked to keep a measurable account of everything that happens in their working lives. This includes the number of keynote and plenary lectures they give, the visiting positions they hold, the classes and courses they teach, the leadership they display, and the amount of external income they generate, along with a host of other indicators of the significance, rigor, and originality of their research, its influence, and impact.

A couple of questions are worth raising here. Is all this auditing a way to manage academics in an era when, as Sarah Brouillette puts it in relation to the creative economy, “a spirit of opposition to assigned roles and an openness to change have become crucial facets of the ability to labor successfully” and produce the kind of innovation that leads to economic growth? In such circumstances, does one have to submit to the ceaseless (self-)scrutiny of the management protocols as “a marker of one’s commitment to one’s work” and self-exploitation?3 Certainly it is a situation that appears to be at odds with the way the profession often operates according to something of a “celebrity” approach, whereby, in each field, only a relatively small number of thinkers are deemed “fashionable” on a given topic (i.e., those everyone must quote and cite, and whose work thus dominates reading lists and bibliographies). Or at least it does until one realizes just how effective this setup is in fostering an acceptance of high levels of intense, individualistic, masculine, neoliberal competition. As Angela McRobbie indicates, drawing on the example of the 9:00 A.M. to 7:00 P.M. working day Thomas Piketty maintains is required to have a successful academic career, it is a model of excellence based on the idea of the brilliant man: someone who can have an enjoyable family life only because he has a wife to provide large amounts of unrecognized domestic support in the form of shopping, cooking, cleaning, and childcare.4

The result is that many academics are indeed suffering from stress, anxiety, loneliness, psychological exhaustion, depression, and distress. Yet circumstances will grow markedly worse for this workforce—a high percentage of whom are already in an insecure position—if a shift to a for-profit sharing economy higher education ecosystem occurs, with its ever-present risk to individual teachers of their ratings falling, and even of their being suspended from the platform should they be unable to keep their scores and student acceptance ratio high enough. Such monitoring will be all the more stressful for being never ending, with no final judgment being arrived at—other than suspension or rejection. Rather, these rating systems will act as prods by which academics are motivated to continuously try to do better. In such a scenario, it will not be long before they begin to act like those microentrepreneurs on eBay who, because their service is constantly being rated, are desperate not to be given any negative feedback on their sales.5

That said, ratings systems are far from confined to the online world of platform capitalism. Nowadays, it is not unusual for those working in retail to ask in advance to be given positive feedback if their company has a policy of following up in-store purchases with online requests for customer feedback and a rating of the shopping experience, which almost invariably includes an assessment of the helpfulness of the assistant. In fact, it looks like most individuals in the future will have a reputation score, analogous to their credit rating, based on their online influence and behavior, social connections, and the degree to which they can be trusted, whether they are borrowing money, applying for a job, taking out health insurance, asking for a date, sharing a ride, posting a review, or just leaving a comment.

Again, this may appear something of an exaggeration—yet it, too, is already happening. In November 2014 an anonymous article appeared in the press. It was written by a woman in Germany who purchased tickets for a concert at the Glocke concert hall in Bremen. This woman, the governess of a school in possession of a perfect credit history, discovered that she was unable to arrange a place to stay in the city for the weekend through Airbnb because the website deemed that she had too few friends on Facebook. It turns out that she only had fifty, whereas she required at least one hundred to verify her online identity and prove she was real to Airbnb—and this despite having booked with a credit card and verified her offline identity by scanning in a copy of her driver’s license or passport.6 A “person rating” app called Peeple—described by its developers as “a positive app for positive people”—has even been released that allows users to recommend and review individuals they know on a personal, professional, or romantic basis using a positive–neutral–negative rating system. And before you laugh, such ratings, whereby a person’s character is turned into a form of currency, may prove to be increasingly important, because, as Michael Fertik, founder and CEO of Reputation.com, emphasizes, your reputation is tied to that of others in your networks. “The more often your friends default on debt,” for example, “the more likely you are to default on debt as well.”7 So if you are thinking of taking out a mortgage and know someone with money troubles, you may want to reconsider your relationship.

Not surprisingly, a number of companies have already been set up to manage people’s reputations for them—at a price. (Reputation.com charges a minimum of US$1,050/£700 a year.) They do so by showing you “what keywords to put in your resume . . . and LinkedIn profile to ensure that you come up at the top of recruiters’ and potential employers’ search results.”8 They also bombard sites such as Instagram, YouTube, and Tumblr for you with positive—and mostly bland—content to “create false trails and digital smoke screens.” It turns out adopting an interest in cats and top-forty chart music is particularly helpful in this respect. As a result, anything negative or that “doesn’t match how you want to be perceived” moves down the rankings of content search engines, making it harder to find and so far less visible.9 Fifty-three percent of people only pay attention to the first two search results, according to Google, while 89 percent don’t make it to the second page.

In the future, are academics going to have to manage their reputations too? Are we going to have to put a lot of work into performing sociality with our colleagues, students, peers, and friends on Facebook, Twitter, and Academia.edu to ensure that we maintain a good reputation score?10 Will we similarly have to feed these platforms with a stream of vanilla, on-message content—whatever the academic equivalent of cat pictures is—so as to make anything potentially controversial or overly negative more difficult for platform capitalism’s algorithms to discover and highlight?11

Notes to The Reputation Economy

  1. Of course, these markets only appear to be unregulated and underregulated by the state. The state still defines the rules and limits of the market, protecting private property, financial assets, and so forth.
  2. “The Week in Higher Education,” Times Higher Education, July 13, 2015, 4.
  3. Sarah Brouillette, Literature and the Creative Economy (Stanford, Calif.: Stanford University Press, 2014), 207. For another—albeit perhaps extreme—example of a system of this kind in operation, see the account of life working at Amazon provided in Jodi Kantor and David Streitfeld, “Inside Amazon: Wrestling Big Ideas in a Bruising Workplace,” New York Times, August 15, 2015, http://www.nytimes.com/2015/08/16/technology/inside-amazon-wrestling-big-ideas-in-a-bruising-workplace.html?_r=1.
  4. Angela McRobbie, “Women’s Working Lives in the Managerial University and the Pernicious Effects of the ‘Normal’ Academic Career,” London School of Economics and Political Science: The Impact Blog, September 3, 2015, http://blogs.lse.ac.uk/impactofsocialsciences/2015/09/03/womens-working-lives-in-the-managerial-university/.
  5. Lest there be any confusion, I want to make it clear that none of this is to suggest professors should not care about their students or the quality of their teaching. As will become evident later, my point is rather to highlight the antipedagogical nature of this kind of neoliberal audit culture.
  6. “I Didn’t Have Enough Facebook Friends to Prove to Airbnb I Was Real,” Guardian, November 14, 2014, http://www.theguardian.com/money/blog/2014/nov/14/airbnb-wont-let-book-room-facebook-friends. The woman in question kept her name off the article as she was understandably reluctant to broadcast her “official friendlessness” over the Internet.
  7. Michael Fertik, The Reputation Economy (London: Piatkus, 2015), 172.
  8. Ibid., 16.
  9. Ibid.
  10. The workload is increased by the fact that each platform caters to a different community and uses different tools to produce its ratings and scores. Each platform therefore requires the adoption of a slightly different approach when it comes to reputation management and the performance of sociality.
  11. Actually, the academic equivalent of cat pictures may be . . . cat pictures. See both the hashtag #academicswithcats and the Academia Obscura blog’s Academic Cats Hall of Fame, http://www.academiaobscura.com/academic-cats-hall-of-fame/.

The Microentrepreneur of the Self

ONE THING IS FOR SURE: achieving a degree of autonomy from these processes of advanced capitalist valorization, modernization, and control by means of a strategic withdrawal of intellectual and bodily labor of the kind championed by the Autonomists—and more recently by Slavoj Žižek—is going to be difficult, if not impossible.1 Although it is perhaps better suited to the Fordist means of production of the factory with its heavy reliance on presenteeism, there is still a certain amount of potential to adopt such a strategy in the public higher education system today, thanks in particular to the protection afforded by state legislation and the unions. Witness the example of Derek Sayer, a professor of cultural history at Lancaster University, who appealed against his inclusion in the 2014 Research Excellence Framework (REF), the system by which the quality of research in U.K. higher education institutions is assessed.2 But apart from the fact that, as Sayer’s situation demonstrates, such quantification systems are capable of exploiting our labor whether we consciously opt into them or not (just as we are all on Facebook, regardless of whether we have signed up to join its social network), any freelance individual microentrepreneur who assumes an attitude of noncompliance, nonproductivity, inactivity, laziness, silence, refusal, time wasting, or passive sabotage is unlikely to acquire the kind of rating and reputation score that is needed to retain a gig as an academic in a platform capitalist higher education market. She is quickly going to be found “metrically inadequate,” in John Holmwood’s memorable phrase.3

Faced by such a situation, it is all too easy to imagine fewer and fewer academics being prepared to take a chance on teaching the kind of critically inclined arts and humanities courses that run the risk of being rated as difficult, complex, or otherwise economically unproductive and unviable: say, because they are challenging the status quo (rather than merely servicing it) by exploring alternative social, political, and economic visions of the future that are indeed about more than work, consumption, and the generation of large profits for someone else to own privately. Instead, academics are likely to prefer to run courses in subjects that are perceived by student debtors-as-consumers as having the potential to help them gain a “good” job with a decent salary. They will thus be involved mainly in producing the type of unthreatening, lower-level, vocational “workers” that are needed by postwelfare capitalism (and which the current push on the part of many governments toward an “employability agenda” for much of higher education seems determined to generate) rather than the kind of educated public citizens or creative critical thinkers who are capable of maintaining some control over their own work and futures (and who therefore might not be quite so focused on the maximization of production and profit). Any future politicians, business leaders, scientists, or technologists who want an education of that nature will need to attend the kind of “leading” traditional university capable of surviving such disruption.

Yet the for-profit sharing economy acts on far more than the sphere of labor. It acts even on those elements of life that used to be beyond the control of the corporation—underused assets in those most private of spaces, people’s homes and cars—but also their sociability, their modes of self-presentation, their personalities. It is not just a political and economic system of management and control, then; it is a psychological one. In fact, the sharing economy is a regime of subjectification designed to produce a specific form of self-preoccupied, self-disciplining subjectivity: that of individuals who function as if they are their own freelance microenterprises. They are individuals who help to generate an environment that produces and valorizes particular modes of behavior by taking responsibility for managing their own employment, learning, health, and well-being in the circumstances created by postwelfare capitalism, with its weakening of social democracy and care. Indeed, because of the degree of surveillance, casualization, and debt they experience, these individuals have little opportunity to act otherwise, having lost the ability to plan and control their own futures. Consequently, they remain personable and positive, even when their way of life is rendered poor and precarious.

In this way, the platform capitalist sharing economy functions to transform us as citizens into connected yet atomized and dispersed individuals who develop our personalities as brands and endeavor to generate social, public, and professional value by acting as both microentrepreneurs and microentrepreneurs of our own selves and lives.4 Consequently, the kind of setup I have outlined here regarding the development of an information and data intermediary business model for higher education will affect not just what courses academics teach but also who teaches them, together with the kind of life they can lead and people they can be. Homophily and the use of rating systems mean it is likely to be only certain individuals who will acquire regular work as academics in the for-profit sharing economy. They will be individuals who are similar to those who use these corporate platforms: who think similar things and are prepared to live and work in similar ways (without defaulting on debt, or ranting against rival supporters in front of a camera crew, as one off-duty lawyer did recently as he left a football match, resulting in the loss of his job with an international law firm). They will also be individuals who are capable of performing the necessary emotional labor to achieve a good student rating: who are smiley, friendly, lighthearted, and “genuine”—or at least capable of appearing to be in what amounts to a kind of forced informality and authenticity—and who are able to mirror the “natural” feel of much social media and so maintain a positive, if largely bland, profile and reputation.

Admittedly, there may be some who see advantages to operating as a freelance academic microentrepreneur for those who can successfully pull it off (and all the more so if they are lucky enough to live in a country with national health provision, an unconditional basic income for everyone, free child care, and other well-developed social safety nets). Doing so may offer more autonomy, independence, and control over the number of hours worked and when, making childcare arrangements easier, for example (although such flexibility has to be put into context: freelancers in the corporate sharing economy still have to operate according to the work allocation, timetable, and conditions set by their respective platform’s owners). There will certainly be little or no institutionally generated administration and bureaucracy. Such freelance microentrepreneurs may never again need to deal with a human manager face-to-face. (This is particularly important in view of the fact that having to deal with cab dispatchers and minicab controllers who decide who gets work and who doesn’t, often on the basis of favoritism and discrimination, is one reason some drivers have given for switching to Uber.) Nor will there be anything like the same pressure there is now on those in the traditional university system to apply for external funding—money that academics, in the arts and humanities especially, may not actually need for their research but that is required from them by their managers and administrators nonetheless to keep their institutions at the top of the league tables and other metrics. (So-called grant capture is becoming a standard contractual expectation for many at prestigious Russell Group universities in the United Kingdom.) 
Because CVs and past experience will be less relevant—it is people’s work and reputations that will matter now—higher education sharing economy businesses will have the further advantage of providing opportunities for work for those academics who are currently unable to obtain a post in a traditional university (e.g., because they have nontraditional qualifications or life experience; because they are considered to be too old; or because there aren’t enough positions available)—or who simply don’t want one, perceiving it as too dull and conservative. What is more, a 2015 study of the creative digital IT economy shows that many freelancers are actually paid more than their formally employed counterparts.5

Significantly, this study is not concerned with freelancers working for the information and data management intermediaries of the sharing economy. Many of the latter are laboring for less than the minimum wage, having lost a host of rights and benefits in the bargain. To be sure, when it comes to higher education, it is unlikely that individual microentrepreneurs will make enough money working for profit-driven postwelfare capitalist businesses to be able to afford to travel regularly to international conferences—or, indeed, replicate many of the other research-related advantages that come with being employed as a full-time academic in a traditional university. In fact, it is hard to see how research will find much of a place in a for-profit, higher education sharing economy ecosystem at all, especially when the whole point of such an ecosystem will be to offer less expensive teaching while at the same time maximizing the potential for revenue generation. One way of achieving the latter will be by stripping out the cost of paying for all functions other than teaching: precisely activities such as academic research and scholarship, in other words.

The freedom from heavy workloads that provides time for the intellectual contemplation, doubt, curiosity, creative absent-mindedness, idleness, and apparent inefficacy that is needed to generate “original” research, certainly in the arts and humanities, is far more likely to be found in the established university system, where the primary goal is education, not profit. (This is why many artists and writers take jobs teaching in a university: as a way of paying their bills and supporting their creative output.) Yet this system is itself becoming increasingly restrictive and is making it more difficult for radical left academics in the arts, humanities, and softer social sciences to find space in which to maneuver, in part because their work does not lend itself quite so readily to being audited and measured and its economic impact assessed as does that of those in the more instrumental and applied harder sciences and quantitative social sciences. Consider the way the U.K. government is using the Research Excellence Framework (REF) and other control mechanisms to ensure (fundamental) research is carried out predominantly in centers of excellence located in the elite—what some hold to be the more politically and culturally conservative—“research intensives,” which are themselves becoming “increasingly audit intensive universities.”6 (In such circumstances, the neoliberal drive to reduce “bureaucracy” and “inefficiency” is often used as an alibi for achieving concentration.) Meanwhile, because they are being encouraged to adopt the same values and practices as for-profit corporations (auditing, measurement, division of labor, routinization, casualization, contracting out), universities are themselves shifting much of their attention and resources away from teaching and research to focus on management, marketing, and income generation: “Between 2004 and 2010 the total number of students in UK universities increased by 9%. 
Over the same period the number of HE managers working in finance, marketing, widening participation, human resources, student services and quality assurance increased by 33%.”7 Indeed, something approaching a symbiotic relationship is apparent here. The more the institutional labor force is encouraged by capitalism and its culture industries to demonstrate a “spirit of opposition to [traditional] assigned roles and an openness to change” by treating work as a form of individual, creative self-development and self-realization (i.e., as an expression of who they are as autonomous, flexible, networked individuals), the more this labor force requires an enlarged system of management and bureaucracy to oversee and control it.8 In turn, those managers and administrators are demanding that academics act as entrepreneurs, both of themselves and of their research. And one way they are insisting academics do so is by bringing in money from external grants, the applied, practical, monetizable aims of which are steered by government—again putting those in the arts and humanities at a considerable disadvantage compared to their colleagues in science, technology, engineering, and math (STEM)—not least through its role in the appointment of the heads of the research funding bodies. In the context of this shift of focus within the public system of higher education, from teaching and research to academic entrepreneurship and income generation, it is worth noting the following facts as far as the United Kingdom is concerned:

—The Higher Education Funding Council for England is run by the chairman of a real estate firm.

—The Medical Research Council was previously run by a billionaire arms manufacturer and, since 2012, has been led by the former chairman and chief executive of the investment arm of Barclays Bank, who also oversaw the disastrous (for the public purse, at least) privatization of the Royal Mail.

—The Natural Environment Research Council was run by the head of a construction company and, since 2013, has been led by the man who steered AEA Technology plc, which is an offshoot of the U.K. Atomic Energy Authority, through its privatization.9

Notes to The Microentrepreneur of the Self

  1. See Mario Tronti, “The Strategy of Refusal,” Libcom (blog), July 23, 2005, http://libcom.org/library/strategy-refusal-mario-tronti; Slavoj Žižek, In Defense of Lost Causes (London: Verso, 2009).
  2. Derek Sayer, “One Scholar’s Crusade against the REF,” Times Higher Education, December 11, 2014, http://www.timeshighereducation.co.uk/features/feature-one-scholars-crusade-against-the-ref/2017405.fullarticle.
  3. John Holmwood, speaking at the Radical Open Access conference, Coventry University, June 15–16, 2015, http://disruptivemedia.org.uk/radical-open-access-conference/. Video recordings of all the talks from the Radical Open Access conference are available at https://archive.org/details/@disruptive_media.
  4. In The Birth of Biopolitics, Foucault writes of the neoliberal “homo oeconomicus as entrepreneur of himself, being for himself his own capital, being for himself his own producer, being for himself the source of [his] earnings.” Michel Foucault, The Birth of Biopolitics: Lectures at the Collège de France, 1978–79 (London: Palgrave Macmillan, 2008), 226. My use of the term microentrepreneur of the self here is thus a repurposing of Foucault’s neoliberal homo oeconomicus, adapted to the context of postwelfare capitalism and the corporate sharing economy. The term microentrepreneur of the self is also a play on his concept of “technologies of the self,” the general framework of which is very different from the “traditional philosophical questions,” for Foucault. It is concerned not with “What is the world? What is man? What is truth? What is knowledge? How can we know something?” but with “What are we in our actuality?” Michel Foucault, “The Political Technology of Individuals,” in Power: Essential Works of Foucault 1954–1984, vol. 3, ed. James D. Faubion (London: Penguin, 2002), 403. See also Michel Foucault, “Technologies of the Self,” in Technologies of the Self: A Seminar with Michel Foucault, ed. L. H. Martin, H. Gutman, and P. H. Hutton, 16–49 (London: Tavistock, 1988).
  5. Jonathan Sapsed, Roberto Camerani, Monica Masucci, Mylene Petermann, and Megha Rajguru, with Phil Jones, Brighton Fuse 2: Freelancers in the Creative Digital IT Economy, January 2015, http://www.brightonfuse.com/wp-content/uploads/2015/01/brighton_fuse2_online.pdf. For an interesting discussion of this research, see David Garcia, “Reframing the Creative Question,” February 26, 2015, http://www.nettime.org/.
  6. John Holmwood, “Papering Over the Cracks: The Coming White Paper and the Dismantling of Higher Education,” Campaign for the Public University (blog), April 25, 2016, http://publicuniversity.org.uk/2016/04/25/papering-over-the-cracks-the-green-paper-and-the-stratification-of-higher-education.
  7. Diane Reay, “From Academic Freedom to Academic Capitalism,” Discover Society (blog), February 15, 2015, http://discoversociety.org/2014/02/15/on-the-frontline-from-academic-freedom-to-academic-capitalism/.
  8. Brouillette, Literature and the Creative Economy, 207.
  9. This is an updated version of facts first presented in George Monbiot, “These Men Would’ve Stopped Darwin,” Guardian, May 11, 2009, http://www.theguardian.com/commentisfree/2009/may/11/science-research-business; see also Reay, “From Academic Freedom to Academic Capitalism.”

The Para-academic

ONE ALTERNATIVE for those who wish to produce radical political, critical, or experimental research that is difficult to audit and gain immediate economic impact from (or who just want to have an opportunity to do less and think more) will be to try to survive by operating on a part-time basis as teachers in the corporate sharing economy or by finding other work to support themselves and their research—much as increasing numbers of musicians are finding that, with record sales falling and there being less money in the music industry, they have to hold down other jobs, even when they have a recording or publishing contract. As Stevphen Shukaitis points out, while once it may have been possible to use music or writing to escape from more regular forms of work, “today it much more seems that it is work which escaped from us, in the sense that there [are only a small] number of decent paying jobs left within publishing and media industries more generally.”1 Is something similar going to happen to those who are employed by universities? Will working solely as an academic and nothing else become largely a thing of the past? To do interesting creative labor, to live a stimulating life comparatively free from postwelfare capitalism’s control, surveillance, and deskilling, will they too have to find work outside of the university?

Has a change of this nature not in fact already begun to take place over the course of the last decade or so? I am thinking of those members of the academic precariat who, having successfully studied for a postgraduate qualification, might in other eras have expected to acquire a full-time tenured or otherwise permanent position in the academy. What they are finding now, however, is that there is no longer secure—let alone interesting or satisfying—employment to be had in higher education, or even in the arts and cultural industries (museums, art galleries, and so on).2 So they have developed what Eileen Joy dubs “alt-cult” or “alternate-cultural” organizations and projects that occupy the institutional interstices instead: autonomous universities, schools, presses, journals, and magazines. It is a section of the population that is still interested in scholarly research and ideas—in critical theory, continental philosophy, and so forth—and who often collaborate with those who are employed in higher education on a more secure footing. Only now it is from “the position of the ‘para’ [the ‘beside’], a position of intimate exteriority, or exterior intimacy.”3

The much-vaunted ability of artists to “contest bureaucratic management and other forms of regimentation” is no longer unique today, then, as Brouillette acknowledges. Whether they occupy a position inside, outside, or beside the university, academics are likewise coming up against the kind of “contradictory imperatives” that are a feature of the creative economy. They too are finding themselves in a situation where they are “critical of the institutions that employ them but devoted to the work they [now perhaps just at times] do within them; enjoined to make work an expression of who they really are but in circumstances that leave them little time for thought about what that might mean and that ask [them] to package that expression into a readily tradable form.”4 Consider the small “artists” bookshops operating somewhere between the scholarly and trade markets that have appeared in cities such as Los Angeles, London, and Amsterdam in recent years to cater to this demographic. It is also this para-academic community that the “radical press” Zero Books, an imprint of John Hunt Publishing Ltd., is appealing to.5 “Intellectual without being academic, popular without being populist,” as the mission statement printed at the back of each of its volumes puts it, Zero Books is particularly fashionable among para-academics, being somewhere they can publish shorter-form, reasonably priced books on radical theory, philosophy, and politics. Zero has a comparatively quick turnaround, in large part because John Hunt Publishing doesn’t insist on spending as much time and money as a more traditional academic press on providing services such as rigorous editorial input, copy editing, and peer review.6 Yet, as a publisher of “critical and engaged” intellectual work, the lack of such mandatory services does not do Zero as much reputational harm as it might have done in previous eras. 
This is because extensive copy editing and peer review are nowhere near as important to the current generation of para-academics as they are to those who are more firmly ensconced within the institution of the university and who still need them for professional reasons to do with academic legitimation and accreditation.

Notes to The Para-academic

  1. Stevphen Shukaitis, “Toward an Insurrection of the Published? Ten Thoughts on Ticks and Comrades,” European Institute for Progressive Cultural Policies, June 2014, http://eipcp.net/transversal/0614/shukaitis/en.
  2. For example, “one group of researchers estimates the number of UK-qualified PhDs who do not obtain a permanent academic job within three years is close to 80%, based on a survey of 2505 researchers who had received their PhD in 2010.” Sam Moore, “The Practical and Theoretical Foundations for a New Ecosystem of Open-Access Publishing for the Humanities” (PhD thesis, King’s College, London, forthcoming), referring to “Some Hard Numbers,” Hortensii (blog), January 1, 2015, https://hortensii.wordpress.com/2015/01/01/some-hard-numbers/.
  3. Eileen Joy, “A Time for Radical Hope: Freedom, Responsibility, Publishing, and Building New Publics,” In the Middle (blog), November 19, 2013, http://www.inthemedievalmiddle.com/2013/11/a-time-for-radical-hope-freedom.html#sthash.s5ZcU46o.dpuf. For more on the para-academic, see Alex Wardrop and Deborah Withers, eds., The Para-Academic Handbook (Bristol, U.K.: HammerOn Press, 2014).
  4. Brouillette, Literature and the Creative Economy, 207.
  5. I put “radical press” in quotation marks because, although the content of what Zero Books publishes may be radical, its business model is not. This is something that one of its founders, Tariq Goddard, emphasized to me in the question-and-answer session that followed his presentation at the November 2014 Post-Digital Scholar conference in Leuphana, Germany (https://hybridpublishing.org/postdigital-scholar-conference/). Meanwhile, Eileen Joy, who is the editor of another para-academic press, Punctum Books, has summed up the somewhat contradictory nature of Zero Books’s philosophy as follows: “while Zero Books, indeed, offers a particularly electric and eclectic list of reasonably-priced, shorter-form books (Slime Dynamics, Nuclear Futurism, and Levitate the Primate are just a few samples of their bracing titles), they do not offer any of their publications in open-access form. Thus, their desire for a reinvigorated and non-bland, non-consensual sphere of public intellectual debate is still somewhat in the shadow of the multinational corporations (such as Amazon.com, to which all of their book pages link) that their mission statement scorns.” Eileen Joy, “All in a Jurnal’s Work: A BABEL Wayzgoose,” Punctum Books (blog), February 15, 2013, http://punctumbooks.com/blog/all-in-a-jurnals-work-a-babel-wayzgoose. Zero Books thus illustrates how even the work produced by those edgy nonconformists who inhabit the less obviously controlled interstices of the formal institutional and corporate domains can serve as potential material for capitalist exploitation, and how capital can count on a supply of this material, even while refusing to finance its production by providing secure employment in the higher education and creative industries sectors.

Does this explain why a number of established academics who are employed full time in a traditional university are also publishing with Zero? The appeal has to do not just with the degree of intellectual freedom and quick turnaround time Zero Books offers but also with the fact that Zero Books is different in the sense many academics want—having an air of edginess and nonconformism about it—but not so very different that publishing with Zero will actually challenge these academics and the way they live, work, and think (in terms of copyright, IP, fixity, the finished object, etc.).

  6. Details of the publishing process of Zero Books are available on its website, http://www.zero-books.net/publishing-process.html, and that of John Hunt Publishing, https://www.johnhuntpublishing.com/jhp-publishing-process.html. All proposals submitted get several very short Reader Reports for which authors are not charged, while all manuscripts accepted for publication receive a “light edit.” Anything more than that is optional and charged for. This includes “a longer, 500–1000 word evaluation” of the proposal, a “more detailed evaluation of the manuscript, with suggestions for improvement, 3000–5000 words,” and a “heavy edit” to enforce the style manual and correct typos, spelling, grammar, and general punctuation mistakes.

The Artrepreneur

THERE IS THUS A VERY REAL DANGER that the range of those who have an opportunity to create, publish, and disseminate adventurous—what we might today call brave—political or critical scholarship and research may grow even narrower in the future. If much of the publicly funded, nonprofit university system is disrupted by the higher education information and data intermediaries of the sharing economy (and is becoming a less hospitable place for radical arts and humanities academics to work anyway), and if these for-profit platform capitalist businesses provide little direct support for research themselves, the writing of scholarly books and journal articles will be restricted increasingly to those who are acceptable to the forces of the market—or at least those parts of the market that are willing and able to provide authors with sufficient income to buy them time for their research. If this income is to be gained from their writing, it means far greater emphasis will be placed on producing articles, trade books, introductions, textbooks, and reference works that have the potential to garner a wide nonacademic readership. Even entering these more publicly accessible sectors of the market may not be enough to ensure that one has an opportunity to write and be published, however. According to the Zero Books website, approximately “one quarter of the titles on the list have an element of subsidy from the author, where the readers/editors liked the book but were not confident that [they] would recover the publishing costs.”1 Taken together, it is a set of circumstances that risks creating a situation in which only those who are already comfortable financially can afford to be critical of capitalism.

Does this mean that academics who produce radical, critical, or avant-garde work (i.e., research that is often unpredictable and noncalculable) will have to look to very different sources of funding and support, much as many in the art world are doing? The Louis Vuitton Foundation recently opened an arts center in Paris, designed by Frank Gehry, while the Prada Foundation has opened one in Milan, this time designed by Rem Koolhaas. So we have privately owned companies, making their money from the sale of luxury goods, starting to fill the gaps left by the withdrawal of public funding. What these companies receive in turn is an enrichment of their brand and an extension of their reach into new spheres of society. Is something of this kind actually possible for the kind of critical work many arts and humanities academics are interested in? Even if it were (and of course there are numerous examples of cultural philanthropy being shown by previous generations of capitalists), would it not require them to again become, at best, entrepreneurs of themselves and of their research, much as “artist-entrepreneurs” such as Jeff Koons, Tracey Emin, and Grayson Perry have become their own celebrity brands through their blurring of the boundaries between fine art and luxury labels?2 As such, would it not discourage risk by rewarding primarily those who are able to deliver works that are recognizable as part of their brand identity on a regular basis, just as the art market does with regard to high-value artists now?

Another option—one that has become available to authors only in the last decade or so—will be to try to appeal to enough people who are prepared to pay a small amount each to see a book brought to publication under the kind of crowd-funding model that is championed by Kickstarter (which has attracted $2 billion in pledges since it launched in 2009), Unbound, and Readership. The latter describes itself as a “digital book publisher controlled by readers.” Writers first upload extracts of their work. Readers then vote yes or no and accompany each yes vote with a financial donation. Readership publishes every book that reaches its financial target.3 However, the crowd-funding model represents not so much an alternative to free market capitalism as an extension of it. Basically, it functions as a “reverse market with prepaid investment,” as Michel Bauwens and Vasilis Kostakis describe it. Rather than “going to the banks for money” to set up the business, the crowd-funded publisher and author simply go to the people.4

Moreover, what if our research is not designed to provide prospective readers with what they already know they want—or think they know they want—and are thus willing to pay for, either pre- or postpublication? What if we wish to produce work that does not necessarily have a predefined audience or market it is trying to appeal to?

Notes to The Artrepreneur

  1. http://www.zero-books.net/about-us.html, accessed July 19, 2015. This figure has since been reduced. At the time of this writing (July 3, 2016), the Zero Books website claims that only approximately 10 percent of its titles are subsidized by authors.
  2. See Giulia Zaniol’s solo exhibition and public forum parodying celebrity art and luxury branding, Brand Art Sensation: A Mass Debate, Gallery Different, London, June 9–13, 2015, http://www.zaniol.com/eminent.
  3. http://readershipbooks.com/Home/About.
  4. Indeed, Bauwens and Kostakis go so far as to quote Mike Bulajewski’s description of Kickstarter, in particular, as “the very definition of parasitic capitalism,” in that it is actually a “sophisticated web hosting provider which charges ‘60 times the actual cost of providing a service by skimming a percentage off financial transactions.’” Michel Bauwens and Vasilis Kostakis, Network Society and Future Scenarios for a Collaborative Economy (London: Palgrave Pivot, 2014).

Affirmative Disruption

IN A 1983 ARTICLE for Le Monde called “The Tomb of the Intellectual,” the philosopher Jean-François Lyotard responds to a plea from the French government for intellectuals to have a “concrete involvement” with thinking on economic and social matters.1 Given that we are facing similar calls from governments today for our research to have the kind of impact that changes the behavior of its addressees, what is so interesting about this article is that philosophers are not intellectuals, according to Lyotard, in that they do not identify with, nor endeavor to speak for, a universal subject, be it “man, humanity, the nation, the people, the proletariat.”2 Nor are philosophers experts whose role is to achieve the best possible “input/output (cost/benefit)” performance ratio in their preconstituted fields. (It is the experts—ideas people, decision makers, those who have specific administrative, economic, social, and cultural responsibilities—that the French government is really appealing to when it calls for thinkers to have a concrete impact.) Along with artists and writers, Lyotard assigns philosophers instead to a third category: that of “creator” or experimenter.3 As such, they are responsible only to the question “what is painting, writing, thought?”4 In contrast to experts, experimenters therefore remain unperturbed by the notion that the vast majority of people may not readily understand what they do. They are unconcerned by this notion because they do not have a pregiven addressee, whether this be known as a “public,” “readership,” “audience,” or “market,” that they are trying to win over and seduce. Rather, philosophers, artists, and writers are by definition involved in questioning the limits of preconstituted fields—along with the accepted criteria of judgment (i.e., of performativity, of “what works best”) by which they would be held to account if they were to be criticized for not being intelligible, useful, profitable, or political enough.

Of course, Lyotard was writing in a different time and place. Yet what if, in the era of platform capitalist higher education, we too wish to produce “troublesome, impossible” research that is engaged in questioning the “received compartmentalization of realities and . . . criterion for the evaluation of actions” on which any public, readership, audience, market, or indeed “crowd” that preexists this research and can be generated around it might be based?5 In that case, will we not have to try to create performatively the very economy in which such research can find funding and support?

To approach this issue from a slightly different angle, how might we in turn disrupt the disruptors of public, nonprofit higher education, with a view to inventing a different, more caring future: for academic labor, for the sharing economy, even for advanced postindustrial society? Thus far, I have primarily used the term disruption in the widely adopted Silicon Valley sense, which, although it may be derived from it, is not quite the same as the theory of technological disruption of Clayton Christensen and his colleagues at the Harvard Business School. A disruptive technology, for Christensen, as laid out in his book The Innovator’s Dilemma, is one that typically facilitates the production of a new market for products and services and eventually succeeds in disrupting an already existing market. So, to return to the subject of transport, in the 1960s and 1970s, Honda’s introduction of small off-road motorcycles to the North American market disrupted established, over-the-road motorbike manufacturers such as Harley-Davidson. Christensen’s argument is that organizations “entering these markets early have strong first-mover advantages over later entrants.” The problem is, as these organizations “succeed and grow larger, it becomes progressively difficult for them to enter the even newer small markets [that are] destined to become the large ones in the future,” which means they are themselves likely to be disrupted eventually by other, fresher upstart organizations.6 This is because of what Christensen calls the “innovator’s dilemma,” whereby, for reasons of institutionalization, “companies find it very difficult to invest adequate resources in disruptive technologies—lower-margin technologies that their customers don’t want—until their customers want them. And by then it is too late.”7

While Uber may be an innovation that is disruptive in the looser Silicon Valley sense (as for convenience’s sake I have been terming it), in that it threatens to transform the taxi industry and put many previously successful cab companies out of business, it is not strictly speaking a genuinely disruptive innovation according to Christensen’s theory. This is because, as Christensen makes clear in a recent coauthored article for Harvard Business Review, Uber does not have its origins in either “low-end or new-market footholds”:

It is difficult to claim that the company found a low-end opportunity: that would have meant taxi service providers had overshot the needs of a material number of customers by making cabs too plentiful, too easy to use and too clean. Neither did Uber primarily target non-consumers—people who found the existing alternatives so expensive or inconvenient that they took public transit or drove themselves instead: Uber was launched in San Francisco (a well-served taxi market), and Uber’s customers were generally people already in the habit of hiring rides.8

In fact, whereas “disrupters start by appealing to low-end or unserved consumers and then migrate to the mainstream market,” Uber has done precisely the opposite. It has begun by creating a “less expensive solution to a widespread customer need” in the mainstream market, before proceeding to appeal to segments of the market that have been overlooked historically.9 For Christensen et al., Uber is thus much more of a sustaining innovation than a disruptive innovation, in that it is making what customers already consider a good product even better.

I want to emphasize, however, that I am not interested in the process of disruption for the reasons Christensen and Silicon Valley are interested in it: as a way of understanding innovation-driven economic growth to show how it is possible to succeed as a disruptive innovator, that is, as “a smaller company with fewer resources” that is able to “successfully challenge established incumbent businesses.”10 It is not my intention to try to sustain and develop the current capitalist economic system, its overall logic, modes, and relations of production, by playing up the potential of disruptive technologies to generate innovations that are capable of facilitating the creation of a new market while at the same time playing down the destructive effects of these technologies. Rather than helping capitalism to constantly renew itself with what Joseph Schumpeter, building on the economic theory of Karl Marx, understands as waves of “creative destruction,” my interest is in disrupting the free market itself by using such technologies to experiment with the invention of new economies and new economic models. This is why I am using the term “affirmative disruption” here—to mark this difference. I am employing this concept in the sense in which Roberto Esposito writes of an “affirmative biopolitics” in relation to the work of Michel Foucault, where such a biopolitics is “one that is not defined negatively with respect to the dispositifs of modern power/knowledge but is rather situated along the line of tension that traverses and displaces them.”11

At the same time, it is important to be aware that affirmatively disrupting the disruptors of public, nonprofit higher education will require us to revolutionize more than the instruments and relations of production, that is, the way in which we work.12 After all, aggressive, global, for-profit technology companies such as Amazon, Google, Uber, and Airbnb are concerned not just with what we do but with who we are. Capital and life (bios) are intertwined, in other words—to the point where we are the very neoliberal microenterprises we will be trying to creatively destroy and experimentally place in question. Affirmatively disrupting the postwelfare capitalism of the sharing economy will thus mean affirmatively disrupting the microentrepreneurs of our own selves and lives we have become.

Notes to Affirmative Disruption

  1. Jean-François Lyotard, “The Tomb of the Intellectual,” in Jean-François Lyotard: Political Writings (London: UCL Press, 2003), 3.
  2. Ibid.
  3. Ibid., 5. “Experimenter” is the term Geoffrey Bennington suggests on the grounds that it “avoids the Christian overtones of ‘creator.’” Bennington, Lyotard: Writing the Event (Manchester, U.K.: Manchester University Press, 1988), 6.
  4. Lyotard, “Tomb of the Intellectual,” 4.
  5. Ibid., 7, 4.
  6. Clayton M. Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail (Cambridge, Mass.: Harvard Business School Press, 1997), xx.
  7. Ibid., xix.
  8. Clayton M. Christensen, Michael E. Raynor, and Rory McDonald, “What Is Disruptive Innovation?,” Harvard Business Review, December 2015, https://hbr.org/2015/12/what-is-disruptive-innovation.
  9. Ibid.
  10. Ibid.
  11. Roberto Esposito, The Third Person: Politics of Life and Philosophy of the Impersonal (London: Polity, 2012), 18. Esposito writes, “Life, one might say, is a biological stratum that, for Foucault, is never coextensive with subjectivity because it is always caught in a dual, simultaneous process of subjection and subjectification: it is the space that power lays siege to without ever managing to occupy it fully” (18). A further articulation of an affirmative approach to disruption can be found in Pauline Van Mourik Broekman, Gary Hall, Ted Byfield, Shaun Hides, and Simon Worthington, Open Education: A Study in Disruption (London: Rowman and Littlefield International, 2014).
  12. See Karl Marx and Friedrich Engels, The Communist Manifesto, 1848, in Marx/Engels Selected Works, vol. 1 (Moscow: Progress, 1969), available as Manifesto of the Communist Party by Karl Marx and Frederick Engels, Marxists Internet Archive, 16, http://www.marxists.org/archive/marx/works/1848/communist-manifesto. For more on this point, see Gary Hall, Pirate Philosophy: For a Digital Posthumanities (Cambridge, Mass.: MIT Press, 2016).


Postscript

IT IS WITH THE ENACTMENT of such an affirmative disruption of the ways in which we live, work, and think—not only as neoliberals but as liberals too—that I have been experimenting in recent years. I have been doing so along with a number of different actors, groups, and organizations, some of which operate under the names of Culture Machine, Open Humanities Press, and the Centre for Disruptive Media.1 The result has been a series of performative media projects—performative in the sense that they are concerned not so much with representing the world (or not just with doing so) as with acting in or intra-acting with it.2 They include Media Gifts, the Liquid Books series, Liquid Theory TV, Photomediations Machine, and Photomediations: An Open Book.3 The Uberfication of the University is part of an expanded, iterative text involved in creating just such a performative media project, the aim of which is to affirmatively disrupt platform capitalism and the corporate sharing economy.

Notes to Postscript

  1. See Culture Machine, http://www.culturemachine.net/; Open Humanities Press, http://openhumanitiespress.org/; Centre for Disruptive Media, http://disruptivemedia.org.uk/.
  2. In Meeting the Universe Halfway, Karen Barad writes, “The agential realist approach that I offer eschews representationalism and advances a performative understanding of technoscientific and other naturalcultural practices, including different kinds of knowledge-making practices. According to agential realism, knowing, thinking, measuring, theorizing, and observing are material practices of intra-acting within and as part of the world.” Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham, N.C.: Duke University Press, 2007), 90–91. In this respect, Barad prefers the notion of intra-action to that of interaction, seeing the latter as presuming “the prior existence of independent entities/relata.” Karen Barad, “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter,” Signs: Journal of Women in Culture and Society 28, no. 3 (2003): 815. As such, she considers intra-action to represent “a profound conceptual shift” (815). Similarly, for her, the move toward performative alternatives to representationalism shifts the focus from questions of correspondence between descriptions and reality (e.g., do they mirror nature or culture?) to matters of “practices/doings/actions” (802).
  3. See Media Gifts, http://garyhall.squarespace.com/about/; the Liquid Books series, http://liquidbooks.pbwiki.com/; Liquid Theory TV, http://www.culturemachine.net/index.php/cm/article/view/354/358; Photomediations Machine, http://photomediationsmachine.net; and Photomediations: An Open Book, http://www.photomediationsopenbook.net/.


I would like to thank Joanna Zylinska, Kathleen Fitzpatrick, John Holmwood, and Clare Birchall for their comments on earlier versions of this book and some of the ideas it contains. Special thanks are due to John Holmwood for initially inviting me to write about the uberfication of the university for Discover Society.




01. Mary Keator. Reclaiming the Deep Reading Brain in the Digital Age

Mary Keator



First published:

Radical Pedagogy (2017)

Volume 14 Number 2

ISSN: 1524-6345

Author: Mary Keator, Department of English, Westfield State University, USA

e-mail: mkeator@westfield.ma.edu

Abstract

Although students today are adept at scrolling, surfing and searching the web, they struggle to read deeply and interpret meaning. They spend hours on their handheld digital devices, unaware that the time they spend on these devices is in fact altering their neuro-circuitry and weakening their ability to engage in deep reading. This article focuses on some of the current challenges digital devices pose to students, specifically their ability to be present, read deeply and interpret meaning in what they are reading, both in a literary text and in the world around them. I propose the contemplative method called lectio divina as a possible remedy that instructors can use in the classroom to build awareness and strengthen students’ ability to read deeply and interpret meaning for their lives.

Keywords: digital devices, neuro-circuitry, deep reading, critical inquiry, lectio divina, meditative thinking, calculative thinking, leisure, interpret meaning, literary texts, higher education

In the 20th century, Martin Heidegger (1977), himself aware of and concerned with the rising problems regarding human beings’ relationship with technology, wrote, “everywhere we remain unfree and chained to technology whether we passionately affirm or deny it” (p. 4). Today, this unhealthy and disordered relationship with technology continues in our college classrooms. Many students are influenced and shaped by a techno-addicted, techno-driven and techno-obsessed culture. Based on my personal experience teaching in the college classroom over the past seven years, as well as multiple conversations I have had with both colleagues and students, students seem more interested in their text messages, Facebook newsfeeds, blog posts, tweets, Instagram feeds, and Snapchat stories than in entering into a dynamic conversation with a literary text, exploring it for deeper meaning and wisdom.

On many college and university campuses, students walk into class with earphones fixed in their ears and smartphones in hand, oblivious to their surroundings and the people within them. In the classroom, there is little inclination to engage in a natural conversation before class begins. Once class is called to order and educators have instructed the students to put away their digital devices, some students simply ignore the instruction and continue to text, scroll and surf throughout class. In Reclaiming Conversation: The Power of Talk in the Digital Age (2015), Turkle expounds on the struggle students face today in engaging in face-to-face conversation with others. My students support her assertions: “I have found that too often I will be in a room with people and instead of interacting with them, I will be looking [at] my cellphone…I wasted ample time that could have been used to get to know someone better or make a lasting impression on another’s life” (student, fall 2016). Students find it difficult to be in the present moment, open a book, focus on the course material, recollect their thoughts or just sit quietly waiting for class to begin. Although students are physically present, they are intellectually, emotionally and socially disconnected from themselves and the people around them. On average, students shared with me, they are able to be fully present for about 20 minutes or less out of a 90-minute class.

Prior to the 21st century, most technologies were location-specific, creating boundaries around where and when they were used. For example, a phone was fixed to a wall, a computer to a desk. Today, most digital devices are no longer location-specific. Portable and no longer tethered to a particular space, these digital devices can be brought into any space, anytime, anywhere. Due to advanced technology, students carry their smartphones, computers and tablets with them into the classroom space, allowing for continuous stimuli through connections, distractions and interruptions. During class, students are always available and “on call” via their digital devices and, as a result, are trapped in endless anticipation of the next text message or snapchat. Students are in constant need of stimulus, and although these digital devices appear to ease their anxiety and relieve their boredom, in truth they may be adding to their agitation and feeding their narcissistic tendencies. In Thrilled to Death: How the Endless Pursuit of Pleasure is Leaving us Numb, Hart (2007) notes,

Teenagers are bored, not because there is nothing to do…But because they are overstimulated. Despite the phenomenal array of gadgets that can feed them entertainment twenty-four hours a day in every conceivable place, many teenagers feel bored most of the time…teenagers today are often bored because they are overstimulated. Their pleasure centers are saturated. (p. 28)

In his presentation “The distracted mind: Ancient brains in a high-tech world” at The Alberta Teachers’ Association on May 27, 2016, Larry Rosen shared that “67% of teens and young adults check their phones every 15 minutes or less…If they can’t check in that often, 50% get moderately-to-highly anxious.” Without realizing it, students are constantly on standby, awaiting the next text message, Facebook post or Instagram feed. They have become what Heidegger termed “standing-reserve,” “ordered to stand by, to be immediately at hand, indeed to stand there just so that [they] may be on call for a further ordering” (Heidegger, 1977, p. 17).

In light of these observations, the more pressing issue is that in their overidentification with and over-reliance on digital devices, students have become disempowered, ceding power to their devices rather than drawing on the power within themselves. Students are no longer independent and freethinking; instead, they have become so dependent on their smartphones, laptops and tablets that their capacity for self-awareness, self-reflection and empathy towards others is disabled. Digital devices are numbing students’ sensibilities. This is reflected in the classroom when students, challenged to engage with a literary text, cannot read it deeply and respond meaningfully to the complexities of the human condition contained within it.

When students are fixated on their digital devices, they are no longer aware of who and what is around them. Although this is not always a problem, it becomes one when a student is in class, reading or discussing a literary text. Just yesterday, I asked my students, “What makes a class interesting?” They responded, “When everyone is present, participating and asking interesting questions.”

My point here is not to demonize technology; rather, it is to call attention to and address the simple truth that technology, specifically digital devices, is having a negative impact on students’ ability to focus and actively engage in their learning experience.

In World Literature, I always turned my phone off, this is one way I blocked the distractions from clogging up my brain. If I don’t not have it turned off, once I hear or feel a buzz I automatically start to think of all the possibilities it could be on my phone. I did not realize how much my mind drifted off when I would have my phone on my desk in other classes. By now understanding this I no longer have my phone on during class any class times, and when I am doing work I keep my phone in a desk and turned off. (student, spring 2016)

With the rise of advanced technology, students are often more comfortable in virtual relationships than in personal face-to-face relationships with the people around them, and they need help to see the pull that technology can have on them. I am concerned that digital devices are affecting students’ ability to be present and to engage deeply with a text and with one another. Each semester, I witness more and more students so preoccupied with their digital devices that they are no longer free, no longer at leisure to read deeply, question critically, think creatively, and respond meaningfully to a literary text, to one another and to the challenges within the world around them.

Digital devices are diminishing students’ ability to be attentive to the present moment. One student commented, “with all this new technology it is getting harder to pull myself away from the distractions and attachments” (student, fall 2016). Students struggle to be present to themselves, to one another, and, by extension, to the complexities of the human condition exposed in the world of literary texts. In Reclaiming Conversation: The Power of Talk in the Digital Age, Turkle (2015) offers an insight from a conversation she had with a college student named Haley.

Haley thinks that realistically, seven minutes is the amount of time you have to wait to see if something interesting is going to happen in a conversation. It’s the amount of time you should have to wait before you should give up and take out your phone. If you want to be in real conversation, you have to be willing to put in those seven minutes. She says that they are not necessarily interesting minutes. In those seven minutes, “you might be bored.” (p. 153)

What Turkle highlights from her conversation with Haley is students’ struggle to be present to another person long enough to engage in meaningful conversation. In effect, students are unable to enter into a personal relationship and sustain a meaningful conversation. Students find conversation difficult because they have to consider “the other” and be willing to listen to the other’s personal story. Whereas conversations take time, digital devices make it unnecessary to show up, be present, attentive, and patient, to wait for something interesting to arise through face-to-face conversation. As Haley remarks, “in real conversation, you have to be willing to put in those seven minutes.” If students do not have the capacity to enter into a relationship and wait for a meaningful conversation, if they do not know how to create time and space to listen deeply to “the other,” then they will not be able to see and understand the literary text as a doorway into the myriad human relationships.

Students today find it difficult to unplug and detach themselves from their digital devices because these devices pacify them, entertain them and empower them. Yet, surprisingly, they actually learn to appreciate a “no cell phone policy.”

Having the strict no phone policy during class helped me focus on what was going on in class, where in some classes I find myself so worried about social media and who is or is not texting me at the moment that I find myself being so lost in class from not having my focus on the class itself. (spring, 2016)

Notice that the student above remarked that in other classes she is distracted because she is “worried about social media.” For educators, this raises another important question: How does all this time online affect students’ ability to think critically and read deeply? Carr (2011) notes that as early as the 1950s Marshall McLuhan suggested that media themselves shape thought; Carr himself observes, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation” (p. 6). Carr’s observation offers an insight into students attending institutions of higher learning today. They, too, struggle with concentration and the need for contemplation in their lives.

Maryanne Wolf, a professor of child development and neuroscience at Tufts University, is interested in how the brain develops the ability to read and in the way the internet is altering our capacity for deep reading. In her article “Our ‘deep reading’ brain: Its digital evolution poses questions” (2010), she asks, “will we lose the ‘deep reading’ brain in a digital culture?” and observes, “soundbites, text bites, and mind bites are a reflection of a culture that has forgotten or become too distracted by and too drawn to the next piece of new information to allow itself time to think.” In Proust and the Squid: The Story and Science of the Reading Brain (2007), she questions whether all this time online searching, scrolling, and streaming is rewiring students’ neuro-circuitry, resulting in a weakened capacity for deep reading and concentration (pp. 14-16). Wolf herself noticed one day, while sitting down to read Hermann Hesse’s The Glass Bead Game, that she too was struggling to read deeply. Turkle (2015) notes that it was this personal struggle with deep reading that led Wolf to explore further the impact online reading was having on our brains (p. 221).

However, McLuhan and Wolf are not alone in their critique of the relationship between the human person and technology. Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains (2011), has also become aware that his brain is being reshaped by technology.

Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going – so far as I can tell – but it’s changing. I’m not thinking the way I used to think. I feel it most strongly when I’m reading. I used to find it easy to immerse myself in a book or a lengthy article. My mind would get caught up in the twist of the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do. I feel like I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle. (pp. 5-6)

Another researcher, Gary Small, a professor of psychiatry at UCLA and the director of its Memory and Aging Center, has also been studying the effects of digital technology on the brain. He concurs with McLuhan, Wolf, and Carr that the internet is changing our brains. In iBrain: Surviving the Technological Alteration of the Modern Mind (2008), Small and Vorgan explain that the daily use of digital devices – computers, smartphones, search engines – “stimulates brain cell alteration and neurotransmitter release, gradually strengthening new neural pathways in our brain while weakening old ones” (p. 1). This weakening of old neural pathways, the ones associated with deep reading, is what led Wolf and Carr to struggle with concentration, deep reading, and contemplation.

McLuhan, Wolf, Carr, and brain researchers like Small and Vorgan recognize that while technology itself may be neutral, time spent online is not. Quite the contrary: time spent searching, scrolling, and surfing is having a significant impact on our brains. The neuro-circuitry needed to surf the web is not the same neuro-circuitry needed to concentrate and read deeply. The mental activity needed to surf the web is quick and agile, while the mental activity needed to dive into a text is intentional, slow, and leisurely. According to Small and Vorgan (2008), by surfing the web we “sacrifice the facility that makes deep reading possible. We revert to being ‘mere decoders of information’” (p. 122).

Whether or not we want to admit it, educators, especially educators in the Humanities, are faced with a serious challenge. Students arrive in our classrooms with a deficit: many cannot concentrate, contemplate, and read deeply. Students (spring 2016) have shared with me that they do not know how to read deeply: “My generation is definitely know for ‘skimming’ instead of reading and re-reading to fully understand the concepts and important points in the stories.” However, all is not lost. The good news, according to the research of Wolf, Carr, Small, and others, is that through continuous practice we can re-wire our brains and strengthen their ability to focus, read deeply, and ponder. Scientists have discovered that the brain is highly adaptable: its plasticity allows it to build new circuitry through practice (Carr, 2011, p. 5). Turkle (2015) agrees, noting that although “our brains are wired for talk, we can also train them to do deep reading, the kind that demands concentration on a sustained narrative thread with complex characters” (p. 69).

What the research highlights is the necessity of practice. Students need to exercise their mental capabilities by stimulating and engaging their neuro-circuitry in deep reading, critical thinking, and contemplation, or else these neuro-pathways will begin to weaken and give way to the other neuro-pathways, those activated by constant searching, scrolling, and surfing. According to Norman Doidge, author of The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science (2007), “if we stop exercising our mental skills, we do not just forget them: the brain map space for those skills is turned over to skills we practice instead” (p. 317). Therefore, educators, particularly those in the Humanities, may benefit from incorporating a method and practice that helps students develop and strengthen their ability to focus on a literary text, read it deeply, reflect on it, and interpret it in a meaningful way.

The Reawakening of Contemplative Education

In “The Question Concerning Technology,” Heidegger (1977) offered the following insight: “[M]an becomes truly free only insofar as he belongs to the realm of destining and so becomes one who listens and hears, and not one who is simply constrained to obey” (p. 25). What Heidegger highlights is the need for human beings to develop their ability to listen to and discern the essence of what it means to be fully human. When human beings learn to listen to their inner self in conversation with others, they can begin to discover hidden possibilities for the transformation of humanity and the world we inhabit together. They are no longer controlled by the decisions/algorithms offered by a non-living, non-questioning, non-ethical digital device. By engaging in active listening, conversations, and discernment, human beings remain the subject of their own experiences and destiny, not mere objects mastered and manipulated by technology. Students can begin to think, consider, question, ponder and discern meaning for themselves. What Heidegger points out is that in a technologically obsessed culture, what is actually at stake is human freedom. In his Memorial Address (1966), Heidegger offers the following solution:

We can use technical devices, and yet with proper use also keep ourselves so free of them, that we may let go of them at any time. We can use the technical devices as they ought to be used, and also let them alone as something which does not affect our inner and real core. We can affirm the unavoidable use of technical devices, and also deny them the right to dominate us, and so to warp, confuse, and lay waste our nature. (p. 54)

In keeping with Heidegger’s insight, students who have given over their power to their digital devices are at risk of losing their freedom. Tethered to their devices, students risk losing the freedom to learn, to read deeply, and to think critically so that they can respond intelligently and creatively to the challenges and problems of the human condition and to the issues facing the world around them. The remedy, therefore, is to help students detach consciously from their digital devices and engage in deep listening, to ensure that their nature will not be dominated, warped, and confused by over-attachment to those devices. Perhaps surprising to some, there are students who welcome this challenge: “continue to challenge dependency on cell phones; this course made me see the effects technology is having on my own generation and I am grateful for that” (fall 2016).

In the “Memorial Address” (1966), Heidegger points out two distinct types of thinking: calculative thinking, which “computes and races from one concept to the next. It never stops and never collects itself” (p. 46), and meditative thinking, which “dwell[s] on what lies close and meditate[s] on what is closest; upon that which concerns us…” (p. 47). According to Heidegger, “man is a thinking, that is, a meditative being” (p. 47). He has the ability “to ponder,” to remain open and reflective. Meditative thinking is the skill required for sustained focus and critical, reflective reading. As Heidegger notes, “it is one thing to have heard and read something, that is, merely to take notice; it is another thing to understand what we have heard and read, that is, to ponder” (p. 52). In the Humanities, students need to acquire the skills of meditative thinking in order to notice, understand, and ponder the human condition exposed in the literary texts they are reading.

The Humanities, as academic disciplines, study human experience, human culture, and its accompanying zeitgeist; their purpose is not the indoctrination of students but the cultivation of the soul. Using stories, essays, and other literary texts, the Humanities offer students the opportunity to be challenged and inspired by the stories of others who have lived, reflected, and struggled with the human condition. Through their deep reading of and critical reflection on these texts, students can begin to awaken from their slumber, grow into, and actualize their fullest human potential. By engaging in deep reading and critical thinking, students are better equipped to read the world they are living in and be prepared to address its current challenges.

Over the past twenty years, the emergence of the contemplative movement within higher education has emphasized the value of first-person experiences as critical in the learning process. As Barbezat and Bush (2014) note,

By legitimizing students’ experiences, we change their relationship to the material being covered. In much of formal education, students are actively dissuaded from finding themselves in what they are studying; all too often, students nervously ask whether or not they may use ‘I’ in their papers. A direct inquiry brought about through contemplative introspection validates and deepens their understanding of both themselves and the material covered. (p. 6)

Self-charged with the task of helping students grow into their fullest potential and become compassionate, contributing members of society, the contemplative movement within higher education strives to develop and support students as first-person learners, subjects of their learning process. It is not a new movement; rather, it is a rediscovery of an approach reaching all the way back to ancient Greece, where the purpose of education was to know the self, develop the self, and contribute ethically and virtuously to society. In the ancient Greek philosophical schools, students’ minds were not perceived as empty containers waiting to be filled with the teacher’s wealth of information; rather, students were actively engaged in their own learning. It was this active engagement in the learning process that led students to develop self-awareness while deepening critical inquiry.

However, today, as noted by Palmer and Zajonc (2010), “our institutions of higher education seldom embrace a genuinely transformative view of the pedagogies they consciously or more often unconsciously adopt. Our view of the student is too often as a vessel to be filled or a person to be trained” (p. 101). Harry Lewis, former dean of Harvard College, offered the following critique of higher education:

Universities have forgotten their larger educational role for college students. They succeed, better than ever, as creators and repositories of knowledge. But they have forgotten that the fundamental job of undergraduate education is to…help [students] grow up, to learn who they are, to search for a larger purpose for their lives, and to leave college better human beings. (Taylor, 2010, p. xii)

Contemplative educators draw on a wide variety of contemplative practices from around the world to build and strengthen students’ abilities to concentrate and contemplate. Today, the reading of literary texts as a valuable and meaningful way to develop these abilities is undervalued. As noted by Barbezat and Bush (2014), contemplative practices within the Humanities “provide the opportunity for students to develop insight and creativity, hone their concentration skills and deeply inquire about what means most to them” (p. 8). Contemplative practices by nature incorporate specific techniques to slow down the reading process and create space for silence, leisure, deep reading, critical thinking, reflection, and multidimensional responses. By integrating contemplative practices back into the Humanities – as originally taught within the Greek and monastic schools – students become engaged in their own learning process. As subjective learners, and with the help of contemplative practices, students can learn to detach from their digital devices, reconnect with themselves, and begin to train or re-train their minds to focus, read deeply, think critically, and ponder what they are reading.

Contemplative practices shift students’ attention away from learning about something to the experiencing of what they are learning. In Contemplative Practices in Higher Education: Powerful Methods to Transform Teaching and Learning, Barbezat and Bush (2014) lay out four main objectives of contemplative practices:

  1. Focus and attention building, mainly through focusing meditation and exercises that support mental stability
  2. Contemplation and introspection into the content of the course, in which students discover the material in themselves and thus deepen their understanding of the material
  3. Compassion, connection to others and a deepening sense of the moral and spiritual aspects of education
  4. Inquiry into the nature of their minds, personal meaning, creativity and insight. (p. 11)

Educators from various disciplines select from a wide variety of contemplative practices to help students bring the material they are learning into the subjective and intersubjective realms where they can encounter the material thoughtfully and meaningfully.

In a technologically saturated culture, contemplative practices can bring balance to students and offer them ways to slow down the learning process and deepen mental activity by activating the neuro-circuitry needed for deep reading, critical thinking, and reflection. In his article “Opening the contemplative mind in the classroom” (2004), Tobin Hart states: “contemplative techniques offer both a portal to our inner world and an internal technology—a kind of mindscience—enabling us to use more of the mind rather than be driven by habitual responses or emotional impulsivity” (p. 46). Instead of relying on external technology to hold their information, students can use contemplative practices to strengthen their minds and build an internal information network. Students learn to develop their memory, make meaningful connections with information already within their memories, and bring these thoughts, ideas, and feelings into a productive dialogue.

As an educator, I am deeply concerned with the amount of time students spend online. I am not a Luddite; however, like McLuhan, Wolf, Carr and Turkle, I too see that all this time online is having negative consequences on students’ abilities to concentrate, read deeply, think critically, contemplate, and respond meaningfully to the human condition and the literary texts that explore the human condition. I find that students read a literary text the same way they read information online. They scroll, search, and surf through the literary text to grab bits and bytes of information, unaware of the consequences their time online is having on their brains. As Carr (2011) explains:

As the time we spend scanning Web pages crowds out the time we spend reading books, as the time we spend exchanging bite-sized text messages crowds out the time we spend composing sentences and paragraphs, as the time we spend hopping across links crowds out the time we devote to quiet reflection and contemplation, the circuits that support those old intellectual functions and pursuits weaken and begin to break apart. The brain recycles the disused neurons and synapses for other, more pressing work. We gain new skills and perspectives, but lose old ones. (p. 120)

As students spend increasing amounts of time online they are in danger of losing their ability to read deeply. Not only do they miss the deeper meaning of the literary text, but they are also in danger of reducing their self-awareness and freedom, since in order to be free, students need a modicum of self-awareness.

Freedom requires that students think for themselves. It also requires that students control their technology, and not the other way around. Technology is a powerful tool with the ability to manipulate and persuade thinking. “The Net’s cacophony of stimuli short-circuits both conscious and unconscious thought, preventing our minds from thinking either deeply or creatively” (Carr, 2011, p. 119). When students rely solely on their digital devices, they short-circuit their capacity to read deeply, think critically, and respond meaningfully. Technology drives, shapes, and enhances, but it can also diminish both human thought and human relationships. Students need to question whether their digital devices are enhancing or diminishing their humanity and their capacity for self-awareness and compassion.

The more unaware students are of their techno-addiction, the more disconnected and alienated they will be from themselves, others, and the world around them. Due to their fixation on their digital devices, they will not be able to be attentive to the people around them; they will not be able to develop and maintain healthy relationships with others; and they will run the risk of being overpowered and dominated by technology. Students who are overpowered by technology cannot focus, listen, question, and interpret meaning. They cannot develop and sustain meaningful conversations or learn to understand themselves, others, and the world in which they live. If students cannot engage in meaningful conversations and understand others, they will not be equipped to read accurately the signs of the times. If students cannot read deeply, they cannot discern meaning and respond intelligently, ethically, and compassionately to themselves, others, and the world around them.

Lectio Divina

If the brain’s plasticity allows it to adapt, as suggested by Wolf, Turkle, Carr, Small, and other scientists, then perhaps contemplative practices, which teach sustained attention, concentration, and contemplation, can offer a way to build and strengthen the neuro-circuitry diminished by time spent searching, scrolling, and surfing the Web. In the realm of contemplative practices, I suggest the ancient monastic practice of lectio divina, a practice originally developed to teach students how to read and interpret both sacred and other literary texts. Lectio divina teaches and builds sustained attention, deep reading, and critical thinking – the same capacities that Wolf and Carr realized were diminishing in themselves due to their increased time spent on the Web. I propose that lectio divina can offer a remedy for students who struggle to read deeply, think critically, and ponder meaningfully the human condition and the literary texts that explore it.

By engaging in contemplative practices, students become more self-aware and more integrated human beings. Since contemplative practices are designed to reposition the students as the subjects of the learning process, students can learn to detach from their digital devices and become grounded in themselves. These practices can empower students to limit their time on their digital devices, pay attention to their inner world and be more deliberate in the search for meaning and purpose in their lives. I have used contemplative practices in my classroom and have witnessed firsthand the many positive benefits that slow reading, meditation, introspection, and contemplation have on students’ ability to focus, concentrate, read deeply, think critically and consider the human condition.

However, I also discovered that contemplative practices can sometimes confuse and disorient students when these practices are not contextualized and applied to the course material and the overall learning process. Aware of this issue, I wanted to find a more comprehensive contemplative approach that I could weave into the course material and learning outcomes, so I turned to an ancient monastic practice called lectio divina that I learned about in 2004. Over the years, I have worked to re-appropriate this method for my World Literature I course. I have found it to be a fruitful contemplative method to help students develop sustained attention, critical thinking and reflection. Through lectio divina, students learn to become the subject of their learning experience and engage in and grow through the learning process.

Lectio divina is a contemplative practice, composed of a four-fold movement (lectio, meditatio, oratio, and contemplatio), that positions students as the subjects of the learning process. Through the practice of lectio divina, students learn how to read deeply, think critically, and respond meaningfully to the enduring questions of humanity. As students move through the lectio divina method, they leave their digital devices behind and enter into the world of the text. Once in the text, they continue to dive deeper as they move from the objective world of the text to the subjective world of the self, where they begin to sense, feel, intuit, and experience the wisdom embedded within the text.

Lectio, the first movement in the lectio divina method, teaches students to slow down, focus, and concentrate on the text before them. Once students begin to slow down the reading process, they enter more deeply into the world of the text and the complexity of the human condition explored within it. They encounter characters and the joys and struggles those characters face. The more deeply they encounter the characters, the better they get to know them. As they listen to their stories and struggles, they begin to identify with and understand them, and as a result their minds, hearts, and souls begin to open and be transformed in the process.

Lectio teaches students not only to read a text, but to read their lives, the lives of others, and the world around them. Lectio is a way of reading not only a text but also life itself. When students learn to slow down their reading of a text, they also learn to slow down their reading of life unfolding all around them. This ability to slow down and read deeply is even more critical today than it was in the monastic schools, where the monks’ distractions were mostly interior ones. One method I use in class to slow down the reading process is to have students engage in a type of performative reading. Performative reading engages students physically, intellectually, and emotionally. It brings the text to life, making reading enjoyable and memorable for students and assisting them on their journey to uncover wisdom and truth within the text.

One approach I use to introduce students to performative reading and bring a text to life is through hand gestures. For example, when reading the Prologue to Gilgamesh listed below, I demonstrate to the students how to tell the story using hand gestures. I explain to them that reading a story with hand gestures will help them pay attention to, remember and experience what they are reading.

He had seen everything, had experienced all emotions,
From exaltation to despair, had been granted a vision
into the great mystery, the secret places,
the primeval days before the Flood. He had journeyed
to the edge of the world and made his way back, exhausted
but whole. He had carved his trials on stone tablets,
had restored the holy Eanna Temple and the massive
wall of Uruk, which no city on earth can equal. (Mitchell, 2004, p. 69)

After I read and demonstrate the hand gestures for the passage from Gilgamesh, I invite students to stand up and, as we move around the circle created at the beginning of class, mimic my gestures as each one takes a turn leading the passage. At first, they have to look at the text, but by the time we get to about the seventh person, many of the students have already memorized the passage. Although at first this process is a bit odd to them, slowly they begin to enjoy the experience. At the end of the semester, a student offered the following comment on reading Gilgamesh with hand gestures.

Putting motions to words helps you remember what you are reading…The very first time we read, we read Gilgamesh and I remember feeling shy about putting motions to words. “He had seen everything, had experienced all emotions, from exaltation to despair,” (69) and “He had journey to the edge of the world and made his way back, exhausted but whole” (69). We put motions to the words by moving our arms and hands like we were searching for something when Gilgamesh had seen everything. To show exaltation we threw our arms up in the air and then brought them back down and looked sulky for despair….I am surprised at myself for still remembering what we did because most of the time I would read something for a class and immediately forget it when I did not have to know it anymore…If we had not put the motion to the words I would not remember the beginning of the text. It helps me study because putting certain motions to things helps my memory. (student, spring 2015)

Students are surprised at how well they remember a text after going through this exercise. They feel a sense of pride for what they were able to accomplish in a short period of time. One student even commented that she went back to her dorm and shared what she learned with her friends.

We did weird things to understand the text better. First, we stood up and put actions to each line of the prologue and repeated it until we could do it without the professor’s instruction. Since I have a horrible memory, I was really nervous and a little discouraged. But, after class, I found myself showing my friends what I learned and did the whole performance with my roommate by memory. (student, spring, 2016)

Performative reading not only engaged this student; it strengthened her memory, developed her self-confidence, and inspired her to transfer her learning to others.

Today students still have interior distractions, but in addition they have the external distractions brought about in the age of modern technology. Technology accelerates the pace of life, not allowing time to consider, reflect, and ponder. To slow down is to value life; it is to create time and space to consider deeply that which is before us – whether it be a text, ourselves, another, or the world in which we dwell; it is to live the examined life. Slow reading is a critical practice in the Humanities. As students practice the art of slow reading, they learn to focus, concentrate, and encounter the voice of the other, whether in the text or in life.

Meditatio, the second movement in the lectio divina method, teaches students how to interpret a text for deeper meaning and purpose. Through meditatio, students learn how to contextualize a text and ruminate on it as they plunge below the surface of the literal meaning to discover hidden, deeper meanings. The practice of rumination builds memory as students learn to organize and store thoughts and information within their own minds. Meditatio also teaches students how to analyze and interpret the various meanings of texts. They learn to question and discern meaning for a particular time period as well as meaning that they can apply to their own lives.

After we engaged in performative reading of the prologue to Gilgamesh, we began the meditatio process, digging into the story for deeper meaning by posing questions:

What do you know about Gilgamesh?

What would you like to know more about?

How do you imagine Gilgamesh?

How do you imagine life in the City of Uruk?

Students share what they noticed: Gilgamesh endured a lot; he went on a difficult journey; he knew hidden mysteries and secrets; he returned exhausted but whole. Their interest and curiosity, naturally sparked through performative reading, prepare them to read Gilgamesh. The students want to find out who Gilgamesh is, what journey he went on, why he is exhausted, what secrets he knew, and what made him whole. In addition, I have students engage their imaginations by drawing pictures of the city of Uruk, and I invite them to walk around the classroom as they imagine Gilgamesh strutting through his city.

The art of interpretation is another critical practice in the Humanities. As students slow down and learn how to interpret a text for deeper meaning, they begin to learn how to slow down and interpret the deeper meaning of life. They learn to ruminate on the fundamental questions of humanity. They learn to store within their memory the insights of others who have dialogued, reflected, and written on the human condition. They build self-confidence as they learn to rely on their own power to remember and their own ability to make meaningful connections, rather than relying solely on their digital devices to do this work for them. Meditatio also teaches students how to analyze the insights of others for deeper meaning that can be applied to life today.

Oratio, the third movement in the lectio divina method, teaches students how to respond meaningfully to the texts they are reading. Students can only respond meaningfully once they have understood, analyzed, considered, and reflected on what they have read, all of which takes time. Again, the slow, deliberate pace of the contemplative life is essential for students to deepen their understanding of what they have read and to offer an intelligent, meaningful response to what they are learning. Authentic responses take time and cannot be rushed; they arise naturally from the processes of deep reading, dialogue, interpretation, and reflection.

I encourage students to respond in a number of ways: orally in class, in informal and formal written reflections, by creating and performing a modern rendition of the text, and by creating booklets or informative newsletters. For example, students have written and performed songs about Gilgamesh, performed modern interpretations of a section of the story for the class, and even created their own prologue to a new story about a female character named Gilgamesha and her trusted friend and tutor Enkidia. Students have shared with me that they enjoy watching each other's interpretations of the story and are always amazed by the various ways their classmates respond to the text, commenting: "I loved watching everyone's and seeing the text in a new light" (student, spring, 2016).

Once students learn to respond intelligently and meaningfully to the complexities inherent in a text, they are better equipped to respond intelligently and meaningfully to the struggles inherent in their own lives, the lives of others, and the world in which they dwell, and to give voice to them. As students practice responding to a text, they begin to recognize their own struggles and to see how they wrestle with issues similar to those uncovered in the literary text. As they become more aware of their own struggles, they develop the capacity to respond to them with new awareness. They are less likely to react to situations that arise in their lives and more likely to reflect and respond with deep thought and care.

Contemplatio, the fourth movement, is the culmination and fruit of the lectio divina practice. Rooted in the Greek understanding of theōria (wonder and awe), contemplatio is an experience with Beauty, Goodness, Truth, and Wisdom. Contemplatio is not a practice; it is an awakening to a new or deeper realization. The practice of lectio divina guides students not only to discover the deeper hidden Wisdom embedded within the text, but to experience this Wisdom embedded within themselves. Once students have an experience with Wisdom, they are transformed. They know something now that they had not known before. Their minds and hearts have grown larger, and their souls have awakened and been transformed by the experience.

In Gilgamesh, I learned that you have one life on Earth, so take advantage of what life has to offer. Do what makes you happy and do something meaningful while you are here. Appreciate everything that comes in and out of your life, not everyone or everything is meant to stay, but there is something always to be learned… (student, spring, 2016)

The experience of contemplatio is the whole purpose of education, since its goal is the growth and transformation of the students. As John J. Conley notes in “The Humanities and the Soul” (2015), it is “to awaken [the students’] souls to deeper ways of being human” (p. 29).

No matter how advanced technology becomes, students still need to know how to read deeply, interpret critically and respond meaningfully to the texts they read, their lives, the lives of others and the world around them. Students still need a pathway to experience Beauty, Truth, the Good, and Wisdom, since these experiences will continue to inform, transform, and nourish their lives. We live in a different time from the ancient Greek and monastic schools, but the same fundamental questions of humanity endure. When I pick up a literary text to read with my students, we enter into an aspect of the human condition and begin to explore the way in which this particular literary text is speaking to our soul and the soul of all humanity. We wonder what we might unlock as we read and explore the text for deeper meaning, not only within the text, but within ourselves.

As we enter into the world of the text, we intentionally leave our digital devices and their mode of calculative thinking behind. We are intentional: we work to open time and space and to build trust as we slowly enter into the text and walk through it together, encountering characters and exploring their thoughts and feelings. None of this is easy. Every day we have to meet the challenge of welcoming the contemplative life into the classroom. We have to remind each other to strengthen our concentration, challenge our thinking, and deepen our reflection. We have to allow space for each other to explore our thoughts and feelings without judging and criticizing one another. We have to hold space for what Heidegger (1966) refers to as "meditative thinking" and open ourselves to the myriad possibilities of what we can learn as we stroll through "the vineyard of the text" (Illich, 1993) together, experiencing the fruits it has to offer.


Barbezat, D. & Bush, M. (2014). Contemplative practices in higher education: Powerful methods to transform teaching and learning. San Francisco, CA: Jossey-Bass.

Carr, N. (2011). The shallows: What the internet is doing to our brains. New York, NY: W.W. Norton & Company.

Conley, J. J. (2015, December 10). The humanities and the soul. America. https://www.americamagazine.org/issue/humanities-and-soul

Doidge, N. (2007). The brain that changes itself: Stories of personal triumph from the frontiers of brain science. New York, NY: Penguin.

Hart, A. (2007). Thrilled to death: How the endless pursuit of pleasure is leaving us numb. Nashville, TN: Thomas Nelson.

Hart, T. (2004). Opening the contemplative mind in the classroom. Journal of Transformative Education, 2(1), 28-46. doi.org/10.1177/1541344603259311

Heidegger, M. (1977). The question concerning technology and other essays. Translated by William Lovitt. New York, NY: Harper Perennial.

Heidegger, M. (1966). The memorial address. In Discourse on thinking (J. M. Anderson & E. H. Freund, Trans.). New York, NY: Harper Perennial. (Original work published as Gelassenheit by Verlag Günther Neske, Pfullingen, 1959)

Illich, I. (1993). In the vineyard of the text: A commentary to Hugh of St. Victor's "Didascalicon." Chicago, IL: University of Chicago Press.

Palmer, P., Zajonc, A., & Scribner, M. (2010). The heart of higher education: A call to renewal. San Francisco, CA: Jossey-Bass.

Rosen, L. (2016, May 27). The distracted mind: Ancient brains in a high-tech world. Retrieved from https://www.teachers.ab.ca/SiteCollectionDocuments/ATA/About-theATA/Education%20Research/Promise%20and%20Peril/Alberta%20Presentation%205-27-16.pdf

Small, G. & Vorgan, G. (2008). iBrain: Surviving the technological alteration of the modern mind. New York, NY: Collins.

Taylor, M. C. (2010). Crisis on campus: A bold plan for reforming our colleges and universities. New York, NY: Knopf.

Turkle, S. (2015). Reclaiming conversation: The power of talk in a digital age. New York, NY: Penguin Press.

Wolf, M. (2010). Our "deep reading" brain: Its digital evolution poses questions. Nieman Reports. Retrieved from http://niemanreports.org/articles/our-deep-reading-brain-its-digital-evolution-poses-questions/

Wolf, M. (2007). Proust and the squid: The story and science of the reading brain. New York, NY: Harper Perennial.

© Radical Pedagogy

