Space exploration

When self-replicating craft bring life to the far Universe, a religious cult, not science, is likely to be the driving force

Some time late this century, someone will push a button, unleashing a life force on the cosmos. Within 1,000 years, every star you can see at night will host intelligent life. In less than a million years, that life will saturate the entire Milky Way; in 20 million years – the Local Group of galaxies. In the fullness of cosmic time, thousands of superclusters of galaxies will be saturated in a forever-expanding sphere of influence, centred on Earth. This won’t require exotic physics. The basic ingredients have been understood since the 1960s. What’s needed is an automated spacecraft that can locate worlds on which to land, build infrastructure, and eventually make copies of itself. The copies are then sent forth to do likewise – in other words, they are von Neumann probes (VNPs). We’ll stipulate a very fast one, travelling at a respectable fraction of the speed of light, with an extremely long range (able to coast between galaxies) and carrying an enormous trove of information. Ambitious, yes, but there’s nothing deal-breaking there. Granted, I’m glossing over major problems and breakthroughs that will have to occur. But the engineering problems should be solvable. Super-sophisticated flying machines that locate resources to reproduce are not an abstract notion. I know the basic concept is practical, because fragments of such machines – each one a miracle of nanotechnology – periodically have to be scraped from the windshield of my car. Meanwhile, the tech to boost tiny spacecraft to a good fraction of the speed of light is in active development right now, with Breakthrough Starshot and NASA’s Project Starlight. 
The hazards of high-speed intergalactic flight (gas, dust and cosmic rays) are actually far less intense than the hazards of interstellar flight (also gas, dust and cosmic rays), but an intergalactic spacecraft is exposed to them for a lot more time – millions of years in a dormant ‘coasting’ stage of flight. It may be that more shielding will be required, and perhaps some periodic data scrubbing of the information payload. But there’s nothing too exotic about that. The biggest breakthroughs will come with the development of self-replicating machines, and artificial life. But those aren’t exactly new ideas either, and we’re surrounded by an endless supply of proof of concept. These VNPs needn’t be massive, expensive things, or perfectly reliable machines. Small, cheap and fallible is OK. Perhaps a small fraction of them will be lucky enough to survive an intergalactic journey and happen upon the right kind of world to land and reproduce. That’s enough to enable exponential reproduction, which will, in time, take control of worlds, numerous as the sand. Once the process really gets going, the geometry becomes simple – the net effect is an expanding sphere that overtakes and saturates millions of galaxies, over the course of cosmic time. Since the geometry is simplest at the largest scale (owing to a Universe that is basically the same in every direction), the easiest part of the story is the extremely long-term behaviour. If you launch today, the rate at which galaxies are consumed by life steadily increases (as the sphere of influence continues to grow) until about 19 billion years from now, when the Universe is a little over twice its current age. After that, galaxies are overtaken more and more slowly. And at some point in the very distant future, the process ends. No matter how fast or how long it continues to expand, our sphere will never overtake another galaxy. 
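The claim that the overtake rate peaks about 19 billion years from now can be sanity-checked with a few lines of Python. This is purely an illustrative sketch, not the author’s calculation: it assumes a flat ΛCDM cosmology with parameters the essay never states (H0 = 67.7 km/s/Mpc, Ωm = 0.31, ΩΛ = 0.69) and a light-speed expansion front. The rate at which comoving volume (and hence galaxies) is swept up is proportional to r² dr/dt, where r is the front’s comoving radius.

```python
import math

# Illustrative assumptions (not from the essay): flat LambdaCDM with
# H0 = 67.7 km/s/Mpc, Omega_m = 0.31, Omega_L = 0.69; probe speed = c.
H0 = 67.7 / 978.0            # Hubble constant in 1/Gyr
OM, OL = 0.31, 0.69
C = 1.0                      # speed of light in Gly/Gyr

def hubble(a):
    # Friedmann equation for a flat universe: H(a) = H0*sqrt(OM/a^3 + OL)
    return H0 * math.sqrt(OM / a**3 + OL)

# Evolve the scale factor forward from today (a = 1) and track the comoving
# radius r of a light-speed front: dr/dt = c / a(t). The galaxy-overtake
# rate is proportional to the comoving volume swept per unit time, r^2 * dr/dt.
dt = 0.001                   # time step in Gyr
a, r, t = 1.0, 0.0, 0.0
best_rate, peak_t = 0.0, 0.0
while t < 60.0:              # 60 Gyr from now is safely past the peak
    rate = r * r * (C / a)
    if rate > best_rate:
        best_rate, peak_t = rate, t
    a += a * hubble(a) * dt  # da/dt = a * H(a)
    r += (C / a) * dt
    t += dt

print(f"overtake rate peaks roughly {peak_t:.0f} Gyr from now")
```

Under these assumed parameters the peak lands in the same ballpark as the essay’s ~19-billion-year figure; the exact number shifts with the cosmological parameters chosen.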
If the probes can move truly fast – close to the speed of light – that last galaxy is about 16 billion light-years away, as of today (it will be much further away, by the time we reach it). Our telescopes can see galaxies further still, but they’re not for us. A ‘causal horizon’ sets the limit of our ambition. In the end, the Universe itself will push galaxies apart faster than any VNP can move, and the ravenous spread of life will stop. Communication becomes increasingly difficult. Assuming you invent a practical way to send and receive intergalactic signals, you’ll be able to communicate with the nearby galaxies pretty much forever (though, with an enormous time lag). But the really distant galaxies are another matter. If we assume fast probes, then seven out of eight galaxies we eventually reach will be unable to send a single message back to the Milky Way, due to another horizon. The late Universe becomes increasingly isolated, with communication only within small groups of galaxies that are close enough to remain gravitationally bound to each other. Our VNP project might encounter another kind of limitation, too. What if another intelligent civilisation had the very same idea, initiating their own expansion from their own home in a distant galaxy? Our expanding spheres would collide, putting a stop to further expansion for each of us. We don’t know if that will happen, because no one has observed a telltale cluster of engineered galaxies in the distance, but we should be open to the possibility. If we can do it, another civilisation can too – it’s just a question of how often that occurs, in the Universe. Taken as a whole, this entire process bears an uncanny resemblance to a cosmological phase transition, with ‘nucleation events’ and ‘bubble growth’ that come to fill most of the Universe. There is even ‘latent heat’ given off in the process, depending on how quickly these massive civilisations consume energy. 
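The ‘about 16 billion light-years’ figure for the last reachable galaxy is the comoving distance to the cosmic event horizon, and it can be reproduced numerically. The sketch below is again illustrative, using assumed flat-ΛCDM parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31, ΩΛ = 0.69) that are not stated in the essay: it integrates the comoving distance a light-speed probe launched today could ever cover, c∫dt/a, rewritten as an integral over the scale factor.

```python
import math

# Hedged sketch with assumed parameters (not from the essay):
# flat LambdaCDM, H0 = 67.7 km/s/Mpc, Omega_m = 0.31, Omega_L = 0.69.
H0 = 67.7 / 978.0                      # in 1/Gyr; c/H0 is then ~14.4 Gly
OM, OL = 0.31, 0.69

def integrand(a):
    # comoving distance element c*dt/a rewritten in a:
    # da / (a^2 H(a)) = da / (H0 * sqrt(OM*a + OL*a^4)), with c = 1 Gly/Gyr
    return 1.0 / (H0 * math.sqrt(OM * a + OL * a**4))

# Trapezoid rule on a log-spaced grid from a = 1 (today) out to a = 10^4;
# the tail beyond that contributes a negligible ~0.002 Gly.
steps = 200_000
total, prev_a, prev_f = 0.0, 1.0, integrand(1.0)
for i in range(1, steps + 1):
    a = 10 ** (4 * i / steps)          # log-spaced from 1 to 1e4
    f = integrand(a)
    total += 0.5 * (prev_f + f) * (a - prev_a)
    prev_a, prev_f = a, f

print(f"farthest reachable comoving distance: about {total:.1f} Gly")
```

With these parameters the integral comes out in the 16–17 billion-light-year range, consistent with the figure in the text.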
Despite the limitations imposed by nature, suffice it to say that a single VNP launch would offer an unimaginable wealth of the Universe’s resources to dispose of as you wish. OK, maybe not you, but whoever programs that VNP. Which raises a rather sticky point – what exactly should they do? It’s easy to imagine VNPs pillaging the resources of the Universe for no good reason, but what’s the actual benefit? What would motivate anyone to do anything like this? The power it would manifest – millions of years in the future, of course – is so beyond the scale of human experience that we’re still in the earliest stages of imagining what to do with it. It hasn’t even begun to be digested by popular culture and entertainment. But, as a first hint, imagine that, 50 years from now, you were approached to fund a cosmic-scale VNP project. In addition to instructions to ‘reproduce and expand’, each probe will carry a vast library of genetic data and information to reconstruct human bodies and minds on each world, along with an array of plants, animals and cultural information. If you’re still reluctant to fund the project, suppose I throw in a perk: a copy of you, reconstructed with your current memories intact, installed as absolute ruler on countless worlds. Promise of an eternal reign in a heavenly realm has, after all, been known to motivate real people. But no matter how great your god complex, all the returns-on-investment occur ‘out there’ in space and time, and won’t make anyone rich in the here and now, in the direct manner of, say, asteroid mining. After 1,000 human lifespans, cosmic expansion will still be in its infancy. Don’t expect so much as a snapshot from the nearest large galaxy for at least 5 million years. This pulls us back to the central question. If every direct, tangible benefit is deferred to a weird kind of technological afterlife, why would anyone do it? 
At least one answer has been considered by people who think about artificial superintelligence. Maybe we won’t do it – maybe a super-AI will do it for some arcane instrumental reason that doesn’t pay off for billions of years (aggressive resource-acquisition benefits almost any sufficiently long-term goal). I don’t find this answer too satisfying. It’s basically saying that humans will launch VNPs indirectly, by failing to put any limits on an AI’s behaviour. Yes, it could happen, but it doesn’t seem too likely. No doubt the superintelligence control problem is a serious challenge. But writing instructions that constrain an AI to a small region of spacetime should not be the slippery sort of problem that is infinitely easy to get wrong (unlike instructions to ‘make everyone happy’). Generally, I sense that invoking super-AI makes little difference to the question. ‘Why would anyone do it?’ just becomes ‘Why would anyone use super-AI to do it?’ A real answer has to lie with human incentives in the present, on Earth. So, if there is no direct product in the present, what about the indirect products that do occur in the here and now? This is where the answer must lie. Space programmes have known about these since Apollo. The early space programme did generate some tech spin-offs, but the real product was something different – it was a taste of a new kind of purpose and meaning, as we constructed the story of humanity’s first tenuous steps into a new realm. In the kind of VNP project we’re imagining here, human meaning will be embedded in a cosmic story spanning billions of years and superclusters of galaxies – a narrative that grants special status to those who participate. The story will contain a moral dimension too, since you’ll need an overpowering moral imperative to justify appropriating galaxies. 
Regardless of whether a moral imperative exists at present, if a demand for one exists, a supply will emerge to fill it. Let’s be sceptical of that last sentence. Perhaps we’re offended by this entire discussion, and conclude that humanity must not despoil the cosmos with VNPs. Further, suppose we have total faith in our ability to convince the world that a ‘no cosmic expansion’ philosophy is the best vision. Well, that’s not good enough, because this philosophy must also compete for all future opinions. For the sake of argument, let’s say that our ‘no cosmic expansion’ philosophy is dominant for 1,000 years before briefly falling out of favour, allowing a single VNP to be released. The net outcome for the cosmos is identical to a world in which our philosophy never existed at all. No, reliance on human persuasion is insufficient, if we’re really committed to the cause. A more practical, long-term way to safeguard the Universe from life would be to launch a competing project of cosmic expansion, using our own VNPs. One whose goal is to spread everywhere and, with minimal use of resources, do nothing but prevent others from gaining a foothold on the trillions of worlds we come to occupy. Only then can we smugly sit back and let it all go to waste in sterility. The point is that any competing philosophy with a sufficiently strong opinion must adopt some form of cosmic expansion, even if it opposes the entire concept. Those efforts will unavoidably create their own Cosmic Story with Moral Dimension, enshrining the progenitors and offering Purpose and Meaning. There doesn’t seem to be any way around it, short of snuffing out humanity before any of this can happen. What about this ‘Cosmic Story with Moral Dimension that delivers Purpose and Meaning’? That description may seem familiar. That’s because it’s religion, by another name. 
It could be a secular religion (that will inevitably take offence at religious comparisons), or it could be one that imports spiritual beliefs from pre-existing religions. Either way, religion it will be. Cosmic Story. Moral Dimension. Transcendent Purpose and Meaning for practitioners. One can go further – based on what we’ve seen before, it’s likely to be a cult. That may sound like a stretch, so let’s unpack it. If your goal is to conquer and utilise the accessible Universe, you’ll need absolute certainty in your philosophy. At least, you’ll need to approach certainty before launching your VNPs (it’s no good changing your mind after the launch!) So, you’ll need to identify and recruit participants inclined to fully commit to your cause. And you’ll need to relentlessly purge dissenters who occasionally arise inside your organisation – they threaten to mutate the ‘absolutely certain’ goal. You’ll also have a strong incentive to adopt secrecy as a tool to prevent infiltration, spying and sabotage from competing groups, or government interference. So, then, what do you call an insular, highly dogmatic religion that ruthlessly enforces conformity? Exactly. The underlying philosophy will need supreme self-confidence to justify asserting itself on the cosmos, and it must strenuously avoid meddling from outsiders before the launch date. These projects won’t necessarily start out as cults – they may even work against cultish behaviour – but as the decades pass and objectives become less abstract and goals get nearer, they’ll find strong incentives to move in a cult-like direction, and very little incentive to move back. Another obvious observation is that competing religions tend not to get along with each other. When they do get along, it’s usually because one or more has given up on certain ambitions, and/or stopped taking their doctrine too seriously. They become more agreeable as they become more about ‘personal faith’, and less outward-focused. 
That condition will not be present in a race to deploy VNPs to capture the cosmos. The next 100 billion years of the Universe will be at stake, depending crucially on events happening today. The future of millions of galaxies. Someone will surely point out that direct physical conflict in the here-and-now on Earth is preferable to cosmic-scale conflict later on. In other words, there will be an incentive to violence, before launch-day. I’m hardly unique in predicting conflict over future technology. Science fiction loves to do that. Others, like Hugo de Garis, have predicted an eventual world war over the question of ‘whether humanity should build godlike massively intelligent machines’. But this is different. I’m talking about the few. Conflict between small, secretive groups of highly technical zealots. People who could tell you the distance to the Andromeda Galaxy but hope you don’t want to know. While the rest of humanity is fretting over issues like AI safety on Earth and shouting about impacts to their personal way of life, these people will be thinking about something else entirely, and watching with a jealous eye for others like themselves. Because the most successful cult – by hook or by crook – is going to inherit the cosmos. There’s an important point we touched on before. Each religion is in competition with the others of the present, but also with the others of the future. Being the first to launch VNPs isn’t enough to guarantee victory over the competition. The reason is that intergalactic travel takes millions of years. Suppose you launch VNPs with a travel speed that’s 50 per cent of the speed of light, and your competitor launches VNPs with a speed 1 percentage point faster. Your competitor then arrives at the nearest large galaxy with a 100,000-year lead. That’s enough lead to capture the entire thing, depending on the dispersal pattern of the probes. 
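The lead-time arithmetic is easy to verify. The sketch below assumes a distance of about 2.5 million light-years to the nearest large galaxy (Andromeda) – a figure the essay itself never states – and compares arrival times at 50 and 51 per cent of light speed.

```python
# Hedged back-of-envelope: the distance is an assumption (roughly the
# distance to Andromeda), not a figure given in the essay.
DISTANCE_LY = 2_500_000

def travel_time(speed_fraction_of_c):
    # years to cover DISTANCE_LY at a constant fraction of light speed
    return DISTANCE_LY / speed_fraction_of_c

lead_years = travel_time(0.50) - travel_time(0.51)
print(f"head start: {lead_years:,.0f} years")  # prints "head start: 98,039 years"
```

A single percentage point of speed buys roughly the 100,000-year head start the text describes, and the gap only widens with distance.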
The effect is magnified the further out you go; you’ll quickly be cut out of all future expansion, finding every galaxy fully colonised by your competitor by the time your probes arrive. It’s irrelevant if you were the first to launch by a decade, a century, or a millennium. Thus, if your moral imperative dictates that you capture the cosmos, you want to launch and want to see no future launches by anyone else. This creates an incentive that is truly perverse. If you want certainty that your probes are successful, you’ll have to act to prevent all future competition. It’s hard to imagine many ‘nice’ ways to do that. Even the most heavy-handed political schemes tend to become uncertain in less than a century. A group that successfully launches first will be placed in an awkward position, weighing the wellbeing of one planet – Earth – against the future of millions of galaxies. In a nightmare scenario, a truly committed cult could become the most extreme kind of death cult, determined to leave a poison pill for the rest of us, to ensure the ‘correct’ cosmic outcome. No one knows the probability to assign to any of this, but it’s unwise to ignore incentives just because they’re horrific. The strength of the incentive is magnified by the scale of the future. If the future promises to be big and glorious enough, almost anything is justified in the present to ensure a righteous outcome. We’ve seen a similar moral calculus at work in 20th-century political movements, and real-world implications of this kind of futurist reasoning are already appearing in the present day, as with the case against Sam Bankman-Fried. What happens when those incentives reach their maximum possible strength, with the cosmic future in the balance? I’ll advance a picture that seems plausible to me. The humans recruited would be technical types, and those with connections, money or other useful resources. 
They would have to be attracted to (or tolerant of) cult-like behaviour, with personalities that accept the demand for extreme control, and for whom personal meaning, ‘secret knowledge’, and a new/special identity are a big draw. They would, of course, also be selected for a proven capacity to keep their mouths shut in the face of any number of red flags. The overlap of those requirements narrows the pool, yet large numbers are not essential. Just enough to have their fingers in the relevant technologies, and the ability to take them a few steps in their own direction. Imagine something like a secret network within a few powerful companies – one with a charismatic leader (not necessarily a CEO) and a critical mass of followers in key positions, willing to do almost anything to advance the leader’s grandiose cosmic scheme. I’m favouring small, secretive groups over large, overt players such as governments or big organisations, publicly dedicated to their own vision. The reason is that, for any specific Moral Imperative you might propose, there will be many more people who oppose it than who agree – just as no single, coherent religious sect commands a human majority. Large, overt organisations are also easy to infiltrate and sabotage. Imagine any active politician – even one you think is particularly good. How comfortable would you be in handing over all cosmic resources and the next 100 billion years to a Moral Imperative of their choosing? Can you imagine anyone willing to take extreme measures to prevent it from happening? And what do you think would happen if, let’s say, the UN wanted to select the Imperative by vote? I suspect that getting and maintaining sufficient agreement, secrecy and control implies a small group. Small groups could tap ‘off-the-shelf’ technologies as they become increasingly available. 
High availability implies that more small groups will compete, when the time is right. What does this imply about the Moral Imperative itself? It will probably incorporate extreme versions of beliefs that are trendy with engineering types at the time (two or three generations hence), with a proven ability to evoke strong emotions and commitment. A lot of history will occur between now and then, so I hesitate to even speculate on the theme it will take. I seriously doubt it will be an idea that is fashionable today. Where are we in this timeline right now? In the very early days. References to our interplanetary future are still largely found in science fiction; yet it’s a great irony that the big-budget sci-fi we all grew up with trained us to think too small about the future, in space and time. Fictional world-building invoked fanciful notions like faster-than-light space travel and ‘aliens everywhere’ so that events could unfold in a short time, and not too far away. It was never a case of invoking implausible tech as part of ‘thinking big’. The real Cosmic Story is yet to be imagined. The most distant and uncertain part of the picture is the Moral Imperative. I haven’t seen one that looks compelling. Eventually, I expect there to be many. For now, though, the heavy lifting is done by the vastness of scale, not by the moral dimension – but eventually, it must become the ultimate driver. Of course, the most dedicated agents may not make their programmes public. Someone with a coherent long-term plan might prefer this state of affairs to persist as long as possible, where no one can imagine a moral imperative connected with ‘outer space’ – simply as a matter of having less competition. Finally, what about Purpose and Meaning? It’s making an appearance already. However one might critique longtermism in detail, it has surely discovered a powerful human response that won’t be going away. 
Since Copernicus in the 1500s, humanity’s place in the Universe has been continually and relentlessly demoted by astronomy. Unfortunately, human meaning was demoted along with it. Wouldn’t it be intoxicating, then, to learn that the entire point of that 500-year enterprise wasn’t to show us our insignificance, after all? The real purpose, I submit, was to comprehend the scale of events that we mere mortals would be setting in motion.

This Essay was made possible through the support of a grant to Aeon+Psyche from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Foundation. Funders to Aeon+Psyche are not involved in editorial decision-making.

Jay Olson
Source: https://aeon.co//essays/cosmic-expansion-is-a-given-who-inherits-the-cosmos-is-not
History of science

To the detriment of the public, scientists and historians don’t engage with one another. They must begin a new dialogue

Would boycotting Russian scientists be an effective protest against the Russian invasion of Ukraine? Where do terms like ‘altruism’ come from, and what assumptions come with them? How long should research groups be allowed to embargo their data, and why? Why is the normal curve assumed to be normal for so many disparate phenomena, from the distribution of heights to the distribution of observational errors? Who should count as an author of a scientific publication? These are questions that, in the here and now, tax scientists’ judgment and shape their research. Historical perspective and understanding can illuminate these and other problems facing scientists. The problem is that the scientists and the historians have stopped talking with and listening to one another. Scientists found the thickly contextualised, sharply focused histories of now-discarded science irrelevant and indigestible. Historians bridled at the scientists’ demands for a mythologised and anachronistic version of the past. We think it’s time to restart the conversation, for the benefit of both scientists and historians. What would the scientists stand to gain? First, they would learn a lot to help them in making consequential decisions. Take the question of whether to boycott Russian science because of the invasion of Ukraine: history is rich in lessons about how effective a boycott is likely to be, as well as about the potential costs. Specifically, historical precedents suggest that a boycott is likely to significantly damage Russian science as well as serve as a statement of moral disapprobation. Given the indifference of the present Russian regime to the flourishing of domestic science, however, a boycott is unlikely to have any direct impact on the course of the present conflict. 
History also gives us a better understanding of the question of whose work merits recognition in the author line of a scientific article. Norms of scientific authorship have evolved constantly since the 17th century, when aristocratic anonymity (what could be more vulgar than splashing one’s good name across a work anyone with some spare change could buy?) prevailed. By the 18th century, signed, individual authorship became the norm, but all manner of other forms of authorship have been, and continue to be, tried. For example, an influential mid-20th-century group of French mathematicians chose the collective pseudonym ‘Nicolas Bourbaki’ in order to modernise and standardise the teaching of mathematics at French universities. Today we see lists of 80-plus names on publications in high-energy physics. Historical studies show how the present norms of authorship originated in past circumstances – and past values (for example, values that elevate theory over practice, or seniority over actual contribution) – that may no longer hold. The same goes for norms concerning the open publication of data: the ongoing debates over how long scientists may embargo the results of clinical trials in medicine, or whether corporate-funded research belongs in the public domain, are only the latest episodes in the long history of who, if anyone, owns scientific data. Knowing something about these developments can be liberating as well as enlightening: not so long ago, scientists made very different decisions about similar matters. What was different in the past can change again; history can show the plasticity of things scientists sometimes presume fixed. Second, history can provide strong insight into flawed premises. The most well-documented cases are of racial and gender biases malforming all-too-many studies of human difference, some with tragic consequences, as in the case of eugenics researchers who recommended policies of social exclusion, sterilisation and even genocide. 
In January 2023, the world’s largest body of human geneticists, the American Society of Human Genetics, apologised for the role of its past members in promoting racism, eugenics and other forms of discrimination. The board’s pledge to do better in the future was grounded in historical knowledge of its past. ‘This time of reckoning with history is overdue,’ announced the society’s president, ‘but it forms the foundation for a brighter future.’ Political prejudice is only the most obvious source of dubious assumptions. More subtle but more pervasive are the metaphors and analogies coined to capture newly discovered phenomena and newly invented ideas that, by their very novelty, stretch existing terminology. Sometimes old words, such as ‘intelligence’, which meant general quickness of understanding, are turned to new purposes, as in the case of how Alfred Binet, Lewis Terman and other 20th-century psychologists defined an ‘intelligence quotient’ (IQ), which measures specific verbal and quantitative skills. But the new, scientific applications still drag along the baggage of the old words: IQ is conflated with all intelligence, and altruism with all unselfishness, and not just by the lay public. Importantly, a metaphor that shines a spotlight on one aspect of a phenomenon – for example, ‘partnership’ to pick out the mutually beneficial aspects of symbiotic relationships among organisms – can plunge other interesting features into darkness. In this case, for example, it obscures the fact that more than two organisms may be involved and that the relationship can be both competitive and cooperative. Similarly, Richard Dawkins’s arresting metaphor of ‘the selfish gene’ helped popularise George C Williams’s gene-centric view of evolutionary change and William Hamilton’s notion of inclusive fitness. 
Dawkins’s work promoted an understanding of how the persistence of altruism and self-sacrificial behaviours in populations could be consistent with the core evolutionary principles of competition and the struggle for survival. But the power of the metaphor led to an overemphasis on natural selection and diverted attention away from other mechanisms of evolutionary change such as developmental constraints. Proponents of a new ‘extended evolutionary synthesis’ have persuasively argued that evolutionary thinking needs to be expanded beyond simplistic models that focus purely on genes and natural selection. Other metaphors – nature as a machine, the brain as a computer, DNA as a ‘blueprint’, to cite just a few examples – can harden into uncritical assumptions and inhibit innovative thinking. Social scientific studies suggest that scientists are often unaware of the extent to which their activities are shaped by such constrictive metaphors and analogies. A deeper understanding of key terms and metaphors that form some of the currency of scientific communication and thought can sharpen the scientist’s eye for blind spots, inaccuracies or distortions. In fact, the repeated and uncritical use, in word and thought, of familiar images and metaphors can easily lead those engaged in the scientific pursuit of knowledge astray, to distortion rather than illumination. The pinpoint specialisation required in scientific research, and the prestissimo tempo of contemporary science, may be efficient but they also produce myopia. In the competitive environment of scientific research, it is natural to acquire an ant’s-eye view of a landscape shaped as much by the availability of funding, professional relationships, institutional resources and serendipity as by some inexorable march toward truth. 
The history that does survive in research programmes straightens out meandering paths of development: every journal article begins with a review of the relevant literature, most of it recent but some of it stretching back centuries to honoured ancestors such as Johannes Kepler or Charles Darwin. These brief reviews connect the research at hand with a trajectory of past enquiry, as if extending a curve through the latest data point. That is not the story the history of science tells. Instead of the one smooth curve, there are many curves, each with multiple forking branches, some veering off at unexpected angles, and some petering out altogether. We suspect that this history, the result of research every bit as scrupulously empirical as that of the scientists themselves, is in fact closer to the lived experience of most working scientists. What the history of science can provide is a wide-angle orientation that helps scientists see a bigger picture, including why they’re studying what they’re studying – and what the alternatives might be. How would the historians benefit from renewed lines of communication? Historians, too, suffer from specialisation and the pressure to publish. They are swept up in the pursuit of the latest hot research topic to the exclusion of all else. Writing for an audience of scientists would force historians of science to look up and outwards from trends within their discipline. It would also oblige them to forge new kinds of narratives. It is one thing to reject a teleological plot line, in which past science inevitably and triumphantly culminates in present doctrines. It is quite another to invent an alternative narrative that captures the drama of how science actually advances while remaining true to the fundamentally unpredictable path of empirical research. With a few notable exceptions, historians of science have largely ducked this challenge. Writing for scientists – not to mention the broader public – would force them to confront it. 
And not just in writing. Some of the most compelling recent accounts of key episodes in the history of science have been television series and films – see, for example, Light Fantastic (2004) and Black Holes: The Edge of All We Know (2020), respectively. Historians would also benefit from a deeper and more supple understanding of their most fundamental concept: context. The relevant context for, say, political historians may not be best suited for historians of science. More discussions with scientists might heighten historians’ awareness of the cosmopolitan character of both past and present science. Nationalist geographies and chronologies frame most forms of history, and the history of science is no exception. Since the rise of the nation-state in the 19th century, most historians have tailored their specialties to fit this political framework. But these nation-centric frameworks are a bad fit for the scientific ideas and practices of almost any epoch, which traversed many cultures and languages, and in all directions. This is not just the case for jet-age science. Studies of premodern science and scholarship in many parts of the world reveal the remarkable mobility of people and ideas across seas and continents. Framing science within the history of a single nation-state – or, for that matter, within a single culture, language or religion – is as misleading for the 13th century as it is for the 21st. Historians of all stripes stand to benefit from the cosmopolitan perspective that has long been integral to science. Historians insist that what the present takes for granted should not be projected onto the past. But it is also true that hindsight can be powerful, and that present experience does shine a new light onto parts of the past that historians have neglected. Occasionally, lighting up dark corners can transform a familiar historical landscape. 
The social movements of the latter half of the 20th century inspired new kinds of history: workers’ history, women’s history and the history of marginalised minorities. This extraordinary body of scholarship has, in many cases, brought us more vital and sophisticated understandings of political revolutions, industrialisation and imperialism, to name only a few examples. Similarly, historians of science have a lot to learn from the present experience of scientists – and not because any easy analogy can be drawn from past to present science (such analogies are usually superficial). Rather, what’s happening now can pose new questions to the past. For example, the methods of big data have alerted historians to the historical collection of data troves, from the astrometeorological observations of ancient Babylonia (some still cited in NASA’s Five Millennium Canon of Lunar Eclipses: -1999 to +3000) to the weather diaries of 19th-century ships’ captains (a valuable source for tracking climate change). Conversely, historians investigating the impact of climate episodes such as the Little Ice Age, which lasted from about 1300 to 1850, have greatly benefited from scientific data such as the width of tree rings, which can be reliably correlated with annual average temperature. Consider, too, the Ordered Universe project, a collaborative endeavour involving scientists and historians (along with philosophers, artists, educationalists and others). The project focuses on the remarkable scientific writings of the polymath Robert Grosseteste (c1170-1253). One of its accomplishments has been to show us how modern physics can assist with the interpretation of medieval works on optics and experimental methods. Thinking the history of past science in terms of present science can sometimes enlarge, not just distort. 
Since conversations between scientists and historians of science would benefit both, why are they so rare? History provides some insights. For almost two centuries, the increasing specialisation of the sciences has militated against the flow of information between the sciences and the academic humanities (and, indeed, among the sciences themselves). As early as 1834, the polymath scientist-historian William Whewell bemoaned the ‘division of the soil of science into infinitely small allotments’. Science, he feared, was ‘a great empire falling to pieces’. This dismal prospect prompted Whewell to coin the term ‘scientist’ in the hope of bestowing a semblance of collegial unity on fragmenting disciplines. Overall, the new appellation has been an outstanding success. ‘Scientist’ is now an immediately recognisable designation. It conveys the impression of an authoritative community of professionals committed to a proven and reliable ‘scientific method’. The profession of the scientist, moreover, is a relatively prestigious one and is consistently ranked by the general public as among the most trustworthy. But it is significant that the label ‘scientist’ initially struggled to gain acceptance among those whom it was intended to unite. Some thought it a crass Americanism. To others, it was redolent of ‘dentist’ – someone who is paid for the performance of uncongenial labours. For much of the 19th century, the preferred descriptors continued to be ‘philosopher’ or ‘naturalist’. These terms reflected the older disciplines of natural philosophy and natural history, which then constituted much of what we now refer to as the natural sciences. In the Anglophone world, the eventual success of the designation ‘scientist’ led to an effacing of the origins of the older disciplinary configurations in which core scientific activities were conducted as branches of philosophy and history. 
In a related development, there emerged the accompanying idea of ‘the scientific method’, understood as a singular and systematic approach to knowledge-making that distinguished genuine science from other ways of knowing. In all of this, the word ‘science’, which had once encompassed all legitimate forms of knowledge, came to refer only to the natural sciences (again, in the Anglophone world at least – other languages, such as German, still use their cognate terms for ‘science’ to refer to all forms of systematic knowledge). In the latter half of the 20th century, after English became the lingua franca of science, the narrowed meanings of the words ‘science’ and ‘scientist’, along with the conception of a distinctive and unified ‘scientific method’, helped establish an enduring science/humanities division, and began the reversal of a balance of cultural prestige that had once strongly favoured humanistic disciplines. The division hardened further and became acrimonious in the 1960s and ’70s. The philosophers Thomas Kuhn and Paul Feyerabend challenged the self-perceptions of many scientists, who understood them to be impugning the objectivity and rationality of science. In the 1990s, the ‘science wars’ broke out, in which scientists such as the physicist Steven Weinberg accused historians of science and science studies scholars of ignorantly and maliciously undermining scientific authority (and, with it, public support for big scientific projects). Historians countered that their accusers were too arrogant to learn history and accept legitimate criticism. These episodes left deep scars on both sides. After a decade of bitter exchanges between scientists who felt science was under attack and scholars who felt scientists had no authority to talk about history, an uneasy detente was bought at the price of mutual non-interference, which subsequently became mutual indifference. Like most people, scientists are interested in history. 
Yet they often prefer to read triumphal works produced by fellow scientists rather than scholarship by professional historians. To the chagrin of historians, popular interest in the history of science often resembles the kind preferred by scientists, with celebrity scientists reproducing digestible myths and memes that offer entertaining and engaging stories about the past. These popular histories are replete with heroes and villains and offer a simple narrative arc of truth vanquishing error and ignorance. These one-dimensional accounts often promote public misconceptions about how science really works. Equally importantly, among the scientific community they can also erode the self-critical spirit that is so central to the success of scientific endeavours, replacing it with a self-congratulatory one. Worse, they can suggest that what scientific textbooks currently teach are only eternal truths – a position difficult to reconcile with belief in scientific progress and how history works. Such a rigid view will undermine public confidence in scientific authority when new evidence prompts scientists to change their minds. If there is one lesson to be drawn from the history of science, it is caution about smug complacency. Science doesn’t stand still long enough to rest on its laurels. Its triumphs are at once very real and never the last word. To begin again a fruitful conversation, both historians and scientists will have to overcome some misgivings. Historians would have to surmount fears that learning from and writing for scientists would compromise their critical edge and independence as scholars. Scientists would have to overcome their fear that non-triumphalist history is somehow, by definition, hostile to science. Both sides’ fears are exaggerated – and counterproductive. Historical scholars and scientists should recognise the need to make common cause against political forces that would discredit all research-based expertise, whether scientific or historical. 
Aside from what benefits scientists and historians may derive by restarting the conversation, the public also stands to gain. The COVID-19 pandemic showed us all that neither scientists nor historians have done a particularly good job of explaining how science works: that fierce controversy over how to make sense of empirical results is a feature, not a bug; that while there is no central authority to decide who’s right and who’s wrong, some sources of information are more reliable than others; that views can evolve quickly as research accelerates. A perplexed public expecting scientists to deliver eternal truths that could guide policy and conduct during a dangerous pandemic was understandably confused and disappointed when those truths seemed in flux. Many scientists seemed hard put to reconcile their commitment to both the permanence of scientific truth and the desirability of scientific progress, which brings all such truths under scrutiny and revision. A better dialogue between science and history could have contributed to improved public understanding. Historians and scientists need one another to reconcile and communicate how the practice of science is always fragile, sometimes chaotic, but also astonishingly successful. What scientists and historians share is a deep commitment to empirical enquiry. Scientists may claim that they’re the experts on what science is and how it’s done in the present, and historians may counter that they’re the experts on what science was and how it’s been done in the past. Both sides often regard the perspective of the other as irrelevant. Yet science-present and science-past have lessons to teach each other. Past science shows that current arrangements are neither inevitable nor necessarily optimal; present science shows how novel ideas and practices emerge in real time. 
Might this be the occasion to think about how science is done with the same empirical rigour that both historians and scientists bring to their own specialties? That means including both the past and present practice of science; here, historians and scientists can make common cause. There are some hopeful signs that such conversations are being restarted: historians who spend time in the lab or in the field as well as the archive and the library; scientists who inform themselves about how their field has grappled with past challenges in order to seek guidance in current dilemmas, both intellectual and ethical. Whether the resulting conversations will be worth the effort required of both parties to take part in them is admittedly a wager – especially when both scientists and historians of science have plenty to occupy themselves with in their own thriving disciplines. But the stakes are high for both sides: nothing less than a deeper understanding of how science has changed – and is still changing. | Lorraine Daston & Peter Harrison | https://aeon.co//essays/science-and-history-cannot-afford-to-be-indifferent-to-each-other | |
Religion | Once a centre of Afghan culture, Sufism seems to have disappeared in the maelstrom of war and upheaval. But still it survives | My introduction into the world of Afghanistan’s Sufism began in 2015, over lunch with my friend Rohullah, the director of a research institute in Kabul. I had been working in Afghanistan in various sectors from government to nongovernmental jobs, and had returned to explore topics for a PhD that I had embarked on a year prior. I asked what had happened to Afghanistan’s Sufis. Were they all gone? Afghanistan had, after all, once been the cradle of mystic interpretations of Islam, the place of origin of Mawlana Jalaluddin Balkhi, known in the West as Rumi. Had the Sufis disappeared in the exodus precipitated by successive wars that had engulfed Afghanistan since the late 1970s? Or had they been replaced by more radical and austere forms of Islam, as some analysts speculated? Rohullah laughed. ‘They are still here,’ he said. ‘You foreigners just don’t ask about them. All you care about is gender, counter-insurgency and nation-building.’ Any cursory look through titles in bookstores or newspaper headlines on Afghanistan substantiated Rohullah’s insight: Western policymakers, journalists and most researchers tended to nurture the kinds of knowledge about Afghanistan that informed policy, and for that purpose Sufis were not particularly useful. But even when searching regionally for literature on Afghanistan’s Sufis, all I could find were texts on the historical prevalence and importance of Sufis, though nothing about their present-day lives and struggles. On occasion, Sufism still burst onto the public stage, for instance in 2016 when Iran and Turkey tried to claim the Masnawi Ma’navi, Rumi’s magnum opus, as their joint cultural heritage (the poet died in Konya, in present-day Turkey, in 1273 – and wrote in Persian, a language spoken in both Iran and Afghanistan). 
Western scholars and pundits barely took notice but, in Afghanistan, public intellectuals such as the poet laureate and Sufi poetry teacher Haidari Wujodi argued that ‘Maulana belongs to present-day Afghanistan and yesterday’s Khorasan. It is the responsibility of the Afghan government to take swift action about it to protect our heritage.’ An online petition decried the attempt to lay claim to Afghanistan’s cultural legacy while the Ministry of Foreign Affairs held talks with UNESCO over the perceived slight. And Atta Mohammad Noor, the then governor of the northern province of Balkh where Mawlana’s family originated, penned a letter to the UN condemning Iran and Turkey’s ‘imperialistic’ attempts to appropriate Rumi and disregard Balkh as the esteemed poet’s ‘motherland’. This ‘diplomatic frenzy’, as Radio Free Europe/Radio Liberty called it, revealed Afghan pride in Sufism and that it still has the power to spark intense debate. Sufis in Afghanistan never really fit into Western narratives about the Taliban or the war and occupation. So, Sufism was ignored. The chaotic US military evacuation in 2021 and the sweeping Taliban takeover, with all the scenes of suffering and human rights abuses that followed, have made it even more difficult to imagine an Afghanistan where Sufi scholars debate the finer points of Islamic ontology and poets ruminate on the infinite ways to lose oneself in the beauty of God’s creation. It requires a real stretch to remember that Sufism, in its multifaceted incarnations, has been a central thread in the tapestry of Afghanistan’s historical, artistic, educational and political life. Sufi traditions were once so influential in royal courts that kings extended patronage to poets and Islamic figurative artists who illuminated manuscripts, weaving Sufi literary motifs into exquisite paintings. 
Some historians, such as Waleed Ziad, even go as far as to say that Sufi orders that were firmly rooted in what later became Afghanistan built their own ‘hidden caliphate’, creating networks throughout the Middle East, Central and South Asia. These chapters remind us that Afghanistan’s history transcends the geopolitical tumult of the present, tracing back to a rich heritage of spiritual and artistic expression. The history of centres of Sufi learning, such as the Pahlawan Sufi lodge in an old part of Kabul, starts in this different time: in the 18th century, the capital shifted from southern Kandahar to the mountain-crested city of Kabul, a migration that ushered in a wave of cultural and spiritual transformation. Among those embarking on this northward journey was a man named Sufi Sher Mohammad and his son Mir Mohammad. Sufi Sher earned the sobriquet of Pahlawan, or ‘wrestler’, a testament to his reputed superhuman fighting prowess. But it was also a name of praise, because he fought for the powerless. In the heart of Kabul, they built the Khanaqah Pahlawan, or the Lodge of the Wrestler, in a district fittingly named Asheqan-o-Arefan, a place where lovers and mystics, the seekers of gnosis, congregated in their pursuit of divine wisdom. Here, seekers assembled for weekly meditative zikr (literally, ‘remembrance of God’) rituals and spiritual advancement through reading and learning. In the modern era, Sufism continued to play a central role in Islamic thought and practice in Afghanistan until at least the last quarter of the 20th century. Sufi poetry was not a fringe phenomenon but a mainstream approach to teaching Islam in Afghanistan’s madrasas. Alongside the Quran and Hadith, students learned poetic exegeses based on the compilations of Rumi, Saadi and Hafiz. ‘In the past, there was oral knowledge on how to understand, recite and sing poetry,’ an Afghan friend told me. 
‘Until the Soviet time, [in addition to the Quran,] the mosques were also teaching poetry, through collections such as Panj Ganj … Now there is only learning by heart, no analysis.’ The Soviet occupation of Afghanistan, my friend pointed out, was a time of radical change and violence on multiple levels. Fighting and destruction sent many Afghans into neighbouring countries, while ideas about what constituted Islamic authority shifted during the jihad, fought against the backdrop of the Cold War. Khanaqah Pahlawan’s spiritual lineage stretches back centuries but the structure of the Kabul lodge itself bears the scars of its journey through Afghanistan’s recent history. During one visit in 2018, Haji Tamim, the custodian, told me: ‘We had to rebuild the roof and upper floor two times,’ explaining how they were hit by rockets that had shaken the lodge’s foundations. ‘Then the mujahidin came,’ he continued. ‘They looted and burned everything that was in here. They took out all the dishes and all the stuff from the mosque [on the first floor] and from the khanaqah [lodge]. They took even the carpet from the mosque!’ In the era of the civil war, when various mujahidin factions fought each other, the khanaqah often found itself on the precipice of violence, its serenity disrupted by war. The Sufi community sometimes chose to congregate instead in a mosque in another part of Kabul, where they continued their zikr sessions and spiritual studies. When they were finally able to return in the 1990s, the community came together to repair the damaged khanaqah. An extensive network of students and regular visitors pitched in financially and with labour to reconstruct the building. Sufis in Kabul use the khanaqah for meetings and celebrations, for rituals as well as a community space for studying poetry, hagiographic compendia and philosophy. Without any state support, Sufi religious networks coalesced, repaired and rejuvenated as best as they could. 
As I walked through the principal congregational chamber on the second floor, an elongated rectangular space adorned with richly patterned red carpets, illuminated by a cluster of chandeliers, Haji Tamim led me to a dark-blue metal cabinet tucked away in the room’s corner. He unlocked the cabinet and, with reverence, began retrieving a collection of relics. The first, a wooden walking stick, had once been the steadfast companion of Pahlawan Sahib, the founder of the khanaqah, more than two centuries earlier. As Haji Tamim cradled the staff, he told the history of each item. [Image: Haji Tamim displaying the cap that belonged to Haji Ahmad Jan. Khanaqah Pahlawan, Kabul, 2018.] They included a cap that had belonged to Haji Ahmad Jan, a respected teacher, whose prospects were bound to the tumultuous era of Hafizullah Amin, when the Communist coup of 1978 set in motion a harrowing, year-long campaign of ideological cleansing to assert control over religious education. In this brief yet catastrophic period, the estimated tally of the disappeared ranged between 50,000 and 100,000. Intellectuals who dared to critique the government, liberal thinkers, Maoists, religious scholars as well as those arbitrarily swept up in the purges found themselves ensnared in a web of persecutions. Even the devoted disciples and revered teachers of Sufi orders were not spared this repression, Haji Tamim recounted, his voice lowering. ‘Haji Ahmad Jan was the one leading the khanaqah. They came and dragged him outside and arrested him. When they manhandled him, he lost his cap. It fell to the floor. He never came back.’ The persecution of religious teachers by the People’s Democratic Party of Afghanistan (PDPA) ultimately led to the most enduring transformations, including unlikely alliances that would guarantee the safety of the lodge and its members. 
These cherished relics symbolise not only the foundation of the khanaqah but also a turning point, marked by the Communist regime’s oppression, which forced the family that had been its steadfast guardians into exile. The teacher was arrested, and so were other members of the Pahlawan family who were detained for several years. At the time, one could never know whether an arrest would lead to an eventual release or disappearance. The Pahlawan family made the decision to leave Afghanistan for good – first to Pakistan, then India, before settling in the United States and Germany. This could have been the end of the Sufi lodge, its leadership starting new lives abroad, students dispersing to other places of learning or giving up their path altogether. But the family struck a deal with a quiet, unassuming mullah from another part of town: he would become the pir of the order, guarding the lodge and leading the community. Thus began the leadership of Haji Saiqal, the unlikely leader of Kabul’s Pahlawan Sufi community. When Haji Saiqal went from the threshold of his mosque out into the streets in Kabul’s Microrayon district, dashing first through wide boulevards and turning into winding alleyways on his way to the reverent confines of the Khanaqah Pahlawan, he crossed multiple spaces and boundaries. At the mosque, the plainly dressed old man with his well-groomed white beard, signature flawless pirhan tumban and a modest turban on his balding head was the keeper of the Law, the imam who, five times a day, led prayers for a neighbourhood of believers. On Fridays, he delivered a sermon expounding the message of the Quran and the Hadiths. At the lodge of the Pahlawan Sufi community, Haji Saiqal was the keeper of a place of spiritual knowledge his followers believe brought them closer to God’s divine presence. Moving from one role to the other, from mullah to Sufi guide (pir) and back again, was as much – perhaps more – a spiritual transition. 
This double role was also almost unheard of in the reporting on Afghanistan. In recent times, mullahs have become, perhaps unfairly, a disreputable class of Islamic leader, in both the East and the West. In its most basic sense, a mullah is an educated Muslim trained in Islamic theology and sacred law, holding an official post in a mosque as an imam. But this term embodies a wide spectrum of attributes, from esteemed community leader to rigid dogmatist to bumbling object of ridicule. Mullahs are believed to hold the potential to rouse fervent crowds or even frenzied mobs, particularly when their Friday sermons delve into politically charged terrain. Since the Taliban takeover of Afghanistan in 2021, and its theocratic precursor in Iran in 1979, the geopolitical influence that ruling mullahs can wield has been a cause for both regional concern and strategic interest. But they can also be the butt of jokes, as with Mullah Nasruddin – a satirical character in the trope of the wise fool, well known in regional folklore from the Balkans to China; at times witty, at other times wise, he dispenses pedagogical humour that criticises the powerful and humbles the listener. [Image: Zikr meeting at Khanaqah Pahlawan, August 2021.] Regardless of where they fall on the spectrum – whether respected, reviled or ridiculed – mullahs are often portrayed as the antithesis of Sufis. Yet in Afghanistan, supposedly the embodiment of all that is wrong with ‘mullah Islam’, there was Haji Saiqal, occupying both roles with relative ease. How was it possible that a mullah, putatively antagonistic to Sufi thought and practice, could become a Sufi leader, the head of a revered and storied khanaqah in the heart of Kabul, taking on the mantle of both esoteric knowledge and protector of the Pahlawan Sufi community? For historians of Islam, Haji Saiqal’s dual position is not so surprising. 
Many traditional scholars (ulama) throughout history have simultaneously inhabited the role of legal experts and Sufi thinkers, leaders and guides, including al-Ghazali, Abdullah Ansari and Rumi himself. However, at the time when Haji Saiqal was chosen as leader, the changes during Afghanistan’s civil war widened a conceptual rift between what is perceived by many as a Sufi Islam that stands in stark contrast to a legalistic ‘mullah Islam’, a rift that remains to the present day. The rift has its origins in colonial and Orientalist literature, which divided Islam between a perceived legalistic Islam in contrast to mystic Sufism as an individual, liberal pursuit. One example of this division is the writing of the early colonial envoy Mountstuart Elphinstone (1779-1859), who describes three categories of religious functionaries: the ‘moollahs’, the ‘holy men’ (sayyids, dervishes, faqirs and qalandars) and the ‘Soofees’, whom he considers a minority sect of philosophers. Setting aside the misrepresentation of Sufism as a sect, Elphinstone saw mullahs and Sufis as diametrically opposed enemies in the religious field. Sufism and Islam were separated and located within different roles: the alim who studies the Islamic sciences, in contrast with the Sufi who sees beyond them. Ignoring the reality of a dual orientation of scholar and mystic in a single person, Sufism and Islam were separated and located within different – and antagonistic – personas. Not only was Islam split in two (legalistic vs mystic), but Sufism was also divided: Sufism as philosophy – the high art and literature of mystic poetry – in contrast to living, contemporary Sufi pirs who were often seen as flawed, or even charlatans. As the anthropologist Katherine Ewing sketched out in 2020 in her overview of the politics of representing Sufism, the living ‘holy men’ were studied and carefully managed by colonial administrators. 
In contrast, Sufi mystic poetry and literature were to be deciphered by Orientalist scholars. Rather than seeing these various forms as belonging to a varied spectrum of belief, they were located in mutually exclusive roles and personas. These conceptual splits also played a part in the allocation of religious authority during the decades of war in Afghanistan. Before the onset of the conflict, traditional claims to religious authority were based on religious knowledge, clerical training or Sufi lineages. The problem for Islamist party leaders who rose to prominence during the anti-Soviet jihad was that they lacked all of these credentials. Islamism developed in Afghanistan’s urban university milieu in the 1950s and ’60s, and most leaders of Afghanistan’s emerging Islamist parties, all based across the border in Peshawar, were university-educated men with no traditional religious training or pedigree. Instead, they legitimised their claims to leadership with the fact that they were the first to initiate jihad against the PDPA government in Kabul and had access to weapons and money through the assistance of Pakistan and other foreign powers, including the US and Saudi Arabia. In an environment of both raw destruction and more fine-grained societal change, in which the external performance of piety was linked either to a position within the war as a mujahidin or as a recognisable authority through title and position, Haji Saiqal proved to be the right man for the moment in two key ways: first, his position and training as a mullah; and, second, his personal pragmatism in dealing with expectations of powerbrokers. His position as a low-level cleric made him recognisable to mujahidin commanders and Taliban officials as a respectable, though nonthreatening, conservative religious scholar, someone whose official position in his mosque they recognised and whose rank would mark him out in a way as ‘one of them’ – a rightful member of religiously legitimated authority. 
He could face officials when they came for visits to check what was going on at the khanaqah, and he could present an image of respectability by asserting that ritual practices were situated within the strictures of Islamic law. The neighbourhood mosque that Haji Saiqal led in the Soviet-built neighbourhood of Microrayon seemed to be a physical manifestation of this adeptness at social camouflage. The simple concrete building, rectangular walls, empty halls and plain red carpets were a far cry from the dazzling tiles, arches and impressively constructed domes of Islamic architecture in Central Asia and the Persianate world. I had somehow expected a more outwardly beautified place as the seat of a Sufi leader. But, here, Haji Saiqal did not wear that mantle, donning instead the garb of a humble neighbourhood mullah. The mosque, it turned out, was a repurposed depot and distribution centre where Afghans once came to redeem their food stamps during the PDPA government in the late 1970s and ’80s. Later, it became one of the 94,000 estimated unregistered mosques in Afghanistan. The environment that Haji Saiqal had chosen as his base for teaching and preaching was inconspicuous – one mosque among many, one mullah among hundreds. The choice of Haji Saiqal as leader of the Pahlawan community was a stroke of navigational genius. The powerbrokers who took control of Kabul in the 1990s – whether mujahidin or later Taliban – were focused on the outward compliance of conduct and representative titles that met their expectations for religious credentials; Haji Saiqal checked all of those boxes. For the Sufi family of the Pahlawan lodge and their followers, however, he was chosen for his character and deeds. They had seen him growing up, from the time when he was a young boy who sometimes joined his father on his visits to the Khanaqah Pahlawan for zikr. 
This knowledge of Haji Saiqal’s inner state trumped his outward credentials when the community decided to whom to entrust the future of the khanaqah. For his part, Haji Saiqal demonstrated a canny ability to manage the volatile environment. He could, when needed, appeal to the Taliban’s morality police from the Ministry for the Promotion of Virtue and the Prevention of Vice with his deep knowledge of Sharia. He could just as expertly administer to the needs of the Sufi community. When varying ministers made moves to shut down the Sufi lodge, he drew on his network of madrasa students and their connections to various Taliban officials to keep the doors of the khanaqah open. He led the community into the 21st century, caring for the modernisation of the Sufi lodge over the following two decades under the coalition governments, until new changes within the governmental set-up were afoot. When I last visited the Sufi lodge in the winter of 2022, the Taliban had not only taken over Afghanistan, but had also closed all Sufi lodges nationwide after a bomb had struck within another Sufi lodge in Kabul in April – in the same place where Haji Saiqal had originally received his ijaza (authorisation for transmitting knowledge). Not only were the lodges closed but so too were religious foundations in which Sufi scholars were teaching weekly Masnawi classes. The official reason was the same in all instances: the danger of attacks (presumably by the Islamic State’s Afghanistan affiliate, although none of the attacks on Sufi places had been officially claimed by them). 
One of the Sufi alims in Kabul opined that the Taliban had used the attack as a convenient excuse to close the lodges because they were in reality against Sufism, arguing that, if the Taliban had been concerned for the wellbeing of Sufi affiliates, they would have given the lodges additional security personnel rather than completely shutting them down. After all, why would they want to shut down a place that offered support, spiritual edification, a warm meal and tea, all the manifestation of community self-help at a time when Afghanistan was hard hit by an economic depression and many families were sliding into poverty? Haji Saiqal would not see these changes – he passed away from a tumour two years before the Taliban took over. Just like in years past, internal transitions within the lodge took place alongside the more overt political changes within Afghanistan. After many deliberations within the community both in Afghanistan and its diaspora, the calm seller of mobile phone cables Haji Tamim, who had been the guardian of the Sufi lodge for decades together with Haji Saiqal, took on the leadership. The story of how Haji Saiqal and Haji Tamim cared for the Sufi lodge in old Kabul is only one of many. Once we shift our gaze from the capital to other cities, from Kandahar to Herat, Bamiyan to Badakhshan, we find others, maybe not a mullah and a mobile phone-cable seller, maybe this time calligraphers and booksellers, university professors and shopkeepers, who hide books, rebuild community centres and shrines, or who argue with authorities. As the places and persons change, so do their adaptive strategies in dealing with violence and repression. What stays the same is their lives within a centuries-long history of Sufis in Afghanistan, immersed in literature, art, belief, philosophy and worship. 
Following the Sufi lodge’s trajectory backward in time, through Afghanistan’s recent history of war and instability and the Pahlawan community’s struggles to sustain its traditions, leads us to a place where we begin to see Afghans very differently, not as victims in need of saving but as active agents in preserving Afghanistan’s rich and varied cultural heritage. From this perspective, Haji Saiqal, the mullah and the pir, becomes a symbol of the creative adaptation – an ethos that his successor has taken on as well. Haji Tamim shrugged when I asked him about the lodge’s closure. ‘The khanaqahs have been here before I was born, and they will exist long after we are gone.’ In his view, governments came and went, but Sufi groups endured – sometimes by simply outliving them, sometimes through engagement and clever navigation. Governments or rulers and their laws could change, but Sufis would not stop gathering. | Annika Schmeding | https://aeon.co//essays/sufi-transitions-between-mullahs-and-sufis-in-afghanistan | |
Thinkers and theories | The intrepid logician Kurt Gödel believed in the afterlife. In four heartfelt letters to his mother he explained why | As the foremost logician of the 20th century, Kurt Gödel is well known for his incompleteness theorems and contributions to set theory, the publications of which changed the course of mathematics, logic and computer science. When he was awarded the Albert Einstein Prize to recognise these achievements in 1951, the mathematician John von Neumann gave a speech in which he described Gödel’s achievements in logic and mathematics as so momentous that they will ‘remain visible far in space and time’. By contrast, his philosophical and religious views remain all but hidden from view. Gödel was private about these, publishing nothing on this subject during his lifetime. And while scholars have grappled with his ontological proof of God’s existence, which he circulated among friends towards the end of his life, other tenets of his belief system have received no significant discussion. One of these is Gödel’s belief that we survive death. Why did he believe in an afterlife? What argument did he find persuasive? It turns out that a relatively full answer to these questions is buried in four lengthy letters written to his mother, Marianne Gödel, in 1961, to whom he makes the case that they are destined to meet again in the hereafter. Kurt and Marianne Gödel pictured together in 1964. Courtesy the Vienna City Library Before exploring Gödel’s views on the afterlife, I want to recognise his mother as the silent heroine of the story. Although most of Gödel’s letters are publicly accessible via the digital archives of the Wienbibliothek im Rathaus (Vienna City Library), none of his mother’s letters are known to have survived. We possess only his side of their conversation, left to infer what she said from his replies. 
This creates a mystique when reading his letters, as if one were provided a Platonic dialogue with all the lines removed, except for those uttered by Socrates. Although we lack her own words, we owe a debt of gratitude to Marianne Gödel. For, without her curiosity and independence of thought, we would have one less resource in understanding her famous son’s philosophy. Thanks to Marianne’s direct question about Gödel’s belief in an afterlife, we get his mature views on the matter. She asked him for this in 1961, a time when he was in top intellectual form and thinking extensively about philosophical topics at the Institute for Advanced Study (IAS) in Princeton, New Jersey, where he had been a full professor since 1953 and a permanent member since 1946. The nature of the exchange compelled Gödel to detail his views in a thorough and accessible manner. As a result, we have (with some supplementation) the equivalent of Gödel’s full argument for belief in an afterlife, intentionally aimed at comprehensively satisfying his mother’s questions, which appear in the series of letters to Marianne from July through to October 1961. While Gödel’s unpublished philosophical notebooks present a space in which he actively worked out views and experimented through often gnomic aphorisms and remarks, Gödel wanted these letters to be understandable and to provide a definitive answer to an earnest enquiry. And because the correspondence was private, he did not feel the need to hide his true views, which he might have done in more formal academic settings and among his colleagues at the IAS. Albert Einstein and Kurt Gödel photographed at the IAS by the economist Oskar Morgenstern, c1948. Morgenstern recounted how Einstein confided that his ‘own work no longer meant much, that he came to the Institute merely … to have the privilege of walking home with Gödel’. Photo courtesy the Shelby White and Leon Levy Archives Center, IAS, Princeton, NJ, USA. 
In a letter dated 23 July 1961, Gödel writes: ‘In your previous letter you pose the challenging question of whether I believe in a Wiedersehen.’ Wiedersehen means ‘to see again’. Rather than the more philosophically formal terms of ‘immortality’ or ‘afterlife’, this term lends the exchange an intimate quality. After emigrating from Austria to the United States in 1940, Gödel never returned to Europe, forcing his mother and brother to take the initiative to visit him, which they first did in 1958. As a result, one can intuit here what must have been a deep longing for lasting reunification on his mother’s behalf, wondering if she would ever have a meaningful amount of time with her son again. Gödel’s answer to her question is unwaveringly affirmative. His rationale for belief in an afterlife is this: If the world is rationally organised and has meaning, then it must be the case. For what sort of a meaning would it have to bring about a being (the human being) with such a wide field of possibilities for personal development and relationships to others, only then to let him achieve not even 1/1,000th of it? He deepens the rhetorical question at the end with the metaphor of someone who lays the foundation for a house only to walk away from the project and let it waste away. Gödel thinks such waste is impossible since the world, he insists, gives us good reason to consider it to be shot through with order and meaning. Hence, a human being who can achieve only partial fulfilment in a lifetime must seek rational validation for this deficiency in a future world, one in which our potential manifests. Before moving on, it is good to pause and capture Gödel’s argument in a nutshell. Assuming that the world is rationally organised, human life – as embedded in the world – ought to possess the same rational structure. We have grounds for assuming that the world is rationally organised. 
Yet human life is irrationally structured. It is constituted by a great potential but it never fully expresses this potential in a lifetime. Hence, each of us must realise our full potential in a future world. Reason demands it. Let’s linger first with a key premise of the argument, namely, the claim that the world and human life, as part of it, display a rational order. While not an uncommon position to hold in the history of philosophy, it can often seem difficult to square with what we observe. Even if we are a rational species, human history often belies this fact. The first half of 1961 – permeating the background of Gödel’s awareness – was filled with rising Cold War tensions, violence aimed at nonviolent protestors during the civil rights movement, and random suffering such as the loss of the entire US figure-skating team in a plane crash. Folly and unreason in human events seem the historical rule rather than the exception. As Shakespeare’s King Lear tells Gloucester when expounding on ‘how this world goes’, the conclusion seems to be: ‘When we are born, we cry that we are come to this great stage of fools.’ It would be a mistake, however, to think that Gödel was naive in his insistence that the world is rational. At the end of a letter dated 16 January 1956, he asserts that ‘This is a strange world.’ And his discussions in his correspondence with his mother show that he was up to speed on political topics and world events. Throughout his letters, his opinions are informed and critical, albeit imbued with optimism. What is tantalising, and perhaps unique, about his argument for an afterlife is the fact that it actually depends on the inevitable irrationality of human life in an otherwise reason-imbued world. It is precisely the ubiquity of human suffering and our inevitable failures that gave Gödel his certainty that this world cannot be the end of us. 
As he neatly summarises in the fourth letter to his mother: What I name a theological Weltanschauung is the view that the world and everything in it has meaning and reason, and indeed a good and indubitable meaning. From this it follows immediately that our earthly existence – since it as such has at most a very doubtful meaning – can be a means to an end for another existence. Precisely the fact that our lives consist in unfulfilled or spoiled potential makes him confident that this lifetime is but a staging ground for things to come. But, again, that is only if the world is rationally structured. If humanity and its history do not display rational order, why believe the world is rational? The reasons that he gives to his mother in the letters display his rationalist proclivities and belief that natural science presupposes that intelligibility is fundamental to reality. As he writes in his letter dated 23 July 1961: Does one have a reason to assume that the world is rationally organised? I think so. For it is absolutely not chaotic and arbitrary, rather – as natural science demonstrates – there reigns in everything the greatest regularity and order. Order is, indeed, a form of rationality. Gödel thinks that rationality is evident in the world through the deep structure of reality. Science as a method demonstrates this through its validated assumption that intelligible order is discoverable in the world, facts are verifiable through repeatable experiments, and theories obtain in their respective domains regardless of where and when one tests them. 
In the letter from 6 October 1961, Gödel expounds his position: ‘The idea that everything in the world has meaning is, by the way, the exact analogue of the principle that everything has a cause on which the whole of science is based.’ Gödel – just like Gottfried Wilhelm Leibniz, whom he idolised – believed that everything in the world has a reason for its being so and not otherwise (in philosophical jargon: it accords with the principle of sufficient reason). As Leibniz puts it poetically in his Principles of Nature and Grace, Based on Reason (1714): ‘[T]he present is pregnant with the future; the future can be read in the past; the distant is expressed in the proximate.’ When seeking meaning, we find that the world is legible to us. And when paying attention, we find patterns of regularity that allow us to predict the future. For Gödel, reason was evident in the world because this order is discoverable. Although unmentioned, his belief in an afterlife is also imbricated with the results from his incompleteness theorems and related thoughts on the foundation of mathematics. Gödel believed the world’s deep, rational structure and the soul’s postmortem existence depend on the falsity of materialism, the philosophical view that all truth is necessarily determined by physical facts. In an unpublished paper from around 1961, Gödel asserts that ‘materialism is inclined to regard the world as an unordered and therefore meaningless heap of atoms.’ It follows too from materialism that anything without grounding in physical facts must be without meaning and reality. Hence, an immaterial soul could not count as possessing any real meaning. Gödel continues: ‘In addition, death appears to [materialism] to be final and complete annihilation.’ So materialism contradicts both that reality is constituted by an overarching system of meaning, as well as the existence of a soul irreducible to physical matter. 
Despite living in a materialist age, Gödel was convinced that materialism was false, and thought further that his incompleteness theorems showed it to be highly unlikely. The incompleteness theorems proved (in broad strokes) that, for any consistent formal system (for example, mathematical and logical), there will be truths that cannot be demonstrated within the system by its own axioms and rules of inference. Hence any consistent system will inevitably be incomplete. There will always be certain truths in the system that require, as Gödel put it, ‘some methods of proof that transcend the system.’ Through his proof, he established by mathematically unquestionable standards that mathematics itself is infinite and new discoveries will always be possible. It is this result that shook the mathematical community to its core. In one fell swoop, it terminated a central goal of many 20th-century mathematicians inspired by David Hilbert, who sought to establish the consistency of every mathematical truth through a finite system of proof. Gödel showed that no formal mathematical system could ever do so or prove definitively by its own standards that it was free of contradiction. And insights discovered about these systems – for instance, that certain problems are truly non-demonstrable within them – are evident to us through reasoning. From this, Gödel concluded that the human mind transcends any finite formal system of axioms and rules of inference. Regarding the incompleteness theorems’ philosophical implications, Gödel thought the results presented an either/or dilemma (articulated in the Gibbs Lecture of 1951). 
Either one accepts that the ‘human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine’, from which it follows that the human mind is irreducible to the brain, which ‘to all appearances is a finite machine with a finite number of parts, namely, the neurons and their connections.’ Or one assumes that there are certain mathematical problems of the sort employed in his theorems, which are ‘absolutely unsolvable’. If this were the case, it would arguably ‘disprove the view that mathematics is only our own creation.’ Consequently, mathematical objects would possess an objective reality all their own, independent of the world of physical facts ‘which we cannot create or change, but only perceive and describe.’ This is referred to as Platonism about the reality of mathematical truths. Much to the materialist’s chagrin, therefore, both implications of the dilemma are ‘very decidedly opposed to materialistic philosophy’. Worse yet for the materialist, Gödel notes that the disjuncts are not exclusive. It could be that both implications are true simultaneously. How does this connect with Gödel’s view that the world is rational and the soul survives death? The incompleteness theorems and their philosophical implications do not in any way prove or show that the soul survives death directly. However, Gödel thought the theorems’ results dealt a heavy blow to the materialistic worldview. If the mind is irreducible to the physical parts of the brain, and mathematics reveals a rationally accessible structure beyond physical phenomena, then an alternative worldview should be sought that is more rationalistic and open to truths that cannot be tested by the senses. Such a perspective could endorse a rationally organised world and be open to the possibility of life after death. Suppose we – cynics and all – accept that the world, in this deep sense, is rational. 
Why presume that human beings deserve anything beyond what they receive in this lifetime? We can guess that something similar troubled his mother. Gödel says in his next letter’s theological portion: ‘When you write that you pray to creation, you probably mean that the world is beautiful all over where human beings cannot reach, etc.’ Here, Marianne might have agreed that much in creation appears ordered, but challenged the assumption that all of reality is so ordered, in particular when it comes to human beings. Must the whole world be rational? Or might it be that human beings are irrational aberrations of an otherwise rational order? Gödel’s response reveals extra degrees of nuance to his position. In the first letter, Gödel had only loosely referenced a ‘wide field of possibilities’ that go underdeveloped but which demand completion. In his subsequent letters, he details what it is about humanity that requires existence to continue – that is, what is essential to humanity. It is first important to explain what Gödel meant by an ‘essential’ property. We have, of course, many properties. I have the property, for example, of standing in a relationship of self-identity (I am not you), of being a US citizen, and of enjoying the horror genre. Although there is no unanimity on exactly how to understand Gödel’s use of ‘essential’, his ontological proof for the existence of God includes a definition of what he means by an essential property. According to that definition, a property is essential of something if it stands in necessary connection with the rest of its properties such that, if one possesses said property, then one necessarily possesses all its other properties. It follows that every individual has an individuated essence, or as Gödel notes in the handwritten draft of the proof: ‘any two essences of x are nec. [sic] equivalent.’ Gödel, like Leibniz, believed that each individual possessed a uniquely determinable essence. 
At the same time, even if essence is defined as individual-specific in the proof, there is evidence that Gödel thought that essences could also be kind-specific. He thought all human beings are destined for an afterlife because they all share a property in virtue of their being human. There are sets of necessary properties that hang together and that are interrelated across individuals such that the possession of this set would entail something being the kind of thing it is. In his ontological proof, for example, he defines a ‘God-like’ being as one that must possess every positive property. As for human beings, I am a human being in virtue of possessing a kind-specific set of properties that all human beings possess necessarily, at least some of which are completely unique to us (just as only a God-like being can have the property of possessing every positive property). In Gödel’s letter of 12 August 1961, he points out the crucial question, which is too often overlooked: ‘We not only don’t even know whence and why we are here, but also don’t know what we are (namely, in essence and seen from within).’ Gödel then notes that if we were capable of discerning with ‘scientific methods of self-observation’, we would discover that every one of us has ‘completely determined properties’. In the same letter, Gödel playfully remarks that most individuals believe the opposite: ‘According to the common conception, the question “what am I” would be answered such that I am something that has absolutely no properties in its own right, something along the lines of a coat rack on which one can hang anything one pleases.’ That is, most people assume that there is nothing essential about the human being and that one can ascribe to humanity any trait arbitrarily. 
For Gödel, however, such a conception presents a distorted picture of reality – for if we have no kind-specific essential properties, on what grounds can categorisation and determination of something as something begin? So what essentially human property points towards a destiny beyond this world? Gödel’s answer: the human ability to learn, and specifically the ability to learn from our mistakes in a way that gives life more meaning. For Gödel, this property hangs necessarily together with the property of being rational. While he admits that animals and plants can learn through trial and error to discover better means for achieving an end, there is a qualitative difference between animals and human beings for whom learning can elevate one into a higher plane of meaning. This is the heart of Gödel’s rationale for ascribing immortality to human beings. In the 14 August 1961 letter, Gödel writes: Only the human being can come into a better existence through learning, that is, give his life more meaning. One, and often the only, method to learn arises from doing something false the first time. And that occurs of course in this world truly in abundant quantity. The folly of human beings mentioned above is perfectly consistent with the belief in the world’s rationality. In fact, the world’s ostensible senselessness provides an ideal set-up to learn and develop our reason through the contemplation of our shortcomings, our moments of suffering, and our all-too-human proclivities to succumb to baser inclinations. To learn in Gödel’s sense is not about our ability to improve the technical means for achieving certain ends. Rather, this distinctive notion of learning is humanity’s capacity to become wiser. I might, for example, learn to be a better friend after losing one because of selfish behaviour, and I might learn techniques for thinking creatively about a theoretical approach after multiple experimental setbacks. 
An essential property of being human is, in other words, being prone to develop our reason through learning of the relevant sort. We are not just learning new ways of doing things, but rather acquiring more meaning in our lives at the same time through reflection on deeper lessons discovered through making mistakes. All this might lead one to infer that Gödel believed in reincarnation. But that would be overhasty, at least according to certain standard conceptions of it. An intriguing feature of Gödel’s theological worldview is his belief that our growth into fully rational beings occurs not as new incarnations in this world, but rather in a distinct future world: In particular, one must imagine that the ‘learning’ occurs in great part first in the next world, namely, in that we remember our experiences from this world and come to understand them really for the first time, so that our this-worldly experiences are – so to speak – only the raw material for learning. And he elaborates further: Moreover one must of course assume that our understanding there will be substantially better than here, so that we can recognise everything of importance with the same infallible certainty as 2 x 2 = 4, where deception is objectively impossible. The next world, therefore, must be one that liberates us from our current, earthly limitations. Rather than recycling back into another earthly body, we must become beings with the capacity to learn from memories that are latently brought along into our future, higher state of being. The belief that it is our essence to become something more than we are here explains why Gödel was drawn to a particular passage in St Paul’s first letter to the Corinthians, which I discovered when perusing his personal library at the archives of the IAS. In a Latin, pocket-sized edition of the New Testament, Gödel jotted at the top of the title page in faint pencil: ‘p. 374’. 
Following this reference, one is led to Chapter 15 of St Paul’s letter where Gödel marked verses 33 through 49 with square brackets and drew an arrow to one verse in particular. In the bracketed verses, St Paul describes our bodily resurrection. Employing the metaphor of crops, St Paul notes that sown seeds must be destroyed in order to grow into plants that it is their nature to become. So too, he notes, will it be with us. Our lives and bodies in this lifetime are only seeds, awaiting their destruction, after which we will grow into our ultimate state of being. Gödel drew an arrow pointing at verse 44 to highlight it: ‘It is sown in weakness, it is raised in power. It is sown a physical body, it is raised a spiritual body.’ For Gödel, St Paul had apparently arrived at the correct conclusion, albeit by prophetic vision as opposed to rational argument. We are left largely to wonder about Marianne’s reaction to her son’s views on the hereafter, though it is certain that she was puzzled. In the letter dated 12 September 1961, Gödel assures his mother that her confusion about his position has nothing to do with her age and much more to do with his compact explanations. And in the last letter, from 6 October 1961, Gödel objects against the claim that his views resemble ‘occultism’. He insists, on the contrary, that his views have nothing in common with those who would merely cite St Paul or discern messages directly from angels. He admits of course that his views might appear ‘unlikely’ at first glance, but insists that they are quite ‘possible and rational’. Indeed, he arrived at his position through reasoning alone, and thinks that his convictions will eventually be shown to be ‘thoroughly compatible with all known facts’. It is in this context that he further presents a defence of religion, recognising a rational core to it, which he claims is often maligned by philosophers and undermined by bad religious institutions: N.B. 
the current philosophy curriculum doesn’t help much in understanding such questions since 90 per cent of contemporary philosophers see their primary objective as knocking religion out of people’s heads, and thereby work the same as bad churches. Whether this convinced Marianne or not, we can only guess. For us who remain with both feet still in this world, Gödel’s argument presents us with a fascinating take on why we might continue to exist after shuffling off this mortal coil. Indeed, his argument glows with an optimism that our future lives, if reason is to be satisfied, must be ones in which we maximise certain essential human traits that remain in a paltry state here. Our future selves will be more rational, and somehow capable of making sense of the raw material of suffering experienced in this life. Can we assume that Kurt and Marianne are now reunited? Let us hope so. | Alexander T Englert | https://aeon.co//essays/kurt-godel-his-mother-and-the-argument-for-life-after-death | |
Thinkers and theories | For Rachel Bespaloff, philosophy was a sensual activity shaped by the rhythm of history, embodied in an instant of freedom | Shortly after Rachel Bespaloff’s suicide in 1949, her friend Jean Wahl published fragments from her final unfinished project. ‘The Instant and Freedom’ condensed themes that occupied the Ukrainian-French philosopher throughout her life: music, rhythm, corporeality, movement and time. One of Bespaloff’s key ideas, ‘the instant’, is less a fragment of duration than a life-changing event, a moment of embodied metamorphosis. In the midst of a noisy world, torn between transience and eternity, the human being listens to the sound of history. Had she completed and published it, ‘The Instant and Freedom’ might have become the masterpiece of an important early existentialist thinker. Instead, her name is hardly mentioned today. Yet Bespaloff was a brilliant and original thinker, among the first wave of existentialists in France. Albert Camus, Jean-Paul Sartre and Gabriel Marcel all admired her. A professional dancer and choreographer, she had finely tuned ears for the musicality of philosophical writing. For Bespaloff, philosophy is a dynamic, sensual activity of listening to and engaging with the voices of others, including those long dead. In dialogue with Homer, Kierkegaard, Nietzsche and Heidegger, she found her own voice. At the heart of Bespaloff’s world is an original conception of time shaped by embodiment and music: the instant is a silent pause that suspends history’s repetitive rhythm. Through our bodies, we experience that break from history as a brief moment of freedom. Her more famous contemporary Simone Weil also used her body to express her philosophy: Weil eventually starved herself to death in solidarity with friends and compatriots in occupied France. Bespaloff shared Weil’s interest in attention, listening and waiting as mystical practices of the body. 
For both thinkers, philosophy was an existential embodiment of their ideas. However, Bespaloff did not use her body as a weapon against itself; rather, she was interested in dance as a creative alchemy of movement. Bespaloff’s philosophy of the body is closely linked to the experience of time: it is our embodied day-to-day existence that measures and gives rhythm to time. In an essay on Homer’s Iliad written during the Second World War, Bespaloff captured the experience of living through the horrors of exile and war. The human being, ‘bound to her time by disorder and misfortune, acquires a new perception of the time of her own existence.’ (All translations here from the French are my own.) Bespaloff’s own life was one of repeated displacement: she moved from Ukraine to Switzerland, Paris to southern France, to Mount Holyoke via New York. Born in 1895 in Nova Zagora in Bulgaria to a Ukrainian-Jewish family, she spent her childhood in Kyiv and then in Geneva where the family moved in 1897. Her mother Debora Perlmutter was a philosopher who taught at university; her father, Daniel Pasmanik, a surgeon, became a leading theoretician of Zionism in the Russian Empire. A fervent anti-Bolshevik, Pasmanik fought for the White Army in the Russian Civil War. In Switzerland, Bespaloff (then Rachel Pasmanik) studied piano and composition at the conservatory, philosophy at the university, and eurythmics with Émile Jaques-Dalcroze. These three areas of study are all entwined in her existential philosophy of embodiment. Dalcroze eurythmics is a holistic method of musical education; it turns the body into an instrument. Different temporalities are concretised through movements, arm gestures and steps. For Bespaloff, eurythmics became an intimate practice of listening with her entire body. Dalcroze’s favourite student, she was sent to work in Paris in early 1919. She began teaching eurythmics at the Paris Opera while also publishing short texts on dance. 
Bespaloff’s ‘plastic dance’ aimed to restore a lost dynamism. Her method attracted the attention of Jean Cocteau and Sergei Diaghilev, who introduced this new corporeality to his Ballets Russes. If philosophy sharpened her ears, eurythmics sculpted her body towards an embodied experience of temporality. She believed that a more authentic sense of time, lost in modernity, still lurked beneath our skin. In 1921, Bespaloff was the choreographer of the ‘Royal Hunt’ scene in Hector Berlioz’s opera The Trojans – a theme she would return to in her Iliad essay. In ‘Dance and Eurythmics’ (1924), Bespaloff wrote that dance is a universe with ‘its vocabulary, a fixed language, its own logic, its needs.’ Eurythmics is the system of this universe, turning movement into existential experiences. Through the plasticity of our bodies, we can reach new forms of being. In the fragment ‘The Dialectic of the Instant’, Bespaloff describes time consciousness as ‘nothing other than a certain way of grasping the relationship between finitude and infinity in the instant.’ The instant’s brevity points us towards a lost continuity that can be restored. Through music and dance, Bespaloff discovered what she calls the experience of ‘magic interiority’. By externalising movement, the subject of eurythmics plunges herself into an inner experience. Bespaloff met her second important teacher in 1925, the Jewish existentialist philosopher Lev Shestov (born in Kyiv as Yehuda Leib Shvartsman). The encounter with Shestov changed her life: Bespaloff the choreographer decided to become a philosopher. This was a radical move but, by then, she was already married to a Ukrainian businessman, which allowed her to quit her job at the Opera and soon after have a daughter. Shestov was a central figure in the philosophical émigré circles of interwar Paris. 
French existentialism gained fame much later through the works of Sartre and Camus. However, Sartre was deeply indebted to Shestov’s original synthesis of Nietzsche, Kierkegaard, Dostoevsky and Jewish theology. Shestov’s charisma and unsystematic thought magnetised young philosophers, among them Georges Bataille. In many ways, the Shestov circle was the hotbed of French existentialism. Along with the Romanian poet Benjamin Fondane, Bespaloff was at the centre of Shestov’s salon. Her friend Daniel Halévy described her sitting on Shestov’s sofa, completely motionless, while ‘she listened with her whole person: with her hands, with her lips, with her eyes.’ One of the few women in the circle, she soon became friends with the Christian existentialist writer Gabriel Marcel and the Jesuit theologian Gaston Fessard who both admired her work. A female philosopher in the 1930s was, as Olivier Salazar-Ferrer put it, ‘a bit like a woman in the 19th century wearing men’s clothes.’ However, Bespaloff would soon wear her own clothes. In 1929, she had dinner with Edmund Husserl whose phenomenology she confidently attacked with Shestovian arguments. Bespaloff caused another stir with the publication of her ‘On Heidegger (Letter to Daniel Halévy)’ in La Revue philosophique in 1933. It was among the very first discussions of Martin Heidegger’s thought in France. Fluent in German, Bespaloff had read Heidegger’s Being and Time (1927) in the summer of 1932. Heidegger’s greatness, she wrote, was that ‘he situates himself in the inextricable; he does not want to detach himself.’ Similar to the experience of eurythmics, Heidegger’s philosophy proposes our hopeless entanglement with the world. It is not difficult to imagine a 28-year-old Sartre being drawn to Bespaloff’s letter, where she wrote excitedly: ‘Existence projects itself into the possible: choice is its destiny.’ For Bespaloff, interpreting Heidegger, this choice is not a matter of free will but of irrevocable commitment. 
By actively choosing, we dash beyond ourselves into an uncertain future. As a musician, Bespaloff ‘listened’ to Heidegger’s text as if to a performance of Bach, a ‘monumental Art of Fugue’. She recognised that, as in a Baroque fugue, all the motifs ‘bring us back to the central theme of Being taken up in all its possible aspects, with increasing infinite variation, but always identical to itself.’ Bespaloff’s enthusiasm for Heidegger’s musical metaphysics was soon tempered by the discovery of another existentialist: Søren Kierkegaard. In 1934, she published notes on Kierkegaard’s Repetition (1843), a work that emphasised the musicality of repetition as continuous transformation. Repetition does not add anything; it only accentuates what is irreducible to human existence. Repetition in Kierkegaard is ‘the will to live again and the refusal to survive’. Only by repeating can we become authentic subjects. In Kierkegaard’s ‘beautiful moment’, Bespaloff found what she called ‘the instant’: an experience of absolute, eternal silence. The absence of a path, she wrote on Kierkegaard, is the only path his philosophy wants to follow. This Zen-like image also perfectly captures the meandering trajectories of her own thought, which Laura Sanò has called ‘nomadic’. A wandering cosmopolitan, Bespaloff was forced to traverse the boundaries of various countries, languages and cultures. Her philosophy mirrored that nomadism, with subtle attention to the embodied experience of movement, melody and metamorphosis. Bespaloff’s essay collection Paths and Crossroads (Cheminements et Carrefours) appeared in 1938. Dedicated to Shestov, the book includes texts on Julien Green, André Malraux, Marcel and two essays on Kierkegaard. The chapter ‘Shestov before Nietzsche’ declares war on her teacher’s total denial of any possibility of truth. 
By refusing to think, she writes, Shestov had returned to another dogma – a radical relativism that ultimately turned into nihilism. Against Shestov’s rejection of reason, Bespaloff poses Nietzsche’s attempt to reach truth through and within one’s life. Nietzsche’s concept of the Will to Truth, she thought, could reconcile us to the tragedy of existence. Where Shestov saw an unbridgeable gap, Bespaloff made a leap: in the instant, happiness is within our reach. Bespaloff’s ‘happy consciousness’ made a deep impression on Camus, who read the book closely in the summer of 1939. Bespaloff’s writings on Kierkegaard coincided with the publication of Jean Wahl’s Kierkegaardian Studies (1938) – a testimony to their friendship and lifelong collaboration. Bespaloff and Wahl were trendsetters in Paris. Introducing Kierkegaard’s anti-Hegelian philosophy into France, they prepared the ground for the existentialism that flourished in wartime Paris. Their ventures into Christian existentialism directly reacted to Hegel’s revival in France instigated by Alexandre Kojève’s lectures, held between 1933 and 1939. Another émigré from the Russian Empire, Kojève was as pivotal as Shestov to the formation of French modernism. It was these refugees from eastern Europe, among them Bespaloff, who shaped the course of French culture by importing new currents to Paris, including Surrealism, Marxism, phenomenology and existentialist philosophy. In the spring of 1938, Bespaloff began rereading the Iliad with her daughter Naomi. Her extensive notes turned into a brilliant essay on Homer’s epic poem. Shestov’s death that year deeply upset her. In a letter to Wahl, she calls Shestov one of the few truly noble men she knew. The family moved to her husband’s estate in southern France in 1939. Just before the Nazis occupied Paris, she wrote a letter to Marcel: ‘But the worse it gets, the more I realise that you can’t love life, the more I discover the urgent need to find new reasons to love it. 
And I am afraid that this time I won’t be able to, which would be worse than death…’ Her work on the Iliad essay became an existential ‘method of facing the war’. She soon became aware of a similar text, written coincidentally, that appeared in Cahiers du Sud in 1940: Simone Weil’s ‘The Iliad, or the Poem of Force’. Bespaloff began to revise her essay; she critically responded to Weil’s condemnation of any use of force. Living as a Jew in Vichy France, Bespaloff became increasingly desperate, and with good reason. In November 1941, she wrote to Marcel: ‘I feel as if I am stuck in a sad, restless, absurd dream. And I am very afraid of waking up.’ Her friend Wahl, also Jewish, had been imprisoned and tortured by the Gestapo, and worse was to come for many Jews in Paris. In 1942, Bespaloff managed to escape, boarding one of the last ships to leave Nazi-occupied France, with her mother and daughter, her library and grand piano. Having narrowly fled a concentration camp outside of Paris, Wahl joined them. With his encouragement, Bespaloff began to rework her essay on the Iliad. She eventually finished her notes in yet another exile, this one in New York. Published in English translation in 1943, On the Iliad framed war as an absolute ‘question of losing it all to gain it all’. In the words of Fondane’s letter to his wife, war became ‘the moment to live our existential philosophy’. According to Bespaloff, Homer felt both intense love and intense horror of war. Where Weil claimed that force transforms subjects into objects, Bespaloff emphasises brief moments of beauty that occur in the midst of violence. With war being waged all around, there are flashing instants of generosity and grace. In the Iliad, force is both a supreme reality and an illusion. It is the superabundance of life itself, ‘a murderous lightning stroke, in which calculation, chance, and power seem to fuse in a single element to defy man’s fate.’ This does not mean that Bespaloff glorified violence. Far from it. 
But the experience of the Second World War made her realise the inescapability of force and its power to transform an individual’s understanding of the human predicament. At the heart of her essay is Hector, the ‘resistance-hero’ who embodies justice and courage. Like every human in the Iliad, Hector cannot flee his fate – and he knows it. Hector’s flight from force is short but has ‘the eternity of a nightmare’. That is the horrifying temporality of war that Bespaloff experienced first-hand. The most crushing parts of Bespaloff’s Iliad essay are dedicated to Helen, a woman with whom she clearly identifies. Clothed in long white veils, she is the most austere character of Homer’s poem. Both unbearably beautiful and unfortunate, Helen awoke in exile and felt ‘nothing but a dull disgust for the shrivelled ecstasy that has outlived their hope.’ She is the prisoner of her own passivity, forced to live in horror of herself. Ultimately, Helen’s promise of freedom, like Bespaloff’s own, remains unfulfilled. Helplessly, Helen watches the men who went to war for her, observing ‘the changing rhythm of the battle’. The breaks that interrupt the fighting are rare instants of silence:

The battlefield is quiet; a few steps away from each other, the two armies stand face to face awaiting the single combat that will decide the outcome of the war. Here, at the very peak of the Iliad, is one of those pauses, those moments of contemplation, when the spell of Becoming is broken, and the world of action, with all its fury, dips into peace.

While in New York, Bespaloff preserved her ties to Parisian intellectual life by exchanging letters with Fessard and Marcel. She got a job with the Voice of America’s French broadcast before moving to Mount Holyoke College in Massachusetts, where she taught French literature. Mount Holyoke became an important outpost for French culture in the US during the war. 
At gatherings of exiled scholars organised by Wahl, Bespaloff met Jacques Maritain, André Masson, Marc Chagall and Claude Lévi-Strauss. This ‘small, dark lady who wore white gloves’, as her translator Mary McCarthy described her, also made an impression on Hannah Arendt who visited in August 1944 to deliver a lecture on Franz Kafka. Arendt’s reading of Kafka, later published in Partisan Review, echoed Bespaloff’s existentialist despair. Under the dark shadow of war, Arendt describes humanity as inescapably trapped in history’s meshes. Kafka’s ‘nightmare of a world’ had become reality. In an essay on Camus, her last published work, Bespaloff describes how history forced her generation ‘to live in a climate of violent death’. After the war, despite previously having been fêted by them, Bespaloff became a vocal critic of the new generation of French existentialists, especially Sartre. In a 1946 letter to the musicologist Boris de Schloezer, Bespaloff wrote that ‘the hollowness of subjectivity that Sartre opposes to what I call “magical interiority” is much less the foundation of a new humanism than the harbinger of a new conformity.’ She argued that, instead of liberating the individual, Sartre’s existentialism destroyed the magical interiority through which humans can authentically connect with one another. For Bespaloff, Sartre degraded the subject into an object under the gaze of the Other. This objectified ‘subjectivity curiously aligns with American “individualism”, which unleashes itself in action to mask the absence of the individual.’ Like Helen’s Troy, the US felt both dull and hostile to Bespaloff. Bespaloff’s journey to Mount Holyoke was her final exile. During term break, in April 1949, for reasons not entirely clear, she sealed her kitchen doors and turned on the gas oven. Her own complex fugue ended with a tragic cadence. She had written earlier of the happiness that can be found in an instant. 
In her final note, alluding to Camus’s claim, she wrote: ‘One can imagine Sisyphus happy, but joy is forever out of his reach.’ | Isabel Jacobs | https://aeon.co//essays/for-rachel-bespaloff-philosophy-was-a-sensual-activity | |
Architecture | Architectural drawing speaks of mathematical precision, but its roots lie in the theological exegesis of a prophetic book | Years ago, my professor would make his architectural history students prepare for seminars by pinning large sheets of paper to a noticeboard. Each had finely printed plans and elevations on them. Over the week, I’d stand in front of those sheets for at least an hour looking at the various drawings, as instructed. Back in class, students took turns to explain what exactly the drawings represented, determining the building’s appearance from the drawings alone and describing how a person might move through the space as if we were there. Those well-spent hours were among my favourite during my degree; the language of drawing was a catalyst to my imagination, creating worlds beyond what words could ever do. In learning about this language, I realised that we know remarkably little about how it developed, as if it arose fully formed in the 13th century, since no single drawing can be linked to a specific building project until that century’s end. This baffled me. How could monuments like Durham Cathedral, the renovated basilica of Saint-Denis outside Paris (the genesis of the Gothic style), and all the High Gothic churches in northern Europe have been made without something so simple as a drawing? Visually communicating the appearance of a building seems a natural thing to do – an easier way of planning. As it happens, drawings were used in the construction process before the 13th century. In the 1st century BCE, the architect Vitruvius wrote his De architectura in an attempt to elevate the practice of architecture to the level of the liberal arts; that is, work derived from the mind rather than the mindless graft of one’s hands. Near the beginning of the treatise, Vitruvius describes three types of architectural drawing: plans, elevations, and (very likely) drawings in perspective (his precise meaning is hotly contested). 
Despite this evidence for the use of drawings, none survive from antiquity. The only examples to weather the test of time are monumental plans inscribed on stone or mosaics, but these could have been decorative objects – simple maps or sculptural monuments: their purpose is not clear. Also, most were done after the buildings they depict were completed, so they cannot have been used in the construction process. After the decline of the Roman empire in the 5th century, the infrastructure for educating and training architects vanished in the West. Not until the 13th century do we get a designer who oversees several projects simultaneously – a sort of proto-architect. Prior to this figure’s emergence, there was a master mason who’d make certain geometrical constructions on the ground or in plaster, allowing him to construct the layout of a building. This master mason is an obscure historical figure. He likely did not have a formal education but started his career as an apprentice who learned structurally sound forms from his master. He would have travelled across building sites learning and picking up new designs and ideas. At 12th-century Canterbury, for example, the original designer of the Gothic building was William of Sens, who likely had experience of working on the new Gothic elements of the cathedral in his native Sens in France. He could promise that and more to his new patrons in England, not necessarily using a drawing but by describing what exactly he would do over the coming years. Later, in the 15th century, the artist and architect Leon Battista Alberti, in his brief mention of architectural drawings, assumes that they are done only by architects. This leaves us with a story of architecture that follows a well-worn narrative: the decline of Rome led to a dearth of advanced practices, which were picked up again only in early modern Italy. But this is not the real story. 
Towards the middle of the 12th century, a Scottish theologian named Richard moved across the Channel to Paris and to the Abbey of Saint Victor on the left bank, about a 20-minute walk from where Notre-Dame Cathedral stands today, but outside the walls of the medieval city. Here, Richard penned a commentary on the Book of Ezekiel, filled with more than a dozen plans and elevations that systematically represent the buildings the prophet describes. These are key to understanding the beginning of architectural drawing in the West. Richard is the first person to use the term ‘plan’ with reference to a drawing that would be recognised as a plan today. He was the first person we know of to represent a building more than once, offering a three-dimensional view of the structure; and the first to provide a clear sectional elevation, where part of the building is sliced through to give a view of the interior. His commentary suggests that architectural drawings were in use a full century earlier than is conventionally held, complete with a fully fledged language for the representation of three-dimensional objects. The Abbey of Saint Victor was established in 1108, at the beginning of what the historian Charles Homer Haskins in 1927 called a ‘Twelfth-Century Renaissance’. This renaissance was characterised by a renewed focus on classical literature and a drive to understand the physical world. Latin translations of ancient Greek and Arabic works on mathematics, geometry and every other subject gave energy to scholars to interrogate the world a little more deeply. Works by Aristotle, Euclid and Plato – authors dimly known, whose writings had long been thought lost forever – began to arrive on the shores of Europe. Within this swirling intellectual storm, the members of Saint Victor had one of the best libraries in Europe and a commitment to teach whoever wanted to learn. 
They were spiritual centrists, never veering close to zealotry and never losing their minds to the new fashion for pure logic, characterised by the infamous self-promoter Peter Abelard.

The Abbey of Saint Victor, Paris, etching, 1702. Courtesy of INHA, Paris

With the fine resources and stimulation that Saint Victor provided, Richard began his commentary on the Book of Ezekiel, a daunting task for anyone who has read it. Ezekiel prophesied during the Israelites’ exile in Babylon where the once-captive Jews remained in the centuries following the destruction of the First Temple in Jerusalem. It is a book of consolation and of hope, especially the last section, which contains a detailed architectural description of a new temple that would descend from the heavens onto the mountain when the Israelites returned to their homeland. Ezekiel describes meeting a man with ‘brazen complexion’ holding a measuring stick, who accompanies the prophet around the buildings, while measuring every detail. At first glance, the buildings Ezekiel describes, and their arrangement, seem straightforward. There are three courtyards of diminishing size, set into one another, and each section is accessed via an elaborate gateway. The new temple at the centre of everything (and modelled on Solomon’s original) is perched on the mountain’s plateau. Out front in the smallest courtyard there is an altar for sacrifices, while the Temple contains three spaces: a vestibule, a long narrow hall and a smaller room called the ‘Holy of Holies’. But the details are impossible to follow. For example, the description of the gateways ranges over different chapters and, though Ezekiel specifies their parts, the measurements do not always make sense. We are told the breadth of the building is one reed (six cubits), that the threshold to the gate is six cubits, and the ‘porch of the gate’ is eight cubits. 
As the description continues, it is tempting to pick up a pen and draw along as one reads, the better to follow, but the layout is difficult to grasp. In the 6th century, Pope Gregory I concluded that it was impossible to understand the architecture in a literal way, and that the lack of sense within Ezekiel’s words was a sure sign that they could only be allegorical in nature. Gregory gave the example of a door described as wider than the wall to which it is attached. For Richard, brought up on an intellectual diet defined by the rigours of Saint Victor’s school, it was important to understand the facts of Ezekiel’s words. He needed to know exactly what the prophet saw. And so, Richard’s commentary on this part of the Book of Ezekiel included more than a dozen plans and elevations to help realise the prophet’s vision. Telling his readers why he included the drawings, Richard says he wanted to show that, no matter how ‘simple’ the drawings might be, they demonstrated the truth of his argument: that these buildings had a tangible existence, and that Ezekiel’s description makes sense if the reader has the wit to follow it. Richard’s drawings are like nothing made before. They are precocious in pointing to a masterful visualisation of space long before the language of architectural drawing was systematised. Richard used recent developments in geometry to fully articulate the relationship between the plans and elevations: in fact, the drawings represent the beginning of architectural abstraction in the West, not because he uses plans and elevations, but because he uses them together to give readers a real sense of the buildings’ three dimensions. As far as we know, no one in Europe had done this previously.

Richard’s final plan, vestibule running through the middle

In his commentary, Richard takes his readers through the envisioned Temple complex carefully, starting with a very general sense of the entire layout, allowing us to situate ourselves properly. 
Then he zooms in to focus on one building in particular, the gatehouses that connect the three atria that surround the temple. He provides three plans and two elevations for the gatehouses on three sides of the complex. The three bird’s-eye plans show the general layout, a detailed rendering of half the building, then its complete internal footprint. The final plan (above) shows a vestibule running through the middle with long rectangular rooms perpendicular to it. If we set this plan alongside the elevation of the building (below), we can see these same rooms perched upon each step of the vestibule. To aid readers, Richard labelled the rooms in Latin, making it easy to work from one drawing to another. We can take the complexity a little further, since, in the elevation, the viewer can see the interior of the ground floor as if the drawing were a section where part of the building is cut open. This would make it the first clear sectional elevation, and an important development in architectural drawing.

Richard’s drawing of the sectional elevation

None of Richard’s innovations are accidental. Rather, they are rooted in the language of geometry: the elevation lets the viewer see that the gatehouses are located on a mountainside. Yet having to reconcile the plan with the elevation disturbed Richard’s attempt at accuracy. He knew that measurements taken along a flat surface and on a sloped one would be different when compared with one another, and result in discrepancies between his plans and elevations. To combat the problem, Richard proposed a method by which a ‘plan’ measurement could be translated into one that accommodated the mountain’s slope, using something very similar to the Pythagorean theorem, which was then circulating around western Europe. The plans include measurements that assume the site was flat, and so he calls them ‘planum’. This is the first time this term was used in reference to a drawing. 
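The translation Richard needed can be stated in modern notation (the symbols and the worked numbers here are mine, a reconstruction of the principle rather than Richard’s own procedure): a measurement taken on the flat ‘planum’ corresponds to a longer distance along sloping ground, related by the Pythagorean theorem.

```latex
% Modern reconstruction of the plan-to-slope translation.
% p = measurement on the flat plan (horizontal run)
% h = the rise of the mountain over that run
% s = the corresponding distance along the slope
\[ s = \sqrt{p^{2} + h^{2}} \]
% Worked example (illustrative figures, not Richard's):
% a plan measurement of 4 cubits on ground that rises 3 cubits
% spans \sqrt{4^{2} + 3^{2}} = 5 cubits along the slope.
```

Any plan drawn to flat-ground measurements will therefore understate distances on the mountainside, which is exactly the discrepancy between plan and elevation that Richard set out to reconcile.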
For Richard, a ‘plan’ was a two-dimensional drawing that showed the layout of a building on a plane (ie, flat) surface – language that we still use today. A number of tantalising plans, predating Richard’s commentary, survive, but they lack a systematic approach to representing reality. The best known is the 9th-century Saint Gall plan (below). This shows a monastery laid out in red ink, complete with church, cloister, abbot’s house, medicinal garden and everything else a monastery could need, right down to the number of beds in the dormitory. The note attached to the plan refers to it as an exemplata, a word that could mean anything from ‘copy’ to ‘proof’. The Saint Gall plan likely survived because a couple of centuries later someone wrote a life of Saint Martin on its reverse side – which for a medieval audience held much more value than this diagram. The Saint Gall plan is impressive, but it’s not entirely clear what any of the proposed monastery buildings would look like since there are no elevations and it is only one single drawing, unlike Richard’s more expansive approach.

Saint Gall plan, 9th century

There are other theological plans, images of the Celestial Jerusalem, of the Holy Sepulchre and a couple of others. Their survival suggests that, even in the context of so much medieval material that has been lost, there must have been more drawings that were destroyed or else deemed not important enough to save. Not all had the fortune of a life of Saint Martin written on their reverse. It is tempting to imagine Richard in conversation with masons who must have been a constant presence in Saint Victor. Richard held the position of prior, a sort of second-in-command in the abbey. One of his responsibilities would have been to oversee the masons’ work in the abbey and to interact with the builders on a regular basis. 
There was, however, a cultural chasm between the cloister and the building site that Richard would not have crossed lightly, even for the sake of an important work such as a biblical commentary. To do that, I think he crossed another type of boundary. Saint Victor was famous by the end of the 12th century, and one reason was the quality of its literal commentaries on the Bible, an approach advocated by its earliest superstar and Richard’s mentor, Hugh of Saint Victor. Hugh was a teacher, writer, very much the 12th-century renaissance man, and he is known to have spoken to nearby rabbinic scholars and their schools. Hugh recognised the value of Jewish knowledge, especially when it came to understanding history: as one modern author put it, from a Christian perspective, talking to a Jewish scholar was like picking up the phone to the Old Testament. Richard, it seems, continued that tradition. As if to underline the connection between the two traditions, one of the earliest Christian manuscripts containing Richard’s work includes a map of the Holy Land described in the Book of Ezekiel 48 (below). It shows the land belonging to the tribes of Israel on the left and right, with the Holy City and its environs below. The map is almost identical to one in a Hebrew commentary (below) by the famous scholar Rashi, who established his school not far from Paris. The only difference between the Hebrew and Latin maps is the language of the labels. Not long after Richard wrote his commentary, the Jewish scholar Maimonides included architectural drawings in his discussion of the Second Temple in the tractate Middoth. Not just plans, but elevations and even sections, just as in Richard’s commentary. The appearance of all these architectural drawings in theological texts of the 12th century does not prove that drawings were used in the construction process. 
What it does prove, though, is that people such as Richard and the many readers of his work could understand the language of architectural drawings if a master mason put one in front of them.

Richard’s map of the Holy Land, as described in the Book of Ezekiel 48, in Latin

Rashi’s map of the Holy Land, in Hebrew

In response to my own question, as to how some of the great architectural monuments of medieval Europe could be built without drawings, the answer is they could not. Although we don’t have those drawings made by 12th-century master masons, Richard’s commentary certainly suggests that the practice of architectural drawing was common enough across the religious divide, and that if a patron such as Richard could speak that visual language, then so could the people he hired to build Saint Victor. Richard’s importance is clear but, since he was a theologian working in a cloistered community, his legacy within architectural history is difficult to quantify. He never built anything as far as we know. I do not want to suggest that the invention of architectural drawing was a top-down affair, where the language and visual forms were invented by a patron within theological contexts and imparted to the lowly builder. I do not even want to suggest that parallel traditions in Islamic countries, and further east in China, lag behind the West: only that the practice as we know it in the West developed out of the complex relationship between patron and mason, and almost certainly predates the 13th century. Richard’s commentary helps us fill in the gaps. It demonstrates how known unknowns (the existence of drawings in the 12th century) become known. Masons and, later, architects used and developed these drawings to a remarkable degree, perhaps not based on direct knowledge of Richard’s commentary, but certainly from the world that those drawings inhabited. | Karl Kinsella | https://aeon.co//essays/the-surprising-history-of-architectural-drawing-in-the-west | |
Stories and literature | Bereft and suicidal, I lay on my sofa. Only David Foster Wallace’s novel kept me tethered to life, and still does | In the surreal aftermath of my suicide attempt and amid the haze of my own processing, my best friend visited me in the hospital with a (soft-bound and thus mental-patient-safe) copy of David Foster Wallace’s Infinite Jest under his arm. It was the spring of 2021. A couple months earlier, I had slipped in a tub, suffered a concussion, and triggered my first episode of major depression, and those had been the most difficult months of my life. Though I’ve been a lifelong ‘striver’ and ‘high achiever’, nothing I’ve ever done was harder than waging that war against myself while catatonic on that Brooklyn sofa. This was an inarticulable and so alienating war, one during which, at every moment, it was excruciating and terrifying to exist at all. I thought I knew the extent of my own mind’s capacity to torture itself, to hurt me, and what this thing we call depression can really be like. But I had been wrong. For anyone who hasn’t experienced it at its worst, I now think it is psychologically impossible to imagine. It may even prove impossible for those who have experienced it to still remember it after the fact, just as someone who temporarily perceives a fourth dimension wouldn’t really, fully remember what it was like once the perception is lost, only facets of the larger, unfathomable thing. So maybe I can’t really remember, either: but I can recall thinking again and again these staggered reflections I’m writing now. Some of the swirling emotions that distressed and disoriented me on that sofa also remain faintly accessible, like the crippling inability to make any decisions, no matter how small, such that even contemplating a choice among some host’s warmly offered selection of teas would incapacitate me with self-loathing and breathless, gushing tears. 
I remember hopelessly trying to make myself feel even the glimmer of anything good, turning to everything – the music, the friends – that had brought me so much joy before, only to find that I could no longer feel any of it but rather just, from somewhere afar, see and long for it while watching as the ever-darkening blackness in me instead consumed it all. I remember the debilitating guilt and shame that emerged for everything I had ever done, including for having the audacity to keep existing for so long. And I remember an overwhelming empathy as I wondered how many others felt this way in the history of the world, imagining the vastness of all these solitary confinements within our minds across space and time. At the same time, it was unfathomable to me that anyone had ever felt like this, or that there could even be enough darkness in the universe to realise the experience more than this once. From the days following my injury through the several months after, my ultimate challenge on that sofa was finding a way to endure the passage of time. I needed something to help me get through each moment and make it to the next one while still intact. I couldn’t actually do anything, but staring into space (or even watching TV) kept me vulnerable, as the cognitive passivity left ample room for the darkness to seep in and swallow me away. After a few desperate weeks, I eventually found that reading fiction – filling my head with another world that left room for little else – was the one thing that made it more bearable to exist. My best friend then suggested (after having gently and generously recommended the book to me for years) that perhaps this was the moment to read Infinite Jest. I think every day about how grateful I am that he did. I started reading and it soon became the case that so long as Infinite Jest was in my hands, it was possible, okay even, for me to stick around. 
The core themes of the book that would soothe and sustain me over the coming weeks can be conveyed, I think, by its two dominant and contrasting venues – a halfway house for addicts in recovery on the one hand, and an elite and high-pressure tennis academy on the other – in conjunction with an underlying and unifying thesis: all of us, whether we’re chasing substances, achievements or whatever else we hope will satisfy us and make it bearable to exist, are afflicted. We are all, for lack of a better word, fucked in the head in the very same ways. With Infinite Jest in my hands, I was suspended afloat by a contradictory catharsis, this evanescent insight that I could hold on to so long as I just kept reading and rereading the book’s (blessedly many) pages: that I was not crazy, nor alone, precisely because I really was crazy, which is to say that this all wasn’t me but rather it – it was a human condition. The book assured me that this was just what it was like to be crazy in this way, was exactly how others crazy in the same way were made to feel, a crazy that made them feel just as alone as I now felt. The book witnessed me, affirmed me, and assured me that my experience was familiar to the world. I can’t put it any better than just saying the book was my friend. Some passages can only speak for themselves, as they so articulate (and help me remember) facets of the thing I was facing on that sofa. On the ‘psychotic depression’ suffered by the character Kate Gompert, the most haunting and compelling personification of depression I have come across: It is a level of psychic pain wholly incompatible with human life as we know it. It is a sense of radical and thoroughgoing evil not just as a feature but as the essence of conscious existence. It is a sense of poisoning that pervades the self at the self’s most elementary levels. 
It is a nausea of the cells and soul … It … is probably mostly indescribable except as a sort of double bind in which any/all of the alternatives we associate with human agency – sitting or standing, doing or resting, speaking or keeping silent, living or dying – are not just unpleasant but literally horrible. No description that I’ve encountered has better conveyed, so clearly and directly, the precise nature of that moment-by-moment agony in which I had found myself. Infinite Jest’s most famous lines are on suicidality, and the air-tight logic that it brings along. The book analogises it to the choice faced by those trapped inside a burning building and deciding whether to jump: Make no mistake about people who leap from burning windows. Their terror of falling from a great height is still just as great as it would be for you or me standing speculatively at the same window just checking out the view; ie, the fear of falling remains a constant … It’s not desiring the fall; it’s terror of the flames. And yet nobody down on the sidewalk, looking up and yelling ‘Don’t!’ and ‘Hang on!’, can understand the jump. Not really. You’d have to have personally been trapped and felt flames to really understand a terror way beyond falling. The suicidal person, in other words, is not misguided but rather literally facing different choices – ones unimaginable to those who do not also have flames slowly engulfing them. I don’t think I can really explain what reading all this meant to me. The book could see me like a mirror at that moment and describe it all right back. 
More concretely, I can’t explain what it meant to find such forceful validations of my particular sense of this ‘mental illness’, not as some wrong or irrational reaction by me, a misapprehension or miscalculation on my part, but rather as something happening to me; it was a thing inside me – a billowing shape, as the book often calls it – to which all my dread and despair was actually just the reasonable and appropriate response. But I can tell you that, once I finished Infinite Jest, my grip on this self-understanding – and so my self-preservation – quickly started to slip away, and it was only a few days later that I tried to kill myself. By then, I was back to being alone on that sofa, surrounded by those flames the book had managed to keep at bay. I think reading Infinite Jest had been keeping me alive. So that’s why, when he came to the hospital, my friend knew to bring along another copy of the book. I remember looking up at him then, bleary-eyed with anxious shame for what felt like my most monumental failure, a profoundly self-absorbed act of weakness on my part – and, not to mention, a terrible inconvenience for all those I’d dared to drag into my life. He smiled softly while waving Infinite Jest in a silent reminder that these emotions, though compelling in their presentation and thus reasonable to be so compelled by, weren’t really reflecting the reality of the matter. And with a copy to share, in that secured visiting area, we then had our own little pop-up book club. It all felt a bit like Bible study or something, in the fluorescent sterility and chaos of that strange space, and I remember my friend making some fittingly dark joke about how this was probably how DFW would’ve most wanted the book to be read anyway: like the word of God, among rock bottoms, being involuntarily held. 
It was a glimmer of Wallace’s raw hilarity, which fills so much of Infinite Jest (1996) – a grotesque humour, one that could punctuate my otherwise continuously unbearable tenure on that sofa with stitches of transcendent laughter, and which not only kept me alive but sometimes feeling alive, wanting to be, hoping I do somehow make it through it all, if for no other reason than because laughing still felt like something worthwhile. I was reminded, in our pop-up book club, that maybe this was still worth doing. In truth, the reality of what had happened was only beginning to crash down upon me, and it was going to be a very long road ahead. But we at least managed to make it all a bit gentler and more intelligible in that moment. As of this September, it has been 15 years since Wallace’s suicide and two and a half years since my attempt. Like Wallace’s, my own decision to take my life had immediately followed an adjustment to my antidepressants. I remember it clearly: I’d been holding on so long as I’d still been reading, and when the reading was over and the enkindling darkness took its place, there was just barely enough left in me to pull myself up and pick up a phone, to articulate the necessary words and ask the professionals if they could possibly find some way to help me out. I’d still been searching in anguish for an escape as the walls closed in, a way to still win, to stick around. Sadly, it was the prescribed dosage increase itself that hit me – as it is sometimes known to do – with another dark wave, knocking me back into the depths of myself, right as I’d been treading so very hard to reach a stable surface. I know Wallace’s suicide had been amid choppy chemical changes of his own, which is to say that we’d both still been fighting, and so these disparate outcomes were the product of random chance. 
There is a tragedy and humanity, I think, for one’s own desperate attempt at staying alive to be the very thing that does one in – and I admit to sometimes feeling guilty for being the one who found salvation in his book instead of him, as though this salvation was itself cosmically predestined to be scarce. When I’m asked what exactly I found in Infinite Jest, I limit myself to noting two things. I found powerful portraits of mental illness, and I also found empathy. Like I said, the book was my friend. But the thing is, I know that many others have very different things to say about Infinite Jest – about the book, its author, its ‘prototypical’ readers, the very idea of it, and the ethos it has come to represent. In her chapter ‘On Not Reading DFW’ (2016), Amy Hungerford defends her choice never to read it by arguing (among other things) that there’s no reason to think DFW could have anything valuable to say about women. More recently, in the London Review of Books this July, Patricia Lockwood said of Infinite Jest that ‘it’s like watching someone undergo the latest possible puberty. It genuinely reads like he has not had sex.’ Hungerford, Lockwood and the mainstream ethos generally dismiss the book’s intended and actual audiences as white, male and not to be trusted, driven by Stockholm syndrome, sunk costs or delusions of self-interested grandeur in calling the book genius or important. I’m not exaggerating when I say that I find these critiques – so often snide or irreverent in their cadence – baffling, gaslighting, disempowering, at times even agonising. I can’t understand what they could possibly have to do with this book that I know as my friend, that I found myself in at my most alienated moment. 
And the bitter irony is that this ethos all concerns a man who, after writing such an empathetic book about mental illness, took his own life; for it is a collective instance of the very kind of empathy failure that I think Infinite Jest asks us to resist and helped me resist myself. I guess it is the least I can do for it now – and for my own survivor’s guilt – to join this ongoing chorus on the book with my own belting, discordant voice. Infinite Jest was life-saving for me, but I don’t just mean when I say this that it had been saving me while I was reading it on that sofa, or even the times that I’ve read the book since. Infinite Jest is saving my life all the time. There’s a recurring motif in the book, a haunting symbol for all of our many mental demons: the Face in the Floor. It first appears in a second-person vignette as an evil presence that only you, the reader, can feel. You wake up from a nightmare, you look around, and you suddenly notice that there is the Face in the Floor beneath you. It is a Face that you know is evil, and you know this evil is only for you. But as soon as you notice this Face in the Floor, you are also convinced that it has actually been there all along. You are certain of this, that its ‘horrid toothy smile [has been] leering right at your light all the time,’ and that it had simply been ‘unfelt by all others and unseen by you’ until now. In a later passage, this evil Face in the Floor – ‘the grinning root-white face of your worst nightmares’ – comes back, but this time, it’s your addiction. It ‘finally remove[s] its smily-face mask to reveal centerless eyes’, and you see that the Face in the Floor – your addiction – has now completely taken you over. The Face in the Floor has become your own. It’s ‘your own face in the mirror, now, it’s you’ for it has ‘devoured or replaced and become you’. 
I think about the Face in the Floor every single day. I remind myself of it. One of the most harrowing things about mental illness is not anything captured by descriptions of its first-order symptoms, but rather the way it can convince you that these symptoms are just picking up on something that is and has always been the case, that was actually there all the time; and when you didn’t feel this way it was because you had been blind. Mental illness can persuade you that you’re now seeing the reality that had always been real, the Face that had always been there in the Floor – which is all to say that your epistemic position has simply been improved. So long as that is what you are being made to believe, then how can anyone expect you to also believe ‘this too shall pass’ (or anything of the sort), or to somehow just stop it from swallowing you up? I’m no longer on that sofa or surrounded by those flames. But still, I’ll probably always be moving with and managing my own billowing shape. Mine is a synergistic and explosive Molotov cocktail of depression and ‘emotion dysregulation’. This basically means that my internal reality is prone to quickly and intensely turn itself upside down again and again – somersaulting through euphoria, despair, mania, shame, rage, paranoia, guilt, panic, bliss, self-aggrandizement, self-hatred, even within a single day. My challenge in the dissociated midst of these episodes will always be to find something from outside the moment to believe in, or to at least have faith that any such thing could even exist, and so to resist the recurring immersive insistence that only this moment and nothing before it is what’s real. Maybe that’s why I needed to say all of this, to give my experience this reality and write it all down, and paper over at least one of the Floor’s Faces and preserve this here instead for myself; and maybe these revelations are also my redemption for that audacity to have been the one saved. 
But when I say that Infinite Jest is saving my life all the time, what I mean is that I still keep trying my very best to tell myself – because I still need and will keep needing to tell myself – what has become both my mantra and my prayer: it’s the Face in the Floor. It’s the Face in the Floor. It’s the Face in the Floor. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. Or text HOME to 741741 to reach Crisis Text Line. In the UK and Ireland, the Samaritans can be contacted on 116 123 or email jo@samaritans.org or jo@samaritans.ie In Australia, the crisis support service Lifeline is 13 11 14 Other international helplines can be found at www.befrienders.org | Mala Chatterjee | https://aeon.co//essays/how-infinite-jest-tethered-me-to-life-when-i-almost-let-it-go | |
History of technology | From its mythic beginnings in a Chinese garden, the story of silk is a window into how weaving has shaped human history | Some say that history begins with writing; we say that history begins with clothing. In the beginning, there was clothing made from skins that early humans removed from animals, processed, and then tailored to fit the human body; this technique is still used in the Arctic. Next came textiles. The first weavers would weave textiles in the shape of animal hides or raise the nap of the fabric’s surface to mimic the appearance of fur, making the fabric warmer and more comfortable. The shift from skin clothing to textiles is recorded in our earliest literature, such as in the Babylonian Epic of Gilgamesh, where Enkidu, a wild man living on the Mesopotamian steppe, is transformed into a civilised being by the priestess Shamhat through sex, food and clothing. Judaism, Christianity and Islam all begin their accounts of their origins with a dressing scene. A naked Adam and Eve, eating from the forbidden tree, must flee the Garden of Eden. They clothe themselves and undertake a new way of life based on agriculture and animal husbandry. The earliest textile imprints in clay are some 30,000 years old, much older than agriculture, pottery or metallurgy. Persian Carpet Dealer on the Street (1888) by Osman Hamdi Bey (1842-1910). Nationalgalerie der Staatlichen Museen zu Berlin – Preußischer Kulturbesitz, Berlin. Courtesy Wikimedia Commons In the 21st century, the Silk Roads have re-emerged as the catch-all name for a highly politicised infrastructure project across Asia. The name Silk Roads comes from the origin and spread of sericulture – the practice of making silk fibres – in which Chinese women have played a special role. The discovery of silk fibres is attributed to the Empress Ling Shih, known as Lei Zhu. Legend says a silk cocoon fell into her cup and began to unravel in the hot tea water while she sat under a mulberry tree. 
Another legend tells that it was a Chinese princess who brought sericulture out of China to the Kingdom of Khotan by hiding silkworm eggs in her hair when she was sent to marry the Khotanese king. The Modern Chinese word sī (絲, ‘silk, thread, string’) is commonly reconstructed as Middle Chinese *si. Linguists believe that the word journeyed via nomadic tribes in western China, whose languages adapted it as the Mongolian sirkeg (‘silk fabric’) and the Manchu sirge or sirhe (‘silk thread, silk floss from a cocoon’). The Greek noun sērikón and Latin sēricum come from the same Chinese root. The English word silk, Old Norse silki and Scandinavian silke – transferred into Finnish and Karelian as silkki, Lithuanian šilkas, and Old Russian šĭlkŭ – all have the same origin in Chinese. It took approximately one millennium for the word ‘silk’ to travel from China to northern Europe via Central Asia and Iran: 10,000 kilometres in 1,000 years. In ancient Asia, silk was valuable and coveted, even by the powerful. It is said that in the year 1 BCE, China paid off invaders from the north with 30,000 bolts of silk, 7,680 kg of silk floss and 370 pieces of clothing. Among the less powerful, textiles possessed even greater value. We know from 3rd- and 4th-century Kroraina kingdom legal documents (from Chinese Turkistan, present-day Xinjiang province) that the theft of ‘two jackets’ was serious enough to count as a crime and that ‘two belts’ were significant enough to appear in wills. The classical Greek and Roman world thought of India as the site of great textiles and garments. The Romans marvelled at Indian saffron (Crocus indicus), a precious spice and dye plant yielding a bright yellow. Indigo was among the most valuable commodities traded from Asia. Diocletian’s Edict of Maximum Prices of 301 CE tells us that one Roman pound of raw silk cost the same as nine years’ wages of a smith. 
In Rome, silk became the symbol of an extravagance and decadence that some saw as corrupt and anti-Roman. Cleopatra was also said to wear quite inappropriate clothing of Chinese origin, revealing her breasts and therefore also her vanity, and indicating loose morals and greed. The Roman emperor Elagabalus was described contemptuously by his contemporary Herodian, who wrote that the ruler refused to wear traditional Roman clothes because they were made of inferior textiles. Only silk ‘met with his approval’. The Roman poet Horace dismissed women who wore silk, arguing that its lightness meant that ‘you may see her, almost as if naked … you may measure her whole form with your eye.’ Wall painting of two young Roman women wearing fine translucent fabric. Roman, 1-75 CE. Gift of Barbara and Lawrence Fleischman. Photo by J Paul Getty Museum, Los Angeles The technology behind silk had long been a historical puzzle. The recent archaeological discovery of a 2nd-century BCE Han dynasty burial chamber of a woman in Chengdu has now solved it. Her grave contained a miniature weaving workshop with wooden models of doll-sized weavers operating pattern looms with an integrated multi-shaft mechanism and a treadle and pedal to power the loom. Europeans wouldn’t devise the treadle loom, which enhances power, precision and efficiency, for another millennium. Chengdu loom model (digital reconstruction). Photo courtesy China National Silk Museum, Hangzhou, Zhejiang province This technology, known as weft-faced compound tabby, also emerged in the border city of Dura-Europos in Syria and in Masada in Israel, dating to the 70s CE. We can, however, be confident that the technique known as taqueté was first woven with wool fibre in the Levant. From there, it spread east, and the Persians and others turned it into a weft-faced compound twill called samite. Samites became the most expensive and prestigious commodity on the western Silk Roads right up until the Arab conquests. 
They were highly valued international commodities, traded all the way to Scandinavia. Fragments of silk samite from fabric no 1 from Oseberg, as drawn by Sofie Krafft. Photo by Ann Christine Eek. © Museum of Cultural History, Oslo, Norway In Norway in 834 CE, two women were buried in the large Oseberg Viking ship, loaded with silk textiles, including more than 110 silk samite pieces cut into narrow, decorative strips. Most of the Oseberg silk strips are of Central Asian origin and they were probably several generations old when they were buried. The old Norse sagas speak of exquisite fabrics that were perhaps samites, even calling them guðvefr, literally ‘God-woven’. These samite strips could have come to Scandinavia via close contact with the Rus communities settled along the Russian rivers, who could negotiate favourable conditions of trade with Byzantium. We know from historical sources that if a Rus merchant lost a slave in Greek territory, he would be entitled to compensation in the form of two pieces of silk. However, Byzantium also set a maximum purchase allowance for the Rus, and the maximum price for silk was 50 bezants. These silks that the Rus were trading in Byzantium, and then again with the Scandinavians, came from the Syrian cities of Antioch, Aleppo and Damascus. Most early medieval silks in Europe are Byzantine, not Chinese. The Scandinavians also exported fur products to Asia that fuelled luxury consumption in Byzantium and eastwards, including coats, but also trimmings for hats and boots, and hems for kaftans and collars. The combination of fur and silk remained popular in prestige clothing down to the Renaissance kings of Europe, and still exists in royal ermine robes. Under the Muslim dynasties of the Umayyads (661-750), the Abbasids (750-1258), the Ilkhanids (1256-1335) and the Mamluks (1250-1517), diplomatic clothing gifts evolved into robes of honour. 
In Arabic, these are called khilʿa or tashrīf, and they are precious garments that a ruler would bestow upon his elites. They would then wear them to show loyalty. Silk gift-giving worked in both directions, it seems, and a caliph might receive hundreds of garments from one of his subjects. A huge textile industry, private as well as royal, flourished in Baghdad in the 9th to 10th centuries, employing at least 4,000 people in silk and cotton manufacturing alone. Precious dyes, such as kermes from Armenia, offered opportunities for exclusive designs of bright-red fabric. Early Islamic scholars praise Central Asia not only for its silk but also for its wool, linen, fur and especially fine cotton. The 10th century also saw the spread of Islam, and the advance of trade networks led to a renaissance in West African weaving and textile production. The Rules and Regulations of the Abbasid Court state that, in the year 977 CE, the wealthy Adud al-Dawla sent the caliph gifts of 500 garments in a full range of qualities, from the finest to the coarsest – an excellent example of ‘silken diplomacy’. The Abbasid dynasty invested in palace textile workshops producing sophisticated patterns and techniques, such as the renowned tirāz. Originally a Persian loan-word, the term tirāz eventually came to be used for exquisite decorated or embroidered fabrics with in-woven inscriptions of the name of the ruler or praising Allah. The purpose of tirāz textiles, at least to begin with, may have been a form of tax or tribute that was paid by provinces in Central Asia to honour new rulers when they took power. The term also came to be the name for a workshop where such exquisite fabrics with inscriptions were produced. 
The author Ibn Khaldūn, who wrote in the 14th century, dedicated a whole chapter to tirāz textiles in his book Muqaddimah: Royal garments are embroidered with such a tirāz, in order to increase the prestige of the ruler or the person of lower rank who wears such a garment, or in order to increase the prestige of those whom the ruler distinguishes by bestowing upon them his own garment … A 14th-century silk and metal-thread slit tapestry roundel. At its centre, an elegant ruler is seated on his throne, clad in a blue and gold robe or kaftan girded by a golden belt. He has a beard and a Persian-style crown, and is flanked by two seated noblemen, both wearing kaftans; on the right side is a Mongol prince or general, under whose foot is a blue tortoise, a typical Chinese symbol of longevity and endurance. Behind the throned ruler stand two guards wearing the same helmet-like hats. The medallion is decorated with an outer band of good wishes woven in Arabic golden letters, and inner bands of animals and imaginary creatures. Photo by Pernille Klemp, courtesy of David’s Collection, Copenhagen/Wikimedia Commons The Abbasid rule ended in 1258 when Baghdad was conquered by the Mongols under the command of Hulegu, a grandson of Chinggis Khan. Hulegu took the title of Il-Khan to signal that he was subordinate to the Great Mongol Khans of China. One of his successors is portrayed in a silk tapestry roundel, uniting symbolic and aesthetic concepts from both the Islamic and Chinese realms (see image above). The depicted figures – Mongols, Persians and Arabs – manifest the union of ethnic and political groups in an idealised image of the Pax Mongolica. The technical features of this tapestry, made using a gold thread with a cotton core, suggest it may have been made in a cotton-growing region yet woven by Chinese weavers. The Mongols are famous for many things; it is less well known that they were great patrons of arts, crafts and textiles. 
The Ilkhanid dynasty ruled for some generations until it collapsed around 1335. European imports of silks from China and Central Asia rose steadily in the Middle Ages. In 1099, after the capture of Jerusalem by the knights of the First Crusade, they increased again. The creation of Christian states in the Holy Land opened new trade routes, which facilitated the rise of the Italian city-states. The westward expansion of the Mongol Empire under Chinggis Khan and his successors also helped augment the power of these Italian trading centres. Great quantities of raw silk coming into Italy helped stimulate creative and technological progress in Europe, generating new techniques and patterns as well as new technologies. Lampas-woven fabrics especially fuelled innovation in patterning and spurred the introduction of the treadle loom in medieval Europe. While China was an important source of silk and other goods, South Asia had long been part of exchange networks linking the Indian Ocean world with the Gulf, Africa, Europe, and South-East and East Asia. Economic and political shocks from the 14th century led to surging prices for silk in European markets. The value of silk thread per ounce approached the price of gold. In the early 15th century, the Chinese white mulberry (Morus alba) began to be successfully cultivated in Europe, in particular in Lombardy in Italy. We should not think of European silk cultivation and silk weaving only as a short business venture or a mere adjunct to Chinese or Asian dominance. Italy remained a leading global producer over several centuries, first of silk fabrics and then of silk threads, maintaining its position as the world’s second largest exporter of silk threads after China into the 1930s. To this day, Italian capacity and expertise in silk production survives. New silk institutions also emerged. 
In Valencia in Spain, between 1482 and 1533, the ‘Silk Exchange’ was erected to regulate and promote the city’s trade. It served as a financial centre, a courthouse for arbitration to solve commercial conflicts, and a prison for defaulting silk merchants. The Hall of Columns in the Lonja de la Seda or ‘Silk Exchange’ in Valencia, built 1482-1533. A UNESCO World Heritage Site of cultural significance, its impressive pillars are shaped like z-spun threads. Photo Trevor Huxham/Flickr Many legends arose around silk, primarily because of its value, with the technology of sericulture and silk production jealously guarded in China for millennia. Perhaps the most famous legend tells of two monks who smuggled silkworm eggs to Europe, thus breaking the production monopoly and revealing how silk was made. In the second half of the 17th century, Paris became the centre of European textile production, design and technique. This included the emergence of a luxury shopping environment of boutiques and fashion houses. Fashion magazines such as Le Mercure galant reported on style and new trends from the royal court. The largest Parisian fashion houses, such as the Gaultier family business, supplied the wardrobes of the royal family and the nobility, and held shares in the French East India Company. King Louis XIV and his minister Jean-Baptiste Colbert invested in fashion and textile production as an important innovative sector to showcase France’s greatness. Illegal imports of foreign textiles and luxury copies posed a challenge for French trade and domestic production. French consumers had a strong appetite for foreign textiles, and colourful, cheap fabrics flooded the market. Illicit products from Asia arrived via trading posts in the Philippines and Mexico, putting pressure on European fabrics and fashionable goods in terms of price and quality. 
King Louis XIV of France and his grandson, Philip V of Spain, sent Jean de Monségur, an industrial and commercial spy, on a mission to Mexico City to collect intelligence on the legal and illegal trade between India, China and Europe. His detailed intelligence report addressed the trade in textiles, clothing and fashion. With great concern, he wrote: [T]he Chinese have got hold of our patterns and designs, which they have utilised well and can today produce quality goods, although not everything that comes from over there can match the European standard … The times are over when one could assume that the Chinese are clumsy, without talent or trade talent, or that their goods are not in demand. Monségur also noted that Chinese silks were highly competitive because of their lower prices. In Mexico, even commoners wore Chinese silk clothing. When the victorious Mongols conquered new land, they selected artisans, especially weavers, and saved their lives because they were crucial to the expanding empire’s needs and ambitions. These skilled craftspeople were then ordered to settle where the empire needed them, hence the large-scale forced movements of textile workers within the Mongol Empire. Beginning in the 15th century, the colonisation of the Americas brought about the largest forced textile labour movement in history. It forcibly displaced some 13 million people, transporting them from West Africa to the Caribbean and North America. Coerced labour was central in the establishment and development of a textile industry heavily dependent on cotton and indigo. Even today, cotton harvesting is very labour intensive: every year from September to October, millions of workers pick cotton in Turkmenistan, Uzbekistan, Pakistan, India, the United States and China. Cotton pledges have been signed by textile and fashion companies committed to banning forced labour in the cotton harvests, yet the massive need for labour and the low price of cotton are obstacles to these efforts. 
‘Christmas greetings from the Danish West Indies’: postcard from the cotton plantation Bettys Hope on the island of Saint Croix, a Danish colony until 1917 and today part of the US Virgin Islands. Courtesy of the Royal Danish Library, Copenhagen Some 60 per cent of the 40 million people employed by the garment industry today are in the Asia-Pacific region. Working conditions and pay levels are often poor, in part because of the pressure to lower production costs. Implications for the health and safety of workers are often terrible: for example, when the poorly constructed Rana Plaza complex in Bangladesh collapsed in 2013, more than 1,100 garment workers lost their lives. Everyone knows that clothing can symbolise power, legacy, glory, as well as ethnic or national identity and aspirations. In male power-dressing, we observe over time how clothing emphasises the ruler’s head, shoulders and torso, and a belt highlights bodily strength. Jewellery, weapons and other royal insignia serve as garnish. The choice of simple clothes, preferred by many Left-wing leaders, also projects meanings – and the source of their power. The last emir of Bukhara, Alim Khan (1880-1944), dressed in a deep-blue silk robe. Photo by Sergei Prokudin-Gorskii. Courtesy Wikimedia Commons Among the elite in many parts of Eurasia, Western dress practices became symbolic of a progressive mindset. In the late 17th century, Peter the Great imposed Western clothing on the civil administration of Russia. In Meiji-era Japan, the ruler and his family adopted full Western attire. The Japanese emperor would wear the sebiro, the Japanese term for ‘suit’ derived from Savile Row, the London street that was home to the finest gentlemen’s tailors. Emperor Meiji in 1873, dressed in Western military parade uniform and with an admiral’s hat. Photo by Uchida Kuichi (内田九一) (1844-75). Albumen silver print from glass negative with applied colour. 
Courtesy of The Met Museum, New York

In the early 20th century, clothing became so accessible and cheap that rulers could demand that their subjects dress in a certain way and adapt their clothing to their rulers’ politics. They wanted the general population to mirror the rulers’ values, political beliefs and ambitions. For example, in 1925, the Greek dictator Theodoros Pangalos imposed a law stipulating that women’s dresses should not rise more than 30 cm from the ground. The same year, Atatürk’s Hat Law was passed in Turkey, another historical example of clothing regulations being used as a political instrument to orient, redress or change the mentality of an entire society. Wearing a Western hat and abandoning the traditional Ottoman and Islamic headgear of the turban and fez became a political act of adherence to the Kemalist republic. Men’s headgear became a potent symbol of ideology, and the ‘wrong’ hat was penalised with fines and, occasionally, even with capital punishment.

At the Yalta Conference in February 1945, Winston Churchill wears a civilian double-breasted wool coat; Franklin D Roosevelt, a civilian suit under a cape with tresses and a fur collar; and Stalin, a double-breasted Soviet uniform whose design mirrors both earlier Tsarist and 20th-century European uniforms. A Persian carpet from western Iran forms a connection between them all. Photo courtesy of Wikimedia

In the 20th century, military uniform design and cut followed those of the country’s allies and ambitions. We can see this in the military uniforms used across Eurasia during the Cold War, with a ‘communist’ style in countries allied with the Soviet Union or China, versus the ‘capitalist’ NATO styles used by the West’s allies.
It is notable that textile metaphors gained currency to represent both the era of the Cold War, with its ‘Iron Curtain’, and the period’s historic end in 1989, with the ‘Velvet Revolution’ in Czechoslovakia. The expressions play on both the softness of fabric (velvet) and its capacity to cover and conceal (curtain). In popular culture, it was denim and blue jeans that caught the imagination of young people in the East, as symbols of youth and of political and moral freedom. The name ‘denim’ comes from the French city of Nîmes in Occitanie, a major producer of blue dye from woad (Isatis tinctoria) and synonymous with workers’ blue cotton cloth. The word ‘jeans’ connects to Gênes, the French name of the Italian city of Genova, from where such coarse fabrics were exported. Throughout history, and throughout the world, rulers have tried to control people by regulating their clothing. Regulations can be prescriptive or proscriptive, and carry gendered and social meanings and ramifications. Dress codes – from the military to school uniforms – indicate political and social alignment, to visually express unity, loyalty and adherence. Meanwhile, bans, prohibitions or censure of the dress practices of certain individuals or groups aim to exclude. When the Chinese emperor Zhu Yuanzhang, the founder of the Ming dynasty, took the throne in 1368, he banned the former regime’s style of clothing, branding it ‘barbaric’, and ordered a return to the clothing style of the Han dynasty. Clothing regulations can be social or legal, and across Eurasia many have attempted to regulate how people dress to enforce an ideal, or to protect national production from foreign imports. Sumptuary laws (from Latin sumptus, meaning ‘expense’) could regulate both manufacturing and trade, as well as national moral economies that would influence consumption patterns and values.
They represented social, gendered and racial hierarchies, and expressed them visually. Many regulated the use of jewellery and the practices surrounding feasts or funerals. But the main objective was always dress practices, with greater significance given to fabrics, fibres, weave and decoration than to cuts and tailoring. In Lima, Peru – in Spanish colonial America – sumptuary laws stipulated that women of African or mixed African and European descent were prohibited from wearing woollen cloth, silks or lace – though forbidden luxury fabrics often simply reappeared as cheaper copies, and trade labels were faked.

Fabric merchant in Samarkand, photographed between 1905 and 1915 by Sergei Prokudin-Gorskii. The merchant’s goods include striped silks, printed cotton, wool fabrics, and carpets. He wears a white turban and a silk kaftan adorned with Chinese-inspired floral motifs. Photo courtesy of the Library of Congress

As globalisation intensified, it brought about technological breakthroughs in transport, communication and trade, through which dress has become more standardised and many of the world’s rich and diverse clothing cultures have diminished. Fortunately, the early 20th-century photographers Albert Kahn and Sergei Prokudin-Gorskii captured the clothing of many glorious local traditions of Central Asia. Today, we can see some of these local costumes only in tourist shows and museums. Not surprisingly, we know much more about the textiles and clothing of the elite than about the attire of ordinary people on the Silk Roads. Archaeology can help. The Chehrābād tunic belonged to a salt-mine worker, perhaps trapped and killed when the mine collapsed around 400 CE. It was woven of monochrome cotton, cut and sewn into a knee-length tunic with long sleeves. Perhaps the tailor knew the body size of the worker or about his hard toil in the salt mines, since gussets were inserted in the armpit areas and at the hips to provide him with greater freedom of movement.
There are weaving mistakes in many places, as if the tunic were woven in a hurry, or maybe because this was, after all, a work outfit. The history of textile production has always been linked to cheap labour. Shepherding, sericulture, and cotton and flax cultivation require many hands, time, constant tending, efficiency, and standardised tools and techniques. The mechanisation of the clothing industry and of textile production therefore produced dramatic change. Richard Arkwright’s inventions in the 18th century were put into industrial-scale production when the English entrepreneur introduced the spinning frame, adapted it to use waterpower, and patented a rotary carding engine. Arkwright’s achievement was to combine power, machinery, semi-skilled labour and a new raw material, cotton, to create mass-produced yarn. The French city of Lyon exploited geographical advantages that helped it become the centre of a silk ‘tiger economy’. The hill of Croix-Rousse housed factories, with every street filled with the clamorous sounds of mechanical looms. With its 30,000 canuts (the nickname for Lyon’s silk workers), this industrious district turned Lyon into a major hub for textile production, especially silk-weaving, providing garments for the royal court and the nobility of Europe. In the social world of the rising 18th- and 19th-century Western bourgeoisie, we find many products of the Silk Roads, in both textiles and designs. Ladies wore fashionable, soft pashmina shawls with Iranian and Central Asian paisley patterns – a style that came to represent the bonds between Britain and its empire in Asia. Young and fashionable women in European royal families would inspire others to wear these colourful soft shawls as a new accessory. One of the most iconic ‘influencers’ was Empress Joséphine of France, who integrated pashmina fabrics and paisley patterns into her wardrobe.
Portrait of Empress Joséphine (c1808-9) by Antoine-Jean Gros. Courtesy of the Musée Masséna, Nice/Wikipedia

Women of the Spanish Empire would wear the mantón de Manila, also known as the Spanish shawl, which takes its name from Manila in the Philippines, from where it was traded eastwards over the Pacific into the Spanish Empire of the Americas. Originally a silk garment adorned with embroidery and woven in southern China, it was traded from the late 16th century via Manila and the Spanish-American colonies, then further into Europe via Spain.

Russian girls in a rural area 500 km north of Moscow, photographed by Sergei Prokudin-Gorskii in 1909. Industrially woven, colourful printed fabrics were accessible even in remote villages, and likely used, re-used, sewn and mended. At this time, chemically bonded synthetic dyes were being developed amid the industrial competition between Germany, France and the UK in the race to patent new dyestuffs. Courtesy of the Library of Congress

In An Inquiry into the Nature and Causes of the Wealth of Nations (1776), Adam Smith wrote that trade was beneficial not only to the trading partners but to society as a whole. To illustrate his argument, he explored the competitive advantages of cloth compared with wheat. Textile production was to Smith a sign of economic dynamism. It was only following the French Revolution that clothing regulations were abolished in France and the nation’s citizens could dress as they wished: ‘Everyone is free to wear whatever clothing and accessories of his sex that he finds pleasing.’ However, the very same decree stipulated the obligation to visibly wear the cocarde knot of red, white and blue ribbons, emblematic of the French Revolution. It was also implicitly asserted that clothing should be gender-appropriate and respect earlier dress regulations. Two Germans with particular textile histories would revolutionise the political landscape of the 19th century.
Friedrich Engels was the scion of the family behind the cotton company Baumwollspinnerei Ermen & Engels in western Germany, and he settled in the English city of Manchester, a leading centre for the global cotton trade and manufacture. Karl Marx was greatly influenced by his close friend Engels and by the textile industry in particular. In Das Kapital (1867), Marx illustrated his arguments about the working classes by referring to the Lumpenproletariat – or the ‘proletariat of rags’ – and by using the example of an overcoat as an allegory for the measure of labour, resources, technology and the uneven rewards of capitalism.

‘Drilling and training for the revolution, spinning and weaving for the people’: Chinese poster, 1974. Courtesy of the Landsberger Collection/chineseposters.net

In the 20th century, political transformations and new economic conditions and ideologies negatively impacted artisanal weaving and other traditional crafts globally. Much intangible textile craft culture has been lost; new technologies have made handicrafts obsolete or very expensive; urbanisation has standardised fashion; and people no longer want to carry out what is seen as tedious textile work. The word ‘text’ comes from Latin texere (‘to weave’), and a text – morphologically and etymologically – indicates a woven entity. We can therefore say that history starts not with writing but with clothing. Before history, there was nudity, at least in the Abrahamic tradition; clothing thus marks the beginning of history and society. The representation of nudity as part of a wild and pre-civilised life mirrors the European colonial perspective of the naked human as ‘wild’. Across the world today, there are two main ways to classify dress: gendered into male and female, and stylistically into clothing tailored to fit the body, or draped/wrapped around it like the Roman toga or the Indian sari.
Fitted clothing has dominated globally, especially since the Second World War, with blue jeans and T-shirts now ubiquitous across all continents. Today, a T-shirt on sale in any shop around the world is the result of a finely meshed web of global collaboration, trade and politics. From cotton fields in Texas or Turkmenistan, to spinning mills in China, garment factories in Southeast Asia, printers in the West, and second-hand clothing markets in Africa, a T-shirt travels thousands of kilometres around the world in its lifetime. On average, a Swede purchases nine T-shirts annually, and even if they are made to last 25 to 30 washes, consumers tend to discard them sooner. Greenpeace found that Europeans and North Americans, on average, hold on to their clothes for only three years. Some garments last only for one season, either because they fall out of fashion, or because the quality of the fabric, tailoring and stitching is so poor that the clothes simply fall apart. This is the impact of fast fashion, which has taken hold since the beginning of the 21st century: for millennia, clothing had always been expensive, worth repairing and maintaining, and made to last. Along with the acceleration of consumption came falling prices and an ever-narrowing margin for profit. The fast-fashion business model requires seamless global trade, inexpensive long-distance transportation, cheap flexible labour and plentiful natural resources. That equation is changing in a world that is warming and where trade barriers are going up. The future of fabrics, textiles and clothing is bound up with the great themes of the present – and the future.

This Essay is based on the chapter ‘The World Wide Web’ by Marie-Louise Nosch, Feng Zhao and Peter Frankopan, from the UNESCO report Textiles and Clothing Along the Silk Roads (2022) edited by Feng Zhao and Marie-Louise Nosch.
| Peter Frankopan, Marie-Louise Nosch & Feng Zhao | https://aeon.co//essays/silk-is-a-thread-that-opens-up-the-weave-of-human-history | |
Earth science and climate | "Three earthquakes hit Mexico City on the same date in 1985, 2017 and 2022. The coincidence left the(...TRUNCATED) | "Shortly before 7:19am on 19 September 1985, time began to shift in Mexico City. It started with a t(...TRUNCATED) | Lachlan Summers | https://aeon.co//essays/the-earthquakes-that-shook-mexico-citys-sense-of-time | |
Subcultures | "Bending a mysterious world to your will was the goal of esoteric practices. Now it’s the unashame(...TRUNCATED) | "Deep in the labyrinthine tags of TikTok, a group of teenage occultists promise they have the power (...TRUNCATED) | Tara Isabella Burton | https://aeon.co//essays/how-the-internet-became-the-modern-purveyor-of-ancient-magic |